
Why are we so mean to Artificial Intelligences?

Recently, I was on a roadshow presenting our ARTIS solution for the construction of thematic portfolios. It is based on machine learning and natural language processing and enables us to process hundreds of thousands of unstructured documents in the public domain to find the right stocks for a given investment theme. In a way, we use a form of #artificialintelligence to help us build the best portfolios considering all the information out there. While there was a lot of positive feedback, there were of course also quite a few critical questions about the approach.

Now, from a previous role building and running the data science lab for a major financial services company in Germany, I am no stranger to facing skepticism towards ML- and AI-based solutions, but I can’t help but feel that these solutions are given an unfairly hard time.

I believe this starts with our (human) expectations. When judging the quality of a machine’s work, we tend to expect 100% accuracy, while we are far more lenient with any human operator. For many machines that is probably not wrong: we can expect a simple calculator to be correct 100% of the time, whereas we all know that multiplying large numbers in our heads is hard and easily leads to mistakes.

This does, however, lead to unrealistic expectations when it comes to the results of machine learning algorithms or artificial intelligences. Unlike a simple calculator or a more complex rules-based program, but very much like humans, AIs will make mistakes. What is harder for us to understand is that they will make very different mistakes from those we would make. And we conveniently forget all the mistakes that we ourselves tend to make (and accept) while blaming the AI for the really, really stupid mistakes it makes.

A good example of this is self-driving cars: in 2016, in what is considered the first fatal crash involving Tesla’s Autopilot software, the AI failed to recognize a white trailer against a bright sky, causing shockwaves of bad press globally. Everyone was shocked: how could the AI not see a massive trailer? No human would have made that mistake, so clearly the AI must be bad.

On the other hand, when I listen to the radio here in Germany, one of the most common traffic announcements is that of “a car/truck crashing into the end of a traffic jam”. This isn’t particularly newsworthy, though, as it happens all the time; the only reason it is even on the radio is to warn other drivers not to do the same thing. Now, I am fairly certain that an AI-driven car would not make that mistake. Thanks to radar and ultrasound sensors and almost instant reaction times, the autopilot has a much better chance of hitting the brakes in time. Yet we accept that humans sometimes react late, drive too fast and hit the end of a traffic jam. With AI we are not that forgiving: we start calling for Autopilot to be banned and question the entire technology instead of embracing the many benefits it generates.

Maybe another reason for our poor perception of AI performance is rooted in our intuitive understanding (or misunderstanding) of probabilities. The quality of machine learning models is often measured by accuracy (my data scientist bubble will be quick to point out that this is not the best metric, but it is nonetheless often the measurement we use to sell those models to customers). Many models in production achieve very high accuracy rates of over 80% or even 90%. Sounds good, right?

After all, if the weather forecast for tomorrow says there is a 98% chance of rain, that directly translates in our minds into “it will rain tomorrow”. We tend to ignore that it really means that out of 100 days on which that prediction is made, it will not rain the next day on two occasions, i.e. the forecast will be “wrong” every other month or so.

Back to AI: if I have a model that picks the right stocks for a theme with 98% accuracy, it also means that I have to expect two wrong picks in a portfolio of 100 stocks. And instead of focusing on the amazing result of picking 98 correct stocks fully automatically just by browsing publicly available documents, we start discussing the two stocks that don’t quite fit. In fact, most likely no human would have picked them, because here, just as in the car crash example, the AI makes different mistakes than we expect humans to make. And here, too, we tend to ignore all the mistakes that the AI didn’t make, in this case specifically missing many relevant stocks due to a lack of information.
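To make the arithmetic concrete, here is a minimal sketch in Python, with illustrative numbers only (the accuracy figure and portfolio sizes are assumptions for the example, not actual ARTIS figures), of how an accuracy rate translates into an expected number of wrong picks:

# Minimal sketch: expected number of wrong picks for a given model accuracy.
# Accuracy and portfolio sizes are illustrative assumptions, not ARTIS figures.
accuracy = 0.98  # the model classifies a stock correctly 98% of the time

for portfolio_size in (50, 100, 500):
    expected_errors = (1 - accuracy) * portfolio_size
    print(f"Portfolio of {portfolio_size} stocks: "
          f"expect roughly {expected_errors:.0f} picks that don't quite fit")

For a portfolio of 100 stocks this works out to roughly 2 misfits, exactly the two “wrong” picks discussed above.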

With this attitude we risk discarding the benefits that the new technology brings, simply for a lack of understanding of how best to use it. Now, don’t get me wrong: those accidents are terrible, and I also wouldn’t want to be driven by an #autopilot while taking a nap at the wheel. But I wouldn’t want to miss automated collision avoidance, pedestrian recognition and the many other “AI” features in my car working in combination with my own attention. And this is where I see the real future for AI applications. Rather than delegating decision making fully to an AI, only for it to make “stupid” mistakes, I am a believer in augmented decision making, where human and AI work hand in hand to avoid both types of mistakes and approach perfection. So I’ll take care of looking out for those white trailers in bright light, and I’ll happily let the AI brake for me when we are approaching the end of that traffic jam.

The same goes for our indices: the ARTIS and AI part enables us to process far more information and find better stocks than manual work alone could, while a final layer of manual (human) sanity checks allows us to iron out those few stupid choices by the AI as well.

So, be nice to AIs, they usually just want to help!

This blog was originally posted on LinkedIn on September 20, 2022. Follow Konrad Sippel and Solactive on LinkedIn for further insights and opinion pieces.