Data And Algorithms – Alchemy Or Science?
I’ve just finished a great book: How to Make the World Add Up by the brilliant non-fiction author Tim Harford. (In North America it is titled The Data Detective.)
Tim is an economist, broadcaster and commentator who is incredibly well-educated but is also blessed with the ability to explain and summarise really complicated academic ideas so that normal people like you and me can understand them.
The book is all about how we as individuals should go about thinking about numbers and the statistics we are bombarded with for almost every moment of our waking lives. It contains a brilliant chapter on how we should think about algorithms and big data.
About a decade ago, as the endless possibilities of the interconnected digital world and the internet of things started to become clear, there was an optimistic euphoria that gripped progressive society. For a while it seemed all knowledge would be ours and sampling errors and biases would be a thing of the past. Accurate and near-instantaneous algorithmically-driven decision-making would be at everyone’s fingertips.
One example of this seeming omniscience from that time was the phenomenal success of an experiment called Google Flu Trends. By analysing search terms by geography and comparing that information to flu disease data, Google’s engineers were able to find correlations between the two.
Google was then able to make some amazingly prescient predictions about flu infection rates in different places. These stats were accurate and appeared in almost real time – way faster than official evidence-based stats that were laboriously collected by regular surveys of health institutions.
Futurologists rushed to embrace the predictive qualities of Google as the future of public health intelligence.
But after a few winters it stopped working.
Flu Trends began first overestimating and then underestimating flu cases in different areas. Then the Trends algo was completely flummoxed by an outbreak of summer flu.
It turns out Google was really good at spotting all the things that correlated with traditional flu season – for example colder weather or a particular US basketball tournament that started in November. You see, the engineers and statisticians weren’t looking to prove cause and effect – they were just looking for correlations in very significant amounts of data and using them to make predictions.
With data sets so big, the idea was simply to find patterns and not worry too much about explaining them logically. We were encouraged merely to “trust the numbers”. But instead of a really useful public health tool, what Google’s boffins had probably invented was a very sophisticated way of showing when winter was coming. After all, as everyone knows, winter correlates with flu.
Except when it doesn’t, as with summer flu! Flu Trends was merely an expensive winter detector, and it has since been retired. Needless to say, winter detection does not require sophisticated analysis; any human can do it.

As an industry grappling with data-driven decision making, we must heed the above lesson in common sense. If we don’t, it will soon show up in our loss ratios. Yes, two things may correlate, but it really is common sense to say that if there is no logical connection between them, then any insights drawn are probably meaningless. We can’t just trust the numbers; we must use common sense too.
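To make the "winter detector" failure concrete, here is a toy sketch in Python. The numbers are entirely invented for illustration; the point is only that a model which has really learnt the calendar can look flawless on seasonal flu and then be blindsided by an out-of-season outbreak:

```python
# Monthly flu cases over two normal years: flu peaks in the winter months.
# (Invented figures, purely illustrative.)
seasonal = [90, 80, 50, 20, 10, 5, 5, 10, 20, 50, 80, 95]  # Jan..Dec
normal_years = seasonal * 2

def winter_detector(month_index):
    """Predict flu purely from the calendar -- a stand-in for any model
    that has latched onto correlates of winter rather than flu itself."""
    return seasonal[month_index % 12]  # 0 = January

# In-sample, on ordinary seasonal flu, it looks perfect...
errors = [abs(winter_detector(i) - cases)
          for i, cases in enumerate(normal_years)]
print(max(errors))  # 0 -- no error at all on seasonal data

# ...but a hypothetical summer outbreak (70 cases in July) breaks it.
july_outbreak = 70
print(abs(winter_detector(6) - july_outbreak))  # off by 65
```

The predictor never saw flu data at all, only the month of the year, yet it scores perfectly as long as flu keeps behaving seasonally. That is exactly the trap: correlation alone cannot tell you which of the two you have built.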
But none of this is to say that machine-learnt data analysis is not going to be extraordinarily helpful to us in insurance. So what’s the key to unlocking the value and avoiding the duds? Harford first argues that the most important thing is for data to be made available to as many people as possible, so that the value of the analysis can be assessed independently.
For example, machine learning is already doing a very good job of analysing satellite imagery pre- and post-loss events. A key feature of satellite imagery is that it is universally available and there are many competing suppliers, so sources of data can be verified and contrasted very quickly. The focus is therefore on the analysis.

Harford uses the example of the rise of science and the historical fall of alchemy. Both exercised the finest minds for centuries, so why did science triumph and alchemy fail?
Because the prize in alchemy was so lucrative (turning base metal into gold), it simply had to be secretive. You could only learn the techniques of alchemy through knowledge handed to you, probably verbally, by a highly trusted friend or relative. This meant that centuries of experimental endeavour were likely lost, much to the detriment of human development.
Science on the other hand was an open amateur club, full of avid correspondents and journals. Peers read each other’s theories and results and rushed to the lab to try and replicate them. History shows which was more successful. Since the Renaissance scientific progress has been on an exponential upswing that only seems to be accelerating. Meanwhile alchemy is dead.
What can we learn from this?
Instead of looking to mine proprietary data that might not be of sufficient quality to provide valuable insights, insurers should pool it wherever possible. Google sometimes shares its data but more often than not it doesn’t because it is proprietary and deemed valuable. But that is a problem.
If we can’t peer review the data, it is much less valuable or useful than anyone thinks.

The second point Harford makes concerns peer review of the algorithm itself. Somewhere in a black box, Google is also jealously guarding the algorithm that found the spurious correlations between search terms and flu rates.
If this had been available for inspection, its flaws might have been discovered sooner. Medical science may be a better example: its trial data is made available so that peers and regulators can replicate and validate the results. Only when a drug is proven safe and effective to high statistical standards is it allowed out in public.
Yet drug companies still own the rights to their discoveries, so they are incentivised to continually re-invest in research. It’s a good job that governments didn’t rely on Google Flu Trends while it was underestimating the spread of infection by a large margin. The greater good, and Google’s own quest for added value, would have been better served by greater openness and less whispered secrecy.
The core lesson is that, wherever possible, we shouldn’t be Google-y alchemists hiding our added value away; we should be true data scientists, sharing our discoveries so that they can be robustly peer-reviewed. The paradox of pure science is that whenever you give something away, you end up getting far more back in return.
We live in a syndicated market with a long tradition of benefiting from successful peer review of each other’s handiwork. We also suffer the occasional failures of group-think when effective scrutiny was absent. If we can work out how to share data and peer review proprietary analysis while still rewarding and incentivising its creators, we will have a glittering future of untrammelled growth and prosperity. If not, we may find ourselves as dead as the medieval alchemists.