Informally, an algorithm is a set of instructions that transforms inputs into outputs. Combined with big data, and largely without us noticing, algorithms have taken over modern life. From scheduling airport runways to personalised advertising to replicating the voice of Donald Trump, algorithms are behind the success of tech giants like Google and have saved lives by matching kidney patients with donor organs. Just how do these silent strings of code affect our lives, and what are the risks of following the decisions of a computer?
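To make that definition concrete, here is a deliberately simple sketch in Python. The names, blood-type rules and greedy pairing are illustrative only and are not how any real organ-matching system works: just a fixed set of instructions that takes patients and donors as input and produces a pairing as output.

```python
# A toy "algorithm" in the informal sense: fixed instructions that turn
# inputs (patients and donors with blood types) into an output (a pairing).
# Purely illustrative; real organ-matching systems weigh many more factors.

def match_kidneys(patients, donors):
    """Greedily pair each patient with the first compatible donor."""
    compatible = {            # which donor blood types each recipient can accept
        "O":  {"O"},
        "A":  {"O", "A"},
        "B":  {"O", "B"},
        "AB": {"O", "A", "B", "AB"},
    }
    remaining = list(donors)
    pairs = []
    for patient, p_type in patients:
        for donor, d_type in remaining:
            if d_type in compatible[p_type]:
                pairs.append((patient, donor))
                remaining.remove((donor, d_type))
                break
    return pairs

print(match_kidneys(
    patients=[("Ana", "A"), ("Ben", "O"), ("Cara", "AB")],
    donors=[("D1", "O"), ("D2", "A"), ("D3", "B")],
))
# -> [('Ana', 'D1'), ('Cara', 'D2')]; Ben finds no donor under this simple greedy rule
```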
Have algorithms become too complicated?
“If it works, it works. We shouldn’t even try to work out what the machine is spitting out – they’ll pick up patterns we won’t even know about.” It is frightening to think that the algorithms making increasingly important decisions in our lives cannot be fully understood even by their own creators. A recent article in The Times Literary Supplement, “God is in the Machine”, shows how complicated and incomprehensible modern algorithms have become, so much so that it is almost impossible to determine why they make the decisions they do. According to the Wall Street Journal, 15 US states now use automated tools to assist judges with parole decisions. Is it fair to use algorithms in life-changing situations without being able to explain their decisions?
Do they work in the ways we would want?
Statistically, poorer people are more likely to have worse credit ratings and to live in high-crime areas, surrounded by other poor people. Algorithms use this data to flag them as “high risk” and, allegedly, to block them from jobs and hike up rates on mortgages, car loans and insurance. That in turn generates data which drives their credit ratings down further, starting a vicious feedback loop. We need more active human intervention to monitor the morals and potential biases of algorithms. “Weapons of Math Destruction” is how Cathy O’Neil describes these algorithms, saying they are “increasing inequality and threatening democracy.”
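To see how such a feedback loop can play out, here is a toy simulation in Python. The numbers and the update rule are invented purely for illustration and are not drawn from any real credit-scoring model: a lower score means a higher interest rate, a higher rate raises the chance of a missed payment, and a missed payment pushes the score lower still.

```python
import random

random.seed(1)

# Toy feedback loop: invented numbers, not a real credit-scoring model.
score = 580                              # starting credit score
for year in range(10):
    rate = 0.04 + (700 - score) / 2000   # lower score -> higher interest rate
    miss_prob = min(0.9, rate * 4)       # costlier loans -> more missed payments
    if random.random() < miss_prob:
        score -= 25                      # a missed payment drags the score down
    else:
        score += 10                      # on-time payments recover it only slowly
    print(f"year {year}: score={score}, rate={rate:.1%}")
```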
Medical benefits are clear
Often, however, the benefits of algorithms are clear. Drop-seq is a technology that lets biologists collect and analyse gene-expression data by encapsulating individual cells in tiny droplets of water, so that thousands of cells can be profiled at the same time. Algorithms are then used to analyse the data and better understand mental illnesses like schizophrenia. CheXNet is an algorithm that can diagnose pneumonia from chest X-rays. It uses machine learning and is capable of outperforming four radiologists working over the same time span. Tools like this could reduce the number of missed cases and significantly improve the current healthcare system.
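For a rough sense of what a tool like CheXNet involves, the sketch below uses an off-the-shelf DenseNet-121 from torchvision, the same architecture family the CheXNet work builds on. The weights file and image path are hypothetical; this shows the general shape of such a pipeline rather than the published model itself.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Sketch of a CheXNet-style classifier: DenseNet-121 with a single
# "pneumonia probability" output. The weights file name is hypothetical;
# this is the general shape of the pipeline, not the published model.
model = models.densenet121()
model.classifier = torch.nn.Linear(model.classifier.in_features, 1)
model.load_state_dict(torch.load("chexnet_finetuned.pth"))  # hypothetical weights
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

xray = Image.open("chest_xray.png").convert("RGB")  # hypothetical input image
with torch.no_grad():
    prob = torch.sigmoid(model(preprocess(xray).unsqueeze(0)))
print(f"Estimated probability of pneumonia: {prob.item():.2f}")
```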
We still need to apply human intelligence and insight
“Relying on big data alone increases the chances we’ll miss something, whilst giving us the illusion we know everything,” says Tricia Wang in her TED talk on the human insights missing from big data. If we are not to unfairly penalise poorer groups or wrongly release prisoners, the way we use big data and algorithms needs to change. Big data must be kept in context by combining it with other insights and sources. For example, qualitative research such as executive interviews, consumer focus groups and ethnography helps to ensure that new and emerging trends are not missed. It also helps us to understand why people behave in the way we observe. If we can understand the “why”, then we can intervene, influence and affect consumer behaviour, which is the fundamental purpose of marketing. It also ensures that we do not place too much reliance on big data and algorithms. As Tricia Wang puts it, “It’s not big data’s fault, it’s the way we use big data.”