Staying Sane In The Age of Algorithm

We live in the age of algorithm. Algorithms are the magic sauce that keeps our machines running. Their omnipotence has grown to a point where some see our bodies’ and minds’ inner workings as algorithms[1] – just as former epochs saw the universe or a mechanical turk in every human being. And – like all man-made gods – algorithms, too, are demonized, as if they were the new satans infiltrating our world, in particular our digitized online lives. Just like a god’s (or a demon’s) deeds, their workings leave us puzzled, irritated, frustrated, or outright furious: Why am I recommended a new brand of orange rubber boots? Why is my timeline filled with posts about New Zealand holidays and sheep farms? Why is my application for the coding class ignored, no matter how often I send it? Why am I excluded from getting reasonable health insurance?

Caught in the algorithmic circle of hell?

We feel helpless – and sometimes hopeless – when we bang our heads against the stubborn structures built by algorithms. These feelings are further worsened by the fact that algorithms’ apparent lack of sensitivity seems to be closely intertwined with an invasion of another type of wrathful online demon, appearing in the form of disrespect, insult, abuse, slander, and outright threats hurled at us from random sources: Why does customer service not answer my email? Why is my call disconnected after 37 minutes of holding the line with atrocious music? Why does someone I don’t even know call me a slut in a tweet? Why does my pal from university unfriend me on Facebook? Why does some random guy threaten my life?

The age of algorithm sometimes feels like a pit of hellish fire[2]. We’re helplessly torn and cut to pieces, slowly boiling in a scorching cauldron, viciously stirred by leaden ladles in the hands of some dark master, following a hidden recipe. In this burning mess, we lose our minds and hearts, stripped of all we once cherished as the selves we used to be, robbed of all feelings of identity, self-confidence, and trust. This is sheer panic, just like any panic grabbing us when horror or trauma attacks, chains us, and devours our innermost being. As with any other panic, our reflex will be to fight, freeze, or flee into the cocoon of (temporary) digital silence. And as with any other panic, these reactions will not make that which haunts us disappear.

What, then, can save us? Or is it time to accept that all is lost and we’ll henceforth and forever burn alive in fiery algorithmic circles beyond our control? Giving up would amount to conceding defeat to a reign of terror under the iron thumbs of algorithms and abuse. So maybe there is a way to calm, steer, redirect, or outright destroy their power? I’d rather give it a try than give in. And – as those who know me a little will not be surprised to hear – I’ll start poking holes in them by trying to understand as much as possible about how they work. So: What is an algorithm? How does it gain its power? And why and how is it related to abuse?

Generally, of course, algorithms are clearly defined series of steps to follow in order to get from a certain input (and initial state) to a certain output (and end state). The standard example is a kitchen recipe: It starts with a list of ingredients, goes through a finite number of instructions on peeling, chopping, frying, seasoning, and baking for a predefined time at a predefined temperature, and ends with a tasty vegetable lasagne which can then be enjoyed accompanied by a glass of wine and a friend. Just like most kitchen recipes, many online algorithms are relatively harmless, translating sentences from one language into another, mapping restaurants close to those who are hungry, connecting drivers and those who want a ride. They might go in circles or produce unwanted results, but they’re not threatening and certainly not algorithms from hell.
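
In code, that general definition is nothing more mysterious than a function: a defined input, a finite series of steps, a defined output. Here is a purely illustrative toy version of the lasagne recipe in Python – the ingredients, times, and temperatures are invented, not part of any real recipe:

```python
def lasagne_recipe(vegetables, pasta_sheets, cheese):
    """A kitchen recipe as an algorithm: defined input, finite steps, defined output."""
    chopped = [f"chopped {v}" for v in vegetables]        # peel and chop
    sauce = "sauce of " + " and ".join(chopped)           # fry and season
    layers = [sauce, *pasta_sheets, cheese]               # assemble the layers
    # bake for a predefined time at a predefined temperature (numbers invented)
    return f"lasagne baked for 40 minutes at 180 degrees, made of {layers}"

print(lasagne_recipe(["zucchini", "tomato"], ["sheet 1", "sheet 2"], "parmesan"))
```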

The “Customers who…” algorithm

Specifically, however, algorithms are losing their innocence mainly because of the widespread presence of one very common type of online algorithm, namely the “Customers who…” algorithm. The most salient example of this is Amazon’s infamous “Customers who bought this item also bought that item”, suggesting that I, who bought a Japanese kitchen knife, should strongly consider also buying a certain new brand of orange rubber boots. In more abstract terms, this algorithm takes a trait or a behavior of a person as input and generates another trait or behavior of/for that same person as output.

Let’s unpack what’s happening behind the scenes. Taken apart into its main components, the “Customers who…” algorithm consists of four distinct steps (sketched in code after the list) that run roughly as follows:

  1. Observation: Pick a person’s trait or behavior –
    “Anja bought a Japanese kitchen knife.”
  2. Generalization: Put that person in a group defined by that trait or behavior –
    “Anja is the kind of person who buys a Japanese kitchen knife.”
  3. Correlation: Identify other traits or behaviors typical for that group –
    “People who buy Japanese kitchen knives also buy a certain new brand of orange rubber boots.”
  4. Conclusion: Ascribe those other traits or behaviors to the initial person –
    “Anja should buy this new brand of orange rubber boots.”
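
To make these four steps concrete, here is a minimal sketch in Python of the co-purchase logic behind a “Customers who…” recommendation. The purchase histories, the names, and the idea of simply picking the most common co-purchase are all invented for illustration; real recommender systems are considerably more elaborate.

```python
from collections import Counter

# Hypothetical purchase histories (step 1 happens when Anja buys the knife).
purchases = {
    "Anja":  {"japanese kitchen knife"},
    "Ben":   {"japanese kitchen knife", "orange rubber boots"},
    "Chris": {"japanese kitchen knife", "orange rubber boots"},
    "Dana":  {"orange rubber boots", "origami paper"},
}

def customers_who(person, item, data):
    # Step 2: generalization - everyone else who bought the same item
    # forms the group of "the kind of person who buys this item".
    group = [p for p, bought in data.items() if item in bought and p != person]
    # Step 3: correlation - count what else that group bought.
    co_bought = Counter(other for p in group for other in data[p] - {item})
    # Step 4: conclusion - ascribe the group's most common other purchase to the person.
    return co_bought.most_common(1)[0][0] if co_bought else None

print(customers_who("Anja", "japanese kitchen knife", purchases))
# -> orange rubber boots
```

Note that nothing in this sketch ever asks why Anja bought the knife – which is exactly where the trouble described below begins.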

If the “Customers who…” algorithm were limited to suggestions of online purchases (which I can then decide to ignore), I’d probably find it amusing or annoying, but not abusive or overpowering. Unfortunately, the very same algorithm is used to define what I see in some of my social media feeds (creating the phenomenon of the filter bubble or echo chamber). It is also used to decide whether to give me access to certain products or services through profiling or scoring mechanisms (potentially deciding about my well-being, health, and life expectancy). And it is even used to identify potential criminals or terrorists (and put them under surveillance or into prison)[3]. Such proliferations can have a huge impact on our lives, diminishing our sense of agency and severely limiting our ability to shape our destinies. As a consequence, we feel patronized and mistreated, pushed into a corner that we never actively chose to dwell in – or (if worst comes to worst) locked into a cell without even being informed which crime we’re allegedly guilty of.

In addition, the “Customers who…” algorithm is emotionally loaded in two other ways. Firstly, because for most of us, regardless of content, its outcome – someone telling us what to do or telling us who/how we are – evokes memories of childhood irritation, frustration, and anger. Amazon’s “You have to buy this new brand of orange rubber boots!” sounds a lot like dad’s “Don’t go outside without wearing your mittens!”. Facebook’s “We think you might like these origami videos!” sounds a lot like grandma’s “I’m sure you’ll like the neighbor’s new little puppy!” – while all the while I was terrified of dogs, no matter what shape or form they came in. We shrink when we’re told what to do, and we shrink even more when someone tells us who we are[4].

Secondly, the “Customers who…” algorithm is emotionally loaded because our very own human minds tend to follow the same logic. When someone plays the violin next to the entrance to a tube station (Observation), we put them in the category of people who make music on the street (Generalization). We then equate those with people who make little money and have little standing in society (Correlation), so we assume the person playing the violin is some homeless person (Conclusion) trying to make a few bucks – and it comes as a total surprise when we then learn that this was the world-famous violinist Joshua Bell. In this way, we as human beings use algorithms, too – more often than not making snap judgments about other people in the blink of an eye. And most of the time we’re not even aware of why we’re coming to the conclusion we’re landing on – it just feels intuitively true[5]. We feel strongly about the outcomes of algorithms, because in our own experience these outcomes are what makes us feel right in our guts (and therefore good).

When algorithms go wrong

Unfortunately, when using the “Customers who…” algorithm, both machines and human beings can go wrong. And: The feeling of being at the mercy of unknown forces arises in us precisely when we suspect that in the course of applying this algorithm, something went off course. So how exactly do these aberrations happen? And what (if anything) can we do to get back on track? Firstly, both machines and humans can be wrong about inferring that the person who bought a Japanese kitchen knife is actually the kind of person who buys a Japanese kitchen knife. When moving from observation to generalization, machines overlook the fact that I bought the knife for a friend, that it was accidentally added to my shopping cart by my son, or that I bought it only to then find out that it was utterly useless for cutting edges. In contrast, human beings jump to conclusions too quickly, taking as the starting point for their algorithm not the fact that I bought a knife but assumptions like: “Anja likes to cook”, “Anja is planning a murder”, or: “Anja is interested in Japanese culture”. Machines trip over ownership of traits or actions; human beings trip over objective interpretation of traits or actions.

Then, both machines and human beings can be wrong about inferring that people who buy Japanese kitchen knives also buy a certain new brand of orange rubber boots. When moving from generalizations to correlations, machines go for averages, probabilities, and majorities, settling on less-than-perfect interdependencies, ignoring the fact that 37 percent of Japanese knife buyers have shown no interest at all in rubber boots of any color. In contrast, human beings go for similarities, likelihood, and convincing stories, remembering that one video clip where a Swedish chef cuts radishes while wearing orange rubber boots. Machines get it wrong, because we’re often less ordinary than some mathematical aggregate of past observations; human beings get it wrong, because we’re often more ordinary than some salient recent event impressed in our memories. Machines mess up because they over-generalize; human beings mess up because they over-individualize.

And: Both machines and human beings can be wrong about inferring that just because people who buy Japanese kitchen knives also buy a certain brand of orange rubber boots, I personally should immediately put a pair of those into my shopping cart. When moving from correlation to conclusion, machines miss out on the information that I have a strong rubberphobia, on the fact that I’m a dedicated barefoot-only person, or on the circumstance that I live in a city famous for getting the least amount of rain on the whole continent. In contrast, human beings give too much weight to the recent memory of a joint walk in the pouring rain, to my long-time crush on the color orange, and to their own liking for boots. Machines get it wrong because they overlook tiny, but relevant details; human beings get it wrong because they pay too much attention to tiny, but irrelevant details. Machines screw up on pragmatism; human beings screw up on principles.
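
To put a number on the machine side of these mistakes, here is a small, hypothetical back-of-the-envelope check in Python. The counts are invented around the essay’s 37 percent figure, and the 50-percent threshold is an arbitrary stand-in for whatever cut-off a real system might use.

```python
# Invented counts, built around the essay's "37 percent" example.
knife_buyers = 1000        # people who bought a Japanese kitchen knife
also_bought_boots = 630    # of those, how many also bought orange rubber boots

confidence = also_bought_boots / knife_buyers          # P(boots | knife) = 0.63
print(f"confidence of the rule: {confidence:.0%}")     # -> 63%

# The machine settles for this less-than-perfect interdependency and applies it
# to every single knife buyer - including the 37 percent who never wanted boots.
if confidence > 0.5:       # arbitrary, invented threshold
    print("recommend orange rubber boots to all knife buyers")
```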

With all this, it’s no wonder we constantly find ourselves trapped in the workings of algorithms whose outcomes make us cringe, clash with our desires, and sometimes go against our deepest convictions. And it’s no wonder that being trapped like this immediately feels like abuse: It starts as a slight constraint, grows into a menace, and turns into a life-threatening infringement on who we really are.

A road to escape?

At the same time, with all this, we can also start to loosen the seemingly unbreakable chains of algorithms and abuse. The trick is to take things backwards, starting at the very end where we’re being told to do something – or told that we are something. For this, it doesn’t make a difference whether the algorithm at play was applied by a machine or by another human being. So: First, we look at the conclusion that irritates us, and the question to ask is: Does this conclusion matter to me? Quite often, fortunately, the answer is: “No!”. I don’t have to buy a pair of rubber boots just because some algorithm tells me so, and Joshua Bell can probably live very well with being mistaken for a homeless person. In this case, the conclusion is just a crumbling castle in somebody else’s sky, dissolving into thin air. Unfortunately, however, quite often the answer is: “Yes, it matters a lot!”. I don’t want to be caught in a filter bubble, I don’t want to live without health insurance, and I certainly don’t want to be exposed to constant death threats.

If and when the conclusion matters, we have to move backwards and look at the correlation. The question to ask here is: Can I reframe the correlation? The web’s knee-jerk way of answering this question is a fierce “Not all…”, thrown right back at the sender: #NotAllMen are rapists, #NotAllGirls like pink, #NotAllBuyersOfJapaneseKnives buy orange rubber boots. Unfortunately, this usually leads nowhere, but rather triggers endless discussions about how to weigh the pain of individual victims of rape, whether to worry about our parenting skills when our daughters actually do like pink, or how to raise awareness of the diversity amongst buyers of Japanese knives. What works better is to completely replace one part of the correlation – “Star musicians often play on the street as part of their regular practice” – or to rewrite it in full: “People with rubberphobia don’t ever buy rubber boots”. Reframed, the correlation emerges as a new paradigm, shifting our perception of who is who and what is what. In serious cases, this is where human rights or moral imperatives come in: “Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family […]”, or: “Thou shalt not kill”. This is the place in the algorithm where mathematics can (and should) be overruled by basic principles of human beings living together. All algorithms – those running in our heads and those we teach to our machines – need to be hard-stopped when such principles are at risk[6].

Sometimes, though, correlations work well and no basic principles are at stake. If we feel caught in an algorithm where the conclusion matters and the correlation cannot be reframed, we need to move back even further and look at the generalization. The next question to ask is: “Can I cut through the generalization?”. Quite often, our struggle with being labeled “the kind of person who…” comes not from a genuine mistake in attribution – these cases are usually quickly clarified and laughed about. Instead, it comes from broader assumptions in our own heads: I don’t want to be the kind of person who buys a Japanese kitchen knife, because I myself (most likely: unconsciously) hold the worrisome belief that people who buy Japanese kitchen knives are more likely to die by drowning. By linking Japanese kitchen knives to death by drowning, my very own (hidden) algorithm makes me emotionally oppose the seemingly harmless generalization offered by the “Customers who…” algorithm. Uncovering these hidden algorithms can be draining, as many of them are associated with our fundamental hopes and fears about being, becoming, or ceasing. By the same token, though, cutting through such hopes and fears can be quite liberating – regardless of whether we then buy rubber boots or not.

Finally, sometimes we’ll come to the point where we cannot cut through the generalization, so we’re back to the initial trait or behavior that got the algorithm rolling in the first place – the observation. Once there – and assuming we’re still feeling choked by algorithms’ tightening bonds – the only remaining question to ask is: “Can I stop what started it?”. If nothing else helps and we’re still fighting for breath, the only way out is to stop doing what we did, stop saying what we said, stop being what we were. Sometimes, never ever buying a Japanese kitchen knife again is the only way to evade the algorithm’s grip – even if the knife once bought cannot be unbought. And sometimes, this is the lesser pain compared to suffering all of the algorithm’s consequences. Many times, though, there will be no need to stop anything at all, as somewhere on the way the algorithm will already have lost its hold on us.
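
For readers who think better in code, here is one possible way to write down this backward walk as a tiny Python sketch. The function and the answers it takes are of course invented; the real work – answering each of the four questions honestly – stays with the human.

```python
def stay_sane(matters, can_reframe, can_cut, can_stop):
    """Walk an irritating algorithmic conclusion backwards, one question at a time.

    Each argument is the honest yes/no answer to one of the four questions
    from the text; the function only encodes the order in which to ask them.
    """
    if not matters:        # Does this conclusion matter to me?
        return "let it go - a crumbling castle in somebody else's sky"
    if can_reframe:        # Can I reframe the correlation?
        return "replace or rewrite the correlation, or overrule it on principle"
    if can_cut:            # Can I cut through the generalization?
        return "uncover and cut through my own hidden algorithm"
    if can_stop:           # Can I stop what started it?
        return "stop doing, saying, or being what set the algorithm rolling"
    return "keep looking - somewhere on the way it has probably lost its hold"

# Example: a recommendation that simply does not matter to me.
print(stay_sane(matters=False, can_reframe=False, can_cut=False, can_stop=False))
```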

The one mistake we must never make is to stop buying Japanese kitchen knives just because someone told us to buy orange rubber boots. Or to doubt that we’re Joshua Bell just because we once played the violin next to the entrance of a tube station. In other words: Never, ever change a trait or behavior just because you don’t like some algorithm’s outcome. Take your time to look at all steps of the algorithm, backwards and one by one. Don’t panic. Look each step straight in the eyes, don’t run. The deceitful demons will retreat once they see you’re not afraid. When they’re gone, you’re free to change what you want to change, keep what you want to keep, become what you want to become.

Be the algorithm you want to see in the world.


[1] The chapter “Organisms are Algorithms” in Yuval Noah Harari’s “Homo Deus” (2015) gives a good overview of this paradigm.

[2] Sometimes, of course, this age is pure bliss, too. Go back to this old post on elephants crying on trees in the desert, if you want to relax a little and regain some of your trust in all things digital [retrieved May 11th, 2017].

[3] Of course, many of these algorithms use more than just one parameter to draw conclusions. The basic logic, however, remains the same, so the following arguments remain valid even for much more complex algorithms.

[4] Sometimes, of course, we look for this kind of guidance, but then we do it out of our own motivation by asking somebody for advice, doing some research, taking a personality test, or reading a horoscope. These are all deliberate actions the results of which we might or might not follow. Things are completely different when someone attaches a label to us without us having been asked. I blogged about this a while ago – read more here [retrieved May 11th, 2017].

[5] Two of the many obligatory references about this phenomenon go out to Malcolm Gladwell’s “Blink” (2005) and Daniel Kahneman’s “Thinking, Fast and Slow” (2011).

[6] Isaac Asimov’s famous three laws of robotics (from his 1942 story “Runaround”) are a prime example of how this principle itself can be translated into an algorithm machines can follow:
“1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws”.
