

“The machine required incessant attention. Its work had to be watched with anxiety, and its arithmetical music had to be elicited by frequent tuning and skilful handling. This volume is the result, and thus the soul of the machine is exhibited in a series of tables.”

Take away the poetic language and this quote could easily have come from any recent overexcited article about artificial intelligence.

Instead, this was said by government statistician William Farr in 1864. The volume he refers to is The English Life Tables of 1864, which included life expectancies for different segments of the population, so insurance companies could price their premiums better.

And the soulful machine, in case you were wondering, looked like this:

To the modern gaze, this machine has about as much soul as a kitchen toaster.

But we mustn’t get too smug. Today’s hype around AI is everywhere, not least in marketing. And judging by Gartner’s hype curve, we can confidently look forward to several more years of AI-dominated headlines.

Gartner Hype Cycle for Digital Marketing and Advertising

Everyone seems to want a piece of the action: a study by MMC Ventures found that startups claiming to work in AI attract 15% to 50% more funding than other companies. This helps explain the study’s other major finding: 40% of European AI startups don’t actually use AI.

With the majority of asset management firms anticipating an increased spend on AI technology, isn’t it worth unpacking this term – AI – to avoid accidentally paying for an empty label?

Finding a formal definition isn’t necessarily helpful. Here, for example, is the Merriam-Webster dictionary’s take on the subject:

definition of artificial intelligence

Notice how both suggestions are circular, in that they define intelligence using ‘intelligent behaviour’. Worse still, both definitions talk about the appearance or imitation of intelligence, raising the question: who is it meant to fool?

Asking people to define AI also proves a minefield. Earlier this summer, I attended London’s AI Summit, including a special breakfast session held by DevelopHer, a non-profit community dedicated to elevating women in technology. Before the session began, participants were handed Post-it notes and asked to scribble down what they thought AI was.

This small selection shows that definitions of AI are currently so vague that they frequently contradict one another.

artificial intelligence definitions

Instantly unfogging AI

For a long time, it was helpful to think of computers in simple terms: given an input (a mouse-click on an icon), the computer followed a pre-defined series of steps to arrive at an output (say, opening a browser).

Those pre-defined steps to execute a task were planned and outlined by the hardware and software designers. This was great: whenever the computer did something stupid, you knew who to blame. But it also meant that, as tasks grew more complex, they became ever harder to solve through a series of deterministic steps.
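
To make that concrete, here’s a minimal sketch in Python – the icons and actions are invented for illustration, not taken from any real system:

```python
# Classic computing: a human spells out every step in advance, so the
# same input always produces the same output.

def handle_click(icon: str) -> str:
    """Deterministic dispatch: every case was written down by a person."""
    if icon == "browser":
        return "open browser"
    if icon == "mail":
        return "open mail client"
    return "do nothing"

print(handle_click("browser"))  # -> open browser
```

When this program misbehaves, the fault lies squarely with whoever wrote the rules.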

So now, back to AI. Or rather, next time you see AI somewhere, try mentally replacing it with the term computational learning: tasks a computer learns to perform via a series of computations.

Instead of telling the computer exactly what to do, you show it a lot of inputs and corresponding outputs. You ALSO tell the computer that the way to get from the inputs to the outputs follows a specific recipe, controlled by a bunch of parameters. PLUS, you tell the computer which series of computations it needs to perform to improve these parameters, so that its outputs match the example outputs as closely as possible.
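
If that sounds abstract, here is the whole procedure as a toy Python sketch. Assume the recipe is a straight line, y = w*x + b, and the prescribed parameter-improving computation is plain gradient descent (all numbers invented):

```python
# The computer is shown inputs and matching outputs...
inputs = [1.0, 2.0, 3.0, 4.0]
outputs = [3.0, 5.0, 7.0, 9.0]  # secretly generated by y = 2x + 1

# ...and a recipe (a straight line) controlled by two parameters.
w, b = 0.0, 0.0
learning_rate = 0.01

# The prescribed series of computations: nudge w and b downhill on the
# squared error, over and over, until the recipe reproduces the examples.
for step in range(2000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(inputs, outputs)) / len(inputs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(inputs, outputs)) / len(inputs)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # creeps towards w=2, b=1
```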

Had the computer been a sentient employee, they’d crossly call this micromanagement. It absolutely is: William Farr would have felt right at home.

Let’s also unravel deep learning, which wonderfully conjures the image of a superior being steeped in thought. No such luck, I’m afraid.

Like we said, developers tell the computer which recipe it needs to follow to learn a task. There are many types of recipes, with one specific type being neural networks. Like trifle, these are layered. Three layers is shallow, but six, for example, is deep. (Nowadays, there can be many more.)
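
To demystify the layering a little, here’s a rough numpy sketch. The layer sizes are arbitrary, and the weights are random rather than learned – a real network would adjust them exactly as described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Six weight matrices stacked like trifle: "deep", by the yardstick above.
layer_sizes = [4, 8, 8, 8, 8, 8, 2]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    for w in weights:
        x = np.maximum(0, x @ w)  # each layer: matrix multiply + simple non-linearity
    return x

print(forward(rng.normal(size=4)))  # an untrained (and meaningless) output
```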


Based on this discussion, two things immediately stand out:

AI is not intelligent

Clearly, this type of “learning” is very different to how humans think about learning. Each individual task is restricted in scope. For example, we train computers to sort data into different buckets (technically known as classification), or to recognize which bits of an input are of a particular type (often termed labelling). Additionally, certain kinds of algorithms can learn to generate new objects similar to ones they’ve seen (these are called generative algorithms).
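
Here’s how narrow “sorting into buckets” is in practice – a complete classification sketch using scikit-learn’s bundled iris data set, illustrative rather than anyone’s production pipeline:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 150 flower measurements, each belonging to one of three buckets (species).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model learns nothing about flowers, only a mapping from numbers to buckets.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy on unseen examples: {model.score(X_test, y_test):.2f}")
```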

I don’t mean to minimize what can be achieved with these methods. Facial recognition has advanced to the point that it’s helping catch criminals; speech recognition now enables automatic subtitling with ease; current image generation technology is wildly impressive.

None of these people has ever existed

All along, though, it’s still human software developers who break down a problem into a series of simpler tasks.

It’s a technical challenge, for sure, but also one requiring a specific kind of creativity. So, when we think of AI’s negative consequences, our real issue isn’t how smart the AI is, but what happens when that kind of human creativity goes unchecked.

Conversely, for AI to add value to a business, it’s not the technology that’s the problem – it’s finding the right people to figure out what to do and how to do it.

Predicting customer churn is a good example. Marketers spend an inordinate amount of time acquiring new customers, but what makes an existing customer decide to leave? It’s up to people to join the dots, says Abhijit Akerkar, Head of AI for Lloyds.

Speaking at the AI Summit, he observed that this boils down to deciding which signals in your data to look for. To underline the point, he highlighted that variety of data is far more important than sheer volume.
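
As a purely hypothetical illustration of that point (every column name and value below is invented), here is a churn model built on a handful of varied signals rather than mountains of one:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# A few *different kinds* of signal about each customer (made-up data).
customers = pd.DataFrame({
    "months_since_last_login": [1, 14, 2, 30],
    "support_tickets_open":    [0, 3, 1, 5],
    "direct_debit_failed":     [0, 1, 0, 1],
    "churned":                 [0, 1, 0, 1],
})

model = RandomForestClassifier(random_state=0)
model.fit(customers.drop(columns="churned"), customers["churned"])

# Which of the varied signals the model leans on most:
print(dict(zip(customers.columns[:-1], model.feature_importances_)))
```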


AI is not artificial

The commercial value of AI lies in its ability to help delegate decision-making for problems where the line between right and wrong is too blurry to spell out using a set of rules. Deciding whether to send an ‘out of office’ email is a task with clear-cut rules; making a call on a customer’s credit-worthiness is perhaps less so.
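
The contrast shows up clearly in code. In this sketch, `trained_model` and the features it expects are placeholders, not a real scoring system:

```python
from datetime import date

# Clear-cut rules: anyone can read, audit and challenge this decision.
def should_send_out_of_office(today, away_from, back_on):
    return away_from <= today < back_on

# Blurry call: delegated to a model trained on past human judgments.
# `trained_model` is hypothetical, and so are the features it expects.
def looks_creditworthy(trained_model, income, debts, years_at_address):
    return trained_model.predict([[income, debts, years_at_address]])[0] == 1

print(should_send_out_of_office(date(2019, 8, 14), date(2019, 8, 12), date(2019, 8, 19)))  # True
```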

One of the great perks of delegating decision-making to an algorithm is that no one is held personally accountable, because “the AI did it”. Over time, however, this lack of accountability erodes public trust – not just in a single industry player, but in the industry as a whole.

Public distrust in airline seating policies, for example, has led to an investigation by the UK’s Civil Aviation Authority to uncover whether algorithms deliberately split up families in an effort to charge additional fees for changing seats (there’s strong evidence to suggest it’s true, though airlines deny it).

More recently, the Reuters Institute’s annual Digital News Report showed that 55% of respondents across 38 countries worry about their ability to distinguish real news from the fake variety; 32% said they actively try to avoid news altogether.

It’s good business for any single airline to split up families likely to pay for seats, or for publishers to offer sensational news, which is proven to drive more traffic and engagement. Both these tasks can easily be achieved with AI. Crucially, the machine couldn’t care less one way or the other. It’s human values and judgment that dictate what to achieve and how.

And this judgment is often lacking. All too often, the data used to computationally learn a task is steeped in our previous biases and judgments; the learned performance merely holds up a mirror to our face.

Women – half the human population – are disturbingly underrepresented in training data sets, resulting in skewed performance. Speech recognition technology, for example, is notoriously poor at comprehending female voices. Much of the time, it’s wilful neglect: when Apple’s AI, Siri, was newly launched, it could confidently advise on what to do in case of a heart attack. In the case of rape, though, it retorted: “I don’t know what you mean by ‘I was raped.’”

But sometimes biased performance arises directly from the opaqueness of the computational learning process. We know that popular face recognition services commonly mistake black females for males (Amazon and IBM get it wrong 30% of the time; Microsoft errs in 21% of test cases).

But even when the data sets are fair, facial recognition algorithms perform differently on different racial groups, and scientists are just not sure why.

Computers can do more tasks – and do them better – than in the past. But it’s still up to us to choose how to put new capabilities to use. When we do use technology, it’s also up to us to ensure that it aligns with the values we claim to hold.

Being smart about AI

AI is much too snappy a term to be replaced by some nerdy alternative. But I fuss over this for the same reason philosopher Ludwig Wittgenstein gave:

Ludwig Wittgenstein quote

If AI is a concern, it may as well be a concern for the right reasons.

There are plenty of marketing technology solutions out there, yet it’s still up to marketers to judge: Is widespread face recognition ethical? Is it deceitful to create marketing videos in which people who never existed say whatever we want them to?

Any business considering embedding machine learning into its workflow must now contend with such questions. But these are all about human values. Glorifying computer chips into soulful beings merely abdicates the responsibility of owning up to our choices.


Vered Zimmerman

Vered is an investment writer in our London office. She holds an MBA from Cass Business School and an MSc in mathematics from the Hebrew University in Jerusalem.