Artificial Intelligence: time to stop worrying and start doing

Whether you’re an exuberant techno-optimist or a dyed-in-the-wool neo-Luddite, there’s no escaping the hype around artificial intelligence and the predictions, prognostications and prophecies about what it will and won’t be able to accomplish. But, future-gazing aside, AI has arrived and is already transforming a huge range of business activities. Organisations that aren’t experimenting risk being left behind.

There’s a lot of confusion about artificial intelligence, mainly because people talking about AI are often thinking of completely different things. The first thing to establish when talking about AI in the broad sense is the distinction between different types, or levels, of artificial intelligence.

General AI and Narrow AI – categorising machine intelligence

Let’s start with two loose definitions.

General AI, or Artificial General Intelligence (AGI), is possessed by a machine that can perform any intellectual task that a human can. General AI is versatile, in that it can interpret and reason about its environment as a human would. It’s represented in fiction by minds like HAL 9000 in Arthur C. Clarke’s 2001: A Space Odyssey. General AI is also sometimes referred to as Strong AI, though in academia that term is reserved for machines capable of experiencing consciousness, which isn’t necessary for AGI.

Narrow AI, or Applied AI, or Weak AI, is machine intelligence that can perform specific reasoning or problem-solving tasks. Plenty of examples of Narrow AI already exist: DeepMind’s AlphaGo, speech and image recognition software, self-driving cars. But each is confined to its domain: a system that can transcribe audio cannot be used to drive a car, and vice versa.

The philosophical jury’s still out on whether Strong AI is even possible. Ray Kurzweil, the futurist and co-founder of Singularity University, predicts that computers will be able to pass the Turing test – in which human evaluators judge the natural-language abilities of a machine designed to exhibit human-like responses, and which is commonly held to be a benchmark for AGI – by 2029. But when Turing proposed the test in 1950, he never intended it as a measure of intelligence, and it’s perfectly conceivable that a Narrow AI could be built to pass it by mimicking human language tics sufficiently well. A chatbot called Eugene Goostman made headlines in 2014 for ‘passing’ the test, but with the qualifier that ‘Eugene’ was a non-native English speaker. And, looking at some of Eugene’s archived conversations, it seems the judges were very lenient.

But regardless of how we determine the advent of General AI, for our purposes, it’s clearly a long way off. So let’s focus on Narrow AI.

Yesterday’s magic is today’s normal

I wouldn’t call most of what gets labelled AI today by that name. We’re simply using the massive computing power of the cloud to process more information than we’ve been able to before. We’re also combining massive amounts of data with that computing power to gain deeper insights than ever before.

I’m reminded of John McCarthy, who coined the term Artificial Intelligence in 1956. He said “as soon as it works, no one calls it AI anymore”. Many of the logic-based systems that were previously called AI are no longer considered AI today.

Invoking the ghost of Arthur C. Clarke again, the science fiction author proposed the third of his three laws in 1973:

“Any sufficiently advanced technology is indistinguishable from magic.”

The example he gave was that, if someone had told him in 1962 that there would come a day when a book-sized object could hold the contents of an entire library, he would have believed them. But, thinking of the huge Linotype machines of the day, he would never have believed that the same device could find a specific word in under a second and then convert it into any typeface and size.

That’s how AI will creep up on us. There won’t be some big bang. Algorithms will become ever more powerful, the volume of data to analyse will grow exponentially, tasks will become automated, jobs will change and 10 years from now, the world will look very different to today.

Machine learning and hyperpersonalisation

Machine learning is the use of statistical techniques to give machines the ability to ‘learn’ from data, progressively getting better at specific tasks without being explicitly programmed for them. (Deep learning, which uses many-layered neural networks, is one prominent family of machine learning techniques, though the two terms are often conflated.) While it has a huge range of applications, from email filtering, to facial recognition, to disease identification, to the kind of computer vision required for autonomous vehicles, one of the areas where it’s having the biggest impact on our daily lives is in generating personalised experiences for consumers.
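
To make this concrete, here’s a minimal sketch of the ‘learning from data’ idea, in the email-filtering vein, using Python and scikit-learn. The training examples and labels are invented for illustration; a real filter would learn from millions of messages.

```python
# A toy spam filter: the model 'learns' a mapping from text to label
# purely from example data, rather than from hand-written rules.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training data, for illustration only
emails = [
    "Win a free holiday now", "Claim your cash prize today",
    "Minutes from Tuesday's board meeting", "Can we reschedule our call?",
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Free prize inside"]))       # likely ['spam']
print(model.predict(["Agenda for our meeting"]))  # likely ['ham']
```

The crucial point is that nowhere did we write a rule saying ‘free’ means spam; the model inferred that association from the examples.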

Spotify’s Discover Weekly is a great example of this, as is FeedForward AI’s Figaro. Both have the power to dramatically improve the conversation with the customer by making them feel that their individuality is being taken into account. But beyond recommendation engines, the improving ability of natural language processing software is transforming the marketing landscape. Gartner predicted a few years ago that 20% of all business content produced in 2018 would be authored by machines, and bots are already making inroads into writing the news. When you combine this ability with prescriptive analytics, where machines use data to make decisions on, for example, campaign strategy, it becomes clear that we’re heading towards a world of hyperpersonalised content and experiences.
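
To illustrate the intuition behind recommendation engines (and only the intuition: Spotify’s actual system is vastly more sophisticated), here’s a toy item-based collaborative filter over an invented listening matrix:

```python
import numpy as np

# Rows = users, columns = tracks; 1 = listened, 0 = not (invented data)
listens = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
])

# Cosine similarity between track columns: tracks played by the same
# people score as similar to each other.
norms = np.linalg.norm(listens, axis=0)
sim = (listens.T @ listens) / np.outer(norms, norms)

def recommend(user, top_n=2):
    """Score unheard tracks by their similarity to tracks the user played."""
    heard = listens[user].astype(bool)
    scores = sim[:, heard].sum(axis=1)
    scores[heard] = -np.inf          # don't re-recommend heard tracks
    return np.argsort(scores)[::-1][:top_n]

print(recommend(0))  # suggests track 2 first: users with similar taste played it
```

Swap the toy matrix for billions of real listening events and the same similarity idea underpins many production recommenders.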

It’s heartening that the BBC is keeping up with this trend, building personalisation into the new iPlayer with show recommendations based on your viewing history and preferences. And they’re going much further than that.

Matt Jukes of Notbinary, who’ve been leading software development and product management on related BBC projects, explains: “BBC audiences, especially youth audiences, expect the best content to be available to them in a single place. And they expect it to be personalised to their preferences and interests. When content and audience data is distributed across myriad systems that are hard to connect, as it is at the moment, this is impossible. There’s also a lack of content metadata, which makes content discovery difficult. All of which seriously hampers the BBC’s ability to engage the next generation of TV licence fee payers, who are already less likely to have an affinity for the corporation than the rest of the population.

“The BBC Datalab addresses these issues by bringing together everything we know about all BBC content in one place (the Content Graph) and using machine learning to enrich it with additional metadata. That means we can identify the content which is most relevant to an individual’s interests and the context in which they’re viewing. By creating a data “platform”, which can be extended by other BBC teams and used by many different products, this approach uses data to create more consistent and relevant experiences for BBC audiences.”
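
Purely to illustrate the enrichment pattern Jukes describes (this is a hypothetical sketch, not the Datalab’s actual pipeline, and the topics, fields and keyword matching below stand in for real machine learning models), imagine content records being augmented with derived tags that any downstream product can query:

```python
# Hypothetical sketch: enriching content records with derived metadata,
# so that many products can query one shared "graph" of what exists.
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    title: str
    synopsis: str
    tags: set = field(default_factory=set)

# A stand-in for a real ML topic model: naive keyword matching
TOPIC_KEYWORDS = {
    "nature": {"wildlife", "ocean", "planet"},
    "politics": {"election", "parliament", "minister"},
}

def enrich(item: ContentItem) -> ContentItem:
    words = set(item.synopsis.lower().split())
    for topic, keywords in TOPIC_KEYWORDS.items():
        if words & keywords:
            item.tags.add(topic)
    return item

graph = [
    ContentItem("Blue Planet", "Life in the ocean and its wildlife"),
    ContentItem("Question Time", "Debate as the election approaches"),
]
for item in map(enrich, graph):
    print(item.title, item.tags)

# Any product can now filter the shared graph by a user's interests, e.g.
# [i for i in graph if "nature" in i.tags]
```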

The BBC is even experimenting with personalisation within programmes through Object-Based Media.

Ethical questions, of the type thrown up by the recent scandals over ‘fake news’ and Cambridge Analytica, will grow increasingly urgent as more and more of what we see, hear and read is authored by machines and presented to us by algorithms. But with the huge implications for optimising marketing activity, including ads, this is a tide that won’t be turned back.

How should businesses be reacting to AI?

If you’re not experimenting with AI in your own business today, you’re in danger of being left behind by those that are.

Chatbots use natural language processing to guide users through complex decision trees in a conversational way. This can provide an alternative, and more intuitive, way of delivering services and content to customers than journeys through the information architecture of websites and apps, boosting engagement and driving goal completions.
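
Under the hood, many such bots are simpler than they appear. Here’s a minimal sketch of the decision-tree pattern, with invented questions and no real natural language processing (a production bot would use an NLP layer to map free text onto these branches):

```python
# A toy conversational decision tree: each dict node asks a question
# and routes on the answer until it reaches a recommendation (a string).
TREE = {
    "question": "What do you need help with? (orders/returns)",
    "options": {
        "orders": {
            "question": "Track an order or place a new one? (track/place)",
            "options": {
                "track": "Here's your tracking page: ...",
                "place": "Let me show you the catalogue: ...",
            },
        },
        "returns": "Here's our returns form: ...",
    },
}

def chat(node=TREE):
    # Walk the tree: dicts are questions, strings are final answers.
    while isinstance(node, dict):
        answer = input(node["question"] + " ").strip().lower()
        node = node["options"].get(answer, node)  # re-ask on unknown input
    print(node)

# chat()  # type 'orders' then 'track' to walk one branch to a leaf
```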

To take just one small-scale example, the makeup retailer Sephora used a chatbot on Kik, a social messaging platform wildly popular with US teens, to strike up conversations with potential customers and make product recommendations based on responses to a few simple questions, such as choosing a face shape from a set of predetermined options. This is fairly basic stuff, but it has already helped the company increase bookings for in-store makeovers by 11%.

But bots don’t just have huge implications for the customer experience; they can also drive major efficiencies on the operational side. Robotic Process Automation (RPA) involves creating rules-based algorithms to automate mundane, repetitive tasks. Machine learning is accelerating this process and broadening the range of activities to which it can be applied. The banking industry is a leader here, with bots successfully automating tedious tasks like processing loan applications.
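
In spirit, classic RPA is codified business rules. A hedged sketch of loan-application triage (the fields and thresholds are invented for illustration) might look like this:

```python
# Toy rules-based triage in the RPA spirit: deterministic checks that
# a human clerk would otherwise perform by hand. Thresholds invented.
def triage(application: dict) -> str:
    if application["monthly_income"] <= 0:
        return "reject: no verifiable income"
    ratio = application["loan_amount"] / (application["monthly_income"] * 12)
    if application["credit_score"] >= 700 and ratio < 3:
        return "auto-approve"
    if application["credit_score"] < 500:
        return "auto-decline"
    return "refer to human underwriter"  # anything ambiguous escalates

print(triage({"credit_score": 720, "monthly_income": 4000, "loan_amount": 50000}))
# -> auto-approve (50000 / 48000 is about 1.04, well under 3)
```

The value comes less from any single rule than from running thousands of such checks consistently, instantly and around the clock; machine learning then extends the pattern to judgments that are hard to write as explicit rules.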

Now’s the time to be learning about what’s possible, and to be figuring out how to use the data that probably already exists inside your organisation. For most companies, that’s the biggest challenge of all.