The Creativity Code

We used to think that no matter how smart a computer became, it could never match a human. But will computers surpass human reasoning? Maybe.

It used to be that while computers were very clever, they would never match a human. We understood they had immense processing capacity and could crunch numbers a bit like a super calculator, but we knew they would never have our ability to ‘think’. Take, for example, a game of chess: this was seen as the thing that separated us from machines and made us superior. Playing chess effectively required a complexity and creativity that only a human could possess. It was a line that could not be crossed. All was good; we could rest easy.

Until we couldn’t.

In 1997, IBM’s Deep Blue defeated Garry Kasparov, at the time the reigning world chess champion. Shock, horror.
The goalposts had moved significantly. Not to worry, though: there remained another pursuit that would preserve our superiority over so-called Artificial Intelligence: the Chinese game of Go, which requires even more creativity and intuition. There was no way a computer would ever beat a human at this.

Until it did.

Almost 20 years later, in 2016, 18-time world Go champion Lee Sedol was beaten by AlphaGo, a computer programme developed by Google’s DeepMind. As it learned, AlphaGo became so good that it not only won more matches but became, to all intents and purposes, unbeatable.

So now what?

In his new book The Creativity Code, Marcus du Sautoy, Professor of Mathematics at Oxford, uses this existential crisis to take us on a skilful journey through a series of questions:

  • Can computers create?
  • Should computers create?
  • Why should computers create?
  • What will happen?
  • Who controls this artificial intelligence?
  • What will happen to humanity?
  • What IS the difference between a computer and a human?

So many questions, and the trouble is that technology advances so rapidly that we barely have time to digest one before another arises, and another, and another. It feels as though the issue of artificial intelligence has been with us forever, yet until recently it lived mostly in fiction: in science fiction and speculative fiction. Artificial being the key word: not only was it not human, the whole subject seemed artificial, not really real, surely not possible? Here we are, though, having to grasp the issue and take it seriously. But how do we untangle fact from fiction? After all, our ‘knowledge’ comes mostly from stories and films, and that is usually the stuff of nightmares. Throw in a good dash of general mistrust and scepticism and you have dystopian future scenarios for real.


As thousands of websites are launched each and every minute, algorithms are guiding us through the digital age. An astonishing 90% of the world’s data is said to have been created in the last year alone. There is a revolution happening: data is flowing like 21st-century oil, but instead of being difficult to find, it is incredibly easy to tap into, and it’s free. Big Data indeed, so big that we cannot even begin to grasp or process it. Humans can’t, anyway, but computers are now learning to deliver more than, and go beyond, what is put into them. They are able to train themselves on a perpetual stream of data, allowing computer processing to evolve and even, in a sense, to ‘think’.

This is all just data though, right? Just number crunching – like a super calculator – so what?

Previous data methodologies took a traditional, top-down approach: the questions and the input were predetermined. The new revolution comes from flipping this to bottom-up reasoning, where algorithms explore and build as they go. Cumulative questioning builds a decision tree, giving computers human-like abilities to make decisions. Human creativity certainly got the ball rolling, but the machine’s newfound freedom means it too is being creative. And the ability to process millions upon millions of data points means a machine can arrive at answers that a human simply could not keep up with. As du Sautoy writes, “At what point will algorithmic activity take over and human involvement disappear? Our fingerprints will always be there, but our contribution may at some point be considered to be much like the DNA we inherit from our parents. Our parents are not creative through us, even if they are responsible for our creation.”
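
To make the top-down versus bottom-up contrast concrete, here is a minimal sketch in Python (my own illustration, not from the book): a hand-written rule encodes thresholds a person chose in advance, while a decision tree, grown here with the scikit-learn library on an invented loan-approval example, discovers its own questions and thresholds from data.

```python
# A minimal sketch (not from the book): top-down rules versus a bottom-up learned tree.
from sklearn.tree import DecisionTreeClassifier, export_text

# Top-down: a human decides the questions and thresholds before seeing any data.
def approve_loan_top_down(income: float, debt: float) -> bool:
    return income > 50_000 and debt < 10_000   # thresholds chosen by a person

# Bottom-up: a decision tree grows its own questions from examples.
# The data below is invented purely for illustration.
X = [[60_000, 5_000],    # [income, debt]
     [30_000, 12_000],
     [80_000, 20_000],
     [45_000, 2_000]]
y = [1, 0, 1, 1]         # 1 = repaid, 0 = defaulted (toy labels)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

print(export_text(tree, feature_names=["income", "debt"]))  # the rules the machine built itself
print(tree.predict([[55_000, 8_000]]))                      # its decision on a new case
```

The point of the sketch is only the shift in where the rules come from: in the first function a person predetermined every question, while the tree derives its questions cumulatively from the data it is given.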

We must be vigilant. Understanding and assimilating the benefits of computer help is one issue; coming to terms with what this means for humanity is quite another. Perhaps the most pressing matter is the ever-increasing encroachment of robots into the workplace. Bank machines, automated cash registers, robot assembly lines, drones, GPS and virtual assistants such as Siri appear to be just the start, and what comes next could present even greater threats to job security. Profit-seeking corporations are always looking for ways to reduce overheads and improve margins, and fewer and fewer actual people are benefiting. There are counter-arguments suggesting that new technology creates new opportunities, and therefore new roles and new jobs. Whatever the developments, we must ensure that the development of artificial intelligence benefits us all, not just the few. With this in mind, I’ve come away from this very thoughtful analysis with a final thought: I am left remembering Isaac Asimov’s Three Laws of Robotics, written over 50 years ago:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Amen
