Given that headline, we can all go home as there is no longer a blog worth reading here, yes? But before you click away from here, just hear me out.
Yes, I’ve been reading around what I believed to be artificial intelligence for a number of months now, in preparation for my PhD on the subject. I’ve had a lot of discussions with other people interested in the topic, played around with a few programs, and done my share of the reading. So by all accounts, I should know something about AI, shouldn’t I?
In reality, I should’ve been much more specific: what I know about would be much better classed as machine learning. In other words, I’ve learned about algorithms that can learn from data, and what the current applications of these algorithms are. That’s how I’ve been able to talk to you so far about how algorithms might be used to convince you to buy things you might not need, or comparing modern techniques to a fictional “general” AI.
You might be thinking, “Isn’t that AI?”. Maybe you want to lump robots in there too, because robots are very cool, very artificial, and seem smart enough. That’s what I thought when I was first conceiving this blog, after all, and these topics seem quite consistent with what the media talks about when it’s talking about artificial intelligence. It’s not hard to see why people would form this view.
But, as I’ve discovered very quickly through my PhD, artificial intelligence is so much more than just algorithms and robots.
Defining Artificial Intelligence
Go ahead, try answering this one yourself first: what is Artificial Intelligence? Feel free to share your answer in the comments below!
While you’re coming up with that, here is a picture that serves no purpose other than to take up space:
Now ideally you’ve already come up with a definition without having seen the list below, which the kitten hopefully distracted you from. Without any ability to know what you’ve decided on, I’m going to assume your definition included at least one of the following keywords (or something close enough):
How did I do?
I’ve compiled that list from a very quick look at the top three papers on Google Scholar for “artificial intelligence definition” (sorted by proposed relevance). Within those three papers, I found seven different definitions of the term. And believe me, that is barely scratching the surface of the debates that have been had on this topic.
The challenge here is that in order to define ‘artificial intelligence’, you need to define both ‘artificial’ and ‘intelligence’. Defining artificial isn’t easy, but defining intelligence has challenged scholars for millennia, going back at least to the days of Aristotle. Here are a few thinking points on the matter:
- You can’t just say intelligence is being smart, because what is being smart?
- Is intelligence doing the right thing? Then what is the right thing? And does the subject have to “know” that it’s doing the right thing to be intelligent?
- Is intelligence just knowing a lot of things? But there are plenty of people who know a tonne of things yet whom we’d consider complete idiots in many areas of life. I may have a master’s degree in applied mathematics, but I had no idea what to do when my boiler started leaking. How do you decide which knowledge matters more when it comes to this supposed intelligence definition?
- And does intelligence have to be defined by comparing to humans? There are plenty of species of animals we consider intelligent, even if they haven’t developed the civilisation we have. So can you say something is intelligent if it acts like a human, or is that too anthropocentric?
And to add to the difficulty, here are some points about defining artificial:
- Is it just things that are man-made? What about things that are cultivated by people but develop according to nature, like crops growing on farms?
- Can only humans make artificial things? Are spider webs, anthills or beehives artificial?
- Does something that’s artificial even have to be physically made? Can concepts resulting from our society, like biases, knowledge or faith be considered artificial?
To summarise, calling something artificial intelligence is not simply a case of comparing it to robots and computers doing sci-fi style things, because actually defining artificial intelligence is much more abstract than that. And this is a good thing, because only thinking of artificial intelligence in that way is very limiting to our ideas about what it could be, or what form it could take.
What people usually think of as AI
So why does the media stick to robots and computer programs when it talks about AI? Because it’s easy, exciting and familiar. Most people in these outlets’ markets have come across their share of futuristic media in their lives, but even without HALs or Terminators, they can appreciate the types of AI the media talks about because they’re both tangible and relatable.
We already have smartphones and apps that do all kinds of fantastic operations, and computers are an integral part of our society that we make use of every day. We give instructions by inputs, and without us understanding how it all works, it gives us the result we want, as if by magic. Our camera apps can detect faces, translation engines can let us read articles in different languages, and our Netflix account keeps giving us new shows to watch. All these ‘artificial intelligences’ don’t need to be understood or challenged as candidates for AI: they are a part of our daily lives, they’re cool, and that’s enough for the average person.
Yet despite the seemingly advanced nature of these technologies, once you get past the fun user interfaces and down to the underlying systems, many of them are simply cases of applied maths and statistical methods. For many modern algorithms, a large amount of training data is used to build a model of the problem, and this is what your app is relying on. For example, it might look at millions of photos to “learn” what faces look like, in the sense that you can “learn” the average of a set of numbers, and how to draw a box around these faces successfully. The input from your app then goes into the model, the model processes it, and out pops the result, which is hopefully what you were expecting. Is this really artificial intelligence? Or is this just applied statistics? Is there even a distinction?
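To make the “learning an average” point concrete, here is a deliberately toy sketch of that statistical pipeline. The “face”/“no face” classes, the brightness numbers, and the function names are all invented for illustration; real face detectors are vastly more sophisticated, but the train-then-predict shape is the same.

```python
# A toy illustration of "learning" in the statistical sense: the model
# "learns" each class by averaging its training examples, then labels
# new inputs by whichever learned average is closest.

def train(examples):
    """Learn one number per class: the mean of its training values."""
    return {label: sum(values) / len(values)
            for label, values in examples.items()}

def predict(model, x):
    """Pick the class whose learned mean is closest to the input."""
    return min(model, key=lambda label: abs(model[label] - x))

# Made-up "training data": a brightness score for photos with and
# without faces. Millions of real photos would play this role in an app.
training_data = {"face": [0.8, 0.9, 0.7], "no_face": [0.1, 0.2, 0.3]}
model = train(training_data)
print(predict(model, 0.75))  # a new input goes in, a label pops out
```

That is the whole trick: no understanding, no reasoning, just a stored summary of past data being compared against a new input.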
At the other end of the spectrum, when we think about advanced AI robots, they inevitably look like humans. Note how many of the results look humanoid if you type ‘robot’ into Google Images. Putting a familiar face, or a familiar shape, onto robots just makes them that much more accessible, even if we haven’t managed to get past the uncanny valley yet. In reality, there are all kinds of robots that aren’t humanoid, working as mechanical arms in factories, drones moving through remote locations, or engaging in brutal oil-sport with each other for our entertainment.
It’s both important and relevant to draw a distinction here: the bulk of modern robots simply follow a pre-programmed routine or respond to controls as they’re input, rather than doing anything that might be considered ‘intelligent’. Self-driving cars, designed to navigate busy city streets without anyone telling them where to go, could by comparison be considered a potential example of an intelligent robot. The fact remains, however, that this sort of robot is still in its infancy and very much a minority, though new types are on the way. In general, you should not look at a robot and consider it intelligent by default.
Other Forms of AI
We’ve discussed algorithms and robots, and given examples of why they could or could not qualify as artificial intelligence. Now let’s move away from those and consider other potential technologies that could count as artificial intelligence, whether that be producing a new intelligence, or enhancing what we already have in an artificial manner. This is naturally a very short list, and should not be taken as definitive. The point of this article is to expand our imaginations of what could count as artificial intelligence, after all.
Whole Brain Emulation
We know for a fact that humans are pretty intelligent. Yes, we’re quite good at blowing ourselves up and making life difficult for ourselves, but we’re still head and shoulders above any other known form of life as far as civilisation and technology go. We owe a great deal of our success to the human brain: 3 lbs of unrivalled organic thinking power.
Even the average human brain’s thinking power outstrips any computer solution we have at the moment, by virtue of its flexibility and ability to handle uncertainty. While an individual algorithm can potentially outperform humans on a specific problem, it will become useless on a different problem. Meanwhile the human will at least be able to give it an honest shot, draw upon existing knowledge, and make some inferences about the task at hand.
As the old adage goes: don’t reinvent the wheel. Since brains can work as well as they do, why not use them as the basis for new forms of artificial intelligence?
The idea is straightforward to summarise: use computers to replicate brains. Using our understanding of neuroscience to program the mechanics, imaging technology to fully map brain structures, and hardware capabilities to make a sufficiently powerful computer, we replicate the neural structure of the human brain right in a computer shell. Even if you don’t make an exact copy of a ghost in a machine, you could plausibly abstract the brain’s processes well enough that it could still be let loose on a range of problems and approach them as a human would. All that general intelligence in an artificial body, ready to be packaged as needed.
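The basic building block being replicated can be sketched in a few lines. This is the classic artificial neuron abstraction (a weighted sum passed through an activation function), with weights I’ve made up for illustration; an actual emulation would need billions of these, wired up according to a scanned brain map, and far more biological detail besides.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weigh the incoming signals,
    add a bias, and squash the total through a sigmoid so the
    output lands between 0 and 1 (roughly, a firing strength)."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Two incoming signals, with made-up connection strengths.
firing = neuron([1.0, 0.5], [0.6, -0.4], 0.1)
print(firing)  # a value between 0 and 1
```

A brain emulation would, in effect, be this idea scaled up by eleven orders of magnitude, with the weights and wiring read off real neural tissue rather than invented.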
The obstacles to this method lie in both hardware and design. Simply put, we need better imaging technology and more powerful hardware to pull this off. The brain contains an estimated 86 billion neurons, and although we have the technology to see its individual components clearly, mapping the whole thing would be a very complex job indeed, let alone replicating it in suitable software.
However, despite the challenges, whole brain emulation is considered one of the more viable routes to artificial intelligence. We already have proof that brains work, so we just need to get good enough at copying them to pull it off.
Brain Enhancement
Have you seen the movie Lucy? To summarise, Scarlett Johansson winds up with a leaking bag full of blue pop rocks in her stomach that give her superpowers by unlocking the 90% of her brain that humans don’t use (note: this myth is a complete load of crap!). There’s also a very tacked-on plotline about South Korean drug dealers whom we’re somehow supposed to consider a dramatic threat once Johansson becomes omnipotent, but that’s beside the point.
Although the movie is a very daft take on the subject, it does (loosely) illustrate a different form of artificial intelligence many modern views overlook. Rather than trying to build new forms of intelligence externally, or replicate what we already have, why not just improve on what we’ve got? Evolution did a great job of putting us on top as far as thinking goes, but who’s to say that we can’t nudge it along a little further with the right technology?
Brain enhancement would aim to extend our own cognitive faculties by a few potential means. Computer-brain interfaces, performance-enhancing brain drugs, and genetic engineering have been suggested, but each of these comes with its own problems. Computer-brain interfaces have a myriad of medical factors at play, are not simple by any means, and are often much less practical than simply building better computers outside our heads. Performance-enhancing drugs do exist at the moment, but their efficacy is debatable, and it seems unlikely that we’ll double our thinking power with a single pill. And while genetic engineering could potentially create a generation of super babies, we get into a very murky ethical debate at that point.
Though these routes have their issues, they are still all potential avenues to artificially enhancing our intelligence. Brain-computer interfaces have been used to improve rats’ performance on lever-pressing tasks, pharmaceutical enhancements have been studied, and as the technology for gene sequencing and genetic engineering becomes more affordable and available, it may (strong emphasis) only be a matter of time before we ‘unlock the smart genes’.
Networked Intelligence
Finally, rather than rely on just one intelligence thinking powerfully enough, why not combine enough good ones to achieve the same result?
We already have a form of networked intelligence that humans use every day: I’m communicating with you on it right now. With the internet, we’re free to share our ideas with people all over the planet, come together to collaborate on projects, and access an endless stream of information. We just have to navigate around all the cat videos, dodgy social media and clickbait in the process.
The biggest barriers to networked intelligence come from efficiency. We lie, produce misinformation, and get things wrong. We waste time getting to our point with a lot of faff, and reference movies that barely relate to our topic. We put cat pictures in articles to make sure they’re read the way we want. We keep adding to self-deprecating run-on paragraphs long after people have understood what we’re on about.
The good news is that we can overcome these limitations. Steps to validate information, reduce bias, and otherwise enhance communication through networks would make this kind of intelligence more powerful. Other steps, like cutting through bureaucratic hurdles by ending power games and avoiding pointless meetings, offer further routes to more effective networked intelligence. Practically speaking, it all comes down to communicating and operating more efficiently. And really, doesn’t that just sound like something you’d like to have in general?
If you’d like to learn more about these different forms of artificial intelligence, my primary source for these summaries has been Nick Bostrom’s Superintelligence, where he discusses the topics in far more detail. Alternatively, your local search engine can provide you with many informative articles on the subjects.
The key thing to remember when talking about artificial intelligence is that the things that first come to mind, the supercomputers and androids, are arguably just types of artificial intelligence. Until we can agree on a uniform definition for the term (good luck), the perception that AI is entirely robots and smart programs is an understandable but quite likely incorrect interpretation. Yes, it’s where the bulk of investment and news stories are centred at the moment, but will it always be that way? We’ve discussed a number of alternative forms of artificial intelligence, and technology has a tendency to evolve in ways that we can sometimes predict very well, and sometimes not at all. What will AI look like in five, ten, or fifty years? Only time will tell.