Graham Vyse: How do you see ChatGPT in the history of artificial intelligence as a whole?
Sarah Myers West: Artificial intelligence has been around as a field for almost 80 years now, but its meaning has changed a lot along the way.
In its early days, AI focused on what we’d call “expert systems”—technologies that would replicate human intelligence in certain ways. Now, what we refer to as AI is very different—largely, an array of data-centric technologies that, in order to work effectively, rely on a couple of things that didn’t really exist before.
The first is massive amounts of data.
This was enabled by the internet boom of the 2010s, when tech companies developed the capacity to capture and leverage data production on a huge scale—that is, to build systems that could look for patterns in extremely large data sets. In this sense, when we talk about AI today, it’s essentially what people were talking about as big data starting in the 1990s.
The second thing these systems rely on is massive amounts of computational power to process all this data.
Overall, what this means is that AI, as a field, has become increasingly dependent on the resources of a small number of big tech companies that have built or acquired these two things: huge data sets and huge computational power.
What it doesn’t really mean, though, is any close replication of human intelligence. So although it’s very effective at a small subset of tasks, what we refer to as AI is very different from what humans are able to do.
Vyse: What’s new about ChatGPT in this history, then?