There is nothing futuristic about the age of artificial intelligence; it has already begun. In 2024, AI won two Nobel Prizes, one given to John Hopfield and Geoffrey Hinton for laying its foundations and the other given to Demis Hassabis, John Jumper and David Baker for applying it to biology. The five largest companies in the world are all technology companies. Nvidia, the company that designs AI chips, has multiplied its value 27 times over the past five years. Yet ChatGPT is only two years old, many AI applications are still prototypes, and millions of people are already using them. That is where the monster might rear its head.
The current debate about AI is dominated by warnings about its very real risks, such as its impact on employment. But that leaves aside the possible good that might come of it. While declaring herself a pessimist, philosopher Carissa Véliz says: “The best of all possible worlds, however improbable it may seem to me, is one where AI automates the most boring tasks, efficiently, accurately and cheaply, and where we humans use our free time for rewarding tasks, such as caring for our loved ones, spending time together, resting, reading, or creating.”
Looking at the possible positives is an exercise in imagination. As Dario Amodei, CEO of AI development company Anthropic, says in a recent essay, “It’s critical to have a genuinely inspiring vision of the future, not just a plan to put out fires.” Good futures are not inevitable, but we can envision them by peering into labs, companies and homes around the world. What could AI do for us?
It will multiply intelligence
Imagine a classroom where each child has an artificial tutor. In this classroom, my daughter will be able to talk to her books. If she doesn’t understand a math problem, AI will be able to explain it to her. And if she’s curious about something that’s not in her book, she’ll get answers. “Kids have a stage where they ask why about everything,” Véliz says. “I’ve seen kids talk to AI about why the Sun is red at sunset or why the clouds are white. It’s great because it has infinite patience, as long as the system is accurate and secure.” The potential is obvious: AI tutors will be cheap, they will be everywhere, and they will be available 24/7. The first classroom experiments have proven positive.
Imagine visiting the doctor and always getting a second opinion. In a recent experiment at Stanford, ChatGPT proved itself capable of making better diagnoses than human doctors. Given a report, a history and tests, it flagged up possible explanations. Large language models (LLMs) could be an “extension of the doctor.”
AI’s potential lies in multiplying intelligence — at least some form of intelligence, limited and different from ours, but real. A few days ago, OpenAI’s o1 model took the Korean university entrance exam and scored in the top 4% of the country. That is what makes this moment unique: in the past, technology has multiplied energy and information, but never intelligence, which humans have had a monopoly on until now. Moreover, there is no need to make wild projections, because even modest intelligence can be revolutionary if it is omnipresent.
Imagine, for example, that each young researcher had a laboratory of AI agents to manage. These could review scientific literature, reproduce dubious results and help collect data. This last task was successfully tested by a Google Research group: they passed papers to Gemini, their large language model, and asked it to look up a specific piece of data and copy it into a table. It was an easy task that could be done by a student, so why all the fuss? Because they handed it 200,000 papers, went to lunch, and when they came back the job was done.
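To picture the mechanics, here is a minimal sketch of that kind of batch extraction, written in Python. Everything in it is illustrative: the `ask_llm` helper is a hypothetical stand-in for a call to a language model such as Gemini, and this is not the code the Google Research team actually used.

```python
import csv

def ask_llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to a language model (e.g. Gemini)
    and return its text reply. Wire this to whatever LLM provider you use."""
    raise NotImplementedError

def extract_value(paper_text: str, field: str) -> str:
    # Ask the model for one specific piece of data from one paper.
    prompt = (
        f"From the following paper, report only the value of '{field}'. "
        f"If it is not stated, answer 'not found'.\n\n{paper_text}"
    )
    return ask_llm(prompt).strip()

def build_table(papers: dict[str, str], field: str, out_path: str) -> None:
    # Repeat the same simple question for every paper and collect the answers in a table.
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["paper_id", field])
        for paper_id, text in papers.items():
            writer.writerow([paper_id, extract_value(text, field)])
```

The code is trivial; the point is the scale. Loop it over 200,000 papers and the tedious part of a literature review runs while you are at lunch.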
We can make a lot of progress with limited intelligences if there are millions of them. For example, most programmers already use AI co-pilots like Cursor, which read your code, autocomplete it and fix its errors, or simply write code on their own. Will we have similar agents for other tasks? AI could buy you a train ticket, check your calendar or set up your microphone. As Véliz says, one of the big barriers to our productivity is email: “I have a theory that one of the reasons we don’t see philosophers as being as brilliant as we used to is because, instead of having time to think about the important issues, people are writing emails,” she says.
Domestic AI could be just the tip of the iceberg. If Amodei’s predictions come true, we will have virtual experts in the role of lawyers, doctors, accountants, nutritionists and even therapists. It’s inevitable that such a prospect makes us feel giddy: are we going to put ourselves in the hands of artifacts? But then I think back to my first year as a father, and I realize that we are already often misinformed by things on the internet, including influencer accounts, as well as by unscientific advice passed down through the generations.
Amodei describes further possibilities. For example, he thinks AI could become “virtual biologists,” capable of performing all of a biologist’s tasks and accelerating the current pace of discovery in biology or biotechnology by a factor of 10. Equally optimistic is Jaime Sevilla, director of Epoch AI, a non-profit organization that investigates the trajectory of AI: “We have identified a recipe for more capable and general AI,” he says. “Soon we expect it to be able to solve advanced math problems and perform programming projects that take hours. Why do we expect these incremental advances? Between 2012 and 2024, we have moved to training models with 100 million times more computation. By 2030, I expect 10,000 times larger models to be trained, a leap similar to that between GPT-2 and GPT-4.”
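Sevilla’s figures can be sanity-checked with back-of-the-envelope arithmetic (the calculation below is mine, not his): both numbers imply roughly the same annual growth in training compute, which makes the 2030 projection an extrapolation of an existing trend rather than a new bet.

```python
# Growth of 100 million times over the 12 years from 2012 to 2024...
per_year_past = 100_000_000 ** (1 / 12)   # about 4.6x per year

# ...and 10,000 times over the 6 years from 2024 to 2030.
per_year_future = 10_000 ** (1 / 6)       # about 4.6x per year

print(f"2012-2024: roughly {per_year_past:.1f}x more compute each year")
print(f"2024-2030: roughly {per_year_future:.1f}x more compute each year")
```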
It goes beyond language
One reason for the enormous enthusiasm for AI is its omnipresence. It doesn’t matter who you talk to, whether they are bankers, journalists, designers, programmers or soccer analysts. They are all using AI. Part of the explanation is that LLMs are useful to all of us because language is universal. But there is another explanation that is also far-reaching: we are dealing with a technology — an algorithmic architecture in the form of a gigantic network that learns from massive data — that has proven capable of deciphering the patterns of very diverse phenomena. Just as ChatGPT handles text, other AIs can navigate proteins, images, genomes, and meteorological data.
An AI model from the European Centre for Medium-Range Weather Forecasts (ECMWF) predicted the path of Hurricane Milton with a margin of error of just 12 kilometers, outperforming established models that have been developed over decades. The center is already training a larger model “to increase its resolution.”
The most celebrated example is the AlphaFold AI, which won Demis Hassabis and John Jumper the Nobel Prize in Chemistry for predicting the structure of 200 million proteins — a task that previously took years and $100,000 for each protein. The database is used by millions of people and promises to accelerate our ability to develop drugs.
Regarding genetics, Evo, another AI model trained on millions of bacterial genomes, predicts the effects of DNA changes and designs functional systems for CRISPR, the gene-editing technique. The biology of organisms is absurdly complex and will resist being deciphered, but any predictive breakthrough can open up mind-blowing avenues.
We’re talking about tools that already work. An MIT researcher studied the adoption of an AI tool in a large R&D lab. The result: scientists with access to AI discovered 44% more materials and filed 39% more patents.
These AI models will not be noticed in the same way that chatbots are, because we don’t interact with them, but their impact will be profound: they are modeling complex systems. We are no longer just talking about processing data or automating tasks, but about deciphering deep patterns in fields as diverse as biology, meteorology, and chemistry. From proteins that take on three-dimensional shapes to genetic mutations, hurricane trajectories, or words that form thoughts, these networks absorb massive data and find intricate patterns. As a result, AI is on track to predict phenomena that have so far escaped science.
AI as a mirror
Finally, AI could help us understand ourselves. It is the first artifact capable of replicating some of the skills specific to humans, such as reasoning and handling language. It is also relevant that AI is the product of machine learning, and that its capabilities have emerged from this autonomous process, without us having designed each step.
Professor Ethan Mollick gives an example in his book Co-Intelligence: Living and Working with AI. Regarding language models like ChatGPT, he writes that the mind-boggling thing is that no one is sure why these systems that only predict tokens [fragments of words] result in “AI with seemingly extraordinary abilities,” capable of understanding commands and solving creative problems. “This might suggest that language and the thought patterns behind it are simpler and more mechanical than we thought, and that LLMs have discovered some deep, hidden truths about them, but the answers are still unclear.”
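To make “only predict tokens” concrete, here is a schematic sketch of the generation loop at the heart of these models. The `next_token_distribution` function is a hypothetical stand-in for the trained neural network; everything the model has learned sits behind it.

```python
def next_token_distribution(context: list[str]) -> dict[str, float]:
    """Hypothetical stand-in for the trained network: given the tokens so far,
    return a probability for every candidate next token."""
    raise NotImplementedError

def generate(prompt_tokens: list[str], max_new_tokens: int) -> list[str]:
    # Generation is just this loop: predict one token, append it, repeat.
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = next_token_distribution(tokens)
        tokens.append(max(probs, key=probs.get))  # pick the likeliest token (greedy decoding)
    return tokens
```

Everything the chatbot appears to do, from answering to reasoning to joking, emerges from repeating that one-step prediction, which is exactly what Mollick finds mind-boggling.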
We can learn from generative AI because it has surprised us. It is excellent at writing, analyzing, programming, and chatting, which are all “intensely human” tasks, as Mollick puts it, and yet it struggles with tasks at which machines traditionally excelled, such as repeating a process or making calculations. “It doesn’t act like traditional software; it acts like a human being.”
That’s why Mollick encourages us, rather controversially, to interact with AI as if it were a person. “I’m not suggesting that these AI systems are conscious like humans, or that they’re ever going to be conscious like humans. Instead, what I’m proposing is a pragmatic approach: treat AI as if it were human because, in many ways, it behaves like one.”
Many experts warn, on reasonable grounds, of the dangers of careless anthropomorphizing, as Mollick himself acknowledges. But it is at the very least interesting that the most effective way to use an algorithm is to interact with it by pretending it is a person.
Why aren’t we more optimistic?
I have deliberately avoided two issues. The first is speculation about AI far more powerful than today’s: some specialists bet that a general AI will soon surpass us in everything, while others believe its development will hit a wall. The second is the social implications of AI, which I have not addressed here, although there is room for optimism on that front too.
As I came to the end of this article, I was thinking: why is it so difficult to conjure up a better future? The difficulty lies not only in the word “future,” which is always uncertain, but also in the word “better.” And it is hard to make visible the progress that has already occurred. As Kevin Kelly, co-founder of Wired magazine and veteran technologist, said: “Progress is mostly what doesn’t happen. It’s a 92-year-old who didn’t die today, a child who wasn’t mugged on the way to school, a 12-year-old girl who wasn’t married to a man of 30.”
To achieve more of these invisible victories, technology is essential: it separates the future from the past. That is why it is impossible to think seriously about improving the world without reflecting on new technology such as AI. It is not enough to avoid its dangers and hope that good uses will come on their own, as if by an invisible hand. If you want to shape the future, you have to think deeply, with genuine curiosity and, yes, also with optimism about the technology we want in our lives.