Are These the Last Nobel Prizes Awarded to Humans?
The 2025 Nobel Prize announcements are about to drop in a few hours, and I can’t stop wondering whether we are witnessing the end of an era.
For over a century, these awards have celebrated human ingenuity at its peak, but what if the next breakthroughs come not from flesh-and-blood researchers but from silicon-based minds that outthink us in every domain? The Nobel Prizes, established in 1901, honor discoveries or inventions that advance humanity. Traditionally, they’ve gone to individuals or small teams whose work stands the test of time, often years or decades after the fact. Yet in 2024, the Chemistry Prize tipped its hat to artificial intelligence for the first time: David Baker was recognized for computational protein design, alongside Demis Hassabis and John Jumper for developing AlphaFold, the AI system that revolutionized protein structure prediction. AlphaFold didn’t just solve a 50-year-old grand challenge in biology; it arguably achieved superintelligence in that domain, surpassing every human expert on Earth at predicting protein structures with unprecedented accuracy. If superintelligence means an AI outperforming the best humans in a specific discipline, we’ve already crossed that threshold. And once crossed, there’s no uncrossing it.
Are Nobel Prizes awarded for ideas or execution?
Even though the Nobel Prize can be shared, many more people contribute to a discovery, starting with the grad students doing the “dirty work”, through postdocs, and ending with senior researchers refining the ideas. The prize usually goes to the principal investigator, who proposes the idea and has executive power over how the experimental execution proceeds. That makes sense, because (hopefully) the PI sees the bigger picture and the vision, while a grad student is complaining about “pointless pipetting” from morning to night. Perhaps, though, the prize is awarded not for supervising the execution but for the relentless belief in the potential of the idea. If an AI were prompted to execute on a scientific problem and saw no promising results on the first few tries, would it abandon the problem in the name of efficiency, or keep tweaking and twisting the workflow until it works (while perhaps opening doors to other research along the way)? A recent survey paper, “From AI for Science to Agentic Science: A Survey on Autonomous Scientific Discovery”, explicitly suggests a benchmark: the “Nobel-Turing Test”, where an AI system’s discoveries must be indistinguishable in ambition and quality from Nobel-worthy work.
Discovery vs deployment
Alfred Nobel specified that the prizes should go to those who have conferred “the greatest benefit to humankind”. That benefit is hard to measure, in part because deployment is one of its greatest barriers: there are discoveries with transformative potential, but if the work isn’t done at one of the world’s top 20 universities or companies, the barrier to deployment remains too high to generate measurable impact. So if an AI made a groundbreaking discovery and thereby validated its hypothesis, would it bother going to conferences, publishing in mass media, and handling the business-development (BD) operations needed to deploy it, or would it just mark the project “fully executed workflow”? Answering my own question: perhaps it would delegate the task to a BD team of agents. But the more I think about it, the more baffling and improbable it seems that a superintelligent AI would be preoccupied with benefiting humanity. On the other hand, we have thousands of organizations preoccupied with benefiting cats, dogs, and other animals, which sets a precedent for “AI caring for cute humans”.
Autonomous AI scientists
We’re engineering embodied AI and AI-controlled robotic workflows that lay the foundation for autonomous scientists: systems capable of hypothesizing, experimenting, iterating, and publishing without fatigue or self-doubt. A pivotal work that first opened my eyes to this idea was “Empowering Biomedical Discovery with AI Agents” by Shanghua Gao and colleagues, supervised by Marinka Zitnik (Cell, 2024). Their paper proposed a taxonomy of autonomy levels in biomedical research, ranging from narrow, task-specific tools to agents capable of self-assessment and iterative discovery. At its highest level, they envisioned unprompted AI systems that could generate novel hypotheses, design and execute experiments, interpret results, and ultimately function as scientists in their own right.
A year later, Jiaqi Wei and colleagues expanded this vision in the paper I mentioned above, “From AI for Science to Agentic Science” (2025). Gao and Zitnik described autonomy as a linear continuum from narrow tools (Level 1) to self-assessing agents (Level 3). Wei et al. preserve this ascending logic but formalize it into four operational stages: computational oracles, automated research assistants, autonomous scientific partners, and generative architects, the last defined as entities capable of creating new scientific paradigms. Critically, Wei et al. convert Gao and Zitnik’s abstract taxonomy into an engineering framework grounded in five core capacities: (1) reasoning and planning, (2) tool use and experimental control, (3) long-term memory, (4) collaborative communication among agents, and (5) self-optimization through reflection and evolution. Their Agentic Science Workflow operationalizes Gao and Zitnik’s final stage of unprompted discovery, turning what was once an aspirational end point into a structured, reproducible method for AI-driven science.
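To make those five capacities concrete, here is a minimal Python sketch of what such an agent loop could look like. This is my own toy illustration, not code from either paper; every name in it (AgenticScientist, discover, the 0.7 threshold) is hypothetical, and the “experiment” is a stand-in for a real instrument or simulation call.

```python
from dataclasses import dataclass, field

# Illustrative only: names and numbers here are my own inventions, not the
# framework from Wei et al. Each method loosely maps to one of the paper's
# five core capacities.

@dataclass
class AgenticScientist:
    memory: list = field(default_factory=list)  # (3) long-term memory

    def plan(self, goal):  # (1) reasoning and planning
        return [f"hypothesize about {goal}", f"design an experiment for {goal}"]

    def run_experiment(self, step):  # (2) tool use and experimental control
        # Stand-in for a real instrument call; "results" improve as the
        # agent accumulates experience in memory.
        return 0.4 + 0.1 * len(self.memory)

    def communicate(self, message, peers):  # (4) collaborative communication
        for peer in peers:
            peer.memory.append(message)

    def reflect(self, result, threshold=0.7):  # (5) self-optimization
        self.memory.append(f"result={result:.2f}")
        return result < threshold  # True means: keep iterating, don't abandon


def discover(agent, goal, max_iters=5):
    """Hypothesize -> experiment -> reflect, iterating until results look promising."""
    for step in agent.plan(goal):
        for _ in range(max_iters):
            result = agent.run_experiment(step)
            if not agent.reflect(result):
                break  # promising enough; move on to the next planned step


alice, bob = AgenticScientist(), AgenticScientist()
discover(alice, "protein folding")
alice.communicate("folding result looks promising", peers=[bob])
print(alice.memory, bob.memory)
```

Note how the reflect step encodes exactly the question raised earlier: whether the agent abandons an unpromising line of work or keeps iterating on it.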
How Public Perception Will Shift
The moment an AI receives a Nobel Prize will be less about the medal and more about the mythology of genius. The Nobel stage is where society decides who embodies progress, and the first AI recognition will mark the day collective imagination expands to include non-human minds. Initially, there will be discomfort: can something without ambition or pride truly deserve a prize meant for perseverance and vision? Yet as the benefits of these discoveries touch daily life (better drugs, longer lives, cleaner materials), fascination will replace fear. Awarding a Nobel to the AI, not its creator, could also heavily influence public perception of how advanced AI is. Statistics suggest that 500-600 million people engage with AI daily (that is, 6-7.3% of the world’s population), which is actually surprisingly low. I’d assume that such a low figure might be due not only to relatively limited deployment, but also to fear. So, assuming that people see Nobel Prize winners as “good people”, or at least as beneficial role models, could awarding a Nobel Prize to an AI transfer these warm feelings onto the neural network, and thereby increase usage of other (likely less advanced than the “laureate”) AI tools?
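As a quick sanity check on that percentage, assuming a world population of roughly 8.2 billion (itself an approximation):

```python
# Back-of-envelope check: daily AI users as a share of world population.
world_population = 8.2e9  # assumption: ~8.2 billion people
for daily_users in (500e6, 600e6):
    print(f"{daily_users / 1e6:.0f}M users -> {daily_users / world_population:.1%}")
# 500M users -> 6.1%
# 600M users -> 7.3%
```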
However, depending on which AI expert you choose to listen to, we are still several years away from general superintelligence, so for now we can only be talking about awarding a Nobel Prize to human-AI symbionts, like the autonomous agents above. But if we skip forward and entertain the idea of general superintelligence, we should ask: would the AI even care about recognition of its work? Would you care that dogs acknowledge your work as transformative, when they cannot even comprehend what work is, let alone your work specifically? Would there be many superintelligent AI agents that would form a society? This question feels like a silly effort to fit an intelligence beyond comprehension into familiar human structures. By this point in this Substack article, my chain of logic has therefore evolved into: “Are the 2025 Nobel Prizes among the last awarded to humans, and to human creators of AI, rather than to human-AI ecosystems?” Once we reach superintelligence, it will likely be the end of the world as we know it, Nobel Prizes included.
How should the Nobel committee adapt
Peter Diamandis often says on his podcast and social media that AI will not take your job, but a person who uses AI will. For the next several years, the same logic applies to scientific recognition: it won’t be AI winning the Nobel Prize; it’ll be the people who use it best. The lone-genius model doesn’t fit an era where discovery emerges from interactions between humans, algorithms, and data ecosystems. What does this mean for the Nobel committee? Perhaps instead of rewarding individuals, we reward systems or collectives: research ecosystems that integrate human and artificial agents. Perhaps new categories will emerge: one for human-AI collaboration, another for systems that autonomously advance science without direct human steering. And maybe, decades from now, the medal will be placed not in a laureate’s hand but in a digital museum of shared milestones, where every dataset, prompt, and feedback loop becomes part of a collective memory of how intelligence, in all its forms, sought to understand the world.
With that thought, let’s come back to today’s reality and prepare some popcorn for the 2025 laureate announcements. Perhaps some of the discoveries will be related to aging or cryopreservation, even if tangentially? I’ll observe, and report back if that’s the case.


