Thoughts on the future and life advice from the world’s laureates.
The future of research, according to people who already shaped its past.
Over the weekend, I found myself right in the heart of the World Laureates Association gathering in Shanghai. The conference center was filled with some of the most celebrated minds in science: Nobel, Turing, and Breakthrough Prize winners, you name it. I listened to their talks, asked questions, and broke bread (well, bao buns) with them, hunting for the stories you can’t find on Wikipedia. And out of that slightly chaotic bundle of conversations, a few insights started to stand out about where the real challenges and opportunities in science are right now. Plus, of course, a bit of life advice sprinkled in.
The State of Curiosity
There were multiple parallel sessions going on, and my attention was drawn to the one I came to call “academic speed dating”: five-minute “flash pitches” by “young scientists” (quotation marks because they were all associate professors at top universities), followed by rapid-fire questioning from laureates. It was chaotic in the best possible way. Physicists interrogated bioengineers, chemists poked at astrophysicists, and somehow it all made sense. A reminder that innovation often happens in collisions, not committees. The talks themselves felt like previews of a near future already in beta: a biodegradable pacemaker smaller than a rice grain, light-powered enzymes disassembling plastics at the molecular level, graphene nanofluidics that probably defy intuition but definitely got everyone nodding wisely.
After the “speed dating”, all the laureates at the session held a panel discussion. “Blue-sky research” was the quiet hero of the event. I was not familiar with the term they all advocated for so intensely; it’s research with no immediate application, just audacity. The Chan Zuckerberg Initiative seems to be betting big on it, and the laureates were eager to follow.
And here’s another little gem I hadn’t really considered before: intellectual property. It turns out that at places like Stanford your IP is fully yours, which was something of a revelation to me. I hadn’t thought much about who owns the ideas you come up with in academia until this conversation. It made me realize that the intellectual property landscape is a bigger deal than I gave it credit for, and it’s something I’ll be paying a lot more attention to from now on.
The Machines and the Minds
My favorite track was of course the AI and supercomputers one. It ranged from the scientific foundations of quantum computing to thinking ahead about the risks and promises of how the world might change. Laureates have a way of slipping existential bombs into casual talk. One suggested that universities might lose relevance within five years, as AI outpaces academic infrastructure and learning becomes distributed, automated, and decoupled from institutional calendars. He also pointed out that we’re publishing way too many papers, and that the flood of publications can give the public a really skewed idea of what scientists actually do. I agree with this one; sometimes the publications I see belong to Substack more than to a journal. So in a world where a growing number of papers, especially the non-experimental ones, are written by LLMs, reviewed by LLMs, and submitted by humans who pretend this is not the case, can we honestly say each of them is adding to our understanding of the world? I am not arguing against using AI, quite the contrary, but I do think a new benchmarking system for “scientific novelty and applicability” is needed.
That thought carried into a discussion about AI’s own limitations. Large models have already consumed most written text, so their future depends on learning efficiency rather than volume. Energy costs, adaptability, and self-supervised learning came up repeatedly, plus the question of whether AGI might appear faster than regulation can spell it out.
Meanwhile, the physicists turned the conversation toward supercomputers and the arms race of computation. The unit of choice for supercomputing power is the exaflop: 10^18 calculations per second. If every person on Earth did one calculation every second, it would take roughly four years to do what an exascale computer can do in a single second (a quick sanity check of that arithmetic is sketched below). What can you actually do with that kind of power? Quite a lot: simulating complex climate models, running highly detailed genomic analyses, designing new materials at the atomic level, or modeling the human brain in ways we couldn’t before.

The discussions highlighted four key directions. First, there’s quantum metrology (measurement, in plainer terms), which leverages the delicate properties of quantum states to detect tiny effects with a precision that classical methods just can’t match. Then there’s quantum communication, which is all about securely sharing quantum bits between partners using keys that can’t be eavesdropped on. We also touched on quantum simulation, where complex materials or exotic quantum phases are mimicked with systems of artificial atoms. And finally, of course, there’s quantum computing itself, harnessing superposition and entanglement to solve certain problems that are simply out of reach for classical machines. There’s still plenty of research ahead before these technologies are fully realized, but those are the big directions shaping the field right now.
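For the skeptics (myself included), here is a minimal back-of-the-envelope check of that exaflop comparison in Python. The population figure is my own round assumption, so treat the result as an order-of-magnitude sanity check rather than a precise number.

```python
# Back-of-the-envelope check of the exaflop comparison.
# Assumed figures: ~8 billion people, one calculation per person per second.

EXAFLOP = 1e18              # calculations per second for an exascale machine
WORLD_POPULATION = 8e9      # rough headcount, an assumption on my part
SECONDS_PER_YEAR = 365.25 * 24 * 3600

seconds_needed = EXAFLOP / WORLD_POPULATION        # ~1.25e8 seconds
years_needed = seconds_needed / SECONDS_PER_YEAR   # ~4 years

print(f"{seconds_needed:.2e} seconds, i.e. about {years_needed:.1f} years")
```

Running it lands at roughly four years, which is exactly the kind of number that sounds made up until you do the division yourself.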
So, talking about the top supercomputers, the current top five systems are a pretty impressive lineup. You’ve got El Capitan, Aurora, and Eagle, all American systems. Then there’s Jupiter, which is German, and finally one from Italy, which I did not expect. There’s also one up in Finland. But what really caught my attention was that China was not on the leaderboard; it turns out that China has stopped submitting performance data to the TOP500, the global ranking that tracks the fastest supercomputers. So while we have the sense that they’re likely outperforming almost everyone else at scale, we just don’t have the actual numbers to confirm it.
The Human Element
After a few hours of exaflops, quantum roadmaps, and politely phrased existential dread, the conversations drifted back to a simpler variable: attention. Several laureates admitted their most durable work began in their twenties or thirties, because reputation hadn’t yet crowded out curiosity and there was still enough ignorance to try the impolite experiments. The warning was implicit (protect your attention before committees rent it by the hour), as was the reassurance (early sparks survive if you feed them consistently, not theatrically).
I also had a heart-to-heart conversation with one of the laureates about the quality of human relationships. I was shocked when he mentioned working on your shadow side, because I thought it was just a gen-alpha TikTok trend, but it turns out it is deeply rooted in Jungian psychology. He fiercely claimed that we are unable to be fully at peace, and will always be ruled by our emotions, if we don’t identify, reason through, and see the impact of our childhood traumas. A discount code for BetterHelp, anyone?
The most unexpectedly personal moment, however, came when I introduced myself as Polish. People genuinely praised my country’s scientific and democratic transformation in recent years. Growing up, I was told I had two obstacles: being a woman and being from Poland. Sitting there among laureates who saw neither as a disqualifier, I realized one of those barriers had quietly dissolved.
If the weekend had a moral, it was this: the tools (AI, quantum, exascale) are getting sharper, but the real differentiator remains the same. It’s the willingness to ask naïve questions in sophisticated rooms.



