Reassessing the Scientific Method

2026-02-19 | www.santafe.edu


Choosing experiments randomly can help scientists develop better theories, a new model reveals. (image: Edson de la O/SFI)

The race to develop a virtual scientist — an AI creation that conducts every stage of research, from idea to publication — has consumed researchers, start-up founders, and tech juggernauts alike.

It has also illuminated fundamental philosophical questions about the process of doing science. Is the scientific method really the best approach to learning about the world?

A new paper in Collective Intelligence applies the scientific method to itself, finding that some common strategies scientists consider gold standards for designing experiments perform worse than random choice. In other words, random exploration may produce better theories than carefully planned experiments.

“These results contradict some common intuitions about the scientific method,” says lead author and SFI Complexity Postdoctoral Fellow Marina Dubova.

“The traditional ways we teach people to do experiments seem very premeditated: let’s confirm what we know, let’s try to falsify a dominant theory, let’s resolve a disagreement between two theories. But weirdly enough, we found that such carefully motivated experiments don’t seem to guide scientists toward useful theories as well as randomly chosen ones,” Dubova explains.

Dubova, a cognitive scientist, collaborated with former SFI Postdoctoral Fellow Arseny Moskvichev and Kevin Zollman of Carnegie Mellon University on the paper.

To determine what makes experiments succeed, the authors built an agent-based model. This technique from complexity science enabled them to represent human scientists as individual actors, or “agents,” in a computer program. Next, the authors created a statistical “ground truth” for the scientist-agents to explore. (Consider a fictional alien species: the ground truth would be all the alien characteristics that scientists might discover, like height, weight, brain size, and behavior in response to testing.)

Within the computer program, the scientist-agents conducted a series of experiments, forming and refining theories based on the results. They also shared their observations and theories with other agents, simulating how real-life scientists learn socially through publications or conferences.

The model revealed that the most informative, predictive scientific results emerged when scientist-agents randomly collected data — not when they selected experiments to confirm, falsify, or resolve disagreement between theories.
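The paper's actual model is far richer, but the core mechanism can be illustrated with a toy sketch. In the hypothetical setup below (every name and the quadratic "world" are invented for illustration, not taken from the study), an agent holds a linear theory of a nonlinear ground truth. A "random" agent samples experiments uniformly; a "confirm" agent keeps re-testing near the observation its theory already explains best. The confirming agent's theory fits its own narrow data well while missing the broader truth:

```python
import random

def ground_truth(x):
    # Hypothetical nonlinear "world" the agent cannot see directly
    return x * x

def fit_line(xs, ys):
    # Ordinary least-squares line fit; returns (slope, intercept)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx if sxx else 0.0
    return slope, my - slope * mx

def mse(theory, pts):
    # Mean squared error of a linear theory on a set of (x, y) points
    slope, icpt = theory
    return sum((slope * x + icpt - y) ** 2 for x, y in pts) / len(pts)

def run(strategy, n_experiments=200, seed=0):
    rng = random.Random(seed)
    xs = [rng.uniform(-1, 1) for _ in range(5)]  # small pilot sample
    ys = [ground_truth(x) for x in xs]
    for _ in range(n_experiments):
        theory = fit_line(xs, ys)
        if strategy == "random":
            x = rng.uniform(-1, 1)  # random exploration of the domain
        else:
            # "confirm": re-test near the observation the theory fits best,
            # which narrows the collected data around one region
            best = min(zip(xs, ys),
                       key=lambda p: (theory[0] * p[0] + theory[1] - p[1]) ** 2)
            x = best[0] + rng.gauss(0, 0.05)
        xs.append(x)
        ys.append(ground_truth(x))
    theory = fit_line(xs, ys)
    own_error = mse(theory, list(zip(xs, ys)))  # error on the agent's own data
    grid = [(u / 50.0, ground_truth(u / 50.0)) for u in range(-50, 51)]
    true_error = mse(theory, grid)              # error against the ground truth
    return own_error, true_error
```

On this toy world, `run("confirm")` tends to report a much lower error on its own data than against the ground truth (the "illusion of progress"), while `run("random")` ends up with the lower true error, echoing the paper's qualitative finding.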

One of the most striking findings was that agents selecting theory-driven experiments grew convinced they were succeeding even when they weren't. When the scientist-agents communicated with each other, they produced plausible, mathematically represented accounts of the ground truth and rated their theories highly, never discovering that those accounts were often wrong.

“The agents were able to develop an illusion of progress. Using theory-motivated experimentation strategies, agents collected a narrower set of data, which made it less likely for them to encounter observations that challenged their theories,” Dubova says.

It’s too early for human scientists to ditch carefully designed experiments for random experimental roulette, Dubova is quick to point out. However, all scientists would do well to check their epistemic assumptions, she says: “There is a vicious cycle you can enter, where you collect data using what you think is a good strategy and grow confident in your success, but actually, you’re not learning much about the world.”

Read the study "Against theory-motivated experimentation: Can random experimental choice lead to better theories?" in Collective Intelligence (February 16, 2026). DOI: 10.1177/26339137261421577
