Rethinking High-School Science Fairs

2026-02-13 · asteriskmag.com

America’s earliest science fairs gave students the chance to do independent research. Today, they’re a competitive gloss on glorified internships. It’s time for a new format.

At a certain point, when enough data has come in, you have to acknowledge your experiment has failed. The competitive high school science fair machine keeps churning — one of the largest competitive fairs runs more than 350 feeder fairs and offers over $9 million in annual prizes — but science fairs have drifted a long way from their original purpose. They no longer serve elite or ordinary students well. Science fairs should form students to think as scientists, not reward them for attaching themselves, remora-like, to prestigious labs. Internships have their place in professional training and career planning, but a science fair should channel students’ competitive and exploratory energies in a more thoughtful direction.

In 2007 (my senior year), I went to two major national science fairs. I brought my double-stacked trifold poster boards to Albuquerque, New Mexico, for the International Science and Engineering Fair, and I flew down to Huntsville, Alabama, for the Junior Science and Humanities Symposium sponsored by the Department of Defense. Wherever I went, I had a story to tell about how potential targets for future prostate cancer drugs could work synergistically with rapamycin. According to all the adults, my peers and I were among the best of the best. But I didn’t believe them.

Wherever I went, I saw plenty of familiar faces. I went to public school on Long Island, and like many of my classmates, I spent the summer before senior year doing an internship in a professional lab. I went to work at Cold Spring Harbor, a lab I’d visited before for summer camps I genuinely enjoyed. As a middle schooler, I’d transfected bacteria with plasmids and made them glow. While we waited for gels to run, we’d use pipettes to joust on rolly chairs. I was grateful for the opportunity (and even more grateful for my mother, who drove me there and back throughout the summer and nearly every weekend), but I felt very far from being a scientist.

In my lab placement, I didn’t make any choices. I was plugged into an existing experiment and learned to infect cells, split them, let them grow, and lyse them to harvest their DNA and eventually run them through PCR to see which cells were missing. Which genes (if any) that we messed with caused these cells to die when exposed to rapamycin but left healthy cells standing? It’s a good question for someone to ask, but it’s not one that makes much sense if your goal is for a high schooler to develop their own ability to make sense of the world around them.

As I was traveling and presenting, I knew which of my classmates I’d left behind. Marc had spent the year surveying classmates about whether they were getting the new HPV vaccine. He was genuinely interested in the question as an aspiring doctor. He wrote his own survey instrument. He was going nowhere, and everyone knew it.

For JSHS, only two students were selected from our fair across all the categories. Small-sample cancer vaccine surveys were never going to beat wet-bench cancer research. The next year, Marc was in a lab, plugged into a project like most of the rest of us. But in the year he was planning his own experiment, he was much more of a scientist than many of his competitors. He was passing every part of the test that professor of science education William F. McComas proposed to assess a student’s depth of inquiry: 

1. Who proposed the problem?

2. Who designed the research method?

3. Who makes sense of the data?

If the student is responsible for all three tasks, there is little doubt that the activity involves high-level inquiry.

McComas was right to think that the success of a science fair should be judged by how much it asked students to reason as scientists do. There are other real goods that can come from interning or shadowing in a professional lab, but it’s a mistake to imagine it’s an introduction to deep, critical thinking. A student needs more room to stumble than joining an existing project allows. 

The way things worked

At America’s first science fairs, many of the competitors would have looked like Marc. The earliest exhibits were student-driven and didn’t need to be at the bleeding edge of a field. In the 1920s and 1930s, science educator Morris Meister founded and formalized science fairs for the students of New York City. Meister argued that the practical experiments and demonstrations of his fairs were a necessary complement to classroom instruction. Science education and science fairs should “enable our pupils to appreciate the methods of science and to use this method and the thinking procedure of science in their every-day lives,” Meister argued. 

These early student science fairs were a complement to industrial fairs where companies showed off new inventions like the telegraph and the Singer sewing machine. Many of the early student science fair divisions had an agricultural and biological focus. Students might prepare a cage of living insects, exhibit plaster casts they collected of animal tracks, or offer their attempt at “the best and most original notebook or record book” for a particular organism. (This last would not have been far from the nature study journals proposed by the 19th-century British education reformer Charlotte Mason and since embraced by modern homeschoolers.)

These projects were a long way from the carefully composed Hypothesis, Methodology, Results, Discussion, Further Study sections of our contemporary tri-boards. The students, as amateurs, would have been a long way from knowing enough to formalize a hypothesis. These fairs directed students toward the first skills every scientist must have: observation and curiosity.

For students in lower grades, Meister proposed exhibit categories that drew on their capacity for play. Elementary schoolers were encouraged to illustrate laws of physics using Erector sets or Tinkertoys. Older children were encouraged to embrace a blend of science and home economics, with prizes for “displays of homemade useful commodities, such as soaps, dyes, candles, inks … ” Science and home ec both offered ways for children to understand the material world, and let its properties inform their deliberate choices. Young students could rework a recipe due to rationing or plan out reagents for a chemical reaction. As long as the world around them was tractable, they could fiddle with it, iterate, and improve. 

Meister’s New York fairs soon began to spread nationwide. As they became more popular, the fairs were given multiple, possibly conflicting goals. They could help students make sense of a rapidly changing, further industrializing world. They could engage the struggling students who were now expected not to drop out of high school, but were seen as at risk of juvenile delinquency. And they could strengthen American science education for the purpose of improving America’s military readiness. 

This last concern came to dominate American science fairs during and after World War II. Fairs were structured with the goal of identifying the next generation of powerhouse researchers, instead of — as Meister originally hoped — forming a broad swathe of students for thoughtful citizenship. The Westinghouse Science Talent Search, established in 1942 (today called Regeneron STS, the country’s oldest and most prestigious science fair), sifted applicants from around the country down to 40 finalists. The winners were flown to Washington, D.C., where they were feted and encouraged to embark on science as national service.

“For the good of the nation, the ablest men should be trained,” M. H. Trytten, the director of the National Research Council’s Office of Scientific Personnel declared in 1945. When it came to science education, Trytten argued, “The fact that each individual so trained is thereby better off personally is secondary.” The STS competition selected finalists through a series of academic filters. Students needed to pass an aptitude test and have a panel judge their academic record as outstanding before anyone looked at their experiments. 

The STS competition still relies on a holistic package of research and academic achievement, even if it no longer administers its own aptitude tests. Today, however, the emphasis on public service has been replaced with a laser focus on college applications. While the earliest science fairs directed students to attempt to illuminate something about the world, modern competitive fairs direct the student to use science as a form of self-promotion. Students entering STS and other nationally competitive science fairs have learned that the critical hypothesis they must prove is “I’m better than my classmates.”

Writing the answer first

Science exploration driven by genuine curiosity is more open-ended than experiments that come in a box and test students on whether they get the right answer. I remember in my high school physics class we were first taught the value of Earth’s gravitational acceleration g, and then asked to perform an experiment that should reveal it.

Of course, working with cruder tools, limited patience, and air resistance, many of us didn’t wind up squarely on 9.8 m/s². One of my partners was quick to scribble out our actual observation, and she and I had a brief struggle over control of the pencil as she attempted to put in the “correct” answer. She had a better sense of the teacher’s intentions than I did. The tables that honestly reported a “wrong” result were encouraged to repeat their experiment until they got a trial that “worked.” We missed the chance to talk about how scientists reconcile noisy data. We missed the chance to run an experiment for the purpose of exploring the unknown.

Students in science fairs and adults in professional labs know the answer they’re supposed to get. A result has to be statistically significant to “count” and, just like in my physics class, students can be tempted to keep reworking their results until they get the right answer. There was no p-hacking at my research internship, but part of the education I received in a professional lab was how much the scientific process was dominated by anxiety, not curiosity. 

I had come in partway through a pre-established procedure. I genuinely admired the work being done. The team’s process for high throughput mRNA screening allowed them to test many potential drug targets in parallel. The process would surely turn up false positives, but it was the first step of sifting a haystack for needles. Eventually, some of our candidates would go on to live mouse trials. 

In the endless tedium of splitting cells, I was a little excited that this was one tiny part of the work of finding a cancer drug. It was easier for me to remain focused on the final goal because I was only in the lab for a year, and all I would get out of it personally was a science fair project. The stakes were different for the post-doc I worked under, whom I found crying in the lab one weekend. A different lab had published a paper on the mTOR pathway we were working on, one he felt anticipated some of our results and would make his work unpublishable. A year of his life was wasted. 

I was as sympathetic as I could be, but inside I was blazing with anger. It’s good news for cancer patients to have two labs independently identify a promising drug target. It shouldn’t be bad news for an individual scientist that he appears to really be onto something. I wished my supervisor could have felt proud of offering useful, corroborating data, no less valuable for coming second. I was furious that the academic system he worked in made him feel like his work was worthless. My year in the lab helped me decide against ever doing wet bench work again.

Record, evaluate, iterate

What I carried out of the lab and what enriched my life didn’t have much to do with the scientific method of “explore, wonder, hypothesize, test, repeat.” Instead, the most valuable things I learned were the virtues of reproducibility and legibility. I had very limited ability to contribute intellectually to the work I was carrying out, but I had almost unlimited potential to wreck it by mislabeling the petri dishes, forgetting to change out a pipette tip, sloppily loading the gels … or even just being lax about recording the work I did in the lab notebook that was the authoritative source of truth for the protocol. 

My internship made it clear how much work it took to do something completely consistently across many partners. Even though I wasn’t knit into the culture of the lab, I still was a little awed by the trust I was given. Long after I’d left the bench behind, I remembered how important it was not just to do something right but to do it legibly right. I haven’t smelled agar plates for almost 20 years, but I still draw on old skills every time I annotate a draft for an eventual factchecker.  

Ideally, students studying science should get to do some of the shadowing I did, to see how much slow, faithful, unpublishable work it takes to seek the truth. I was glad I did my internship; I just didn’t think it made much sense for me to take the results into competitions. At a science fair, I’d rather see students tackling their own questions, even if an adult could answer them better. A science fair should be more about giving intellectual and moral formation to the student than about pushing out the boundaries of what is known. 

Professional internships are good for students who aspire to careers in the sciences. But everyone needs a clear sense of how we seek to understand the world. Planning out a research project and then realizing you don’t have the funds to reach the sample size you need for a sufficiently powered study is a valuable education in itself. Seeing how complex, unwieldy, and expensive the scientific process is can help clarify questions like “Why has no one checked this?” or “Why don’t scientists always agree?” Realizing halfway through a project that you wish you’d set it up differently is OK. 

I’d like science fairs to be less competitive and more playful. They should give students the opportunity to lean hard into subskills of scientific literacy and informed curiosity. Fairs should be realistic that students mostly cannot execute world-class research, especially within the span of a year or two. 

So, what divisions might exist in an alternate high school science fair? What skills can students practice that will serve them well, whatever their career? What scope of work could they take on as amateurs with light mentorship, instead of being gofers in a professional lab? I’ll take the baton from Morris Meister and propose a few new divisions for regional and even national fairs of my own:

Null Results Division

Students submit papers and experiments that turned up no significant results. The hypotheses being tested should be plausible, and the student should be able to explain why their experiment was sufficiently powered to detect a relevant effect size. 
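(As a sketch of what that power explanation might look like: a standard calculation in Python with statsmodels. The effect size, alpha, and power below are illustrative choices, not contest rules.)

```python
# How many subjects per group would a student need to detect a given effect?
# A minimal sketch; the numbers are placeholders a real entrant would justify.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # smallest Cohen's d the student considers relevant
    alpha=0.05,       # significance threshold
    power=0.8,        # 80% chance of detecting the effect if it is real
)
print(f"Need ~{n_per_group:.0f} subjects per group")  # ~64
```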

Best in Class Trophy: The “Blind Alley Closure” award, a gilded figure holding a WRONG WAY sign. 

Study Proposals and Pilots Division

Students design and do initial stress tests on an experiment that goes beyond what their own resources can sustain. A student who proposes a survey might sit down with prospective members of the population to be studied, to tape them thinking through the instrument aloud and refining it accordingly. A student with an observational study might begin exploring the inter-rater reliability of their classification system. 
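(A minimal sketch of that inter-rater reliability check, using Cohen’s kappa on toy labels; the categories and data are invented for illustration.)

```python
# Do two raters agree more often than chance would predict? Cohen's kappa
# corrects raw agreement for the agreement expected by chance alone.
from sklearn.metrics import cohen_kappa_score

rater_a = ["dominant", "submissive", "neutral", "dominant", "neutral", "neutral"]
rater_b = ["dominant", "neutral",    "neutral", "dominant", "neutral", "submissive"]

print(f"kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")  # ≈ 0.45 here: middling
```

A low kappa would tell the student to tighten their rubric before collecting real data.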

Any student competing in this division would need to cost out their proposals and set up a believable timeline for data collection and analysis (and IRB placating). A few standout entries would be chosen to be funded and executed in partnership with a sponsor. 

Best in Class Trophy: The “Bases Covered” award, actually a baseball trophy with glasses added to the player sliding home. 

Meticulous Replication Division

Two tracks here: In the first, students reproduce highly cited, seldom rerun studies from early in a field’s history. They get to operate under IRB waivers that apply only light sanity checks to prior setups. (The Stanford Prison Experiment is excluded.) Every study that was originally run only on college students (possibly receiving extra credit for a psychology course) must also be rerun on an alternate population. If the original experimental design is scanty, students should attempt to interview the original authors and, if they cannot, come up with three plausible protocols. 

In the second track, students rerun the data analysis for pre-registered studies with publicly available codebooks and data. These challenges would be stratified by difficulty level and would include an oral exam on the work. 

Best in Class Trophy: The “Gimlet Eye” award, in which the top performer in each track is presented with three nearly identical trophies and is told they should take home the one that differs subtly from the other two. 

Fraud Exposure Division

Students who enroll in this division can access a non-public tipline for questionable findings (or, of course, they can just go browsing). Successful projects surface copy-pasted Western blot lines, mathematical errors, Benford’s law distribution violations, and the like. Major deviations from preregistered analyses can also be entered in this division. 
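(As an illustration of the Benford check: a minimal sketch on toy numbers. A real entry would need far more values, plus an argument that Benford’s law should apply to the dataset in question at all.)

```python
# Compare observed leading-digit frequencies against Benford's expectation,
# P(d) = log10(1 + 1/d). Toy data only; real analyses use formal tests.
import math
from collections import Counter

values = [1243, 1871, 204, 3310, 119, 2210, 154, 990, 1042, 187]  # suspect table
digits = [int(str(abs(v))[0]) for v in values if v != 0]

observed = Counter(digits)
for d in range(1, 10):
    expected = math.log10(1 + 1 / d)
    print(f"digit {d}: observed {observed[d] / len(digits):.2f}, expected {expected:.2f}")
```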

Entries in this division are reviewed in private session. Substantial scholarships and cash bounties are awarded to findings that are judged to merit retraction (regardless of whether the journal agrees). Publicly announced winners are also awarded a defamation insurance policy by a major sponsor. 

Best in Class Trophy: The “Meddling Kids” award, which is a gilded cast of a piña colada.

***

A science fair built around subspecialties like these would reward genuine curiosity and allow students enough authorship over projects to be allowed to fail. It’s the design of an experiment, not the revelation of its results, that should carry the highest stakes at the high school level. My medals and ribbons have been gathering dust in the back of my childhood closet for two decades. The only thing I carry everywhere is my mind.


***

Picture a fall afternoon in Austin, Texas. The city is experiencing a sudden rainstorm, common there in October. Along a wet and darkened city street drive two robotaxis. Each has passengers. Neither has a driver.

Both cars drive themselves, but they perceive the world very differently. 

One robotaxi is a Waymo. From its roof, a mounted lidar rig spins continuously, sending out laser pulses that bounce back from the road, the storefronts, and other vehicles, while radar signals emanate from its bumpers and side panels. The Waymo uses these sensors to generate a detailed 3D model of its surroundings, detecting pedestrians and cars that human drivers might struggle to see.

In the next lane is a Tesla Cybercab, operating in unsupervised full self-driving mode. It has no lidar and no radar, just eight cameras housed in pockets of glass. The car processes these video feeds through a neural network, identifying objects, estimating their dimensions, and planning its path accordingly.

This scenario is only partially imaginary. Waymo already operates, in limited fashion, in Austin, San Francisco, Los Angeles, Atlanta, and Phoenix, with announced plans to operate in many more cities. Tesla launched an Austin pilot of its robotaxi business in June 2025, albeit using Model Y vehicles with safety monitors rather than the still-in-development Cybercab. The outcome of their competition will tell us much about the future of urban transportation.

The engineers who built the earliest automated driving systems would find the Waymo unsurprising. For nearly two decades after the first automated vehicles emerged, a consensus prevailed: To operate safely, an AV required redundant sensing modalities. Cameras, lidar, and radar each had weaknesses, but they could compensate for each other. That consensus is why those engineers would find the Cybercab so remarkable. In 2016, Tesla broke with orthodoxy by embracing the idea that autonomy could ultimately be solved with vision and compute and without lidar — a philosophical stance it later embodied in its full vision-only system. What humans can do with their eyeballs and a brain, the firm reasoned, a car must also be able to do with sufficient cameras and compute. If a human can drive without lidar, so, too, can an AV… or so Tesla asserts.

This philosophical disagreement will shortly play out before our eyes in the form of a massive contest between AVs that rely on multiple sensing modalities — lidar, radar, cameras — and AVs that rely on cameras and compute alone.

The stakes of this contest are enormous. The global taxi and ride-hailing market was valued at approximately $243 billion in 2023 and is projected to reach $640 billion by 2032. In the United States alone, people take over 3.6 billion ride-hailing trips annually. Converting even a fraction of this market to AVs represents a multibillion-dollar opportunity. Serving just the American market, at maturity, will require millions of vehicles.

Given the scale involved, the cost of each vehicle matters. The figures are commercially sensitive, but it is certainly true that cameras are cheaper than lidar. If Tesla’s bet pays off, building a Cybercab will cost a fraction of what it will take to build a Waymo. Which vision wins out has profound implications for how quickly each company will be able to put vehicles into service, as well as for how quickly robotaxi service can scale to bring its benefits to ordinary consumers across the United States and beyond.

To understand how this cleavage between sensor-fusion and vision-only approaches emerged, we must begin with the earliest breakthroughs in driving automation.

Early computer driving (1994–2003)

Fantasies of self-driving vehicles are ancient, appearing in Aristotle’s Politics and The Arabian Nights. But the clearest antecedent to today’s robotaxis first emerged in 1994, when German engineer Ernst Dickmanns installed a rudimentary automated driving system into two Mercedes sedans.

Dickmanns’ sedans were able to drive on European highways at speeds up to 130 kilometers per hour while maintaining their lane position and even executing passing maneuvers in traffic. Dickmanns had been testing prototypes on closed streets since the 1980s, and by 1995 his team was ready to demonstrate their system on a 1,600-kilometer open-street journey, driving autonomously 95% of the time.

The vehicles sensed the world using two sets of forward-facing video cameras: one pair with wide-angle lenses for short-range peripheral vision and another pair with telephoto lenses for long-range detail. Cameras in 1995 were reasonably fit for Dickmanns’ purpose. The chief bottleneck his system faced was in computer capacity. His work-around involved what he called, grandly, “4-D dynamic vision”: algorithms that efficiently processed visual data by focusing limited computational resources on specific regions of interest, much like human visual attention.

Despite the vehicles’ impressive achievements, Dickmanns was candid about the limitations of 4-D dynamic vision. It could be confused by lane markings — the cameras could “see” only in black and white, and so were blind to information conveyed by color, like yellow lines painted over white ones in construction zones. It also struggled when lighting conditions changed.

Most importantly, 4-D dynamic vision failed when road conditions changed suddenly, such as when another car cut sharply into the lane ahead. Relying only on cameras to model the world around it, the system had to measure distance via motion parallax, looking for differences in the size or position of objects in two frames taken at different times.

This was a reasonable approach for a vehicle in its own lane that the automated driving system might slowly overtake. But it was dangerously unsafe for cars that suddenly entered the lane ahead. Without stereo vision or other range-finding sensors, the car needed several video frames to model the world accurately, which posed great risks when the car and its neighbors were moving at autobahn speeds.
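To see why several frames were needed, consider the geometry. What follows is a minimal sketch of depth from forward-motion parallax (not Dickmanns’ actual algorithm, just the underlying math); the frame rate and pixel values are illustrative.

```python
# Depth of a static, off-axis object from two frames, given how far the
# camera advanced between them: Z = B * x2 / (x2 - x1), where x1 and x2 are
# the object's horizontal image offsets from the optical axis.
def depth_from_motion_parallax(x1_px: float, x2_px: float, advance_m: float) -> float:
    disparity = x2_px - x1_px  # off-axis objects drift outward as we approach
    if abs(disparity) < 1e-6:
        raise ValueError("no measurable parallax yet: need more frames")
    return advance_m * x2_px / disparity  # depth at the first frame

# At 130 km/h (~36 m/s) and 25 frames per second, the camera advances ~1.44 m
# per frame. A 5-pixel drift then implies a vehicle roughly 30 m ahead:
print(depth_from_motion_parallax(x1_px=100.0, x2_px=105.0, advance_m=1.44))  # ≈ 30.2
# Note the assumption doing all the work: the object must be static. A car
# cutting into the lane violates it, which is exactly the dangerous case.
```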

Dickmanns’ work suggested that the physics of visual perception imposed fundamental constraints that the algorithms of the day couldn't overcome. Other modalities were required.

DARPA and sensor fusion (2004–2016)

Amid the wars in Afghanistan and Iraq, Pentagon leaders increasingly looked to automation as a way to keep American soldiers out of harm’s way. Congress had already directed the military, in the 2001 defense budget, to pursue unmanned ground vehicles for logistics and combat roles by 2015. DARPA interpreted this mandate to require a push for autonomous resupply technologies, a goal that gained more immediacy as improvised explosive devices began inflicting significant casualties on US convoys in Iraq. DARPA's goal was to reduce the risk resupply operations posed to human soldiers. To that end, it organized its first Grand Challenge competition in 2004, offering a $1 million prize for an AV that could navigate a 142-mile desert course.

There were many sophisticated entrants from a variety of companies and universities. But the prize was large for a reason: The problem was daunting. No vehicle finished the course. The most successful entrant, Carnegie Mellon University's “Sandstorm” — a modified Humvee — traveled only 7.4 miles before its undercarriage stuck on a rock, leaving its wheels with insufficient traction to get it moving again. The other vehicles failed even earlier, getting stuck on embankments, being confused by fences, or in one case, flipping over due to aggressive steering.

The next year’s Grand Challenge had dramatically different results: Five vehicles finished the 2005 course. The winner, Stanford University's “Stanley,” a modified Volkswagen Touareg, crossed the finish line in six hours and 54 minutes, traveling 132 miles without human intervention.

What made the difference? In a word: sensor fusion. Stanley carried five laser scanners mounted on its roof rack, aimed forward at staggered tilt angles to produce a 3D view of the terrain ahead. All this was supplemented with a color camera focused for road pattern detection and two radar antennas mounted on the front to scan for large objects.

This collection of sensing modalities was not Stanley’s innovation. Sandstorm had also been equipped with cameras, lidar, and radar, as well as GPS. What Stanley had was the ability to collate the inputs of these sensors and fuse them into a consistent model of the vehicle’s surroundings. That fusion mitigated the weaknesses of individual modes. When dust kicked up by the lead vehicle obscured the camera and lidar, radar could still register metallic obstacles, while radar's lower resolution was supplemented by rich lidar point clouds and camera vision.
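To make the principle concrete, here is one classic fusion step in miniature (a toy sketch, not Stanley’s actual software): independent range estimates are combined by inverse-variance weighting, as a Kalman-style update would do.

```python
# Fuse independent range measurements by weighting each with the inverse of
# its variance: noisy sensors contribute less, and the fused variance shrinks.
def fuse_ranges(measurements: list[tuple[float, float]]) -> tuple[float, float]:
    """measurements: (range_m, variance) pairs, one per live sensor."""
    weights = [1.0 / var for _, var in measurements]
    fused = sum(w * r for (r, _), w in zip(measurements, weights)) / sum(weights)
    return fused, 1.0 / sum(weights)

# Dust nearly blinds the camera (huge variance); radar still sees the obstacle:
lidar  = (42.0, 0.05)  # precise when unobstructed
radar  = (41.5, 1.0)   # coarse but weather-proof
camera = (48.0, 25.0)  # degraded by dust
print(fuse_ranges([lidar, radar, camera]))  # ≈ (41.99, 0.05): camera barely counts
```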

The 2007 DARPA Urban Challenge shifted the domain from the desert to a more challenging one: a mock city environment. Participants were expected to navigate intersections and parking lots while obeying traffic laws and avoiding collisions with other vehicles. These demands encouraged participants to take sensor fusion to new heights. 

Carnegie Mellon University, which came in second in 2005, made a comeback with its winning vehicle, “Boss.” A modified Chevy Tahoe, Boss was notable for the full range of sensors it carried: 11 lidar sensors, five for long range and six for short; cameras; and four radar units. This rich set of sensor data, fused together, allowed Boss to handle otherwise-impossible scenarios, like detecting a car partially occluded by another at an intersection.

None of this was cheap. Boss’ sensor suite cost more than $250,000, exclusive of the computer-processing hardware that filled its trunk. So while Boss and vehicles like it were capable of automated driving, they were nowhere near ready to be rolled out to consumers. 

Still, the DARPA competitors’ success demonstrated the potential of sensor fusion, which became the default approach in the nascent automated driving system sector. Google launched its self-driving car project in 2009 under Sebastian Thrun, who oversaw Stanley’s victory in the Grand Challenge for Stanford. From the start, this project — which was spun out into an independent subsidiary, Waymo, in 2016 — used a multisensor approach: lidar, radar, cameras, and detailed maps of the operational area. As limited deployment of AVs on public roads began in the mid-2010s, Waymo and its then-competitors, such as Cruise, Argo AI, Uber, and Aurora, were committed to sensor fusion. 

Decades of work had yielded a consensus: Multiple sensor technologies, with outputs that could be fused by computers, transcended the limitations of any one sensor. It was expensive and complex, but it worked. All that was required was more deployment and time, to inch down the cost curve, year after year. 

That consensus was about to be challenged. 

The vision-only insurgency (2016–2019)

If you want to understand the Tesla perspective on driving automation, watch the firm's “Autonomy Day” video.

In an auditorium at Tesla's Palo Alto headquarters on April 22, 2019, Elon Musk and his technical leadership team flatly rejected the sensor-fusion consensus. Within minutes of taking the stage, Musk fired the first salvo: “What we're gonna explain to you today is that lidar is a fool's errand and anyone relying on lidar is doomed. Doomed! Expensive sensors that are unnecessary. It's like having a whole bunch of expensive appendixes ... appendices, that's bad. 'Well, now we'll put [in] a whole bunch of them'? That's ridiculous.”

After Musk's provocative opening, Andrej Karpathy, then the company's senior director of AI, took the stage to exhaustively dismantle the sensor-fusion consensus. “You all came here, you drove here, you used your 'neural net' and vision,” Karpathy said. “You were not shooting lasers out of your eyes and you still ended up here.” By this, Karpathy meant that human drivers can navigate their cars through the streets using only passive optical sensors — their eyes — coupled with powerful neural processing.

“Vision really understands the full details,” Karpathy argued. “The entire infrastructure that we have built up for roads is all designed for human visual consumption. … So all the signs, all the traffic lights, everything is designed for vision. That's where all that information is.” In this view, lidar and other nonvisual inputs weren't merely unnecessary but counterproductive. They were “a shortcut. … It gives a false sense of progress and is ultimately a crutch.”

Musk similarly dismissed high-definition mapping. “HD maps are a mistake. … You either need HD maps, in which case if anything changes about the environment, the car will break down, or you don't need HD maps, in which case, why are you wasting your time?” For Musk, depending on pre-mapped environments meant that the “system becomes extremely brittle. Any change to the system makes it [so that] it can't adapt.” A true automated driving system should be able to boot up anywhere and drive appropriately based purely on what it sees.

Tesla’s approach to driving automation was consistent with Musk’s design philosophy at all of his firms. Javier Verdura, reflecting on his time as Tesla’s director of product design, reminisced that 

if we’re in a meeting and we ask, “Why are the two headlights on the cars shaped like this?” and someone replies, “Because that’s how they were designed when I was at Audi,” that’s the worst thing you can say. This means we’re telling how things are done at other companies that have been doing it for years without innovation. For Elon, everything we do must be started from scratch, stripping everything down to the basics and starting to rebuild it with new notions, without worrying about how things are normally done.

At Tesla, the goal was to do away with features other manufacturers took for granted. Musk has said that “the best part is no part. The best process is no process. It weighs nothing. Costs nothing. Can't go wrong.” Tesla's introduction of touchscreens as primary vehicle-control interfaces exemplifies this philosophy. By replacing the buttons and dials that stud a traditional dashboard with a touchscreen, Tesla streamlined user interactions and reduced the number of physical components. This minimalist design not only makes an aesthetic statement but also simplifies the car’s manufacturing and maintenance processes. In the process, of course, the car arguably becomes less safe to operate; but every design decision involves trade-offs.

The same logic that eliminated dashboard buttons militates against lidar in favor of a camera-only approach. If there is no lidar in the vehicle, then the lidar does not have to be sourced, does not have to be installed, does not have to be paid for, and does not need to be replaced; indeed, it cannot fail. While Waymo had to invest immense sums and effort in obtaining and installing and maintaining expensive lidar sets, Tesla was free of those burdens. 

In its own way, Tesla’s choice to pursue minimalist design in sensor modalities was as audacious as when Apple did away with physical keyboards for the iPhone, or when SpaceX announced its plan to stop using single-use rockets. This break from orthodoxy was classic Musk: Like SpaceX’s unprecedented success with reusable boosters, it positioned Tesla as a company with an insight into what was possible, one that everyone else had fundamentally misunderstood.

In this case, the insight depended on recent progress in computer vision. In 2012, AlexNet, a neural network developed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, won the ImageNet challenge, marking the beginning of the deep learning era in vision. Tasks like detecting cars and pedestrians in camera images to a high level of accuracy were now feasible. Deep learning went from strength to strength between 2012 and 2016, when Tesla began equipping all vehicles with cameras and compute hardware designed for eventual self-driving. Tesla believed that with sufficient data and computing power, the fundamental limitations of earlier camera-only systems could be overcome.

“Neural networks are very good at recognizing patterns,” Karpathy explained at Autonomy Day. “If you have a very large, varied dataset of all the weird things that can happen on the roads, and you have a sufficiently sophisticated neural network that can digest all that complexity, you can make it work.” This was Tesla's advantage: hundreds of thousands of consumer vehicles already on the road, collecting real-world driving data with every mile traveled.

Each Tesla vehicle was a data-gathering platform, continuously feeding information back to Tesla's training systems. The company had built what Karpathy called a “data engine” — an iterative process that identified situations in which its autonomous system performed poorly, sourced similar examples from the fleet, trained improved neural networks on that data, and redeployed them to the vehicles. Though Waymo was also collecting data, scale matters for neural networks. In 2019, Waymo had obtained approximately 10 million miles of driving-automation data, while Tesla had over one billion miles collected via vehicles equipped with Autopilot. That two-orders-of-magnitude difference meant, in Tesla's view, that their neural network would outperform any competitor.
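A toy sketch of that loop, with every name an illustrative stand-in rather than Tesla’s internal tooling, might look like this:

```python
# The "data engine" as a loop: find failures, gather similar examples,
# retrain, redeploy. Here the "model" is just the set of scenarios it handles.
def data_engine(model: set[str], fleet_logs: list[str], rounds: int = 3) -> set[str]:
    for i in range(rounds):
        failures = [s for s in fleet_logs if s not in model]  # poor performance
        if not failures:
            break
        batch = set(failures[:2])  # "source similar examples from the fleet"
        model = model | batch      # stand-in for annotation and retraining
        print(f"round {i}: retrained on {sorted(batch)}")  # then redeploy
    return model

print(sorted(data_engine({"rain"}, ["rain", "cut-in", "sun glare", "debris", "fog"])))
```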

This data advantage complemented their hardware-cost advantage. In 2019, while a Waymo vehicle might have carried more than $100,000 worth of sensors and computing equipment, Tesla's vision-only approach added perhaps $2,000 to a vehicle's cost. In the firm’s view, these advantages would reinforce each other: Cheaper vehicles would mean more deployment, which would capture more data, which would improve their neural networks, which would make their product more competitive, enabling even more deployment. It was a virtuous cycle for scaling quickly.

“By the middle of next year,” Musk predicted during the 2019 event, “we'll have over a million Tesla cars on the road with full self-driving hardware, feature complete.”

Blind spots (2019–Present Day)

Musk’s prediction did not come true in mid-2020. As of late 2025, it remains unfulfilled. Throughout the early 2020s, Musk continually asserted that Tesla's vehicles would be capable of "full self-driving" by year's end. These announcements triggered market excitement without ever coming true. Tesla did launch a robotaxi pilot in Austin in June 2025, but using Model Y vehicles with safety monitors in the passenger seat and operating in a geofenced area of approximately 245 square miles. (Musk stated in October 2025 that safety monitors would be removed by year's end; even if that happens, it would still fall far short of the widespread, unrestricted deployment he had suggested, and it remains to be seen whether the promise will be kept.) 

Those who have been paying only casual attention to the field may find this surprising. Don’t all Tesla vehicles come with something called “Autopilot”? Don’t many of them feature “Full Self-Driving”? If that feature is not actually full self-driving, then what is it?

Tesla's Autopilot is an Advanced Driver-Assist System that offers adaptive cruise control and lane-keeping. Full Self-Driving expands on this system, adding features like automatic lane changing and traffic-signal recognition. Despite the name, FSD requires constant supervision from a human driver who has to be ready to assume control of the vehicle at any moment. That requirement was obscured by the feature’s misleading name, in some cases with tragic results. Ultimately, Tesla decided truth was the better part of branding, and in early 2024 quietly renamed the feature Supervised Full Self-Driving (an oxymoron). 

When will unsupervised FSD be ready? There are many hurdles Tesla must clear before that feature will be widely available. Technical limitations in Tesla's existing vehicle hardware are one; Musk has acknowledged that legacy Tesla vehicles equipped with Hardware 3 may not be capable of unsupervised FSD. Consequently, Tesla has committed to providing free upgrades to Hardware 4 for customers who have purchased the FSD package, ensuring their vehicles can support the technical demands of unsupervised driving. 

But another delay, harder to overcome, comes from the limitations of the vision-only approach.

In May 2016 and again in March 2019, Tesla vehicles in Autopilot mode were involved in nearly identical fatal accidents in Florida, where they collided with the sides of white tractor-trailers crossing highways. In both cases, the National Transportation Safety Board found that the vision systems failed to detect the broad side of a white truck against a bright sky. These incidents, occurring three years apart — the latter just weeks before the triumphant Autonomy Day presentation — demonstrated that even with substantial improvements, specific visual scenarios like light-colored objects against bright backgrounds continued to challenge Tesla's pattern-recognition systems.

As recently as October 2024, the National Highway Traffic Safety Administration opened a new investigation into Tesla's FSD following reports of four crashes in low-visibility conditions, including one that killed a pedestrian in Rimrock, Arizona, in November 2023. According to the NHTSA documentation, these crashes occurred when Tesla vehicles encountered sun glare, fog, or airborne dust: precisely the kinds of challenging visual conditions with which cameras struggle. This investigation, covering approximately 2.4 million Tesla vehicles from the 2016 through 2024 model years, represents a significant shift in regulatory approach. Rather than focusing solely on driver attentiveness, NHTSA is now examining whether the FSD system itself can “detect and respond appropriately to reduced roadway visibility conditions.” This broader scope is evidence of increasing regulatory scrutiny of the vision-only approach.

The pattern is clear. Despite years of neural network improvements and billions of miles of training data, vision-only systems continue to face fundamental limitations that software alone seems insufficient to overcome. These limitations include glare, darkness, and depth perception.

Glare is most obvious. Extreme contrasts of brightness — such as driving directly toward the setting sun or encountering headlights at night — can temporarily “blind” cameras. In these scenarios, human drivers typically slow down and proceed with caution, and camera-only systems should do the same. Conversely, an automated system equipped with lidar can continue to operate at speed. 

Too much light is a problem for cameras, but so is too little. Lidar, radar, and sonar are “active” sensors: Each emits signals (lasers, radio waves, and sound waves, respectively) and measures the return reflections to determine object presence, distance, and velocity. Cameras, by contrast, are “passive” sensors, relying solely on ambient light in their environment. In its absence, they are inert. 

As a consequence, Tesla vehicles struggle in a variety of conditions. Most obviously, at night there is often little light available. The car’s headlights help, of course, but are most useful for detecting reflective objects like road signs, well-painted and maintained lane lines, and the taillights of other cars. Nonreflective objects, like dark-clad pedestrians or road debris, are harder for cameras to notice. 

This may seem to be a strange gap in Tesla’s capability. Since humans can drive at night using vision alone, shouldn’t camera-only vehicles be able to do the same? But human eyes have capabilities that cameras lack. Human eyes employ two distinct photoreceptor systems: rod cells for low-light monochrome vision and cone cells for color in daylight. When darkness falls, our eyes and brains effectively switch sensor modes. While cameras have advantages — perfect consistency, no fatigue, and the ability to deploy multiple synchronized units for 360-degree coverage — they still can’t match the low-light performance of a biological system that has been refined by millions of years of avoiding nocturnal predators. 

But while human night vision under normal conditions is impressive, lidar is better. And while lidar, like cameras, is challenged by rain, snow, and fog — raindrops or snow can obscure a camera’s vision or cover its lens while also blocking a lidar’s pulse — radar is serenely unaffected. In one documented case, a Waymo One robotaxi was able to safely navigate dense fog in San Francisco. To be sure, it did not complete its journey, but it found a place to pull over safely and suspended the trip, in a situation where a vision-reliant system could not have proceeded at all.

Camera-only systems can also struggle with depth perception. Cameras don't directly measure distance; the best they can do is provide depth estimates. One method is to compare images from slightly different angles. In nature, this arrangement is called stereoscopic vision, and it’s why predator animals usually have two front-facing eyes: Predators need to make quick estimations of distance. Another method is to compare images across time (i.e., motion parallax), like Dickmanns’ rudimentary automated driving system from the 1990s. 

Tesla vehicles use both methods. They have multiple cameras strategically positioned around the vehicle. The current Hardware 4 suite includes eight external cameras: three forward-facing cameras with different focal lengths for near-, medium-, and long-range detection; two in the side pillars; one rear backup camera; and two in the fenders. Each camera is tuned for specific use cases, from high-speed highway driving to precise parking maneuvers, with overlapping fields of view to enable depth estimation through stereoscopic vision. Thanks to these cameras and computer-based depth-estimation tools, Tesla estimates that in practice its vehicles can see vehicles about 250 meters ahead on highways, which is roughly comparable to radar range. 
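The underlying stereo geometry is simple. Here is a minimal sketch with illustrative numbers; Tesla’s actual calibration and matching pipeline are not public.

```python
# Stereo depth: Z = f * B / d, with focal length f in pixels, baseline B in
# meters between the two cameras, and disparity d in pixels between images.
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    return focal_px * baseline_m / disparity_px

# With a 1,000 px focal length and a 0.3 m baseline, a 2 px disparity puts
# the object at 150 m; a half-pixel matching error at that range moves the
# estimate by tens of meters, which is why camera depth gets noisy at distance.
print(stereo_depth_m(focal_px=1000.0, baseline_m=0.3, disparity_px=2.0))  # 150.0
```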

This approach works well, but only in ideal conditions. Problems arise at longer distances, or when visibility is poor, and there are other less obvious failure modes. Imagine cresting a hill to discover a stopped vehicle on the other side. No sensor can see through the hill, but lidar registers the obstacle the instant it comes into line of sight, and the car can respond immediately. A camera-only system needs more time to make stereoscopic or motion-parallax estimates, and as a result may not brake soon enough. 

Beyond physical limitations like glare and darkness, vision-only systems face a more fundamental AI challenge: training data bias. 

Neural networks can recognize only the patterns they've been trained on, and unusual scenarios may be underrepresented in their training data. This challenge is magnified by Tesla's end-to-end machine learning architecture, where camera inputs feed directly into neural networks that output driving commands. Unlike Waymo's modular architecture, which separates perception from planning so engineers can diagnose whether errors stem from misunderstanding the world or making bad decisions, Tesla's end-to-end system processes camera images inside a black box. Images go in one end, driving commands come out the other. If the system brakes, there is no way to be certain why it did so. Was it because it recognized the red light, or detected a pedestrian, or noticed another vehicle, or responded to some unrelated visual pattern, like a missing manhole cover? There is no way to tell. What that means, ironically, is that Tesla's massive data collection is both its greatest strength and a significant constraint. The vast quantities of data make processing by humans impractical. While competitors can address specific edge cases by updating discrete components or rules, Tesla must retrain its entire system.
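The contrast can be caricatured in a few lines of Python; neither company’s real stack looks like this, but the structural difference is the point.

```python
# Modular: an inspectable Scene sits between perception and planning.
# End-to-end: pixels in, command out, nothing to inspect in between.
from dataclasses import dataclass

@dataclass
class Scene:
    obstacle_ahead_m: float  # the modular system's debuggable intermediate

def perceive(frame: bytes) -> Scene:
    return Scene(obstacle_ahead_m=35.0)  # stand-in for a perception network

def plan(scene: Scene) -> str:
    return "brake" if scene.obstacle_ahead_m < 50.0 else "cruise"

def modular_drive(frame: bytes) -> str:
    scene = perceive(frame)  # engineers can log, test, and blame this boundary
    return plan(scene)

def end_to_end_drive(frame: bytes) -> str:
    return "brake"  # stand-in for one opaque network's output

print(modular_drive(b"..."), end_to_end_drive(b"..."))
```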

This long-tail problem is particularly challenging for safety-critical systems. When a Tesla encounters an unusual object configuration — like a truck with an unusual shape, a fallen tree, or construction equipment — the neural network may misclassify it or fail to detect it entirely. While expanding the training dataset helps, it's impossible to capture every edge case through fleet learning alone. Each new scenario requires collection, annotation, and retraining: a time-consuming process that creates inevitable gaps in the system's perception. This is in contrast to lidar and radar, which detect physical objects regardless of their visual appearance or how frequently they've been encountered before.

These limitations of vision-only systems are not theoretical; they manifest in the firm’s performance data. According to California DMV reports from 2023 to 2024, Waymo reported a remarkably low rate of 0.0004 disengagements (that is, returning control to a human driver) per mile. Tesla, which does not yet offer a full automated driving system but merely a driver-assist system — its Supervised FSD — is not required to submit formal disengagement reports, but some third-party analyses estimate disengagement rates between 0.05 and 0.10 per mile. If these reports are accurate, Tesla cars disengage at least an order of magnitude more often than dedicated sensor-fusion systems.
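Converting those rates into miles between disengagements makes the gap vivid:

```python
# Back-of-envelope conversion of the rates quoted above (Tesla's figures are
# third-party estimates, as noted, so treat the ratio as rough).
waymo = 0.0004                   # disengagements per mile, CA DMV reports
tesla_lo, tesla_hi = 0.05, 0.10  # third-party estimates for Supervised FSD

print(f"Waymo: one per {1 / waymo:,.0f} miles")                       # 2,500
print(f"Tesla: one per {1 / tesla_hi:.0f}-{1 / tesla_lo:.0f} miles")  # 10-20
print(f"gap: {tesla_lo / waymo:.0f}x-{tesla_hi / waymo:.0f}x")        # 125x-250x
```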

In November 2025, weeks after Waymo's co-CEO challenged the industry to “be transparent about what's happening with their fleets,” Tesla released its most detailed safety report to date. The release is good news, in that it finally offers a complete picture of Tesla’s contributions to road safety, but it’s also bad news, in that it underscores the gap between where Tesla is and where it needs to be — that is, where Waymo already is. 

For Tesla’s Supervised FSD — where a human driver remains ready to intervene — Tesla reports approximately 2.9 million miles between major collisions in the most recent 12-month period, compared with roughly 505,000 miles for average US drivers. This represents about an 82% reduction in serious crash frequency, or roughly five to six times safer than human driving. For minor collisions, Tesla reports approximately 986,000 miles between incidents, compared with 178,000 miles for average drivers.
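Those ratios are easy to check against the mileage figures:

```python
# Sanity-checking the quoted reductions from the miles-between-collision figures.
fsd_major, human_major = 2_900_000, 505_000  # miles between major collisions
fsd_minor, human_minor = 986_000, 178_000    # miles between minor collisions

print(f"major: {fsd_major / human_major:.1f}x "
      f"({1 - human_major / fsd_major:.1%} reduction)")  # ~5.7x, ~82.6%
print(f"minor: {fsd_minor / human_minor:.1f}x")          # ~5.5x
```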

That’s good, but it fails to clear the bar that Waymo sets. Waymo reports even more dramatic safety improvements for its vehicles, which, unlike Teslas, have no human standby driver. Waymo's robotaxis are involved in 91% fewer crashes involving serious injury — roughly 11 times safer than humans in this respect — and 79% fewer crashes involving airbag deployment. 

The most revealing comparison, however, comes from Tesla's robotaxi pilot in Austin. When Tesla attempted unsupervised operation — removing the human safety backup — early performance was catastrophic. In the first month of operation, covering approximately 7,000 miles, the vehicles were involved in three crashes. That rate of roughly one crash every 2,300 miles is orders of magnitude worse than Waymo's performance, and significantly worse than even average human drivers. While Tesla has not provided updated figures for the Austin pilot beyond this initial period, the contrast is stark. These differing safety records suggest that, despite advances in neural networks and computer vision, sensor-fusion systems continue to outperform vision-only approaches in real-world conditions.

Beyond the dichotomy

Perhaps in recognition of this reality, Tesla has quietly shifted its stance. 

Throughout 2021 and 2022, Tesla ostentatiously removed radar from its vehicles as part of its commitment to “Tesla Vision.” In late 2023, without fanfare, the company reintroduced radar, incorporating a high-resolution radar unit (codenamed “Phoenix”) into its Hardware 4 suite. The reintegration played to the firm’s strengths: Whereas radar had earlier fed the driver-assist system as a separate stream with hard overrides, its input was now incorporated directly into the ADAS’ neural network. Even so, for a company that had so loudly insisted on the sufficiency of cameras alone, this limited use of camera-and-radar sensor fusion represented a significant change. Similarly, Tesla vehicles quietly began relying on onboard mapping to understand their position in space.

Meanwhile, Waymo and other sensor-fusion companies have increasingly embraced neural networks. Waymo now employs transformer-based foundation models — the same technology powering advanced language models — across its entire self-driving pipeline: perception, prediction, and motion planning. The system is trained end-to-end, with gradients flowing backward through components during training, presumably in the same fashion that Tesla does. However, Waymo has chosen to maintain distinct perception and planning networks: If the car makes a mistake, engineers can determine whether it misunderstood the world or made a poor decision. This modular architecture allows independent testing and validation of components. 

One consequence of this modular approach is that Waymo needs fewer sensors than it once did. Meanwhile, the economics driving these decisions have shifted dramatically. Early automotive lidars like Velodyne's HDL-64E cost upwards of $75,000 in 2007, making them impractical for mass-market vehicles. However, technological advances and economies of scale have caused prices to plummet. By 2020, Velodyne’s automotive-grade lidars were in the $500 range at production volumes: a cost reduction of more than 99% in just over a decade. Waymo used Velodyne lidars early in the firm’s life but has been building its own lidar in-house for years at what the firm said in 2024 was “a significantly reduced cost.” Computing hardware costs have followed a similar trajectory. Today, industry projections suggest that by 2030, comprehensive sensor suites including multiple lidars might add only $2,000 to $3,000 to vehicle cost, approaching the price premium of Tesla's camera array and computing hardware.

Waymo and Tesla are not alone in the self-driving car space, and their competitors are also converging on sophisticated AI, sensor fusion, and multiple sensor modes. Mobileye, which supplies driver-assist systems to dozens of automakers, relies on cameras and radar for basic capability while adding more sophisticated sensing as autonomy levels increase. Its robotaxi platform incorporates lidar for redundancy and robustness: The camera subsystem alone can drive safely, and the lidar/radar subsystem alone can drive safely, running in parallel. Like Tesla, Mobileye built its reputation on vision-based ADAS, but for higher levels of autonomy, the firm recognizes the value of sensor fusion.

Another instructive example is Wayve, a UK-based startup whose approach blurs the line between vision-only and sensor-fusion. Like Tesla, Wayve emphasizes end-to-end deep learning: Its neural networks take raw video input and directly output driving commands. But unlike Tesla, Wayve does not insist on a vision-only approach. Its vehicles incorporate inertial measurement units, GPS, and occasionally radar to augment their understanding of the environment. Their approach underscores how much the earlier dichotomy is breaking down. 

The fundamental question of sensor-fusion versus cameras-only is beginning to lose its sharpness. As it recedes, the question is no longer what sensing approach should we use, but what standard of safety is necessary for successful driving automation. 

The argument of Tesla’s 2019 Autonomy Day, which Musk still hypes on X, is that if humans drive with vision alone, so can cars. 

It’s pithy. It’s memorable. And in several ways, it’s misleading.

It’s misleading because humans don't actually drive with vision alone. We have other senses to engage. We use hearing to detect sirens, screeching tires, and warnings from pedestrians. We have proprioception that helps us feel g-forces, vibrations, and loss of traction. It’s true that we drive mostly with vision supplemented by our brains, and a computer, too, can carry vast contextual knowledge about driving environments. But we can also — through reading facial expressions and gestures — rapidly discern other drivers’ intentions in ways that no computer can.

And despite these advantages, humans are terrible drivers. 

Globally, human drivers cause approximately 1.19 million deaths annually. Human error contributes to over 90% of crashes. In the United States alone, roughly 40,000 people die in traffic accidents each year. Humans can’t shoot lasers out of our eyes; our cars can, and if we could, we’d be much safer drivers. Why shouldn't we aspire to the level of safety that sensor fusion offers? Progress in this field, understood properly, should mean living up to driving automation's capability, not living down to human weakness.

So as Waymo robotaxis and Tesla's Model Y-based robotaxis now ply the streets of Austin, the two vehicles indeed embody different philosophies about how AVs should perceive the world. The Tesla robotaxi sports its array of cameras, while the Waymo spins its lidar alongside a suite of complementary sensors. But the competition will not be as sharp as it would have been in 2019.

Tesla challenged convention, but since then it has quietly reintroduced radar; it seems possible that it will bring in other modalities besides. Waymo pioneered comprehensive sensor fusion, but since then it has streamlined its hardware and enhanced its AI capabilities. It seems certain that it will continue to do so. Looking ahead, the paths forward for each firm seem likely to converge. 

If that’s correct, it means that observers — including the regulators who will admit this technology into the streets of other cities — have a different question to ask. Rather than cameras versus lidar, the real contest is between robotaxis that are merely as safe as human drivers and those that are better.

Which standard are we prepared to accept? What vehicles can meet the one we choose? How soon can those vehicles arrive? These questions aren’t technical but political, which means that, as citizens, it is up to us to decide. 

The driving-automation future we get will depend on our answer. 



Comments

  • By kristopolous 2026-02-17 13:37 (4 replies)

    They should actually be science - mostly reproducing/validating things in rigorous methodical ways.

    There are far too many adults who don't seem to grasp the basic principles of what the discipline of science is.

    At the earliest level, fairs should instead have kids come up with experiments to show that something is or is not real, based on an existing demonstration.

    For instance, I'll make the claim that each different color of Trix has a unique flavor, and that you can test this because when you eat a certain color you always taste the same flavor. Then the kids have to come up with experiments showing that although that observation is true, the claim that the colors are flavored differently is false.
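
    A minimal sketch of how such a blinded tasting could be scored, in Python (all numbers hypothetical): a blindfolded taster tries to name the color from flavor alone, and we ask how likely their hit count would be under pure guessing.

        import math

        def p_at_least(k, n, p):
            # One-sided tail probability: chance of k or more correct
            # guesses out of n tries under pure guessing.
            return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
                       for i in range(k, n + 1))

        # Hypothetical blinded tasting: 6 colors, 30 blindfolded tastes.
        n_trials, chance = 30, 1 / 6
        n_correct = 4  # suppose the taster names the color right 4 times

        print(f"P(>= {n_correct} correct by luck) = "
              f"{p_at_least(n_correct, n_trials, chance):.2f}")
        # A large value means the result looks like guessing: no evidence
        # that the colors actually taste different.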

    Another example: I'll claim that a tall, slender glass holds more liquid than a short, wide one of equal volume, and as evidence I'll show that companies charge more for the tall, slender glass and that people willingly pay more for it.

    Essentially you start out with something where the response is "that's true but," and then an experiment is constructed from there.

    This approach allows for some important lessons.

    1. You don't have to successfully explain the phenomenon to demonstrate the claim is false.

    2. Science is about skepticism until you're forced to accept something, essentially by exhaustion of objections.

    3. Poorly constructed tests can reproduce a phenomenon that you're trying to isolate, etc.

    The change is to not make it open-ended or have projects or theme fairs. No posterboards or presentations.

    Instead it's a form of debate club with interlocutors, where groups try to fake and fool the others, and the others try to separate the real from the fake.

    When the general public accuses people of putting their "trust in science" - I mean, this will kill that. No, science is all about strong methodological distrust; that's one of its basic premises.

    • By hk1337 2026-02-17 15:11 (1 reply)

      > 1. You don't have to successfully explain the phenomenon to demonstrate the claim is false.

      Sometimes I feel like this is taken to the extreme by non-scientists who say that lack of evidence is in itself evidence. Depending on the circumstance and the tests, that could be true, but it's often a default mode.

      • By timr 2026-02-17 15:22

        > Sometimes I feel like this is taken to the extreme by non-scientists who say that lack of evidence is in itself evidence.

        But of course, the lack of evidence is itself evidence, if you have a sufficiently large data sample and haven't seen the thing you're looking for. Keep pursuing the increasingly unlikely outcome, and you're just engaged in science-flavored religious catechism.
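
        A toy version of that arithmetic, with invented numbers: if a real effect would show up in each independent look with some probability, then every null result shrinks the odds that the effect is real.

            # Toy Bayesian update: repeated null results as evidence of absence.
            # All numbers here are invented for illustration.
            prior = 0.5      # initial P(effect is real)
            p_detect = 0.3   # P(one study detects the effect, if it is real)

            p_real = prior
            for n in range(1, 11):
                # A null result is certain if the effect isn't real, but has
                # probability (1 - p_detect) if it is. Bayes' rule:
                num = p_real * (1 - p_detect)
                p_real = num / (num + (1 - p_real))
                print(f"after {n:2d} null results: P(real) = {p_real:.3f}")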

        I see the fallacy routinely misapplied by all sides of most hot-button science-meets-politics issues. A great many scientists will regularly substitute their own pet theories for conclusions, and strenuously ignore the lack of supporting evidence, citing the old "absence of evidence is not evidence of absence" saw. Then they turn around and mock "non-scientists" for doing the same thing. Neither side is right, of course, but dressing in a lab coat doesn't make it better.

        Just to circle it back to the topic of science education, I'd love to see a science curriculum at the middle- and high-school level that equipped people to reason through this kind of thing by focusing on tearing apart pop research. A "science fair" is actually hard to do well just because most science fails, but a "scientific bullshit fair" would have almost infinite fertile ground from nutrition studies alone.

    • By biophysboy 2026-02-17 14:20 (1 reply)

      You can believe in a phenomenon and still do good science. It all depends on whether your experimental design is free of bias. Randomizing, blinding, instrumentation, pre-registration, statistical rigor - there are all sorts of ways to do this. I say this because I think non-scientists regularly say that science is biased because scientists are biased. The cool thing about science is that you don't have to have any pretense of objectivity as a person, as long as your experiment is independent.

      • By reliabilityguy 2026-02-17 15:07 (1 reply)

        > I say this because I think non-scientists regularly say that science is biased because scientists are biased.

        As in any other field, the science is only as good as the scientist who produced it. For example, there is a serious reproducibility crisis in multiple fields, like psychology and the social sciences. In the latter it is hard to say whether it's due to a systemic educational failure of the PhD students in those fields, or to the field and personal politics merging too tightly.

        Unfortunately, all it takes is one bad scientist to discredit the rest, e.g., Wakefield.

        • By biophysboy 2026-02-17 15:42 (1 reply)

          I know what you are saying, but I'm arguing that science is great because it produces better output than the people that make it as long as they stick to good methods.

          As for reproducibility, it's my opinion that it has more to do with incentives and constraints than with the ethics or intellectual capacity of the researcher (although those are real components too).

          • By reliabilityguy 2026-02-17 15:46 (1 reply)

            > I'm arguing that science is great because it produces better output than the people that make it as long as they stick to good methods.

            I don’t think we have a contradiction here. What I am saying is that science is made by people, and we as scientists have to be extremely vigilant today not to let the “ends justify the means” crowd use the name of science for their own agenda.

    • By dnautics 2026-02-17 13:44 (1 reply)

      Did you get to the end of the article?

      • By kristopolous 2026-02-17 13:54 (2 replies)

        Still misses the mark.

        It should be designed for the 95% of students who are not going to be scientists so that they become better citizens and we aren't just flooded by piles of misguided people when it comes to funding public policy. Instead we want people to have the basic literacy to know that say, MMS (Miracle Mineral Solution) is wildly unsound.

        It should be seen as a society-wide improvement project, like the decline of smoking, or how people are exercising more and eating healthier than half a century ago.

        This is the same kind of project.

        People should know "What is science? What do people who do it do? If someone claims to do science, how can I know if their claims are legitimate? What are the red flags? If I'm presented with a scientific looking document, what questions should I ask?"

        They should know it's a self-correcting system and not a belief tribe.

        Example: My friend sent me this anti-vax report a few years ago showing how children who didn't get vaccinated had a lower reported occurrence of a number of diseases. I mean obviously - the parents who distrust clinicians aren't going to get their child diagnosed. Of course the reported occurrence is lower. Measurement bias.
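
        A back-of-the-envelope simulation (all rates invented for illustration) makes the bias concrete: identical true disease rates, different diagnosis rates, and the reported rates diverge.

            import random

            random.seed(0)
            N = 100_000
            true_rate = 0.05     # same underlying disease rate in both groups
            p_diag_vax = 0.90    # vaccinating parents usually see a doctor
            p_diag_unvax = 0.40  # distrustful parents often don't

            def reported(p_diag):
                sick = sum(random.random() < true_rate for _ in range(N))
                diagnosed = sum(random.random() < p_diag for _ in range(sick))
                return diagnosed / N

            print("reported rate, vaccinated:  ", reported(p_diag_vax))
            print("reported rate, unvaccinated:", reported(p_diag_unvax))
            # Same true rate, but the unvaccinated group "reports" fewer
            # cases -- pure measurement bias.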

        That's the kind of thing I'm talking about. A graph that was so convincing to my friend shouldn't have been. They should have been inculcated from an early age to ask such questions and shouldn't have fallen for such bad science.

        There's a bunch of subjects we can probably slack on with the general public without dire consequences. Adults can be ignorant of, say, American literature, chemistry, or ancient history without much effect. Core scientific literacy, however, is proving to be one of the important ones.

        • By wdrw 2026-02-17 14:11 (1 reply)

          If it's for the 95% of students who aren't interested in being scientists, then it shouldn't be an extracurricular science fair, but a part (maybe a big part) of the regular curriculum in science class. Science fairs are for the science enthusiasts, I think.

          • By alistairSH 2026-02-17 16:29

            Yep, this should be mandatory. The "science nerds" can still nerd out on more complex topics, but the "normies" should be required to prove a basic level of scientific literacy.

        • By the_jizzler 2026-02-17 16:27 (1 reply)

          > Still misses the mark

          Okay, but we still want you to admit that GP was subsumed by the article.

          • By kristopolous 2026-02-17 22:30

            It's a different targeting. The article is great at teaching modern empirical methods.

            However, the problem lies in the lesson of New Math - an effort to teach actual mathematicians' mathematics starting in elementary school, as opposed to the number-manipulating arithmetic that most people need.

            As a result only future mathematicians really understood it and most people were baffled by it.

            I like the sentiment, but we have to acknowledge that our favorite thing in the world, whatever it is, is simply unapproachable to others.

            I run into this problem constantly with my software efforts. I think my stuff is obvious but everybody else thinks it's too arcane and obscure.

            To reference a deep cut from 1945 - Norman Corwin, in Variety magazine, in the article "Radio not in a class by itself":

            "[Radio] rises no higher and sinks no lower than the society which produces it." A few paragraphs later, "I believe people get the kind of radio, or pictures, or theater, or press they deserve... The gist of what I am saying is that the radio of this country cannot be considered apart from the general culture... If the American people support soap operas and tolerate singing commercials; if they pay higher honor to Gildersleeve than to Beethoven, then it is not primarily the job of radio to elevate their tastes."

            And so with education - we can only build from the legos in the bucket.

    • By ModernMech 2026-02-17 14:12 (4 replies)

      I like the science and engineering fairs, though, because "I built a thing" doesn't fit into the scientific model. I always had this trouble at the fair with robotics, because what's the hypothesis? "I hypothesize I can build a robot that..." No, that's not science; that's engineering.

      • By MITSardine 2026-02-17 14:35 (2 replies)

        To be fair, a lot of science doesn't follow the scientific method. I've yet to see an applied mathematician (to speak only of what I know) come up with a hypothesis; it's usually rather: here's how people solve this problem currently, it has this and that drawback, and our paper introduces a new method that solves bigger problems faster, or new classes of problems.

        The same could be said of theoretical work: here, we tightened up an inequality.

        This is also research; not all of it is experimental!

        • By ModernMech 2026-02-17 14:44

          Yeah, I get it: when giving projects to kids it's easier to say "Here are the 5 sections you have to do" and then grade them on how well they did the 5 sections... but that really limits the spirit of the thing, if the idea was to let the kids off the leash and see where they can take their minds.

        • By musicale 2026-02-18 5:58 (1 reply)

          I concur - research can include both scientific and engineering research.

          I note MIT (like many universities) has a department of Electrical Engineering and Computer "Science".

          • By ModernMech 2026-02-18 13:06

            It's interesting seeing the EECS and CS+CompEng programs currently splitting into separate CompE and AI programs. This is happening in my department, where we are standing up an AI major and we're all asking, "Is the CS department the AI department now, or what? Where do all the systems people go?"

      • By timr 2026-02-17 14:17 (1 reply)

        It's also really damned hard to come up with an interesting, novel question that is testable, with resources available to the average schoolchild, in a reasonable amount of time.

        Allowing engineering opens up the workable space by quite a bit.

        • By anon291 2026-02-17 16:23

          Questions don't need to be new to be science.

          My brother and I, for example, did an experiment where we tested the pH of various water bodies around us. The hypothesis was based on local drainage patterns.

          Not a new question... Still scientific.

      • By anon291 2026-02-17 16:22 (1 reply)

        That's not science then. Just because engineering is as enigmatic to most people as science is doesn't mean they're the same.

        Science means applying the scientific method.

        "I hypothesize that method X is the best way to construct a robot that does Y and I've tested methods A, B, and C to validate that claim"

        That's science.

        Just building something can actually be a pursuit of non-science. This is why many engineers have not-invented-here syndrome: they think they're thinking scientifically, but they aren't. Thus, instead of checking their assumptions, they run with them.

        • By ModernMech 2026-02-17 17:26 (1 reply)

          I think you're agreeing with me? Or maybe you meant to reply to someone else, because that's my point: building the robot is not science, but it's something I think kids should still do. That's why science and engineering fairs are my preference. Or maybe there should be engineering fairs?

          • By anon291 2026-02-17 19:55 (2 replies)

            No, I'm saying a science and engineering fair doesn't really make sense. There's science in engineering, but not every engineering task is necessarily scientific.

      • By wildzzz 2026-02-17 16:46

        We did ISEF in high school. I always wanted to build something but just couldn't figure out a way to justify it. My teacher usually just said, "Do an experiment, please." I usually ended up doing some lame science project that didn't really produce anything interesting.

        Freshman year: effect of light wavelength on basil plant growth. I shined a black light, a regular light bulb, and a very bright IR light at some basil plants. I probably could have made it better by doing colored lights with controlled lux levels. Didn't win anything and the judges were unimpressed.

        Sophomore year: effect of water pH on electrolysis gas production. I varied the pH of some water and put it in an electrolysis apparatus. I actually got 3rd place in the high school chemistry group, surprisingly. It wasn't a very rigorous project, but I guess no one else did anything terribly interesting. Not enough to go on to the regional or state level (not sure what came next). Even my parents were surprised.

  • By geooff_ 2026-02-17 14:07

    Where I grew up there was no "Science Fair Circuit" like the one described in this article. Science fairs were a way for young kids (ages 8-10) to test silly hypotheses. There was no feeding into national fairs or anything like that.

    I remember one being how fast bean sprouts grew when watered with different liquids (water, olive oil, wine, Coke, etc.). An idea a kid came up with and tested with only minimal help from parents.

    To me they should be about exploring independent ideas. I love the Adam Savage quote: "Remember kids, the only difference between screwing around and science is writing it down". To me this is what they should capture.

  • By bluedino 2026-02-17 13:32 (2 replies)

    It always seemed like there were three tiers of students/projects when the science fair came around each year.

    1. Just generic stuff. Growing plants with something different in the soil/water, what shape of balsa wood makes the strongest structure, etc.

    2. You were taken under the wing of the science teacher, were given better ideas which were encouraged, got to use some equipment from the school, that sort of thing.

    3. The highest tier: one of your parents was a biologist, engineer, or chemist at one of the local big companies; you did some very specialized research, were able to use some very fancy equipment, etc. These kids almost always seemed to have the best chance of winning.

    • By CGMthrowaway 2026-02-17 13:41

      I was into model rockets at the time. I can't remember what my hypothesis was, but I remember it was whatever would allow me to shoot off the most and biggest rocket engines possible :)

    • By groundzeros2015 2026-02-17 14:15

      At my school the generic stuff won, because parents and teachers are dumb.
