
In the quantum double-slit experiment, did the particle go through the slits or did the wave function? 1920's quantum physics has a simple answer: "NO".
According to modern QFT, there are no particles except as an approximation. There are no fields except as mathematical formalisms. There's no locality. There is instead some kind of interaction of graph nodes, representing quantum interactions, via "entanglement" and "decoherence".
In this model, there are no "split particle" paradoxes, because there are no entities that resemble the behavior of macroscopic bodies, with our intuitions about them.
Imagine a Fortran program, with some neat index-based FOR loops, and some per-element computations on a bunch of big arrays. When you look at its compiled form, you notice that the neat loops are now something weird, produced by automatic vectorization. If you try to find out how it runs, you notice that the CPU not only has several cores that run parts of the loop in parallel, but the very instructions in one core run out of order, while still preserving the data dependency invariants.
"But did the computation of X(I) run before or after the computation of X(I+1)?!", you ask in desperation. You cannot tell. It depends. The result is correct though, your program has no bugs and computes what it should. It's counter-intuitive, but the underlying hardware reality is counter-intuitive. It's not illogical or paradoxical though.
This is incorrect. There are particles. They are excitations in the field.
There still is the 'split particle paradox' because QFT does not solve the measurement problem.
The 'some kind of interaction of graph nodes', by which I am guessing you are referring to Feynman diagrams, is not of a fundamental nature. It is an approximation known as 'perturbation theory'.
I think what they must be referring to is the fact that particles are only rigorously defined in the free theory. When coupling is introduced, how the free theory relates to the coupled theory depends on heuristic/formal assumptions.
We're leaving my area of understanding, but I believe Haag's theorem shows that the naïve approach, where the interacting and free theories share a Hilbert space, completely fails -- even stronger than that, _no_ Hilbert space could even support an interacting QFT (in the ways required by scattering theory). This is a pretty strong argument against the existence of particles except as asymptotic approximations.
Since we don't have consensus on a well-defined, non-perturbative gauge theory, mathematically speaking it's difficult to make any firm statements about what states "exist" in absolute. (I'm certain that people working on the various flavours of non-perturbative (but still heuristic) QFT -- like lattice QFT -- would have more insights about the internal structure of non-asymptotic interactions.)
Though it doesn't resolve whether a "quantum" is a particle or a measurable convergence of waves, electrons and photons are observed with high-speed imaging.
"Quantum microscopy study makes electrons visible in slow motion" https://news.ycombinator.com/item?id=40981054
There exist single photon emitters and single photon detectors.
To qualify that: there are single photons, given that there are single-photon emitters:
Single-photon source: https://en.wikipedia.org/wiki/Single-photon_source
QFT is not yet reconciled with (n-body) [quantum] gravity, which it has 100% error in predicting; no better than random chance.
IIRC, QFT cannot explain why superfluid helium creeps up the sides of a container against gravity, given the mass of each particle/wave of the superfluid and of the beaker and the earth, sun, and moon; though we say that gravity at any given point is the net sum of directional vectors acting upon said point, or actually gravitational waves with phase and amplitude.
You said "gauge theory",
"Topological gauge theory of vortices in type-III superconductors" https://news.ycombinator.com/item?id=41803662
From https://news.ycombinator.com/context?id=43081303 .. https://news.ycombinator.com/item?id=43310933 :
> Probably not gauge symmetry there, then.
> This is a pretty strong argument against the existence of particles except as asymptotic approximations.
I think it's also a pretty strong argument against the mathematical well-definedness of typical (interacting) QFTs in the first place.
Perhaps a better way to say it is that particles are no longer small balls of dirt [1], but a mathematical construction that is useful to generate an infinite series [2] to calculate the results.
Since in some conditions these mathematical tricks behave very similarly to small balls of dirt, we reused the word "particle" and even the names we used when we thought they were small balls of dirt.
[1] We probably never thought they were made of dirt, and in any case the magnetic moment is double the value of the small-ball-of-dirt model.
[2] One that has so many infinities it would make a mathematician cry.
Note that particles are not just for perturbation theory. There is a particle whenever there exists a particle annihilation/creation field configuration. A proton is a particle so writing down its creation/annihilation field configuration is in theory possible, though maybe not in practice.
Another point is that infinities do not necessarily make mathematicians cry. Abraham Robinson was quite pleased with them. It seems a possible hypothesis that at least some QFTs are mathematically well-defined using non-standard analysis, where 'some QFTs' means at least the renormalizable ones and perhaps also the asymptotically free ones. I don't know enough about it to know how Haag's theorem, mentioned in another comment, impacts this.
Another analogy (flawed as any of them). Sports teams "exist" in a sense. They meet one another in well-defined interactions, called matches, and such an interaction can be described as if teams were well-defined atomic entities, producing a score.
But a sports team is not atomic, not a "final reality" entity. A sports team can pass through one gate, or through several gates, when entering a stadium. From a doctor's perspective, the team "does not exist", a doctor only operates in terms of individual players' organisms.
Particles are an approximation to the actual behavior of the field, and are used in perturbation theory to calculate the more complicated field behavior.
This works well when interactions are weak. Electrons do not couple strongly to the electromagnetic field, so it makes sense to view electrons as particles. However, quarks couple very strongly to the strong force (hence the name), so the perturbative approach breaks down, and it makes less sense to view quarks as particles.
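A back-of-the-envelope illustration of why the coupling strength matters (a toy sketch, not an actual QFT calculation; the coupling values are order-of-magnitude only):

    # Successive terms of a perturbative expansion go roughly like alpha^n.
    # For electromagnetism alpha ~ 1/137, so each extra order is ~100x smaller
    # and a few "particle exchange" diagrams give an excellent answer. For the
    # strong force at low energies alpha_s ~ 1, so truncating the series, and
    # with it the particle picture, stops making sense.
    alpha_em = 1 / 137.0
    alpha_s = 1.0

    for n in range(1, 5):
        print(n, alpha_em ** n, alpha_s ** n)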
So in a non-perturbative QFT calculation which has a well defined particle-number operator, that's just "an approximation" within the theory? What is it approximating?
Energy capacity?
I'll bite: Explain yourself.
Also, for context, my question was posed because the idea of "particle number", as well as "quantum states of particles (which are countable) represented in a Fock space", and in general the idea of particles is, like, page 2 of any QFT textbook. It doesn't approximate anything in the theory. Creation and annihilation of particles (and hence the well-defined concept of a particle) is fundamental to the construction of the theory itself, perturbative or not.
Particles are page 2 of any QFT textbook because the free particle is the only system we can exactly solve. In practice, that solution is usually used as the basis for a perturbative expansion.
That doesn't validate your assertion that particles are just "an approximation". Just because it's used in perturbation theory doesn't mean it's exclusive to it.
You're also manifestly wrong on "the free particle is the only system we can exactly solve".
Okay, it's not "the only" system we can exactly solve, but it's 99% of what we solve in practice, and it's the exact solution you'll see over and over again in QFT 1.
The free particle solution is an approximation to reality, because reality includes interactions. There's a mathematical formalism to this that we'd agree on, but you might disagree about how to describe it in words.
False in multiple ways.
QFT doesn’t discard local fields and replace them with only nonlocal graph nodes.
Maybe this is coming from some speculative quantum gravity ideas.
>There's no locality.
How so? QFT is Lorentz invariant. Even has such a thing as the norm flux.
My bad; QFT actually postulates locality. I was thinking about causal set theory, which strives to solve some of QFT's difficulties, and where locality is an emergent / statistical phenomenon rather than a postulated condition.
> Lorentz invariance is also violated in QFT assuming non-zero temperature.
If you couple your system to a heat bath that is at rest wrt a specific Lorentz frame, you of course lose Lorentz invariance. On the other hand, the Lagrangian of the Standard Model itself is to my knowledge fully Lorentz invariant.
I don't know what they talk about there, but it sounds like some kind of thermodynamic approximation is involved there. Does thermodynamics survive Lorentz transformation?
just because QFT follows an internal logic, doesn't mean the jump from macro physics to quantum physics itself is logical. In my opinion we still don't have a logical explanation for why the model changes so dramatically from classical to quantum physics.
The Universe is fundamentally quantum in nature; if anything, we'd need a model that explains why classical physics works so well most of the time.
As a naïve fool with no understanding of quantum physics, I want to take a stab at this! Here’s my hypothesis:
Consider a world in which everything is “very quantum”, and there are no easy approximations which can generally be relied on. In such a world, our human pattern-matching behavior would be really useless, and “human intelligence” in the form we’re familiar with would have no evolutionary advantage. So the only setting in which we evolve to be confused by these phenomena is one where simple approximations do work for the scales we occupy.
Sincerely, I don’t think this argument is super good. But it’s fun to propose, and maybe slightly valid.
The main objection is: if there wasn't a classical limit, our brains would have evolved differently.
So yes, we can use the anthropic argument as evidence for the existence of the classical limit, but it doesn't have explanatory power for why there is a classical limit.
This is called the anthropic principle. I personally have objections to it, specifically that due to emergence it is hard to make definitive statements about what complex phenomena may emerge in alternate universes. However, it's taken seriously by many philosophers of physics and certainly has merit.
Isn't that an argument from ignorance? You can consider a class of physics similar enough to our physics; it should give enough space for research.
My point is that it isn't possible to determine the emergent behaviour of a complex system from first principles. So an argument of the type "these physics don't result in atoms being produced, so life can't emerge" doesn't imply that other complex structures _like_ life don't emerge.
Then how do we make technology if we don't know the result?
Technology is made iteratively by repeated trial and then observed error in the physical structures we've created (i.e. we build machines and then watch them fail to work properly in a particular way).
Technology that works in a different universe without atoms, would require us to be able to experiment within that universe if we wanted to produce technology that works there with our current innovation techniques.
I'm a fool too but two things I remember. One was a paper discussing the thermodynamics of groups of particles. When they have strong interactions with nearby particles classic behavior emerges very quickly as the number of particles increases. And not n equals 1 million, or 1000, but more like two dozen.
And then there was Feynman asked to explain in layman's terms how magnets work. And he said I can't. Because if I taught you enough to understand you wouldn't be a layman. But he said it's just stuff you're familiar with but at a larger than usual scale. And he hinted even then one level down and you run out of why's again.
> have no understanding of quantum physics
But you know about the Anthropic Principle :)
We do have a model. That’s statistical physics.
Any standard course goes over various derivations of classical physics laws (Newtonian dynamics) from quantum mechanics.
I did study physics, and our statistical physics lecture only derived thermodynamic laws.
We also had a somewhat shoddy derivation of Newton's laws from the Schrödinger equation, but it wasn't really satisfactory either, because it doesn't really answer the question of when I can treat things classically.
What I'd really like (and haven't seen so far, but also haven't searched too hard) is the derivation of an error function that tells me how wrong I am to treat things classically, depending on some parameters (like number of particles, total mass, interaction strength, temperature, whatever is relevant).
(Another thing that drove me nuts in our QM classes was that "observations" were introduced as: a classical system couples to a quantum system. Which presupposes the existence of classical systems, without properly defining or delineating them. And here QM was supposed to be the more fundamental theory.)
>What I'd really like (and haven't seen so far, but also haven't searched too hard) is the derivation of an error function that tells me how wrong I am to treat things classically, depending on some parameters (like number of particles, total mass, interaction strength, temperature, whatever is relevant).
There are plenty of ways to do this and things like Wigner functions literally calculate quantum corrections to classical systems.
But generally, if you can't even measure a system before its quantum state decoheres, then its quantum status is pretty irrelevant.
I.e. the time it takes for a 1 micrometer wide piece of dust to decohere is ~10^-31 s and it takes a photon ~10^-12 s to cross its diameter. So it decoheres 10 billion billion times faster than a photon could even cross it.
The error is usually taken as the ratio of wavelength to your desired precision, but in general it depends on your use case: sometimes you have full precision all the way down, sometimes you have insufficient precision even at astronomic scale. Quantum physics doesn't have an absolute scale cutoff.
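A crude version of such an estimate, as a sketch only (one common rule of thumb: compare the thermal de Broglie wavelength to the length scale you care about; the masses below are illustrative):

    # Rough classicality check: if the thermal de Broglie wavelength of an
    # object is vastly smaller than any length scale you care about, treating
    # it classically introduces negligible error. A rule of thumb, not a
    # rigorous error bound.
    import math

    h = 6.626e-34    # Planck constant, J*s
    kB = 1.381e-23   # Boltzmann constant, J/K

    def thermal_de_broglie(mass_kg, temp_K):
        return h / math.sqrt(2 * math.pi * mass_kg * kB * temp_K)

    print(thermal_de_broglie(9.109e-31, 300))  # electron at 300 K: ~4e-9 m
    print(thermal_de_broglie(1e-15, 300))      # ~1 um dust grain: ~1e-16 m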
I started writing a response about how the human brain is designed to operate in an environment where classical physics is the norm, so we need to bridge the deviations from that if we are to really understand the world. But I don't know how much that's really true if you consider neural biology, and I won't claim to know where quantum stops and classical begins as it relates to brain function.
You need quantum physics to understand how chemistry works.
So, given that chemistry plays a huge role in how the human (or any) brain works, it would be quite a stretch to argue that the brain works with classical physics.
We are often sloppy and sort all the chemistry in with classical physics, but that's a very human-centric approach. In reality, the Universe doesn't have different "domains" with separate rules for chemistry and physics; it evolves according to the Schrödinger equation, and we use Chemistry as an abstraction to not have to deal with nasty mathematics to predict how certain reactions will work.
I think the parent was really referring to "mind" instead of "brain". It's not the hardware of the brain that's classical, but our sense perception and model of the world.
I do think there's something to this approach though - our sensory organs and processing ability are not abstract powers of understanding the universe - they developed exactly to give us enhanced survival chances. We should not expect to even be able to detect (let alone intuitively understand) aspects of reality that can't be used for survival.
I do understand the point you’re making but my counter argument to that would be that physics hasn’t relied on our sensory input for a hundred years or more.
It’s been almost entirely based on maths and careful measurements from machined instruments purpose built for observing phenomena.
So at this point you’d hope the limitations of our biological senses would have been long surpassed.
It's impossible to surpass the physical limits of our biological senses. Can a person blind from birth form visual mental images?
We can try to retrain and reuse the sense for other purposes though. I'm reminded of that film "The Zero Theorem".
>our [...] processing ability are not abstract powers of understanding the universe
Neural nets are called universal approximators for a reason. If what you guys are discussing is true, then a neural net would not be able to learn from a dataset about quantum experiments. I doubt this is the case. Also there is quantum cognition, and by that I mean the fact some researchers figured out a lot of puzzling results from experimental cognitive science seem to make more sense once analyzed from a quantum perspective.
Humans are affected by prejudice, so learning straight from a dataset is easier said than done, but possible, yes.
Yes, though our cells have machinery that does use quantum phenomena.
>In my opinion we still don't have a logical explanation for why the model changes so dramatically from classical to quantum physics.
I think you have this backwards. QM IS the law of the universe and Classical Physics is just a high mass low energy approximation of it. In any case there doesn't need to be a logical explanation at all, the laws of physics are as they are. Why is the value of the fine structure constant what it is?
s/FOR/DO/
Observation is more important than model; if we take the model too seriously, we can be led astray. It's much like extending a metaphor too far.
We observe double-slit diffraction and model it with the wave-function. This doesn't preclude other models, and some of those models will be more intuitive than others. The model we use may only give us a slice of insight. We can model a roll of the dice with a function with 6 strong peaks and consider the state of the dice in superposition. The fact that the model is a continuous real function is an artifact of the model, a weakness not a strength. We are modeling a system whose concrete state is unknown between measurements (the dice is fundamentally "blurred"), and we keep expecting more from the model than it wants to give.
Programmers may have better models, actually. The world is a tree where the structure of a node births a certain number of discrete children, each with a certain probability, one to be determined "real" by some event (measurement), but it says little about "reality". The work of the scientist is to enumerate the children and their probabilities for ever more complex parent nodes. The foundations of quantum mechanics may be advanced by new experiments, but not, I think, by staring at the models hoping for inspiration.
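A minimal sketch of that tree picture (purely illustrative; the node structure, child outcomes, and probabilities are all invented for the example):

    # The "world as a probability tree" picture: each node enumerates discrete
    # child outcomes with probabilities, and a "measurement" picks one child to
    # treat as real. Not a physical model, just the shape of the idea.
    import random

    def children(state):
        # Hypothetical enumeration; a real model would derive these outcomes
        # and probabilities from the physics of the parent state.
        return [(state + "0", 0.5), (state + "1", 0.5)]

    def measure(state):
        outcomes = children(state)
        r, acc = random.random(), 0.0
        for child, p in outcomes:
            acc += p
            if r < acc:
                return child
        return outcomes[-1][0]

    state = "root"
    for _ in range(3):
        state = measure(state)  # one branch becomes "real" at each measurement
    print(state)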
The models of quantum mechanics have already withstood experiments to a dozen decimal places. You aren't going to find departures just by banging around in your garage; you just can't generate enough precision.
The only way forward at this point is to start with the model and design experiments focusing on some specific element that strikes you as promising. Unless you're staring at the model you're just guessing, and it's practically impossible that you're going to guess right.
>You aren't going to find departures just by banging around in your garage
This kind of rhetoric saddens me. Someone says "design an experiment" and you jump to the least charitable conclusion. That people do this is perhaps understandable, but to do it and not get pushback leads to it happening more and more, to the detriment of civil conversation.
No, the experiment I had in mind would take place near the Schwarzschild radius of a black hole. This would require an enormous effort, and (civilizational) luck to defy the expectations set by the Drake equation/Fermi paradox. It's something to look forward to, even if not in our lifetimes!
> No, the experiment I had in mind would take place near the Schwarzschild radius of a black hole
I think the GP was thinking of more practical experiments, not science fiction.
I mean you did just suggest that classical QM can be supplanted by your heavily underspecified finite(?)-state model for which you provide essentially no details, you must admit that's pretty crank-y behaviour.
This is one of the reasons I believe science and technology as a whole are on an S-curve. This is obviously not a precise statement and more of a general observation, but each step on the path is a little harder than the last.
Whenever a physics theory gets replaced it becomes even harder to make an even better theory. In technology low hanging fruit continues to get picked and the next fruit is a little higher up. Of course there are lots of fruits and sometimes you miss one and a solution turns out to be easier than expected but overall every phase of technology is a little harder and more expensive.
This actually coincides with science. Technology is finding useful configurations of science, and practically speaking there are only so many useful configurations for a given level of science. So the technology S-curve is built on the science S-curve.
I don't think this is strictly true. Rather, it seems that the problem is that we, at some point, invariably assume the truth of something that is false, which then makes it really difficult to move beyond that because we're working off false premises, and relatively few people are going out of their way to go back in time and challenge/rework every single assumption, especially when those assumptions are supported by decades (if not centuries) of 'progress.'
An obvious example of this is the assumption of the geocentric universe. That rapidly leads to ever more mind-bogglingly complex phenomena like multitudes of epicycles, planets suddenly turning around mid-orbit, and much more. It turns out the actual physics are far more simple, but you have to get past that flawed assumption.
In more modern times relativity was similar. Once it became clear that the luminiferous aether was wrong, and that the universe was really friggin weird, all sorts of new doors opened for easy access. The rapid decline in progress in modern times would seem most likely to suggest that something we are taking as a fundamental assumption is probably wrong, rather than that the next door is just unimaginably difficult to open. This is probably even more true given the vast numbers of open questions for which we have de facto answers, but yet they seem to defy every single test of their correctness.
---
All that said, I don't disagree that technology may be on an s curve, but simply because I think the constraints on 'things' will be far greater than the constraints on knowledge. The most sophisticated naval vessel of modern times would look impressive but otherwise familiar to a seaman of hundreds or perhaps even thousands of years ago. Even things like the engines wouldn't be particularly hard to explain because they would have known full well that a boiling pot of water can push off its top, which is basically 90% of the way to understanding how an engine works.
It's true that Ptolemaic cosmology stuck thinkers in a rut for a very long time; but what got us out of that rut was observation (and simplification). Copernicus saw that heliocentrism led to a simpler model that fit observation better (ironically he wanted to recover Ptolemy's perfectly circular orbits!). In turn, Kepler's perfectionism led him to ditch the circular orbit idea to yield the first accurate description of orbits as ellipses. Yes, transgression against long-held belief was necessary to move forward, but in every case the transgression explained observation. Transgression itself is undesirable. In fact, transgression unmotivated by observation is what powers the dark soul of the "crank", who is at best a time-waster and at worst a spreader of mental illness.
Even Einstein did not produce (e.g. special relativity) out of whole cloth. He provided a consistent conceptualization of Lorentz contraction, itself the result of observing discrepancies in the motion of Jupiter's moons. The same could be said of the photoelectric effect, the ultraviolet catastrophe, and QM.
All this to say that your statement "The rapid decline in progress in modern times would seem most likely to suggest that something we are taking as a fundamental assumption is probably wrong" is unsupported. Nothing could be more popular than questioning fundamental assumptions in science today!
It could very well be that, as Sean Carroll puts it, we really know how everything larger than the diameter of a neutron works! Moreover, we know that even if we find strangeness at tiny scales, our current theories WILL remain valid approximations, just like Newtonian mechanics are valid approximations of special and general relativity. The path to progress will not happen because a rogue genius finds something everyone missed and boldly questions assumptions long-held. Scientific revolution first requires an observation inconsistent with known models, but even the LHC hasn't given us even one of those. There is reason to think that GR, QM, and the standard model are all there is...until we do some experiments near a black hole!
> Copernicus saw that heliocentrism led to a simpler model that fit observation better.
That's not true, he didn't.
The geocentric model of the time was a better fit to the data than the Copernican model. What the Copernican model had was simplicity (at some cost to observational data fidelity).
Making the heliocentric model approach (and eventually surpass) the accuracy obtained by the geocentric model took a lifetime of work by many people.
As a kinematic model (a description of the geometry of motions) as observed from Earth's reference frame, geocentrism is still pretty darn accurate. There's a reason why it is so. Compositions of epicycles are a form of Fourier analysis -- they are universal approximators. They can fit any 'reasonably well behaved' function. The risk (and it's the same risk with ML and deep neural nets) is that one (i) could overfit and (ii) could generate a model with high predictive accuracy without it being a causal model that generalises.
The heliocentric model was proposed much, much earlier than Copernicus, but the counterarguments were non-ignorable. Reality, it turned out, was very surprising and unintuitive.
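To make the 'epicycles are Fourier analysis' point concrete, a toy sketch (illustrative only; the sample "orbit" and number of terms are made up): approximating a periodic apparent motion by a sum of uniformly rotating circles is exactly fitting a truncated complex Fourier series.

    # Epicycles as Fourier analysis: any reasonably smooth periodic apparent
    # motion z(t) in the plane can be approximated by a sum of uniformly
    # rotating circles (deferent + epicycles), i.e. a truncated Fourier series.
    import cmath

    def epicycle_fit(samples, n_terms):
        """Fit n_terms rotating circles to complex positions over one period."""
        N = len(samples)
        coeffs = {}
        for k in range(-(n_terms // 2), n_terms // 2 + 1):
            coeffs[k] = sum(z * cmath.exp(-2j * cmath.pi * k * m / N)
                            for m, z in enumerate(samples)) / N
        return lambda m: sum(c * cmath.exp(2j * cmath.pi * k * m / N)
                             for k, c in coeffs.items())

    # A made-up, non-circular periodic "orbit" as seen from Earth.
    N = 200
    orbit = [cmath.exp(2j * cmath.pi * m / N) + 0.3 * cmath.exp(-6j * cmath.pi * m / N)
             for m in range(N)]
    model = epicycle_fit(orbit, 9)
    print(max(abs(model(m) - orbit[m]) for m in range(N)))  # tiny residual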
Truth be told, I don't know much about Copernicus. He may indeed have been right but for the wrong reasons! If so, he's a very good example against my point that observation must precede successful revolution. It seems strange that the Catholic church took him so seriously if his claim was supported by his enthusiasm and not observation. It's definitely something I'd like to learn more about - any book recommendations?
This history is absolutely fascinating. Let me find a blog post by Baez that covers a lot of that history.
I don't think this history says anything against your point -- sometimes the time is just not right for the idea -- and even classical science can be very unintuitive and weird, so much so that common sense seems like very strong counter arguments against what eventually turn out to be better models.
I of course learned this over many books, but the mind blanks out over which one to suggest. I think biographies of Copernicus and Kepler would be good places to start.
Edit: you may find this interesting:
https://news.ycombinator.com/item?id=42347533
HN, do you know what happened to John Baez's blog that listed his multipart blog posts? They are a treasure trove that I do not want to lose. Azimuthproject too seems to have disappeared.
As a tangential hit on this issue, the relationship between the Catholic Church and science [1] is an interesting read. It's nowhere near as antagonistic as contemporary revisionist takes would suggest. In particular the most famed example of this is with Galileo (whose name is mentioned no less than 146 times on that fairly short page...) yet that was far more about interpersonal issues than his concepts being an affront to theology. He wrote a book calling the Pope (at the time very much one of his supporters), through a barely veiled proxy, a simple-minded idiot. Burning bridges is bad enough, but burning one you're standing on is lunacy.
If one does genuinely believe in a God then the existence of science need not pose a threat to that, since there's nothing preventing one from believing that God also then created the sciences and rationality of the universe. The classical 'gotchas' like 'Can God create a stone so heavy that he could not lift it?' were trivial to answer by simply accepting that omnipotence does not extend to things which are logically impossible, like a square circle.
[1] - https://en.wikipedia.org/wiki/Science_and_the_Catholic_Churc...
LLMs could have kept the geocentric theory alive for another hundred or more years! Awesome.
I especially like your last paragraph. Even if our fundamental assumptions are wrong, current theories still work very well within appropriate bounds. And those bounds basically contain all practical scenarios here on earth. That's a big reason why it's hard to make progress on string theory, because we can't create scenarios extreme enough here on earth to test it.
So even if our fundamental assumptions are wrong and some new theory is able to explain a bunch of new stuff, chances are it won't impact the stuff we can practically do here on earth, because scientists have already been doing the most extreme experiments they can, and so far progress is still stalled on fundamental physics.
Copernicus and Kepler did interpretations, not observations, they explained observations, but geocentrism explained observations too, so heliocentrism wasn't unquestionably superior.
Heliocentrism from its earliest formulation was pretty bad for many reasons, including, as you mentioned, the desire to maintain circular orbits, as well as uniform velocities, epicycles, and more. You could easily pick a million holes in heliocentrism to 'disprove' it. And the geocentric view, as convoluted as it was, was observably accurate and predictive, with 'holes' being plugged by simply having the entire dysfunctional model absorb them - e.g. by simply assuming retrograde motion as a natural phenomenon, and otherwise - just add more epicycles.
Heliocentrism was most fundamentally driven by somebody with extremely poor interpersonal skills (which, much more than the theory itself, is the reason he was left living his final days under house arrest), moving forward on his own somewhat obsessive bias.
Similarly with relativity. I have no idea what you mean by a 'consistent conceptualization' of Lorentz contraction, but length contraction was a completely ad hoc explanation for the Michelson-Morley experiment. Its correctness was/is more incidental than anything else. Einstein did not cite Lorentz (or anybody for that matter), and I do not think that was unfair or egotistical of him.
--
I'm also unsure of what you're referencing with Sean Carroll, but I'd offer a quote from Michelson of the Michelson-Morley experiment saying essentially the same, "The more important fundamental laws and facts of physical science have all been discovered, and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote.... Our future discoveries must be looked for in the sixth place of decimals."
So convinced was Michelson that the 'failure' of his experiment was just a measurement issue that he made that comment in 1894, close to a decade after his experiment and shortly before physics and our understanding of the universe exploded thanks to a low-ranking patent clerk.
Max Planck famously said, "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."
Now we know how to prevent it: popularize ideas like "physics is mathematics", "shut up and calculate", "it's useless philosophy not worth thinking about", "nobody can understand it, so it's useless to even try". Also a nice excuse for ignorance.
>I have no idea what you mean by a 'consistent conceptualization' of Lorentz contraction, but length contraction was a completely ad hoc explanation for the Michelson-Morley experiment. Its correctness was/is more incidental than anything else. Einstein did not cite Lorentz (or anybody for that matter), and I do not think that was unfair or egotistical of him.
In "On the Electrodynamics of Moving Bodies"[1] Einstein checks his derivation against Lorentz contraction. It's on page 20 of the referenced English translation. Lorentz' model was ad hoc, E derived it with only 2 postulates (equivalence principle; c invariance). Lorentz was indeed cited, and the cite is useful to connect E's theory to real-world observation. This is true whether or not you want to get pedantic about the meaning of "cite" vs "reference".
1 - https://www.fourmilab.ch/etexts/einstein/specrel/specrel.pdf Originally "Zur Elektrodynamik bewegter Koerper"
> The rapid decline in progress in modern times would seem most likely to suggest that something we are taking as a fundamental assumption is probably wrong, rather than that the next door is just unimaginably difficult to open.
We actually know we have:
Bell’s inequality tells us that the universe is non-local or non-real. We originally preferred to retain locality (ie, Copenhagen interpretation) but were later forced to accept non-locality. But now we have a pedagogy and machinery built on this (incorrect) assumption — which people don’t personally benefit from re-writing.
Science appears trapped in something all too familiar to SDEs:
A technical design choice turned out to be wrong, but a re-write is too costly and risky for your career, so everyone just piles on more tech debt — or modern epicycles.
And perhaps that’s not a bad thing, in and of itself. Eg, geons were initially discarded because the math doesn’t work out — but with the huge asterisk that they might still be topologically stabilized. But the math there is hard and so it makes sense to continue piling onto the current model until enough advances in modeling (eg, 4D anyons) allow for exploring that idea again.
Similar to putting off moving tech stacks until someone else demonstrates it solves their problems.
But at least topological geons would explain one question: why does space look like geometry but particles look like algebra?
Because topological surgery looks like both!
- - - -
> clear that the luminiferous aether was wrong
Another interpretation is that the aether exists, but we’re also made of aether stuff — so we squish when we move, rather than rigidly moving through it (as per the theory tested by Michelson-Morley). That squishing cancels out the expected measurement in MM. LIGO (a scaled MM experiment) then works because waves in the aether squish and stretch us in a detectable way.
Modern theories are effectively this: everything is fields, which we believe to be low-energy parts of some unified field.
It's an S-curve only so long as intelligence doesn't increase exponentially as well. What would the story look like if an ASI existed?
It's just accelerated. AI is bound by physics just like everything else.
The S-curve is really about fundamental limits. Lets say ASI helps us make multiple big leaps ahead, I mean mind blowing stuff. That still doesn't change that there must be a limit somewhere. The idea that science and tech is infinite is pure science fiction.
Exponential increases in intelligence doesn’t imply that the universe is more complex to compensate.
The first turn in an S-curve can easily look like an exponential. ASI has physical limitations, so I don’t see why it wouldn’t take an S-curve as well, although at a much different rate than human intelligence.
To be fair quantum mechanics was invented by guessing that energy might be quantized. It just happened to model the universe well.
Waves are quantized (one wave, two waves, ...), so energy transfers by waves are quantized too.
What you are describing is periodicity. That’s different from quantization.
One particular model: the electron g-factor.
Now go look up how precise a prediction the same model makes for the muon g-factor.
That is true for classical probability, but the idea that unknown quantities determine the outcomes in quantum mechanics has been disproven, given that the speed of light is a true limit on communication speed. This is known as "Bell's theorem."
Bell's Theorem disproves local hidden variables.
Reality can be interpreted as non-local. There has been no conclusive proof it isn't.
c isn't a limit on the kind of non-locality that is required, because you can have a mechanism that appears to operate instantaneously - like wavefunction collapse in a huge region of space - but still doesn't allow useful FTL comms.
Bell's Theorem has no problem with this. Some of the Bohmian takes on non-locality have been experimentally disproven, but not all of them.
The Copenhagen POV is that particles do not necessarily exist between observations. Only probabilities exist between observations.
So there has to be some accounting mechanism somewhere which manages the probabilities and makes sure that particle-events are encouraged to happen in certain places/times and discouraged in others, according to what we call the wavefunction.
This mechanism is effectively metaphysical at the moment. It has real consequences and was originally derived by analogy from classical field theory, with a few twists. But it is clearly not the same kind of "object" as either a classical field or particle.
There may be no conclusive proof, but it's a philosophically tough pill to swallow.
Non-locality means things synchronise instantly across the universe, can go back in time in some reference frames, and yet reality _just so happens_ to censor these secret unobservable wave function components, trading quantum for classical probability so that it is impossible for us to observe the difference between a collapsed and uncollapsed state. Is this really tenable?
Strip back the metaphysical baggage and consider the basic purpose of science. We want a theoretical machine that is supplied a description about what is happening now and gives you a description of what will happen in the future. The "state" of a system is just that description. A good _scientific_ theory's description of state is minimal: it has no redundancy, and it has no extraneous unobservables.
Why isn’t the accounting mechanism a quantum extension of the principle of least action?
De Broglie–Bohm theory is a hidden-variable theory but does not allow for FTL communication.
My understanding is that it is not that simple: pilot-wave theories are not the traditional hidden-variable theories. While some setups look very simple in pilot-wave compared to, say, the Schrödinger equation, other setups are as unintuitive in pilot-wave as the Schrödinger equation is in others.
My lightly held conclusion is that if it really were a full and more straightforward solution, it would dominate the conversation more than it does now. This opinion was formed reading some primary sources but mostly reviews and comparisons of QM theories. Unlike with other methodologies, I have never worked through a full QM example problem in pilot-wave theory.
I'm not sure what the point is you're trying to make. OP claimed
> the idea that unknown quantities are determining the outcomes in quantum mechanics has been disproven in the event of the speed of light being a true limit on communication speed.
and I provided an immediate counterexample. Yes, Bell's Theorem and its exact assumptions are not entirely straightforward but let's please stop propagating those falsehoods that die-hard proponents of the Copenhagen interpretation commonly propagate.
Let me throw in "Hydrodynamic Quantum Analogs" [1] as a fascinating review of how quantum effects emerge in experiments with bouncing oil drops on liquid. This is fully a pilot wave driven experiment and there has been a lot of academic work analyzing the system and trying to fit it into the de Broglie-Bohm formulations of quantum dynamics.
To quote section 10.2: "The [experimental] system represents a classical realization of wave–particle duality as envisaged by de Broglie, wherein a real object has both wave and particle components."
We've already got all those fields interacting in the real world, so I don't find it very far fetched that quantum mechanics emerges from their fully classically described interactions, probably expressed in some really gnarly 4D math.
[1] https://thales.mit.edu/bush/wp-content/uploads/2021/04/BushO...
Tim Maudlin's "Philosophy of Physics: Quantum Theory" makes for an excellent read! It addresses tons of questions which are rarely answered (let alone asked) in your run-of-the-mill university-level QM class.
> The foundations of quantum mechanics may be advanced by new experiments, but not, I think, by staring at the models hoping for inspiration.
To come up with new experiments that might shed light it certainly helps to spend time exploring the models to come up with new predictions that they might make. Sure, one can also come up with new experiments based only on existing observations, but it's most interesting when we can make predictions, as testing those advances some theories and crushes others.
A model is supposed to be accurate. When it's inaccurate, you should understand where and how it's inaccurate and not just become agnostic.
The trouble with QM is with its interpretations, not with the accuracy of its predictions. The latter informs interest in the former. QM works, but the models imply that nature is neither "local" - e.g. entanglement experiments undermine hidden-variables - nor "real" - e.g. a particle does not have a momentum (or position) until you measure it. These physical properties are not just hidden, they are undefined. These implications fly in the face of basic macroscale intuitions about what "physical reality" means, which makes it interesting. Inconsistency is a signal that we have discoveries yet to make. Note that "Many worlds" people think there is no inconsistency - my sketch of a model is fully consistent with that interpretation, if you wish, by simply assigning a new universe to every child node in which the node is reached.
What you say doesn't quite correspond to quantum physics as it's known. Quantum physics is quantitative and precise, so it's difficult to say there's something undefined there. It doesn't suggest nonlocality; the absence of hidden variables means only the absence of hidden variables. It doesn't suggest antirealism, if only due to precision; you can say it doesn't work how you want, but at worst this makes it unintuitive. Conversely, the Dirac formalism works as if the quantum state exists in itself in precise form, which has a good compatibility with basic macroscale intuitions about what "physical reality" means.
But quantum physics can't predict exactly where the individual dots on the detector will be, only their distribution. That does not sound totally quantitative and precise and defined. You would not accept such predictions for macroscopic objects :)
It can't do it, because you want it to make a classical prediction. It shouldn't be able to predict what doesn't happen.
I want any theory to predict what actually happens, which is individual dots at detector.
Or at least some clear statement how comes our reality is not like that.
At least it shouldn't be nonlocal just because of the erroneous rumor that Bell proved that quantum physics is nonlocal or because randomness, nonlocality and retrocausality are just directly observed.
Would you be satisfied if the theory clearly states: "At the time of measurement, the position of the photon interaction is determined by randomly sampling from the quantum distribution"?
Einstein was not satisfied afaik. Plus by this you're proving we haven't moved on since, which isn't very satisfying either.
Your 6-sided dice example sort of brings some focus to his argument of "it's not a real wave, it's a math wave". The result of a 6-sided dice roll exists more in our minds as "math dice" because, for most people, if you rolled and it fell in a sewer, got lost, etc., you wouldn't consider the roll complete until you grabbed a different die and rolled it. It's more attached to the person rolling it and the resulting 'what does the number affect'.
>The fact that the model is a continuous real function is an artifact of the model, a weakness not a strength.
The wave function is the square root of a probability distribution. The wavefunction is a continuous real function of position because position is modeled as a continuous real variable. The idea of the wavefunction as a function of position is generally supported by the fact that it can be used to predict the measurement results of diffraction experiments like the double-slit experiment, but also practically the whole field of X-ray diffraction.
There is not just one experimental result that is explained by wavefunctions. There are widely used measurement techniques whose outcomes are calculated according to the quantum properties of matter — like X-ray diffraction and Raman scattering — which are widely considered to be extremely reliable. There is a good reason to explain the model of reality expressed by the equations as clearly as possible, because we want people to be able to use the equations.
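For a concrete (and heavily idealized) example of that kind of prediction, here is a sketch of the textbook two-slit intensity pattern from superposed amplitudes; the wavelength and slit geometry below are made-up numbers:

    # Idealized far-field two-slit pattern: superpose the complex amplitudes
    # from the two slits and square the modulus to get the predicted relative
    # intensity on a distant screen. All numbers are made up for illustration.
    import cmath

    wavelength = 500e-9        # 500 nm light
    slit_separation = 50e-6    # 50 micrometers
    screen_distance = 1.0      # 1 meter

    def intensity(x):
        delta = slit_separation * x / screen_distance      # path-length difference
        phase = 2 * cmath.pi * delta / wavelength
        amp = 1 + cmath.exp(1j * phase)                     # sum of two unit amplitudes
        return abs(amp) ** 2                                # Born rule: |amplitude|^2

    for i in range(9):                                      # positions across the screen
        x = i * 2.5e-3
        print(f"{x * 1000:5.1f} mm  {intensity(x):.3f}")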
Plenty of people (though certainly not all) expect quantum mechanics to be eventually modified to have a consistent theory of gravity. But physicists have experience with this. Special relativity and classical quantum mechanics were both more complex than Newtonian (classical) mechanics, and quantum field theory is more complicated than either. General relativity is substantially more involved than special relativity. It is likely that further extensions will continue to get worse.
The model of reality taught by Newtonian (classical) mechanics is also still widely discussed and used in introductory physics courses and many areas of physics (such as fluid dynamics) and engineering. This model also discusses position on the real line. Even though classical mechanics had to be modified, the use of Cartesian coordinates and real numbers turned out to be durable.
Usually the finitists will formally "rescue" countability by suggesting that the world could exist on the computable numbers, which are countable and invariant under computable rotations. But the computable numbers are a very unsatisfying model of reality, and have a lot of the same "weirdness" as the real numbers. Therefore they suggest that some other model must exist without giving a lot of specifics. Why this should be somehow helpful and not injurious to the pedagogy of physics is not clear.
Finally! Too much of physics is obsessed with the map and not the territory.
This is how you get the tortured reasoning that views measurement and observation as somehow different. Even Einstein struggled.
Doesn't the difference between measurement and observation stem from an extension of the double slit experiment discussed in this article?
If you place a detector on one of the two slits in the prior experiment (so that you measure which slit each individual photon goes through), the interference pattern disappears.
If you leave the detector in place, but don't record the data that was measured, the interference pattern is back.
> If you leave the detector in place, but don't record the data that was measured, the interference pattern is back.
This is not remotely true. It looks like you read an explanation of the quantum eraser experiment that was either flawed or very badly written, and you're now giving a mangled account of it.
I have heard similar things but this is THE most deeply weird result and I’ve never heard a good explanation for the setup.
A lot of people pose it as a question of pure information: do you record the data or not?
But what does that mean? The “detector” isn’t physically linked to anything else? Or we fully physically record the data and we look at it in one case vs deliberately not looking in the other? Or what if we construct a scenario where it is “recorded” but encrypted with keys we don’t have?
People are very quick to ascribe highly unintuitive, nearly mystical capabilities with respect to “information” to the experiment but exactly where in the setup they define “information” to begin to exist is unclear, although it should be plain to anyone who actually understands the math and experimental setup.
It's a little simpler than you're thinking: only fully matching configurations (of all particles etc) can interfere. If you have a setup where a particle can pass through one of two slits and then end up in the same location (with the same energy etc) afterward, so that all particles everywhere are in the same arrangement including the particle that passed through one of the slits, then these two configurations resulting from the possible paths can interfere. If anything is different between these two resulting configurations, such as a detector's particles differently jostled out of position, then the configurations won't be able to interfere with each other.
An interesting experiment to consider is the delayed-choice quantum eraser experiment, in which a special detector detects which path a particle went through, and then the full results of the detector are carefully fully stomped over so that the particles of the detector (and everything else) are in the same exact state no matter which path had been detected. The configurations are able to interfere once this erasure step happens and not if the erasure step isn't done.
Another fun consequence of this all is that we can basically check what configurations count as the same to reality by seeing if you still get interference patterns in the results. You can have a setup where two particles 1 and 2 of the same kind have a chance to end up in locations A and B respectively or in locations B and A, and then run it a bunch of times and see if you get the interference patterns in the results you'd expect if the configurations were able to interfere. Successful experiments like this have been done with many kinds of particles including photons, subatomic particles, and atoms of a given element and isotope, implying that the individual particles of these kinds have no unique internal structure or tracked identity and are basically fungible.
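A cartoon of that rule (an illustrative sketch, not a simulation of any particular experiment): when everything else ends up in the same state for both paths, the amplitudes add before squaring; when a which-path record is left behind, the intensities add instead and the cross term, the interference, is gone.

    # Only matching configurations interfere: with no which-path record the two
    # path amplitudes add coherently; with a record, intensities add and the
    # interference (cross) term vanishes. Relative intensity at one screen spot.
    import cmath

    def coherent(phase):
        amp = (1 + cmath.exp(1j * phase)) / 2      # paths end in identical configurations
        return abs(amp) ** 2

    def which_path_recorded(phase):
        # Detector states for the two paths differ (are orthogonal), so no cross term:
        return abs(0.5) ** 2 + abs(0.5 * cmath.exp(1j * phase)) ** 2

    for phase in [0, cmath.pi / 2, cmath.pi]:
        print(f"phase {phase:.2f}:  coherent {coherent(phase):.2f}, "
              f"recorded {which_path_recorded(phase):.2f}")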
The interference pattern also disappears when the detector registers an absence of detection (a null result), which shouldn't change the properties of the particle.
If anything is different between the two resulting configurations of possibly affected particles, such as the state of the particles of the detector, then interference can't happen. It's not just about whether the individual particle going through one of the slits is in an identical location.
An important thing to realize is that interference is a thing that happens between whole configurations of affected particles, not just between alternate versions of a single particle going through the slit.
Do you have a reference for that last paragraph?
I'm not a physicist, but that doesn't really sound right. Might I ask you a reference or an explanation?
It is correct. There's SO MUCH weirdness surrounding the double slit.
https://en.wikipedia.org/wiki/Double-slit_experiment#Variati...
Hm, it says the observer-at-the-slit experiment hasn't been performed because it would absorb the photons. But it also says the experiment can be done with larger particles, so that shouldn't be a problem ...
I believe I first read about it in the book, Gödel, Escher, Bach.
"The Tao that can be told is not the eternal Tao."
The wave went through the two slits, for any normal everyday definition of "go through". Yes, you can say "the wave function is just a function that assigns an amplitude for the particle's presence at every (x, y, z, t) co-ordinate, it doesn't go anywhere". But that's no more valid than saying that a regular water wave is "just a function that assigns a height to the water at every (x, y, t) co-ordinate, it doesn't go anywhere".
There is a pattern to the wavefunction, where the amplitude at (x+delta, y, z, t+delta) is closely related to the amplitude at (x, y, z, t). (Specifically, it's that amplitude rotated by delta times the mass of the particle). Or, unless you're being wilfully obtuse, the wave packet moves from x to x+delta in time t to t+delta, rotating as it goes as quantum mechanical waves do.
You can, if you really want, insist in Zeno's paradox fashion that nothing ever goes anywhere, that things just exist at given places and times, and in a certain sense that's true. But there's nothing QM-specific about that, and it's misleading to complicate a discussion of QM by claiming so. If we allow that things can move through space, and waves can move through space, then the wave moves through the two slits in the normal sense of all those concepts.
I wish people would stop going out of their way to make QM sound confusing/weird/"spooky". Most of it is just normal wave behaviour for which you can observe exactly the same thing with everyday classical waves.
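For what it's worth, a tiny numerical check of that "rotating as it goes" picture for a single plane wave (natural units, made-up numbers; this glosses over the coordinate-time vs proper-time distinction):

    # Follow a relativistic plane wave along its packet's motion: the amplitude
    # a little later and a little further along is the same amplitude times a
    # pure phase rotation, at a rate set (up to a gamma factor) by the mass.
    import cmath

    m, k = 2.0, 1.5                      # made-up mass and wavenumber (hbar = c = 1)
    w = (k * k + m * m) ** 0.5           # dispersion relation: omega^2 = k^2 + m^2
    v = k / w                            # group velocity of the packet

    def psi(x, t):
        return cmath.exp(1j * (k * x - w * t))

    x, t, dt = 0.7, 0.3, 0.01
    ratio = psi(x + v * dt, t + dt) / psi(x, t)
    print(abs(ratio))                                # 1.0: the amplitude only rotates
    print(cmath.phase(ratio), -(m * m / w) * dt)     # rotation ~ mass * elapsed proper time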
> But that's no more valid than saying that a regular water wave is "just a function that assigns a height to the water at every (x, y, t) co-ordinate, it doesn't go anywhere".
I think this is a very important distinction actually. A wave amplitude represents an actual displacement in some medium, and waves interfere constructively/destructively because they both move the medium in the same/opposite direction at the same time at the same location. So when a water wave gets pushed through two slits, it breaks into two separate water waves, one coming from each slit, and those two waves push the water up and down at the same time at different locations.
But wavefunctions are very much not like that. A wavefunction amplitude does not represent a displacement in any kind of medium. It represents a measure of the probability that the system being described is in a particular state at a moment in time. That state need not even be a position; it might be a charge, or a spin, or a speed, or any combination of these. Basically, quantum systems oscillate between their possible states; they don't oscillate in space-time like matter affected by a wave does.
This also makes it very hard to conceptualize what it means for these wavefunctions to interfere. So the simple picture of "wave A and wave B are pushing the water up at the same time in the same location, so the water rises higher when both waves are there" is much harder to apply to probability oscillations than a direct comparison makes it out to be.
An additional problem when comparing wavefunctions to waves in a medium is that there is no source of a wavefunction. Any system you're analyzing has a single wavefunction, that assigns probability amplitudes to every possible configuration of that system. You can decompose the system's wavefunction as a sum of multiple wavefunctions corresponding to certain measurables, but this is an arbitrary choice: any such decomposition is exactly as valid. In matter waves, if I drop two stones in water at different locations, the water surface's movements can be described as a single wave, but there is a natural decomposition into two interfering waves each caused by one of the stones. There is no similar natural decomposition that quantum mechanics would suggest for a similar quantum mechanical system.
> when a water wave gets pushed through two slits, it breaks into two separate water waves, one coming from each slit, and those two waves push the water up and down at the same time at different locations.
What physically observable distinction are you drawing? The points of water on the far sides of the slits will have a certain height at each point at each time, forming the interference pattern you'd expect. You can decompose that function into a sum of two separate waves, if you want, but you don't have to. And exactly the same thing is true of the quantum mechanical wavefunction for a particle passing through a pair of slits.
> A wavefunction amplitude does not represent a displacement in any kind of medium; it represents (through its squared magnitude) the probability that the system being described is in a particular state at a moment in time. That state need not even be a position: it might be a charge, or a spin, or a speed, or any combination of these. Basically, quantum systems oscillate between their possible states; they don't oscillate in space-time the way matter affected by a wave does.
I don't think that's a real distinction. Water height is a different dimension from (x,y) position and it behaves very differently; that the wave is moving across the surface and that the surface is moving up and down are orthogonal facts, and the reason the former is movement isn't that the latter is movement. A classical electromagnetic wave moves even though it isn't in a medium that's moving (and so does e.g. a fir wave).
> You can decompose the system's wavefunction as a sum of multiple wavefunctions corresponding to certain measurables, but this is an arbitrary choice: any such decomposition is exactly as valid as any other. In matter waves, if I drop two stones in water at different locations, the water surface's movements can be described as a single wave, but there is a natural decomposition into two interfering waves, each caused by one of the stones. There is no similar natural decomposition that quantum mechanics would suggest for an analogous quantum mechanical system.
Again I don't think this is a real distinction. You have exactly that natural decomposition in the QM system - it's not the only valid decomposition, but it is a valid one and it has some properties that make it nice to work with. And similarly for dropping stones in the water, infinitely many other decompositions are possible and equally valid (e.g. decomposing as two copies of a wave where you dropped two half-sized stones into the water).
In classical mechanical waves like water waves or sound waves, the wave equations are just an approximation of the "real" movement happening. You can theoretically get the precise results of the movement of every part of the medium by just applying Newton's laws of motion to all of the particles of the medium (assuming they're coarse grained enough that you can ignore quantum effects, of course). That is, you can model the surface of the water and the rocks as little balls of mass connected by ideal springs and such, and you'll see that the movement of the water is directly caused by the two rocks, whose contact with the water pushes it here and there. And so, the "real" picture is two separate waves, and any other way to model the system, even if mathematically equivalent, is artificial in some sense.
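A toy version of that mass-and-spring picture (my own sketch, arbitrary parameters): integrate F = ma for a chain of masses coupled by springs, kick it with two "rocks", and check that the full motion equals the sum of the single-rock motions, i.e. the "two separate waves" decomposition.

```python
import numpy as np

# Toy "water surface": N masses in a line, each coupled to its neighbours by
# ideal springs. All parameters are arbitrary, for illustration only.
N, k_spring, m, dt, steps = 200, 1.0, 1.0, 0.05, 400

def simulate(initial_kick):
    """Integrate F = m a for the chain (semi-implicit Euler), given an initial velocity profile."""
    x = np.zeros(N)                 # vertical displacement of each mass
    v = initial_kick.copy()
    for _ in range(steps):
        # Spring force from the two neighbours, i.e. a discrete Laplacian; fixed ends.
        a = k_spring / m * (np.roll(x, 1) + np.roll(x, -1) - 2 * x)
        a[0] = a[-1] = 0.0
        v += a * dt
        x += v * dt
    return x

rock1 = np.zeros(N); rock1[50]  = 1.0    # "rock" strikes the surface at site 50
rock2 = np.zeros(N); rock2[150] = 1.0    # second rock at site 150

both     = simulate(rock1 + rock2)               # the full motion, one computation
separate = simulate(rock1) + simulate(rock2)     # "wave from each rock", added up
print(np.allclose(both, separate))               # True: the split is a valid bookkeeping choice
```

Because the system is linear, superposition holds exactly, which is why the per-rock decomposition feels so natural here even though other decompositions are mathematically just as good.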
Even for EM waves, the classical theory explains them somewhat mechanistically, as an interaction between electrical and magnetic forces that originate from the charged sources and self-propagate.
There is no similar picture you can draw for the quantum mechanical wavefunction. It's the base reality of the system, and it turns out in fact that Newton's laws can be derived as an approximation from the wavefunction. But there isn't any kind of "reason" for which the wavefunction does what it does, like there is for the water waves. And so any way of separating the wavefunction into different components is just as "natural" as any other.
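For the "Newton's laws as an approximation" claim, Ehrenfest's theorem is the usual statement. Here's a rough numerical check (my own sketch, hbar = m = 1, all numbers arbitrary) that a free Gaussian wavepacket's mean position follows the classical x(t) = x0 + (p0/m)t; for a free particle the correspondence is exact, while with a potential it holds only approximately for narrow packets.

```python
import numpy as np

hbar, m = 1.0, 1.0
N, L = 2048, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Gaussian wavepacket centred at x0 with mean momentum p0 and width sigma.
x0, p0, sigma = -30.0, 2.0, 3.0
psi = np.exp(-(x - x0) ** 2 / (4 * sigma ** 2) + 1j * p0 * x / hbar)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

def mean_x(psi):
    return np.sum(x * np.abs(psi) ** 2) * dx

dt, steps = 0.1, 100
for n in range(steps + 1):
    if n % 25 == 0:
        t = n * dt
        print(f"t={t:5.1f}  <x>={mean_x(psi):7.3f}  Newton: {x0 + p0 / m * t:7.3f}")
    # Free evolution: multiply by the kinetic phase exp(-i hbar k^2 dt / 2m) in momentum space.
    psi = np.fft.ifft(np.exp(-1j * hbar * k ** 2 * dt / (2 * m)) * np.fft.fft(psi))
```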
> you can model the surface of the water and the rocks as little balls of mass connected by ideal springs and such, and you'll see that the movement of the water is directly caused by the two rocks, whose contact with the water pushes it here and there. And so, the "real" picture is two separate waves
I don't see that that follows? Both wave pictures are a mathematical approximation to the "real" system of particles moving. Both are ways of calculating the same result. Neither is objectively more valid than the other. You can say that picturing it as "the wave from the first rock overlaid on the wave from the second rock" is an interpretation that makes more physical sense or is nicer to think about, but that's just as true in the QM case.
> Both wave pictures are a mathematical approximation to the "real" system of particles moving.
Yes, exactly.
> You can say that picturing it as "the wave from the first rock overlaid on the wave from the second rock" is an interpretation that makes more physical sense or is nicer to think about, but that's just as true in the QM case.
No, because in QM, the wavefunction is the real, final picture. Any split is arbitrary; the two rocks didn't "cause" the waves. The wavefunction of all the molecules that make up the rocks and the water simply has higher amplitudes for states in which the water molecules look like the wavey surface, and lower amplitudes for states in which the rocks touch the water but the water doesn't move at all, etc. That's all the physics can say.
> No, because in QM, the wavefunction is the real, final picture. Any split is arbitrary; the two rocks didn't "cause" the waves
But at the wave picture level that's always true. You can break down a water wave into a bunch of particles moving according to forces, sure. But that doesn't actually help you answer the question of whether a wave that's gone through a pair of slits is now one wave or two waves, because when you're looking at the particles and forces you can't see the waves any more (except as patterns in the particles and forces - but then you're back in the wave picture).
I agree things like the slit experiments are easily explained by classical waves, and thus seem to point to particles being waves; and I see no inherent reason to bring up probability distributions and such to answer that observation.
Where I'm struggling is that classical waves will always spread out spherically, and their energy must do so too. The issue here being that if a photon is a minimal quanta of energy, but is just a classical wave, what prevents it from spreading out and having sub-photon energy? Or if indeed it does, how does that sub-photon quantity get measured? -- if these experiments claim to be emitting a time-series of single photons, classical wave interference won't occur (again, being separated in time).
> The issue here being that if a photon is a minimal quanta of energy, but is just a classical wave, what prevents it from spreading out and having sub-photon energy? Or if indeed it does, how does that sub-photon quantity get measured? -- if these experiments claim to be emitting a time-series of single photons, classical wave interference won't occur (again, being separated in time).
Right, so that part is new and "spooky" - quantum phenomena are quantised (hence the name). The photon does spread out as a wave, there is in a sense half a photon heading towards the top half of the screen and half a photon heading towards the bottom half of the screen - but then when it hits the screen what we see is a single whole photon that hits either the top half or the bottom half (or, perhaps, half of an us sees a photon hit the top half and half of an us sees a photon hit the bottom half). This is the "wave-particle duality" and while it does fall out of the equations, it's definitely unfamiliar compared to classical physics.
If you want to fully understand, all I can suggest is "work your way through a QM textbook" - every popular explanation I've seen has messed it up one way or another. But it sounds like you're understanding correctly, and thinking for yourself as well - you've hit upon the actual essence of it, the kernel that really is hard.
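One way to see that "essence" numerically, as a toy sketch of my own (made-up geometry, standard far-field cos² fringe formula): draw each detection from the wave's interference intensity, so every "photon" lands at exactly one place, yet the fringes emerge from many single whole clicks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative geometry: slit separation d, screen distance D, wavelength 1.
wavelength, slit_sep, screen_dist = 1.0, 5.0, 100.0
x = np.linspace(-40, 40, 400)                     # detector positions on the screen
path_diff = slit_sep * x / screen_dist            # small-angle approximation
intensity = np.cos(np.pi * path_diff / wavelength) ** 2
prob = intensity / intensity.sum()                # detection probability per position bin

hits = rng.choice(x, size=5000, p=prob)           # 5000 single-photon clicks, one position each
counts, edges = np.histogram(hits, bins=20, range=(-40, 40))
for c in counts:
    print("#" * int(c // 20))                     # crude ASCII picture of the fringe pattern
```

Each individual draw is one whole detection at one spot; only the accumulated histogram shows the wave's interference pattern, which is roughly what the build-up in the real experiment looks like.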
I don't know how fringe this is, but Huygens Optics gives a possible answer here: https://www.youtube.com/watch?v=tMP5Pbx8I4s. TL;DW: if you assume certain non-linear properties of vacuum as a medium, it seems possible for light (EM waves) to self-contain spatially instead of spreading out, and such trapped energy seems, for all intents and purposes, to behave like particles.
Again, IANAPhysicist, so I don't know what to think of that video, but the channel seems legit, and the explanation is beautiful in its simplicity.
My personal interpretation is that elementary particles physically are those waves and never anything else. Those waves interact with each other with probabilistic events exchanging some energy and momentum and reshaping each other. They can get narrowed down if they exchange energy/momentum or get spread apart, for example through interactions on the edge of an object. What's completely virtual for me is the idea of pointlike particles occupying some specific location and having some specific momentum. Almost everything we know contradicts this idea, and yet we cling to it.
I think Craig Bohren wrote in one of his books, that to get anything calculated and done the waves are all you need. Particles are nice for some kind of visualization, but they don't really help getting things done. I liked that.
Instead of particles, I like to view the interactions like the forming of lightning in a thunderstorm: the energy field builds up, and at some point of contact the energy is released in a single lightning strike.
What I still wonder is whether the interaction really depletes the energy field instantly at a single point, or whether there is more going on (on different timescales, maybe with speeds not related to the speed of light).