To me, Blink as a rendering engine is too closely coupled to Google. Even though Chromium is technically separate and open source, the amount of leverage Google has over it is too high.
I dread the possibility that Gecko and WebKit browsers truly die out and the single biggest name in web advertising ends up with unilateral sway over the direction of web standards.
A good example of this: through Google's leverage alone, all Blink-based browsers are phasing out support for Manifest V2, a widely unpopular change that was forced on them. If I'm using a Blink-based browser, I'm vulnerable to any other profit-motivated change like that one.
Mozilla might be trying their hardest to do the same with this AI shlock, but if I have to choose between the trillion dollar market cap dictator of the internet and the little kid playing pretend evil billionaire in their sandbox? Well, Mozilla is definitely the less threatening of the two in that regard.
This feels like a very indirect way of saying "yes, the Fourier transform of a signal is a breakdown of its component frequencies, but depending on the kind of signal you are trying to characterize, it might not be what you actually need."
It's not that unintuitive to imagine that if all of your signals are pulses, something like the wavelet transform might do a better job of giving you meaningful insight into a signal than the Fourier transform would.
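For what it's worth, here's a minimal sketch of that difference in Python, assuming numpy and PyWavelets (pywt) are available. The pulse train, the 'morl' wavelet, and the scale range are arbitrary illustrative choices, not anything from the article:

    # Sketch comparing a plain FFT to a continuous wavelet transform on a pulse train.
    # Assumes numpy and PyWavelets (pywt) are installed; the pulse widths, the 'morl'
    # wavelet, and the scale range are arbitrary illustrative choices.
    import numpy as np
    import pywt

    fs = 1000                            # sample rate, Hz
    t = np.arange(0, 1, 1 / fs)          # one second of samples

    # Two short Gaussian pulses, at 0.2 s and 0.7 s
    signal = (np.exp(-((t - 0.2) ** 2) / (2 * 0.005 ** 2))
              + np.exp(-((t - 0.7) ** 2) / (2 * 0.005 ** 2)))

    # Fourier view: one global spectrum; when the pulses happen is buried in the phase
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)

    # Wavelet view: coefficients indexed by scale (roughly frequency) AND time,
    # so the two pulses show up as separate localized bursts
    scales = np.arange(1, 128)
    coeffs, wavelet_freqs = pywt.cwt(signal, scales, 'morl', sampling_period=1 / fs)

    print("FFT magnitude shape:", spectrum.shape)   # (501,)  -> frequency axis only
    print("CWT coefficient shape:", coeffs.shape)   # (127, 1000) -> scale x time

The point is just that the FFT collapses the whole signal into one frequency axis, while the wavelet coefficients keep a time axis, so localized pulses stay visible as localized events.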
Leaves me wondering if this will allow for superconducting cryogenic transistors? If my hobby-level understanding of how silicon doping works carries over, this new superconducting germanium would be p-type? I could imagine something like ion implantation being able to establish n-type regions within the germanium while allowing bulk regions of the lattice to keep their superconducting properties.
Though admittedly, I'm not actually sure which parts of a semiconductor circuit are the biggest sources of power dissipation, so I guess it's entirely possible that most of the power is dissipated across the p-n junctions themselves.
This is rich coming from the company that scraped the entire internet and tons of pirated books and scientific papers to train their models.
Maybe if you hadn't scraped every single site on the internet, they wouldn't have a basis for their case that you've stolen all of their articles by training your models on them. If anyone is to blame for this it's OpenAI, not the NYT.
Play stupid games, win stupid prizes.