
Watch biological neurons learning to play Doom.
If this can be taken at face value... it's creepy.
I get that they're doing it for the meme. But perhaps something getting close to human intelligence, made out of human cells, shouldn't be forced to play a violent video game without any alternative options? Does 'the meme' justify that?
I dunno. Nothing against violent games myself. Just feels like it's starting to get quite questionable, ethically speaking.
The truth is, God really gave 11 commandments.
It's just "Thou shalt not grow a brain in a test tube and force it to play a 1993 shooter" didn't make any sense to Moses and therefore didn't make the editor's cut.
One of those five he dropped.
Tragically this reference is all but lost generationally.
Born in 1979 but I don't get it. What is it about?
Mel Brooks' History of the World, Part I[1].
This is the equivalent of “only 90’s kids will get this”. Don’t shame others for not knowing a reference you like, share it with them instead.
https://xkcd.com/1053/ (The alt text is particularly relevant)
Though I disagree it would be tragic to lose this reference. It’s not a good movie. It’s basically “say thing, immediately interpret it literally”. Throw in some stereotypes from time to time. Rinse and repeat.
"And keep 'em up!"
"An old man! They don't let you live, they don't let you breathe!"
I dunno, I feel like we’re well within the territory of the first commandment when it comes to growing brains in a vat.
“I am the Lord thy God. Thou shall not have strange gods before Me.”
To be pedantic, he actually gave 613 commandments.
It is creepy, I agree.
I saw this article over the weekend and felt similarly: https://theinnermostloop.substack.com/p/the-first-multi-beha...
> Watch the video closely. What you are seeing is not an animation. It is not a reinforcement learning policy mimicking biology. It is a copy of a biological brain, wired neuron-to-neuron from electron microscopy data, running in simulation, making a body move.
And the simulated world they put it in is a sort of purgatory-like environment.
It's 200k neurons. Less than an ant has. Somewhat creepy, but if you're imagining that this thing is conscious and knows that it's in doom... yeah definitely not.
Still I don't understand why they would invite the extra creepy factor of using human brain cells rather than e.g. mouse brain cells. Surely it makes no difference biologically but it's going to lead to fewer comments like this.
> if you're imagining that this thing is conscious and knows that it's in doom... yeah definitely not.
I'm not imagining that (although one assumes their plan is to scale this up), but nonetheless there's something troubling to me about taking any living thing and wiring its senses up to a profoundly incomplete simulacrum of reality.
Of course we (as a species) have a long history of doing horrible things to living creatures in the name of science and progress.
These stories evoke a different feeling for me, though.
> there's something troubling to me about taking any living thing and wiring its senses up to a profoundly incomplete simulacrum of reality.
How do we communicate this to the engineers at YouTube who refuse to make an offramp for children from the infinite baby shark AI video loop?
> How do we communicate this to the engineers at YouTube who refuse to make an offramp for children from the infinite baby shark AI video loop?
Actually, I have a thought I'd like to share. Why don't we upload good-quality, human-curated children's media to archive.org and build a more human-curated platform, instead of the AI shark-video loop? We can upload videos to archive.org for free right now; the missing piece seems to be the human filter.
Sharing this because YouTube Kids is absolutely not safe for kids, and YouTube turns a blind eye to all of this because of its monopoly, and perhaps because it profits from having children watch a single thing on loop for so long.
Also, a minor reason I don't trust corporations or governments that say "protect the kids": they could regulate a public company like YouTube much more easily than they could try to control every device, so it feels like the goal is surveillance more than anything.
A while ago I watched a video[0] on the rabbit hole of "horrors on YT Kids", and rewatching it now, there are things like AI animal abuse and much viler content being shown to kids on YT Kids.
There are comments on that video like: "My 7 year old younger brother came up to me asking if you can drink chlorine. I asked him where he heard this and he told me that he was watching a lego building video on youtube KIDS, where suddenly mid video they started saying stuff like this."
[0] https://www.youtube.com/watch?v=w3PtN-CmybE&t=64s (Caution: the thumbnail is horrifying and the video is not safe for work; it shows things that are actually available on YT Kids, so take that into account when judging how disturbing the thumbnails/videos on YT Kids can be)
Children should simply not be on YouTube (Kids or not). We don’t need a “safer” alternative for damaging their cognitive development.
Just because there’s demand for something doesn’t mean it should be legal to supply.
> We don’t need a “safer” alternative for damaging their cognitive development.
By "safer" I meant educational content or shows that are genuinely good, fwiw. I grew up watching Adventure Time on Cartoon Network, so: curating shows like that, plus channels like Veritasium or some Vsauce videos.
My question was: could there be a volunteer, human-curated group effort to find decent YouTube channels that are nice/safe for kids?
Writing off all YouTube channels as bad might be unwise as well. Mix some of them with cartoons under an archive/tag designed for this, so that either an app or you yourself could look at the archive tags and see which channels and cartoons the videos are from: a small, collectively curated library of things that are safe for kids.
Because kids will watch YouTube someday; they'll hear about it from their friends and feel left out. You then trust that something like YT Kids might work, only to realize it doesn't. Even an RSS list of those channels, used with something like FreeTube, could be good too, fwiw.
What do you even recommend people watch? I used to watch Cartoon Network for hours growing up, with shows like Beyblade, Pokémon, and Adventure Time, but Cartoon Network itself seems to be struggling nowadays compared to YouTube Kids :/
There definitely should be more to why/how YouTube Kids is so prevalent. One can blame bad parenting, but I have seen good parents slip up in this case. They think it's harmless. There's definitely more to it (imho)
> yeah definitely not
I don't know about ants, but after a refresher on people's favorite fruit fly, I'd be hard pressed to be so dismissive - 200K seems to be plenty: https://news.ycombinator.com/item?id=47302051
I encourage you to look up what is known about fruit flies' behavior.
The reason it's probably nevertheless not as messed up as people might assume it to be is specifically because it's an organoid, not an actual brain. Which is to say, it has the numbers but not the performance, not by a long shot.
> Surely it makes no difference
It absolutely should, though specifically with organoids, I guess it might not. Ironically, I would expect the ethics angle to be actually worse with small animals. The size of the organoid will be closer to the real thing comparatively, after all, so more chances of it gaining whatever level of sentience the actual organism has.
But then this will be heavily muddled by what people believe consciousness is and whether or how humans are special, I suppose.
> so more chances of it gaining whatever level of sentience the actual organism has
Yeah but people have no problems experimenting on actual fully working mice already.
Yes *, and in the real world. The question then is whether you rate that to be an equivalent existential horror to being a varyingly maldeveloped, malnourished, disembodied version of those mice, forced to live out life in a low-fidelity version of the Matrix [0], potentially in constant or recurring agony. You get a potential match or approximate match in cognitive ability and operation, but with a very different set of circumstances.
* They kinda do have a problem with that too, that's why ethics committees exist, and why the term "animal testing" pops up in the news cycle every so often.
Given that no one understands how the mental relates to the physical in the first place, I have no idea how you would reach such a confident conclusion about the phenomenological status of 200k human neurons in a petri dish playing Doom?
But we do understand where overconfidence usually comes from, don't we?
Elephants have 3x the neurons of a human. Bees have about a million and they have complex relationships, emotions, and can remember the faces of humans. Neuron counts correspond more to body size than actual cognitive abilities.
And brains are pretty complicated in how they're arranged. A large portion of the brain basically serves as an operating system of sorts, just managing breathing, moving, detecting smells, producing language, decoding language, etc. Cut all of that out and we're left with thinking and emotions.
I don't think it works like that. Most likely high intelligence & consciousness requires both a large number of neurons and wiring them up in a specific way.
If you have a small number (200k is tiny) you aren't going to achieve consciousness.
The problem is: you can’t prove it. We have no idea what “conscious” even means. It’s literally why the Turing Test exists.
>Somewhat creepy, but if you're imagining that this thing is conscious and knows that it's in doom... yeah definitely not.
I don't know if it knows it's in doom - looks like all it knows is to shoot when startled. More than creepy imo.
You are confusing intelligence with consciousness (qualia). We simply don't know how qualia develops or how to measure it. We cannot rule out, for example, that an ant has a greater qualia level than us. There are theories about qualia being connected to microtubules on neurons and quantum effects... the DOOM-playing neurons also have those microtubules. So you cannot say "definitely not".
> consciousness (qualia)
I've never heard the word qualia used as a synonym for consciousness, only as a related but distinct concept.
> an ant has a greater qualia level than us
What? where does this come from?
Why do you quote only the end? The full sentence is: We cannot rule out, for example, that an ant has a greater qualia level than
They're saying that since we don't know how to "measure consciousness" we can't be certain that an ant doesn't have more "consciousness" than us. Obviously it seems very unlikely, but we can't be certain
I don't share that interpretation, maybe they clarify what was meant themselves?
fwiw i share the same interpretation as the other commenter.
Someone in the comments says it's not as bad (ethically speaking) as it appears:
>This an impressive simulation. But it's just not honest to call this 'brain emulation', a 'brain upload' or to say that this is doing anything like 'sensorimotor loop in simulation'. Aside from the fact that a connectome is not a brain, and so we have no idea whether the parts that have been filled in by ML actually function like a brain, the motor control in this framework is not even driven by the brain simulation. The output from the 'brain' is not a sequence of motor commands. It is a steering mechanism, a 2-dimensional descending signal (essentially, turn left or right, speed up or slow down). That is then fed into a series of CPG oscillators, outside of the brain emulation, that model fly movement in response to that 2-dimensional descending signal. Since outputting a 2-dimensional descending signal is not what a fly brain does, the simulated brain is not operating as a fly's brain does. It's machine learning, clamped into the shape of a fly connectome, that has a resting state of 0Hz, being zapped with simple inputs, not virtual sensory data.
Nevertheless a worrying direction.
I have no mouth, and I must scream (https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Sc...)
Even if that might not be the case, there are some biological feats that truly sound scary.
I read the Sapiens book once, and it describes how for thousands of years humanity practiced paganism, worshipping amalgamations of different animals.
Below, I'm paraphrasing what the book says under an image of one of the things humanity has made in recent years: now we have a mouse on whose back scientists grew an ear made of cattle cartilage cells. It is an eerie echo of the lion-man statue from the Stadel Cave.
Thirty thousand years ago, humans were already fantasising about combining different species. Today, they can actually produce such chimeras.
The image can only be described as an eldritch horror. (pg. 449, "Of Mice and Men", Sapiens)
The last line of the book is: "Is there anything more dangerous than dissatisfied and irresponsible gods who don't know what they want?"
I think that last line is what you are resonating with. (I highly recommend reading Sapiens if you haven't. Only Animal Farm and 1984 have hooked me into a book that much.)
Funny though how many are dismissive of trillion-synapses brains that can understand and speak tens of languages, write decent code, discuss history and philosophy, solve math problems...
And then are creeped by 200k neurons that barely find a target when they're told where it is.
You can probably train an ANN with only a few hundred neurons at most to do the same.
Maybe you're a brain in a jar somewhere being forced to live this life you're living.
Sure would explain a lot
That’s why you shouldn’t take it at face value. Ethically speaking, the experiment must have been approved by the institutional review board; if there are ethical concerns, they can be raised with the board.
But I don’t think anyone “feeling uneasy” should count as an argument once the ethical concerns have been considered and the experiment has been approved.
One take is that we made human brain cells to live in hell. On the flip side, we gave them a super shotgun.
> something getting close to human intelligence
This seems very, very far fetched. If I understand correctly, these cell brains just respond to some stimuli; it does not seem more intelligent than any automaton to me, just creepier.
Would it be able to distinguish between violent or not? Would it be suffering or not? What exactly does it get in terms of signals? Does it even, "experience" anything? Is it even an "it"?
Your "violent or not" point is really interesting. Without a world model that includes a model of violence, whether that's instinctual or learned, it would not distinguish DOOM and https://en.wikipedia.org/wiki/Chex_Quest
I mean, it's nowhere close to human intelligence, and it's still not a sentient being, so it cannot be "forced" to do anything, even if we take it at face value.
As for being creepy, the things humans do to other actual sentient beings are exponentially more horrifying and creepy than making them play computer games. If the monkeys that Volkswagen tortured with their exhaust gases were made to play Doom, that would be a much better world. And they are much, much closer to human-level intelligence than this chip.
Ethically speaking, it got "questionable" way long ago; this is not a valid concern for this project imo.
>it's still not a sentient being
How do you know?
Let's go ahead and assume that the chip is actually sentient (without any proof that it is). Even then, my comment fully stands. Blasting fully sentient beings with exhaust fumes in the face for hours is way worse than forcing them to play computer games. How we treat actual sentient beings is so abhorrent that this (worrying about a chip playing Doom) is a misplaced first-world concern imo.
>But perhaps something getting close to human intelligence
this isn't getting close to human intelligence. They're using about as many cells as a fruit fly has (and of course not actually functioning like an animal brain) to process signals and play Doom. The treatment of a single farm chicken is a few orders of magnitude more worrying than this.
I'm sorry to tell you that you're made out of human cells and I don't think you got consent from each brain cell before firing up the old boomer shooters.
At 200k, this application is already more neurons than a fruit fly (130k), but still within the same order of magnitude. It's an interesting question of "how many" should be considered problematic from an ethics standpoint, and I don't think that line of questioning should be ignored. If any of this research turns out useful, you can be sure to see it scale up.
How else are they going to train the pilot wetware for the AI robot army?
> Just feels like it's starting to get quite questionable
There's no way the technology to make and modify "life" including cloning humans hasn't been secretly used or attempted at least once ever since it was discovered.
Yeah, people get shot/stabbed/"fall off a building by accident" every day and we should be considerate of the feelings of a petri dish.
This is the "There are starving children in Africa, eat your greens" argument.
Are discussions about petri dishes diverting relevant resources away from building-safety initiatives?
Can I be allowed to torture small animals so long as human suffering persists?
No this is the stop playing with your dolls argument. It is rock solid. As for torturing animals, you do you I guess.
The thing should watch cats.
> it's creepy.
It's awesome.
People's ick around bodies, which are machines, has always held us back.
It wasn't until we started cutting them open that modern medicine was developed.
We might have brain uploads already had we not been so averse to sticking brains with electrodes.
I'll go further: had we not been so scared of cloning, we'd probably have cured cancer and every major ailment if we'd begun cloning monoclonal human bodies in labs. Engineered out the antigens and did whole head transplants. You could grow them without consciousness or deencephalize them, rapidly grow them in factories, and have new blood / tissue / organ / body donors for everyone.
New young bodies means no more cancer, no more cardiac or pulmonary age. It's just brain diseases left as the final frontier once we cross that gap. And if we have bodies as computers and labs, we'd probably make quick work on that too.
Too tired to lay out the case / refute, so past discussions:
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
High tech hell is reversing the light cone, pulling everyone who ever lived throughout history back into consciousness by simulating them at the neurotransmitter level, and then forcing them into actual hell / torture simulators with no way to die. All without consent, mind you.
That's also sci-fi. I hope.
What I described before - using clonal technology to solve nearly every disease - is a medical miracle that will vastly improve the state of people's lives throughout the world.
The two scenarios come in a package though. If you make one possible, the other one comes for free.
The same technology can also be used to force people to live with bodies engineered to make their existence a living hell. Similar things can be done with brain uploads.
I don't think anyone objects to curing cancer and better figuring out how our bodies work, but getting into consciousness/mind uploads/simulated humans is another can of worms ethically speaking. I'm assuming you've already read the fantastic story "Lena" by qntm [1]; if not, enjoy some existential dread.
It seems a bit more complicated than first blush: https://www.rdworldonline.com/the-neurons-playing-doom-are-a...
Personally, dislike this direction a lot. I don't like that they're using a killing game (I understand the trope, doesn't make me like it any less) and the general idea of this whole thing makes me quite uneasy.
> The neurons serve as a biological filter: the training system translates screen pixels and ray-cast distances into electrical zaps, the living cells fire spikes, and those counts feed straight into a PyTorch decoder that maps them to Doom actions. The PPO agent, CNN encoder and entire reward loop run on ordinary silicon elsewhere. Cole’s ablation modes make the split testable, set decoder output to random or zero and the game still plays. The CL1 hardware interface works exactly as advertised. What remains unproven is whether 200,000 human neurons can ever carry the policy instead of just riding along.
Yeah… That’s quite the smoking gun.
So it’s quite likely then that the neurons are just acting as a bad conductor. The electrodes read a noisy version of the signals that go into the neurons, and they just train a CNN with PPO to remove that noise, get the proper inputs, and learn a half-decent policy for playing the game.
If this worked as advertised they shouldn’t need a CNN decoder at all! The raw neuron readout should be interpreted as game inputs directly.
Besides, they are not streaming the video into the neurons at all, just the horizontal position of the enemies and their distance, or some variant of that. In that sense it’s barely more than Pong, isn’t it? If the enemy is left, rotate left; if right, rotate right; if center, shoot. At a stretch: if the enemy is far, go forward; if close, go back. The rest of the time, just move randomly. Indeed, the behavior in the video is essentially that…
While we are at it, the encoded input signal itself is already pretty close to a decent policy if mapped directly to the keys (how much enemy left, center, right), even without any CNN, PPO or neurons.
EDIT: It seems like the readme does address these concerns, and the described setup differs significantly from the description in the critical blogpost. Still not entirely convincing to me, a lot of weights being trained in silicon around the neurons, but it sounds better. I don’t have time right now to look deeper into it. They outline some interesting details though.
> Quote from: https://raw.githubusercontent.com/SeanCole02/doom-neuron/mai...
Isn't the decoder/PPO doing all the learning?
No, this is precisely why there are ablations. The footage you see in the video was taken using a 0-bias full linear readout decoder, meaning that the action selected is a linear function of the output spikes from the CL1; the CL1 is doing the learning. There is a noticeable difference when using the ablation (both random and 0 spikes result in zero learning) versus actual CL1 spikes.
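For intuition, a "0-bias full linear readout decoder" like the one described amounts to a single weight matrix applied to spike counts. This is only a toy sketch; the channel count, action set, and weights below are made up for illustration, not taken from the project's code:

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 32   # hypothetical number of recording channels on the chip
N_ACTIONS = 4     # hypothetical action set: e.g. left, right, shoot, forward

# Zero-bias linear readout: action scores are a linear function of the
# per-channel spike counts; with no bias term, all-zero spikes give
# all-zero scores.
W = rng.normal(scale=0.1, size=(N_ACTIONS, N_CHANNELS))

def select_action(spike_counts):
    """Map a window of per-channel spike counts to a game action index."""
    scores = W @ spike_counts  # no bias added
    return int(np.argmax(scores))

# Ablation analogue: zeroed-out spikes always produce the same degenerate
# choice, so any learned behavior has to come from real spiking activity.
print(select_action(np.zeros(N_CHANNELS)))  # argmax over all-zero scores -> 0
```

The point of the ablation is visible even in the toy: with the spikes zeroed or randomized, the readout carries no information about the game state, so nothing can be learned through it.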
Isn't the encoder/PPO doing all the learning?
This question largely assumes that the cells are static, which is incorrect; it is not a memory-less feed X in get Y machine. Both the policy and the cells are dynamical systems; biological neurons have an internal state (membrane potential, synaptic weights, adaptation currents). The same stimulation delivered at different points in training will produce different spike patterns, because the neurons have been conditioned by prior feedback. During testing, we froze encoder weights and still observed improvements in the reward.
How is DOOM converted to electrical signals?
We train an encoder in our PPO policy that dictates the stimulation pattern (frequency, amplitude, pulses, and even which channels to stimulate). Because the CL1 spikes are non-differentiable, the encoder is trained through PPO policy gradients using the log-likelihood trick (REINFORCE-style), i.e., by including the encoder’s sampled stimulation log-probs in the PPO objective rather than backpropagating through spikes.
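For readers unfamiliar with the log-likelihood trick mentioned above: because the spikes are non-differentiable, the encoder's update uses the score-function (REINFORCE-style) estimator, grad log-prob of the sampled stimulation times the reward, instead of backpropagating through the biology. A toy sketch, where the channel count and the black-box reward are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

N_CHANNELS = 8            # hypothetical number of stimulation channels
theta = np.zeros(N_CHANNELS)  # encoder logits over which channel to zap

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def reinforce_step(reward_fn, lr=0.1):
    """One score-function update: sample a stimulation, observe a reward
    from the non-differentiable black box, nudge log-prob of what worked."""
    global theta
    probs = softmax(theta)
    ch = rng.choice(N_CHANNELS, p=probs)   # sampled stimulation pattern
    r = reward_fn(ch)                      # black box: stimulation -> spikes -> reward
    grad_logp = -probs                     # d/dtheta of log softmax(theta)[ch]
    grad_logp[ch] += 1.0
    theta += lr * r * grad_logp            # REINFORCE update, no backprop through spikes

# Toy black box: pretend channel 3 happens to evoke the most useful spiking.
for _ in range(2000):
    reinforce_step(lambda ch: 1.0 if ch == 3 else 0.0)

print(int(np.argmax(theta)))  # encoder learns to prefer channel 3
```

The same idea scales to the richer stimulation parameters the README lists (frequency, amplitude, pulses): sample them from a parameterized distribution, and weight the sampled log-probabilities by the PPO objective.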
> If this worked as advertised they shouldn’t need a CNN decoder at all!
yeah!
the whole point was to make neurons BE the neural net
I literally can't wait for this petri dish to learn how to interact with LLMs and start vibe coding JS libraries.
What if the braincell-vibe JS libraries turn out pretty much identical to the legacy human JS libraries, aside from being better-commented. That might lead to an existential crisis for some folks.
"Petri dish rewrites React in Rust"
Old news. Google "my dog vibecoded a game".