Hello world does not compile

2026-02-07 3:05 github.com


@AvoidMe

Tested inside fedora 43 container, ubuntu 26.04 container and on regular fedora 42 installation, same error Took example directly from README.md

GCC is present and can compile code just fine:

```
root@1b5343a2f014:/claudes-c-compiler# cat > hello.c << 'EOF'
#include <stdio.h>
int main(void) {
    printf("Hello from CCC!\n");
    return 0;
}
EOF
root@1b5343a2f014:/claudes-c-compiler# ./target/release/ccc -o hello hello.c
/usr/include/stdio.h:34:10: error: stddef.h: No such file or directory
/usr/include/stdio.h:37:10: error: stdarg.h: No such file or directory
ccc: error: 2 preprocessor error(s) in hello.c
root@1b5343a2f014:/claudes-c-compiler# gcc -o hello hello.c
root@1b5343a2f014:/claudes-c-compiler# ./hello
Hello from CCC!
root@1b5343a2f014:/claudes-c-compiler#
```



Comments

  • By nextaccountic 2026-02-07 3:45 (1 reply)

    This is hilarious. But the compiler itself is working, it's just that the path to the stdlib isn't being passed properly

    https://github.com/anthropics/claudes-c-compiler/issues/1#is...

    • By rf15 2026-02-07 14:49

      except it's... all wrong: this "dependency-free" compiler has a hard dependency on gcc (even as it claims to be a drop-in replacement), it has so many hardcoded paths, etc.

  • By nomel 2026-02-07 3:40 (9 replies)

    The negativity around the lack of perfection for something that was literal fiction just a few years ago is amazing.

    • By parker-3461 2026-02-07 3:43 (2 replies)

      If more people were able to step back and think about the potential growth over the next 5-10 years, I think the discussion would be very different.

      I am grateful to be able to witness all this amazing progress play out, but am also concerned about the wide-ranging implications.

      • By dvfjsdhgfv 2026-02-07 10:03

        > think about the potential growth for the next 5-10 years,

        I thought about it and it doesn't seem that bright. The problem is not that LLMs generate inferior code faster; it's that at some point some people will be convinced that this code is good enough to be used in production. At that point, the programming skills of the population will devolve and fewer people will understand what's going on. Human programmers will only work in financial institutions etc.; the rest will be a mess. Why? Because generated code is starting to become a commodity and the buyer doesn't understand how bad it is.

        So we're at the same stage as when global companies decided it was a fantastic idea to outsource the production of everything to China, and individuals started buying Chinese plastic gadgets en masse. Why? Because it's very cheap compared to the real thing.

      • By rescripting 2026-02-07 3:52 (4 replies)

        This is what the kids call “cope”, but it comes from a very real place of fear and insecurity.

        Not the kind of insecurity you get from your parents mind you, but the kind where you’re not sure you’re going to be able to preserve your way of life.

        • By dvfjsdhgfv 2026-02-07 10:05 (1 reply)

          > Not the kind of insecurity you get from your parents mind you

          I don't get this part. At least my experience is the opposite: giving their child a sense of security is basically the core function of parents.

        • By ares623 2026-02-07 4:53

          Sorry but I think you have it the other way around.

          The ones against it understand fully what the tech means for them and their loved ones. Even if the tech doesn't deliver on all of its original promises (which is looking more and more unlikely), it still has enough capabilities to severely affect the lives of a large portion of the population.

          I would argue that the ones who are inhaling "copium" are the ones who are hyping the tech. They are coping/hoping that if the tech partially delivers what it promises, they get to continue to live their lives the same way, or even an improved version. Unless they already have underground private bunkers with a self-sustained ecosystem, they are in for a rude awakening. Because at some point they are going to need to go out and go grocery shopping.

        • By ThrowawayR2 2026-02-07 5:38

          My hot take is that portions of both the pro- and anti- factions are indulging in the copium. That LLMs can regurgitate a functioning compiler means they have exceeded the abilities of many developers, and whether those developers wholeheartedly embrace LLMs or reject them isn't going to save the ones who have been exceeded from being devalued.

          The only safety lies in staying ahead of LLMs or migrating to a field that's out of reach of them.

        • By ppoooNN 2026-02-07 4:56

          [dead]

    • By gtowey 2026-02-07 3:55

      There is a massive difference between a result like this when it's a research project and when it's being pushed by billion-dollar companies as the solution to all of humanity's problems.

      In business, as a product, results are all that matter.

      As a research and development effort, it's exciting and interesting as a milestone on the path to something revolutionary.

      But I don't think it's ready to deliver value. Building a compiler that almost works is of no business value.

    • By politelemon 2026-02-07 5:10

      The negativity is around the unceasing hype machine.

    • By jascha_eng 2026-02-07 3:56

      No one can correctly quantify what these models can and can't do. That leads to the people in charge completely overselling them (automating all white-collar jobs, doing all software engineering, etc.) and the people threatened by those statements firing back when these models inevitably fail at doing what was promised.

      They are very capable but it's very hard to explain to what degree. It is even harder to quantify what they will be able to do in the future and what inherent limits exist. Again leading to the people benefiting from it to claim that there are no limits.

      Truth is that we just don't know. And there are too few good folks out there that are actually reasonable about it because the ones that know are working on the tech and benefit from more hype. Karpathy is one of the few that left the rocket and gives a still optimistic but reasonable perspective.

    • By DustinEchoes 2026-02-07 3:48 (3 replies)

      It’s a fear response.

      • By array_key_first 2026-02-07 9:37

        It could also be that, so often, the claims of what LLMs achieve are so, so overstated that people feel the need to take it down a notch.

        I think lofty claims ultimately hurt the perception of AI. If I wanted to believe AI was going nowhere, I would listen to people like Sam Altman, who seem to believe in something more akin to a religion than a pragmatic approach. That, to me, does not breed confidence. Surely, if the product is good, it would not require evangelism or outright deceit? For example, claiming this implementation was 'clean room'. Words have meaning.

        This feat was very impressive, no doubt. But with each exaggeration, people lose faith. They begin to wonder - what is true, and what is marketing? What is real, and what is a cheap attempt for companies to rake in whatever cold hard AI cash they can? Is this opportunistic, like viral pneumonia, or something we should really be looking at?

      • By rgoulter 2026-02-07 4:02

        No.

        While there are many comments which are in reaction to other comments:

        Some people hype up LLMs without admitting any downsides. So, naturally, others get irritated with that.

        Some people anti-hype LLMs without admitting any upsides. So, naturally, others get irritated with that.

        I want people to write comments which are measured and reasonable.

      • By dvfjsdhgfv 2026-02-07 10:10

        This reply is argumentum ad personam. We could reverse it and say GenAI companies push this hype down our throats because of fear that they are burning cash with no moat but these kinds of discussions lead nowhere. It's better to focus on core arguments.

    • By sublinear 2026-02-07 5:45 (1 reply)

      How does a statistical model become "perfect" instead of merely approaching it? What do you even mean by "perfect"?

      We already have determinism in all machines without this wasteful layer of slop and indirection, and we're all sick and tired of the armchair philosophy.

      It's very clear where LLMs will be used and it's not as a compiler. All disagreements with that are either made in bad faith or deeply ignorant.

      • By nomel 2026-02-07 20:40 (1 reply)

        > All disagreements with that are either made in bad faith or deeply ignorant.

        Declaring an opinion and then making discussion about it impossible isn't a useful way to communicate or reason about things.

        • By sublinear 2026-02-08 7:50 (1 reply)

          Sure it is. I haven't made discussion impossible. If you choose to reply to my line of discussion, I eliminated an entire category of what I think are trivial arguments that miss the point. Yes indeed calling that stuff trivial is my opinion, but I think you were trying to say something else.

          You found room by claiming I have some other opinions. In fact, I originally asked some questions you chose not to answer.

          That all begs some more questions: what about my statements isn't factual? What about your statements isn't factual?

          I have a few guesses. You may think AI can write a better compiler. You may think AI has already written a better compiler. You may think humans shouldn't write code anymore.

          All of those are examples of opinions you might declare, but maybe you meant to say something factual. If those really are the only things you meant to debate, I have to agree I didn't think they were going anywhere and have been done to death. I thought maybe you had something else in mind.

          • By nomel 2026-02-08 20:25

            I think your perspective is an instantaneous one, which is fine, because that's where facts about behaviors of systems (that are swapped out every few months) must come from. Since we can't know the performance of architectures that will be released in the near future, we can only form opinions and speculate about them. Not wanting to speculate, and framing everything on what exists right now, is fine. Listening to people guess is usually boring. And, knowing the practical outcome of ongoing research is hit or miss.

            But, if your perspective is immediate, you need to be more precise with your words, to not confuse the reader into thinking that you're extending your observations, that apply only to the present, into the future.

            I personally don't find discussions on current capabilities, about something that was fiction some years ago, and has shown a fairly steady rate of increase in utility, all that interesting. I'm an engineer at heart and live and enjoy the iterative process of improvement. As a consequence, I think the present is the boring place, because that's where iteration dies! I don't think we'll entertain each other. ;)

    • By Insanity 2026-02-07 3:43

      I think it’s a good antidote to the hype train. These things are impressive but still limited; hearing solely about the hype is also a problem.

    • By largbae 2026-02-07 3:43

      Schadenfreude predates AI by millennia. Humans gonna human.

    • By rsynnott 2026-02-07 10:17

      "We can now expensively generate useless things! Why are you not more impressed?!"

  • By netsharc 2026-02-07 3:54

    Ah, two megapixel PNG screenshots of console text (one hidpi, too!), and one of some IDE, also showing text (plus a lot of empty space)... Great great job, everyone.

HackerNews