arw0n

Karma: 76

Created: 2025-09-05

Recent Activity

  • The trick is to keep a layer of management or engineering below you that can be blamed if things go wrong.

  • This is assuming a coherent national security strategy, which is unlikely. We know a lot of generals disagree with the attack on Iran, and none of the geopolitical experts I trust think it is a good idea, be they conservative, realist, liberal, leftist or something else.

    There are a number of reasons this is happening now that I think are more plausible than American interest:

    - The Saudis want Iran weak, as the two are primary geopolitical rivals. There are deep ties between the Saudi dynasty and the Trump dynasty. Without Iranian support, the Houthis will have a much tougher time. (Although they should not be underestimated regardless. They are not an Iranian proxy but an ally, and field one of the strongest armies in the whole region.)

    - Israel wants Iran weak, and pro-Zionism is a strong wedge issue in American politics. Again, there are also a lot of personal business interests involved. Iranian allies and proxies are the chief causes of grief for Israel's expansionist agenda, and a very credible threat to its national security.

    - This war conveniently moves the headlines away from a faltering economy, the Epstein files, and ICE overreach. There is probably hope that it will improve chances in the mid-terms via the 'war president bonus'. It could also serve as a convenient cover for, and excuse to escalate, election rigging.

    Expecting positive regime change after bombing a school full of little girls is... naive. This is not how you turn an enemy into a friend.

  • There seems to be a massive misunderstanding here - I'm not sure on which side. In my understanding, if the DoD orders an autonomous drone, it would probably specify in the ITT (invitation to tender) that the drone needs to be capable of autonomous surveillance. If Lockheed uses Anthropic under the hood, it does not meet those criteria and cannot reasonably join the bid?

    What the declaration of a supply chain risk does, though, is ensure that nobody at Lockheed can use Anthropic in any way without risking exclusion from any DoD bids. This effectively costs Anthropic half or more of the business in the US.

    And maybe to take a step back: Who in their right mind wants the military to have the capability to conduct mass surveillance of its own citizens?

  • Google wasn't bleeding money like crazy at the time. Google was operating in a post-hype cycle. We are most likely somewhere in an epsilon around the peak of the AI hype, and OpenAI is more comparable to AOL or Yahoo. One striking similarity is the inability to innovate themselves, relying instead on copying others or on acquisitions.

    The OpenClaw guy is surely a decent product person, but OpenClaw did not innovate in any real sense. He was just pushing an existing idea to the limit without any concern for quality or security. It had its hype moment, it inspired a bunch of people, and it might find its own niche, but it is a flavor-of-the-week kind of thing. I've been getting a lot more cold calls from non-technical people in the last few weeks thanks to it. Congratulations, the quality threshold that justifies my response rose in equal measure. Nothing was gained, just a lot of tokens spent.

  • Their most valuable asset is the connections the CEO and others on the board have. The US is a banana republic, and the government chooses the winners. There is a continuously escalating level of blatant corruption at the top, and OAI positions itself as the next recipient. Betting on OAI is betting on how far American democracy will fall. I don't think the odds are bad.

HackerNews