yosito

Karma: 8951

Created: 2014-05-18

Recent Activity

  • "Anthropic had built its brand around promoting AI safety, emphasizing red lines it said it wouldn’t cross. Its usage guidelines contain strict limitations that prohibit Claude from facilitating violence, developing or designing weapons, or conducting mass surveillance."

    I can't say that I fully trust this at face value, but I will say, at least at face value, that this commitment to non-violence is something I wish more tech companies in history had made. Whether it's an authentic commitment or just PR remains to be seen.

  • You're right. And it's also important to be mindful that LLMs can translate human intent into formal queries incorrectly, so they still shouldn't be fully trusted even when integrated with a more deterministic system.

  • Yeah, really dumb move on the part of Chase bank. They'd previously marketed their accounts as being geared towards international travelers, but now their cards can't be used in much of the world.

  • US "freedom" is more propaganda than reality, and that's becoming more and more apparent.

  • I'm very skeptical of using AI in this way. I've given Claude access to calendars and travel plans and asked it to do similar analytical tasks, cross-referencing documents in ways that would take me days to do manually. Since the tasks concerned my own plans and life, which I knew well, I was able to spot subtle errors that seemed correct at the surface level but weren't actually the conclusions I would have drawn. I've attempted these types of tasks 10-20 times with similar experiences each time. In the end, it's made me very skeptical, like your wife. I don't trust any AI output without a thorough review. Hallucinations are still a frequent problem.

HackerNews