p-o

Karma: 218

Created: 2020-04-25

Recent Activity

  • I am not delusional about those power-hungry people, but I somehow thought that with better access to information, society would have been able to better regulate them.

    Maybe in hindsight, "flooding the zone" will be considered a much bigger threat than it is today. Most of what's been going on over the last 12 months has happened in plain sight and would never have worked 30 years ago. Today, it just flies, attention span be damned.

  • Why did we have to go through all this pain? Was that really necessary? And given we mostly talk about technology here, let me put this through that lens:

    With all the technological advancement and improved access to information over the last 30 years, why does it feel like all of this culminates in more disinformation, more pain, and less understanding?

  • >For many years we had to rely on our own internally developed fork of FFmpeg to provide features that have only recently been added to FFmpeg

    I really wonder if they couldn't have run the fork as an open source project. They present their options as binary when in fact they had many different options from the get-go. They could have run the fork in an open-source fashion, so that FFmpeg's developers could see their work and understand what features they were working on.

    Keeping everything closed source and then contributing back X years later feels a little disingenuous.

  • Yes, I think you're right that it is not the end of the road for LLMs, and applications of LLMs might be adopted over time across a variety of industries.

    I guess what I failed to convey in my original comment was that, like the Internet 20 years ago, the current advancement made by AI might stall at a foundational level, while the landscape evolves.

    Essentially, I believe what you're saying is really close in spirit to what I'm saying.

  • I think the brunt of the disruption from AI is already behind us, for LLMs at least. It's possible we'll see improvements over the following months/years, but governments will inevitably start to catch up with the level of disinformation and confusion that AI has brought to this world.

    Laws & regulations that need to be created to rein in AI will undoubtedly increase the opportunity cost of training LLMs.

    For some, it might be similar to the early 2000s, but I think it's just a healthy rebalancing of what AI is and how society needs to implement this new, hardly controllable paradigm. From this perspective, OpenAI has a lot to lose, as it hasn't been able to create a moat for itself compared to, let's say, Anthropic.

HackerNews