You can if you own the copyright to the content. I don't know the state of Linux, but this is one reason the FSF (and many other projects) requires contributors to assign their copyright when they submit code.
It also helps when you take an infringer to court. If I contribute to a project but don't assign copyright, the project cannot sue over illegal copies of my code; the burden is on me to do so.
Of course, all code released prior to the change remains under the original license.
> Here's what I think. If you have a public blog, it's fair game at an interview. If you write mostly about data science stuff but you apply for a software engineering job, you ought to be prepared to explain the contrast. Understand that, for most top firms, hiring good people and getting them to stick is hard. Most employers will want some assurance that you are serious about the position you're applying for. If you send signals that you might want some other position, be prepared to get asked about those signals.
Great! Let me trawl through all candidates' HN and social media comments and ask why they spend more time talking about politics, movies, and science fiction than about CRUD software development. They need to justify it!
> (I also don't find over 50% hallucination to be accurate for Google AI summaries in my experience, but that depends on your queries, and in any case, I digress...)
I should point out that I'm not saying 50% of the AI summaries contain an error. Merely that the references they provide don't support what the summary is claiming. The summary may still be accurate while the references are wrong.
> To clarify my thoughts on this, I'm not against using AI to research/hone your arguments. It's no different to using Wikipedia or googling.
> I don't think that's what this new HN guideline is against either.
This is actually how many commenters here are interpreting it, though - and that's what I'm pushing back against. They are actively advocating against using LLMs this way.
I don't have the LLM write the comment for me. I (sometimes) give it my draft, along with all the parent comments up to the root, and ask for feedback. I look for specific things (Am I being too argumentative? Am I invoking a logical fallacy? Is it obvious I misinterpreted a comment that I'm replying to? Is my comment confusing? etc.). Adding things like (Am I violating an HN guideline?) is fair game.
Earlier today I wrote a lot of comments without using the LLM's feedback. In one particular thread I repeatedly misunderstood the original context of the discussion and wasted people's time. I later posted my draft to the LLM and it alerted me to the problematic comment. Had I used it from the start, I would have saved a lot of people's time.
Incidentally, since I started doing this (a few months ago), I've only edited my comment once or twice based on its feedback. Most of the time it just tells me my comment looks good.