This post title is completely misleading.
From the article: the shooter's behavior triggered internal alarms, and some employees asked leadership to alert authorities, but:
| OpenAI leaders ultimately decided not to contact authorities.
The article title isn't wrong unless you assume "raised alarms" means the employees actually contacted authorities. If they had, it would read "OpenAI raised", not "OpenAI employees raised". We all know how much company leadership listens to its employees, of course.
The article title should read “OpenAI employees silenced alarms…”, not “raised”.
I'm more interested in what employees can read of the chat logs. Is everything users put into OpenAI's services accessible to employees? These kinds of stories, imo, may dissuade people from LLMs unless there is greater privacy control. But ultimately... to have total privacy... how long until we see people building offline personal LLMs with no guardrails?
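For what it's worth, fully offline personal LLMs are already practical with open-weight models. A minimal sketch using the Hugging Face transformers library (the model name is just an example of a small open-weight chat model; the weights have to be downloaded once beforehand, after which you can set HF_HUB_OFFLINE=1 and stay off the network):

    # Minimal sketch: generate text with an open-weight model entirely on-device,
    # so no chat log ever leaves the machine. The model name is only an example.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # any locally cached open-weight model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    prompt = "Summarize why local inference keeps conversations private."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=80)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Once the weights are cached locally, nothing in that loop touches the network, which is the whole privacy argument.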
This. One would hope the human reviewers are somewhat limited and/or trained. Say what you want about banking, but some... not a lot, but some... standardization exists there.
One would hope, but I think history has taught us that there's always someone reading and watching. There's always someone with access abusing it.