Ethan Marks
I'm a 15-year-old software developer
https://ethmarks.github.io/
I beg to differ. Tactical use of a scientific or graphing calculator can absolutely replace large parts of the thinking process. If you're testing for the ability to solve differential equations, a powerful enough calculator can trivialize the test, so those calculators aren't allowed in calculus exams. A 10-digit calculator can't trivialize calculus, so it is allowed. That's the distinction. LLMs operate at the maximum level of "helpfulness", and there's no good way to dial them back.
As someone who had a college English assignment due literally just yesterday, I think that "the vast majority" is an overstatement. There are absolutely students in my class who cheat with AI (one of them confessed to it and got a metaphorical slap on the wrist: a 15-point deduction and the opportunity to redo the assignments, which doesn't seem fair, but whatever), but the majority of my classmates were actively discussing and working on their essays in class.
Whatever solution we implement in response to AI, it must avoid hurting the students who genuinely want to learn and do honest work. Treating AI detection tools as infallible oracles is a terrible idea because of their staggering false-positive rate. The solution many people have proposed in this thread, short one-on-one sessions with the instructor, seems like a great way to check whether students can engage with and defend the work they turned in.
> now you're left with the bajillion other "grey market" models that won't give a damn about that.
Exactly. When the barrier to entry for training an okay-ish AI model (not SOTA, obviously) is only a few thousand compute hours on H100s, you couldn't possibly hope to police the training of 100% of new models. Not to mention that lots of existing models are already out there and fully open-source. There will always be AI models that don't adhere to watermark regulations, especially if they were created in a country that doesn't enforce your regulations.
You can't hope to solve the problem of non-watermarked AI completely. And by solving it only partially, mandating that the big AI labs add a unified watermark, you condition people to be even more susceptible to AI images, because "if it was AI, it would have a watermark". It's truly a no-win situation.
This is very interesting. So it's an MCP server that connects to what is effectively a sandboxed MCP "hub". This is a clever middle ground between using dozens of context-munching MCP servers and just giving the agent access to your command line.
One question: why is Deno used? I thought that it was a JavaScript runtime. Can pctx only run sandboxed JavaScript code? If so, what do you do if you need the agent to run a Python script? If not, I don't understand how using a sandboxed JavaScript runtime allows you to sandbox other things.
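My rough understanding (an assumption on my part, not something from the pctx docs) is that Deno's appeal is its deny-by-default permission model: the runtime itself refuses file, network, env, and subprocess access unless each capability is granted at launch. A sketch of the idea, with a made-up script and host:

    // sandboxed.ts: code an agent wants to run (hypothetical example)
    //
    //   deno run --no-prompt sandboxed.ts
    //     -> the fetch below is denied: no network permission granted
    //   deno run --allow-net=api.example.com sandboxed.ts
    //     -> the fetch succeeds, but only for that single host
    const res = await fetch("https://api.example.com/data");
    console.log(res.status);

But that only covers JavaScript/TypeScript as far as I can tell, which is why I'm asking about Python.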