Thanks for your rude words.
The gist is, agents and the ideas underpinning Agentic LLMs are 20+ years old, and agents have been managing systems and keeping things up autonomously for decades now. JADE was developed by Telecom Italia to keep tabs on their telephone infrastructure; also, since the agents can migrate, it was arguably the original edge computing, but I digress...
You don't have to give a damn about my research. That's not the point. You challenged my knowledge, and I showed you what I know and how I know it; plus, you got a small history of intelligent agents to boot.
I don't know what you are trying to achieve by asking whether I'm autistic. I'm not, and it doesn't matter. The way it comes across is bluntly insulting regardless of my situation.
So yes, Agentic LLMs are new, but agents themselves are not, and the agents I'm talking about are not dumb chatbots. They can wander distributed systems, process data, learn from that data, report their findings, and optimize themselves as they operate. They are not just parrots, but real programs that keep infrastructures intact.
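To make that concrete, here's a minimal sketch of such an agent in JADE. The JADE classes and calls (Agent, TickerBehaviour, ACLMessage, AID) are the framework's real ones; the monitoring scenario, the "supervisor" peer, and the load check are purely illustrative:

    import jade.core.AID;
    import jade.core.Agent;
    import jade.core.behaviours.TickerBehaviour;
    import jade.lang.acl.ACLMessage;

    // Wakes itself up every 10 seconds, samples its environment,
    // and reports to a peer agent -- no user in the loop.
    public class MonitorAgent extends Agent {
        @Override
        protected void setup() {
            addBehaviour(new TickerBehaviour(this, 10_000) {
                @Override
                protected void onTick() {
                    double load = sampleLoad(); // hypothetical sensor readout
                    if (load > 0.9) {
                        ACLMessage report = new ACLMessage(ACLMessage.INFORM);
                        // "supervisor" is a made-up peer agent's local name.
                        report.addReceiver(new AID("supervisor", AID.ISLOCALNAME));
                        report.setContent("high-load:" + load);
                        myAgent.send(report);
                    }
                }
            });
        }

        private double sampleLoad() {
            return Math.random(); // stand-in for real telemetry
        }
    }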
Since you're losing your temper and resorting to ad hominem attacks, and seeing that it's tea time here, I'd rather sip my tea and continue my day.
Thank you for the chat and insults, and have a nice and productive life.
> By the time results start appearing, my brain is already fast at work processing the output to qualify whether the information the LLMs return is accurate, whether it's a good leaping off point, whether I can keep drilling deeper, expand my prompt scope, etc.
Seems unnecessarily tiring. Instead, I use a search engine free of SEO spam and ads. It's called Kagi. It allows me to refine my search further via lenses and site prioritization. It also has zero chance of hallucination, because it's a deterministic search engine.
> It feels like I'm plugged into the Matrix rather than getting SEO'd garbage. I know the results have issues, but that doesn't matter - I can quickly draw together the pieces and navigate around it. Compared to Google, it feels like piloting a star ship.
Same for Kagi, minus the selling of my data, the trawling of information obtained without consent or in disregard of ethics, and many other things.
Note: I don't use any of Kagi's AI features, including proofreading.
So, I took you at your word, assumed I knew nothing about agents and Agentic AI, and started digging. Wikipedia states the following about Agentic AI:
> "Agentic AI is a class of artificial intelligence that focuses on autonomous systems that can make decisions and perform tasks without human intervention."
I can work with that. So we have agents that autonomously react to their environment, to changes, or, let's say, to stimuli. They sit there and do what they are designed to do, and they do it autonomously. Makes sense. However, this sounds a bit familiar to me. Probably I'm hallucinating something, so let's dig deeper. There seems to be an important distinction, though:
> "Agentic AI operates independently, making decisions through continuous learning and analysis of external data and complex data sets."
So, it needs to be able to learn, evolve, and analyze external, complex data sets. That's plausible, but my hunch is still lingering there, tingling a bit stronger. At this point, for Agentic AI, we need an independent "thing" that can decide, act, learn, and access external data sources to analyze and learn from. In short, I need to be able to give this Agentic AI a goal, and it accomplishes it automatically with the things at its disposal. Fair enough.
We were discussing (software) agents and their history, so let's pivot back to agents. Again turning to Wikipedia, we find this sentence:
> "In computer science, a software agent is a computer program that acts for a user or another program in a relationship of agency."
Again, a piece of software that acts for a user or another program. Hmm... Agents have five basic attributes: 1) they are not strictly invoked for a task, but activate themselves; 2) they may reside in wait status on a host, perceiving context; 3) they may get to run status on a host upon starting conditions; 4) they do not require user interaction; 5) they may invoke other tasks, including communication. That hunch, though. It feels more like mild kicking now. Where do I know these concepts from? Somewhere in the past? Nah, I'm hallucinating. You told me they are new.
As I skim the article and move past "Intelligent agents", I see a very familiar line under the "Notions and frameworks for agents" heading: "Java Agent Development Framework (JADE)". I know this. Now I remember!
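Those five attributes, by the way, map almost one-to-one onto JADE primitives. A toy sketch, using JADE's real Agent/CyclicBehaviour/ACLMessage API with a made-up ping/pong protocol:

    import jade.core.Agent;
    import jade.core.behaviours.CyclicBehaviour;
    import jade.lang.acl.ACLMessage;

    // Started once, then lives on the platform: waits (2), wakes itself
    // when something arrives (1, 3), needs no user (4), and talks to
    // other agents (5).
    public class PingAgent extends Agent {
        @Override
        protected void setup() {
            addBehaviour(new CyclicBehaviour(this) {
                @Override
                public void action() {
                    ACLMessage msg = myAgent.receive();
                    if (msg == null) {
                        block(); // wait status: sleep until a message arrives
                        return;
                    }
                    ACLMessage reply = msg.createReply(); // run status: react to context
                    reply.setPerformative(ACLMessage.INFORM);
                    reply.setContent("pong");
                    myAgent.send(reply);
                }
            });
        }
    }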
I used this framework to code a platform where an agent takes an order from a client for a set of items and submits it to another agent; seller agents respond with their best prices, and yet another agent calculates the combination with the cheapest total price: a "combinatorial reverse auction" for a set of items. We had no time to implement feedback-based price-adjustment strategies, but the feedback and announcement code was there, so every agent knew how the transaction went. They were all autonomous. A single agent acted on behalf of the user, and the whole platform responded without a human at any step, including the final decision!
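A heavily simplified sketch of the buyer side, again using JADE's real messaging API. The seller names, the wire format, and the per-item selection are stand-ins; the actual platform handled bundle bids (the "combinatorial" part) and feedback rounds:

    import java.util.HashMap;
    import java.util.Map;

    import jade.core.AID;
    import jade.core.Agent;
    import jade.core.behaviours.OneShotBehaviour;
    import jade.lang.acl.ACLMessage;

    public class BuyerAgent extends Agent {
        private static final String[] SELLERS = {"seller-a", "seller-b"};

        @Override
        protected void setup() {
            addBehaviour(new OneShotBehaviour(this) {
                @Override
                public void action() {
                    // 1. Announce: call for proposals on the item set.
                    ACLMessage cfp = new ACLMessage(ACLMessage.CFP);
                    for (String s : SELLERS) {
                        cfp.addReceiver(new AID(s, AID.ISLOCALNAME));
                    }
                    cfp.setContent("cpu,ram"); // made-up wire format
                    myAgent.send(cfp);

                    // 2. Collect proposals, e.g. "cpu=120.0;ram=45.5".
                    Map<String, Double> cheapest = new HashMap<>();
                    for (int i = 0; i < SELLERS.length; i++) {
                        ACLMessage reply = myAgent.blockingReceive(5_000);
                        if (reply == null || reply.getPerformative() != ACLMessage.PROPOSE) {
                            continue;
                        }
                        for (String bid : reply.getContent().split(";")) {
                            String[] kv = bid.split("=");
                            cheapest.merge(kv[0], Double.parseDouble(kv[1]), Math::min);
                        }
                    }

                    // 3. Winner determination. Trivial with per-item bids;
                    // with bundle bids this is the combinatorial part.
                    System.out.println("Cheapest combination: " + cheapest);
                }
            });
        }
    }

The seller side is symmetric: wait for a CFP, compute prices, and answer with a PROPOSE.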
That was my Master's thesis. I also presented it at the IEEE Symposium on Intelligent Agents (IA) in Orlando in 2014 [0]!
When did I complete my Master's thesis?
Oh. 2010. 15 years ago.
Alright. This solves it.
Now, on to your second question. Let's put it right here:
> You blindfold a human and prevent him from using any tools and then ask him what tools he used? What do you expect will happen. Either the human will lie to you about what he did or tell you what he didn't do. No different from an LLM.
You're mangling my question. The question I asked is different:
> Generate me Python code for solving problem X, then tell me which source repositories you used to generate this code. Cite their licenses, if possible.
For the first part of the problem, all of the required information is in the core network. LLMs without tool capabilities can generate code, and generate it well. This knowledge comes from their training set, which consists of at least "The Stack", plus some other data sources on top of that. So the LLM can generate the code without any tools, but it can't know where that code came from. It's just there, in the core network.
You think the question is stupid, but it's not. This is where all the ethical questions around LLM training are rooted. LLMs hallucinate licenses, don't know where the code came from, and whatnot. If you ask me about a piece of my source code, I can give you the source, the thought process, and the design, citing its originality or explaining how I found it elsewhere and how it got into my codebase. Lying about it would be a big problem in light of the licenses involved, but LLMs get off scot-free because they're just "fair using" it. Humans can't do the same thing, so why can LLMs? Because their owners have money and influence? Seems so.
> You didn't store those references in your brain right? You looked that shit up.
No. When I looked that shit up, I recorded where I read it, along with other context, in some cases including the weather that day. I don't answer "I just know, I don't know how" when people ask me about the source of my knowledge.
This is the difference between humans and LLMs; this thin line is very important.