Learn Prolog Now (2006)

2025-11-12 14:54 | lpn.swi-prolog.org

This version of Learn Prolog Now! embeds SWISH, SWI-Prolog for SHaring. The current version rewrites the Learn Prolog Now! HTML on the fly, recognising source code and example queries. It is not yet good at recognising the relations between source code fragments and queries. Also, Learn Prolog Now! needs some updating to be more compatible with SWI-Prolog. All sources are on GitHub:

LearnPrologNow · LPN SWISH Proxy · SWISH

Learn Prolog Now! is an introductory course to programming in Prolog. The online version has been available since 2001, and now there is also a thoroughly revised version available in book form.

We wanted to do two things with this course. First, we wanted to provide a text that was relatively self-contained, a text that would permit someone with little or no knowledge of computing to pick up the basics of Prolog with the minimum of fuss. We also wanted the text to be clear enough to make it useful for self-study. We believe that if you read the text, and do the associated exercises, you will gain a useful partial entry to the world of Prolog.

But only a partial entry, and this brings us to our second point. We want to emphasize the practical aspects of Prolog. Prolog is something you do. You can't learn a programming language simply by reading about it, and if you really want to get the most out of this course, we strongly advise you to get hold of a Prolog interpreter (you'll find pointers to some nice ones on this website) and work through all the Practical Sessions that we provide. And of course, don't stop with what we provide. The more you program, the better you'll get....

We hope you enjoy the course. And whether you're using this book to teach yourself Prolog, or you're using it as the basis for teaching others, we would like to hear from you. Please send us any comments/corrections you have so that we can take them into account in later versions.


Read the original article

Comments

  • By disambiguation 2025-11-1217:1819 reply

    I am once again shilling the idea that someone should find a way to glue Prolog and LLMs together for better reasoning agents.

    https://news.ycombinator.com/context?id=43948657

    Thesis:

    1. LLMs are bad at counting the number of r's in strawberry.

    2. LLMs are good at writing code that counts letters in a string.

    3. LLMs are bad at solving reasoning problems.

    4. Prolog is good at solving reasoning problems.

    5. ???

    6. LLMs are good at writing prolog that solves reasoning problems.

    Common replies:

    1. The bitter lesson.

    2. There are better solvers, ex. Z3.

    3. Someone smart must have already tried and ruled it out.

    Successful experiments:

    1. https://quantumprolog.sgml.net/llm-demo/part1.html
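
    As a minimal sketch of what point 6 might look like in practice (the family facts and predicate names here are made up for illustration), the LLM would only emit facts and a rule, and the Prolog engine does the search:

        % hypothetical LLM output for "who is Timmy's grandparent?"
        parent(martha, brian).
        parent(brian, timmy).

        % G is a grandparent of C if G is a parent of a parent of C
        grandparent(G, C) :- parent(G, P), parent(P, C).

        % ?- grandparent(Who, timmy).
        % Who = martha.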

    • By jodrellblank 2025-11-1217:469 reply

      > "4. Prolog is good at solving reasoning problems."

      Plain Prolog's way of solving reasoning problems is effectively:

          for person in [martha, brian, sarah, tyrone]:
            if timmy.parent == person:
              print "solved!"
      
      You hard code some options, write a logical condition with placeholders, and Prolog brute-forces every option in every placeholder. It doesn't do reasoning.

      Arguably it lets a human express reasoning problems better than other languages by letting you write high-level code in a declarative way, instead of allocating memory and choosing data types and initializing linked lists and so on, so you can focus on the reasoning; but that is no benefit to an LLM, which can output any language as easily as any other. And while that might have been nice compared to Pascal in 1975, it's not so different from modern garbage-collected high-level scripting languages. Arguably Python or JavaScript will benefit an LLM most because there are so many training examples of them, compared to almost any other language.
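
      For reference, the plain-Prolog program that pseudo-code stands in for would look roughly like this (same made-up names as above):

          parent(martha, timmy).
          parent(brian, alice).
          parent(sarah, bob).
          parent(tyrone, keisha).

          % ?- parent(Who, timmy).
          % Who = martha.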

      • By YeGoblynQueenne 2025-11-131:041 reply

        >> You hard code some options, write a logical condition with placeholders, and Prolog brute-forces every option in every placeholder. It doesn't do reasoning.

        SLD-Resolution with unification (Prolog's automated theorem proving algorithm) is the polar opposite of brute force: as the proof proceeds, the cardinality of the set of possible answers [1] decreases monotonically. Unification itself is nothing but a dirty hack to avoid having to ground the Herbrand base of a predicate before completing a proof; which is basically going from an NP-complete problem to a linear-time one (on average).

        Besides which I find it very difficult to see how a language with an automated theorem prover for an interpreter "doesn't do reasoning". If automated theorem proving is not reasoning, what is?

        ___________________

        [1] More precisely, the resolution closure.
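
        As a small top-level illustration of that last point, unification binds variables on demand instead of enumerating ground instances first:

            ?- foo(X, b, g(Z)) = foo(a, Y, g(c)).
            % X = a, Y = b, Z = c.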

        • By jodrellblank 2025-11-133:522 reply

          > "as the proof proceeds, the cardinality of the set of possible answers [1] decreases"

          In the sense that it cuts off part of the search tree where answers cannot be found?

              member(X, [1,2,3,4]),
              X > 5,
              slow_computation(X, 0.001).
          
          will never do the slow_computation - but if it did, it would come up with the same result. How is that the polar opposite of brute force, rather than an optimization of brute-force?

          If a language has tail call optimization then it can handle deeper recursive calls with less memory. Without TCO it would do the same thing and get the same result but using more memory, assuming it had enough memory. TCO and non-TCO aren't polar opposites, they are almost the same.

          • By YeGoblynQueenne 2025-11-1310:191 reply

            Rather, in the sense that during a Resolution-refutation proof, every time a new Resolution step is taken, the number of possible subsequent Resolution steps either gets smaller or stays the same (i.e. "decreases monotonically"). That's how we know for sure that if the proof is decidable there comes a point at which no more Resolution steps are left, and either the empty clause is all that remains, or some non-empty clause remains that cannot be reduced further by Resolution.

            So basically Resolution gets rid of more and more irrelevant ...stuff as it goes. That's what I mean that it's "the polar opposite of brute force". Because it's actually pretty smart and it avoids doing the dumb thing of having to process all the things all the time before it can reach a conclusion.

            Note that this is the case for Resolution, in the general sense, not just SLD-Resolution, so it does not depend on any particular search strategy.

            I believe SLD-Resolution specifically (which is the kind of Resolution used in Prolog) goes much faster, first because it's "[L]inear" (i.e. in any Resolution step one clause must be one of the resolvents of the last step) and second because it's restricted to [D]efinite clauses and, as a result, there is only one resolvent at each new step and it's a single Horn goal so the search (of the SLD-Tree) branches in constant time.
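
            As a trivial concrete example of the goal shrinking step by step (using the usual textbook ancestor/2 program):

                ancestor(X, Y) :- parent(X, Y).
                ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

                parent(abe, homer).
                parent(homer, bart).

                % ?- ancestor(abe, bart).
                %   clause 1 fails (no parent(abe, bart) fact), so resolve with clause 2:
                %   ?- parent(abe, Z), ancestor(Z, bart).
                %   Z = homer:   ?- ancestor(homer, bart).
                %   clause 1:    ?- parent(homer, bart).   a fact, so the proof succeeds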

            Refs:

            J. Alan Robinson, "A computer-oriented logic based on the Resolution principle" [1965 paper that introduced Resolution]

            https://dl.acm.org/doi/10.1145/321250.321253

            Robert Kowalski, "Predicate Logic as a Programming Language"

            https://www.researchgate.net/publication/221330242_Predicate... [1974 paper that introduced SLD-Resolution]

            • By YeGoblynQueenne 2025-11-1312:371 reply

              I don't want to keep editing the above comment, so I'm starting a new one.

              I really recommend that anyone with an interest in CS and AI read at least J. Alan Robinson's paper above. For me it really blew my mind when I finally found the courage to do it (it's old and a bit hard to read). I think there's a trope in wushu where someone finds an ancient scroll that teaches them a long-lost kung-fu and they become enlightened? That's how I felt when I read that paper, like I gained a few levels in one go.

              Resolution is a unique gem of symbolic AI, one of its major achievements and a workhorse: used not only in Prolog but also in one of the two dominant branches of SAT-Solving (i.e. the one that leads from Davis-Putnam to Conflict Driven Clause Learning) and even in machine learning, in one of the two main branches of Inductive Logic Programming (which I study), which is based on trying to perform induction by inverting deduction and so by inverting Resolution. There's really an ocean of knowledge that flows never-ending from Resolution. It's the bee's knees and the aardvark's nightgown.

              I sincerely believe that the reason so many CS students seem to be positively traumatised by their contact with Prolog is that the vast majority of courses treat Prolog as any other programming language and jump straight to the peculiarities of the syntax and how to code with it, and completely fail to explain Resolution theorem proving. But that's the whole point of the language! What they get instead is some lyrical waxing about the "declarative paradigm", which makes no sense unless you understand why it's even possible to let the computer handle the control flow of your program while you only have to sort out the logic. Which is to say: because FOL is a computational paradigm, not just an academic exercise. No wonder so many students come off those courses thinking Prolog is just some stupid academic faffing about, and that it's doing things differently just to be different (not a strawman- actual criticism that I've heard).

              In this day and age where confusion reigns about what even it means to "reason", it's a shame that the answer, that is to be found right there, under our noses, is neglected or ignored because of a failure to teach it right.

              • By rramadass 2025-11-1315:01

                Excellent and Informative comments !

                The way to learn a language is not via its syntax but by understanding the computation model and the abstract machine it is based on. For imperative languages this is rather simple and so we can jump right in and muddle our way to some sort of understanding. With Functional languages it is much harder (you need to know the logic of functions) and it is quite impossible with Logic languages (you need to know predicate logic). Thus we need to first focus on the underlying mathematical concepts for these categories of languages.

                The Robert Kowalski paper Predicate Logic as a Programming Language you list above is the Rosetta stone of logic languages and an absolute must-read for everybody. It builds everything up from the foundations using implication (in disjunctive form), clause, clausal sentence, semantics, Horn clauses and computation (i.e. resolution derivation); all absolutely essential to understanding! This is the "enlightenment scroll" of Prolog.

          • By xabush 2025-11-134:261 reply

            I don't understand (the point of) your example. In all branches of the search `X > 5` will never be `true` so yeah `slow_computation` will not be reached. How does that relate to your point of it being "brute force"

            >> but if it did, it would come up with the same result

            Meaning either changing the condition or the order of the clauses. How do you expect Prolog to proceed to `slow_computation` when you have declared a statement (X > 5) that is always false before it.

            • By jodrellblank 2025-11-1315:34

              The point is to compare a) evaluate all three lines (member, >5, slow_computation) then fail because the >5 test failed; against b) evaluate (member, >5) then fail. And to ask whether that's the mechanism YeGoblynQueenne is referring to. If so, is it valid to describe b as "the polar opposite" of a? They don't feel like opposites, merely an implementation-detail performance hack. We can imagine some completely different strategy such as "I know from some other Constraint Logic propagation that slow_computation has no solutions so I don't even need to go as far as the X>5 test" which is "clever" not "brute".

              > "How do you expect Prolog to proceed to `slow_computation` when you have declared a statement (X > 5) that is always false before it"

              I know it doesn't, but there's no reason why it can't. In a C-like language it's common to do short-circuit Boolean logic evaluation like:

                  A && B && C
              
              and if the first AND fails, the second is not tested. But if the language/implementation doesn't have that short-circuit optimisation, both tests are run, the outcome doesn't change. The short-circuit eval isn't the opposite of the full eval. And yes this is nitpicking the term "polar opposite of" but that's the relevant bit about whether something is clever or brute - if you go into every door, that's brute. If you try every door and some are locked, that's still brute. If you see some doors have snow up to them and you skip the ones with no footprints, that's completely different.

      • By tannhaeuser 2025-11-1220:271 reply

        Prolog was introduced to capture natural language - in a logic/symbolic way that didn't prove as powerful as today's LLM for sure, but this still means there is a large corpus of direct English to Prolog mappings available for training, and also the mapping rules are much more straightforward by design. You can pretty much translate simple sentences 1:1 into Prolog clauses as in the classic boring example

            % "the boy eats the apple"
            eats(boy, apple).
        
        This is being taken advantage of in Prolog code generation using LLMs. In the Quantum Prolog example, the LLM is also instructed not to generate search strategies/algorithms but just planning domain representation and action clauses for changing those domain state clauses which is natural enough in vanilla Prolog.

        The results are quite a bit more powerful, close to end user problems, and upward in the food chain compared to the usual LLM coding tasks for Python and JavaScript such as boilerplate code generation and similarly idiosyncratic problems.
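
        Building on the eats/2 fact above, rules map almost as directly as facts do (a common textbook-style illustration, not taken from the Quantum Prolog demo):

            % "Mary likes anything the boy eats"
            likes(mary, X) :- eats(boy, X).

            % ?- likes(mary, What).
            % What = apple.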

        • By jodrellblank 2025-11-134:05

          "large corpus" - large compared to the amount of Python on Github or the amount of JavaScript on all the webpages Google has ever indexed? Quantum Prolog doesn't have any relevant looking DuckDuckGo results, I found it in an old comment of yours here[1] but the link goes to a redirect which is blocked by uBlock rules and on to several more redirects beyond which I didn't get to a page. In your linked comment you write:

          > "has convenient built-in recursive-decent parsing with backtracking built-in into the language semantics, but also has bottom-up parsing facilities for defining operator precedence parsers. That's why it's very convenient for building DSLs"

          which I agree with, for humans. What I am arguing is that LLMs don't have the same notion of "convenient". Them dumping hundreds of lines of convoluted 'unreadable' Python (or C or Go or anything) to implement "half of Common Lisp" or "half of a Prolog engine" for a single task is fine, they don't have to read it, and it gets the same result. What would be different is if it got a significantly better result, which I would find interesting but haven't seen a good reason why it would.

          [1] https://news.ycombinator.com/item?id=40523633

      • By hunterpayne 2025-11-1221:315 reply

        It's a Horn clause resolver... that's exactly the kind of reasoning that LLMs are bad at. I have no idea how to graft Prolog to an LLM but if you can graft any programming language to it, you can graft Prolog more easily.

        Also, that you push Python and JavaScript makes me think you don't know many languages. Those are terrible languages to try to graft to anything. Just because you only know those 2 languages doesn't make them good choices for something like this. Learn a real language Physicist.

        • By jodrellblank 2025-11-1223:361 reply

          > Also, that you push Python and JavaScript

          I didn't push them.

          > Those are terrible languages to try to graft to anything.

          Web browsers, Blender, LibreOffice and Excel all use those languages for embedded scripting. They're fine.

          > Just because you only know those 2 languages doesn't make them good choices for something like this.

          You misunderstood my claim and are refuting something different. I said there is more training data for LLMs to use to generate Python and JavaScript, than Prolog.

          • By hunterpayne 2025-11-130:481 reply

            I'm not. Python and JS are scripting languages. And in this case, we want something that models formal logic. We are hammering in a nail, you picked up a screwdriver and I am telling you to use a claw hammer.

            • By Karrot_Kream 2025-11-134:342 reply

              What does this comment even mean? A claw hammer? By formal definitions, all 3 languages are Turing complete and can express programs of the same computational complexity.

              • By dragonwriter 2025-11-155:441 reply

                > By formal definitions, all 3 languages are Turing complete and can express programs of the same computational complexity.

                So is Brainfuck.

                Turing equivalence does not imply that languages are equally useful choices for any particular application.

              • By lionturtle 2025-11-136:451 reply

                But we kinda don't use Python for a database query over SQL, do we?

        • By cpfohl 2025-11-133:35

          No call for talking down at people. No one has ever been convinced by being belittled.

        • By dragonwriter 2025-11-155:42

          > I have no idea how to graft Prolog to an LLM

          Wrapping either the SWI-Prolog MQI, or even simpler an existing Python interface like janus_swi, in a simple MCP is probably an easy weekend project. Tuning the prompting to get an LLM to reliably and effectively choose to use it when it would benefit from symbolic reasoning may be harder, though.

        • By numpy-thagoras 2025-11-1222:23

          We would begin by having a Prolog server of some kind (I have no idea if Prolog is parallelized but it should very well be if we're dealing with Horn Clauses).

          There would be MCP bindings to said server, which would be accessible upon request. The LLM would provide a message, it could even formulate Prolog statements per a structured prompt, and then await the result, and then continue.

        • By Woodi 2025-11-138:30

          > Its a Horn clause resolver...that's exactly the kind of reasoning that LLMs are bad at. I have no idea how to graft Prolog to an LLM but if you can graft any programming language to it, you can graft Prolog more easily.

          By grafting the LLM into Prolog and not the other way around?

      • By disambiguation 2025-11-163:38

        This sparked a really fascinating discussion, I don't know if anyone will see this but thanks everyone for sharing your thoughts :)

        I understand your point - to an LLM there's no meaningful difference between one Turing-complete language and another. I'll concede that I don't have a counterargument, and perhaps it doesn't need to be Prolog - though my hunch is that LLMs tend to give better results when using purpose-built tools for a given type of problem.

        The only loose end I want to address is the idea of "doing reasoning."

        This isn't an AGI proposal (I was careful to say "good at writing prolog") just an augmentation that (as a user) I haven't yet seen applied in practice. But neither have I seen it convincingly dismissed.

        The idea is the LLM would act like an NLP parser that gradually populates a prolog ontology, like building a logic jail one brick at a time.

        The result would be a living breathing knowledge base which constrains and informs the LLM's outputs.

        The punchline is that I don't even know any prolog myself, I just think it's a neat idea.
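
        A minimal sketch of that "one brick at a time" idea, assuming the LLM's only job is to extract facts (the predicate names here are invented):

            :- dynamic parent/2.

            % each extracted sentence becomes one asserted fact
            add_fact(Fact) :- assertz(Fact).

            % ?- add_fact(parent(martha, timmy)).
            % ?- add_fact(parent(timmy, ann)).
            % ?- parent(martha, Kid), parent(Kid, GrandKid).
            % Kid = timmy, GrandKid = ann.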

      • By cpill 2025-11-1218:571 reply

        Of course it does "reasoning", what do you think reasoning is? From a quick google: "the action of thinking about something in a logical, sensible way". Prolog searches through a space of logical proposition (constraints) and finds conditions that lead to solutions (if one exists).

        (a) Try adding another 100 or 1000 interlocking propositions to your problem. It will find solutions or tell you one doesn't exist. (b) You can verify the solutions yourself. You don't get that with imperative descriptions of problems. (c) Good luck sandboxing Python or JavaScript with the threat of prompt injection still unsolved.

        • By jodrellblank 2025-11-1220:402 reply

          Of course it doesn't "do reasoning", why do you think "following the instructions you gave it in the stupidest way imaginable" is 'obviously' reasoning? I think one definition of reasoning is being able to come up with any better-than-brute-force thing that you haven't been explicitly told to use on this problem.

          Prolog isn't "thinking". Not about anything, not about your problem, your code, its implementation, or any background knowledge. Prolog cannot reason that your problem is isomorphic to another problem with a known solution. It cannot come up with an expression transform that hasn't been hard-coded into the interpreter which would reduce the amount of work involved in getting to a solution. It cannot look at your code, reason about it, and make a logical leap over some of the code without executing it (in a way that hasn't been hard-coded into it by the programmer/implementer). It cannot reason that your problem would be better solved with SLG resolution (tabling) instead of SLD resolution (depth first search). The point of my example being pseudo-Python was to make it clear that plain Prolog (meaning no constraint solver, no metaprogramming), is not reasoning. It's no more reasoning than that Python loop is reasoning.

          If you ask me to find the largest Prime number between 1 and 1000, I might think to skip even numbers, I might think to search down from 1000 instead of up from 1. I might not come up with a good strategy but I will reason about the problem. Prolog will not. You code what it will do, and it will slavishly do what you coded. If you code counting 1-1000 it will do that. If you code Sieve of Eratosthenes it will do that instead.

          • By joshmoody24 2025-11-1221:45

            The disagreement you have with the person you are replying to just boils down to a difference in the definition of "reasoning."

          • By hunterpayne 2025-11-1221:381 reply

            It's a Horn clause interpreter. Maybe look up what that is before commenting on it. Clearly you don't have a good grasp of Computer Science concepts or math based upon your comments here. You also don't seem to understand the AI/ML definition of reasoning (which is based in formal logic, much like Prolog itself).

            Python and Prolog are based upon completely different kinds of math. The only thing they share is that they are both Turing complete. But being Turing complete isn't a strong or complete mathematical definition of a programming language. This is especially true for Prolog which is very different from other languages, especially Python. You shouldn't even think of Prolog as a programming language, think of it as a type of logic system (or solver).

            • By jodrellblank 2025-11-132:281 reply

              None of that is relevant.

              • By cpill 2025-11-1721:22

                Of course it is. You're confusing reasoning with intelligence. They are not the same thing.

      • By superdisk 2025-11-1311:031 reply

        Contrary to what everyone else is saying, I think you're completely correct. Using it for AI or "reasoning" is a hopeless dead end, even if people wish otherwise. However I've found that Prolog is an excellent language for expressing certain types of problems in a very concise way, like parsers, compilers, and assemblers (and many more). The whole concept of using a predicate in different modes is actually very useful in a pragmatic way for a lot of problems.

        When you add in the constraint solving extensions (CLP(Z) and CLP(B) and so on) it becomes even more powerful, since you can essentially mix vanilla Prolog code with solver tools.
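
        A small illustration of the "one predicate, several modes" point, using the standard append/3:

            ?- append([1,2], [3], Xs).       % Xs = [1,2,3]
            ?- append(Xs, [3], [1,2,3]).     % Xs = [1,2]
            ?- append(Xs, Ys, [1,2]).        % enumerates every split of [1,2]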

        • By YeGoblynQueenne 2025-11-1312:551 reply

          The reason why you can write parsers with Prolog is because you can cast the problem of determining whether a string belongs to a language or not as a proof, and, in Prolog, express it as a set of Definite Clauses, particularly with the syntactic sugar of Definite Clause Grammars that give you an executable grammar that acts as both acceptor and generator and is equivalent to a left-corner parser.
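
          For instance, a two-clause DCG for the language a^n b^n works as both acceptor and generator (a minimal sketch):

              s --> [].
              s --> [a], s, [b].

              % ?- phrase(s, [a,a,b,b]).
              % true (acceptor)
              % ?- length(Ws, 4), phrase(s, Ws).
              % Ws = [a, a, b, b] (generator)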

          Now, with that in mind, I'd like to understand how you and the OP reconcile the ability to carry out a formal proof with the inability to do reasoning. How is it not reasoning, if you're doing a proof? If a proof is not reasoning, then what is?

          • By jodrellblank 2025-11-1316:582 reply

            Clearly people write parsers in C and C++ and Pascal and OCAML, etc. What does it mean to come in with "the reason you can write parsers with Prolog..."? I'm not claiming that reason is incorrect, I'm handwaving it away as irrelevant and academic. Like saying that Lisp map() is better than Python map() because Lisp map is based on formal Lambda Calculus and Python map is an inferior imitation for blub programmers. When a programmer maps a function over a list and gets a result, it's a distinction without a difference. When a programmer writes a getchar() peek() and goto state machine parser with no formalism, it works, what difference does the formalism behind the implementation practically make?

            Yes, maybe the Prolog way means concise code is easier for a human to tell whether the code is a correct expression of the intent, but an LLM won't look at it like that. Whatever the formalism brings, it hasn't been enough to make Prolog the language of every parser written in the last 50 years. Therefore it isn't of any particular interest or benefit, except academic.

            > both acceptor and generator

            Also academically interesting but practically useless due to the combinatorial explosion of "all possible valid grammars" after the utterly basic "aaaaabbbbbbbbbbbb" examples.

            > "how you and the OP reconcile the ability to carry out a formal proof with the inability to do reasoning. How is it not reasoning, if you're doing a proof? If a proof is not reasoning, then what is?"

            If drawing a painting is art, is it art if a computer pulls up a picture of a painting and shows it on screen? No. If a human coded the proof into a computer, the human is reasoning, the computer isn't. If the computer comes up with the proof, the computer is reasoning. Otherwise you're in a situation where dominos falling over is "doing reasoning" because it can be expressed formally as a chain of connected events where the last one only falls if the whole chain is built properly, and that's absurdum.

            • By rramadass 2025-11-143:31

              > If a human coded the proof into a computer, the human is reasoning, the computer isn't. ... If the computer comes up with the proof, the computer is reasoning.

              That is exactly what "formal logic programming" is all about. The machine is coming up with the proof for your query based on the facts/rules given by you. Therefore it is a form of reasoning.

              Reasoning (cognitive thinking) is expressed as Arguments (verbal/written premises-to-conclusions), a subset of which are called Proofs (step-by-step valid arguments). Using Formalization techniques we have just pushed some of those proof derivations to a machine.

              I pointed this out in my other comment here https://news.ycombinator.com/item?id=45911177 with some relevant links/papers/books.

              See also Logical Formalizations of Commonsense Reasoning: A Survey (from the Journal of Artificial Intelligence Research) - https://jair.org/index.php/jair/article/view/11076

            • By YeGoblynQueenne 2025-11-1322:231 reply

              With Prolog, the proof is carried out by the computer, not a human. A human writes up a theory and a theorem and the computer proves the theorem with respect to the theory. So I ask again, how is carrying out a proof not reasoning?

              >> I'm not claiming that reason is incorrect, I'm handwaving it away as irrelevant and academic.

              That's not a great way to have a discussion.

              • By jodrellblank 2025-11-1414:201 reply

                The word "reason" came into this thread with the original comment:

                    3. LLMs are bad at solving reasoning problems.
                
                    4. Prolog is good at solving reasoning problems.
                
                I agree with you. In Prolog "?- 1=1." is reasoning by definition. Then 4. becomes "LLMs should emit Prolog because Prolog is good at executing Prolog code".

                I think that's not a useful place to be, so I was trying to head off going there. But now I'll go with you - I agree it IS reasoning - can you please support your case that "executing Prolog code is reasoning" makes Prolog more useful for LLMs to emit than Python?

                • By YeGoblynQueenne 2025-11-1416:151 reply

                  This is not my claim:

                  >> "executing Prolog code is reasoning" makes Prolog more useful for LLMs to emit than Python?

                  I said what I think about LLMs generating Prolog here:

                  https://news.ycombinator.com/item?id=45914587

                  But I was mainly asking why you say that Prolog's execution is "not reasoning". I don't understand what you mean that '"?- 1=1." is reasoning by definition' and how that ties-in with our discussion about Prolog reasoning or not.

                  • By jodrellblank 2025-11-1421:352 reply

                    "?- 1=1." is Prolog code. Executing Prolog code is reasoning. Therefore that is reasoning. Q.E.D. This is the point you refused to move on from until I agreed. So I agreed. So we could get back to the interesting topic.

                    A topic you had no interest in, only interest in dragging it onto a tangent and grinding it down to make ... what point, exactly? If "executing Prolog code" is reasoning, then what? I say it isn't useful to call it reasoning (in the context of this thread) because it's too broad to be a helpful definition, basically everything is reasoning, and almost nothing is not. When I tried to say in advance that this wouldn't be a useful direction and I didn't want to go here, you said it was "not a great way to have a discussion". And now, having dragged me off onto this academic tangent, you dismiss it as "I wasn't interested in that other topic anyway". Annoying.

                    • By rramadass 2025-11-1517:091 reply

                      > "?- 1=1." is Prolog code. Executing Prolog code is reasoning. Therefore that is reasoning. Q.E.D.

                      This is the dumbest thing I have read yet on HN. You are absolutely clueless about this topic and are merely arguing for argument's sake.

                      > If "executing Prolog code" is reasoning, then what? I say it isn't useful to call it reasoning (in the context of this thread) because it's too broad to be a helpful definition, basically everything is reasoning, and almost nothing is not.

                      What does this even mean? It has already been pointed out that Prolog does a specific type of formalized reasoning which is well understood. The fact that there are other formalized models to tackle subdomains of "Commonsense Reasoning" does not detract from the above. That is why folks are trying to marry Prolog (predicate logic) to LLMs (mainly statistical approaches) to get the best of both worlds.

                      User "YeGoblynQueenne" was being polite in his comments but for some reason you willfully don't want to understand and have come up with ridiculous examples and comments which only reflect badly on you.

                      • By jodrellblank 2025-11-1517:392 reply

                        You call it the dumbest thing you have ever read, and say that I know nothing - but you agree that it is a correct statement ("Prolog does a specific type of formalized reasoning").

                        > "What does this even mean?"

                        For someone who is so eager to call comments dumb, you sure have a lot of not-understanding going on.

                        1. Someone said "Prolog is good at reasoning problems"

                        2. I said it isn't any better than other languages.

                        3. Prolog people jumped on me because Ackchually Technickally everything Prolog does is 'reasoning' hah gotcha!

                        4. I say that is entirely unrelated to the 'reasoning' in "Prolog is good at reasoning problems". I demonstrate this by reductio ad absurdum - if executing "?- 1=1." is "reasoning" then it's absurd for the person to be saying that definition is a compelling reason to use Prolog, therefore they were not saying that, therefore this whole tangent about whether some formalism is or isn't reasoning by some academic definition is irrelevant to the claim and counter claim.

                        > "are merely arguing for argument's sake."

                        Presumably you are arguing for some superior purpose?

                        The easiest way for you to change my mind is to demonstrate literally anything that is better for an LLM to emit in Prolog than Python - given the condition that LLMs don't have to care about conciseness or expressivity or readability in the same way humans do. For one example, I say it would be no better for an LLM to solve an Einstein Puzzle one way or the other. The fact that you can't or won't do this, and prefer insults, is not changing my mind nor is it educating me in anything.

                        • By rramadass 2025-11-1616:24

                          You edited your comment without any indication tags which is dishonest. However, my previous response at https://news.ycombinator.com/item?id=45939440 is still valid. This is an addendum to that;

                          > The easiest way for you to change my mind is to demonstrate literally anything that is better for an LLM to emit in Prolog than Python

                          I have no interest in trying to change your mind since you simply do not have the first idea about what Prolog is doing vis-a-vis any other non-logic programming language. You have to have some basic knowledge before we can have a meaningful discussion.

                          However, in my previous comment here https://news.ycombinator.com/item?id=45712934 I link to some use cases from others. In particular, the case study from user "bytebach" is noteworthy and explains exactly what you are asking for.

                          > The fact that you can't or won't do this, and prefer insults, is not changing my mind nor is it educating me in anything.

                          This is your dishonest edit without notification. I refuse to suffer wilful stupidity and hence retorted in a pointed manner; that was the only way left to get the message across. We had given you enough data/pointers in our detailed comments none of which you seem to have even grasped nor looked into. In a forum like this, if we are to learn from each other, both parties must put forth effort to understand the other side and articulate one's own position clearly. You have failed on both counts in this thread.

                        • By rramadass 2025-11-1518:25

                          > but you agree that it is correct.

                          No, I did not; do not twist or misrepresent my words. Your example had nothing whatsoever to do with "Reasoning" and hence I called it dumb.

                          > you sure have a lot of not-understanding going on.

                          Your and my comments are there for all to see. Your comments are evidence that you are absolutely clueless on Reasoning, Logic Programming Approaches and Prolog.

                          > 1. Someone said "Prolog is good at reasoning problems"

                          Which is True. But it is up to you to present the world-view to Prolog in the appropriate Formal manner.

                          > 2. I said it isn't any better than other languages.

                          Which is stupid. This single statement establishes the fact that you know nothing about Logic Programming nor the aspect of Predicate Logic it is based on.

                          > 3. Prolog people jumped on me because Ackchually Technickally everything Prolog does is 'reasoning' hah gotcha!

                          Which is True and not a "gotcha". You have no definite understanding of what the word "Reasoning" means in the context of Prolog. We have explained concepts and pointed you to papers none of which you are interested in studying nor understanding.

                          > 4. I say that is entirely unrelated to the 'reasoning' in "Prolog is good at reasoning problems". I demonstrate this by reductio ad absurdum - if executing "?- 1=1." is "reasoning" then it's absurd for the person to be saying that definition is a compelling reason to use Prolog, therefore they were not saying that, therefore this whole tangent about whether some formalism is or isn't reasoning by some academic definition is irrelevant to the claim and counter claim.

                          What does this even mean? This is just nonsense verbiage.

                          > Presumably you are arguing for some superior purpose?

                          Yes. I am testing my understanding of Predicate Logic/Logic Programming/Prolog against others. Also whether others have come up with better ways of application in this era of LLMs, i.e. what are the different ways to use Prolog with LLMs today?

                          I initially thought you were probably wanting a philosophical discussion of what "Reasoning" means and hence pointed to some relevant articles/papers, but I am now convinced you have no clue about this entire subject and are really making up stuff as you go.

                          You are wasting everybody's time, testing their patience and coming across as totally ignorant on this domain.

                    • By YeGoblynQueenne 2025-11-1510:211 reply

                      I'm sorry you find my contribution to the discussion annoying, but how should I feel if you just "agree" with me as a way to get me to stop arguing?

                      But I think your annoyance may be caused by misunderstanding my argument. For example:

                      >> If "executing Prolog code" is reasoning, then what? I say it isn't useful to call it reasoning (in the context of this thread) because it's too broad to be a helpful definition, basically everything is reasoning, and almost nothing is not.

                      Everything is not reasoning, nor is executing any code reasoning, but "executing Prolog code" is, because executing Prolog code is a special case of executing code. The reason for that is that Prolog's interpreter is an automated theorem prover, therefore executing Prolog code is carrying out a proof; in an entirely literal and practical sense, and not in any theoretical or abstract sense. And it is very hard to see how carrying out a proof automatically is "not reasoning".

                      I made this point in my first comment under yours, here:

                      https://news.ycombinator.com/item?id=45909159

                      The same clearly does not apply to Python, because its interpreter is not an automated theorem prover; it doesn't apply to javascript because its interpreter is not an automated theorem prover; it doesn't apply to C because its compiler is not an automated theorem prover; and so on, and so forth. Executing code in any of those languages is not reasoning, except in the most abstract and, well, academic, sense, e.g. in the context of the Curry-Howard correspondence. But not in the practical, down-to-brass-tacks way it is in Prolog. Calling what Prolog does reasoning is not a definition of reasoning that's too broad to be useful, as you say. On the contrary, it's a very precise definition of reasoning that applies to Prolog but not to most other programming languages.

                      I think you misunderstand this argument and as a consequence fail to engage with it and then dismiss it as irrelevant because you misunderstand it. I think you should really try to understand it, because it's obvious you have some strong views on Prolog which are not correct, and you might have the chance to correct them.

                      I absolutely have an interest in any claim that generating Prolog code with LLMs will fix LLMs' inability to reason. Prolog is a major part of my programming work and research.

                      • By jodrellblank 2025-11-174:122 reply

                        > "how should I feel if you just "agree" with me as a way to get me to stop arguing?"

                        Triumphant? Victorious? magnificent, successful, proud, powerful, insert any adjective which applies to a situation where someone wanted something, and then got it.

                        > "And it is very hard to see how carrying out a proof automatically is "not reasoning. The same clearly does not apply to Python, because its interpreter is not an automated theorem prover; it doesn't apply to javascript because its interpreter is not an automated theorem prover"

                        And that does not stop Python or Javascript from being used to find solutions to e.g. an Einstein Puzzle, something a human might call "a reasoning problem". This means Prolog 'doing reasoning' must not be the thing which solves the 'reasoning problem', something else must be doing that because non-reasoning systems can do it too.

                        If Prolog 'doing reasoning' meant it could solve 'reasoning problems' that no other programming language could, that would be a strong reason to use Prolog, but that is not something you or the other 'reasoning' commenters have claimed or offered examples of. Clearly the word 'reasoning' has different definitions in the different sentences, and that is important here because I am responding to one and you all to the other.

                        If 'doing reasoning' is not the thing which makes it useful for 'solving reasoning problems' - if that neither compels one to use Prolog when working to 'solve a reasoning problem', nor convinces one to avoid other languages - if the definition does not influence one's decision in any way - it's very hard to see how it is the relevant version of 'reasoning' to focus on, and what point is trying to be made by this insistence on focusing on it, except academic one-upping.

                        • By YeGoblynQueenne 2025-11-198:40

                          >> And that does not stop Python or Javascript from being used to find solutions to e.g. an Einstein Puzzle, something a human might call "a reasoning problem". This means Prolog 'doing reasoning' must not be the thing which solves the 'reasoning problem', something else must be doing that because non-reasoning systems can do it too.

                          To solve an Einstein puzzle in Python et al. you have to code 1) a definition of the problem and 2) a solution that you come up with. In Prolog you only have to code a definition of the problem and then executing the definition gets to the solution.

                          Other languages indeed can solve problems that Prolog can, but a human programmer must code the solution, while Prolog comes built-in with a universal problem solver, SLD-Resolution, that can solve any problem a human programmer can pose to it.

                          I looked around for an example of this with real code and found this SO thread on programmatically solving a Zebra puzzle (same as the Einstein puzzle):

                          https://stackoverflow.com/questions/318888/solving-who-owns-...

                          There are a few proposed solutions in Python, and in Prolog. The Python solutions pull-in constraint solving libraries, encode the problem constraints and then use for-loops to iterate over the set of solutions that respect the constraints.

                          The Prolog solutions do not pull in any libraries and do not iterate. They declare the constraints of the problem and then execute the constraints, letting the Prolog interpreter find a solution that satisfies them.

                          So the difference is that Prolog can solve the problem on its own, while Python can solve it only if you hand-code the solution, which includes importing a constraint solver. Constraint solving is of course a form of reasoning, and that's how you can get Python to do reasoning: by implementing a reasoning algorithm. In Prolog you don't need to do that, because SLD-Resolution is a universal problem solver that can be applied to constraint problems, like any other problem. This is not an academic matter, as you insist that it is; it is a practical matter, of knowing how to code a universal problem solver and getting it to run on real-world hardware.

                          I say that solving constraints is a form of reasoning. You won't find anyone to disagree with this in the CS and symbolic AI community. While you also won't find an agreed-upon, formal definition of "reasoning", we don't need one because we've been studying reasoning since the time of Aristotle and his "Syllogisms" (literally, "Reasonings" in Greek). In the same way you won't really find an agreed-upon definition of "mathematics", but we don't need one because we've been studying maths since the time of the ancient Babylonians (at least; my memory is hazy).

                          You argue that what Prolog does isn't reasoning, but that's a very niche view. Not that this means you're wrong, but one reason I insist with this discussion is that your view is so unorthodox. If you're right, I'd like to know, so I can understand where I was wrong. But so far I still only see a misunderstanding of Prolog and a continued unwillingness to engage with the argument that Prolog does reasoning because it has an automated theorem prover as an interpreter.

                          Note that the Prolog solutions in the SO thread are a bit over-engineered for my tastes. The one in the link below is much more straightforward although it's for a simplified version of the problem. Still, it shows what I mean that you only need to define the problem and then the interpreter figures out how to solve it.

                          https://www.101computing.net/solving-a-zebra-puzzle-using-pr...
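
                          As a toy version of what that looks like (three houses instead of five, clues invented here), the whole program is just the statement of the constraints; there is no search loop to write:

                              puzzle(Houses) :-
                                  length(Houses, 3),                      % three terms house(Nationality, Pet)
                                  Houses = [house(norwegian, _) | _],     % the Norwegian lives in the first house
                                  member(house(spaniard, dog),  Houses),  % the Spaniard owns the dog
                                  member(house(italian,  cat),  Houses),  % the Italian owns the cat
                                  member(house(_,        fish), Houses).  % someone keeps the fish

                              % ?- puzzle(Hs), member(house(Who, fish), Hs).
                              % Who = norwegian.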

      • By riku_iki 2025-11-1218:301 reply

        Even in your example (which is obviously not a correct representation of Prolog), that code will work X orders of magnitude faster and with 100% reliability compared to the far inferior reasoning capabilities of LLMs.

        • By szundi 2025-11-1218:452 reply

          This is not the point though

          • By Onavo 2025-11-1219:59

            Algorithmically there's nothing wrong with using BFS/DFS to do reasoning as long as the logic is correct and the search space is constrained sufficiently. The hard part has always been doing the constraining, which LLMs seem to be rather good at.

          • By riku_iki 2025-11-1220:57

            > This is not the point though

            Could you expand on what the point is? That the author's opinion, without much justification, is that this is not reasoning?

      • By nuc1e0n 2025-11-1217:554 reply

        What makes you think your brain isn't also brute forcing potential solutions subconsciously and only surfacing the useful results?

        • By jodrellblank 2025-11-1220:492 reply

          Because I can solve problems that would take the age of the universe to brute force, without waiting the age of the universe. So can you: start counting at 1, increment the counter up to 10^8000, then print the counter value.

          Prolog: 1, 2, 3, 4, 5 ...

          You and me instantly: 10^8000

          • By nuc1e0n 2025-11-1516:20

            The brain can still use other means of working in addition to brute forcing solutions. For example, how would you go about solving the eight queens chess puzzle in a way that doesn't involve going through the potential positions and then filtering out the options that don't match the criteria for the solution?

            Prolog can also evaluate mathematical expressions directly.
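
            For example, using the is/2 arithmetic built-in at the top level:

                ?- X is 101 * 70.
                X = 7070.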

          • By numpy-thagoras 2025-11-1222:25

            There are a whole lot of undecidable (or effectively undecidable) edge cases that can be adequately covered. As a matter of fact, Decidability Logic is compatible with Prolog.

        • By mkirsten 2025-11-1217:574 reply

          Can you try calculating 101 * 70 in your head?

          • By hacker_homie 2025-11-1218:06

            I think therefore I am calculator?

          • By johnisgood 2025-11-1218:561 reply

            Very easy to solve, just like it is easy to solve many other ones once you know the tricks.

            I recommend this book: https://www.amazon.com/Secrets-Mental-Math-Mathemagicians-Ca...

            • By jabbywocker 2025-11-1219:181 reply

              Completely missing the point on purpose?

              • By johnisgood 2025-11-1221:201 reply

                Elaborate.

                • By jabbywocker 2025-11-131:111 reply

                  you don’t solve it by brute forcing possible solutions until one sticks

                  • By johnisgood 2025-11-138:361 reply

                    Yeah, read it in another comment. Why do you think doing calculations in your head is brute-forcing? Many people can do it flawlessly, without even knowing of these "tricks". They just know. Is that brute-force?

                    • By jabbywocker 2025-11-142:221 reply

                      [flagged]

                      • By johnisgood 2025-11-1411:041 reply

                        You are not replying to what I said. I am not going to repeat myself, see the parent comment you replied to.

                        • By Izkata 2025-11-1415:121 reply

                          Your original comment completely missed the point of what it was replying to, you phrased it like you were correcting them but you were actually in agreement and didn't seem to realize it. They tried to clarify when you asked and when you responded you assumed they had the opposite viewpoint from what they actually have.

                          • By johnisgood 2025-11-159:261 reply

                            I replied to "Can you try calculating 101 * 70 in your head?".

                            Yes, many people can successfully calculate it through learned tricks or no learned tricks.

                            That is all I am saying. And my question is: how is calculating it in your own head brute-force, especially if done without such tricks?

                            No need to complicate it, answers to my question would suffice.

                            • By Izkata 2025-11-1517:481 reply

                              > how is calculating it in your own head brute-force

                              Doing this calculation in your head is not brute force, which was their entire point. Their math question was an example of why the brain isn't brute-forcing solutions, replying to this:

                              > What makes you think your brain isn't also brute forcing potential solutions subconciously and only surfacing the useful results?

                              • By johnisgood 2025-11-2013:32

                                And I posted a link to a book that helps with mental math. Does it make it brute force if I know some mental math tricks?

                                In any case, I am not sure why he kept on telling me that you do not solve it through brute-force because that is not what I said.

          • By froggit 2025-11-1221:351 reply

            I can absolutely try this. Doesn't mean I'll solve it. If I solve it there's no guarantee I'll be correct. Math gets way harder when I don't have a legitimate need to do it. This falls in the "no legit need" category, so my mind went right to "100 * 70, good enough."

            • By NetMageSCW 2025-11-133:54

              Or you could do (100 + 1)*70 => 100*70 + 1*70

          • By ang_cire 2025-11-1218:102 reply

            Um, that's really easy to do in your head, there's no carrying or anything? 7,070

            7 * 101 = 707, then 707 * 10 = 7,070

            And computers don't brute-force multiplication either, so I'm not sure how this is relevant to the comment above?

            • By polotics 2025-11-1218:571 reply

              I think it is very relevant, because no brute-forcing is involved in this solution.

              • By nutjob2 2025-11-1220:291 reply

                That's not true, the 'brute force' part is searching for a shortcut that works.

                • By numpy-thagoras 2025-11-1222:28

                  The brute force got reduced down to fast heuristics, like Arthur Benjamin's Mathemagics.

            • By jabbywocker 2025-11-1219:20

              It’s almost like you’re proving the point of his reply…

        • By troupo 2025-11-1218:431 reply

          human brains are insanely powerful pattern matching and shortcut-taking machines. There's very little brute forcing going on.

          • By nutjob2 2025-11-1220:341 reply

            Your second sentence contradicts your first.

            • By troupo 2025-11-1222:002 reply

              Pray tell how it contradicts the first.

              Just note: human pattern matching is not Haskell/Erlang/ML pattern matching. It doesn't go [1] through all possible matches of every possible combination of all available criteria

              [1] If it does, it's the most powerful computing device imaginable.

              • By versteegen 2025-11-1311:15

                I 100% agree with nutjob :|

                There are hundreds of trillions of synapses in the brain, and much of what they do (IANANS) could reasonably be described as pattern matching: mostly sitting idle waiting for patterns. (Since dendritic trees perform a lot of computation (for example, combining inputs at each branch), if you want to count the number of pattern matchers in the brain you can't just count neurons. A neuron can recognise more than one pattern.)

                So yes, thanks to its insanely parallel architecture, the brain is also an insanely brute force pattern matcher, constantly matching against who knows how many trillions of previously seen patterns. (BTW IMHO this is why LLMs work so well)

                (I do recognise the gap in my argument: are all those neurons actually receiving inputs to match against, or are they 'gated'? But we're really just arguing about semantics of applying "brute force", a CS term, to a neural architecture, where it has no definition.)

              • By antonvs 2025-11-131:58

                > [1] If it does, it's the most powerful computing device imaginable.

                Well, my brain perhaps. Not sure about the rest of y'all.

        • By dbtc 2025-11-1218:18

          Just intuition ;)

      • By ux266478 2025-11-1220:141 reply

        Everything you've written here is an invalid over-reduction, I presume because you aren't terribly well versed with Prolog. Your simplification is not only outright erroneous in a few places, but essentially excludes every single facet of Prolog that makes it a turing complete logic language. What you are essentially presenting Prolog as would be like presenting C as a language where all you can do is perform operations on constants, not even being able to define functions or preprocessor macros. To assert that's what C is would be completely and obviously ludicrous, but not so many people are familiar enough with Prolog or its underlying formalisms to call you out on this.

        Firstly, we must set one thing straight: Prolog definitionally does reasoning. Formal reasoning. This isn't debatable, it's a simple fact. It implements resolution (a computationally friendly inference rule over computationally-friendly logical clauses) that's sound and refutation complete, and made practical through unification. Your example is not even remotely close to how Prolog actually works, and excludes much of the extra-logical aspects that Prolog implements. Stripping it of any of this effectively changes the language beyond recognition.

        > Plain Prolog's way of solving reasoning problems is effectively:

        No. There is no cognate to what you wrote anywhere in how Prolog works. What you have here doesn't even qualify as a forward chaining system, though that's what it's closest to given it's somewhat how top-down systems work with their ruleset. For it to even approach a weaker forward chaining system like CLIPS, that would have to be a list of rules which require arbitrary computation and may mutate the list of rules it's operating on. A simple iteration over a list testing for conditions doesn't even remotely cut it, and again that's still not Prolog even if we switch to a top-down approach by enabling tabling.

        > You hard code some options

        A Prolog knowledgebase is not hardcoded.

        > write a logical condition with placeholders

        A horn clause is not a "logical condition", and those "placeholders" are just normal variables.

        > and Prolog brute-forces every option in every placeholder.

        Absolutely not. It traverses a graph proving things, and when it cannot prove something it backtracks and tries a different route, or otherwise fails. This is of course without getting into impure Prolog, or the extra-logical aspects it implements. It's a fundamentally different foundation of computation which is entirely geared towards formal reasoning.

        > And that might have been nice compared to Pascal in 1975, it's not so different to modern garbage collected high level scripting languages.

        It is extremely different, and the only reason you believe this is because you don't understand Prolog in the slightest, as indicated by the unsoundness of essentially everything you wrote. Prolog is as different from something like Javascript as a neural network with memory is.

        • By jodrellblank 2025-11-133:143 reply

          The original suggestion was that LLMs should emit Prolog code to test their ideas. My reply was that there is nothing magic in Prolog which would help them over any other language, but there is something in other languages which would help them over Prolog - namely more training data. My example was to illustrate that, not to say Prolog literally is Python. Of course it's simplified to the point of being inaccurate, it's three lines, how could it not be.

          > "A Prolog knowledgebase is not hardcoded."

          No, it can be asserted and retracted, or consult a SQL database or something, but it's only going to search the knowledge the LLM told it to - in that sense there is no benefit to an LLM to emit Prolog over Python since it could emit the facts/rules/test cases/test conditions in any format it likes, it doesn't have any attraction to concise, clean, clear, expressive, output.

          > "those "placeholders" are just normal variables"

          Yes, just normal variables - and not something magical or special that Prolog has that other languages don't have.

          > "Absolutely not. It traverses a graph proving things,"

          Yes, though, it traverses the code tree by depth first walk. If the tree has no infinite left-recursion coded in it, that is a brute force walk. It proves things by ordinary programmatic tests that exist in other languages - value equality, structure equality, membership, expression evaluation, expression comparison, user code execution - not by intuition, logical leaps, analogy, flashes of insight. That is, not particularly more useful than other languages which an LLM could emit.

          > "Your example is not even remotely close to how Prolog actually works"

          > "There is no cognate to what you wrote anywhere in how Prolog works"

          > "It is extremely different"

          Well:

              parent(timmy, sarah).
              person(brian).
              person(anna).
              person(sarah).
              person(john).
          
              ?- person(X), writeln(X), parent(timmy, X).
          
              brian
              anna
              sarah
              X = sarah
          
          That's a loop over the people, filling in the variable X. Prolog is not looking at Ancestry.com to find who Timmy's parents are. It's not saying "ooh you have a SQLite database called family_tree I can look at". That it's doing it by a different computational foundation doesn't seem relevant when that's used to give it the same abilities.

          My point is that Prolog is "just" a programming language, and not the magic that a lot of people feel like it is, and therefore is not going to add great new abilities to LLMs that haven't been discovered because of Prolog's obscurity. If adding code to an LLM would help, adding Python to it would help. If that's not true, that would be interesting - someone should make that case with details.

          > "and the only reason you believe this is because you don't understand Prolog in the slightest"

          This thread would be more interesting to everybody if you and hunterpayne would stop fantasizing about me, and instead explain why Prolog's fundamentally different foundation makes it a particularly good language for LLMs to emit to test their other output - given that they can emit virtually endless quantities of any language, custom writing any amount of task-specific code on the fly.

          • By YeGoblynQueenne 2025-11-1313:21

            The discussion has become contentious and that's very unfortunate because there's clearly some confusion about Prolog and that's always a great opportunity to learn.

            You say:

            >> Yes, though, it traverses the code tree by depth first walk.

            Here's what I suggest: try to think what, exactly, is the data structure searched by Depth First Search during Prolog's execution.

            You'll find that this structure is what we call an SLD-tree. That's a tree where the root is a Horn goal that begins the proof (i.e. the thing we want to dis-prove, since we're doing a proof by refutation); every other node is a new goal derived during the proof; every branch is a Resolution step between one goal and one definite program clause from a Prolog program; and every leaf of a finite branch is either the empty clause, signalling the success of the proof by refutation, or a non-empty goal that cannot be further reduced, which signals the failure of the proof. So that's basically a proof tree and the search is ... a proof.

            So Prolog is not just searching a list to find an element, say. It's searching a proof tree to find a proof. It just so happens that searching a proof tree to find a proof corresponds to the execution of a program. But while you can use a search to carry out a proof, not every search is a proof. You have to get your ducks in a row the right way around otherwise, yeah, all you have is a search. This is not magick, it's just ... computer science.
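
            To make that concrete, here is a minimal sketch: the classic "vanilla" meta-interpreter, which spells out the structure described above. Each clause/2 choice is one branch of the SLD-tree, and backtracking over those choices is the depth-first search (declare the program's predicates dynamic if your system restricts clause/2 to dynamic code):

                % Prove the empty goal.
                solve(true) :- !.
                % Prove a conjunction by proving each conjunct in turn.
                solve((A, B)) :- !, solve(A), solve(B).
                % Resolution step: pick a clause whose head unifies with the goal
                % and prove its body; backtracking here explores sibling branches.
                solve(Goal) :- clause(Goal, Body), solve(Body).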

            It should go without saying that you can do the same thing with Python, or with javascript, or with any other Turing-complete language, but then you'd basically have to re-invent Prolog, and implement it in that other language; an ad-hoc, informally specified, bug-ridden and slow implementation of half of Prolog, most like.

            This is all without examining whether you can fix LLMs' lack of reasoning by funneling their output through a Prolog interpreter. I personally don't think that's a great idea. Let's see, what was that soundbite... "intelligence is shifting the test part of generate-test into the generate part" [1]. That's clearly not what pushing LLM output into a Prolog interpreter achieves. Clearly, if good, old-fashioned symbolic AI has to be combined with statistical language modelling, that has to happen much earlier in the statistical language modelling process. Not when it's already done and dusted and we have a language model; which is only statistical. Like putting the bubbles in the soda before you serve the drink, not after, the logic has to go into the language modelling before the modelling is done, not after. Otherwise there's no way I can see that the logic can control the modelling. Then all you have is generate-and-test, and it's meh as usual. Although note that much recent work on carrying out mathematical proofs with LLMs does exactly that, e.g. like DeepMind's AlphaProof. Generate-and-test works, it's just dumb and inefficient and you can only really make it work if you have the same resources as DeepMind and equivalent.

            _____________

            [1] Marvin Minsky via Rao Kampampathi and students: https://arxiv.org/html/2504.09762v1

          • By rramadass 2025-11-135:46

            This is a philosophical argument.

            The way to look at this is first to pin down what we mean when we say Human Commonsense Reasoning (https://en.wikipedia.org/wiki/Commonsense_reasoning). Obviously this is quite nebulous and cannot be defined precisely but OG AI researchers have a done a lot to identify and formalize subsets of Human Reasoning so that it can be automated by languages/machines.

            See the section Successes in automated commonsense reasoning in the above wikipedia page - https://en.wikipedia.org/wiki/Commonsense_reasoning#Successe...

            Prolog implements a language for logical inference within a formalized subset of the human reasoning mentioned above. Now note that all our scientific advances have come from our ability to formalize, and thus automate, what was previously only heuristics. Thus if I were to move more real-world heuristics (which is what a lot of human reasoning consists of) into some formal model, then Prolog (or, say, LLMs) could be made to reason better about it.

            See the paper Commonsense Reasoning in Prolog for some approaches - https://dl.acm.org/doi/10.1145/322917.322939

            Note however the paper beautifully states at the end;

            Prolog itself is all form and no content and contains no knowledge. All the tasks, such as choosing a vocabulary of symbols to represent concepts and formulating appropriate sentences to represent knowledge, are left to the users and are obviously domain-dependent. ... For each particular application, it will be necessary to provide some domain-dependent information to guide the program writing. This is true for any formal languages. Knowledge is power. Any formalism provides us with no help in identifying the right concepts and knowledge in the first place.

            So Real-World Knowledge encoded into a formalism can be reasoned about by Prolog. LLMs claim to do the same on unstructured/non-formalized data, which is untenable. A machine cannot do "magic"; it can only interpret formalized/structured data according to some rules. Note that the set of rules can be dynamically increased by ML, but ultimately they are just rules which interact with one another in unpredictable ways.

            Now you can see where Prolog might be useful with LLMs. You can impose structure on the view of the world seen by the LLM and force it to confine itself to the reasoning it can do within this world-view, by asking it to do predominantly Prolog-like reasoning, without turning the LLM into just a Prolog interpreter. We don't know how this interacts with the other heuristic/formal-reasoning parts of LLMs (e.g. reinforcement learning), but it does seem to give more predictable and more correct output. This can then be iterated upon to get a final acceptable result.

            PS: You might find the book Thinking and Deciding by Jonathan Baron useful for background knowledge - https://www.cambridge.org/highereducation/books/thinking-and...

    • By bobbylarrybobby 2025-11-1218:262 reply

      IIRC IBM’s Watson (the one that played Jeopardy) used primitive NLP (imagine!) to form a tree of factual relations and then passed this tree to construct Prolog queries that would produce an answer to a question. One could imagine that by swapping out the NLP part with an LLM, the model would have 1. a more thorough factual basis against which to write Prolog queries and 2. a better understanding of the queries it should write to get at answers (for instance, it may exploit more tenuous relations between facts than primitive NLP).
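
      A minimal sketch of that pipeline (the predicates and facts below are hypothetical, not Watson's actual representation): extracted relations become Prolog facts, and the question becomes a conjunctive query over them.

          % Relations a front end might extract from clue text.
          composed(beethoven, symphony_no_9).
          premiered(symphony_no_9, 1824).

          % "This composer's work premiered in 1824."
          % ?- composed(Who, Work), premiered(Work, 1824).
          % Who = beethoven, Work = symphony_no_9.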

    • By nextos 2025-11-1217:571 reply

      We've done this, and it works. Our setup is to have some agents that synthesize Prolog and other types of symbolic and/or probabilistic models. We then use these models to increase our confidence in LLM reasoning and iterate if there is some mismatch. Making synthesis work reliably on a massive set of queries is tricky, though.

      Imagine a medical doctor or a lawyer. At the end of the day, their entire reasoning process can be abstracted into some probabilistic logic program which they synthesize on-the-fly using prior knowledge, access to their domain-specific literature, and observed case evidence.

      There is a growing body of publications exploring various aspects of synthesis, e.g. references included in [1] are a good starting point.

      [1] https://proceedings.neurips.cc/paper_files/paper/2024/file/8...

    • By mindcrime 2025-11-1218:461 reply

      I am once again shilling the idea that someone should find a way to glue Prolog and LLMs together for better reasoning agents.

      There are definitely people researching ideas here. For my own part, I've been doing a lot of work with Jason[1], a very Prolog like logic language / agent environment with an eye towards how to integrate that with LLMs (and "other").

      Nothing specific / exciting to share yet, but just thought I'd point out that there are people out there who see potential value in this sort of thing and are investigating it.

      [1]: https://github.com/jason-lang/jason

    • By rramadass 2025-11-1217:48

      You might find Eugene Asahara's detailed Prolog in the LLM Era series of about a dozen blog posts very useful - https://eugeneasahara.com/category/prolog-in-the-llm-era/

    • By hacker_homie 2025-11-1217:502 reply

      Prolog doesn't look like javascript or python so:

      1. web devs are scared of it.

      2. not enough training data?

      I do remember having to wrestle to get prolog to do what I wanted but I haven't written any in ~10 years.

      • By kamaal 2025-11-134:28

        >>Prolog doesn't look like javascript or python so:

        Think of it this way. In Python and JavaScript you write code, and to test if it's correct you write unit test cases.

        A Prolog program is basically a bunch of test cases/unit test cases: you write them, and then tell the Prolog compiler, 'write code that passes these test cases'.

        That is, you are writing the program specification, or tests that, if passed, would represent a solution to the problem. The job of the compiler is to write the code that passes these test cases.
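
        For example, a minimal sketch: you state the relationships, and a query is the 'test' the engine has to satisfy by finding bindings for the variables.

            % Facts and a rule describing the relationships...
            parent(tom, bob).
            parent(bob, ann).
            grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

            % ...and a query the engine has to satisfy:
            % ?- grandparent(tom, Who).
            % Who = ann.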

      • By jm4 2025-11-1218:183 reply

        It's been a while since I have done web dev, but web devs back then were certainly not scared of any language. Web devs are like the ultimate polyglots. Or at least they were. I was regularly bouncing around between a half dozen languages when I was doing pro web dev. It was web devs who popularized numerous different languages to begin with simply because delivering apps through a browser allowed us a wide variety of options.

        • By hunterpayne 2025-11-1222:301 reply

          No web dev I have ever met could use Prolog well. I think your statement about web devs being polyglots is based upon the fact that web devs chase every industry fad. I think that has a lot to do with the nature and economics of web dev work (I'm not blaming the web devs for this). I mean the best way to succeed as a webdev is to write your own version of a framework that does the same thing as the last 10 frameworks but with better buzzword marketing.

          Generally speaking, all the languages they know are pretty similar to each other. Bolting on lambdas isn't the same as doing pure FP. Also, anytime a problem comes up where you would actually need a weird language based upon different math, those problems will be assigned to some other kind of developer (probably one with a really strong CS background).

          • By victorbjorklund 2025-11-1311:281 reply

            That you haven't met any webdevs using Prolog is probably because 1) Prolog is a very rare language among devs in general, not just webdevs (unless you count people who did Prolog in a course 20 years ago and remember nothing), and 2) Prolog just isn't that focused on webdev (like saying "not many embedded devs know React, so I guess it is because React is too hard for them").

        • By johnisgood 2025-11-1218:571 reply

          I have the complete opposite view of web developers. :)

          • By jm4 2025-11-1219:10

            Maybe the ones these days are different. I left the field probably 15 years ago.

        • By zelphirkalt 2025-11-131:31

          Maybe they were, but these days everything must be in JS syntax. Even if it is longer than pure CSS, they want the CSS inside JS syntax. They are only ultimate polyglot as long as all the languages are actually JS.

          (Of course this is an overgeneralization, since obviously, there are web developers, who do still remember how to do things in HTML, CSS and, of course JS.)

    • By schmuhblaster 2025-11-1611:52

      This is my own recent attempt at this:

      https://news.ycombinator.com/item?id=45937480

      The core idea of DeepClause is to use a custom Prolog-based DSL together with a metainterpreter implemented in Prolog that can keep track of execution state and implicitly manage conversational memory for an LLM. The DSL itself comes with special predicates that are interpreted by an LLM. "Vague" parts of the reasoning chain can thus be handed off to a (reasonably) advanced LLM.
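
      Very roughly, the general shape looks something like this (a sketch only, not the actual DSL; llm_answer/2 is a stand-in for the LLM call):

          % Stand-in for an LLM call; a real system would query a model here.
          llm_answer(Question, unknown) :-
              format("LLM asked: ~w~n", [Question]).

          solve(true) :- !.
          solve((A, B)) :- !, solve(A), solve(B).
          % "Vague" goals are delegated to the LLM instead of being proved logically.
          solve(vague(Q, A)) :- !, llm_answer(Q, A).
          % Everything else resolves against the Prolog knowledgebase as usual.
          solve(Goal) :- clause(Goal, Body), solve(Body).

          % ?- solve(vague("Is the sky blue?", A)).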

      Would love to collect some feedback and interesting ideas for possible applications.

    • By lynndotpy 2025-11-1217:47

      As someone who did deep learning research 2017-2023, I agree. "Neurosymbolic AI" seems very obvious, but funding has just been getting tighter and more restrictive towards the direction of figuring out things that can be done with LLMs. It's like we collectively forgot that there's more than just txt2txt in the world.

    • By cpill 2025-11-1219:02

      YES! I've run a few experiments on classical logic problems and an LLM can spit out Prolog programs to solve the puzzle. Try it yourself: ask an LLM to write some Prolog to solve some problem and then copy-paste it to https://swish.swi-prolog.org/ and see if it runs.
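
      For example, a toy puzzle of the kind an LLM will happily produce, which runs as-is in SWISH:

          % Alice, Bob and Carol own a cat, a dog and a fish, one each.
          % Alice is allergic to fur; Bob does not own the dog. Who owns what?
          solve(Pairs) :-
              Pairs = [alice-A, bob-B, carol-C],
              permutation([cat, dog, fish], [A, B, C]),
              A = fish,     % allergic to fur, so it must be the fish
              B \= dog.     % Bob does not own the dog

          % ?- solve(Pairs).
          % Pairs = [alice-fish, bob-cat, carol-dog].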

    • By f1shy 2025-11-1217:45

      Wouldn’t that be like a special case of neuro-symbolic programming?! There are plenty of research going on

    • By fudged71 2025-11-1223:20

      I think prolog is the right format to codify expertise in Claude Skills. I just haven’t tested it yet.

    • By naasking 2025-11-1217:37

      > LLMs are bad at counting the number of r's in strawberry.

      This is a tokenization issue, not an LLM issue.

    • By jnpnj 2025-11-1218:461 reply

      Can't find the links right now, but there were some papers on llm generating prolog facts and queries to ground the reasoning part. Somebody else might have them around.

      • By AlanYx 2025-11-1218:54

        There's a lot of work in this area. See e.g., the LoRP paper by Di et al. There's also a decent amount of work on the other side too, i.e., using LLMs to convert Prolog reasoning chains back into natural language.

    • By IshKebab 2025-11-1221:38

      I think that's what these guys are doing

      https://www.symbolica.ai/

    • By chvid 2025-11-1220:48

      If you are looking for AGI and you understand what is going on inside of it, then it is obviously not AGI.

    • By poulpy123 2025-11-1310:26

      There are people working on integrating deep learning with symbolic AI (but I don't know more).

    • By Avicebron 2025-11-1217:571 reply

      @goblinqueen, you around?

      • By lkuty 2025-11-1218:401 reply

        @YeGoblynQueenne Dunno if it will ping the person

    • By alfonsodev 2025-11-1411:34

      I've been thinking a lot about this, and I want to build the following experiment, in case anyone is interested:

      The experiment is about having an LLM play plman[0] with and without Prolog help.

      plman is a pacman-like game for learning Prolog. It was written by professor Francisco J. Gallego from Alicante University to teach the logic course in computer science.

      Basically you write a solution in Prolog for a map, and plman executes it step by step so you can visually see the pacman (plman) moving around the maze, eating and avoiding ghosts and other traps.

      There is an interesting dynamic about finding keys for doors and timing based traps.

      There are different levels of complexity, and you can also easily write your own maps, since they are just ASCII characters in a text file.

      I thought this was the perfect project to visually explain to my coworkers the limits of LLM "reasoning" and what symbolic reasoning is.

      So far I have hooked up the ChatGPT API to try to solve scenarios, and it fails even with a substantial number of retries. That's what I was expecting.

      The next thing would be to write an MCP tool so that the LLM can navigate the problem by using the tool, but here is where I need guidance.

      I'm not sure about the best dynamic to prove the usefulness of Prolog in a way that goes beyond what context retrieval or a DB query could do.

      I'm not sure if the LLM should write the Prolog solution. I want to avoid building something trivial, like the LLM just asking for the already-solved steps, so my intuition is telling me that I need some sort of virtual joystick MCP that hides Prolog from the LLM, giving the LLM access to the current state of the screen and to questions like: what would my position be if I move up? What is the position of the ghost on the next move? Where is the door relative to my current position?
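
      A minimal sketch of what such tool predicates might look like (all names are hypothetical, not plman's actual API), with positions as row/column pairs:

          % next_position(+Pos, +Move, -NextPos): where plman would end up.
          next_position(pos(R, C), up,    pos(R1, C)) :- R1 is R - 1.
          next_position(pos(R, C), down,  pos(R1, C)) :- R1 is R + 1.
          next_position(pos(R, C), left,  pos(R, C1)) :- C1 is C - 1.
          next_position(pos(R, C), right, pos(R, C1)) :- C1 is C + 1.

          % ?- next_position(pos(3, 4), up, P).
          % P = pos(2, 4).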

      I don't have academic background to design this experiment properly. Would be great if anyone is interested to work together on this, or give me some advice.

      Prior work pending on my reading list:

      - LoRP: LLM-based Logical Reasoning via Prolog [1]

      - A Pipeline of Neural-Symbolic Integration to Enhance Spatial Reasoning in Large Language Models [2]

      - [0] https://github.com/Matematicas1UA/plman/blob/master/README.m...

      - [1] https://www.sciencedirect.com/science/article/abs/pii/S09507...

      - [2] https://arxiv.org/html/2411.18564v1

    • By marcelr 2025-11-1218:04

      yes

  • By ux266478 2025-11-1216:341 reply

    Prolog really is such a fantastic system; if I can justify its usage then I won't hesitate to do so. Most of the time I'll call a language that I find to be powerful a "power tool", but that doesn't apply here. Prolog is beyond a power tool. A one-off bit of experimental tech built by the greatest minds of a forgotten generation. You'd find it deep in the irradiated ruins of a dead city, buried far underground in a bunker easily missed. A supercomputer with the REPL's cursor flickering away in monochrome phosphor. It's sitting there, forgotten. Dutifully waiting for you to jack in.

    • By lmf4lol 2025-11-1217:102 reply

      When I entered university for my Bachelors, I was 28 years old and already worked for 5 or 6 years as a self-taught programmer in the industry. In the first semester, we had a Logic Programming class and it was solely taught in Prolog. At first, I was mega overwhelmed. It was so different than anything I did before and I had to unlearn a lot of things that I was used to in "regular" programming. At the end of the class, I was a convert! It also opened up my mind to functional programming and mathematical/logical thinking in general.

      I still think that Prolog should be mandatory for every programmer. It opens up the mind in such a logical way... Love it.

      Unfortunately, I never found an opportunity in my 11 years since then to use it in my professional practice. Or maybe I just missed the opportunities?????

      • By ux266478 2025-11-1217:361 reply

        Did they teach you how to use DCGs? A few months ago I used EDCGs as part of a de-spaghettification and bug fixing effort to trawl a really nasty 10k loc sepples compilation unit and generate tags for different parts of it. Think ending up with a couple thousand ground terms like:

        tag(TypeOfTag, ParentFunction, Line).

        Type of tag indicating things like an unnecessary function call, unidiomatic conditional, etc.

        I then used the REPL to pull things apart, wrote some manual notes, and then consulted my complete knowledgebase to create an action plan. Pretty classical expert system stuff. Originally I was expecting the bug-fixing effort to take a couple of months. 10 days of Prolog code + 2 days of Prolog interaction + 3 days of sepples weedwacking and adjusting what remained in the plugboard.
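
        For a flavour of the kind of REPL interrogation this enables (the tag kinds and function names below are hypothetical):

            % Functions that accumulated a suspicious number of findings.
            hotspot(Fun, N) :-
                aggregate_all(count, tag(_, Fun, _), N),
                N > 5.

            % ?- hotspot(Fun, N).
            % ?- findall(Line, tag(unidiomatic_conditional, parse_config, Line), Lines).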

        • By cpill 2025-11-1219:05

          This sounds interesting. Perhaps you could write a blog post about it? I'm always looking for use cases for Prolog

      • By hunterpayne 2025-11-1221:45

        Prolog is a great language to learn. But I wouldn't want to use it for anything more than what it's directly good at. Especially the cut operator; that's pretty mind-bending. But once you get good at it, it all just flows. But I doubt more than 1% of devs could ever master it, even on an unlimited timeline. It's just much harder than any other type of non-research dev work.

  • By mattbettinson 2025-11-1216:023 reply

    In university, Learning prolog was my first encounter with the idea that my IQ may not be as high as I thought

    • By vidarh 2025-11-1216:351 reply

      I also found it mindbending.

      But some parts, like the cut operator, are something I've copied several times over for various things. A couple of prototype parser generators, for example - allowing backtracking, but using a cut to indicate when backtracking is an error can be quite helpful.

      • By exasperaited 2025-11-1218:322 reply

        "Keep your exclamation points under control. You are allowed no more than two or three per 100,000 words of prose."

        Elmore Leonard, on writing. But he might as well have been talking about the cut operator.

        At uni I had assignments where we were simply not allowed to use it.

        • By vidarh 2025-11-1411:22

          That may make sense for Prolog code - I don't know Prolog enough to say. But the places I like to use it, it significantly simplified code by letting me write grammars with more local and specific error reporting.

          That is, instead of continuing to backtrack, I'd use a cut-like operator to say "if you backtrack past this, then the error is here, and btw. (optionally) here is a nicer error message".

          This could of course alter semantics. E.g. if I had a rule "expr ::= (foo ! bar) | (foo baz)", foo baz would never get satisfied, whereas with "expr ::= (foo bar) | (foo baz)" it could. (And in that example, it'd be totally inappropriate in my parser generator too.)
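
          In Prolog DCG terms, the analogous commit point looks something like this (toy grammar, not the parser generator described above):

              expr --> [foo], !, [bar].   % after 'foo', commit: failure past here is an error
              expr --> [foo], [baz].      % never reached once the cut has fired

              % ?- phrase(expr, [foo, bar]).   % succeeds
              % ?- phrase(expr, [foo, baz]).   % fails instead of backtracking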

          I'm guessing the potential to have non-local effects on the semantics is why you'd consider it problematic in Prolog? I can see it would be problematic if the cut is hidden away from where it would affect you.

          In my use, the grammar files would typically be a couple of hundred lines at most, and the grammar itself well understood, and it was used explicitly to throw an error, so you'd instantly know.

          There are (at least) two ways of improving on that, which I didn't bother with: I could use it to say "push the error message and location" and pop those errors if a given subtree of the parse was optional. Or I could validate that these operators don't occur in rules that are used in certain ways.

          But in practice in this use I never ended up with big enough code that it seemed worth it, and would happily litter the grammars with lots of them.

        • By hunterpayne 2025-11-1221:46

          I used to use a cut operator about every 2 to 4 rules. If you are constantly using it as error handling, I would agree you are using it too often. If you are using it to turn sets into scalars or cells, then you are using it correctly. It just makes the code really hard to reason about and maintain.

    • By chanux 2025-11-1216:201 reply

      I thoroughly enjoyed doing all the exercises. It was challenging and hence, fun!

      I don't think I ever learned how it can be useful other than feeding the mind.

      • By chadcmulligan 2025-11-133:33

        There was a time when the thinking was that you could load all the facts into a Prolog engine and it would replace experts like doctors and engineers - expert systems. It didn't work. Now it's a curiosity.

    • By ge96 2025-11-1216:17

      Intro to quantum physics for me (which is only sophomore level). I noped out of advanced math/physics at that point; luckily I did learn to code on my own.
