
Here, we accelerate our way out of this primitive syntax, and it all starts with the great escape character. We make many great leaps in this section that aren't entirely explained for the sake of brevity, but you are free to play around with all of these things by using the repl. In any case, I hope you will enjoy this great leap in syntax technology; by the end, we will have reached something with real structure.
Here we define a preliminary prefix escape character. You will also notice that 2crank ing 0 crank is used as padding between lines:
2crank ing 2 crank comment.cog load 2crank ing 0 crank 2crank ing 1 crank # preliminary escape character \ 2crank ing 1 crank \ 2crank ing 0 crank halt 1 quote ing crank 2crank ing 1 crank compose compose 2crank ing 2 crank VMACRO cast quote eval 2crank ing 0 crank halt 1 quote ing dup ing metacrank 2crank ing 1 crank compose compose compose compose 2crank ing 2 crank VMACRO cast 2crank ing 1 crank def 2crank ing 0 crank 2crank ing 0 crank
This allows for escaping so that we can put something on the stack even if it is to be evaluated, but we want to redefine this character eventually to be compatible with stem-like quotes. We're even using our comment character in order to annotate this code by now! Here is the full quote definition (once we have this definition, we can use it to improve itself):
2crank ing 0 crank [ 2crank ing 0 crank 2crank ing 1 crank # init 2crank ing 0 crank crankbase 1 quote ing metacrankbase dup 1 quote ing = 2crank ing 1 crank compose compose compose compose compose 2crank ing 0 crank 2crank ing 1 crank # meta-crank-stuff0 2crank ing 3 crank dup ] quote = 2crank ing 1 crank compose compose 2crank ing 16 crank drop swap drop swap 1 quote swap metacrank swap crank quote 2crank ing 3 crank compose dup quote dip swap 2crank ing 1 crank compose compose compose compose compose compose compose compose 2crank ing 1 crank compose compose compose compose compose \ VMACRO cast quote compose 2crank ing 3 crank compose dup quote dip swap 2crank ing 1 crank compose compose compose \ VMACRO cast quote compose \ if compose 2crank ing 1 crank \ VMACRO cast quote quote compose 2crank ing 0 crank 2crank ing 1 crank # meta-crank-stuff1 2crank ing 3 crank dup ] quote = 2crank ing 1 crank compose compose 2crank ing 16 crank drop swap drop swap 1 quote swap metacrank swap crank 2crank ing 1 crank compose compose compose compose compose compose compose compose \ VMACRO cast quote compose 2crank ing 3 crank compose dup quote dip swap 2crank ing 1 crank compose compose compose \ VMACRO cast quote compose \ if compose 2crank ing 1 crank \ VMACRO cast quote quote compose 2crank ing 0 crank 2crank ing 1 crank # rest of the definition 2crank ing 16 crank if dup stack swap 0 quote crank 2crank ing 2 crank 1 quote 1 quote metacrank 2crank ing 1 crank compose compose compose compose compose compose compose compose 2crank ing 1 crank compose \ VMACRO cast 2crank ing 0 crank 2crank ing 1 crank def
Um, it's quite the spectacle how Matthew Hinton ever came up with this thing, but alas, it exists. Then, we use it to redefine itself, but better, as the old quote definition can't handle recursive quotes (we can do this because the old definition is used before the word is redefined, thanks to postfix def, a development pattern seen often in low-level Cognition):
\ [ [ crankbase ] [ 1 ] quote compose [ metacrankbase dup ] compose [ 1 ] quote compose [ = ] compose [ dup ] \ ] quote compose [ = ] compose [ drop swap drop swap ] [ 1 ] quote compose [ swap metacrank swap crank quote compose ] compose [ dup ] quote compose [ dip swap ] compose \ VMACRO cast quote compose [ dup dup dup ] \ [ quote compose [ = swap ] compose \ ( quote compose [ = or swap ] compose \ \ quote compose [ = or ] compose [ eval ] quote compose [ compose ] [ dup ] quote compose [ dip swap ] compose \ VMACRO cast quote compose [ if ] compose \ VMACRO cast quote compose [ if ] compose \ VMACRO cast quote quote [ dup ] \ ] quote compose [ = ] compose [ drop swap drop swap ] [ 1 ] quote compose [ swap metacrank swap crank ] compose \ VMACRO cast quote compose [ dup dup dup ] \ [ quote compose [ = swap ] compose \ ( quote compose [ = or swap ] compose \ \ quote compose [ = or ] compose [ eval ] quote compose [ compose ] [ dup ] quote compose [ dip swap ] compose \ VMACRO cast quote compose [ if ] compose \ VMACRO cast quote compose [ if ] compose \ VMACRO cast quote quote compose compose [ if dup stack swap ] compose [ 0 ] quote compose [ crank ] compose [ 1 ] quote dup compose compose [ metacrank ] compose \ VMACRO cast def
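This "use the old definition to build the new one" pattern can be sketched in Python (a rough analogy with invented names; a dictionary of words stands in for Cognition's definitions):

```python
# Bootstrapping pattern sketch: the old definition of a word stays
# usable while its replacement is being built; only the final
# rebinding (the analog of Cognition's postfix `def`) swaps it out.
words = {"greet": lambda: "hi"}

old = words["greet"]                    # capture the current definition...
words["greet"] = lambda: old() + "!"    # ...and use it inside the new one

print(words["greet"]())  # prints: hi!
```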
Okay, so now we can use recursive quoting, just like in stem. But there are still a couple of things missing that we probably want: a good string quote implementation, and escape characters that work inside the brackets. Also, since Cognition utilizes macros, we probably want a way to notate those as well, and a way to expand macros. We can do all of that! First, we will have to redefine \ once more:
\ \ [ [ 1 ] metacrankbase [ 1 ] = ] [ halt [ 1 ] [ 1 ] metacrank quote compose [ dup ] dip swap ] \ VMACRO cast quote quote compose [ halt [ 1 ] crank ] VMACRO cast quote quote compose [ if halt [ 1 ] [ 1 ] metacrank ] compose \ VMACRO cast def
This piece of code defines the bracket but for macros (split just splits a list into two):
\ ( \ [ unglue [ 11 ] split swap [ 10 ] split drop [ macro ] compose [ 18 ] split quote [ prepose ] compose dip [ 17 ] split eval eval [ 1 ] del [ \ ) ] [ 1 ] put quote quote quote [ prepose ] compose dip [ 16 ] split eval eval [ 1 ] del [ \ ) ] [ 1 ] put quote quote quote [ prepose ] compose dip prepose def
We want these macros to automatically expand, because it is more efficient to bind already-expanded macros to words, and they evaluate identically (isdef returns a boolean indicating whether a word is defined: true is a non-empty string, false is an empty string):
\ (
( crankbase [ 1 ] metacrankbase dup [ 1 ] =
[ ( dup \ ) =
( drop swap drop swap [ 1 ] swap metacrank swap crank quote compose ( dup ) dip swap )
( dup dup dup \ [ = swap \ ( = or swap \ \ = or
( eval )
( dup isdef ( unglue ) [ ] if compose ( dup ) dip swap )
if )
if ) ]
[ ( dup \ ) =
( drop swap drop swap [ 1 ] swap metacrank swap crank )
( dup dup dup \ [ = swap \ ( = or swap \ \ = or
( eval )
( dup isdef ( unglue ) [ ] if compose ( dup ) dip swap )
if )
if ) ]
if dup macro swap
[ 0 ] crank [ 1 ] [ 1 ] metacrank ) def
And you can see that, as we define more things, our language is beginning to look more or less like it has syntax! The quote.cog file we have been looking at contains more, but the bulk of the work is done. From here on, I will just explain the syntax programmed by quote.cog instead of showing the specific code.
As an example, here is expand:
# define basic expand (works on nonempty macros only)
[ expand ]
( macro swap
( [ 1 ] split
( isword ( dup isdef ( unglue ) ( ) if ) ( ) if compose ) dip
size [ 0 ] > ( ( ( dup ) dip swap ) dip swap eval ) ( ) if )
dup ( swap ( swap ) dip ) dip eval drop swap drop ) def
# complete expand (checks for definitions within child first without copying hashtables)
[ expand ]
( size [ 0 ] > ( type [ VSTACK ] = ) ( return ) if ?
( macro swap
macro
( ( ( size dup [ 0 ] > ) dip swap ) dip swap
( ( ( 1 - dup ( vat ) dip swap ( del ) dip ) dip compose ) dip dup eval )
( drop swap drop )
if ) dup eval
( ( [ 1 ] split
( isword
( compose cd dup isdef
( unglue pop )
( pop dup isdef ( unglue ) ( ) if )
if ) ( ) if
( swap ) dip compose swap ) dip
size [ 0 ] > ) dip swap
( dup eval ) ( drop drop swap compose ) if ) dup eval )
( expand )
if ) def
This recursively expands word definitions inside a quote or macro, using the word unglue. We've used the expand word to redefine itself for the more general case.
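The overall effect of expand can be sketched in Python (a loose analogy, not Cognition's actual unglue-based implementation; `expand_tokens` and `definitions` are invented names): defined words in a token list are replaced by their definitions, and the substituted tokens are expanded in turn.

```python
def expand_tokens(tokens, definitions, depth=0, max_depth=100):
    """Recursively expand defined words inside a (possibly nested) token list."""
    if depth > max_depth:
        raise RecursionError("expansion too deep (recursive definition?)")
    out = []
    for tok in tokens:
        if isinstance(tok, list):
            # quotes are nested lists; expand their contents too
            out.append(expand_tokens(tok, definitions, depth + 1, max_depth))
        elif tok in definitions:
            # splice in the definition, expanding it as well
            out.extend(expand_tokens(definitions[tok], definitions,
                                     depth + 1, max_depth))
        else:
            out.append(tok)
    return out

defs = {"square": ["dup", "*"], "quad": ["square", "square"]}
print(expand_tokens(["2", "quad"], defs))
# ['2', 'dup', '*', 'dup', '*']
```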
I'm not persuaded this is a better idea than, say, Racket's ability to configure the reader layer [1]. This lets you create, for example, an embedded Datalog implementation [2] that uses Datalog syntax but interops with other Racket modules. (The underlying data model doesn't change.) This gives you the ability to metaprogram without being confined to S-expressions, but it does so in a high-level way.
It is very neat to see this kind of syntax bootstrapping. I think there's some value (in a researchy-sense) to being able to do that. But I'm not sure if there's something fundamentally "better" about this approach over Racket's approach.
Postscript: Lisp (and Scheme and Racket) macros typically operate on an AST (in part because Lisp has reader macros and Racket has a full-bodied reader extension), but Rhombus [3] operates on a "shrubbery", which is like an AST but defers some parsing decisions until later. This gives macros some flexibility in extending the syntax of the language. Another interesting point in the design space!
[1]: https://docs.racket-lang.org/guide/hash-reader.html
[2]: https://docs.racket-lang.org/datalog/datalog.html
[3]: Flatt, Allred & Angle et al. (2023-10-16) Rhombus: A New Spin on Macros without All the Parentheses, Proceedings of the ACM on Programming Languages. https://doi.org/10.1145/3580417
Heck, I'm not convinced that this is a better idea than the Common Lisp readtable [1], and I think Racket's #lang is more ergonomic than the CL readtable.
1: Which is powerful enough to implement a C compiler in: https://github.com/vsedach/Vacietis
I'm not sure that something which uses brainfuck as its default example is intending you to take it seriously. Personally I burst out laughing at the introduction of "metacrank".
> Lisp ... macros typically operate on AST
In Lisp they don't. See Emacs Lisp, Common Lisp, ISLISP. A Lisp macro only gets passed some data and returns some data. There is nothing like an AST.
If we define a macro foo-macro and we call it:
(foo-macro ...)
then ... can be any data. Example:
(defmacro rev (&rest items)
(reverse items))
The macro REV above gets passed data and reverses it. The &rest list ITEMS gets reversed; ITEMS is the list of source arguments in the macro call. We can then write:
(rev 1 2 3 4 +)
(rev (rev 10 n -) (+ a 20 b) (rev 30 a *) list)
Example:
CL-USER 7 > (let ((n 4) (a 3) (b 2))
(rev (rev 10 n -)
(+ a 20 b)
(rev 30 a *)
list))
(90 25 -6)
There is no syntax tree. All the macro does is reverse the list of its source arguments. If I let the macro describe the thing it gets passed, then we get this:
CL-USER 8 > (defmacro rev (&rest items)
(DESCRIBE items)
(reverse items))
REV
CL-USER 9 > (rev 1 2 3 4 +)
(1 2 3 4 +) is a LIST
0 1
1 2
2 3
3 4
4 +
10
As a side effect, we can see that the macro gets passed a simple list: a list of numbers and a symbol ("symbol" is also a data type). Not text. Not an AST. The macro can also receive data which was possibly not read, but computed by other code. We can compute the arguments of the macro REV and pass the whole thing to EVAL. It works the same as above.
CL-USER 10 > (eval (append '(rev) (loop for i from 1 to 4 collect i) '(+)))
(1 2 3 4 +) is a LIST
0 1
1 2
2 3
3 4
4 +
10
The macro only gets that data and can do anything with it. There is no idea of an Abstract Syntax Tree: it does not need to be a tree, it does not need to be valid Lisp code, and it carries no syntactical information. Generally, it also does not even need to return valid Lisp code; for the program to be computed, all we need is that the evaluator eventually sees valid Lisp code, not what the individual macro returns. In Lisp, the "reader" by default only parses a data layer: symbolic expressions. EVAL, macros, and other Lisp functionality mostly get passed data. EVAL has to figure out what (a b c) actually is as a program. It can traverse it in an interpreter or compile it. A compiler may internally create AST representations, but that is left to the implementation.
The Lisp language then typically is not defined over text syntax, but data syntax.
A Lisp interpreter processes during execution not text, but s-expressions. It's a "List Processor", not a Text Processor. Lisp not Texp. ;-) The function COMPILE gets an s-expression passed, not text.
Racket and Scheme have other macro systems.
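The data-in, data-out behavior of REV can be mimicked in Python (a loose sketch with invented names: nested lists of numbers and strings stand in for s-expressions, and a toy prefix evaluator stands in for EVAL):

```python
def rev_macro(items):
    """Analog of the CL macro REV: just reverse the argument list."""
    return list(reversed(items))

def tiny_eval(form, env):
    """Evaluate a nested-list form: numbers, symbols, (op args...)."""
    if isinstance(form, (int, float)):
        return form
    if isinstance(form, str):
        return env[form]          # a "symbol" names a variable
    op, *args = form
    if op == "rev":
        # macro step: rewrite the raw argument data, then evaluate the result
        return tiny_eval(rev_macro(args), env)
    vals = [tiny_eval(a, env) for a in args]
    if op == "+":
        return sum(vals)
    if op == "-":
        result = vals[0]
        for v in vals[1:]:
            result -= v
        return result
    raise ValueError(f"unknown operator {op!r}")

# like (rev 1 2 3 4 +): the macro sees only a list of numbers and a string
print(tiny_eval(["rev", 1, 2, 3, 4, "+"], {}))  # 10
```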
From my limited understanding, it seems like Lispers mean AST in an ad-hoc way. There's no statically predefined, solid structure describing what an if-node or a lambda-node is. We all agree to squint and look at (<idea> <parameters-potentially-recursive>) as an AST in spirit.
Scheme did implement actual syntax objects and other lisps may have a concrete ast (pun slightly intended) layer but it's not required to enjoy the benefits of sexp as code and data.
my two cents
An "AST in spirit" is in our mind, but not the machine.
(let ((s (s s)))
  (flet ((s (s) (declare (type s s)) s))
    (tagbody (s) s (go s))))
The function READ has no idea what S is: variable, operator name, data symbol, type, go tag, function name, macro name...? For each occurrence above, we don't know what s is. We would need to parse it according to some syntax (and possibly know the current state of the runtime). An AST would be the product of a parser, which would parse the code above according to some provided syntax. The AST would then encode a tree built from that syntax, where the nodes are classified. In Lisp, symbols and lists have multiple uses, and the s-expression does not encode which use it is: is it data, or is it some kind of syntactical element? There is also no defined parser and no syntax encoded at that level. READ is at best an s-expression reader; it has zero knowledge of what the lists and symbols are supposed to mean in Lisp. To the reader, (+ a b), (a + b), and (a b +) are just s-expressions, not Lisp code.
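A toy reader in Python (a hedged sketch with invented names) makes the point concrete: at this layer there are only nested lists and atoms, with no classification of what any symbol means:

```python
def read_sexp(text):
    """Read one s-expression into nested lists of atom strings.
    The reader knows parentheses and atoms, nothing else: whether a
    symbol is a variable, a go tag, or a function name is invisible here."""
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()

    def parse(i):
        if tokens[i] == "(":
            items = []
            i += 1
            while tokens[i] != ")":
                item, i = parse(i)
                items.append(item)
            return items, i + 1
        return tokens[i], i + 1

    form, _ = parse(0)
    return form

print(read_sexp("(tagbody (s) s (go s))"))
# ['tagbody', ['s'], 's', ['go', 's']]
```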
I mostly agree, but it seems (I can only speak as a spectator/reader) that this lack of information was not a deal breaker for the community, even for producing large systems. The type resolution happens at the use site, and if `s` does not exist where it should (a binding in the environment, a tag in the right namespace), or is not what it should be, your code fails and people adjust.
Were there serious defects caused by this dynamic nature (honest question)? It seems to me that people adjusted to this without big trouble. Not that I'm against any improvement.
The Lisp compiler will need to figure it out and complain about various problems: undefined function/variable/tag/..., argument list mismatch, type mismatch, etc.
The compiler will expand the macros at compile time and the generated source code then is checked.
One thing this macro system enables is macros which can implement relatively arbitrary syntactic extensions, not restricted by a particular syntax. That can be seen as useful flexibility or as a potential problem (it requires knowledge and discipline while implementing macros, with the goal of ensuring the maintainability of the code).
More complex structures are built from that input by macros and compiler machinery. Nothing says the compiler has to stick to it.
Nanopass shows an example of using "rewriting" while keeping pretty much the same AST structure until you end up with native code.
It could theoretically be a graph. For example, I can make a circular list part of the source, and the macro may process it.
Get the first two items from a list and add them:
CL-USER 21 > (defmacro add-2 (a)
(list '+ (first a) (second a)))
ADD-2
Now we construct a circular list and use the macro:
CL-USER 22 > (add-2 #1=(1 2 3 . #1#))
3
The circular list is really a part of the source code:
CL-USER 23 > '(add-2 #1=(1 2 3 . #1#))
(ADD-2 #1=(1 2 3 . #1#))
Another example: one could pass in a string and have the macro parse the string...
It usually is a tree, just not a syntax tree. For example, the IF special form in CL has the syntax:
(IF <test-expr> <then-expr> [<else-expr>])
But appearing in a macro expansion, it would just be a list whose first item was the symbol "IF" from the "COMMON-LISP" package. In addition, you could put e.g. (IF foo bar baz biff quux) inside the body of a macro and get a list that does not match the syntax of the IF special form, despite superficially looking like such a list. This doesn't match anything one would call an "AST" in other languages, which would enforce the syntactic correctness of special forms.
You are correct, and the precision is appreciated. However, your emphasis is somewhat misleading.
In the Lisp world, when data represents code, it's probably stored in a tree. Elsewhere, when data represents code, it's probably stored as an array of bytes. There, code may not be recognised as being data at all.
It's the difference between eval taking a byte string and taking a parse tree. Rather literally.
This is only a convention. Lisp does it right, but as almost everything else thinks code is strings, confusion abounds.
There is a "data AST" in Lisp. The data AST is source code to Lisp evaluation/compilation. Compilation may produce another AST, but typically that is internal and implementation-specific.
The data AST is rich enough for ergonomic, precise source-to-source manipulation. Even a pretty advanced Lisp compiler can be built which goes straight from the data AST to an intermediate representation, skipping a code AST stage.
"Data AST" means that when we have (+ 1 2), this doesn't say "I'm an arithmetic expression", but rather something weaker: "I'm a list of three elements: a symbol object, and two integer objects".
The list object is an abstract syntax tree for the printed list. It must be. It is not a parse tree, because a parse tree would preserve the representation of the parentheses: in a parse tree, every grammar symbol appears, and the nodes of the tree are 1:1 with the grammar rules they match. Since the parentheses, and whatnot, are gone, it must be abstract syntax.
Most of Lisp syntax is designed such that its syntactic units correspond to nodes of the data AST. That makes source-to-source transformations ergonomic, because the data AST data structure is easy to manipulate.
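As a hedged illustration of that ergonomics (a Python sketch; nested lists stand in for the data AST, and all names are invented), here is a small source-to-source pass that folds constant arithmetic subforms:

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def fold_constants(form):
    """Recursively rewrite (op n1 n2 ...) into its value when all
    arguments are literal numbers; leave everything else alone."""
    if not isinstance(form, list):
        return form
    head, *args = form
    args = [fold_constants(a) for a in args]
    if head in OPS and args and all(isinstance(a, (int, float)) for a in args):
        result = args[0]
        for a in args[1:]:
            result = OPS[head](result, a)
        return result
    return [head, *args]

# (* (+ 1 2) x) -> (* 3 x): the constant subform folds, the rest is untouched
print(fold_constants(["*", ["+", 1, 2], "x"]))
# ['*', 3, 'x']
```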
Some advice to the author: you can considerably tighten up your writing by putting the most important things first. Take the introduction (which is, bizarrely, not the Introduction that comes three paragraphs later.) There are over 300 words before the actual project, Cognition, is mentioned (second sentence of second paragraph). All this stuff about Lisp is great, but is that the most important part of the project? Should it not be something about the project itself?
When I'm reading something informational (rather than recreational) I'm always asking myself "is this worth my time?" You should address this as soon as possible, by telling the reader what the document is about right at the start. "Cognition is a new language exploring user modifiable syntax" or something similar. I didn't get past the first four paragraphs because I couldn't determine it was worth continuing.
> When I'm reading something informational (rather than recreational) I'm always asking myself "is this worth my time?"
It doesn't. You will not be using this language. And even if you do, you'll get all the information from the documentation, not from this article. If your time is money, you wasted your time reading the article.
Really, why do some people believe that all the content on the Internet must be attuned to their personal quirks? Why do they believe it is better to change the Internet than to adapt to what is already there? It is text, not video or something else sequential. You can scan it diagonally, looking for something that interests you. You can reject it if nothing is found. Or you can return to the beginning and read sequentially from there. These options are available for text structured in any way. I highly recommend learning the technique; it can deal with whole books, by selecting useful pages to read and rejecting most of the others, which tell you nothing new.
I'd argue that diverse article styles are much better, because they force you to consciously and actively sort through the information you are consuming. You shouldn't do it passively, because your mind becomes lazy and stops thinking while consuming.
OTOH, I would agree with you if it were not text but video. I hate videos because you need to decide upfront whether to invest time into watching one or not. 2x speed and 5-10 second skips help somewhat, but do not solve the problem.
> Really, why do some people believe that all the content on the Internet must be attuned to their personal quirks?
I read the OP as constructive feedback not an attack. I also think that their advice is practically bog standard writing advice not a personality quirk.
I would also suggest that you might look in the mirror at your own comment when accusing someone of imposing their personality quirks on someone else.
Constructive feedback usually has a more inviting, accepting and helpful tone of voice. As opposed to the commenter you are referring to, which seemed dismissive and slightly rude to me, tbh.
Aside from the tone, the specific feedback was also a miss, imo. It was written as if they were reading a marketing page or a Show HN post, which this is not. It is simply a blog post. An article. To me it provided a bunch of context and was kind of enjoyable.
When you have a marketing page or some kind of "check it out" post, there is a certain level of expectation that the reader can expect and it's even reasonable to complain when the given post does not get to the point. I agree with that.
This is not such a thing.
I wonder where you got that impression from. For me as well, it's rather the overboard lecturing response that seemed like an invocation of "Trevors axiom".
By the way, what even defines "simply a blog post"? That's just a media format that doesn't say much about the content itself, does it? I do agree with the sentiment that it would have been good to know what the article is even going to be. Not even the paragraphs were very helpful in this regard. It does improve the reception of an article if it meets the intended audience.
In retrospect, this article is mostly about a basic compiler frontend language and how to bootstrap that into a fully-featured concatenative programming language. Which to me personally is a "hm, okay, interesting" kind of thing but I didn't read the article for this premise. I read it for the "new antisyntax language" and feel it was clickbaitish.
Inverted pyramid structure is a useful tool.
Sometimes it's inappropriate, and ultimately the author is the best judge of this.
If you view text composition as a UX problem, it will help you figure out when to use the tool and when not to.
Examples where it isn't used (clickbait headlines, recipe blogs with three pages of meandering before they get to the ingredients, SEO "optimised" youtube videos) are, IMO, examples of dark UX patterns more often than not. But your use-case may be valid.
(This comment written using an inverted pyramid structure).
For what it’s worth, I was also thinking this is a very wordy article. It has lots of asides, and seems to get distracted from the main point. It gets distracted enough that I have a hard time following it, which is not conducive to what any writer wants to achieve: telling some sort of story.
Can't agree more; all the arguments in this comment are also why I don't understand video and YouTube popularity at all. I can't do anything to quickly get a grasp of what's in a video, or of whether it answers my question. Sometimes, even if the question is a yes or no, the sequential format forces me to wait a few minutes to learn the answer. By that time, I'll have forgotten why I wanted it in the first place.
In my experience, videos about how to cut wood or use tools are infinitely better than reading the equivalent text. Videos about writing software are infinitely worse than text. The difference is that wood and tools are 3D objects, software is text.
I'd say it depends on the software. If you are just writing code that manipulates text, I completely agree. If you're dealing with 3d rendering such as Blender or game engines, if you're dealing with audio like DAWs, etc that have components beyond the raw text, video can have value because just staring at the code or similar doesn't tell you what watching/hearing it in motion will.
> It doesn't. You will not be using this language. And even if you do, you'll get all the information from the documentation, not from this article. If your time is money, you wasted your time reading the article.
Hard disagree; we don't always read things that have a direct impact on what we do. Sometimes ideas in one area can spark solutions to problems in another.
Imagine aerospace engineers never looking at birds, or military equipment not getting inspiration from chameleons.
> we don't always read things that have a direct impact on what we do. Sometimes ideas in one area can spark solutions to problems in another.
Of course, but this effect is unpredictable. You could get an idea while reading some fiction, because the plot sparked a chain reaction of associations in your brain. But whether it happens or not is not predictable from the abstract of an article. GP is clearly talking about something else.
> military equipment not getting inspiration from chameleons.
A good example. If your goal is to fight enemies, it is counterproductive to seek black swans by finding and studying new life forms. But you can still do it in a "fishing mode", just looking for something that seems interesting.
Science lives on government support exactly because it is unpredictable. It can sometimes discover electricity, but most of the efforts of scientists give little to no useful knowledge. One cannot know a priori whether their research will have a big impact or not.
The article is badly written if the author's intent is to communicate their ideas to other people.
[dead]
The order seemed pretty rational to me. Describe the problem, then introduce your solution. I knew pretty much within a few sentences this was going to be some quixotic solution to a "problem" 99.999% of people don't care about (including me, as I've heard of Lisp but never used it outside of emacs config files), but I kept reading anyway, because why not.
It also had headers, and is structured in a way that solved all the above issues for me entirely.
> All this stuff about Lisp is great, but is that the most important part of the project?
It's clearly not the most important part of the project but it serves to illustrate the kind of problem that the project intends to solve. Without something like this section the following sections would be even more difficult to understand.
Well, the other problem is that the stuff the article says about Lisp is incorrect and gravely misinformed.
For instance:
> This makes the left and right parenthesis unchangable from within the language
The parenthesis character in Common Lisp can be redefined, even just temporarily, to anything you want.
This is a misunderstanding and shouldn't be in your opening play.
It sounds like the author thought of something they felt was neat, but felt they had to justify its existence first by a quick pot-shot at another similar thing. It would be more effective if either it were correct, or they just described their own creation from the get-go.
Writer of this article here.
It was never meant to be a pot-shot, and I have nothing against Lisp. I can see why it reads that way; we added that because we wanted to illustrate why people should care.
As to your claim about us being wrong: I don't have an issue with being wrong, and maybe we are. At the same time, I think it is possible that there are misunderstandings causing people to believe we aren't doing something new. Again, maybe we're not.
We're two 18 year olds, fresh out of high school. It's a research project, but we're not graduate students.
A lot of these comments claim it's not new because reader macros exist. From my understanding, our tokenization system is unique because it can all be done at runtime without backtracking or instantly executing anything, which is possible because Cognition always makes use of the text already read in, and never makes use of anything not yet read in, which means you don't have to backtrack. I mean, you could backtrack, but it would be less elegant.
If I'm wrong about this then that's fine but then we still made something cool without even knowing it existed beforehand.
For a lot of this stuff, there is no "wrong" because it's a matter of taste and familiarity, rather like asking which of the human alphabets is "wrong". It's undoubtedly very clever, a reimagining of lexing from scratch.
On the other hand, I'm adding it to my list of examples of "left-handed scissors" languages, along with LISP and FORTH themselves: languages which a few percent of people regard as more intuitive, but which most users do not, preferring ALGOL derivatives.
Common Lisp comes with the absolute best syntax customization tooling I know of. If you want to integrate JSON or XML into the language syntax, you can. It is rather ironic to use it as an example of inflexible syntax.
Here's an example which adds completely integrated JSON support in under 100 lines of code, using only standard language APIs: https://gist.github.com/chaitanyagupta/9324402
It's kind of a tradition now for flashy projects (not this one) not to mention the problems they are solving, if any, and god forbid they explain the shortcomings and trade-offs made. It feels like marketers changed careers to programming but failed to get the idea. That is frustrating, especially when praised by "clients". Imagine if postgres went full-disney on their front pages.
I tend to agree. I'm interested in the concept, but the opening line seems to justify its need as a reaction to s-expression syntax in Lisp. Knowing nothing about that, I fear I'm going to miss the context of the whole article, and I also can't determine whether this is a straw man or not. And, as another commenter mentioned, it makes this whole thing feel like it's serving a very niche need, which doesn't jibe with the title, which is very generalized and could be quite a compelling concept.
I think it’s completely fine. The text as written identifies what problem it’s trying to solve within the first two sentences.
That, to me, is much more useful to gauge my interest than your proposed introduction.
As a guy who’s aware of lisp ways, I found first few paragraphs absolutely useful for establishing context and hinting at what’s next, at the same time retaining my attention. TFA is fine, just not everyone is its audience.
If it was “look, shktshfdthjkl\n\nbhhj, so cool”, it would signal rocket rainbow unicorn and lose me at that. IMO we need more properly structured non-SV prose like this in tech, not less.
For me as a Lisp programmer I think it makes a lot of sense to start there since it sets the stage.
When I saw the headline, my first thought was "what about Lisp macros?" so at least for me it starts out by addressing exactly that question ...
The author did not address Common Lisp's reader macros, or Racket's #lang syntax. It's not like either of those languages are obscure (relatively speaking).
Right, but they hedge their way out of it with "This makes the left and right parenthesis unchangable from within the language (not conceptually, but under some implementations it is not possible)" so it's not even clear which "Lisp" they're talking about. If I'm not mistaken, ( itself is a reader macro in CL.
It's also not that relevant. You can add different syntax variants; the Lisp evaluator doesn't see parentheses in any way, just actual interned data.
This is an interesting article, and I hope the authors ignore the snarky comments here and aren't discouraged from pursuing their dark magic rituals.
That said, personally I think that Forth is as philosophically pure as I'm willing to gaze up the ladder of programming purity. :P
Writer of this article: Thank you! I don't mind the snarky comments. In fact, I welcome them, as I think they are pretty funny themselves. We'll most certainly be working on more dark magic in the future.
Why does your work remind me of this?
https://aphyr.com/posts/353-rewriting-the-technical-intervie...
It reminded me of that as well. Very clever and very alien.
I saw it as ... akin to poetry, almost?
Then again I was once a pure math grad, so "beautiful, fascinating, albeit completely practically useless" is something I mean as a compliment.
I need to read up on Stem and then come back and read this post of yours again to better appreciate it, I suspect. But that sounds like fun.
you're supposed to touch a few nerves. that means you're showing something different from the comfort of most. something really different. the two most common reactions to this are irritation and laughter.
not much is novel nowadays. great job!