I see mixed comments, so let me add some praise. I am one of the countless who match his intro-filter: repeatedly hearing 'enlightened' people lament that the vast masses don't "get" Lisp and FP, and repeatedly attempting and failing to pick up the red string myself.
Background: I am a computer science major with 30+ years of experience. I did do a mandatory 'implement your own Lisp' class many eons ago. It just never really 'clicked' for me. I do, by accident, assimilation and laziness, employ FP-style designs in my software. And I guess FP techniques gradually rub off on me from e.g. JavaScript: lambdas, closures, and map-filter-reduce. In particular, lambdas are useful to me. But I am one of the guys who continue to read the "let me tell you what monads really are" posts, and every time I fall off the bicycle.
So, well, I appreciated this 'X for 5-year-olds' :-)
jmkr 6 hours ago [-]
I think Lisp is more on the liberal arts side of programming languages.
That the "enlightenment" of Lisp is that you can use functions everywhere. Write macros that look like functions and modify behavior, and build your code as a language.
Things like monads are more on the evolution-of-functional-languages side, and I also fall off the bike. It's as difficult as you want it to be, and I find Scheme and Lisp to be easier high-level languages than JavaScript or Python, and they make more sense to me.
The foreword and preface to SICP are good reading:
https://mitp-content-server.mit.edu/books/content/sectbyfn/b...
The Dan Friedman books are pretty good in general: "The Little Schemer" and its sequel "The Seasoned Schemer", which are both more "recursion" books. He also has another book, "Scheme and the Art of Programming", which I think is a great comp sci book that's not too difficult and doesn't seem too well known.
How to Design Programs is supposed to be a pretty good comp sci intro:
https://htdp.org/2024-11-6/Book/index.html
If we're naming names, for me personally, Lisp in Small Pieces by Christian Queinnec tops my Lisp books list. But, yes, only after perusing the SICP and The Little Schemer first.
"Liberal arts," nice :)
baq 4 hours ago [-]
my epiphany with lisp was that it is not a functional language in the modern sense, i.e. mutability is fine, loops are fine, etc. it's primarily a list processor, not lambda calculus.
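For what it's worth, a minimal sketch of that style in Common Lisp (the function name is made up for illustration): loops and destructive updates are perfectly idiomatic.

    ;; Hypothetical example: collecting squares with LOOP and a mutated accumulator.
    (defun squares-upto (n)
      (let ((acc '()))
        (loop for i from 1 to n
              do (push (* i i) acc))   ; PUSH destructively updates ACC
        (nreverse acc)))               ; destructive reverse is fine too

    ;; (squares-upto 5) => (1 4 9 16 25)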
msla 5 hours ago [-]
Lisp is "functional" in a 1970s sense in that it has functions as first-class objects you can pass as parameters to other functions, but those functions are basically subroutines which can have side-effects and un-functional behavior. As you allude to, this is pretty much par for the course in procedural languages now, and OO As She Is Spoke is procedural with some extra stuff added. Even garbage collection is common enough now that languages which don't have it trumpet the fact and make it their whole personality. Lisp is heady and revolutionary if your baseline is FORTRAN and maybe C, in other words, unless you actually do begin to write your own macros, at which point the C people begin to look at you funny.
Haskell is functional in that it demands its functions be functions, not subroutines. A function has inputs mapped to outputs and no side-effects. Functions can be composed and composition always works. Haskell uses monads to represent the regrettable fact that having an impact on the outside world is, in a very real sense, a side-effect, so it marks all side-effecting functions with an indelible stain. Haskell requires a different mode of thought from Python, or even from C++, and it's definitely not another Lisp.
dawnofdusk 4 hours ago [-]
The author of this blog is clearly eloquent and, as per their interspersed quotations of David Hume and others, it is refreshing to see someone so well-read in the software/tech blogosphere.
I love Lisp. The last few paragraphs are a pretty good description. It's nice to have a very flexible set of tools, instead of being forced to conform to object-oriented design or whatever paradigm. IMO the only legitimate reason for sticking steadfastly to a design paradigm is performance, but of course that can only really justify array/imperative programming. But at the point where you want some flexible abstractions, it's nice to have the power to do introspection, delayed evaluation, and so on. Disclaimer: my background is physics/math, so function abstractions are much more intuitive to me than objects, or whatever other structures are taught to CS students.
kazinator 2 hours ago [-]
The author may be eloquent, but unfortunately calls imperative operators, like while, "functions".
layer8 1 hours ago [-]
Imperative code is a function from one program state to another.
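If it helps to make that concrete, here is a small hedged sketch in Common Lisp (all names invented) of treating statements literally as state-to-state functions:

    ;; States are alists; a "statement" is a function from one state to the next.
    (defun assign (var value-fn)
      "Build a statement that binds VAR to (VALUE-FN state) in the new state."
      (lambda (state)
        (acons var (funcall value-fn state) state)))

    (defun seq (&rest statements)
      "Sequence statements, like `;` in an imperative language."
      (lambda (state)
        (reduce (lambda (s stmt) (funcall stmt s)) statements :initial-value state)))

    ;; (funcall (seq (assign 'x (constantly 1))
    ;;               (assign 'y (lambda (s) (1+ (cdr (assoc 'x s))))))
    ;;          '())
    ;; => ((Y . 2) (X . 1))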
int_19h 50 minutes ago [-]
Coincidentally R is one language in which `if` and `while` can be written as functions, because all function arguments are lazily evaluated, and one can get access to the underlying lambda for repeated re-evaluation. In fact, `if` and `while` are functions in R, and you can call them as such if you properly quote the keyword so that it's treated as an identifier. And then the familiar C-style syntactic forms are just syntactic sugar for function calls.
R takes it up a notch though by making all syntactic constructs boil down to a function call. Function definitions are themselves calls, for example, and so are assignments and even curly braces.
https://news.ycombinator.com/item?id=28851992
https://news.ycombinator.com/item?id=44359454
No comments on any of them.
It sounded of interest to me, but I read it and closed the tab within a page or so as it wandered off into tech arcana. Shame. There may be an interesting idea in here, but it's phrased in terms I think few will be able to follow and understand.
I did not finish it but I saw no mention of the lambda calculus or of currying, both of which -- from my very meagre understanding -- seem directly relevant to what I understood to be the core point, which seems to be about anonymous functions.
Jach 8 hours ago [-]
I don't think you're missing much. Yeah, the main point seems to be that if your language has closures, you suddenly can express a lot of things that were out of reach before. Not a new insight. But there's another point I think that is hinted at on the topic of control abstractions. Or at least I'm reminded of the topic. It's better and more succinctly and explicitly talked about in an early chapter of the free book Patterns of Software: https://dreamsongs.com/Files/PatternsOfSoftware.pdf
The extra point might be that more languages should facilitate defining your own control abstractions just as they support defining your own data abstractions. Functions are one way of making data abstractions, but languages often provide multiple ways. Closures are one way of doing a type of control abstraction (involving such things as delayed or multiple evaluation), but there are other ways too. For some reason we see value and a need for defining our own data abstractions, but not so much for control abstractions, even though (according to the book) once they were often co-designed, like Fortran's arrays and DO loop. And for some reason even in the few languages that do support making your own control abstractions, like Lisp, you'll still find users who disapprove of doing so, claiming all you need are the standard existing methods like looping, map/reduce style functions, and some non-local exits.
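To make the "define your own control abstraction" point concrete, here is a hedged Common Lisp sketch (RETRYING is a made-up macro, not from the book or the article): a control form that re-evaluates its body, something a plain function could only do if callers wrapped the body in a lambda.

    (defmacro retrying ((times) &body body)
      "Evaluate BODY; if it signals an error, retry up to TIMES total attempts."
      (let ((attempt (gensym)) (err (gensym)))
        `(loop for ,attempt from 1 to ,times
               do (handler-case (return (progn ,@body))
                    (error (,err)
                      (when (= ,attempt ,times)
                        (error ,err)))))))

    ;; Usage: the body is ordinary code, evaluated as many times as needed.
    ;; (retrying (3)
    ;;   (parse-config (read-file "config.sexp")))   ; PARSE-CONFIG, READ-FILE: hypothetical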
smcameron 6 hours ago [-]
There's an upside to C's limitations. Several times I've seen expressed the notion that "every big Lisp project uses its own dialect of Lisp". This is less true in C, I think, because it doesn't have the power (ignoring preprocessor abuse, which does happen, see Bourne shell source[1]). C++ has a bit more power, and there we see projects tend to use their own subset of C++.
In both Lisp, and C++, taking some isolated snippet from a codebase, you can't quite really be sure what it's doing without reading the rest of the program because of what might be called the "excessive" power of the languages.
In C, it is much more likely that you can look at an isolated snippet of code from some codebase and be reasonably sure about what it is doing, and be able to extract this snippet more or less as-is, and re-use it in some other, unrelated code base. At least, this has been my experience, ymmv.
[1] https://www.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/sh
Well, yes, an often-heard and reasonable argument.
But then what happens in a reasonably sized C project is that other tools are brought in to fill the gaps. Tools like code generators (see some other comments on this post), for example, which are necessarily written in a different language and, most importantly, come with their own terminology and way of doing things. Which, in the end, becomes an extra mental load, the kind that you've rightly mentioned. Only you've partitioned or spread that load horizontally in multiple little pieces as opposed to building abstractions on top of each other.
Or, each piece of code is understandable by itself, but one has to read dozens of such "little islands" to make sense of what the code is achieving since, by necessity, you need many such pieces to achieve something meaningful. The alternative is the abstraction pyramid, which does more with less code. The downside is that there is an upfront investment in understanding those abstraction layers before one can juggle with them, so to speak. In your example the upfront investment is smaller, but every time you must do something meaningful, including reading the code, you pay the penalty of not having powerful enough tools.
As an example: engineers go to universities to study calculus, physics and material science so that, with these, they can better plan and build a highway (let's say). There is a huge upfront investment in that, let's say 16 years of study. Versus: use our shovels and buckets and start building right away. That is easy to learn. You learn it in a day. You start building right away. Everyone can supervise and, thus, understand the work because it is straightforward. One day of upfront investment. But the latter will advance at a horrible pace. There is no way to visualize what you're doing. There is no study beforehand. Maybe the hill will collapse and you need to start over. Maybe the two ends of the road will not meet where you'd expect them to. Maybe the materials crack after a week. Maybe the road is not even, so you start over. Etc., etc.
Jach 4 hours ago [-]
It's sometimes expressed and repeated but I think usually by people who don't do much or any Lisp programming. Like they hear Lisp is great for making DSLs and interpret that as every interesting program must have created its own wacky DSL and is basically incomprehensible outside of the program. That's not the case. At least in Common Lisp. Similarly I don't tend to see much (any?) "CL is too expressive and big and complex, thus we define a language subset and if you contribute/work here you must use said subset" attitude which is more prevalent in C++. Certain people and companies have their own stylistic preferences, sure. I could believe there are some out there that go all the way to requiring fset/immutable collections everywhere (Clojure-esque without switching languages to Clojure), or outright banning CLOS or lists or using conditions or ever writing your own defmacro or something (to try and put an analog on "no virtual methods, no linked lists, no exceptions, no preprocessor beyond #include" that I've seen for C++ guides). But none come to mind.
What did just come to mind is Coalton, a project I like though haven't tried using seriously yet, which more says: stock CL is too inexpressive for programs we want to write that make use of static algebraic data types with more compile-time guarantees. So here's a macro wrapped in a library that's effectively a new language, but we didn't have to go build a whole new language, we could just build on top of CL, and we kept the syntax tasteful and familiarly lispy. The interop is great. If you see a project using coalton and you want to use their code in CL, you can. And vice versa, using CL from coalton, it's just a library. And it's transparently done, obvious what is happening / what you're doing. CL allows this flexibility. Most projects do not birth a new language to support something.
Pick a random file from some larger projects that do come to mind: Kandria (commercial game), SHOP3 (a Hierarchical Task Network planner), Mezzano (an OS), Open Genera (an ancient OS), Maxima (computer algebra software that still uses code from the 80s), and it's pretty much all just... normal Lisp code. There are style differences, sure, but it's normal, easy to understand what it's doing mechanically. (i.e. the same as C -- even if you have no idea what/why it's doing at a more meaningful level, like what's a bessel function, there's some value just in knowing that a chunk of code idiomatically isn't depending on or doing too much crazy stuff outside the context.) Random short snippet:
Yup, pretty normal looking to me. Might be better as a pair of defmethods. Easily understandable mechanically even if I don't yet know what a primitive-node is, or any of the application specific concepts really. I don't fault C if I don't know what a vma->vm_page_prot is, I just know it's a field on a vma struct that's getting dereferenced, and nothing else super crazy because -> can't be overloaded in C.
You'll find the occasional macro in such Lisp projects, but it's unlikely to be an impressive display of macrology that requires sitting down to study it. Same thing with most popular libraries. Most Lisp code doesn't actually have much of the "spooky action at a distance" stuff an otherwise innocent-looking line of C++ can be known for, like implicit conversions, copies, confusing precedence, or overloaded operators. Even Lisp's take on exception handling (the condition system) isn't so spooky despite allowing more control flexibility, simply because it doesn't automatically unwind the stack and allows for restarts -- you can resume execution from where the condition was signaled. The worst you'll tend to see somewhat frequently are method calls that may have surprising :before/:after/:around dynamics.
But now here's my Java apologist talking: Lisp can be thought of as more of a programming system than a language, and as such, in practice, the context of working with Lisp is a Lisp-aware editor, so it's not exactly fair to compare it (or modern Java) with another language if your arena of choice is black-and-white paper print-outs on a desk rather than a typical working environment. This means that, to address that spooky-action possibility, you can resolve your curiosity at any time by calling compute-applicable-methods and compute-effective-method.
In addition, you have things like intellisense on demand for symbol/function/macro documentation, the ability to jump to source (even of the Lisp implementation's functions), to cross-reference callers and callees of something, to macro-expand any macro call, to disassemble a function, to compile changes to functions, and so on and so on. This is also all pretty much built into the language itself ("compile-file" is a function available at runtime) and just more conveniently exposed via Lisp-aware editors.
I don't do much C anymore. Three projects that came to mind: sqlite, the linux kernel, and atril (pdf viewer bundled with mate desktop environments, forked from the old gnome2 evince). Take random files from these projects. I don't think you can really claim that you can just take snippets more or less as-is and use them in some other project. Part of the problem again stems from C's lack of expressive power along with other weaknesses like an impoverished runtime -- which larger projects are going to feel the pain of even more acutely, and thus use different and incompatible methods of solving that. But the other part of the problem is just a nature of any project in any language: it's made up of its own data abstractions that are highly relevant to that project. Re-use at the snippet level is rare. You aren't going to gain anything from (dart at wall) sqlite's mutex.c file if you try to cherry pick snippets. Everything depends on its own struct (which is different for three platforms -- hey, that's useful to study and maybe copy, but there's enough specific sqlite stuff even in the short definitions that you can't literally just copy the code over), you can't even re-use anything that allocates because it calls their own sqlite malloc that does whatever differently from a system malloc... Picking a random linux kernel file, io_uring/cancel.c, what are you going to be able to extract from this? It's straightforward C code, yes, but it's intimately tied to the context. There are references to mutexes in it -- how much do you want to bet you can't just use sqlite's notion of a mutex in place, or vice versa? (Sqlite uses a pthread mutex; needless to say the struct layout is not compatible with the kernel's mutex.)
Atril came to mind because last year I hacked it so it would show me a time estimate of how long it'd take to reach the end of a book when I have it auto-scrolling. I dived in and noticed types like gboolean and gint. Oh great, custom typedefs over basic things, like so many other C projects, what do I have in store... at least they're sensible. https://github.com/GNOME/glib/blob/main/glib/gtypes.h#L56 But yeah, to make a long story short, it uses glib. glib is an amazing and kind of cursed library to bring a lot of higher level language features to C. Except (at least to me, encountering it with the goal of understanding code for the first time) without the normal tools of high level languages that aid in development and understanding. I pieced what I needed to piece together and made my hack work, but it involved quite a few printfs. (Incidentally, Common Lisp includes the standard function "trace", which you can call at any time, applied to a function, and from then on when that function is invoked its input args and output values will be printed. You can "untrace" it later. Editors have hot keys.)
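As a small illustration of that last parenthetical (TRACE and UNTRACE are standard Common Lisp; the exact output format is implementation-defined):

    (defun fib (n)
      (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2)))))

    ;; At the REPL:
    ;; (trace fib)    ; from now on, calls to FIB print their arguments and results
    ;; (fib 4)        ; the implementation prints the nested calls, then returns 3
    ;; (untrace fib)  ; stop tracing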
PaulHoule 8 hours ago [-]
There's closures and there's being able to transform the expression tree.
Graham's On Lisp is a really interesting book
https://paulgraham.com/onlisptext.html
which is allegedly about programming with macros, but I'd say 80% of the time he implements something with closures and then makes a macro-based implementation that performs better. That 80% can be done in Python, and the other 20% you wouldn't want to do in Python because Python already has those features... And if you wanted to implement meta-objects in Python you would do it Pythonically.
Graham unfortunately doesn't work any examples that involve complex transformations on the expression trees because these are hard and if you want to work that hard you're better off looking at the Dragon book.
You can work almost all the examples in Norvig's Common Lisp book
https://www.amazon.com/Paradigms-Artificial-Intelligence-Pro...
in Python and today Norvig would advocate that you do.
Currying is done automatically in Haskell but not in Lisp. If you wanted currying in Lisp you could write it, but Lisp programmers don't depend on or talk about currying as much as Haskell programmers do.
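A hedged sketch of what "you could write it" looks like in Common Lisp (strictly partial application rather than true currying; libraries such as alexandria ship a similar utility under the same name):

    (defun curry (fn &rest fixed-args)
      "Return a function that calls FN with FIXED-ARGS followed by any later args."
      (lambda (&rest more-args)
        (apply fn (append fixed-args more-args))))

    ;; (funcall (curry #'+ 10) 5)        => 15
    ;; (mapcar (curry #'* 2) '(1 2 3))   => (2 4 6)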
tmtvl 10 hours ago [-]
The core point, to me, seemed to be about limiting factors in language extension. To allow something like:
Where the various parameters are lazily evaluated.
Or like:
frobnicate (frazzle: foo, frozzle: bar, frizzle: baz);
Where frazzle, frozzle, and frizzle are position-independent keyword variables.
Allowing those in C would require a modicum of effort, while other languages make these kinds of syntax extension fairly easy.
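For comparison, a hedged Common Lisp sketch of both extensions (FROBNICATE is the hypothetical function from the example above): position-independent keyword parameters are built in, while lazily evaluated arguments take a small macro that wraps each argument expression in a thunk.

    (defun frobnicate (&key frazzle frozzle frizzle)
      (list frazzle frozzle frizzle))

    ;; Keyword arguments in any order:
    ;; (frobnicate :frizzle 'baz :frazzle 'foo :frozzle 'bar) => (FOO BAR BAZ)

    ;; Lazy arguments via a macro (FROBNICATE* is invented): each expression becomes
    ;; a zero-argument closure the callee can FUNCALL only if it needs the value.
    (defmacro frobnicate* (&key frazzle frozzle frizzle)
      `(frobnicate :frazzle (lambda () ,frazzle)
                   :frozzle (lambda () ,frozzle)
                   :frizzle (lambda () ,frizzle)))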
PaulHoule 9 hours ago [-]
In languages like Java (or C) you can build S-expression-like structures like so:
    Variable<Integer> x = newVariable();
    Expression<Integer> e = add(x, literal(5));
    x.set(15);
    System.out.println(eval(e)); // prints "20"
and it is not that hard to either serialize these to code or run them in a tree-walking interpreter where quote() and eval() imply an extended language where you can write functions that work on Expression<Expression<X>>. Type erasure causes some problems in Java that make you sometimes write a type you shouldn't have to and you do have to unerase types in method names which is a little ugly but it works.
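The Common Lisp counterpart of that sketch, for contrast (a minimal sketch, nothing project-specific): the expression tree is just a quoted list, so rewriting and evaluating it uses ordinary list functions.

    (defparameter *expr* '(+ x 5))     ; QUOTE hands you the tree itself

    ;; Rewrite the tree like any other list, then evaluate it:
    ;; (subst 15 'x *expr*)            => (+ 15 5)
    ;; (eval (subst 15 'x *expr*))     => 20
    ;; (subst '* '+ *expr*)            => (* X 5)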
I did some experiments towards this to convince myself it would work
https://github.com/paulhoule/ferocity/blob/main/ferocity0/sr...
Had I really kept at it, I would have bootstrapped by developing a ferocity0 that was sufficient to write a code generator that could generate stubs for the Java stdlib plus a persistent collections library, then write a ferocity1 in ferocity0, and if necessary ferocity(N+1) in ferocityN, until it supported "all" of Java -- though "all" might have omitted some features like "var" that are both sugar and use type inference that ferocity would struggle with. If you need sugar in this system, you implement it with metaprogramming.
The idea is that certain projects would benefit from balls-to-the-walls metaprogramming and the code compression you get would compensate for the code getting puffed up. My guess is a lot of people would see it as an unholy mating of the worst of Java and Common Lisp. However, I'm certain it would be good for writing code generators.
molteanu 9 hours ago [-]
The solution, which I have often seen in practice, is to eventually write code generators, which is what Lisp macros are, after all. I've seen it in C and wrote a big piece about it that was posted here some time ago[1], about the extra tools, code generators, special formats and standards employed and needed to make up for C's deficiencies (with respect to meta-programming, at least).
Everywhere I see code generators it means a feature is lacking in the main language used for the project. Then you bring in other tools to make up for that deficiency. Only, usually, we don't call it a deficiency, since we are used to things being that way. It is called day-to-day business. I think that's what I've tried to convey in the article.
[1] https://news.ycombinator.com/item?id=41066544
I don't see code generation as a bad smell at all.
At my job we use the JooQ code generator, which is well integrated with maven and either IntelliJ IDEA or Eclipse, so autocompletion "just works". In modern Java you can pack up a code generator as a maven plugin [1] and put something in your POM that runs the generator. It's easy. There are other ways to hook the compiler too, see the controversial
https://projectlombok.org/
Lisp does come closest to a "language construction set" that lets you bend a language to your will. I think a compiler could be built for a language that looks more conventional that would be just as malleable, maybe even more malleable, but a generation of system programmers were traumatized by slow C++ builds and want to have nothing to do with a compiler which could be slow, even if you could make up for the slowness by having dramatically less code.
[1] A maven plugin is just a Java class which can do everything in the ordinary Java way and which gets its dependencies injected by the maven runtime. It's a common rookie mistake to try to solve problems by writing XML. I mean, if you can write a POM that makes existing plugins do what you want, go right ahead, but if you can't, just write your own plugin.
molteanu 7 hours ago [-]
> I don't see code generation as a bad smell at all.
Well, exactly!
> At my job we use the JooQ code generator...
And in Lisp one would use the...Lisp code generator. That is, the macro. And the beauty of it is that it doesn't work with pure strings but 'understands', parses, manipulates the code as expressions in its own language. That is, it has at its disposal the entire language for manipulating those expressions.
And I think that is one of the "aha" moments. At least, it was for me.
When you realize the reason for having those code generators, regardless of the project and language, in the first place. That is, something missing from the language. Some extra feature that can't be implemented. Some solution that works really close to 99%, but not beyond. Something that one would like to express, but can't. Some piece of code that you want to be parameterized like you would a function, some piece of code that you want to use in multiple places but you don't want to write the same boilerplate or copy/paste it all over the place with the risk that when you modify something, you'll need to modify in all those places. Or some piece of code that you want to be auto-generated when you build/deploy/etc.
The examples are countless. The world of meta-programming offers enough of them. The article gives the control statements as an example, as a hint to build the appetite, as is suggested in the intro, in fact.
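A hedged sketch of that "the code generator is just a macro" point (DEFINE-PREDICATES is invented for illustration): the macro builds the repetitive definitions with ordinary list-processing code, and MACROEXPAND-1 lets you inspect exactly what it generates.

    (defmacro define-predicates (&rest names)
      "Generate a NAME-P predicate function for each symbol in NAMES."
      `(progn
         ,@(mapcar (lambda (name)
                     `(defun ,(intern (format nil "~A-P" name)) (x)
                        (eq x ',name)))
                   names)))

    ;; (macroexpand-1 '(define-predicates red green blue))
    ;; => (PROGN (DEFUN RED-P (X) (EQ X 'RED))
    ;;           (DEFUN GREEN-P (X) (EQ X 'GREEN))
    ;;           (DEFUN BLUE-P (X) (EQ X 'BLUE)))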
qsort 8 hours ago [-]
I have done something exactly like this in production for a system that turned natural language into SQL. This was pre-LLM, so we had models that produced intent and keywords as structured output and we had to turn it into queries for several backends.
The project didn't work out for a variety of reasons, but technically it was beautiful: it produced query plans that in many cases were identical to those from the queries analysts wrote by hand.
So yeah, I accidentally wrote a compiler. Does it still count?
https://www.jooq.org/
and
https://www.sqlalchemy.org/
JooQ isn't everybody's taste but I use it for my job and I think it's great, particularly in that you can reuse expressions and write generators for complex queries. We have a powerful search interface that combines full-text with other kinds of queries ("Is about topic T", "Data was collected between S and E") that is beautiful. I think it's funny how JooQ has that lispy f(a,b) style (no accident it is like ferocity) and how Sqlalchemy is really fluent and takes advantage of operator overloading.
mrbluecoat 9 hours ago [-]
At least "the dead C" was a nice pun :D
molteanu 8 hours ago [-]
I find it is a real challenge to come up with a good title. On the one hand it should, probably, convey to the potential reader something about the contents of the article, on the other hand it should be something to differentiate it from the rest of the articles published on the same subject.
I like those that read something like a punch-line, that come across as something different than just a summary of the article. But these maybe work best for literature, prose, movies, etc.
timewizard 5 hours ago [-]
    #include <stdbool.h>

    typedef int fn_t(void);   /* each branch is a "thunk": no arguments, returns int */

    int iff(bool cond, fn_t *a, fn_t *b) {
        if (cond)
            return a();
        else
            return b();
    }
Now just write the implementation in terms of a() and b(). I don't get it. C doesn't have convenient syntax for this, but it is a compiled language, not an evaluated one. This argument didn't make sense to me.
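For what it's worth, the Lisp-side version of this trade-off (a sketch with invented names, not from the article): the function version works exactly as above but forces callers to wrap each branch in a lambda, while a macro accepts the raw expressions and delays evaluation without that ceremony.

    ;; Function version: branches must be passed as thunks, like a() and b() above.
    (defun iff-fn (condition then-thunk else-thunk)
      (if condition (funcall then-thunk) (funcall else-thunk)))

    ;; (iff-fn (> n 0) (lambda () (launch)) (lambda () (hold)))   ; LAUNCH, HOLD: hypothetical

    ;; Macro version: same delayed evaluation, but callers write plain expressions.
    (defmacro iff (condition then else)
      `(if ,condition ,then ,else))

    ;; (iff (> n 0) (launch) (hold))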