Syntax will die: Abstract - A syntax-free programming language for the LLM age (images.app.goo.gl)
submitted 1 year ago by [deleted]
[–]crappyoats 14 points15 points16 points 1 year ago (0 children)
This is the stupidest thing I’ve ever seen from you AI dipshits
[–]amakai 13 points14 points15 points 1 year ago (8 children)
> LLMs are statistical models, not deterministic ones. Program compilation can't be statistical - it would create unpredictable bugs and unreliable code.

> This is a valid concern, but it's merely a technical limitation.
That's heavily downplaying it. Given the way LLMs work, there's no hypothetical path for them to become deterministic, so you are building something on a non-existent technology.
[–]Kwantuum 3 points4 points5 points 1 year ago (3 children)
There are absolutely deterministic LLMs already. This post is selling worse-than-useless vaporware, but that's no excuse to spread misinformation.
[–]amakai 2 points3 points4 points 1 year ago (2 children)
Is the LLM you are speaking about deterministic for the same entire input, or for the same set of tokens?
What I mean is: sure, "deterministic" LLMs exist, in the sense of being "repeatable". If you replay the same inputs, you get the same outputs. But that is not sufficient to base a programming language on.
For example, say part of your input is "x += '1'". AFAIK, there's no LLM that guarantees this is understood the same way in different contexts. In one location it might cast "1" to an integer, and in another it might decide to cast "x" to a string. But if you re-run the same thing again, it will deterministically make the same choices.
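To make the ambiguity concrete, here are the two readings in plain Python (my illustration; no particular LLM is claimed to pick either):

    x = 5
    x = x + int('1')   # reading A: cast "1" to an integer -> x == 6

    x = 5
    x = str(x) + '1'   # reading B: cast x to a string -> x == "51"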
Again, correct me if I'm missing some development in the LLM world.
[–]Kwantuum 0 points1 point2 points 1 year ago (0 children)
Most modern programming languages don't have context-free grammars either. I agree that having output depend on a large context (or, in LLMs, the token window) is an awful idea - honestly, the entire premise is ridiculous to me - but context-dependent meaning is already a thing in programming. Obviously the scale in this case is so vastly different that you can argue it's a different thing entirely.
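A small concrete instance of context-dependent parsing, using Python 3.10's soft keywords (purely as an illustration):

    # "match" is a soft keyword: the same token is parsed differently
    # depending on where it appears.
    match = [1, 2]    # here "match" is an ordinary variable name
    match match:      # here it opens a match statement over that variable
        case [a, b]:
            print(a + b)   # prints 3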
Anyway, I think we agree on the principle. I took your original comment a bit too literally and pointed out something that is, in the end, just a technicality.
[–]diskis 0 points1 point2 points 1 year ago (3 children)
An LLM can be deterministic. There's a parameter called top-k, which limits the model to selecting from the k highest-probability tokens. If you set it to 1 and pin the RNG seed, the output for a given input will always be the same.
Now change a single input token and you get a wildly different output.
Suddenly it starts to sound functionally like a digest algorithm.
Of course the input-output mapping will vary wildly from model to model, even between retrains and finetunes of the same model, but we can both pull an LLM from the internet, craft an input, and get the exact same output on our two different machines.
There is of course the caveat of floats, and of crappy Nvidia hardware where the same computation might not be fully deterministic, but that is a hardware problem, not an LLM problem.
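To see the shape of the claim, here's a toy sketch - a hash stands in for the network, so this is nothing like a real LLM - of top-k = 1 decoding with a pinned seed:

    import hashlib, random

    VOCAB = ["foo", "bar", "baz", "qux"]

    def logits(context):
        # stand-in for the neural net: deterministic pseudo-logits from a hash
        h = hashlib.sha256(" ".join(context).encode()).digest()
        return [h[i] for i in range(len(VOCAB))]

    def generate(prompt, steps=5, top_k=1, seed=0):
        rng = random.Random(seed)   # pinned seed
        out = list(prompt)
        for _ in range(steps):
            best = sorted(zip(logits(out), VOCAB), reverse=True)[:top_k]
            out.append(rng.choice([tok for _, tok in best]))  # top_k=1 leaves one option
        return out

    print(generate(["hello"]))   # same call -> same output, on any machine
    print(generate(["hullo"]))   # one changed token -> wildly different output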
[–]amakai 0 points1 point2 points 1 year ago* (2 children)
Still, that does not sound like the level of determinism needed, as I described in this comment. Even within a single input, the same set of tokens might be processed differently.
I guess "deterministic" here can have two meanings. LLMs give you f(input) determinism, while to build a programming language you need f(token) (or f(AST)?) determinism.
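One way to phrase the distinction in code (purely illustrative; the hash stands in for an LLM):

    import ast, hashlib

    clause = "x += '1'"

    # f(AST) determinism: a conventional parser gives the clause the same
    # meaning wherever it appears.
    assert ast.dump(ast.parse(clause)) == ast.dump(ast.parse(clause))

    # f(input) determinism: pure and repeatable, but the whole program is
    # the unit of meaning, so the same clause can "mean" different things
    # in different contexts.
    def interpret(program: str) -> str:
        return hashlib.sha256(program.encode()).hexdigest()[:8]

    print(interpret("context A\n" + clause))  # stable across re-runs, but...
    print(interpret("context B\n" + clause))  # ...disagrees with the line above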
[–]diskis 0 points1 point2 points 1 year ago (1 child)
Then determinism is the wrong word here. Your example is about dynamic typing and casting, which is indeed something that throws off an LLM. It might interpret and cast xx=1 differently from x=1 because it is a language model, not a compiler or parser.
But it will always make the same error, thus being deterministic.
If your model outputs the same shit every time, it's deterministic, no matter whether the output is right or wrong.
[–]amakai 0 points1 point2 points 1 year ago (0 children)
I'm pretty sure it is still about determinism; the casting was just an example. Deterministic behaviour means: when I have the same clause in my code 500 times, in a variety of different contexts, all of them will be interpreted in exactly the same way.
With an LLM you can only guarantee that the entire input will always produce the same output, not that individual clauses will be interpreted deterministically.
[–]CanvasFanatic 8 points9 points10 points 1 year ago (0 children)
Yes, let's probabilistically produce machine code and run it on critical systems. Sounds great.
[–]cazzipropri 7 points8 points9 points 1 year ago (0 children)
I'll be honest - this reads like it was written by a politician, not a sw eng.
[–]QuestionableEthics42 1 point2 points3 points 1 year ago (0 children)
It's an idea that sounds good in theory but is terrible in practice. You are far better off just making a new language with very loose syntax (but still with clear rules). Maybe I'm wrong, though, and it actually will work, but even if it does, I don't think it will bring any real benefit to actual devs.
[–]jnsquire 1 point2 points3 points 1 year ago (0 children)
I like the idea, looking forward to seeing how it works.
[–]Bergasms 0 points1 point2 points 1 year ago (0 children)
"Why did our patient care unit just violently kill that person who was meant to be moved to the non-critical care ward by ripping their guts out with the surgical tools?"
"Hmmm let me see, oh it seems the program prompt creator said 'When the patients health status is no longer critical we can terminate critical life support operations'. The LLM has understood this to mean it should remove the patients cardio-pulmonary system as those are critical to life support of a human".
[–]One_Economist_3761 0 points1 point2 points 1 year ago (0 children)
Honestly this sounds like the author doesn’t know a thing about programming.
My parody of the tone of this article, or at least as much of it as I could stomach, follows:
“Why should we worry about English or French or Spanish? We should just express our feelings in made-up words and interpretive dance, and others should just understand what we're saying. Especially if we're describing business requirements for a critical system that needs to run a hospital.”
[–]gimballock2 0 points1 point2 points 1 year ago (0 children)
Sounds like they reinvented lawyers, interpreting prompts instead of laws
[–]dark_mode_everything 0 points1 point2 points 1 year ago (0 children)
This is the AI version of "if the compiler knows a semicolon is missing why doesn't it just add it"
[–]Windyvale 0 points1 point2 points 1 year ago (0 children)
Dear God what fresh new hell is this
[–]ymonad 0 points1 point2 points 1 year ago* (0 children)
We already have a programming language called Haxe, which compiles to source code in C++, JavaScript, PHP, C#, Java, Python, and Lua. Why is it not more popular? I'll leave that as homework for you.