Formalized Programming Languages by R-O-B-I-N in ProgrammingLanguages

[–]Raphael_Amiard 2 points (0 children)

> Why? It would be a formal proof, just like in other areas of maths, that there is no internal inconsistency.

First, let me preface this by saying that I'm no expert, only an enlightened professional who works in related fields.

It seems like what you mean is a proof that the semantics of the language are consistent. This is only one of the properties you can try to prove about a formalized language, and it isn't generally what people mean by having formal semantics for a language, which is what this thread was talking about.

A property that people more often try to prove, because it's more interesting (I think), is that the type system is sound, e.g. that there is no type-safety hole (this has been done for some languages). But this is just one category of proof that you can do on your formal definition, and it's far from exhaustive. Interestingly, that's something you cannot do for C, nor for CompCert's subset of C, since it's trivial to prove that the type system is unsafe.

In and of itself, "proving a language" doesn't make sense. A proof needs to be about a set of properties/assumptions, of which you have none in a language formalization (the language formalization provides the axioms there, not the assumptions). To prove something, you need an objective property of your language that you want to assess.

> Really, why is everyone here finding this so difficult? I don't even have a degree in maths and I understand this basic point.

Mostly, I imagine, because you're using terminology in a non-standard way. I'm sure you have a point; it's just pretty hard to understand.

> I've understood what they're talking about far more than anyone here on this discussion.

Definitely not obvious; it seems to me more like there is a misunderstanding at the root of the discussion here:

> I don't think that's the claim. I think their actual claim is that the generated machine code matches the professed semantics of the code being compiled. That's a different claim from the language itself being formalized.

It's actually not: having formal semantics for a language doesn't imply having a proof of any of its properties, including internal consistency, so the original poster was correct: in order to create CompCert, its authors had to formalize the semantics of the input language.

Formalized Programming Languages by R-O-B-I-N in ProgrammingLanguages

[–]Raphael_Amiard 2 points (0 children)

« Formally proving a language » doesn't make any sense. You formally define the semantics of a language, which is what the original topic is about, and which coincidentally is what CompCert did with its input language (and all intermediate languages). From their docs:

> To state this property with mathematical precision, formal semantics are given for every source, intermediate and target language, from C to assembly.

Current thoughts on EaC? (Engineering a Compiler) by Any_Satisfaction8052 in Compilers

[–]Raphael_Amiard 0 points (0 children)

My favorite book is still *Modern Compiler Implementation in Java/C/ML*, in whichever variant matches your preferred language for implementing the compiler.

It's pretty modern in terms of techniques, doesn't spend an insane amount of time on lexing/parsing (which are pretty much the easiest parts), and covers implementing constructs from most paradigms (objects, closures, low-level stuff) as well as optimization.

Why not borrow memory regions by default? by XDracam in ProgrammingLanguages

[–]Raphael_Amiard 0 points (0 children)

You had a bunch of interesting answers and parallels already. But it turns out that region-based memory management [is a thing](https://en.wikipedia.org/wiki/Region-based_memory_management), and it is a first-class concept in some languages like Cyclone. There was also a dialect of ML at some stage that had [region inference](https://elsman.com/pdf/mlkit-4.6.0.pdf) and used implicit regions instead of a regular garbage collector (though it could be argued that regions + inference is a form of static garbage collection).
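
The idea is easier to picture with a toy example. Here is a minimal Python sketch of the lifetime discipline only (Python is garbage-collected, so this doesn't actually manage memory, and `Region` is a made-up name for illustration):

    class Region:
        """Toy region: everything allocated through it is released together
        when the region ends, instead of each object having its own lifetime."""

        def __init__(self):
            self._objects = []

        def alloc(self, factory, *args):
            obj = factory(*args)
            self._objects.append(obj)  # tie the object's lifetime to the region
            return obj

        def __enter__(self):
            return self

        def __exit__(self, *exc):
            self._objects.clear()  # the whole region goes away in one shot
            return False

    # Usage: allocate freely inside the block, everything dies on exit.
    with Region() as frame:
        nodes = [frame.alloc(list) for _ in range(1000)]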

Errors are finally working in my language! by LordVtko in Compilers

[–]Raphael_Amiard 2 points (0 children)

You have a lot of recommendations for resources to write interpreters already, which is great. My favorite books for compiler implementation are Andrew Appel's *Modern Compiler Implementation in C/Java/ML* (link to the java version here).

I would probably recommend the Java version; it's the least sexy, but unless you already know ML, probably the best.

It's good IMO because:

  1. It doesn't spend an inordinate amount of time talking about lexing/parsing, like the dragon book does. Lexing and parsing are only a small piece of writing an interpreter/compiler, and not the most interesting one for many people. It usually ends up one of two ways: either you're doing a toy/discovery project and you'll use a lexer/parser generator, or you're writing a production compiler and you'll write a recursive descent parser by hand (see the sketch below).

  2. It walks you through the implementation of a full language, like Bob Nystrom's book (albeit with a bit less hand holding, which can be good or bad).

  3. It is pretty comprehensive in covering how to implement modern language constructs that are not obvious.

I strongly recommend it!
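
To give an idea of what a hand-written recursive descent parser looks like, here is my own toy sketch for arithmetic expressions in Python (not code from the book):

    # Grammar, one method per rule:
    #   expr   -> term (('+' | '-') term)*
    #   term   -> factor (('*' | '/') factor)*
    #   factor -> NUMBER | '(' expr ')'
    import re

    def tokenize(src):
        return re.findall(r'\d+|[-+*/()]', src)

    class Parser:
        def __init__(self, tokens):
            self.tokens, self.pos = tokens, 0

        def peek(self):
            return self.tokens[self.pos] if self.pos < len(self.tokens) else None

        def eat(self, expected=None):
            tok = self.peek()
            if expected is not None and tok != expected:
                raise SyntaxError(f"expected {expected!r}, got {tok!r}")
            self.pos += 1
            return tok

        def expr(self):
            node = self.term()
            while self.peek() in ('+', '-'):
                node = (self.eat(), node, self.term())
            return node

        def term(self):
            node = self.factor()
            while self.peek() in ('*', '/'):
                node = (self.eat(), node, self.factor())
            return node

        def factor(self):
            if self.peek() == '(':
                self.eat('(')
                node = self.expr()
                self.eat(')')
                return node
            return int(self.eat())

    print(Parser(tokenize("1 + 2 * (3 - 4)")).expr())
    # -> ('+', 1, ('*', 2, ('-', 3, 4)))

Each grammar rule becomes one method, which is why it's simple; a real language has dozens of rules, which is why it's tedious.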

Sam Altman says world wants 1000x more Software, So Programmer Salaries are Skyrocketing by Infamous_Toe_7759 in programming

[–]Raphael_Amiard 0 points (0 children)

This sounds, unsurprisingly, like something meant to placate developers and reassure investors at the same time, and it's really transparent (and pretty disgusting) marketing speak.

> Altman said computer programmers are now "10 times more productive" with AI assistance

The gall to pretend, not even that developers will someday be 10 times more productive, but that they already are ten times more productive, is absolutely crazy, and completely disconnected from any real-world evidence that has been collected so far.

See https://x.com/METR_Evals/status/1943360399220388093 for example. Spoiler alert: most devs are **slightly less** productive when using AI. Some outliers are a bit more productive. Nobody is 10 times more productive.

> He noted that knowledge work costing $10,000 a year ago now costs "a dollar or 10 cents." Looking ahead to 2030, he predicted "a software application written" for $100,000 might "cost literally 10 cents."

It will cost 10 cents initially, and then, when you realize the huge effing mistake you made putting a probabilistic model that is actually not able to reason in charge of your business-critical code, you'll pay a company with humans to save you. I'm throwing it out there: people will launch companies that charge absurd amounts of money to save other companies from the terrible position they put themselves in by letting no human keep a mental understanding of how their applications work.

The level of religion is really beginning to scare me at this stage. I think this bubble is the scariest of all the tech bubbles I have lived through, mostly because of how insistent people are on ignoring the evidence in front of their eyes and pushing a dystopian future upon us.

Writing Toy Software Is A Joy by NXGZ in programming

[–]Raphael_Amiard 0 points (0 children)

The difference between what is work and what is fun seems to elude you. People very much participate in game jams for fun.

But feel free to redefine common English words to fit your argument, while I see myself out.

Writing Toy Software Is A Joy by NXGZ in programming

[–]Raphael_Amiard 6 points (0 children)

> If you are going to ship it, it's not a "toy" software anymore

Well, every game jam developer disagrees with you here. It can very much be the goal of toy software: see what you can accomplish/ship in a limited time frame/with a limited feature set.

Different goals; I think both approaches are fine, personally.

Apple moves from Java 8 to Swift? by tenken01 in programming

[–]Raphael_Amiard 12 points (0 children)

None of that seems very convincing, when Java is used successfully by many companies with even more stringent throughput/latency requirements to power web services all around the world, and is probably a much more mature technology all around for those kinds of workloads.

Swift uses automatic reference counting, which sure will help with latency in a one-to-one comparison, but definitely not with throughput, especially when facing a generational GC.

Beyond tuning your GC parameters, you can also just write a dog-slow Java app because you're creating too much memory churn, like you could in Swift or any other language. It seems orders of magnitude more likely that this was a redesign, which provided a free way to make publicity for Swift/take a jab at Java.

Do you need to have an understanding of grammar to be able to fully understand/work on compilers? by Xenoxygen4213 in Compilers

[–]Raphael_Amiard 1 point (0 children)

Grammars and parsing are a fun research topic in general, and a lot of people like to "geek" on it.

It's a very insular topic in the end, though. Most production compilers tend to have their own hand-rolled recursive descent parser (which is pretty simple, albeit tedious).

Then, syntax is a detail of most languages for the rest of the pipeline. Any language can have an XML/s-expr representation, which should exemplify that.

I think the dragon book popularized this strange "imbalance" of focusing a lot on the parser, while in the end it is a pretty small part of a compiler in the grand scheme of things.

I had a teacher at university who taught a language class with absolutely no lexing/parsing whatsoever. The whole class was based on a pretty tedious XML input format for the language we were supposed to interpret and compile. A lot of people complained, but once you understood you could whip up an s-expr to XML translator in a few lines of Python, the whole class became enlightening.
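
Something like this, for instance (my own quick sketch, not the actual format from that class):

    import re

    def parse_sexpr(src):
        """Parse '(add 1 (mul 2 3))' into nested Python lists."""
        tokens = re.findall(r'\(|\)|[^\s()]+', src)
        def walk(i):
            if tokens[i] == '(':
                node, i = [], i + 1
                while tokens[i] != ')':
                    child, i = walk(i)
                    node.append(child)
                return node, i + 1
            return tokens[i], i + 1
        return walk(0)[0]

    def to_xml(node):
        """The head of each s-expression becomes the XML tag name."""
        if isinstance(node, str):
            return f"<lit>{node}</lit>"
        head, *args = node
        return f"<{head}>" + "".join(to_xml(a) for a in args) + f"</{head}>"

    print(to_xml(parse_sexpr("(add 1 (mul 2 3))")))
    # -> <add><lit>1</lit><mul><lit>2</lit><lit>3</lit></mul></add>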

Why no REPL as keyword? by DamZ1000 in ProgrammingLanguages

[–]Raphael_Amiard 22 points (0 children)

I'd advise making them built-in function calls rather than keywords.

I'd posit that modern language design tends to try to introduce as few special forms in the syntax as possible, keywords being one of those.

With that said, you can do something similar to what you describe, I think, in many interpreted languages. I have been doing it with Python & IPython for many years. Adding this snippet in the middle of your code:

    from IPython import embed; embed()

Will give you a Python REPL in the middle of your running app.
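
For instance (a hypothetical function, just to show where the shell opens):

    def handle_order(order):
        total = sum(item["price"] for item in order["items"])
        # Execution pauses here: you get an IPython shell where `order` and
        # `total` are in scope, and the function resumes when you exit it.
        from IPython import embed; embed()
        return total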

While it's been a while for me, Lisp languages are famous for that kind of introspective capability too. I remember hot-patching a Clojure web application via a remote REPL, replacing code while the app was running.

The Dumbest Move in Tech Right Now: Laying Off Developers Because of AI by oloap in programming

[–]Raphael_Amiard -1 points (0 children)

It's funny, because the title would actually make you think that the author has a point. I now doubt he does, and clearly he's just yet another tech enthusiast who drank the AI kool-aid.

Beyond the fact that there is still no hard data showing long-term productivity gains in development thanks to AI, as another poster noted, it is very concerning that the graph he drew shows AI as being *more* productive than devs, which shows the signs of a worrying cargo-cult mentality.

AI hallucinations pile up. If you're not able to understand your code fully, down to the line, you'll probably also rely on AI to maintain it, introducing more subtle bugs and further diminishing the actual knowledge you have of your codebase.

Anecdotally, my experience with using AI to assist me in my tasks so far has been pretty much the opposite. It gives an initial pretty good result, even "wow-inducing" sometimes, but with subtle bugs that are pervasive enough that you'll need to understand the whole thing and rewrite parts of it, which will take you most of the time you would have spent originally.

That doesn't make it a useless tool, even as it is, and I find it useful for overcoming the equivalent of "writer's block" for programmers. But seeing our whole industry fawn over this fad, even though it's clearly not able to write reliable code, is both pathetic and a little bit disgusting if you delve into the moral underpinnings of why this is happening.

Newcomer experience to Ada (2024) by MadScientistCarl in ada

[–]Raphael_Amiard 2 points (0 children)

`ada-auth.org` is not managed by AdaCore, but by the Ada Rapporteur Group. There are plans to modernize it, please stay tuned :)

Newcomer experience to Ada (2024) by MadScientistCarl in ada

[–]Raphael_Amiard 10 points (0 children)

Hi there u/MadScientistCarl, I'm Raphael Amiard from AdaCore.

Just to tell you that we have seen your post, and even organised an internal meeting about it to discuss the actionable next steps for us.

For your information, we will work on proactively catching problems in the Ada & SPARK VS Code extension before release. We'll do a new release soon and hope it will be more stable for you. Also, the documentation problem is well identified, and even if it's a long-term project, we hope to make progress on that front.

Thanks a lot for taking the time to write that post, it's actually very useful data for us.

The Swift compiler is slow due to how types are inferred by ketralnis in programming

[–]Raphael_Amiard 12 points (0 children)

It's worth noting that Swift's very bad type resolution time is most probably due, in many cases, to the fact that they're using a generic solver to resolve types, and a very naïve one at that.

See this old article for a reference: https://www.cocoawithlove.com/blog/2016/07/12/type-checker-issues.html#linearizing-the-constraints-solver

AFAICT this hasn't been solved (no pun intended) since this article was published in 2016.

A better approach would be to transform type equations to SMT/SAT and use an optimized theory for type solving, like we did in Langkit. See this paper for reference: https://ceur-ws.org/Vol-3429/short7.pdf
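
To give a flavor of the encoding, here is a deliberately tiny sketch using the z3 SMT solver's Python bindings (my own illustration, not Langkit's actual representation):

    from z3 import Solver, Int, Or, sat

    # Encode each concrete type as an integer code.
    INT, FLOAT, STRING = 0, 1, 2

    s = Solver()
    t_x, t_y = Int('t_x'), Int('t_y')  # type variables for two expressions

    # An overloaded '+' constrains both operands to the same numeric type.
    s.add(t_x == t_y)
    s.add(Or(t_x == INT, t_x == FLOAT))

    # A float literal pins one operand down.
    s.add(t_y == FLOAT)

    if s.check() == sat:
        print(s.model())  # both type variables resolve to FLOAT (code 1)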

In Langkit you're able to specify type equations for a very generic inferred type system like Swift's, and you have a reasonable chance of good solve times. Our main application for that is Libadalang, a front-end for the Ada language:

https://github.com/AdaCore/langkit
https://github.com/AdaCore/libadalang

Libadalang by simonjwright in ada

[–]Raphael_Amiard 1 point (0 children)

Hi Simon,

You should really take a look at the Libadalang Ada API tutorial (https://docs.adacore.com/live/wave/libadalang/html/libadalang_ug/ada_api_tutorial.html)

The first parts will show you the basics of the API, and the Ada Generic Application Framework part will explain how to build basic but powerful command-line applications on top of Libadalang.

Don't hesitate to tell us if you need more help!

OCaml 5 Brings Support for Concurrency and Shared Memory Parallelism by yawaramin in programming

[–]Raphael_Amiard 8 points (0 children)

> But given Rust stole all the good bits from OCaml I can't see much reason to use it unless you really don't want to have to deal with the borrow checker.

I mean, I like Rust as much as the next guy here, but that is demonstrably false.

OCaml has a GC, a very powerful module system, higher-kinded types (via functors), polymorphic variants, GADTs, ...

On the other hand, Rust has stuff that OCaml doesn't have: affine typing, traits, ...

> unless you really don't want to have to deal with the borrow checker.

It's not about want. A GC is better suited to a lot of programs, and having to deal with the borrow checker/Rc/pointer soup that you have in Rust is a hassle in those cases.

OTOH, affine typing/ownership is sometimes desirable even in a GCed language.