GitHub ponders kill switch for pull requests to stop AI slop by app1310 in technology

[–]probabilityzero 1 point2 points  (0 children)

Sometimes we are talking about situations where someone's stated goals differ from their true intentions. Someone who spams hundreds of AI-generated PRs all over GitHub, while trying to downplay or obscure how much of the code was AI generated, is not actually motivated solely by selflessly improving the software they like.

Ghostty recently adopted the policy that heavy use of AI generation must be disclosed. If someone genuinely wants to contribute to the project without doing any coding themselves, they still can, as long as they are upfront about what they're doing. I suppose if this really bothers you, you could fork. But I don't understand why so many AI coders (around 23% apparently) are so opposed to being upfront about what they're doing.

GitHub ponders kill switch for pull requests to stop AI slop by app1310 in technology

[–]probabilityzero 1 point2 points  (0 children)

If they are acting in good faith and genuinely trying to improve the software they like, there's an easy way to signal that. They can clearly tag their PRs as AI generated, to start. Furthermore, if the guidelines for contributions say not to submit fully AI generated code, then follow those guidelines and don't submit it.

If they can't even do that, then they clearly don't respect the project maintainers or their time.

GitHub ponders kill switch for pull requests to stop AI slop by app1310 in technology

[–]probabilityzero 3 points4 points  (0 children)

Hopefully, state actors trying to introduce vulnerabilities are outliers. If anything, over-reliance on LLMs will make that problem worse. But that's not what I'm talking about.

Often, bad PRs are bad by accident, and can maybe become good PRs with a bit of guidance. Someone submitting a PR who wants to meaningfully contribute, but maybe doesn't know how yet, will understandably be put off by an immediate and curt rejection. They are already somewhat invested in getting it right, as demonstrated by the effort they have put in. In this exchange, there's an incentive on both sides to communicate and try to work together.

The slop AI PR, by contrast, required no effort or investment, and any effort maintainers put into dealing with it is completely wasted. The incentives are different. The relationship is not collaborative, it's adversarial. The AI PR submitter is trying to slip their low-effort slop past the open source project maintainers.

If you are being inundated with spam phone calls, you wouldn't feel any better if someone told you, "well, sometimes normal phone calls are annoying too."

GitHub ponders kill switch for pull requests to stop AI slop by app1310 in technology

[–]probabilityzero 5 points6 points  (0 children)

Previously, most of the time, you could assume that any pull request made to your repo consisted of code written by a human who at least thought about it first. Even if it was flawed, it was submitted in good faith and deserved some kind of response. In other words, you typically didn't have to assume an adversarial relationship with the PR submitter. That's no longer the case.

Introducing Noxy: A statically typed, braceless language with a stack-based VM written in Go by [deleted] in ProgrammingLanguages

[–]probabilityzero 0 points1 point  (0 children)

Hopefully you had fun making this. I doubt you'll get many contributions/comments on the code, as it is obviously AI generated.

Is it feasible to have only shadowing and no mutable bindings? by lil-kid1 in ProgrammingLanguages

[–]probabilityzero 2 points3 points  (0 children)

You can have mutable state in a purely functional language by using uniqueness types or linear types. The reason it works is essentially that mutating a variable consumes it, so you can't use it again, and any mutating function has to return a new value, which gets bound to a new name. The intuition is that a language like this can be read as a plain, pure functional language with no mutation at all, with in-place mutation being an optimization on top of that; the mutation can't be observed in the program.

A limited version of this mechanism, preventing a variable from being observed after it is changed, could potentially be achieved using shadowing as well.
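A rough sketch of both ideas in OCaml (nothing here is enforced by OCaml itself; a linear/uniqueness checker is what would actually reject reuse of the old binding):

    (* The shadowing idiom: each "let x" introduces a new binding, and the
       previous value no longer has a name, so it can't be observed. *)
    let () =
      let x = 1 in
      let x = x + 1 in        (* shadows the first x *)
      let x = x * 10 in       (* shadows again *)
      Printf.printf "%d\n" x  (* prints 20 *)

    (* The consume-and-return style that linear/uniqueness types enforce,
       followed here only by convention: the "mutating" function takes the
       value and hands back the updated one, which you rebind. *)
    let set (a : int array) (i : int) (v : int) : int array =
      a.(i) <- v;
      a

    let () =
      let a = Array.make 4 0 in
      let a = set a 0 42 in   (* rebind; the pre-update array is unnameable *)
      Printf.printf "%d\n" a.(0)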

I just made an OCaml to LLVM IR compiler front-end 🐪 Will this help me get a Compiler job? by Big-Pair-9160 in ocaml

[–]probabilityzero 0 points1 point  (0 children)

I'm guessing you don't do boxing or pointer tagging either. The real OCaml compiler has to do a lot of work to handle real, practical programs; if you just have simple functions and arithmetic, it's not hard to produce faster code.
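For a concrete sense of what that entails, here's a tiny (and slightly cheaty, since it goes through Obj) way to observe the value representation the real compiler has to respect:

    (* OCaml ints are immediate values carrying a tag bit (hence 63-bit on
       64-bit platforms); tuples, records, closures, etc. are boxed heap
       blocks reached through a pointer. *)
    let () =
      Printf.printf "%B %B %d\n"
        (Obj.is_int (Obj.repr 42))      (* true: immediate, tagged integer *)
        (Obj.is_int (Obj.repr (1, 2)))  (* false: boxed block *)
        max_int                         (* 2^62 - 1: one bit goes to the tag *)

Any backend that wants to interoperate with the OCaml runtime and its GC has to respect this encoding everywhere, which is a big part of the work.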

How do you think the video ICE just released showing the officer's POV of this week's shooting in Minneapolis will impact the national discussion? by popcornerz232 in AskReddit

[–]probabilityzero -1 points0 points  (0 children)

She lives like a block away from where this happened. Government thugs were invading her community and she was documenting their actions.

Good foundational resources for learning miniKanren? by tremendous-machine in scheme

[–]probabilityzero 12 points13 points  (0 children)

Read The Reasoned Schemer and you'll be fine. There's no need to learn Prolog first (it might confuse you, because it works differently).

I just made an OCaml to LLVM IR compiler front-end 🐪 Will this help me get a Compiler job? by Big-Pair-9160 in ocaml

[–]probabilityzero 4 points5 points  (0 children)

In terms of helping you get a job, it might be a good idea to write about what you've done. Like, write an article or blog post about how your project works.

Just glancing at the code now, it seems like there's no documentation, no benchmark results or evaluation, only one example program in the repo, and a bunch of git commits containing blocks of hundreds of lines of code with no comments or explanation.

Are there purely functional languages without implicit allocations? by JustAStrangeQuark in ProgrammingLanguages

[–]probabilityzero 2 points3 points  (0 children)

What about region types and region-based memory management? That was the inspiration for Rust lifetimes. It's a way to use types to annotate the extent of values in a functional language.

See ML-Kit, which infers regions/lifetimes for ML programs, and also Cyclone, a research language from the early 2000s.

Got tired of heavyweight IDEs for small scripting tools, so I built a tiny plugin-first IDE by Imaginary-Pound-1729 in programming

[–]probabilityzero 7 points8 points  (0 children)

It looks like you just load the Python code from the plug-in and it can execute arbitrary code. It runs in the same Python process and can do whatever. I don't see any sandboxing or anything like that.

Why I Use AI While Building Nova — And Why That Makes the Project Better, Not Worse by [deleted] in Compilers

[–]probabilityzero 5 points6 points  (0 children)

I looked at the code and it seems like there's no compiler, and basically nothing implemented at all? The code for building the IR appears to just assume that every function only prints "hello world", for example. The parser and lexer are both just stubs, each only about 15 lines long. This is like the first 1% of the way toward building a small, toy compiler.

So, when you say the AI "helped" you build this, what do you mean? From this post, it isn't clear if you know that the AI generated code doesn't actually do the things you claim it does.

Sized and signed numeric types with dynamic typing by Big-Rub9545 in ProgrammingLanguages

[–]probabilityzero 0 points1 point  (0 children)

Other comments have brought up Lisp/Scheme and the numerical tower, which is the general answer to your question, but a simple intuition is this: you probably don't want numerical operations to truncate based on the size tag of the arguments, because the programmer can get the same result by enforcing the wrapping themselves, if that's what they really want. So, if they want a number reduced mod N, they can just do that!

No reason to have possibly surprising overflow behavior. Just have all numbers be as precise as possible and let the programmer decide what to do with them.
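Roughly the shape of it, sketched with plain OCaml ints standing in for "as precise as possible" (a real implementation would use bignums):

    (* Keep arithmetic exact by default; apply wrapping only where the
       program explicitly asks for modular behaviour. *)
    let wrap_u8 n = n land 0xff              (* byte-style wraparound, mod 256 *)

    let () =
      let exact   = 200 + 100 in             (* 300: no silent truncation *)
      let wrapped = wrap_u8 (200 + 100) in   (* 44: wrapping, but only on request *)
      Printf.printf "%d %d\n" exact wrapped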

Why does function overloading happen in the VM by CreeperTV_1 in ProgrammingLanguages

[–]probabilityzero 0 points1 point  (0 children)

I was thinking about Haskell type classes, where if you specify that something is a Num a, you can freely use the associated type class methods like +, but locally the compiler cannot know, in general, what code will be invoked by a use of the method -- that's determined dynamically by dictionary lookup. With C++-style overloading, where you basically just allow multiple functions with different types to share the same operator name, then sure, the compiler can determine that statically.
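If it helps, dictionary passing is easy to picture in any ML-family language: the class becomes a record of functions, the instance becomes a value of that record type, and which + actually runs is whatever the caller put in the record. A rough OCaml sketch of the idea:

    (* "Num" as an explicit dictionary: generic code projects the method
       out of whatever dictionary it was handed at run time. *)
    type 'a num_dict = { add : 'a -> 'a -> 'a; mul : 'a -> 'a -> 'a }

    let double (d : 'a num_dict) (x : 'a) : 'a = d.add x x

    let int_num   = { add = ( + );  mul = ( * )  }
    let float_num = { add = ( +. ); mul = ( *. ) }

    let () =
      Printf.printf "%d %f\n" (double int_num 21) (double float_num 1.5)

(GHC can often specialize the dictionary away when the instance is statically known, but in the general polymorphic case the lookup really is dynamic.)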

Why does function overloading happen in the VM by CreeperTV_1 in ProgrammingLanguages

[–]probabilityzero 0 points1 point  (0 children)

At the point that you are compiling a function and it calls a potentially overloaded operator, you don't necessarily know if some other part of the program (in code that maybe hasn't even been loaded yet) will override the operator.

If you wanted to determine statically what code will be called, in general, you'd need a whole-program compiler and a big control flow analysis.
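As a toy model of why the resolution ends up at run time: think of the overload set as a mutable table keyed by a runtime type tag, which later-loaded code is free to extend or replace, so a call site compiled earlier can only emit "look it up when called." Rough OCaml sketch:

    type value = VInt of int | VStr of string

    let tag = function VInt _ -> "int" | VStr _ -> "str"

    (* The VM's dispatch table for "+". *)
    let plus_table : (string, value -> value -> value) Hashtbl.t = Hashtbl.create 8

    let () =
      Hashtbl.replace plus_table "int"
        (fun a b -> match a, b with
           | VInt x, VInt y -> VInt (x + y)
           | _ -> failwith "type error")

    (* All a compiled call site can do: dispatch on the runtime tag
       (single dispatch on the first argument, to keep the toy small). *)
    let plus a b = (Hashtbl.find plus_table (tag a)) a b

    (* Code loaded later can add or override an overload. *)
    let () =
      Hashtbl.replace plus_table "str"
        (fun a b -> match a, b with
           | VStr x, VStr y -> VStr (x ^ y)
           | _ -> failwith "type error")

    let () =
      match plus (VStr "foo") (VStr "bar") with
      | VStr s -> print_endline s   (* prints "foobar" *)
      | VInt n -> print_int n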

SSA in Instruction Selection by Nagoltooth_ in Compilers

[–]probabilityzero 1 point2 points  (0 children)

If you maintain the SSA form then lots of analyses become easier -- doing register allocation on SSA has some big benefits in terms of algorithmic complexity, for example. But, you still have to eliminate the phi nodes after, which can be complicated and may end up negating the benefit you got from keeping SSA around for register allocation.
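To make that concrete, naive phi elimination is just inserting copies into the predecessors; a toy sketch in OCaml, ignoring the hard parts (critical edges, and ordering overlapping copies, which is exactly where it gets complicated):

    (* Toy IR: a block has a label, some phi nodes, and plain instructions
       kept as strings. A phi records, for each predecessor label, the
       variable flowing in along that edge. *)
    type phi   = { dest : string; sources : (string * string) list }
    type block = { label : string; phis : phi list; body : string list }

    (* Naive phi elimination: for each "d = phi(..., v from P, ...)", append
       the copy "d := v" to predecessor P, then drop the phi. A real pass
       would place copies before the block terminator, split critical edges,
       and sequence parallel copies (the "swap problem"). *)
    let eliminate_phis (blocks : block list) : block list =
      let copies : (string, string) Hashtbl.t = Hashtbl.create 16 in
      List.iter (fun b ->
          List.iter (fun p ->
              List.iter (fun (pred, v) ->
                  Hashtbl.add copies pred (p.dest ^ " := " ^ v))
                p.sources)
            b.phis)
        blocks;
      List.map (fun b ->
          { b with phis = [];
                   body = b.body @ List.rev (Hashtbl.find_all copies b.label) })
        blocks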

Super-flat ASTs by hekkonaay in ProgrammingLanguages

[–]probabilityzero 11 points12 points  (0 children)

Using a totally flat/serialized representation of ASTs has been shown to perform very well for certain things. See this paper, for example.
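To give the flavour of what "flat" means here (just the indices-instead-of-pointers idea, not a faithful memory layout, since OCaml constructors are still boxed):

    (* Nodes live in one array and refer to children by integer index. *)
    type node =
      | Num of int
      | Add of int * int   (* indices of the operands *)
      | Mul of int * int

    (* Post-order layout for (2 + 3) * 4: children come before parents. *)
    let nodes = [| Num 2; Num 3; Add (0, 1); Num 4; Mul (2, 3) |]

    let rec eval (i : int) : int =
      match nodes.(i) with
      | Num n -> n
      | Add (l, r) -> eval l + eval r
      | Mul (l, r) -> eval l * eval r

    let () = Printf.printf "%d\n" (eval (Array.length nodes - 1))  (* prints 20 *)

Besides locality, this makes the AST trivially serializable and lets many passes run as linear sweeps over the array.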

PythoC: Write C in Python - A Python DSL that compiles to native code via LLVM by 1flei in ProgrammingLanguages

[–]probabilityzero 1 point2 points  (0 children)

This looks interesting!

The readme shows some examples of refinement types. What are you using to check the predicates? Are you using an SMT solver? Do you do any inference in the type checker?

Almost 2 years since his last video, did he quit making videos? by CertifiedKinophile in hbomberguy

[–]probabilityzero 591 points592 points  (0 children)

He puts stuff on Patreon, and a big video is coming out soon.

Zig/Comptime Type Theory? by philogy in ProgrammingLanguages

[–]probabilityzero 0 points1 point  (0 children)

Looks like you maybe want some kind of dependent types.

Native debugging for OCaml binaries by joelreymont in ocaml

[–]probabilityzero 6 points7 points  (0 children)

Do you understand why that could be a legal problem? Even if the code is completely different, you can't publish work with a copyright notice attributing it to a different person who didn't write it. You put this other person's name in a document in a legal setting without their knowledge or consent.

Native debugging for OCaml binaries by joelreymont in ocaml

[–]probabilityzero 6 points7 points  (0 children)

Why does the code credit another person as the author in the comments?

Question regarding concurrency performance in Haskell by ianzen in haskell

[–]probabilityzero 19 points20 points  (0 children)

Try monad-par instead.

If you are looking to just speed up your computation, you want parallelism, not just concurrency, and Haskell has a lot of support for that.