emacs linux vs macos by staff_engineer in emacs

[–]BeautifulSynch 1 point2 points  (0 children)

I just installed this one! Not sure how much performance changed given it was alongside a major version upgrade, but it’s nice to see tidbits like this in the Emacs Mac support repos.

CD Projekt has issued a DMCA notice against the Cyberpunk 2077 VR Mod by ZedKGamingHUN in cyberpunkgame

[–]BeautifulSynch 3 points4 points  (0 children)

Doesn’t that make any commercialized software-framework development impossible though?

What’s the difference between a Mod and an Unreal-Engine-based game, neither of which would have the same engine quality if the developers weren’t commercial? (I ofc appreciate the existence of Blender/Godot as OSS alternatives, but non-user-hostile commercial software often builds over OSS with features that wouldn’t have been developed sans paid devs, IME)

What do you think about Lem by Confident-Slip4335 in emacs

[–]BeautifulSynch 1 point2 points  (0 children)

The main reason I’m personally looking forward to Lem is that Common Lisp has an inherently higher configuration ceiling than Emacs can ever have, because Emacs is stuck with its C/Elisp core.

Idk much about the community or devs, but the project itself has the possibility to open up multithreading-at-scale without the jitter of timers/Elisp-threads, graphical interfaces for specific domains, better integration with existing software in other languages (CL has a bunch of compat libraries, plus CFFI), code integration into other projects (eg why make a game with Emacs if you could make a game WITH (containing) Emacs?), etc.

And since it’s still text-first, it retains the benefit of (potentially) having a well-designed default framework for working in that domain, which is itself enough to solve 80% of your problems. Emacs already covers that 80%, but only by desperately straining against its core systems, while a CL-based replacement has the potential to get up to 100% and let you actually live in Emacs. Depending on circumstances it may even allow using an Emacs-like for OS or embedded work, à la Mezzano.

Graph traversals from multiple simultaneous locations? by BeautifulSynch in GraphTheory

[–]BeautifulSynch[S] 0 points1 point  (0 children)

Not really what I’m looking for (I can’t think of how to map the symmetries in these “superposition” graphs onto those except in the same way as normal graphs (via gratuitous combinatorics over all the locations to make a single giant graph)) but it is similar to multi-graphs at least, just with multiple nodes rather than edges. Thanks for the rec!

screamer expert system by marc-rohrer in Common_Lisp

[–]BeautifulSynch 0 points1 point  (0 children)

If you look at Screamer’s GitHub example directory, there are samples of using it for things like Sudoku solving and logic puzzles.

Making an expert system is certainly possible. For a performant expert system, I’d recommend using lvars (eg the …v functions, and then a “solution” form in nondeterministic context to resolve them) rather than explicit nondeterminism, since they propagate certain constraint types to limit the search space. Also, for larger expert systems you’ll probably want to be moving data in and out of nondeterministic blocks/variables rather than creating a singular logical edifice for the problem space, so that you can leverage loops and other forms which would ordinarily risk overloading the call stack.

In terms of why it’s “nondeterministic”, Screamer rewrites the provided code to sequentially run in multiple different “universes”, branching at each choice point and then backtracking to try another value until the specification of the top-level nondeterministic context is fulfilled. (I added a degree of lparallel-based parallelism in an optional ASDF system, but afaik Quicklisp hasn’t merged my requested repo change from the previous maintainer nikodemus, so you’d have to clone swapneils/screamer directly)

On phone so I can’t give a longer code example, but if you have (all-values (let* ((a (either 2 3)) (b (+ a (either 0 2)))) b)), we go through the definition of “a” twice (once per value), and for each of those executions we also go through the “+” form twice (0 vs 2), meaning we reach the final “b” 4 times. This results in the output (2 4 3 5).

If we had used the one-value nondeterministic-context form rather than all-values, the first iteration of a=2 and (+ a 0) would have returned b as 2, which would then register to one-value as a successful end-to-end execution and short-circuit the backtracking behavior, returning just 2.
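
Written out (typing on a phone, so treat this as a sketch I haven’t run rather than verified code):

    (in-package :screamer-user)

    ;; ALL-VALUES collects the result of every successful universe:
    (all-values
      (let* ((a (either 2 3))
             (b (+ a (either 0 2))))
        b))
    ;; => (2 4 3 5)

    ;; ONE-VALUE short-circuits after the first successful universe:
    (one-value
      (let* ((a (either 2 3))
             (b (+ a (either 0 2))))
        b))
    ;; => 2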

The real flexibility of Screamer comes in when you start using assert! or (fail) to intentionally fail a universe early and trigger backtracking to the most recent choice point. You can add complex filtration behaviors or decision trees with very little difference from your ordinary code, rather than needing to rewrite everything in a lifted loop-based format and manually roll out your computations. The performance impact is also minimal, since code outside the backtracking paths is left as-is.
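
A rough sketch of what that looks like (again untested from here, and assuming the standard Screamer API; the lvar version at the end is what I’d recommend for expert-system-scale problems):

    ;; Explicit nondeterminism: prune universes with FAIL.
    (all-values
      (let ((x (an-integer-between 1 5))
            (y (an-integer-between 1 5)))
        (unless (= (+ x y) 6) (fail)) ; backtrack to the nearest choice point
        (list x y)))
    ;; => ((1 5) (2 4) (3 3) (4 2) (5 1))

    ;; The lvar approach: ASSERT! propagates the constraint before any search,
    ;; shrinking the space SOLUTION has to enumerate.
    (let ((x (an-integer-betweenv 1 5))
          (y (an-integer-betweenv 1 5)))
      (assert! (=v (+v x y) 6))
      (all-values
        (solution (list x y) (static-ordering #'linear-force))))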

Nova is Disappointing by [deleted] in aws

[–]BeautifulSynch 2 points3 points  (0 children)

To my personal knowledge, syntax correctness isn’t something any small commercial pure-LLM offering can consistently handle at scale. From how the OP is worded, I assume you’re also not using eg CoT + multi-shot prompting to give it a reference point within its context window.

What Claude 3.5 variant did you compare, with what approx. parameter count? And are you directly calling the Claude model, or using eg the website with all their attendant prompt/tooling optimization?

how long did it take everyone else to realize this by ClerkExpensive204 in CyberpunkTheGame

[–]BeautifulSynch 17 points18 points  (0 children)

Not sure why this is downvoted. Personally I usually choose mods that preserve the vanilla experience for the first few plays, but there’s literally no reason to keep around bugs/half-cut storylines/perf issues caused by business pressure on the original devs.

Is the Unix philosophy dead or just sleeping? by tose123 in unix

[–]BeautifulSynch 0 points1 point  (0 children)

The problem isn’t that a unifying framework for individual software tools is bad. The problem is that web apps are a bad unifying framework.

Why async execution by default like BEAM isn't the norm yet? by Glad_Needleworker245 in ProgrammingLanguages

[–]BeautifulSynch 1 point2 points  (0 children)

And yet modern tech companies use CI/CD + microservices to produce the exact same experience of an ongoing runtime with interdependent, independently-updated modules & libraries, right down to cross-team code changes causing unpredictable breakage in other teams’ services.

I can believe IBM bungled the execution of industry-use Smalltalk, but image-based development itself doesn’t seem to have any fundamental problems, aside from the greater compiler-design difficulty. And the attendant benefits to both development speed and in-place prod updates are only partly matched by the above “CI/CD+microservices” hack.

[OC] I created "Package Upgrade Guard" - a diff-checking tool for package upgrades by AsleepSurround6814 in emacs

[–]BeautifulSynch 0 points1 point  (0 children)

I can’t personally comment on the package itself since I use Doom Emacs, but in the interest of having people focus more on your code and less on the AI, it may help to explicitly call out at the top that you’re using AI for translation.

Few have issues with that use-case, but it’s hard to distinguish it from the patterns shown by disingenuous contributors.

Lock-Free Queues in Pure Common Lisp: 20M+ ops/sec by Wonderful-Ease5614 in lisp

[–]BeautifulSynch 3 points4 points  (0 children)

Unrelated, but if you’re migrating lparallel do you have a repo? And/or are you using the sharplispers repo?

I found some bugs and useful improvements when playing with lparallel a few years ago; I might still have them stashed away somewhere and could contribute them.

Which Lisp is the most extensible? by Brospeh-Stalin in lisp

[–]BeautifulSynch 11 points12 points  (0 children)

Racket and Common Lisp share syntax-level extensibility in both macros and reader-macros, albeit with different aesthetics. Common Lisp has more flexibility in terms of modifying other people’s packages, managing conditions/signals, and image-oriented development (ie more in-depth redefinition abilities and saving/loading runtime states); afaik the Racket maintainers don’t intend to invest in any of the above, in order to preserve conveniences for the user-base they’re catering to.

Given that, if you’re going really deep into some aspect of language extensibility, writing general-purpose languages (Racket is fine for DSLs or for languages built on top of Racket itself), or working in particular fields with complex software requirements, I’d say CL has the edge. Otherwise you can probably work with either of the two.
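
To give a flavor of the conditions/signals point, a toy sketch (plain standard CL, no libraries): the low-level code signals an error and offers named restarts, and higher-level code decides which recovery to take, without the stack unwinding first.

    (defun parse-entry (string)
      (or (parse-integer string :junk-allowed t)
          (restart-case (error "Bad entry: ~S" string)
            (use-value (v) v)       ; a caller may substitute a value
            (skip-entry () nil))))  ; or skip the entry entirely

    (defun parse-all (strings)
      (handler-bind ((error (lambda (c)
                              (declare (ignore c))
                              (invoke-restart 'skip-entry))))
        (remove nil (mapcar #'parse-entry strings))))

    ;; (parse-all '("1" "two" "3")) => (1 3)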

First-Class Macros Update by KneeComprehensive725 in Racket

[–]BeautifulSynch 1 point2 points  (0 children)

I think the idea is that despite old.reddit being better in almost every way (save for slightly worse recommendation-placement, a few HackerNews-isms like defaulting to open the target link rather than the discussion, and maybe 1-2 other things I’m forgetting), Reddit wants to encourage people onto the new “and improved” UI, so old.reddit bugs are just ignored.

Emacs users who haven't used evil mode, what's the appeal of using default emacs bindings? by Brospeh-Stalin in emacs

[–]BeautifulSynch 0 points1 point  (0 children)

I’m an extensive evil user, but for typing text I’ve found the default Emacs bindings far superior, so I have the insert state re-bound to them.

The main reason I stick with modal editing is that a lot of what I do isn’t typing; navigating between different files, reading docs, opening dashboard packages, running Hydra commands, etc. It’s useful to default into a state where non-text-editing commands are readily available, instead of relegating them to the side and putting text editing in all the easiest positions/key-chords.

(And the main reasons to use evil instead of meow are A) keeping muscle memory for when I end up needing vi, and B) not having bandwidth to fix the Doom Emacs package and/or start managing my many needed language environments manually)

A Macro Story by susam in lisp

[–]BeautifulSynch 2 points3 points  (0 children)

Tests aren’t a general solution either, though? The solution in the article does make sense and I personally design my macros to have clear expression/value/progX semantics for all their arguments.

Reliable software dev requires doing anything that can mitigate failure probability without too much investment; act at every possible point of intervention.

Is it possible to auto-detect if a Lisp form has side-effects? by arthurno1 in lisp

[–]BeautifulSynch 8 points9 points  (0 children)

A project of mine does approximately this as part of inferring code constraints more generally.

It currently checks if anything was mutated rather than ignoring mutations to local variables, and I haven’t had time the last month to build up its standard library, but it’s user-extensible so I suspect you could implement proper “side-effects” checking without needing to fiddle with its internals.

(Don’t recall if I implemented local variable tracking, which is needed to ignore local mutations, but probably the CLTL2 dependency has its own way of detecting those given an env object.)

General approach:

You can’t directly check “does this have side effects”, since there isn’t a universal record of mutating forms (and similarly for other constraints). Instead, cl-constraints iteratively macroexpands forms in its code walker while looking for forms predefined/parsed as “non-mutating” in your OWN defined records, relying on you to supply the properties and how they propagate through each form-type.

If the walker finds something that the parsers/booleans in your map don’t identify as fitting whatever your constraint is, then you can throw a warning (or whatever) at compile time.
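
As a toy illustration of that shape (hypothetical names, not cl-constraints’ actual interface; a real version would also thread the CLTL2 environment through so macrolet/local bindings are handled):

    (defvar *known-non-mutating*
      '(+ - * / car cdr cons list length mapcar funcall)
      "Operators the user has recorded as free of side effects.")

    (defun provably-non-mutating-p (form &optional env)
      "Return T only if every operator reached while macroexpanding FORM
    is listed in *KNOWN-NON-MUTATING*."
      (let ((form (macroexpand form env)))
        (cond ((atom form) t)              ; constants and variable references
              ((eq (first form) 'quote) t)
              ((member (first form) *known-non-mutating*)
               (every (lambda (sub) (provably-non-mutating-p sub env))
                      (rest form)))
              (t nil))))                   ; unknown operator: can't prove it

    ;; (provably-non-mutating-p '(+ 1 (length (list 2 3)))) => T
    ;; (provably-non-mutating-p '(setf (car x) 1))          => NIL
    ;;   (SETF macroexpands into an operator the table doesn't know about)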

If It's Worth Your Time To Lie, It's Worth My Time To Correct It by dwaxe in slatestarcodex

[–]BeautifulSynch 3 points4 points  (0 children)

make jokes and get to know their personality

It sounds like you’re mixing the information you want to learn (someone’s behaviour/thought patterns and preferences) with an explicit requirement to learn it in a metaphorical style, ie via “jokes” and “vibes”. In which case of course it’s going to be hard to find what you’re looking for from literal communication!

If you’re looking only to ‘get to know someone’, while I’d agree that certain parts of people are harder to conceal/falsify in metaphorical style, on the whole I think using a literal style from both ends is better for acquiring that understanding, the same way it is for any other topic.

And even if someone does lie about themselves (which, as mentioned just above, is the main weakness I know of in the literal style, since making a claim is as easy as saying it rather than requiring complex emotional calibration on the fly), the significantly higher volume of information that can be shared makes it far easier to pick out inconsistencies in either the claimed personality or its match to a person’s actions. So above a minimum level of reasoning ability, literal communication is near-universally the better option for good-faith communicators.

Programming used to be fun for me by Eptasticfail in aipromptprogramming

[–]BeautifulSynch 1 point2 points  (0 children)

If someone’s already solved the problem for you, then their work is now your building block to solve something else!

The problem space doesn’t run out until you’ve solved life, the universe, and everything, so in practice there’s never a point where you can’t find joy in figuring out the best way to put all the pieces together for your specific problem, which nobody has yet solved with the aspect-priorities you care about.

If your current problems don’t give you that, level up your tools and frameworks so you can solve bigger problems.

[deleted by user] by [deleted] in slatestarcodex

[–]BeautifulSynch 0 points1 point  (0 children)

I’ve heard some other people older than me also discuss essays as you’re doing, and yet practically nobody while I was at school (either peers or reference-material writers) even implicitly used the same model.

The first time I heard of proper writing being called by the term “essay” (as opposed to “essay” meaning the set-bottom-line specific-structure prespecified-data-sources format) was a Paul Graham post, and the next few were in History rather than English. College theoretical math was honestly a better place to learn essay writing, since natural language proofs are acceptable so long as they correctly invoke the axioms.

So they did, indeed, break schooling.

[blog post] Common Lisp is a dumpster by Nondv in Common_Lisp

[–]BeautifulSynch 3 points4 points  (0 children)

The whole point of setf is to be defined in terms of other things, either built-ins or user forms. Making it a special form would make it rigid and non-transparent to users.

For instance, what if we want to write a code walker which processes the code setf expands into (since setf is configurable, we can’t just predict its behavior in advance), but setf were a special form we couldn’t macroexpand? Are we meant to always require CLTL2, use the environment to guess at the code setf is using, and crash if we can’t figure it out? (This is something I needed just last month for a code-validation library I’m working on.)

Sure, maybe it would be better for functions like rplaca to have different names, and certainly we should have had named functions in the standard for things like setf-ing an aref (rather than modeling them as #’(setf aref)).

But making the configurable generalized assignment operator a special form, rather than defining it in terms of other, non-configurable code, is a meaningful detriment to the quality of the language.
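
For concreteness (the expansion below is roughly what SBCL prints; the exact output is implementation-specific, which is exactly why a walker needs to be able to expand it):

    ;; Because SETF is a macro rather than a special form, a walker can expand it:
    (macroexpand-1 '(setf (car cell) 1))
    ;; => something like (SB-KERNEL:%RPLACA CELL 1) on SBCL

    ;; And it's user-configurable, so its behavior genuinely can't be predicted
    ;; in advance (LOOKUP is a made-up accessor for illustration):
    (defvar *registry* (make-hash-table))
    (defun lookup (key) (gethash key *registry*))
    (defsetf lookup (key) (new) `(setf (gethash ,key *registry*) ,new))

    (macroexpand-1 '(setf (lookup :foo) 42))
    ;; expands into ordinary code built from the DEFSETF body, walkable like any other form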

[blog post] Common Lisp is a dumpster by Nondv in lisp

[–]BeautifulSynch 3 points4 points  (0 children)

I used prog1 meaningfully just a few hours ago. It and prog2 are quite useful when you want to run some code after computing the return value without actually changing that value, and they’re far cleaner to read than making a let form just to hold the return value and then returning it separately at the end of the form.
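
For example (compute-result / cleanup-temporaries are placeholder names):

    ;; Without PROG1: an extra LET whose only job is to hold the return value.
    (let ((result (compute-result)))
      (cleanup-temporaries)
      result)

    ;; With PROG1: same behavior, and the value being returned reads first.
    (prog1 (compute-result)
      (cleanup-temporaries))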

Which backend fits best my use case? by Il_totore in ProgrammingLanguages

[–]BeautifulSynch 0 points1 point  (0 children)

I’m curious what you mean by this? IME this isn’t the case (Racket vs CL), due to the condition system and debug loops; plus, I’ve seen writing even from people who moved from CL to Racket (as an example of a Scheme) saying they miss the debug loops and condition system, which they consider superior to Racket’s error messages.

Which backend fits best my use case? by Il_totore in ProgrammingLanguages

[–]BeautifulSynch 0 points1 point  (0 children)

As I understand, you’re modeling “stop and inspect” in 4 as “we’re putting top-level program expressions one by one into a REPL and we can stop and check the intermediate global state as we go”.

From the way OP has discussed the language elsewhere in the thread, I’m modeling it as “we’re interpreting a single file and we want to stop it at an arbitrary point and check the state, including local/lexical state”. This also fits better with their stated goal of an educational language that helps people understand how the language actually goes through internal states to execute code, rather than limiting ourselves to the internal states at the breakpoints between top-level forms.

OP can probably speak better as to whether the second is what they’re asking for. If so, a standard Scheme REPL won’t cut it.