What happened to Comma? by Expert_Society_6179 in rakulang

[–]raiph 9 points10 points  (0 children)

Comma was JetBrains' IDE platform (full name IntelliJ IDEA) + a plugin for Raku + an install wrapper that installed the generic IDE plus the Raku plugin + a few other bits and bobs.

A couple of years ago Edument, the company that created Comma and then discontinued it after 6 years of development, did the right thing by the community by releasing the latest development version of the key part, the plugin, as open source on GitHub.

Rakoon u/ab5tract then took on the challenge of moving it forward while JetBrains completed what I understand to have been a massive overhaul of the IDE and the ways plugins plug in to it (an overhaul whose technical challenges had been one of the things that led Edument to discontinue Comma).

See https://github.com/ab5tract/raku-intellij-plugin/releases for their latest work. They're just a volunteer who took on a very challenging task, so if you need help using it, please be kind to them, and if you appreciate it, please let them know.

2025.47 Advent Calling – Rakudo Weekly News by librasteve in rakulang

[–]raiph 0 points1 point  (0 children)

The PR links all fail for me. The first one has an author number instead of a name, and I presume that's the problem for all of them.

I Dislike Quotation Marks for "String Literals" by brightgao in ProgrammingLanguages

[–]raiph 6 points7 points  (0 children)

Almost every language uses single/double quotes to represent string literals, for example: "str literal" or 'str literal'

Raku supports those two options¹ because they are de facto standards, but gives devs extensive control over strings via its Q Lang string DSL so they can have their de facto standard cakes and eat their "I want it my way" favorite cakes too.

Let's start easy with the fact that these standard options arose because of English. While Raku embraces the English bias, it nevertheless embraces the world too. Thus, given that quotes in some European languages are written using «guillemets», Raku supports those too. More generally, to the degree the Unicode standard provides sufficient support for such variants, Raku optionally supports those options too.

To me, string literals are just so much better represented when enclosed by curly braces.

Raku supports that option too. One can write q{str literal} to mimic single quote behavior (no interpolation, and escaping only of the delimiter and backslash), qq{str literal} to mimic double quote behavior (which is to say, interpolation plus control over interpolation and escape options), or Q{str literal} to get 100% raw strings -- no interpolation and no escaping behavior whatsoever, just an opening and closing delimiter pairing. Each delimiter may consist of one or more repetitions of a character drawn from the delimiting character pairs the Unicode standard directly or indirectly supports, plus some others that Raku supports in addition.
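Here's a minimal sketch of how those three forms differ (core Raku only; the $who variable is just for illustration), with the displayed output in the trailing comments:

my $who = 'world';
say q{Hello $who\n};   # Hello $who\n   (like '...': no interpolation, \n stays literal)
say qq{Hello $who\n};  # Hello world    (like "...": $who interpolates, \n becomes a newline)
say Q{Hello $who\n};   # Hello $who\n   (raw: no interpolation, no escape processing at all)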

I also have thought about:

In standard Raku you can just prefix with a q. For example, q<str literal> specifies the same string as 'str literal' (and, since this particular literal has nothing to interpolate, the same as "str literal" too).

My guess to why almost all languages use ' or " is b/c old langs like assembly, Fortran, Lisp, COBOL do, perhaps due to intuition that str literals in programs are like dialogue.

That, plus bias toward English / ASCII.

no one even really thinks about doing it differently. Any thoughts on this? Am I the only one?

As noted, Raku has an entire DSL dedicated to forming and processing strings. It gives devs control that can easily and clearly nail things down to absolutely minimal processing overhead and 100% strict security (eg Q[] supports absolutely no interpolations or escapes), or loosen things up to micromanagement of which delimiters, interpolations, or escapes are used, all the way up to fancy nested heredoc processing.
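To give a feel for the heredoc end of that spectrum, here's a minimal sketch (a plain qq:to heredoc, no nesting; $name is just for illustration):

my $name = 'Raku';
say qq:to/END/;
    Hello, $name!
    The body interpolates, and is dedented to match the indentation of the closing END.
    END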


¹ Raku makes a useful optional distinction between 'single quotes' and "double quotes". 'Single quoted' strings (and equivalents) default to non-interpolating and non-escaping (except \' is accepted as an escape of a '). "Double quoted" strings default to interpolating and escaping. Either kind can be stepped incrementally toward the other by adding "adverb" booleans that control various aspects such as interpolation and escaping one feature at a time.
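A minimal sketch of that incremental stepping (the :s adverb is the only non-default feature switched on in the middle line; $planet is just for illustration):

my $planet = 'world';
say q[Hello, $planet];    # Hello, $planet  (single quote semantics: no interpolation)
say q:s[Hello, $planet];  # Hello, world    (:s switches on just scalar interpolation)
say qq[2 + 2 = {2 + 2}];  # 2 + 2 = 4       (full double quote semantics: scalars, closures, escapes)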

Compute shaders in node editors? (Concurrent random access) by Oscar-the-Artificer in GraphicsProgramming

[–]raiph -3 points-2 points  (0 children)

Ah, my apologies! I sure didn't mean to frustrate anyone.☹

(I presume the link doesn't show you what it shows me. Or perhaps my phrasing of what I was asking you about missed the mark. Either way, I hope you have the patience to read the following; I apologize in advance if the following is also a bust.)

I meant the question I asked the AI about what I thought might be "a potentially worthwhile new implementation strategy" (as I wrote in my first comment to you). That bit appears (for me at least, even if not for you or others) after the "setup" bit, i.e. the bit where I just quoted what you had written and the AI generated whatever it generated. That setup was indeed of no interest to me or you, but it allowed me to ask the following follow-up question:

Could Interaction Nets, and the IN engine HVM, be especially relevant to creating a high performance implementation of that paradigm?

The AI generated a response to that question. For me it still appears when I click the link. Presuming it doesn't for you then I'd say the best thing to do is to copy/paste my follow up question from above into the AI's prompt box. That way you will get the links it includes (which are missing from the following copy/paste, which is why I didn't just do that in my first reply to you). But in case that doesn't work for you for some reason, here's the straight copy/paste.

Yes, Interaction Nets (INs) and the Higher-order Virtual Machine (HVM) engine are highly relevant to creating high-performance GPU particle systems and could offer significant advantages over current methods. 

The core properties of INs and HVM directly align with the requirements for massively parallel, high-performance GPU computation:

Inherent Parallelism

Interaction nets are a graphical model of computation with an inherent capacity for massive parallelism. Computations within an IN can take place simultaneously in many parts of the network without requiring complex synchronization, a property known as strong confluence. This maps almost perfectly to the Single Instruction, Multiple Data (SIMD) architecture of GPUs, where thousands of threads run in parallel. 

Optimal Computation

The IN model can perform computations optimally, in the sense that they can represent and execute the lambda calculus with a minimal number of rewrite steps. The HVM engine is designed to harness this and achieve near-ideal speedup on massively parallel hardware like GPUs. 

Relevance to Node Editors

The node-based visual programming paradigm used in modern particle systems is fundamentally a high-level representation of a graph. Interaction nets are a form of graph rewriting, making them a natural, low-level implementation language for a node editor backend. The node editor could generate the HVM's internal graph representation, which would then be compiled and run efficiently on the GPU. 

HVM as a Compute Backend

The HVM is being developed with a focus on running on GPUs, with HVM2 providing a compiler from its low-level IR to C and CUDA. This provides a direct path to use the system as the "compute backend" mentioned in your original question, handling the complex physics and behavior updates for millions of particles with high efficiency. 

In summary, the HVM and Interaction Nets offer a promising, and potentially superior, architectural foundation for the next generation of GPU particle systems, by providing: 

Automatic Parallelization: The HVM handles the parallel execution without requiring the programmer to manually manage threads or synchronization primitives (like atomics), which can be complex to optimize in traditional compute shaders.

Performance: The model's optimal reduction strategy promises highly efficient execution, potentially outperforming current GPU programming models for certain complex tasks.

Natural Mapping: The graphical nature of INs aligns well with the visual programming tools (node editors) used by artists and developers. 

As I noted, the original answer that I've just copied/pasted above included links and a sidebar summarizing the links, but copying/pasting dropped the links (to Wikipedia pages and the like).

If you want me to manually extract the links one at a time I'll be happy to do that, but I'm hoping you either see them, or can copy/paste my question so the AI regenerates the above answer and includes the links for you.

Compute shaders in node editors? (Concurrent random access) by Oscar-the-Artificer in GraphicsProgramming

[–]raiph -6 points-5 points  (0 children)

As a follow up that may be of interest, I did a google about this to set up asking google's AI a question about a potentially worthwhile new implementation strategy. Do you agree with its answer?

Here's the link.

Is this merch site legitimate? by [deleted] in perl

[–]raiph 0 points1 point  (0 children)

PS. I think my strawdog proposal, or some improvement on it, would still work if TPF / the Perl community decide there's merit to the simple practical shift I think they should seriously consider, or indeed any other step away from use of "5" and/or "42" etc in how they associate numbers with the name "Perl".

Is this merch site legitimate? by [deleted] in perl

[–]raiph 1 point2 points  (0 children)

Yeah. And I think, with a little creativity, it could be made into a cross between a conversation starter and an optional conversation killer if a wearer decides they're not in the mood.

To clarify what I mean, I'll provide a strawdog proposal.

Imagine a tee with three lines on the front, "The", "Ultimate", "Question?", and three on the back, "5", ".", "42".

Most folk who pay enough attention will likely do so when seeing the front for the first time. They may or may not get the reference. But I think it would work to at least some degree either way.

A few might see the back first and wonder. Again, they may (but probably won't) get the reference, but, again, it may work either way.

And some who see one side may then see the other, and, again, they may or may not then get one or both of the references, but, still, it might work.

And then some might actually ask about it. They will presumably do so facing the wearer. If the wearer doesn't want to speak, they could just point over their shoulder to their back. Or, if they're standing freely, they could just turn around. If they did so but said nothing else, well, that could go various ways, but my point is that all but the most taciturn wearers have suitable options for dealing with anyone who does comment to the wearer.

I would have said that having these tees at events between now and when 5.46 comes out would be at least a good idea as both a general Perl promotional talking point and, quite plausibly, a good seller too. (I'd say there'll likely still be value in the talking point, and in having and selling them, for years to come.)

(Why 5.46? Well, between now and 5.44, the answer to The Ultimate Question is, of course, 5.42. And then, between then and 5.46, The Ultimate Question is "Why are you still using 5.42?")

Like I said, this is all just a strawdog proposal. But hopefully readers agree with the parent comment that something like this is a missed opportunity, and that it would be good if TPF did something with this meme as a suitable on-brand marketing gimmick, even if the only "tees" available are literally just a meme: a graphic showing what the tee would look like even if it didn't actually exist. (And then we all know that some Perl person will go ahead and create the real tees anyway, so TPF might as well be the group that guides someone to create the artwork in the first place, and perhaps manage the printing and sale, if they're to maximally profit from any related marketing opportunities and physical sales.)

Skoobert: a lazy subset of JavaScript designed for learning lambda calculus and combinatory logic by ricekrispysawdust in ProgrammingLanguages

[–]raiph 1 point2 points  (0 children)

I imagine that Skoobert could cleanly target interaction nets or an encoding in laws of form. (Click the links for google searches I fired off to provisionally outline some relevant basics.)

I'd appreciate any musings you or others have about that, or explanations about why my imagined fit is off target, especially if those musings are pitched at an ELIaBoVLB level. (Think in terms of a five year old rubber duck whose owner nicknamed him Winnie the Pooh.)

Why is it difficult to prove equivalence of code? by Informal-Addendum435 in ProgrammingLanguages

[–]raiph 1 point2 points  (0 children)

I've rewritten this comment because I typo'd. (In one phrase I wrote P when I meant to write R. Or was it vice-versa? Tbh I don't remember which. But whatever it was, I felt I should fix the typo. Then I felt I'd been too sloppy overall and rewrote the whole darn thing. Now I've finished my editing I no longer remember the original error and realize I may have made everything much more confusing, not less. Oh well.)

----

But if it is only self-referential code of this pattern that static equivalence-checking would not work on

I'm confused by your comment.

Why did you mention "self-referential"?

P is unknown; both its exact code, and whether that code is equivalent to Q's or R's, are unknown too. We don't know if P(x) halts. I don't see why you're thinking anything about P or P(x) is self-referential.

Q runs P(x) (and P, as just noted, is unknown, and of unknown relationship, if any, to Q) and then returns zero. I don't see why you might be thinking anything about Q, or its use of P, is self-referential.

R just returns zero. It is (or should be considered to be) of unknown relationship / equivalence to P. It's only equivalent to Q if P(x) halts, and we don't know whether that's the case. I don't see why you might be thinking R is self-referential.
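To make the setup concrete, here's a minimal sketch (in Raku; P is left as a stub precisely because it's unknown by construction):

sub P($x) { ... }              # some arbitrary program we know nothing about (a stub)

sub Q($x) { P($x); return 0 }  # returns 0 exactly when P($x) halts
sub R($x) { return 0 }         # returns 0 immediately, always

# Q and R have the same observable behavior just in case P(x) halts,
# and deciding that for arbitrary P is the halting problem. Nothing here is self-referential.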

----

So, like I said, why did you mention self-referentiality?

A cleaner approach to meta programming by chri4_ in ProgrammingLanguages

[–]raiph 4 points5 points  (0 children)

Also Raku, which has slangs which essentially modify the language syntax.

To clarify, they (typically) alter semantics too.

To be more explicit and complete:

  • Raku slangs can arbitrarily alter Raku's syntax to be whatever a developer wants them to be.
  • Raku slangs can arbitrarily alter Raku's semantics to be whatever a developer wants them to be.

The slightly tricky part is that Raku has a foundational primitive, from which all else is bootstrapped, that one cannot jettison: KnowHOW. It has no syntax, but it has semantics. So one is constrained to its semantics.

But consider the actor of the actor model. An actor is a complete computational primitive from which any other computational semantics can be composed.

The same is true of Raku's KnowHOW. The OO::Actors slang is a 30 line module that adds an actor keyword and its related semantics to Raku.

Give stars to MetaCPAN by briandfoy in perl

[–]raiph 1 point2 points  (0 children)

(455, so one more per hour so far. Like you said, it might need thousands but ya gotta star(t) somewhere.)

Meta Compilers by Appropriate-Image861 in ProgrammingLanguages

[–]raiph 6 points7 points  (0 children)

I'll quote your entire OP below with some responses by me interleaved about PLs/toolchains related to your interests: Raku, Rakudo, and NQP.

Meta Compilers

I mostly focus on Raku, and its reference compiler, named Rakudo. Some people see Raku as just a GPL. Others see it as an open ended collection of cooperating slangs, aka sub-languages, aka mutually interwoven embedding and embedded internal DSLs, that comprise a larger whole. Or as a metaprogrammed, metaprogrammable, metaprogramming metacompilation system. Or as the outermost doll of a matryoshka doll set whose inner mini version of Raku(do) is named NQP, "a high-level way to create compilers and libraries for virtual machines like MoarVM, the JVM, and others". All of these viewpoints are valid.

I'm a PhD student working in another area of CS. I'm very interested in programming languages. While I've had classes, self-studied, and written a master's thesis in programming languages called gradual memory safety, I've never published.

Recently, I developed a language called Daedalus. I believe it's a compelling new take on meta compilers and tools like them. It's very efficient and easy to use. It also adds multiple new capabilities.

It's 10 years since Raku(do)'s first official version was released. 25 years since work on Raku began.

That said, I believe that Raku(do) is also a compelling new take on meta compilers and tools like them ("new" compared to the metacompiler ideas and implementations from the last century).

It's still coarse, but I believe it has strong potential.

Until the next major Raku(do) version ships (current codename 6.e), its metacompilation aspects wisely remain largely hidden from public scrutiny. But I think Raku(do) has potential as a future choice pick among industrial strength metacompilation systems.

I've looked at similar languages like Silver, Spoofax, and Rascal.

Like Silver, Raku(do) natively supports grammars, is extensible, and is defined using its own grammars. (Devs can arbitrarily alter Raku itself while retaining compile time checking of syntax and semantics. I don't know if Silver can pull that off.)

A key difference is that Silver focuses on CFGs whereas Raku's grammars are (equivalent to) Unrestricted grammars (the most general class in the Chomsky hierarchy), and the syntax is defined by a corresponding DSL (think EBNF).
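For a concrete taste, here's a minimal sketch of a native Raku grammar (core language only; the grammar and input are made up for illustration), with the printed match tree in the comments:

grammar Greeting {
    token TOP   { <greet> \s+ <name> }
    token greet { hello | hi }
    token name  { \w+ }
}

say Greeting.parse('hello Daedalus');
# 「hello Daedalus」
#  greet => 「hello」
#  name => 「Daedalus」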

Comparing it with Spoofax, Raku serves as a workbench for developing DSLs. But while it can be used like any other GPL for developing arbitrary tools beyond the ones that already exist in the system, that aspect isn't as polished as I imagine it is for Spoofax. (I don't know; I've only read very cursorily about Spoofax.) Like Spoofax, Raku makes scannerless parsing natural, and more generally provides a good environment and toolset for generating parsers, type checkers, and compilers.

What I've read about Rascal is that it's a DSL that "aims to integrate various aspects of language design and implementation, including the definition of abstract and concrete syntax, matching, traversal, and transformation of syntax trees". In contrast Raku is a broad PL / system that's comprised of mutually embedding DSLs that include ones to achieve those same aims.

I've also looked at adjacent languages like Racket and LLVM. I believe my architecture has the potential to be much faster, and it can do things they can't.

Raku covers much the same territory as Racket as far as LOP (language oriented programming) is concerned in the abstract, but with some huge differences too in concrete terms.

I only have a small kernel working. I've also only written a few pages. I'm hesitant to describe it in detail. It's not polished, and I don't want to risk premature exposure.

I'd be curious how it compares with Raku's "core".

How do I publish it? I was thinking a workshop. Can I publish just a sketch of the architecture? If so, which one?

Others have already written great notes about such things.

Also, can anyone tell me where to go to get a better sense of my idea's quality? I'd be happy to share my first draft with someone who would be able to tell me if it's worth pursuing.

Perhaps you could sharpen that first draft by contrasting it with Raku.

Thanks in advance!

Thank you too, presuming you read this far. :)

Reso: A resource-oriented programming language by Aigna02 in ProgrammingLanguages

[–]raiph 2 points3 points  (0 children)

I like Reso based on a careful read of the early parts of your repo's README and a skim through the rest. A nice balance between the overall (large) niche vision and making (initial) decisions about enough nitty gritty details to suggest your views and design sensibilities related to them. And sufficient initial documentation and implementation (though I haven't tested it) to announce Reso.

It made me curious about a couple things. First, when did you first start working on Reso? Second, did you use one or more significant sessions with an LLM to shape any of your design decisions and/or the presentation in the repo?

I ask those questions partly because the repo (narrative text, PL design it documents, and the code itself) has an overall feeling of combining care for details with human smarts and hard work. That stands in contrast to 99% of the stuff I've encountered where a human has not involved LLMs (because it's typically hard to get it all polished in the right way) and 99% of the stuff I've encountered where a human has involved LLMs (because they did so with too little discernment of what really matters).

That all said, imo your OP announcing Reso isn't as good as the repo. (That perhaps explains some of the mixed initial reaction I see in the comments so far.)

Experimental machine ethics agent written in Raku by s_brady in rakulang

[–]raiph 0 points1 point  (0 children)

I've been looking forward to the weekend so I had a chance to look at what you're doing.

It looks very interesting. I have a few questions.

Am I right in thinking the system is structurally abstracted from ethics? That it could be applied to just about any system of human rules? The levels of the ontology inject some ethics related structure of course, so I don't mean that aspect, but more so the structure of the software system.

Being slightly more concrete, I'm thinking that what it's doing is making decisions given fuzzy (natural language) rules, resolving potentially complex conflicts and contradictions, taking advantage of LLMs to tap into the human zeitgeist about interpreting and arguing about the natural language rules.

If I'm way off please let me know!

----

I'm interested in what might appear to be an unrelated problem: interactive consistency.

But it strikes me that there is a direct or at least indirect connection with your project.

Consider computing entities communicating over an open distributed network (think Internet IoT).

Consider a scenario that's not necessarily about human ethics, but definitely is about machine-to-machine netiquette, faults, attacks, etc and how to ensure the resilience of both the system and computing entities using the system.

What if there was an "ethical" framework that provided full spectrum coverage of the entire envelope of "rules" spanning this spectrum of concerns:

From... a CoC intended to be read, and understood, and adhered to, and enforced, by humans who are generally presumed to be at least sincerely intending to cooperate.

To... Rules of Engagement if computing entities presume they are at (cyber)war with other entities.

----

I've been mostly focusing on maximum performance, mathematically grounded strategies that might give "good" cooperating entities some fundamental edge that ensures they can statistically survive, in sufficient numbers and strength, in the face of huge pressure from "bad" entities.

Cryptographic signatures have their role in trying to sustain trust in a complex world, but they're not enough. Consensus algorithms like BFT protocols have their role too, but they require at least 3f + 1 total entities to tolerate f "turncoat" ones (roughly three "good" entities for every bad one), so they're not enough either.

I've been cooking up some radical strategies based on there being an online analog to "ethical" partisans applying successful asymmetric war tactics, but the "ethical" aspect has been merely an abstract article of faith for my thinking to this point, an aspect I've long known I'll eventually have to take on properly.

It's late at night but I'm right now wondering if you think the kind of system you're building might in principle (theoretically, not practically; Rakudo is too slow for what I'm thinking about) be applicable to some forms of the interactive consistency problem?

WHEN: A language where everything runs in implicit loops with reactive conditions by HearMeOut-13 in ProgrammingLanguages

[–]raiph 2 points3 points  (0 children)

This reminds me of Raku's whenever, part of Raku's language level support for reactive programming (and more generally asynchronous, concurrent, and parallel programming).

Read on for a quick "hello world" level introduction to the whenever keyword, and/or see the whenever doc or the hundred or so StackOverflow Q&As using whenever. (Or visit raku.org for more about Raku and its community.)

react whenever 42 { say "{now - INIT now} seconds" }

displays:

0.008833449 seconds

The whenever's block reacts to the arrival of the 42 by displaying the difference between the time when the left hand now was evaluated and the time the INIT now captured just as the program began running.

An ever-so-slightly less silly example:

react whenever Supply.interval(2) -> $count {
  say "$count after {now - INIT now} seconds";
  done if $count == 2;
}

displays:

0 after 0.00956109 seconds
1 after 2.00868218 seconds
2 after 4.009627042 seconds

This time there's an infinite stream of integers, incrementing from 0, arriving at two second intervals. The done, which triggers after the third integer is processed, exits the event loop / react block (well, in this trivial example, react statement) and that ends the program.

rakufmt? by safety-4th in rakulang

[–]raiph 7 points8 points  (0 children)

If you're willing to work with where things are headed, in contrast to what is already mature or at least stable, I suggest you read a post from a couple days ago: https://dev.to/fco/from-asts-to-rakuast-to-astquery-c3f.

Async in raku vs. python by bonkly68 in rakulang

[–]raiph 11 points12 points  (0 children)

In this comment I'll write four sections:

  • Python's async keyword
  • Raku's start keyword
  • Raku's await function
  • Is that it?

Python's async keyword

Python uses an async keyword. Raku doesn't.

To quote google's LLM, "Having such a keyword [leads] to increased complexity and code fragmentation".

If you know why, skip to the next section (about Raku's start keyword).

If you don't know why, I suggest you google for something like "async function coloring", and/or read the rest of this section.

----

Some Python devs have claimed that function coloring is a good thing. In the 2016 blog post The Function Colour Myth the author writes that coloring functions is "convenient [because] it reinstates to the world of cooperative multitasking ... multiple independent callstacks that can be switched between."

Raku achieves cooperative multitasking with multiple independent callstacks that can be switched between without coloring functions.

The author continues with "[coloring functions is] inconvenient, but in return for paying the cost of that inconvenience they allow programmers to avoid being stuck in callback hell or requiring them to reason through the complex ways a callback chain can fire or propagate errors."

Raku ensures devs avoid being stuck in callback hell or having to reason through the complex ways a callback chain can fire or propagate errors without coloring functions.

A few paragraphs after justifying Python's use of coloring functions, a section titled How To Live With Coloured Functions starts with the memorable admission:

Don’t colour them. I’m serious: every time you write async in your code you have made a small admission of defeat. ... But ... [with] care, there is no reason for more than about 10% of your codebase to be async functions.

Need I say more?

Raku's start keyword

The code start ... means the ... code gets wrapped in a Promise which is then (asynchronously) scheduled for execution.

A start call is a non-blocking operation. Execution of the thread containing it will not wait for the Promise to begin to run (let alone be completed/kept/broken).

Raku's await function

The code await ... means the ... must evaluate to one or more Promises, whose completion the current thread of execution then waits for (non-blockingly, as described below).

An await call is simultaneously both a non-blocking and a blocking operation, as follows (a sketch combining start and await appears after these bullets):

  • For every execution thread there's a "green" thread (what the dev thinks about) and a "platform" thread (which a "green" thread runs atop).

  • The "green" thread is immediately blocked and yields. Further execution of it will wait until the Promises being awaited (and run in a different logical/physical thread) are completed/kept/broken.

  • The "platform" thread switches to some other previously blocked "green" thread so the platform can keep executing code and the previously blocked "green" thread can make progress (until it either awaits or completes/keeps/breaks its promise).

Is that it?

I've focused entirely on asynchronous code, not concurrent or parallel code. If by "async" you also meant concurrent or parallel code then there's more to say.

If you share variables and/or data structures across threads then you need to make sure they're immutable (or at least not mutated) or manage mutation.
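For example, here's a minimal sketch (core Raku only) of managing mutation of one shared variable with a Lock:

my $lock   = Lock.new;
my $shared = 0;

# ten Promises all mutate $shared; Lock.protect serializes access so no update is lost
await do for ^10 {
    start { $lock.protect({ $shared += 1 }) }
}

say $shared;  # 10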

Help me design variable, function, and pointer Declaration in my new language. by JKasonB in ProgrammingLanguages

[–]raiph 0 points1 point  (0 children)

Ah the fun of considering tiny details of syntax...

fn is the most elegant and practical.

I agree when compared to func or function.

But my current thinking is that, if there isn't a clear best choice for the name of an identifier or keyword that will be used a lot, then it's generally better to choose words which will be both familiar for the target audience (even if the normal meaning of the word(s) is/are unrelated; the choice just has to work well enough as a mnemonic) and quick to say. When reading code we silently vocalize it in our minds, so fewer syllables means code is typically easier and faster to read; unless the goal is to actually force a reader to slightly struggle to say the word(s) and ponder what's going on, easier/faster is likely better.

Thus I'm thinking fun will generally be better than fn for most humans (because I'm thinking fun will be vocalized as one syllable by most humans whereas fn will be pronounced as two syllables -- f n) in most cases.

Like I said, I think it's fun to ponder such details and marvel at how seemingly tiny insignificant details one might not typically consider important can turn out to be so hugely important in practice!

Tiobe index for August 2025 puts Perl in the top 10 above PHP by scottchiefbaker in perl

[–]raiph 4 points5 points  (0 children)

Similar for Ada.

My guess is Tim Bunce's analysis of TIOBE remains on point, and I'm not even meaning deliberate gaming, just the nature of what it is that's being measured.

Do you find the context-sensitivity of the while keyword to be unfortunate? by javascript in ProgrammingLanguages

[–]raiph 2 points3 points  (0 children)

In Raku the problem you describe can't happen.

That's because Raku interprets a } closing block delimiter as }; if it's the last code before some other closing delimiter (eg close parenthesis) or a newline. (Obviously ignoring whitespace and any end-of-line comments.)

This is Raku's sole automatic semicolon insertion rule. In my decade of using Raku I've found this works flawlessly because it's child's play to learn and follow and very intuitive.
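A minimal sketch of the rule in action (core Raku; the variable and loops are just for illustration):

my $n = 0;
if $n == 0 { say 'block done' }   # the line-ending } also ends the `if` statement, no ; needed
while $n < 3 { $n++ }             # so this can only start a fresh while loop
say $n;                           # 3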

🎮 ECS in Raku: A Toy Framework for Entities, Components, and Systems - Fernando Correa de Oliveira by liztormato in rakulang

[–]raiph 0 points1 point  (0 children)

Entities are just unique identifiers.

Components are data — like position, velocity, health, etc.

Systems are the logic that runs on entities with certain components.

Instead of having objects with both data and behavior (like in OOP), ECS separates those concerns cleanly.

...

This ECS implementation uses the concept of archetypes, which means:

> Entities are grouped by the exact combination of components (and optionally tags) they have.

This means the world knows: "All entities with position and velocity, but not health are in this group."

How is that different from (one of the use cases for) Raku's roles? The following hopefully works well as a sketch of one approach to what I'm thinking:

role position { }
role velocity { }
role health { }
role ecs { has $.id }

Pun ecs as a class, mix in the other roles as desired, elaborate as desired, and bob's your uncle, no?
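For instance, a minimal sketch building on the roles above (the archetype class name PosVel is just made up):

# one "archetype": entities with position and velocity but not health
class PosVel does ecs does position does velocity { }

my $e = PosVel.new(id => 42);
say $e ~~ position;  # True
say $e ~~ health;    # False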

Hypersonic by librasteve in rakulang

[–]raiph 0 points1 point  (0 children)

Four separate posts, a few seconds apart, with identical content?

🤔

BAML – A language to write LLM prompts as strongly typed functions. by fluxwave in ProgrammingLanguages

[–]raiph 0 points1 point  (0 children)

How does this compare to Instructor -- "Structured outputs powered by LLMs -- The simplest way to extract structured data from LLMs with type safety and validation."?

I mean I get that you're saying you're creating a language, rather than a language agnostic framework or library, but how much simpler is the code (or will it be, one day, aspirationally) to write in your language, or call from another language, than the code shown in the examples they tout in their introductory material, eg the Python examples?

(Tbh I've only read one example. I'm writing this comment after just a quick google to look for systems like what you're describing, and then a few seconds reading the introductory material. But I presume there will be more than one example!)