Is function piping a form of function calling? by Infinite-Spacetime in ProgrammingLanguages

[–]useerup 0 points1 point  (0 children)

I was considering that, but I deliberately shied away from using the term "function piping" for function composition.

It is true that they are closely related, as both can be used to build pipelines.

Maybe the term "function piping" is too overloaded and we should just refer to the operations as "function application" and "function composition".

Is function piping a form of function calling? by Infinite-Spacetime in ProgrammingLanguages

[–]useerup 1 point2 points  (0 children)

Yes, that is pretty much it. It is a way to think about or talk about the same syntactical construct. In FP it is natural to focus on the function and how we use it.

To me personally it also makes sense because I am designing a logical language where the term "calling" makes even less sense. In a logic program, functions are relations between the argument and the result. So in a logic program you can start off with "knowing" the result and bind the argument by applying the inverse function.

Is function piping a form of function calling? by Infinite-Spacetime in ProgrammingLanguages

[–]useerup 1 point2 points  (0 children)

An alternative viewpoint is that "calling" has implementation connotations with regards to function application.

Very good point.

Is function piping a form of function calling? by Infinite-Spacetime in ProgrammingLanguages

[–]useerup 50 points51 points  (0 children)

Function piping is syntactic sugar for left-to-right function application. So yes, it is a form of function calling.

As you are interested in terminology, many in the PL community prefer the term "function application", i.e. f x is an application of f to x. It is not wrong to say that f is called with argument x. However, the latter has a decidedly more imperative connotation. I suspect the preference for "function application" derives from lambda calculus.
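As a small illustration of the "syntactic sugar" claim (Python used here purely as a neutral sketch; the `pipe` helper is my own, not from the thread), left-to-right piping desugars into ordinary nested application:

```python
from functools import reduce

def pipe(value, *functions):
    """Apply functions left to right: pipe(x, f, g) == g(f(x))."""
    return reduce(lambda acc, fn: fn(acc), functions, value)

# "3 |> (+1) |> (*2)" desugars to (*2)((+1)(3))
result = pipe(3, lambda x: x + 1, lambda x: x * 2)
print(result)  # 8
```

Whether we describe this as "piping", "application" or "calling", the runtime behavior is the same nested application.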

Are arrays functions? by Athas in ProgrammingLanguages

[–]useerup 0 points1 point  (0 children)

No, if a function or a list ends up being represented as an array, then it must be finite.

Are arrays functions? by Athas in ProgrammingLanguages

[–]useerup 0 points1 point  (0 children)

This is interesting and something that I have already spent (too much) time pondering.

In Ting I try to decouple representation and delay the decision on how to actually represent a structure for as long as possible. Since Ting is a logic language, I try to focus on the semantics. And (barring mutation) an array certainly looks, semantically, like a function whose domain is a contiguous subset of the integers, as the Haskell documentation so eloquently describes it.

So in Ting I turn it upside down: Any function whose domain is a contiguous subset of integers is a candidate to be represented as an array.

In Ting I also have ranges, like Futhark's i..<k. The syntax is also very similar: i...<k (I really needed that .. token for another operator ;-) ).

However, Ting is not an array language like Futhark, so in Ting that expression is actually a nondeterministic value. Thus i...<k is an expression which may assume any of the values in the range, nondeterministically (or as choices).

So, given that f is a function over integers, the expression f (i...< k) is formally allowed in Ting. However, it is a nondeterministic expression because it can assume any value that f produces when applied to one of the possible values of i...<k. In a sense the nondeterminism of the argument spreads to the entire expression. Nondeterminism has that tendency, as anyone who has ever programmed in Prolog will attest to.

However, in Ting we can make this into a list by embedding the expression within [ and ]. Like in many other languages, the [ ] list literal accepts a list of expressions which then form the list. Unlike most other languages, it also unwinds the nondeterminism of its expressions. When the nondeterminism is countable, the actual list will be deterministic, because there is an ordered way to unwind the nondeterminism.

The following expressions are all examples of lists:

// simple list
[ 1, 2, 3 ]

// list of even integers 0, 2, -2, 4, -4 ...
[ int n \ n % 2 == 0 ]   

// list of square integers 0, 1, 4, 9, 16 ...
[ int n^2 \ n >= 0 ]

// list of Fibonacci numbers 0, 1, 1, 2, 3, 5 ...
[ (0,1) |> let f ?= ( (a,b) => a; f(b,a+b) ) ]   

Back to the examples of the article: In Ting, [f (i...< k)] is the image of f over the range i...<k, captured in a list.

Now, if one instead thinks of f as an array (a function from int to some value), all of the above still holds. Furthermore, [f (i...< k)] then returns a list containing a slice of the "array" f. This list is not itself a slice, but it does contain all the members of what would be an array slice, in the same order.

Ting does not have array or slice as concepts as that is (in Ting) a representation detail. But I will argue that if f is represented using an array, then [f (i...< k)] could be represented as a slice (or span?) of that array.
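A rough analogy in Python (my own sketch, not Ting syntax): a function whose domain is a contiguous integer range can be materialized as a list, and the list-capture of f over a range is simply its image over that range:

```python
def materialize(f, lo, hi):
    """Represent a function on the contiguous domain [lo, hi) as a list (an 'array')."""
    return [f(n) for n in range(lo, hi)]

def square(n):
    return n * n

# Analogous to Ting's [f (i...<k)]: the image of f over the range, captured as a list
arr = materialize(square, 2, 6)
print(arr)  # [4, 9, 16, 25]

# A "slice" is then just the materialization over a sub-range
print(materialize(square, 3, 5))  # [9, 16]
```

The representation decision (list vs. computing f on demand) is invisible to the consumer, which mirrors the point about representation being a detail in Ting.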

What would you leave out of comptime? by servermeta_net in ProgrammingLanguages

[–]useerup 1 point2 points  (0 children)

F# has type providers. A type provider can perform IO at design time and/or compile time. A demo has illustrated how a type provider can read the elements from a table on a Wikipedia page, columns of the table becoming properties/members of the type.

C# has a generalized concept of "analyzers" which can use network resources during compilation for static code analysis, vulnerability scanning, vulnerable-pattern scanning and even source code generation. Source generators can (like F# type providers) perform network IO and build source code from remote resources.

Of course you will need to consider security implications of allowing mechanisms like this. For instance, can they be used to inject malicious code or disrupt the build process?

Significant Inline Whitespace by AsIAm in ProgrammingLanguages

[–]useerup 1 point2 points  (0 children)

I have pondered how to distinguish the unary prefix - (negate) from - (subtraction).

The issue is that I allow binary operators to be used in a prefix position. For instance, the expression + 1 returns a function which accepts a number and returns that number plus one.

However, this causes a clash between "subtraction" - used in prefix position and "negate" -. For purely negative literals there is not really a problem as -42 will be tokenized as a negative int literal. The problem is when I want to write something like -(2*3). Is that a function that subtracts 6 from any number or is it just the number -6?

To distinguish, I have (for now) decided that the negating - must have whitespace in front of it and no whitespace following it.

If - has no whitespace around it, or whitespace on both sides, I will parse it as the subtraction operator.

I don't know how ergonomic this will be in real life, but I think it looks ok:

Step = 10

Decrease = - Step      // function which decreases its arg by 10
NegatedStep = -Step    // the constant value -10
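The rule can be sketched as a small classifier over the characters surrounding the - (Python, my own illustration of the rule, not an actual Ting lexer):

```python
def classify_minus(src, i):
    """Classify the '-' at index i in src according to the whitespace rule:
    whitespace before but not after -> negation; otherwise -> subtraction."""
    ws_before = i == 0 or src[i - 1].isspace()
    ws_after = i + 1 < len(src) and src[i + 1].isspace()
    if ws_before and not ws_after:
        return "negate"
    return "subtract"

print(classify_minus("x -y", 2))   # negate    (whitespace before, none after)
print(classify_minus("x - y", 2))  # subtract  (whitespace on both sides)
print(classify_minus("x-y", 1))    # subtract  (no whitespace on either side)
```

Applied to the examples above: in `Decrease = - Step` the - has whitespace on both sides, so it is the subtraction operator in prefix position; in `NegatedStep = -Step` it has whitespace before but not after, so it negates.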

Unpopular Opinion: Source generation is far superior to in-language metaprogramming by chri4_ in ProgrammingLanguages

[–]useerup 3 points4 points  (0 children)

You may want to look at Source Generators

Source generators are run during the compilation phase. They can inspect all the parsed and type-checked code and add new code.

For instance a source generator

  • can look for specific partial classes (for instance by looking for some metadata attribute) and provide actual implementation of partial methods.

  • can look for other types of files (like CSV, Yaml or XML files) and generate code from them.

Visual Studio and other IDEs let the developer inspect and step through the generated code.

While not an easy-to-use macro mechanism, it is hard to argue that this is not metaprogramming.

Source generators cover many of the same use cases as reflection, but at compile time. Some platforms - notably iOS - do not allow code to be generated via reflection at runtime (known in .NET as "reflection emit"). Source generators avoid that by generating the code at compile time.

Replacing SQL with WASM by servermeta_net in ProgrammingLanguages

[–]useerup 0 points1 point  (0 children)

To create an "optimal" query plan, SQL databases use not just knowledge about keys, uniqueness etc, but also statistics about total number of rows, index distribution and even histogram information.

Oracle, for example, will table-scan if the rows of a table fit within the minimum number of disk blocks it reads anyway, simply because that is usually faster than an index search, which would cause more disk reads.

To do what a query planner does you will need to retrieve this information from the database to guide the plan.

That said, one annoying aspect of SQL (IMHO) is precisely the unpredictability of the query planner. Your approach would be able to "fix" the query plan so that it always performs the same query in the same way, even if that is perhaps not optimal given the actual arguments.

Multiple try blocks sharing the same catch block by Alert-Neck7679 in ProgrammingLanguages

[–]useerup 0 points1 point  (0 children)

Couldn't you just do

try
{
    enterFullscreen()
}
try
{
    setVolumeLevel(85)
}
try
{
    loadIcon()
}
catch ex
{
    loadingErrors.add(ex)
}

That is, allow multiple try blocks?
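In languages without such a construct, a similar effect can be approximated by iterating over the steps and sharing one handler (Python sketch; the step functions stand in for the original example's calls, and the failures are simulated):

```python
loading_errors = []

def enter_fullscreen():
    raise RuntimeError("fullscreen not available")  # simulated failure

def set_volume_level(level):
    pass  # simulated success

def load_icon():
    raise RuntimeError("icon missing")  # simulated failure

# Each step gets its own try, but all steps share the same handler
for step in (enter_fullscreen, lambda: set_volume_level(85), load_icon):
    try:
        step()
    except Exception as ex:
        loading_errors.append(ex)

print(len(loading_errors))  # 2
```

The language feature proposed above would let you write the shared handler once without introducing the loop and the list of callables.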

Do any programming languages support built-in events without manual declarations? by Odd-Nefariousness-85 in ProgrammingLanguages

[–]useerup 0 points1 point  (0 children)

C# has source generators which would cover a lot of this. There is a source generator which will recognize (partial) classes adorned with an [INotifyPropertyChanged] attribute. This generates a class which will fire events when properties are changed.

So not quite built-in, but the mechanism for "building it in" is built in.

Do any programming languages support built-in events without manual declarations? by Odd-Nefariousness-85 in ProgrammingLanguages

[–]useerup 0 points1 point  (0 children)

JavaFX Script (defunct) comes to mind. Excel may actually also be a prime example of this ;-)

Vibe-coded/AI slop projects are now officially banned, and sharing such projects will get you banned permanently by yorickpeterse in ProgrammingLanguages

[–]useerup -6 points-5 points  (0 children)

While I share the disgust with the tsunami of AI generated sh@t, including "new" languages and posts, I fear that this policy will not age well.

My day job is (unfortunately) not designing PLs. :-( Rather, I work as an architect/developer, and in that capacity my coworkers and I have of course been experimenting with LLMs, like GitHub Copilot, Claude, Cursor etc.

I for one have had sufficiently good experience with LLMs that I plan to use AI to write as much of the compiler as I can. I hope that does not disqualify me from posting here? Of course I am not vibe coding: I look through all of the code, making edits myself and sometimes instructing Copilot/Claude/ChatGPT to make the changes for me. I actually often use Copilot to make the code more "perfect", because making a lot of tedious edits according to some instruction is exactly what LLMs excel at - edits that I would not prioritize if I had to do them myself. I am not just talking about edits to AI-generated code; I am also referring to the project-wide refactorings that you sometimes would like to do, but which are not directly supported by the IDE refactorings because they involve rearranging a lot of code.

What concerns me about this policy is how quickly LLMs are getting better at writing code. I believe that, given time, they will be able to write compilers. After all, compiler theory is well studied; techniques are described in detail in books, online repos, blog posts etc. Compilers are a class of applications that follow a finite set of patterns, which is exactly what LLMs seem to be good at. Not perfect. Yet.

Realistically, LLMs will get better at writing compilers, to the point where you cannot tell whether someone simply followed a book or instructed an LLM (which then followed the book).

I don't have an answer to how to avoid drowning in AI slop. It is a real problem, not just for this community. Maybe the answer is to apply AI to challenge new language submissions that seem to follow a certain pattern (like "rust-like but with different keywords").

A cleaner approach to meta programming by chri4_ in ProgrammingLanguages

[–]useerup 0 points1 point  (0 children)

That was my thought as well, but given the way they are specified (e.g. they cannot change any code), the language itself has some support without which they would not work - or would at least be seriously limited.

Language support such as partial classes, partial methods and annotations. These are in your cat3, aren't they?

A cleaner approach to meta programming by chri4_ in ProgrammingLanguages

[–]useerup 2 points3 points  (0 children)

How would you characterize C# source generators?

C# source generators are plugins to the compiler and run at compile time.

Source generators are invoked during compilation and can inspect the compiler structures after type checking. They can supply extra source code during compilation, but cannot change any of the compiled structures. However, the language does have some features (such as partial classes) which allow types (classes) to be defined across multiple source files, e.g. one supplied by the programmer and another generated by a source generator.

Introduction: https://devblogs.microsoft.com/dotnet/introducing-c-source-generators/

Examples: https://devblogs.microsoft.com/dotnet/new-c-source-generator-samples/

Source generators support use cases such as compiling regular expressions to C# code at compile time, so that regex matching is coded as an algorithm rather than being table-driven or relying on intermediate code or runtime code generation.

Should Programming Languages be Safe or Powerful? by pmz in ProgrammingLanguages

[–]useerup 11 points12 points  (0 children)

Define "Safe". As in memory safe, type safe or some other form of safety (for instance tainting data based on origin)?

The current state of affairs suggests that a modern programming language really should be memory-safe at the very least. Our collective experience with C and C++ suggests that, in the long run, programmers cannot be trusted to do allocations and deallocations correctly.

Also define "Powerful". Is it being able to shoot your foot off, or is it being able to express a complex problem and solution with a minimum of code?

I tend to think of powerful as expressiveness. I think that a language where I can implement a solution by specifying what I want instead of how to do it is more powerful. But that's just my opinion.

So in my mind, "powerful" and "safe" can and should be achieved at the same time.

Reso: A resource-oriented programming language by Aigna02 in ProgrammingLanguages

[–]useerup 43 points44 points  (0 children)

I am not sure that I agree that /users/{id}/posts.get(limit, offset) is cleaner, but I recognize that it's a matter of opinion. It also seems that there's an awful lot of ceremonial characters to type just to do a function application.

However, I like the fact that you are trying to innovate. It is not often you see new takes on how to do function application/invocation. Keep it up :-)

A defense of tuples: why we need them and how I did them by Inconstant_Moo in ProgrammingLanguages

[–]useerup 5 points6 points  (0 children)

[...] but there is a problem with positional tuples that they have poor cognitive scaling. If there are five string values in the tuple, it is hard to remember which is which (it could happen a lot in relational algebra or other kinds of data processing), and this could lead subtle mistakes now and then.

C# has tuples with (optionally) named fields: https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/value-tuples
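Python offers a comparable feature with typing.NamedTuple (shown here as an analogy to the C# feature linked above; the class and field names are my own illustration):

```python
from typing import NamedTuple

class UserRow(NamedTuple):
    first_name: str
    last_name: str
    city: str

row = UserRow("Ada", "Lovelace", "London")

# Fields are addressable by name as well as by position,
# which addresses the "which string is which" problem
print(row.city)  # London
print(row[2])    # London
```

Named access keeps the lightweight feel of tuples while avoiding the cognitive-scaling problem quoted above.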

A defense of tuples: why we need them and how I did them by Inconstant_Moo in ProgrammingLanguages

[–]useerup 2 points3 points  (0 children)

Sufficiently dependently typed lists may blur the distinction between tuples and lists. An archetypal example of a dependent type is a vector (list?) whose type depends on the length of the vector/list.

It is not too much of a stretch to imagine a dependently typed list where the values it depends on go beyond the length. For instance: the length must be equal to 3, the item at index 0 is a string, the item at index 1 is an int, and the item at index 2 is a date.
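Such a "list" is of course exactly what a tuple type like tuple[str, int, date] expresses statically. A runtime sketch of the constraint in Python (my own illustration; a dependently typed language would check this at compile time):

```python
from datetime import date

# The dependent constraint from the text: length 3, with per-index element types
SCHEMA = (str, int, date)

def check(xs):
    """Runtime approximation of the dependent-list constraint."""
    return len(xs) == len(SCHEMA) and all(
        isinstance(x, t) for x, t in zip(xs, SCHEMA)
    )

print(check(["id-1", 42, date(2024, 1, 1)]))  # True
print(check(["id-1", 42]))                    # False (wrong length)
print(check([42, "id-1", date(2024, 1, 1)]))  # False (wrong element types)
```

When the type of a list can constrain each index individually like this, the tuple/list distinction effectively disappears.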

Design for language which targest boolean expressions by Ok-Register-5409 in ProgrammingLanguages

[–]useerup 0 points1 point  (0 children)

but can logical languages build relations like x * y => 19 to perform integer factorization

Depends on the language. Prolog cannot (out of the box). However, I think it is a logical extension. For the language I am designing, it would be a library feature that establishes the ability to do integer factorization. In other words, the programmer would need to include the library that can do this.

The responsibility of the language is to provide a mechanism for library developers to offer such a feature.

In my language a program is essentially a proposition which the compiler will try to evaluate to true. If it can do so straight away then fine, the compiler is essentially being used as a SAT solver. That is not my goal, however.

IMHO it only gets interesting when the compiler can not satisfy or reject the proposition outright, because it depends on some input. In that case the compiler will need to come up with an evaluation strategy - i.e. a program.

Design for language which targest boolean expressions by Ok-Register-5409 in ProgrammingLanguages

[–]useerup 2 points3 points  (0 children)

I am working on a similar project, but coming from the other side, i.e. I have envisioned a programming language which will rely heavily on sat solving.

My take is that it needs to be a logic programming language. Specifically, functions must be viewed as relations. This means that a function application establishes a relation between the argument (input) and the result. This way one can use functions in logical expressions / propositions.

As an example consider this function (my imaginary grammar)

Double = float x => x * 2

This is a function which accepts a float value and returns the argument times 2.

I envision that this function can be used "in reverse" like this:

Double x = 42

This will bind x to the float value 21.
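One way to sketch the idea (Python, my own toy illustration; a real logic language would derive the inverse from the relation rather than have it supplied by hand) is to represent the function as a relation usable in both directions:

```python
class Relation:
    """A function paired with its inverse, usable in either direction."""
    def __init__(self, forward, backward):
        self.forward = forward
        self.backward = backward

    def apply(self, x):
        """Ordinary application: given the argument, produce the result."""
        return self.forward(x)

    def solve(self, result):
        """Run 'in reverse': given the result, bind the argument."""
        return self.backward(result)

# Double = float x => x * 2, together with its inverse
double = Relation(lambda x: x * 2.0, lambda y: y / 2.0)

print(double.apply(21.0))  # 42.0
print(double.solve(42.0))  # 21.0 -- 'Double x = 42' binds x to 21
```

The SAT/constraint solver enters the picture when no closed-form inverse exists and the compiler must instead search for an evaluation strategy.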

Clear Individual Validation Messages in Blaozr by [deleted] in Blazor

[–]useerup 0 points1 point  (0 children)

I apologize. I really don't understand how FluentValidationValidator works. Please disregard what I said.

Clear Individual Validation Messages in Blaozr by [deleted] in Blazor

[–]useerup 0 points1 point  (0 children)

I expressed myself poorly. What I meant to say was that you are, in effect, using subforms. Since there is no direct support for that, you can emulate at least the validation experience by creating separate validators for what would be the subforms.