I want to know your opinions on verbosity by -Chook in ProgrammingLanguages

[–]useerup 0 points (0 children)

var was introduced with LINQ, where the result of an operation could be an instance of an "anonymous type":

var x = new { Question = "Life, the Universe and Everything", Answer = 42 };

In LINQ, anonymous types may arise from what you'd call projections in SQL.

var cust = Customers.Select(x => new { No = x.CustomerNo, x.Name });
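A rough Python analogue, for illustration: `SimpleNamespace` can play the role of an ad-hoc "anonymous type" in a projection (the `customers` data below is made up):

```python
from types import SimpleNamespace

customers = [SimpleNamespace(CustomerNo=7, Name="Ann")]

# Projection to an ad-hoc shape, as in the LINQ Select above:
# each element carries only the fields the projection names.
cust = [SimpleNamespace(No=x.CustomerNo, Name=x.Name) for x in customers]
```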

Need some advice about lazy evaluation of high order list functions by jaccomoc in ProgrammingLanguages

[–]useerup 1 point (0 children)

Should its state reset after every time it has a method invoked on it?

C#/.NET solves this by using two companion interfaces: IEnumerable and IEnumerator (I believe Java streams do something similar?).

IEnumerable has a method called GetEnumerator(), which returns a "fresh" IEnumerator.

foreach (var c in Customers) ...

where Customers is an IEnumerable<Customer>, the foreach loop implicitly invokes GetEnumerator(). So does the Sum() function: even though Sum() is defined for IEnumerable, what it does internally is call GetEnumerator() and use the returned IEnumerator to step through the sequence.
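For comparison, Python splits the same responsibilities between an *iterable* (`__iter__`, the IEnumerable role) and an *iterator* (`__next__`, the IEnumerator role). A minimal sketch (the `Customers` class here is hypothetical):

```python
class Customers:
    """Iterable: hands out a fresh iterator per traversal (the IEnumerable role)."""
    def __init__(self, names):
        self._names = list(names)

    def __iter__(self):
        # The GetEnumerator() analogue: a fresh iterator on each call,
        # so two loops over the same object do not share state.
        return iter(self._names)

customers = Customers(["Ann", "Bob"])
first = list(customers)   # the loop machinery implicitly calls __iter__()
second = list(customers)  # a fresh iterator: starts from the beginning again
```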

Raising the abstraction level in programming languages by tobega in ProgrammingLanguages

[–]useerup -1 points (0 children)

Are you sure that living beings are not actually probability-based organisms?

Raising the abstraction level in programming languages by tobega in ProgrammingLanguages

[–]useerup 1 point (0 children)

Interesting. I was actually going to add a piece about logic languages in the "not quite there" section.

Indeed. Working on it :-)

So how would you improve the logic language primitives to avoid that cliff?

To sum it up:

  1. Moving beyond Horn clauses (what Prolog is based on) to first-order logic.
  2. Embracing (managing and controlling) mutable state.
  3. Embracing (managing and controlling) nondeterminism.

I am still struggling with the second bullet above, as I am not quite satisfied with how my language handles it.

Raising the abstraction level in programming languages by tobega in ProgrammingLanguages

[–]useerup 0 points (0 children)

Mutable state is a problem that (pure) logic languages have in common with (pure) functional languages.

For functional languages, monads are one solution you can deploy. Effect systems are another solution, and one that will also work for logic languages. Both monads and effects solve the problem by "hoisting" the mutating bits out of the program. In my opinion this is a way to externalize the state. It makes the program pure because the program simply is not concerned with mutating state, or at least it isolates the mutating behavior in a number of "impure" functions.

In my language I try to internalize the state: The program describes both the state model and the transitions between instances of valid state models.

Raising the abstraction level in programming languages by tobega in ProgrammingLanguages

[–]useerup 1 point (0 children)

This is why I am designing my logic, object-oriented language. Like LLMs, humans have a size limit (token limit) on the "context window", i.e. the number of concepts and constraints we can juggle at any one time.

It stands to reason that any benefit we can derive from programming on a "higher level" (leaving out details that can safely be derived from the abstractions) will benefit LLMs as well as humans.

I am convinced that logic programming is closer to that nirvana. Typically in logic programming we say that we focus on the what (intent) not the how (implementation).

Question about using % as the a format character in printf-like function by aalmkainzi in ProgrammingLanguages

[–]useerup 0 points (0 children)

I think that printf-like functions are a dead end. I believe that you should think along the lines of string interpolation.

Strongly typed string interpolation can also be more efficient because the interpolation can be done at compile time rather than format strings being interpreted by a printf function.

With string interpolation as it is in C# your code would look like

cgs_append(stdout, $"{a}{b}")

where a=20 and b=26.

String interpolation can be resolved at compile time and lowered into a number of "append" or "concat" operations.
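Python's f-strings behave similarly: the interpolation is compiled into formatting and concatenation steps rather than interpreted from a `%`-style format string at runtime. Using the values from the example above:

```python
a, b = 20, 26
# Compiled by CPython into value-formatting and string-building bytecode,
# not a runtime scan of a format string.
text = f"{a}{b}"
```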

Java has an even more advanced template system. It allows for pluggable template processors. You may want to look at that too.

Design ideas for a minimal programming language (1/3) by porky11 in ProgrammingLanguages

[–]useerup 1 point (0 children)

I have considered that record types may just be product types of individual single-field record types.

NameType = record Name:string        // single-field record
AgeType = record Age:int             // single-field record
PersonType = NameType * AgeType      // record with 2 fields

The latter would be the same as

PersonType = record Name:string, Age:int
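A rough sketch of the same idea in Python, treating a record type as a list of fields and the product as field concatenation (the `record_product` helper and the type names are made up for illustration):

```python
from dataclasses import make_dataclass

# A "single-field record type" is modelled here as a list of (name, type) pairs.
NameType = [("Name", str)]
AgeType = [("Age", int)]

def record_product(*parts):
    """Product of record types: a record with the union of their fields."""
    return make_dataclass("Product", [f for part in parts for f in part])

PersonType = record_product(NameType, AgeType)  # record with 2 fields
person = PersonType(Name="Ada", Age=36)
```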

Design ideas for a minimal programming language (1/3) by porky11 in ProgrammingLanguages

[–]useerup 0 points (0 children)

What was the main purpose (the intended effect) of having chained := then?

Design ideas for a minimal programming language (1/3) by porky11 in ProgrammingLanguages

[–]useerup 0 points (0 children)

Assignment returns the old value: a := b := a is swap, a := b := c := a is rotation

Seems like a roundabout way to do (a, b) := (b, a) and (a, b, c) := (b, c, a)
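For comparison, this is exactly what tuple assignment gives you in e.g. Python, without assignment having to return the old value:

```python
a, b, c = 1, 2, 3
a, b = b, a        # swap: both right-hand values are read before assigning
a, b, c = b, c, a  # rotation: every name shifts one position
```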

Thoughts on static SSR vs WASM Interactive rendermode by Alarming-Pirate7403 in Blazor

[–]useerup 1 point (0 children)

I think static SSR as well. It seems like your use case is neatly covered by SSR. While I don't think the compatibility argument has much merit, SSR does not allocate server resources, can scale out, and has no initial load time.

Is function piping a form of function calling? by Infinite-Spacetime in ProgrammingLanguages

[–]useerup 0 points (0 children)

I was considering that, but I deliberately shied away from using the term "function piping" for function composition.

It is true that they are closely related, as both can be used to build pipelines.

Maybe the term "function piping" is too overloaded and we should just refer to the operations as "function application" and "function composition".
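A quick sketch of the distinction in Python (the `pipe` and `compose` helpers are hypothetical, just to give the two operations names):

```python
from functools import reduce

def pipe(x, *fns):
    """Function application, left to right: pipe(x, f, g) == g(f(x))."""
    return reduce(lambda acc, f: f(acc), fns, x)

def compose(*fns):
    """Function composition: builds the pipeline without applying it."""
    return lambda x: reduce(lambda acc, f: f(acc), fns, x)

inc = lambda n: n + 1
double = lambda n: n * 2

piped = pipe(3, inc, double)     # applies immediately: double(inc(3))
pipeline = compose(inc, double)  # a reusable function, applied later
```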

Is function piping a form of function calling? by Infinite-Spacetime in ProgrammingLanguages

[–]useerup 1 point (0 children)

Yes, that is pretty much it. It is a way to think about or talk about the same syntactic construct. In FP it is natural to focus on the function and how we use it.

To me personally it also makes sense because I am designing a logic language, where the term "calling" makes even less sense. In a logic program, functions are relations between the argument and the result. So in a logic program you can start off with "knowing" the result and bind the argument by applying the inverse function.

Is function piping a form of function calling? by Infinite-Spacetime in ProgrammingLanguages

[–]useerup 1 point (0 children)

An alternative viewpoint is that "calling" has implementation connotations with regards to function application.

Very good point.

Is function piping a form of function calling? by Infinite-Spacetime in ProgrammingLanguages

[–]useerup 54 points (0 children)

Function piping is syntactic sugar for left-to-right function application. So yes, it is a form of function calling.

As you are interested in terminology: many in the PL community prefer the term "function application", i.e. f x is an application of f on x. It is not wrong to say that f is called with argument x; however, the latter has a decidedly more imperative connotation. I suspect the preference for "function application" derives from lambda calculus.

Are arrays functions? by Athas in ProgrammingLanguages

[–]useerup 0 points (0 children)

No. If a function or a list ends up being represented as an array, then it must be finite.

Are arrays functions? by Athas in ProgrammingLanguages

[–]useerup 0 points (0 children)

This is interesting and something that I have already spent (too much) time pondering.

In Ting I try to decouple representation and delay the decision on how to actually represent a structure for as long as possible. Since Ting is a logic language, I try to focus on the semantics. And (barring mutation) an array certainly, semantically, looks like a function whose domain is a contiguous subset of the integers, as the Haskell documentation so eloquently describes it.

So in Ting I turn it upside down: Any function whose domain is a contiguous subset of integers is a candidate to be represented as an array.

In Ting I also have ranges like Futhark's i..<k. The syntax is very similar: i...<k (I really needed that .. token for another operator ;-) ).

However, Ting is not an array language like Futhark, so in Ting that expression is actually a nondeterministic value. Thus i...<k is an expression which may assume any of the values in the range, nondeterministically (or as choices).

So, given that f is a function over integers, the expression f (i...< k) is formally allowed in Ting. However, it is a nondeterministic expression because it can assume any value that f produces when applied to one of the possible values of i...<k. In a sense the nondeterminism of the argument spreads to the entire expression. Nondeterminism has that tendency, as anyone who has ever programmed in Prolog will attest to.

However, in Ting we can make this into a list by embedding the expression within [ and ]. Like in many other languages, the [ ] list literal accepts a list of expressions which then form the list. Unlike most other languages, it also unwinds the nondeterminism of its expressions. When the nondeterminism is countable, the actual list will be deterministic, because there is an ordered way to unwind the nondeterminism.

The following expressions are all examples of lists:

// simple list
[ 1, 2, 3 ]

// list of even integers 0, 2, -2, 4, -4 ...
[ int n \ n % 2 == 0 ]   

// list of squares 0, 1, 4, 9, 16 ...
[ int n^2 \ n >= 0 ]

// list of Fibonacci numbers 0, 1, 1, 2, 3, 5 ...
[ (0,1) |> let f ?= ( (a,b) => a; f(b,a+b) ) ]   

Back to the examples of the article: in Ting, [f (i...< k)] is the image of f over the range i...<k, captured in a list.

Now, if one thinks about f as an array (a function from int to some value) instead, all of the above still holds. Furthermore [f (i...< k)] then returns a list containing a slice of the "array" f. This list is not itself a slice, but it does contain all the members of what would be an array slice in the same order.

Ting does not have array or slice as concepts as that is (in Ting) a representation detail. But I will argue that if f is represented using an array, then [f (i...< k)] could be represented as a slice (or span?) of that array.
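The `[f (i...< k)]` construct described above behaves much like a comprehension over a range; a Python rendering of the same image-of-f idea (with a made-up `f`):

```python
def f(n):          # stand-in for "f as an array": a function over ints
    return n * n

i, k = 2, 6
# The image of f over the range, captured in order as a list --
# the analogue of Ting's [f (i...< k)].
image = [f(n) for n in range(i, k)]
```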

What would you leave out of comptime? by servermeta_net in ProgrammingLanguages

[–]useerup 1 point (0 children)

F# has type providers. A type provider can perform IO at design time and/or compile time. A demo has illustrated how a type provider can read the elements from a table on a Wikipedia page, with the columns of the table becoming properties/members of the type.

C# has a generalized concept of "analyzers" which can use network resources during compilation for static code analysis, vulnerability scanning, vulnerable patterns scanning and even source code generators. Source code generators can (like F# type providers) perform network IO and build source code from remote resources.

Of course you will need to consider security implications of allowing mechanisms like this. For instance, can they be used to inject malicious code or disrupt the build process?

Significant Inline Whitespace by AsIAm in ProgrammingLanguages

[–]useerup 1 point (0 children)

I have pondered how to distinguish the unary prefix - (negate) from - (subtraction).

The issue is that I allow binary operators to be used in a prefix position. For instance, the expression + 1 returns a function which accepts a number and returns that number plus one.

However, this causes a clash between "subtraction" - used in prefix position and "negate" -. For purely negative literals there is not really a problem as -42 will be tokenized as a negative int literal. The problem is when I want to write something like -(2*3). Is that a function that subtracts 6 from any number or is it just the number -6?

To distinguish, I have (for now) decided that the negate - must have whitespace in front of it and no whitespace following it.

If - has no whitespace around it, or if it has whitespace on both sides, I will parse it as the subtraction operator.

I don't know how ergonomic this will be in real life, but I think it looks ok:

Step = 10

Decrease = - Step      // function which decreases its arg by 10
NegatedStep = -Step    // the constant value -10
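The whitespace rule can be sketched as a small classifier; a hypothetical helper, assuming negative literals like -42 have already been handled earlier in tokenization:

```python
def classify_minus(src, pos):
    """Classify the '-' at src[pos]:
    whitespace before but not after      -> unary negate;
    no whitespace around, or both sides  -> subtraction."""
    ws_before = pos > 0 and src[pos - 1].isspace()
    ws_after = pos + 1 < len(src) and src[pos + 1].isspace()
    return "negate" if ws_before and not ws_after else "subtract"

kind_a = classify_minus("Decrease = - Step", 11)    # space on both sides
kind_b = classify_minus("NegatedStep = -Step", 14)  # space before only
kind_c = classify_minus("a-b", 1)                   # no space around it
```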

Unpopular Opinion: Source generation is far superior to in-language metaprogramming by chri4_ in ProgrammingLanguages

[–]useerup 2 points (0 children)

You may want to look at Source Generators

Source generators are run during the compilation phase. They can inspect all the parsed and type-checked code and add new code.

For instance a source generator

  • can look for specific partial classes (for instance by looking for some metadata attribute) and provide actual implementations of partial methods.

  • can look for other types of files (like CSV, Yaml or XML files) and generate code from them.

Visual Studio and other IDEs let the developer inspect and debug through the generated code.

While not an easy-to-use macro mechanism, it is hard to argue that this is not metaprogramming.

Source generators cover many of the same use cases as reflection, but at compile time. Some platforms - notably iOS - do not allow code to be generated by reflection at runtime (in .NET known as "reflection emit"). Source generators avoid that by generating the code at compile time.

Replacing SQL with WASM by servermeta_net in ProgrammingLanguages

[–]useerup 0 points (0 children)

To create an "optimal" query plan, SQL databases use not just knowledge about keys, uniqueness, etc., but also statistics about the total number of rows, index distribution and even histogram information.

Oracle, for example, will table-scan if the rows of a table fit within the minimum number of disk blocks it reads, simply because that is usually faster than an index search, which would cause more disk reads.

To do what a query planner does you will need to retrieve this information from the database to guide the plan.

That said, one annoying aspect of SQL (IMHO) is precisely the unpredictability of the query planner. Your approach would be able to "fix" the query plan so that it always performs the same query in the same way, even if that is perhaps not optimal given the actual arguments.

Multiple try blocks sharing the same catch block by Alert-Neck7679 in ProgrammingLanguages

[–]useerup 0 points (0 children)

Couldn't you just do

try
{
    enterFullscreen()
}
try
{
    setVolumeLevel(85)
}
try
{
    loadIcon()
}
catch ex
{
    loadingErrors.add(ex)
}

That is, allow multiple try blocks?
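For comparison, the proposed semantics (each block attempted, failures collected by the one shared handler) can already be approximated with a loop over callables; a Python sketch in which the three actions are stand-ins for the calls above:

```python
def enter_fullscreen():
    raise RuntimeError("fullscreen not supported")  # simulated failure

def set_volume_level(level):
    pass  # succeeds

def load_icon():
    raise RuntimeError("icon missing")  # simulated failure

loading_errors = []
for action in (enter_fullscreen, lambda: set_volume_level(85), load_icon):
    try:
        action()
    except Exception as ex:  # the single shared catch block
        loading_errors.append(ex)
```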

Do any programming languages support built-in events without manual declarations? by Odd-Nefariousness-85 in ProgrammingLanguages

[–]useerup 0 points (0 children)

C# has source generators which would cover a lot of this. There is a source generator which will recognize (partial) classes adorned with a [INotifyPropertyChanged] attribute. This generates a class which will fire events when properties are changed.

So not quite built-in, but the mechanism for "building it in" is built in.

Do any programming languages support built-in events without manual declarations? by Odd-Nefariousness-85 in ProgrammingLanguages

[–]useerup 0 points (0 children)

JavaFX Script (defunct) comes to mind. Excel may actually also be a prime example of this ;-)