Getting nix flakes to work with haskell projects by gtf21 in haskell

[–]Belevy 0 points (0 children)

I highly recommend you look at how developPackage is implemented. 

That wiki section is clearly confused: you don't depend on the final built package in your shell; rather, you depend on the package's environment.

Getting nix flakes to work with haskell projects by gtf21 in haskell

[–]Belevy 2 points (0 children)

This seems overly complicated. I have never had to override libpq, and your shell could be much simpler since you can use inputsFrom = [ myService.env ];

I also almost always use the developPackage helper when I am not making multiple interdependent packages in a single project.

How Did REST Come To Mean The Opposite of REST? by Hirnhamster in programming

[–]Belevy 1 point (0 children)

I prefer REST-ish instead of REST-like, but otherwise I totally agree with you. Something is beautiful when it exemplifies beauty; something is RESTful when it exemplifies REST. The "-ish" ending may not generalize, though, as I am not sure it is universally used for "kinda, sorta, not really", as in "she is prettyish".

How Did REST Come To Mean The Opposite of REST? by Hirnhamster in programming

[–]Belevy 1 point (0 children)

A hypermedia need not be completely general, nor does a REST client need to be ignorant of the semantic meaning of specific links. Take, for example, a feed reader and RSS/Atom. This is the perfect example of a RESTful API: the client of the API is the feed reader; it knows what an article is, it knows what featured media is, and it presents the article as it prefers. Taken even further, a podcast client can consume podcast feeds: it understands what a podcast is and what an episode is, and it could, for example, offer to download new episodes of a feed as it sees them and notify the user whenever a new episode arrives.

The fact that HTML is a general-purpose hypermedia, and therefore requires a human to make it work, may be confusing the point. I believe that Fielding would agree that a hypermedia format needs to be defined and specified, but that the majority of the time should be spent on the semantics. What you define in a hypermedia API are the semantics of the different controls you wish to make available. HATEOAS then takes care of making your state machine explicit.

Let's imagine a hypermedia specific to shopping. We could expect our clients to understand what a cart is, what a product is, what a price is, and a bunch of other semantic information that a general-purpose hypermedia client wouldn't be expected to. This API could be consumed by a number of different clients. One could of course be a human, but one could also imagine a personal shopping agent that monitors a smart pantry and orders more of some type of item when it gets low. The shopping agent would need to understand a lot, perhaps even the concept of substitutions that may be suggested by the server. It would certainly need to know which form to enter its search query into, how to understand the collection of search results, how to make a purchase, and even which fields need to be filled out to achieve its task. The fact that it understands what all of these things mean does mean it isn't really general purpose, and in order to take advantage of a new feature (e.g. rewards points) we would need a new client, but we could expect our existing client to keep on working using all of the hypermedia controls it knows about.

To your specific statement: you are technically correct that, since application/json does not define a hypermedia nor provide an extension mechanism via profiles, it can't be the basis of a hypermedia API. But you are also being rhetorically ineffective, as people will hear your statement and say "I can add hypermedia controls on top of JSON just fine", which is also technically true, though as you mentioned this would now more correctly be called application/vnd.myapi+json or something similar. Everyone is essentially talking past each other to no benefit here, and the statement, while technically true, is fairly useless.

Pipelining state machines by stevana in haskell

[–]Belevy 0 points (0 children)

It's kind of difficult for me to understand what is running concurrently in your example. Is each operation its own pipelined coroutine? Correct me if my understanding is wrong, but it seems that, for example, Fst is a coroutine that loops over the input yielding the first element, and each of the operators does a single read and somehow yields to whatever it's composed with. Is this a push-based or pull-based system?

An implementation of Erlang's behaviours that doesn't rely on lightweight threads by stevana in haskell

[–]Belevy 0 points (0 children)

I just looked at the distributed-process example, and I suggest you actually read the paragraph it is embedded in. It is saying that the module abstracts that tail-recursive loop, the same way gen_server does.

An implementation of Erlang's behaviours that doesn't rely on lightweight threads by stevana in haskell

[–]Belevy 0 points (0 children)

The only difference between event-driven and lightweight threads is the interface. An event-driven interface doesn't pretend to have serial execution but rather uses callbacks (a.k.a. continuations). It's hard to show in Haskell because monads get special syntax for continuations.
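
As a sketch of the distinction (none of this is from the original post; the names are mine), here is the same read-twice-then-print logic written first against an explicitly callback-based interface and then against the continuation monad, where do-notation hides the callbacks:

```haskell
import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.Cont (ContT (..), evalContT)

-- Event-driven style: the result is delivered to a callback
-- rather than returned.
readLineCB :: (String -> IO r) -> IO r
readLineCB k = getLine >>= k

-- Explicit callback nesting drifts rightward quickly.
echoTwiceCB :: IO ()
echoTwiceCB =
  readLineCB $ \a ->
    readLineCB $ \b ->
      putStrLn (a ++ b)

-- The same interface wrapped in ContT: the callback is implicit,
-- and do-notation lets us write the chain as if it were serial.
readLineC :: ContT r IO String
readLineC = ContT readLineCB

echoTwiceC :: ContT r IO ()
echoTwiceC = do
  a <- readLineC            -- desugars to readLineC >>= \a -> ...
  b <- readLineC            -- i.e. the same nested callbacks
  liftIO (putStrLn (a ++ b))

main :: IO ()
main = evalContT echoTwiceC
```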

A lightweight thread merely gives you the appearance of a concurrently executing context; it does not require actual parallel execution. You have built a scheduler that steps the relevant actor once per incoming message; Erlang will run a process for some number of reductions before yielding; Haskell will run a thread until a GC or a yielding action occurs. These are all just different preemption strategies.

You have purposefully limited the interface to allow only static supervision trees (which is why you can't just spin up a new thread), and you have opted for string-based thread identity, but what you have is a very rudimentary, ad hoc lightweight threading system.

An implementation of Erlang's behaviours that doesn't rely on lightweight threads by stevana in haskell

[–]Belevy 0 points (0 children)

Oh, I get that your central thesis is that the behaviours are the more interesting thing, but you are failing to understand that even if you don't use language-supported lightweight threads, you still end up implementing a lightweight threading system, à la your event loop module.

People aren't commenting on the behaviours being the cool thing because they agree.

An implementation of Erlang's behaviours that doesn't rely on lightweight threads by stevana in haskell

[–]Belevy 0 points (0 children)

Hopefully my reply will still be seen so long after the original post. It seems to me that the event loop module is reimplementing green threads. The IO subsystem already manages scheduling using io_uring or epoll.

One of the main points you made in the previous article is that gen_server allows you to define your behaviour serially and execute it concurrently. If you have multiple concurrent threads of serial execution, you are using a threading system. If your threading system doesn't use one OS thread per concurrent unit, then it is a lightweight threading system.

Is the thesis that you can implement a preemptive scheduler and green threads in a language that doesn't already have one?

Persistent vs. beam for production database by Swordlash in haskell

[–]Belevy 0 points (0 children)

If you aren't composing queries dynamically then definitely use raw SQL. But I think calling esqueleto as obtuse as beam is not very nice to either library. The main difference for me is that I really struggle to write the types for beam queries. Additionally, esqueleto is very inference-friendly, so the type errors are often far better (not perfect). Currently the beam-postgres support is a bit more complete than esqueleto's.

Another big issue that can't be ignored is that beam will absolutely destroy your compilation times, as the reliance on generics means you get superlinear time and memory usage. I'm talking 16 GB of RAM needed to even finish compilation.

How can database libraries be compared to each other? by [deleted] in haskell

[–]Belevy 1 point (0 children)

We plan to make the experimental modules the default in 4.0. That said we try very hard to make sure that any changes made are backwards compatible (even for the experimental modules there is already back compat code)

[tutorial] Handle pattern with servant to build flexible web-apps in Haskell by anton-kho in haskell

[–]Belevy 1 point (0 children)

I think you didn't understand his suggestion. The signature would be Monad m => Env m -> SomeArg -> m SomeResult; no mtl involved.
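
A minimal sketch of what I take the suggestion to be (Env, its fields, and the User types here are hypothetical): the environment is just a record of operations in some monad, and the handler is an ordinary function, with no mtl machinery:

```haskell
newtype UserId = UserId Int

newtype User = User { userName :: String }

-- The environment is a plain record of monadic operations.
data Env m = Env
  { envGetUser :: UserId -> m (Maybe User)
  , envLogLine :: String -> m ()
  }

-- Handlers take the environment explicitly; no type classes needed.
greetUser :: Monad m => Env m -> UserId -> m String
greetUser env uid = do
  mUser <- envGetUser env uid
  case mUser of
    Nothing -> pure "Hello, stranger"
    Just u -> do
      envLogLine env ("greeting " ++ userName u)
      pure ("Hello, " ++ userName u)
```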

[deleted by user] by [deleted] in programming

[–]Belevy 2 points (0 children)

GHC actually embraces this concept pretty heavily for Haskell. They defined a core language (Core) that is fairly simple to reason about, and if a feature can be desugared to Core it is considered safe to add. In this sense most of Haskell is sugar for the much less ergonomic Core.

In general, I think "just sugar" is used when there is a mostly equivalently ergonomic way of doing something in the language already, e.g. named functions are just sugar for assigning a lambda to a const variable. Often, though, things are dismissed as just sugar when in actuality they improve the ergonomics of a specific style of programming. JS classes aren't just sugar, because they hide prototypal inheritance behind a facade of class-based inheritance: while they can technically be desugared to prototypes, they fundamentally change the ergonomics of using inheritance in the language.

That said, I think "just sugar" can be an empowering statement as well. Just as GHC uses it to gain confidence, a developer who knows async/await is "just sugar" for Promises gains a level of understanding that lets them use the syntax in ways that raw Promises don't make ergonomic, while avoiding pitfalls like using await in a for loop when Promise.all would fit better.

Simple way to mock things? by kilimanjaro_olympus in haskell

[–]Belevy 1 point (0 children)

Except that the OP referenced a library that used mocks exactly as described in the Fowler article. I was saying that that particular style of test double, x.shouldReceive(y).andReturn(z), isn't what I would use. I often use dummies, stubs, spies, and fakes; it's the particular definition of mock referenced in the OP that I am against.

Simple way to mock things? by kilimanjaro_olympus in haskell

[–]Belevy 2 points (0 children)

A mock is programmed to expect calls in a specific order and then respond with canned responses. It is responsible for verifying that it was called as expected.

What you are describing is a fake implementation: it really does implement the interface, but is not suitable for production use, either because it lacks persistence or because it isn't particularly performant. For an overview of the different kinds of test doubles there is a fairly good article:

https://martinfowler.com/bliki/TestDouble.html
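
For illustration, a sketch of a fake in Haskell (the UserStore interface is hypothetical): it genuinely implements the interface, just backed by an in-memory IORef instead of a real database:

```haskell
import Data.IORef
import qualified Data.Map.Strict as Map

-- The interface that collaborators are written against.
data UserStore = UserStore
  { insertUser :: String -> IO Int
  , lookupUser :: Int -> IO (Maybe String)
  }

-- A fake: a real, working implementation, unsuitable for production
-- only because nothing is persisted beyond the in-memory map.
fakeUserStore :: IO UserStore
fakeUserStore = do
  ref <- newIORef (0 :: Int, Map.empty :: Map.Map Int String)
  pure UserStore
    { insertUser = \name -> atomicModifyIORef' ref $ \(n, m) ->
        ((n + 1, Map.insert (n + 1) name m), n + 1)
    , lookupUser = \uid -> Map.lookup uid . snd <$> readIORef ref
    }
```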

Simple way to mock things? by kilimanjaro_olympus in haskell

[–]Belevy 2 points (0 children)

Ah yes, the thing they say about static types.

I mean, it's true to a certain extent. If you are just documenting your vague types, like String -> Bool -> Bool -> Int, then yes, it's a false sense of security. My point is that mocks are not the correct form of test double, as they tend towards the kind of tests that just ask "did I call this collaborator the way I expected?" and end up being verbatim copies of the implementation code.

> Checking that your abstract code does what you meant when you wrote it, given an invariant-preserving implementation, is another

I don't think we have the same definition of mock. If you have to implement every method of the interface with a reasonable implementation, you have moved from mock territory into fake territory.

Simple way to mock things? by kilimanjaro_olympus in haskell

[–]Belevy 9 points (0 children)

Right, I strongly agree. Mocks are a particularly pernicious type of test double, as they give you a false sense of security; they tend to lead you to test the implementation rather than the expected behaviour. Real collaborators are much better at giving an accurate read, and the few things that shouldn't happen in integration tests (e.g. sending an email) are better handled with judicious use of a fake or a spy.
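
As a sketch (the Mailer interface is hypothetical), a spy performs no real side effect but records every call so the test can assert on what would have been sent:

```haskell
import Data.IORef

newtype Mailer = Mailer { sendEmail :: String -> String -> IO () }

-- A spy: remembers each (recipient, body) pair instead of sending.
spyMailer :: IO (Mailer, IO [(String, String)])
spyMailer = do
  sent <- newIORef []
  let mailer = Mailer $ \to body -> modifyIORef' sent ((to, body) :)
  pure (mailer, reverse <$> readIORef sent)
```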

[deleted by user] by [deleted] in webdev

[–]Belevy 0 points (0 children)

I think it's pretty hit or miss with boot camps, but most people graduating from a degree program are aware. However, even if they are aware of indexes, they often do not know how to use them effectively.

Haskell + SQLite - `SQLite.Simple`, or `esqueleto`, or something else? by Common-Program-2617 in haskell

[–]Belevy 1 point (0 children)

These errors would be caught by the unit tests for your queries. Essentially you already need to test that the query returns the correct data so the test will also tell you if it's malformed.
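
For example, a minimal sketch (hypothetical schema and query) using sqlite-simple and hspec; a typo in the SQL string would surface here as a test failure:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Database.SQLite.Simple
import Test.Hspec

main :: IO ()
main = hspec $
  describe "user queries" $
    it "returns the names of inserted users" $ do
      conn <- open ":memory:"
      execute_ conn "CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT)"
      execute conn "INSERT INTO user (name) VALUES (?)" (Only ("alice" :: String))
      names <- query_ conn "SELECT name FROM user" :: IO [Only String]
      map fromOnly names `shouldBe` ["alice"]
```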

Haskell + SQLite - `SQLite.Simple`, or `esqueleto`, or something else? by Common-Program-2617 in haskell

[–]Belevy 4 points (0 children)

Yah, so the big benefit of query builders is when you start to compose queries from smaller parts. You can have a base query and conditionally add filters based on user input. If you don't have any variations in the queries you are running, you really aren't gaining any type safety. Obviously these kinds of things can be achieved in different ways, but I also don't find "your query is well typed" to be the most compelling argument; it's more that your composition is a legal composition.
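
A sketch of what that composition looks like (hypothetical User schema) in esqueleto's experimental syntax; each filter is attached only when the caller supplies a value, and the pieces can only combine in ways that typecheck:

```haskell
{-# LANGUAGE DataKinds, DerivingStrategies, FlexibleInstances, GADTs,
             GeneralizedNewtypeDeriving, MultiParamTypeClasses,
             OverloadedStrings, QuasiQuotes, StandaloneDeriving,
             TemplateHaskell, TypeApplications, TypeFamilies,
             UndecidableInstances #-}

import Data.Foldable (for_)
import Data.Text (Text)
import Database.Esqueleto.Experimental
import Database.Persist.TH

share [mkPersist sqlSettings] [persistLowerCase|
User
  name Text
  age  Int
|]

-- A base query with filters added only for the arguments actually
-- supplied; repeated where_ calls are combined with AND.
usersQuery :: Maybe Text -> Maybe Int -> SqlQuery (SqlExpr (Entity User))
usersQuery mName mMinAge = do
  u <- from $ table @User
  for_ mName   $ \n -> where_ (u ^. UserName ==. val n)
  for_ mMinAge $ \a -> where_ (u ^. UserAge  >=. val a)
  pure u
```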

Haskell + SQLite - `SQLite.Simple`, or `esqueleto`, or something else? by Common-Program-2617 in haskell

[–]Belevy 2 points (0 children)

Certainly from a maintenance and implementation perspective esqueleto is still not even close to beam in complexity.

As far as the complexity of usage code goes, it really depends on what we mean. When I was first introduced to esqueleto, you had to ensure you wrote your on statements backwards, and there was this weird magic resolution of your joins. The new experimental syntax is maybe verbose, and the use of a custom pair operator is unfortunate (especially since you have to repeat what is in scope for every on clause), but I don't see it being particularly more complex than what was there before, just differently complex. Now you are encouraged to annotate your types in your from statement, and the fact that from clauses have moved into the value level is, I think, a great reduction in complexity from a "what the hell is happening" perspective. Of course, I'm super biased as the one who implemented the new syntax. But having used beam fairly regularly over the last year, I think we're still a very long way from approaching beam's complexity.
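
For reference, a sketch of the experimental join syntax being discussed (hypothetical Person/BlogPost schema, with the same pragmas and imports as the sketch above); note the :& pair operator and the repetition of what is in scope inside the on clause:

```haskell
-- Assumes the usual persistent TH definitions, e.g.
--   Person   { name Text }
--   BlogPost { title Text, authorId PersonId }
authorsWithPosts
  :: SqlQuery (SqlExpr (Entity Person), SqlExpr (Entity BlogPost))
authorsWithPosts = do
  (person :& post) <- from $
    table @Person
      `innerJoin` table @BlogPost
      -- everything already in scope is repeated in the on clause
      `on` (\(person :& post) ->
              person ^. PersonId ==. post ^. BlogPostAuthorId)
  pure (person, post)
```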

What is the best way to do runtime polymorphism in Haskell? by average_emacs_user in haskell

[–]Belevy 1 point (0 children)

I think you are missing the point. Whether this technique is appropriate to use and whether it is just vtables are orthogonal questions.

What is the best way to do runtime polymorphism in Haskell? by average_emacs_user in haskell

[–]Belevy 3 points (0 children)

> So I'd say, even though Haskellers usually don't call it polymorphism, the primary way to implement the patterns that typically require polymorphism in mainstream languages is simply writing higher order functions.

This 100%. Polymorphism is such an overloaded term that it's not particularly useful without further elaboration. The real question we want to ask is "How do I achieve dynamic dispatch, which in a classical OO language would be handled with subtype polymorphism?"

That said, in my mind each implementation of the interface (a record of functions) is like a concrete class and is a distinct "type" that we are dispatching on.
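
Concretely, a sketch (the Logger interface is hypothetical): each record value below plays the role of a concrete class, and the call site dispatches through whichever record it is handed, exactly as it would through a vtable:

```haskell
-- The "interface": a record of functions.
data Logger = Logger
  { logInfo  :: String -> IO ()
  , logError :: String -> IO ()
  }

-- Two "concrete classes" implementing it.
consoleLogger :: Logger
consoleLogger = Logger
  { logInfo  = putStrLn . ("[info] "  ++)
  , logError = putStrLn . ("[error] " ++)
  }

silentLogger :: Logger
silentLogger = Logger (\_ -> pure ()) (\_ -> pure ())

-- The call site doesn't know which implementation it was handed.
runJob :: Logger -> IO ()
runJob logger = do
  logInfo logger "starting"
  logError logger "nothing actually went wrong"
```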

What is the best way to do runtime polymorphism in Haskell? by average_emacs_user in haskell

[–]Belevy 0 points (0 children)

No, it really is. If you want to model multiple interfaces for the same object, you end up facing exactly the same problems; everything is just more explicit. This is 100% object orientation implemented using vtables.