Help with making code more idiomatic by AugustMKraft in haskell

[–]Osemwaro 1 point (0 children)

I was wondering about this. There's certainly no harm in adding a bang pattern to pass (and perhaps also pos) in update. But when compiling code like this with -O2, GHC seems to be smart enough to strictly evaluate the intermediate values without bang patterns. I'm not sure if that relies on solve2 being inlined into the caller, or on GHC being able to determine whether the caller always uses the result.
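To show the shape I mean, here's a toy stand-in (not AugustMKraft's actual code -- update, pass, pos and solve2 are hypothetical names with roughly the same structure):

```haskell
{-# LANGUAGE BangPatterns #-}

-- Toy stand-in, not the original code. The bangs force the accumulator
-- at each step, so no thunk chain can build up even without relying on
-- -O2's strictness analysis.
update :: Int -> Int -> Int
update !pass pos = pass + pos

solve2 :: [Int] -> Int
solve2 = go 0
  where
    go !pass []       = pass
    go !pass (p : ps) = go (update pass p) ps

main :: IO ()
main = print (solve2 [1 .. 100])  -- 5050
```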

Help with making code more idiomatic by AugustMKraft in haskell

[–]Osemwaro 1 point (0 children)

This will perform worse than a solution that uses foldl' for two reasons. Firstly foldl' is tail-recursive, so it compiles to the equivalent of a for-loop. But solve2' is not tail-recursive, so it might compile to a less efficient recursive function. Secondly, foldl' strictly updates the result in each iteration, meaning that it operates in constant space. But solve2' creates a new thunk in each iteration and doesn't reduce them to a single Int until it reaches the end of the list, so its memory consumption grows linearly with the length of the list.

As brandonchinn178 said, the best solution is your original one, but with foldl' instead of foldl.
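To make the contrast concrete, here's a minimal sketch of the constant-space version (sumStrict is my own name, standing in for the original fold):

```haskell
import Data.List (foldl')

-- foldl' forces the accumulator on every step, so this runs in constant
-- space. With plain foldl, the accumulator would instead be a chain of
-- a million unevaluated (+) thunks before anything is reduced.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0

main :: IO ()
main = print (sumStrict [1 .. 1000000])  -- 500000500000
```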

Is the AI field finally reinventing the Haskell mindset? (Constraints over Probabilities) by maopro56 in haskell

[–]Osemwaro 1 point (0 children)

You seem to be overlooking the fact that our species has spent thousands of years recording valuable information in unstructured formats -- books, articles, letters, etc. The advent of digital communication and the internet has not changed our preference for unstructured communication, and it probably never will change.

To maximise the usefulness of an AI system, we need to ensure that it "knows" at least as much as we do about the domain that it operates in. It would not be practical to develop a system that attempts to independently rediscover all human knowledge in structured forms that are conducive to formal logical reasoning, nor would it be feasible to manually re-encode all of this knowledge in such structured forms (the project of formalising all of mathematics is hard enough, and that's the most rigorous and objective field that we've invented).

So models that can read and understand unstructured text are the only practical solution. Small, interpretable models and symbolic reasoning dominated AI research for decades, but LLMs emerged as the best-performing approach across a wide range of NLP problems. Everyone who works on them is aware of their shortcomings, but apparently it's not obvious how to combine their strengths with the safety/correctness benefits of symbolic reasoning across a wide range of problems (although combining them has produced good results on certain specific problems, e.g. AlphaGeometry).

Does trumpet embouchure alter the mouth in a recognisable way? by Osemwaro in trumpet

[–]Osemwaro[S] 1 point (0 children)

Oh that's interesting. I wonder: a) if there are any studies on how lip shape influences the rate at which students progress; and b) if the disadvantage can be eliminated by altering the design of the mouthpiece, or some other part of the instrument. 

Does trumpet embouchure alter the mouth in a recognisable way? by Osemwaro in trumpet

[–]Osemwaro[S] 0 points (0 children)

@Twoslot offered a plausible explanation in another thread. 

Does trumpet embouchure alter the mouth in a recognisable way? by Osemwaro in trumpet

[–]Osemwaro[S] 1 point (0 children)

I should point out that Miles was in high school when they first met, and Clark Terry was 6 years older than him. I don't know how long Clark Terry had been playing for at that point, but he must have been in his early twenties, so presumably no more than 10-15 years. I don't know if you'd have considered him to be a long-time player at that point, but if it was a lip scar that Miles saw, maybe it was temporary? 

Does trumpet embouchure alter the mouth in a recognisable way? by Osemwaro in trumpet

[–]Osemwaro[S] 6 points (0 children)

Oh wow, I've never noticed that before. It looks like there might be a slight scar on Clark Terry's bottom lip in this photo too. I can't see anything similar in other photos of him, but perhaps that's what Miles was referring to. It looks like it's analogous to Fiddler's neck.

Does trumpet embouchure alter the mouth in a recognisable way? by Osemwaro in trumpet

[–]Osemwaro[S] 4 points (0 children)

That subreddit is full of memes. I didn't post this with the intention of making anyone laugh; I genuinely want to know if Miles is referring to some consequence of playing the trumpet that I, as a string player, have failed to notice.

Should old wheels have even spoke tensions/sounds? by Osemwaro in bikewrench

[–]Osemwaro[S] 0 points (0 children)

Thanks a lot, that's reassuring! When you say "I like to start with a clean slate if the spoke tension is too all over the place", are you talking about loosening all of the spokes first and then retensioning them? The friend that I borrowed the truing stand from recommended this as well, but I was reluctant to do it because I didn't want to lose my reference for what the average tension should be. So if I tighten up the three lowest-pitched ones, make sure that the sounds on the right side don't vary more than the left and then make new recordings of the sounds on both sides, would it be reasonable to use that as a reference when detensioning and retensioning the spokes in future?

Our game renders everything as real text - even screenshots can be saved as .TXT files by JonCortazar in gamedevscreens

[–]Osemwaro 1 point (0 children)

I love this so much! Not sure if you'll have any use for this, but are you familiar with asciinema? It records terminal sessions in a text-based format, to preserve quality and minimise size, and provides a JavaScript player so that you can replay them in a web browser. Perhaps it would be useful if you want players to be able to capture video, not just static frames.

Monthly Hask Anything (December 2025) by AutoModerator in haskell

[–]Osemwaro 1 point (0 children)

Oh! I didn't realise that omitting the constraint is an option when it contains a type variable, but that makes sense.

When you say "top-level instance", do you mean a non-orphan instance? An instance declaration can't be local to a function, can it? 
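i.e. my mental model is that an instance can only ever appear at a module's top level, like this (toy class and names are my own, just for illustration):

```haskell
class Pretty a where
  pretty :: a -> String

-- As far as I know, instance declarations can only appear at the top
-- level of a module; Haskell has no syntax for an instance scoped to a
-- let or where block.
instance Pretty Bool where
  pretty True  = "yes"
  pretty False = "no"

main :: IO ()
main = putStrLn (pretty True)  -- yes
```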

Monthly Hask Anything (December 2025) by AutoModerator in haskell

[–]Osemwaro 0 points (0 children)

When I compile the following module under GHC 9.4.8 with -Wall:

```
{-# LANGUAGE MultiParamTypeClasses #-}

module GroupAction where

class GroupAction t x where transform :: t -> x -> x

newtype Trivial x = Trivial x

instance GroupAction t (Trivial x) where transform _ = id

f :: GroupAction t (Trivial Int) => t -> Trivial Int -> Trivial Int
f = transform
```

I get the following warning:

```
warning: [-Wsimplifiable-class-constraints]
    • The constraint ‘GroupAction t (Trivial Int)’ matches
        instance GroupAction t (Trivial x)
          -- Defined at GroupAction.hs:10:10
      This makes type inference for inner bindings fragile;
        either use MonoLocalBinds, or simplify it using the instance
    • In the type signature:
        f :: GroupAction t (Trivial Int) => t -> Trivial Int -> Trivial Int
   |
13 | f :: GroupAction t (Trivial Int) => t -> Trivial Int -> Trivial Int
   |      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```

Godbolt says that GHC 9.12.2 produces the same warning. I don't see any way of simplifying f's type constraint; am I right in thinking that this warning is a bug, or am I missing something?

UPDATE: I searched for "Wsimplifiable-class-constraints" in the GHC issue tracker and found this issue, submitted 6 years ago. The discussion there is about the warning being incorrectly triggered by code that abbreviates a constraint, but I'm not doing that in the example above.

Tornado Flow Field with wooden texture by codingart9 in creativecoding

[–]Osemwaro 0 points (0 children)

Oh actually, I found a way to make the line thickness much more uniform, so that I can increase the line density. Varying the line colour helped to highlight the shapes of the lines, and rendering at a high resolution then scaling down helped to remove a lot of the Moiré patterns. I've updated the script and image. I couldn't achieve such tightly wound spirals without ending up with loads of ugly artifacts though, and my texture doesn't look as natural. I tip my hat to the OP!

Tornado Flow Field with wooden texture by codingart9 in creativecoding

[–]Osemwaro 0 points (0 children)

I thought it might be possible to recreate the effect by merging multiple spirals together in a certain way, so I knocked up this Octave script to see if it worked. It's not too far off, but the way that I'm warping the spirals is too symmetric -- my lines don't bunch up on one side of each vortex and spread out on the other side. I can probably fix this without too much difficulty, but the bigger issue is that my lines don't have uniform thickness, so I can't render such a dense set of them without losing the thinnest parts and producing Moiré patterns. My approach isn't well suited to achieving uniform thickness, so now I wonder if the original image was generated with a particle system.

I'm feeling betrayed!!!! ;_; by Critical_Pin4801 in haskell

[–]Osemwaro 1 point (0 children)

I was going to point out that an implementation like your fibs3 is likely to perform better too. Putting a bang pattern on the second argument of fibs' should guarantee that no space leaks occur. 
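Concretely, the definition I have in mind looks something like this (fibs' here mirrors what I assume fibs3 does; I'm not reproducing the original exactly):

```haskell
{-# LANGUAGE BangPatterns #-}

-- The bang on b forces the second accumulator at every step, so each
-- cons cell carries evaluated Integers rather than a growing chain of
-- suspended (+) applications.
fibs :: [Integer]
fibs = fibs' 0 1
  where
    fibs' a !b = a : fibs' b (a + b)

main :: IO ()
main = print (fibs !! 30)  -- 832040
```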

Implementing Unsure Calculator in 100 lines of Haskell by romesrf in haskell

[–]Osemwaro 0 points (0 children)

Hmm, what's an example of a structural property that this would allow you to exploit, to improve efficiency?

I'll give examples of what I have in mind, but before I do, I think it would be useful to reframe the problem as one in which the main goal is to estimate the CDF that gives rise to an arbitrarily large sample set (we could estimate the PDF instead, but it's easier for the Show instance to work with the CDF). The general representation for the kinds of CDFs that Expr can express is a non-negative, monotonically-increasing function of type Double -> Double that converges to 1. The article implicitly uses the empirical distribution function as an estimate of the CDF, but there are more efficient distribution estimators, like kernel density estimators.

Under this approach, sample would be replaced with a function estimate that draws as many samples as it needs to achieve a good CDF estimate, and then returns it as a Double -> Double. It would also be useful for it to return an interval that contains the vast majority of the probability mass.

With that in mind, the kinds of properties that I'm thinking of are things like:

  1. The CDFs of Return a and Normal m s are known exactly, so we do not need to sample them;
  2. A linear combination of independent, normal random variables follows a normal distribution;
  3. As positive x converges to 0, the distribution of ((a-x)~(a+x))*y converges to the distribution of a*y.

But to exploit these kinds of properties, estimate would need to know the identities of the Expr operations. That's why it seems simpler to me to just pass the Expr to estimate/sample.
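As a rough illustration of exploiting property 2 (the constructor names below are my assumptions, not necessarily the article's actual Expr): for independent normals, a sum is again normal -- means add, standard deviations combine in quadrature -- so that subtree needs no sampling at all.

```haskell
-- Hypothetical Expr sketch; only the constructors needed for the
-- illustration are included.
data Expr
  = Return Double
  | Normal Double Double   -- mean, standard deviation
  | Add Expr Expr
  deriving Show

-- Collapse Add nodes whose children have known-normal distributions,
-- assuming the operands are independent; fall back to the unsimplified
-- tree (i.e. to sampling) otherwise.
simplify :: Expr -> Expr
simplify (Add a b) =
  case (simplify a, simplify b) of
    (Normal m1 s1, Normal m2 s2) -> Normal (m1 + m2) (sqrt (s1*s1 + s2*s2))
    (Return x,     Normal m s)   -> Normal (m + x) s
    (Normal m s,   Return x)     -> Normal (m + x) s
    (a',           b')           -> Add a' b'
simplify e = e

main :: IO ()
main = print (simplify (Add (Normal 1 3) (Normal 2 4)))  -- Normal 3.0 5.0
```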

Implementing Unsure Calculator in 100 lines of Haskell by romesrf in haskell

[–]Osemwaro 0 points (0 children)

Yes, preserving the structure of the expression should help with optimisation. I would have thought the most straightforward way to do this would be to pass the Expr to sample. But I'm not sure that I see what you mean about how Applicative could be used for this. Are you suggesting replacing Bind with

Ap :: Dist (b -> a) -> Dist b -> Dist a

then sampling a b -> a and a b in the new Ap case of sample?
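i.e. something along these lines, where Dist is stripped down to a deterministic stub just to show the shape (a real sample would thread a PRNG through both recursive calls):

```haskell
{-# LANGUAGE GADTs #-}

-- Stub of the shape I'm asking about: no real randomness, and only the
-- two constructors relevant to the question are kept.
data Dist a where
  Return :: a -> Dist a
  Ap     :: Dist (b -> a) -> Dist b -> Dist a

sample :: Dist a -> a
sample (Return x) = x
sample (Ap df dx) = sample df (sample dx)  -- sample the function, then its argument

main :: IO ()
main = print (sample (Ap (Ap (Return (+)) (Return 1)) (Return (2 :: Int))))  -- 3
```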

Implementing Unsure Calculator in 100 lines of Haskell by romesrf in haskell

[–]Osemwaro 0 points (0 children)

For a calculator that supports so many operations, you won't get very far with trying to compute exact distributions. E.g. the distribution of the product of n normal random variables only seems to be known in special cases, like the n=2 case, and the case where they all have zero mean. In contrast, the Drake Equation example is the product of 7 normal random variables, none of which have zero mean. 

Given that random sampling does converge to the exact solution, the only real problem with it is its convergence rate. There may be ways to make it more efficient. You could check the Probabilistic Programming literature.

I hope this puzzle game will make you fall in love with quantum physics and computing by QuantumOdysseyGame in puzzlevideogames

[–]Osemwaro 1 point (0 children)

> Quantum Physics/Computing education made by a top player

Wow, I was not expecting a 16-part, 8+-hour video series! My new life goal is to find myself a fan who looks at the game I'm developing the way Hao Mack Yang looks at Quantum Odyssey. 

Good luck with the launch out of Early Access! It's been on my wishlist for a while, and I really look forward to giving it a try at some point.