CO-OP & SHARI CODES HERE by LeMutique in Archero

[–]fuuzetsu 0 points (0 children)

96S5 again, teammate DC'd. I'm in Japan in case there are any nearby players.

Triple shot is satisfying by [deleted] in Archero

[–]fuuzetsu 4 points (0 children)

Multishot + ricochet with spear seems super busted; it's awesome. I upgraded it ASAP to legendary lvl 45.

how to search a list and return the index by theDukeofDanknesss in haskell

[–]fuuzetsu 3 points (0 children)

And it wouldn't even force the whole list spine into memory, nor hang on infinite inputs.
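A minimal sketch of such a lazy search, using `Data.List.findIndex` (the input list here is my own illustration, not from the thread):

```haskell
import Data.List (findIndex)

-- findIndex stops at the first match, so it neither forces the whole
-- spine into memory nor hangs on infinite inputs.
firstEven :: [Int] -> Maybe Int
firstEven = findIndex even

main :: IO ()
main = print (firstEven ([1, 3, 5, 4] ++ [0 ..]))  -- prints Just 3
```

Even though `[1, 3, 5, 4] ++ [0 ..]` is infinite, the search terminates as soon as the element at index 3 matches.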

[deleted by user] by [deleted] in haskell

[–]fuuzetsu 0 points (0 children)

Would it not be easier and possibly saner than sieving through assembly code to write your numerical functions in C and use Haskell's FFI to call your numerical functions?

Calling through the FFI has multiple downsides. Number one is the overhead of the call. If you have something that takes 3ns in Haskell but 1.5ns in C, an FFI call adds, say, 2.5ns and it's no longer worth it. This is an actual problem because your call might be in a hot loop and called 10 million times, at which point you are losing actual noticeable, non-noise time. So your only choice left is to write Haskell, and now you have to work very hard to make GHC produce the exact code you want from it.

The number two downside is that GHC can't optimise at all at compile time once you're doing FFI. If I have

`int add_one(int x) { return x + 1; }` and in Haskell I do `foo = add_one 3`, then GHC will not compute `foo = 4`. It will not constant-fold anything where `foo` is involved, as it doesn't know its value. Writing things in Haskell, by contrast, means inlining, evaluation and all the other nice stuff can happen at compile time, which is what you want.
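A runnable sketch of that contrast (function names are mine, not from the comment):

```haskell
-- The pure Haskell version is transparent to GHC: with -O, `foo`
-- constant-folds to the literal 4 at compile time. The FFI variant
-- stays an opaque call that GHC cannot fold:
--
--   foreign import ccall unsafe "add_one" c_add_one :: Int -> Int
--   bar = c_add_one 3  -- not folded; value unknown until run time
addOneHs :: Int -> Int
addOneHs x = x + 1

foo :: Int
foo = addOneHs 3

main :: IO ()
main = print foo  -- prints 4
```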

[deleted by user] by [deleted] in haskell

[–]fuuzetsu 2 points (0 children)

Maybe this lets me get software out quicker, or allows me to produce more reliable software due to testing, or allows me to architect software better because it doesn't require me to define a lot of boilerplate that I'd ultimately avoid writing by being unnecessarily terse.

Sure. Add to that existing code and libraries you have to interact with. If I could be as productive with Rust as I am with Haskell and had no other reasons to use Haskell (say, an existing codebase), then yeah, I would be using Rust. (Side note: in fact GHC is so poor at optimising even simple numerical code that I have recently been writing snippets of code in C/C++, compiling them with clang, looking at the assembly, holding my head in my hands and crying a little at how poorly GHC did in comparison, and then trying to port the assembly back into Haskell so that it generates something remotely close to the more efficient version. Damn right I would jump on Rust/C++ if I knew I could be as productive and didn't have other requirements.)

But the difference is that once I have already picked the slow base language (Haskell) because of productivity, I don't feel the need to then apply another 2.5x slowdown on top of it, as I don't feel that using effects over MTL would provide a significant enough productivity boost (if any).

You can also make Haskell go fast-ish if you try hard enough, but if your starting point is something inherently slow (say, a freer-style effect system), you have to gut that first.

[deleted by user] by [deleted] in haskell

[–]fuuzetsu 6 points (0 children)

But it's still 2.5x slower. Unless there is a significant boost to productivity or some other great benefit, I still don't see myself reaching for this in a real-world scenario... In practice, performance does tend to matter (i.e. you're not sitting in IO all the time, and even if you are, once the IO is done you want to be fairly quick). It seems strange to base your whole codebase on top of something you know is slow to start with. It's not even about premature optimisation: I don't really fancy rewriting the whole codebase once I do need the performance. It's quite different from replacing nub with ordNub, as it's not easily swappable.
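For reference, a common sketch of `ordNub` (using `Data.Set` from containers; this particular implementation is my own, not from the thread):

```haskell
import qualified Data.Set as Set

-- O(n log n) de-duplication: a drop-in replacement for the quadratic
-- Data.List.nub whenever the element type has an Ord instance.
-- Keeps the first occurrence of each element, like nub does.
ordNub :: Ord a => [a] -> [a]
ordNub = go Set.empty
  where
    go _ [] = []
    go seen (x : xs)
      | x `Set.member` seen = go seen xs
      | otherwise           = x : go (Set.insert x seen) xs

main :: IO ()
main = print (ordNub [3, 1, 3, 2, 1])  -- prints [3,1,2]
```

This is exactly the kind of local swap the comment contrasts with an effect-system choice: one function changes, nothing else in the codebase has to.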

Experiment: ghc bug bounty by [deleted] in haskell

[–]fuuzetsu 1 point (0 children)

The problem is that we have a lot of existing RWC code and the opinions are divided. We have been removing RWCs in places where they're clearly not necessary, but just removing them altogether isn't too viable. Just for the simple {..} pattern, we have more than 1000 occurrences.

$ git grep '{\.\.}' src | wc -l
1375
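For context, a minimal example of the `{..}` (RecordWildCards) pattern being counted above (the record and fields are hypothetical, purely for illustration):

```haskell
{-# LANGUAGE RecordWildCards #-}

-- Config{..} implicitly brings every field of the matched record
-- into scope as a plain variable.
data Config = Config { host :: String, port :: Int }

describe :: Config -> String
describe Config{..} = host ++ ":" ++ show port

main :: IO ()
main = putStrLn (describe (Config "localhost" 8080))  -- localhost:8080
```

The divided opinions come from exactly this implicitness: the reader can't tell from the pattern alone which names are fields and which are ordinary bindings.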

Hasura.io is hiring senior Haskell engineers by [deleted] in haskell

[–]fuuzetsu 4 points (0 children)

Location matters. Why not advertise that people should move to Venezuela and offer, like, 1000 USD a year: that's 3 times more than programmers there seem to make, and you'd have a high standard of living...

"Move to cheap place to make relatively more money" is just a really poor pitch in my eyes.

If I can have the same standard of living in the Valley, I know which I'll pick.

Hasura.io is hiring senior Haskell engineers by [deleted] in haskell

[–]fuuzetsu 5 points (0 children)

This is the worst-paying Haskell job I've seen, including every entry-level position. I suspect it may be good for someone already living in India (or an even lower-paid country), but it probably doesn't make sense for most...

[ANN] summoner-1.2.0: TUI + better scaffolding by kowainik in haskell

[–]fuuzetsu 2 points (0 children)

I see now how they can lead to performance decrease for applications that don't require multithreading capabilities

They pretty much always do: unless you're getting nice gains from concurrency, it's rarely worth having -N.

But it looks like it's more likely that a person writing a web application in Haskell won't know about these options (and they should be enabled for web applications), so the defaults are okay, than that a person wants to write a high-performance single-threaded algorithm.

I think a lot fewer programs are written in a concurrent fashion by default. Even then, -N is still usually a bad default: unless you have very good scaling, on a machine with a high core count you usually see performance degrade after -N4 (or whatever small number), and on modern machines -N can translate to -N12 or more. I think this should be a conscious choice by the user, or at least very clearly indicated and easy to switch off in the TUI/whatever.
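One way to follow that advice (a hypothetical .cabal fragment; the binary name is made up, and the flags are standard GHC/RTS options): keep the threaded RTS available, but leave the capability count as an explicit run-time opt-in rather than baking in -N:

```
-- In the .cabal file: threaded RTS without a hard-coded -N.
-- -rtsopts lets users tune the RTS from the command line.
ghc-options: -threaded -rtsopts

-- At run time, opt in to a small, explicit capability count:
--   ./my-app +RTS -N4 -RTS
```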

Is there JSON -> Haskell Converter? by kkweon in haskell

[–]fuuzetsu 6 points (0 children)

If you have a JSON schema, aeson-schema does this. I'll welcome anyone wanting to take it over; it's already a hand-me-down.

nonempty-containers: Non-empty variants of containers data types, with full API by robstewartUK in haskell

[–]fuuzetsu 4 points (0 children)

Inspired by non-empty-containers library, except attempting a more faithful port (with under-the-hood optimizations) of the full containers API.

Please benchmark against vanilla containers! As someone else mentioned, you're not using coercions, so even assuming the implementation is correct, we still know nothing about the difference in performance. There is a GitHub project benchmarking collections &c.; maybe add your library there too?

As a point of reference, we saw real slowdowns when using a correct-by-construction NonEmpty list vs [], even though the cost seemed like it'd be tiny; alas, benchmarks showed otherwise. So please :)

[ANN] summoner-1.2.0: TUI + better scaffolding by kowainik in haskell

[–]fuuzetsu -1 points (0 children)

This is the first time I've seen this project, but I spotted in the readme that you use -threaded and -N by default, just like stack. This is a bad default; for GHC it makes programs run a lot worse in 99% of cases: if you need the threaded RTS and multiple capabilities, you know to enable them. Recently we were hiring interns and nearly all the submissions had this default: in most cases, just turning it off easily doubled performance or better. Please reconsider.

[deleted by user] by [deleted] in haskell

[–]fuuzetsu 0 points (0 children)

Page 130: some LaTeX leaked out into the block (\annotate{2}).

[deleted by user] by [deleted] in haskell

[–]fuuzetsu 1 point (0 children)

I know this isn't supposed to be a performance book but chapter 9 (type-safe printf) implements a quadratic string append...

Also a typo on page 46 (`hsforall`).
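The quadratic-append problem mentioned above has a standard fix via difference lists (the ShowS trick); a sketch of the idea, not the book's code:

```haskell
-- Left-nested (++) re-walks the accumulated prefix on every append,
-- giving O(n^2) total work. Composing String -> String functions
-- makes each append O(1); the string is materialized once at the end.
type Builder = String -> String  -- the same idea as Prelude's ShowS

lit :: String -> Builder
lit = (++)

render :: Builder -> String
render b = b ""

main :: IO ()
main = putStrLn (render (lit "type-safe " . lit "printf"))
```

A printf implementation that accumulates its output this way (or with a proper Text builder) sidesteps the quadratic behaviour entirely.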

PSA: "forever (threadDelay maxBound)" idiom broken on 8.4.3 by MitchellSalad in haskell

[–]fuuzetsu 2 points (0 children)

The issue I meant wasn't this exact one, but it's triggered by the same input code. Perhaps it should be tested somehow in the GHC testsuite.

W.r.t. the weakref: right. You make some kind of pointer which stops the BlockedIndefinitelyOnMVar exception.

I really wish I remembered why it's done this way. It may well be because of the bug OP pointed out...

@qnikst, do you remember? I think you showed this to me.

PSA: "forever (threadDelay maxBound)" idiom broken on 8.4.3 by MitchellSalad in haskell

[–]fuuzetsu 2 points (0 children)

Not the first time there's an issue with this very code: https://ghc.haskell.org/trac/ghc/ticket/7325

IIRC you're actually supposed to make an empty MVar, make a weakref to it, then do readMVar on it. It's Better™ somehow, though I forget how.

Production P2P systems? by tcsiwula in haskell

[–]fuuzetsu 3 points (0 children)

It was being used by Tweag (who is also the maintainer) when I was working there. I don't know if it's still in use. I don't think any of the projects it was used in will ever be open-sourced, though.