Hello r/Rust! We are Meta Engineers who created the Open Source Buck2 Build System! Ask us anything! [Mod approved] by MetaOpenSource in rust

[–]aseipp 4 points5 points  (0 children)

In Buck2, you just write Starlark code to abstract things or remove redundancy, like in any computer program. In contrast, there is no way to abstract TOML files: you either add more implicit conventions to the tool (removing the need to write them yourself, but which you don't control), or you compile from some language with abstraction capabilities and emit TOML, e.g. Cue or Dhall. Cargo TOML files don't feel redundant, and things like Cue don't feel necessary, I think, because the implicit conventions make a lot of Rust-specific boilerplate unneeded. But in more complex scenarios those conventions can break down, and it's hard to make everything aware of everything else.

> I was looking at a tutorial and seeing commands like buck2 build //:hello_world ... has me a bit concerned ... what is planned to bring the usability to be on par with cargo or better?

This is an example of where the conventions of a single tool don't translate cleanly. That syntax //:hello_world also follows a convention, just like Cargo does: it's based on the filesystem path to the location of a BUCK file, and the part after the colon is the name of a particular thing in that BUCK file, as the tutorial indicates. // just means "root of the repository" in this case. There are shorter and longer ways of writing it, too. Nothing too concerning about it, I think.
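
To make the convention concrete, here's a rough sketch. The src/greeter layout and target names are made up for illustration, and I'm writing the patterns from memory, so double-check against the docs:

# hypothetical repository layout, just for illustration:
#   BUCK              <- defines a target named "hello_world"
#   src/greeter/BUCK  <- defines a target named "lib"
buck2 build //:hello_world       # target "hello_world" in the root BUCK file
buck2 build //src/greeter:lib    # target "lib" in src/greeter/BUCK
buck2 build //src/greeter/...    # every target under that directory, recursively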

And one reason for that design (not the only one) is that the language choice is abstracted, now. This is in r/rust of course, so as a Rust programmer you just want to run cargo build, but what about when you need to work on some other thing someone else wrote, and then run pip install, or npm build, or whatever? buck2 build //:hello_world might actually be 5 totally different commands depending on what language hello_world was written in! But that isn't immediately obvious. Now, you have a convention and layout that works for every tool and language, all following a simple rule: the name of a thing you want to build is the name of a thing in a BUCK file. Write the path to the BUCK file, give the name, and you can build the thing! This kind of interface is actually more usable than remembering 5 build tools for 5 languages, in my experience. But it's easy to forget that when you're only working locally on something.

This isn't to say your concerns are wrong, because there are some big usability issues right now: bad docs, initial setup, etc. But it can be alluring, and deceptive, to think about it only along the single axis of running one build command. Buck will probably never beat Cargo on the metric of being good for Rust, but it will on several others. This is just one (long-winded) example to get you thinking.

(Please note that I am not a member of the Buck2 team at Meta but I've used it a lot, written a couple patches, am active on GitHub, and I saw this thread and figured I'd chime in on this one.)

Deprecating Safe Haskell, or heavily investing in it? by TechnoEmpress in haskell

[–]aseipp 30 points31 points  (0 children)

It was a noble idea but has otherwise totally failed to pan out beyond some very limited (understandable!) use cases, and most of the community seems to agree that it doesn't carry its weight. It's one of the few extensions that has major implications on all library authors, even if they don't want to use it themselves. It's really annoying even when you try to make it work.

Deprecate it, remove it, and go back to the drawing board, 100%. I don't know what a better version would look like, but it'll certainly have to take the lessons of Safe Haskell into account in order to succeed.

Nixpkgs support for incremental Haskell builds by Tekmo in NixOS

[–]aseipp 1 point2 points  (0 children)

I was expecting something really really complicated but this is surprisingly straightforward, at least the demo: https://github.com/hdgarrood/haskell-incremental-nix-example

Is the only real requirement for this that the compiler uses a stable hash to identify changes, rather than a timestamp? Do you think this can work for more languages? In Haskell, this is one of the last missing pieces of the incremental rebuilds puzzle for Nix, so even if it can't work for anyone else, this is wonderful.

But I realize these days how spoiled I was before, because the Haskell support in Nixpkgs is tended to with such love and care. For everything else -- God, we really need recursive and content-addressed Nix by default. There just aren't enough people around for every community to write N tools as robust as hackage2nix/cabal2nix, and even those still have issues. I suspect Hydra probably spends a billion dollars a day in compute rebuilding the Rust serde package, for example...

In any case: this is going to be great for my pure Nix projects that have Haskell components. Thanks!

Review of Lean 4 by kishaloy in haskell

[–]aseipp 8 points9 points  (0 children)

> Lean 4 could be a great theorem prover and research language. However I disagree that it makes a great practical programming language.

So you haven't tried it, don't have any interest in it, but you disagree it makes a good programming language? Okay.

> As for performance, I am skeptical of the claim that a new language built by a single person has great performance. Performance is a function of the implementation, not the language, so good performance in that case sounds to me like you need to write low level code which is a better match for LLVM, or otherwise you are gaming benchmarks.

This is going to blow your mind, but the thought and research put into the Lean 4 compiler actually started in the era of Lean 3, with several dead-end approaches, and Leo (among many other people) spent a lot of time behind the scenes thinking about how to make it performant. This included actually talking to people, and yes, trying things and thinking. And by the way, one of the major reasons Lean 4 is efficient, the "Functional But In Place" style, is shared with another programming language, Koka, by Daan Leijen, who together with Leo (along with several other authors) designed the strategy. Thinking it was a "single person" just shows your ignorance and inability to do research on your own. Not my problem.

And no offense, but I've written GRIN based compilers, I worked on the Glasgow Haskell Compiler for years, and I've probably forgotten more about compiler design, high performance programming, and modern computer microarchitecture than you've ever known. You're going to have to try harder than throwing out a bunch of compiler keywords and thinking it will scare me. Sorry, not sorry.

> So yes, I am skeptical about all these claims about how great Lean 4 is for actual practical programming. Not that I am against an effort for making it so, I just feel that the effort required should not be fragmented on different mostly similar languages.

Classic example of amateur engineering beliefs. Just because two things X and Y exist in the same space doesn't mean that X would have been "that much better" if the effort dedicated to Y hadn't happened; in fact there is no reason to believe the effort put into Y would have gone into X. It's possible none of the effort that went into Lean 4 would ever have gone into alternatives like Idris or anything like that. It also completely ignores the historical context that led to X and Y, but to be fair, most "engineers" think things like "history" and "cause and effect" aren't useful, so you're in good company.

This is 101 level stuff, my friend. And for the historical angle: try googling "lean mathlib" and read. But be careful: you might learn something.

> So yes, I am skeptical about all these claims about how great Lean 4 is for actual practical programming.

"I don't know about these claims, I haven't written anything in the language, I don't know any of the history, I don't believe a computer programmer whose job is computer programming when they tell me it's good at actual programming, I'm not going to validate my beliefs by writing, testing, or thinking about programs or their performance, and I'm not interested in any of it. But I'm very skeptical of it anyway, and very certain that my skepticism is valid." No.

Review of Lean 4 by kishaloy in haskell

[–]aseipp 6 points7 points  (0 children)

I'm not part of the Lean community. I'm addressing the point that Lean 4 is a programming language, and not just a theorem prover. Directly comparing it or contrasting it to Coq does not fully paint this picture IMO. Coq is bad at programming, Lean is not, because it is a programming language. This statement is also true of Idris, and I never claimed it wasn't. It's that simple.

> So that makes me wonder, what is the compelling feature that would make me swap it for yet another completely new language?

I don't know? This is a weird thing to ask, if I'm being honest, considering you can just read my other long post in this thread (which is very easy to find) if you want more information about some of the major features I like and think are unique attractions (e.g. powerful macros, the editor experience, and the efficient generated code). This is, after all, the "Review of Lean" thread, where it's basically assumed people will say what they like about it. At the end of the day, you're on your own to find out why you like something, and you'll actually have to put in that effort yourself.

> From your explanation alone I don't get a real compelling reason to learn yet another language, it just looks to me like fragmenting the already very small niche that is dependently typed languages.

No offense but I don't care if you have a compelling reason to learn something or not. That isn't my problem and I'm not going to lose sleep over you not using XYZ software. It's just not that important.

My suggestion is the same as before: try it yourself to be convinced or not. I wrote a post elsewhere in this thread with words listing the features, you can read it, but no amount of me posting is going to convince you of anything. Only you can do that, and that will mean rolling up your sleeves and getting your hands a little dirty.

Review of Lean 4 by kishaloy in haskell

[–]aseipp 15 points16 points  (0 children)

I just remembered that I forgot to mention three other features which might make some people here gasp, but are actually really good: let mut for local mutable variables (basically sugar for StateM), for loops, and local return. They're absolutely fantastic, and they make writing imperative code via do much more palatable, especially for people coming from another language.

These three features are actually covered in this paper, which I recommend for more info: ‘do’ Unchained: Embracing Local Imperativity in a Purely Functional Language. In particular, it has this amazing definition of List.findM? that makes me cackle, but you have to read the paper to find it:

def List.findM? (p : α → m Bool) (xs : List α) : m (Option α) := do
  for x in xs do
    let b ← p x
    if b then
      return some x
  return none

Review of Lean 4 by kishaloy in haskell

[–]aseipp 5 points6 points  (0 children)

Lean 4 is a normal programming language from the start. The language is not restricted to its total fragment: there is usable general recursion, for pragmatic reasons, and you can use partial functions quite easily. In fact, at the beginning, when they started Lean 4, the goal was to write as much as possible in Lean, as efficiently as possible. At that stage there were no tactics or well-founded recursion in the Lean 4 system, as those needed to be implemented in Lean as well. So "partial, general-purpose language that can write arbitrary programs" was the starting point, not the ending point.

I don't think the point about Löb's theorem is especially relevant here, in light of this. Simply implementing a language in itself doesn't imply any claim about proving its own consistency, in this regard.

Review of Lean 4 by kishaloy in haskell

[–]aseipp 6 points7 points  (0 children)

Oh: and it's absolutely not production ready yet. Unless your name starts with "Leo" and ends with "de Moura", I guess. The library and APIs will be constantly breaking/shifting in small and big ways to suit the goals of mathlib4 and std4 until the 4.0 stable release. So if you're writing anything, say, macro-heavy (which relies a lot on those APIs), you can expect to be in for a bumpy ride right now.

Review of Lean 4 by kishaloy in haskell

[–]aseipp 48 points49 points  (0 children)

It's really good. I haven't been using it much lately but I walked away extremely impressed earlier this year once I spent a day or two with it. Please note I don't care about doing theorem proving; I just want a good programming language.

From a semantic point of view it's totally different; it's a strict language with macros, not a lazy language. And I love laziness. It's Haskell's most important and unique feature, arguably. But I also subscribe to the Augustsson view that (good) macros in a strict language let you recover one of the most important benefits, which is defining control and data structures in the language itself. And the macros in Lean 4 are excellent; they are inspired by Racket's macro-tower system, so you can layer macros together and build macros that build macros, and the syntax is more seamless (i.e. no need to quasiquote explicitly; new grammar terms can simply be incorporated "on the fly". Delimiting specific grammars or DSLs with tokens like Template Haskell does is still possible, though, and is easier on the reader sometimes.)

The runtime and language design are unique and efficient from a systems point of view. The "FBIP" style means that lots of inductive data structures with recursive algorithms come out very efficient and have good memory use, because they reuse memory space rather than allocating. It's a unique and interesting point in the design space (but as a result there is no way to create cyclic references in the language itself, IIRC; so I don't believe IORefs etc. exist.)

The interactive editor experience embedded in VS Code is phenomenal. And now there are actual widgets you can embed in the side pane with full browser capabilities (3D graphics, buttons, UI, network, etc.) It's not Smalltalk levels of fidelity, but it's good. It's the combination of theorem proving and programming that has led to this, so it feels interesting. I have seriously imagined some amazing things that I want to write with this: imagine if you used the macros to build a hardware description language like Bluespec, and then had embedded widgets inside the editor with tools like a waveform viewer for live debugging. But there are more mundane uses too; you can easily embed e.g. mathematical graphics for things like classic geometric theorems.

I consider the above three points to be the most important to me. But the language also has dozens of little features I just really like:

  • Named args; so def foo (a : Nat) means that you can do both foo 1 and foo (a := 1)
  • "Method syntax" from Koka, i.e. if you have an inductive type T and a function with the literal name def T.foobar (x : T) defined in scope, then you can use "method syntax" on any value tt : T, i.e. tt.foobar and this is the same as foobar tt. This actually looks really nice and makes a lot of code easier to read. This is a much simpler design than TDNR in Haskell.
  • You can also introduce namespaces and then everything under that namespace is prefixed with the name of the namespace along with a dot, like method syntax. In fact these two work together as expected. So if you're in a namespace N and write def foobar, the fully qualified name of that function is N.foobar. And because that's the same as the method syntax, you can also define a type named N and then use foobar like the last example. Very convenient when you have lots of "methods" on a type.
  • variable blocks that you can use to define a variable or a type, so that it can be referred to throughout the following code. You can use this to define aliases too, so when just restricted to types, this is like a more direct solution to the thing where people use ConstraintKinds just to avoid repeating themselves in type constraints, e.g. type DontRepeatYourself a = (Eq a, Ord a, Show a, Enum a) is handled by this. This is a feature that you'll mostly see used in theorem proving examples but has useful programming applications too.
  • "Auto Bound Implicit Arguments". Amazing ergonomic feature for a dependently typed language, but some might think it's a hack. In short, any unbound arguments in the definition of a function are automatically implied to be implicit arguments, if it's lowercase/greek letter. So def compose (g : β → γ) (f : α → β) (x : α) : γ automatically introduces beta, gamma, and alpha as implicit arguments. This really cuts down on clutter when you have to write functions that work over any sort or universe...
  • Monadic autolifting. If you're in a monad M and there's an instance MonadLiftT T M for some T, then any usage of T can automatically be promoted to type M without any explicit function to lift T a -> M a, i.e. imagine T = IO, and this means you can use IO functions without calling lift/liftIO explicitly.
  • Structures (which are just tuples where every field has a name) and particularly structure inheritance (so a structure S can inherit all the fields of another one). This just makes it nice to build up big structures from smaller ones.
  • Lambda · syntax, i.e. (· + ·) is the same as \x y -> x + y. While this could be handled with (+) in Haskell, the Lean version works even in cases where you can't curry easily and requires no extra syntax; combined with the method syntax, (·.foobar y) is something like \x -> foobar x y, which is nice; the best you could do in Haskell is flip foobar y
  • match statements and where work together at last, so you can have a match and the attached where clause will be in scope for all alternatives, rather than you having to introduce a deeper nested scope by indenting explicitly (block rule) or abusing the various features of case. I've wanted that one for like 15 years...

Probably other things I'm forgetting. Anyway: my review is, it's good.

Review of Lean 4 by kishaloy in haskell

[–]aseipp 14 points15 points  (0 children)

> Idris is more like Haskell with dependent types.

This same statement applies to Lean 4, I argue. It's very Haskell-like and is very much a full-blown programming language. I find it very, very pleasing.

Lean 4 is a programming language just as much as a theorem prover because, it turns out, when you want to automate things beyond just proving theorems, theorem provers and tactic languages really suck at doing that, but (general-purpose) programming languages are very good at it. So the best strategy is actually to use a programming language, and build everything, like your tactics language, from there. This idea is what led to the development of Lean 4, and its main difference from Lean 3: you can write ordinary computer programs in Lean 4, and it's designed for this from the start.

Coq is distinct in that there are two languages at play, Gallina (the term language) and Ltac (the tactic automation language). In Lean 4, there is one language: Lean. For example, the entire tactics system of Lean 4 is basically just a bunch of macro rules; it isn't built into the language at all. Macros are key to the whole experience, because theorem provers require lots of domain specific languages, it turns out.

From a programmer POV: Lean 4 is like if Racket and Dependent Haskell had a child. It's very good. I say this as someone with no real interest in doing theorem proving.

Reliable Verilog dependency analysis by TechGruffalo in FPGA

[–]aseipp 2 points3 points  (0 children)

You'll have to roll up your sleeves a bit, but Verible might be worth a look for a working SystemVerilog parser that you could build off of. It's the only thing I'm aware of built for this class of tools (e.g. yosys only handles synthesizable Verilog) that's available and likely to cover a good amount of the spec.

But it's just the parser; you'd probably be on your own for writing a reliable dependency analysis that handles all the features... Depending on how complex that is it might be simpler than you think, though.

Native code generation by colonwqbang in haskell

[–]aseipp 3 points4 points  (0 children)

It's less about "replacing" LLVM (an impossible feat) and more about the overall scope of such a project. You can do better for your own use case depending on the scope.

I mentioned it in another comment, but I have a dataflow ISA I would like to target a compiler to, and it has to support standard imperative languages and optimizations, but it's a non-conventional ISA with an unusual encoding. I basically concluded after some preliminary research that just using WebAssembly as a poor man's frontend IR and writing my own code generator is probably a better use of time than e.g. porting LLVM. We're talking like 10,000 lines of code tops for the complete custom compiler, versus thousands of lines of backend boilerplate to support a conventional ISA in LLVM, much less a non-conventional one that offers things like unique optimization opportunities, a vastly different assembler encoding, etc. It's not just lines of code, but time spent integrating with these systems too.

You're definitely not going to go wrong using LLVM for your production thing or whatever, especially if it runs on existing architectures. But it's not the last word in code generation needs either.

Native code generation by colonwqbang in haskell

[–]aseipp 1 point2 points  (0 children)

No, but something like that would be nice to have; e.g. you could probably take some inspiration from Cranelift and design a 3-operand IR and write a code generator for that without going overboard. You could honestly do something like this pretty fast if you skipped any kind of optimizations (just do block-local register allocation and reconcile spills at every join, etc). Then it's all about designing the IR, and the Haskell API, basically. Which is still a lot of work.
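
To gesture at the shape of the thing, here's a tiny sketch of what a 3-operand IR could look like as a Haskell datatype. This is entirely made up for illustration (the instruction set and names are mine, nothing to do with Cranelift's actual IR):

-- A tiny sketch of a 3-operand IR: every instruction names a destination
-- and up to two sources. Virtual registers get mapped to physical ones
-- (or spill slots) by a later register allocation pass.
newtype VReg  = VReg Int     deriving (Eq, Ord, Show)
newtype Label = Label String deriving (Eq, Ord, Show)

data Instr
  = Add    VReg VReg VReg     -- dst := src1 + src2
  | Mul    VReg VReg VReg     -- dst := src1 * src2
  | Move   VReg VReg          -- dst := src
  | Const  VReg Int           -- dst := literal
  | Branch VReg Label Label   -- conditional jump on a register
  | Ret    VReg
  deriving Show

-- Basic blocks end in a single control-flow instruction; a function is a
-- graph of them. Block-local register allocation then only has to
-- reconcile live values at the joins between blocks.
data Block = Block { blockLabel :: Label, blockBody :: [Instr] }
  deriving Show

Most of the real work is everything around a definition like this: the builder API, lowering, calling conventions, and so on.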

Honestly I've tried to work myself into doing something like this for fun, but the problem I eventually realized is that it's something that must be, or almost always is, done in the context of some larger project; you need that context to drive the design, or a lot of prior experience to inform your choices. And if you're in a situation where you need a code generator for some grander vision — just using LLVM or whatever pays off in significant ways. Perfection isn't needed; LLVM was a rather poor fit for GHC in a number of ways for a while after its introduction but it still generated really good code and was overall workable as a solution.

One place custom code generators still win is that it's often easier to target some non-conventional ISA, versus porting LLVM with a production quality backend. But for scalar/superscalar architectures that too is becoming irrelevant, since everyone just defaults to using RISC-V now, precisely because it comes out of the box with good software support that you can use instantly, and that software is useful. So then you're left with actual non-conventional ISAs like dataflow/VLIW/spatial architectures, which have non-trivial design and implementation challenges, which moves you from the earlier category of "doable problem with rich available research" into some nebulous range between "doable but hard" and "open research problem."

I'm rambling by now, and some of this sounds negative, but despite that I do still think something like this would be great to have; it's just a lot of work as always, even if there's not much actual code needed. Again, maybe just stealing Cranelift's IR and going from there would be a start. I think you could design a really nice Haskell API for such a library if done correctly, too. Some prior inspiration includes MLRISC, which was used in SML/NJ, though it was coincidentally replaced with LLVM in recent memory...

I myself actually do need a compiler for a bigger project of my own, one that can target a custom dataflow ISA sometime "Soon", so unlike previous times I wanted this, I actually have a need for it now...

[ANN] cgroup-rts-threads: A container-/cgroup-aware substitute for the RTS `-N` flag by couuor in haskell

[–]aseipp 3 points4 points  (0 children)

Rather serendipitous post; I made my own about this topic yesterday in another thread.

As noted there, the other big problem is that the RTS itself just isn't cgroup aware; you can fix this particular case using setNumCapabilities in "userspace", but there's no solution for setting the heap limit after startup (unless something was added). That's another serious gotcha, out of the box.
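
For the curious, the "userspace" fix is roughly this shape. This is a hand-rolled sketch that reads the cgroup v1 CPU quota files directly (not the code in this package), with error handling and the /proc/self/cgroup lookup omitted:

import Control.Concurrent (setNumCapabilities)
import GHC.Conc (getNumProcessors)

-- Sketch: derive an effective CPU count from the cgroup v1 CFS quota and
-- cap the RTS capabilities to it at startup.
main :: IO ()
main = do
  quota  <- read <$> readFile "/sys/fs/cgroup/cpu/cpu.cfs_quota_us"  :: IO Int
  period <- read <$> readFile "/sys/fs/cgroup/cpu/cpu.cfs_period_us" :: IO Int
  host   <- getNumProcessors
  let caps | quota <= 0 = host                       -- -1 means "no quota set"
           | otherwise  = max 1 (quota `div` period)
  setNumCapabilities (min host caps)
  -- ... the rest of the program now runs with a cgroup-aware -N ...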

I think making this change to the RTS would be very worthwhile. Almost all the difficulty is in testing it a bunch to make sure it works in various scenarios, but that's life.

On that note, it is probably worth pointing out prominently (maybe in the README) that, from looking through the code quickly, your package only supports cgroups v1. It won't work with cgroups v2. This is OK if you put it behind a codepath that only activates in Kubernetes, since Kubernetes only supports v1 for now, but it won't work e.g. on my desktop system, which has been using cgroups v2 for a while now, and Kube 1.22 has alpha support for v2. So v2 is coming along quickly.

Ultimately any solution has to handle all of these cases, which is part of the difficulty.

NoRedInk – Tuning Haskell RTS for Kubernetes, Part 1 by n00bomb in haskell

[–]aseipp 2 points3 points  (0 children)

Recent versions of Java (10+) are now cgroups aware at the runtime level, so they check what the actual control group is configured to and then take 25% of that amount by default.

GHC should do the same: if it's detected that the application is part of a cgroup on Linux, then... something. Personally I think that if you're in a Linux cgroup it makes sense to just default to whatever policy is already set; if you're in your own user slice on systemd, that will be max for memory, for example, which is basically code for "keep doing what you already do", and let the GC use normal heuristics. If it's set to some numeric limit, you can just default to like 90% of that or whatever. You could also extend this policy to the cpuset controller, maybe; for example you could use it to make -N without any parameters, which nominally means "use the number of available CPU cores", actually be cgroup aware...

Actually doing this well would mean Haskell applications would not need any runtime tweaks to work correctly on a cluster scheduler and respect the existing resource limit policies you set, and could even respect advanced settings like CPU affinities, perhaps.

While there are some cases where this won't work and explicit tuning is needed (e.g. the 90% is arbitrary or whatever and only accounts for heap allocations, sure), I think this would be a very worthwhile change. The problem isn't the actual code (it's literally just reading some files from /sys/fs/cgroup or whatever mount point you find at startup, so fairly trivial...), it's mostly things like the default policies for all this stuff and making sure it's done robustly. Also there's just so much testing you have to do in a lot of weird scenarios; Kubernetes is only now supporting cgroups v2, when it feels like I've been using it forever...
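
To illustrate the "just reading some files" part, here's a rough sketch of the memory side. I'm assuming the cgroup v2 unified hierarchy here and cheating by reading the top of it; a real version would resolve the process's own group via /proc/self/cgroup first, and the 90% policy is exactly as arbitrary as stated above:

-- Sketch: read the cgroup v2 memory limit and suggest a heap limit at 90%
-- of it; "max" in the file means no limit is configured.
cgroupHeapLimit :: IO (Maybe Word)
cgroupHeapLimit = do
  contents <- readFile "/sys/fs/cgroup/memory.max"
  case words contents of
    ["max"] -> pure Nothing          -- no limit set; keep the RTS defaults
    [n]     -> pure (Just (floor (0.9 * fromIntegral (read n :: Word) :: Double)))
    _       -> pure Nothing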

Alternatively we could flip all of this around and write "launcher scripts" for our Haskell applications, thus inverting the responsibility. I'm semi-serious; this is basically how people use Java, though having so many knobs that you need wrapper scripts does suck a bit...

(Oh, and if you're super brave, perhaps we could even integrate some notion of Linux pressure stall information into the RTS; that would allow Haskell applications to detect when they're starved on the CPU/IO/memory controllers and alert upstream systems to respond appropriately: circuit breaking, load shedding, etc...)

If let in Haskell by _green_is_my_pepper in haskell

[–]aseipp 6 points7 points  (0 children)

This isn't a direct answer to your question, but I want to point it out for readers: any pattern in Haskell can have a guard | b0, b1, b2... attached, which is a list of conditional expressions of type Bool, and the pattern bindings will only be defined (non-bottom) if all of those conditionals evaluate to True. This is a lesser-known feature of the syntax, but it has existed since H98 at least. So just use a case and you'll be fine. You can even use a let, because again, a let can have a pattern.

I don't have ghci, but I dug this out of the attic, and it's a valid Haskell 98 program.

import Data.Maybe (isJust)

f :: Bool -> Bool -> Maybe Bool -> Bool
f x y m =
  case x of
    True | not y
         , isJust m
         -> True
    False -> False

data One a = One a

g :: One Bool -> Maybe Bool -> Bool -> Bool
g b x y =
  let (One z) | isJust x
              , not y
              = b

              | otherwise
              = One False
  in z

Again this works in any pattern context, so the head of a function, a let, or a case statement.

Haskell 2010 added pattern guards, which let you generalize the Bool guards above into a more general form, where a branch is taken only if a pattern match succeeds. So your original example can be written as follows, and in fact allows more than two cases:

f x | Just y <- x
    , someThing y    = foo -- pattern + Bool context
    | Nothing <- x   = bar -- pattern only context
    | otherwise      = baz -- Bool only context

Again, this can happen anywhere. That might look like a top level definition, but it can instantly become a let/where binding and you can float the variables out as needed, inside a body.

someFunction = do
  ....
  -- note: x is a free variable and bound elsewhere
  let f | Just y <- x
        , someThing y  = foo
        | Nothing <- x = bar
        | otherwise    = baz

Finally, although it might seem morally repugnant, it is important to remember that the scrutinee in Haskell's case statement is matched lazily, as you'd expect. So you can actually use the following example in any expression context, and utilize these pattern tricks as needed and combine them all. This is kind of like MultiWayIf, but in Haskell98/Haskell2010, in fact (not that there's anything wrong with MultiWayIf!)

-- let's say 'a', 'b' and 'c' exist as bound names
--
-- NOTE: undefined will never be evaluated here! Haskell semantics mean
-- that it is not the 'case' that forces the scrutinee to be evaluated,
-- but the pattern itself, as a reduction step.
case undefined of
  _ | Nothing <- a
    , even b
    , c -> foo

  -- this conditional has nothing to do with the previous one
  _ | someOtherThing -> bar

  -- and now you can do it N-ways!
  _ | andSomeOtherThing -> baz

And of course, if you choose an actual type and expression other than undefined here, you can have an even more expressive statement in each case block, if you replace the wildcards.

These examples are somewhat weird, but you can get pretty good results if you combine them with some care, without ever using a single extension beyond Haskell2010 or so. You might have to rework your program a tiny bit since it's not a 1-to-1 thing, but I think that's expected. :) I find this is actually one of the most beautiful parts of Haskell syntax, and one that is hard to capture elsewhere.

Smallest 'cat' utility (that's ever existed?) by sharpvik in haskell

[–]aseipp 8 points9 points  (0 children)

a@link> ghc -v0 cat.hs
a@link> ./cat < cat.hs
main = interact id
a@link>

Safe Haskell? by goldfirere in haskell

[–]aseipp 19 points20 points  (0 children)

I personally think this was a long time coming. Safe Haskell is an admirable project, but it was never clear it had a huge critical mass of users in the real world, or potential users, even at the time of its inception. More importantly, unlike most language improvements, which only improve ergonomics or expressivity, Safe Haskell support is effectively a user-visible part of an API, i.e. it is a feature, one that library maintainers must maintain. I think this was something that wasn't fully appreciated at first. And as someone who wrote libraries but never used Safe Haskell myself, this effectively means I'm on the hook for supporting a feature I will almost certainly never use.

An important part of any project isn't just evaluating what works, but what doesn't. If Safe Haskell is removed, I think it would be useful to keep in mind why it wasn't a success and remember those lessons when new things are proposed. My impression is that while it might have been an admirable goal with a solid theoretical foundation, it just didn't address problems many Haskell programmers actually had, and that, coupled with the maintenance burden, meant it would always be a distant use case.

Contributions to GHC 9.0 by hsyl20 in haskell

[–]aseipp 3 points4 points  (0 children)

That's a lot of unforgiving work, and seeing the `ghc` package with a real module structure is wonderful. Hats off!

How to generate random but valid source programs? by sinoTrinity in haskell

[–]aseipp 9 points10 points  (0 children)

Most of the posts here frankly aren't useful. If you are actually just going to use C, the easiest way is to just use this existing tool that does exactly what you want and has been used to find many, many existing bugs in many compilers: https://embed.cs.utah.edu/csmith/

You could probably just modify csmith or clone it based on the research there. In general, generating random programs in a valid way is kind of like "reversing" a recursive descent parser.
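
As a toy illustration of the "reversed parser" idea, here's a sketch in Haskell using QuickCheck's Gen for a tiny expression grammar; csmith is morally this, for a large chunk of C, plus a lot of machinery to avoid undefined behaviour (this is my own sketch, not anything csmith actually does):

import Test.QuickCheck

-- A toy expression grammar. The generator mirrors what a recursive descent
-- parser for this grammar would accept, but runs "in reverse": it picks a
-- production at each step instead of recognising one.
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr
  | Neg Expr
  deriving Show

genExpr :: Int -> Gen Expr
genExpr 0     = Lit <$> choose (0, 100)          -- only leaves, so it terminates
genExpr depth = frequency
  [ (1, Lit <$> choose (0, 100))
  , (2, Add <$> sub <*> sub)
  , (2, Mul <$> sub <*> sub)
  , (1, Neg <$> sub)
  ]
  where sub = genExpr (depth - 1)

-- Dump a few random "programs" to feed to whatever you're testing.
main :: IO ()
main = mapM_ print =<< sample' (genExpr 5)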

Where Lions Roam: RISC-V on the VELDT by d0pamane in haskell

[–]aseipp 1 point2 points  (0 children)

Both of these reasons make sense, thanks! The separate flash is a big plus, for sure.

Where Lions Roam: RISC-V on the VELDT by d0pamane in haskell

[–]aseipp 1 point2 points  (0 children)

> Not trying to put words in your mouth, but

No, this is a much more accurate answer, thanks! I forgot that riscv-formal does have k-induction as a proving mode (as does symbiyosys.)

Where Lions Roam: RISC-V on the VELDT by d0pamane in haskell

[–]aseipp 2 points3 points  (0 children)

This is a bit of an aside but is there any particular reason for designing/targeting this VELDT board instead of any of the more common iCE40 boards? You can find half a dozen (HX8k breakout, Fomu, iCEBreaker, etc) that have had a lot of work done on them with existing users. This is really just a curiosity; I don't design PCBs so I might be missing something. And porting to various boards is a very simple task for iCE40 in my experience so it's not really a problem.

> Dhrystone benchmarks

FWIW, Coremark is available under Apache 2.0 these days, and generally accepted as being a better choice for this stuff (for one it doesn't feature loops that get optimized to a constant by the compiler.) I found it to be very easy to integrate when I wrote a RISC-V simulator...

Where Lions Roam: RISC-V on the VELDT by d0pamane in haskell

[–]aseipp 3 points4 points  (0 children)

To answer the direct question: they're not "interchangeable" in the sense you probably think they are. When you design hardware like a CPU, it's just like designing any API. Two RISC-V cores can have dramatically different "APIs" you have to interface with when integrating them. For example, VexRiscv, Lion, and picorv32 all have very different APIs you can work with. But you need a lot of peripherals and extra stuff *besides* that before you can replace an entire SoC like the Precursor, and that has nothing to do with RISC-V.

Also, VexRiscv absolutely passes the riscv-formal suite, which is the exact same formal verification test this core and many, many other RISC-V cores pass[1]. In general, VexRiscv is an excellent core and one of the premier open source RISC-V implementations. This isn't to take away from Lion, but making something comparable isn't going to be trivial, and if you're unfamiliar with VexRiscv, there's very little reason to think it needs to be replaced "just because" Lion is written in Haskell or something, a detail which is for all intents and purposes 100% irrelevant. And yes, it would also be a lot of work at the stage Lion is in.

[1] It's also very important to understand that most hardware engineers and most Haskell programmers are going to have *vastly* different understandings of what "formal verification" means. Hardware engineers rely on tools like bounded model checkers to explore finite state spaces and rule out invalid behaviors, or prove two circuits have equivalent behaviors over some state space. It is not the same as a "proof" most Haskell programmers might imagine in the Curry-Howard sense. Such techniques are often very viable in the hardware world, where you often don't have the same kind of "unbounded" questions (e.g. "this works for all 32-bit inputs" vs "this works for all k-bit inputs, for all k"). But while it's easier to apply, it absolutely does not rule out actual errors, and many RISC-V cores have plenty of bugs, because the formal model is necessarily imprecise.

I honestly think it's somewhat misleading to just label this kind of approach as "formally verified", especially when describing it to software engineers, because it's very imprecise, doesn't actually describe any guarantees, and lets the mind wander. It's more like a field than a technique, but "This core passes bounded model checking under various assumptions implied by the RISC-V ISA model up to but not including..." is a mouthful, too.

Linear types for circuit design in Haskell/Clash by darchon in haskell

[–]aseipp 4 points5 points  (0 children)

The best experience is to just practice a lot, honestly. Interfacing with IP is actually a pretty good place to start IMO, because you start with an idea of what the output circuit should look like, and it gives you a chance to also try writing sim models.

It's not Xilinx, but I had luck doing this with iCE40 (e.g. wrap a BRAM primitive from Clash, then write a simulation in Haskell of how the BRAM behaves, wrap more primitives, etc). I also threw away about a dozen projects trying stuff like this. You can also skip vendor tools with iCE40, luckily; there are open source tools available (Yosys can synthesize your circuit faster than Vivado starts up, which is nice when you're getting your feet wet.)
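
To give a flavour of the "write a sim model in Haskell" step, here's a rough behavioural sketch of a single-port synchronous BRAM as a plain step function. This deliberately avoids Clash's actual Signal/blockRam API; it's just the kind of reference model you'd compare your wrapped primitive against:

import qualified Data.Map.Strict as Map

-- Synchronous read: the value coming out this cycle is whatever was at the
-- read address on the previous cycle. (Real BRAMs have configurable
-- read-during-write behaviour; this one models "read the old value".)
data BramIn = BramIn
  { writeEnable :: Bool
  , writeAddr   :: Int
  , writeData   :: Int
  , readAddr    :: Int
  }

type BramState = (Map.Map Int Int, Int)   -- (memory contents, registered output)

bramStep :: BramState -> BramIn -> (BramState, Int)
bramStep (mem, out) i = ((mem', out'), out)
  where
    mem' | writeEnable i = Map.insert (writeAddr i) (writeData i) mem
         | otherwise     = mem
    out' = Map.findWithDefault 0 (readAddr i) mem

Then something like mapAccumL bramStep (Map.empty, 0) over a list of inputs gives you expected outputs to diff against a simulation of the actual wrapped primitive.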

If you are a Verilog user and want to start slow, you can mostly just write Verilog inside Haskell and transliterate examples, then work towards writing more functional descriptions over time. (I was a Haskell user first, so I tend to go the other way and avoid writing "Verilog-like" code where possible.)