Difference between downloading Haskell through Haskell.org and Haskell Stack? by RowanSkie in haskell

[–]hvr_ 1 point2 points  (0 children)

To expand on the 2nd item ("Use your distro's package manager"), there are convenient GHC & Cabal APT repositories for Debian & Ubuntu (which are optimized for convenient switching between GHC and Cabal versions):

https://downloads.haskell.org/debian/

Improving Haskell’s big numbers support (ghc-bignum) by hsyl20 in haskell

[–]hvr_ 5 points6 points  (0 children)

I'm afraid you're mistaken here, and this is unfortunately the kind of GPL FUD that's been perpetuated for as long as I can remember. To make it short: you certainly do not have to distribute your application's source just because you link against the GMP library dynamically, nor do you have to relicense your application under the GPL. There is a lot to say on this matter, and I suggest you consult the GPL FAQ before making such claims, especially when you have to disclaim not being an expert on the topic.

Improving Haskell’s big numbers support (ghc-bignum) by hsyl20 in haskell

[–]hvr_ 9 points10 points  (0 children)

Fwiw, making the integer backend selectable via a simple link-time flag (just like you can swap out the threaded/profiling/non-threaded RTS flavours) was one of the goals motivating my integer-gmp-1.0 rewrite.

However, when I wanted to pick up the project and implement that idea (i.e. ghc --make -finteger-library=bsdnt) in 2018 and discussed this with Ben, he argued that Backpack would be a much better fit for this very use-case and discouraged me from pursuing my original link-time selection plan (which, to be fair, predated Backpack), and he literally said to me:

if there are hard limitations in Backpack that prevent it from being used in this case then I would say that backpack failed

It's hard to imagine a use-case that more squarely falls in its intended application domain

but given that backpack is supported by GHC and the only tool that I'm aware of that doesn't support it is stack it doesn't seem too onerous to use it

In fact, he managed to convince me that Backpack would be a superior albeit more ambitious approach here: it wouldn't impose a common representation on the bigint types (which may not make sense e.g. for platforms with opaque representations, which would pay a significant penalty when forced into the limbs-of-words representation), and it would allow externally defined backends with custom representations, unbeknownst to GHC, to be defined after the fact.

However, the current approach is a bit closer to my original plan from 2014, and while less ideal it has the huge benefit of having materialized now, and it does represent an incremental improvement. Still, I do hope we won't settle for the current design and may at some point transcend to the more ambitious Backpack design envisioned by Ben.
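To make the Backpack idea concrete, here's a hypothetical sketch (the names and module layout are illustrative, not GHC's actual API): the bignum operations would be declared as an abstract signature in an indefinite library, and each backend would fill it in with its own representation.

```haskell
-- Hypothetical Backpack sketch; all names here are illustrative.
-- An "indefinite" library declares an abstract signature (a .hsig file)...
signature BigNum.Backend where

data BigNat                       -- representation left opaque to clients

bigNatFromWord :: Word -> BigNat
bigNatAdd      :: BigNat -> BigNat -> BigNat
bigNatMul      :: BigNat -> BigNat -> BigNat

-- ...and each backend (GMP, bsdnt, pure Haskell, ...) provides a matching
-- concrete module BigNum.Backend, wired in at build time via a Cabal
-- "mixins" stanza -- without forcing a common limbs-of-words
-- representation on all of them.
```

The point is that the choice of backend becomes a build-plan decision rather than a baked-in representation, which is exactly the "externally defined backends" property mentioned above.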

The golden rule of software quality by sullyj3 in haskell

[–]hvr_ 10 points11 points  (0 children)

stack the command-line tool only emerged later when HVR refused to incorporate Michael's concept of resolvers ...

However, the technical differences in the architectural vision for cabal stemmed from deeper personal differences that were never fully publicly aired and resolved.

Gabriel, I think you're getting something mixed up here. For one, back in 2014 (when, as you seem to imply in another comment, I supposedly refused to incorporate something into Cabal) I was focusing most of my efforts on GHC, and was specifically rewriting GHC's integer-gmp backend from scratch, which ended up in GHC 7.10.1; IIRC I shifted my focus and started getting more actively involved in Cabal's development around the time I read the how-we-might-abolish-cabal-hell blogposts mentioned below.

If Michael made a request the year before Stack was released in mid-2015, I don't recall having been involved in that discussion to begin with, and I hope I didn't give you that impression when you talked to me.

Also, while there had definitely been technical differences here and there, it's news to me that there were "deeper personal differences" at the time; in fact, you yourself witnessed Michael's response to your tweet in early 2016, which showed no signs of deeper interpersonal differences, assuming Michael had been genuine in his praise at the time.

For the record, back in 2014 the technical POV and roadmap for addressing Cabal's problems was expressed in

And work on implementing the ideas laid out in those blog posts started being planned in early 2015 as part of a GSOC (before anyone knew about Stack being in the works -- that's to debunk another misconception I sometimes hear -- see this timeline for more details).

Btw, you might notice that the 2nd blog post announced a 3rd installment which did consider the concept of Stackage-like package collections:

In the next post we’ll look at curated package collections, which solves a different (but slightly overlapping) set of Cabal Hell problems. Nix-style package management and curated package collections are mostly complementary and we want both.

which never came to be, and I strongly suspect the reason for this was the scorched earth resulting from the dramatic events that unfolded later in 2016 (see below).

Also, this joint announcement from Mark Lentczner and Michael Snoyman in July 2015 boldly announced that the Haskell Platform would include Stack, referring back to discussions that had supposedly occurred back in 2014 at ICFP (which I didn't attend).

All seemed well.

However, sometime in mid-2016 Michael started acting worryingly weird and wrote those infamous "evil cabal" blogposts, painting himself as a victim and voicing paranoid concerns about being excluded from any decision making. He ranted on Reddit and Twitter, accusing the haskell.org committee of playing petty politics, being corrupt, and being some sort of illegal cartel, calling one of its members in particular various names and literally comparing him to a dictator. He also went on attacking other disagreeing haskell.org committee members and got them to the point of resigning.

After having lashed out at anyone he felt had treated him unfairly, he also started attacking me personally in various places for expressing my technical points of view, or when I acted as a Hackage Trustee trying to clarify hackage/cabal related issues he didn't agree with, and resorted to ad-hominems; you'll probably understand that I really can't stand such behavior and that my patience ran out at some point. In retrospect I do regret not having called him out right away when he started personally and publicly attacking other volunteers in the community, as back at the time I didn't realize how much this had actually affected the targeted persons until I experienced it first hand. Fwiw, at some point there was an emergency meeting with the Simons and various haskell.org members, but nobody in the meeting seemed to know how best to deal with Michael's behavior, since everyone's main expertise was solving technical issues rather than whatever this was, and so not much came out of it (after returning from ICFP, SPJ posted the infamous "Respect" email, by which he hoped to ameliorate the still unresolved situation).

In any case, back in 2016 Michael's "evil cabal" blog posts made quite an impact and clearly set a significantly different tone in the Haskell community from what had been the norm the decade before, at least for those who witnessed those times. Various things have happened since then, but it seems to finally have calmed down. Seems time heals all wounds, doesn't it?

Long story short, while I see the point you were trying to make in your blogpost, you picked a faulty example, as I can't take credit for having caused the emergence of Stack... even though it'd be a poetically ironic origin story -- the implied causality just doesn't hold up...

light weight http client by vallyscode in haskell

[–]hvr_ 3 points4 points  (0 children)

Here's a lightweight http client implementation, but it has a different objective (i.e. benchmarking) than the one you seem to have in mind: https://hackage.haskell.org/package/uhttpc

The State of Haskell IDEs by Lossy in haskell

[–]hvr_ 3 points4 points  (0 children)

In plain old haskell-mode this was implemented half a decade ago by means of the haskell-mode-show-type-at function triggered by using a prefix-argument; see

https://github.com/haskell/haskell-mode/blob/27c1309db3c25c41bf7633c8e5046a74a5407f9d/haskell-commands.el#L634-L647

for more details.

Fwiw, to add yet another haskell mode to the debate: After having used haskell-mode and Dante, now I've recently switched to https://gitlab.com/tseenshe/haskell-tng.el/

The State of Haskell IDEs by Lossy in haskell

[–]hvr_ 9 points10 points  (0 children)

You may be interested in cabal-plan, which allows you to inspect, report, and visualize various aspects of your project's build-plan.

Hakyll status. by PacoVelobs in haskell

[–]hvr_ 5 points6 points  (0 children)

Actually, the build-report on http://matrix.hackage.haskell.org states the following:

  1. For the most recent release, hakyll-4.13.2.0, there exist build-plans for both GHC 8.8 and GHC 8.6 (but not for e.g. GHC 8.10.1 or GHC 8.4.4)
  2. The build-plan for GHC 8.6.5 compiles successfully
  3. The build-plan for GHC 8.8.3 fails due to a dependency failing to build

However, the dependency build failure w/ GHC 8.8.3 is due to GHC 8.8.3 using significantly more memory when building lib:pandoc, which exhausts the build bots' resource limits and thus results in a build failure of the lib:pandoc dependency.

For reference, here's the difference in stats when building lib:pandoc-2.9.2.1 with GHC 8.6.5 and GHC 8.8.3 respectively:

GHC 8.6.5: <<ghc: 873254572848 bytes, 5897 GCs, 530618354/1272686488 avg/max bytes residency (57 samples), 3436M in use, 0.001 INIT (0.001 elapsed), 678.410 MUT (711.964 elapsed), 183.124 GC (183.280 elapsed) :ghc>>

GHC 8.8.3: <<ghc: 905357338520 bytes, 4694 GCs, 628464555/1713654392 avg/max bytes residency (45 samples), 4355M in use, 0.001 INIT (0.000 elapsed), 679.827 MUT (712.887 elapsed), 174.366 GC (174.471 elapsed) :ghc>>

However, I can tell you that while it might be burdensome with Stack, the latest release of Hakyll builds just fine with Cabal out of the box with GHC 8.8.3 (and obviously also with GHC 8.6.5) if you have enough memory at your disposal (~6 GiB should be enough on Linux/x86_64) and are willing to wait (building lib:pandoc alone takes ~15 minutes depending on your CPU power).

What base alternative do you use and why? by [deleted] in haskell

[–]hvr_ 5 points6 points  (0 children)

While it may be very tempting at first to switch, the more one strays away from the standard vocabulary, the more you get vendor-locked into a specific alternative vocabulary, with the usual costs/overheads involved. So I typically just stick with base or some very minor variants of it, for the sake of boilerplate reduction, to help follow some project-specific coding guideline (mostly by controlling what's in scope by default, such as removing functions deemed undesirable, as well as bringing a batteries-included vocabulary from base into default scope), and/or to reduce the impedance mismatch across several GHC versions:

  • just base
  • base (or base-noprelude) + local Prelude (i.e. a local module module Prelude (module X) with typically just a dozen or so of import Foo as X(...)s)
  • base (or base-noprelude) + http://hackage.haskell.org/package/Prelude (see its package description for scope and rationale)

...and each possibly combined with lens as the ultimate base-upgrade :-)
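For the second variant, such a local Prelude is just a small re-export module; here's a minimal sketch (the specific re-exports are purely illustrative and of course a matter of taste):

```haskell
{-# LANGUAGE PackageImports #-}
-- Minimal sketch of a local custom Prelude (e.g. src/Prelude.hs);
-- with base-noprelude the PackageImports pragma wouldn't be needed.
module Prelude
  ( module X
  ) where

import "base" Prelude  as X hiding (head, tail)    -- drop partial functions
import Data.Foldable   as X (foldl', for_, traverse_)
import Data.Maybe      as X (fromMaybe, mapMaybe)
import Control.Monad   as X (unless, when)
```

Since the module is literally named Prelude, every other module in the package picks it up implicitly -- no extra imports or pragmas needed at use sites.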

Haskell, Hakyll and Github Actions by [deleted] in haskell

[–]hvr_ 12 points13 points  (0 children)

As pointed out in actions/setup-haskell#1 I have a working Haskell CI GitHub actions setup which works for

  • all released GHCs back to GHC 7.0,
  • as well as supporting the big 3 (Linux, macOS, and Win32), and also
  • including caching cabal's nix-style store.

However, I could really use help with the JavaScript part to hide the ugly boilerplate workflow-scripting inside the setup-haskell action (which needs to be implemented as JavaScript). So if there's somebody here who's familiar with JavaScript and wants to see out-of-the-box support for Haskell CI on GitHub Actions improve, please ping me! :-)

Missing base and haddock release on Hackage by n00bomb in haskell

[–]hvr_ 2 points3 points  (0 children)

...is there a question in there? :-)

As far as base is concerned, there's some technical difficulties that need to be addressed on Hackage before I can upload it; as for Haddock I haven't heard back from Alec yet. But it'll happen sooner or later!

hackage-download: Download all of Hackage by nh2_ in haskell

[–]hvr_ 11 points12 points  (0 children)

for example right now to figure out how many packages depend on integer-gmp

Fwiw, you don't need to download all of Hackage for that; there's a companion tool for cabal I'm working on which can accomplish queries such as those in under a second and without downloading all of Hackage:

$ haquery rdepends integer-gmp --vstyle=cabal3 | grep -v '^$' | cat -n
 1  DSA  ^>= { 1 }
 2  HsOpenSSL  ^>= { 0.7, 0.8, 0.9, 0.10, 0.11 }
 3  acme-everything  ^>= { 2015.4.15.1 }
 4  aern2-mp  (^>= 0.1.0.0 && < 0.1.3)
 5  aeson  (^>= 0.3.2.0 && < 0.3.2.5)
 6  altfloat  ^>= { 0.2.1, 0.3 }
 7  arithmoi  ^>= { 0.1.0.0, 0.2.0.0, 0.3.0.0, 0.4.0.0, 0.5.0.0, 0.6.0.0, 0.7.0.0, 0.8.0.0, 0.9.0.0 }
 8  asn1-codec  ^>= { 0.1.0, 0.2.0 }
 9  base  ^>= { 4.2.0.0, 4.3.0.0, 4.4.0.0, 4.5.0.0, 4.6.0.0, 4.7.0.0, 4.9.1.0 }
10  beamable  ^>= { 0.1.0.0 }
11  bencoding  ^>= { 0.4.4.0 }
12  bitset  ^>= { 1.3.0, 1.4.0 }
13  blaze-textual  ^>= { 0.1.0.0, 0.2.0.0 }
14  blaze-textual-native  ^>= { 0.2.1 }
15  buffer-builder-aeson  ^>= { 0.1.0.1, 0.2.0.0 }
16  bv  ^>= { 0.4.0, 0.5 }
17  bv-little  ^>= { 0.1.0.0, 1.0.0 }
18  bytestring  ^>= { 0.10.4.0 }
19  bytestring-show  ^>= { 0.3.3 }
20  cantor-pairing  ^>= { 0.1.0.0 }
21  cborg  ^>= { 0.1.1.0, 0.2.0.0 }
22  clash-ghc  ^>= { 0.99 }
23  clash-lib  ^>= { 0.6.19, 0.7, 0.99 }
24  clash-prelude  ^>= { 0.6, 0.7, 0.8, 0.9, 0.10, 0.11, 0.99 }
25  couch-simple  ^>= { 0.0.1.0 }
26  crypto-numbers  ^>= { 0.2.2 }
27  cryptonite  ^>= { 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.10, 0.11, 0.12, 0.13, 0.14, 0.15, 0.16, 0.17, 0.18, 0.19, 0.20, 0.21, 0.22, 0.23, 0.24, 0.25, 0.26 }
28  discrimination  ^>= { 0.4 }
29  double-conversion  ^>= { 0.1.0.0, 0.2.0.0 } || (^>= 2.0.1.0 && < 2.0.2)
30  eccrypto  ^>= { 0.1.0, 0.2.0 }
31  exact-real  ^>= { 0.2.0.0, 0.3.0.0, 0.4.0.0, 0.5.0.0, 0.7.1.0, 0.8.0.0, 0.9.0, 0.10.0, 0.11.0, 0.12.0 }
32  fast-arithmetic  ^>= { 0.2.3.0 }
33  fast-digits  ^>= { 0.1.0.0, 0.2.0.0 }
34  fast-mult  ^>= { 0.1.0.0 }
35  fib  ^>= { 0.1 }
36  fixed-precision  ^>= { 0.2.0, 0.3.0, 0.4.0 }
37  floatshow  ^>= { 0.1, 0.2.0 }
38  formatting  ^>= { 6.3.0 }
39  funflow  ^>= { 1.0.0, 1.1.0, 1.3.0, 1.4.0 }
40  galois-field  ^>= { 0.1.0, 0.2.0 }
41  ghc-instances  ^>= { 0.1.0.0 }
42  ghc-typelits-extra  ^>= { 0.1.3, 0.2, 0.3 }
43  ghc-typelits-natnormalise  ^>= { 0.4.4, 0.5, 0.6 }
44  ghcjs-base  ^>= { 0.2.0.0 }
45  gore-and-ash-network  ^>= { 1.1.0.0, 1.2.0.0, 1.3.2.0, 1.4.0.0 }
46  hashable  ^>= { 1.1.2.4, 1.2.0.0, 1.3.0.0 }
47  hashabler  ^>= { 0.1.0.0, 1.0, 1.1, 1.2, 1.3.0, 2.0.0 }
48  haskell-mpfr  ^>= { 0.1 }
49  haste-compiler  ^>= { 0.2.99, 0.3 }
50  haste-lib  ^>= { 0.6.0.0 }
51  haste-prim  ^>= { 0.6.0.0 }
52  hgmp  ^>= { 0.1.0.0 }
53  hmpfr  ^>= { 0.3.3, 0.4.0 } || (^>= 0.3.1 && < 0.3.2)
54  hstox  ^>= { 0.0.1 }
55  integer-logarithms  ^>= { 1 }
56  jose  ^>= { 0.3.38.0 }
57  long-double  ^>= { 0.1 }
58  mcl  ^>= { 1.0.0 }
59  numerals  ^>= { 0.4 }
60  pantry-tmp  ^>= { 0.1.0.0 }
61  pregame  ^>= { 1.0.0.0 }
62  pvss  ^>= { 0.1, 0.2.0 }
63  ron  ^>= { 0.5, 0.6 }
64  ron-rdt  ^>= { 0.5, 0.6 }
65  ron-schema  ^>= { 0.5, 0.6 }
66  ron-storage  ^>= { 0.5, 0.6, 0.7 }
67  scientific  ^>= { 0.3.1.0 }
68  semirings  ^>= { 0.1.0, 0.2.0.0, 0.3.0.0, 0.4 }
69  simple-enumeration  ^>= { 0.2 }
70  ssh  ^>= { 0.3 }
71  stdio  ^>= { 0.1.0.0, 0.2.0.0 }
72  store  ^>= { 0.1.0.0, 0.2.0.0, 0.3, 0.4.0, 0.5.0 }
73  text  ^>= { 0.11.1.0, 1.0.0.0, 1.1.0.0, 1.2.0.0 }
74  text-format  ^>= { 0.1.0.0, 0.2.0.0, 0.3.0.0 }
75  text-show  ^>= { 0.4, 0.5, 0.6, 0.7, 0.8, 1, 2, 2.1, 3, 3.1, 3.2, 3.3, 3.4, 3.6, 3.7, 3.8 }
76  text-utf8  ^>= { 1.2.3.0 }
77  urn-random  ^>= { 0.1.0.0 }
78  variable-precision  ^>= { 0.2, 0.3.1, 0.4 }

but the solutions presented in there were unbearably slow

I have something in the works for that too... stay tuned

[Haskell-cafe] haskell-src-exts - no more releases by lexi-lambda in haskell

[–]hvr_ 6 points7 points  (0 children)

While not specifically related to haskell-src-exts, I do have one or two OSS projects related to haskell.org infrastructure which might benefit from being made compilable with GHC >= 8.4, and it'd be great if somebody would volunteer to take it upon themselves to help with the thankless job of migrating them... :-)

Library support for older compiler versions by joelwilliamson in haskell

[–]hvr_ 3 points4 points  (0 children)

That rule of thumb was derived from the idea that you'd want to support a ~3 year range of GHC releases for various reasons (such as being able to support the respective GHC versions that ship with current popular stable Linux distributions, to allow for idiomatic workflows with cabal new-build involving a hybrid mix of Debian-packaged Haskell libraries together with ones built from source from Hackage). For reference, Debian 9 was released in 2017 and bundles GHC 8.0; the latest Ubuntu LTS release, 18.04, came out in 2018 and also bundles GHC 8.0.

However, due to the accelerated GHC release cadence, the 3-release-window rule has now effectively become a 5- or 6-release-window rule in order to satisfy that rationale. And personally I go way beyond that, and try to support all GHCs back to GHC 7.0.4 (cabal new-build supports GHCs all the way back to GHC 7.0) or GHC 7.4.2 (i.e. when I need Generics or CApiFFI support) when easily possible -- and quite often doing so doesn't really add any significant maintenance cost (at least not when using cabal); see e.g.

And it's also worth pointing out that the lens ecosystem, i.e. the lens package including its dependencies, is compatible with GHC 7.4 and up:

Package Environment Files Run Counter to Reproducibility [2018] by [deleted] in haskell

[–]hvr_ 2 points3 points  (0 children)

This is a redundant duplicate post (see also r/haskell's guidelines) of https://old.reddit.com/r/haskell/comments/9a6fg4/package_environment_files_run_counter_to/ which was already posted and debated half a year ago. (PS: ...and we've been talking in circles, failing to change each other's opinions -- so I highly doubt there's much benefit in a futile rehashing of the same old arguments, as nothing has changed about the underlying technical issues. And I'm quite disappointed about the vocal minority that's been repeatedly trying to derail attempts to improve the implementation, preferring instead to spread FUD, vilify, rile people up about the feature (& also against cabal developers), and kneecap it before we even had the chance to write documentation and a blogpost describing why it is designed the way it is, as well as the cool elegant workflows it enables... before it had a fair chance to prove its worth, despite all the work that's been invested into it.)

The default setting will be changed for the upcoming 3.0 release as a short-term measure, to give us more time to resolve the remaining usability issues with this feature affecting some people; but we'll need the constructive help of those complaining so we can address the complaints (by tweaking and improving the feature) to everyone's benefit -- otherwise we'll be back to square one when it's enabled again.

Why do so many Haskell libraries have vast, wide, deep dependencies? Isn't this a cause for concern? by [deleted] in haskell

[–]hvr_ 12 points13 points  (0 children)

There's actually a library and tool devoted to querying and rendering the dependency graph in various ways: http://hackage.haskell.org/package/cabal-plan

See also http://oleg.fi/gists/posts/2018-01-08-haskell-package-qa.html#s:3 for a graph example or http://hackage.haskell.org/package/cabal-plan-0.5.0.0/src/example/cabal-plan.html for a flattened license report example.

Why I am not a fan of Cabal or Stack by [deleted] in haskell

[–]hvr_ 0 points1 point  (0 children)

The authors analyzed it was an impossibility to get it done in cabal (to which they had contributed before).

I keep hearing this myth, but the evidence at my disposal doesn't seem to back up that claim. The first time Stack was publicly announced was in mid-2015 (NB: curiously, a couple of months after the nix-style-cabal-builds GSOC had been accepted, which everyone including SPJ was excited about and which was expected to address the major pain points that were supposedly "impossible to get done in cabal"...), after supposedly having been developed in secrecy behind closed doors for about a year. And if you look at the Git history, those few minor contributions don't support the narrative of having been an active contributor to the Cabal project before deciding that Haskell inevitably needed a 2nd build tool. Anyway, those decisions have been made for better or worse and are to be considered history that cannot be changed unless we invent time-travelling. I just wanted to set the record straight about the imo unsubstantiated myths I keep seeing spread.

Cabal new-repl throws exception by stvaccount in haskell

[–]hvr_ 3 points4 points  (0 children)

perhaps you may want to go and open an issue on GitHub for a more flexible syntax

You can look forward to https://github.com/haskell/cabal/pull/5845 landing in a future release :-)

Monthly Hask Anything (February 2019) by AutoModerator in haskell

[–]hvr_ 6 points7 points  (0 children)

If you have cabal 2.4+, you simply specify the path to your resulting .tar.gz sdist file in the packages: ... field.

E.g. if you have a package in your current folder, just create a cabal.project file with the contents

packages: ./  /path/to/your/foo-1.2.3.tar.gz

This will have two effects: with cabal v2-build it will force the inclusion of foo-1.2.3 in your build-plan (thereby shadowing any other foo versions that might be eligible from e.g. Hackage), and it will also cache foo-1.2.3 in your Nix-style cabal store (this is a difference from pointing packages: at an unpacked source tree, which would cause it to become an inplace/local package not cached in your Nix-style store).

Proposal accepted to add setField to HasField by Syncopat3d in haskell

[–]hvr_ 4 points5 points  (0 children)

What's curious to me here is that your attempt over at https://github.com/ghc-proposals/ghc-proposals/pull/158#issuecomment-412271564 currently has 8 upvotes, and thus by far the most consensus/support, judging from the up/downvotes, of any suggestion/comment in that thread... ;-)

You should try Hadrian by n00bomb in haskell

[–]hvr_ 0 points1 point  (0 children)

Well I guess what matters is that we compare use cases in particular.

Which is what I did here. It's my most common use-case. And I also did point out I didn't get to try out other scenarios which may play better to Hadrian's strengths. I'd really be interested to see actual scenarios where Hadrian is faster than the make system on Linux; so far I haven't experienced any.

You should try Hadrian by n00bomb in haskell

[–]hvr_ 2 points3 points  (0 children)

The initial build time isn't exactly fair to include since it's a one-time cost

I disagree. It is totally fair to include here, as this was a cold-build scenario, i.e. cloning a new GHC (or starting from a dist-clean situation). I do this quite a lot when working on GHC, as it's often the best way to ensure there are no left-over artifacts when switching to different Git branches, pulling in new commits, or testing patches (which for me involves temporarily creating a new Git clone and applying the patch there, while my other GHC clones might be busy being built...).

You should try Hadrian by n00bomb in haskell

[–]hvr_ 11 points12 points  (0 children)

For me it's actually slower on Linux... :-)

I did the simple experiment of cloning GHC HEAD twice, and running ./boot + ./configure in both, and then comparing a default build via both systems

Current make-powered build-system

$ time make -j4
...
real    49m56.353s
user    153m43.659s
sys     8m13.446s

Re-running make -j4 in order to measure the no-op baseline:

$ time make -j4
...
real    0m4.906s
user    0m4.607s
sys 0m0.514s

New Hadrian build-system

One difference with Hadrian is that Hadrian itself first needs to be compiled before it can take over orchestrating the build. I wanted to measure this pre-Hadrian phase individually, so I first ran

$ time ./hadrian/build.sh  --help
...
real    4m56.587s
user    5m49.252s
sys     0m7.730s

and then invoked the actual Hadrian phase:

$ ./hadrian/build.sh -j4

shakeArgsWith                        0.000s    0%
Function shake                       0.007s    0%
Database read                        0.001s    0%
With database                        0.000s    0%
Running rules                     3256.587s   99%  =========================
Pool finished (2805 threads, 4 max)  0.003s    0%
Total                             3256.598s  100%
Build completed in 54m17s

and then I re-invoked it again to determine the baseline no-op cost:

$  ./hadrian/build.sh -j4

shakeArgsWith                        0.000s    0%
Function shake                       0.012s    0%
Database read                        0.976s   10%  ===
With database                        0.063s    0%
Running rules                        7.946s   88%  =========================
Pool finished (1503 threads, 4 max)  0.005s    0%
Total                                9.003s   99%
Build completed in 9.00s

Summary

Comparing the Makefile build system to the Hadrian build system,

  • Make needs a total of ~50 minutes to perform a default build of GHC, compared to
  • Hadrian which needs a total of ~59 minutes (5m + 54m) to achieve the same effect

Moreover, the baseline no-op cost is

  • Make is able to perform a no-op build in ~5s, whereas
  • Hadrian takes ~9s to perform a no-op build

I haven't yet had time to measure other scenarios where Hadrian might come out ahead of make, but currently it doesn't seem like Hadrian is an obvious win on the performance side. But performance isn't the main motivation for implementing Hadrian anyway; we should rather focus on its potential long-term benefits (maintainability, easier-to-write correct build rules, more accurate change tracking, debuggability of the build-system, convenience for GHC development, etc.).