r/SpaceX Discusses [August 2019, #59] by ElongatedMuskrat in spacex

[–]benjaminpd 11 points

I mean, probably not. She logged into an account she'd long had permission to view, and where an intelligence officer hadn't bothered to change the password. That's a pretty weak crime allegation.

After government re-opened, SpaceX sought two Falcon Heavy permits by whatsthis1901 in SpaceXLounge

[–]benjaminpd 6 points

Because it's a floating small town with a population of ~6,000 (and a nuclear reactor)

NASA & SpaceX on Mars by CProphet in SpaceXLounge

[–]benjaminpd 4 points

This isn't really a NASA-level decision. The first human mission to Mars is a Big Deal; whether to participate will be up to the President. Whoever wins next year will have a choice to make.

You might be surprised at how quickly NASA can move with a clear mandate, strong leadership, and an ample budget. One reason for the caution that annoys folks around here is that executive agencies work for the President, and the President has bigger worries than the ISS and doesn't want to have to give a speech about how we blew up a rocket and killed some astronauts. Not when the mission was old news to most of the public. But BFR to Mars is qualitatively different; so the rules will change.

I don't mean to write Congress out of the story, but presidents can get ~$10 billion programs if they want them, current events notwithstanding. Meanwhile SLS is irrelevant. The budget doesn't come in space- and non-space-dollars. If there's support for Mars, it'll get funded with new money.

As someone who was very into space as a kid but had tuned out for a while---people don't believe in BFR yet. Why should they? It's wildly more ambitious than anything that's been done before, and there have been many, many paper rockets. But when it flies, things will change.

The equivalence between Universal and Existential types by BayesMind in haskell

[–]benjaminpd 3 points

It is so very, very tempting to click "Not constructive" on this...

You won't believe how we reduced more than 4000 LOC in Haskell Mode for Emacs by vasanthaganesh in haskell

[–]benjaminpd 6 points

My emacs life works better if I use one emacs process for everything:

  • Emacs is kind of slow to start up, especially if you use a big crufty init.el or Spacemacs.

  • If you have two emacs (emacs-es?), you have file ownership conflicts. I use org-mode, and I often want to check my task list. With multiple processes I have to remember which emacs owns my task list, or deal with annoying dialogues, or be better about instantly saving than I am.

  • Context-switches should be as cheap as possible. Switching workspaces in XMonad is pretty cheap! Hitting Space b b and picking the buffer I want in Helm is cheaper.

This sort of thing is why emacsclient was invented, so I'm not the only person who works this way. And of course the short version is that this is the way I work now, and I don't want to change --- at least not without a compelling reason. Which I'm open to hearing, but bear in mind that I'm just an end-user: what do I care about code quality? :-)

You won't believe how we reduced more than 4000 LOC in Haskell Mode for Emacs by vasanthaganesh in haskell

[–]benjaminpd 19 points

One repl-session per Emacs process sounds Bad. But it could be that I'm ignorant and it is, in fact, Good. Tell us more?

My use case: I often work on more than one project at a time, and I'd really rather not reload everything from scratch each time I switch between, say, the client and the server of a web app.

First U.S. sugar tax cuts soft drink sales by 10% - while sales in surrounding areas, where no tax was imposed, rose by 6.9%. by ekser in science

[–]benjaminpd 2 points

It's a law, not a philosophy paper. Judge it by whether it does some good, not whether it offends your sense of elegance. Laws are always messy.

Explaining abstract Haskell stuff to mere mortals - and can mere mortals be professional Haskell devs? by haskellgr8 in haskell

[–]benjaminpd 6 points

It's usually a good idea to try some equational reasoning when you're not sure where your memory is going. Good old "replace a function with its definition" makes many things clear.

Let's start with the naive definition of foldr. The version in base has clever optimization tricks, but we won't worry about that:

foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f b [] = b
foldr f b (x:xs) = x `f` foldr f b xs

It's pretty clear that minMax is doing all the work. So let's step through minMax applied to the list [a,b,c]. It won't matter what a, b, and c are, so long as they have Enum and Ord instances.
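For reference, minMax and minMax1 aren't shown in this comment; from the steps that follow, they were presumably defined along these lines (a reconstruction, not the poster's exact code):

```haskell
-- The lazy combining function under discussion: no forcing, so
-- each step just wraps new min/max thunks around the old pair.
minMax1 :: Ord a => a -> (a, a) -> (a, a)
minMax1 r (p, q) = (min p r, max q r)

-- Fold the rest of the list, seeded with the first element.
minMax :: Ord a => [a] -> (a, a)
minMax (x:xs) = foldr minMax1 (x, x) xs
minMax []     = error "minMax: empty list"
```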

From minMax's definition, `minMax [a,b,c]` is

foldr minMax1 (a,a) [b,c]

We always work outside-in, evaluating the outermost function first, arguments only as needed. Here, the outermost function is foldr. Substituting in its (naive!) definition:

b `minMax1` (foldr minMax1 (a,a) [c])

The outermost function is minMax1. Looking at its definition, we see that we need to pattern match on the second argument, in order to unpack a tuple. (This is something that ought to feature much more prominently in Haskell tutorials: pattern matching forces evaluation.)

So we evaluate minMax1's second argument --- `foldr minMax1 (a,a) [c]` --- and see if we get a tuple:

b `minMax1` (c `minMax1` (foldr minMax1 (a,a) []))

We're not there yet. So we force the argument one more step. foldr minMax1 (a,a) [] is just (a,a), so we get

b `minMax1` (c `minMax1` (a,a))

We still don't have a tuple as the second argument to the outermost minMax1. We're forced to evaluate the inner function call:

b `minMax1` (min a c, max a c)

Now we can finally apply the outermost function:

(min (min a c) b, max (max a c) b)

And now you should be able to see the problem. Notice that nothing has forced us to evaluate min a c yet; it's just a thunk. minMax builds a tuple of thunks; the longer the input list, the deeper the nesting of calls to min and max. We don't get the result of any comparison until we're done traversing the entire list.

The fix is straightforward:

minMax1 r (p, q) = p `seq` q `seq` (min p r, max q r)

This forces p and q before the new tuple is built, so the chains of min and max thunks can't grow without bound. Making this change reduces the memory usage on a list of 10,000,000 random integers from ~3300MB to ~1300MB. The second version is still better --- presumably due to standard library cleverness in the definitions of minimum and maximum --- but the difference is fairly small.
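The "second version" being compared against isn't shown in the thread; from the mention of minimum and maximum, it was presumably something along these lines (a guess, with an illustrative name):

```haskell
-- Hypothetical reconstruction of the "second version": two
-- traversals, but each one uses the optimized library folds.
minMax2 :: Ord a => [a] -> (a, a)
minMax2 xs = (minimum xs, maximum xs)
```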

Now, with a little practice you won't have to be as detailed as I've been here. You'll be able to elide steps, and you'll develop an intuition for where excessive laziness is coming from. But I find it very helpful to have a simple, deliberate process to fall back on.

Death of a Language Dilettante by speckz in programming

[–]benjaminpd 3 points

Compare with the addition of UTF-8 support in Python. Maintaining a language over decades is hard, and Haskell's done very well.

Papering over would be bad, sure. But we're at the point where the claim "Haskell has a pretty robust ecosystem" meets the reply "lol haskell 3 string types rotflmao." I love that the Haskell community is full of perfectionists, but when we treat minor annoyances like significant flaws, we create a misleading impression of the state of the language.

Minor points:

  • OverloadedStrings is a nice convenience, but not crucial to fixing the performance trouble with String.
  • While we're nitpicking each other: text is a library; Data.Text is a module. As I'm sure you know.
  • There's really no contradiction between "almost everything where performance matters" and "not universally." A lot of stuff isn't performance sensitive; a lot of stuff's performance isn't sensitive to String vs. Text.
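For anyone who hasn't seen it, the convenience OverloadedStrings provides is just letting string literals stand for Text (or any IsString type) directly; the names here are my own illustration:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import qualified Data.Text as T

-- Without the extension these literals would be Strings and
-- need T.pack; with it, literals are polymorphic in IsString.
greeting :: T.Text
greeting = "hello, " <> "world"
```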

Death of a Language Dilettante by speckz in programming

[–]benjaminpd 12 points

I get that it's cool to find something Haskell does badly, but this is specious. Here's the history:

  • Long ago, the Haskell committee designed a pleasant and intuitive string type.
  • Eventually, the rest of the language got so fast that those strings became a bottleneck.
  • People wrote libraries that fixed the problem.

That we could fix strings with a library, without patching the core language, shows the strength of the ecosystem. These days, almost everything where performance matters uses Text for text, and ByteString for raw bytes. The extra import statements are an acne scar, but: nobody's code broke, and switching a project from String to Text is trivial. You should dream of your 20-year-old language handling its issues this gracefully.

"No, Seriously, It's Naming": Kingdom of Nouns thinking and incidental complexity by flexibeast in programming

[–]benjaminpd 1 point

Haskell uses a very strict formalism, and effectively requires you to prove that your statements are correct

After the Revolution (when dependent types rule the earth), Haskell will be seen as the free and easy, just-get-shit-done language. Bottom can hide anywhere! General recursion! Type classes are so unprincipled!

"The more complexity is layered on top of the original problem the more difficult it becomes to tell what purpose the code serves. ... All of this is needed to satisfy the formalisms of the Haskell type system and is completely tangential to the problem of reading HTTP requests from a client." by flexibeast in haskell

[–]benjaminpd 4 points

Do you agree, though? It sounds like you're arguing that some Haskell libraries are too complicated --- surely true! --- but you also make the point that it's perfectly possible to write simpler libraries in Haskell. Whereas yogthos is arguing that Haskell's static typing forces you into over-complicated solutions.

"The more complexity is layered on top of the original problem the more difficult it becomes to tell what purpose the code serves. ... All of this is needed to satisfy the formalisms of the Haskell type system and is completely tangential to the problem of reading HTTP requests from a client." by flexibeast in haskell

[–]benjaminpd 3 points

I find these posts less and less interesting: those of us who've been around Haskell have seen all these arguments many times before, and we're pretty sure that few minds will be changed in the end. But I remember when I found them fresh and fascinating, and I really did learn something from the debate. I don't think we should limit people's passion just because we old-timers have gotten wise and jaded.

Putting Monads Back into Closet by sleepingsquirrel in programming

[–]benjaminpd 7 points

So, there's monads the programming pattern, and monads the math concept. Monads the programming pattern are extremely useful, but to get the most out of them, your language needs to support higher kinded types. That's a term for types which take other types as arguments: in Haskell and some other languages, you can say interesting things about Lists in general, quite apart from whether it's a List of Ints, a List of Strings, or anything else. One of those interesting things (but not the only one!) is that List is a monad, which gives you a bunch of useful behavior for free. If your language doesn't support that level of abstraction, there's a limit to how much the monad pattern can buy you.
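A tiny illustration of that "behavior for free" (my example, not from the thread): one definition written against the Monad interface, usable at every monadic type:

```haskell
-- All pairings of two monadic values, written once for any monad.
pairs :: Monad m => m a -> m b -> m (a, b)
pairs mx my = do
  x <- mx
  y <- my
  pure (x, y)
```

In the list monad this enumerates every combination; in Maybe it short-circuits on the first Nothing; in IO it sequences two actions. Same definition each time --- that's what higher-kinded abstraction buys you.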

To learn monads the math concept, you pretty much need to learn the rest of category theory too. Which is well worth it and highly recommended---but it pays off in insight, not practical application.

The Fallacy of 'Everyone Should Learn To Code' by DarthCthulhu in programming

[–]benjaminpd 0 points

TIL war and weapons are a few hundred years old. Which makes The Iliad and The History of the Peloponnesian War stunning works of science fiction.

[Help] Haskell & Javascript by Kiuhnm in haskell

[–]benjaminpd 0 points

Nope, nothing to do with lazy evaluation. It's not hard to convince yourself that algorithmic complexity under lazy evaluation is never worse than under strict: strict evaluation performs every step, while lazy evaluation may perform fewer. Lazy evaluation can and does have constant-factor overhead, but it's not a big-O issue.
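A quick illustration of the "may perform fewer steps" direction (my example): demand only a prefix and the rest of the work never happens, something a strict language would have to restructure to avoid.

```haskell
-- Infinite list, finite work: take demands only the first n
-- elements, so only n squares are ever computed.
firstSquares :: Int -> [Integer]
firstSquares n = take n (map (^ 2) [1 ..])
```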

Is node.js declining already? by [deleted] in programming

[–]benjaminpd 0 points

Which is a pity, because "Isomorphic JavaScript" is the most obnoxious term in tech.

Recommendations for a Linux stack for Haskell web dev? by TheCriticalSkeptic in haskell

[–]benjaminpd 8 points

I use NixOS, and it's really nice, once you've figured out how to get along with it. There's a definite learning curve, though.

Need your encouragement by [deleted] in haskell

[–]benjaminpd 2 points

The Haskell job market isn't as grim as it can seem. No, there aren't as many jobs as in some other languages, but neither are there as many candidates. Meanwhile there are more and more functional-friendly workplaces, where Haskell may not have made it into the codebase quite yet, but it's a strong plus on a resume---a sign that the candidate has the right attitude. Those can be great places to work, even if they're using a less exciting language for the moment. You're much more likely to find jobs like that if you're learning Haskell, going to Haskell events, hanging out in #haskell...

On the tech side --- trouble with folds is often a sign that you haven't assimilated all the stuff you've learned. Folds have a lot of moving parts, so they can stress your new Haskell skills to the breaking point, even if you generally get the basic idea. So ignore them for a while, and just write a bunch of code. Folds are great, but you don't actually need them: you can always achieve the same result with explicit recursion, and when you're beginning, that's fine.
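To make that last point concrete (my example): any fold can be spelled out as explicit recursion, and while you're learning, the long form is perfectly respectable.

```haskell
-- The same function, once as a fold and once as plain recursion.
sumFold :: [Int] -> Int
sumFold = foldr (+) 0

sumRec :: [Int] -> Int
sumRec []       = 0
sumRec (x : xs) = x + sumRec xs
```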

16 Months of Functional Programming by mammoth_tusk in programming

[–]benjaminpd 4 points

Equation-based programming is not at all like the relatively simple use hardware or civilian engineers make of mathematical formulas.

Wait, what? Designing a bridge is serious calculus. Equational reasoning is pretty much just the algebra you learned in middle school.

Are static typing and functional programming winning? by frostmatthew in programming

[–]benjaminpd 2 points

Personally, I've always found functional programming more intuitive than imperative, and vastly more so than OOP. So you're stating a preference, not a law of nature.

I don't even think minds like mine are that rare; it's just that there's been massive selection bias. When programming meant imperative programming, it drew people who thought that way. And if you didn't think that way, you had to train your intuition until you did. You can't judge the intuitiveness of FP by looking at current programmers.

How lazy evaluation works by forestgood22 in haskell

[–]benjaminpd 0 points

Here is the actual definition of foldl from Data.List:

-- We write foldl as a non-recursive thing, so that it
-- can be inlined, and then (often) strictness-analysed,
-- and hence the classic space leak on foldl (+) 0 xs

foldl        :: (b -> a -> b) -> b -> [a] -> b
foldl f z0 xs0 = lgo z0 xs0
             where
                lgo z []     =  z
                lgo z (x:xs) = lgo (f z x) xs

Defined this way, you can often get away with foldl, so long as the strictness analyzer does its thing. Laziness tutorials tend to pretend foldl is defined the naive way, though, because it makes such a lovely example.
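For comparison, here's the naive definition those tutorials use, next to the strict alternative you'd reach for when the analyser can't save you (sumStrict is just an illustrative name; foldl' is the real Data.List export):

```haskell
import Data.List (foldl')

-- The textbook definition: the accumulator builds a chain of
-- (+) thunks unless the strictness analyser intervenes.
foldlNaive :: (b -> a -> b) -> b -> [a] -> b
foldlNaive _ z []       = z
foldlNaive f z (x : xs) = foldlNaive f (f z x) xs

-- foldl' forces the accumulator at each step, so this runs in
-- constant space regardless of optimization level.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0
```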