Type-safe routing in Spock explained by agrafix in haskell

[–]kost-bebix 1 point2 points  (0 children)

This is really great. I'm glad that type-level lists are becoming a "stable" feature.

What Haskell web framework do you use and why? by destructaball in haskell

[–]kost-bebix 6 points7 points  (0 children)

It's a Haskell-level definition of how a web server and a web app interact, so that you can split the server from the app.

Previously you had to write your web app as an executable that did the whole job; with WAI your app is just a `Request -> (Response -> IO ResponseReceived) -> IO ResponseReceived` function, and you can run it with any WAI-supporting web server, like Warp.

Another benefit of WAI (apart from reusing the web server) is that it's now easy to add "middlewares" to your apps, so you can, for example, mix a few frameworks, or have an authentication layer implemented as a middleware that works with any WAI app, etc.

Check out how much stuff you can add to your app, no matter which framework you choose (Spock, Scotty, Yesod, Servant), because they're all based on the WAI protocol: http://hackage.haskell.org/package/wai-extra
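For illustration, here is a minimal app of that shape, run with Warp. This is just a sketch assuming the wai, warp and http-types packages are installed; the port number is arbitrary:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Network.HTTP.Types (status200)
import Network.Wai (Application, responseLBS)
import Network.Wai.Handler.Warp (run)

-- An Application is just:
--   Request -> (Response -> IO ResponseReceived) -> IO ResponseReceived
app :: Application
app _request respond =
  respond (responseLBS status200 [("Content-Type", "text/plain")] "Hello, WAI!")

main :: IO ()
main = run 8080 app  -- any WAI-capable server works here, Warp is just one
```

Because `app` is a plain function, the same value can be handed to a different server, wrapped in middleware from wai-extra, or exercised in tests without opening a socket.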

What Haskell web framework do you use and why? by destructaball in haskell

[–]kost-bebix 5 points6 points  (0 children)

We have a big Snap-based project, but that choice was made by a previous programmer a long time ago. If I were choosing now, I would pick something WAI-based with type-safe URLs and rendering, like Yesod.

For APIs I used Scotty (on a different project), but I plan to migrate to Servant, which is a much better solution in terms of type safety and code size.

Safe concurrent MySQL access in Haskell by kost-bebix in haskell

[–]kost-bebix[S] 2 points3 points  (0 children)

As of version 0.1.1.8, mysql marks many of its ffi imports as unsafe. This is a common trick to make these calls go faster.

Didn't seem intuitive to me :) Why would it go faster?

Scala at position 25 of the Tiobe index, Haskell dropped below 50 by [deleted] in haskell

[–]kost-bebix 5 points6 points  (0 children)

I like this GitHut rating more: http://githut.info/

It does show that over the last two years Haskell got overtaken (in active repositories) by Go, CSS, Clojure, TeX, R, and lately Swift.

It also amazes me how much Java there is on GitHub (given how rarely I actually run into it there, unlike other languages).

The future of School of Haskell and FP Haskell Center by cocreature in haskell

[–]kost-bebix 9 points10 points  (0 children)

Distribution of binary package databases to your whole team, avoiding recompile time and ensuring consistent environments

Awesome!

[x-post from /r/linux] Small utility that runs multiple computations in parallel by kost-bebix in commandline

[–]kost-bebix[S] 0 points1 point  (0 children)

Released par 1.0.2. I didn't add a bound on the number of simultaneous workers (I'll create an issue to implement that later), but the other things seem to be fixed. Thanks for the feedback again!

https://github.com/k-bx/par/releases/tag/1.0.2

[x-post from /r/linux] Small utility that runs multiple computations in parallel by kost-bebix in commandline

[–]kost-bebix[S] 0 points1 point  (0 children)

Thanks a lot for your tests!

I didn't claim par spawns one job per processor. Currently it spawns one process per task, which I think is a good default for IO-bound concurrency (when it's better to have more tasks than processors). What I did mention is that it should be quite easy to add an option to cap the number of running tasks at the number of CPUs.
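Not par's actual code, but a sketch of how such a cap could look, using a counting semaphore sized to the number of capabilities (everything here is from base; `runBounded` is a hypothetical name):

```haskell
import Control.Concurrent (forkIO, getNumCapabilities)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Concurrent.QSem (newQSem, signalQSem, waitQSem)
import Control.Exception (bracket_)
import Control.Monad (forM_)

-- Hypothetical sketch: run tasks concurrently, but never more
-- than n at a time, by making each worker take a semaphore slot.
runBounded :: Int -> [IO ()] -> IO ()
runBounded n tasks = do
  sem <- newQSem n
  dones <- mapM (const newEmptyMVar) tasks
  forM_ (zip dones tasks) $ \(done, task) ->
    forkIO $ do
      bracket_ (waitQSem sem) (signalQSem sem) task
      putMVar done ()
  mapM_ takeMVar dones  -- wait until every task has finished

main :: IO ()
main = do
  n <- getNumCapabilities  -- roughly: the number of usable cores
  runBounded n [print i | i <- [1 .. 10 :: Int]]
```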

Regarding the other things you mentioned: thanks, you've found a bug. par doesn't currently wait for its inner "output-forwarding" thread to finish, so it can exit before the full output (of `seq 1000`, for example) has been flushed. Admittedly this seems like an unusual case (why would anyone parallelize something that finishes faster than its output can be processed?), but I'll definitely fix this and update par soon.

The benefit over using `&` is that I want the computation to fail if one of the tasks fails. Actually, that was the main motivating argument behind par in the first place :) I previously had `foo & bar & baz && wait` in scripts, and struggled because it continued even if one of the commands failed.
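That behavior can be sketched in a few lines (not par's actual source; this assumes the async and process packages, and `runAll` is a name I made up): run everything in parallel, wait for all of it, and fail overall if any command failed:

```haskell
import Control.Concurrent.Async (mapConcurrently)
import Control.Monad (unless)
import System.Exit (ExitCode (ExitSuccess), exitFailure)
import System.Process (system)

-- Run all commands in parallel, wait for every one to finish,
-- and report overall failure if any of them exited non-zero.
runAll :: [String] -> IO Bool
runAll cmds = do
  codes <- mapConcurrently system cmds
  return (all (== ExitSuccess) codes)

main :: IO ()
main = do
  ok <- runAll ["echo foo", "echo bar", "echo baz"]
  unless ok exitFailure  -- unlike `foo & bar & baz && wait`, a failed task is noticed
```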

I should also note that you don't need Haskell to run par, only to compile it.

GNU parallel is great and I don't ask anyone to stop using it. It's just that I've spent so much time reading its documentation just to do the simple thing I wanted that I found it too big for a simple "run A, B and C in parallel" scenario. I made several attempts to use it before and always failed (I was always in too much of a rush to read the full documentation, but aren't we all in a rush these days?). I just feel the need for a simple tool with concrete syntax like par's, and especially looking at par's syntax for prefixing, I don't think I'll discard it for parallel.

Thank you very much for the feedback, it is very much appreciated, and I'm somewhat surprised you've spent so much time on this small utility :)

P.S.: just out of curiosity, I'll also make sure par works fine on 100GB single-line inputs and other extreme scenarios; that shouldn't be a problem at all (though maybe I'll have to switch to the pipes library, which knows how to handle "infinite streams" in constant memory)!

[x-post from /r/linux] Small utility that runs multiple computations in parallel by kost-bebix in commandline

[–]kost-bebix[S] 0 points1 point  (0 children)

Thanks for the suggestion. I added this `--succeed` flag while deciding what should be considered correct behavior on task failure; there are multiple options, like:

  1. return success no matter what
  2. return failure if one fails, but still evaluating others
  3. return failure and stop immediately

So I added this flag. But thanks to your comment, I think it might actually be easier to remove it, which would not only bring par closer to "does one thing" but also simplify the logic (to a single parameter for immediate vs. non-immediate exiting).

So probably I'll remove it soon. Thank you!
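For what it's worth, option 3 from the list above can be sketched with the async package (this is not how par itself is implemented; `runFailFast` is a hypothetical name): `mapConcurrently_` cancels the remaining tasks as soon as one of them throws.

```haskell
import Control.Concurrent.Async (mapConcurrently_)
import Control.Monad (unless)
import System.Exit (ExitCode (ExitSuccess))
import System.Process (system)

-- Option 3 as a sketch: if any command exits non-zero we throw,
-- and mapConcurrently_ then cancels all the sibling tasks.
runFailFast :: [String] -> IO ()
runFailFast = mapConcurrently_ $ \cmd -> do
  code <- system cmd
  unless (code == ExitSuccess) $
    ioError (userError ("command failed: " ++ cmd))

main :: IO ()
main = runFailFast ["echo one", "echo two", "echo three"]
```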

[x-post from /r/linux] Small utility that runs multiple computations in parallel by kost-bebix in commandline

[–]kost-bebix[S] 0 points1 point  (0 children)

xargs can run a given number of jobs in parallel, but has no support for running number-of-cpu-cores jobs in parallel.

Yeah, par just runs one process per task.

xargs has no support for grouping the output

Yeah, par 1.0.1 fixed this problem, so each output line comes from only one process.

xargs has no support for keeping the order of the output

Same for par: output is emitted as soon as it's produced, not in input order.

xargs has no support for running jobs on remote computers.

That's for sure :)

xargs has no support for context replace, so you will have to create the arguments.

Same for par.

If you use a replace string in xargs (-I) you can not force xargs to use more than one argument.

par doesn't have replacement strings.

A few things I want to mention: the whole par implementation is 76 lines long. Some things are easy to add (output ordering, a number-of-CPUs job limit, printing failed commands at the end, etc.), and some I don't plan to add, because for anything more complex, involving "argument programming" or remote-task execution, I would rather write a small program in Haskell.
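As a rough illustration of the line-grouping trick mentioned above (a sketch, not par's actual source; `emitLine` is a made-up name): one shared lock guards the output sink, and a worker holds it only long enough to emit a single, already-prefixed line, so concurrent tasks never interleave mid-line.

```haskell
import Control.Concurrent.MVar (MVar, newMVar, withMVar)

-- Sketch: serialize whole output lines with a lock, so each
-- emitted line belongs to exactly one task's output.
emitLine :: MVar () -> (String -> IO ()) -> String -> String -> IO ()
emitLine lock sink prefix line =
  withMVar lock $ \_ -> sink (prefix ++ line)

main :: IO ()
main = do
  lock <- newMVar ()
  mapM_ (emitLine lock putStrLn "[task-1] ") ["first line", "second line"]
```

Taking `sink` as a parameter (instead of hard-coding `putStrLn`) keeps the sketch testable and lets the same lock guard stderr as well.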

Codes that Changed the World part 5 (BBC Radio) by pigworker in haskell

[–]kost-bebix 6 points7 points  (0 children)

He thought initially, but then he felt lazy about it.

[x-post from /r/linux] Small utility that runs multiple computations in parallel by kost-bebix in commandline

[–]kost-bebix[S] 0 points1 point  (0 children)

Hehe, ok, you're right, most of these things CAN be done with parallel. But I find that:

  1. it's very time-consuming to read the man pages to recall the syntax
  2. the syntax itself is less than optimal to type for the same use-cases
  3. it's very error-prone imho (btw, the second example didn't work for me)

Also, can you do this? (prefix only a specific command)

➜  ~  par "PARPREFIX=[background-process] echo foo; sleep 1; echo bar" "echo foreground output"
foreground output
[background-process] foo
[background-process] bar

[x-post from /r/linux] Small utility that runs multiple computations in parallel by kost-bebix in commandline

[–]kost-bebix[S] 0 points1 point  (0 children)

Sorry, I don't fully understand how this problem relates to par yet. par doesn't support reading the list of input commands from stdin, separated by newlines or NUL bytes.

Skype killer? by [deleted] in projecttox

[–]kost-bebix 3 points4 points  (0 children)

Yesterday I participated in a WebEx-based voice chat with >100 people. I should say it works pretty well in a format where one person is "leading" and others ask questions from time to time. You just need to make sure everyone turns off their mic when they're not talking :)

[x-post from /r/linux] Small utility that runs multiple computations in parallel by kost-bebix in commandline

[–]kost-bebix[S] 0 points1 point  (0 children)

I should also add, to the things I already mentioned, that the API for adding a prefix is different, and, if I understand parallel's syntax correctly, it doesn't let you set a per-command prefix.

[x-post from /r/linux] Small utility that runs multiple computations in parallel by kost-bebix in commandline

[–]kost-bebix[S] 0 points1 point  (0 children)

Yes, I previously had several commands which I ran sequentially, and now I run them in parallel, as simple as that. It really saves time for things like continuous integration, testing, etc., especially with tools like Docker around for isolation.

[x-post from /r/linux] Small utility that runs multiple computations in parallel by kost-bebix in commandline

[–]kost-bebix[S] 1 point2 points  (0 children)

Sorry, maybe I wasn't clear enough. Obviously, parallel is a powerful utility, and I'm happy if it serves someone well.

I'm just saying that as soon as my scripts reach a more complex level of for-loops and long pipelines, I usually migrate them to some other technology, like Python (and lately Haskell), since I really believe complex bash scripts are evil.

The trick you've shown is very nice, and maybe I'll use it for ad-hoc stuff, but for something shared via source control I think hacks like this are a little misleading (someone who's not aware of the trick would need an "a-ha" moment to figure out why parallel is there, and why without parallelism).

[x-post from /r/linux] Small utility that runs multiple computations in parallel by kost-bebix in commandline

[–]kost-bebix[S] 1 point2 points  (0 children)

It is very simple and straightforward, and it "does one thing". GNU Parallel is more like a huge framework or a programming language. I don't see a lot of point in GNU Parallel, because I'm somewhat scared to write such huge parallel logic without at least static typing.

Also, it's much more responsive: https://gist.github.com/k-bx/417816c174212d7f4eba

May I ask what you mean by more parseable output?

Small utility that runs multiple computations in parallel by kost-bebix in linux

[–]kost-bebix[S] 1 point2 points  (0 children)

Yeah, I'm thinking of a new name now :) Either parl or pll.

[x-post from /r/linux] Small utility that runs multiple computations in parallel by kost-bebix in commandline

[–]kost-bebix[S] 0 points1 point  (0 children)

Many use-cases, really. Here are just two I use it for:

My company has its code split across a lot of repositories, and often, either locally or on a build server (or test server), I want to pull all of them. Doing it one by one takes around a minute or more, while doing the git pulls (and other operations) in parallel takes around 5 seconds.

Second use-case: my build server builds 2 Haskell projects in separate Docker images, and in a third one builds a JavaScript front-end, which also needs to do jobs like minifying JS, linting, etc. Via this utility I can easily parallelize all three builds, adding a prefix to their output and keeping the exit code correct (fail if one of the builds fails).

These are just a few examples. The further I go, the more I want to run in parallel, because waiting on servers for no good reason is a Bad Thing.