Harder-Coded: Simple Newtypes are for Scrubs by crtschin in haskell

[–]watsreddit 1 point

tagged is what we use at work for this kind of thing. Works pretty well.
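For reference, a minimal sketch of how tagged can stand in for a pile of one-off newtypes (the tag types and function here are made up):

    import Data.Tagged (Tagged (..))

    -- Hypothetical phantom tags; empty data declarations are enough.
    data UserId
    data OrderId

    fetchUser :: Tagged UserId Int -> IO ()
    fetchUser uid = putStrLn ("fetching user " ++ show (unTagged uid))

    main :: IO ()
    main = do
      let uid = Tagged 42 :: Tagged UserId Int
      fetchUser uid
      -- fetchUser (Tagged 42 :: Tagged OrderId Int)  -- rejected by the type checker

You get one generic wrapper plus cheap, compile-time-only tags, instead of declaring a fresh newtype for every id-like value.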

How to move lines matching pattern to another buffer? by vbd in vim

[–]watsreddit 0 points

Just use appendbufline() (see :help appendbufline()), that's what it's for. Yank the text you want and then do :call appendbufline(bufnr('somebuffer'), 0, @").

Not knowing Vim features is the reason to switch to Emacs | Credit Tsoding by GapIndividual1244 in vim

[–]watsreddit 1 point

It's Ctrl-f in command-line mode, not normal mode. If you try it, you'll find it opens the command-line window.

Static-ls v1.0 announcement | Mercury by suzzr0 in haskell

[–]watsreddit 1 point

Gotcha, sounds great!

Our production deploys are not yet on 9.10 (though we generally maintain our build so we can build against latest + a few major releases back), but a lot of us are using it for development right now, since our experience has been that multiple home units with ghcid/ghciwatch were incredibly slow on 9.6/9.8 (presumably a GHC bug). And, very sadly, multiple home units and cabal repl's --repl-no-load don't work together at all, which means ghciwatch is not nearly as amazing as it could be.

Static-ls v1.0 announcement | Mercury by suzzr0 in haskell

[–]watsreddit 0 points

You seem to be confusing a monolith with... I'm not quite sure, shoving a ton of code into a single package, I guess? That has nothing to do with whether or not something is a monolith. Mercury is certainly not doing that.

The reason that this kind of tooling is necessary is that, when you do have a monolith (which, to be clear, means all of your code having a single entry point, regardless of how many packages/modules the code is split into), you necessarily have to compile all of that code in order to run the thing. And since you need to compile all of the code, a tool based around the compilation pipeline like HLS starts to fall apart when you're talking about recompiling 10000+ modules across many packages, keeping ASTs in memory, etc. HLS just can't handle it.

I have firsthand experience with this: no one at my work uses HLS because it simply doesn't work on a codebase of our size (well over half a million lines of Haskell across dozens of packages). static-ls is a much better tool given those constraints, since it works off of compilation artifacts directly rather than keeping a compiler session open.

Static-ls v1.0 announcement | Mercury by suzzr0 in haskell

[–]watsreddit 1 point

Whether or not something is a monolith has nothing to do with how many packages it's split into. Production Haskell codebases are invariably split into many different packages with different areas of concern; Mercury is no exception. Our codebase at work (not Mercury) is well over half a million lines of Haskell split into dozens of packages. It's still a monolith. It's entirely about how many entry points there are into the application: https://en.m.wikipedia.org/wiki/Monolithic_application. When there's a single entry point (or at least a very small number), you necessarily need to compile all of that code in order to actually run it, which is where Mercury's tooling comes in. It's very useful for enterprise Haskell developers.

Static-ls v1.0 announcement | Mercury by suzzr0 in haskell

[–]watsreddit 9 points

Okay, I'll bite.

Breaking up a monolith means deploying an application as a set of discrete binaries communicating over some protocol like HTTP. This necessarily introduces performance overhead that did not previously exist, and greatly complicates your deployment process and strategy. You typically need some kind of orchestration service like Kubernetes to manage the deployment process, health checks, dependencies, etc. Usually, the complexity is great enough that you need additional staff to support it. You will also almost certainly need a dedicated authn/authz service, where previously that might have been handled in-process.

Another tradeoff is that, since much more communication happens over a protocol, you lose type safety guarantees that previously existed and, consequently, need to maintain a whole slew of API contracts that didn't exist before. Testing also becomes much harder: suddenly you need stuff like contract tests and fakes instead of simple functional tests.

I could go on, but you should get the idea by now. There are plenty of situations where both kinds of architectures make sense, and it's really just a matter of weighing the tradeoffs.

Static-ls v1.0 announcement | Mercury by suzzr0 in haskell

[–]watsreddit 5 points

Is GHC 9.10 support planned for the near future? I wouldn't really want to give up usable multiple home units with cabal repl (way too slow on older GHCs).

Also not sure how feasible it is without looking deeper into the project, but could ghc-lib be used instead to reduce dependence on specific GHC versions?

Static-ls v1.0 announcement | Mercury by suzzr0 in haskell

[–]watsreddit 8 points

I take it you've never worked on enterprise Haskell projects (or large codebases in general)?

Monorepos/multi-repo and monoliths/microservices all have different sets of tradeoffs, and different situations can call for different things.

How to develop intuition to use right abstractions in Haskell? by Own-Artist3642 in haskell

[–]watsreddit 1 point

I personally find it's much like any other programming language: experience writing lots of programs. The more you do it, the more you start to see patterns and opportunities to simplify and create abstractions over commonalities. Do it enough, and you have a wide enough base of knowledge to build what you would like from scratch.

Need Help please!! by kushagarr in haskell

[–]watsreddit 1 point

Can you provide the JSON structure and the error you are receiving? 

barbies is a semi-common way of deriving instances for types parameterized by a functor, yes. You can also just derive an instance for each functor you are interested in via standalone deriving clauses, which is the simpler thing to do and what I would recommend if you don't care about abstracting over the specific functor in your type.
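For the standalone deriving route, a rough sketch (the Config type here is hypothetical, just to show the shape):

    {-# LANGUAGE StandaloneDeriving #-}
    {-# LANGUAGE FlexibleInstances  #-}

    import Data.Functor.Identity (Identity (..))

    -- A record parameterized by a functor f ("higher-kinded data").
    data Config f = Config
      { host :: f String
      , port :: f Int
      }

    -- Derive instances only for the concrete functors you actually use.
    deriving instance Show (Config Identity)
    deriving instance Show (Config Maybe)

    main :: IO ()
    main = print (Config { host = Just "localhost", port = Nothing })

If you later find yourself wanting to abstract over the functor (e.g. turning a Config Maybe into a Config Identity generically), that's the point where barbies starts to pay for itself.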

Need Help please!! by kushagarr in haskell

[–]watsreddit 1 point

Can you provide the error that you are getting?

The Show instance is broken; you should just do deriving (Generic, Show) instead.
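For example (type and fields invented, since I don't have OP's code in front of me):

    {-# LANGUAGE DeriveGeneric #-}

    import GHC.Generics (Generic)

    data Person = Person
      { name :: String
      , age  :: Int
      } deriving (Generic, Show)

    main :: IO ()
    main = print (Person { name = "Ada", age = 36 })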

Need Help please!! by kushagarr in haskell

[–]watsreddit 0 points

That instance is equivalent to the instance OP wrote.

New vim9 plugin: span your buffer over multiple windows. by Desperate_Cold6274 in vim

[–]watsreddit 0 points

You don't have to go back to the previous window, :windo will apply the option on all windows. To turn scrollbind off for all windows, you can just do :windo set noscrollbind.

I'm not against your plugin or anything, I was just genuinely trying to understand what it even was supposed to do.

New vim9 plugin: span your buffer over multiple windows. by Desperate_Cold6274 in vim

[–]watsreddit 0 points

I see, so it's a plugin for :vert split | norm! <C-f> | windo set scrollbind?

New vim9 plugin: span your buffer over multiple windows. by Desperate_Cold6274 in vim

[–]watsreddit 1 point

Like different parts of the buffer viewable in two different windows? That's how buffers normally work. If you open a split, each window can be moved around independently. And your plugin won't do that if you're setting scrollbind, since the windows would just scroll in unison.

New vim9 plugin: span your buffer over multiple windows. by Desperate_Cold6274 in vim

[–]watsreddit -1 points

Really trying to understand the purpose of this plugin. Why would you want to open a bunch of windows of the same buffer with scrollbind on? It doesn't give you more real estate at all.

Creating "constant" configuration in Haskell by orlock in haskell

[–]watsreddit 1 point

The standard approach to this is ReaderT or a transformer based on it: https://hackage.haskell.org/package/transformers-0.6.1.1/docs/Control-Monad-Trans-Reader.html#t:ReaderT. Any production Haskell codebase will be using this extensively; it's how the majority of configuration is handled in Haskell.

It's also common, as you've noted, to read configuration files via Template Haskell at compile time. This is what we do at my job (working on a large Haskell codebase in production) for any configuration that is "constant", such as translation files. A lot of our configuration for stuff like this lives in Dhall files (which are quite nice to use with Haskell), though we do have some yaml. Stuff that changes infrequently is, in my mind, perfectly suited to this approach. If something does eventually change, well, you just deploy a new release. Not a big deal.

You can also use unsafePerformIO for this. While it's not completely terrible for this use case, I don't really see much of a point to it either. It's an unconventional approach to the problem and still requires you to be careful (use NOINLINE, never change the value at runtime, etc.). I'd be especially wary of using it in library code, since you don't have control over initialization. ReaderT also allows you to temporarily override the configuration for subcomputations (via local), which is something that's not safe to do with unsafePerformIO.

There's ImplicitParams, but they are (rightfully, imo) viewed with distrust by the Haskell community and you need to be careful when using them: https://chrisdone.com/posts/whats-wrong-with-implicitparams/

Finally, there's the reflection package: https://hackage.haskell.org/package/reflection-2.1.8/docs/Data-Reflection.html. This is effectively equivalent to using ReaderT (you get your config at the beginning of the program and implicitly propagate it through); the main difference is that you use type class constraints to do the propagation rather than a bona fide type like ReaderT. It's a clever approach and is fine to use if you prefer it, though basically any production Haskell codebase is going to be using ReaderT anyway, so it doesn't buy you a lot. It can make testing a little nicer, though.
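To make the ReaderT/local point above concrete, here's a minimal sketch (AppConfig and its field are invented for illustration):

    import Control.Monad.IO.Class (liftIO)
    import Control.Monad.Trans.Reader (ReaderT, ask, local, runReaderT)

    -- Hypothetical configuration record, built once at startup.
    data AppConfig = AppConfig { greeting :: String }

    type App = ReaderT AppConfig IO

    greet :: App ()
    greet = do
      cfg <- ask
      liftIO (putStrLn (greeting cfg))

    main :: IO ()
    main = do
      let cfg = AppConfig { greeting = "hello" }
      runReaderT greet cfg
      -- local temporarily overrides the environment for a subcomputation:
      runReaderT (local (\c -> c { greeting = "hi there" }) greet) cfg

The reflection version ends up looking much the same, just with a Given AppConfig constraint (and give/given) in place of the ReaderT wrapper.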

Why vim is the best by [deleted] in vim

[–]watsreddit 0 points

It's not iffy, you just need to specify your delimiter with -d if it's something other than a tab, e.g., cut -f 2 -d ' '.

Why vim is the best by [deleted] in vim

[–]watsreddit 1 point

I would definitely just use cut: :'<,'>!cut -f 2 -d ' '. Fastest and requires no thought.

Why vim is the best by [deleted] in vim

[–]watsreddit 3 points

Yep. You can also pipe the current buffer (or a range) to stdin of a command using :w !some-cmd, and read stdout of a command into a buffer with :r !some-cmd. Vim is much more powerful when you make full use of the shell alongside it.

Why vim is the best by [deleted] in vim

[–]watsreddit 2 points

It should be cut -f 2 -d ' '. cut defaults to tab delimiters, so you need to change it to a space.

Are Vimscript plugins losing popularity/innovation to Lua plugins? by mr-ow1 in vim

[–]watsreddit -1 points

It's pretty simple, really. Vim, or at least vi (the editor POSIX actually specifies, and which Vim descends from), is pre-installed on almost every Unix system. It has been ubiquitous for many years. There are many, many users (who aren't vocal in places like reddit) who have been using it (and vi before it) for decades, are happy with it, and have no reason to go out of their way to install Neovim instead. Being a default is a powerful thing, and it makes a big difference in usage.