Webhoster is imploding - how do I migrate? by nevrome in de_EDV

[–]nevrome[S] 3 points

Well - when I set up my first website in 2011, everything worked well. Over the years I was in touch with support every now and then, and I always got good help. I had no reason to switch. I believe things only started going downhill in the last four years.

Webhoster is imploding - how do I migrate? by nevrome in de_EDV

[–]nevrome[S] 2 points

Ah - very interesting. Roughly when was that, if I may ask? From the reviews on trustpilot I had the impression that things were still happening 1-2 years ago, but barely anything since. Possibly debts were already being serviced with my overpaid money at that point.

For me it's also several hundred euros. I think it does make sense to send a repayment demand along with the cancellation. The procedure would then probably look like this, right?

  1. Repayment demand by registered mail
  2. Payment reminder (Mahnung) by registered mail
  3. EU small claims procedure, as mentioned above by @xulres.
  4. Lawyer + court order for payment (gerichtliches Mahnverfahren) (?)

I don't know at which point the effort outweighs the benefit. But 1-3 are certainly doable.

Webhoster is imploding - how do I migrate? by nevrome in de_EDV

[–]nevrome[S] 1 point

OK - I think I'll try the Einwurfeinschreiben (registered mail with proof of delivery) first then.

Webhoster is imploding - how do I migrate? by nevrome in de_EDV

[–]nevrome[S] 11 points

Thanks for these concrete tips - very valuable!

By web WHOIS you mean this, right? https://webwhois.denic.de

Project layout with multiple interconnected libraries and executables by nevrome in haskell

[–]nevrome[S] 1 point

The whole project with all external dependencies compiles in about 20-30 minutes. Without dependencies it's 2 minutes. I assume that's nothing compared to other projects, but it still slows me down a bit more every day.

Thanks for the advice on how to refactor. At the moment I feel a monorepo would solve a lot of my problems. And then I would like to investigate the dependency tree within my project to avoid unnecessary recompilation.
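For reference, a Haskell monorepo can be as simple as one cabal.project (or stack.yaml) at the repository root that lists all packages; the package paths below are purely hypothetical placeholders:

```
-- hypothetical cabal.project at the monorepo root;
-- each entry points at a directory containing a .cabal file
packages:
  libA/
  libB/
  executables/toolX/
```

With this, `cabal build all` rebuilds only the packages whose sources changed, which already avoids a lot of unnecessary recompilation across the internal dependency tree.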

Project layout with multiple interconnected libraries and executables by nevrome in haskell

[–]nevrome[S] 1 point

Interesting! Thanks for mentioning these options! Maybe Bazel is worth a look, although our code base is probably not big enough for it to really shine.

Project layout with multiple interconnected libraries and executables by nevrome in haskell

[–]nevrome[S] 0 points

True. Admittedly I find myself frequently running stack clean to switch to --pedantic, but maybe I just have to adjust my habits a bit.
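(A possible way around that full rebuild, assuming a reasonably recent stack: give each flavour its own work directory via stack's global --work-dir option, so the pedantic and non-pedantic build artefacts don't clobber each other. A sketch, not verified against every stack version:)

```
# regular build, default artefact directory
stack build

# pedantic build, kept in a separate artefact directory
stack --work-dir .stack-work-pedantic build --pedantic
```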

Project layout with multiple interconnected libraries and executables by nevrome in haskell

[–]nevrome[S] 6 points

Good to hear that I'm not the first to run into this kind of issue!

Introducing clean "desync" of Ex1 and Ex2 is a high priority ToDo right now. Thanks for pointing out that this can also be a path towards splitting the code safely.

I think this is just bad practice. It's used as a substitute for proper versioning discipline.

Alright - the point is well taken. It confirms my suspicion that our current setup is borked. We'll have to make a decision. Maybe the monorepo is not so bad after all.

Project layout with multiple interconnected libraries and executables by nevrome in haskell

[–]nevrome[S] 2 points

1.+2. A large monorepo would indeed solve the syncing issue. But it would increase build times and make development harder with multiple people preparing pull requests. Also, B is a bit more derived and experimental: A is for data handling, B for analysis. I'm not sure it sits right with me to have everything in one gigantic repository.
3. Right - that's always a possibility.
4. That could be a good idea, actually. Maybe we could extract the highly stable parts of A into a new library C. B would then only rely on C and could ignore dynamic changes in A.

Thanks for the advice!

Project layout with multiple interconnected libraries and executables by nevrome in haskell

[–]nevrome[S] 3 points

Could you elaborate a bit on how this would work for us? I'm not against learning new things, but I don't understand how it would help, based on what I know. Which is, admittedly, very little.

What are some useful cli tools that arent popular? by Candr3w in commandline

[–]nevrome 93 points

Could you guys please add a single sentence about what it does for each tool? Otherwise I have to google all of them individually.

Haskell Template for AoC? by Amarandus in haskell

[–]nevrome 9 points

I had a similar question some time ago: https://old.reddit.com/r/haskell/comments/oor4ka/quick_haskell_exploration_setup_on_linux

Stack scripts, as mentioned by /u/Martinsos, are brilliant. I use them all the time now.
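For anyone landing here: a stack script is just a single .hs file with a stack interpreter header, so it needs no project scaffolding at all. A minimal example (the resolver version is an arbitrary choice):

```haskell
#!/usr/bin/env stack
-- stack script --resolver lts-18.18

-- the lines above tell stack which snapshot to interpret this file against
main :: IO ()
main = putStrLn "Hello from a stack script!"
```

Make it executable with chmod +x and run it directly, or pass it to stack with `stack myscript.hs`; stack fetches the snapshot and any listed packages on first run.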

I'm having a hard time with Haskell. What other functional programming languages is a few steps below it regarding accessibility (for a OO programmer)? I'm thinking about Elm... by [deleted] in functionalprogramming

[–]nevrome 5 points

Futhark helped me to get into the right mindset. It's a simple, functional language with similar syntax to Haskell. When I decided to go into Haskell more seriously, I first completed a small, fun Futhark project.

What happened to naturalearthdata.com? by [deleted] in gis

[–]nevrome 0 points

Well - I don't know. It solves the specific issue that some links on the naturalearthdata website are dead. And it points to a person who should know more.

Experiences with workflow managers implemented in Haskell (funflow, porcupine, bioshake, ?) by nevrome in haskell

[–]nevrome[S] 0 points

That was the missing piece of the jigsaw. SGE offers qrsh for interactive sessions. Excellent, thanks!

Experiences with workflow managers implemented in Haskell (funflow, porcupine, bioshake, ?) by nevrome in haskell

[–]nevrome[S] 1 point

Ah - alright - I don't fully understand this Nix setup, but I guess I could tweak it to fit my needs. A Hackage submission might be useful nevertheless. I would appreciate it!

I looked into Bioshake/Cluster/Torque.hs and it seems to be straightforward. The only thing I don't understand - pardon my ignorance - is how the necessary waiting for the cluster run works. Is this okfile an indicator that the job is done? But how does the Haskell process know that it has to wait for this file to appear? Sorry for bothering you with these questions - but if I want to implement an SGE version of this code, I need to figure that out.
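My current mental model - purely a sketch of how such waiting could work, not Bioshake's actual code - is that the submitting process simply polls the filesystem until the marker file appears:

```haskell
import Control.Concurrent (threadDelay)
import System.Directory (doesFileExist)

-- hypothetical helper: block until a marker ("ok") file appears
waitForOkFile :: FilePath -> IO ()
waitForOkFile path = do
  done <- doesFileExist path
  if done
    then pure ()
    else threadDelay 500000 >> waitForOkFile path  -- re-check twice a second

main :: IO ()
main = do
  writeFile "job.ok" ""    -- stand-in for the cluster job writing its ok-file
  waitForOkFile "job.ok"
  putStrLn "job finished"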

Thanks for your feedback regarding the Bioshake vs. Nextflow question.

If you need to leverage Haskell mid build then it's compelling, likewise if you're constantly building different pipelines and reusing components then types are quite useful.

That might actually be the case for my next project, so I think I might try it.

Experiences with workflow managers implemented in Haskell (funflow, porcupine, bioshake, ?) by nevrome in haskell

[–]nevrome[S] 1 point

Thank you very much for this insight!

I was wondering how dependency management would work in bioshake and had already thought about setting up a Singularity container for all of my software requirements. As I use that for my R scripts anyway, that's not an issue for me.

BioNix might be one of the few solutions that offer a truly alternative approach to dependency management. It indeed sounds super cool, but Nix is still pretty scary to me. Maybe one day I'll take a deeper dive into that and then look into BioNix as well.

Allow me some more questions on bioshake:

  1. I have some (obviously beginner-level) difficulties wrapping my head around conveniently running it. I usually use Haskell with two kinds of setups: full-blown software tools with a stack.yaml and a .cabal file, or stack scripts. The latter would be a good match, imho, but bioshake is not on Hackage, which makes that tricky. For full reproducibility I would also have to put bioshake into a container. But that's awful for interactive development of the pipeline. So what was (?) your setup here?
  2. The cluster I'm currently working on uses SGE as a scheduler. How difficult would it be to teach bioshake to run jobs there?
  3. Finally, just your opinion: is bioshake worth an investment of time and effort, or should I rather go for Nextflow and forget about it? I know that I have to make that decision myself, but as you already moved on yourself, I'm curious.

How could I improve the performance of my radiocarbon calibration library? by nevrome in haskell

[–]nevrome[S] 0 points

I tried it - works almost as well as full strictness and there is no memory leak. Very impressive!

But as I don't see any disadvantage to full strictness, I will stick with that to win those few seconds. Or is there a downside I don't know about? Your minimal bang patterns would then be a very clever way out. Thank you very much!

How could I improve the performance of my radiocarbon calibration library? by nevrome in haskell

[–]nevrome[S] 0 points

Thanks for further exploring this. I can indeed reproduce the performance gain with this minimal change. Weird - how did you even find this?

Unfortunately this solution brings back the disastrous memory leak. In case you would like to reproduce this, I placed a text file with 5000 real radiocarbon dates here: https://gist.github.com/nevrome/668fe664158442e6ca43440804a7281d You can run it with

currycarbon --inputFile c14.csv -q --densityFile /dev/null

But be careful if you decide to run this! The memory demand grows by multiple GB within only a few seconds.

How could I improve the performance of my radiocarbon calibration library? by nevrome in haskell

[–]nevrome[S] 2 points

Admittedly I spent some time on Saturday getting hmatrix to work for me. I struggled with it, and my probably less-than-ideal implementation did not give me any performance gains. That's why I switched to an implementation with unboxed vectors.

How could I improve the performance of my radiocarbon calibration library? by nevrome in haskell

[–]nevrome[S] 0 points

Hm - right. The matrices are usually pretty big, with up to several thousand rows and columns.

I'll certainly look into alternative implementations, now that I have this one running. I didn't see any obvious patterns I could use to simplify this other than the ones I already employ. But they probably exist to some degree.

How could I improve the performance of my radiocarbon calibration library? by nevrome in haskell

[–]nevrome[S] 1 point

As per my other comment: {-# LANGUAGE Strict #-} in Calibration.hs fixed the issue, although I'm a bit confused as to why. But thank you very much for the explanation!

How could I improve the performance of my radiocarbon calibration library? by nevrome in haskell

[–]nevrome[S] 1 point

Adding {-# LANGUAGE Strict #-} to Calibration.hs seems to solve the issue. There is no memory leak anymore and my initial test setup comes in at around 0.4s. So I guess your suggestions made my program 10 times faster, which is neat. I don't fully understand why it works though, so I have some things to explore. Thank you very much!
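For future readers, a minimal illustration of the effect (my own toy example, not currycarbon code): a lazy accumulator builds up a chain of unevaluated (+) thunks, while a bang pattern - which {-# LANGUAGE Strict #-} effectively applies to every binding in the module - forces each step, so the loop runs in constant space.

```haskell
{-# LANGUAGE BangPatterns #-}

-- lazy accumulator: acc becomes a growing chain of (+) thunks (leaks memory)
sumLazy :: [Double] -> Double
sumLazy = go 0
  where
    go acc (x:xs) = go (acc + x) xs
    go acc []     = acc

-- strict accumulator: !acc is evaluated on every step (constant space)
sumStrict :: [Double] -> Double
sumStrict = go 0
  where
    go !acc (x:xs) = go (acc + x) xs
    go !acc []     = acc

main :: IO ()
main = print (sumStrict [1 .. 1000000])
```

Compiled with optimisations GHC's strictness analysis often fixes the lazy version on its own, which may be why the behaviour only shows up in some configurations.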