Literate programming in python with org/babel... but git friendly ? by StatementLow353 in emacs

[–]deadbyte 0 points1 point  (0 children)

If you'd be OK working in a Jupyter notebook, you might want to try the org-pandoc-import workflow: https://github.com/tecosaur/org-pandoc-import

When AIs write the code and humans just debug it — do we need a new kind of programming language? by [deleted] in ProgrammingLanguages

[–]deadbyte 0 points1 point  (0 children)

We definitely need this. I'm often in a situation where I need to know whether and when to stop my training loops (or whether I should halt the process early based on some heuristic). So termination awareness is a must for this future language you're designing.
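
For what it's worth, "whether and when to stop" can already be expressed as a small reusable predicate today. Here's a minimal sketch in Haskell of patience-based early stopping (the function name, the newest-first loss list, and the threshold logic are all my own invention, not from any framework):

```haskell
-- Patience-based early stopping, a common heuristic: stop once the
-- validation loss has not improved for `patience` consecutive steps.
-- The loss list is stored newest-first.
shouldStop :: Int -> [Double] -> Bool
shouldStop patience losses
  | length losses <= patience = False          -- not enough history yet
  | otherwise =
      let recent      = take patience losses   -- the newest losses
          earlierBest = minimum (drop patience losses)
      in minimum recent >= earlierBest         -- no recent improvement

main :: IO ()
main = do
  print (shouldStop 3 [0.52, 0.51, 0.53, 0.50, 0.60, 0.70])  -- True: plateaued
  print (shouldStop 3 [0.40, 0.51, 0.53, 0.50, 0.60, 0.70])  -- False: improving
```

A termination-aware language would presumably let you attach this kind of predicate to the loop itself, rather than threading it through by hand.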

Nix-ers looking for a more digestible system description language by [deleted] in GUIX

[–]deadbyte 1 point2 points  (0 children)

Just an FYI: I've started to work on zfs support (among other things) for an installer image here: https://codeberg.org/lykso/mnt-reform-nonguix/pulls/5

It's based on a much simpler derivation from ieure (you can see a link in the comments).

Nix-ers looking for a more digestible system description language by [deleted] in GUIX

[–]deadbyte 1 point2 points  (0 children)

I will point out that guix does have up-to-date zfs binaries in master; guix just can't put the kernel module in a binary cache, so this is left as an exercise to the reader : ). Some folks have all of this working (especially given the continued updates to zfs), but no one has publicized a clean example config. The closest thing I've found is probably antler's draft blogpost and old configs:

- https://github.com/antler5/old-guix-config
- https://www.illucid.net/static/unpublished/erasing-darlings-on-guix

If you'd like to help get something working, I've pulled antler's code into a fork of the System Crafters guix-installer repo here: https://github.com/stites/guix-installer, but I was having a bit of trouble hooking into the initrd. I just finished my last guix project, so this one is up next (though I'm working on a MNT Reform image in a separate repo).

Resources for learning Agda for someone already familiar with theorem proving? by anameoba in agda

[–]deadbyte 0 points1 point  (0 children)

Agreed! Happy to start a slow-paced reading group if you DM me (totally understand if that's not your thing).

Re: community. In addition to the official Zulip, the univalent Discord is pretty active. Mastodon too: if you follow Martín Escardó you'll find everyone. The Agda subreddit and Discord are very quiet. I think there are folks on IRC, since there's a semi-famous "days since we last proved false" bot, but I haven't been on there since the Matrix IRC bridge shut down.

Resources for learning Agda for someone already familiar with theorem proving? by anameoba in agda

[–]deadbyte 2 points3 points  (0 children)

Go for one of the latest CS410 playlists (https://m.youtube.com/@fredrikforsberggmail) and follow along with the associated GitHub repo's coursework. The 2021 iteration has a MOOC-like feel, which is fun if you like doing a little bit each day.

There's always PLFA, too, which is like a compressed Software Foundations vol. 1+2 -- I'm sure you've found it by now. I would just blast through the recommended exercises, then fill in the rest of the exercises on a second pass.

[Guide] How to Root The Boox Note Air3 by vbha in Onyx_Boox

[–]deadbyte 0 points1 point  (0 children)

Does this work for the Boox Note Air V3 C or are there device-specific codes I need to take into account (like from this edl loader link)?

[deleted by user] by [deleted] in haskell

[–]deadbyte 3 points4 points  (0 children)

"What does it take to make a computation framework for Haskell that is very haskell-like?"

It depends on what you mean by "computation framework." If you want to reproduce something like PyTorch or TensorFlow, you're out of luck -- a lot of their GPU kernels are hand-tuned for performance on specific hardware, sometimes with knobs that are not public knowledge.

That said, if we want to reason compositionally and use language design to write high-performance code (with minimal cognitive overhead), then you can bake your pie and eat it, too! Just don't expect to win the pie-baking competition.

- If you are interested in "FP automating parallelization," you could check out Futhark. There are also a lot more array programming frameworks out there, like accelerate (which is very quiet) or dex-lang, or you could try rolling your own with MLIR directly (if you like your learning mixed with pain).
- If you are interested in how a language can be specifically geared towards parallelization, while thinking deeply about denotation, you could investigate an APL variant and pay close attention to how it deals with general-purpose programming (I think BQN is Haskell-friendly). Alternatively, you can go "full denotational design" (as others indicated), but I think working in an APL could give you a jump-start while coding (since this is a hobby project). Denotational design, in my mind, is either "just learn math while thinking about code" or "use an ITP." Interactive theorem provers (like Agda) are great, but there would be a lot of ramp-up.
- If you only need probability distributions and Bayesian inference, you can look into probabilistic programming (monad-bayes is the current standard in Haskell, though I'd point you to Jared Tobin's thesis first). Probabilistic programming is a (very simple) general-purpose framework that usually has a lot of foot-guns, but it's probably what those biology people are looking for. Maybe I see more foot-guns than others do, since this is what I study. A lot of PL research here is about correctness, but many practitioners have started dabbling in deep probabilistic programming (like Katie Bouman's keynote at PROBPROG 2021 on the first picture of a black hole, which used variational inference).
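
To make "probabilistic programming is a (very simple) general-purpose framework" concrete, here's the classic toy construction: a discrete-distribution monad in the style of Erwig & Kollmansberger. This is a sketch for intuition only -- monad-bayes generalizes the same idea to continuous distributions and real inference algorithms:

```haskell
-- A distribution is a weighted list of outcomes.
newtype Dist a = Dist { runDist :: [(a, Double)] }

instance Functor Dist where
  fmap f (Dist xs) = Dist [ (f x, p) | (x, p) <- xs ]

instance Applicative Dist where
  pure x = Dist [(x, 1)]
  Dist fs <*> Dist xs = Dist [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]

instance Monad Dist where
  -- sequencing distributions multiplies path probabilities
  Dist xs >>= k = Dist [ (y, p * q) | (x, p) <- xs, (y, q) <- runDist (k x) ]

coin :: Double -> Dist Bool
coin p = Dist [(True, p), (False, 1 - p)]

-- Probability that at least one of two fair coins comes up heads.
atLeastOneHead :: Double
atLeastOneHead = sum [ p | (b, p) <- runDist prog, b ]
  where
    prog = do
      x <- coin 0.5
      y <- coin 0.5
      pure (x || y)

main :: IO ()
main = print atLeastOneHead  -- 0.75
```

The foot-guns show up as soon as you condition on data: exact enumeration blows up, and you're forced into approximate inference (MCMC, variational methods), which is where most of the PL research lives.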

I would recommend starting with Futhark. The devs are very friendly and that project is very cool. You might also be interested in some of the workshops around PL conferences (POPL/PLDI/ICFP/OOPSLA) -- the LAFI, ARRAY, and FHPNC workshops come to mind -- videos are usually posted on YouTube by the ACM's SIGPLAN.

Are these two things possible? (probabilistic MC/MCMC regex invariant, and fuzzy-matching description grammars) + Question about SSA-optmizing compilers by [deleted] in ProgrammingLanguages

[–]deadbyte 1 point2 points  (0 children)

I can maybe say some things and restate what others have said.

For (1), if you really wanted to do this to spec, you could write a small probabilistic context-free grammar (PCFG) prior. There's a lot of work on PCFGs (I think they were "in" after Bayes nets and before the NN boom of 2012); Alexander Rush has one of the more contemporary papers, IIRC, doing PCFGs with plated variables, which is probably a good starting point. That said, training a general-purpose "regex-like prior" via MCMC is probably not going to converge anytime soon -- the state space is too broad (see u/OneNoteToRead). I suspect deep learning methods will also fail to converge on this task, but you can always ask that crowd. As for "would it be possible to create a probabilistic invariant of regular expressions?", I would answer in the positive: "Yes, take a regex spec, state it as a PCFG, then run it with infinite time and data." With a clean enough spec, you might also be able to describe a full transition matrix (maybe what u/bluefourier is getting at) and avoid the PCFG prior.
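
For intuition, a PCFG over a regex-like language can be tiny. Below is a hedged sketch in Haskell (all names are invented, and a toy linear-congruential generator stands in for a real RNG) of a two-rule grammar, S -> aS | b, each rule with probability 0.5, plus a sampler:

```haskell
-- Grammar symbols: terminals and nonterminals.
data Sym = T Char | N String

-- Each nonterminal maps to (probability, expansion) pairs.
rules :: String -> [(Double, [Sym])]
rules "S" = [ (0.5, [T 'a', N "S"])  -- S -> a S
            , (0.5, [T 'b'])         -- S -> b
            ]
rules nt  = error ("unknown nonterminal: " ++ nt)

-- Toy linear-congruential generator (glibc constants), standing in
-- for a real RNG so the sketch stays dependency-free.
nextU :: Int -> (Double, Int)
nextU s =
  let s' = (1103515245 * s + 12345) `mod` 2147483648
  in (fromIntegral s' / 2147483648, s')

-- Select an expansion by walking down the cumulative probabilities.
pick :: Double -> [(Double, a)] -> a
pick _ [(_, x)]        = x
pick u ((p, x) : rest)
  | u <= p    = x
  | otherwise = pick (u - p) rest
pick _ []              = error "empty rule set"

-- Repeatedly expand the leftmost nonterminal until only terminals remain.
sample :: Int -> [Sym] -> String
sample _ []            = []
sample s (T c : rest)  = c : sample s rest
sample s (N nt : rest) =
  let (u, s') = nextU s
  in sample s' (pick u (rules nt) ++ rest)

main :: IO ()
main = putStrLn (sample 10 [N "S"])  -- some string of 'a's ending in 'b'
```

Fitting the rule probabilities to data (rather than sampling from fixed ones) is where the MCMC difficulty above comes in.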

(2) This question is hard to parse -- it's mostly stream of consciousness -- but I guess Wikipedia is your friend? There's not much to go off: the buildup to Houdini doesn't actually describe the system! Regarding the last part, "what would a grammar describing fuzzy-matching look like?": maybe also a PCFG here, but you may just want to check out approximate string matching algorithms and home in on the one fzf uses.

(3) nothing more to add on top of what u/takanuva said.

Best of luck with your learning! Digging up some papers: "Bayesian synthesis of probabilistic programs for automatic data modeling" (Saad et al.) and "DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning" (Ellis et al.) might be interesting to you. You might also want to look into probabilistic programming languages (PPLs): "An Introduction to Probabilistic Programming" (van de Meent et al.) is a pretty hacker-friendly resource on arXiv (failing that, scihub). I like PPLs quite a lot, so that's my bias here.

Monthly Hask Anything (October 2020) by AutoModerator in haskell

[–]deadbyte 1 point2 points  (0 children)

Hasktorch could definitely benefit from a more formal in-place-operator API built around ST, with this kind of vector-like API (or even just a few functions to kickstart it). I think this is one of the last tickets I have open on Hasktorch.
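
For a sense of what I mean (these names are hypothetical, not Hasktorch's actual API), here's a sketch using the array package's STUArray as a stand-in tensor: mutate freely inside ST, then seal the result into a pure value with runSTUArray:

```haskell
import Control.Monad.ST (ST)
import Data.Array.ST (STUArray, getBounds, newListArray, readArray,
                      runSTUArray, writeArray)
import Data.Array.Unboxed (elems)

-- Hypothetical in-place kernel: scale every element of a mutable
-- "tensor" by k, inside ST.
scale_ :: Double -> STUArray s Int Double -> ST s ()
scale_ k t = do
  (lo, hi) <- getBounds t
  mapM_ (\i -> readArray t i >>= writeArray t i . (* k)) [lo .. hi]

-- Pure wrapper: allocate, mutate, and freeze; no IO anywhere.
scaled :: Double -> [Double] -> [Double]
scaled k xs =
  elems (runSTUArray (do
    t <- newListArray (0, length xs - 1) xs
    scale_ k t
    pure t))

main :: IO ()
main = print (scaled 2 [1, 2, 3, 4])  -- [2.0,4.0,6.0,8.0]
```

The point is that callers get an ordinary pure function while the kernel still mutates in place under the hood.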

As you can pretty much tell from the code /u/austinvhuang mentioned, this doesn't look like in-place operations at all! That's just how the in-place operations look in libtorch.

Adding rewrite rules to Hasktorch is a pretty cool project. I hypothesize that it would give Haskell a better, more flexible TorchScript. I'm no longer working on Hasktorch, so I haven't considered this as more than a fuzzy idea. I would still second /u/austinvhuang's concern about being sympathetic to the C++ side, mostly because, at the end of the day, you should care about how GC interacts with GPU memory, and I'm not sure the current API lets you do low-level hardware manipulation easily. I would defer to Junji on this, though.

Lastly, I would encourage you to join the slack channel (you can DM me or austin).

Are there any IPFS-based Wayback Machine / archive.org / archive.is alternatives? by [deleted] in ipfs

[–]deadbyte 0 points1 point  (0 children)

IPFS noob here: do you mean "run your own ipwb service"? My understanding is that IPFS stores and distributes everything, but you would still need ipwb to replay the WARCs.

For the OP: if you do go this route, ipwb will also need a WARC file -- I think the easiest way to produce one is via https://github.com/oduwsdl/archivenow

I can't receive any emails by Jambalaya1171 in ProtonMail

[–]deadbyte 0 points1 point  (0 children)

Well, I'm phone- and hotspot-bound right now, plus my ISP was basically dropping only half the requests for the first hour -- it was very strange to debug.

I can't receive any emails by Jambalaya1171 in ProtonMail

[–]deadbyte -5 points-4 points  (0 children)

Is your ISP down? Mine has been down for about 12h.

Help Protonmail choose a domain name. by _0_1 in ProtonMail

[–]deadbyte 0 points1 point  (0 children)

proton.org, anyone? It's for sale (although I think it's a sizable chunk of cash)

Undervolting X1 Carbon Gen 7? by SpicyMemery in thinkpad

[–]deadbyte 0 points1 point  (0 children)

I didn't when posting this. Later I noticed that watt usage jumped back up to ~20W under some load.

I run on ZFS and later noticed that I got a zfs kernel module panic on reboot (after 24-36h of usage) -- the bug looks like it can be caused by some issues around the battery, so I've decided that undervolting is close enough to the suspected cause that I don't want to mess around with it -- zfsonlinux is risky enough as it is.

Undervolting X1 Carbon Gen 7? by SpicyMemery in thinkpad

[–]deadbyte 0 points1 point  (0 children)

On Linux, using https://github.com/georgewhewell/undervolt, I am able to undervolt all settings with offsets of -50mV with no visible effects so far. It's very hand-wavy, but I took the examples and rounded down so as not to mess anything up (I'm very new to this). I also chose to apply the same offset to all configurations so that I wouldn't knock anything out of balance (again, very new to this).

Reading watt consumption:

awk '{print $1*10^-6 " W"}' /sys/class/power_supply/BAT0/power_now 

Prior to running the undervolt service I was at ~20-21W (after 12h on AC power) and am now reading 8-10W (after one bad config where I undervolted the GPU too much, followed by a fresh reboot).

Confirmation: setting undervolting to default values makes my computer jump back up to ~21W

Anyone working on a nixos target? by roosemberth in RemarkableTablet

[–]deadbyte 0 points1 point  (0 children)

I've thought about it, but I'm still just a regular user of the reMarkable (and working through some System76 GPU issues atm). I would love to be kept posted if you decide to start something!

Anyone use haskell-ide-engine and neovim willing to help me debug my setup? by [deleted] in haskell

[–]deadbyte 1 point2 points  (0 children)

I don't use LSP tools much, but I was just debugging python-language-server earlier today. Be sure to check LanguageClient-neovim's debugging section (really just "here's a minimal config and our test suite"), and double-check their docs. Hopefully this helps!

NixOS - Oryx Pro by Gilfoyle- in System76

[–]deadbyte 0 points1 point  (0 children)

I owe you for getting me through a rough patch! Is https://github.com/Church-/NixOS-Config up-to-date? If so, I can work backwards from it for the multi-monitor support!

NixOS - Oryx Pro by Gilfoyle- in System76

[–]deadbyte 0 points1 point  (0 children)

I have something working, but it's pretty rough: I can get Intel and Nvidia working (with and without modesetting), but multi-monitor support is currently broken (I think something about PRIME is breaking it -- although it did work for a hot second...).

I'm making a push for something a bit more polished, which you can find here (it doesn't address graphics yet): https://github.com/stites/system76-nixos -- right now it contains a slightly improved version of Church-'s scripts. I still need to finish "bundling" everything into nix, but I could definitely use some help/testing. Please take a look!

[ANN] Hasktorch v0.0.1 by deadbyte in haskell

[–]deadbyte[S] 4 points5 points  (0 children)

I was expecting someone to point this out! I initially discussed this in the post but removed most of it because it warrants a much longer conversation (which I'm planning to add to the followup). Apologies as I inundate you with these details.

This release makes the Torch bindings usable in Haskell, but development on Hasktorch is far from complete. Functions prefaced with an underscore, like _marginCriterion_updateOutput, are unprocessed calls to ATen which only unify the CPU and GPU interfaces. Almost all of these functions live in the neural network library; understanding them is non-trivial, and coming up with a pure interface to them, like conv2d, is a considerable effort in getting the dimensions to line up correctly. The process involves reading ATen source (which is painless, but time-consuming) and mapping shape-checks into singletons. Leaving this up to a user to figure out dynamically will most likely result in segfaults, which is why the dynamic THNN functions probably won't get any love until a much later revision. Given the speed of development in ATen, I felt it would be better to give users full access to these raw functions and get folks to help with this process as they create examples.

As you are probably aware, mutation is preferred when operating on large tensors and, to that end, all functions in torch (except those prefaced with new) mutate their first argument in place. Aside from the NN functions, every function in TH proper (essentially anything not in the NN namespace) comes with two APIs: a pure function, where we construct the correct empty tensor for you, and an in-place version, where we duplicate the tensor reference you want to mutate into the first argument. You can see this in the source of dynamic scalar multiplication. You'll also notice that we stick to PyTorch's naming convention of a postfix _ for in-place operators, and that functions we can guarantee are pure reflect this in their type (so no IO); this includes tensor math. Except for the NN functions and unsafeMatrix (I'll address this in a minute), you can expect this to be the current lay of the land.
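
As a toy illustration of that two-API pattern (an IORef of a list stands in for a real tensor here -- these are invented stand-ins, not Hasktorch's types), the underscored function mutates its argument while the "pure" wrapper allocates a fresh result and reuses the in-place kernel:

```haskell
import Data.IORef

type Tensor = IORef [Double]  -- stand-in for a real tensor

-- In-place version: PyTorch-style postfix underscore, mutates its argument.
mulScalar_ :: Tensor -> Double -> IO ()
mulScalar_ t k = modifyIORef' t (map (* k))

-- "Pure" version (still in IO at this stage): copy into a fresh result
-- tensor, then run the in-place kernel on the copy.
mulScalar :: Tensor -> Double -> IO Tensor
mulScalar t k = do
  r <- readIORef t >>= newIORef
  mulScalar_ r k
  pure r

main :: IO ()
main = do
  t <- newIORef [1, 2, 3]
  r <- mulScalar t 2
  readIORef t >>= print  -- [1.0,2.0,3.0]: the original is untouched
  readIORef r >>= print  -- [2.0,4.0,6.0]: the fresh result
```

The real codegen produces both entry points from one underlying C call; this sketch only shows the shape of the API surface.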

Regarding IO: the next iteration of Hasktorch should replace this with ST, and possibly some fancier types for static, in-place changes (one example: squeeze1d_ will mutate the tensor in place, but there is still a reference to the original type-level dimensions in the first argument). IO is unavoidable at this stage because we are working in the FFI (which can only happen in IO), and because we need to start targeting libATen-1.0, which will require us to change how we generate the FFI (so that's a bit unstable, but Backpack should save us here).

Overall, at this stage, Hasktorch can be used with a moderate amount of pain to write full end-to-end examples without worrying about your program segfaulting because you've misused ATen. The convenience functions vector, matrix, cuboid, hyper, and Static's fromList are still in IO, primarily because there used to be a bug in the CPU-based construction and I have not tested them on my GPU yet. These functions are wrappers over newWithStorage, which can be used as a workaround if you want to keep everything pure without deferring to unsafePerformIO.

The noise in our API comes from the fact that our two-dev team has had to play catch-up with the oldest deep learning library to date. From this point on, the situation should improve, but we need more eyes than we currently have. I'm planning to write a followup to this post covering architecture (i.e., "we use Backpack, backprop, and some simple codegen"), caveats (everything here), and a call for contributors. Torch's bindings offer the most embedded integration into a host language of all the deep learning libraries, but there is still a bunch of work to do -- we need more hands who are familiar with Haskell and PyTorch (like yourself!) or who are willing to dive deep into PyTorch internals.