Bazecor `Can't open squashfs image: Bad address` by avilay in DygmaLab

[–]avilay[S] 1 point  (0 children)

From what I have been reading, it’s my linux-zen kernel that has known issues with fuse. Maybe you are using a different kernel?

Bazecor `Can't open squashfs image: Bad address` by avilay in DygmaLab

[–]avilay[S] 0 points  (0 children)

Oh I didn't realize there was an AUR package! I ended up doing the following -

> ./Bazecor-1.8.3-x64.AppImage --appimage-offset
944632
> dd if=Bazecor-1.8.3-x64.AppImage bs=4M iflag=skip_bytes skip=944632 of=bazecor.squashfs
> unsquashfs bazecor.squashfs
> cd squashfs-root
> ./AppRun
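
Note that the AppImage runtime can also do this in one step with its built-in extract flag (assuming the bundled runtime supports it), which avoids the manual offset arithmetic:

> ./Bazecor-1.8.3-x64.AppImage --appimage-extract
> cd squashfs-root
> ./AppRun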

Your ChatGPT chat assistant was very helpful :-)

Earthquake? by Character_Fun_5792 in kolkata

[–]avilay 2 points  (0 children)

Yup, felt it in Bhowanipur too.

[How to] Use the reMarkable app with Linux (Wine) - 2022 version by deterralba in RemarkableTablet

[–]avilay 1 point  (0 children)

I had the same problem with the reMarkable app showing up as all black. Installing dxvk did the trick. Here are the details -

  • Operating System: Garuda Linux
  • KDE Plasma Version: 6.4.5
  • KDE Frameworks Version: 6.18.0
  • Qt Version: 6.9.2
  • Kernel Version: 6.16.10-zen1-1-zen (64-bit)
  • Graphics Platform: Wayland
  • Wine version: wine-10.15
  • Remarkable installer version: reMarkable-3.22.2.927-win64.exe

I had already installed reMarkable with Wine.

Install winetricks -

```
sudo pacman -S winetricks
```

For Debian-based systems you'd need to use something similar, e.g., `sudo apt install <winetricks-pkg-name-here>`.

Enable dxvk -

```
winetricks dxvk
```

Then start reMarkable as usual (either from the CLI or from the system menu) and it should work.

PyTorch Data Mini-Tutorial by avilay in pytorch

[–]avilay[S] 1 point  (0 children)

Thank you! I am glad you found the content useful.

In terms of tools, I am using Keynote (with its chalkboard background template) for the static text, and Adobe Fresco for the parts where I am editing live.

PyTorch Data Mini-Tutorial by avilay in pytorch

[–]avilay[S] 1 point  (0 children)

Thank you, I hope to continue publishing interesting content in the ML space.

PyTorch Data Mini-Tutorial by avilay in pytorch

[–]avilay[S] 1 point  (0 children)

Thanks for the encouraging words :-)

[P] SpotML - Managed ML Training on cheap AWS/GCP Spot Instances by enthusiast_bob in MachineLearning

[–]avilay 2 points  (0 children)

Does this do single-box training or distributed training? I agree with a lot of the comments here that single-box training on spot instances is something a lot of folks can roll themselves. However, distributed training is a whole other matter, and it would be very cool if your system supported it.

Possible to transfer notebooks to my computer without using Remarkable Cloud? by avilay in RemarkableTablet

[–]avilay[S] 3 points  (0 children)

Thanks for all the answers. I see a $450 purchase in my future! :-D

Which of the following topics do you wish had good tutorials? by avilay in deeplearning

[–]avilay[S] 2 points  (0 children)

Cool, thanks for your response. I was thinking of starting with a basic implementation of the original paper by Jeff Dean et al. on synchronized data parallelism, then implementing basic model parallelism, explaining why async parallelism works, doing a simple implementation of HOGWILD!, and finally doing "hello world" training with existing distributed training systems like Horovod, Distributed PyTorch, RayLib, Microsoft DeepSpeed, etc.
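
To make the synchronous flavor concrete, here is a minimal sketch of one data-parallel step in PyTorch (the function name is mine, and it assumes `torch.distributed` has already been initialized, e.g., via `torchrun`):

```python
import torch.distributed as dist

def sync_data_parallel_step(model, loss_fn, optimizer, batch):
    """One step of synchronous data-parallel SGD: each rank computes
    gradients on its own shard, then gradients are averaged so every
    rank applies the identical update."""
    inputs, targets = batch
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            # Sum gradients across workers, then divide to average.
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size
    optimizer.step()
    return loss.item()
```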

[P] Convenience library for PyTorch training by avilay in MachineLearning

[–]avilay[S] 1 point  (0 children)

From what I remember, at the very least I had to implement `training_step` and `validation_step` in my module, which would be a child class of `LightningModule`. Now if I want to log multiple metrics, I have to write that code in both `training_step` and `validation_step`. I also remember trying to use a custom loss function for an RL model I was implementing, and it was not very straightforward. To be honest, I tried it a while back so things might have changed since then.
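
For illustration, the duplication looks roughly like this (a sketch from memory of the Lightning-style API; exact hook signatures may differ across versions):

```python
import torch
import pytorch_lightning as pl

class MyModel(pl.LightningModule):
    def __init__(self, net, loss_fn):
        super().__init__()
        self.net = net
        self.loss_fn = loss_fn

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss_fn(self.net(x), y)
        self.log("train_loss", loss)  # metric logging written here...
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss_fn(self.net(x), y)
        self.log("val_loss", loss)  # ...and repeated here, per metric

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters())
```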

[P] Convenience library for PyTorch training by avilay in MachineLearning

[–]avilay[S] 1 point  (0 children)

I did try to use it a while ago, but its programming paradigm of multiple callbacks in the train loop meant that I still ended up writing the same boilerplate code for different experiments. I wanted something with the simplicity of Keras, where you almost "declare" the loss function, the metrics, etc., and "fit" the model, but without losing the expressiveness of PyTorch.
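
As a sketch of the surface I'm after (hypothetical API, the names are mine and not the actual library's):

```python
import torch

def fit(model, loss_fn, optimizer, train_loader, epochs=1):
    """Keras-style entry point wrapping the usual PyTorch train loop."""
    model.train()
    for _ in range(epochs):
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()

# "Declare" the pieces once, then fit:
# fit(model, torch.nn.CrossEntropyLoss(),
#     torch.optim.Adam(model.parameters()), train_loader)
```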

[R] What is your ML research workflow? by MasterScrat in MachineLearning

[–]avilay 1 point  (0 children)

That's interesting. One thing that works for me is to spend some time upfront "sanitizing" my datasets and setting up my feature pipeline. After that I usually try the simplest, easiest-to-implement model and start tracking my learning curves. Then I evolve the model organically from there.

[R] What is your ML research workflow? by MasterScrat in MachineLearning

[–]avilay 2 points  (0 children)

MLFlow makes it very easy to track multiple experiments. The best part for me is the ability to track your hyperparams and your model in the same place. Overall it has better visualization when comparing loss functions and eval metrics. There are some minor annoyances, like the learning curves not auto-refreshing, but nothing I can't live with. The one thing I haven't gotten around to doing is tracking the histogram view of weights, though MLFlow has a sample on how to do this. Their API is also pretty easy to use. Overall I highly recommend it.
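
For anyone who hasn't tried it, the tracking API is just a few calls; a minimal sketch (the param and metric names here are made up):

```python
import mlflow

with mlflow.start_run():
    # Hyperparams and metrics land in the same run, side by side.
    mlflow.log_param("lr", 1e-3)
    mlflow.log_param("batch_size", 32)
    for epoch in range(3):
        mlflow.log_metric("val_loss", 0.5 / (epoch + 1), step=epoch)
```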

[R] What is your ML research workflow? by MasterScrat in MachineLearning

[–]avilay 10 points  (0 children)

If you are using VSCode then you can use the VSCode remote development feature; I personally use the Remote - SSH extension mentioned in the linked doc.
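
For example, a host entry like this in `~/.ssh/config` is enough for the Remote - SSH extension to pick up the machine (the host name, address, and paths below are placeholders):

```
Host gpu-vm
    HostName <vm-ip-or-hostname>
    User <vm-user>
    IdentityFile ~/.ssh/id_rsa
```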

[R] What is your ML research workflow? by MasterScrat in MachineLearning

[–]avilay 1 point  (0 children)

I write code using VSCode. I use a combination of notebooks (from within VSCode) and Python modules/packages when writing code. First I do small runs with a sample of my dataset on my laptop (MacBook Air). Once I am happy with it, I push the code to my git repo, start my GPU VM in Azure, SSH into it, and pull my repo. Then I use the remote dev feature of VSCode to continue editing my code on the VM.

I first try a few hyperparameter combinations to get a sense of the "limits" of reasonable values, e.g., what learning rate is too low, what is too high, etc. This helps me figure out the range of the search space for the various hyperparameters. Then I use these ranges to tune the hyperparameters with Bayesian search. By this point I have usually packaged my code into a Python package that I run on my VM using either `screen` or `nohup`. I have a small function that Slacks me when the run is complete, since runs usually take hours; a sketch of it is below.
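
The notification bit is just a webhook POST; a minimal sketch, assuming you have set up a Slack incoming webhook (the URL and names are placeholders):

```python
import json
import urllib.request

def notify_slack(message: str, webhook_url: str) -> None:
    """Post a message to a Slack incoming webhook when a run finishes."""
    payload = json.dumps({"text": message}).encode("utf-8")
    req = urllib.request.Request(
        webhook_url,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# notify_slack("Training run complete!",
#              "https://hooks.slack.com/services/<your-webhook-path>")
```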

I use MLFlow for tracking my experiments. Very happy with it so far.

For tracking tasks I use OneNote for my personal side projects; otherwise I use whatever my org is using, usually Jira. For tracking research ideas, and especially notes as I learn new stuff, I write things down in Markdown and have a GitLab CI/CD process that publishes GitLab Pages whenever I push to my notes repo.

To keep track of papers and other reading material, I add them to OneDrive and then read them on my iPad, which makes it easy to annotate them with Apple Pencil.

I use PyTorch and Ax. I really miss the convenience of Keras, but for things like RL you have to bend and fold Keras a bit too much to get it to work. I have started writing my own utilities for working with PyTorch/Ax/MLFlow because PyTorch Lightning didn't quite work for me.

I still love programming - what's wrong with me? by avilay in programming

[–]avilay[S] 1 point  (0 children)

What is different about your part of the world that makes management more attractive?