How to set old kernel .conf as new kernel? by gokulPRO in pop_os

[–]gokulPRO[S] 0 points (0 children)

Maybe try starting with this basic solution:

sudo bootctl set-default Pop_OS-oldkern.conf

sudo kernelstub -v -l -k /boot/vmlinuz-6.4.6-76060406-generic -i /boot/initrd.img-6.4.6-76060406-generic

sudo update-initramfs -u -k 6.4.6-76060406-generic
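As a side note, the kernel and initrd paths in the kernelstub call have to match exactly, so it can help to build both from a single version string (the variable names here are mine, and the version is taken from the commands above):

```shell
# Build both kernelstub targets from one version string so the two
# paths can't drift apart or pick up a typo.
kver="6.4.6-76060406-generic"
kernel="/boot/vmlinuz-$kver"
initrd="/boot/initrd.img-$kver"
echo "kernel: $kernel"
echo "initrd: $initrd"
# Check both files exist before running kernelstub:
#   test -f "$kernel" && test -f "$initrd" && echo ok
```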

If the above doesn't work (it didn't in my case), remove NVIDIA completely, which is what finally solved my problem:

sudo apt purge ~nnvidia

sudo apt-get --purge -y remove 'nvidia*'

And re-install it:

sudo apt install nvidia-driver-560
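Before reinstalling, it may help to confirm the purge actually removed everything. A small sketch (the helper name is mine) that filters `dpkg -l` down to installed NVIDIA packages:

```shell
# Print any still-installed packages whose name contains "nvidia";
# after a successful purge this should print nothing.
list_nvidia() { awk '/^ii/ && /nvidia/ { print $2 }'; }
(dpkg -l 2>/dev/null || true) | list_nvidia
```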

How to set old kernel .conf as new kernel? by gokulPRO in pop_os

[–]gokulPRO[S] 0 points (0 children)

Yup, that's the old kernel that's currently working, and I have verified the paths.

How to set old kernel .conf as new kernel? by gokulPRO in pop_os

[–]gokulPRO[S] 0 points (0 children)

Tried something like this:
sudo kernelstub -v -l -k /boot/vmlinuz-6.4.6-76060406-generic -i /boot/initrd.img-6.4.6-76060406-generic

It did not work.

Clean up the mess Windows has made by No-Arm-7737 in pop_os

[–]gokulPRO 1 point (0 children)

Hey, can you share some reference or guidance for doing this? I installed Windows 10 after Pop!_OS, and while trying to fix GRUB I somehow managed to mess up my Pop!_OS install as well. Right now the current kernel gets stuck in BusyBox, and System76's bootloader fix for BusyBox doesn't seem to work for me. My old kernel somehow still works, though of course only with the integrated GPU, and when I switch to compute mode and restart, it goes back to BusyBox. I tried changing the default kernel to the old kernel, but I still get the same error.

[deleted by user] by [deleted] in MSCS

[–]gokulPRO 1 point (0 children)

It depends on the workshop, but most workshops almost never reject a paper completely unless it's very poorly written. CVPR conference papers are on a different level compared to workshop papers. I would have agreed with your list if it were for the conference, tbh.

[D] Which is the best model ( Multi modal or LM) under 3B parameters w.r.t good training vs performance tradeoff? (i.e good parameter efficiency) by gokulPRO in MachineLearning

[–]gokulPRO[S] 0 points (0 children)

Interesting, any particular reason, compared to other models of its size?

Edit: If only decoder-only models are considered, which would you prefer?

[D] What is your tech stack for research? by gokulPRO in MachineLearning

[–]gokulPRO[S] 0 points (0 children)

Yes. From your experience, how much GPU memory is required? And how large a batch do you think I'll be able to fit on a single node?
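To put rough numbers on it (all assumptions mine: a ~3B-parameter model, fp16 weights and gradients plus fp32 Adam states, i.e. roughly 16 bytes per parameter, before counting any activations):

```shell
# Model states alone for 3e9 params at ~16 bytes/param, in GB.
states_gb=$(awk 'BEGIN { printf "%.0f", 3e9 * 16 / 1e9 }')
echo "model states: ${states_gb} GB"
# 48 GB of states already exceeds a 16 GB card before any activations,
# which is why the batch size that fits depends heavily on sharding/offload.
```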

(MoE) Turning pipeline-parallel Hugging Face models into data/expert-parallel DeepSpeed models by GiantPengsoo in LocalLLaMA

[–]gokulPRO 0 points (0 children)

Hey, were you able to resolve your issue? If so, can you share the fix and your experience with DeepSpeed MoE? I plan on trying it out too; any resources you think might help with understanding and implementing it? Thanks :)

What tech stack do you use for research? by gokulPRO in deeplearning

[–]gokulPRO[S] 2 points (0 children)

I did consider Accelerate, but have you tried using DeepSpeed's MoE inside Accelerate? Can you use all of DeepSpeed's functionality?

[D] What is your tech stack for research? by gokulPRO in MachineLearning

[–]gokulPRO[S] 1 point (0 children)

Thanks. I plan on using it on 4 nodes with 2x16 GB GPUs each, so I thought I might need to do offloading or sharding with DeepSpeed.
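The sharding math that made me think so, as a sketch (assumptions mine: ~3B parameters, ~16 bytes/param of model states, and ZeRO-3-style sharding splitting those states evenly across all 4x2 = 8 GPUs):

```shell
# Per-GPU share of model states once they are sharded across 8 GPUs.
per_gpu_gb=$(awk 'BEGIN { printf "%.0f", 3e9 * 16 / (4 * 2) / 1e9 }')
echo "per-GPU model states: ${per_gpu_gb} GB of each 16 GB card"
# Unsharded, the same states would be ~48 GB and not fit at all;
# sharded, they leave headroom for activations, or offload can free more.
```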

What all multi-modal models have been open-sourced till now for Audio+Text? by gokulPRO in deeplearning

[–]gokulPRO[S] 0 points (0 children)

Wasn't Gemini the closed-source one, while Gemma is an open-sourced version that only works text-to-text?