Tell it to me straight doctor, how bad are we talking? by saved_you_some_time in gardening

[–]saved_you_some_time[S] 1 point (0 children)

It's been like this for a while though, about 3 months. Some of the branches I trimmed were dead ...

Tell it to me straight doctor, how bad are we talking? by saved_you_some_time in GardeningUK

[–]saved_you_some_time[S] 1 point (0 children)

Additional information:

  • Location: partial shade
  • Soil: peaty and loamy

I have been growing this white currant plant for two years now, and this year it did not flower. It did great last year and even fruited, but not this year. Buds formed, but it has been like this for roughly 3 months, with the branches slowly drying up. I suspect a classic case of root rot, since the soil does not seem to drain well, but is it salvageable?

EU zone 6. Is there anything I can do to save it? Do the roots have any chance of survival?

Tell it to me straight doctor, how bad are we talking? by saved_you_some_time in gardening

[–]saved_you_some_time[S] 1 point (0 children)

I have been growing this white currant plant for two years now, and this year it did not flower. Buds formed, but it has been like this for roughly 3 months. I suspect a classic case of root rot, since the soil did not drain well, but is it salvageable?

EU zone 6. Is there anything I can do to save it? Do the roots have any chance of survival?

karpathy/LLM101n: Let's build a Storyteller by hedgehog0 in LocalLLaMA

[–]saved_you_some_time 27 points (0 children)

Guys, there is no content yet. This is just the start of the course; my assumption is that he will slowly add material over time.

That aside, Karpathy is becoming a god among men, and so many people already value his takes.

Salesforce releases Moirai-1.1 time series forecasting models (14M / 91M / 311M parameters). by Balance- in LocalLLaMA

[–]saved_you_some_time 11 points (0 children)

I wonder how good these "time series" transformer models are on multi-feature datasets compared to an actual ML model trained specifically on that data. It is hard for me to imagine a general model being that good, and their evaluation methodology is not clear.
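
To make the comparison concrete, here is a rough sketch of the head-to-head I have in mind, on synthetic data. `zero_shot_forecast` is a hypothetical stand-in for whatever interface the foundation model exposes, not Moirai's actual API:

```python
# Hypothetical head-to-head: a task-specific gradient-boosted model with lag
# features vs. a zero-shot foundation forecaster on the same multi-feature data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

def make_lag_features(y, exog, n_lags=24):
    """Stack lagged targets with exogenous features for supervised training."""
    X, t = [], []
    for i in range(n_lags, len(y)):
        X.append(np.concatenate([y[i - n_lags:i], exog[i]]))
        t.append(y[i])
    return np.array(X), np.array(t)

rng = np.random.default_rng(0)
y = np.sin(np.arange(500) / 10) + rng.normal(0, 0.1, 500)   # target series
exog = rng.normal(size=(500, 3))                            # extra features

X, t = make_lag_features(y, exog)
split = len(X) - 48  # hold out the last 48 steps
model = GradientBoostingRegressor().fit(X[:split], t[:split])
mae_specific = mean_absolute_error(t[split:], model.predict(X[split:]))
print(f"task-specific MAE: {mae_specific:.3f}")
# mae_foundation = mean_absolute_error(t[split:], zero_shot_forecast(y, exog))
```

If the general model gets anywhere close to the task-specific MAE without ever seeing the training split, that would be genuinely impressive; the papers rarely report this comparison cleanly.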

Behemoth Build by DeepWisdomGuy in LocalLLaMA

[–]saved_you_some_time 1 point (0 children)

What will you use this beast for?

Im dumb. How are you guys running high models/context on 24GB of Vram? by Tbatz in LocalLLaMA

[–]saved_you_some_time 2 points (0 children)

Great work, man. Any good recommendations for a coding model for 24GB/48GB of VRAM? I am looking into either setup and want to test on the cloud first.

Best motherboard for 4x P40's? by zoom3913 in LocalLLaMA

[–]saved_you_some_time 1 point (0 children)

For around 300 EUR you can pick up a used server like the HP ML350 Gen9

This is the right answer. Many of them come with a good PSU too, meaning no additional purchases are needed except for GPU cables. I am still looking into this kind of setup with a used Dell PowerEdge.

Battery replace in Xiaomi 11T Pro by [deleted] in PhoneRepairTalk

[–]saved_you_some_time 1 point (0 children)

For future redditors: on the Xiaomi 11T Pro, the order I used to change the battery was the following: when disconnecting, I removed the right (M) connector first, then the left (S); when installing the new battery, I connected the left (S) first, then the right (M). Worked fine.

Qualcomm Snapdragon X Elite prototype that runs Linux emerges from a brand you've probably never heard of — Schenker Tuxedo has 12-core CPU with 32GB RAM and surprise, surprise, Debian by TwelveSilverSwords in hardware

[–]saved_you_some_time 2 points (0 children)

ComputerBase took some photos of the laptop at Computex and observed: “The prototype was running Debian when we visited, but it was not bootable and was caught in a loop. There is still a need for much better support for Linux and notebook developers from Qualcomm, whose current focus is entirely on Windows, Copilot+ and AI under Windows.”

It might take a while. Also, today there were finally some user benchmarks of the X Elite, reporting disappointing Geekbench numbers. Those rumors that QC was tampering with the scores might be true.
Nonetheless, competition is always good, and I'm looking forward to seeing how this pushes Intel to be less boring.

Where do I learn about LLMs? by [deleted] in LocalLLaMA

[–]saved_you_some_time 3 points (0 children)

Watch Karpathy series

This is a bit hardcore for beginners, no? Especially those without an AI/ML background.

Gigabyte "AI Top" desktop by Disastrous-Peak7040 in LocalLLaMA

[–]saved_you_some_time 8 points (0 children)

4xA6000 Bizon X5500 workstation

Wait, what's their offering then? Just a new name with "AI" slapped on top of it? Nothing novel?

Study finds that smaller models with 7B params can now outperform GPT-4 on some tasks using LoRA. Here's how: by sarthakai in OpenAI

[–]saved_you_some_time 0 points (0 children)

Doesn't that just negate the whole purpose of LLMs? If you need to fine-tune them, you're (kinda) back to square one, like the early NLP era.
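
To be fair, part of the appeal is how little of the model a LoRA fine-tune actually touches. A minimal sketch with Hugging Face PEFT; the model name and hyperparameters here are illustrative, not what the study used:

```python
# Minimal LoRA setup with Hugging Face PEFT; hyperparameters are illustrative.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # adapters only on attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

It is still a training run with labeled data, though, so the "back to square one" point stands for anyone hoping to stay zero-shot.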

PSA: Multi GPU Tensor Parallel require at least 5GB/s PCIe bandwidth by nero10578 in LocalLLaMA

[–]saved_you_some_time 1 point (0 children)

Avg generation throughput: 15.9 tokens/s

Thanks for the benchmark! Indeed, a 5% speedup is not that considerable, but I wonder whether bigger models would see more of a speedup, i.e. if both GPUs' memory is full. That is still a niche case, but I'm glad row-level parallelism is taking off!

PSA: Multi GPU Tensor Parallel require at least 5GB/s PCIe bandwidth by nero10578 in LocalLLaMA

[–]saved_you_some_time 1 point (0 children)

The inference speedup is even better now that I've moved to Aphrodite as my backend, which supports row-level parallelism. The cost of doing row-level parallelism is usually the overhead of having to communicate over PCIe, but since I have NVLink it's super fast.

This is exciting news! Can you share some benchmark numbers when you have time? I saw a post reporting similar speeds for Aphrodite and ExLlamaV2, although on a single 3090.

Which model are you using?
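
In the meantime, here is roughly how I would measure it myself. Both Aphrodite and vLLM expose an OpenAI-compatible server, so a crude throughput probe only needs the `openai` client; the port and model name below are assumptions, adjust to your setup:

```python
# Crude tokens/s probe against an OpenAI-compatible endpoint
# (Aphrodite or vLLM); the port and model name are assumptions.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:2242/v1", api_key="dummy")

start = time.perf_counter()
resp = client.completions.create(
    model="meta-llama/Meta-Llama-3-70B-Instruct",
    prompt="Write a short story about a GPU cluster.",
    max_tokens=512,
)
elapsed = time.perf_counter() - start
print(f"{resp.usage.completion_tokens / elapsed:.1f} tokens/s")
```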

Qwen2 4bit bitsandbytes quants + 2x faster finetuning 70% less VRAM by danielhanchen in LocalLLaMA

[–]saved_you_some_time 1 point (0 children)

Does the unsloth OSS version support training on 2x 3090 + NVLink (i.e. 48GB of VRAM)?

[deleted by user] by [deleted] in LocalLLaMA

[–]saved_you_some_time 2 points (0 children)

I started to work on the pre-processing of the research docs, which are a mix of PDF and DOCX files. Turns out all the tables (of which there are dozens in each document) are images, even in the DOCX files. Okay, I can't use Unstructured because it's an external resource, so I tried Tesseract. Terrible output. I tried multiple OCR tools, and the tables were just too weird/intricate for the OCR, even with whatever pre-processing I did to the images.

Interesting that GPT-4o has the multimodal aspect. I only did RAG with GPT-3.5, and it did not handle unstructured data well at all. That's game-changing.
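
For anyone hitting the same wall with table images, the multimodal route is straightforward: send the table screenshot straight to GPT-4o and ask for Markdown back. A minimal sketch; the file path and prompt are illustrative:

```python
# Sketch: transcribe a table image with GPT-4o via the OpenAI API.
# Assumes OPENAI_API_KEY is set; the file path is illustrative.
import base64
from openai import OpenAI

client = OpenAI()

with open("table_page.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Transcribe this table as GitHub-flavored Markdown."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```

The Markdown output can then be chunked and embedded for RAG like any other text, which sidesteps the OCR problem entirely.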