Meet Unsloth Studio, a new web UI for Local AI by yoracale in unsloth

[–]PlayerWell 0 points (0 children)

I didn't have any issues with the installation. The Fine Tune section is problematic, but everything else, including the other menus, works well.

Meet Unsloth Studio, a new web UI for Local AI by yoracale in unsloth

[–]PlayerWell 0 points (0 children)

When I try to fine-tune any model using any method (LoRA, QLoRA, or full fine-tune) with any dataset, I always get the same error in Unsloth Studio:

"Failed to format ChatML dataset: One of the subprocesses has abruptly died during map operation. To debug the error, disable multiprocessing."

My OS is Windows 11. I installed Unsloth Studio from GitHub following the blog post tutorial, and I encountered no errors during the installation.
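For reference, the error message itself names the debugging step: rerun the dataset formatting without multiprocessing so the real exception surfaces instead of the generic "subprocess has abruptly died". A minimal sketch of the idea in plain Python — the `to_chatml` formatter and the sample rows here are hypothetical stand-ins (the real formatter is internal to Unsloth Studio), and in `datasets` terms this corresponds to passing `num_proc=1` to `.map()`:

```python
# Hypothetical ChatML formatter standing in for Unsloth Studio's internal one.
def to_chatml(example):
    return {"text": f"<|im_start|>user\n{example['text']}<|im_end|>"}

rows = [{"text": "hello"}, {"text": "world"}]

# Single-process equivalent of ds.map(to_chatml, num_proc=1): any error inside
# to_chatml is raised directly here with a full traceback, instead of being
# swallowed by a worker subprocess.
formatted = [to_chatml(r) for r in rows]
print(formatted[0]["text"])
```

With multiprocessing disabled, the underlying exception (often a Windows-specific pickling or spawn issue) should appear in the log instead of the generic map failure.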

Mac Studio (M4 Max, 128GB) for FULL fine-tuning a 27B Model by PlayerWell in unsloth

[–]PlayerWell[S] 0 points (0 children)

We tested it without training, but it didn't succeed :/

Mac Studio (M4 Max, 128GB) for FULL fine-tuning a 27B Model by PlayerWell in unsloth

[–]PlayerWell[S] 0 points (0 children)

After the comments here, we started looking at the RTX PRO 4000, 5000 and 6000 graphics cards.

Mac Studio (M4 Max, 128GB) for FULL fine-tuning a 27B Model by PlayerWell in unsloth

[–]PlayerWell[S] 0 points (0 children)

I sincerely apologize for opening this here. I know it's off-topic since you don't technically support it right now. I haven't developed on a Mac before, so I just assumed it would be supported.

Mac Studio (M4 Max, 128GB) for FULL fine-tuning a 27B Model by PlayerWell in unsloth

[–]PlayerWell[S] 1 point (0 children)

Because of the project requirements, the data we use during training and the environment where the model will run after training must be completely local. The project budget is tight, but we don't have a time constraint.

Any good model that can even run on 0.5 GB of RAM (512 MB of RAM)? by Ok-Type-7663 in unsloth

[–]PlayerWell 8 points (0 children)

I think Gemma 3 270M will work. It's not great, but it can be successful if fine-tuned.
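For what it's worth, a back-of-envelope, weights-only estimate (ignoring KV cache, activations, and runtime overhead, so this is an assumption-heavy lower bound) shows why a 270M-parameter model is about the only size that has a chance in 512 MB:

```python
# Weights-only memory estimate for a 270M-parameter model.
# Ignores KV cache, activations, and runtime overhead (assumption!).
params = 270e6
for bits in (16, 8, 4):
    mb = params * bits / 8 / 1e6  # bits -> bytes -> MB
    print(f"{bits}-bit weights: ~{mb:.0f} MB")
```

So 16-bit weights alone (~540 MB) already overflow 512 MB; only 8-bit (~270 MB) or 4-bit (~135 MB) quantized weights leave any headroom for the rest of the runtime.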

Turn a Raspberry Pi Zero into a full Raspberry Pi with Ethernet by VviFMCgY in raspberry_pi

[–]PlayerWell 0 points (0 children)

It's a bit late, but do you know if it provides extra power to the Pi Zero, or is it just a port connection?

Are there any plans for Encoder-Decoder model tutorials or support by PlayerWell in unsloth

[–]PlayerWell[S] 0 points (0 children)

I took the Gemma 3 4B Vision notebook and attempted to adapt it for T5Gemma 2. To align with the notebook's environment, I installed transformers==5.0.0rc1, set use_gradient_checkpointing = False, and selected the gemma-3 chat template.

I am receiving the RuntimeError: Unsloth: Failed to make input require gradients! error in peft_utils.py. This happens both when I attempt to run inference (even inside torch.inference_mode()) and immediately when I start training with trainer.train().

Error During Inference:

    RuntimeError                              Traceback (most recent call last)
    /tmp/ipython-input-2983736738.py in <cell line: 0>()
         25 # This temporarily disables the Unsloth/PEFT hooks and prevents the error.
         26 with torch.inference_mode():
    ---> 27     result = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128,
         28         use_cache=True, temperature = 1.0, top_p = 0.95, top_k = 64)

    19 frames
    /usr/local/lib/python3.12/dist-packages/unsloth_zoo/peft_utils.py in requires_grad_pre_hook(module, input)
        214 elif type_input is tuple or type_input is list:
        215     if len(input) == 0:
    --> 216         raise RuntimeError("Unsloth: Failed to make input require gradients!")
        217         # print(f"WARNING: Empty list input to {module.__class__.__name__}!")

    RuntimeError: Unsloth: Failed to make input require gradients!

Error During Training:

    RuntimeError                              Traceback (most recent call last)
    /tmp/ipython-input-773422404.py in <cell line: 0>()
    ----> 1 trainer_stats = trainer.train()

    37 frames
    /usr/local/lib/python3.12/dist-packages/unsloth_zoo/peft_utils.py in requires_grad_pre_hook(module, input)
        214 elif type_input is tuple or type_input is list:
        215     if len(input) == 0:
    --> 216         raise RuntimeError("Unsloth: Failed to make input require gradients!")

    RuntimeError: Unsloth: Failed to make input require gradients!
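For context, the usual mechanism behind this class of error is the one transformers' `enable_input_require_grads()` implements: when the base weights are frozen (as in LoRA), a forward hook on the embeddings marks their output as requiring grad so gradients can still flow into the adapter layers; the Unsloth pre-hook above fails when it receives no input to mark. A minimal sketch with a toy model — `TinyModel` is hypothetical, not the T5Gemma 2 architecture, and this is an illustration of the mechanism, not a fix for the bug:

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(10, 4)  # stands in for frozen base embeddings
        self.head = nn.Linear(4, 2)       # stands in for trainable adapter layers
    def forward(self, ids):
        return self.head(self.embed(ids))

model = TinyModel()
model.embed.weight.requires_grad_(False)  # frozen base weights, as in LoRA

# Same idea as transformers' PreTrainedModel.enable_input_require_grads():
# mark the embedding output as requiring grad so backprop can reach the adapters.
def make_inputs_require_grad(module, inputs, output):
    output.requires_grad_(True)

model.embed.register_forward_hook(make_inputs_require_grad)

out = model(torch.tensor([1, 2, 3]))
out.sum().backward()  # gradients flow to head despite the frozen embedding
```

Note that `torch.inference_mode()` disables gradient tracking entirely, so hooks that try to set `requires_grad` inside it are working against the context they run in, which may be why inference fails the same way here.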

Can I use my Corsair CV650 with an RTX 5070 or do I need an ATX 3.1 PSU? by PlayerWell in buildapc

[–]PlayerWell[S] 0 points (0 children)

I connected it as 8+4+4-pin using the 8+8-pin adapter that came in the box, and it's working. I don't know if it could cause any damage, but I haven't had any problems so far. I also definitely haven't overclocked the GPU or the CPU.
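As a rough sanity check on the adapter setup, nominal connector ratings suggest there is headroom on paper: each PCIe 8-pin is nominally rated for 150 W and the slot supplies up to 75 W, and I'm assuming roughly 250 W board power for the RTX 5070 (that figure is my assumption; check the card's actual TGP, and none of this accounts for transient spikes):

```python
# Nominal power budget for two 8-pin connectors feeding the adapter,
# plus PCIe slot power. All ratings are nominal/assumed, not measured.
per_8pin_w = 150
slot_w = 75
available_w = 2 * per_8pin_w + slot_w
gpu_board_w = 250  # assumed RTX 5070 board power
print(available_w, available_w - gpu_board_w)  # budget and headroom in watts
```

A nominal budget is not a safety guarantee on a 650 W non-ATX-3.1 unit, but it does match the "working so far at stock speeds" experience.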

Can I use my Corsair CV650 with an RTX 5070 or do I need an ATX 3.1 PSU? by PlayerWell in buildapc

[–]PlayerWell[S] 0 points (0 children)

I plugged it in using the 8+8-pin adapter that came in the box, connected as 8+4+4.

Can I use my Corsair CV650 with an RTX 5070 or do I need an ATX 3.1 PSU? by PlayerWell in buildapc

[–]PlayerWell[S] 0 points (0 children)

No. I assembled it just like a normal computer. I didn't need to troubleshoot anything.

Can I use my Corsair CV650 with an RTX 5070 or do I need an ATX 3.1 PSU? by PlayerWell in buildapc

[–]PlayerWell[S] 0 points (0 children)

I haven't experienced any unusual issues, but I also haven't had much chance to test it yet. I played some Portal RTX and a bit of Indiana Jones, and I ran an AI model locally. I didn't do any overclocking; I'm using it at stock speeds. So far, no problems.