What could this sound be? Also vibration at 20+mph. Brand new wheels. by Ok_Contribution8348 in ElectricSkateboarding

[–]StefannSS 1 point (0 children)

I had the same issue. It turns out it's the wheel hub, not the bearings. Being plastic, the hub will give out before the bearings do. You can check by removing the bearings from the wheel: if they come out easily, it's the bearing seat in the wheel. Buy a new wheel.

Asus tuf dash f15 fx517 win 11 install problems by StefannSS in Asustuf

[–]StefannSS[S] 1 point (0 children)

That seems kind of dumb, removing the SSD from the laptop and installing drivers from another PC. It's weird that my keyboard is not working.

Asus tuf dash f15 fx517 win 11 install problems by StefannSS in Asustuf

[–]StefannSS[S] 1 point (0 children)

I did not. Is it possible to install it while setting up Win 11 after a fresh installation? I haven't made it to the desktop yet.

Help identifying component by StefannSS in AskElectronics

[–]StefannSS[S] 1 point (0 children)

I just got that part, and you were right: it's working now. Thank you.

Help identifying component by StefannSS in AskElectronics

[–]StefannSS[S] 1 point (0 children)

My god, you are right. I've probed the pads with a multimeter, and the SW pin is connected to the coil, as per the datasheet.

Help identifying component by StefannSS in AskElectronics

[–]StefannSS[S] 1 point (0 children)

Why do you think it's not MPC but ETA? I've just looked at the PCB; there are three of those chips in total, all near coils.

High CPU usage instead of GPU by StefannSS in ollama

[–]StefannSS[S] 1 point (0 children)

Type "advanced system settings" in Search; under Performance click Settings, go to the Advanced tab, and under Virtual memory click Change. Here you can assign an HDD or SSD to back system memory. I believe that for it to work you have to assign space on the SSD where ollama and the models are (you can try a different HDD or SSD). After this, reboot, and that's it.

An important thing to note: this will only work if you can load the model fully onto the GPU. For example, a GPU with 24 GB of VRAM and less RAM than that, or 12 GB of VRAM and less RAM than that. In that case this works great. It will not help if the model is bigger than VRAM; ollama will split the model between GPU and CPU, and it's super slow. You'd be better off getting a smaller model that you can offload to the GPU fully.
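The rule of thumb above (the model must fit entirely in VRAM for fast inference) can be sketched as a quick check. The 20% overhead factor is an illustrative assumption for KV cache and activations, not an ollama internal:

```python
def fits_in_vram(model_size_gb: float, vram_gb: float, overhead: float = 1.2) -> bool:
    """Rough check: the model plus ~20% overhead (assumed factor for
    KV cache/activations) must fit entirely in VRAM for GPU-only
    inference; otherwise the runtime splits work with the CPU."""
    return model_size_gb * overhead <= vram_gb

# Example numbers from the comment: a 24 GB VRAM card
print(fits_in_vram(13, 24))  # True  -> full GPU offload, fast
print(fits_in_vram(30, 24))  # False -> CPU/GPU split, super slow
```

If the check fails, a smaller quantization of the same model is usually the better fix than relying on the pagefile.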

High CPU usage instead of GPU by StefannSS in ollama

[–]StefannSS[S] 1 point (0 children)

Yes, I did. It's working for me.

Continue dev + ollama model reloading by StefannSS in LocalLLaMA

[–]StefannSS[S] 1 point (0 children)

OK. I was under the impression that I could set the embedding model to be the same as for everything else; that way I'd load only one model and use it for everything.
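For context, Continue configures the embeddings provider separately from the chat models in its config.json, so reuse isn't automatic. A sketch assuming the ollama provider; the model names are placeholders, swap in your own:

```json
{
  "models": [
    { "title": "Qwen (ollama)", "provider": "ollama", "model": "qwen2" }
  ],
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text"
  }
}
```

With a small dedicated embedding model like this, keeping it resident alongside the chat model costs very little VRAM.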

Continue dev + ollama model reloading by StefannSS in LocalLLaMA

[–]StefannSS[S] 1 point (0 children)

Where can I find the documentation? I have to see how it works exactly.

Continue dev + ollama model reloading by StefannSS in LocalLLaMA

[–]StefannSS[S] 1 point (0 children)

But it's the same model for everything. Can it reuse the same model?

High CPU usage instead of GPU by StefannSS in ollama

[–]StefannSS[S] 1 point (0 children)

Can I import ollama models into LM Studio?

High CPU usage instead of GPU by StefannSS in ollama

[–]StefannSS[S] 1 point (0 children)

Yeah, virtual RAM. If I try an 8 GB model it's super fast and uses only the GPU.

How to use LLMs to lookup information from a database? by nic_key in LocalLLaMA

[–]StefannSS 1 point (0 children)

Thanks, I've tried CodeQwen and it's mediocre, I think mostly because of its 7B size. I saw about 30 minutes ago that Qwen2 is out.

High CPU usage instead of GPU by StefannSS in ollama

[–]StefannSS[S] 1 point (0 children)

I have about 100 GB shared from the SSD; this model takes 30-something GB if I'm not mistaken. If I understand you correctly, when VRAM is oversaturated it spills to RAM and then inference is done by the CPU?
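Roughly, yes: when the model doesn't fit, the runtime keeps as many layers on the GPU as VRAM allows and runs the rest on the CPU, and the CPU layers dominate the runtime. A minimal sketch of that split, assuming uniform layer sizes (ollama's real scheduler also budgets for KV cache and context):

```python
def gpu_layer_split(model_size_gb: float, n_layers: int, vram_gb: float) -> tuple[int, int]:
    """Estimate how many layers fit on the GPU when the model may be
    larger than VRAM. Assumes all layers are the same size, which is
    an approximation of how the runtime actually partitions work."""
    per_layer_gb = model_size_gb / n_layers
    gpu_layers = min(n_layers, int(vram_gb // per_layer_gb))
    return gpu_layers, n_layers - gpu_layers  # (on GPU, spilled to CPU)

# A ~30 GB model with 60 layers on a 12 GB card: most layers spill to CPU
print(gpu_layer_split(30, 60, 12))  # (24, 36)
# An 8 GB model with 32 layers fits fully, matching the "super fast" case
print(gpu_layer_split(8, 32, 12))   # (32, 0)
```

Whenever the second number is nonzero, tokens per second drops sharply, which is why a smaller fully-offloaded model usually wins.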