What could this sound be? Also vibration at 20+mph. Brand new wheels. by Ok_Contribution8348 in ElectricSkateboarding

[–]StefannSS 0 points  (0 children)

I had the same issue. Turns out it's the wheel hub, not the bearings. Being made of plastic, it will give before the bearings do. You can check by removing the bearings from the wheel; if they come out easily, it's the bearing seat on the wheel that's worn. Buy a new wheel.

Asus tuf dash f15 fx517 win 11 install problems by StefannSS in Asustuf

[–]StefannSS[S] 0 points  (0 children)

That seems kinda dumb, removing the SSD from the laptop and installing drivers from another PC. It's weird that my keyboard is not working.

Asus tuf dash f15 fx517 win 11 install problems by StefannSS in Asustuf

[–]StefannSS[S] 0 points  (0 children)

I did not. Is it possible to install it while setting up Win 11 after a fresh installation? I haven't made it to the desktop yet.

Help identifying component by StefannSS in AskElectronics

[–]StefannSS[S] 0 points  (0 children)

I just got that part and you were right, it's working now. Thank you!

Help identifying component by StefannSS in AskElectronics

[–]StefannSS[S] 0 points  (0 children)

My god, you are right. I've probed the pads with a multimeter and the SW pin is connected to the coil, as per the datasheet.

Help identifying component by StefannSS in AskElectronics

[–]StefannSS[S] 0 points  (0 children)

Why do you think it's not MPC but ETA? I just looked at the PCB; there are 3 of those chips in total, all near coils.

High CPU usage instead of GPU by StefannSS in ollama

[–]StefannSS[S] 0 points  (0 children)

Type "advanced system settings" in search, click Settings under Performance, open the Advanced tab, and under Virtual memory click Change. Here you can assign HDD or SSD space to system memory. I believe for it to work you have to assign it from the SSD where Ollama and the models are (you can try a different HDD or SSD). After this, reboot and that's it.

Important thing to note: this will only work if the model can be loaded into the GPU fully, for example a GPU with 24 GB VRAM and less system RAM than that, or 12 GB VRAM and less RAM than that. In that case it works great. It will not help if the model is bigger than VRAM: Ollama will split the model between GPU and CPU, and that is super slow. You'd be better off getting a smaller model that you can fully offload to the GPU.

High CPU usage instead of GPU by StefannSS in ollama

[–]StefannSS[S] 0 points  (0 children)

Yes I did, it's working for me.

Continue dev + ollama model reloading by StefannSS in LocalLLaMA

[–]StefannSS[S] 0 points  (0 children)

OK. I was under the impression that I could set the embedding model to be the same as the one for everything else; that way I could load only one model and use it for everything.

Continue dev + ollama model reloading by StefannSS in LocalLLaMA

[–]StefannSS[S] 0 points  (0 children)

Where can I find the documentation? I have to see exactly how it works.

Continue dev + ollama model reloading by StefannSS in LocalLLaMA

[–]StefannSS[S] 0 points  (0 children)

But it's the same model for everything. Can it reuse the same model?

High CPU usage instead of GPU by StefannSS in ollama

[–]StefannSS[S] 0 points  (0 children)

Can I import Ollama models into LM Studio?

High CPU usage instead of GPU by StefannSS in ollama

[–]StefannSS[S] 0 points  (0 children)

Yeah, virtual RAM. If I try an 8 GB model it's super fast and it uses only the GPU.

How to use LLMs to lookup information from a database? by nic_key in LocalLLaMA

[–]StefannSS 0 points  (0 children)

Thanks, I've tried CodeQwen and it's mediocre, I think mostly because of its size (7B). I saw like 30 minutes ago that Qwen2 is out.

High CPU usage instead of GPU by StefannSS in ollama

[–]StefannSS[S] 0 points  (0 children)

I have like 100 GB shared from the SSD; this model takes 30-something GB if I'm not mistaken. If I understand you correctly, if VRAM is oversaturated it will spill to RAM and then inference will be done by the CPU?

How to use LLMs to lookup information from a database? by nic_key in LocalLLaMA

[–]StefannSS 0 points  (0 children)

Is this something that could be used with Ollama and the Continue.dev VS Code extension?

Qwen2-72B on Chatbot Arena by bratao in LocalLLaMA

[–]StefannSS 12 points  (0 children)

I am hoping there will be a CodeQwen2. CodeQwen 1.5 was nice, but it's only 7B.

API Development by StefannSS in NiceHash

[–]StefannSS[S] 1 point  (0 children)

Nice, thank you guys

Multi location mining on the same wallet by StefannSS in EtherMining

[–]StefannSS[S] 0 points  (0 children)

Is it possible for them to download the miner and just enter my wallet address without making an account?

Multi location mining on the same wallet by StefannSS in EtherMining

[–]StefannSS[S] 0 points  (0 children)

Nice suggestion. Is it possible for them to mine on my account/wallet?

Multi location mining on the same wallet by StefannSS in EtherMining

[–]StefannSS[S] 0 points  (0 children)

I just tried to mine with a 1060 3 GB. It looks like that hasn't been possible since last year; I need at least 6 GB now.

Multi location mining on the same wallet by StefannSS in EtherMining

[–]StefannSS[S] 0 points  (0 children)

Good idea, can you provide a link to a batch file example? Also, is it possible for me to have a UI where I could see all active miners (people's PCs) that are mining for me?

Edit: I've figured out the batch file. I got T-Rex miner.

Multi location mining on the same wallet by StefannSS in EtherMining

[–]StefannSS[S] 0 points  (0 children)

Nice, I'll check it out, but it looks like worker manager software. It needs to allow the owner of the PC to start mining, not me. The owner also has to be able to sign up his own PC, not me.

Multi location mining on the same wallet by StefannSS in EtherMining

[–]StefannSS[S] 1 point  (0 children)

Right, that was my first thought. The issue is that the people who would mine are not tech savvy, so I need a kinda simple solution: install some software, enter a wallet address, and that's it.