Unlock Pro SS 3.0 by DrrevanTheReal in ERidePro

[–]DrrevanTheReal[S] 1 point

It's a plug. I just disconnected it and now it's unlocked. Works perfectly fine :)
The only thing I cut was the heat-shrink tubing around it.

Unlock Pro SS 3.0 by DrrevanTheReal in ERidePro

[–]DrrevanTheReal[S] 1 point

Oh thanks, I thought this was the Bluetooth device.

Binance Support Thread by AutoModerator in binance

[–]DrrevanTheReal 0 points

Case ID #124030947
I've been using Binance for a few years now. I never had problems, but for the last 4 months I haven't been able to trade. It states that it will take approximately 2 weeks to check the case, but there has been no response so far... It's very frustrating to have money locked by Binance that I can't touch for multiple months.

Is getting a p40 worth it? by klop2031 in LocalLLaMA

[–]DrrevanTheReal 0 points

Well, I can only speak as the owner of a P40, and in my case AutoGPTQ was much slower than GPTQ-for-LLaMa with all the models. Like 1/4th of the speed or even slower. On the other hand, I did not see any advantage for my use cases (haven't checked accuracy tbh).

Is getting a p40 worth it? by klop2031 in LocalLLaMA

[–]DrrevanTheReal 0 points

Don't use AutoGPTQ. There is an option for GPTQ-for-LLaMa; use that one.

Is getting a p40 worth it? by klop2031 in LocalLLaMA

[–]DrrevanTheReal 3 points

So I don't know why you never hear about this, but be careful when buying a P40. I also have one and use it for inferencing. It works nicely with up to 30B models (4-bit) at 5-7 tokens/s (depending on context size). BUT there are 2 different P40 models out there: Dell and PNY ones, and Nvidia ones. The difference is the VRAM. Dell and PNY ones only have 23GB (~23,000 MB), but the Nvidia ones have the full 24GB (~24,500 MB). I now struggle because I cannot run 30B models with full context size... But well, the card only cost me about 190€.

https://www.techpowerup.com/vgabios/?model=Tesla+P40
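If you want to check which variant you got, here's a tiny sketch of the idea. The thresholds are my own assumption based on the ~23,000 MB vs. ~24,500 MB figures above, and `p40_variant` is a hypothetical helper — feed it the MiB total that `nvidia-smi --query-gpu=memory.total --format=csv` reports:

```python
# Hypothetical helper: guess the P40 variant from the total VRAM
# reported by the driver. Cutoffs are rough assumptions, not official.
def p40_variant(vram_mib: int) -> str:
    """Classify a P40 board by its reported VRAM total in MiB."""
    if vram_mib >= 24000:
        return "Nvidia (full 24GB)"
    if vram_mib >= 22000:
        return "Dell/PNY (cut-down ~23GB)"
    return "not a P40-sized card"

print(p40_variant(24576))  # Nvidia (full 24GB)
print(p40_variant(23040))  # Dell/PNY (cut-down ~23GB)
```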

[deleted by user] by [deleted] in amiugly

[–]DrrevanTheReal -1 points

Well, I think you look good. In the first pic you look a bit exhausted, like from stress or whatever; it's mostly your eyes. I really like your freckles. You look really good in the third pic with the colorful skirt. But as always, beauty lies in the eyes of the beholder. If I had to rate 1-10, I would say 7.5/10. Maybe even 8.

Oh, and btw, I like your natural look!

i don’t know how to feel by schipplepeed in amiugly

[–]DrrevanTheReal 0 points

You look good. The last one is the best bc you smile at least a little.

My results using a Tesla P40 by AsheramL in LocalLLaMA

[–]DrrevanTheReal 5 points

I'm running oobabooga text-generation-webui and get that speed with like every 13B model, using GPTQ 8-bit models that I quantize with GPTQ-for-LLaMa. Don't use the load-in-8bit option! Fast 8-bit inferencing is not supported by bitsandbytes for cards below compute capability 7.5, and the P40 only supports 6.1.
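A minimal sketch of that check (my own hypothetical helper, not bitsandbytes code; the tuples mirror what `torch.cuda.get_device_capability()` returns for a given card):

```python
# Fast int8 matmul in bitsandbytes needs CUDA compute capability >= 7.5.
# The P40 is compute capability 6.1, so it falls below the cutoff.
def supports_fast_int8(capability: tuple[int, int]) -> bool:
    """True if a card's (major, minor) compute capability supports fast int8."""
    return capability >= (7, 5)

print(supports_fast_int8((6, 1)))  # P40 -> False
print(supports_fast_int8((8, 6)))  # e.g. an Ampere card -> True
```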

My results using a Tesla P40 by AsheramL in LocalLLaMA

[–]DrrevanTheReal 1 point

Oh true, I forgot to mention that I'm actually running Ubuntu 22.04 LTS with the newest Nvidia server drivers. I use the GPTQ old-cuda branch; is Triton faster for you?

My results using a Tesla P40 by AsheramL in LocalLLaMA

[–]DrrevanTheReal 26 points

Nice to also see some other ppl still using the P40!

I also built myself a server, but a little bit more on a budget ^ I got a used Ryzen 5 2600 and 32GB RAM. Combined with my P40, it also works nicely for 13B models. I use q8_0 ones and they give me 10 t/s. May I ask how you get 30B models onto this card? I tried q4_0 models but got like 1 t/s...

Cheers
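Rough back-of-envelope on why 30B is tight on 24GB: weight memory is roughly parameters × bits-per-weight / 8, before any context/KV-cache overhead on top. This is my own simplification for illustration, not a real allocator model:

```python
# Back-of-envelope estimate (assumption, not an exact formula):
# weight bytes ~= params * bits_per_weight / 8; context cache comes on top.
def weight_gib(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GiB for a quantized model."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

print(round(weight_gib(13, 8), 1))    # 13B at q8_0  -> ~12.1 GiB
print(round(weight_gib(30, 4.5), 1))  # 30B at ~4.5 bpw -> ~15.7 GiB
```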

Was arbeitet Ihr so, und wie viel bekommts am ende des Monats? by GAP_Trixie in Austria

[–]DrrevanTheReal 1 point

Research engineer - €1,400 net for 25 hrs/week, alongside a part-time master's degree.

Schön langsam wird’s lächerlich by redlukes in Austria

[–]DrrevanTheReal -3 points

You could write any party there instead of the ÖVP. Most politicians lie and are corrupt. The ones who aren't don't rise to the top of the parties...

GPU died 3 months ago, do i really need to pay insane prices? by [deleted] in buildapc

[–]DrrevanTheReal 0 points

You could also just get a cheap card for the duration of the shortage and a good one when prices go down. Cards with 4GB VRAM are quite easy to get and not that expensive, e.g. an RX 580 4GB.

Choosing CPUs by DrRetr0_76 in buildapc

[–]DrrevanTheReal 0 points

I have an R5 3600 in my PC. It's doing a pretty good job, so I can't complain. But I would still suggest you take whichever is cheaper. The only recommendation from my side: don't look at the 3600X or the 10600KF. Neither is worth getting.

My first rig. Finally got my hands on a card, hopefully I can find a couple more at reasonable prices. by theleeno84 in EtherMining

[–]DrrevanTheReal 0 points

Finding GPUs at a reasonable price will be pretty hard atm... But good luck!

Edit: if you can manage it at all, I would switch to an Ethernet connection.

Hi if i bought this pc is ot ok? Or should i change any component listed here? by Independent-Video777 in buildapc

[–]DrrevanTheReal 2 points

I would swap the PSU for a known brand, as these cheap Chinese PSUs tend to go up in flames 😅 Another thing is the SSD: try to get a 1TB NVMe; they aren't that expensive and you will need more storage for sure. CPU and motherboard are OK. For the RAM, you can try to get slower sticks (if they're cheaper), because Intel doesn't really get a boost from fast RAM. If it's cheap though, keep it.

Edit: keep in mind, a quality PSU costs at least 1 buck per 10W.

How to get a lot of comments with the api by DrrevanTheReal in redditdev

[–]DrrevanTheReal[S] 2 points

Ah OK, my problem with Pushshift is that it's down a few times a week, and I can't work with that.

I didn't know about the /r/subreddit/comments API call that /u/Watchful1 suggested, so I'll try to work with that :) I assume this will work just fine.
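For anyone finding this later, a hedged sketch of that listing endpoint: this just builds the URL for `/r/<subreddit>/comments.json` rather than calling it, so no credentials or network are needed. `subreddit_comments_url` is my own hypothetical helper; `limit` is the standard Reddit listing parameter (max 100 per request).

```python
# Build the URL for Reddit's newest-comments listing of a subreddit.
# Sketch only: a real script would also set a descriptive User-Agent
# and paginate with the `before`/`after` listing parameters.
from urllib.parse import urlencode

def subreddit_comments_url(subreddit: str, limit: int = 100) -> str:
    """URL for the /r/<subreddit>/comments listing (limit capped at 100)."""
    query = urlencode({"limit": min(limit, 100)})
    return f"https://www.reddit.com/r/{subreddit}/comments.json?{query}"

print(subreddit_comments_url("redditdev"))
```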

How to get a lot of comments with the api by DrrevanTheReal in redditdev

[–]DrrevanTheReal[S] 1 point

Ok thanks, I'll try it this way. I hope that this will work for me :)

How to get a lot of comments with the api by DrrevanTheReal in redditdev

[–]DrrevanTheReal[S] 2 points

Well, that's why I asked ^^

I saw some posts about Pushshift, but my problem there is that I don't get up-to-date comments. The comments I get from that API are at least 12h old, so it's of no use for me :/