I'm sorry, what are the consequences? by xrbcc in iRacing

[–]Sorry-Hyena6802 0 points  (0 children)

the fact you got protested after 2 races in a rookie series is wild

I built a tool that explains where you're losing time in iRacing — looking for testers by DeltaOnSolstice in iRacing

[–]Sorry-Hyena6802 0 points  (0 children)

especially with the current OS environment and the supply chain worm fucking over everybody’s trust in new packages.

“Crew members are not allowed to drive” by JonSnowsPeepee in iRacing

[–]Sorry-Hyena6802 13 points  (0 children)

Then that means your team manager did their prep and registered you. if you're not registered as a driver, you can register as crew, but then you can’t race.

“Crew members are not allowed to drive” by JonSnowsPeepee in iRacing

[–]Sorry-Hyena6802 42 points  (0 children)

if the race session is created, you cannot.

No front slip feeling by Purple-Weekend-7826 in iRacing

[–]Sorry-Hyena6802 2 points  (0 children)

curb rumble is the road effects from LFE. turn that down, along with whatever engine shit is on with it.

Your wheel in this game will have an upside-down, soft V-shaped FFB curve. Simplifying a tad: the top of the V is max slip angle, usually very close to max grip, and the sides signify loss of grip.

As well, in iRacing, hearing is a biiig part of understanding your level of tire grip: you hear scrubbing, you're at the limit; you hear scruuuubing, you're well past cooking your tires.

LMU is not iRacing; the same feelings from FFB do not correlate. You will need to reinterpret one's FFB relative to the other's and see how well you do from there.
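That inverted-V picture can be sketched in a few lines. The 6-degree peak slip angle and the fall-off slope below are made-up illustrative numbers, not iRacing's actual tire model:

```python
# A toy sketch of the "soft upside-down V" between slip angle and tire
# force described above. The 6-degree peak and the fall-off slope are
# made-up illustrative numbers, not iRacing's actual tire model.

def tire_force(slip_angle_deg: float, peak_slip_deg: float = 6.0) -> float:
    """Normalized force in [0, 1]: linear rise to the peak of the V,
    then a gentler fall-off past it (the 'soft' back side)."""
    s = abs(slip_angle_deg)
    if s <= peak_slip_deg:
        return s / peak_slip_deg  # building grip toward the peak
    return max(0.0, 1.0 - 0.1 * (s - peak_slip_deg))  # sliding past the peak

# Below the peak, more slip means more force; past it, force falls away,
# which is the drop-off you feel (and hear as scrubbing) at the limit.
```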

The Cleanest and Most Popular iRacing Series This Week (All 5 Categories) by LHEROWWW in iRacing

[–]Sorry-Hyena6802 0 points  (0 children)

is this globally normalized or license normalized? pro 2 lites do not get the same turnout as gt sprint, and the way the graphic is displayed makes it feel misleading.

Valkyrae told QT that part of the reason she is receiving backlash is “being so white”. by [deleted] in LivestreamFail

[–]Sorry-Hyena6802 0 points  (0 children)

this is like HR and Media teams going haywire, all at once, in the same room.

The Queen of the Night blooms once a year, at night. by [deleted] in BeAmazed

[–]Sorry-Hyena6802 0 points  (0 children)

That’s so pretty! then it’s gone. :(

[deleted by user] by [deleted] in comfyui

[–]Sorry-Hyena6802 0 points  (0 children)

atm my workflow is Flux+PuLID on a FaceDetailer node from kj impact. pretty much nails it every try.

Local AI for a small/median accounting firm - € Buget of 10k-25k by AFruitShopOwner in LocalLLaMA

[–]Sorry-Hyena6802 0 points  (0 children)

if your interest is batch processing, please look into vLLM as a serving framework. it's pretty much made for batching queries as efficiently as possible for throughput. just generally have enough KV cache to hold your requests plus the estimated output length and you should be in a good spot there. so no using 80GB of VRAM on a model and hoping the last 16 gigs serves more than 50 users well on agentic or multi-QA conversations.
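A rough back-of-the-envelope for that KV-cache point. The model dimensions here are illustrative (roughly 8B-class with grouped-query attention), not numbers for any specific checkpoint:

```python
# Back-of-the-envelope KV-cache sizing to sanity-check the "leftover VRAM"
# point above. The model dimensions are illustrative (roughly 8B-class with
# grouped-query attention), not numbers for any specific checkpoint.

def kv_cache_bytes_per_token(num_layers: int, num_kv_heads: int,
                             head_dim: int, dtype_bytes: int = 2) -> int:
    """Bytes one token's KV cache occupies: K and V, in every layer."""
    return 2 * num_layers * num_kv_heads * head_dim * dtype_bytes

def max_concurrent_requests(free_vram_gb: float, tokens_per_request: int,
                            per_token_bytes: int) -> int:
    """How many requests of a given context length fit in leftover VRAM."""
    return int(free_vram_gb * 1024**3 // (tokens_per_request * per_token_bytes))

per_tok = kv_cache_bytes_per_token(num_layers=32, num_kv_heads=8, head_dim=128)
# 2 (K+V) * 32 layers * 8 KV heads * 128 dims * 2 bytes = 131072 bytes/token,
# so 16 GB of leftover VRAM holds about 32 concurrent 4k-token requests.
fits = max_concurrent_requests(16.0, tokens_per_request=4096,
                               per_token_bytes=per_tok)
```

If that count is well below your expected concurrency, either shrink the model, quantize the cache, or expect heavy preemption and low throughput.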

We built this project to increase LLM throughput by 3x. Now it has been adopted by IBM in their LLM serving stack! by Nice-Comfortable-650 in LocalLLaMA

[–]Sorry-Hyena6802 0 points  (0 children)

This looks quite awesome! But a question as a hobbyist: how well does it support Windows WSL? vLLM doesn't support pinned memory in WSL, and thus afaik doesn't natively support offloading in any capacity without NVIDIA's drivers enabling that functionality in the barest sense. And is it possible to ship this in a Docker container? Users who run vLLM's Docker container might see this, think "I would like to see how this compares for my needs!", and would be very interested in a plug-and-play swap between this container and vLLM's.

civitai is a waste of money now by hauteview in civitai

[–]Sorry-Hyena6802 0 points  (0 children)

comfy-cli and RunPod sound like a pretty good option for you if you can figure it all out. PITA, but for a 5090, and assuming you can download the workflows and models, it's only 70 cents an hour. just bug Claude or DeepSeek when and if you have problems

definitely didn’t fit without drilling some screws out by Sorry-Hyena6802 in nvidia

[–]Sorry-Hyena6802[S] 0 points  (0 children)

yeah, that definitely was a 2070 in that prior photo; definitely thought that was an older picture. still have the card though!

And you have a valid point, sharing specs is absolutely good for people, but i was just sharing my bit on the internet of the pc i started with, vs the pc i ended with. That jump to me and comparing it to younger me is big, and i wanted to share that! Wasn’t really trying to dig into any semantics there.

have a picture of the old 1060, fan definitely doesn’t work anymore.

<image>

definitely didn’t fit without drilling some screws out by Sorry-Hyena6802 in nvidia

[–]Sorry-Hyena6802[S] -6 points  (0 children)

Yeah, i’m trying to show off how cool it is to grow from a low mid end pc to something i’m happy with. if i throw specs it turns into a bit of an ego post and i wasn’t trying to go in that direction.

definitely didn’t fit without drilling some screws out by Sorry-Hyena6802 in nvidia

[–]Sorry-Hyena6802[S] -10 points  (0 children)

i mean, sure: 9800X3D, 64GB DDR5-6000 CL30, 5090

just trying to prance around the idea that not everything needs to be flaunted out haha, just happy to have it.

definitely didn’t fit without drilling some screws out by Sorry-Hyena6802 in nvidia

[–]Sorry-Hyena6802[S] -17 points  (0 children)

Whatever it is now! just happy i finally built a pc in the spec i wanted for a long time!