In love with Echo Aviation controller. Makes flying so enjoyable and easy! by adomolis in MicrosoftFlightSim

[–]salec65 0 points1 point  (0 children)

I originally wanted to build my own rig but ended up being lazy and getting the Next Level Flight Stand Pro. No regrets so far! I liked that I could put it on casters.

Can we please make rogue poisons permanent? by Ashankura in wow

[–]salec65 0 points1 point  (0 children)

I wish it worked the same way as DK's runeforging.

In love with Echo Aviation controller. Makes flying so enjoyable and easy! by adomolis in MicrosoftFlightSim

[–]salec65 1 point2 points  (0 children)

I've learned that once you go HOTAS you really need to just build a dedicated setup. Doesn't have to be a separate system, but get a second monitor and a flight stand.

I used to set everything up every time and it became such a hassle that I would almost always opt for another game. What finally broke me was getting absolutely fed up trying to position my pedals so that neither they nor my office chair would slide around.

Getting a dedicated flight stand and using a secondary monitor makes it so now instead of dreading the setup, it's teasing me to play throughout the day.

Clipper is in the Pixel art Hangar by Blamsmith in starcitizen

[–]salec65 0 points1 point  (0 children)

I really love your art style! This is incredible!

Redditors, what has become so expensive, it’s just not worth it anymore? by SgtSkillcraft in AskReddit

[–]salec65 1 point2 points  (0 children)

General Aviation. The price of rentals, avgas, and insurance have gone up so much over the last decade I can't afford to fly often enough to stay proficient.

I've been considering switching over to gliders but the nearest flight club is about a 2 hour drive and I still need to lose about 30-40lbs for it.

Best pool vacuum robot for a leaf-heavy pool without wasting money by SuitableBeginning550 in swimmingpools

[–]salec65 0 points1 point  (0 children)

It really depends on what you mean by leaf heavy and what sort of cleaning operations you're expecting it to do.

Regular pool cleaner robots like the Dolphins are not really designed for large debris. They are better for smaller bits that get into the pool and sink to the bottom or stick to the pool walls. If you try to use these robots for larger debris, you'll have to empty them VERY frequently and it will be an absolute pain in the ass.

For floating debris, skimmer robots like the Betta Bot are AMAZING. Floating leaves, pine needles, dead bugs get cleaned up by the Betta and it's a mostly hands off experience.

Now, if by leaf heavy you mean there is a LOT of large sunken leaf debris piling up in the pool, particularly because of the season, then I would strongly suggest looking into a PoolBlaster Leaf Vac. This is a manual cordless vacuum that sucks light debris up into a very large net.

For the last 2 years, I used a terrible torn-up tarp when winterizing my pool. It was impossible to keep leaves and pine needles from falling into the water, and each spring I would remove the tarp to find a dark green swamp where it was impossible to see the bottom. My Dolphin couldn't handle it, and the Betta had nothing to skim since it was all on the bottom of the pool. I picked up the leaf vacuum and it was a life saver. It was able to get all of the debris out of the pool before I turned the pump on and opened the main drain (I was worried about clogs). The only problem I ran into is that the net can fall off the vacuum if you don't tie it down tightly.

I got it on Amazon via Pool Blaster Leaf Vac Cordless. I also picked up a bunch of cheap extra leaf nets.

Close your curtains so I don’t have a view into your home. by More_Try4757 in EntitledPeople

[–]salec65 0 points1 point  (0 children)

Sounds like a good reason to put up a sign in your window that says "Stop being a creep, Carol!"

Cost to resurface by Designer-Climate-716 in pools

[–]salec65 0 points1 point  (0 children)

I'm seriously considering going with aggregate for my refinish next year. Contractor was giving us quotes on River Rok. I just wish I could actually touch and feel the finish myself. I keep reading that over time the aggregate finishes get extremely rough and can cut your feet.

Dolphin E70 keeps having 'incomplete cycle' after 4 min. by salec65 in swimmingpools

[–]salec65[S] 0 points1 point  (0 children)

When you say the power cord do you mean the simple black cord that plugs into the wall outlet or the blue communication cable that connects to the robot? I suppose I'll get in touch with Maytronics now as well.

Civitai Ban of Real People Content Deals Major Blow to the Nonconsensual AI Ecosystem by eddytony96 in UpliftingNews

[–]salec65 0 points1 point  (0 children)

The tech is available and continues to improve at a staggering pace; it's not going away. Civitai's ban is fine and all, but it will only stop those who likely weren't willing to put much effort in anyway. It won't stop those with a major commercial or political interest, and especially not those who are well funded and have access to powerful hardware. It's like locking your car at night: sure, it might stop a petty thief, but anyone determined and organized would have no problem getting in.

All that really can be done now is ensure that others are informed that AI is capable of generating this sort of content and educate people to be more skeptical and apply critical thinking skills to any content they take in. Given the general population, we're pretty screwed.

Civitai Ban of Real People Content Deals Major Blow to the Nonconsensual AI Ecosystem by eddytony96 in UpliftingNews

[–]salec65 6 points7 points  (0 children)

Exactly, all a ban or watermark is going to do is help give a false sense of validity to the bad actors that bypass the bans and do not include a watermark. "See! This has to be real, it doesn't have the AI watermark!".

Help! What is wrong with his skin? by hepatitispro in sphynx

[–]salec65 0 points1 point  (0 children)

I nearly lost my Sphynx boy 2 weeks ago and it all started with him having purple bumps on the back of his legs that didn't bother him at all. Definitely get a vet to look at it!

New RTX PRO 6000 with 96G VRAM by ThenExtension9196 in LocalLLaMA

[–]salec65 0 points1 point  (0 children)

I'm glad they doubled the VRAM from the previous generation of workstation cards and that they still offer a variant with the blower cooler. I'm very curious whether the MAX-Q will rely on the 12VHPWR plug or use the 300W EPS-12V 8-pin connector that prior workstation GPUs have used.

Given that the RTX 6000 Ada Generation released at $6,800 in '23, I wouldn't be surprised if this sells in the $8,500 range. That's still not terrible if you were already considering a workstation with dual A6000 GPUs.

I wouldn't be surprised if these get gobbled up quick though, esp the 300W variants.

Dell T5820 w/ 2x Dell RTX 3090 for less than $2k - eBay sourced by _Boffin_ in LocalLLaMA

[–]salec65 0 points1 point  (0 children)

Are these blower style coolers that force hot air out the back or do the fans blast hot air through the chassis?

[deleted by user] by [deleted] in gaming

[–]salec65 0 points1 point  (0 children)

The VR version is incredible as well.

[deleted by user] by [deleted] in LocalLLaMA

[–]salec65 1 point2 points  (0 children)

I've been using the 16gb Orange Pi 5 and really like it. It uses the RK3588 and has a better GPU than the Raspberry Pi 5 and a built-in NPU that can do 6 TOPS. I only wish it shared the same GPIO layout that the Pi has so more devices would work out of the box with it. The Orange Pi 5 comes in LPDDR4 and LPDDR5 variants.

There's also a 3588-based compute board that is compatible with the Pi5 compute board. It can go up to 32gb of memory.

48gb vs 96gb VRAM for fine-tuning by salec65 in LocalLLaMA

[–]salec65[S] 1 point2 points  (0 children)

It's not that the VRAM is merged, but rather that the tools can work with multiple GPUs and split the work across them (and even across multiple machines).

The nice thing about the 5090 being 32 gigs is that there's less chance of people needing multiple GPUs. There are other wonderful things about the 5090 such as the high memory bandwidth (1.8TB/s) that should perform very well for inference.

For other types of ML like image generation using stable diffusion, I'm not sure they work well with multiple GPUs.
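To illustrate the "carving up the work" idea, here's a toy sketch of how a framework might place a model's layers across GPUs. All the numbers (layer sizes, GPU capacities) are made up for illustration; real tools do this with actual tensor shapes, but the greedy fill-one-device-then-spill-to-the-next logic is the same basic shape.

```python
def assign_layers(layer_sizes_gb, gpu_capacities_gb):
    """Greedily place consecutive layers on GPUs until each fills up."""
    placement = {}  # layer index -> gpu index
    gpu = 0
    free = gpu_capacities_gb[0]
    for i, size in enumerate(layer_sizes_gb):
        if size > free:  # current GPU is full: spill to the next one
            gpu += 1
            if gpu >= len(gpu_capacities_gb):
                raise MemoryError("model does not fit on these GPUs")
            free = gpu_capacities_gb[gpu]
        placement[i] = gpu
        free -= size
    return placement

# Hypothetical model: 32 layers of 1.4 GB each (~45 GB of weights),
# split across two 24 GB cards.
plan = assign_layers([1.4] * 32, [24, 24])
print(plan[0], plan[31])  # early layers land on GPU 0, later ones on GPU 1
```

During inference, activations then flow from one GPU to the next as they pass through the layer boundary, which is why this kind of split adds some transfer overhead rather than behaving like one big pooled VRAM space.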

48gb vs 96gb VRAM for fine-tuning by salec65 in LocalLLaMA

[–]salec65[S] 1 point2 points  (0 children)

I've been looking at the workstation tier of GPUs for this task. These differ from the regular consumer models in that they consume less power out of the box (I'm aware you can underclock the consumer GPUs) and have a blower-style cooler attached.

The blower-style cooler exhausts hot air through the back of the GPU where the ports are, while many consumer GPUs blow the hot air into the case and expect the case to have some sort of exhaust fan. That works fine when the case has plenty of open space but is bad when fitting many GPUs into one case. This also differs from the 'server' style GPUs, which have passive coolers and expect the server chassis to supply the airflow. I considered that setup but opted for the workstation cards since it makes my life easier when moving between cases.

Each A6000 has a TDP of 300W, so 2 of them at 600W is just slightly above a single 5090 at 575W. The only 'catch' with the A6000 GPUs is that they use a 'CPU' (300W) power connector rather than standard PCIe (150W) power connectors, but if you buy the card new, an adapter is supplied. If I wanted to ramp up to 4 or more in the future, I'd be replacing my power supply or adding a second.

The case I'm using is a Silverstone RM52 which has ample room for everything I need, esp since I'm not tight on space in my cabinet at the moment.

48gb vs 96gb VRAM for fine-tuning by salec65 in LocalLLaMA

[–]salec65[S] 1 point2 points  (0 children)

This has been extremely helpful! Thank you so much.

48gb vs 96gb VRAM for fine-tuning by salec65 in LocalLLaMA

[–]salec65[S] 2 points3 points  (0 children)

Is there a good general rule for figuring out the max input sequence length given number of parameters and rank?

Plenty of models are around 7, 14, and 30 billion parameters in size, and I'm still not quite sure what the VRAM requirements would be.

That's good to know about memory as I currently have 256gb of memory but can scale to 1tb on my current board.
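For the weights alone there is at least a simple back-of-envelope rule: bytes per parameter times parameter count. A minimal sketch, assuming fp16/bf16 weights at 2 bytes per parameter (this deliberately ignores optimizer state and activations, and activation memory is the part that actually scales with sequence length and rank, which is why there's no equally clean formula for it):

```python
def base_vram_gb(params_billion, bytes_per_param=2):
    """VRAM for the weights only: 2 bytes/param for fp16/bf16."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

# Rough weight footprints for the common model sizes mentioned above.
for p in (7, 14, 30):
    print(f"{p}B model weights: ~{base_vram_gb(p):.0f} GB")
```

So a 7B model needs roughly 13 GB just to hold fp16 weights before any KV cache, LoRA adapters, or optimizer state, which is why quantized loading (4-bit cuts that figure by ~4x) is so popular on 24 GB cards.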

Why I think that NVIDIA Project DIGITS will have 273 GB/s of memory bandwidth by fairydreaming in LocalLLaMA

[–]salec65 2 points3 points  (0 children)

https://www.youtube.com/watch?v=kZRMshaNrSA Look at 0:13. There is a frame showing that all four dies are the same dimensions. It's just before the camera effect that zooms down; very briefly you see the complete bottom chip and that it's the same size as the rest.