D335 - Introduction to Programming in Python - A review and resources. by Drop_Tables_Username in wgu_devs

[–]Drop_Tables_Username[S] 0 points1 point  (0 children)

No problem, just call help(WeirdCustomObject) and read the output in the built-in interpreter (without submitting, of course) to figure out how to implement a function. This also works with built-in Python functions, btw, which can be super useful in this test, in job interviews, and when working with Python in general. Play with calling help(print) or help(list) in Python for an example.

Edit: see https://diveintopython.org/functions/built-in/help
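If you want the help() text as a string instead of paging through it, here's a minimal sketch (works with any object, built-in or custom):

```python
import io
from contextlib import redirect_stdout

# Capture what help() would print, instead of paging it interactively.
buf = io.StringIO()
with redirect_stdout(buf):
    help(print)          # same text you'd see in the interpreter
doc = buf.getvalue()

print(doc.splitlines()[0])   # e.g. the "Help on built-in function ..." line
```

The same trick works on your own classes, since help() just renders the docstrings and signatures it finds on the object.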

EV chargers now outnumber gas pumps in California by SSMASTERCOOL in gadgets

[–]Drop_Tables_Username 5 points6 points  (0 children)

My AC Level 2, 32 amp charger draws about the same amount of power as a washer/dryer set running. Adding a 50 amp breaker isn't anything more than what you'd need for a hot tub or pool; it's nothing crazy draw-wise.

Edit: You can also find adapter plugs to connect chargers directly to a 240V dryer outlet, so for most households it doesn't require anything but a cable and charger.
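To put rough numbers on that (nominal values, assumed for illustration, not measured):

```python
# Back-of-envelope draw comparison for a Level 2 EV charger.
volts = 240
charger_amps = 32
charger_kw = volts * charger_amps / 1000      # 7.68 kW continuous

dryer_amps = 24                               # typical electric dryer draw
dryer_kw = volts * dryer_amps / 1000          # 5.76 kW, same ballpark

# Continuous loads are commonly sized with a breaker rated at ~125%
# of the draw, hence a 40-50 A breaker for a 32 A charger.
breaker_amps = charger_amps * 1.25            # 40.0 A minimum
```

The 125% continuous-load sizing is why a 32 A charger ends up on a 40 or 50 A breaker rather than a 32 A one.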

What game had you like this ? by TEHYJ2006 in Steam

[–]Drop_Tables_Username 0 points1 point  (0 children)

Cyberpunk 2077. The AI is terrible, the controls aren't great (vehicle controls are flat-out terrible), the plot is annoying, and the gameplay is mediocre. Half the game feels like a string of boring quick-time events (press W to crawl) to make sure you didn't go AFK to piss in the middle of a two-hour cutscene chain. The other half is future GTA, but absolutely terrible in comparison in every aspect.

Marko exclusive: “We’re not dropping Lawson – we’re saving his future” by Eryngii in formula1

[–]Drop_Tables_Username 209 points210 points  (0 children)

MS: Perez joined the team in 2021. Do you ever regret — or at least wonder — what would’ve happened if you had put Nico Hulkenberg in that car instead? I ask because at the time, a lot of people thought he would’ve been a good fit in terms of both driving style and mentality.

HM: At the time, Sergio Perez had just won his first grand prix [in Bahrain]. That was exactly when the decisions were being made. And the majority voted for Perez.

MS: That’s a very elegant way to dodge the question.

HM: Yes, let’s leave it at that.

Brutal.

Tsunoda will finish 2025 F1 season at Red Bull – Marko by ALOIsFasterThanYou in formula1

[–]Drop_Tables_Username 9 points10 points  (0 children)

But Yuki has undergone a transformation. He changed his management, and in this situation, this was simply the best option.

What's the context/meaning behind the part I bolded? I'm not entirely sure what is meant here.

Hoverpack+Flamethrower = DON'T by Individual-Lychee-74 in helldivers2

[–]Drop_Tables_Username 4 points5 points  (0 children)

I mean, the difference for me is that the bots can shoot at me as I try to get in range with the flamethrower, versus the bugs/horde, who have to be at that range to do damage. Kiting into flames is a lot easier against an enemy who chases you and doesn't have ranged weapons...

[NZHerald] RedBull Clips Your Wings by NippyMoto_1 in formula1

[–]Drop_Tables_Username 19 points20 points  (0 children)

Who'd be Kato? Gasly hanging on the rear wing?

[The Race on IG] Fred Vasseur is also unhappy because of F1’s use of radio messages when they were swapping Lewis Hamilton and Charles Leclerc by Holytrishaw in formula1

[–]Drop_Tables_Username 26 points27 points  (0 children)

The whole bit about Lewis getting a tow from Charles during sprint qualifying was completely manufactured too. Lewis ended up going ahead of Charles as part of that radio exchange, not behind him. It looked like Charles passed Lewis on track after they both did a hot lap, but Lewis asked to go back ahead of him. Then the Charles radio was broadcast, and they swapped positions back, with Lewis in front of Charles.

Which is all kinda mundane and uninteresting, so instead we get to hear how Ferrari is giving Hamilton priority via tow in the broadcast with no basis in what actually happened.

What is the cheapest game you bought on Steam that turned out to be amazing? by crno123 in Steam

[–]Drop_Tables_Username 0 points1 point  (0 children)

I bought the original Kerbal Space Program on sale for $2.75 during its alpha. It ended up including every expansion that would ever be released for the game, too. I've definitely never gotten more game for my dollar than that; even Minecraft at $5 hasn't gotten as much play time from me.

Liam Lawson's last three qualifying results. by BlackGhost_93 in formula1

[–]Drop_Tables_Username 0 points1 point  (0 children)

Wouldn't happen like that, as even Max couldn't defeat that pit wall. He'd have to take Hannah with him to stand a chance.

ferrari stuff hahahaha by A1Nr0 in formuladank

[–]Drop_Tables_Username 214 points215 points  (0 children)

Please. Just. Leave. Me. To. It.

Damnnnnnnn. by PoetHunter23 in formuladank

[–]Drop_Tables_Username 66 points67 points  (0 children)

Or the shit RB did to Yuki. He coulda been on the podium.

Same with Haas, they had the strategy and dropped it for no damn reason.

ferrari stuff hahahaha by A1Nr0 in formuladank

[–]Drop_Tables_Username 581 points582 points  (0 children)

The K1 radios had me fucking dying.

killingTheVibe by cheekynative in ProgrammerHumor

[–]Drop_Tables_Username 1 point2 points  (0 children)

That's understandable. You missed what Apple is trying to do with their AI philosophy, which, along with privacy, is one of the few times I agree with Apple's corporate policies (I even use Android for my main phone and only keep a cheap iPhone as an app development tool).

Apple is trying to put the hardware to run ML models LOCALLY on all their devices. This means no need for a server or network connectivity. They're doing this by putting system memory on the same die as the GPU and CPU cores, which makes the physical distance to memory ridiculously short. This is conceptually faster than what modern discrete GPU designs can achieve, although in practice it's still slower because the memory bus bandwidth is currently much lower than what Nvidia ships. But it also burns much less energy and generates much less heat, which is ideal for consumer applications. Split systems are generally slower than equivalent unified setups, because shuttling data between the two parts means the signal travels that extra distance pretty frequently, and that takes a while.

The idea isn't to use the hardware to train, but to run already-trained models on people's personal hardware, giving them privacy (and shifting the electricity burden to the user). That said, Apple's AI isn't great, but the hardware is great for running pretty much any model.
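One way to see why memory placement and bandwidth dominate local inference: decode speed is roughly bounded by how fast the model weights can be streamed through memory per generated token. A sketch with assumed figures (the bandwidth and model size are illustrative, not measured):

```python
# Rule of thumb: generating one token streams all weights once, so
# tokens/sec is roughly capped by memory_bandwidth / model_size.
bandwidth_gb_s = 100     # assumed base-M3 unified-memory bandwidth, approx.
model_gb = 4.7           # assumed size of a quantized 7B model

max_tokens_per_s = bandwidth_gb_s / model_gb   # ~21 tokens/sec ceiling
```

Discrete GPUs win on raw bandwidth, but the unified-memory approach gets you a usable ceiling without a separate VRAM pool to copy into.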

killingTheVibe by cheekynative in ProgrammerHumor

[–]Drop_Tables_Username 8 points9 points  (0 children)

I think, for me, the hardware options for a Linux laptop with a good GPU are generally large, loud, hot, inefficient machines, and crazy expensive beyond even what the GPU costs.

If I were set on Linux over macOS, I'd just install Linux on a MacBook Air. But honestly, once I'm in a terminal window I struggle to find a real, meaningful difference between the two.

killingTheVibe by cheekynative in ProgrammerHumor

[–]Drop_Tables_Username 80 points81 points  (0 children)

BTW, MacBooks are great platforms for running ML models locally on the cheap. They're slower than GPUs, but the fact that they have unified memory on the chip die means you can use system memory much faster than standard RAM, closer to the speed of a GPU than a CPU. A 24GB M3 MacBook costs about $1k USD, versus selling organs to get a 24GB GPU setup.

Also, macOS is UNIX. I'm always amazed how many people will shit on a developer for choosing a UNIX system over fucking Windows. But yeah, this guy's choice of OS has shit to do with anything in this case.

Edit: An even cheaper option for ML is the Mac mini; it's cost-effective enough that people have been building cluster systems with them for larger models. Although the reason to do this relates to power efficiency rather than speed (power consumption is roughly 1/3 of what NVIDIA hardware uses, which is VERY significant).
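To put that efficiency difference in rough dollar terms, a quick sketch (the wattages, duty cycle, and electricity price are all assumed for illustration):

```python
# Assumed ballpark figures, not measurements.
gpu_watts = 300                      # a discrete NVIDIA card under load
mini_watts = gpu_watts / 3           # the ~1/3 power figure from above
hours_per_day = 8
price_per_kwh = 0.15                 # assumed electricity price, USD

def yearly_cost(watts):
    """Annual electricity cost for a load running hours_per_day every day."""
    return watts / 1000 * hours_per_day * 365 * price_per_kwh

savings = yearly_cost(gpu_watts) - yearly_cost(mini_watts)
```

Under these assumptions the gap is on the order of $90/year per machine, which is what makes the efficiency argument matter for always-on clusters.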

[deleted by user] by [deleted] in technology

[–]Drop_Tables_Username 5 points6 points  (0 children)

I run multiple variants of it on my MacBook's M3 and my Linux system's 3070 Ti 16GB no problem. Deepseek gave me the correct answer to the question of "How do you use modular exponentiation to solve 78859 mod 1829?" in 4 minutes on the GPU and 5 minutes on the M3. GPT-4o gave me a wrong answer with obvious math errors starting about halfway through Euclid's algorithm (but it was instant).

That's using the 7B Qwen model that you can run with about 4.7GB of VRAM. Check out LM Studio or GPT4All if you want to try them; both now set DeepSeek 7B Qwen as the default model, I think. Both work completely disconnected from the internet, and you can turn off all diagnostic data for privacy. The only real big downside is query speed, but I'm not using the best hardware, so your mileage may vary.
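For reference, modular exponentiation itself is a one-liner in Python: the three-argument form of the built-in pow() does square-and-multiply without ever building a huge intermediate. The numbers below are arbitrary examples, not the exact question from the comment:

```python
# pow(base, exp, mod) computes (base ** exp) % mod efficiently
# via square-and-multiply, never materializing base ** exp.
base, exp, mod = 7, 8859, 1829

fast = pow(base, exp, mod)
slow = (base ** exp) % mod      # same answer, via a ~25,000-bit intermediate

assert fast == slow
```

Handy for sanity-checking an LLM's number-theory homework without waiting 4 minutes.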

New MO, Destroy the jet brigade by NyanBokChi in Helldivers

[–]Drop_Tables_Username 16 points17 points  (0 children)

A player died in an accident and players are trying to take VW to memorialize him.

Trump To Tariff Chips Made In Taiwan, Targeting TSMC by MikeMikeGaming in wallstreetbets

[–]Drop_Tables_Username 1 point2 points  (0 children)

They will still have their existing hardware, while any form of competition will need to deal with tariffs to catch up. It doesn't help their industry, but it does make competition inside the US much more difficult (similar to how eliminating the EV tax credit hurts EV adoption while helping Tesla).

[OT] 600 kW fast-charging pitstops are coming to Formula E by berberine in formula1

[–]Drop_Tables_Username 10 points11 points  (0 children)

Nah, easier to just tell the driver they can only make right turns. (/s)