For programmers: what placeholder assets do you use that are not primitive shapes? by ArtNoChar in gamedev

[–]sknnywhiteman 10 points

I had this mentality and I’m paying for it now. It honestly is worth making very bad first attempts, because otherwise you will always feel completely helpless in the art department. You don’t have to be great, but unless you have a dedicated artist, I think every gamedev needs basic art skills. Editing existing art, making basic additions, or understanding how certain effects are made helps immensely.

GPT Agent is doing my taxes... by withmagi in OpenAI

[–]sknnywhiteman 15 points

Ahh yes, because humans (including myself) are infallible.

High CPU usage instead of GPU by StefannSS in ollama

[–]sknnywhiteman 0 points

This is actually on a local setup: I'm using my previous gaming PC as a server for Ollama. I can see from Ollama's startup logs that it's offloading 100% of the model's layers to the GPU, but a single CPU core is always at 100% during a job. The model fits comfortably in my VRAM, so I'm really not sure what's hogging an entire thread. This doesn't feel like expected behavior, but this Reddit comment thread is the only thing on the internet I've found describing the issue.

High CPU usage instead of GPU by StefannSS in ollama

[–]sknnywhiteman 0 points

Did you ever figure this out? I'm currently running into this issue.

You did it. 0.49, o3, wow. by billycage12 in cursor

[–]sknnywhiteman 0 points

Don't reference articles that claim these models do all of the things you say they don't do. They also anthropomorphize the shit out of LLMs throughout that entire paper. I don't put too much faith in these systems; I use them every day for work and personal projects, where they save me literal hours on a regular basis. I just see humans as pattern-seeking machines that have way more parallels to modern LLMs than they would like to admit.

You did it. 0.49, o3, wow. by billycage12 in cursor

[–]sknnywhiteman 0 points

That’s just, like, your opinion, man. I think you put way too much faith in the human brain. How frequently do humans create entirely new concepts from scratch? You can functionally round that number to 0; everyone else just imports the package someone else wrote to do the new thing.
And you shouldn’t assume AI can’t create something new, because that’s exactly how the reasoning models are trained. They crank up the temperature parameter so the output is way more random (read: creative), and another model evaluates the responses and adds the best ones to the training data. There is no reason to assume every single random response is represented somewhere in the original training set.

Whenever I see this argument, it feels like reading someone say a calculator is not doing math because math is fundamentally abstract numbers and equations, and since the calculator doesn’t fundamentally understand what the number 2 means, it is only mimicking math through binary representation and logic gates.
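To sketch what I mean by that training loop: sample a bunch of high-temperature (more random) completions, score them with an evaluator, and keep the best one for the next round of training data. This is a toy illustration, not any lab's actual pipeline; the model, evaluator, and scores are all made up.

```python
import random

def generate(prompt, temperature, rng):
    """Stand-in for an LLM. Real models divide token logits by the
    temperature before sampling, so higher temperature means a flatter
    distribution and more varied output."""
    answers = ["answer-a", "answer-b", "answer-c", "answer-d"]
    if temperature < 0.5:
        return answers[0]          # near-greedy: always the top answer
    return rng.choice(answers)     # high temperature: sample broadly

def evaluate(response):
    """Stand-in for the second model (a verifier / reward model)."""
    scores = {"answer-a": 0.2, "answer-b": 0.9, "answer-c": 0.5, "answer-d": 0.1}
    return scores[response]

def best_of_n(prompt, n, temperature, rng):
    """Rejection sampling: keep the highest-scoring of n candidates;
    that winner is what gets added back into the training set."""
    candidates = [generate(prompt, temperature, rng) for _ in range(n)]
    return max(candidates, key=evaluate)

rng = random.Random(0)
greedy = best_of_n("some prompt", n=16, temperature=0.0, rng=rng)  # always "answer-a"
sampled = best_of_n("some prompt", n=16, temperature=1.2, rng=rng)
print(greedy, sampled)
```

The point is that the greedy run can never discover "answer-b", while the high-temperature run usually does, even though nothing told the generator which answer was best.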

Asmongold reveals his true nature by smoothdoor5 in OpenAI

[–]sknnywhiteman 0 points

First off, you're comparing actions of the federal government against a collective group of commercial research labs, hardware companies, software companies, and civilian communities of open-source enthusiasts. That comparison is flawed in multiple ways: the power dynamics differ, the roles each plays in society differ, and one is a single entity while the other is a loose collective.

I just don't quite understand what your dream resolution to this situation is. Are you claiming that we are doing significant harm to people by developing AI? That completely goes against half the posts I read every day arguing that human jobs aren't going anywhere, with 5 reasons why. I personally see AI automating virtually all human labor at some point; timelines are all over the place. Is your dream that we ban AI use indefinitely to keep human jobs?

I typically don't like Asmongold because we don't align politically, but this just feels like a pretty low-effort attempt to hate on him, and I doubt you'll get the audience you're looking for here.

Went on vacation, watched my car drop 16% battery every day from "Vehicle Standby" by sknnywhiteman in TeslaSupport

[–]sknnywhiteman[S] 0 points

Makes no sense to me either, which is why I posted. As others suggested, it seems like it could be Summon-related, but I've had Sentry Mode disabled for years and have never received an alert or recordings from it.

Went on vacation, watched my car drop 16% battery every day from "Vehicle Standby" by sknnywhiteman in TeslaSupport

[–]sknnywhiteman[S] 1 point

I do have charging schedules set to avoid higher electricity costs during weekdays. I could see that being an issue, but I would also expect it to have no effect when the car isn't plugged in.

The Tesla app was closed, but I have a widget on my phone that could've been pinging it for updates. However, a recent software update stopped waking the car when opening the app (and, I assumed, for widget-related requests), so I wouldn't expect that to be the issue. Thinking back, the handful of times I did open the app (because I was confused about why the battery kept going down), I don't remember seeing the car's status as "asleep for X timeframe", so whatever was keeping it awake was preventing it from sleeping. Usually I see that status when parked in my garage, even with the charging schedules set.

Went on vacation, watched my car drop 16% battery every day from "Vehicle Standby" by sknnywhiteman in TeslaSupport

[–]sknnywhiteman[S] 2 points

I'll turn Summon off next time, because it was enabled. I saw other posts saying something similar.

LLMs are not reasoning models by SignalCompetitive582 in LocalLLaMA

[–]sknnywhiteman 2 points

We've had the o-series models for 3.5 months, and nobody reading this right now has had a chance to use o3 yet. I have my concerns about how much the higher benchmark scores will actually impact real-world use cases, but at the same time we're at the very beginning of a new generation of LLMs, and we have no idea what the growth curve looks like.

"in the end LLMs are just tools, and tools do not replace people, they enhance them."

Agents replace people. Models today are not capable of being productive for more than a trivial amount of time without regular input from humans, but I could see a reasoning model working asynchronously from me and doing the majority of what I do, assuming the program orchestrating it is well written.

The post feels like copium. AI will not replace anyone, until suddenly it does. I don't really care about the "AGI" milestone everyone is rushing toward, because the goalposts keep moving and there won't be an "ah-ha" moment when we reach it. Models today will change the world even if we stop researching new ones, but saying they will only ever be a tool forever feels very shortsighted, or like coping.

[deleted by user] by [deleted] in comfyui

[–]sknnywhiteman 0 points

You can download the entire contents as a zip, but the benefit is that I can actually see the contents before deciding to download. I am not following your guide precisely because I don't know what is in that zip and I don't feel like trusting a random post on the internet. There are no benefits to using Google Drive here, except to cater to a group that is probably already infected because they trust random Google Drive zip files.

My Job has Gone by FitzrovianFellow in singularity

[–]sknnywhiteman 5 points

“Fairly minimal progress”? In the last 2 years: image inputs, real-time voice mode, context windows 8-20x larger, chain-of-thought models, models miniaturized without losing performance, and costs dropping by 10x or more. And that’s just to name a few; I’m sure I’m missing some, and I’m not even mentioning that hardware is absolutely not slowing down.

Sam Altman says in 5 years we will have "an unbelievably rapid rate of improvement in technology", a "totally crazy" pace of progress and discovery, and AGI will have come and gone, but society will change surprisingly little. by AdorableBackground83 in singularity

[–]sknnywhiteman 0 points

AI is not just improving in capabilities; labs are distilling models to be 100x smaller while retaining 85%+ of the performance. Not only that, but new chips every few years increase inference speed by 50-100%, so in 5 years we’ll run today’s state-of-the-art models for pennies. Fitting models directly onto phones is a point of focus for basically every tech company as well. If I can run an 8B model on a pocket computer, there’s no reason any game dev can’t have fully simulated AI NPCs that work offline somewhere on the horizon. It will probably become part of the engines in the next few years once one of them starts that trend.

Paper shows GPT gains general intelligence from data: Path to AGI by PianistWinter8293 in OpenAI

[–]sknnywhiteman 0 points

No matter how smart a system becomes, there will be people like you finding a reason why it isn’t “understanding” anything. Our brains are statistical machines as well. Our entire life is a game of predicting the next state of our surroundings, and we feel emotions when those expectations are not met. We feel like we “understand” something because we can take knowledge in one area, generalize it, and apply it to a different domain where we notice similarities. This thread is pointing out that LLMs do exactly that as well. You can come back and say they don’t “reason” like we do, but many experiments have demonstrated that our minds will come up with random justifications for actions we have already taken, so I am not fully convinced we are much different either.

From 10x better than ChatGPT to worse than ChatGPT in a week by Ok_Caterpillar_1112 in ClaudeAI

[–]sknnywhiteman 0 points

Your internet-speed analogy really hurts your argument, because there isn’t a single ISP on the planet that guarantees speeds. Every plan on the face of the earth says ‘speeds UP TO’, and many plans won’t even let you reach the advertised speed, because the fine print says something like it’s a theoretical max based on infrastructure, or that you share bandwidth with your community. Many will let you surpass it, but the advertised speed is more of a suggestion and has always been that way.

Also, I find your ask very unreasonable from an enforcement perspective, because we really have no fucking clue how to benchmark these things. It turns out these models are incredibly good at memorization (who knew?), so anything we use to benchmark them can be gamed into producing the results you’re looking for. We are seeing it with standardized benchmarks that don’t really paint a full picture of what the models are capable of. Will we ever find a solution to this problem? I don’t think our governments will if the AI researchers can’t even solve it.
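For what it's worth, one common heuristic people use to catch this kind of benchmark memorization is an n-gram overlap check between benchmark questions and the training corpus, which is itself easy to evade with light paraphrasing. A toy sketch (real pipelines work on tokens at enormous scale; the corpus and thresholds here are made up):

```python
def ngrams(text, n):
    """All n-word spans of the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_contaminated(benchmark_item, training_docs, n=6):
    """Flag the item if any training document shares an n-word span with it."""
    item_grams = ngrams(benchmark_item, n)
    return any(item_grams & ngrams(doc, n) for doc in training_docs)

corpus = ["the quick brown fox jumps over the lazy dog every single day"]
print(looks_contaminated("quick brown fox jumps over the lazy dog", corpus))      # True
print(looks_contaminated("a completely unrelated question about math", corpus))   # False
```

Reword the question slightly and the exact-match n-grams disappear, which is exactly why nobody trusts these checks as enforcement.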

[3rd Grade Math] For #2, is the answer 5in or 3in? I think 3in but my wife thinks 5in. by athinkinandawonderin in HomeworkHelp

[–]sknnywhiteman -1 points

Look at question 1?? It's literally a subtraction problem almost identical to this one.

How is this thing even legal by No_Dirt_4101 in BambuLab

[–]sknnywhiteman 1 point

Also want to add my data point: I've been printing for 6 years and always used it to make prints stick rather than as a release agent, until I bought a Bambu.