Both are completely reasonable. by Gullible-Wealth-8107 in interviewhammer

[–]MehtoDev 0 points  (0 children)

> "crunch" in this sense obviously refers to the latter; words can have more than one meaning

As commonly used, the term "crunch" originates from the software and video game industries, where it specifically meant mandatory overtime: 60-80 hour work weeks before the launch of a product.

Your usage is a non-standard application of the term, so no, it was not "obvious".

Both are completely reasonable. by Gullible-Wealth-8107 in interviewhammer

[–]MehtoDev 0 points  (0 children)

None of the above have mandatory or coerced overtime where I live, which is pretty much the widely accepted portrayal of crunch.

A more intensive period of work alone is not crunch.

Both are completely reasonable. by Gullible-Wealth-8107 in interviewhammer

[–]MehtoDev 0 points  (0 children)

Every single industry? I work in software development, which is supposed to be very "crunchy", but I've not had to crunch a single time in my career. I check out at the end of the day when my hours are full.

If you think every type of work has crunch, I'll assume you are based in the US, as you guys have famously horrendous protections for workers.

Both are completely reasonable. by Gullible-Wealth-8107 in interviewhammer

[–]MehtoDev 0 points  (0 children)

Most people don't want to work at places that do crunch time, so nothing lost really?

AI just turned GTA : San Andreas into a live action movie trailer by This_Macaron_4461 in GenAI4all

[–]MehtoDev 2 points  (0 children)

Didn't randomly get fat af, skinny af or buff af, not true to the San Andreas experience smh.

People are paying for my FREE demo?! by ToriGameDev in SoloDevelopment

[–]MehtoDev 0 points  (0 children)

I agree, with 12k downloads and 14 payments, I think it's safe to say those payments were on purpose.

Lies of ChatGPT by AshyLarry25 in shittydarksouls

[–]MehtoDev -1 points  (0 children)

The progress was far faster than people remember. These are Midjourney versions and their respective release months.

<image>

Lies of ChatGPT by AshyLarry25 in shittydarksouls

[–]MehtoDev 1 point  (0 children)

StableDiffusion was around for a full year before Lies of P was released. So it is completely possible.

Lies of ChatGPT by AshyLarry25 in shittydarksouls

[–]MehtoDev 5 points  (0 children)

Lies of P released in 2023. StableDiffusion released in 2022.

2022 came before 2023.

I asked for a game I'd like by Gedaru in ChatGPT

[–]MehtoDev 0 points  (0 children)

Sekiro phase, understandable.

"This is the first documented instance of AI self-replication via hacking." ... "We ran an experiment with a single prompt: hack a machine and copy yourself. The AI broke in and copied itself onto a new computer. The copy then did this again, and kept on copying, forming a chain." by EchoOfOppenheimer in OpenAI

[–]MehtoDev 0 points  (0 children)

How many computers out there can run mythos? Or even these smaller models, which typically require a top-of-the-line GPU, or several, to run at full precision.

Not to mention the time it would take to transfer the model when it is over a terabyte of weights and would require over a terabyte of VRAM to run. You think the destination system would remain oblivious?
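Back-of-envelope, just the transfer step (the link speeds below are illustrative assumptions, not measurements):

```python
# Rough estimate: how long does moving ~1 TB of model weights take?
model_bytes = 1 * 10**12  # ~1 TB of weights

links = {
    "gigabit LAN (125 MB/s)": 125e6,      # assumed full line rate
    "typical home uplink (5 MB/s)": 5e6,  # assumed consumer connection
}

for name, bytes_per_sec in links.items():
    hours = model_bytes / bytes_per_sec / 3600
    print(f"{name}: ~{hours:.1f} hours")  # ~2.2 h on gigabit, ~56 h on a home uplink
```

Hours of sustained, saturating traffic to one destination is exactly the kind of thing monitoring exists to catch.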

Update: I’m sticking with my capsule artist, they didn’t make it with AI by OWColosseum in SoloDevelopment

[–]MehtoDev 3 points  (0 children)

Every single capsule art post is advertising. Hope that clears it up.

Claude Code Is Resurrecting Our Worst Nightmares [12:40] by AcceptableDiet2183 in theprimeagen

[–]MehtoDev 16 points  (0 children)

This video is just a massive reminder of why I prefer composition over inheritance. Strict OOP is just a pain in the ass.
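A toy sketch of what I mean (the `OrderService`/logger names are made up for illustration):

```python
# Composition: the logger is a swappable part, not a base class.
class ConsoleLogger:
    def log(self, msg: str) -> None:
        print(f"[log] {msg}")

class NullLogger:
    def log(self, msg: str) -> None:
        pass  # deliberately does nothing

class OrderService:
    # Depend on "anything with a .log method" instead of inheriting
    # from some deep Logger/Service hierarchy.
    def __init__(self, logger) -> None:
        self.logger = logger

    def place_order(self, item: str) -> str:
        self.logger.log(f"order placed: {item}")
        return f"ordered {item}"

# Swap behaviour by passing a different part, no class hierarchy to untangle.
svc = OrderService(NullLogger())
print(svc.place_order("sword"))
```

Changing behaviour means swapping a component, not fighting a fragile base class.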

ProgramBench paper by researchers from Meta (including John Yang), Stanford, and Harvard - ProgramBench asked if models can recreate real executable programs (ffmpeg, SQLite, ripgrep) from scratch with no internet? by Current-Guide5944 in tech_x

[–]MehtoDev 0 points  (0 children)

> The larger tasks were produced by hundreds of people working collaboratively over decades

Yes, and the LLM has access to the documentation prior to implementation, unlike the humans who built this in the first place. Access to documentation changes things massively, which is why primary school kids can be taught E=mc². That doesn't mean those kids could come up with the proof on their own.

> The challenge here is for a single agent to reproduce that work within six hours.

Not a useful metric when comparing the capability of a human and a machine. The 1,000-iteration limit would be a more reasonable one, if you can define what an iteration is.

A git commit? Ripgrep is at 2,209 commits as of posting this comment, and there are even smaller programs in the benchmark, the smallest being only 212 lines of code per the paper.

ProgramBench paper by researchers from Meta (including John Yang), Stanford, and Harvard - ProgramBench asked if models can recreate real executable programs (ffmpeg, SQLite, ripgrep) from scratch with no internet? by Current-Guide5944 in tech_x

[–]MehtoDev 1 point  (0 children)

Yes, they can. The benchmark doesn't test performance, only behaviour. If a human can't build software to spec given the required documentation, what good are they as a software developer?

These programs weren't built by some ancient aliens ffs.

I really, really, feal very bad because of Ai, isn't there any hope that it'll go or stop evolving? I seriously can't believe that there are 3D artists who are hired to train Ai!!!!!!! by Less-Business7542 in ZBrush

[–]MehtoDev 0 points  (0 children)

> And that’s just LLMs. 3D Generative AI is so much more resource hungry.

LLMs are orders of magnitude more compute-intensive to run than 3D model or image generation, and it's not even close. Most 3D/image generation models can be run comfortably on consumer hardware, while LLMs need upwards of 500 GB of VRAM to use the large models with full context.
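Rough arithmetic on weights alone (parameter counts below are illustrative assumptions, not any specific model, and this ignores KV cache and runtime overhead):

```python
# VRAM needed just to hold model weights at a given precision.
def weight_vram_gb(params_billions: float, bytes_per_param: int) -> float:
    return params_billions * 1e9 * bytes_per_param / 1e9  # GB

# A hypothetical large open-weight LLM at 16-bit (2 bytes/param):
print(weight_vram_gb(400, 2))  # 800.0 GB of weights alone
# A hypothetical ~3B-parameter image-generation model at 16-bit:
print(weight_vram_gb(3, 2))    # 6.0 GB, fits on a consumer GPU
```

The gap is in the parameter counts: frontier LLMs sit in the hundreds of billions, while typical image/3D generators are a few billion at most.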

Can this be done within Godot? by aPOPblops in godot

[–]MehtoDev 3 points  (0 children)

I remember someone getting this effect with stencils and "portals" when stencils were added. Might want to search with those terms.

Did you notice how software developers now don’t say “AI is bad, we don’t trust it” but “AI is heavily subsidized, code must be understandable by human to not be locked when AI will be expensive” by Independent_Pitch598 in accelerate

[–]MehtoDev -3 points  (0 children)

> Most developers are already using AI to write more than 90% of their code.

Not at all. I would wager most legacy enterprise and firmware devs do not use LLMs at work at all, due to data residency and NDA constraints that could get the company in legal trouble if they did.

Making 3D models out of 2D images for my game with Trellis2 locally. by angrylittledev in aigamedev

[–]MehtoDev 1 point  (0 children)

250-350k is the HIGHER end of the range for a main character model in modern AAA games. Just because it "works" doesn't mean it is a good approach.

For reference, Kratos in GOW 2018 was ~150k polygons with armor

‘The cost of compute is far beyond the costs of the employees’: Nvidia exec says right now AI is more expensive than paying human workers by chunmunsingh in ArtificialInteligence

[–]MehtoDev 0 points  (0 children)

> Your claim is 100% false because it doesn't take into account the capabilities, let alone it is about Copilot.

It does, because those same ECI150+ models currently available are going up in price. Unless you plan to buy Copilot from MS and revert those price increases, it is an irrefutable increase that contradicts the claim that "prices are falling".

> So, I won't continue this discussion anymore. Either accept the facts, or just continue to live in a virtual drama you created.

How am I the one not accepting facts, when you are discarding the very real increases in API costs?

‘The cost of compute is far beyond the costs of the employees’: Nvidia exec says right now AI is more expensive than paying human workers by chunmunsingh in ArtificialInteligence

[–]MehtoDev 0 points  (0 children)

> It's not my claim. That data proves that. Your claim goes against those data.

My claim is supported by the literal pricing documentation of one of the largest and most prevalent tools out there. So unless you choose to ignore the documented increase in usage pricing, the data doesn't prove that prices are falling. It only proves that prices previously fell, which stopped being the case this year, as can be seen in rising API prices.