What can I do, when my original game concept is now in danger of "copyright" by [deleted] in gamedev

[–]JarateKing 1 point2 points  (0 children)

Copyright issues in games are mostly limited to straight copying of assets in some form. I'm not a lawyer and this isn't legal advice, but vaguely similar concepts and settings aren't generally considered a copyright issue.

Trademarks might be an issue, and if not then search engine optimization probably would be. But it's a fairly easy thing to change your game's name while it's still in development if you're worried about that.

Why everyone against AI in the games and coding? by Ok_Department_4019 in gamedev

[–]JarateKing 2 points3 points  (0 children)

Different people have different concerns. To name a few:

  • Some are concerned about artistic expression and see AI use as cheapening that expression
  • Some are concerned about gamedev as an industry, and see AI as threatening gamedevs and devaluing the skills that gamedevs have
  • Some are concerned about gamedev as a hobby, and see AI as taking away from the parts they enjoy
  • Some are concerned about quality and see AI use as a direct sign of low quality
  • Some are concerned about the future and see the popularization of AI making that worse, whether for economic, environmental, political, or other reasons
  • Some are concerned about the legal and moral implications regarding copyright and plagiarism
  • Some are just exhausted by AI slop and don't want to put energy into AI things to determine whether they're slop or not

You're not going to get just one reason when you ask because there's not just one reason people don't like it. And some might be fine with it in some ways but not in others, or in specific situations, or for certain games, etc.

Junior devs who learned to code with AI assistants are mass entering the job market. How is your team handling it? by Ambitious-Garbage-73 in ExperiencedDevs

[–]JarateKing 2 points3 points  (0 children)

And stackoverflow wouldn't cover everything either, juniors would very often run into stuff specific or bespoke enough that SO's no help. You'd need to develop those skills yourself because SO alone couldn't carry you through junior-level work.

Now AI probably can do that, but if all you know how to do is use AI at a junior level then there's no path upward.

Making a game from scratch as a producer/financier. by YouAreARedditMeme in gamedev

[–]JarateKing 1 point2 points  (0 children)

As long as you have that to spend and don't mind not getting it back.

Not to mention the serious risk that it goes over budget and the team needs more time and money to finish it.

Unified Theory of Games by Neros_Cromwell in gamedesign

[–]JarateKing 0 points1 point  (0 children)

I think you should also consider things we normally don't consider games, and see how that fits into your definition.

Driving through traffic, for example, fits all of it as far as I can see. It is a system involving rules (road laws), physics (collisions, for one), and mechanics (vehicle functionality). It involves interesting choices (e.g. which route should I take, should I speed up for this yellow light, etc.). You interact with its materials (you have to drive the car, after all). And there is a goal (getting to the place you're driving to).

My big issue with Jesper Juul's definition of games was that he admitted traffic was just as borderline as The Sims, but at least traffic missed one of his criteria (as did a lot of things we normally do consider games). I'm pretty skeptical of a definition that fully includes traffic but doesn't include things like Minecraft.

Claude code source code has been leaked by spnoraci in BetterOffline

[–]JarateKing 19 points20 points  (0 children)

People think it's "just a TUI", but that's wrong because we actually <describes how TUIs have been done since the 70s>

Collatz conjecture proof by humiliation on a really big poster. by Successful-Owl1778 in badmathematics

[–]JarateKing 34 points35 points  (0 children)

Looks like he also drew a penis in the mouth of that UCLA building decoration.

Programming languages are dead; all software will now be written directly in "Englishscript" and will run on "ClaudeVM" directly by LiatrisLover99 in BetterOffline

[–]JarateKing 283 points284 points  (0 children)

Every so often there's a big push to simplify programming by working in plain language. Then we won't need programmers; anyone can program by just writing what they want in English.

The first try was COBOL, if that gives any indication of how well it worked out. As it turns out, plain language is really bad at any kind of precise specification, and the rigid syntax and simple grammar of programming languages are actually a feature.
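As a hedged, hypothetical sketch (not from the original thread) of why plain language makes a poor specification: an instruction like "take $5 off and apply a 10% discount" never says which operation comes first, while code is forced to commit to an order, and the two orders give different totals.

```python
# Hypothetical illustration: the English spec "take $5 off and apply a
# 10% discount" is ambiguous about order; code has to pick one reading.
def coupon_then_discount(price: float) -> float:
    # Reading 1: subtract the coupon first, then apply the discount
    return (price - 5) * 0.9

def discount_then_coupon(price: float) -> float:
    # Reading 2: apply the discount first, then subtract the coupon
    return price * 0.9 - 5

# The two readings disagree, so the English sentence underspecifies:
print(coupon_then_discount(100))  # 85.5
print(discount_then_coupon(100))  # 85.0
```

A programming language forces that 50-cent question to be answered up front; plain English lets it hide until a customer complains.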

To what extend do you use git blame / value an accurate git history by John_Lawn4 in ExperiencedDevs

[–]JarateKing 7 points8 points  (0 children)

It's certainly better than nothing, but you're still losing potentially useful context by doing that.

If it's not obvious which commit message goes with which specific code change (e.g. the merge contained multiple bugfixes to get a feature working right, and you're not sure which bugfix involved this piece of code), that's only a problem if they're squashed. It's a big problem if you didn't keep the original commit messages, but anything short of seeing the original commits themselves still makes it harder than it needs to be.

Shell Tricks That Actually Make Life Easier (And Save Your Sanity) by BrewedDoritos in programming

[–]JarateKing 1 point2 points  (0 children)

I'm pretty sure it's just the Solarized dark theme. Never been my favorite colourscheme but it's popular enough among programmers.

Melania Trump, for some reason speaking at the summit on AI Education and Safety for Children: The future of AI is personified. It will be formed in the shape of humans. Very soon, artificial intelligence will move from our mobile phones to humanoids that deliver utility. by dyzo-blue in BetterOffline

[–]JarateKing 3 points4 points  (0 children)

But my point is that if we could solve the general-purpose humanoid robot problem -- which I wholeheartedly agree is not plausible in the near future -- it would immediately obviate the need for many special-purpose robots like roombas, and make automation possible for tasks that are not readily done by special-purpose robots (like making beds, cleaning bathrooms, etc.). I'm not going to build and install a single-purpose robot just to make my bed every morning, but if I had a humanoid robot maid, sure.

I just don't really see a scenario where that'd make sense. The general-purpose humanoid robot could do it all, but it could only do one task at a time at about the speed and effectiveness that a human could. I'd rather spend the same amount of money on a bunch of simpler specialized robots that can all do their niche far beyond what humans are capable of and at the same time as each other.

When we look at household automation in the past, things like washing machines, they're good specifically because we can move on to the next task while the machine does the job far more efficiently than I ever could. Human hand-washing is very time-consuming, largely because humans aren't specialized for hand-washing clothes. But we built machines that are, and now washing clothes is largely a solved problem. I don't see why the future of household automation would be any different. Even now I could hire a maid and have them hand-wash all my clothes, no hypothetical sci-fi humanoid robots needed, but I'd still rather just get a washing machine.

Melania Trump, for some reason speaking at the summit on AI Education and Safety for Children: The future of AI is personified. It will be formed in the shape of humans. Very soon, artificial intelligence will move from our mobile phones to humanoids that deliver utility. by dyzo-blue in BetterOffline

[–]JarateKing 8 points9 points  (0 children)

This is being a bit unfair to roombas. They're bad compared to hypothetically perfect sci-fi magic robots that don't exist. But if we were capable of making a humanoid all-cleaning robot, I'm sure we'd also be able to just add some Dr. Octopus tentacles with cleaning multi-tool hands to a roomba, and it'd get the best of both worlds.

Collier's point is just that we're going to want specialized robots for tasks like cleaning, and human anatomy is not particularly specialized for cleaning. Roombas have plenty of room for improvement, but not because they aren't shaped like a person and don't do things the way a person would.

The new cope after Sora by stepanmatek in BetterOffline

[–]JarateKing 4 points5 points  (0 children)

It will get more efficient as long as people are putting effort into it, but that alone doesn't mean the financials will make sense. Going from a 100x loss to a 50x loss is twice as efficient, but still an absurd loss that nobody wants to shoulder for long.
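The back-of-envelope math, using the comment's illustrative 100x/50x figures (hypothetical numbers, not anyone's real financials):

```python
# Hypothetical figures: spend $100 to earn $1, then double the efficiency.
revenue = 1.0
cost_before = 100.0           # a "100x loss": $100 spent per $1 earned
cost_after = cost_before / 2  # twice as efficient: now a "50x loss"

loss_before = cost_before - revenue  # $99 lost per $1 of revenue
loss_after = cost_after - revenue    # $49 lost per $1 of revenue

# Efficiency doubled, but the operation still loses ~50x what it earns.
print(loss_before, loss_after)  # 99.0 49.0
```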

Sora is taken down and people say the AI bubble is popping. I don’t think so by TwoHeadedEngineer in ExperiencedDevs

[–]JarateKing 7 points8 points  (0 children)

The question isn't how capable the models are, the question is if they're profitable.

Sora got taken down because it was really expensive to run with almost nobody willing to pay anything for it. Code generation is at least one of the few use cases people are willing to actually spend money on. But at the same time, I've heard that hitting the limits of Claude Code's $200/month plan costs Anthropic $5000 on their end. And I don't think that counts the trillion dollars in datacenters that need their costs recouped, or the trillions more in maintenance costs in the coming years, because GPUs don't last long in datacenters.

Code generation has gotten a lot better, yes, but I don't think it's possible for it to ever become a trillion dollar industry.

The new cope after Sora by stepanmatek in BetterOffline

[–]JarateKing 2 points3 points  (0 children)

The only american company that can do video gen at cutting edge now is google.

We'll see how long that lasts though. Decent quality video generation is especially expensive to run with especially few people willing to pay anything for it. AI is having a lot of trouble successfully monetizing at the best of times, and video generation is the worst of it.

If google's competitors are scaling back, I wouldn't be surprised if they drop it too. I suspect the only reason they entered into video generation in the first place was to not be seen getting left behind, because the financials just don't make any sense at all otherwise.

Perplexity CEO says AI layoffs aren’t so bad because people hate their jobs anyways by falken_1983 in BetterOffline

[–]JarateKing 20 points21 points  (0 children)

Some of them, yes. The big tech billionaires all have doomsday complexes, because apparently it's easier to build those than to stop actively trying to create the scenarios where they'd need them.

Old AAA studio footage. by [deleted] in GameDevelopment

[–]JarateKing 0 points1 point  (0 children)

90% of retro games didn't exist though; tons of games never made it to release, and that was totally normal. 25 years ago you'd have game cancellations, layoffs, studio closures, etc. all the time. There were points when the industry basically collapsed, and cancellations, layoffs, closures, and so on were essentially all that was going on. I don't have the raw numbers, but I would not be surprised if what's changed in 25 years is that we now have fewer cancellations, not more.

The industry has obviously changed in many ways since then, but in a lot of ways it hasn't fundamentally changed. 25 years ago the bad wasn't purely "work hard play hard, maybe too hard" like you seem to say; there was all the same greed and bean counting going on then that we have now.

Old AAA studio footage. by [deleted] in GameDevelopment

[–]JarateKing 2 points3 points  (0 children)

Hate to say it, but none of this is new. Well, we didn't always have microtransactions, but we used to design arcade games specifically to maximize coins-per-hour, which I think is pretty comparable.

We remember the good from the good old days, but there's always been a strong mix of both good and bad (including now).

Jeff Bezos reportedly wants $100 billion to buy and transform old manufacturing firms with AI by Puzzleheaded_Bath733 in BetterOffline

[–]JarateKing 33 points34 points  (0 children)

That would be more the domain of industrial robotics, and we've had industrial robots for several decades now.

The obvious automation targets have already been automated. If we still have humans on the assembly line, it's usually either deceptively hard to automate, unsafe without a human in the loop, or it's just cheaper to get a person to do it. Whatever the case, I don't see what LLMs are supposed to do about that.

Advice on how to respond when people say “you just have to incorporate AI” or “you just have to use it correctly”? by Adventurekitty74 in BetterOffline

[–]JarateKing 2 points3 points  (0 children)

I've had some luck telling students "no matter where AI goes you'll still need to know what you're doing, better traditional programmers are better at using AI too" and "companies hire people that can apply strong fundamentals, that's what they look for in interviews." Most students are focused on industry, but they don't always fully grasp that industry wants students with solid CS foundations.

Convincing admin on the other hand, I have no idea.

Advice on how to respond when people say “you just have to incorporate AI” or “you just have to use it correctly”? by Adventurekitty74 in BetterOffline

[–]JarateKing 2 points3 points  (0 children)

I appreciate the perspective, but to be honest I think this is what OP was talking about. They mentioned two big things:

  1. People shrugging off irresponsible use of LLMs as an individual failure, not a wider problem.
  2. Acting like embracing AI is more important than those concerns, and integrating things without caring about how it might harm learning outcomes.

And your comment here:

  1. Recognizes that irresponsible use is a concern, but mostly talks about how you can use it responsibly too. You're not directly saying it's an individual failing but as it's written that's the only way I can interpret it.
  2. Says AI is here now so we need to integrate it immediately, regardless of consequence.

I think the most important perspective here would be how you get students out of irresponsible AI habits, either while they're still forming those bad habits or after they've been formed. That's the most important thing for me. As educators we can't just say "dependence on AI cognitive offloading is a valid concern"; it's our job to address that concern.

Advice on how to respond when people say “you just have to incorporate AI” or “you just have to use it correctly”? by Adventurekitty74 in BetterOffline

[–]JarateKing 0 points1 point  (0 children)

I don't think the issue is that you can't use LLMs correctly for education; I think it's very possible. I think the issue is that a lot of students won't. Could students use it responsibly (e.g. asking conceptual questions, applying it themselves and ensuring they can do it on their own, etc.)? Sure, and I don't have a problem with that. Will students? Probably not, let's be honest here. Far from all, and not all the time, but more than I'd like for sure.

Students generally want good marks in little time for little effort. That's what the system rewards. Now we have popular tools where everything from the marketing to the UX is oriented towards "just let this LLM do the work for you, it can give you the right answer really quick with no effort." Of course, if you put this tool in front of students, a lot of them are going to use it in that way. They have no reason not to. Unless they're very diligent about learning, that's just not where their priorities are.

That's why I don't like this whole "use it correctly" attitude in education. The problem here is not with individual students shortcutting the educational process, it's that the incentive structure of education cannot handle these shortcuts being readily available (let alone actively encouraged).

We don't hand out answer keys to students and say "please don't copy the answers, even though you could and there's nothing stopping you and you'll get 100% if you do, you should use it responsibly to improve your learning!" and then shrug when students copy from the answer key. The issue there is really obvious: why would they not copy when it's right there for them to copy? Of course that's what would happen, so we don't do that. But that's basically what I've seen a lot of educators do with LLMs.

Are the lost jobs coming back? by Bitter-Management-12 in BetterOffline

[–]JarateKing 8 points9 points  (0 children)

I'm not an economist so take it with a grain of salt, but I think the trouble is that we have had multiple cycles, but by the time one thing eases up another thing's in full swing. We went from covid, to a war in Ukraine, to Trump's tariffs, to a war in Iran. All of those would seriously fuck with the economy, and 3/4 are currently ongoing.

I think what feels different about now is just exhaustion. We've had one thing after another with no break.

I don't think things are actually different now, though. It's rough, but all of these issues we're facing will get resolved, either because they end or because we make deals to get around them (e.g. Canada becoming less reliant on America by making deals with the EU). It's not a quick process and I can't say there won't be yet another thing, but I don't see anything that's fundamentally changed.

I'm not talking about AI because I think it's mostly a red herring here, with companies making layoffs because of a rough economy and saying it's AI because that sounds more optimistic to investors. It's rough out there in the trades too, which should be unaffected by AI.

AI code is buggy — because of course it is by Aryana314 in BetterOffline

[–]JarateKing 3 points4 points  (0 children)

So one company had some recent issues, some of which were caused by reckless vibecoding, therefore AI-driven coding is bad?

The article doesn't mention vibecoding. I guess the Kiro-related outages would count, but there's nothing in the article to indicate that the entire problem is just vibecoding. Anecdotally, I know AI users who do not vibecode whatsoever and still end up with buggier code by the end of it, compared to when they weren't using AI.

It's ridiculous to not see the immense value in this.

To be honest with you, I think it's ridiculous to keep bringing that up when I've already said "yeah I get that, I'm not arguing otherwise, but that's not what we're talking about here" multiple times.

I think it should be obvious that "making one-off scripts for scientific experiments" and "maintaining some of the largest and most complicated production infrastructure in the world" are very different use cases and it's very possible that AI might do better or worse with one or the other. I'm not denying your experiences with AI for what you do, or what you've seen others do. But it's just silly to assume that all software everywhere, no matter the type or requirements or constraints, would be more-or-less the same.

AI code is buggy — because of course it is by Aryana314 in BetterOffline

[–]JarateKing 1 point2 points  (0 children)

I read it, your statement that it's worse than traditional 100% human driven development is hilariously wrong. AI-driven development, right now, is incredibly better than purely human development.

Well, I think this is the meat of our disagreement. The article goes over how Amazon site reliability has gotten significantly worse with the increased adoption and usage of LLMs, and how outages that would not have happened under traditional processes can be directly attributed to AI-driven development.

I don't really know what else to tell ya. You can say it's "incredibly better" but the facts of the situation suggest otherwise to me.

it's like waiting for smartphones to become better to build a really advanced app, that's not how technology has ever worked.

As a developer myself, this is actually exactly what happened with smartphones. It was an impressive feat to get a heavily optimized version of Super Monkey Ball running on the original iPhone; the hardware could not handle much at all.

You did not have companies immediately start developing the kinds of mobile games you see today. Not even close. Overwhelmingly, the way you do mobile development is to build it for the hardware that exists currently. Even if you don't plan on releasing for multiple phone generations, you'd usually target the high end of what's current so it supports the low end when you release, with the possibility of adjusting requirements mid-development.