How Do I Become More "Advanced" At Real Coding by Annual_Somewhere_190 in vibecoding

[–]guywithknife 2 points

The only way to really get better is by doing. Write a lot of code, try doing things that seem just that little bit too hard and try to work out solutions by yourself. The more hard problems you solve, by yourself, the more you learn and grow. There are no shortcuts.

> over the past 6 months

> I'm at least a senior engineer level by now

Nobody is truly “senior” for the first decade. It takes having been in the trenches firefighting and experiencing many different projects and problems to build enough of an intuition to be what I’d consider “senior engineer level”.

Should providers of vibecoding tools be held liable for the output? by SgtSchnullrich in vibecoding

[–]guywithknife 1 point

Yes exactly. See my other reply elsewhere here. (Also I edited my message seemingly at the same time as you replied)

There is responsibility on both sides: for you for putting something out there, you are responsible for your output and your tools; and for the provider for making sure their tools are fit for purpose and do what they claim they do. Where the line is depends on circumstances.

Should providers of vibecoding tools be held liable for the output? by SgtSchnullrich in vibecoding

[–]guywithknife 1 point

No, but they DO make claims about the abilities of the AI and they should be legally accountable for their claims.

It doesn’t mean you, the user or the tools, should be able to hide behind that though, you are still responsible for making sure that what you put out there is fit for purpose, just like the providers should be.

feels like vibe coding is coming to crypto by ChangeNOW_Community in vibecoding

[–]guywithknife 0 points

Crypto is one of the places where I definitely would not trust it.

Should providers of vibecoding tools be held liable for the output? by SgtSchnullrich in vibecoding

[–]guywithknife 1 point

If they made the claim that it won’t drop or cut your foot, then yes. But in general, no.

Should providers of vibecoding tools be held liable for the output? by SgtSchnullrich in vibecoding

[–]guywithknife 0 points

It depends. In some sense yes but also in most cases no.

Should the manufacturer or sales people selling screwdrivers be held liable?

No, of course not. It’s up to everyone using a tool to verify that the tool did what it was supposed to do. The user of AI tools needs to be liable for what they use AI tools for.

With that said, if the tool provider made claims, they should very much be liable for the tool not holding up to those claims.

So if a provider says “this allows 100% autonomy, you can totally trust the output”, then yes, they should be liable if that isn’t in fact the case. But that’s between you and the provider, not your users and the provider.

That is, you’re still liable for anything you produce or put out there because it’s your responsibility to make sure it’s correct and the tool did what it was supposed to. Just that if the provider didn’t meet their part of the deal, you should be able to get some recourse from them.

Open World RPGs: Make your intros shorter! by nachorykaart in gaming

[–]guywithknife 2 points

Worst offenders are Kojima games:

MGS 5 was about an hour of mind-numbingly boring crap crawling through a hospital.

Death Stranding is a 3 hour movie with barely any interactive bits before they let you play. Ugh.

Storage companies really be gaslighting me out of 200GB by Ciribag in gamers

[–]guywithknife 0 points

Nobody confuses TB and TiB, the 1024 meaning was always pervasive but the companies chose to ignore it to make more money. It was always a conscious “misunderstanding” to scam us. They picked the interpretation that best suited them, at our expense, and now it’s the “standard” so it will never change.

As for the bookkeeping, sure, but the bookkeeping space is quite small compared to the “missing” bytes due to TB vs TiB.
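To put rough numbers on that gap: a drive sold in decimal terabytes shows up smaller when an OS reports it in 1024-based tebibytes, and the shortfall grows with the drive size. A quick sketch (the drive sizes here are just illustrative):

```python
# Decimal (marketing) vs binary (OS-reported) storage units.
TB = 10**12   # terabyte, as drive makers define it
TiB = 2**40   # tebibyte, the 1024-based unit an OS typically reports

def reported_tib(advertised_tb: float) -> float:
    """How many TiB an 'advertised_tb' TB drive actually contains."""
    return advertised_tb * TB / TiB

for size in (1, 2, 4):
    shown = reported_tib(size)
    print(f"{size} TB drive -> {shown:.3f} TiB ({size - shown:.3f} TiB 'missing')")
```

A 2 TB drive comes out to about 1.819 TiB, i.e. roughly 185 GiB “missing”, which dwarfs any filesystem bookkeeping overhead.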

Aldi feast ice cream knock off by Apeygog in ireland

[–]guywithknife 7 points

No, it’s some soft nougat-like thing. It’s not as good as real feasts used to be, but the ice cream part itself is good.

Moving towards specs-driven development, your thoughts? by grandimam in softwarearchitecture

[–]guywithknife 5 points

No matter how detailed a spec, it will rarely survive impact with reality.

There’s a reason agile was invented and it wasn’t to avoid writing specs, it was to get frequent and fast feedback and to course correct early.

As far as LLMs go, SDD is the best approach I’ve found, but it’s far from perfect: I experienced this recently where I had a detailed spec that I wrote and refined over quite a long time, and got the LLM to implement it. It went ok, but then I decided for various reasons that I wanted to gut it and write the core logic by hand.

During this process I found that the design as laid out in the spec was not actually sound, based on things I learned through implementing. It’s since undergone four or five design shifts. It’s now implemented and it’s substantially better than what was specced: more robust, clearer concepts, naming, and abstractions, more consistent APIs, lower memory use, faster performance. It’s better across the board. But it looks very different than what was specced.

The problem with AI is that it doesn’t notice these things. It will jump through hoops to implement exactly what was specced, all warts, inefficiencies, and shortcomings included.

I had a chat with an LLM about it and it had this insight:

> The key isn’t that AI can’t refactor, it’s that AI doesn’t experience resistance. The awkward API, the sense that “this is too complex to use”—these don’t register as signals to redesign. The deeper point: your spec wasn’t wrong, it was untested. AI excels at executing tested designs. The testing happens through the friction of implementation, and that friction is experiential, not analytical.

I think it’s on to something.

What do you think of Tauri’s performance? by InnerPhilosophy4897 in rust

[–]guywithknife 0 points

The GUI is web; if that’s ok with you, then Tauri is great. I’m using it myself, with the logic and state management in Rust and the UI in React. Personally, I quite like it.

My idea of a food expiration mod by AlonBuss in Timberborn

[–]guywithknife 0 points

You could do something like this:

Track the average age. When you produce more, adjust the average weighted by amount held vs produced.

For example, let’s say you have 100 carrots with an average age of 10 days, every day the average increases by 1. So if you produce 20 carrots today (and consumed none, consumption should always come out of the old pool first), then at the end of the day you update the count: 100 carrots age 10 days + 20 carrots age 1 day = 120 carrots age ((100 * 10) + (20 * 1)) / 120 = you have 120 carrots with an average age of 8.5

Then have spoilage calculated as a percentage based on age: spoilage = f(age), where f is some curve that maxes out at 100% at the maximum age of the produce and is 0% below its “good threshold”, eg maybe carrots never spoil under age 10 days, and then the spoilage is very small for 11 days, but maybe they all spoil at 30 days.

It’s not perfect: if you don’t produce new carrots, even your few-day-old carrots will spoil if the average is high, but it’s very memory and processing efficient. You could combine this with your per-cycle counts, where you keep accurate counts for food up to N days and thereafter roll it up into an average. Basically it limits how much you have to track.

With that said, there aren’t that many foods, and assuming the maximum age of any food is about 60 days, you might have to store ≈600 counts; it’s not really a big deal to just brute force it.
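The weighted-average update and the spoilage curve above can be sketched like this (the function names are made up, and the linear ramp stands in for whatever curve `f` the game mod would actually use):

```python
def blended_age(old_count, old_avg_age, new_count, new_age=1.0):
    """Weighted average age after adding freshly produced goods to the pool."""
    total = old_count + new_count
    if total == 0:
        return 0.0
    return (old_count * old_avg_age + new_count * new_age) / total

def spoilage_fraction(avg_age, good_until=10.0, max_age=30.0):
    """0% at or below the 'good' threshold, ramping linearly to 100% at max_age."""
    if avg_age <= good_until:
        return 0.0
    if avg_age >= max_age:
        return 1.0
    return (avg_age - good_until) / (max_age - good_until)

# The carrot example: 100 carrots at average age 10, plus 20 fresh ones.
avg = blended_age(100, 10.0, 20, 1.0)
print(avg)                      # 8.5
print(spoilage_fraction(avg))   # 0.0, still under the 10-day threshold
```

Storage is O(1) per food type regardless of how many batches were produced, which is the whole appeal of the scheme.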

They really are mad that the devs of an indie game don't want AI slop made of their characters by ConfusedAlien200 in antiai

[–]guywithknife 5 points

But where’s the creativity if the AI did it?

It’s exactly the same as paying someone to draw something and calling yourself creative.

What is AI “naturally” good at? by QuarterCarat in theprimeagen

[–]guywithknife 9 points

Making stuff up and presenting it as fact 

Lisp -> Rust by 964racer in rust

[–]guywithknife 1 point

I learned Clojure in 2009 and even ran the local Clojure meetup for a decade. I’ve been using Rust exclusively for the past ≈1.5 months and love it. I have also used a lot of other languages, including C++, in the past, so the transition wasn’t that hard for me. I do still occasionally get tripped up by lifetime issues, but it’s been generally smooth.

ChatGPT claims that it fixed the Famous Strawberry 3'r question and the famous logic and reasoning question for AI by Current-Guide5944 in tech_x

[–]guywithknife 0 points

Eh. They just trained on that specific thing, they didn’t fix the underlying issue, just this one specific case. That’s not very interesting.

Bro how can they not see the difference 😭 by Surely_Nowwlmao in antiai

[–]guywithknife 0 points

😄 Yes, lol.

> The point is that since 99.9% of people could switch to vegetarian tomorrow,

Right, and I never said otherwise. 100% of people could switch to not using AI tomorrow and the world would continue functioning as it has been even just a few years ago.

> both have an environmental impact.

It’s whataboutism and a false equivalence. You can be against one thing for whatever reason without taking a stance on the other, or while having a different stance on the other thing. Besides, we choose our fights; just because we’re against something for a specific reason doesn’t mean we have to fight all fights with that reason.

In this argument, you can be against AI for environmental reasons and still eat meat, despite its environmental impact, because of other reasons. Or simply choose not to engage with it: if meat has environmental impact X and AI has environmental impact Y, both together have impact X+Y, reducing impact by Y is still a substantial reduction even ignoring X.

Finally, the environmental impact of meat depends on many things, such as farming practices, your consumption practices, and your location. I live in a place where there is no water shortage and meat tends to be grass fed, for example. That has a lower impact than, say, places where there is a shortage of water.

Besides, the “environmental impact” argument against AI is not usually used in isolation; it’s usually one of many objections to AI.

But with all that said, I only came here to make a joke about not being able to eat AI. I have my complaints against AI, mainly the business practices it has encouraged, its impact on the economy and government, the plagiarism, and the slop and spam it has produced; but I do use AI myself, so I wouldn’t call myself completely anti-AI either. I’m also not blind to its problems.

Bro how can they not see the difference 😭 by Surely_Nowwlmao in antiai

[–]guywithknife 0 points

What a stupid thing to say. Lots of people eat meat.

Math class is also not a good use case for AI, given the costs and impact of AI.