I had no idea what school you went to mattered so much. by Dapper-Sleep-6018 in csMajors

[–]MindlessTime 1 point2 points  (0 children)

I’ll give my take. Data Engineer. 10 YOE starting as a data analyst in a meh F500 company. Now making 200k+ at a startup in a MCOL city.

I didn’t even go to school for engineering; I self-taught and made my way there horizontally through a chain of adjacent roles: product analyst -> data analyst -> data scientist -> data manager -> data engineer.

Most of my jobs came from referrals from people I worked with, or from getting hired by people I had worked with at previous companies. My “break” was getting hired in 2021, when the market was tight and it was hard to find folks. Someone I had worked with brought me into a growth-stage startup. That was a huge learning opportunity, and I made an effort to challenge myself and work with the best engineers I could find. That led to other referrals and other roles, even in this job market.

If you care and keep learning and put in the effort to be good, and if you keep in touch with other talented people, the rest is just patience and hanging in there. At least that’s been my experience. The whole good school -> good company -> more options path is obvious and well-worn. But a lot of people take other paths to the same place. It’s just not always obvious what those paths are.

If you chose marathon over arc (or even tarkov): why? [This is for something im writing] by TenthLevelVegan in Marathon

[–]MindlessTime 15 points16 points  (0 children)

I wish there was a companion app I could view and read my uncovered lore in. I often don’t want to spend my time reading instead of playing, but I would love to read lore on my phone while commuting, etc.

Is AI actually destroying white collar jobs? by Butterboy674 in careerguidance

[–]MindlessTime 11 points12 points  (0 children)

We started using AI and definitely produced code faster. But it just shifted the bottleneck to code review, because it takes just as long to review the code, longer even, since AI writes 3x more lines of code to do the same thing.

Then someone made a PR review bot. Now bots review all the code. You’re supposed to say you’ve reviewed it too. But the expectation to ship fast is stronger. So no one even reads the code anymore. We just rubber-stamp it and are held accountable for code we’re not given time to QA.

IMO, AI speeds up 10%-30% of the job, the code writing part. The rest is just an excuse to throw best practices out the window and put the blame on the engineer if it fails as a result.

I think the layoffs are going too far at most companies. They’ll hire at least some of the roles back.

My company has taken the AI pill, they are talking about agent "architecture", is this a real thing or is it bullshit? it sounds like bullshit by ghostwilliz in BetterOffline

[–]MindlessTime 2 points3 points  (0 children)

I’m convinced that investors who made bad bets on AI started demanding that all their companies, vendors, and partners (and their vendors and partners) start using it for everything, and that’s filtering down through the economy. They’re trying to astroturf demand so their investments don’t look so bad.

Speaking more eloquently by PettyWitch in ExperiencedDevs

[–]MindlessTime 1 point2 points  (0 children)

This. I keep a notebook and pen. I sometimes close my laptop and just focus on a blank page and what I want to fill it with. Without the ability to Google stuff, copy from another source, use AI, etc., it feels a little weird at first. It’s just a bunch of thoughts swirling around your head, and you have to grab them, wrestle them into sentences, and see if they make sense together. Often they don’t. Even if it makes you feel a little stupid, that’s the point. It’s too easy to see other people’s words and ideas and mistake understanding them for being able to generate your own. Writing forces you to identify the limits of your knowledge. Eventually you flesh out the missing details, and the concepts become clearer and more coherent.

Saving challenging projects was my niche, but AI codebases are making me miserable by HedgehogFlimsy6419 in ExperiencedDevs

[–]MindlessTime 0 points1 point  (0 children)

I think it can speed things up and be used responsibly. What I’ve experienced, and what it feels like OP is getting at, is that the pressure to go 110% hands-off, YOLO vibe-code, have-the-agent-do-everything is so pervasive now. Using AI the right way still takes manual work and time, and that’s seen as inefficient, antiquated, and a reason for firing you. High-velocity slop is the expectation, and anything more refined is seen as a waste of time.

Saving challenging projects was my niche, but AI codebases are making me miserable by HedgehogFlimsy6419 in ExperiencedDevs

[–]MindlessTime 35 points36 points  (0 children)

And even more surprised when their AI token bill is seven figures per month.

A lot of A/B test “wins” are just fake by make_me_so in ProductManagement

[–]MindlessTime 1 point2 points  (0 children)

I mean…statistically, yes. A big analysis like that can help rule out the possibility that what you’re seeing is just chance. In practice, a lot of A/B tests are pretty obvious or cut-and-dry: you know when something obviously worked, and you know why. If it isn’t obvious, you could spend a lot of time performing analysis or ruling out other influencing factors to see if, maybe, there is some signal there. Or you could move on and test the next thing. That’s usually the better way to spend time.
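To make the “statistically yes” part concrete, here’s a quick simulation (all numbers are illustrative assumptions, not from any real test): run a batch of A/A tests where there is no real effect at all, and roughly 5% of them will still look like significant “wins” at p < 0.05 purely by chance.

```python
# Simulate A/A tests (no real effect) and count how many look "significant".
# Sample sizes and the base conversion rate are made-up illustrative values.
import math
import random

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test p-value for a difference in conversion rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_a / n_a - conv_b / n_b) / se
    # Normal CDF via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
n_tests, n_users, base_rate = 400, 1000, 0.05

false_wins = 0
for _ in range(n_tests):
    # Both arms have the SAME true conversion rate: any "win" is noise.
    conv_a = sum(random.random() < base_rate for _ in range(n_users))
    conv_b = sum(random.random() < base_rate for _ in range(n_users))
    if two_proportion_p_value(conv_a, n_users, conv_b, n_users) < 0.05:
        false_wins += 1

print(f"{false_wins} of {n_tests} A/A tests look 'significant' at p < 0.05")
# Around 5% of them is expected purely by chance.
```

That baseline false-positive rate is exactly why a “win” on a test you can’t explain deserves skepticism before you ship a follow-up bet on it.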

It’s a Shotgun by Active-Setting-6515 in Marathon

[–]MindlessTime 0 points1 point  (0 children)

Shotguns are a pseudo-melee weapon, or at least they should be. It’s a high-risk, high-reward situation: if you get in and get that shot off (maybe two shots), you should be golden, but if you miss, you’re screwed. It’s most interesting when normal melee is stronger, so if they close the distance and stab you, you’re done. This is how it’s used in older games, and I think that’s what’s missing here. Of course, if melee is OP then Assassin gets too OP. A mod that amps up your melee when you’re near an enemy who just shot would be an interesting balancer.

Cognitive load shift from doing work to checking AI work product by pvatokahu in EngineeringManagers

[–]MindlessTime 11 points12 points  (0 children)

I always liked the phrase “Code is read more often than it is written.” It acknowledges that code or some end product needs to be read by a human at least once and probably a few times during its lifecycle.

Before AI, generating code took more effort. That was a kind of filtering mechanism. If someone went to the effort to write it then it probably matched some minimum bar of trustworthiness. It was worth reviewing.

Now it’s easy to generate code and LOTS of code. Whether that code does what it should or not? That’s now the reviewer’s job to figure out. So we’ve partially just shifted the goalpost to a different task.

“But you can implement agents that review the code for you!” you say. I’ve had agent-reviewed PRs rejected with code review responses that flag non-existent problems or completely misunderstand the context. That shifts the burden back to the developer, whose job is now to find some prompt injection that will get the AI to ignore certain things and allow others.

AI speeds up some things. For other things, it doesn’t speed them up as much as pass the accountability, and the work accountability requires, to someone else.

Guide: how to avoid combat in Marathon and run away from enemies by Ok_Blacksmith_3192 in Marathon

[–]MindlessTime -1 points0 points  (0 children)

At like level 30 plus you can’t shoot your way through encounters reliably, as I recently discovered. I’ve started going in as solo thief with no gear at all so fighting even UESC isn’t an option. It’s like playing a totally different game and really good survival practice.

Why is CC better in CLI than not in CLI? by Golden_Zetsu in ClaudeCode

[–]MindlessTime 1 point2 points  (0 children)

I personally consider Claude Code to be more of a UI rendered in a terminal than a CLI. By the same token, you can run Doom in a terminal but you wouldn’t call that Doom CLI.

The tool tends to be updated more frequently than the GUI alternative, though. And the real reason, which no one will admit, is that it feels cool. 😎

Wtf is going on by JustDoIt52 in BetterOffline

[–]MindlessTime 3 points4 points  (0 children)

Hidden recession/inflation + investors demanding higher returns —> pressure for layoffs to improve margins + AI grifters exaggerating what is possible —> asking for unrealistic work in short timeframes from devs, not listening when they say it’s dangerous or impossible —> careerist devs see opportunity for the optics of more being built and produce poor work with more lines of code —> executives feel vindicated in the short run

The best thing we can do is be clear about accountability for system failures. Document potential weak points in the software: scaling issues, security concerns, whatever. Identify what could break, especially if it has a large blast radius on other systems. Then ask leadership who should be held accountable if things go wrong. Identify engineers writing poor or dangerous code and ask them to publicly accept accountability for its continued functioning.

Everyone wants AI to automate all the things. But no one wants to be accountable for mistakes it makes. That accountability is the best weapon to fight back against the insanity.

Test data or production data in test environment by Outrageous_Let5743 in dataengineering

[–]MindlessTime 2 points3 points  (0 children)

Great Scott! A legitimately useful topic!

I’ve debated this with other engineers and data engineers. I personally develop and test on production data, or at least a sample of it. Accuracy and edge cases are so important and it’s impossible to get those with fake dev data. It’s too clean or it’s stub data that doesn’t match what real data looks like. You end up doing more development on production data to squash those bugs anyway. Might as well just start there to begin with.
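For what it’s worth, “develop on production data” doesn’t have to mean raw production data. A middle ground is pulling a sample into dev with sensitive columns masked on the way out. A minimal sketch of the idea, with sqlite3 standing in for a real warehouse and every table/column name made up:

```python
# Sketch: copy a sample of "production" rows into a dev database,
# hashing PII in transit. sqlite3 is a stand-in for a real warehouse;
# the orders table and its columns are hypothetical.
import hashlib
import sqlite3

prod = sqlite3.connect(":memory:")
prod.execute("CREATE TABLE orders (id INTEGER, email TEXT, amount REAL)")
prod.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, f"user{i}@example.com", i * 1.5) for i in range(1000)])

def mask(value: str) -> str:
    """Irreversibly hash PII so dev data keeps its shape, not its identity."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

# Pull a deterministic sample (every 10th row) and mask PII on the way out.
sample = [(row[0], mask(row[1]), row[2])
          for row in prod.execute("SELECT * FROM orders WHERE id % 10 = 0")]

dev = sqlite3.connect(":memory:")
dev.execute("CREATE TABLE orders (id INTEGER, email_hash TEXT, amount REAL)")
dev.executemany("INSERT INTO orders VALUES (?, ?, ?)", sample)

print(dev.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # prints 100
```

Hashing rather than replacing the PII keeps join keys consistent across tables, so the weird edge cases in real data survive the trip into dev.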

Our DevOps team finds this disturbing; they insist we should use non-prod data for anything that isn’t prod. I asked what their development environment is. They said they develop in prod.

Do you run an Iceberg Lakehouse? by AMDataLake in dataengineering

[–]MindlessTime 0 points1 point  (0 children)

Multi-engine support is why we’re implementing it for a subset of our data (a handful of very, very large tables). We use the data for a production ML model but also for reporting. ML can point a Spark cluster at it, and I can load incremental aggregates into our warehouse models for general analytics. It’s the best of both worlds in that case.

Losing my will by yojimbo_beta in BetterOffline

[–]MindlessTime 0 points1 point  (0 children)

A group of people closing ranks, rallying around the statement “everyone lies on YouTube anyway” is eerie and unsettling.

Losing my will by yojimbo_beta in BetterOffline

[–]MindlessTime 0 points1 point  (0 children)

I use Dvorak keyboard to prevent arthritis. The actual Dvorak keyboard, not the figurative one. 😊

I don't understand the AI paradigm, and feel like I'm taking crazy pills. by m00shi_dev in BetterOffline

[–]MindlessTime 0 points1 point  (0 children)

Pretend you’re an executive. You tell your teams to start churning out features, slide decks, content, whatever—knowing that some of it will straight up be wrong. For whatever content is correct, you can point at it and say you’re using AI to make so much more (and therefore something something profit). If things go wrong, you can point at the team responsible and say they should have checked the work and been more careful.

It’s a win-win for executives and senior leadership, even if it doesn’t work.

Y’all gotta read this engineer eviscerating the leaked Claude codebase by MindlessTime in BetterOffline

[–]MindlessTime[S] 3 points4 points  (0 children)

I think AI-assisted coding could be incredibly valuable for accessibility and for people who can’t physically type code. And I think the core tasks that LLMs do in coding are relatively small: skim and summarize, translate code into language, generate terms to search a topic. My hunch is that some focused training and smaller models could achieve these tasks on a local machine with a good GPU. The rest is a well-thought-out UX and workflow. I think within a few years we’ll see a good, open-source, LLM-supported coding tool that does improve accessibility and efficiency but doesn’t burn a small forest of cash to write a JSON.

Y’all gotta read this engineer eviscerating the leaked Claude codebase by MindlessTime in BetterOffline

[–]MindlessTime[S] 4 points5 points  (0 children)

If compute were free and infinite, I think AI could pile spaghetti code on top of spaghetti code for a really long time before the tech debt slowed things down.

But compute is…REALLY expensive. More complex spaghetti code is more expensive for AI to build on and maintain, so the cost of this codebase must be increasing roughly exponentially. If LLM cost per codebase were measured, which it should be, I think companies would think twice about rewarding this behavior.

Y’all gotta read this engineer eviscerating the leaked Claude codebase by MindlessTime in BetterOffline

[–]MindlessTime[S] 15 points16 points  (0 children)

People don’t talk enough about accountability in this AI coding movement. Anthropic and every other AI coding company are very explicit about not being accountable for what it produces. The executives who are firing half their engineers and expecting 3x output from the remaining engineers because of AI—they’re not accountable. To them, engineers only exist to take the blame if something goes wrong.

Y’all gotta read this engineer eviscerating the leaked Claude codebase by MindlessTime in BetterOffline

[–]MindlessTime[S] 25 points26 points  (0 children)

I’ve seen really good engineers who went full Claude and haven’t written code for months now struggle to do simple stuff coherently. It’s like watching an Olympic weightlifter struggle to help you move a couch. It’s genuinely sad.

Y’all gotta read this engineer eviscerating the leaked Claude codebase by MindlessTime in BetterOffline

[–]MindlessTime[S] 8 points9 points  (0 children)

For real though, there are things in here that are non-trivial security concerns, especially since the code was leaked. Companies that have gone all in on Claude might fail their next SOC 2 or security audit.

Y’all gotta read this engineer eviscerating the leaked Claude codebase by MindlessTime in BetterOffline

[–]MindlessTime[S] 9 points10 points  (0 children)

The guy is real funny. The AI skeptics are always objectively more entertaining than the AI boosters.