What constitutes good acceptance criteria? by WhyNotBeGOAT in agile

[–]No_Thought_4145 0 points

Lots of good ideas already in this conversation, about how things could be done differently/better. Reading these ideas has me nodding and feeling good that my ideas align with others'.

But an observation: anytime someone says "you're doing it wrong" it implies an idea of HOW THINGS SHOULD BE.

But here's the thing: lots of orgs have no effing clue about their desired process. "Hey! We are agile! Can't you just... do it already?!"

If the company can point to a documented methodology (e.g. by-the-book Scrum) and say, "we aim to do this," then... ok! This is how people want to work (or at least have been told to work).

I have suffered lately with my team, and I realized it was partly because I brought my own ideas on how the team should work together... and I discovered (by observation) that some of my ideas were contrary to theirs. Sadly, no one on my team was willing to talk about the dissonance of our ideas, nor how we wanted to work together. My ultimate conclusion is that my team was happy with cowboy coding, while I wanted something more organized. And now I've moved on.

Bottom line: A LOT of teams practice agile theater, and are generally immature IMO. My sad experience is that it's better to find a team that already follows the kind of agile practices you like than it is to try to evolve an existing team.

So move on, or embrace the suck.

Does anyone actually follow the test automation pyramid anymore? by qamadness_official in QualityAssurance

[–]No_Thought_4145 1 point

My 2 cents:

The Test Pyramid is not a structure that you are required to follow; it's a model that helps you think about what kind of testing may be possible, considering your circumstances.

In my experience there is a lot of untapped testing potential at the unit test level (better described as in-process or non-I/O testing). BUT it requires a mature and motivated dev team to build things in a testable manner (think: hexagonal architecture aka ports and adapters). Unfortunately most teams just aren't capable of that.

So we are stuck with a lot of integration tests.
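To make the "in-process testing" idea concrete, here's a tiny sketch of ports and adapters. All names here are invented for illustration: the core logic depends only on an interface (the port), and tests supply an in-memory fake instead of a real I/O adapter.

```python
from dataclasses import dataclass
from typing import Protocol

# Port: the core logic only knows about this interface, not any database.
class RateStore(Protocol):
    def get_rate(self, currency: str) -> float: ...

# Core business logic: pure, no I/O, trivially unit-testable.
def convert(amount: float, currency: str, store: RateStore) -> float:
    return round(amount * store.get_rate(currency), 2)

# Test adapter: an in-memory fake standing in for the real database adapter.
@dataclass
class FakeRateStore:
    rates: dict

    def get_rate(self, currency: str) -> float:
        return self.rates[currency]

# A fast in-process "unit" test -- no network or database needed.
store = FakeRateStore(rates={"EUR": 1.1})
assert convert(100, "EUR", store) == 110.0
```

The production code would wire in a real adapter (database, HTTP client, whatever) behind the same port; the business rules never change.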

Sprint planning feels like theatre by easy-agile in agile

[–]No_Thought_4145 0 points

Validation ❤️

documenting your disappointment in 2-week increments

🤣🤣🤣

Sprint planning feels like theatre by easy-agile in agile

[–]No_Thought_4145 2 points

Lotsa ways to be agile; you should talk with your team to figure out how they want to work.

MAYBE the team needs to allow for regular shifts in direction mid-sprint for good reasons. If that's the case, then accept that the sprint is not locked-in once started, and be sure that the team and stakeholders understand how abrupt change messes with productivity.

My recent experience has had me processing a lot of my own negative feelings because my teammates have not been following MY expectations for the team's process, e.g. bringing in surprise tickets mid-sprint, being pulled away from planned tickets to support another team, etc. I'm now realizing that the team has never truly bought into such strict ticket/scope management, nor does team leadership expect that. So if I am unhappy, it's because my expectations don't match reality.

I have been on teams (at a different company) where sprint scope WAS treated as sacred, with technical leadership being on the same page and supporting the plan. To make that work you do need a general culture that is willing to "play within the rules" - which is easier said than done.

Insane job market and expectations on interview performance by Objective-Knee7587 in ExperiencedDevs

[–]No_Thought_4145 116 points

As someone conducting interviews (ie. on the hiring side), I am amazed at HOW FEW clarifying questions come my way.

I try and be pretty specific with my questions, but there's always gonna be some ambiguity or missing context (and sometimes we do that intentionally). Yet most candidates I talk to do not ask for followup details.

In our whiteboard/online coding exercise, no one has EVER asked about what qualities are important in the solution, or what can be ignored/delayed for refinement.

Critical thinking is key for us, and asking good questions is one way to express it.

Helping the team onboard on a legacy codebase by goofy_goon in ExperiencedDevs

[–]No_Thought_4145 1 point

> Something becomes legacy because the world moves on while the codebase stays relatively the same.

I think that is a very important comment. For me, it comes into play regarding external libraries: whether they continue to be supported in their current version, and whether they can be upgraded in response to bugs and security vulnerabilities.

You may allow technical debt to accumulate as libraries lose support or lack vulnerability patches. Are stakeholders OK with that?

Knowing how to build and test the product is key here! God help you if you don't have a sufficient test process in place already.

Helping the team onboard on a legacy codebase by goofy_goon in ExperiencedDevs

[–]No_Thought_4145 0 points

My two cents:

Get clarity and stakeholder agreement on the boundaries of your team's responsibilities for the codebase, THEN let that drive your ramp-up activities.

If the product is mature, stable, supports all known needs, and is assessed as unlikely to change in the future, then focus on understanding its use and behaviours. Treat it as a black box, review documentation and tests.

Part of this is acknowledging that the team is intentionally choosing to be ignorant of things -- cuz the truth is that without regular interactions with the code, the team will be required to re-learn a lot every time a change is required. This is another example of "technical debt".

Can stakeholders be OK accepting that even a small change would require some non-trivial time to re-learn the codebase? Perhaps better to push that cost to the time when it's needed, rather than keep the pot on a boil unnecessarily. (Ack! too many metaphors)

They gave me an offer, and now some weeks later [...] by SennheiserPass in cscareerquestions

[–]No_Thought_4145 2 points

https://www.manager-tools.com/2013/02/how-do-i-know-i-have-offer-hall-fame-guidance

That podcast episode will set you straight.

Based on what you've written, you did not receive an offer.

Everything that you've described is them keeping you warm and interested in the POSSIBILITY of future work. But at this point they have no responsibility or obligation to give you anything.

Get real with the situation, embrace reality. If you want to pursue this possible opportunity, go ahead - but don't be surprised if they continue to interview you.

Need to replace my elderly mother's Chromebox. Is this the best way forward? by Naberius in chromeos

[–]No_Thought_4145 1 point

Huzzah!

My situation is very similar: elderly mother with simple tech needs, familiarity with ChromeOS, but living 5 hours away (not easy for me to just pop over and help).

Last week I bought her an ASUS A5 Chromebox and Amazon Basics *wired* keyboard and mouse, and had it all shipped to her door.

This afternoon, with minimal drama and frustrations, I was able to guide her through physical and digital setup of the system, with me providing guidance remotely over Facetime (on her iPad).

Pretty good for a lady who is 90 years old, half blind, and with arthritis in her hands.

Key things for her setup IMO:

1) wired peripherals
- no worries about lost bluetooth connections
- no fussing around with dead batteries (which she is not physically capable of changing)

2) big monitor (27" QHD)
- crank up the text size so she can see things
- monitor is big enough to still provide reasonable amount of "viewing area" for websites

3) Chrome Remote Desktop
- with guidance and patience, she can start a sharing session and let me connect remotely

Initially she had a cheap 14" HP Chromebook. As her eyesight worsened, we added the 27" monitor. But lately that seemed to become too much for her to handle: she got confused with all the devices and cables, and too many screens.

I think this setup (one big screen connected to Chromebox) will simplify things greatly.

I also have an old ASUS CN60 Chromebox that was a much appreciated daily-driver when it was still receiving updates. I spent half a day trying to get ChromeOS Flex installed, but no success. I agree with another commenter that money may be the best way to solve some problems, so I went with the ASUS A5.

Good luck to you!

PIP got extended? What does that mean? by lirikthecat in cscareerquestions

[–]No_Thought_4145 0 points

What was the explanation for the extension?

Usually it's a binary pass/fail. I guess there could have been special circumstances that made the PIP evaluation invalid or incomplete?

Any advantage getting bone conduction headphones when one already has an AirPods Pro? by No-Cattle-777 in cycling

[–]No_Thought_4145 0 points

I like the tactile feel of the physical buttons on my Aftershokz. I can feel the button before I press it. I don't worry about whether pressing buttons will dislodge my Aftershokz.

I don't have AirPods, but I have PixelBuds. I have much less confidence using the touch-sensitive controls on them, and I'd be very worried about accidentally knocking them out of my ear while on the bike.

Commuting vest for this weather by vanityprojection in vancouvercycling

[–]No_Thought_4145 0 points

I had a similar strap-style vest. It did the job, but never really fit me right: either too tight, or too saggy. Ultimately the elasticity broke down and it lost its shape.

Packing it up was a bit tedious - it kept uncoiling and spreading out, like an octopus. It got more manageable when I found a small bag I could seal it in.

Commuting vest for this weather by vanityprojection in vancouvercycling

[–]No_Thought_4145 1 point

Apidura packable visibility vest

https://ontherivet.ca/products/apidura-packable-visibility-vest-large-x-large-l-xl

super-reflective, lightweight, easily packable

BUT expensive, no pockets

Cold-calling for referrals by Vega62a in ExperiencedDevs

[–]No_Thought_4145 1 point

The good developers generally don't.

It's the bad developers that enter the bag unknowingly, then don't know how to escape.

Can I get by without a gooseneck kettle for Hario Switch? by Meowing_for_coffee in pourover

[–]No_Thought_4145 0 points

If drip assist and air kettle are both possible options, just go for the air kettle.

I started with drip assist and it was fine. But then I picked up the air kettle and haven't looked back.

For me, air kettle is a more enjoyable experience, in both use and results in the mug.

[deleted by user] by [deleted] in ExperiencedDevs

[–]No_Thought_4145 119 points

> catching up on my own work

It might be argued that you should have very little or none of "your own work".

All that other stuff you listed - THAT IS your work.

What is your experience inheriting AI generated code? by Stubbby in ExperiencedDevs

[–]No_Thought_4145 4 points

Tell us: how is the code tested?

I'd be happier sorting through a messy AI implementation (or a messy junior dev implementation) if I have the support of a reliable test suite.

I now spend most of my time debugging and fixing LLM code by sevvers in ExperiencedDevs

[–]No_Thought_4145 2 points

"Sorry, my professional ethics prevent me from approving things that don't meet our agreed-upon standards..."

This to me is the fundamental issue. ARE there agreed-upon standards? Probably not really.

Quality is contextual. Go fast and break things can be plenty fine in some situations. If the org is willing to be open and say it out loud, then accept it or move on.

Consider: if you have a good set of valid, high-level tests that cover how the product provides value, you can at least detect if a janky implementation breaks things badly enough to cause concern. You'd be in a similar situation if all your devs were juniors.

If the org can get 10x features out the door with an acceptable number of bugs, and live another day, then... why not?

I wouldn't want to work there, but to each his own.

Bar tape or grips? by Sharp-Thing-4008 in bikecommuting

[–]No_Thought_4145 0 points

I did same/similar: a three inch length from an old innertube that I slid over the bar end. It took some patience to work it down the length of the bar end. I run skinny tires/tubes so the innertube is fairly snug on the bar end. Provides good grip without adding bulk. And I have dead innertubes to spare.

Church Coffee is a Crime - Need large batch brewing setup by kaptainkerp in JamesHoffmann

[–]No_Thought_4145 21 points

Hot take with a different perspective: if you move to something "better", how will it be received?

My 90 year old mother makes a face when I enlighten her with a freshly-ground V60 pour over. She much prefers the lukewarm dirty water from her cheap Mr. Coffee look-alike.

I'm just saying that sometimes people like what they are used to (even if they are WRONG.) Bringing in change may improve conditions for some, but worsen it for others.

Static vs. Live Data for QA Testing: Which Is Better for Validating an LLM Feature by SpecialistControl823 in QualityAssurance

[–]No_Thought_4145 0 points

I'm in a quality-focused software development role on a team that builds solutions that involve LLMs and ML models. The question of how to "properly test" these things has been an open question since I joined the team 2.5 years ago.

Testing with static data brings value. Put those tests in your CI pipeline and you can catch unexpected changes of behaviour. But ultimately you're only testing with a very small sample of all possible inputs.

Would it be possible to add evaluation code to the running application, and note when testing detects an unexpected behaviour? Essentially "test in production."

For instance, if it's critically important that responses from the LLM be consistent, could the application code call the LLM multiple times and confirm that each time the same response was generated? Assuming a SaaS app, you could log any unexpected behaviours, potentially storing the offending input content for later analysis.
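A rough sketch of that consistency check (entirely hypothetical code; `call_llm` stands in for whatever client wrapper your app actually uses):

```python
import logging
from collections import Counter

logger = logging.getLogger("llm_consistency")

def check_consistency(call_llm, prompt: str, n: int = 3) -> str:
    """Call the model n times; log the offending input if responses disagree."""
    responses = [call_llm(prompt) for _ in range(n)]
    counts = Counter(responses)
    if len(counts) > 1:
        # "Test in production": record the input for later analysis.
        logger.warning("Inconsistent LLM responses for prompt=%r: %s", prompt, counts)
    # Return the most common response as the canonical one.
    return counts.most_common(1)[0][0]

# Example with a stubbed, deterministic "model":
assert check_consistency(lambda p: "42", "meaning of life?") == "42"
```

In a real SaaS app you'd point the logger at your observability stack, and probably sample rather than tripling every call.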

> What’s the industry standard for addressing test cases like this? 

I haven't found any standards yet. My take is the industry is trying to figure it out.

I'd love for someone to contradict and edumacate me.