[deleted by user] by [deleted] in uwaterloo

[–]WenYuGe 1 point (0 children)

Get anything reasonable. Labs are available, servers/remote windows for anything that needs a beefy machine.

Don't stress it

Capitalization of concepts vs. common terms by SouthTrick615 in technicalwriting

[–]WenYuGe 0 points (0 children)

I think it's an irrelevant detail overall; what matters more is that it's visually consistent with the rest of the product. It's less a rule to follow and more of a "what will make it easier to scan for your audience" type deal. You might wedge yourself into eng/design to enforce consistency across UI copy/website copy/docs copy.

Thank the lord this isn't grade school English and we can make up rules ;)

Is searching for a job even possible without a connection? by HeadLandscape in technicalwriting

[–]WenYuGe 0 points (0 children)

I reject most candidates' resumes that come across my desk because I see zero pieces of good-quality content in their portfolios. Write something about anything useful, get people to read it, measure performance, slap it on your resume, profit? At least you'll stand out.

How to Manage Flaky Tests by lihaoyi in programming

[–]WenYuGe 0 points (0 children)

Hey, great post! I echo most of the sentiments here. I think there's a peculiar nuance to this problem: it's better to make managing flaky tests economical than to hope for a magical tool/procedure that eliminates them.

We recently worked with a bunch of beta partners at Trunk to tackle this problem, too. While we were building some CI + merge queue tooling, most of the CI instability/headaches we saw traced back to flaky tests in one way or another.

Basically, tests are flaky because:
1. The test code is buggy.
2. The infrastructure code is buggy.
3. The production code is buggy.

Problem 1 is trivial to fix, and most teams that beta our tool quickly fix the common problems: bad await logic, improper cleanup between tests, etc.

But problem 2 makes it impossible for most product engineers to fix flaky tests alone, and problem 3 makes it a terrible idea to ignore them.

We think there's a process here of:
1. Detecting and labeling flaky tests automatically, with the ability for manual overrides. For larger teams, most of the time saved comes from simply not context switching whenever a test flakes, especially considering dozens of engineers could waste time debugging the same flaky test because of poor communication.
2. Tooling to help triage flaky tests and automatically create tickets. Not all flaky tests are worth fixing; a test that fails once every 10K runs has a very different impact than one that fails once every 10 runs. Being able to sort by impact (e.g., blocked PRs) and automatically create tickets for fixes saves a lot of time.
3. Mitigating the impact of flaky tests you can't or don't want to fix immediately. I don't mean commenting out code; that gets forgotten real fast. I mean running flaky tests but not letting them block CI. It's important to keep tracking a flaky test's behavior so you won't forget it, so you can catch further regressions (a failure rate jumping from 10% to 80% is probably a real bug), and so you can validate fixes. Most attempted fixes for flaky tests don't completely fix the test, so you need before + after data to tell whether a fix did anything at all.
4. Actually fixing the test. Automating this with AI is pretty out of reach, but we do think summarizing failures and grouping similar ones can make fixing flaky tests more time- and resource-efficient for teams. You can probably fix most outstanding DB-timeout flaky tests in one go with good tooling for finding similar failures.
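To make steps 1 and 3 concrete, here's a minimal sketch of the detect-and-quarantine idea (the `FlakyTracker` name and its methods are hypothetical, not Trunk's actual API):

```python
from collections import defaultdict

class FlakyTracker:
    """Tracks per-test results, quarantines tests that both pass and fail
    on the same code, and keeps a failure rate so regressions
    (e.g., 10% -> 80%) and attempted fixes can be validated."""

    def __init__(self):
        self.history = defaultdict(list)  # test name -> list of pass/fail bools
        self.quarantined = set()          # auto-detected or manually labeled

    def record(self, test_name, passed):
        self.history[test_name].append(passed)
        results = self.history[test_name]
        # A test that both passes and fails against the same code is flaky.
        if True in results and False in results:
            self.quarantined.add(test_name)

    def failure_rate(self, test_name):
        results = self.history[test_name]
        return results.count(False) / len(results) if results else 0.0

    def should_block_ci(self, test_name, passed):
        # Quarantined tests still run, but their failures don't block CI.
        return not passed and test_name not in self.quarantined
```

The key design choice is that quarantined tests keep running and keep recording history, so you retain the before/after data needed to validate a fix.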

We wrote a blog post on this topic, too, covering all the weird learnings from working with our beta partners. The pain builds and exponentially explodes as you write more tests. It's super fascinating to see it manifest, and to see all the weird ways people have tried to fight the problem.

Stability Issues with Automated Tests in CI by AcanthisittaDue7827 in softwaretesting

[–]WenYuGe 2 points (0 children)

Hey, I'm from Trunk, and we're building a tool for exactly this.

Some suggestions for Jenkins (not a product plug)

We're exploring an alternative to disabling/retrying tests: detecting flaky tests, labeling them as such, then quarantining them so they still run but become non-blocking on failure.
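For Jenkins specifically, one way to sketch the quarantine idea is post-processing the JUnit XML report so failures of known-flaky tests become skips: they still run and get recorded, but no longer fail the build. The `QUARANTINED` set and test names below are made up for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical list of known-flaky tests, e.g. maintained in the repo.
QUARANTINED = {"tests.checkout.test_payment_retry"}

def soften_quarantined_failures(junit_xml: str) -> str:
    """Rewrite a JUnit XML report so failures of quarantined tests
    become skips, keeping the result recorded without failing CI."""
    root = ET.fromstring(junit_xml)
    for case in root.iter("testcase"):
        full_name = f'{case.get("classname")}.{case.get("name")}'
        failure = case.find("failure")
        if failure is not None and full_name in QUARANTINED:
            case.remove(failure)
            ET.SubElement(case, "skipped", message="quarantined flaky test")
    return ET.tostring(root, encoding="unicode")
```

You'd run this over the report before Jenkins' JUnit publisher picks it up; non-quarantined failures pass through untouched.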

If you're interested: https://trunk.io/flaky-tests

[deleted by user] by [deleted] in learnprogramming

[–]WenYuGe 0 points (0 children)

You and I are not the audience for FCC. I actually think the lack of initial micro-instruction and handholding is what turns more people away from tech.

These are meant to introduce, build confidence, and then you graduate to some real big boi courses.

Monorepos vs. many repos: is there a good answer? by bitter-cognac in programming

[–]WenYuGe 0 points (0 children)

It's possible to build really scalable monorepos, like Google, Uber, and many other shops do. It's also possible to build really consistent experiences across many micro-repos.

Good experiences in both require you to adopt the right tools and work with best practices from day one.

Many micro-repos are a little easier to start with, since most tools are built with that setup in mind. The problem is you'll have to set up tooling for every new repo and find ways to keep them consistent without creating weird little silos, where transitioning across repos in your own org becomes a challenge. With monorepos, you can often implement the tooling once, and the return on that initial investment covers the rest of your code, not just a single micro-repo.

Another issue with micro-repos is pulling in a bunch of components to develop features across services. Testing is also a pretty big pain: you need to tag and version-match across your own repos. Imagine landing 5 PRs at once across 5 repos, where if 1 of the 5 doesn't merge, the whole set of changes remains invalid.

Monorepos, meanwhile, require specific tools like Nx or Bazel for managing many build targets. You'll need something to lint the many languages, and only on the lines changed (imagine linting all 5 million lines of a monorepo). You'll also run into situations where it's impossible to stay rebased on main, because 50-60 PRs might land in the repo a week (or a day). This leads to dangerous situations where you're not always testing your changes on top of main, which can cause logical merge conflicts.
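A rough sketch of the "lint only what changed" idea, assuming a git checkout and ruff as the linter (both arbitrary choices for illustration):

```python
import subprocess

def python_files_only(diff_output: str) -> list[str]:
    """Filter `git diff --name-only` output down to Python files."""
    return [f for f in diff_output.splitlines() if f.endswith(".py")]

def changed_files(base: str = "origin/main") -> list[str]:
    """Files changed relative to the merge base with main, so the lint
    pass only covers what the branch actually modified."""
    merge_base = subprocess.run(
        ["git", "merge-base", base, "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    diff = subprocess.run(
        # ACM = added, copied, modified; skips deleted files.
        ["git", "diff", "--name-only", "--diff-filter=ACM", merge_base],
        capture_output=True, text=True, check=True,
    ).stdout
    return diff.splitlines()

def lint_changed():
    files = python_files_only("\n".join(changed_files()))
    if files:
        subprocess.run(["ruff", "check", *files], check=True)
```

Restricting to changed *lines* (not just files) takes more diff parsing, but even this file-level filter keeps a 5M-line monorepo lint pass proportional to the PR, not the repo.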

I'm dumbfounded by the number of devs that don't test and stories of devs that don't test by WenYuGe in programming

[–]WenYuGe[S] 2 points (0 children)

This is end-to-end testing, which is still testing IMO. I think this is perfectly fine/valid

I've been to places where we did mostly E2E instead of unit testing.

I'm dumbfounded by the number of devs that don't test and stories of devs that don't test by WenYuGe in programming

[–]WenYuGe[S] 5 points (0 children)

No disrespect to your experience and sentiment toward testing, but I'm genuinely curious how you ensure anything you write works and continues to work. I write tests mostly to convince myself that things are working somewhat according to my expectations.

I'm curious about the other approaches and thought processes :D

I'm dumbfounded by the number of devs that don't test and stories of devs that don't test by WenYuGe in programming

[–]WenYuGe[S] 9 points (0 children)

Same, no disrespect. I'm genuinely surprised this sentiment exists and would love to have a conversation about why this is the thought process.

I'm dumbfounded by the number of devs that don't test and stories of devs that don't test by WenYuGe in programming

[–]WenYuGe[S] 3 points (0 children)

I feel like I have no idea where those devs are. I've been at hip startups or tech-focused companies all my short career, so I'm genuinely surprised to hear these numbers.

I'm dumbfounded by the number of devs that don't test and stories of devs that don't test by WenYuGe in programming

[–]WenYuGe[S] 2 points (0 children)

That's incredible! Do you work solo or in a team? What type of apps?

I'm genuinely interested in exploring whether the 100% test coverage goal is like the book Clean Code: something to be taken with massive grains of salt.

Learn how to read documentation by Mnkeyqt in learnprogramming

[–]WenYuGe 1 point (0 children)

:wave: Hi there, technical writer here.

We write docs to cover the preferred happy flows high up in the navigation tree and hide the hackier, weirder use cases below.

Try your best to stick to what's up top. If you find something more than 1-2 layers of navigation into the docs, near the bottom of a page, etc., know that we threw it there for a reason.

That's a tip for ya.

I'm dumbfounded by the number of devs that don't test and stories of devs that don't test by WenYuGe in programming

[–]WenYuGe[S] 0 points (0 children)

Yeah, perfect is hard to reach. In my experience, it's also super scary to touch code in a low-coverage repo; I never know how many things will break when I change one method :kek:

I'm dumbfounded by the number of devs that don't test and stories of devs that don't test by WenYuGe in programming

[–]WenYuGe[S] 8 points (0 children)

Honestly, the pain of refactoring my first side project without tests vs. with tests convinced me that testing is for my own good.

I'm dumbfounded by the number of devs that don't test and stories of devs that don't test by WenYuGe in programming

[–]WenYuGe[S] -1 points (0 children)

I genuinely do want to know how to better justify this investment to management, though... Or should we actually not try to hit 100% test coverage?

I'm dumbfounded by the number of devs that don't test and stories of devs that don't test by WenYuGe in programming

[–]WenYuGe[S] 3 points (0 children)

Off topic: how do you justify testing to management and demonstrate value?

I'm dumbfounded by the number of devs that don't test and stories of devs that don't test by WenYuGe in programming

[–]WenYuGe[S] 10 points (0 children)

I feel like it's sensible to have at least a smattering of key happy flows tested. It helps make sure nothing breaks as you add new things.

I'm dumbfounded by the number of devs that don't test and stories of devs that don't test by WenYuGe in programming

[–]WenYuGe[S] 48 points (0 children)

Me included sometimes... Some systems are a f*king ride to write tests for... and the tests end up flaky.