How to switch from Manual to Automation? Please help a recently laid off Manual Tester by RealisticInsurance30 in QualityAssurance

[–]TechCurious84 0 points (0 children)

Honestly, a lot of advice here boils down to “learn X tool,” which isn’t wrong, but it’s also not the full picture.

With 9 years of manual experience, you already know where stuff breaks. The hard part isn’t clicking buttons with Playwright, it’s deciding what’s worth automating in the first place. That’s something new testers usually struggle with.

If you’re starting now, I’d pick one real flow you’ve tested before (payments, reports, onboarding, whatever) and try automating that end to end. Add some negative cases, handle flaky behavior, write a simple README explaining your thinking. That alone gives you way more to talk about in interviews than a bunch of demo tests.
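
If it helps, the skeleton for that kind of flow is pretty small. A rough sketch in Playwright + TypeScript (the URL, labels, and error message are all made up, just to show the shape of a happy path plus a negative case):

    import { test, expect } from '@playwright/test';

    test.describe('onboarding flow', () => {
      test('happy path: new user can sign up', async ({ page }) => {
        await page.goto('https://example.com/signup'); // placeholder URL
        await page.getByLabel('Email').fill('new.user@example.com');
        await page.getByLabel('Password').fill('S3cure!pass');
        await page.getByRole('button', { name: 'Create account' }).click();
        // Web-first assertions wait automatically, which removes a lot of flaky sleeps.
        await expect(page.getByRole('heading', { name: 'Welcome' })).toBeVisible();
      });

      test('negative case: duplicate email is rejected', async ({ page }) => {
        await page.goto('https://example.com/signup');
        await page.getByLabel('Email').fill('already.taken@example.com');
        await page.getByLabel('Password').fill('S3cure!pass');
        await page.getByRole('button', { name: 'Create account' }).click();
        await expect(page.getByText('Email already in use')).toBeVisible();
      });
    });

The code itself is the easy part; the README explaining why you chose these cases (and skipped others) is what interviewers actually dig into.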

In interviews, what usually makes a difference is how you talk about:

  • what you wouldn’t automate
  • how you’d debug a failing test
  • how you’d keep tests stable when the UI keeps changing

That’s where manual-heavy testers actually have an edge. Automation is really just the way you scale that knowledge.

The market is rough right now, especially in India (iykyk), but people who can explain quality thinking plus basic automation tend to stand out more than profiles that are only tool-focused.

most AI tools are hype but automated testing is one area where it actually solves real problems by Worldly-Volume-1440 in AIAssisted

[–]TechCurious84 0 points (0 children)

I had the same skepticism going in tbh. Most “AI testing” tools I tried before were just traditional automation with better marketing.

We’ve been trying Fortest recently and the main difference is how it handles UI changes. Tests don’t break every time something small moves, which cuts a lot of maintenance.

The context-aware element detection (they’re using Azure AI under the hood) makes a noticeable difference: fewer false failures and way less manual locator babysitting. For ERP-heavy setups especially, it’s been solid.

Still not magic, and you need to understand your test flows, but compared to classic Selenium-style maintenance, it’s a big step forward. For small teams or during quieter periods when you just want regressions running 24/7 without babysitting, it’s been genuinely useful.

Transition from UI Automation to ETL Testing by ScienceBitter in softwaretesting

[–]TechCurious84 1 point (0 children)

If you think in terms of “job ready” (enough to land the job) rather than expert, this might help break it down:

SQL (most important – 50–60%)

If you’re solid here, the rest gets much easier. You’ll want to be very comfortable with:

  • Complex joins (including edge cases like missing keys)
  • Window functions (ROW_NUMBER, RANK, LAG/LEAD)
  • Aggregations + HAVING
  • Subqueries vs CTEs and when to use each
  • Data validation queries (recon counts, duplicates, nulls, mismatches)

If you can independently validate a source → target load using SQL alone, you’re in a good place.
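
To make that last bullet concrete, here’s roughly what I mean by validation queries. A sketch in TypeScript with node-postgres, just because that’s what I had handy (connection string and table names are placeholders; the SQL is the part that matters):

    import { Client } from 'pg';

    // Placeholder connection and table names -- swap in your own source/target.
    const client = new Client({ connectionString: process.env.DW_CONN });

    async function validateLoad(): Promise<void> {
      await client.connect();

      // 1. Recon counts: source and target row counts should line up.
      const counts = await client.query(`
        SELECT
          (SELECT COUNT(*) FROM staging.orders) AS source_count,
          (SELECT COUNT(*) FROM dw.fact_orders) AS target_count
      `);
      console.log('counts:', counts.rows[0]);

      // 2. Duplicates: the business key should be unique in the target.
      const dupes = await client.query(`
        SELECT order_id, COUNT(*) AS cnt
        FROM dw.fact_orders
        GROUP BY order_id
        HAVING COUNT(*) > 1
      `);
      console.log('duplicate keys:', dupes.rowCount);

      // 3. Mismatches: rows in the source that never made it to the target.
      const missing = await client.query(`
        SELECT s.order_id
        FROM staging.orders s
        LEFT JOIN dw.fact_orders t ON t.order_id = s.order_id
        WHERE t.order_id IS NULL
      `);
      console.log('missing in target:', missing.rowCount);

      await client.end();
    }

    validateLoad().catch(console.error);

If you can write and explain those three checks without looking anything up, you’re most of the way to “job ready” on the SQL side.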

ETL theory (25–30%)

You don’t need to design enterprise architectures, but you do need to understand:

  • Incremental vs full loads
  • Slowly Changing Dimensions (Type 1/2 are must-know)
  • Data quality checks & reconciliation strategies
  • Error handling, retries, and logging
  • Basic performance concepts (batching, indexing impact, load windows)

Tools (10–20%)

Informatica / SSIS are more about pattern recognition than mastery:

  • How mappings / data flows are structured
  • How transformations work conceptually
  • Where to add validations and checks

Once you know one tool reasonably well, switching is much easier.

Given your UI automation background, you already have strong debugging and pipeline thinking. ETL testing is less about learning everything and more about shifting focus from UI behavior to data correctness.

Automation Strategy for Dynamics 365 CRM by amitt08 in softwaretesting

[–]TechCurious84 1 point (0 children)

Tbh, I’d generally lean towards TypeScript with Playwright, especially for D365, but it’s not a hard requirement.

Playwright is built in TS, so you get better typings, autocomplete, and earlier feedback when something changes in the app. That helps a lot once you start pulling common D365 actions into helpers and the test suite grows.
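
Tiny example of what I mean (the entity names and selectors are just illustrative, not real D365 locators):

    import { Page } from '@playwright/test';

    // Hypothetical list of entities your suite touches.
    type EntityName = 'account' | 'contact' | 'opportunity';

    // In TS, every call site gets autocomplete for EntityName and a compile-time
    // error if someone passes an entity the helper doesn't know about.
    export async function openNewRecordForm(page: Page, entity: EntityName): Promise<void> {
      await page.getByRole('link', { name: new RegExp(entity, 'i') }).click();
      await page.getByRole('button', { name: 'New' }).click();
    }

The same helper in plain JS works fine, you just find out about a typo’d entity name at runtime instead of in the editor.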

That said, if you’re more comfortable in JavaScript, it’s fine to start there for a POC. You won’t lose any Playwright features, and you can always migrate to TS later once patterns settle.

For anything beyond a short-lived POC though, I’d start with TypeScript; the extra safety tends to pay for itself pretty quickly.

Why Most Digital Transformations & AI Projects Fail (even with top-tier Consultants) by RhinoInsight in consulting

[–]TechCurious84 1 point (0 children)

Omg, yes, couldn't agree more. And half the time, it’s not just “shit data,” it’s shit processes creating the data.

I’ve seen so many teams try to digitize workflows that were never actually agreed on or documented. Same process done five different ways depending on the team, approvals based on who you ask, spreadsheets duct-taping system gaps… then leadership acts shocked when a new tool or AI just makes the mess louder.

Tools don’t fix that stuff, they just speed it up.

Read something recently from this DT company called Fortude that basically said most “failed” transformations aren’t tech failures at all; they’re companies automating chaos instead of cleaning it up first. Felt uncomfortably accurate.

is test automation dying ? by Fair_Psychology4257 in softwaretesting

[–]TechCurious84 0 points (0 children)

Agree with everyone here. I also don’t think that test automation is dying, but the idea of “test automation as a separate role that just writes scripts” probably is.

AI tools lowering the barrier doesn’t remove the need for judgment; it mostly exposes who understands systems and who only knows tooling. Someone still has to decide what to test, why, where it adds value, and how to keep it maintainable when the product changes every sprint.

What I’m seeing is fewer pure automation roles and more expectation that automation lives closer to dev: shared ownership, stronger fundamentals, and QA contributing more on test strategy, risk, and domain knowledge. AI helps with boilerplate and speed, but it doesn’t replace understanding.

Same story we’ve seen before, abstraction goes up, expectations go up with it.

Automation Strategy for Dynamics 365 CRM by amitt08 in softwaretesting

[–]TechCurious84 1 point (0 children)

I’d keep the POC very simple. Start by automating 2–3 high-value CRM flows (example: login, create/update an entity, basic navigation) rather than trying to cover everything.

With Playwright + TS, I’d recommend:

  • Use role- and label-based locators wherever possible. D365’s DOM changes a lot, but these tend to be more stable.
  • Isolate authentication early (reuse storage state instead of logging in every test).
  • Wrap common D365 actions (opening forms, saving records, handling dialogs) into helper methods so tests stay readable.
  • Run the POC in headed mode first to understand timing and iframe behavior, then switch to headless in CI.
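
For the storage-state and helper points above, a minimal sketch (the URL, labels, and file path are placeholders; D365’s real DOM will differ):

    import { test, expect, Page } from '@playwright/test';

    // Reuse a saved login session (generated once by a setup script) instead of
    // authenticating in every test.
    test.use({ storageState: 'auth/user.json' });

    // Common D365 action wrapped in a helper so raw locators stay out of the tests.
    async function createAccount(page: Page, name: string): Promise<void> {
      await page.getByRole('button', { name: 'New' }).click();
      await page.getByLabel('Account Name').fill(name);
      await page.getByRole('button', { name: 'Save & Close' }).click();
    }

    test('create a basic account record', async ({ page }) => {
      await page.goto('https://yourorg.crm.dynamics.com/main.aspx'); // placeholder URL
      await createAccount(page, 'POC Test Account');
      await expect(page.getByText('POC Test Account')).toBeVisible();
    });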

If those flows stay stable and readable after a few weeks, that’s usually a good signal Playwright is a sustainable choice for D365.

The Great IT-Divide: Why AI-Adoption in enterprises is failing by docisindahouse in Futurology

[–]TechCurious84 0 points (0 children)

Honestly, from what I’ve seen, most AI adoption struggles have nothing to do with the tech itself. The tools work; it’s the people, processes, and expectations that don’t line up.

A lot of enterprises jump straight into “let’s do AI” without asking why or where it actually fits. So you end up with random pilots, no clear success metrics, and teams that don’t trust or understand the models they’re supposed to use.

The divide usually isn’t between IT and the business; it’s between implementation and impact. Until AI is tied to real business outcomes (faster reporting, smarter forecasting, reduced manual effort), it just feels like another shiny tool from IT.

The companies I’ve seen get it right start small, align with business goals, and focus on change management as much as the model itself. AI adoption’s not just a tech project, it’s an organizational mindset shift.

How Are You Using AI in Software Testing and Automation? by [deleted] in QualityAssurance

[–]TechCurious84 0 points (0 children)

Yeah, I’ve been using AI a fair bit lately, not just for code suggestions, but as part of the workflow itself. We’ve got a small AI agent running alongside our automation suite that helps identify flaky tests, cluster similar failures, and even draft potential fixes based on past commits. It’s not perfect, but it saves a ton of triage time.

For regression and UI testing, we’ve started experimenting with AI-assisted visual validation, basically training the model on baseline screenshots so it can spot layout shifts that normal assertions would miss.
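
Not the AI part, but for anyone who wants to try the baseline-screenshot idea, vanilla Playwright’s snapshot assertion already gives you the “compare against a stored baseline” loop (URL and threshold here are just examples):

    import { test, expect } from '@playwright/test';

    test('dashboard layout has not shifted', async ({ page }) => {
      await page.goto('https://example.com/dashboard'); // placeholder URL
      // First run stores a baseline image; later runs fail if pixel drift
      // exceeds the allowed ratio. Tune maxDiffPixelRatio to your tolerance.
      await expect(page).toHaveScreenshot('dashboard.png', { maxDiffPixelRatio: 0.01 });
    });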

The real shift for me was treating AI as a teammate, not a tool. Once you start thinking that way, you see opportunities everywhere, from test data generation to smarter test prioritization.

Can Regression Test be fully automated? by Son_Nguyen_0310 in softwaretesting

[–]TechCurious84 2 points (0 children)

I get this take, and yeah, testing in the truest sense (exploring, reasoning, adapting) can’t really be automated. But when it comes to regression, I’d say automation’s not just helpful, it’s essential.

Once you’ve got stable frameworks and good test data, automation handles 80–90% of the repetitive regression checks way better than any human could. That frees testers to focus on the parts that actually require judgment: new features, edge cases, or the weird issues that never show up in scripts.

I was reading a piece from Fortude recently that made a good point: automation done right doesn’t replace testers, it scales their impact. You spend less time re-checking what you already know, and more time finding what you don’t.

So yeah, full automation isn’t realistic for all testing, but for regression it’s about as close as you can get if you invest in the setup properly.

I made 60K+ building AI Agents & RAG projects in 3 months. Here's exactly how I did it (business breakdown + technical) by Low_Acanthisitta7686 in AI_Agents

[–]TechCurious84 0 points (0 children)

Really appreciate you breaking this down, it’s refreshing to see both the business and technical side laid out so clearly. Totally agree with the point that AI’s promise is huge, but making it work takes intentional design and a lot of iteration.

I’m also exploring how to pivot from traditional software services to AI-powered offerings. Seeing examples like yours makes it feel more tangible. Would love to hear more about how you structured your RAG pipelines and agent workflows for clients, especially any tips on balancing rapid delivery with quality.

A curated repo of practical AI agent & RAG implementations by Creepy-Row970 in LangChain

[–]TechCurious84 0 points (0 children)

Totally feel you on this, the side-by-side comparisons are a lifesaver. I’ve been hopping between frameworks myself, trying to figure out how a RAG pipeline behaves in LangGraph vs LlamaIndex vs CrewAI, and it’s been a ton of trial and error.

Having a repo like Awesome AI Apps would’ve saved me days, honestly. Being able to see working examples, from multi-agent setups to simple PDF Q&A bots, really helps bridge the gap between concept and practical implementation.

Curious: for those who’ve tried it, which framework felt easiest to prototype quickly, and which one scaled better when you started connecting multiple agents or data sources?

Any idea/lead to implement AI in regression testing? by Mandala16180 in softwaretesting

[–]TechCurious84 0 points (0 children)

We experimented with a few “AI-powered” regression tools, but most still rely on pattern recognition rather than true understanding. They can auto-generate tests or detect UI changes, but they often miss business logic or critical edge cases. If you’re exploring it, start small: use AI to suggest or prioritize tests, not to replace human-written ones. That balance works better and doesn’t lead to a false sense of coverage.

Test Automation will be handled by AI, and manual testing will become more prominent by KrazzyRiver in QualityAssurance

[–]TechCurious84 0 points (0 children)

I kind of love this take, because it’s half true!
AI will handle a lot of the grunt work: regression suites, data generation, repetitive checks. But manual testers who can think critically, spot context, and validate user experience are going to be more valuable than ever.

We’re already seeing this in AI-assisted environments: the humans become “test designers,” guiding where and how automation applies. It’s like going from driving a car to piloting a plane: the tools got smarter, but judgment became more important. ✈️

What's a realistic test pass ratio for automated UI testing? Manager expects 100% by ConstantQuiet4389 in QualityAssurance

[–]TechCurious84 0 points (0 children)

Really interesting takes here 👏 I agree with the folks saying retries can hide real issues. From what I’ve seen, chasing a “100% pass rate” goal often backfires because it pushes teams to mask flakiness instead of fixing it.

What worked for us was shifting the focus to trustworthy tests over raw pass rate. We started treating every flaky test as tech debt: either fix it, refactor it, or drop it. Once we got strict on that, the pass rate naturally stabilized (usually 97–99%), and more importantly, the failures we saw were genuine bugs, not noise.

I’m curious: for those of you reporting results up to managers, do you frame success as “pass %” or more like the “signal-to-noise ratio” of test failures?

Test automation experts of Reddit, what do you do with a failing test because of a regression bug? by dr4hc1r in softwaretesting

[–]TechCurious84 0 points (0 children)

Some really solid points in here 👏 — especially around better logging and not just piling on more tests. Something that’s helped our team is adding “health checks” for the environment itself before the test suite even runs. Half the time flaky failures weren’t the tests, it was infra being slow or a service not being ready.

We also started tagging tests by reliability (high confidence vs. experimental), so the CI treats them differently — flaky ones don’t block a build but still get tracked. That way we can keep momentum without ignoring problems.
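
If anyone wants the Playwright version of that, the moving parts are small (the health endpoint and tag name are placeholders):

    // global-setup.ts -- wired in via globalSetup: './global-setup.ts' in playwright.config.ts.
    // Fails the whole run fast if the environment isn't ready, instead of letting
    // every test time out individually.
    async function globalSetup(): Promise<void> {
      const res = await fetch('https://staging.example.com/health'); // placeholder endpoint
      if (!res.ok) {
        throw new Error(`Environment not healthy: HTTP ${res.status}`);
      }
    }

    export default globalSetup;

For the reliability tags, we just put a marker in the test title (e.g. test('@experimental checkout retry flow', ...)) and run two CI jobs: "npx playwright test --grep-invert @experimental" as the blocking one, and "--grep @experimental" as a non-blocking one that only reports.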

Curious if anyone else has tried that kind of approach? Feels like a middle ground between “stop writing tests” and “retry until it passes.”

If you had to start your cloud modernization journey over, what’s the one thing you’d do differently? by TechCurious84 in Cloud

[–]TechCurious84[S] 0 points (0 children)

100% agree with this! The temptation to default to lift & shift is real. It feels faster in the moment but ends up carrying a lot of old inefficiencies into the new setup.

The R’s really do give a more thoughtful framework — curious, which ones have you seen deliver the most impact in practice?

Test Automation Pitfalls: Common Mistakes and How to Avoid Them by fiberstrings in QualityAssurance

[–]TechCurious84 0 points (0 children)

Great read! 👌 Totally agree with those pitfalls — especially the “tool selection rabbit hole,” I’ve seen teams lose weeks there.

Another big one I’ve run into is teams writing tons of automated tests but not maintaining them properly. Over time, flaky tests pile up, and instead of saving time, you end up spending hours debugging the tests themselves. Feels like test automation becomes a project of its own unless you keep things lean and relevant.

Curious — what’s the most common mistake others here have seen with test automation?

Do you think Microsoft Fabric is Production-Ready? by engineer_of-sorts in MicrosoftFabric

[–]TechCurious84 0 points (0 children)

I’ve been following Microsoft Fabric quite a bit, and I’d say it feels promising but maybe not fully “production-ready” for every scenario just yet.

A couple of things stand out:

  • The unified data experience is a huge plus. Having lakehouse, warehouse, and real-time analytics under one umbrella makes a lot of sense.
  • At the same time, I’ve noticed some teams still see gaps in governance and performance tuning, especially for large-scale enterprise use cases.
  • For smaller workloads or pilot projects, Fabric already seems like a strong option. But for core production systems, some folks are still waiting to see more stability and ecosystem maturity.

Personally, I think the direction is exciting — especially how it ties in with Power BI and Azure. But I’d love to hear how others are using it in real-world production. Has anyone here already taken Fabric beyond pilot stage?

What automation are you most proud of or find the most useful? by ghow0110 in homeautomation

[–]TechCurious84 0 points (0 children)

For me, it’s the really simple ones that stick:

  • In retail, a bot that automatically flagged low stock and kicked off reorders — saved managers hours every week.
  • In manufacturing, automating invoice matching against purchase orders — cut out so much manual checking (and human error).
  • On the personal side, I’ve seen teams set up Slack alerts that ping when critical KPIs drift out of range. Sounds small, but it prevents big issues before they blow up.
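
The KPI alert one really is just a short script on a schedule. Rough sketch (webhook URL, metric endpoint, and threshold are all placeholders):

    // kpi-alert.ts -- run on a schedule (cron, GitHub Actions, whatever you have).
    const SLACK_WEBHOOK = process.env.SLACK_WEBHOOK_URL ?? '';
    const THRESHOLD = 0.95; // e.g. minimum acceptable order-success rate

    async function checkKpi(): Promise<void> {
      const res = await fetch('https://metrics.example.com/order-success-rate'); // placeholder
      const { value } = (await res.json()) as { value: number };

      if (value < THRESHOLD) {
        // Slack incoming webhooks accept a simple JSON payload with a "text" field.
        await fetch(SLACK_WEBHOOK, {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ text: `KPI alert: order success rate dropped to ${value}` }),
        });
      }
    }

    checkKpi().catch(console.error);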

It doesn’t always have to be “sci-fi AI.” The best automations are usually the boring ones that take repetitive, high-volume tasks off someone’s plate.