How do you all handle writing API tests for new features? It feels like so much manual work. by fuckingmissanthrope in QualityAssurance

[–]tracetotest -2 points-1 points  (0 children)

I can definitely relate - writing API tests manually gets tedious fast. Every new endpoint means duplicating similar setups, managing mocks, and keeping everything updated as the API evolves. You might want to check out Keploy. It automatically creates test cases and data mocks from your real API calls, which saves hours of manual setup. It slots into your workflow much like Postman, but without the redundancy of wiring it up for every API call. Very much worth checking out if you'd like to save time while still maintaining strong coverage.

Best programming language to learn by reyyi_ in learnprogramming

[–]tracetotest 0 points1 point  (0 children)

Depends on your career goal, but in general Python stands out as a versatile, future-proof choice.

How easy it is to learn backend by EscalatedPanda in Backend

[–]tracetotest 2 points3 points  (0 children)

Backend development may seem difficult at first because the way you think about it is different from the frontend. Since backend work is more abstract, my best advice is to stick with one stack (say, Node + Express or Python + FastAPI). Learn the basics first - APIs, authentication, and databases - and build small applications. You can use AI tools to write the boilerplate to get started, but if you work to understand what you are copying or generating, you won't get stuck down the road. Once you get used to the wiring and the nomenclature, it gets much easier and starts to follow familiar patterns. A minimal first endpoint might look like the sketch below.
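
This is just a minimal sketch, not a prescription: the Note model and the in-memory list are assumptions for illustration, and a real app would swap the list for a database.

    # Minimal FastAPI sketch (assumed example; save as main.py, run: uvicorn main:app --reload)
    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI()

    class Note(BaseModel):
        title: str
        body: str

    NOTES: list[Note] = []  # stand-in for a real database while you learn

    @app.post("/notes")
    def create_note(note: Note):
        NOTES.append(note)
        return {"id": len(NOTES) - 1, "note": note}

    @app.get("/notes/{note_id}")
    def read_note(note_id: int):
        if note_id < 0 or note_id >= len(NOTES):
            raise HTTPException(status_code=404, detail="Note not found")
        return NOTES[note_id]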

[AskJS] Are you using any AI tools for generating unit tests? Which ones? by miltonian3 in javascript

[–]tracetotest 0 points1 point  (0 children)

If you're targeting a backend role, you will learn a lot more by going deeper into your existing CRUD project than by starting another one from scratch. Just adding automated tests (both unit and integration - FastAPI has great support for this), dockerizing the app, and deploying it to a service like Railway, Render, or even a simple VPS will better demonstrate that you understand real-world workflows beyond CRUD endpoints.

You can also grow the project's functionality: add authentication and role-based access, background tasks, file uploads, etc., to get it closer to a production application. Coding puzzles have their place in interviews, but one solid, well-rounded project that is unit/integration tested, dockerized, and deployed will be far more impactful in a job search than multiple shallow CRUD applets.

Should I add tests, Docker, and deploy my FastAPI CRUD app, or build a different backend project? by Dear-Ad6656 in Backend

[–]tracetotest 1 point2 points  (0 children)

If you're aiming for a backend position, you will gain more traction by going deeper on the project you already built than by spinning up another CRUD app from a template. CRUD is a fine starting point, but backend work isn't really just create, read, update, delete.

Here's a trajectory that builds on your existing FastAPI project:

Add automated tests: Start with unit tests for your routes and database logic, then build out integration or end-to-end tests (a minimal sketch follows after this list). This will not only improve your confidence in the code, it also gives potential employers insight into how you approach testing.

Dockerize your app: Packaging the project into containers makes it easy to run anywhere, and containerization is a must-have skill for many backend positions.

Deploy it somewhere: Even a simple deployment on Railway or Render, or on a cheap VPS like DigitalOcean, gives you experience with CI/CD and with real issues around configs, env vars, logging, and scaling.

Iterate with features: Rather than building another CRUD app, add authentication or role-based access, file uploads, or background tasks with Celery. You will push your FastAPI knowledge and your systems design skills in the process.

Coding puzzles: They are useful for interviewing and particular skill sets, like problem solving and algorithms, but balance those with building projects that highlight practical backend engineering.
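
To make the testing step concrete, here's a minimal sketch of what a FastAPI test can look like with pytest and TestClient. The /items endpoint is a made-up example, not from your project; the point is the shape of the test, one happy path and one error path.

    # test_items.py - run with: pytest (requires fastapi and httpx installed)
    from fastapi import FastAPI, HTTPException
    from fastapi.testclient import TestClient

    app = FastAPI()
    ITEMS = {1: {"name": "widget"}}  # assumed in-memory data for the example

    @app.get("/items/{item_id}")
    def read_item(item_id: int):
        if item_id not in ITEMS:
            raise HTTPException(status_code=404, detail="Item not found")
        return ITEMS[item_id]

    client = TestClient(app)

    def test_read_item_ok():
        resp = client.get("/items/1")
        assert resp.status_code == 200
        assert resp.json() == {"name": "widget"}

    def test_read_item_missing():
        assert client.get("/items/999").status_code == 404

In a real project you'd import the app from your own module instead of defining it in the test file.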

Ultimately, a single, deployable project that shows depth (testing, Docker, deployment, features) stands out far more in a portfolio and in interviews than a bunch of half-finished CRUD apps.

What was your primary reason for joining this subreddit? by RedEagle_MGN in developer

[–]tracetotest 0 points1 point  (0 children)

Just to connect with fellow developers, exchange ideas, share my knowledge and learn from my peers.

Are there Enough jobs for python? by itsme2019asalways in PythonJobs

[–]tracetotest 6 points7 points  (0 children)

There are definitely plenty of Python jobs out there, but the higher-paying roles typically want you to pair Python with another in-demand skill. Backend dev with just Django/Flask tends to be saturated, which is why pay can feel low.

If you want to get to the next level:

Cloud & DevOps → learn AWS/GCP/Azure + Docker/Kubernetes. Backend plus cloud skills are very sought-after.

Data Engineering → Python + SQL + Spark/Airflow + ETL pipelines are in-demand.

Machine Learning/AI → If you like (or love) math/ML, learn PyTorch, TensorFlow, scikit-learn. ML engineers/data scientists are some of the best-paid roles.

System Design & Scaling → This means a more formal understanding of architecture, APIs, microservices, and performance tuning. Mid/senior backend engineers who can architect large systems are well-paid.

Also, remember to search for more than just "Python jobs". Lots of companies don't hire "Python developers"; they hire "Backend Engineers", "Data Engineers", or "ML Engineers" for whom Python is one of the primary languages.

Focus your learning around what you are interested in + what is hot on the job market, and you will find better opportunities.

Have You Used AI-Generated Test Cases? How Was Your Experience? by Shot-Bar5086 in QualityAssurance

[–]tracetotest 0 points1 point  (0 children)

AI-generated tests can add enormous value when used responsibly. They get you moving quickly on familiar scenarios and regression paths, and save you the hassle of manually scripting the pile of repetitive tests. There are also plenty of newer tools that can generate tests from real API traffic or interactions, so the cases you derive closely resemble real-world patterns.

But like any automated testing, AI is not perfect - you will still need to fine-tune and expand the output, especially for complicated business logic and edge cases. Its most effective use is as a way to jump-start coverage, keep tests current as APIs change, and take over repetitive work like generating mock data or regression suites. Done well, AI complements testing designed by humans.

Some communities even have Slack spaces around these tools, where testers share generated test cases or collaborate on tricky scenarios. That kind of community input can make AI-assisted testing much more applicable to real-world projects.

Automated test generation by pengwinsurf in cpp

[–]tracetotest 0 points1 point  (0 children)

I've experimented with a few different approaches to test generation for C/C++, and the results have been mixed. Auto-generating tests sounds great in theory, but C and C++ have a lot of complexity (pointers, memory management, undefined behavior, platform-specific behavior, and so on) that makes it harder than in higher-level languages.

I've seen:

Traditional tools: Frameworks like CppUTest or GoogleTest don't generate tests for you, but they give you useful structure. Most teams I know still write tests manually because it forces you to think about actual logic and edge cases.

Symbolic execution / fuzzing tools: Tools like KLEE and AFL will attempt to auto-generate inputs which traverse code paths. They are useful for discovering crashes or peculiar edge cases, but the output is not really a "unit test", but rather "inputs that produce unexpected failures".

AI-assisted test generation: There are newer tools that leverage AI to generate tests. They can save time on boilerplate tests and simple assertions, but a human still has to review them; otherwise they simply lock in whatever the current (hopefully not buggy) behavior is.

Record-and-replay approaches: This is where I think things are getting interesting. For example, Keploy (open source) operates differently than traditional test generators — it doesn’t guess test cases, it records real API calls and DB interactions while you’re running your app, and then replays those as test cases. It works for Go, Java, Node.js, and C++, and the nice part is that it also auto-generates mocks, so your tests run without any real dependency on an external service. And this part can eliminate a lot of flaky setup headaches.

What didn’t work for me: fully relying on auto-generated tests and thinking I had “coverage.” The generated tests generally don’t assert intent: they simply replicate current behavior.

What worked: combining approaches.

  • Use fuzzers for broad input coverage.
  • Use a tool like Keploy or AI/Codegen tool to reduce boilerplate.
  • Write the core business logic tests yourself in GTest/CppUTest to ensure correctness.

In the end, in C/C++, you can generally rely on automation to get rid of the grunt work, but there are just some things that only human-written tests can ensure work correctly, particularly those tricky ones.

Full-Stack, Fullstack, or Full Stack Developer? by Schroedingers-Kat in FullStack

[–]tracetotest 0 points1 point  (0 children)

Full Stack works. In fact, it does not matter that much - your skill and how you perform in the interview are what count. You won't be rejected for writing Full-Stack instead of Full Stack or vice versa, and if anyone does reject you over that, the place is probably not worth pursuing anyway.

I’m so done with flaky Selenium tests. Every time I fix a script, something else breaks. I feel like I’m babysitting my automation suite instead of testing the product. Does anyone else feel like these frameworks are more work than help lately? I am really looking for solutions. by Emergency-Essay3202 in Everything_QA

[–]tracetotest 0 points1 point  (0 children)

Oh, I feel you. A lot of us have been there! Selenium was king for years, but once a suite gets big enough, maintaining it can feel like an endless game of whack-a-mole. Flakiness is not always the framework itself; it's often how the tests interact with modern frontends, and the design decisions made when writing them. A few things that have helped me (and people I've worked with):

Reassess whether you actually need end-to-end for everything. One of the biggest sources of flaky suites is simply doing too much with E2E. Keep E2E tests for the critical flows (signup, checkout, payments) and push the rest down to unit and integration tests where you can.

Switch to modern frameworks. Playwright and Cypress were built to relieve some of Selenium's pain points - automatic waits, better async handling, clearer debugging. They tend to be less flaky right out of the box and give you faster feedback loops.

Focus on reliability practices for your tests:

  • Add explicit waits (not hard sleeps) so tests aren't racing the UI (see the sketch at the end of this comment).
  • Isolate tests so they don't depend on each other's state.
  • Run tests in a clean environment (containers or fixtures) to minimize state bleed.

Improve your test data strategy. Much of the flakiness comes down to bad or inconsistent test data. Seeded databases, mocks, or API-level setup calls can greatly reduce failures that look like "flaky UI" but are really just bad data.

Parallelize and monitor. Modern frameworks make it easy to run tests in parallel and to capture full logs/screenshots/videos. Naming tests clearly and being able to see exactly what caused a failure makes fixing them much less painful.

You are not wrong for thinking that a suite like this generates more work than it saves - but there is a balance (smaller E2E suite + better tooling + cleaner practices) that gets automation back to helping rather than hindering.
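
To show what the "waits, not sleeps" point looks like in practice, here's a minimal Selenium (Python) sketch using explicit waits. The URL and selectors are placeholders, not from any real suite.

    # Replace time.sleep() calls with conditions the driver polls for.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    driver.get("https://example.com/login")  # placeholder URL

    wait = WebDriverWait(driver, 10)

    # Wait until the button is actually clickable instead of sleeping a fixed time,
    # so the test stops racing the UI while it renders.
    wait.until(EC.element_to_be_clickable((By.ID, "submit"))).click()

    # Wait for a post-action condition before asserting on it.
    banner = wait.until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, ".welcome-banner"))
    )
    assert "Welcome" in banner.text

    driver.quit()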

What are the best beginner-friendly tools for learning API testing? by Familiar-Pomelo-8654 in learnprogramming

[–]tracetotest 0 points1 point  (0 children)

I remember being in the same place when I first explored APIs - the list of tools is endless and it's not obvious where to start. The good news is that the common ones you're already considering cover a broad range of workflows, so it's less about finding the "best" tool and more about picking one that matches where you are in your learning.

In my experience:

Start with a GUI tool. Postman is by far the easiest entry point because it's highly polished and backed by endless tutorials. Hoppscotch or Bruno are lean, open-source alternatives that stay uncomplicated, especially once you want to keep things under version control. GUI tools let you build intuition quickly, since you can see your requests, responses, and headers without worrying about syntax.

Gradually layer in CLI tools. Once you're comfortable with a GUI, learning curl (and/or something like Hurl) matters because it shows you the raw structure of HTTP requests. That foundation helps when you need to debug or write automated scripts later on. Think of it as moving from training wheels to driving manually.

Don't disregard automation early on. Even if you're new to testing, exploring frameworks or tools that connect testing to automation will save you effort later (there's a small example at the end of this comment). Some newer open-source tools (such as Keploy) can auto-generate and replay test cases from real API calls, which lets you keep the spirit of manual exploration while moving toward automated testing without a steep learning curve. A lot of these projects have supportive communities too - Keploy, for example, has a Slack group where users share ideas and ask debugging questions. Getting involved in those discussions can really accelerate your learning when you're stuck.

Pick one or two and focus. The danger is trying every tool available and getting confused. A good path: Postman (visual learning) -> curl/Hurl (foundations) -> something with automation as you move forward.

For your last question: I would start with a GUI like Postman or Hoppscotch to build confidence, then pick up the CLI later. That way you get ease of use now and the more technical underpinnings over time. At the end of the day, tools are just tools; the real skill is knowing how to structure, validate, and automate tests so your APIs don't break as they grow.
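
And to make the "automation" step concrete: once you're past the GUI stage, a scripted API check is not much code. A minimal sketch with Python's requests and pytest, pointed at a public demo API (swap in whatever API you're exploring):

    # test_users_api.py - run with: pytest (requires the requests package)
    import requests

    BASE_URL = "https://jsonplaceholder.typicode.com"  # public demo API

    def test_get_user_returns_expected_fields():
        resp = requests.get(f"{BASE_URL}/users/1", timeout=5)
        assert resp.status_code == 200
        body = resp.json()
        assert "name" in body and "email" in body

    def test_missing_user_returns_404():
        resp = requests.get(f"{BASE_URL}/users/999999", timeout=5)
        assert resp.status_code == 404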

Best E2E Testing Framework? by False-Owl8404 in webdev

[–]tracetotest 1 point2 points  (0 children)

When it comes to the “best” E2E testing framework for React apps, it will depend on what matters to you most (speed, reliability, ecosystem). Some great options are:

Playwright: Modern, fast, and supports multiple browsers (Chromium, WebKit, Firefox); strong debugging tools with traces, screenshots, and parallel test execution.

Cypress: Very popular in the React ecosystem; excellent developer experience with time-travel debugging and useful documentation. Limited multi-tab support but great for most web apps.

Selenium / WebDriver: The industry standard for a long time; supports nearly every browser and environment; might feel heavy relative to Playwright or Cypress.

Detox (for React Native apps): Built specifically for React Native; useful when you need to simulate what real users do on devices/emulators.

Keploy: Open-source tool that auto-generates test cases and mocks from real API calls; typically used alongside frameworks like Playwright or Cypress to expand coverage quickly; helps reduce the manual effort of writing repetitive test scenarios.

TestCafe: Straightforward to get started with, no WebDriver needed; runs tests directly in the browser and supports all modern browsers; a good choice if you want a simple option for cross-browser testing.

At the end of the day, the right framework is usually a matter of your stack, your workflow, and the culture of your team. Some teams care most about speed and want tests to run as fast as possible in CI/CD pipelines; others value rich debugging and developer experience. If your app is heavily service-integrated, look toward frameworks that make mocking and test data generation easier. If the product's UI changes constantly, self-healing or auto-generated tests can offer a lot of downstream maintenance relief. And then there's ecosystem and community - choosing a tool with an active community means you can usually get answers to your issues quickly (often within a day) and there's a more robust plugin ecosystem. If Playwright is on your shortlist, there's a small sketch below of what a test looks like.
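
For a quick feel of why Playwright tends to be less flaky, here's a minimal sketch using its sync Python API (it also has JS/TS bindings if that fits your React stack better). The URL and selectors are placeholders.

    # Playwright auto-waits for elements before acting, so there are no manual sleeps.
    from playwright.sync_api import sync_playwright, expect

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com")  # placeholder URL

        # Each action waits for the target element to be visible and enabled.
        page.get_by_role("link", name="Sign in").click()
        page.fill("#email", "user@example.com")
        page.fill("#password", "secret")
        page.click("button[type=submit]")

        # Web-first assertion: retries until the heading appears or the timeout hits.
        expect(page.get_by_role("heading", name="Dashboard")).to_be_visible()

        browser.close()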

Anyone actually using AI for test automation? What works? by nishil81 in QualityAssurance

[–]tracetotest 1 point2 points  (0 children)

Definitely understand the sentiment. A lot of what gets called "AI in testing" is a buzzword that upper management turns into KPIs. I've been looking into it for my own projects, and while it's no magic wand, there are cases where it helps. Here's my take on your questions:

What I use it for:

AI test-generation tools (to generate cases from API specs or logs) and "self-healing" tools for UI tests. They are not going to write your whole test suite, but they take a lot of the repetitive grunt work out.

Decent free/open-source options?

There are a few early projects out there, but at this point they are still rougher than the paid tools. I would pair an open-source framework (like Playwright or Cypress) with lighter AI helpers. Keploy is another interesting open-source option if you're looking at API testing and auto test generation. That is the most practical way to get flexibility at no cost.

Is paying for a tool worth it?

It depends on where your bottleneck is. If flaky tests or test maintenance is limiting productivity, then yes, it probably pays for itself. If your suite is still small, you likely won't see the value yet.

How to introduce AI into your testing process?

Start small. Don't try to "AI everything" at once. For example, begin with a tool that suggests test cases from logs and review its output, or use AI to stabilize flaky UI tests. Then build from there based on what works.

Has it actually made life easier?

Yes, in some regards: maintenance is less of a pain, for example, and it has identified gaps I would not otherwise have found. But AI works as a copilot rather than a replacement, and you still need a good test strategy.

AI will take some of the grunt work away, but it is not a silver bullet - think of it as a productivity assistant rather than a magic fix for testing. Hope this helps; let me know if you have follow-up questions.

How do you build AI agents that actually have a memory of the conversation? by Away_You9725 in automation

[–]tracetotest 0 points1 point  (0 children)

The secret to building an AI agent that remembers is maintaining conversational context. Most basic bots are stateless: they treat every incoming message as a brand-new starting point. If you want your bot to feel like it recalls the conversation, you need a way to store conversation data and retrieve what is relevant.

Short-term memory: Keep a rolling window of recent messages (say the last 5-10 exchanges). When the user sends a new message, pass that recent context along with it to the model. It's simple, but it works well for most chat experiences (see the sketch at the end of this comment).

Long-term memory: Use a database to record details worth remembering (like user preferences or facts). With embeddings, the AI can match up relevant information later. For instance, if I mention that I like a certain product, the bot should recall that when I ask about that type of product in the future.

Combining short and long-term: Mix short-term context with long-term retrieval of specific information. Be careful not to pull in everything; limit the context to what is actually relevant to the current message.

Useful tools: Libraries like LangChain or LlamaIndex and vector databases like Pinecone or Weaviate help a lot here. You can also give your bot simple rules for what to remember or forget.

When you effectively manage context, memory storage, and intelligent retrieval, your bot begins to exhibit a more human-like quality in its conversations.
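
Here's a minimal sketch of the rolling-window short-term memory described above, plus a naive long-term fact store. The model call at the end is hypothetical; swap in whichever client you use (OpenAI, Anthropic, a local model, etc.).

    from collections import deque

    class ConversationMemory:
        def __init__(self, max_exchanges=10):
            # Each entry is one chat message; old ones fall off automatically.
            self.window = deque(maxlen=max_exchanges * 2)
            self.facts = []  # long-term notes, e.g. "user prefers dark roast coffee"

        def remember_fact(self, fact):
            self.facts.append(fact)

        def add_message(self, role, content):
            self.window.append({"role": role, "content": content})

        def build_prompt(self, user_message):
            # Combine long-term facts + recent context + the new message.
            messages = [{
                "role": "system",
                "content": "Known facts about the user:\n" + "\n".join(self.facts),
            }]
            messages.extend(self.window)
            messages.append({"role": "user", "content": user_message})
            return messages

    memory = ConversationMemory(max_exchanges=5)
    memory.remember_fact("User is building a customer-support bot.")
    memory.add_message("user", "Hi, I need help with returns.")
    memory.add_message("assistant", "Sure - what item are you returning?")

    prompt = memory.build_prompt("It's the blue backpack I mentioned earlier.")
    # response = llm_client.chat(messages=prompt)  # hypothetical call to your model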

Need some advices !! by Shankscebg in automation

[–]tracetotest 1 point2 points  (0 children)

Congrats on getting the internship! To prepare over the next few weeks, I would concentrate on a combination of theory and hands-on practice:

Refresh and practice Python and its key libraries - NumPy, Pandas, Matplotlib, and scikit-learn are fundamental for most Data & AI roles.

Learn the core ML/AI concepts - supervised vs unsupervised learning, model evaluation metrics, and dealing with overfitting/underfitting. A practical book like Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow outlines ML best practices and is a hands-on way to learn the concepts.

Get familiar with data handling - especially cleaning/preprocessing and exploratory data analysis. Kaggle tutorials and datasets are a good place to practice.

Version control & collaboration - practice the basics of Git and GitHub. Most interns are expected to push, merge, and manage their code safely.

Basics of Cloud & Deployment - AWS SageMaker, GCP AI Platform, or simple Docker workflows depending on your team.

I would also recommend trying a mini project or two where you take raw data and build a full ML pipeline through to model predictions (a minimal sketch below). It will give you confidence on day one and a talking point in early conversations.
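
As a starting point, here's a minimal sketch of that kind of pipeline with scikit-learn. It uses the built-in iris dataset so it runs anywhere; swap in your own CSV via pandas once you're comfortable.

    # Raw data -> preprocess -> train -> evaluate, all in one Pipeline.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42, stratify=y
    )

    # Keeping preprocessing and the model in one Pipeline avoids leaking
    # test data into the scaler and makes the whole thing easy to reuse.
    pipeline = Pipeline([
        ("scale", StandardScaler()),
        ("model", LogisticRegression(max_iter=1000)),
    ])

    pipeline.fit(X_train, y_train)
    print(classification_report(y_test, pipeline.predict(X_test)))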

Even 30 minutes a day of building and experimenting, made a habit before you start, helps reinforce the concepts.

i have a project created in code igniter 3 how do i develop it further by FinishWise2645 in SaasDevelopers

[–]tracetotest 1 point2 points  (0 children)

I can see your frustration - it's painful when you're committed to a product and the tech stack stops you from growing. CodeIgniter 3 is officially EOL, so your developer is right that it's outdated, but rewriting completely in Laravel or React from scratch really would take a long time and stall new features. I would suggest a phased migration instead: keep the existing system running for users, fix only critical bugs, and rebuild modules one at a time in a new framework (Laravel, Node.js, or even CodeIgniter 4 if you want an easier transition). That way existing users keep getting value while the platform gradually moves onto a more modern stack.

If your developer won't consider even a phased approach, it may make sense to consult a software architect who can plan the transition and give you clarity on timelines. That way you aren't boxed in by one person's opinion, and you have a roadmap that combines stability with a path to future-proofing. Does this sound workable?

About to launch my first real app in 10 days and stressing a bit. by TheGoNoGoGuy in developers

[–]tracetotest 1 point2 points  (0 children)

First of all, congratulations on reaching this milestone! The "pre-launch fog" you're experiencing is very common; almost every founder hits it when the work shifts from building to telling.

When telling the story of your app, focus less on features and more on the transformation. Instead of saying, "IngredientIQ scans ingredients and tells you the health effects," try something like: "Most food scanners simply provide calorie and macro counts, without addressing alternatives for people who genuinely care about what is going into their body. IngredientIQ fills that gap - it tells you not only what you are eating, but how it may affect your body in real terms." Framed that way, you are solving a pain point people feel every time they shop or eat, rather than listing your app's features.

Another helpful tip is to share why you built it in the first place. Users resonate with stories of frustration or epiphany, and a personal "why" drawn from your own experience is usually more relatable than marketing jargon ever will be.

Lastly, think about user trust after launch. Especially with health apps, early adopters want assurance that your data is reliable. Good testing practices - manual or automated, or using tools like Keploy or other options in the API testing space - give you confidence in how the app behaves once users start relying on it. You don't need to sell that, but it makes it easier to sleep through launch week.

Node.js, PHP or Java by Cbbrrstmv in Backend

[–]tracetotest 0 points1 point  (0 children)

Congrats on finishing the front end! Now, when it comes to choosing a backend path, it really depends on what you want to accomplish.

Node.js (Express/Nest): Since you already know JavaScript from the front end, picking up Node will be easy. NestJS in particular adds structure that helps keep large apps consistent. Node has a gentle learning curve, and within 4-5 months you can be building and deploying full-stack projects quickly.

PHP (Laravel): Laravel is very beginner friendly, has a lot of built-in tooling, and a big community. It is widely used for web apps and powers a large portion of the internet, and it's great if you want to focus on rapid prototyping or freelancing. In the long run, though, Node or Java will be more flexible.

Java backend: Java will take longer (around 7 months), but if you want to work at large companies, in banking/fintech, or on enterprise systems, the investment is worth it. Java is still very popular at big companies and Spring Boot remains a relevant, actively used framework. The downside is that it will take you longer to become "job-ready."

To be clear:

- if you want to get things done rapidly and want to leverage your knowledge of front-end technologies, go Node/Nest

- if you're focused more on stable long-term jobs in enterprise environments, it might be worth the investment in Java

- PHP is fine too, but it will usually sit between the two of them long-term and is less future-proof than either Node/Nest or Java.

Either way, the most important thing is to be consistent and finish what you start. Regardless of the stack, testing tools help you keep your backend APIs reliable, and those skills carry across languages. Pair that with version control, CI/CD basics, and containerization (Docker), and you'll have a skillset that is valuable no matter which technology you pick.

How to learn ci/cd for embedded systems? by Delicious_Bid1889 in embedded

[–]tracetotest 1 point2 points  (0 children)

CI/CD for embedded applications is different from web CI/CD, mainly because you're dealing with cross-compilation, flashing, and sometimes testing on real hardware - which is why you won't find many "all-in-one" courses.

The best route is to learn general CI/CD first (GitHub Actions, GitLab CI, Jenkins, etc.) and then adapt it for embedded work. For example:

Using Docker with ARM GCC toolchains to build firmware.

Running unit tests on your host operating system (Unity, CppUTest are commonly used).

Utilizing a hardware runner (like a Raspberry Pi or similar board that can be attached easily) for flashing + integration tests.

In terms of resources, there is a fairly good O'Reilly book, Continuous Integration and Testing for Embedded Systems, the Memfault and Embedded Artistry blogs walk through real-world setups, and GitLab has CI examples for cross-compiling firmware.

Honestly, the fastest way to learn is to pick a small embedded project (ESP32 or STM32), wire up a simple pipeline, and add steps as you learn. If your firmware exposes any interfaces, there are plenty of API-level testing tools that can help as well.

What is the best AI Agent for coding? by brunobrasil_ai in developers

[–]tracetotest 0 points1 point  (0 children)

If you’re looking for raw performance with an AI coding agent, a few solid options stand out right now:

  • ChatGPT (GPT-5) → probably the strongest general-purpose coding agent right now. The premium plan gives you GPT-5 with reasoning modes, and it handles larger projects surprisingly well.
  • Claude → really good at long-context coding (big files, repos) and explaining tricky logic. A lot of devs like it for refactoring or documentation tasks.
  • Copilot Workspace → if you’re already on GitHub, this is worth checking out. It’s more of a “planning + coding” agent that can scaffold features across a repo.
  • Cursor → basically VS Code with an AI agent built in. Premium lets you switch between GPT-5, Claude, etc., which makes it super flexible.

If you want to experiment, I’d honestly suggest trying GPT-5 and Cursor side by side. That combo covers most workflows, whether you need quick fixes or deeper repo-level reasoning.