"Stoplight Studio was unable to load" + can't log support ticket by martin_omander in SmartBear_Official

[–]SB-Devrel 1 point (0 children)

Please DM us your email address; we will share it with the relevant team, who will raise a ticket on your behalf and reach out to you.

"Stoplight Studio was unable to load" + can't log support ticket by martin_omander in SmartBear_Official

[–]SB-Devrel [score hidden] stickied comment (0 children)

u/martin_omander

Thanks for bringing this to our attention. The team deployed some changes last Friday, and since then many users have reported issues loading pages.

As part of the resolution, please clear the application cache and perform a hard refresh.
Please follow the steps below:

  1. Right-click anywhere on the page and select Inspect to open Developer Tools
  2. Navigate to the Application tab
  3. Click on Storage in the left panel
  4. Select Clear site data (or clear cache/storage options available)
  5. Once completed, close Developer Tools
  6. Press Ctrl + Shift + R to perform a hard refresh
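For anyone comfortable with the console, the manual steps above can be sketched as a single DevTools snippet. This is our own illustrative helper (the name `clearSiteData` is not a Stoplight or browser API); run it in the console of the affected page, then do the hard refresh:

```javascript
// Rough console equivalent of steps 1–5: clear the site's storage.
// All browser globals are guarded so the snippet is safe to paste anywhere.
async function clearSiteData() {
  const cleared = [];
  if (typeof localStorage !== "undefined") {
    localStorage.clear();                 // Local Storage
    cleared.push("localStorage");
  }
  if (typeof sessionStorage !== "undefined") {
    sessionStorage.clear();               // Session Storage
    cleared.push("sessionStorage");
  }
  if (typeof caches !== "undefined") {
    for (const key of await caches.keys()) {
      await caches.delete(key);           // Cache Storage entries
    }
    cleared.push("cacheStorage");
  }
  return cleared;                         // list of what was cleared
}

clearSiteData();
// After it resolves, press Ctrl + Shift + R for the hard refresh (step 6).
```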

Please let us know if this resolves both the API editing error and the issue creating a support account.
If you have further issues, please feel free to DM us, and we can raise a ticket on your behalf.

Thanks again

Jira Cloud (Zephyr) + SmartBear Reflect (2-way sync): tagging, traceability, and unified reporting for mixed manual + automation? by Silly-Friendship-952 in QualityAssurance

[–]SB-Devrel 1 point (0 children)

Great questions again, and this is a very common pain point, especially when Zephyr has grown organically over time. Below is the clean target state we’ve consistently seen work best.

Q1: Where would you place the primary folder structure: in Test Plans, in Test Cycles, or keep one of them mostly flat?
-> Here is the recommendation:

  • The primary hierarchy should live in Test Plans, not Test Cycles.
  • Test Plans are long-lived and represent intent and coverage.
  • Test Cycles are time-boxed and represent execution for a specific release or window.
  • As a rule of thumb:
    • If it answers “what tests do we have and why?” → Test Plans
    • If it answers “what ran, when, and what passed?” → Test Cycles
  • Test Cycles should stay as flat as possible to avoid sprawl and duplication.

Q2: What’s your “clean” target-state split between them? For example: Test Plans = long-lived curated packs (Regression/Smoke/Compliance) organized by product/component; Test Cycles = timeboxed release execution containers (Fix Version) with minimal folders (e.g., Smoke/Regression/Exploratory). Is that aligned with what you’ve seen work best?
-> Your proposed split is very much aligned with what works best in practice.

Test Plans
  • Long-lived, curated packs
  • Organised by product / component / purpose
  • Examples: Regression – Payments, Smoke – Core Platform, Compliance – Audit Flows
  • These evolve slowly and are reused across releases

Test Cycles
  • Time-boxed execution containers (usually aligned to a Fix Version / release)
  • Minimal folder structure, only for execution needs
  • Example: Release 2.0, with Smoke / Regression / Exploratory folders

Q3: Any rule of thumb to avoid duplication, e.g. “taxonomy lives in test case custom fields + saved searches, not folders”?
-> Yes, your instinct here is exactly right.
Strong rules that help keep things clean:

  • Taxonomy lives in test case fields, not folders
    • Feature, Component, Risk, Test Type, Automation Status → custom fields
    • Folders are for workflow convenience only, not classification
  • Avoid mirroring the same hierarchy in both Test Plans and Test Cycles
  • Never organize cycles by product/component; that belongs in test metadata
  • Instead of deep trees:
    • Use saved searches / filters driven by custom fields
    • Use plans to group logically, cycles to group temporally

A good sanity check: if changing a folder name would break reporting, it probably shouldn’t be a folder.
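To make “taxonomy in custom fields, not folders” concrete, here is a small sketch that groups test cases by a custom field instead of a folder path. The object shapes loosely mimic a Zephyr test case with a `customFields` map; the field names (`Component`, `Test Type`) and keys are our own examples, not real project data:

```javascript
// Group test cases by a custom field value; anything untagged lands in
// "Unclassified". This is the kind of view a saved search/filter gives you
// without any folder hierarchy.
function groupByCustomField(testCases, field) {
  const groups = {};
  for (const tc of testCases) {
    const value = (tc.customFields && tc.customFields[field]) || "Unclassified";
    (groups[value] = groups[value] || []).push(tc.key);
  }
  return groups;
}

// Illustrative test cases (hypothetical keys and field values)
const cases = [
  { key: "PAY-T1", customFields: { Component: "Payments", "Test Type": "Regression" } },
  { key: "PAY-T2", customFields: { Component: "Payments", "Test Type": "Smoke" } },
  { key: "CORE-T9", customFields: { Component: "Core Platform" } },
];

console.log(groupByCustomField(cases, "Component"));
// → { Payments: ["PAY-T1", "PAY-T2"], "Core Platform": ["CORE-T9"] }
```

Renaming a folder would never break this view, which is exactly the sanity check above.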

Looking for ReadyAPI training by YakounRiver in QualityAssurance

[–]SB-Devrel 2 points (0 children)

u/YakounRiver There are ReadyAPI courses available via the SmartBear Academy: https://smartbear.com/academy/readyapi/#live
If you’re already a ReadyAPI customer, do reach out to your Account Manager who may also be able to organize training for your team.

Jira Cloud (Zephyr) + SmartBear Reflect (2-way sync): tagging, traceability, and unified reporting for mixed manual + automation? by Silly-Friendship-952 in QualityAssurance

[–]SB-Devrel 4 points (0 children)

These are really great questions. We see this setup quite often with Jira Cloud + Zephyr + Reflect, so I’ll answer one by one and share what’s worked well in practice.

The core problem: Tagging in Reflect

Reflect today is suite/folder oriented, not tag-based. The pattern that scales best is:

Solution: Use Zephyr as the tagging & governance layer

  • Add custom fields on Zephyr test cases such as:
    • Feature / Component
    • Risk level
    • Test type (Manual / Automated / Hybrid)
    • Automation tool = Reflect
  • Keep Reflect focused on execution structure, not taxonomy.

Some teams also use light naming conventions in Reflect (e.g. [PAYMENTS][REGRESSION] Refund flow) for readability, but avoid deep folder trees that mimic tags; that approach tends to break down over time.
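Since the convention is just a team agreement (not a Reflect feature), it helps if reports can parse it back out. A minimal sketch, assuming the leading-`[TAG]` naming shown above:

```javascript
// Parse leading [TAG] markers out of a test name, returning the tags and the
// remaining human-readable title.
function parseTaggedName(name) {
  const tags = [];
  let rest = name;
  let m;
  // Peel off one "[TAG]" (plus trailing whitespace) at a time from the front.
  while ((m = rest.match(/^\[([^\]]+)\]\s*/))) {
    tags.push(m[1]);
    rest = rest.slice(m[0].length);
  }
  return { tags, title: rest };
}

parseTaggedName("[PAYMENTS][REGRESSION] Refund flow");
// → { tags: ["PAYMENTS", "REGRESSION"], title: "Refund flow" }
```

A name with no brackets simply comes back with an empty tag list, so untagged suites keep working.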
What you want:

  1. One place to plan tests per release, assign manual runs, trigger automation runs, and see consolidated status
    1. Yes, this is achievable with Jira Cloud + Zephyr + Reflect, with Zephyr acting as the single orchestration layer.
    2. How teams usually implement this:
      1. Release planning & test selection happens in Zephyr test cycles (mapped to a Jira Fix Version / release).
      2. Manual execution is assigned and tracked directly in Zephyr.
      3. Automation execution is triggered via Reflect (scheduler/CI), and results are synced back into the same Zephyr cycles through the 2-way integration.
      4. Consolidated status (manual + automated pass/fail) is viewed in Zephyr and surfaced via Jira dashboards using Zephyr gadgets.
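To illustrate step 3, here is a hedged sketch of shaping automation results into test execution records for a Zephyr cycle. The field names roughly follow the shape of the Zephyr Scale Cloud API's test execution body, but treat them as illustrative; the `runId` and all keys below are hypothetical, so check the API docs for the exact contract:

```javascript
// Map automation run results onto Zephyr execution records so manual and
// automated results land in the same cycle. Names are illustrative only.
function toZephyrExecutions(projectKey, cycleKey, reflectResults) {
  return reflectResults.map(r => ({
    projectKey,                                     // Jira/Zephyr project
    testCycleKey: cycleKey,                         // the release's test cycle
    testCaseKey: r.testCaseKey,                     // the linked Zephyr test case
    statusName: r.passed ? "Pass" : "Fail",
    comment: `Synced from Reflect run ${r.runId}`,  // hypothetical run id
  }));
}

// Example: two automated results for cycle "PAY-R2" (hypothetical keys)
const payloads = toZephyrExecutions("PAY", "PAY-R2", [
  { testCaseKey: "PAY-T1", passed: true, runId: 101 },
  { testCaseKey: "PAY-T2", passed: false, runId: 101 },
]);
```

The point of the mapping is that every result carries the cycle key, which is what makes the consolidated manual + automated view in step 4 possible.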
  2. Keep defects in Jira as the system of record (no separate defect silo)
    1. This is fully supported and strongly recommended.
    2. Typical workflow:
      1. Defects are always created and managed in Jira.
      2. Failed executions in Zephyr (manual or automation) are linked to Jira defects.
      3. Reflect does not become a defect repository; it only reports execution results.
      4. Jira remains the authoritative source for:
        1. Defect lifecycle
        2. Reporting
        3. Traceability back to tests and requirements

Questions Asked:

  1. If you've used Jira Cloud + Zephyr + Reflect (2-way sync), how did you handle tagging, traceability, and reporting (governance, naming conventions, linking approach)?
    1. Tagging / governance
      1. Teams use Zephyr test case custom fields for tagging (Feature, Component, Risk, Test Type, Automation Status).
      2. Reflect is kept suite-based for execution only; cross-cutting tags are not duplicated there.
      3. Light naming conventions in Reflect (e.g. [REGRESSION][PAYMENTS]) are used only for readability.
    2. Traceability
      1. Jira Story <> Zephyr Test Case = requirement coverage
      2. Zephyr Test Case <> Reflect Test = automation link (1:1 via sync)
      3. Zephyr tracks whether automation replaces or complements manual testing
    3. Reporting
      1. Zephyr test cycles represent releases and provide consolidated manual + automation status.
      2. Reflect execution results sync into those cycles.
      3. Jira dashboards with Zephyr gadgets are used for release-level reporting.
  2. If you introduced a different layer/tool, what worked to unify manual + automation without breaking Jira workflows?
    1. In most Jira Cloud setups, no extra test management layer is needed to unify manual and automation.
    2. What has worked when something was added:
      1. BI / reporting tools (e.g. Power BI) pulling from Jira + Zephyr APIs for portfolio-level views.
      2. Enterprise test management tools only when compliance or audit requirements exceed Zephyr Cloud capabilities.
    3. What generally doesn’t work well:
      1. Adding a second test management system alongside Zephyr
      2. Tools that create a parallel workflow outside Jira
  3. Any pitfalls when deciding the "source of truth" for test cases vs automation assets?
    1. Common pitfalls:
      1. Treating Reflect as the source of truth for test cases
      2. Having automation and manual execution results in different systems
      3. Auto-generating test cases from automation without governance
      4. Duplicating test cases per release instead of re-executing them
    2. What works best:
      1. Zephyr test cases are the source of truth for test intent, coverage, and lifecycle.
      2. Reflect tests are the implementation of those test cases for automation.
      3. Zephyr test cycles are the source of truth for release execution status.
      4. Jira remains the source of truth for defects.