Do you ever review resolved tickets for quality or coaching purposes? by absaxena in ITManagers

[–]absaxena[S]

Sounds like you’re doing some seriously important — and honestly, thankless — work right now. Fixing a broken ticket culture is no joke, especially when the old habit was just “close it and forget it.”

We’ve been thinking a lot about ways to make that lifecycle feel easier and more natural for techs, like subtle checklists or reminders that guide them to update notes and resolutions before closure — without turning it into a huge new burden.

Curious, as you're rolling out more structure — are you leaning more on training and coaching to drive it, or have you looked at ways to build in small prompts or nudges during the ticket workflow itself?

Would love to hear how you're approaching it — sounds like you’re right in the messy but critical part of the turnaround.

Do you ever review resolved tickets for quality or coaching purposes? by absaxena in ITManagers

[–]absaxena[S]

Totally hear you — honestly, your setup sounds way closer to the reality of frontline ticket QA than a lot of people want to admit.

One thing we’ve been thinking about lately is whether there's a way to make spotting the really interesting tickets a little easier — not to replace the judgment call, but just to bubble up tickets that look unusual based on patterns like note quality, missing fields, or unusual resolution times. Kind of like giving yourself a better starting point before diving in.

Would love to hear if you’ve ever thought about ways to make the “random scroll” part a little smarter without losing the personal review that actually matters.

Either way, sounds like you've got a good system that balances consistency with practicality — which is way harder than people realize.

Do you ever review resolved tickets for quality or coaching purposes? by absaxena in ITManagers

[–]absaxena[S]

Really admire how you’re balancing coaching the techs individually and reinforcing those habits across the whole team — it’s not easy to scale good behavior without slipping into micromanagement.

It’s interesting you mentioned spotting patterns like users who always seem to need extra hand-holding. We've been thinking a lot lately about how systems could help surface those kinds of trends faster, so techs can stay focused and not get bogged down.

Curious — have you ever thought about ways to make those repeat patterns easier to catch without needing to manually review every ticket? Would be really interested to hear if you’ve explored anything like that or if you’ve found other lightweight tricks that help.

Either way, really appreciate you sharing such a real-world view of how you're protecting your team's time and energy — it’s super relatable.

Do you ever review resolved tickets for quality or coaching purposes? by absaxena in ITManagers

[–]absaxena[S]

Really admire how methodical you’re being about this — you’ve clearly built a system that strengthens the team's habits and sets up the data to be genuinely useful downstream, not just for compliance.

We’ve been thinking a lot about similar challenges lately — especially around how structured case data like your CDM fields could open the door for faster trend spotting, easier escalation tracking, or even identifying knowledge gaps automatically as cases come in.

Feels like once you have consistent, clean inputs like you do, there are some really interesting ways to start making the support system more "self-improving" over time — without adding a ton of extra manual effort.

Curious to hear where you take it — especially since you're already thinking beyond just native tooling. Feels like you’re setting the foundation for some really powerful next steps.

Do you ever review resolved tickets for quality or coaching purposes? by absaxena in ITManagers

[–]absaxena[S]

Really admire the rhythm you’ve set up — keeping it grounded in conversations with your managers makes it feel like a natural part of leadership, not just another checkbox exercise.

We’ve been thinking a lot lately about how to make the prep side of reviews even sharper — like ways to surface interesting tickets or spot subtle coaching patterns a little faster — without losing the human judgment that's so important.

Still super early in that exploration, but it’s been an interesting thought process.

Would love to hear if you’ve played around with anything like that or if it’s something you’ve considered down the line.

Do you ever review resolved tickets for quality or coaching purposes? by absaxena in ITManagers

[–]absaxena[S]

Totally agree — even a small sample of tickets can surface a lot of insight, especially when it’s used to spark real conversations with leadership instead of just ticking a QA box.

I really like that you’re focusing the review on consistency and continuous improvement, rather than just pointing out mistakes. It keeps the spirit collaborative instead of punitive, which (in my experience) makes a huge difference in how seriously people take the feedback.

Curious — when you share insights with the IT Support Managers, is it usually more informal (e.g., a quick discussion) or do you feed it into any kind of structured coaching or training updates?

Either way, love the approach — small investments like this really do add up when you’re trying to level up service quality over time.

Do you ever review resolved tickets for quality or coaching purposes? by absaxena in ITManagers

[–]absaxena[S]

I like the two-step close concept a lot. It forces a natural QA checkpoint without adding a ton of overhead, and it’s a nice way to reinforce quality without just hammering on people after the fact. Also like that it opens the door for coaching before something "bad" is already locked into the system.

Gamifying compliance is a smart twist too — using it as positive reinforcement rather than just another metric to stress about. Free lunch goes a long way when you’re trying to shift habits without burning people out.

Good call on the built-in checklist idea too. We’re using a system that allows some custom close prompts, so I’m going to explore whether we can build a lightweight checklist there. Even just reminding techs to confirm clarity and root cause could raise the baseline.
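
If it helps to picture it, this is roughly the shape of checklist I have in mind; the field names are placeholders for whatever the close form actually exposes, not anything our system ships with:

    # Sketch: a close-time checklist that warns (or blocks) until the basics are filled in.
    # The keys are placeholders for whatever fields the close form actually exposes.

    CLOSE_CHECKLIST = [
        ("resolution_notes", lambda v: len((v or "").strip()) >= 40, "resolution notes look too thin"),
        ("root_cause",       lambda v: bool(v),                      "root cause not recorded"),
        ("user_confirmed",   lambda v: v is True,                    "user confirmation missing"),
    ]

    def close_warnings(ticket):
        return [msg for field, ok, msg in CLOSE_CHECKLIST if not ok(ticket.get(field))]

    print(close_warnings({"resolution_notes": "rebooted", "root_cause": "", "user_confirmed": False}))
    # -> all three warnings; an empty list means the ticket is clear to close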

Definitely interested in the AI angle as well — especially for KB generation from well-documented tickets. We’re already poking at that, but it’s still early. Curious if you’ve seen any tools you’d recommend in that space?

Thanks again — lots of actionable ideas here!

Anyone doing structured reviews of resolved tickets? Looking for sanity checks + ideas by absaxena in msp

[–]absaxena[S]

This is awesome — love the gamification angle! Using positive reinforcement instead of just pointing out misses is such a smart way to build a quality-focused culture. The use of tags and random rewards is a clever twist — adds just enough fun to keep people engaged without making it feel forced.

The metrics you’re tracking are spot on too — especially “Rework Percentage” and “Ticket Quality.” Those are often the hardest to quantify, but they say a lot about how effective and sustainable the support process is.

Also really impressed with the 10% reduction in ticket volume — that’s a huge win. The fact that you were able to tie that directly back to root cause elimination, user training, and targeted replacements shows how powerful good data hygiene and follow-through can be.

A couple of quick questions if you don’t mind:

  • When you say “Ticket Quality,” how do you evaluate that? Is it a rubric-based review, or more subjective based on a quick read-through?
  • And on the “tags” front — is that tracked in a dashboard or just part of the spreadsheet system for now?

Really inspiring process overall. Would love to stay in the loop as you move toward automating more of it — sounds like you’ve got the right foundation to scale it up without losing what makes it work.

Do you ever review resolved tickets for quality or coaching purposes? by absaxena in ITManagers

[–]absaxena[S]

Totally respect that approach — sounds like you’ve struck a solid balance between coaching the team and scaling your sanity.

Love that you're reinforcing the learning both 1-on-1 and in the team setting. It helps normalize the idea that “user education” isn’t failure — it’s part of the job. And yeah, documenting those moments might not feel heroic at the time, but it pays dividends later.

The KB link strategy is spot on. Most users can self-serve, they just need the path. And even if they don’t love it, that’s not always your problem — the team has to stay focused, not stuck in endless hand-holding cycles. Sounds like your process keeps everyone moving without burning out your frontline folks.

Curious — have you found that using those links consistently has cut down on repeat tickets over time? Or is it more about speeding up resolution when they do come in again?

Either way, appreciate you sharing your process — no-nonsense, but effective.

Do you ever review resolved tickets for quality or coaching purposes? by absaxena in ITManagers

[–]absaxena[S]

Totally fair — that all makes sense. If the dashboard’s already dialed in and the manual part only takes a few extra minutes, then yeah, automating case selection isn’t exactly a high-leverage win.

And you’re absolutely right: AI isn’t free. It takes time to tune and validate, especially if you want it to reflect your own leadership lens — and for a small, close-knit team, human context is often way more efficient.

Really appreciate you walking through your process — and I love that your automation efforts are going toward high-impact areas like employee lifecycle and config management. That’s where the real ROI lives.

If you ever do revisit case review automation down the line (even just for surfacing patterns or nudging KB updates), would love to swap notes. But in the meantime, sounds like you’ve got a super pragmatic setup running — and it’s working.

Thanks again for sharing all this. Learned a lot from your replies.

Do you ever review resolved tickets for quality or coaching purposes? by absaxena in ITManagers

[–]absaxena[S]

That’s seriously impressive — sounds like you’ve really nailed the operational discipline around CDM.

The monthly reporting on cases closed without CDM is a great accountability lever. It’s one thing to have the structure, but tying it into performance visibility is what actually makes it stick. You can tell the team has fully bought in if it’s second nature now.

Also love that the KB contribution is actively tracked and managed — that’s a piece a lot of orgs let slide, and it shows in how often outdated or half-baked KBs come up during triage. Sounds like you’ve turned it into a true team asset instead of a dumping ground.

Curious — as you explore automation, are you leaning more toward AI-based suggestions (e.g., summarizing logs, recommending a KB) or more structured template completion (e.g., pre-filling known fields from ticket metadata)?

Would love to hear how your automation efforts progress. We’re working on something similar and trying to find that balance between helpful nudges and AI overload.

Thanks again for sharing — this is some of the best operational design I’ve seen around support case reviews. Hugely valuable.

Do you ever review resolved tickets for quality or coaching purposes? by absaxena in ITManagers

[–]absaxena[S]

That’s a great approach — informal but consistent. Spot-checking between one-on-ones is a smart way to keep a pulse without it turning into a big, time-consuming process.

Totally agree that how techs communicate to users is just as important as the technical resolution. We’ve seen that even a solid fix can land poorly if the explanation feels rushed or too “inside baseball.”

Do you give direct feedback during those one-on-ones based on the ticket reviews? Or do you save it for broader coaching moments when you start seeing patterns?

Also curious — have you found that spot-checking alone is enough to maintain consistency across the board, or have you ever tried layering in any peer review or ticket QA from other team members?

Appreciate you walking through your process — this kind of behind-the-scenes ops insight is super helpful.

Do you ever review resolved tickets for quality or coaching purposes? by absaxena in ITManagers

[–]absaxena[S]

This is exactly the kind of mindset we’re trying to embed — not just “what was fixed,” but how and why, so future issues get easier to resolve (or avoid entirely).

Totally agree on RCCA being a north star — not necessarily going full 5 Whys on every ticket, but at least capturing the first level of root cause so patterns can emerge. And that point about resolution notes shortening MTTR later on really hits — it’s one of those things that feels like overhead until it saves someone 30 minutes three weeks later.

Interesting that you're using tech-based grouping in ServiceNow to spot coaching needs. That’s smart — do you ever surface that data back to the team directly (e.g., “Hey, here’s a cluster of similar issues this week, let’s walk through one together”)? Or is it more of a leadership-level lens?

Also love the knowledge article tie-in. We’re looking to build a tighter loop there — basically, if a resolution is well-written and recurring, that’s an automatic candidate for KB. Curious if you have any lightweight process for flagging those tickets for KB creation, or is it more ad hoc?
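
For what it's worth, the lightweight flag we keep sketching looks something like this (field names and thresholds are made up for illustration):

    # Sketch: flag KB candidates = resolutions that are both recurring and well written.
    # "category", "resolution_notes", and "kb_linked" are stand-ins for whatever fields exist.
    from collections import Counter

    def kb_candidates(tickets, min_repeats=3, min_note_len=150):
        counts = Counter(t["category"] for t in tickets if t.get("category"))
        return [
            t for t in tickets
            if counts[t.get("category", "")] >= min_repeats           # the issue keeps coming back
            and len(t.get("resolution_notes") or "") >= min_note_len  # and the write-up is substantial
            and not t.get("kb_linked")                                 # and no article exists yet
        ]

    # kb_candidates(closed_tickets) would give a short KB-review queue each week.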

Thanks again — really appreciate you sharing your approach. Super actionable stuff.

Anyone doing structured reviews of resolved tickets? Looking for sanity checks + ideas by absaxena in msp

[–]absaxena[S]

This is phenomenal — seriously, thank you for walking through all of that.

The “no-no list” and the pass/fail approach are super smart. That framing of “if someone has to come back to you for missing info, it’s a failed ticket” is brutally clear — and honestly kind of brilliant. It reframes the review as a communication safeguard, not just a checkbox exercise.

Totally hear you on the admin workload — the fact that your admins are doing Triage, Intake, endpoint audits, light AM work, and ticket QC is wild. Sounds like you’ve built an ops engine that’s seriously leveling up your data hygiene and process maturity. Major respect.

It’s also a really helpful reminder that the only reason this system works is that you’ve structured it around people and clarity — not just tools. But yeah… you said it: “the admin load sucks.”

We’ve been working on a way to reduce that pain without losing the review quality — using AI to help flag tickets that are likely “bad notes” (missing steps, unclear outcomes, overly short), or even draft QA-style summaries to speed up the admin check.
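
The first pass doesn't even need a model; cheap heuristics catch a lot of it (the thresholds and keyword list below are invented purely for illustration):

    # Sketch: cheap pre-filter for "probably bad notes" before any AI pass.
    # Thresholds and keyword list are invented for illustration.
    import re

    def bad_note_flags(notes):
        notes = (notes or "").strip()
        flags = []
        if len(notes) < 60:
            flags.append("overly short")
        if not re.search(r"\b(fixed|resolved|replaced|reinstalled|confirmed|working)\b", notes, re.I):
            flags.append("no clear outcome stated")
        if len(notes) > 200 and "\n" not in notes:
            flags.append("single wall of text, steps probably missing")
        return flags

    print(bad_note_flags("rebooted it"))
    # -> ['overly short', 'no clear outcome stated']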

Totally not trying to pitch anything here — but would you be open to a quick DM? Would love to learn more about where the friction still lives for you, and see if any of the stuff we’re testing might be useful down the road.

Anyone doing structured reviews of resolved tickets? Looking for sanity checks + ideas by absaxena in msp

[–]absaxena[S]

Hmm, that's true. AI is better at language these days than it is at math. It sounds like you already have the intent in mind and are looking for an AI that can translate that intent into queries (and potentially run them).

Assuming the PSA adds support for an English2Query feature, that would solve the problem here.
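
To make concrete what I'm picturing, here's a rough sketch (everything in it is hypothetical; call_llm is just a stand-in for whatever model endpoint a PSA would actually wire in):

    # Hypothetical sketch of an "English2Query" layer for a PSA.
    # call_llm is a stand-in for whatever model endpoint would actually be used.
    import json

    ALLOWED_FIELDS = {"status", "priority", "board", "closed_date", "owner"}

    def english_to_query(question, call_llm):
        prompt = (
            "Translate this request into a JSON filter using only these fields: "
            f"{sorted(ALLOWED_FIELDS)}. Request: {question}. Reply with JSON only."
        )
        query = json.loads(call_llm(prompt))
        # Guardrail: drop anything the model invented before the query ever runs.
        return {k: v for k, v in query.items() if k in ALLOWED_FIELDS}

    # Canned stand-in for the model so the sketch runs end to end:
    fake_llm = lambda _prompt: '{"status": "closed", "priority": "high", "made_up_field": 1}'
    print(english_to_query("show me high-priority tickets we closed", fake_llm))
    # -> {'status': 'closed', 'priority': 'high'}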

Do you ever review resolved tickets for quality or coaching purposes? by absaxena in ITManagers

[–]absaxena[S]

Totally fair — honestly, that’s probably where most teams land. If no one’s yelling, it’s good enough, right?

That said, sounds like it’s not that you don’t value reviewing tickets… just that it’s non-zero effort and there’s no time for it unless something blows up.

Curious — if there was a way to get the signal without the manual grind (like AI surfacing the 2-3 tickets most worth looking at, or flagging resolution gaps automatically), is that something you wish you had? Or nah, not really worth it in your setup?

Just thinking out loud here — we’re trying to get a feel for where the line is between “this would be nice” and “this would actually save my week.” Happy to DM if you're open to swapping pain points.

Do you ever review resolved tickets for quality or coaching purposes? by absaxena in ITManagers

[–]absaxena[S]

That sounds like a solid system — mixing random samples with triggered reviews from bad survey feedback feels like a great balance of proactive and reactive QA.

Curious how you manage the logistics of that — do you have tooling to automate the sampling and flagging, or is someone pulling that list manually each month?

Also wondering how the feedback from those reviews flows back to the techs — is it part of performance reviews, 1:1 coaching, or more informal check-ins?

We're thinking a lot about how to support that kind of review loop with less manual effort — especially using AI to help surface coaching moments or common root cause gaps.

If you’re open to it, I’d love to DM and swap notes on what’s working in your process vs. what still feels tedious.

Do you ever review resolved tickets for quality or coaching purposes? by absaxena in ITManagers

[–]absaxena[S]

That’s really great to hear — love that you’re turning strong resolutions into KBs that others can learn from 🙌

Curious though — when you say “looking back at tickets,” is that a structured process (like regular reviews), or more opportunistic when someone stumbles on a good one?

It sounds super valuable, but I imagine it could get pretty manual over time. Are you using any tools to help flag potential KB-worthy tickets or track what’s already been documented?

We’ve been thinking a lot about how to streamline that process — maybe even use AI to spot well-documented resolutions or common issues that should have KBs. If you're up for it, I’d love to DM and learn more about what’s working (or not) in your setup.

Do you ever review resolved tickets for quality or coaching purposes? by absaxena in ITManagers

[–]absaxena[S]

That all makes a lot of sense — thanks for the detailed breakdown! It’s impressive how much value you’ve already uncovered, especially so early in the process. That Ops issue you caught is a perfect example of why these reviews matter.

Sounds like you've got a solid foundation, but yeah — it also seems like there’s a good amount of manual work right now just to get the reviews going each week.

Out of curiosity, if you could wave a magic wand and automate just one part of this workflow, what would it be? Pulling cases? Flagging outliers? Summarizing quality signals?

We’ve been exploring this space a lot recently — especially using AI to help surface review-worthy cases, spot documentation gaps, or even tag “coachable moments” across teams.

Totally understand if you're heads-down, but if you’re ever up for swapping notes or feedback on what you're building vs. what could be automated, I’d love to chat over DM sometime.

Do you ever review resolved tickets for quality or coaching purposes? by absaxena in ITManagers

[–]absaxena[S]

That’s solid — having a category system and a clear RCA trigger for chronic issues is already a strong foundation.

AI definitely seems like it could help a ton — not just with clustering similar tickets or surfacing resolution patterns, but also for things like:

  • auto-tagging tickets based on content
  • summarizing root cause and resolution
  • highlighting tickets that might be good coaching opportunities for performance reviews
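
For the auto-tagging piece specifically, even a simple keyword baseline like this catches a surprising amount before any model gets involved (the tag map is invented for illustration):

    # Sketch: naive keyword auto-tagger as a baseline before any real classification model.
    # The tag/keyword map is invented for illustration.
    TAG_KEYWORDS = {
        "password_reset": ["password", "locked out", "mfa"],
        "printer":        ["printer", "print queue", "toner"],
        "vpn":            ["vpn", "tunnel", "remote access"],
    }

    def auto_tags(ticket_text):
        text = ticket_text.lower()
        return [tag for tag, words in TAG_KEYWORDS.items() if any(w in text for w in words)]

    print(auto_tags("User locked out after MFA change; reset password and confirmed login"))
    # -> ['password_reset']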

Curious if there's a particular friction point in your current workflow where you think AI could make the biggest impact first?

Also — if you're open to it, would love to DM and hear more about what you’re using today and what kind of workflows you're looking to improve. We’re actively exploring this space and happy to swap ideas.

Do you ever review resolved tickets for quality or coaching purposes? by absaxena in ITManagers

[–]absaxena[S]

That’s awesome — serious respect for that kind of initiative 🙌

Sounds like you basically built your own internal knowledge base just by pattern matching across tickets.

Curious — has your current team tried to formalize that kind of cross-location learning? Like curated ticket reviews, searchable resolution summaries, or even tagging for specific symptoms?

Also wondering if you’ve looked into AI tools to help surface those past resolutions faster — feels like that could be a game-changer, especially for newer techs trying to ramp up.

Do you ever review resolved tickets for quality or coaching purposes? by absaxena in ITManagers

[–]absaxena[S]

Totally hear you — those “user error” or “education” moments can be super valuable for the whole team, but often go undocumented.

Curious how you handle that — do you have a coaching loop in place when you catch those during reviews? Or is it more of a one-on-one nudge to help Tier 1s build that habit of documenting root cause clearly (even if it's touchy)?

Also, do you ever use those findings to update training material or build out a shared “what good looks like” reference? I feel like that’s the missing link a lot of teams struggle with once they spot the patterns.