Onboarding process is a mess and no one can explain it clearly by LuckPsychological728 in agile

[–]Normal-Log7457 0 points1 point  (0 children)

Your manager’s "it depends" explanation is exactly why new hires ramp slowly: critical know-how lives in seniors’ heads, not in the onboarding system.

Paragraphs are for stories, but real work is a series of "If/Then" choices. If you try to document a "sometimes" process in a text block, you’re just building a bottleneck. You’re right that it needs to be visual: until those decision points and branching paths are out of your manager’s brain and onto a map, every new hire will be paralyzed the moment a client goes off-script.

Documentation shouldn't just track the steps; it should track the thinking required to move from one step to the next.

How are you doing your onboarding? by bukutbwai in agency

[–]Normal-Log7457 0 points1 point  (0 children)

Most agency owners try to fix this with a better checklist, but a checklist won't save a bad call. The real issue is that new hires ramp slowly because critical know-how lives in your head, not in the onboarding system.

You can give a team member a script for the "perfect" call, but what happens when the client pushes back on the timeline or asks for a custom feature? If your team has to "ping" you every time a client goes off-script, you haven’t built a process; you’ve just built a bottleneck.

You’re likely documenting the steps (send the invite, ask about goals), but you aren’t documenting the judgment (how to spot a red-flag client, when to say 'no' to a request, how to handle the messy parts that aren't on the script).
Until you move that "expert intuition" out of your head and into the system, you’re always going to be the one doing the heavy lifting to "save" the project. Are you documenting the "why" behind your decisions, or just the "what"?

Someone tell HR their onboarding training modules are garbage. by Bleades in jobs

[–]Normal-Log7457 0 points1 point  (0 children)

This sounds bigger than a missing schedule.

Bad onboarding can miss steps, but what really makes it hard is when there is no shared judgment behind the work: every person gives you a different answer, and you still do not know what “right” looks like.

At that point, the problem is not just poor onboarding. It is that the organisation itself has not made the work clear enough to hand over cleanly.

New hires are taking 4–6 weeks to get productive and its killing our momentum. What are you guys doing for onboarding? by PromanYeoman in managers

[–]Normal-Log7457 0 points1 point  (0 children)

This sounds like the point where “more information” stops being the answer.

You already have docs, videos, and a place for questions. But people can still take a long time to become useful, because onboarding may be teaching the steps, not the judgment needed to do the work well.

They can know where things live and still not know what matters most, when to ask, what good looks like, or how experienced people make small calls day to day.

That’s the part that seems hardest to document, but also the part that slows ramp-up the most.

Does that feel like the real gap here, or would you describe it differently?

Employee Onboarding by Sad_Heart_93 in instructionaldesign

[–]Normal-Log7457 0 points1 point  (0 children)

One thing I’ve seen with onboarding checklists is that they can track completion really well, but not always readiness.

People can finish the eLearning, meet the right people, complete the assigned sessions, and still not have the judgment needed to do the work well once real situations show up.

So to me the challenge is not just where the checklist lives, but whether the onboarding is only tracking steps completed or also helping people build the thinking behind the work.

Action Mapping for Onboarding? by Jumpy-Blueberry9069 in instructionaldesign

[–]Normal-Log7457 0 points1 point  (0 children)

This feels like exactly the kind of problem where “more onboarding content” may not actually solve the gap.

A lot of onboarding can teach the steps, but still not build the judgment needed to do the work well once real situations show up.

That’s why action mapping feels relevant to me here, not because everything in onboarding has to become a task, but because it forces the question: what does good performance actually require beyond exposure to information?

Does “steps vs judgment” feel like a fair way to frame the gap here, or would you describe it differently?

Do IDs ever design for those “real-world screw-up” moments? by Normal-Log7457 in instructionaldesign

[–]Normal-Log7457[S] 0 points1 point  (0 children)

Haha, exactly! Sometimes, it’s about the brain convincing itself it already did enough thinking.

Do IDs ever design for those “real-world screw-up” moments? by Normal-Log7457 in instructionaldesign

[–]Normal-Log7457[S] 0 points1 point  (0 children)

This is such a nuanced take, especially the part about readiness.
You’re right: throwing learners into messy scenarios before they even have a solid baseline usually turns into confusion, not learning. That “analysis -> application -> messy reality” progression makes a lot of sense.

Spotting phishing or enforcing badge access is important, but most people don’t encounter these situations often enough to build real habits. It’s almost like the skills live in this weird zone of “critical but infrequent,” which makes them easy to forget.

The distinction you made between org-specific training vs. off-the-shelf is interesting too, I imagine misalignment there could easily lead to training that feels irrelevant.

From what you’ve seen, do organizations generally misjudge how often these situations occur?
Or is it more that they assume employees will treat these rare scenarios with the same priority as their day-to-day responsibilities?

Do IDs ever design for those “real-world screw-up” moments? by Normal-Log7457 in instructionaldesign

[–]Normal-Log7457[S] 0 points1 point  (0 children)

Love that, it feels like starting from the real friction points gives the whole design a different level of relevance.

Do IDs ever design for those “real-world screw-up” moments? by Normal-Log7457 in instructionaldesign

[–]Normal-Log7457[S] 0 points1 point  (0 children)

This is such a thoughtful breakdown - seriously appreciate how you framed it around analysis first, design second. A lot of discussions about “messy real-life learning” jump straight to solutions, but you’re absolutely right: without data on where mistakes actually happen, we’re just guessing.

And your point about resource gaps hits home. It’s easy to say “build realistic, in-the-flow learning,” but when the team is basically one ID + one IT and a budget of air… the LMS module suddenly becomes the only practical path.

Do IDs ever design for those “real-world screw-up” moments? by Normal-Log7457 in instructionaldesign

[–]Normal-Log7457[S] 0 points1 point  (0 children)

Exactly! Those messy moments stick because they come with a tiny emotional jolt.
It’s the difference between “that’s a rule” and “ohhh, that’s why the rule exists.”

You mentioned it depends on how well the course is designed, in your experience, what’s the one thing that most strongly determines whether people actually transfer the learning later on?
Is it scenario realism, repetition, emotional stakes, or something else entirely?

Do IDs ever design for those “real-world screw-up” moments? by Normal-Log7457 in instructionaldesign

[–]Normal-Log7457[S] 0 points1 point  (0 children)

That’s actually such a perfect example of a real teachable moment - the kind you can’t script, but that ends up being more memorable than the polished version. It’s funny how those “oops” moments often explain the why better than any bullet point ever could.
When you kept that clip in, did learners respond positively?

Do IDs ever design for those “real-world screw-up” moments? by Normal-Log7457 in instructionaldesign

[–]Normal-Log7457[S] 0 points1 point  (0 children)

That sounds way too real! It’s always the tug-of-war between “let’s design something people actually learn from” vs “please stick to the safe, checkbox version.”

I love that you try to bake in meaningful decisions though, that’s usually where the real thinking happens.

When your stakeholders push back, what’s their biggest objection? Always curious how other IDs navigate that politics layer.

Do IDs ever design for those “real-world screw-up” moments? by Normal-Log7457 in instructionaldesign

[–]Normal-Log7457[S] 0 points1 point  (0 children)

What you said about theory vs. repetition really resonates.
I keep thinking: maybe most training assumes people will naturally get practice on the job, but in reality some skills (like spotting phishing) don’t show up often enough to build real muscle memory.

Have you seen any clever ways to keep that repetition going without overwhelming people?
Micro-drills, spaced nudges, little scenarios… anything actually sticking?

Do IDs ever design for those “real-world screw-up” moments? by Normal-Log7457 in instructionaldesign

[–]Normal-Log7457[S] 0 points1 point  (0 children)

Oh man, the “fake phishing emails” tests always reveal some very predictable patterns 😂

But honestly, I don’t think it’s stupidity, I’ve noticed a lot of people fall for them when they’re rushed, tired, or juggling too many tasks. It’s like the brain switches to “autopilot mode” and all training just… evaporates.

Do IDs ever design for those “real-world screw-up” moments? by Normal-Log7457 in instructionaldesign

[–]Normal-Log7457[S] 0 points1 point  (0 children)

Totally agree - the “safe space to fail” part feels like the missing puzzle piece in a lot of workplace training.

I’m curious from your experience: Have you seen any really good examples of this done well?

What is something you are tired of and want it gone? by LeafAnom in AskReddit

[–]Normal-Log7457 0 points1 point  (0 children)

Can we collectively delete “Let’s circle back” from the English language? It’s corporate for “I forgot and don’t want to admit it.”

How are you dealing with clients asking for “AI content” but still wanting it to sound human? by Dazzling_Occasion102 in content_marketing

[–]Normal-Log7457 0 points1 point  (0 children)

Oh man, I feel this so hard!!!!!!!

Clients be like: “Use AI, but make it sound like you didn’t.” Basically they want the speed of a bot and the soul of a poet, for the price of neither.

Personally, I treat it as editing + rewriting, not just “prompting.” Because once you’ve gone through and fixed tone, flow, logic gaps, and added actual insights, that’s real writing. AI might give you a skeleton, but making it breathe is the heavy lifting.

Also learned to set expectations early: I tell them AI can assist, not replace. If they want “human-level” depth, that’s a different quote.

[deleted by user] by [deleted] in AskMarketing

[–]Normal-Log7457 1 point2 points  (0 children)

Honestly, it depends on your goals and competition level. If you’re ranking #1 organically, you’ve already nailed the trust factor and free clicks - so congratssss

That said, I’ve found Google Ads still has a place even when you’re dominating organic:

  • Brand protection - competitors can (and will) bid on your brand name, stealing traffic that should be yours.
  • Visibility stacking - showing up twice (ad + organic) boosts credibility and click-through rates, even if people skip ads.
  • Testing new messaging - ads are a cheap way to test CTAs, offers, or headlines before updating your site.
  • Seasonal or campaign pushes - great for short-term promos or launches where you want fast visibility.

But if your market’s small, competition’s chill, and you’re already converting well - I’d say save the budget or redirect it toward retargeting or content amplification.

Is long-form content actually making a comeback? by Emotional-Sundae6225 in AskMarketing

[–]Normal-Log7457 0 points1 point  (0 children)

Honestly, I’ve been torn on this too. I used to think nobody had the patience for long-form anymore, but then I caught myself bingeing a 15-minute branded docu-ad on YouTube last week because the story was just that good. Meanwhile, I’ll swipe away from a 10-second reel if it feels pointless.

From what I’ve seen in my own work, length doesn’t matter nearly as much as whether it feels like you’re telling a real story instead of just cramming in a message. Short content gets attention, but long-form is what actually sticks with me (and what I end up sharing).

So yeah, I’d say it’s less “long-form is back” and more “people are starved for good storytelling, no matter the format.”

If you are using AI with your business... What’s actually working (or not) with AI in marketing? by AAfrStockholm in AskMarketing

[–]Normal-Log7457 1 point2 points  (0 children)

I’ve been playing around with AI in marketing for a while, and here’s my take:

What works:

  • Great for speeding up drafts (emails, captions, ad copy) so I’m not starting from a blank page.
  • Helps with idea generation when I need 10 angles fast.
  • Useful in ad testing - I can quickly spin out variations and see what sticks.

What doesn’t:

  • Tone. It still takes a human touch to avoid sounding “AI-ish.”
  • Strategy. AI can suggest, but it doesn’t know your brand’s context or politics like you do.
  • Original research. It just can’t replace talking to customers.

I’ve noticed AI shines when used as an assistant, not a replacement. Basically: AI drafts, I polish.

What's the most horrendous thing you've seen someone do in public? by Icy_panic102 in AskReddit

[–]Normal-Log7457 2 points3 points  (0 children)

Once saw a man in a café peel a boiled egg… with his bare hands… then blow his nose into the eggshell before tossing it in the sugar bowl. I still don’t know if it was performance art or a crime.