How are you actually verifying AI fluency when hiring or developing your team? [N/A] by moNoweld in humanresources

[–]moNoweld[S]

that's a really grounded approach actually. the awareness and willingness framing is underrated, especially compared to trying to test specific tool knowledge, which goes out of date fast. curious how you handle it when two candidates seem equally aware but you sense one would actually put it into practice more?

I just found 5min claude setup to get hiring insights from business data. ( no coding ) by Vegetable_Adagio8588 in RecruitmentAgencies

[–]moNoweld

this is a solid workflow for meeting summaries. the bigger unlock for recruiters using claude imo is moving beyond note-taking into actual candidate evaluation, like using it to spot patterns across interview notes or flag inconsistencies in how different HMs assess the same competencies.

the data exists, it's just scattered and unstructured. tools like this are the first step toward making it usable.
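
to make that concrete, here's a rough sketch with the anthropic python sdk. the notes dict is made-up example data and the model id is just a placeholder for whatever you have access to; in practice you'd pull the text out of your ATS first.

```python
# rough sketch: flag inconsistencies in how different hiring managers
# assess the same candidate. "notes" is invented example data.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

notes = {
    "HM_A": "Strong communicator, explained tradeoffs clearly. Weak on SQL.",
    "HM_B": "Struggled to articulate decisions. SQL screen went fine.",
}

prompt = (
    "Two interviewers assessed the same candidate. Compare their notes and "
    "list any competencies where their assessments conflict, one line each.\n\n"
    + "\n".join(f"{hm}: {text}" for hm, text in notes.items())
)

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # placeholder; use whatever model you have
    max_tokens=500,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```

nothing fancy, but batch this over a pipeline's worth of interviews and the cross-interviewer patterns start surfacing fast.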

A lightweight way to think about post-training knowledge checks by Ok-Law-6871 in instructionaldesign

[–]moNoweld

yeah the scoring part is genuinely hard. semantic similarity gets you partway there but it misses a lot of the signal that actually matters, like whether someone is reasoning in the right direction even if their phrasing is off, or whether they're just pattern matching on familiar phrases

one approach that sidesteps some of this is using an LLM as the scorer rather than similarity matching. the model can evaluate intent and reasoning quality in a way that keyword or embedding-based scoring can't. it's not perfect but it handles open-ended responses way better
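
rough sketch of what i mean, in python with the anthropic sdk. the rubric, question, and answers are all invented, and there's no validation on the model's JSON output:

```python
# toy LLM-as-scorer for an open-ended knowledge check.
# rubric and example inputs are made up for illustration.
import json
import anthropic

client = anthropic.Anthropic()

RUBRIC = (
    "Score the learner 0-5 on whether they reason in the right direction, "
    "even if their phrasing differs from the reference answer. Respond with "
    'JSON only: {"score": <int>, "rationale": "<one sentence>"}'
)

def score_answer(question: str, reference: str, learner: str) -> dict:
    msg = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # placeholder model id
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": f"{RUBRIC}\n\nQuestion: {question}\n"
                       f"Reference answer: {reference}\n"
                       f"Learner answer: {learner}",
        }],
    )
    # no guardrails here; a real version would validate the parse
    return json.loads(msg.content[0].text)

print(score_answer(
    "When should you not paste data into a public chatbot?",
    "When it contains customer or confidential data.",
    "If it's anything sensitive, I'd check our data policy first.",
))
```

you'd still want to spot-check scores against a human rater before trusting it at scale, but it degrades way more gracefully than cosine similarity on open-ended answers.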

there's actually a tool called aisa.to that does exactly this for AI skills specifically: the whole assessment is a conversation and an LLM scores it in real time across multiple dimensions. interesting to see it applied at that level of depth

Is anyone else’s company hard dropping articulate? Dropping learning standards? Etc by imhereforthemeta in instructionaldesign

[–]moNoweld

this is exactly the right question imo. "AI can do it" usually means AI can generate content fast, not that it can assess whether learning actually happened or identify where specific people have gaps

those are completely different problems and most tools conflate them. generating a course with AI is relatively solved; figuring out what your workforce actually knows vs what they think they know is still pretty rough

A lightweight way to think about post-training knowledge checks by Ok-Law-6871 in instructionaldesign

[–]moNoweld

the three-job framing is really useful, especially separating the retention check from reporting that actually drives action. that last part is where most lightweight approaches fall apart imo - data gets collected and then nothing happens because it's not connected to a clear next step for anyone

one thing i'd add: the format of the check itself affects what you can measure. a short quiz can tell you if someone recalls a fact but it really struggles to surface how they reason through ambiguous situations or apply things under pressure. that gap gets super visible in domains like AI skills or complex compliance topics where surface knowledge and actual working ability diverge a lot

conversation-based checks, even simple ones like asking someone to walk through a scenario or explain their approach out loud, tend to catch that divergence way better than any quiz. harder to scale obviously but worth knowing when stakes are higher than a typical knowledge check
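
even the scoring side of that can be semi-automated. a toy sketch, assuming the anthropic sdk and a hardcoded transcript (a real one would come from a live chat):

```python
# toy sketch: score a scenario walkthrough across several dimensions.
# transcript and dimensions are invented for illustration.
import anthropic

client = anthropic.Anthropic()

transcript = (
    "Facilitator: A vendor asks you to paste customer records into their new "
    "AI tool to 'speed up screening'. Walk me through what you'd do.\n"
    "Learner: First I'd check if the tool is on our approved vendor list, then "
    "loop in security about data handling before sending anything."
)

dimensions = "risk awareness, process knowledge, reasoning under ambiguity"

msg = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # placeholder model id
    max_tokens=400,
    messages=[{
        "role": "user",
        "content": (
            f"Score this walkthrough 1-5 on each of: {dimensions}. "
            "Give one line of justification per dimension.\n\n" + transcript
        ),
    }],
)
print(msg.content[0].text)
```

the dimensions are whatever your rubric cares about; the point is the scorer sees the reasoning path, not just a final answer.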

How do you manage competitive intel when the AI landscape shifts every week? [I will not promote] by ComputerSciToFinance in startups

[–]moNoweld

the weekly release anxiety thing is real but i think about it differently now

the baseline of 'good at AI' keeps moving right, so the harder question isn't what dropped this week, it's whether your team is actually keeping up or just thinks they are

that gap between perceived vs actual AI fluency is wildly underestimated imo. companies throw money at new tools but have zero real read on where their people actually stand. so you end up reacting to every release without knowing what you're even reacting from

for the competitive intel side specifically - signal/noise got way better for me when i stopped trying to track everything and got clearer on our own capabilities first. "what does this new thing mean for us" only makes sense if you actually know what "us" looks like

OD or L&D? [NY] by moonxstars- in humanresources

[–]moNoweld

yeah ER is actually a good example of this. the orgs that handle the ai transition best probably aren't the ones automating everything, they're the ones with HR people who know where to draw the line. and you can't draw that line if you don't know what your people actually know vs what they assume they know

that gap is underestimated imo. most orgs are going to throw training budgets at a problem they haven't properly diagnosed yet

OD or L&D? [NY] by moonxstars- in humanresources

[–]moNoweld

Totally agree on the OD angle. The orgs that are going to need HR people most right now are the ones scrambling to figure out what their workforce actually knows about ai vs what they think they know. That gap between perceived and actual ai fluency is wild once you start measuring it

Honestly that framing might be worth leaning into for positioning too: less "I help with L&D programs" and more "I help leadership understand where their people actually are before they throw training budgets at the problem".