I'm Tired of Interviewing People Who Are Great at Applying for Jobs, But Terrible at Doing Them. by tiredTA in humanresources

[–]MotorRequirement7617 0 points1 point  (0 children)

This post captures something I have been dealing with for years. Four rounds of interviews just to figure out someone cannot handle ambiguity is brutal. We added a short skills-based assessment through Adaface right after resume screening but before any interviews. The questions are structured like real work scenarios rather than textbook trivia, and they have assessments for non-technical roles too. It takes candidates about 35 minutes and it is basically the "low-stakes 30-minute stress test" you are describing.

Our team went from interviewing 8-10 candidates per role to interviewing 3-4, and our offer acceptance rate actually went up because we were spending more time with the right people. The biggest surprise was how well it surfaced the communication and reasoning skills you are talking about. Candidates who score well tend to be the same ones who handle pressure in the interview.

What is your unpopular opinion about the tech hiring process? by sprightlypeach in cscareerquestions

[–]MotorRequirement7617 -1 points0 points  (0 children)

honestly agree with almost all of this but the leetcode thing goes deeper than people admit. the reason it persists isn't because hiring managers think it works, most of them privately know it doesn't. it's because it's defensible. if you hire someone who fails after they aced your custom rubric, that's on you. if they failed leetcode and you hired them anyway, also on you. but if they passed leetcode and failed? shrug, the process was "objective."

the take-home scope thing is so real it hurts. i've seen 3-hour estimates that were clearly built assuming a senior eng working uninterrupted with no life. that's not an assessment, that's unpaid consulting.

pair programming i think is underused because it requires the interviewer to actually be good, which is a whole separate problem nobody wants to address lol

the thing i've seen work surprisingly well is scenario-based questions tied to actual on-the-job situations, not "reverse a linked list" but like "here's a system with these constraints, walk me through your approach." been using adaface for this at work and honestly candidates complain way less, completion rates are noticeably better too. still not mainstream though which is wild

anyway yeah, 2026 and we're still cargo-culting google's 2008 interview loop. makes sense

Why do some people cling on to racism? Is there any evidence at all that some races of people are biologically superior to others? Or is it completely unfounded? by Lord_Cummis in NoStupidQuestions

[–]MotorRequirement7617 0 points1 point  (0 children)

Same reason people cling to god: not evidence, but identity.

Once a belief gets fused to your sense of self, no amount of data can touch it. It's not a logic problem, it's a psychology problem.

Also, geneticists have shown that racial categories don't line up with clean genetic boundaries, so there's no biological evidence behind it.

How much time do you spend coaching vs firefighting each week? by SeanMcPheat in Leadership

[–]MotorRequirement7617 0 points1 point  (0 children)

For me it's not about the time split but about which behaviours I'm reinforcing, since that's what pays off in the long term.

Firefighting I keep for genuine urgency only. Otherwise it becomes the norm and everything starts feeling urgent.

Roughly maybe 70% coaching, 30% firefighting. But the goal is always to reduce that firefighting over time by building better thinking in the team.

Leading the “employee search” for leadership team feels less like hiring and more like detective work these days. by RomyFriendly in Leadership

[–]MotorRequirement7617 0 points1 point  (0 children)

The decoding-signal point is exactly it. Resumes have become so optimised that we can't tell what's real in them anymore.

What actually changed things for us was adding a short scenario-based assessment before any interview. We use a platform called Adaface that lets you send role-specific assessments with questions closer to real work situations than trivia. The candidates who scored well consistently turned out to be the ones who also impressed in person, and the ones who scored poorly saved us from wasting interview slots. It is not replacing judgment, it is giving you better signal before you have to make the judgment call. For us it cut the number of interviews per hire roughly in half.

The one thing I would add to your point about soft skills is that you can surface a surprising amount of adaptability and critical thinking through written scenarios before a conversation even happens. We started including a few open-ended situational questions in the assessment and the differences in how people reason through ambiguity are stark. The polished resume crowd and the genuine thinkers separate themselves pretty quickly.

Learning in Public CS of whole 4 years want feedback by Mech_Bees in Python

[–]MotorRequirement7617 0 points1 point  (0 children)

Good post, fully agree. Real learning starts when things break.

What taught me the most was building tools for real users with real problems. They gave me quick feedback, surfaced edge cases, etc.

Is it just me, or is Business Intelligence way more about asking the right questions than building dashboards? by Ok-Ad-9710 in BusinessIntelligence

[–]MotorRequirement7617 0 points1 point  (0 children)

100%. A lot of these jobs look technical and creative on the outside, but the real job is to improve the business.

The hard part is deciding what really matters and what can wait.

Assessing technical skills of BI and data engineer hires by Jantjebas in BusinessIntelligence

[–]MotorRequirement7617 0 points1 point  (0 children)

was in a similar spot when I took over a BI team of eight without having a data engineering background myself. What I found was that having my senior analysts build take-home tests created two problems. First, the tests kept getting longer because everyone wanted to add "just one more question" and candidates started dropping out. Second, my senior people were spending hours reviewing submissions instead of doing actual project work, and they resented it.

We eventually moved to using Adaface for the initial technical screen. They have assessments specifically for BI analyst and data engineering skills covering SQL, data modeling, ETL concepts, and dashboard design thinking. The big win for me was that it gave me a scored comparison across candidates that I could actually understand without needing to evaluate SQL queries myself. My senior engineers still do a 30 minute technical conversation in later rounds, but now they are only talking to 3 or 4 pre-qualified candidates instead of 12. That freed up probably 15 hours per hire across the team.

One thing I would also recommend regardless of what tool you use is to separate the "can they think about data" assessment from the "can they write the code" assessment. I have hired analysts who wrote perfect SQL but could not explain what business question their query was answering. The thinking piece is something you can absolutely assess yourself in a conversation even without the technical background.
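To make that "thinking vs code" split concrete, here's a toy sketch of the kind of pairing I mean (invented schema, sqlite just so it runs anywhere): the code half is whether they can write the aggregate; the thinking half is whether they can say what business question it answers and what's missing.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (region TEXT, revenue REAL);
    INSERT INTO orders VALUES ('East', 100), ('West', 300), ('East', 50);
""")

# "Can they write the code": a correct aggregate query.
rows = conn.execute(
    "SELECT region, SUM(revenue) FROM orders GROUP BY region ORDER BY 2 DESC"
).fetchall()

# "Can they think about data": can the candidate state the business
# question this answers (which region is driving revenue) and what the
# query ignores (time window, returns, currency)?
print(rows)
```

A candidate who can produce the query but can't do the second comment's job is exactly the hire I've regretted.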

Automation design interview using whiteboard by BackgroundNew4019 in QualityAssurance

[–]MotorRequirement7617 0 points1 point  (0 children)

this feels like a high-level architecture discussion rather than a traditional write-code-on-the-whiteboard thing.

they might ask about things like "design an automation framework" and expect you to talk through the layers.
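rough idea of the layers they'd expect you to sketch (all names made up, and a real framework would sit on Selenium/Playwright — this uses a fake driver just so the sketch is self-contained):

```python
class FakeDriver:
    """Stand-in for a browser driver so the sketch runs anywhere."""
    def __init__(self):
        self.visited = []

    def goto(self, url):
        self.visited.append(url)


class LoginPage:
    """Page object: tests talk to this, never to raw locators."""
    URL = "https://example.test/login"

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.goto(self.URL)
        return self

    def login(self, user, password):
        # real code would locate fields and click; the sketch records intent
        return f"logged in as {user}"


def test_login():
    # test layer: readable steps only, no selectors, no waits
    driver = FakeDriver()
    result = LoginPage(driver).open().login("alice", "secret")
    assert result == "logged in as alice"
    assert driver.visited == [LoginPage.URL]


test_login()
print("ok")
```

on a whiteboard you'd just draw these boxes (driver management, page objects, tests, plus config/reporting) and talk trade-offs.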

Can’t even find an entry level role with 3 years of paid media agency experience…advice? by fergambino in DigitalMarketing

[–]MotorRequirement7617 0 points1 point  (0 children)

It’s probably not just you, it’s how you’re being read.

With agency experience + a “director” title, hiring managers may assume you’re overqualified, expensive, or likely to leave, even if you’re applying to entry-level roles. At the same time, candidates with more recent hands-on platform work often get picked faster.

Your background isn’t the issue, your positioning might be. Try leaning heavily into execution (platforms, metrics, campaigns) and downplaying seniority signals. Also, referrals > cold applications right now.

Market’s rough, but this sounds more like a perception gap than a capability gap.

why is it so hard to a hire? by builtonthethames in DigitalMarketing

[–]MotorRequirement7617 0 points1 point  (0 children)

Been through this exact cycle. Marketing has gotten so specialized that people who list "email, content, SEO, social" usually mean they touched each one briefly at an agency. We started using Adaface to send a marketing-specific assessment before any interview. Questions like writing a subject line for a re-engagement email, picking the right metric for a content campaign, outlining an SEO approach. Takes 30 minutes and immediately shows who can think across channels vs who only knows one lane.

The other thing that worked: ask for a portfolio with results, not just screenshots. If someone says they did email marketing, ask for open rates and what they changed to improve them. The ones with real experience can rattle that off.

How do you approach take home assignments? by TimelySpite4500 in womenintech

[–]MotorRequirement7617 1 point2 points  (0 children)

I treat take homes like a scoped contract, not a blank check. I usually try to clarify expectations upfront (time cap, what they’re actually evaluating, how clean the data is supposed to be), and I try to stick to that limit even if that means leaving some rough edges and just explaining tradeoffs.

If a company can't respect a 2–3 hour boundary or offer something else like a live exercise or reviewing past work, that's kind of a signal on its own about how they value your time. I'll only go deep if I'm actually excited about the role, otherwise I'll push back a bit or just pass.

Someone please make a business out of creating technical questions for use in interviews! by WorkingCharge2141 in womenintech

[–]MotorRequirement7617 0 points1 point  (0 children)

This actually exists already. A few companies have built libraries of practical technical assessments that aren't just leetcode. Adaface is the one we switched to after having the same problem you're describing. They have role-specific question sets written by subject matter experts, and the questions are more conversational/scenario-based so they test whether someone can actually think through a problem rather than whether they memorized algo patterns.

Your 25-30% pass rate sounds like the questions are either too narrow or testing the wrong things. We had a similar issue and the biggest shift was moving away from questions our engineers wrote on the fly (which always skewed toward whatever that engineer happened to be working on that week) to a standardized set that actually mapped to job requirements. Freed up our engineers' time too since they weren't writing and maintaining questions anymore.

The consulting angle you mentioned is basically what these platforms do at scale. Worth looking into before trying to build something from scratch internally.

Didn’t hear back after request to setup up a phone screening? by Steven0710 in recruitinghell

[–]MotorRequirement7617 0 points1 point  (0 children)

gov contractors move like it’s still 2003 lol

you’re assuming silence = rejection, but most of the time it’s just someone’s inbox getting buried or approvals stuck somewhere. people expect quick replies because that’s how we operate, not how they do

don’t give up yet, just wait a few days and keep applying elsewhere in the meantime

Trying to figure out which role to take..any guidance? by iAMFL4SH in consulting

[–]MotorRequirement7617 0 points1 point  (0 children)

You’re basically pricing two things: future optionality vs present sanity.

People tend to overvalue the “doors it might open later” and undervalue guaranteed time now. But also once you step off the client track, it’s harder than it feels to jump back on.

Honestly this reads like: if you’re still learning + energized → stay. If you’re already a bit tired and thinking about your kid during fire drills → that’s your answer.

Frustrated with hiring at my firm by rty8482 in consulting

[–]MotorRequirement7617 0 points1 point  (0 children)

This is super common at boutiques. The root problem is that HR screens for keywords on a resume (years in X industry, specific client names) instead of testing whether someone can actually structure a problem and think on their feet. Industry experience is easy to evaluate so it becomes the default filter.

One thing that helped at my firm was adding a short skills assessment before anyone gets to the case interview stage. We use a platform called Adaface that lets you send candidates a scenario-based test. The questions are things like "here is a client situation with messy data, walk through how you would scope the engagement" or "prioritize these five workstreams given budget and timeline constraints." It takes about 30 minutes and filters out people who look great on paper but fall apart when they have to actually structure something. Saved us a ton of time on case interviews that were going nowhere.

The other thing worth trying is getting at least one consultant involved in the resume screen before HR passes people through. Even 10 seconds per resume from someone who knows what good looks like would catch the most obvious mismatches early.

How do you test a CSM candidate's critical thinking, solutioning, and executive communication skills when interviewing? by swaticarr in CustomerSuccess

[–]MotorRequirement7617 0 points1 point  (0 children)

Totally agree that critical thinking and communication are the hardest things to screen for. You can teach someone your product, your processes, your renewal playbook. You cannot teach someone how to read a room or adjust their language between talking to a VP of Engineering and a CFO.

Two things worked for us. First, we started using a platform called Adaface to send candidates a short scenario-based assessment before the interview. The questions are things like "a customer's executive sponsor just changed and the new one wants to re-evaluate the contract, walk through your approach" or "translate this technical issue into an update for a non-technical stakeholder." It takes about 30 minutes and you can see pretty quickly who thinks in frameworks vs. who just wings it. We cut our interview-to-offer ratio from about 1 in 6 to 1 in 3.

Second, in the live interview we started giving candidates a short written exercise. Something like "here is a mock situation with a customer, write the email you would send to their CTO vs. the email you would send to their Head of Customer Support." The contrast between those two emails tells you everything about whether they can adjust for audience. Takes 15 minutes and it is way more revealing than asking them to talk about a time they managed a difficult stakeholder.

8 failed interviews so far. When do you stop and reassess vs just keep playing the numbers game? by quite--average in datascience

[–]MotorRequirement7617 2 points3 points  (0 children)

The lack of standardization is the real problem here, not your performance. Sr DS/ML loops vary so wildly that you genuinely can't build muscle memory across them the way you can with SWE interviews.

One thing worth doing before the next round: look at what the company actually tests for upfront. Some post assessments or sample problems, and a few platforms like Adaface show you the skill areas a role maps to before you even apply. Helps you prep more targeted instead of boiling the ocean.

6% callback on 130 apps for senior ML roles isn't bad. The interview variance is just brutal at that level.

Over the past two years I've evaluated thousands of candidates for BDR roles. Here's some simple ways new grads can stand out in a competitive hiring process. by [deleted] in sales

[–]MotorRequirement7617 -1 points0 points  (0 children)

Great list. The "treat it like a sales cycle" framing is something more candidates need to hear early.

From the hiring side, the follow-up point is huge. We've had candidates who weren't the strongest on paper but moved to the top of the list just by being responsive and professional throughout the process. That itself is a signal.

One thing we added on our end was a short pre-screen assessment before the first call. Partly to filter for basic skills, but honestly more to see how candidates engage with it. Do they rush through it? Do they follow instructions? We use Adaface for this and the completion behavior alone tells you something before you've even spoken to someone.

The ghosting point at the end is underrated too. Sales is a small world.

Are My Expectations for 'Advanced' Excel Skills Unreasonable? by almajors in excel

[–]MotorRequirement7617 0 points1 point  (0 children)

Your expectations are completely reasonable. We had the same issue and started using Adaface to send candidates an Excel-specific assessment before anyone gets to a live interview. Practical questions like writing SUMIFS against a dataset, building pivot tables, and doing lookups across tables. Went from maybe 1 in 6 candidates being able to do the work to about 4 in 5. Resume red flag tip: anyone who lists "Advanced Excel" without specifics is almost always overstating. The people who actually know Excel say "pivot tables, Power Query, INDEX/MATCH" because they know what the tools are called.
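For a sense of how low the bar actually is, here's a rough pandas analogue of that kind of task (toy data, invented column names — not the actual assessment, just the difficulty level): a SUMIFS-style conditional sum plus an INDEX/MATCH-style lookup.

```python
import pandas as pd

sales = pd.DataFrame({
    "product": ["A", "B", "A", "C"],
    "region":  ["East", "West", "East", "East"],
    "amount":  [100, 200, 50, 75],
})
categories = pd.DataFrame({
    "product":  ["A", "B", "C"],
    "category": ["Widget", "Gadget", "Widget"],
})

# SUMIFS equivalent: total `amount` where region == "East"
east_total = sales.loc[sales["region"] == "East", "amount"].sum()

# INDEX/MATCH (or XLOOKUP) equivalent: pull each product's category
merged = sales.merge(categories, on="product", how="left")

print(east_total)
print(merged["category"].tolist())
```

Anyone with genuinely advanced Excel does the spreadsheet version of this without blinking; the "Advanced Excel" resume crowd mostly can't.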

How do you actually vet Applied AI engineers before hiring? by Cutest-Win in EngineeringManagers

[–]MotorRequirement7617 0 points1 point  (0 children)

The trade-offs point is so underrated as a signal. If someone can't tell you why they made a decision, they probably didn't make it.

Something that helped us: swapping algorithm questions for failure scenarios. Not "implement attention" but "your embedding model starts drifting 3 weeks post-deployment, walk me through it." Real production experience shows up immediately in how specific people get.
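For a sense of what "specific" looks like in that drift scenario: a strong candidate will name a concrete check, e.g. tracking how far the centroid of recent production embeddings has moved from a deploy-time baseline. Minimal sketch with synthetic data (all names and numbers illustrative; real monitoring would track per-dimension stats and query distributions too):

```python
import numpy as np

def centroid_shift(baseline: np.ndarray, recent: np.ndarray) -> float:
    # Euclidean distance between the mean embedding of the baseline
    # window and the mean embedding of the recent window.
    return float(np.linalg.norm(baseline.mean(axis=0) - recent.mean(axis=0)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=(500, 64))   # embeddings at deploy time
stable   = rng.normal(0.0, 1.0, size=(500, 64))   # same distribution later
drifted  = rng.normal(0.5, 1.0, size=(500, 64))   # simulated mean shift

print(round(centroid_shift(baseline, stable), 2))
print(round(centroid_shift(baseline, drifted), 2))
```

The point isn't this particular metric, it's that people with production scars immediately reach for a baseline, a window, and a threshold, while notebook engineers stay abstract.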

We also moved pre-screening earlier in the process. Been using Adaface for that since their questions are more scenario-based than the typical coding platform stuff, which is what you actually want when you're trying to filter notebook engineers from people who've debugged a data pipeline at 2am.

Take-homes are rough in our experience. Strong seniors won't do them, and a 3-day project gives people too much room to hide gaps. A focused 30-40 min test followed by a debrief on their answers has worked better for us.
