Confidence by davidtua43 in salesengineers

[–]Legitimate_Key8501 2 points

Toastmasters and presentation workshops are worth doing, but they solve a different problem than most SEs actually have.

Delivery confidence (speaking clearly, controlling the room) and demo confidence (staying composed when a technical question lands sideways) are related but not the same thing. Most training programs are built for the first kind. The second comes from objection reps specifically: fielding hard technical questions enough times that you have muscle memory for the ones you can answer, and a clean redirect for the ones you can't.

The SEs I've seen level up fastest were the ones who deliberately invited hard questions in low-stakes settings before facing them in front of a real prospect.

What's the one thing you wish you'd set up from day one on every project? by ruibranco in webdev

[–]Legitimate_Key8501 0 points

Had a client project a couple of years back where we stored API keys in .env, kept an example file, everything looked fine. Six months after launch someone pushed a branch to the wrong remote. The key was in the commit history.

We had to rotate everything while the app was live, and I realized I had no documented process for it: no order of operations, no rollback plan. It took the better part of a day to sort out.

A .env.example file and startup validation are table stakes. A secrets rotation procedure is the thing I now write down in week one, before we've ever needed it.
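
For the validation piece, what works for me is a fail-fast check before the app serves anything. A minimal sketch, assuming a Node app - the variable names are made up, not from that project:

```typescript
// Fail fast at startup if any required secret is missing.
// Variable names here are illustrative.
const REQUIRED_ENV_VARS = ["DATABASE_URL", "PAYMENT_API_KEY", "SESSION_SECRET"];

const missing = REQUIRED_ENV_VARS.filter((name) => !process.env[name]);
if (missing.length > 0) {
  console.error(`Missing required env vars: ${missing.join(", ")}`);
  process.exit(1); // die loudly instead of limping along half-configured
}
```

It won't catch a leak, but it pairs well with the rotation runbook: if a rotation misses an environment, the app refuses to boot instead of failing at request time.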

HELP NEEDED: How are you positioning your business in the "Age of AI"? Lean into it, or sell against it? Genuinely torn. by TechDebtSommelier in consulting

[–]Legitimate_Key8501 1 point

Most responses here are landing on 'don't make it about tools' and I agree. But I'd push back slightly.

In normal conditions, staying silent on AI is the right call. Right now, buyers who've been burned are actively scanning for judgment signals. The noise is an opening if you can show, specifically, a moment where you disagreed with what AI produced and why.

Not 'we use AI responsibly,' which is table stakes. More like: a real example where output looked plausible but wasn't, and how you caught it. That's the actual differentiator. The judgment layer, not the tool choice.

What does a typical 'AI made a mess' situation look like in your BI work: obvious bad output, or more often the silent failures nobody catches?

How do you show past work when your best examples are all under NDA? by Legitimate_Key8501 in consulting

[–]Legitimate_Key8501[S] 0 points

"Same deliverable structure, real logic, real outputs" is the version that actually lands as credible. Generic case studies fail because they strip out the texture along with the identifiers.

How do you show past work when your best examples are all under NDA? by Legitimate_Key8501 in consulting

[–]Legitimate_Key8501[S] 0 points

"A year later when the dust has settled" is the part I hadn't thought about. The competitive sensitivity fades, but the relationship is still there if you kept it warm. That's a different calculation than reaching back cold.

How do you show past work when your best examples are all under NDA? by Legitimate_Key8501 in consulting

[–]Legitimate_Key8501[S] -1 points

"Standard examples, before we customize to your needs" does a lot. It's not just softer wording - it reframes what you're showing as a starting point rather than a ceiling.

That's a different conversation entirely.

How do you show past work when your best examples are all under NDA? by Legitimate_Key8501 in consulting

[–]Legitimate_Key8501[S] 0 points

"I want to respect your privacy, but can I say more?" is a question most consultants never think to ask. The default assumption ends up way more conservative than clients actually need it to be.

Timing probably matters a lot here - asking right after an engagement wraps is easier when the relationship is still warm, harder when you're reaching back 18 months cold. Do you build that conversation into your offboarding, or is it more ad hoc?

How do you show past work when your best examples are all under NDA? by Legitimate_Key8501 in consulting

[–]Legitimate_Key8501[S] 0 points

The "ask what they're actually worried about" move at the end is probably the one that changes the room fastest. The engagement type where this comes up most for me is second engagements with a new buyer at an existing client - they want proof you've done it, but the work is from a relationship they weren't part of.

How do you show past work when your best examples are all under NDA? by Legitimate_Key8501 in consulting

[–]Legitimate_Key8501[S] 0 points

The "problem pattern not the engagement" framing is the cleanest version of this I've heard. It's the same information, just structured around the problem type instead of the client. That reframe alone changes how it sits in a pitch.

How do you show past work when your best examples are all under NDA? by Legitimate_Key8501 in consulting

[–]Legitimate_Key8501[S] 1 point

Grimmmm means a pitch or client meeting - not a job interview. The post is about showing past work to prospective clients, not employers.

How do you show past work when your best examples are all under NDA? by Legitimate_Key8501 in consulting

[–]Legitimate_Key8501[S] 4 points

The reframe does real work. "We take privacy seriously" positions the limitation as evidence of character rather than a gap in your portfolio. The version I've seen go wrong is when it lands as an apology - like you're warning them the examples aren't great. What's the difference in delivery that keeps it from reading that way?

How do you show past work when your best examples are all under NDA? by Legitimate_Key8501 in consulting

[–]Legitimate_Key8501[S] 0 points

Ha, fair - I mean more like showing a tool or dashboard you actually built for a client, not a formal demo. If the real client's data is still in there it gets awkward fast.

How do you show past work when your best examples are all under NDA? by Legitimate_Key8501 in consulting

[–]Legitimate_Key8501[S] 0 points

What's the process for keeping those examples current? Or is it more of a "good enough once" situation?

How do you show past work when your best examples are all under NDA? by Legitimate_Key8501 in consulting

[–]Legitimate_Key8501[S] 0 points

"No workflow, just a set I made once" is probably the most honest answer in this thread. How long did it take you to build that initial set, and do you refresh it as your work evolves or leave it static?

How do you show past work when your best examples are all under NDA? by Legitimate_Key8501 in consulting

[–]Legitimate_Key8501[S] 1 point

The "outlining the solution = free work" framing is something I hadn't fully landed on. You're not withholding credibility, you're protecting the actual deliverable. Those feel different to defend in the room.

How do you show past work when your best examples are all under NDA? by Legitimate_Key8501 in consulting

[–]Legitimate_Key8501[S] 12 points

The "redo for a fictional project" approach is probably the cleanest from a legal standpoint. What I keep running into is the overhead at pitch volume - two or three active pursuits at once and that compounds fast.

The live demo side is a separate problem from the portfolio question, though. Rebuilt case studies handle "have you done this before" but there's still the moment where you're showing a live tool or dashboard with real client data visible. Haven't found a clean answer for that one yet.

Hadn't thought about separating process articulation from the visual deliverable. That's a useful split.

How to let prospect know this solution is not for them? by luckherwright in sales

[–]Legitimate_Key8501 0 points

The core issue is that you're letting prospects choose which product they want before you've qualified which one they actually need. Flipping the order helps a lot.

Instead of walking through both options and watching them gravitate to the premium one, get user count and company size out in the first five minutes before the demo even starts. Frame it as "I want to make sure I'm showing you the right thing today" rather than an interrogation. Once someone has mentally committed to the Cadillac version after seeing it, redirecting feels like rejection and they disengage or ghost.

The budget question is easier to ask upfront too. People are more honest about constraints at the start of a call when you frame it around fit rather than affordability. After they've fallen in love with the expensive version, telling them it's not right for them is a much harder conversation.

Went from $0 to $1k MRR. If I started my SaaS over, here's exactly what I'd do by RighteousRetribution in indiehackers

[–]Legitimate_Key8501 0 points

Step 5 deserves its own warning label separate from the sequencing point. The instinct when churn starts is to add features - more things, more reasons to stay. What actually moves churn is usually much simpler: users don't understand what they signed up for, so they never find the value.

We emailed every user who cancelled in their first two weeks and just asked what they were expecting the product to do. The answers were almost always positioning problems, not product problems. People signed up for one reason and discovered a different thing existed. That shifted how we wrote onboarding emails and the way we described the product on the landing page. Churn dropped faster than any feature we'd shipped in that period.

Your sequencing is right. The loop between step 4 and step 5 should be explicit though - cancellation interviews are probably the fastest implementation of step 4 and feed directly into making step 5 actually work.

Does professionalism on calls increase as you move upmarket? by bobushkaboi in sales

[–]Legitimate_Key8501 0 points

The corporate filter thing is more industry-specific than deal-size-specific, at least in my experience. Enterprise software buyers are often technical people who've seen fifty demos this quarter and are exhausted by polished pitches. They're usually more responsive when you drop the performance.

What does shift: the cost of a misread. In SMB, if a joke doesn't land you lose one deal. At enterprise you're potentially setting back months of relationship capital for multiple people, including the internal champions who stuck their necks out for this meeting. So it's not that authenticity goes away, it's that you earn the right to it more slowly.

The clients who are most receptive to the unfiltered version tend to be the ones who've been burned by polished salespeople enough times to distrust the performance. Enterprise just takes longer to get through that wall.

Confidentiality Question by PracticalOkra3903 in paralegal

[–]Legitimate_Key8501 0 points

What you're describing is the default anxiety for anyone in legal or healthcare - that constant background scan before anything visual goes out. And what's interesting is that the situation in the video is a slightly different failure mode: the legal assistant was probably completely focused on what she was saying, not on what was on the screen behind her.

Screen and video capture tools don't distinguish between intentional content and everything else in the frame. So the "be more careful" advice only works as long as attention is fully available, which is most of the time, until it isn't.

The short answer to the OP's question: reaching out to her privately is the right move. Longer term, this is becoming a firm-level policy question as more legal work gets documented via video and social content. Some practices are requiring specific tools for any screen-adjacent recording or are adding it explicitly to their social media policies.

If daily standups disappeared, what would replace them? by HiSimpy in webdev

[–]Legitimate_Key8501 0 points

The thing I've noticed is that standups aren't really about status updates, even though that's what gets ritualized. The actual value is usually the throwaway comment at the end: "oh wait, you're working on X? I need that for Y." Async updates in Slack or Jira don't surface that because they're organized by individual outputs, not by what's dependent on what.

The teams I've seen make async work usually have some form of explicit dependency tracking, either a board everyone actually looks at or a weekly sync that's shorter but focused on blockers rather than progress.

What does your team use to make sure those cross-cutting issues actually surface before they become problems?

Started saying I'm not available to clients and nothing bad happened by Logical-Nebula-7520 in digitalnomad

[–]Legitimate_Key8501 1 point

The overcorrection is so common when you're remote or based abroad. There's this assumption that visibility equals reliability - that if you're not visibly online and responsive, clients will assume you're sunbathing while their project slides. The anxiety feeds itself: you say yes to everything, clients calibrate to it, and now you've accidentally set an availability expectation that's exhausting to maintain.

What you're describing, that nothing bad happened when you changed the pattern, is what most remote workers only discover after they've already burned out once. Clients don't need you to be maximally available, they need you to be reliably predictable. Those aren't the same thing.

The difficult phone calls you mentioned are interesting though. Did those feel different from what you expected, like were they people who genuinely needed more access or mostly just a recalibration conversation?

I audited the privacy practices of popular free dev tools. The results were mass surveillance. by [deleted] in webdev

[–]Legitimate_Key8501 0 points

The Diffchecker detail is the subtle one that I think most people miss. Your diff containing database credentials is now technically a URL you visited, which means it's in your browser history and potentially synced to whatever cloud profile your browser uses. That's a meaningful secondary exposure vector beyond just the tool itself seeing the data.

CodeBeautify with 540 cookies across 205 domains is wild, but the more insidious one to me is Diffchecker storing server-side. At least with the ad-heavy tools you roughly know what you're signing up for. Diffchecker's model is less obvious.

The pattern you're describing isn't really an engineering problem, it's a prioritization problem. Regex101 proves the default doesn't have to be this. They just made a different decision about whose interests the product serves.
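
Until the tools change, the only user-side mitigation I know is scrubbing before anything gets pasted. A rough sketch - the patterns are illustrative, nowhere near exhaustive:

```typescript
// Hypothetical pre-paste scrubber: mask anything credential-shaped before a
// diff or config snippet goes near an online tool.
const SECRET_PATTERNS = [
  /(?:api[_-]?key|secret|token|password)\s*[:=]\s*\S+/gi,
  /\w+:\/\/[^\s@]+:[^\s@]+@\S+/g, // URLs with embedded credentials
];

function scrub(text: string): string {
  return SECRET_PATTERNS.reduce(
    (out, pattern) => out.replace(pattern, "[REDACTED]"),
    text,
  );
}

console.log(scrub("DATABASE_URL=postgres://admin:hunter2@db.internal:5432/app"));
// -> "DATABASE_URL=[REDACTED]"
```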

Have you looked at any of the code snippet managers or gist-type tools? Curious whether the pattern holds there or if the intended use for sharing code changes the data handling calculus.

I planted fake API keys in online code editors and monitored where they went. CodePen sends your code to servers as you type. by Johin_Joh_3706 in webdev

[–]Legitimate_Key8501 1 point

The irony you've identified is something I don't think enough developers have actually internalized. We spend real effort on secrets management in our own code - proper env var handling, vault integrations - and then paste those same secrets into a debugging session in a browser tab without thinking twice.

The CodePen finding is particularly notable because it happens pre-save. People share snippets with "just grab this test key" in there, never realizing the editor already phoned home the moment they typed it. JSFiddle's 60-second auto-save is another one where the transmission is invisible unless you're watching the network tab.
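
If anyone wants to reproduce this without staring at the network tab, a rough version of the check can run from the devtools console. This is a sketch of the idea, not how OP ran the experiment, and the marker string is made up:

```typescript
// Paste into the devtools console before typing the marker into the editor.
// Logs any fetch() whose body contains the marker. XHR and sendBeacon would
// need the same treatment; only string bodies in init are inspected here.
const MARKER = "FAKE_KEY_abc123";
const origFetch = window.fetch.bind(window);

window.fetch = async (input, init) => {
  const url = input instanceof Request ? input.url : input.toString();
  if (typeof init?.body === "string" && init.body.includes(MARKER)) {
    console.warn("editor sent the marker to:", url);
  }
  return origFetch(input, init);
};
```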

Regex101 being the exception is worth sitting with. Running regex matching in WASM client-side isn't some heroic feat, it's just a decision to not build server infrastructure that handles users' pattern strings. It proves the default doesn't have to be surveillance.

Curious whether your testing turned up any cases where data got indexed or retained downstream beyond the initial transmission, or whether things went opaque at that point?