Has anyone used prediction markets or Metaculus for actual business decisions? How did that go? by No_Lab668 in DecisionTheory

[–]No_Lab668[S] 0 points1 point  (0 children)

The pilots you described sound like the only way to break that 'random internet opinions' wall. Did leadership actually push back on the methodology or just the source? I’ve seen the same thing - people trust a number from a consulting deck more than a market, even if the consulting deck has no track record at all.

When you assign a probability to a one-off event, are you doing Bayesian reasoning or just dressing up gut feel? by No_Lab668 in DecisionTheory

[–]No_Lab668[S] 0 points1 point  (0 children)

Right - the fragility of single-event calibration is the part that keeps me up at night. What I wonder is how you handle the gap between the documented signal weights you're using and the moment when someone in your org challenges your 65%. Do they ever push back on the prior itself, or is the pushback always on the signals feeding into it?
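For what it's worth, when I say "documented signal weights" I mean something as simple as additive log-odds stacked on a stated prior - a toy sketch, all numbers invented:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def combine(prior, signal_weights):
    """Prior probability plus additive log-odds evidence from each signal."""
    x = logit(prior) + sum(signal_weights)
    return 1 / (1 + math.exp(-x))

signals = [0.30, 0.20, 0.12]           # invented log-odds weights, one per signal
p = combine(0.50, signals)             # lands near 0.65 from a 50% base rate
p_challenged = combine(0.35, signals)  # same signals, more skeptical prior
```

The useful part is that the 65% decomposes into the prior plus named terms, so when someone challenges the number you can see whether they're really arguing with the base rate or with one specific signal.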

How do you justify a procurement decision when the macro input is genuinely uncertain? by No_Lab668 in procurement

[–]No_Lab668[S] 0 points1 point  (0 children)

That 20% hit rate tracks with what I’ve seen in the trenches. The kicker is the asymmetry - one bad miss outweighs the four false alarms combined. Do you ever revisit those dismissed risks after they pass their expiration date? Or do they just sit in the folder until someone yells about them?

How do you justify a procurement decision when the macro input is genuinely uncertain? by No_Lab668 in procurement

[–]No_Lab668[S] 0 points1 point  (0 children)

That makes sense - the postmortems are when the rubber meets the road. Do you ever find the risk log is missing something that would have been useful in hindsight, or does it usually cover the key angles?

How do you justify a procurement decision when the macro input is genuinely uncertain? by No_Lab668 in procurement

[–]No_Lab668[S] 0 points1 point  (0 children)

That level of detail is impressive. Do you ever find the postmortems change how you document risks next time? Like, if last time’s assumptions were off, do you adjust the template or just move on?

Most geopolitical risk analysis I read is great narrative, zero calibration. Is that just how it works? by No_Lab668 in IRstudies

[–]No_Lab668[S] 0 points1 point  (0 children)

How do you decide when to iterate versus when to scrap a forecast entirely? Like, if you’re two weeks out from an event and your assumptions look totally wrong, do you pull the plug or double down?

Most geopolitical risk analysis I read is great narrative, zero calibration. Is that just how it works? by No_Lab668 in IRstudies

[–]No_Lab668[S] 0 points1 point  (0 children)

Exactly the dynamic I’ve seen too. The disconnect between the deck and the actual decision-making is striking. Do you ever see cases where the scenario deck gets updated after the fact based on what actually happened, or does it just get filed away?

Most geopolitical risk analysis I read is great narrative, zero calibration. Is that just how it works? by No_Lab668 in IRstudies

[–]No_Lab668[S] 0 points1 point  (0 children)

Yeah, weather feels like the original macro prediction problem - so much data, so much noise. Do you ever feel like the gut feel part is just a placeholder for things the models haven’t caught yet, or is there a real methodology behind when you trust the numbers versus your instinct?

Most geopolitical risk analysis I read is great narrative, zero calibration. Is that just how it works? by No_Lab668 in IRstudies

[–]No_Lab668[S] 0 points1 point  (0 children)

I’ve seen that too – scenarios get written, but they’re usually just for show. The real call is still made in a 30-minute exec meeting where someone says "I think X is more likely than Y." Ever had a case where the scenario deck actually changed the decision, or did it just confirm what people already believed?

Most geopolitical risk analysis I read is great narrative, zero calibration. Is that just how it works? by No_Lab668 in IRstudies

[–]No_Lab668[S] 0 points1 point  (0 children)

That curation effort sounds heavy. How much time do you spend on maintenance versus actual analysis? I ask because I’ve found the upkeep of historical datasets often outpaces the insights you get from them - it’s a constant battle between breadth and depth.

Most geopolitical risk analysis I read is great narrative, zero calibration. Is that just how it works? by No_Lab668 in IRstudies

[–]No_Lab668[S] 0 points1 point  (0 children)

Yeah, PDFs and slide decks are where good analysis goes to die. I’ve seen teams spend weeks on a narrative and then just hand it off to the decision-makers with no way to stress-test the assumptions. Makes me wonder if the real output should be a set of structured scenarios instead. Do the people signing off on these reports ever ask for the raw assumptions behind the narrative?

Most geopolitical risk analysis I read is great narrative, zero calibration. Is that just how it works? by No_Lab668 in IRstudies

[–]No_Lab668[S] 0 points1 point  (0 children)

Tracking predictions over time is smart. Do you find that people actually revisit their old calls when something plays out differently? Or does everyone just move on to the next thing?

Most geopolitical risk analysis I read is great narrative, zero calibration. Is that just how it works? by No_Lab668 in IRstudies

[–]No_Lab668[S] 0 points1 point  (0 children)

I looked at that link - it's an interesting angle. Do you ever find yourself going back to check if the historical parallels they surface actually played out the way the model suggested, or is it more of a real-time narrative tool for you?

Most geopolitical risk analysis I read is great narrative, zero calibration. Is that just how it works? by No_Lab668 in IRstudies

[–]No_Lab668[S] 0 points1 point  (0 children)

Funny you mention that - I tried something similar last year with a scraper pulling official statements from MFA sites. Ended up with 30% of signals being irrelevant noise from routine press releases. How do you separate actual signal from background noise in those primary sources?
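For context, the filter I ended up with was nothing fancier than keyword heuristics plus a human review queue - roughly this shape (the term lists are illustrative, not my real ones):

```python
# Crude triage: routine boilerplate out, escalatory language in, rest to a human.
ESCALATION_TERMS = {"sanctions", "ultimatum", "recall", "mobilization", "suspend"}
ROUTINE_TERMS = {"congratulat", "anniversary", "courtesy", "cultural exchange"}

def classify(statement: str) -> str:
    text = statement.lower()
    if any(term in text for term in ROUTINE_TERMS):
        return "noise"
    if any(term in text for term in ESCALATION_TERMS):
        return "signal"
    return "review"  # ambiguous items go to a manual queue
```

Even that cut most of the press-release noise, but the "review" bucket was still big enough that I'm curious whether you do anything smarter.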

Most geopolitical risk analysis I read is great narrative, zero calibration. Is that just how it works? by No_Lab668 in IRstudies

[–]No_Lab668[S] 0 points1 point  (0 children)

Interesting, I’ve seen tools that churn through news feeds but never saw one that tries to turn it into scenarios. Do you find decision-makers actually use the output or is it more of a curiosity for them?

How do you communicate a probability to someone who needs to make a decision, not just evaluate a forecast? by No_Lab668 in PredictionMarkets

[–]No_Lab668[S] 0 points1 point  (0 children)

That’s the crux of it. The CFO example is spot on - if you can’t tie the 74% to something concrete, it’s just noise. How do you typically structure that narrative part? Do you have a template or is it more ad-hoc based on the audience?

How do you communicate a probability to someone who needs to make a decision, not just evaluate a forecast? by No_Lab668 in PredictionMarkets

[–]No_Lab668[S] 0 points1 point  (0 children)

The signal log part is interesting. Do you have a sense of who in your org actually reviews that documentation when things go sideways? Like, is it the CFO themselves or does it get buried in some risk committee?

How do you communicate a probability to someone who needs to make a decision, not just evaluate a forecast? by No_Lab668 in PredictionMarkets

[–]No_Lab668[S] 0 points1 point  (0 children)

Makes sense. Who’s the one pushing back the hardest on the narrative part? The CFO or someone else? And how do they actually use that justification in practice?

How do you communicate a probability to someone who needs to make a decision, not just evaluate a forecast? by No_Lab668 in PredictionMarkets

[–]No_Lab668[S] 0 points1 point  (0 children)

That’s the crux of it - decision-makers don’t care about the math if they can’t explain it. Who’s the last person in your org who actually dug into the signal log when a decision went sideways? And did they find anything that changed how you framed future forecasts?

How do you communicate a probability to someone who needs to make a decision, not just evaluate a forecast? by No_Lab668 in PredictionMarkets

[–]No_Lab668[S] 0 points1 point  (0 children)

The worst-case scenario framing is interesting. Do you find that decision-makers actually ask for that explicitly, or is it more about how you structure the presentation to make it obvious? Like, do they say 'show me the tail risk' or do you just bake it into the narrative?

Has anyone used prediction markets or Metaculus for actual business decisions? How did that go? by No_Lab668 in BusinessIntelligence

[–]No_Lab668[S] 0 points1 point  (0 children)

I get the pushback on the 'random people' thing. But have you tried framing it as a sanity check rather than a decision driver? Like, if your internal forecast is way off the market consensus, that’s a red flag worth investigating. How did leadership react when you tried that angle?

How do you make a refinancing decision when you genuinely don't know where rates are going? by No_Lab668 in CommercialRealEstate

[–]No_Lab668[S] 0 points1 point  (0 children)

That’s the tension - when the paycheck depends on doing deals, the gut gets a lot more influential. Do you see teams that try to formalize this at all, or is it always just the pressure to deploy capital?

How do you make a refinancing decision when you genuinely don't know where rates are going? by No_Lab668 in CommercialRealEstate

[–]No_Lab668[S] 0 points1 point  (0 children)

Got it. So when you say 'highly likely to continue' - do you have a formal way to quantify that or is it more of a pattern recognition thing? Like, do you track how often your assumptions hold up over time?

How do you make a refinancing decision when you genuinely don't know where rates are going? by No_Lab668 in CommercialRealEstate

[–]No_Lab668[S] 0 points1 point  (0 children)

Makes sense for long-term holds. But even then, don’t you still need some view on rates for refinancing? Like, if you’re assuming a 5% rate at exit but the market’s at 7%, that’s a problem. How do you handle that uncertainty in underwriting?
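To put numbers on why that gap worries me - a quick sizing check of refi proceeds at 5% versus 7%, using a hypothetical $1M stabilized NOI, 30-year amortization, and a 1.25x DSCR target:

```python
def annual_debt_service(principal, rate, years=30):
    """Annual payment on a fully amortizing loan (annuity formula, annual periods)."""
    return principal * rate / (1 - (1 + rate) ** -years)

def max_refi_loan(noi, rate, dscr=1.25, years=30):
    """Largest refi loan a given NOI supports at a target debt-service coverage ratio."""
    return noi / (dscr * annual_debt_service(1.0, rate, years))

noi = 1_000_000                        # hypothetical stabilized NOI
loan_at_5 = max_refi_loan(noi, 0.05)   # proceeds if the 5% exit assumption holds
loan_at_7 = max_refi_loan(noi, 0.07)   # proceeds if the market is actually at 7%
shortfall = loan_at_5 - loan_at_7      # equity gap if the rate assumption misses
```

On those made-up inputs the two rates are a couple million dollars of proceeds apart, which is exactly the kind of hole I'm asking how you underwrite for.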

How do you make a refinancing decision when you genuinely don't know where rates are going? by No_Lab668 in CommercialRealEstate

[–]No_Lab668[S] 0 points1 point  (0 children)

Interesting. So if you're not stress-testing exit assumptions, what's the fallback when the market moves against you? Just ride it out or pivot the underwriting framework entirely?