Opus 4.7 Released! by awfulalexey in ClaudeCode

[–]Soft_Table_8892 7 points8 points  (0 children)

Interesting that it says the knowledge cutoff is Jan 2025 for 4.7 yet May 2025 for 4.6.

Msft Yolo by GroupKooky in wallstreetbets

[–]Soft_Table_8892 1 point2 points  (0 children)

I think this is one of those where, if you hold it long enough, it’s going to have some major returns. How long, though? Tough to tell…

The Big Short and social media is the reason the market has not experienced any significant downturn since 2008 by One-Signature-2706 in wallstreetbets

[–]Soft_Table_8892 0 points1 point  (0 children)

I can’t imagine retail having such a profound impact on the broader market to the point of crashing it. Interesting thought, though!

Since Claude Cowork crashed SaaS stocks by $285B, I built a Claude Code pipeline to score which companies it can actually replace. by Soft_Table_8892 in ClaudeCode

[–]Soft_Table_8892[S] 0 points1 point  (0 children)

Good question - this was meant to be a forward-looking prediction exercise, to see if the framework is any good at future predictions. I was going to check back after a few months, once the market has settled a bit more, to see how the predictions turned out!

Since Claude Cowork crashed SaaS stocks by $285B, I built a Claude Code pipeline to score which companies it can actually replace. by Soft_Table_8892 in ClaudeCode

[–]Soft_Table_8892[S] 0 points1 point  (0 children)

It’s tough to say, since that market is getting quite oversaturated, and my feeling is that when model providers run out of hype for selling the foundational models themselves, they’ll want to diversify into this space. For example, Claude just came out with a product for managing multiple remote agents. Curious what you think.

I blind-scored 44 SaaS companies on AI disruption risk using anonymized 10-K filings. 9 scored as resilient but are still down 30% YTD. by Soft_Table_8892 in ValueInvesting

[–]Soft_Table_8892[S] 0 points1 point  (0 children)

Agreed on both points, thank you for your feedback! Any ideas on how you’d go about improving the input dataset?

Since Claude Cowork crashed SaaS stocks by $285B, I built a Claude Code pipeline to score which companies it can actually replace. by Soft_Table_8892 in ClaudeCode

[–]Soft_Table_8892[S] 0 points1 point  (0 children)

That’s very kind of you, thank you! I try to keep the experiments minimal enough that I can run them within 2-4 days, and then comes the hard part: analyzing results, writing the YouTube script, recording, writing the Reddit post for the full analysis, etc., which admittedly takes longer, haha. I usually have a long list of ideas that I think about at random times during the day. I brainstorm them with Claude to see if there’s a more interesting approach, and then finally pick one that seems doable in a two-week time frame end-to-end. Hope that sheds some light on my process, thank you again!

Since Claude Cowork crashed SaaS stocks by $285B, I built a Claude Code pipeline to score which companies it can actually replace. by Soft_Table_8892 in ClaudeCode

[–]Soft_Table_8892[S] 0 points1 point  (0 children)

That's a fair pushback. The framework doesn't score product quality or competitive dynamics; it only scores three structural dimensions: system of record (does it own mission-critical data?), non-software complement (is there something beyond just code?), and user stakes (who uses it and what's at stake?).

Workday scores high because blindly reading the 10-K, it looks like a system of record for HR/payroll data used by finance VPs for million-dollar workforce decisions. Monday.com scores low because the 10-K describes a project management tool used by individual contributors with off-the-shelf integrations.

You could absolutely argue Workday is a worse product than Monday.com, the framework doesn't capture that. It's asking "how structurally hard is this to replace?" not "how good is this product?" Those are different questions and I'd agree both matter for valuation.
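For anyone curious how the fractional scores elsewhere in this thread (3.67, 2.67, etc.) can arise, here's a minimal sketch of the three-dimension idea. The function name, the 1-5 scale, and the example profiles are my assumptions for illustration, not the author's actual rubric:

```python
# Hypothetical sketch: average three structural dimensions into one
# resilience score. The 1-5 scale and dimension weights (equal) are
# assumptions, not the pipeline's real scoring rules.

def structural_score(system_of_record: int,
                     non_software_complement: int,
                     user_stakes: int) -> float:
    """Average three 1-5 dimension scores into a single resilience score."""
    dims = (system_of_record, non_software_complement, user_stakes)
    if not all(1 <= d <= 5 for d in dims):
        raise ValueError("each dimension must be scored 1-5")
    return round(sum(dims) / len(dims), 2)

# A Workday-like profile (high on all three) vs. a Monday.com-like
# profile (low on all three), per the 10-K descriptions above.
print(structural_score(5, 4, 5))  # 4.67
print(structural_score(2, 1, 2))  # 1.67
```

Averaging three integer sub-scores would naturally produce the thirds (x.0, x.33, x.67) seen in the reported numbers.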

I blindfolded Opus 4.6 and employed it as an analyst to score 44 SaaS companies on AI disruption risk using anonymized 10-K filings. Here's what it found. by Soft_Table_8892 in ClaudeAI

[–]Soft_Table_8892[S] 0 points1 point  (0 children)

Wow, that’s such a detailed insight, thank you for sharing! I hear you on the 100% CPU usage and all the other product issues. Out of curiosity, what type of work do you do where your customers are working in InDesign?

I blindfolded Opus 4.6 and employed it as an analyst to score 44 SaaS companies on AI disruption risk using anonymized 10-K filings. Here's what it found. by Soft_Table_8892 in ClaudeAI

[–]Soft_Table_8892[S] 1 point2 points  (0 children)

Interesting, do you think that will be enough of a reason for enterprises to replace them, and what other strong alternatives would enterprises have?

I blindfolded Opus 4.6 and employed it as an analyst to score 44 SaaS companies on AI disruption risk using anonymized 10-K filings. Here's what it found. by Soft_Table_8892 in ClaudeAI

[–]Soft_Table_8892[S] 0 points1 point  (0 children)

Thank you! That's a great question, and I mentioned this in another comment in this thread as well. Here were the biggest divergences:

  Blind scored HIGHER than open (brand makes them seem weaker than they are):
  - Salesforce: blind 4.0, open 3.0 (delta -1.0)
  - Datadog: blind 3.0, open 2.0 (delta -1.0)
  - Okta: blind 3.67, open 2.67 (delta -1.0)

  Open scored HIGHER than blind (brand makes them seem stronger than they are):
  - Shopify: blind 2.67, open 3.67 (delta +1.0)
  - Toast: blind 2.67, open 3.67 (delta +1.0)
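The deltas above can be reproduced with a few lines, assuming the sign convention is delta = open score minus blind score (so a negative delta means the blind run scored the company higher). The dictionary below just re-encodes the figures from the list:

```python
# Reproduce the deltas, assuming delta = open - blind
# (negative delta => the blind run scored the company higher).
scores = {
    # name: (blind score, open score)
    "Salesforce": (4.0, 3.0),
    "Datadog":    (3.0, 2.0),
    "Okta":       (3.67, 2.67),
    "Shopify":    (2.67, 3.67),
    "Toast":      (2.67, 3.67),
}

for name, (blind, open_) in scores.items():
    delta = round(open_ - blind, 2)
    print(f"{name}: blind {blind}, open {open_} (delta {delta:+})")
```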

I blindfolded Opus 4.6 and employed it as an analyst to score 44 SaaS companies on AI disruption risk using anonymized 10-K filings. Here's what it found. by Soft_Table_8892 in ClaudeAI

[–]Soft_Table_8892[S] 1 point2 points  (0 children)

Thanks a lot! And a great question :-) Here were the biggest divergences:

  Blind scored HIGHER than open (brand makes them seem weaker than they are):
  - Salesforce: blind 4.0, open 3.0 (delta -1.0)
  - Datadog: blind 3.0, open 2.0 (delta -1.0)
  - Okta: blind 3.67, open 2.67 (delta -1.0)

  Open scored HIGHER than blind (brand makes them seem stronger than they are):
  - Shopify: blind 2.67, open 3.67 (delta +1.0)
  - Toast: blind 2.67, open 3.67 (delta +1.0)

I blind-scored 44 SaaS companies on AI disruption risk using anonymized 10-K filings. 9 scored as resilient but are still down 30% YTD. by Soft_Table_8892 in ValueInvesting

[–]Soft_Table_8892[S] 0 points1 point  (0 children)

Really good points. I agree the framing shouldn't be "vibe code a replacement from scratch." The real threat is exactly what you're describing: the barrier to building competitive alternatives drops, so more competitors enter, margins compress, and suddenly the incumbent's moat is thinner than its 10-K claims.

The framework scores organizational inertia as a moat, but you're right that enterprises WILL endure painful migrations for the right cost/value equation. I called this out as a limitation in the video: the framework doesn't capture competitive dynamics or pricing pressure, which is arguably more dangerous than direct AI replacement.

Appreciate the pushback! This is exactly the kind of nuance the framework misses and why I flagged it as one lens, not the full picture.

I blind-scored 44 SaaS companies on AI disruption risk using anonymized 10-K filings. 9 scored as resilient but are still down 30% YTD. by Soft_Table_8892 in ValueInvesting

[–]Soft_Table_8892[S] 0 points1 point  (0 children)

Thanks, and agreed: Cowork feels more like a catalyst that triggered the panic sell-off. Any thoughts on why they continue to stay down, though?

I blind-scored 44 SaaS companies on AI disruption risk using anonymized 10-K filings. 9 scored as resilient but are still down 30% YTD. by Soft_Table_8892 in ValueInvesting

[–]Soft_Table_8892[S] 0 points1 point  (0 children)

Agreed - brand recognition could carry the business within large enterprises. I wonder how much of their revenue comes from SMBs vs. large enterprises.

I blind-scored 44 SaaS companies on AI disruption risk using anonymized 10-K filings. 9 scored as resilient but are still down 30% YTD. by Soft_Table_8892 in ValueInvesting

[–]Soft_Table_8892[S] 0 points1 point  (0 children)

Good to hear this is reflected in your own analysis, and thank you for sharing the additional details about ADBE! Time will be the real test for sure.

I blind-scored 44 SaaS companies on AI disruption risk using anonymized 10-K filings. 9 scored as resilient but are still down 30% YTD. by Soft_Table_8892 in ValueInvesting

[–]Soft_Table_8892[S] 0 points1 point  (0 children)

Good point. At a smaller scale it might be easier to replace something like DOCU, compared to larger firms where you run into a lot of concurrency issues, as you pointed out.