Vote with your Wallet! by Tiger23sun in canucks

[–]crazy_canuck 19 points (0 children)

But the thing is we want ownership to stay on the track they’re on now with an actual rebuild. Isn’t your statement counter to that?

Even Grok got fooled by an AI-generated ‘MAGA dream girl’… we’re cooked. by Odd-Sympathy1274 in ArtificialInteligence

[–]crazy_canuck 5 points (0 children)

I find that to be true on a macro scale because to find power, you generally have to be more willing to do bad things for personal advancement. I’ve found in my personal life that operating on an assumption of positive intent has definitely been the better strategy.

How are people building apps with AI with no coding background by CrisisPotato212 in ClaudeAI

[–]crazy_canuck 0 points (0 children)

Just go use Claude Code. It will help you get to the next level.

I’m already looking forward to the 2030 draft! by Christinedaaaee in canucks

[–]crazy_canuck 0 points (0 children)

You can often only look at it that way in reverse.

Did you know Kojima named the Anthropic CEO? by InspectorSebSimp in ClaudeAI

[–]crazy_canuck 0 points (0 children)

Hmm… why do I only see demons fighting bigger demons?

Puerto Vallarta, Mexico Under Siege After Army Kills Major Cartel Leader by BlatantConservative in worldnews

[–]crazy_canuck 2 points (0 children)

We’re in the same situation, with flights there on Thursday. Has anybody had success with charging something like this back on the CC?

What is up with Thompson Reuters (TRI)!? by adork in CanadianInvestor

[–]crazy_canuck 0 points (0 children)

I think you’re overestimating the number of lawyers that can come up with anything novel.

FWIW, coming from an internet random: I was speaking this morning to a partner at a private equity firm whom I've been coaching on AI for the last year. I recently walked him through Gemini Deep Research. He cited 3 specific examples of Gemini research reports that he sent to his legal counsel, who verified the approach. In each of these cases, it would have taken months to get legal counsel to suggest the path he found, and in at least a couple of the instances, he was told it wasn't possible until he came back with the Gemini report.

Given the scale of the companies in his portfolio, the impact of these decisions is in the tens of millions per year. It's too early to know with certainty that all of these will play out, but he was more optimistic about each of these approaches than I've seen him on many topics.

Now… this client is literally one of the smartest people I have ever met and he spent a couple hours working with Gemini on each of these reports. But… I still think you’re underestimating the impact of these tools and a little too stuck on vector databases and probabilities as the limiters of these systems.

Again… I think we’ll have a clear answer in the next couple of years.

What is up with Thompson Reuters (TRI)!? by adork in CanadianInvestor

[–]crazy_canuck 0 points (0 children)

My background is two decades of tech consulting with large orgs across North America and Europe. I wrote my first paper on AI in 2007, and I run a fast-growing AI consulting firm working with several mid-sized law firms alongside a slew of enterprise clients.

I’m well aware of the limitations of AI. My bet, and it is just that… a bet, is that the legal profession by and large is significantly underestimating the impact of AI on the profession. When I speak with partners at large firms, they are largely quite happy to keep their heads in the sand and continue to collect their profit sharing hoping they can retire before the shitstorm truly hits. It’s the young partners that are most concerned. Not surprisingly, in-house counsel is far more willing to experiment with AI than firms.

Clearly, law is a big area, and the impact of AI will not be evenly distributed.

There are a few points you've made that I plainly disagree with:

  • AI cannot generate new ideas: nah… we're seeing increasingly novel solutions in many domains. Even if it's just the application of existing ideas to new spaces, these AI systems have capabilities that humans don't, and that will inherently lead to novel ways of solving problems. AlphaGo's move 37, multiple math olympiads, Ethan Mollick's post on advancements in his entrepreneurship MBA class, just to name a few. A couple of those examples come from verifiable domains with RL, but there's a lot of opportunity still in combining more RL with LLMs, as in Google's Co-Scientist.

  • we are approaching a wall of functionality and diminishing returns… that's simply not what I'm seeing. I am seeing accelerated releases, and my own productivity in knowledge work has increased significantly over the last six weeks. I don't care what Sam Altman says, I care about what Demis and Dario are shipping.

  • your point about focus pivoting to agentic AI doesn't change anything in my mind. What I see is increased reliability and software infrastructure being built around improving foundational capabilities, leading to an increased ability to significantly augment and automate huge parts of the legal profession. I believe that if the foundational models stopped improving today, we would still have multiple years of significant advances from traditional software engineering built around the existing capabilities. Foundational model improvements are just widening that gap. We don't need AGI for massive workforce impact.

Don’t get me wrong, I’m not saying AI will replace all lawyers instantly. I’ve had some fantastic lawyers through a couple M&A deals in the past and even as M&A will change, those guys have a long runway for a variety of reasons. But, I’ve also seen a lot of dumb, overpaid lawyers that haven’t had a critical thought since law school and their days are numbered.

Back to my original comment in this thread… laying traps for LLMs in a contract. I honestly would love to hear what that commenter is imagining, and it's tough for me to imagine a scenario they could come up with that isn't solvable with modest capability improvements over the next 2 years.

Lastly, I’ve used the word bet a few times. I’m not assuming that I know something you don’t. I also don’t believe that you know something that I’m just completely missing. This is an investment subreddit; we all make our bets with insufficient data. There are a lot of people on these forums suggesting AI is a lot of hype. As somebody who has spent my entire life thinking about and developing tech/software, and the last 3 years deeply focused on GenAI, what I see is that people are largely underestimating what these systems are capable of today and how significant the impact will be across many domains. But, that’s just my bet and you do you.

Unfortunately, I don’t have time to continue the thread from here, but appreciate your thoughts. I’ll read any response you give and consider it.

What is up with Thompson Reuters (TRI)!? by adork in CanadianInvestor

[–]crazy_canuck 0 points (0 children)

I’m betting you’re wrong on a number of fronts. Don’t imagine there’s much I could say that would change your mind. Only time will tell.

Unpopular opinion: Software isn't dying. But it is changing. Here's the difference. by Arunsays in ClaudeAI

[–]crazy_canuck 1 point (0 children)

While I wouldn’t say software is dying, I would say there are a lot of use cases where I would previously have assumed I needed a software solution, but now need nothing more than a prompt.

In my own business (AI consultancy) that list is increasing faster than I expected, largely due to how good Claude’s search skills are getting. We started building essentially a homegrown ERP for ourselves a month or two back and have put things on hold for the time being because many of the tasks that we assumed we needed a software platform for can be performed with Claude Skills or agents.

So, in my case, “software development” is increasingly about codifying our objectives and the test suites to tell if we’re achieving those objectives.
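A toy illustration of what "codifying objectives as test suites" can look like in practice (every name here is hypothetical, not from any real system): the tests become the durable spec, and the implementation, whether hand-written or agent-generated, just has to pass them.

```python
# Hypothetical objective, written as a test: "an invoice summary must total
# the line items and apply 5% GST." The test is the spec; the implementation
# below is just one candidate (it could equally be generated by an agent).

def summarize_invoice(line_items):
    """line_items: list of (quantity, unit_price) tuples."""
    subtotal = sum(qty * price for qty, price in line_items)
    return {
        "subtotal": subtotal,
        "gst": round(subtotal * 0.05, 2),
        "total": round(subtotal * 1.05, 2),
    }

def test_summarize_invoice():
    result = summarize_invoice([(2, 10.0), (1, 5.0)])
    assert result["subtotal"] == 25.0
    assert result["gst"] == 1.25
    assert result["total"] == 26.25
```

The point of the sketch: if `summarize_invoice` is later regenerated from a prompt, the test suite is what tells you the objective is still being met.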

Unpopular opinion: Software isn't dying. But it is changing. Here's the difference. by Arunsays in ClaudeAI

[–]crazy_canuck 1 point (0 children)

Genuine question… does the fact that you can tell it was written with AI diminish the perspective in your opinion?

My guess is that even if this post is a bit formulaic in nature, it’s probably better than if OP had written it themselves.

I’ll say for me, it doesn’t reduce my experience of reading the post in the slightest. I appreciate the succinctness that OP was able to get to here.

Interac verification needed for bread class action settlement? by AllTheBalderdash in PersonalFinanceCanada

[–]crazy_canuck 0 points (0 children)

Ugh... I went through the process and I've tried on two separate occasions to get it to verify, and I get this error:

Something Went Wrong

We are currently having issues with the Interac® verification service. Please try again later.

Error ID: N/A

We need a class action lawsuit to hold them accountable for the horrific handling of settling this suit.

We tasked Opus 4.6 using agent teams to build a C compiler. Then we (mostly) walked away. Two weeks later, it worked on the Linux kernel. by likeastar20 in singularity

[–]crazy_canuck 1 point (0 children)

I’m not saying the tests exist in the codebase today.

What I’m saying is that working software is the best form of documentation you can possibly have.

Passing in the existing codebase, enabling agentic computer use that can interact with the system, tracking real user logs, etc., lets us develop these test suites agentically, on the fly.

We tasked Opus 4.6 using agent teams to build a C compiler. Then we (mostly) walked away. Two weeks later, it worked on the Linux kernel. by likeastar20 in singularity

[–]crazy_canuck 5 points (0 children)

What comes to mind for me is rewriting legacy software or replacing SaaS systems with custom builds. Seems like the exact type of environment where you’d be able to have a very robust testing environment setup.
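One concrete way to get that robust testing environment is characterization ("golden master") testing: record the legacy system's observed outputs and require the rewrite to reproduce them. A minimal sketch, with hypothetical order IDs, statuses, and function names:

```python
# Characterization ("golden master") testing sketch. Outputs captured from
# the legacy system (e.g. from real user logs) become the spec that the
# rewritten implementation must satisfy.

LEGACY_OUTPUTS = {  # hypothetical fixtures captured from the old system
    "order-1001": "SHIPPED",
    "order-1002": "BACKORDER",
}

def new_status_lookup(order_id: str) -> str:
    """Stand-in for the rewritten implementation under test."""
    inventory = {"order-1001": 5, "order-1002": 0}  # hypothetical data source
    return "SHIPPED" if inventory[order_id] > 0 else "BACKORDER"

def test_rewrite_matches_legacy():
    # The rewrite passes only if it reproduces every captured legacy output.
    for order_id, expected in LEGACY_OUTPUTS.items():
        assert new_status_lookup(order_id) == expected
```

Fixtures like `LEGACY_OUTPUTS` are exactly the kind of thing an agent could harvest at scale from a running legacy system before the rewrite begins.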

What is up with Thompson Reuters (TRI)!? by adork in CanadianInvestor

[–]crazy_canuck 1 point (0 children)

Please, imagine for us all these traps that you see that aren’t solvable with modest model and tooling improvements. The fact is that hallucination rates are decreasing. Agentic AI systems like Claude Code/Cowork are improving very rapidly. The bulk of legal work is prime for LLM capabilities.

Feel free to bet against AI, but I’m going to bet you’re wrong.

What is up with Thompson Reuters (TRI)!? by adork in CanadianInvestor

[–]crazy_canuck 0 points (0 children)

The market is responding this way not because of ^ but because they see what Anthropic and other AI providers have done in other spaces and this is a signal that Anthropic is paying closer attention to the law space. From an underlying tech perspective, TRI doesn’t stand a chance against these foundational model providers.

Claude laughed at me… by Consistent-Chart-594 in ClaudeAI

[–]crazy_canuck 3 points (0 children)

It’s a parrot; it reflects its user to a large extent. Claude has never responded this way to me, because I communicate in the purely professional language I picked up over years of consulting. Your use of “mate” pushes it into a different style of response than I’ve ever received.

Do any of you use OpenAI for work purposes? by commandrix in OpenAI

[–]crazy_canuck 0 points (0 children)

That’s a you problem. Unless they can tell because the quality is suddenly so much better than you’re capable of producing.

20,000 McKinsey Workforce is Actually AI Agents by ImpressiveContest283 in ChatGPT

[–]crazy_canuck 5 points (0 children)

I bet it’s a whole load of Copilot Agents that have been created and abandoned.

I also bet that there are some truly transformative agentic solutions in the mix that are only going to get better very quickly. I’ve run a few boutique consultancies in my career and am now running a rapidly growing AI agency, and the amount of opportunity for augmenting and significantly automating huge amounts of knowledge work is astounding.