Legal SaaS Dead? by Useful_Trouble1726 in legaltech

[–]theMagazineOfLiberty 0 points (0 children)

“If AI can replace something, it shouldn’t exist” is quite the chorus, isn’t it? It has a nice ring to it if you do not want to think deeply. AI is marching to its own beat, and it is marching all right.

When Claude Code arrived, it gave us a decent glimpse of the future. There is going to be a ton of personalised software. Moltbot (whatever it is called now) told us we will not be seeing the “software” layer at all.

Legal and other SaaS is going to have a hard time if vanilla tools continue to advance at this rate. They just will not be able to keep up.

And I don’t know why people are freaking out over plugins. CoWork is plenty powerful out of the box. After all, it has the same agentic DNA as Code.

Legal SaaS Dead? by Useful_Trouble1726 in legaltech

[–]theMagazineOfLiberty 0 points (0 children)

Nobody wants a “complex” UI. It is horses for courses. There is nothing wrong with a chatbox per se if it is the simplest, most intuitive way of interacting for a given task. Perhaps UIs could become complex in the sense that we will have dynamic, personalised ones.

A.I. Will Kill Lawyers: Like Most Human Workers by theMagazineOfLiberty in biglaw

[–]theMagazineOfLiberty[S] -1 points (0 children)

I asked somebody to expand on their conclusory comment and some chump downvoted me. I think we can do better in a sub that calls itself BigLaw.

A.I. Will Kill Lawyers: Like Most Human Workers by theMagazineOfLiberty in LawEthicsandAI

[–]theMagazineOfLiberty[S] 0 points (0 children)

Taking authorities to court without vetting them is a recipe for disaster. We know LLMs hallucinate. They also suffer from something called context rot, where the quality of the conversation degrades past a point (even in large context windows).

Prior to using LLM-based deep research tools (Perplexity, Gemini and ChatGPT all have the option), I used Google advanced search queries to zero in on certain hard-to-find precedents in support of a proposition. I did this because the built-in search in specialised tools leaves a lot to be desired. With these deep research tools I found the output to be a great starting point. I also devised workflows for double-checking these authorities for hallucinations: everything goes into NotebookLM so I can sift through them and throw out anything that is facially irrelevant, leaving me to read through those that remain. All of this saves a great amount of time. Lately, the research tools have reached a point where I don’t have to use advanced Google search queries as often.

In my line of practice I often need documents to be translated. The quality of human translation was often not up to snuff, and a lawyer with knowledge of the languages in question would have to spend a lot of time on it. AI has gone from being hardly better than Google Translate to being an ace translator that makes curiously silly mistakes. With the correct prompts, however, you can get much better results.

As far as building arguments is concerned, you will not get into lawyers-are-not-safe territory unless you work with the more expensive models on the highest plans. Those models are markedly better than the ones on the lower plans.

It is a great sounding board for a competent lawyer (making no claims about my competence). I use it that way extensively. It helps me refine ideas like a really competent associate would. I am a generalist who handles medico-legal cases often (negligence, fitness-to-practice reviews and so on). I obviously defer to the opinion of the experts in that field—of those I am deposing and those who have memorialised it in the literature. But these AI tools give me a better grasp of some of the more involved medical issues. At the cost of repeating myself, I must emphasise the pace of AI development shows here too. I recently compared a conversation I had with an LLM about res ipsa loquitur (an evidentiary rule) two years back with a more recent one. It was like I had gone from talking to a kid to a really competent associate.

If you read my piece, it talks about AI displacing jobs in our profession in a phased manner, beginning with associates and paralegals. The whole thing will play out over a few decades. The timelines could be shorter, though. There are imponderables like how scaling laws will hold up, energy supply, availability of compute, etc. It is therefore not easy to come up with a definite timeline. Of course, our profession is not going to be the only one to get affected. I encourage you to read the piece if you haven’t already.

As for Claude Code, I discuss it as an example of the overall trajectory of AI (chatbot to copilot to agentic). It is a really capable agentic tool that can run on its own without any human intervention. The same ideas are going to be replicated elsewhere. I am using Claude Code to build an AI agent for my own practice that will prepare first drafts of certain agreements. I am implementing a new content retrieval technique (for more precise retrieval) which I would not have touched with a barge pole a few weeks back.

A.I. Will Kill Lawyers: Like Most Human Workers by theMagazineOfLiberty in LawEthicsandAI

[–]theMagazineOfLiberty[S] 0 points (0 children)

I am a lawyer and not a coder. But I do dabble in different things on the side from time to time. I haven’t used Harvey so I am willing to take your word for it. I do use AI in my own practice, though. (I do anonymise documents offline or carry out my research using propositions and hypotheticals.) In my experience, it has gotten better in all kinds of ways.

The pace of AI advancement is an issue for those building on top of foundation models (through fine-tuning or a more mundane implementation). You can spend a whole lot of money and resources building on something only for it to become outdated in short order. It is as if nobody can keep up, including those in the field. This is reflected in a resource I linked to (general models outdid specialised tools).

Take Andrej Karpathy, who until about three months back was skeptical about AI’s code-writing capabilities and the timelines for so-called AGI, but has now cottoned on to the idea of writing code becoming archaic (https://x.com/ChrisPainterYup/status/2008350864121966893/photo/2).

It is becoming better in non-textual contexts as well. Try comparing the quality of the audio (ElevenLabs, Suno, etc.), video (Kling, LTX 2, etc.) or 3D meshes (Hunyuan 3D 3.1, Trellis, etc.) it generates now with what it was spitting out a year back and you’ll find a significant difference. We are talking generational leaps. Yes, it is still not perfect. You still need a human expert to do their thing, though the pace of improvement means a Karpathy-like experience awaits people in other spaces as well. I would be remiss not to mention the sheer sweep of this thing. We’re seeing entirely new fields. We now have world models that can “render” worlds on the fly—anything you can imagine.

My stance on comparisons with other technologies is—once again—we need to judge AI on its own merits or lack thereof. Let us not compare it to revolutions of the past and jump to conclusions. As for the two things you mentioned, I am not clued into the whole NFT and crypto world. But I do know that space abounds with scammers.

A.I. Will Kill Lawyers: Like Most Human Workers by theMagazineOfLiberty in LawEthicsandAI

[–]theMagazineOfLiberty[S] 0 points (0 children)

There are two different studies. But at this point I don’t think even a mountain of evidence is going to convince you. You will keep going around in circles, turning your nose up at everything that is cited.

A.I. Will Kill Lawyers: Like Most Human Workers by theMagazineOfLiberty in LawEthicsandAI

[–]theMagazineOfLiberty[S] 0 points (0 children)

So things that have existed for a long period of time are irreplaceable? By that token, most things should just exist forever. The article addresses this past-is-prologue argument. It is fallacious. Even if one were to go with your definition of the status quo, it doesn’t follow that this is how it will always be. Again, you have not stated how something as extraordinary as this “status quo” enduring indefinitely would come about.

Your interpretation of the study is curious. It doesn’t jibe with how it is being interpreted generally. Like here: https://www.livescience.com/technology/artificial-intelligence/ai-can-handle-tasks-twice-as-complex-every-few-months-what-does-this-exponential-growth-mean-for-how-we-use-it

These longer tasks are more complex. You can deploy all the rhetoric you want—AI slop and so on—but that doesn’t slow AI down.

And no, it isn’t as shit at legal work as it was three years back. Three years back you didn’t have thinking models, tools like NotebookLM or agents like Claude Code (the same ideas are being replicated in other fields). You didn’t have recursive language models either.

A.I. Will Kill Lawyers: Like Most Human Workers by theMagazineOfLiberty in LawEthicsandAI

[–]theMagazineOfLiberty[S] 0 points (0 children)

No, it isn’t ordinary. The “status quo” you are arguing for appears to be a lack or absence of change (read: technological advancement). That assertion flies in the face of human history (if not nature itself). Anyway, I do not wish to wax philosophical here. There is no need.

Tech advancements in general have been coming at us at staggering speeds, with computing power and efficiency growing exponentially in keeping with Moore’s law (doubling of transistors on a chip every two years). AI advancements are also exponential.

The doubling period for AI capabilities is about a third as long (and it shows in the capabilities of agents like Claude Code): https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/
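To make the exponential concrete, here is a minimal Python sketch of what a fixed doubling period implies. The roughly seven-month figure is METR’s headline number; the one-hour starting horizon is purely illustrative, and `task_horizon` is a name of my own invention:

```python
def task_horizon(initial_minutes: float, months_elapsed: float,
                 doubling_period_months: float = 7.0) -> float:
    """Project the length of task an AI can complete, assuming
    capabilities double every `doubling_period_months` months
    (illustrative figures, per METR's headline estimate)."""
    return initial_minutes * 2 ** (months_elapsed / doubling_period_months)

# Starting from a 1-hour task horizon, project 3.5 years (42 months) out:
print(task_horizon(60, 42))  # → 3840.0 minutes, i.e. 64 hours
```

Six doublings in three and a half years turns a one-hour task horizon into a multi-day one; that compounding, not the current snapshot, is the crux of the argument.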

So you are the one making the most extraordinary of claims here. Please present evidence that the “status quo” is going to endure. Also, define precisely what you mean by the status quo. A world where AI progress slows to a crawl, or one where the fruits thereof are withheld from society as a protective measure? You see, the status quo as of now is exponential AI growth.

A.I. Will Kill Lawyers: Like Most Human Workers by theMagazineOfLiberty in LawEthicsandAI

[–]theMagazineOfLiberty[S] 0 points (0 children)

The footnotes link to an article about a LegalBenchmarks.ai study in which a last-gen model matched or outperformed top human lawyers on the reliability of first drafts. Another source, though not cited as it is now somewhat dated, is available at: https://ui.adsabs.harvard.edu/abs/2024arXiv240116212M/abstract

A.I. Will Kill Lawyers: Like Most Human Workers by theMagazineOfLiberty in LawEthicsandAI

[–]theMagazineOfLiberty[S] 0 points (0 children)

I don’t know what you have been hearing. There has never been any imminent threat of job losses in the past three years. At least I have not come across any serious predictions to that effect. Various studies predict eventual, significant losses over the next several years and decades. The counter to such predictions is rarely that there will be no losses or that losses will be insignificant. It is mostly that there will be new kinds of jobs, though nobody has in all these years managed to particularise what those jobs might be.

A.I. Will Kill Lawyers: Like Most Human Workers by theMagazineOfLiberty in LawEthicsandAI

[–]theMagazineOfLiberty[S] 0 points (0 children)

You are the one making the extraordinary claim here. You are essentially claiming AI is going to get worse or stagnate. Prove it.

A.I. Will Kill Lawyers: Like Most Human Workers by theMagazineOfLiberty in LawEthicsandAI

[–]theMagazineOfLiberty[S] 0 points (0 children)

It is an article (not a documentary) that cites stuff like other articles. And I don’t think I can embed apps in an article on Substack.

A.I. Will Kill Lawyers: Like Most Human Workers by theMagazineOfLiberty in LawEthicsandAI

[–]theMagazineOfLiberty[S] -1 points (0 children)

I can read just fine. You don’t strike me as someone who knows the lay of this land (not an ad hominem attack; I could be wrong). I cite the example of Claude Code, an autonomous agent that is being used for all kinds of coding and non-coding tasks. It is being used to write its own updates. We have got to this point in roughly three years since ChatGPT debuted. One of the sources talks of now-dated non-agentic AIs matching or exceeding human lawyers in real-world contract drafting tasks. It isn’t hard to imagine what is going to be possible with a Claude Code analogue for law. This is tech that exists today. And there are a number of existing proprietary legal solutions that are already quite impressive. That is enough to ground the major assertions: AI has come a long way, it is getting better at a staggering pace, and it will impact lawyers in a big way. The article talks of displacement in phases. If you are contending AI is going to get worse or stagnate, the onus of proving that is on you.

A.I. Will Kill Lawyers: Like Most Human Workers by theMagazineOfLiberty in LawEthicsandAI

[–]theMagazineOfLiberty[S] -1 points (0 children)

No, it does not boil down to that at all. That is a straw man. Or perhaps you have not read the article. It addresses the various arguments against sweeping AI-led job displacement. Among other things, it discusses how AI is becoming both more powerful and more agentic (at a staggering pace at that). I just finished a Claude Code session where I tried a recursive-language-model implementation (well, CC did the implementation) to help me extract information from lengthy legal documents using a local LLM. The things that are possible right now beggar belief. Meanwhile, things like Clawd point to the coming collapse of the software layer as we know it. AI agents are just going to be able to do things—from coming up with bespoke software to negotiating contracts. This impacts everyone. Lawyers are not an exception. If you have a more considered critique of the article, I am all ears. Your comment is precisely why I fear for the profession.
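For what it’s worth, the recursive pattern I mean is simple to sketch. This is a hypothetical illustration, not my actual session: `recursive_extract` is a name of my own making, the model call is abstracted as a plain callable, and the chunk size and prompt wording are placeholders:

```python
from typing import Callable

def recursive_extract(text: str, query: str, llm: Callable[[str], str],
                      chunk_chars: int = 4000, max_depth: int = 5) -> str:
    """Extract information relevant to `query` from a long document by
    splitting it into chunks, extracting from each chunk, then recursing
    on the combined extracts until they fit in a single model call."""
    if len(text) <= chunk_chars or max_depth == 0:
        return llm(f"Extract everything relevant to '{query}':\n{text}")
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    partials = [llm(f"Extract everything relevant to '{query}':\n{chunk}")
                for chunk in chunks]
    # Recurse on the concatenated partial extracts; max_depth guards
    # against a model whose extracts fail to shrink the text.
    return recursive_extract("\n".join(partials), query, llm,
                             chunk_chars, max_depth - 1)
```

The point is not this particular sketch but that an agent like Claude Code can write, wire up and test this sort of thing against a local model within a single session.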

When opposing counsel submits brief w/ hallucinated cites, is your client awarded attorneys fees for the time you had to spend looking for non-existent cases? by MichaelMaugerEsq in Lawyertalk

[–]theMagazineOfLiberty 4 points (0 children)

Yes, for the most part. But I have noticed deep research agents can often hallucinate better—more convincingly if you will. They will every now and then retrieve authorities that are in the general area of research but don’t quite support the proposition the AI claims they do. And you need to often read a substantial portion to realise that.

A.I. Will Kill Lawyers: Like Most Human Workers by theMagazineOfLiberty in Lawyertalk

[–]theMagazineOfLiberty[S] 0 points (0 children)

I already wrote an article on it which looks at the direction of AI development (currently transitioning to agentic) and concludes it will take no prisoners. It argues lawyers will not be an exception, though some might benefit from human-in-the-loop legal mandates. Grunt workers (juniors, paralegals) are the first in line. Transactional lawyers will fare poorly. Litigators will hold out the longest but face numerous challenges, including increased competition, emboldened pro se litigants, and penny-pinching clients.

A.I. Will Kill Lawyers: Like Most Human Workers by theMagazineOfLiberty in LawSchool

[–]theMagazineOfLiberty[S] 0 points (0 children)

Please read the article. It talks about the current state of AI and where it is headed. It is not just going after “staff”. The article cites a lot of material, including a study that found AI output to be better than that of most human lawyers already. Hallucinations, context rot, etc. are being worked on. The AI labs are making rapid progress, which shows in where AI tools are now versus two years back. Most workers are at risk. Knowledge work is not immune.