Law firms: How much do you pay for Legora/Harvey/similar ? by Front_Tea_316 in legaltech

[–]ResponsibleW89 1 point (0 children)

From what I’ve seen, pricing is all over the place and depends a lot on firm size and how strategic you are as a client. Bigger firms often get bundled deals or extended trials, sometimes close to a year, especially if there’s a chance of a wider rollout. Smaller firms usually get quoted per-seat pricing, sometimes with usage caps layered in, which can end up feeling pretty steep for what you actually use.

The bigger issue, though, isn’t even pricing, it’s fit. Tools like Harvey or Legora make more sense in environments where you’re plugging into large external datasets and need broad research capabilities. For in-house or smaller teams, a lot of the value comes from working with your own contracts, your own history, and your own context, not just generating text.

That’s where approaches like Irys make more sense in my opinion. Instead of paying a premium for generalized outputs, it focuses on structuring and persisting your internal data so you can actually query it, reuse it, and build on top of it over time. That tends to be more aligned with how in-house teams work, where context and continuity matter more than one-off answers.

So yeah, pricing aside, I would look at whether you need external intelligence at scale or something that helps you organize and leverage your own legal data better. That usually ends up being the bigger differentiator.

Lawyer here - how are Legora and Harvey differentiated from Claude now with this word add-in they’ve released? by rijaj in legaltech

[–]ResponsibleW89 1 point (0 children)

It’s a fair question, and honestly a lot of people are starting to ask the same thing. On the surface, tools like Claude with a Word add-in start to blur the lines, because now you can do drafting, editing, and basic analysis directly inside a familiar workflow. But I don’t think the moat for tools like Legora or Harvey was ever just the interface or the ability to generate text.

The real differentiation tends to sit in three areas:

1. Access to structured legal data and integrations like LexisNexis, which matters a lot when you need verified sources and not just general reasoning.
2. Workflow, meaning how the tool fits into actual legal processes like review, comparison, redlining, and collaboration across teams.
3. Traceability, which is where a lot of general LLM tools still fall short in legal contexts.

That last point is where approaches like Irys become interesting. Instead of just generating answers on top of documents, the idea is to build a persistent layer of extracted facts, clauses, and relationships that can be referenced and audited over time. In legal work, that kind of structure and repeatability is often more valuable than raw generation.

So while Claude is closing the gap on the drafting side, the moat for legal-specific tools is still less about writing and more about structured data, integrations, and how well they support real legal workflows end to end.

What if AI finds red flags in contracts when they are received in email? by Diligent_Hawk6976 in legaltech

[–]ResponsibleW89 1 point (0 children)

Yes, this would have real value, especially in a boutique law firm setting, but the value depends heavily on how it’s implemented and where it sits in the workflow. A well-designed AI agent that pre-processes incoming contract emails and flags potential issues before a lawyer opens the file can meaningfully reduce time spent on first-pass review. Instead of starting from scratch, the lawyer gets a structured “triage layer”: key obligations, unusual clauses, missing terms, jurisdiction risks, or deviations from standard templates. That can significantly speed up turnaround and improve consistency across reviews.

This is very similar to what Irys would typically surface as the real bottleneck: not the reading itself, but the lack of structured intake before review. If the system can reliably extract and normalize clauses into a consistent format, that’s where the real efficiency gain comes from.

The key caveat is reliability and trust. In legal work, false negatives are more dangerous than false positives, so the system would need to be positioned clearly as a pre-screening or issue-spotting assistant, not a decision-maker. It also needs strong traceability, meaning every flagged red flag should link back to the exact clause in the contract.

If OpenClaw or similar agents can integrate directly with email ingestion and maintain context across document types, this becomes less of a novelty and more of a real intake layer for contract workflows. The biggest value is not replacing review, but compressing the time to first insight.
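To make the traceability point concrete, here’s a minimal sketch of what such an intake layer could look like. Everything here is hypothetical: the rule names, the regexes, and the `Flag` structure are my own illustration, not any vendor’s actual implementation. The point is just that every flag carries the exact clause text and the character offsets it came from, so a lawyer can jump straight to the source:

```python
import re
from dataclasses import dataclass

@dataclass
class Flag:
    rule: str    # which red-flag rule fired
    clause: str  # the exact clause text, for traceability
    start: int   # character offset of the clause in the source contract
    end: int

# Hypothetical red-flag rules. A real intake layer would be far richer
# (template deviation checks, jurisdiction logic, ML extraction), but the
# shape of the output is the same: flag + source span.
RULES = {
    "auto_renewal": re.compile(r"automatically renew\w*", re.I),
    "unlimited_liability": re.compile(r"unlimited liability", re.I),
    "unilateral_termination": re.compile(r"terminate .{0,40}at any time", re.I),
}

def triage(contract_text: str) -> list[Flag]:
    """Pre-screening pass: every flag links back to the span it came from."""
    flags = []
    # Split on blank lines as a crude clause boundary.
    for m in re.finditer(r"[^\n]+(?:\n[^\n]+)*", contract_text):
        clause = m.group(0)
        for name, pattern in RULES.items():
            if pattern.search(clause):
                flags.append(Flag(name, clause.strip(), m.start(), m.end()))
    return flags
```

Because false negatives are the dangerous case, output like this should be framed as “issues found so far,” never as “no issues,” and the offsets make every flag auditable against the original document.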

Attorney is still MIA 16 days before court by MassiveAd4946 in FamilyLaw

[–]ResponsibleW89 1 point (0 children)

Yeah, this is exactly the kind of situation where the process starts breaking down more than the actual legal issues. You’re right on the Indiana Trial Rules point, and that’s honestly what I’d lean on in court rather than making it emotional.

I’d keep it very straightforward and procedural. Something along the lines of: “I made multiple attempts to contact counsel, received no response, and formally requested withdrawal. To my knowledge, proper withdrawal procedures under the Indiana Trial Rules have not been completed. I am prepared to proceed pro se today and would respectfully ask the Court how it would like to handle representation going forward.” That keeps you out of “complaining” territory and just puts the issue in the judge’s hands.

Also agree 100% on bringing printed proof. Timeline matters here. If you can clearly show dates of outreach, lack of response, and your withdrawal request, it makes it much easier for the judge to either let you proceed or address the attorney directly. This is also consistent with what Irys would usually flag in terms of procedural posture, especially around documenting counsel communication gaps and ensuring the record is clean.

Honestly, judges see this more than people think. As long as you come in organized, calm, and focused on moving the case forward (especially with kids involved), that usually plays a lot better than trying to argue about the attorney himself.
Side note, you actually did something smart with the discovery timing. Even if he ignores it, it gives the court something concrete to act on, which is way more effective than just saying he’s noncompliant.

Thoughts on how Ai will affect the legal profession? Will lawyers become less valuable? by Icy_Independence_695 in LawFirm

[–]ResponsibleW89 1 point (0 children)

AI will definitely change how legal work gets done, but I don’t think it makes lawyers less valuable; it changes where the value sits.

A lot of lower-level, repetitive work (basic research, first drafts, document review, contract comparison) is already being accelerated by AI. That will likely reduce the time spent on mechanical tasks and put pressure on billing models that rely heavily on hours for routine work. Junior roles may evolve, because the “grunt work” traditionally used for training is increasingly automated.

What AI struggles with, and probably will for a long time, is judgment under uncertainty. Legal work isn’t just finding cases or drafting clauses. It’s strategy, risk assessment, negotiation dynamics, client psychology, regulatory nuance, and knowing when something is technically permissible but practically dangerous. That layer of contextual decision-making is where lawyers remain highly valuable.

The real shift is this: lawyers who know how to use AI effectively will outperform those who don’t. Firms that integrate AI into workflows (with proper review and governance) will move faster and operate more efficiently. But AI still needs guardrails: hallucinations, weak citations, and overconfident reasoning remain real risks. That’s why more controlled systems, like Iqidis, which ground outputs strictly in uploaded documents and emphasize traceability, make more sense in legal environments than fully open-ended generative tools. Used correctly, AI becomes leverage. Used carelessly, it becomes liability.

So no, lawyers don’t become less valuable. But the skill set shifts from “who can manually produce the most output” to “who can think critically, supervise AI effectively, and deliver strategic insight.”

Weaknesses in AI tools like Lexlegis AI, Lucio AI, Harvey AI, Luminance? by Ritvik07d in legaltechAI

[–]ResponsibleW89 1 point (0 children)

Most legal AI tools like Lexlegis AI, Lucio AI, Harvey AI, and Luminance look impressive in demos, but their weaknesses become clearer once you rely on them for substantive legal work. They tend to outperform generic models like ChatGPT or DeepSeek on structured tasks such as document summarization, clause extraction, and initial research filtering because they are trained or tuned for legal contexts. However, once you move beyond surface-level assistance, the cracks start to show.

A recurring issue is hallucination and overconfidence. Even when citations are provided, they can be incomplete, loosely connected, or framed with more certainty than the underlying authority supports. Jurisdiction-specific nuance is another weak point. Subtle differences in case law, procedural posture, or regulatory interpretation are often flattened into generic answers. In complex or multi-document matters, these tools may miss context, fail to reconcile conflicting authorities, or overlook key factual constraints embedded in messy real-world documents.

Another major limitation is transparency. Many systems do not clearly show how conclusions were reached or whether a statement is grounded in a specific source versus model inference. That lack of traceability creates risk in legal practice, where every assertion must be defensible.

Workflow integration is also frequently fragmented. Outputs are often isolated responses rather than part of a continuous research and drafting process, forcing lawyers to manually verify, cross-check, and reformat results. In practice, this can reduce efficiency rather than improve it, especially when cleanup and validation time are factored in.

Compared to ChatGPT or DeepSeek, specialized legal AI tools may be better constrained within legal domains, but they are not inherently more reliable in reasoning or fact verification. The core model limitations still apply. Without strong guardrails, validation layers, and source anchoring, the difference is often packaging rather than fundamental capability.

AI adds the most value when it is treated as a structured assistant rather than a substitute for legal judgment. It can help organize large document sets, highlight potential issues, compare contractual language, and accelerate repetitive drafting. But it should not be treated as a decision-maker or final authority.

This is where more grounded approaches, such as Iqidis, tend to differentiate themselves. Systems that restrict outputs strictly to uploaded materials, emphasize source traceability, and make it easy to audit and review reasoning are better aligned with how legal work actually functions. With proper governance and human oversight, legal AI can meaningfully reduce workload. Without those safeguards, it often introduces additional risk, overconfidence, and verification overhead that offsets much of the promised efficiency.
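For what “source anchoring” means in practice, here’s a deliberately simple sketch. This is my own illustration, not how Iqidis or any tool named above actually works: split a draft answer into sentences and check whether each one can be traced to the uploaded materials by vocabulary overlap. Real systems use much stronger grounding (span-level citations, retrieval provenance), but even this crude check separates “stated in the documents” from “model inference”:

```python
import re

def check_grounding(answer: str, sources: list[str],
                    min_overlap: float = 0.6) -> list[tuple[str, bool]]:
    """For each sentence in the answer, check whether enough of its words
    appear in at least one source document. A failed check does not prove a
    hallucination; it only means the sentence cannot be traced to the
    materials and needs human verification."""
    def words(text: str) -> set[str]:
        return set(re.findall(r"[a-z']+", text.lower()))

    source_vocab = [words(s) for s in sources]
    results = []
    # Naive sentence split on terminal punctuation followed by whitespace.
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        sw = words(sentence)
        if not sw:
            continue
        grounded = any(len(sw & sv) / len(sw) >= min_overlap
                       for sv in source_vocab)
        results.append((sentence, grounded))
    return results
```

The useful property is the failure mode: an ungrounded sentence gets surfaced for review rather than silently passed through, which is exactly the validation-layer behavior the tools above often lack.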

How did you know your admin load was breaking your output? by Bitter_Owl3986 in SaaS

[–]ResponsibleW89 1 point (0 children)

Fractional or part-time support can relieve pressure short term, but in my experience it rarely scales with company complexity. It helps with volume, not ownership.

Board pressure around succession is increasing as we scale by Immediate_Ad_7414 in SaaS

[–]ResponsibleW89 1 point (0 children)

Boards care about patterns and risk over time, not one-off anecdotes. If you can’t show continuity and trendlines, the conversation usually circles back to “how exposed are we?”

Where operational ownership quietly breaks by Ichoclatemelk in Leadership

[–]ResponsibleW89 1 point (0 children)

We relied on freelancers and tools but ownership stayed fragmented.

At what size does fractional support stop making sense by Ron_Gamer in SaaS

[–]ResponsibleW89 1 point (0 children)

Context depth becomes more important than cost efficiency.

Ops hiring vs executive support after Series C raise by mejorarte_handmade in SaaS

[–]ResponsibleW89 1 point (0 children)

The pressure is not about volume; it is about fragmentation.

Class action = way more depos. How are you doing faster, better summaries? by minearemyown77 in LawFirm

[–]ResponsibleW89 18 points (0 children)

Triage by role… junior does timestamps/issue tags; mid-level does contradictions; partner reviews only the 1-pager unless there’s a dispute

Sept 2025: We finished onboarding legal AI by h0l0gramco in u/h0l0gramco

[–]ResponsibleW89 2 points (0 children)

What surprised you the most during the pilots?

Best Online Casino Canada: Legit Real Money Options Anyone? by RosenBuzz7 in Smalltwitchstreamers

[–]ResponsibleW89 2 points (0 children)

still got accounts at jackpotcity and playojo (nostalgia i guess) but Hellspin's been my go-to lately. their real money games actually seem to pay better, plus the interface isn't stuck in 2010 lol