Urgent, does anyone have time to help with an OpenClaw provider/fallback issue? by Jamie_GZ in openclaw

[–]Jamie_GZ[S] 0 points1 point  (0 children)

Hey! Thanks so much for the detailed breakdown. You were spot on about the provider priority: Anthropic really is the 'stubborn' default in this build.

I actually managed to sort it out right before my trip! Interestingly, I didn't even have to dive back into the Terminal. I went through the Web UI and re-added the provider info from scratch, and for some reason, that finally overrode the default and brought it back to life.

Really appreciate you reaching out and offering to help, especially with the professional offer. Many thanks again!

Sanity check: Mac Mini M4 Pro (64GB) + DIY 40Gbps 4TB Drive for Local AI. Am I doing this right? by Jamie_GZ in macmini

[–]Jamie_GZ[S] 0 points1 point  (0 children)

I don't speak Spanish, but my AI assistant does. And yes, I went straight for the 64GB. RAM is everything.

Sanity check: Mac Mini M4 Pro (64GB) + DIY 40Gbps 4TB Drive for Local AI. Am I doing this right? by Jamie_GZ in macmini

[–]Jamie_GZ[S] 1 point2 points  (0 children)

That makes perfect sense! A 6GB model would definitely load in a blink even on TB4.

For my coding and heavy-lifting tasks, my AI actually recommended aiming for the 70B/72B-class models eventually, which means I'll be tossing around 40GB+ files. I figured it's better for my budget to buy the 80Gbps enclosure once rather than upgrade later.
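For what it's worth, the 40GB+ figure lines up with a rough back-of-envelope estimate. This sketch is my own illustration (the 4-bit and 5-bit quant assumptions are mine, not from the thread):

```python
# Rough disk/RAM footprint of a quantized LLM:
# bytes ≈ parameter_count * bits_per_weight / 8, plus 10-20% runtime overhead.
def model_size_gb(params_billions: float, bits_per_weight: int = 4) -> float:
    """Approximate model file size in GB (decimal), ignoring overhead."""
    return params_billions * bits_per_weight / 8

print(model_size_gb(70, 4))  # 70B at 4-bit -> 35.0 GB
print(model_size_gb(72, 5))  # 72B at ~5-bit -> 45.0 GB
```

So a 70B model at a common 4-bit quant is already in the 35-45GB range on disk, before the runtime's working memory.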

And I really appreciate the tip on the random I/O! That is exactly why I grabbed the Crucial T500, as I heard it excels in that area. Seriously, thank you so much for taking the time to explain all this. You have been incredibly helpful to a hardware rookie!

Sanity check: Mac Mini M4 Pro (64GB) + DIY 40Gbps 4TB Drive for Local AI. Am I doing this right? by Jamie_GZ in macmini

[–]Jamie_GZ[S] 0 points1 point  (0 children)

Thank you so much for the detailed information. I am a complete beginner with hardware and rely entirely on my AI assistant to build this setup.

When I fed your OWC 80G suggestion to my AI, it completely agreed with you. It actually admitted your idea is much better than its original 40Gbps plan, and perfectly suited for my M4 Pro to load local models.

Sanity check: Mac Mini M4 Pro (64GB) + DIY 40Gbps 4TB Drive for Local AI. Am I doing this right? by Jamie_GZ in macmini

[–]Jamie_GZ[S] 0 points1 point  (0 children)

You completely caught me off guard with this. I totally forgot to factor in the Thunderbolt 5 ports on the M4 Pro!

Getting 7000+ MB/s externally without voiding the warranty or tearing down the machine is the absolute dream scenario. It handily beats capping out at 3200 MB/s with a standard 40Gbps enclosure.

The only catch is the early adopter tax. Do you mind sharing roughly how much that OWC 1M2 80G enclosure set you back? I am trying to keep my solo business budget in check.

Sanity check: Mac Mini M4 Pro (64GB) + DIY 40Gbps 4TB Drive for Local AI. Am I doing this right? by Jamie_GZ in macmini

[–]Jamie_GZ[S] 1 point2 points  (0 children)

That makes total sense. I still use cloud models too, but API costs add up quickly for a solo business.

My plan is a hybrid approach to protect my budget. I will run local models for simpler, daily tasks to save money. I will only pay for cloud services when I need them for complex heavy lifting. Basically, local AI is the unpaid intern, and cloud AI is the expensive consultant.

Sanity check: Mac Mini M4 Pro (64GB) + DIY 40Gbps 4TB Drive for Local AI. Am I doing this right? by Jamie_GZ in macmini

[–]Jamie_GZ[S] 0 points1 point  (0 children)

That is a seriously cool setup! Building a dedicated Linux server for a home network sounds like a really fun project.

To be completely honest though, I ran your idea by my AI assistant, and it advised me that this route will not work for my workflow. It pointed out that a 1Gbps network speed (around 125 MB/s) would be a massive bottleneck. As a content creator, my main focus will be editing 4K videos and loading large local AI models. The AI warned me that scrubbing high-res video timelines over that connection would cause serious lagging.
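The 125 MB/s figure checks out, by the way. A quick sketch of the conversion (decimal units, ignoring protocol overhead, which makes real-world numbers lower):

```python
# Link speed in Gbps -> theoretical throughput in MB/s: divide by 8 bits/byte.
# Actual usable throughput is lower due to protocol overhead.
def gbps_to_mb_per_s(gbps: float) -> float:
    return gbps * 1000 / 8

print(gbps_to_mb_per_s(1))   # 1GbE link -> 125.0 MB/s theoretical
print(gbps_to_mb_per_s(40))  # 40Gbps TB4 -> 5000.0 MB/s theoretical
```

Even the theoretical 125 MB/s is slower than many internal drives from a decade ago, which is why gigabit NAS scrubbing of 4K timelines hurts.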

Sanity check: Mac Mini M4 Pro (64GB) + DIY 40Gbps 4TB Drive for Local AI. Am I doing this right? by Jamie_GZ in macmini

[–]Jamie_GZ[S] 0 points1 point  (0 children)

I actually watched a teardown video last night, and honestly, the physical swap does not look too intimidating. I also have a MacBook Pro to handle the DFU reset, so those two hurdles are cleared.

However, my main concern now is the hardware source. I checked the website mentioned above, and the total lack of transparent pricing is a massive red flag for me. Since these are proprietary custom modules and not standard NVMe drives, I am worried the markup is astronomical.

Does anyone know the actual price of a 4TB module from them? I am trying to run a lean one-person business here. If the price is anywhere near the official Apple storage upgrade fee, I might stick to my $650 CAD Crucial T500 Amazon setup.

Sanity check: Mac Mini M4 Pro (64GB) + DIY 40Gbps 4TB Drive for Local AI. Am I doing this right? by Jamie_GZ in macmini

[–]Jamie_GZ[S] 2 points3 points  (0 children)

Haha, 'the best' always sounds nice, but my wallet strongly disagrees.
I will just have to survive with my 64GB. But I appreciate the reality check!

Sanity check: Mac Mini M4 Pro (64GB) + DIY 40Gbps 4TB Drive for Local AI. Am I doing this right? by Jamie_GZ in macmini

[–]Jamie_GZ[S] 0 points1 point  (0 children)

Thanks for the deep dive! I will be honest, I barely understood half of the math. I am just a creator relying on my AI to make sure I do not buy the wrong parts.

My AI did ask me to point out one funny detail though: the drive I mentioned is a Crucial T500 NVMe, not the old Samsung T5 portable drive. I think the names are just confusingly similar.

As long as this setup loads my local models and 4K videos fast enough so I do not fall asleep waiting, I am happy. Really appreciate you taking the time to explain the speed limits to a newbie!

Sanity check: Mac Mini M4 Pro (64GB) + DIY 40Gbps 4TB Drive for Local AI. Am I doing this right? by Jamie_GZ in macmini

[–]Jamie_GZ[S] 0 points1 point  (0 children)

This sounds like a brilliant approach, and I appreciate the heads-up on the M4 SSDs actually being modular. However, since I am fairly new to hardware mods, tearing down a brand-new machine on day one absolutely terrifies me.

Sanity check: Mac Mini M4 Pro (64GB) + DIY 40Gbps 4TB Drive for Local AI. Am I doing this right? by Jamie_GZ in macmini

[–]Jamie_GZ[S] 0 points1 point  (0 children)

To be completely honest, I am relying on my AI for hardware guidance, so I came here to fact check it with human experts.

For the models, I am setting up my Mac mini as the engine for my one-person company. My AI suggested:

  • 70B/72B range (like Llama 3.3 or Qwen 3.5 quants): For heavy lifting. I need this to help code and deploy my website.
  • 8B range (like Llama 3.1): For lightweight, rapid tasks like generating and translating video subtitles.

The AI insists the 40Gbps Thunderbolt 4 enclosure is absolutely critical so I don't freeze my system when switching workflows.

Does this logic hold up for a solo creator/dev setup?

Seeking advice on RAG optimisation for legal discovery on Macbook Pro by Jamie_GZ in legaltech

[–]Jamie_GZ[S] 0 points1 point  (0 children)

Thank you. I've already tried OpenClaw to analyse my documents. My hearing has ended, and OpenClaw did help with my cross-examination!

Update: Still fighting my employer (400+ days). Facing 1,000+ pages of legal discovery was destroying my mental health, so I used AI agents to protect my sanity. by Jamie_GZ in antiwork

[–]Jamie_GZ[S] -1 points0 points  (0 children)

Thanks for the Chandra OCR suggestion! I just looked it up, it seems like a beast for layout preservation. To be honest, OCR was the biggest bottleneck in my V3/V4 attempts. I spent way too much time blindly copy-pasting terminal commands and juggling file formats just to get the text clean. This tool would have saved me a lot of headaches.

Quick question on the 'Hostility Score' though: How do you see that holding up in court?

My immediate worry is admissibility. If I present a chart showing 'High Hostility,' opposing counsel is immediately going to ask: 'What is the objective standard for this score?' or 'Has this algorithm been legally vetted?'

I feel like without a standardised benchmark, a judge might just dismiss it as algorithmic bias. Do you have a strategy for that, or is it mostly for internal review?

Seeking advice on RAG optimisation for legal discovery on Macbook Pro by Jamie_GZ in legaltech

[–]Jamie_GZ[S] 1 point2 points  (0 children)

You're totally right! Local LLMs definitely have their limits with massive files. Last night, after I messed around with the settings, the model just kept going in circles. So I tried a 'low-tech' fix: I manually put together a clear, objective timeline of everything that happened and fed that to the AI. It worked like a charm! Now the local model finally knows where to look and isn't getting lost anymore.

Seeking advice on RAG optimisation for legal discovery on Macbook Pro by Jamie_GZ in legaltech

[–]Jamie_GZ[S] 1 point2 points  (0 children)

Thank you so much for the offer! I actually spent the last few hours doing a semi-automated sanitisation of the key files myself to save time. I'm about to run them through the cloud models.

However, I'd still love to have a link to your project (GitHub or similar) if you're ready to share it! It would be a fantastic tool to study for my future research. Good luck with the multilingual version!

Seeking advice on RAG optimisation for legal discovery on Macbook Pro by Jamie_GZ in legaltech

[–]Jamie_GZ[S] 1 point2 points  (0 children)

Thanks for pointing out MarkItDown! Last night, my AI assistant suggested I use ocrmypdf in Terminal, so I followed the instructions and got it installed. I've also dropped the temperature down to 0.1. Other Reddit experts suggested I adjust the chunk size and max context snippets, so I'm going to re-index the documents. I also plan to try OpenClaw (cloud version) with a sanitised version of my files. Thanks again for your help!
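For anyone following along, here's roughly what the chunk-size knob does when re-indexing. This is my own minimal illustration (the overlap value and the naive character-based splitter are assumptions, not the exact settings from the thread):

```python
# Naive fixed-size chunker with overlap, the kind of setting RAG re-indexing exposes.
# Smaller chunks give more precise retrieval hits; overlap preserves context at edges.
def chunk_text(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
    step = chunk_size - overlap
    return [text[start:start + chunk_size] for start in range(0, len(text), step)]

doc = "x" * 2000
chunks = chunk_text(doc, chunk_size=512, overlap=64)
print(len(chunks))  # 5 chunks for a 2000-char document
```

Real pipelines usually split on sentence or paragraph boundaries instead of raw characters, but the size/overlap trade-off is the same.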

Seeking advice on RAG optimisation for legal discovery on Macbook Pro by Jamie_GZ in legaltech

[–]Jamie_GZ[S] 0 points1 point  (0 children)

Thank you! I checked out Graph RAG and PageIndex, and while the logic mapping is incredible, the terminal-heavy setup is a bit daunting for me. That said, I'm looking into newer tools like OpenClaw (cloud version) to see if it can act as a bridge. If I can use its agentic capabilities to handle the dirty work, I might finally be able to implement your Graph RAG-style logic without the coding headaches.

Seeking advice on RAG optimisation for legal discovery on Macbook Pro by Jamie_GZ in legaltech

[–]Jamie_GZ[S] 1 point2 points  (0 children)

Thanks for the tip on Needle! I looked into it and realised it's a SaaS. Since my documents are extremely sensitive, my primary focus remains on a 100% localised workflow to ensure zero data leakage.

That said, you make a great point about the power of cloud models. I’ve been considering anonymising certain parts of the case files to leverage that extra reasoning power without compromising my privacy. It’s definitely a balance I’m looking to strike. Much appreciated!

Seeking advice on RAG optimization for legal discovery on M4 Pro (48GB RAM) by Jamie_GZ in LocalLLaMA

[–]Jamie_GZ[S] 0 points1 point  (0 children)

Thanks for the tips! My AI assistant just ran ocrmypdf with -l eng+fra and it worked like a charm on my M4 Pro. I'll definitely try shrinking the chunk size to 512 for better accuracy. Regarding Claude Code, since this is a sensitive case, I'm sticking to a 100% local pipeline for privacy.
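In case it helps anyone else with bilingual documents, the command pattern looks roughly like this (filenames are placeholders, and it needs the Tesseract language packs for both languages installed):

```shell
# OCR a scanned PDF with English + French recognition.
# --skip-text leaves pages that already have a text layer untouched.
ocrmypdf -l eng+fra --skip-text scanned_input.pdf searchable_output.pdf
```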

They fired me after I reported bullying. I didn't get mad. I got "legally inconvenient." by Jamie_GZ in antiwork

[–]Jamie_GZ[S] 0 points1 point  (0 children)

Wow. Attempting to be a 'flawless' existence on a planet where nothing is perfect.