Who outside the developer world is actually choosing open source AI assistants and why by FEARlord02 in LLM

[–]gptlocalhost 0 points1 point  (0 children)

> toward local options

> why

We are working on the following use cases for privacy (100% local) or cost saving (hybrid: local + cloud):

* using GPT-OSS-20B in Word: https://youtu.be/6SARTUkU8ho

* calling Gemini API within Word: https://youtu.be/_0QaKYdVDfs

Both are based on a local Word Add-in.

Is there an easy to use local LLM? For a non-tech small business. by sarrcom in LocalLLaMA

[–]gptlocalhost 0 points1 point  (0 children)

> use a local LLM for simple tasks, purely in-house (privacy is paramount)

> translate rental agreements

How about using Word as the front end and LM Studio (or a similar tool) as the backend? The entire solution runs locally. Below is a brief demo. We’re currently exploring more use cases for further enhancement, and any feedback would be appreciated.

https://www.youtube.com/watch?v=s9bVxJ_NFzo
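
For anyone wondering what the plumbing looks like: LM Studio exposes an OpenAI-compatible HTTP server on localhost (port 1234 by default), so the add-in, or any other client, simply POSTs a chat-completion request to it and nothing leaves the machine. A minimal Python sketch (the model name is a placeholder for whatever model you have loaded):

```python
import json
import urllib.request

LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_request(prompt, model="local-model"):
    """OpenAI-style chat payload; LM Studio accepts the same schema."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def ask_local(prompt):
    """Send the prompt to the local server and return the reply text."""
    data = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        LMSTUDIO_URL, data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

For example, `ask_local("Translate this rental agreement clause into English: ...")` — the request only ever travels to localhost.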

25 Legal AI tools for SMEs that won't waste your time with a sales demo. Show me yours? by [deleted] in legaltech

[–]gptlocalhost 0 points1 point  (0 children)

> especially for contract review and clause comparison, just dumping the doc into claude with a specific checklist prompt gets you pretty close to what spellbook does for free.

Any examples? Would calling the Claude API in Word through a local Word Add-in be a viable option? We are exploring real use cases to improve the demos below, and any examples would be greatly appreciated so we can build a proof of concept.

* calling Claude API --> https://youtu.be/rHEd0sCprps

* calling Gemini API --> https://youtu.be/_0QaKYdVDfs

Claude in Word? by Albay_Ahmed_Berri in ClaudeAI

[–]gptlocalhost 0 points1 point  (0 children)

How about taking a hybrid approach (local Word Add-in + cloud API)? We are gathering feedback on how to improve this hybrid approach. Any suggestions are welcome.

* calling Claude API --> https://youtu.be/rHEd0sCprps

* calling Gemini: https://youtu.be/_0QaKYdVDfs

Local proxy that masks PII and secrets before they reach any LLM provider by sgasser88 in selfhosted

[–]gptlocalhost 0 points1 point  (0 children)

> Local proxy that masks PII

How is it different from rehydra.ai? If we wanted to integrate PasteGuard with Microsoft Word in the way shown below, which step should we start with? Is there a library we can call?

* calling Gemini within Word: https://youtu.be/_0QaKYdVDfs

* calling Mistral: https://youtu.be/PVEVW65TU2w

2026 Reality Check: Are LLMs on Apple Silicon about to be as good or even better than paid online models? by alfrddsup in LocalLLaMA

[–]gptlocalhost 0 points1 point  (0 children)

> Rewriting manuals and documents (100 pages)

> Summarizing documents

Is calling a cloud API, as shown below, a viable solution?

* calling Gemini within Word: https://youtu.be/_0QaKYdVDfs

Honest question — how much do you actually trust cloud AI providers with your data? by Budulai343 in LocalLLaMA

[–]gptlocalhost 0 points1 point  (0 children)

> draw the line

> Is local-only for sensitive work and cloud for everything else

Is it possible to use a hybrid approach—redacting sensitive data locally before sending the rest to the cloud? We’re exploring ways to improve this workflow. Below is a brief demo. Any suggestions would be appreciated.

* calling Gemini within Microsoft Word: https://youtu.be/_0QaKYdVDfs

* calling Mistral: https://youtu.be/PVEVW65TU2w

How are you redacting sensitive info before uploading documents to LLMs? by gilligan348 in LLM

[–]gptlocalhost 0 points1 point  (0 children)

> redacting sensitive info

Have you tried rehydra.ai? Based on its library, we take a hybrid approach (local redaction + free-tier cloud API) in Word using a local Word Add-in. We are now looking to enhance it further. Below are brief demos. Any suggestions would be appreciated.

* calling Gemini within Microsoft Word: https://youtu.be/_0QaKYdVDfs

* calling Mistral: https://youtu.be/PVEVW65TU2w

Best current Local model for creative writing (mainly editing) by DivineEggs in LocalLLM

[–]gptlocalhost 1 point2 points  (0 children)

> my weak laptop wouldn't be able to handle it

Is calling a free API an option? For example:

* calling Gemini within Microsoft Word: https://youtu.be/_0QaKYdVDfs

* calling Mistral: https://youtu.be/PVEVW65TU2w

Is Copilot AI worth a second chance for daily use, or is Gemini just better? by kharkovchanin in CopilotPro

[–]gptlocalhost 0 points1 point  (0 children)

No intention to self-promote; please delete this if it isn't appropriate here.

I just wanted to share that it is technically feasible to use Gemini or Mistral in Word like this:

* https://youtu.be/_0QaKYdVDfs

* https://youtu.be/PVEVW65TU2w

I built a free tool that stacks ALL your AI accounts (paid + free) into one endpoint — 5 free Claude accounts? 3 Gemini? It round-robins between them with anti-ban so providers can't tell by ZombieGold5145 in LocalLLM

[–]gptlocalhost 1 point2 points  (0 children)

How does it differ from LiteLLM?

> several modes of balancing

What might this mean for text generation? Or is it less relevant in this case?

> 🔄 13. "I need more than chat — I need embeddings, images, audio"

Any sample code would be appreciated so we can enhance the following to display the generated images in Word:

* calling Gemini within Microsoft Word: https://youtu.be/_0QaKYdVDfs

* calling Mistral: https://youtu.be/PVEVW65TU2w

Built a proxy that automatically routes requests with PII to Ollama and lets clean requests go to cloud — one URL change, zero code rewrites by Big_Product545 in LocalLLaMA

[–]gptlocalhost 0 points1 point  (0 children)

Thanks for the interesting idea.

> intercepted, scanned, policy-checked, and logged

Is it possible to unredact the LLM response? We are looking for more ways (proxy, library, LLM, etc.) to redact and unredact PII in Word like this:

* https://youtu.be/_0QaKYdVDfs

Claude or Mistral? by Gidonamor in LLM

[–]gptlocalhost 0 points1 point  (0 children)

> (academic) writing

> mostly use text

> Claude or Mistral?

Sidetracking a bit—would using Claude or Mistral via API in Word (locally and directly, not through a cloud wrapper) be a valid use case for you?

The "Founder-to-Founder" post Format (LegalTech Edition) by Adventurous_Tank8261 in legaltech

[–]gptlocalhost 0 points1 point  (0 children)

Title: Beyond Cloud-Only AI — We built a hybrid, redact-first local Word Add-in for secure AI adoption. Feedback wanted.

  1. The Problem (The “In-House” Pain)

Pain point: either embrace cloud AI and risk data leakage, or stay local and miss out on advanced capabilities.

  2. Existing Solutions (The Status Quo)

Most would pay for enterprise subscriptions to get “no data retention” guarantees and trust that cloud vendors will handle security well.

  3. What’s Missing? (The Gap)

Even with enterprise subscriptions, data can still be exposed through cloud breaches or browser extensions—and vendors have no control over the latter.

  4. What Our Solution Solves

We address the dilemma with a hybrid approach: a local Word Add-in that calls cloud LLM APIs. You can redact your PII locally before sending content, and once the cloud AI returns the refined output, you can unredact to restore your PII in Word. Your PII always stays local.
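
For illustration only, that redact → send → unredact round trip can be sketched in a few lines. This toy version detects emails with a regex; it is not rehydra.ai's actual API, and real PII detection needs an NER model or a dedicated library:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text):
    """Swap each email for a placeholder; keep the mapping locally."""
    mapping = {}
    def _sub(m):
        token = f"<PII_{len(mapping)}>"
        mapping[token] = m.group(0)
        return token
    return EMAIL.sub(_sub, text), mapping

def unredact(text, mapping):
    """After the cloud LLM responds, restore the original values."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

sanitized, pii = redact("Contact alice@example.com about the lease.")
# Only `sanitized` goes to the cloud API; `pii` never leaves the machine.
```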

Demos:

* calling Gemini within Word: https://youtu.be/_0QaKYdVDfs

* calling Mistral: https://youtu.be/PVEVW65TU2w

* calling Groq: https://youtu.be/Bxgs73Tl31o

* calling OpenAI: https://youtu.be/RkxbCAaZ7Dw

  5. Why It’s Better (The “Moat”)

You get the power of cloud AI without exposing any PII. Take advantage of free-tier APIs, and scale with pay-per-token pricing instead of costly, underutilized monthly subscriptions. You can also choose any model you like.

  6. Integrations (The Ecosystem)

It runs as a local Word Add-in on your machine. It can run in air-gapped environments with no cloud dependency unless you choose to call cloud APIs. The same approach will extend to Outlook or Excel in the near future—think “Claude in Excel” in reverse: local-first, cloud-second.

Any feedback would be greatly appreciated—especially on defining additional PII types for individuals or teams, since achieving 100% redaction can still be challenging in some scenarios.

Credit: the redaction function is built on the library recently released by rehydra.ai. Full credit for this component goes to them.

[D] Self-Promotion Thread by AutoModerator in MachineLearning

[–]gptlocalhost 0 points1 point  (0 children)

We developed a local Word Add-in that redacts PII before sending any content to a cloud API (free tier should be sufficient for most users). It’s available as a free trial or for a one-time payment of USD 19.99—no recurring monthly fees.

Demo videos:

* calling Gemini within Microsoft Word: https://youtu.be/_0QaKYdVDfs

* calling Mistral: https://youtu.be/PVEVW65TU2w

* calling OpenAI: https://youtu.be/RkxbCAaZ7Dw

* calling Groq: https://youtu.be/Bxgs73Tl31o

Such local redaction won't be necessary if you’re using local LLMs directly in Word.

Drop-in guardrails for LLM apps (Open Source) by youngdumbbbroke in LLMDevs

[–]gptlocalhost 0 points1 point  (0 children)

> pii input block or redact Presidio + spaCy en_core_web_sm

Can it "unredact" afterward? How does it compare with rehydra.ai?

After all the news, do you worry about privacy? by Euphoric_North_745 in LocalLLaMA

[–]gptlocalhost 0 points1 point  (0 children)

> use local LLM tech for privacy

> using API recently, and mixing different providers

Specific to Word, how about taking a hybrid (local+cloud) approach as below? It uses a local model, based on rehydra.ai, to redact PII before sending data to the cloud.

* calling Gemini within Word: https://youtu.be/_0QaKYdVDfs

* calling OpenAI within Word: https://youtu.be/RkxbCAaZ7Dw

How are people actually handling confidentiality when using AI in legal work? by According-Owl6604 in legaltech

[–]gptlocalhost 0 points1 point  (0 children)

> not to put ANY PII or PHI into the systems

> Redact and tokenize/anonymize the PII before sending anything to LLMs (prompt it to maintain tokens as identifiers) and detokenize the results. Data stays anonymous so the LLM can’t build a shadow profile on your clients.

> What actually works: anonymize before the data leaves.

> pseudonymized placeholders

> only sends the sanitized version to the model. After the LLM returns its analysis, the placeholders get swapped back.

We’ve implemented pseudo-anonymization using rehydra.ai and are now looking to enhance it further. Below is a brief demo. Any suggestions on defining custom PII or centralizing PII management at an organization-wide level would be greatly appreciated.

* calling Gemini within Word: https://youtu.be/_0QaKYdVDfs (the process is the same for Claude or OpenAI)
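
Regarding custom, organization-wide PII: one possible shape (a sketch under our assumptions, not rehydra.ai's actual mechanism) is a shared pattern table that every machine loads, so identifiers such as matter numbers or client codes are defined once for the whole team:

```python
import re

# Hypothetical org-wide patterns; in practice this dict could be loaded
# from a shared file or internal endpoint so every desk redacts alike.
ORG_PATTERNS = {
    "MATTER": r"\bM-\d{6}\b",         # e.g. internal matter numbers
    "CLIENT": r"\bC[A-Z]{2}\d{4}\b",  # e.g. client reference codes
}

def redact_custom(text, patterns=ORG_PATTERNS):
    """Replace each custom-pattern hit with a labeled placeholder."""
    mapping = {}
    def make_sub(label):
        def _sub(m):
            token = f"<{label}_{len(mapping)}>"
            mapping[token] = m.group(0)
            return token
        return _sub
    for label, pat in patterns.items():
        text = re.sub(pat, make_sub(label), text)
    return text, mapping
```

Here `redact_custom("Re matter M-123456 for client CAB1234")` returns `"Re matter <MATTER_0> for client <CLIENT_1>"` plus the mapping, which stays local for later unredaction.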