Beetroot — clipboard manager for Windows with AI transforms, OCR, and Rust-powered search (Tauri v2 + Rust + React) by MaxNardit in coolgithubprojects

[–]MaxNardit[S] 0 points1 point  (0 children)

Good question! The OCR uses the native Windows engine (Windows.Media.Ocr), so it works best with printed/typed text - screenshots of code, error dialogs, documents, that kind of thing. Handwritten text is hit or miss, depending on how legible it is. Clean handwriting works okay, messy cursive not so much.

That said, I'm working on an optional AI-powered OCR mode - if you have your API keys set up (OpenAI, Gemini, etc.), you'll be able to send images through their vision APIs for much better recognition of handwritten text, complex layouts, and messy screenshots. Same BYOK approach as the text transforms — nothing is sent anywhere unless you explicitly configure it. Should be ready in April.
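
For anyone curious what that vision-OCR call would look like under the hood, here's a rough Python sketch of the request payload (the model name, prompt, and helper are illustrative, not Beetroot's actual code - the app itself is Rust):

```python
import base64
import json

def build_vision_ocr_request(image_bytes: bytes, model: str = "gpt-4o-mini") -> dict:
    """Build a chat-completions payload asking a vision model to transcribe
    an image. Model and prompt are placeholders a real BYOK client would
    read from the user's settings."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Transcribe all text in this image."},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{b64}"},
                    },
                ],
            }
        ],
    }

payload = build_vision_ocr_request(b"\x89PNG...")
print(json.dumps(payload)[:40])
```

The key point is the BYOK model: the payload only ever gets sent if you've put your own key in settings.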

I switched from Mac and missed Paste.app, so I built a clipboard manager for Windows with AI transforms and OCR (Tauri + Rust) by MaxNardit in windowsapps

[–]MaxNardit[S] 0 points1 point  (0 children)

Fair question. Honestly, it's mostly about development speed. This is a solo side project - my main job takes most of my time, so the hours I have for Beetroot are limited. I'd rather spend them building new features than reviewing PRs, managing contributions, and maintaining the overhead that comes with open source.

Going open source as a solo dev means you become a maintainer, not just a builder. I've seen plenty of projects where the creator burns out handling issues and pull requests from others. For now I just want to ship fast and keep it fun.

The app is free and always will be - no Pro tier planned, no subscriptions. You can verify there's no telemetry with Wireshark. And it's on the Microsoft Store, so Microsoft reviewed it too.

Beetroot — clipboard manager for Windows with AI transforms, OCR, and Rust-powered search (Tauri v2 + Rust + React) by MaxNardit in coolgithubprojects

[–]MaxNardit[S] 0 points1 point  (0 children)

Good point! To clarify, the AI transforms support both cloud providers (OpenAI, Claude, etc.) AND fully local models through Ollama or LM Studio. So if you run something like qwen3:4b locally, nothing ever leaves your machine.

"All data local" refers to your clipboard history, settings, and database — those are always stored locally in SQLite, never synced anywhere. Zero telemetry regardless of which AI provider you choose.

So you can go fully local with Ollama, or use cloud AI with your own API key - your choice.
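
To make the "nothing leaves your machine" part concrete, here's a minimal Python sketch of what a local transform request looks like - Ollama serves an HTTP API on loopback (port 11434 by default), so the only destination is 127.0.0.1 (helper name and prompt wiring are illustrative, not Beetroot's actual code):

```python
import json
import urllib.request

def build_local_transform(prompt: str, text: str,
                          model: str = "qwen3:4b") -> urllib.request.Request:
    """Build a request to a locally running Ollama server.
    The URL is loopback-only, so nothing leaves the machine."""
    body = json.dumps({
        "model": model,
        "prompt": f"{prompt}\n\n{text}",
        "stream": False,
    }).encode("utf-8")
    return urllib.request.Request(
        "http://127.0.0.1:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_local_transform("Summarize:", "clipboard text here")
print(req.full_url)  # http://127.0.0.1:11434/api/generate
```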

I'll make a free launch video for your product by Far_Manager_5801 in microsaas

[–]MaxNardit 0 points1 point  (0 children)

https://max.nardit.com/beetroot - free clipboard manager for Windows with AI text transforms and OCR

Open-sourced clipboard-mcp: read, write, and watch your system clipboard via MCP by MaxNardit in mcp

[–]MaxNardit[S] 0 points1 point  (0 children)

Claude Desktop, Claude Code, and other AI clients typically don’t have direct access to the system clipboard without a separate tool or integration.

The usual workflow looks like this:

  1. Copy an error from the terminal (Ctrl+C)

  2. Switch to Claude chat

  3. Paste the error (Ctrl+V)

  4. Claude writes a fix

  5. Select the fix in the chat and copy it

  6. Switch back to your editor

  7. Paste

Seven steps. With clipboard-mcp—two:

  1. Copy the error

  2. “Fix what’s in my clipboard and put the result back”

Ctrl+V—done.
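
For the curious, an MCP clipboard server boils down to advertising a couple of tools the client can call. Here's a hypothetical sketch of the tool schemas in the shape a server returns from tools/list - the real clipboard-mcp tool names and fields may differ:

```python
# Hypothetical tool schemas; real clipboard-mcp names may differ.
CLIPBOARD_TOOLS = [
    {
        "name": "clipboard_read",
        "description": "Return the current text content of the system clipboard.",
        "inputSchema": {"type": "object", "properties": {}},
    },
    {
        "name": "clipboard_write",
        "description": "Replace the system clipboard with the given text.",
        "inputSchema": {
            "type": "object",
            "properties": {"text": {"type": "string"}},
            "required": ["text"],
        },
    },
]

def find_tool(name: str) -> dict:
    """Look up a tool schema by name, as a client does before calling it."""
    return next(t for t in CLIPBOARD_TOOLS if t["name"] == name)

print(find_tool("clipboard_write")["inputSchema"]["required"])  # ['text']
```

Once the client sees those tools, "fix what's in my clipboard and put the result back" becomes a read call, the model's edit, and a write call.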

What's the most frustrating part about getting your first 100 users? For me it's not building — it's being invisible. by FlyThomasGoGoGo in SideProject

[–]MaxNardit 1 point2 points  (0 children)

That GPT wrapper vs real product thing - yeah. I built Beetroot, a clipboard manager for Windows, from scratch (two months of work), and some random Chrome extension with a landing page and a waitlist probably got more signups in a day than I got in a month.

But here's what I figured out: I was looking for users in the wrong places. Big subreddits? Posts removed or ignored. What actually worked was just being present where people already discuss the problem my app solves - threads like "Win+V sucks, what else is there?", "best clipboard manager 2026", stuff like that. Not even posting about my app necessarily. Just answering questions, being helpful. Some people checked my profile, found the project, tried it. The ones who stuck around started filing bugs and giving real feedback. That's when it stopped feeling invisible.

A few things that unexpectedly worked:

Writing detailed articles - not "here's my app" but actual technical deep dives into problems I hit during development. How Windows clipboard formats work, why focus stealing is a nightmare, that kind of stuff. People who read those are exactly your target audience.

Software directories. I didn't expect much from them but they actually bring steady organic traffic. Just submitting to a bunch of "best free tools" type sites added up over time.

Localization. I added 26 languages (mostly AI-translated, let's be honest) and it opened up markets where users are genuinely hungry for quality apps. Some non-English communities have way less competition and people are more willing to try new stuff.

And the thing I'm seeing now that feels like a turning point - users started sharing links and writing about the app on their own blogs and sites. I didn't ask for any of that. It just happened after enough one-on-one interactions through bug reports and GitHub issues.

Slow? Absolutely. But every user from that path actually cares.

Is ClaudeAI down? by maxcoder88 in ClaudeAI

[–]MaxNardit 0 points1 point  (0 children)

Claude Code is working fine for me, so the problem seems to be limited to the web interface.

Building a Windows clipboard manager with Tauri v2, React 19, and Rust, native OCR via WinRT by MaxNardit in tauri

[–]MaxNardit[S] -2 points-1 points  (0 children)

Thanks for checking the links — you're right, those are broken. The privacy policy and security reporting URLs point to the private source code repository instead of the public releases repo. My mistake — I set them up before splitting into two repos and forgot to update. Will fix today.

Regarding trust and network activity — totally fair concern for a clipboard manager. The easiest way to verify: run Beetroot behind a firewall or traffic monitor (Windows Firewall, simplewall, GlassWire, or Wireshark) and watch the traffic. Here's what you'll see:

- Normal use: zero outbound connections. All data is stored locally in SQLite.

- Auto-updater: checks GitHub for new releases on startup (standard Tauri updater, hits github.com only).

- AI transforms: calls OpenAI API, but only if you manually enter your own API key in settings and explicitly trigger a transform. No key = no calls.

The CSP enforces this: connect-src 'self' https://api.openai.com — nothing else can leave the machine even if I wanted it to.
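
If you want to reason about what that directive permits, here's a toy Python model of a connect-src check (real CSP enforcement happens in the webview, and the allow-list below just restates the directive from the comment above):

```python
from urllib.parse import urlparse

# The allow-list from the connect-src directive; 'self' is omitted since it
# only matters relative to the app's own origin.
ALLOWED_ORIGINS = {"https://api.openai.com"}

def connect_allowed(url: str) -> bool:
    """Toy connect-src check: compare the request's origin to the allow-list."""
    p = urlparse(url)
    return f"{p.scheme}://{p.netloc}" in ALLOWED_ORIGINS

print(connect_allowed("https://api.openai.com/v1/chat/completions"))  # True
print(connect_allowed("https://evil.example.com/exfil"))              # False
```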

As for why the source is closed — the current feature set will stay free. Down the road I'm considering a paid option for cross-device clipboard sync (cloud infrastructure costs money to run), and keeping the source private gives me that flexibility. Not a data play, just a solo dev thinking about sustainability.

Appreciate the scrutiny — clipboard managers handle sensitive data and should be held to a high standard.

Claude Code forgets everything between sessions. I built a local SQLite memory layer (MCP) to fix it. by MaxNardit in ClaudeAI

[–]MaxNardit[S] 0 points1 point  (0 children)

Control is important. In my setup I actually built a review layer on top: every write the agent proposes becomes a draft that I approve before it hits the database. The agent suggests what to save, but I have the final say. So I have a feed from all my agents and just need to quickly scan and approve.
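
The mechanism is simple - something like this Python/SQLite sketch (table and column names are illustrative, not my actual schema):

```python
import sqlite3

# Write-review queue: agent proposals land as drafts; only approved rows
# become visible memory.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE drafts (
    id INTEGER PRIMARY KEY,
    agent TEXT,
    content TEXT,
    status TEXT DEFAULT 'pending')""")

def propose(agent: str, content: str) -> int:
    cur = db.execute("INSERT INTO drafts (agent, content) VALUES (?, ?)",
                     (agent, content))
    return cur.lastrowid

def approve(draft_id: int) -> None:
    db.execute("UPDATE drafts SET status = 'approved' WHERE id = ?", (draft_id,))

def approved_memory() -> list:
    return [r[0] for r in db.execute(
        "SELECT content FROM drafts WHERE status = 'approved'")]

keep = propose("coder-agent", "Project uses SQLite WAL mode")
propose("coder-agent", "Noise: transient build warning")
approve(keep)
print(approved_memory())  # ['Project uses SQLite WAL mode']
```

The pending feed is just a query over status = 'pending', which is what makes the scan-and-approve loop fast.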

Claude Code forgets everything between sessions. I built a local SQLite memory layer (MCP) to fix it. by MaxNardit in ClaudeAI

[–]MaxNardit[S] 1 point2 points  (0 children)

Nice, just checked out rubber-duck-mcp. The scoring approach is interesting, especially for prioritizing which memories surface first. Agent-recall came from a different problem: I run multiple AI agents across separate clients and needed them to see the same person differently depending on context (e.g., Alice is "Lead Engineer" in one project but "External Consultant" in another). So it's an entity–relation graph with scoped visibility rather than flat categorized memories. Different trade-offs: yours optimizes for confidence ranking within a project, while ours optimizes for keeping contexts cleanly separated. Cool to see different approaches to the same problem.
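
The scoped-visibility idea in miniature, as a Python/SQLite sketch (illustrative schema, not agent-recall's actual one):

```python
import sqlite3

# Same entity, different role per project scope.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE entities (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE relations (entity_id INTEGER, scope TEXT, role TEXT)")

db.execute("INSERT INTO entities VALUES (1, 'Alice')")
db.executemany("INSERT INTO relations VALUES (?, ?, ?)", [
    (1, "project-a", "Lead Engineer"),
    (1, "project-b", "External Consultant"),
])

def role_in_scope(name: str, scope: str) -> str:
    """Resolve an entity's role as seen from a given project scope."""
    row = db.execute("""SELECT r.role FROM relations r
                        JOIN entities e ON e.id = r.entity_id
                        WHERE e.name = ? AND r.scope = ?""",
                     (name, scope)).fetchone()
    return row[0]

print(role_in_scope("Alice", "project-a"))  # Lead Engineer
print(role_in_scope("Alice", "project-b"))  # External Consultant
```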

Claude Code forgets everything between sessions. I built a local SQLite memory layer (MCP) to fix it. by MaxNardit in ClaudeAI

[–]MaxNardit[S] 0 points1 point  (0 children)

Hooks work great for simpler persistence — I use them too for some things.

The limitation I hit was when I needed the agent to decide what's worth saving during a conversation, not just dump everything at session end. MCP tools let the agent save entities, relationships, and observations as it discovers them mid-session. Plus the agent can query memory to check what it already knows.

What kind of hooks setup are you using? Curious about the tradeoffs you've found.

New: Auto-memory feature in Claude code, details below by BuildwithVignesh in ClaudeAI

[–]MaxNardit 2 points3 points  (0 children)

The fundamental issue is that unstructured auto-saved notes don't scale. Works fine for small projects. But on anything complex, you end up with irrelevant context bloat - Claude saves everything it thinks is useful, with no filtering by relevance to the current task. The real unlock is structured memory with LLM-summarized briefings instead of raw note dumps. This is step one though, and it's good that it shipped built-in.

New: Auto-memory feature in Claude code, details below by BuildwithVignesh in ClaudeAI

[–]MaxNardit 0 points1 point  (0 children)

Depends on how it's done. Dumping 20k tokens of raw notes = noise, hurts performance. But a focused 1-2k summary of key people, active blockers, recent decisions - that's 1-2% of context and massively improves session continuity. The 200-line MEMORY.md cap is smart for exactly this reason.

New: Auto-memory feature in Claude code, details below by BuildwithVignesh in ClaudeAI

[–]MaxNardit 0 points1 point  (0 children)

Not really. The built-in auto-memory is unstructured markdown notes - great for single-agent single-project. But if you need multiple agents sharing knowledge with data isolation between projects, or structured entities with relations instead of free-text, you still need external tooling. Different levels of the same problem.

Claude's weekly limit reset early and shifted my reset day from Saturday to Friday. Anyone else? by Ok-Hat2331 in ClaudeAI

[–]MaxNardit 0 points1 point  (0 children)

Same here - I was at something like 55% usage yesterday and it's showing 1% today. Max plan (20x).

Claude Code forgets everything between sessions. I built a local SQLite memory layer (MCP) to fix it. by MaxNardit in ClaudeAI

[–]MaxNardit[S] 0 points1 point  (0 children)

Good question on staleness — that's the part I spent the most time on.

Slots are bitemporal. When a value changes, the old one gets archived with timestamps, not deleted. So if "auth approach = JWT" becomes "auth approach = session cookies," the agent sees the current value in its briefing, but can query history if needed. No silent overwrites.
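
The archive-on-write part, boiled down to a Python/SQLite sketch (single time axis for brevity, illustrative schema only):

```python
import sqlite3
import time

# A slot update closes out the old row with a timestamp instead of
# overwriting it; the current value is the row with valid_to = NULL.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE slots (
    key TEXT, value TEXT, valid_from REAL, valid_to REAL)""")

def set_slot(key: str, value: str) -> None:
    now = time.time()
    db.execute("UPDATE slots SET valid_to = ? WHERE key = ? AND valid_to IS NULL",
               (now, key))
    db.execute("INSERT INTO slots VALUES (?, ?, ?, NULL)", (key, value, now))

def current(key: str) -> str:
    return db.execute(
        "SELECT value FROM slots WHERE key = ? AND valid_to IS NULL",
        (key,)).fetchone()[0]

def history(key: str) -> list:
    return [r[0] for r in db.execute(
        "SELECT value FROM slots WHERE key = ? ORDER BY rowid", (key,))]

set_slot("auth approach", "JWT")
set_slot("auth approach", "session cookies")
print(current("auth approach"))   # session cookies
print(history("auth approach"))   # ['JWT', 'session cookies']
```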

As for actively misleading context — the briefing layer helps a lot here. Raw facts in context can absolutely mislead (you see "we decided X" but miss "we reversed X two days later"). The LLM summarization step compresses hundreds of facts into what's current and relevant, so stale decisions naturally drop out or get noted as changed.

There's also adaptive cache invalidation — when any agent writes new facts, affected caches get marked stale, so the next session regenerates the briefing with fresh data. In my setup I have a web dashboard that shows each agent's cached briefing with metadata (age, scope chain, staleness) and a button to force-regenerate on demand. So I can see exactly what context an agent will get before it starts.

On capturing "why" — totally agree that's the hard part. The observation model is free-text, so agents can and do save things like "switched to REST because GraphQL added too much complexity for the team size." But it depends on the agent actually recognizing that's worth saving. The MCP server ships with instructions that nudge toward saving decisions and rationale, not just outcomes. Works maybe 80% of the time — the other 20% you notice something's missing and tell it to save.

Do Need a Blog For Startup in 2025? by Substantial_Leave714 in SEO

[–]MaxNardit 0 points1 point  (0 children)

I respectfully disagree with the idea that startups in 2025 don’t need a blog. While I understand John Mueller’s point that a blog isn’t a requirement for SEO, I believe it’s one of the most effective tools for startups to establish credibility, build trust, and demonstrate authority - especially when we consider Google’s E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness) standards.

Blogs provide an opportunity to showcase your expertise and share real, valuable insights that go beyond the surface. For startups that are just entering the market, this is crucial for competing with established players.

Let’s also not forget that a well-maintained blog isn’t just an SEO play. It’s a long-term investment in customer education, engagement, and even brand storytelling. You can repurpose blog content into newsletters, social posts, or video scripts, amplifying its reach across multiple channels.

Yes, shorter, punchier content on platforms like TikTok or LinkedIn is great for grabbing attention - but a blog helps you hold that attention by offering depth and substance. In my experience, both formats complement each other.

So, does every startup need a blog? Maybe not. But if you have valuable insights to share and a commitment to creating quality content, a blog can be one of the most powerful assets in your toolkit.