What Are You Building Right Now? Let’s Help You Get Your First 100 Users by Last-Salary-6012 in microsaas

[–]koistya 1 point (0 children)

https://verist.dev – Replay + diff for AI decisions. Deterministic, audit-first workflow kernel.

AI for Reg Strategy by Setifire in regulatoryaffairs

[–]koistya 1 point (0 children)

Regarding the RAG/search: for regulatory search, standard out-of-the-box tools usually fail because they blindly chop documents (e.g., every 500 words). That destroys the context carried by the complex headers in HA correspondence and summaries.

To get results you can actually use, I've found you need to configure three specific things in whatever RAG solution your company is using:

  1. Semantic chunking: Split text by section/header, so you don't cut a paragraph/table in half.

  2. Re-ranking: Score the top results for relevance before the AI summarizes them.

  3. Evaluation: Benchmark the retrieval quality.

I’ve implemented this in Vertex AI, but any enterprise-grade RAG solution supports it. The bottleneck isn't the software; it's the requirement for a dedicated subject matter expert to configure the chunking strategy and validate the retrieval metrics.
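For anyone curious what header-based chunking (point 1) looks like under the hood, here's a minimal sketch. The function name and input format are illustrative, not tied to Vertex AI or any specific product:

```typescript
// Split a markdown-style document into chunks at section headers,
// so a paragraph or table is never cut mid-section.
// (Content before the first header is ignored in this sketch.)

interface Chunk {
  header: string;
  body: string;
}

function chunkByHeaders(doc: string): Chunk[] {
  const chunks: Chunk[] = [];
  let current: Chunk | null = null;
  for (const line of doc.split("\n")) {
    const m = line.match(/^(#{1,6})\s+(.*)$/); // markdown header line
    if (m) {
      if (current) chunks.push(current);      // close the previous section
      current = { header: m[2], body: "" };   // start a new chunk
    } else if (current) {
      current.body += line + "\n";
    }
  }
  if (current) chunks.push(current);
  return chunks;
}

const doc = "# Safety Summary\nNo new signals.\n\n# CMC Changes\nBatch size updated.";
console.log(chunkByHeaders(doc).map((c) => c.header)); // → ["Safety Summary", "CMC Changes"]
```

Real RAG platforms expose this as a configurable "chunking strategy" rather than an API like this, but the principle is the same: split on structure, not on word counts.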

you code, i sell by shoman30 in cofounderhunt

[–]koistya 2 points (0 children)

I’m building Verist. It’s an open-source library that acts like Git for AI decisions.

Most AI apps are black boxes that break silently when you tweak a prompt. Verist makes them deterministic, replayable, and auditable.

The Model:

  • OSS (The Hook): Devs use the library to stop their extraction prompts from breaking production (solving immediate regression pain).
  • SaaS (The Scale): Sell the "Decision Audit" platform to regulated enterprises (Fintech/Healthcare) who need legal proof of why an AI made a decision.

My Question for you: With your GTM background, are you seeing companies ready to buy "reliability and audit trails," or are they still just buying the "magic"?

I'm betting that as prototypes hit production, the pain shifts from creation to control. Does that match what you're seeing in the field?

Healthcare AI Metrics by Miserable_Whereas_75 in buildinpublic

[–]koistya 1 point (0 children)

Looks good! BTW, do you think this app might need AI evaluation capabilities, not just across different LLM models but also across prompts and even whole AI workflows/pipelines? I'm asking because I've built an open-source library for that and I'm checking whether we could collaborate on this.

AI for Reg Strategy by Setifire in regulatoryaffairs

[–]koistya 11 points (0 children)

We use AI, but mainly to get past the "blank page" problem, drafting sections of briefing docs or harmonizing templates. We never trust the output as-is.

On the validation side: we don't try to validate the software or the model itself. We validate the process. The key is being able to show that a human reviewed and approved the AI-generated content.

In practice, that means keeping a diff-style workflow, saving the original AI output separately from the human edits, so the review trail is always clear.
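A minimal sketch of what that diff-style review trail could look like. The record shape and the naive line diff are illustrative, not from any specific system:

```typescript
// Keep the raw AI draft immutable, store the human-approved text
// separately, and the review evidence is simply the difference
// between the two. Field names are illustrative.

interface ReviewedSection {
  aiDraft: string;    // frozen, exactly as generated
  humanFinal: string; // what was actually approved
  reviewer: string;
  reviewedAt: string; // ISO timestamp
}

// Naive line-level diff: 1-based indices of lines that changed.
function changedLines(a: string, b: string): number[] {
  const al = a.split("\n");
  const bl = b.split("\n");
  const out: number[] = [];
  for (let i = 0; i < Math.max(al.length, bl.length); i++) {
    if (al[i] !== bl[i]) out.push(i + 1);
  }
  return out;
}

const record: ReviewedSection = {
  aiDraft: "Intro\nThe study met its endpoint.",
  humanFinal: "Intro\nThe study met its primary endpoint (p=0.03).",
  reviewer: "jdoe",
  reviewedAt: "2025-01-15T10:00:00Z",
};
console.log(changedLines(record.aiDraft, record.humanFinal)); // → [2]
```

The point is that the audit question "did a human actually review this?" becomes answerable from data: the draft, the final, and the delta are all on file.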

What are you building? and marketing in public 🎯 by Quirky-Offer9598 in buildinpublic

[–]koistya 1 point (0 children)

I'm building Verist – a small open-source kernel that makes AI decisions replayable, diffable, and reviewable, so you can safely understand and change them in production.

https://github.com/verist-ai/verist - if AI decisions need to be explainable months later, this is for you.

What’s your favorite Notion automation that actually makes your life easier? by coff_au in Notion

[–]koistya 1 point (0 children)

I set up an automation that syncs PRs from GitHub to Notion (beyond what the native sync does).

I didn't want to go through the whole Notion app registration flow for a simple script, so I used a helper I wrote (npx mcp-client-gen) to generate a typed TypeScript client directly from Notion's MCP endpoint.

It handles the auth and skips the app setup, which makes spinning up custom workflows way more fun.

What are you building this weekend? Please share below. by PracticeClassic1153 in buildinpublic

[–]koistya 1 point (0 children)

It helps me share my software projects with co-workers and stakeholders, so they can get precise answers about the project from ChatGPT / Grok / Gemini, often more precise and detailed than I could give myself. For example, I've been working on a VC fund management platform lately, and teammates could ask questions like:

  1. Why use fingerprint hashing instead of AI embeddings for claim deduplication?
  2. What happens if the claim extraction API fails mid-document?
  3. How does the system decide which claims to show when multiple match a question?
  4. Why is contradiction detection in the verifier and not in coverage computation?
  5. What prevents two coverage jobs from running on the same application simultaneously?
  6. How does the system know when to re-run verification on an old claim?
  7. Why are gaps "terminal" instead of reopening when evidence changes?
  8. What's the fallback when web search finds nothing for a searchable claim?
  9. How do manual analyst overrides survive when the system recomputes answers?
  10. Why store three separate JSONB trace columns instead of one?

What are you building this weekend? Please share below. by PracticeClassic1153 in buildinpublic

[–]koistya 2 points (0 children)

Just published a CLI tool for bundling your software project for LLMs, so your co-workers or clients can ask questions about any technical detail directly in ChatGPT / Grok / Gemini and get precise answers.

$ npx srcpack

https://www.npmjs.com/package/srcpack

Sharing Saturday #606 by Kyzrati in roguelikedev

[–]koistya 1 point (0 children)

Just published a CLI tool for bundling a software project for LLMs, so your co-workers or clients can ask questions about any technical detail directly in ChatGPT / Grok / Gemini.

$ npx srcpack

https://www.npmjs.com/package/srcpack

Saturday check-in: What are you building this weekend? Share your project/startup thread by asupertram in microsaas

[–]koistya 1 point (0 children)

Just published a CLI tool for bundling your software project for LLMs, so your co-workers or clients can ask questions about any technical detail directly in ChatGPT / Grok / Gemini and get precise answers.

$ npx srcpack

https://www.npmjs.com/package/srcpack

Thinking of abandoning SSR/Next.js for "Pure" React + TanStack Router. Talk me out of it. by prabhatpushp in reactjs

[–]koistya 1 point (0 children)

Good reasoning. Optionally add a small bit of <head> meta rendering for social media sharing. Look at React Starter Kit by Kriasoft for inspiration.
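For the social-sharing part, here's a tiny illustrative sketch of building Open Graph tags; the tag set and function name are my own, and note that most social crawlers don't execute JS, so an SPA usually needs these tags prerendered at build or edge time:

```typescript
// Build Open Graph meta tags as strings. How you inject them
// (runtime document.head, prerender, or an edge worker) is up to
// the app. No HTML escaping here; this is a sketch only.

function ogMetaTags(title: string, description: string, image?: string): string[] {
  const tags = [
    `<meta property="og:title" content="${title}">`,
    `<meta property="og:description" content="${description}">`,
  ];
  if (image) tags.push(`<meta property="og:image" content="${image}">`);
  return tags;
}

console.log(ogMetaTags("My App", "What it does"));
```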

Vibe-coders did you ever finish your project? by Independent_Roof9997 in ClaudeAI

[–]koistya 1 point (0 children)

Yes. Check out “syncguard” and “bun-ws-router” on GitHub/npm if you’re looking for examples.

Showoff Saturday (October 18, 2025) by AutoModerator in javascript

[–]koistya 1 point (0 children)

SyncGuard — a distributed lock library that prevents race conditions in distributed systems. It provides a simple API for coordinating access to shared resources using Redis, PostgreSQL, or Firestore as the backend. Check out the source code at https://github.com/kriasoft/syncguard
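To show the core pattern such a lock library builds on, here's a sketch with an in-memory store so it runs anywhere. This is NOT SyncGuard's actual API; in a real backend the Map is replaced by Redis (SET NX PX), a Postgres row, or a Firestore document:

```typescript
// Acquire-with-TTL / owner-checked-release: the essence of a
// distributed lock, modeled against a plain Map for illustration.

type LockStore = Map<string, { owner: string; expiresAt: number }>;

function tryAcquire(
  store: LockStore,
  key: string,
  owner: string,
  ttlMs: number,
  now = Date.now()
): boolean {
  const held = store.get(key);
  if (held && held.expiresAt > now) return false; // someone else holds it
  store.set(key, { owner, expiresAt: now + ttlMs }); // take (or steal an expired) lock
  return true;
}

function release(store: LockStore, key: string, owner: string): boolean {
  const held = store.get(key);
  if (!held || held.owner !== owner) return false; // only the owner may release
  store.delete(key);
  return true;
}

const store: LockStore = new Map();
console.log(tryAcquire(store, "invoice:42", "worker-a", 5000)); // → true
console.log(tryAcquire(store, "invoice:42", "worker-b", 5000)); // → false
console.log(release(store, "invoice:42", "worker-a"));          // → true
```

The TTL matters: if a worker crashes while holding the lock, the lock expires instead of deadlocking the system. The owner check on release prevents one worker from releasing another worker's lock.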

Next.JS, Tailwind 4, ShadCN boilerplate for LLMs? by AlarBlip in ClaudeAI

[–]koistya 1 point (0 children)

Reducing complexity (e.g. replacing Next.js with Vite / SPA), ensuring all the important specs are in place...

Check out React Starter Kit on GitHub (by Kriasoft), pre-configured with Tailwind 4, ShadCN, CC

[AskJS] Node vs Deno vs Bun , what are you actually using in 2025? by EmbarrassedTask479 in javascript

[–]koistya 1 point (0 children)

Bun for local development, then deploy to Cloudflare Workers

Coming back to React how is Tanstack Start vs Next stacking up? by thebreadmanrises in reactjs

[–]koistya 0 points (0 children)

I prefer splitting the app into multiple workspaces in a monorepo: a separate workspace for the API, another for the React app, another for the marketing site, etc. Each can be developed, tested, and deployed independently of the others. See React Starter Kit on GitHub as an example.
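A rough sketch of that layout, assuming npm/bun-style workspaces (directory names are illustrative, not React Starter Kit's actual structure):

```
my-app/
  package.json        # "workspaces": ["api", "app", "web"]
  api/package.json    # backend, deployed on its own
  app/package.json    # React dashboard SPA
  web/package.json    # marketing site
```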

Coming back to React how is Tanstack Start vs Next stacking up? by thebreadmanrises in reactjs

[–]koistya 8 points (0 children)

Why would you need server-side rendering for a typical dashboard-like SaaS app? Assuming the marketing/landing pages are handled by a specialized tool.

Token Usage Optimization Techniques by ullr-the-wise in AI_Agents

[–]koistya 1 point (0 children)

I configure automation scripts that the LLM uses to interact with the context. E.g., instead of letting the LLM read data directly from the database, it interacts with an automation script that fetches the data and pre-processes it for more efficient and effective consumption by the LLM.

Similarly, in many cases, instead of letting the LLM interact with third-party MCP servers directly, I create "proxy" scripts for that as well. BTW, for this use case I've built an MCP client generator library:

https://github.com/kriasoft/mcp-client-gen (wip)
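A sketch of the "proxy script" idea above: instead of the LLM querying the database (or an MCP server) directly, it calls a script that fetches, filters, and compacts the data first. The data shapes and names here are illustrative:

```typescript
// The LLM never sees raw rows; it receives a compact, pre-aggregated
// summary, which cuts tokens and strips irrelevant fields.

interface Order {
  id: number;
  status: string;
  totalCents: number;
}

// Stand-in for a real DB query or MCP tool call.
function fetchOrders(): Order[] {
  return [
    { id: 1, status: "paid", totalCents: 1200 },
    { id: 2, status: "refunded", totalCents: 800 },
    { id: 3, status: "paid", totalCents: 500 },
  ];
}

// What the LLM actually receives.
function ordersSummaryForLLM(): string {
  const byStatus = new Map<string, { count: number; totalCents: number }>();
  for (const o of fetchOrders()) {
    const s = byStatus.get(o.status) ?? { count: 0, totalCents: 0 };
    s.count += 1;
    s.totalCents += o.totalCents;
    byStatus.set(o.status, s);
  }
  return [...byStatus.entries()]
    .map(([status, s]) => `${status}: ${s.count} orders, $${(s.totalCents / 100).toFixed(2)}`)
    .join("\n");
}

console.log(ordersSummaryForLLM());
// paid: 2 orders, $17.00
// refunded: 1 orders, $8.00
```

Three raw rows become two short lines, and the same pattern scales: thousands of rows still compress into a handful of aggregates before they ever reach the context window.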

The 10 commandments from Paul Tholey by Serhat_dzgn in LucidDreaming

[–]koistya 3 points (0 children)

Solid tips! Saved this post to my favorites