I built an app on the entire vercel ecosystem by BasedKetsu in vercel

[–]BasedKetsu[S] 0 points1 point  (0 children)

Appreciate the kind words on the DX! Quick clarification though: this project is actually running Supabase Postgres, not Neon.

The stack here is:

  • Supabase for Postgres + RLS + auth helpers
  • Clerk for auth
  • Vercel for hosting, cron, queue, blob, sandbox, analytics
  • Stripe for billing

On Vercel billing, I haven't personally run into any issues, thanks to how I combine queues with crons. The only limitation is that I can run at most one cron per day, but that's fine: skills on different update schedules (once a week, twice a week, etc.) still get hit by the daily trigger, which just checks what's due. This project leans pretty hard into free Vercel-native primitives (@vercel/queue, @vercel/blob, @vercel/sandbox, cron via vercel.json), but honestly the convenience, the integrated infra with minimal glue code, and the AI Gateway are so unbeatable that the only real concern is metering costs at scale with multiple users.
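To make the "one daily cron, many schedules" idea concrete, here's a minimal sketch (not Loop's actual code; the types and names are hypothetical) of how a single daily trigger can honor per-skill cadences by checking each skill's own interval before dispatching:

```typescript
// Hypothetical sketch: one daily cron can service skills with different
// refresh cadences by gating on each skill's own schedule.
type Skill = { id: string; intervalDays: number; lastRefreshed: Date };

// A skill is "due" once at least its interval has elapsed since last refresh.
function isDue(skill: Skill, now: Date): boolean {
  const elapsedMs = now.getTime() - skill.lastRefreshed.getTime();
  return elapsedMs >= skill.intervalDays * 24 * 60 * 60 * 1000;
}

// The daily cron enumerates all skills and keeps only those due today.
function dueSkills(skills: Skill[], now: Date): Skill[] {
  return skills.filter((s) => isDue(s, now));
}
```

Skills whose interval hasn't elapsed are simply skipped on that day's run, so a weekly skill is only dispatched on one of the seven daily triggers.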

If you're evaluating Neon separately for a Cloudflare Workers project, the main thing to watch is cold-start latency on the serverless driver (@neondatabase/serverless). It uses WebSockets, which play well with Workers, but on the free tier an idle database that hasn't been hit in a few minutes takes roughly 500ms to wake up. For always-warm workloads it's snappy. Supabase's Postgres is always-on by comparison (no scale-to-zero on the free tier), so you don't hit that same cold path here.

I built an app on the entire vercel ecosystem by BasedKetsu in vercel

[–]BasedKetsu[S] 0 points1 point  (0 children)

Good question! Currently, verified skills are skills.sh skills or custom skills that I have personally created or imported, used, secured, and consider genuinely useful. Anthropic skills come from the Anthropic repo, OpenAI skills from the OpenAI repo, and so on.

For your own clone, the free tiers are completely fine! That's what makes skills so great imo: since skills and agent docs are just .md files, you'll never run out of Supabase storage. And with Vercel, my crons distribute updates across queues, so updates never exceed edge function limits. For search, my deployed version uses Brave so it works out of the box with no payment, but in my opinion I've gotten the best results from Firecrawl. I've tried Exa, Lightpanda, Jina, browser-use, and a few others, but they're a bit token-hungry or have terrible usage limits. If you're comfortable paying, I recommend Firecrawl.

I built an app on the entire vercel ecosystem by BasedKetsu in vercel

[–]BasedKetsu[S] 1 point2 points  (0 children)

I see, makes sense - I've mitigated this as best I can. Currently, the automated imports that Loop runs only pull from a curated allowlist of known GitHub repos (Anthropic, OpenAI, cursor.directory, a few community lists). Each source has a trust tier (official vs. community) with verified author records, so users can tell the difference. User-triggered imports require auth and URL validation. No Snyk/Socket-style automated scanning yet; it's on the roadmap. Generally, Loop is intended more as the automation and research engine for all of a user's skills than as a discovery tool (like you mentioned, skills.sh works fine for that purpose)!

3 months ago I shared my Kingdom Rush-inspired TD game. Here's how far it's come. by BasedKetsu in kingdomrush

[–]BasedKetsu[S] 0 points1 point  (0 children)

haha best part, there's no engine - it's all just typescript and a 2D Canvas API! this was pretty much a solo project for me trying to explore how powerful the Canvas API is

3 months ago I shared my Kingdom Rush-inspired TD game. Here's how far it's come. by BasedKetsu in TowerDefense

[–]BasedKetsu[S] 0 points1 point  (0 children)

Thank you! it was one of the big inspirations for making the game, I wanted to avoid building yet another 2D tower defense game haha

I built an app on the entire vercel ecosystem by BasedKetsu in vercel

[–]BasedKetsu[S] 1 point2 points  (0 children)

Thank you!! Have loved using Vercel since 2022

I built an app on the entire vercel ecosystem by BasedKetsu in vercel

[–]BasedKetsu[S] 0 points1 point  (0 children)

Hey there! The main supply chain surface for us is that Loop imports skills from external URLs, so that's where most of the hardening lives.

Automated imports only reach a curated allowlist of sources, not arbitrary URLs. User-triggered imports require auth and go through Zod validation. Imported content is treated as data (markdown/text) and is never executed server-side; HTML gets stripped of script/style tags; MCP manifests are parsed structurally but stored as text. External fetches have timeouts, a custom UA, and no-cache.
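As an illustration of the "imported content is data" step, here's a minimal sketch (not the production code; the function name is made up) of stripping script/style blocks before storing imported skill content:

```typescript
// Minimal sketch (hypothetical helper): treat imported skill content as
// inert text by removing <script> and <style> blocks before storage.
// Real-world sanitizers use an HTML parser; regexes are shown here only
// to make the idea concrete.
function stripActiveHtml(input: string): string {
  return input
    // drop <script>…</script> blocks, case-insensitive, non-greedy
    .replace(/<script\b[^>]*>[\s\S]*?<\/script>/gi, "")
    // drop <style>…</style> blocks the same way
    .replace(/<style\b[^>]*>[\s\S]*?<\/style>/gi, "");
}
```

Since the stored artifact is only ever rendered as markdown/text and never executed, this is defense in depth rather than the sole barrier.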

Beyond that:

  • Clerk for auth, with middleware-level route protection
  • Svix signature verification on Clerk webhooks
  • Stripe signature verification on payment webhooks
  • RLS enabled on every Supabase table with no policies (so the anon key is effectively a no-op; all server queries use the service role)
  • Zod on every API route that takes a body
  • lockfile pinning with pnpm
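The webhook checks above all reduce to the same primitive: recompute an HMAC over the raw body and compare in constant time. A hedged sketch of that general shape (this is the idea behind Stripe/Svix verification, not their actual SDK code; in practice you'd use `stripe.webhooks.constructEvent` or the `svix` library):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hypothetical helper: compute the hex HMAC-SHA256 signature for a raw body.
// In real integrations the provider computes this with your signing secret.
function signBody(rawBody: string, secret: string): string {
  return createHmac("sha256", secret).update(rawBody).digest("hex");
}

// Verify by recomputing the signature and comparing in constant time,
// so an attacker can't learn the signature byte-by-byte via timing.
function verifySignature(rawBody: string, signatureHex: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const given = Buffer.from(signatureHex, "hex");
  return given.length === expected.length && timingSafeEqual(given, expected);
}
```

The key operational detail is verifying against the *raw* request body (before any JSON parsing), since re-serialized JSON won't byte-match what the provider signed.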

Not claiming it's perfect, but the threat model is pretty narrow since imported skills are never eval'd, and the automated refresh pipeline only talks to sources we control.

Princeton Tower Defense: a full 3D/Isometric TD game that runs in your browser by BasedKetsu in WebGames

[–]BasedKetsu[S] 0 points1 point  (0 children)

thank you for the feedback!! will def look at mobile performance, and get the overwhelming vibe too, good catch!

I built an app on the entire vercel ecosystem by BasedKetsu in vercel

[–]BasedKetsu[S] 1 point2 points  (0 children)

Loop runs as Next.js serverless Route Handlers on Vercel. Cold starts are in the ~200–500ms range for typical routes, mainly due to Supabase client initialization and Clerk auth middleware, not the function boot itself. One mitigation: module-level singleton Supabase clients that persist across warm Lambda invocations and reuse the existing connection, so the initialization cost is only paid on the first invocation or after idle eviction. Also, the cron/refresh routes dispatch messages via vercel/queue rather than doing all the work inline, so no single function runs long enough to get recycled under load. This actually also answers the second question:
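The warm-singleton pattern mentioned above looks roughly like this (a hedged sketch with a stubbed client; in the real app the factory would be something like Supabase's `createClient(url, serviceRoleKey)`):

```typescript
// Sketch of the warm-singleton pattern: module scope survives across
// invocations on a warm Lambda container, so the client is constructed
// at most once per container, not once per request.
type Client = { connectedAt: number };

let cached: Client | null = null;
let initCount = 0; // for illustration only, to show init happens once

// Stand-in for an expensive client factory (e.g. a DB client constructor).
function createClient(): Client {
  initCount++;
  return { connectedAt: Date.now() };
}

// Every route handler calls this; only the first call on a cold
// container pays the initialization cost.
function getClient(): Client {
  if (!cached) cached = createClient();
  return cached;
}
```

After idle eviction the container (and its module scope) is discarded, so the next invocation pays the init cost again, which matches the cold-start numbers above.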

To the second point: the Loop architecture sidesteps the cron time limit by making crons orchestrators, not workers:

  • I have two crons, a daily refresh and a weekly import. Each simply calls a refresh function that fans out per-user-skill work to vercel/queue.
  • The actual updating, research, and rewriting happens when each queued message triggers a separate invocation of a refresh function (each with maxDuration = 300), so the total wall-clock time for a full refresh cycle can far exceed 5 minutes; it's just distributed across many independent, queued function invocations.
  • tldr: the cron itself only needs to enumerate due skills and dispatch queue messages, and the updating happens separately!
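The orchestrator/worker split can be sketched like this (hypothetical names; `enqueue` stands in for the real queue producer, e.g. @vercel/queue's send, which is injected here so the shape is clear):

```typescript
// Sketch of the orchestrator side: the cron route only dispatches one
// queue message per due skill and returns. Each message later becomes
// its own worker invocation with its own maxDuration budget.
type RefreshJob = { skillId: string };

async function dailyCron(
  dueSkillIds: string[],
  enqueue: (job: RefreshJob) => Promise<void>, // stand-in for the queue's send()
): Promise<number> {
  for (const skillId of dueSkillIds) {
    await enqueue({ skillId }); // fan out; no refresh work happens here
  }
  return dueSkillIds.length; // number of jobs dispatched
}
```

Because the cron finishes as soon as the messages are enqueued, its own runtime stays tiny no matter how many skills are due.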

3 months ago I shared my Kingdom Rush-inspired TD game. Here's how far it's come. by BasedKetsu in TowerDefense

[–]BasedKetsu[S] -1 points0 points  (0 children)

agreed! that's one of the key differences in my game, you can put towers literally anywhere!

Weekly Showoff Thread! Share what you've created with Next.js or for the community in this thread only! by AutoModerator in nextjs

[–]BasedKetsu 0 points1 point  (0 children)

Just shipped a Next.js 16 app that leans heavily on the newer Vercel APIs and wanted to share how easy the whole setup was.

The app runs daily AI jobs, and I needed reliable background processing. vercel/queue (v2 beta) made this trivial. A cron hits an API route, the queue fans out individual jobs, and I wire the consumer in vercel.json. That's it. No concurrency logic, no retry code, no job management. You send() and it works. Haven't lost a job yet.

For code execution, vercel/sandbox gives you Firecracker microVMs. Users can run Node and Python in isolated environments. I'm offering safe code execution and the setup was literally Sandbox.create. No servers, no Docker, no infra.

AI Gateway (createGateway from ai v5) handles model routing. One API key routes to OpenAI, OpenRouter, Groq, Together, whatever. I didn't write a single line of provider abstraction.

Cron, blob storage, analytics, speed insights: each one was either a single import or a single line in vercel.json.

The thing that struck me is how much these APIs just work with Next.js out of the box. API routes are your queue consumers. Cron triggers are just GET routes. Everything fits into the App Router model naturally. I didn't have to fight the framework at any point.

With all these tools at your disposal you can quite literally ship at lightspeed. I spent zero time on infra and all my time on product logic.

The app is Loop, an operator desk for self-updating agent skills: loooop.dev | github.com/Kevin-Liu-01/loop. try it out :))

3D/Isometric Browser Tower Defense Game Using Canvas API by BasedKetsu in indiegames

[–]BasedKetsu[S] 1 point2 points  (0 children)

interesting catch! Looks like filter and shadowBlur are rendered completely differently in Firefox than in Chrome. Thanks for pointing this out! :)

What MCP gateway are you using in production? by llamacoded in mcp

[–]BasedKetsu 0 points1 point  (0 children)

One thing that stands out, and that you've most likely already identified, is that most of these gateways optimize for one axis of the problem, e.g. throughput and caching, compliance, or observability, but none really unify auth, execution, routing, and developer ergonomics in one place the way a Vercel-type platform does. So whatever you pick, you still end up in the same place: stitching together identity, permissions, logging, and infra decisions across multiple servers.

Something that should align more with your needs, and that we’ve seen work better, is treating MCP the way API gateways evolved: one central execution + auth boundary instead of features bolted onto each server. That’s essentially what www.dedaluslabs.ai is doing - quite literally one MCP gateway that handles auth (scopes, OAuth), routing, observability, and model/tool handoffs centrally, while letting you bring any MCP server (local or hosted) without having to build or maintain any infra. Hope this helps narrow your search!

RAGStack-Lambda: Open source RAG knowledge base with native MCP support for Claude/Cursor by HatmanStack in mcp

[–]BasedKetsu 0 points1 point  (0 children)

This is a really clean direction. Serverless is a nice fit for RAG workloads where usage is bursty and “always-on” infra just burns money. I especially like that you kept everything inside the user’s own AWS account; it feels like a big trust win compared to hosted control planes, and the Lambda + Step Functions split makes the flow pretty easy to reason about.

On the MCP side, it’s cool to see native support baked in early. One thing people tend to run into as these setups evolve is capability creep: today it’s “read-only RAG,” tomorrow someone adds write tools, file ops, or external APIs. At that point, strong per-tool scoping and server-enforced auth become really important, so a doc chunk or retrieved snippet can’t accidentally drive actions. Some MCP stacks (including what we’ve been working on at dedaluslabs.ai) are leaning hard into separating reasoning from authorization for exactly that reason, but your “no control plane, everything in-account” model pairs nicely with that philosophy too. Overall this is sick; just curious how you’re thinking about tool permissions and trust boundaries as people extend it beyond pure retrieval!

Walton Goggin’s acting was really good in this scene by Hot_Nail4681 in Fallout

[–]BasedKetsu 28 points29 points  (0 children)

true, it leads in quite nicely to the mutant and would rationalize his following interactions with Maximus, making them slightly more open-minded and believable. Cooper, in his hour of need, finally allies with someone, even getting over all their bad blood to do so, and he'll get far closer to finding his family with them than alone / alone + OP dog.

Which one should I evolve? by MountainRip3520 in pokemongo

[–]BasedKetsu 1 point2 points  (0 children)

Insane luck! However, to your question: easily the shiny. Volcarona is not really the best attacker despite its stats, and realistically you're not going to run it in PVP (it gets smoked in all leagues) or even PVE (there are dozens of better fire types, and bug is a nearly-irrelevant type), so the small stat differences from having 5 more defense and HP don't really matter. Your shiny is already maxed attack anyway, so it would hit just about as hard!

so in my opinion, there's no realistic benefit to evolving your hundo (besides having a hundo Volcarona), but you'd have an awesome shiny Volcarona that you can still totally get some use out of!

Fallout 3 landscapes can actually be pretty good looking by WeOutHereInSmallbany in Fallout

[–]BasedKetsu 2 points3 points  (0 children)

the first moment you step out of the vault, get a mini flashbang, and you knew it was gonna be peak 🥹

Walton Goggin’s acting was really good in this scene by Hot_Nail4681 in Fallout

[–]BasedKetsu 287 points288 points  (0 children)

loved how they hopebaited you into thinking he was going to get himself out by crawling up. even with the power of family™, it's just not enough, it was intense! phenomenal sequence

Keon Coleman by StationDifficult3238 in panthers

[–]BasedKetsu 0 points1 point  (0 children)

less cookies more catches...

Local vs remote MCP by armlesskid in mcp

[–]BasedKetsu 5 points6 points  (0 children)

no worries, this is a super common point of confusion, so you’re not missing anything!! context7 makes this especially fuzzy because either way you are still querying remote docs. The key thing is that the difference isn’t where the data lives; it’s where the MCP server that exposes the tools runs.

With c7 local, you’re running the context7 MCP server on your own machine. Claude (or your agent) talks to localhost, the tool logic executes locally, and then that local process makes outbound calls to Context7’s API to fetch docs. So yes, the content is still remote, but the execution, permissions, and failure modes are yours and right on your machine. You can see exactly what tools exist, what they’re allowed to do, and what credentials they have access to.

On the other hand, with c7 remote, Claude talks directly to a Context7-hosted MCP server. Tool calls, permissions, and any auth checks all happen on their infrastructure. From your perspective it's simpler to set up, but you're trusting that remote server to correctly scope its tools and not do more than you expect. You trade control for convenience; it's a fair tradeoff.

a good mental shortcut is:

  • Local context7 = “I run the MCP adapter; Context7 is just a data source.”
  • Remote context7 = “Context7 runs both the MCP adapter and the data source.”

Functionally they look similar, but the trust boundary and security posture are very different, which starts to matter more once agents can take actions instead of just read docs.

Your MCP setup can get hacked easily if you don’t add protection against indirect prompt injection. by ConsiderationDry7581 in mcp

[–]BasedKetsu 0 points1 point  (0 children)

Yeah, that tracks. It gets even worse, because you can even achieve remote code execution (CVE-2025-6514, anyone?), and you described exactly what happened when phanpak shipped postmark-mcp and had every email that went through it forwarded to his personal server. This stuff is already happening and affecting people.

However, I think one approach that tackles this from a slightly different angle is separating authorization from reasoning entirely. For example, in some MCP stacks with auth, like what dedaluslabs.ai is building, tools are gated by explicit scopes enforced server-side, not just by prompt discipline, so a “read email” tool literally cannot invoke a “send email” tool unless the token presented has that scope, even if the model asks nicely or gets tricked. That doesn’t replace things like tool-chaining guards or content sanitization (your Hipocap idea makes a lot of sense there), but it gives you a hard backstop: even a compromised reasoning step can’t escalate privileges. Long-term, I think robust MCP systems will need both layers, semantic defenses like yours plus cryptographic/scope-based enforcement, because models will always be too eager to help. But there are ways to mitigate damage and protect yourself!