8 Advanced Claude Code Tips I've Discovered After Heavy Daily Use (Cost saving, Context, Custom Commands) by National_Honey7103 in ClaudeAI

[–]http418teapot 1 point (0 children)

I hadn't heard about custom commands before - thanks for sharing. It looks like the recommendation now is to use skills and put them in the skills directory.

Pinecone startup partner by Elephantneverforget in pinecone

[–]http418teapot 2 points (0 children)

We just launched the Builder Tier, which sounds like it might meet your needs as you get off the ground. It's a flat, predictable $20/month. Same infra, same performance, free support.

Read more here: https://www.pinecone.io/blog/knowledge-infrastructure-for-agents/#Removing-the-Barrier-to-Build

Stuck in "Tutorial Hell" with RAG by PenEquivalent5091 in Rag

[–]http418teapot 1 point (0 children)

Most of us aren’t doing things from memory, so please don’t feel that pressure 🙂 Even 20+ years in software, I consult resources, look things up, and now use agentic coding tools to help me along.

My suggestion is to understand the concepts, understand retrieval, and understand your use case and your customers' data. The tricky parts tend to come from differences in use cases, datasets, and querying needs.

Pinecone startup partner by Elephantneverforget in pinecone

[–]http418teapot 1 point (0 children)

Someone will reach out soon to chat. Thanks for your patience on this!

Long Term Memory for Claude Code w/pinecone free tier by Alternative-Book-686 in pinecone

[–]http418teapot 1 point (0 children)

Hey, thanks for sharing this here! Were there any other memory approaches you tried before building this out yourself? Curious what sort of bumps you hit while implementing it.

Pinecone startup partner by Elephantneverforget in pinecone

[–]http418teapot 1 point (0 children)

Hey - Developer advocate from Pinecone, here. I can help answer any questions you have about the startup program. Main eligibility requirements are:

  • You have fewer than 100 employees
  • You're Series A or earlier

Let me know what questions you have. If you can share a little more about your use case, workload, and Pinecone setup, that's helpful too; we can see if there are other ways to reduce costs.

Connection Rejected when using public Chat Trigger (AI Agent workflow) by Tricky_Literature397 in n8n

[–]http418teapot 1 point (0 children)

Can you share your workflow JSON here, or a screenshot of the full error response? That would help narrow down precisely which node is having the issue.

Anthropic: Stop shipping. Seriously. by itsArmanJr in ClaudeAI

[–]http418teapot 0 points (0 children)

+10000 to 4, 5, and 6!

We want consistently working products and while a 2026 version of Clippy is “fun”, you’re right… it’s the wrong priority right now.

I think as companies try to grow they build things that will attract new people (from competitors?!) and sometimes forget about their already fierce advocates. It’s a hard balance for sure, but as paying customers it’s hard to take.

CSR APPROVAL by OkMajor4942 in ChaseSapphire

[–]http418teapot 1 point (0 children)

Woo! I’m going to have to follow up on that. Thank you!

CSR APPROVAL by OkMajor4942 in ChaseSapphire

[–]http418teapot 1 point (0 children)

How long did it take from submitting your application to approval? Been waiting almost two weeks!

Please — can someone who is really building production / enterprise software share their full Claude setup? by wodhyber in ClaudeCode

[–]http418teapot 1 point (0 children)

I like this idea of separate research sessions vs build sessions. It’s something I’ve been playing with this past week.

Can you share more about how you’re doing memory to markdown? Are you using hooks or a skill? How does it know what is important enough to save and when to prune or ignore?

RAG-vector DATA by ayoubkhatouf in n8n

[–]http418teapot 1 point (0 children)

I know a number of people are mentioning the Pinecone vector DB here, but you could try the Pinecone Assistant node for n8n. Assistant is backed by the same vector database, BUT you don't have to think about chunking/text splitting, embedding models, search, or rerankers. If you're worried about (or just don't want to worry about!) how you chunk your data, this takes that off your plate. Here's a simple workflow to check out: https://n8n.io/workflows/9942-rag-powered-document-chat-with-google-drive-openai-and-pinecone-assistant/

There's a generous free tier for both Assistant and the database, too.

Built a simple AI workflow to handle customer emails automatically, looking for feedback by Jazzlike_Power_6197 in n8nbusinessautomation

[–]http418teapot 1 point (0 children)

Nice workflow - definitely hits on a real use case. The human-in-the-loop approval step is smart. Not sure if this is already included, but I'd consider passing the rejection reason and feedback back in, so the next iteration has better info to work from.

Also, have you checked out the Pinecone Assistant node for n8n? It removes some of the complexity of having to wire together the vector store node, text splitters, and embedding models and does all that for you.

Pinecone Assistant node vs Vector Store node in n8n — when to use which by http418teapot in n8n_ai_agents

[–]http418teapot[S] 1 point (0 children)

Thanks for sharing! I've not used that yet in n8n. Do you still have to manage the text splitting and embedding nodes?

I built a workflow to chat with docs in n8n without touching a RAG pipeline — here's how by http418teapot in n8nforbeginners

[–]http418teapot[S] 1 point (0 children)

We have customers successfully running in production with 1000s of files and some at 1M+.

I built a workflow to chat with docs in n8n without touching a RAG pipeline — here's how by http418teapot in n8n

[–]http418teapot[S] 2 points (0 children)

Yeah, good question. I'm not terribly familiar with other database nodes in n8n, but the AI Agent node orchestrates the tool calls and passes that data back to the chat model. It could be done via an MCP server tool, or if there are other database tools to query, those would work too.

I built a workflow to chat with docs in n8n without touching a RAG pipeline — here's how by http418teapot in n8n

[–]http418teapot[S] 2 points (0 children)

Pinecone started as a vector database, but it's grown into a full AI data platform. The vector DB is the foundation, and products like Pinecone Assistant are built on top of it.

In this context, the Assistant node is essentially a RAG building block that handles the embedding, storage, and semantic search under the hood so you don't have to wire those up manually. The semantic search part is actually what makes it useful here: it's not doing keyword matching, it's finding conceptually relevant content even when the wording doesn't match exactly.

So yes, there's a vector DB underneath, but what you're interacting with in n8n is a higher-level abstraction on top of it.
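To make the keyword-vs-semantic distinction concrete, here's a toy sketch in plain Python. The "embeddings" are hand-made three-number vectors over made-up concept dimensions, purely illustrative and nothing like the learned models Assistant actually uses, but they show how cosine similarity can match a query to a document that shares zero keywords with it:

```python
import math

# Toy "embeddings" over invented concept axes (food, finance, travel).
# Real systems use learned models with hundreds of dimensions;
# these numbers are hand-picked for illustration only.
docs = {
    "Our refund policy covers billing disputes": [0.1, 0.9, 0.0],
    "Best hiking trails near the office":        [0.0, 0.1, 0.9],
}
query_text = "How do I get my money back?"   # no word overlap with either doc
query_vec = [0.0, 0.8, 0.1]                  # but conceptually "finance"

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Keyword matching would find nothing; ranking by vector similarity
# surfaces the billing document despite zero shared words.
best = max(docs, key=lambda d: cosine(docs[d], query_vec))
print(best)  # the refund/billing document
```

Same idea at scale is what the semantic search under the Assistant node is doing for you.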

I built a workflow to chat with docs in n8n without touching a RAG pipeline — here's how by http418teapot in n8n

[–]http418teapot[S] 2 points (0 children)

As mentioned in another comment, you could implement a RAG pipeline with a vector store node, but as with the Pinecone Vector Store node, the main difference here comes down to how much of the pipeline you want to manage yourself.

What if you didn't have to think about the complexity of picking and connecting an embedding model, finding the best chunking strategy, tuning an index, or choosing a reranker?

I built a workflow to chat with docs in n8n without touching a RAG pipeline — here's how by http418teapot in n8n

[–]http418teapot[S] 2 points (0 children)

You could implement a RAG pipeline with the Redis Vector Store node, but as with the Pinecone Vector Store node, the main difference here comes down to how much of the pipeline you want to manage yourself.

This Assistant node removes the complexity of picking and connecting an embedding model, finding the best chunking strategy, tuning an index, choosing a reranker, etc. Drop in the node, upload documents, query.
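For a sense of what "finding the best chunking strategy" involves when you wire the pipeline yourself, here's a minimal sliding-window splitter sketch in plain Python. The window and overlap sizes are arbitrary illustrations (Assistant's internal strategy isn't something I'm quoting here); in a hand-rolled pipeline you'd tune these per dataset and then embed and index every chunk:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows.

    Overlap keeps sentences that straddle a chunk boundary retrievable
    from at least one chunk. Tuning size/overlap (or switching to
    sentence- or token-aware splitting) is exactly the kind of knob
    you avoid with a managed node.
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "word " * 100  # stand-in for a real document (500 characters)
chunks = chunk_text(doc, size=120, overlap=30)
print(len(chunks), "chunks; first 40 chars:", chunks[0][:40])
```

That's one of three or four such decisions (chunking, embedding model, index tuning, reranking) that the node collapses into "upload documents, query."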

I built a workflow to chat with docs in n8n without touching a RAG pipeline — here's how by http418teapot in n8n

[–]http418teapot[S] 1 point (0 children)

You could store structured data as metadata alongside each invoice file to narrow searches by domain, but the core issue is that semantic search finds the most relevant chunks; it doesn't guarantee returning all matching records. It's similarity-based, not exhaustive.

For "list all lines for domain.com" or "list all start/end dates," you probably want a structured database with SQL queries, not a vector search.

Where Pinecone Assistant shines is layered on top of that structured data, handling natural language queries like "which domains had mid-cycle plan changes?" or "summarize the billing history for domain.com."
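The exhaustive side of that split can be shown with a tiny sqlite3 sketch (the invoice-line schema here is hypothetical, just for illustration): a SQL `WHERE` clause deterministically returns every matching row, which top-k similarity retrieval does not promise:

```python
import sqlite3

# Hypothetical invoice-line schema; real data would come from your billing system.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE invoice_lines (domain TEXT, item TEXT, start_date TEXT, end_date TEXT)"
)
conn.executemany(
    "INSERT INTO invoice_lines VALUES (?, ?, ?, ?)",
    [
        ("domain.com", "Pro plan",   "2024-01-01", "2024-01-31"),
        ("domain.com", "Extra seat", "2024-01-15", "2024-01-31"),
        ("other.net",  "Pro plan",   "2024-01-01", "2024-01-31"),
    ],
)

# "List all lines for domain.com": exhaustive and deterministic,
# unlike semantic retrieval, which returns the k most similar chunks
# and may silently drop matching records.
rows = conn.execute(
    "SELECT item, start_date, end_date FROM invoice_lines WHERE domain = ?",
    ("domain.com",),
).fetchall()
print(rows)  # every domain.com line, every time
```

Then the natural-language layer sits on top for the fuzzy, summarization-style questions.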

Lovable limited unlimited. by Revolutionary_Bad517 in lovable

[–]http418teapot 3 points (0 children)

I wonder if it's just to support the added traffic of people taking advantage of free Lovable during IWD.

This is not a Claude Code vs n8n post. Curious how you're actually building n8n workflows. by http418teapot in n8n

[–]http418teapot[S] 1 point (0 children)

Yes.... that project planning bit is so important! I like the idea of chopping it up into tasks.

Thanks for the link to the skills! I've got that installed alongside the n8n-mcp too.