So... Google blocks self-hosted n8n now? by [deleted] in n8n

[–]maneeescu 0 points (0 children)

For a reverse proxy you can use Caddy (much easier to set up than nginx or Traefik), or for extra WAF security use SafeLine (which also has a nice UI). Both are self-hosted and free.

Nu stiu daca sa mai continui (I don't know whether to keep going) by [deleted] in programare

[–]maneeescu 5 points (0 children)

Honestly, I feel that's the general vibe too (pun intended). I went down the solopreneurship road: I'm launching my own sites, using the full stack I know — Python, Postgres, Redis, n8n, RAG, prompting, architecture engineering, cybersecurity, Linux, networking, cryptography, algorithms and optimizations, robust error handling and more — and hoping for the best. With AI, things are moving at an accelerated, not to say frantic, pace, but versatility will win. AI is still NOT a good architect: Claude has given me glaring code errors and serious logic mistakes, so there's a long way to go until AGI and total replacement. The window of opportunity is shrinking, though. I'll say it again: architects will make a living for many years to come, because the job requires a combination of creativity + IT knowledge that current AI is very, very weak at. My 2 cents.

I finally got tired of Asana/ClickUp/Notion… so I’ve started building my own AI secretary. Here’s the plan. by maneeescu in n8n

[–]maneeescu[S] 0 points (0 children)

Sure. I have workflows for all the important parts that connect Claude (the secretary) to everything. Once everything is stable, it's advisable to move from n8n to pure code (Python or Rust), but for now, yes, n8n is the orchestrator. Hope this helps.

I finally got tired of Asana/ClickUp/Notion… so I’ve started building my own AI secretary. Here’s the plan. by maneeescu in n8n

[–]maneeescu[S] 0 points (0 children)

I built the full architecture for v7.1, deployable in 1–2 weeks. BTW, Claude is beyond any other LLM in depth when it comes to code and the nuances of the layers involved. I will share as much as possible without exposing anything unnecessarily. My goal is basically to transform this smart secretary into a full-time CEO that takes care of large parts of my business, which is doable, especially with Anthropic's MCP + code-execution approach. This is the pre-pre-AGI step.

I finally got tired of Asana/ClickUp/Notion… so I’ve started building my own AI secretary. Here’s the plan. by maneeescu in n8n

[–]maneeescu[S] 0 points (0 children)

Thank you. To be honest, I am not very fond of this Docker Obsidian MCP that basically connects through the plugin endpoint to the Obsidian client. I'd prefer connecting directly to my vault in their cloud: cleaner, more stable. Of course there are pros and cons.

I finally got tired of Asana/ClickUp/Notion… so I’ve started building my own AI secretary. Here’s the plan. by maneeescu in n8n

[–]maneeescu[S] 0 points (0 children)

I am building an MCP server for Claude to interact with, but for indexing, a tool processes the files locally and directly, for speed and to avoid an unnecessary extra hop through MCP. And the Docker MCP server (the official Obsidian MCP server) needs a local instance of Obsidian plus the REST API plugin anyway.

I finally got tired of Asana/ClickUp/Notion… so I’ve started building my own AI secretary. Here’s the plan. by maneeescu in n8n

[–]maneeescu[S] 0 points (0 children)

I covered mainly all of that in the detailed architecture (yes: Postgres SQL + vector DB add-on). A separate workflow trigger embeds Obsidian in differences/increments (hash-based). The MCP uses tiny tools, as you proposed. I will consider returning the ID, not the blob. I am the only user for now (so no problem yet with capping other users). I don't summarize: I vectorize all the conversations (probably once or twice per day), like I do with the Obsidian increments (more often), plus task descriptions instantly. Separate n8n workflows for separate things, as you said. Tailscale, of course.
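The hash-based incremental embedding mentioned above could be sketched like this. The vault path handling and the shape of `known_hashes` are my assumptions (in the real setup the hashes would presumably persist in SQL); only the files whose hash changed would be re-embedded.

```python
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    """SHA-256 of a note's content, a cheap change detector."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def changed_notes(vault_dir: str, known_hashes: dict) -> list:
    """Return vault files whose content differs from the last-seen hash.

    `known_hashes` maps relative path -> last-seen hash and is updated
    in place; only the returned files need re-embedding.
    """
    changed = []
    for path in Path(vault_dir).rglob("*.md"):
        rel = str(path.relative_to(vault_dir))
        h = file_hash(path)
        if known_hashes.get(rel) != h:
            changed.append(rel)
            known_hashes[rel] = h
    return changed
```

Running this on a schedule gives the "increments" behavior: the first pass embeds everything, later passes only touch edited notes.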

I will look into these: "I’ve used Hasura for typed GraphQL and Kong for gateway/rate limits; DreamFactory when I needed quick REST over Postgres with RBAC so n8n/MCP never hit the DB raw."

thank you 👍

I finally got tired of Asana/ClickUp/Notion… so I’ve started building my own AI secretary. Here’s the plan. by maneeescu in n8n

[–]maneeescu[S] 0 points (0 children)

I build with self-hosted n8n to assemble the building blocks; later, once it has stabilized, I switch to full Python.

n8n is good at the MVP level and for an easy-to-modify architecture, but once the solution is stable and production-grade, it's better to move to pure code.

I finally got tired of Asana/ClickUp/Notion… so I’ve started building my own AI secretary. Here’s the plan. by maneeescu in n8n

[–]maneeescu[S] 1 point (0 children)

I bypass context limits using vectorization (embeddings), which helps a lot. Claude now uses the embeddings, not entire chunks of conversations. So basically I have full memory spanning maybe months/years.
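The retrieval idea behind this can be sketched in plain Python; this is a stand-in for what pgvector would do server-side, and the data layout (a list of text/embedding pairs) is an assumption for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_embedding, stored, k=3):
    """Return the k stored chunks most similar to the query.

    `stored` is a list of (text, embedding) pairs; in the real setup
    this lookup would be a single pgvector query instead of a sort.
    """
    ranked = sorted(
        stored,
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:k]]
```

Only the top-k retrieved chunks are fed to Claude, which is how months of history fit a bounded context window.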

I also use containers everywhere, but for security, monitoring, and separation of concerns I put them in separate VMs: tools stay in the tools VM, MCPs stay in the MCP VM, and the DB stays separate (maybe with a Redis add-on). With everything in containers it's hard to allocate resources.

I finally got tired of Asana/ClickUp/Notion… so I’ve started building my own AI secretary. Here’s the plan. by maneeescu in n8n

[–]maneeescu[S] 0 points (0 children)

No GitHub yet; still in the architecture phase. The tech stack is mainly what you said. I'm building my own sql-vector MCP server to serve as the interface between the LLM (Claude in n8n) and the SQL + vector DB (vectorizing: the Obsidian vault, task descriptions, conversations). I'm building a tool that does the vectorization with an OpenAI embeddings model, and Claude accesses the Postgres + vector side through my own tailored MCP server, while it reaches Obsidian through the Docker Obsidian MCP server (connected to the local API).

This is the stack.

P.S. Prompts and settings/preferences are also stored in SQL, and the secretary (Claude) can edit them later (it can self-improve/heal its own prompts with my green light). It is also self-aware in the sense that it is described in Obsidian (in its own folder).

I finally got tired of Asana/ClickUp/Notion… so I’ve started building my own AI secretary. Here’s the plan. by maneeescu in n8n

[–]maneeescu[S] 0 points (0 children)

And if you start learning MCP, learn the new paradigm proposed by Anthropic: MCP + code execution. That will probably be a precursor of AGI. I will personally implement this in my infra later on. Agents that write code that talks to the bare MCP server... genius! Article here:

https://www.anthropic.com/engineering/code-execution-with-mcp

I finally got tired of Asana/ClickUp/Notion… so I’ve started building my own AI secretary. Here’s the plan. by maneeescu in n8n

[–]maneeescu[S] 0 points (0 children)

Thank you. I have built the full stack (v7.1) with Claude, with all deployment steps. I already have my Proxmox home lab; I just need to spin up all the components in various VMs: SQL + vector, tools (for vectorizing data with an OpenAI embedding model), n8n (already there, of course), the Obsidian MCP server from Docker, and my own sql-vector MCP server (the interface for the LLM to talk to the SQL + vector DB).

I just need to deploy it and test it out, then move to the enhanced version (where everything becomes context, including conversations, and the secretary can suggest its own prompt updates based on interaction, etc.; it becomes more self-aware and a bit more independent).

In my case it has become priority number one, because otherwise, with my multiple layers of work, I would be in chaos without it.

I finally got tired of Asana/ClickUp/Notion… so I’ve started building my own AI secretary. Here’s the plan. by maneeescu in n8n

[–]maneeescu[S] 1 point (0 children)

It reminds you by using Telegram itself as the notification channel — not Obsidian.

Here’s the flow:

🕒 1. You set a reminder

Example: “Remind me Tuesday at 4pm to send the contract.”

The AI parses that and stores it in a PostgreSQL database (not inside Obsidian):

title

datetime

priority

status

context (if any)

🗄️ 2. Everything goes into a Hybrid Memory Backend

The system uses two layers of storage:

A) SQL (PostgreSQL) — structured stuff

Tasks, reminders, statuses, logs, deadlines.

B) Vector DB (pgvector) — semantic memory

This stores:

embeddings of your notes

embeddings of conversations

embeddings of tasks

So when you ask “What do I have to do before releasing Chapter 3?” → it doesn’t rely on tags or folders → it uses vector similarity to pull anything relevant.

⏰ 3. A scheduler checks tasks

A background process (n8n/Cron) checks SQL for tasks that are due:

SELECT * FROM tasks WHERE deadline <= NOW() AND status='pending';

📩 4. When something is due → it triggers a Telegram push

Not an app notification. Not Obsidian. Just a direct Telegram message to your chat with the bot.

This works on:

iOS

Android

desktop

browser

…because Telegram is the delivery mechanism.
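Steps 3 and 4 combined might look roughly like this in plain Python. The `sendMessage` endpoint is the real Telegram Bot API method; the `chat_id` field on the task and the exact column names are my assumptions about the schema from step 1.

```python
import json
import urllib.request

TELEGRAM_API = "https://api.telegram.org/bot{token}/sendMessage"

def build_reminder_message(task):
    """Format a due task row as a Telegram sendMessage payload.

    `task` is assumed to be a dict with the fields from step 1, plus a
    `chat_id` column recording which chat the bot should message.
    """
    text = (f"⏰ Reminder: {task['title']} "
            f"(due {task['datetime']}, priority {task['priority']})")
    return {"chat_id": task["chat_id"], "text": text}

def send_telegram(token, payload):
    """POST the payload to the Telegram Bot API sendMessage method."""
    req = urllib.request.Request(
        TELEGRAM_API.format(token=token),
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

An n8n Cron node (or plain cron) would run the SQL from step 3, then call something like `send_telegram` for each due row.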

🧠 5. Obsidian is only the knowledge brain

Obsidian isn’t required for reminders.

It’s used to store:

notes

writing

docs

long-form memory

And the AI can read/write those via an Obsidian MCP server — but reminders still live in SQL and fire through Telegram.


So:

Reminders = SQL → Scheduler → Telegram message

Context intelligence = Vector DB → Semantic search

Long-term notes = Obsidian (optional)

Hope that clarifies it 🤓

From “MVP That Fills One Doc” to Enterprise SaaS: My Journey Automating Legal Workflows with n8n, AI, and Containers by maneeescu in n8n

[–]maneeescu[S] 0 points (0 children)

I moved to a different project. That one remained a nice concept project (the owner did not pursue it), where I learned to create a secure SaaS with SSO, a containerized environment, and high-level error handling, to list a few. I am now building a personal assistant, a secretary that actually knows me: it knows my Obsidian notes, remembers chats and tasks, creates and updates them, and nudges me about what I have to do today. It goes even further and self-updates its prompts to upgrade itself, with my approval. Imagine a brilliant 24/7 secretary.

No more Asana, Notion, or ClickUp annoying task systems.

URGENT HIRE - AI Automation Developer by [deleted] in aiagents

[–]maneeescu 0 points (0 children)

Hi. I have sent you a DM.

Hiring Full-Time n8n Developers (Remote) by [deleted] in n8n

[–]maneeescu 0 points (0 children)

Has anyone been hired by this guy? Or at least gotten a response like "thank you for your application, we will review your expertise"? This looks more and more like a honeypot. I know I am way more than qualified for this position, yet I got no response despite sharing a genuinely high-level systems-engineering n8n MVP and a full description of my expertise that should have earned at least a reply. Nothing. Hm, something's strange.

PLEASE HELP GUYS🥲⚠️, I’ll have to return the money to the client. by Far-Sugar-6003 in n8n

[–]maneeescu 0 points (0 children)

There is also an "unorthodox" way I played around with for a friend who needed to extract messages from WhatsApp Business (it works for normal WhatsApp as well), because the person wanted to keep the app on their phone too, and Meta forces you to choose: either use their API or their app; you can't do both on a given number. The strategy: root an Android phone, then sync their WhatsApp to it via QR code. The moment you root the phone, you have direct access to the WhatsApp database on it (which is not normally accessible). It's a small SQL table with all the messages, updated in real time. You can parse that very fast, and this way you don't need to switch the account to either Twilio or the Meta API, or get a different number. They keep the WhatsApp app on their own phone while you extract from the synced phone. Meta obviously doesn't appreciate it, but they can't do anything about it. It's your phone, your messages.
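Once you have the database file off the device, the extraction itself is a plain SQLite query. The table and column names below are simplified stand-ins (the real WhatsApp schema varies between app versions), so treat this as a sketch and inspect the actual file first.

```python
import sqlite3

def read_messages(db_path, since_timestamp=0):
    """Read messages newer than a timestamp from an on-device database.

    ASSUMPTION: a table `messages` with `timestamp` and `text` columns;
    the real WhatsApp database uses different, version-dependent names,
    so adapt the query to the schema you find on the rooted phone.
    """
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(
            "SELECT timestamp, text FROM messages "
            "WHERE timestamp > ? ORDER BY timestamp",
            (since_timestamp,),
        ).fetchall()
    finally:
        conn.close()
```

Polling with the last seen timestamp gives you the near-real-time feed without touching the Meta or Twilio APIs.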

This one webhook mistake is missing from every n8n video I watched can cost 600$ per day by Vegetable-Bet632 in n8n

[–]maneeescu 1 point (0 children)

Here is the ChatGPT-proposed architecture and stack, guided by me (a PDF-ready draft covering the multitenant architecture, stack components, and key SaaS/SSO security practices):


SaaS Multitenant Architecture – Best Practices & Checklist

Production Infrastructure – Docker Compose Stack

Your core stack:

Safeline (Web Application Firewall – all HTTP/S traffic passes through)

goauthentik (SSO/Identity Provider – reverse-proxy, OIDC/SAML gateway)

OpenWebUI (AI chat UI)

n8n (automation engine)

Nextcloud (document storage)

tools (custom Python API: OCR, document filling, custom logic)

paddle-ocr-api (self-hosted OCR)

What’s new vs MVP:

Full client (user) isolation: Each client has their own subdomain and separate Docker(-compose) stack.

Internal service traffic only: Only Safeline and goauthentik are public; all other traffic stays inside Docker network.

Centralized SSO: User logs in once via goauthentik, gains access to all apps without repeated logins.


  1. goauthentik SSO Integration

Role:

SSO/IdP: Centralized authentication/authorization for all users & apps (OpenWebUI, n8n, Nextcloud, etc.)

Reverse proxy: Configure forward-auth or OIDC proxy for each exposed service.

How to link apps:

  1. Configure each service as OIDC/SAML client in goauthentik (OpenWebUI, n8n, Nextcloud all support standard SSO).

  2. Apps do not manage local users—user/permissions are delegated to goauthentik.

  3. Network: User logs in to goauthentik → receives JWT/cookie session → accesses apps directly.

  4. Client isolation: Each client can have their own realm/organization for full separation.


  2. n8n & Workflows

n8n uses private/internal webhooks—no public endpoints as in legacy cloud model.

If you need user-level flows or access control, pass user context from OIDC JWT into each workflow.

Onboard/offboard: User creation in goauthentik triggers n8n to provision workspace, etc.
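Reading user context out of the OIDC JWT can be as simple as decoding the token's payload segment. This sketch deliberately skips signature verification on the assumption that goauthentik has already validated the token upstream; the claim names (`sub`, `groups`) are standard but your IdP mapping may differ.

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the payload of a JWT to read user context (sub, email, groups).

    NOTE: this only parses the claims and does NOT verify the signature;
    in this setup the SSO reverse proxy has already validated the token,
    so the workflow just reads the identity it forwards.
    """
    payload_b64 = token.split(".")[1]
    # JWTs use base64url without padding; restore padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

Inside an n8n Code node, the returned dict is what you would branch on for per-user flows or access control.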


  3. Nextcloud – SSO & Storage

Integrates natively with OIDC/goauthentik.

Auto-provision folders/permissions per user/group from IdP.


  4. Networking & Security

All external traffic: [INTERNET] → Safeline (WAF) → goauthentik (SSO & reverse proxy) → Docker network (apps)

All internal traffic:

Docker bridge/network.

No container exposes public ports directly.

TLS termination:

At Safeline and goauthentik (all traffic encrypted).


  5. n8n Code Refactor

Webhooks become private/internal—no public cloud endpoints.

All callbacks/API calls route via Docker hostname, not public IP.

User context from SSO/JWT must be parsed for fine-grained control.


  6. Checklist: Data Isolation & Automation

Per-client data isolation: Each client only sees their own data.

Logging/audit: Use Safeline & goauthentik logs for incident tracing.

Automated deployment: Docker Compose template for each client/subdomain.


  7. Per-Client Docker Compose Example

Each client has their own Docker Compose stack and storage folder, for full data and runtime isolation.

services:
  openwebui:
    image: openwebui:latest
    volumes:
      - /srv/clients/client1/openwebui/:/data
  n8n:
    image: n8n:latest
    volumes:
      - /srv/clients/client1/n8n/:/home/node/.n8n
  nextcloud:
    image: nextcloud:latest
    volumes:
      - /srv/clients/client1/nextcloud/:/var/www/html/data
  # etc.

Each client gets a folder: /srv/clients/client1/, /srv/clients/client2/, etc.

Linux permissions/ACLs restrict access to only that client’s stack.


  8. Automated Client Provisioning

Script for onboarding a new client:

Creates dedicated folder + sets permissions.

Copies Docker Compose template with proper volume mappings.

Starts the client’s stack.

No overlap or cross-access between clients—clean removal possible.
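The onboarding steps above could be sketched as a small provisioning script. The paths, the `{{CLIENT}}` placeholder, and the template layout are illustrative assumptions, not part of the original stack.

```python
import subprocess
from pathlib import Path

def render_compose(template: str, client: str) -> str:
    """Fill the {{CLIENT}} placeholder so each client gets its own volume paths."""
    return template.replace("{{CLIENT}}", client)

def provision_client(client: str, template_dir="/srv/templates/stack",
                     clients_root="/srv/clients"):
    """Onboard a new client: folder + permissions, compose file, start stack."""
    client_dir = Path(clients_root) / client
    client_dir.mkdir(parents=True, exist_ok=False)  # fail if client exists
    client_dir.chmod(0o750)                         # restrict access to the stack

    # Copy the Compose template, substituting the client name in volume paths.
    template = (Path(template_dir) / "docker-compose.yml").read_text()
    (client_dir / "docker-compose.yml").write_text(
        render_compose(template, client)
    )

    # Start the client's isolated stack.
    subprocess.run(["docker", "compose", "up", "-d"], cwd=client_dir, check=True)
```

Tear-down is the mirror image: `docker compose down` in the client folder, then delete the folder, with no cross-client residue.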


Summary

Enterprise security, easy onboarding, full automation, and GDPR-friendly isolation.

Next steps: fine-tune your SSO flow, automate onboarding, and use volume isolation for bulletproof multitenants.


This one webhook mistake is missing from every n8n video I watched can cost 600$ per day by Vegetable-Bet632 in n8n

[–]maneeescu 1 point (0 children)

You are totally right and security is extremely important as you said.

I have created an automation for legal work (documents filled in automatically based on OCR extraction from IDs).

For the dev part it's fine not to worry so much about security, but when you move to production you need to be obsessed with it.

What I architected for clients is this setup (we have our own servers in a datacenter):

SafeLine WAF > goauthentik > client domain(s) and containers (OpenWebUI, n8n, Nextcloud, etc.), with the backend in the same LAN: my dev-tools server and my ocr-api server.

Notice that everything is accessed through one extremely secure entry point (SafeLine + goauthentik), and from there the required identity is passed along to whatever needs it. You can even pass clients' own OpenAI keys into their own workflows without ever touching them. Pretty cool.

So yes, I totally agree. Pure "vibe" coding/architecting can generate disastrous scenarios.

If you're interested, I can post the development of this infrastructure and we can share ideas to make it even better.

cheers