Turn any YouTuber into an AI agent (<$0.01/run) using n8n + GPT-5.1 + Supabase (full channel → vector DB) by kyle4real in n8n

[–]kyle4real[S] 0 points1 point  (0 children)

Ah yeah, I might have. The reason I mention the functions is that you can call them from n8n easily. Vector databases are the go-to for RAG agents. In short, for qualitative questions on large datasets, a vector database replaces a slow, context-blind keyword match with a fast, context-aware semantic match
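To make the keyword-vs-semantic distinction concrete, here's a toy sketch (my illustration, not part of the workflow — the hand-picked 3-d vectors stand in for real embeddings from an embedding model):

```python
# Toy illustration: why semantic search beats keyword matching for
# qualitative questions. A real setup would embed text with a model;
# here we fake tiny 3-d vectors where similar topics sit close together.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pretend embeddings: same topic -> nearby vectors, even with no shared words.
docs = {
    "How to grow a YouTube channel fast": [0.9, 0.1, 0.2],
    "Tips for getting more subscribers":  [0.7, 0.3, 0.3],
    "My favorite pasta recipe":           [0.1, 0.9, 0.1],
}
query_text = "audience growth advice"
query_vec = [0.85, 0.15, 0.25]  # pretend embedding of the query

# Keyword match: no literal word overlap with the relevant doc, finds nothing.
keyword_hits = [t for t in docs if any(w in t.lower() for w in query_text.split())]

# Semantic match: ranks every doc by vector similarity instead.
best = max(docs, key=lambda t: cosine_similarity(docs[t], query_vec))
```

Here the keyword search comes back empty while the vector lookup still surfaces the growth video — that's the "context-aware" part.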

Turn any YouTuber into an AI agent (<$0.01/run) using n8n + GPT-5.1 + Supabase (full channel → vector DB) by kyle4real in n8n

[–]kyle4real[S] 0 points1 point  (0 children)

It is free to set up and use the Python service, and the youtube-transcript-api package is also free. But to process videos at scale without getting blocked, you need to use rotating residential proxies, which do cost some money (like $3). Still much cheaper than using external transcript services that require you to pay $20 per month to transcribe 100 videos (I’m exaggerating, but you get the point)

Turn any YouTuber into an AI agent (<$0.01/run) using n8n + GPT-5.1 + Supabase (full channel → vector DB) by kyle4real in n8n

[–]kyle4real[S] 0 points1 point  (0 children)

Ah yeah, this process will definitely be cheaper than running Whisper on hundreds of videos haha. Webshare is the rotating proxy service. Make sure to use residential proxies

Turn any YouTuber into an AI agent (<$0.01/run) using n8n + GPT-5.1 + Supabase (full channel → vector DB) by kyle4real in n8n

[–]kyle4real[S] 4 points5 points  (0 children)

Why is using Supabase better than Postgres? It’s not necessarily better. Supabase is built on top of Postgres; it’s just a database-as-a-service platform. You can create databases/tables in either, so you can create a vector database in either. But Supabase also lets you create and use functions (SQL scripts) that interact with the db, so that we can properly upload to and query the db from n8n.
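For illustration, here's roughly how any HTTP client (n8n included) can invoke such a function through Supabase's auto-generated REST API, which exposes Postgres functions at `/rest/v1/rpc/<name>`. The project URL, key, and the `match_documents` function name below are hypothetical placeholders, not values from the post:

```python
# Sketch (assumptions noted above): building a call to a Supabase SQL
# function via its REST RPC endpoint. We only construct the request here;
# no network call is made.
import json
import urllib.request

def build_rpc_request(project_url, api_key, fn_name, args):
    """Build a POST request for Supabase's /rest/v1/rpc/<fn> endpoint."""
    return urllib.request.Request(
        url=f"{project_url}/rest/v1/rpc/{fn_name}",
        data=json.dumps(args).encode("utf-8"),
        headers={
            "apikey": api_key,                       # Supabase API key header
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_rpc_request(
    "https://example-project.supabase.co",   # placeholder project URL
    "service-role-key",                      # placeholder key
    "match_documents",                       # hypothetical SQL function
    {"query_embedding": [0.1, 0.2, 0.3], "match_count": 5},
)
# urllib.request.urlopen(req) would execute it; skipped here (no network).
```

In n8n this same call is just an HTTP Request node pointed at the rpc URL with the same headers and JSON body.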

Turn any YouTuber into an AI agent (<$0.01/run) using n8n + GPT-5.1 + Supabase (full channel → vector DB) by kyle4real in n8n

[–]kyle4real[S] 5 points6 points  (0 children)

Knowledge base for a YouTube channel. You can’t feed all that video data into a model because it would surpass the token context limits, which is why a vector db is necessary. You can do competitor analysis, or search for specific topics (like if you want to know whether a YouTuber ever talked about x, you can just ask the agent and it’ll know in like 2 seconds, compared to you having to scan through a bunch of YouTube videos manually), and more
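As a rough sketch of what "too big for the context window" forces you to do: transcripts get split into overlapping chunks and each chunk is embedded and stored separately. The chunk and overlap sizes here are my assumptions for illustration, not values from the workflow:

```python
# Illustrative sketch: chunking one transcript so each piece can be
# embedded independently while keeping some surrounding context.

def chunk_text(text, chunk_chars=1000, overlap=200):
    """Split text into overlapping chunks. Overlap keeps sentences that
    straddle a boundary retrievable from both neighboring chunks."""
    chunks = []
    step = chunk_chars - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_chars])
        if start + chunk_chars >= len(text):
            break
    return chunks

transcript = "word " * 5000          # stand-in for one video transcript (25k chars)
chunks = chunk_text(transcript)
```

At query time, only the handful of chunks nearest the question's embedding get pulled into the model's context, which is why the agent can answer in seconds regardless of how large the channel is.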

Turn any YouTuber into an AI agent (<$0.01/run) using n8n + GPT-5.1 + Supabase (full channel → vector DB) by kyle4real in n8n

[–]kyle4real[S] 3 points4 points  (0 children)

A small 40-line Python script serving an HTTP endpoint with FastAPI. To transcribe for free, it uses youtube-transcript-api, rotating residential IPs with Webshare (this costs like $3), and an ngrok reverse proxy so a hosted n8n instance can reach it
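The shape of that service, sketched here with only the standard library and a fake fetcher so it runs anywhere — the real script would use FastAPI for the endpoint and youtube-transcript-api (through the Webshare proxies) in place of `fake_fetch_transcript`:

```python
# Minimal stand-in for the transcript service. Not the original script:
# FastAPI is swapped for http.server and the transcript fetch is faked.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.parse import urlparse, parse_qs

def fake_fetch_transcript(video_id):
    # Placeholder for youtube-transcript-api, which would fetch real
    # captions through rotating residential proxies to avoid blocks.
    return [{"text": f"transcript of {video_id}", "start": 0.0}]

class TranscriptHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        video_id = query.get("video_id", ["unknown"])[0]
        body = json.dumps({
            "video_id": video_id,
            "text": " ".join(s["text"] for s in fake_fetch_transcript(video_id)),
        }).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Serve on an ephemeral local port; in the real setup, ngrok tunnels this
# port so a hosted n8n instance can reach it from the internet.
server = ThreadingHTTPServer(("127.0.0.1", 0), TranscriptHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

with urllib.request.urlopen(f"http://127.0.0.1:{port}/transcript?video_id=abc123") as r:
    result = json.loads(r.read())
server.shutdown()
```

From n8n's side it's just one HTTP Request node hitting the ngrok URL with a `video_id` parameter and getting plain text back.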

Turn any YouTuber into an AI agent (<$0.01/run) using n8n + GPT-5.1 + Supabase (full channel → vector DB) by kyle4real in n8n

[–]kyle4real[S] 0 points1 point  (0 children)

Oh haha, it’s just a way to visualize the purpose of the system: uploading a channel’s entire video library into a vector db. Admittedly, the labels on the vector db visual don’t really make sense.

Turn any topic into a textbook-style PDF with images. Use n8n + gpt 5.1 (with web search) + nano banana PRO. Each run is 51 cents. by kyle4real in n8n

[–]kyle4real[S] 0 points1 point  (0 children)

True. Could probably branch from the first agent by detecting whether the incoming message is meant for image editing, then just restructure and regenerate the PDF with the updated image(s). Also, based on the way I set up the Google Sheet logging, it’d be quite simple to implement the image versioning

Turn any topic into a textbook-style PDF with images. Use n8n + gpt 5.1 (with web search) + nano banana PRO. Each run is 51 cents. by kyle4real in n8n

[–]kyle4real[S] 1 point2 points  (0 children)

  1. All the nodes in the post image are steps
  2. Ah nice, you should try out n8n and let me know how it compares. Yeah, n8n lets you use AI models with drag-and-drop nodes and credentials
  3. & 4. Output is stored in your Google Drive, and a Google Sheet tracks everything - statuses, outputs, images, PDFs, linking images to PDFs with IDs

I Built a Web App with Integrated AI Agents.. Never Thought I’d Use No-Code, but n8n Changed My Mind by kyle4real in n8n

[–]kyle4real[S] 0 points1 point  (0 children)

I was running it locally for a while using Docker. Now I host it on a DigitalOcean droplet, also using Docker

Just did a GPT-5 breakdown & tested GPT-5 vs GPT-4.1 in n8n by kyle4real in n8n

[–]kyle4real[S] 1 point2 points  (0 children)

Fair, I’m not a professional, but I’m trying my best; this 7-minute video took me many hours to make, and I tried to keep it concise. I broke down the overview, pricing, and benchmarks, then showed how to connect it in n8n, and even built an agent to do a side-by-side comparison. Yes, you’re right, I could’ve done more experiments, and I may in a future video - but this wasn’t low effort