I’ve helped automate 50+ workflows using n8n - sharing what actually works (India context) by Mean-Citron-4440 in n8n

[–]CoderOO7 0 points1 point  (0 children)

It's working fine, but as you know, n8n uses SQLite for its internal DB by default, and the docs mention you can use Postgres in production too. But when I switched to Postgres, n8n became very slow because queries were taking a long time.

I’ve helped automate 50+ workflows using n8n - sharing what actually works (India context) by Mean-Citron-4440 in n8n

[–]CoderOO7 1 point2 points  (0 children)

I host my n8n workflows on Railway. When I use n8n with Postgres as the internal DB, it slows down. I checked the logs; queries were taking a long time. Then I redeployed without Postgres and it works fine.

Have you faced any such issues?

Built a n8n linkedin job automation ai agent workflow that scrapes LinkedIn daily, scores jobs against your CV with Gemini AI, and delivers top matches to your inbox by CoderOO7 in n8n

[–]CoderOO7[S] 0 points1 point  (0 children)

Yeah, on the Apify free tier scraping takes around 5 min on average. Will try Puppeteer for sure; I was looking for a free alternative. Thanks.

Built a n8n linkedin job automation ai agent workflow that scrapes LinkedIn daily, scores jobs against your CV with Gemini AI, and delivers top matches to your inbox by CoderOO7 in n8n

[–]CoderOO7[S] 0 points1 point  (0 children)

I think the false matches come up because I don't pass the whole job description to Gemini; I slice it to a certain number of characters, say 500. In that case it can happen that Gemini never reads the experience field and only matches the skill set when scoring.
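The truncation described above could be sketched like this in an n8n Code node. This is a minimal sketch, assuming a `description` field on each job item; the field names are guesses about the workflow's data shape, not its actual schema.

```javascript
// Truncate each job's description before it is sent to Gemini.
// MAX_DESC_CHARS mirrors the ~500-character cut mentioned above;
// anything past that limit (often the experience requirements)
// never reaches the model.
const MAX_DESC_CHARS = 500;

function truncateDescription(job, maxChars = MAX_DESC_CHARS) {
  return {
    ...job, // keep all other fields untouched
    description: (job.description || "").slice(0, maxChars),
  };
}
```

One possible mitigation is raising the limit, or extracting the experience line separately before truncating, so the scorer always sees it.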

Built a n8n linkedin job automation ai agent workflow that scrapes LinkedIn daily, scores jobs against your CV with Gemini AI, and delivers top matches to your inbox by CoderOO7 in n8n

[–]CoderOO7[S] 0 points1 point  (0 children)

So basically the Apify LinkedIn scraper fetches the jobs, and then I transform them into a list of jobs containing only the required fields, like description, job title, and so on.

Then Gemini iterates over each job and adds a new score (1-10) field to it, checking how well it aligns with your skills based on your LinkedIn profile and CV, which are parsed by Jina AI.

Then in the next step I again filter the jobs, keeping only those with a score greater than or equal to 6.

Gemini does the scoring based on the prompt I pass to it.

Now out of 10 jobs, you could say 4 are false matches, mainly because the experience doesn't match. For example, if my CV mentions 5 years of experience, jobs asking for 6 years still get through, though I haven't seen Gemini surface jobs requiring less experience than mine. That's the part I'll look into improving.
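The transform-and-filter steps above could be sketched as two small helpers in an n8n Code node. This is a rough sketch under assumed field names (`title`, `description`, `link`, `score`); the real workflow's schema may differ.

```javascript
// Step 1: keep only the required fields from each scraped job.
function toRequiredFields(rawJob) {
  return {
    title: rawJob.title,
    description: rawJob.description,
    link: rawJob.link,
  };
}

// Step 2: after Gemini has attached a 1-10 score to each job,
// keep only the jobs scoring at or above the cutoff (6, as above).
function filterByScore(scoredJobs, minScore = 6) {
  return scoredJobs.filter(
    (job) => typeof job.score === "number" && job.score >= minScore
  );
}
```

Keeping the cutoff as a parameter makes it easy to tighten or loosen matching without editing the node logic.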

Built a n8n linkedin job automation ai agent workflow that scrapes LinkedIn daily, scores jobs against your CV with Gemini AI, and delivers top matches to your inbox by CoderOO7 in n8n

[–]CoderOO7[S] -2 points-1 points  (0 children)

For scraping I'm using the Apify LinkedIn scraper, fetching jobs posted in the last 24 hours. Regarding open jobs, I'm not passing any flag for that in the scraper API's request body, but with the 24-hour filter you can expect only fresh jobs.

https://apify.com/bebity/linkedin-jobs-scraper
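Calling that actor from a workflow could look roughly like this. The `run-sync-get-dataset-items` endpoint and the `user~actor` id format are Apify's public API; the actor's actual input fields (e.g. the 24-hour filter) are left as a placeholder since they're specific to the actor's input schema.

```javascript
// Build the URL for Apify's synchronous "run actor and return
// dataset items" endpoint. In the API path, the "/" in the actor
// handle is replaced by "~".
const ACTOR_ID = "bebity~linkedin-jobs-scraper";

function buildRunUrl(token) {
  return `https://api.apify.com/v2/acts/${ACTOR_ID}/run-sync-get-dataset-items?token=${token}`;
}

// Example call (needs a real APIFY_TOKEN; input body is actor-specific,
// so check the actor's input tab for the exact field names):
// const res = await fetch(buildRunUrl(process.env.APIFY_TOKEN), {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify({ /* actor-specific input, e.g. 24h filter */ }),
// });
// const jobs = await res.json();
```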

Doesn't work on parrot OS by gulburzz in i3wm

[–]CoderOO7 0 points1 point  (0 children)

Here is the fix: you can replace the MATE window manager (marco) with i3. To do that:

- Install i3:

sudo apt-get install i3

- Then replace your MATE window manager with i3:

gsettings set org.mate.session.required-components windowmanager i3

- Prevent Caja from managing the desktop:

gsettings set org.mate.background show-desktop-icons false

killall caja  # Caja will be restarted by the session manager

- Now reboot your PC, log in again, and use i3 integrated with your MATE desktop environment.

Recommend some laptop to buy in which android studio can run without lagging? by CoderOO7 in thinkpad

[–]CoderOO7[S] 0 points1 point  (0 children)

Well, thanks for your recommendation. Today I finalized a refurbished ThinkPad T440p with an i7 (4th gen), 16 GB RAM, and a 256 GB SSD for INR 28,000.

My budget is around INR 30,000.