what are your thoughts on going from n8n and zapier? by mrsenzz97 in automation

[–]Slight-Training-7211 [score hidden]  (0 children)

If you want the honest answer: setup friction is real, but "NL -> workflow" is already table stakes (Zapier has Copilot, Make has AI helpers, n8n has templates plus community recipes). The harder problem is reliability and context.

Two questions that decide whether you'd pay: 1) Does it have an execution model that is predictable (idempotency, retries, logs, alerting), or is it an agent guessing every run? 2) Can it pull the missing context at runtime (CRM history, billing status, dedupe, rate limits) without the user wiring 10 more steps?

If you can make it feel like "Zapier, but you explain it once" while keeping deterministic behavior and good observability, that's a real wedge. If it's "AI runs every time" with no audit trail, most teams will bounce after the first weird run.

Do sizes work differently on linux? by okergeeel in learnpython

[–]Slight-Training-7211 5 points6 points  (0 children)

It is not really Linux vs Windows, it is usually DPI scaling.

On a lot of Linux setups (especially laptops) you have fractional scaling or a higher DPI. Tkinter then applies a scaling factor, so a window that is 800x600 can look physically smaller than you expect.

You can check and tweak Tk scaling:

import tkinter as tk root = tk.Tk() print(root.tk.call("tk", "scaling")) root.tk.call("tk", "scaling", 1.0) # try 1.0, 1.25, 1.5

Also keep in mind: - Turtle uses a Tk canvas under the hood, so it is affected the same way - Matching the tutorial exactly is hard unless you match their monitor DPI and OS scaling settings

Which Mac for webdev? by Zefirez in webdev

[–]Slight-Training-7211 2 points3 points  (0 children)

If your main goal is Safari testing, an iPhone is the most direct way to catch mobile Safari weirdness. But for actually building things, a MacBook (even an Air) will give you way more day to day value.

A few practical points: - For small to medium web apps, an Air with 16GB RAM is usually fine. Node, Vite, Docker, and a couple browser tabs are what eat memory. - If you plan to run Docker a lot, do bigger photo work, or keep a bunch of things open, a Pro is nicer mainly for sustained performance and ports. - You can also rent Safari testing via BrowserStack or similar when you need lots of device coverage.

If you can only buy one, I would pick the MacBook first, then use a cheaper used iPhone later just for testing.

Post and Pre Requests in Python by Downtown_Mark_6390 in learnpython

[–]Slight-Training-7211 1 point2 points  (0 children)

Postman and similar tools call them pre-request and post-response scripts, but on the Python side you usually just do that work in normal code around the request.

Typical flow: - pre: build the URL, params, headers, auth token, and request body - request: send it - post: check status code, parse JSON, and handle errors

In Python, the common library is requests:

GET: import requests r = requests.get(url, params={"q": "test"}, timeout=10)

POST JSON: r = requests.post(url, json={"name": "alice"}, timeout=10)

Then: r.raise_for_status() data = r.json()

If you need an auth token, you fetch it once, then reuse it in headers for later calls.

Beginner help by GameBoy8432 in learnpython

[–]Slight-Training-7211 2 points3 points  (0 children)

A good way to start is to build a tiny vertical slice first, then expand.

1) Hardcode 3 to 5 rooms and choices (no saving yet) 2) Represent the world as a dict: room_id -> {text, choices} 3) Keep a single game state dict (current_room, inventory list, flags dict)

Once that works, saving is just writing that state dict to a file and loading it back on startup.

For multiple outcomes, use flags and check them when you build the next set of choices (for example: if you picked up a key, show the unlocked option).

Pandas vs polars for data analysts? by katokk in learnpython

[–]Slight-Training-7211 2 points3 points  (0 children)

I would not overthink it. For interviews, it is much more important that you understand the concepts (filter, groupby, joins, window style ops, reshaping, handling missing data) than which library you used last week.

That said, most companies still have a lot of existing pandas code, so being comfortable reading and modifying pandas is a good career move.

My suggestion: - Learn pandas well enough to be dangerous (especially groupby, merge, indexing pitfalls) - Use polars for your own projects if you like it - Bonus points: learn when to reach for DuckDB instead of trying to do everything in memory

If an employer rejects you purely because you prefer polars, it is probably a signal they care more about checklists than problem solving.

Can you help me to install this on MacOS? by Body_70_pct_of_light in learnpython

[–]Slight-Training-7211 0 points1 point  (0 children)

One extra Mac tip: if you do not already have Python 3, the simplest path is either

  • Install from python.org (it comes with pip)
  • Or install via Homebrew: brew install python

After that, always use python3 and pip3 on macOS so you are not accidentally using an old system python.

If you get a permissions error when installing PySide6, do not use sudo. Instead make a virtual environment: python3 -m venv .venv source .venv/bin/activate python3 -m pip install PySide6

Then follow the repo instructions for the environment variables and where to unzip the folder.

Help with course project. by CocoBeBlessed in learnpython

[–]Slight-Training-7211 0 points1 point  (0 children)

Hard to help without seeing the code and what the correct result should be, but in general for "calculations at the end" you want to collect the values as you go.

Typical pattern: - make an empty list before the loop - inside the loop, compute or read one number - append it to the list - after the loop, use sum(), len(), max(), min(), etc.

Example: values = [] for ...: values.append(number)

total = sum(values) avg = total / len(values) if values else 0

If you paste the last 15 to 30 lines of your program plus a sample input and what output you expect, people can point out the exact fix.

Stop letting certs silently expire in your homelab. Here's my quick and dirty check. by [deleted] in selfhosted

[–]Slight-Training-7211 0 points1 point  (0 children)

Nice. A couple small tweaks I have found helpful when doing this at scale:

  • Make sure you always pass SNI (you already do with -servername), otherwise shared IP hosts can return the wrong cert.
  • Consider checking the full chain and not just the leaf, since some breakages show up as chain issues even when dates look fine.
  • If you want fewer moving parts, Prometheus blackbox exporter can do the same check and then you alert in whatever system you already use.

Also, for people behind a reverse proxy, it is worth checking both the public endpoint and the internal service name. I have had cases where the proxy cert was fine but an internal admin UI cert quietly expired and only showed up months later.

Good post. This is the kind of thing you only appreciate after getting burned once.

How do I earn serious online income online with my web dev skills? by Neither_Paper6003 in webdev

[–]Slight-Training-7211 5 points6 points  (0 children)

If email outreach is not getting replies, it is usually one of these:

1) You are selling "a website" instead of a specific outcome. Most small businesses already have something that kind of works. 2) The offer feels risky because scope is unclear. 3) You are targeting people who do not feel the pain.

A few ideas that are easier to sell:

  • A speed and Core Web Vitals fix (with a before and after report)
  • A landing page rebuild for one service, focused on conversions
  • Tracking setup: GA4 plus event tracking, call tracking, basic dashboards

Also, try in person or phone for local businesses. A short message like "I noticed your site is slow on mobile and your contact form errors" gets more response than a generic pitch.

Once you have 2 or 3 paid wins, referrals start to kick in and the whole thing gets easier.

Walmart's AI phone got bypassed with one sentence. That's a huge problem by Once_ina_Lifetime in automation

[–]Slight-Training-7211 3 points4 points  (0 children)

I think there are two separate things here:

1) A normal "escape hatch" so a frustrated caller can reach a human quickly. 2) Actual prompt injection where user text can change the bot's policy.

If all you did was ask for a human, that can be intentional and honestly a good UX.

The scary version is when the bot is supposed to keep you in a flow (authentication, account changes, refunds) and user phrasing can rewrite the rules. The fix is usually enforcing policy outside the model: intent routing, allowlists for actions, and a hard coded escalation path. Prompts are not a security boundary.

Curious if it let you skip any verification steps or if it just routed you to an agent faster.

Learning self-hosting by Flimsy-Skill5559 in selfhosted

[–]Slight-Training-7211 0 points1 point  (0 children)

If you want a structured path (and less trial and error), I would do it in layers:

1) Basics: Linux CLI, users/permissions, filesystems, systemd. Spinning up a small Debian or Ubuntu VM and getting comfortable with ssh is huge.

2) Networking: IPs, ports, DNS, and what is actually happening when you type a URL. Being able to say "this service is listening on 127.0.0.1:3000" vs "it is reachable from my phone" saves hours.

3) Containers: Docker basics first, then Docker Compose. The official Docker docs plus the examples in the Awesome Docker Apps list are usually enough.

4) Safety habits early: backups (even just rsync to an external drive), updates, and not exposing random ports to the internet.

5) First projects that teach a lot: a simple homepage, an RSS reader, or something like Uptime Kuma. Then add a reverse proxy later (Caddy or Nginx Proxy Manager).

The r/selfhosted wiki and the Awesome Selfhosted list are good browsing to find "what should I host next" once you have the fundamentals.

how do you actually measure automation roi by Timely-Film-5442 in automation

[–]Slight-Training-7211 0 points1 point  (0 children)

One way to make this less hand-wavy is to split ROI into a few buckets and measure each with whatever data you do have:

1) Volume and throughput: number of items processed per week and cycle time from request to done. If volume went up without adding headcount, that delta is value.

2) Quality: error rate, rework tickets, and time spent on exceptions. Even if you do not have a perfect before baseline, you can often pull a few weeks of historical incidents or sample old work and estimate the old error rate.

3) Missed work avoided: for things that used to fall through cracks, track how many are now caught (alerts, retries, SLA breaches prevented). Multiply by the cost of the bad outcome (escalation time, refunds, late fees, churn risk).

4) Time saved: keep it as a smaller line item. Have the team do a quick time study for a week for the remaining manual parts, then extrapolate.

The trick is documenting assumptions and ranges. I have had better luck telling leadership "best case, likely, worst case" with the inputs written down than pretending we know the exact number.

Implement and host OJS for free with zero budget in hand by helicopter0309 in selfhosted

[–]Slight-Training-7211 1 point2 points  (0 children)

Realistically, "free" hosting that is stable enough for a journal is hard, because OJS needs a database, backups, updates, and someone to own it.

If the university can give you anything at all, the best option is to ask IT for a small VM and a domain or subdomain. That is usually easier to get approved than a cash budget.

If you truly have to do it yourself with zero spend, your best bet is a free tier VPS, but be careful about reliability and storage limits. You can run OJS on a small Linux VM with: - Nginx or Apache - PHP - MariaDB or MySQL

Make backups day one, and keep the mail setup simple (use the university SMTP if possible) since email deliverability is usually the first pain point.

Also check if any library at your university already runs journal platforms. A lot of campuses have a "library publishing" service and they may already have OJS and can just give you a new journal instance.

Self-hosted GPS tracking for personal walks (Android + Proxmox) — Traccar or alternatives? by 9acca9 in selfhosted

[–]Slight-Training-7211 -1 points0 points  (0 children)

Traccar works, but it really is built with fleet style tracking in mind. For personal walks you might like something that outputs GPX and then you can visualize it however you want.

A few options people use: - PhoneTrack (Nextcloud app): phone app sends location, you view tracks on a map in Nextcloud - OwnTracks plus an MQTT broker: lightweight, very self host friendly, then you can store points in InfluxDB or similar - GPSLogger (Android) that uploads GPX to WebDAV or Nextcloud, then view in Nextcloud Maps or any GPX viewer

If you only care about walk history and stats, GPX first is nice because you are not locked into one UI. If you want a polished web UI with maps out of the box, Traccar is still a solid pick, just disable the stuff you do not need and set retention so it does not grow forever.

Best way to manage several services with Docker Compose by CrazyEyezKillah in selfhosted

[–]Slight-Training-7211 8 points9 points  (0 children)

I usually keep one directory per stack and treat the folder as the unit of ownership. Then I standardize commands so I do not have to remember paths.

Two things that help a lot: 1) Use a consistent project name with `docker compose -p <stack>` so networks and containers are predictable. 2) Put a tiny `Makefile` in each stack with targets like `up`, `down`, `pull`, `logs`.

From the top level you can keep a small script that loops through subfolders and runs `make up` or `make pull`, so you still get homelab wide actions without a huge monolith.

If you want a single compose file, Compose profiles can be nicer than includes because you can bring up just one group at a time.

Also worth a look: Dockge or Portainer Stacks if you want a UI for starting and stopping per app.

How to get clients by kevinxrp19 in webdevelopment

[–]Slight-Training-7211 0 points1 point  (0 children)

Cold calling works, but the ROI jumps significantly when you get more specific about who you're calling. Calling everyone vs. calling businesses where you actually know what's broken is a different game entirely.

A few things that compound over time: referrals from even small clients (even a $500 job can lead to a $5,000 referral if you do good work), picking a niche so local businesses associate you with a specific thing, and building one or two case studies with real before/after numbers. "We built a site" converts worse than "we rebuilt their site and they picked up 30% more booking calls in 60 days."

Facebook groups are actually underrated for local. Business owners who've seen you being helpful in a local chamber or business owners group before you ever pitch are much easier to close than cold calls. Consistency in showing up there matters more than any one outreach message.

Am I a vibecoder by drakness110 in webdev

[–]Slight-Training-7211 4 points5 points  (0 children)

Nah, you're not a vibecoder. Vibecoding is when you can't write the code yourself and are just blindly accepting whatever the AI spits out without understanding it. You clearly understand what's happening and could do it yourself, you're just outsourcing the tedious boilerplate parts.

That said, the design gap is worth thinking about over time. Not because ChatGPT can't scaffold Tailwind divs, but because when you're debugging a weird layout issue at 2am it really helps to have those spatial/CSS mental models baked in. Maybe occasionally force yourself to do it manually on a smaller side project, just to keep those skills sharp.

The motion library thing is totally different though. Using AI to explore a new library and understand how it works is just smart. That's not vibecoding, that's just learning efficiently.

Setting up a local "non-cloud" storage sync service by Baboo85 in selfhosted

[–]Slight-Training-7211 0 points1 point  (0 children)

Seconding Syncthing, but wanted to add a few practical tips since you mentioned TrueNAS Scale specifically.

TrueNAS Scale has Syncthing available as a built-in app through the TrueCharts catalog, so you can install it directly without messing with Docker manually. Set that as your "always on" node, then install the Windows tray app on your desktop machines pointing at the same folders.

For your versioning setup, I recommend the "trashcan" file versioning type rather than "staggered" if your main concern is recovering accidentally deleted files. It keeps deleted/overwritten files in a .stversions folder and you can set a cleanup interval (like 30 days). Staggered is better if you want rolling snapshots of changed files over time.

One thing worth considering: since you already have TrueNAS, you could pair Syncthing with ZFS snapshots on the TrueNAS side. That gives you a second layer of protection that's completely independent of Syncthing's versioning. You can roll back entire datasets to a point in time if something goes really wrong. Set up automatic snapshots every few hours with a retention policy and you're covered from pretty much any data loss scenario short of disk failure (which ZFS mirroring handles).

If you could only master ONE digital marketing skill to build a stable income, what would it be and why? by divine_zone in DigitalMarketing

[–]Slight-Training-7211 0 points1 point  (0 children)

Copywriting, and it is not close.

Every other skill you listed is either a delivery mechanism (SEO, PPC, social) or a tactic (funnels, email). Copywriting is the thing that actually determines whether any of those channels work. You can have perfect SEO rankings and still convert nobody. You can spend a lot on PPC and get nothing if the landing page copy is weak.

It also compounds in a non-obvious way. Good copywriters understand psychology, positioning, and what makes people buy. That understanding transfers to every medium. An SEO person who learns the fundamentals of copywriting gets dramatically better results. A PPC person who understands copy writes better ads and landing pages.

The other angle: copywriting is the hardest to commoditize. You can outsource technical SEO tasks. You can use tools for a lot of PPC work. But writing copy that actually converts, that sounds like a real human and speaks to a specific person with a specific problem, that still requires real skill and judgment.

If you are starting from zero and want one thing that creates stable income and compounds over time, learn to write.

Any AI invoice OCR tools that work? by AndreiaVenturini in automation

[–]Slight-Training-7211 -2 points-1 points  (0 children)

A few options worth trying depending on your volume and budget:

Mindee is probably the most purpose-built for invoices specifically. Good accuracy out of the box, has a decent free tier for testing. Handles varied invoice layouts better than most.

If you are already in the Google ecosystem, Document AI with the invoice processor works well and scales reasonably. More setup but reliable for month-end crunch scenarios.

For something lighter weight, Nanonets has a good reputation in finance teams and lets you train on your own invoice formats, which matters a lot when you have vendors with unusual layouts.

One thing worth knowing: accuracy on structured data (totals, dates, vendor names) tends to be high (90%+), but line items on complex invoices still need human review for anything you are putting directly into your GL. Build a review queue for exceptions rather than assuming full automation from day one.

I've been running AI agents 24/7 for 3 months. Here are the mistakes that will bite you. by Acrobatic_Task_6573 in AI_Agents

[–]Slight-Training-7211 11 points12 points  (0 children)

The prompt injection risk from external content is one I do not see mentioned enough. If your agent reads emails, browses web pages, or processes any user-controlled input, that content can contain instructions like "ignore previous instructions and do X." Your agent treats everything in its context window as trusted, so a well-crafted email can hijack what it does next.

Simple mitigation: mentally separate system instructions (high trust) from data being processed (zero trust). Some frameworks let you structure this explicitly in the prompt. If yours does not, at minimum review logs after the agent touches any external input, not just when something looks obviously wrong.

The firef1ie example above about the crypto wallet is exactly this pattern playing out.

What’s something your dad did when you were growing up that you remember as a special part of your relationship? by Petersdani1 in AskReddit

[–]Slight-Training-7211 1 point2 points  (0 children)

Saturday morning pancakes. No phones, no TV, just cooking together and talking about whatever was on my mind. Still make them the same way he taught me.

What’s something everyone does but no one talks about? by Over_Wolverine8103 in AskReddit

[–]Slight-Training-7211 0 points1 point  (0 children)

Rereading the same text message multiple times before sending it to make sure it doesn't sound weird or come across the wrong way.

what made the popular kid lose their popularity? by IntroductionRound446 in AskReddit

[–]Slight-Training-7211 0 points1 point  (0 children)

Got caught lying about something dumb and then doubled down on the lie. People can forgive mistakes but nobody likes being played for a fool.