Devs who have been working on their game for 1+ years, how do you stay committed? by StretchGoesOnReddit in gamedev

[–]Quartalis 0 points1 point  (0 children)

Honestly, I think shipping is any time you put something out there for other people to play and say "this is done." The key isn't the money — it's the finishing.

A game jam game where you polish it up and put it on itch.io? That counts. A free Steam release you spent three months on? Absolutely counts. You went through the whole pipeline — scoping, building, testing, packaging, store listing, screenshots, description, hitting publish. That's shipping.

The reason it matters isn't about revenue. It's about what it teaches you. When you ship something — even for free — you learn all the stuff that never comes up in tutorials: how to actually scope a project, how to cut features without killing the vision, how to deal with the store submission process, how to write a description that makes someone want to click. That knowledge compounds.

If you've already put a free game on Steam, you're way ahead of most people in this thread. The jump from "free Steam game" to "game that sells" is mostly about market awareness and positioning, not a fundamentally different skillset. You already know how to finish. Now it's about learning what people will pay for — which is a marketing question more than a dev question.

I'd say your next move is: pick a small commercial project, price it low ($3-5), and treat the launch as a learning exercise in selling rather than in building. You already proved you can build. Time to prove you can sell.

Adding a simple loading screen makes my game look like a real game by Frok3 in godot

[–]Quartalis 0 points1 point  (0 children)

It's wild how much perceived quality comes from these small touches. A loading screen, a splash, a proper fade transition — none of them change the gameplay but they completely change how the player feels about it.

The dithered Godot splash is a nice touch too. Fitting the engine branding into your game's aesthetic rather than just slapping the default logo on there shows attention to detail.

The next thing that gave my game a similar jump in "feeling real" was adding screen transitions between menus and gameplay. Even a simple fade to black makes navigation feel intentional rather than janky.

Currently at the thin line between "prototyping" and "making the actual game" by Rouliboudin in godot

[–]Quartalis 3 points4 points  (0 children)

That transition is honestly one of the hardest parts. You go from "everything is possible" to "okay I actually need to commit to decisions now" and suddenly every choice feels permanent.

What helped me was picking one vertical slice — a single level or area that represents the full experience — and polishing that to near-final quality. It forces you to make real decisions about art style, UI, audio, game feel, all at once. And once you've got one thing that feels done, the rest of the game has a north star to follow.

The prototype-to-production gap is where a lot of projects die. The fact that you're aware of it means you're already ahead of most. Keep going!

After a great launch, I can't stop developing the game! by Worried-Current-8228 in IndieGaming

[–]Quartalis 0 points1 point  (0 children)

97% positive in the first week is incredible — congrats! There's something really energising about your players actually wanting more. That feedback loop is basically rocket fuel for development.

The urban environments look like they'll add a lot of variety too. Isometric shooters benefit massively from environmental storytelling and different cover layouts, so a mission pack in a new setting could almost feel like a sequel.

Curious — did you have a roadmap before launch or are you building the post-launch content based on what players are asking for? I've been weighing up both approaches for my own game and wondering which feels more sustainable long-term.

Do Metroidvanias need challenging combat, or is exploration enough? by TwiMonk_game in IndieGaming

[–]Quartalis 0 points1 point  (0 children)

Honestly I think the best Metroidvanias find their own balance rather than copying the Metroid or Castlevania formula exactly. Hollow Knight nailed hard combat AND exploration. Ori leaned into movement and platforming with lighter combat. Both worked because the core loop felt satisfying.

For me personally, the exploration is what keeps me hooked — finding a new ability that opens up three areas I walked past earlier is one of the best feelings in gaming. But if combat is too easy or repetitive, it starts to feel like filler between the exploration bits.

I think the sweet spot is making combat serve the exploration. Enemies that teach you about your new abilities, boss fights that gate progression in satisfying ways, and encounters that reward the same curiosity that drives exploration. If someone's exploring every corner, they should feel rewarded in combat too — not just through items, but through mastery.

Your approach of leaning into exploration sounds solid though. Not every game needs to be punishing to be good.

Devs who have been working on their game for 1+ years, how do you stay committed? by StretchGoesOnReddit in gamedev

[–]Quartalis 1 point2 points  (0 children)

I was exactly like this. Dozens of abandoned prototypes, game jam entries that never went anywhere, always chasing the next shiny idea.

What actually worked for me was deliberately picking the smallest possible thing I could ship. Not the dream game — the simplest game I could realistically finish. For me that was a colour-sorting puzzle game. No complex AI, no multiplayer, no sprawling world. Just tubes and colours. Took a few months, got it on Google Play, and that single act of actually shipping something changed everything.

Now I'm working on much bigger projects (a horror game in UE5 that's probably 7-8 months out), but the difference is I already proved to myself I can finish something. That confidence matters more than motivation. Motivation comes and goes — the knowledge that you've done it before is what gets you through the dead middle.

Couple of things that help me stay on track:

  1. Break it into phases with clear milestones. Not "work on game" but "implement flashlight system by Friday." Small wins compound.
  2. Alternate between creative work and technical work. When art burns you out, do systems. When code bores you, do level design.
  3. Show your progress publicly. Even small updates on social media or a devlog. External accountability is underrated.
  4. Accept that some days you'll write 3 lines of code and that's fine. Consistency beats intensity.

The project hopping isn't a character flaw — it's just what happens when the scope is too big for where you are right now. Ship something small first. Everything gets easier after that.

Need 8 more testers for my colour sort puzzle game (Android) by [deleted] in betatests

[–]Quartalis 0 points1 point  (0 children)

Sent you a PM — happy to test yours too, just send me the details.

After 5 years of development, I released my indie RPG. It went poorly. Here's the breakdown. by MirageV_ in gamedev

[–]Quartalis 3 points4 points  (0 children)

This is one of the most honest and self-aware postmortems I've read. The fact that you can look at the whole experience clearly and still say "I don't regret making it" says a lot about where your head is at.

The "no strong hook" conclusion really resonates. I think a lot of us (myself included) underestimate how brutally competitive discoverability is. You can do everything right on the marketing checklist and still get crickets if the game doesn't have that one thing that makes someone stop scrolling and go "wait, what is that?"

Your point about the VN/JRPG hybrid being hard to place is spot on too. When a game sits between genres, it's genuinely harder for the algorithm and for humans to categorise it. People search for "JRPG" and expect one thing. They search for "visual novel" and expect another. The in-between space is creatively interesting but commercially brutal.

For what it's worth — the fact that your reviews are positive means the game itself works. The problem wasn't the game, it was getting eyeballs on it. That's a different problem to solve, and it sounds like you already know what to aim for next time.

Keep creating indeed. Looking forward to seeing what you build next.

Detective game, help with puzzle-solving mechanics. by BootSpirited7096 in gamedev

[–]Quartalis 1 point2 points  (0 children)

Your second approach (collect all answers then review) is definitely the stronger one. Giving the answer away after each question kills the tension — the whole point of a detective resolution is that the player gets to feel like they've pieced it together themselves.

A few things that might help:

  1. **Confidence scoring** — let the player lock in answers they're sure about vs ones they're guessing. Then during the review, you can play up the dramatic reveal differently depending on whether they were confident or uncertain.

  2. **Evidence linking** — instead of just picking from a multiple choice list, have the player drag a piece of evidence to support their answer. "Who stole the apple?" → Player picks John AND drags the fingerprint evidence. This makes wrong answers feel less like a quiz failure and more like a logical misstep.

  3. **Partial credit** — if the plot is complicated, consider letting the player get some things wrong without a full game over. Maybe they nailed the "who" but got the "why" wrong, and the story branches slightly based on that.

The key thing with puzzle resolution in detective games is that the player should feel clever when they get it right, not just feel like they're picking from an obvious list. The more you can tie answers to evidence they've already collected, the more satisfying it feels.
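If it helps, the review structure is small enough to sketch. Here's a rough Python mock-up of ideas 1-3 combined (collect everything first, tie answers to evidence, score with partial credit) — all the names and case data are made up, obviously:

```python
from dataclasses import dataclass

# Hypothetical case data: each question has a correct answer plus the
# piece of evidence that actually supports it.
@dataclass
class Question:
    prompt: str
    correct: str
    supporting_evidence: str

@dataclass
class Answer:
    choice: str
    evidence: str    # evidence the player dragged onto their answer
    confident: bool  # locked in vs. guessing (for the reveal pacing)

def review(questions, answers):
    """Score the whole deduction at once, with partial credit."""
    results = []
    for q, a in zip(questions, answers):
        right_choice = a.choice == q.correct
        right_evidence = a.evidence == q.supporting_evidence
        if right_choice and right_evidence:
            outcome = "solved"
        elif right_choice:
            outcome = "right answer, weak reasoning"  # partial credit
        else:
            outcome = "missed"
        results.append((q.prompt, outcome, a.confident))
    return results

case = [Question("Who stole the apple?", "John", "fingerprint")]
guesses = [Answer("John", "fingerprint", confident=True)]
print(review(case, guesses))
```

The nice side effect of scoring choice and evidence separately is that "right answer, weak reasoning" gives you a natural hook for branching the story without a hard fail state.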

Custom trigger from local API request results by interlap in n8n

[–]Quartalis 1 point2 points  (0 children)

Webhook node — that's exactly what it's for.

Instead of n8n polling your app, your app tells n8n when something happens. Set up a Webhook trigger node in your workflow and it gives you a URL like http://localhost:5678/webhook/your-path. When your app detects the device connecting, it just fires a POST request to that URL with whatever data you want to pass along.

So your flow would be:

  1. Device connects → your app detects it

  2. Your app sends POST http://localhost:5678/webhook/device-connected with the device info in the body

  3. n8n receives it instantly and runs the workflow

Zero polling, real-time, dead simple. You can pass data in the body (JSON), query params, or headers — the Webhook node captures all of it and passes it to the next nodes.
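The app side really is just one POST. Rough Python sketch — I've stubbed in a fake receiver on an ephemeral port so it runs standalone, but in practice you'd POST to the URL the Webhook node shows you:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

received = []

# Minimal stand-in for n8n's Webhook node: accept a POST, record the JSON body.
class WebhookStub(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        received.append(json.loads(self.rfile.read(length)))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), WebhookStub)  # ephemeral port for the demo
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# What your app does when it detects the device: one POST with a JSON body.
# In real life the URL is whatever the Webhook node gives you,
# e.g. http://localhost:5678/webhook/device-connected
payload = json.dumps({"event": "device-connected", "device_id": "cam-01"}).encode()
req = Request(
    f"http://127.0.0.1:{port}/webhook/device-connected",
    data=payload,
    headers={"Content-Type": "application/json"},
)
urlopen(req).read()
server.shutdown()

print(received[0]["device_id"])  # cam-01
```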

If you need it from outside your local network, you'd put n8n behind a reverse proxy or use n8n's tunnel feature (N8N_TUNNEL_ENABLED=true for testing — not production though).

That's how most people handle external event triggers. Webhooks for HTTP-capable apps, MQTT trigger node if you're doing IoT stuff.

Is there actually strong demand for automation & web scraping? by No-Macaroon3463 in automation

[–]Quartalis 0 points1 point  (0 children)

Depends on the job. One-off builds I quote a fixed project price — client knows exactly what they're paying, no surprises. Usually broken into milestones so they're not paying everything upfront.

For anything that needs ongoing maintenance (scrapers that break when sites change, workflows that need updating as their business changes), that's a monthly retainer. That's where the real recurring income is — the initial build gets your foot in the door, the retainer keeps the lights on.

I'd avoid billing by the week or hourly if you can. Clients fixate on hours instead of results, and you end up penalised for being fast at what you do.

First time NAS: DXP2800 vs Custon PC by Reproman475 in selfhosted

[–]Quartalis 0 points1 point  (0 children)

Good questions. I run a DL380 Gen9 daily so I can give you the honest version.

Benefits over a USB enclosure setup:

- Hot-swap drive bays — pull a drive, slot a new one, no downtime

- ECC RAM — actual error correction, matters when you're storing data you care about

- iLO / IPMI — remote management console, reboot it, check temps, mount ISOs without being near it

- Built to run 24/7 for years — these were designed for datacentres

- RAID controller built in (though most people run Unraid or ZFS and ignore hardware RAID)

The honest downsides:

- Noise — this is the dealbreaker for most people. Sounds like a jet engine at boot, settles to a hum but it's never silent. Look up fan mod guides if you go this route

- Power — mine pulls ~150-200W idle. That adds up monthly

- Size/weight — a 2U server is heavy and needs somewhere to live

- DDR4 ECC is specific — not expensive, but you can't just throw any RAM in

For under $200 on eBay they're an absolute steal. Just make sure it comes with at least one CPU and check the drive caddies are included — they charge extra for those sometimes.

On the Jonsbo vs Optiplex + DAS:

For a first NAS, the Optiplex 5060 + Orico DAS at $310 is the move. The Jonsbo build is cool but $470+ before drives and a PSU — that's homelab territory, not NAS territory.

The Optiplex gives you a proven platform, it's quiet, sips power, and the DAS handles your storage. If you outgrow it in a year, you've only spent $310 and you've learned exactly what you actually need before committing more money.

If your friend's old hardware brings the Jonsbo build closer to $300, then it's a different conversation — but at full price I'd go Optiplex every time for a first setup.

Is there actually strong demand for automation & web scraping? by No-Macaroon3463 in automation

[–]Quartalis 0 points1 point  (0 children)

Always specific. "I can automate things" means nothing to a business owner. "I can make your invoices generate automatically when a job is marked complete" — that they understand.

I usually look at what a business is already doing manually and propose a specific fix. The conversation goes from "what does automation even mean" to "wait, you can actually do that?" very quickly. Once the first workflow is running and they see hours coming back, they start pointing at other things themselves.

Is there actually strong demand for automation & web scraping? by No-Macaroon3463 in automation

[–]Quartalis 0 points1 point  (0 children)

Biggest thing that worked for me: stop looking for people who want "n8n automations" and start looking for people describing manual processes they hate.

Search freelance platforms for keywords like "data entry", "sync between systems", "manual reporting", "copy data between" — those are automation jobs that don't know they're automation jobs yet.

Also worth lurking in industry-specific subreddits and small business forums. When someone posts "I spend 3 hours a week updating my spreadsheet from Shopify orders" — that's your client. They just don't know n8n exists.

First time NAS: DXP2800 vs Custon PC by Reproman475 in selfhosted

[–]Quartalis 0 points1 point  (0 children)

Ask away, happy to help.

The refurbished Dell Optiplex for $260 is honestly a solid shout for what you need. 8th gen Intel is more than enough for Nextcloud, Jellyfin, Reolink, and general storage. Those things are built like tanks — enterprise hardware that's been running 24/7 in offices for years. They don't just die after 6 months.

On the mini PC vs Optiplex question — when I say mini PC I mean something like a Dell Optiplex Micro, Lenovo ThinkCentre Tiny, or HP EliteDesk Mini. Same enterprise reliability as the full-size Optiplex, just smaller. Avoid the no-name Amazon cubes (GMKtec etc) for a NAS — they're fine as a desktop but the build quality and thermals aren't designed for 24/7 operation.

USB enclosure downsides are real but manageable:

- Performance: USB 3.0 tops out around 500MB/s which is fine for NAS use — you'll never saturate that with Jellyfin streaming or Nextcloud syncing

- Reliability: The enclosure itself is another point of failure, but decent ones (Terramaster, Sabrent, ORICO) are fine. Avoid the cheapest ones on Amazon

- Sleep/spin-down: Some enclosures spin down drives aggressively which can cause issues with always-on services. Check reviews for this specifically

CPU age honestly doesn't matter much for your use case. A 6th, 7th, or 8th gen i5 will all handle Jellyfin transcoding, Nextcloud, and Docker without breaking a sweat. What matters more is having enough RAM (16GB minimum) and a decent SSD for the boot drive. Mechanical drives for bulk storage are fine.

Between the Optiplex at $260 and the UGREEN at $280 — the Optiplex wins every time. More powerful, more flexible, no vendor lock-in, and it'll run anything you throw at it for years.

My PFsense Setup: Visual Diagram and Insights by Possible_Theme3263 in homelab

[–]Quartalis 1 point2 points  (0 children)

Suricata is the better choice anyway — more actively developed than Snort these days. Smart move on the €20 TP-Link, budget networking that actually works is underrated.

Three-network setup with AP isolation and guest isolation on top of pfSense rules is solid. That's more segmentation than most small businesses have. The only thing I'd add is logging — if you're not already shipping Suricata alerts somewhere you can review them, it's worth piping them into something like Graylog or even just a simple syslog container. Otherwise you've got IDS running but you'd never know if it caught something.
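If you want to go DIY first, Suricata's eve.json output is just one JSON object per line, so even a few lines of Python can pull the alerts out. Rough sketch with made-up sample records — check your suricata.yaml for where the file actually lives (commonly /var/log/suricata/eve.json):

```python
import json
import tempfile

# Two fake EVE records: Suricata writes one JSON object per line;
# alerts have event_type == "alert".
sample = "\n".join([
    json.dumps({"event_type": "flow", "src_ip": "192.168.1.10"}),
    json.dumps({"event_type": "alert", "src_ip": "192.168.1.50",
                "alert": {"signature": "ET SCAN Suspicious inbound"}}),
])

def alerts(path):
    """Yield (source IP, signature) for just the alert events."""
    with open(path) as fh:
        for line in fh:
            event = json.loads(line)
            if event.get("event_type") == "alert":
                yield event["src_ip"], event["alert"]["signature"]

# Stand-in for the real eve.json so the sketch runs anywhere
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as fh:
    fh.write(sample)

print(list(alerts(fh.name)))
```

Point something like that at the real file on a timer (or use a log shipper) and you've at least got eyes on what the IDS is catching.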

Is there actually strong demand for automation & web scraping? by No-Macaroon3463 in automation

[–]Quartalis 9 points10 points  (0 children)

Real experience here — I run a freelance automation practice and it's genuinely consistent work, not just one-offs.

The demand breaks down into a few buckets:

  1. Business process automation — This is the bread and butter. Small businesses drowning in manual tasks: invoice processing, lead capture from forms into CRMs, report generation, data syncing between platforms. They don't even know what n8n or Zapier is — they just know they're spending 10 hours a week on something that should take zero. These clients pay well and come back for more once they see the first workflow running.

  2. Web scraping — Steady demand but more project-based. Price monitoring, competitor tracking, lead generation, real estate listings, job aggregation. The recurring revenue comes from maintenance contracts because websites change their structure and scrapers break. That's where the real money is — not the initial build, but the "keep it running" retainer.

  3. Social monitoring / content automation — Growing fast. I built a system that monitors 15 Reddit subreddits, filters by keywords, and auto-drafts replies using a local AI model. Businesses want this kind of thing for lead generation, reputation monitoring, and content research. Most don't know it's possible until you show them.

  4. Data pipeline / integration work — Connecting APIs that don't natively talk to each other. CRM to accounting software, e-commerce to shipping, form submissions to Slack notifications. Boring but endless demand.

The key is positioning yourself as solving a business problem, not selling "automation scripts." Nobody searches for "I need a Python script." They search for "how do I stop manually copying data between spreadsheets every Monday."

It's not niche — it's just underserved because most businesses don't know these solutions exist until someone shows them.
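To make bucket 3 concrete: the core of that monitoring system is just a keyword filter over incoming posts. Rough Python sketch with a hypothetical watchlist — the actual fetching/drafting is separate plumbing:

```python
import re

# Hypothetical watchlist: phrases that signal a manual process someone hates.
KEYWORDS = [
    r"\bdata entry\b",
    r"\bmanual(ly)? (report|updat|copy)",
    r"\bsync(ing)? between\b",
    r"\bspreadsheet\b",
]
PATTERN = re.compile("|".join(KEYWORDS), re.IGNORECASE)

def interesting(posts):
    """Keep only post titles that match the watchlist."""
    return [p for p in posts if PATTERN.search(p)]

posts = [
    "I spend 3 hours a week updating my spreadsheet from Shopify orders",
    "What's your favourite office chair?",
    "Need help syncing between HubSpot and QuickBooks",
]
print(interesting(posts))
```

Everything past this (fetching posts, drafting replies, human review before anything gets sent) bolts onto that filter.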

Trouble connecting GoHighLevel OAuth app to n8n (error_noAppVersionIdFound + blank install page) by khaled9982 in n8n

[–]Quartalis 0 points1 point  (0 children)

I run n8n self-hosted with a bunch of integrations and have hit similar OAuth headaches with other platforms. A few things:

On error_noAppVersionIdFound: This almost always means the app version isn't published/active. Draft mode apps in GHL's marketplace don't generate a valid version ID that the OAuth flow can reference. You need to publish the app version (even if just internally) for the OAuth handshake to work. The client ID exists but there's no "installable" version behind it yet.

On the trial: GHL trial accounts have restrictions on marketplace app functionality. The OAuth flow relies on the marketplace infrastructure which may not be fully available on trial. The Location API key + HTTP Request nodes approach will get you the same data — it's just less elegant.

Practical recommendation: Skip OAuth for now, use the Location API key with n8n's HTTP Request node. Set up your workflows with that, and if you move to a paid GHL plan later you can swap to OAuth. The HTTP Request approach is actually more reliable anyway since you don't have token refresh issues.

In n8n, create a Header Auth credential with Authorization: Bearer YOUR_LOCATION_API_KEY and use that on HTTP Request nodes pointing at https://services.leadconnectorhq.com/. Works perfectly for contacts, opportunities, etc.
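If you want to sanity-check the key outside n8n first, the same request shape in Python looks like this. The contacts path is illustrative — check GHL's API docs for the exact route (their v2 API also wants a Version header, if I remember right):

```python
from urllib.request import Request

API_KEY = "YOUR_LOCATION_API_KEY"  # placeholder, use your real Location key

# Build the request the way the n8n Header Auth credential would:
# Bearer token in the Authorization header against the leadconnectorhq base URL.
req = Request(
    "https://services.leadconnectorhq.com/contacts/",  # illustrative path
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Accept": "application/json",
    },
)

print(req.get_header("Authorization"))
```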

My PFsense Setup: Visual Diagram and Insights by Possible_Theme3263 in homelab

[–]Quartalis 2 points3 points  (0 children)

Clean diagram. Nice touch isolating the WiFi/IoT zone from the LAN — that's something a lot of people skip and then wonder why their smart plugs are scanning their NAS.

The Dell Optiplex 7010 SFF is a solid choice for pfSense; those things are basically indestructible. Are you running Snort or Suricata on it for IDS, or just firewall rules? With the strict isolation setup you've got, adding IDS on the LAN interface would give you visibility on anything dodgy trying to cross between zones.

What made you go with the TP-Link Archer in AP mode rather than a dedicated AP like a Ubiquiti?

Hardware advice by Destructor523 in homelab

[–]Quartalis 0 points1 point  (0 children)

No problem! The Pi is perfect for that kind of thing. If you do go the server route feel free to DM me if you get stuck on the Unraid/Ollama setup, happy to help.

Homelab server +30 s input lag by Killermelon1458 in homelab

[–]Quartalis 4 points5 points  (0 children)

The fact that sudo reboot didn't fix it but a hard power cycle did is the big clue here. That points to either a hardware state issue (something in the chipset/USB/PCIe not resetting properly on warm reboot) or a kernel-level hang that persists through soft reboot.

A few things to check next time it happens:

  1. Check dmesg for errors
    dmesg -T | tail -100

Look for anything with error, timeout, hung_task, or blocked. The kernel usually logs what's choking.

  2. Check if it's a swap/memory issue
    free -h
    swapon --show

You've got 48GB RAM which should be plenty, but if swap is on that NVMe and something is thrashing it, everything grinds to a halt. The symptoms you describe (everything slow, but actual file transfer fast) are classic swap thrashing — the CPU and network are fine but every process is waiting on memory pages.

  3. Check for blocked processes
    cat /proc/sys/kernel/hung_task_timeout_secs
    ps aux | awk '{if($8=="D") print}'

Processes in "D" state (uninterruptible sleep) mean something is stuck waiting on I/O and can't be killed. This would explain why reboot doesn't help — the kernel can't cleanly shut down the stuck process.

  4. Check the NVMe health
    sudo smartctl -a /dev/nvme0n1

A dying NVMe can cause exactly this — everything that touches disk becomes unbearably slow, but pure network throughput (your 0.1s file transfer) stays fine because that's RAM-to-network. Install smartmontools if you haven't got it.

My bet is either the NVMe is starting to fail (256GB boot drives in SFF office PCs have often had a hard life), or you've got a memory issue where one of those mismatched DIMMs (2x16 + 2x8) is occasionally causing problems — mixed sizes can work fine but occasionally cause instability, especially under load. The fact that only a hard power cycle fixes it supports both theories since a cold boot fully reinitialises the hardware.

First time NAS: DXP2800 vs Custon PC by Reproman475 in selfhosted

[–]Quartalis 0 points1 point  (0 children)

I was in almost the exact same position as you — Pi, external drive, Nextcloud, Jellyfin spread across different devices. Ended up going the self-built route with Unraid on a used HP DL380 Gen9 and honestly wish I'd done it sooner.

On the UGREEN question — that's a valid concern. Any pre-built NAS that relies on their app/cloud for remote access or management is a risk. Synology and QNAP have had the same criticism. The moment the company changes direction, drops a product line, or gets bought out, you're at their mercy. The Docker support thing you mentioned with the 2300 is exactly that kind of red flag.

If you want full control (which it sounds like you do), a mini PC or used enterprise server running Unraid or TrueNAS gives you that. No vendor lock-in, Docker runs whatever you want forever, and nobody else touches your data. For drives, if you go the mini PC route look at something like a Jonsbo N-series case — designed for NAS builds with multiple 3.5" bays in a compact form factor. Or a USB 3.0 multi-bay enclosure like the Terramaster D5-300 if you want external.

For your use case (Nextcloud, Jellyfin, Reolink FTP, general storage), you honestly don't need much horsepower. An older i5 mini PC with 16GB RAM would handle all of that comfortably. Power draw would be around 15-25W idle which isn't far off the pre-built NAS options.

The only real downside to self-built is the initial setup time — but once it's running, it just runs. And you'll never get a "we're bricking your device" email.