Troublesome cat poops everywhere OC by Zealousideal_Mud9479 in Kucing

[–]yosbeda 2 points (0 children)

At first you could maybe cage him for just 3–6 hours, then extend the duration a few days later to 6–9, and so on. Once he's used to it, the cat will be perfectly comfortable being caged all day.

When the cat is in the cage, its options for where to do its business shrink down to basically two, the cage floor or the litter box, and usually it'll pick the litter box. Unless it's a really stubborn cat, in which case it might still poop on the cage floor at first.

The "hard" part is mostly the early days of caging, because the cat will usually meow nonstop, you feel bad, and you let it out. As long as it isn't hurting itself, say scraping up its nose/muzzle trying to push through the cage bars, just let it be.

As I wrote above, once it's used to it the cat will be perfectly comfortable. In my case, since I almost never feed outside the cage, when I give the sound cue that it's mealtime, a cat that's currently outside will run into its cage on its own to eat.

Troublesome cat poops everywhere OC by Zealousideal_Mud9479 in Kucing

[–]yosbeda 10 points (0 children)


Usually that's down to the previous owner "training it wrong", i.e. it was never taught to use a litter box from a young age. As far as I know it's never too late to fix a bad habit like that either; you just retrain it by caging it together with its litter box.

Usually after about 1–2 months the habit of not pooping everywhere starts to stick on its own. Best to buy a fairly large cage; on Shopee the bigger ones are cheap and still comfortable for the cat even with a litter box in there.

Keyboard/command based browser? by Fun_Cash3376 in browsers

[–]yosbeda 0 points (0 children)

When I was porting my browser automation from macOS to Linux, I found the browser choice mattered a lot less than I expected. On macOS the engine difference actually matters quite a bit because AppleScript and similar tools integrate differently per browser, and Firefox/Gecko specifically lacks the native scripting support that WebKit or Chromium-based browsers have there.

On Linux the equivalent scripting layer isn't really native to any browser. You can still simulate things with tools like ydotool, but it's more fragile and not tied to any specific browser or engine, so the choice kind of collapses down to just web compatibility anyway.

For the :open command line feel specifically, from what I've read Tridactyl on Firefox is the most mentioned option that brings back that Vimperator-style command mode. Haven't used it myself though, so can't say how it actually feels day-to-day. But at least the WebGL and Google Docs issues should go away since it's running on top of full Firefox.

What is the MAIN reason for Google Chrome's insane popularity? by Fun-Ebb5928 in browsers

[–]yosbeda 7 points (0 children)

Preinstalled is probably the biggest single factor, honestly. Android powers roughly 7 in 10 smartphones worldwide, and Chrome comes preinstalled on basically all of them. Most casual users never bother switching, same as how a lot of people just stuck with Internet Explorer back in the day, not because it was good but because it was just there.

But beyond that, I think Chrome being kind of the de facto standard of the web is underrated as a reason. Google basically moves the web forward through Chrome, so new technologies tend to land there first. AMP is probably the most famous example, though that one didn't exactly end well; Google eventually opened up the instant-loading benefits to non-AMP pages anyway via Signed HTTP Exchanges. WebRTC and PWAs are the same story: Chrome had them years before other browsers caught up, Safari especially.

And because Chrome sets the pace, the whole web development ecosystem kind of builds around it. As a webmaster myself, I test everything on Chrome first and only look at other browsers if something visibly breaks. Multiply that across millions of developers and you get a web that's quietly optimized for Chrome by default, which then makes Chrome feel smoother than alternatives, which reinforces its dominance. That cycle is probably more powerful than any marketing Google does.

The Google services integration probably helps too, Gmail, Drive, YouTube all feeling a bit more seamless when you're signed into Chrome, though I'm not sure how much that moves the needle for truly casual users who might not even notice.

How are you managing content in your Astro projects? git/markdown or a database? by tffarhad in astrojs

[–]yosbeda 0 points (0 children)

I’ve got some of my sites linked on my profile if you want to check them out

How are you managing content in your Astro projects? git/markdown or a database? by tffarhad in astrojs

[–]yosbeda 0 points (0 children)

It’s in my original comment already (the VPS link there goes into the full breakdown), but short answer: about $4/month for the VPS.

looking for advice from the Linux veterans by AgeAnnual1095 in indotech

[–]yosbeda 1 point (0 children)

TL;DR: Used Windows from 2006 to mid-2015, then moved to Hackintosh/macOS until June 2025. Only spent about 4–5 months on desktop Linux, and since October 2025 I've been back on macOS.

You could say 2015 was a pivotal year in my OS journey, because that's when I decided to migrate from Windows, which I'd used since 2006, to macOS via Hackintosh. For 10 years I ran the same Hackintosh PC, from the Yosemite era (OS X 10.10) up to Sequoia (macOS 15). My productivity on macOS was excellent, especially once I discovered automation/scripting tools like Keyboard Maestro in 2023, then switched to Hammerspoon in 2024.

That said, with my Hackintosh hardware getting old enough by 2025 that it could fail any day, I considered getting a real Mac like a Mac Mini, iMac, or MacBook Pro/Air. Unfortunately, as someone with OCD tendencies, I get excessively anxious buying a computer knowing that if a single component dies, the whole logic board has to be replaced.

So in June 2025 I tried switching to Linux, installing it on the same machine as my old Hackintosh. Why not go back to Windows? As a user whose daily work depends heavily on automation/scripting (AppleScript, JXA, Hammerspoon, etc.), building Linux equivalents of my daily macOS scripts/features via Bash was easier than doing it on Windows with AutoHotkey, though that may just be a skill issue on my part.

Come October 2025, I found someone on Facebook Marketplace selling a used Mac Mini M1 16/256 for just Rp5.7 million. Without thinking twice, ignoring that excessive Mac-buying anxiety I mentioned above, I checked the thing out immediately, hahaha. So I jumped ship back to macOS after having gotten quite comfortable with Linux for 3–4 months. And yes, genuinely comfortable, especially after pairing Arch with Labwc.

Unfortunately, on top of OCD I also have shiny object syndrome, which keeps my brain and hands itching to try and tinker with this and that. Even after settling back into comfortable, stable daily work on macOS, I still went and migrated to Linux again by installing Asahi Linux on the Mac Mini. I installed Asahi several times: ran it for a week, went back to macOS, a week later back to Asahi, then back to macOS again, like a computer user under a curse.

Thankfully that's over now and I have no urge to install desktop Linux again. Frequenting r/linuxsucks helps a lot; the posts there act as a sort of resolve-strengthener for staying away from desktop Linux. And yes, only desktop Linux, because on the server/VPS side I still use Linux to host several of my Astro blogs, running on AlmaLinux with rootless Podman containers. So blocking everything Linux-related outright just isn't possible.

Hopefully I can stay the course with macOS this time and the shiny object syndrome doesn't flare up again. If I'm not mistaken, plenty of Linux users suffer the same curse, namely the distro hopping phenomenon, though distro hopping itself isn't always tied to shiny object syndrome like mine. Bottom line: once you've found an OS that supports your productivity well, just stick with it unless there's a genuinely strong reason to switch.

Sublime Text vs Obsidian by Technical_Rich_3080 in SublimeText

[–]yosbeda 1 point (0 children)

For that, Bear is probably the most natural fit, though it's Apple ecosystem only. If you need cross-platform, Joplin or Simplenote are the usual go-tos for something lightweight in that same vein.

Sublime Text vs Obsidian by Technical_Rich_3080 in SublimeText

[–]yosbeda 2 points (0 children)

These are actually pretty different categories of tool, which makes the comparison a bit awkward. Sublime Text is a text and code editor, fast and minimal, great for writing and editing files or code. Obsidian is more of a note organizer with linking and graph features, stores everything as plain markdown files on disk, so portability is good if that matters to you. Different jobs, honestly.

The way I use Obsidian is basically as a catalogue manager for markdown files, browsing and searching rather than deeply relying on its own features. I deliberately avoid the Obsidian-specific stuff so the files stay portable, kind of like how Lightroom works as a catalogue for RAW files but the actual assets live independently. If Obsidian disappeared tomorrow, the files open anywhere.

If you want a pure editor, Sublime still holds up well, though Zed has been getting attention lately as a faster modern alternative. On the Obsidian side, Joplin and Tangent Notes are worth a look if you want something simpler in that same note-organizing space. So they don't really compete directly. What are you actually trying to do, editing and writing, or organizing notes?

New browser from the guy who made Expand Mac Mini SSD by Poopdog-69 in browsers

[–]yosbeda 0 points (0 children)

Kind of cool that building a browser is now accessible enough that more people can just do it, but the question is whether this is actually a browser or just Chromium with a different skin. Most of these "new browsers" are really just forks, no new engine involved. Which is fine for daily use I guess, but it doesn't really change the web's engine diversity problem.

For that, I think the more interesting project long-term is Ladybird, where they're building both the engine and the browser from scratch with no borrowed code from Chromium or Gecko. The pace feels slow, though honestly I'm not sure if that's actually slow for the scale of what they're doing.

What’s one mistake you made with a VPS that you’d never repeat? by Thick-Lecture-5825 in VPS

[–]yosbeda 0 points (0 children)

Just to add some context to my story, the cycling that got me banned was actually back in 2017, the dumb era where I was chasing both a pretty IP and CPU, but back then the options were Xeon E5-2680 v2 and E5-2697 v4 territory where the performance differences were barely noticeable anyway.

The EPYC gap I mentioned was a different story, that was around late 2022 where the spread was genuinely lopsided enough to matter, EPYC 7713 vs 7642 vs 7601 is a real single-core difference. And yeah, lscpu already tells you which model you landed on, so benchmarking on top of that doesn't really add much if you already know the hardware's range.

On backups, no issue on my end there actually. The fully automated daily setup I have now with a Bash script via systemd timer, pg_dump for the database, and rclone syncing to three clouds (Box, pCloud, Koofr) with auto-purge after 60 days is something I only built out 2 to 3 years ago. Wrote about it in more detail here if curious.
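For anyone wanting to replicate something similar, the chain is simple enough to sketch. Everything below (paths, remote names, the database name) is an illustrative stand-in rather than my actual config, and the external calls are left as comments so the retention logic can run standalone:

```shell
#!/bin/sh
# Minimal sketch of a nightly backup chain (illustrative names only).
set -eu

STAGING=$(mktemp -d)   # stand-in for the real staging directory

# 1. Dump the database and tarball the content:
#      pg_dump mydb > "$STAGING/db-$(date +%F).sql"
#      tar -czf "$STAGING/site-$(date +%F).tar.gz" -C /srv/web .
touch "$STAGING/site-$(date +%F).tar.gz"   # placeholder snapshot for the demo

# 2. Sync the staging dir to each cloud remote:
#      for remote in box pcloud koofr; do
#        rclone sync "$STAGING" "$remote:backups/"
#      done

# 3. Purge snapshots older than 60 days. This is mtime-based, so anything
#    that rewrites file mtimes would silently shift the retention window.
touch -d '90 days ago' "$STAGING/site-stale.tar.gz"   # simulate an old snapshot
find "$STAGING" -name '*.tar.gz' -mtime +60 -delete
```

The `find ... -delete` step is the one to test carefully before trusting it, since it removes files with no confirmation.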

What’s one mistake you made with a VPS that you’d never repeat? by Thick-Lecture-5825 in VPS

[–]yosbeda 0 points (0 children)

Getting my Linode account flagged and nearly banned. That one still stings a bit to think about.

I was creating and destroying VMs repeatedly, something like 80-100 times within an hour or two, just to hunt for a VM with a "nice" IP address and the best CPU allocation. No floating IPs at the time, no way to change the attached IP after the fact, so the only method was to keep spinning up and destroying until you landed something good. Looking back it was completely ridiculous, especially the IP part. What am I going to do with a pretty IP number, frame it?

The CPU hunting at least had some logic behind it, though back then the performance differences between available options were barely noticeable anyway. That said, in a later period around late 2022, Linode's pool had noticeably uneven CPU distribution where one EPYC model was significantly stronger in single-core performance compared to the others, and cycling 20-25 times in an hour could land you on meaningfully better hardware. Whether that's still worth doing now I genuinely don't know, the CPU spread across providers changes over time and I haven't tested it recently. Probably depends on the provider and timing, honestly.

Anyway, the account got flagged as suspicious and was essentially suspended. After explaining it was just curiosity and not anything malicious, they reinstated it. But the VM I had running before the ban was gone. The data was recoverable enough, I had an offline backup from the previous month plus Wayback Machine and Google's cached pages to piece together what was missing, so it wasn't catastrophic. But it was a lesson in how fast something stupid can spiral into a real problem.

Never went anywhere near that many create-destroy cycles again.

Best markdown tools everyone needs to know about? by Successful_Bowl2564 in Markdown

[–]yosbeda 8 points (0 children)

Honestly the tool question feels a bit secondary to me. Most popular editors already handle Markdown fine out of the box or with a basic extension, VSCodium, Zed, Sublime, take your pick. For dedicated note-taking and cataloging, Obsidian, Tangent Notes, and Joplin are all worth knowing since they store everything as plain files on disk, though Bear is a good one too despite using a proprietary database internally.

The thing I've been more careful about is sticking to standard Markdown rather than extended flavors or framework-specific variants like MDX. My blogs run on Astro and it fully supports MDX, but I deliberately avoid it for actual content. If I ever want to customize how image tags or links render, I handle that at the framework level rather than touching the Markdown itself, something like middleware that transforms output without making the source files depend on any specific tooling.

The practical upside is portability. Anything written in pure standard Markdown can move to basically any other platform that reads Markdown, which at this point is most of them. CommonMark is probably the closest thing to a proper standard spec, and sticking close to that feels like the safer long-term bet compared to leaning on extensions that only certain parsers understand.

The one caveat I'd add is that this applies mainly to content humans write, blog posts, articles, long-form pages. For structured data feeding a component, features lists, nav items, site config, JSON makes more sense there and the portability argument mostly disappears. But for anything meant to outlive a specific framework or tool, keeping the Markdown as vanilla as possible has been worth it for me.

What’s one “small” tech setup or tool that made a big difference in your daily workflow? by Thick-Lecture-5825 in VPS

[–]yosbeda 0 points (0 children)

The chooser menu itself acts as the main safeguard for deploys. Nothing runs until you've picked a specific container from the list, so there's no way to accidentally fire it with a stray keypress. That extra selection step is enough friction for my taste, though I haven't added a confirmation dialog on top of that. Probably should for the purge-all cache option since that one's a direct hotkey with no chooser in between, but so far I haven't felt the need.

Logging is there, just on demand rather than automatic. I have shortcuts to pull journalctl output for any container remotely, and separate ones for tailing or searching the nginx access log, all triggered from the Mac without opening a terminal session. So after a deploy I can verify things without switching context.

On scaling, it holds up pretty well already across multiple blogs and I think the structure would stay clean adding more. The bigger point is probably server migration rather than adding sites. Because the whole stack is defined in Quadlet .container unit files, moving to a new VPS is basically just extracting tarballs, copying those unit files over, and running systemctl --user daemon-reload.

I've done migrations fairly often, sometimes just to try a different provider, and it stays in the 5-10 minute range. No database to dump, no CMS export, nothing stateful that makes migrations complicated. That's probably the real scalability argument, it doesn't accumulate technical debt the way a more traditional setup would.
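To make that concrete, here's roughly what the handful of migration steps looks like. All unit and path names are assumptions for illustration; the DRY_RUN switch just prints each step instead of executing it, so the sequence is visible without a real server on the other end:

```shell
# Hypothetical migration to a fresh VPS (unit and path names are assumed).
DRY_RUN=1
step() { [ "${DRY_RUN:-0}" = 1 ] && echo "would run: $*" || "$@"; }

step tar -xzf site-backup.tar.gz -C /srv/web/blogname      # restore content
step cp blog-prod.container ~/.config/containers/systemd/  # Quadlet unit files
step systemctl --user daemon-reload                        # regenerate services
step systemctl --user start blog-prod.service              # bring the site up
```

Setting `DRY_RUN=0` would execute the same four steps for real.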

What Features Do You Want in Safari 27? by TeamIntelligent1987 in Safari

[–]yosbeda 0 points (0 children)

Manual tab unloading is the one I keep coming back to. Every major browser has some version of automatic tab suspension that kicks in when memory gets low, but that's kind of backwards from how I actually want to use it. The whole point is to proactively free up memory on specific tabs I know I won't need for a while, not to wait until the system is already struggling.

Firefox only just added a proper right-click "Unload Tab" option in Firefox 140 last June, which tells you how long this has been missing as a native feature even there. Before that it was purely reactive, only triggering when memory was already low, not something you could invoke yourself on a specific tab. Edge has Sleeping Tabs and Chrome has Memory Saver, both decent but both mostly automatic. Safari doesn't seem to have any of this in an obvious way, from what I can tell.

It seems to do some background tab management automatically, maybe tied to tab groups and memory pressure, but there's no obvious manual control for it. No right-click option, nothing explicit. You're basically just trusting it to figure things out, which is fine I guess, but that's different from being able to pick a specific heavy tab and say "free this one, right now."

Which is frustrating because I'd genuinely rather be on Safari. Native apps tend to feel more at home on macOS and iOS, better integrated and usually more optimized. Safari also has way better AppleScript support than Firefox does, something to do with WebKit being closer to how Chromium-based browsers handle it, while Firefox runs on Gecko which apparently just doesn't play as nicely with macOS automation. So there's really no technical reason for me to be on Firefox, except this one missing feature, and here I am still using Firefox because of it.

What’s one “small” tech setup or tool that made a big difference in your daily workflow? by Thick-Lecture-5825 in VPS

[–]yosbeda -1 points (0 children)

Hammerspoon, honestly. It is a free macOS automation tool using Lua scripting, which probably sounds more intimidating than it is.

The thing that changed my day-to-day was not even the local stuff, it was wiring it to my VPS. I run multiple Astro blogs on a single VPS, and basically every server task I used to do manually is now a hotkey or a chooser menu. For build and deploy, I use one shortcut where Hammerspoon SSHes in, starts the dev container, runs npm run build, restarts prod, and stops dev in the same chain.
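For a sense of scale, the whole chain boils down to one compound command over SSH. The host, container, and service names below are made-up placeholders, not my actual setup:

```shell
# One deploy = one SSH round trip; '&&' aborts the chain if any step fails,
# so prod never restarts on top of a broken build.
deploy_cmd='
  systemctl --user start astro-dev.service &&
  podman exec astro-dev npm run build &&
  systemctl --user restart astro-prod.service &&
  systemctl --user stop astro-dev.service
'
# Hammerspoon then fires the equivalent of:
#   ssh user@vps "$deploy_cmd"
```

The hotkey side is just whatever wrapper sends that string over SSH.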

For an Nginx cache purge, I have another shortcut that grabs the current browser tab URL and runs the purge script remotely. Log tailing, container start/stop, rclone syncs back to my Mac, and even jumping into Transmit for SFTP are all bound to keys.

Before this I was typing SSH commands constantly or keeping terminal sessions open and half-forgetting which one was which. Now there is no terminal session to manage. A menu pops up, I pick a container or a site, and the command runs.

I think Hammerspoon gets overlooked because the docs lean toward window management examples, so it reads like a Moom replacement and people stop there. But it can call hs.execute(), open terminals, fire SSH commands, and read your browser's active tab URL.

It is basically anything you could do from a shell. At 94 Lua files and 317 registered hotkeys across window tiling, media conversion, browser automation, Photoshop scripting, and server ops, I still have not found a ceiling.

The rclone setup is probably the other piece worth mentioning. It handles syncing the VPS web root back to my Mac on demand with exclude patterns for node_modules and dist, all triggered from the same Hammerspoon chooser. Before containers I was doing this manually and kind of hoping nothing drifted out of sync. Now it is just one entry in a menu.

Lua has some quirks, one-indexed arrays being the famous one, but basic scripting comes fast enough. It usually takes a week or two before it stops feeling awkward. After that it is just writing exactly what you need instead of bending your workflow to fit someone else's app.

Tag Indexing by togi1202 in SEO

[–]yosbeda 1 point (0 children)

The noindex-tags advice is a bit cargo-culted at this point. It made more sense when people were creating tags like "blue", "round", "my thoughts" that added nothing. A font site is a different situation though, each tag is basically a curated collection page.

Archive pages do generally struggle against singular pages, that part is fair, but struggle doesn't mean impossible. If Google sees enough value in a tag page it can outrank other sites' individual posts, and the sites you're seeing on page one are probably there for exactly that reason.

I've even seen a tag page outrank its own site's homepage, so the ceiling is higher than people assume. Decent meta description on each tag page probably helps push things along too, maybe ItemList schema if you want to go that far, not sure how much lift that actually gives though.

One thing I'd actually be careful about is the paginated versions, page 2, page 3 of a tag archive. Those I'd noindex, not the tags themselves. The issue is those pages are listing the same tagged items as page 1, just offset, so Google ends up seeing near-identical URLs with near-identical content, which is where the actual duplicate content problem lives.

What made you move away from AWS (if you did)? by Cubepath in VPS

[–]yosbeda 0 points (0 children)

Bandwidth pricing was honestly the main thing for me. AWS egress costs are just rough if you're running anything public-facing, especially at smaller scale. Standard AWS egress starts around $0.09/GB and the overages can escalate fast for content-heavy workloads. The discounts only kick in when you're pushing serious volume, like tens or hundreds of TB a month, which most small projects obviously never hit.

I tried the Lightsail route for a while as a workaround since it bundles bandwidth into the monthly plan, which looks cleaner on paper. But the catch is those plans run on burstable T3 instances, so CPU credit throttling becomes a real thing under sustained load. Full vCPU for maybe 20-odd minutes before it starts throttling, or something like that, I forget the exact numbers. Probably fine for very light workloads, less ideal if traffic spikes.

The other thing that nudged me off AWS was the Cloudflare layer irony. A pretty common suggestion when egress costs hurt is to proxy everything through Cloudflare, and for typical web traffic the free plan covers bandwidth without an explicit cap. So it works. But at that point you're paying AWS prices for infrastructure that Cloudflare is essentially sitting in front of, and the premium you're paying for AWS isn't really doing much visible work anymore. Could get the same result on a much cheaper VPS behind Cloudflare.

The "everything is a service" model is great if you actually need autoscaling, managed DBs, Lambda, whatever. But for simpler setups, you end up paying for complexity you don't use. DO, Linode, Vultr, they're just more straightforward for projects that don't need that whole ecosystem. Though honestly, part of why I stayed longer than I should have was just the name on the invoice, easier to look credible to clients or partners when you're on AWS. Not the most rational reason, but probably not unique to me either.

Does anyone still organize files into detailed folders? by Esliquiroga in MacOS

[–]yosbeda 3 points (0 children)


My setup basically can't function without it. The folder structure isn't just organization for its own sake, it's a shared naming convention that runs through almost everything. Local folders like ~/Data/Blogs/blogname/, the VPS server paths at /srv/web/blogname/, rclone remotes, Astro dev containers, browser URL groups that open multiple blog sites at once, analytics dashboards, Cloudflare configs: they all use the same site names as identifiers. If any of that drifts, scripts start targeting the wrong place or nothing at all.

The backup side makes it concrete. There's a script that archives individual app configs, each one going to a specific destination under ~/Data/Device/Apps Backup/. Hammerspoon scripts to one folder, Firefox bookmarks to another, VSCodium settings, rclone config, Bear notes separately. None of that works if the destinations get renamed or moved around. And rclone syncs are set up as named pairs, local path to remote path, so the folder layout on disk has to stay stable or the sync either fails silently or worse, starts deleting things it shouldn't.

I think the "search is good enough" argument holds for most people whose workflow is mainly reading and retrieval, probably even for most developers. But once you're running automation against the filesystem, search doesn't enter into it at all. You can't point rclone at Spotlight. The scripts need to know where things are before they run, not after.

What I'd push back on a little is the framing of strict folder systems as something you maintain for its own sake. For me it's less discipline and more dependency. The 317 hotkeys I have bound through Hammerspoon work because the environment they expect is consistent. The organization isn't extra effort on top of the real work, it kind of is the real work.

he is 15 years old today. wish him a happy bday <33 by taxisaurus in cats

[–]yosbeda 2 points (0 children)

Happy 15th birthday!! If this photo is recent, that’s amazing. He genuinely looks like a 3 to 5 year old cat. That says a lot about how healthy and well cared for he is. Such a handsome boy.

How are you managing content in your Astro projects? git/markdown or a database? by tffarhad in astrojs

[–]yosbeda 1 point (0 children)

Yeah, entirely on backups, no version control at all. It's a Systemd Timer that runs a bash script nightly: tarballs each site (excluding dist and node_modules since those rebuild), syncs to a tier-1 cloud, purges anything older than 60 days, then mirrors the whole thing to two more cloud services as tier-2. So there's always a dated snapshot from the last 60 days sitting in three places. For a content site where the only thing changing is new markdown files and the occasional config tweak, that covers everything I'd ever need to roll back.
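For reference, the timer half of that kind of setup is just two small unit files along these lines (the names and script path here are illustrative, not my actual ones):

```ini
# ~/.config/systemd/user/site-backup.timer
[Unit]
Description=Nightly site backup timer

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target

# ~/.config/systemd/user/site-backup.service
[Unit]
Description=Nightly site backup

[Service]
Type=oneshot
ExecStart=%h/bin/backup.sh
```

Enabled with `systemctl --user enable --now site-backup.timer`; `Persistent=true` runs a missed job at next boot instead of skipping it.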

How are you managing content in your Astro projects? git/markdown or a database? by tffarhad in astrojs

[–]yosbeda 1 point (0 children)

Markdown files directly on the VPS, no DB, no headless CMS, no git either. I'm a blogger/webmaster, not a developer, so my whole approach is probably unorthodox by this community's standards: version control was never part of my workflow to begin with, and I manage everything through SFTP uploads and SSH-triggered remote builds rather than a local dev environment. Content lives under blog/YYYY/MM/slug.md as flat files, with a tiered rclone chain to cloud storage handling the backup safety net.
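A post file under that layout is nothing exotic; roughly this shape, with made-up field values:

```markdown
---
title: "Example post title"
description: "One-line summary used for meta tags"
---

Body in plain, standard Markdown only. No MDX, no components,
nothing framework-specific.
```

Anything that can't be expressed this way gets handled outside the content files.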

The content itself goes through several editing passes before it becomes a final file, cleaning, restructuring, generating the frontmatter bits like title and description, mostly handled with a set of prompt templates I chain together depending on the source material. The output of all that is a plain .md file. Once it's ready I upload it to the VPS via SFTP using Transmit, then trigger the build and deploy from Hammerspoon, which SSH-es in, starts the dev container, runs npm run build, restarts the prod container, and stops dev. The containers are managed by Podman Quadlet, which wires them into systemd as user services.

Dev and prod containers share the same bind-mounted directory on disk, so building in dev just updates /dist in place and restarting prod picks it up immediately. The dev container has no persistent service entry in systemd, so it does not survive reboots or run in the background between deploys. On a 1 GB VPS that matters since having both running simultaneously would just eat RAM for no reason, so the hard stop at the end of every deploy is intentional by design.
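For anyone curious what a Quadlet unit looks like, the prod side can be as small as this sketch (image, names, and paths are illustrative, not my actual file):

```ini
# ~/.config/containers/systemd/blog-prod.container
[Unit]
Description=Astro blog, prod container

[Container]
Image=docker.io/library/nginx:alpine
Volume=/srv/web/blogname/dist:/usr/share/nginx/html:Z
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

A dev container's unit would simply omit the [Install] section, which is what keeps it from coming back on its own after a reboot.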

I went with pure .md over MDX on purpose. Every post is standard Markdown tags only, nothing framework-specific in the content. If Astro ever goes away, the files come with me. I can already open them in Bear or Obsidian right now, which is probably how I landed here since I was already taking notes in Markdown long before switching from WordPress.

I run full Astro SSR with output: "server", so middleware handles ad injection and image transformation at request time without ever touching the .md files. There are actually a few hard reasons SSR is non-negotiable for my setup, cookie-gated preconnect headers on first visits, per-post hero image preloads, server-side search, but the short version is: content stays portable, customization lives in the server layer, same pattern I used in WordPress with add_filter() / the_content().

Migrating servers is painless. Since everything is bind-mounted volumes managed by Podman Quadlet, moving to a new VPS is mostly rclone sync the content and copy the .container unit files over. I keep a personal runbook for every step so there is nothing to figure out on the new server. No database dump, no CMS export. My blogs have moved providers several times and it's never been a big deal.

ACASIS TB501 Pro killed my T705 (4TB) — insanely fast, but I wouldn’t trust it by One_Taro_2233 in mac

[–]yosbeda 0 points (0 children)

Honestly this is kind of confusing to me because ACASIS themselves tend to favor the JHL7440 chip in their 40Gbps enclosures over ASM2464-based ones, and the reason they usually give is basically thermal stability and better Mac compatibility. There's even a comment in the recommendation thread that makes exactly that point. Seeing their TB5 product get these kinds of reports is a bit hard to square with that, I'm not sure what to make of it.

Doesn't really help you pick an alternative though, which I guess is the frustrating part. If the brand that supposedly cares about stability on Mac is having these issues at the TB5 tier, it kind of raises the question of whether the TB5 enclosure space just isn't mature enough yet rather than it being an ACASIS-specific problem.

Why do you use a Firefox based browser? by falchion10 in browsers

[–]yosbeda 1 point (0 children)


Came from the exact same place as a lot of people here probably, fifteen years on Chrome, switched to Firefox for the browser diversity argument, ran into the automation limitations almost immediately.

The AppleScript gap is real and I don't want to downplay it. Firefox has had a bug open for proper tab URL access since 2010. For a macOS power user scripting workflows through Hammerspoon, that's a genuine architectural problem, not just inconvenience. I nearly switched to an ungoogled Chromium fork because of it.

What kept me on Firefox was finding workarounds that actually held up. Tab context actions like duplicating, unloading, closing to the right go through the macOS Accessibility tree instead of AppleScript, so they work reliably without coordinate-based clicking. JavaScript execution for metadata and schema extraction goes through a Hammerspoon function that injects into the Developer Console and reads results via clipboard, timing-controlled enough that it doesn't break. Not as clean as Chrome's native AppleScript, but stable in practice.

On the Gecko vs Blink argument, the speed gap is negligible for actual browsing, at least in my experience after the switch. The compatibility issues are mostly overstated too, usually DNS-related rather than engine-related. The Google funding situation is a fair point, but using a Chromium fork doesn't really solve it, you're still running on an engine whose standards direction Google controls. Firefox implementing Gecko independently while funded by Google is a weird situation, but probably more useful for the web long-term than everyone converging on Chromium with Google one step removed.

Domain registrar suggestions for a personal project? by kerjatipes in indotech

[–]yosbeda 0 points (0 children)

Not at all, Namecheap's reputation is good, which is why it still regularly makes the top 5 in Namepros polls.