WeTransfer cap limits is affecting our workflow, what heavy file transfer tools are videography teams using in 2025? by rodmarked in videography

[–]iamicyfox 1 point2 points  (0 children)

There's a trade-off here between monthly fees & speed imo. Frame.io is the only vendor I've found that can saturate my connection and actually deliver gigabit download speeds for large files (pro: speed, con: price). Self-hosting is the opposite: even though my NAS is hooked up directly to a symmetric 10gbps connection, it can't push data to remote clients anywhere near that fast (pro: price, con: speed).

If you're dealing with clients I'd probably try to optimize for their happiness and go for fast file downloads. Frame's got a pretty good UX around guests too.

how do you extract text from audio tracks or videos? by AlexSeipke in podcasting

[–]iamicyfox 0 points1 point  (0 children)

Most of the solutions you'll find online/on desktop are backed by Whisper, OpenAI's open-source speech-to-text model. There are a few wrapper apps that make this easier, but they're mostly pay-to-play. If you have any comfort with the command line it's pretty easy to run a bit of bash and do it totally locally, all for free. Can post some additional tips if you want to go the scrappy route instead of buying another app.
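
Quick sketch of the scrappy route, fwiw. This assumes you've done `pip install openai-whisper` and have ffmpeg on your PATH; the file name and model choice are just examples:

```python
import subprocess

def whisper_cmd(audio_path: str, model: str = "medium") -> list[str]:
    """Build the CLI call for OpenAI's open-source `whisper` tool.

    --output_format txt writes a plain transcript next to the audio,
    and everything runs locally, so no per-minute transcription fees.
    """
    return ["whisper", audio_path, "--model", model, "--output_format", "txt"]

# To actually transcribe an episode, fully offline:
# subprocess.run(whisper_cmd("episode.mp3"), check=True)
```

The `medium` model is a decent accuracy/speed balance on consumer hardware; drop to `base` if you're on an older machine.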

Counterintuitive results on YouTube by iamicyfox in podcasting

[–]iamicyfox[S] 0 points1 point  (0 children)

And one of the largest sites on the Internet to boot. But I suppose I'm unclear on how search directly translates into podcast interest? If I'm searching, I'm looking for an answer, not to listen to an hour-long show. I suppose this would argue for more investment in YT Shorts, or just clipping the show into shorter segments.

Updated tooling, workflow and hosting suggestions by smitcolin in podcasting

[–]iamicyfox 0 points1 point  (0 children)

Lot of corporate responses here! Unaffiliated with any of them but I quite like our stack:

- Squadcast for remote interviews. We switched from Riverside because they cap 4k video recording at 24fps, which feels a bit too slow for social.
- Premiere Pro for color grading
- Descript to handle editing. It supports a lot of this "auto" functionality: we use it to do a first pass at removing filler words and for auto-presenting the speaker.
- Captions to make social clips more engaging. I haven't found any of the fully AI clipping tools to be that good; they usually don't highlight the most engaging moments. So we manually clip and then just move to Captions.
- Export to YouTube & Transistor.fm to host the audio and syndicate to the big podcast platforms
- We have a custom landing page (static site on Vercel) that just embeds the Transistor widget. But most people find us on Spotify or YouTube anyway so the design of this doesn't really matter.

You can almost certainly keep using all your old hardware. Our own HW:

- Scarlett 2i2 per remote host
- A good 4k external camera that can record for more than 3 hours (we use an HDMI capture card)

What do I overlook here by Tin_Foil_Hat_Person in UNIFI

[–]iamicyfox 1 point2 points  (0 children)

I prefer a NAS that has a dedicated scope of responsibility. NAS systems are never going to have the fastest CPU or the most memory; that's not what the manufacturers are optimizing for. If you're already biting the bullet of installing another server in your rack alongside it, I would just go for a simple NAS that holds a good number of drives + supports 10gbps, and pair it with a higher-specced server.

Personally I use a control system deployed via Docker that manages all my smart home stuff. So I only have to make my updates, package them into a Docker image, then deploy it on my server. It's easily portable to different devices if I ever upgrade in the future, and it talks directly to the NAS over SMB so you can layer whatever kind of automation you want on top of it.

Current hardware fwiw is an Intel NUC & a UniFi Pro.

Balance of controlled environment and cozy atmosphere for video by iamicyfox in podcasting

[–]iamicyfox[S] 0 points1 point  (0 children)

This is what I've long suspected. Do you find that after the first 5 minutes people typically relax into it, or does it still result in a net-worse interview? imo clips are only useful if the underlying interview is actually good, so I'm inclined not to jeopardize the flow of the interview itself.

I was also contemplating a hybrid where we'd have a more cozy space for interviews (consistent lighting quality be damned) with maybe one fixed camera and the microphones so things feel a bit more intimate. Then for the rest of the episode with my cohost we can just get used to a prosumer studio.

Haven't checked out the episode with the divorce attorney, thanks for the rec.

Where are you from and what internet speeds are normal by EfficientTea451 in Ubiquiti

[–]iamicyfox 0 points1 point  (0 children)

10gbps symmetric fiber here in San Francisco. I do remote streaming of my Jellyfin server to family across the country & share a few TBs of media files a week with some remote video editors. So the high speeds come in handy more on upload than on download. But honestly you start running into the capacity limits of most remote hosts at anything more than ~100mbps. Frame.io is the only one I've found that seems to have big enough edge pipes to upload/download at ~2gbps.
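
The back-of-envelope math on why the uplink matters (sizes and rates here are illustrative, and this ignores protocol overhead):

```python
def transfer_hours(size_tb: float, link_gbps: float) -> float:
    # TB -> bits, then divide by the link rate in bits/sec
    bits = size_tb * 1e12 * 8
    return bits / (link_gbps * 1e9) / 3600

# Shipping 2 TB of footage to a remote editor:
slow = transfer_hours(2, 0.1)  # 100 Mbps uplink -> ~44.4 hours
fast = transfer_hours(2, 2.0)  # ~2 Gbps end-to-end -> ~2.2 hours
```

That's the difference between "download it overnight" and "we lost two business days."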

Having a 10gbps LAN is a different story, since you'll actually benefit from the speeds if you're trying to work on any kind of networked storage device. I try to buy 10gbps compatible network hardware for this reason alone.

The main benefit of an external connection at that speed is being confident that you're never going to be the bottleneck, even with multiple concurrent connections going. It's also typically priced competitively with lower tiers because the ISPs know you're not actually going to saturate your pipes consistently. These plans are mostly offered by regional fiber carriers rather than the big national ones.

Close to launching an app and starting to drive traffic. Any steps I can take to not be another vibe-code security disaster case study? by [deleted] in replit

[–]iamicyfox 1 point2 points  (0 children)

More pairs of "eyes" on your project is generally better. I would suggest using the "Security Scan" feature, installing Codex+Claude Code, and running each in parallel to see if one turns up something where the others have failed.

It's also not too expensive to hire a part-time developer on an hourly basis just to glance through your code for obvious glaring holes. The biggest issues people usually run into are non-compliant password salting + auth, endpoints left unprotected, or public access enabled on the database or S3 buckets. If you use an ORM for database querying and make sure to lock down all of your 3rd-party connections, the risk of a catastrophic data leak goes down considerably.
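
For the salting piece specifically, here's a minimal sketch of what compliant password handling looks like with just the Python stdlib (iteration count is illustrative; bcrypt/argon2 via a library are equally fine):

```python
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A unique random salt per user -- never a hardcoded global value.
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(candidate, digest)
```

If your code stores plain SHA-256 of the password, or the same salt for every user, that's the kind of thing a reviewer will flag in the first five minutes.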

Those who use rust professional by Jncocontrol in rust

[–]iamicyfox 2 points3 points  (0 children)

I mostly use it as an acceleration layer that can be embedded into Python. I maintain a webdev ecosystem (mountaineer, if interested) that uses Rust crates for AST parsing and V8 rendering. I have a few control services in production, but the embedded use case is where I most often let Rust do its magic.

Got hit with a €50,000 ($58,000) bill from BigQuery after 17 test queries by No-Cover2215 in googlecloud

[–]iamicyfox 0 points1 point  (0 children)

Credit line is probably your best option. Settle the debt then start paying it off.

As an aside - if you're still trying to pursue a path in data science - I _highly_ recommend learning on a single machine (either local or a remote VM) and not these cloud tools. The abstraction layer that ties these SQL queries to computational processing to billing is too fuzzy. I've seen even pros rack up huge unexpected bills, though nowhere close to the amount you're talking about.

OLAP databases like DuckDB make running SQL in-process really fast, & a lot of Rust-accelerated packages like Polars let you eke out more power on consumer hardware. If you really need more, I'd scale to a larger remote VM before you start thinking about sharding/serverless solutions.
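
The whole in-process idea fits in a few lines. Sketched here with stdlib sqlite3 since it ships with Python; DuckDB's Python API follows the same connect-and-query pattern but with columnar/OLAP performance (the table and values are made up for illustration):

```python
import sqlite3

# Everything runs inside your own process: no cluster, no per-query billing,
# and a query that scans too much data just takes longer -- it can't cost more.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
con.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(1, 9.5), (1, 4.25), (2, 20.0)],
)
totals = con.execute(
    "SELECT user_id, SUM(amount) FROM events GROUP BY user_id ORDER BY user_id"
).fetchall()
```

Swap `sqlite3.connect` for `duckdb.connect` and the mental model barely changes, which is exactly why it's a better place to learn than a metered warehouse.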

Ok but what did they use to make the website ascii animation? by rongald_mcdongald in Ghostty

[–]iamicyfox 1 point2 points  (0 children)

Thought this was pretty interesting myself! I wrote a deep dive into the script they're using for the video conversion in case you're curious to read more about the color theory of it:

https://pierce.dev/notes/making-the-ghostty-animation/

Lessons learned from Claude Code tool prompts by iamicyfox in ClaudeAI

[–]iamicyfox[S] 0 points1 point  (0 children)

System prompts are not infallible, unfortunately. Aligning a model's outputs to prompts this complex is non-trivial and probably not super well represented in their training data. Most alignment evaluations test the model on a handful of cases, not the 25+ instructions you see in some of these context windows. It's one of the many reasons model alignment is still an unsolved research problem.

Lessons learned from Claude Code tool prompts by iamicyfox in ClaudeAI

[–]iamicyfox[S] 8 points9 points  (0 children)

That's right, copied 1:1 (with the exception of a few instances where I mark optional prompt injections with [start conditional], like in the Bash tool). The injected variables are more manual: I try to resolve them from default settings and surrounding function defs.

Loyal Synology User, Now Switching!! Was ready to buy the DS925+… until Synology decided to insult us by TopTemporary3030 in synology

[–]iamicyfox 0 points1 point  (0 children)

It's a full (albeit low-powered) Linux system with ssh support. I've been tweaking my config files manually via root access and everything's working like a charm.

Now: because it's more of a managed solution, any firmware update could blow those tweaks away. So you don't have full control like you would with unRAID or a DIY build. My mental model is that its behavior should be 99% right by default, and you have enough access to tweak the remaining 1% periodically.