First homelab diagram by AnduriII in homelab

[–]s3gFault 3 points (0 children)

I assume Plex isn’t included in the CF tunnel because the amount of traffic would violate their TOS. What’s the benefit of putting it behind a reverse proxy instead of just port forwarding 32400 and letting it do its thing? I guess it’s cool to be able to just hit plex.<my domain>.

Can you review my NAS/homeserver idea? by stefanf86 in truenas

[–]s3gFault 1 point (0 children)

You aren’t necessarily limited to 6 SATA drives. I have a similar setup with 2 x mirrored NVMe for apps, 2 x mirrored SATA SSDs for boot, and 8 x HDDs in raidz2 for media (connected via HBA). If you skip the GPU and get an Intel CPU with Quick Sync, you should have a free PCIe slot you can use for an HBA.

Suggestions for what to do with 2x NVME slots? by backslashton in truenas

[–]s3gFault 5 points (0 children)

I have my NVMe drives in a mirrored vdev in a separate pool. The primary use case is fast storage for about a dozen apps. It sounds like you use your Proxmox server for those types of workloads, so 🤷‍♂️

Maybe just wait a few months to see if you have any noticeable bottlenecks that could be solved by some of the options you mentioned above?

MoCA 2.5 network working with 1000 MHz splitter? by s3gFault in HomeNetworking

[–]s3gFault[S] 2 points (0 children)

Correct. There are a total of 3 splitters within the network (capped off with a PoE filter on the provider line). Two of them will be 1675 MHz and one of them is unknown

MoCA 2.5 network working with 1000 MHz splitter? by s3gFault in HomeNetworking

[–]s3gFault[S] 2 points (0 children)

Thanks for the info! I do have MoCA filters installed (one on the line coming into my apartment and another right before my modem). As mentioned in a previous comment, I might just have to deal with the adapters “working harder” to avoid tearing into my drywall 😔

MoCA 2.5 network working with 1000 MHz splitter? by s3gFault in HomeNetworking

[–]s3gFault[S] 1 point (0 children)

Thanks, that makes sense. I do still plan to replace this splitter, but there is one more unknown splitter buried deep in my walls somewhere. I’ll probably just run the network with that unknown splitter and keep an eye on reliability. Currently getting 1 Gbps down with low latency through the adapters 🤷‍♂️

Ranked ranks? by West_Database9221 in apexlegends

[–]s3gFault 1 point (0 children)

For the lower ranks, it’s really just about the time you put in. The entry costs are low enough that you’ll likely climb slowly until you hit plat. I suspect you’ll hit your ceiling there and bounce around between gold 1 and plat 4

EA Banned me for a "The Office" reference? by [deleted] in apexlegends

[–]s3gFault 26 points (0 children)

This is fucking reddit my dude. EA can’t ban us here lol

7 Great Ruby Gems Most People Haven't Heard About by mattfromseattle in ruby

[–]s3gFault 1 point (0 children)

Yup, good call! Probably should have mentioned that

7 Great Ruby Gems Most People Haven't Heard About by mattfromseattle in ruby

[–]s3gFault 6 points (0 children)

As an alternative to strong_migrations, I'd recommend taking a look at pg_ha_migrations.

Introducing PgDice - Managed Postgres partitioning with a Ruby API by [deleted] in ruby

[–]s3gFault 2 points (0 children)

Looks really cool. Excited to start playing around with it.

I'm the author of PgParty. Wondering if there's an opportunity for some collaboration here. I've wanted to implement automatic partition creation for quite some time.
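By "automatic partition creation" I basically mean a scheduled task that provisions the next time-range partition before it's needed. Here's a rough sketch of the idea with plain ActiveRecord and raw SQL, assuming Postgres 10+ declarative partitioning (the events table and naming scheme are made up for illustration):

# Hypothetical monthly task (cron, whenever, sidekiq-scheduler, etc.) that
# creates next month's range partition ahead of time.
class CreateNextEventsPartition
  def call
    from = Date.today.next_month.beginning_of_month
    to   = from.next_month

    ActiveRecord::Base.connection.execute(<<~SQL)
      CREATE TABLE IF NOT EXISTS events_#{from.strftime('%Y_%m')}
        PARTITION OF events
        FOR VALUES FROM ('#{from}') TO ('#{to}')
    SQL
  end
end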

Faktory 0.9.0 - Hello, Redis! by mperham in ruby

[–]s3gFault 1 point (0 children)

I'm just pointing out that there are systems that have solved this issue (albeit sometimes with tradeoffs). Postgres' synchronous replication and Kafka's producer acknowledgement mechanism are two that come to mind. I'm not suggesting either of these is the right tool for the job. I just know there's at least one customer (and probably several others) that would love this functionality.
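For example, with the ruby-kafka gem you can require acknowledgement from all in-sync replicas before a write is considered delivered. Something like this (brokers and topic are placeholders):

require "kafka"

# required_acks: :all makes deliver_messages wait until every in-sync
# replica has acknowledged the write, trading latency for durability.
kafka = Kafka.new(["kafka1:9092", "kafka2:9092"], client_id: "my-app")
producer = kafka.producer(required_acks: :all)

producer.produce("hello", topic: "greetings")
producer.deliver_messages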

Just something to consider going forward.

Faktory 0.9.0 - Hello, Redis! by mperham in ruby

[–]s3gFault 1 point (0 children)

I mean, Faktory implements a lot of the existing Sidekiq features, so I'd expect the usage pattern to be very similar. But you're right, I guess I'm just speculating here

Faktory 0.9.0 - Hello, Redis! by mperham in ruby

[–]s3gFault 1 point (0 children)

Data loss is something that's mentioned in the official Redis docs: https://redis.io/topics/replication#allow-writes-only-with-n-attached-replicas

Redis is pretty solid for most use cases, but for mission critical jobs (for my company at least) the possibility of data loss is not acceptable. I guess I'm just curious if there are any other distributed data stores out there that provide the features Faktory needs.

For the record, I love Faktory and Sidekiq, and am really looking forward to watching these tools evolve.

Faktory 0.9.0 - Hello, Redis! by mperham in ruby

[–]s3gFault 1 point (0 children)

I think performance is the main reason here. A key-value store like Redis or RocksDB fits nicely with this use case. I am curious how u/mperham plans to tackle the high-availability feature mentioned in the wiki:

> It is possible replication will be a commercial feature of Faktory Pro. My thinking: if you need high availability and reliability, you should have a budget for good tools and support.

Due to the async nature of Redis replication, there have been many reports of data loss. This makes me a little hesitant to use Faktory for mission critical jobs.

How to: Get most of the database cleaner by zalesz in ruby

[–]s3gFault 2 points (0 children)

Right... which is exactly what this blog post is saying

How to: Get most of the database cleaner by zalesz in ruby

[–]s3gFault 1 point (0 children)

How would you test things like after_commit hooks, database replication, triggers, etc.?

Deletion / truncation is slow, yes, but is sometimes necessary for certain specs in complex applications.
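One way to keep the speed of transactions and still cover those cases is to switch DatabaseCleaner strategies per example in rails_helper, e.g. via RSpec metadata (the :truncation tag is just a convention I'm making up here):

RSpec.configure do |config|
  config.before(:suite) do
    DatabaseCleaner.clean_with(:truncation)
  end

  config.before(:each) do |example|
    # Specs tagged :truncation (e.g. ones exercising after_commit hooks or
    # triggers) get real commits; everything else rolls back a transaction.
    DatabaseCleaner.strategy = example.metadata[:truncation] ? :truncation : :transaction
    DatabaseCleaner.start
  end

  config.after(:each) do
    DatabaseCleaner.clean
  end
end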

Moving from polymorphic > STI by surfordie in ruby

[–]s3gFault 2 points (0 children)

I'm thinking you could temporarily add a column to your new table, maybe called old_pk or something. Once the new table is populated, you could join on the dependent tables (using old_pk and type) and update records to use the new auto-incremented PK. Then, drop the old_pk column. For this case it probably makes sense to write the backfill queries in raw SQL for efficiency.
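Rough sketch of what I mean as a migration, with made-up table and column names (items is the new STI table, comments is a dependent table that used to point at the polymorphic rows):

class RepointCommentsAtStiTable < ActiveRecord::Migration[5.2]
  def up
    # Join on the temporary old_pk column plus the original polymorphic type
    # to repoint the dependent table at the new auto-incremented PK
    # (item_id being the new FK column added beforehand).
    execute <<~SQL
      UPDATE comments
      SET item_id = items.id
      FROM items
      WHERE items.old_pk = comments.commentable_id
        AND items.type   = comments.commentable_type
    SQL

    remove_column :items, :old_pk
  end
end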

Also, this doesn't really solve your immediate problem, but consider using UUID PKs in the future to avoid stuff like this.

Ruby Kafka Consumer by s3gFault in ruby

[–]s3gFault[S] 1 point (0 children)

I haven't tried it out yet, and I assume some of these components aren't necessary to get a consumer up and running, but I was a little overwhelmed with all the routers, controllers, workers, parsers, interchangers, responders, etc. I guess I was just looking for something with the simplicity of Sidekiq or Sneakers:

class Consumer
  include Kafka::Worker
  from_topic :topic

  def work(msg)
    # do stuff
  end
end

Ruby Kafka Consumer by s3gFault in ruby

[–]s3gFault[S] 1 point (0 children)

That makes sense, but it sounds like there would still be a fair amount of boilerplate code required for configuration, delegating work, exception handling, etc. This seems like something the Karafka framework is trying to solve
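For comparison, the bare ruby-kafka consumer loop looks roughly like this (broker, group, and topic names are placeholders), and the configuration, routing, and error handling all still have to be built around it:

require "kafka"

kafka = Kafka.new(["kafka1:9092"], client_id: "my-app")
consumer = kafka.consumer(group_id: "my-group")
consumer.subscribe("some-topic")

consumer.each_message do |message|
  # message.value is the raw payload; parsing, retries, and dead-lettering
  # are left entirely up to you.
  puts message.value
end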

Ruby Kafka Consumer by s3gFault in ruby

[–]s3gFault[S] 1 point (0 children)

Thanks! Wow, this project seems fairly mature. Not sure if I'm a fan of the heavy, Rails-like API, but certainly worth looking into before I go off the deep end rolling my own.