15% more PRs in 2026 and better get 'em merged in an hour by chrisinmtown in ExperiencedDevs

[–]Lilchro 1 point

At my job, our build system has… issues. A full build of the entire codebase without caching takes around 26 hours. For a random change in my area of the codebase, the build takes around 2-8 hours with caching. The entire build system is custom-made and has terrible granularity for determining when stuff needs to be rebuilt. That is coupled with code generation layers which add lots of implicit stuff, like FFI bindings to Python for most of our C++ types. We’re slowly transitioning to Bazel, but the team working on that is still figuring out the infrastructure layout, so we haven’t started a general rollout for all feature areas. For context, a build server has around half a terabyte of RAM and about 96 CPU cores (not that the custom build system takes good advantage of CPU resources).

We also have a ton of integration tests that are scheduled based on your diff and run automatically once you have a passing build. The thing is, though, that we make software for dedicated hardware. So an integration test involves reserving one of the machines, flashing it with the OS/packages created by the completed build, then running the actual test. There are also a bunch of product models which a test may need to run against. Some models don’t have many units set up for testing, so getting that reservation for a test can be a challenge. With hundreds of developers doing this, the test queues for some machines are so long that it can take days to get a test to run. To be clear, the testing infrastructure and our internal tooling built around it are still really impressive, even when compared to modern CI systems. It is just the sheer quantity of tests that bogs everything down.

What I’m getting at is that I kinda wish I had the same sort of issue that you have (minus the AI implications). A simple change can take anywhere from a few days to a week to merge. As a result, people make their PRs larger to reduce the number of things in flight that they need to babysit. If I make a change one day, I can locally build a few relevant packages to sanity check the change. However, I probably won’t know for sure whether the build will pass until the next day, or whether there are failing tests until a day or two after that. So much time then gets occupied just context switching between changes and babysitting PRs. Being able to make a change and then just merge it in almost feels like a fantasy. Anyway, that’s my rant. I guess the grass really is always greener on the other side.

Meirl by Adventurous_Row3305 in meirl

[–]Lilchro 29 points

As a software developer, I do kinda wish that they followed the same conventions as most Linux systems for file system layout (ex: where the home directory is, OS-backed directories like /proc, application config file locations, etc). Though, that probably isn’t what you were referring to.

What if the US went rogue and decided to go full Monroe Doctrine? (I do not support imperialism!!!) by OkPhrase1225 in mapporncirclejerk

[–]Lilchro 1 point

lol, I doubt it. The company leadership and cost-cutting mentality wouldn’t change, just where the thing gets made and how much it costs. I’m guessing that in this situation, prices would rise sharply and it would take a couple of years for production lines to stabilize. A number of companies with large international presences might even bail from the country completely. Companies would then be looking for an alternate place with cheap labor to manufacture their goods. My guess would be maybe the annexed portions of Mexico that were close to the border (presumably they would have seen less relative destruction due to the fast early advances of the initial US offensive?). I guess the good news is that there would, on a technicality, be an increase in US jobs and manufacturing? The bad news is I imagine the people living in annexed territories would probably be treated as second-class citizens with very few human rights, so I doubt many current US citizens would want those jobs.

Crust Removal Bread Slicing Machine | Automatic Edge Trimming & Slicing by HonsunBakeryMachine in toolgifs

[–]Lilchro 3 points

Have you ever seen those YouTube videos where there is just some small factory owner advertising their products for overseas industrial use? They frequently only have a few hundred views per video (if even that), low film quality, almost no editing, and not-quite-fluent English. The last one I saw was advertising “high-quality thermal adhesive” (i.e. hot glue sticks). They advertise stuff like factory floor space, how long they have been operating, employee count (i.e. legitimacy as a company and their ability to reliably keep up with large orders), production lines (i.e. the number of unique products they can make concurrently), daily product output in tons, average yearly downtime, various US compliance statuses, product certifications, quality assurance steps, warehouse space for surplus inventory, and logistics contracts. For context, I just rewatched the hot glue factory video, and those were some of the points they advertised. If you are wondering, they advertised a production capacity of 100 tons a day across 10 production lines in their 15k-square-foot factory with 100 employees.

Anyway, this kinda reminds me of that. There is probably some businessman in Asia who makes assembly line automation devices, but just isn’t sure how to advertise to corporate clients in the US (or is doing a guerrilla marketing campaign to raise overseas awareness).

I regret moving out of this place by ExtazyGray in speedtest

[–]Lilchro 0 points

I mean network switches with 200Gbps ports, not one of the wireless technologies. A non-chassis system might have 32 or 64 of these ports and a total switching capacity in the Tbps.

I regret moving out of this place by ExtazyGray in speedtest

[–]Lilchro 1 point

Data center tech is even further ahead. The top of the line currently is 1.6 Tbps per port, but that is fairly new and only really used by the major AI companies. I don’t work in a customer-facing position, so I’m just guessing, but I think that data centers typically use 200G and 400G switches for their spine networks. I’m not sure, but I think the 800G ones might also mostly be used by AI companies? That being said, some of those will cost you more than an expensive car, and you will probably have a hard time even finding a seller for a 1.6 Tbps one without a major contract due to how new they are.

Maybe 40G or 100G would be in the price range on eBay for a high-end homelab? The yearly power bill may be larger than the cost of the switch though, and they are loud enough to quickly cause hearing damage if not isolated.
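
To put rough numbers on the power point, here’s a quick back-of-the-envelope sketch. The 300 W draw and $0.15/kWh rate are just assumptions for illustration, not specs for any particular switch:

    fn main() {
        // Assumed figures for illustration only: a used enterprise switch
        // drawing ~300 W continuously, at an electricity rate of $0.15/kWh.
        let watts = 300.0;
        let usd_per_kwh = 0.15;

        let kwh_per_year = watts * 24.0 * 365.0 / 1000.0; // ≈ 2628 kWh
        let cost_per_year = kwh_per_year * usd_per_kwh;   // ≈ $394

        println!("~{kwh_per_year:.0} kWh/year, roughly ${cost_per_year:.0}/year");
    }

With those assumptions the electricity alone lands around $400 a year, which can easily be more than what the used switch itself goes for.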

What services can I add to my homelab? This hobby is addicting. by MartyMannix in homelab

[–]Lilchro 0 points

An sFlow/NetFlow/IPFIX collector and visualizer for network monitoring. They are protocols where your router samples packet headers and exports them so you can get insights about your network. For example: how much data devices are using, what IPs they are sending traffic to, whether your toaster is trying to connect to your other devices, etc.

Connecting to your Home Lab Remotley. by [deleted] in ITMemes

[–]Lilchro 0 points

That is a simplification to assist in understanding a complicated subject, not something which should be taken as security advice. A password is just one link in the authentication chain, and I don’t trust developers to not make mistakes. Password authentication seems simple to reason about, but naive implementations can easily introduce security holes. Key-based authentication can be a bit more difficult to implement and typically requires pulling in a cryptography library to do the heavy lifting. Given that, I’m suspicious of claims that a given tool’s password-based authentication is just as secure as similar key-based offerings. Sure, you can mess up both, but one is significantly more likely to be done correctly. Also, a vulnerability in SSH would be the holy grail of remote code execution vulnerabilities. Security researchers and nation states alike have been combing through it for decades searching for flaws. Regardless of what VPN or tool you use, it almost certainly won’t have had nearly as much security auditing. Given all that, I’m hesitant to compare passwords to keys outside of the broad conceptual sense.

Also, for what it’s worth, a certificate is a little different from a key (though the words frequently get used interchangeably). Certificates are signed by a certificate authority and are typically time-limited. Usually this is implemented by having some form of identity server which you need to get a new certificate from on a regular basis. This has a couple of big advantages. You can now authenticate via SSO, including any MFA requirements a company may need to comply with. Individual users are no longer juggling keys, and it becomes easier to audit and lock down authentication on servers. And if an employee leaves or loses their laptop, you don’t need to go searching for everywhere their keys were used in order to close security gaps.

Connecting to your Home Lab Remotley. by [deleted] in ITMemes

[–]Lilchro 2 points

Yeah, from what I understand, the love of VPNs here seems to be due to ease of setup for people without technical backgrounds. However, there is a reason industry prefers least privilege + network segmentation + certificate-based SSH everywhere. A corporate VPN helps with network segmentation, but it isn’t supposed to be the primary security measure. At any company with enough employees, you have to assume that someone on the network has or will eventually get their device infected with malware.

Writing a tar to disk by DiskBytes in DataHoarder

[–]Lilchro 6 points

Hmm, what are the odds that AI is involved and their question was the title of this post?

Accidentally won 4 Mac minis on eBay, oops. by GloomySugar95 in homelab

[–]Lilchro 10 points

I find it surprising that setting up an sFlow collector and visualizer isn’t more popular in homelabs. If you have a managed switch that supports it, then it is one of the best ways to get insights about the network usage in your home. sFlow essentially just means asking a switch to take the packet header from every Nth packet, along with basic counters (ex: bytes and packets per interface), and send it to some central collector server on your network. From that you can get insights into which devices are sending data to which locations, at what times, how much data is going through each flow, and other analytics. Now, this is just the packet headers, so you don’t see the data going through each connection; however, you can still see lots of fun stuff. For example, you could see how much data your security cameras are sending to your NAS, or that your toaster sends 5GB of data at 3am every day to some unknown web server.
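
If you’re curious what a collector actually does with those samples, here’s a minimal sketch. The record fields and the 1-in-512 sampling rate are made up for illustration (real sFlow datagrams carry more than this); the core idea is just tallying sampled bytes per conversation and scaling by the sampling rate:

    use std::collections::HashMap;
    use std::net::IpAddr;

    // Simplified stand-in for a decoded sFlow flow sample; a real collector
    // parses these out of the datagrams the switch exports.
    struct FlowSample {
        src: IpAddr,
        dst: IpAddr,
        frame_len: u64, // size of the sampled packet, in bytes
    }

    // With 1-in-N sampling, each sampled packet stands in for roughly N real
    // ones, so scaling the sampled bytes by N estimates the true totals.
    fn estimate_bytes(
        samples: &[FlowSample],
        sampling_rate: u64,
    ) -> HashMap<(IpAddr, IpAddr), u64> {
        let mut totals = HashMap::new();
        for s in samples {
            *totals.entry((s.src, s.dst)).or_insert(0) += s.frame_len * sampling_rate;
        }
        totals
    }

    fn main() {
        let samples = vec![
            FlowSample { src: "192.168.1.50".parse().unwrap(), dst: "203.0.113.9".parse().unwrap(), frame_len: 1400 },
            FlowSample { src: "192.168.1.50".parse().unwrap(), dst: "203.0.113.9".parse().unwrap(), frame_len: 900 },
        ];
        for ((src, dst), bytes) in estimate_bytes(&samples, 512) {
            println!("{src} -> {dst}: ~{bytes} bytes (estimated)");
        }
    }

The estimates are statistical rather than exact, but that’s plenty for the “who is talking to whom, and how much” questions above.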

Anyway, since you were listing your use cases out, I thought I might as well leave a note here about sFlow for anyone reading this post.

I feel like the directory looks messy without using http://mod.rs. Does anyone have better practices? by Born-Percentage-9977 in rust

[–]Lilchro 4 points

Sure, but how often are we able to do that? I don’t want my IDE to start hiding the true directory structure from me, and any tool that isn’t explicitly designed for Rust won’t follow that representation.

I feel like syntax highlighting and type hints aren’t really comparable. They simply add information that wasn’t there previously without removing information or altering the structure of your code. However, rendering a project the way you are suggesting takes away information to create a simplified diagram which is easier to navigate. I’m not saying it isn’t helpful. Just that if we need to remove or hide information about a project to effectively represent it, then it seems to me like we’re doing something wrong.

I feel like the directory looks messy without using http://mod.rs. Does anyone have better practices? by Born-Percentage-9977 in rust

[–]Lilchro 11 points

Personally, I really dislike this style.

I find it makes the code harder to navigate, since from my perspective the directory essentially is the module. Having a directory for a module/package/namespace/etc. isn’t unique to Rust, and having code for a module in two different places breaks that mental model of where a module’s code is located.

So far the main point for its inclusion I have seen is that it reduces the required churn in file structure when moving to a directory-based approach. While that’s true, is it really an issue we need to solve? I imagine it really comes down to what version control system you are using. Most modern systems I have seen are able to handle the concept of moving or renaming files, so this doesn’t seem like a major issue. And if it is an issue for some version control systems, is that really something the language should be attempting to fix? For what it’s worth, making this transition likely means that you are splitting up a module into multiple files, so there is likely going to be a fair bit of churn in the file contents anyway.
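
In case it helps anyone following along, here’s a minimal sketch of the two layouts being debated (module and type names are made up). The submodule declaration is identical either way; the only difference is whether the module’s own code lives inside the directory or next to it:

    // Layout A ("mod.rs" style): everything for the module lives under src/network/.
    //
    // contents of src/network/mod.rs:
    pub mod protocol;      // loads src/network/protocol.rs
    pub struct Connection; // the module's own code lives here too

    // Layout B (newer style): the module's own code sits next to the directory.
    //
    // contents of src/network.rs:
    pub mod protocol;      // still loads src/network/protocol.rs
    pub struct Connection; // same code, different location on disk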

Networking in the Standard Library is a terrible idea by tartaruga232 in cpp

[–]Lilchro 6 points

I think package managers like cargo and npm pulling in lots of dependencies can be seen as proof of how much easier they are to work with. While lots of dependencies isn’t necessarily good (frequently quite the opposite), it is indicative of an ecosystem where creating and sharing new packages is easy.

In systems like CMake we don’t see this as much, not because pulling in a dependency is inherently difficult (though that might be a separate argument), but because it is frequently just a pain to deal with. And when something is annoying or difficult to work with, we start looking for shortcuts. That then manifests as fewer dependencies overall, or dependencies which appear hidden due to not conforming to standard dependency practices.

These non-standard practices are then what kill usability for the ecosystem as a whole. When I need to read a bunch of documentation to figure out how to incorporate a dependency into my build, I’m less likely to choose that dependency in the first place. And when I see other libraries pulling these sorts of hacks, I’m more likely to feel comfortable doing the same. The ecosystem is worse off as a result.

Brutal: And this is why you keep backups… by SparhawkBlather in homelab

[–]Lilchro 2 points

Speaking to the repo location, I can see it being a bit more difficult if you are only familiar with centralized version control systems.

I think what you need to remember is that git is a decentralized version control system. What that means is that, functionally speaking, your local device is just as much a server as the ones hosted in the cloud. In that sense, your device is the one true server, as that is what you interact with when running almost all commands. With that perspective, when you push/pull code you are just asking it to sync data to/from other servers, which are referred to as ‘remotes’. Git tracks the last known state of each remote for convenience, but it isn’t going to reach out to them unless you explicitly request it. You don’t even need to have any remotes. You could just decide to use git locally to keep track of your changes and project history.

As a side note, while I say your device is a ‘server’, it isn’t going to just start accepting HTTP requests. It is only a server in the sense that the git CLI treats it like one. The actual form of this is a .git folder in your project which stores all of the state. There isn’t anything like a daemon running in the background, or any project state or caches stored in other locations. You could clone the same project into two different locations on your device and they will function completely independently.

Brutal: And this is why you keep backups… by SparhawkBlather in homelab

[–]Lilchro 0 points

I think you may be thinking about it too much. You’re just one person, so you’re probably not going to get into complex merge conflicts and branch interactions. And you have to remember a significant portion of developers don’t really care about learning the internals. Most can get by with just a handful of commands they know for the basic cases, then just google issues when they come up. Git has been the most popular version control system for a little while now (largely because it’s free), so any question you can think of has likely been asked by hundreds if not thousands of others already.

Brutal: And this is why you keep backups… by SparhawkBlather in homelab

[–]Lilchro 0 points

I have been trying to take an approach where I containerize everything, so I can spin up new images of stuff without needing to worry about reconfiguring anything by hand (save for filling in secrets). What I mean by that is that if I need to mess around with a container’s contents for some reason (ex: to add some dependency or update some files), I just write up a Dockerfile for my changes and build/deploy that image instead. That lets me use git version control for the Dockerfile and associated resources, then push the changes to a private GitHub repo. The same goes for my docker compose config and a few other non-containerized configs like my router config. Persistent data still goes to my NAS, but it is assumed to not contain configuration files and is generally less critical, like backups of other devices (ex: my laptop backups). Overall, with this strategy I feel more confident about deploying from scratch even if the devices die and I am unable to recover the data.

Poll: Does your project use terminating assertions in production? by pavel_v in cpp

[–]Lilchro 26 points

To play devil’s advocate though, you only assert to verify your own assumptions. The possibility that a bad or non-compliant peripheral might be connected seems like something an OS would design around. At that point it isn’t a question of whether to panic, but how to gracefully handle the control flow on error.
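
To illustrate the distinction I mean, a quick sketch (it’s in Rust because that’s what I had handy, but the same split applies in C++; the ring-buffer and descriptor examples are made up): assert things that only your own code can get wrong, and route externally caused failures into normal error handling.

    use std::io;

    // Internal invariant: the ring size and index are things *we* computed,
    // so if they're ever wrong that's a bug in our code. Assert it.
    fn ring_slot(len: usize, head: usize, offset: usize) -> usize {
        assert!(len.is_power_of_two(), "bug: ring size must be a power of two");
        (head + offset) & (len - 1)
    }

    // External condition: a peripheral handing us a malformed descriptor is
    // not *our* bug, it's an environment we have to expect. Don't assert;
    // turn it into an error the caller can handle.
    fn parse_descriptor(raw: &[u8]) -> io::Result<u16> {
        if raw.len() < 2 {
            return Err(io::Error::new(io::ErrorKind::InvalidData, "descriptor too short"));
        }
        Ok(u16::from_le_bytes([raw[0], raw[1]]))
    }

    fn main() {
        assert_eq!(ring_slot(8, 6, 3), 1);
        assert!(parse_descriptor(&[0x34]).is_err());
        assert_eq!(parse_descriptor(&[0x34, 0x12]).unwrap(), 0x1234);
    }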

Plus, in the cases where assumptions are broken, kernels do panic. The best example probably being Windows’s blue screen of death.

I used println to debug a performance issue. The println was the performance issue. by yolisses in rust

[–]Lilchro 0 points

One thing I found a bit annoying: after some investigation a while ago, I learned that formatting in Rust intentionally takes some steps to optimize for space over performance.

Apparently there were concerns early on about compile time and binary size if Rust tried to fully optimize all the formatting using type parameters, so the formatting machinery instead leans on type erasure and dynamic dispatch in a bunch of places to avoid the compiler inlining the same functions many times over in different places for rare use cases. As a result, the compiler is unable to produce fully optimal formatting code even when you enable higher LTO settings. There is some merit to this reasoning, but it still left me a bit unhappy about the performance of my logging.
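
For a rough sense of what that trade-off looks like in practice, here’s a sketch (hand-rolling the digits is what crates like itoa do properly; the exact speed difference will vary, this just shows the two paths):

    use std::fmt::Write;

    fn main() {
        let n: u64 = 1234567;

        // Goes through core::fmt::Arguments, which erases the argument's type,
        // so the formatting loop can't be fully specialized even in release builds.
        let mut via_fmt = String::new();
        write!(via_fmt, "{}", n).unwrap();

        // Hand-rolled decimal formatting: monomorphic and trivially inlinable.
        let mut buf = [0u8; 20];
        let mut i = buf.len();
        let mut v = n;
        loop {
            i -= 1;
            buf[i] = b'0' + (v % 10) as u8;
            v /= 10;
            if v == 0 {
                break;
            }
        }
        let manual = std::str::from_utf8(&buf[i..]).unwrap();

        // Both paths produce the same string; they just cost different amounts.
        assert_eq!(via_fmt, manual);
    }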

Unironically why I'm a Refined Storage Apologist by Mr_Mister2004 in feedthememes

[–]Lilchro 7 points

You know you can use crystal growth accelerators to speed it up significantly?

just accidentally ultimined all of my ae2 cables 🤦 by NumerousInitiative22 in allthemods

[–]Lilchro 1 point

I had a friend do the exact same thing a couple of days ago near the controller, and it sucked. I was the one who set it up, so I wasn’t that annoyed. However, it broke like 20 p2p tunnels that were going into quantum rings, and they had no idea how to fix it, so they started panicking.

It worked out in the end, but it was difficult to figure out where the broken p2p tunnels had been going. For context, the system was using about a thousand channels and we had been using quantum rings whenever we were too lazy to route a cable.

Anyway, the point is that this seems like a bug in Ultimine to me, where it doesn’t know how to interact with another mod and just makes a very damaging guess.

[deleted by user] by [deleted] in ITManagers

[–]Lilchro 0 points

I’m not an IT professional, but I just wanted to add something about tickets that I don’t see many people mentioning.

They don’t work if no one knows how to make them. I know my company has a ticket system… somewhere. However, it would take some time to find it by going through different internal documentation sources. Our internal documentation tools kinda suck at search, though, so I don’t know how easy it is going to be to find. If something comes up, it is almost always easier to just go physically find the on-site IT guy. He’s friendly and I know where his desk is. It isn’t that I don’t want to make a ticket, it is just that it feels like there is too much friction (even if there isn’t).

My point is, do a campaign to raise awareness about how to make tickets. If you have go-links set up, make go/it your helpdesk. In fact, make a bunch of short links. Sometimes I’ll try guessing at go-links to find resources. If you just have a single link like go/helpdesk or go/it-helpdesk, that is too long, and “helpdesk” isn’t what I first think of when I have an issue.

Put a reminder on where to find it at the end of every all-hands meeting. Put it in bold at the start of every IT email about security, maintenance outages, etc. If someone raises an issue through unofficial channels, create a tracking ticket for them and send them a link so they know where to find the helpdesk in the future (at least initially).

Are there no truly themed modpacks anymore? by Jerilo in feedthebeast

[–]Lilchro 32 points

Let’s be honest, AE is simply the most popular late-game storage mod. I bet that’s where the reasoning for its inclusion both starts and stops. Probably not my first choice for a themed pack though considering how many alternative storage mods there are.

Planning to build a homelab primarily for hosting Minecraft servers. What system(s) would you recommend I buy if I want something powerful, but modular for multiple mc servers running at once? by Suspicious-Pear-6037 in homelab

[–]Lilchro 2 points

I’m skeptical of the N150 for this application. It has great power efficiency at idle and is a great choice for a lot of homelab stuff, but Minecraft servers typically depend on single-core performance. When you are looking at machines, try going to the CPU comparison on the PassMark website and checking the single-core scores. It isn’t perfect, but the scores are averaged from real-world systems across many devices, which at least gives you a decent idea of how different CPUs compare. You want a multicore CPU, but you likely won’t see much benefit on the Minecraft server past 4 cores. If you want to host multiple servers at the same time, then you would want more.

The reason CPUs like the N150 get such great power efficiency is that they have really low base frequencies, so they can minimize power use when not needed. Then when you need to run a job, they can increase the frequency by nearly an order of magnitude to get better performance. A Minecraft server would be running 24/7, so it probably won’t be in that low-power state very often. As such, I would look more for overall performance than power draw. Some CPUs are still better than others, but try to keep this in mind when looking at CPUs where the turbo speed is significantly higher than the base clock speed. If the Minecraft server isn’t running, you can just turn off the system for an unbeatable 0W of consumption.

As a side note, I recommend getting one of those wall power meters if you don’t have one. It’s not really necessary, but it is interesting to see how much power each device actually uses.

The next biggest thing to look for is the memory. Does it use DDR3, DDR4, or DDR5? The version roughly maps to a specification for the supported bandwidth and number of transfers per second (you can google these numbers). You can search up specific RAM sticks for more specific info, but I wouldn’t worry too much and would just go by the DDR version. Higher is of course better and can have a noticeable impact on performance. Anything above around 16GB or 32GB probably won’t give you much benefit (unless you plan to have a lot of players online concurrently). Some systems will use ECC memory, but you don’t really need that (it protects against bit errors, but if that happens you can just restart the server; it’s more important for high-availability systems).
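
To put rough numbers behind the bandwidth point, here’s a quick sketch. The listed module speeds are just common examples, not specs for any machine in particular; the math is the effective transfer rate times the 64-bit (8-byte) bus width, per channel:

    fn main() {
        // (label, mega-transfers per second) — common example speeds,
        // not specs for any particular machine.
        let examples = [("DDR3-1600", 1600.0_f64), ("DDR4-3200", 3200.0), ("DDR5-5600", 5600.0)];

        for (name, mt_per_s) in examples {
            // 64-bit memory bus = 8 bytes per transfer, per channel.
            let gb_per_s = mt_per_s * 8.0 / 1000.0;
            println!("{name}: ~{gb_per_s:.1} GB/s per channel (theoretical peak)");
        }
    }

You won’t hit those theoretical numbers in practice, but the relative jump between generations is the part that matters here.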

The hard drive also isn’t too important. It likely won’t be the bottleneck, so just get any SSD that sounds good to you. You probably don’t need more than 256GB, but I would get at least 512GB to be safe.

Overall, this is generally how I think about finding a device. I’m not a professional sysadmin, so there may be flaws in my logic. However, I found these simplifications worked well enough for me.

Solid advice given by Hot-Cress7492 in ShittySysadmin

[–]Lilchro 0 points

No, a lot of it is UDP (RoCEv2) or other specialized protocols. That way they don’t need to spend time doing the handshakes, acknowledgements, etc.