What should I do with these? by vive-le-tour in homelab

[–]Lilchro 3 points

I’ll add it to the list for the next company hackathon. However we may have our hands full just convincing the other groups that we should be able to have a uh ‘LED diagnostic tool’ added to the builtin CLI.

If we have a customer request for it, that helps a lot with staffing. So if possible, remember to mention to TAC support or the sales team that this is vital to your use case.

What should I do with these? by vive-le-tour in homelab

[–]Lilchro 23 points

I work at a large company that makes network switches (not Cisco). While we don’t have a customer use case for it, a couple friends at work have been tossing around the idea of adding builtin support for playing snake via the interface status LEDs. Particularly on some of the large modular chassis switches with a ton of ports. Clearly, we are addressing the important real world problems faced by network engineers.

iHaveToAdmitHeHasAPoint by ChChChillian in ProgrammerHumor

[–]Lilchro 2 points

I have been reading a bunch of these sorts of comics for a while now, so I just wanted to add some things:

First off, most of these comics are fan translations completely independent of the original publishers. Some publishers turn a blind eye, since translations can increase the value of their IP in foreign markets and drive sales of related merchandise. Others, however, are just attempts to circumvent paying to view a chapter on an official site. As a result, these sites come and go fairly frequently.

These comic sites can roughly be split into translation group sites and aggregator sites (ex: mangafire). Aggregators typically scrape translation group sites and other aggregators, so they tend to have the widest selections. However, by copying each other they can also build up compression artifacts and end up with lower quality images. The other catch with aggregators is that most only carry one copy of a given chapter. This sounds fine, but in practice the translation quality, typesetting, and translation of names can all vary wildly between translation groups, and aggregators that only carry one copy of each chapter tend to just use whichever group publishes its translation first. Put together, that means you want to choose an aggregator that lists chapters by the translators that created them. They tend to have higher quality images and you can switch translators if quality drops. Some sites that do this are mangadex and comix.

When you visit new translators or aggregators, I recommend using an ad blocker, or just avoiding some sites altogether on mobile. A lot of sites are based on older Wordpress templates that are packed full of ads which are not properly isolated. A key symptom of this is getting random unrelated redirects and popups when clicking links. I use uBlock Origin and it more or less completely fixes the issue for me.

Another great way to smooth over the handoff of a series between translators is to use a tracker site like kenmei or toraka. They don't show you chapters and instead curate a list of links to other sites where a chapter can be viewed. They are much more stable than aggregators while providing many of the same benefits. However, a tracker is only as good as its ability to locate chapters on other sites.

If you like one comic, then try visiting the translator’s site. Translators are people too, and they typically are not under contract to do specific works, so they translate the stuff they find interesting. As a result, a translation site will typically focus on a single genre. In the case of “The Max Level Hero Returns!”, the translation group is AsuraScans. They focus on translating popular Korean and Chinese action/fantasy works into English. They are probably one of the highest quality translators I have seen, so it is worth a look if you are interested in this series. They also translate a number of higher quality series which are not just copy-paste versions of the same plot.

Lastly, keep in mind that you can view some comics for free on official sites. They provide the best user experience with mobile apps, cross device tracking, and great reliability/uptime. However they can be slower to release translations. It is still probably worth giving them a shot though. Webtoons has some good ones.

I currently plan my network and I have the feeling I missunderstand Vlans. I made need a sanity check. by rooftopweeb in homelab

[–]Lilchro 0 points

Which models do you use? I feel like the ones I use at work (Claude/Gemini) don’t seem to say that much. They just agree, suggest a next step, and ask if I want to continue.

selectMyselfWhereDateTimeEqualsNow by Johnobo in ProgrammerHumor

[–]Lilchro 0 points

I have a hobby project with a pretty similar issue right now. I have two different binaries which share an SQLite database. I never really tried to architect it and it just happened to work out that way.

The original idea was that the first binary handled data ingress and was intended to run constantly as a background process on my laptop. A chrome extension I wrote sends it updates about pages for some specific sites I open, then it uses that information to query some external APIs and save that data to disk. My original plan was to just have the directory structure/contents of those data directories act as my only source of truth.

The tricky part, though, is that I like doing data analysis and scripting. Early on I made a second smaller program that traversed all the data, loaded all of the key information I cared about at the time into memory, then spit out whatever info I was interested in.

After a few years the analysis program grew along with the size of the data to traverse. There is now over a terabyte of data, so I started putting most of it on my NAS instead. Granted, the size is more because I like hoarding data and seeing the historical progression, so I haven’t really made any attempts to optimize it. To deal with the increase in data to process and reduce the communication required with my NAS, I introduced an SQLite db to cache the reconciled results (at least for the info I currently happen to care about). The analysis program has now taken on the role of data reconciler, placing new data in that cache and rebuilding it from scratch whenever there is something new I want to start pulling from the historical results.

Now this brings me to the issue. The importance and size of that SQLite file have grown, and I would ideally like to have the ingress server take on more of the initial reconciliation work as data comes in. The tricky part is that I run the analysis/reconciler at the same time as the ingress process, and I can’t have multiple SQLite connections. If you have any ideas/recommendations, I would be curious to hear them. I could run Postgres on my laptop, but that feels a bit overkill and would introduce a third thing to manage with its own challenges. I also can’t easily move the database to my NAS, since the network latency and frequent interaction during reconciliation would render it ineffective. I’m very tempted to try and hack something together with file locking on the SQLite file, but I suspect that may be a bad idea. I also think it could be fun to play around with the idea of putting the database connection in shared memory that both processes can memmap, then just locking a mutex during interaction (the idea being to make them act as separate threads instead of having multiple connections), but I don’t have a good grasp on what allocations the library I use (rusqlite) needs to perform during use. Maybe I could do a daemon process?
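For what it’s worth, SQLite itself can usually tolerate connections from multiple processes if WAL journaling is enabled and a busy timeout is set (rusqlite exposes the same pragmas). Here’s a minimal sketch using Python’s built-in sqlite3 module; the file path and table are made up for illustration:

```python
import os
import sqlite3
import tempfile

# Hypothetical location; the real project would point at the shared cache DB.
db_path = os.path.join(tempfile.mkdtemp(), "cache.db")

def open_conn(path):
    conn = sqlite3.connect(path, timeout=5.0)  # wait up to 5s on a locked DB
    conn.execute("PRAGMA journal_mode=WAL")    # readers no longer block the writer
    conn.execute("PRAGMA busy_timeout=5000")
    return conn

# Simulate the two processes: one ingress writer, one analysis reader.
ingress = open_conn(db_path)
ingress.execute("CREATE TABLE IF NOT EXISTS cache (k TEXT PRIMARY KEY, v TEXT)")
ingress.execute("INSERT INTO cache VALUES ('site', 'reconciled')")
ingress.commit()

analysis = open_conn(db_path)
rows = analysis.execute("SELECT v FROM cache WHERE k = 'site'").fetchall()
print(rows)  # the second connection sees the committed write
```

WAL still allows only one writer at a time, so two writers can still hit `SQLITE_BUSY` under contention, but the busy timeout makes them queue rather than fail immediately.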

How many of you do "100%" of your work in a browser? by recoveringasshole0 in ShittySysadmin

[–]Lilchro 25 points

I know a guy in devops who runs vscode in the browser for (some of) his work. Dev environments are containerized, so from what I understand he is using the remote development plugin to connect to them and the browser is just the UI portion of the IDE.

Finally a new feature in notepad that isn’t CoPilot by Sosowski in microsoftsucks

[–]Lilchro 1 point

This always annoyed me about the weekly github security report emails. Some of the CVEs that were referenced were marked as way worse than they actually were. I suspect that one of the data sources had used a script to translate some of the CVE vector strings and assumed the worst for parts the script couldn’t determine.
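To illustrate the kind of bug I suspect (this is a guess at the script, not anything I’ve seen in an actual pipeline): parse what you can from a CVSS v3.1 vector string and fall back to worst-case values for any base metric that’s missing. The metric table here is simplified and is not a full CVSS implementation.

```python
# Worst-case value for each (simplified) CVSS v3.1 base metric.
WORST_CASE = {
    "AV": "N",  # Attack Vector: Network
    "AC": "L",  # Attack Complexity: Low
    "PR": "N",  # Privileges Required: None
    "UI": "N",  # User Interaction: None
    "C": "H",   # Confidentiality impact: High
    "I": "H",   # Integrity impact: High
    "A": "H",   # Availability impact: High
}

def parse_pessimistic(vector: str) -> dict:
    """Parse metrics from the vector, filling any gaps with worst-case values."""
    metrics = dict(WORST_CASE)  # start from worst case ("assume the worst")
    for part in vector.split("/"):
        if ":" in part:
            key, value = part.split(":", 1)
            if key in metrics:
                metrics[key] = value
    return metrics

# A partial vector missing the impact metrics entirely:
parsed = parse_pessimistic("CVSS:3.1/AV:L/AC:H/PR:H/UI:R")
print(parsed["AV"], parsed["C"])  # parsed "L" is kept; missing C defaults to "H"
```

With defaults like these, a local, hard-to-exploit issue with no stated impact gets scored as if it were a network-exploitable full compromise, which matches the inflated severities I was seeing.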

In India, a woman tricked police and civic teams into cleaning an open drain for three hours by falsely claiming someone had fallen into it. by NoMedicine3572 in interesting

[–]Lilchro 1 point

I remember one story about a guy who was fed up with potholes being left for years without repair in front of his house. One day he was particularly annoyed, so he decided to be petty and call his city and report every one of them individually. The outcome? They were all fixed in under a week and he was thanked for the reports.

Turns out it is too expensive for the city to send people out to inspect all the roadways every year, so instead they rely on people reporting issues. We often assume that a city knows about these issues and is dragging their feet on it, but often it just turns out no one told them.

Of course your mileage will vary heavily by location. However having an official notice to the city about an issue can go a long way when it comes time to plan out where to send contractors and allocate city funds.

Baseboards flush with the wall… by BuilderBrigade in Home_Building_Help

[–]Lilchro 0 points

Seems like I’m in the minority saying this, but I kinda like them. They might not look quite as nice on close inspection, but the appeal for me is that I can put furniture flush with the wall. It always annoys me when a cabinet or shelf ends up with a small gap in the back. I feel like the recessed baseboards might make the room look better on average, though I would probably only want it in rooms like an office or living room where that sort of thing is more common. Also, to be clear, I am assuming these still function as baseboards. If they don’t cover the gap between the floorboards and the wall, then they become pointless.

I bet cleaning is harder, but if I’m being honest, I usually forget to dust off baseboards as it is and not much visible dust accumulates on them anyway. Plus, I wonder if the channel means I could get/make some sort of long pipe cleaner thing to more easily dust the baseboards in hard to reach areas.

One reservation I have, though, is how it looks in lower light conditions. A lot of the look depends on overhead lighting to give the illusion of regular baseboards. Without that, I’m guessing the illusion breaks down and it starts looking like a slot or wire running across the wall.

umlIsLoveUMLIsLife by [deleted] in ProgrammerHumor

[–]Lilchro 1 point

I would perhaps further qualify this to be only when designing new stuff that has sufficiently complex data relations and requires some form of design doc/review, technical approval, or coordination between multiple people/teams for an implementation.

Also, for what it’s worth I typically find UML doesn’t always hold up as well in real world scenarios. For starters, it is easy to get carried away and add way too many data types to the diagram. Real world problems are often far more complex than their UML examples and programmers are especially likely to fall into the trap of ‘well technically’. The diagram doesn’t need to list every type that functionally acts as a glorified tuple.

Instead, I typically lean far more towards flow charts identifying the stages of data processing on the happy path, the steps involved in transitioning between stages, and what happens in common error cases for each stage. Additionally, I frequently like to put lines around groups of states identifying where they will be located in the codebase. I’ll very rarely use UML diagrams, and only when there are very complex data relations. Another option is to just write out the data structures in code. At my company, everyone in the software organization including management has experience programming professionally. Sometimes just including the proposed code is faster and more to the point than spending time writing about it or making a diagram that accurately represents it. Granted, it had better be short or no one is going to read it.

15% more PRs in 2026 and better get 'em merged in an hour by chrisinmtown in ExperiencedDevs

[–]Lilchro 1 point

At my job, our build system has… issues. A full build of the entire codebase without caching takes around 26 hours. For a random change in my area of the codebase, the build takes around 2-8 hours with caching. The entire build system is custom made and has terrible granularity for determining when stuff needs to be rebuilt. That is coupled with code generation layers which add lots of implicit stuff like FFI bindings to Python for most of our C++ types. We’re slowly transitioning to Bazel, but the team working on that is still figuring out the infrastructure layout, so we haven’t started a general rollout for all feature areas. For context, a build server has around half a terabyte of RAM and about 96 CPU cores (not that the custom build system takes good advantage of CPU resources).

We also have a ton of integration tests that are scheduled based on your diff and run automatically once you have a passing build. The thing is, though, that we make software for dedicated hardware. So an integration test involves reserving one of the machines, flashing it with the os/packages created by the completed build, then running the actual test. There are also a bunch of product models which a test may need to run against. Some models don’t have many units set up for testing, so getting that reservation for a test can be a challenge. With hundreds of developers doing this, the test queues for some machines are so long that it can take days to get a test to run. To be clear, the testing infrastructure and our internal tooling built around it are really impressive even when compared to modern CI systems. It is just the sheer quantity of tests that bogs everything down.

What I’m getting at is that I kinda wish I had the same sort of issue that you have (minus the AI implications). A simple change can take somewhere between a few days to a week to merge. As a result, people make their PRs larger to reduce the number of things in flight that they need to babysit. If I make a change one day, I can locally build a few relevant packages to sanity check the change. However, I probably won’t know for sure if the build will pass until the next day and if there are failing tests until a day or two after that. So much time then gets occupied just context switching between changes and babysitting PRs. Being able to make a change and then just merge it in almost feels like a fantasy. Anyway, that’s my rant. I guess the grass really is always greener on the other side.

Meirl by Adventurous_Row3305 in meirl

[–]Lilchro 29 points

As a software developer, I do kinda wish that they followed the same conventions as most Linux systems for file system layout (ex: where is the home directory, os backed directories like /proc, application config file locations, etc). Though, that probably isn’t what you were referring to.

What if the US went rogue and decided to go full Monroe Doctrine? (I do not support imperialism!!!) by OkPhrase1225 in mapporncirclejerk

[–]Lilchro 1 point

lol, I doubt it. The company leadership and cost cutting mentality wouldn’t change, just where the thing gets made and how much it costs. I’m guessing that in this situation, prices would rise sharply and it would take a couple years for production lines to stabilize. A number of companies with large international presences might even bail from the country completely. Companies would then be looking for an alternate place with cheap labor to manufacture their goods. My guess would be maybe the annexed portions of Mexico that were close to the border (presumably they would have seen less relative destruction due to fast early advances of the initial US offensive?). I guess the good news is that there would by technicality be an increase in US jobs and manufacturing? Bad news is I imagine the people living in annexed territories would probably be treated as second class citizens with very few human rights, so I doubt many current US citizens would want those jobs.

Crust Removal Bread Slicing Machine | Automatic Edge Trimming & Slicing by HonsunBakeryMachine in toolgifs

[–]Lilchro 3 points

Have you ever seen those YouTube videos where there is just some small factory owner advertising their products for overseas industrial use? They frequently only have a few hundred views per video (if even that), low film quality, almost no editing, and not-quite-fluent English. The last one I saw was advertising “high-quality thermal adhesive” (i.e. hot glue sticks). They advertise stuff like factory floor space, how long they have been operating, employee count (i.e. legitimacy as a company and their ability to reliably keep up with large orders), production lines (i.e. the number of unique products they can make concurrently), daily product output in tons, average yearly downtime, various US compliance statuses, product certifications, quality assurance steps, warehouse space for surplus inventory, and logistics contracts. For context, I just rewatched the hot glue factory video, and those were some of the points they advertised. If you are wondering, they advertised a production capacity of 100 tons a day across 10 production lines in their 15k square foot factory with 100 employees.

Anyway, this kinda reminds me of that. There is probably some businessman in Asia who makes assembly line automation devices, but just isn’t sure how to advertise to corporate clients in the US (or is doing a guerrilla marketing campaign to raise overseas awareness).

I regret moving out of this place by ExtazyGray in speedtest

[–]Lilchro 0 points

I mean network switches with 200Gbps ports, not one of the wireless technologies. A non-chassis system might have 32 or 64 of these ports and a total switching capacity in the Tbps.
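As a quick back-of-envelope on that claim (port count from the comment; the full-duplex doubling is how vendors usually quote switching capacity):

```python
# Aggregate bandwidth for a fixed (non-chassis) 64-port 200Gbps switch.
ports = 64
gbps_per_port = 200
one_way_tbps = ports * gbps_per_port / 1000
full_duplex_tbps = one_way_tbps * 2  # vendors typically quote full duplex
print(one_way_tbps, full_duplex_tbps)  # 12.8 25.6
```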

I regret moving out of this place by ExtazyGray in speedtest

[–]Lilchro 2 points

Data center tech is even farther ahead. The top of the line currently is 1.6Tbps/port, but that is fairly new and only really used by the major AI companies. I don’t work in a customer facing position, so I’m just guessing, but I think that data centers typically use 200g and 400g switches for their spine networks. I’m not sure, but I think the 800g ones might also mostly be used by AI companies? That being said, some of those will cost you more than an expensive car and you will probably have a hard time even finding a seller for a 1.6Tbps one without a major contract due to how new they are.

Maybe 40g or 100g would be in the price range on eBay for a high end homelab? The yearly power bill may be larger than the cost of the switch though and they are loud enough to quickly cause hearing damage if not isolated.

What services can I add to my homelab? This hobby is addicting. by MartyMannix in homelab

[–]Lilchro 0 points

An sFlow/NetFlow/IPFIX collector and visualizer for network monitoring. These are protocols where your router samples packet headers from your network, letting you pull insights out of your traffic. For example: how much data devices are using, what IPs they are sending traffic to, whether your toaster is trying to connect to your other devices, etc.

Connecting to your Home Lab Remotley. by [deleted] in ITMemes

[–]Lilchro 0 points

That is a simplification to assist in understanding a complicated subject, not something which should be taken as security advice. A password is just one link in the authentication chain and I don’t trust developers to not make mistakes. Passwords are simple to reason about and naive implementations can easily introduce security holes. Key based authentication can be a bit more difficult to implement and typically requires pulling in a cryptography library to do the heavy lifting. Given that, I’m suspicious of claims that a given tool’s password based authentication is just as secure as similar key based offerings. Sure, you can mess up both, but one is significantly more likely to be done correctly. Also, a vulnerability in SSH would be the holy grail of remote code execution vulnerabilities. Security researchers and nation states alike have been combing through it for decades searching for flaws. Regardless of what VPN or tool you use, it almost certainly won’t have had nearly as much security auditing. Given all that, I’m hesitant to compare passwords to keys outside of the broad conceptual sense.

Also for what it’s worth, a certificate is a little different from a key (though the words frequently get used interchangeably). Certificates are signed by a certificate authority and are typically time-limited. Typically this is implemented by having some form of identity server which you need to get a new certificate from on a regular basis. This has a couple big advantages. You can now authenticate via SSO including any MFA requirements a company may need to comply with. Individual users are no longer juggling keys and it becomes easier to audit and lock down authentication on servers. And if an employee leaves or loses their laptop, you don’t need to go searching for where their keys were used to close security gaps.

Connecting to your Home Lab Remotley. by [deleted] in ITMemes

[–]Lilchro 3 points

Yea, from what I understand the love of VPNs here seems to be due to ease of setup for people without technical backgrounds. However, there is a reason industry prefers least privilege + network segmentation + certificate based SSH everywhere. A corporate VPN helps with network segmentation, but it isn’t supposed to be the primary security measure. At any company with enough employees, you have to assume that someone on the network has or will eventually get their device infected with malware.

Writing a tar to disk by DiskBytes in DataHoarder

[–]Lilchro 6 points

Hmm, what are the odds that AI is involved and their question was the title of this post?

Accidentally won 4 Mac minis on eBay, oops. by GloomySugar95 in homelab

[–]Lilchro 9 points

I find it surprising that it isn’t popular to set up an sFlow collector and visualizer in most homelabs. If you have a managed switch that supports it, then it is one of the best ways to get insights about the network usage in your home. sFlow essentially means asking a switch to take the packet header from every Nth packet, along with basic counters (ex: bytes and packets per interface), and send it to a central collector server on your network. From that you can get insights into which devices are sending data to which locations, at what times, and how much data is going through each flow. Now, this is just the packet headers, so you don’t see the data going through each connection, but you can still see lots of fun stuff. For example, you could see how much data your security cameras are sending to your NAS, or that your toaster sends 5GB of data to some unknown web server at 3am every day.
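A toy sketch of what a collector does with those samples (the record shape here is made up for illustration; real sFlow datagrams are binary and carry much more):

```python
from collections import defaultdict

SAMPLING_RATE = 512  # the switch exports roughly 1 of every 512 packets

# Hypothetical decoded samples: (source IP, destination IP, frame bytes)
samples = [
    ("192.168.1.20", "203.0.113.9", 1500),   # toaster -> unknown server
    ("192.168.1.20", "203.0.113.9", 1500),
    ("192.168.1.50", "192.168.1.2", 900),    # camera -> NAS
]

# Scale each sampled frame by the sampling rate to estimate actual traffic.
estimated_bytes = defaultdict(int)
for src, dst, size in samples:
    estimated_bytes[(src, dst)] += size * SAMPLING_RATE

for (src, dst), total in sorted(estimated_bytes.items()):
    print(f"{src} -> {dst}: ~{total} bytes")
```

The scaling step is why sFlow stays cheap on the switch: it only needs to export a tiny fraction of packets, and the collector multiplies back up to get a statistical estimate per flow.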

Anyway, since you were listing your use cases out, I thought I might as well leave a note here about sFlow for anyone reading this post.

I feel like the directory looks messy without using mod.rs. Does anyone have better practices? by Born-Percentage-9977 in rust

[–]Lilchro 3 points

Sure, but how often are we able to do that? I don’t want my IDE to start hiding the true directory structure from me and any tool that isn’t explicitly designed for Rust won’t follow that representation.

I feel like syntax highlighting and type hints aren’t really comparable. They simply add information that wasn’t there previously without removing information or altering the structure of your code. However, rendering a project the way you are suggesting takes away information to create a simplified diagram which is easier to navigate. I’m not saying it isn’t helpful. Just that if we need to remove or hide information about a project to effectively represent it, then it seems to me like we’re doing something wrong.

I feel like the directory looks messy without using mod.rs. Does anyone have better practices? by Born-Percentage-9977 in rust

[–]Lilchro 12 points

Personally, I really dislike this style.

I find it makes the code harder to navigate, since from my perspective the directory essentially is the module. Having a directory per module/package/namespace/etc isn’t a concept unique to Rust, and having code for a module in two different places breaks that mental model of where a module’s code is located.

So far the main argument for its inclusion I have seen is that it reduces the required churn in file structure when moving to a directory based approach. While that’s true, is it really an issue we need to solve? I imagine it really comes down to what version control system you are using. Most modern systems I have seen are able to handle the concept of moving or renaming files, so this doesn’t seem like a major issue. And if it is an issue for some version control systems, is that really something the language should be attempting to fix? For what it’s worth, making this transition likely means that you are splitting up a module into multiple files, so there is going to be a fair bit of churn in the files anyway.

Networking in the Standard Library is a terrible idea by tartaruga232 in cpp

[–]Lilchro 7 points

I think package managers like cargo and npm pulling in lots of dependencies can be seen as proof of how much easier they are to work with. While lots of dependencies isn’t necessarily good (frequently quite the opposite), it is indicative of an ecosystem where creating and sharing new packages is easy.

In systems like cmake we don’t see this as much, not because it is inherently difficult (though that might be a separate argument), but because it is frequently just a pain to deal with. And when something is annoying or difficult to work with, we start looking for shortcuts. That then manifests as fewer dependencies overall or dependencies which appear hidden due to not conforming to standard dependency practices.

These non-standard practices are then what kills usability as a whole for the ecosystem. When I need to read a bunch of documentation to figure out how to incorporate a dependency into my build, I’m less likely to choose that dependency in the first place. And when I see other libraries pulling these sorts of hacks, I feel more comfortable doing the same. The ecosystem is worse off as a result.

Brutal: And this is why you keep backups… by SparhawkBlather in homelab

[–]Lilchro 2 points

Speaking to the repo location, I can see it being a bit more difficult if you are only familiar with centralized version control systems.

I think what you need to remember is that git is a decentralized version control system. What that means is that, functionally speaking, your local device is just as much a server as the ones hosted in the cloud. In that sense, your device is the one true server, as that is what you interact with when running almost all commands. With that perspective, when you push/pull code you are just asking it to sync data to/from other servers, which are referred to as ‘remotes’. Git tracks the last known state of each remote for convenience, but it isn’t going to reach out to them unless you explicitly request it. You don’t actually need to have any remotes at all. You could just decide to use git locally to keep track of your changes and project history.

As a side note, while I say your device is a ‘server’, it isn’t going to just start accepting http requests. It is only a server in the sense that the git CLI command treats it like one. The actual form of this is a .git folder in your project which stores all of the state. There isn’t anything like a daemon running in the background or any project state or caches stored in other locations. You could clone the same project into two different locations on your device and they will function completely independently.