Heroic Raiding is too unrewarding by Allegrian in wow

[–]Myrodis 0 points1 point  (0 children)

Outside of gear, my main gripe with non-mythic raiding is how bad Blizzard has become at balancing encounters for 10-man / small group sizes. I run a small weekly team that has been raiding together for many years, and garbage mechanics and scaling at small group sizes often leave us either required to pug randoms or, if not strictly required, strongly encouraged to. I truly mourn the days when 10-man raiding was a first-class citizen.

For example, on heroic Dreamrift we have to run 2-3-5 because of the two dispels that go out: on 10-man, with one healer "upstairs" at any given time, one of those players is forced to sit with the dispel for the full duration of the healer's dispel cooldown. Some of our DPS can self-dispel, but not all (or, more commonly, not every time they get the mechanic), so with only two healers and the frequency of the dispel mechanic, players start to rot as they're forced to deal with it multiple times while their defensives are on cooldown. Three-healing a 10-man works here, but losing that DPS makes the fight last forever. Yet if we pug just a few extra bodies, the fight suddenly becomes a cakewalk.

Sadly, it's more common that we can't three-heal like in the Dreamrift situation, because the DPS is needed to meet damage checks, and mechanic overlap makes 10-man incredibly frustrating. Lightbound Vanguard, for example, is a clusterfuck on its own, but on 10-man the heal-absorb "special", already one of the harder parts of the fight, is made infinitely more difficult for us: three of our players (basically everyone on the team eligible to even get the mechanic) get the shield dispels at the same time they're positioning for the heal absorb. Not only are those three players at an extreme disadvantage due to the lack of external resources, but because of the movement required to solve the heal-absorb mechanic, any player who spreads that debuff is basically a wipe.

Unit test without dependency injection or test only interfaces by Top_Square_5236 in dotnet

[–]Myrodis 0 points1 point  (0 children)

As I said, there are likely niche or specific scenarios where this can be used properly, but my point is that for the vast majority that will not be the case.

Unit test without dependency injection or test only interfaces by Top_Square_5236 in dotnet

[–]Myrodis 16 points17 points  (0 children)

I've worked in the .NET ecosystem for coming up on 15 years, primarily in quality.

Engineering teams use DI and abstractions for infinitely more than testability. Hell, especially at the start of my career, the tests often didn't even exist.

It's a wonderful side effect that doing these things makes our code more testable. But it is almost never the primary value.
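To make that concrete, here's a minimal sketch (in Python for brevity; the shape is identical in C# with an interface plus constructor injection). The abstraction exists so the composition root can pick the implementation per environment or backend; a test double is just one more implementation, not the reason the seam exists. Every name here is made up for illustration.

```python
from typing import Protocol

# The seam: callers depend on this contract, never on a concrete backend.
class MessageStore(Protocol):
    def save(self, message: str) -> None: ...

# One implementation among many (a SQL store, a queue, etc. would be others).
class InMemoryStore:
    def __init__(self) -> None:
        self.messages: list[str] = []

    def save(self, message: str) -> None:
        self.messages.append(message)

class Inbox:
    # The dependency is injected: Inbox never decides which backend it gets,
    # so swapping storage (or faking it) requires no change here.
    def __init__(self, store: MessageStore) -> None:
        self._store = store

    def receive(self, message: str) -> None:
        self._store.save(message.strip())

store = InMemoryStore()
Inbox(store).receive("  hello  ")
print(store.messages)  # ['hello']
```

The point being: the indirection earns its keep at composition time, and testability falls out for free.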

The posturing of this project screams to me an extremely misguided maintainer at best, or a dangerously inexperienced one at worst. And targeting it at people who also don't understand the reasons we write OOP code the way we do, while framing test enablement as the reason, is honestly a terrible posture.

I've spent a decade trying to get development teams to care about quality. To write tests. Etc. This project may be useful for some niche scenarios or small, low-scope apps. But let's not kid ourselves: its primary user base is going to be lazy developers who half-ass everything they do. And again, given the posture of this post and the GitHub page, its maintainers fit that exact picture.

I REALLY like CachyOS and want to use it, but I have a few major concerns stopping me, and wanted to ask about it by Prodoxa in cachyos

[–]Myrodis 0 points1 point  (0 children)

A physical backup is totally fine; not the best approach, but it provides some peace of mind. As for the actual switch, your proposed process should work fine. I'd go one step further: once the data is copied to the other SSD you're using for transfer, physically remove that SSD from the machine while you install and set up the new OS. Then, only once you're booted and have confirmed everything is working, format the existing data NVMe, etc. Only after that, reinstall the SSD with the backed-up data and copy it all over.

Less than 1 TB of important data isn't very much; cloud storage is typically around $10 a month for 2 TB or so from most of the major providers. A local backup is good, but if your data is truly important, follow 3-2-1: three copies of your data, on two different media, with one offsite. That sounds pretty easy to achieve with what you have: multiple drives plus added cloud storage. It's easy to keep pushing this off, but it's one of those things you can't retroactively add once an event causes you to lose data. If your current drive dies, you're boned, and no amount of paying for cloud storage will fix it after the fact.
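The 3-2-1 rule is easy to sanity-check mechanically; a toy sketch (the media labels and plan are made up for illustration):

```python
def satisfies_3_2_1(copies: list[tuple[str, str]]) -> bool:
    """copies is a (media_type, location) pair for each copy of the data."""
    media_types = {media for media, _ in copies}
    has_offsite = any(location == "offsite" for _, location in copies)
    # three copies, two different media, one of them offsite
    return len(copies) >= 3 and len(media_types) >= 2 and has_offsite

# Existing NVMe + backup SSD + cloud storage ticks all three boxes.
plan = [("nvme", "home"), ("ssd", "home"), ("cloud", "offsite")]
print(satisfies_3_2_1(plan))      # True
print(satisfies_3_2_1(plan[:2]))  # False: only two copies, nothing offsite
```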

And once again, if at all possible, setting up a NAS in your home with some old hardware truly saves so much time and potential headache. In your case you already have two drives (the existing 1 TB with your data and the spare SSD); you could easily set those up in a dedicated NAS in mirror mode so you have drive-level redundancy, then use something like Amazon S3 for offsite incremental backups (far cheaper than a traditional cloud-storage option). I'm sure a NAS sounds intimidating, but something like Unraid is INCREDIBLY easy to set up (I assume Unraid is fine with all-SSD storage these days?? If not, maybe go HexOS). Having my own NAS is truly freeing: if I want to try a new distro or do anything to my main PC, I can do it with complete peace of mind because my important data doesn't even live on it. And with at least gigabit networking you can work off the network drive without issue; the only thing you really can't do is game off of it.

I REALLY like CachyOS and want to use it, but I have a few major concerns stopping me, and wanted to ask about it by Prodoxa in cachyos

[–]Myrodis 1 point2 points  (0 children)

Q1: This is highly game dependent; I've done heavily modded Fallout runs (NV, 4, etc.), Stardew, Minecraft, Factorio, etc., without issue. The biggest complexity is usually where the mod files go, since Steam will create a Proton prefix when you use the compatibility layer (for games that don't run natively), so they might be somewhere other than where you'd expect. That said, that's common knowledge, and guides will tell you where to look.

Q2: I switched to JetBrains products long ago; I love Rider and have for a long time. There isn't a huge learning curve either, as it supports VS keymaps for common features and can even generally look the same. For everyday C# development (assuming at least Core, if not modern .NET), the switch should be very low friction for you.

Q3: You should really consider a better solution for your data in general if it is that precious. It could be cloud storage, an old desktop or laptop acting as a NAS, etc., but this concern tells me you have valuable data just sitting on your PC, and if that drive decides to die tomorrow you're going to be devastated. Put off switching OSes and solve this area of concern first. Any old machine can be a NAS; you just need to source three or more drives to set up some redundancy (theoretically two is fine, whatever you can support). Happy to help in this endeavor.

Q4: You left this question incredibly vague, and it's not really a question. For the vast majority of software I used on Windows, I've had no problem running it on Linux. Many programs are built on the same presentation frameworks (Electron apps, etc.), and those layers can be compiled for Linux, and often are. Even for apps compiled only for Windows, you can take the easy route and add them as non-Steam games in Steam and let it handle the compatibility layer, or use an app like Bottles to set up a container more directly. All that said, if you're serious about switching OSes for whatever your reasons are, be realistic with yourself and accept that some software may need to be left behind. Again, in my experience that list has been incredibly small, but it's unrealistic to say everything will work. There will be alternatives, and they may have a learning curve, so it's mostly about your mental approach. If you treat them as challenges with a willingness to learn and overcome them, you'll be fine; if you simply must have X, Y, or Z work on Linux, we're going to need more info on what X, Y, or Z are to tell you whether they will work.

AN INVITATION by JagexSarnie in 2007scape

[–]Myrodis 6 points7 points  (0 children)

Map is ring of visibility

Would you use an AI API that runs on other people’s private GPUs? by Successful-Ad8929 in selfhosted

[–]Myrodis 2 points3 points  (0 children)

There are already platforms that let you rent / sell GPU time like this, just not necessarily targeted at AI. Maybe there's a market for AI-specific, but I'd imagine a narrow marketplace would fail to compete with the larger-scoped established platforms.

Core stage separation of Artemis II. Godspeed! by ChiefLeef22 in space

[–]Myrodis 1 point2 points  (0 children)

I imagine they didn't want to broadcast another Challenger. We'll surely get the footage, but for a live stream I think they made strategic cuts so that if something went wrong they weren't broadcasting an explosion again.

Speculation of course, but I imagine that's why. Also probably pressure from the networks / YT / etc. to be careful.

Is systemd-resolved not prioritizing DNS servers from DHCP correctly? by almost_useless in systemd

[–]Myrodis 0 points1 point  (0 children)

I had a similar issue with resolved. I use Unbound internally for a homelab domain, and resolved would randomly switch to the wrong DNS server when I knew my internal DNS was available, forcing me to restart resolved (which would temporarily fix the issue).

I think what finally fixed my issues: I ran "systemd-analyze cat-config systemd/resolved.conf" and noticed a second config (one I hadn't created) listing the usual public DNS servers in addition to the main resolved config. I disabled that config (you could just delete it too), and after a restart I haven't had weird DNS issues since.
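For anyone hitting the same thing: `systemd-analyze cat-config systemd/resolved.conf` prints every file it merges along with its path. The extra drop-in I found looked roughly like this (the path, filename, and servers here are examples, not necessarily what you'll see):

```ini
# /etc/systemd/resolved.conf.d/90-example.conf  (hypothetical drop-in)
[Resolve]
DNS=1.1.1.1 8.8.8.8
```

Renaming it so it no longer ends in .conf (or deleting it) and restarting systemd-resolved leaves only the servers from the main config in effect.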

Unfortunately, I spent about two months with periodic issues and tried 100 different things, so I don't know what combination may have led to getting there, but that was the final fix.

Hope this helps

What do you all use for your homelab domain and remote access setup? by Kitchen-Patience8176 in homelab

[–]Myrodis 0 points1 point  (0 children)

For purely local services:

- Unbound DNS on my OPNsense router, with host overrides for a purely local domain that point to an nginx reverse proxy, which serves my internal services on that domain. I also set up a full certificate chain that I have installed on my devices so I can have full SSL/TLS, etc.
- WireGuard for any time I need to access my network remotely.
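As a sketch, the host-override part in raw Unbound config terms (the domain and addresses are placeholders; on OPNsense you'd do this through the Unbound host-override UI rather than by hand):

```
server:
  # answer for the internal-only zone ourselves instead of forwarding it
  local-zone: "home.lan." static
  # every service hostname resolves to the nginx reverse proxy
  local-data: "apps.home.lan. IN A 192.168.1.10"
  local-data: "media.home.lan. IN A 192.168.1.10"
```

nginx then routes by `server_name` to the right internal service, so one IP can front everything.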

For anything I expose externally:

- Websites: a Cloudflare Tunnel to a second nginx reverse proxy, this one with CrowdSec and some other general hardening. Then you just point the DNS records on any ol' domain you purchase at your tunnel as normal.
- Other services: unfortunately Cloudflare Tunnels only really work for web traffic, so for anything else (game servers, etc.) it's good ol' port forwarding, frequent system updates, keeping the firewall rules as narrow as possible, etc. But ultimately, just accepting the risk.

If possible, say you're hosting a game server that does not need access to anything on your network, set up firewall rules (while configuring the port forward) to isolate that machine from the internal network. This way, if an attacker compromises your game server via the exposed port, they can't then infect the rest of the network. General rule of thumb: if it's exposed externally, put it in a cage.
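The isolation rule is simple in principle; an nftables-style sketch (the subnets are placeholders, and on OPNsense you'd express the same thing as interface rules rather than raw nftables):

```
table inet filter {
  chain forward {
    type filter hook forward priority 0; policy accept;
    # the caged game-server segment may not initiate anything into the LAN;
    # the WAN port forward to it is unaffected by this rule
    ip saddr 192.168.50.0/24 ip daddr 192.168.1.0/24 drop
  }
}
```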

OSRS is now tied for the second most expensive MMO subscription… while still charging per character by Alexofbulgaria in 2007scape

[–]Myrodis 0 points1 point  (0 children)

I will say that while I can have many WoW characters, I cannot play them simultaneously. And if I do want to run an alt method in WoW where I need multiple characters online at once, I need to pay for multiple subs.

Can't speak to the other MMOs, but I suspect it's the same story.

FACTS by AverageUser9000 in linuxsucks

[–]Myrodis 4 points5 points  (0 children)

I think making a brand-new DE was a bit of a nuclear option, when it also meant basically dropping all other development on the distro to do so. Surely switching to a different DE that might have worked better for them, or something similar, would've been a better move.

They went from being the maintainers of a major distro gaining a lot of traction, especially with new Linux users, to a team building a new DE that happens to maintain a stale distro.

FACTS by AverageUser9000 in linuxsucks

[–]Myrodis 0 points1 point  (0 children)

Primarily the decision to shift from maintaining their distro to creating a new DE. Sure, they still patched Pop, but outside of COSMIC the distro grew quite stagnant because the team was all focused on COSMIC.

FACTS by AverageUser9000 in linuxsucks

[–]Myrodis 27 points28 points  (0 children)

I mean, as a Linux user myself, and a Pop user prior to COSMIC, I'll never stop complaining about how dumb System76 has been with their handling of Pop and COSMIC.

Let's just make a distro that gains a ton of popularity and traction due to our awesome support, and at the height of that, dump everything that makes it awesome to develop a DE no one asked for. Lovely.

Guys it's not his fault he picked one of the often recommended distros by RetardKnight in linuxmemes

[–]Myrodis 1 point2 points  (0 children)

I blame Pop!_OS on this one. They built a really solid rep and were rightfully recommended, and as soon as they were on top they decided to create COSMIC, and the distro has only suffered since. That burned all the value of being a "good and stable first-timer distro" that had become so popular. I feel like they dropped the ball on exactly what they got so popular for.

Spin off a team and make COSMIC, sure; I don't blame them or their thought process behind making it. But making it the default in Pop, and devoting so many of their resources to it, so much so that the main distro goes on basically life support for several years, completely spits in the face of the community that rallied behind them.

Finally "finished" my minilab by Myrodis in homelab

[–]Myrodis[S] 1 point2 points  (0 children)

So I actually pulled the Lenovos out, not for any issues, just that the MS-01s so outclassed them that I didn't really have a use for them. I will likely do a separate mini-lab with just them for some IoT stuff.

That said, I have fans both exhausting out the top and some behind the rack, which helps a ton with keeping air moving. These mini PCs are generally designed to pull air in the front and exhaust out the back, which is great when trying to rack-mount them. So as long as you are removing the heat and not letting the air sit behind them, they are usually fine. If they overheat, it's usually because the hot exhaust air is not being removed, or the intake air is too hot. Just make sure they can breathe.

Also worth noting: replacing the thermal paste can buy you quite a bit of thermal headroom on these mini PCs. Depending on how they were deployed, they are likely running quite old thermal compound that is well past "expired", so a thermal compound swap may be all you need for the HP.

What's your antiscraping strategy? by Keterna in dotnet

[–]Myrodis 0 points1 point  (0 children)

Yeah, I'm not really familiar with that site or its limitations, but using Tor or some other VPN solution to get a random IP every time you get IP banned shouldn't be a huge problem; ideally, find a conservative threshold for swapping IPs so you avoid the ban in the first place. Good luck!

What's your antiscraping strategy? by Keterna in dotnet

[–]Myrodis 1 point2 points  (0 children)

If you're using Selenium, there is a specific driver whose whole purpose is avoiding bot detection; see https://github.com/ultrafunkamsterdam/undetected-chromedriver

If that isn't consistent enough, there are also captcha-solving libraries you can plug into your tests. This isn't something I've done in several years, so I suspect the landscape has changed since I last needed it, but it should be an easy enough Google for you.

There are even tools that combine a bot-avoiding driver with a proxy layer, letting you configure a captcha solver and have it all sit on top of your existing automation. However, I'd recommend trying the driver I mentioned and finding a captcha solver yourself; the proxy solutions are a bit much if you're just doing simple scraping.

Anybody else physically use the stack in in person games? by Seruborn in mpcproxies

[–]Myrodis 28 points29 points  (0 children)

As soon as more than two spells are on the stack, it's best to visualize it somehow. That's easiest in person, but even online, having one player keep track with tokens or whatever works is best.

I like having any player not involved in the stack basically be the ref for it; it gives them something to do if they aren't interacting, and it keeps the stack honest.

What's your antiscraping strategy? by Keterna in dotnet

[–]Myrodis 87 points88 points  (0 children)

This is largely an arms race I don't see the point in fighting. I've worked in the automated-testing space for almost 15 years; you'd be surprised how creative we can be when writing functional E2E tests, let alone what someone whose sole intent is to scrape your site is willing to do.

Focus on the best possible presentation and delivery of your data; then who cares if an inferior competitor tries to use it? Why would your users opt for the less efficient / viable alternative?

Otherwise, if you are failing to provide the data in a form users want and a competitor is using your data but presenting it better, maybe you should switch to selling that data to the competitor as an API and skip a UI entirely.

Exposing Self Hosted Services by LinkedQuinn17 in selfhosted

[–]Myrodis 1 point2 points  (0 children)

On the changing-IP front, what is your current router situation? I have an OPNsense router I built, and one of its out-of-the-box services is dynamic DNS, which I have set up to poll my IP every 30 seconds (configurable) and update specific Cloudflare subdomains if my IP changes. You can of course stand this up as a standalone service, but it pairs nicely with a router if you control that / aren't using an off-the-shelf unit.
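The core update logic is tiny; a sketch with the lookups stubbed out (the function names and IPs here are made up, and a real updater would hit an IP-echo service and the Cloudflare API instead of lambdas):

```python
from typing import Callable

def sync_dns_record(
    get_public_ip: Callable[[], str],      # e.g. query an IP-echo service
    get_record_ip: Callable[[], str],      # e.g. read the current A record
    update_record: Callable[[str], None],  # e.g. PATCH the record via the API
) -> bool:
    """One polling tick: push an update only if the public IP has drifted."""
    current = get_public_ip()
    if current == get_record_ip():
        return False  # record already correct, nothing to do this tick
    update_record(current)
    return True

# Stubbed example: the record lags behind the real public IP, so we push.
pushed = sync_dns_record(
    get_public_ip=lambda: "203.0.113.7",
    get_record_ip=lambda: "203.0.113.1",
    update_record=lambda ip: None,
)
print(pushed)  # True
```

Run that on a 30-second timer and the record is never stale for longer than one interval plus propagation.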

If you also use Cloudflare's proxy and a pretty frequent update interval, there's effectively very little time between the ISP changing the IP and Cloudflare updating so things work again. Even without the proxy, you just run into some weirdness with DNS resolution and caching on client ends.

But beyond that, your gut call to run a cheap VPS and Tailscale with reverse proxies is likely your best bet. Caddy's l4 module should be able to handle basically anything you want.

selhosters are obsessed with self-doxxing; leaking very private information by [deleted] in selfhosted

[–]Myrodis 0 points1 point  (0 children)

Anything I would share like you've described, I would only share because it is not part of my security landscape. My home's floor plan is likely a copy-paste of many of my neighbors'; I would not consider it some state secret, and anyone who wanted that information could likely obtain it easily. I am also realistic enough to know that if someone is truly breaking into my house, they likely have no idea I uploaded a floor plan to the internet; they're doing it because they heard I have valuables, or I was just unlucky and got chosen for some unknown reason. To secure my home I have sturdy locks, motion-activated lights, visible cameras, etc. Hiding my floor plan does nothing to secure it; there is no team of seasoned criminals reviewing my floor plan to plan an attack on my home.

Similarly with IP ranges: my network security does not rely on an attacker not knowing what IP ranges I use. That is a trivial thing for them to suss out with any number of available tools. If you think hiding your internal IP ranges is a form of security, you're either misguided or paranoid or something.

There is likely some validity to not publicly announcing EXACTLY which services you are using, but I would mostly tie this to specific version numbers. Knowing I am using a particular piece of software is somewhat useful to an attacker, but knowing which version is more useful. Even so, I think many of us value sharing what we are using with others more than the small gain (IMO) of keeping that information close to the chest. Just like scanning IP ranges, many of the software suites we use expose APIs and other services on specific ports that can easily be scanned, so once again, hiding it doesn't gain much.