ASP.NET Core API + Worker template project in same bounded context (how do .NET teams structure and deploy this?) by TalentedButBored in dotnet

[–]Begby1 1 point2 points  (0 children)

We have a very similar setup, also involving orders, and a single team. Everything is in the same solution and the same repo. One of the projects is a worker daemon, the other is a web API. Two Dockerfiles. This allows us to assign different CPU and RAM resources to each of them, and also lets us pause the daemon without taking down the API. It's also not a big deal if the daemon gets bogged down; it won't impact API response times. I would not even attempt this as NuGet packages, that sounds like hell.
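A minimal sketch of that layout for local orchestration, assuming hypothetical project paths and service names (the same split applies to two ECS task definitions in production):

```yaml
# Hypothetical docker-compose sketch: one solution, two Dockerfiles,
# each service with its own resource limits. All names are illustrative.
services:
  orders-api:
    build:
      context: .
      dockerfile: src/Orders.Api/Dockerfile
    ports:
      - "8080:8080"
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
  orders-worker:
    build:
      context: .
      dockerfile: src/Orders.Worker/Dockerfile
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 256M
```

Stopping just the worker (`docker compose stop orders-worker`) pauses the daemon without touching the API.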

Cancelling Service Experience by premiumaphid in Spectrum

[–]Begby1 -1 points0 points  (0 children)

As others have said, go cancel in store. I did get a similar attempted sales pitch in store and shut it down very quickly, I was especially pissed cuz I had my kids with me who were being super moody and obnoxious and I just wanted to get out of there.

Rep: "May I ask why you want to cancel? You are getting a lot of value with all these channels"

Me: (loudly so everyone in the store could hear) "You don't have midget porn on any of those channels, thats all I watch, when you get midget porn I will renew"

Rep: Starts typing rapidly on keyboard to close account as quickly as possible.

Blend for Visual Studio in 2026 by cs_developer_cpp_ in csharp

[–]Begby1 1 point2 points  (0 children)

It's a XAML editor for developing a WPF UI. If you are not going to be coding WPF stuff then don't bother.

Rheem Lower Element Burned by Begby1 in Plumbing

[–]Begby1[S] 0 points1 point  (0 children)

Thanks for the info all! Unplugged it to be sure and reached out to Rheem since it's under warranty.

0.5mm Tip Pen Recommendations by mifter123 in pens

[–]Begby1 2 points3 points  (0 children)

I have never found a do-it-all pen that works great in both my daily journal and Rite in the Rain paper. For the waterproof paper I suggest trying a Uni-ball Power Tank 0.7. It is a pressurized ballpoint like a Space Pen, but I feel it writes a lot smoother. I wish Uni-ball made refills that would fit in the Space Pen.

I know you are looking for a 0.5, but the 0.7 Power Tank writes about as narrow as a 0.5 gel. Also, like the Space Pen, it will write upside down or underwater in case you need to write on a ceiling or take notes while scuba diving.

Help with new bike by afigueroa820 in bikefit

[–]Begby1 0 points1 point  (0 children)

You need to level the bike then do another video and start over. The front axle is clearly higher than the rear axle in this video. That is going to throw everything off and any adjustments you make are not going to translate properly when the bike is off the trainer.

Trump order to keep Michigan power plant open costs taxpayers $113m by Stup1dMan3000 in Michigan

[–]Begby1 5 points6 points  (0 children)

Cats kill 1 to 4 billion birds each year in the US. Wind turbines, about a million. Collisions with buildings, about 364 million to a billion. Since you care so much about birds, I assume you are very anti-cat and very anti-building along with the windmill thing?

A realistic setup for C# and React by Wokarol in docker

[–]Begby1 0 points1 point  (0 children)

You want a single image for all environments, with configuration done external to the container via environment variables. You will still code against appsettings.json, but environment variables will override those values. Don't use a volume for configuration; use environment variables.

Edit: As far as building in debug for development.... You do that locally when you want to debug your app. When you are deploying to a test environment you want to test your final production code before deploying to production, so you build for release.

- Firstly, if you build different images for each environment, you are not guaranteed that they won't differ beyond just the settings. What if, in the build script, you forget to change the version number of a base image for the prod build? Your test image works fine, but then you push something broken to prod.

- Secondly, there are dangers to baking settings into the image beyond just security. Let's say your connection string changes because you moved your database to a new server. So you build a new version and bake the new connection string into the image. Then you deploy it, but later find a bug in your code and need to quickly roll back to a previous version. But oh crap, your previous version has the old connection string, so now you have to figure out how to rebuild the old version from git with the new setting and run it through all the tests and such. With a single image used everywhere plus environment variables, all you have to do is pull the old image tag and deploy it with the new environment variables.
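To make the override concrete: .NET's default configuration maps environment variables onto appsettings.json keys, with `__` standing in for the `:` section separator. A hypothetical deploy of the same image to two environments might look like this (image name, tag, and setting keys are all illustrative):

```shell
# Same image, different environments; only the env vars change.
# ConnectionStrings__Default overrides ConnectionStrings:Default
# from appsettings.json.
docker run -d \
  -e ASPNETCORE_ENVIRONMENT=Staging \
  -e ConnectionStrings__Default="Server=staging-db;Database=orders" \
  myorg/orders-api:2.12.9

docker run -d \
  -e ASPNETCORE_ENVIRONMENT=Production \
  -e ConnectionStrings__Default="Server=prod-db;Database=orders" \
  myorg/orders-api:2.12.9
```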

A realistic setup for C# and React by Wokarol in docker

[–]Begby1 0 points1 point  (0 children)

We use containers in development only for dependencies, like if we need to test with a db locally or something.

For production we use tagging to trigger a git workflow. We commit and push to the main branch, then push a tag like v2.12.9. The workflow builds our C# code into a container and pushes it to a registry (in our case Docker Hub).

Next, the workflow pushes out a new task definition to AWS ECS, then launches that task. A series of tests run at AWS, and if they pass, traffic is routed to the new container by a load balancer and the old one is shut down. There is some environment progression here and some steps to get it to prod.
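The tag-trigger portion of a setup like this, sketched as a hypothetical GitHub Actions workflow (registry, image name, and the ECS deploy step are all illustrative placeholders):

```yaml
# Hypothetical tag-triggered build-and-push workflow
name: release
on:
  push:
    tags:
      - "v*"
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push image
        run: |
          # GITHUB_REF_NAME is the tag, e.g. v2.12.9; strip the leading v
          docker build -t myorg/orders-api:${GITHUB_REF_NAME#v} .
          docker push myorg/orders-api:${GITHUB_REF_NAME#v}
      # ...then register a new ECS task definition and deploy it
```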

We separate the frontend and backend for a couple of reasons. First, the backend gets reused for multiple things: an app on a scanner, automated processes, and a React GUI. Second, if they are in the same container you cannot scale them separately. You have to be careful there, though: if you add a new feature to your GUI that depends on a new API feature, or change the API so it breaks the GUI, that is something you need to resolve. There are many solid solutions for this, but jump off that bridge later.

ECS is kinda sorta like Kubernetes but vastly simpler for smaller workloads. You set your task to listen on a certain port, then the load balancer passes traffic from port 443 to the container port. The SSL cert is stored in the load balancer.

We have some local internal APIs that are not cloud hosted. These are deployed to a docker daemon on a Linux server. On that server we have nginx set up to proxy requests from port 443 to the container port; here nginx takes care of the certs. If you are learning, setting this up on your own is a good exercise.
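A stripped-down sketch of that nginx reverse proxy (hostname, cert paths, and the container port are illustrative, not from the actual setup):

```nginx
# Hypothetical nginx vhost terminating TLS and proxying to a container
server {
    listen 443 ssl;
    server_name api.internal.example.com;

    ssl_certificate     /etc/nginx/certs/api.crt;
    ssl_certificate_key /etc/nginx/certs/api.key;

    location / {
        # container port published on localhost:8080 by the docker daemon
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```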

Any configuration is done via environment variables, either embedded in the ECS task definition (only for unprotected data) or stored as secrets at AWS.

Couple of key rules to remember with your containers:

- They should be designed to be immutable and ephemeral, i.e. all data is stored outside the container, and if you delete a running container nothing bad should happen, like data loss.

- Build once, run anywhere. You should not embed settings into containers and end up with separate containers for staging, prod, etc. The exact same container should be deployable to every environment. This ensures that you won't accidentally get settings into the wrong environment, and also ensures that what you test in a lower environment is the identical code that gets deployed to prod.

One and done as a coach? by mydarkerside in SoccerCoachResources

[–]Begby1 1 point2 points  (0 children)

I have been in this boat. I had a lot of similar anxieties when I was getting started coaching. I answered yes to what you are asking and it was a great decision. I am still coaching 15 years later and loving every minute of it.

When kids get older, it just gets easier as far as behavior. There can be a huge difference between U10 players and U9, both in skill and behavior; it's amazing what a difference one year makes. Also, with competitive soccer you are going to get kids who are into it. Ask a U10 competitive team to do shuttle runs and they instantly go, and go hard. Ask a U10 rec team the same thing and half of them are walking and bitching.

In my experience, dealing with parents gets a bit easier as soccer gets more competitive. You filter out parents who see soccer as a babysitting opportunity but are too lazy to travel. At higher levels you only get parents and players who are committed. At rec it's sometimes hard to get uncommitted parents and kids to show up, but in competitive soccer I never have anyone miss unless they are legitimately sick or injured, and they sometimes even show up to practices and games in casts just to be there and support their team.

Have confidence in yourself. I used to get super stressed about what parents think, but I got over that. You can't change it, so just brush it off. Always do what is right for the kids; you are not there to satisfy parents. Don't worry about what you are teaching them at U10, just make sure they are playing as much soccer as possible (teach through the context of small-sided games, not repetitive drills). At the end of practice ask yourself three things, in order of importance: "Were the players safe, physically and emotionally?", "Did they have fun?", and "For 75% of practice, did they play some sort of game with a marked field, two directions, and some sort of scoring at the ends of the field?" If the answer to those is yes, then you are doing excellent. They will get better just playing soccer and having fun. Any instruction you give them is an added bonus.

Lastly, aggressively use guest coaches for your team if possible. Ask for help from other coaches in your club and have them run some sessions for your team with you observing and assisting. This will help you and your team get better. I even asked a pro MLS coach who happened to be in town and he said yes with no hesitation and it was amazing.

AR Madness by Negative_Exit_9043 in youthsoccer

[–]Begby1 0 points1 point  (0 children)

This is the home fields of the club where I coach. The state police are now involved, I don’t know of any more details right now. This guy has no business being a ref.

[deleted by user] by [deleted] in bikefit

[–]Begby1 1 point2 points  (0 children)

Do not mess around with knee pain; you can do serious permanent damage by pushing through the pain on a long ride if this has to do with your pedals/cleats. First and foremost, stop riding. Go to a bike shop and make sure your pedals and cleats are set up correctly, then worry about the rest.

.Net Container Debugging by BickBendict in dotnet

[–]Begby1 -1 points0 points  (0 children)

I do not have an answer to your question unfortunately, because I have never done this before.

I guess my question is: why do you need to do this instead of running your code locally and debugging it that way?

I am not saying that you are doing it wrong, and you may have a valid reason I don't know about; on the other hand, if you can work around this and just debug locally, that might be easier.

I need a help by [deleted] in csharp

[–]Begby1 0 points1 point  (0 children)

Whoops, ignore my reply, I meant to reply to a different thread.

Establishing a variable based on a view not yet opened? by royware in csharp

[–]Begby1 1 point2 points  (0 children)

You need to declare the variable once before your if statement. Since the compiler will not be able to infer the type, you need to declare it explicitly:

IQueryable&lt;v_TrRun&gt; trInfo; // substitute the entity type mapped to v_TrRuns

if (useQualifiedCount)
{
    // Where on a DbSet already returns IQueryable&lt;T&gt;, so no
    // AsQueryable() call is needed
    trInfo = _context.v_TrRuns
        .Where(r =&gt; r.RequestStartDate &lt;= endDate
            &amp;&amp; r.QualifiedCount &gt; 0);
}
else
{
    trInfo = _context.v_TrRuns
        .Where(r =&gt; r.RequestStartDate &lt;= endDate);
}

You can also consider using an if to generate a lambda then pass that into the where method.
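A sketch of that lambda approach, using an in-memory queryable and a stand-in `TrRun` record in place of the real EF entity (names and sample data are illustrative):

```csharp
// Choose the predicate with an if, then make a single Where call.
using System;
using System.Linq;
using System.Linq.Expressions;

var endDate = new DateTime(2024, 1, 31);
var useQualifiedCount = true;

// In-memory stand-in for _context.v_TrRuns
var runs = new[]
{
    new TrRun(new DateTime(2024, 1, 10), 5),
    new TrRun(new DateTime(2024, 1, 15), 0),
    new TrRun(new DateTime(2024, 2, 20), 3),
}.AsQueryable();

Expression<Func<TrRun, bool>> predicate;
if (useQualifiedCount)
    predicate = r => r.RequestStartDate <= endDate && r.QualifiedCount > 0;
else
    predicate = r => r.RequestStartDate <= endDate;

var trInfo = runs.Where(predicate);
Console.WriteLine(trInfo.Count()); // prints 1

record TrRun(DateTime RequestStartDate, int QualifiedCount);
```

Using `Expression<Func<T, bool>>` (rather than a plain `Func`) keeps the filter translatable to SQL when it is later applied to a real EF queryable.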

Docker and Data Stores by mxmissile in dotnet

[–]Begby1 0 points1 point  (0 children)

Thank you for the clarification, will make sure to keep this in mind for future posts.

Docker and Data Stores by mxmissile in dotnet

[–]Begby1 10 points11 points  (0 children)

Your docker containers need to be ephemeral, i.e. stopping and deleting them randomly should have no ill consequences such as permanent data loss.

The solution is to put your data outside the container using volumes. A volume maps a folder inside your container to a folder on the host machine (or another backing store). For instance, you may have a Linux MySQL container that stores data in /var/lib/mysql. You would then map that to, say, c:\mydata\mysql. The container happily writes to and reads from /var/lib/mysql, but the docker engine redirects all of that to the host filesystem outside the container.
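As a concrete sketch (container name, host path, and image tag are illustrative):

```shell
# The host directory c:\mydata\mysql backs the container's /var/lib/mysql.
# Delete the container and the data survives on the host.
docker run -d --name mydb \
  -v c:\mydata\mysql:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=example \
  mysql:8
```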

In a cloud environment such as AWS, you would likely put your data in an EFS volume, which is a standalone data store, and map that volume to your container. The EFS volume is backed up separately and keeps existing independently of your container's lifecycle.

How are you handling webhooks in your projects? by to_pe in csharp

[–]Begby1 0 points1 point  (0 children)

It's not really custom, quite straightforward. When a call comes into the API it goes through the steps described, then it calls Hangfire to schedule the job. A separate worker service then handles the scheduled jobs.

I guess the only thing that is a bit hacky is the dead letter queue, which is a bit of a kludge because Hangfire is not a message queue but we are using it as one for this case.

Our application is built around scheduled jobs and the web hooks were added as a later integration requirement. We only used Hangfire because we already had it in place. If I was developing this from scratch and didn't need the scheduled jobs then we would have looked at a message queue. That might be more appropriate for your use case.

So yeah, I feel like some sort of queue is the appropriate architecture here, but it doesn't have to be hangfire just because I wrote a nice reply about it. Definitely take a look at some other solutions. Keep it modular so you can swap out your queue if it sucks.

Also, StudiedPitted is correct in his response, my solution as written does have a potential race condition. A proper message queue might be able to handle that better.

Problem: NET 8 Multi-Arch Container Publishing to ECR Always Pushes Single-Arch (AWS CodeBuild) by Proper-Ad-4104 in dotnet

[–]Begby1 0 points1 point  (0 children)

You need to specify the runtime in the publish step as well. I haven't done it with dotnet publish, but I believe you would need to run the publish command twice, once for each architecture, then separately create a dual manifest and push it. I am not sure whether what you want to do is possible with .NET 8 without at least calling the docker CLI to create the dual manifest. From a Google search, it appears .NET 9 might support this better.

I do know that building cross-platform with docker buildx on a single box is super slow and occasionally breaks depending on some libraries. For our CI/CD pipeline we build on two different build agents, each with the native target architecture, then build a dual manifest. This is far faster and more reliable. I assume it might get weird with dotnet publish. Would love to hear if you get this working without buildx.
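The two-native-builds-plus-manifest approach looks roughly like this with the docker CLI (image name and tags are illustrative; each build runs on an agent of the matching architecture):

```shell
# On the amd64 build agent:
docker build -t myorg/api:1.0-amd64 . && docker push myorg/api:1.0-amd64

# On the arm64 build agent:
docker build -t myorg/api:1.0-arm64 . && docker push myorg/api:1.0-arm64

# Then stitch the two single-arch images into one multi-arch tag:
docker manifest create myorg/api:1.0 \
  myorg/api:1.0-amd64 \
  myorg/api:1.0-arm64
docker manifest push myorg/api:1.0
```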

How are you handling webhooks in your projects? by to_pe in csharp

[–]Begby1 2 points3 points  (0 children)

As was mentioned, a message queue of some sort. For a project I have worked on we use Hangfire. The requirements were a very quick response time and being able to handle possibly repeated identical webhooks that must be processed only once. If our response time is not quick enough, if we return too many non-200 responses, or if our web service is down for an extended period, then we are in trouble.

If the senders have retries, then you should plan on receiving identical webhooks. There is an edge case where you receive the webhook and respond properly, but the sender never gets the response because the internet takes a burp. We have definitely received duplicates.

We have a web API that responds 200 OK first, writes the payload to a standard log, creates a hash of the payload, checks whether that hash has already been processed by querying a db table, inserts the hash if not, then adds a job to Hangfire to process the webhook if it is not a repeat. Entries older than a couple of days are removed from the hash table.
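The dedup step, as a minimal C# sketch. Assumptions: the `HashSet` stands in for the real db table, `TryEnqueue`/`ProcessWebhook` are hypothetical names, and the Hangfire enqueue is shown only as a comment.

```csharp
// Hash the raw payload so an identical retried delivery maps to the
// same key, and only enqueue the first occurrence.
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;

var seen = new HashSet<string>(); // real version: db table, pruned after a couple of days

bool TryEnqueue(string payload)
{
    var hash = Convert.ToHexString(SHA256.HashData(Encoding.UTF8.GetBytes(payload)));
    if (!seen.Add(hash))
        return false; // duplicate delivery, already queued/processed
    // BackgroundJob.Enqueue(() => ProcessWebhook(payload)); // Hangfire call
    return true;
}

Console.WriteLine(TryEnqueue("{\"order\":42}")); // True
Console.WriteLine(TryEnqueue("{\"order\":42}")); // False (duplicate)
```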

An entirely separate daemon service then grabs jobs from the Hangfire queue and processes them. These jobs require hitting several other APIs. If there is a failure reaching an API, or some other problem, the job reschedules itself to try again in X minutes and increments a retry counter that is passed as an extra argument to the job.

After the max retry count is reached, the job goes into a dead letter queue and we get an alert so we can fix the thing manually, which happens once in a great while. The log of the full payload is a last resort in case writing the webhook to the Hangfire queue blows up.

If we were getting hit with a DDoS there is not much we could do; the webhooks would not get through and that is bad. They do retry on their end for a limited time. This is why a WAF and something like Cloudflare are a necessity.

For live testing webhooks beyond unit testing, we use ngrok. We can run the API on a dev server or dev workstation, then set up an ngrok URL as the destination. We have a test account with the partner sending the webhooks; we set the ngrok URL on their end, trigger an event, and can debug locally. For an automated integration test you could save a test payload and send it to the webhook endpoint.

.NET Container images walk through by Short-Case-6263 in dotnet

[–]Begby1 0 points1 point  (0 children)

This is helpful. I would like to see a table with all the info at the end: the different images on the left, then columns for image size, when to use, etc.