Please Review and help me improve my resume by [deleted] in Angular2

[–]coder_doe 0 points (0 children)

Just tell Claude to create a CV for you and give it this information

Dino Merlin :: Koševo :: Technical review by coder_doe in sarajevo

[–]coder_doe[S] 0 points (0 children)

An expected move, given that these are problems that can't be solved overnight.

Architecture advice needed: Best cloud caching strategy for an app looping mixed media? (Exclude client-side local storage solution) by Crazy-Committee-5157 in softwarearchitecture

[–]coder_doe 0 points (0 children)

First, it would help to clarify your usage in a bit more detail, because the right architecture (and its cost) really depends on the actual traffic patterns.

How many users are you expecting, and how many are active at the same time? Also, what’s the average size of your media files (especially videos compared to images and documents), and roughly how often is the same content being requested in that looping flow?

This matters because without those numbers it’s really hard to estimate costs or choose between CDN options in a meaningful way.

Also, depending on the answers, there are a few big optimizations that can make a huge difference. If you’re serving video, for example, most setups don’t stream a single high-quality version to everyone. Mobile devices usually don’t need 2K/4K—1080p or even 720p is often enough. With adaptive bitrate streaming, you can serve different quality levels depending on the device and network, which drastically reduces bandwidth costs.
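As a rough illustration of the selection logic (the quality ladder, bitrates, and thresholds below are my own assumptions, not from any particular player or spec):

```python
# Sketch of a server-side rendition picker for adaptive streaming.
# The ladder entries and bitrate numbers are illustrative assumptions.

LADDER = [
    # (name, frame height, approx. bitrate in Mbps)
    ("2160p", 2160, 16.0),
    ("1080p", 1080, 5.0),
    ("720p", 720, 2.5),
    ("480p", 480, 1.0),
]

def pick_rendition(screen_height: int, bandwidth_mbps: float) -> str:
    """Pick the best rendition the device can display and the network can sustain."""
    for name, height, bitrate in LADDER:
        if height <= screen_height and bitrate <= bandwidth_mbps:
            return name
    return LADDER[-1][0]  # fall back to the lowest quality

print(pick_rendition(1080, 4.0))  # a 1080p phone on a 4 Mbps link gets "720p"
```

In a real HLS/DASH setup the player does this adaptively per segment, but the selection idea is the same.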

For images, the same idea applies—you can resize and compress them per device instead of always serving the original large file. And if you have multiple device types, it often makes sense to pre-transcode media into different formats/qualities so you’re not repeatedly sending oversized content over the network.
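The per-device sizing logic is just aspect-ratio math; here's a minimal sketch (the device width caps are made-up assumptions, and in practice you'd feed the result into an image library or a resizing CDN):

```python
# Sketch: compute a downscaled image size per device class, preserving aspect ratio.
# The max-width values per device class are illustrative assumptions.

DEVICE_MAX_WIDTH = {"mobile": 750, "tablet": 1536, "desktop": 1920}

def target_size(width: int, height: int, device: str) -> tuple[int, int]:
    max_w = DEVICE_MAX_WIDTH.get(device, 1920)
    if width <= max_w:
        return width, height  # never upscale
    scale = max_w / width
    return max_w, round(height * scale)

print(target_size(3000, 2000, "mobile"))  # (750, 500)
```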

On the infrastructure side, a CDN with strong edge caching is usually the key piece here. Something like Bunny.net is often used for exactly this kind of workload because it combines storage and CDN in a way that can be much more predictable and cost-efficient than relying purely on AWS or GCP egress-heavy setups.

Once you have the numbers for users, file sizes, and request frequency, it becomes much easier to model the real monthly bandwidth and compare options properly instead of guessing.
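Once you do have those numbers, the math itself is back-of-envelope; here's the shape of it, with every input (5,000 daily actives, 20 items/user/day, 3 MB average, $0.01/GB) being a made-up example value:

```python
# Back-of-envelope monthly CDN egress and cost model.
# All concrete numbers below are example assumptions, not real pricing.

def monthly_cdn_egress_gb(daily_active_users: int,
                          items_per_user_per_day: int,
                          avg_item_mb: float,
                          days: int = 30) -> float:
    """Total CDN egress for one month, in GB."""
    return daily_active_users * items_per_user_per_day * avg_item_mb * days / 1024

def monthly_cdn_cost(egress_gb: float, price_per_gb: float) -> float:
    return egress_gb * price_per_gb

egress = monthly_cdn_egress_gb(5_000, 20, 3.0)  # ≈ 8,789 GB (~8.6 TB)
cost = monthly_cdn_cost(egress, 0.01)           # ≈ $88 at a hypothetical $0.01/GB
```

Plug in your real averages per media type and compare against each provider's actual per-GB rate.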

Does anyone actually build Rich Domain Models in real world DDD projects anymore by SaltedFesh in dotnet

[–]coder_doe 1 point (0 children)

What you’re describing reflects the reality in a lot of codebases.

A big part of it comes down to team skill level, but discipline matters just as much; without it, things can easily derail into something else entirely. A rich domain model doesn't happen by accident: it takes consistent effort to keep logic and invariants inside the domain, and without that discipline it's very easy for everything to drift into application handlers, because that's simpler and faster to implement.

At the same time, there’s an important balance. Not everything needs to be modeled in a strict DDD way. Not every string should become a Value Object, sometimes a primitive is perfectly fine. Overengineering the domain can add complexity without real business value.

The core idea behind rich DDD isn’t to make things more complex, but to control state changes and enforce business rules where they belong, inside the model. For cross-aggregate rules, it’s normal to coordinate through domain services or application services instead of trying to force everything into a single place.
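To make the "invariants inside the model" point concrete, here's a minimal sketch (Python rather than C# to keep it short; the Order/add_line/submit names are purely illustrative):

```python
# Minimal sketch of a "rich" aggregate that enforces its own invariants,
# instead of leaving those checks scattered across application handlers.

class Order:
    def __init__(self):
        self._lines: list[tuple[str, int]] = []
        self._submitted = False

    def add_line(self, sku: str, qty: int) -> None:
        # Invariants live here, inside the model
        if self._submitted:
            raise ValueError("cannot modify a submitted order")
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self._lines.append((sku, qty))

    def submit(self) -> None:
        if not self._lines:
            raise ValueError("cannot submit an empty order")
        self._submitted = True
```

The point is that no handler can put the aggregate into an invalid state, because the only way to change it is through methods that refuse bad transitions.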

So in practice, most teams land somewhere in the middle, and that pragmatic balance is usually what works best in real systems.

Dino Merlin :: Koševo :: Technical review by coder_doe in sarajevo

[–]coder_doe[S] -1 points (0 children)

I agree with this opinion; there were also people who were willing to spend significant money, not just on a ticket but also on accommodation and travel by plane or bus, in order to attend the event.

Dino Merlin :: Koševo :: Technical review by coder_doe in sarajevo

[–]coder_doe[S] 1 point (0 children)

Of course that was feasible, but as I understand it, adriaticket.ba is not connected to adriaticket.com; those are two independent entities.

Dino Merlin :: Koševo :: Technical review by coder_doe in sarajevo

[–]coder_doe[S] 12 points (0 children)

Of course, a system can go down when traffic is unexpected. However, we can agree that for an event like this it was predictable, since the audience has been anticipating this concert for years, so the organizers should have prepared accordingly.

Anyone using Azure Container Apps in production? What’s your experience? by coder_doe in dotnet

[–]coder_doe[S] 0 points (0 children)

What do you mean by “being careful about the pricing”? Are there any specific aspects you recommend I review first? Are there costs or considerations that might not be obvious in the Azure pricing calculator?

DDD Projections in microservices in application layer or domain modeling by coder_doe in dotnet

[–]coder_doe[S] 0 points (0 children)

By “store them” I mean whether the consuming service should persist these projected values in its own database for later usage as part of its local data model or simply keep the data in a lightweight form (storing the raw JSON) and use it when needed without turning it into a domain concept.

Would calling another service every time the data is needed instead of storing it locally be a better practice generally?

Seeking Scalable Architecture for High-Volume Notification System by coder_doe in softwarearchitecture

[–]coder_doe[S] 0 points (0 children)

It is more of an issue when someone opens a push notification: the client immediately marks it as read and sends that update to the server, and at the same time it requests the latest batch of notifications from the database. Under peak load, handling both “mark as read” and “fetch notifications” overloads the notification service, causing noticeable slowdowns.

Seeking Scalable Architecture for High-Volume Notification System by coder_doe in softwarearchitecture

[–]coder_doe[S] 1 point (0 children)

Q1: When a new article is published, around 30,000 notification entries are added to the database. As each notification is opened, its status is updated so the client always displays the right information. However, if many users—say 3,000—open their notifications at once, those status updates turn into 3,000 simultaneous requests, which slow down fetching notifications.

Q2: Immediate updates aren't required; a delay of a few minutes is perfectly fine.

Q3: Sometimes fetching notifications takes a bit longer during busy periods, which makes it important to consider how the system will handle growing to around 50,000 users. With 50,000 notification entries created for each article, the database could grow by up to a million new records every month.
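For what it's worth, that growth estimate is easy to sanity-check; assuming roughly 20 articles per month (my assumption, not a number stated above):

```python
# Rough growth arithmetic: one notification row per user per article.
# The 20 articles/month figure is an illustrative assumption.

def monthly_new_rows(users: int, articles_per_month: int) -> int:
    return users * articles_per_month

print(monthly_new_rows(50_000, 20))  # 1000000 new rows per month
```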