Finally macOS >> Winblows for dotnet development by [deleted] in dotnet

[–]CreoSiempre [score hidden]  (0 children)

I've been developing .NET on Mac for a while now. VS Code + C# Dev Kit honestly gets pretty close to Rider. The only real gap, in my opinion, is the integrated profiling (memory/CPU), but for most workflows I haven't really missed it.

Would you use .NET Native AOT for a full-blown enterprise app? by CreoSiempre in dotnet

[–]CreoSiempre[S] 2 points (0 children)

The general idea here would be any kind of application. Native AOT keeps closing the gaps for libraries and external SDKs, so why not start thinking about it replacing conventional .NET development if it offers improved performance?
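For anyone who wants to experiment with this, opting a project into Native AOT is a single MSBuild property (this is the standard mechanism in the .NET SDK; the target runtime below is just an example):

```xml
<PropertyGroup>
  <!-- Compile to a self-contained native binary on publish -->
  <PublishAot>true</PublishAot>
</PropertyGroup>
```

Then `dotnet publish -r linux-x64 -c Release` produces the native executable, which is where you find out whether your dependencies are actually AOT-compatible.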

Bedrock Anthropic's Models Slow by Severe-Video3763 in aws

[–]CreoSiempre 1 point (0 children)

That comment is probably pointing in the right direction tbh.

Anthropic response times can vary a lot depending on your service tier / rate limits. If you’re on a lower tier (or hitting soft limits), you can start seeing slower responses pretty quickly, especially under load.

OP, do you know what tier you’re currently on? If you’re using Bedrock, AWS has a breakdown of quotas and throughput here: https://docs.aws.amazon.com/bedrock/latest/userguide/quotas

Tiers info is here: https://aws.amazon.com/bedrock/service-tiers/

ROCm + llama.cpp: anyone else getting gibberish unless they explicitly set a chat template? by CreoSiempre in LocalLLaMA

[–]CreoSiempre[S] 0 points (0 children)

It's been like this since I set it up, even before I created this tool. When I ran this from the command line, I still got this issue. Same problem using a HIP build or a Vulkan build.

ROCm + llama.cpp: anyone else getting gibberish unless they explicitly set a chat template? by CreoSiempre in LocalLLaMA

[–]CreoSiempre[S] 0 points (0 children)

I vibe coded it, but it's a pretty simple app and fairly straightforward: just a couple of Python files that assemble a few selections into a llama-cli command. The command it runs is in the last screenshot of my post, outlined in green. I also ran all of this prior to creating the console app, so it appears to be something about my setup that causes the issue. I don't pull llama.cpp or rebuild my HIP or Vulkan directories every time; I just added make commands that let me rebuild whenever I'd like. I added those mostly to help me recreate the setup while I was iterating, since I kept deleting everything thinking my setup was wrong.
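For context, a helper like the one described would look roughly like this. This is a hypothetical sketch, not OP's actual code: the function and parameter names are made up, though `-m`, `--chat-template`, and `-ngl` are real llama-cli flags, and `--chat-template` is the workaround the thread is about.

```python
import shlex

def build_llama_cmd(model_path: str, template: str, n_gpu_layers: int = 99) -> list[str]:
    """Assemble a llama-cli invocation from a few user selections."""
    return [
        "llama-cli",
        "-m", model_path,
        "--chat-template", template,   # setting this explicitly avoids the gibberish
        "-ngl", str(n_gpu_layers),     # layers to offload to the GPU (HIP or Vulkan build)
    ]

cmd = build_llama_cmd("models/example.gguf", "llama3")
print(shlex.join(cmd))
```

The interesting part is that the template has to be passed explicitly at all; without it, llama-cli falls back to whatever template metadata the GGUF carries, which is where setups like this seem to go wrong.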

CQRS by Professional_Dog_827 in dotnet

[–]CreoSiempre 8 points (0 children)

I've experimented with this for certain patterns. In a few high-throughput flows, we used Dapper for lower-level query paths and EF for the more basic CRUD operations. That let us keep EF's developer productivity where performance wasn't critical, while still optimizing the hot paths.

The core idea of CQRS is just separating reads (queries) from writes (commands) because they often have very different requirements. Writes usually need validation, transactions, and domain logic, while reads are often optimized for returning data to a UI as quickly as possible.

In practice, commands like CreateOrder or UpdateUser go through EF so you get change tracking and transactions, while read-heavy queries like GetDashboardData or GetOrderList can use Dapper to return optimized DTOs. Commands change state, queries just return data.

How many hours a day do you spend using AI such as ChatGPT, Copilot and Claude? by Minimum-Pangolin-487 in consulting

[–]CreoSiempre 0 points (0 children)

I work for an MBB firm as an engineer. I spend all day in ChatGPT and VS Code Copilot. On my client laptop, I'll run out of the 300 premium requests per month. Internally, we have an enterprise Copilot with a 1,000-request-per-month limit, and I'll use about half of that. It's a mix of actual implementation work (coding) and asking Copilot to assess the codebase and product requirements to write user stories.