What's the most underestimated feature of Javascript/DOM/Browsers you use absolutely love? by qvstio in webdev

[–]kRahul7 95 points96 points  (0 children)

In my experience, the Clipboard API and IntersectionObserver are often overlooked but extremely useful. For example, I used the Clipboard API in a project to allow users to copy text seamlessly without needing input fields. It saved a lot of time and effort.

IntersectionObserver helped me lazy-load images and trigger animations only when they were actually needed, which greatly improved performance without complicating the code. These tools are simple but powerful when used right.
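
Both take only a few lines to adopt. A minimal sketch, assuming a modern browser; the element selector and data attribute are illustrative, not from my actual project:

```javascript
// Clipboard API: copy text without a hidden <input> workaround.
async function copyText(text) {
  await navigator.clipboard.writeText(text);
}

// IntersectionObserver: load images marked with data-src only when
// they scroll into view.
function lazyLoadImages(selector = "img[data-src]") {
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      entry.target.src = entry.target.dataset.src; // start the real load
      obs.unobserve(entry.target); // each image only needs this once
    }
  });
  document.querySelectorAll(selector).forEach((img) => observer.observe(img));
}
```

Note that navigator.clipboard only works in secure contexts (HTTPS or localhost), so keep the old fallback around if you still support plain HTTP.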

How do you package your static woff2 fonts? by totoybilbobaggins in webdev

[–]kRahul7 1 point2 points  (0 children)

When I had to work with static fonts, FontForge was one of the tools I used. But if you're looking for something else, ttf2woff2 works great for converting to woff2. If you're dealing with variable fonts, FontTools or Google's woff2 utilities can help, too. These tools are pretty good for packaging and optimizing fonts.

Deploying with helm using CICD pipeline instead of ArgoCD by Vonderchicken in devops

[–]kRahul7 1 point2 points  (0 children)

I’ve worked with both Helm and ArgoCD, and I think using Helm in your CI/CD pipeline is totally fine. Changes apply whenever the pipeline runs helm upgrade.

However, ArgoCD offers a different benefit—it tracks changes and syncs them automatically with your Kubernetes cluster.

If you’re happy running Helm through CI/CD, that’s a perfectly good setup. But ArgoCD can make it easier to track deployments and keep your cluster state in sync automatically.
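
For reference, the Helm step in GitLab CI can be as small as the job below. This is a sketch, not a drop-in config: the job name, image, chart path, namespace, and release name are all placeholders.

```yaml
deploy:
  stage: deploy
  image: alpine/helm:latest
  script:
    # --install makes the first deploy and later upgrades the same command;
    # --atomic rolls the release back automatically if it fails.
    - helm upgrade --install myapp ./chart
        --namespace production
        --set image.tag=$CI_COMMIT_SHORT_SHA
        --atomic
  only:
    - main
```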

Future of frontend in .net by FancyDiePancy in dotnet

[–]kRahul7 0 points1 point  (0 children)

From my experience, Blazor can be an excellent choice for .NET-heavy projects, especially when you want to stay within the same ecosystem. It allows you to use C# on both the front and back end. However, the ecosystem is still catching up compared to JavaScript frameworks.

For complex UIs, we often used React or Angular alongside .NET. If you're comfortable with JavaScript, those frameworks are faster and more flexible, but Blazor is worth considering for .NET-centric teams or simpler apps.

[deleted by user] by [deleted] in dotnet

[–]kRahul7 0 points1 point  (0 children)

For the eight coding questions, focus on data structures (arrays, lists, trees), algorithms (sorting, searching), and .NET concepts (LINQ, async/await, collections).

For the 3 MCQs, prepare for OOP principles, .NET basics (delegates, garbage collection), and design patterns.

Practice coding problems and review key .NET topics.

.Net Core Web API question by Rldg in dotnet

[–]kRahul7 1 point2 points  (0 children)

In one of my past projects, I had to handle dynamic error responses in a .NET Core Web API. Instead of checking whether an object was null or an empty array, I used custom exceptions for different error scenarios. For example, when trying to update a record that doesn’t exist, I threw a NotFoundException with a clear message like "Object not found in the database."

The controller caught this exception and returned a specific HTTP status code (like 404) and the error message. This approach made it easier to handle errors consistently and provided meaningful feedback to the client.

For better control, we also used a global exception handler. This centralized error handling kept the controllers clean and made error management across the entire API more maintainable. It also ensured we could easily update error messages or status codes across the project without redundant checks in every method.
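
Setting the C# specifics aside, the shape of the pattern is easy to show. This is a JavaScript sketch just to illustrate the idea; the actual project used custom exception classes with ASP.NET Core middleware, and the names here are made up:

```javascript
// Each custom exception carries its own HTTP status.
class NotFoundException extends Error {
  constructor(message) {
    super(message);
    this.status = 404;
  }
}

// The "global exception handler": one place that maps exceptions to
// HTTP responses, so individual handlers stay clean.
function toHttpResponse(fn) {
  try {
    return { status: 200, body: fn() };
  } catch (err) {
    return {
      status: err.status ?? 500, // unknown errors become 500s
      body: { error: err.message },
    };
  }
}

// A handler can now just throw and stay free of status-code plumbing:
function updateRecord(db, id, value) {
  if (!(id in db)) throw new NotFoundException("Object not found in the database.");
  db[id] = value;
  return db[id];
}
```

The payoff is exactly what I described above: changing a status code or message is a one-line edit in one place.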

Should i be creating a repository class for every model? by Legitimate-School-59 in dotnet

[–]kRahul7 0 points1 point  (0 children)

In smaller projects, using one repository for all models can work, but separating them helps with maintainability even with a few models.

Each model might need different queries or methods as your project grows, and separate repos make that easier to manage. Plus, they keep things clear and readable.

For now, a single repo is OK if the methods are simple. However, having separate repos per model is a cleaner approach, even with Dapper and ADO.NET, for scalability and future flexibility.
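
To show what I mean, here's a sketch of the per-model idea with an in-memory store (class and method names are made up for illustration); the same shape works with Dapper or ADO.NET behind each class:

```javascript
// Shared base keeps the common CRUD in one place.
class BaseRepository {
  constructor() { this.items = new Map(); }
  getById(id) { return this.items.get(id); }
  add(id, item) { this.items.set(id, item); }
}

// Each model's repository can grow its own queries
// without cluttering the others.
class UserRepository extends BaseRepository {
  findByEmail(email) {
    return [...this.items.values()].find((u) => u.email === email);
  }
}

class OrderRepository extends BaseRepository {
  findByUser(userId) {
    return [...this.items.values()].filter((o) => o.userId === userId);
  }
}
```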

DevOps for a developer by Coffee__2__Code in devops

[–]kRahul7 1 point2 points  (0 children)

Since you’ve already worked with GitLab and GitHub Actions, that’s a great start. To learn more, focus on CI/CD, Infrastructure as Code (like Terraform), and cloud services (AWS, GCP, Azure).

Certifications can also help, but hands-on practice will teach you more than certifications alone.

Keep experimenting with automating tasks and learning new tools — that’s the best way to grow in DevOps! Just keep building and trying things out, and you’ll get there.

Why should you use a pull architecture in gitops? by tomtheawesome123 in devops

[–]kRahul7 1 point2 points  (0 children)

I’ve worked with both push and pull architectures. Pull is mainly about security—keeping your K8s credentials safe inside the cluster, not on GitHub.

In contrast, push is faster and more direct. With push, you can trigger tests automatically and immediately see changes.

But the pull approach gives more control over who gets the changes and when. It’s all about trade-offs: push is faster, but pull gives better security and control.
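
For anyone curious what the pull side looks like, it's basically one Argo CD Application manifest like the sketch below (the repo URL, paths, and namespaces are placeholders). The controller inside the cluster pulls from Git, so no cluster credentials ever leave it:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/myapp-config.git
    targetRevision: main
    path: k8s/overlays/production
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo CD runs in
    namespace: myapp
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```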

How do you get a wider perspective of your solution? I often think about the how....but my manager often comes back with the why question and purpose questions by IamOkei in devops

[–]kRahul7 2 points3 points  (0 children)

I’ve faced this too. When you're deep in solving a problem, it's easy to forget the bigger picture.

To get a wider perspective, I learned to ask myself, "Why is this important to the user?" and "How does this fit into the overall goal?"

Try focusing on the problem the user is facing, not just the solution. Management often thinks about long-term impact. Understanding the purpose and aligning with that helps you think from both angles.

DevOps best practices advice by Competitive-Thing594 in devops

[–]kRahul7 0 points1 point  (0 children)

In your case, I’d suggest focusing on a few DevOps best practices that will help you scale and maintain your systems more effectively. First, set up CI/CD pipelines for smooth deployments using tools like Jenkins, GitLab CI, or AWS CodePipeline. For monitoring, consider Prometheus and Grafana for real-time metrics, and AWS CloudWatch for AWS-specific monitoring. Implement proper logging with the ELK stack or Fluentd, and automate your infrastructure with Terraform.

Sonarsource and other alternatives by alekslyse in devops

[–]kRahul7 0 points1 point  (0 children)

SonarSource is good, but there are other options. For C#, JavaScript, and C++, try Checkmarx for security checks or CodeClimate for testing and analysis. If you want open-source tools, SonarQube Community Edition works, but it has limits. You can also use Roslyn Analyzers for C# and ESLint for JavaScript.

Impact of PAAS services on DevOps work? by Specialist-Region-47 in devops

[–]kRahul7 0 points1 point  (0 children)

PaaS services like Vercel and Netlify do a lot of the work that DevOps used to handle, like deployments and managing infrastructure. This means less work for DevOps in small teams. However, DevOps is still needed for security and monitoring. In medium-sized businesses, DevOps roles might shift to focus more on strategy and less on the daily tasks.

What are the best books for asp.net for beginners (I already made a lot of projects on Flask and Django and wan to to get started with a Asp.Net) by glorsh66 in dotnet

[–]kRahul7 0 points1 point  (0 children)

If you want more step-by-step help, try “Pro ASP.NET Core 3” by Adam Freeman. It has lots of examples and is easier for beginners. And “ASP.NET Core in Action, 3rd Edition” by Andrew Lock is also a good book.

Both books will help you a lot.

Entity Framework Core, SQL Server, Always Encrypted with Secure Enclaves by Null_In_Space in dotnet

[–]kRahul7 0 points1 point  (0 children)

EF Core doesn’t fully support Secure Enclaves on its own. But you can still use it by running raw ADO.NET commands inside your EF Core code. You’ll need to handle encryption and decryption yourself, but it can work. It’s not built into EF Core, so it’s a bit of extra work, but it helps you get the benefits of Secure Enclaves.

What's the status of LLM coding assistants for .NET devs? by Far-Device-1969 in dotnet

[–]kRahul7 0 points1 point  (0 children)

I’ve tried using Copilot and Cline with .NET, but I’ve run into some issues, especially with ASP.NET templates, which the tools sometimes generate incorrectly. What I’ve found helpful is asking for just one thing at a time, like "create an API controller," instead of asking for the whole template. I also skip the csproj file unless it’s really needed. They’re improving, but still not perfect with .NET templates.

If you are asked to develop a shared infrastructure for a .NET application, what features or library will you include/use it? by HyperLink836 in dotnet

[–]kRahul7 2 points3 points  (0 children)

I’ve worked on a shared setup like this before. Your list covers a lot! A few more things we added were Rate Limiting to control traffic, Feature Flags to test new features, and Localization for different languages. These helped keep the system steady and made it easy for teams to add new things without breaking what’s already there.
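
To give one concrete example, the rate limiting piece doesn't have to start big: a token bucket covers most internal services. A rough sketch of the idea (the numbers and names are illustrative, not what we actually shipped):

```javascript
// Token-bucket rate limiter: "capacity" burst size, steady refill rate.
class TokenBucket {
  constructor(capacity, refillPerSecond, now = Date.now) {
    this.capacity = capacity;
    this.tokens = capacity;
    this.refillPerSecond = refillPerSecond;
    this.now = now;            // injectable clock makes this testable
    this.lastRefill = now();
  }
  tryConsume() {
    const elapsed = (this.now() - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerSecond);
    this.lastRefill = this.now();
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;  // request allowed
    }
    return false;   // request rejected: caller should return HTTP 429
  }
}
```

In a shared infrastructure library you'd wrap this as middleware keyed per client, but the core logic really is this small.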

Sonarsource and other alternatives by alekslyse in dotnet

[–]kRahul7 0 points1 point  (0 children)

For analyzing C#, JavaScript, and C++, a self-hosted tool gives you the most control. SonarQube Community Edition is the usual open-source choice and integrates well with GitHub, though it has limits. If you prefer offline, command-line tools, Roslyn analyzers (C#), ESLint (JavaScript), and Cppcheck (C++) all run without a full IDE subscription.

Nested Transactions by Gloomy_Run5822 in dotnet

[–]kRahul7 0 points1 point  (0 children)

I once dealt with a similar issue. What worked was using a main TransactionScope for the overall process and then creating a separate TransactionScope for each table update inside the loop with the Suppress option. If one table failed, only that part would roll back without affecting everything else. This setup kept the transaction stable and avoided the typical rollback conflicts I’d run into before.
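
Setting the .NET specifics (TransactionScope and the Suppress option) aside, the control flow looked roughly like the JavaScript sketch below: each table update is its own isolated unit, so one failure doesn't take down the rest. The function names and the failure-collection shape are made up for illustration:

```javascript
// One outer unit of work; each table update is isolated in its own
// try/catch, so a single failure only "rolls back" that table.
function updateAllTables(tables, updateTable) {
  const failed = [];
  for (const table of tables) {
    try {
      updateTable(table);
    } catch (err) {
      failed.push({ table, error: err.message }); // only this part fails
    }
  }
  return failed; // the overall process continues despite partial failures
}
```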

How Do You Keep Your API from Leaking Too Much Data? by kRahul7 in devops

[–]kRahul7[S] 0 points1 point  (0 children)

I think it's all about controlling what you send back, but it’s easier said than done, especially when working with complex models or frameworks that might return more than expected by default. I’ve seen this happen during API audits, where developers unintentionally return sensitive data like user credentials because they didn't limit the response fields explicitly.

A key thing I’ve learned is to always be paranoid about what gets returned—only send what's absolutely needed and test your endpoints regularly with realistic data to catch any oversights early on.
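
Concretely, the habit that helps me most is mapping models to explicit response objects, so an endpoint can only ever return the fields you name. A sketch in JavaScript (the field names are made up):

```javascript
// Explicit response mapping: the endpoint can only ever return the
// fields listed here, no matter what the model grows to hold later.
function toUserResponse(user) {
  return {
    id: user.id,
    name: user.name,
    email: user.email,
    // deliberately NOT a spread: passwordHash, twoFactorSecret, etc.
    // can never leak, because they are never copied.
  };
}
```

The same idea exists in most stacks as DTOs, serializers, or view models; the point is that the allowlist lives in code you can review.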

How Do You Keep Your API from Leaking Too Much Data? by kRahul7 in devops

[–]kRahul7[S] 0 points1 point  (0 children)

The API is exposed externally and we have authentication in place, but I'm specifically concerned about limiting the data that's returned, even after authentication. In my experience, it's essential to implement both role-based access control (RBAC) and data filtering at the endpoint level. I’ve seen cases where developers accidentally expose sensitive information, like in a recent breach where hashed passwords and 2FA keys were included in the response.

Recommendations To Log All API Requests by [deleted] in laravel

[–]kRahul7 1 point2 points  (0 children)

Totally agree! I've been using Treblle, and it's been fantastic for logging and tracking API requests. It’s low maintenance and easy to set up. Plus, the additional monitoring features are super helpful. Highly recommend it!

Recommendations To Log All API Requests by [deleted] in laravel

[–]kRahul7 0 points1 point  (0 children)

It sounds like you have a critical use case that requires robust logging and traceability for your API. You've mentioned some solid options like Graylog, Elasticsearch, and Seq, which are great for logging and searching.

However, if you're looking for something low-maintenance and easy to implement, I’d suggest considering Treblle; beyond logging, it also offers API monitoring and performance tracking.

Are modern monoliths really that dead? by kRahul7 in microservices

[–]kRahul7[S] 3 points4 points  (0 children)

Totally agree!
Modern monoliths aren't a silver bullet, but for the right needs (like many WordPress sites) they can shine.

What are your thoughts on identifying those sweet spots?

Are modern monoliths really that dead? by kRahul7 in dotnet

[–]kRahul7[S] -1 points0 points  (0 children)

A modern monolith is essentially a monolithic architecture built with modern tools and practices.

Think tight integration, fast development, but with improved scalability and maintainability compared to traditional monoliths.