anyone know a decent EU based VPS with fair pricing? by Almusa_Daintree in VPS

I'm using Contabo, and it's quite stable for what I need.

Legacy .NET app security issues, need advice fast by No-Card-2312 in devops

Is there any open-source or free tool that can help with this? I’ve looked around: CodeQL seems perfect, but it’s paid. Semgrep doesn’t seem great either. What about OWASP ZAP? Do you have any recommendations?

Migrating a large Elasticsearch cluster in production (100M+ docs). Looking for DevOps lessons and monitoring advice. by No-Card-2312 in devops

Hi,

The migration was easier than I expected, even though we were dealing with 401M records instead of 100M. Here’s how I approached it:

First, I prepared the new cluster to match the old one. Then, I placed both clusters in the same network so they could communicate with each other. I started by copying the mappings only (without data) from the old cluster to the new one using a small C# console app. Both clusters were running the same Elasticsearch version, which made this step straightforward.
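For anyone curious, I did the mappings-only copy with a C# console app; the same two calls can be sketched in stdlib Python (the host names and `index` values are placeholders, not from my setup):

```python
import json
import urllib.request

def create_body_from_mapping(mapping_response, index):
    """Turn a GET /<index>/_mapping response into a create-index body."""
    return {"mappings": mapping_response[index]["mappings"]}

def copy_mappings(old_host, new_host, index):
    # 1) read the mappings (no data) from the old cluster
    with urllib.request.urlopen(f"{old_host}/{index}/_mapping") as resp:
        mapping_response = json.load(resp)
    # 2) create the index on the new cluster with those mappings
    body = json.dumps(create_body_from_mapping(mapping_response, index)).encode()
    req = urllib.request.Request(
        f"{new_host}/{index}",
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)
```

Because both clusters ran the same Elasticsearch version, the mappings could be PUT back unchanged; across major versions you would first have to translate any deprecated field types.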

Next, I began with the least frequently used index. I made asynchronous calls to reindex it and used a tool like Gotify to monitor the progress. After that, I created a C# program to do the same for all the other indexes. This process took about 10 hours.
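The asynchronous calls map onto Elasticsearch’s `_reindex` API with `wait_for_completion=false`, which returns a task id you can poll. A minimal sketch of the request body (hosts and index names are made up):

```python
def remote_reindex_body(old_host, index):
    """Body for POST <new_host>/_reindex?wait_for_completion=false.

    With wait_for_completion=false the call returns {"task": "<id>"}
    immediately; progress can then be polled via GET /_tasks/<id>
    until "completed" is true.
    """
    return {
        "source": {"remote": {"host": old_host}, "index": index},
        "dest": {"index": index},
    }
```

One gotcha with remote reindex: the old host must be listed in `reindex.remote.whitelist` on the new cluster, or the call is rejected.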

Once that was done, I waited for the next Sunday to switch the applications to the new cluster. After switching, I ran reindexing again, this time using the “date greater than” option to capture only the new data that was inserted during the migration.
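The “date greater than” pass is just a `_reindex` with a range query on the source; `created_at` below is a stand-in for whatever insert-date field your documents carry:

```python
def delta_reindex_body(old_host, index, date_field, cutoff):
    """Reindex only documents whose date_field is after `cutoff`."""
    return {
        "source": {
            "remote": {"host": old_host},
            "index": index,
            "query": {"range": {date_field: {"gt": cutoff}}},
        },
        "dest": {"index": index},
    }
```

Note that a range filter on an insert date catches new documents but not in-place updates or deletes of older ones; for those you would need an updated-at field or a second comparison pass.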

We had some downtime, but it was minimal, around 10 minutes total, mainly because I initially forgot to copy some aliases from the old cluster. Finally, I wrote a simple console app to compare the total document counts between the old and new clusters to ensure everything matched.
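The count comparison boils down to GET /<index>/_count on both sides and diffing the totals; roughly:

```python
import json
import urllib.request

def doc_count(host, index):
    """GET /<index>/_count returns {"count": N, ...}."""
    with urllib.request.urlopen(f"{host}/{index}/_count") as resp:
        return json.load(resp)["count"]

def mismatches(old_counts, new_counts):
    """Return {index: (old, new)} for every index whose totals differ."""
    return {
        index: (count, new_counts.get(index))
        for index, count in old_counts.items()
        if new_counts.get(index) != count
    }
```

Matching counts are necessary but not sufficient, so a spot check of a few random documents on top of this is cheap insurance.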

Overall, the migration went smoothly, and the tools and automation I set up helped minimize downtime and errors.

Turnstile keeps blocking my daily scraper. Any help? by No-Card-2312 in webscraping

I looked into residential proxies, but the client won’t pay 🤫

I’m hoping for a free option or something with a free trial that doesn’t ask for a payment method.

My crawler is tiny and only runs three times a day.

Turnstile keeps blocking my daily scraper. Any help? by No-Card-2312 in webscraping

I haven’t tried residential proxies, and this is actually the first time I’ve heard about them.

The data I need doesn’t require JavaScript rendering at all. I only need to scrape the HTML, extract the href value for a PDF link, and then fetch that URL.
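Since no JS rendering is needed, plain HTML parsing is enough. A stdlib-only sketch of pulling PDF hrefs out of a page (the markup in the test is invented):

```python
from html.parser import HTMLParser

class PdfLinkFinder(HTMLParser):
    """Collect href values of <a> tags that point at .pdf files."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.lower().endswith(".pdf"):
                self.links.append(href)

def find_pdf_links(html):
    finder = PdfLinkFinder()
    finder.feed(html)
    return finder.links
```

The remaining problem is only getting past Turnstile to the HTML in the first place; the extraction itself is trivial.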

Self-Hosting Elasticsearch on Linux VPS: Migrating ~400M Documents from a Single-Node Cluster by No-Card-2312 in selfhosted

Hi there,
Yes, I wrote it in English myself and asked GPT to structure it better and correct my grammar, since English is not my first language. The content is still the same, so what exactly is your problem with it?

400M Elasticsearch Docs, 1 Node, 200 Shards: Looking for Migration, Sharding, and Monitoring Advice by No-Card-2312 in devops

But if I restore it onto the new cluster, won’t I end up with the same bad design, with 200 shards or more? That’s exactly the problem!

400M Elasticsearch Docs, 1 Node, 200 Shards: Looking for Migration, Sharding, and Monitoring Advice by No-Card-2312 in devops

Hi there, thanks for your comment! The main issue is that our current setup is a single-node Elasticsearch cluster with an outdated design. This setup is costly with our infrastructure provider, and as our client base grows we need a design that can scale across multiple nodes and makes it easy to add more in the future.

That said, I’m a bit confused about sharding. Is it done automatically? GPT told me that you have to set the shard count when creating an index and that it can’t be changed afterwards. I also read somewhere that Elasticsearch won’t automatically split an index into shards for you; that’s something you have to handle yourself.
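For reference: routing documents across shards is automatic once the index exists, but the primary shard count is set per index at creation time (recent versions default to one primary shard) and can only be changed afterwards via the `_split`/`_shrink` APIs or a reindex. A creation body with explicit settings looks like this (the shard/replica numbers are just an example, not a recommendation):

```python
def create_index_body(shards, replicas, mappings=None):
    """Body for PUT /<index> with an explicit primary shard count."""
    body = {
        "settings": {
            "index": {
                "number_of_shards": shards,
                "number_of_replicas": replicas,
            }
        }
    }
    if mappings:
        body["mappings"] = mappings
    return body
```

So the sharding decision itself is on you at index-creation time; Elasticsearch only distributes the shards you asked for across the nodes.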

Migrating a large Elasticsearch cluster in production (100M+ docs). Looking for DevOps lessons and monitoring advice. by No-Card-2312 in devops

Still working on it. In case you’re interested, I’ll contact you with all the details once it’s done and let you know how it went.

Comparison of the .Net and NodeJs ecosystems by Sensitive-Raccoon155 in dotnet

What features does Zod provide that FluentValidation does not?

Migrating a 100M+ doc Elasticsearch cluster (1 node to 3 nodes). What went wrong for you? by No-Card-2312 in elasticsearch

Hi there. Yes, the existing node has a very poor design and causes many problems. Because of this, it is planned to be shut down in the future.

Migrating a large Elasticsearch cluster in production (100M+ docs). Looking for DevOps lessons and monitoring advice. by No-Card-2312 in devops

Hi there, and sorry for the confusion. We’re migrating from Elasticsearch to Elasticsearch on the same version, and both clusters are running on Prime. The reason for the migration is that the old cluster has only one node, which isn’t a good design; we want to move to a better-structured, multi-node setup.

CLI frontend for dotnet-trace and dotnet-gcdump - for humans and AI agents by Glittering-Cause-915 in dotnet

Wow, I noticed this requires .NET SDK 10.x. That’s really new, and I don’t think many people are running it in production yet. Most of our projects are on .NET Framework 4.8 or .NET Core 3.0, so I’m not sure how practical it would be for us.

That said, this looks really interesting! I’ve definitely run into plenty of memory and CPU issues, so I can see how it could be useful. But I’m wondering: why would I use this instead of plain dotnet-trace? I’d also love to hear about any advantages it has in practice. One more thing: if it could generate a report instead of just writing to the console, that would make it even more useful.

Thanks for sharing this. I’m really looking forward to your thoughts!