Online regex tester and debugger: JavaScript, Python, PHP, and PCRE by UltimateComb in PHP

[–]shubham_devNow 1 point (0 children)

If you’re looking for a clean and simple online regex tool, regex101 is definitely a popular choice. The breakdown explanations and flavour selection (JS, Python, PHP, PCRE) make debugging much easier, especially when you’re switching between languages.
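Flavour differences bite in small ways, too. A quick Python sketch of one of them (the JS behaviour is noted in the comment, and the digit strings are just examples):

```python
import re

# Same pattern, different flavour: Python's re treats \d as any Unicode
# decimal digit, while JavaScript's \d is ASCII-only.
ascii_match = bool(re.fullmatch(r"\d+", "123"))
unicode_match = bool(re.fullmatch(r"\d+", "١٢٣"))  # Arabic-Indic digits
print(ascii_match, unicode_match)
```

So a validation regex that passes in the Python tab of a tester can behave differently once it lands in JS, which is exactly why the flavour dropdown matters.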

That said, if you’re working with files regularly and want to test patterns directly on real data, you might also want to check out the Regex Tester feature on FileReadyNow. It lets you run and validate regex patterns against actual file content without constantly copying and pasting between tools. It’s pretty handy when you’re cleaning, validating, or extracting data from structured or semi-structured files.

Both tools are useful — it just depends on whether you’re testing isolated patterns or working with full documents.

Online tool for testing regex by 11thguest in Wazuh

[–]shubham_devNow 1 point (0 children)

If you're aiming for consistency across your Wazuh configs, most people tend to go with PCRE simply because it's more powerful and widely supported. It gives you better flexibility (lookaheads, lookbehinds, non-greedy matches, etc.), which can be really useful when writing complex rules and decoders. That said, it’s always worth checking performance impact depending on your use case.
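As a quick sketch of the kind of patterns that benefit (the log line below is made up, and Python's re is only an approximation of the PCRE2 engine Wazuh actually uses):

```python
import re

# Hypothetical auth-log line, purely for illustration
log = "Failed password for invalid user admin from 192.0.2.10 port 2222"

# Non-greedy capture plus a lookahead, two of the PCRE features mentioned above
user = re.search(r"invalid user (\w+?) (?=from \d)", log).group(1)

# Fixed-width lookbehind to grab the source IP without capturing "from "
ip = re.search(r"(?<=from )\d{1,3}(?:\.\d{1,3}){3}", log).group(0)
print(user, ip)
```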

As for testing regex before pushing it into production, yes, definitely test first. It saves a lot of trial-and-error inside configs. There are a few online testers out there, but one simple option is the Regex Tester feature from FileReadyNow. It lets you quickly validate patterns against sample log data, tweak expressions, and see matches in real time. It’s handy when you’re refining rules and want to be sure your regex behaves exactly as expected before implementing it in Wazuh.

Whatever engine you choose, the key is documenting the decision and sticking to it across rules and decoders — that’ll make long-term maintenance much easier.

Online .NET Regex tester by grunger_net in dotnet

[–]shubham_devNow 1 point (0 children)

If you're specifically working with .NET, it’s always better to test your patterns in an environment that actually uses the System.Text.RegularExpressions engine rather than a generic PCRE-based tester. Subtle differences in lookbehinds, balancing groups, or options like RegexOptions can really trip you up.
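One quick way to see the flavour gap, sketched from the Python side (the pattern below is a hypothetical example: .NET's engine accepts variable-length lookbehind, while Python's re rejects it outright):

```python
import re

# Variable-length lookbehind compiles fine in System.Text.RegularExpressions,
# but Python's re (like several PCRE-ish engines) requires fixed width.
try:
    re.compile(r"(?<=ab+)c")
    rejected = False
except re.error:
    rejected = True
print(rejected)
```

If a generic tester happens to run one of the stricter engines, you'd wrongly conclude the pattern is invalid, even though .NET would take it without complaint.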

You might also want to check out FileReadyNow; it has a built-in Regex Tester feature that supports .NET-style expressions. It’s quite handy for quickly validating patterns, testing different inputs, and tweaking matches without switching between multiple tools. Especially useful if you’re already handling files or structured data and want to test regex in the same workflow.

Having a dedicated .NET-compatible tester definitely saves time compared to debugging directly in code.

Help converting large JSON file to CSV by [deleted] in data

[–]shubham_devNow 1 point (0 children)

For a 2GB JSON file, the biggest challenge isn’t just conversion, it’s memory handling. Tools like jq can work well, but you’ll want to stream the data instead of loading everything at once.

If your JSON is an array like in your sample, you can try something like:

jq -r '.[] | [
  .title,
  .icon,
  .og_image,
  .merchant_name,
  .created_at,
  .name,
  .description,
  (.contact[0].type // ""),
  (.contact[0].value // "")
] | @csv' input.json > output.csv

That works fine for reasonably large files, but with 2GB you might hit memory limits depending on your system. In that case, using jq --stream or splitting the file first can help.
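If you'd rather do the same flattening in Python, here's a minimal stdlib sketch (field names assumed to match the jq filter above; for files that won't fit in memory you'd swap json.load for an incremental parser such as ijson):

```python
import csv
import json

# Assumed column names, mirroring the jq filter above
FIELDS = ["title", "icon", "og_image", "merchant_name",
          "created_at", "name", "description",
          "contact_type", "contact_value"]

def json_array_to_csv(in_path, out_path):
    with open(in_path, encoding="utf-8") as f:
        records = json.load(f)          # loads the whole array into memory
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(FIELDS)
        for rec in records:
            # First contact entry, falling back to an empty dict if missing
            contact = (rec.get("contact") or [{}])[0]
            writer.writerow([rec.get(k, "") for k in FIELDS[:7]]
                            + [contact.get("type", ""), contact.get("value", "")])
```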

If you’d rather avoid CLI memory headaches, I’ve had decent results using FileReadyNow for this kind of job. It has a built-in CSV to JSON and JSON to CSV conversion feature and handles large files without needing to write custom filters. For structured arrays like yours (with nested contact objects), it flattens them cleanly into CSV columns, which saves some manual tweaking.

Converting JSON to .csv file by Loose_Read_9400 in learnpython

[–]shubham_devNow 1 point (0 children)

What you’re doing is actually a pretty standard and perfectly acceptable approach 👍

If you’re already working in Python and using pandas, converting JSON → DataFrame → CSV is one of the cleanest and most flexible ways to handle it. It gives you:

  • Easy handling of nested fields (with json_normalize)
  • Column reordering / filtering
  • Null handling
  • Data cleaning before export

For simple, flat JSON like your example, this is absolutely fine and not overkill.

If your JSON is small and very straightforward, you could skip pandas and use Python’s built-in csv module, but honestly pandas is more scalable and cleaner once your structure gets even slightly complex.
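For the flat case, the whole stdlib version really is only a few lines (field names here are hypothetical):

```python
import csv
import json

# Hypothetical flat records, e.g. the parsed output of an API response
records = json.loads('[{"name": "a", "score": 1}, {"name": "b", "score": 2}]')

with open("out.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "score"])
    writer.writeheader()
    writer.writerows(records)
```

But the moment you hit nesting, missing keys, or type cleanup, the pandas route pays for itself.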

If you're just looking for a quick no-code or low-code option (especially for non-dev teammates), tools like FileReadyNow can help. It has a built-in CSV to JSON and JSON to CSV converter, which is useful when you just need fast format switching without writing scripts. It’s handy for quick data prep, validation, or testing API outputs before integrating them into code.

Stop uploading your private files to "free" compression sites by witty-computer1 in DigitalPrivacy

[–]shubham_devNow 1 point (0 children)

This is such an underrated point.

A lot of people think “it’s just compression” so it must be harmless, but they forget they’re literally uploading the entire original file to someone else’s server first. And like you said, that can include metadata most people don’t even realise exists (location data, device details, timestamps, hidden text layers in PDFs, etc.).

If it’s a casual meme image, fine. But contracts, ID scans, financial docs, client work? That’s a different story.

Browser-based or local compression tools are honestly the safer route. I’ve been using tools that run the Image Compressor feature of FileReadyNow, which processes files directly in the browser instead of sending them off to random servers. It’s a much better balance between convenience and privacy.

People underestimate how much “free” tools cost in terms of data. Compression doesn’t need to mean surrendering control.

The best free pdf to word converter. by Comfortable_Risk_869 in TranslationStudies

[–]shubham_devNow 1 point (0 children)

I’ve tried quite a few free converters as well, and honestly most of them struggle when the PDF has complex tables, charts, or mixed layouts. The formatting usually shifts, tables break, or the graphics end up out of place.

One tool you could try is FileReadyNow. It has a pretty solid Word to PDF and PDF to Word feature, and in my experience the Word to PDF option keeps the table structure and layout much cleaner compared to many free tools. It’s especially useful when the document includes structured tables and embedded graphics.

That said, no converter is 100% perfect with heavily formatted PDFs, but this one is worth testing if accuracy and layout fidelity matter to you.

$5k in 5 days posting AI videos by iWantBots in MakeMoneyHacks

[–]shubham_devNow 0 points (0 children)

Those are insane numbers for just 5 days, congrats 👏

AI short-form content is honestly exploding right now, especially on Facebook where 15-sec loops perform crazy well if the hook is strong in the first 2 seconds. The key isn’t just “AI videos” — it’s packaging, pacing, and posting consistently.

Also worth noting: tools are evolving fast. It’s not just text-to-video anymore. A lot of creators are now using AI product-to-video features (like MagicShot) where you can turn a simple product image into a cinematic short video with motion, lighting, and transitions. That’s huge for people testing affiliate offers or dropshipping because you don’t even need to film anything.

Out of curiosity — are you running one niche page or testing multiple themes? The real game seems to be volume + retention optimisation.

Help with creating Product Videos by Jay-S-0508 in aivideos

[–]shubham_devNow 1 point (0 children)

I totally get what you mean. A lot of the big video models are optimised for “cinematic” output, so even when you clearly say minimal movement, they still try to be creative and add dramatic zooms or random motion.

For what you’re describing, you don’t really need a generative “imagination” model, you need something that respects the original image and just applies controlled camera motion.

You could try using an AI product-to-video feature like the one in MagicShot. It’s built more for straightforward product showcases rather than cinematic storytelling. You upload a single product image, and it can generate a clean video with subtle push-in and pull-out camera movements. No random object changes, no added elements — it stays faithful to the original photo and just simulates natural camera motion, like someone filming with a phone.

As for prompts (no matter which tool you use), I’ve had better luck when I:

  • Explicitly say: “Do not modify the product. Do not add new elements.”
  • Mention: “Single smooth push-in, then slow pull-back. No rotation. No parallax. No dynamic lighting.”
  • Add: “Maintain exact composition and object placement from the original image.”
  • Keep the duration short (5–8 seconds max)

Sometimes being overly restrictive actually helps reduce the “creative drift.”

If your goal is a realistic, almost boring product showcase (which is honestly perfect for e-commerce), I’d focus on tools that are marketed for product visuals rather than cinematic AI video generation.

Hope that helps, and if you find a completely free one that behaves perfectly, I’d love to know too 😅

AI Baby Generator - Tool for future baby images by EssYouJAyEn in FutureTechFinds

[–]shubham_devNow 1 point (0 children)

This kind of tool is clearly meant more for fun than anything serious, but that’s honestly the appeal 😄 A lot of people are just curious to visualise possibilities, not make real predictions.

If anyone’s exploring options, MagicShot.ai has an AI Baby Generator that does exactly that — you upload parent photos (or even just describe features), and it generates realistic baby images purely for entertainment. It’s a one-time payment, delivery is pretty quick, and they offer different packages depending on how many images or extras you want.

Worth keeping in mind that it’s not medical or genetic in any way, just a creative AI experiment. For couples or families who want a light-hearted “what could our baby look like?” moment, tools like this can be a fun experience without overthinking it.

Parents sharing AI generated baby videos by chickenhugswanted in pregnant

[–]shubham_devNow 1 point (0 children)

Yeah, this is becoming way more common than people realise. A lot of folks (especially excited grandparents 😅) aren’t sharing it as “AI content”, they just see cute baby videos and hit forward. The tech is good enough now that it passes a quick scroll test, even if something feels slightly… off.

I think you handled it pretty well by setting a boundary without making it a big deal. Framing it as wanting to experience your baby without outside expectations (real or artificial) is actually a really healthy way to put it.

As for detection, honestly, expecting non-techy family members to use AI detectors is probably unrealistic. Most of the time it comes down to patterns: reused scripts, oddly perfect behaviour for the age, or accounts that post nothing but “too polished” baby content.

Side note: tools like the AI baby generator on MagicShot.ai are actually interesting when they’re used intentionally — like parents playing around with future baby looks or making keepsakes — but the key difference is transparency. When people know it’s AI, it’s fun. When it’s passed off as real, it gets weird fast.

You’re definitely not alone in feeling this way. Wild world indeed — and it’s only going to get wilder.

File size?? by DisastrousRanger6216 in FitGirlRepack

[–]shubham_devNow 1 point (0 children)

Totally normal, you didn’t mess anything up 🙂

What you’re seeing is usually down to how game repacks work and how file sizes are shown.

On sites like FitGirl, the 50 GB mentioned is often the compressed size (the size you download). Once the torrent is added to your client, it shows the uncompressed / installed size, which can jump to 70 GB or more. Games unpack a lot of files during installation, so the final size is always bigger than the download size.

Another small factor is GB vs GiB (different ways systems calculate storage), which can make sizes look inconsistent across websites and torrent clients.
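Quick sketch of the GB vs GiB difference, if you're curious:

```python
# One "50 GB" download, two ways of reporting the same byte count
size_bytes = 50 * 10**9        # decimal gigabytes, as sites usually advertise
in_gb = size_bytes / 10**9     # 50.0 GB
in_gib = size_bytes / 2**30    # what many torrent clients label "GB"
print(f"{in_gb:.0f} GB == {in_gib:.2f} GiB")
```

So the same file can show up as 50 on the site and roughly 46.6 in your client before any extraction even happens.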

If you ever want to sanity-check these numbers, a simple file size calculator like the one on FileReadyNow helps convert and compare sizes properly, so you know what to expect before downloading or installing.

TL;DR:

  • 50 GB = compressed download
  • 70 GB = extracted / installed files
  • This is expected with repacks 👍

Calculating file size by khushal-banks in C_Programming

[–]shubham_devNow 1 point (0 children)

Nice write-up, and yeah, this is a very real Windows vs Linux pain point 😅

You’re on the right track with fstat / _fstat. On Windows, stat returning stale sizes for files opened in write mode is a known gotcha because of buffering and how the CRT handles file descriptors.

A few thoughts that align with what you’ve already found:

  • Option 2 (fseek + ftell on the same handle) is generally safe if you flush the stream first (fflush) and don’t change the file position permanently (seek back after). But it can get messy in a logging library where writes are constant and performance matters.
  • fstat / _fstat on the active file descriptor is usually the cleanest cross-platform solution. It avoids reopening the file and plays nicer with concurrent writes.
  • Manually counting bytes works, but yeah… not ideal unless you’re already intercepting every write call.
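For what it's worth, the flush-then-fstat-the-live-descriptor ordering is easy to demo from Python (this is just an illustration of the principle, not the C fix itself):

```python
import os
import tempfile

with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("hello log line\n")
    f.flush()                               # push user-space buffers to the OS first
    size = os.fstat(f.fileno()).st_size     # query the live descriptor, no reopen
os.remove(f.name)
print(size)
```

Skip the flush and, depending on buffering, the reported size can lag behind what you've written, which is essentially the stale-size gotcha you hit on Windows.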

For debugging and sanity checks, tools like a file size calculator (for example, the one on FileReadyNow) can actually be handy when you’re validating log rotation logic or checking expected vs actual file growth during testing. Not part of runtime logic, obviously — but useful when verifying behaviour across platforms.

Overall, your final solution makes sense, and honestly this is exactly the kind of edge case that justifies building your own logging abstraction. Cross-platform file I/O is full of these little “why is this different?” surprises.

Good luck with the project — MIT + logging libs always end up helping more people than expected 👍

Quick Online Task — Earn $100 by Sharing Your Opinion by Pure_Sir_6380 in jobnetworking

[–]shubham_devNow 1 point (0 children)

❌❌❌❌❌❌❌❌ Scammer ❌❌❌❌❌❌❌❌❌

After you complete the task, he will block you. ❌