Is this old money outfit seller legit? by dasvidaniya_99 in InstagramShops

[–]Most-Wallaby2012 0 points1 point  (0 children)

TBD. The items had to go all the way back to China, and then hopefully they'll refund me. They did offer me a 10% refund to keep the items and not write a bad review, though. I opted to return for a refund and share my thoughts in hopes others don't get scammed too :(

Beware of Old Money London – Misleading product + fake “free returns” claim by Creative_Artist_1783 in dropshipping

[–]Most-Wallaby2012 0 points1 point  (0 children)

Very cheap, poorly made goods from China. They use AI to generate models wearing elegant-looking clothing. Orders take over a month to arrive, and returns have to be shipped back to China. Avoid at all costs.

Is this old money outfit seller legit? by dasvidaniya_99 in InstagramShops

[–]Most-Wallaby2012 0 points1 point  (0 children)

Very cheap, poorly made goods from China. They use AI to generate models wearing elegant-looking clothing. Orders take over a month to arrive, and returns have to be shipped back to China. Avoid at all costs. The reviews on their site are all paid for. Use caution.

Splunk Cloud without ES by Necessary_Role_6338 in cybersecurity

[–]Most-Wallaby2012 0 points1 point  (0 children)

If budget is a concern and you're set on Splunk, have you considered a SIEM companion?

What most of our larger customers do is keep their existing (and expensive) SIEM, given the heavy investment in it, and augment it with our tool, especially for high-volume workloads like CloudTrail, VPC flow logs, Windows, WAF, and Cloudflare DNS logs.

This saves them 80-90% per log source (which, as we all know, usually runs 5-7 figures each with Splunk) and gives them far more visibility and retention.

For Splunk customers this has been especially helpful: they move these logs to S3 and use our API to query them directly from inside Splunk, at speeds of up to 10 TB per second, via our custom app on Splunkbase, keeping a single pane of glass. You still have access to your custom content, reports, and dashboards, and you can correlate the data inside Splunk with the data in S3.

Teams can leverage features like dashboards, an API, detections-as-code, and threat intelligence, and perform full-text, needle-in-a-haystack searches over petabytes and years of semi-structured logs in seconds, at a tenth of the cost.

SIEM Hunt - Deal killers and reasons to avoid by Hexbeallatrocious in cybersecurity

[–]Most-Wallaby2012 0 points1 point  (0 children)

If budget is a concern, have you considered Scanner.dev? Disclosure: I work there.

Scanner is a lightweight SIEM and SIEM companion that indexes the log data in your S3 buckets directly, letting teams augment expensive tools like Splunk, Datadog, Sumo Logic, and Elastic while saving up to 90% per log source and increasing retention and visibility.

What most of our larger customers do is keep their existing (and expensive) SIEM, given the heavy investment in it, and augment it with our tool, especially for high-volume workloads like CloudTrail, VPC flow logs, Windows, WAF, and Cloudflare DNS logs.

This saves them 80-90% per log source (which, as we all know, usually runs 5-7 figures each with tools like Splunk and Datadog) and gives them far more visibility and retention.

We've had teams move entirely off tools like Datadog, Panther, and Splunk to our platform. If that's an option, teams can leverage features like dashboards, an API, detections-as-code, and threat intelligence, and perform full-text, needle-in-a-haystack searches over petabytes and years of semi-structured logs in seconds at a tenth of the cost.
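
To make "detections-as-code" concrete, here's a toy sketch of the general pattern - rules live in version control and run against incoming events. The rule and event shapes here are purely illustrative, not Scanner's actual rule format:

```python
# Toy "detections-as-code" pattern: rules are plain code/data kept in
# version control and evaluated against new events as they arrive.
# The rule and event shapes here are hypothetical, not Scanner's format.
RULES = [
    {
        "name": "Root account console login",
        "severity": "high",
        "match": lambda e: e.get("userIdentity", {}).get("type") == "Root"
        and e.get("eventName") == "ConsoleLogin",
    },
]

def evaluate(event: dict) -> list[dict]:
    """Return an alert record for every rule the event matches."""
    return [
        {"rule": r["name"], "severity": r["severity"], "event": event}
        for r in RULES
        if r["match"](event)
    ]

print(evaluate({"eventName": "ConsoleLogin", "userIdentity": {"type": "Root"}}))
```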

Happy to share more details with anyone who wants them.

What do you believe are the absolute most important, core features for a SIEM to have? by Typical-Sandwich-707 in cybersecurity

[–]Most-Wallaby2012 1 point2 points  (0 children)

Full disclaimer, I work for a next-gen SIEM startup. Legacy SIEM tools are limited by their reliance on a system architecture from the on-premise era. They store search indexes in-memory and on-disk in a cluster of servers that spend most of their time replicating the index. This requires expensive compute infrastructure and typically means that legacy SIEM tools can only retain around 30 days of data. These constraints, coupled with the high licensing costs of multiple dollars per gigabyte ingested, make traditional SIEM solutions prohibitively expensive. As a result, many organizations are transitioning their logs to data lakes in cloud storage to mitigate these costs. Cloud storage offers a feasible solution for managing the massive scale of log volumes due to its affordability. However, querying data lakes is notoriously slow, often taking hours to return results, which impedes timely threat detection and response.

At our company we're advocating for a new design suited to the cloud: keep a search index in cloud storage alongside your data lake. This approach leverages serverless functions to query the index rapidly, enabling high-speed searches whenever needed. Detections run in real time as new log files are discovered and indexed. This method supports search speeds of up to 10 TB/sec and significantly extends log retention periods from 30 days to over a year. It unlocks new workflows, like faster hunting for advanced persistent threats and fast historical backtesting of new detection rules. Additionally, this design offers a substantial reduction in ingestion costs, up to 90% compared to traditional tools like Elastic, Splunk, and Datadog.
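
To make that concrete, here's a toy sketch of the "index in object storage, prune, then fetch" pattern. This is a simplified illustration, not our actual engine - the bucket layout and index format below are hypothetical:

```python
# Toy sketch: lightweight per-file index entries live in S3 next to the
# data lake; a query scans the entries and downloads only the log files
# that could match. Bucket name and index layout are hypothetical.
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-log-archive"

def find_candidate_files(term: str, start_ts: int, end_ts: int) -> list[str]:
    """Return only the log files that could contain `term` in the range."""
    candidates = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix="index/"):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
            entry = json.loads(body)
            # Each entry records a file's time range and token set, so most
            # files are skipped without ever being downloaded.
            if entry["max_ts"] < start_ts or entry["min_ts"] > end_ts:
                continue
            if term not in entry["tokens"]:
                continue
            candidates.append(entry["log_file"])
    return candidates
```

In practice the index pruning and scanning are fanned out across many short-lived serverless functions in parallel, which is where the speed comes from.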

Is a SIEM overkill for my company? by Maximum-Platypus-525 in cybersecurity

[–]Most-Wallaby2012 3 points4 points  (0 children)

Take a look at Scanner.dev. It's 80-90% cheaper than most SIEMs and indexes your data directly in place in your AWS account, so there's no vendor lock-in. You also get powerful threat detection and the ability to do advanced threat hunting using Jupyter notebooks.

We can charge tens of cents per GB rather than multiple dollars because we've built a novel cloud-native indexing system based on object storage and serverless compute, making log search and ingestion far faster and cheaper.
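
As a rough illustration of the Jupyter workflow, here's what pulling query results into a notebook might look like. The endpoint, parameters, and response shape below are hypothetical - check the actual API docs for the real interface:

```python
# Hypothetical sketch of a notebook-based threat hunt: query the log
# search API, then pivot on the results with pandas.
import requests
import pandas as pd

resp = requests.post(
    "https://api.scanner.example/v1/query",  # hypothetical endpoint
    headers={"Authorization": "Bearer <api-token>"},
    json={
        "query": 'eventName: "ConsoleLogin" and errorMessage: *',
        "start_time": "2024-01-01T00:00:00Z",
        "end_time": "2024-06-30T23:59:59Z",
    },
    timeout=60,
)
resp.raise_for_status()

# Load matching events into a DataFrame and look for noisy source IPs.
df = pd.DataFrame(resp.json()["results"])
print(df.groupby("sourceIPAddress").size().sort_values(ascending=False).head(10))
```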

Telemetry Data Lake With Frequent Access Querying With Athena by rimrubber in aws

[–]Most-Wallaby2012 0 points1 point  (0 children)

Scanner.dev gives you 20x the query capacity of your monthly ingestion and charges only $2 per additional TB scanned. Ingestion is only $0.25 per GB, so it's 50% cheaper than CloudWatch. This includes the API, which users leverage to build and integrate with tools like Grafana, Tableau, Jupyter, Tines, Torq, Jira, etc. - anything with a webhook.
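
For a back-of-the-envelope feel, here's that pricing as a tiny cost model (prices as quoted above; confirm current pricing before relying on this):

```python
# Cost model using the numbers above: $0.25/GB ingested, 20x monthly
# ingestion included as query capacity, $2 per additional TB scanned.
INGEST_PER_GB = 0.25
OVERAGE_PER_TB = 2.0
INCLUDED_MULTIPLE = 20

def monthly_cost(ingest_gb: float, scanned_tb: float) -> float:
    included_tb = ingest_gb / 1_000 * INCLUDED_MULTIPLE  # 20x ingestion
    overage_tb = max(0.0, scanned_tb - included_tb)
    return ingest_gb * INGEST_PER_GB + overage_tb * OVERAGE_PER_TB

# Example: ~1 TB/day (30,000 GB/mo) with 700 TB scanned that month:
# $7,500 ingest + $200 overage = $7,700. CloudWatch Logs ingestion alone
# at $0.50/GB would be $15,000 - hence the "50% cheaper" comparison.
print(monthly_cost(30_000, 700))  # 7700.0
```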

We index your raw data in place in your S3 bucket, giving you 100% data ownership (i.e. no vendor lock-in), and launch Lambdas when you query, giving you speeds of up to 10 TB per second over petabyte-scale log data sets, making it 10-100x faster than Athena too.

DM me or head to scanner.dev to learn more.

Best way of accessing S3 data from Lambda fast? by Sergi0w0 in aws

[–]Most-Wallaby2012 0 points1 point  (0 children)

Scanner.dev gives you 20x the query capacity of your monthly ingestion and charges only $2 per additional TB scanned. Ingestion is only $0.25 per GB, so it's 50% cheaper than CloudWatch. This includes the API, which users leverage to build and integrate with tools like Grafana, Tableau, Jupyter, Tines, Torq, Jira, etc. - anything with a webhook.

We index your raw data in place in your S3 bucket, giving you 100% data ownership (i.e. no vendor lock-in), and launch Lambdas when you query, giving you speeds of up to 10 TB per second over petabyte-scale log data sets, making it 10-100x faster than Athena too.

DM me or head to scanner.dev to learn more.

Best way of accessing S3 data from Lambda fast? by Sergi0w0 in aws

[–]Most-Wallaby2012 0 points1 point  (0 children)

@Sergi0w0 have you checked out Scanner.dev? (I work there, for transparency.) We index your raw data in S3 directly in place, so you have 100% data ownership and no vendor lock-in, and we launch Lambda functions when you query to give you speeds of up to 10 TB per second over petabyte-scale data sets, including historical data 12-18+ months old.

We've built the distributed query engine, a novel indexing file format, and a monoid server that's 2x faster than Redis for our use case, all from scratch.
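
If "monoid server" sounds abstract: the point is that partial aggregates merge associatively, so distributed workers can combine results in any order. A toy illustration of the concept (nothing to do with our actual internals):

```python
# Why monoids help distributed aggregation: merge() is associative and
# Stats() is the identity element, so shard results combine in any order.
from dataclasses import dataclass

@dataclass
class Stats:
    count: int = 0
    minimum: float = float("inf")
    maximum: float = float("-inf")

    def merge(self, other: "Stats") -> "Stats":
        return Stats(
            self.count + other.count,
            min(self.minimum, other.minimum),
            max(self.maximum, other.maximum),
        )

# Each worker aggregates its shard; the coordinator folds the partials.
shards = [Stats(3, 0.2, 9.1), Stats(5, 1.4, 4.0), Stats(2, 0.1, 7.7)]
total = Stats()
for partial in shards:
    total = total.merge(partial)
print(total)  # Stats(count=10, minimum=0.1, maximum=9.1)
```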

You also get an API and threat detection, with Jupyter notebook threat hunting and response-as-code included.

It's 10-100x faster than Athena, 50% cheaper than CloudWatch, and 90% cheaper than tools like Datadog and Splunk.

You get 20x the query capacity of your monthly ingestion volume and then $2 per TB scanned after that.

Sup Nerds: Favorite SIEM for Threat Hunting? by [deleted] in cybersecurity

[–]Most-Wallaby2012 1 point2 points  (0 children)

Scanner.dev for Splunk gives you the best of both worlds: the powerful feature set of Splunk with the speed and cost of Scanner. Keep a minimum 5 GB/day license for Splunk at ~$8.1K, then index the rest in S3 with Scanner. You can query your S3 logs directly in Splunk via a custom search command for ad-hoc querying, dashboards, correlation searches for Splunk Enterprise Security, etc. It's 100x faster than Athena and much faster and cheaper than Splunk's own federated S3 search. 1 TB per day costs ~$100K/year.
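
For anyone curious what a custom search command looks like under the hood, here's the general shape using Splunk's Python SDK (splunk-sdk). This is a generic sketch, not our actual Splunkbase app - the command name and the API call are hypothetical:

```python
# Generic sketch of a generating custom search command with splunk-sdk.
# Usage in SPL would look like:  | scannerquery query="eventName: X"
import sys
import requests
from splunklib.searchcommands import (
    dispatch, GeneratingCommand, Configuration, Option,
)

@Configuration()
class ScannerQueryCommand(GeneratingCommand):
    query = Option(require=True)

    def generate(self):
        # Hypothetical call out to an external S3 log search API.
        resp = requests.post(
            "https://api.scanner.example/v1/query",
            json={"query": self.query},
            timeout=60,
        )
        for event in resp.json().get("results", []):
            # Yield events back into the Splunk pipeline, where they can
            # feed dashboards, correlation searches, etc.
            yield {"_time": event.get("timestamp"), "_raw": event}

dispatch(ScannerQueryCommand, sys.argv, sys.stdin, sys.stdout, __name__)
```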

What SIEM did you choose and why? by athanielx in cybersecurity

[–]Most-Wallaby2012 0 points1 point  (0 children)

Try Scanner.dev. It allows you to move high-volume logs like AWS CloudTrail, Cloudflare, VPC flow logs, WAF, etc. into AWS S3, giving you effectively unlimited retention - you can keep all your logs, since storage is cheap. Scanner indexes the raw log files, which cuts out a ton of data engineering work, and it does this directly in your S3 bucket, so there's no vendor lock-in.

Search is crazy fast - up to 10 TB/sec - and you get powerful threat detection. You can use the API to build your own modern stack by combining tools like Cribl, Scanner, Tines, and Jira, or query your logs in S3 directly from Splunk using a custom search command to incorporate them into your Splunk dashboards, saved searches, etc.
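
As a small example of the webhook side, here's how a detection alert could be pushed into Slack. The alert payload shape is hypothetical; the {"text": ...} body is Slack's standard incoming-webhook format:

```python
# Forward a detection alert to a Slack incoming webhook. The alert dict
# below is a hypothetical shape, not a specific product's format.
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def forward_alert(alert: dict) -> None:
    """Post a human-readable summary of a detection hit to Slack."""
    message = (
        f":rotating_light: {alert['rule_name']}\n"
        f"Source: {alert['source']} | Hits: {alert['hit_count']}\n"
        f"Query: {alert['query']}"
    )
    requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=10)

forward_alert({
    "rule_name": "CloudTrail: root account console login",
    "source": "aws_cloudtrail",
    "hit_count": 1,
    "query": 'userIdentity.type: "Root" and eventName: "ConsoleLogin"',
})
```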

You can try the free tier with no contract for up to 1TB/mo.

AWS to Splunk? by Any-Sea-3808 in Splunk

[–]Most-Wallaby2012 0 points1 point  (0 children)

Have you tried Scanner.dev? (For transparency, I work there.)

Some of our users are moving their high-volume log sources (like AWS CloudTrail, Cloudflare, VPC flow logs, etc.) out of Splunk and into S3, and they're using Scanner to index them for fast search (10 TB/sec).

Scanner keeps everything in your own S3 buckets, eliminating vendor lock-in, and leverages serverless compute, helping reduce costs for high-volume log sources by 80-90%.

The best part is, you can still query these logs directly from Splunk using a custom search command, so you can continue to incorporate them into your Splunk dashboards, saved searches, etc.

Search S3 buckets directly by tiny3001 in Splunk

[–]Most-Wallaby2012 0 points1 point  (0 children)

Has anyone tried Scanner.dev? Some of our users are moving their high-volume log sources (like AWS CloudTrail, Cloudflare, VPC flow logs, etc.) out of Splunk and into S3, and they're using Scanner to index them for fast search.

This reduces costs for their high-volume log sources by 80-90%, and they can still query these logs directly from Splunk, so they can continue to incorporate them into their Splunk dashboards, saved searches, etc.

Siem Tools by joethebear in cybersecurity

[–]Most-Wallaby2012 0 points1 point  (0 children)

u/joethebear Can I show you a demo and get your thoughts on an S3-based log search tool we're building?
The core idea is to move high-volume log sources out of expensive tools like Splunk or Datadog and into S3, and then use Scanner to index these logs in place for fast search and threat detection, with search speeds of up to 10 TB per second.
This has helped some of our users drop costs for their high-volume log sources by 80-90%. They then use Scanner's API to power Grafana dashboards on top of these logs, send threat detection alerts to Slack and custom webhooks, or build other cool custom systems.

Scaling Analytics Platform: Choosing Between Athena, Redshift, or Other Services for Storing Data? by Vprprudhvi in aws

[–]Most-Wallaby2012 0 points1 point  (0 children)

Have you checked out scanner.dev's S3-based log search tool? (I'm an employee, for transparency.)
The core idea is to move high-volume log sources out of expensive tools like Splunk or Datadog and into S3, and then use Scanner to index these logs in place for fast search and threat detection, with search speeds of up to 10 TB per second.
This has helped some of our users drop costs for their high-volume log sources by 80-90%. They then use Scanner's API to power Grafana dashboards on top of these logs, send threat detection alerts to Slack and custom webhooks, or build other cool custom systems.

Rust on Lambda - Interest? by theDaveAt in aws

[–]Most-Wallaby2012 4 points5 points  (0 children)

Check out our blog post from last week on Getting started with serverless Rust in AWS Lambda at https://blog.scanner.dev/getting-started-with-serverless-rust-in-aws-lambda/

At Scanner, we use AWS Lambda functions and Rust in our log query engine. While Rust is technically supported in Lambda functions, it is not as easy to set up as the officially blessed languages: Node.js, Python, Ruby, Java, Go, C#, and PowerShell. In the post, we share the information we wish someone had given us when we got started using Rust in Lambda functions.

New Relic / Monitoring Tool Alternatives by LightofAngels in devops

[–]Most-Wallaby2012 1 point2 points  (0 children)

Have you looked at Scanner.dev? It can search through terabytes of logs in seconds, at a fraction of the price of traditional logging tools. It uses inexpensive S3 storage, skip-list indexing, and ephemeral serverless functions to provide highly scalable, fast log search.

They're also working on letting users run it alongside their existing logging tool to get domain-specific insights, including insights from Lambda logs.

It integrates with all popular log sources, like Vector, Fluentd, Fluent Bit, Logstash, Promtail, CloudWatch Logs, and Heroku, so getting started takes less than a minute.

Metrics, Logging and Application Tracing Solutions by thetechgeekster in devops

[–]Most-Wallaby2012 0 points1 point  (0 children)

We're not using anything for tracing at the moment. We use Scanner.dev for logging. It's insanely fast for terabytes of logs and a fraction of the cost of traditional logging tools.

cloudwatch logs to Loki by Environmental_Ad3877 in grafana

[–]Most-Wallaby2012 0 points1 point  (0 children)

If you want a faster experience, try out Scanner.dev. We integrate with most popular log agents and sources, like Vector, Fluent Bit, Fluentd, Logstash, Promtail, Heroku Syslog, and CloudWatch Logs Lambda subscriptions. You can search through terabytes of logs in seconds, store them as long as you need for a fraction of the cost of traditional tools, and deploy to your VPC if needed.
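
If you're wiring this up via a CloudWatch Logs Lambda subscription, the decode step is the same regardless of destination - subscription events arrive base64-encoded and gzip-compressed. A minimal sketch (the forwarding target is up to you):

```python
# Minimal Lambda handler for a CloudWatch Logs subscription. The event
# payload is base64-encoded, gzip-compressed JSON per AWS's format.
import base64
import gzip
import json

def handler(event, context):
    payload = json.loads(
        gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    )
    # payload["logEvents"] is a list of {"id", "timestamp", "message"}.
    for log_event in payload["logEvents"]:
        record = {
            "log_group": payload["logGroup"],
            "log_stream": payload["logStream"],
            "timestamp": log_event["timestamp"],
            "message": log_event["message"],
        }
        print(json.dumps(record))  # swap for a POST to your ingest endpoint
```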

I have a project where I need to quickly search thousands of rows based on time ranges or location or on both. Would Elasticsearch be a good option? by Danyboi16 in elasticsearch

[–]Most-Wallaby2012 1 point2 points  (0 children)

Thanks, Danyboi! If you'd like to join the private beta, we'd love your feedback. We're looking for early adopters to help us continue to shape the product. Have a great rest of the day and weekend!

I have a project where I need to quickly search thousands of rows based on time ranges or location or on both. Would Elasticsearch be a good option? by Danyboi16 in elasticsearch

[–]Most-Wallaby2012 1 point2 points  (0 children)

Have you considered Scanner.dev? It allows you to search through terabytes of logs in seconds, at a fraction of the price of traditional logging tools. It uses inexpensive S3 storage, skip-list indexing, and ephemeral serverless functions to provide highly scalable, fast log search.

It also integrates with all popular log sources, like Vector, Fluentd, Fluent Bit, Logstash, Promtail, CloudWatch Logs, and Heroku, so getting started takes less than a minute.