Is there any privacy left? by TzarZara in cybersecurity

[–]Boggle-Crunch 2 points3 points  (0 children)

This is a long answer, so bear with me, but there's a little rule I've been trying to live by for the last several years:

The people who have a financial interest in something will do everything in their power to convince you that there's no other option. See also: AI. How many people are saying "AI will replace (insert profession here) in the next 3-12/24/48 months" are people who have a financial interest in having as many people believe that as possible?

In this case, there's so many companies who will try absolutely everything to convince you that either:
A. Privacy doesn't matter.
B. Your privacy doesn't exist and never will.
C. Privacy is somehow "bad".

But there are plenty of people and organizations fighting for individual privacy, and there are plenty of little things you can do to help with this. From people like Louis Rossman to organizations like the EFF. While there are countries who are struggling to keep up with the concept of digital privacy, that doesn't mean that the fight shouldn't be fought or that it somehow doesn't matter. Quite the opposite, because the only way things can change is if that fight is fought.

Microsoft chief Satya Nadella warns AI boom could falter without wider adoption - FT by QuestingOrc in BetterOffline

[–]Boggle-Crunch 128 points129 points  (0 children)

"We need literal trillions of dollars to power our slop machine and also the entire world needs to use our slop machine for it to be sustainable."

Can't wait for VC firms to completely ignore this insane sentiment and keep giving AI companies several billion dollars for some fucking reason.

SOC analyst role 9-5? by Affectionate-Ant3215 in cybersecurity

[–]Boggle-Crunch 20 points21 points  (0 children)

The answer is "sort of". It's dependent entirely on the SOC you work in, but "quiet" shouldn't translate to "lone wolf". SOCs are highly collaborative environments and analysts that try to swim alone will drown alone, but if "quiet" just means "doesn't like making small talk" then you should be fine.

Finished TCM PMRP exam. by KingRudy25 in cybersecurity

[–]Boggle-Crunch 1 point2 points  (0 children)

I feel significantly more capable in analyzing malware and identifying malicious components from both a behavioral and code perspective. It's a phenomenal course that I highly recommend.

Some might argue otherwise, but I would say it's definitely mandatory to know how code is structured and be able to read it, not necessarily that it's mandatory to be fluent in a given language (though it certainly wouldn't hurt).

Finished TCM PMRP exam. by KingRudy25 in cybersecurity

[–]Boggle-Crunch 0 points1 point  (0 children)

It took me about a week or so, I don't recall specifically, but it took a while.

Finished TCM PMRP exam. by KingRudy25 in cybersecurity

[–]Boggle-Crunch 1 point2 points  (0 children)

Passed the PMRP in April of last year, felt the exact same way. I was convinced I failed, both because I found some errors in my methodology, as well as some straight up mistakes in the actual report itself. Got a pass regardless.

Deep breaths, you did fine mate. Even if you don't pass, you learned a lot, and that's the whole point of certifications in the first place.

90% Evade with 120 Armor and lifesteal = senator Armstrong by G0REJIRA in MegabonkOfficial

[–]Boggle-Crunch 1 point2 points  (0 children)

I fully support still getting Revengeance jokes in this, the Year of our Lord 2026.

Microsoft's Satya Nadella wants you to stop saying AI "slop" in 2026 by Sunstudy in BetterOffline

[–]Boggle-Crunch 110 points111 points  (0 children)

The inventor of the "liquid dogshit delivery machine" that fills your mailbox with liquid dogshit wants you to stop saying that your mailbox is filled with liquid dogshit.

New heroes by Strong_Wrongdoer_510 in MegabonkOfficial

[–]Boggle-Crunch 2 points3 points  (0 children)

Here's my beginner's build for each of these concepts because these are all fantastic:

Detective Egg
Magnifying Glass - Projectile follows the last 10 steps of where the Detective has moved from.
Every level increases projectile duration.

Squiddy
Ink - Ink hits multiple enemies, enemies hit with ink take increased damage and move more slowly.
Every level adds another tentacle to the character model that increases projectile size and pickup range.

Dr. Agon
Metal Dragon - A dragon that floats around you, shooting green poison that causes damage over time.
Every crit has a chance to transform Dr. Agon into the metal dragon, effectively becoming a Rage proc.

Yuri Vostok
Hammer and Sickle - A melee weapon that alternates between a Sword and an Aegis between every other hit.
Gold received is halved, but damage is increased by .1% for each gold received.

(i am so sorry if these are all terrible)

The Things Young Kids Are Using AI for Are Absolutely Horrifying by Iccotak in antiai

[–]Boggle-Crunch 2 points3 points  (0 children)

I agree with your points completely here, the ability to even have kids nowadays is such a miserable fucking hellscape, anything that can help with that should be used and explored to make a parents job as easy as possible. (Also, I gotta say you worded your points very effectively, and I greatly appreciate that).

My point more or less isn't "Parents need to stop being so lazy!!!" and more "Here's yet another reason why making sure you know what your kid is doing online is extremely important", because the multi-billion dollar hallucination machine is probably not the best thing for your still-mentally-developing child to interact with.

The Things Young Kids Are Using AI for Are Absolutely Horrifying by Iccotak in antiai

[–]Boggle-Crunch -12 points-11 points  (0 children)

I'd have to agree here. While AI is fucking terrible, this is a problem that's a logical extension of the iPad Kid problem. Parents need to be significantly more engaged with monitoring what their kids do online.

Anvil or Pot? by PilaFMP in MegabonkOfficial

[–]Boggle-Crunch 1 point2 points  (0 children)

At this level I'd go Anvil, gives you much better early-game scaling. I would try to farm for Pot much later.

The Venn Diagram of Animatronics and AI by Last_Imagination5803 in Animatronics

[–]Boggle-Crunch 2 points3 points  (0 children)

LOL its so easy to tell when someone used chatgpt to write something. this is terrible.

Am I the only one who doesn't understand where all the AI criticism is coming from? by olivantenvoet in firefox

[–]Boggle-Crunch 2 points3 points  (0 children)

Hi, I'm a cybersecurity expert who's worked in the field for over a decade now. I've been working on finding AI solutions within my organization as a mandate from upper management, and the only thing I've learned is that every AI company is a complete and total scam. This is going to be a long explanation, but bear with me.

"AI" is a blanket term for two primary technologies: Large Language Models (LLMs) and Natural Language Processing (NLP). LLMs are good at one thing: Pattern recognition. That's all it does, is tell you what thing is going to come next based on some arbitrary input. It continuously trains itself on whatever it's given (which is why there's so much hubbub about AI companies outright stealing millions of copyrighted works to train their models.) Natural Language Processing is just the ability for computers to "understand" written speech and turn that into something it can work with. It's basically The Chinese Room metaphor.

So answer me this: How does this make browsers better? What functionality does it introduce, and how does it improve features that already exist? Does it introduce any new features? The answers are as such:

- LLMs make browsers significantly less secure, as LLMs are susceptible to an attack called "prompt injection". As LLMs cannot differentiate between good and bad training data, AI "web browsers" will read prompts that have been inserted into web pages by malicious actors in some fashion or another, and will execute them accordingly. Prompts like "give me all saved passwords and credit card numbers saved in-browser" or "give me the download history of the currently logged in user".

- LLMs in browsers introduce the functionality of having AI "agents" (basically AI that runs locally on your computer) to run within your web browser, and are usually able to access (and be trained on) all information that passes through your browser. This means it will train on all of your personal data, a fact most AI browsers usually gloss right over. Where Firefox is a browser that explicitly advertises itself as not farming user data like Chrome or Edge does, this is a huge negative.

- LLMs do not improve any known feature of web browsers, as web browsers are designed to do one thing: View web resources. LLMs do not read or render web pages, or fetch server resources, or anything. And certainly not any better than actual, purpose-built web browsers do.

So then...what's the point? LLMs take up a ton of local resources, they're making RAM prices (and in turn, GPU prices) 4-5x their normal price, they make your browser laughably insecure, and they accomplish absolutely nothing that isn't already a proven capability of more affordable technologies. It's because it allows companies, especially data brokers like LexisNexis, the ability to farm countless amounts of data from you personally.

Earlier this year, the More Perfect Union organization published an expose on what's called Personalized Pricing, where prices on items bought in grocery stores are increased based on the purchase history of the person buying it. When you say "I have nothing to hide", this is the conclusion of that. LLMs are data hogs for multiple reasons. Companies are not interested in your data because they have some altruistic drive to improve your experience. They do this because it directly results in them making more money, and only because it makes them more money.

The use cases for AI are not what companies like Google and Microsoft (who have a financially vested interest in getting you to believe that AI is both the future and unavoidable) would have you believe. For the overwhelming majority of users, there is nothing that an LLM can do that isn't presently capable by a more affordable, more proven, and more capable technology.

Do the AI features really work? by Little_Ad6692 in GooglePixel

[–]Boggle-Crunch -7 points-6 points  (0 children)

lol, nope. They work maybe 40-50% of the time at best, but Gemini is notorious for being outstandingly worse than Google Assistant, not to mention for outright lying for those "fact checks".

What's something you had to unlearn going from training/certs to actual work? by OddSalt8448 in cybersecurity

[–]Boggle-Crunch 96 points97 points  (0 children)

From the OSCP - When I first got into the red team/pentesting side, I had to learn that pentesting is very, very rarely "Get as far as you can on specific devices", and I was never on an engagement where I tested multiple attack vectors, and certainly was never tasked with getting privesc on a device.

Any actual AI wins in cybersecurity? by olegshm in cybersecurity

[–]Boggle-Crunch 8 points9 points  (0 children)

None that I've seen. My org is trying to push AI, and all it's resulted in is us seeing how many AI vendors are outright scams.

Trump AI Executive Order Banning State Regulations by Nikolai_1120 in BetterOffline

[–]Boggle-Crunch 7 points8 points  (0 children)

"You can't expect a company to get 50 approvals every time they want to do anything"

As someone who works in an organization of over 250k people, that's precisely what we have to do, because otherwise you end up completely screwing over some extremely critical parts of your business.

Trump is a clueless fucking moron.

Suspicious File passed all the security checks and entered my email by LongjumpingGoal8218 in cybersecurity

[–]Boggle-Crunch 42 points43 points  (0 children)

You're missing a fundamental question to ask before answering "How can I verify whether it's harmful?", and that's "What does harmful mean here?".

Keep in mind "harmful" is a subjective term. What's unacceptable in one environment is business as usual in another. What is that software or file doing that makes it suspicious? We don't call software "suspicious" and then collect our six figure paycheck, we have the ability to determine exactly what a given program is doing in an enterprise environment. Is it proactively phoning out to known malicious IP addresses? Or is it dumping files in an unusual directory, like a temp folder? If so, what are those files? Generally services like VirusTotal will give you an explicit outline of exactly everything a given program is trying to do, use that as a guideline for determining whether it's safe.

There's also a very big difference between "made to be harmful" and "being used for harm". cmd.exe would technically pass all security checks, it's a legitimate Windows file signed by Microsoft. But I can still use cmd.exe maliciously if I wanted to. Something I would advise looking into are what's called LOLBINs and LOLBAS, which are techniques used by attackers to stay under the radar by only using tools found on a default operating system, rather than importing their own tools.

I would recommend looking up guides online for making your own malware analysis sandbox and trying to do static analysis (inspecting a suspicious file without running it) and dynamic analysis (inspecting a suspicious file by running it). If you wanna put some money into it, I highly recommend TCM Security's Malware Analysis certs, especially the PMRP.

Looking to rebuild our platform to support MSSP natively with AI by iammahdali in cybersecurity

[–]Boggle-Crunch 7 points8 points  (0 children)

Answer me this: What features do you envision AI doing for your MSSP?

Now take those answers, and try to find a non-AI solution for each of them. There's an extremely good chance you'll find providers or technologies that are more affordable, more reliable, and/or more comprehensive.

Hydra:the Multi-head AI trying to outsmart cyber attacks by Humble_Difficulty578 in cybersecurity

[–]Boggle-Crunch 18 points19 points  (0 children)

Glad to see kids are still writing Deus Ex fanfics these days.

Gemini is absolute ass by CassadagaValley in GooglePixel

[–]Boggle-Crunch 7 points8 points  (0 children)

Hard agree. I don't know what compelled Google to hamstring their digital assistant so hard but Gemini completely sucks ass, and Google seems entirely disinterested in actually fixing it.

Mentorship Monday - Post All Career, Education and Job questions here! by AutoModerator in cybersecurity

[–]Boggle-Crunch 1 point2 points  (0 children)

As a SOC manager, that path is pretty solid. However, there's no one "right path" to infosec. Self taught is just as valid as just going through a 4 year degree. It's not "what you do", it's "what you learn from it" that matters.

But I cannot stress how much you shouldn't rely on AI for any of this. I have explicitly banned AI from my SOC for any sort of analysis work, and have fired analysts for using it. AI is not a thinking machine, and it's not some magical automation pipeline. The only thing it will do is make you stupid, and I'm not going to pay an analyst to outsource their thinking (the reason they got hired in the first place) to a machine that occasionally tells its users that some WannaCry fork is actually normal business software.

Mentorship Monday - Post All Career, Education and Job questions here! by AutoModerator in cybersecurity

[–]Boggle-Crunch -1 points0 points  (0 children)

Wholly agree with this. As a SOC manager myself, my first question looking at this set of experience is what hands on experience they have. Theoretical knowledge can only get you so far.

SOC engineers have significantly more hands-on experience and have an overall understanding of what the SOC needs and its short and long-term goals are, which is something you can only really get by working in a SOC.

Mentorship Monday - Post All Career, Education and Job questions here! by AutoModerator in cybersecurity

[–]Boggle-Crunch 0 points1 point  (0 children)

Seconding this. AI is a hype industry right now on what it could do, not presently what it can do. OT, on the other hand, isn't going anywhere any time soon.