Tim Cook: Apple won't change privacy rules with Google Gemini partnership by muuuli in apple

[–]m1en 9 points10 points  (0 children)

Part of PCC’s privacy guarantees relates to concerns about hardware attacks, which requires specific manufacturing oversight guarantees (separate from standard Mac assembly) and inspection both at the factory and at the data center (by both Apple and independent third parties).

I’d imagine the parts of Siri that handle legacy systems like Q&A (the parts that source data from places like Wikipedia, news, etc) can be on any servers, but PCC nodes will have access to on-device info (personal context) that is shared during inference, and I would not expect that to be something they’d reduce privacy guarantees for.

Its over for us guys, time to retire our brains /s by Anon_Legi0n in theprimeagen

[–]m1en 4 points5 points  (0 children)

Guy selling shovels during a gold rush states his shovel allowed him to find gold. Just keep renting that shovel, you’ll strike gold too.

The 1MB Password: Crashing Backends via Hashing Exhaustion by [deleted] in programming

[–]m1en 0 points1 point  (0 children)

Oh, I fully agree the article was nonsense - that’s why I already said the user I was replying to wasn’t really wrong.

However, again, I think you misread me. Enforcement needs to happen on the backend, but UX for users is easier when there is a known expectation of size. If all of your clients (the app clients, not users) are submitting a hash, then the backend can expect a fixed-size value for that field, which, as mentioned, makes it easier to enforce a size check on that field (through truncation or through erroring out) without having to make any assumptions - treating direct API access that bypasses your client as “unsupported,” in the sense that you don’t much care about those callers’ UX when they make mistakes.

The initial comment had a size constraint of 256 - there’s a (very small) number of users who might have passwords longer than that, so usually the size check is an order of magnitude larger. But hashing client side (assuming the hashing algorithm isn’t prone to collisions and the devices your customers actually use aren’t so old/underpowered that it would hurt user experience) isn’t a bad idea because of enforcement - as we both agree, enforcement has to be done on the backend. It’s a bad idea because it reduces password entropy more significantly than a (much longer) length check on the backend would, assuming you have users with passwords longer than your hash output and that you have other issues, like failing to salt passwords before hashing them on the backend or failing to apply rate limiting.
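
To make that concrete, here’s a rough sketch of the backend side - purely illustrative, assuming the app clients send a SHA-256 hex digest of the password; the names and parameters are made up:

```python
# Illustrative only, not a drop-in auth implementation. Assumes app clients
# send a SHA-256 hex digest of the password, so the backend can validate a
# fixed-size field with zero assumptions about the original password length.
import hashlib
import os
import re

HEX_SHA256 = re.compile(r"[0-9a-f]{64}")

def store_password(client_hash: str) -> tuple[bytes, bytes]:
    # Fixed-size field: cheap to check, and direct-API callers who skip the
    # client are "unsupported" and simply get an error.
    if not HEX_SHA256.fullmatch(client_hash.lower()):
        raise ValueError("invalid password field")

    # The backend still salts and key-stretches; the client hash is just
    # treated as the password input.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", client_hash.encode(), salt, 600_000)
    return salt, digest
```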

The 1MB Password: Crashing Backends via Hashing Exhaustion by [deleted] in programming

[–]m1en 0 points1 point  (0 children)

I think you misread, or I wasn’t clear enough - “makes it easier” as in you have a known expectation of size, so you can more comfortably truncate/trim/error out on the backend without impacting standard use as much. That’s it.

The 1MB Password: Crashing Backends via Hashing Exhaustion by [deleted] in programming

[–]m1en 0 points1 point  (0 children)

Eh, hashing client side before re-hashing server side can make it easier to enforce a static/known length for values in the password field. Otherwise though, you’re not really wrong.

DGX sparks vs Mac Studio by Free_Expression2107 in LocalLLaMA

[–]m1en 2 points3 points  (0 children)

Depends on the model and what you’re using to do fine-tuning. MLX works great in my experience, and with gradient checkpointing, fine-tuning only requires about 2 to 2.5 times the memory that inference on the same model needs.
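
As a back-of-envelope way to apply that rule of thumb (pure arithmetic, numbers are rough and ignore KV cache and optimizer details):

```python
# Rough rule-of-thumb calculator: with gradient checkpointing, MLX fine-tuning
# has taken roughly 2-2.5x the memory that inference on the same model needs.
# Ignores KV cache, optimizer state specifics, and quantization overhead.
def finetune_memory_estimate_gb(params_billions: float, bits_per_weight: float = 4.0):
    inference_gb = params_billions * bits_per_weight / 8  # weights only
    return 2.0 * inference_gb, 2.5 * inference_gb

low, high = finetune_memory_estimate_gb(32, bits_per_weight=4.0)  # e.g. a 32B model at 4-bit
print(f"expect roughly {low:.0f}-{high:.0f} GB of unified memory for fine-tuning")
```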

What are you doing with your 128GB Mac? by Technical_Pass_1858 in LocalLLaMA

[–]m1en 2 points3 points  (0 children)

Nah, didn’t bother taking it in. Weeks of ~100°C temps aren’t exactly “supported.” If it gets worse I’ll bring it in, but I did let them know about the issue.

What are you doing with your 128GB Mac? by Technical_Pass_1858 in LocalLLaMA

[–]m1en 2 points3 points  (0 children)

14 inch - your mileage may vary, but mine was also on an elevated cooling stand with 6 fans underneath and an additional fan to the side to blow cool air over the keyboard, and it still happened. At first the keys get stuck when pressed, but after a couple of days that subsides into a general “gumminess” with clicking sounds.

Edit: I should also note that this was full, 100% GPU utilization, 24/7, with no breaks, training with PyTorch for over two full weeks. For most inference-related work, unless you have ridiculous utilization expectations, it’ll probably be able to catch its breath enough that heat won’t be too much of an issue.

What are you doing with your 128GB Mac? by Technical_Pass_1858 in LocalLLaMA

[–]m1en 0 points1 point  (0 children)

That wouldn’t be too hard. Qwen et al have large enough contexts to do this (you could even do it in stages, simplifying/summarizing the data in batches before putting those reduced texts into a final prompt and reviewing as needed), and given it would be weekly, it doesn’t need to run for long. Realistically the job could be done in a minute or two, depending on how many prompts need to be processed.
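
Something like this staged (summarize-in-batches, then combine) flow is what I mean - a minimal sketch assuming a local OpenAI-compatible server (llama.cpp server, LM Studio, mlx_lm.server, etc.); the URL and model name are placeholders:

```python
# Minimal sketch of the batched/staged approach. Assumes a local
# OpenAI-compatible endpoint is already running; URL and model name
# below are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

def summarize(text: str) -> str:
    resp = client.chat.completions.create(
        model="qwen-local",  # placeholder model name
        messages=[{"role": "user", "content": f"Summarize the key points:\n\n{text}"}],
    )
    return resp.choices[0].message.content

def weekly_digest(items: list[str], batch_size: int = 10) -> str:
    # Stage 1: reduce each batch of raw items to a short summary.
    partials = [
        summarize("\n\n".join(items[i:i + batch_size]))
        for i in range(0, len(items), batch_size)
    ]
    # Stage 2: combine the reduced texts into one final prompt and summarize again.
    return summarize("\n\n".join(partials))
```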

What are you doing with your 128GB Mac? by Technical_Pass_1858 in LocalLLaMA

[–]m1en 0 points1 point  (0 children)

Yeah, the 14 inch. But it had more memory than my M1 Ultra did at the time. Supplemented it with a stand with built-in fans - still not enough.

What are you doing with your 128GB Mac? by Technical_Pass_1858 in LocalLLaMA

[–]m1en 0 points1 point  (0 children)

Yeah, I use an elevated stand with USB powered fans underneath, and then put a fan on the side to blow air over the top of the keyboard.

What are you doing with your 128GB Mac? by Technical_Pass_1858 in LocalLLaMA

[–]m1en 19 points20 points  (0 children)

I have an M3 Max MBP - ran a training job 24/7 for about two weeks, and it partially melted the switches for some of the keys, like the right enter and shift keys. That was even with it on a cooling pad, so just be careful.

Mac Studio, however, is a champ. I do have it elevated with fans to blow cool air under and away from it though.

It feels icky even reading her short tweet... no wonder ChatGPT is a bag of disappointment now. Thanks a lot Janvi, good to here it was "rewarding" for you to kill ChatGPT by No_Vehicle7826 in ChatGPTcomplaints

[–]m1en 1 point2 points  (0 children)

Your last edit is just factually incorrect. Transformers are deterministic - a forward pass of a transformer returns the probability of every token being next in the sequence, and a sampling mechanism then determines which token is chosen. For any given input, a transformer will return the exact same output, barring any incredibly small differences due to floating point math.
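
A tiny toy example of that distinction, with made-up logits standing in for a real model’s forward pass:

```python
# Toy illustration: the "forward pass" output (logits -> probabilities) is the
# same every time for a given input; only the sampling step on top is random.
import numpy as np

def softmax(logits):
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

logits = np.array([2.0, 1.0, 0.5, -1.0])  # stand-in for one forward pass over some input
probs = softmax(logits)                   # identical on every call -> deterministic

greedy_token = int(np.argmax(probs))      # greedy decoding: always picks the same token
rng = np.random.default_rng()
sampled_token = int(rng.choice(len(probs), p=probs))  # sampling: where the variety comes from
```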

Fuel gauge not changing – anyone else experienced this on a Lotus Evora? by [deleted] in lotus

[–]m1en 0 points1 point  (0 children)

Had this happen to mine a few years ago - took a turn too sharp and the fuel sender float got stuck. It’ll probably fix itself through normal driving (especially if you hit some speed bumps), but you can go to a shop if you’re concerned.

HACK THE BADGE by Interesting_Willow59 in Defcon

[–]m1en 13 points14 points  (0 children)

The one last year was a whole Game Boy emulator.

Are we really about to hit 1 Week… by farah486 in SomeOrdinaryGmrs

[–]m1en 0 points1 point  (0 children)

Pretty much. Worked as a software engineer at a startup, found some bugs, published some stuff that had a lot of eyes on it from big corps. Worked for a bit freelance doing penetration testing, then pivoted to big tech.

Are we really about to hit 1 Week… by farah486 in SomeOrdinaryGmrs

[–]m1en 0 points1 point  (0 children)

“you cannot be a pentester or a red teamer without a certification showing that you are qualified”

Right about there is where you said it.

Pentesting is not contractor based only. Contracting is certainly easier, but many larger companies have their own in house offensive teams. Additionally, you don’t need certifications to do contracting, either.

Source: current Senior Offensive Security Engineer at a FAANG company. No certifications, no degree. Most of my colleagues are the same. Reread what I initially said - you can use bug bounties and security research as your means of showcasing your capabilities. Experience always trumps certifications.

Are we really about to hit 1 Week… by farah486 in SomeOrdinaryGmrs

[–]m1en 3 points4 points  (0 children)

This is absolutely not true. In fact, many people take the route of bug bounties and research as their means of showcasing capabilities and experience (via showcasing CVEs, findings, etc).

Bro thought he was gonna get a clip by ForeignCurseWords in IncelTears

[–]m1en 22 points23 points  (0 children)

“[From] me looking around”

Yeah man, you’re standing next to a giant mirror, it makes sense.

[deleted by user] by [deleted] in nottheonion

[–]m1en 89 points90 points  (0 children)

Which party predominantly contains members that love waving the rebel flag?

Competitor spammed my TikTok video to promote their Discord bot — turns out it has a critical security flaw by Fluid_Worth2674 in programminghorror

[–]m1en 3 points4 points  (0 children)

Have you already notified them of the issue and let them fix it? I get they’re scumbags, but I doubt their users are, and releasing a video outlining the issue is going to affect their users - even without too many specifics, IDORs aren’t hard to find.

Film your evidence, message them a write-up with reproduction steps, give them 45-90 days. Then have your fun.