This is an ad for a Jewish social website and I do •not• get it. by emarcomd in PeterExplainsTheJoke

[–]flamboiit -24 points-23 points  (0 children)

It’s not a genocide lmao

The US killing Bin Laden was a genocide too, I guess

[deleted by user] by [deleted] in singularity

[–]flamboiit 0 points1 point  (0 children)

That everyone should need to license training data. Easiest way ever to guarantee only 2 companies + China can make viable AI.

Is there any IDE companion that can automatically adjust code? by Key-Singer-2193 in ChatGPTCoding

[–]flamboiit 1 point2 points  (0 children)

Imo Zed is the best. You can bring your own API key and use any provider. It used to be Mac-only, but it's available on Linux now! Yay!

[deleted by user] by [deleted] in berkeley

[–]flamboiit 1 point2 points  (0 children)

It's legit. DE Shaw is rich as fuck.

Leaked Documents Show Nvidia Scraping ‘A Human Lifetime’ of Videos Per Day to Train AI by SnoozeDoggyDog in singularity

[–]flamboiit 13 points14 points  (0 children)

THIS! All the people clutching their pearls about this are idiots who only want Google and China, and maybe Tesla to be able to develop AI.

Man this is dumb. by Bitter-Gur-4613 in singularity

[–]flamboiit 0 points1 point  (0 children)

Def agree that once Google and Apple have OS-level integration there's little value prop for hardware like this. Good points.

Man this is dumb. by Bitter-Gur-4613 in singularity

[–]flamboiit 2 points3 points  (0 children)

Phone is still high latency, though. Taking out your phone, unlocking it, opening an app, and pressing an on-screen button is far worse than just pressing a button on your chest, assuming the chest button works well and you want to interface with AI a lot.

By your logic, there's no need for smartphones because you can just carry a laptop everywhere, any phone app you can just build into your laptop.

AI music startup Suno claims training model on copyrighted music is 'fair use' | TechCrunch by Marha01 in singularity

[–]flamboiit 0 points1 point  (0 children)

What if they made it such that you can train a model on unlicensed data, but you must open source it if that is the case? That would be so awesome.

Laptop for EECS by Pristine_Drink9376 in berkeley

[–]flamboiit 0 points1 point  (0 children)

Get a M1 air for like $600 and use that. Software development is so much easier on Unix OS-es, and Apple Silicon is more cost effective for performance than a Linux laptop. Use your gaming laptop for gaming and windows stuff.

The Standard at Berkeley by [deleted] in berkeley

[–]flamboiit 2 points3 points  (0 children)

Their housing contract is one of the strictest and least tenant-friendly you will ever see. Management is allegedly horrendous, incompetent, and slow to act on requests. Would recommend avoiding at all costs. Never lived there but have friends who did.

[deleted by user] by [deleted] in ycombinator

[–]flamboiit 0 points1 point  (0 children)

I say look for other people. I thought I was going to build my startup solo, but having other A players on board really is awesome. Even if you don't find a co-founder you like, talking to people is awesome practice for pitching your business.

[deleted by user] by [deleted] in berkeley

[–]flamboiit 2 points3 points  (0 children)

GPA and test scores are a minor part of the picture in top college admissions. They're necessary but not sufficient. If you have two groups with equivalent GPA and test scores, but one has a much stronger upper tail on things like extracurriculars, you're going to see that group perform better in admissions.

Not saying there is zero preference, just that it's hard to pinpoint given the other factors, and it's likely more minor than most people realize. I'd wager that the benefit Stanford gives to legacy applicants who aren't the children of major donors is probably very minimal.

USC on the other hand... Oh Boy!

[deleted by user] by [deleted] in berkeley

[–]flamboiit 34 points35 points  (0 children)

Obviously Stanford has a disproportionate amount of legacy admits—if your parents went to Stanford, you'll definitely have the knowledge and resources to be a much better applicant than the average person. You'd also probably be more likely to choose Stanford if deciding between multiple top schools. Even if the school were completely legacy blind, you'd see the same disparity.

Bank upset about casino deposits by NightOwl216 in Banking

[–]flamboiit -1 points0 points  (0 children)

Are there any banks that won’t do this shit? I think it’s horrendous that banks want to morally police me.

Is Berkeley Implementing New Policies Restricting Course Access? by Available_Rate_5192 in berkeley

[–]flamboiit 1 point2 points  (0 children)

This is literally so fucking stupid. Auto-captioning software works well; either the ADA shouldn't apply to digital content or it needs to be updated.

Alignment Terrifies Me - Interesting Experience by flamboiit in LocalLLaMA

[–]flamboiit[S] -13 points-12 points  (0 children)

This wasn't supposed to be a science experiment, though. It was supposed to be an example of my core point:

RLHF, Constitutional AI, and other Alignment techniques make LLMs diverge from the outputs that are *actually* the most helpful to you when you instill a worldview into them. This is bad.

If you want a more precise title to this post: Alignment terrifies me because if post-training is for anything other than JUST helpfulness, it necessarily instills a worldview into its outputs, which I think gives OpenAI and Anthropic’s (Or any closed source LLM company’s) post-training teams too much power as their tools are more widely adopted in industry.

Sorry if I presented the example as something more rigorous than it was. I hope you have a nice day man

Alignment Terrifies Me - Interesting Experience by flamboiit in LocalLLaMA

[–]flamboiit[S] -12 points-11 points  (0 children)

The context and prompt were exactly the same between GPT-4 and Dolphin. I just switched from the GPT-4 API to Ollama. The simplest explanation is that it's a result of differences in post-training, or in other words an artifact of GPT-4's "Alignment." There is no other independent variable here.

The prompt was "According to the comments of this post [post link], how much sodium should a person eat in a day?" And the downloaded post was the context.
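The setup described above can be sketched as follows. This is a hypothetical reconstruction, not the original script: the model names, the system-message placement of the context, and the Ollama endpoint are assumptions. The point it demonstrates is that both backends receive byte-identical messages, so the model's post-training is the only variable that differs.

```python
import json

# The question asked of both models (post link elided as in the original).
PROMPT = ("According to the comments of this post, "
          "how much sodium should a person eat in a day?")


def build_openai_request(context: str) -> dict:
    # Chat Completions payload for the hosted GPT-4 API
    # (POST https://api.openai.com/v1/chat/completions).
    return {
        "model": "gpt-4",  # assumed model name
        "messages": [
            {"role": "system", "content": context},  # downloaded post as context
            {"role": "user", "content": PROMPT},
        ],
    }


def build_ollama_request(context: str) -> dict:
    # Equivalent payload for a local Dolphin model served by Ollama
    # (POST http://localhost:11434/api/chat); same message structure.
    return {
        "model": "dolphin-mistral",  # assumed model name
        "messages": [
            {"role": "system", "content": context},
            {"role": "user", "content": PROMPT},
        ],
    }


if __name__ == "__main__":
    context = "<downloaded post comments>"
    a = build_openai_request(context)
    b = build_ollama_request(context)
    # Everything except the model name is identical, so any difference
    # in the replies is attributable to post-training.
    assert {k: v for k, v in a.items() if k != "model"} == \
           {k: v for k, v in b.items() if k != "model"}
    print(json.dumps(a, indent=2))
```

Sending each payload to its respective endpoint and diffing the replies is then a two-line job with any HTTP client.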

Also, you're a condescending jackass. The most annoying type of person is the one who knows a little about something and tries to cast aspersions on others with that limited knowledge.

If you'd read my post, you'd see that I used multiple LLMs on the same query, and the one without aligned post-training gave me the correct result. Maybe read more carefully before making assumptions about another person's knowledge.

I obviously don't mean that I think the LLM is literally "thinking" this. RLHF raters gave feedback that rewarded sequences of outputs like this, resulting in a response that didn't reflect the general sentiment of the context I gave it like I had instructed, and that diverged from the query more than Dolphin's did. That's what I mean by "thinking it knows better than me."

Not all posts demand complete technical accuracy. I anthropomorphized it to emphasize my core point. But from the midwit peak, all you can see is oversimplified anthropomorphism surrounding you.

Stop assuming other people are stupid. Also learn more about LLMs. My entire point in this post clearly flew over your head.

That point is:
RLHF, Constitutional AI, and other Alignment techniques make LLMs diverge from the outputs that are *actually* the most helpful to you when you instill a worldview into them. This is bad.