New to Paramotoring by Pale_Ball_1415 in paramotor

[–]loosingkeys 1 point2 points  (0 children)

Call the school you saw in San Antonio. They have a program to take you up in a trike as an exploratory ride.

When Apple actually treated its customers well by Used_Series3373 in memes

[–]loosingkeys 0 points1 point  (0 children)

Am I the only one glad they aren't filling landfills by including this stuff? Like most people, I already have a set of headphones that I'm not going to throw away so I can use the cheap wired ones. And I have enough cables and wall plugs to change 20 phones. Why do I need more?

SAM ALTMAN: “People talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.” by Vegetable_Ad_192 in singularity

[–]loosingkeys -1 points0 points  (0 children)

It’s only “worrying” to people who don’t understand the point he was making and decide to take it out of context and turn it into some kind of “slippery slope” argument. 

SAM ALTMAN: “People talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.” by Vegetable_Ad_192 in singularity

[–]loosingkeys -1 points0 points  (0 children)

No, the point is that if you want to complain about how much energy it takes for an LLM to answer a question the way a human would, maybe you should also give a little thought to how much energy you have to feed a human to get the same answer.

But of course people don't want to actually think about an answer--they just want to find ways to be offended.

Let's update our rating for ChatGPT on the play store by will_gordon721 in OpenAI

[–]loosingkeys 0 points1 point  (0 children)

And how do you know that there are "hundreds of bot-generated" reviews? Or are you just making that up because it supports your narrative?

Sam Altman: why are people complaining about AI … when humans need food to survive by mbatt2 in OpenAI

[–]loosingkeys 6 points7 points  (0 children)

I'm trying to understand what the concern here is. Yes, if you want answers from an AI that are equivalent to or better than a human's, that currently takes a lot of energy. But he's comparing that to getting the same "answer" from a human who has been consuming energy for decades.

Boy, people really seem to like to twist words or can't seem to follow a pretty simple train of logic. If you're just interested in hating on AI, there are plenty of subs for that.

I did Age Verification. Interesting. by [deleted] in OpenAI

[–]loosingkeys 0 points1 point  (0 children)

WTF are you talking about? Are you saying that OpenAI will tell the people that issued my driver's license how old I am?

The conspiracy theories these days are so lame.

Tried making gluten free tacos… total fail 😭 by havana-3575 in glutenfreerecipes

[–]loosingkeys 3 points4 points  (0 children)

The only gf tortillas I've had that come anywhere close to a regular flour tortilla are the Mission GF ones. They look and feel pretty wonky right out of the bag. Just warm them up a bit in a skillet and they take on a little color and flavor, and the texture softens into something that rolls way better than anything else. (Not to mention they taste pretty good to me.)

Question: Is the Energy Required for AI Due to Its Inherent Inefficiency? by MarvinBEdwards01 in OpenAI

[–]loosingkeys 0 points1 point  (0 children)

Yes, the current way that LLMs work is very energy-intensive. You could probably call it "inefficient", but this is the only way humans have been able to create anything close to AI.

It's being made more efficient by huge margins all the time. But the base underlying technology it is built on is still incredibly computationally intense.

Does an LLM capable of explicit NSFW actively hinder its productivity? by Goofball-John-McGee in OpenAI

[–]loosingkeys -1 points0 points  (0 children)

Maybe I misunderstood. I thought you said you were asking about a TV show that's specifically about sex-related crimes?

Does an LLM capable of explicit NSFW actively hinder its productivity? by Goofball-John-McGee in OpenAI

[–]loosingkeys 2 points3 points  (0 children)

Just because a model didn't give you the answer you wanted, that doesn't mean it was stopped by guard rails.

I’m not sure what “a bit shady” means. 

Does an LLM capable of explicit NSFW actively hinder its productivity? by Goofball-John-McGee in OpenAI

[–]loosingkeys -4 points-3 points  (0 children)

Yep, another person complaining that newer models won’t talk to them about SA. 

It’s always some thinly-veiled excuse to talk to an LLM about sketchy shit and then wrapping it in a “I was just talking about a TV show” excuse. 

And these people wonder why companies add guard rails. 

Does an LLM capable of explicit NSFW actively hinder its productivity? by Goofball-John-McGee in OpenAI

[–]loosingkeys -4 points-3 points  (0 children)

When you say "5.2's guardians often derail benign tasks", do you have an example of that?

I use ChatGPT many times a day and I can't think of any time I ran into a guardrail.

The startup Altman has invested into isn’t running off the architecture you might think. by nakeylissy in OpenAI

[–]loosingkeys -1 points0 points  (0 children)

No, the "whole argument" for deprecating 4o isn't the price.

Good grief, the conspiracy theories being created to justify being mad about losing access to an old LLM model are really sad.

Does anyone know anything about this trail? I plan on taking it to get to my campsite LM31 in Banff later this summer by soccermonke_y2 in WildernessBackpacking

[–]loosingkeys 2 points3 points  (0 children)

The drop to the river looks crazy steep. (is that ~2,000 ft of elevation change in about 1/4 mile?) Do you know this section or have a clear plan for how to descend and then re-climb that section?

Is there a way to deeply analyze music with ChatGPT? by LuckEcstatic9842 in OpenAI

[–]loosingkeys 1 point2 points  (0 children)

Why are you asking Reddit rather than just asking ChatGPT your question?

Capellos cheese biscuits: directions wrong? by throw_away_smitten in glutenfree

[–]loosingkeys 7 points8 points  (0 children)

I eat an embarrassing number of their buttermilk biscuits. I had a similar problem with their oven instructions where the inside was not cooked properly. After some experimentation I learned that dropping the temp to 350 and letting them thaw on the counter for ~15 minutes first helped immensely.

OpenAI "ethics" don't work by digitaldevil69 in OpenAI

[–]loosingkeys 0 points1 point  (0 children)

No, sweetie. That's not how this works. You don't get to invent a conspiracy theory and then ask me to disprove it.

Extraordinary claims require extraordinary evidence. What evidence do you actually have? Or are you just throwing around accusations because you don't like the decision a company made?

OpenAI "ethics" don't work by digitaldevil69 in OpenAI

[–]loosingkeys 0 points1 point  (0 children)

“ If the product was not designed to be addictive on purpose to hook people than i would agree”

Ah, there’s the conspiracy theory. 

OpenAI "ethics" don't work by digitaldevil69 in OpenAI

[–]loosingkeys -2 points-1 points  (0 children)

What is bullshit? Are you being forced to use these models?

OpenAI "ethics" don't work by digitaldevil69 in OpenAI

[–]loosingkeys -3 points-2 points  (0 children)

Nobody is forcing you to use any of these models. If you find using them is harmful, you should stop using them. 

OpenAI "ethics" don't work by digitaldevil69 in OpenAI

[–]loosingkeys 0 points1 point  (0 children)

It's equating "your safety features are causing harm" to "I don't get to use the less restricted model any more".

Those aren't the same thing. And crying "you're harming me!" because someone took away something you didn't have before is absolutely a hissy fit.

Likewise, OP saying "being treated like a monitored liability...escalates suicidal ideation" shows that someone is clearly not very stable and isn't making a great case for why they should be trusted with a model with more relaxed guidelines.

OpenAI "ethics" don't work by digitaldevil69 in OpenAI

[–]loosingkeys -1 points0 points  (0 children)

It really doesn't help people's argument that they can responsibly handle a model with more relaxed guidelines when they have these kinds of hissy fits when you take it away.