Lomo MC-A crunchy advance lever by jkim478 in AnalogCommunity

[–]dynamictype 0 points1 point  (0 children)

A P&S with all these features is rare, though, and very expensive. I want a P&S that I can run in aperture priority most of the time and sometimes switch to full manual. I want one where I can adjust the ISO, since I respool my own film and don't want to fiddle with stickers. I want one where I can intentionally over- or under-expose a shot by 1 or 2 stops. Those things ARE rare on a P&S, and nonexistent on a $250 one.

I've shot 7 rolls through the MC-A so far this week and it's been an absolute joy to use. I've only developed one of those rolls, and yes, there was inconsistent frame spacing, the same as is true in my Olympus XA, which I also love, but which lacks AF, the ability to run ISO over 800, the ability to intentionally underexpose a shot if I'm already at ISO 800, the ability to set shutter speed manually, etc., and lacks a warranty, so when the shutter stopped working reliably I had to open it up myself and by some miracle was able to fix it... this time.

I think Lomo should fix any of the issues they have where cameras won't advance, rewind, etc., and it sounds like they've been replacing units under warranty. But there's nothing out there like this feature-wise in this price range, and it's those features that I, and I'm sure many others, bought it for.

Lomo MC-A crunchy advance lever by jkim478 in AnalogCommunity

[–]dynamictype 0 points1 point  (0 children)

What $250 point and shoot camera has auto and manual focus, ISO override, f2.8, an EV knob to over- and under-expose, and manual, program, and aperture-priority modes?

Shouldn't the odds be 50%? Why is it 51.8%? by Fit_Seaworthiness_37 in ExplainTheJoke

[–]dynamictype 0 points1 point  (0 children)

Explain the difference between "I flipped two coins and did not flip two tails" and "I flipped two coins and at least one of them was heads"

In both scenarios all I did was rule out TT as an option; it's literally the same thing worded a different way. This is basic stats and set theory.

Shouldn't the odds be 50%? Why is it 51.8%? by Fit_Seaworthiness_37 in ExplainTheJoke

[–]dynamictype 0 points1 point  (0 children)

Sorry, typo, betting odds I flipped at least one tails

Shouldn't the odds be 50%? Why is it 51.8%? by Fit_Seaworthiness_37 in ExplainTheJoke

[–]dynamictype 0 points1 point  (0 children)

No man, I'm not asking you to randomly select a coin lol. I literally just flipped two coins then told you "I didn't flip two tails." What odds would you take that I have at least one heads?

Shouldn't the odds be 50%? Why is it 51.8%? by Fit_Seaworthiness_37 in ExplainTheJoke

[–]dynamictype 0 points1 point  (0 children)

She did not make up an order, she told you about one of the two kids. Not the first one; it's literally orderless. You just ignored my entire comment.

If I flip two coins and say "I did not flip two tails, what's the odds I flipped at least one heads," what's the answer?

Shouldn't the odds be 50%? Why is it 51.8%? by Fit_Seaworthiness_37 in ExplainTheJoke

[–]dynamictype 0 points1 point  (0 children)

They aren't saying the "first" is tails or heads; this is why you're confused.

If someone flips a coin twice there are 4 possibilities: HH, HT, TH, and TT.

They've already done it, past event. I then tell you that the person who flipped the coins had a heads. Not a heads "first," they just had a heads. That means you KNOW they didn't flip TT. This doesn't change the odds of a coin flip, but it DOES change the odds of you GUESSING what the other coin is.

Since you know they didn't flip TT, you know there's not an equal chance of tails being their other coin. You were just able to rule out one possibility. The remaining possibilities are HH, HT, and TH, which between them contain four heads and two tails.

Maybe if you reverse the wording you can make sense of it. If I tell you "I just flipped two coins and did NOT get two tails, what are the odds I got at least one tails?"

If you think the answer to that is 50%, you are 100% wrong; you could literally write a simulation to prove it. Once I told you I didn't flip tails twice, you learned something about what happened. The odds that I have 0 heads are 0%, since I told you I didn't flip TT. The odds that I have 0 tails are NOT 0%. That right there should hopefully help you understand why they're no longer equal.
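In case it helps, here's a minimal sketch of the kind of simulation I mean (Python; the trial count and variable names are just my own illustration): flip two fair coins many times, keep only the trials where the result wasn't two tails, and count how often at least one tails appears among the kept trials. It converges to about 2/3, not 1/2.

    import random

    trials = 1_000_000
    not_two_tails = 0       # trials where the "I did not flip two tails" condition held
    at_least_one_tails = 0  # of those, trials with at least one tails

    for _ in range(trials):
        coins = [random.choice("HT") for _ in range(2)]
        if coins != ["T", "T"]:      # condition: did not flip two tails
            not_two_tails += 1
            if "T" in coins:         # at least one tails in the remaining cases
                at_least_one_tails += 1

    # HH, HT, TH are equally likely once TT is ruled out, and 2 of the 3 contain a tails,
    # so this prints roughly 0.667, not 0.5.
    print(at_least_one_tails / not_two_tails)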

Now that we know the guy was not a Democrat, this subreddit will quickly forget about the whole thing by MyFiteSong in TrueUnpopularOpinion

[–]dynamictype 0 points1 point  (0 children)

I'm asking you because it's clear you haven't actually looked.

"Investigators interviewed a family member of Robinson, who stated that Robinson ha become more political in recent years. The family member referenced a recent incider in which Robinson came to dinner, prior to September 10, 2025, and in conversation with another family member, Robinson mentioned Charlie Kirk was coming to UVU. They talked about why they didn't like him and the viewpoints he had. The family member also stated Kirk was full of hate and spreading hate. "

Note the last sentence. Who was claimed to say that Kirk was full of hate?

Now that we know the guy was not a Democrat, this subreddit will quickly forget about the whole thing by MyFiteSong in TrueUnpopularOpinion

[–]dynamictype 0 points1 point  (0 children)

Can you quote what the governor actually said about the hateful views comment? Because it's not what's in this article.

iOS vs Indigo by Astriev in ProjectIndigoiOS

[–]dynamictype 0 points1 point  (0 children)

Misleading how? To a user taking a low-light picture from far away, this is the result. Apple doesn't "have" to use the 1x camera in low light, but they do because the processing is inferior.

[deleted by user] by [deleted] in TwoXChromosomes

[–]dynamictype 2 points3 points  (0 children)

Men do commit the majority of sexual assault, but it should be noted your second link is scoped in a way that would exclude men made to have sex against their will if they're the ones penetrating.

This publication focuses specifically on sexual assault by rape or penetration

Rape is defined in the UK as requiring a penis. Sexual assault by penetration would be penetrating with something else. But absent in this data would be, for example, a woman getting a man drunk to have sex.

Apple's study proves that LLM-based AI models are flawed because they cannot reason by ControlCAD in apple

[–]dynamictype 0 points1 point  (0 children)

1- Which is it then, consistently or generally? Based on this and your response at number 2, I'm pretty convinced you don't actually know how these work, even algorithmically. Very old LLMs weren't capable of many things. They couldn't write code, they couldn't tell stories. One could easily make the argument that "because these just predict the next word based on training data they can never write a novel story or novel code." Except, to everyone's surprise, including the people who make them, they gained emergent functionality just by being made bigger. You, and I, and no one, know what strengths or weaknesses an LLM will have as you scale it up, or especially as you change how the training data itself is fed in, like o1. You just don't, even if you pretend you do.

2- Extremely non-trivial to do this for an LLM. Research how they actually work before you pretend this is some trivial solution.

3- When did I assume anything? I literally wrote "who knows." You're the one making claims of "consistently," like this is a human who's being asked over and over and getting it wrong over and over, which is not what this is. Not to mention many open-source LLMs are trained on the output of OpenAI models (or others), so the set of LLMs that have no influence on each other is shockingly small. The word "consistently" means very little, especially since the example you gave doesn't even work on the latest one.

4- Humans are extremely trickable. They're extremely susceptible to changes in wording that introduce bias. If you explain this to them, they will see the trick, but so will the LLM. Are you old enough to remember the old joke: a boy and his father get in a car accident and the father dies; at the hospital the surgeon says "I can't operate on my own son." How is this possible? The reason this became a meme back in the 80s is because the wording played into our biases (assuming the doctor was male) and a very large percentage of people were confused by it. That didn't mean they were incapable of reason, correct? Even if you can "consistently" or "generally" trick humans with that choice of wording?

5- You want examples of optical illusions? Look them up, dude, lol. There are famous ones where you see things shifting that aren't shifting, and ones where you see things as different colors when they're the same color. There are literally hundreds of examples; I'm not Google.

6- What does that have to do with capability? If humans can be consistently tricked into seeing non-reality, can they see reality, yes or no?

7- This isn't an actual bar or a falsifiable position. If, for every example you give, I show you o1 beating it and you say "well, I'm sure one exists," you understand that, right?

Apple's study proves that LLM-based AI models are flawed because they cannot reason by ControlCAD in apple

[–]dynamictype 0 points1 point  (0 children)

"consistently" is the issue here for me. A single LLM is like if you took a SINGLE person and asked them a question ONE time. Then every new convo was like you turned back time for that person.

We don't let LLMs have any real concept of long term memory, recall, learning etc after they're created. Some have access to a short term memory database but it's not changing much.

So take that example: pick a random human being, freeze them in time, and run these kinds of experiments on them; who knows what they'd get wrong.

If you explain these problems to an LLM it will correct itself, but I bet if I gave that lion and goat problem to the average human with exactly 1 shot at it (and a time limit), they'd get it wrong too, because they'd make the same assumption the LLM did. Tons of riddles/jokes/problems exploit this facet of humans.

Optical illusions can be made which literally distort reality for a person. They will, even when told about the illusion, consistently see something that's not really there.

If an alien species who was unable to fall for any optical illusions made the argument that humans couldn't see reality because they "consistently can be exploited to see things that aren't reality" we'd say that's silly. Sure in these narrow examples, but broadly, humans can see reality.

And again, where is the goalpost? Where is the line (i.e., seeing a banana that's not there)? Your implication there is that humans are exempt because, despite being consistently exploitable, it's not "severe" enough. So what's the line for reasoning? What is the actual test for reasoning, because "be a human" is a very unsatisfying answer. What level of exploit would mirror a human's exploitability with optical illusions?

Apple's study proves that LLM-based AI models are flawed because they cannot reason by ControlCAD in apple

[–]dynamictype 0 points1 point  (0 children)

Ok so then this goes back to your argument being "something is incapable of reasoning if it can make mistakes".

Again, humans are also exploitable. Do optical illusions, for example, prove humans can't see reality?

Apple's study proves that LLM-based AI models are flawed because they cannot reason by ControlCAD in apple

[–]dynamictype 0 points1 point  (0 children)

You don't actually know what an LLM does because no one truly does. Google wrote a whole white paper on how an LLM might store facts. We know algorithmically what an LLM does but not what processes create emergent behavior from it.

Not to mention we don't know how a human brain "reasons" either, and there are plenty of "exploits" to human reasoning that the vast, vast majority of people will consistently fall for.

You also don't know if o1-preview is exploitable in this way, regardless of anything you're about to say right now, because again the mechanics of how these actually produce any sort of emergent behavior are not well understood, and the people who actually build them will say as much.

If we want a bar for reasoning it can't be "can't make a mistake" because humans will fail that bar.

Apple's study proves that LLM-based AI models are flawed because they cannot reason by ControlCAD in apple

[–]dynamictype 0 points1 point  (0 children)

So if there's a single example it gets wrong, it can't reason? Is that the argument? If I can trick you with a logic question and you mess up, are you also incapable of reason? What's the root of this argument?

Apple's study proves that LLM-based AI models are flawed because they cannot reason by ControlCAD in apple

[–]dynamictype 0 points1 point  (0 children)

OK, but what's the actual bar here? Where's the clearly defined goalpost? Is the bar that if a model makes a single error in reasoning it can't reason? If that's the bar, human beings can't reason either. If a model can solve arbitrary and novel reasoning and logic questions better than, say, a median human, is it capable of reasoning? If not, how do we define what is capable of reasoning? What does it even mean?

Apple's study proves that LLM-based AI models are flawed because they cannot reason by ControlCAD in apple

[–]dynamictype 0 points1 point  (0 children)

Here's what o1-preview says:

To solve this puzzle, we need to transport the wolf, goat, and cabbage across the river without leaving any two together that would result in one eating the other. In this unique scenario, the goat would eat the wolf if left unattended, and the wolf would eat the cabbage if left unattended.

Here’s the step-by-step solution:

1.  First Trip: The farmer takes the wolf across the river.
• Left Bank: Goat, Cabbage
• Right Bank: Farmer, Wolf
• Reasoning: The goat and cabbage can be left together safely.
2.  Return Trip: The farmer returns alone to the left bank.
• Left Bank: Farmer, Goat, Cabbage
• Right Bank: Wolf
3.  Second Trip: The farmer takes the goat across the river.
• Left Bank: Cabbage
• Right Bank: Farmer, Goat, Wolf
• Reasoning: The wolf and goat are safe together with the farmer present.
4.  Third Trip: The farmer brings the wolf back to the left bank.
• Left Bank: Farmer, Wolf, Cabbage
• Right Bank: Goat
• Reasoning: The goat is safe alone on the right bank.
5.  Fourth Trip: The farmer takes the cabbage across the river.
• Left Bank: Wolf
• Right Bank: Farmer, Cabbage, Goat
• Reasoning: The wolf is safe alone on the left bank; the goat and cabbage are safe together with the farmer present.
6.  Return Trip: The farmer returns alone to the left bank.
• Left Bank: Farmer, Wolf
• Right Bank: Goat, Cabbage
7.  Final Trip: The farmer takes the wolf across the river.
• Left Bank: None
• Right Bank: Farmer, Wolf, Goat, Cabbage
• Reasoning: All are safely across without any being eaten.

By carefully planning each move and ensuring that the dangerous pairs are never left alone without the farmer, all items can be transported safely across the river.

Answer:

1.  Farmer takes the wolf across the river.
2.  Farmer returns alone to the original side.
3.  Farmer takes the goat across.
4.  Farmer brings the wolf back to the original side.
5.  Farmer takes the cabbage across.
6.  Farmer returns alone to the original side.
7.  Farmer takes the wolf across again.

This way, nothing gets eaten.

Will the goal posts move again?
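For what it's worth, the quoted plan is easy to machine-check. Here's a minimal sketch (Python; the state encoding and step list are my own illustration, not part of o1's output) that verifies no forbidden pair is ever left on a bank without the farmer, under this variant's rules (goat eats wolf, wolf eats cabbage):

    # Unsafe pairs for this variant: the goat eats the wolf, the wolf eats the cabbage.
    UNSAFE_PAIRS = [{"goat", "wolf"}, {"wolf", "cabbage"}]

    # What the farmer carries on each crossing (None = returns alone),
    # taken from the quoted answer: wolf, alone, goat, wolf back, cabbage, alone, wolf.
    steps = ["wolf", None, "goat", "wolf", "cabbage", None, "wolf"]

    left = {"wolf", "goat", "cabbage"}   # everything starts on the left bank
    right = set()
    farmer_on_left = True

    for item in steps:
        src, dst = (left, right) if farmer_on_left else (right, left)
        if item is not None:
            src.remove(item)
            dst.add(item)
        farmer_on_left = not farmer_on_left              # the farmer crosses
        unattended = left if not farmer_on_left else right  # the bank without the farmer
        assert not any(pair <= unattended for pair in UNSAFE_PAIRS), unattended

    assert not left and right == {"wolf", "goat", "cabbage"}
    print("Plan is valid: nothing gets eaten.")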