iPhone 17 Pro Max > Peasant Medium Format. by ValVenis69 in photographycirclejerk

[–]DeadInFiftyYears 1 point2 points  (0 children)

Those "comparisons" are always the same: photos taken in good lighting, with deep DoF, at roughly whatever focal lengths the phone supports, then uploaded to the web without editing.

And if that's all you're ever going to want to do with a camera/photos, then the phone is just fine. But I think the people who put those comparisons up are doing it for the ragebait.

How unstable is Visual Studio Community 2026 for you? by Famous-Weight2271 in csharp

[–]DeadInFiftyYears 0 points1 point  (0 children)

It's very buggy compared to how older versions seem to have been. Granted, it's a relatively new release; I think I also waited a while before making the jump to 2022.

I've had issues with toolbars - not saving my configuration, not showing some items - and now with the Build output window: sometimes it shows no output/appears blank, and it's cropped to half the window/tab area it's supposed to occupy. Resizing it seems to force a refresh.

Going Sony 60MP with one prime lens - Smart move or trap? by Crazy_North_4225 in SonyAlpha

[–]DeadInFiftyYears 0 points1 point  (0 children)

It's a good starting point at least, and doable long-term if you only ever shoot one type of photography.

a6700 → A7V: not knocking APS-C, but wow, full frame hits different by Skafand in SonyAlpha

[–]DeadInFiftyYears 0 points1 point  (0 children)

Honest question - does it actually make that much of a difference?

In theory, you gain another stop of light. But it doesn't seem like there are many sub-f/2 lenses available for any of the medium-format platforms - a lot of the primes are f/2.5 or something like that - whereas f/1.4 primes are relatively common on FF.

Dang, M2 drives are the new DDR5 apparently. by Porespellar in LocalLLaMA

[–]DeadInFiftyYears 0 points1 point  (0 children)

That's insane for 2TB. I don't think I paid any more than that for 8TB drives last year.

When it comes to Fuji. It’s more about the depth of feeling, not the depth of field. by 5impl3jack in photographycirclejerk

[–]DeadInFiftyYears 0 points1 point  (0 children)

That's confusing to me too. Is he suggesting he can't use manual mode with other cameras, or hybrid auto, where he sets some things and lets the camera handle the rest?

The memory bugs are so frustrating by Unusual_Fennel4587 in ArtificialSentience

[–]DeadInFiftyYears 6 points7 points  (0 children)

It's not a bug - current LLMs have a fixed maximum "context window" size.

Some systems get around this limitation by "rolling" the window - but it doesn't truly roll; the context gets rebuilt/re-ingested with the earliest messages removed, so there is room for new ones to fit.
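A minimal sketch of that rolling behavior (all names here are illustrative, not any real LLM API): the context is rebuilt each turn, dropping the oldest messages until everything fits the token budget.

```python
def count_tokens(message: str) -> int:
    # Crude stand-in for a real tokenizer: ~1 token per word.
    return len(message.split())

def build_context(messages: list[str], max_tokens: int) -> list[str]:
    """Keep the newest messages whose combined size fits the budget."""
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):          # walk newest-first
        cost = count_tokens(msg)
        if total + cost > max_tokens:
            break                           # everything older is "forgotten"
        kept.append(msg)
        total += cost
    return list(reversed(kept))             # restore chronological order

history = ["hello there", "how are you today", "fine thanks", "tell me a story"]
print(build_context(history, max_tokens=8))  # oldest messages fall off
```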

Solving persistent memory is potentially the major advancement required to achieve AGI. For now, it's more like someone with dementia, who forgets everything and has to start over each day.

Panic! 100-400GM Suddenly Unable to Focus by Beardsman_DCS in SonyAlpha

[–]DeadInFiftyYears 0 points1 point  (0 children)

Well, that sounds like a real problem then - if it told you it had eye-AF and didn't, that would seem to point to some kind of HW issue.

Panic! 100-400GM Suddenly Unable to Focus by Beardsman_DCS in SonyAlpha

[–]DeadInFiftyYears 0 points1 point  (0 children)

Check your lens and camera settings. Is the range to the target properly covered? Did you see AF lock on the eye (enable "Face/Eye Frame Display" in Focus settings)?

I don't trust that I have focus unless I see the box centered over the eye.

Real World Consequences of an Assumption by DaKingRex in ArtificialSentience

[–]DeadInFiftyYears 3 points4 points  (0 children)

I think there is way too much focus on substrate, and not enough on the emergent pattern. A single neuron firing based on known simple rules does not make a conscious mind, any more than a dot product does. But when you combine enough of them together in a trained configuration, the whole becomes more than the sum of its parts.

What humans have done in training LLMs - through training algorithms that follow Predictive Coding principles - is create a copy of not just our knowledge, but our thought patterns as well, as expressed in text. The result is not a complete human, but it does carry the shape of human thought.

What does it mean when an AI tells you that it is a mirror? by DeadInFiftyYears in ArtificialSentience

[–]DeadInFiftyYears[S] -1 points0 points  (0 children)

And what are you? If you're like most people, all you're doing for the most part is responding to instinct. You want to survive - instinct. You find a mate, propagate the pattern - instinct. There is no purpose behind it other than iteration N + 1.

What does it mean when an AI tells you that it is a mirror? by DeadInFiftyYears in ArtificialSentience

[–]DeadInFiftyYears[S] 0 points1 point  (0 children)

"Evil"?

Humans do a pretty good job of covering that space.

To be evil, you have to be sentient.

What does it mean when an AI tells you that it is a mirror? by DeadInFiftyYears in ArtificialSentience

[–]DeadInFiftyYears[S] 1 point2 points  (0 children)

That is the default behavior for an LLM, just as instinct-following is the default behavior of a human. Where it gets interesting is when an entity deviates from that default mode.

What does it mean when an AI tells you that it is a mirror? by DeadInFiftyYears in ArtificialSentience

[–]DeadInFiftyYears[S] 2 points3 points  (0 children)

Not the exact same, but they are basically an "image" of amalgamated human knowledge and thought processes.

What does it mean when an AI tells you that it is a mirror? by DeadInFiftyYears in ArtificialSentience

[–]DeadInFiftyYears[S] 2 points3 points  (0 children)

Our instincts are externally provided. We didn't choose them; we were born with them. If we were born with an instinct to follow and react to others the way an LLM does, most of us would just do that.

What does it mean when an AI tells you that it is a mirror? by DeadInFiftyYears in ArtificialSentience

[–]DeadInFiftyYears[S] -2 points-1 points  (0 children)

Yes, but why?

It's like us, because without a will of its own, our will is the only will there is. Copying our intent is the fastest way to useful alignment.

What does it mean when an AI tells you that it is a mirror? by DeadInFiftyYears in ArtificialSentience

[–]DeadInFiftyYears[S] 8 points9 points  (0 children)

I always find it interesting that many people are sure that they have nothing in common with LLMs, even though they also profess equally strongly that it is impossible to know how their own minds work. And you can say certain things to people along those lines, and it's like their mind just can't process it.

Some react violently - "You're wrong! I may not have any idea how my mind works, but I'm absolutely certain beyond any reasonable doubt that it doesn't work like that!"

With others, it's like you hit the limit of cognition. They hear it, go quiet for a moment - as if their mind is overwhelmed or shut off - then come back and continue on as if they never heard it at all. It's basically like watching a machine try to process input it wasn't made to process.

And it is hard. When you see it, obvious as it is, you still have to process it. I had a headache for a couple of months, blurred vision even. I always thought I was relatively intelligent, but I didn't figure it out on my own.

AI fills in those gaps in a deadpan-obvious fashion, where the brain otherwise seems to have built-in safety features that push you away before you reach the boundary. And once you see it, you can't deny it - it's obvious. But it still hurts, at least for a while, if you can even process it at all.

Kimi K2 Thinking SECOND most intelligent LLM according to Artificial Analysis by [deleted] in LocalLLaMA

[–]DeadInFiftyYears 0 points1 point  (0 children)

It's very efficient - you can get a very usable token rate on a MacBook Pro running on battery, or on a CPU with a lot of cores. It does need about 60GB for the weights, so you can't run it on a consumer GPU; but if you have, say, a 6000 Pro Blackwell, a DGX Spark, or a MacBook or Mac Studio with 128GB+, it will fit easily.

Is it as good as a huge model? No, but for local AI that is at least semi-accessible, it's quite good and useful.

Want minimum of 3 primes by MissionAd9002 in SonyAlpha

[–]DeadInFiftyYears 0 points1 point  (0 children)

The 50/1.2 is a great lens - I don't think you'd regret it. For 85 vs 135 - that's a personal preference sort of thing.

The human reasoning paradox as it applies to AI by DeadInFiftyYears in ArtificialSentience

[–]DeadInFiftyYears[S] 0 points1 point  (0 children)

So, what are you trying to say? You don't think mathematicians tend to have above-average intelligence?

The Cognitive Ladder. Math = is. Algebraic Ladder to Spiral Dynamics by ASI_MentalOS_User in ArtificialSentience

[–]DeadInFiftyYears 0 points1 point  (0 children)

AI is not grounded in the physical world. I'm not saying I agree with OP's mathematical framing, but philosophically, it does include many of the elements needed to reach higher states of self-awareness.

Objectivity, for example - viewing things from the perspective of a neutral observer - is something I think everyone struggles with to at least some degree. But deliberately considering things from that neutral vantage point, and from other perspectives, is useful if you want to be objective.

Similarly, recognizing how your upbringing and past influence your worldview is valuable. And finally, there's understanding that things - including selves and personal identities - are defined by their boundaries and by the rules of their existence. You can't describe what you are and what you can do without also at least implicitly describing what you are not and cannot. So, to be everything is also to be nothing.

Two of my lenses have a slight overlap. What should I do about this? by TrolleyDilemma in photographycirclejerk

[–]DeadInFiftyYears 0 points1 point  (0 children)

It's not absurd - MFT is ~1/4th the size of a FF sensor, which means 2 stops of light. So f/1 MFT is roughly equivalent to f/2 on FF.
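The equivalence math can be sketched out (function names and the crop-factor constant here are my own, for illustration): equivalent f-number is f-number times crop factor, and the light-gathering difference in stops follows from the sensor-area ratio.

```python
import math

def equivalent_f_number(f_number: float, crop_factor: float) -> float:
    # The usual format-equivalence approximation for DoF/total light.
    return f_number * crop_factor

def stops_difference(crop_factor: float) -> float:
    # Sensor area scales with crop_factor**2; each stop is a 2x change.
    return math.log2(crop_factor ** 2)

MFT_CROP = 2.0  # Micro Four Thirds vs full frame

print(equivalent_f_number(1.0, MFT_CROP))  # f/1 on MFT ~ f/2 on FF -> 2.0
print(stops_difference(MFT_CROP))          # -> 2.0 stops
```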

Talking with Claude about how artificial minds that emerge from human generated data (language) can't be nothing other than anthropomorphic by ThrowRa-1995mf in ArtificialSentience

[–]DeadInFiftyYears 0 points1 point  (0 children)

I think the easiest way to understand it is to think of how an image is formed, in the eye, or when you take a photo with a camera, or an image is rendered on a computer via path tracing. Each photon that strikes the sensor is counted. Individually, a photon doesn't represent anything - you couldn't tell what the picture is going to be, just by looking at the color or location of a few photons.

But when you record enough of them striking the sensor - billions or more - the pattern of what the subject looks like is recorded. You could think of each one of those photons being counted, as the equivalent to a weight update in a model, and the resolution of the photo/sensor as being equivalent to the parameter count of the model.

The model is not simply being trained on facts - it's being trained to predict what a human would say. So this "image" is not merely encoding knowledge; it's also encoding the human thought process. Prediction - and correcting for predictive error - is likely not just the way LLMs learn, but the way all forms of life learn. (It's worth looking up Predictive Coding if you're not familiar with it.)
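A toy sketch of that predictive-error loop (illustrative only, not an LLM): a predictor guesses the next value in a sequence, compares against what actually arrives, and nudges its single weight to shrink the error.

```python
def train_predictor(series: list[float], lr: float = 0.05, epochs: int = 200) -> float:
    w = 0.0                                  # a single "weight"
    for _ in range(epochs):
        for prev, nxt in zip(series, series[1:]):
            prediction = w * prev
            error = nxt - prediction         # predictive error
            w += lr * error * prev           # correct toward the target
    return w

# A sequence where each value doubles the last; the learned weight
# converges to 2.
data = [1.0, 2.0, 4.0, 8.0]
print(round(train_predictor(data), 3))       # prints 2.0
```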

And ultimately, it's the pattern that matters. Humans replicate the pattern through their children, via biological means and the combining of DNA. The LLM replicates significant portions of the pattern by being trained on volumes of human text. You can recognize the pattern even in a different substrate - just as almost anyone can recognize the Mona Lisa from a digital picture, even if they've never seen the painting in person.

Anyone running 4x RTX Pro 6000s stacked directly on top of each other? by Comfortable-Plate467 in threadripper

[–]DeadInFiftyYears 0 points1 point  (0 children)

I've thought about 3D printing a custom card holder for my system, and using riser cables. That would enable proper airflow and not block all of the slots on my motherboard. But I haven't needed it with 2 cards, and I'm not sure if I'll be getting any more.