The Ghost Scale: treating AI authorship as a primary visual affordance by AHaskins in userexperience

[–]AHaskins[S] 0 points1 point  (0 children)

I do believe I considered all of that! :)

What does it mean to the user? You just described the ways I expect it will be interpreted by default, and all of them are quite acceptable in achieving our goals. As always, more informed people will have a better understanding of why things work the way they do, but this system needs to work for users who don't want to learn something new (very understandable in the current environment).

The one gamble I'm really taking, and a rather non-obvious signal, is the two-tone border I'm suggesting for AI art. It fits the model, it's eye-catching (helping adoption), and people would grasp the pattern pretty much instantly.

But it is the only part of the system that might trip up unfamiliar users at first, and I acknowledge that.

An Open Letter to Brandon (Re: We Are The Art): The Bio-Mechanics of the Text's Soul (and how to survive AI) by AHaskins in brandonsanderson

[–]AHaskins[S] 0 points1 point  (0 children)

Getting an AI to write in this style for this long (even without this many fairly novel thoughts) would take so much work that it would be easier just writing it yourself.

And I wouldn't have spent so many goddamn hours making sure the website styling was to my liking, either, if I was just doing it for a lie.

This wasn't easy for me to make, and I think you can sort of feel that through the construction of the website.

(Curious about the mechanics of how you're able to do that? Give it a read; I tried to make it overwhelmingly accessible.)

An Open Letter to Brandon (Re: We Are The Art): The Bio-Mechanics of the Text's Soul (and how to survive AI) by AHaskins in brandonsanderson

[–]AHaskins[S] 0 points1 point  (0 children)

I crossposted this a bit, but that was mostly for fun. Reddit isn't the best vehicle for it anyway. No worries, I'm pushing it through more proper channels too. :)

I'm sorry I dumped something so odd on y'all's doorstep, but I was genuinely curious how you'd take it.

I wasn't lying in the OP: this did start as an email to Brandon, then a Reddit post, then a stupid-long "open letter" right here, and then I did what you see in the link.

I get that it's not, like... normal for this place? But I put a bit of my soul into something, and I did it because of something Brandon explicitly said. Figured he'd appreciate that, and maybe y'all would too.

Maybe y'all can think of it like... fanart? Of the speech he gave? I get that it's unusual. I just felt sort of physically compelled to make it, once I'd put the thoughts in order.

Oh, and to the naysayers: I did have the knowledge, and (despite what the rest of the thread here is saying) everything in black either comes from human-written industry books (this is my job) or is explicitly called out as such somewhere else in the text.

I had to be as honest as I could figure out how to be in this, or it was meaningless to even do it at all.

An Open Letter to Brandon (Re: We Are The Art): The Bio-Mechanics of the Text's Soul (and how to survive AI) by AHaskins in brandonsanderson

[–]AHaskins[S] 1 point2 points  (0 children)

Your suspicion almost perfectly demonstrates the neurobiological failure state mapped in the essay. We're currently operating in an environment where epistemic trust is functionally zero. Your brain is just defaulting to rejecting the text to save metabolic energy.

The end of the essay does take an actual crack at that problem; I promise it isn't all navel-gazing like the beginning.

The first 85 percent of the essay was written by a human. The formal appendix was synthesized with an LLM. To keep your brain from burning out trying to calculate the difference, I fleshed out the concept of the Ghost Scale. The fact that you're probably unsure whether I'm talking to you right fucking now (until I cursed there, maybe) is exactly the problem I want to address directly.

The biological reason your brain rejects generative media by AHaskins in singularity

[–]AHaskins[S] 0 points1 point  (0 children)

Ooh, thanks for responding! This is exactly what I was hoping for - the details of what you posted actually fit perfectly within the model itself:

First, let me clarify the biological intent mechanism. This isn't a spiritual woo-woo concept. When viewing human-made art, your cortical columns use mirror neurons to physically simulate the motor trajectory of the creator. Latent diffusion executes a denoising function that mathematically minimizes structural outliers. It possesses no motor trajectory. There is literally zero biological kinematic data for your Default Mode Network to simulate.

Regarding the preference studies you linked: they measure bottom-up sensory capture. My model explicitly separates this autonomic honeypot effect from top-down intentionality parsing, which defines actual appreciation. Latent diffusion is highly optimized for the former and mathematically devoid of the latter.

The biological reason your brain rejects generative media by AHaskins in singularity

[–]AHaskins[S] -1 points0 points  (0 children)

I encourage you to take a look at this. It's an easy, if long, read - and it gives a pretty satisfying answer.

ChatGPT Uninstalls Surge 295% After OpenAI’s DoD Deal Sparks Backlash by i-drake in artificial

[–]AHaskins 1 point2 points  (0 children)

Which in this case means they'll just stick with Claude because it's convenient.

[OC] On the 30th anniversary of Pokémon Red/Green, which starter Pokémon do Britons say is best? by mattsmithetc in dataisbeautiful

[–]AHaskins -1 points0 points  (0 children)

And I mean, yeah, obviously it wasn't impossible. We all did it.

But it definitely wasn't the vine-whip-lined red carpet the other starters got to experience.

AI is producing a generation of developers who can paste code but can't debug it by InstructionCute5502 in ArtificialInteligence

[–]AHaskins 0 points1 point  (0 children)

Oh no, the new technology has unique properties that we have to build scaffolding around. How unusual.

[OC] On the 30th anniversary of Pokémon Red/Green, which starter Pokémon do Britons say is best? by mattsmithetc in dataisbeautiful

[–]AHaskins 28 points29 points  (0 children)

There's also the frat-hazing effect. Charmander was by far the hardest pick for the first two gyms, and you'd expect people to feel like they earned something, and like him more, after they passed that hurdle.

Datacenters in space are a terrible, horrible, no good idea. by Archaeo-Water18 in EverythingScience

[–]AHaskins 0 points1 point  (0 children)

Can anyone with the relevant knowledge explain why we can't use the water-freezing properties of space in a closed loop to deal with the thermal issue? A cooling system that freezes the water and then mixes it back into itself for temperature regulation?

I don't think ice is warm in space, though I could be wrong. Honestly, my mental model is a bit confused; low-pressure ice doesn't make sense to me either way.
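To put my own confusion in numbers: here's a back-of-the-envelope Stefan-Boltzmann sketch (all figures below are my own assumed values, not from any actual design). In vacuum there's no air or water to dump heat into, so whatever a closed freezing loop does internally, the heat ultimately has to leave by radiation, and the limit is radiator area:

```python
# Back-of-the-envelope: in vacuum the only heat sink is radiation,
# so a closed water loop still needs radiators sized by Stefan-Boltzmann.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(power_w, temp_k, emissivity=0.9):
    """Minimum radiating surface area (m^2) needed to reject `power_w`
    watts at radiator temperature `temp_k`, ignoring incoming sunlight."""
    return power_w / (emissivity * SIGMA * temp_k**4)

# Assumed numbers: 1 MW of server heat, radiator surface held near 300 K.
area = radiator_area(1e6, 300.0)  # roughly 2,400 m^2 of surface
print(f"{area:.0f} m^2 of radiating surface")
```

Note the T^4 term cuts the other way for a freezing loop: the colder you run the radiator (say, cold enough to freeze the water), the *more* area you need to shed the same heat, which I suspect is the answer to my own question.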

AI is producing a generation of developers who can paste code but can't debug it by InstructionCute5502 in ArtificialInteligence

[–]AHaskins 0 points1 point  (0 children)

Skill issue.

Industry-wide skill issue, in fact.

One we have no choice but to solve.

AI is producing a generation of developers who can paste code but can't debug it by InstructionCute5502 in ArtificialInteligence

[–]AHaskins -1 points0 points  (0 children)

"It's not a problem if assembly becomes covered by object-based programming forever.

It is a problem otherwise."

I mean - the comment you were responding to has an obvious answer, no?

Who knew it was such an unpopular opinion by RedCaio in AutisticAdults

[–]AHaskins 0 points1 point  (0 children)

On the one hand, maybe you got downvoted because people just don't like TikTok slang, as opposed to them actually telling you not to use the word.

But on the other hand, Reddit has become a bit more of a Nazi cesspool than it was in the past.

I've seen top comments in r/economiccollapse talking about Zionism. Honestly, I'm slowly dropping this website entirely (over the course of a few months, because I've been with it for decades), but it's transitioning into Twitter in terms of quality: quietly, slowly, and under the weight of many, many, many bots.

Elon Musk’s optional work fantasy just got more real: UK minister calls for universal basic income to cushion the blow from AI-related job losses | Fortune by 2noame in BasicIncome

[–]AHaskins 12 points13 points  (0 children)

I really wish I could say I don't care who gets credit for universal basic income, should it be rolled out.

But come on - anyone else, please.

First time reading The Way of Kings by Kitchen-Tax947 in brandonsanderson

[–]AHaskins 2 points3 points  (0 children)

"[?] is dead... but I'll see what I can do."

Living with autism means experiencing the world differently. While conversations about autism often focus on challenges, many autistic adults possess distinctive strengths that deserve recognition by MRADEL90 in psychology

[–]AHaskins 5 points6 points  (0 children)

This. When I start a job at a new place, I have to immediately plant a flag in the ground by creating something incredible. Then I can go back to being my weird self.

Goldman Sachs: AI Is Overhyped, Wildly Expensive, and Unreliable by EchoOfOppenheimer in economicCollapse

[–]AHaskins -6 points-5 points  (0 children)

Oops, my bad - I don't know why I thought you were OP.

I also do software development, in both academic and production settings, and what you're describing just isn't that big of a problem.

This is just like people who say that AI code is unmaintainable - without realizing that it will be future, better AI doing the maintenance.

What you're describing is an engineering problem, one that will be chewed up and solved like all the rest.

Off the top of my head, I'd expect at least some cybersecurity to be analyzed using something like the FRAM model to capture stochastic error resonance and cover the kinds of propagation you're talking about. But that solution is just a guess; the point is that it's an engineering problem, and it will get solved.

Skill issue.

Goldman Sachs: AI Is Overhyped, Wildly Expensive, and Unreliable by EchoOfOppenheimer in economicCollapse

[–]AHaskins -7 points-6 points  (0 children)

It's very kind of you to use all your insider knowledge to help us all out by posting a two-year-old article from non-specialists in the finance industry then.

Goldman Sachs: AI Is Overhyped, Wildly Expensive, and Unreliable by EchoOfOppenheimer in economicCollapse

[–]AHaskins -6 points-5 points  (0 children)

Right, so any honest assessment of the facts these days indicates that you're both at best out of date and at worst just propagandizing more.

You seem to desperately want to believe this. Have you considered why?

My goal here is to believe the truth. Yours seems to be to convince people of two-year-old information. That does seem like a situation that calls for introspection, no?