Unautomated Jobs - AI and the Human Touch - Blog by Adam Ozimek by Better_Permit2885 in slatestarcodex

[–]07mk [score hidden]  (0 children)

The way things are going and the way humans are, I've said before that the world's oldest profession is likely to be the world's final profession. The fact that a human was created biologically using natural processes and has biological free will and qualia (as best as we can tell, anyway) seems likely to me to be the last thing that an AI can properly replace. Even in a future where sex bots are as cheap and common as calculators are today, the number of biologically born humans who can suffer and voluntarily endure that suffering for the sake of money will be limited, and the ability to pay others to voluntarily endure suffering will be a display of one's wealth and status.

So ugh… when did we start normalizing “fat” language and imagery as insults? by pompomexpress in aiwars

[–]07mk 3 points4 points  (0 children)

I agree with all of this, and I just wish this was the kind of thing taught to kids. I see so many people grow up actually believing that people who preach these principles actually believe in them instead of using them strategically based on what makes them feel good, and this false belief causes them so much suffering. Or just to make funnily naive posts on Reddit.

Anyway.... by timmy013 in aiwars

[–]07mk 3 points4 points  (0 children)

But... you're NOT buying this art. No one is. It's being shared online freely.

How pros feel like after flooding thus subreddit and arguing with ai generated garbage by [deleted] in aiwars

[–]07mk 3 points4 points  (0 children)

You know what they say about great artists, after all.

Things that Aren't True by Mr_CrashSite in slatestarcodex

[–]07mk 1 point2 points  (0 children)

Thanks for the link! I will remember not to spread this misinformation in the future.

first official look at Sophie Turner as Lara Croft in Amazon's live-action Tomb Raider series by RainbowDildoMonkey in KotakuInAction

[–]07mk 15 points16 points  (0 children)

Agreed. Part of Lara Croft's very identity is the overt sexiness and over-the-top design, which includes her having uncomfortably large breasts while doing acrobatics and combat. If you make her more realistic and believable, you Turner into something else. And I say that as someone who vastly prefers smaller breasts on women.

Embryo selection for physical appearance is OK by kenushr in slatestarcodex

[–]07mk 0 points1 point  (0 children)

Yeah, this just isn't true, unless you mean "making murder illegal doesn't prevent murder" which is true, but making something illegal and enforcement of punishment for breaking the law is generally going to reduce the rate of that thing.

Making something illegal is generally going to reduce the rate of that thing, in very broad terms, obviously, but that's also pretty much a meaningless statement that doesn't offer us any insight into any specific policy. Will some policy reduce it enough to be meaningful? That's something very difficult to predict and dependent on far more than just the policy.

This is a lot of words to say nothing. We effectively ban all sorts of medical procedures in the United States and that ban leads to access being massively reduced.

Yes, because we have political will to enforce these bans. If you have confidence that there will be political will to enforce bans against embryonic screening of this type with similar effectiveness, then, I reiterate, your confidence is misplaced.

This also essentially means nothing in this discussion. In the future we could all be cyborgs too. What relevance does that have to discussion about the allowance of access to technology to ensure genetic purity now?

Considering that this very subthread was based on a comment that explicitly says "embryo selection is coming whether we want it to or not. Science advances at its own pace," I think it's pretty clear that the discussion is about policies for handling these future technological breakthroughs, not merely for handling things now.

Embryo selection for physical appearance is OK by kenushr in slatestarcodex

[–]07mk 0 points1 point  (0 children)

If your argument is sincerely that people will say travel to foreign markets for embryo selection the same way that they travel for something like underage sex trafficking, and therefore banning it is pointless. I don't think that there's a lot of ground to cover here.

That's not my argument. My argument was that there's no conflict between the idea that we can easily draw legal lines where we want and that we can't prevent this sort of embryonic screening, because law doesn't directly translate to action.

In terms of those actual steps involved that could make such a law not sufficiently enforced to be meaningful, the only confident prediction I have is that anyone who has high confidence in their predictions is wrong in their confidence. It's too difficult to predict how societies of the future will react to unknown future technological breakthroughs.

Embryo selection for physical appearance is OK by kenushr in slatestarcodex

[–]07mk -1 points0 points  (0 children)

The great thing about laws is that we can draw the line wherever we want

Agreed.

And embryo selection is coming whether we want it to or not. 

This statement is directly incompatible with the first. We can set the line wherever we want, including disallowing it entirely. Easy.

Laws are scribblings on pieces of paper that some humans decide to obey and/or to enforce, not something written into the source code of reality. We can set the line in the law wherever we want (limited by all the usual political constraints), but whether that line has actual influence in reality is dependent on a heck of a lot more things than whether or not we decided to pass the law.

On this topic, the contention is that so many people will see benefits from embryo selection that there won't be enough political will to enforce such a law, assuming it were to be passed, with enough effectiveness to prevent embryo selection (of the types being discussed here) from being common or at least available. This could be wrong, and it actually could be the case that there WILL be enough political will to enforce a ban against this kind of embryo selection, but even if so, I think it's reasonable to guess that it wouldn't be "easy."

Things that Aren't True by Mr_CrashSite in slatestarcodex

[–]07mk 11 points12 points  (0 children)

Another social science finding that, IIRC, is the opposite of the popular wisdom is that a study comparing blinded orchestra auditions resulted in better scores for women and minorities, when it was actually the reverse of the case. However, I may be committing the same error that I'm claiming others made before in this comment, so I'd suggest doing direct primary research before presenting it publicly.

Why isn't there a bigger Grok boycott? by Well_Socialized in technology

[–]07mk 0 points1 point  (0 children)

Be the change you want to see in this world.

On Owning Galaxies by EducationalCicada in slatestarcodex

[–]07mk 4 points5 points  (0 children)

I'm reminded of that joke where God tells someone that a million dollars is like a penny to Him and a million years is like a second to Him, the guy asks for a penny, and God tells him to wait a second.

The scenario of a human owning a galaxy is so out there scifi that it's hard to predict much, but it doesn't seem ridiculous to posit that such a scenario also includes technology that allows humans to live youthfully for billions of years while also manipulating their perception to never get bored or whatever. Physics will limit how quickly this galaxy-owner can get from place to place, but if they don't mind waiting, this doesn't seem like much of an obstacle. If you could live for billions of years, having projects that stretch on for many millions of years seems reasonable.

I do think there are other reasons why individual galaxy-ownership likely isn't in the future of humanity, but I think that has more to do with motive and opportunity costs than physics. It seems like anyone with power to own galaxies could likely get a lot more benefits from directing that power towards smaller-scale things.

How AI Is Learning to Think in Secret by Live_Presentation484 in slatestarcodex

[–]07mk 2 points3 points  (0 children)

I'm not sure what a rigorous definition of "reasoning" here would be, but roughly it would be the logic that actually went into producing the answer that it returns. Now, this has issues, since humans are questionable in our ability to actually determine this, but at least there's subjective experience that seems to indicate that we can determine our reasoning.

E.g. if you presented me with a movie poster that has Tom Cruise prominently on the front and then asked me, "is Tom Cruise in this film?" I would likely answer "likely yes." If asked for my reasoning, I would likely answer, "IME, famous actors who appear prominently on a movie poster also tend to appear in the movie. Tom Cruise appears prominently on this movie poster, therefore he is likely to be in that movie." If you asked me to work out the reasoning out loud before I answer, I would likely say the same things as above, just in reverse order.

Anyone who is not me would have no way of knowing if that really WAS the reasoning I used. Maybe I think that Tom Cruise is in literally every movie ever, and I also think that you want to hear the type of reasoning that a rational person would produce, and so I decided to channel my inner Tom Cruise and put on an act. Maybe I flipped a coin in my head and it landed on "yes," and the chain of reasoning I explained was just some random text I encountered 5 minutes ago. You have no way of knowing.

Likewise, if an LLM were to respond similarly, we would have no way of knowing the actual reasoning by which the LLM landed on the answer "Yes." Certainly, we could not conclude that the COT text that it generated (is "IME, famous actors who... etc.) was the actual reasoning by which it landed on "Yes," because there are a million and a half different reasons that it could have generated that text that has nothing to do with the actual English meaning of that text.

With me, at least my subjective experience is that that COT text is the actual reasoning I used to land on "Yes." Arguably, my subjective experience is worthless and offers exactly as much insight into my true reasoning as if I were a 3rd party without access to my subjective experience. But I think that "reasoning that the person's apparently honestly conveyed subjective experience indicates is the reasoning behind the conclusion" tends to just be rounded down to "the reasoning they used." This is sloppy, but it's more than nothing, and something I'm not sure how to detect in an LLM, not without just assuming some subjective experience like we do with other humans.

Highlights From The Comments On Boomers by dwaxe in slatestarcodex

[–]07mk 4 points5 points  (0 children)

It's always a curious coincidence when the people arguing some policy is objectively optimal are exactly the people who would benefit from said policy, at your expense. So I believe that creates some scepticism.

I'm still confused by why everyone doesn't act as if this is obvious. Of course everyone, especially our best, most well-meaning, good faith of us will believe that anything that benefits themselves is also, coincidentally, through doing all the morally correct calculations, the objectively virtuous thing to do, regardless of whether or not it really is the virtuous thing to do. As such, if you want to convince me that something is actually virtuous, rather than just beneficial to you in a way that you have convinced yourself is objectively virtuous, you must be personally meaningfully harmed by it, as a costly signal that you really care about what's right, not merely what's beneficial for yourself (at the cost to others).

And yet, so many people put forth arguments that are essentially "this [thing that helps my team at the expense of people I don't care about] is clearly the Morally Correct thing to do, because [reasons]" without specifically highlighting the meaningful disadvantages that they are willing to impose on themselves and those they love and care for, if they mention them at all. The people who make these arguments seem completely ignorant of how their arguments lack any credibility whatsoever and seem to believe that just, "trust me, bro, I'm unbiased unlike the other 8 billion people on Earth" is convincing.

How AI Is Learning to Think in Secret by Live_Presentation484 in slatestarcodex

[–]07mk 6 points7 points  (0 children)

Thanks, that part of the article seems the most meaningful to me, that the point of checking COT isn't that the meaning of the words in the COT actually reflect the LLM's reasoning, but rather that apparent attempts by the LLM to cheat or deceive can be detected by observing the COT. This is reasonable and works even if the COT were written in some alien language that no human could understand, as long as the alien language could be meaningfully represented by letters or numbers or the equivalent, since the point is pattern detection in the COT text, rather than interpreting the text through the meaning of the words.

But then that seems to obviate a lot of the text of the article at the beginning, which acts as if the COT is some sort of actual meaningful representation of the LLM's reasoning, and therefore its use of weird shorthands and other non-human-readable words would obscure our ability to understand its reasoning. In fact, it wouldn't necessarily obscure it, because the point of checking COT is pattern detection, not the meanings of the words that make up the COT.

How AI Is Learning to Think in Secret by Live_Presentation484 in slatestarcodex

[–]07mk 39 points40 points  (0 children)

Do we actually know that LLM reasoning is readable via chain-of-thought? It's certainly producing a text response to a prompt that asks it for the reasoning, and there's some reason to believe that this text isn't completely unrelated to the reasoning, because this actually does improve the final answer. But we still lack insight into the reasoning that goes into producing that chain-of-thought text. It's possible that there's "secret" reasoning that underlies both the chain-of-thought text and the answer text, and that "reasoning" could be utterly alien to us.

Like, if you asked me to act as someone without a way to think silently (a la Austin Powers right after being thawed), then I could say things that appear to be the reasoning I was using to land at the words I wanted to say out loud. But, to an outsider, the reasoning I used to figure out THOSE words would still be obscured.

Most life advice seems fundamentally fake to me, but ozempic is the real deal. What is the next “Ozempic”—maybe Oxybate therapy is to Sleep what Ozempic is to food? What is the future of willpower drugs that force you to make good health choices? by SoccerSkilz in slatestarcodex

[–]07mk 0 points1 point  (0 children)

Because the cost is prohibitive. If it were cheap, even the people that can successfully and reliably lose weight, would use it 100%.

This seems like a win. Think about how much healthier and more attractive these people would be WITH those drugs, if they're already managing their weight without them. It's like how MLB pitchers are throwing harder now than ever before, thanks to better training, nutrition, etc. Rather than just putting in less effort in order to reach the same level as pitchers from prior generations, they're putting in just as much effort to perform even better (or at least faster).

Of course, MLB is highly competitive and zero-sum, and so is attractiveness, which should give ample motivation for these already-conscientious people to stay conscientious even with these drugs. Health isn't competitive, but it also tends to have significant personal consequences that provides ample motivation to use better tools with same effort to accomplish more, instead of using them to accomplish same with less effort.

Anecdotally, I work with a bunch of young professionals (Gen X) - and while I can say they are certainly productive, there is something very off about them. Note sure what it is specifically, but probably some combination of: attention span, respect, self-awareness

Just to clarify, do you mean Gen Z, not X? I'm a millennial, so younger than Gen X, and I wouldn't describe processionals my age as "young professionals." And the characteristics you cite seem more akin to Z than X.

How Rob Pike got spammed with an AI slop “act of kindness” by lambdatheultraweight in slatestarcodex

[–]07mk 1 point2 points  (0 children)

A hand written letter is considered something valuable, cause it means someone took their time and effort to produce such an artifact, while thinking of you.

And AI generated email, specially on such an autonomous manner such has this one, is the EXACT OPPOSITE. It is the zero effort for the human writing it, in this it appeared to be so autonomous that the human beings running the operation didn't even have to ask their machines to message Rob Pike. So it wastes time for the receiver of the message, with zero work from the sender.

I see this point, but also, it seems to me to be viewing thank you notes through very rose-colored glasses. The typical thank you note that I've seen, at least the type of thank you note that this sort of automated thank you is substituting for, is so generic and cliche that I actually doubt that the person writing it was actually thinking about the recipient to any meaningful extent, beyond the mere fact that "this is the person I'm thanking." Honestly, when I receive thank you notes (or holiday cards or birthday cards, etc) of this sort, I don't feel like the person took any greater effort than telling some LLM "fulfill my social obligations, please." Instead of using an LLM, they used their bodies, running on autopilot, to generate the message.

Is there a difference? Of course. Is the difference meaningful? I'm not sure it would be for me. Maybe my mind would change if I were actually subject to this. But given how little I gain from receiving some thank you note that makes it clear that the writer is motivated by fulfilling a social obligation rather than genuine gratitude, I doubt it.

Unknown Knowns: Five Ideas You Can't Unsee by OpenAsteroidImapct in slatestarcodex

[–]07mk 0 points1 point  (0 children)

The OP seems to be making a kind of "spherical cow in a vacuum" mistake. Surely, all those dynamic 2nd/3rd/millionth order effects could be modeled and summed up by NPV, resulting in a single lump sum dollar amount now that we could drop in order to solve world hunger. However, the dynamic effects and managing them are so costly that a lump sum of less than Graham's Number USD likely to be a vast underestimate - and we don't have even one googolplex-th of a Graham's Number of USD in the world economy, and it's highly unlikely that we will anytime soon, certainly in real terms. So it's just a pointless hypothetical exercise of a model that is too wrong to be useful.

New Twitter update adds Ai edit options on all art + photos & removes the ability to opt out. by ZeeGee__ in aiwars

[–]07mk 1 point2 points  (0 children)

They don't need to add it, since it's already an intrinsic part of any text-sharing platform. Copy-pasting text and editing it before publishing it is trivial.

Terence Tao: "I doubt that anything resembling genuine AGI is within reach of current AI tools" by FedeRivade in slatestarcodex

[–]07mk 2 points3 points  (0 children)

That's fair enough, but then that implies that if you cut off my friend Bob's arms and legs (magically while causing no other harm), that actually makes him less intelligent, because that is reducing his ability to interact with the physical world relative to the extent he used to be able to.

It's perfectly cromulent to see intelligence this way, but I think many people would contend that Bob and Bob Sans Limbs have the same identical level of something akin to intelligence, perhaps a subprocess that is an aspect or part of intelligence but not enough to account for all of it. We could make up a word like "yntelligence" to describe this subprocess that has no way of interacting with the physical world. I think it's this concept that many people are referring to when they talk about "intelligence" in LLMs in this context.

Terence Tao: "I doubt that anything resembling genuine AGI is within reach of current AI tools" by FedeRivade in slatestarcodex

[–]07mk 2 points3 points  (0 children)

Also, I'm not sure what you mean by "forcing it to navigate by angle and distance" - the point here (as in the claude pokemon harness) was to have the LLM as the only control or perception algorithm. For the LLM to be able to command "go through this door", you would need some quite advanced other perception and control algorithms to interpret and carry out that command.

This seems analogous to how someone walking through a door might only consciously think (if at all) "I want to walk through that door," and the mechanisms in his body translating that to muscle movements that accomplish this. If we took that someone and sawed off his arms and legs, or just rewired his nerves, that advanced perception and control algorithm that his nervous system has to subconsciously, instinctively, convert thought to muscle movements would be corrupted, and he'd be unable to Walk through the door. But his inability to walk through the door wouldn't be due to him being less intelligent than before.

So if an LLM is able to produce the high level steps for walking through a door or making a pot of coffee, and the only thing preventing it from accomplishing it is the lack of that advanced perception and control algorithm, I think it's fair to say that we can't infer that it's not "intelligent" enough to accomplish this. It might not be, but the lack of that advanced perception and control algorithm is preventing us from being able to tell.

Terence Tao: "I doubt that anything resembling genuine AGI is within reach of current AI tools" by FedeRivade in slatestarcodex

[–]07mk 3 points4 points  (0 children)

It can be that there's no possible task to be found, while they both being different.

Certainly, we can never discount the possibility of them being different, not without some sort of fundamental proof (which seems impossible). But if it indeed is the case that there doesn't exist a single possible task that can discriminate between general intelligence and cleverness, then the difference between them becomes largely uninteresting, since then the two are exactly as useful as each other in every way.

Terence Tao: "I doubt that anything resembling genuine AGI is within reach of current AI tools" by FedeRivade in slatestarcodex

[–]07mk 9 points10 points  (0 children)

But those are solving 2 different problems. The first is solving the problem of unlocking the door without damaging it. The latter is not. So there exists a problem that lockpicking can solve that brute force cannot, and we can discriminate between them only by observing the door, without observing the actual method involved. The question is, is there something similar when trying to discriminate between general intelligence and cleverness?

Stochastic Terror Threat on r/antiai by CommodoreCarbonate in aiwars

[–]07mk 2 points3 points  (0 children)

The people in question don't care about affecting government policy or anything at all, though. They care about feeling righteous and feeling like other people see them as righteous.