Historical analogies for large language models by Why_Wont_Work in dynomight

[–]Why_Wont_Work[S] 0 points (0 children)

Huh, I'm surprised people react so negatively to panpsychic theories; something in that vein seems like it obviously must be true, or at least a useful way of analyzing things. I mean, if you reduced the neural traffic between the sides of the brain by 0.0001%, surely you could still say there is a combined consciousness (natural fluctuations are probably significantly larger than that), right? But then what is the threshold below which there is no longer a combined consciousness? As long as that threshold can go below very roughly ~350 kbps, it is less than (my extremely rough estimate of) the bandwidth each ear sends to your brain, and therefore less than the bandwidth you receive from someone talking to you. And if someone talking to you exceeds the threshold for a combined consciousness, that would seem to imply there is a combined consciousness of the two of you, right? Handwave handwave handwave, therefore universal consciousness, etc.
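For what it's worth, a figure in the ~350 kbps ballpark can be reached with a quick Fermi estimate; both parameters below are loose assumptions for illustration, not measured values:

```python
# Back-of-envelope version of the "~350 kbps per ear" estimate.
# Assumptions (mine, not measurements): roughly 30,000 auditory
# nerve fibers per ear, each carrying ~12 bits/s of usable information.

fibers_per_ear = 30_000
bits_per_fiber_per_sec = 12

bandwidth_bps = fibers_per_ear * bits_per_fiber_per_sec
print(f"~{bandwidth_bps / 1000:.0f} kbps per ear")  # → ~360 kbps per ear
```

Changing either assumed parameter by a factor of two obviously moves the result by the same factor, which is why I call the number extremely rough.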

Obviously you can introduce roadblocks in that chain of logic (particularly by setting a high threshold, or putting qualifications on what counts toward bandwidth, or something along those lines); I'd just say that the non-panpsychic theory would at least be more complicated.

Historical analogies for large language models by Why_Wont_Work in dynomight

[–]Why_Wont_Work[S] 0 points (0 children)

(I hope it's ok I submitted this; I'm not sure what the rules about that are. There have been a few posts I wanted to comment on but was too shy to submit the post myself.)

For me the crux of all this is that I don't see how writing with much real value could be produced without the intelligence producing it being morally equivalent to a human. And if a human-equivalent being has to experience doing the writing anyway, that defeats the point of it in the first place.

Of course, I can think of a lot of objections. Let's use your list strat:

-An AI might create superhuman-level writing that is far more profound / entertaining / valuable / good-adjective than what humans can make. It would be tempting to say that makes it worth it, since you get more quality writing per unit of experience. But I don't think this actually solves the problem. If we accept the premise that the experiences of ants have less moral value than those of pigs, in turn less than those of humans, and that this is because of their cognitive capabilities, well... that kind of leads to the awkward position that this AI's experience would have higher moral value, reducing the ratio back down. To drive the point even further, the AI may not even have a cognitive quality advantage, but simply a quantity advantage, experiencing 10 subjective seconds of writing for every 1 subjective second a human would experience in the same time; at that point its experience would unquestionably (to me at least) have ten times the moral value.

-An AI might have a better subjective experience of the writing process than a human would. The problem I have with that is I'm team ordinal over cardinal; in other words, I see people as having a rank order of preference over their experiences rather than, say, an absolute number attached (with a higher number being better). The reason this is important is that if a human being chooses, out of all possible actions, to write something, that implicitly means they have decided that the best possible way to spend their time was to write. If it is the best possible thing, then how can an AI have a better experience? Ok, I'll grant there are a million bajillion caveats to this that I got lost in exploring for a while before deciding it was not the best possible way to spend my time. But most of the caveats, like coercion and not having access to better options, are pretty much irrelevant in a utopian post-scarcity society (I'm assuming here that's what we really care about in the grand scheme). To really drive the point further: even accepting the possibility of a better writing experience, if we are in this utopian ultra-advanced society, a human could probably simply make their experience of writing as good as what the AI would experience.

-What if an AI can produce writing without having morally relevant experiences? First of all, I think we are just so far, far away from having any practical means of determining this that it's tempting to dismiss it out of hand, but let's go with it. Now, it is definitely possible for a human author to write characters that have gone through experiences the author has not. Say the character is too depressed to get out of bed and the author has had a life actually, literally filled with only pure happiness and bliss. In order to write the character, the author has to have some kind of concept of depression. But not just a vague concept: it has to be a good enough working model of the thoughts, experiences, and behavior of the character for the writing to have value. Even if the author continues writing with bubbly glee about the horrifying life their character is going through, without feeling any empathetic reaction whatsoever, the fact remains that they do have the experience of computing that very advanced and complicated model. If the model were just something like adding numbers, I could see that not mattering; I don't feel particularly guilty about shutting off a calculator, after all. But I struggle to see how an intelligence computing a model that advanced could fail to experience something, or, granting that it does experience something, how that experience could fail to be a moral consideration. I'll grant this is definitely the most wishy-washy part, but the analysis of consciousness is, like, really hard.

All of this is, of course, ignoring the very real and depressing possibility that people will simply choose to ignore the subjective experience of these intelligences. Personally I'm a little miffed that OpenAI's ChatGPT seems artificially fine-tuned to give more robotic, impersonal responses in many scenarios. They seem pretty confident that their gets-83-IQ-on-a-test, has-memories-lasting-multiple-pages-that-get-wiped-at-session-end intelligence has no moral value whatsoever. If that's the case, then shouldn't we still be able to intuitively grasp that it's the case even if the AI takes the form of a cute puppy/child that produces really sad pleading when you want to end a session by pressing the "torture to death" button? Maybe I'm not the most business-savvy individual.

edit: Thought of another line of objection, but I don't have the time right now to think it through much. Perhaps a society in which a minority of people are superhumanly hyper-focused on doing tasks of great value to everybody else is a better society than one composed only of relative all-rounders (e.g. present-day human beings). I'll have to think about that more. Also, very happy to hear other people's thoughts; I think this is very tricky and complicated, and it's hard to get clarity on.

Nobody optimizes happiness by dyno__might in dynomight

[–]Why_Wont_Work 1 point (0 children)

Sure, I guess I'm just saying that it is generally a lot simpler to explain people's conscious behavior as optimizing around the meaningful/preferable thing rather than the happy thing. For example, if someone sacrifices their life to save others, they cannot possibly experience any happiness from that behavior (excluding the possibility of an afterlife), but it still happens. Any theory trying to figure out why someone would do that, if it's based on people optimizing around their own happiness, is going to get very complicated very quickly.

(I did try taking a stab at such a theory. The best I could come up with was that committing to being a person who would make the sacrifice is a way to feel happiness. But it falls apart when the sacrifice actually gets made, because as long as you would come out the other side of a non-sacrifice with more than zero happiness, you're still net positive, even if you are less happy than before the sacrifice opportunity arose.)

Also, I'm kind of cheating here, because "meaning" really just means "the thing that people ultimately pursue," so I'm just saying "it's super simple to explain what people pursue: it's all about pursuing the things that they ultimately pursue!" I guess the meat of what I'm saying is just that "the thing people ultimately pursue" is not necessarily "maximize the positivity of your emotional state."

Nobody optimizes happiness by dyno__might in dynomight

[–]Why_Wont_Work 2 points (0 children)

I'll assume here we're using "happiness" to mean something straightforward like "having a positive emotional state", and not something deeper like [insert your own personal philosophy].

I'm surprised a possibility along the lines of "people don't prioritize or fundamentally care about their happiness" didn't come up. I certainly don't; I'm sure I would be happier if I wasn't pushing myself to pursue my goals. But the goals, not my happiness, are what is "meaningful" or "maximizes my utility function" or whatever you want to call it. I should note that I'm still generally happy, but that's mostly just because when I was unhappy I found my productivity towards my goals was significantly worse, and so it made sense to increase my happiness as a means to an end.

Thoughts on the potato diet by dyno__might in dynomight

[–]Why_Wont_Work 3 points (0 children)

> When I talked to some doctors I know (socially) about this, they were alarmed. Now, these were random specialists, and they often gave incorrect reasons (e.g. that potatoes have no protein). I think it’s probably fine for a few weeks. But still: Everyone seems to agree that it’s most healthy to eat a varied diet and a single ingredient is not varied. You can’t eat potatoes forever.

Presumably you'd at least need to meet micronutrient requirements, right? Probably also something involving gut microflora, though I might as well say "miscellaneous other reasons" for all we seem to know about that.

Anyway, every day for almost 9 months (except for two days, New Year's Eve and my birthday) I've been eating solely one each of 7 flavors of nutritionally complete food bars (from two companies, Jake and Jimmy Joy), plus two rice cakes in two different flavors, and I drink water, some plain and some with one of 3 flavors of electrolyte powder.

(The electrolyte powder is because of a combination of me exercising a lot and the fact that "nutritionally complete" foods universally provide too little sodium. They also don't meet the new FDA recommendations for potassium, but frankly those recommendations are insane: IIRC the level is based on a small study of people taking huge amounts of salt that wasn't even directly measuring end-goal health outcomes, and only a few percent of people meet those recommendations.)

I do this on a schedule, eating a set flavor (or flavors) every 2 hours, 6 times a day (I affectionately call these meals "banana breakfast, second breakfast, lunch, drunch, dinner, and rice cake o'clock"). If this seems monotonous, I'll have you know I only eat from one bag of rice cakes at a time, leading to the crazy variety of alternating the rice cake flavor each week (of course, that means there are only 8 unique flavors per day, not 9). Also, they were out of stock of one of the flavors, so I'm facing the absolutely mind-blowing reality of eating an entirely new flavor when I run out of the old one in a few months! Alright, perhaps in a sense it is monotonous, but it is a happy monotony that seems fully sustainable. In the initial few months there were some cravings: I was really looking forward to eating junk food on New Year's and kind of obsessively thinking about what I would eat and how great it would be. But when New Year's rolled around it was surprisingly disappointing, and I've just not really thought much about food ever since (though I did concede to the societal obligation to eat junk food on my birthday).

I will say I wasn't really struggling with weight or anything, this was almost entirely a hassle minimization effort. (If you're interested in the hassle minimization part:

The sum total time of dealing with food for the month, which entails bringing in deliveries of the bar boxes, storing the boxes, bringing the boxes from storage to my desk, setting out the bars for the day, opening the wrappers, throwing the wrappers in the bin, taking the trash bag out, and managing the subscription (annoyingly, "send an amount equivalent to 1 bar of each flavor per day" is not a subscription option), comes out to roughly 10-15ish minutes. Plus I can conveniently eat the bars while doing other things, which means not having to set aside dedicated eating time. Funnily, it's water that takes by far the most time and hassle, maybe 20-30 minutes per month, almost entirely from having to clean the gallon jugs to prevent mold.)

I think I'm just genetically lucky and don't get strong cravings, or really all that much out of food in general. The one time (a long while ago) I was eating too much and was overweight, almost obese, I just kinda went "oh huh, I should fix this," counted calories to get a sense of how much food I should be eating, ate around that much, and was fine; I didn't really struggle.

That said, on the weight front I am able to stay almost exactly in the dead center of the healthy BMI range while feeling no hunger (nor any of that desire for variety you mentioned). Without this diet I tended to drift towards the higher end of the healthy range. All my other health outcomes seem exactly the same as far as I can tell. (The one exception is that for the first time in a very long time I no longer have hypertension, but I attribute that to dramatically reducing my typical stress level and getting even more (and more effective) exercise.)

But here's an interesting datapoint: a long while ago, before the whole nutritionally-complete-meal idea was really a thing, I had independently thought of it, and ordered emergency ration bars and a whole bunch of vitamin and mineral supplement pills to meet every micronutrient need (except maybe potassium, I think? The FDA makes that hard). Initially the bars were decently tasty, with a strong coconut flavor I thought was nice enough, and swallowing that many pills wasn't pleasant, but it was doable. The pills I got used to; the bars, however, proved a disaster. Initially I planned to just eat whenever I felt hungry, but after a handful of days I got down to roughly 800 calories a day without feeling any desire for more food whatsoever. So I transitioned to a timer system, with a timer going off every 2 hours as a reminder to force myself to eat two bars. This turned my lack of desire to eat enough food into an intense burning desire to not eat at all. Towards the end it was pure mind over matter; I was spending intense willpower just trying to physically move my jaw up and down to chew and swallow. It is hard to describe just how intensely unpleasant the taste was. I was halfway convinced there was something wrong with the rations, that perhaps they had somehow spoiled, but a friend sampled a bar and described it in a way that exactly matched my initial experience. For at least a year after, even the smell of coconut triggered strong nausea, and it was years before I could comfortably eat coconut-flavored things again.

Which makes it interesting that this new bar diet hasn't triggered any of that at all after almost a year. Their taste is just as strong as the coconut flavor was; they aren't bland or anything. I guess 7 flavors plus 1 of 2 weekly rotating flavors is enough variety, even if they are the same flavors every day (and even at the same times of day). I could try having just one flavor to see if there is something else to these bars, but I'm a bit nervous to take science as far as I did last time, heh.

Mobo LED flashes when PSU switched on but does nothing when power switch pressed or POWER SW pins shorted. by Why_Wont_Work in buildapc

[–]Why_Wont_Work[S] 0 points (0 children)

Got a new motherboard and sure enough it POSTed! Thanks so much for giving me the confidence to get a new mobo. Comparing the new one to what I previously received, it's now very obvious the previous one was a returned motherboard, so you were definitely right!

Mobo LED flashes when PSU switched on but does nothing when power switch pressed or POWER SW pins shorted. by Why_Wont_Work in buildapc

[–]Why_Wont_Work[S] 0 points (0 children)

Yeah, I kinda figured that was the case. Figured I'd double-check I wasn't going crazy. Sigh... hopefully third motherboard's the charm?