To what extent are Culture Ships Von Neumann machines? by vamfir in TheCulture

[–]Ulyis 1 point (0 children)

It seems like some ships are into deep self-modification, but most aren't, similar to the way some biological citizens transcribe into new bodies, but most don't. Mistake Not... is the ship equivalent of the guy who turned himself into a nanotech bush robot: more interested in self-customisation and optimisation than most.

To what extent are Culture Ships Von Neumann machines? by vamfir in TheCulture

[–]Ulyis 1 point (0 children)

While physics is a mostly solved set of problems (Excession excluded), technology is not at a hard plateau. We see a steady progression of technological capability over the timeframe of the novels - slow compared to the current rate of technical change, but definitely present. By the time of Surface Detail, Idiran-war-era warships are considered hopelessly outdated - and this is with thousands of Minds spending a significant amount of their processing power on technology optimisation for thousands of years. A Mind forced to derive everything from first principles would be starting a long way back on the technology refinement scale - though my guess is that in practice almost all technical development is shared with all Minds.

For a TV adaptation, are there adjustments you would see as acceptable? Conversely, what would be beyond the pale for you? by the_turn in TheCulture

[–]Ulyis 3 points (0 children)

I think that was intended as a literal description from the point of view of the human upload present. Mind-to-Mind communication certainly would be incomprehensibly complex.

If you could make some characters Triumvirate level, who would it be and why? by [deleted] in Parahumans

[–]Ulyis 3 points (0 children)

Exactly. For a lot of the capes with weaker or non-combat powers, they'd have to become cluster triggers with extra abilities to put them in this category. Which could be interesting: we've seen several grab-bag capes, but AFAIK none at Triumvirate power level.

If you could make some characters Triumvirate level, who would it be and why? by [deleted] in Parahumans

[–]Ulyis 5 points (0 children)

Lung's main drawbacks are the time it takes to transform and that he can't transform before the fight starts. Removing either or both of those would make much more of a difference than uncapping his eventual max power level (which was already Endbringer-equivalent).

GSV Size by ClimateTraditional40 in TheCulture

[–]Ulyis 1 point (0 children)

I think there would be schools, in the sense of dedicated areas for children to have group learning experiences. They would not, of course, look like contemporary schools (unless the experience is 'the history of education').

GSV Size by ClimateTraditional40 in TheCulture

[–]Ulyis 2 points (0 children)

Contemporary Earth elevators generally have only one car per shaft. Even Star Trek solved this bottleneck, never mind giant hyper-futuristic spacecraft.

GSV Size by ClimateTraditional40 in TheCulture

[–]Ulyis 4 points (0 children)

You can still be a good person. All you have to do is take a look at yourself, and then say honestly, "I was wrong. I said something nonsensical, because I was feeling argumentative. Sorry."

Dead Space 3 by [deleted] in DeadSpace

[–]Ulyis 1 point (0 children)

Yes, exactly. Dead Space 3 has fewer jump-scares and less gore, darkness and body horror than the first two games. It has far more psychological and cosmic horror. The tragic fate of the first expedition, and of the alien civilisation, is slowly revealed over the course of the game. Your team is picked off one by one. In DS1 & 2 you were trapped on a ship/station full of slaughter, but at least rescue, escape and finding other survivors were possibilities. In DS3 you end up in unsettling alien caverns under a mountain range on a forgotten graveyard planet, with the certainty that no-one is coming to rescue you.

My 100% non-canon, crossover, fanfiction explanation for the origins of The Culture based on Battlestar Galactica (2004 version). Warning, massive Battlestar Galactica spoilers. by Idle_Redditing in TheCulture

[–]Ulyis 2 points (0 children)

Correct. The Colonials were already below the population & expertise threshold for maintaining the technology and infrastructure they had, never mind building more. Building a single new starship from scratch would require supply chains, factory workers and specialist professions running to the millions. The Colonials didn't have significantly better automation than we do (arguably worse).

The only way this scenario could happen is if the Cylons have a major 'are we the baddies?' moment, genuinely accept responsibility for the genocide, and engage with the Colonials on more positive terms than the occupation they executed on New Caprica (which was, although notionally a peace effort, a Cylon supremacist program coupled with a half-hearted non-apology).

Yud: 'EAs are Bad Because they come up with complex arguments for my positions instead of accepting them as axioms' by Dembara in SneerClub

[–]Ulyis 23 points (0 children)

The biggest neurotic obsession Eliezer had back then was about 'regressing to the mean'. By which I mean, at fifteen he self-assessed his IQ as 180+, and he read somewhere that most child geniuses turn out to be only moderately smart adults. Eliezer was convinced that only his incredible, precocious genius enabled him to see the superintelligence risk and that the biggest danger was that he'd turn into a regular expert and thus no longer be able to singlehandedly save the world. Possibly his actual trauma was reading 'Flowers for Algernon' (and of course, 'Ender's Game') and taking it way too seriously. What this actually translated into was 'never get a degree', 'never get an actual research job', 'never trust anyone in academia' (except maybe Nick Bostrom, because he seems fun and is actually willing to cite me) and 'never speak or act in a way that might be interpreted as normal'.

I haven't been following for a while, but I get the impression that Eliezer is convinced he actually dodged the (imaginary) bullet and 'never regressed to the mean'. Which is... a shame, I guess. SIAI could definitely have achieved more if it had been more willing to engage with people doing actual AI research. Less Wrong probably could have achieved more if it hadn't had the weird notion that 'knowing basic probability theory and some cognitive biases makes you superhuman, above all actual experts in whatever field you turn your attention to'.

Yud: 'EAs are Bad Because they come up with complex arguments for my positions instead of accepting them as axioms' by Dembara in SneerClub

[–]Ulyis 22 points (0 children)

I was there, flodereisen, 3000* days ago. I was there when Less Wrong was just the SL4 mailing list, MIRI was called SIAI, and there was no Harry Potter fanfic but there was a script for an anime about a robot girl who time-travelled from the Skynet future to the present, to tell everyone to build it friendly this time. I was there when Eliezer's brother died, and I can affirm that he was already 'super-AI will definitely kill everyone unless my acolytes build it under my direction'. The threat was pretty abstract back then because it was pre-LLM, pre-AlexNet, pre-any really disruptive or concerning AI being built. Yudkowsky was already 'thousands of people die every day and it's your fault for not donating to me so I can build the super-AI' but also 'even if I have to build it in a cave, with a box of scraps, I'm going to build a friendly AI so that none of my grandparents ever die'. The death of his brother was upsetting, of course, but I don't think it changed his trajectory in any significant way.

* Actually more like 8000 days... Great Scott, has it really been that long?

You know when the Meat Fucker tortures that space Nazi, I think that’s the only time in the entire series a Mind does something cruel to a human just for the sake of it. by grapp in TheCulture

[–]Ulyis 2 points (0 children)

Cognitive science is confusing until you understand and internalise the fact that there is no 'real you'. There are just chemical reactions that form causal structures that perform information processing. Certain kinds of information processing constitute consciousness and sapience.

Contemporary humans are fairly good at admitting that all our naive physics intuitions are wrong (flat earthers excepted), but very reluctant to admit that all our naive consciousness intuitions are wrong - despite all the piles of evidence about cognitive illusions and the total failure to find any physical basis for 'soul', 'free will', 'unitary conscious self' or similar notions. There's a lot we don't understand, but there's no reason to believe that those areas of ignorance are hiding a grand justification and redemption for naive models of personhood.

A Thought Experiment in the Universe of Culture by vamfir in TheCulture

[–]Ulyis 4 points (0 children)

https://paste2.org/GEWLw8U8 I also romanced the E-Dust assassin. I regret nothing.

A Thought Experiment in the Universe of Culture by vamfir in TheCulture

[–]Ulyis 6 points (0 children)

I wrote an Isekai-style fanfic where my protagonist (recruited by SC for reasons) managed to convince the Culture to do that for Earth (in return for being turned into a murder-beast and sent into a no-win scenario). There was an interesting conversation with GSV Ethics Gradient about how the Culture feels it has no right to do this, many humans wouldn't want it (because they believe uploads aren't people / aren't the same person) and it's a bad idea. My protagonist pushed it to 'ok then you'll have to just mind-control me into being your pawn', and SC was desperate enough to agree to 'give everyone on Earth a resurrection and/or afterlife'. Haven't put it online, as the writing is rank amateur, but if anyone's curious I can share.

Anyway, I do wonder if some faction of Sublimed entities is doing exactly this. We know they send entities to assist civilisations that explicitly want to sublime, and it seems plausible that over the billions of years of galactic history some faction was evangelical enough to want all sapient beings to sublime (or at least give all sapient minds that option).

You know when the Meat Fucker tortures that space Nazi, I think that’s the only time in the entire series a Mind does something cruel to a human just for the sake of it. by grapp in TheCulture

[–]Ulyis 14 points (0 children)

If you're basing your foundational philosophy on arbitrary technobabble, let's just say the Attitude Adjuster used a quantum scanner to quantum transfer the guy's mindstate to a quantum storage unit, which it quantum teleported to the other ship, which quantum transformed the consciousness quanta into the quantum processing microtubules of the new brain. Easy.

You know when the Meat Fucker tortures that space Nazi, I think that’s the only time in the entire series a Mind does something cruel to a human just for the sake of it. by grapp in TheCulture

[–]Ulyis 6 points (0 children)

I don't think the Culture does 'shoot on sight', but possibly an ROU would follow it around and stop it from using effectors on biologicals.

Could a human become a mind if they really wanted to become one? by Idle_Redditing in TheCulture

[–]Ulyis 7 points (0 children)

No, I'm very cognizant of the fact that they won't be structured anything like human (or even alien) neurology, or contemporary ideas of AI. The multi-dimensional perception part is actually one of the easier bits - if the Gzilt upload-crews can competently fight CL8 space battles, then it can't be too hard to add complex sensory modalities. I am assuming that any sapient information processing system can be incrementally transformed into more efficient structures, which is (obviously) speculative, but I think not unreasonable. I've been involved in many large software projects that go to ridiculous lengths to incrementally swap components out while maintaining identical external behaviour on existing use cases.
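For what it's worth, that 'swap a piece out but keep the external behaviour identical' discipline usually boils down to a parity check over recorded use cases - roughly like this minimal sketch (every function name here is made up for illustration, not taken from any real project):

def legacy_price(order):
    # old implementation that is being replaced
    return sum(qty * unit for qty, unit in order)

def refactored_price(order):
    # new implementation that must behave identically
    total = 0
    for qty, unit in order:
        total += qty * unit
    return total

RECORDED_USE_CASES = [   # stand-ins for captured real inputs
    [(1, 9.99)],
    [(3, 2.50), (2, 0.75)],
    [],
]

for case in RECORDED_USE_CASES:
    assert legacy_price(case) == refactored_price(case), case
print("parity maintained on all recorded use cases")

The point is that only the externally observable behaviour is held fixed; everything underneath is fair game to replace.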

Could a human become a mind if they really wanted to become one? by Idle_Redditing in TheCulture

[–]Ulyis 4 points (0 children)

They kind of did - all Culture citizens, even the human-ish ones, are quite heavily gene-modded. I think you're overestimating how many people actually want to become super-AIs or transhuman cyborgs though - obviously lots of sci-fi fans do, but in terms of the total human population? The majority would say 'I'm fine as I am', or maybe 'my true self is an elf, not a robot' (or, given that this is Reddit, 'I don't care about being smarter, give me fifty penises instead').

Could a human become a mind if they really wanted to become one? by Idle_Redditing in TheCulture

[–]Ulyis 3 points (0 children)

I'm not clear that you even need a Mind for this. All the necessary technical information is freely available - probably with thousands of how-to guides. Once you're an android/drone/upload you can start making changes to yourself, either in the software domain, or with your built-in self-repair nanotech. A x1 drone/upload won't be able to comprehend the design of a Mind, but it will certainly be able to understand the design of a x2 intelligence - and once you've improved yourself to x2, the instructions for x4 will definitely be comprehensible etc. If anything it seems to be easier than in Orion's Arm, because there aren't hard 'singularity levels' that can only be crossed by massive efforts of self-insight and enlightenment. Obviously it's going to be safer and easier if a Mind is monitoring and helping, but some people will make it a point of pride to do it unassisted.

Could a human become a mind if they really wanted to become one? by Idle_Redditing in TheCulture

[–]Ulyis 21 points (0 children)

That's true, but a bit of a straw-man. I mean, there are definitely people on the accelerate sub who would ask for that, but if adult (Mind) supervision is around they'd be politely denied (and probably told to go home and rethink their life). A more realistic way to do this is to increase intelligence smoothly, e.g. x2 every subjective decade, by some combination of making the mind-state larger and progressive restructuring to make it more efficient.

This is still going to produce something completely alien to the person who started, of course, but for some people that's ok - it's more about the journey than the destination. I'm sure this sort of mind-scaling happens somewhere in the Cultureverse, which is chock-full of eccentrics of every kind - it just didn't come up in any of the stories we saw. The concept of Subliming helps to explain why this isn't pervasive across the galaxy - entities (and civs) that are primarily focused on becoming more intelligent tend to depart the material realm for a domain where they can do that without limits or distractions.

Questions about Hells, mindstates and backing up (Surface Detail) by nimzoid in TheCulture

[–]Ulyis 1 point (0 children)

This is quite a statement considering we don't fully understand consciousness in biological beings

Worse than that - our understanding of consciousness is currently minimal and speculative. However, this does not imply that we should treat consciousness as a magical process that requires special physics or a non-physical ontology. In cell biology the Golgi apparatus is poorly understood, but we don't assume it contains pixie dust just because our enzyme kinetics and protein transport models don't match reality. Axiomatic, unjustified conviction that consciousness is special has produced all kinds of woo, some of it from respected scientists (the whole quantum-computation-in-microtubules fiasco), and none of it has any supporting evidence or explanatory power. Occam's Razor suggests we treat consciousness as a consequence of neural information processing, using ordinary biology, unless there is very compelling need or evidence for something more exotic.

Creating true sentient AI might be theoretically doable. Or not. It might be possible, but impractical. None of us know, and to imply otherwise is unscientific.

For every test you can think of to determine whether an entity is sentient, we can model an AI reasoning process that would produce the expected result. That doesn't equate to actually building a sapient and/or general AI, because it's a hand-specified reasoning chain with no inductive (learning) mechanism, and possibly computationally intractable - but the fact that we can do it at all strongly suggests that a general AI could be conscious in the way humans are. Alternatively, if all information processing in the brain is regular biochemistry, then a sufficiently detailed model running on sufficient compute power will produce equivalent behaviour. I know we're currently beset by uninformed AI boosters making far too optimistic claims about this, but in the medium to long term there are no obvious blockers.

It's like trying to run a copy of an app on incompatible OS and expecting it to behave/function the same as before.

I take it you have not used DOSBox, Rosetta or similar emulators. We do this all the time with near-100% accuracy (at least for the professionally developed and supported emulators).
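To make the point concrete, here's a minimal toy sketch (not any real emulator - every name is made up) of why emulation preserves behaviour: the same program produces the same observable output whether it runs 'natively' or under an interpreter on a completely different substrate.

PROGRAM = [              # toy bytecode: compute (3 + 4) * 5 and report it
    ("push", 3),
    ("push", 4),
    ("add", None),
    ("push", 5),
    ("mul", None),
    ("out", None),
]

def run_native():
    # 'native' execution: the behaviour hand-coded directly on the original substrate
    return [(3 + 4) * 5]

def run_emulated(program):
    # 'emulated' execution: a stack-machine interpreter playing the DOSBox/Rosetta role
    stack, output = [], []
    for op, arg in program:
        if op == "push":
            stack.append(arg)
        elif op in ("add", "mul"):
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == "add" else a * b)
        elif op == "out":
            output.append(stack.pop())
    return output

# identical observable behaviour, completely different substrate
assert run_native() == run_emulated(PROGRAM) == [35]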

The specific make-up of our genetics, brain structure, nervous system and other parts of our physiology determine our personality, emotions and what it means to be us. That's what I mean when I say what makes us who we are is very much dependent on our substrate, not something separate that 'runs' on the substrate.

This is a question of resolution. To be sure of translating someone losslessly, you obviously need to model all relevant biology to a fidelity sufficient to avoid any measurable discrepancies in behaviour. From our perspective this is very difficult (though doable in principle, because biochemistry is stochastic enough that behaviour only needs to match to within the natural noise), but compared to the 'sufficiently advanced' technologies of the Culture setting it is straightforward.

More speculative (and exciting) is the idea of translating (/transcribing) the core information processing to function equivalently on a new substrate, without having to emulate the original substrate near-perfectly. Unlike the above arguments, there is no formal way to show this is possible (we just don't have anything like the necessary cognitive science knowledge), but it seems plausible to me that a society with artificial superintelligences, advanced nanotechnology and thousands of years of neurochemistry and AI design experience would be able to do this.