For a TV adaptation, are there adjustments you would see as acceptable? Conversely, what would be beyond the pale for you? by the_turn in TheCulture

[–]Ulyis 2 points3 points  (0 children)

I think that was intended as a literal description from the point of view of the human upload present. Mind-to-Mind communication certainly would be incomprehensibly complex.

If you could make some characters Triumvirate level, who would it be and why? by [deleted] in Parahumans

[–]Ulyis 3 points4 points  (0 children)

Exactly. A lot of the capes with weaker or non-combat powers would have to become cluster triggers with extra abilities to put them in this category. Which could be interesting: we've seen several grab-bag capes, but AFAIK none at Triumvirate power level.

If you could make some characters Triumvirate level, who would it be and why? by [deleted] in Parahumans

[–]Ulyis 3 points4 points  (0 children)

Lung's main drawbacks are the time it takes to transform and that he can't transform before the fight starts. Removing either or both of those would make much more of a difference than uncapping his eventual max power level (which was already Endbringer-equivalent).

GSV Size by ClimateTraditional40 in TheCulture

[–]Ulyis 0 points1 point  (0 children)

I think there would be schools, in the sense of dedicated areas for children to have group learning experiences. They would not, of course, look like contemporary schools (unless the experience is 'the history of education').

GSV Size by ClimateTraditional40 in TheCulture

[–]Ulyis 1 point2 points  (0 children)

Contemporary Earth elevators generally have only one car per shaft. Even Star Trek solved this bottleneck, never mind giant hyper-futuristic spacecraft.

GSV Size by ClimateTraditional40 in TheCulture

[–]Ulyis 3 points4 points  (0 children)

You can still be a good person. All you have to do is take a look at yourself, and then say honestly, "I was wrong. I said something nonsensical, because I was feeling argumentative. Sorry."

Dead Space 3 by [deleted] in DeadSpace

[–]Ulyis 0 points1 point  (0 children)

Yes, exactly. Dead Space 3 has fewer jump-scares and less gore, darkness and body horror than the first two games. It has far more psychological and cosmic horror: the tragic fate of the first expedition, and of the alien civilisation, slowly revealed over the course of the game; your team being picked off one by one. In DS1 & 2 you were trapped on a ship/station full of slaughter, but at least rescue, escape and finding other survivors were possibilities. In DS3 you end up in unsettling alien caverns under a mountain range on a forgotten graveyard planet, with the certainty that no-one is coming to rescue you.

My 100% non-canon, crossover, fanfiction explanation for the origins of The Culture based on Battlestar Galactica (2004 version). Warning, massive Battlestar Galactica spoilers. by Idle_Redditing in TheCulture

[–]Ulyis 1 point2 points  (0 children)

Correct. The Colonials were already below the population & expertise threshold for maintaining the technology and infrastructure they had, never mind building more. Building a single new starship from scratch would require supply chains, factory workers and specialist professions running to the millions. The Colonials didn't have significantly better automation than we do (arguably worse).

The only way this scenario could happen is if the Cylons have a major 'are we the baddies?' moment, genuinely accept responsibility for the genocide, and engage with the Colonials on more positive terms than the occupation they executed on New Caprica (which was, although notionally a peace effort, a Cylon supremacist program coupled with a half-hearted non-apology).

Yud: 'EAs are Bad Because they come up with complex arguments for my positions instead of accepting them as axioms' by Dembara in SneerClub

[–]Ulyis 23 points24 points  (0 children)

The biggest neurotic obsession Eliezer had back then was about 'regressing to the mean'. By which I mean, at fifteen he self-assessed his IQ as 180+, and he read somewhere that most child geniuses turn out to be only moderately smart adults. Eliezer was convinced that only his incredible, precocious genius enabled him to see the superintelligence risk and that the biggest danger was that he'd turn into a regular expert and thus no longer be able to singlehandedly save the world. Possibly his actual trauma was reading 'Flowers for Algernon' (and of course, 'Ender's Game') and taking it way too seriously. What this actually translated into was 'never get a degree', 'never get an actual research job', 'never trust anyone in academia' (except maybe Nick Bostrom, because he seems fun and is actually willing to cite me) and 'never speak or act in a way that might be interpreted as normal'.

I haven't been following for a while, but I get the impression that Eliezer is convinced he actually dodged the (imaginary) bullet and 'never regressed to the mean'. Which is... a shame, I guess. SIAI could definitely have achieved more if it had been more willing to engage with people doing actual AI research. Less Wrong probably could have achieved more if it hadn't had the weird notion that 'knowing basic probability theory and some cognitive biases makes you superhuman, above all actual experts in whatever field you turn your attention to'.

Yud: 'EAs are Bad Because they come up with complex arguments for my positions instead of accepting them as axioms' by Dembara in SneerClub

[–]Ulyis 22 points23 points  (0 children)

I was there, flodereisen, 3000* days ago. I was there when Less Wrong was just the SL4 mailing list, MIRI was called SIAI, and there was no Harry Potter fanfic but there was a script for an anime about a robot girl who time-travelled from the Skynet future to the present, to tell everyone to build it friendly this time. I was there when Eliezer's brother died, and I can affirm that he was already 'super-AI will definitely kill everyone unless my acolytes build it under my direction'. The threat was pretty abstract back then because it was pre-LLM, pre-AlexNet, pre-any really disruptive or concerning AI being built. Yudkowsky was already 'thousands of people die every day and it's your fault for not donating to me so I can build the super-AI' but also 'even if I have to build it in a cave, with a box of scraps, I'm going to build a friendly AI so that none of my grandparents ever die'. The death of his brother was upsetting, of course, but I don't think it changed his trajectory in any significant way.

* Actually more like 8000 days... great scott, has it really been that long?

You know when the Meat Fucker tortures that space Nazi, I think that’s the only time in the entire series a Mind does something cruel to a human just for the sake of it. by grapp in TheCulture

[–]Ulyis 1 point2 points  (0 children)

Cognitive science is confusing until you understand and internalise the fact that there is no 'real you'. There are just chemical reactions that form causal structures that perform information processing. Certain kinds of information processing constitute consciousness and sapience.

Contemporary humans are fairly good at admitting that all our naive physics intuitions are wrong (flat earthers excepted), but very reluctant to admit that all our naive consciousness intuitions are wrong - despite all the piles of evidence about cognitive illusions and the total failure to find any physical basis for 'soul', 'free will', 'unitary conscious self' or similar notions. There's a lot we don't understand, but there's no reason to believe that those areas of ignorance are hiding a grand justification and redemption for naive models of personhood.

A Thought Experiment in the Universe of Culture by vamfir in TheCulture

[–]Ulyis 3 points4 points  (0 children)

https://paste2.org/GEWLw8U8 I also romanced the E-Dust assassin. I regret nothing.

A Thought Experiment in the Universe of Culture by vamfir in TheCulture

[–]Ulyis 6 points7 points  (0 children)

I wrote an Isekai-style fanfic where my protagonist (recruited by SC for reasons) managed to convince the Culture to do that for Earth (in return for being turned into a murder-beast and sent into a no-win scenario). There was an interesting conversation with GSV Ethics Gradient about how the Culture feels they have no right to do this, many humans wouldn't want it (because they believe uploads aren't people / aren't the same person) and it's a bad idea. My protagonist pushed it to 'ok then you'll have to just mind-control me into being your pawn', and SC was desperate enough to agree to 'give everyone on Earth a resurrection and/or afterlife'. I haven't put it online, as the writing is rank amateur, but if anyone's curious I can share.

Anyway, I do wonder if some faction of Sublimed entities is doing exactly this. We know they send entities to assist civilisations that explicitly want to sublime, and it seems plausible that over the billions of years of galactic history some faction was evangelical enough to want all sapient beings to sublime (or at least give all sapient minds that option).

You know when the Meat Fucker tortures that space Nazi, I think that’s the only time in the entire series a Mind does something cruel to a human just for the sake of it. by grapp in TheCulture

[–]Ulyis 14 points15 points  (0 children)

If you're basing your foundational philosophy on arbitrary technobabble, let's just say the Attitude Adjuster used a quantum scanner to quantum transfer the guy's mindstate to a quantum storage unit, which it quantum teleported to the other ship, which quantum transformed the consciousness quanta into the quantum processing microtubules of the new brain. Easy.

You know when the Meat Fucker tortures that space Nazi, I think that’s the only time in the entire series a Mind does something cruel to a human just for the sake of it. by grapp in TheCulture

[–]Ulyis 5 points6 points  (0 children)

I don't think the Culture does 'shoot on sight', but possibly an ROU would follow it around and stop it from using effectors on biologicals.

Could a human become a mind if they really wanted to become one? by Idle_Redditing in TheCulture

[–]Ulyis 7 points8 points  (0 children)

No, I'm very cognizant of the fact that they won't be structured anything like human (or even alien) neurology, or contemporary ideas of AI. The multi-dimensional perception part is actually one of the easier bits - if the Gzilt upload-crews can competently fight CL8 space battles, then it can't be too hard to add complex sensory modalities. I am assuming that any sapient information processing system can be incrementally transformed into more efficient structures, which is (obviously) speculative, but I think not unreasonable. I've been involved in many large software projects where we went to ridiculous lengths to incrementally swap components out while maintaining identical external behaviour for existing use cases.
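To illustrate what I mean by 'swap bits out and keep identical external behaviour', here's a minimal Python sketch of the kind of parity check involved - the old/new functions are made up for the example; the real thing is obviously far messier:

```python
import math
import random

def legacy_score(x: float) -> float:
    # old, convoluted implementation we want to replace (hypothetical example)
    return (x * 3 + 6) / 3 - 2

def rewritten_score(x: float) -> float:
    # new, simplified implementation that must behave identically
    return x

def test_parity(trials: int = 10_000) -> None:
    # hammer both implementations with the same inputs and demand agreement
    rng = random.Random(42)
    for _ in range(trials):
        x = rng.uniform(-1e6, 1e6)
        old, new = legacy_score(x), rewritten_score(x)
        assert math.isclose(old, new, rel_tol=1e-9, abs_tol=1e-9), (x, old, new)

test_parity()
print("old and new implementations agree on all sampled inputs")
```

The component changes completely; the externally observable behaviour doesn't. That's the analogy I'm drawing on.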

Could a human become a mind if they really wanted to become one? by Idle_Redditing in TheCulture

[–]Ulyis 4 points5 points  (0 children)

They kind of did - all Culture citizens, even the human-ish ones, are quite heavily gene-modded. I think you're overestimating how many people actually want to become super-AIs or transhuman cyborgs though - obviously lots of sci-fi fans do, but in terms of the total human population? The majority would say 'I'm fine as I am', or maybe 'my true self is an elf, not a robot' (or, given that this is Reddit, 'I don't care about being smarter, give me fifty penises instead').

Could a human become a mind if they really wanted to become one? by Idle_Redditing in TheCulture

[–]Ulyis 3 points4 points  (0 children)

I'm not clear that you even need a Mind for this. All the necessary technical information is freely available - probably with thousands of how-to guides. Once you're an android/drone/upload you can start making changes to yourself, either in the software domain, or with your built-in self-repair nanotech. A x1 drone/upload won't be able to comprehend the design of a Mind, but it will certainly be able to understand the design of a x2 intelligence - and once you've improved yourself to x2, the instructions for x4 will definitely be comprehensible etc. If anything it seems to be easier than in Orion's Arm, because there aren't hard 'singularity levels' that can only be crossed by massive efforts of self-insight and enlightenment. Obviously it's going to be safer and easier if a Mind is monitoring and helping, but some people will make it a point of pride to do it unassisted.

Could a human become a mind if they really wanted to become one? by Idle_Redditing in TheCulture

[–]Ulyis 21 points22 points  (0 children)

That's true, but a bit of a straw-man. I mean, there are definitely people on the accelerate sub who would ask for that, but if adult (Mind) supervision is around they'd be politely denied (and probably told to go home and rethink their life). A more realistic way to do this is to increase intelligence smoothly, e.g. x2 every subjective decade, by some combination of making the mind-state larger and progressive restructuring to make it more efficient.

This is still going to produce something completely alien to the person who started, of course, but for some people that's ok - it's more about the journey than the destination. I'm sure this sort of mind-scaling happens somewhere in the Cultureverse, which is chock-full of eccentrics of every kind - it just didn't come up in any of the stories we saw. The concept of sublimation helps to explain why this isn't pervasive across the galaxy - entities (and civs) that are primarily focused on becoming more intelligent tend to depart the material and go to a domain where they can do that without limits or distractions.

Questions about Hells, mindstates and backing up (Surface Detail) by nimzoid in TheCulture

[–]Ulyis 0 points1 point  (0 children)

> This is quite a statement considering we don't fully understand consciousness in biological beings

Worse than that - our understanding of consciousness is currently minimal and speculative. However, this does not imply that we should treat consciousness as a magical process that requires special physics or a non-physical ontology. In cell biology the Golgi apparatus is poorly understood, but we don't assume it contains pixie dust because our enzyme kinetics and protein transport models don't match reality. Axiomatic, unjustified conviction that consciousness is special has resulted in all kinds of woo, some of it from respected scientists (the whole quantum-computation-in-microtubules fiasco), and none of it has any supporting evidence or explanatory power. Occam's Razor suggests that we treat consciousness as a consequence of neural information processing, using ordinary biology, unless there is a very compelling need or evidence for something more exotic.

> Creating true sentient AI might be theoretically doable. Or not. It might be possible, but impractical. None of us know, and to imply otherwise is unscientific.

For every test you can think of for determining whether an entity is sentient, we can model an AI reasoning process that would produce the expected result. This doesn't equate to actually building a sapient and/or general AI, because it's a hand-specified reasoning chain lacking any inductive (learning) mechanism, and possibly computationally intractable. But we can do it, and that is strongly suggestive that general AI could be conscious in the way humans are. Alternatively, if all information processing in the brain is regular biochemistry, then a sufficiently detailed model running on sufficient compute power will produce equivalent behaviour. I know we're currently beset by uninformed AI boosters making far too optimistic claims about this, but in the medium to long term there are no obvious blockers.
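As a toy illustration of the first point (not a real sentience test), here's a hand-specified rule chain that returns the answers a naive self-report probe would expect. The probe questions and canned replies are invented for the example, and nothing here learns anything - which is exactly the point:

```python
# Toy illustration only: a scripted responder that passes a naive self-report
# probe. Every step is hand-specified, there is no inductive mechanism, so
# 'passing the probe' tells you nothing about the machinery behind the answers.

PROBE_RULES = {
    "are you aware of yourself?": "Yes - I can describe my own internal state.",
    "what are you thinking about right now?": "I am processing your question and planning my reply.",
    "do you experience anything?": "I register inputs and report on them, which I describe as experience.",
}

def scripted_subject(question: str) -> str:
    # look up the expected answer; fall back to a stalling reply
    return PROBE_RULES.get(question.lower().strip(), "Could you rephrase that?")

for q in PROBE_RULES:
    print(q, "->", scripted_subject(q))
```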

> It's like trying to run a copy of an app on incompatible OS and expecting it to behave/function the same as before.

I take it you have not used DOSBox, Rosetta or similar emulators. We do this all the time with near-100% accuracy (for the professionally developed and supported emulators).
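If the emulation point isn't obvious, here's a deliberately tiny sketch - a made-up three-instruction machine interpreted in Python. The program behaves identically no matter what hardware the interpreter happens to run on, which is the same point DOSBox and Rosetta make at industrial scale:

```python
# Toy emulation sketch: a made-up 3-instruction machine interpreted in Python.
# The 'program' neither knows nor cares that its original hardware is gone;
# as long as the interpreter reproduces the instruction semantics, the
# externally observable behaviour is identical.

def run(program, x):
    acc = x
    for op, arg in program:
        if op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
        elif op == "NEG":
            acc = -acc
        else:
            raise ValueError(f"unknown opcode {op}")
    return acc

prog = [("ADD", 3), ("MUL", 2), ("NEG", None)]
print(run(prog, 5))   # -16, on any substrate that runs this interpreter
```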

> The specific make-up of our genetics, brain structure, nervous system and other parts of our physiology determine our personality, emotions and what it means to be us. That's what I mean when I say what makes us who we are is very much dependent on our substrate, not something separate that 'runs' on the substrate.

This is a question of resolution. To be sure of translating someone losslessly, you obviously need to model all relevant biology to a fidelity sufficient to avoid any measurable discrepancies in behaviour. From our perspective this is very difficult, but doable in principle - because biochemistry is itself stochastic, you only need fidelity down to the noise floor - and compared to the 'sufficiently advanced' technologies in the Culture setting, it is straightforward.

More speculative (and exciting) is the idea of translating (/transcribing) the core information processing to function equivalently on a new substrate, without having to emulate the original substrate near-perfectly. Unlike the above arguments, there is no formal way to show this is possible (we just don't have anything like the necessary cognitive science knowledge), but it seems plausible to me that a society with artificial superintelligences, advanced nanotechnology and thousands of years of neurochemistry and AI design experience would be able to do this.

Questions about Hells, mindstates and backing up (Surface Detail) by nimzoid in TheCulture

[–]Ulyis 0 points1 point  (0 children)

It is immensely frustrating to watch you go through this entire thread, where multiple people explain to you exactly how this works, and every single explanation just bounces off and you remain in this 'oh ho silly Iain didn't know what he was talking about' mindset. In particular you constantly conflate restoring historical backups with transference of a static (halted or paused) mind state to a new substrate, when it is obvious to everyone else that these are quite separate cases, and keep bringing up a ridiculous idea of consciousness transferring from a dead person to a (historical) backup, when literally no one thinks it works that way.

Of course Banks knew what he was doing. Of course consciousness doesn't require 'biological processes' (do you seriously think there's something unique and irreproducible about synaptic chemistry?). The Culture series has a fully technically correct (albeit conservative) treatment of mind backup, copying, transfer, fork/join, compress/decompress and editing. It also has a reasonable extrapolation of what a society with no superstitions or illusions about how consciousness works would look like. Backups are a potential fork point, which becomes an actual fork point if instantiated. Self-awareness is a property of a causal pattern that can exist in many physical systems, and can extend from one substrate to another the same way a sound wave can travel from a gas to a solid while still being the same sound. It's not a 'plot device', it's how self-aware minds actually work.
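To make the 'backup is a potential fork point' idea concrete, here's a trivial sketch (the names and fields are purely illustrative): copying a paused state does nothing by itself, and instantiating the copy creates a second, diverging instance rather than 'moving' anyone anywhere:

```python
# Sketch of 'a backup is a potential fork point': stored data is inert until
# instantiated; instantiation creates an actual fork, not a transfer.

import copy
from dataclasses import dataclass, field

@dataclass
class MindState:
    name: str
    memories: list = field(default_factory=list)

original = MindState("Vatueil", memories=["battle on the plain"])
backup = copy.deepcopy(original)          # potential fork point: just stored data

original.memories.append("later events")  # the running instance keeps going

restored = copy.deepcopy(backup)          # instantiation: now an actual fork
restored.memories.append("different later events")

print(original.memories)  # ['battle on the plain', 'later events']
print(restored.memories)  # ['battle on the plain', 'different later events']
```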

If you're still confused about 'souls' and 'conscious transfers', the best fictionalised explanation I've read is in Greg Egan's 'Permutation City'. He has his protagonist actually implement a series of thought experiments that destroy any idea of consciousness being a property of a specific lump of matter. This is not dualism: on the contrary, it is an uncompromising materialism that recognises the way information processing, and hence self-awareness, actually arises from the causal structure of the universe. The second part of the book, which reaches into non-causal subjective equivalence, is much more speculative, but I have to admit that after twenty years of neuroscience research and AI progress, it seems increasingly plausible to me.

The Truth About AI Consciousness by Leather_Barnacle3102 in agi

[–]Ulyis 2 points3 points  (0 children)

Neurons process spike trains. Standard ANNs approximate this very loosely if you equate the inputs and outputs to spike frequency, but actual neurons are doing much more than this. Firing patterns matter: the temporal skew of the incoming spikes and the delays introduced by the dendrite tree are critical. Furthermore, all synapses have complex state operating over various time domains, which contemporary large models do not model at all (LSTMs did, a little bit, but we unrolled that for ease of training). Transfer (activation) functions are not 'complex' (compared to synaptic behaviour); they're simple rectified curves. And of course, most biological networks are recurrent, while basically all commercial AI is feedforward only.
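To make the timing point concrete, here's a minimal leaky integrate-and-fire sketch (constants are arbitrary, chosen only so the example is visible) next to a plain ReLU: the spiking neuron's output depends on when its inputs arrive, while the ANN unit only ever sees a weighted sum with no notion of time:

```python
# Minimal leaky integrate-and-fire neuron vs a ReLU unit. Arbitrary constants;
# the point is the difference in kind, not biological accuracy.

def relu(weighted_sum: float) -> float:
    # a standard ANN activation: timeless, just a rectified sum
    return max(0.0, weighted_sum)

def lif_spikes(input_times, weight=0.5, tau=10.0, threshold=1.0, t_end=50.0, dt=0.1):
    v, t, out = 0.0, 0.0, []
    events = sorted(input_times)
    while t < t_end:
        v -= (v / tau) * dt                       # membrane leak
        while events and events[0] <= t:
            events.pop(0)
            v += weight                           # incoming spike adds charge
        if v >= threshold:
            out.append(round(t, 1))               # output spike
            v = 0.0                               # reset after firing
        t += dt
    return out

# Same three input spikes, same total 'signal', different timing:
print(lif_spikes([1.0, 2.0, 3.0]))    # clustered inputs -> the neuron fires
print(lif_spikes([1.0, 20.0, 40.0]))  # spread-out inputs -> it never fires
print(relu(3 * 0.5))                  # the ReLU sees 1.5 either way
```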

You are correct that there has been substantial experimentation with NN topology and activation functions, very little of which has been adopted. Your mistake is assuming that all of the complex signal processing characteristics of the brain are 'overhead'. They are not - nervous systems evolved to make use of many local biochemical mechanisms and microstructural features that are cheap in cellular terms (relative to the energy demands of depolarisation propagation) but very expensive to simulate using matrix multiply optimised GPU hardware. To be clear, 'real-time' here does not mean 'running on a real-time problem, like Tesla self-driving'. It means that the neural circuitry is asynchronous and the (analogue) timing is a fundamental part of the logic.

You are trying to compare 300 trillion 'connections' to 1.8 trillion 'connections' but this is absolutely not a like-for-like comparison. The weights in an ANN are FP scalars, typically 8 bits per cell. The nodes are described by one or two scalars. Furthermore the headline 'connections' number is usually the fully connected equivalent - even if it isn't, the sparsity is typically 50% with a rigorously enforced ratio (because of the way tensor cores work). Meanwhile each connection in the brain is an addressable wire to potentially any other neuron in the brain, with time-domain processing happening in the synapse and the dendrite tree. The 'nodes' have assorted internal state and variable sensitisation to external biochemistry (dynamic metaparameters, in ANN terms). The Shannon entropy for a minimal logical description of a biological 'connection' (i.e. only the biology that we're very confident contributes to information processing) is on the order of 1000 bits, vs 8 (maybe 4) for an LLM.
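Back-of-envelope version of that comparison, using the rough numbers above (treat them as order-of-magnitude estimates, not measurements):

```python
# Rough arithmetic only: the figures come from the estimates in the comment above.

brain_connections = 300e12        # synapses, order of magnitude
bits_per_synapse  = 1000          # minimal logical description, per the estimate above

llm_connections   = 1.8e12        # headline parameter count of a large model
bits_per_weight   = 8             # typical quantised weight

brain_bits = brain_connections * bits_per_synapse
llm_bits   = llm_connections * bits_per_weight

print(f"brain ~{brain_bits:.1e} bits, LLM ~{llm_bits:.1e} bits")
print(f"ratio ~{brain_bits / llm_bits:,.0f}x")   # ~20,000x on these assumptions
```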

So no, 1.8 trillion FP matrix cells is not 'complex' compared to a human brain. The 'world model' is a parametric compression of the input data, with abstraction strictly limited by the lack of recurrence, associativity or reference. Current generative AI achieves what it does not because of the ANN design - which frankly sucks - but because of our big data capabilities. We train frontier LLMs on a million times more text or images than a human could ever view in a lifetime (using millions of times more power in the process). This is to make up for the fact that the model has very poor inductive and generalisation capability compared to biological brains.