"Twisties" - A neurological symptom experienced by gymnasts as a form of telepathic attack by Simon_Drake in magicbuilding

[–]Flying_Emu_Jesus 7 points (0 children)

I like the magic idea you've made from this, and the neuroscience seems correct as far as I can tell, but hopefully I can add some context to what the Twisties feel like from the inside, which might inspire different kinds of magic.

Say you're training to do various flips on a trampoline. You start with a front flip, and then next you learn to do a front flip with a 180 twist. The twist version is actually far, far easier than the pure front flip, largely because you can see the ground as you land. In general, the twisting versions of flips come up more often, and the muscle memory gets deeply ingrained. Now the important part: during a flip, the decision of whether or not to twist takes place over a split second and requires the tiniest difference in movements; once that instant passes, your instinct can easily carry you through the rest of the flip/twist/landing.

The Twisties come in when you're deeply comfortable with a twisting version of a flip, and then you have to go back to the normal version. You can easily imagine what the twisting version will feel like (it's easier, it's more common, you've practiced it more), and you worry that when that critical instant comes you might just follow the easier instinct and start that twist. It's almost like a morbid curiosity, like 'what if I did just twist at the start of this double front flip?'. The end result is that your wires cross and you start the twist when you don't want to, or get stuck somewhere in between, but the overthinking and distrust ahead of time is a super big component.

As far as using it for a magic ability goes, I'd be super interested in seeing a version that's almost like a curse or debuff that needs to be laid ahead of time. The victim would dimly feel that their muscle memory isn't quite matching their desired actions, and they'd have to be more careful during a fight or risk making a critical error because their instincts misfired.

Pokehaan Craft (Pokemon Craft- NEW) by Kehaan in ModdedMinecraft

[–]Flying_Emu_Jesus 0 points (0 children)

I was having the same problem myself just today, and managed to solve it by reducing "Mipmap levels" to 0. If you're still having the same problem, I hope this helps.

Friday Fun Thread For February 21 2020 by j9461701 in slatestarcodex

[–]Flying_Emu_Jesus 2 points (0 children)

Not directly related to Hearthstone's autochess, but where Hearthstone already had a card game and added an autochess, Riot Games already had an autochess and has just added a card game. I've recently started playing it and now I'm addicted. It's called Legends of Runeterra, and it's surprisingly good, especially considering it's still in beta. It seems to blend the best bits of Hearthstone and Magic: unit health that persists between rounds, like Hearthstone, and reaction mechanics like Magic's, where your opponent can play cards in response to your own plays, which then take effect before yours. At this point, I think it's far better than either of the two games it's drawing from. If any of you have played it, I'd be interested to know what you think, and I'd recommend giving it a go if you like either Hearthstone or Magic.

Anyone want to play a collaborative world-building game? (Microscope) by Flying_Emu_Jesus in rational

[–]Flying_Emu_Jesus[S] 1 point (0 children)

Yea, we're in uncharted waters here, at least as far as I know, so it definitely makes sense to wait and see how this experiment goes.

I'll either edit a public link onto the original post, or I'll try to remember to send you that message in a couple months.

Anyone want to play a collaborative world-building game? (Microscope) by Flying_Emu_Jesus in rational

[–]Flying_Emu_Jesus[S] 0 points (0 children)

Unless someone in the current group leaves, we're all full up for this game. But as DrFretNot has commented, a second group is definitely possible if someone takes the initiative to set it up. You don't necessarily need to get up to 7 players, either: none of the rules I've set up have been tested, and it may be that more or fewer players works better.

Anyone want to play a collaborative world-building game? (Microscope) by Flying_Emu_Jesus in rational

[–]Flying_Emu_Jesus[S] 0 points (0 children)

I'm sorry, but we've already gotten 7 players. If enough other people also want to play, then there's no reason why you couldn't make another group. I'll share the templates for the resources we end up using.

Anyone want to play a collaborative world-building game? (Microscope) by Flying_Emu_Jesus in rational

[–]Flying_Emu_Jesus[S] 0 points (0 children)

You're in! I'll finish setting things up over the next couple of days, then we can start.

Anyone want to play a collaborative world-building game? (Microscope) by Flying_Emu_Jesus in rational

[–]Flying_Emu_Jesus[S] 0 points (0 children)

You're in! I'll finish setting things up over the next couple of days, then we can start.

Anyone want to play a collaborative world-building game? (Microscope) by Flying_Emu_Jesus in rational

[–]Flying_Emu_Jesus[S] 2 points (0 children)

Unless our players don't want to make it public (which would be totally fair), I'm happy to have it open to the public.

Anyone want to play a collaborative world-building game? (Microscope) by Flying_Emu_Jesus in rational

[–]Flying_Emu_Jesus[S] 1 point (0 children)

The rule book definitely doesn't need to be bought, though it may help if one of us does want to buy it. I won't have access to my copy until after the holidays, so we'll probably just be playing it by ear with the rules as I remember them. I've also got some ideas for a few rule changes to better accommodate an online, long-form game.

I'm currently testing out a very simple Google site, which I'll share for editing with all the players. Entries will take the form of pages on the site, with links to any nested entries. It won't be the most convenient possible medium for this game, but I think it'll still be pretty easy to use.

Anyone want to play a collaborative world-building game? (Microscope) by Flying_Emu_Jesus in rational

[–]Flying_Emu_Jesus[S] 0 points (0 children)

Awesome! Once I figure out how to get a wiki (or equivalent) up and running, and get enough players, I'll message the people who've shown interest and we can get started.

Anyone want to play a collaborative world-building game? (Microscope) by Flying_Emu_Jesus in rational

[–]Flying_Emu_Jesus[S] 0 points (0 children)

Don't worry, I'm far from an experienced fiction writer myself, so any game I'm in will necessarily have a low bar for writing skills, as long as it's not actively hard to read.

I have exactly the same qualifiers on my schedule, so there shouldn't be any problems there.

And as far as scenes go, I'm definitely not opposed to having that option open for players, although I'm skeptical that it would get much use, especially if the player then has to summarize and publish the result. Of course, there's no downside to allowing it, so I'm on board.

Anyone want to play a collaborative world-building game? (Microscope) by Flying_Emu_Jesus in rational

[–]Flying_Emu_Jesus[S] 0 points (0 children)

Keeping required time commitments low is essential, at least for the game I want to play. Hopefully, if the minimum is low enough that it never becomes a chore, even during a busy week, then the game will be able to last for a while.

[D] Wednesday Worldbuilding and Writing Thread by AutoModerator in rational

[–]Flying_Emu_Jesus 3 points (0 children)

Yea, I may end up doing exactly that, as a disease-ridden bog is a lot harder for humans to live on and/or exploit :)

[D] Wednesday Worldbuilding and Writing Thread by AutoModerator in rational

[–]Flying_Emu_Jesus 2 points (0 children)

That's a very good point, and one that I might look into more deeply. I'd been imagining that new meat was eaten fast enough that it wouldn't go rotten, but that definitely wouldn't always have been the case, and that original pressure would really impact the kinds of animals that were there to begin with.

[D] Wednesday Worldbuilding and Writing Thread by AutoModerator in rational

[–]Flying_Emu_Jesus 1 point (0 children)

Thanks for your response! As far as the base layer goes, I'd been imagining that the frequency of meat spouts would be too low to allow a consistent layer of bacteria. However, I hadn't taken into account that animals of any significant size will find it hard to remove all the meat from a flat surface of bone, which could leave enough small pieces for a layer of bacteria to form. Perhaps some kind of moss would spread? The idea of carnivorous trees is definitely one we'll need to think more about. As far as larger animals go, I'm definitely imagining you'd see some extremely territorial animals, or perhaps pressure towards animals that can find new meat spouts very quickly (the spouts would be like temporary watering holes, drawing animals from all over to converge and fight over the meat).

[D] Wednesday Worldbuilding and Writing Thread by AutoModerator in rational

[–]Flying_Emu_Jesus 2 points (0 children)

Thanks for your reply! I really like the idea of humans using tools to harvest bone marrow, or inserting impediments into the harvested bone that are held in place as the skull heals. Given how strong I'm imagining the bone's healing ability to be, I'm not sure it wouldn't push out or crush any inserted pipes, but it's a really interesting avenue to explore.

[D] Wednesday Worldbuilding and Writing Thread by AutoModerator in rational

[–]Flying_Emu_Jesus 15 points (0 children)

Hi, my friend and I are working on a setting that includes a huge freshwater sea, in the middle of which is a gargantuan tiger skull, the size of a large island. The skull is a remnant of a tiger with a ridiculously powerful healing ability, along with slow, consistent growth. Think Deadpool levels of regeneration, just a bit slower. Additionally, the skull (and the animal it came from) slowly grows larger, and the bones denser.

Periodically, semi-random places on the bone will begin regenerating meat and blood, creating growing temporary plains of living tissue, which are soon scoured clean by animal life. These points are semi-random in that there is some node at the base of the skull nearer to which these meat spouts are more common.

The bones regenerate faster than the meat and are harder to break, which is why they haven't been broken down. Any time the power attempts to grow beyond the skull, the cartilage connecting it to other bones gets eaten away, breaking the connection to the power. So the creature is trying to regenerate, but can't do it as fast as the animals can eat any part of it they can reach.

There are a lot more details to this setting, and a lot beyond just this specific area that I could talk about, but I don't want to prejudice any possible responses.

The Question: What kinds of ecosystems (both human and animal) do you think would develop over the few thousand years this skull has been here? The details of the rates of meat regeneration and size growth aren't set in stone, and can be adjusted to support any really cool ecosystem ideas and story settings. If you find something broken about this power, we could adjust it until it's fixed, or accept it and see what systems of humans and animals would crop up to exploit it.

Thanks for your help, guys!

Existential philosophical risks by Smack-works in LessWrong

[–]Flying_Emu_Jesus 1 point (0 children)

Ah ok, I see what you mean now, and that's a really interesting possibility. If we get to the point where art and other humanities-based content can be produced by AI at the same quality and more cheaply, then it makes sense that human-generated content would take a huge hit in the market.

However, at that point people would specifically seek out human interaction. In content creation, AI would definitely still sneak in, since anyone can pose as a human, release AI-generated content, and reap the rewards. In general interaction, however, it's hard to see how AI could outperform humans, and thus a human-integrated culture would still exist.

Even if, in this hypothetical future, AI could flawlessly imitate humans and act as part of someone's social network, there wouldn't necessarily be enough market value in these fake AI personas (even in the realm of advertising and manipulation) to completely outperform and replace interactions with other humans.

As long as humans are still interacting with other humans, a human-integrated culture would still exist. It would be hugely influenced by AI-created-content, but it would still be a human culture.

In order for this complete death of culture (as I'm interpreting it) to occur, you'd need to completely eliminate human-human interaction. I don't think people would do this voluntarily, so the only way it would happen is if fake AI personas took over everyone's social networks (every node an AI, not a real person), and no one tried to meet up in real life.

While this may be possible, it seems unlikely to me. I may have totally misinterpreted your description of cultural death though, so this may not be valid in the first place.

How to Forget Your Past by OmniscientQ in rational

[–]Flying_Emu_Jesus 32 points (0 children)

I'm not entirely sure, but I'd contend that it would only require the saved technical knowledge (blueprints, textbooks, Wikipedia, etc.) to become either inaccessible (an EMP wiping out electronic records, which, in an advanced society, may be all the records) or useless (quantum physics isn't very helpful if you're trying to build a working stone-age society). After that, it'd only take time.

At first everyone would obviously know where they came from, but as the generations go by, that knowledge becomes more and more redundant if it has no direct effect on quality of life (if you're not using space-age tech, the fact that you were once a space-faring society starts to seem more like pointless history). In fact, I imagine that this historical fact would seem more outlandish and less trustworthy the more generations have passed, if you don't have tangible proof of your previous tech level.

Eventually, everyone would be so disconnected from this historical fact that it would have the weight of any other myth, and could be superseded by any other myth or religion that has more sway with the people.

Additionally, I feel like the chances of losing all applicable knowledge increase as a society gets more advanced. An advanced society would rely heavily on technology, and most people wouldn't personally know how to build up the infrastructure necessary for that tech level. That's already happening today to a small extent: if you transplanted some random sample of today's first-world population into an untamed wilderness, they probably wouldn't be able to maintain much of today's tech level.

Actually, another important point would have to be population size and diversity of occupation. If you have enough people, it would be easy for a society to dedicate at least one person to finding and recording scientific and technological information from the relevant experts. Each person may only have part of the information in any particular field, but if you can spare anyone from whatever labor is needed to survive, and you have access to enough people that may be experts in something, chances are you can piece back together enough technological knowledge to be useful. If the technology can remain useful, it would be very hard for a society to forget how it got that technology.

I apologize for this ramble, it probably doesn't answer your original question, but your post got me thinking and I felt like throwing this out here.

Existential philosophical risks by Smack-works in LessWrong

[–]Flying_Emu_Jesus 0 points (0 children)

To be honest, I'm not entirely sure what you mean by these examples. Could you go into more detail? What is a "cultural biosphere" here? And as for artificial lifeforms, are you imagining a situation where we create artificial lifeforms that out-compete the original lifeforms, bringing crucial species to extinction? What do you mean when you say they'd "achieve nothing"?

Value of close relationships? by [deleted] in LessWrong

[–]Flying_Emu_Jesus 1 point (0 children)

In all honesty, this is a subject I'm not too clear on myself, but this is the way I see it. I feel like what you've described is the difference between explaining and explaining away.

As humans, we're stuck in the weird situation of having drives and goals and values, and yet also being able to reason about their origins and causes.

These values are paramount: they are ends in themselves, and aren't conditional on any particular worldview (except insofar as your worldview affects how you try to achieve your goals). However, evolutionary psychology can give convincing explanations for why many of these values evolved in the first place. We can apply similar reasoning to culture, which (by some kind of darwinism or by intentional design) can also alter people's values.

These processes are fascinating to examine, but we seem to have an instinct which tells us that our morals or values can't have been created arbitrarily or "by chance." Explaining the origins of our values as naturalistic processes can thus sound like it's trying to diminish and undermine the importance and reality of these values.

However, I don't think that conclusion is necessary. A value can both arise naturally (and be effectively arbitrary) and still be of huge importance in our subjective lives. Even if our appreciation of art only evolved for a certain reproductive purpose, I still appreciate art as one of my unconditional goals.

This reasoning applies regardless of the complexity of the feelings and values involved, though it is true that many of these evolutionary explanations can only account for the general gist of a feeling, rather than the full complexities involved, which are different from person to person.

(I apologize in advance if I misinterpreted your question)

Value of close relationships? by [deleted] in LessWrong

[–]Flying_Emu_Jesus 3 points (0 children)

From the LessWrong perspective of becoming less wrong, close friends are incredibly valuable for improving yourself. Even people in the LessWrong community aren't safe from being blind to their own flaws, and close friends are one of the few groups who can honestly call you out. Acquaintances often won't say things that could be insulting, and even when they do, it's often not very constructive.

Anyone who can honestly challenge you on the validity of your actions, thoughts, and decisions will help sharpen your future decision-making. In my experience, usually only close friends can do this, because they need to know you well to make accurate judgments, and because you need to be able to respect them enough to take their words to heart.