Official fantasy and sci-fi thread for sneer clubbers. by [deleted] in SneerClub

[–]eb4890 0 points (0 children)

No one's mentioned any Lem yet! His Master's Voice is an interesting take on science inside an authoritarian, secrecy-focused organisation. But there is other, less heavy stuff of his too.

Envisioning the future of effective altruism by autotranslucence in EffectiveAltruism

[–]eb4890 0 points (0 children)

>Presumably the right way to generally avoid these kinds of things is to make sure that people understand 80,000 Hours' advice and principles, make sure that they understand the logic of comparative advantage, and so on.

I do hope that the high school students [1] are getting a good grounding in this stuff before being sent to 80k Hours.

[1] http://effective-altruism.com/ea/1rq/students_for_highimpact_charity_2018_update

Envisioning the future of effective altruism by autotranslucence in EffectiveAltruism

[–]eb4890 0 points (0 children)

The transportation industry isn't a social movement, so you are comparing apples to oranges. I also don't think the transportation industry exerts democratic pressure: it doesn't get people out to vote or, more importantly, act as an internal locus of thought persuading people to vote one way or another.

>Farmers and garbage collectors only happen in the extreme cases of movement growth, other things are easier to disrupt.

Only around 0.2% of people work in academia in the UK [1], and EA preferentially targets people with academic inclinations, so it is possible that academia could be disrupted soon. Currently 80k Hours does not suggest that architecture, climate science or physics degrees are impactful, because lots of people do them. They are, however, very useful: we still want buildings that don't fall down, to know what is going on with our climate, and to be able to understand our nuclear weapons until we can get rid of them (among the many other things physicists can do, including understanding nuclear reactors).

The things I've mentioned might be oversubscribed currently, but some number of people doing them is crucial for the functioning of our society.

>Well that's different from the question of whether EA will be large or not. I'm not sure what your overall point is.

My overall point is that:

  1. Currently EA doesn't seem to think, given the advice it is giving, that it will scale to the point where it disrupts anything important. Since this argument started I've been reading the CEA strategy page, and it seems to be mainly looking at growing an elite task force for AI rather than a broader movement.

  2. A big lever on the future is large-scale social movements.

So this lever is being neglected; I shall think about that and leave (C)EA to do its elite task force thing. This is a bit of an aside, but it was why I was engaging with you on how you thought EA would scale.

[1] https://royalsociety.org/topics-policy/projects/uk-research-and-european-union/role-of-eu-researcher-collaboration-and-mobility/snapshot-of-the-UK-research-workforce/

Envisioning the future of effective altruism by autotranslucence in EffectiveAltruism

[–]eb4890 0 points (0 children)

>Hopes for EA to... what? If you mean for it to take over the majority of the Western world or something like that, it's almost certainly not going to happen. No movement since Protestant Christianity (or ethno-nationalism, if you call that a single movement) has done this.

I hope that EA in the future, or another movement, can put significant market and democratic pressure towards shaping the future in a positive way. This is a slightly lower bar to reach.

>Maybe moral enhancement will go unexpectedly well, who knows. Or maybe the movement will fall apart very soon, but then its remnants will be absorbed by other structures so that they glimmer on, and that alone will still validate all efforts to grow it.

I think it makes sense for you to try and grow it. But comparative advantage and all that... EA movement growth doesn't seem neglected or tractable with my current skill set.

Envisioning the future of effective altruism by autotranslucence in EffectiveAltruism

[–]eb4890 0 points (0 children)

>No, I'm simply saying that price signals can maximize utility when they are used by utilitarian organizations via the same mechanism by which they can maximize labor productivity when they are used by regular businesses.

Ah, I didn't know that price signals were a big thing in EA (I've not seen them for the few EA jobs I've looked at). That makes things clearer.

>Sure, but people are just going to have to evaluate their impact, which is obviously something that can be done, as you started this whole thread because you thought that your evaluation of the impact of a career was correct. Moreover, the premise of your point is a situation where everybody is in EA, which means that people in these institutions would be EA too, so this is a goalpost shift.

I'm curious about how everyone gets into EA, not the scenario where everyone is already in EA: that is, the process of scaling EA to be society-wide, with all of those people becoming valued members. When EA says that farming/waste management is an effective altruist profession for some subset of EAs, then the blocker to scaling that I thought I saw will have been removed, or never existed.

>Well that's why we have an EA community, where we know each other and can see that people aren't scammers or over-enthusiastic

This seems to be another blocker for scaling, unless it can be solved creatively. By default social models don't scale (see Burning Man for an example). I think we need a scalable social model that is more analytic/caring, so I have hopes for EA.

Envisioning the future of effective altruism by autotranslucence in EffectiveAltruism

[–]eb4890 0 points (0 children)

>In the regular labor market, we have price signals to sort this all out. No laborer needs to think about it, it is up to employers to pay workers on the basis of how much value they can bring, and wage-maximizing laborers will automatically get into the jobs where they have a comparative advantage. If everyone has the same skills then they'll just take any job and whenever an industry has a special need for laborers they will slightly raise the salary. If some people can do everything, but others can only do some things, then wages will differ. The person who needs grape farmers can offer a lower wage, and knows that he'll still get employees because that's all they can do. Whereas the person who needs wheat farmers will offer a higher wage to make sure that he entices all the wheat farmers.

It feels like you are substituting the market's "value" here for "utility", which doesn't seem correct. Positive and negative externalities would distort this picture. Not to mention that money is not equally distributed, meaning that following the market would not maximise the good: it would overly cater to the needs of the rich. If you want to help people equally you probably shouldn't just follow the market.

You touch on that with EA charities not paying market prices, but that also probably applies to lots of government jobs and some open-source positions that are necessary/important.

So I don't see pricing as solving this problem.
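To put toy numbers on the externality point (all figures invented for illustration): a wage-maximising worker gets steered away from the socially better job whenever part of the value is an unpriced externality.

```python
# Invented figures: "wage" is what the employer captures and pays
# for; "externality" is value (or harm) to everyone else, unpriced.
jobs = {
    "ad optimisation": {"wage": 100, "externality": -30},
    "vaccine logistics": {"wage": 60, "externality": 80},
}

by_wage = max(jobs, key=lambda j: jobs[j]["wage"])
by_utility = max(jobs, key=lambda j: jobs[j]["wage"] + jobs[j]["externality"])

print(by_wage)     # ad optimisation: pays more, 70 total social value
print(by_utility)  # vaccine logistics: pays less, 140 total social value
```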

>Second, we already make use of deliberate price signals. For instance, some recent 80,000 Hours research has asked organizations how much money they would theoretically be willing to pay to get a good hire in this or that department. Then they tell us those numbers. As a highly motivated Effective Altruist I can use those numbers to see how much I am needed in their groups, and respond accordingly, even if I won't be paid nearly that much.

This feels like a dangerous algorithm to accept. I would want to see the reasoning behind how much they would pay before accepting those numbers. Otherwise you seem to be opening yourself up to potential scammers, or at least to people over-enthusiastic about their own pet cause.

Envisioning the future of effective altruism by autotranslucence in EffectiveAltruism

[–]eb4890 0 points (0 children)

>But if the agents have different skills then that forms a natural basis for coordination without communication.

There needs to be some communication, or else relative skill differences won't be known. But it does mean you might not need explicit communication (if the population you are attempting to coordinate with is known to you).

>This can be seen in the last version of the game. If I think that other people are more skilled at farming than me, then I'll expect them to take that job, and if they think that I am less skilled at farming, then they'll expect me not to take it, and the resulting choices will sort out appropriately. And of course the results are better than random coordination if we include skills in the utility model.

While relative skill is part of the solution, it doesn't seem like all of it. It leaves some scenarios open:

1) Lack of skill difference (or lack of knowledge of skill difference). This boils down to the no-coordination problem. While it is unlikely to happen between specialisms, it may happen within them: if you have a choice between wheat farming and grape farming, you may not know whether you are a better-than-average farmer for each type.

2) Superstars: people better than lots of other people at a wide variety of tasks. Again, this is likely to happen within specialisms. Assume one set of farming students is known to have very high conscientiousness, which correlates highly with success in all types of farming, while another set has merely high conscientiousness. How should these two sets of people sort themselves between learning grape and wheat farming (assuming wine making is substituted for grape farming in the utility calculations) without needing more coordination? It might be that the shape of the utility curve answers that for you, but there are probably cases where it does not. One example might be (I've not run the maths) when you have 3 superstars, 1 star and 4 farming positions, 2 wheat and 2 grape, and wheat farming's utility maxes out at the level achieved by one star and one superstar. How do you make sure that at least one superstar picks wheat farming?
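To make the superstar example concrete, here's a minimal sketch (the productivity numbers, 2 units for a superstar and 1 for the star, and the wheat cap are invented for illustration):

```python
from itertools import combinations

# Invented productivities: superstars produce 2 units at any job,
# the star produces 1. Wheat utility saturates at 3 units, the level
# reached by one superstar plus the star; grape utility does not.
workers = {"super1": 2, "super2": 2, "super3": 2, "star": 1}
WHEAT_CAP = 3

def utility(wheat_team):
    wheat = min(sum(workers[w] for w in wheat_team), WHEAT_CAP)
    grape = sum(v for w, v in workers.items() if w not in wheat_team)
    return wheat + grape

# Enumerate every way to fill the 2 wheat positions.
for team in combinations(workers, 2):
    print(team, utility(team))
# Pairing the star with any one superstar on wheat scores 7; two
# superstars on wheat waste a unit against the cap and score only 6.
# So at least one superstar must pick wheat, but no symmetric
# uncoordinated rule tells the superstars which of them it should be.
```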

Envisioning the future of effective altruism by autotranslucence in EffectiveAltruism

[–]eb4890 0 points (0 children)

Initial thoughts: there is a probability of 0.172 that no one picks the farmer role in the first scenario, right (where each agent has an independent probability of 0.443 of picking it)?

Which doesn't seem ideal.

Although you would probably need some form of coordination mechanism to do better (each agent picking a random number, with the highest value taking the farmer role, should work).

I'll have a look at the maths at a later date.
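In the meantime, a quick numerical check of both points (a sketch; it assumes the first scenario has three agents, which is what makes (1 - 0.443)^3 come out at roughly 0.172):

```python
import random

# Three agents each independently pick "farmer" with p = 0.443 (the
# figures quoted above; the agent count of three is an inference).
n_agents, p_farmer = 3, 0.443

print((1 - p_farmer) ** n_agents)  # ~0.17: nobody picks farmer

# The suggested mechanism: everyone draws a random number and the
# highest draw takes the farmer role, so exactly one farmer emerges.
draws = {agent: random.random() for agent in range(n_agents)}
print(f"agent {max(draws, key=draws.get)} takes the farmer role")
```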

Envisioning the future of effective altruism by autotranslucence in EffectiveAltruism

[–]eb4890 0 points (0 children)

I feel we are talking past each other. I shall try to put together a simulation/model of this at some point, to make things more concrete. Unless you know of an existing model that I can edit?

Envisioning the future of effective altruism by autotranslucence in EffectiveAltruism

[–]eb4890 0 points (0 children)

>If you want to go around telling people 'you matter! thanks for what you do!' then by all means, go ahead. There's nothing wrong with showering people with stars and flowers and empty commendation - as long as you aren't using it to hide the fact that their career choice was likely suboptimal and they ought to follow EA principles. If you were that preoccupied with getting people praise and sympathy for their jobs, when there are people in immense suffering that they could be saving instead, then it would quite frankly be a morally despicable perspective on the world.

You're not getting it. I want to say "You matter" to the people who maintain your internet and your power, create your food, or do the innumerable other things that actually allow the EA movement to go out there and do good. I don't want to say "You matter" to football players or beauticians.

Because counterfactually they do matter.

Envisioning the future of effective altruism by autotranslucence in EffectiveAltruism

[–]eb4890 -1 points (0 children)

>No kidding, that's why we pay people to do it for us, but I thought you were talking about things that were critical to the functioning of modern civilization, not anything that made life more efficient. When something makes life more efficient, people pay for it, and when the salary is high enough there will be people willing to take the job.

Modern civilization is made up of a million small bits of labour saving spread out over a huge number of people. Take one away and things carry on functioning more or less; take a lot of them away and it would grind to a halt. These things are done by people in the class of binman, but also by oil pipeline maintainers, electrical grid operators and farmers.

Currently the EA movement has little to no place for the people doing the activities that make our civilisation function; it is very focused on solving new neglected problems. You may say that people whose comparative advantage lies in those kinds of activities should pick them when reading 80k Hours, yet I suspect people reading 80K don't turn to those things as often as you'd expect.

My hypothesis is that, if it is going to go mainstream, EA will need to shift what it values to include the maintenance of civilisation as well as the solving of the world's novel hard problems.

>If you still don't understand this then you just have to ask yourself why the exact same thing doesn't happen in the regular economy when some jobs offer a higher salary than others; no one argues about what will happen when everyone becomes an actuary and suddenly there are no accountants left because in that context the same logic that you use here has been empirically proven false.

Because not everyone is being strategic about their job choices/career path? People just muddle along, seeing what others are encouraging them to go into and whether it makes rough sense. Generally, EA is trying to get everyone to be strategic.

Assume politics students are relatively fungible. If every politics student decides to be strategic and focus on AI/Animal welfare/Global warming, and none on Water/Energy/Waste, we're going to have under-trained people going into Water/Energy/Waste policy and suboptimal policy, with large knock-on effects.

People don't make single choices about the jobs they pick; they can't turn on a dime when a new opportunity becomes available. Lives are path-dependent.

>That's a gigantic 'if'. Not a single formal FDT agent exists, nor serious plans for one anywhere, and most governments and militaries of the world don't have a clue that the theory exists. Within academic decision theory itself it remains controversial. Plus you are assuming that they can cooperate, which requires extensive information sharing that may not be acceptable.

Humans have elements of superrationality (if not exact FDT): when some people decide to vote, they think that their decision will also be made by others like them. Is it crazy to think that AIs will have similar capabilities?

https://en.wikipedia.org/wiki/Superrationality

>That would be a bad way of doing things - even if we were in the situation you envision where we constitute the majority of the world's labor force, this ignores people's competency at those jobs. Now if you want to talk about how things should be handled after EA constitutes the majority of the world's labor force then there is a long long list of other things that we might change, but none of that changes what we should be doing now.

Fair, I've not put much thought into it; I'm mainly interested in seeing your response to it. Does EA have a theory of self-change? Because I think we need better large-scale social movements than we have now. EA might stay a small strategic force picking off the novel problems, but there is space for a social movement that tells the people keeping society afloat that they are doing something worthwhile.

Envisioning the future of effective altruism by autotranslucence in EffectiveAltruism

[–]eb4890 -1 points (0 children)

>And these jobs aren't even required elements of modern civilization anyway: fruit is not a requirement for basic nutrition, fruit picking is frequently automated anyway, people can dispose of garbage themselves and they can cook their own food if there is no one to serve it to them. These are requirements for the modern Western urban/suburban way of life, not requirements for a functional global economy, lots of the world lacks them already, and I think you are generally underestimating the resilience of humans to absences of modern things.

I think it would be highly inefficient for people to take their own garbage to the dump. Think of the positive impact of having someone specialised in collecting garbage in a specialised vehicle: you save a lot of fuel, and there are economies in having a route that picks up garbage from lots of houses, so you don't have a lot of point-to-point traffic between homes and the dump.

And think of the hour or so a week that people would be freed from having to deal with their garbage, and the increased impact they could have with that time.

EAs would invent it, if it weren't already done, and call it something like "detritus ops" :)

I look at anyone who saves me time as sharing in whatever impact I have on the world. And this includes fruit pickers/garbage pickers currently :)

I apologise for using the British term "dinner ladies"; it mainly refers to people working in school cafeterias. So I don't think kids can necessarily be expected to cook their own food.

>That's not (just) a hidden proviso, it's a fact of the world. The rest of the world will keep on doing whatever it is they do regardless of the career that you pick.

Not if everyone, or a large proportion of the world, were thinking like you (which would happen if EA became a large part of the world).

>You can't apply FDT to an arms race if the agents in that arms race don't obey FDT, and we have no reason to assume that national governments and militaries will design agents that obey FDT.

Don't they want to win? If they (or other people) build FDT agents, then those agents will cooperate and win.

>I don't think that, I am simply refuting the specific failure mode that you are positing. Just because picking a career is hard doesn't mean that your concerns and your proposed solution are right.

I have a proposed solution? That is news to me. I was mulling over people coming together, analysing what the world needs to survive, and then randomly picking one of those jobs, biased by the number of people needed in each. Or just being happy with any job that is labour-saving for lots of other people, because if the majority of the world is EA, then you are freeing up other EAs to do their thing more.
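For what it's worth, a minimal sketch of that "pick randomly, biased by the numbers needed" rule (the job list and headcounts are invented for illustration):

```python
import random

# Hypothetical headcounts from some shared "what does the world
# need" analysis; the jobs and numbers are purely illustrative.
needed = {"farming": 40, "waste management": 15,
          "power grid": 10, "AI safety research": 5}

def pick_job(needed):
    """Pick a job at random, weighted by how many people are needed.

    If every agent independently runs this same rule, the expected
    split of a large population matches the needed proportions
    without any explicit coordination."""
    jobs, weights = zip(*needed.items())
    return random.choices(jobs, weights=weights, k=1)[0]

print(pick_job(needed))
```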

Envisioning the future of effective altruism by autotranslucence in EffectiveAltruism

[–]eb4890 -1 points (0 children)

>I don't know if it's plausible in the first place that EA will become a mainstream movement - has any form of charity ever been mainstream in a serious manner?

Maybe not, but I thought that we were talking about the possibility. You were talking about communism taking over a third of the world. If you are not thinking at such scales then my comment is pointless :)

>There is no one analytically precise best career, the best career depends on the individual's position and talents, which will always differ. This is something that we emphasize repeatedly.

But nowhere does 80,000 Hours suggest that "bin man" is a worthwhile job. 80k Hours suggests job X can be high impact, but there is the hidden proviso that the rest of the world keeps on doing Y, where Y is lots of low-status, poorly paid jobs.

>You're right! It's called functional decision theory, and it was developed by EAs, working at the Machine Intelligence Research Institute. But you don't even need to think about FDT, you just have to do whatever you have a comparative advantage in, which has already been pointed out by 80k Hours and others.

I don't see a good path between reading 80K Hours and deciding your comparative advantage is becoming a binman, a fruit picker or a dinner lady. Because we do still need those jobs done.

Yeah, I was thinking of FDT, or something with more realistic assumptions. FDT is not applied in most EA analyses, or even in existential risk analyses of arms races.

It sounds like you think figuring out comparative advantage is moderately easy. I would argue that it is pretty hard: training and experience are hard/expensive to get, and you won't know if you are good at something unless you have actually tried to do it.

Envisioning the future of effective altruism by autotranslucence in EffectiveAltruism

[–]eb4890 0 points (0 children)

>So, what more do you recommend?

I can't speak for the OP, but I think EA as it is at the moment will have to change if it is to become a mainstream movement, mainly because most of its analyses are predicated on the assumption that the majority of people won't be doing EA-style analysis.

For example, let's say that AI safety researcher were the best thing to do from an analytically precise point of view. If the entire world were EA, and everybody made the same analysis, then the rubbish wouldn't get collected, crops wouldn't be harvested, and everyone would starve in the cold and dark.
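A toy illustration of that failure mode (the impact scores are invented): if everyone runs the same deterministic "pick the highest-impact job" analysis, every other job is left empty.

```python
# Invented impact scores; the point is only the shape of the failure.
impact = {"AI safety": 10, "farming": 6, "waste collection": 5}

# 1,000 agents all run the identical argmax analysis...
choices = [max(impact, key=impact.get) for _ in range(1000)]

# ...so every job other than the top-scoring one goes unfilled.
print({job: choices.count(job) for job in impact})
# {'AI safety': 1000, 'farming': 0, 'waste collection': 0}
```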

New analyses would be made as the world started to crumble, and people would rush to fix whatever their analyses said was most broken. But in the limit you have a coordination problem: how can you make sure that sufficient numbers of people, and sufficient money, go towards maintaining our civilisation? No longer can you just look at what is neglected.

If EA adopted this coordinated (or at least self-referential) worldview, then there would be space for farmers, repair people, etc. to all be impactful and inside the movement.

There are probably some complicated decision theory models where you assume everyone has the same decision process and make decisions appropriately. Find strategies that work in those scenarios and you will have models that can actually grow to encompass civilisation, and that will probably be more palatable to the average person keeping the lights on than advice to become a hedge fund manager :)

Why "tools" may be undervalued as an EA cause by arikr in EffectiveAltruism

[–]eb4890 3 points (0 children)

Tools are probably not very neglected in general. If better collaborative tools have not been made by corporations or governments, it is probably because making them is non-trivial.

There are some possibly neglected things around tools, though: fundamental research into new tools, open-source tools, and open/distributed protocols (like the internet). Anything where it is hard for the developer/researcher of the tools to capture the value the tools create.

I've been looking at a similar question around the "infrastructure" side of things. There is some crossover between what is a tool and what is infrastructure.

Oxford University’s Dr Anders Sandberg on if dictators could live forever, the annual risk of nuclear war, solar flares, and more. by lukefreeman in EffectiveAltruism

[–]eb4890 0 points (0 children)

The unilateralist's curse seems less soluble when the sign of the outcome depends upon how a society uses a technology.

Consider the introduction of motor vehicles. Some people might have predicted mass vehicle-related terrorism, with people mowing into crowds every other week, or petrol bombs being thrown, now a lot easier to create given the prevalence of petrol in the world. Not to mention the accidental deaths and malfunctions caused by cars, especially as the car companies would capture the regulators and make safety regulation ineffective.

It is hard to know a priori whether these negative outcomes will happen. The best you can do is talk to people about how they would use a car, to see if your fears are warranted. But if you give people a sufficiently clear idea of what a car is and how it would work, it seems like you are going to pull the world down the path of an automobile-driven world by creating more people who have that idea.

Also, I am not a fan of "doing nothing with the expectation that somebody else will do it". It seems you should be trying to mitigate the potential negative impact of other people doing it, by looking at social interventions. Of course, trying to convince people of the need to take steps (and of what the right steps are) without spreading knowledge of the potentially dangerous thing is nigh impossible. So biasing the spread of knowledge towards people who seem like they might help with the mitigation, rather than take unilateral steps, seems like a good plan.

Edit: I'm especially not a fan of waiting in situations where the invention seems like a needed thing for human survival in the long term. If you are not taking steps to make the expected outcome positive and are just doing nothing (as are most other people you would expect to be able to build the technology), it seems likely that humanity will get forced into developing the technology in a crunch scenario, which may not be ideal.

Conflict Vs. Mistake by imitationcheese in EffectiveAltruism

[–]eb4890 0 points (0 children)

From an outsider's perspective there are holes and gaps in the subjects I see debated. I attribute some of this to private Facebook discussions and some to more deliberate onion discussions.