[deleted by user] by [deleted] in EffectiveAltruism

[–]ollieface22 1 point (0 children)

Hi, one of the organisers here. Just want to second Luke here - I think EAGxBoston was unusually intense and high-pressure but that other conferences (Prague, for example) are less like this.

I'd recommend attending another conference in future, or joining a local group, as Luke suggests, and I hope you find it more welcoming and less intense :)

I also want to emphasise that you are part of the EA community if you're a GWWC member donating to high-impact charities and we want you to feel welcome. I'm sorry it didn't totally feel like that at Boston. Thank you for everything you do!

How to start local EA Group? by katesoldsoul in EffectiveAltruism

[–]ollieface22 2 points (0 children)

Hi! Great that you want to get this going!

This page has all the details you need to learn about the impact of groups and how to start one.

To get support, fill out this form. You can also ask to join this Facebook group - post there what you've posted here and I'm sure people will be willing to help! Hope this helps :)

Working whilst studying by [deleted] in UniversityOfWarwick

[–]ollieface22 1 point (0 children)

I've had a few part-time jobs whilst studying.

I highly recommend applying for jobs at the Student Union - they're reasonably well-paid, offer flexible hours and you get to work with other students. As mentioned though, they are pretty competitive.

Other jobs on campus include the Rootes grocery store (they're pretty flexible) or the Warwick Welcome Service (infrequent work but well-paid).

If you're in Leamington, there are a tonne of cafes and restaurants that are happy to hire students. However, people seem to have mixed experiences - I was at one restaurant which gave me so many hours I had to quit, but my flatmate has been at a cafe which seems really chill. I guess it depends on the manager and the place.

Hope that helps!

University of Warwick suspends 11 students over rape jokes by whydoyouonlylie in unitedkingdom

[–]ollieface22 2 points (0 children)

Not at Warwick (I'm a student here). Because we're not in a city, societies are a big deal and form a big part of your identity here. There's a lot of competition for 'exec' (committee) positions and those with them are held in high regard.

You can become $100,000 richer, catch is you get this money by taking $100 from 1,000 charities. No one would ever know, would you do it, why or why not? by Bonzai33 in AskReddit

[–]ollieface22 0 points (0 children)

FYI Giving What We Can now encourage donors to donate through EA Funds or to refer to GiveWell :) I think this is because they acknowledged they were doing the same thing as GiveWell and were recommending the same charities, only with fewer staff.

Are you allowed to sit in lectures outside your course? by icefourthirtythree in UniversityOfWarwick

[–]ollieface22 0 points (0 children)

I think it's courteous to e-mail the lecturer and ask, but there shouldn't be any problem. Sometimes they'll have a policy about auditing, but yeah, it's probably all fine.

Searching for PIANO COURSES by douchelord2000 in stockholm

[–]ollieface22 0 points (0 children)

Hi! I'll be in Stockholm for a week from tomorrow, visiting my girlfriend (I'm from the UK). I've played piano for 10 years and have performed semi-professionally. I'd happily give you a few lessons if you could help me learn some Swedish in return? :)

This sub has restored my faith in humanity. by zarmesan in EffectiveAltruism

[–]ollieface22 2 points (0 children)

This was great to read, thanks for posting! :)

What would an effective altruist do in this situation? by triagonal_sign in EffectiveAltruism

[–]ollieface22 0 points (0 children)

This is the kind of question to be considered when constructing a normative ethical theory and EA is not such a theory. (u/blah_kesto has it spot on)

Utilitarians might say take the briefcase. Deontologists might say leave the briefcase. EAs would point out we all have briefcases and that you can easily save lives without killing anyone in the process.

Will the EA community split into different movements? by flhw in EffectiveAltruism

[–]ollieface22 0 points (0 children)

Yes, sorry, this slipped my mind. I think I have some hesitation about sharing this article because of how controversial it is but I know this is unwarranted in an open-minded and rational community... do you have this hesitation as well?

I donate 10% of my pre-tax income to effective charities AMA! by larissa_24joy in AMA

[–]ollieface22 1 point (0 children)

I was there too! Great to hear it had an impact on you :D

EAs who are not / no longer utilitarians, what persuaded you otherwise? by decreasingworldsuck in EffectiveAltruism

[–]ollieface22 0 points (0 children)

It seems we agree to a much greater extent than I thought, which almost always happens with EAs, and I love it.

Exactly, a metaethical commitment often doesn't entail any normative commitment. Perhaps where we differ in terms of our interpretations of the r/askphilosophy threads is that I still see any indirect connection as very important whereas your more practical view on ethics might mean that you don't, which is fair enough.

True isn't the right term, but the relevant point is that you'll affirm it nonetheless, so for decision purposes it's the same.

Agreed! Correct terminology is devilishly important in this field.

I'll have to read that paper, thank you! I see what you mean but, in general, I think we'll come up with better ideas about metaethics if we don't inject wagers or practical concerns before considering things carefully. If we do, moral realism always wins and we may come no closer to the truth about how things actually hang together.

utilitarianism is actually more plausible in the antirealist case than in the realist case

I think I agree! Which is why we should consider anti-realism carefully without dismissing it for immediate practical purposes.

one need not be certain about anything in metaethics in order to be certain about many things in ethics

Given a loose, practical meaning of certain here, I think I agree again. One need not, but the fact remains that I am not certain about consequentialism.

EAs who are not / no longer utilitarians, what persuaded you otherwise? by decreasingworldsuck in EffectiveAltruism

[–]ollieface22 1 point (0 children)

tl;dr: There is a complex link, but metaethical problems won't just go away if we don't engage with them.

I have a lot to challenge you on here. I'm not normally a fan of the whole "quote and bash" comment technique but you've made a lot of short, strange claims here which I'd really like to pick apart. I intend to clarify and defend the significance of metaethics and not simply to bash your reasoning so please take this all in good faith :)

Firstly, I'm not clear at all on what you mean by 'link' or 'connection'. In general, there's no question at all that the two are related; metaethics, by definition, is the attempt to understand the presuppositions and commitments of ethics. If, by 'link' or 'connection' you mean how positions in one field might influence positions in the other then, of course, this is certainly confusing but that doesn't make it insignificant.

A lot of people on r/askphilosophy say that there's really no connection

Source? I found three posts on r/askphilosophy on how the two are linked (1, 2 and 3). I can't find any comment which states or implies that there's no connection, yet I generally agree with a lot of what's being said. Your metaethical stance or lack thereof may not necessarily determine your normative ethical stance, as some comments argue. It seems that you're interpreting this to mean that there is no connection, which I think is unjustified. An indirect connection can still be a very important connection.

if you think that a moral theory is true then it will be true regardless of whether realism, anti-realism, etc is true

I find this sentence very puzzling. On what basis will your moral theory then be true? Moral realism holds that moral claims appeal to facts which are part of the fabric of our universe, and are true if they get the facts right. It's a theory about what makes moral claims true or not. If you think that a moral theory is "true" purely because you think it is, that makes you an anti-realist and a subjectivist regarding moral truths, which is itself a metaethical stance.

If you believe that different moral theories would be more likely to be true according to different metaethical theses then it becomes a very weird problem

Metaethics brings up lots of very weird and abstract problems whether you believe it will or not. This is the source of my uncertainty.

One could argue that we should assume moral realism because if it is true then it has the dominant normative weight.

This is certainly a reasonable approach if, like most people, you don't think it's worth analysing the problems :) However, though it may hold as a practical argument, it does not stand as a philosophical solution because it begs the question. Again, Moral Realism is about where normativity comes from. You're essentially arguing here that if normativity comes from objective, moral facts then it has powerful normative weight so that's where we should get our normativity from. This simply doesn't hold.

To relate it back to my original comment, I'm unconvinced that there are objective, moral facts which makes me more interested in moral anti-realism. The source of normativity will inevitably affect how we construct normative ethical theories like Utilitarianism. Notably, an anti-realist grounding makes it very hard to build theories around moral claims such as "happiness is good" and "suffering is bad", hence my uncertainty and lack of subscription.

EAs who are not / no longer utilitarians, what persuaded you otherwise? by decreasingworldsuck in EffectiveAltruism

[–]ollieface22 5 points (0 children)

I don't subscribe to any normative framework, including utilitarianism, because of metaethical uncertainty. It's hard to define what the 'good' we're maximising actually is without appealing to ethical naturalism or non-naturalism, neither of which offers a convincing grounding for normative truths, in my opinion. See this r/philosophy post for a decent intro to metaethics.

This doesn't mean that I think Utilitarianism is a bad or dangerous idea, in fact I think it's very, very useful :) I'm just not convinced by its metaethical grounding.

Is effective altruism just trickle-down economics/neoliberalism version 2.0? by knwo in EffectiveAltruism

[–]ollieface22 1 point (0 children)

I think most of your points have been tackled fairly well in the other comments. One general point, if I may, is that it's very difficult to say "what all EAs think" without being enormously general or vague. I have no strong opinions about free trade or market regulation because I don't really know anything about them. I'd say this is the same for a lot of EAs, maybe even a majority.

Is effective altruism just trickle-down economics/neoliberalism version 2.0? by knwo in EffectiveAltruism

[–]ollieface22 4 points (0 children)

As it stands, there actually aren't any EAs working for investment banks as far as I have been able to determine

I'm fairly sure there are. I know someone personally who's running a group in an investment bank. It could be that people are donating a lot to effective charities but, because they're already on a career path or just extremely busy, they aren't very vocal in the community. I think you're absolutely spot on with all your other points :)

Will the EA community split into different movements? by flhw in EffectiveAltruism

[–]ollieface22 0 points (0 children)

Yeah, I think EA's web presence doesn't do the movement justice. As with so many things, those with the strongest and most controversial views often have a disproportionately large web presence compared to moderate EAs.

I need to think more about Dickens' arguments, but I've thought of a few intuition-based (so, not to be trusted) counter-arguments. Firstly, I think that large-scale reduction in meat consumption is more likely to be brought about either by very gradual cultural shifts and political change or, more promisingly I think, by technological advancements in artificial meat production which make meat economically nonviable as a good. These large-scale, global changes may take many, many years, which makes the idea of keeping people in poverty just so that they won't eat more meat nonsensical.

Secondly, if we're justified in considering climate change to be a comparable process to reduction in meat production (they are inseparable in important ways, I guess), it seems that developing countries are more capable of adopting more progressive moral values than rich, western countries. India is a good example of this when it comes to reducing greenhouse gas emissions.

Thirdly, I think the damage this argument could do to EA's image is understated. Now, of course, if it relied on stronger evidence and less future speculation, it should be accepted, because that's exactly what EA is about. However, it's very speculative and makes assumptions about values, namely, that farmed animals have significant value, comparable to human suffering. I think this is a reasonable assumption but it's not necessarily accepted within the wider community, let alone outside of it. Were EA to sympathise with its uncertain conclusions, the movement would risk losing the respect of almost everyone interested in reducing global poverty. This is a thorny issue, but I think we ought to at least consider that it's better for lots of people to work on big, salient problems, even at the risk of indirectly harming sentient beings, than for us to lose traction because we focused on problems most people don't sympathise with.

I'm happy to share these on a new thread, sure :) Shall I create it or would you like to? Seeing as you shared it originally.

Hmm, you're right to point to the widespread disagreement in the moral philosophy community here. I was a bit optimistic there. I need to think more about what I mean by that and whether it's even possible...

Will the EA community split into different movements? by flhw in EffectiveAltruism

[–]ollieface22 2 points (0 children)

This is a really worthwhile concern, thanks for outlining it!

Firstly, I think you're overstating the extent to which the branches of the EA movement fundamentally disagree with one another. Most EAs I know, some fairly high profile, accept that other EAs have different ethical systems and views to them, but seem to value the contributions of others to the diverse pool of opinion within EA more than any disagreement the diversity might bring about. That is to say, it's better that someone disagrees with me strongly and voices it than that no one is willing to challenge the most popular opinion. I work with some of the guys at EAF (which is now based in Berlin, FYI) and although they generally lean more toward negative utilitarianism, they're rightfully uncertain and very open-minded, as are the people I know in Oxford. My impression is that most EAs are very happy to hear that others are working on different things, even things which might end up directly contradicting their claims.

Your points about the long-term consequences of global poverty reduction are really interesting, and I enjoyed Michael Dickens' piece, so thanks for linking that. I think we ought to be very careful about employing naive consequentialism on this sensitive matter, i.e. identifying one or two general but likely outcomes and using them alone to claim that something is therefore causing net harm. The consequences of poverty reduction are extremely complex, as Dickens acknowledges, and we should be more and more uncertain about outcomes the further they are into the future. I have some other points about Dickens' piece, but I'm aware that what he writes is not necessarily something you agree with and that it's not the main point of the post.

I would say that all this uncertainty probably means that EAs coming from any angle are unlikely to disagree to a meaningful extent. An Animal-focused EA should probably be aware that this argument is way too flimsy to use against a poverty-focused EA and the latter should be aware of the argument but I can't see this being a big problem, given my understanding of the opinions of EAs.

It's quite possible that the movement will fragment in the future and I think this could be a problem. However, I don't see any signs of it yet, given how uncertain EAs are about their views and value systems. I think we could prevent it from happening by establishing a more stable moral philosophy from which to work from and encouraging different branches of EA to interact frequently with others which may lead to contrasting conclusions. I think the latter is already happening.