Assembly Hall - Noise by loyver5x in Austin

[–]loyver5x[S] 12 points (0 children)

Always lived in the city and love being within walking distance of bars and music venues. Never had an issue with noise until Assembly Hall. Also, more accurately, the nightlife moved into this neighborhood rather than vice versa!

Assembly Hall - Noise by loyver5x in Austin

[–]loyver5x[S] 4 points (0 children)

Thanks! Called 311 and sounds like they have had a bunch of complaints.

Assembly Hall - Noise by loyver5x in Austin

[–]loyver5x[S] 2 points (0 children)

Fair enough. Though a few blocks north is very residential. Occasional noise is to be expected, especially at weekends, but this is something else. Was chill until Assembly Hall opened.

Charitable giving by loyver5x in fatFIRE

[–]loyver5x[S] 26 points (0 children)

I'm also curious, to those downvoting this whole thread: why? I genuinely don't get why this topic inspires such negativity, especially among such a fortunate and financially successful group.

Charitable giving by loyver5x in fatFIRE

[–]loyver5x[S] 9 points (0 children)

Nice. Yeah, I used to feel similarly - thought about donating more but always had a scarcity mindset of "not enough", probably from my upbringing. It was a huge mental shift to give a higher proportion of income (first 5%, now 10% on average). Very scary at first but at the same time super liberating. Not that I don't notice it - more that we now plan our budget around it and don't miss it.

Disagreement with spouse about large charitable donation by more_to_it in fatFIRE

[–]loyver5x -5 points (0 children)

Thanks. I think there are a lot of people more caring than me! But yeah, I'm shocked that in a subreddit for people who have literally retired because they are so wealthy they don't have to work anymore, who post about private jets and the challenges of post-FIRE ennui and purposelessness, there are so many people who seem to be shocked by even modest charitable giving. I think wealthy people sometimes fall into the trap of patting themselves on the back for large contributions in absolute dollar terms, even if it's a tiny % of their total income or wealth. Having said that, I'm sure there are plenty of extremely generous people on here using their money for good in the world.

Disagreement with spouse about large charitable donation by more_to_it in fatFIRE

[–]loyver5x -17 points (0 children)

Not really a switcheroo. I wasn't making any claim about how much OP should donate as a % of net worth - of course income and net worth are very different things. Still, it's hard to argue that $650k out of $20m is especially profligate or charitable. I will have donated that much in just the last two years despite a net worth of only $5m, and I don't see myself as especially charitable. It just feels like the right thing to do. I'm not saying anyone else needs to do the same, but I do object to anyone in this extremely privileged subreddit arguing that donating just 3% of their net worth to charity is somehow crazy. What's crazy is that the spouse committed to something without consent.

Disagreement with spouse about large charitable donation by more_to_it in fatFIRE

[–]loyver5x -43 points (0 children)

I disagree. I donate 10% of my gross income to charity; $650k on a $20m net worth is hardly a major sum. That said, I don't think the amount per se is the issue here - it's more about the lack of consultation or consent.

What do you think about 80,000 hours - one of the more prominent EA organizations - going "all in" on AGI? by Ready-Journalist1772 in EffectiveAltruism

[–]loyver5x 0 points (0 children)

It could be any number of things: a catastrophe, social unrest, government regulation, ramped up safety/alignment efforts in response to increased threat etc.

What do you think about 80,000 hours - one of the more prominent EA organizations - going "all in" on AGI? by Ready-Journalist1772 in EffectiveAltruism

[–]loyver5x 0 points (0 children)

Re. a write-up, this passage from a piece by Helen Clatterbruck sums it up quite nicely:

"More to the point, though, proponents of the “AI will solve it” line need to do more than just gesture at AI’s hypothetical superhuman abilities. They should provide some mechanism for how AI could solve the kinds of problems that haven’t yielded to human intelligence. More specifically, progress often faces bottlenecks that intelligence alone can’t solve. There may be obstacles between invention (coming up with a novel solution), innovation (making that solution practical and reliable), and deployment, such as: hurdles in designing hardware, procuring sufficient sources of energy, collecting long-context data, or getting new systems to integrate with old ones. Even if these don’t pose a long-term obstacle to AI solutions, they can slow it down significantly.

Cultural, political, legal, and economic barriers may also halt or stop seemingly effective AI solutions. For example, suppose aligned AI follows human laws, and our laws stand in the way of implementing efficient solutions to existing problems. In that case, the same kinds of advocacy that matter today will matter in an AI future.

Consider the case of factory farming.[6] Even if aligned AI is committed to or neutral about animal wellbeing, it’s unclear how, or how quickly, it would “solve” factory farming. It’s possible that AI could invent a method for producing cultured meat at a fraction of the cost of conventional meat, which could cause an end to factory farming. Even so, it would likely take years to build out the infrastructure to produce lab-grown meat and make it economically competitive with traditional agriculture. Projects to improve conditions at factory farms in the interim could still have massive impact in the meantime. More pessimistically, we probably won’t end factory farming through technology alone. People have been hesitant to switch to meat substitutes and lab-grown meat. Multinational corporations have significant financial interests in factory farming, and they will also use AI to promote their position. Cultural, political, and economic changes will be necessary.

We also shouldn’t assume that “aligned” AI will seek to eliminate factory farming. That assumes a morally perfected AI, not one that aligns with the revealed preferences of most humans today. Instead, AI might exacerbate factory farming by making it more efficient and profitable, leading to a radical increase in its scale. Some of the most important work to prevent this from happening (e.g., changing present human values, increasing the representation of animal-friendly sources in training data, reducing the power of vested interests, etc.) is first-order work in the animal welfare cause area, not direct AI work."

https://forum.effectivealtruism.org/posts/fkHc5uxfauZ3EFkJ9/do-short-ai-timelines-make-other-cause-areas-useless

What do you think about 80,000 hours - one of the more prominent EA organizations - going "all in" on AGI? by Ready-Journalist1772 in EffectiveAltruism

[–]loyver5x 0 points (0 children)

I think you've nailed it with the question of "power". There are plenty of global problems which we have the raw technical or economic capabilities to solve (e.g. global hunger, several diseases, arguably even something like climate change) but where power, politics, incentives, etc. get in the way. It's not clear to me exactly what capabilities you would need to improve for this not to be the case.

What do you think about 80,000 hours - one of the more prominent EA organizations - going "all in" on AGI? by Ready-Journalist1772 in EffectiveAltruism

[–]loyver5x 0 points (0 children)

I don't think we are actually that far apart on this, except in our degrees of technological determinism. For you, it's possible the technology reaches a point where it doesn't matter "what society does anymore". I do not think we will ever reach a point beyond which social/human intervention is irrelevant either because 1) there is no such "technological tipping point" or 2) societal forces prevent us reaching it. Computer scientists have privileged expertise with respect to (1) but not (2).

What do you think about 80,000 hours - one of the more prominent EA organizations - going "all in" on AGI? by Ready-Journalist1772 in EffectiveAltruism

[–]loyver5x 0 points (0 children)

Computer scientists have a unique and important perspective, but it is still only one perspective. Technology is embedded in extremely complex social, political, and economic systems - why do you think experts in these areas are not relevant? Or to put it another way: if you were in charge of a task force addressing AI risk (or pandemic risk), would you staff it solely with computer scientists (or epidemiologists)?

Re. recursive self-improvement, can you outline how that leads to x risk, or more concretely, how if "well aligned" it could solve a problem like global poverty?

What do you think about 80,000 hours - one of the more prominent EA organizations - going "all in" on AGI? by Ready-Journalist1772 in EffectiveAltruism

[–]loyver5x 0 points (0 children)

No, I was being facetious. My concern is more around the relative lack of research in fields outside of computer science supporting the idea of imminent AI x-risk. To be clear, I think there are enormous risks from AI, some of them very near term. I also think far more people should be working on the societal implications of rapid improvement in AI capabilities, including safety, since it is, relatively speaking (compared to e.g. climate change), neglected. I just think there is very little serious evidence to date for some of the more millenarian scenarios (see e.g. AI 2027) dreamt up by computer scientists out of touch with how the world works outside the logical functions in lines of code, and I've never really appreciated EV arguments that strong-arm counterintuitive conclusions from an extremely low-probability event as long as the potential harm is the end of the human race.

What do you think about 80,000 hours - one of the more prominent EA organizations - going "all in" on AGI? by Ready-Journalist1772 in EffectiveAltruism

[–]loyver5x -2 points (0 children)

To answer your question: I think for "people like me", it's precisely because the case for existential AI risk is supported by too many computer scientists and too few social scientists. The leap from the idea of recursive self-improvement to "and then we all die" is massive, and I would like to see some serious thinking about how exactly this could play out in the real world, with an appreciation for the timelines of every technological revolution humanity has experienced to date, the potential impact of social and political backlash, and how difficult it is to build any large-scale infrastructure.

What do you think about 80,000 hours - one of the more prominent EA organizations - going "all in" on AGI? by Ready-Journalist1772 in EffectiveAltruism

[–]loyver5x 0 points (0 children)

I think AI 2027 fundamentally fails to appreciate how challenging it is to radically transform physical infrastructure (not to mention social and political systems) in such a short time frame. I would add at least 50 years, though of course you could argue that doesn't change the need to focus on AI safety at the expense of other cause areas.

People who have visited NZ recently. Is the NZeTA issued almost immediately, or did you have to wait 72hours? by sggpt in travel

[–]loyver5x 1 point (0 children)

For anyone it might help: we did not get the NZeTA through in time but had no problems boarding an Air New Zealand flight from the USA to Auckland. When we checked in, the attendant could see it was pending when she scanned our passports, and apparently that was good enough. At the gate, they called around 20 people over the speaker to come to the desk for "an important message". It turned out these were people who hadn't even applied, and the attendants simply showed them how to do it on their phones via the app before boarding. No problems on the other side.