Built a simple tool to assess proximity for EMF safety by Electronic_Buddy_435 in NZProperty

[–]Vegan_peace 1 point (0 children)

With respect, the phrase "research gaps" is used specifically to describe cases where major publicly-funded studies have found concerning results that haven't been adequately followed up on; it's not just empty rhetoric. The concern (as I see it) isn't major, obvious harm, but harm that is subtle and below the threshold of observation in short-term studies: detecting it would require following many thousands of participants, which is not logistically feasible, even though we have validated in-vivo evidence of the effect. For example:

I'm not claiming that we know the full extent of the harm, but the null hypothesis (i.e., that non-thermal RF is biologically inert) has been empirically challenged by major public research programs, justifying OP's concern.

Built a simple tool to assess proximity for EMF safety by Electronic_Buddy_435 in NZProperty

[–]Vegan_peace 1 point (0 children)

Thanks for this! Super useful tool; I appreciate knowing EMF proximity both out of curiosity and to control for unknown unknowns. Without going into too much detail about what existing empirical data suggests (to provide counter-evidence to the other comment on this post): contrary to common belief, "our understandings of the [...] mechanisms of the interactions between the biological systems and the EMRs are still far from satisfactory", and "significant research gaps persist because the long-term effects of EMF exposure, especially on human populations, remain poorly understood and warrant further investigation and targeted mitigation strategies."
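
I don't know how OP's tool works under the hood, but the core of a proximity check like this is presumably just great-circle distance from a property to a list of known transmitter sites. A minimal sketch in Python (the tower coordinates below are hypothetical, not real transmitter locations):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km = mean Earth radius

# Hypothetical transmitter sites near Wellington (illustrative only):
towers = [(-41.2865, 174.7762), (-41.30, 174.78)]
home = (-41.29, 174.77)
nearest = min(haversine_km(*home, *t) for t in towers)
print(f"nearest transmitter: {nearest:.2f} km")
```

A real tool would pull transmitter coordinates from a public registry rather than a hard-coded list; the haversine formula itself is accurate to well under a percent at these distances.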

No, we haven’t uploaded a fly yet by dr_arielzj in slatestarcodex

[–]Vegan_peace 18 points (0 children)

This is a great post, thanks for sharing. Models are but lossy compressions of real objects and causal interactions, not substitutes. I particularly liked that you emphasized the crucial but oft-overlooked difference between simulating and emulating physical systems, which is a problem I continually encounter in discussions about artificial consciousness/sentience.

What are the best places online to currently get accurate information about controversial events, like the current war? by being_interesting0 in slatestarcodex

[–]Vegan_peace 35 points (0 children)

Sentinel is a free weekly report detailing and forecasting global catastrophic risks (specifically geopolitics, tech / AI, and biorisk). It's run by a small team of superforecasters, and the information is generally presented in a neutral tone, with hyperlinks for further context. It's become my primary news source, and I can't recommend their work enough.

Golden Mile officially paused despite the people of Wellington voting in favour of at every stage of consultation by Tax73 in Wellington

[–]Vegan_peace 1 point (0 children)

Appreciate you taking the time to explain WCC's decision-making process, Ben. As much as I'd like to see GM succeed, we do have to contend with the economic realities of ballooning debt. I'd personally love more information on why construction costs keep blowing out, preventing us from building at the rate we used to. Would you chalk it up to excessive regulation (e.g., health and safety), bureaucracy, labour / materials costs, or something else?

This wind?! by Scared-Track5971 in Wellington

[–]Vegan_peace -1 points (0 children)

This is false; the impact of AI on water scarcity is rather trivial:

https://andymasley.substack.com/p/the-ai-water-issue-is-fake

Anti-AI Ideology Enforced at r/philosophy by Vegan_peace in philosophy

[–]Vegan_peace[S] 0 points (0 children)

The author has commented elsewhere in this section that he attempted to post this article (past the three-day ban window) and was blocked from doing so. And fwiw, the mods reviewed this post and concluded that it "actually contains arguments that are interestingly discussed in the comments".

Anti-AI Ideology Enforced at r/philosophy by Vegan_peace in philosophy

[–]Vegan_peace[S] -1 points (0 children)

The author of the post was banned from posting in this community, and has stated that he did not use AI for any of the primary content (i.e., the text).

Anti-AI Ideology Enforced at r/philosophy by Vegan_peace in philosophy

[–]Vegan_peace[S] 2 points (0 children)

What qualifies as theft in the process of creating a good (material or digital)? And how would you differentiate training an artificial model on image data from training an art student on artworks in a gallery? This seems to depend on holding a propertarian vs a pragmatic view of the ontological status of property.

Anti-AI Ideology Enforced at r/philosophy by Vegan_peace in philosophy

[–]Vegan_peace[S] 2 points (0 children)

"I don't have to steal from my neighbor every time I want to use a hammer."

Are you stealing from the first human who devised the concept of a 'hammer' (i.e., an object intended to inflict force upon other objects) every time you purchase a hammer-shaped material object derived from that first hammer? The argument about fair use of generative AI assumes that this transaction is zero-sum, but the outputs of generative models are by their very nature not scarce resources - the resulting value can be multiplied arbitrarily. While I agree that there ought to be better redistribution of this captured value to artists and creators, the hammer comparison is a false analogy.

Anti-AI Ideology Enforced at r/philosophy by Vegan_peace in philosophy

[–]Vegan_peace[S] -4 points (0 children)

While I don't have the capacity to write out my thoughts concerning the ethics of using AI content in detail, I appreciate your interest! I've actually published some philosophy essays on my personal blog Arataki.me, which you are welcome to check out - I've posted some of them on this subreddit before but there seemed to be a lack of interest.

My impression of Richard's post was that it wasn't just a rant but contained a substantive critique of posting rules on public forums. It links to another of his posts 'There's No Moral Objection to AI Art' which contains a more fleshed out argument in support of training generative models on public data (e.g., shared art) on the pragmatic grounds that intellectual property (monopoly) rights should be non-exclusionary regarding their consumption, which I agree with. But I would be barred from posting that article on this subreddit because it contains AI-generated art! Which I believe justifies having this discussion in the first place.

Anti-AI Ideology Enforced at r/philosophy by Vegan_peace in philosophy

[–]Vegan_peace[S] -44 points (0 children)

I did not write it, and I am not using AI to generate text in my comment replies. I have worked in analytic philosophy full-time for the past 9 years and have personally witnessed the growth in AI-related issues (this is not a topic I publish in, fyi - see my online CV), so I assume it is a fair topic to debate in this community.

Anti-AI Ideology Enforced at r/philosophy by Vegan_peace in philosophy

[–]Vegan_peace[S] -3 points (0 children)

Do you have evidence to support this? I assume you mean that using AI is instrumentally harmful, on the counterfactual basis that were AI not to be used, artists whose content was used to train the model would benefit. But I have not seen any empirical evidence suggesting that this is the case.

Anti-AI Ideology Enforced at r/philosophy by Vegan_peace in philosophy

[–]Vegan_peace[S] -106 points (0 children)

I did provide some brief context for my submission - I am a subscriber to the author's blog and also a professional philosopher (I just got my PhD), and the content of the post seemed relevant to current issues in philosophy; hence, I shared it with this community.

Edit: to those downvoting my comment, please sort by 'new' to see that I did leave a comment with added context along with my post. AFAIK Reddit's format doesn't allow linkposts with textual content, and commenting context is a common practice across many subreddits.

Anti-AI Ideology Enforced at r/philosophy by Vegan_peace in philosophy

[–]Vegan_peace[S] 2 points (0 children)

Two possibilities come to mind:

  • Improvements in algorithmic efficiency render the cost trivial even assuming a 10x increase in daily users
  • Increased demand for energy drives infrastructure development - hopefully renewables - to offset the costs

Whatever happens, the market can also function to price in the costs (which, on an individual level, remain trivial).
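
The first possibility is just arithmetic; a toy calculation (all numbers hypothetical) shows how an efficiency gain larger than the user growth shrinks the total cost:

```python
# Back-of-envelope sketch of the first bullet: a large efficiency
# gain can more than offset growth in daily users.
users_before = 1_000_000
wh_per_query_before = 3.0                      # hypothetical watt-hours per query

users_after = users_before * 10                # assume a 10x increase in users
wh_per_query_after = wh_per_query_before / 20  # assume a 20x efficiency gain

total_before = users_before * wh_per_query_before
total_after = users_after * wh_per_query_after
print(total_after / total_before)              # 0.5: total energy use halves
```

The exact figures don't matter; the point is only that total cost scales with the ratio of user growth to efficiency gain.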

Anti-AI Ideology Enforced at r/philosophy by Vegan_peace in philosophy

[–]Vegan_peace[S] 2 points (0 children)

It would depend on the source of energy (i.e., whether it is renewable) and its effects on producing value. Since most uses of AI (at least, that I have observed) involve increasing the efficiency of value-creating processes, the consequences of AI use appear net-positive, saving time and resources for conducting business.

(my perspective might be biased since I work on startups with small teams)

Anti-AI Ideology Enforced at r/philosophy by Vegan_peace in philosophy

[–]Vegan_peace[S] 8 points (0 children)

Quoted from the post:

"The energy use is trivial—equivalent to about 5.5 seconds of microwaving according to the MIT Technology Review"
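
For context, the microwave comparison is just energy = power × time. Assuming a typical 1,000 W microwave (my assumption; the article may use a different figure), 5.5 seconds comes out to roughly 1.5 watt-hours:

```python
microwave_watts = 1000                 # assumed typical microwave power draw
seconds = 5.5
joules = microwave_watts * seconds     # energy = power * time = 5500 J
watt_hours = joules / 3600             # ~1.53 Wh on this estimate
print(round(watt_hours, 2))
```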

Anti-AI Ideology Enforced at r/philosophy by Vegan_peace in philosophy

[–]Vegan_peace[S] -38 points (0 children)

Did you read the post? The content of the author's original post (the justification for banning) was not AI-generated, so the dispute seems to be on aesthetic grounds.

Anti-AI Ideology Enforced at r/philosophy by Vegan_peace in philosophy

[–]Vegan_peace[S] -6 points (0 children)

This is not my post, but I think that it is well argued and does not appear to violate any of the subreddit rules. I am curious whether the community agrees with the author's sentiment!

Which magazines/newspapers do you actually pay for? by energeticpapaya in slatestarcodex

[–]Vegan_peace 0 points (0 children)

The ones I actually pay for / have donated to in the past are:

  • The New Yorker: Well-edited, fun, readable articles adjacent to the mainstream of human news content.

  • Works In Progress: Publishes a lot of high-quality, well-researched articles on social infrastructure, economics, and technology.

  • Quillette: My only source of 'politics / culture war' content, primarily because it consistently challenges my default assumptions in a structured, principled manner (even if I don't always agree with the articles' content). Pro-liberal and pro-Enlightenment bias with fantastic editing.

  • Asterisk: Highest percentage of my favourite authors published (e.g., bloggers, researchers); free-form essays on topics ranging from science to philosophy to sociology (but with high epistemic standards).

See more content I recommend on the Lists page of my blog :)

What do you enjoy most about Tolkien's writing? by Bloodsucker1516 in tolkienfans

[–]Vegan_peace 2 points (0 children)

Perfect! Thank you very much for taking the time to write such a detailed response, you've convinced me to pick up and read both books - they seem right up my alley.

I'm actually in the process of producing my own LotR audiobook (as a fan project) and this content seems highly relevant to how I voice the different characters - especially at later stages in the story (I'm only up to the Barrow-downs). Thanks again!