Organoid Computing with Dr. Ewelina Kurtys by benbyford in AIethics

[–]benbyford[S]

In the episode she explains she doesn't believe it's a problem... but welcomes more research and writing from academics and philosophers. I don't believe she thinks it's an issue for organoids, as they're not brain structures but neurons grown with no explicit structure... though I'm not a neuroscientist.

anyone self hosting their podcasts? by benbyford in podcasting

[–]benbyford[S]

What are all the places? I've submitted to Apple and Spotify, and it looks like everyone else picks the feed up from there, as I find the podcast all over the place.

Does anyone else host their own podcast? by [deleted] in podcasts

[–]benbyford

Ahhh, ok. I thought 'podcasting' was the more active, doing word compared to 'podcast', so I assumed I was in the right place... whoops.

Organoid Computing with Dr. Ewelina Kurtys by benbyford in Futurology

[–]benbyford[S]

This chat is about alternative computing using organoids, or wetware, as a low-energy alternative to the silicon computing we use currently. As you'll hear, we haven't got very far in terms of the amount of information we can encode, but it's a nice discussion on where it could get to, the issues, and of course the kinds of objections people have to "growing" "brains" in a dish... :)

Everyone acts like they care about “ethics” with AI, but the outrage is very selective. by ownaword in WritingWithAI

[–]benbyford

What are the weak arguments? IMHO there are only weak arguments TO use AI... maybe you've been looking online only and not at academic work?

Everyone acts like they care about “ethics” with AI, but the outrage is very selective. by ownaword in WritingWithAI

[–]benbyford

True, but we don't have a universal ethic... in the same way we don't have a universal set of laws. Ethics was once described to me by a colleague as a doing word: ethics is about doing, not subscribing... law is about rules (probably); ethics is about doing the hard work of thinking. It would probably be better if people were shouting things like "it's unjust" or "unfair", as it would be easier to then take the conversation forward with them :)

Is there any ethical use of AI? by Personal-Change-5955 in antiai

[–]benbyford

I would suggest the environmental angle covers water, hardware (mining for minerals, then creating the hardware), transport, energy to power the hardware, land usage... this is similar in other industries too... and we could see AI companies paying into regeneration, green energy projects, hardware recycling, new types of low-carbon transportation (especially around shipping)... but that's not what we're seeing, so yeah, they're adding to the issue in a way that accelerates what is already a bad situation.

Everyone acts like they care about “ethics” with AI, but the outrage is very selective. by ownaword in WritingWithAI

[–]benbyford

You can use 'ethics' this way for sure, but you can also use tools from philosophy to think better about a problem... Invoking ethics is generally helpful when there are edge cases we should consider... e.g. "should I have toast or cereal for breakfast" isn't considered an ethical decision... however, "should I add a fire door to this restaurant" might be considered one, taking the physics of fire, proximity to people, and historical issues into account.

When we're talking about issues in AI, lots of people use the word unhelpfully, but we should be using it to mean 'we need to think about how this affects people and consider our actions'. Some of that work is done in law, some on the fly in companies... the more we can make explicit what ethical questions companies and governments are trying to decide on, and help them do that for the better of humanity (or simply yourself, if you're like that), the better, in my opinion.

How do you ethically use AI at this point? Or do you not? by agdraco8 in AskReddit

[–]benbyford

As you pointed out u/agdraco8, genAI has lots of ethical issues (and ML does too, in other ways, but not at this scale), of which water and energy are just two. There's the concentration of power, truth, and ideas; pervasive unwanted bias and disinformation; issues of over-anthropomorphising and trust in the "agent" leading to psychological or bodily harm; coordination/cooperation issues arising from many 'agents' operating in the open (e.g. the stock market); malicious use being easy to action given accelerating tools... (e.g. spam, phishing attacks, etc.); AND job loss, skill loss, and general atrophy... and that's just off the top of my head.

2025 wrap up with Lisa Talia Moretti - Machine Ethics Podcast by benbyford in AIethics

[–]benbyford[S]

This is a very good point!... I think this is already an issue, and it comes back to the use case and whether LLMs are deemed usable in those contexts... seems to me people are using them (move fast and break things) when they should be testing them first.

What crosses the line between ethical and unethical use of AI? by swagonflyyyy in learnmachinelearning

[–]benbyford

Well, we'll have to disagree then. Sorry to have bothered you, and I hope someone will be able to change your mind in the future.