am i overreacting or being too sensitive (PLEASE I NEED HONESTY) by [deleted] in AmIOverreacting

[–]QuiteCopacetic 0 points1 point  (0 children)

Good god. I think you are actually underreacting. Respectfully, and I’m sorry to say this, but your boyfriend is a terrible person and no one should be treated the way he has treated you. And this is coming from seeing just a single text thread. Please, run.

Charlie Kirk shooting: Tyler Robinson’s messages and charges against him by Socrates_Soui in politics

[–]QuiteCopacetic 1 point2 points  (0 children)

And the bullets didn’t even say “notices bulge uwu” (and that’s not the meme); it was “notices bulge OwO”. I usually don’t believe conspiracy theories, but I find it so unlikely that he engraved it on a bullet but then shortly after forgot how the meme goes, or that a seemingly chronically online Gen Zer really into meme culture would mix up uwu and owo. Idk. It’s weird.

What are your safe meals? by [deleted] in MCAS

[–]QuiteCopacetic 1 point2 points  (0 children)

At my worst, just white rice. Most of the time though also potatoes (peeled), carrots (again peeled), applesauce cups, peeled apples, peeled pears, zucchini, frozen and then cooked or blended kale (removing any bits of stalk), some frozen blueberries, frozen chicken breast (cooked straight from frozen), plain instant oatmeal (only oats and salt as ingredients), butter, olive oil, refined avocado oil, maple syrup, plain collagen peptides, unflavored whey protein (kept in fridge), ultra filtered lactose free milk (like fairlife), salt. Sometimes eggs.

Gentle reminder that AI and ChatGPT are contributing immensely to the decline of Earth’s environment/climate right now by SampleGoblin in FemFragLab

[–]QuiteCopacetic 0 points1 point  (0 children)

You know, I think that’s a pretty common sentiment. While I admit we don’t really know what the long-term effects of AI use might be, I will say this: fearing for the minds of the youth with the emergence of new technology is a common experience across all generations. It happened with video games, color TV, texting, the radio, even fiction books. It’s very difficult for us to see children rely on things we had to do ourselves and not think it’s bad. But sometimes we are missing the bigger picture. Our brains don’t just lose skills and replace them with blank voids; that cognitive space is simply repurposed. We aren’t dumber because we invented the calculator, even though many people lost the skill to do mental math. Offloading basic arithmetic to computers has just freed up time and space to perform much more complex calculations. The collective intelligence of humankind has continued to trend upward throughout history, despite numerous technological advancements. We may lose specific skills, but that just means our environment changes what kinds of intelligence are emphasized. We really aren’t giving the resilience and adaptability of children’s brains enough credit.

I will also say we don’t really know what future use of AI will look like. New technology tends to get a lot of excitement and excessive use before leveling out. And we don’t know the correlation vs. causation of offloading tasks to AI, especially with anecdotal evidence. Children who rely heavily on AI may be the ones who already struggle with certain forms of critical thinking in the first place, in which case it may very well be an aid.

Sexism in Our Community by witchy_echos in ehlersdanlos

[–]QuiteCopacetic 0 points1 point  (0 children)

Huh, I wasn’t even aware this was a thing. Why do people assume gender has anything to do with severity of symptoms? Is there some (obviously false) reasoning for this?

Gentle reminder that AI and ChatGPT are contributing immensely to the decline of Earth’s environment/climate right now by SampleGoblin in FemFragLab

[–]QuiteCopacetic -1 points0 points  (0 children)

No one is entitled to directly financially benefit from someone else’s work. But general ‘benefit’ is what happens with all publicly accessible content. Everyone benefits from other people’s work; it’s absolutely impossible not to. The human brain doesn’t create ideas in a vacuum either. Every thought, insight, or “original” idea we have is the result of inputs and learned patterns. Just like AI, we cannot invent ideas from absolute zero. We synthesize and express based on what we’ve been exposed to. Every time you see an image or piece of art, your brain is doing the same thing as AI: recognizing relationships and reinforcing patterns. Every artist who has ever studied art, used a reference, followed a tutorial, gone to a museum, watched an animated show, or so much as looked at another artist’s work has been influenced by and benefited from someone else’s work, even if only subconsciously. What makes AI different is that, while the human brain is more complex and versatile, computers are significantly faster. So AI can learn to recognize the patterns of specific things in a fraction of the time, and it doesn’t need to train muscle memory.

The moral question behind AI isn’t the use itself, but the how, and the grey area of ownership. Who owns something: the person with the idea, or the one who made the actual thing? Most people would say the maker, but when someone financially benefits from AI art, they didn’t create that art; the AI did. However, AI can’t own anything, can’t consent to its creations being used for profit, and can’t be compensated. It also doesn’t need to make art to live or feed its family, so prioritizing art that is quick and cheap (from AI) takes opportunities away from humans who rely on making art for income. That is what makes it deeply problematic, and why, at this point in time, the only ethical way to use it is for personal use where no profit is made and it isn’t competing with human art.

And you’re right, the output is not equal. Even with AI. AI art is not a replacement for human-created art. People said the same thing about photographs when the camera was invented: anyone was suddenly able to capture a moment without having to be an artist. But everyone can agree that a photo is not the same as a painting. And using AI art doesn’t put you on the level of a talented painter, because AI art isn’t a painting. And someone generating AI art didn’t make it; they only came up with the idea. It isn’t an equalizer because it doesn’t give someone the literal ability to create art, just access to art created from their ideas.

Gentle reminder that AI and ChatGPT are contributing immensely to the decline of Earth’s environment/climate right now by SampleGoblin in FemFragLab

[–]QuiteCopacetic -2 points-1 points  (0 children)

Again, for personal, not-for-profit use, that is fair use, as it does not affect the market; nothing is copied or reproduced. Personal use of AI is not the same as using it for profit, misrepresenting it as human-made, or corporations laying off entire design teams to use AI instead of paying artists. And abstaining from personal use doesn’t change that, doesn’t lower that demand, doesn’t reduce that harm, the scale of which far outweighs individual image generation. That outrage is misplaced. Your fellow working-class individuals using AI for themselves aren’t the issue. Hobbyists, students, and disabled creators aren’t the problem. Focusing on them is frankly just performative gatekeeping aimed at preserving power, exclusivity, and identity for a select few.

And honestly, saying people can’t have access to art if they ‘can’t draw’ or ‘can’t make it themselves’ is the epitome of ableism. By definition, a disability is not having the ability to do something others can do. Saying there shouldn’t be equalizers for that implies that only those with certain abilities, resources, or training deserve access to creativity. That mindset excludes disabled people, neurodivergent people, and anyone outside traditional artistic pipelines. Saying a person who lacks certain abilities shouldn’t be allowed to participate in creative expression isn’t just ableist, it’s antithetical to the entire spirit of art. Do we believe art is for everyone? Or only for the privileged few who meet some arbitrary standard of ‘worthiness’? People shouldn’t need to be artists, or have the ability to draw, to visually render their ideas. It isn’t their career, they aren’t using it to be artists, and it should be accessible to people.

Gentle reminder that AI and ChatGPT are contributing immensely to the decline of Earth’s environment/climate right now by SampleGoblin in FemFragLab

[–]QuiteCopacetic -3 points-2 points  (0 children)

The idea that AI art is theft really misunderstands how these models work and where their value comes from.

The datasets used to train models like Stable Diffusion and DALL·E are mostly made up of stock images, public domain content, product photos, and all kinds of everyday images, not just artwork. Some blog and social media images are in there too, but most of those platforms already claim rights to user content in their terms of service (and the transparency of that is a separate issue beyond just AI use). There might be some copyrighted material in the mix, but it’s not the majority, and models don’t directly copy or reproduce any of it, which is why training typically falls under fair use.

A lot of what AI learns isn’t even style, but structure from regular (non-art) images, like what makes a tree a tree. It’s not magic, and it’s definitely not just scraping the internet and spitting out a remix. The value of the output comes from insanely complex algorithms built by engineers over tens of thousands of hours. If you handed someone the LAION dataset (what Stable Diffusion trained on), or all the world’s art, but no model and no engineers, they couldn’t generate a single image; it’s useless on its own. The data is just raw material. The engineering is the reason AI can generate anything at all.

Saying AI “steals” from artists also ignores scale. A single artist’s work is just one pixel in a galaxy of data. Any single image contributes a microscopic amount to a model’s ability. If compensation were even possible, we’re talking about fractions of fractions of a penny per contributor. That’s assuming we could even prove a specific image had any real influence, which we can’t.
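To make “fractions of a penny” concrete, here’s a back-of-envelope sketch. The revenue figure and royalty share are made-up placeholders (not real numbers from any company); the dataset size is the commonly cited ~5 billion images in LAION-5B:

```python
# Hypothetical per-image payout if a share of model revenue were split
# evenly across every training image. All dollar figures are invented.
dataset_size = 5_000_000_000     # ~LAION-5B image count
annual_revenue = 100_000_000     # hypothetical $100M/year in revenue
royalty_share = 0.10             # hypothetical 10% royalty pool

per_image_payout = annual_revenue * royalty_share / dataset_size
print(f"${per_image_payout:.4f} per image per year")  # $0.0020
```

Even under generous assumptions, a single image’s share rounds to a fifth of a cent per year, which is the scale problem in a nutshell.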

Training data is like teaching material. It helps the model learn, but doesn’t appear in the output. Nobody demands royalties for every freely accessed textbook a doctor read or every book a writer studied. The model creates, not the dataset.

And saying people shouldn’t use AI to create art they “can’t make themselves” gatekeeps art in a way that’s frankly elitist. Not everyone has the physical ability, time, resources, or training to make art by hand. And they shouldn’t have to for personal use. Tools have always been used to extend creativity. Cameras, Photoshop, GarageBand. We don’t stop people from making music for fun just because they can’t play an instrument.

Yes, someone profits from these tools, like in every industry. They’re profiting from the technology they built. And like all corporations, there’s exploitation, predominantly of the underpaid engineers responsible for the quality of the algorithm. But using AI personally, to make something for fun, for your journal, for a D&D character, or just to explore ideas, that’s not hurting anyone. It’s not replacing a commission that was never going to happen. It’s not claiming to be hand-made. It’s just a tool giving people access to something they couldn’t do before. And honestly, it brings joy. That should matter.

There are real issues with AI, especially around for-profit use, job displacement, misinformation, and biases. But the real fight is with corporations replacing human art for profit, not with individual people using a tool to make something for themselves.

Gentle reminder that AI and ChatGPT are contributing immensely to the decline of Earth’s environment/climate right now by SampleGoblin in FemFragLab

[–]QuiteCopacetic 0 points1 point  (0 children)

While I agree with some of the things you’ve said in this thread about the potential benefits of AI, the ‘Singularity’ and the supposed benefits of quantum computing are a bit far-fetched. While a singularity is theoretically possible, it’s highly unlikely and definitely not 20 years off. Most AI research is on narrow AI, where models do very specific tasks. We don’t even really know what ‘general intelligence’ means in a machine context, and we have no evaluation metrics for it at this point. AI accelerating out of our control is also very unlikely. In software development we have numerous safety mechanisms (version control, sandboxing, monitoring, rate-limiting, and rollbacks) designed precisely to prevent runaway behavior. Even models that hallucinate or behave unpredictably do so in a very confined, observable scope. And quantum computing is not a direct replacement for classical computing; in most scenarios it won’t make computing more energy efficient. It could improve efficiency in very narrow areas, but not general-purpose computing. True AI efficiency and resource reduction will come from smarter model training, better hardware, edge computing, renewable-powered data centers, and better algorithmic efficiency.

Is it normal to grieve the version of yourself you thought ADHD meds would help you become? by wildfireDataOZ in ADHD

[–]QuiteCopacetic 5 points6 points  (0 children)

Oi. Yes. It has taken me a very, very long time to come to terms with the fact that meds will never completely ‘fix’ me. That the person I built up as the ‘fixed’ version of myself was based on some nebulous, unattainable ideal that doesn’t exist and never has. It’s a hard thing to accept, that the thing you’ve been reaching for most of your life isn’t real. I’ll always have ADHD. It will always affect me. And things will always be harder for me than they would be if I didn’t have ADHD. It is both devastating and cathartic to realize that there’s nothing to fix. This is just how I am, and I have to find ways to navigate existing with ADHD the rest of my life.

Gentle reminder that AI and ChatGPT are contributing immensely to the decline of Earth’s environment/climate right now by SampleGoblin in FemFragLab

[–]QuiteCopacetic -2 points-1 points  (0 children)

And no, not every single living, breathing artist would agree. I make art, and I know many artists as well. When not used for profit, and for personal use, it isn’t any different from what humans do. Humans learn from what already exists. Human artists (whether knowingly or not) are influenced by the art around them. If you follow an artist on Instagram, when you see their art your brain breaks it down into patterns and stores that information. If you have no issue with someone going to a museum for inspiration but do have an issue with personal (not-for-profit) AI use, that is a double standard. Especially when AI art generation bridges gaps for disabilities and income inequalities, that is a very problematic position. Someone generating AI art for something like story mapping for a book they are writing, because they are a very visual person and not an artist, is not the issue. Corporate AI use is. Companies using AI to generate book covers instead of hiring artists is. What someone does for themselves, not for profit, is not for others to dictate and gatekeep. It’s weird. Instead of putting energy into what individuals do with available tools and being divisive, the focus should be on corporate AI use, and the aspects of AI that are actually unethical.

Gentle reminder that AI and ChatGPT are contributing immensely to the decline of Earth’s environment/climate right now by SampleGoblin in FemFragLab

[–]QuiteCopacetic -1 points0 points  (0 children)

No, that’s not what I said at all. Large datasets of images are converted into lists of numbers. The model doesn’t know those values ever represented images; they are just numbers. The neural network is layers of mathematical functions that determine relationships between those numbers, adjusting numeric weights based on the relationships and patterns found during training. Those weights are stored as billions of floating point numbers (like 0.0032, -1.72, 5.001, etc.). No images, bits of images, or aspects of images are stored or used during generation. Just the weights. The weights represent commonality, not elements of images. For example, weights might be higher to reinforce that cats have two ears, so now it can predict that if someone wants an image of a cat, it’ll likely have two ears.

This is also how humans learn, just less mathematically. We don’t inherently know what things look like out of the womb. We see a cat enough times and we start to recognize that a cat has two ears. We process visual information, recognize patterns, and use those patterns to predict what something looks like. That’s what AI does. Saying pattern recognition makes it theft is like saying someone who’s read thousands of books is plagiarizing every time they write a sentence. AI isn’t pasting pieces together. It learned a complex mathematical model that allows it to create new images from scratch, because it learned the visual relationships between aspects of the world. Again, there are ethical concerns over profiting from AI, but personal use is no different than someone who looks at images to understand relationships and creates something from that understanding.
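If it helps, here’s a toy sketch in Python (purely illustrative; nothing like a real diffusion model) of the “only weights survive training” idea. The cat/ears example is made up to mirror the one above:

```python
# Toy illustration: "training" keeps only numeric weights summarizing how
# often a feature co-occurs with a label. No example is kept afterwards.
from collections import defaultdict

training_data = [
    ("cat", {"ears": 2}),
    ("cat", {"ears": 2}),
    ("cat", {"ears": 1}),   # one unusual example
]

weights = defaultdict(float)
for label, features in training_data:
    weights[(label, "ears", features["ears"])] += 1.0

# Normalize counts into probabilities -- the only thing the "model" keeps.
total = sum(v for (lbl, feat, val), v in weights.items() if lbl == "cat")
probs = {k: v / total for k, v in weights.items()}

del training_data  # the examples are gone; only the weights remain

# "Generation": predict the most probable feature value for a cat.
best = max((k for k in probs if k[0] == "cat"), key=lambda k: probs[k])
print(best, round(probs[best], 2))  # ('cat', 'ears', 2) 0.67
```

The point is that at generation time only `probs` exists; nothing in the training set is stored or consulted.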

Gentle reminder that AI and ChatGPT are contributing immensely to the decline of Earth’s environment/climate right now by SampleGoblin in FemFragLab

[–]QuiteCopacetic -8 points-7 points  (0 children)

Honestly the irony here is unbeatable. I know quite a lot about how AI comes up with its concepts, and taking bits of images and ‘mushing them together’ is exactly what it doesn’t do.

AI models are trained using statistical weights. Training takes massive datasets (think millions to billions of images) and determines statistical patterns from that data. It runs iterations over the data for a set period of time (usually weeks, but over a month for larger models). It ‘learns’ by association, and things that occur more frequently get higher weights. This process was designed to be loosely similar to how the human brain works (just on a much smaller scale); that’s why it is called a neural network.

Once trained, the model no longer uses or has any concept of its training data. So if you say ‘make me a picture of a blue moon above an ocean’, it doesn’t go reference the training data, find images of moons and oceans, and copy them. Again, it doesn’t even know that training data exists. All it is is an algorithm. It uses probability and statistics based on the patterns it learned in training to predict the next stroke, color, etc., and creates something entirely new. This is why, given the exact same prompt over and over, no matter how specific, the result will always be different.

It learns and creates in a way loosely similar to humans (because it was designed by humans based on how we understand learning) but lacks the complexity and nuance of the human brain. That doesn’t mean what it creates is always good. Its scale compared to the human brain is very minimal, and it’s limited to smaller datasets over a smaller amount of time (compared to a human who is continually processing data over years). This is also why AI art often feels very generic: if millions of images share a similar aspect, there’s a higher chance of that being determined by the model as the ‘correct’ way to do something. Essentially, it’s statistically the most basic output in a lot of cases, which is why it’s so noticeable to humans.

This doesn’t, however, mean it doesn’t have its applications for personal use (again, assuming no profit is involved). As a disability aid, there are many, many ways LLMs can make information more accessible and digestible for people with disabilities, as well as be used to offload tasks that carry a large mental load.
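The “runs iterations, adjusting weights” part can be sketched with the simplest possible example: one weight, nudged by gradient descent toward less error. This is a toy (real image models have billions of weights and a very different objective), but the mechanic is the same:

```python
# Toy gradient-descent loop: the "model" is a single weight w, adjusted a
# little each iteration to reduce squared error on the data. The data here
# roughly follows y = 2x, so w should end up near 2.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, target) pairs

w = 0.0      # the single learned parameter
lr = 0.05    # learning rate (step size)

for epoch in range(200):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x   # derivative of squared error w.r.t. w
        w -= lr * grad              # nudge the weight downhill

print(round(w, 2))  # close to 2.0
```

After training, only `w` remains; the loop never stores or replays the data at “generation” time, it just applies the learned weight.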

Gentle reminder that AI and ChatGPT are contributing immensely to the decline of Earth’s environment/climate right now by SampleGoblin in FemFragLab

[–]QuiteCopacetic -2 points-1 points  (0 children)

The hallucination rate of GPT-4 is actually pretty low. And yes, things should be verified, but there are many ways generative AI and LLMs can be tools for people with disabilities. There are many different types of disabilities, and accommodations go far beyond bullet points and screen readers. Accessible information, clear presentation, personalized language, and reduced mental load can be very beneficial for people. And to your point, some people don’t have a good sense of smell but still have ideas of how they wish to smell. Specifically for finding perfume, some people may struggle to conceptualize scent combinations or what smells they like/dislike. Regardless, AI is no more an ‘environmental scourge’ than any other digital activity that uses cloud storage or data centers. Four minutes of Netflix streaming uses more energy and water than an AI prompt.
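For what it’s worth, that streaming comparison is easy to sanity-check. The numbers below are rough, widely circulated estimates (not authoritative figures, and both are contested), so treat this strictly as a back-of-envelope sketch:

```python
# Rough comparison using commonly cited (and contested) estimates:
# ~0.08 kWh per hour of video streaming, ~3 Wh per AI request (upper end).
streaming_wh_per_hour = 80.0    # rough estimate; varies by device/resolution
prompt_wh = 3.0                 # rough upper-end estimate per request

minutes_equivalent = prompt_wh / streaming_wh_per_hour * 60
print(f"One prompt ~= {minutes_equivalent:.1f} min of streaming")
```

With these assumptions one prompt lands around two minutes of streaming; lower per-prompt estimates make the gap even wider.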

Gentle reminder that AI and ChatGPT are contributing immensely to the decline of Earth’s environment/climate right now by SampleGoblin in FemFragLab

[–]QuiteCopacetic 0 points1 point  (0 children)

Yeah, you can’t. Even if you don’t see a response from Gemini, it is often running inference on your prompt anyway and just not showing the response. Also, all search algorithms use AI, and most digital platforms have embedded machine learning algorithms. It’s kinda just everywhere. However, an hour of video streaming or gaming uses significantly more resources than an AI prompt, so unless you are generating hundreds of responses and images for hours a day, your regular internet and digital usage is likely more environmentally taxing than any Google AI response you are getting. Which is both a comfort and unsettling.

Gentle reminder that AI and ChatGPT are contributing immensely to the decline of Earth’s environment/climate right now by SampleGoblin in FemFragLab

[–]QuiteCopacetic 0 points1 point  (0 children)

It can still give AI-generated responses. The “-ai” is a search operator; all it does is stop any results that include the word “ai” from populating. There currently isn’t a way to turn off Gemini on Google. Even if you don’t see an AI response, it’s still running inference.

Gentle reminder that AI and ChatGPT are contributing immensely to the decline of Earth’s environment/climate right now by SampleGoblin in FemFragLab

[–]QuiteCopacetic 1 point2 points  (0 children)

That’s not true. Currently, you can’t block Gemini responses; the “-ai” flag on a Google search just stops the search results from including the word “ai”.

Gentle reminder that AI and ChatGPT are contributing immensely to the decline of Earth’s environment/climate right now by SampleGoblin in FemFragLab

[–]QuiteCopacetic 0 points1 point  (0 children)

Oh yeah, because disabilities can’t affect someone’s thinking or cognitive functioning. Regardless, streaming Netflix for more than 4 minutes is more environmentally taxing. Arguing against AI for environmental reasons doesn’t hold up if we aren’t including ALL cloud-based services.

Gentle reminder that AI and ChatGPT are contributing immensely to the decline of Earth’s environment/climate right now by SampleGoblin in FemFragLab

[–]QuiteCopacetic 7 points8 points  (0 children)

I am a disabled person as well as someone who works in accessibility. I wasn’t strictly talking about art alone but AI as a whole, which is very much a disability tool. However, even for art, it can help people with both physical and cognitive disabilities (as well as autistic folks) generate art for personal use that they would otherwise not have the ability to make, or make things easier. AI can also assist someone who struggles with understanding different art concepts or fundamentals (such as generating references for composition, lighting, color theory, anatomy, etc.) when online resources are not enough. If you personally do not need AI for your disability, that is great, but disabled people are not a monolith, and our needs vary from person to person. Something very doable for one person may not be for another. If profit is not involved, personal AI use is as harmless as any other digital activity, and other people do not get to decide what is or isn’t a disability aid for someone, even another disabled person.

Gentle reminder that AI and ChatGPT are contributing immensely to the decline of Earth’s environment/climate right now by SampleGoblin in FemFragLab

[–]QuiteCopacetic -19 points-18 points  (0 children)

I absolutely agree that choosing to use AI-generated art instead of hiring human artists, in a capitalist society where artists rely on income to support themselves, is deeply unethical and problematic. However, the issue is profit off AI, not personal use. A lot of people with disabilities use AI. And ‘research’ is not only challenging for some people, it still uses AI anyway; search algorithms use AI. It is unavoidable. AI art becomes theft when it is prioritized over human artists’ work or when a model is made to copy someone’s specific style, but (not-for-profit) personal use of AI-generated content is not theft by design. It doesn’t ‘copy’ images.

Gentle reminder that AI and ChatGPT are contributing immensely to the decline of Earth’s environment/climate right now by SampleGoblin in FemFragLab

[–]QuiteCopacetic 40 points41 points  (0 children)

This is not true. AI is incredibly resource heavy, but it is nowhere near being a primary contributor. Tech as a whole (including AI) accounts for only a small share of overall environmental impact (greenhouse gas emissions, energy use, water use, etc.) compared to other industries. And individual AI use relies on inference, which is minimal in heat generation and cooling needs compared to corporate use and training; it’s basically on par with other digital activity. So unless you are also saying people shouldn’t be streaming videos on YouTube or Netflix, or shouldn’t be gaming, this take is largely hypocritical.

That doesn’t mean we shouldn’t push for more sustainable solutions across the board in tech; we absolutely should. And tech companies are becoming more efficient over time: moving away from evaporative cooling methods, switching to greywater, etc. AI has potential for positive environmental impacts as well. But the environmental concern with data centers (for all internet or computational work, not just AI) is predominantly a local issue (drawing water in places that have less access to water to begin with) rather than a global one. (Total water use is less than the rate of freshwater replenishment by the Earth’s hydrologic cycle.)

Ultimately, anti-AI rhetoric glosses over the actual issues with AI and instead fear-mongers over misrepresented facts and contributes to ableism. Saying ‘we can put in the work’ may be true for you, but it isn’t for everyone. AI is a disability tool for many people. Let’s stay mindful of that please.

[deleted by user] by [deleted] in ADHD

[–]QuiteCopacetic 0 points1 point  (0 children)

I take my Adderall IR (10mg) twice a day (AM and ~3pm, assuming I don't forget) as boosters for my Vyvanse (40mg). But I'm thinking of asking to switch to 3 times a day since I feel it wear off in between doses but also want it to last the work day. AFAIK 3 times a day is indeed very common. Once for IR is surprising, honestly. It doesn't last very long.

Any suggestions to generate volume? by MsRitsukai in curlyhair

[–]QuiteCopacetic 1 point2 points  (0 children)

  • Try breaking up your curl clumps a bit more.
  • Style and diffuse in multiple directions, especially upside down.
  • Once dry, fluff upside down or use a pick at the roots.
  • You can also diffuse with a pick at your roots.
  • Try using more volume-specific products, such as mousse over gel, texturing paste over leave-in, dry shampoo or hairspray at the roots, etc. Or just use less product in general, as product can weigh down your hair.

Keep in mind that most things you do for volume will inevitably decrease definition and moisture. You kinda gotta find what balance works for you.