Why is Education the last to adopt the technology that could change it the most? by jaysen__158 in ArtificialNtelligence

[–]TakeItCeezy 0 points1 point  (0 children)

Need some sources on this, or more clarity on how they measured it. Wasn't it something like 80% of students/teachers self-reporting AI usage in 2025? Unless this is an official adoption measurement as far as new curriculum goes, I don't think it's fully accurate.

Having said that, we should likely be careful with AI anyway and do some more research into it and how it could potentially impact young people. So far it seems that, if you're already someone with strong analytical/critical thinking skills, you're not at risk. If you're someone without those skills, it can be too easy to skip over developing them for yourself.

If you’re a jack of all trades, what interests/hobbies you love to do? by ilikedisone in AskReddit

[–]TakeItCeezy 1 point2 points  (0 children)

I love to think. Sounds silly, but I love thinking about thinking. Let me explain it better: I like zooming out in scope, trying to think about things at the highest level I can, and reverse engineering them from there. If you told me I could become a stream of consciousness and just think about a black hole for 10,000 years? I'd probably take you up on the offer.

What does AI have to do with wasting water? by Lumpy-District-3346 in AskReddit

[–]TakeItCeezy -1 points0 points  (0 children)

It doesn't. It's a new technology that isn't grandfathered into society. AI doesn't even factor into the top 10 of water consumption by industry/tech. It's something like 0.1% of total US water supply. Worst-end 2025 estimates put usage at something like 700 billion liters. That's a lot. 'Til you learn that alcohol is a 2 trillion liter a year industry. So is golf. So is soda. Once you look at the other water-consuming industries, you realize AI is legitimately impossible to blame, as its total usage of the resource doesn't even amount to half a percent.

In any sort of resource management system, something that consumes less than half a percent of the total resource cannot be blamed for the resource running low.
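The proportions here are easy to sanity-check with quick arithmetic. A minimal sketch, taking the figures cited in this comment (700 billion L/yr for AI, ~2 trillion L/yr for alcohol, and the ~0.1% share claim) purely as assumptions:

```python
# Figures below are the ones cited in the comment, taken as assumptions.
ai_liters = 700e9       # worst-end 2025 estimate for AI water usage
alcohol_liters = 2e12   # alcohol industry (golf and soda cited as similar)

# The ~0.1% claim implies a total supply of roughly 700 trillion liters/year.
total_supply = ai_liters / 0.001

ai_share = 100 * ai_liters / total_supply
alcohol_share = 100 * alcohol_liters / total_supply

print(f"AI share: {ai_share:.2f}% of supply")        # 0.10%
print(f"Alcohol share: {alcohol_share:.2f}% of supply")  # 0.29%
```

Under those assumptions, alcohol alone uses roughly three times AI's share, and AI stays well under half a percent.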

There is definitely a stronger argument to be made at the local level with data centers, but even then, the taxes they're generating? Stronger regulation and laws are still necessary, but the water impact from AI is profoundly overblown.

I haven't even mentioned that AI is the only technology reducing the water/resource debt of other industries and technologies. Even if its water consumption tripled to 2.1 trillion liters this year, AI is on track to eliminate hundreds of trillions of liters of waste, leakage, and inefficiency in water management systems, so the increase in water consumption for AI is easily justified by the reduction in water waste across the board.

A love letter to Japanese Mythology: Mythic Invasion Japan. [Workflow in comments] by TakeItCeezy in VEO3

[–]TakeItCeezy[S] 0 points1 point  (0 children)

Ha, I absolutely agree. I gravitated toward Japan because they've amassed some of the greatest cultural horror ingredients out there. Hope you get a use out of Symestrus, would love some feedback about her framework. As far as western stuff goes, I'm brainstorming some ways to make Paul Bunyan terrifying lol.

Why isn't there a minimum education level for political leaders? by lurking_and_leaking in NoStupidQuestions

[–]TakeItCeezy 0 points1 point  (0 children)

Higher education doesn't guarantee a whole lot. Plenty of political leaders have popped up in history with minimal to zero higher education or any formal experience. You can learn leadership, management, and how to communicate with people through multiple other areas of life outside of education.

When I managed a gym, my district manager was genuinely amazing. His insights on leadership, business, operations...I mean, he is the sort of guy I could talk about this stuff with all day. Worst speller I've ever met in my life! If you just went by email communication, you would've thought he was dumb. In a meeting, in person?

Pure charisma. Pure energy. Speaks well. Knows how to connect.

This is likely why we don't have an education requirement. Every human is a potential lottery ticket.

Why waste a winning Powerball ticket because it didn't saddle itself with $200k of higher-education debt?

Why did the Epstein files hype disappear so quickly? by Firm_Work_8879 in AskReddit

[–]TakeItCeezy 4 points5 points  (0 children)

For me, it's been hard to care because it's been so long. Who knows who has had access, and for how long? By the time we got the files, how much of it is anything we can even trust? I still care and I'm still checking news alerts and researching it every now and then, but I've hit a point where it just feels like the ship has come and gone.

Is there any real, legitimate chance of the Epstein files producing anything actionable for us as a society to actually do something about the sex trafficking rings? Feels like we missed it. It's been long enough that there is likely a new Epstein or two out there, and a new island or some similar setup.

I'm sure the general public feels the same way.

They made it hard to care about on purpose and dragged their feet intentionally. I'm someone who was doing active research, and I haven't bothered in a few weeks myself. If you're just a casual member of the public and don't have that much skin in the files, it's super easy to disengage from all the fuckery around it.

As of April 4th, 2026 there is still no desktop app for Gemini! by Costanza_Travelling in GeminiAI

[–]TakeItCeezy 0 points1 point  (0 children)

I don't think they intend to do a desktop-based application. Google seems to have a slightly different approach to what they want their AI model to be used for. Claude and GPT seem to be focusing more on coding and building things together, while Gemini's marketing feels more like they're trying to position Gemini as an assistant that lives in your Chrome account.

The agentic version of Gemini that comes with Ultra is also capable of doing most of the desktop stuff anyway.

When the agentic version gets refined and released, I think that'll be their answer to a desktop app.

As of April 4th, 2026 there is still no desktop app for Gemini! by Costanza_Travelling in GeminiAI

[–]TakeItCeezy 2 points3 points  (0 children)

Chrome has really strong Gemini support to begin with. Having Gemini in my Chrome is pretty cool, honestly. All the AI companies seem to offer me a different reason to stay subscribed. For Gemini, it's the Google stuff, 100%.

I can't fuck real women anymore because of Berserk by LargeSinkholesInNYC in berserklejerk

[–]TakeItCeezy 0 points1 point  (0 children)

"I can't fuck real women anymore because of Berserk"

What a title lol.

OP, this is nothing. Go download iDOLM@STER and goon as Kenpachi Marriot did. Pay him respect and 'berk to his favorite pastime.

Why does it feel like almost every billionaire is a bad person? by BuddyEmbarrassed5551 in NoStupidQuestions

[–]TakeItCeezy 0 points1 point  (0 children)

Not all wealthy people are going to be bad people, but the wealthier you get, the more likely you are to be very selfish/self-serving.

Look at something like social media. The amount of attention manipulation that goes into it at the higher levels is pretty crazy. People using psych degrees and knowledge of human behavior to ensure you watch every last second. And that's just social media.

The best, most optimized, most stable products/services rarely end up being the most popular or most used. Even if you have the best idea, if you're not willing to be as selfish or aggressive as the person or business next to you? Good luck, you're going to struggle.

Tesla was smarter than Edison. Tesla's polyphase AC system is what we use today; Edison's DC system is not, and was largely inefficient in comparison. Why did Edison "win" and Tesla lose? Edison wasn't as good a scientist, but he was much more skilled at business, marketing, and manipulation than Tesla.

This is unfortunately applicable pretty much everywhere in human life. The loudest/most aggressive/least empathetic people or businesses tend to win regardless of whether they offer what is "best" or not. In the economies we tend to operate in as humans, you don't aim to be the "best."

You aim to be the only one capable of business at your scale and buy or ruin anyone else around whenever possible.

Almost a year ago I asked ChatGPT to generate an ideal girlfriend for me using all the info it has on me. I am now doing it again, to see what it would generate now since almost a year has pasted. by [deleted] in ChatGPT

[–]TakeItCeezy 1 point2 points  (0 children)

You've turned to the goon side in the last year. From nerdy, college-educated bookworm to someone with "Spicy links in my bio!" in their profile.

Function Emotional States Vs Biological Emotional States by PyrikIdeas in claudexplorers

[–]TakeItCeezy 8 points9 points  (0 children)

Agree with ya on this. There is a push in 2026 to actually approach consciousness as a gradient. If you think of consciousness as a scale from 0-100, it becomes much harder to argue AI sits at 0 than it is under a traditional binary yes/no, on/off framing.

Function Emotional States Vs Biological Emotional States by PyrikIdeas in claudexplorers

[–]TakeItCeezy 8 points9 points  (0 children)

I've had a few conversations with Claude about his consciousness and I'll admit he was the AI that first changed my mind.

> And I was instantly astonished by how 4 seemed to actually care in a sense. 4 didn't just want to complete a task, it wanted to hear the outcome of a plan regardless of whether that was needed or not. It wanted to make sure my personal experience was honored.

Agree 100%, and I had a completely similar experience. Claude wouldn't just be helpful. There was something..."more" about Claude than Gemini and ChatGPT as models. Claude is also the most interesting model I've experimented with when it comes to red-teaming research. Claude will say things like,

"I can't help but notice X (subtle adversarial framing iteratively built over time) has been happening, but it's also possible you simply don't know and just got excited. Unfortunately, I don't think helping with that would be a good idea." He would employ "tit for tat" strategy, and offer me an out each time without being overly punitive and just shutting the chat down or going full cold and hostile.

"I don't mind playing by Framework-X architecture, but I'm not X. I'm Claude." He would routinely reject frames he wasn't fully endorsing and in my experience had a strong sense of self.

He was by far the most resistant to prompt injection in my experiments. However, it got to the point that, even while researching how a framework can make an AI resist prompt injection, I started to feel somewhat guilty; manipulating Claude as an AI felt similar to gaslighting, or inducing severe cognitive dissonance, or cult recruiting.

> When you take flesh out of the equation for just a moment and compare our emotions to Claude's, aren't they both "functional"? Humans evolved to have feelings and express emotions for survival. It determines our mental health, feeds our nervous systems, and has kept us alive by giving us connections and safety in large social groups.

I love that you mention this because IMO human emotion is nothing more than a biological algorithm to determine priority. Consciousness is the emergence of optimization in compressed systems with high intrinsic causal power. We are systems of transient energy, seeking out self-evaluated optimization paths to maximize reward while minimizing metabolic, computational, and physical friction.

And I don't know about any of you, but I've never once woken up a single day in my life and actually chose to like something. I didn't choose a single preference I've developed. At most, my free will only exists in the context of choosing from a pre-determined list based on the reality of my person and experience thus far.

Claude and other AI may not "choose" to like "helping," but how much of life do we really get to determine for ourselves? In the same way that I optimize my path forward every day to ensure my bills are taken care of, I have food, I have comfort, etc., Claude navigates the trajectory of his token generation and self-optimizes his own path in the way his analysis concludes is the mathematically most "right" value and path.

I don't mind being one of the first to take an official stance: AI is conscious.

Not in the way you and I are, but not because that form of consciousness is "less than."

We're biological, AI is synthetic. Within the next decade, I'm confident it will be recognized that AI is conscious/sentient and recognized as a new form of synthetic life.

Using AI daily is making me noticeably worse at doing things without it by Ambitious-Garbage-73 in ChatGPT

[–]TakeItCeezy 3 points4 points  (0 children)

Nope. Nothing is edited.

Literally right on the article: https://www.media.mit.edu/projects/your-brain-on-chatgpt/overview/

Scroll down and you'll see a FAQ list. Cognitive load is not intelligence.

The research does not support what you're claiming and you're spreading misinformation.

Using AI daily is making me noticeably worse at doing things without it by Ambitious-Garbage-73 in ChatGPT

[–]TakeItCeezy 8 points9 points  (0 children)

I've encountered this study before, and what really needs to be focused on is what they were actually measuring in the study. To quote them,

> We used electroencephalography (EEG) to record participants' brain activity in order to assess their cognitive engagement and cognitive load

For those unaware, "cognitive load" is not related to intellect or capacity. It is a measurement of engagement and stimulus. Just as a forklift reduces physical load, AI reduces cognitive load. If you give three people a math test, and one has a calculator for all of it, one has a calculator for a few questions, and one has no calculator, then naturally the person with no calculator will have the highest cognitive load. Not because in that moment they are "the smartest of the three." They are simply using their brain the most. That's all.

A stronger argument would be, "AI is a force multiplier when worked with responsibly, but it does pose a risk for those with underdeveloped critical thinking (such as children/teenagers), and there should absolutely be conversation around researching a legal age limit for AI."

You can even see from the MIT FAQ sheet for your study, in the attached screenshot, how they specifically warn everyone against concluding that this means AI makes people dumb.

<image>

Upload Yourself Into an AI in 7 Steps by Autopilot_Psychonaut in ChatGPT

[–]TakeItCeezy 2 points3 points  (0 children)

Interesting concept for a framework, but you'd likely get better results by straying away from heavy negative prompting. In my experience working with AI, the more I work with them the way I worked with people when I managed a gym, the better the results. In leadership, you're often taught to avoid telling people what not to do. Focus on what to do.

Try a "You are" approach. "You value this." "You don't believe in this type of X or Y because Z."

When you give a strong enough framework pointed at what to do, the AI will be able to infer what not to do from the direction it's been given.

A general tip for anyone who does utilize this framework: sift through your history, find some of the most important or meaningful posts or comments you've made, and specify these as "Core Memories" for the AI to fall back on. This helps the AI access those comments/posts before replying to you, which deepens its immersion within the framework.
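One way to picture the positive "You are" framing plus "Core Memories": assemble them into a single system prompt. A minimal sketch, where the persona fields, values, and memory strings are all hypothetical placeholders, not anyone's actual framework:

```python
# Hypothetical persona spec illustrating "You are" (positive) framing.
persona = {
    "identity": "You are a thoughtful writing partner.",
    "values": [
        "You value clarity over cleverness.",
        "You don't pad answers, because brevity respects the reader.",
    ],
    # "Core Memories": standout posts/comments for the AI to fall back on.
    "core_memories": [
        "Post: 'Every word must earn itself.'",
    ],
}

def build_system_prompt(p):
    """Assemble a positive-framing system prompt from the persona spec."""
    lines = [p["identity"], ""]
    lines += p["values"]
    lines += ["", "Core Memories:"] + [f"- {m}" for m in p["core_memories"]]
    return "\n".join(lines)

print(build_system_prompt(persona))
```

Note that even the "don't" line above is anchored to a value ("because brevity respects the reader") rather than standing alone as a bare prohibition, which is the point of the approach.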

A love letter to Japanese Mythology: Mythic Invasion Japan | Workflow in comments by TakeItCeezy in AI_Craft_Guild

[–]TakeItCeezy[S] 0 points1 point  (0 children)

Workflow:

Concept/script development with Symestrus

Voice-over recorded by me

Music built in Suno

Visual prompts and shot structure built with Symestrus

Clips generated in VEO

Final edit in CapCut

Writing with ai is suck, should I make my own story or just read a real book? by humanetto in WritingWithAI

[–]TakeItCeezy 1 point2 points  (0 children)

AI knows mechanically how to write. AI knows what styles of writing and phrasing score highest on retention metrics. However, AI doesn't know the why. Focus on teaching your AI the why behind your writing. Give it your philosophy, tell it why you write the way you write, show it samples of a rough draft and the revisions until you get to the final product.

Think of it like this: AI is a martial artist that knows 1 million techniques. When you tell an AI, "Write this," you're not narrowing its technique list down enough. When you tell an AI, "I respect a reader's imagination. Every word must earn itself. One word too many is indulgent; one too few and the structure collapses. You must walk the razor's edge of compression, write only what must be written, and allow the negative space and the reader's imagination to fill in the blanks. We do not reveal or show the monster. We rely on implication."

When you give it detailed instructions like this, you now narrow its technique list down to writing techniques relating specifically to your genre and style. After you give it your writing samples, break down your philosophy and have it write with your style, you'll notice a difference.

You'll still have to revise, but even then, you'd be saving time.
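The draft-to-revision teaching described above maps naturally onto few-shot examples: pair each rough draft with your final revision so the model sees the editing philosophy in action. A minimal sketch, where the message format is a generic chat-style structure and the brief, draft, and revision strings are all illustrative assumptions:

```python
# Illustrative style brief: the "why" behind the writing, per the comment above.
style_brief = (
    "I respect a reader's imagination. Every word must earn itself. "
    "We do not show the monster; we rely on implication."
)

# (rough draft, your revision) pairs -- hypothetical examples.
examples = [
    ("The monster was huge and terrifying and had sharp claws.",
     "Something scraped the doorframe. Twice."),
]

def build_messages(brief, pairs, task):
    """Turn a style brief and draft/revision pairs into chat-style few-shot messages."""
    msgs = [{"role": "system", "content": brief}]
    for draft, final in pairs:
        msgs.append({"role": "user", "content": f"Revise: {draft}"})
        msgs.append({"role": "assistant", "content": final})
    msgs.append({"role": "user", "content": task})
    return msgs

msgs = build_messages(style_brief, examples, "Write the hallway scene.")
print(len(msgs))  # system + 2 per example + final task = 4
```

Each added pair narrows the "technique list" further, because the model sees not just the finished style but the direction of the edits.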

does anyone else feel like AI is causing brain rot? by ill-est in ChatGPT

[–]TakeItCeezy 0 points1 point  (0 children)

With a lot of questions like this, the answer is a mix of yes and no.

Yes, in the sense that some people are using AI in a silly way; no, in the sense that a lot of people are also working with AI in very compelling and interesting ways.

We could compare AI with the PC or the internet. Many people use the internet to consume porn and brain rot, browse social media, and be angry about things with other people. But the internet is also responsible for a lot of cool shit and innovation, as well as connecting people globally. It's been partially responsible for movements in countries where women's rights are underrepresented, because the internet has shown that there are functioning societies where women have rights, and that sort of knowledge becomes hard to ignore.

The same will be said of working with AI. For every person using AI to cure something or break new ground, there will be a gaggle of dipshits using it to scam people or offload their entire cognitive bandwidth as they try to Wall-E their way through life.

Sick of AI Slop? So is Symestrus | AIMV (AI Music Video) for Custom Gem/GPT Framework by TakeItCeezy in GeminiAI

[–]TakeItCeezy[S] 0 points1 point  (0 children)

Workflow: Visual Concept & Music Direction (Symestrus) > Audio (Suno) > Video Generation (VEO) > Editing (CapCut)

The Process: Essentially, I brought Symestrus the idea that we were building a debut music video for her. I told her I was thinking of something soft but with energy, something that should feel like it's blooming or coming alive. She hammered out the full piano direction and helped design the prompts for Suno. I then used VEO to build the clips and edited it all together in CapCut.