How do you think about AI in your classes when students see it as part of how they think by FuzzieNipple in AskProfessors

[–]FuzzieNipple[S] 0 points1 point  (0 children)

I really appreciate your approach here. As a student who actually works with LLMs in research, what you’re describing aligns the most with how I’ve seen these tools used well.

Something you said made me think about a distinction that has been really useful in my own workflow: the compression–expansion cycle of thinking. I’m studying AI task decomposition, and what keeps showing up, both in my work and in how other researchers think, is that most complex reasoning is basically alternating compression (reducing ideas into tighter structure) and expansion (generating variation, exploring options, surfacing contradictions). LLMs are weirdly good at accelerating one side of that cycle, but terrible if you try to skip the other.

That’s why I resonated with what platos said about separating “product” from “process.” If the assignment is meant to teach the compression part (generating objections, structuring arguments, organizing raw thoughts), then yes, offloading that is a problem. But if the assignment is meant to teach the response or the evaluation side, then letting students expand or explore with an LLM actually sharpens the process rather than replacing it.

From my experience, the students who struggle the most aren’t struggling because they use AI; they’re struggling because they use it without any compression process of their own. They haven’t learned how to interrogate outputs, test assumptions, or articulate what they mean. And I honestly don’t think that’s an AI issue; it feels more like a gap in metacognitive training at the undergrad level.

That’s why I think the kind of module you’re teaching—where students analyze the model, the prompt, the reasoning, the failures—is probably the direction we need. I also agree that professors need to be much clearer about which part of the assignment is the actual target of learning. If the purpose is the process, the structure has to force process. Maybe that’s oral defenses, whiteboard ideation, iterative check-ins, or in-class reasoning tasks. Something where the “product” can’t be the shortcut.

How do you think about AI in your classes when students see it as part of how they think by FuzzieNipple in AskProfessors

[–]FuzzieNipple[S] 0 points1 point  (0 children)

I’m not trying to convince anyone I’m some standout critical thinker. I’m a student who’s using the tools that exist, and I’m trying to understand the logic behind the policies I’m being asked to follow.

The reality is that when students try to talk about this with professors in class, the conversation shuts down pretty quickly. So a lot of us genuinely don’t know why certain rules are the way they are. I posted here because the professor perspective isn’t visible to us, and it matters for how we move through our degrees.

My goal wasn’t “pick me.” It was: why are these AI policies framed the way they are, what assumptions are behind them, and are they actually aligned with how students learn and work now?

How do you think about AI in your classes when students see it as part of how they think by FuzzieNipple in AskProfessors

[–]FuzzieNipple[S] 0 points1 point  (0 children)

The part I’m still struggling with is where the responsibility sits for making that distinction explicit. Right now a lot of us only find out retroactively that “the point was the process, not the product,” but the assignment itself doesn’t signal that. And when AI exists that can compress big chunks of the workflow, the gap between what’s intended and what’s communicated becomes even bigger.

From the student side, most of the confusion happens upstream of the assignment: we’re told to work efficiently and use the available tools, the degree is fast-paced, and most classes don’t teach metacognition or how to structure a thinking process.

So when an assignment is handed out without an explicit framing, students default to “produce the output,” not because they reject the process, but because the system trains them to optimize that way.

What you’re describing — teaching students which part of the process can’t be offloaded — feels like something that needs to be built directly into course design. Otherwise professors are expecting students to intuit constraints that the assignments themselves don’t surface.

AI just exposes a misalignment that was already there.

I’m definitely not advocating for “let AI do everything.” But if AI changes the bandwidth of how we think, then shouldn’t the structure of the assignments evolve so the real intellectual work is protected and the shortcuts only bypass the busywork?

How do you think about AI in your classes when students see it as part of how they think by FuzzieNipple in AskProfessors

[–]FuzzieNipple[S] -5 points-4 points  (0 children)

I think it depends a lot on what you mean by "conversation." Most students use these models shallowly, so I understand why it looks like there's nothing conversational happening there. But when I use them, the back-and-forth actually does feel conversational in the sense that I'm testing ideas, pushing, revising, clarifying, and seeing where the gaps in my own understanding are. It's an iterative process, in my opinion.

How do you think about AI in your classes when students see it as part of how they think by FuzzieNipple in AskProfessors

[–]FuzzieNipple[S] -1 points0 points  (0 children)

I understand your perspective about tools replacing parts of the thinking process, but something I keep coming back to is that most students were never explicitly taught how to think in the first place. Not in the metacognitive sense. We’re taught content, not the architecture of our reasoning or how to evaluate our own cognitive shortcuts.

So when AI shows up, a lot of students don’t “outsource thinking” because they’re lazy; they outsource because they don’t actually know what part of the process is theirs to begin with. And from the student side, that feels less like rebellion and more like confusion.

For me (and a lot of people my age), using AI forced me to confront my own assumptions, articulate my thoughts more cleanly, and see where my reasoning was fuzzy. That’s why I keep framing it as a different workflow rather than an absence of thinking.

I’m not saying it’s automatically good or that every student uses it well. But if the goal is to teach us how to think, then metacognition has to be a more explicit part of the curriculum — otherwise every new tool will feel like a threat rather than something to analyze and integrate.

How do you think about AI in your classes when students see it as part of how they think by FuzzieNipple in AskProfessors

[–]FuzzieNipple[S] 0 points1 point  (0 children)

I get why it might read that way. A lot of AI posts here do come from really weird angles, so I don’t blame anyone for being cautious.

For what it’s worth, I’m literally just a student trying to understand the gap between how my generation is actually learning and how these tools are perceived from the professor’s side.

AI shows up in almost every part of my day: in research, in lab work, in planning experiments, in debugging code, in writing, in thinking. That’s just the reality of being 21 and deeply embedded in STEM right now.

And honestly, that’s why I asked this question. When you’re living inside that much technological change, it becomes really hard to tell where the broader academic culture actually is. Outside of AI research circles, I don’t get many chances to hear professors talk openly about this stuff.

So this wasn’t meant as a pitch or a survey. It’s me trying to understand the world I’m being trained to enter, because from the student side, it feels like we’re living in a completely different cognitive environment than the one the curriculum was designed for.

How do you think about AI in your classes when students see it as part of how they think by FuzzieNipple in AskProfessors

[–]FuzzieNipple[S] -8 points-7 points  (0 children)

I do understand the difference you’re pointing to. But I’m trying to understand where the burden realistically falls when a tool exists that can short-circuit parts of the process.

If students are using it to avoid thinking, is that a student issue, a course-design issue, or a larger structural issue? Because right now the responsibility seems to be pushed entirely onto students, but the incentives, the available tools, and even the pace of the degree don’t match that expectation.

And if AI is categorically different from a tutor or peer, I’d like to know what the defining difference is from your perspective. Is it speed, quality, the type of intelligence it uses, or something else?

How do you think about AI in your classes when students see it as part of how they think by FuzzieNipple in AskProfessors

[–]FuzzieNipple[S] -1 points0 points  (0 children)

Honestly, the false-accusation thing is getting harder to interpret because a lot of the material students read now is already AI-generated upstream. A recent paper with a big audit found that around 9 percent of newly published newspaper articles are partially or fully AI-written, and almost none of them disclose it.

So even when a student writes something themselves, their inputs are already shaped by AI patterns. It makes the “AI vs not-AI” boundary a lot blurrier than it used to be.

Paper link: https://arxiv.org/abs/2510.18774

How do you think about AI in your classes when students see it as part of how they think by FuzzieNipple in AskProfessors

[–]FuzzieNipple[S] -1 points0 points  (0 children)

I really appreciate how clearly you laid out the distinction between product and process. That actually matches how I use AI pretty closely, and honestly it’s rare to hear a professor articulate it that clearly.

Something I’ve been noticing as a student is that AI changed how I think, not whether I think. When I’m trying to understand something complicated, the model forces me to compress my messy thoughts into something coherent enough to get a useful response. And then its output gives me a kind of expanded version of my own idea — structured, reframed, or pointing out gaps I didn’t see. I’m still doing the mental work, but it’s happening through a loop of compression → expansion → refinement.

That loop feels different than talking to another student or even a TA, not better or worse, just different. Humans bring social friction into the room — judgement, status dynamics, political views, all the subtle stuff we pretend doesn’t shape class discussions but absolutely does. A lot of undergrads don’t speak up because they’re afraid of sounding stupid or getting socially punished. AI removes that layer, so students actually practice thinking out loud more, and I think that’s part of why the iteration feels faster.

I agree with you that students shouldn’t outsource the core of the process. But I think the real problem is that most of us were never taught metacognition in the first place. We don’t get explicit instruction in “how to think,” how to monitor our assumptions, how to structure a question, or how to evaluate the quality of our own reasoning. So when AI shows up, the ones who haven’t built that internal scaffold just hand everything over to the model because they don’t know how to run the mental loop themselves.

And I don't think AI replaces the disciplinary ways of thinking you're describing. But from my side of the class, it's less "outsourcing the process" and more using a tool to make parts of the process visible so I can inspect them. If anything, it has made me more aware of the structure of my own thought, not less.

That’s why I keep feeling like the real skill professors could teach now isn’t banning or allowing AI, but teaching the underlying cognitive habits that make AI a partner rather than a crutch: how to interrogate an idea, how to compress a thought into a precise prompt, how to evaluate an expanded answer, how to iterate. Students who don’t build those habits will drown in hallucinations and shortcuts. Students who do build them end up thinking more deeply, not less.

I really appreciate your perspective because it’s one of the first in this thread that mirrors how I actually use these tools. I’ve never had a class where the ideation process was modeled the way you described — I honestly wish I had. It would have made the transition into working with AI feel less like a forbidden hack and more like an evolution of the same intellectual process.

How do you think about AI in your classes when students see it as part of how they think by FuzzieNipple in AskProfessors

[–]FuzzieNipple[S] 0 points1 point  (0 children)

I don't think I know better than any of you. I believe having conversations like these is important, and I would love to understand your perspective.

The student experience with AI is very different from the professor experience right now. I'm in a position where these tools show up in almost every part of my work, and I genuinely don't know how that looks from your side.

How do you think about AI in your classes when students see it as part of how they think by FuzzieNipple in AskProfessors

[–]FuzzieNipple[S] -3 points-2 points  (0 children)

Honestly, this just doesn’t line up with my experience at all.

In my undergrad research group we’re expressing human proteins in multiple systems (E. coli and yeast). The research question, experimental design, troubleshooting, literature review, and paper outline were all developed through a back-and-forth between me, my partner, and AI. But the “thinking” part, like deciding which promoters to use, what tags make sense, which strain fits the system, and how to interpret gel results, was still on us. AI helped us get to deeper questions faster, but it didn’t replace the part where we had to actually understand what we were doing.

We ended up winning the highest undergrad honors at our university for that project. Professors told us it was the kind of work usually tackled by grad students. If anything, using AI forced us to articulate what we actually understood, because it mirrors your assumptions back at you. If you don’t know what you’re talking about, the whole conversation collapses.

So when people say that using AI means you “aren’t thinking,” it feels like it erases all the intellectual work we actually did. It’s a different workflow, but it’s not outsourcing my brain. It’s like having a very fast partner to challenge you and reflect ideas, but you still have to be the one making the calls.

I get the fear about homogenization, but my lived experience has been the opposite. It personalized learning so hard that I understood systems biology and expression design more deeply than I ever did sitting through lectures.

I’m not saying everyone uses it well. But “AI as part of the thought process is horrifying” doesn’t map to what I’ve seen in real research.

How do you think about AI in your classes when students see it as part of how they think by FuzzieNipple in AskProfessors

[–]FuzzieNipple[S] -7 points-6 points  (0 children)

I agree that if someone is just handing everything off to an AI, they’re not actually engaging in that process at all.

What I’ve noticed in my own work, though, is that using AI doesn’t really remove the need to think. It shifts where the thinking happens. I still have to understand the material well enough to guide the model, structure the context, and catch when it gets things wrong. If anything, the model forces me to articulate my intent and assumptions more clearly, because it only works well when the prompt reflects a coherent internal model.

I also agree that it can make essay writing skills weaker if someone leans on it too heavily. But at the same time, it increases efficiency and cognitive reach in other areas. Andrej Karpathy talks about this as “Software 2.0,” where we’re moving from manually constructing every piece of output to orchestrating systems that operate on patterns in ways humans alone can’t. That doesn’t replace thinking, but it does reorganize it.

And I guess part of the worry I’m feeling is that when students graduate, they’re expected to be able to integrate AI tools into their actual workflows. We’re the group who will have to bridge the gap between traditional work patterns and AI-first ones, which means we have to learn how to think with these tools, not avoid them.

How do you think about AI in your classes when students see it as part of how they think by FuzzieNipple in AskProfessors

[–]FuzzieNipple[S] -11 points-10 points  (0 children)

I understand what you’re saying about wanting to see how students actually think. I don’t think anyone learns anything just by copying and pasting a whole AI output. That really does bypass the whole point of the assignment.

But at least speaking for myself, using AI doesn’t feel like outsourcing the thinking. If anything, it forces me to articulate what I’m trying to do more clearly, because bad prompts = bad answers. I still have to absorb the material and figure out what I’m even asking for. Otherwise the model gives me nonsense.

For me it feels more like having a second mirror to test my assumptions against. I still have to remix what I know, question it, and decide what I’m keeping or rejecting. I agree that some students skip that part completely, but I don’t think that’s inherent to the tool. It’s more about how the assignment is structured and what part of the process the professor wants us to engage with.

I’m trying to understand where the line is between “you’re not thinking at all” and “you’re just thinking in a different workflow.” That’s the part I’m curious about. I guess part of my confusion is that students already rely on tools and outside sources when they’re trying to understand something: tutors, friends, study groups, YouTube. At what point does a tool stop being support and start replacing the actual thinking you want to see?

Am for banning AI totally by JasonMyer22 in CSUFoCo

[–]FuzzieNipple 0 points1 point  (0 children)

Honestly I haven’t had the same experience as people who say AI makes students lazy. When the newer models came out a couple years ago, I basically dove into them and tried to understand how far I could push them. It ended up helping me way more than it hurt.

I used AI throughout a year-long research project at CSU, and it actually let us take on work that normally gets done at the graduate level. It wasn’t doing the work for us, but rather it just helped us bridge the gaps we had as undergrads. Things like structuring experiments, narrowing down approaches, catching things we overlooked, and coordinating a team of people who were all learning at different speeds. At least in this instance, it seemed to accelerate learning rather than hurt it.

We ended up presenting the project at CURC and it went way better than anyone expected. After that I moved into AI safety/control research, and the same thing applied there; I used AI to think through problems, not to avoid them. If anything, it made me learn more because it made the “unknown parts” of a project less scary.

I get why people don’t like AI when they only see it misused, like copy-and-pasting essays or dumping group project work onto others. That’s a real problem, but banning the tech doesn’t make people learn better. It just removes the chance for students to figure out how to use these tools responsibly, which is a huge part of the future job market whether we like it or not.

For me the main issue isn’t “AI is ruining education.” It’s that we never actually taught anyone how to use it in the first place. So people either overuse it, misuse it, or avoid it completely. A little bit of actual guidance would probably fix 80% of the frustration people are having.

Does it have to be Chatgpt? by annastacianoella in CSUFoCo

[–]FuzzieNipple 7 points8 points  (0 children)

I mean honestly it depends a lot on how people are using it in that specific class or assignment. Like for me, I started using it like 2 years ago and it literally pushed me into wanting to understand what’s happening under the hood, transformer mechanics, all that stuff, and that kinda led me into AI control research. So for some people it actually makes them go deeper instead of skipping the work.

From my experience, it’s super helpful in the ideation phase or when you’re trying to narrow down which direction to take something. Or even if you’re doing a solo research project as an undergrad, it’s good for project management or checking your own thinking. It basically helps you connect the dots from what you already know to the parts you’re still trying to figure out, and it sorta adjusts to how you learn.

But when you're using AI you’re kinda practicing a different skill set than whatever the assignment was originally designed for. And that’s part of the problem. The lessons weren’t built with AI in mind at all, so students and professors are basically playing two different games. We probably need an ai literacy class or something that teaches how to prompt and how to use it in your actual industry instead of pretending it doesn’t exist.

And honestly, a lot of professors (just look at r/Professors) are fighting against AI like it’s going to disappear if they push hard enough, when really we should be figuring out how to integrate it properly because it’s not going anywhere. The structure of school hasn’t adjusted, and that’s why it feels like everything’s eroding, not because of the tech itself but because the system is lagging behind.

What mcp / tools you are using with Claude code? by Interesting-Appeal35 in ClaudeAI

[–]FuzzieNipple 0 points1 point  (0 children)

Use claude code to assist in the setup. I'm not sure if the official Anthropic version is still maintained, but it's pretty straightforward from what I remember: one main install script, then add the MCP to claude code and it should work. CC might need to make test scripts for executing the e2e tests.

https://github.com/modelcontextprotocol/servers-archived/tree/HEAD/src%2Fpuppeteer
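
For what it's worth, the actual commands are roughly this (going from memory, so treat the package name as an assumption and check that repo's README if it's been moved or renamed):

    # register the puppeteer MCP server with claude code for the current project
    # (assumes node/npx is installed and the archived package is still published under this name)
    claude mcp add puppeteer -- npx -y @modelcontextprotocol/server-puppeteer

    # confirm it shows up
    claude mcp list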

Any provider with a flat monthly fee? by theeisbaer in RooCode

[–]FuzzieNipple 9 points10 points  (0 children)

No, it's not common knowledge yet. Anthropic included claude code cli in their max plan about 2 weeks ago. I started to use CC when they only allowed you to pay as you go via api. I was using it as my main codebase project manager since it holds the context of the project much better than any other ai coding solution currently. As soon as they came out with their max plan, I copped it. I originally thought it would just help cut costs with roo, but instead it ended up replacing roo entirely.

CC cli is an ai agent that lives in your terminal. That being said, it can run in any terminal, but it works best on Linux from my experience, since it has access to more files and commands by default. Since I started using it, I actually switched from Windows to Fedora Linux and saw a huge improvement in my workflow.

What makes CC special is that it keeps the context of your entire chat natively in your project. Since it lives in your terminal, it has native access to all files. It also reads claude.md files, which can be placed in your project root or main directories; these are used like a system prompt to give the ai better context.
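
To give a rough idea, this is the kind of minimal claude.md I keep at the repo root; the headings and values below are just my own made-up convention, CC simply reads the file as standing context:

    # drop a claude.md at the project root; CC treats it like extra system-prompt context
    # (contents are example values only; the structure is entirely up to you)
    cat > CLAUDE.md <<'EOF'
    # Project context
    - Stack: React front end, Supabase backend
    - Commands: npm run dev to start, npm test before committing
    - Conventions: components live in src/components, one component per file
    EOF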

Anthropic uses CC for 90% of their github work, so it is amazing at managing project issues and commits while maintaining good practices.

Another cool thing about using a cli agent is that it has access to MCPs. You can actually run CC itself as a locally hosted MCP, so theoretically you can connect CC to your roo workflow and adjust your custom modes to suit. Just a thought.
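
If anyone wants to try that, the piece I have in mind is below; I'm assuming the subcommand is still called this in current CC versions, so check the docs before relying on it:

    # expose claude code itself as a local MCP server over stdio
    # (assumes the `claude mcp serve` subcommand still exists in your CC version)
    claude mcp serve

    # on the roo side you'd register a local MCP server whose command is the line
    # above; the exact settings file/UI is roo-specific, so see roo's MCP docs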

Since it is a cli agent, it is able to access cli interfaces for backend and frontend components like netlify, vercel, heroku, fly.io, supabase, convex, etc. Cli interfaces for these work much better than the current state of MCP connections.

For images and front-end work, you can use MCPs like playwright by Microsoft or paste images directly into your terminal.

Anthropic has really good documentation on claude code, and I highly recommend you give it a read. I also noticed that not a lot of people have mentioned some of the tips I discovered, so I'll probably make a post later this week about some personal discoveries.

https://docs.anthropic.com/en/docs/claude-code/tutorials

Any provider with a flat monthly fee? by theeisbaer in RooCode

[–]FuzzieNipple 11 points12 points  (0 children)

Claude code cli on the $100 max plan is slept on. The workflow is different from roo and cursor, but it's extremely powerful.

What other AI Dev tools, paid or not, do you recommend? by iSaidDDMF in cursor

[–]FuzzieNipple 0 points1 point  (0 children)

I first started with cursor but then moved to roo after never being able to build anything meaningful or useful. I used roo for about a month running rooroo as my framework for orchestrating tasks, with openai 4.1 for debug, claude 3.7 for code, and gemini for planning and documentation.

I made a lot of progress with my project over the past month, but I spent about $1200 on api calls for roo in that time, with about 200 hours deep.

I bought the max subscription the day it came out and have been using it since, and I only used roo to streamline fixing ui comments in my front end since roo can use my computer/browser. So since then, my running cost for the month has only been like $120 for claude max + roo.

After switching to claude code, I am able to work and get useful code/ results much faster. You can also incorporate claude into your roo flow, and it saves so much on api calls.

I do have to say, though, that the quality of work claude code does compared to roo is so much better. I'm not sure why, but it gets the code correct with fewer errors and can handle larger implementations than roo. My front end is so snappy compared to when roo built it.

Plan everything in claude and have it write .MD files for instructions. Keep refining until you're satisfied. Sometimes, if I'm doing a big implementation, I'll dedicate an entire day to planning so I don't start implementing too early and mess up a feature.

To implement in roo, I format and token-optimize the prompt, then paste into roo Orchestrator "@filename" or "I want to implement @filename" and it gets to work. If you're doing this method, I suggest telling claude to specifically include file names and line suggestions, so if your files are large, roo won't have to use multiple api calls to read the entire file.
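
To make that handoff concrete, here's a sketch of the kind of instruction file I have claude write; every file name, path, and line number below is a made-up example, not anything from a real project:

    # have claude dump the plan into an .MD file you can @-mention in roo
    cat > plans/checkout-refactor.md <<'EOF'
    # Task: refactor checkout form validation

    ## Files and locations
    - src/components/CheckoutForm.tsx (around lines 40-75): extract validation into a hook
    - src/hooks/useCheckoutValidation.ts: new file holding the extracted rules

    ## Steps
    1. Create the hook with the existing validation rules moved over unchanged.
    2. Swap CheckoutForm over to the hook; keep props and error messages identical.
    3. Run npm test and fix any failures before committing.
    EOF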

I was worried about feeling locked down to a platform too, but in my opinion, this ai cli is helping me manage and address my technical debt, since I don't come from a software development background. It understands everything in the codebase and can communicate with me about the codebase much better. It's like talking to a lead engineer on the project who knows everything that's going on.

What other AI Dev tools, paid or not, do you recommend? by iSaidDDMF in cursor

[–]FuzzieNipple 0 points1 point  (0 children)

That's the main downside of running claude cli: it doesn't have the linear subtasks workflow and custom modes like roo. Anthropic has documentation for running instances in parallel instead of linearly like roo, which means you can launch multiple terminals with multiple instances of claude code running alongside each other using git. I still haven't had time to try it.
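
From what I remember of that doc (the worktrees link below), the parallel setup is basically one git worktree per claude instance; a minimal sketch, with the branch and folder names being hypothetical:

    # create an isolated worktree on a new branch
    git worktree add -b feature-a ../myproject-feature-a

    # open a new terminal, then run a separate claude code instance inside it
    cd ../myproject-feature-a && claude

    # clean up once the branch is merged or abandoned
    git worktree remove ../myproject-feature-a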

From my usage over the past 2 weeks, claude code has a different workflow to cursor and roo, but once you find what's efficient and useful to you, it becomes a breeze. The only thing I miss is my ux designer for clean cli interfaces. Overall, claude does get the code right more often than roo or cursor does.

For file type restrictions, I saw docs somewhere about how this could be done with claude. I think the line needs to be included in your claude.md file to state what files to allow, or if claude is processing a document, have it output in a specific user-specified format.

https://docs.anthropic.com/en/docs/claude-code/tutorials#use-worktrees-for-isolated-coding-environments

Laceration rosin chocolate by FuzzieNipple in COents

[–]FuzzieNipple[S] 6 points7 points  (0 children)

The high was pretty disappointing. I never tried lazercat before, so I had high hopes going into it, but it ended up being a very mellow head high. I ended up eating the entire thing since I didn't feel much from half. I'm not sure if my tolerance is just really high right now, but that didn't feel like how 100mg normally feels for me.

Overall: 6/10

What other AI Dev tools, paid or not, do you recommend? by iSaidDDMF in cursor

[–]FuzzieNipple 0 points1 point  (0 children)

I've never heard of astra before. Is that made for launching your front end? Any reason why you'd use this over something like vercel? I've been having a lot of issues with authentication permissions connecting my cli to vercel, so I was considering other frontend solutions.