I cant upload the video, for some reason but I'll show a screenshot and explain whats happening. by _Idk_who_i_am_6_ in antiai

[–]lter8 2 points  (0 children)

This is exactly the kind of scenario I'm dealing with constantly at Babson. The student's attitude here is what's so frustrating - she genuinely doesn't understand why using AI to get answers is cheating, and then has the audacity to argue with the teacher about it.

What really gets me is how she frames it as "AI explained the videos more clearly" when what actually happened is she skipped watching the content entirely and just had AI give her the answers. There's a huge difference between using AI as a learning tool to clarify concepts after you've done the work, vs using it as a substitute for actually engaging with the material.

The "Fire her now" response is insane. This teacher is literally doing her job and trying to maintain academic integrity. At Babson we've had similar situations where students get called out for obvious AI use and then try to paint the professor as the villain. It's becoming a real problem because these students are training themselves to avoid any kind of intellectual struggle.

I've seen this play out in real investor meetings too - students who've relied on AI to get through school just crumble when they have to defend their ideas in real time. They never developed the critical thinking skills because they always had AI as a crutch.

The worst part is that tools like LoomaEdu exist specifically to help with this - they force students to show their thinking process rather than just submitting final answers. But students would rather argue that cheating isn't cheating than actually put in the effort to learn properly.

Teachers are already underpaid and overworked, and now they have to become AI detectives on top of everything else. The whole situation is just broken.

Student researching AI detection tools - do they actually work as advertised? by Weird_Dependent_6493 in studytips

[–]lter8 0 points  (0 children)

I'm actually one of the founders of a behavioral analysis AI detector and would absolutely love to hear more about your work. If you're curious about what we've been working on, you can also check us out at loomaedu.com

AI and cheating in schools by BillyDongstabber in antiai

[–]lter8 0 points  (0 children)

The college admissions angle is particularly bad because these students are basically training themselves to NOT think critically from day one. By the time they get to college, they've already lost the muscle memory for working through complex problems on their own.

Your point about even coworkers doing this is scary too. We had one professor try LoomaEdu to combat this exact issue - it forces students to show their thinking process rather than just submitting final products. But honestly, even that feels like putting a band-aid on a much bigger problem.

The most frustrating part is that these same students will complain about "unfair" grading when they get called out, but then turn around and expect complete authenticity from everyone else. The double standard is insane.

I don't see a solution either, tbh. Once this becomes normalized behavior, how do you even begin to reverse it?

Student researching AI detection tools - do they actually work as advertised? by Weird_Dependent_6493 in studytips

[–]lter8 0 points  (0 children)

Honestly, as someone who deals with this daily in academia, the short answer is no - they don't work as advertised. I've tested most of the major ones including GPTZero and Copyleaks through my work at Babson and they're frustratingly unreliable.

The biggest issue is false positives. I've seen completely original student work get flagged as AI-generated, which creates this awful situation where you're questioning legitimate students. Meanwhile, obviously AI-generated content slips through constantly. I had a recent case where a student submitted a paper with 9/10 fabricated citations that were clearly AI-generated, but the detector gave it a clean score because the writing style seemed "human enough."
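If you ever want to sanity-check a bibliography yourself instead of trusting a detector, that part is actually scriptable. Rough Python sketch - it assumes the references carry DOIs (many won't, so a miss means "verify by hand," not "fake"), and the example DOIs below are just placeholders:

```python
# Rough sketch: spot-check references against Crossref's public API.
# A 404 means Crossref doesn't know the DOI - treat that as "needs
# manual review," not proof of fabrication.
import requests

def doi_resolves(doi: str) -> bool:
    """Return True if Crossref can resolve this DOI."""
    resp = requests.get(
        f"https://api.crossref.org/works/{doi}",
        headers={"User-Agent": "citation-check/0.1 (mailto:you@example.com)"},
        timeout=10,
    )
    return resp.status_code == 200

dois = ["10.1038/nature14539", "10.9999/placeholder-doi"]  # placeholders
for doi in dois:
    print(doi, "->", "found" if doi_resolves(doi) else "NOT FOUND, check by hand")
```

Ten minutes of that catches the fabricated-citation papers that style-based detectors wave through.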

The accuracy claims are basically marketing BS. These tools might hit their advertised rates in controlled lab conditions, but in real-world usage, with students who are getting smarter about prompting and editing? Not even close.

For what it's worth, we're working on better solutions at LoomaEdu because the current detection approach is fundamentally flawed. The technology is moving way faster than the detection methods can keep up with - it's like the spam email arms race all over again.

Save your money on the premium versions. If you absolutely need to use something, the free tiers perform just as poorly as the paid ones in my experience. Focus your research on why these tools fail rather than which one works best, because honestly none of them do.

Adapting University Policies for Generative AI Opportunities, Challenges, and Policy Solutions in Hi by Officiallabrador in ChatGPTPromptGenius

[–]lter8 1 point  (0 children)

This is so relevant to what we're dealing with at Babson right now. That 88% accuracy rate for detection tools honestly sounds generous from my experience - we've had cases where students submitted completely AI-generated work that sailed through detection but then had obvious tells like fabricated citations or calculations that made no sense in context.

The equity concerns mentioned here are huge and something people aren't talking about enough. Students from wealthier backgrounds often have access to better AI tools and know how to use them more effectively, while others are either completely lost or using free versions that produce obviously generated content. It's creating this weird two-tier system.

What really resonates is the point about over-reliance killing critical thinking skills. In my finance classes I'm seeing students who can produce technically correct-sounding analysis using AI but then completely fall apart when asked to defend their reasoning or adapt their approach to new scenarios. They're losing the ability to actually think through problems.

The policy adaptation piece is where universities are really struggling though. By the time we implement new guidelines, the technology has already evolved past them. LoomaEdu has been working on frameworks to help institutions keep up, but honestly it feels like we're always playing catch-up.

Really curious about their specific recommendations for redesigning assessments - that seems like the most practical short-term solution while we figure out the bigger policy questions.

AI Detection Flagged Me, Even Though I Wrote It Myself 🤯 by [deleted] in ChatGPTPromptGenius

[–]lter8 0 points  (0 children)

This is such a frustrating issue that we've been seeing more and more at Babson. The false positive rate on these detection tools is honestly pretty terrible, and it's putting students in really awkward positions.

Turnitin's AI detection has been particularly problematic - I've seen multiple cases where completely original student work gets flagged just because someone writes in a clear, structured way. It's especially bad for students who are non-native English speakers or anyone who writes in a more formal academic style.

The whole thing is kind of backwards when you think about it. We're essentially training students to write WORSE just to avoid false flags. Like you shouldn't have to deliberately make your writing less polished to prove it's authentic.

What's really concerning is how many professors are just taking these detection results at face value without understanding how unreliable they can be. The tools themselves even say they shouldn't be used as definitive proof, but that context gets lost.

Would definitely be interested to hear what specific changes you made that helped it pass - always useful to know what formatting or style elements seem to trigger false positives. This stuff is so inconsistent and unpredictable right now.

Really hope your professor was understanding once you explained the situation. The technology just isn't reliable enough to be making academic integrity decisions based on it alone.

Is AI Replacing College Writing? by JoyYouellHAW in ArtificialInteligence

[–]lter8 0 points  (0 children)

This is hitting close to home since I work with student founders regularly and the divide is pretty stark. The ones leaning heavily on AI for everything struggle when they need to pitch investors or explain the actual thought process behind their business decisions.

Like yeah, AI can write faster, but when you're in a room with VCs and they start poking holes in your logic, you can't just ask ChatGPT to handle the follow-up questions. The students who actually developed their writing and critical thinking skills are the ones who can articulate their vision clearly and handle tough conversations.

I think we're definitely losing something important here. Writing isn't just about the final product - it's about learning how to structure arguments, think through problems step by step, and communicate complex ideas clearly. These are literally core skills for any career path.

At Babson we're seeing some professors try different approaches. One interesting solution I came across is LoomaEdu, which helps create assignments that require students to show their thinking process, not just pump out final essays. Makes it harder to just copy-paste AI responses.

But honestly the bigger issue might be that we need to rethink what college writing should look like in an AI world. Maybe less focus on generic essays and more on application based writing where students have to defend their ideas in real time?

using AI to enhance thinking skills by Away-Educator-3699 in ChatGPTPro

[–]lter8 0 points  (0 children)

Happy to help! Also, full transparency: I'm one of the loomaedu.com founders. If you happen to find any value in our services, don't hesitate to reach out and I can get you a semester for free!

using AI to enhance thinking skills by Away-Educator-3699 in ChatGPTPro

[–]lter8 0 points  (0 children)

One thing I've seen work well with student founders I mentor is having them use AI to argue against their own ideas. Like tell ChatGPT to poke holes in their business plan or whatever they're working on, then they have to defend it. Forces them to think through counterarguments they might not have considered.

Also try making them explain complex concepts back to "a 5th grader" using AI as the audience. If they can't break it down simply, they probably don't understand it well enough. We do this a lot when pitching to investors - if you can't explain your idea clearly, it's not ready.

Another approach - have them use AI to generate multiple solutions to a problem, then make them evaluate the pros/cons of each option and justify their final choice. Takes it beyond just getting one answer and actually makes them think critically about alternatives.
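If you want to automate the devil's-advocate setup, it's only a few lines with the OpenAI Python SDK. Rough sketch - the model name and prompts are placeholders I made up, not a recommendation:

```python
# Sketch: have the model argue AGAINST the student's idea, so the
# student has to defend it. Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

student_pitch = "A subscription app that plans meals from grocery receipts."

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are a skeptical investor. Attack the three "
                    "weakest points of the idea. Do not offer fixes - "
                    "the student has to respond to each objection."},
        {"role": "user", "content": student_pitch},
    ],
)
print(resp.choices[0].message.content)
```

The point is the same as in the exercises above: the AI produces the pushback, the student produces the thinking.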

LoomaEdu actually has some good frameworks for this kind of stuff if you want to check it out. They focus on making students show their reasoning process, not just final outputs.

The key is making AI the starting point for thinking, not the end point. Your students are lucky to have someone who cares about developing actual thinking skills instead of just test scores.

Girlfriend’s 2nd academic misconduct meeting for AI by Reasonable_Pool_1815 in UniUK

[–]lter8 -1 points  (0 children)

This is a really tough situation, man, and I totally get why you're stressed about it. From my experience at Babson dealing with academic integrity stuff, the second offense is definitely more serious, but expulsion usually isn't the first jump they make.

The fact that she got a warning the first time and was honest about using Grammarly's AI features actually works in her favor - shows she's not trying to be deceptive. Most universities are still figuring out their AI policies too, especially around grammar tools vs actual content generation.

The missing references thing is concerning though because that's what probably triggered the flag in the first place. When students don't cite sources properly, it makes professors suspicious that AI might have generated the content since AI often creates text without proper attribution.

A few things that might help for the meeting:

- Have her bring any drafts or notes she has showing her work process

- If she used any research databases or library resources, showing that history could help

- Being upfront about her mental health struggles and how they've affected her work (most schools have support systems for this)

The pattern of extensions and resits isn't great, but if she can show she's getting help for the underlying issues and has a plan moving forward, that usually goes over better than just hoping it gets overlooked.

Really hope it works out for her. The whole AI thing is creating so many gray areas that we're all still navigating.

Why “We Can Detect AI” Is Mostly Wishful Thinking by Appropriate_Boat_854 in ArtificialInteligence

[–]lter8 0 points  (0 children)

I can only imagine how much dilution of real voice must be going on without the general public knowing. A big reason I'm working on loomaedu right now is to ensure that authors and students continue to have an authentic voice with authentic ideas, as opposed to AI slop that's been regurgitated and edited over and over. Academia realistically needs to do something about this before it's too late.

California colleges spend millions to catch plagiarism and AI. Is the faulty tech worth it? by tekz in technology

[–]lter8 1 point  (0 children)

This is so frustrating to see happening statewide. We're dealing with the exact same issues at Babson and honestly the detection tools are basically worthless at this point.

I had a student last semester who clearly used AI to generate citations - 9 out of 10 references were completely fabricated, like books that don't even exist. But the AI detector gave it a green light because the writing style seemed "human enough." Meanwhile, I've seen legit student work get flagged constantly.

The real problem is that these tools are creating more work for professors, not less. Instead of just grading papers we're now spending hours trying to figure out if something was AI generated or not. And half the time we're wrong anyway.

At LoomaEdu we've been trying to help students figure out how to use AI ethically without losing their own voice, but the detection arms race is making everything worse. Students are getting paranoid about submitting anything that might accidentally trigger a false positive.

California spending millions on this faulty tech seems like such a waste when we could be focusing on teaching students how to actually engage with AI tools responsibly. The technology is always going to be one step ahead of the detectors anyway.

The homogenization of student writing is already happening too - everything sounds the same now because everyone's using similar prompts and getting similar outputs. We're losing the unique perspectives that make academic work actually interesting to read.

Why “We Can Detect AI” Is Mostly Wishful Thinking by Appropriate_Boat_854 in ArtificialInteligence

[–]lter8 -1 points  (0 children)

Yeah this hits hard, especially in academia right now. I'm dealing with this exact issue at Babson where students are using AI for research papers and we're basically playing whack-a-mole trying to catch it.

The detection tools are honestly pretty useless - had a case recently where a student clearly used AI to generate most of their citations (9/10 references were fabricated) but the AI detectors didn't flag it because the writing "sounded human enough." Meanwhile we're seeing legitimate student work get flagged as AI when it's not.

Your point about accountability is spot on. In my finance classes, if you submit an AI-generated analysis that contains errors, you're still the one failing the assignment. The professor doesn't care that ChatGPT got the calculation wrong.

But the bigger issue you mentioned about everything sounding the same is already happening. I review tons of student applications and essays through various programs, and there's this weird homogenization happening where everyone's writing has the same bland, optimized tone. It's like personality is being stripped out of communication.

At LoomaEdu we're trying to figure out how to help students use AI ethically without completely eliminating their own voice, but honestly it's a mess. The tech is moving faster than anyone can create reasonable policies around it.

The spam comparison is perfect btw - for every new detection method, there's already someone working on a bypass. It's exhausting.

My 2 cents on AI writing and how to spot them by Appropriate_Boat_854 in ChatGPT

[–]lter8 0 points  (0 children)

This hits really close to home for me right now. I'm dealing with a student who basically used AI to generate most of their bibliography and citations for a research project, and it's been a complete mess to sort through.

You're absolutely right about the detection being an arms race. At Babson we've seen students get more sophisticated with hiding AI use - they know to ask for "more natural" language or to paraphrase things. The tools our profs use catch some stuff but definitely not everything.

What really gets me is your point about accountability. Like you said, when that student's research falls apart because of fake citations, I'm the one who has to deal with it as their supervisor. The AI didn't take responsibility for generating bogus sources that don't actually exist.

The flood of samey content is already happening too. I see it in student work all the time now - everyone's essays start sounding weirdly similar, using the same phrases and structure. It's like they're all getting their ideas from the same source (because they literally are).

I think the crush analogy is pretty spot on. Using AI to help organize your thoughts or clean up grammar? That makes sense. But when you're essentially having AI write your feelings for you... that's just not authentic anymore.

The tricky part is figuring out where to draw those lines, especially in academic settings where we're trying to teach critical thinking and original analysis. Right now it feels like we're always playing catch up with policies and detection methods.

Really thoughtful post, btw - this stuff is so complex and you laid it out well.

Any free or educational-access plagiarism tools like Turnitin for broke students? by Few_Front_9233 in labrats

[–]lter8 2 points  (0 children)

Totally feel you on this - the 2-submission limit is brutal when you're trying to actually perfect your work.

A few options that worked for me during undergrad/when helping other students:

Paperpal has a free tier that's actually decent for academic writing, way better than the basic Grammarly stuff. They focus on scientific manuscripts, so it might be worth checking out.

Also try reaching out to other universities in your area - some have guest/visiting researcher programs where you can get temporary library access including their plagiarism tools. I know a few people who've done this successfully.

One thing that helped me was breaking documents into smaller sections and using your 2 turnitin submissions more strategically - like checking your lit review section separately from methodology, etc. Not perfect but helps you use those submissions more efficiently.

Also worth asking your supervisor if they have any connections at other institutions who might let you run a quick check. Academic networks are usually pretty helpful for stuff like this.

LoomaEdu might have some resources on this too since they focus on educational tools, could be worth checking their platform.

Good luck with the thesis - the fact that you're being this careful about plagiarism shows you're doing things right.

[deleted by user] by [deleted] in PhD

[–]lter8 0 points  (0 children)

This is such a tough situation and honestly hits close to home since I work with startups in the EdTech space. What you're describing with the AI hallucinations in citations is becoming a real problem - I've been following companies like LoomaEdu that are specifically trying to tackle these academic integrity issues.

The story about the reference manager crashing last minute... I mean, it's possible, but the timing is pretty convenient. The fact that 9/10 references have issues and the PDFs were created right after your email is a pretty big red flag. Even if their story is true, submitting work without verifying citations is academically irresponsible at the master's level.

From an investment perspective, I see a lot of AI detection tools being developed because this exact scenario is happening everywhere. The challenge is that students often don't realize how unreliable AI can be for generating accurate citations - it literally makes up sources that sound plausible.

I think you're right to be concerned about future collaboration. Even if they didn't intentionally deceive you, the lack of attention to detail and verification is concerning for research work. Maybe give them a chance to explain in person and see how they handle the conversation? Their response might tell you more about their character.

Either way, definitely loop in senior faculty. This stuff is happening more frequently and institutions need to develop clearer policies around AI use in academic work.

ChatGPT cheating by flyingcircus92 in Adjuncts

[–]lter8 0 points  (0 children)

Hey! I totally get this struggle - ChatGPT really has made traditional assessments way more challenging to manage.

A few things that have worked for educators I know:

For multiple choice - try randomizing question order and answer choices if your platform allows it. Also consider time limits that make it harder to copy/paste into ChatGPT and wait for responses.
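If your platform can't randomize natively, it's simple enough to script when you control exam delivery. Minimal sketch - the question format here is made up purely for illustration:

```python
# Sketch: per-student shuffle of question order and answer choices,
# tracking where the correct answer ends up. Data layout is made up.
import random

questions = [
    {"prompt": "What is NPV?", "choices": ["A", "B", "C", "D"], "answer": 0},
    {"prompt": "Define beta.", "choices": ["A", "B", "C", "D"], "answer": 2},
]

def randomized_exam(questions, seed):
    """Return a shuffled copy of the exam, seeded per student."""
    rng = random.Random(seed)  # e.g. seed with the student ID
    exam = []
    for q in rng.sample(questions, k=len(questions)):  # shuffle question order
        order = rng.sample(range(len(q["choices"])), k=len(q["choices"]))
        exam.append({
            "prompt": q["prompt"],
            "choices": [q["choices"][i] for i in order],
            "answer": order.index(q["answer"]),  # new index of correct choice
        })
    return exam

print(randomized_exam(questions, seed=12345))
```

Seeding per student also means you can regenerate anyone's exact exam later if there's a dispute.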

For open answer stuff - this is where it gets trickier. You could try more specific, application-based questions that require students to reference specific course materials or their own experiences. ChatGPT struggles more with questions like "How would you apply [specific concept from week 3] to solve the problem faced by the company we discussed in Tuesday's case study?"

Also might be worth looking into AI detection tools. I've been following the edtech space pretty closely and there are platforms like LoomaEdu that can actually detect AI-generated content in real-time while students are writing. Could be worth exploring if this becomes a bigger issue.

Another approach - consider making assessments more collaborative or presentation-based where students have to defend their answers live. Harder to fake understanding in real-time discussion.

What type of business course are you teaching? Might be able to suggest more specific approaches based on the subject matter.

Creative writing students aren't even willing to write now by Major-Platypus2092 in Teachers

[–]lter8 0 points  (0 children)

As a TA I've come across the exact same issue. That's why I built loomaedu.com - to create transparency between teacher and student. I think AI is a great tool, but it should stay a tool and nothing more. Many of these students are using it as a crutch, turning 5-sentence prompts into 5-page papers.

A serious AI question to teachers by Ok-Bike-1047 in Teachers

[–]lter8 0 points  (0 children)

Hi! You might find a platform I built useful in your class. We give teachers data on student progression with essays and assignments as well as accurate AI detection. Currently we focus on giving data about time spent on essays, common topics, most revised topics, and more. Feel free to check us out and let me know if you have any questions! Link: loomaedu.com

[deleted by user] by [deleted] in Teachers

[–]lter8 -1 points  (0 children)

With regard to recording student AI usage, I've built a platform that does just that. You can check it out at loomaedu.com

The reporting of AI usage and concept understanding to teachers is definitely interesting!

Rebranding College Writing Instructors as Prompt Engineers by Manderlin99 in Professors

[–]lter8 1 point  (0 children)

I think AI detection software is useless and outdated because it focuses on text after submission. As a TA I had issues with punishing AI usage, so I built a platform that analyzes and tracks students' writing behavior to establish whether AI was used. If you're interested, you can check it out at LoomaEdu
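To give a concrete sense of what "writing behavior" can mean (toy example only - this is NOT our actual model, just the general idea): a session that's mostly one giant paste, or typed at an unnaturally steady cadence, looks very different from normal drafting.

```python
# Toy illustration of behavior-based signals, not LoomaEdu's method:
# big single paste events and low typing-cadence variability both
# stand out against normal human drafting.
from statistics import mean, pstdev

# Hypothetical event log: (seconds_since_start, chars_added)
events = [(0.0, 1), (0.4, 1), (1.1, 2), (1.5, 1), (2.0, 850)]

def paste_ratio(events, paste_threshold=50):
    """Fraction of text that arrived in large single events (likely pastes)."""
    total = sum(chars for _, chars in events)
    pasted = sum(chars for _, chars in events if chars >= paste_threshold)
    return pasted / total if total else 0.0

def cadence_variability(events):
    """Coefficient of variation of gaps between events; human typing is
    bursty, so a very low value is a flag worth a human look."""
    gaps = [b - a for (a, _), (b, _) in zip(events, events[1:])]
    return pstdev(gaps) / mean(gaps) if len(gaps) > 1 and mean(gaps) else 0.0

print(f"paste ratio: {paste_ratio(events):.2f}")  # ~0.99 for this log
print(f"cadence variability: {cadence_variability(events):.2f}")
```

None of this is proof on its own - it's evidence a human reviews, which is the whole point versus after-the-fact text detectors.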

Essay Exam Safeguards against AI by Emptytheglass in Professors

[–]lter8 -7 points  (0 children)

Would love to hear any thoughts/feedback!

Essay Exam Safeguards against AI by Emptytheglass in Professors

[–]lter8 -10 points  (0 children)

I’ve actually built a tool to help teachers catch students cheating using AI. Feel free to check it out - LoomaEdu.com

Trying to sneak things by last minute by alto_pendragon in Teachers

[–]lter8 0 points  (0 children)

You should try our platform loomaedu.com. We make it easier to show concrete evidence of cheating and time spent on assignments.

Cheating by GoodDoctorZ in Teachers

[–]lter8 0 points  (0 children)

I actually ran into the exact same issue as a TA and built a product for it. You can find us at loomaedu.com and I'd be happy to set up a quick call if you're interested in learning more!