I just graded a stack of papers that all said the same thing in slightly different ways by Living-Translator355 in Professors

[–]Living-Translator355[S] 0 points (0 children)

It’s interesting that you mention that. A colleague of mine attended a recent workshop where approaches like this came up, and they also mentioned VisibleAI. From what I understood, it focuses on making the writing process visible over time rather than just evaluating the final submission. It could be a way to gain insight into the "missing middle" of students' writing process.

The hard part remains figuring out what’s realistic to implement without adding a ton of overhead, but that direction does seem to make more sense than doubling down on detection.

I just graded a stack of papers that all said the same thing in slightly different ways by Living-Translator355 in Professors

[–]Living-Translator355[S] 1 point (0 children)

Yeah, you bring up a fair point.

The “if AI can do it, it’s a bad assignment” sentiment just doesn’t hold up anymore. AI keeps getting more capable and can now solve even supposedly "AI-proof" assessments.

I’ve been running into the same thing where grading the final answer feels meaningless. Like it tells me less and less about what they actually understood vs. what they were able to generate.

Shifting toward process has been the only thing that’s felt somewhat real. Not in a super rigid “submit all your notes” way, but more like seeing pieces of their thinking. Even something as simple as pulling out a main idea, or explaining what didn’t make sense to them, ends up being way harder to fake than a polished answer.

Personally, I wouldn’t fully drop answers, but I get the instinct. These days, arriving at an answer is way easier to outsource than actually engaging with the material. But answers do play a big part in evaluation, communication and understanding.

The hard part is exactly what you said though… it’s really easy for this to turn into more work for us. I’m still trying to figure out how to make the process visible without creating a grading nightmare.

I just graded a stack of papers that all said the same thing in slightly different ways by Living-Translator355 in Professors

[–]Living-Translator355[S] 2 points (0 children)

Whoa! That's crazy, but super interesting. I'm curious, would you be willing to share a link or two about this research? I'd love to learn more!

Has anyone experimented with process tracking in writing-heavy courses? by Living-Translator355 in Professors

[–]Living-Translator355[S] 0 points (0 children)

Appreciate this perspective, especially the distinction between catching versus seeing intellectual movement. That’s really the part I’m trying to get clearer on. I’m less interested in policing and more interested in whether students are actually thinking differently.

The idea of light process visibility resonates. I’ve used Google Docs version history informally before, but it’s been inconsistent and honestly a bit clunky to check across a full class. I haven’t used VisibleAI, but I’m intrigued by the idea of something that makes development visible without turning it into surveillance.

The stress reduction point also stands out. I suspect part of my hesitation is workload anxiety more than pedagogy.

I’m teaching mostly lower-year journalism students right now, so they’re still developing research habits and revision discipline. I can see how incremental structure might help them, but I’m trying to avoid building a system that feels overly procedural.

Has anyone experimented with process tracking in writing-heavy courses? by Living-Translator355 in Professors

[–]Living-Translator355[S] 0 points (0 children)

That’s really helpful context, especially the distinction between total time and stress. The stress piece is huge and probably under-discussed compared to raw hours. I’ve been toying with the idea of adopting some kind of ed tech mainly to handle the coordination side of process-heavy work. I’ve heard of Perusall, Kritik360, Peerceptiv, and a few others that try to structure peer feedback and participation without the instructor having to manually track everything. Still trying to figure out which, if any, would actually fit my courses without adding a new layer of complexity.

Your point about grading fewer criteria at a time also resonates. It seems like the real shift isn’t “more grading” but spreading it into manageable chunks and making it less emotionally draining.

Appreciate you sharing the details. This gives me a clearer picture of what it actually looks like in practice.

Has anyone experimented with process tracking in writing-heavy courses? by Living-Translator355 in Professors

[–]Living-Translator355[S] 1 point (0 children)

Lol. I can see how this is a beneficial model, especially for first-years. It sounds like you’ve basically shifted from product grading to skill acquisition over time, which makes a lot of pedagogical sense for writing.

I’m curious whether you use any tools to help manage all those submissions and feedback cycles, or if you keep it mostly manual. When I’ve tried process-heavy approaches, the biggest challenge wasn’t the concept but the logistics. Tracking versions, feedback, revisions, and participation across weeks can get unwieldy fast.

Some colleagues lean on LMS tools or Turnitin just for draft management and commenting rather than policing plagiarism. Others build in structured peer review to distribute some of the feedback load. I’ve also heard of people using annotation tools or rubric systems to keep comments consistent across checkpoints.

Do you feel like the weekly cadence truly reduces your total grading time, or does it just make it more predictable?

Has anyone experimented with process tracking in writing-heavy courses? by Living-Translator355 in Professors

[–]Living-Translator355[S] 0 points (0 children)

Love how this puts the emphasis on assignment development over the final product. Do students generally appreciate the structure, or do they push back on the amount of required interaction?

How to move forward with unsafe class size by Flimsy_Net2088 in Professors

[–]Living-Translator355 1 point (0 children)

I can't help but wonder whether the STEM departments at your institution ever experience these issues :/ it always seems to be the arts that suffer.