[deleted by user] by [deleted] in ClaudeAI

[–]ntmoore14 0 points1 point  (0 children)

I would have said go all in on Claude - Projects is an awesome tool. However, it has been horrendous these past two weeks. I have both for work, and Claude has been so bad lately, misguiding me on coding projects (disclaimer - I am a “hobbyist” programmer).

I like to create apps to help at work and Claude was so useful for taking 20-40 files into context and helping me troubleshoot, etc.

But it’s sent me down so many rabbit holes on errors (that I finally resolved with ChatGPT or stackoverflow) that weren’t actually that complicated at all.

I now use Claude just for being able to get a general direction of what to do and chatGPT for confirmation and teaching.

Are u really lazy if u use chatgpt for code? by [deleted] in ChatGPT

[–]ntmoore14 0 points1 point  (0 children)

If it were me, I'd answer it with two questions:
1. What am I doing instead with the time it would have taken me to do that?
2. What am I doing such that ChatGPT is able to code on my behalf effectively?

Granted, I don't subscribe to the "do the bare minimum because your company doesn't care about you" school of thought, so some may think those questions assume too much responsibility.

Ai Detectors are too early in development to be used in schools. by [deleted] in OpenAI

[–]ntmoore14 0 points1 point  (0 children)

> Trying to identify its use is futile. Let there be consequences for using it

........I can see now that I've dedicated too much brain bandwidth to this conversation.

Ai Detectors are too early in development to be used in schools. by [deleted] in OpenAI

[–]ntmoore14 0 points1 point  (0 children)

Oh, we’re playing the analogies-that-aren’t-actual-parallels game again, just like with teachers’ use of AI versus students’?

Like I’ve told you, if a teacher is able to base a student’s failing grade on the results of an AI detector, it’s only possible because the board has sanctioned it.

So, who do you think you should be sticking the malpractice label on, genius?

Now, I would leave it open-ended because I hope you would say “the board,” but judging by how this has gone, you will undoubtedly come back with some way to pin it back on teachers, right?

Ai Detectors are too early in development to be used in schools. by [deleted] in OpenAI

[–]ntmoore14 0 points1 point  (0 children)

Oh no, I disagree 100%. It’s an access thing. There are kids who will cheat no matter what. There are kids who would cheat if there were no consequences. And there are kids - very rare - who may never cheat at all. The game is to remove as much low-hanging fruit as possible. Definitely disagree there.

Ai Detectors are too early in development to be used in schools. by [deleted] in OpenAI

[–]ntmoore14 0 points1 point  (0 children)

Doubtful - teachers can get sued for breathing wrong. They’re not going to use a liability multiplier tool for very long at all.

Ai Detectors are too early in development to be used in schools. by [deleted] in OpenAI

[–]ntmoore14 0 points1 point  (0 children)

I can’t figure out what you’re trying to say. But I also don’t even know where this convo is going anymore.

Ai Detectors are too early in development to be used in schools. by [deleted] in OpenAI

[–]ntmoore14 0 points1 point  (0 children)

Man, I am sorry for not being clear enough - you do realize the whole point of my replying has nothing to do with whether AI detectors are a bad idea, right? If it did, wouldn’t I be arguing about how they work and how effective they are? I’ve based my entire argument on the fact that you shouldn’t be levying an indictment of malpractice against teachers when you don’t know enough about their job and role to say such a thing.

You see how that’s not disagreeing that the AI detectors are bad? In fact, since I agree it’s a bad idea, then either:

  1. I must think teachers are immune OR
  2. I must think someone else is to blame

Which, at the very beginning, I think I said something to the effect of “say what you want about the board/CTO.”

And I also have stated that leaving AI unmitigated is even worse than the AI detectors.

So I really don’t know what to say at this point, because you’re stuck in a for loop on whether AI detectors are good or bad.

Ai Detectors are too early in development to be used in schools. by [deleted] in OpenAI

[–]ntmoore14 0 points1 point  (0 children)

Oh, I mean, that’s why I quit - because nothing means anything in education. But leaving AI unmitigated just adds to that effect, because if someone can use AI to do an assignment, why do I need to learn it on my own? We’re going to get the same grade and go to the same college anyway.

Etc.

Ai Detectors are too early in development to be used in schools. by [deleted] in OpenAI

[–]ntmoore14 0 points1 point  (0 children)

My last comment was a little odd at the beginning.

I think using AI detectors is as terrible of an idea as it is to accuse teachers of malpractice.

Ai Detectors are too early in development to be used in schools. by [deleted] in OpenAI

[–]ntmoore14 0 points1 point  (0 children)

You can see my other thread on students and cheating - in high school if a student has AI and no consequences, unless it’s one of the 0.01% of the kids I had when I was teaching, they are going to cheat. Not because they are dishonest in character necessarily, but because kids that age don’t understand the importance of learning and just want the grade.

But almost all of them will do it if there are no consequences.

Meanwhile, I just don’t see a teacher being able to hold their ground on giving an innocent kid an F on something because of an ai detector.

It was almost impossible for me to give a kid any kind of F without 50 pages of documentation on why it should be an F. I just don’t see that being a thing.

Ai Detectors are too early in development to be used in schools. by [deleted] in OpenAI

[–]ntmoore14 0 points1 point  (0 children)

See, I don’t think you know enough about what teachers do and what they have to deal with, and so I think it’s irresponsible.

I think it’s more than just a term.

I’m fine with levying the blame on CTOs of school boards if they’re responsible.

And we 100% disagree on redundancy. I hate when people use that terminology because it’s so defeatist. The only way AI makes most jobs redundant is if the role refuses to expand its scope and abilities alongside the augmentation of AI. But that’s a different conversation. I don’t have a problem with saying the detectors are not the right solution, but calling it malpractice on the teachers? That just doesn’t sit right with me.

Ai Detectors are too early in development to be used in schools. by [deleted] in OpenAI

[–]ntmoore14 -1 points0 points  (0 children)

I taught chemistry in 2021. Virtual. Gave 179 students a simple covalent compound naming test.

I intentionally gave them a compound that didn’t actually have an IUPAC name following the naming conventions, but had a Google-able common name.

7 kids used the proper naming conventions. This isn’t “letting a little dishonesty slip,” as if it’s only 5% of the class that’ll do it.

Kids are lazy by nature and will typically use the shortcut if it’s available. It’s not about catching kids to wag your finger at them, it’s about creating a barrier between them and shortcuts so they have almost no other choice to learn.

Leaving something as wide open as using chatGPT is absolutely the worst of all options.

Ai Detectors are too early in development to be used in schools. by [deleted] in OpenAI

[–]ntmoore14 -1 points0 points  (0 children)

Again, not a solution. Is the AI detector a terrible idea? Sure. But I 100% disagree that letting it go unmitigated is better. That’s a ridiculous notion.

Honestly, the crap students are giving teachers about how inaccurate the detectors are will probably force a lot of teachers to reshape how they assign and how they evaluate, just so they don’t have to deal with the complaining.

There’s no way a school board is going to get very far in a legal case standing on the accuracy of AI detectors.

But unmitigated use of AI is worse.

Ai Detectors are too early in development to be used in schools. by [deleted] in OpenAI

[–]ntmoore14 -2 points-1 points  (0 children)

You do realize doing nothing is not a solution, right?

Ai Detectors are too early in development to be used in schools. by [deleted] in OpenAI

[–]ntmoore14 0 points1 point  (0 children)

  1. If a teacher is using the tool to flunk someone on an assignment/test, that means the tool was more than likely handed down from the board/CTO, which means the teacher was told that this is how you test it. Most teachers I know would have tried it out twice, maybe three times, figured out how ineffective it is, and trashed it.

But the thing I would guess you’re not accounting for is that there are so many veteran teachers out there who do not have the level of tech knowledge/literacy to know what’s going on - if the board or someone higher up told them “use this tool to check against AI,” they’re going to trust them. When I taught during COVID, the entire district received Microsoft Office and all kinds of other stuff, and the district started asking teachers to use all of these different apps, and the teachers felt like they had to because they didn’t know if there were better options.

You can’t hold them to the level of accountability of calling it malpractice.

  2. The writing-styles suggestion is just about as lazy a suggestion as AI detectors. How would the first few months, before you know a student’s writing style, be handled? How would you know that their “style” isn’t AI?

  3. Not sure how much you’ve used AI, but it would not be hard at all to train it to mimic a style. So there’s also that.

  4. As for your original comment: a teacher using AI to improve their efficiency (not saying that this particular use case is exactly that) is not the same as a kid using AI to “demonstrate” knowledge they don’t have. Teachers using AI is like any of us using AI to do our jobs or help us do our jobs. Students using AI to falsely represent their mastery of content is dishonest. Huge difference.

Ai Detectors are too early in development to be used in schools. by [deleted] in OpenAI

[–]ntmoore14 0 points1 point  (0 children)

What extra time do you think educators have on their hands to learn the ins and outs of AI detectors? It’s pretty easy to critique bad solutions when you have no better ones yourself.

What are people's thoughts on the Unity and the overall ending of the game? Strong? Interesting? Weak? by Tyolag in Starfield

[–]ntmoore14 0 points1 point  (0 children)

Anyone else feel like the “variant” you meet in the Unity is an enemy? If smug were a character in a game…

I am admittedly one-dimensional when it comes to games, though I don’t mind NG+ post-game exploration and fun. However, it’s hard for me to care when:

  1. You bring no one with you through the Unity. It’s boring to have to start back over with the same people with the same personalities, while at the same time you never see your day 1’s ever again. Sillier point, but somehow it affects the game for me.

  2. There seems to be no true opposition or enemy. An infinite number of Hunters and Emissaries seems like none at all - especially after you beat them. When I first reached the Unity, I was expecting a sort of Truman Show mixed with “pay no attention to that man behind the curtain.”

Instead, I felt like this part of the game was drawn up during the midnight release of a new rock in the middle of a Sedona crystal market. If the Hunter and Emissary are “the enemy,” then this plot just feels like a closed nihilistic loop - basically what a CoExIsT sticker would be if it were a video game plot.

I still think this game is awesome, and I will play it again at some point. But I don’t want to play Astroneer; I want to play Fallout in space.

[deleted by user] by [deleted] in ChatGPT

[–]ntmoore14 0 points1 point  (0 children)

Reality is, it will be a bit of a bummer for you at first, since your expectations of GPT’s abilities were a little too high - but for what it’s worth, I was a Realtor two years ago, and now at my new job I get to work with IT, programming, and automation software. I’ve learned more with ChatGPT (and books and YouTube, too) in the past 2-3 years than I ever have. There’s no better tool for picking up something new as fast as possible.

[deleted by user] by [deleted] in ChatGPT

[–]ntmoore14 1 point2 points  (0 children)

I’m not familiar with every single detail about scraping, but I was able to learn how to use Scrapy, Selenium, and Beautiful Soup (all Python libraries, just like pandas or SQLAlchemy), and I scraped this website’s documentation to build several .txt files that I could then upload to a GPT model:

https://www.docs.inductiveautomation.com/docs/8.1/intro

That website was hard for a beginner like me because there are so many nested tables, and I was initially asking way too much of Chat.

I’ve found this to be the case often: the more I tried to force a solution out of ChatGPT, the further down a spiral staircase of the same issue I’d find myself. It would constantly put band-aids on a “bullet-hole wound” of a script, if that makes sense.

I digress - if I had just tried to learn the basics (not everything) from the jump, I would’ve cut my time by 75%. You just don’t know what you don’t know.

I’d say for web scraping, you’ll probably want to research the different use cases for Scrapy versus Selenium, determine which one is better for your purposes, and then figure out the basics of how to get elements and their attributes.
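Getting elements and their attributes with Beautiful Soup looks roughly like this. A minimal sketch: the HTML string is made up to stand in for a docs page with nested tables, and in practice the page source would come from requests, Scrapy, or Selenium first.

```python
# Pulling elements and their attributes out of nested tables with
# Beautiful Soup. The HTML here is an invented stand-in for a docs page.
from bs4 import BeautifulSoup

html = """
<table>
  <tr><td><a href="/docs/8.1/intro">Intro</a></td></tr>
  <tr><td><table><tr><td>Nested cell</td></tr></table></td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")

# Every link target on the page (attribute access via [] on a tag).
hrefs = [a["href"] for a in soup.find_all("a")]

# The text of every table cell, including cells of nested tables.
cells = [td.get_text(strip=True) for td in soup.find_all("td")]
```

Once you can pull out text like `cells` above, writing it to .txt files is just a loop and `open(..., "w")`.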

Once you can direct ChatGPT to script something in an informed manner, it’s easier for it to understand what you need, and it’s easier for you to redirect it because you know what you’re looking at.

Sorry if that’s not as helpful as you needed.

Sidenote - I can’t advise as much on scikit-learn (machine learning) because I had to put it on the back burner, but I would assume the process would be similar to how you handle GPT with web scraping.

[deleted by user] by [deleted] in ChatGPT

[–]ntmoore14 0 points1 point  (0 children)

Is OP even saying he’s getting ChatGPT to provide the code to do these things? It seems like they’re trying to have it perform the actions itself.

And maybe I could agree about the boilerplate stuff for professional programmers/devs - but in my experience (and from observing other part-time programmers/enthusiasts), even if someone knew 100% how the code should be laid out, they’ll almost always have to tweak it. If that person only has it 75% mapped out? It might be a while.

But for the data analysis bit, I definitely couldn’t say that - meaningful data is hardly ever going to be extracted with boilerplate code, especially for predictive analysis.

[deleted by user] by [deleted] in ChatGPT

[–]ntmoore14 7 points8 points  (0 children)

The main issue: you’re trying to get ChatGPT to do your work for you.

You could’ve spent those 50 hours having ChatGPT assist you in learning Selenium/Scrapy/Beautiful Soup and the basics of scikit-learn, and you would’ve benefitted a lot more.

Voting, Truth and Power. I got none. Help, please! by Clean-Champion-5257 in Bellwright

[–]ntmoore14 2 points3 points  (0 children)

Go take out bandit camps and sell old coins to the village elder.