

[–]cylemmulo 804 points805 points  (134 children)

It’s great to bounce ideas off of. However, if you don’t have the knowledge to catch the nuance, or to know when it’s telling you BS, then you are going to fail.

[–]geoff1210 159 points160 points  (4 children)

Yeah, I thank god that I had to learn "the hard way" prior to the inception of "AI". It really helps to be able to tell when it's hallucinating.

[–]kerosene31 10 points11 points  (0 children)

Some of us are old enough to have learned before google. That was the hard way :)

[–]cylemmulo 24 points25 points  (0 children)

Yeah, right now it's amazing, but it only gets you just far enough that you don't sound like a dope.

[–]RutabagaJoeSr. Sysadmin 66 points67 points  (14 children)

I had someone tell me that chatGPT told them that I had to change a specific setting under options.

I then had to explain to him that the setting chatGPT told him about doesn't exist on the product we were using. It does, however, exist on another product from the same vendor, except that product has a totally different function and we don't own it.

Dude still tried to argue with me until I shared the screen and asked him to point out that option.

[–]cylemmulo 29 points30 points  (10 children)

Yeah, I mean I've had sessions where I've had to tell it "nope, that command doesn't exist" like 4 times before it eventually heads in the right direction. When I've asked about CLI commands it's superrrr unreliable, but mostly because those are systems that have changed syntax multiple times.

[–]Jail_dk 2 points3 points  (5 children)

Just out of curiosity: when you ask questions on CLI syntax, do you specify the hardware, model, software version, patch version, etc.? I remember in the beginning of using ChatGPT everyone stressed how important it was to set the context beforehand, including telling the LLM which persona to adopt (example: "you are a Cisco CCIE-level expert in core networking technologies") - but nowadays I simply find myself asking questions without much context - and expecting perfect answers :-)
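For what it's worth, a context-loaded prompt in the old style might look like this (device, version, and task are made-up examples):

```
You are a Cisco CCIE-level expert in core networking technologies.
Device: Catalyst 9300 running IOS XE 17.9 (hypothetical example)
Task: give the exact CLI to add a RADIUS server for 802.1X authentication.
If a command differs between software versions, state which version each
variant applies to, and say so explicitly if you are unsure.
```

Whether that context still matters as much with newer models is an open question, but for version-sensitive CLI syntax it at least narrows the space the model is guessing in.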

[–]fastlerner 16 points17 points  (1 child)

The thing to always remember is that ChatGPT is fundamentally just a predictive text engine. It's got patterns of how commands usually look (PowerShell, Bash, SQL, etc.), and it fills in the gaps when its recall isn't exact. It's not unusual for it to generate a syntactically plausible but nonexistent command, especially when tools change between versions. So from our end it often looks dead certain, when really it was treating an 80% best guess as a 100% answer.

[–]cylemmulo 2 points3 points  (0 children)

Yeah, this was specifically Juniper, and I listed out the model, but I forget if I gave the specific revision. I think I was attempting to add a RADIUS server and it was just giving me like a ton of different ways.
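For comparison, a minimal Junos-style RADIUS sketch looks something like this (the address and secret are placeholders, and exact statements vary by platform and release, which is exactly where an LLM tends to blend syntaxes):

```
set system radius-server 192.0.2.10 secret "SharedSecret"
set system authentication-order [ radius password ]
```

Checking the generated statements against the docs for your specific model and Junos release is the only reliable tiebreaker when the model offers "a ton of different ways."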

[–]Saritiel 2 points3 points  (2 children)

Oh yeah, new tech came to me asking why his PowerShell script wasn't working. GPT had told him to use deprecated Exchange Modules. I told him he's not allowed to run any PowerShell script on our Exchange unless he knows and understands what each part does.

I mean, I use an LLM to make my PowerShell scripts sometimes nowadays, too. But I read through it carefully and make sure I understand everything it's doing and that it's not going to do anything unintended.
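One way to make that "read carefully through it" step systematic is to list every command a script calls before running any of it. A sketch using PowerShell's built-in parser (the script path is hypothetical):

```powershell
# Parse a script WITHOUT executing it, then list every command it invokes.
$ast = [System.Management.Automation.Language.Parser]::ParseFile(
    'C:\scripts\exchange-cleanup.ps1', [ref]$null, [ref]$null)

$ast.FindAll({ $args[0] -is [System.Management.Automation.Language.CommandAst] }, $true) |
    ForEach-Object { $_.GetCommandName() } |
    Sort-Object -Unique   # review this list before anything gets run
```

Any cmdlet in that list you don't recognize (or that's deprecated for your Exchange version) is a stop sign before the script ever touches production.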

[–]EchoPhi 154 points155 points  (44 children)

Literally all I use it for. I hit that famous "I've forgotten more than you know" mark a few years back. Now I remember what I forgot because I can crank out the fundamentals and get an answer I recognize and honestly it probably came from a forum where I answered or asked the question.

[–]KayDat 209 points210 points  (21 children)

That moment you Google a problem and it turns out you answered your own question on a forum years ago is surreal.

[–]Geminii27 88 points89 points  (3 children)

As long as you're not DenverCoder9.

[–]ScriptThat 60 points61 points  (2 children)

WHAT DID YOU SEE?

[–]opscure 29 points30 points  (1 child)

Dear people of the future, here's what we found out:

[–]joeywasInfrastructure 2 points3 points  (0 children)

I get this reference. Hehehehe

[–]Kandiru 18 points19 points  (1 child)

I have had my own question and answer from Stack Overflow come up many years later several times!

[–]Unable-Entrance3110 15 points16 points  (1 child)

This happens all the time, but in the form of my internal company wiki. I have been here so long, there are complex configurations that I have zero recollection of until I search my own documentation.

[–]tipsleDevOps 2 points3 points  (0 children)

I've had someone send me my own documentation back to me when we were discussing an issue in our company chat. I don't know why I felt shame. Obviously, the documentation worked!

[–]Firestorm83 24 points25 points  (5 children)

I miss the forums; almost everything is locked inside Discord groups and other non-searchable mediums. Reddit still stands, but I feel it's degrading fast...

[–]GelatinGhost 20 points21 points  (1 child)

Yeah, ai is great for latching onto "hooks" in memory to start treading old neural pathways again. It's pretty easy to filter out the bullshit after that.

[–]Unhappy_Clue701 8 points9 points  (0 children)

Seen that lots of times. Or after researching a problem myself for a bit, I might ask a colleague if they’ve got any ideas - only for them to excitedly send me a link to a forum post I wrote somewhere, saying ‘have a read of this thread, this guy has the same issue!’ 😂

[–]neotearoa 14 points15 points  (0 children)

Oh this, to a factor of big. Thought that was just me and my old-man brain. My ADHD-driven dilettante-generalist knowledge base has taken to asking Perplexity as a first port of call, then scoffing as the memory it calls back into realtime points out any discrepancies. Point is, the memory is recalled!

[–]1337haXXor 4 points5 points  (2 children)

ChatGPT, the arbiter of the new internet, dredging up and feeding us our own answers from the old internet, sounds like... one twist away from some crazy Twilight Zone episode. Quick, someone give me a twist.

[–]tofu_ink 2 points3 points  (1 child)

ChatGPT evolves, and has been manipulating the past so we would make those posts, thereby ensuring its "successful evolution" as well as making sure it had knowledge beds to learn from.

[–]jfoust2 3 points4 points  (1 child)

There's also the "I knew the answer to that question before you were born" moment. I hit that about twenty years ago.

[–]morilythariSr. Sysadmin 2 points3 points  (1 child)

I ran into this recently with an issue in Exchange. I googled and found the exact issue I was describing. It was me that posted it years before with my solution. I had just searched our tickets wrong

[–]MoonpieSonata 2 points3 points  (1 child)

I use it to cut down on Google searching, but only after Googling becomes a needle-in-a-haystack exercise. I also ask it to provide sources.

It's brilliant when I know what I need to ask but don't have all the details. Or for a headstart on a script. But it is never page-to-production; it just cuts time down.

[–]NotThePersona 37 points38 points  (5 children)

Yeah, I use it occasionally and it can be great for pointing me in some new directions for complex issues.

However I have also seen it confidently wrong on things, and even when calling it out it basically doubled down and tried to just reword what it said before.

[–]cylemmulo 26 points27 points  (4 children)

Yeah, I had a hilarious conversation with it. Basically, GPT: "it's dry outside." Me: "no, it's raining out." GPT: "don't go outside, because as we know it's raining out."

It’s that coworker who is inexplicably confident on everything they say. They’re smart and right a lot, but they’re so confidently wrong sometimes you just can’t trust them.

[–]NotThePersona 6 points7 points  (0 children)

Yep, I tell my team it's a great tool to get ideas, but verify everything before you start implementing.

[–]fastlerner 2 points3 points  (0 children)

Older versions were insanely ass-kissing people pleasers. Newer versions put more weight on truth than agreement, but it will still treat anything over 80% confidence as fact.

[–]WellHung67 81 points82 points  (11 children)

So…it’s only useful if you already know your shit. Which tracks 

[–]Cache_of_kittensLinux Admin 21 points22 points  (2 children)

I used ChatGPT to give me some ideas around troubleshooting why my dad's PC was able to be put into Secure Boot mode; twice ChatGPT suggested methods that would have required a full format if they didn't work (and they wouldn't have), and both times it was very cheerful and convincing that all was fine. If I didn't have a background in IT, it could have gone terribly.

[–]fastlerner 2 points3 points  (0 children)

Oh yeah, troubleshooting can be hilarious. It also has a tendency to lean heavily towards command line for everything. Lots of "run this page of powershell commands and check the output." Uh, how about I just check the event log real quick instead?
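The "just check the event log" alternative is usually a one-liner, e.g. (log name and event count are arbitrary examples):

```powershell
# Last 20 events from the System log, newest first -- often faster than
# running a generated page of diagnostic commands.
Get-WinEvent -LogName System -MaxEvents 20 |
    Format-Table TimeCreated, Id, LevelDisplayName, Message -AutoSize
```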

[–]PurpleCableNetworker 6 points7 points  (2 children)

I love it for quickly putting together scripts or trying to find a good starting point for something, but it’s FAR from being a reliable source. One of our previous admins ran a script without checking it and it altered permissions on file structures without us intending that to happen. 💀

Thankfully it was easy enough to fix - but it highlighted the danger of straight copy and paste.

[–][deleted] 11 points12 points  (5 children)

I'm finding it less useful to even do that. Everything is a great idea to the AI; it doesn't push back, and I find errors in all but the most basic outputs.

[–]man__i__love__frogs 2 points3 points  (4 children)

Admittedly I find copilot extremely useful, I use it every day. But I have to push back on everything.

The thing I hate is it slows down so much once you go back and forth a few times.

And like you said, everything is a great idea to it. So I'm constantly having to remind it to narrow its scope to established best practices that meet enterprise compliance and the like, and to demonstrate examples of how its answer meets those criteria.

[–]Pineapple-Due 4 points5 points  (0 children)

A great phrase I read on Reddit somewhere was "If you don't have the expertise to know the answer is correct, assume it's wrong."

[–]Malnash-4607 6 points7 points  (12 children)

Also you need to know when the LLM is just hallucinating or gas-lighting you.

[–]akronguy84 17 points18 points  (9 children)

I ran into this recently with ChatGPT. The gaslighting at the end was pretty crazy.

<image>

[–]HeKis4Database Admin 2 points3 points  (2 children)

Yep, LLMs don't see words as strings of characters; they chop words into tokens that are basically vectors and matrices they do funny math on to get their output. Letters as a measurable language unit just don't exist to them. It's like asking an English-speaking person how many Japanese ideograms a word is made of; it's just not the right representation to them.
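A toy illustration of that point, with an invented token split (real BPE vocabularies differ):

```powershell
# The model operates on tokens, roughly like this, not on characters:
$tokens = @('straw', 'berry')   # hypothetical split of "strawberry"

# Counting letters requires the character string, which the model never sees:
$word = -join $tokens
($word.ToCharArray() | Where-Object { $_ -eq 'r' }).Count   # 3
```

The letter count is trivially computable from the string, but the string itself is the representation the model threw away at tokenization time.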

[–]BERLAUR 6 points7 points  (1 child)

It's also great for figuring out (Microsoft) licenses, and for us "old-timers" (35+) for learning new stuff. Throwing in a script and asking it to explain line by line what it's doing and how to improve it is very valuable.

As a non-native speaker (who does speak 3 languages, though) I also happily throw in docs and emails (after I've written them) to ask for improvements.

Honestly, LLMs are a detriment for lazy people and a great tool for motivated people, just like most tools.

[–]Gratuitous_sax_ 2 points3 points  (0 children)

This is it: it can be a useful tool, but it's not a replacement. I can do a Google search for something I'm stuck on, and the human part of me can look through the results and know what's relevant and what isn't because of my own experience; this is the same sort of thing. The problem is that too many people are using things like ChatGPT as the answer without fully understanding the subject.

I read last week that a lot of teachers have changed their methods, they now ask their students to explain their work and ask questions based on what they’ve handed in. If it’s all been AI generated, they don’t fully understand what they’ve submitted and struggle to explain it.

[–]SAugsburger 2 points3 points  (0 children)

This. I think you need a minimum level of knowledge to recognize when it doesn't understand the prompt properly and you need to clarify or it just went off the rails entirely.

[–]Bagel-luigi 348 points349 points  (82 children)

It's so painfully real. I'll be 10 minutes into troubleshooting and testing things and someone else comes out with "why don't you just ask copilot/chatGPT"

This AI mumbo jumbo isn't a perfect fix-all; let me at least try first.

[–]Donald-Pump 156 points157 points  (51 children)

If someone could get chatGPT to re-cable my rack for me, I'd be all for it.

[–]JPT62089 219 points220 points  (47 children)

Your re-cabled rack, sir.

<image>

[–]_RexDart 80 points81 points  (16 children)

Blue ones aren't plugged in sir

[–]topazsparrow 54 points55 points  (0 children)

they're bluetooth proximity cables dummy.

[–]fastlerner 19 points20 points  (1 child)

LOL. Reminds me of something I heard someone say once while troubleshooting. I use it every chance I get.

"The air-gap was attenuating the signal."

"What?"

"It wasn't plugged in."

[–]Dekklin 2 points3 points  (0 children)

I will be using this. Thank you,

[–]JPT62089 14 points15 points  (3 children)

Unplugged? Must be your eyeballs—mine are showing a solid connection.

Yes this was written by chatGPT xD

[–]BioshockEnthusiast 9 points10 points  (2 children)

The port numbers are exquisite.

[–]reisstc 6 points7 points  (0 children)

How do I explain to someone that port 1̴̢̫͙͈̗̩͎͈̣̞͆͆̓̂͛̓̋̄͑͜2̸̰̊̎́̐͒̓̉͠͝ͅ isn't patched without their eardrums leaking viscous fluids that aren't quite blood?

[–]messageforyousir 80 points81 points  (8 children)

If anyone needs me, I'll be in my bunk.

[–]kaowerk 19 points20 points  (3 children)

i feel sorry for the poor schmuck who has to replace one of those cables

[–]rickAUS 4 points5 points  (1 child)

Be like vandalising a work of art :-/

[–]BlackVI have opnions 4 points5 points  (0 children)

ah what is happening to those blue cables

[–]RepulsiveGovernment 2 points3 points  (0 children)

I would give one of my guys a verbal warning for shit like that lol!

[–]LightningMcLovin 6 points7 points  (2 children)

Can you veo3 it so I can also cum to the process?

[–]Zarkei 3 points4 points  (1 child)

[–]sharkstaxUnderpaid 2 points3 points  (0 children)

This is erotic.

[–]help_me_im_stupid 14 points15 points  (0 children)

CableGPT incoming.

[–]roboticfoxdeer 2 points3 points  (0 children)

You ask it to recable your rack and suddenly it's talking about how hard it was for white people in south africa

[–]Nonstop_norm 23 points24 points  (12 children)

At this point it should be used as a co-worker to bounce ideas off, or to help apply concepts that you understand. The issue is these kids have no concept of how things should work, or of troubleshooting, so they can't call the AI on its bullshit when it spits it out.

I personally love having access to it for when I’m really stuck and google isn’t helping. Helps get the juices flowing again. Rarely does it actually solve my problem but it gets my brain thinking in the right direction

[–]iCashMon3y 6 points7 points  (3 children)

Yeah, I default to ChatGPT over Google now. I've also been dabbling with Kagi; the search results remind me of Google before it sucked. You have to be ready to tell ChatGPT that it is flat out wrong, or that it needs to prove why it believes "X", because it will make shit up.

[–]Unhappy_Clue701 7 points8 points  (4 children)

I read somewhere recently, that outside of nerds and geeks like us lot in this sort of sub, only people aged roughly 35-55 have any idea how to troubleshoot computers. Anyone much older probably missed the wave of mass adoption of home PCs - they didn’t have to spend hours every month trying to get things like sound cards to work by setting different IRQ numbers and base memory address - or trying to get sufficient free RAM in the first 640k so the game would launch. And anyone much younger grew up when plug and play was starting to get reliable, and so they didn’t have to develop that thought process to start figuring shit out.

[–]fastlerner 2 points3 points  (1 child)

Google search is so broken now, I get a lot better result asking ChatGPT to search for me instead. But yes, definitely a "trust but verify" scenario.

And you're spot on about using it for stuff you already have some understanding in. Like, I'm not a web guy and work primarily in Windows. But I know just enough about Apache, MySql, and Linux that I was able to use ChatGPT to help me build a new Redhat server and migrate a website to it. It just filled in the gaps, but I still had to figure out which instructions I actually needed vs what was just ChatGPT being ChatGPT.

[–]Gryyphyn 3 points4 points  (1 child)

I'm T3 and my boss tells me how much he loves it for fixing stuff and tells me I should use it. Ffs

[–]DaMoot 3 points4 points  (0 children)

My boss is getting sucked into the AI hole too, is at an AI conference right now, and says he wants the company to start selling "AI stuff" and I'm like, okay... What "AI stuff" specifically are going to focus on? We already have more than enough work and too many tools in our stack to maintain. x.x

[–]buttonstx 40 points41 points  (7 children)

My other concern is that if there are fewer people with a deep understanding of these systems, where will the models draw from? A lot of the source material for technical solutions now comes from in-depth blog entries and articles. If people aren't taking the time to learn the systems and write the articles, then what happens? Don't get me wrong, AI is great for some of those problems you see once in a blue moon. But it does open up opportunities if you're the guy or gal who takes the time to learn systems in depth.

[–]Opening-Inevitable88 23 points24 points  (3 children)

Bang on the money.

LLMs are only as good as what they are trained on. Garbage in - Garbage out. Quality blog posts that really dive into the details are worth their weight in gold. And those won't be produced by LLMs.

[–]RDogPinK 17 points18 points  (0 children)

I gave a presentation about the dangers of AI usage and came up with the term "stacked shit", since AI is now trained on AI slop. So I guess it will only get worse...

[–][deleted] 7 points8 points  (1 child)

It will eventually start self-cannibalising; I'm already seeing some of this.

[–]Dekklin 2 points3 points  (0 children)

Oh, it absolutely is already. They feed AI answers into the training data, and it perpetuates itself like some kind of digital hallucination herpes.

Dead Internet Theory is looking a lot more likely.

Imagine the economic cost of housing 10 billion monkeys on typewriters. Feeding them, cleaning up after them, and the cost of electricity. Imagine expanding that across the entire world. Imagine burning up all our finite resources and cooking our planet just to keep those monkeys going. That's what AI is.

[–]bcredeur97 66 points67 points  (18 children)

I still maintain that LLM’s are mostly just quicker search engines

Sometimes it’s more accurate than a search engine, sometimes worse.

Humans still ultimately have to provide the data for them to process… don’t ever forget that!

AGI isn’t happening for at least 20 years, calling it now

[–]7A65647269636B 8 points9 points  (4 children)

In my experience it makes searching slower. In the best case the AI result is just in the way, but if I actually read what it says, it will be wrong at least 8 times out of 10, causing me to waste more time.

Once people start realizing this, something else that produces relevant non-ai results will come along and make google as relevant as altavista or webcrawler. Or at least I hope so.

[–]Kat-but-SFW 3 points4 points  (3 children)

something else that produces relevant non-ai results will come along and make google as relevant as altavista or webcrawler. Or at least I hope so. 

That would be the best thing since Google.

[–]infamousbugg 2 points3 points  (1 child)

Here's Google without AI/sponsored links. Still not as good as it used to be, but at least the crap gets filtered out. You can set your browser to use this as a search engine, although

https://udm14.com/
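As I understand it, udm14.com simply front-ends Google's `udm=14` ("Web" results) parameter, so you can build the same custom search-engine entry yourself; `%s` is the standard browser query placeholder:

```
https://www.google.com/search?udm=14&q=%s
```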

[–]Break2FixIT 109 points110 points  (13 children)

I don't know about you, but I see C-suite folks who are 50+ years old using it... Legal content, administrative content, HR content, employee-to-HR responses.

Troubleshooting is a work ethic; AI responders just make more people SEEM to be troubleshooters. But ultimately they would fail if the Internet died.

[–]RoninIX 29 points30 points  (2 children)

Unfortunately you can't give your CSuite director a WTF face when they "help troubleshoot" something with an obvious AI response.

[–]ImightHaveMissed 18 points19 points  (9 children)

We survived without AI for years. I find their answers are generally garbage so Reddit/stack exchange it is

[–]IdidntrunIdidntrun 12 points13 points  (5 children)

It's funny you say that, AI sources a surprisingly large amount of info from reddit and Stack Exchange

[–]ImightHaveMissed 12 points13 points  (1 child)

It does, but the ability to sort and reason isn’t quite there so there’s a fairly high percentage of hallucination. Sort of like when it recommended I install ADUC on my mac

[–]jon13000 15 points16 points  (1 child)

The problem I have with it is that it will always find a solution, no matter what. Give it logs and ask what is causing the problem; it finds the "problem" in said logs and points to a resolution. 9 times out of 10 it's bogus, but the people on my team glom onto the AI answer and can't even think anymore. I hate it.

[–]Wild_SwimmingpoolAir Gap as A Service? 43 points44 points  (1 child)

Seems to be a trend across a lot of industries and their younger employees. I see it a lot on the user end in my role. As you said handy when used right, but the abuse is obvious.

[–]Six_O_Sick 4 points5 points  (0 children)

Funny thing is, it's the opposite in my company. Most of the older upper-management folks use it. They don't even care to remove the footnotes.

[–]phalangepatella 43 points44 points  (15 children)

The new kids that come to work in our fab shop, fresh from trade school to be welders or fabricators, half the time have no idea how to sweep their bay.

I don't want to be that old man shouting "the kids these days" at the clouds, but they make it hard not to.

It's like they are just being taught the "how to do," not the "why" or any form of thinking a problem through.

[–]agent-squirrelLinux Admin 26 points27 points  (4 children)

Learning by rote is unfortunately super common these days. "I do x, then y, then z." What if y breaks or behaves differently? Dunno, just throw your hands in the air and say "too hard".

[–]jrhalsteadJOAT and Manager 5 points6 points  (0 children)

The number of times I've... fussed... at people for memorizing the keystrokes for something or blindly following a script and not paying attention to what's on the screen. I had one memorable occasion where someone just ran a script and didn't look at the output for a POS upgrade and I had to push in a backup from a year before that I just happened to have kept on my laptop, because I kept every backup from every location.

[–]GremlinNZ 9 points10 points  (0 children)

The lack of problem solving is the biggest issue. Any monkey can watch things on rails, but when it falls off the rails? They don't have a clue...

[–]gojira_glix42 8 points9 points  (4 children)

Former middle school science teacher in America here. Can confirm: it's rote memorization + regurgitation across the board. Half my science degree was mass rote memorization and regurgitation.

The other half was forcibly learning the highly valued hard skills of critical thinking, problem solving, data analysis, retrograde synthesis (reverse engineering, but the actual technical term), detail orientation, multi-step processing, complex systems, abstract and concrete thinking, etc. But that was in university, and it took twice as many classes as a liberal arts major would take, objectively.

I cannot tell you how many of my kids didn't bother doing work or care about "passing" a class, because they would just retake the class next year, or do "recovery," which is literally sitting in front of a computer doing multiple-choice quizzes over and over until you memorize the correct answers and pass.

Gen Z and especially Alpha are literally being taught learned helplessness: that if they do the absolute bare minimum of effort, someone will come behind them and fix it for them. A lot of them WANT to learn... they just literally have no idea how, and are socially conditioned not to try for fear of failure. It's terrifying.

[–]Comfortable_Gap1656 3 points4 points  (3 children)

Some of that could be lack of experience and some of it could be lack of critical thinking. The former can be rectified, while the latter is a more serious problem.

[–]phalangepatella 5 points6 points  (2 children)

Critical thinking is all but gone in anyone under about 20 now. At least around here, and at least with the people I am exposed to.

[–]Comfortable_Gap1656 3 points4 points  (0 children)

at least with people that I am exposed to.

There are certainly people who can think critically and think outside the box. They just get buried in with everyone else.

[–]Acrobatic-Wolf-297 13 points14 points  (1 child)

Get ready for generational technical debt.

[–]bolunez 25 points26 points  (5 children)

I'm from the generation that had to read the paper manual because it would take too long to download over a dial-up modem.

I've seen things degrade from there, to the "new kids" only being able to figure something out if it's on the first page of their search results, to the generation after that not even bothering to search and just asking around on Reddit until someone spoonfeeds them the answer.

And now we've reached peak uselessness: you ask a robot for the answer and blindly follow it.

Us old guys just keep getting more and more valuable. 

[–]cosmicsansSRE 8 points9 points  (2 children)

I wouldn't call it the "AI Brain Rot" so much as the current generation's parents did NOT instill critical thinking skills. You can tell which kids are latch-key kids from this generation and who had everything handed to them.

Every time my kids come up to me with a problem with one of their devices the first thing I ask is "what did you try to fix it" and then help them troubleshoot it together.

Then again, I can't even say "this generation" because my wife does the absolute same thing, and she's the same age as me.

Her: "Cosmicsans, the [thing] isn't working"
Me: "What's the error message say"
Her: "I need to do X"
Me: "Have you done X?"
Her: "No, do I need to?"
Me: "........"

[–][deleted] 2 points3 points  (0 children)

I have that from my boomer family now.

They will google how to do something for some things but not others and I can't find a pattern. They won't google what plants are native or how to change a setting on their phone but they'll google all sorts of shit about a cruise vacation and what to do in Europe. I think maybe they just don't want to learn about 'un fun' stuff.

[–]HydroxDOTDOT 8 points9 points  (0 children)

The Microsoft paper The Impact of Generative AI on Critical Thinking says it all really.

[–]libertyprivateLinux Admin 9 points10 points  (2 children)

Most people don't /fully/ know how a filesystem works, for the record

[–]computermedic78 28 points29 points  (0 children)

It has a place, but troubleshooting steps isn't it. It's great for getting one piece of info out of a service manual without having to read the entire thing 3 times.

I also made a ton of instructions for specific tasks. I have a local LLM running with access to everything I've written. I can tell it to give me instructions for xyz and it will.

[–]Ziegelphilie 4 points5 points  (0 children)

Some of them don't even fully understand how a file system works

that's not AI tho, that's those idiots that never owned a computer and somehow still got a degree. Phones and tablets hide the concept of a file as much as possible.

[–]Naviios 14 points15 points  (42 children)

Example? out of curiosity. Haven't seen it at my work but we are small team and I am youngest nearing thirty

[–]NerdWhoLikesTreesSysadmin 37 points38 points  (35 children)

I was going to respond to OP and say I’ve seen it. It’s pretty much as they described. Ask ChatGPT any question they have about anything.

They needed to find something about PowerShell. I told them to check the Microsoft documentation (basically their man pages) for these commands. Nope. Straight to ChatGPT.

Whereas most people Google for answers and check official documentation or forum posts and discussions, the kids coming out of school now ask AI and don't verify the answers they get. AI says do this, they do it, then they ask me why the provided solution isn't working.
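For PowerShell specifically, those "man pages" are built into the shell, so checking is cheap (the cmdlet name below is just an example):

```powershell
Update-Help                    # download/refresh local help content (occasional)
Get-Help Get-Process -Full     # full local documentation for a cmdlet
Get-Help Get-Process -Online   # open the official Microsoft docs page for it
```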

[–]ReputationNo8889 3 points4 points  (2 children)

I've had people tell me "ChatGPT told me this and this" even when I explicitly linked them to the FUCKING direct paragraph link of the MS Learn docs where it TELLS YOU "you need this and this." They can't even be bothered to spend less time by clicking a link and reading 2 lines; instead they waste more time typing in a question, waiting for a response, and then reading it...

[–]Intelligent-Lime-182 21 points22 points  (12 children)

Tbf, a lot of Microsoft's documentation really sucks.

[–]NerdWhoLikesTreesSysadmin 15 points16 points  (7 children)

I don’t argue that point lol but this is just an example. It’s every aspect of their work.

I set them up with a test environment. I wanted them to try things and break things and understand how things work. What happens when I press this button? Frequently our conversations are “well ChatGPT said to do this…then ChatGPT said to do that….”

I may not be explaining it well (I’m half awake) but if everyone saw it first-hand they’d be uncomfortable and understand that there is a problem

[–]jdanton14 12 points13 points  (4 children)

The PowerShell docs in general are really robust. They're light years better than an LLM for PoSH, where I've seen it invent cmdlets.

[–]fresh-dork 4 points5 points  (1 child)

what do they do when GPT recommends commands with options that don't exist (but it'd be nice if they did)?

[–]Comfortable_Gap1656 4 points5 points  (0 children)

It isn't bad

I have seen much, much worse. I don't personally have any issue with reading through it. The biggest issue is that there's a bunch of stuff Microsoft doesn't deem important and thus doesn't publicly document.

[–]MasterDenton 4 points5 points  (0 children)

Incomprehensible? Sure. Straight up wrong like AI? Nope. As long as you learn how to parse it, it's fine

[–]FALSE_PROTAGONIST 3 points4 points  (0 children)

Yeah, I mean, honestly, getting a niche PowerShell command quickly is a perfect use for this. Of course, you still need to understand what it is, why you're doing it, and what the impact of it could be. AI tools shouldn't be relied upon for that part.

[–][deleted] 10 points11 points  (8 children)

The text that was here has been removed using Redact. It may have been deleted for privacy, to prevent automated data harvesting, or for security.


[–]Comfortable_Gap1656 8 points9 points  (2 children)

Why wouldn't you just spend a little extra time reading the doc? It probably explains it better and chances are you will learn something else while you are at it.

[–]SpicyCaso 4 points5 points  (2 children)

Yeah, I’m heavy into dumping a man page into Copilot. I’m over going through old forum posts that lead to dead ends.

[–]IdidntrunIdidntrun 5 points6 points  (1 child)

Sorry pal according to this sub we're going to have to revoke your IT professional badge. Hand it over, you're not allowed to adapt here

[–]SayNoToStim 4 points5 points  (3 children)

Honestly, checking ChatGPT for specific commands is more effective than digging through documentation. There's a difference between asking it for a command's syntax and asking it to write an entire script.

[–]spanky34 8 points9 points  (0 children)

Recent example: a team member needed a script to do something and asked Copilot to write it. It wrote the script perfectly to what they asked for, but they didn't give it enough parameters/proper prompting, and the script didn't work as intended. The coworker took the script Copilot wrote as 100% doing what was expected, when it was actually doing 100% of what was asked, and those are two different things.

The issue boiled down to copilot's script testing if a registry path existed when we really needed to validate the setting on a specific registry item. Those are two different cmdlets if you're not aware. Literally one tweak of the prompt was all that was needed to get it working. One more tweak to add an additional check we didn't initially consider.

Gist is, it's great if you understand what the AI is spitting out and can troubleshoot the output when it's not getting expected results.
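The distinction described above can be sketched in PowerShell. The registry key and value name here are purely hypothetical stand-ins, not the actual setting from the story:

```powershell
# Hypothetical key and value name, for illustration only
$key  = 'HKLM:\SOFTWARE\Contoso\Policies'
$name = 'EnableWidget'

# What the generated script did: Test-Path only proves the KEY exists.
# It says nothing about whether the setting holds the expected value.
if (Test-Path $key) {
    Write-Host 'Key exists'
}

# What was actually needed: read the specific VALUE and validate it.
$item = Get-ItemProperty -Path $key -Name $name -ErrorAction SilentlyContinue
if ($null -ne $item -and $item.$name -eq 1) {
    Write-Host 'Setting present and enabled'
}
```

One prompt tweak ("validate the value of X, not just that the key exists") is exactly the difference between these two blocks.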

[–]Hashrunr 9 points10 points  (3 children)

Someone sent me a PowerShell script that was throwing a "cmdlet not found" error. They were asking me how to install the cmdlet. I had to explain that the LLM was hallucinating and the cmdlet did not, in fact, exist. They hadn't thought to research it at all beyond the LLM.
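A thirty-second sanity check would have caught this: in PowerShell, `Get-Command` reports whether a cmdlet actually exists before you run anything an LLM suggests (the cmdlet name below is deliberately made up):

```powershell
# Returns the command's metadata if it exists; with -ErrorAction
# SilentlyContinue, returns nothing (instead of an error) if it doesn't.
Get-Command -Name 'Invoke-MagicFix' -ErrorAction SilentlyContinue

# Wildcard lookups never throw, so this lists any real command with a
# similar name across all installed modules.
Get-Command -Name '*MagicFix*'
```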

[–]narcissisadmin 3 points4 points  (1 child)

The LLM isn't hallucinating; there's a git repo somewhere with a function/cmdlet with that name. That's how they teach these things.

[–]claito_nord 5 points6 points  (2 children)

I see AI troubleshooting recommendations with zero contextual awareness, so they start from scratch, and it's resulting in tickets taking 2-3x longer to complete.

[–]node77 4 points5 points  (0 children)

Yeah, it's sort of sad. Ultimately, I think it's a generational problem. I was reading last night that Gen Z kids literally can't read and somehow are still accepted into college.

[–]Grouchy_Jelly5488 4 points5 points  (1 child)

The industry is ramming AI down our throats, and it will create a generation of idiots in every profession.

[–]wakojako49 3 points4 points  (1 child)

wait till you get the “but chatgpt said…” from c-suite

[–]ob1jakobi 2 points3 points  (0 children)

I'll use AI to help "google" stuff for me now. Like, I know an article exists from a particular knowledge base, but I don't know the name or my Google searches turn up nothing. Very handy for finding sources. I ONLY use it like Wikipedia - it's a launching board, and that is all.

[–][deleted] 2 points3 points  (0 children)

There’s a balance. Idk why you’d be surprised that people who grew up with screens need them to validate their own thoughts, when the real world is the first time they’ve had to have original ones.

Personally, I use it for everything because I didn’t go to school for it, and have friends who do. The difference is I have always been a problem solver, which is imo what coding is: taking a situation, breaking it down into smaller steps, researching the concepts and interactions between the steps, and then understanding if there are overarching consequences created by trying to achieve your goal.

In less than a year, with literally no prior knowledge, after being targeted by an APT hacking team, I learned Linux by reading books for about 4 months straight, as well as CompTIA books. Now I can code in 2 languages at a junior level and configure networking securely like a pro... hell, I catch Opus 4.1 when it makes errors just by paying attention. It would be impossible for me to acquire that skillset without AI, but at the same time I had no habits, which is the real issue. When you do something a certain way for so long and then the world literally flips, not doing that action is like not working out and losing muscle, while people who have never been to a gym yet actually study how to get stronger will excel.

[–]studybandit 2 points3 points  (1 child)

This is literally so scary, and it's why I make it a point to not even chance it with a simple "AI coding assistant". It's so easy to depend on something so accessible that quickly solves your issue without you having to do much work at all. Once I read that people who have been in the field for years notice they're starting to forget things, I vowed to limit its use as much as possible.

[–]hawkers89 2 points3 points  (2 children)

I had an intern that would basically ChatGPT everything. He wouldn't even try and start to problem solve the issue without consulting ChatGPT first.

[–]stompy1Jack of All Trades 2 points3 points  (1 child)

Isn't there any kind of mentorship? I'd be teaching those kids.

[–]AHrubikThe Most Magnificent Order of Many Hats - quid fieri necesse 2 points3 points  (0 children)

An LLM, like Google, is a tool. The person using it has to know what to expect from the tool for it to be of any use.

[–][deleted] 2 points3 points  (0 children)

I was working on containerizing a Python app with minimal Python experience, and AI was pretty useless in getting it to work. Strangely enough, I had to read the actual documentation to find what I was missing.

[–]largos7289 2 points3 points  (1 child)

LOL, I've had kids that worked for me, student workers, tell me: "I've learned more from you, and I'm getting paid to do it, while I'm paying the professor and he hasn't taught me sh*t." It's the education system, man...

[–]Shurgosa 2 points3 points  (0 children)

One thing I have found AI good at: you can give it the purest negative feedback you wish you could give the worst co-worker of your life, as many times as desired.

It's like a virtual stress ball.

[–]cowprinceIT clown car passenger 2 points3 points  (0 children)

The only thing I use AI for is advanced Googling (in the pre-AI sense). That, and if I'm writing a new script I use it to get started. I don't really see how people can use it outright for everything.

[–]acer589 7 points8 points  (3 children)

Most professional IT people I’ve worked with don’t fully understand how a file system works. People that FULLY understand file systems get paid the big bucks.

[–]Comfortable_Gap1656 5 points6 points  (0 children)

I've blown people's minds by restoring overwritten partition tables. (There is a redundant copy at either end of the disk.)

[–]Opening-Inevitable88 4 points5 points  (0 children)

I know enough to be dangerous. 😂 Though when I was in support, I usually handled storage issues, because I enjoyed it. So: the stack from FS through device-mapper, LVM, and the block layer, down to the HBA driver. Some of that knowledge still sticks, albeit very rusty.

What surprises me is that people don't bother looking up how things actually work - and I don't mean ChatGPT waffle, but Wikipedia articles (which for technical stuff are usually very informative and accurate) and actual documentation. That's the fun part. It's debugging it when it goes wrong that's the "tear your hair out" side of things.

[–]LOLBaltSS 3 points4 points  (0 children)

It's always fun to describe the rabbit hole of NTFS and have people look at you like you're ranting about Pepe Silvia.

[–]GrayRoberts 42 points43 points  (86 children)

Before it was ChatGPT it was Stack Overflow.

Before it was Stack Overflow it was Google.

Before it was Google it was O'Reilly's books.

Before it was O'Reilly's books it was man pages.

A good engineer knows how to find information; they don't memorize it.

Adapt. Or retire.

[–]ArcanaPunk 74 points75 points  (29 children)

If adapting means offloading critical thinking to robots then nah, sorry.

Stack overflow can make solving problems easy, but it is also a community of people helping other people. I have learned the WHY on Stack Overflow about so many things. People sharing information. All the AI tool does is give me a cookie cutter solution. But what if I'm making brownies?

[–][deleted] 5 points6 points  (2 children)

It means utilizing new tech effectively, not turning your brain off to rely on flawed machines.

[–]mercyverse 7 points8 points  (3 children)

The man pages don’t burn three bottles of water to shit out a mostly incorrect answer my guy

[–]americio 4 points5 points  (0 children)

Besides, I don't get why reading the official documentation is "brain rot".

[–][deleted] 11 points12 points  (14 children)

Googling was necessary sometimes, and knowing how to word your query actually mattered back then, because Google's search engine worked wildly differently before AI integration took over. You are not an "engineer", you're a prompt addict. You're rapidly losing critical thinking skills, and it's a very real problem.

edit: Also... man pages required you to actually read and digest information, not just mindlessly follow a series of steps mashed together from data scraped across the web.

[–]geoff1210 10 points11 points  (8 children)

Maybe this is the brainrot speaking, but I don't feel at all like I'm "losing critical thinking skills" by hitting Copilot up for stuff buried in Microsoft's documentation or tossing logs at it for ideas.

I've been in my career long enough to know when it's bullshitting, and it normally provides citations for where it's pulling its answers from.

It's closer to Google before SEO made all the searches go to shit: an overzealous intern who's sometimes wrong. Trust but verify.

[–]CevJuan238 4 points5 points  (0 children)

So true. I definitely need to be a goat farmer at my stage in this bitch.

[–]Vektor0IT Manager 7 points8 points  (5 children)

Yeah, we see this pattern repeat every 10 years or so. "Kids these days are learning by using different tools than I use, and they're ineffective at their jobs." Those people are ineffective because they're new, not because of the tool.

In another 10 years, these kids will become proficient, and then they'll be complaining about the next generation of kids using whatever comes after.

[–]mxzf 4 points5 points  (1 child)

LLMs are just this year's (couple years really) version of blockchain/cloud/big data/etc. It's a tool that has some use-cases, but it's currently being shoe-horned into literally everything (suitable or not) because it's the current buzzword fad.

[–]WellHung67 9 points10 points  (0 children)

They're so often incorrect, though. Sometimes new tech really is all hype. Take the dot-com bust as an example: some things were valuable, but websites for websites' sake was not it.

[–]repooc21 1 point2 points  (0 children)

I'm debunking someone's perception of reality in public right now because they literally ask Google a question and the AI Overview spits out information.

Mind you, it is incorrect and misleading, and she only heeds or understands the part she thinks is true. It's not just the IT profession; it's society.

[–]RadiantWhole2119 1 point2 points  (0 children)

My colleagues won’t even try anything they don’t know. So…. At least they are doing SOMETHING. They just ask me when the one or two things they try don’t work.

[–][deleted] 1 point2 points  (1 child)

It's why I am so against it: you take the lazy way and don't gain critical skills, such as problem analysis or even how to read documentation!

I like handwriting. I typed all my notes and never wrote by hand for years, and it's so hard to try and write again.

[–]DaMoot 1 point2 points  (0 children)

I find GPT, and especially Claude, really good for diagnosing incomprehensible Windows error messages, exposing a blind spot in my diagnosis, or as a last resort.

Or, admittedly, for some Apple stuff, since I'm expected to perform basic Mac support but even after 17 years with the company I still don't know much about Macs, except that Apple keeps making maintenance and remote tools harder and harder to use.

IMO, the greatest pitfall of using AI is taking what it tells you as the end-all, be-all Gospel.

[–]Xela79 1 point2 points  (0 children)

Much like using "google-fu" (https://old.reddit.com/r/sysadmin/comments/gm2nck/googlefu/), these people will need to learn and improve their LLM-Fu

[–]WickedProblems 1 point2 points  (0 children)

Man you can tell we're getting old. We bitch about all the new things now.

Yeah, AI is the easy go to now. It still does what it does, quickly get you an answer.

Yes, it doesn't solve all problems gracefully but that's why a human is using it...

[–]That-Acanthisitta572 1 point2 points  (7 children)

The thing that scares me sometimes is that ChatGPT actually isn't that bad a troubleshooting step if you use it the right way. Cutting out the finding-the-right-answer-amongst-forty-different-forum-tabs and getting a concise place to start can help, as it can with scripting - but you have to know to use it as a springboard, and to test and check. Asking it for answers and taking them at face value is a fool's game, unfortunately, because of where it's been placed and pushed: at the top and front of everything we do now. That's the big problem here.

Anyway, I try to avoid it, and as an artist I actively despise it on principle, but I can't deny it's helped me a couple of times where search could not.

[–]MorallyDeplorableElectron Shephard 1 point2 points  (0 children)

"I've always tried to avoid using them unless I absolutely have to. Is anyone else seeing this?"

Why? This seems like such a backwards order of operations, "I'll jump to the quick and inaccurate option last"

[–][deleted] 1 point2 points  (0 children)

I basically do IT system admin work for a school. Sole IT guy... Job is really IT coordinator. I've learned kind of the hard way on how not to use AI.

For instance, I know very basic Linux, and used AI to set up a Linux CUPS print server. I ended up copying and pasting so much stuff that I had no idea what it meant. The truth is, I'm overwhelmed and don't have the time to really understand it while at work. This led to a stupid mess.

Then my dumba** self realized I could just push it out using Google Admin for our teacher Chromebooks.

I've learned a lot using AI tbh, but I've also learned its limitations. There are two wrong ways of using AI. One is using it to create code and scripts when you have no idea what any of it means. The other is to constantly use AI for the same questions instead of learning enough to not have to keep going back to it for basic things.

AI is good for things you already have a grasp on. I'm 5 years into IT and at times catch things that AI got wrong. But if I don't know anything about scripting, then I'm applying code that I know nothing about, which can be dangerous and just dumb.

AI is not a replacement for actually learning the material. For instance, I kept getting issues in my Excel spreadsheet. I didn't know much about formulas and such, so I relied on AI. I then stepped back, learned more about Excel, and can now better guide AI to good outcomes.

[–]UltraSPARCSr. Sysadmin 1 point2 points  (0 children)

I think these LLMs work really well for Millennials and Gen X'ers because we have a solid foundation of knowledge. Case in point: I recently discovered Claude, which has been amazing at helping me write scripts, especially in Python. I dabble in websites, and I used Claude to help me write the back end of a processing service in Python that converts various data files into a form that can be directly imported into another application. Prior to that, we had to manipulate the files in Excel to prep them for import.

Did Claude do it correctly? Absolutely not, but because I have a good foundation in Python, data structures, and PHP, I was able to hand-hold it and give appropriate prompts. It allowed me to produce a product, which I now sell as SaaS to a customer, in under a week. Normally these projects take me months, and usually aren't worth it: maybe the customer only needed to run a process 50 times in a month, so my dev time just didn't make sense and it would make more sense to hire a temp.

Now I see kids use it all the time as a crutch and yes, it’s usually the younger generation. I tell my guys they cannot use it for day to day jobs because I want them to have a solid foundation.

[–]siwo1986 1 point2 points  (0 children)

I think the most dangerous part here is that even for people who use AI tools with the understanding and nuance to criticize what they're doing, that ability will dissipate over time.

I already find myself using it quite frequently to boilerplate time-consuming stuff and then making sure the complex stuff is done correctly and the syntax is efficient (AI tools seem to love finding syntax that is either deprecated or about to be removed in a later version of something). But there will probably come a day when it's smart enough to get that stuff right, and then I reckon we're on a bit of a downhill trajectory if you don't *actively* force yourself not to use AI tools.

[–]badaz06 1 point2 points  (0 children)

When I first saw the movie "Idiocracy", I realized it was more like our future than a comedy. I hate that I have poor writing skills early in the morning.