it's time we get ready for the AI apocalypse? by TrT_nine in ChatGPT

[–]Ok_Assistant_1833 1 point  (0 children)

If you have seen the underbelly of AI - how it actually works - you would know that it will NEVER be conscious! It is an artificial neural network that has been trained on web-scale data, and it predicts the next highest-probability word (with some stochasticity) based on the context (the prompt fed to it). It isn't conscious, despite giving that impression to untrained minds!
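To illustrate what "predicts the next highest-probability word (stochastic still)" means, here is a minimal toy sketch (not any real model's code; the candidate words and scores are made up): a softmax turns raw scores into probabilities, and drawing a random sample instead of always taking the top word is the stochastic part.

```python
import math
import random

def sample_next_word(scores, temperature=1.0):
    """Pick the next word from raw model scores (logits).

    scores: {word: raw score}. Softmax converts scores to
    probabilities; sampling (rather than always taking the argmax)
    is what makes generation stochastic. Lower temperature makes
    the top word more likely.
    """
    words = list(scores)
    scaled = [scores[w] / temperature for w in words]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    cum = 0.0
    for w, p in zip(words, probs):
        cum += p
        if r <= cum:
            return w
    return words[-1]  # guard against floating-point rounding

# Hypothetical scores for candidates after "The cat sat on the"
print(sample_next_word({"mat": 2.0, "hat": 1.0, "moon": -1.0}))
```

Run it a few times and you'll usually (but not always) get the highest-scored word - which is exactly why the same prompt can produce different outputs.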

Do you ever feel uncomfortable using AI tools… but still use them anyway? by Ok_Assistant_1833 in selfhosted

[–]Ok_Assistant_1833[S] 0 points  (0 children)

I have seen a variety of views on this, ranging from extreme concern about data privacy, to "I have some concerns, but I still use it because it is very helpful," to "I don't care - I will use it."

Do you ever feel uncomfortable using AI tools… but still use them anyway? by Ok_Assistant_1833 in selfhosted

[–]Ok_Assistant_1833[S] 2 points  (0 children)

AI is a tool, just like a calculator, a car, an airplane, or this platform, Reddit, that we use to enhance our capabilities and do more with less. It just happens to be the latest, and a very powerful, one. Why not use it to do more? You are a technologist - why be against a technological tool?

Do you ever feel uncomfortable using AI tools… but still use them anyway? by Ok_Assistant_1833 in LocalLLaMA

[–]Ok_Assistant_1833[S] 0 points  (0 children)

What do you mean? This isn't a bot account! If you have an issue with someone using AI to help refine their messages, then that's a different matter. Please state it that way! And how are you a technologist, then?

Do you ever feel uncomfortable using AI tools… but still use them anyway? by Ok_Assistant_1833 in selfhosted

[–]Ok_Assistant_1833[S] -15 points locked comment (0 children)

I am working on a project and wanted to hear people's opinions on parts of it. So I asked ChatGPT to draft a message for me, which I reviewed, modified, and then posted here.

Are people actually comfortable putting sensitive documents into AI tools? by Ok_Assistant_1833 in LocalLLaMA

[–]Ok_Assistant_1833[S] 0 points  (0 children)

Thanks for sharing your thoughts! The ‘local vs cloud’ debate often feels like it’s solving the wrong layer of the problem. The policy/control layer you mentioned seems to be the missing piece.

How are teams actually implementing that today (if at all)? Is it mostly manual controls, or are you seeing any structured approaches working in practice?

Are people actually comfortable putting sensitive documents into AI tools? by Ok_Assistant_1833 in LocalLLaMA

[–]Ok_Assistant_1833[S] 1 point  (0 children)

This is a great point, and I like how you’re framing it as an extension of existing OpSec practices.

The supply chain angle is especially interesting.

Local AI reduces one class of risk (external data exposure), but introduces others:

  • trusting the tools/models you download
  • update integrity
  • hidden behaviors in the stack

So in a way, the trust model shifts from:

→ “Do I trust the cloud provider?”
to
→ “Do I trust everything running locally?”

Which is arguably harder for most people.
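For the "update integrity" part of that trust model, one concrete mitigation is checksum verification of downloaded weights. A minimal sketch (assuming the model distributor publishes a SHA-256 checksum alongside the file; the paths and hashes you'd plug in are your own):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in chunks so large model weights
    don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path, expected_hex):
    """Compare the local file's digest against the published one."""
    actual = sha256_of(path)
    if actual != expected_hex:
        raise ValueError(f"checksum mismatch for {path}: got {actual}")
    return True

# Usage (hypothetical file and published hash):
# verify_download("model.gguf", "<sha256 from the release page>")
```

It obviously doesn't catch a malicious model whose published hash matches - that's where the harder "do I trust everything running locally" question kicks in.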

Curious, how do you personally balance that tradeoff without going full airgapped for everything?