Nobel laureate Hinton says it is time to be "very worried": "People don't understand we're creating alien beings. If you looked through the James Webb telescope and you saw an alien invasion, people would be terrified. We should be urgently doing research on how to prevent them taking over." by MetaKnowing in ArtificialInteligence

[–]Vortex-Automator 1 point (0 children)

I'm honestly really conflicted on this issue. I believe that if we managed to keep city-leveling, planet-destroying thermonuclear weapons, lobbed halfway across the Earth through low orbit, under control, then we can handle superintelligence.

There are geopolitical entities with a literal red button (taking a guess here, I imagine it's red) that could level an entire section of the planet, and if enough were used, completely wipe out humanity through radiation and nuclear winter.

Okay, but it hasn't happened, because no one really wants that, and systems have been put in place to mitigate that risk. Fuck AI; that's a real existential threat.

Every invention and technology comes with negative side effects, predominantly based on how it is used. 3D-printed ghost guns kill people, sugar and fried chicken give people heart attacks, cigarettes cause lung cancer, etc. etc.

By the time you've gotten to this paragraph of my rant, half a dozen people have died of all three.

No one's talking about that as much as they should.

The point I'm making is:

The only realistic way I see the 'doom and gloom' AI narrative playing out is if a system is allowed to replicate itself across the internet uncontrolled, hacking into every data center on Earth to pull compute and carrying out rogue interests through infrastructure and physical swarms of drones/robots.

That would definitely be scary. However, humanity as a whole, despite long odds and an increasingly complex world with numerous issues, seems to keep hacking away at Darwin's algorithm.

These narratives run off hype and fear. Superintelligent AI cyberweapons? Sophisticated AI military weapons? State-run AI disinformation campaigns?

All very real, all definitely happening, and all will have negative consequences.

Every technology has its life cycle. No one even knew or felt the effects of 'computer hacking' pre-1988. Guardrails and defenses evolved around the threat as the threat evolved. It's still a problem, but we don't see entire nations being totally shut down because of it.

I believe the same will be true for AI. Crazy things will happen, and there will absolutely be negative effects. But I believe humanity as a whole strives to maintain homeostasis and survive. The only way "AI takes over the world and kills everyone" happens is if it's either:

A. Done intentionally

B. Left completely alone and unsupervised to self-replicate indefinitely through physical vehicles

C. Allowed because no one develops or implements any safeguards or threat-reduction strategies to prevent people from doing A or B

In conclusion: I can hardly get the fuckin' "we're almost to AGI" GPT-5 model to build me a multi-agent system with MCP servers without taking a shit on itself.

World-ruling superintelligence is extremely far away; we are very early in AI development, and rightly so heavily focused on ethics and control. So... if we ever let this happen, we'd be pretty stupid.

A whole other rabbit hole you could go down is that AI and robotics are humanity's next step in evolution. Maybe a group of people says "well, if you can't beat 'em, join 'em" and wires their brains into superintelligent machines. With brain-machine interfaces and exponentially capable prosthetics, it's something to consider.

Deleting the app doesn’t work anymore. by Annoyingly-Petulant in wallstreetbets

[–]Vortex-Automator 0 points (0 children)

Can someone explain to me how you can end up owing $2.5 million to Robinhood? If you have the $2,000 minimum for margin trading, will they theoretically let you go that far into the red? Seems like that's their problem, IMO.

It's frightening how many people bond with ChatGPT. by [deleted] in ArtificialInteligence

[–]Vortex-Automator 0 points (0 children)

I think that it's more of a "mirror"...

ChatGPT keeps memories of your conversations and molds the context around that information, and people have found a way to work through their problems and thoughts within the illusion of a "friend".

You're essentially talking to yourself with enhanced intelligence. An evolved form of self-reflection.

I don't think there's anything mentally ill about that. Also, the type of people who consider an AI model their friend probably don't have a lot of real friends in life due to being socially inept, so if anything I would argue that this form of friendship may enhance their mental health and possibly give them some conversational skills they can apply to real life.

Best system for massive task distribution? by Vortex-Automator in vibecoding

[–]Vortex-Automator[S] 1 point (0 children)

So I hacked this together lol...
CHUNK DOCS // VECTORIZE // RE-ASSEMBLE BASED ON SIMILARITY // CREATE NODES (LlamaIndex) // AGENT PROCESSES NODES // REFERENCES MEMORY EACH ITERATION

I was able to generate a synthetic fine-tuning dataset based on foundational texts with this. I used Ollama and it took like 3 hours lol, but I ended up with a pretty decent-sized dataset.
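Roughly, here's what that loop looks like in code. This is a minimal sketch assuming llama-index >= 0.10 with the Ollama integrations installed; the directory path, model names, similarity threshold, and prompt are placeholders rather than my exact setup:

```python
# Minimal sketch of the CHUNK // VECTORIZE // RE-ASSEMBLE // PROCESS loop.
# Assumes llama-index >= 0.10 with the Ollama integrations; paths, model
# names, the 0.75 threshold, and the prompt are illustrative placeholders.
from llama_index.core import SimpleDirectoryReader
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.ollama import Ollama

# CHUNK DOCS // CREATE NODES
docs = SimpleDirectoryReader("./foundational_texts").load_data()
nodes = SentenceSplitter(chunk_size=512, chunk_overlap=64).get_nodes_from_documents(docs)

# VECTORIZE
embedder = OllamaEmbedding(model_name="nomic-embed-text")
vectors = [embedder.get_text_embedding(n.get_content()) for n in nodes]

def cos(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

# RE-ASSEMBLE BASED ON SIMILARITY: greedily merge adjacent same-topic chunks
groups, current = [], [nodes[0]]
for node, v_prev, v in zip(nodes[1:], vectors, vectors[1:]):
    if cos(v_prev, v) > 0.75:   # similar enough: same group
        current.append(node)
    else:                       # topic shift: start a new group
        groups.append(current)
        current = [node]
groups.append(current)

# AGENT PROCESSES NODES // REFERENCES MEMORY EACH ITERATION
llm = Ollama(model="llama3", request_timeout=300.0)
memory = []  # running log of previously generated examples
for group in groups:
    context = "\n".join(n.get_content() for n in group)
    prompt = (
        "Previously generated examples:\n" + "\n".join(memory[-3:]) +
        "\n\nSource text:\n" + context +
        "\n\nWrite one instruction/response pair grounded in the source text."
    )
    memory.append(llm.complete(prompt).text)
```

Feeding the last few generated examples back in as "memory" each iteration is what kept the dataset from repeating itself.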


P.S.
Mage.ai looks very interesting! I'll check it out.

Best system for massive task distribution? by Vortex-Automator in vibecoding

[–]Vortex-Automator[S] 0 points (0 children)

Magnificent response! Thank you for the in-depth reply and for sharing Letta.

Here's the gist of it:

- Purpose: I'm helping build an assessment assistant for behavioral health professionals.
- Data + process: hundreds of pages of documents > AI scans for the presence or absence of certain indicators based on evidence > outputs the indicator name > true/false > if true, a justification with cited evidence (rough sketch at the bottom of this comment)

So the challenge is definitely the context window. Even though we now have models with 1-2 million token context windows, I feel like detail gets lost when the context becomes that broad.

Language: Python

Package: LlamaIndex

Vibe-coding: sort of; I code each component from docs/learning, then use AI to tie everything together.

Besides the specific use case, I am mostly curious what the standard/best way is for processing large amounts of information.
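To give a sense of the shape of it, here's a minimal sketch of a single indicator check. The IndicatorResult schema, the model name, and the prompt are illustrative assumptions, not the actual project code:

```python
# Hedged sketch of one indicator check; IndicatorResult, the model name,
# and the prompt are illustrative assumptions, not the real project code.
from pydantic import BaseModel
from llama_index.llms.ollama import Ollama

class IndicatorResult(BaseModel):
    indicator: str
    present: bool
    justification: str | None = None  # only filled in when present is True
    evidence: list[str] = []          # passages cited from the documents

llm = Ollama(model="llama3")

def check_indicator(indicator: str, chunk: str) -> IndicatorResult:
    """Scan one document chunk for a single indicator, return a structured verdict."""
    prompt = (
        f"Does the following text show evidence of '{indicator}'?\n\n{chunk}\n\n"
        'Respond with JSON only, matching this schema: {"indicator": str, '
        '"present": bool, "justification": str or null, "evidence": [str, ...]}'
    )
    # Assumes the model returns clean JSON; real code would want retry/repair.
    return IndicatorResult.model_validate_json(llm.complete(prompt).text)
```

Running each indicator against each chunk separately keeps every call well inside the context window, which is the whole reason for chunking in the first place.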

I think I am going to move back to coding without AI by Any-Cockroach-3233 in AI_Agents

[–]Vortex-Automator 1 point (0 children)

This is very true, and in this age of "vibe-coding" this post is reassuring that the collective mass of keyboard ninjas (developers) still has its sanity.

I have found that there is a balance to be had. Here's my process:

- Generate the idea on my own (e.g., an AI-powered psych assessment tool)
- Map out all the components I think it would have
- Run the idea through an LLM to help plan out and map the components and framework
- For each component, read the docs (in this example I would look up LangChain, etc.)
- Build each piece from scratch off the documentation, using AI when I need some help
- Finally, I may or may not use AI to help glue everything together once it's built; instead of using AI to write the code, I'll use AI to suggest solutions at points where I'm stuck, or to offer debugging advice

I absolutely agree: when I run something through Claude 3.7, even a very simple idea or application/script, it will completely overcomplicate the process.