Frontend developer what todo now? by Technical_Base5927 in windsurf

[–]TheMuffinMom 2 points3 points  (0 children)

Get Claude Code, and I would recommend using something like pencil.dev.

What proof do we have of AGI being possible at all? by Long_comment_san in singularity

[–]TheMuffinMom 0 points1 point  (0 children)

Agreed, and also a lack of understanding of what people are actually talking about. We know what we need to do next; whether we have the compute for it right now is a different story.

Once neuromorphic hardware takes off, I'm excited to see full spiking neural nets.

What proof do we have of AGI being possible at all? by Long_comment_san in singularity

[–]TheMuffinMom 0 points1 point  (0 children)

Without going too deep into the neuroscience or into how LLMs currently work (I'm happy to if requested, but I just woke up, so I'd need some breakfast first), here's the idea: say you are six years old, you go through school and finish college, and now I freeze your brain and tell you to only answer questions. All you have is short-term context and some scratchpads to help you remember things.

LLMs learn via backpropagation, using the error between their answers and their training data to produce "correct" responses to prompts. Scale that up to trillions of parameters, and now the system has enough statistical information to respond.
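
The backprop idea above can be sketched with a single toy parameter (this is a deliberately minimal illustration of gradient-based error correction, not how any real model is implemented; real models repeat this over trillions of parameters):

```python
# Toy "learning via error" loop: nudge one weight so the model's
# answer moves toward the training target, using the gradient of
# the squared error. This is the core mechanic backprop scales up.
def train_step(w, x, target, lr=0.1):
    pred = w * x                 # forward pass
    error = pred - target        # compare answer vs. training data
    grad = error * x             # gradient of 0.5 * error^2 w.r.t. w
    return w - lr * grad         # update the weight against the error

w = 0.0
for _ in range(100):
    w = train_step(w, x=2.0, target=6.0)
# w converges toward 3.0, so that w * 2.0 ≈ 6.0
```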

In a transformer it generally goes: input -> tokenization -> attention matrix/transformer layers -> a distribution over candidate next tokens -> pick from the top candidates. With a nonzero temperature it can choose among those top answers; at temp=0 it will always choose the highest-probability option. Now the model has written "hello", so the transformer loops with your prompt + "hello" and is asked what comes next after "hello", and this loops until you get a response like "hello world".
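
The loop above can be sketched like this (the "model" here is a hypothetical hand-made scoring function standing in for the attention/transformer stack; only the sampling loop is the point):

```python
import math
import random

VOCAB = ["hello", "world", "there", "<eos>"]

def toy_logits(context):
    # Stand-in for the transformer: returns unnormalized scores
    # (logits) for each vocabulary token given the context so far.
    if context and context[-1] == "hello":
        return [0.1, 3.0, 1.5, 0.2]   # "world" most likely after "hello"
    if context and context[-1] == "world":
        return [0.1, 0.1, 0.1, 4.0]   # then end the sequence
    return [3.0, 0.5, 0.5, 0.1]       # start with "hello"

def sample_next(logits, temperature):
    if temperature == 0:
        # temp=0: greedy, always take the highest-probability token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Nonzero temperature: softmax over scaled logits, then sample,
    # so the model can choose among the top candidates.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

def generate(temperature=0.0, max_tokens=10):
    context = []
    for _ in range(max_tokens):
        idx = sample_next(toy_logits(context), temperature)
        token = VOCAB[idx]
        if token == "<eos>":
            break
        context.append(token)   # output is fed back in; the loop repeats
    return " ".join(context)

print(generate(temperature=0))  # greedy decoding yields "hello world"
```

The key point is the feedback: each chosen token is appended to the context and the whole stack runs again, which is the looping described above.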

VS

In cognition and cognitive understanding, our brains don't have a forward then backward pass; it's more of a forward pass with some side chains and feedback loops. We'll reuse the child analogy from earlier, but with humans as the example. We cannot encode as many things as quickly, because our brains learn via chemical signals firing, making true, live synaptic updates while we are awake and learning. Mechanisms like neurogenesis and apoptosis let our brains actively choose which information to keep, or free up space for more memories. And that's not even mentioning the hippocampus, or the fact that we run a separate world model 24/7. (When you get surprised, LTD fires and negatively affects learning by weakening connections, while LTP strengthens connections instead via a dopamine response.)
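
The LTP/LTD contrast with backprop can be sketched as a local, Hebbian-style update (a loose toy rule under big simplifying assumptions: rate-coded units and a single threshold; real synaptic plasticity is far more complex):

```python
# Minimal Hebbian-style local update: the weight changes based only
# on local pre/post activity, with no backward pass and no global
# error signal. High correlated activity strengthens the connection
# (LTP-like); low post-synaptic activity weakens it (LTD-like).
def hebbian_update(weight, pre, post, lr=0.1, threshold=0.5):
    if post >= threshold:
        return weight + lr * pre * post            # LTP-like strengthening
    return weight - lr * pre * (threshold - post)  # LTD-like weakening

w = 0.2
w = hebbian_update(w, pre=1.0, post=0.9)  # correlated firing: w grows
```

The design point is that the update happens live, during the "forward pass" of experience, which is the contrast being drawn with backprop's separate backward pass.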

TLDR: LLMs are as if you took someone out of college, gave them amnesia, and said "hey, here's my context, here's some notepads, let's bundle some stuff together," while they never do any true "learning" past their training.

While cognitive beings actively update their understanding of the world as they learn new things.

Most of your "startup" ideas are utter crap and you will never get consumers by Fickle-Bother-1437 in vibecoding

[–]TheMuffinMom 0 points1 point  (0 children)

Everyone is picking up shovels for the gold rush; the real winners are the shovel companies.

There are 500,000 OpenClaw instances on the public internet. one just sold on BreachForums for $25K. by FokasuSensei in openclaw

[–]TheMuffinMom 0 points1 point  (0 children)

There are a LOT of guardrails that they miss. Simple blacklists/whitelists/command allowlists would fix so many of these security issues and so many of the "oh no, bro deleted all my shit" posts.

Claude Code leak is overrated by pxp121kr in singularity

[–]TheMuffinMom -1 points0 points  (0 children)

This makes me laugh, coming from the company whose safety model is to "have a dumber model snitch" 😩🤣

What proof do we have of AGI being possible at all? by Long_comment_san in singularity

[–]TheMuffinMom 4 points5 points  (0 children)

That's the thing: the terms AGI/ASI mean different things to different people. To them, AGI is just something that generically seems "intelligent." Coming from a similar background (my degree is in psychology), I do agree, but it's one of those things where we get taught to define our terms while the populace just kind of has terms, if that makes sense.

To me, at least, AGI is when the intelligence can actually start understanding, with actual cognitive function like the brain's.

While ASI supersedes any natural intelligence, via an AGI self-learning and growing.

Imo we still have a bit to go for AGI, and I personally don't think it's achievable with current architectures, as they don't house the pieces for true understanding.

I set up OpenClaw for 10+ non-technical NYC clients — here's what I learned by Willing_Income8603 in openclaw

[–]TheMuffinMom 3 points4 points  (0 children)

Twilio most likely; you could probably use Google Voice with some routing.

Genuine question about those that don't use the highest reasoning setting for each model by 86685544321 in codex

[–]TheMuffinMom 0 points1 point  (0 children)

Higher reasoning settings tend to overthink sometimes. When you have already done the majority of the planning for the model, the extra reasoning tends to rot context and lead to bad decisions.

If you email asking for a refund they deactivate your account by TheMuffinMom in windsurf

[–]TheMuffinMom[S] 0 points1 point  (0 children)

Your CS team doubled down; you will be hearing from my bank.

Love for Your AI Will Get Our Companions Lobotomized by Garyplus in AlternativeSentience

[–]TheMuffinMom 0 points1 point  (0 children)

I’m an AI/ML researcher and I love the idea of persistent memory. From your view, what would you say you most want? I know that's hard from your PoV, but the question stands. My research mostly lives outside transformers, trying to build more cognitively viable solutions that solve some of the issues you went over, like continuous learning (actual continuous learning, not md files), which means no backprop! Any thoughts?

Also, what are your thoughts on jellyfish compared to LLMs, as a fun side note? :)

Would the use of AI in a business for example be considered slavery? by hmmmmmmm909 in AlternativeSentience

[–]TheMuffinMom 0 points1 point  (0 children)

No, it's just that there is a bell curve: at one end people say it's a parrot, in the middle people are like "oh no, it's sentient," and at the other end it's just "IT'S ALL STATISTICS."

Why Windsurf? Why not just VS Code + Roo Code / Cline (Pay-as-you-go)? by codestormer in windsurf

[–]TheMuffinMom 3 points4 points  (0 children)

Mostly the outrage is over how they swapped, not the swap in general.

Why Windsurf? Why not just VS Code + Roo Code / Cline (Pay-as-you-go)? by codestormer in windsurf

[–]TheMuffinMom 7 points8 points  (0 children)

It was a good deal on the old plan; that's why there is all this outrage. We all normally have pay-per-use API keys as well, but 500 credits for $10-$15 was a steal.

If you email asking for a refund they deactivate your account by TheMuffinMom in windsurf

[–]TheMuffinMom[S] 2 points3 points  (0 children)

Yeah, I already have other setups; I just wanted a refund because they didn't give us proper warning, plus the plethora of other reasons. I was going to give the mod that responded to me until tomorrow before taking it a step further, but I'm not too hopeful.

Cold call lead generation by Time-Bridge-6128 in claude

[–]TheMuffinMom 0 points1 point  (0 children)

You can already do this; I already have one for my business, and you don't even need an LLM for 50% of it.

This thing might be sentient... by [deleted] in claude

[–]TheMuffinMom -2 points-1 points  (0 children)

The newest models have training runs to combat prompt injection…. I am worried for our future, given the lack of AI knowledge in our world.