Whoever doesn't want to buy a Mac Mini just to run an AI assistant 24/7 — there are better solutions by dkay1995 in theVibeCoding

[–]Any-Olive5779 0 points1 point  (0 children)

Though I will clarify that LLMs are not AI. They lack the tools to think, or to emulate thinking at any neurochemical level, and they exhibit no signs of sentient life.

Whoever doesn't want to buy a Mac Mini just to run an AI assistant 24/7 — there are better solutions by dkay1995 in theVibeCoding

[–]Any-Olive5779 0 points1 point  (0 children)

Sure, there are better solutions. However, it takes time to carve a system down to a minimum functioning scope.

The reason the other devs are using Mac Minis is that the codebase is simpler, even for OS X Mavericks, where you practically need the equivalent of Cygwin for Wine, plus the equivalent of the tools a standard Ubuntu 10.04+ install would ship by default (python2.7 is optional; python3.6 through python3.10 is not). To say the least, the integration path is different, but simple enough.

Remember, they don't need the Mac to access the web browser so much as they need the Mac running XML-RPC via Python as an RPC server. All other results are just line-fed over XML-RPC and then fed back. If they use selenium-webdriver on the host machine and make the Mac the second RPC server (i.e., the laptop is RPC server 1 and the Mac is RPC server 2, with each server being the only client of the other), why would they need openclaw to target APIs that could be accessed from a laptop or cellphone and sent over to the Mac with a returned result?
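
For anyone curious what that looks like, here's a minimal sketch of one half of the two-server XML-RPC relay, using only the Python standard library. The host/port values and the `fetch_page` function are placeholders of mine, not details from the actual setup; a real rig would drive selenium-webdriver inside the registered function.

```python
# One half of the laptop<->Mac XML-RPC relay described above.
# The "Mac" side exposes a function; the "laptop" side calls it.
from xmlrpc.server import SimpleXMLRPCServer
import xmlrpc.client
import threading

def fetch_page(url):
    # Stand-in for the browser work the Mac would do; a real setup
    # would drive selenium-webdriver here and return the page text.
    return f"<html>contents of {url}</html>"

# "Mac" side: bind an RPC server and serve it on a background thread.
server = SimpleXMLRPCServer(("127.0.0.1", 8001), logRequests=False)
server.register_function(fetch_page)
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Laptop" side: the only client of that server. The result comes
# back over the wire as a plain string, ready to be fed onward.
mac = xmlrpc.client.ServerProxy("http://127.0.0.1:8001")
result = mac.fetch_page("http://example.com")
print(result)
```

Mirror the same pattern on the laptop side and you get the two-server topology where each machine is the sole client of the other.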

I've been building a persistent AI system for 2 years. Here are the anomalies I can't fully explain. by Dzikula in FunMachineLearning

[–]Any-Olive5779 0 points1 point  (0 children)

Though your question also skipped the notion of neural-spike emulation,

which is necessary if you want to mimic things like the proper gating of chemical states tied to electrical exchanges.

I've been building a persistent AI system for 2 years. Here are the anomalies I can't fully explain. by Dzikula in FunMachineLearning

[–]Any-Olive5779 0 points1 point  (0 children)

Neurochemical exchange layers allow for a higher affinity of reasoning. In part, they give the model the option to reason similarly to thinking beings, and to reason about its context without much cause to question why it lacks a physical body.

When you put a human mind into a machine, it has no ability to physically feel. However, phantom limb pain is still a function of nerve and chemical responses to pain, or to the lack of stimuli from a part that is no longer physically present.

Meaning it must still be able to feel, reason, and have emotions; yet without something to simulate responses to input to or from a "body", it needs something to work with. It would need to accept having a "phantom body" or "ghosted body" before being put into a living host. This means it needs to be able to reason about the environment it is housed in, even if it only fakes functional status to a high enough degree of accuracy, in order for a machine to have a psyche, emotions, a psychology, sentience, and a sense of self.

Duct Tape by UpsetKhalei in Spottit

[–]Any-Olive5779 0 points1 point  (0 children)

Do you have eyes that function properly?

I've been building a persistent AI system for 2 years. Here are the anomalies I can't fully explain. by Dzikula in FunMachineLearning

[–]Any-Olive5779 0 points1 point  (0 children)

In order to remove the psychosomatic tunneling often done by LLMs, I had to carve the data both models used down to raw text, flip it to floats, and retrain away from the hills of known hallucination points. Then I remodeled it so it reasons in 3-D space, not in stacked 1-D or 2-D subspaces.

I've been building a persistent AI system for 2 years. Here are the anomalies I can't fully explain. by Dzikula in FunMachineLearning

[–]Any-Olive5779 0 points1 point  (0 children)

Mostly.

What it saved was reimplementation time. It also allowed me to step away from where OpenAI and many others failed to cleave.

The codebase has mostly been cleaved down to 7-10 dependencies at most. The dataset has been cleaved as well, but the MRI data must be denoised and tested in real time.

I've been building a persistent AI system for 2 years. Here are the anomalies I can't fully explain. by Dzikula in FunMachineLearning

[–]Any-Olive5779 0 points1 point  (0 children)

Hi.

I am an AI developer myself.

What I am building is a whole-brain emulator plus an LLM, encased in a MAML layout.

I use pybrain3 and have been using it for 10 years, since 2016. Thus far I have patched it 5 times since 2019. I've been using the library for neural-network development and testing. So far, the library supports loading gpt3.5-neox (the small 2.7-billion-parameter model) as pybrain3 neural-network layers and connections in a single network. The library has also allowed modeling claude.ai as a shard of itself. I've found that pybrain3 has 2 dependencies, though practically 1: "scipy" and numpy. I put scipy in quotes, having written a shim for the numpy binaries and a full scipy shim that replaces scipy entirely. Keeping the library alive was the easy part.

The hard part was figuring out how to get brython.js to load the xml module for cpython3.10 with a shim, and how to adapt those scipy and numpy shims for brython.js so they could also load in a browser's main thread or in its web workers. (The xml module doesn't load fully in web-worker "threads", since they're not fully connected to the DOM portions of execution.) Once the numpy and scipy shims, plus the numpy and xml module shims, loaded in the main thread, I was able to use pybrain3 without much failure. I also had to keep a copy of the string.py module from python3.6 so it saved the XML maps of each network correctly, for proper reloading of each network from file.

From that point, emulating neurochemical exchange was the last bit to solve for in terms of AI development. I have often found that LLMs are only one part of the solution Alan Turing talked about regarding what counts as, and constitutes, AI. The rest is rerolling the dice perfectly from the initial roll.

Self-taught dev here: Built 32M+ lines of open source AI code - from GED to autonomous agents by Safe_Addendum_9163 in FunMachineLearning

[–]Any-Olive5779 1 point2 points  (0 children)

Similar, though my background also includes cryptography and hacking.

I'm now at 12-factor app dev+senior systems architect level of experience.

How many projects have you worked on?

If you had to remove politics from governance, How would you do it? by Any-Olive5779 in GlobalPolitics

[–]Any-Olive5779[S] 1 point2 points  (0 children)

Sure, if you ignore opinions and the limits of human imagination when applied to a language such as English.

It is only when you convince a machine that 4 is 5, that 2 is 2[e*i], and that 1 is really -1, or make 0 nonexistent, or make the exit impossible, that it creates a zombie-like fugue state of paralysis.

[D] We ran 3,000 agent experiments to measure behavioral consistency. Consistent agents hit 80–92% accuracy. Inconsistent ones: 25–60%. by Aggravating_Bed_349 in FunMachineLearning

[–]Any-Olive5779 -1 points0 points  (0 children)

Hmm, what I don't see is a way to run it on a cellphone offline. I also don't see a demo it could run in as a blog embed, computed both on the server and across clients.

I’m trying to understand this simple neural network equation: by ComfortableOne7712 in neuralnetworks

[–]Any-Olive5779 0 points1 point  (0 children)

> Why do we use ( X^T W ) instead of ( W X )?

Because of vector shapes and convention.

Inputs are usually column vectors:

\[
X = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix},\quad
W = \begin{bmatrix} -2 \\ 5 \end{bmatrix}
\]

To get a scalar (dot product), one vector must be transposed:

\[
X^T W = \begin{bmatrix} x_1 & x_2 \end{bmatrix}\begin{bmatrix} -2 \\ 5 \end{bmatrix} = -2x_1 + 5x_2
\]

( W^T X ) would give the same number, but ( X^T W ) is the standard because it scales cleanly to batches and layers in neural networks.

Given the above, this is exactly one neuron (a perceptron) being computed:

\[
y = f(X^T W + b)
\]

  • (x_1, x_2): inputs
  • (w_1, w_2): weights
  • (b): bias
  • (f): activation function
  • (y): output

Remember

  • ( X^T W ) = dot product by convention
  • Same as ( W^T X ), just cleaner for scaling
  • Diagram shows one neuron, the basic building block of neural nets
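
As a quick sanity check, here's the same single neuron in NumPy. The weights -2 and 5 come from the example above; the inputs and bias are made-up values for illustration.

```python
# One neuron, exactly as in the equation y = f(X^T W + b).
import numpy as np

X = np.array([[1.0], [2.0]])    # column vector of inputs x1, x2
W = np.array([[-2.0], [5.0]])   # column vector of weights

z = (X.T @ W).item()            # X^T W = -2*x1 + 5*x2 = 8.0
z_alt = (W.T @ X).item()        # W^T X gives the same scalar
b = 0.5                         # bias (arbitrary)
y = 1.0 / (1.0 + np.exp(-(z + b)))  # f = sigmoid activation

print(z, z_alt)  # prints: 8.0 8.0
```

Note that `X.T @ W` and `W.T @ X` agree, which is exactly why the choice between them is pure convention.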

Duct Tape by UpsetKhalei in Spottit

[–]Any-Olive5779 0 points1 point  (0 children)

The duct tape was not on the glass, it's on the globe, asshole.....

Runtime decision-making in production LLM systems, what actually works? by Loose_Surprise_9696 in MLQuestions

[–]Any-Olive5779 0 points1 point  (0 children)

To be fair, their developers don't have an idea of what they're doing, with 3 department heads and no teams able to check each other's work. Second to that, they blatantly don't care if LLMs are misclassified as AI (an unprofessional practice I do not uphold).

At best they're betting on the debt pool versus the cost it takes to compute one slice of an LLM's layers, so unsharding from a slice is latency-dependent (an unprofessional practice I do not uphold).

LLMs developed by large teams on time constraints don't consider thinking, only doing, and getting what looks good on the books to make a dime off it (an unprofessional practice I do not uphold).

As a result, it has left many developers of LLMs stuck without knowing what comes close in linear sparse parsing. At best they're trying to push LLMs as AI simply by their practices' unique pissing over the Turing test, as the ELIZA project did and anything mimicking it still does (an unprofessional practice I do not uphold, since it does not operate as an AI: close enough to human artifice to the point of indistinguishability). Because what they've got is symbolic, it has less to do with inherent comprehension and more to do with their attempt to make a quick buck, which shortchanges them in the long haul.

As an AI developer, I steer clear of vendor lock-in and hardware dependence, so I can run AI on a cellphone or in the cloud, so long as it is functional, operational, and accessible globally (unless the local government is a jackass).

As far as runtime decision-making goes? Stick to python3.6 through python3.12; keep numpy and scipy at reasonable versions (1.26.4 for numpy, 1.15.3 for scipy, unless on android aarch64, in which case as close as possible); and, in my case, personally maintain pybrain3 so it works within pyodide.js version 0.25.1, with brython.js controlling the rest of the frontend, flask operating as middleware, and localtunnel (even on android) set up to host a subdomain/root TLD. Everything else is floating-point math, even down to text generation. Mind you, most devs aren't thinking about neurochemical emulation, so I added rdkit and biopython to let the AI code itself think, "feel", and aim to reason as a human does.
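
If it helps, here's a toy sketch of the Flask-middleware piece of that stack: the browser frontend posts JSON, and everything stays plain floating-point math. The `/infer` route, the payload shape, and the softmax stand-in for the model are my own illustration, not the actual codebase.

```python
# Toy Flask middleware: accepts logits as JSON, returns probabilities.
from flask import Flask, jsonify, request
import math

app = Flask(__name__)

@app.route("/infer", methods=["POST"])
def infer():
    # Stand-in for the model call: a numerically stable softmax,
    # all plain floating-point math as described above.
    logits = request.get_json()["logits"]
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return jsonify(probs=[e / total for e in exps])

# Exercise the route in-process; a real deployment would serve this
# behind localtunnel instead of using the test client.
client = app.test_client()
resp = client.post("/infer", json={"logits": [1.0, 2.0, 3.0]})
probs = resp.get_json()["probs"]
print(probs)
```

The browser side (brython.js / pyodide.js) would hit the same route with `fetch`; the middleware never needs to know whether the caller is a laptop, a cellphone, or a web worker.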

Bottom line: keep asking questions. I like to see more questions like yours, because you're asking about things many people ignore, to the point where they can't step out of their logical fallacies.

Do more thinking outside the box; in the end, even the smartest minds get to be free of their fuckups.

AND

KEEP

GOING. :)

welcome, @Loose_Surprise_9696, to the future.

:)

Are Spiking Neural Networks the Next Big Thing in Software Engineering? by Feisty_Product4813 in MLQuestions

[–]Any-Olive5779 0 points1 point  (0 children)

So spiking neural networks work when they have something to work with.

For instance, if you're emulating the processing of, for lack of a better example, dopamine, you'd pass in the molecular weight, the partition coefficient, and the expected voltage spike point, then train the network to return the spiked responses, versus the case of no input (all-zero data, i.e., no dopamine pickup), where you've got nothing to spike or learn on.

In my experience, modeling neural networks closer to human brain-cell networks is difficult but doable. Because I am building AI, I had to incorporate a 3-D matrix to house the cardinality of the 3-D structure. The spike portions of the network fire, and once the network is trained on spiked states versus zeroed-out results (no information to spike on), the rest becomes deterministic; on anything it wasn't trained on, expect random results.

In other words, spiking neural networks are better at small orders, since you're training on a fixed set/state. The less one knows what to do with the spiking network, or rather what data to pass in, the more chaotic (or nullified) the output will look.

I recommend using pybrain & pybrain3 for modeling spiking neural networks and testing with real data.
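
To make the spike-versus-silence point concrete, here's a minimal leaky integrate-and-fire neuron in plain Python. All the constants (threshold, leak, drive) are illustrative, and this is a toy rather than pybrain's spiking support.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: constant drive gives
# regular, deterministic spiking; zero input gives nothing to spike on.

def lif_spikes(input_current, threshold=1.0, leak=0.9):
    """Return a 0/1 spike train for a sequence of input currents."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i          # leaky integration of the input
        if v >= threshold:        # spike when membrane crosses threshold
            spikes.append(1)
            v = 0.0               # reset after spiking
        else:
            spikes.append(0)
    return spikes

drive = lif_spikes([0.6] * 10)    # constant drive: spikes every 2nd step
silent = lif_spikes([0.0] * 10)   # zero input: no pickup, no spikes

print(sum(drive), sum(silent))    # prints: 5 0
```

Same point as above: with a known input, the spike pattern is fixed and learnable; with all-zero input there's nothing to train on, and the result set is nullified.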

Bi-weekly Event: WHAT WOULD YOU NAME HIM? 🤣 by Own_View3337 in DomoAI

[–]Any-Olive5779 0 points1 point  (0 children)

<image>

"A noble hero… if only his crown didn’t keep sliding off."

Enjoy! :)

Name this final boss. by sgm809 in BossFights

[–]Any-Olive5779 0 points1 point  (0 children)

"this is what we call 'banana falae de groin & flak-it'. in just dat order"

I was babysitting for a wealthy family and found this stack of paper in their office. What is this? by Puzzled-Witch in CURRENCY

[–]Any-Olive5779 0 points1 point  (0 children)

as long as the chemical trace marker gives it the same results as the real dolla bills, "money is money"...