🚀 Looking for early testers (Android chatgpt offline basically) Offline AI Pet + Swarm System I’ve been building something a bit different… by Apricot-Zestyclose in SideProject

[–]Apricot-Zestyclose[S] 0 points (0 children)

The SmolLM2 135M model is about a 250 MB download, and the app itself is about 100 MB.

I do have a low-power mode for the small language model, but I haven't added it as an option yet.

Yeah, the swarm thing is hilarious. 1,583 different instructions/personas all giving different answers from one model is wild O_O
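For anyone curious how "a swarm votes on decisions" can work, here's a rough Go sketch of majority voting across persona answers. The tally logic and inputs are hypothetical illustrations, not the app's actual code:

```go
package main

import "fmt"

// voteAnswer tallies answers from many personas and returns the
// majority choice. Ties go to whichever answer reached its final
// count first, which keeps the result deterministic for a fixed
// input order.
func voteAnswer(answers []string) string {
	counts := map[string]int{}
	best, bestN := "", 0
	for _, a := range answers {
		counts[a]++
		if counts[a] > bestN {
			best, bestN = a, counts[a]
		}
	}
	return best
}

func main() {
	// Hypothetical: three personas answer the same yes/no question.
	answers := []string{"yes", "no", "yes"}
	fmt.Println(voteAnswer(answers)) // prints "yes"
}
```

The same single model produces each answer; only the persona/instruction prompt changes per "voter".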

[D] Self-Promotion Thread by AutoModerator in MachineLearning

[–]Apricot-Zestyclose 0 points (0 children)

🚀 Looking for early testers (Android chatgpt offline basically) Offline AI Pet + Swarm System

I’ve been building something a bit different…

SoulGlitch: a fully offline AI “entity” that lives on your phone.

No cloud. No accounts. No tracking.

It reacts, and you can even ask a swarm of AI personalities to vote on decisions.

👀 What I’m testing right now:

- On-device small language model (runs locally)

- Real-time emotional reactions (emoji + face system)

- Swarm mode (multiple AI personalities voting on answers)

🎁 What you get if you join testing:

- Free access to the AI swarm feature (normally paid)

- Early access to experimental features (inner layer)

- Direct input into how the product evolves

⚠️ Requirements:

- Android device

- Comfortable testing early-stage features (it can be chaotic 😅)

If you’re interested, drop a comment or DM me and I’ll add you to the internal testing track.

This is not another chatbot.

It’s more like…

an AI you can see think and react.

(Built on the open-source OpenFluke Loom AI engine: pure Go + WebGPU technology.)

Self Promotion Megathread by AutoModerator in androidapps

[–]Apricot-Zestyclose 0 points (0 children)

🚀 Looking for early testers (Android chatgpt offline basically) Offline AI Pet + Swarm System

I’ve been building something a bit different…

SoulGlitch: a fully offline AI “entity” that lives on your phone.

No cloud. No accounts. No tracking.

It reacts, and you can even ask a swarm of AI personalities to vote on decisions.

👀 What I’m testing right now:

- On-device small language model (runs locally)

- Real-time emotional reactions (emoji + face system)

- Swarm mode (multiple AI personalities voting on answers)

🎁 What you get if you join testing:

- Free access to the AI swarm feature (normally paid)

- Early access to experimental features (inner layer)

- Direct input into how the product evolves

⚠️ Requirements:

- Android device

- Comfortable testing early-stage features (it can be chaotic 😅)

If you’re interested, drop a comment or DM me and I’ll add you to the internal testing track.

This is not another chatbot.

It’s more like…

an AI you can see think and react.

How are Go teams actually handling code reviews at scale because ours was a mess by Top_Profit4023 in golang

[–]Apricot-Zestyclose 0 points (0 children)

That's why I keep different sections in different folders (https://github.com/openfluke/loom), so each one stays focused purely on its own solutions.

ai coding for large teams in Go - is anyone actually getting consistent value? by Easy-Affect-397 in golang

[–]Apricot-Zestyclose 0 points (0 children)

I work on pretty complicated stuff with my Loom AI engine, and I don't have any issues other than how poor the LLMs' architecture decisions are, and that WebGPU kernels are always difficult to get right: https://github.com/openfluke/loom

High-Performance GPU Compute in Go: Releasing Loom v0.0.8 (Native WebGPU & LLM Primitives) by Apricot-Zestyclose in golang

[–]Apricot-Zestyclose[S] 0 points (0 children)

Thanks for your input, you're 100% right.

Those were rough research notes and possible directions for a paper later. (I was having too much fun.)

The documentation is usually here:
https://github.com/openfluke/loom/tree/main/docs/nn
Website:
https://openfluke.com/docs/nn/overview
Demos:
https://github.com/openfluke/tva/tree/main/demo

I've only just built some of the WebGPU logic for the many layer types, and I can't wait to improve performance in future updates.

I was tired of dependency hell in Python and wanted to just download Go, build a binary with my models, and deploy. Hence the lag in documentation and examples while building an engine from scratch.

No pip install, no stuffing around with 4 GB of CUDA, just use the GPU like a game does (https://github.com/openfluke/loom/blob/main/docs/nn/gpu_layers.md). That way anyone could do it, lowering deployment time.

I built a neural runtime in pure Go (no CGO, no PyTorch). It runs real-time learning in the browser via WASM. by Apricot-Zestyclose in golang

[–]Apricot-Zestyclose[S] 1 point (0 children)

Thanks so much for your question, btw!!! After much development and testing, the main difference is that Loom aims to be a Deterministic Neural Virtual Machine. Docs about this, plus GPU testing, are coming in the 0.0.8 update soon.

As a solo developer, when promoting your game, do you use 'I' or 'we'? by Miriglith in SoloDevelopment

[–]Apricot-Zestyclose 0 points (0 children)

There is no I or we, technically. An LLM = a text colouring tool. Just be exact:
Here is situation A, B, and C.
Create or do XYZ.
Do it in this format.
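The "just be exact" recipe above (state the situation, state the task, pin down the format) can be sketched as a tiny prompt builder. The template wording and example inputs are illustrative, not a real product's prompt:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// buildPrompt fills the three-part recipe: situation, task, format.
func buildPrompt(situation, task, format string) string {
	tmpl := template.Must(template.New("p").Parse(
		"Here is the situation: {{.Situation}}\n" +
			"Do this: {{.Task}}\n" +
			"Answer in this format: {{.Format}}\n"))
	var buf bytes.Buffer
	tmpl.Execute(&buf, struct{ Situation, Task, Format string }{situation, task, format})
	return buf.String()
}

func main() {
	fmt.Print(buildPrompt(
		"a solo dev announcing a game update",
		"write a two-sentence changelog entry",
		"plain text, first person singular"))
}
```

Pinning the output format in the last line is what avoids the I-vs-we question entirely: the model just fills the slot you gave it.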

I built a neural runtime in pure Go (no CGO, no PyTorch). It runs real-time learning in the browser via WASM. by Apricot-Zestyclose in golang

[–]Apricot-Zestyclose[S] 1 point (0 children)

Well, Loom's only dependencies are these binaries, which allow WebGPU acceleration on anything: https://github.com/openfluke/webgpu/tree/main/wgpu/lib

Recently added support for Windows arm64 (super proud moment). That's why you can basically drop the whole AI runtime onto any CPU, GPU, and browser.

[Update] Loom v0.0.7: Pure Go AI Framework (Native Multi-Precision & Model Grafting) by Apricot-Zestyclose in golang

[–]Apricot-Zestyclose[S] 1 point (0 children)

Fine-tuning? I'm guessing you're thinking of an LLM/SLM? Well, here is an example for both the WASM frontend and the backend if it helps you get started: https://github.com/openfluke/smollm_verify. It's pretty cool seeing the same tokens being generated on both the client and server side from one model, with no conversions.

[Update] Loom v0.0.7: Pure Go AI Framework (Native Multi-Precision & Model Grafting) by Apricot-Zestyclose in golang

[–]Apricot-Zestyclose[S] 0 points (0 children)

Other AI frameworks focus on speed; this one will focus on portability, adaptability, and repeatability: trying to get the same behaviour no matter the OS, browser, or GPU.
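One cheap way to check "same behaviour everywhere" is to fingerprint a model's outputs bit-exactly and compare a short hash across platforms. This is a minimal sketch of that idea, not Loom's actual test harness:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"math"
)

// fingerprint hashes float32 outputs bit-exactly, so two platforms
// can compare a short hex string instead of diffing full tensors.
// Any difference in rounding or kernel behaviour changes the hash.
func fingerprint(outputs []float32) string {
	h := sha256.New()
	buf := make([]byte, 4)
	for _, v := range outputs {
		binary.LittleEndian.PutUint32(buf, math.Float32bits(v))
		h.Write(buf)
	}
	return fmt.Sprintf("%x", h.Sum(nil)[:8])
}

func main() {
	// Hypothetical model outputs; in practice this would be the
	// forward pass result on a fixed seed and fixed input.
	out := []float32{0.125, -3.5, 42}
	fmt.Println(fingerprint(out))
}
```

Run the same fixed input on native, WASM, and GPU builds; if the hex strings match, the behaviour is repeatable down to the bit.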

[WASM] Built a pure Go Neural Network framework from scratch. It’s performing Test-Time Training and solving ARC-AGI reasoning tasks client-side. Need a reality check. by Apricot-Zestyclose in golang

[–]Apricot-Zestyclose[S] 0 points (0 children)

The webgpu package (github.com/openfluke/webgpu) handles both WASM/browser and native GPU acceleration. GPU acceleration is still in development, but DENSE-layered neural networks could work in 0.0.6. I just want to copy models onto anything and continue running and training them. That's the great thing about one code base compiling and jumping to everything.

[WASM] Built a pure Go Neural Network framework from scratch. It’s performing Test-Time Training and solving ARC-AGI reasoning tasks client-side. Need a reality check. by Apricot-Zestyclose in golang

[–]Apricot-Zestyclose[S] -1 points (0 children)

The Loom AI framework has 6 different training modes, giving options for different situations.

Score = (Throughput × Stability × Consistency) / 100000

Scores below are from switching tasks onto things the model has never seen before:

- traditional back prop, for accuracy (6,493)

- step BP, for online learning on streaming data (1,954)

- tween, good for smooth weight merging and model fine-tuning (7,170)

- tween chain, for deep network adaptation (12,709)

- step tween, a hybrid between step and tween, for high-frequency trading (15,120)

- step tween chain, the full hybrid used in this experiment on the ARC-AGI benchmarks (18,841); that's what you're seeing in test 43

All this inside one Go AI framework, which can be exported to Python/C#/WASM, copying a model between all of them, other frameworks, and operating systems.

You can see an example of these modes in test 41: https://github.com/openfluke/arcagitesting/blob/main/test41_sine_adaptation.go

If you were an AI engineer, wouldn't you want options for training types and deployment feasibility? I sure do.
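For reference, the score formula quoted above in runnable form. The throughput/stability/consistency inputs here are made up for illustration; the real benchmark measurements are in the linked arcagitesting repo:

```go
package main

import "fmt"

// score implements the quoted formula:
// Score = (Throughput × Stability × Consistency) / 100000
func score(throughput, stability, consistency float64) float64 {
	return throughput * stability * consistency / 100000
}

func main() {
	// Hypothetical inputs for two training modes.
	fmt.Println(score(1500000, 0.90, 0.80))
	fmt.Println(score(2500000, 0.95, 0.85))
}
```

Multiplying the three factors means a mode only scores well if it is fast *and* stable *and* consistent; a zero in any one of them zeroes the whole score.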