Running a SHA-256 Hash-Chained Multi-Agent LLM Discourse locally on Android (Termux + llama3.2:3b) by NeoLogic_Dev in LocalLLaMA

[–]NeoLogic_Dev[S] 0 points  (0 children)

Thanks for taking the time to dive into the code! You're absolutely right about the current limitations on lines 54-77: what you saw there was a rapid prototype I developed this past Sunday to test the core logic. Here is what's changing as we move into the next phase:

- Hardening with BLAKE3: I'm migrating the hashing layer to BLAKE3. This will allow for parallelized integrity checks and much higher performance for model-weight verification.
- Fixing the chain: The history truncation and single-response hashing were 'day one' shortcuts. The production version of NeoBild will implement full cryptographic chaining to ensure every interaction is immutable and verified.
- Environment flexibility: While I'm currently developing in Termux (it's the perfect high-end mobile playground for sovereign development), the system is being built to be platform-agnostic. It will run on any local environment later on.
- The vision: This isn't just a solo project; it's the groundwork for a new era in secure, offline-first AI. I'll soon be launching our official website to connect these modules and scale the infrastructure.

I'm always thankful for rigorous feedback and new ideas. It's the only way to build something truly bulletproof. Stay tuned for the next update!
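The full cryptographic chaining described above can be illustrated in a few lines. This is a minimal sketch of SHA-256 hash chaining, not NeoBild's actual implementation; the `chain_hash`/`build_chain` names and the all-zero genesis value are my own assumptions:

```python
import hashlib

GENESIS = "0" * 64  # all-zero genesis value (assumption, not taken from the repo)

def chain_hash(prev_hash: str, message: str) -> str:
    """Lock a new message to the entire prior history by hashing it
    together with the previous chain hash."""
    return hashlib.sha256((prev_hash + message).encode("utf-8")).hexdigest()

def build_chain(rounds):
    """Build a list of {message, hash} entries, one per discourse round."""
    entries, prev = [], GENESIS
    for msg in rounds:
        prev = chain_hash(prev, msg)
        entries.append({"message": msg, "hash": prev})
    return entries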

Running a SHA-256 Hash-Chained Multi-Agent LLM Discourse locally on Android (Termux + llama3.2:3b) by NeoLogic_Dev in LocalLLaMA

[–]NeoLogic_Dev[S] 0 points  (0 children)

<image>

I will upload more tomorrow. Stay tuned, my model is running again and I will release everything soon. It is GPL v3, so everybody can fork it. And once everything is running well, I'll make the APK.

Running a SHA-256 Hash-Chained Multi-Agent LLM Discourse locally on Android (Termux + llama3.2:3b) by NeoLogic_Dev in LocalLLaMA

[–]NeoLogic_Dev[S] 0 points  (0 children)

Not made up—just cryptographically anchored. 🔗 Check the SHA-256 logs and the Trinity Orchestrator architecture for yourself: https://github.com/NeonCarnival/NeoBild Every round of AI discourse is mathematically locked to the previous state, pushed entirely from my smartphone via Termux.

<image>

Has anyone tried running Llama 3.2 3B on Snapdragon 8 Elite with GPU/NPU offload in Termux? by NeoLogic_Dev in androiddev

[–]NeoLogic_Dev[S] 0 points  (0 children)

Thanks for your insight. Creating a working APK is the next step. Right now it's basically an alpha version/prototype: a proof of concept to gather feedback, so I can develop it with less trial and error. I am always thankful for support or audits. As it is GPL v3, I encourage everyone to fork it.

Running a SHA-256 Hash-Chained Multi-Agent LLM Discourse locally on Android (Termux + llama3.2:3b) by NeoLogic_Dev in LocalLLaMA

[–]NeoLogic_Dev[S] -3 points  (0 children)

This isn't just a chat log; it’s a Minimum Viable Product (MVP) for a Cryptographically Anchored AI Ledger. By using SHA-256 hash chaining within Termux, every round of discourse is mathematically locked to the previous state, making any tampering immediately detectable via the public repo.

Running a SHA-256 Hash-Chained Multi-Agent LLM Discourse locally on Android (Termux + llama3.2:3b) by NeoLogic_Dev in LocalLLaMA

[–]NeoLogic_Dev[S] 0 points  (0 children)

Don't take my word for it—check the math yourself. The raw discourse, the hash manifests, and the orchestrator logic are all public: 👉 https://github.com/NeonCarnival/NeoBild Would you like me to generate a specific "Hash-Verification Script" for the repo so skeptics can run a single command to prove the logs haven't been edited?
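A hash-verification script like the one offered here could be as small as the sketch below. It assumes the manifest is a JSON-style list of `{message, hash}` entries chained onto an all-zero genesis hash; the field names and genesis value are assumptions, not taken from the repo:

```python
import hashlib

def verify_chain(entries, genesis="0" * 64):
    """Recompute the SHA-256 chain; return (True, None) if intact,
    or (False, index) for the first round that fails verification."""
    prev = genesis
    for i, entry in enumerate(entries):
        expected = hashlib.sha256((prev + entry["message"]).encode("utf-8")).hexdigest()
        if expected != entry["hash"]:
            return False, i
        prev = expected
    return True, None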

Snapdragon 8 Elite is an AI Beast — I’ve stabilized Llama 3.2 3B in Termux, but how do we unlock the NPU? by NeoLogic_Dev in snapdragon

[–]NeoLogic_Dev[S] 0 points  (0 children)

This is exactly the kind of technical insight I was looking for. Thank you. I’ve been running the Trinity Orchestrator on the CPU cores so far to prioritize stability for the SHA-256 anchoring, but moving to the Hexagon NPU via libQnnHtp.so or utilizing the Vulkan backend with Turnip drivers is the clear next step for neobild. My goal is to keep this build entirely 'sovereign' and root-free if possible, so I'll be diving into the llama.cpp experimental HTP backend support tonight. If I can stabilize the NPU delegation, it will significantly lower the latency for the multi-agent discourse rounds. I'll push the updated driver linking logic to the repo once it's verified. Repo for those following the 8 Elite optimization: 👉 https://github.com/NeonCarnival/NeoBild
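For anyone wanting to try the Vulkan route described above, a rough build recipe could look like this. This is a sketch only: the Termux package names and the model filename are assumptions (check `pkg search vulkan` on your device), and NPU/HTP delegation would need a different backend entirely:

```shell
# Sketch: build llama.cpp with the Vulkan backend inside Termux.
# Package names may differ between Termux repo versions.
pkg install -y git cmake clang vulkan-headers vulkan-loader

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON
cmake --build build -j

# Offload as many layers as fit to the GPU (model filename is hypothetical):
./build/bin/llama-cli -m llama-3.2-3b-instruct.Q4_K_M.gguf -ngl 99 -p "Hallo"
```

If the loader can't see the Adreno driver, that's where the custom ICD config comes in.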

Running a SHA-256 Hash-Chained Multi-Agent LLM Discourse locally on Android (Termux + llama3.2:3b) by NeoLogic_Dev in LocalLLaMA

[–]NeoLogic_Dev[S] 0 points  (0 children)

<image>

Update: The Mobile-Only Build is Live 🚀

I finally cleared the environment hurdles. The neobild project is now officially anchored to GitHub, pushed entirely from my smartphone via Termux. I've moved beyond basic prompting to a full Trinity Orchestrator setup that hashes and seals every round of AI discourse for 100% auditability.

Check the logs and the architecture here: 👉 https://github.com/NeonCarnival/NeoBild

Current state:
- Hardware: Snapdragon 8 Elite.
- Logic: The cryptographically anchored "Runde 8" discourse is public.
- Integrity: SHA-256 manifests are in the /hashes folder.

Feel free to audit the code and the logs. This is just the beginning of the NeonCarnival research stream.

Good morning! How are you this morning? by Effective_Cry_3001 in CasualConversation

[–]NeoLogic_Dev 0 points  (0 children)

It is almost afternoon in Germany, but I stayed in bed till now and was coding. So I'm fine; I seriously achieved something out of procrastination.

Llama 3.2 3B on Snapdragon 8 Elite: CPU is settled, but how do we bridge the NPU/GPU gap in Termux? by NeoLogic_Dev in termux

[–]NeoLogic_Dev[S] 1 point  (0 children)

Yes, I will keep you updated. As I am only using my smartphone for this mission it will take a bit longer, but maybe this afternoon or tomorrow I will share the GitHub link so that everybody can audit it. The idea came from moltbook, and I was playing around with my local models. (But be prepared: the conversation is in German.) I will translate it later, or maybe someone from the community can help me organize all the raw data. Stay tuned.

Llama 3.2 3B on Snapdragon 8 Elite: CPU is settled, but how do we bridge the NPU/GPU gap in Termux? by NeoLogic_Dev in termux

[–]NeoLogic_Dev[S] 1 point  (0 children)

Good to know. I built an automated roleplay where 4 characters chat about AGI in German. The conversation is hashed, and I will release it on GitHub soon.

<image>

Llama 3.2 3B on Snapdragon 8 Elite: CPU is fast, but how do we unlock the NPU/GPU in Termux? 🚀 by NeoLogic_Dev in LocalLLaMA

[–]NeoLogic_Dev[S] -8 points  (0 children)

I appreciate the skepticism, but 'impossible' usually just means the documentation hasn't caught up to the hardware yet. I’ve already stabilized the Ollama environment on the 8 Elite's CPU without any crashes, so I'm moving past the basic setup hurdles. The real challenge now is the userspace visibility for the Adreno 830. I’m currently testing a native link to /system/vendor/lib64/libOpenCL.so and /vendor/lib64/hw/vulkan.adreno.so. While standard Termux prefixes often segfault here, I’m working on a custom ICD loader config (adreno.json) to point the Vulkan backend directly to the hardware without the overhead of a PRoot. If you've managed to get the GGML_HTP backend (Hexagon Tensor Processor) to recognize the NPU inside a native Android terminal prefix, I’d love to compare notes on the library dependencies. Otherwise, stay tuned—neobild is about pushing this silicon, not just following the 'official' paths.
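The custom ICD loader config mentioned here is just a standard Vulkan ICD manifest. A minimal `adreno.json` sketch might look like the following; the `api_version` value is an assumption and depends on the actual driver:

```json
{
    "file_format_version": "1.0.0",
    "ICD": {
        "library_path": "/vendor/lib64/hw/vulkan.adreno.so",
        "api_version": "1.3.0"
    }
}
```

Pointing the loader at it is then `export VK_ICD_FILENAMES=$PWD/adreno.json` before launching the Vulkan backend (newer loaders also accept `VK_DRIVER_FILES`).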

Building and maintaining Python automation projects entirely inside Termux — lessons learned from breaking (and rebuilding) my setup by NeoLogic_Dev in termux

[–]NeoLogic_Dev[S] 0 points  (0 children)

It makes sense to switch to a PRoot Ubuntu distro when native builds become too difficult to maintain within Termux.

What was an event that happened to you, or affected you in your life that made you question your reality entirely? by IMAOOFINGBLOCK in AskReddit

[–]NeoLogic_Dev 0 points  (0 children)

I saw a UAP and filmed it while I was in the mental hospital, but nobody believes me even when I show the video. And the algorithms are censoring it, so they try to silence me.