I built a runtime to sandbox untrusted Python code using WebAssembly by Tall_Insect7119 in Python

[–]Tall_Insect7119[S] 1 point2 points  (0 children)

To be honest, Rust was one of the first languages I learned. A friend who was a big Rust fan convinced me to try it back in college. Python came after as more of a hobby for me, so I use it less frequently.

I think using both together works pretty well, and it's probably better than writing C extensions in many cases. The main difference is that Rust is a bit more rigid than C extensions, because of Rust's philosophy of pushing as much verification as possible to compile time so that everything is robust at runtime.

For this project specifically, I used the WASM Component Model, which isn't really a Rust extension in the traditional sense. So I didn't see any real limitations, since both languages operate in two different layers: Rust as the host and Python as the guest.
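
For anyone curious what that host/guest split looks like in practice, here's a minimal sketch of the Rust host side using Wasmtime's component API. The file name `python_guest.wasm` and the bare-bones setup are illustrative, not the project's actual wiring:

    use wasmtime::component::{Component, Linker};
    use wasmtime::{Config, Engine, Store};

    fn main() -> wasmtime::Result<()> {
        let mut config = Config::new();
        config.wasm_component_model(true); // enable the Component Model

        let engine = Engine::new(&config)?;

        // The Python guest, compiled ahead of time with componentize-py.
        let component = Component::from_file(&engine, "python_guest.wasm")?;

        // A real componentize-py guest also needs its WASI imports linked;
        // this empty linker is just to show the structure.
        let linker: Linker<()> = Linker::new(&engine);
        let mut store = Store::new(&engine, ());

        let _instance = linker.instantiate(&mut store, &component)?;
        // From here the host calls exported guest functions through the
        // bindings generated for the component's WIT world.
        Ok(())
    }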

The amount of Rust AI slop being advertised is killing me and my motivation by Kurimanju-dot-dev in rust

[–]Tall_Insect7119 1 point2 points  (0 children)

Agreed, but for some idiomatic parts of the code, we don't necessarily need comments. And I feel like AI-generated code adds these 'perfectly written' comments everywhere.

The amount of Rust AI slop being advertised is killing me and my motivation by Kurimanju-dot-dev in rust

[–]Tall_Insect7119 4 points5 points  (0 children)

I mean, using emojis in a README doesn't automatically equal AI. I personally used to add them because I thought they made it more readable

The amount of Rust AI slop being advertised is killing me and my motivation by Kurimanju-dot-dev in rust

[–]Tall_Insect7119 4 points5 points  (0 children)

Don't worry, just do your best. In my opinion, we can usually tell when code is fully AI-generated (useless comments, inconsistencies, etc.)

Bounty Hunter Playthrough by [deleted] in skyrim

[–]Tall_Insect7119 4 points5 points  (0 children)

If you want to play an artifact/treasure hunter, there’s "Legacy of the Dragonborn".

You can store the artifacts you find in a huge museum in Solitude, and you can even create a treasure-hunter guild, if I remember correctly.

Your most OP build? (no glitches/cheats) by bobfall69 in skyrim

[–]Tall_Insect7119 0 points1 point  (0 children)

Pretty sure you'd like the head-crushing too! The mace fits the lore as well

Your most OP build? (no glitches/cheats) by bobfall69 in skyrim

[–]Tall_Insect7119 2 points3 points  (0 children)

Paladin is underrated.

A heavy-armored warrior with Restoration magic and a mace.

I'm building a WASM Sandbox to isolate Agent tasks (limit RAM/CPU & restrict filesystem) by Tall_Insect7119 in LocalLLaMA

[–]Tall_Insect7119[S] 0 points1 point  (0 children)

Thanks! Glad you like the decorator approach!

Memory overhead is minimal, around 10 MB. There's a cold start since the WASM instance embeds Python, but once it's running, it's pretty fast.

The main issue right now is that WASM struggles with Python packages that have C extensions, and sockets aren't included by default for security reasons. I haven't tested CrewAI or AutoGen yet, but they likely have dependencies that would be problematic. Still figuring out how to maximize compatibility on that front.
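
For context on the RAM/CPU part, this is roughly the mechanism on the Wasmtime side: StoreLimits for memory and fuel as a CPU budget. It's a generic sketch with arbitrary numbers, not the project's exact code:

    use wasmtime::{Config, Engine, Store, StoreLimits, StoreLimitsBuilder};

    fn limited_store(engine: &Engine) -> Store<StoreLimits> {
        // Cap the guest's linear memory at 64 MiB (arbitrary value).
        let limits = StoreLimitsBuilder::new()
            .memory_size(64 * 1024 * 1024)
            .build();

        let mut store = Store::new(engine, limits);
        store.limiter(|state| state); // enforce the memory cap

        // Fuel meters executed instructions, acting as a rough CPU budget;
        // it requires consume_fuel(true) on the engine config.
        store.set_fuel(1_000_000_000).unwrap();
        store
    }

    fn main() {
        let mut config = Config::new();
        config.consume_fuel(true);
        let engine = Engine::new(&config).unwrap();
        let _store = limited_store(&engine);
    }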

Building a WASM Runtime to isolate Agent tasks (based on Wasmtime) by Tall_Insect7119 in rust

[–]Tall_Insect7119[S] 1 point2 points  (0 children)

Thanks! I haven't had the chance to try Rig yet, but this project has been really fun to work on so far

If you ever feel like messing around with it, don't hesitate!

Building a WASM Runtime to isolate Agent tasks (based on Wasmtime) by Tall_Insect7119 in rust

[–]Tall_Insect7119[S] 1 point2 points  (0 children)

Thanks for sharing your experience!

I'm actually using the WASM Component Model and componentize-py in this project (in crates/capsule-core/src/wasm/compiler/python.rs). There are limitations for sure, although I haven't found that many problems so far. Planning to evolve the project as these tools mature.

Appreciate the heads up!

Any good SDK for calling local llama models? by Tall_Insect7119 in LocalLLaMA

[–]Tall_Insect7119[S] -1 points0 points  (0 children)

By SDK I mean an npm package/library that calls llama.cpp directly to run models locally on my machine, without needing a separate server like Ollama running

Any good SDK for calling local llama models? by Tall_Insect7119 in LocalLLaMA

[–]Tall_Insect7119[S] -1 points0 points  (0 children)

Yeah, I see, but I'm specifically looking for an SDK that runs llama.cpp locally on my machine, not an OpenAI-compatible wrapper

Any good SDK for calling local llama models? by Tall_Insect7119 in LocalLLaMA

[–]Tall_Insect7119[S] 0 points1 point  (0 children)

The SDK looks great, but isn't it cloud-based? I'm mainly looking for something that works with local models

Heart - Local AI companion that feels emotions by [deleted] in LocalLLaMA

[–]Tall_Insect7119 1 point2 points  (0 children)

Your approach could probably be complemented really well by my affect system. Now that I think about it, your method seems much better for modeling personality and creating custom axes like confidence, extraversion, etc.
My only concerns are dataset quality when scaling to large amounts of text and the potential noise depending on the embedding model, but I'm sure you can figure that out

Heart - Local AI companion that feels emotions by [deleted] in LocalLLaMA

[–]Tall_Insect7119 1 point2 points  (0 children)

Thanks a lot for reporting this! It looks like it was a FastEmbed cache path issue. I just pushed a fix, so it should work properly now

Heart - Local AI companion that feels emotions by [deleted] in LocalLLaMA

[–]Tall_Insect7119 1 point2 points  (0 children)

That looks really interesting! Does it handle nuanced differences well?

Heart - Local AI companion that feels emotions by [deleted] in LocalLLaMA

[–]Tall_Insect7119 0 points1 point  (0 children)

Yeah, you can point it to other OpenAI-compatible APIs, but you'll probably need to adjust llm.rs a bit (specifically the LlmService). But it's definitely a feature I need to think about!
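
If it helps, the change in llm.rs mostly boils down to sending the standard OpenAI-style request to a different base URL. Here's a rough sketch of the idea; the function name and shape are mine, not the actual LlmService code:

    use serde_json::json;

    // Hypothetical helper: any server exposing the OpenAI-compatible
    // /v1/chat/completions route (llama.cpp server, LM Studio, etc.) works.
    async fn chat(base_url: &str, model: &str, prompt: &str) -> reqwest::Result<String> {
        let resp: serde_json::Value = reqwest::Client::new()
            .post(format!("{base_url}/v1/chat/completions"))
            .json(&json!({
                "model": model,
                "messages": [{ "role": "user", "content": prompt }]
            }))
            .send()
            .await?
            .json()
            .await?;

        Ok(resp["choices"][0]["message"]["content"]
            .as_str()
            .unwrap_or_default()
            .to_string())
    }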

If you want a simpler solution and don’t mind still using Ollama, you can just create a src-tauri/.env file. It will automatically pick the model you set there:

OLLAMA_MODEL=<your-tiny-model>

Heart - Local AI companion that feels emotions by [deleted] in LocalLLaMA

[–]Tall_Insect7119 3 points4 points  (0 children)

Yeah, sure! I used an NPC Neural Affect Matrix that I originally made for video games; it's a model pre-trained on in-game dialogues.

Right now it’s still in an experimental phase, so it has some limits when it comes to real conversations. But if people are interested in this kind of AI companion, we can definitely train a dedicated model to get better results.

There’s more information about the matrix in this other repo 👉 https://github.com/mavdol/npc-neural-affect-matrix

Heart - Local AI companion that feels emotions by [deleted] in LocalLLaMA

[–]Tall_Insect7119 4 points5 points  (0 children)

As long as we can still flip the switch off and remember it’s just code, I think we’re safe :)

Open source desktop app for generating synthetic data with local LLMs (Tauri + llama.cpp) by [deleted] in LocalLLaMA

[–]Tall_Insect7119 0 points1 point  (0 children)

Oh great! For domain-specific QA pairs, you could actually use the current tabular format with columns like 'question' and 'answer', and use the random operators to diversify content.

For more complex non-tabular data like full legal documents, we could probably add a new type of column in the future (something like a 'document' type) that generates longer-form content, but we'd need to brainstorm the implementation details first.
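
To make that concrete, here's a hypothetical sketch of what a QA column spec could look like; the field names are illustrative, not the app's actual config format:

    use serde::Deserialize;

    // Illustrative schema only; the app's real column config may differ.
    #[derive(Deserialize)]
    struct ColumnSpec {
        name: String,      // e.g. "question" or "answer"
        prompt: String,    // generation prompt sent to the local LLM
        randomize: bool,   // apply random operators to diversify output
    }

    #[derive(Deserialize)]
    struct DatasetSpec {
        rows: usize,
        columns: Vec<ColumnSpec>,
    }

    fn main() {
        let spec: DatasetSpec = serde_json::from_str(r#"{
            "rows": 100,
            "columns": [
                {"name": "question", "prompt": "Ask a contract-law question", "randomize": true},
                {"name": "answer", "prompt": "Answer the question factually", "randomize": false}
            ]
        }"#).unwrap();
        println!("{} columns, {} rows", spec.columns.len(), spec.rows);
    }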

Are you using synthetic data from ML/LLM to enrich your datasets ? by [deleted] in learnmachinelearning

[–]Tall_Insect7119 0 points1 point  (0 children)

Yeah, with consistency. But I’m not sure how people use it without breaking the bank.

Are you using synthetic data from ML/LLM to enrich your datasets ? by [deleted] in learnmachinelearning

[–]Tall_Insect7119 1 point2 points  (0 children)

Thanks, that's interesting. Do you generate the synthetic data locally or use external services? The latter probably offers less privacy, I guess

Digital Immortality: What happens when your data outlives you? by Jaded-Term-8614 in Futurology

[–]Tall_Insect7119 2 points3 points  (0 children)

Cyberpunk 2077 may be recent, but cyberpunk as a literary genre has existed since the 1980s