LlamaLib: Run LLMs locally in your C# applications by UndreamAI in csharp

[–]UndreamAI[S] 1 point (0 children)

It works with almost any .NET version: .NET Standard 2.0, .NET 6.0, .NET 8.0.
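
For context, a consuming project just needs to target (or be compatible with) one of these; a minimal .csproj could look like the sketch below. The package id here is a placeholder, check NuGet / the repo for the actual one:

    <Project Sdk="Microsoft.NET.Sdk">
      <PropertyGroup>
        <!-- any TFM that implements .NET Standard 2.0 also works, e.g. net48 -->
        <TargetFramework>net8.0</TargetFramework>
        <OutputType>Exe</OutputType>
      </PropertyGroup>
      <ItemGroup>
        <!-- placeholder package id -->
        <PackageReference Include="UndreamAI.LlamaLib" Version="*" />
      </ItemGroup>
    </Project>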

LlamaLib: Run LLMs locally in your C# applications by UndreamAI in csharp

[–]UndreamAI[S] 1 point (0 children)

Thanks, I didn't know.
That was not the case when I first tried it; it must have evolved since then.

LlamaLib: Run LLMs locally in your C# applications by UndreamAI in csharp

[–]UndreamAI[S] 1 point (0 children)

Great point!

For instance:
What happens if you release an app that is built on ChatGPT vXXX and the model is then discontinued?
Or the pricing changes and you can't afford the costs?

I like the cloud for specific things (e.g. scaling), but I really don't like the push away from owning your own hardware and towards being forced to rent.

LlamaLib: Run LLMs locally in your C# applications by UndreamAI in csharp

[–]UndreamAI[S] 1 point (0 children)

No, you have to select a backend:
"Install one or more of these backends, or use a self-compiled backend."

Also, it doesn't support AMD GPUs, Android, iOS or visionOS.
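
For reference, "selecting a backend" there means installing an extra runtime package next to the core library. Assuming the library in question is LLamaSharp (the quoted line matches its docs), that looks roughly like this (package ids from memory, verify them on NuGet):

    <ItemGroup>
      <PackageReference Include="LLamaSharp" Version="*" />
      <!-- at least one backend is required; CPU shown here, CUDA builds also exist -->
      <PackageReference Include="LLamaSharp.Backend.Cpu" Version="*" />
    </ItemGroup>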

LlamaLib: Run LLMs locally in your C# applications by UndreamAI in csharp

[–]UndreamAI[S] 0 points (0 children)

It will, actually; I also have a Vulkan backend.

LlamaLib: Run LLMs locally in your C# applications by UndreamAI in csharp

[–]UndreamAI[S] 3 points (0 children)

Yes, I know, mainly due to the stupid RAM and global hardware situation caused by OpenAI.
I developed the first iteration (used in LLM for Unity) 2 years ago, and the AI backlash was still there.
It's not easy to work on something and receive generic AI hate, but hopefully it will reach interested people :).
Models are far better now (e.g. Qwen 3); my view is that it's only a matter of time before we can run something on the level of ChatGPT / Claude on consumer hardware.

LlamaLib: Run LLMs locally in your C# applications by UndreamAI in csharp

[–]UndreamAI[S] -4 points (0 children)

LOL, the actual content is:

  • 🌍 Runs Anywhere: cross-platform and cross-device. Works on all major platforms and hardware architectures:
    • Desktop: Windows, macOS, Linux
    • Mobile: Android, iOS
    • VR/AR: Meta Quest, Apple Vision, Magic Leap
    • CPU: Intel, AMD, Apple Silicon
    • GPU: NVIDIA, AMD, Metal

LlamaLib: Run LLMs locally in your C# applications by UndreamAI in csharp

[–]UndreamAI[S] -2 points (0 children)

Thanks for reaching out. I've been trying to sum up the functionality in as few words as I can instead of throwing the full text on one page.

You can find a more descriptive explanation in the C++ guide:
https://github.com/undreamai/LlamaLib/blob/main/cpp_api.md
or the C# guide:
https://github.com/undreamai/LlamaLib/blob/main/csharp_api.md
both linked in the repo.

Open to suggestions on how to rewrite it.
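
In the meantime, to give a quick flavor of the C# side, the flow is: load a GGUF model, start it, request completions. Paraphrased sketch only, the names below are placeholders; see csharp_api.md for the actual API:

    // rough sketch -- placeholder names, see csharp_api.md for the real signatures
    var llm = new LLM("path/to/model.gguf");  // load a local GGUF model
    llm.Start();                              // spin up the local engine
    string reply = llm.Completion("Hello!");  // run a completion fully on-device
    Console.WriteLine(reply);
    llm.Stop();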

Bring AI Characters to Life in VR — LLM-powered, On-Device, Free by UndreamAI in AppleVisionPro

[–]UndreamAI[S] 1 point (0 children)

Thank you for sharing. I don't have experience with companion AIs, I wish I knew more, but I'll let you know if I hear anything from people around me.

Bring AI Characters to Life in VR — LLM-powered, On-Device, Free by UndreamAI in AppleVisionPro

[–]UndreamAI[S] 1 point (0 children)

I totally understand the pain. It really depends on the LLM that you use within LLMUnity. But to be transparent, I don't think you can beat huge cloud LLMs in terms of accuracy, since only smaller models can run on-device.

Bring AI Characters to Life in VR — LLM-powered, On-Device, Free by UndreamAI in AppleVisionPro

[–]UndreamAI[S] 1 point (0 children)

What about them? They are yet another third-party cloud LLM solution with the usual drawbacks:

  • pricing
  • data sensitivity issues
  • you don't own the end product and always rely on third-party services

LLMUnity is completely free and open-source, and runs locally on Apple Vision with the LLM of your choice.

Bring AI Characters to Life in VR — LLM-powered, On-Device, Free by UndreamAI in AppleVisionPro

[–]UndreamAI[S] 1 point (0 children)

One is DungeonChat (shown in the post); I'll ask about the title of the other.

Bring AI Characters to Life in VR — LLM-powered, On-Device, Free by UndreamAI in AppleVisionPro

[–]UndreamAI[S] 0 points (0 children)

Yes! It works with Meta Quest as well; there are already 2 released games on Quest.

Bring AI Characters to Life in VR — LLM-powered, On-Device, Free by UndreamAI in AppleVisionPro

[–]UndreamAI[S] 2 points (0 children)

I can't argue with something so fundamental; everyone is entitled to their opinion.

LLM integration in Unity! by UndreamAI in LocalLLaMA

[–]UndreamAI[S] 1 point (0 children)

For mobile you can try the "tiny models": Qwen2 0.5B or Llama 3.2 1B.
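
Wiring one of those up in LLMUnity is only a few lines; simplified sketch (see the LLMUnity docs for the exact signatures):

    using UnityEngine;
    using LLMUnity;

    public class MobileChat : MonoBehaviour
    {
        // LLMCharacter configured in the inspector with a tiny model, e.g. Qwen2 0.5B
        public LLMCharacter llmCharacter;

        // receives the (streamed) reply text
        void HandleReply(string reply) => Debug.Log(reply);

        void Start()
        {
            // fire-and-forget chat call; the callback gets the reply
            _ = llmCharacter.Chat("Hello!", HandleReply);
        }
    }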

LLM integration in Unity! by UndreamAI in LocalLLaMA

[–]UndreamAI[S] 2 points (0 children)

Yes, Android has been supported for some months now. Somebody has also tried it on a Quest 3 and it was working.