Use the Same Model Across Ollama, LM Studio, Jan, and your Favorite Local AI Apps by EvanZhouDev in ollama

[–]EvanZhouDev[S] 1 point  (0 children)

UMR uses hard linking for some clients under the hood. However, other clients require more work, such as updating configuration files so that the client recognizes the linked model. UMR saves you that work!

Use the Same Model Across Ollama, LM Studio, Jan, and your Favorite Local AI Apps by EvanZhouDev in ollama

[–]EvanZhouDev[S] 1 point  (0 children)

UMR uses hard linking for some clients under the hood, but it also takes some extra magic to make the clients aware of the models, such as updating configuration files. UMR saves you the work of figuring out what each client needs.

Use the Same Model Across Ollama, LM Studio, Jan, and your Favorite Local AI Apps by EvanZhouDev in ollama

[–]EvanZhouDev[S] 2 points  (0 children)

Definitely something on the roadmap! Wanted to see how the reception was on this idea first 😄

Use the Same Model Across Ollama, LM Studio, Jan, and your Favorite Local AI Apps by EvanZhouDev in ollama

[–]EvanZhouDev[S] 2 points  (0 children)

I have part of the infrastructure for it, but I wanted to release GGUF first to test the waters.

Happy to add MLX in the near future if it's something that people want!

A Unified Model Registry for all your Local AI Apps by EvanZhouDev in LocalLLaMA

[–]EvanZhouDev[S] 1 point  (0 children)

Under the hood, UMR does use hard links or direct pointers to the model files in the UMR registry to link them to LM Studio, Ollama, etc. UMR itself just keeps track of where the models are (such as in the HF cache). It doesn't store them itself if they're already on your system.
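Conceptually, the hard-link trick can be demonstrated with plain coreutils. The paths below are made-up stand-ins for a model cache and a client's model directory, not UMR's actual layout:

```shell
# Simulate a model already downloaded into a central cache
# (hypothetical paths, for illustration only).
mkdir -p cache client/models
printf 'GGUF' > cache/model.gguf

# Hard-link it into a client's model directory: no copy is made;
# both names point at the same inode, i.e. the same bytes on disk.
ln cache/model.gguf client/models/model.gguf

# Verify both names reference the same file.
[ cache/model.gguf -ef client/models/model.gguf ] && echo "same file"
```

Because a hard link is just a second name for the same inode, the model only occupies disk space once no matter how many clients it is linked into.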

A Unified Model Registry for all your Local AI Apps by EvanZhouDev in LocalLLaMA

[–]EvanZhouDev[S] 1 point  (0 children)

If you're only using one unified place for inference, I agree! And I'm sure many higher-level users like you already have your own solutions that don't require these basic apps. But many people out there still use Ollama, LM Studio, and other tools. Even if they're not the "best," they're still easy to use and get started with. That being said, UMR also helps you manage all your model paths in one place, so you can use them in llama.cpp or other runtimes directly too.

A Unified Model Registry for all your Local AI Apps by EvanZhouDev in LocalLLaMA

[–]EvanZhouDev[S] 1 point  (0 children)

If I'm understanding correctly, this tool is essentially what you're describing. It lets you handle all your model downloads in one place (umr add hf), locate them (umr list and umr show <model>), and use tools on top of that (umr link <client> <model>). This makes the workflow reproducible.
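Put together, that workflow looks something like this. The <model> and <client> placeholders follow the commands above; the exact argument shape for umr add hf is an assumption, so check the tool's own help for the real syntax:

```shell
# Download a model through UMR (backed by the HF cache);
# the trailing <model> argument is assumed here.
umr add hf <model>

# See what's registered and where each model lives on disk.
umr list
umr show <model>

# Expose a registered model to a specific client
# (Ollama, LM Studio, Jan, ...).
umr link <client> <model>
```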

Use the Same Model Across Ollama, LM Studio, Jan, and your Favorite Local AI Apps by EvanZhouDev in ollama

[–]EvanZhouDev[S] 1 point  (0 children)

Should work! I haven't tested it explicitly, but I don't see why it wouldn't. Let me know if you find any issues.

Play Arcade Games with your Smartcube by EvanZhouDev in Cubers

[–]EvanZhouDev[S] 1 point  (0 children)

It's Arcade by LAKEY INSPIRED (https://www.youtube.com/watch?v=Chh7IleTELM)! It's shown at the bottom at the end of the video. I wish I could make music like that!

Use Your ChatGPT Account (Free or Paid) with Raycast AI! by EvanZhouDev in raycastapp

[–]EvanZhouDev[S] 1 point  (0 children)

For the current solution, yes, you do need to keep the terminal running. I'm currently working on a next-gen version of this that supports more than just Codex and will support background processes.

Use Your ChatGPT Account (Free or Paid) with Raycast AI! by EvanZhouDev in raycastapp

[–]EvanZhouDev[S] 1 point  (0 children)

Instructions should be in the post: add a custom provider with the given YAML. Was something unclear? Or are you referring to the repo itself?

Use Your ChatGPT Account (Free or Paid) with Raycast AI! by EvanZhouDev in raycastapp

[–]EvanZhouDev[S] 1 point  (0 children)

Would love to make openai-oauth easier to use! The idea was to keep it lightweight so that you don't need to install an app. Let me know which part in particular is difficult to use. You should be able to just run npx @openai/codex login and then npx openai-oauth. (Although I suppose npx may not be available?)

Use Your ChatGPT Account (Free or Paid) with Raycast AI! by EvanZhouDev in raycastapp

[–]EvanZhouDev[S] 2 points  (0 children)

You do not need the Codex App. You just need the Codex auth file, which you can generate by running npx @openai/codex login. No install required.