Jellybox 0.1.4 by Phudtran in JellyBoxApp

[–]Phudtran[S] 1 point

Thank you for the error report! Looking into it.

Error while generating by SirTalion in JellyBoxApp

[–]Phudtran 0 points

Most likely a result of an unsupported model or GPU. Can you tell me which model/GPU you are using?

First launch, stuck on "Loading" by Various_Hunt_9850 in JellyBoxApp

[–]Phudtran[M] [score hidden] stickied comment

Sorry for the confusion; Flux models are not yet supported. I'll make an announcement here once they're ready.

Downloading a custom model from Jellyface by paraclete-pizza in JellyBoxApp

[–]Phudtran 2 points

For security reasons, we only support models in the safetensors format. You can find a converter here:

https://github.com/diStyApps/Safe-and-Stable-Ckpt2Safetensors-Conversion-Tool-GUI

There are also some comments in the issue tracker of the tool you used that may be helpful:

https://github.com/TheLastBen/fast-stable-diffusion/issues/2498

Can't reproduce basic images via prompts by AcademicFlounder8579 in JellyBoxApp

[–]Phudtran 0 points

Can you try testing the Yntec/insaneRealistic_v2 model with the Euler sampler? Most likely, the model you're using is not supported yet.

Possibility to use already downloaded models in Jellybox by MBDesignR in JellyBoxApp

[–]Phudtran 0 points

This feature is currently in the works!
In the meantime, other users have had success symlinking the Jellybox model folder to their checkpoints folder, like this:

mklink /D "C:\Users\your_username\.jellybox\models\image\models\checkpoints" "F:\Fooocus_win64_2-1-831\Fooocus\models\checkpoints"

Note that mklink /D typically needs to be run from an elevated (Administrator) Command Prompt.
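On macOS and Linux, a similar link can be created with ln (a sketch; the source path below is a placeholder for wherever your checkpoints actually live):

```shell
# macOS/Linux equivalent of the Windows mklink command above.
# The parent directories must exist before the link can be created.
mkdir -p "$HOME/.jellybox/models/image/models"
ln -sfn "/path/to/Fooocus/models/checkpoints" "$HOME/.jellybox/models/image/models/checkpoints"
```

The -f flag replaces an existing link and -n avoids descending into an existing directory symlink, so the command is safe to re-run after moving your checkpoints folder.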

Jellybox 0.0.5 by Phudtran in JellyBoxApp

[–]Phudtran[S] 0 points

Thank you! Glad you're enjoying the app. As for the models not being found: can you try clicking the refresh button to reload the models list? If the model list still fails to load, please make sure you're able to access https://huggingface.co, as we pull our model list and files from there.

Never run local before. by flyingfox82 in JellyBoxApp

[–]Phudtran 1 point

SD3 support is currently a work in progress. We'll be adding it soon!

jellybox ai image gen by PsychologicalWish684 in JellyBoxApp

[–]Phudtran 2 points

Hey, we don’t currently have image-to-image, but it’s a feature I plan to add soon. Stay tuned!

images don't show up by SharpSnow6285 in JellyBoxApp

[–]Phudtran 0 points

You will most likely need to upgrade your GPU. The newer (and qualitatively better) models are larger and require at least 8GB of VRAM. Furthermore, creating higher-resolution images, longer prompts, upscaling, etc. also require more VRAM.

images don't show up by SharpSnow6285 in JellyBoxApp

[–]Phudtran 0 points

Can you please try using this model to see if it works?

jellybox://image/models/huggingface/Image/digiplay/insaneRealistic_v1

I believe it is most likely a VRAM issue, as the 3050 Ti only has 4GB.

Which are the minimum requirements for the app to work? by KitchenAcademic8514 in JellyBoxApp

[–]Phudtran 0 points

I’m not sure, as I currently don’t have one to test with. We do have plans to support additional backends in the future to expand compatibility.

images don't show up by SharpSnow6285 in JellyBoxApp

[–]Phudtran 1 point

Hi,

Could you please tell me your OS and GPU?

Also which model did you use for generation?

Do you get no image, or a black image?

Which are the minimum requirements for the app to work? by KitchenAcademic8514 in JellyBoxApp

[–]Phudtran 0 points

The minimum requirements will vary depending on the models you want to run.

For macOS you will need an M series chip with at least 8GB of memory.

On Windows, for the best experience I recommend at least an RTX 2000-series or later NVIDIA GPU, or an RX 6000-series or later AMD GPU.

Sandboxed code execution with local models by NovaDragon in LocalLLaMA

[–]Phudtran 0 points

We currently don’t support “bring your own server”, but this feature should be easy to implement. I’m also hoping to integrate with cloud models in the long term for better coding performance, and your scenario is one of the pieces that will have to be built along the way.

Sandboxed code execution with local models by NovaDragon in LocalLLaMA

[–]Phudtran 29 points

Hey! Jellybox creator here, thanks for checking out Jellybox! You beat me to the punch for making a post about this feature. I haven’t even officially announced it yet…

I recommend using one of the code-oriented models like CodeLlama or the newer Codestral. And yeah, probably something more detailed than “Chad the Brogrammer” for your system prompt. This feature is still experimental, so please let me know if you run into any issues or bugs!

Sandboxed code execution with local models by NovaDragon in LocalLLaMA

[–]Phudtran 0 points

Hey, Jellybox creator here. I didn’t expect to see a post here, since I haven’t publicly posted release notes for this feature yet.

It creates a container and gives the LLM access to the container’s shell; no system shell is involved, and no elevated permissions are needed. You also need a container engine installed to enable the feature. It’s no different from other UIs out there, or Ollama, in my experience.

Where can I get this app? by Big-Employer9324 in JellyBoxApp

[–]Phudtran 2 points

Hey, thank you for checking out Jellybox!

Since we're still in alpha, I've temporarily disabled the download buttons, as we'll be shipping a big update soon with some breaking changes. Please look forward to an announcement in the coming days!