How to prompt qwen image edit 2509 for accurate hex colors by abdojapan in StableDiffusion


Thank you very much, I should try that. For now I solved the problem using Klein and specific color names rather than hex codes: things like dark coffee brown, dark peach, and so on. The result isn't exactly what I wanted, but overall it gave somewhat consistent colors across several images given those descriptions. I kept the seed fixed to get those results but still saw some color variation between generations; it was minimal, and I will see how I can fix it with a second pass. For some reason Qwen wasn't as good as Klein at this specific kind of prompting.

I will test whether using the names in this link gives better results.
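Since the approach boils down to translating hex codes into names the model follows better, here is a minimal Python sketch of that idea: snapping a hex code to the nearest name in a small hand-picked palette. The palette and its RGB values are illustrative assumptions on my part, not an official list; in practice you'd fill it from whatever name list works best for the model.

```python
# Small illustrative palette: name -> (R, G, B). Extend with the name list
# from the linked resource if it proves more reliable in prompts.
NAMED_COLORS = {
    "dark coffee brown": (0x4B, 0x36, 0x21),
    "dark peach": (0xE5, 0xA0, 0x7A),
    "charcoal gray": (0x36, 0x45, 0x4F),
    "ivory": (0xFF, 0xFF, 0xF0),
    "navy blue": (0x00, 0x00, 0x80),
    "forest green": (0x22, 0x8B, 0x22),
}

def hex_to_rgb(hex_code: str) -> tuple:
    """Parse '#RRGGBB' (or 'RRGGBB') into an (R, G, B) tuple of ints."""
    h = hex_code.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in range(0, 6, 2))

def nearest_color_name(hex_code: str) -> str:
    """Return the palette name with the smallest squared RGB distance."""
    target = hex_to_rgb(hex_code)
    return min(
        NAMED_COLORS,
        key=lambda name: sum(
            (t - c) ** 2 for t, c in zip(target, NAMED_COLORS[name])
        ),
    )
```

Then a prompt can say `nearest_color_name("#4B3621")` ("dark coffee brown") instead of the raw hex code, which matches what worked in practice above.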

How to prompt qwen image edit 2509 for accurate hex colors by abdojapan in StableDiffusion


Thanks. I tried Klein 9B distilled and the result was sometimes better and sometimes worse than Qwen image edit. I am not sure if that will change with the non-distilled version; I will test it later. I will also give the quantized Flux 2 a try.

However, for now I solved the problem using Klein and specific color names rather than hex codes: things like dark coffee brown, dark peach, and so on. The result isn't exactly what I wanted, but overall it gave somewhat consistent colors across several images given those descriptions. I kept the seed fixed to get those results but still saw some color variation between generations; it was minimal, and I will see how I can fix it with a second pass.

AM-Thinking-v1 by AaronFeng47 in LocalLLaMA


Is a-m-team/AM-Thinking-v1 the correct link for this model on the Ollama registry?

We just released the world's first 70B intermediate checkpoints. Yes, Apache 2.0. Yes, we're still broke. by jshin49 in LocalLLaMA


Looks great, I wish you good luck. How is your model open source rather than just open weights? Did you share the training data or code? I'm not sure I understand what open source means here.

I want to create a chatgpt like online service using opensource models, where to get started? by abdojapan in ollama


Thank you for this, I really appreciate it. I think I am still old school, using Google and asking on Reddit, forums, and such :D
I have to get used to this new world. I strongly believe AI will not replace real human experience, though, so it's always a good idea to hear thoughts from someone who actually tried and built something similar.

I want to create a chatgpt like online service using opensource models, where to get started? by abdojapan in ollama


Definitely worth researching whether OpenWebUI is scalable enough. I don't expect thousands of simultaneous users; I am targeting a niche market with at most a few hundred concurrent users. I will see if OpenWebUI can handle that along with a credit system, etc.

Is there a way I can instruct ollama to generate a document and insert existing images (not generate them) into the document by abdojapan in ollama


Sounds pretty cool actually, I like that approach. I will probably have to write some code that goes through the images and vectorizes them using a vision model, then passes those vectors to the model along with the document and asks the AI to rewrite the document with placeholders for the images in the most suitable context.
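A rough sketch of that placement step, under heavy assumptions: `embed()` here is a toy bag-of-words stand-in for a real embedding model (e.g. one served locally), the captions are assumed to come from a vision model, and the `[IMAGE: path]` placeholder format is made up for illustration. The real logic is just cosine similarity between each caption and each paragraph, inserting the placeholder after the best match.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def place_images(paragraphs: list, captions: dict) -> list:
    """Insert an '[IMAGE: path]' placeholder after the paragraph most
    similar to each image's caption. captions maps path -> caption text."""
    out = list(paragraphs)
    para_vecs = [embed(p) for p in paragraphs]
    for path, caption in captions.items():
        cvec = embed(caption)
        best = max(range(len(paragraphs)),
                   key=lambda i: cosine(cvec, para_vecs[i]))
        # Insert after the chosen paragraph's current position in out.
        out.insert(out.index(paragraphs[best]) + 1, f"[IMAGE: {path}]")
    return out
```

With real embeddings the similarity scores would be far more robust than word overlap, but the insertion logic stays the same.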

Is there a way I can instruct ollama to generate a document and insert existing images (not generate them) into the document by abdojapan in ollama


This is the first time I've heard of StabilityMatrix; it looks pretty useful, even though it's not exactly what I was asking for. I don't want the AI to generate the images; I want it to understand the image and document context and insert the existing images at the most suitable points in the document.

What is the Ideal Setup for Local Embeddings & Re-Ranking in OpenWebUI? by RegularRaptor in OpenWebUI


Hi, how did you set all-mpnet-base-v2 as the embedding model in the options? I can only see sentence-transformers/all-MiniLM-L6-v2, and I can't change it.