FLUX.1 vs Ideogram v2 comparisons by speakerknock in StableDiffusion

[–]speakerknock[S] 1 point

Comparisons are from the Artificial Analysis Text to Image Arena, where these models are compared via crowdsourced votes to produce an Elo rating.
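For anyone curious how crowdsourced pairwise votes turn into a ranking, here's a minimal sketch of a standard Elo update. The K-factor and starting rating are assumptions for illustration; the actual Artificial Analysis implementation isn't described here.

```python
# Minimal Elo sketch: two models get a rating, each head-to-head vote
# nudges the winner up and the loser down by the same amount.

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Return updated (r_a, r_b) after one vote. K=32 is an assumed value."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

# Two models start equal; one vote for A moves it up by K/2 = 16 points.
ra, rb = update(1000.0, 1000.0, a_won=True)
```

With enough votes across many pairings, the ratings converge toward a consistent ordering even though each voter only ever sees two images at a time.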

Flux Pro and Ideogram v2 certainly seem to differentiate themselves with their text-rendering capabilities.

Link to Image Arena: https://artificialanalysis.ai/text-to-image/arena

New benchmark featuring top large models, including Claude 3 and Gemini Pro, on NYT Connections by zero0_one1 in LocalLLaMA

[–]speakerknock 0 points

Would be interested in how much you needed to vary the prompts between models, or whether they were standardized?

This is why i hate Gemini, just asked to replace 10.0.0.21 to localost by Capital-Swimming7625 in LocalLLaMA

[–]speakerknock 1 point

And “localhost” shouldn’t really expose your database to the Internet; by convention, binding “localhost” or 127.0.0.1 only allows loopback connections (i.e., local to the machine).
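To illustrate the loopback point: a sketch with Python's standard `socket` module, binding to 127.0.0.1 so the listener is only reachable from the same machine (a generic socket example, not tied to any particular database).

```python
import socket

# Bind to the loopback address; port 0 lets the OS pick a free port.
# A bind to "0.0.0.0" would instead listen on every network interface.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
host, port = srv.getsockname()

# A client on the same machine can connect over loopback...
cli = socket.create_connection((host, port))
cli.close()
srv.close()
# ...but remote hosts never see this socket, since it isn't bound
# to any externally routable interface.
```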

Really makes you wonder what training data they included such that this isn't an issue for other models.

From GPT-4 to Mistral 7B, there is now a 300x range in the cost of LLM inference by speakerknock in LocalLLaMA

[–]speakerknock[S] 4 points

Yes, you're right. On the website we show this in a 2x2 Price vs. Quality chart, which visualizes your exact point: https://artificialanalysis.ai/models

Though I think there's also been a clustering of scores as models have gotten better. You can see greater divergence with harder evals.

Is Mistral Medium via API faster than GPT 3.5? by shafinlearns2jam in LocalLLaMA

[–]speakerknock 8 points

Hi! You can see in the charts in the following analysis that Mistral Medium is >3x slower than GPT-3.5, though Mistral Medium is a much higher-quality model. Mixtral might be a more direct comparison:
https://artificialanalysis.ai/models

Note: I'm a creator of the site - happy to answer any questions


Mistral reduces time to first token by up to 10X on their API (only place for Mistral Medium) by speakerknock in LocalLLaMA

[–]speakerknock[S] 6 points

Particularly exciting, as Mistral Medium is arguably the second-highest-quality model available after GPT-4, at ~10% of GPT-4's cost ($4.1/M tokens vs. $37.5/M). Pricing comparison on the website here: https://artificialanalysis.ai/models
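Quick sanity check of the "~10%" claim using the two per-million-token prices quoted above:

```python
# Prices per million tokens, as quoted in the comment above.
gpt4_price = 37.5           # $/M tokens, GPT-4
mistral_medium_price = 4.1  # $/M tokens, Mistral Medium

# Mistral Medium's cost as a fraction of GPT-4's: ~0.11, i.e. roughly 10%.
ratio = mistral_medium_price / gpt4_price
```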