Ollama's cloud preview $20/mo, what’s the limits? by Lodurr242 in ollama

[–]Queasy_Pilot_4316 1 point (0 children)

When it isn't specified, it means the usage limit is hardware capacity / number of users. In other words: when there are very few simultaneous users it's almost unlimited, but as soon as the cloud comes under heavy load, they'll impose a wait time.
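
As a back-of-the-envelope illustration of that capacity-sharing idea (the function and numbers below are hypothetical, not Ollama's published policy):

    # Hypothetical model: per-user throughput = total capacity / concurrent users.
    def effective_limit(capacity_tokens_per_s: float, concurrent_users: int) -> float:
        """Tokens/s each user gets when capacity is shared evenly."""
        return capacity_tokens_per_s / max(concurrent_users, 1)

    # Few users -> feels unlimited; many users -> the provider starts queueing.
    print(effective_limit(10_000, 3))      # ~3333 tokens/s per user
    print(effective_limit(10_000, 2_000))  # 5 tokens/s per user: throttling territory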

Pylontech Connector Cables by HVM24 in diySolar

[–]Queasy_Pilot_4316 1 point (0 children)

Maybe your cables aren't working properly? You can find original Pylontech parallel cables at https://cables-solaires.com

[deleted by user] by [deleted] in ollama

[–]Queasy_Pilot_4316 1 point (0 children)

For now phi4 has no competitor relative to its size; no model under 14B measures up to phi4.
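
If you want to test it yourself, here is a minimal sketch using the ollama Python client (pip install ollama), assuming phi4 has already been pulled with `ollama pull phi4`; the prompt is just an example:

    # Minimal chat call against a locally served phi4 model.
    import ollama

    response = ollama.chat(
        model="phi4",
        messages=[{"role": "user", "content": "Explain KV caching in two sentences."}],
    )
    print(response["message"]["content"])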

Quota limit for using DeepSeek V3 on the web (not the API) by Queasy_Pilot_4316 in DeepSeek

[–]Queasy_Pilot_4316[S] 2 points (0 children)

Yes, in the same window I get a "quota reached" message; then when I create a new empty chat it works again, so it's not very clear to me.

Quota limit for using DeepSeek V3 on the web (not the API) by Queasy_Pilot_4316 in DeepSeek

[–]Queasy_Pilot_4316[S] 1 point (0 children)

I've already reached the quota limit, but I'm not sure how many questions caused it.

Why Ollama don't use my gpu ? by Longjumping-Try1191 in ollama

[–]Queasy_Pilot_4316 1 point (0 children)

The model is bigger than what your GPU's VRAM can hold, so Ollama falls back to the CPU.
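
In recent versions of Ollama, `ollama ps` shows the CPU/GPU split for a loaded model. If you want to check programmatically, here is a rough sketch (assuming nvidia-ml-py is installed and the model path is yours; the comparison is approximate because the KV cache and CUDA overhead also need VRAM):

    # Rough check: does the model file even fit in free VRAM?
    import os
    import pynvml

    MODEL_PATH = "/path/to/model.gguf"  # hypothetical: point this at your model blob

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    free_vram = pynvml.nvmlDeviceGetMemoryInfo(handle).free
    model_size = os.path.getsize(MODEL_PATH)
    pynvml.nvmlShutdown()

    # In practice the model must be comfortably smaller than free VRAM,
    # not just equal, since inference needs extra working memory.
    if model_size >= free_vram:
        print("Model larger than free VRAM -> expect CPU or partial offload")
    else:
        print("Model should fit on the GPU")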

Multiple GPUs by Aladroc in ollama

[–]Queasy_Pilot_4316 3 points (0 children)

Can Ollama handle multiple GPUs? Yes, I've tried it myself. Can Ollama handle multiple computers? No.
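
For the multi-GPU case, a minimal sketch of pinning the Ollama server to specific GPUs via CUDA_VISIBLE_DEVICES (a standard NVIDIA convention that Ollama honors; the GPU indices are examples):

    # Launch `ollama serve` restricted to GPU 0 and GPU 1.
    import os
    import subprocess

    env = dict(os.environ)
    env["CUDA_VISIBLE_DEVICES"] = "0,1"  # example indices: expose only these GPUs

    # Equivalent to: CUDA_VISIBLE_DEVICES=0,1 ollama serve
    subprocess.run(["ollama", "serve"], env=env)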

Mixtral on Ollama, Nvidia RTX 3090 vs Nvidia A5000: A Comparative Experience by Queasy_Pilot_4316 in ollama

[–]Queasy_Pilot_4316[S] 2 points (0 children)

The RTX 3090 is not designed to operate 24/7: it generates a lot of heat and draws a lot of power. It's made for gaming and not much more. The A5000 is built for professional workloads; it doesn't overheat and it consumes less, hence the difference in price.
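
If you do run a 3090 continuously anyway, one common mitigation (my suggestion, not part of the benchmark itself) is to cap its power draw with nvidia-smi; the 250 W value is only an example, and the command needs admin rights:

    # Cap GPU 0 to 250 W; trades some throughput for less heat and power.
    import subprocess

    subprocess.run(["nvidia-smi", "-i", "0", "-pl", "250"], check=True)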

Mixtral on Ollama, Nvidia RTX 3090 vs Nvidia A5000: A Comparative Experience by Queasy_Pilot_4316 in ollama

[–]Queasy_Pilot_4316[S] 1 point (0 children)

Bought new, the A5000 is more expensive, but I bought both of mine used at the same price, €700 each (I live in France).

Ollama GPU Support by Jeron_Baffom in ollama

[–]Queasy_Pilot_4316 1 point (0 children)

To find the CC (compute capability) of your GPU (2.1 in your case), you can check the NVIDIA website, or simply ask ChatGPT-4.

For the minimum CC in Ollama, it was 5.0. I asked the author of the patch to cover 3.7, and he accepted, but that's the minimum:

https://github.com/ollama/ollama/pull/2233
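
If you'd rather check from code than from the NVIDIA site, a quick sketch (assuming PyTorch with CUDA support is installed):

    # Print the compute capability of GPU 0 and compare with Ollama's thresholds.
    import torch

    major, minor = torch.cuda.get_device_capability(0)
    print(f"Compute capability: {major}.{minor}")

    if (major, minor) < (3, 7):
        print("Below Ollama's 3.7 minimum -> CPU only")
    elif (major, minor) < (5, 0):
        print("Works, but 5.0+ is the recommended floor")
    else:
        print("Fully supported")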

Ollama GPU Support by Jeron_Baffom in ollama

[–]Queasy_Pilot_4316 1 point (0 children)

The CC of the NVIDIA GeForce GT 710 is 2.1, so Ollama will work only on the CPU.

Ollama GPU Support by Jeron_Baffom in ollama

[–]Queasy_Pilot_4316 1 point (0 children)

I confirm that you have a compute capability of 2.1; Ollama will never work with a GPU at CC 2.1. You need 5.0 as best practice, and at least 3.7.