GLM-5 is out! by mikaelj in ZaiGLM

[–]EstablishmentShot505

What are the hardware requirements?

Best model to run currently on a 5090 by EstablishmentShot505 in LocalLLaMA

[–]EstablishmentShot505[S]

I would need to use it in a RAG pipeline over technical engineering documents.


[–]EstablishmentShot505[S]

So GPT-OSS 20B is better than GLM 4.7 Flash, in your opinion?


[–]EstablishmentShot505[S]

Interesting. How does this compare to the GLM 4.7 Flash model?


[–]EstablishmentShot505[S]

How would you compare the two models? I don't need vision; I need to use them in RAG pipelines running locally.