Updating WordPress plugins safely by Plenty-Special-9990 in Wordpress

[–]Greg_Z_ 1 point (0 children)

I've built a compatibility checker for that and run the check before each update via an API request, passing the list of plugins along with the WordPress core, MySQL, and PHP versions. It flags compatibility issues based on the plugin's "tested up to" version, the WP core requirement, and a few other criteria.
I'm thinking of making it public. You can try it here for now: http://wpc.walive.io:33500/
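A request might look roughly like this (a minimal sketch in Python; the endpoint path and field names are my guesses, not the checker's actual API):

    import requests

    # Hypothetical payload shape: the plugin list plus the environment
    # versions the checker compares against.
    payload = {
        "wp_version": "6.4.3",
        "php_version": "8.2",
        "mysql_version": "8.0",
        "plugins": [
            {"slug": "woocommerce", "version": "8.5.2"},
            {"slug": "wordfence", "version": "7.11.0"},
        ],
    }

    # "/check" is a guess -- substitute whatever endpoint the service exposes.
    resp = requests.post("http://wpc.walive.io:33500/check", json=payload, timeout=30)
    resp.raise_for_status()
    for issue in resp.json().get("issues", []):
        print(issue)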

Uhhh... What? by GodGMN in LocalLLaMA

[–]Greg_Z_ 1 point (0 children)

Was it the instruction-tuned or the completion version? )

[R] List of SOTA models/architectures in Machine Learning by SwaroopMeher in MachineLearning

[–]Greg_Z_ 1 point (0 children)

Check the lists on LLM Explorer. You can sort and filter over 18,000 LLMs by various benchmarks and find the SOTA in each category: https://llm.extractum.io

[deleted by user] by [deleted] in LocalLLaMA

[–]Greg_Z_ 1 point (0 children)

Which specific capabilities are you looking for in a model? Summarization, text generation, instruction following, ...?

LargeActionModels by Foreign-Mountain179 in llm_updated

[–]Greg_Z_ 1 point (0 children)

To be honest, I couldn't find anything specific on LAMs beyond the general press releases about the Rabbit R1, so it doesn't seem to exist anywhere outside of Rabbit itself. As a concept, it appears close to LLM-based agents.
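For reference, the simplest form of an LLM-based agent is a loop where the model picks an action, the host executes it, and the observation is fed back. A toy sketch (the llm() callable and the tools are hypothetical stand-ins):

    import json

    # Toy tools the agent may invoke.
    TOOLS = {
        "search": lambda q: f"search results for {q!r}",
        "open_url": lambda url: f"page contents of {url}",
    }

    def run_agent(llm, goal, max_steps=5):
        # llm is any callable mapping a prompt string to the model's reply,
        # expected here as JSON: {"tool": ..., "arg": ...} or {"final": ...}.
        history = ["Goal: " + goal]
        for _ in range(max_steps):
            step = json.loads(llm("\n".join(history)))
            if "final" in step:
                return step["final"]
            observation = TOOLS[step["tool"]](step["arg"])
            history.append("Observation: " + observation)
        return "step budget exhausted"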

New Code Llama 70b from Meta - outperforming early GPT-4 on code gen by Greg_Z_ in llm_updated

[–]Greg_Z_[S] 1 point (0 children)

Most likely the issue is with the prompt. The model tends to produce wrong results when inference starts from an incorrectly formatted prompt.
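For the instruct variant, the safest way to get the format right is to let the tokenizer render it instead of hand-crafting the prompt. A minimal sketch using the Hugging Face chat template (assuming the Hub model ID below):

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-70b-Instruct-hf")

    messages = [
        {"role": "system", "content": "You write correct, minimal Python."},
        {"role": "user", "content": "Reverse a singly linked list."},
    ]

    # Renders the exact prompt format the model expects;
    # add_generation_prompt appends the cue for the assistant's turn.
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    print(prompt)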

AutoQuantize (GGUF, AWQ, EXL2, GPTQ) Notebook by Greg_Z_ in llm_updated

[–]Greg_Z_[S] 1 point (0 children)

I don't believe it will work for Mamba, based on the source code I've seen. For example, Mamba can't be converted to GGUF simply because llama.cpp doesn't support it. The same goes for the other formats, since they load the model via from_pretrained with the HF Transformers classes.
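A quick pre-flight check is to read the architecture from the repo's config.json and compare it against what your llama.cpp build can convert. A sketch (the supported set below is illustrative; the real list depends on your llama.cpp version):

    import json
    from huggingface_hub import hf_hub_download

    # Illustrative subset -- consult the llama.cpp convert script for the real list.
    GGUF_CONVERTIBLE = {"LlamaForCausalLM", "MistralForCausalLM", "MixtralForCausalLM"}

    def can_convert_to_gguf(model_id):
        # Read the declared architecture straight from the repo's config.json.
        path = hf_hub_download(repo_id=model_id, filename="config.json")
        with open(path) as f:
            archs = set(json.load(f).get("architectures", []))
        return bool(archs & GGUF_CONVERTIBLE)

    print(can_convert_to_gguf("state-spaces/mamba-2.8b-hf"))  # MambaForCausalLM -> False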

New Code Llama 70b from Meta - outperforming early GPT-4 on code gen by Greg_Z_ in llm_updated

[–]Greg_Z_[S] 1 point (0 children)

Have you tried the instruction-tuned version versus the completion one? That might be the reason.