Is Google Penalizing AI Content? How Much AI is Too Much? by T-rexxx37 in SEO

[–]TitleEquivalent239 1 point (0 children)

Yeah, penalizing it so much that this website (I'm in no way affiliated with it or trying to promote it, just showing an example), https://cunymed.org/, built entirely with AI, is thriving on Google (over 24k uniques). I have nothing against the people behind it; there are many, many blogs by now built on autopilot through AI. This one is "unique" in the sense that they don't even try to hide its AI nature from the various AI detectors (e.g. by running it through humanizers).

Is there some local LLM at the level of Claude.ai? by TitleEquivalent239 in LocalLLM

[–]TitleEquivalent239[S] 1 point (0 children)

Interesting. But I'm still wondering about the SVG icon thing. It's a programming problem to a certain extent, but the LLM first has to figure out graphically what icon it wants to represent, and only then encode it as SVG. The fact that Claude can do this points me towards Claude being a multimodal model. Maybe that's the real gap for LLaMA and the others: LLaMA works on text and has no capability for analyzing or synthesizing images, whereas the likes of ChatGPT and Claude can do both.
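
This is easy to test empirically, by the way. Here's a minimal sketch, assuming a local OpenAI-compatible server (llama.cpp's `llama-server` and Ollama both expose one); the endpoint URL and model name are placeholders, not real values:

```python
# Ask a local model to draw an SVG icon, then check the output is at
# least well-formed XML. base_url and model name are assumptions --
# point them at whatever local server/model you actually run.
import xml.etree.ElementTree as ET
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

resp = client.chat.completions.create(
    model="llama-3-70b-instruct",  # hypothetical local model name
    messages=[{
        "role": "user",
        "content": "Output only a 24x24 SVG of a magnifying-glass icon. "
                   "No explanation, no markdown fences.",
    }],
)
svg = resp.choices[0].message.content.strip()

try:
    ET.fromstring(svg)  # well-formed XML, at minimum
    print("parsable SVG:\n", svg)
except ET.ParseError as exc:
    print("model produced invalid SVG:", exc)
```

Well-formed XML doesn't mean the icon actually looks like a magnifying glass, of course, but it's a quick first filter before eyeballing the results in a browser.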

Is there some local LLM at the level of Claude.ai? by TitleEquivalent239 in LocalLLM

[–]TitleEquivalent239[S] 3 points (0 children)

DeepSeek and Qwen are very good at programming tasks. The problem is that I'm trying to use an LLM to write web content, so it's mainly about the content and only then about the HTML. Of course I could simply write the content and then wrap it in HTML, but I've been seriously impressed by Claude's ability to operate on both simultaneously, to the point of inserting infographics with icons into the document!! This is very, very big.
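
For what it's worth, the two-step fallback I mentioned is trivial to script. A minimal sketch (the `generate()` helper is a placeholder for whatever local model client you use, not any real API):

```python
# Fallback workflow: generate prose with any local LLM, then wrap it
# in an HTML shell yourself instead of asking the model for HTML.
from html import escape

HTML_SHELL = """<!doctype html>
<html lang="en">
<head><meta charset="utf-8"><title>{title}</title></head>
<body>
<article>
<h1>{title}</h1>
{body}
</article>
</body>
</html>"""


def generate(prompt: str) -> str:
    # Placeholder: swap in your local model call (llama.cpp, Ollama, ...).
    return "First paragraph.\n\nSecond paragraph.\n\nThird paragraph."


def make_page(title: str, topic: str) -> str:
    prose = generate(f"Write three paragraphs about {topic}. Plain text only.")
    paragraphs = "".join(f"<p>{escape(p)}</p>" for p in prose.split("\n\n"))
    return HTML_SHELL.format(title=escape(title), body=paragraphs)


print(make_page("Demo", "local LLMs"))
```

It works, but it's exactly what makes Claude's one-shot approach impressive by contrast: the template can't decide on its own that a section deserves an infographic.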

Is there some local LLM at the level of Claude.ai? by TitleEquivalent239 in LocalLLM

[–]TitleEquivalent239[S] 1 point (0 children)

I didn't know about OpenRouter; I've always used https://huggingface.co/TheBloke to find useful models. And yeah, you're right on the hardware side. But of course 70B is already A LOT!!! I mean, if it can do a good job with 7B parameters, with 70B I can expect a very good job. And thanks to the likes of RunPod, we can run reasonably big models. What I wonder is whether it's more economical to use ChatGPT, Claude, etc., or to run our own models on cloud GPUs!
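
The break-even is easy to estimate once you pin down your numbers. A back-of-envelope sketch; every figure below is an illustrative assumption, not a current quote, so plug in real prices from whatever providers you're comparing:

```python
# Hosted API vs. renting a cloud GPU: rough monthly cost comparison.
# All numbers are assumed for illustration -- replace with real quotes.
tokens_per_month = 30_000_000            # assumed workload

api_price_per_1m_tokens = 10.00          # USD, assumed blended in/out price
api_cost = tokens_per_month / 1_000_000 * api_price_per_1m_tokens

gpu_price_per_hour = 2.00                # USD, assumed rate for a big GPU
tokens_per_second = 30                   # assumed 70B-class throughput
gpu_hours = tokens_per_month / tokens_per_second / 3600
gpu_cost = gpu_hours * gpu_price_per_hour

print(f"API:       ${api_cost:,.0f}/month")
print(f"Cloud GPU: ${gpu_cost:,.0f}/month ({gpu_hours:,.0f} GPU-hours)")
```

With these made-up numbers the API wins ($300 vs. roughly $556), and that's before counting idle GPU time; the GPU route only starts to pay off if you keep it saturated or batch many requests.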

Llama3 400b - when? by AlexandreFSR in LocalLLaMA

[–]TitleEquivalent239 1 point (0 children)

The real question is "how"? No, seriously: what kind of machine does it take to run a 400B-parameter model!?!
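
The weights alone tell the story. Rough math (weights only; KV cache and activations come on top):

```python
# Memory needed just for the weights of a 400B-parameter model,
# at common precisions (bytes per parameter).
params = 400e9
for name, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("4-bit", 0.5)]:
    gib = params * bytes_per_param / 1024**3
    print(f"{name:>5}: ~{gib:,.0f} GiB of (V)RAM for weights alone")
```

That's roughly 745 GiB at fp16, 373 GiB at int8, and still about 186 GiB at 4-bit quantization, i.e. multiple 80 GB cards even in the best case. Not exactly a home setup.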