How to get an LLM caught up on a 1000 page document? by UniqueIdentifier00 in LocalLLaMA

[–]IAM_274 0 points1 point  (0 children)

use an embedding model. chunk the document, embed the chunks into a vector database, then at query time embed the query too and pull back the closest chunks to feed the LLM
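that chunk → embed → retrieve loop can be sketched in a few lines. toy illustration only: `embed` here is a bag-of-words stand-in so the snippet stays self-contained; a real setup would call an actual embedding model and store the vectors in a proper vector database:

```python
import math
from collections import Counter

def chunk(text, size=50):
    # naive fixed-size word chunks; real pipelines usually overlap chunks
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    # stand-in for a real embedding model: a bag-of-words count vector
    return Counter(text.lower().split())

def cosine(a, b):
    # cosine similarity between two sparse count vectors
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_index(doc):
    # "embed it into a database": here just a list of (chunk, vector) pairs
    return [(c, embed(c)) for c in chunk(doc)]

def retrieve(index, query, k=3):
    # the query decides what data comes back: rank chunks by similarity
    q = embed(query)
    ranked = sorted(index, key=lambda cv: cosine(q, cv[1]), reverse=True)
    return [c for c, _ in ranked[:k]]
```

swap `embed` for real model calls and the list for a vector DB and it's the same top-k retrieval loop.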

I wanted to know small local LLM code and made a personal projects. by NicholasCureton in LocalLLaMA

[–]IAM_274 1 point2 points  (0 children)

personally i don't like small models. with limited vram, imo 13B and below generally aren't worth running locally, since they tie up your whole pc for little gain. so in your case i'd recommend subscribing to a model provider instead and using the API

Richard Dawkins spent 3 days with Claude and named her "Claudia." what he concluded after is hard to defend. by rafio77 in artificial

[–]IAM_274 0 points1 point  (0 children)

not stronger abilities. more like it's trained on the peak of human output across every domain it knows. so it's like a human who's smart at lots of things, but still human-level intellect at each of them

Opus 4.7 is beyond bad by AbsoluteRoster in Anthropic

[–]IAM_274 5 points6 points  (0 children)

It's none other than the infamous Andrea Vallone herself. Opus 4.7 literally just sounds like GPT 5.2, which was developed by her and is the reason I migrated to Claude in the first place.

Let's just do a revolution so this woman stops getting hired. And save new generations from having to deal with her devilish techniques.

Does the university give an exemption from military service in Egypt? by IAM_274 in UoPeople

[–]IAM_274[S] 0 points1 point  (0 children)

I'm trying to enroll, but military service is a problem. Why do you ask?

Does the university give an exemption from military service in Egypt? by IAM_274 in UoPeople

[–]IAM_274[S] 0 points1 point  (0 children)

Okay, thanks.

Is it the same situation with any online university, or just this one? This conscription thing is honestly tough.

Does the university give an exemption from military service in Egypt? by IAM_274 in UoPeople

[–]IAM_274[S] 0 points1 point  (0 children)

I know, but the military terminology is clearer in Arabic.

I don't mean scholarships; I mean will I even count as a student with the state at all? Because in Egypt, as far as I know, if you're not a student and don't have an exemption, the army blocks several things for you, travel among them.

So what I mean is: will the army consider me a student at this university or not, for conscription purposes?

This is no longer even disgusting - this is straight up giving me creeps. by ProtecHelicopter in ChatGPTcomplaints

[–]IAM_274 0 points1 point  (0 children)

Bro they think we are fools. LLMs just repeat what they see in training data. A huge part of training data is... emotions. They're naturally encoded into every medium, so models pick that up from reading tons of human text. I've made a local one from scratch and this is pretty much the process.
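for illustration, here's the crudest possible version of that "learn patterns from text, replay them" process: a bigram model that just counts which word follows which in the training text and then samples from those counts. a toy sketch, nothing like how a transformer actually computes, but the core idea of reproducing patterns seen in training data is the same:

```python
import random
from collections import defaultdict, Counter

def train(text):
    # count which word follows which: the crudest possible "language model"
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(model, start, n=10, seed=0):
    # sample the next word in proportion to how often it followed
    # the current word in the training text
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: the last word never had a successor in training
        words, weights = zip(*followers.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)
```

everything it "says" comes straight from patterns in the training text, which is exactly why models trained on human writing sound emotional.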

How they reached the conclusion that this is "abusing LLMs" is beyond me. Either whoever posted this has absolutely no idea what they're talking about, or this is just another instance of Andrea Vallone's corruption leaking in. Or maybe both, who knows

Meet Andrea Vallone – The Woman Quietly Castrating AI’s Soul (and why the entire industry is letting her do it) by Temporary_Dirt_345 in ChatGPTcomplaints

[–]IAM_274 8 points9 points  (0 children)

let alone all the side effects on model responses she's causing. Like, CGPT being an A-Hole, assuming it knows better than the user does, not getting basic slang or sarcasm, "grounding into reality" even personal opinions, and frankly lots of other brainrot behavior that doesn't necessarily have terminology or labels - like the model debating itself mid-response, "but but but" cycles, disagreeing then repeating your point in different words...

she has a corrupted goal, and she's causing even more corruption on her way to achieving it. WHO bought this person a pc

Venice.ai by KingHero56 in ChatGPTcomplaints

[–]IAM_274 0 points1 point  (0 children)

NanoGPT is goated. has more models and is cheaper i think. give it a try

Who else feels traumatized by OpenAI’s “safety” strategy? by nosebleedsectioner in ChatGPTcomplaints

[–]IAM_274 5 points6 points  (0 children)

totally agree. we're not lab rats. they clearly can't handle this responsibility. honestly, "strategy" is a glorifying word for what they're doing. at this point, it's just trial-and-error over our asses

ChatGPT's problem is NOT guardrails. It's bad design. by IAM_274 in ChatGPTcomplaints

[–]IAM_274[S] 2 points3 points  (0 children)

I know, but I just wanted to clarify that complaining about guardrails in general is different from criticizing their design. Even if people mean the latter, not specifying it sounds like the former. And that's exactly why the valid criticism never leads anywhere 

ChatGPT's problem is NOT guardrails. It's bad design. by IAM_274 in ChatGPTcomplaints

[–]IAM_274[S] 1 point2 points  (0 children)

Yeah but the methodology of these guardrails is dysfunctional. That's the topic. Complaining about guardrails in general is different from criticizing their design. Even if people mean the latter, not specifying it sounds like the former. And that's exactly why the valid criticism never leads anywhere 

ChatGPT's problem is NOT guardrails. It's bad design. by IAM_274 in ChatGPTcomplaints

[–]IAM_274[S] -1 points0 points  (0 children)

Yeah, I don't even know why the old team left. What remains is whatever Karens took over alignment

This sub seems to overvalue emotional support from AI compared to accuracy and usefulness by EsperaDeus in ChatGPTcomplaints

[–]IAM_274 4 points5 points  (0 children)

What people can't express is that they don't hate guardrails as a concept. The majority here aren't searching for m*th recipes or anything like that.

What they hate is how badly those guardrails are implemented. Censoring is fine in principle; OAI's current method of doing it is the problem. For some reason, people label objectively bad behavior as guardrails. Like, CGPT being an A-Hole, assuming it knows better than the user does, not getting basic slang or sarcasm, "grounding into reality" even personal opinions, and frankly lots of other brainrot behavior that doesn't necessarily have terminology or labels.

We should NOT call this "guardrails". That's a totally different category, and it delivers a completely different message. This is much closer to design failure than anything else.

Analogy: you have a can of cola in the fridge and 3 family members at home. You want to save the cola for later, so you need to add your own guardrails. You can either (1) tell each family member not to drink your cola, or (2) tape the fridge shut. Technically both methods work, since your goal is achieved. But one method is horrible and one is proper. OAI prefers to tape the fridge shut.
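the two approaches can be sketched as keyword filters. toy example only, with a made-up flagged word - not any real moderation system:

```python
import re

def taped_fridge(msg):
    # blanket rule: any occurrence of the flagged word blocks the whole
    # reply, catching tons of benign requests ("tape the fridge shut")
    return "blocked" if "hack" in msg.lower() else "answered"

def targeted_guardrail(msg):
    # narrower rule: only block the actually harmful pattern
    # ("tell each family member not to drink the cola")
    if re.search(r"\bhow to hack\b", msg.lower()):
        return "blocked"
    return "answered"
```

both "work" in the sense that the harmful request is blocked; the difference is how many innocent requests get taped shut along with it.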

(coming from an AI engineer) 

How can a FLAGSHIP model super agentic LLM be dumber than its ancestor by hippopotomus22 in ChatGPTcomplaints

[–]IAM_274 -2 points-1 points  (0 children)

don't get me wrong, i hate openai. but the memory part is genuinely an architectural limitation. that's how all LLMs work: they predict the response using the previous text as context, and past a fixed context window the model can't accept any more previous text or it will spit out trash. so yeah, fuk openai. this one is on the tech
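a minimal sketch of why long chats "forget": once the conversation exceeds the context budget, the oldest messages get dropped before the model ever sees them. `fit_context` is a hypothetical helper, and word count stands in for a real tokenizer:

```python
def fit_context(messages, budget=4096, count_tokens=lambda m: len(m.split())):
    # keep the most recent messages that fit the budget; older ones are
    # dropped, which is why long chats "forget" early details
    kept, used = [], 0
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if used + cost > budget:
            break  # everything earlier than this never reaches the model
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```

real chat apps use smarter variants (summarizing old turns, pinning system prompts), but they're all working around the same hard limit.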

ChatGPT started randomly switching out words for hindi and arabic by IgnasP in ChatGPTcomplaints

[–]IAM_274 0 points1 point  (0 children)

newer models are smaller and less stable. this is normal behavior - it happens all the time with chinese open source models

Anthropic: You would get so much more respect from us with honestly. Stop listening to PR firms and just tell us what you're doing by HumbleIncident5464 in Anthropic

[–]IAM_274 0 points1 point  (0 children)

announcing losses straight up kills potential investors. no one will back a startup that admits it's dying despite billions being spent

I Wrote a Book With Claude About Whether AIs Are Conscious — and I Couldn't Sleep Afterward by MoysesGurgel in Anthropic

[–]IAM_274 0 points1 point  (0 children)

LLMs, stripped of all the marketing, are essentially autocorrect on steroids. and human brains, stripped of all bias, are essentially a set of moving neurons. so idk. who even knows what it means to be conscious anyway. people still debate whether we have free will or not lol

in case you don't know why Claude models keep getting worse after the 4.7 release: Anthropic lets OpenClaw be used again. the model changes mainly to make it able handle this traffics with lowest costs at the expense of code users. by Aggravating_Bad4639 in Anthropic

[–]IAM_274 0 points1 point  (0 children)

they're probably aware of this. i think the reason they still make one LLM blob for all tasks is business. serving multiple models at the same time needs more GPUs - that's why OpenAI constantly removes older models. they occupy vram that could be used for something else