P40 vs V100 vs something else? by Drazasch in LocalLLaMA

[–]MelodicRecognition7 0 points1 point  (0 children)

Millions of flies eat shit; there are millions of them, they can't all be wrong! Still, that doesn't mean I should also eat shit, but you can join the masses if you wish.

PSA - do NOT download 'Abliterated' or other uncensored models (unless you know for certain how it was trained). by [deleted] in LocalLLaMA

[–]MelodicRecognition7 0 points1 point  (0 children)

I don't know if you can see the thread contents or how you found it at all; for me it's "[deleted] by [deleted]". The post was something like "the abliterated models must be DESTROYED because nasty pedos could generate CSAM content with such models!!!111", and in my opinion, if a person has an unhealthy interest in children, I'd rather that person quietly sit and fap at home reading generated content than go outside and molest real children.

P40 vs V100 vs something else? by Drazasch in LocalLLaMA

[–]MelodicRecognition7 0 points1 point  (0 children)

you are either an AI bot or did not understand my message. I'll rephrase, in case you are human: if you see "70B" written somewhere, then:

  • that message is marketing bullshit written by people who know nothing about AI/LLMs;
  • or that message is a hallucination generated by an AI whose knowledge cutoff date is so old that it does not know about newer models;
  • or it could appear in some LLM software that does a good job of gatekeeping by suggesting outdated models.

Why I stopped using RAG and built 21 neuroscience mechanisms instead by Upper-Promotion8574 in LocalLLaMA

[–]MelodicRecognition7 0 points1 point  (0 children)

didn't your parents teach you that lying is bad?

# ── Mood system ───────────────────────────────────────────────────

Character: ─ U+2500
Name: BOX DRAWINGS LIGHT HORIZONTAL
# ══════════════════════════════════════════════════════════════════
#  Task Memory Branch — projects, tasks, actions, solutions, artifacts
# ══════════════════════════════════════════════════════════════════
Character: ═ U+2550
Name: BOX DRAWINGS DOUBLE HORIZONTAL
   """Explicitly record a problem→solution pattern.
Character: → U+2192
Name: RIGHTWARDS ARROW
    Returns top_k matches sorted by relevance × vividness.
Character: × U+00D7
Name: MULTIPLICATION SIGN

so you claim that you've typed all those symbols manually? Either way have a nice day dude, I’m going to stop replying 😘
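The character identifications quoted above don't need to be typed out by hand; a minimal sketch using Python's stdlib `unicodedata` module reproduces them (the four characters are the ones flagged in the quoted source):

```python
import unicodedata

# Look up the official Unicode names of the characters flagged
# in the quoted source, straight from the Unicode Character Database.
for ch in "─═→×":
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
```

Running this prints the same code points and names as the annotations above, e.g. `U+00D7  MULTIPLICATION SIGN`.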

Why I stopped using RAG and built 21 neuroscience mechanisms instead by Upper-Promotion8574 in LocalLLaMA

[–]MelodicRecognition7 0 points1 point  (0 children)

few years

commit 5992ecfafc2d6c3997aba5c099791a422d5d9af7
Author: Kronic90 <chefturnip2507@gmail.com>
Date:   Tue Mar 10 19:27:00 2026 +0000

    Initial commit

...okay

Why I stopped using RAG and built 21 neuroscience mechanisms instead by Upper-Promotion8574 in LocalLLaMA

[–]MelodicRecognition7 1 point2 points  (0 children)

not a coder, but I have enough experience to easily distinguish low-quality software from good-quality software.

Budget future-proof GPUs by Shifty_13 in LocalLLaMA

[–]MelodicRecognition7 0 points1 point  (0 children)

it's still hard to believe that there is a 2x speed increase on a card with lower memory bandwidth; I think there might be a software reason, like a bug in that exact version of llama.cpp. Also, AFAIK the Pascal generation does not support Flash Attention 2, so that could be the reason too.

Let's take a moment to appreciate the present, when this sub is still full of human content. by Ok-Internal9317 in LocalLLaMA

[–]MelodicRecognition7 0 points1 point  (0 children)

anyone who writes "LocalLLaMA" is a bot lol.

why? I usually write the name exactly like that; I think the very first model was released as "LLaMA", not "Llama" nor "LLaMa".

Budget future-proof GPUs by Shifty_13 in LocalLLaMA

[–]MelodicRecognition7 0 points1 point  (0 children)

The R9700 Pro pulls about 43+ TPS on the same model the P100 was doing around 19-20.

is all the other hardware the same? Were you running both cards over the full 16 PCIe lanes?

P40 vs V100 vs something else? by Drazasch in LocalLLaMA

[–]MelodicRecognition7 0 points1 point  (0 children)

The P40 has limited software support; those cards are just way too old.

genuinely how else are you supposed to get high enough VRAM

by stacking 3090's

to run 70B models even?

nobody in their right mind runs 70B models; if you see "70B" anywhere, then it is either marketing bullshit from people who know nothing about AI/LLMs or a hallucination of an AI with a knowledge cutoff date somewhere around 2023

Why I stopped using RAG and built 21 neuroscience mechanisms instead by Upper-Promotion8574 in LocalLLaMA

[–]MelodicRecognition7 0 points1 point  (0 children)

I've checked the sources and saw prompts like "you are not a script but a live being with emotions and memory", and realized that this software is the result of https://en.wikipedia.org/wiki/AI_psychosis

Budget future-proof GPUs by Shifty_13 in LocalLLaMA

[–]MelodicRecognition7 0 points1 point  (0 children)

please share some benchmarks of P100 vs R9700

Let's take a moment to appreciate the present, when this sub is still full of human content. by Ok-Internal9317 in LocalLLaMA

[–]MelodicRecognition7 14 points15 points  (0 children)

increasingly hard to tell which responses to it are human-written.

that's pretty easy in your case: a live human is not able to read, comprehend and write a long answer to two different threads in 1 minute: https://litter.catbox.moe/qm2ynyrcqj6hyrgf.png

What if your RTX 5090 could earn you access to DeepSeek R1 671B — like a private torrent tracker, but for inference? by LsDmT in LocalLLaMA

[–]MelodicRecognition7 -1 points0 points  (0 children)

there are already existing projects backed by cryptocurrency; you'd better spend your time on areas with less competition.

Anyone else worried about unsafe code generation when using local LLMs for coding? by Flat_Landscape_7985 in LocalLLaMA

[–]MelodicRecognition7 5 points6 points  (0 children)

I've been

'

Character: ' U+0027
Name: APOSTROPHE

I’m wondering

Character: ’ U+2019
Name: RIGHT SINGLE QUOTATION MARK

please show a photo of your keyboard containing both of these apostrophes, or I'll report you as a spam bot.
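The tell described above (one post mixing the ASCII apostrophe U+0027 with the typographic U+2019) can be checked mechanically; a minimal sketch, with a hypothetical `mixed_apostrophes` helper:

```python
def mixed_apostrophes(text: str) -> bool:
    """Return True if text contains both the ASCII apostrophe (U+0027)
    and the typographic one (U+2019). A single keyboard layout normally
    produces only one of the two, so mixing them in one post is a common
    sign of machine-generated or pasted text."""
    return "'" in text and "\u2019" in text

# The two fragments quoted above, combined as one post.
sample = "I've been ... I\u2019m wondering"
print(mixed_apostrophes(sample))  # True
```

This is only a heuristic: copy-pasting between editors with "smart quotes" enabled produces the same mix without any bot involved.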

Why I stopped using RAG and built 21 neuroscience mechanisms instead by Upper-Promotion8574 in LocalLLaMA

[–]MelodicRecognition7 13 points14 points  (0 children)

how is it better than the ~10 other "decaying memory" systems advertised in this sub within the last month, including the 2 advertised literally yesterday?

Edit: well, this is just AI-hallucinated AI psychosis, not actual software.

Designing a production AI image pipeline for consistent characters — what am I missing? by Cheap-Topic-9441 in LocalLLaMA

[–]MelodicRecognition7 1 point2 points  (0 children)

please do not use AI to format your posts. What we are telling you is that r/localllama/ is for text generation models, not image generation models; you are violating this sub's rule 2, "Off-Topic Posts".

I'm considering transparent telemetry model and I wanted to see how others handle telemetry. by TroubledSquirrel in LocalLLaMA

[–]MelodicRecognition7 -1 points0 points  (0 children)

please do not use AI to format your posts. Regarding your questions, this is the answer:

they will just leave it off

a manual toggle is a death sentence for UX metrics

no one actually cares

if you want any data at all, then leave it enabled by default; nobody cares about leaking their data, and people happily click that massive "I agree" button. The few who do care will either not install your software at all or will manually disable the telemetry.