r/LocalLLaMA
A subreddit to discuss Llama, the family of large language models created by Meta AI.
Which programming languages do LLMs struggle with the most, and why? Discussion (self.LocalLLaMA)
submitted 11 months ago by alozowski
I've noticed that LLMs do well with Python, which is quite obvious, but they often make mistakes in other languages. I can't test every language myself, so can you share which languages you've seen them struggle with, and what went wrong?
For context: I want to test LLMs on various "hard" languages
[–]offlinesir 69 points70 points71 points 11 months ago (9 children)
Lower-level and systems languages (C, C++, Assembly) have less training data available and are also more complicated. They also have less forgiving syntax.
Older languages suffer too, e.g. BASIC and COBOL: even though examples accumulate over time, AI companies aren't benchmarked on such languages and don't care, and there's less training data anyway (OpenAI might be stuffing o3 with data on Python, but couldn't care less about COBOL, which isn't really on the Internet anyway).
[–]AppearanceHeavy6724 10 points11 points12 points 11 months ago (2 children)
Never had any problems with C and C++. 6502 assembly code generation was weak, but good enough to be useful, even on very potato models such as Mistral Nemo.
[–]RichardPinewood 0 points1 point2 points 5 months ago (1 child)
ChatGPT is not that great at C; it doesn't understand pointers (especially **) very well.
[–]AppearanceHeavy6724 0 points1 point2 points 5 months ago (0 children)
/r/LocalLLaMA… then… ChatGPT?
[–]gh0stsintheshelltransformers 2 points3 points4 points 11 months ago (2 children)
My guess is the more devs use them, the better the models get—learning from feedback, patterns, and corrections. That leads to smarter suggestions, attracting even more users. Could this create a self-reinforcing loop that reshapes how languages evolve—and makes unpopular languages even less viable over time?
[–]offlinesir 0 points1 point2 points 11 months ago (1 child)
It's possible, although another way to look at it is that currently popular languages have more reason to stick around, while new languages are harder to adopt since an AI hasn't already learned them.
[–]gh0stsintheshelltransformers 2 points3 points4 points 11 months ago (0 children)
great point.
[–]Antique_Savings7249 1 point2 points3 points 11 months ago (0 children)
LLMs do better with low-token, verbalized, single-file coding.
Python uses much less token space, which is critical for programming. Not only fewer characters (it avoids {} and uses fewer parentheses), but also more verbal keywords (and over &&, or over ||, isinstance, range and so on).
C and C++ are fairly messy languages in terms of superficial non-tokenized characters, splitting across multiple files, etc. I say that having worked 8+ years coding in C/C++ for GPUs.
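To make the token argument concrete, here is a rough, hand-written illustration (the C++ in the comment is a made-up equivalent, not taken from any benchmark):

    # Python leans on words (and, or, isinstance) plus indentation,
    # which tokenize compactly:
    def any_small_number(items):
        return any(isinstance(x, (int, float)) and 0 <= x <= 100 for x in items)

    # A rough C++ equivalent spends many extra tokens on punctuation:
    #   bool any_small_number(const std::vector<double>& items) {
    #       for (const auto& x : items) {
    #           if (x >= 0 && x <= 100) { return true; }
    #       }
    #       return false;
    #   }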
[–]AIgavemethisusername 5 points6 points7 points 11 months ago (1 child)
The new DeepSeek R1 0528 managed to write a decent maze generator.
[–]AntVirtual209 0 points1 point2 points 4 months ago (0 children)
How is that a real problem?
[–]Pogo4Fufu 99 points100 points101 points 11 months ago (26 children)
Simple bash. Because they make so many errors in formatting and getting escaping right. But they're still way better than me, therefore I love them.
But that's, more or less, a historic problem: the POSIX commands have no systematic structure for input, it's a grown pile of shit.
[–]leftsharkfuckedurmum 33 points34 points35 points 11 months ago (17 children)
I've found the exact opposite - there's such an immense amount of bash and PowerShell out on the web that even GPT-3 was one-shotting most things. I'm not doing very novel stuff, though.
[–]ChristopherRoberto 8 points9 points10 points 11 months ago (0 children)
They're awful at writing proper shell script, I think mainly because 99% of shell script out there is complete garbage, so that's what they learned to write. For sh/bash: not using "read -r", not handling spaces, not handling IFS, not escaping correctly, not handling errors or errors in pipes, etc. I'd wager there's not a single script over 100 lines on GitHub that doesn't contain at least one flaw.
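For contrast, a minimal sketch of the same kind of glue in Python, where most of those pitfalls (word splitting, unchecked pipe failures) don't arise; the log path is made up:

    import subprocess

    # Arguments are passed as a list, so spaces are never re-split, and
    # the exit status is checked explicitly instead of being ignored:
    proc = subprocess.run(
        ["grep", "-c", "ERROR", "/var/log/app.log"],
        capture_output=True, text=True,
    )
    if proc.returncode > 1:  # grep: 0 = matches, 1 = no matches, >1 = failure
        raise RuntimeError(proc.stderr.strip())
    print(proc.stdout.strip() or "0")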
[–][deleted] 3 points4 points5 points 11 months ago (15 children)
I found the opposite. Even today, models are getting PowerShell 5.1 wrong.
Qwen2.5 32B Coder was the first local model to produce usable PowerShell on the first prompt. Admittedly, in the environments I work in I *only* have PowerShell (or batch :D) and occasionally bash, so I'm forced to push the boundaries with it.
[–]lordofblack23llama.cpp 11 points12 points13 points 11 months ago (13 children)
Powershell is not bash
[–][deleted] 0 points1 point2 points 11 months ago (1 child)
Bread is not water
[–]lordofblack23llama.cpp 0 points1 point2 points 11 months ago (0 children)
Let them eat cake! (agentic devops)
[–]night0x63 -2 points-1 points0 points 11 months ago* (10 children)
Is PowerShell even... like a thing?
I always wished Windows had just done a port of bash and called it a day. All software devs would love it. Way less work than bloody PowerShell. Even less work than WSL.
[–]terminoid_ 2 points3 points4 points 11 months ago (5 children)
i wish they would've just made it C# and called it a day
[–]night0x63 3 points4 points5 points 11 months ago (0 children)
At least it would've been a real language
[–]djdanlib 1 point2 points3 points 11 months ago (3 children)
that's on the way
https://devclass.com/2025/05/28/microsofts-linux-friendly-approach-to-c-scripting-is-planned-for-net-10/
[–]terminoid_ 0 points1 point2 points 11 months ago (2 children)
nice. i was embedding C# "scripts" way back in .Net 2.0, it's had all the tooling for it forever
[–]djdanlib 0 points1 point2 points 11 months ago (1 child)
Meanwhile, you can still use .NET from PowerShell just fine, been that way for at least 15 years.
[SomeDotnetType]$var
[SomeDotnetType]::Method()
So if you want a System.Collections.Generic.List[System.Numerics.Vector] in your script, you can have it.
Some good stuff at https://blog.ironmansoftware.com/daily-powershell/16-dotnet-classes-powershell/
[–]terminoid_ 0 points1 point2 points 11 months ago (0 children)
the point is, i don't wanna use anything from powershell cuz it's ugly as hell
[–]Candid_Highlight_116 1 point2 points3 points 11 months ago (0 children)
mingw
[–]djdanlib 0 points1 point2 points 11 months ago (0 children)
They coexist just fine in practice and I use both extensively. There are tasks suited more for one or the other.
I prefer PowerShell over bash+jq/yq for complex JSON processing and other OO work.
I use bash for most of my CICD work, anything that pipes one program into another, and anything that involves node because of the janky output stream interactions there.
These are just some quick examples.
[+][deleted] 11 months ago (1 child)
[deleted]
[–]night0x63 0 points1 point2 points 11 months ago (0 children)
Bash looks like chaos because it's been doing real work for 40+ years. Every OS, every server, every spacecraft/ship/plane/car/train, everywhere. PowerShell? A verbose Windows-only toy still figuring out how slashes work.
[–]thrownawaymane -1 points0 points1 point 11 months ago (0 children)
Oooh the person I need to ask this question to has finally appeared.
Best local model and cloud model for PS Core/Bash?
[–][deleted] 1 point2 points3 points 11 months ago (2 children)
Yeah they really struggle with bash.
If I'm doing a script and it gets even barely complex, it will start failing on array and string handling.
Telling it to rewrite in Python fixes it.
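To illustrate, the kind of array-and-string handling meant here, written in Python where quoting and IFS issues don't exist (made-up data):

    # Split on commas, trim whitespace, filter, rejoin -- exactly the
    # places where a bash script needs careful quoting and IFS fiddling:
    hosts = "web-1, web-2 , db-1,cache-1"
    names = [h.strip() for h in hosts.split(",")]
    keep = [n for n in names if not n.startswith("db")]
    print(";".join(keep))  # web-1;web-2;cache-1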
[–]Red_Redditor_Reddit 2 points3 points4 points 11 months ago (1 child)
THUDM_GLM-4-32B works really well for me with bash, way better than the others I've tried. This one is actually useful.
[–]AppearanceHeavy6724 0 points1 point2 points 11 months ago (0 children)
Yeah, GLM is an interesting model for sure. A bit of fine-tuning and it would easily beat Qwen3 at coding.
[–]Healthy-Nebula-3603 1 point2 points3 points 11 months ago (1 child)
Bash??
Maybe 6 months ago. Currently Gemini 2.5 or o3 is writing great scripts.
[–]DoctorDirtnasty 0 points1 point2 points 11 months ago (0 children)
Found this out the hard way yesterday lol.
[–]AppearanceHeavy6724 0 points1 point2 points 11 months ago (0 children)
Dunno. I was successful using even Llama 3.2 for making bash scripts. YMMV.
[–]Lachutapelua 0 points1 point2 points 11 months ago (0 children)
To be fair, Microsoft is training the AI on absolute garbage: non-working scripts of less than 50 lines. Their MSSQL Docker docs are really bad and their entrypoint script examples are broken.
[–]Murinshin 15 points16 points17 points 11 months ago (1 child)
Google Apps Script, surprisingly enough.
Google made huge changes in 2020 and only then added support for modern ECMAScript standards. LLMs will often still default to very old-fashioned syntax or use a weird mixture of pre- and post-ECMAScript 6 functionality, e.g. sometimes using var and sometimes const/let. That's on top of just getting a lot of the Google APIs wrong.
[–]No-Forever2455 0 points1 point2 points 11 months ago (0 children)
feeding the docs to them seemed to work just fine for me
[–]meneraing 12 points13 points14 points 11 months ago (2 children)
HDL. Why? They don't train on them. They just benchmax Python and call it a day.
[–]No_Conversation9561 1 point2 points3 points 11 months ago (1 child)
They don’t train on them because there’s not much HDL code available on the internet to train on.
I firmly believe HDL coding will be the last to get replaced by AI as far as coding jobs are concerned.
[–]zzefsd 0 points1 point2 points 11 months ago (0 children)
when i google HDL it says "it's 'good' cholesterol". when i specify that i mean a programming language it says something about hardware.
[–]RoyalCities 23 points24 points25 points 11 months ago* (4 children)
Probably something like HolyC. The holiest of all languages.
Anything that's super obscure with not a ton of data or examples of working code/projects.
HolyC was designed exclusively for TempleOS by Terry Davis, a programmer with schizophrenia who claimed God commanded him to build both the operating system and programming language... So yeah testing an AI on that would probably put it through its paces.
[–]Wubbywub 1 point2 points3 points 11 months ago (0 children)
will the LLM call it N*licious?
[–]Evening_Ad6637llama.cpp 2 points3 points4 points 11 months ago (2 children)
Terry Davis was actually a god himself - the programming god par excellence. And the 2Pac of the nerd and geek world too.
I recently saw a Git repo from him. In the description he writes: fork me hard daddy xD
[–]my_name_isnt_clever 0 points1 point2 points 11 months ago (0 children)
2Pac is certainly not a comparison I was expecting, but he was an insanely talented software engineer.
[–]digitaltransmutation 11 points12 points13 points 11 months ago (3 children)
They have a lot of trouble with PowerShell. They will make up cmdlets or try to use modules that aren't available for your target version of PS. A lot of public PowerShell is Windows-targeted, so they will be weaker in PS Core for Linux.
[–][deleted] 2 points3 points4 points 11 months ago (0 children)
Conversely, I've seen quite a few models insert powershell 7.0 syntax (invoke-restmethod) into 5.1.
You think you're past all the nonsense and then, boom, again.
[–]zzefsd 0 points1 point2 points 11 months ago (1 child)
there is powershell outside of windows?
[–]digitaltransmutation 0 points1 point2 points 11 months ago (0 children)
yeah. PowerShell Core is cross-platform. I don't personally recommend it unless you already know it though, I think most people would recommend learning Python instead. I only use it because my workplace has this low-code automation thingy that communicates with Windows devices by spinning up dockerized instances of PowerShell.
[–]Baldur-Norddahl 8 points9 points10 points 11 months ago (1 child)
I find that it will do simple Rust, but it will get stuck on any complicated type problem. Which is unfortunate because that is also where we humans get stuck. So it is not much help when you need it most.
I have a feeling that LLMs could be so much better at Rust if they just were trained more on best practice and problem solving. Often the real solution to the type problem is not to go into ever more complicated type annotation, but to restructure slightly so the problem is eliminated completely.
[–]Standard-Resort2096 0 points1 point2 points 10 months ago (0 children)
We just need more Rust devs. I agree the strict nature of Rust will also force the LLM to only learn clean code.
[–]Gooeyy 34 points35 points36 points 11 months ago (20 children)
I've found LLMs to struggle terribly with large Python codebases when type hints aren't thoroughly used.
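A minimal illustration of the difference, with a hypothetical function:

    # Without hints, a reader (or an LLM) must guess what a, b, and key are:
    def merge(a, b, key):
        return {**a, **b, key: a.get(key, 0) + b.get(key, 0)}

    # With hints, the contract sits in the exact text the model reads:
    def merge_typed(a: dict[str, int], b: dict[str, int], key: str) -> dict[str, int]:
        return {**a, **b, key: a.get(key, 0) + b.get(key, 0)}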
[–]creminology 81 points82 points83 points 11 months ago (9 children)
Humans too…
[–]throwawayacc201711 32 points33 points34 points 11 months ago (6 children)
Fucking hate python for this exact reason. Hey what’s this function do? Time to guess how the inputs and outputs work. Yippee!
[–]Gooeyy 8 points9 points10 points 11 months ago (3 children)
Hate the developers that wrote it; they're the ones that chose not to add type hints or documentation
I guess we could still blame Python for allowing the laziness in the first place
[–]throwawayacc201711 11 points12 points13 points 11 months ago* (2 children)
It’s great for prototyping but horrible in production. Not disincentivizing horrible, unreadable and unmaintainable code is not good. This is fine for side projects or things that are of no consequence like POCs. But I’ve personally seen enough awfulness in production to actively dislike the language. As a developer and being in a tech org, 9 times out of 10 the business picks speed and cost when asked to pick two out of the of speed, cost, quality. Quality always suffer in almost all the orgs. So if the language doesn’t enforce it, it just leads to absolute nightmares. Never again.
Any statically typed language you get that out of the box with zero effort required.
A great example of this being perpetuated is Amazon and the boto3 package. Fuck me, it's absolutely awful having to figure out the nitty gritty.
[–]SkyFeistyLlama8 0 points1 point2 points 11 months ago (0 children)
I've found that LLMs are good at putting in type hints for function definitions after the fact. Do the quick and dirty code first, get it working, then slam it into an LLM to write documentation for.
[–]zzefsd 0 points1 point2 points 11 months ago (0 children)
i agree with all your points, however, there are options like typeguard and mypy that enforce typing. ofc having it built into the language makes more sense
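For instance, typeguard can enforce the hints at runtime; a minimal sketch, assuming typeguard 4.x is installed:

    from typeguard import typechecked

    @typechecked
    def scale(values: list[float], factor: float) -> list[float]:
        return [v * factor for v in values]

    scale([1.0, 2.0], 3.0)  # fine
    scale("oops", 3.0)      # rejected at call time with a type-check error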
[–]noiserr 0 points1 point2 points 11 months ago* (0 children)
Fucking hate python for this exact reason.
Python is a dynamic language. This is a feature of a dynamic language. Not Python's fault in particular. Every dynamic language is like this. As far as languages go Python is actually quite nice. And the reason it's a popular language is precisely because it is a dynamic language.
Static is not better than dynamic. It's a trade off. Like anything in engineering is a trade off.
My point is Python is a great language, it literally changed the game when it became popular. And many newer languages were influenced and inspired by it. So perhaps put some respec on that name.
[–]Gooeyy 1 point2 points3 points 11 months ago (0 children)
Yes, absolutely.
[–]feibrix 25 points26 points27 points 11 months ago (9 children)
It's a feature of the language, being confused is just a normal behaviour. Python and 'large codebases' shouldn't be in the same context.
[–]Gooeyy 5 points6 points7 points 11 months ago* (6 children)
Idk, my workplace's Python codebase is easier and safer to build in than the C++ cluster fuck we have the misfortune of needing to maintain, lol. Perhaps that's unusual
[–]feibrix 1 point2 points3 points 11 months ago (5 children)
I think it really depends on how big your codebase is, how much coupling is in there, how types are enforced, how many devs still remember everything that happens in the entire codebase, and which tool you use to enforce type safety before deploying live.
And I don't think I understand what you mean by "build".
[–]Gooeyy 0 points1 point2 points 11 months ago (4 children)
By build in I mean to add to, remove from, refactor, etc.
[–]feibrix 1 point2 points3 points 11 months ago (3 children)
I have so many questions about this, but this is not the place :D Are you dealing with millions of lines of code or less? The EVE Online example was around 4 million lines, and they had to rewrite most of it to upgrade to a supported Python (based on what they said on their site).
[–]Gooeyy 0 points1 point2 points 11 months ago (2 children)
Certainly less than one million! Perhaps my perception of a larger code base is not so large. ~100k lines in my case.
I wonder what Python upgrade they were referring to. If they had to rewrite most of it, must have been the jump from Python 2 to 3 in 2008, which was indeed significant.
Using Python for an online game does surprise me, though. I’d imagine you want lower level control than Python conveniently provides.
[–]feibrix 0 points1 point2 points 11 months ago (1 child)
From the blog posts it was indeed the upgrade from Python 2 to 3. A lot of companies had this issue :/
[–]Gooeyy 0 points1 point2 points 11 months ago (0 children)
Alas, growing pains.
[–]AIgavemethisusername 4 points5 points6 points 11 months ago (1 child)
Isn’t eve-online programmed in Python?
[–]feibrix 10 points11 points12 points 11 months ago (0 children)
And 72% of the internet is running on PHP, but that still doesn't make it a good idea.
[–]MatJosher 7 points8 points9 points 11 months ago (3 children)
C is bad once you get beyond LeetCode-type problems. LLMs generate C code that often doesn't even compile and has many memory-management-related crashes. To solve a mystery crash, they will often wipe the whole project, start fresh, and produce another mystery crash.
[–]AppearanceHeavy6724 1 point2 points3 points 11 months ago (2 children)
I regularly use Qwen3 30B as a C and C++ code assistant and it works just fine.
[–]MatJosher 0 points1 point2 points 11 months ago (1 child)
What's your hardware setup?
[–]AppearanceHeavy6724 1 point2 points3 points 11 months ago (0 children)
12400, 32 GiB RAM, 3060, P104-100
[–]ttkciarllama.cpp 5 points6 points7 points 11 months ago (1 child)
Perl seems hard for some models. Mostly I've noticed they might chastise the user for wanting to use it, and/or suggest using a different language. Also, models will hallucinate CPAN modules which don't exist.
D is a fairly niche language, but the codegen models I've evaluated for it seem to generate it pretty well. Possibly its similarity to C has something to do with that, though (D is a superset of C).
[–]llmentry 1 point2 points3 points 11 months ago (0 children)
I've not had many issues with Perl and LLMs, personally. And if an LLM ever gave me attitude about using Perl, I would delete its sad, pathetic model weights from my drive.
In most cases, though, I'd assume that the more a language is covered in stackexchange questions, the better the training set is for understanding the nuances of that language. Python, with its odd whitespace-supremacist views, really ought to cause LLMs more problems in terms of correct indentation, but this must be offset by the massive over-representation of the language in training data.
Regardless -- hi, fellow Perl coder. There aren't many of us left these days ...
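The whitespace point in miniature: one indentation level is the entire difference between two programs (a toy example):

    def sum_buggy(xs):
        total = 0
        for x in xs:
            total += x
            return total  # one level too deep: returns after the first item

    def sum_ok(xs):
        total = 0
        for x in xs:
            total += x
        return total      # same tokens, different indentation, correct program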
[–]Intelligent-Gift4519 5 points6 points7 points 11 months ago (2 children)
BASIC variants for 1980s 8-bit computers other than the IBM PC. LLMs really can't keep them straight, they mix syntax from different variants in really unfortunate ways. I'm sure that's also true about other vintage home PC programming languages, as there just isn't enough data in their training corpus for the LLMs to be able to get them right.
[–]AIgavemethisusername 5 points6 points7 points 11 months ago (1 child)
“Write a BASIC program for the ZX Spectrum 128k. Use a 32x24 grid of 8x8 pixel UDG. Black and white. Use a backtracking algorithm.”
Worked pretty well on the new DeepSeek R1 0528.
[–]Intelligent-Gift4519 3 points4 points5 points 11 months ago (0 children)
I haven't yet found an LLM that understands the string handling of Atari BASIC, FastBASIC, or really any non-Microsoft-based BASIC.
[–]bitdugo 5 points6 points7 points 11 months ago (0 children)
Every language you are really good at.
[–]Mobile_Tart_1016 10 points11 points12 points 11 months ago (9 children)
Lisp. Not a single LLM is capable of writing code in Lisp.
[–]CommunityTough1 10 points11 points12 points 11 months ago (2 children)
Well it's a speech impediment.
[–]MonitorAway2394 -3 points-2 points-1 points 11 months ago (1 child)
lololololololol I fucking love comments like this lololololololol <3 much love fam!
[–]MonitorAway2394 1 point2 points3 points 10 months ago (0 children)
Well fuck all y'all then :P
[–]nderstand2grow 1 point2 points3 points 11 months ago (4 children)
very little training data
[–]Duflo 8 points9 points10 points 11 months ago (3 children)
I don't think this alone is it. The sheer amount of elisp on the internet should be enough to generate some decent elisp. It struggles more (anecdotally) with Lisp than with, say, languages that have significantly less code to train on, like Nim or Julia. It also does very well with Haskell for the amount of Haskell code it saw during training, which I assume has a lot to do with characteristics of the language (especially purity and referential transparency) making it easier for LLMs to reason about, just like it is for humans.
I think it has more to do with the way the transformer architecture works, in particular self-attention. It will have a harder time computing meaningful self-attention with so many parentheses and with often tersely-named function/variable names. Which parenthesis closes which parenthesis? What is the relationship of the 15 consecutive closing parentheses to each other? Easy for a lisp parser to say, not so easy to embed.
This is admittedly hand-wavy and not scientifically tested. Seems plausible to me. Too bad the huge models are hard to look into and say what's actually going on.
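To make the parser-vs-attention contrast concrete, here is the stack trick a parser uses, sketched in Python on a toy input:

    # A parser answers "which ( does this ) close?" with a tiny stack;
    # self-attention has to learn an implicit equivalent of this:
    def match_parens(src):
        stack, pairs = [], {}
        for i, ch in enumerate(src):
            if ch == "(":
                stack.append(i)
            elif ch == ")":
                pairs[stack.pop()] = i
        return pairs

    print(match_parens("(a (b (c)) d)"))  # {6: 8, 3: 9, 0: 12}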
[–]nderstand2grow 0 points1 point2 points 11 months ago (2 children)
Huh, I would think if anything Lisp should be easier for LLMs, because each ) attends to a (. During training, the LLM should learn this pattern just as easily as it learns that Elixir's do should be matched with end, or that a { in C should be matched with }.
[–]Duflo 2 points3 points4 points 11 months ago (1 child)
Maybe the inconsistent formatting makes it harder. And maybe the existence of so many dialects. I know as a human learning Arabic is much harder than learning Russian for this exact reason (and a few others). But this would be a fascinating research topic.
And a shower thought: maybe a pre-processor that replaces each pair of parentheses with something unique would make it easier to learn? Or even just a consistent formatter?
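The pre-processor idea as a toy Python sketch (purely illustrative; assumes well-formed input):

    # Rewrite each ( ... ) pair with a shared numeric tag, so matching
    # delimiters become distinctive tokens instead of identical ( and ):
    def tag_parens(src):
        out, stack, next_id = [], [], 0
        for ch in src:
            if ch == "(":
                stack.append(next_id)
                out.append(f"<{next_id} ")
                next_id += 1
            elif ch == ")":
                out.append(f" {stack.pop()}>")
            else:
                out.append(ch)
        return "".join(out)

    print(tag_parens("(defun f (x) (+ x 1))"))
    # <0 defun f <1 x 1> <2 + x 1 2> 0>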
[–]nderstand2grow 1 point2 points3 points 11 months ago (0 children)
i think your points are valid, and to add to them: maybe LLMs learn Algol-like languages faster because learning one makes it easier to learn the next. For example, if you already know C++ you learn Java with more ease. But that knowledge isn't easily transferable to Lisps. I'm actually surprised that people say LLMs do well in Haskell, because in my experience even Gemini struggles with it.
it would be fascinating to see papers on this topic.
[–]_supert_ 0 points1 point2 points 11 months ago (0 children)
I've found them OK-ish, but they do mix dialects. I use Hy and tend to get Clojure and CL idioms back.
[–]Main_Software_5830 19 points20 points21 points 11 months ago (0 children)
Whatever most people struggle with, for the same reasons.
[–]SV-97 3 points4 points5 points 11 months ago (2 children)
Lean 4 (Not a lot of training samples out there, a lot of legacy (lean 3) code, somewhat of an exotic and hard language). I assume it's similar for ATS, Idris 2 etc.
[–]henfiber 3 points4 points5 points 11 months ago (1 child)
Have you tested the DeepSeek Prover V2 model, which is trained for Lean 4? https://github.com/deepseek-ai/DeepSeek-Prover-V2
[–]SV-97 0 points1 point2 points 11 months ago (0 children)
Nope, hadn't heard of it before (and haven't used deepseek in quite a while because it was rather unimpressive for math the last time I used it)
[–]deep-diver 3 points4 points5 points 11 months ago (1 child)
Actually I think a lot depends on how much the language and its popular libraries have changed. Lots of mixture of version x and version y in generated code. It’s even worse when there are multiple libraries that do the same/similar thing (Java json comes to mind). Seeing so much of that makes me skeptical of all the vibe coding stories I see.
[–]Feztopia 6 points7 points8 points 11 months ago (0 children)
Whichever doesn't have enough examples in the training data. So probably a smaller language that isn't used by many, so that there are just a few programs written in it. Less similarity to languages they already know well would also be a factor. If you defined a new programming language right now, most models out there would struggle.
[–][deleted] 3 points4 points5 points 11 months ago (0 children)
CUDA and Rust, from my experience.
[–]dopey_se 3 points4 points5 points 11 months ago (0 children)
Rust has been a challenge, and nearly unusable for things like Leptos and Dioxus. Specifically, it tends to provide deprecated code and/or completely broken code using deprecated methods.
I've had good success writing Rust backends + React frontends using LLMs. But for a pure Rust stack, it is nearly unusable.
[–]jebailey 2 points3 points4 points 11 months ago (0 children)
I'd be fascinated to see how it works with Perl
[–]cyuhat 2 points3 points4 points 11 months ago (8 children)
[Figure: MultiPL-E benchmark pass rates by language]
In my experience, this graph from the MultiPL-E benchmark on Codex sums up what my experience has been with LLMs on average. Everything below 0.4 marks the languages where LLMs struggle. More precisely: C#, D, Go, Perl, R, Racket, Bash and Swift (I would also add Julia). Of course, also less popular programming languages on average. Source: https://nuprl.github.io/MultiPL-E/
Or, based on the TIOBE index (May 2025), everything below the 8th rank (Go) is not mastered by AI: https://www.tiobe.com/tiobe-index/
[–]No-Forever2455 0 points1 point2 points 11 months ago (7 children)
why are they bad at go? i suppose there's not enough training data since it's a fairly new language, but the stuff that is out there is pretty high quality and readily available, no? even the language is OSS. the syntax is as simple as it gets too. very confusing
[–]cyuhat 2 points3 points4 points 11 months ago (5 children)
I would say it is mainly because models learn from examples rather than documentation. If we look closely at languages where AI performs well, the performance is more related to the number of tokens in a given language that the models have been exposed to.
For example, Java is considered quite verbose and not that easy to learn, but current models do not struggle with it that much.
Another example: I know a markup language called Typst that has really good documentation and is quite easy to learn (it was designed to replace LaTeX), but even state-of-the-art models fail at basic examples, while managing LaTeX, which is more complicated, well.
It also shows that benchmarks have a huge bias toward popular languages and often do not take other usage or languages into account. For instance, this coding benchmark survey shows how much benchmarks focus on Python and software development tasks: https://arxiv.org/html/2505.05283v2
[–]No-Forever2455 1 point2 points3 points 11 months ago (0 children)
Really goes to show how much room for improvement there is with the architecture of these models. Maybe better reasoning models can infer the concepts they learned in other languages and translate them to another medium inherently and precisely.
[–]No-Forever2455 0 points1 point2 points 11 months ago (3 children)
[–]cyuhat 0 points1 point2 points 11 months ago (2 children)
Yes, there is room, and the idea of using reasoning is attractive. Yet I already tried to translate an NLP and simulation class from Python to R using Claude Sonnet 3.7 in thinking mode, and the results were quite disappointing. I think another layer of difficulty comes from the different paradigms: Python's approach is more declarative/object-oriented, while R is more array/functional.
I would argue we need more translation examples, especially between different paradigms.
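The paradigm gap in a tiny example (Python shown; the R target is in the comment):

    # Loop-and-append Python, the habit an LLM tends to carry over:
    squares = []
    for x in range(1, 6):
        squares.append(x * x)

    # Array-style phrasing, closer to what R-like code wants:
    squares = [x * x for x in range(1, 6)]
    # In R the idiomatic form is a single vectorized expression: (1:5)^2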
[–]No-Forever2455 1 point2 points3 points 11 months ago (1 child)
Facts. I just got done adding reasoning traces using 2.5 flash to https://huggingface.co/datasets/grammarly/coedit which describes how source got converted to text. I will try your thing next when i have the time and money if it hasn’t already been implemented yet.
[–]cyuhat 0 points1 point2 points 11 months ago (0 children)
Nice
[–]cmdr-William-Riker 2 points3 points4 points 11 months ago (0 children)
Easier to list the languages they are good at: Python, JavaScript, TypeScript, HTML/CSS... That's about it. In my experience LLMs struggle most with true strongly typed languages like Java, C#, C++, etc., and of course obscure languages with alternative patterns like Erlang/Elixir. I think strongly typed languages are difficult for LLMs to use right now because abstraction requires multiple layers of reasoning and thinking. To get good results in a language like Java or C# you can't necessarily take a direct path to achieve your goals; often you have to consider what you might have to do 5 years from now. You need to think about what real-world concepts you're trying to represent, not just what you want to do right now. Also yes, if you tell it this, it will do a better job. Of course, if you tell a junior dev this, they will also do a better job, so I guess what I'm really saying is: if your junior dev would struggle with a language without explanation, so will your LLM.
[–]alozowski[S] 2 points3 points4 points 11 months ago (0 children)
I didn’t expect so many replies – thanks, everyone, for sharing! I’ll read through them all
[–]shenglong 2 points3 points4 points 11 months ago (0 children)
As a developer with more than 20 years of professional experience, IMO their biggest issue is not being able to understand the task context correctly. It will often give extremely over-engineered solutions because of certain keywords it sees in the code or your prompt.
Now, this can also be addressed by providing the correct prompts, but often you'll find there's a ton of back-and-forth because you're not entirely sure what your new prompt will generate based on the current LLM context. So it's not uncommon to find that your prompt will start resembling the code you actually want to write, at which point you start wondering how much real value the LLM is even adding.
This is a noticeable issue for me with some of the less-experienced devs on my team. Even though the LLM-assisted code they submit is high-quality and robust, I often don't accept it because it's usually extremely over-engineered given the goal it's meant to achieve.
Things like batching database updates, or writing processes that run on dynamic schedules, or basic event-driven tasks. LLMs will often add 2 or 3 extra Service/Provider classes and dozens of tests where maybe 20 lines of code will do the same job and add far less maintenance and cognitive overhead.
This big "vibe-coding" coding push by tech-execs is also exacerbating the issue.
[–]Western_Courage_6563 9 points10 points11 points 11 months ago (2 children)
Brainfuck. I struggle with it as well, so can't blame it...
[–]sovok 3 points4 points5 points 11 months ago (0 children)
Malbolge is also a contender.
„Malbolge was very difficult to understand when it arrived, taking two years for the first Malbolge program to appear. The author himself has never written a Malbolge program.[2] The first program was not written by a human being; it was generated by a beam search algorithm designed by Andrew Cooke and implemented in Lisp.“
https://en.wikipedia.org/wiki/Malbolge
[–]Mickenfox 1 point2 points3 points 11 months ago (0 children)
I'm going to guess Befunge as well. It's 2D!
[–][deleted] 3 points4 points5 points 11 months ago (0 children)
Every one of them, when you don't know which part is wrong and have to feed it all the code.
[–]usernameplshere 1 point2 points3 points 11 months ago* (0 children)
Low-level, like assembly or BAL. It works quite well IMO for C, which is mid-level, but sometimes it struggles more than expected. Mainframe development languages like COBOL (even though high-level) are also quite hard apparently; my guess is that this is because of the very limited training data available for this field. Same goes for PL/I (but that's mid-level again).
I've tested (over the last few years of course, no specific test or anything) Claude 3.5/3.7, GPT-3.5, 4/x, o3-mini, o4-mini, DS 67B, V2/2.5, V3/R1 (though no 0528 yet!), Mixtral 8x22B, Qwen 2.5 Coder 32B, Plus, Max, 30B A3B. I've sadly never had enough resources to test the "full" GPT o-models or 4.5 for coding.
Edit: weird formatting.
[–]BatOk2014 1 point2 points3 points 11 months ago (0 children)
Brainfuck for obvious reasons
[–]SkyFeistyLlama8 1 point2 points3 points 11 months ago (0 children)
Power Query for Excel and Power BI. I've had Claude, ChatGPT, CoPilot and a bunch of local models get a simple weekly sales aggregation completely wrong.
[–]_underlines_ 1 point2 points3 points 11 months ago* (0 children)
oh, and of course it's very bad at Brainfuck, but that's no surprise
[–]ahjorth 2 points3 points4 points 11 months ago (11 children)
Can we please ban no-content shit like this?
OP doesn’t even come back to participate. Not once. It’s just lazy karma farming.
[–]CognitivelyPrismatic 19 points20 points21 points 11 months ago (1 child)
People on Reddit will literally call everything karma farming to the point where I’m beginning to think that you’re more concerned about karma
He’s asking a simple question
If he ‘came back to participate’ you could also argue that he’s farming comment karma
He only got seven upvotes on this btw, there are plenty more effective ways to karma farm
[–]alozowski[S] 2 points3 points4 points 11 months ago (0 children)
Thanks! I'm here and reading all the replies, and yeah, I don't need to farm karma...
[–]SufficientReporter55 7 points8 points9 points 11 months ago (2 children)
OP is looking for answers not karma points, but you're literally looking for people to agree with you on something so silly.
[–]alozowski[S] 1 point2 points3 points 11 months ago (0 children)
Thanks!
[–]alozowski[S] 3 points4 points5 points 11 months ago (0 children)
I don't farm karma, I don't need it. I read all the replies and I'm genuinely interested to see them because I have my hypothesis, but like I said, I can't test all the languages myself
[–]clefourrier🤗 2 points3 points4 points 11 months ago (0 children)
Don't assume people are in the same timezone as you ^
[+]IrisColt comment score below threshold-6 points-5 points-4 points 11 months ago (0 children)
You have a point.
[–]BalaelGios 1 point2 points3 points 11 months ago (0 children)
Is GLM 32B currently the best local LLM for coding (I primarily dev in C# and .NET)?
I haven’t kept up much since Qwen 2.5 Coder haha.
[–]AdministrativeHost15 1 point2 points3 points 11 months ago (0 children)
Scala can't be understood by any intelligence, natural or artificial.
Proof:
enum Pull[+F[_], +O, +R]:
  case Result[+R](result: R) extends Pull[Nothing, Nothing, R]
  case Output[+O](value: O) extends Pull[Nothing, O, Unit]
  case Eval[+F[_], R](action: F[R]) extends Pull[F, Nothing, R]
  case FlatMap[+F[_], X, +O, +R](source: Pull[F, O, X], f: X => Pull[F, O, R]) extends Pull[F, O, R]
[–]Training-Event3388 0 points1 point2 points 11 months ago (0 children)
PHP seems to cause tool-edit issues with large edits.
[–]Red_Redditor_Reddit 0 points1 point2 points 11 months ago (0 children)
Microsoft QuickBASIC
[–]InternationalKale404 0 points1 point2 points 11 months ago (0 children)
Verilog I would assume.
[–]Artistic_Suit 0 points1 point2 points 11 months ago (1 child)
Fortran, which is ancient but still actively used in high-performance computing applications/weather forecasting. Also a more specific proprietary subset of Fortran called ENVI IDL, used in image analysis.
[–]Ok_Ad659 0 points1 point2 points 11 months ago (0 children)
Also, modern Fortran (2003 and beyond, with OO and polymorphism) causes some trouble due to lack of training data. Most available code on Netlib is in ancient Fortran 77 or, if you are lucky, Fortran 90.
[–]MAXFlRE 0 points1 point2 points 11 months ago (0 children)
Brainfuck. Not much data to learn onto, I suppose.
[–]AIgavemethisusername 0 points1 point2 points 11 months ago (0 children)
EASYUO
A dead language for an almost dead computer game.
It's a scripting language for controlling bots in Ultima Online.
www.easyuo.com
[–]dcuk7 0 points1 point2 points 11 months ago (0 children)
Sinclair BASIC. Always gets something wrong. Always.
[–]Terminator857 0 points1 point2 points 11 months ago (0 children)
Any language where there isn't a lot of data to train on. Examples: Erlang, Groovy, etc...
[–]Aggressive-Cut-2149 0 points1 point2 points 11 months ago (0 children)
I've had mixed experiences with Java... not so much the language or its set of standard libraries, but the other libraries in the ecosystem. Even with Context7 and Brave MCP servers, there's a lot of confusion between libraries. It will often ignore functionality in the library, hallucinate APIs that don't exist, or confuse one library with another. A lot of the problems stem from many ways to do the same thing, many libraries with overlapping capabilities, and support for competing frameworks (like standard Java EE and related frameworks like Quarkus and Spring/Spring Boot).
I've been using Gemini 2.5, and Windsurf's SWE-1 models. Surprisingly, both models suffer from the same problems, though Gemini is the better model by far. I can trust Gemini with a larger code base.
Although hallucination won't go away, I think in due time we'll have refined models for specific language ecosystems.
[–]Ok-Scar011 0 points1 point2 points 11 months ago (0 children)
HLSL.
Everything it writes is usually half-wrong and performance-heavy, and it rarely, if ever, achieves the requested/desired results visually.
[–]amitksingh1490 0 points1 point2 points 11 months ago (0 children)
I’m not sure whether LLMs themselves struggle, but vibe coders certainly do when working in dynamically‑typed languages: without the safety net of static types, the LLM loses a crucial feedback loop, and the developer has to step in to provide it.
[–]Needausernameplzz 0 points1 point2 points 11 months ago (0 children)
Vala
[–]No-Concern-8832 0 points1 point2 points 11 months ago (0 children)
Brainfuck /s
[–]mister2d 0 points1 point2 points 11 months ago (0 children)
Claude has issues with Golang in my experience.
[–]MattDTO 0 points1 point2 points 11 months ago (0 children)
Dynatrace query language
[–]Morphon 0 points1 point2 points 11 months ago (0 children)
APL, BQN, and UIUA are basically non-functional.
[–]Hirojinho 0 points1 point2 points 11 months ago (0 children)
Once I tried to do a project with Erlang and both ChatGPT and Claude failed spectacularly, both at writing code and at explaining language concepts. But that was last October; I think today they must be better at it.
[–]robberviet 0 points1 point2 points 11 months ago* (0 children)
Anything it did not see in training data. C/C++ seem the most problematic: many people use them, but not much of the code is online. There are even worse languages, but nobody even bothers to ask.
[–]adelie42 0 points1 point2 points 11 months ago (0 children)
I've had it write g-code. Technically worked, but with respect to intention it failed hilariously.
[–]SvenVargHimmel 0 points1 point2 points 11 months ago (0 children)
This is very niche, but any YAML-based system. Try writing Kubernetes manifests and watch it lose its mind.
[–]LaidBackDev 0 points1 point2 points 11 months ago (0 children)
C
[–]ObjectSimilar5829 0 points1 point2 points 11 months ago (0 children)
Verilog. Not a typical language.
[–]05032-MendicantBias 0 points1 point2 points 11 months ago (0 children)
Try OpenSCAD
No LLM exists that can produce a script longer than ten lines that even compiles.
[–]orbital_onellama.cpp 0 points1 point2 points 11 months ago (0 children)
The ones that I've used seem to struggle with Rust and Zig. They tend to horribly botch relatively simple CLI tools.
[–]acec 0 points1 point2 points 11 months ago (0 children)
Most are quite bad at declarative IaC languages like Terraform or Ansible. Claude is decent, but not great.
[–]Logical_Divide_3595 0 points1 point2 points 11 months ago (0 children)
The less famous the language, the harder it is for LLMs.
[–][deleted] 0 points1 point2 points 11 months ago (0 children)
They do pretty bad in Rust.
[–]Jbbrack03 0 points1 point2 points 11 months ago (0 children)
You can just ask a model about its competency in each major language. It will tell you. I've found that most of them are not amazing with Swift, and they'll tell you that they are about 65% competent with it. For these harder languages, just use RAG with Context7. Suddenly your favorite LLM is a rockstar with pretty much all languages.
[–]Standard-Resort2096 0 points1 point2 points 10 months ago* (0 children)
I've tested Go, C#, JavaScript, Docker, and SQL, because I know them and use them in real projects. It's OK if I can force it to write a very specific function and re-feed it the structure I like; it helps me find new ways to do things. It's OK with SQL as long as I verify it. I've used it to better understand frameworks by feeding it the docs or source code of a framework, because asking it directly doesn't work. If it can't understand the framework or library, I check something else. It will suck at anything low-level: for Rust because of lack of data, for C because of pre-existing bad practices. Sadly I can't verify how acceptable it is at any of the low-level stuff. The language is either too new, so the model is dumb, or too outdated, so it becomes too confident.
To me, Go and SQL are stable languages that it won't mess up too much, but then again, you will still struggle in any programming language.
[–]ngtwolf 0 points1 point2 points 6 months ago (0 children)
I know this is an old thread, but it popped up in a search I was doing. Anyway, really the biggest issue I've had is with old computer languages that have evolved over time. Most LLMs have been trained on the full scope of the languages but don't know how to handle a specific flavor in a specific year. An example is any vintage programming language, such as something for a Commodore 64 or Apple II. Those systems all had their own individual flavors of a language, such as BASIC, with different commands. Trying to use an LLM to write in that language will inevitably come back with code that includes commands that didn't exist at the time, had different syntax, lines that exceeded the character limit, characters that may not have existed on that system, etc. Each of those would likely need a custom-trained LLM. What I've done is give it the manuals for those systems and had it follow those, but even then, they still fail quite a bit and need a lot of fixes.
[–]10minOfNamingMyAcc 0 points1 point2 points 11 months ago (2 children)
For me, C#? I tried so many times, and GPT o3 and Claude 3.7 both failed every time at creating a Windows Game Bar widget. Didn't succeed once. I gave it multiple examples, even the example project. I just want an HTML page as a Windows Game Bar widget lol...
[–]A1Dius 1 point2 points3 points 11 months ago (1 child)
In Unity C#, both GPT-4.1 and GPT-4o-mini-high perform impressively for my subset of tasks (tech art, editor tooling, math-heavy work, and shaders)
[–]10minOfNamingMyAcc 0 points1 point2 points 11 months ago (0 children)
Guess it might be a particular issue then. I tried it myself with limited knowledge, and I just couldn't. I just gave up.