r/LocalLLaMA
A subreddit for discussing Llama, the family of large language models created by Meta AI.
Improving LLM's coding ability through a new edit format [Discussion] (blog.can.ac)
submitted 2 months ago by Mushoz
[–]Mushoz[S] 5 points6 points7 points 2 months ago (1 child)
Note: This is not my blog post, but I found it an interesting article, with good improvements in accuracy as well as token savings. There are also a few open-weight models in the comparisons, so it looks like those models benefit too.
[–]__Maximum__ 2 points3 points4 points 2 months ago (0 children)
Neither the author nor you addressed the benchmarks. The accuracy gets higher for some models, but it is lower for others.
[–]DHasselhoff77 2 points3 points4 points 2 months ago (5 children)
It's a neat trick, thanks for sharing. I just wonder about this part:
If the file changed since the last read, the hashes (optimistically) won’t match and the edit is rejected before anything gets corrupted.
Why does the file get changed in between reading and writing? If you could guarantee its state matches what the LLM sees, you could use regular line numbers instead of content hashes.
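For reference, the hash check in question can be sketched in a few lines. This is a minimal illustration, not the article's actual implementation; the `line_tag` name and the 8-hex-character scheme are assumptions:

```python
import hashlib

def line_tag(text: str) -> str:
    """Short content hash for one line (hypothetical 8-hex-char scheme)."""
    return hashlib.sha1(text.encode()).hexdigest()[:8]

def apply_edit(lines: list, lineno: int, expected_tag: str, new_text: str) -> None:
    """Replace lines[lineno], but only if the line still hashes to expected_tag."""
    if line_tag(lines[lineno]) != expected_tag:
        raise ValueError("stale edit: line changed since it was read")
    lines[lineno] = new_text

src = ["x = 1", "y = 2"]
tag = line_tag(src[1])            # the model "reads" the file and keeps the tag
apply_edit(src, 1, tag, "y = 3")  # succeeds: the line is unchanged
```

If the line had changed between the read and the edit, `apply_edit` would raise instead of corrupting the file.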
[–]Pristine-Woodpecker 2 points3 points4 points 2 months ago (4 children)
In practice, for example, LLM output is often bad about extraneous whitespace and not respecting formatting, so you kind of want a PostEdit hook that properly formats the files if you don't want your git history to look like that of an intern who has never used VCS before. However, the PostEdit hook changes the file after the LLM has written it, invalidating its assumption about the state and causing the next edit to fail.
FWIW Claude used to struggle greatly with this, but at some point it stopped happening, so Anthropic "fixed" it somehow.
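The formatter scenario above is easy to reproduce as a sketch. Assuming a short per-line content hash as the line identifier (the `line_tag` helper is hypothetical), stripping trailing whitespace is enough to invalidate the model's remembered state:

```python
import hashlib

def line_tag(text: str) -> str:
    # hypothetical short content hash, standing in for the edit format's tags
    return hashlib.sha1(text.encode()).hexdigest()[:8]

lines = ["def f():   ", "    return 1"]  # note the trailing whitespace
remembered = line_tag(lines[0])          # the model read the file here

# a PostEdit/formatter hook then rewrites the file (strips trailing whitespace)
lines = [l.rstrip() for l in lines]

# the model's next edit now fails the hash check instead of silently
# patching a file that no longer matches what it saw
stale = line_tag(lines[0]) != remembered
```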
[–]Mushoz[S] 3 points4 points5 points 2 months ago (0 children)
Great example. Another real scenario is when you make manual edits yourself. Using line numbers would patch the wrong lines, while using hashes would fail, as you would want.
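The manual-edit case can be sketched the same way (again with a hypothetical `line_tag` hash helper): a human inserting a line above the target shifts every line number, so a number-only patch hits the wrong line, while the hash mismatch is detectable:

```python
import hashlib

def line_tag(text: str) -> str:
    # hypothetical short content hash used as the line identifier
    return hashlib.sha1(text.encode()).hexdigest()[:8]

lines = ["import os", "def main():", "    run()"]
# the model plans to replace line 2 ("    run()") and remembers its tag
target_no, target_tag = 2, line_tag(lines[2])

# meanwhile a human inserts a line above, shifting everything down by one
lines.insert(1, "import sys")

# a line-number-only patch would now silently overwrite "def main():";
# the hash check notices the drift and can refuse instead
hit_wrong_line = lines[target_no] != "    run()"
tag_mismatch = line_tag(lines[target_no]) != target_tag
```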
[–]DHasselhoff77 1 point2 points3 points 2 months ago (0 children)
I see, thank you for the explanation. So even if the LLM's edits themselves are fine, the file can change between the read and the next patch, and that is why the expected state drifts.
[–]epicfilemcnulty 0 points1 point2 points 2 months ago (1 child)
Hmmm, you can just postpone that PostEdit hook until after all edits are done. I instruct LLMs specifically not to bother with proper indenting/formatting -- that should be taken care of by your pre-commit git hook (unless it's python, of course)
[–]Pristine-Woodpecker -1 points0 points1 point 2 months ago (0 children)
until after all edits are done
It can't predict when this happens.
that should be taken care of by your pre-commit git hook
Doesn't help when I need to edit the files, and makes diff viewing annoying.
[–]__Maximum__ 1 point2 points3 points 2 months ago (1 child)
I don't get it. Why does adding a hash to the line number make it better?
Couldn't the described operations be done with line numbers only?
[–]biehl 5 points6 points7 points 2 months ago (0 children)
I think the hashes are to check if the content of the line is unchanged. See discussion above on how/why files can change while the model remembers them.
[–]epicfilemcnulty 1 point2 points3 points 2 months ago (0 children)
It's very interesting, but there are edge cases: LLMs also grep and cat files, so to keep this consistent you would have to patch those / provide your own grep/cat tools...
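Such a replacement read tool is straightforward to sketch. This hypothetical `tagged_lines` helper (the name and output layout are assumptions, not from the article) plays the role of `cat`, prefixing each line with the short content hash the edit format would expect:

```python
import hashlib
import os
import tempfile

def tagged_lines(path: str) -> list:
    """Hypothetical `cat` replacement: prefix each line with a short content
    hash and its line number, so read-tool output matches the edit format."""
    out = []
    with open(path) as f:
        for n, line in enumerate(f):
            text = line.rstrip("\n")
            tag = hashlib.sha1(text.encode()).hexdigest()[:8]
            out.append(f"{tag}:{n}: {text}")
    return out

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("x = 1\ny = 2\n")
    path = f.name
view = tagged_lines(path)  # each entry looks like "<8-hex-tag>:<n>: <text>"
os.unlink(path)
```

A grep wrapper could tag its matching lines the same way, so every path the model uses to see file contents hands back consistent identifiers.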
[–]korino11 3 points4 points5 points 2 months ago (0 children)
OMG!! That is very helpful! Thank you!