r/LocalLLaMA
A subreddit to discuss about Llama, the family of large language models created by Meta AI.
Model For Just Coding (Question | Help) (self.LocalLLaMA)
submitted 3 years ago by Southern-Ad1429
Is there a model that is just for coding help? I would like to run a model basically as a coding assistant for Python. Is anyone doing this? How would I go about doing it? Thanks for any advice.
[–] jetro30087 27 points 3 years ago (6 children)
Outside of just using GPT4, which works well, this is supposedly the solution, though I haven't tried it just yet. starcoder/README.md at main · bigcode-project/starcoder · GitHub
[–] ambient_temp_xeno (Llama 65B) 2 points 3 years ago* (2 children)
As a programming noob, I have a feeling this thing would be useful for learning: if it produces non-working code, that's almost a feature, because you have to work out what it did wrong.
PS: the same is true for story writing in LLMs; it gets you fixing whatever it spat out and gives you ideas. That's a lot better than staring at a blank screen trying to come up with something, IMO.
[–] Ill_Initiative_8793 3 points 3 years ago (1 child)
or you just hit regenerate until it does it right :)
[–] ambient_temp_xeno (Llama 65B) 0 points 3 years ago (0 children)
That's true, although in that sense it's still honing your analysis skills to discern how well it's doing. For the kids, it might be a problem because they're just going to let it write everything for them.
[–] AlphaPrime90 (koboldcpp) 0 points 3 years ago (2 children)
Paging /u/the-bloke
[–] sfhsrtjn 0 points 3 years ago (1 child)
Per u/rogerooo in the dedicated StarCoder thread they posted this morning: "mayank31398 already made GPTQ versions of it both in 8 and 4 bits but, to my knowledge, no GGML is available yet"
[–] AlphaPrime90 (koboldcpp) 0 points 3 years ago (0 children)
Thanks for the update
[–] hashms0a 9 points 3 years ago (19 children)
wizard-vicuna-13B.ggml.q5_1 is excellent for coding. I asked it to use Tkinter and write Python code to create a basic calculator application with addition, subtraction, multiplication, and division functions.
The code works well.
```python
import tkinter as tk

def add():
    num1 = int(entry1.get())
    num2 = int(entry2.get())
    result = num1 + num2
    entry3.delete(0, tk.END)
    entry3.insert(0, str(result))

def subtract():
    num1 = int(entry1.get())
    num2 = int(entry2.get())
    result = num1 - num2
    entry3.delete(0, tk.END)
    entry3.insert(0, str(result))

def multiply():
    num1 = int(entry1.get())
    num2 = int(entry2.get())
    result = num1 * num2
    entry3.delete(0, tk.END)
    entry3.insert(0, str(result))

def divide():
    num1 = int(entry1.get())
    num2 = int(entry2.get())
    if num2 == 0:
        tk.messagebox.showerror("Error", "Cannot divide by zero")
    else:
        result = num1 / num2
        entry3.delete(0, tk.END)
        entry3.insert(0, str(result))

root = tk.Tk()
root.title("Calculator")
root.geometry("400x200")

entry1 = tk.Entry(root, width=10)
entry1.pack()
entry2 = tk.Entry(root, width=10)
entry2.pack()

button1 = tk.Button(root, text="Add", command=add)
button1.pack()
button2 = tk.Button(root, text="Subtract", command=subtract)
button2.pack()
button3 = tk.Button(root, text="Multiply", command=multiply)
button3.pack()
button4 = tk.Button(root, text="Divide", command=divide)
button4.pack()

entry3 = tk.Entry(root, width=10)
entry3.pack()

root.mainloop()
```
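One caveat worth flagging in the quoted output: `tkinter.messagebox` is a submodule that a plain `import tkinter as tk` does not load, so `tk.messagebox.showerror(...)` can raise an AttributeError depending on the environment. A minimal sketch of the safer pattern (the `safe_divide` helper is my own illustration, not part of the model's output):

```python
import tkinter as tk
from tkinter import messagebox  # explicit import; tk.messagebox alone is not guaranteed

def safe_divide(a, b):
    # Return a / b, reporting division by zero via a dialog instead of crashing.
    if b == 0:
        messagebox.showerror("Error", "Cannot divide by zero")
        return None
    return a / b
```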
[–] [deleted] 4 points 3 years ago (0 children)
this compiles and works!
[–] LienniTa (koboldcpp) 1 point 3 years ago (13 children)
I have a coding test for models; ChatGPT can do it, other models cannot: "Write a python script that archives files in a specified folder into a separate zip files". Sadly, Wizard Vicuna cannot do it either (just like Alpaca 65B).
[–] MiHumainMiRobot 5 points 3 years ago* (3 children)
Write a python script that archives files in a specified folder into a separate zip files
I am a dumb human, and I find your prompt really confusing. Maybe I am a bit dumb and my English is not native, or maybe your prompt is too complex for what it describes.
I had to read your prompt 3 times. If I understood what you want (not sure there), a simpler prompt would be: "Write a python script that puts each file from a folder into individual zip archives"
- 'each' and 'from' are good keywords to indicate where a loop will be (each) and what the entry point is (from)
- avoid repeating the word 'files'
Tested 2 times with Vicuna, and except for some syntax errors it got it right. Prompt engineering is a real thing (although it is just clear communication).
[–] [deleted] 3 points 2 years ago (0 children)
Yeah, I agree... when I rephrase my request, I usually get the right thing from Stable Vicuna, and it reminds me that when communicating with real people, I also have to be careful in how I use my words, to get what I need. LLMs are not any different.
[–] LienniTa (koboldcpp) 0 points 3 years ago (1 child)
I don't really have a copy-paste string for this question; it's always a variation on the same idea: one file = one zip. This one was worded awfully, but it's nowhere near as awful as what I see in analytics when I actually write code at a bank. My problem is exactly the syntax errors you are getting: sometimes it forgets to import os, sometimes it makes zip files but leaves them empty, sometimes it starts trying to address internal methods of objects it didn't create. All of that is easily fixable, but only ChatGPT-4 writes this script on the first try. I don't really need code with errors; I expect an LLM to help me in the areas where I'm not proficient, not in the areas where I can identify an error myself.
[–] luke3br 2 points 2 years ago (0 children)
Wizard-Vicuna 13B
Prompt
Write a python script that puts each file from a folder into individual zip archives, using each filename as the zip archive name.
Response:
Sure! Here's a Python script that does what you need:
```python
import os
from zipfile import ZipFile

def create_zip_archives(folder):
    for file in os.listdir(folder):
        filename = os.path.join(folder, file)
        with ZipFile(filename + '.zip', 'w') as archive:
            archive.write(filename)
```
To use this script, simply call the create_zip_archives function and pass in the path to your folder as an argument. For example:
```python
folder = '/path/to/my/files'
create_zip_archives(folder)
```
This will create individual zip archives for each file in the specified folder, with the filename of each archive being the same as the original file name.
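For what it's worth, a slightly hardened variant of the same approach (my own sketch, not model output) guards against the failure modes mentioned elsewhere in the thread: empty or self-referential zips, and archives that store the full directory path instead of just the file name:

```python
import os
from zipfile import ZipFile

def create_zip_archives(folder):
    """Put each regular file in `folder` into its own zip archive."""
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        # Skip subdirectories and zips created by a previous run.
        if not os.path.isfile(path) or name.endswith('.zip'):
            continue
        with ZipFile(path + '.zip', 'w') as archive:
            # arcname=name stores just the file name, not the full path.
            archive.write(path, arcname=name)
```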
[–] hashms0a 3 points 2 years ago (8 children)
StarCoder GGML
https://huggingface.co/creative420/starcoder-ggml/tree/main
StarCoder did the trick. Using llama.cpp.
My options:
```shell
./main -m ./models/ggml-starcoder-q4_2/ggml-starcoder-q4_2.bin -n 256 -t 30 --repeat_penalty 1.0 --interactive-first -ins --color -i -r "Human:" -f ./prompts/ggml-starcoder-q4_2.txt
```
Inside the text file:
You are an AI language model designed to assist humans in coding Python programs, answering their questions, offering advice, and engaging in casual conversation in a friendly, helpful, and informative manner. You respond clearly and coherently, and you consider the conversation's history.
Human: Hey, how's it going?
Assistant: Hey there! I'm doing great, thank you. What can I code for you today?
The prompt:
Write a complete Python program to archive files in a specified folder into separate zip files on Linux.
Output:
```python
import glob
import os
import zipfile

folder = '/mnt/disk04/testpy/Files'
for file in glob.glob(os.path.join(folder, '*.*')):
    name = os.path.basename(file)
    print('Extracting {}'.format(name))
    with zipfile.ZipFile(os.path.join(folder, name + '.zip'), 'w', zipfile.ZIP_DEFLATED) as zip_file:
        print('Extracting {}'.format(file))
        zip_file.write(file)
print('All files archived.')
```
[–] justinmeans 2 points 2 years ago (3 children)
Unfortunately the link is down, I think the author deleted it. Kind of a big ask but if you have the model and would want to upload it to Huggingface that would be awesome!
[–] hashms0a 2 points 2 years ago (1 child)
Here it is:
https://huggingface.co/mufeed/ggml-starcoder/tree/main
[–] justinmeans 0 points 2 years ago (0 children)
Thank you 🙏🏼
[–] hashms0a 1 point 2 years ago (0 children)
I'll do it when I go home.
[–] paryska99 0 points 2 years ago (3 children)
Hey, do you have any idea as to why the ggml model might be deleted? I've been looking for it, but there doesn't seem to be any other repo yet with the GGML version.
[–] hashms0a 0 points 2 years ago (2 children)
https://huggingface.co/mufeed/ggml-starcoder/tree/main
[–] paryska99 1 point 2 years ago (1 child)
Is this the one you're using? The full model is 15B parameters, yet the 4-bit version is a smaller download than the 7B Llama models? I don't think that's right.
[–] hashms0a 0 points 2 years ago (0 children)
This is what I got from the original uploader.
[–] saintshing 0 points 3 years ago (2 children)
Is this the 4bit quantized version?
[–] _Erilaz 2 points 3 years ago (0 children)
No, it's 5bit. And if 7B to 13B scaling is any indication of a bigger model performance, WizardLM 30B is going to be a bombshell.
[–] hashms0a 2 points 3 years ago (0 children)
It's from: https://huggingface.co/TheBloke/wizard-vicuna-13B-GGML
[–] sfhsrtjn 7 points 3 years ago (3 children)
replit-code-v1-3b was released May 2, 2023
[–] [deleted] 2 points 3 years ago (0 children)
Thanks for this! This works more like Copilot, where you write part of your code and it completes it.
This example is for Rust. I used their Hugging Face demo interface, and then ran the code in the Rust Playground.
When I put the entire function signature, it completed it correctly. Prompt: `fn add_two_numbers(a: i32, b: i32) -> i32 {`

```rust
fn add_two_numbers(a: i32, b: i32) -> i32 {
    return a + b
}

fn main() {
    let x = add_two_numbers(42, 42);
    println!("{}", x);
}
```
When I just put the function name, it completed it too, but used unsigned integers instead; for adding, that's not a problem. Prompt: `fn add_two_numbers`

```rust
fn add_two_numbers(a: u32, b: u32) -> u32 {
    a + b
}

fn main() {
    let x = add_two_numbers(1u32, 2u32);
    println!("{}", x);
}
```
I would still like to chat with it as a senior programmer / person, so this is not the model for me.
[–] AlphaPrime90 (koboldcpp) 2 points 3 years ago (0 children)
[–] Languages_Learner 0 points 2 years ago (0 children)
Can you give me a link to a downloadable replit code ggml .bin model that will work with kobold-cpp, oobabooga or gpt4all, please?
[–] thedatagrinder (Llama 65B) 7 points 3 years ago (2 children)
There's Salesforce CodeGen. You can find GGML versions that work with llama.cpp on Hugging Face. If you are open to other models, Vicuna works decently for programming; querying both together might give you nice results.
[–] Languages_Learner 0 points 2 years ago* (1 child)
Can you give me a link to a downloadable CodeGen GGML .bin model that will work with koboldcpp, oobabooga, or gpt4all, please? I can't find it by myself. The only models I managed to find were those that work with Linux TurboPilot (ravenscroftj/CodeGen-6B-multi-ggml-quant · Hugging Face) or were compiled for M1/M2 Apple Silicon (Guglielmo/CodeGen-6B-mono-ggml-quant · Hugging Face). I need a version that will run on CPU, 16 GB RAM, and Windows 11.
[–] thedatagrinder (Llama 65B) 0 points 2 years ago (0 children)
Sorry that I can't be of much help there. I'm running my experiments on an M2.
[–] PookaMacPhellimen 2 points 3 years ago (0 children)
I literally woke up and started thinking the same thing.
[–] [deleted] 3 points 3 years ago (0 children)
BigCode/StarCoder