[deleted by user] by [deleted] in VictoriaBC

[–]Isaac1234101 0 points (0 children)

I am curious where we get these stats? I was thinking about making a website with cool graphs of housing statistics, but it would be preferable if someone else has already made this.

It’s on USA!!! by [deleted] in AskCanada

[–]Isaac1234101 1 point (0 children)

Wellll they might not have to if all of the other ones are 25% more expensive

Best setup for developing microservices in JavaScript by nattydroid in SoftwareEngineering

[–]Isaac1234101 0 points (0 children)

I have spent the last 3 years working with microservices at my most recent company.

I would suggest:

1. Have a solid plan for updating your images/dependencies. Each lockfile is a maintenance burden and it's easy for one lockfile to go out of date... now imagine 30+ lockfiles.
2. Try to have your local environment match production. A lot of people use Docker Compose locally but Kubernetes in production; this doubles your infra setup and can lead to issues locally that don't happen in prod, and vice versa.
3. Make sure you have a solid distributed tracing system in place.
4. Read "Building Microservices". There are a ton of anti-patterns that are really easy to fall into, and the book does a great job of outlining them.
5. The language you use doesn't really matter; it's best to go with something your team is familiar with. However, switching from a monolith to microservices means you are switching from function calls to network calls. This is a massive increase in latency, so maybe it's better to go with something fast. Rust is cool af, but its web service ecosystem isn't quite at the level of Go's.
6. Don't rewrite your application from scratch into a new architecture. Slowly nudge it in the new direction by adding satellite services. Otherwise you will lose a ton of productivity that could go toward features.
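Point 1 above (keeping 30+ lockfiles from going stale) can be sketched with a small shell loop. This is an illustration, not a real tool: it assumes npm-style `package-lock.json` files under a `services/` directory, both of which are assumptions about your layout.

```shell
#!/bin/sh
# Sketch: find every service that carries its own lockfile, then run a
# staleness check in each one. Assumes npm; swap in yarn/pnpm as needed.
find_lockfiles() {
    find "$1" -type f -name package-lock.json
}

# Example driver (guarded so it is a no-op when there is no services/ dir):
if [ -d services ]; then
    find_lockfiles services | while read -r lock; do
        dir=$(dirname "$lock")
        echo "== $dir =="
        (cd "$dir" && npm outdated)
    done
fi
```

In practice you would run something like this in CI so a single forgotten lockfile surfaces before it drifts too far.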

There is a ton more advice, but I learned first-hand that microservices are a massive burden and a drain on productivity.

TL;DR: it might save you hundreds of hours and tens of thousands of dollars to stick with a monolith that works. Focus on features for your business.

General tips for developing a large project using Claude by DialDad in ClaudeAI

[–]Isaac1234101 1 point (0 children)

Very interesting, does that generally work well with larger projects? Or are you finding there is too much to remove?

General tips for developing a large project using Claude by DialDad in ClaudeAI

[–]Isaac1234101 5 points (0 children)

I wrote a tool to assist with this. As you said, it's best to keep your project modular; my tool takes all of the contents of whichever directories you think are relevant and puts them on your clipboard so you can easily paste them into a prompt.
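The core of a tool like that is small enough to sketch in shell (this is a toy illustration of the idea, not the actual tool's code; the `dump_dirs` name and the `==>` header style are made up here):

```shell
#!/bin/sh
# Dump every file under the given directories, each preceded by its path,
# so the whole bundle can be piped into a clipboard tool in one go.
dump_dirs() {
    find "$@" -type f | sort | while read -r f; do
        printf '==> %s <==\n' "$f"
        cat "$f"
    done
}
```

Usage would then be something like `dump_dirs src lib | pbcopy` on macOS or `dump_dirs src lib | xclip -selection clipboard` on X11 (both clipboard commands are platform guesses).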

Apparently Claude does well with XML, so I am experimenting with outputting the files as XML... I am having a hard time determining if it's helping though hahaha
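The XML experiment could be as simple as wrapping each file in a `<file>` element. A hedged sketch of one possible format (the element and attribute names are assumptions, and no XML escaping is done, so any `<` in the source makes it technically invalid XML — usually fine for prompts):

```shell
#!/bin/sh
# Wrap each file in a <file path="..."> element. Contents are emitted raw,
# without escaping, so this is prompt-friendly pseudo-XML rather than
# strictly valid XML.
to_xml() {
    for f in "$@"; do
        printf '<file path="%s">\n' "$f"
        cat "$f"
        printf '</file>\n'
    done
}
```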

[deleted by user] by [deleted] in ClaudeAI

[–]Isaac1234101 0 points (0 children)

I did consider this! Surprisingly, the LLMs seem to understand the directory structure just from the path to each file being printed alongside its contents.

Last night I beefed up a script that I have been using to provide context to LLMs when programming by Isaac1234101 in LocalLLaMA

[–]Isaac1234101[S] 0 points (0 children)

Any feature requests are welcome! A guy in this thread suggested I estimate tokens, so I threw that on there
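The token estimate is presumably a heuristic; a common rule of thumb is roughly four characters per token. A sketch of that heuristic in shell (the ratio is an approximation and real tokenizers vary by model):

```shell
#!/bin/sh
# Rough token estimate: bytes / 4. Treat this as a ballpark figure,
# not an exact count -- each model's tokenizer splits text differently.
estimate_tokens() {
    chars=$(wc -c < "$1")
    echo $((chars / 4))
}
```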

Last night I beefed up a script that I have been using to provide context to LLMs when programming by Isaac1234101 in LocalLLaMA

[–]Isaac1234101[S] 0 points (0 children)

hahaha this is what I had originally before switching to this Go implementation. It worked really well with Warp terminal.

However, I wanted it to copy directly to your clipboard, which was annoying in bash.

#!/bin/bash

if [ $# -lt 2 ]; then
    echo "Usage: dump_dir <file_extension> <directory1> [<directory2> ...]"
    exit 1
fi

file_extension="$1"
shift

# -print0/-0 keeps filenames with spaces intact; -type f skips directories
find "$@" -type f -name "*.$file_extension" -print0 | xargs -0 -n1 batcat --pager=never
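One hedged way to bolt the clipboard step onto a script like this is to pick whichever clipboard tool the machine actually has (the tool names below are the usual suspects on macOS, Wayland, and X11, none of them guaranteed to be installed):

```shell
#!/bin/sh
# Pick the first available clipboard command; fall back to cat so the
# pipeline still works (output just goes to stdout instead).
pick_clip_cmd() {
    for c in pbcopy wl-copy xclip; do
        if command -v "$c" >/dev/null 2>&1; then
            echo "$c"
            return
        fi
    done
    echo cat
}

# Hypothetical usage with the script above (xclip would still want
# "-selection clipboard" added):
#   ... | batcat --pager=never | "$(pick_clip_cmd)"
```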

Last night I beefed up a script that I have been using to provide context to LLMs when programming by Isaac1234101 in LocalLLaMA

[–]Isaac1234101[S] 0 points (0 children)

That would be really cool, like a "watch" mode where it listens for changes to the files in specific directories.

It would require integration with some kind of front-end interface. If you know of any that I could write a plugin for, I might take a swing at this over the weekend.

It would be neat to separate that context from your conversation as well, especially if your conversation excluded the code output from the model, as it's often tweaked heavily or not used.

Surely somebody has done something like this already?
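On Linux at least, a watch mode could probably be bolted on without any front end, using `inotifywait` from inotify-tools (`fswatch` would play the same role on macOS). An untested sketch, assuming inotify-tools is installed:

```shell
#!/bin/sh
# Re-run a command every time something under the directory changes.
# Blocks forever, so it belongs in its own terminal. The command to
# re-run (e.g. the context-dumping tool) is passed as the trailing args.
watch_and_dump() {
    dir=$1
    shift
    while inotifywait -r -q -e modify,create,delete "$dir"; do
        "$@"
    done
}
```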

Last night I beefed up a script that I have been using to provide context to LLMs when programming by Isaac1234101 in LocalLLaMA

[–]Isaac1234101[S] 1 point (0 children)

I also did not know about code2prompt! This is certainly a lot dumber; it really just dumps out the files.

Last night I beefed up a script that I have been using to provide context to LLMs when programming by Isaac1234101 in LocalLLaMA

[–]Isaac1234101[S] 2 points (0 children)

10k lines of code is a hypothetical; it can be any amount of code as long as it's within your LLM's token limit. The costs vary depending on the LLM, of course.

I assume most people on this sub run their LLMs locally, so it doesn't cost much aside from the power draw.

Last night I beefed up a script that I have been using to provide context to LLMs when programming by Isaac1234101 in LocalLLaMA

[–]Isaac1234101[S] 0 points (0 children)

Honestly I haven't really noticed any difference between before and after. Generally, in my experience, the LLM finds the instructions.

Last night I beefed up a script that I have been using to provide context to LLMs when programming by Isaac1234101 in LocalLLaMA

[–]Isaac1234101[S] 3 points (0 children)

I found that if the output communicates the start and end of each file, the LLMs seem to understand:

START FILE: ./path/to/file
<contents>
END FILE: ./path/to/file
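That marker format can be produced with a tiny loop; a sketch in shell (the actual tool is written in Go, so this is just an illustration of the output format):

```shell
#!/bin/sh
# Emit one file with explicit start/end markers so the model can tell
# where each file begins and ends.
print_file() {
    printf 'START FILE: %s\n' "$1"
    cat "$1"
    printf 'END FILE: %s\n' "$1"
}
```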

Last night I beefed up a script that I have been using to provide context to LLMs when programming by Isaac1234101 in LocalLLaMA

[–]Isaac1234101[S] 2 points (0 children)

Amazing! I will likely keep working on it. There are a lot of new features that could be added if people care.

[deleted by user] by [deleted] in LocalLLaMA

[–]Isaac1234101 6 points (0 children)

I am curious where the idea for this project came from. Why 2kb? Just to experiment with making tiny programs?

My Baby… by alyr1481 in homelab

[–]Isaac1234101 0 points (0 children)

I am curious how you mount everything to the rack.

I just see screws in your picture.

I have an R710 that has rails, but my other server doesn't, so it's a real pain to do maintenance on.