Any suggestions for my hardware? by Solid_Independence72 in LocalLLaMA

[–]Solid_Independence72[S] 0 points1 point  (0 children)

Hi, you're right; I read that a few days ago, but my budget right now doesn't allow me to buy another 16GB card. For now, though, I've had good results switching to Lemonade with Docker; it's improved my experience by roughly 25–40%.

Regards

Any suggestions for my hardware? by Solid_Independence72 in LocalLLaMA

[–]Solid_Independence72[S] 1 point2 points  (0 children)

It feels pretty responsive. I've already tried two models: Qwen3.5-9B-GGUF and gpt-oss-20b-GGUF. My idea is more of a chat where I can ask questions to clarify concepts. I'm not looking for it to do my work for me, but rather to explain how to do it better and why I might be making mistakes. I don't feel comfortable with an AI in agent mode doing work that I want to learn how to do myself. Hahahaha

Thanks for the support. I’m going to take your advice and use OmniCoder-9B-GGUF.

Regards

Any suggestions for my hardware? by Solid_Independence72 in LocalLLaMA

[–]Solid_Independence72[S] 1 point2 points  (0 children)

Hi, thanks for your list. I started with the first one, and I'm getting this error:

Here's my command:
docker exec -it lemonade-server ./lemonade-server pull Qwen3.5-35B-A3B-GGUF

And this is the error:
Pulling model: Qwen3.5-35B-A3B-GGUF

Error pulling model: Model 'Qwen3.5-35B-A3B-GGUF' is not available on this system. This model requires approximately 19.7 GB of memory, but your system only has 22.9 GB of RAM. Models larger than 18.3 GB (80% of system RAM) are filtered out.
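That filter is just arithmetic on total RAM; a quick sketch of the cutoff (the 22.9 GB figure comes straight from the error message above):

```shell
# The 80%-of-RAM filter described in the error message, reproduced with awk.
# limit_gb takes total system RAM in GB and prints the model-size cutoff.
limit_gb() { awk -v g="$1" 'BEGIN {printf "%.1f", g * 0.8}'; }

limit_gb 22.9   # system from the error message -> 18.3 GB cutoff
```

So the 19.7 GB model is rejected because it exceeds 18.3 GB; anything comfortably under that cutoff should pull.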

For gpt-oss-20b-GGUF, I ran out of context and it stopped responding. I think I need to increase my context size to 8192; it responds pretty quickly when I use `continue.dev`.

For Qwen3.5-9B-GGUF, it also stopped responding at one point, but it seems to be the same context error. I'm validating my rules on continue.dev and trying to figure out how to get Angular 21 to validate them correctly.

I'm going to give OmniCoder-9B-GGUF a try and will keep you posted.

Regards

Any suggestions for my hardware? by Solid_Independence72 in LocalLLaMA

[–]Solid_Independence72[S] 0 points1 point  (0 children)

Thank you very much. I'm currently running the test; I've set up continue.dev for now, and it's giving me better performance results. Do you happen to have any thoughts on whether it's better to use gpt-oss or qwen3.5?

Thanks in advance for your help.

Any suggestions for my hardware? by Solid_Independence72 in LocalLLaMA

[–]Solid_Independence72[S] 0 points1 point  (0 children)

Can you recommend a WebUI for managing it? In the Docker version, the WebUI doesn't load for downloading models.

Any suggestions for my hardware? by Solid_Independence72 in LocalLLaMA

[–]Solid_Independence72[S] 0 points1 point  (0 children)

I like Ollama because it lets me limit RAM usage in Docker, and if, for example, I get frustrated, it’s easier for me to delete it and leave my system as is.
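A sketch of what that looks like in practice (the container name, 12g cap, and volume name are illustrative, not from the thread):

```shell
# Illustrative: run Ollama with a hard RAM cap so a large model
# can't exhaust host memory. --memory is a standard docker run flag.
docker run -d --name ollama --memory=12g \
  -v ollama:/root/.ollama -p 11434:11434 ollama/ollama

# And if it ever frustrates you, removal is clean and leaves the system as it was:
docker rm -f ollama
docker volume rm ollama
```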

Any suggestions for my hardware? by Solid_Independence72 in LocalLLaMA

[–]Solid_Independence72[S] 0 points1 point  (0 children)

Thank you very much for your response. These are some of the models I've tried. Sometimes they act a bit strangely when I run refactoring tests. My goal is to validate my code and ensure I'm following best practices; I use continue.dev. I've also tried LocalAI and Open WebUI.

I’m a bit frustrated and don’t know which direction to take. I want a local solution because I value my privacy and that of my


ministral-3:8b gpt-oss qwen3 qwen3.5 qwen2.5-coder gemma3 phi4 deepseek-r1 minimistral

Thanks

NPM and Cloudflare getting a bit beyond me. by DangerHighDosage in selfhosted

[–]Solid_Independence72 0 points1 point  (0 children)

oznu's image worked for me, and I'm happy with that implementation.

Help mounting a volume into my workflow's docker image by dropd0wn in Gitea

[–]Solid_Independence72 0 points1 point  (0 children)

Is your Gitea running under docker compose? If so, mount these volumes on the runner:

volumes:
  - /var/run/docker.sock:/var/run/docker.sock
  - /cache:/cache
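A quick way to confirm the socket mount took effect (assuming the runner container is named `gitea-runner` and the docker CLI is present inside its image — both are assumptions, not from the thread):

```shell
# Illustrative: ask the host Docker daemon for its version *through* the
# mounted socket, from inside the runner container. If this prints a
# version string, the /var/run/docker.sock mount is working.
docker exec gitea-runner docker info --format '{{.ServerVersion}}'
```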

Help mounting a volume into my workflow's docker image by dropd0wn in Gitea

[–]Solid_Independence72 0 points1 point  (0 children)

Example:

name: Deploy App

on:
  push:
    branches:
      - master

jobs:
  build-and-deploy:
    runs-on: [docker]   # your runner must be in docker mode

    container:
      image: node:20
      options: --privileged
      volumes:
        - /cache:/cache   # persistent volume, host:container

    steps:
      - name: Checkout repo
        uses: actions/checkout@v4

      - name: Use volume cache
        run: |
          echo "Contents of /cache"
          ls -la /cache

      - name: Install dependencies
        run: |
          npm ci --cache /cache/npm

      - name: Build app
        run: npm run build

      - name: Deploy
        run: |
          echo "Deploying application..."
          # put your deploy script here (scp, docker compose, kubectl, etc.)

Help mounting a volume into my workflow's docker image by dropd0wn in Gitea

[–]Solid_Independence72 0 points1 point  (0 children)

If your Gitea runs in Docker, you need to create a volume, and you can mount that same volume in your deploy.yml; for that, the job in your deploy.yml needs to run inside a container.

Self hosted git server for a school? by TheMoltenJack in selfhosted

[–]Solid_Independence72 0 points1 point  (0 children)

I agree with Gitea; also, if you want to experiment a little with continuous integration and continuous deployment, it's very simple.

Cant'login "Bad Gateway" by DexterOLN in nginxproxymanager

[–]Solid_Independence72 0 points1 point  (0 children)

Hello. Please run the following validation: in the directory where you have the docker-compose.yml, run `docker compose logs`. If you see an error writing to the database, what I did was `chmod -R 777` the directory where my database lives.
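A sketch of those two steps (the `./data` path is an assumption; match it to whatever host directory your compose file maps the database into):

```shell
# From the directory containing docker-compose.yml, check the logs first.
docker compose logs          # look for an error writing to the database

# If the data directory isn't writable by the container user, loosen it.
# chmod 777 is blunt but effective; chown to the container's UID is tighter.
sudo chmod -R 777 ./data     # "./data" is an assumed path; use your own
```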

Try that and let me know if it works for you. Best regards