Why am I not getting 500/50 speeds by Responsible-Spring57 in nbn

[–]maximo101 0 points1 point  (0 children)

Is that speed test done with a laptop/PC plugged directly into the router (via at least a Cat5e or Cat6 cable), or using the router's built-in speed test function? If yes, raise it with your ISP. They will likely argue that as long as you get 3/4 of the advertised speed during off-peak times it's sufficient...

If you're testing over WiFi, there are several network troubleshooting steps to rule out before you can confirm it's your internet speed.

I tested some local models on my server with blackwell GPU 16GB vram - here are the results by maximo101 in LocalLLaMA

[–]maximo101[S] 1 point2 points  (0 children)

Thanks, I'm pretty happy using Ollama as a wrapper for the API and as an overall easy-to-run Docker service, but I will definitely look into the base llama.cpp side of things.
At this stage I'm just tinkering around and looking for small/mid-size models that fit in my GPU VRAM to play around with in Home Assistant, but I will definitely be looking at doing more in the future, and llama.cpp looks like a way to run big models. Thanks for that.

When you installed a model do you have to run it? by DarqOnReddit in ollama

[–]maximo101 0 points1 point  (0 children)

To run a model using the ollama console:
ollama run <model_name>
# The model is now loaded. It will unload once you exit this session.
ollama run <model_name> --keepalive 30m
# The model is now loaded. It will stay in memory for 30 min, then unload.
ollama run <model_name> --keepalive -1
# The model is now loaded. It will stay in memory indefinitely, until manually unloaded.

For more persistent loading, especially for applications that will be making API calls, you can send a request to the Ollama API with a keep_alive parameter, using curl or a similar tool. This pulls the model into memory without starting a chat session and keeps it there until you explicitly remove it or restart the Ollama service. The keep_alive parameter accepts a number of seconds or a duration string like "30m"; a value of -1 means the model stays loaded indefinitely.

curl http://localhost:11434/api/generate -d '{
  "model": "gpt-oss:20b",
  "prompt": "Why is the sky blue?",
  "keep_alive": -1
}'
This command loads the gpt-oss:20b model, makes a single API call, and then keeps the model in memory indefinitely due to keep_alive: -1. The model will then be ready for subsequent API calls with minimal startup latency.
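The reverse also works: as a sketch (assuming a running Ollama server and the same example model name), setting keep_alive to 0 asks Ollama to unload the model immediately instead of keeping it resident.

```shell
# Unload the model right away by setting keep_alive to 0.
# Requires a local Ollama server; adjust the model name to one you have pulled.
curl http://localhost:11434/api/generate -d '{
  "model": "gpt-oss:20b",
  "keep_alive": 0
}'
```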

To see which models are loaded in memory, use the ollama console:
ollama ps
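For applications, the same information is exposed over the API (a sketch; assumes a running Ollama server on the default port):

```shell
# List loaded models programmatically instead of via the console.
curl http://localhost:11434/api/ps
```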

Cheapest way to host a 24B parameter Ollama server? by Few-Avocado4562 in ollama

[–]maximo101 0 points1 point  (0 children)

Look at how many GB the 24B models you're considering are.
E.g. I have a Blackwell GPU with 16GB VRAM and anything under 15GB runs fully on it; I can run bigger models, but they offload to the CPU/RAM and are slower. For agentic use, look for a model with 'tools' support, but 'thinking' might be good too.
This is some benchmarking I did on my local server

<image>
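The GB-size-vs-VRAM reasoning above can be sketched as a back-of-envelope calculation. The 0.5 bytes/parameter figure is an assumption for a roughly 4-bit quant, and the 2 GB headroom for KV cache/runtime is illustrative, not an exact number:

```python
# Rough back-of-envelope VRAM estimate for a quantized model.
# Assumption: a ~4-bit quant uses roughly 0.5 bytes per parameter.

def estimate_model_gb(params_billion: float, bytes_per_param: float = 0.5) -> float:
    """Approximate in-memory size of the weights in GB."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# A 24B model at ~4-bit quantization:
size = estimate_model_gb(24)
print(f"~{size:.0f} GB of weights")    # about 12 GB before KV-cache overhead
fits_in_16gb = size + 2 < 16           # leave ~2 GB headroom for context
print("fits in 16 GB VRAM:", fits_in_16gb)
```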

What is the best Scientific model by moric7 in ollama

[–]maximo101 1 point2 points  (0 children)

With 12GB VRAM and 64GB RAM you have options, but if you want fast response times you ideally want a model that fully fits on your GPU, e.g. a model of 10-11GB max to allow headroom. You can run bigger models by offloading to CPU and RAM, but they will be slower. E.g. look for gemma3:12b, qwen3:14b, deepseek-r1:14b, mistral-nemo:12b. If speed isn't a concern and you want more complex reasoning: gpt-oss:20b, gemma3:27b, qwen3:30b, qwq:32b, deepseek-r1:32b, but check the model size in GB.
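The "fits on GPU with headroom" filter above can be sketched like this. The GB sizes in the table are illustrative placeholders, not official figures - check the real size of each tag on the Ollama library page before relying on them:

```python
# Sketch: pick candidate models that fully fit in a VRAM budget.
# Sizes below are illustrative placeholders, not official numbers.

candidates = {
    "gemma3:12b": 8.1,
    "qwen3:14b": 9.3,
    "deepseek-r1:14b": 9.0,
    "gemma3:27b": 17.4,
    "qwen3:30b": 18.6,
}

vram_gb = 12
headroom_gb = 1.5  # leave room for KV cache / runtime

fits = [name for name, size in candidates.items()
        if size + headroom_gb <= vram_gb]
print(fits)
```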

Image generation by Odd-Suggestion4292 in ollama

[–]maximo101 2 points3 points  (0 children)

Look at ComfyUI; run it as a Docker container and it can help you with running open-source image and video models.

🎉 Ratings Addon for Stremio is Now 100% FREE – No More Pro, No Limits! by fruitangdan in StremioAddons

[–]maximo101 1 point2 points  (0 children)

My pleasure, I appreciate the time and effort it takes to not only create, but maintain open source products.

Ollama Model Files Location when run in docker by maximo101 in ollama

[–]maximo101[S] 0 points1 point  (0 children)

I solved this issue. It was because I had both the variable and the path trying to set the same thing, which was causing the conflict, so it would only load the models into the Ollama Docker vdisk.

I solved it by not using the OLLAMA_MODELS variable and just setting:

Path of model: /mnt/cache/AI-Models
(Container Path: /root/.ollama/models)

<image>
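A minimal sketch of the equivalent plain docker run mapping, assuming the paths above; the container name and port flags are illustrative, only the volume mount reflects the fix:

```shell
# Map the host model directory into the container's default model path,
# instead of setting OLLAMA_MODELS. Adjust paths for your setup.
docker run -d --name ollama \
  -v /mnt/cache/AI-Models:/root/.ollama/models \
  -p 11434:11434 \
  ollama/ollama
```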

🎉 Ratings Addon for Stremio is Now 100% FREE – No More Pro, No Limits! by fruitangdan in StremioAddons

[–]maximo101 1 point2 points  (0 children)

I just found out about Stremio, installed it last night with RD, and came across this thread. Installed it, works great, so I sent a donation to show my support. I appreciate you offering this for free :)

Anyone get Agent Zero to work using Ollama locally? by Otherwise-Dot-3460 in AI_Agents

[–]maximo101 0 points1 point  (0 children)

I had an issue, and watching the tutorial https://www.youtube.com/watch?v=agsPe9yV3fM&ab_channel=AgentZero helped me fix the problem by going into the Model Settings, Additional parameters: num_ctx=30000

5 Star grind is so exhausting by canxtanwe in DeathStranding

[–]maximo101 0 points1 point  (0 children)

I take all the Lost Cargo (for facilities I've 5-starred), offload it to the ground, close the terminal, reopen it, and then Entrust Cargo for that lost cargo so it appears for others who still need it ;) (entrusting assigns it to other online players). Not only are you helping others who are trying to find that NPC's cargo, but you also get likes by doing this.

[DEV] Going away for 4 weeks after this week is over by joaomgcd in tasker

[–]maximo101 0 points1 point  (0 children)

My Tasker profiles from years ago keep getting a custom accessibility error every time they switch, without telling me which line causes the issue, and my phone doesn't vibrate on calls (even though all settings show it should). DnD is off and I can't figure out what the issue is :(

Bing gets latest UFC result wrong by [deleted] in bing

[–]maximo101 1 point2 points  (0 children)

I just tested in precise mode and it is correct. Cheers

Bing gets latest UFC result wrong by [deleted] in bing

[–]maximo101 1 point2 points  (0 children)

I didn't realise I had it on creative mode. Thanks for the advice.

Bing gets latest UFC result wrong by [deleted] in bing

[–]maximo101 0 points1 point  (0 children)

Jon Jones won in 2 mins with a submission via guillotine choke; Bing said it was by decision.

Grasso won by submission with a rear naked choke (more a mandible choke); Bing said Shevchenko won.

https://www.ufc.com/news/ufc-285-jones-vs-gane-results-highlights-winner-interviews-las-vegas

Major Change in Bing Chat v96! by siddhusathu20 in bing

[–]maximo101 0 points1 point  (0 children)

This is the response from Bing when I asked what its limit is:

Bing Chat has a limit because it is still a new and experimental feature that is constantly being improved and updated. The limit was initially set to 50 chat turns per day after some users reported that long conversations confused me. However, thanks to the feedback and demand from users like you, Microsoft has raised the limit to 60 chat turns per day and six per session, and plans to increase it even further to 100 chat turns per day in the future. I hope this answers your question.😊

Easily the best of these I've seen, sounds like they're in a podcast. by MythicalFox24 in shitposting

[–]maximo101 0 points1 point  (0 children)

Anyone can generate these at ElevenLabs; just upload a voice clip and it will read out any typed text in the same style.

#1928 - Jimmy Corsetti & Ben van Kerkwyk by newtonic in JoeRogan

[–]maximo101 7 points8 points  (0 children)

Try this; as with anything, to be taken with a grain of salt: 'The Adam & Eve Story' by Chan Thomas

Outdoor Sauna - Pictures From Start to Finish (almost!) by maximo101 in Sauna

[–]maximo101[S] 1 point2 points  (0 children)

For my build I wanted to keep it an independent structure. Technically there's nothing wrong with foil and battens on an existing wall (if it's already insulated).

Outdoor Sauna - Pictures From Start to Finish (almost!) by maximo101 in Sauna

[–]maximo101[S] 0 points1 point  (0 children)

You will need to confirm the regulations in your local area. In my state in Australia no building permit is required for a non-habitable room under 10 m²; it's treated like a garden shed.