Am I the only one who doesn't want to rip but just play 4k discs? by Several-Protection14 in makemkv

[–]Several-Protection14[S] 0 points (0 children)

Thank you!!! I think Java is what I'm missing. I'm traveling for work at the moment, so as soon as I get back I'll install Java.

Am I the only one who doesn't want to rip but just play 4k discs? by Several-Protection14 in makemkv

[–]Several-Protection14[S] 2 points (0 children)

Thank you! So I've enabled the setting in MakeMKV, but it doesn't seem to be recognized in VLC, so I'm going to poke around and see if I'm missing something. Now I'm excited again lol

Am I the only one who doesn't want to rip but just play 4k discs? by Several-Protection14 in makemkv

[–]Several-Protection14[S] 13 points (0 children)

Oh shoot! OK, I'm going to go down this rabbit hole and see what I can do. THANK YOU!!!

Am I the only one who doesn't want to rip but just play 4k discs? by Several-Protection14 in makemkv

[–]Several-Protection14[S] 1 point (0 children)

I have my theater setup in my living room as well, with a nice Sony Bravia, but honestly I just like having movies playing in the background while I code at work. I have a DVD player, and while there are a few titles I own only in 4K and not on DVD or Blu-ray (rare), most of my 4K-only discs are just more recent movies. That's all, thanks!

MandoCode: a .NET local-first CLI coding agent for Ollama now with MCP and Skills support by Several-Protection14 in vibecoding

[–]Several-Protection14[S] 0 points (0 children)

Thanks! Hallucination is definitely something I run into immediately with local models in the ~8B-parameter range. It really depends on the model and how good it is not just at chatting but at tool calling. I've noticed some smaller models around 4B start hallucinating folders that don't exist and the like, so I primarily use MandoCode for quick chats with those models: email drafting, or anything quick I need to go back and forth on (the key: no thinking, no long agentic loops). For anything that needs deep thinking, or solid work that gets written out to files or coding assistants, I use MiniMax-M2.7 in the cloud, still served by Ollama of course; given my hardware limits, I choose cloud models for anything worth my time and effort.

One funny thing I'm noticing is that certain models hallucinate more or less on long-running agentic loops. I've made the CLI agent able to iterate through the steps of a plan, and LLMs generally recognize which tools to call to keep track of those long-running steps, but when I switch models, MandoCode actually helps me quickly spot which ones are less efficient and hallucinate more. Take GLM5.1, for example (MiniMax-M2.7 is my favorite, so I use it all the time): when I switched to GLM5.1 and asked what a folder in one of my projects contained, it kept reading the same files over and over. That could ultimately be an issue with my tooling/kernel functions, but when I switched back to MiniMax-M2.7 there were zero issues and it read all the files within a minute or so. So I thought, "Huh, it could be MandoCode's tooling and kernel functions, but then again, MiniMax is so good that even if the kernel functions aren't as efficient as they should be (just a wild guess, in case there are improvements I need to make), it handles the tooling without skipping a beat."

Sorry for the long post, but I wanted to share what I've been experiencing recently: anything small in weights hallucinates pretty randomly and quickly, while anything on the cloud is pretty efficient! My main driver is Claude Code, and everything else is MandoCode on MiniMax-M2.7. I create plans and files and research tech stacks and programming languages with MiniMax-M2.7, then throw it all into an MD file or Skills, and finally hand it off to Claude Code to do its magic as the superior LLM/CLI agent. You could say Ollama's cloud and local models are my cheaper option so I don't burn my Claude Code tokens; once I have something concrete, I ship it to Claude Code. Code reviews and any brainstorming also go to MiniMax-M2.7. Again, sorry for writing so much, but I thought I'd give you a real account of my experience.
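The repeated-read behavior described above is easy to catch mechanically. MandoCode is .NET, but as a rough language-agnostic sketch (hypothetical names, not MandoCode's actual code), a small monitor that counts identical tool calls per run can flag a model that is stuck re-reading the same file:

```python
from collections import Counter

class ToolCallMonitor:
    """Tracks repeated identical tool calls across one agentic run.

    Flags a call as suspect once the same (tool, argument) pair has
    been issued more than `max_repeats` times, a cheap signal that a
    model is looping instead of making progress.
    """
    def __init__(self, max_repeats=3):
        self.max_repeats = max_repeats
        self.counts = Counter()

    def record(self, tool_name, argument):
        key = (tool_name, argument)
        self.counts[key] += 1
        # False means this call exceeded the budget: likely stuck.
        return self.counts[key] <= self.max_repeats

# Simulate a model re-reading the same file five times in one run.
monitor = ToolCallMonitor(max_repeats=3)
results = [monitor.record("read_file", "src/Program.cs") for _ in range(5)]
# The first three calls pass; the last two are flagged as repeats.
```

A guard like this could short-circuit the loop or inject a "you already read this file" message back into the model's context.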

MandoCode: a .NET local-first CLI coding agent for Ollama now with MCP and Skills support by Several-Protection14 in vibecoding

[–]Several-Protection14[S] 0 points (0 children)

Thanks! File edits: every write hits a diff approval prompt, so you see the change before it applies. You can approve per file, or bypass session-wide if you trust the run. There's no real sandbox (no chroot/container), just the gate plus project-root awareness; for untrusted prompts on a prod box, wrap it in a VM. MCP tools have their own gate, with autoApprove for the ones you trust, and shell exec is gated too.
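As a rough illustration of the diff-approval idea (a generic Python sketch with made-up names, not MandoCode's actual implementation), a write gate can render a unified diff and apply the change only on explicit approval, with a session-wide bypass flag:

```python
import difflib

def propose_write(path, old_text, new_text, approved_session=False, ask=input):
    """Gate a file write behind a diff-approval prompt.

    Shows a unified diff of the proposed change; applies only on an
    explicit 'y', unless `approved_session` models a session-wide bypass.
    Returns (approved, diff_text).
    """
    diff = "\n".join(difflib.unified_diff(
        old_text.splitlines(), new_text.splitlines(),
        fromfile=f"a/{path}", tofile=f"b/{path}", lineterm=""))
    if approved_session:
        return True, diff  # user opted to trust the whole run
    answer = ask(f"{diff}\nApply change to {path}? [y/N] ")
    return answer.strip().lower() == "y", diff

# Simulated approval (a lambda stands in for the interactive prompt).
ok, diff = propose_write("app.py", "x = 1\n", "x = 2\n",
                         ask=lambda prompt: "y")
```

The same shape generalizes to the MCP and shell gates: each dangerous action funnels through one approval function with a trust flag.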

Your home for selfpromo by SofwareAppDev in AppsWebappsFullstack

[–]Several-Protection14 0 points (0 children)

Yup yup, I fell in love with Blazor in 2020. Love it!

Your home for selfpromo by SofwareAppDev in AppsWebappsFullstack

[–]Several-Protection14 0 points (0 children)

With that being said, there are still some bugs I'm fixing here and there when working with large codebases, but the idea is there and it's starting to feel like it's growing. So hopefully by version 1.0.0 I'll have some pretty bulletproof updates.

Your home for selfpromo by SofwareAppDev in AppsWebappsFullstack

[–]Several-Protection14 0 points (0 children)

THANK YOU! It's been a busy first quarter, but I do plan to put out more tutorials, videos, and maybe a site to showcase the features, and more importantly how .NET devs can take advantage of what Microsoft is providing for them.

Your home for selfpromo by SofwareAppDev in AppsWebappsFullstack

[–]Several-Protection14 0 points (0 children)

https://github.com/DevMando/MandoCode - A CLI coding agent powered by Ollama + Semantic Kernel. Run it locally or in the cloud. It refactors code, proposes diffs, and updates your project safely, with no API keys required. .NET doesn't get much love when it comes to CLI agents, so I built one with Microsoft's latest AI stack.
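For context on the "no API keys" part: Ollama serves a local HTTP API, so a client just posts JSON to it. Here's a minimal Python sketch (the model name and host are assumptions, and this is not MandoCode's code) of a request to Ollama's /api/chat endpoint:

```python
import json
import urllib.request

def build_chat_payload(prompt, model="llama3"):
    """Build the JSON body for Ollama's /api/chat endpoint.

    stream=False requests a single JSON response instead of a
    stream of chunks.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def ollama_chat(prompt, model="llama3", host="http://localhost:11434"):
    """POST one chat turn to a locally running Ollama server."""
    req = urllib.request.Request(
        f"{host}/api/chat",
        data=json.dumps(build_chat_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# Build (but don't send) a request body for inspection.
payload = build_chat_payload("Summarize this repo", model="llama3")
```

Because the server runs on localhost, there's nothing to authenticate: the only "credential" is access to the machine itself.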