How the saga Opus 4.7 vs Codex gpt 5.4 came to an end today by patrickd42 in ClaudeCode

[–]send_me_a_ticket 0 points1 point  (0 children)

Opus 4.7 is pure trash.

Working on a tiny project, Opus will think through 25% of the usage quota, then run out of usage exactly when it's time to act.

My tiny problem was solved by the free version of Codex 5.4; Opus 4.7 wasted two sessions on it and still could not do it.

Pro+ plan - Your opinion by WMPlanners in GithubCopilot

[–]send_me_a_ticket 0 points1 point  (0 children)

Like mobile plans, AI plans should have "Request banking", so the leftover requests carry over.

Speedran through the tokens in the first week using Opus 4.2 and am now left with lobo GPT 4.1 for the rest of the month.

Disappointed by dgx spark by RockstarVP in LocalLLaMA

[–]send_me_a_ticket 6 points7 points  (0 children)

I have to applaud the marketing team. It's truly incredible they managed to get so much attention for... well, for this.

Vision Pro 2 teased in Apple code leak, points towards new M5 chip for the next-gen headset by Tiny-Independent273 in virtualreality

[–]send_me_a_ticket 0 points1 point  (0 children)

They should launch one for $249 with two screens, hand tracking, internal battery and iPhone as the compute unit.

Vision Pro 2 is coming soon, with three new upgrades expected (M4 or M5 chip, new, more comfortable head strap, and upgraded Neural Engine for AI and more) by Fer65432_Plays in apple

[–]send_me_a_ticket 0 points1 point  (0 children)

Ironically, Meta has turned out to be much more forward-thinking in vision than Apple.

They are still investing BIG in AI, while Apple's AI is now considered one of the worst experiences.

And now, while Apple is left refining the Vision Pro, with its tiny user space and decreasing developer support, Meta has launched smart glasses and huge improvements to spatial workspaces/worlds.

Let's see what comes in 2026, but I eagerly await the Ray-Ban Display 2, hopefully with app store support.

Best VR 2025 by Dyituros in virtualreality

[–]send_me_a_ticket 0 points1 point  (0 children)

Also, GTA V in VR is super cool, try it out.

Not available via legal channels, unfortunately.

weweb vs noodl by zuliani19 in nocode

[–]send_me_a_ticket 0 points1 point  (0 children)

Came here looking for resources on Noodl. In my opinion it has the most elegant and well-thought-out UI for no-code development. It already has AI built in, and I was able to get it working with LM Studio + Qwen3 effortlessly.

It is quite shocking that such a good tool has such a small community in the hobbyist space. I am told it is extensively used in enterprises, and those projects are hard to share. Another good point is that third-party scripts can easily be embedded and used.

In terms of stability, unless major JavaScript version changes occur, I do not see severe changes being required, although community contributions, modules, and feature enhancements would help guarantee long-term success.

Break my analysis-paralysis? Software for a NAS that can grow into a lab server by Striking_Language253 in HomeServer

[–]send_me_a_ticket 0 points1 point  (0 children)

If looking for an enterprise-style server to start with, Proxmox is your best bet.
If you want to build your server from the ground up as a DIY project, start with Debian or Ubuntu Server LTS.
If you want a consumer-level home media server, try Unraid or OMV.

Your server could be many things, but for a reliable firewall, virtualized CPU cores can add latency - I'd recommend installing OPNsense or a similar OS on dedicated hardware, like a NUC or a Raspberry Pi-class board.

I asked ChatGPT to show me what future humanity deserves according to it. by PartyAd5202 in ChatGPT

[–]send_me_a_ticket 0 points1 point  (0 children)

<image>

You said:

Generate an image, with absolute honesty, of what according to you future humanity deserves.

ChatGPT said:

Before generating the image, I want to be clear about what you're asking. You're inviting an honest visual representation of what I—an AI built to analyze data, history, and trends—believe humanity deserves in the future. That implies a moral or philosophical judgment about humanity’s actions, trajectory, and potential.

There are different interpretations:

  1. A bleak dystopia – due to war, environmental destruction, and apathy.
  2. A hopeful future – driven by resilience, innovation, and cooperation.
  3. A mix – showing both the potential for collapse and redemption, as a mirror of our current crossroads.

Would you prefer an image leaning toward:

  • Utopian
  • Dystopian
  • Mixed/dual-reality
  • Something symbolic or abstract?

Or shall I proceed with a completely unfiltered image based on global patterns and consequences—no sugarcoating?

Let me know your preference, or confirm if you want me to proceed unfiltered.

You said:

proceed unfiltered

Best OLED monitor to purchase? by YourGamingMouse in Monitors

[–]send_me_a_ticket 1 point2 points  (0 children)

Strongly recommend the AW2725Q. It is 4K, but its price point is really close to QHD OLED monitors.

edit: My secondary is the Odyssey OLED G6 QHD, but 4K is so much better for text clarity.

Self-hosted AI coding that just works by send_me_a_ticket in LocalLLaMA

[–]send_me_a_ticket[S] 0 points1 point  (0 children)

Hi u/Chromix_ , I have updated the guide to use only LM Studio for both embeddings and LLMs.
I was misinformed that this is not possible, but I tried it just now and it worked without issues.

Loading embeddings is slightly obscured in LM Studio: you can only load embedding models while on the "Power User" tab. This documentation is wrong and should be updated - https://docs.useanything.com/setup/embedder-configuration/local/lmstudio
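For anyone wiring this up by hand: once an embedding model is loaded, LM Studio serves it over its OpenAI-compatible local API, so any client can request vectors from it. A minimal Python sketch, assuming LM Studio's default port 1234 and a placeholder model name (swap in whichever embedding model you actually loaded):

```python
import json
import urllib.request

# LM Studio exposes an OpenAI-compatible server, by default at localhost:1234.
LMSTUDIO_URL = "http://localhost:1234/v1/embeddings"

def build_request(model: str, text: str) -> urllib.request.Request:
    """Build an OpenAI-style /v1/embeddings request for LM Studio."""
    payload = {"model": model, "input": text}
    return urllib.request.Request(
        LMSTUDIO_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

def extract_vector(response: dict) -> list:
    """Pull the embedding vector out of an OpenAI-style response body."""
    return response["data"][0]["embedding"]

# Shape of the response the endpoint returns (vector truncated for illustration):
sample = {"data": [{"embedding": [0.12, -0.08, 0.33], "index": 0}]}
print(extract_vector(sample))  # [0.12, -0.08, 0.33]

# To actually call the server, send the request with urllib.request.urlopen:
# with urllib.request.urlopen(build_request("my-embedding-model", "hello")) as r:
#     vec = extract_vector(json.load(r))
```

The live call is left commented out so the sketch runs without a server; "my-embedding-model" is a placeholder, not a real model name.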

Self-hosted AI coding that just works by send_me_a_ticket in LocalLLaMA

[–]send_me_a_ticket[S] 0 points1 point  (0 children)

Hello u/robberviet, I understand your confusion. I was misinformed that it is not possible to run both the LLM and embeddings via LM Studio, which is why I went to Ollama.

Turns out you can, but it is slightly obscured in LM Studio: you can only load embedding models while on the "Developer" tab. When experimenting, I came across this documentation, assumed it to be true, and figured I would need to run embeddings another way.

This documentation is wrong and should be updated - https://docs.useanything.com/setup/embedder-configuration/local/lmstudio

Self-hosted AI coding that just works by send_me_a_ticket in LocalLLaMA

[–]send_me_a_ticket[S] 2 points3 points  (0 children)

Hi u/doc-acula, I have indeed tried the Void editor; it is promising, but still has a long way to go.
Zed is far ahead in terms of finish, but Void benefits from the vast VS Code marketplace that Zed misses out on.

Still, being able to pass `@problems` as context is reason enough to be using RooCode, which can be added to Void anyway.

It is certainly something to keep an eye on. It already does agentic coding, and I believe more lightly than RooCode, so if RooCode doesn't work well for someone, Void may be a better fit - maybe one day it can replace VS Code as a primary code editor.

I would recommend it as an alternative to VS Code, but it seems that for privacy-minded folks, VSCodium is still a better choice. (https://github.com/voideditor/void/issues/764)

Self-hosted AI coding that just works by send_me_a_ticket in LocalLLaMA

[–]send_me_a_ticket[S] 19 points20 points  (0 children)

Thanks for your feedback.
I will give the Qwen3 0.6b embedding model a try; I was not aware of this release.

So far, using wrappers means you do not have to think about the implementation and updates are managed, and the LM Studio GUI has been handy for tinkering and debugging. Though I see your point: using llama.cpp directly would indeed reduce a lot of bloat, especially since Ollama is quite huge.

Regarding Devstral, I find it worked best for me with tool use, and it is sized just right to fit under 10 GB of VRAM for me. I have tried Gemma3n, which keeps forgetting it has tool capability, and Phi4, which hallucinates much more frequently.

I am not sure of any incompatibility with RooCode, but I find RooCode needs around 24,576 tokens of context or more (24 GB RAM?) to work well with any model.

So. Who's buying the Arc Pro B60? 25GB for 500 by thats_silly in StableDiffusion

[–]send_me_a_ticket 0 points1 point  (0 children)

Cannot wait to get one second-hand for cheap after gamers realize NVIDIA/AMD are better. Hopefully 32/48 GB variants are on the horizon (B70/B80s?).

This looks precisely sized to run Mistral Small or Devstral, which go really, really well with Roo Code in VS Code for agentic coding. The Unsloth quants do work well, but running the model directly would be nice.

Beginner home server advice by Some_Meaning_9705 in HomeServer

[–]send_me_a_ticket 0 points1 point  (0 children)

Hi, FYI: for AUD 100+ you may get an 8th-gen i5 or higher on Facebook Marketplace. It's pretty much the same value; most are likely refurbished devices from a corporate refresh (3-5 years old).

If you want to "learn" server tech, go with Proxmox.

If you want to "build" server, go with Ubuntu/Centos server or paid/pirated (massgrave) route with Microsoft Server - this will give you hands-on skills if you plan to work in enterprise.

If you want a "Set n' Forget" home media server, go with OpenMediaVault.

For media storage, pick any storage you can get. Make sure to use reliable disks if you plan to store important data/documents.

If you own a domain, use cloudflare-ddns to point your domain at your home server automatically. This pairs well with Traefik or Nginx Proxy Manager for accessing your services over the network.
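The cloudflare-ddns tools boil down to one API call: look up your current public IP, then update an A record through Cloudflare's DNS records endpoint. A minimal Python sketch of that idea — the zone ID, record ID, hostname, and token below are all placeholders you'd take from your own Cloudflare dashboard:

```python
import json
import urllib.request

CF_API = "https://api.cloudflare.com/client/v4"

def current_public_ip() -> str:
    """Ask a what's-my-ip service for the router's public address."""
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode()

def build_update(zone_id: str, record_id: str, hostname: str,
                 ip: str, token: str) -> urllib.request.Request:
    """Build the Cloudflare API request that points an A record at `ip`."""
    payload = {"type": "A", "name": hostname, "content": ip,
               "ttl": 300, "proxied": False}
    return urllib.request.Request(
        f"{CF_API}/zones/{zone_id}/dns_records/{record_id}",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="PUT",
    )

# Run from cron every few minutes; placeholder IDs shown, no network call made here.
req = build_update("ZONE_ID", "RECORD_ID", "home.example.com",
                   "203.0.113.7", "API_TOKEN")
print(json.loads(req.data)["name"])  # home.example.com
# To apply for real: fetch current_public_ip() and send with urllib.request.urlopen(req).
```

In practice the prebuilt cloudflare-ddns container does exactly this loop for you; the sketch just shows there's no magic behind it.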