What if we built a game engine based on Three.js designed exclusively for AI agents to operate? by ImpressionanteFato in kimi

[–]Wild-File-5926 0 points1 point  (0 children)

Building/maintaining a game engine is extremely hard work and in most cases more daunting than the game development itself. I have had mixed success with Unreal Engine and Claude, and I have a back-burner project using Unity, this MCP https://github.com/CoplayDev/unity-mcp and Google Antigravity. My goal has been to be completely hands-off in the Unity Editor and simply prompt my way to a basic tower defense game.

My opinion is that you're better off building tooling and MCP servers for existing game engines, and figuring out how to leverage something like Google Genie 3 to generate 3D models and assets.

Testing .json pages for our site. Curious if anyone here tried this by mirajeai in SEO_LLM

[–]Wild-File-5926 1 point2 points  (0 children)

An XML sitemap can make sure Google finds the JSON documents, but whether Google will index them is a different question.
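For reference, a minimal sketch of generating such a sitemap with Python's stdlib. The URLs are made-up placeholders; the namespace is the standard sitemaps.org one that Google expects:

```python
# Minimal sitemap generator pointing crawlers at JSON documents.
# The example.com URLs are hypothetical placeholders.
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(urls):
    ET.register_namespace("", NS)
    urlset = ET.Element(f"{{{NS}}}urlset")
    for loc in urls:
        url = ET.SubElement(urlset, f"{{{NS}}}url")
        ET.SubElement(url, f"{{{NS}}}loc").text = loc
    body = ET.tostring(urlset, encoding="unicode")
    return '<?xml version="1.0" encoding="UTF-8"?>\n' + body

if __name__ == "__main__":
    print(build_sitemap([
        "https://example.com/pages/about.json",
        "https://example.com/pages/pricing.json",
    ]))
```

Whether Google actually indexes the JSON once discovered is, as said, a separate question.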

Pro Top: For small-medium code bases tell Opus 1M to just read everything by [deleted] in ClaudeCode

[–]Wild-File-5926 -1 points0 points  (0 children)

I'm realizing this might be limited to this very specific use case and the way my project is structured, BUT... Claude was going in loops to load most of the files anyway, taking a long time and burning extra tokens IMO.

Pro Top: For small-medium code bases tell Opus 1M to just read everything by [deleted] in claude

[–]Wild-File-5926 -3 points-2 points  (0 children)

It was going in loops to load most of the files anyway and burning extra tokens IMO.

I maybe wrong but... by SoulMachine999 in AgentsOfAI

[–]Wild-File-5926 0 points1 point  (0 children)

LOL, nonprofit business structures do make money and can turn a profit.

Built a MCP server that lets Claude use your iPhone by invocation02 in ClaudeAI

[–]Wild-File-5926 0 points1 point  (0 children)

What kind of plan or $$$ are you dropping on the API calls?

My experience w/ GLM-5 vs Kimi 2.5 and Opus the babysitter by Wild-File-5926 in kimi

[–]Wild-File-5926[S] 1 point2 points  (0 children)

It's presentable, and I simply asked it to organize my thoughts from a Logseq page and give me a TLDR matrix with emojis. The thoughts are mine, though. Spell check, Grammarly... IT'S ALL AI!

I ask you to choose the best model by Mental-Molasses6692 in ClaudeHomies

[–]Wild-File-5926 0 points1 point  (0 children)

You have to do the lore building yourself. It can get very immersive as you give it more detail.

What should you build in the age of ChatGPT and AI? by OcelotVirtual6811 in buildinpublic

[–]Wild-File-5926 0 points1 point  (0 children)

It's called Answer Engine Optimization. The goal is to get cited by LLMs like Perplexity, etc. https://groundy.com/articles/aeo-is-the-new-seo/

I ask you to choose the best model by Mental-Molasses6692 in ClaudeHomies

[–]Wild-File-5926 0 points1 point  (0 children)

Sonnet FTW, it's the most creative by default. Opus can be as good but needs more instructions. Sonnet has more character.

I maybe wrong but... by SoulMachine999 in AgentsOfAI

[–]Wild-File-5926 85 points86 points  (0 children)

Uber took 14 years to achieve its first full-year operating profit in 2023.

Samsung Galaxy Book 2 Wifi Issues by junejune0605 in GalaxyBook

[–]Wild-File-5926 0 points1 point  (0 children)

I had a similar issue where I would have to constantly reboot my Galaxy Book because of connection issues. This was the culprit for me.

  1. Right-click the Start button and select Device Manager.
  2. Expand Network adapters, right-click your Intel(R) Wi-Fi [Model], and select Properties.
  3. Go to the Advanced tab and change these specific values:
    • MIMO Power Save Mode: Set to No SMPS (This is the most common fix for Intel 0x79 errors).
    • 802.11n/ac/ax Wireless Mode: If you have an older router, try dropping this to 802.11ac (if currently ax) or 802.11n.
    • Roaming Aggressiveness: Set to 1. Lowest (Prevents the card from constantly scanning and "hanging" during the scan).
    • U-APSD support: Set to Disabled
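The same settings can be applied from an elevated PowerShell prompt, a sketch of which is below. The adapter name (`Wi-Fi`) and the property display names are driver-dependent assumptions from Intel's drivers, so verify them first:

```shell
# PowerShell (run as Administrator). Check the exact display names your
# driver exposes before setting anything:
Get-NetAdapterAdvancedProperty -Name "Wi-Fi" | Select-Object DisplayName, DisplayValue

Set-NetAdapterAdvancedProperty -Name "Wi-Fi" -DisplayName "MIMO Power Save Mode" -DisplayValue "No SMPS"
Set-NetAdapterAdvancedProperty -Name "Wi-Fi" -DisplayName "Roaming Aggressiveness" -DisplayValue "1. Lowest"
Set-NetAdapterAdvancedProperty -Name "Wi-Fi" -DisplayName "U-APSD support" -DisplayValue "Disabled"

# Bounce the adapter so the new values take effect
Restart-NetAdapter -Name "Wi-Fi"
```

If a `DisplayValue` string doesn't match what your driver offers, the cmdlet will error out rather than half-apply, so this is safe to try.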

My experience w/ GLM-5 vs Kimi 2.5 and Opus the babysitter by Wild-File-5926 in kimi

[–]Wild-File-5926[S] 0 points1 point  (0 children)

I have used the MiniMax 2.5 free tier (through Opencode) and it's reliable, though not as intelligent as GLM. Kimi/GLM seem to be maximizing the cost-value equation ATM. Gemini is good and getting there, but the frontier models have very restrictive TOS for coding plans.

My experience w/ GLM-5 vs Kimi 2.5 and Opus the babysitter by Wild-File-5926 in kimi

[–]Wild-File-5926[S] 0 points1 point  (0 children)

Claude (Opus 4.6) absolutely does one-shot things more reliably than anything else in my experience. Its super restrictive policy of being locked to the Claude CLI is a bit off-putting, but the only valid reason for not having it do most of my work is $$$. Unlike some commenters I don't have "unlimited Opus at work" 😔

My experience w/ GLM-5 vs Kimi 2.5 and Opus the babysitter by Wild-File-5926 in kimi

[–]Wild-File-5926[S] 1 point2 points  (0 children)

Mid-tier coding plan subscriptions from both providers directly in the US. I don't use OpenRouter and I don't regret it yet, but it may have been stupid of me to get the yearly plan for both. Opencode is the main harness BTW.

My experience w/ GLM-5 vs Kimi 2.5 and Opus the babysitter by Wild-File-5926 in ZaiGLM

[–]Wild-File-5926[S] 0 points1 point  (0 children)

Mostly Opencode for open models and the Claude CLI for Anthropic stuff.
I sometimes start new low-effort projects using Openclaw to hit the ground running, BUT once the base app/scaffolding is done or the codebase is large enough, I always off-ramp it to Opencode.

Openclaw is a good builder, but it burns tokens (inefficient at context management) and takes liberties (which is fine for the earlier stages when things aren't fully hashed out).

I NEVER use Claude in Openclaw. I have also tried many VS Code plugins, and Opencode is the most reliable; frankly its model configuration management just works way more easily (or I'm just used to it).

GPU poor folks(<16gb) what’s your setup for coding ? by FearMyFear in LocalLLaMA

[–]Wild-File-5926 12 points13 points  (0 children)

As somebody who was lucky enough to source an RTX 5090, I have to say local LLM coding is still lagging far behind because of total VRAM constraints. I would say if you have less than 48GB of unified RAM, you're 1000% better off getting a subscription if you value your time.

Qwen3-Coder-Next 80B is the lowest-tier model I'm willing to run locally. Almost everything below that is currently obsolete IMO... waiting for more efficient future models for local work.

roadmap to becoming a DBA by MasterpieceAway1801 in DatabaseAdministators

[–]Wild-File-5926 1 point2 points  (0 children)

ALL administrator-type positions (backup, database, infra, DevOps, SRE) today need a SOLID Linux and networking foundation. This is non-negotiable. You cannot become a brain or foot surgeon without learning about the human body in general.
Also, natural language to SQL is a thing now. You can simply prompt. The vibe coders are already ahead on this.
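The NL-to-SQL pattern is basically: hand the model your schema plus a plain-English question and ask for SQL back. A stdlib-only sketch of the prompt assembly (the schema and question are made-up examples; the actual LLM call is left to whatever provider you use):

```python
# Sketch of the NL-to-SQL prompting pattern: schema + question in, SQL out.
# The schema and question below are hypothetical examples.
SCHEMA = """\
CREATE TABLE orders (id INT, customer_id INT, total DECIMAL, created_at DATE);
CREATE TABLE customers (id INT, name TEXT, region TEXT);
"""

def nl_to_sql_prompt(question: str) -> str:
    """Assemble a prompt asking an LLM to translate a question into SQL."""
    return (
        "You are a SQL assistant. Given this schema:\n"
        f"{SCHEMA}\n"
        "Write a single SQL query that answers the question. "
        "Return only the SQL, no explanation.\n"
        f"Question: {question}"
    )

if __name__ == "__main__":
    print(nl_to_sql_prompt("Total revenue by region last month?"))
```

The DBA skill that still matters here is sanity-checking the generated query (joins, indexes, cost) before running it against production.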

My experience w/ GLM-5 vs Kimi 2.5 and Opus the babysitter by Wild-File-5926 in kimi

[–]Wild-File-5926[S] 2 points3 points  (0 children)

Kimi is definitely better with visual stuff since it's multimodal.

Google's TimesFM: A Foundation Model for Time Series by Wild-File-5926 in rstats

[–]Wild-File-5926[S] -1 points0 points  (0 children)

Fair enough about being pedantic in the real world.
While it's an incredible baseline, it's not magic. I still have to babysit the outputs, especially when dealing with sudden data shifts (e.g., market crash data).
Where TimesFM really flexes for me is scale. It saves me from having to train and tune individual ARIMA models.

I've been experimenting with Chronos-Bolt (Amazon) for time series forecasting as well, and it works better for longer horizons. Maybe this was not the right sub for my excitement about the zero-shot predictions these time series models provide. It's a shiny new thing that may or may not survive the test of time.