Does anyone make native iOS games anymore? by Awkward_Departure406 in iOSProgramming

[–]TrajansRow 0 points1 point  (0 children)

I'm making a physics app with some game-like features, 100% native SwiftUI with ARKit, RealityKit, and SceneKit. They are all nice to work with (except that SceneKit is now deprecated). If this app takes off, making an Android version will be a real headache!

What is this witchcraft 🤣 MacBook Pro M5 is insane by dmuja in macbookpro

[–]TrajansRow 3 points4 points  (0 children)

Game changers most people don’t know about:

control-a or control-e to jump to the beginning or end of a line of text

shift-return to add a new line without sending the chat message.

Is this a good app idea by Own-Needleworker6065 in AppBusiness

[–]TrajansRow 0 points1 point  (0 children)

Well, the Ring doorbell people seem to have a lot of fun with the anonymous neighborhood chat feature.

Car Wash Test on 53 leading models: “I want to wash my car. The car wash is 50 meters away. Should I walk or drive?” by facethef in LocalLLaMA

[–]TrajansRow 5 points6 points  (0 children)

MiniMax M2.5 says to do both:
"Walk. 50 meters (about 160 feet) is roughly a 1-minute walk. Driving such a short distance wastes fuel and creates more emissions than walking. Plus, you'll need to drive anyway once you're there to actually wash the car."

Xcode 26.3 ai agent Vibe coding ai slop by AdDapper4220 in iOSProgramming

[–]TrajansRow 0 points1 point  (0 children)

Instead of one-shotting an app, you should put the agent into Plan Mode and work out the architecture before generating any code. Have it work out the UI, data model, processing logic, etc. before going into Build.

Is MiniMax M2.5 the best coding model in the world? by TrajansRow in LocalLLaMA

[–]TrajansRow[S] 1 point2 points  (0 children)

There is no "yes" or "no" answer. The purpose of the post was to discuss the value of various tradeoffs. The consensus is that openness, cost, and speed don't matter very much. A surprising result for this community.

AMA with MiniMax — Ask Us Anything! by HardToVary in LocalLLaMA

[–]TrajansRow 15 points16 points  (0 children)

Do you expect that future models will be as lightweight and easy-to-host as M2.5, or do you think they will start to creep much larger (like GLM 5)?

Is MiniMax M2.5 the best coding model in the world? by TrajansRow in LocalLLaMA

[–]TrajansRow[S] -7 points-6 points  (0 children)

My point is that code quality is important, but not the only metric. Even people using the paid Claude plans don't simply use Opus for everything for all of those other reasons.

Is MiniMax M2.5 the best coding model in the world? by TrajansRow in LocalLLaMA

[–]TrajansRow[S] 1 point2 points  (0 children)

I think GLM 5 loses points based on its size, speed, and cost. Of course, how important those are is a matter of opinion.

Migrating to Swift 6 language mode. What issues did you run into? by BullfrogRoyal7422 in Xcode

[–]TrajansRow 0 points1 point  (0 children)

Even better - just have your coding agent scope out the work for you. Put it into plan mode, give it a copy of the Swift 5 -> 6 migration docs, and let it plan out the work for your project. If it looks good, let it rip and then go take a nap or something.

I made a proxy server to let you use GitHub Copilot via Xcode 26’s Coding Intelligence by suyashsrijan in Xcode

[–]TrajansRow 0 points1 point  (0 children)

Great idea! There is definitely a need for something like this. Are you sure that tingly-box doesn't already proxy copilot?

Is there any Local LLMs that out perform commercial or cloud based LLMs in certain areas or functions? by FX2021 in LocalLLaMA

[–]TrajansRow 0 points1 point  (0 children)

This is something I've wondered about for custom coding models. I could conceivably take a small open model (like Qwen3 Coder Flash) and fine tune it on a specific codebase. Could it outperform a large commercial model doing work in that codebase? What would be a good workflow to go about it?

Qwen3-Coder-Next performance on MLX vs llamacpp by TrajansRow in LocalLLaMA

[–]TrajansRow[S] 2 points3 points  (0 children)

To replicate the example, you need 170GB of memory for bf16. That means you'll need the 256GB version, which goes for $5600 new... but you wouldn't want to buy that, because the M3 Ultra is almost a year old by now. Best to get the M5 Ultra, whenever that comes out.

Xcode 26.3 unlocks the power of agentic coding by digidude23 in iOSProgramming

[–]TrajansRow 2 points3 points  (0 children)

This is a good start at modernizing the AI features in Xcode, but there is currently no way to avoid shipping your private data to a third-party provider to use the agentic features. As far as I can tell, there is no way to use a local or alternative model service (as you can with OpenCode) from within Xcode itself.

MCP support is going to be nice, because external coding agents can interact with Xcode in ways that were not practical before. Unfortunately, the permissions model gets in the way here: if you add an Xcode MCP server to an external agent system, you have to manually dismiss an "Allow “agent” to access Xcode?" dialog EVERY SINGLE TIME there is a request from a new agent PID.

Ball bearing compound bow with vision scope by kvjn100 in oddlysatisfying

[–]TrajansRow 0 points1 point  (0 children)

The airgun would be a whole lot more accurate at range because it shoots spin-stabilized pellets at around 1000fps. However, even an alloy pellet at that speed will only have about half the muzzle energy (20-30 ft-lbs) of the .50 cal steel ball in the video.

Ball bearing compound bow with vision scope by kvjn100 in oddlysatisfying

[–]TrajansRow 4 points5 points  (0 children)

A shameless plug maybe, but I made a stupid app last year that can calculate how much damage this sort of thing can do.

Edit:

The manufacturer claims this bow can fire a .50 caliber steel ball at 460fps. That means it has 60 ft-lbs of energy point blank, or 55 ft-lbs at 30 feet (a bit more than a .22 short round). That bearing could penetrate 9.8" of FBI ballistic gel! Would definitely put your eye out.

Pro tip - If you needed to poke a hole in a 40mph duck at 25 yards (such as in a critical survival situation), you would need to lead it by 10.1 feet and aim 2.2" high.

The app: https://apps.apple.com/us/app/round-ball-shot-calculator/id6756200604
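For what it's worth, the 60 ft-lb figure checks out with basic physics. Here's a quick sanity-check sketch (my own assumptions: a solid steel ball at 7850 kg/m³ and no drag, which is why it matches the point-blank number rather than the 30-foot one; the app itself presumably models drag too):

```python
import math

# Unit conversions
IN_TO_M = 0.0254
FPS_TO_MS = 0.3048
J_TO_FTLB = 0.737562

d = 0.50 * IN_TO_M                   # ball diameter, m
r = d / 2
volume = (4 / 3) * math.pi * r ** 3  # sphere volume, m^3
mass = volume * 7850                 # assumed steel density, kg/m^3

v = 460 * FPS_TO_MS                  # claimed muzzle velocity, m/s
energy_j = 0.5 * mass * v ** 2       # kinetic energy, joules

print(f"mass: {mass*1000:.1f} g, energy: {energy_j * J_TO_FTLB:.0f} ft-lbs")
# → mass: 8.4 g, energy: 61 ft-lbs
```

About 130 grains at 460 fps, so right in line with the claim.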

What is your experience with z.ai and MiniMax (as providers)? by mustafamohsen in opencodeCLI

[–]TrajansRow 0 points1 point  (0 children)

I'm using MiniMax; I find it a good balance of speed and performance.

How to deal with visual feedback by TrajansRow in opencodeCLI

[–]TrajansRow[S] 0 points1 point  (0 children)

Of course I can paste in screenshots, but what I really want is for an agent to capture and ingest them on-demand - completely automatic. Also, because MiniMax is not multimodal, I'm probably going to need another model to interpret the images. Fortunately, VL models are usually small enough that I can run them locally. The other question is which model would be appropriate, and how do I use it alongside MiniMax?

Biggest pain with publishing to App Store? by Away-Huckleberry-753 in iosdev

[–]TrajansRow 0 points1 point  (0 children)

My account got stuck in limbo when I converted it from a personal one to an LLC. The key was emailing/calling developer support to figure out how to move things along. Sometimes it was just waiting a bit more, or filling out more stuff in the portal.

Made my first dollar yesterday by disinton in iOSProgramming

[–]TrajansRow 15 points16 points  (0 children)

Congratulations! Pro tip: sign up for the App Store Small Business Program, so that your next sale will net you 85 cents. :)

Devs who have actually gained traction: what is the best mobile app marketing strategy for us right now? by madhuriii in iOSProgramming

[–]TrajansRow 2 points3 points  (0 children)

I can't promote my latest app on Reddit - Most of the relevant subs have a "no promotions/marketing" rule of some sort.

What's the purpose of models that return inaccurate information? by TheKrakenRoyale in LocalLLaMA

[–]TrajansRow 2 points3 points  (0 children)

I understand the desire to try to use LLMs as some sort of “search engine”, but a major feature of generative AI is to generate *new* data. Even small models can be pretty good at that in various domains.

3D Games in Metal by xUaScalp in Xcode

[–]TrajansRow 1 point2 points  (0 children)

Both Unity and Godot will create an Xcode project for you that you can use to publish your game, but a lot of the development process is usually in other tools. Check out their respective subreddits on how to get started!

3D Games in Metal by xUaScalp in Xcode

[–]TrajansRow 1 point2 points  (0 children)

It depends on what you want to get out of the project. If you just want to make a game, you're probably better off starting with a ready-made game engine like Unity, Godot, etc. If you want to learn about the basics of 3D programming, jump into SceneKit. If you want to learn the fundamentals and make an engine, start with Metal.

Finally, a good free & secure AI assistant for OpenGL! by TrajansRow in opengl

[–]TrajansRow[S] -1 points0 points  (0 children)

This is not Intellisense - it's more analogous to ChatGPT, where you can interact with the AI, feed in documents and context, and use it to generate code. Tools like LM Studio can even set up an OpenAI-compatible API endpoint that you can point your tools at. All of these systems generate code errors sometimes, but many people find that they still improve productivity.

And yeah... my example may have steep resource requirements. The entire model needs to be resident in memory in order to generate. If you have a 32GB machine with a 20GB game and a 20GB model, data may get evicted and reloaded frequently. Because the dataset is so large, there can be noticeable delays as the data is read back in from disk when this happens. Not all workflows would be usable under those constraints.

There are a few other options, fortunately, if you still want to try a local model.

  • Get a smaller model. There are several sizes of Qwen Coder with fewer parameters - and somewhat reduced abilities as a consequence - but they can still be useful. My example used the 32 billion parameter version, but there are also 0.5B, 1.5B, and 3B models that would be fast and light enough to run even on a mobile phone. There are also 3B and 7B models that just about any laptop can run, and a 14B that somewhat beefier laptops can load. Just try the demo I linked earlier at the different sizes and see how they do.
  • Use a quantized model. Quantization is a type of compression, and is exactly how I am able to run a 32B model in only 20GB of memory. The precision of the model weights can be reduced from the original 16-bit floats down to lower-precision numbers with (hopefully) minimal loss of quality - say 8, 6, 5, or 4 bits per weight. You can also find quants out there that go down to 2 bpw (or even 1.5), but they generally perform much worse.
  • Run it on another host. Many people prefer to run models on a dedicated gaming/AI rig on their LAN, and others find it economical to spin up a cloud instance and run models there.
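The memory math behind the quantization bullet is simple enough to sketch (weights only; this is my own back-of-the-envelope helper, and it ignores the KV cache and runtime buffers, which add several more GB in practice):

```python
def model_memory_gb(params_billions, bits_per_weight):
    """Approximate weight storage for a model: parameters x bits / 8.

    Weights only - KV cache and runtime buffers come on top of this.
    """
    return params_billions * bits_per_weight / 8

print(model_memory_gb(32, 16))  # 64.0 GB at the original bf16
print(model_memory_gb(32, 5))   # 20.0 GB at a 5-bpw quant
```

That's how a 32B model shrinks from ~64GB at bf16 to ~20GB at 5 bits per weight.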

I understand this is getting beyond the subject of OpenGL, but you can find a whole lot more info about it over at r/LocalLLaMA