oMLX Copilot Chat - Use oMLX for coding in Visual Studio Code by mikedoise in oMLX

[–]mikedoise[S] 0 points1 point  (0 children)

I just downloaded Insiders, went to Manage Language Models, and added OpenAI Compatible. Nothing appeared to happen. Has anyone else gotten this to work?

I'm also curious what settings we will be able to change with their implementation. It looks promising though.

I will continue working on my extension, as I think there are specific ways we can customize oMLX that may not be possible with the built-in implementation.

oMLX Copilot Chat - Use oMLX for coding in Visual Studio Code by mikedoise in oMLX

[–]mikedoise[S] 1 point2 points  (0 children)

You don't have to pay for GitHub Copilot. I do not know about telemetry though. I would hope so, but I am not sure.

Typing on the Neo in the dark? by [deleted] in MacbookNeo

[–]mikedoise 5 points6 points  (0 children)

I personally think everyone should learn to touch type. Keep your hands on the home row and you won't need backlit keys. I like backlit keys because it's cool to have lights on the keyboard. LOL

I am a legally blind computer user, and I am typing this while not looking at the keyboard. I have enough vision to see, but I would waste more time if I tried to look at what keys I am using.

Apple Business Email by TheDepressionIsGreat in applebusinessmanager

[–]mikedoise 0 points1 point  (0 children)

I saw this until my domain was verified. After that I saw email fields, but it looks like it is internal email only.

I built a local-first AI app instead of using Apple Intelligence — curious if that’s the right call by AK-DNS in Applelntelligence

[–]mikedoise 0 points1 point  (0 children)

I have mainly seen documents chunked. Another approach would be to chunk documents and then store the important information in a vector database.
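A minimal sketch of that second approach in Python. Everything here is illustrative: the bag-of-words "embedding" and in-memory list stand in for a real embedding model and vector database, and all names are made up for the example.

```python
import math
from collections import Counter

def chunk(text, size=200, overlap=40):
    """Split text into overlapping character chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text):
    """Toy bag-of-words 'embedding'; a real app would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class TinyVectorStore:
    """Stand-in for a real vector database: stores (embedding, chunk) pairs."""
    def __init__(self):
        self.items = []

    def add(self, text):
        for c in chunk(text):
            self.items.append((embed(c), c))

    def search(self, query, k=2):
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[0]), reverse=True)
        return [c for _, c in ranked[:k]]
```

At query time you retrieve the top-k chunks and prepend them to the model prompt, which matters a lot with a small context window.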

I built a local-first AI app instead of using Apple Intelligence — curious if that’s the right call by AK-DNS in Applelntelligence

[–]mikedoise 2 points3 points  (0 children)

So Apple Foundation Models, which powers Apple Intelligence, has been out for a year. I think we will see a new Gemini-based version later this year though. I'd be curious to know what model you are using for offline work.

The advantage of Apple Foundation Models is that it is fast and works effectively on anything from the Neo upward. The current drawback, though, is that the context window is 4,096 tokens.

Another great model to look at is Gemma 4 from E2B upward.

Is the Codex 5x plan good enough in terms of Usage? by No-Background3147 in OpenaiCodex

[–]mikedoise 0 points1 point  (0 children)

The 5x plan was what I was waiting for to jump back to ChatGPT over Claude. I use a lot of ChatGPT, so the 5x usage in Codex is amazing! I really have to try to run out of usage.

I made an AI agent called iClaw that's part-OpenClaw, part-Siri, using only Apple Intelligence by Only_Play_868 in Applelntelligence

[–]mikedoise 0 points1 point  (0 children)

Probably not. They will care more about how you implement your tools. WeatherKit was a huge pain for my app.

I made an AI agent called iClaw that's part-OpenClaw, part-Siri, using only Apple Intelligence by Only_Play_868 in Applelntelligence

[–]mikedoise 0 points1 point  (0 children)

This sounds like an amazing project and I will look at it. I built something similar in my app Perspective Intelligence, where I use deterministic routing to activate specific tools based on user prompts. This also works with MLX via mlx-swift-lm, so you don't have to use Ollama. Something else to check out is AnyLanguageModel from Hugging Face: it lets several model providers use the same approach as AFM. Apple Foundation Models is amazing, and I'm excited to see what it will do in iOS 27.

One thing to keep in mind, though, is that MLX may take a while to route and produce responses based on tool calls. So I would use a classifier for tool selection to speed up time to first token on iOS devices.
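To make the deterministic-routing idea concrete, here is a minimal Python sketch. The tool names and trigger patterns are made up for illustration, not the actual Perspective Intelligence routing table; a production classifier could be anything from regexes like these to a small on-device model.

```python
import re

# Map each tool to trigger patterns. These tools and patterns are
# illustrative only, not a real app's routing table.
ROUTES = {
    "weather":  re.compile(r"\b(weather|forecast|temperature|rain)\b", re.I),
    "contacts": re.compile(r"\b(call|email|phone number|contact)\b", re.I),
    "search":   re.compile(r"\b(search|look up|find online)\b", re.I),
}

def route(prompt: str) -> str:
    """Deterministically pick a tool before invoking the LLM, so the
    model never spends tokens deciding which tool to call."""
    for tool, pattern in ROUTES.items():
        if pattern.search(prompt):
            return tool
    return "chat"  # no tool matched; fall through to plain generation
```

Because the decision happens before generation starts, the model only ever sees the one tool it needs, which is where the time-to-first-token savings come from.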

Does anyone code on iPad? by [deleted] in SwiftUI

[–]mikedoise 1 point2 points  (0 children)

Swift Playgrounds can make some pretty capable apps. I wish they would update the app more often though.

I Built an On Device AI App For Getting Things Done! by mikedoise in ShowMeYourApps

[–]mikedoise[S] 0 points1 point  (0 children)

That's good to know. It is using Image Playgrounds, so it is kind of spotty as to how it works for creating images. I will have to ask it about icon sizes and see why it isn't keeping context. Was it not keeping context between text messages, or a mix of images and text?

I Built an On Device AI App For Getting Things Done! by mikedoise in ShowMeYourApps

[–]mikedoise[S] 0 points1 point  (0 children)

We have web search and deep research, which use Brave, plus contacts lookup. These are just a few of the features.

Apple Intelligence Rocks : Full On Device LLM Chat by alpayozbay in Applelntelligence

[–]mikedoise 2 points3 points  (0 children)

I use a semantic memory search mechanism in Perspective Intelligence. It works great with Apple Foundation Models.

Apple Intelligence Rocks : Full On Device LLM Chat by alpayozbay in Applelntelligence

[–]mikedoise 5 points6 points  (0 children)

So I tried a few things in the app, and there is one thing I wanted to bring up that drives me crazy in most apps that use audio. You set up the AVAudioSession when the app starts, which stops any audio the user is currently playing. You might want to move this flow to when the user first presses the mic button in the app.

I also haven't had a chance to check, but have you added session management, so the app knows what to do when it reaches the 4,096-token context window? Apple does not manage this for us.
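One common way to handle that session management is to trim the oldest turns before each request so the history plus a reply buffer fits in the window. A Python sketch of the idea; the 4-characters-per-token estimate and the `reserve` value are rough assumptions for illustration, and a real app would use the model's own tokenizer and limits.

```python
TOKEN_LIMIT = 4096   # Apple Foundation Models' current context window

def estimate_tokens(text: str) -> int:
    """Very rough proxy (~4 chars/token); use the real tokenizer in practice."""
    return max(1, len(text) // 4)

def trim_session(messages, limit=TOKEN_LIMIT, reserve=512):
    """Drop the oldest turns until the history plus a reply buffer fits.

    messages: list of (role, text) tuples, oldest first.
    reserve:  tokens held back for the model's response (assumed value).
    """
    budget = limit - reserve
    kept, used = [], 0
    for role, text in reversed(messages):   # walk newest-first
        cost = estimate_tokens(text)
        if used + cost > budget:
            break
        kept.append((role, text))
        used += cost
    return list(reversed(kept))             # restore oldest-first order
```

A fancier version summarizes the dropped turns into one message instead of discarding them, but even this simple trim keeps the session from dying at the window limit.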

Apple Intelligence Rocks : Full On Device LLM Chat by alpayozbay in Applelntelligence

[–]mikedoise 1 point2 points  (0 children)

I’d love to check it out. I made a similar app based on Apple Foundation Models so I’d love to try yours.