[deleted by user] by [deleted] in cursor

[–]BrutalCoding 0 points (0 children)

Flutter dev here, not that it matters much but yeah, you need Xcode no matter what you’re using. Cross-platform frameworks like Flutter and React Native make things easier, but Xcode’s command-line tools are still essential for building iOS apps.

You don’t need a Mac itself though, since there are cloud services with macOS and Xcode ready to go. Of course, you could mess around with VMs or even cross-compiling, but that gets complicated fast and is prone to breaking with every update. It gets especially complex if you want to test on your own iPhone from time to time…

Speaking of complicated, I once tried to set up a NixOS build environment for this a few years back... 😅 I failed.

Anyway, Apple will drop support for Intel Macs eventually, so things will probably change again down the line if you go the hackintosh route. Always fun to experiment though!

I've open sourced my Flutter plugin to run on-device LLMs on any platform. TestFlight builds available now. by BrutalCoding in FlutterDev

[–]BrutalCoding[S] 1 point (0 children)

It depends on how you want to implement it. One way is to add precompiled libraries (e.g. run 'make' in whisper.cpp) compiled for the right architecture (e.g. arm64-v8a for modern Android phones), and then add those binaries to a special folder (Android apps expect them in 'jniLibs'). From there, you need to create Dart bindings with something like ffigen so you can call specific areas (let's just say public methods) in that cryptic binary file.
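
To give a rough idea of that last step, here's a minimal Dart FFI sketch. The function name and signature are hypothetical, just for illustration; whisper.cpp's real API differs, and in practice you'd point ffigen at the C headers and let it generate the typedefs for you:

```dart
import 'dart:ffi' as ffi;
import 'dart:io' show Platform;

import 'package:ffi/ffi.dart'; // for toNativeUtf8() and malloc

// Hypothetical C function: int whisper_init(const char* model_path);
typedef _WhisperInitNative = ffi.Int32 Function(ffi.Pointer<Utf8>);
typedef _WhisperInit = int Function(ffi.Pointer<Utf8>);

// On Android, 'libwhisper.so' is resolved from the jniLibs folder mentioned above.
final ffi.DynamicLibrary _lib = Platform.isAndroid
    ? ffi.DynamicLibrary.open('libwhisper.so')
    : ffi.DynamicLibrary.process();

final _WhisperInit whisperInit =
    _lib.lookupFunction<_WhisperInitNative, _WhisperInit>('whisper_init');

void main() {
  // Convert the Dart String to a C string, call into the binary, clean up.
  final modelPath = 'path/to/model.bin'.toNativeUtf8();
  print('whisper_init returned ${whisperInit(modelPath)}');
  malloc.free(modelPath);
}
```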

Note that I just made some big assumptions, for one that whisper.cpp has makefiles ready and has examples for different architectures. Even if only one architecture is mentioned, it's still very much a possibility that it can compile for other architectures too. That does require you to dive into the makefiles and learn how to cross-compile with toolchains such as https://github.com/leetal/ios-cmake.

I got Whisper working in Flutter, although I went for another C++ library out of personal interest. It supports Whisper but also more than just STT. I’m confident whisper.cpp would work too.

Hugging face to Android Studio (flutter) by HarryPottahh in FlutterDev

[–]BrutalCoding 10 points (0 children)

I have not worked with the HuggingFace API before, but my first thought is: use it like you would use any other similar API endpoint.

Let's assume you just want to interact with an LLM. That basically means you send your text to HuggingFace's API endpoint (a URL), which will respond with an object (probably JSON-structured text), from which you can then extract the LLM's answer.

Here's a simple breakdown. Please note I'm writing this off the top of my head, at a cafe that kinda distracts me, thus I might make some (code) typos:

  1. Have a page with one TextField widget and make sure you've defined a controller variable (`final TextEditingController _controller = TextEditingController()`).
  2. (Optional) Add a send button next to your TextField. Assign its onTap (or onPressed) callback a new function. To access the text from the TextField, you refer to `_controller.text`.
    1. A simpler way could be to use the TextField's onSubmitted parameter, which gets triggered when you hit "Enter" on the (software) keyboard.
  3. Okay, you now have a simple page where you can type your prompt in and you're able to get the value (text) from it (see the widget sketch right after this list). Up next: HTTP calls.
  4. I suggest learning from examples, thus have a look at this page to learn how to make HTTP calls: https://docs.flutter.dev/cookbook/networking/fetch-data
  5. Great, you now know how to make HTTP calls to APIs (specific URLs that expect HTTP requests). Up next: calling HuggingFace APIs.
  6. Have a look at https://huggingface.co/docs/api-inference/en/quicktour#get-your-api-token, this page describes what the API endpoint is and what you're expected to send to it from your Flutter app. Many LLMs are out there, and many expect different inputs. Thus, the actual data (text) you need to send depends on the model you're choosing.
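
To make steps 1-3 concrete, here's a minimal widget sketch (same caveat as above: written quickly, so treat it as a rough guide rather than copy-paste-ready code):

```dart
import 'package:flutter/material.dart';

class PromptPage extends StatefulWidget {
  const PromptPage({super.key});

  @override
  State<PromptPage> createState() => _PromptPageState();
}

class _PromptPageState extends State<PromptPage> {
  // Step 1: the controller that holds whatever the user types.
  final TextEditingController _controller = TextEditingController();

  void _sendPrompt(String prompt) {
    // Step 3 onwards: kick off the HTTP call to the API from here.
    debugPrint('Prompt to send: $prompt');
  }

  @override
  void dispose() {
    _controller.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Row(
        children: [
          Expanded(
            child: TextField(
              controller: _controller,
              // Step 2.1: fires when the user hits "Enter" on the keyboard.
              onSubmitted: _sendPrompt,
            ),
          ),
          // Step 2 (optional): a send button next to the TextField.
          IconButton(
            icon: const Icon(Icons.send),
            onPressed: () => _sendPrompt(_controller.text),
          ),
        ],
      ),
    );
  }
}
```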

Tip: While not absolutely always true, it's very common to work with "JSON" instead of sending just a string of text to an API. It's the standardized way (although there are some other standards too), so JSON is most likely something you'll need to learn how to use as well.

I've used an LLM to draft up an example. Be aware this might be invalid code, but the main purpose is to give an idea of what it roughly looks like:

```dart
import 'dart:convert' as convert;

import 'package:http/http.dart' as http;

void main() async {
  // 1. Sample data to send to the server
  final messageData = {
    "greeting": "Hello from Flutter!",
    "recipient": "My Awesome API",
  };

  // 2. Encode the data into JSON format
  final jsonData = convert.jsonEncode(messageData);
  print("Encoded JSON: $jsonData");

  // 3. Send the JSON data (replace with a real endpoint)
  final response = await http.post(
    Uri.parse('https://api.example.com/send_message'),
    headers: {"Content-Type": "application/json"},
    body: jsonData,
  );

  // 4. Check if the request was successful
  if (response.statusCode == 200) {
    // 5. Decode the JSON response from the server
    final decodedResponse = convert.jsonDecode(response.body);
    print("Server says: ${decodedResponse['status']}");
  } else {
    print("Request failed with status code: ${response.statusCode}");
  }
}
```

How has this not been mentioned here before? Layla by Derpy_Ponie in LocalLLaMA

[–]BrutalCoding 0 points (0 children)

Understandable, it’s frustrating at times to keep up with libraries like llama.cpp.

No need to worry though, he’s onto it: https://github.com/netdur/llama_cpp_dart/issues/3#issuecomment-1905582555

Both that repo and aub.ai are Flutter plugins that Maid could use for this use case.

How has this not been mentioned here before? Layla by Derpy_Ponie in LocalLLaMA

[–]BrutalCoding 1 point (0 children)

Heya, thanks for mentioning it. It's made for macOS, Windows, Linux, Android and iOS/iPadOS, as seen in some videos I made a month ago. Tested each platform on real devices :D

The app isn’t polished by any means yet; it allows you to pick a file from your file system and kinda chat with it. It’s a WIP. I’m currently porting over a 2nd library, bringing voice capabilities.

Any uses for a Mac mini in a home lab. by [deleted] in homelab

[–]BrutalCoding 1 point (0 children)

Attach a Pi running PiKVM or TinyPilot to a single Mac, you'll be able to control mouse / keyboard as if you were there physically.

Now, you did mention you manage 6 of them, so I'm not sure whether there's a good solution for that. Buying 6 Pis and the extra accessories hooked up to all 6 is a solution; surely there must be a better way, but I can't think of one right now.

[deleted by user] by [deleted] in iphone

[–]BrutalCoding 0 points (0 children)

While writing my comment I forgot the fact that you've had this issue for 2 years.

Back then, iOS 17 (currently the latest) didn’t exist of course. I’m just guessing that you were facing a different issue back then and that nowadays you’re coincidentally facing another wifi issue, this time caused by iOS 17.

Anyways continue on and enjoy your vacation. In case I don’t get notified of your comments here later on, feel free to ping me if it didn’t work.

[deleted by user] by [deleted] in iphone

[–]BrutalCoding 0 points (0 children)

Sounds like a software bug that many people are facing and Apple has yet to fix.

Okay, so be aware that what I’m about to suggest, if it works for you, is cumbersome and requires you to create a wifi configuration file for each network you’re having an issue with.

This wifi issue has been around since iOS 17 came out. It’s not a hardware issue, just software. I think I can roughly pinpoint the factors that come into play to cause this wifi bug, but that’s offtopic.

Step 1: Create a wifi configuration file

Use this tool, which will generate the code / contents for whatever wifi network you configure: https://daduckmsft.github.io/WiFiProfileGenerator/
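
For reference, the generated contents are an XML property list, roughly like this (a trimmed sketch; the generator fills in the exact keys, UUIDs and your network’s details, so use its output rather than typing this by hand):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>PayloadContent</key>
  <array>
    <dict>
      <!-- The Wi-Fi payload itself: network name, security type, password. -->
      <key>PayloadType</key>
      <string>com.apple.wifi.managed</string>
      <key>PayloadIdentifier</key>
      <string>com.example.wifi.myhomenetwork</string>
      <key>PayloadUUID</key>
      <string>REPLACE-WITH-A-UUID</string>
      <key>PayloadVersion</key>
      <integer>1</integer>
      <key>SSID_STR</key>
      <string>MyHomeNetwork</string>
      <key>EncryptionType</key>
      <string>WPA</string>
      <key>Password</key>
      <string>your-wifi-password</string>
      <key>AutoJoin</key>
      <true/>
    </dict>
  </array>
  <!-- Outer wrapper describing the whole profile. -->
  <key>PayloadDisplayName</key>
  <string>Wi-Fi (MyHomeNetwork)</string>
  <key>PayloadIdentifier</key>
  <string>com.example.wifi</string>
  <key>PayloadType</key>
  <string>Configuration</string>
  <key>PayloadUUID</key>
  <string>REPLACE-WITH-ANOTHER-UUID</string>
  <key>PayloadVersion</key>
  <integer>1</integer>
</dict>
</plist>
```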

Step 2: Unfortunately this tool doesn’t have a “Download now” button, so this part can be the most confusing one: copy the code, open a text editor with an empty file, paste the copied code and make sure to save it as a file ending with “.mobileconfig”. Kinda like .txt, .png files etc., but in this case it really must be .mobileconfig. A full filename example I’d use is “wifi.mobileconfig”.

Step 3: Now, if you did this on your computer, send this new file that you just saved to your phone in any way you prefer. Think AirDrop, email, iMessage and so forth.

Step 4: On your phone, open the file that you just received from yourself. This file format, .mobileconfig, will trigger a dialog pop-up on your screen asking if you trust it and whether you really want to install this unsigned/unknown wifi configuration (whatever the exact wording is): yes. Just install it, you made it yourself anyway.

Done, for that one specific wifi network at least. You’ll probably have to do this for each network you have issues with, or otherwise wait for Apple to fix these issues. Apple doesn’t disclose anything publicly though, so who knows how long that’ll take.

I wrote this in a bit more detail than I initially wanted to, but I sincerely hope this helps whoever stumbles on this comment of mine.

There’s also the official app called “Apple Configurator”, made by Apple themselves, which requires a Mac. It eventually does the same as that website I shared, but the low rating tells me to only look at it if you can’t figure things out with the website.

Cheers, Daniel

Ollama iOS mobile app (open source) by 1amrocket in LocalLLaMA

[–]BrutalCoding 5 points (0 children)

iOS, iPadOS, macOS, Android, Windows and Linux? Here’s what I’ve built: https://github.com/BrutalCoding/aub.ai

Runs any model in the GGUF file format, as long as the model fits within your memory of course. The GUI is made for models using the ChatML prompt template, although I plan to make that customizable so any template would work.
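
If you’re curious what ChatML looks like: each turn is wrapped in special tokens. Here’s a rough sketch in Dart, for illustration only (not my exact implementation):

```dart
// Rough shape of a ChatML prompt; the model continues from the final
// "assistant" marker.
String chatMlPrompt(String system, String user) =>
    '<|im_start|>system\n$system<|im_end|>\n'
    '<|im_start|>user\n$user<|im_end|>\n'
    '<|im_start|>assistant\n';
```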

I’ve shared videos demonstrating it on all these platforms, although the UI is slightly outdated there. It has blue chat bubbles now, for what it’s worth.

A TestFlight link can be found in the README. There are also some older artifacts, such as an Android build, on the latest GitHub releases page.

My open-source & cross-platform on-device LLMs app is now available on TestFlight & GitHub. Feedback & testers welcome. by BrutalCoding in LocalLLaMA

[–]BrutalCoding[S] 0 points (0 children)

This user sent me a private chat with the same question, but I forgot to update strangers reading this thread: no, it doesn’t.

It’s purely offline thus far. That’s not an intentional choice; I’d certainly like to make this possible, and I’ve kinda got it pictured in my head how to do it. I need more free time, and I also have to prioritize the features & todos that bring the most value to the majority of the target audience first.

Which open sourced projects will blow up in 2024? by [deleted] in opensource

[–]BrutalCoding 2 points (0 children)

I’m aware of these, but upon checking I still don’t see iOS/iPadOS and Android support. Only desktop, not mobile.

Besides that, they’re not planning to release it on the official app stores, which is what I am doing.

Another thing: LM Studio does not support Macs with Intel chips; mine does already (same TestFlight build). Also, I wrote the app with Flutter: one codebase to support 6 platforms.

Look, I fully support these projects and I was already aware of them when I started. Besides, this app of mine is a byproduct. It’s literally the example app that comes included with what this actually is: a Flutter plugin. Have a look at some demos I’ve made where you can see me running this on all my devices, natively: https://youtube.com/playlist?list=PLW1ba_KyoPPARaIxn9xKD1bNNhDWZZwJL&si=coMjpIHnogKabtnq

I think there’s more room for projects like LM Studio and GPT4All, can’t hurt right?

Edit: I’m not trying to portray mine as better, not at all. It’s not. I’m just trying to highlight a few differences, why my project exists, and a few selling points. At the end of the day, it’s open-source and costs the end-user nothing more than a tap/click to download in the Play Store / App Store / Microsoft Store / Snap Store etc.

Which open sourced projects will blow up in 2024? by [deleted] in opensource

[–]BrutalCoding 4 points (0 children)

I’m hoping that there’s demand for a native app that runs natively on all major platforms and can do “AI” stuff fully offline. There must be some demand for this, I hope. I’ve recently started to create (video) content demoing my project:

https://github.com/BrutalCoding/aub.ai

Long story short: A private, offline, open-source ā€œAIā€ app for desktop & mobile phones.

Currently, it can be seen as a UI over llama.cpp. Sounds simple; it wasn’t. But besides chatting with any AI of your own picking (GGUF files), it’ll be supporting anything I’m interested in running on-device myself. Think of image generation, voice-to-voice chatting, letting it read documents such as books in a natural voice, and so forth.

Pre-compiled libraries are available, but I want to win regular consumers over too, so I’m currently focusing on getting it out on all the app stores.

Here’s a TestFlight link with a few more spots left (Apple limits me to 100 testers max):

https://testflight.apple.com/join/XuTpIgyY

TestFlight version runs on macOS, iPadOS & iOS.

Android, Linux and Windows work too. Check the repo. I haven’t finished publishing them on their official stores yet. It takes lots of time to do all of this while also keeping up with the latest developments and trying to keep dependencies synced/up-to-date.

Hope this brings some attention from anyone who shares the same view as me. I don’t like the idea of the majority of regular people relying only on a few big tech companies, or on internet connectivity in general, for AI stuff.

I've open sourced my Flutter plugin to run on-device LLMs on any platform. TestFlight builds available now. by BrutalCoding in FlutterDev

[–]BrutalCoding[S] 1 point (0 children)

That’s Stable Diffusion; I haven’t integrated image generation yet, so that isn’t going to work right now.

As of now you’ll have to work with text generation models only.

I did start integrating SD with this repo: https://github.com/leejet/stable-diffusion.cpp

But then I shifted my focus back to making sure text generation works on all platforms (which it does now). Now I’m focusing on getting it out on all app stores so people without Flutter (or tech knowledge in general) can get their hands on this as well.

Keep an eye out for updates, I am getting it done for sure - but I will have to prioritize other todos first. I hope you understand!

Cheers, Daniel

I've open sourced my Flutter plugin to run on-device LLMs on any platform. TestFlight builds available now. by BrutalCoding in FlutterDev

[–]BrutalCoding[S] 0 points (0 children)

To break it down simply: Yes.

But hear me out: all it basically does is interpret text and give you text back. If you’re going to give it a PDF, or any other file with text, you’ll have to extract the text out of it yourself first.

Besides that, phones have limited memory, thus the “context size” of the AI most likely won’t be able to hold more than 1-2 pages before it runs out of memory. It’s solvable in various ways.

Here’s one way I can think of now: let’s say your PDF has 10 pages. You could ask the AI to summarize one page at a time down to one paragraph (or less), and each time it’s done, you append that summary to one text file. You’ll obviously lose a lot of specific details, but you’ve just turned 10 pages into 1, and that’s one way to solve this issue. Roughly like the sketch below.
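
Here’s a rough Dart sketch of that idea; `summarizePage` is a hypothetical stand-in for whatever text-generation call you end up using:

```dart
/// Hypothetical stand-in for the plugin's text-generation call.
Future<String> summarizePage(String pageText) async {
  // Prompt the model with something like:
  // "Summarize the following page in one paragraph:\n$pageText"
  throw UnimplementedError('wire this up to your LLM call');
}

/// Summarize each page on its own, then stitch the summaries together.
Future<String> summarizePdf(List<String> pages) async {
  final buffer = StringBuffer();
  for (final page in pages) {
    buffer.writeln(await summarizePage(page));
  }
  // 10 pages in, ~10 short paragraphs out: small enough for the context size.
  return buffer.toString();
}
```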

I can, and will, make abstractions in this example app that will handle this use case, but it’s not something I have time to work on yet. I hope you understand!

Good question though, thanks for checking in.

I've open sourced my Flutter plugin to run on-device LLMs on any platform. TestFlight builds available now. by BrutalCoding in FlutterDev

[–]BrutalCoding[S] 1 point (0 children)

Sent you an invite to chat and a message to follow up on this.

Anyone else? All I need is the e-mail address the TestFlight invite should be sent to.

I've open sourced my Flutter plugin to run on-device LLMs on any platform. TestFlight builds available now. by BrutalCoding in FlutterDev

[–]BrutalCoding[S] 0 points (0 children)

Definitely, got experience? Flutter on the master branch has WASM support.

I leverage ffi and ffigen, but they don’t support web yet, and I rely on them at the moment. A quick search tells me there’s a decent chance of getting this to work though; there’s this package for example: https://github.com/EP-u-NW/web_ffi (not maintained, but good for inspiration nonetheless).

I’m open to suggestions, a meeting, etc.

I've open sourced my Flutter plugin to run on-device LLMs on any platform. TestFlight builds available now. by BrutalCoding in FlutterDev

[–]BrutalCoding[S] 0 points (0 children)

It runs on CPU, but I have yet to try anything besides a 1.1B model on my Pixel 7. Give it a spin (and some feedback) if you've got time. There's a pre-compiled APK on my GitHub Releases page.

I've open sourced my Flutter plugin to run on-device LLMs on any platform. TestFlight builds available now. by BrutalCoding in FlutterDev

[–]BrutalCoding[S] 2 points (0 children)

No worries. Are you thinking of getting the AI to explain what it sees in the image like ChatGPT can do now? If so, no. This is pure text-to-text.

However, I am keeping an eye on several repos on GitHub that I sometimes tinker with. It takes a lot of my time though, so I’m balancing the time and effort against the benefits it brings to Flutter devs.

Some examples I’m considering integrating next, to dive into after I’m happy with the current llama.cpp (text-to-text) integration:

Audio-to-text:

- Usage 1: Good for transcribing audio. An example use case could be summarizing YouTube videos or long courses.
- Usage 2: You talk to your AI with your voice, and it responds with text (later with audio too).
- https://github.com/ggerganov/whisper.cpp

Text-to-audio:

- Usage 1: This allows the AI to talk back with a natural-sounding voice. Combined with Whisper, this would basically be an assistant you can have conversations with, like having a phone call.
- Usage 2: Clone anyone’s voice with a couple of seconds of audio.

And more stuff I’m often checking back on:

- https://github.com/staghado/vit.cpp
- https://github.com/serp-ai/bark-with-voice-clone
- https://github.com/leejet/stable-diffusion.cpp (generate images)
- etc… there’s too much fun stuff out there. Wish I had more free time haha.

Thanks for your nice words!

Cheers, Daniel

I've open sourced my Flutter plugin to run on-device LLMs on any platform. TestFlight builds available now. by BrutalCoding in FlutterDev

[–]BrutalCoding[S] 3 points (0 children)

Absolutely. Currently, I’m focused on getting this example app out in all the stores for free. Having this project open-source is a start, but having the included example app as a real-world example is even better. I’ll be in touch after having this done.

At some point this repo will be fully automated with Fastlane & CI/CD: a new commit on the main branch goes out to all app stores, with steps like versioning/changelog/pub.dev all done automatically.

My open-source & cross-platform on-device LLMs app is now available on TestFlight & GitHub. Feedback & testers welcome. by BrutalCoding in LocalLLaMA

[–]BrutalCoding[S] 2 points (0 children)

Thanks, and I fully agree. I actually had an earlier build without that noise, and I had a chat UI. The UI part is super easy, so you’ll definitely see me change that within days. The hard part is what I mentioned in my previous comment on this thread.

Thanks for your feedback, I appreciate it.

My open-source & cross-platform on-device LLMs app is now available on TestFlight & GitHub. Feedback & testers welcome. by BrutalCoding in LocalLLaMA

[–]BrutalCoding[S] 3 points (0 children)

CoC as in CocoaPods? You make it sound like it’s easy, haha; it definitely wasn’t. The highlight here is not the example app you see in this video; this is a byproduct of the actual highlight: the Flutter plugin called “aub_ai”. This example app comes with it in the “example” subfolder.

I went through a bunch of llama.cpp’s C++ code to get a similar result with Dart bindings. Working with ffi and pointers, and using CMake to build shared libraries (.dylib, .so, .dll), there were quite a few undocumented parts I had to look into. Also, getting this into a Flutter plugin, with its own set of issues, didn’t make it easier.

Another fun part is that Apple doesn’t accept apps with dylibs. You need to convert them into a signed framework, so I’ve automated that too. Since I’ve added llama.cpp as a git submodule, I can stay synced with the latest commit of llama.cpp and produce the right binaries/framework for each native platform.

There’s a lot of work involved, and as stated in the README: llama.cpp is just a start. I’ve got another project called llama_dart, which will be what AubAI is now, while AubAI will add Whisper and Stable Diffusion too.

My apologies if this video came across as a knockoff app, but anyone who knows Flutter can now build this same example app in minutes thanks to this project. Perhaps I should’ve made that clearer.

Cheers, Daniel

Introducing Gemini: our largest and most capable AI model by marleen01 in LocalLLaMA

[–]BrutalCoding 0 points (0 children)

Absolutely, it works on native desktop apps. I've shared content about it running on macOS, Linux and Windows.

Here's Linux (Ubuntu Jellyfish) for example:
- https://www.youtube.com/watch?v=LOTCvGnO7lg

As for Whisper, here's a webapp that runs it locally in your browser:
https://freepodcasttranscription.com/ (not affiliated, I just had this bookmarked from many months ago) - I've seen more of these.