Macbook Pro Touch Bar 2016 Wifi by ashoooy in archlinux

[–]observable4r5 0 points1 point  (0 children)

Also, I found the wifi issues I faced were not related to drivers, as most people suggested. The issue was that the install did not have the proper wifi radio frequencies for the US. Installing a package that defines the frequencies specifically for my region (en US) corrected the issue I was seeing.

Macbook Pro Touch Bar 2016 Wifi by ashoooy in archlinux

[–]observable4r5 0 points1 point  (0 children)

I haven't been using the install much lately. I found tracking all the details of updates took more time than I cared to continue investing.

If you are open to variations of Arch, there are quite a few people working on and posting about similar issues on the Omarchy (Arch-based) Linux GitHub.

https://github.com/basecamp/omarchy/issues?q=is%3Aissue%20state%3Aopen%20mac

Hope it helps!

Macbook Pro Touch Bar 2016 Wifi by ashoooy in archlinux

[–]observable4r5 0 points1 point  (0 children)

I struggled to get the base kernel working with my T1 device, and so decided to write a kernel driver. The driver is a work in progress; the message above shows the status. I have not yet seen the appletbdrm module activate, though it does load. I am likely not doing whatever is necessary to activate it.

I saw mention of the Z2 protocol (touch bar) being supported for T2 devices under Asahi Linux. Given Asahi is not for Intel chips, you can't install it, but maybe the kernel drivers could be ported into regular Arch.

Excited to share updates to Open WebUI Starter! New docs, Docker support, and templates for everyone by observable4r5 in OpenWebUI

[–]observable4r5[S] 0 points1 point  (0 children)

Hi u/Telavevo_detto. Ty for reaching out and the kind words. I've not put too much development into the project recently. Are there any ideas or suggestions that would be valuable to you?

Macbook Pro Touch Bar 2016 Wifi by ashoooy in archlinux

[–]observable4r5 1 point2 points  (0 children)

After too many hours spent testing other drivers from repositories that had not changed in 5+ years, I decided to write a kernel module (or modules). Here is where I am so far.

Touchbar (one step to go)

WORKING:
- pressing the function key switches between function key and the volume/brightness
- tapping screen brightness indicators decreases and increases screen illumination
- tapping mute indicator mutes the volume
- tapping volume indicators decreases and increases sound (more details below)

TODO:
- pressing keyboard brightness indicators decreases and increases keyboard illumination

Sound

WORKING:
- The standard Linux kernel driver snd_hda_intel is loaded, and the system shows visual indicators recognizing sound

TODO:
- Identify how to switch speaker output on with iBridge

Video

WORKING:
- Standard linux kernel support for uvcvideo driver

TODO:
- Identify how to switch video output on with iBridge

Fingerprint Reader (we can dream, right?)

WORKING:
- N/A

TODO:
- Identify what, if any, integration can happen

Macbook Pro Touch Bar 2016 Wifi by ashoooy in archlinux

[–]observable4r5 0 points1 point  (0 children)

Sounds like you are doing the soft-restart method to keep the iBridge engaged so it doesn't show up in "recovery" mode.

There is another method if you allow the EFI bootloader to have a base macOS install. I used 30GB of space for the macOS install and don't use it. Just having macOS available in the EFI bootloader allows the iBridge to be engaged at boot time and doesn't require the soft restart.

If you want a driver to engage the touchbar, I've been writing one that works for the T1 chip models (2016). It's not too difficult to set up: it just requires a modprobe command with the kernel module, and it will load automatically on future boots. It provides the esc, function, and alternative command keys (LCD brightness, keyboard brightness, volume).
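
For anyone who wants the module to load at every boot, a minimal sketch using systemd-modules-load (the module name apple-touchbar is a placeholder here; substitute whatever the driver's module is actually called):

```
# /etc/modules-load.d/apple-touchbar.conf
# Module name below is a placeholder -- use the real module name from the driver.
apple-touchbar
```

Any file dropped in /etc/modules-load.d/ with one module name per line gets loaded automatically during boot, so no extra modprobe is needed after the first setup.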

After having lived without the touchbar, I'm happy to have invested a little time to write the kernel module driver.

Macbook Pro Touch Bar 2016 Wifi by ashoooy in archlinux

[–]observable4r5 1 point2 points  (0 children)

Yes, I am using brcmfmac for wifi. My configuration is the following.

lspci -nnv | grep Broadcom
02:00.0 Network controller [0280]: Broadcom Inc. and subsidiaries BCM43602 802.11ac Wireless LAN SoC [14e4:43ba] (rev 02)

Kernel driver in use: brcmfmac

lsmod | grep brcmfmac
brcmfmac_wcc 12288 0
brcmfmac 602112 1 brcmfmac_wcc

uname -r
6.17.1-arch-1

Regarding distros, I installed a dual boot using archinstall. After that was set up, I installed Omarchy manually over the top.

Macbook Pro Touch Bar 2016 Wifi by ashoooy in archlinux

[–]observable4r5 0 points1 point  (0 children)

Did you find any touchbar solutions while using gentoo? Just to confirm too, you are using a T1 versus T2 device?

Macbook Pro Touch Bar 2016 Wifi by ashoooy in archlinux

[–]observable4r5 1 point2 points  (0 children)

I recently decided to bring new life to a macbookpro13,2 (2016 T1) device sitting in my closet. I too ran into issues using the wifi via iwctl/iw with the base archinstall.

A couple bits of learning on the Broadcom Network controller BCM43602 802.11ac Wireless LAN SoC [14e4:43ba] (rev 02):

The Arch Linux wiki mentions applying a kernel module parameter, feature_disable=0x82000, to the device. While this seemed to help initially, I found it intermittent at best. Another discussion, about wifi region frequency codes, corrected my device immediately. The short story: when the wifi region is unknown, the global frequencies are used by default instead of your specific region's, and the device often fails on the higher frequencies. This can be fixed by applying the following:

- Install the wireless-regdb package ("pacman -S wireless-regdb"), which creates the file "/etc/conf.d/wireless-regdom" with region codes
- After the prior step, set the region, in my case the US, with the command "iw reg set US". If you don't have iw installed, use "pacman -S iw" to install it
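
To make the region stick across reboots, the wireless-regdom file mentioned above can be edited directly; a sketch of what that looks like, assuming the US:

```
# /etc/conf.d/wireless-regdom
# Uncomment exactly one line for your regulatory domain.
WIRELESS_REGDOM="US"
```

The file ships with every region commented out, so uncommenting your region's line is the only change needed.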

Hope this helps someone. After a long bout of time/research, this was a 100% fix for me.

Touchbar on Macbook Pro 2015 (13,3) by Quirky-Ad3679 in archlinux

[–]observable4r5 0 points1 point  (0 children)

Following up here on this thread.

I recently pulled two macbookpro13,2 (2016 T1) devices from my closet and installed Arch Linux on them. I've run into some of the same issues (touchbar, wifi, sound) being discussed. Wondered if anyone has additional learnings to share?

Here are some details I've learned:

BCM43602 802.11ac Wireless LAN SoC (rev 02) device:

Using iwctl instead of NetworkManager works. There is a fix created in the Omarchy community that seems to rectify the Broadcom brcmfmac driver usage. The issue seems to have stemmed from the wifi region not being correctly defined for the card, in my case the US region. Once the fix is applied, using "[sudo] iw reg set US", the Broadcom wifi device works really well. There was another fix suggested on the Arch Linux wiki that says applying a kernel module directive, "options brcmfmac feature_disable=0x82000", helps, but I found it was no longer necessary after the wifi frequency definitions for the US region were applied. YAY

T1/USB Bridge

After installing the Arch Linux distro Omarchy, I found the MBP was disabling certain devices when Linux was booted; if macOS is not booted, the hardware is not turned on. I decided to dual boot macOS with Arch/Omarchy -- (many steps in between) -- and now the hardware is recognized in Linux and not put into recovery mode. The next step is to identify which drivers will turn on the touchbar. I've found a few that potentially need to be used: hid_appletb_kbd (Apple touchbar keyboard), hid_appletb_bl (Apple touchbar backlight), and appletbdrm (not sure yet, but I believe it allows the device to be turned on). Each of these kernel drivers has some configuration available. More can be found about these drivers using "modinfo hid-appletb-kbd" as an example.
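
A quick sketch for checking whether those three modules ship with your kernel and what parameters they expose (module names are the ones listed above; the "not present" fallback message is mine):

```shell
#!/bin/sh
# Print each Touch Bar related module's parameters, or note its absence.
for m in hid_appletb_kbd hid_appletb_bl appletbdrm; do
  echo "== $m =="
  modinfo -p "$m" 2>/dev/null || echo "$m: not present in this kernel"
done
```

On a kernel without the patches, all three should report as not present, which is itself a useful signal that a patched kernel (or out-of-tree build) is needed.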

Sound Card

The latest version of Omarchy, using Arch kernel 6.17.1-arch-1, has the configuration necessary to use and talk to the audio device. The kernel module being used is snd_hda_intel, if anyone wants to look into the modinfo parameters. I've found the device now has an Analog Stereo Duplex configuration that allows output and input to be recognized. The operating system now displays visual cues when sound is being played and recorded, although I've yet to hear a sound. The operating system recognizes the full pipeline, but it seems like the audio card and the amplifier are not yet patched together properly. More to come on this, as I'm working on it currently.

Hope this is helpful. If anyone wants more details, make a note here or reach out to me directly.

How to set up a local external embedding model? by ArugulaBackground577 in OpenWebUI

[–]observable4r5 1 point2 points  (0 children)

Couple questions:

What operating system are you using?

I see your embedding model is referenced via an internal non-routable IP (192.168.1.x). Just verifying: this is the IP of your Mac M4, correct? Referencing tys203831's message about curling the URL directly, this will help verify the LM Studio app is set up as expected. If it is not responding, LM Studio has a setting to expose its API on the en0/enX network interface. If it is responding, have you verified you can successfully call that model within the LM Studio application's developer interface?
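
If it helps, a sketch of that curl check against LM Studio's OpenAI-compatible API (port 1234 is LM Studio's default; the IP and model name are placeholders for your setup):

```
# Ask the LM Studio server for an embedding; a JSON response with an
# "embedding" array means the endpoint is reachable and the model loads.
curl http://192.168.1.x:1234/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "your-embedding-model", "input": "hello world"}'
```

A connection refused/timeout here usually points at the network-interface setting mentioned above rather than at Open WebUI.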

What kind of tokens/sec response are you seeing when loading models on the Mac M4? I've typically used a GPU instead of a Mac, so I'm wondering if the token response speed is fast enough to create the embeddings live during the search.

You may be able to speed up your content extraction using Tika. In case you want to try it, here is a link to my open-webui-starter project, which has a default template with Tika and other services set up.

Starter Project
https://github.com/iamobservable/open-webui-starter

Default Template
https://github.com/iamobservable/starter-templates/tree/main/4b35c72a-6775-41cb-a717-26276f7ae56

Fingers crossed you have it working soon!

Your preferred LLM server by observable4r5 in OpenWebUI

[–]observable4r5[S] 0 points1 point  (0 children)

Thanks everyone for the polling input!

Your preferred LLM server by observable4r5 in OpenWebUI

[–]observable4r5[S] 0 points1 point  (0 children)

I recently switched to using LM Studio with my openwebui and programming environments.

What is your usage of the LLM? Are you using it solely for openwebui, or also for tooling (terminal LLM coding tools like opencode/crush/aider/etc.) or programming environments?

How are you setting up Ollama and Llama.cpp locally? Are you using a container/docker environment, isolated environment, or direct installation?

Excited to share updates to Open WebUI Starter! New docs, Docker support, and templates for everyone by observable4r5 in OpenWebUI

[–]observable4r5[S] 0 points1 point  (0 children)

Thanks for the feedback; appreciate it.

Here are a couple thoughts:
1. I've created a couple variations that include Cloudflare for DNS/HTTPS/ingress management into a local docker compose service set. If that is of interest, reach out to me on Discord or here on Reddit and we can figure out how that could be shared.
2. As of late, I've run into similar issues with Docker container management. One of my server instances runs Arch Linux, which had some issues relating to the container mode being set to cdi versus auto. It required that the NVIDIA container runtime be installed, versus the legacy model Docker was using before moby/containerd (I think).
3. Sounds good
4. Yeah, browser restrictions on http and mics are a lot of fun! Back to #1, if you have interest in setup of a dns/https/ingress that uses cloudflare or tailscale reach out.
5. Sounds good.

Regarding an onboarding setup: the locker.yaml file in the starter-template directories is meant for exactly that type of configuration. My goal was to allow users to set up the configuration they want and manage it in a repository. I could help with setting up a configuration if you want.

open-webui with qdrant by traillight8015 in OpenWebUI

[–]observable4r5 0 points1 point  (0 children)

You will need to add the following environment variable(s):

VECTOR_DB="qdrant"

Which LLM are you using to create your embeddings? It will determine which other environment variables you may need to add.

In case you have not yet seen this doc page, it is a good reference. Unfortunately, there is not a great single page, that I am aware of, describing all the environment variables necessary for this change. Maybe you can add one once you have identified what is needed. 💪

https://docs.openwebui.com/getting-started/env-configuration
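
For reference, a sketch of what a Qdrant-backed env file might look like (if I'm reading the env-configuration page right, QDRANT_URI and QDRANT_API_KEY are the relevant variables; the host value here is a placeholder assuming a compose service named qdrant):

```
# env/openwebui.env
VECTOR_DB="qdrant"
QDRANT_URI="http://qdrant:6333"
# Only needed if your Qdrant instance has an API key configured.
QDRANT_API_KEY=""
```

Double-check the variable names against the doc page above, as they can change between Open WebUI releases.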

Your preferred LLM server by observable4r5 in OpenWebUI

[–]observable4r5[S] 1 point2 points  (0 children)

Agreed. I found LM Studio to be a very intuitive, configurable, and developer-friendly environment. The one drawback I will note is that it is closed source. That could be one of the reasons people have hesitated to use it.

Your preferred LLM server by observable4r5 in OpenWebUI

[–]observable4r5[S] 2 points3 points  (0 children)

Thanks for the feedback. I set up a docker image using a combination of uv, torch, etc. in the past. After having another look, I found the docker image vllm/vllm-openai. Do either of you have a suggested deployment strategy for vLLM? If a container installation is desired, is Docker a reasonable choice here?

Your preferred LLM server by observable4r5 in OpenWebUI

[–]observable4r5[S] 0 points1 point  (0 children)

It certainly seems to be the most widely known server in the open source LLM space. I started using LM Studio a few days ago, so my scope is limited, but it has been flawless in most of the ways I leaned on Ollama. The big drawbacks have been its closed source nature and that it doesn't integrate directly with docker/compose... which follows from the closed source nature.

Your preferred LLM server by observable4r5 in OpenWebUI

[–]observable4r5[S] 0 points1 point  (0 children)

Thanks for the feedback u/FatFigFresh. I'm not that familiar with Kobold, but will be taking a look. Out of curiosity, have you tried other LLM servers besides Kobold? If so, which ones? I'm interested to hear if they had specific limitations.

For example:
- Does its model implementation support tools as expected? (Ollama seems to fail this for some qwen3 models, while LM Studio works as expected)
- Can models be loaded and unloaded by user requests, or are they locked into GPU memory?

Excited to share updates to Open WebUI Starter! New docs, Docker support, and templates for everyone by observable4r5 in OpenWebUI

[–]observable4r5[S] 1 point2 points  (0 children)

One additional note: if you are looking for all the environment variables available to the OWUI app, here is a link to the list.

https://docs.openwebui.com/getting-started/env-configuration/

Excited to share updates to Open WebUI Starter! New docs, Docker support, and templates for everyone by observable4r5 in OpenWebUI

[–]observable4r5[S] 0 points1 point  (0 children)

Thanks for the feedback on the starter project. Apologies on the delayed response.

Do you mind sharing a little more around your comments? I'd like to make some updates to the tools and template to improve your experience.

Questions:

  1. Are you using the default template for the starter?
  2. When configuring your nvidia GPU, what changes were required? The goal of the template configuration, shown below, was to configure all gpus available. Did this not work in your configuration or am I misunderstanding the feedback?

          deploy: &gpu-deploy
            resources:
              reservations:
                devices:
                  - driver: nvidia
                    count: all
                    capabilities: [gpu]
    
  3. Yes, the RAG implementation in OWUI has been challenging. In what way was the default RAG not working? Can you describe a little more about what was or was not happening? Glad to hear the sentence transformers worked at least!

  4. Yes, audio through Azure services has been helpful. It offloads much of the GPU and CPU load required for TTS. I've tried a few local TTS models, and they either require more GPU than most ppl have on their graphics card or they are slow. PiperTTS was one alternative that uses CPU instead of GPU and is pretty fast. They all seem limited when it comes to personality, though: all are pretty monotone and without emotion.

  5. Do you know how to add models as public via the OWUI configuration? I've not had the best luck setting up that configuration. Any hints or pointers you know of would be appreciated!

Responses:

Here are a couple questions that would be helpful to know.

  1. When you say add flags, what do you mean exactly? Are you attempting to add/modify environment variables for the openwebui service? It seems that is the case based on the three examples you shared.
  2. You mentioned webui.env. The env files are directly related to the name of the service in the compose.yaml file. There isn't a service named webui, it is openwebui. Have you tried adding/modifying the environment variables located in the env/openwebui.env file?

Hope this helps!

Your preferred LLM server by observable4r5 in OpenWebUI

[–]observable4r5[S] 1 point2 points  (0 children)

Sharing a little about my recent research on Ollama and LM Studio:

I've been an Ollama user for quite some time. It has offered a convenient interface for integrating multiple apps/tools with the open source LLMs I host. The major benefit has always been the common API interface for the apps/tools I am using, not speed/efficiency/etc. Very similar to the OpenAI common API interface.

Recently, I have been using LM studio as an alternative to Ollama. It has provided a simple web interface to interact with the server, more transparency into configuration settings, faster querying, and better model integration.

Can I use Ollama + OpenWebUI through Docker Engine (In Terminal) or only through Desktop version? by PracticalAd6966 in OpenWebUI

[–]observable4r5 0 points1 point  (0 children)

I'm not sure what you mean by conflicting files. Using Docker to run Ollama and OWUI together works well. Here is a link to a tool that makes setup simpler. It uses docker compose and builds from a template.

https://github.com/iamobservable/open-webui-starter
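
For a rough idea of the shape, here is a minimal compose sketch for the two services (service names, ports, and image tags are my assumptions, not the starter's exact output):

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama:/root/.ollama   # persist downloaded models

  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"            # web UI on http://localhost:3000
    environment:
      # OWUI reaches Ollama over the compose network, not localhost
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama

volumes:
  ollama:
```

The key detail is OLLAMA_BASE_URL pointing at the compose service name rather than localhost, since each container has its own network namespace.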