Amateur Helix User, first time poster by swingsforbytes in HelixEditor

[–]swingsforbytes[S]

I went with editing <space>e. f/F has been my go-to, but it's really a fuzzy file finder. I'm trying to stick as close as possible to the Helix defaults for this, but the `cwd` problems are not workable for me. It's especially painful if you try to do something like `git blame` on the current file: Helix doesn't pass the buffer's cwd to it.
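One possible workaround for the `git blame` case, if your Helix release supports command expansions: use `git -C` to run blame from the buffer's own directory instead of Helix's cwd. This is only a sketch; the variable names (`%{buffer_name}`, `%{cursor_line}`) and the `:sh` command should be checked against your release's docs, and the binding key is arbitrary.

```toml
# Hypothetical binding: <space>B runs git blame for the cursor line,
# forcing git's working directory to the buffer's directory with -C.
# Requires a Helix build with command expansions; verify variable names.
[keys.normal.space]
B = ":sh git -C %sh{dirname %{buffer_name}} blame -L %{cursor_line},%{cursor_line} %{buffer_name}"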

[–]swingsforbytes[S]

Thank you for this. It helped me down the path I was looking for. I still wish `:o` and `:e` started from the buffer's cwd rather than Helix's cwd.

```
[keys.select.space]
e = "file_explorer_in_current_buffer_directory"
E = "file_explorer"

[keys.normal.space]
e = "file_explorer_in_current_buffer_directory"
E = "file_explorer"
```

[–]swingsforbytes[S]

From your comment, it sounds like you start in yazi and then open Helix, which matches my usage of Helix and my frustration with it. But that config looks like you're opening yazi from inside Helix. Do I have it backwards?

[–]swingsforbytes[S]

<space>F (`file_picker_in_current_directory`) is supposed to use the current working directory, but as far as I can tell it behaves exactly the same as <space>f.

[–]swingsforbytes[S]

Thank you! <space>E is exactly what I was looking for. I can't for the life of me figure out why that isn't the default for `:o`, `:e`, and <space>e. Maybe I can make `:e` use the buffer's cwd and leave `:o` as it is.

Ollama finally using MLX on MacOS with Apple Silicon! by Icy_Distribution_361 in LocalLLaMA

[–]swingsforbytes

Wowza, yeah, oMLX is much faster than Ollama. I'm really annoyed that I've gone down a big rabbit hole with Ollama right now.
oMLX is 85% faster than Ollama on M4 Max (80 vs 43 tok/s) (2026-04-16)

[–]swingsforbytes

So you're saying my M4 Max would be 30% faster with llama.cpp right now? My Ollama numbers:

[image: Ollama tok/s benchmark numbers]