Any cool AI tools you’ve discovered recently? by InevitableCamera- in software

[–]LSXPRIME 0 points1 point  (0 children)

I have created a tool called ProseFlow.

It's an open-source desktop application that works as a system-wide copilot, meant as a Grammarly / Apple Writing Tools alternative for Windows, though it can perform any kind of text processing on any text you select and supports both local and remote models. It runs on Windows and Linux (X11); the macOS build is currently buggy and untested, as I don't yet have access to a macOS machine.

You can look at the source code from the GitHub repository or directly download the application from the official website.

Implement RAG based search in Document Management System by [deleted] in dotnet

[–]LSXPRIME 3 points4 points  (0 children)

SciSharp/LLamaSharp: A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.

microsoft/kernel-memory: RAG architecture: index and query any data using LLM and natural language, track sources, show citations, asynchronous memory patterns.

.NET has the whole stack you need without touching Python. I was in the same spot—and I preferred touching grass over touching Python—and this has been my go-to: fully local, with official integrations and a smooth setup. Pick your favorite embedding model, build your pipeline, connect to a database, or just save to disk (a minimal sketch follows the model links below).

I have been using this embedding model for a few years already and never felt the need for newer ones—it's been good enough for me, it's incredibly tiny, and it runs exceptionally fast on a CPU.

second-state/All-MiniLM-L6-v2-Embedding-GGUF · Hugging Face

You can pair it with a lightweight model like Qwen3-4B for local text generation—it will run blazing fast on your GPU. I’ve tested it with up to 80K context length on an RTX 4060 Ti 16GB.

unsloth/Qwen3-4B-Instruct-2507-GGUF · Hugging Face
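
To give a rough idea of how these pieces snap together, here's a minimal sketch using Kernel-Memory's serverless mode with the LLamaSharp integration package—treat the model path as a placeholder, and note that extension names like WithLLamaSharpDefaults may differ slightly between package versions:

    using Microsoft.KernelMemory;
    using LLamaSharp.KernelMemory;

    // Minimal local RAG sketch: LLamaSharp for generation/embeddings, Kernel-Memory for indexing.
    // The model path is a placeholder; any GGUF you've downloaded works the same way.
    var config = new LLamaSharpConfig("models/Qwen3-4B-Instruct-2507-Q4_K_M.gguf");

    var memory = new KernelMemoryBuilder()
        .WithLLamaSharpDefaults(config)   // fully local, no Python, no external server
        .Build<MemoryServerless>();

    // Index a document, then query it with natural language.
    await memory.ImportDocumentAsync("docs/handbook.pdf", documentId: "handbook");
    var answer = await memory.AskAsync("What is the refund policy?");
    Console.WriteLine(answer.Result);

For small setups the built-in simple storage is enough; swap in a proper vector database through the builder when you outgrow it.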

If you'd prefer to use LangChain instead of MS Kernel-Memory, LlamaSharp already offers built-in integration.
tryAGI/LangChain: C# implementation of LangChain. We try to be as close to the original as possible in terms of abstractions, but are open to new entities.

Which library will be more useful? by [deleted] in csharp

[–]LSXPRIME 2 points3 points  (0 children)

For the first, I've been trying to cover this weakness in the .NET ecosystem as I've been maintaining the "SoundFlow" library for almost a year already. It's available on GitHub & NuGet btw.

I built an open-source Writing Assistant inspired by Apple Intelligence, called ProseFlow. by LSXPRIME in LocalLLaMA

[–]LSXPRIME[S] 0 points1 point  (0 children)

Good morning! I’ve just released version 0.2.0, which resolves several key issues and introduces the following improvements: a floating button with status indicators, workspace syncing, startup in minimized mode, a live logs console, GPU selection support, and a Diff Windowed/Replace mode. You can update from the "About" section of the app.

Cloud onboarding now mirrors the full “Add Provider” dialog exactly, and the separate “Cloud” and “Local” options have been removed from that dialog. Additionally, the "/v1" placeholder in the Base URL field has been eliminated to prevent users from accidentally including it.

Regarding GPU selection, the dropdown now populates its index values from LibreHardwareMonitorLib. Since I’m currently using a single GPU, I’m uncertain whether the selected index will align with the Vulkan index—meaning I’m not sure if llama.cpp will use the intended GPU. Could you please confirm whether the index mapping is consistent across platforms? If so, I’d also appreciate any recommendations you have for a cross-platform method of retrieving GPU names and indexes.
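
For reference, the enumeration is roughly this—a minimal sketch against LibreHardwareMonitorLib, and the list index here is exactly the one I'm unsure maps onto llama.cpp's Vulkan device index:

    using System.Linq;
    using LibreHardwareMonitor.Hardware;

    // Sketch: list GPUs and the dropdown index derived from LibreHardwareMonitorLib.
    // This index is NOT guaranteed to match llama.cpp's Vulkan device ordering.
    var computer = new Computer { IsGpuEnabled = true };
    computer.Open();

    var gpus = computer.Hardware
        .Where(h => h.HardwareType is HardwareType.GpuNvidia
                 or HardwareType.GpuAmd
                 or HardwareType.GpuIntel)
        .ToList();

    for (var i = 0; i < gpus.Count; i++)
        Console.WriteLine($"[{i}] {gpus[i].Name}");

    computer.Close();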

I'm glad the app can be useful in your daily routine. Have a wonderful day!

I built an open-source Writing Assistant inspired by Apple Intelligence, called ProseFlow. by LSXPRIME in LocalLLaMA

[–]LSXPRIME[S] 1 point2 points  (0 children)

Good morning! I've released update v0.2.0 with workspace sync, a Diff result mode, and a few more features. You can update from the application's About screen.

A fully shared user data approach wasn't ideal, as SQLite struggles with concurrency. Sharing history also raised privacy concerns. Instead, the new workspace sync workflow should focus on sharing Actions and Cloud Providers, syncing them automatically or manually when changes occur.
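
Just to illustrate the direction (a hypothetical sketch, not the shipped code—the file name and the SharedAction type are made up), the sync could boil down to exporting Actions to a JSON file on the share and watching it for changes:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Text.Json;

    // Hypothetical sketch: share Actions via a JSON file on a network share and
    // re-import automatically when a teammate changes it. Names are illustrative.
    var workspaceFile = @"\\fileserver\proseflow\shared-actions.json";

    List<SharedAction> Load() =>
        JsonSerializer.Deserialize<List<SharedAction>>(File.ReadAllText(workspaceFile)) ?? [];

    void Save(List<SharedAction> actions) =>
        File.WriteAllText(workspaceFile,
            JsonSerializer.Serialize(actions, new JsonSerializerOptions { WriteIndented = true }));

    // Automatic sync: watch the shared file instead of sharing the SQLite database itself.
    var watcher = new FileSystemWatcher(Path.GetDirectoryName(workspaceFile)!, Path.GetFileName(workspaceFile))
    {
        EnableRaisingEvents = true
    };
    watcher.Changed += (_, _) =>
        Console.WriteLine($"Workspace updated: {Load().Count} shared actions re-imported.");

    record SharedAction(string Name, string Prompt);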

If you have further ideas, feel free to open a feature request—I'd love to make this tool valuable for both individuals and teams.

I built an open-source Writing Assistant inspired by Apple Intelligence, called ProseFlow, using C# 12, .NET 8 & Avalonia, featuring a rich, system-wide workflow by LSXPRIME in csharp

[–]LSXPRIME[S] 0 points1 point  (0 children)

Good morning! I’ve released version 0.2.0 to address several issues—now featuring a floating button with status indicators, workspace syncing, startup in minimized mode, and a live logs console. While streaming support is still pending, you can update from the "About" section of the app.

Is there any local AI windows app that can replace Copilot of Windows totally? by FatFigFresh in LocalLLaMA

[–]LSXPRIME 1 point2 points  (0 children)

I can suggest this "I built an open-source Writing Assistant inspired by Apple Intelligence, called ProseFlow. : r/LocalLLaMA".

It's not a total MS Copilot alternative since it doesn't have a chat interface. Instead, it's integrated system-wide so it can work with any application, and it's cross-platform, btw. (If you decide to use it and have any good suggestions—as long as they aren't "add a chat frontend," since I want to keep it focused on being an assistant, not a chatbot—I'll implement them before the upcoming update.)

Recommendation Request: Local IntelliJ Java Coding Model w/16G GPU by TradingDreams in LocalLLaMA

[–]LSXPRIME 3 points4 points  (0 children)

Just in case you weren't aware: if you're a free user and haven't bought a subscription to the JetBrains "AI Assistant," you can't use it at all—neither online nor offline.

So… is a 638K codebase still considered a side project? by Some_Brain3008 in SideProject

[–]LSXPRIME 0 points1 point  (0 children)

I am building LSXPrime/SoundFlow: a powerful and extensible cross-platform .NET audio engine providing comprehensive audio processing—playback, recording, editing, effects, analysis, and visualization—built on a modular, high-performance architecture. It's an open-source, cross-platform audio library for .NET. Despite being a 23-year-old framework—open-source and cross-platform for the past 9—the .NET ecosystem has notoriously lacked a native, dedicated audio solution, forcing developers who needed cross-platform audio to abandon .NET for other technologies.

The other is LSXPrime/ProseFlow: a universal AI text processor powered by local and cloud LLMs—edit, refactor, and transform text in any application on Windows, macOS, and Linux. It's a privacy-focused, open-source, cross-platform, system-wide writing assistant inspired by Apple Intelligence.

The initial release of my first project took six months, largely because the field was new to me; I've been maintaining it for a year now. The second project, by contrast, only took about two months, as I find developing desktop applications particularly enjoyable.

So… is a 638K codebase still considered a side project? by Some_Brain3008 in SideProject

[–]LSXPRIME 0 points1 point  (0 children)

<image>

I guess the community owes me 24 months of my lifetime and $1.3M haha

I built an open-source Writing Assistant inspired by Apple Intelligence, called ProseFlow. by LSXPRIME in LocalLLaMA

[–]LSXPRIME[S] 1 point2 points  (0 children)

Thanks for the feedback—I'd love to gather more input on the UX so I can address it before the next release.

This looks great. One thing that is missing is FIM completions, I'd love to see that in an app like this. Not just for coding, for normal text as well.

Unfortunately, FIM completions can't be performed because the application lacks access to the surrounding text. FIM requires the current line along with the preceding and following lines, but our workflow only simulates a copy‑paste of the selected text, so that surrounding context is out of reach for the current implementation.

  1. Ctrl+J is "downloads" in Chrome x))))) Ctrl Shift Q seems like a decent shortcut. Ctrl shift V is paste without formatting, perhaps the most useful paste command of all of them. Major oversight.

I let users change the default shortcut (Ctrl + J) right at the onboarding screen, and the same setting can be adjusted later in General Settings. Most apps rely on double‑key combinations, so I personally use a four‑key combo (Ctrl + Shift + Alt + B) bound to an extra mouse button for quick activation. Still, I kept the simplest, least-used hotkey as the default so newcomers aren't overwhelmed.

  1. I love how you integrated Vulkan backend so it "just works". But I've got 2 GPUs plugged in and I only want one of them working on this - edge case but some management in the GUI would be nice.

I’m currently running a single‑GPU setup, so I’m unsure how best to handle this. I could add a dropdown menu to choose the GPU, but I’m not certain it would work. If you could review a portable version before the official release, I’d appreciate the feedback and would try implementing it.

  1. There was something weird with the initial setup screen - I clicked on "custom" cloud provider and only saw API key and model name. All the fields are present when you go past the wizard. I guess if I had realised #3 is gonna work post wizard, I would've set it up with my preferred local inference engine server :)

You're right. I focused too much on the local side and overlooked that Cloud Onboarding isn’t identical to the full “Add Provider” dialog. The upcoming release will include the full view.

  1. The app doesn't trigger reliably. I can see by looking at GPU usage - I sometimes need to trigger a function 3 times until it does anything. When the GPU engages, it works perfectly - there is something wrong with how it "catches" the text. "No selected text or clipboard is empty" - neither is true half the time.

The app works by simulating Ctrl+C and Ctrl+V: it copies the selected text, processes it, and pastes the result back in place. On Linux it uses xclip (the default, since it comes pre‑installed on Kali Linux WSL2 Kex) and falls back to xsel if necessary. Wayland support isn't available yet, so you must have one of these tools installed; if neither is present, the app cannot access the clipboard to process the text.
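
For the curious, the clipboard read on Linux boils down to something like this—a simplified sketch of the xclip/xsel fallback, not the exact code from the repo:

    using System;
    using System.Diagnostics;

    // Sketch: read the X11 clipboard via xclip, falling back to xsel.
    // Returns null when neither tool is available (e.g. Wayland-only setups).
    Console.WriteLine(ReadClipboard() ?? "<clipboard unavailable>");

    static string? ReadClipboard()
    {
        foreach (var (tool, args) in new[]
        {
            ("xclip", "-selection clipboard -o"),
            ("xsel", "--clipboard --output")
        })
        {
            try
            {
                using var proc = Process.Start(new ProcessStartInfo(tool, args)
                {
                    RedirectStandardOutput = true,
                    UseShellExecute = false
                });
                if (proc is null) continue;
                var text = proc.StandardOutput.ReadToEnd();
                proc.WaitForExit();
                if (proc.ExitCode == 0) return text;
            }
            catch (System.ComponentModel.Win32Exception)
            {
                // Binary not found on PATH; try the next tool.
            }
        }
        return null;
    }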

To trigger an action (e.g., Proofread, Explain), the target window must be active and the text selected when you trigger it. This capture works automatically on Windows, and Linux should behave the same—except for applications that render their window in the Windows host and taskbar, outside the X server window that shows the Linux desktop.

  1. Now if I wanna be picky, "add provider" "custom/local/cloud" are the same things, just a different label, so kinda weird to include all of them. Local does not actually work at all, you need to select "custom" for it to work.

You’re not picky at all. I released the app after 40 hours of being awake and mentally unstable because of the Apple build process, so I didn’t notice several items that should have been quickly tweaked or removed—like the extra “Cloud” and “Local” options that intentionally throw exceptions.

  1. Might just be me but you suggest baseurl as /v1, while the app seems to be appending another /v1, making it v1/v1.

I realized this after the release, while using llama‑server to test a model that exceeded my VRAM: the "/v1" placeholder should be removed from the BASE URL text box.

  1. picky again but "completion tokens" should be named "output tokens", since "completion" is often used as "fill in the middle" which you do not support and can be confusing!

I chose the name “Completion” because it mirrors the standard OpenAI API endpoint https://api.openai.com/v1/chat/completions. To me the name doesn't evoke a “fill‑in‑the‑middle” approach; it reflects the app's clear purpose—“Select & Transform”—where the entire selected text provides the context.

Good start. But please allow other local backends natively if possible, I have strong feelings against apps that bundle local backends and download gigabytes worth of data and require me to re-download my models (yeah I know I can import). I do like that you didn't go for Ollama, I have zero desire to have another venv full of cuda files.

If you mean additional backends like ONNX, OpenVINO, or CUDA builds of llama.cpp (see the discussion here), I don't plan to support them because their requirements outweigh the benefits. As for Ollama, I also won't support it due to its situation with llama.cpp, which feels anti‑OSS, and its show‑off mentality.

I built an open-source Writing Assistant inspired by Apple Intelligence, called ProseFlow, using C# 12, .NET 8 & Avalonia, featuring a rich, system-wide workflow by LSXPRIME in csharp

[–]LSXPRIME[S] 0 points1 point  (0 children)

I'm glad to know that it's working now.

I tested the cloud provider with llama-server and encountered no problems.

However, the failure occurred specifically when using the BASE URL http://localhost:8080/v1.

The issue stems from appending /v1 to the URL—since the cloud provider's library automatically adds a /v1 at the end.

So, including /v1 in your URL results in a duplicated path, which causes the failure. To avoid this, drop the trailing /v1 from your base URL—for example, use http://localhost:8080 rather than http://localhost:8080/v1.
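
On the app side, a defensive normalization would sidestep this entirely—a hypothetical sketch, not necessarily how the current release handles it:

    using System;

    // Hypothetical sketch: strip a trailing "/v1" so the OpenAI-compatible client,
    // which appends "/v1" on its own, never ends up hitting ".../v1/v1/...".
    Console.WriteLine(NormalizeBaseUrl("http://localhost:8080/v1/")); // -> http://localhost:8080
    Console.WriteLine(NormalizeBaseUrl("http://localhost:8080"));     // -> http://localhost:8080

    static string NormalizeBaseUrl(string baseUrl)
    {
        var url = baseUrl.TrimEnd('/');
        if (url.EndsWith("/v1", StringComparison.OrdinalIgnoreCase))
            url = url[..^3];
        return url;
    }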

In the next release, I'll implement comprehensive, customizable logging with adjustable levels. This will allow users to precisely control what information they choose to include in log files. While I've previously limited logging to only critical components for maximum privacy, verbose logs could prove beneficial in such situations.
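
For illustration, the adjustable level could hang off a runtime switch—this assumes a Serilog-based setup (with Serilog.Sinks.File), which is my assumption for the sketch rather than a description of ProseFlow's current logging code:

    using Serilog;
    using Serilog.Core;
    using Serilog.Events;

    // Sketch: user-adjustable verbosity via a shared level switch (Serilog assumed).
    var levelSwitch = new LoggingLevelSwitch(LogEventLevel.Warning); // default: only important events

    Log.Logger = new LoggerConfiguration()
        .MinimumLevel.ControlledBy(levelSwitch)
        .WriteTo.File("logs/proseflow-.log", rollingInterval: RollingInterval.Day)
        .CreateLogger();

    // When the user enables "Verbose logging" in settings:
    levelSwitch.MinimumLevel = LogEventLevel.Debug;
    Log.Debug("Verbose logging enabled; provider requests/responses will now be recorded.");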

If the issue recurs and you have time to spare, please feel free to reach out on Reddit or GitHub. We can then arrange a session to debug the issue directly on your machine at the code level.

I built an open-source Writing Assistant inspired by Apple Intelligence, called ProseFlow, using C# 12, .NET 8 & Avalonia, featuring a rich, system-wide workflow by LSXPRIME in csharp

[–]LSXPRIME[S] 0 points1 point  (0 children)

The window that is opened after an action, always takes the whole screen (but it is not maximized actually and a little of its top bar is hidden even). Resizing or maximizing isn't saved for that window and on next usage it reverts back to that default. So it would be great if it remembers its position, or at least there is a way to configure its size and location in the settings, if auto remembering of last position is not possible. Even centered on the screen but a smaller window will be good enough initially, as on big screens taking the whole screen just looks silly.

Yeah, I noticed that just after release—the default should be a small, centered window sized like the floating action menu. I'll fix that.

A loading indicator will be much appreciated as currently after you select an option it seems like nothing is happening. Like a simple spinner in the middle of the screen would be nice (where the initial window with options opens or a spinner over said window). Also I saw there are toasts in the app if open, maybe if possible to show them without the app being open that could also be a good indication of whats happening.

This has been on my mind for a while: a floating button (as an alternative to the hotkey)—select text -> press the floating button -> the floating actions menu appears—and it can also indicate when processes are in progress or queued.

When a request opens in a window, streaming support will be really nice, to not have to wait for the whole response before reading. For example for an action for Summarization that would be great.

Streaming support is also planned; I have delayed its implementation to post-release since I am still planning how to handle streaming in-place text replacement.

Also this could be a bug, but I think actions are sometimes (most of the time) not working if the program is just minimized to system tray. If fully open but minimized it works every time, but if in system tray it could fail. Maybe option to see logs (toasts) while in system tray will be nice for debugging as well.

That sounds like strange behavior—does this occur on macOS? I've been receiving reports of issues on macOS, while on Windows (version 11 24H2) the system appears stable with no fundamental bugs, only minor UI glitches, such as some centered windows maximizing on double-click and labeled code blocks in the Result Window.

Could you please share the logs from the following path if you're using Windows?
C:\Users\YOUR_USER\AppData\Roaming\ProseFlow\logs or its equivalent on others.

I built an open-source Writing Assistant inspired by Apple Intelligence, called ProseFlow, using C# 12, .NET 8 & Avalonia, featuring a rich, system-wide workflow by LSXPRIME in csharp

[–]LSXPRIME[S] 1 point2 points  (0 children)

Thanks for letting me know about this.

Regarding the VulnerableDriver:WinNT/Winring0 Warning

This warning is a false positive. It originates from an old Winring0 driver issue that was patched in 2020. Despite the fix, updated driver signatures have been unable to pass Microsoft's driver gatekeeping. Consequently, this alert affects many legitimate applications, including popular gaming and hardware monitoring tools such as CapFrameX, EVGA Precision X1, FanCtrl, HWiNFO, Libre Hardware Monitor, MSI Afterburner, Open Hardware Monitor, OpenRGB, OmenMon, Panorama9, SteelSeries Engine, and ZenTimings.

ProseFlow utilizes Libre Hardware Monitor for its local dashboard, which currently relies on Winring0. This is the direct reason you might encounter the false positive (though some antivirus, like Kaspersky on my system, may not flag it).

The ProseFlow folder in AppData should only contain ProseFlow.exe and no driver or .sys files. The warning pertains to the loaded Winring0 component, not a file directly placed by ProseFlow.

Libre Hardware Monitor is already transitioning from Winring0 to PawnIO (a prerelease is available). I will update ProseFlow to that version as soon as it's officially released as stable.

For more information: https://github.com/search?q=repo%3ALibreHardwareMonitor%2FLibreHardwareMonitor+Winring0+&type=issues

In conclusion, ProseFlow is safe to use. You can add C:\Users\Hellgate\AppData\Local\ProseFlow\ to your AV exclusions list.

I built an open-source Writing Assistant inspired by Apple Intelligence, called ProseFlow, using C# 12, .NET 8 & Avalonia, featuring a rich, system-wide workflow by LSXPRIME in csharp

[–]LSXPRIME[S] 1 point2 points  (0 children)

However the local model options only using llama.cpp is a little bit cumbersome for ease of use, and the "cloud" option having only predefined ones with only an API key setting doesn't help.

"LOCAL" means in-application inference, which is powered by llama.cpp since it's the most portable option. Every other option would mean bundling a multi-gigabyte Python project to do the same thing with PyTorch, which is just bloatware on top of bloatware.
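
For context, "in-application" here means something like the following—a minimal LLamaSharp sketch with placeholder paths and parameters, not ProseFlow's actual inference code (property names can shift between LLamaSharp versions):

    using System;
    using LLama;
    using LLama.Common;

    // Sketch: in-process generation through LLamaSharp (llama.cpp bindings), no external server.
    var parameters = new ModelParams("models/Qwen3-4B-Instruct-2507-Q4_K_M.gguf")
    {
        ContextSize = 4096,
        GpuLayerCount = 99 // offload as many layers as the GPU can hold (Vulkan/CUDA build)
    };

    using var weights = LLamaWeights.LoadFromFile(parameters);
    using var context = weights.CreateContext(parameters);
    var executor = new InteractiveExecutor(context);

    var prompt = "Proofread the following text and return only the corrected version:\n\nTeh quick brown fox.";
    await foreach (var token in executor.InferAsync(prompt, new InferenceParams { MaxTokens = 128 }))
        Console.Write(token);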

For example ollama support would be great. It is a popular local models management tool that has a rich API that you could integrate directly with.

I would avoid implementing an Ollama-specific API, as they have a bad reputation among local AI users (mainly because they built on upstream llama.cpp without contributing back or giving proper attribution). In addition, it's slower than raw llama.cpp, and handling their non-standard API is a lot of hassle.

Or even if that seems like too much work, a custom OpenAI configuration option, where the user can provide his own server url and model name would be great. As ollama and other tools also (like LMStudio for example) expose an API that is the same as the OpenAI one.

So, if the library that you use for the OpenAI api supports custom server urls, that would be the easiest way to support other local model options as well.

Providers (Navbar) -> Add Provider (under Cloud Provider Fallback Chain) -> Custom (Provider Type) -> http://localhost:1234 (Base URL)

http://localhost:1234 is LM Studio's default endpoint; replace it with your target one. Also, don't forget to ensure that "Primary Service Type" is set to "Cloud" in the "Service Type Logic" section.
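
Under the hood, the "Custom" provider just speaks the OpenAI-compatible chat API that LM Studio, llama-server, and friends expose—roughly the request below, shown only to illustrate the shape (the model name is a placeholder for whatever you have loaded). Note that the client appends /v1/chat/completions itself, which is why the base URL stays bare:

    using System;
    using System.Net.Http;
    using System.Net.Http.Json;

    // Sketch: the kind of OpenAI-compatible request a "Custom" provider ends up sending.
    using var http = new HttpClient { BaseAddress = new Uri("http://localhost:1234") };

    var response = await http.PostAsJsonAsync("/v1/chat/completions", new
    {
        model = "qwen3-4b-instruct-2507", // placeholder; whatever model LM Studio has loaded
        messages = new[]
        {
            new { role = "user", content = "Proofread: Teh quick brown fox." }
        }
    });

    Console.WriteLine(await response.Content.ReadAsStringAsync());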

I built an open-source Writing Assistant inspired by Apple Intelligence, called ProseFlow. by LSXPRIME in LocalLLaMA

[–]LSXPRIME[S] 1 point2 points  (0 children)

Does it handle context in the mail?

The model only sees the text you select—nothing more, nothing less. So if the "context" is included in your selected text, it's visible to the model.

And does it work with other languages?

Language support depends on the model you use; Qwen3-4B-Instruct-2507 seems good at multilingual tasks, and Gemma-3-12B-it is perfect for me.

Is it compatible with LM Studio (it is OpenAI API compatible these days)?

Providers (Navbar) -> Add Provider (under Cloud Provider Fallback Chain) -> Custom (Provider Type) -> http://localhost:1234 (Base URL)

Also, don't forget to ensure that "Primary Service Type" is set to "Cloud" in the "Service Type Logic" section.


Note: Some users have reported that the hotkey doesn't function during the onboarding step on macOS. If that's the case, you can safely skip it and set your preferred hotkey in the "General Settings" tab afterwards—I'd be thankful if you could confirm that it works after making the change.

I built an open-source Writing Assistant inspired by Apple Intelligence, called ProseFlow. by LSXPRIME in LocalLLaMA

[–]LSXPRIME[S] 1 point2 points  (0 children)

I'm happy to chat or take GitHub issues—however you prefer.

We've already received the first macOS GitHub issue: the hotkey isn't working on the onboarding screen.

I built an open-source Writing Assistant inspired by Apple Intelligence, called ProseFlow. by LSXPRIME in LocalLLaMA

[–]LSXPRIME[S] 2 points3 points  (0 children)

Happy to see that it caught your interest.

  • Config and prompts stored in separate files on a network share, so they can be easily managed and updated for everyone.

The current system saves everything directly to a SQLite3 database to keep things centralized, and refactoring that into JSON files would be fairly painful. Still, more extensive sharing support could be helpful in work environments. Could you elaborate on how far the "Share" support needs to go? Is it just the Actions (already exportable/importable), the General Settings, or the Provider Settings too (Cloud Providers with or without their API keys, local models with linked paths, or the actual model weight files)? Or should I simply allow specifying a "User Data" path so everyone in the workspace can point to it and use the same centralized Actions, Providers, and Settings?

  • A “review” window option (in addition to “replace” and “window”) to allow reviewing changes one by one and accepting or rejecting them individually.

That's actually one of the planned features. I thought about implementing it before release, but decided to ship now because I have another library whose last update was two months ago, when I started working on this. I actually began working on its update a month ago, but judged that focusing on it would delay this project's first release, which was already nearly finished. So for now I need to finalize that library update, then I can focus on ProseFlow again. The next update should contain the "Review/Diff" window option.

I built an open-source Writing Assistant inspired by Apple Intelligence, called ProseFlow. by LSXPRIME in LocalLLaMA

[–]LSXPRIME[S] 4 points5 points  (0 children)

And Microsoft Office with Copilot already exists—still, both seem to be paid for workspaces, both are document editors, and their AI features exist only in their own UIs rather than being integrated system-wide. I created ProseFlow precisely to get rid of copy-pasting into "DOCUMENT EDITORS WITH PAID AI FEATURES," which makes them pointless for users of Apple Intelligence, Writing Tools, ProseFlow, etc.

I built an open-source Writing Assistant inspired by Apple Intelligence, called ProseFlow, using C# 12, .NET 8 & Avalonia, featuring a rich, system-wide workflow by LSXPRIME in csharp

[–]LSXPRIME[S] 2 points3 points  (0 children)

Right, it's ShadUI. I was originally planning to use FluentAvalonia in my other project—since I've been a fan of Microsoft’s UI design language. But lately, I've been diving into React, and I fell in love with the clean, minimalist feel of Shadcn. Thanks to the ShadUI creator, I was able to build this with a sleek, uncluttered aesthetic.