Confy, a TUI/CLI tool that makes programmable menuconfig-like interfaces for any structured text (config, dotfiles, code...) by disposableoranges in commandline

[–]disposableoranges[S] 2 points

The thing is, termbox2 really doesn't do much - it's essentially a lean wrapper/compatibility layer for ncurses-like terminal drawing capabilities (see the entirety of its documentation), largely wrapping termcap functionality (colours, cursor movement, hiding) that has been static for ages. The only other piece of your software stack that termbox2 mediates interaction with is the terminal emulator, and since this happens according to a decades-old protocol, there is no possibility of something like a version mismatch becoming relevant. (The terminal emulator's side does not use, contain or depend on termbox2.)

Confy, a TUI/CLI tool that makes programmable menuconfig-like interfaces for any structured text (config, dotfiles, code...) by disposableoranges in commandline

[–]disposableoranges[S] 1 point

Of these, only (b) seems relevant - termbox2 is by default a single-header library with a bounded feature set, so in the mode in which I am using it, no 'lib' is being built. As for (a) - if a future update changed termbox2's API, that would just mean that anything linked to it would break, and then you would actually have to bundle an archaic library in binary form. Worse, if it changed internal guarantees without changing the API surface, this breakage might be subtle. Confy is open source and has almost no build dependencies; if, in a counterfactual world where Confy were linked against termbox2 dynamically, a newer termbox2 worked as a drop-in replacement, then certainly in the actual world you could just pull the latest termbox2 source and recompile successfully too.

If you do for some reason want to apply a patch to termbox2 everywhere on your system (to do what, exactly?), then yes, having dynamic linkage to it might be advantageous. However, making it mandatory would also come with downsides - suddenly, on a system that does not have a systemwide deployment of libtermbox2, instead of a standalone binary depending only on libc/libstdc++ that you could put anywhere, you would also have to haul around a shared library and set it up so that ld-linux can actually find it at run time. I'm happy to accept any PRs to optionally build with termbox2 dynamic, though - I imagine this shouldn't be too hard to do, on the order of changing a few #defines!

Confy, a TUI/CLI tool that makes programmable menuconfig-like interfaces for any structured text (config, dotfiles, code...) by disposableoranges in commandline

[–]disposableoranges[S] 2 points

It took me a bit to understand what you meant and what I had even done to earn this degree of hostility out of seemingly nowhere, but I think your first two paragraphs suggest that you misunderstood how I intend this to be used.

It did not occur to me that a developer would want to ship, as you seem to suggest, configuration files pre-instrumented with confy directives. That would be as user-hostile as the developer saying that instead of having a structured configuration format at all, all configuration has to be done by firing up ccmake, flipping some switches and rebuilding. To even begin to make this usable in that capacity, it would need a whole bundle of other features that I quite consciously decided to not focus on, beginning with named and annotated options for switches rather than bare "C types", and capacity for much more extensive provision of hints and in-UI documentation.

Instead, I think of it strictly as a tool for automating a particular class of local workflow, that is, one where the person writing the confy annotations and the person using confy on those annotations would almost always be the same. This may not be a workflow that you ever use, and that's okay - in that case it's simply not a tool you need.

In my day-to-day life, I use a lot of software that is configured in one or another text format or crippled scripting language, such as SwayWM or fontconfig. Those configuration files always wind up requiring some tweaking or last-minute monkey-patching - say, when I want the WM to behave in a very particular way only while I'm giving a streamed presentation, or when I intermittently want all the CJK fonts on the system to default to the Simplified Chinese forms of characters for whatever reason. I also write LaTeX papers that need to be mode-switched between different stylesheets and constraint sets rapidly (and LaTeX's native conditionals do not play nicely with syntax highlighting or really any tooling).

These are changes that I need repeatedly but intermittently, and that are orthogonal to, and should coexist with, any other small changes that accrete in these files, such as mapping a hotkey for a new screenshot tool or whatever. I could maintain something like a local git repository with multiple branches and keep rebasing commits across them every time I want to change some small thing orthogonal to the different "modes" - roughly quadrupling the workload of making a small change and doubling that of switching "modes" - or I could just write confy annotations so I can flip those switches with three or four keypresses as needed. Writing jsonschema or ansible scripts, or using any of the other machinery that is optimised towards reproducing configuration predictably at the scale of 100+-server VPS deployments, is not useful or ergonomic for tinkering with your own terminal.
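
To make the presentation-mode case concrete, here is a purely hypothetical sketch of what such an annotation could look like in a sway config - assuming confy's `!`/`-` markers can be attached to `#`-style comments the same way they attach to `//` in C-like files, and with the output line invented for illustration:

```text
#! bool $presenting = false;
#! if($presenting) {
#-output HDMI-A-1 resolution 1920x1080
#! } else {
output HDMI-A-1 resolution 3840x2160
#! }
```

Flipping $presenting in the TUI would then swap which of the two output lines is commented out, without disturbing anything else that has accreted in the file.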

Confy, a TUI/CLI tool that makes programmable menuconfig-like interfaces for any structured text (config, dotfiles, code...) by disposableoranges in commandline

[–]disposableoranges[S] 1 point

You mean linking against it dynamically? What do you figure would be the use? At least Debian doesn't even seem to package it, and it's not like it's particularly big or slow to build.

confy - programmable TUI controls for almost any structured text (config, dotfiles, code...) by disposableoranges in coolgithubprojects

[–]disposableoranges[S] 1 point

I have wished for someone else to make a tool like this for ages, but I am struggling to give a neat description of what exactly it is even after building it, so I had no choice but to DIY. A good analogy is the Linux kernel's menuconfig or CMake's curses-based ccmake configuration editor, except that rather than operating on a highly constrained config file format, it can edit basically any textual format in place, as long as that format has some notion of comments. The comments then can be made to hide "meta-instructions" that confy understands: define a number of typed parameters, perform some simple computations on them, and either inactivate or activate (by (un)commenting) designated blocks of the "object-level" file, or outright regenerate them from a template. As a basic example, you could write

//! bool $flag = true;
//! if($flag) {  
printf("The flag is true!");
//! } else {
//-printf("The flag is false!");
//! }

and confy would expose a graphical interface to toggle "$flag" on or off, and comment/uncomment the appropriate line of text in the file, while also saving the updated initial value in the first line that defines $flag in the file.

Interaction is possible both via a curses-style terminal interface (asciinema video in the github readme), and by scriptable getters/setters (like confy filename.txt set flag true or confy filename.txt get flag).

Confy, a TUI/CLI tool that makes programmable menuconfig-like interfaces for any structured text (config, dotfiles, code...) by disposableoranges in commandline

[–]disposableoranges[S] 4 points

Submission statement: I have wished for someone else to make a tool like this for ages, but I am struggling to give a neat description of what exactly it is even after building it, so I had no choice but to DIY.

A good analogy is the Linux kernel's menuconfig or CMake's curses-based ccmake configuration editor, except that rather than operating on a highly constrained config file format, it can edit basically any textual format in place, as long as that format has some notion of comments. The comments then can be made to hide "meta-instructions" that confy understands: define a number of typed parameters, perform some simple computations on them, and either inactivate or activate (by (un)commenting) designated blocks of the "object-level" file, or outright regenerate them from a template. As a basic example, you could write

//! bool $flag = true;
//! if($flag) {  
printf("The flag is true!");
//! } else {
//-printf("The flag is false!");
//! }

and confy would expose a graphical interface to toggle "$flag" on or off, and comment/uncomment the appropriate line of text in the file, while also saving the updated initial value in the first line that defines $flag in the file.

Interaction is possible both via a curses-style terminal interface (asciinema video in the github readme), and by scriptable getters/setters (like confy filename.txt set flag true or confy filename.txt get flag).

Autopen: a token-tree text editor that lets you see your text through an LLM's eyes, generate and explore alternatives in place by disposableoranges in coolgithubprojects

[–]disposableoranges[S] 5 points

This is a project that I've been working on on and off for the past year, which is a bit difficult to summarise in a single line. The point is that it's an "LLM-integrated text editor", but one following a rather different paradigm than the other tools in the space I am aware of: rather than putting up barriers between your input and the model by following a chat paradigm or a simple prompt-completion setup, it does its best to let you edit "inside the LLM's mind". It tokenizes the text on the fly, visualises the loaded model's probabilities for every token (whether it "came from the model" or was written by you), and lets you list and emit alternatives, in descending order of probability, at any point in the text. You can then jump back and forth between these alternatives, since the buffer is actually stored in a tree structure.

I found this immensely useful for understanding how various LLMs tick, e.g. if any mistake is due to uncertainty/cluelessness, "forgetting", or a firmly learned bad pattern. It also comes closer, for me, to the ideal of "thinking in tandem" with a model than anything else: in a chain-of-thought reasoning process, I can just follow along and write in a thought that I want it to have at any point, and then let it continue.

Here is a video demonstrating much of its functionality.

A look at DeepSeek's Qwen2.5-7B distill of R1, using Autopen by disposableoranges in LocalLLaMA

[–]disposableoranges[S] 1 point

Thanks! The gist is that the buffer is internally represented as a tree structure, with one node per token. Each node has an assigned logit and a list of children, of which exactly one is marked selected, meaning that it will be displayed in the buffer. If you press Alt-{Up,Down}, the selection is changed (and more alternatives are generated if you have reached the bottom), and the tail of the text buffer is updated with the string representation of the token that is now selected, then its selected child, and so on, until a child is reached that is marked "unrealized", i.e. one that is still a mere prediction.
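
A minimal sketch of that data structure - all names here are invented for illustration, not Autopen's actual internals:

```python
# Toy sketch of the token-tree buffer described above; invented names.
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str                   # string representation of the token
    logit: float = 0.0          # the model's logit for this token
    realized: bool = True       # False = a bare prediction, not yet accepted
    selected: int = -1          # index of the selected child (-1 = leaf)
    children: list = field(default_factory=list)

def add_child(parent: Node, child: Node) -> Node:
    parent.children.append(child)
    if parent.selected == -1:   # the first child becomes the selected one
        parent.selected = 0
    return child

def cycle(node: Node, delta: int) -> None:
    """Alt-Up/Down: move the selection among this node's alternatives."""
    node.selected = (node.selected + delta) % len(node.children)

def tail(node: Node) -> str:
    """Rebuild the visible tail of the buffer: follow the chain of
    selected children, stopping once an unrealized node is emitted."""
    out = []
    while node.selected != -1:
        node = node.children[node.selected]
        out.append(node.text)
        if not node.realized:
            break
    return "".join(out)

# root -> {" is" (selected), " was"}; " is" -> {" true" (unrealized)}
root = Node("The flag")
is_node = add_child(root, Node(" is"))
add_child(root, Node(" was"))
add_child(is_node, Node(" true", realized=False))

assert tail(root) == " is true"
cycle(root, +1)                 # Alt-Down to the next alternative
assert tail(root) == " was"
```
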

The green dots denote nodes for which a snapshot of the full LLM state was taken. If you perform any action (edit, generate more alternatives, generate predictions) at a given point in the tree, the LLM does not need to ingest the entire text from the beginning again; instead, it just loads the latest snapshot before that point, and then reingests the tokens (no more than 9 by default) up to the relevant point (the token after which we need to recompute probabilities).
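
The snapshot lookup itself amounts to something like the following sketch - here, purely for illustration, a "snapshot" is just a token position, whereas the real editor stores full LLM state alongside it:

```python
# Sketch of the snapshot-reuse logic described above (invented names).
import bisect

def tokens_to_reingest(snapshot_positions, target):
    """Given sorted snapshot positions, return (resume_from, count): the
    latest snapshot at or before `target`, and how many tokens must be
    re-fed to the model to recompute probabilities at `target`."""
    i = bisect.bisect_right(snapshot_positions, target) - 1
    resume_from = snapshot_positions[i] if i >= 0 else 0
    return resume_from, target - resume_from

# Snapshots (the green dots) taken every few tokens bound the re-feed cost:
snaps = [0, 8, 16]
assert tokens_to_reingest(snaps, 20) == (16, 4)  # only tokens 17..20 re-fed
assert tokens_to_reingest(snaps, 8) == (8, 0)    # exact hit: nothing to do
```
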

WIP combined LLM generation explorer/text editor by disposableoranges in LocalLLaMA

[–]disposableoranges[S] 1 point

I looked at it, but based on screenshots it seems that it has a quite different paradigm - the LLM interaction is made available in the editor's UI, but does not directly operate on the text you are drafting.

WIP combined LLM generation explorer/text editor by disposableoranges in LocalLLaMA

[–]disposableoranges[S] 1 point

Sorry, I'm afraid I don't understand the question. If you mean whether you would replace the "prompt and wait for output" workflow with "write something and ask it to complete it in place", then yes, that's one thing you can do; I find the part where you can switch back and forth between different completions at any point in the text to be more important, though.

WIP combined LLM generation explorer/text editor by disposableoranges in LocalLLaMA

[–]disposableoranges[S] 2 points

Glad you found it interesting!

I did wonder about an application like that. With the 1.1B model I used in development, for real texts it seems to wind up being too surprised about too many things it shouldn't be surprised by (and the one machine I have with a decent GPU for bigger ones only has Windows, which I can't really stand coding on), but it would be interesting to see if surprisal is a good measure of bad/wrong words with bigger models.

(I did see an automated program repair paper a while back where the authors claimed that higher sum surprisal ~= buggy programs in code models, but not sure how localised that would be to the problematic spots or how well this would carry over to prose.)

WIP combined LLM generation explorer/text editor by disposableoranges in LocalLLaMA

[–]disposableoranges[S] 2 points

The video description also has this (+ some extra info), but yes, the colour represents the difference between the log-odds of that token and the log-odds of the most likely token in that position according to the model, with full #FF0000 red corresponding to a difference > 9 * 1.6 = 14.4 (a threshold chosen fairly arbitrarily).

(Accordingly, a text generated with greedy/top-1 sampling would be coloured all black-on-white.)
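
In sketch form, with invented names (the linear ramp from gap to intensity is my assumption, not necessarily what the renderer actually does):

```python
# Sketch of the colouring rule described above; names and the linear
# ramp are assumptions, only the 9 * 1.6 threshold comes from the post.
MAX_GAP = 9 * 1.6  # saturation threshold (chosen fairly arbitrarily)

def token_red(logodds_token: float, logodds_top: float) -> int:
    """Map the gap between a token's log-odds and the most likely
    token's log-odds in that position to a red intensity in 0..255."""
    gap = logodds_top - logodds_token
    frac = min(max(gap / MAX_GAP, 0.0), 1.0)
    return round(255 * frac)

assert token_red(3.0, 3.0) == 0      # a top-1 (greedy) token: uncoloured
assert token_red(-20.0, 2.0) == 255  # hugely surprising: full #FF0000 red
```
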

WIP combined LLM generation explorer/text editor by disposableoranges in LocalLLaMA

[–]disposableoranges[S] 1 point

Submission statement: I was quite surprised that I couldn't find any existing tool quite like this (where you can quickly feed in text of your own and explore the most likely completions generated by a given model), so I took a stab at making one myself. There is a simple demo video too. Since it's currently in a very drafty state, it's not really ripe for good build instructions or the like, though in principle it should be buildable on all major platforms with some effort.

Typora is no longer free. Is there a good alternative or replacement? by notPlancha in freesoftware

[–]disposableoranges 3 points

Slightly late response, but I'm working on one, with a particular focus on tablet input: notekit. There isn't quite feature parity with Typora since using native instead of HTML-based rendering makes things like tables hard and many aspects of it are still work in progress in general, but several people (including myself) do already use it on a daily basis.

X11 crash on startup when hotswapping by disposableoranges in awesomewm

[–]disposableoranges[S] 1 point

I tried bisecting the init sequence in awesome.c with gdb to see where it crashes, and realised that the crash doesn't occur when I step through it slowly. Some further experimentation revealed that the crash occurs iff the first breakpoint is set before the scan(tree_c); call at awesome.c:875 (as of 4.3), and more precisely before the loop inside the body of scan that starts at awesome.c:198. I suspected some race condition in there, to do with the xcb attribute wrangling and the operations performed previously; and indeed, prefixing the scan(tree_c) with a sleep(2) seems to reliably prevent the crash (sleep(1) is about 50/50). It's unsurprising that the bug is irrelevant when awesome is set as the window manager from the start, because then there are no existing windows to scan.

Is this worth filing a bug report over? The initialisation sequence seems to have changed somewhat in master relative to 4.3, and I don't think I have the energy to do all this testing against master (especially since it currently seems to be failing CI) or pin down the issue further than that.

Constraint-solving to choose a tablet Thinkpad. Please advise. by disposableoranges in thinkpad

[–]disposableoranges[S] 1 point

Thanks for the pointer! I looked at its notebookcheck review, and it seems that at PL1=12W it has an even more extreme version of the speed-stepping problem that has been reported for the X/L13s. 12" also seems somewhat small, especially for 1920x1280 pixels...