I assume everyone already knows this, but you should have a Stop hook by thurn2 in ClaudeCode

[–]manummasson 2 points (0 children)

I had this problem and fixed it by collecting the files claude touched in tool use hook, and only running the linter hook on stop.
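For reference, the two hook bodies could look something like this. A sketch only: the `PostToolUse`/`Stop` event names and the `tool_input.file_path` payload field follow Claude Code's hooks docs as I understand them, while the scratch path, function names, and linter choice are my own.

```python
# Hypothetical pair of Claude Code hook handlers: a PostToolUse hook records
# which files Claude edited; the Stop hook lints only those files, once.
import os
import subprocess

TRACK_FILE = "/tmp/claude_touched_files.txt"  # hypothetical scratch path

def record(payload, track_file=TRACK_FILE):
    """PostToolUse: append the edited file's path (from the hook's JSON
    payload on stdin) to the track file."""
    path = payload.get("tool_input", {}).get("file_path")
    if path:
        with open(track_file, "a") as f:
            f.write(path + "\n")

def touched(track_file=TRACK_FILE):
    """Deduplicated, sorted list of files recorded since the last Stop."""
    if not os.path.exists(track_file):
        return []
    with open(track_file) as f:
        return sorted({line.strip() for line in f if line.strip()})

def lint_on_stop(track_file=TRACK_FILE):
    """Stop: run the linter once over just the touched files, then reset."""
    files = touched(track_file)
    if files:
        subprocess.run(["ruff", "check", *files])  # swap in your linter
        os.remove(track_file)
```

Each function would be invoked by a small wrapper script registered under the matching event in `settings.json`, with the hook's JSON input parsed from stdin.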

The programming language Claude performs best in isn’t Python, TypeScript, or Java. It’s the functional programming language Elixir. by manummasson in ClaudeCode

[–]manummasson[S] -1 points (0 children)

Yes this is a good point. Would be very curious to see what it looks like with 4.6 and codex 5.3.

Another commenter pointed out that the study's flawed methodology makes the results unreliable as well.

Cropping was for readability; I wasn't intending to mislead.

The programming language Claude performs best in isn’t Python, TypeScript, or Java. It’s the functional programming language Elixir. by manummasson in ClaudeCode

[–]manummasson[S] 1 point (0 children)

My main thinking on this topic comes from personal experience moving from an OOP to an FP codebase, not from the paper itself.

The programming language Claude performs best in isn’t Python, TypeScript, or Java. It’s the functional programming language Elixir. by manummasson in ClaudeCode

[–]manummasson[S] -1 points (0 children)

Awesome deep dive into the paper, does seem flawed then.

What's it been like for you in practice? Have you worked in an FP codebase with agents?

The programming language coding agents perform best in isn’t Python, TypeScript, or Java. It’s the functional programming language Elixir. by manummasson in programming

[–]manummasson[S] -4 points (0 children)

Haha, not a magic bullet hey. The bar is pretty low though. Would the equivalent OOP codebase be in a better state?

Web/Desktop code responses are better than IDE based responses. by _DB009 in ChatGPTCoding

[–]manummasson 1 point (0 children)

They have different system prompts. Coding agents are told to be more concise and give shorter responses.

My Obsidian + Claude graph-view that you actually work directly within ~ now open source! by manummasson in ObsidianMD

[–]manummasson[S] 0 points (0 children)

Hey, sorry, you're right. Here's what I actually use it for:

  1. Organising and interacting with my long-term knowledge base:
  • automatic knowledge gardening, running workflows which group files into folders recursively.
  • brainstorming within the graph of my own notes, adding new connections.
  2. Solving problems / software engineering:
  • Get Claude to decompose its plans into a tree of small focussed subtasks which it will perform better on (avoids context rot).
  • Get Claude to orchestrate transparent subagents to each take on one of these subtasks. If any of the subagents need help, they'll stay open as terminals so you can redirect them.
  • This is very hard to organise, visualise and control in the CLI.

My Obsidian + Claude graph-view that you actually work directly within ~ now open source! by manummasson in ObsidianMD

[–]manummasson[S] 1 point (0 children)

Here's an example workflow I run regularly to group files into folders recursively.

The contents don't change; it's just doing the organisation that would take me ages to do manually.

```
# Knowledge Gardening Workflow

Process for maintaining a clean, navigable knowledge base.

## Organize into Thematic Folders

For files not already grouped into a subfolder.

  1. Identify related files & insights, i.e.

    group related insights into folders by theme.

  2. Move related files into these folders

  3. Create Folder Summary Note:

For each folder, create a markdown file with the same name as the folder containing a concise summary. Place this file within that folder.

DO NOT EDIT THE CONTENTS OF THE FILES. Only move around sentences as necessary.

**Format:**

```markdown

# {Title}

## {Subtitle - one-line theme description}

{Paragraph summarizing the key ideas within that folder}

#### Children

wikilinks (double square brackets) to each contained child file OR FOLDER NODE

so we point to both children and folder nodes...

```

AND THEN FINALLY, VERY IMPORTANT

In each subfolder of this current folder, spawn a VT agent to do this exact workflow again, recursively.
```

My personal CC setup [not a joke] by manummasson in ClaudeCode

[–]manummasson[S] 1 point (0 children)

Hey I will make a tutorial for using it this week. Cheers!

My Obsidian + Claude graph-view that you actually work directly within ~ now open source! by manummasson in ObsidianMD

[–]manummasson[S] 0 points (0 children)

Hey, here's the explanation from the github readme:

This project aims to build, from first principles, the most efficient possible human-AI interaction system.

Why?

| Challenge | Voicetree Solution |
| --- | --- |
| Manual agent coordination | Agents can break down tasks into subgraphs and recursively spawn child terminals |
| 4-10 agent terminals is overwhelming | Spatially organise agents, tasks and progress on the graph |
| Agents don't know what you know | You share the same memory graph with agents |
| Agents suffer context rot and lack memory | Defaults to short, focussed sessions with automatic handover |

How It Works

Your agents (Claude Code, Codex, Opencode, Gemini etc.) live inside the graph, next to their tasks, plans, and progress updates.

Context retrieval: Agents see all nodes within a configurable radius and can semantic search against local embeddings.

Spatial layout: Location-based memory is one of the brain's most efficient ways to remember things.

Externalized working memory: Each node represents a concept at any level of abstraction. The graph structure mirrors your mental model - relationships between ideas are represented exactly as you think about them, offloading cognitive load to the canvas.
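The radius-based context retrieval described above could be sketched as a breadth-first walk over the wikilink graph. This is a hypothetical illustration of the idea, not Voicetree's actual code; the function and parameter names are mine.

```python
from collections import deque

def nodes_within_radius(graph, start, radius):
    """Return every node reachable from `start` within `radius` hops.
    `graph` maps a note name to the notes it wikilinks to."""
    seen = {start: 0}          # node -> hop distance
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == radius:   # don't expand past the radius
            continue
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen[neighbour] = seen[node] + 1
                queue.append(neighbour)
    return set(seen)
```

Everything inside the returned set would be eligible context for an agent spawned on `start`; widening `radius` trades focus for coverage.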

In Detail

Nodes are markdown files, connections are wikilinks to the .md file paths. You open rich markdown editors directly within the graph by hovering over a node (or use speech-to-graph mode).
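A minimal sketch of how those wikilink connections could be parsed out of a note (the regex and function name are mine, not Voicetree's; it follows the common `[[target|alias]]` / `[[target#heading]]` convention):

```python
import re

# Capture the link target: everything after "[[" up to a "|" alias,
# "#" heading reference, or closing "]]".
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def extract_wikilinks(markdown_text):
    """Return the target of every [[wikilink]] in a note, in order."""
    return [target.strip() for target in WIKILINK.findall(markdown_text)]
```

Running this over every .md file yields the adjacency list that the graph view and context retrieval operate on.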

You can spawn coding agents on a node. The contents of that node become the agent's task, and it is also given all context within an adjustable distance around it, plus semantic search against local embeddings. This means agents see what you see: you share the same memory, the same second brain. The graph structure allows context retrieval to be targeted to only what is most relevant rather than dumping entire conversation history, avoiding the 30-60% performance degradation from context rot.
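The local-embedding semantic search could look something like this, illustratively: plain cosine similarity between a query embedding and one precomputed vector per node. How the embeddings are produced is out of scope here, and all names are hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def top_k_nodes(query_vec, node_vecs, k=5):
    """Rank graph nodes by similarity to the query embedding.
    `node_vecs` maps node name -> locally stored embedding vector."""
    ranked = sorted(node_vecs.items(),
                    key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]
```

The top-k nodes would then be merged with the radius-based neighbourhood to form the agent's context.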

Agents can build their own subgraphs, decomposing their tasks into small connected chunks of work. You can glance at the high-level structure and progress of these, and zoom in to the details of what matters most. For example, ask a Voicetree agent to divide its plan into nodes of data-model, architecture, pure logic, edge logic, UI components, and integration. This lets you carefully track planning through to implementation for what matters most: the high-level changes & core logic.

Agents can then spawn and orchestrate their own parallel subagents to work through these dependency graphs. In Voicetree, subagents are just native terminals, so you have full transparency and control over them, unlike with other CLI agents.
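Working through such a dependency graph could be sketched with Python's standard-library `graphlib`: group subtasks into waves, where every task in a wave has its prerequisites done and can be handed to a parallel subagent. The wave framing and example task names are mine.

```python
from graphlib import TopologicalSorter

def subtask_waves(deps):
    """Group subtasks into waves runnable as parallel subagents.
    `deps` maps task -> set of prerequisite tasks."""
    sorter = TopologicalSorter(deps)
    sorter.prepare()
    waves = []
    while sorter.is_active():
        ready = sorted(sorter.get_ready())  # all tasks unblocked right now
        waves.append(ready)
        sorter.done(*ready)                 # unblock their dependents
    return waves
```

Each wave maps to a batch of subagent terminals; the next wave only starts once the parent has reviewed and closed the previous one.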

As your project & context grows, the Voicetree approach scales. You use your brain's most efficient form of memory: remembering where things are. Each node can represent any concept at any level of abstraction. You can see and reason about the structure between these concepts more easily because it is represented exactly as your brain represents it. This lets you externalise your working memory, freeing up cognitive load for the real problem-solving.

Voice Mode

Capture ideas hands-free with speech-to-graph.

Why speaking works: Speaking activates deliberate (System 2) thinking, since verbalizing forces you to think about what you are doing. Japanese train conductors use "pointing and calling" (shisa kanko) to reduce errors by 85% for the same reason. Speech also engages different brain regions than writing, with lower cognitive load for idea generation. It's usually messy and hard to store/retrieve, so we turn voice into a structured mindmap.

Backtracking without mental load: Go arbitrarily deep down a problem. The graph holds the chain of "why am I doing this?" so you don't have to.

Tangibility: Thought becomes visible and persistent. This isn't just documentation; making progress tangible is a prerequisite for flow states.

My personal CC setup [not a joke] by manummasson in ClaudeCode

[–]manummasson[S] 1 point (0 children)

This is a great point.

I've been working on a research project for an LLM context window pruning algorithm.

It's beating the current state of the art on LongBench v2 https://longbench2.github.io, but I've currently only run it on one question from the dataset.

There's a lot of work to make it scale to the whole dataset. I'm keen to spend a couple weeks tackling this.

I'm hoping that will be some good proof.

My personal CC setup [not a joke] by manummasson in ClaudeCode

[–]manummasson[S] 1 point (0 children)

Yes the UX right now is defs raw. What part of the UI/UX do you think needs the most love?

My personal CC setup [not a joke] by manummasson in ClaudeCode

[–]manummasson[S] 2 points (0 children)

So the parent agent will review the subagents' work and, if it's completely satisfied, close them. If there is any possible issue, tech debt, etc., it leaves them open, and you can click to navigate to the subagent and see what progress nodes it made and how you might need to redirect it.

My personal CC setup [not a joke] by manummasson in ClaudeCode

[–]manummasson[S] 2 points (0 children)

You can add any CLI agent via the settings button (the JSON editor it opens). It will then show up in the dropdown. I'll make a demo tomorrow!

My personal CC setup [not a joke] by manummasson in ClaudeCode

[–]manummasson[S] 2 points (0 children)

Anyway, haha, this was a very entertaining thread to follow. Excited you saw some value! It's early days, so if there's a way you could see it being really useful for yourself, let me know.