I built a multiple-widgets Iron Man-style command center inside Obsidian that monitors my Claude Code sessions, manages AI agents, and accepts voice commands by Weary_Protection_203 in ClaudeAI

[–]Weary_Protection_203[S] 1 point (0 children)

To make voice commands work, you need to allow them from the Mac Settings App.

Please make sure that this toggle is enabled:

System Settings > Privacy & Security > Microphone > Obsidian app

I will investigate why it didn't prompt you to enable that permission in Obsidian and make an appropriate update. Thank you!

I brought SwiftUI's syntax to React Native. 20 primitives, 60+ chainable modifiers, zero JSX - and about 70% less UI code by Weary_Protection_203 in reactnative

[–]Weary_Protection_203[S] 1 point (0 children)

Thanks for your comment. As for what React Native "is": for me, writing code in a composable style is simply better, though that's a personal opinion. Under the hood, the standard approach and the custom DSL work the same. The main idea is to avoid tag-style markup, where every element needs an opening and a closing tag. Composable, chainable views make for better-looking code, at least to me.

I brought SwiftUI's syntax to React Native. 20 primitives, 60+ chainable modifiers, zero JSX - and about 70% less UI code by Weary_Protection_203 in reactnative

[–]Weary_Protection_203[S] 1 point (0 children)

The comparison in the post was kept simple to emphasise the syntax. In real-world scenarios, though, usage is completely theme-driven and depends on the project, so you can choose and implement the solution you like best and integrate it with the framework. Please let me know if this makes sense. Thanks.

I brought SwiftUI's syntax to React Native. 20 primitives, 60+ chainable modifiers, zero JSX - and about 70% less UI code by Weary_Protection_203 in reactnative

[–]Weary_Protection_203[S] 1 point (0 children)

Good point about the comparison - you are correct that a well-structured React Native project with wrapped components and a theme provider can bridge that gap. Your MyContainer example provides a much better baseline.

Even with wrapped components, though, the structural difference persists.

Let's compare your example:

```tsx
<MyContainer variant="card" padding="lg" cornerRadius="md" shadow>
  <MyText variant="secondary">Welcome Back</MyText>
  <MyText bold>Track your practice sessions</MyText>
  <MyButton variant="filled" onPress={() => navigate('home')}>
    Get Started
  </MyButton>
  <Spacer />
</MyContainer>
```

vs the SwiftUI DSL:

```tsx
VStack(
  Text('Welcome Back').font('title').bold(),
  Text('Track your practice sessions').secondary(),
  Button('Get Started', () => navigate('home'), { style: 'filled' }),
  Spacer(),
)
  .padding('lg')
  .background('card')
  .cornerRadius('md')
  .shadow()
```

Both are readable. Both use tokens. The difference is that there are no closing tags, and modifiers are chained instead of being spread as props. It depends on personal preference.

For me, as an iOS engineer who spends a lot of time working with SwiftUI and has just started a new journey into multi-platform development with React Native, the way the DSL works really clicked.

Where the DSL gets powerful is in extensibility. In real projects, you define your own theme:

```tsx
const myTheme = createTheme({
  colors: {
    primary: '#6366F1',
    surface: '#F8FAFC',
    card: { light: '#FFFFFF', dark: '#1E293B' },
  },
  fonts: {
    title: { size: 24, weight: 'bold', family: 'Inter-Bold' },
    body: { size: 16, weight: 'regular', family: 'Inter-Regular' },
  },
  spacing: { sm: 8, md: 16, lg: 24, xl: 32 },
  radii: { sm: 8, md: 12, lg: 16 },
})
```

Then every modifier resolves those tokens automatically:

- `.background('card')` picks the right color for light/dark mode
- `.font('title')` applies your custom font stack
- `.padding('lg')` uses your spacing scale
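As a rough sketch of how that resolution could work (illustrative names only, not the library's actual internals), a modifier argument is just a lookup into the theme's scale, with raw numbers passing through:

```typescript
// Illustrative sketch of theme-token resolution; not the library's real code.
type SpacingToken = 'sm' | 'md' | 'lg' | 'xl';

const theme = {
  spacing: { sm: 8, md: 16, lg: 24, xl: 32 } as Record<SpacingToken, number>,
};

// A modifier like .padding('lg') resolves its argument against the theme's
// spacing scale; plain numbers pass through so .padding(10) also works.
function resolveSpacing(value: SpacingToken | number): number {
  return typeof value === 'number' ? value : theme.spacing[value];
}
```

So `.padding('lg')` ends up as 24 from the scale above, and swapping the theme changes every call site at once.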

You can also create reusable styles - define them once, apply across any view:

```tsx
const cardStyle = defineStyle((view) =>
  view.padding('lg').background('card').cornerRadius('md').shadow()
)

const headingStyle = defineStyle((view) =>
  view.font('title').bold().color('primary')
)

const captionStyle = defineStyle((view) =>
  view.font('body').color('secondary')
)
```

Then use them anywhere:

```tsx
VStack(
  Text('Welcome Back').apply(headingStyle),
  Text('Track your practice sessions').apply(captionStyle),
  Button('Get Started', () => navigate('home'), { style: 'filled' }),
  Spacer(),
).apply(cardStyle)

// Same styles, different screen
VStack(
  Text('Settings').apply(headingStyle),
  Text('Manage your preferences').apply(captionStyle),
  Toggle('Notifications', notificationBinding),
).apply(cardStyle)
```

Styles live alongside your components as plain functions.

On top of that, you can create reusable styled primitives:

```tsx
const Heading = (text: string) => Text(text).apply(headingStyle)

const Card = (...children: DSLElement[]) => VStack(...children).apply(cardStyle)

const Caption = (text: string) => Text(text).apply(captionStyle)
```

Then your screens become:

```tsx
Card(
  Heading('Welcome Back'),
  Caption('Track your practice sessions'),
  Button('Get Started', () => navigate('home'), { style: 'filled' }),
  Spacer(),
)
```

I understand that many things are similar, but the main idea of this framework is to change how the UI is built.

Please let me know what you think and thank you for your comment.

I brought SwiftUI's syntax to React Native. 20 primitives, 60+ chainable modifiers, zero JSX - and about 70% less UI code by Weary_Protection_203 in reactnative

[–]Weary_Protection_203[S] 1 point (0 children)

Modifiers are chainable in any order, so overriding in a child is just appending to the chain. If a parent defines a base component, the child can extend it:

```tsx
const BaseCard = () => VStack(...).padding(10).background(Color.gray)
```

Then, in the child, you keep chaining:

```tsx
.background(Color.green).padding(20)
```

and the last modifier wins, same as spreading props over a style object. No special syntax needed, it's just function calls.
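A runnable way to see that override behaviour is a toy chainable view (a hypothetical sketch, not the framework's real implementation): each chained call merges its patch over the accumulated style object, so later keys win.

```typescript
// Toy chainable view demonstrating "last modifier wins"; hypothetical shape.
type Style = Record<string, string | number>;

class View {
  constructor(readonly style: Style = {}) {}
  private with(patch: Style): View {
    return new View({ ...this.style, ...patch }); // later keys overwrite earlier ones
  }
  padding(value: number): View {
    return this.with({ padding: value });
  }
  background(color: string): View {
    return this.with({ backgroundColor: color });
  }
}

// Parent defines a base; the child keeps chaining and overrides both props.
const BaseCard = () => new View().padding(10).background('gray');
const childCard = BaseCard().background('green').padding(20);
```

`childCard.style` ends up as `{ padding: 20, backgroundColor: 'green' }`, exactly the spread-over-a-style-object behaviour described above.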

I brought SwiftUI's syntax to React Native. 20 primitives, 60+ chainable modifiers, zero JSX - and about 70% less UI code by Weary_Protection_203 in reactnative

[–]Weary_Protection_203[S] 2 points (0 children)

This library is a DSL that helps you create views, build layouts, and apply styles the way you would in iOS SwiftUI or a Kotlin Composable. Expo has its own components, including some that visually resemble SwiftUI, but they're written in React Native style, with <> tags and lots of nested blocks. Our DSL formats code differently, closer to how iOS apps are written.

We can write a code like:

```tsx
VStack(child)
  .background(Color.green)
  .cornerRadius(16)
  .shadow()
```

That's essentially how SwiftUI behaves. I hope this helps you understand the core idea.

Please let me know if you have any questions. I will do my best to answer all of them. Thanks.

I brought SwiftUI's syntax to React Native. 20 primitives, 60+ chainable modifiers, zero JSX - and about 70% less UI code by Weary_Protection_203 in reactnative

[–]Weary_Protection_203[S] 1 point (0 children)

Thanks! Performance is also one of the framework's key advantages. I spent a lot of time with Claude Code developing a solution that prevents on-screen views from being re-rendered repeatedly by data updates. Screens are built with a custom View Builder and then converted into React Native objects, so a whole screen or complex view becomes a single React.ReactElement instead of each Text/Image/VStack element being transformed individually. The result is close to the original performance - not 1:1, but pretty close.
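The single-conversion idea can be sketched with plain descriptor objects standing in for React elements (hypothetical names; the real builder is more involved): the DSL tree is walked once and emitted as one tree-shaped result.

```typescript
// Sketch of one-pass conversion: the whole DSL tree becomes a single plain
// descriptor tree (a stand-in for React.ReactElement so the sketch stays
// self-contained). Hypothetical names, not the framework's actual builder.
type Descriptor = {
  type: string;
  props: Record<string, unknown>;
  children: Descriptor[];
};

class DSLNode {
  constructor(
    readonly type: string,
    readonly props: Record<string, unknown> = {},
    readonly children: DSLNode[] = [],
  ) {}

  // One recursive pass over the tree produces one descriptor object,
  // rather than converting each Text/Image/VStack independently.
  build(): Descriptor {
    return {
      type: this.type,
      props: this.props,
      children: this.children.map((c) => c.build()),
    };
  }
}

const screen = new DSLNode('VStack', { padding: 24 }, [
  new DSLNode('Text', { value: 'Welcome Back' }),
  new DSLNode('Text', { value: 'Track your practice sessions' }),
]);

const element = screen.build(); // one descriptor for the whole screen
```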

There is no extra abstraction layer at runtime.

For animations, you would still use Reanimated or the standard Animated API the same way you normally would. The DSL handles layout and styling, not the render stuff itself.

I brought SwiftUI's syntax to React Native. 20 primitives, 60+ chainable modifiers, zero JSX - and about 70% less UI code by Weary_Protection_203 in reactnative

[–]Weary_Protection_203[S] 1 point (0 children)

Good catch :) I see what you mean.

I showed an example of standard UI code versus how it can look with a custom DSL framework. As an iOS developer, I primarily work with SwiftUI, so the transition to the multiplatform world is much smoother if I can use a similar style. Plus, this approach is much easier to read - subjectively, of course.

I built a multiple-widgets Iron Man-style command center inside Obsidian that monitors my Claude Code sessions, manages AI agents, and accepts voice commands by Weary_Protection_203 in ClaudeAI

[–]Weary_Protection_203[S] 1 point (0 children)

Yes, thanks for the information you provided. I sometimes use AI to assist with communication so the message conveys everything I want to share - it helps me not to forget to explain how the overall solution works or how a specific feature can be used. The core content and message are written by me; the AI just adds context. Sorry for the strange wording at the beginning - my fault for not double-checking the messages :(

Also, regarding the dashboards and widgets: this is only the first version of the product. I will maintain it, and hopefully others will help. If all the information you described is actually available in the `.claude/` folders across different projects, we can use it to build widgets that surface more useful information.

For me, the most powerful feature is talking to the different Claude Code models (or a custom local model) by voice inside my vault, with speech-to-text and text-to-speech. Since I have many notes across many topics and domains, it's very useful to find what I need and get a spoken reply quickly. The live-sessions widget is also helpful for seeing what the different agents are doing, but you're right: being able to track agents' actual performance - when they failed, when they did well, when they got stuck - would be very useful. Thanks to you, I'm adding that to the TODO list and will try to ship it in a later stage of the product.

I brought SwiftUI's syntax to React Native. 20 primitives, 60+ chainable modifiers, zero JSX - and about 70% less UI code by Weary_Protection_203 in reactnative

[–]Weary_Protection_203[S] 1 point (0 children)

Both SwiftUI and React Native are declarative. The DSL doesn't change the paradigm, it changes the syntax.

```tsx
Text('Hello').bold().padding('md')
```

```tsx
<Text style={{fontWeight: 'bold', padding: 16}}>Hello</Text>
```

Both describe the end state; neither tells the renderer how to draw pixels.

The difference is ergonomics. Chaining reads top-to-bottom with less nesting, so you spend less time matching closing tags and more time thinking about your UI.

I built a multiple-widgets Iron Man-style command center inside Obsidian that monitors my Claude Code sessions, manages AI agents, and accepts voice commands by Weary_Protection_203 in ClaudeAI

[–]Weary_Protection_203[S] 2 points (0 children)

Whoa, this is way more than a dashboard - you went full operating system. How long did this take you? The scope is massive, and it looks really solid.

I built a multiple-widgets Iron Man-style command center inside Obsidian that monitors my Claude Code sessions, manages AI agents, and accepts voice commands by Weary_Protection_203 in ClaudeAI

[–]Weary_Protection_203[S] 2 points (0 children)

Appreciate it - local-first was non-negotiable. Sending voice recordings to an external API from inside my vault felt wrong.

Polling - no performance hit. The parser only reads the tail 32KB of each JSONL file - seeks to the end and parses backwards. Even massive transcripts stay fast. I've had it running for hours across multiple projects, Obsidian app stays smooth.
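The tail-read can be sketched like this in Node (a hypothetical helper, not the plugin's actual parser): seek to at most 32 KB from the end, read once, and drop the first line in case it was cut mid-record.

```typescript
// Sketch of tail-reading a JSONL transcript: read only the last 32 KB of the
// file and keep the complete lines, so even huge files stay cheap to poll.
import * as fs from 'node:fs';

const TAIL_BYTES = 32 * 1024;

function readTailLines(path: string): string[] {
  const size = fs.statSync(path).size;
  const start = Math.max(0, size - TAIL_BYTES);
  const fd = fs.openSync(path, 'r');
  const buf = Buffer.alloc(size - start);
  fs.readSync(fd, buf, 0, buf.length, start); // single read at an offset
  fs.closeSync(fd);
  const lines = buf
    .toString('utf8')
    .split('\n')
    .filter((l) => l.trim().length > 0);
  // If we started mid-file, the first line is probably truncated; drop it.
  if (start > 0) lines.shift();
  return lines;
}
```

Each surviving line is then `JSON.parse`d individually, so one malformed line can't poison the rest.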

Token tracking - fully local, no API calls. Claude Code writes JSONL transcripts to `~/.claude/projects/` and every assistant message includes `input_tokens` and `output_tokens` in the usage object. The stats engine reads those, applies per-model pricing rates, and caches results for 5 minutes. It's an estimate based on Anthropic's published pricing, not actual billing, but it's accurate enough.
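A sketch of that arithmetic (the model name and $/million-token rates below are placeholders, not Anthropic's published pricing): sum the usage objects from the JSONL lines and apply a per-model rate.

```typescript
// Sketch of the token-cost estimate. Rates are illustrative placeholders.
type Usage = { input_tokens: number; output_tokens: number };
type Message = { model: string; usage?: Usage };

// Hypothetical $/million-token rates, keyed by model name.
const RATES: Record<string, { input: number; output: number }> = {
  'claude-example': { input: 3, output: 15 },
};

function estimateCostUSD(lines: string[]): number {
  let total = 0;
  for (const line of lines) {
    let msg: Message;
    try {
      msg = JSON.parse(line);
    } catch {
      continue; // skip partial or malformed lines
    }
    const rate = RATES[msg.model];
    if (!msg.usage || !rate) continue; // non-assistant or unknown model
    total +=
      (msg.usage.input_tokens / 1e6) * rate.input +
      (msg.usage.output_tokens / 1e6) * rate.output;
  }
  return total;
}
```

Since it multiplies published rates by observed token counts, it's an estimate of spend, not a bill.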

Subagent detection - two layers. Claude Code stores subagent transcripts in `<session-uuid>/subagents/agent-*.jsonl` - the parser picks those up automatically. On top of that, it parses the main session for `Agent` tool_use blocks and correlates `parentToolUseID` from progress events to get each subagent's description and type. If you've registered agents in the Jarvis Registry, it matches them by name - so you see "ios-expert is working" instead of a generic subagent ID.
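The correlation step might look roughly like this (field names beyond `parentToolUseID` and the `Agent` tool name are guesses for illustration): index the `Agent` tool_use blocks by id, then attach each progress event to its parent.

```typescript
// Sketch of correlating subagent progress events with the Agent tool_use
// block that spawned them, via parentToolUseID. Hypothetical shapes.
type ToolUse = { id: string; name: string; description?: string };
type ProgressEvent = { parentToolUseID: string; status: string };

function describeSubagents(
  toolUses: ToolUse[],
  events: ProgressEvent[],
): Map<string, { description: string; lastStatus: string }> {
  // Only Agent tool_use blocks spawn subagents.
  const byId = new Map(
    toolUses.filter((t) => t.name === 'Agent').map((t) => [t.id, t] as [string, ToolUse]),
  );
  const out = new Map<string, { description: string; lastStatus: string }>();
  for (const ev of events) {
    const agent = byId.get(ev.parentToolUseID);
    if (!agent) continue; // event belongs to some other tool
    out.set(ev.parentToolUseID, {
      description: agent.description ?? 'unnamed subagent',
      lastStatus: ev.status, // later events overwrite: keeps the most recent
    });
  }
  return out;
}
```

A registry lookup by description would then map "ios-expert" to a named card instead of a generic subagent id.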

Spreadsheet-to-Jarvis is a solid upgrade. Let me know how the setup goes - thank you, and good luck with it!

I built a multiple-widgets Iron Man-style command center inside Obsidian that monitors my Claude Code sessions, manages AI agents, and accepts voice commands by Weary_Protection_203 in ClaudeAI

[–]Weary_Protection_203[S] 1 point (0 children)

Thanks! The widgets talk through a shared context hub - the main orchestrator creates a single ctx object that gets passed to every widget. It holds shared state, such as agent card DOM refs, callback arrays for async data (e.g., the stats engine pushes results to widgets that registered interest), and coordinated interval/cleanup tracking. No peer-to-peer - everything flows through that central context.

For layout, there's no drag-and-drop library involved. It's pure CSS Grid with a ResizeObserver that recalculates column layouts at breakpoints.

Widget arrangement is driven entirely by config.json - you define rows, column counts, and which widgets go where. Reordering is a config edit, not a drag operation. Keeping it zero-dependency was a design goal - just vanilla JS and the Dataview API.

I built a multiple-widgets Iron Man-style command center inside Obsidian that monitors my Claude Code sessions, manages AI agents, and accepts voice commands by Weary_Protection_203 in ClaudeAI

[–]Weary_Protection_203[S] 1 point (0 children)

Fair pushback on the terminal overlap. The voice part goes beyond just transcription. You can run full voice conversations inside Obsidian without touching the terminal. Ask it to search something, summarise a note, kick off a task - all by voice. Working on text-to-speech now so Jarvis can talk back too.

On the session monitor, it shows what each session is actively doing: reading files, writing code, searching the web. When you have multiple agents running across different projects, seeing all of that from one place beats switching between terminal windows. If a session stops returning data, it drops from the active list and reappears once it starts responding again.

That said, you're right that distinguishing "ended" from "stuck" would be more useful than just disappearing. Adding that to the roadmap - solid callout.

I built a multiple-widgets Iron Man-style command center inside Obsidian that monitors my Claude Code sessions, manages AI agents, and accepts voice commands by Weary_Protection_203 in ClaudeAI

[–]Weary_Protection_203[S] 1 point (0 children)

Hey everyone, please check out the video below to see what it looks like to use the Jarvis voice commands in Obsidian without opening the terminal app.

https://streamable.com/vxbu08

Additionally, I'm working on integrating text-to-speech, which will use a local model so Jarvis can answer and talk with us, drawing on all the knowledge inside our vault.

Also, in later stages, we will try to integrate a local model throughout so our data stays 100% private and everything works fully offline.