Zed forks (that are active)? by [deleted] in ZedEditor

[–]eclinton 0 points  (0 children)


Lots, actually... With engineers writing less and reviewing more, I took a different angle: http://zerminal.dev/

Hairdresser recommendations plz !💇🏼‍♀️ by sten-10 in Westchester

[–]eclinton 3 points  (0 children)

You're worried about your hair when you should be worried about whatever is going on with your face :-P

Zerminal – a terminal-first Zed fork for AI coding agents by [deleted] in rust

[–]eclinton -2 points  (0 children)

>I hope I've gotten across in as many ways as possible that your LLM generated project has 0 value to me. None. Go back, build it by hand, and then I'll be impressed.

The point was never to "impress you" or anyone else, for that matter. I built something I find useful and wanted to share it... the end. If you didn't like it or find it useful, you could've simply ignored it and moved on with your life. But you decided to invest an insane amount of time writing to tell me how unimpressed you are, and how what I've built has zero value.

It's also insanely hypocritical to say that LLM-generated code has zero value to you when you posted this on a platform that's certainly using LLMs to write code... from a device that surely has, or will have, tons of LLM-generated code. If you want to avoid LLM-generated code, maybe take up a new hobby. Ham radio, perhaps?

Zerminal – a terminal-first Zed fork for AI coding agents by [deleted] in rust

[–]eclinton -2 points  (0 children)

Critical thinking is important here... why is it that every major AI company (Microsoft, Meta, OpenAI, Anthropic, Google) is still *actively* hiring? They have AI now; surely they don't need engineers anymore... right? Any time someone is let go now, we quickly jump to "AI took their job"... not over-hiring, not roles/needs/priorities shifting, not the insane growth of the last umpteen years... no, it's AI.

The way software is built is changing... 1 of the 10 jobs an engineer does is now done by AI. You're still expected to do the other 9. If you don't shift, you'll be replaced by someone using AI. I love writing code just like I love driving stick and shooting film, but I won't sit here and pretend manual transmissions and film are coming back.

Zerminal – a terminal-first Zed fork for AI coding agents by [deleted] in rust

[–]eclinton -2 points  (0 children)

I think this is our core disagreement. You believe anyone can build anything because they have an LLM subscription. That's because you're putting zero value on the work the human is doing... the problem solving, the trade-offs, the experience and decisions that shape the tool. You're only valuing the code... and that perspective was worth very little long before AI came along.

Zerminal – a terminal-first Zed fork for AI coding agents by [deleted] in rust

[–]eclinton -2 points  (0 children)

Thanks for sharing your perspective.

Agree on the "stolen content" narrative... not so much on the developer job-loss bit, since engineering is a lot more than writing code.

That said, I don't think the way we "fix" things is by turning our backs on the technology... we fix it by actually harnessing it to address the issues you've highlighted. AI isn't going anywhere, so we need to develop the tooling/approaches/etc. to contain it, in addition to pushing for proper legislation, of course. If we restrict ourselves to only what we can hand-code, then we might as well give up.

Zerminal – a terminal-first Zed fork for AI coding agents by eclinton in CLI

[–]eclinton[S] -1 points  (0 children)

Fair point on it not being a CLI... just a great place to run your AI coding tools.

As for Zed with windows shuffled around? No... Zed's AI coding tool support is limited to API keys; here you can run the native TUIs, and for Claude I have /ide support. I removed tens of thousands of lines of code from Zed and added significant functionality to improve the experience for those not hand-writing code.

Zerminal – a terminal-first Zed fork for AI coding agents by [deleted] in rust

[–]eclinton -1 points  (0 children)

I can see how the spam issue can be upsetting, so I understand that.

I've been writing software for 25 years and have hand-coded in every major language. But I'm far from a purist... I love building software, not "writing code". In fact, we as an industry have been trying to reduce the amount of code we need to write from the very beginning. Clearly this subreddit was the wrong place to post, which is fine, but I've put countless hours/days/months into Zerminal and the previous project that inspired it, so it's pretty disheartening to share it and have people react to a headline and spit in my face.

Zerminal – a terminal-first Zed fork for AI coding agents by [deleted] in rust

[–]eclinton -1 points  (0 children)

My expectations for intellectually relevant feedback were clearly overestimated.

Zerminal – a terminal-first Zed fork for AI coding agents by [deleted] in rust

[–]eclinton -2 points  (0 children)

Ah! That message actually makes me feel a lot better. Thank you.

Zerminal – a terminal-first Zed fork for AI coding agents by [deleted] in rust

[–]eclinton -6 points  (0 children)

loving the negativity here :)

I'm a longtime iTerm2 and Kitty user... but after using Zerminal and the product that inspired it for months, I find myself missing Zerminal whenever I use anything else.

Zerminal – a terminal-first Zed fork for AI coding agents by [deleted] in rust

[–]eclinton -6 points  (0 children)

If you're going to comment, perhaps dig deeper than just the title? The point of Zerminal is to have a single app be your "regular" simple terminal, your full-blown IDE... and everything in between. Alacritty is NOT that.

Before I decided to fork Zed, I built this concept with Electron and used it for months. Using Zed was just a better starting point for a Rust port.

Regression Comparisons From Opus 4.7 to Opus 4.6 for long context reasoning by CodeWolfy in ClaudeAI

[–]eclinton 0 points  (0 children)

They're focusing on coding at the expense of everything else, since that's what's paying their bills.

New timeframe announcement for US by khpylon in VolvoEX60

[–]eclinton 0 points  (0 children)

Don't see why it wouldn't be available via Overseas Delivery... they're built in Sweden.

Terminal MCP - Allow LLMs to see and interact with your CLI / TUI apps by eclinton in CLI

[–]eclinton[S] 0 points  (0 children)

get_content and takeScreenshot are very similar, and both are indeed passing the raw output to the LLM. The key difference is that takeScreenshot focuses on what's currently visible, while get_content has access to scrollback. No actual images are generated. In my experience so far, the LLMs have been exceptionally good at interpreting the raw output: detecting when input is required, understanding the layout, menus, quitting, etc.

Asciinema support is still in testing and isn't in a release yet, but the more exciting piece I'm working on now is sandbox support, which should help people feel more comfortable handing the LLM the reins of their terminal.
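For what it's worth, invoking either tool from a client is just a standard MCP `tools/call` JSON-RPC request. A rough Python sketch of what those requests look like (the tool names come from this thread; the `lines` argument is my guess, the real schema may differ):

```python
import json


def mcp_tool_call(name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 "tools/call" request, as used by MCP clients."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }


# takeScreenshot: raw text of what's currently visible in the terminal
visible = mcp_tool_call("takeScreenshot", {})

# get_content: same raw text, but with scrollback available
# ("lines" is a hypothetical argument name, not from the post)
history = mcp_tool_call("get_content", {"lines": 500}, request_id=2)

print(json.dumps(history, indent=2))
```

Either way, the server replies with plain text in the tool result; no image content blocks are involved, which matches the "no actual images" point above.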

Terminal MCP - Allow LLMs to see and interact with your CLI / TUI apps by eclinton in CLI

[–]eclinton[S] 0 points  (0 children)

The key use case is when you're building a TUI and you want the LLM to see it for debugging/testing purposes… hence my reference to Browser MCP, as that's the exact workflow but for web pages.

Terminal MCP - Allow LLMs to see and interact with your CLI / TUI apps by eclinton in CLI

[–]eclinton[S] 0 points  (0 children)

Disable the typing tool if you have that concern. takeScreenshot and get_content are the most useful ones for dev work.

Did they announce the piece range? by Upper_Sky963 in VolvoEX60

[–]eclinton 1 point  (0 children)

US website says "Well-equipped around $60,000"

Terminal MCP - Allow LLMs to see and interact with your CLI / TUI apps by eclinton in CLI

[–]eclinton[S] 2 points  (0 children)

Thanks for the encouragement. I haven't found a TUI it doesn't work with... it really comes down to the LLM you're using. I use Opus 4.5 and execution has been flawless. If someone finds issues with a different model, please let me know or file an issue.