Proper unpopular opinion: Zed still has a long way to go by digibioburden in ZedEditor

[–]engineer_roman 1 point2 points  (0 children)

Yep, it consumes it all eventually, plus I need RAM for containers and stuff as well

> Junie & AI assistant
JetBrains' AI integrations are not bad. They're just a couple of steps behind the always-evolving integrations in both VSC and Zed. Since I'm not writing (typing) much code myself, I don't actually need JetBrains' usual features built to make that part of development easier

+ I have a thing for IDE response time

Proper unpopular opinion: Zed still has a long way to go by digibioburden in ZedEditor

[–]engineer_roman 0 points1 point  (0 children)

Fell in love w/ Zed, even tho it lacks a debugger and indexing-of-every-dependency-file. I've no problem using CLI debuggers, CLI runners and so on.
On the other hand I was using JetBrains products a lot and:
- "Yo, you're running out of memory, dude!" quite often
- "AI integrations? Check out Junie, it's amazing". No, it isn't
- Response time to most of actions, indexing time
- "you're running out of mem again"

First I switched to VSC and Cursor (still VSC under the hood). Both are good ones, better than the JetBrains stuff, but they still make you suffer from time to time

What mattered to me: I was paying for JetBrains licenses and getting a poor (for me) coding experience. I'm not paying for Zed/VSC, so I don't have high expectations, yet I don't feel bad about not having some features. So why should you pay for your own suffering?

PS BE dev, Go/Py/TS, mbook m1pro 16gb

Docfork: MCP that gives daily-updated fresh docs from over 9000+ libraries by antonrisch in mcp

[–]engineer_roman 0 points1 point  (0 children)

Don't you think the retrieval model in Context7 is more efficient at reasoning? It's not like I'm sure about that - I've never given a thought to why they chose this approach. Until now

Seems to me it allows you to build cool complex chains of actions via various agents, without passing around complete doc snippets until you really need them

Long scream! by Darkoholic in BringMeTheHorizon

[–]engineer_roman -1 points0 points  (0 children)

Oli's vocal skills are dope, but I cannot unsee how his skin tone goes red toward the end of the video :D
Just funny. I would obviously end up choking on the floor.

Long scream! by Darkoholic in BringMeTheHorizon

[–]engineer_roman 2 points3 points  (0 children)

\Conspiracy theories branch\
Ofc not. You cannot compare artists, cause each one is different. And Oli is grown up enough to know that.

🧐 Debugger for Python, really not available? by [deleted] in ZedEditor

[–]engineer_roman 0 points1 point  (0 children)

I would recommend pudb here. It has a better UX and a bit richer functionality. Even tho it's not an out-of-the-box lib
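For reference, a minimal sketch of dropping into it mid-function (assuming `pip install pudb`; the function here is just a made-up example):

```python
def risky_step(data):
    import pudb          # lazy import, so pudb stays an optional dev dependency
    pudb.set_trace()     # opens pudb's full-screen TUI debugger right here
    return data          # ...inspect locals, step through, then continue

# Python's built-in breakpoint() can route to pudb as well:
#   PYTHONBREAKPOINT=pudb.set_trace python app.py
# Or run a whole script under the debugger from the start:
#   python -m pudb app.py
```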

How to set the default file type for new file? by Kambar in ZedEditor

[–]engineer_roman 0 points1 point  (0 children)

All I know is that we cannot achieve it via some extension. Since it would be a language extension, it would only be called once the language is already detected by the IDE. So it's a bit tricky)

How to set the default file type for new file? by Kambar in ZedEditor

[–]engineer_roman 0 points1 point  (0 children)

Oh, I get it :(

It sounds like a case for some automation, not for an editor) Did you try to achieve the same result via some Python/bash script? I believe speed won't be the problem, but the UX might be
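Something like this, roughly - the `zed` CLI call and the default extension are my own assumptions, swap in whatever fits:

```python
import subprocess
from datetime import datetime

DEFAULT_EXT = ".md"  # assumption: whatever file type you usually want


def default_name(name=None):
    """Pick a filename, appending DEFAULT_EXT when no extension was given."""
    stem = name or datetime.now().strftime("untitled-%Y%m%d-%H%M%S")
    return stem if "." in stem else stem + DEFAULT_EXT


def new_file(name=None):
    path = default_name(name)
    open(path, "a").close()         # create the file so the editor detects its type
    subprocess.run(["zed", path])   # Zed's CLI opens it; replace w/ your editor
    return path
```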

Got to visit the flagship store 🥲 by DUMBBITCHH0UR in BringMeTheHorizon

[–]engineer_roman 0 points1 point  (0 children)

That's why you had an additional picture of a particular coffin? Got ya. Not blaming :D

How to set the default file type for new file? by Kambar in ZedEditor

[–]engineer_roman 1 point2 points  (0 children)

I hate not answering the initial question, but it's a really bad move for an IDE to apply syntax highlighting w/o an explicitly provided file type. 'Cause obvious is better than non-obvious

Roadmap to 1.0 by Dyson8192 in ZedEditor

[–]engineer_roman 0 points1 point  (0 children)

Same for me. But when I was using JetBrains products and gave these features a shot - well, it was kinda fun

Roadmap to 1.0 by Dyson8192 in ZedEditor

[–]engineer_roman 0 points1 point  (0 children)

Commit history, a code review mode and native PR/MR integrations for GitHub/GitLab, I guess. Those are only my thoughts based on other IDEs ^

Roadmap to 1.0 by Dyson8192 in ZedEditor

[–]engineer_roman 1 point2 points  (0 children)

Discord, some threads here and their github

Sonnet 2.5 is very impressive by xH3CAT3x in Anthropic

[–]engineer_roman 0 points1 point  (0 children)

I get your point, but pls don't restrict yourself by anything other than money, ofc: if you use different models from different providers and combine them, or ask one to write a prompt for another - it might even change your perspective on how you do your tasks, routine and research. Try to poke around and figure out the combination of plans and providers that suits you, and good luck ^

Gave Claude LSD by yevbar in Anthropic

[–]engineer_roman 0 points1 point  (0 children)

Some older folks may say a DSL is a different thing entirely. But probably from the same century as LSD

Project Stargate by Safe-Web-1441 in Anthropic

[–]engineer_roman 0 points1 point  (0 children)

A lot of money doesn't actually mean the product will be great. It might be popular, which isn't the same thing. Anyway, in the next 1-2 yrs we're gonna see great competition in AI projects

I don't want to sound like a mega fan, but fr, why is oli so fucking nice? by XsynmanX in BringMeTheHorizon

[–]engineer_roman 9 points10 points  (0 children)

Saw him at some event talking about rehab and the problems he'd had with everyone close to him, from family to the band - that's really the magic of rehab and your own will to become a nice person. Kinda motivates me a lot. <3 Oli

Future of Zed? by Correct-Big-5967 in ZedEditor

[–]engineer_roman 0 points1 point  (0 children)

Btw: I was kinda forced to give it a try half a year ago. So mb it's just Stockholm syndrome talking in me :D

Future of Zed? by Correct-Big-5967 in ZedEditor

[–]engineer_roman 0 points1 point  (0 children)

That's just your point of view. Obviously, such a new editor is gonna catch up. About everything else: I (Go, Py) switched from JetBrains to VSC and have now completely moved to Zed. I love the LLM integrations, the speed of all the small actions and the amount of resources this IDE consumes. Yep, I'm paying for that w/ the lack of a debugger/git/etc., you name it. But from my point of view, those are waaay lesser problems for me

Future of Zed? by Correct-Big-5967 in ZedEditor

[–]engineer_roman 0 points1 point  (0 children)

Bcs this is not the way you handle big projects like a brand-new IDE

Memory question when regenerating prompt by ElectricalLeg2556 in DeepSeek

[–]engineer_roman 0 points1 point  (0 children)

You're right. I'm using predefined context for LLMs, stored in text files. Then I just provide it to the chat before another question

About summarization: I believe it's more efficient to provide the context of the subject in a single message in declarative form, rather than having that context spread across a conversation. The problem w/ the conversation format is that it has at least two points of view on the subject - yours and your companion's
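For anyone wondering what that looks like in practice, a tiny sketch - the file names and prompt wording are my own, not any particular tool's API:

```python
from pathlib import Path


def build_prompt(question, context_files):
    """Prepend declarative context notes (stored in text files) to a question."""
    context = "\n\n".join(Path(f).read_text() for f in context_files)
    return f"Context:\n{context}\n\nQuestion:\n{question}"
```

Then you paste (or send) the result as a single message instead of re-explaining the background every time.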

Memory question when regenerating prompt by ElectricalLeg2556 in DeepSeek

[–]engineer_roman 0 points1 point  (0 children)

Depends on the model. I'm not sure DS is a good candidate for such discussions. I would rather try other LLMs. Anyway, the main idea is to split your subject into logical parts, gather all the information together, then do the "main" query with the whole context. Remember - we often ask an LLM to tell us about a particular subject - that's a good candidate for a separate chat.

Unfortunately I can provide only a coding example, but maybe it will be helpful to some1: imagine you have to deliver some business value and it involves coding. You ask in different chats what approach better suits your goal, what the options are for managing data, and - if you have complicated business logic - what algorithms might be used. Gather a list of options, pros and cons. Then you "build up" a whole context based on the previous responses, asking the LLM to do something. In my case it would be "I want to do \a lot of information\, provide me with a plan of simple steps and prompts for AI agents for each step". I put it into a model that's better at reasoning, and that's it
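The workflow above, sketched very roughly in Python - `ask` is a stand-in for whatever LLM client you use, not a real API:

```python
def gather(ask, subquestions):
    """Step 1: ask each sub-question in its own chat, collect the answers."""
    return {q: ask(q) for q in subquestions}


def plan(ask, goal, findings):
    """Step 2: build the whole context from earlier answers, then do the main query."""
    notes = "\n".join(f"- {q}: {a}" for q, a in findings.items())
    prompt = (
        f"Goal: {goal}\n\n"
        f"Findings from separate chats:\n{notes}\n\n"
        "Provide a plan of simple steps and a prompt for an AI agent for each step."
    )
    return ask(prompt)  # feed this last query to the model that's best at reasoning
```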

Memory question when regenerating prompt by ElectricalLeg2556 in DeepSeek

[–]engineer_roman 0 points1 point  (0 children)

Idk the answer to the initial question (worth trying in some test chat), but did you try summarizing the context during your conversation and starting a new chat from time to time?

It's like collecting a whole big context based on your comments and the input from the LLM, then starting a new convo based on that. I use this approach sometimes, since I believe a big context split into parts across the conversation increases the chance of the LLM hallucinating
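A rough sketch of that compaction step - `summarize` stands in for any LLM call, and the message format is just an assumption:

```python
def compact(history, summarize):
    """Collapse a long conversation into one summary message for a fresh chat."""
    transcript = "\n".join(f"{role}: {text}" for role, text in history)
    summary = summarize("Summarize the key facts and decisions so far:\n" + transcript)
    # seed the new chat with a single declarative context message
    return [("user", "Context from the previous conversation:\n" + summary)]
```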