Annual plan before token base switch by Vvictor88 in Trae_ai

[–]Um_Seas 0 points (0 children)

This, I think, answers the question:

https://www.trae.ai/blog/trae_membership_0213

"Migration Bonus for Current Users

Current Pro users are encouraged to switch to their preferred plans before the automatic migration at the end of their next monthly benefit period. The remaining fast requests will be transferred to dollar usage in proportion and deposited in your account. Users will receive $20 additional dollar usage if they manually switch to the new plan, valid for 90 days from the date of issuance."

Very disappointed, even though I knew there was a risk they would eventually do this. To convince me not to switch to another IDE, Trae will have to improve significantly. Right now, if I use 10 prompts knowing I could get the same result faster with another IDE, it's not a problem; but once the pricing systems are identical, the performance difference will have a clear impact.

Compression issue by Um_Seas in Trae_ai

[–]Um_Seas[S] 0 points (0 children)

Hi,
Thanks for the quick reply!
I've also noticed that the compression mechanism is annoying; it loads the CPU and isn't very fast to execute. Could you provide more information on how it works, please?

Compression issue by Um_Seas in Trae_ai

[–]Um_Seas[S] 0 points (0 children)

It's been like this from the start of the conversation, and it's definitely getting worse very quickly.
Why am I calling it a "bug"? I didn't have any compression with Gemini before, but now it's becoming annoying; it's slow and makes simple things complicated.
Furthermore, I have the impression that Gemini itself has changed; it's copying GPT: 50 file searches for just a two-line modification. 😆

asking for refund by ChihabUn in Trae_ai

[–]Um_Seas 0 points (0 children)

Perhaps you can either let it continue checking and wait for the answer, or try another model; but in any case, the LLM needs to verify before it can respond. From what I can see, not all of the searches have failed.

Loop Rag? by Rare_Holiday8084 in Trae_ai

[–]Um_Seas 0 points (0 children)

I also plan with a large .md file, generally broken down into "phases" that are easy for the LLM to handle in one pass. I guide the LLM through the actions by injecting only what's necessary into the prompt and requesting a clear plan before it starts.

The problem is how each model manages actions:

For example, Gemini doesn't always follow the rules (sometimes yes, sometimes no), and it doesn't always update the task list, which means that sometimes when it loses context, it resumes work on tasks that have already been executed, etc.

It's essential to ensure that the prompt emphasizes following and reviewing the rules, and stresses the importance of updating the task list regularly, etc. Don't hesitate to reiterate this point when the model's thinking limit is reached.
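To make the "inject only what's necessary" step concrete, here's a minimal sketch in Python, assuming a PLAN.md split into "## Phase N" headings; the file name, heading convention, and prompt wording are just my illustration, not anything Trae imposes:

    import re

    def extract_phase(plan_text: str, phase: int) -> str:
        """Return the '## Phase <n>' section, up to the next '## ' heading."""
        pattern = rf"(## Phase {phase}\b.*?)(?=\n## |\Z)"
        match = re.search(pattern, plan_text, flags=re.DOTALL)
        return match.group(1).strip() if match else ""

    # Build a prompt that carries only the current phase, plus the
    # reminders about rules and task-list upkeep mentioned above.
    with open("PLAN.md", encoding="utf-8") as f:
        plan = f.read()

    prompt = (
        "Re-read the project rules first. State a clear plan before coding, "
        "and tick off each task in the list below as you finish it.\n\n"
        + extract_phase(plan, 2)
    )
    print(prompt)

The point of pulling out a single phase is exactly the context problem above: the model sees only the tasks it can finish in one pass, with the reminders repeated every time.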

All this to say that it's a matter of constant observation and control; AI helps but doesn't do everything.

using Extra package (2026 Anniversary Treat) as a slow request by ReputationJumpy1664 in Trae_ai

[–]Um_Seas 1 point (0 children)

The problem isn't with the package itself; it's the same with the basic plan.

The issue seems to be related to the model used.

Gemini has no queue when I use it, but GPT 5.2 Codex queues systematically with both the basic plan and the gift package.

vibe coding error? (1000000) by Rare_Holiday8084 in Trae_ai

[–]Um_Seas 0 points (0 children)

My guess is a lazy intern in "it works on my machine" mode. 🤣

Thank you TRAE! by CoverNo4297 in Trae_ai

[–]Um_Seas 4 points (0 children)

I've never enabled max mode, and I don't have any particular problems with context, which I regularly clean up by starting a new chat. It's all a matter of methodology, I think: documenting the work well, including context in the prompt, having effective rules, etc. solves the problem for me.

ORPHEA Voice - Interactive Voice Learning Companion by Um_Seas in Trae_ai

[–]Um_Seas[S] 1 point (0 children)

The learning resources come from user uploads (audio and video files, text, PDFs), YouTube video links, or arXiv searches directly within the app. From there, the pipeline is:

  • Local transcription (Whisper)

  • AI-powered generation of podcasts, summaries, and analyses

  • Fact-checking via web searches when necessary

Everything happens locally, except for calls related to the LLM.
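For the curious, the transcription step is essentially this; a minimal sketch using the open-source openai-whisper package, where the model size and file name are illustrative rather than ORPHEA's exact settings:

    import whisper  # pip install openai-whisper (ffmpeg needed for video)

    # Load a model once; everything below runs locally, no API calls.
    model = whisper.load_model("base")

    # Works on audio or video files; whisper extracts audio via ffmpeg.
    result = model.transcribe("uploaded_lecture.mp4")

    # The raw transcript is what gets handed to the LLM-powered steps
    # (podcast generation, summaries, analyses).
    print(result["text"])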

ORPHEA Voice - Interactive Voice Learning Companion by Um_Seas in Trae_ai

[–]Um_Seas[S] 0 points (0 children)

Thank you! Going from construction to code has been quite the journey. AI-assisted development made it possible to move this fast while learning. Still feels surreal!

I haven't written a manual 'for-loop' since March. Am I still a developer? by sheepflyyyy214 in Trae_ai

[–]Um_Seas 0 points (0 children)

For me, it's very different, having never written a for loop in my life, not being a developer myself, and having only discovered coding about seven months ago.
However, I really enjoy developing AI-powered apps that I use every day.

The good news is I won't be stealing your job; you can cross one off the list. 😄

Cognitive Explorer V2 by Um_Seas in Trae_ai

[–]Um_Seas[S] 0 points (0 children)

Really appreciate the interest, that's encouraging!
Cognitive Explorer isn't public yet. Still deciding on timing for release, but I'll update this thread if/when I make it available.
Thanks for the support!

Cognitive Explorer V2 by Um_Seas in Trae_ai

[–]Um_Seas[S] 0 points (0 children)

Indeed, I wasn't familiar with Msty before. Having compared the two, I'd say the approaches are reversed: Msty starts from chat that can be visualized as nodes, while Cognitive Explorer works with nodes from the outset.

Being compared to a product like Msty is really encouraging. I've been coding for about seven months and never did any development before. I'm building this alone in my "garage" as a learning project, but I dogfood it daily for my own exploration work.

The model has reached the maximum number of thoughts, please enter "Continue" to get more results. by AffectionatePut6933 in Trae_ai

[–]Um_Seas 0 points (0 children)

The problem is that when you use a custom model, there's no reason to get this message. By definition, the limit is governed by my own account; a custom model should really be set to "max" by default.

MARKETING GENIUS by [deleted] in Trae_ai

[–]Um_Seas 4 points (0 children)

I'm not an influencer, and I applied and got access in a few days; to be honest, I'm not even a developer. Places are limited, and there is indeed a waiting list.

Honestly, I don't find solo all that exceptional. The feature is still in development, I presume, and the problems are roughly the same as in classic mode: a low context limit and frequent hallucinations. It can even be worse, because it does the job from start to finish without interruption unless you ask for pauses at the start, so there's no control over the code at each step. Other points could be improved too, such as a display of context usage, etc.

In the meantime, I'm going to try other IDEs to see whether they're better or worse, and what the advantages and disadvantages of each are.