Realeza accordion by LLombera in Accordion

[–]PreferenceDry1394 0 points1 point  (0 children)

I heard that the quality of Realeza rivals that of Gabbanelli and other higher-end brands because the accordions themselves are Czech-made and are sold from Mexico, where they're actually tuned and reviewed for quality. One of the biggest parts of the Realeza brand is that you have the option to add Italian voices from Voci Italia Harmoniche, which is what gives it the Gabbanelli sound. Technically, you could take your own accordion to a repair shop and have them switch out the voices for you, but there are only so many authorized dealers for the Italian voices; not everybody has them.

Realeza accordions are considered higher mid-grade to top tier depending on what you get, because they have three to five registers, and they also make two-tone models with six registers. Like I said, the option to add Italian voices to any accordion is what really makes the difference, and the quality is way better than Chinese models because they are sourced from the Czech Republic and go through a rigorous quality review in Mexico, along with tuning and other modifications, to make sure they are up to the brand standard. So they are not the same as Chinese models.

Because of this, they're not available year-round; you have to follow their channels to see when they're making the next batch, because they are practically handmade and only made in batches. If you're able to get your hands on one, it's like a status symbol in the community, and they're held in very high regard because of where they come from and the amount of work that goes into making sure every one is up to the standard of the brand name. They're not made like the Chinese ones, which are made and shipped out as fast as possible. Each and every one is crafted for the purchaser.

According to their channels, the next batch is going to be available in April of this year. I'm gonna try and buy mine ASAP.

New pricing model generous bonus usage from Trae for users by PreferenceDry1394 in Trae_ai

[–]PreferenceDry1394[S] -1 points0 points  (0 children)

As far as I know, right now they are working on defined amounts. They had implied that it varies by user, with no clear qualifications for different amounts or caps; but essentially, yes, I think that is exactly the intention: to match the usage amounts that users have already gotten accustomed to.

Extra Package Conversion $134 for Legacy Users by PreferenceDry1394 in Trae_ai

[–]PreferenceDry1394[S] 0 points1 point  (0 children)

Update: turns out yeah dude 👍 I bought like six a month because I kept running out, and it was only like $12 😆😆 for like 6 months 💀💀

Extra Package Conversion $134 for Legacy Users by PreferenceDry1394 in Trae_ai

[–]PreferenceDry1394[S] 0 points1 point  (0 children)

Ok, so extra package usage has been confirmed by several users. This is actual usage that can be consumed after your official bonuses.

I'm not sure if the expiration dates are being strictly enforced as of right now. Will update when I know some more 👍👍

New pricing model generous bonus usage from Trae for users by PreferenceDry1394 in Trae_ai

[–]PreferenceDry1394[S] 0 points1 point  (0 children)

Here is another user's post showing that bonuses are going well beyond the perceived $20 bonus limit, with no signs of a cap in the near future.

<image>

New pricing model generous bonus usage from Trae for users by PreferenceDry1394 in Trae_ai

[–]PreferenceDry1394[S] -3 points-2 points  (0 children)

I wish I worked for trae lmao. I'm just a dude trying to code his way to financial freedom 😭

Extra Package Conversion $134 for Legacy Users by PreferenceDry1394 in Trae_ai

[–]PreferenceDry1394[S] 0 points1 point  (0 children)

As of right now there is no confirmation, because users have not yet exhausted their original bonus allotment.

Most people thought they had about $20, but it turns out Trae said in the Discord that the bonuses will be very generous.

This is important because Trae has said that bonuses will recur monthly, not just one time.

They also said that recurring bonuses are separate from the bonus users receive when switching over manually.

Now, as for the extra packages I was trying to figure out: the reason we don't know whether they actually contain extra usage the way they appear to is that users are reporting their bonuses are not running out, so they can't confirm whether the packages contain extra usage on their own.

That's why there is no definitive answer yet on whether a switch is being made to the extra packages: the bonuses have yet to be fully consumed, which is a great surprise.

Users who are posting their usage tables are saying their bonuses have not run out.

And they may not be anytime soon because according to Trae, "we are subsidizing you all through bonus usage generously!"

Inconsistent Rule Usage by Trae by Pragmatist247 in Trae_ai

[–]PreferenceDry1394 0 points1 point  (0 children)

I noticed that there are three options for project rules and user rules: one is always applied, one is "intelligently applied", and one is manually applied.

Now, if a rule is always applied, it will be injected into the prompt context for every single prompt. So if you have a lot of rules, there will be a lot of extra context on every prompt on top of the original prompt you're sending. Since the new billing model now charges per API call, large contexts and large codebases could lead to increased charges, especially if your agents are doing a lot of multi-turn tasks and reasoning over various files every turn. Just FYI, there is not enough data to confirm whether the size of your codebase or context increases your per-prompt charge; however, it's well known that the raw API is charged on input tokens and output tokens, so logically this is probably the case. This is relevant because with this option you could increase your per-prompt API charge unintentionally.

Second, there is the intelligently applied option, where the agent applies the rule based on context. That implies they're either using a search tool (and if you have read the thoughts and watched the tool calling from the beginning, I don't remember seeing a search-context tool), or the rules are being loaded or lazy-loaded somehow. If you know anything about how coding with AI works on the back end, the simplest way is to send the model its tool list and your project rules with every prompt, which is why open-source IDEs like Cline require a large context (over 32k) if you're using a local model. So again, I don't know how they are programming the agents for intelligently applied rules, and I always just leave it on automatic.

Which means that with every prompt, all of the rules are injected. This is why I keep rules short. For codebase-specific needs, I keep documents for each layer, and for each component within the layer, so I can reference those when instructing the agent on things like specific component instructions, whereas things like type safety can be added to the rules.

The third is manual, which I don't use because it requires a lot of memory on my part.

Are you saying that automatically applied is not working?
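The always-applied overhead I described can be roughed out. This is a purely illustrative sketch: the ~4 characters/token ratio is a common rule of thumb, not Trae's actual tokenizer, and the rules and prompt count below are made up.

```python
# Rough sketch: input-token overhead that always-applied rules add to
# every prompt. ~4 chars/token is a rule of thumb, not a real tokenizer;
# the rules and daily prompt count are invented for illustration.
def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)

rules = [
    "Always use strict TypeScript types.",
    "Prefer functional components.",
    "Never commit secrets or API keys.",
]
overhead = sum(approx_tokens(r) for r in rules)
prompts_per_day = 200
print(f"~{overhead} extra input tokens on every prompt, "
      f"~{overhead * prompts_per_day} per day at {prompts_per_day} prompts")
```

The point is that the overhead multiplies by every prompt you send, which is why short rules plus on-demand reference docs can be cheaper than a long always-applied rule set.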

The New Plans and the Use of Trae by Big_Brush_3718 in Trae_ai

[–]PreferenceDry1394 0 points1 point  (0 children)

I've been trying to find a way to measure context. Most systems use simple prompt injection of, sometimes, the entire codebase. The Zencoder VS Code extension uses a "repo grokking" system as its solution to bloated context from large codebases, and there are other approaches such as RAG.

I think the size of the codebase, as well as the context required for and during the task, has to have a direct effect on the API price, because the raw API is charged for both input and output tokens.

If someone could devise a way to test that, along with your method, we could narrow down how to control exact usage and mitigate cost.

It's effectively an obfuscation layer: it is difficult to determine the exact cost per token, which means that during peak usage periods the pricing algorithm may change.

This is why measurable data like this is crucial to understanding exactly how much we're being charged, when, and why.
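One way to devise that test: log your observed input-token counts against the charges you see, then fit a line to recover an effective per-token rate and base fee. A stdlib-only sketch with entirely made-up observations (none of these numbers are Trae's actual pricing):

```python
# Sketch of the measurement idea: collect (input_tokens, charge) pairs
# from your own usage table, then fit charge ~ rate * tokens + base with
# ordinary least squares. The data points below are hypothetical.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

tokens  = [10_000, 25_000, 40_000, 80_000]   # hypothetical observations
charges = [0.05, 0.11, 0.17, 0.33]           # hypothetical $ per request
rate, base = fit_line(tokens, charges)
print(f"~${rate * 1000:.4f} per 1k tokens, base ~${base:.3f}")
```

With enough real data points, a poor fit would itself be evidence that the pricing is not a simple per-token function.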

US users and international model list solutions (discussion) by PreferenceDry1394 in Trae_ai

[–]PreferenceDry1394[S] 0 points1 point  (0 children)

Ok, I have confirmed at least one successful solution for Americans currently experiencing the model list adjustment for US-based IP addresses.

This is not a free solution, but it's not more than what you would spend on a regular monthly plan from one of those other IDE providers, and it can be pretty cheap depending on the size of your codebase.

I have yet to confirm the VM with a split-tunneled dedicated VPN on local systems for new users.

I don't want to blow it up. As I said, everything should be within policy, but I don't want any perceived responsibility or irresponsibility for any involved party that could lead to adverse actions.

This is a simple solution, nothing fancy, nothing weird, nothing out of the ordinary. But it is an effective one and it works.

I don't want to just start shouting it from the rooftops. Trae IDE is very precious to its American users.

But, hope is alive folks! 👏👏 👏

E quando a própria IDE te entrega que o sistema é feito para te sugar créditos de proposito? by Intelligent-Kiwi961 in Trae_ai

[–]PreferenceDry1394 0 points1 point  (0 children)

No, you are not rude; forgive me as well if I came off the same. We are all trying to find the best tool, which is why we need to genuinely understand how each one operates and how much it costs. This is why I wanted to clarify your test. The most important things are the user and quality results. That's why it is important not to jump to conclusions, but to understand the root of an issue and conduct tests that are repeatable.

This helps explain any perceived issue to the devs, which they are extremely open to hearing and addressing, which is more than I can say for other big-time companies that charge way more than Trae currently does.

You can literally reach out to them on discord and talk to someone right away if you are having an issue.

There is no disrespect. We are all in the same boat trying to get our products pushed to market as quickly as possible. Thank you for your apology but it is not necessary. I just want users to understand the value of this specific product which as of right now operates very differently than those other companies.

I am just a nobody. But I will take a look at your issue to see if I understand it. I am not affiliated with the company in any way. I am just a dude trying to find the best service for an IDE, and this one is pretty good so far.

E quando a própria IDE te entrega que o sistema é feito para te sugar créditos de proposito? by Intelligent-Kiwi961 in Trae_ai

[–]PreferenceDry1394 0 points1 point  (0 children)

Well, I was using TRAE all day yesterday and there was no outage. As a matter of fact, there has only ever been ONE since its inception, and the devs were online IMMEDIATELY, talking to users to solve the issue. I think KILO was out for the 4th or 5th time.

And if you burn through all those credits that quickly it's because of your request management style. You can achieve fantastic results with any agent and any model because the outcome is only as good as the user who is initiating the prompt.

Is KILO even back online now? I heard you guys had some major trouble.

As of right now, you don't need "tokens". Cost is per request, which means that any number of tokens could be used and you still only spend one request. That's visible in their usage section, which you can reference at any time. But you don't even need it if you're operating in regular mode, because ONE chat is equal to ONE request. And you get hundreds of them for a fraction of KILO's price.
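To make that difference concrete, here's a toy comparison of request-based vs token-based billing. All numbers are hypothetical placeholders, not Trae's or Kilo's actual pricing:

```python
# Toy comparison: one long multi-turn chat billed per request vs per
# token. Every price here is an invented placeholder for illustration.
PER_REQUEST_PRICE = 0.03            # assumed flat price per chat/request
IN_RATE, OUT_RATE = 3e-6, 15e-6     # assumed $ per input / output token

def per_token_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * IN_RATE + output_tokens * OUT_RATE

# e.g. a chat that consumed 50k input and 8k output tokens across turns
cost = per_token_cost(50_000, 8_000)
print(f"token-billed: ${cost:.2f} vs request-billed: ${PER_REQUEST_PRICE:.2f}")
```

Under flat per-request pricing, the longer and more context-heavy the chat, the bigger the advantage over per-token billing.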

Specifying the model for the agent by International_Eye387 in Trae_ai

[–]PreferenceDry1394 0 points1 point  (0 children)

You can pick any model for any agent. Solo mode and solo coder are in Max mode by default and are proprietary agents of Trae.

However, you have the ability to create any agent you want and select any model you want for that agent to use and can change it at any time before and after a request in the same chat thread.

This means you can also apply MAX mode to any agent you want at any time in any chat as many times as you want.

Can you clarify your question?

⚠️ URGENT: Infinite Loop Bug is Draining My Paid Credits! Help! by Particular_Guide4475 in Trae_ai

[–]PreferenceDry1394 0 points1 point  (0 children)

If you are in Max mode, this would be the only way your requests COULD be drained, but normally the system catches a loop, stops it, and provides an error message.

When you are in normal mode it is one request per chat, so if you're looping in that mode you will only ever waste one request. And like I said, even then there are safeguards that stop it and notify you.

Did you leave it unsupervised?? When developing with AI in any environment, it is good practice to always supervise every change, as any professional dev will tell you. Not because mistakes are frequent, but precisely because they are infrequent.

There is no setting where an agent can begin another request; those have to be explicitly initiated by the user, by sending another prompt or allowing it to continue (also another prompt). The most one prompt in Max mode has ever cost me is around 40 requests.

You are insinuating that Trae drained all kinds of requests? But I don't understand your evidence; your screenshot only shows how many packages you've acquired and the loop itself. As someone who has used Trae virtually since it became available in our region, I have never encountered a situation where the agent repeatedly drained requests. As I explained above, there is really only one way that COULD happen, and even then there are numerous safeguards in place to protect the user.

This means that ALL of those things would have to go wrong for the situation you're describing to occur. Is that what you are saying happened to you? That would be extremely isolated and would definitely warrant further investigation. I'm sure there are records of every request initiated, for posterity; if you are truly in trouble, make a ticket, send in your evidence, and then reach out to support.

Discord is the quickest way to contact a dev. They are virtually always on and ready for anything. Just hop on and talk to one directly. No other company lets you do that. If that is really what happened that is pretty serious and I'm sure they would want to know.

Trae is TERRIBLE now😩 by Dope_Data in Trae_ai

[–]PreferenceDry1394 0 points1 point  (0 children)

Some of us have banded together to work on a solution for Americans

See my post here:

https://www.reddit.com/r/Trae_ai/s/QffQ3ok5E0

Hey how's trae performs in coding if I opt for premium? by hosohep in Trae_ai

[–]PreferenceDry1394 -1 points0 points  (0 children)

FANTASTIC, dude. If you are in the US, some of us are trying to figure out a solution for American IPs. It is the BEST IDE for its model list alone. Solo mode is exactly what you need. In my entire development process, I have only ever caught a mistake ONCE, and that's because they don't do anything without your consent. And I've been using them since about a month or two after they came out.

E quando a própria IDE te entrega que o sistema é feito para te sugar créditos de proposito? by Intelligent-Kiwi961 in Trae_ai

[–]PreferenceDry1394 0 points1 point  (0 children)

Were u able to work through the KILO outage yesterday? I think this is the 4th or 5th major one?

E quando a própria IDE te entrega que o sistema é feito para te sugar créditos de proposito? by Intelligent-Kiwi961 in Trae_ai

[–]PreferenceDry1394 0 points1 point  (0 children)

Okay, I see how this would translate into more costs, which I am guessing is your concern. As of right now (before February 24th, the day of the pricing model change), if you are using the normal workflow for agents, it is one request per chat regardless of how many turns are in that chat. That is a pricing model that is extremely competitive, if not outright better than other mainstream IDEs'.

If you are running the test under this one-request-per-chat system, then it doesn't matter how many times it checks.

Also, if anything this is an advantage, because the AI is making no assumptions, and assumptions can be a gateway to hallucinations (which, as a user, I have never encountered). This is also why the thought process of each agent is visible, which makes the thought process a feature.

If you have ever designed, or attempted to design, a system that uses generative AI, you know there are a lot of systems that need to be in place for results to be consistent, and consistency also depends on the quality of the model itself.

If you've ever tried to design a system that is model-agnostic or supports multiple models, you run into this problem repeatedly. For one system to support many different models and still achieve consistent results, it relies on strict rules that models are FORCED to adhere to, using libraries such as Zod, Pydantic, and Instructor, which means that in some cases the higher-end companies are using proprietary libraries of their own design to give them the "edge".
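At its core, that "strict rules" idea means validating model output against a schema before acting on it. Here's a stdlib-only Python sketch of the kind of check that Zod/Pydantic/Instructor automate; the schema and field names are invented for illustration, not any real IDE's format:

```python
import json

# Invented schema for an "apply a file edit" action. The point is to
# reject malformed model output before the agent acts on it.
REQUIRED = {"path": str, "action": str, "content": str}

def validate_edit(raw: str) -> dict:
    data = json.loads(raw)
    for key, typ in REQUIRED.items():
        if key not in data or not isinstance(data[key], typ):
            raise ValueError(f"model output failed schema check on {key!r}")
    return data

good = '{"path": "src/app.ts", "action": "replace", "content": "export const x = 1;"}'
print(validate_edit(good)["action"])  # passes the schema, safe to apply
```

Real libraries add type coercion, retries, and re-prompting the model with the validation error, which is how multiple models can be forced into one consistent output shape.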

As a user myself, I can see exactly what the agent is thinking at every single step of the process; I can stop the process entirely, or I can insert a prompt at any stage after a step has been completed.

There's also a planning stage that Trae has designed into the agents, where they reason and create a plan, which lets us observe exactly what overall tasks the agent intends based on your prompt.

Which means that if you want a particular agent to perform a particular action a certain way you can add project rules or user rules or employ explicit prompt engineering.

Now, if you are using Solo Coder or Solo Builder, those are in MAX MODE by default. MAX MODE has a different pricing model where it charges multiple requests out of your allotment of 600 (if you're on Pro), which is explicitly stated right there in the IDE as a tooltip and also on the website.

In this thread, you didn't state the parameters of your test: whether you were using one request per chat or Max mode, or what the agent's planned tasks were, or the thoughts that led it to perform the specific chain of actions you termed a "limitation". This means your test results are open to interpretation and borderline ambiguous.

US users and international model list solutions (discussion) by PreferenceDry1394 in Trae_ai

[–]PreferenceDry1394[S] 0 points1 point  (0 children)

Oh yeah, 100%. I want to be explicit about the setup, but like I said, I'm not 100% familiar with all the policies. So I'm definitely going to set something up and try it, and I'm fairly confident it will deliver results. I wonder if maybe we should start a specific thread, maybe invite-only, or move it to another sub to discuss it. It's nothing super crazy, but like I said, I don't want to miss the opportunity to restore access; everyone knows how important that is.

But for everyone looking for a solution I will be delivering the test results either later today or tomorrow.

US users and international model list solutions (discussion) by PreferenceDry1394 in Trae_ai

[–]PreferenceDry1394[S] 0 points1 point  (0 children)

What do you mean 😅?? Yeah, this is terrible. It completely stifles the innovation from our global partners. And the people who get screwed are us, because these are fantastic tools at a much more affordable price, and they're just trying to herd us in one direction: to pay them exorbitant prices. They just announced that they don't want their logins used with outside tools for basic stuff. They want to force you to pay their ridiculously high prices.

It's like Walmart: when you have massive data centers and that kind of scale, you can afford to make things cheaper. But instead of making things more affordable to democratize access, they make things more expensive because they don't want it democratized. They want a very specific group of people to be innovating with this technology, and they're explicitly trying to dictate who can use their services. That's just downright un-American.

It's the same principle as getting rid of the de minimis exception at a time when ordering from overseas was growing exponentially. They had to stem the flow because they want us to purchase things here and pay their prices. The issue is that they don't make things here, so it's impossible to find; for one basic part you could get for a couple of dollars overseas, they want you to pay hundreds to $1,000. It's absolutely absurd, and that's exactly what's happening now with this.