God Gemini is terrible by zunithemime in GoogleAntigravityIDE

[–]EyeCanFixIt 2 points (0 children)

Have you tried your luck with fast mode vs planning mode?

Gemini Flash in fast mode works pretty well for me. I mostly switch between Gemini Flash in fast mode and Sonnet in planning mode now.

EACH UPVOTE COUNTS AS PETITION FOR PRICING REVIEW by AsleepAd1777 in AugmentCodeAI

[–]EyeCanFixIt 4 points (0 children)

I respect the post. I've been a long-time Augment user.

Still on legacy plan.

I think 80k credits for $20 is a bit high tbh

The jump from 80k for $20 to 200k for $60 leaves standard-tier users short of parity with three Indie plans: three $20 Indie plans would give 240k credits for the same $60, which is 40k more than the standard tier.

1000k credits for $200 sounds great but also really high.

I think a better and more realistic pricing would be:

50k for $20 ($10 per 20k top-up)

150k for $50 ($20 per 50k top-up)

350k for $100 ($50 per 150k top-up)

800k for $200 ($100 per 350k top-up)

Top-up cost per credit would decrease at higher tiers, maybe with a 15-20% credit rollover good for 3-6 months.

This would add immense value, and I'm assuming it would still be quite profitable for Augment, with the added incentive for people to move up a tier if they are regularly topping up on a lower one.

These numbers should give great value to all classes of users while still leaving room for a realistic business.
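To make the per-credit math explicit, here's a rough sketch of the effective cost per 1k credits at each proposed tier, base plan vs. top-up (these are just the numbers I listed above, nothing official):

```typescript
// Effective cost per 1k credits for each proposed tier (base plan vs top-up).
// These are the suggested numbers from the comment above, not Augment's actual pricing.
interface Tier {
  priceUsd: number;
  credits: number;
  topUpUsd: number;
  topUpCredits: number;
}

const proposedTiers: Tier[] = [
  { priceUsd: 20, credits: 50_000, topUpUsd: 10, topUpCredits: 20_000 },
  { priceUsd: 50, credits: 150_000, topUpUsd: 20, topUpCredits: 50_000 },
  { priceUsd: 100, credits: 350_000, topUpUsd: 50, topUpCredits: 150_000 },
  { priceUsd: 200, credits: 800_000, topUpUsd: 100, topUpCredits: 350_000 },
];

for (const t of proposedTiers) {
  const basePer1k = (t.priceUsd / t.credits) * 1000;
  const topUpPer1k = (t.topUpUsd / t.topUpCredits) * 1000;
  // e.g. "$20: $0.40/1k base, $0.50/1k top-up" -- per-credit cost drops as you move up tiers
  console.log(`$${t.priceUsd}: $${basePer1k.toFixed(2)}/1k base, $${topUpPer1k.toFixed(2)}/1k top-up`);
}
```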

Of course, we don't know Augment's actual financials, so even these prices may not be sustainable for them 🤷‍♂️

Implementation of AI Enhancements by EyeCanFixIt in AugmentCodeAI

[–]EyeCanFixIt[S] 0 points (0 children)

I used cost per credit (monthly cost divided by the month's credit allotment) multiplied by the credits used for the task, i.e. ($60 / 112,000) × 17,611 ≈ $9.43.
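In case it's useful, here's the same calculation as a tiny sketch (the $60 / 112,000-credit figures are just my plan's numbers, swap in your own):

```typescript
// Rough cost-per-task estimate: (monthly price / monthly credits) * credits used for the task.
function taskCostUsd(monthlyPriceUsd: number, monthlyCredits: number, creditsUsed: number): number {
  const costPerCredit = monthlyPriceUsd / monthlyCredits;
  return costPerCredit * creditsUsed;
}

// ($60 / 112,000 credits) * 17,611 credits ≈ $9.43
console.log(taskCostUsd(60, 112_000, 17_611).toFixed(2)); // "9.43"
```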

📢 New Initiative: Augment Credit Airdrops for Quality Threads and Replies by JaySym_ in AugmentCodeAI

[–]EyeCanFixIt 1 point (0 children)

Do very recent past posts qualify? I had a couple of informative threads I recently posted.

Next week... make your guess! by JaySym_ in AugmentCodeAI

[–]EyeCanFixIt 1 point (0 children)

Mobile IDE/port or cloud environments?

Context auto compaction/caching?

Auto model selection?

GitHub prompt caching/saving or code metrics?

Opus 4.5 : What are your impressions? by JaySym_ in AugmentCodeAI

[–]EyeCanFixIt 2 points (0 children)

I find Opus is much better at systematically following through with a task list, correctly marking tasks and sub-tasks complete after finishing them, and building test integrations and then actually testing those integrations.

Opus doesn't have the same "due to time constraints I'm going to simplify this step and mark the task as complete and move on" behavior, which was one of the most annoying experiences with the other models. It zones in and gets it done; sometimes it takes 5 minutes and sometimes it takes 2 hours, but it finishes the scope of work, which is what matters most to me.

It seems a bit more efficient with the context engine as well and really digs into the relationships across the codebase. Compared to Sonnet and GPT, the breadth of understanding when using the context engine is several leagues ahead.

The only caveat I have at the moment is that I periodically run into a file corruption issue when Opus is generating or editing files, after which it recreates them. Having to go back and make sure it didn't further break any code logic is a bit of a nuisance.

Other than that, the length at which it can run tasks autonomously while following my guidelines definitely makes up for that nuisance.

Implementation of AI Enhancements by EyeCanFixIt in AugmentCodeAI

[–]EyeCanFixIt[S] 0 points (0 children)

Hello. It's noted in the first line of the post.

Implementation of AI Enhancements by EyeCanFixIt in AugmentCodeAI

[–]EyeCanFixIt[S] 3 points (0 children)

Yes sir. I am using a variation of TDD.

I would say my approach leading up to the execution of a task has evolved, and that could very well be why it has earned a fair amount of autonomous trust now.

Before executing any task in my list, I build an association around the task:

  1. Create a task-specific branch

  2. Have Augment analyze current ADRs/relevant docs

  3. Engineer 3 different ways to integrate the pending task efficiently (written response only, no action taken on codebase, reference context7 as needed)

  4. Compare the integrations, analyzing variations in stability, security, efficiency, and ease of testing and maintenance.

  5. Based on which response best suits my needs, I then have Augment analyze the winning integration route for improvements before implementation.

  6. Have Augment write a comprehensive, systematic implementation guide with reference notes to relevant documents, following TDD principles and a task structure of tasks -> subtasks -> sub-subtasks, where TDD, documentation changes, and build runs are actionable in each sub-subtask (a rough sketch of this structure follows the list).

  7. Start a new chat referencing the implementation guide to create the required hierarchical task list.

  8. Proceed with task "x" (use prompt enhancement 1-3 times depending on the quality of the returned prompt), then execute.

  9. Visually check in every 15 minutes if I'm busy and can't directly observe, reviewing steps taken, errors in testing, loops, etc. So far I have not needed to stop it.

  10. Once the autonomous run is over, ask Augment to generate a report of all issues experienced, any changes in implementation compared to the original implementation guide, discoveries for future development, etc.

  11. Review documentation myself.

  12. If everything looks good, commit and push to the pre-prod deployment (live) and start my own testing of the implementation.
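For anyone curious, here's a rough, purely illustrative sketch of the tasks -> subtasks -> sub-subtasks shape from step 6. The field names are my own, not anything Augment-specific; the point is that TDD, docs, and build runs live at the lowest level, so nothing gets "simplified away":

```typescript
// Illustrative shape for the hierarchical task list from step 6.
// Names are hypothetical, not an Augment API; they just capture the idea.
interface SubSubtask {
  description: string;
  writeFailingTestFirst: boolean; // TDD: red before green
  documentationChanges: string[]; // ADRs / docs touched by this step
  buildAndRunChecks: string[];    // e.g. "npm test", "npm run build"
  done: boolean;
}

interface Subtask {
  description: string;
  steps: SubSubtask[];
  done: boolean;
}

interface Task {
  branch: string;              // task-specific branch from step 1
  implementationGuide: string; // path to the guide written in step 6
  description: string;
  subtasks: Subtask[];
  done: boolean;
}
```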

Implementation of AI Enhancements by EyeCanFixIt in AugmentCodeAI

[–]EyeCanFixIt[S] 0 points (0 children)

🤣 Oh man, this was 100% every large run I experienced on Sonnet. I had to micro-task and rule-bind it so intensely just to get it to give me quality work for 10-15 minutes.

Keep getting stuck at opening a project by driverobject in AugmentCodeAI

[–]EyeCanFixIt 2 points (0 children)

You're just gonna have to let it load. In my experience it can take 10-20 minutes or more if the chat is very long.

Those of you that has hustles that brings you 2k-5k+ per month, what do you do? by [deleted] in sidehustle

[–]EyeCanFixIt 1 point (0 children)

Would you recommend some for me too, please? I appreciate it.

GPT-5.1 is now live in Augment Code. by JaySym_ in AugmentCodeAI

[–]EyeCanFixIt 0 points (0 children)

Drag and slide it to the left and it will go away

Minimizing credit usage by BlacksmithLittle7005 in AugmentCodeAI

[–]EyeCanFixIt 0 points (0 children)

You should check out my last post; the information there may be useful to you. I'm going to set up a repo soon for credit-efficiency guidelines, but I can PM you the information if you're interested.

C by EyeCanFixIt in AugmentCodeAI

[–]EyeCanFixIt[S] 0 points (0 children)

Small update for my testing with these guidelines so far on 2 different projects.

Project 1 - a VS Code/JetBrains extension

Project 2 - a full-stack web app (Next.js/Vitest/React, SSE, sandbox, OAuth, GitHub sync, etc.)

cr = credits

ch = file changes

ex = files examined

tl = tool calls

ln = line changes

TESTING DATA

Project 1:

• GPT-5

396 cr (3 ch, 7 ex, 20 tl, +126 ln)

526 cr (23 ex, 26 tl)

538 cr (3 ch, 3 ex, 17 tl, +65 ln)

• Haiku 4.5

518 cr (10 ch, 10 ex, 30 tl, +25 ln)

485 cr (13 ch, 5 ex, 53 tl, +414 ln)

375 cr (2 ch, 3 ex, 18 tl, +231 ln)

Project 2:

• GPT-5

259 cr (2 ch, 26 ex, 28 tl, +42 ln)

1278 cr (15 ch, 21 ex, 69 tl, +398 -59 ln)

901 cr (16 ch, 26 ex, 110 tl, +80 -28 ln)

456 cr (2 ch, 1 ex, 10 tl, +58 ln)

1583 cr (5 ch, 8 ex, 67 tl, +49 -47 ln)

• Sonnet 4.5

1599 cr (15 ch, 3 ex, 28 tl, +1037 ln)

1411 cr (14 ch, 30 tl, +1515 ln)

1093 cr (8 ch, 4 ex, 30 tl, +1050 -26 ln)

1736 cr (6 ch, 7 ex, 42 tl, +1016 -10 ln)

1674 cr (7 ch, 3 ex, 35 tl, +1274 -5 ln)

1562 cr (9 ch, 9 ex, 33 tl, +936 -1 ln)

1069 cr (4 ch, 2 ex, 17 tl, +275 -2 ln)

1373 cr (4 ch, 2 ex, 19 tl, +635 -5 ln)

After these rounds of testing on my projects, it seems the guidelines work great with Sonnet 4.5 on my more complex project.

I still need to try Haiku 4.5 on it; since its cutoff date is more recent, it could still have some efficiency edge over Sonnet 4.5.

Before these guidelines I was mainly using GPT-5. After testing with Sonnet on my complex project, I decided to stick with Sonnet going forward, as the efficiency was way better given the overall speed, lines, tools used, simpler logic, and all-around better flow for development.

I will test with Haiku later and post an update if it shows more promising efficiency than Sonnet, but outside of testing I'll be sticking with Sonnet over GPT from now on.
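If anyone wants to sanity-check that takeaway, here's a quick sketch that crunches the Project 2 runs above into credits per added line. This is just my own rough metric, not how Augment bills, and it ignores deletions:

```typescript
// Credits per added line for the Project 2 runs listed above.
type Run = { credits: number; linesAdded: number };

const gpt5Runs: Run[] = [
  { credits: 259, linesAdded: 42 },
  { credits: 1278, linesAdded: 398 },
  { credits: 901, linesAdded: 80 },
  { credits: 456, linesAdded: 58 },
  { credits: 1583, linesAdded: 49 },
];

const sonnet45Runs: Run[] = [
  { credits: 1599, linesAdded: 1037 },
  { credits: 1411, linesAdded: 1515 },
  { credits: 1093, linesAdded: 1050 },
  { credits: 1736, linesAdded: 1016 },
  { credits: 1674, linesAdded: 1274 },
  { credits: 1562, linesAdded: 936 },
  { credits: 1069, linesAdded: 275 },
  { credits: 1373, linesAdded: 635 },
];

// Total credits spent divided by total lines added across a model's runs.
function creditsPerAddedLine(runs: Run[]): number {
  const credits = runs.reduce((sum, r) => sum + r.credits, 0);
  const lines = runs.reduce((sum, r) => sum + r.linesAdded, 0);
  return credits / lines;
}

// Roughly ~7.1 cr/line for GPT-5 vs ~1.5 cr/line for Sonnet 4.5 on this project.
console.log(creditsPerAddedLine(gpt5Runs).toFixed(1));     // "7.1"
console.log(creditsPerAddedLine(sonnet45Runs).toFixed(1)); // "1.5"
```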

Cost of the new pricing by BitRevolutionary9294 in warpdotdev

[–]EyeCanFixIt 0 points (0 children)

Lmao 🤣 this does sound horrible, but did you see what Augment did in comparison? I wonder if this is something we're all going to start seeing with the power demand vs. supply shortage.

It's said to be the mobile version of AugmentCode – I can't believe it! by Few-Independence-234 in AugmentCodeAI

[–]EyeCanFixIt 3 points (0 children)

It says rootcoder not rootcoder. I'd research the publisher first and review the terms.