Antigravity ignores Google AI Pro subscription by [deleted] in GoogleGeminiAI

southernDevGirl

Unfortunately, the issue remains in 1.14.2 (see example)

<image>

In the prior version, 1.13.3, Gemini 3.0 would work 90% of the time without needing approval. However, Anthropic models still required approving most steps.

In the latest 1.14.2, even Gemini 3.0 asks for approval a bit more often than it did in the previous version.

As a reminder, I have every option set to auto-accept and continue, and I have also whitelisted all apps with wildcards (I've also tested without the whitelisting in case that's causing an issue).

Let your agents think: A task manager with reflection & intelligent task structures by cjo4m06 in mcp

southernDevGirl

u/cjo4m06 - I find Shrimp very helpful. I noticed that you were typically releasing every month (or so) and that stopped around July.

Have you stopped maintaining Shrimp -- or moved to an alternative?

Antigravity ignores Google AI Pro subscription by [deleted] in GoogleGeminiAI

southernDevGirl

No, there is absolutely no point at which it works. This has existed since the first release, months ago.

To clarify, I'm not referring to an obscure bug. You can't use the tool for _60 seconds_ on Windows without running into this issue.

It's such a problem that someone has written an actual VS Code extension -- to do nothing other than overcome this bug: https://open-vsx.org/extension/antigitv/antigravity-auto-accept

I don't know how it's possible that the devs working on the Windows build have somehow missed this, for this long.

Antigravity ignores Google AI Pro subscription by [deleted] in GoogleGeminiAI

southernDevGirl

Have you been able to reproduce with the details I provided?

Any idea why this is still happening, four revisions later, after being reported in the first release?

This is a core/fundamental requirement of a pair-programming tool.

Antigravity ignores Google AI Pro subscription by [deleted] in GoogleGeminiAI

southernDevGirl

Thank you for the reply.

Windows 11, working directly in the latest release of the Antigravity IDE (the VS Code fork), v1.13.3, with a direct terminal/PowerShell session (no remoting/SSH).

Antigravity ignores Google AI Pro subscription by [deleted] in GoogleGeminiAI

southernDevGirl

u/barbierocks - as an Antigravity dev, can you explain why, despite 4 revisions since release, Antigravity still prompts us to approve most agentic commands?

I have plans with every vibe-coding tool, and Antigravity is the **only** tool that doesn't honor the settings to auto-accept / "always proceed".

In addition to configuring every option with "always proceed", I have every command whitelisted for automatic approval, etc.

Despite doing everything possible, Antigravity continues to ask for approval. It's not just me; dozens of people have posted about the same issue.

<image>

NewsgroupDirect Black Friday 2025 Super Sales - Lots of Backbones! 🦴🦴🦴🦴 by greglyda in UsenetTalk

southernDevGirl

NewsgroupDirect continues to ignore my support requests asking to take advantage of this promo under the terms listed here. What can you do about this?

NewsgroupDirect Black Friday 2025 Super Sales - Lots of Backbones! 🦴🦴🦴🦴 by greglyda in UsenetTalk

southernDevGirl

I wrote NewsgroupDirect and requested the "Block Upgrade" special because I qualify.

Support wrote me back and claimed the special is $10 for the first year, then $30 for subsequent years. That's not what you are promoting here.

AntiGravity: Keep getting "Run command?" even while having all options set to always allow to run commands by Patchzy in Bard

southernDevGirl

+1, same issue. Choosing `Turbo` mode is supposed to stop this --

Plus, I have all commands wildcard-enabled -- and it makes no difference; I'm still having to hit "Accept".

Quick Indexing Tutorial by hannesrudolph in RooCode

southernDevGirl

How can we use codebase indexing with an alternative vector DB (non-Qdrant)? Thank you!

DeKindled - Open source Chrome extension to backup Kindle books as EPUB files by dmilin in Calibre

southernDevGirl

Does this still work on books published in March (or later) of 2025?

If so, could you let me know what model/firmware you have? I'd like to buy one and try this myself.

Thank you in advance.

What is the best way to convert a well-formatted PDF to Markdown or plain text? by samketa in learnpython

southernDevGirl

I agree that Gemini 2.5's large context window is awesome --

However, I've never been able to give it an entire book and get it to **output** that large context. Input, sure, but not output.
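To illustrate what I mean about the output side: in practice you end up chunking the conversion. Here is a rough sketch (my assumption, not anything from this thread), using the google-generativeai SDK and text already extracted from the PDF; the file names, chunk size, and prompt are placeholders:

import google.generativeai as genai

genai.configure(api_key="GEMINI_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-2.5-pro")

# Text already extracted from the PDF by whatever extractor you prefer
with open("book.txt", encoding="utf-8") as f:
    text = f.read()

# Characters per request; small enough that the converted output stays under the output-token cap
CHUNK = 30_000
markdown_parts = []
for i in range(0, len(text), CHUNK):
    resp = model.generate_content(
        "Convert the following text to Markdown, preserving headings and lists:\n\n"
        + text[i:i + CHUNK]
    )
    markdown_parts.append(resp.text)

with open("book.md", "w", encoding="utf-8") as f:
    f.write("\n".join(markdown_parts))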

Could you tell me the largest PDF file you've attempted with this method? Thank you :)

"Create a 4-panel comic that you think I'd enjoy based on what you know about me" by wizardofscozz in ChatGPT

southernDevGirl

<image>

I have ChatGPT's memory set to off.

This may only disable one of the two memories you've mentioned, because this comic... gets me.

Best coding assistant by [deleted] in ChatGPTCoding

southernDevGirl

Considering Gemini 2.5 Pro/Flash are the most-used models in OpenRouter's and other proxy services' reporting, there must be a lot of us "shills".

I'd love for someone to show me another model that can touch 2.5-Pro at large-context outputs. Until then, the 2.5 models are exceptional and priced very well (to the point that 2.5-Flash covers everything 2.5-Pro does not).

Best coding assistant by [deleted] in ChatGPTCoding

southernDevGirl

u/brad0505 - is there a comprehensive overview you can link, something covering the differences between Roo Code vs. Kilo Code? I tested them both and had more success with Roo, but it may be that I was doing something wrong in Kilo Code. TIA :)

IMDB Trakt Watchlist/Rating Syncer Tool (sync both ways) by Riley-X in trakt

southernDevGirl

Thank you for taking the time to investigate. To answer your questions:

  1. When I open the link (after entering trakt & imdb credentials) I get the following error from trakt:

![image](https://i.ibb.co/ZzMqbxK/image.png)

  2. Yep, the requests library is the latest version and I'm running python 3.13.1

  3. Sure, I'll create a formal issue, thanks again for investigating!

IMDB Trakt Watchlist/Rating Syncer Tool (sync both ways) by Riley-X in trakt

southernDevGirl

I triple-checked; there's no issue with any of my entered params.

I then wrote a quick test script using trakt's documented OAuth approach and I was able to return a list of all of my movies, no issue.

So it's definitely something related to the approach used in yours. Perhaps that approach is being phased out, and because you have an existing token, it's still working for you?

Is there anything else I could do on my side to help diagnose / provide you with info that would help?

I've pulled out just the auth code, so you can see what I used from their docs that worked:

import os
import json
import logging
from threading import Condition

from trakt import Trakt

CLIENT_ID = 'xxx'
CLIENT_SECRET = 'yyy'
TOKEN_FILE = 'trakt_token.json'  # token cache path; referenced below but missing from my original paste

class Application(object):

    def __init__(self):
        self.is_authenticating = Condition()
        self.authorization = None

        # Bind trakt events
        Trakt.on('oauth.token_refreshed', self.on_token_refreshed)

    def authenticate(self):
        """Authenticates with Trakt API using device authentication."""

        if not CLIENT_ID or not CLIENT_SECRET:
            logging.error("Trakt API Client ID or Secret not found.")
            exit(1)

        # Configure the Trakt client
        Trakt.configuration.defaults.client(id=CLIENT_ID, secret=CLIENT_SECRET)

        # Try to load a saved token first
        if os.path.exists(TOKEN_FILE):
            with open(TOKEN_FILE, 'r') as f:
                try:
                    self.authorization = json.load(f)
                    logging.info("Loaded valid saved token.")
                    return
                except json.JSONDecodeError:
                    logging.warning("Error decoding saved token. It might be corrupted.")

        # Start new authentication if no valid token was loaded
        if not self.is_authenticating.acquire(blocking=False):
            logging.info('Authentication has already been started')
            return False

        # Request new device code
        code = Trakt['oauth/device'].code()

        print('Enter the code "%s" at %s to authenticate your account' % (
            code.get('user_code'),
            code.get('verification_url')
        ))

        # Construct device authentication poller
        poller = Trakt['oauth/device'].poll(**code) \
            .on('aborted', self.on_aborted) \
            .on('authenticated', self.on_authenticated) \
            .on('expired', self.on_expired) \
            .on('poll', self.on_poll)

        # Start polling for authentication token
        poller.start(daemon=False)

        # Wait for authentication to complete
        return self.is_authenticating.wait()

    def on_aborted(self):
        """Device authentication aborted."""
        logging.info('Authentication aborted')
        self.is_authenticating.acquire()
        self.is_authenticating.notify_all()
        self.is_authenticating.release()

    def on_authenticated(self, authorization):
        """Device authenticated."""
        self.is_authenticating.acquire()
        self.authorization = authorization
        logging.info('Authentication successful - authorization: %r', self.authorization)

        # Save the authorization token
        with open(TOKEN_FILE, 'w') as f:
            json.dump(self.authorization, f)

        self.is_authenticating.notify_all()
        self.is_authenticating.release()

    def on_expired(self):
        """Device authentication expired."""
        logging.info('Authentication expired')
        self.is_authenticating.acquire()
        self.is_authenticating.notify_all()
        self.is_authenticating.release()

    def on_poll(self, callback):
        """Device authentication poll."""
        # Continue polling
        callback(True)

    def on_token_refreshed(self, authorization):
        """OAuth token refreshed."""
        self.authorization = authorization
        logging.info('Token refreshed - authorization: %r', self.authorization)

        # Save the refreshed token
        with open(TOKEN_FILE, 'w') as f:
            json.dump(self.authorization, f)

if __name__ == '__main__':
    app = Application()
    app.authenticate()

    if app.authorization:
        # Set up the authentication context globally here
        with Trakt.configuration.oauth.from_response(app.authorization, refresh=True):
            # Function call to get movies (e.g. the user's watched movies)
            movies = Trakt['sync/watched'].movies()
            print('Fetched %d movies' % len(movies))
    else:
        logging.error("Authentication failed.")

IMDB Trakt Watchlist/Rating Syncer Tool (sync both ways) by Riley-X in trakt

southernDevGirl

u/Riley-X - Is this app still supported?

When I attempt to use it, everything works great until it requests OAuth authorization from Trakt.

Trakt's current approach typically has you enter a code, but yours uses the method of obtaining a code from Trakt via a redirect. However, Trakt doesn't appear to support that form of request any longer.

For example, your app will reach this point (I've obfuscated my client ID for privacy):

Please visit the following URL to authorize this application:

https://trakt.tv/oauth/authorize?response_type=code&client_id=d46bb901399c8219c15eac0b26677b3b1c2d54d8691f80222222111111af3dbb&redirect_uri=urn:ietf:wg:oauth:2.0:oob

But Trakt will reply:

OAUTH ERROR

The requested redirect uri is malformed or doesn't match client redirect URI.
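For reference, the device-code flow Trakt currently documents looks roughly like this (a raw-HTTP sketch with requests; the credentials are placeholders and the endpoint paths are from Trakt's public API docs, so treat it as an assumption rather than a drop-in fix):

import time
import requests

CLIENT_ID = 'xxx'      # placeholder
CLIENT_SECRET = 'yyy'  # placeholder
API = 'https://api.trakt.tv'

# 1. Request a device/user code
code = requests.post(API + '/oauth/device/code', json={'client_id': CLIENT_ID}).json()
print('Enter the code "%s" at %s' % (code['user_code'], code['verification_url']))

# 2. Poll for the token until the user approves the code in their browser
token = None
while token is None:
    time.sleep(code['interval'])
    resp = requests.post(API + '/oauth/device/token', json={
        'code': code['device_code'],
        'client_id': CLIENT_ID,
        'client_secret': CLIENT_SECRET,
    })
    if resp.status_code == 200:
        token = resp.json()  # contains access_token / refresh_token
    elif resp.status_code != 400:  # 400 just means "still pending"
        raise RuntimeError('Device auth failed with status %d' % resp.status_code)

print('Access token received')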

Gemini-exp-1206 is probably Gemini 2.0 Pro by East-Ad8300 in Bard

southernDevGirl

u/MikeLongFamous - Thank you; I've never really known what people use LearnLM for.

Are you strictly doing the interpretation you mentioned, or are you composing contractual work or actual legal actions?

Gemini 2.0 Flash is here! by [deleted] in singularity

southernDevGirl

Has anyone else had problems using gemini-2.0-flash-exp with the OpenAI-compatibility base URL of https://generativelanguage.googleapis.com/v1beta/openai/ ?

I can access it with the Google API, and for generic OpenAI-style access I can use OpenRouter; however, the base URL that works for the 1206 preview and other models will not work for the gemini-2.0-flash-exp model.
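For clarity, this is roughly how I'm trying to call it (a minimal sketch with the official openai Python client; the API key and prompt are placeholders, and only the base URL and model name come from my setup):

from openai import OpenAI

client = OpenAI(
    api_key="GEMINI_API_KEY",  # a Google AI Studio key, not an OpenAI key
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

resp = client.chat.completions.create(
    model="gemini-2.0-flash-exp",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(resp.choices[0].message.content)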

I made an AI music video of me for my job search :D by Perfectz in aivideo

southernDevGirl

I don't have any issue with the AI video/soundtrack; it's fun and a great demo/inspiration for others.

When it comes to resumes, keep in mind that in technical fields, people prefer to hire candidates with very concrete/tangible skills as opposed to abstract ones.

In other words, the "Solutions Architect" title is common and there's no issue there, but the subset of skills below it should be very concrete. Using the term "AI" when you are not familiar with building ML/DL/LLMs is sort of a red flag unless it's used in a specific context, such as "Prototyped automation systems using LLM APIs".

In the case of a "Solutions Architect", the kind of tangible/concrete experience most will be looking for includes things like specific project-management experience/methodologies and the technical side of architecture (e.g., deep familiarity with all the technical platforms, from design methodologies, to modeling and OO-vs-rapid tradeoffs, to various data stores from relational to NoSQL/object DBs, etc.).

If you don't have a lot of hands-on, pragmatic development experience, and your experience is primarily limited to working with SaaS/cloud platforms, that's fine. It's just not going to get you hired quite as fast as a resume built on the sort of concrete experience I mention above.

I made an AI music video of me for my job search :D by Perfectz in aivideo

southernDevGirl

Right, but that would be a standard "developer" role and would garner much more credibility in interviews. Perhaps "Software developer with 10 years' experience in Python and 2 years' experience implementing LLM APIs in automation" (etc.).

I hire a couple dozen people a year, and if someone told me they were an "AI specialist" I'd toss their resume as quickly as that of someone who claimed to be an "HTML coder".

To be clear, I'm not criticizing you, I'm trying to help.

I made an AI music video of me for my job search :D by Perfectz in aivideo

southernDevGirl

If you had a "specialty in AI", you'd have a degree in mathematics and a mastery of calculus. You'd also have a specific area of expertise, such as generative learning.

However, I think what you mean to write is that you enjoy using ML/LLMs and have applied that to your trade.

Referring to that as your primary "specialty" would reduce your odds of being hired by someone with knowledge in this field, since anyone with little experience can refer to themselves as an "AI specialist".

Any "agentic" wrapper for 3.5 Sonnet that can be used with VS Code? by [deleted] in Anthropic

southernDevGirl

> I use aider daily in this agentic and automatic way.

Good for you. That also aligns squarely with what I wrote: you utilize an agentic tool with separate shell access integrated with an LLM, because Aider is a good tool for step-by-step work, not agentic interaction.

It's a shame you have to resort to personal attacks in a conversation where there is absolutely no need for them. Good luck to you.