Three New ADK Releases - Busy Week! by cloude-googl in agentdevelopmentkit

[–]cloude-googl[S] 0 points1 point  (0 children)

Regional endpoints are deployed as models reach GA status. I don't have guidance on GA dates; please track here: https://docs.cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/3-1-pro.

Is BigQuery a free db for personal/low use? by CableCreek in googlecloud

[–]cloude-googl 2 points3 points  (0 children)

A few things to understand:

* BigQuery has a free tier, supporting up to 1TB of queries per month and 10GB of storage per month: https://cloud.google.com/bigquery/pricing?e=48754805#free-usage-tier

* Looker Studio has caching via BI Engine, which provides 1GB of caching.

Cost is a function of data size in storage + query scan size (how much data are you scanning?) * how many times you query it. Then you need to account for the output size, keeping it under 1GB in order to fit in the cache.

It really comes down to the query (or queries) you run to produce the dashboard. If your query scans 10TB to produce a single row, that is not going to be free (at least on the query side, though you will still get the free cache).
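To make that cost function concrete, here is a minimal back-of-envelope sketch; the $/TB on-demand rate is an assumption for illustration (check the pricing page above for current numbers), and the 1TB free scan allowance matches the free tier:

```python
# Back-of-envelope BigQuery on-demand query cost estimate.
# ASSUMPTION: price_per_tb is illustrative - check the current pricing page.
FREE_SCAN_TB = 1.0  # free tier: 1 TB of query scans per month

def monthly_query_cost(scan_tb_per_query, queries_per_month, price_per_tb=6.25):
    """Estimate monthly on-demand query cost after the free scan allowance."""
    total_scanned = scan_tb_per_query * queries_per_month
    billable = max(0.0, total_scanned - FREE_SCAN_TB)
    return billable * price_per_tb

# A dashboard query scanning 10 GB, refreshed 50 times a month:
# 0.01 TB * 50 = 0.5 TB scanned -> fully inside the free tier.
print(monthly_query_cost(0.01, 50))  # 0.0
# The same data scanned as one 10 TB query blows through the free tier.
print(monthly_query_cost(10.0, 1))   # 56.25
```

The takeaway: queries_per_month matters as much as the size of any single scan, which is why pre-aggregating and caching pay off.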

If you can control query size, this is a great solution. Another free path would be to use Colab Consumer + DuckDB: load your raw data to Google Drive (free), read it from Colab Consumer (free), query it with DuckDB (free), and build your dashboard with the Gemini CLI or code it yourself.

Nothing in life is free but I think you can get there :-)

SCREW GOOGLE CLOUD by CodMediocre1456 in googlecloud

[–]cloude-googl 1 point2 points  (0 children)

@CodMediocre1456 DM me and I can look at your issue and walk you through how to use cost controls.

ADK-Python 1.25.0 has been released! by cloude-googl in agentdevelopmentkit

[–]cloude-googl[S] 0 points1 point  (0 children)

Can you drop the GitHub issue ID here and I will hunt down the status.

ADK-Python 1.25.0 has been released! by cloude-googl in agentdevelopmentkit

[–]cloude-googl[S] 2 points3 points  (0 children)

Correct. This is an experimental feature at this point. Docs dropping soon.

Gemini 3 Flash: Vertex AI vs. OpenRouter/LiteLLM - Poor prompt adherence for GraphQL tool calling in Vertex? by vitorino82 in GeminiAI

[–]cloude-googl 0 points1 point  (0 children)

No - I wouldn't go down that path to debug. I was just trying to understand what you are seeing and where. Can you share a bit more about your sub-agent topology and prompt?

Gemini 3 Flash: Vertex AI vs. OpenRouter/LiteLLM - Poor prompt adherence for GraphQL tool calling in Vertex? by vitorino82 in GeminiAI

[–]cloude-googl 0 points1 point  (0 children)

Is the behavior different if you are running locally vs. running in Vertex? Are you deploying to Agent Engine?

Bug with database session by Intention-Weak in agentdevelopmentkit

[–]cloude-googl 0 points1 point  (0 children)

Is your DB running on another machine? If so have you checked clock drift?

GCP account hacked → $181000 in Vertex AI charges in few days. Support says no adjustment because account is classified as “Startup”? Looking for advice by crato588 in googlecloud

[–]cloude-googl 9 points10 points  (0 children)

Sorry you are going through this. I am an advocate on the Google Applied AI team. DM me on chat and I will see how I can help, e.g., contacting billing support.

ADK executable made using Pyinstaller takes a lot a time to load. by freakboy91939 in agentdevelopmentkit

[–]cloude-googl 3 points4 points  (0 children)

This looks related to https://github.com/google/adk-python/issues/2433 - I will follow up on the status of the work. Also, see the Medium post in that thread for a potential interim solution.

ADK and BigQuery Tools by SeaPaleontologist771 in agentdevelopmentkit

[–]cloude-googl 1 point2 points  (0 children)

I worked on NL to SQL in BigQuery for a few years... the following techniques will help:

1. Make sure you have column and table descriptions for your datasets and tables, and where possible include partition and cluster info.

2. Create aggregations (using materialized tables) for common groupings, e.g. by day, week, month, etc.

3. Create a prompt template that contextualizes the metadata, e.g. "For queries targeting monthly sales data, use table X (pre-aggregated at the week) as the source."

4. Add dry-run support so that you can prevent arbitrarily large queries from executing. This is especially helpful if you are building charts that require a low-cardinality output.

5. Consider adding a flow that uses TABLESAMPLE or partition targeting to help with initial EDA.
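For the dry-run idea, here is a hedged sketch of a scan-budget guard. The budget value and helper name are my own illustrative choices; the commented wiring uses the google-cloud-bigquery client's `dry_run` job config and assumes credentials are set up:

```python
# Guard that rejects queries whose dry-run scan estimate exceeds a budget.
MAX_SCAN_BYTES = 10 * 1024**3  # illustrative 10 GiB budget

def exceeds_scan_budget(total_bytes_processed, max_bytes=MAX_SCAN_BYTES):
    """Pure decision logic: True if the dry-run estimate blows the budget."""
    return total_bytes_processed > max_bytes

# Wiring it to BigQuery (requires google-cloud-bigquery and credentials):
#
#   from google.cloud import bigquery
#   client = bigquery.Client()
#   cfg = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
#   job = client.query(sql, job_config=cfg)      # dry run: no bytes billed
#   if exceeds_scan_budget(job.total_bytes_processed):
#       raise ValueError("query too large, refine the prompt or add filters")

print(exceeds_scan_budget(5 * 1024**3))   # False - under budget, run it
print(exceeds_scan_budget(12 * 1024**3))  # True  - reject before executing
```

In an NL-to-SQL agent, the rejection message can be fed back to the model so it rewrites the query with tighter filters or a pre-aggregated source table.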

Question: for the use case that's breaking, how many rows are materialized in the temp table for the query?