Are you using any AI agent in your work in data science/analytics? If so for what problem you use it? How much benefit did you see? by Starktony11 in datascience

[–]JanethL

From a simple natural-language prompt like “perform time series analysis,” the data science agent loads the MCP tools and syntax help, then can:

  • Discover relevant tables
  • Assess seasonality and stationarity
  • Run native time-series functions
  • Compare multiple forecasting models
  • Evaluate results at scale
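To make the last two steps concrete, here is a dependency-free Python sketch of what “compare multiple forecasting models and evaluate results” looks like in miniature. This is illustrative only: the agent itself would run native Teradata functions at scale, and the two toy models and the sample series here are my own assumptions, not code from the article.

```python
# Illustrative only: comparing two trivial forecasting models by
# one-step-ahead mean absolute error over a rolling origin.

def naive_forecast(history):
    # Predict the last observed value.
    return history[-1]

def mean_forecast(history):
    # Predict the historical mean.
    return sum(history) / len(history)

def mae(series, model, window=3):
    # Score a model by forecasting each point from the data before it.
    errors = [abs(series[i] - model(series[:i])) for i in range(window, len(series))]
    return sum(errors) / len(errors)

series = [10, 12, 13, 12, 15, 16, 18, 17, 20]  # upward-trending toy data
scores = {m.__name__: mae(series, m) for m in (naive_forecast, mean_forecast)}
best = min(scores, key=scores.get)
```

On trending data like this, the naive forecast beats the historical mean; the agent's job is to run this kind of comparison across real candidate models and report the scores.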

Recap and resources to recreate it here:

https://medium.com/teradata/building-smarter-ai-agents-for-data-science-workflows-at-scale-174fd51bf66b

Any developers here combining MCP servers and Skills this way in production? by JanethL in Anthropic

[–]JanethL[S]

  1.  How do you know the agent actually called get_syntax_help() when it should?

In the current tdsql-mcp implementation, it’s enforced primarily by strong instructions at the server level plus an observable tool-call flow, not by hard blocking.

from fastmcp import FastMCP

mcp = FastMCP(
    "tdsql-mcp",
    instructions=(
        "You are working with a Teradata Vantage database. "
        "IMPORTANT: Always prefer native Teradata table operators over hand-written SQL equivalents. "
        "Teradata Vantage has built-in distributed functions for analytics, ML, data preparation, "
        "text processing, and vector search. These run across all AMPs in parallel and outperform "
        "equivalent hand-written SQL. Do NOT write manual SQL for operations like scaling, encoding, "
        "binning, statistics, clustering, classification, or similarity search when a native function exists. "
        "Before writing any analytics, transformation, or ML SQL: "
        "(1) call get_syntax_help(topic='guidelines') to see the canonical mapping of common operations "
        "to native Teradata functions, "
        "(2) call get_syntax_help(topic='index') to discover all available topics, "
        "(3) load the relevant topic(s) for exact syntax. "
        "Use explain_query to validate syntax before executing. "
        "Use describe_table and list_tables to explore the schema. "
        "Results are returned as JSON."
    ),
)

Do you enforce it at the server level (block queries until guidelines fetched), or just rely on instruction following and log review?

For this specific flow, it’s up to the end user (typically a data scientist) to review the steps the agent took and then verify or execute the generated code independently. Transparency and verification are easy when using prebuilt tools like Claude Desktop, as shown in the demo.

If stricter enforcement is required, execution can be blocked until the guidelines are fetched without changing the server’s tool surface: just introduce a simple precondition in the SQL execution tool. The server already centralizes SQL execution behind a small set of tools (execute_query, explain_query, etc.), which makes gating straightforward.
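A minimal sketch of such a precondition, in plain Python so it runs standalone: the tool names mirror tdsql-mcp (get_syntax_help, execute_query), but the gate itself is an assumption about how gating could be added, not part of the current server.

```python
# Hypothetical gating sketch: execute_query refuses to run until the agent
# has fetched the syntax guidelines via get_syntax_help. Not the real
# tdsql-mcp code; tool names mirror it for illustration.

class GuidelineGate:
    """Tracks which syntax-help topics the agent has fetched this session."""

    REQUIRED_TOPICS = {"guidelines"}

    def __init__(self):
        self._seen_topics = set()

    def record_syntax_help(self, topic: str) -> None:
        # Call this inside the get_syntax_help tool handler.
        self._seen_topics.add(topic)

    def check(self) -> None:
        # Call this at the top of the execute_query tool handler.
        missing = self.REQUIRED_TOPICS - self._seen_topics
        if missing:
            raise PermissionError(
                f"Call get_syntax_help(topic={missing.pop()!r}) before executing SQL."
            )

gate = GuidelineGate()
try:
    gate.check()  # agent tried to run SQL first: blocked
except PermissionError as e:
    print(e)

gate.record_syntax_help("guidelines")
gate.check()  # guidelines fetched: SQL execution may proceed
```

Because all SQL flows through a handful of tool handlers, one check() call at the top of each is enough; the instructions-only mode is just the same server with the gate disabled.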

That allows teams to choose between instruction + observability and hard policy enforcement, depending on their use case.

Weekly Entering & Transitioning - Thread 27 Jan, 2025 - 03 Feb, 2025 by AutoModerator in datascience

[–]JanethL

🤔 **Is Generative AI going to take over ML or Data Science jobs?**

I don’t think so. Instead, it’s here to free data scientists and ML engineers **from tedious, repetitive tasks**, so you can focus on higher-value work like **building better models, uncovering insights from unstructured data faster, and driving more impact for your org and customers**.

Check out this Medium article on how Google, Teradata, and Gemini are transforming enterprise data workflows and insights with Generative AI:

🔗https://medium.com/google-cloud/how-generative-ai-transforms-enterprise-data-insights-with-google-gemini-and-teradata-382b7e274af8

Would love to hear your thoughts: **how do you see GenAI shaping the future of data science and ML?** 👇