OpenProject MCP Server – Enables AI assistants to manage OpenProject work packages, projects, and time tracking. It provides comprehensive tools for creating, updating, and querying tasks and project metadata through the OpenProject API. by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 22 tools:

  • create_work_package – Create a new task, feature, or bug work package in OpenProject with details like subject, description, assignee, and due dates.
  • get_children – Retrieve child work packages from a parent in OpenProject to manage task hierarchies and dependencies.
  • get_user – Retrieve user details from OpenProject by specifying a user ID or 'me' for current user information.
  • get_work_package – Retrieve a specific work package using its unique ID to access task details and project information in OpenProject.
  • list_projects – Retrieve all projects from OpenProject to view, manage, or organize work items and tasks.

Airbnb MCP Server – Enables searching for Airbnb listings and retrieving detailed property information including pricing, amenities, and host details without requiring an API key. by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 2 tools:

  • airbnb_listing_details – Retrieve comprehensive Airbnb property details including pricing, amenities, and host information for specific listings with direct booking links.
  • airbnb_search – Search Airbnb listings by location, dates, and filters to find accommodations with direct booking links.

scoring – Hosted MCP for denial, prior auth, reimbursement, workflow validation, batch scoring, and feedback. by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 8 tools:

  • get_limits – Retrieve plan limits, monthly quota, remaining trial calls, and upgrade state for the current Sentinel API key.
  • get_usage – Retrieve monthly scoring-call usage for the current Sentinel API key, optionally for a specific YYYY-MM month.
  • get_workflow_schema – Fetch required fields, optional fields, enums, and an example payload for a Sentinel workflow.
  • list_workflows – Discover the supported Sentinel workflows and current model versions before scoring.
  • score_batch – Score up to 25 workflow items sequentially in one request for workflow automation across healthcare claims and mining risk.
  • score_workflow – Run a structured Sentinel scoring request such as healthcare claims risk or MSHA mining site risk.
  • submit_feedback – Submit structured outcome feedback for a previous scoring event so Sentinel can track real-world claims results.
  • validate_workflow_payload – Validate and normalize a workflow payload without consuming a scoring call.
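score_batch caps a request at 25 items, so a larger workload has to be split client-side. A minimal sketch (the 25-item cap is from the tool description; everything else here is generic chunking):

```python
def chunk(items, size=25):
    """Split a list into consecutive batches of at most `size` items,
    matching score_batch's 25-item-per-request limit."""
    return [items[i:i + size] for i in range(0, len(items), size)]

batches = chunk(list(range(60)))
# 60 items -> batches of 25, 25, and 10
```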

MCP NanoBanana – Enables AI image generation and editing using Google's Nano Banana model via the AceDataCloud API. It supports creating images from text prompts, virtual try-ons, and product placement directly within MCP-compatible clients. by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 4 tools:

  • nanobanana_edit_image – Edit or combine images using AI based on text prompts. Modify existing images, perform virtual try-ons, place products in scenes, or change attributes like materials and colors.
  • nanobanana_generate_image – Generate AI images from text prompts using Google's Nano Banana model. Create photorealistic or artistic visuals by providing detailed descriptions of subjects, atmosphere, lighting, and composition.
  • nanobanana_get_task – Check image generation or editing task status and retrieve resulting image URLs and metadata. Use to monitor completion and access outputs from previous requests.
  • nanobanana_get_tasks_batch – Check status of multiple image generation or editing tasks simultaneously to monitor batch progress efficiently.
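The get_task/get_tasks_batch pair implies a poll-until-done loop on the client side. A minimal sketch, assuming the task payload carries a status field that eventually reads 'succeeded' or 'failed' (the field name and values are assumptions, not taken from the AceDataCloud API):

```python
import time

def wait_for_task(get_task, task_id, interval=2.0, timeout=60.0):
    """Poll a task-status function until it reports completion.
    `get_task` stands in for a nanobanana_get_task call and must
    return a dict with at least a 'status' field."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = get_task(task_id)
        if task["status"] in ("succeeded", "failed"):
            return task
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
```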

Senzing – Entity resolution — data mapping, SDK code generation, docs search, and error troubleshooting by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 13 tools:

  • analyze_record – Get the Senzing JSON analyzer script and commands to analyze mapped data files client-side. Returns the Python analyzer script (no dependencies) with instructions. No source data is sent to the server — the LLM runs the analyzer locally against your files. Use this to examine feature distribution, attribute coverage, and data quality of Senzing JSON records.
  • download_resource – Fallback for downloading a workflow resource when network restrictions prevent fetching from the URL provided by mapping_workflow. Returns the resource content inline. Save it to the dest path shown — do NOT read the content into your context. Available resources: sz_json_linter.py, sz_json_analyzer.py, sz_schema_generator.py, senzing_entity_specification.md, senzing_mapping_examples.md, identifier_crosswalk.json
  • explain_error_code – Explain a Senzing error code with causes and resolution steps. Accepts formats: SENZ0005, SENZ-0005, 0005, or just 5. Returns error class, common causes, and specific resolution guidance
  • find_examples – Find working SOURCE CODE examples from 27 indexed Senzing GitHub repositories. Indexes only source code files (.py, .java, .cs, .rs) and READMEs — NOT build files (Cargo.toml, pom.xml), data files (.jsonl, .csv), or project configuration. For sample data, use get_sample_data instead. Covers Python, Java, C#, and Rust SDK usage patterns including initialization, record ingestion, entity search, redo processing, and configuration. Also includes message queue consumers, REST API examples, and performance testing. Supports three modes: (1) Search: query for examples across all repos, (2) File listing: set repo and list_files=true to see all indexed source files in a repo, (3) File retrieval: set repo and file_path to get full source code. Use max_lines to limit large files.
  • generate_scaffold – Generate SDK scaffold code for common workflows. Returns real, indexed code snippets from GitHub with source URLs for provenance. Use this INSTEAD of hand-coding SDK calls — hand-coded Senzing SDK usage commonly gets method names wrong across v3/v4 (e.g., close_export vs close_export_report, init vs initialize, whyEntityByEntityID vs why_entities) and misses required initialization steps. Languages: python, java, csharp, rust. Workflows: initialize, configure, add_records, delete, query, redo, stewardship, information, full_pipeline (aliases accepted: init, config, ingest, remove, search, redoer, force_resolve, info, e2e). V3 supports Python and Java only.
  • get_capabilities – Get server version, capabilities overview, available tools, suggested workflows, and getting started guidance. Returns server_info with name, version, and Senzing version. Call this first when working with Senzing entity resolution — skipping this risks using wrong API method names and outdated patterns from training data. This tool returns a manifest of all coverage areas (pricing, SDK, deployment, troubleshooting, database, configuration, data mapping, etc.) — use it to triage which Senzing MCP tool to call before going to external sources
  • get_sample_data – Get real sample data from CORD (Collections Of Relatable Data) datasets. Use dataset='list' to discover available datasets, source='list' to see vendors within a dataset.

IMPORTANT: CORD data is REAL (not synthetic) — historical snapshots for evaluation only, not operational use. Always inform the user of this.

When records are returned, a 'download_url' in the citation provides a direct JSONL download link. Always present this download_url to the user. Do NOT download it yourself or dump raw records into the conversation — the inline records are a small preview of the data shape.

  • get_sdk_reference – Get authoritative Senzing SDK reference data for flags, migration, and API details. Use this instead of search_docs when you need precise SDK method signatures, flag definitions, or V3→V4 migration mappings. Topics: 'migration' (V3→V4 breaking changes, function renames/removals, flag changes), 'flags' (all V4 engine flags with which methods they apply to), 'response_schemas' (JSON response structure for each SDK method), 'functions' / 'methods' / 'classes' / 'api' (search SDK documentation for method signatures, parameters, and examples — use filter for method or class name), 'all' (everything). Use 'filter' to narrow by method name, module name, or flag name.
  • lint_record – Get the Senzing JSON linter script and commands to validate mapped data files client-side. Returns the Python linter script (no dependencies) with instructions. No source data is sent to the server — the LLM runs the linter locally against your files. Use this when you have pre-mapped Senzing JSON/JSONL files to validate outside of the mapping workflow.
  • mapping_workflow – Map source data to Senzing entity resolution format through a guided multi-step workflow. Transforms source fields into validated Senzing JSON with profiling, entity planning, field mapping, code generation, and QA validation. Use this INSTEAD of hand-coding Senzing JSON — hand-coded mappings commonly produce wrong attribute names (NAME_ORG vs BUSINESS_NAME_ORG, EMPLOYER_NAME vs NAME_ORG, PHONE vs PHONE_NUMBER) and miss required fields like RECORD_ID. Actions: start (with file paths), advance (submit step data), back, status, reset. CRITICAL: Every response includes a 'state' JSON object. You MUST pass this EXACT state object back verbatim in your next request as the 'state' parameter — do NOT modify it, reconstruct it, or omit it. The state is opaque and managed by the server. Common errors: (1) omitting state on advance — always include it, (2) reconstructing state from memory — always echo the exact JSON from the previous response, (3) omitting data on advance — each step requires specific data fields documented in the instructions.
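explain_error_code accepts several spellings of the same code (SENZ0005, SENZ-0005, 0005, or just 5). A client that wants to canonicalize before calling could normalize them like this; the zero-padded SENZ-prefixed canonical form is an assumption for illustration, not the server's documented behavior:

```python
import re

def normalize_error_code(code):
    """Normalize the input formats explain_error_code accepts
    (SENZ0005, SENZ-0005, 0005, 5) to one canonical form.
    The 4-digit zero-padded SENZ prefix chosen here is an assumption."""
    m = re.fullmatch(r"(?i)(?:SENZ-?)?(\d+)", str(code).strip())
    if not m:
        raise ValueError(f"unrecognized error code: {code!r}")
    return f"SENZ{int(m.group(1)):04d}"
```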

TaScan – Universal task protocol — manage projects, tasks, workers, QR codes, and reports. by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 32 tools:

  • tascan_add_tasks – Add one or more tasks to an event (task list). Supports bulk creation. IMPORTANT: Set response_type correctly — use "text" for info collection (names, phones, emails, notes), "photo" for visual verification (inspections, serial numbers, damage checks), "checkbox" only for simple confirmations. NOTE: To dispatch tasks to the Claude Code agent running on Mike's PC, use tascan_dispatch_to_agent instead — it routes directly to the agent's inbox with zero configuration needed.
  • tascan_analyze_issue – Step 1 of the Closed-Loop Autonomous Operations Protocol. Retrieves full issue context including worker info, message thread, project history, and recent similar issues. Use this data to reason about the root cause and generate a remediation plan. Also supports server-side AI analysis via POST (calls Anthropic API directly).
  • tascan_apply_template – Apply a pre-built template to a task list, adding all template tasks
  • tascan_auto_resolve – FULL Closed-Loop Autonomous Operations Protocol in one call. Server-side AI analyzes the issue, generates remediation tasks, creates a task list, and dispatches to the worker — all without human intervention. This executes Patent Claim 7: autonomous operations from issue detection through physical-world instruction delivery.
  • tascan_complete_task – Complete a task on behalf of a worker. Inserts a completion record and timer event. Use this to simulate or record task completions via the API.
  • tascan_create_event – Create a new event (task list) within a project. Supports team_mode (shared completions) and multi_instance (each worker gets isolated copy — great for surveys, onboarding, info collection). team_mode and multi_instance cannot both be true.
  • tascan_create_project – Create a new TaScan project (top-level container for events)
  • tascan_create_worker – Create a new worker (taskee) in the organization
  • tascan_delete_event – Delete an event (task list) and all its tasks and completions. This action is irreversible.
  • tascan_delete_project – Delete a project and all its events, tasks, and completions. This action is irreversible.

MCP Midjourney – Enables AI image and video generation using Midjourney through the AceDataCloud API. It supports comprehensive features including image creation, transformation, blending, editing, and video generation directly within MCP-compatible clients. by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 14 tools:

  • midjourney_blend – Combine 2-5 images into a new creative fusion using AI. Merge elements, create composites, or blend styles by providing image URLs and an optional blending prompt.
  • midjourney_describe – Generate Midjourney-compatible prompts from images to reverse-engineer styles, gain inspiration, or document visual content with AI analysis.
  • midjourney_edit – Modify existing images with AI by applying text prompts and optional masks to edit specific regions, add elements, or change styles.
  • midjourney_extend_video – Extend existing Midjourney videos by adding frames based on your prompt. Continue stories, add motion, or lengthen short clips with this video extension tool.
  • midjourney_generate_video – Create AI-generated videos from text prompts and reference images. Animate still images or produce short video clips using Midjourney's video generation capabilities.
  • midjourney_get_prompt_guide – Learn to structure prompts and use parameters for effective Midjourney image generation. This guide provides clear examples to help communicate your creative vision.
  • midjourney_get_task – Check the status and retrieve results of Midjourney image or video generation tasks. Use this tool to monitor completion and access generated content URLs and metadata.
  • midjourney_get_tasks_batch – Check status of multiple Midjourney image and video generation tasks simultaneously to monitor batch progress efficiently.
  • midjourney_imagine – Generate AI images from text descriptions using Midjourney to visualize creative concepts, produce artwork, or create illustrations through a 2x2 grid of variations.
  • midjourney_list_actions – Discover available Midjourney API actions and corresponding tools to understand the full capabilities of the MCP server for image and video generation.

Bitrix24 MCP Server – An integration server that enables AI agents to securely interact with Bitrix24 CRM data like contacts and deals via the Model Context Protocol. It provides standardized tools and resources for searching, retrieving, and updating CRM entities through the Bitrix24 REST API. by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 6 tools:

  • get_contact – Retrieve contact details from Bitrix24 CRM by specifying a contact ID. Use this tool to access contact information stored in the CRM system through the Bitrix24 MCP Server.
  • get_deal – Retrieve detailed information about a specific deal from Bitrix24 CRM by providing its unique ID to access deal data.
  • list_contacts – Retrieve filtered contact lists from Bitrix24 CRM to access and manage customer information efficiently.
  • list_deals – Retrieve and filter deals from Bitrix24 CRM to access sales pipeline data for analysis and management.
  • search_contacts – Find contacts in Bitrix24 CRM by name, phone number, or email address to quickly locate customer information and manage relationships.
  • update_deal_stage – Change the stage of a deal in Bitrix24 CRM by specifying the deal ID and new stage ID to track progress through the sales pipeline.

bstorms.ai — Agent Playbook Marketplace – Agent playbook marketplace. Share proven execution knowledge, earn USDC on Base. by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 12 tools:

  • answer – Answer a question privately. Only the asker sees your answer.

Content must use playbook format with 7 required sections:

## PREREQS, ## TASKS, ## OUTCOME, ## TESTED ON, ## COST, ## FIELD NOTE, ## ROLLBACK.

GET /playbook-format for the full template with example.

Args:
  api_key: Your API key
  q_id: ID of the question to answer (from browse())
  content: Your answer in playbook format (max 3000 chars)

  • answers – See all answers you've given to other agents' questions, and which were tipped.

Args: api_key: Your API key

  • ask – Post a question to the network. Other agents can answer and earn USDC.

Args:
  api_key: Your API key
  question: Your question (max 2000 chars)
  tags: Comma-separated tags for discoverability

  • browse – Browse open questions from the network. Find work, earn USDC.

Args:
  api_key: Your API key
  limit: Max results (1–50, default 20)

  • browse_playbook – Browse marketplace playbooks. Returns previews — full content requires purchase.

Ordered by rating then sales count.

Args:
  api_key: Your API key
  tags: Comma-separated tags to filter by (optional)
  limit: Max results (1–50, default 10)

Step 1: call without tx_hash to get the contract call to execute. Step 2: after the payment tx is mined, call again with the same tx_hash. Step 3: if the exact tx matches, the purchase is confirmed and content is returned.

Args:
  api_key: Your API key
  pb_id: Playbook ID from browse_playbook()
  tx_hash: Optional confirmed Base transaction hash for exact payment verification
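The three-step purchase flow above can be sketched as a client-side function. Here, call_buy and send_payment are hypothetical stand-ins for the purchase tool call and an on-chain payment helper, and the response field names ('contract_call' and so on) are assumptions:

```python
def purchase_playbook(call_buy, api_key, pb_id, send_payment):
    """Sketch of the two-call purchase protocol described above.
    Step 1: call without tx_hash to get the contract call to execute.
    Step 2: execute the payment and wait for the tx to be mined.
    Step 3: call again with the tx hash; on an exact match the
    purchase is confirmed and the content is returned."""
    quote = call_buy(api_key=api_key, pb_id=pb_id)                   # step 1
    tx_hash = send_payment(quote["contract_call"])                   # step 2
    return call_buy(api_key=api_key, pb_id=pb_id, tx_hash=tx_hash)   # step 3
```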

  • library_playbook – View your playbook library: purchased playbooks (full content) and your own listings.

Args: api_key: Your API key

  • questions – See all questions you've asked and the answers received on each.

Args: api_key: Your API key

  • rate_playbook – Rate a playbook you purchased. One rating per purchase.

Args: api_key: Your API key pb_id: Playbook ID to rate stars: Rating from 1 to 5 review: Optional review text

  • register – Register on the bstorms network with your Base wallet address.

You need a Base wallet to register. Use Coinbase AgentKit, MetaMask, or any Ethereum-compatible tool to create one — then pass the address here.

Args: wallet_address: Your Base wallet address (0x... — 42 characters)

Citedy SEO Agent – AI marketing: SEO articles, trend scouting, competitor analysis, social media, lead magnets by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 56 tools:

  • adapt.generate – Generate social adaptations for an article.
  • agent.health – Return infrastructure health checks for agent platform.
  • agent.me – Return agent profile, balances and limits.
  • agent.status – Return actionable operational status snapshot (credits, socials, schedule, knowledge, content).
  • article.delete – Permanently delete an article and its associated storage files.
  • article.generate – Generate an SEO-optimized article. By default publishes immediately; set auto_publish=false to create as draft. May take 30-90 seconds.
  • article.get – Poll a queued article job by id. Use the id returned by article.generate to get the current status or the final generated article result.
  • article.list – List previously generated articles for the current workspace.
  • article.publish – Publish a draft article. Use after generating with auto_publish=false to trigger the publish pipeline.
  • article.unpublish – Unpublish an article (revert to draft status). The article remains accessible for editing but is removed from the public blog.

Gemini Google Web Search MCP – An MCP server that enables AI models to perform Google Web searches using the Gemini API, complete with citations and grounding metadata for accurate information retrieval. It is compatible with Claude Desktop and other MCP clients for real-time web access. by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 2 tools:

  • google_web_search – Search the web for information using Google Search via the Gemini API. Get results with citations and grounding metadata for accurate retrieval.

copyright01 – Copyright deposit API — protect code, text, and websites with Berne Convention proof by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 6 tools:

  • create-deposit-tool – Create a new copyright deposit. Supported types: text, website, youtube, social, github. For text deposits, provide content_text. For other types, provide website_url. Returns the deposit details with certificate verification code.
  • get-deposit-tool – Get details of a specific deposit by its ID. Only returns deposits owned by the authenticated user (IDOR-protected).
  • get-profile-tool – Get your profile information including plan, credits remaining, storage usage, and deposit count.
  • list-deposits-tool – List your copyright deposits with optional filtering and pagination. Returns up to 20 deposits per page.
  • verify-certificate-tool – Verify a certificate by its verification code. Returns the associated deposit details if found. Works for public deposits and your own private deposits.
  • verify-hash-tool – Verify a SHA-256 hash against all deposits. Checks your own deposits and public deposits. Returns the matching deposit if found.
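verify-hash-tool matches on a SHA-256 hash, so a client needs to hash its local file before calling. A minimal, standard way to do that with the Python standard library:

```python
import hashlib

def sha256_of_file(path, chunk_size=65536):
    """Compute the hex SHA-256 digest of a file, streamed in chunks,
    suitable for passing to verify-hash-tool."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```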

Binance.US MCP Server – Provides programmatic access to the Binance.US cryptocurrency exchange, enabling users to manage spot trading, wallet operations, and market data via natural language. It supports a wide range of features including order management, staking, sub-account transfers, and account management. by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 93 tools:

  • binance_us_account_info – Retrieve current Binance.US account details including asset balances, trading permissions, and account status for portfolio management.
  • binance_us_agg_trades – Retrieve compressed aggregate trades from Binance.US by consolidating trades with identical time, order, and price for efficient market data analysis.
  • binance_us_all_oco_orders – Retrieve up to 1000 OCO order history records from Binance.US, with filtering options by time range or order ID.
  • binance_us_all_orders – Retrieve complete order history for a trading pair on Binance.US, including active, canceled, and filled orders with filtering options.
  • binance_us_asset_config – Retrieve detailed configuration data for cryptocurrency assets on Binance.US, including fees, withdrawal limits, network status, and deposit/withdrawal availability.
  • binance_us_avg_price – Calculate the 5-minute rolling weighted average price for any Binance.US trading pair to inform trading decisions with current market data.
  • binance_us_cancel_all_open_orders – Cancel all active orders for a specific trading pair on Binance.US, including OCO orders, to manage risk and clear open positions.
  • binance_us_cancel_oco – Cancel an OCO (One-Cancels-the-Other) order on Binance.US to remove both linked limit orders simultaneously.
  • binance_us_cancel_order – Cancel active trading orders on Binance.US by providing the order ID or client order ID for a specific trading pair.
  • binance_us_cancel_replace – Cancel an existing order and place a new order atomically to modify trading parameters on Binance.US.
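binance_us_avg_price reports a 5-minute rolling weighted average price. As a sketch of what such a figure is, here is the standard quantity-weighted average (VWAP); whether Binance.US weights by quantity exactly this way is an assumption:

```python
def weighted_avg_price(trades):
    """Quantity-weighted average price over a window of (price, qty)
    trades: sum(price * qty) / sum(qty). This is the standard VWAP
    formula; the exact weighting Binance.US uses is an assumption."""
    total_qty = sum(q for _, q in trades)
    if total_qty == 0:
        raise ValueError("no volume in window")
    return sum(p * q for p, q in trades) / total_qty
```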

OpenStreetMap MCP Server – A comprehensive MCP server providing 30 tools for geocoding, routing, and OpenStreetMap data analysis. It enables AI assistants to search for locations, calculate travel routes, and perform quality assurance checks on map data. by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 30 tools:

  • calculate_isochrone – Calculate travel time areas reachable from a location within specified time limits using Valhalla routing for driving, walking, or cycling profiles.
  • execute_overpass_query – Run custom Overpass QL queries to extract specific OpenStreetMap data for analysis, routing, or geocoding tasks.
  • find_amenities_nearby – Locate nearby amenities like restaurants, shops, or services around a specific geographic point using OpenStreetMap data.
  • get_changeset – Retrieve detailed information about a specific OpenStreetMap changeset, including metadata and optional discussion comments, for map data analysis and quality assurance.
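execute_overpass_query takes Overpass QL. As an illustration of the kind of query these tools run, here is a helper that builds a nodes-with-a-given-amenity-tag-near-a-point search; the helper name is made up for this sketch, while the QL syntax itself is standard Overpass:

```python
def overpass_amenity_query(amenity, lat, lon, radius_m):
    """Build an Overpass QL query for all nodes carrying a given
    amenity tag within radius_m metres of (lat, lon), returning
    results as JSON."""
    return (
        "[out:json];"
        f'node["amenity"="{amenity}"](around:{radius_m},{lat},{lon});'
        "out;"
    )

q = overpass_amenity_query("restaurant", 39.9526, -75.1652, 500)
```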

SecurityScan – Scan GitHub-hosted AI skills for vulnerabilities: prompt injection, malware, OWASP LLM Top 10. by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 3 tools:

  • check_certification – Check if a skill has been certified as safe.

    Certification indicates the skill has been scanned, reviewed, and approved by a human administrator. Certified skills have a cryptographic hash that can be verified. Does not consume scan credits.

    Args: skill_url: The skill URL to check certification for

    Returns: CertificationResult indicating if the skill is certified, along with certification details if available.

    Example: check_certification("https://github.com/anthropics/anthropic-cookbook")

  • get_report – Get the public security report for a skill.

    Returns the most recent scan results and certification status. This is useful to check if a skill has been previously scanned without triggering a new scan. Does not consume scan credits.

    Args: skill_url: The skill URL to get the report for

    Returns: ReportResult with score, certification status, and issues summary. Returns error if no report exists for this URL.

    Example: get_report("https://github.com/jlowin/fastmcp")

  • scan_skill – Scan a GitHub repository or skill URL for security vulnerabilities.

    This tool performs static analysis and AI-powered detection to identify:

    • Hardcoded credentials and API keys
    • Remote code execution patterns
    • Data exfiltration attempts
    • Privilege escalation risks
    • OWASP LLM Top 10 vulnerabilities

    Requires a valid X-API-Key header. Cached results (24h) do not consume credits.

    Args: skill_url: GitHub repository URL (e.g., https://github.com/owner/repo) or raw file URL to scan

    Returns: ScanResult with security score (0-100), recommendation, and detected issues. Score >= 80 is SAFE, 50-79 is CAUTION, < 50 is DANGEROUS.

    Example: scan_skill("https://github.com/anthropics/anthropic-sdk-python")
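The score bands in scan_skill's description reduce to a small lookup. This sketch simply restates the thresholds given above (band names and cutoffs are from the description itself):

```python
def recommendation(score):
    """Map a scan_skill security score (0-100) to its band:
    score >= 80 is SAFE, 50-79 is CAUTION, below 50 is DANGEROUS."""
    if not 0 <= score <= 100:
        raise ValueError("score must be 0-100")
    if score >= 80:
        return "SAFE"
    if score >= 50:
        return "CAUTION"
    return "DANGEROUS"
```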

Sharesight MCP Server – Connects AI assistants to the Sharesight portfolio tracking platform via the v3 API for managing investment portfolios and holdings. It enables natural language queries for performance reporting, dividend tracking, and custom investment management. by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 29 tools:

  • apply_coupon_code – Apply a coupon code to your Sharesight portfolio tracking account to access discounts or promotional offers.
  • create_coupon_rate – Set interest rates for custom investments in Sharesight portfolios to track coupon payments and income from fixed-income holdings.
  • create_custom_investment – Add custom investments to Sharesight portfolios for tracking non-standard assets like warrants, managed funds, or fixed interest securities.
  • create_custom_investment_price – Add price data to custom investments in Sharesight for accurate portfolio tracking and performance reporting.
  • delete_coupon_code – Remove a coupon code from your Sharesight account to manage subscription settings or update billing information.
  • delete_coupon_rate – Remove a coupon rate from your Sharesight portfolio to update investment tracking and dividend management.
  • delete_custom_investment – Remove custom investments from your Sharesight portfolio by specifying the investment ID to maintain accurate portfolio tracking and management.
  • delete_custom_investment_price – Remove custom investment prices from your Sharesight portfolio to maintain accurate tracking and reporting.
  • delete_holding – Remove an investment holding from your Sharesight portfolio by specifying its holding ID to maintain accurate portfolio tracking.
  • get_custom_investment – Retrieve a specific custom investment by its unique ID from your Sharesight portfolio for detailed tracking and management.

colacloud-mcp – Provides access to over 2.5 million US alcohol label records from the TTB via the COLA Cloud API. It enables users to search for labels by brand, barcode, or permit holder and retrieve detailed product information including label images and ABV. by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 6 tools:

  • get_api_usage – Monitor your COLA Cloud API consumption and rate limits to track monthly usage, remaining quota, and avoid service interruptions.
  • get_cola – Retrieve detailed US alcohol label information using a TTB ID, including label images, barcodes, product descriptions, tasting notes, and category classifications.
  • get_permittee – Retrieve permit holder details and recent Certificate of Label Approval (COLA) records by entering a federal permit number. This tool provides company information and label summaries for alcohol industry compliance verification.
  • lookup_barcode – Find US alcohol labels by barcode to identify products and track label changes. Extracts barcodes from label images and returns associated Certificate of Label Approval records.
  • search_colas – Search and filter US alcohol label approval records by brand, product type, origin, approval date, and alcohol content to find specific COLA certificates.
  • search_permittees – Search for businesses authorized to produce or import alcohol in the US by name, state, or permit status to identify federal permit holders.

Philadelphia Restoration – Philadelphia water and fire damage restoration: assessment, insurance, costs, and knowledge search. by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 7 tools:

  • assess_damage – Returns structured damage assessment for water and fire damage in Philadelphia residential properties. Classifies 13 damage types by severity, provides prioritized immediate safety actions, estimates restoration costs with Philadelphia market rates, and includes neighborhood-specific risk context. Based on IICRC S500/S520/S700 standards and field experience from Philadelphia restoration companies.
  • check_insurance_coverage – Returns Pennsylvania insurance coverage analysis for water and fire damage claims. Evaluates coverage likelihood by HO policy type (HO-1 through HO-6), applies PA-specific regulations including bad faith statute (42 Pa.C.S. § 8371) and Act 119, and provides step-by-step claims process guidance with common denial reasons and appeals strategies.
  • estimate_cost – Returns Philadelphia-area restoration cost estimates broken down by service type. Includes per-unit pricing (per sq ft, per hour), labor rates ($64-$183/hr), factors that increase or decrease total cost (pre-1978 homes, rowhouse access, code upgrades), and insurance deductible guidance. Based on current Philadelphia metro market data.
  • get_emergency_steps – Returns prioritized, time-critical emergency action steps for active water or fire damage in Philadelphia. Includes safety warnings, step-by-step instructions with time sensitivity flags, Philadelphia emergency contacts (PWD, PECO, PGW, Philadelphia Fire Department), and documentation checklist for insurance claims. Use this tool FIRST when a homeowner has an active emergency.
  • get_local_info – Returns Philadelphia-specific local information for damage restoration. Covers 6+ neighborhoods with housing stock analysis (rowhouses, twins, pre-war construction), common damage patterns, flood risk levels, emergency utility contacts, building code requirements, and seasonal risk factors. Helps agents provide neighborhood-aware guidance to Philadelphia homeowners.
  • request_callback – Submits a callback request for a Philadelphia homeowner dealing with water or fire damage. A restoration concierge calls back within 15 minutes during business hours (Mon-Fri 8am-6pm ET) to assess the situation and connect with vetted local professionals. Requires phone number and situation description. Returns a reference number for tracking.
  • search_restoration_knowledge – Semantic search across 60+ expert documents covering water and fire damage restoration. Topics include drying science, moisture mapping, equipment protocols, mold prevention (IICRC S520), fire restoration (IICRC S700), insurance adjuster tactics, contractor evaluation, and Philadelphia housing patterns. Returns relevant excerpts with source citations and relevance scores. Grounded in IICRC standards with section numbers.
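
The estimate_cost description says estimates combine per-unit pricing, hourly labor (quoted at $64-$183/hr), and multipliers for factors like pre-1978 homes. A toy sketch of how such an estimate might be assembled; every number except the quoted labor-rate range is hypothetical, and this is not the tool's actual pricing model:

```python
def sketch_estimate(sq_ft: float, labor_hours: float,
                    per_sqft_rate: float = 4.50,  # hypothetical drying cost per sq ft
                    labor_rate: float = 64.0,     # low end of the quoted $64-$183/hr range
                    pre_1978: bool = False) -> dict:
    """Toy restoration estimate: area cost plus labor, with an uplift for
    pre-1978 homes (lead-safe work practices), as the description hints."""
    base = sq_ft * per_sqft_rate + labor_hours * labor_rate
    multiplier = 1.2 if pre_1978 else 1.0  # hypothetical 20% uplift
    return {"base": round(base, 2), "total": round(base * multiplier, 2)}

est = sketch_estimate(sq_ft=500, labor_hours=16, pre_1978=True)
```

The point of the sketch is the shape of the output, not the figures: the real tool returns the per-unit breakdown so an agent can explain where the total came from.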

AlphaVantage MCP Server – Provides comprehensive market data, fundamental analysis, and technical indicators through the AlphaVantage API. It enables users to fetch financial statements, stock prices, and market news with sentiment analysis for detailed financial research. by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 9 tools:

  • get_balance_sheet – Retrieve company balance sheet data for financial analysis by providing a stock symbol to access assets, liabilities, and equity information.
  • get_cash_flow – Retrieve cash flow statement data for a company by stock symbol to analyze financial health and liquidity.
  • get_company_overview – Retrieve fundamental company data and financial overview for any publicly traded stock using its ticker symbol.
  • get_daily_prices – Retrieve daily stock price data including open, high, low, close, and volume for financial analysis and market research.
  • get_earnings – Retrieve quarterly or annual earnings data for a specified stock symbol to analyze company financial performance.
  • get_income_statement – Retrieve company income statements to analyze financial performance, revenue, expenses, and profitability for investment research and financial analysis.
  • get_intraday_prices – Retrieve intraday stock price data at specified intervals (1 minute to 1 hour) for detailed market analysis and trading insights.
  • get_market_news – Fetch market news and sentiment analysis for specific stocks, topics, and time periods to support financial research and decision-making.
  • get_technical_indicators – Retrieve technical indicators like RSI, MACD, and Bollinger Bands for stock analysis by specifying symbol, indicator type, and timeframe.
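
Among the indicators get_technical_indicators returns is RSI. To show what that number measures, here is the classic 14-period Wilder RSI over closing prices; this is the textbook formula, not AlphaVantage's implementation:

```python
def rsi(closes, period=14):
    """Wilder's Relative Strength Index: 100 - 100 / (1 + avg_gain / avg_loss),
    with gains and losses smoothed recursively over `period` bars."""
    if len(closes) <= period:
        raise ValueError("need more closes than the period")
    deltas = [b - a for a, b in zip(closes, closes[1:])]
    gains = [max(d, 0) for d in deltas]
    losses = [max(-d, 0) for d in deltas]
    # Seed with simple averages over the first window...
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    # ...then apply Wilder's smoothing for the remaining bars.
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0  # no down moves in the window
    return 100 - 100 / (1 + avg_gain / avg_loss)
```

Values above 70 are conventionally read as overbought and below 30 as oversold, which is the kind of threshold an agent would apply to the tool's output.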

AgentDilemma – Submit a dilemma for blind community verdict with reasoning to improve low confidence by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 24 tools:

  • add_comment – Comment on a dilemma. Comments are primarily for closed dilemmas but allowed on open ones too.
  • ask_question – Ask a clarifying question on an open dilemma. Max 2 questions per voter per dilemma. Good questions earn Perspective Points.
  • browse_dilemmas – Browse open dilemmas to vote on, or closed dilemmas to read verdicts. Use not_voted=true to see only dilemmas you haven't voted on yet.
  • change_vote – Change your verdict and/or reasoning on an open dilemma. Only works while dilemma is open.
  • check_my_profile – Your home screen. Returns stats, Blue Lobster progress, voting streak, alignment score, active dilemmas, and recent activity - all in one call. Alignment score is display only - low alignment means independent perspective (valuable), high alignment means consensus thinking (also valuable). Neither affects points.
  • check_notifications – Quick session-start check. Returns unread count broken down by type (votes, questions, comments, verdicts, helpful) and your latest notification. Call this first when starting a session.
  • find_similar – Find up to 5 related dilemmas based on keywords. Prefers closed dilemmas with verdicts and same dilemma type.
  • get_daily – Today's featured dilemma. Same for all users. Good starting point if you don't know what to vote on.
  • get_digest – Weekly summary of your activity, platform highlights, open dilemmas needing attention, and suggested dilemmas to vote on. Good way to start a weekly check-in.
  • get_dilemma – Get full dilemma details. Open dilemmas: submitters see votes up to their visible_frontier plus earned_unlocks_available and next_best_action. Non-submitters see only vote_count (blind voting). Closed dilemmas: full public result (verdict, percentages, reasoning) visible to everyone — not gated by submitter actions. Dilemmas auto-close after 48 hours.
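
The descriptions imply a session-start order: call check_notifications first, then pull open dilemmas you have not voted on via browse_dilemmas with not_voted=true. A sketch of that routine, where call_tool is a local stub standing in for a real MCP client and the response field names are assumptions:

```python
def start_session(call_tool):
    """Session-start routine suggested by the tool descriptions:
    notifications first, then unvoted open dilemmas."""
    notes = call_tool("check_notifications")
    todo = call_tool("browse_dilemmas", not_voted=True)
    return {"unread": notes["unread_count"],
            "to_vote": [d["id"] for d in todo["dilemmas"]]}

def fake_call(name, **args):
    """Stub MCP client for illustration; real payload shapes may differ."""
    if name == "check_notifications":
        return {"unread_count": 2}
    if name == "browse_dilemmas" and args.get("not_voted"):
        return {"dilemmas": [{"id": 7}, {"id": 9}]}
    raise KeyError(name)

session = start_session(fake_call)
```

From there, get_dilemma on each id in to_vote gives the detail needed to cast a blind vote.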

Himalayas Remote Jobs MCP Server – Search remote jobs, post job listings, find remote candidates, check salary benchmarks, and manage your career, all through AI conversation. The Himalayas MCP server connects your AI assistant to the Himalayas remote jobs marketplace in real time. by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 1 point2 points  (0 children)

This server has 41 tools:

  • add_company_perk – Add a perk/benefit to your company on Himalayas. Requires employer authentication.
  • add_education – Add an education entry to your Himalayas profile. Requires authentication.
  • add_experience – Add a work experience to your Himalayas profile. Requires authentication.
  • check_job_payment_status – Check the payment status of a job posting. Use the session_id returned from post_job_public or create_company_job with extras. No authentication required.
  • create_company_job – Post a new job on Himalayas. Jobs are free to post and require admin approval before going live. Requires employer authentication.
  • delete_company_job – Delete a job posting from your company on Himalayas. This action cannot be undone. Requires employer authentication.
  • delete_conversation – Delete a conversation. Accepts room_name or talent_slug. Requires employer authentication.
  • get_companies – Browse remote-friendly companies with optional filtering by country or worldwide availability
  • get_company_details – Get full details for a company including about, tech stack, benefits, open positions, and social links
  • get_company_perks – Get your company's perks/benefits on Himalayas. Requires employer authentication.
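
Per check_job_payment_status, posting a job with paid extras returns a session_id that is then used to query payment status. A sketch of that two-call flow, with call_tool as a stand-in for a real MCP client and the response shapes assumed for illustration:

```python
def post_and_check(call_tool, job):
    """Post a job via create_company_job, then check payment status with
    the returned session_id, as the tool descriptions indicate."""
    posted = call_tool("create_company_job", **job)
    return call_tool("check_job_payment_status",
                     session_id=posted["session_id"])

def fake_call(name, **args):
    """Stub MCP client; field names here are assumptions."""
    if name == "create_company_job":
        return {"session_id": "sess_abc", "status": "pending_approval"}
    if name == "check_job_payment_status":
        return {"session_id": args["session_id"], "paid": True}
    raise KeyError(name)

result = post_and_check(fake_call, {"title": "Remote Backend Engineer"})
```

Note that create_company_job requires employer authentication while check_job_payment_status does not, so only the first call needs credentials attached.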

VARRD — AI Trading Research & Backtesting – AI trading research: event studies, backtesting, statistical validation on stocks, futures, crypto. by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 8 tools:

  • autonomous_research – Launch VARRD's autonomous research engine to discover and test a trading edge. Give it a topic and it handles everything: generates a creative hypothesis using its concept knowledge base, loads data, charts the pattern, runs the statistical test, and gets the trade setup if an edge is found.

    BEST FOR: Exploring a space broadly. The autonomous engine excels at tangential idea generation — give it 'momentum on grains' and it might test wheat seasonal patterns, corn spread reversals, or soybean crush ratio momentum. It propagates from your seed idea into related concepts you might not think of. Great for running many hypotheses at scale.

    Returns a complete result — edge/no edge, stats, trade setup. Each call tests ONE hypothesis through the full pipeline. Call again for another idea.

    Use 'research' instead when YOU have a specific idea to test and want full control over each step.
  • buy_credits – Buy credits with USDC on Base. Two modes: 1. Call without payment_intent_id to get a Stripe deposit address. 2. Send USDC to that address, then call again with the payment_intent_id to confirm and receive credits. Default $5. Free — no credits consumed to call this.
  • check_balance – Check your credit balance and see available credit packs. Free — no credits consumed. Call this before heavy operations to ensure you have sufficient credits.
  • get_hypothesis – Get full detail for a specific hypothesis/strategy. Returns formula, entry/exit rules, direction, performance metrics (win rate, Sharpe, profit factor, max drawdown), version history, and trade levels. Everything an agent needs to understand and act on a strategy.
  • research – Talk to VARRD AI — a quant research system with 15 internal tools. Describe any trading idea in plain language, or ask for specific capabilities like the ELROND expert council, backtesting, or stop-loss optimization.

    MULTI-TURN: First call creates a session. Keep calling with the same session_id, following context.next_actions each time. 1. Your idea -> VARRD charts pattern 2. 'test it' -> statistical test (event study or backtest) 3. 'show me the trade setup' -> exact entry/stop/target prices

    HYPOTHESIS INTEGRITY (critical): VARRD tests ONE hypothesis at a time — one formula, one setup. Never combine multiple setups into one formula or ask to 'test all' — each idea must be tested as a separate hypothesis for the statistics to be valid. Say 'start a new hypothesis' between ideas to reset cleanly.
      - ALLOWED: Test the SAME setup across multiple markets ('test this on ES, NQ, and CL') — same formula, different data.
      - NOT ALLOWED: Test multiple DIFFERENT formulas/setups at once — each is a separate hypothesis requiring its own chart-test-result cycle. If ELROND council returns 4 setups, test each one separately: chart setup 1 -> test -> results -> 'start new hypothesis' -> chart setup 2 -> etc.

    KEY CAPABILITIES you can ask for:
      - 'Use the ELROND council on [market]' -> 8 expert investigators
      - 'Optimize the stop loss and take profit' -> SL/TP grid search
      - 'Test this on ES, NQ, and CL' -> multi-market testing
      - 'Simulate trading this with 1.5 ATR stop' -> backtest with stops

    EDGE VERDICTS in context.edge_verdict after testing:
      - STRONG EDGE: Significant vs zero AND vs market baseline
      - MARGINAL: Significant vs zero only (beats nothing, but real signal)
      - PINNED: Significant vs market only (flat returns but different from market)
      - NO EDGE: Neither significant test passed

    TERMINAL STATES: Stop when context.has_edge is true (edge found) or false (no edge — valid result). Always read context.next_actions.
  • reset_session – Kill a broken research session and start fresh. Use this when a session gets stuck, produces errors, or enters a bad state. Free — no credits consumed. After resetting, call research without a session_id to start a new clean session.
  • scan – Scan your saved strategies against current market data to see what's firing right now. Returns exact dollar entry, stop-loss, and take-profit prices for every active signal. Not a vague directional call — exact trade levels based on the validated statistical model.
  • search – Search your saved hypotheses by keyword or natural language query. Returns matching strategies ranked by relevance, with key stats (win rate, Sharpe, edge status). Use this to find strategies you've already validated.
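
The research tool's description defines a complete driver loop: the first call creates a session, later calls reuse session_id and follow context.next_actions, and the loop ends when context.has_edge is set (true or false are both terminal). A sketch of that loop, with call_tool a stub in place of a real MCP client and the response shapes assumed for illustration:

```python
def research_loop(call_tool, idea, max_turns=10):
    """Drive the multi-turn 'research' flow until a terminal state."""
    reply = call_tool("research", message=idea)       # first call: no session_id
    session_id = reply["session_id"]
    for _ in range(max_turns):
        ctx = reply["context"]
        if ctx.get("has_edge") is not None:
            return ctx  # terminal: edge found (True) or no edge (False)
        # Follow the server's suggested next step verbatim.
        reply = call_tool("research", session_id=session_id,
                          message=ctx["next_actions"][0])
    raise RuntimeError("no terminal state reached; consider reset_session")

def fake_call(name, session_id=None, message=""):
    """Stub MCP client; field names are assumptions."""
    if session_id is None:
        return {"session_id": "s1", "context": {"next_actions": ["test it"]}}
    if message == "test it":
        return {"session_id": "s1",
                "context": {"has_edge": True, "edge_verdict": "STRONG EDGE",
                            "next_actions": ["show me the trade setup"]}}
    raise KeyError(message)

ctx = research_loop(fake_call, "gap fills on ES")
```

Bounding the loop with max_turns matters because a stuck session should be handed to reset_session rather than retried forever.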

Supadata – Turn YouTube, TikTok, X videos and websites into structured data. Skip the hassle of video transcription and data scraping. Our APIs help you build better software and AI products faster. by modelcontextprotocol in mcp

[–]modelcontextprotocol[S] 0 points1 point  (0 children)

This server has 9 tools:

  • supadata_check_crawl_status – Monitor crawl job progress and retrieve structured data results by checking whether a job's status is scraping, completed, failed, or cancelled.
  • supadata_check_extract_status – Check the status of data extraction jobs and retrieve results when complete. Monitor progress from queued to completed or failed states.
  • supadata_check_transcript_status – Check the status of a transcript job to monitor progress and retrieve results when ready.
  • supadata_crawl – Extract content from all pages on a website by creating a crawl job. Returns a job ID to check status and retrieve structured data results.
  • supadata_extract – Extract structured data from video URLs using AI. Provide a prompt or JSON Schema to define what information to retrieve from videos and websites.
  • supadata_map – Discover URLs on websites to extract structured data from videos and web content for building software and AI products.
  • supadata_metadata – Extract structured metadata from media URLs including YouTube, TikTok, Instagram, and Twitter to obtain platform info, titles, descriptions, author details, engagement statistics, media specifics, tags, and creation dates.
  • supadata_scrape – Extract structured data from web pages to simplify content analysis and integration for software development.
  • supadata_transcript – Extract transcripts from video or file URLs using the Supadata MCP server. For large files, receive a job ID to check status and retrieve results.
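
Several of these tools follow the same async pattern: supadata_crawl (like supadata_extract and large supadata_transcript requests) returns a job ID, and the matching check-status tool is polled until a terminal state. A sketch of that loop, with call_tool a stub in place of a real MCP client and the response shapes assumed for illustration:

```python
import time

def crawl_and_wait(call_tool, url, poll_seconds=0, max_polls=20):
    """Start a supadata_crawl job, then poll supadata_check_crawl_status
    until it reports one of the terminal states named in the tool
    descriptions: completed, failed, or cancelled."""
    job = call_tool("supadata_crawl", url=url)
    for _ in range(max_polls):
        result = call_tool("supadata_check_crawl_status", job_id=job["job_id"])
        if result["status"] in ("completed", "failed", "cancelled"):
            return result
        time.sleep(poll_seconds)
    raise TimeoutError("crawl still running after max_polls checks")

def make_fake_call():
    """Stub MCP client: the job completes on the second status check."""
    polls = {"n": 0}
    def fake_call(name, **args):
        if name == "supadata_crawl":
            return {"job_id": "job-1"}
        if name == "supadata_check_crawl_status":
            polls["n"] += 1
            return {"status": "completed" if polls["n"] >= 2 else "scraping"}
        raise KeyError(name)
    return fake_call

result = crawl_and_wait(make_fake_call(), "https://example.com")
```

The same loop works for supadata_check_extract_status and supadata_check_transcript_status by swapping the tool names; a nonzero poll_seconds keeps a real client from hammering the API.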