Overview

MCP tools are the atomic building blocks of the Agentic BI system. Each tool performs a single, well-defined action and returns structured output. The orchestration layer composes these tools to fulfill user requests.

  • Each tool has a defined input schema and output format
  • Tools are stateless — all context passed via parameters
  • Tools can be called by the LLM agent or by the orchestration service directly
  • New tools can be added without modifying the orchestration layer
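The contract implied by these properties can be sketched as a minimal tool descriptor. This is an illustrative sketch, not the actual server types: the names ToolDefinition and handler, and the echo implementation, are assumptions.

```typescript
// Minimal shape of a tool registration, assuming a generic MCP-style server.
// ToolDefinition and its fields are illustrative names, not the real API.
interface ToolDefinition {
  name: string;        // unique tool identifier, e.g. "get-measures"
  description: string; // shown to the LLM agent for tool selection
  inputSchema: object; // JSON Schema describing the input parameters
  // Stateless: the handler receives all context via its parameters.
  handler: (input: Record<string, unknown>) => Promise<unknown>;
}

// Example descriptor for list-semantic-models. The real server would call
// the Power BI REST API; this stub just echoes the filter to show that the
// tool depends on nothing beyond its input.
const listSemanticModels: ToolDefinition = {
  name: "list-semantic-models",
  description: "Enumerate datasets in a workspace",
  inputSchema: {
    type: "object",
    properties: { workspaceId: { type: "string" } },
  },
  handler: async (input) => {
    return { workspaceId: input.workspaceId ?? null, models: [] };
  },
};
```

Because every tool exposes the same descriptor shape, the orchestration layer (or the LLM agent) can select and invoke tools generically without per-tool wiring.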

Tool Catalog

Tool                 | Source | Action                            | Returns
list-semantic-models | PBI    | Enumerate datasets in a workspace | Model ID, name, workspace
get-measures         | PBI    | Retrieve measures from a model    | Measure name, expression, description
generate-dax         | PBI    | Generate a DAX EVALUATE query     | DAX query string
generate-sql         | SF     | Generate a SQL SELECT query       | SQL query string
validate-query       | Both   | Check query against policy rules  | Pass/fail + reasons
execute-preview      | Both   | Run query with row limit          | Result rows (capped)
save-export          | Both   | Export results to file/report     | Export URI or report URL

(PBI = Power BI semantic model, SF = Snowflake.)

MCP Configuration

mcp.json
{
  "servers": {
    "agentic-bi": {
      "type": "stdio",
      "command": "node",
      "args": ["./mcp-server/index.js"],
      "env": {
        "PBI_TENANT_ID": "${env:PBI_TENANT_ID}",
        "PBI_CLIENT_ID": "${env:PBI_CLIENT_ID}",
        "SNOWFLAKE_ACCOUNT": "${env:SNOWFLAKE_ACCOUNT}"
      }
    }
  }
}

list-semantic-models

Enumerate all Power BI semantic models available to the authenticated user.

Input
{
  "workspaceId": "optional — filter to workspace"
}
Output
[
  {
    "id": "abc-123",
    "name": "ROME_PROD_LSM",
    "workspace": "Analytics Prod"
  }
]

get-measures

Retrieve all measures from a specific semantic model, including DAX expressions and descriptions.

Output Example
[
  {
    "name": "PMPM",
    "expression": "DIVIDE([Total Claims], [Member Months])",
    "description": "Per Member Per Month cost",
    "table": "Fact Claims"
  }
]

generate-dax

Generate a DAX EVALUATE query based on a natural-language description and model context.

Input → Output
// Input:
{
  "question": "Top 10 clients by total claims",
  "modelId": "abc-123",
  "metadata": { ... }
}

// Output:
{
  "dax": "EVALUATE TOPN(10, SUMMARIZECOLUMNS('Dim Client'[Client Name], \"Total Claims\", [Total Claims Amount]), [Total Claims Amount], DESC)",
  "confidence": 0.92
}

generate-sql

Generate a Snowflake SQL query based on a natural-language description and schema context.

Input → Output
// Input:
{
  "question": "Claims denied in Q3 2025 by denial reason",
  "schema": { ... }
}

// Output:
{
  "sql": "SELECT denial_reason, COUNT(*) as denied_count FROM claims WHERE status = 'DENIED' AND claim_date BETWEEN '2025-07-01' AND '2025-09-30' GROUP BY denial_reason ORDER BY denied_count DESC",
  "confidence": 0.88
}

execute-preview

Execute a validated query and return capped results. Always enforces row limits.

  • Default row limit: 100 rows
  • Query must pass validation before execution
  • Returns column types for proper formatting
  • Includes execution duration in metadata
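The row-capping contract described above can be sketched as follows. The result shape and the capRows helper are assumptions for illustration, not the server's actual API.

```typescript
// Sketch of execute-preview's row-capping behavior (illustrative only).
const DEFAULT_ROW_LIMIT = 100;

interface PreviewResult {
  rows: unknown[];                           // capped at the row limit
  columns: { name: string; type: string }[]; // column types for formatting
  truncated: boolean;                        // true if the source had more rows
  durationMs: number;                        // execution duration metadata
}

// Cap the result set and record whether anything was cut off, so the
// caller can tell the user the preview is partial.
function capRows(
  allRows: unknown[],
  limit: number = DEFAULT_ROW_LIMIT
): Pick<PreviewResult, "rows" | "truncated"> {
  return {
    rows: allRows.slice(0, limit),
    truncated: allRows.length > limit,
  };
}
```

Returning an explicit truncated flag lets the orchestration layer decide whether to suggest save-export for the full result set.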

save-export

Export query results or create an embedded report link.

CSV Export

Full query results exported to a local CSV file. Returns the file path.

Report Link

Generate a Power BI embedded report URL with pre-applied filters matching the query context.
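The two export modes can be modeled as a discriminated union over the return value. The field names (kind, path, url) are hypothetical; only the two modes themselves come from the tool description above.

```typescript
// Illustrative union of the two save-export results (field names assumed).
type ExportResult =
  | { kind: "csv"; path: string }    // local CSV file path
  | { kind: "report"; url: string }; // embedded Power BI report URL

// Dispatch on the export mode to produce a user-facing message.
function describeExport(r: ExportResult): string {
  return r.kind === "csv"
    ? `Results written to ${r.path}`
    : `Report available at ${r.url}`;
}
```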

Extension Points

Add new tools without modifying the orchestration service:

Adding a New Tool
// 1. Define the tool schema
{
  "name": "get-anomalies",
  "description": "Detect statistical anomalies in a dataset",
  "inputSchema": { ... },
  "outputSchema": { ... }
}

// 2. Implement the handler
// 3. Register in the MCP server
// 4. The orchestration layer auto-discovers it
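The four steps above might look like the following, assuming a hypothetical in-memory registry (registerTool and the registry shape are illustrative; the actual MCP SDK registration call differs).

```typescript
// Hypothetical registry mirroring steps 1-4: the orchestration layer
// enumerates it at runtime, so adding a tool requires no orchestrator changes.
type Handler = (input: Record<string, unknown>) => Promise<unknown>;

const registry = new Map<string, { description: string; handler: Handler }>();

// Step 3: registration entry point used by every tool.
function registerTool(name: string, description: string, handler: Handler): void {
  registry.set(name, { description, handler });
}

// Steps 1-2: schema (elided here) and handler for the new tool.
// Placeholder body: a real implementation would run a statistical test.
async function getAnomalies(input: Record<string, unknown>): Promise<unknown> {
  return { anomalies: [], dataset: input.dataset };
}

registerTool("get-anomalies", "Detect statistical anomalies in a dataset", getAnomalies);

// Step 4: the orchestrator "auto-discovers" tools by listing the registry.
const discovered = Array.from(registry.keys());
```

Because discovery reads the registry rather than a hard-coded list, the new tool becomes callable as soon as the MCP server restarts with it registered.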