Overview

The Agentic BI system is organized into five distinct layers. Each layer has a single responsibility and communicates only with its immediate neighbors. This separation ensures that security policies, business logic, and UI concerns never bleed into each other.

The Golden Rule

Treat the VS Code chat participant as the front end and orchestrator only, not the place where business logic lives. Chat participants own user interaction; tool-like capabilities and MCP servers handle reusable, agent-invokable actions.

System Diagram

LAYER 1 — CHAT UI: VS Code Chat Participant
LAYER 2 — ORCHESTRATION SERVICE: Intent Classify · Route Source · Build Prompt · Validate & Policy (Auth · Metadata · Caching · Audit Logging)
LAYER 3 — MCP TOOLS & ENDPOINTS: List Models · Get Metadata · Gen SQL/DAX · Validate Query · Execute / Export
LAYER 4 — DATA: Power BI (Semantic Model · Execute Queries API) · Snowflake (SQL API · Governed Execution)
LAYER 5 — PRESENTATION: Markdown · Tables · Charts · Embedded Reports · Exports

Layer 1 — Chat UI

The VS Code chat participant is purely a front-end orchestrator. It owns the user interaction — collecting natural-language prompts, displaying responses, and managing conversation context.

  • ✓ Respects the model the user selected in the chat UI (never hardcodes a model)
  • ✓ Uses the model passed through the request object per VS Code LM docs
  • ✓ Delegates all business logic to the orchestration layer
  • ✓ Supports conversation memory per session
  • ✓ Can be replaced by a web UI or Teams bot without changing the backend
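The delegation rule above can be sketched as a pure function (the interfaces and names below are hypothetical stand-ins; the real VS Code API uses types such as `ChatRequest` and `ChatResponseStream`, omitted here so the sketch stays self-contained):

```typescript
// Hypothetical shapes standing in for the VS Code chat API.
interface ChatTurn { prompt: string; modelId: string; history: string[] }
interface OrchestratorClient { ask(turn: ChatTurn): Promise<string> }

// The participant only forwards the turn and relays the answer.
// modelId comes from the request (the user's chat-UI selection),
// never from a hardcoded constant.
async function handleChatTurn(
  turn: ChatTurn,
  orchestrator: OrchestratorClient
): Promise<string> {
  return orchestrator.ask(turn); // no business logic here, by design
}
```

Because the handler holds no logic of its own, swapping the front end for a web UI or Teams bot only means providing a different caller for the same `OrchestratorClient`.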

Layer 2 — Orchestration Service

A backend API service that sits between the chat UI and all data systems. This is the single enforcement point for security, governance, and business rules.

Intent Classification

Parse the user's question to understand what they're asking: KPI lookup, trend analysis, data exploration, report generation, etc.
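A minimal rule-based sketch of this step, assuming a small keyword glossary (all term lists below are illustrative; in practice the orchestrator would likely delegate classification to the LLM itself):

```typescript
type Intent = {
  type: "kpi_trend" | "kpi_lookup" | "exploration" | "report";
  entities: string[];
};

// Assumed glossary terms -- illustrative, not a real business glossary.
const KPI_TERMS = ["pmpm", "revenue", "margin"];
const TREND_TERMS = ["trend", "over time", "monthly"];

function classifyIntent(question: string): Intent {
  const q = question.toLowerCase();
  const entities = KPI_TERMS.filter((t) => q.includes(t));
  if (entities.length && TREND_TERMS.some((t) => q.includes(t)))
    return { type: "kpi_trend", entities };
  if (entities.length) return { type: "kpi_lookup", entities };
  if (/report|dashboard/.test(q)) return { type: "report", entities };
  return { type: "exploration", entities }; // default: detail-level question
}
```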

Source Routing

Decide whether to query Power BI semantic model or Snowflake based on the nature of the question and available metadata.
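One simple routing heuristic, assuming a metadata snapshot of defined semantic-model measures (the measure list is hypothetical): if the question resolves to a governed Power BI measure, use Power BI; otherwise fall back to Snowflake for detail-level SQL.

```typescript
// Assumed snapshot of measures defined in the semantic model.
const PBI_MEASURES = new Set(["pmpm", "revenue", "margin"]);

function routeSource(entities: string[]): "power_bi" | "snowflake" {
  const measureHits = entities.filter((e) => PBI_MEASURES.has(e.toLowerCase()));
  // Any governed measure match goes to Power BI; everything else is
  // detail-level exploration against Snowflake.
  return measureHits.length > 0 ? "power_bi" : "snowflake";
}
```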

Prompt Building

Construct source-specific prompts with relevant metadata subset, few-shot examples, and business glossary context.
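A sketch of the prompt assembly, assuming the orchestrator has already selected the relevant metadata subset, few-shot pairs, and glossary entries (the shapes below are illustrative):

```typescript
interface PromptContext {
  metadata: string[];                    // relevant schema/measure subset
  fewShot: { q: string; a: string }[];   // curated question/query pairs
  glossary: Record<string, string>;      // business-term definitions
}

function buildPrompt(
  question: string,
  source: "power_bi" | "snowflake",
  ctx: PromptContext
): string {
  const dialect = source === "power_bi" ? "DAX" : "Snowflake SQL";
  return [
    `Generate a ${dialect} query for the question below.`,
    `Available metadata:\n${ctx.metadata.join("\n")}`,
    `Glossary:\n${Object.entries(ctx.glossary)
      .map(([k, v]) => `${k}: ${v}`)
      .join("\n")}`,
    ...ctx.fewShot.map((ex) => `Q: ${ex.q}\nA: ${ex.a}`),
    `Q: ${question}\nA:`,
  ].join("\n\n");
}
```

Keeping the prompt source-specific means the DAX and SQL paths can evolve their few-shot examples independently.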

Validation & Policy

Validate generated queries before execution. Enforce row limits, deny dangerous operations, apply role-based restrictions.
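A minimal policy gate, assuming a deny-list plus a row cap (the cap value and helper name are illustrative; a production validator would also parse the query against a schema whitelist):

```typescript
// Deny write/DDL verbs outright; this is a coarse filter, not a parser.
const DENIED = /\b(insert|update|delete|drop|alter|create|grant|truncate|merge)\b/i;
const MAX_ROWS = 1000; // assumed policy cap

function validateSql(sql: string): { ok: boolean; sql?: string; reason?: string } {
  if (DENIED.test(sql)) return { ok: false, reason: "write/DDL operations are denied" };
  if (!/^\s*select\b/i.test(sql)) return { ok: false, reason: "only SELECT is allowed" };
  // Enforce the row cap by appending LIMIT when the query lacks one.
  const limited = /\blimit\s+\d+\b/i.test(sql) ? sql : `${sql.trim()} LIMIT ${MAX_ROWS}`;
  return { ok: true, sql: limited };
}
```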

Layer 3 — Tool Layer (MCP)

Discrete, reusable tools exposed via the Model Context Protocol. Each tool does one thing well.

Tool                   Action                                     Returns
list-semantic-models   Enumerate available Power BI datasets      Model IDs, names, workspace
get-measures           Retrieve measures from a semantic model    Measure definitions, descriptions
generate-dax           Produce validated DAX skeleton             DAX query string
generate-sql           Produce validated SQL query                SQL query string
execute-preview        Run query with row limit                   Result rows (capped)
save-export            Export to file or create report link       Export URI / report URL
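The one-tool-one-job shape can be sketched as a plain registry (the real wiring would go through an MCP SDK; the registry class and the stub handlers below are illustrative only):

```typescript
type ToolHandler = (args: Record<string, unknown>) => Promise<unknown>;

// Minimal stand-in for MCP tool registration: each tool is a named,
// independently invokable handler.
class ToolRegistry {
  private tools = new Map<string, ToolHandler>();
  register(name: string, handler: ToolHandler) { this.tools.set(name, handler); }
  list(): string[] { return [...this.tools.keys()]; }
  async call(name: string, args: Record<string, unknown>): Promise<unknown> {
    const h = this.tools.get(name);
    if (!h) throw new Error(`unknown tool: ${name}`);
    return h(args);
  }
}

const registry = new ToolRegistry();
// Stub handlers -- real ones would hit Power BI / Snowflake.
registry.register("list-semantic-models", async () => [{ id: "m1", name: "Finance" }]);
registry.register("execute-preview", async (a) => ({ rows: [], limit: a.rowLimit ?? 100 }));
```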

Layer 4 — Data Layer

Power BI

Access via the Execute Queries REST API against semantic models. Requires Azure AD / Entra app registration with proper scopes.

  • ✓ DAX query execution
  • ✓ Row-level security preservation
  • ✓ Business measure reuse
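As a sketch, the call targets the documented Execute Queries endpoint; only the request is constructed here (the builder function name is hypothetical, and sending it requires an Entra access token obtained separately):

```typescript
// Builds a request for POST .../datasets/{datasetId}/executeQueries.
function buildExecuteQueriesRequest(datasetId: string, dax: string, token: string) {
  return {
    url: `https://api.powerbi.com/v1.0/myorg/datasets/${datasetId}/executeQueries`,
    method: "POST" as const,
    headers: {
      Authorization: `Bearer ${token}`,       // Entra-issued access token
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      queries: [{ query: dax }],              // DAX statement to evaluate
      serializerSettings: { includeNulls: true },
    }),
  };
}
```

Because queries run against the semantic model, row-level security and measure definitions are applied by Power BI itself, not re-implemented in the orchestrator.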

Snowflake

Access via the SQL API — submit, poll, cancel, and fetch statements. All access goes through a policy layer with warehouse/role mapping.

  • ✓ Raw detail-level queries
  • ✓ Exploratory analysis
  • ✓ Schema whitelisting
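A sketch of the statement submission against the SQL API's statements endpoint, with the warehouse/role taken from the policy layer's per-user mapping (the builder name and policy shape are assumptions; authentication details such as token type headers are omitted):

```typescript
// Builds a request for POST https://<account>.snowflakecomputing.com/api/v2/statements.
function buildStatementRequest(
  account: string,
  sql: string,
  policy: { warehouse: string; role: string },  // from the policy layer's mapping
  token: string
) {
  return {
    url: `https://${account}.snowflakecomputing.com/api/v2/statements`,
    method: "POST" as const,
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      statement: sql,
      warehouse: policy.warehouse,  // never chosen by the user directly
      role: policy.role,
      timeout: 60,                  // seconds
    }),
  };
}
```

The response carries a statement handle, which the orchestrator then polls, cancels, or fetches results from through the same API.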

Layer 5 — Presentation

Every response includes multiple output types, giving the user actionable results:

  • ✓ Plain-English interpretation — what the data means
  • ✓ Executed query — the actual SQL or DAX that ran
  • ✓ Result preview — first N rows in a formatted table
  • ✓ Recommended visual — chart type suggestion with config
  • ✓ Export artifact — CSV, embedded report link, or report config
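The result-preview output can be sketched as a small formatter that renders capped rows as a Markdown table (the function name and default cap are illustrative):

```typescript
function toMarkdownTable(rows: Record<string, unknown>[], maxRows = 10): string {
  if (rows.length === 0) return "_no rows_";
  const cols = Object.keys(rows[0]);
  const header = `| ${cols.join(" | ")} |`;
  const sep = `| ${cols.map(() => "---").join(" | ")} |`;
  // Cap the preview so large result sets never flood the chat window.
  const body = rows
    .slice(0, maxRows)
    .map((r) => `| ${cols.map((c) => String(r[c])).join(" | ")} |`);
  return [header, sep, ...body].join("\n");
}
```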

End-to-End Data Flow

Two example scenarios showing how the pipeline works in practice:

Scenario 1: KPI Query via Power BI (DAX)

// User asks:
"Show top 20 clients by PMPM trend"

// Step 1 — Intent Classification
→ type: "kpi_trend", entities: ["clients", "PMPM"]

// Step 2 — Source Routing
→ source: "power_bi"  // PMPM is a defined measure

// Step 3 — Prompt + Validate
→ DAX query generated and validated against policy

// Step 4 — Execute + Present
→ Table of 20 clients with PMPM values, trend chart suggestion

Scenario 2: Exploration via Snowflake (SQL)

// User asks:
"How many claims were denied in Q3 by denial reason?"

// Step 1 — Intent Classification
→ type: "exploration", entities: ["claims", "denied", "Q3"]

// Step 2 — Source Routing
→ source: "snowflake"  // detail-level, no PBI measure

// Step 3 — Prompt + Validate
→ SQL query generated, validated, schema whitelisted

// Step 4 — Execute + Present
→ Table of denial reasons with counts, bar chart suggestion
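The four steps shared by both scenarios compose into one pipeline; every stage is injected, so the same skeleton serves the DAX and SQL paths alike (all interface and function names below are illustrative):

```typescript
interface Stages {
  classify: (q: string) => { type: string; entities: string[] };
  route: (intent: { entities: string[] }) => "power_bi" | "snowflake";
  generate: (q: string, source: string) => string;    // prompt + LLM call
  validate: (query: string) => boolean;               // policy gate
  execute: (query: string, source: string) => Promise<unknown[]>;
}

async function answer(question: string, s: Stages) {
  const intent = s.classify(question);                // Step 1
  const source = s.route(intent);                     // Step 2
  const query = s.generate(question, source);         // Step 3a
  if (!s.validate(query)) throw new Error("query rejected by policy"); // Step 3b
  const rows = await s.execute(query, source);        // Step 4
  return { intent, source, query, rows };
}
```

Because validation sits between generation and execution, no LLM-produced query can reach a data system without passing the policy gate.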