Fabric Recent Update · Mar 12, 2026 · Mehrsa Golestaneh

Operationalizing Agentic Applications with Microsoft Fabric


Agentic apps are moving quickly from prototypes to real workloads. But once you go beyond a proof of concept (POC), the hard part isn’t getting an agent to respond; it’s knowing what the agent did, whether it was safe and correct, and how it’s impacting the business.

Let’s explore what it takes to operationalize agentic applications using Microsoft Fabric. Through a production-minded reference implementation, we demonstrate how to capture agent behavior as governed data, monitor safety and performance in real time, and turn agent activity into measurable business insight. Although the example uses a banking experience, the architecture and patterns apply to any production agentic system.

The real challenge with agentic apps

Agentic AI apps generate rich operational data: user prompts, agent routing decisions, tool calls, model outputs, latency, token usage, and safety signals. Teams racing to deliver a POC often treat that data as “just logs,” which makes it hard to answer basic production questions:

  • Which agents were invoked, in what order, and why?
  • Did the system use the right tools and data sources?
  • Where are failures happening (latency, safety blocks, tool errors), and how do we debug them?
  • How do we evaluate answer quality and tie usage to measurable outcomes?

Orchestration is only half the problem. Operationalizing agents with governance, observability, evaluation, and analytics is where teams tend to struggle.

Why Microsoft Fabric changes the equation

Microsoft Fabric provides a unified, governed data plane for operational, analytical, and AI workloads. Instead of stitching together separate systems for transactions, telemetry, safety monitoring, and BI, Fabric brings these capabilities into a single governed workspace. With OneLake as the shared data foundation, all three workload types operate over the same data, making agent behavior easier to observe, evaluate, and optimize end to end.

Introducing the Agentic Banking App reference implementation

The Agentic Banking App is an open-source, full-stack, production-minded reference implementation. It’s designed to demonstrate real agentic patterns — not just prompt chaining — along with the data and operational plumbing you need to run agents responsibly at scale.

Explore the reference implementation at aka.ms/AgenticAppFabric.

Architecture Overview

Figure: Agentic app architecture

Conceptually, the flow is simple: a React frontend calls a Python backend (LangGraph) to run a coordinator agent and specialist agents. As the app executes transactions, answers questions, and generates custom UI, it also captures agent interactions and telemetry. Fabric acts as the system of record for both the banking data and the agentic operational data, so analytics, monitoring, and evaluation sit right next to the runtime.

What this reference implementation shows in practice

Multi-agent reasoning with traceability

The app uses a coordinator-and-specialists pattern so each request can be routed, executed, and inspected end-to-end:

  • Coordinator agent routes each user request to the right specialist and provides a single entry point for policy and intent checks.
  • Account agent performs banking operations (reads/writes) via parameterized SQL.
  • Support agent answers service questions using RAG grounded in bank documentation.
  • Visualization agent enables generative UI by producing and persisting user-specific visualization configurations.
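The routing logic above can be sketched in plain Python. This is a conceptual illustration, not code from the repo — the reference app implements it with LangGraph, and all names here (`coordinator`, `account_agent`, `support_agent`, `Trace`) are hypothetical:

```python
# Minimal sketch of the coordinator-and-specialists pattern, with a trace
# object recording every routing decision so requests are inspectable end to end.
from dataclasses import dataclass, field

@dataclass
class Trace:
    """Accumulates which agents handled a request, in order."""
    steps: list = field(default_factory=list)

def account_agent(request, trace):
    trace.steps.append("account_agent")
    return f"banking operation for: {request}"

def support_agent(request, trace):
    trace.steps.append("support_agent")
    return f"RAG answer for: {request}"

def coordinator(request):
    """Single entry point: classify intent, route to a specialist, log the hop."""
    trace = Trace()
    trace.steps.append("coordinator")
    if "balance" in request or "transfer" in request:
        answer = account_agent(request, trace)
    else:
        answer = support_agent(request, trace)
    return answer, trace

answer, trace = coordinator("what is my balance?")
```

The key point is that every hop lands in the trace, which is what makes the "which agents, in what order, and why" question answerable later.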

Agentic operational data as a first-class asset

In the reference implementation, SQL Database in Fabric plays two roles: it powers core transactional scenarios (user info, accounts, balances, transfers), and it stores structured operational data produced during agent execution.

Instead of treating chat transcripts as opaque blobs, the app captures agent sessions, routing decisions, tool usage, model metadata (tokens/latency), and safety outcomes as relational data. This makes it possible to trace agent behavior end-to-end, debug failures, and correlate agent behavior with business outcomes.
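To make the contrast with opaque transcripts concrete, here is a sketch of agent telemetry modeled as relational rows. The schema and field names are illustrative (the repo documents the real one); `sqlite3` stands in for SQL Database in Fabric:

```python
# Sketch: agent telemetry captured as relational data, so traces can be
# reconstructed and safety/latency signals queried with ordinary SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE agent_events (
        session_id  TEXT,
        agent       TEXT,    -- which agent handled the step
        tool        TEXT,    -- tool invoked, if any
        tokens      INTEGER, -- model token usage for the step
        latency_ms  INTEGER,
        safety_flag INTEGER  -- 1 if content safety flagged the step
    )
""")
conn.executemany(
    "INSERT INTO agent_events VALUES (?, ?, ?, ?, ?, ?)",
    [
        ("s1", "coordinator", None,         120,  35, 0),
        ("s1", "account",     "sql_query",  340, 180, 0),
        ("s2", "support",     "rag_search", 510, 420, 1),
    ],
)

# Reconstruct a session trace end to end...
trace = conn.execute(
    "SELECT agent, tool FROM agent_events WHERE session_id = 's1'").fetchall()
# ...and correlate safety outcomes across all sessions.
flagged = conn.execute(
    "SELECT COUNT(*) FROM agent_events WHERE safety_flag = 1").fetchone()[0]
```

Because the events are rows rather than blobs, debugging a failure is a `WHERE session_id = ?` query instead of grepping transcripts.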

If you want the full telemetry schema and examples of reconstructed traces, the repo includes detailed documentation.

Generative UI and personalization driven by agents

To keep experiences stateful and personalized, the app persists session memory and user-specific UI artifacts. Cosmos DB in Fabric is a natural fit for this semi-structured, high-velocity data (the same reason OpenAI uses Azure Cosmos DB for write-heavy workloads): conversation state can be restored instantly, and generated visualization configurations can be saved to a user profile and rehydrated on the next visit.

This pattern generalizes beyond banking: any app that needs durable agent sessions, preferences, and generated UI components can reuse the same approach.
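The shape of that persistence is a JSON document per user. As a sketch under assumed field names (a plain dict stands in for the Cosmos DB container here):

```python
# Sketch: durable session memory and generated-UI artifacts as JSON documents,
# the kind of data the app stores in Cosmos DB in Fabric. Field names are
# illustrative, not the repo's actual schema.
import json

store = {}  # stand-in for a document container keyed by user id

def save_session(user_id, messages, viz_config):
    store[user_id] = json.dumps({
        "messages": messages,      # conversation state to restore on return
        "viz_config": viz_config,  # generated visualization saved to the profile
    })

def load_session(user_id):
    """Rehydrate a prior session, or start fresh for a new user."""
    doc = store.get(user_id)
    return json.loads(doc) if doc else {"messages": [], "viz_config": None}

save_session("u42",
             [{"role": "user", "content": "show my spending"}],
             {"chart": "bar", "period": "30d"})
restored = load_session("u42")
```

On the next visit, the frontend reads the restored `viz_config` and re-renders the same generated UI without re-prompting the agent.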

Built-in content safety monitoring with real-time visibility

Every user prompt is evaluated for content safety, and the resulting signals can be streamed into Fabric in real time. In this sample, Eventstream routes those events into Eventhouse (KQL), where you can query, monitor, and alert on safety trends with low latency.

The repo includes example KQL queries you can use as a starting point for dashboards such as “safety flags over time” or “top categories of blocked content.”
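To show what those dashboard queries compute, here is a Python sketch of the same aggregations — flags bucketed by hour and blocked content counted by category. In the sample this runs as KQL in Eventhouse; the event fields below are illustrative:

```python
# Sketch of "safety flags over time" and "top categories of blocked content":
# bucket blocked events into hourly bins and count per category.
from collections import Counter
from datetime import datetime

events = [
    {"ts": "2026-03-12T09:05:00", "category": "self_harm", "blocked": True},
    {"ts": "2026-03-12T09:40:00", "category": "violence",  "blocked": True},
    {"ts": "2026-03-12T10:15:00", "category": "violence",  "blocked": True},
    {"ts": "2026-03-12T10:20:00", "category": "none",      "blocked": False},
]

def hour_bucket(ts):
    """Truncate an ISO timestamp to its hour, like bin(timestamp, 1h) in KQL."""
    return datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:00")

flags_by_hour = Counter(hour_bucket(e["ts"]) for e in events if e["blocked"])
top_categories = Counter(e["category"] for e in events if e["blocked"])
```

The Eventhouse version expresses the same thing as a `summarize count() by bin(...)` over the streamed events, with the advantage of running continuously at low latency.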

From agent behavior to business insight

Once agentic operational data is captured and modeled, the next step is turning it into decisions. In this reference app, data can flow into a Lakehouse and semantic model, so teams can use Power BI and notebooks to evaluate quality, performance, and outcomes, connecting AI usage (tokens, tools, latency, safety flags) to product KPIs and business value.

Figure: Fabric components lineage view

The power of OneLake, the Lakehouse, and the semantic model

One key advantage of Microsoft Fabric is that all data is stored in OneLake, which serves as the centralized data lake for the business, eliminating siloed storage while simplifying data sharing and access management.

From there, you can shape the data into a Lakehouse with customizable schema, security, and governance policies, and expose it through a semantic model that maps relationships and measures once, so Power BI, notebooks, and Data Agents all work from a consistent definition. For this application, we built such a pipeline by pulling in all banking and agentic data from both the SQL and Cosmos DB databases.

Analyze and monitor in Power BI

With a semantic model in place, Power BI reports can surface operational insights such as token usage, tool usage, common intents, latency hotspots, and safety flags, making it easier to spot regressions and prioritize improvements.

Evaluate and iterate in notebooks

Fabric notebooks make it straightforward to run recurring data science and evaluation workflows. The sample includes a notebook that scores responses for qualities like intent resolution, relevance, coherence, and fluency using an LLM-as-judge approach (via the Azure AI Evaluation SDK), so you can track quality over time rather than relying on anecdotal feedback.
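The shape of such an evaluation loop can be sketched as below. This is not the repo's notebook: the judge here is a stub standing in for a model call (the real sample uses the Azure AI Evaluation SDK's evaluators), and the scoring scale is an assumption:

```python
# Sketch of an LLM-as-judge evaluation loop over (question, answer) pairs.
# The dimensions match the post; judge() is a stub where a real implementation
# would prompt an LLM to return a 1-5 score for the given dimension.

DIMENSIONS = ["intent_resolution", "relevance", "coherence", "fluency"]

def judge(question, answer, dimension):
    """Stub judge: replace with an LLM call that scores the answer 1-5."""
    return 5 if answer else 1

def evaluate(samples):
    """Score every sample on every dimension; keep a per-sample mean for trending."""
    results = []
    for question, answer in samples:
        scores = {d: judge(question, answer, d) for d in DIMENSIONS}
        scores["mean"] = sum(scores.values()) / len(DIMENSIONS)
        results.append(scores)
    return results

results = evaluate([("What is my balance?", "Your balance is $120.")])
```

Running a loop like this on a schedule, and landing the scores back in the Lakehouse, is what turns quality from anecdote into a trackable time series.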

Data Agent

You can also layer in conversational experiences over the semantic model (for example, with a data agent) to provide governed “chat with your data” scenarios. Stay tuned for future updates showing how you can use the data agent as another agent in the application to enable a more secure and seamless NL2SQL experience.

Built for production and extensibility

The sample is intentionally “production-minded”: it separates responsibilities across services, keeps session memory durable, and keeps data products (tables, event streams, semantic models, reports, notebooks) inside the same Fabric governance boundary. And while the UI is a banking experience, the patterns apply to any domain where agents act on operational systems and must be observed and improved continuously.

Git-based deployment of Fabric artifacts

All required Fabric artifacts can be deployed from the GitHub repo using Fabric’s native Git integration: clone the repo, connect it to a workspace, and sync, and the workspace resources (databases, Lakehouse, semantic model, reports, notebooks, and streaming components) are created consistently.

Explore, adapt, and contribute

If you’re building agentic applications, this reference implementation provides a practical blueprint for moving from prototype to production — covering multi-agent patterns, traceability, safety monitoring, and a clear path from agent behavior to analytics and evaluation.

Get started by cloning the repo (aka.ms/AgenticAppFabric), running the app, and reusing the patterns in your own domain. We welcome your contributions and feedback; feel free to open an issue in the GitHub repo!

