Insight

Mastering Data Export in Eventhouse: From Eventstream to OneLake and SQL


2025-12-02

First, I can't believe we're already at 10 editions! Super excited to see how much this has grown already, and I'm really looking forward to next year! This will be the last edition of the year; stay tuned for some exciting things I'm working on for early 2026! With all of the Ignite announcements as well, there has been a lot to keep up with:

  • SQL Server 2025 going GA - Change Event Streams brings event-driven architectures to all things SQL
  • Activator gained the ability to additionally call Fabric functions
  • Eventstream now has HTTP and MongoDB CDC feeds
  • Operations Agents allow you to create autonomous agents that monitor your business 24x7
  • Eventhouse endpoint brings the power of event-driven architectures to Lakehouse and Warehouse
  • Ontologies allow you to create and define the important business concepts in your organization for AI systems, bringing shared understanding across engines

Mastering Data Export in Eventhouse

Over the past few weeks, I've worked with two different customers who shared a common challenge: how to operationalize curated Eventhouse data for analytics, reporting, and cross‑team sharing without giving broad access to the source Eventhouse.

Although their use cases were different, the architectural pattern ended up being the same:

  • Customer 1 needed to export select Eventhouse tables to AWS S3, where ClickHouse powered legacy operational reports.
  • Customer 2 needed to expose a curated Eventhouse table to a different department but struggled with Lakehouse sharing and Fabric security. They needed table-level export, not workspace-level permissioning.

These scenarios highlight a broader truth:

Operational analytics frequently require pushing Eventhouse data downstream into other engines (OneLake, ADLS, SQL, or external systems) to enable integration without duplicating processing or broadening security boundaries.

Today's article walks through the practical pattern for doing exactly that using the .export command. We'll use the Bicycle sample dataset (from Eventstream) and focus purely on the management commands and process.

Why Export from Eventhouse?

Eventhouse gives teams a unified, high-performance environment for real-time data shaping. But operational systems often need that data outside Eventhouse for:

  • BI workloads embedded in other platforms
  • Department-level views without granting access to source Eventhouse
  • External systems that rely on flat files (CSV, JSON, Parquet)
  • Cloud databases requiring curated fact tables
  • Cross-cloud integrations with S3 or service-specific processors
  • Supporting legacy systems while transitioning to Fabric

Exporting becomes the bridge.

For this walkthrough, we'll work from a table populated from Eventstream's Bicycle sample data, containing bike station locations, usage, and operational metrics.

Reference docs for setup: https://learn.microsoft.com/en-us/fabric/real-time-intelligence/overview

Export Command Overview

The .export command moves Eventhouse data to storage in multiple formats:

  • CSV
  • TSV
  • JSON
  • Parquet
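Regardless of the target format, the command follows the same basic shape. The sketch below is illustrative grammar rather than a runnable command; see the management-command docs for the full list of supported properties:

//general shape of a storage export (illustrative, not runnable as-is)
.export [async] to csv|tsv|json|parquet
  ( "<storage connection string>" )
  with (
     <propertyName>=<value> [, ...]
  )
  <| <query to export>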

Step 1: Handling Sensitive Values - Obfuscated String Literals

When exporting to storage that uses access keys or connection strings, never expose keys in logs.

Kusto supports obfuscated string literals: prefix a string with h to ensure the value is masked in telemetry:

h'MySuperSecretString'
H"MySuperSecretString"
"ThisIsMy"h'SuperSecretString'

Notice that you can prefix the entire string to secure all of it, or obfuscate only portions by concatenating an obfuscated literal with a plain one.

Step 2: Export to OneLake and ADLS

When exporting data to OneLake, use the URI structure below. You can grab both GUIDs directly from your Lakehouse URL in Fabric. The target folder should already exist in your Lakehouse in Fabric.

https://onelake.dfs.fabric.microsoft.com/<workspaceGUID>/<lakehouseGUID>/Files/<folder>/

You can also send the data to ADLS; the command is almost exactly the same. For OneLake, note that Impersonate is the only supported authentication method, but for ADLS you can use any of the methods listed in the docs. Because the commands are essentially the same, I put them both in the code sample below:

//export examples
.export to csv 
(h@"https://onelake.dfs.fabric.microsoft.com/<workspaceGUID>/<lakehouseGUID>/Files/EventhouseExtracts/;impersonate") //OneLake
//("https://<MyStorageAccount>.blob.core.windows.net/containername/"h';impersonate') //ADLS
with (
   sizeLimit=10000,
   namePrefix="export",
   includeHeaders="all",
   encoding="UTF8NoBOM"
)
<|
//this is the query you want to export. You can export the whole table, 
//rows since the last time you exported, or only rows received in the last time period.  
bicyclesampleraw
| take 100 
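To make the export incremental, as the comment in the sample above mentions, a common approach is to filter on ingestion time. A minimal sketch, assuming the IngestionTime policy is enabled on the table and the export runs on an hourly cadence:

//export only rows ingested in the last hour (assumes IngestionTime policy is enabled)
.export to parquet 
(h@"https://onelake.dfs.fabric.microsoft.com/<workspaceGUID>/<lakehouseGUID>/Files/EventhouseExtracts/;impersonate")
with (
   namePrefix="export_incremental"
)
<|
bicyclesampleraw
| where ingestion_time() > ago(1h)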

Step 3: Export Data to SQL

Alternatively, instead of sending the data to storage, you may want to export it to SQL. To export data to a SQL database, use the .export command with the to sql option. This works against any cloud version of SQL; I did not test it against on-premises SQL Server, but I don't believe anything would stop you as long as you can authenticate to the instance. At any rate, here is a good code sample you can use to get started.

//export to SQL
.export to sql ['dbo.EventhouseExtracts']  
   h@"Server=tcp:MyServer.database.windows.net, 1433;Authentication=Active Directory Integrated;Initial Catalog=MyDatabaseName;Connection Timeout=30;"
with (
   createifnotexists="true"
)
<|
bicyclesampleraw
| project BikepointID = tostring(BikepointID), Street = tostring(Street), Neighbourhood = tostring(Neighbourhood), 
          Latitude = tostring(Latitude), Longitude = tostring(Longitude), No_Bikes = toint(No_Bikes), No_Empty_Docks = toint(No_Empty_Docks)
| take 100 

Important Note: For SQL, the project clause is important. SQL requires strict type mapping, so cast each column to an explicit type (and give it a name) before exporting.
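A quick way to sanity-check that mapping before exporting is the getschema operator, which returns the name and Kusto type of every output column. For example, against a couple of the sample columns:

//inspect output column names and types before exporting
bicyclesampleraw
| project BikepointID = tostring(BikepointID), No_Bikes = toint(No_Bikes)
| getschema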

Operationalizing Export Commands

Once you have your .export command defined, you can operationalize it through:

  • Notebooks (parameterized operational flows)
  • Activator (scheduled data pushes)

This enables:

  • Cross-team table sharing without granting Eventhouse access
  • Exporting curated views into Lakehouse for governed analytics
  • Feeding external analytical systems during transition periods
  • Continuing support for legacy operational workloads post‑go-live
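For longer-running exports kicked off from a notebook or on a schedule, the async variant is worth knowing: the command returns an operation ID you can poll for completion rather than holding the connection open. A sketch, where <OperationId> is a placeholder for whatever GUID the first command returns:

//kick off the export without blocking; the command returns an OperationId
.export async to csv 
(h@"https://onelake.dfs.fabric.microsoft.com/<workspaceGUID>/<lakehouseGUID>/Files/EventhouseExtracts/;impersonate")
with (
   namePrefix="export"
)
<|
bicyclesampleraw

//poll the status of the export using the returned OperationId
.show operations <OperationId>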

Thanks all, happy holidays and looking forward to our next edition!


If you're navigating AI applications of data, Fabric, or event-driven architectures and want a second opinion, feel free to reach out!