Building Action Systems on data in Azure SQL and SQL Server 2025 with Change Eventstreams
With all the Ignite announcements this year, it's easy to overlook features you can start using right now. One of the most exciting additions now in public preview (and GA in SQL Server 2025 on-prem!) is Change Event Streaming (CES).
This feature unlocks powerful event-driven scenarios for databases deployed in Azure SQL. Today, we'll explore how to integrate Azure SQL with Microsoft Fabric using CES to build real-time, event-driven systems.
But first, let's start with the why:
Why Use Change Event Streams?
Business Reasons
- Build event-driven systems on top of relational databases with minimal overhead and seamless integration
- Synchronize data across systems—especially between microservices or distributed architectures
- Enable real-time analytics on operational data
- Audit and monitor sensitive changes or log specific events
- Consolidate event data from multiple Azure SQL databases into a single Fabric endpoint in real time
Technical Reasons
- CES avoids writing back to the database (unlike CDC), reducing overhead
- Schema flexibility: Each event includes the row schema. If a DDL change occurs (add, drop, rename column), the next DML event reflects the updated schema. CES does not emit DDL events
- No primary key required
How It Works
Beyond landing the data in Fabric in real time, Eventstream custom endpoints open up several patterns: change data from SQL can be exposed for downstream consumers, Activator alerts can fire when new records are added to a database table, and the data can be transformed and processed in flight.
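As a rough illustration of what a downstream consumer of the custom endpoint sees, here is a minimal Python sketch of unpacking a change event. The payload shape (an `operation` code such as `INS`, and a `data.eventrow.current` document carrying the row image as nested JSON) mirrors the fields the KQL query later in this article reads; treat the exact structure as an assumption to verify against your own stream.

```python
import json

# A hypothetical CES insert event, shaped like the fields the KQL
# query in this article reads (operation, data.eventrow.current).
raw_event = json.dumps({
    "operation": "INS",
    "data": {
        "eventrow": {
            # The changed row often arrives as a nested JSON string,
            # so it needs a second parse.
            "current": json.dumps({
                "SalesOrderID": 71774,
                "OrderQty": 2,
                "UnitPrice": 125.00,
            })
        }
    }
})

def parse_change_event(payload: str) -> dict:
    """Return the current row image from a change event payload."""
    event = json.loads(payload)
    row = json.loads(event["data"]["eventrow"]["current"])
    row["_operation"] = event["operation"]
    return row

row = parse_change_event(raw_event)
print(row["_operation"], row["SalesOrderID"])  # → INS 71774
```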
Enough background, let me show how this is done!
Implementation Steps
To demonstrate this example, I've created an Azure SQL database with the Adventure Works sample database deployed to it. I've also created a Fabric capacity (my F8 if you are curious) where I am going to set up the database to send change event streaming data. Within my Fabric capacity, I've created an Eventhouse and an Eventstream with a custom endpoint. After creating the custom endpoint for the eventstream, retrieve the details.
Important Note: Something that trips a lot of people up in Eventstream is that schemas are not read until events are sent (unless you are using a schema registry, but that's out of scope for this article 😊). So your initial publish of the Eventstream should include only the custom endpoint; don't add the Eventhouse destination until you see events flowing into the stream. Otherwise, your Eventhouse destination will fail.
Step 1: Enable Change Event Streaming in Azure SQL
Switching over to our Azure SQL DB, run the following commands to enable change event streaming:
-- Create the Master Key with a password, if it does not already exist.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<>';
-- Create a database scoped credential to connect the database to your Fabric event stream.
-- This example is using SharedAccessKey auth
CREATE DATABASE SCOPED CREDENTIAL EventstreamSAK
WITH
IDENTITY = '<>',
SECRET = '<>';
-- Enable event streaming in azure sql
EXEC sys.sp_enable_event_stream;
-- Create the event stream group and input the event hub connection information
EXEC sys.sp_create_event_stream_group
@stream_group_name = N'AzureSQLEventDriven',
@destination_type = N'AzureEventHubsAmqp',
@destination_location = N'<>.servicebus.windows.net/<>',
@destination_credential = EventstreamSAK;
-- The destination location comes from the custom endpoint you previously created.
-- Click on the Event Hub, select SAS Key Authentication, and retrieve the fields.
-- Run this command for each table you want to add to the group.
EXEC sys.sp_add_object_to_event_stream_group N'AzureSQLEventDriven', N'SalesLT.SalesOrderDetail';
-- If you run into errors, query the change feed error log
SELECT * FROM sys.dm_change_feed_errors ORDER BY entry_time DESC;
Step 2: Generate Sample Data
After attaching the table, run some DML and you should see the change events start to flow. I ran the following script against the AdventureWorks sample, in case it helps:
-- =========================================
-- 1. Validate Foreign Keys Before Insert
-- =========================================
DECLARE @SalesOrderID INT = 71774; -- Example existing SalesOrderHeader ID
DECLARE @ProductIDs TABLE (ProductID INT);
INSERT INTO @ProductIDs VALUES (709), (712), (714), (716), (718);
-- Check if SalesOrderID exists
IF NOT EXISTS (SELECT 1 FROM SalesLT.SalesOrderHeader WHERE SalesOrderID = @SalesOrderID)
BEGIN
PRINT 'Error: SalesOrderID does not exist.';
RETURN;
END
-- Check if all ProductIDs exist
IF EXISTS (
SELECT p.ProductID
FROM @ProductIDs p
LEFT JOIN SalesLT.Product sp ON p.ProductID = sp.ProductID
WHERE sp.ProductID IS NULL
)
BEGIN
PRINT 'Error: One or more ProductIDs do not exist.';
RETURN;
END
PRINT 'Foreign key validation passed. Proceeding with inserts...';
-- =========================================
-- 2. Insert 5 Sample Rows
-- =========================================
INSERT INTO SalesLT.SalesOrderDetail (SalesOrderID, OrderQty, ProductID, UnitPrice, UnitPriceDiscount, rowguid, ModifiedDate)
VALUES
(@SalesOrderID, 2, 709, 125.00, 0.00, NEWID(), GETDATE()),
(@SalesOrderID, 1, 712, 250.00, 10.00, NEWID(), GETDATE()),
(@SalesOrderID, 3, 714, 75.00, 0.00, NEWID(), GETDATE()),
(@SalesOrderID, 5, 716, 50.00, 5.00, NEWID(), GETDATE()),
(@SalesOrderID, 2, 718, 300.00, 15.00, NEWID(), GETDATE());
PRINT 'Inserted 5 sample rows successfully.';
-- =========================================
-- 3. Update Example
-- =========================================
UPDATE SalesLT.SalesOrderDetail
SET OrderQty = 4, UnitPriceDiscount = 20.00, ModifiedDate = GETDATE()
WHERE SalesOrderDetailID = 110701;
PRINT 'Updated SalesOrderDetailID = 110701 successfully.';
-- =========================================
-- 4. Delete Example
-- =========================================
DELETE FROM SalesLT.SalesOrderDetail
WHERE SalesOrderDetailID = 110703;
PRINT 'Deleted SalesOrderDetailID = 110703 successfully.';
Step 3: Configure Eventstream Connection
Switch back over to Eventstream, and you should see events flowing into the stream. If you are transforming the data in Eventstream, you will need to select "Event Processing Before Ingestion", otherwise select "Direct".
After completing the configuration screen, you should see your data available in the Eventhouse table.
Implementation Considerations
Single Table Setup
If you are only setting up 1 table:
- Stable schema: If the schema rarely changes, you can use the built-in JSON parser in the Eventhouse to parse the columns out of the data column during ingestion and save yourself a step
- Dynamic schema: If the schema changes often (or could), use the `DropMappedFields` mapping transformation in Kusto as a defensive mechanism to capture future schema columns in an additional dynamic column. Alternatively, you can simply use `bag_unpack` to materialize the columns at query time
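The same defensive idea, sketched outside of Kusto for illustration: extract the columns you know about today and spill anything unrecognized into a catch-all field, so future DDL changes don't break ingestion. The column list here is an assumption.

```python
KNOWN_COLUMNS = {"SalesOrderID", "OrderQty", "UnitPrice"}  # assumed current schema

def split_row(row: dict) -> dict:
    """Keep known columns as first-class fields; park the rest in 'extra'."""
    known = {k: v for k, v in row.items() if k in KNOWN_COLUMNS}
    extra = {k: v for k, v in row.items() if k not in KNOWN_COLUMNS}
    known["extra"] = extra  # analogous to the spill-over dynamic column
    return known

# A row that gained a new column after a DDL change:
print(split_row({"SalesOrderID": 1, "OrderQty": 2, "PromoCode": "SAVE10"}))
```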
Multiple Tables
If there is more than 1 table:
- Don't try to recreate all of the tables in Eventhouse. Think of the ingested data as one big stream and pull out the data you want.
- For tabular recreation: If you have a few tables, and you really want to re-create the structured tabular view that you can see in SQL, take a look at Tyler Chessman's blog where he walks through how to do this with CDC enabled tables.
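To make the "one big stream" idea concrete, here is a small Python sketch of filtering a mixed multi-table stream down to just the changes you care about; the `table` field name is an assumption about the event metadata.

```python
# A combined stream carrying changes from several tables (field names assumed).
events = [
    {"table": "SalesLT.SalesOrderDetail", "operation": "INS"},
    {"table": "SalesLT.Customer", "operation": "UPD"},
    {"table": "SalesLT.SalesOrderDetail", "operation": "DEL"},
]

def for_table(stream, table_name):
    """Pull only one table's changes out of the combined stream."""
    return [e for e in stream if e["table"] == table_name]

order_events = for_table(events, "SalesLT.SalesOrderDetail")
print(len(order_events))  # → 2
```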
Real-Time Analytics Example
Now that this data is in Eventhouse, there are tons of opportunities that open up. We can create Activator triggers to search for anomalies, map the data of where orders are coming from in real-time, create Operations Agents to take actions on our data, or simply create a real-time dashboard or a Power BI report.
Below is a query that flags any order from the past hour whose total is more than 200% of the average order size, opening the door for real-time Activator rules that can notify instantly:
// 1) Parse incoming events & compute line-level amount
let AllOrderLines =
AzureSQLCESraw
| where operation == "INS"
| extend ParsedCurrentJson = parse_json(tostring(data.eventrow.current))
| extend
SalesOrderID = tostring(ParsedCurrentJson.SalesOrderID),
SalesOrderDetailID = tostring(ParsedCurrentJson.SalesOrderDetailID),
OrderQty = toint(ParsedCurrentJson.OrderQty),
UnitPrice = todouble(ParsedCurrentJson.UnitPrice),
UnitPriceDiscount = todouble(ParsedCurrentJson.UnitPriceDiscount),
ModifiedDate = todatetime(ParsedCurrentJson.ModifiedDate)
// If UnitPriceDiscount is a currency amount (e.g., 10.00 off of 250.00):
| extend line_amount_currency = OrderQty * (UnitPrice - UnitPriceDiscount)
// If UnitPriceDiscount is a decimal percentage (e.g., 0.10 = 10%):
| extend line_amount_percent = OrderQty * UnitPrice * (1.0 - UnitPriceDiscount)
// ***Pick the correct one for your feed:***
| extend line_amount = line_amount_currency;
// 2) Aggregate to "order size" (sum of all lines per SalesOrderID)
// Also track first/last ingestion times for that order.
let PerOrderTotals =
AllOrderLines
| summarize
order_total = sum(line_amount),
first_event_time = min(['time']),
last_event_time = max(['time'])
by SalesOrderID;
// 3) Global baseline: average order size across *all* records
let global_avg_order_total =
toscalar(PerOrderTotals | summarize avg_order_total = avg(order_total));
// 4) Anomalies: orders whose latest ingestion happened in the last hour
// and whose order total exceeds 200% of the global average
PerOrderTotals
| where last_event_time between (ago(1h) .. now())
| where order_total > 2.0 * global_avg_order_total
| project
SalesOrderID,
order_total,
global_avg_order_total,
threshold_200pct = 2.0 * global_avg_order_total,
first_event_time,
last_event_time
| order by order_total desc
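The same detection logic, restated as a small Python sketch with invented sample data: sum line amounts per order, compute the global average, and flag recently seen orders above 200% of it.

```python
from collections import defaultdict
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

# (order_id, line_amount, event_time) — invented sample lines
lines = [
    ("A", 100.0, now - timedelta(minutes=10)),
    ("A", 150.0, now - timedelta(minutes=9)),
    ("B", 90.0,  now - timedelta(hours=5)),
    ("C", 700.0, now - timedelta(minutes=2)),
]

# Aggregate to order size and track the latest event time per order.
totals, last_seen = defaultdict(float), {}
for order_id, amount, ts in lines:
    totals[order_id] += amount
    last_seen[order_id] = max(last_seen.get(order_id, ts), ts)

# Global baseline: average order size across all orders.
global_avg = sum(totals.values()) / len(totals)

# Anomalies: recent orders exceeding 200% of the global average.
anomalies = [
    order_id
    for order_id, total in totals.items()
    if last_seen[order_id] >= now - timedelta(hours=1)
    and total > 2.0 * global_avg
]
print(anomalies)  # → ['C']
```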
CU Consumption Analysis
In the interest of transparency, here is how many CUs my Eventstream and Eventhouse used to ingest this data. The data payloads sent by SQL totaled 18.9 kB for 5 insert transactions, consuming 1.25 CU for the Eventhouse and 0.05003 CU for the Eventstream on my capacity.
The eventstream can be further broken down to understand uptime vs data charges:
The majority of the cost was uptime, and the actual data charge was very small. In the real world this is a trade-off, since a genuinely active system would have more orders flowing through.
We can calculate the size of the payload for an individual event which will help with our estimates. Back in the SQL database, I sent a single insert statement through and monitored the size of the payload, which came in at 2.3 kB.
Using this, we can estimate the capacity size we'll need with the capacity calculator. For example, I asked Copilot how much the total daily volume would be at varying event rates. Notice that at 100,000 events per day the volume is well under 1 GB, which, including Eventhouse, means this could technically run on an F2. Even 1 million events per day (roughly 2.3 GB) equates to an F4. Don't forget, though, that your number may change based on other workloads and downstream consumers of that data. 😊
Daily Data Volume Analysis
| Events/Day | Data Volume | Capacity Needed |
|---|---|---|
| 100,000 | 0.23 GB | F2 |
| 500,000 | 1.15 GB | F2-F4 |
| 1,000,000 | 2.3 GB | F4 |
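The arithmetic behind the table can be reproduced directly. This sketch assumes the ~2.3 kB single-event payload measured above and ignores compression and other workloads on the capacity.

```python
PAYLOAD_KB = 2.3  # measured single-event payload size

def daily_volume_gb(events_per_day: int) -> float:
    """Daily ingest volume in GB at a fixed payload size (1 GB = 1,000,000 kB)."""
    return events_per_day * PAYLOAD_KB / 1_000_000

for n in (100_000, 500_000, 1_000_000):
    print(f"{n:>9,} events/day -> {daily_volume_gb(n):.2f} GB/day")
```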
If you're navigating AI applications of data, Fabric, or event-driven architectures and want a second opinion, feel free to reach out!