dbt-core · tutorial · warehouse

Running dbt-core with Microsoft Fabric: Complete Setup Guide

Step-by-step guide to setting up dbt-core with Microsoft Fabric — workspace configuration, authentication, profiles.yml, and troubleshooting for both Fabric DWH and Fabric Lakehouse.

ModelDock Team · February 17, 2026 · 13 min read

Microsoft Fabric is Microsoft's unified analytics platform, and if your organization is betting on it, you're probably wondering how to get dbt-core working with it. The answer is: it depends on which Fabric experience you're using.

This guide covers both paths — Fabric Data Warehouse and Fabric Lakehouse — because they use different adapters, different authentication methods, and have different limitations. There aren't many guides out there covering this in detail, so we're going to be thorough.

What is Microsoft Fabric?

If you've worked with Azure Synapse Analytics, think of Fabric as its successor — a single SaaS platform that bundles data engineering, data warehousing, real-time analytics, data science, and Power BI into one experience. Everything sits on top of OneLake, Microsoft's unified data lake.

For dbt users, two Fabric experiences matter:

  • Fabric Data Warehouse: A full SQL-based data warehouse with a T-SQL endpoint. This is the closest thing to a traditional warehouse like Snowflake or BigQuery.
  • Fabric Lakehouse: A lakehouse built on Delta Lake, queryable through a SQL analytics endpoint or a Spark/Livy endpoint.

These are fundamentally different compute engines, which is why they need different dbt adapters.

Two Paths: Different Adapters for Different Experiences

Here's the key decision:

| Experience | dbt Adapter | SQL Dialect | Auth Options |
| --- | --- | --- | --- |
| Fabric Data Warehouse | dbt-fabric | T-SQL | Service principal, Azure CLI, username/password |
| Fabric Lakehouse | dbt-fabricspark | Spark SQL | Azure CLI only |

If you're starting fresh and have a choice, Fabric Data Warehouse with dbt-fabric is the more mature path. The adapter is better supported, service principal authentication works (critical for production), and the T-SQL dialect is familiar if you've used SQL Server or Synapse.

Fabric Lakehouse with dbt-fabricspark is viable but has real limitations, especially around authentication in automated environments. We'll cover both honestly.

Prerequisites

Before you start with either path, you'll need:

  • A Microsoft Fabric workspace with at least one capacity (F2 or higher, or a trial capacity)
  • Python 3.9+ installed
  • pip (or your preferred Python package manager)
  • Azure CLI installed (required for some auth methods)
  • A Fabric Data Warehouse or Lakehouse already created in your workspace

A note on capacity: Fabric is not free. You need an active Fabric capacity assigned to your workspace. If your capacity is paused (common with pay-as-you-go), dbt connections will fail. We'll cover that in troubleshooting.

Path 1: Fabric Data Warehouse Setup

This is the recommended path for most teams. The dbt-fabric adapter connects to Fabric Data Warehouse through its T-SQL endpoint.

Installing dbt-fabric

# Create a virtual environment (recommended)
python -m venv dbt-venv
source dbt-venv/bin/activate
# Install dbt-fabric
pip install dbt-fabric

This installs dbt-core along with the Fabric adapter. At the time of writing, dbt-fabric supports dbt-core 1.7+ and uses pyodbc under the hood, which means you also need the Microsoft ODBC Driver for SQL Server installed on the machine.

On macOS:

brew install microsoft/mssql-release/msodbcsql18

On Ubuntu/Debian:

curl https://packages.microsoft.com/keys/microsoft.asc | sudo tee /etc/apt/trusted.gpg.d/microsoft.asc
sudo add-apt-repository "$(curl https://packages.microsoft.com/config/ubuntu/$(lsb_release -rs)/prod.list)"
sudo apt-get update
sudo ACCEPT_EULA=Y apt-get install -y msodbcsql18

Finding Your SQL Endpoint

You need the SQL connection string for your Fabric Data Warehouse. To find it:

  1. Open your Fabric workspace in the browser
  2. Click on your Data Warehouse
  3. Go to Settings (gear icon) or check the bottom-right corner of the warehouse view
  4. Look for the SQL connection string — it looks something like:

```

your-workspace-name.datawarehouse.fabric.microsoft.com

```

You'll also need your warehouse name (the name you gave it when you created it) and the database name (usually the same as the warehouse name).

Authentication Options

The dbt-fabric adapter supports several authentication methods. Here are the most common ones.

Service Principal (recommended for production)

Create an Azure AD app registration, grant it access to your Fabric workspace, and use its credentials:

# profiles.yml
my_fabric_project:
  target: dev
  outputs:
    dev:
      type: fabric
      driver: "ODBC Driver 18 for SQL Server"
      server: your-workspace.datawarehouse.fabric.microsoft.com
      database: your_warehouse_name
      schema: dbo
      authentication: ServicePrincipal
      tenant_id: "your-azure-tenant-id"
      client_id: "your-app-client-id"
      client_secret: "your-app-client-secret"
      threads: 4

To set up the service principal:

  1. In Azure Portal, go to App registrations and create a new registration
  2. Under Certificates & secrets, create a new client secret
  3. In your Fabric workspace, go to Manage access and add the service principal with at least Contributor role
  4. Note the Application (client) ID, Directory (tenant) ID, and the secret value
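Before wiring those secrets into dbt, it can help to confirm they can actually mint a token. Here is a minimal smoke-test sketch using the azure-identity package (an assumption: it requires `pip install azure-identity`; the scope shown is the Azure SQL resource, which Fabric's T-SQL endpoint accepts):

```python
import os


def get_fabric_token():
    """Try to acquire a token with the service principal; None if not configured."""
    tenant = os.environ.get("AZURE_TENANT_ID")
    client = os.environ.get("AZURE_CLIENT_ID")
    secret = os.environ.get("AZURE_CLIENT_SECRET")
    if not all([tenant, client, secret]):
        return None  # credentials not set in the environment
    try:
        from azure.identity import ClientSecretCredential
    except ImportError:
        return None  # azure-identity not installed
    cred = ClientSecretCredential(tenant, client, secret)
    # Fabric's SQL endpoint accepts tokens scoped to the Azure SQL resource
    return cred.get_token("https://database.windows.net/.default").token


token = get_fabric_token()
print("token acquired" if token else "credentials not configured; skipping")
```

If this prints a token-acquired message but dbt still fails, the problem is workspace access rather than the app registration itself.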

Azure CLI (good for local development)

If you're logged into the Azure CLI, dbt can use your identity:

# Log in to Azure first
az login

# profiles.yml
my_fabric_project:
  target: dev
  outputs:
    dev:
      type: fabric
      driver: "ODBC Driver 18 for SQL Server"
      server: your-workspace.datawarehouse.fabric.microsoft.com
      database: your_warehouse_name
      schema: dbo
      authentication: CLI
      threads: 4

This is the easiest option for development — no secrets to manage. But it won't work in CI/CD or Docker without additional setup.

profiles.yml Configuration

Here's a complete profiles.yml for Fabric Data Warehouse using environment variables (a good practice for keeping secrets out of files):

my_fabric_project:
  target: "{{ env_var('DBT_TARGET', 'dev') }}"
  outputs:
    dev:
      type: fabric
      driver: "ODBC Driver 18 for SQL Server"
      server: "{{ env_var('FABRIC_SERVER') }}"
      database: "{{ env_var('FABRIC_DATABASE') }}"
      schema: dbo
      authentication: CLI
      threads: 4
    prod:
      type: fabric
      driver: "ODBC Driver 18 for SQL Server"
      server: "{{ env_var('FABRIC_SERVER') }}"
      database: "{{ env_var('FABRIC_DATABASE') }}"
      schema: dbo
      authentication: ServicePrincipal
      tenant_id: "{{ env_var('AZURE_TENANT_ID') }}"
      client_id: "{{ env_var('AZURE_CLIENT_ID') }}"
      client_secret: "{{ env_var('AZURE_CLIENT_SECRET') }}"
      threads: 8
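The env_var() calls resolve when dbt parses the profile, so each variable must be exported in the shell (or CI job) that invokes dbt. For the dev target, that looks like this (the values are placeholders, not real endpoints):

```shell
# Placeholder values -- substitute your own workspace details
export DBT_TARGET=dev
export FABRIC_SERVER="your-workspace.datawarehouse.fabric.microsoft.com"
export FABRIC_DATABASE="your_warehouse_name"
```

If any variable is missing, dbt fails at parse time with an env_var error rather than at connection time.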

Running Your First Model

With your profile configured, test the connection:

dbt debug

If everything is wired up correctly, you'll see a successful connection message. If not, check the troubleshooting section below.

Create a simple model to verify end-to-end:

-- models/staging/stg_test.sql
select
    1 as id,
    'hello from fabric' as message,
    getdate() as created_at

Then run it:

dbt run --select stg_test

You should see the model materialize as a view (the default) in your Fabric Data Warehouse.
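If you want a physical table instead of the default view, set the materialization in the model's config block (standard dbt config, shown as a sketch):

```sql
-- models/staging/stg_test.sql
{{ config(materialized='table') }}

select
    1 as id,
    'hello from fabric' as message,
    getdate() as created_at
```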

Path 2: Fabric Lakehouse Setup

The Fabric Lakehouse path uses the dbt-fabricspark adapter, which connects through a Livy endpoint (the Spark session API). This is a less mature path, but it's the right choice if your data already lives in a Fabric Lakehouse and you want to use Spark SQL.

Installing dbt-fabricspark

# Create a virtual environment
python -m venv dbt-venv
source dbt-venv/bin/activate
# Install dbt-fabricspark
pip install dbt-fabricspark

The Authentication Limitation

Here's the honest truth about dbt-fabricspark: at the time of writing, it only supports Azure CLI authentication. There's no service principal option.

This means:

  • For local development, it works fine — just az login and go
  • For production, CI/CD, or Docker, you need a workaround (service principal login via az login --service-principal, which we'll cover in the production section)

This is the single biggest limitation of the Lakehouse path for dbt.

Finding Your Livy Endpoint

The Livy endpoint is the Spark session API for your Lakehouse. To find it:

  1. Open your Fabric workspace
  2. Click on your Lakehouse
  3. Go to Settings
  4. Look for the Livy API endpoint — it typically looks like:

```

https://msit-onelake.pbidedicated.windows.net/...

```

Alternatively, you can construct it from your workspace and lakehouse IDs. Check the Fabric documentation for the exact format, as it varies by region.

profiles.yml Configuration

my_lakehouse_project:
  target: dev
  outputs:
    dev:
      type: fabricspark
      method: livy
      host: "{{ env_var('FABRIC_LIVY_HOST') }}"
      lakehouse: "{{ env_var('FABRIC_LAKEHOUSE_NAME') }}"
      workspace_id: "{{ env_var('FABRIC_WORKSPACE_ID') }}"
      lakehouse_id: "{{ env_var('FABRIC_LAKEHOUSE_ID') }}"
      authentication: CLI
      threads: 1
      connect_retries: 3
      connect_timeout: 60
      schema: dbo

A few things to note:

  • threads: Keep this at 1 or 2. Spark sessions through Livy are heavier than SQL connections, and parallelism can cause session conflicts.
  • connect_timeout: Spark sessions take time to start. A 60-second timeout is reasonable; the first run after a cold start can be slow.
  • connect_retries: Livy sessions can be flaky. Retries help.
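To see why retries matter, here's a rough Python sketch of the pattern that connect_retries enables. This illustrates the behaviour, not the adapter's actual code; open_session is a stand-in for the Livy session call:

```python
import time


def with_retries(open_session, retries=3, delay=5):
    """Call open_session up to `retries` times, sleeping `delay` seconds between failures."""
    last_error = None
    for attempt in range(1, retries + 1):
        try:
            return open_session()
        except ConnectionError as exc:
            last_error = exc
            if attempt < retries:
                time.sleep(delay)  # back off before the next attempt
    raise last_error  # all attempts exhausted
```

A session call that fails twice and succeeds on the third try would return normally with retries=3; a fourth consecutive failure would surface the error to dbt.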

Running Your First Model

# Make sure you're logged in
az login
# Test the connection
dbt debug

Create a simple model (note: Spark SQL, not T-SQL):

-- models/staging/stg_test.sql
select
    1 as id,
    'hello from fabric lakehouse' as message,
    current_timestamp() as created_at

Then run it:

dbt run --select stg_test

This will create a Delta table in your Lakehouse. The first run will be noticeably slower than the Data Warehouse path because a Spark session needs to spin up.

Common Errors and Fixes

"Login failed" or "AADSTS" errors

This almost always means an authentication problem.

  • Service principal: Double-check tenant_id, client_id, and client_secret. Make sure the app registration has been granted access in the Fabric workspace.
  • CLI auth: Run az login again. Tokens expire. You can check with az account show.
  • Token scope: The service principal needs the correct API permissions. In the Azure Portal, check that it has delegated or application permissions for the Fabric API.

"Cannot connect to server" or endpoint not found

Error: Unable to connect to server your-workspace.datawarehouse.fabric.microsoft.com

Check the following:

  1. Is the server name correct? Copy it directly from the Fabric portal.
  2. Is the ODBC driver installed? Run odbcinst -j to check.
  3. Is a firewall blocking outbound connections on port 1433?
  4. Is the workspace capacity active? (See below.)

"Capacity is paused" or mysterious timeouts

Fabric uses capacity units. If you're on pay-as-you-go and your capacity is paused, nothing will connect. You'll see vague timeout errors rather than a clear "capacity paused" message.

Go to the Fabric admin portal, check your capacity, and make sure it's running. If you're using a trial capacity, remember that trial capacities have time limits.

"Permission denied" on schema or table operations

Fabric Data Warehouse has its own permission model. If your service principal or user can connect but can't create tables:

  1. Check that the identity has at least Contributor access in the Fabric workspace
  2. For the Data Warehouse specifically, the identity may need db_owner or CONTROL permission on the database
  3. In the Fabric portal, verify the workspace role assignments under Manage access

ODBC driver version mismatch

Error: [unixODBC][Driver Manager]Can't open lib 'ODBC Driver 18 for SQL Server'

This means the driver isn't installed or the name doesn't match. Run:

odbcinst -q -d

This lists installed drivers. Make sure the driver name in your profiles.yml matches exactly.

Fabric-Specific Gotchas

T-SQL vs Spark SQL Differences

If you're writing models for Fabric Data Warehouse (dbt-fabric), you're writing T-SQL. If you're writing for Fabric Lakehouse (dbt-fabricspark), you're writing Spark SQL. These are not interchangeable.

Common differences that will trip you up:

| Operation | T-SQL (Data Warehouse) | Spark SQL (Lakehouse) |
| --- | --- | --- |
| Current timestamp | GETDATE() | current_timestamp() |
| Date formatting | FORMAT(date, 'yyyy-MM-dd') | date_format(date, 'yyyy-MM-dd') |
| String concatenation | + or CONCAT() | concat() or \|\| |
| Type casting | CAST(x AS VARCHAR) | CAST(x AS STRING) |
| Conditional | IIF(cond, a, b) | IF(cond, a, b) |

If you're using dbt packages written for Snowflake or BigQuery, they likely won't work out of the box with either Fabric adapter. Check package compatibility before adding them to your project.
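One way to keep shared logic portable across the two adapters is a small macro that branches on the profile's adapter type, using dbt's built-in target variable. A sketch:

```sql
-- macros/current_ts.sql: returns the dialect-appropriate timestamp function
{% macro current_ts() -%}
    {%- if target.type == 'fabric' -%} GETDATE()
    {%- else -%} current_timestamp()
    {%- endif -%}
{%- endmacro %}
```

Models then call {{ current_ts() }} instead of hard-coding either function.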

Lakehouse Tables vs Warehouse Tables

Fabric Data Warehouse stores tables in a managed columnar format accessible via T-SQL. Fabric Lakehouse stores tables as Delta Lake files in OneLake. Both are queryable from Power BI, but they behave differently:

  • Materializations: The Data Warehouse adapter supports views, tables, and incremental models. The Lakehouse adapter supports tables and incremental models; view support may be limited.
  • Performance: Data Warehouse queries return faster for typical analytics queries. Lakehouse queries have Spark session startup overhead.
  • Cross-access: You can query a Lakehouse's SQL analytics endpoint from the Data Warehouse using cross-database queries, but you can't go the other direction easily.

Schema Handling

Fabric Data Warehouse supports custom schemas. Fabric Lakehouse is more limited — you typically work within the default schema. If your dbt project uses custom schemas heavily (via generate_schema_name macros), test this carefully with the Lakehouse adapter.
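If you need such a project to run against a Lakehouse anyway, one pragmatic sketch is overriding generate_schema_name to ignore custom schemas entirely (the macro signature below is dbt's standard one; whether you want this depends on your project):

```sql
-- macros/generate_schema_name.sql: pin every model to the profile's schema
{% macro generate_schema_name(custom_schema_name, node) -%}
    {{ target.schema }}
{%- endmacro %}
```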

Production Considerations

Fabric Data Warehouse in Production

Service principal authentication is the clear path here. Set up an Azure AD app registration, store the credentials in your CI/CD secrets or vault, and you're done. This works in Docker, Kubernetes, Airflow, or any automated environment.

# Example: running dbt in Docker with service principal
docker run --rm \
  -e AZURE_TENANT_ID="$AZURE_TENANT_ID" \
  -e AZURE_CLIENT_ID="$AZURE_CLIENT_ID" \
  -e AZURE_CLIENT_SECRET="$AZURE_CLIENT_SECRET" \
  -e FABRIC_SERVER="your-workspace.datawarehouse.fabric.microsoft.com" \
  -e FABRIC_DATABASE="your_warehouse" \
  my-dbt-image:latest \
  dbt build --target prod
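The same environment-variable pattern works in CI. A sketch of a GitHub Actions step (the workflow layout and secret names are assumptions, not prescribed by the adapter):

```yaml
# .github/workflows/dbt.yml (fragment)
- name: Run dbt against Fabric
  env:
    AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
    AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
    AZURE_CLIENT_SECRET: ${{ secrets.AZURE_CLIENT_SECRET }}
    FABRIC_SERVER: ${{ secrets.FABRIC_SERVER }}
    FABRIC_DATABASE: ${{ secrets.FABRIC_DATABASE }}
  run: dbt build --target prod
```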

Fabric Lakehouse in Production (The Hard Part)

Since dbt-fabricspark only supports CLI authentication, running it in Docker or CI/CD requires a workaround. The approach is to use az login with a service principal inside the container:

# In your Docker entrypoint or CI script
az login --service-principal \
  --username "$AZURE_CLIENT_ID" \
  --password "$AZURE_CLIENT_SECRET" \
  --tenant "$AZURE_TENANT_ID"

# Now dbt can use CLI auth
dbt build --target prod

This works, but it's fragile:

  • You need the Azure CLI installed in your Docker image (adds ~200MB)
  • Token refresh isn't automatic for long-running sessions
  • If the az login step fails silently, dbt gives you a confusing auth error

If you're choosing between Fabric Data Warehouse and Fabric Lakehouse for a new project and production automation matters to you, the Data Warehouse path is significantly easier to operate.

Capacity Management

In production, make sure your Fabric capacity is always running. If you're on pay-as-you-go, either:

  • Keep the capacity running 24/7 (expensive)
  • Use the Fabric REST API to resume capacity before dbt runs and pause it after (cost-effective but adds orchestration complexity)

# Resume capacity via Azure CLI
az fabric capacity resume \
  --resource-group my-rg \
  --capacity-name my-capacity

# Wait for capacity to be ready (it takes a minute or two)
sleep 120

# Run dbt
dbt build --target prod

# Pause capacity
az fabric capacity suspend \
  --resource-group my-rg \
  --capacity-name my-capacity
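The sleep 120 above is a guess; capacities don't resume on a fixed schedule. A more robust sketch polls for status instead. Here check_status is a stand-in for whatever reports the capacity state (for example, parsing JSON from the Fabric API), not a real client:

```python
import time


def wait_until_active(check_status, timeout=300, interval=10):
    """Poll check_status() until it returns "Active" or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check_status() == "Active":
            return True  # capacity is ready; safe to start dbt
        time.sleep(interval)  # not ready yet; poll again shortly
    return False  # timed out; fail the job rather than run against a paused capacity
```

Gate the dbt invocation on the return value so a slow resume fails loudly instead of producing vague connection timeouts mid-run.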

A Simpler Way to Run dbt with Fabric

Setting up dbt with Fabric isn't as plug-and-play as it is with Snowflake or BigQuery. Between the ODBC drivers, Azure authentication, capacity management, and adapter quirks, there's a lot of surface area for things to go wrong.

[ModelDock](https://modeldock.run) supports both dbt-fabric and dbt-fabricspark adapters. You connect your Git repo, enter your Fabric credentials (encrypted with AES-256-GCM — they never sit in a config file), set a schedule, and your dbt project runs in an isolated container with full logs and artifact storage. All the ODBC driver setup, authentication wiring, and capacity management headaches are handled for you.

Free during open beta. No credit card required.

Ready to run dbt-core in production?

ModelDock handles scheduling, infrastructure, and credential management so you don't have to.

Start For Free