dbt-core · production · tutorial

How to Run dbt-core in Production: The Complete Guide

Every approach to running dbt-core in production — cron, Airflow, Prefect, GitHub Actions, and managed services — with real code, trade-offs, and recommendations.

ModelDock Team · February 16, 2026 · 8 min read

You've built your dbt project. Models run perfectly on your laptop. Tests pass. Documentation looks great.

Now what?

Getting dbt-core into production is where most teams hit a wall. dbt-core doesn't ship with a scheduler, a deployment system, or credential management. You need to build all of that yourself — or find something that does it for you.

This guide covers every major approach, with real code, honest trade-offs, and a clear recommendation at the end.

The Problem

Running dbt build locally is easy. Running it reliably on a schedule, with secure credentials, proper logging, failure alerts, and no human intervention — that's a different challenge entirely.

Here's what "running dbt in production" actually means:

  1. Scheduling: Triggering dbt runs on a cron or event-based schedule
  2. Environment isolation: Running dbt in a clean, reproducible environment (not your laptop)
  3. Credential management: Keeping warehouse passwords and tokens secure
  4. Monitoring: Knowing when runs fail, and why
  5. Deployment: Getting code changes from Git into the production environment
  6. Version management: Pinning dbt and adapter versions to avoid surprise breakages
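Point 4 is more concrete than it sounds: every dbt invocation writes a `target/run_results.json` artifact you can inspect. A minimal monitoring sketch, assuming dbt's documented run-results schema (`results[].unique_id`, `results[].status`):

```python
# Sketch: summarize a dbt run from target/run_results.json.
# Field names follow dbt's documented run-results artifact schema.
import json

def failed_nodes(run_results: dict) -> list[str]:
    """Return the unique_ids of nodes that errored or failed."""
    return [
        r["unique_id"]
        for r in run_results.get("results", [])
        if r.get("status") in ("error", "fail")
    ]

def summarize(path: str = "target/run_results.json") -> list[str]:
    """Load the artifact and return failing nodes, for alerting."""
    with open(path) as f:
        return failed_nodes(json.load(f))
```

Every option below ultimately produces this same artifact; the difference is whether you have to wire up the parsing and alerting yourself.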

Let's walk through the options.

Option 1: Cron + Bare Metal

The simplest approach. Install dbt on a server, write a shell script, add a cron entry.

#!/bin/bash
# /opt/dbt/run.sh
set -e
cd /opt/dbt/my-project
source /opt/dbt/venv/bin/activate
export DBT_PROFILES_DIR=/opt/dbt/profiles
dbt build --target prod 2>&1 | tee -a /var/log/dbt/run-$(date +%Y%m%d-%H%M%S).log

And the crontab entry:

# Run dbt every day at 6 AM UTC
0 6 * * * /opt/dbt/run.sh

Pros

  • Dead simple to set up
  • No additional tools or dependencies
  • Easy to understand

Cons

  • No retry logic
  • No alerting (unless you build it)
  • No dependency management between jobs
  • Credentials sit in plain text on the server
  • No visibility into run history
  • Scaling to multiple projects is painful
  • Server maintenance is on you
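The first two gaps are the ones that bite hardest. A minimal sketch of the retry-and-alert wrapper you'd have to bolt on yourself — the webhook URL is a placeholder, and the payload shape assumes a standard Slack-style incoming webhook:

```python
# Sketch of the retry + alerting layer cron leaves you to build.
# The webhook URL and dbt command below are placeholder assumptions.
import json
import subprocess
import time
import urllib.request

def run_with_retries(cmd: list[str], retries: int = 2, delay_s: int = 60) -> bool:
    """Run cmd, retrying on non-zero exit. Returns True on success."""
    for attempt in range(retries + 1):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True
        if attempt < retries:
            time.sleep(delay_s)
    return False

def alert(webhook_url: str, message: str) -> None:
    """Post a failure message to a Slack-style incoming webhook."""
    body = json.dumps({"text": message}).encode()
    req = urllib.request.Request(
        webhook_url, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    if not run_with_retries(["dbt", "build", "--target", "prod"]):
        alert("https://hooks.slack.com/services/...", "dbt production run failed")
```

Every orchestrator in the rest of this guide ships some version of this loop out of the box.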

Verdict: Fine for a weekend project. Not production-grade.

Option 2: GitHub Actions

Many teams already use GitHub Actions for CI. Why not use it for scheduling too?

# .github/workflows/dbt-production.yml
name: dbt Production Run

on:
  schedule:
    - cron: '0 6 * * *'  # Daily at 6 AM UTC
  workflow_dispatch:      # Manual trigger

jobs:
  dbt-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Install dbt
        run: |
          pip install dbt-core dbt-postgres  # swap adapter as needed

      - name: Create profiles.yml
        run: |
          mkdir -p ~/.dbt
          cat > ~/.dbt/profiles.yml << EOF
          my_project:
            target: prod
            outputs:
              prod:
                type: postgres
                host: ${{ secrets.DB_HOST }}
                port: 5432
                user: ${{ secrets.DB_USER }}
                password: ${{ secrets.DB_PASSWORD }}
                dbname: ${{ secrets.DB_NAME }}
                schema: analytics
                threads: 4
          EOF

      - name: Run dbt
        run: dbt build --target prod

      - name: Upload artifacts
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: dbt-artifacts
          path: target/
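An alternative to templating the file in the workflow: dbt's profiles support the `env_var()` function, so a profiles.yml like the sketch below can live in version control and read credentials from the environment at runtime (the workflow then passes GitHub Secrets as `env:` variables on the dbt step). The target details here mirror the Postgres example above and are assumptions:

```yaml
# profiles.yml — reads credentials from environment variables at runtime
my_project:
  target: prod
  outputs:
    prod:
      type: postgres
      host: "{{ env_var('DB_HOST') }}"
      port: 5432
      user: "{{ env_var('DB_USER') }}"
      password: "{{ env_var('DB_PASSWORD') }}"
      dbname: "{{ env_var('DB_NAME') }}"
      schema: analytics
      threads: 4
```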

Pros

  • No infrastructure to manage
  • Credentials stored in GitHub Secrets
  • Built-in run history and logs
  • Easy to trigger manually
  • Most teams already have GitHub

Cons

  • GitHub Actions runners have limited compute
  • Cron scheduling is approximate (can be delayed by minutes)
  • No native dependency management between dbt jobs
  • Run minutes cost money at scale (2,000 free minutes/month for private repos)
  • No built-in monitoring or alerting beyond GitHub notifications
  • Cold start on every run (installing dbt + dependencies each time)

Verdict: A solid middle ground for small teams. Breaks down with complex multi-project setups.

Option 3: Airflow

Apache Airflow is the industry standard for workflow orchestration. It's powerful, flexible, and battle-tested. It's also complex.

# dags/dbt_production.py
from airflow import DAG
from airflow.operators.bash import BashOperator
from datetime import datetime, timedelta

default_args = {
    'owner': 'data-team',
    'retries': 2,
    'retry_delay': timedelta(minutes=5),
}

with DAG(
    'dbt_production',
    default_args=default_args,
    schedule_interval='0 6 * * *',
    start_date=datetime(2026, 1, 1),
    catchup=False,
) as dag:
    dbt_build = BashOperator(
        task_id='dbt_build',
        bash_command=(
            'cd /opt/dbt/my-project && '
            'dbt build --target prod'
        ),
    )

Or with Astronomer's Cosmos package (recommended for production), which renders each dbt model as its own Airflow task:

from cosmos import DbtTaskGroup, ProjectConfig, ProfileConfig

profile_config = ProfileConfig(
    profile_name="my_project",
    target_name="prod",
    profiles_yml_filepath="/opt/dbt/profiles.yml",
)

dbt_tasks = DbtTaskGroup(
    project_config=ProjectConfig("/opt/dbt/my-project"),
    profile_config=profile_config,
    default_args=default_args,
)

Pros

  • Industry standard — large community, extensive documentation
  • Rich scheduling (cron, sensors, event-driven)
  • DAG dependency management
  • Built-in retries, alerting, and logging
  • Web UI for monitoring
  • Works with any orchestration need, not just dbt

Cons

  • Significant operational burden: You need to run and maintain Airflow itself (webserver, scheduler, worker, database, message broker)
  • Steep learning curve
  • Docker/Kubernetes setup required for isolated dbt runs
  • Credential management needs additional tooling (Airflow Connections, Vault, etc.)
  • Overkill if dbt is your only workload
  • Upgrading Airflow is notoriously painful

Verdict: The right choice if you already run Airflow for other workloads. Hard to justify setting up from scratch just for dbt.

Option 4: Prefect

Prefect positions itself as a modern alternative to Airflow. Less infrastructure, more Python-native.

from prefect import flow, task
import subprocess

@task(retries=2, retry_delay_seconds=300)
def run_dbt(command: str):
    result = subprocess.run(
        f"dbt {command} --target prod",
        shell=True,
        cwd="/opt/dbt/my-project",
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        raise Exception(f"dbt failed: {result.stderr}")
    return result.stdout

@flow(name="dbt-production")
def dbt_production():
    run_dbt("build")

if __name__ == "__main__":
    dbt_production()

Pros

  • Pure Python — no DAG files, no config boilerplate
  • Prefect Cloud handles orchestration (no self-hosted scheduler)
  • Better developer experience than Airflow
  • Built-in observability

Cons

  • Prefect Cloud is a paid service for production features
  • Smaller community than Airflow
  • Still need to manage the compute environment where dbt runs
  • Credential management is separate
  • Lock-in to Prefect's execution model

Verdict: Good developer experience, but you're still managing dbt's runtime environment yourself.

Option 5: Dagster

Dagster takes a software-defined assets approach that maps well to dbt's model concept.

from dagster_dbt import DbtCliResource, dbt_assets
from dagster import Definitions

dbt_resource = DbtCliResource(project_dir="/opt/dbt/my-project")

@dbt_assets(manifest_path="/opt/dbt/my-project/target/manifest.json")
def my_dbt_assets(context, dbt: DbtCliResource):
    yield from dbt.cli(["build"], context=context).stream()

defs = Definitions(
    assets=[my_dbt_assets],
    resources={"dbt": dbt_resource},
)

Pros

  • First-class dbt integration
  • Asset-centric model aligns with how dbt thinks
  • Good observability and lineage
  • Dagster Cloud available for managed hosting

Cons

  • Requires learning Dagster's asset model
  • Dagster Cloud is paid for production features
  • Self-hosting Dagster has similar complexity to Airflow
  • Smaller ecosystem than Airflow

Verdict: If you buy into the software-defined assets paradigm, Dagster is excellent. But it's a commitment.

Option 6: Managed dbt-core Schedulers

The newest category. Services that handle the infrastructure, scheduling, and credential management specifically for dbt-core.

This is the approach ModelDock takes: connect your Git repo, pick your adapter, set a cron schedule, provide your credentials. The service handles everything else — building Docker images, running dbt in isolated containers, storing artifacts, and monitoring runs.

Pros

  • Zero infrastructure to manage
  • Purpose-built for dbt-core
  • Credentials encrypted at rest (AES-256-GCM)
  • Run history, logs, and artifacts stored automatically
  • Works with all dbt adapters
  • Minutes to set up, not days

Cons

  • Less flexibility than running your own orchestrator
  • Newer category — fewer options to choose from
  • Adds a dependency on a third-party service

Verdict: The fastest path from "dbt works locally" to "dbt runs in production."

Comparison Table

| Approach | Setup Time | Maintenance | Cost | Monitoring | Credential Security |
| --- | --- | --- | --- | --- | --- |
| Cron | Minutes | High | Server costs | None | Poor |
| GitHub Actions | Hours | Low | Free tier + minutes | Basic | Good (Secrets) |
| Airflow | Days | High | Infrastructure | Excellent | Good (with effort) |
| Prefect | Hours | Medium | Free tier + Cloud | Good | Good |
| Dagster | Hours | Medium | Free tier + Cloud | Excellent | Good |
| Managed (ModelDock) | Minutes | None | Free tier available | Good | Excellent |

Our Recommendation

If you're a small team that just needs dbt running on a schedule, don't over-engineer it. You don't need Airflow. You don't need Kubernetes. You don't need a platform team.

Start with the simplest thing that meets your requirements:

  1. Solo developer, single project? GitHub Actions gets the job done.
  2. Already running Airflow? Add dbt to your existing setup.
  3. Want production reliability without the infrastructure? Use a managed scheduler.

The goal is to spend your time writing dbt models, not maintaining infrastructure.

Skip the Infrastructure

ModelDock was built for teams that want dbt-core in production without the operational overhead. Connect your repo, configure your warehouse credentials, set a schedule — done.

Free during open beta. No credit card required.

Ready to run dbt-core in production?

ModelDock handles scheduling, infrastructure, and credential management so you don't have to.

Start For Free