Free during open beta — no credit card required

The simplest way to run dbt-core in production

Running dbt-core in production shouldn't be this hard

Connect your repo, pick your adapter, set a schedule. Your dbt-core project runs in production. No Airflow to manage. No Docker to wrangle. No CI/CD to build.

Up and running in minutes

1

Connect

Point it at your GitHub repo

2

Configure

Pick your dbt-core version and warehouse adapter

3

Credentials

Add your warehouse credentials (stored with AES-256-GCM encryption)

4

Schedule

Set a cron schedule

5

Done

Your dbt-core project runs in production

Everything you need, nothing you don't

ModelDock does one thing and does it well

GitHub Integration

Connect your repo. Private repos supported with encrypted OAuth tokens.

Multi-Adapter

PostgreSQL, Snowflake, BigQuery, Redshift, Databricks, and Microsoft Fabric (Lakehouse & Warehouse) supported today. Docker images are built on demand for each adapter.

Cron Scheduling

Set it and forget it. Your dbt-core project runs on your schedule, reliably.
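Schedules use standard five-field cron expressions (minute, hour, day-of-month, month, day-of-week). As an illustration of how such an expression is interpreted — a minimal sketch, not ModelDock's actual scheduler, and without support for ranges like 1-5:

```python
from datetime import datetime

def field_matches(field: str, value: int) -> bool:
    """Check one cron field ('*', '*/n', 'a,b,c', or a single number)."""
    if field == "*":
        return True
    if field.startswith("*/"):                 # step values, e.g. */15
        return value % int(field[2:]) == 0
    return value in {int(part) for part in field.split(",")}

def cron_matches(expr: str, when: datetime) -> bool:
    """True if `when` satisfies a five-field cron expression."""
    minute, hour, dom, month, dow = expr.split()
    return (field_matches(minute, when.minute)
            and field_matches(hour, when.hour)
            and field_matches(dom, when.day)
            and field_matches(month, when.month)
            and field_matches(dow, when.isoweekday() % 7))  # cron: Sunday = 0

# "Run dbt at 06:00 every weekday"
print(cron_matches("0 6 * * 1,2,3,4,5", datetime(2024, 1, 8, 6, 0)))  # Monday -> True
print(cron_matches("0 6 * * 1,2,3,4,5", datetime(2024, 1, 7, 6, 0)))  # Sunday -> False
```

A real scheduler evaluates this every minute against the current time and fires the run on a match.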

Interactive Lineage

Visualise your DAG. Focus mode, filters, search, upstream/downstream exploration.

Run History & Logs

Full run history with real-time logs, task progress, run comparison, historical analysis and artifact storage.

Encrypted Credentials

Warehouse credentials are the keys to the kingdom, so they're stored with AES-256-GCM encryption at rest.

Isolated Execution

Every run in its own Docker container. No shared state, no cross-contamination. Containers are automatically destroyed after each execution.
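This kind of per-run isolation maps directly onto Docker's `--rm` flag, which removes the container when the process exits. A hedged sketch of how such an invocation could be composed — the image naming scheme, network name, and mount layout here are hypothetical, not ModelDock's actual setup:

```python
import shlex

def build_run_command(adapter: str, dbt_version: str, project_dir: str) -> list:
    """Compose a `docker run` invocation for one isolated dbt run.
    Image name and network are illustrative placeholders."""
    image = f"dbt-runner-{adapter}:{dbt_version}"      # hypothetical image naming
    return [
        "docker", "run",
        "--rm",                                  # destroy the container after the run
        "-v", f"{project_dir}:/project:ro",      # mount the cloned repo read-only
        image,
        "dbt", "build", "--project-dir", "/project",
    ]

cmd = build_run_command("snowflake", "1.8.0", "/tmp/run-42/repo")
print(shlex.join(cmd))
```

Because nothing is written back to the host and the container is gone after each run, consecutive runs can't leak state into each other.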

Alerts & Monitoring

Email and webhook notifications on failure or success. Never miss a broken pipeline.

Simple, transparent architecture

The whole platform runs on a single environment. No Kubernetes. No magic.

Docker Compose
  • Next.js: Web App
  • Apache Airflow: Orchestration
  • PostgreSQL: Metadata
  • Local FS / S3: Artifact Storage
  • Caddy: HTTPS / Reverse Proxy
  • Sentry: Error Tracking
  • Fail2Ban: Brute-Force Protection
  • Umami: Privacy-Focused Analytics
  • Isolated dbt-core Docker Containers: one per run, auto-removed

Each run spins up an isolated container for your specific adapter and dbt-core version

Built for people who'd rather write SQL than YAML

Small Data Teams

2-10 people who know dbt-core well and just need a reliable place to run it.

Solo Engineers & Consultants

Engineers and consultants who want production scheduling without the infrastructure overhead.

The Weekend Warrior

Anyone who's spent a weekend setting up Airflow just to run dbt-core on a cron and thought 'there has to be a better way'.

Simple, honest pricing

Free during open beta. No credit card required.

Open Beta

Free

$0/mo
  • 1 project
  • 100 runs/month
  • 1 user
Start For Free
Open Beta

Starter

$34.99/mo
  • 3 projects
  • 1,000 runs/month
  • 1 user
Start For Free

Billing starts after beta

Open Beta

Team

$99.99/mo
  • 10 projects
  • Unlimited runs
  • 5 users
Start For Free

Billing starts after beta

Open Beta

Pro

$249.99/mo
  • Unlimited projects
  • Unlimited runs
  • Unlimited users
Start For Free

Billing starts after beta

Why we built this

If you use dbt-core, you know the drill. Your models work great locally. Now you need them running on a schedule, in production, reliably. So you start researching — and you're quickly looking at setting up Airflow, writing Dockerfiles, building CI/CD pipelines, and maintaining all of it. Indefinitely.

All you wanted was to run dbt build on a cron. Instead you've become a platform engineer.

We decided to build a simpler path. ModelDock.run was born out of 20+ years of experience in data engineering & architecture, Linux, data systems and on-prem & cloud infrastructure. We use dbt-core every day and think it's a fantastic tool — but the gap between “dbt-core works on my laptop” and “dbt-core runs reliably in production” is wider than it should be.

Most data analysts and analytics engineers — the people actually writing the dbt-core models — shouldn't need to know how to configure Airflow or write Dockerfiles just to get their code running on a schedule.

So we built what we wanted: a platform where you point it at your repo, tell it when to run, and forget about it. The entire infrastructure — server, deployment, security, monitoring — runs on a single environment. The hard part was making that complexity invisible to the end user.

Behind the scenes, the platform dynamically generates an Airflow DAG from your project config. On schedule, it clones your repo, generates profiles.yml from your encrypted credentials, spins up an isolated Docker container for your specific adapter and dbt-core version, runs the job, stores the artifacts, and tears down the container. Every run is fully isolated — no shared state, no cross-contamination.
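The profiles.yml step above follows dbt's documented profile format. A minimal sketch for a Postgres target — the `creds` dict stands in for the decrypted AES-256-GCM payload, and the helper itself is illustrative, not ModelDock's code:

```python
def render_profiles_yml(profile: str, creds: dict) -> str:
    """Render a minimal profiles.yml for a Postgres target.
    In the real pipeline `creds` would come from decrypting the
    stored credential blob; here it's a plain dict for illustration."""
    return (
        f"{profile}:\n"
        f"  target: prod\n"
        f"  outputs:\n"
        f"    prod:\n"
        f"      type: postgres\n"
        f"      host: {creds['host']}\n"
        f"      port: {creds['port']}\n"
        f"      user: {creds['user']}\n"
        f"      password: {creds['password']}\n"
        f"      dbname: {creds['dbname']}\n"
        f"      schema: {creds['schema']}\n"
        f"      threads: 4\n"
    )

creds = {"host": "db.example.com", "port": 5432, "user": "dbt",
         "password": "s3cret", "dbname": "analytics", "schema": "public"}
print(render_profiles_yml("my_project", creds))
```

The rendered file is written into the run's container only, so plaintext credentials live exactly as long as the container does.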

The platform is live and in free open beta. We want real users putting it through its paces before we start charging. Break it, challenge it, tell us what's missing.

We want your feedback

ModelDock is in open beta. Found a bug? Have a feature request? Just want to say hi? We'd love to hear from you.

Frequently asked questions