The simplest way to run dbt-core in production
Connect your repo, pick your adapter, set a schedule. Your dbt-core project runs in production. No Airflow to manage. No Docker to wrangle. No CI/CD to build.
Point it at your GitHub repo
Pick your dbt-core version and warehouse adapter
Set up your warehouse connectivity (AES-256-GCM encrypted)
Set a cron schedule (see the example below)
Your dbt-core project runs in production
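Schedules use standard five-field cron syntax. As a quick illustration of what an expression means, here's one way to preview upcoming run times with the third-party croniter library (shown purely for sanity-checking a schedule; it's not part of ModelDock):

```python
from datetime import datetime, timezone

from croniter import croniter  # third-party: pip install croniter

# minute hour day-of-month month day-of-week: 06:00 UTC, Monday-Friday
schedule = "0 6 * * 1-5"

it = croniter(schedule, datetime(2025, 1, 1, tzinfo=timezone.utc))
for _ in range(3):
    print(it.get_next(datetime))  # next three run times after the start date
```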
ModelDock does one thing and does it well
Connect your repo. Private repos supported with encrypted OAuth tokens.
PostgreSQL, Snowflake, BigQuery, Redshift, Databricks, and Fabric (Lakehouse & Warehouse) supported today. Docker images built on demand.
Set it and forget it. Your dbt-core project runs on your schedule, reliably.
Visualise your DAG. Focus mode, filters, search, upstream/downstream exploration.
Full run history with real-time logs, task progress, run comparison, historical analysis and artifact storage.
Warehouse credentials are the keys to the kingdom. They're encrypted at rest with AES-256-GCM (a short sketch below).
Every run in its own Docker container. No shared state, no cross-contamination. Containers are automatically destroyed after each execution.
Email and webhook notifications on failure or success. Never miss a broken pipeline.
The whole platform runs on a single environment. No Kubernetes. No magic.
Each run spins up an isolated container for your specific adapter and dbt-core version.
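To make the encryption claim concrete: AES-256-GCM is an authenticated cipher, so a tampered ciphertext fails to decrypt rather than yielding garbage. A minimal sketch of the primitive using Python's cryptography package (an illustration of the scheme, not ModelDock's actual code):

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

key = AESGCM.generate_key(bit_length=256)  # 256-bit key, stored separately from the data
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # a fresh 96-bit nonce for every encryption
ciphertext = aesgcm.encrypt(nonce, b"warehouse-password", None)

# GCM authenticates as well as encrypts: a tampered ciphertext raises InvalidTag
assert aesgcm.decrypt(nonce, ciphertext, None) == b"warehouse-password"
```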
Teams of 2-10 people who know dbt-core well and just need a reliable place to run it.
Teams that want production scheduling without the infrastructure overhead.
Anyone who's spent a weekend setting up Airflow just to run dbt-core on a cron and thought 'there has to be a better way'.
Free during open beta. No credit card required.
Billing starts after the beta.
If you use dbt-core, you know the drill. Your models work great locally. Now you need them running on a schedule, in production, reliably. So you start researching — and you're quickly looking at setting up Airflow, writing Dockerfiles, building CI/CD pipelines, and maintaining all of it. Indefinitely.
All you wanted was to run dbt build on a cron. Instead you've become a platform engineer.
We decided to build a simpler path. ModelDock.run was born out of 20+ years of experience across data engineering and architecture, Linux, data systems, and on-prem and cloud infrastructure. We use dbt-core every day and think it's a fantastic tool — but the gap between “dbt-core works on my laptop” and “dbt-core runs reliably in production” is wider than it should be.
Most data analysts and analytics engineers — the people actually writing the dbt-core models — shouldn't need to know how to configure Airflow or write Dockerfiles just to get their code running on a schedule.
So we built what we wanted: a platform where you point it at your repo, tell it when to run, and forget about it. The entire infrastructure — server, deployment, security, monitoring — runs on a single environment. The hard part was making all of that complexity invisible to the end user.
Behind the scenes, the platform dynamically generates an Airflow DAG from your project config. On schedule, it clones your repo, generates profiles.yml from your encrypted credentials, spins up an isolated Docker container for your specific adapter and dbt-core version, runs the job, stores the artifacts, and tears down the container. Every run is fully isolated — no shared state, no cross-contamination.
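For readers who know Airflow, here is a rough sketch of what such a generated DAG might look like. The dag_id, image tag, command, and environment below are illustrative assumptions, not ModelDock's actual internals:

```python
# Conceptual sketch only: the real generated DAG is an internal detail.
from datetime import datetime

from airflow import DAG
from airflow.providers.docker.operators.docker import DockerOperator

with DAG(
    dag_id="dbt_project_acme",            # hypothetical project id
    schedule="0 6 * * *",                 # the cron schedule you configured
    start_date=datetime(2025, 1, 1),
    catchup=False,
) as dag:
    DockerOperator(
        task_id="dbt_build",
        image="modeldock/dbt-snowflake:1.8",  # image for your adapter + dbt-core version (name assumed)
        command="dbt build --profiles-dir /secrets",
        environment={"GIT_REPO": "https://github.com/acme/analytics"},  # assumed wiring
        auto_remove="force",              # string form in recent docker providers; older versions take a bool
    )
```

The auto_remove setting is what guarantees teardown: the container is deleted as soon as the run finishes, so nothing leaks between runs.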
The platform is live and in free open beta. We want real users putting it through its paces before we start charging. Break it, challenge it, tell us what's missing.
ModelDock is in open beta. Found a bug? Have a feature request? Just want to say hi? We'd love to hear from you.