Overview
Alembic is the de-facto standard database migration tool for Python, built by the author of SQLAlchemy. Whether you are starting a new project or retrofitting migrations onto a legacy application, this guide walks you through every step — from a blank terminal to a fully automated, version-controlled migration pipeline.
1. What is Alembic and Why Do You Need It?
When you build a Python application with a relational database, your schema inevitably changes over time — you add columns, rename tables, create indexes, and drop obsolete constraints. Doing this manually with raw SQL is error-prone and impossible to reproduce reliably across multiple environments.
Alembic solves this with three powerful capabilities:
| Capability | What it means |
|---|---|
| Version control for your schema | Every change is a numbered, ordered migration file stored in Git alongside your code. |
| Autogenerate | Alembic compares your SQLAlchemy models with the live database and writes the migration script for you. |
| Upgrade / Downgrade | Every migration has an upgrade() and an optional downgrade() function, letting you roll forward or back to any point in history. |
Alembic is not tied to a specific ORM or framework. It works with SQLAlchemy Core, SQLAlchemy ORM, Flask-SQLAlchemy, SQLModel, FastAPI, and plain Python scripts.
2. Core Concepts
Before writing any code, understand the vocabulary Alembic uses.
alembic/
├── env.py              ← controls how Alembic connects and discovers models
├── script.py.mako      ← Mako template for new migration files
└── versions/
    ├── 3e4a8f1b2c_create_users_table.py
    └── 9d2c1a0e7f_add_email_index.py
alembic.ini             ← top-level configuration (DSN, paths, logging)
Key terms
- Revision — a single migration file identified by a short random hex ID (e.g. 3e4a8f1b2c).
- Head — the latest revision in the chain. Running alembic upgrade head applies every pending revision.
- down_revision — the pointer inside each revision file that links it to its parent, forming a singly-linked list (or a DAG for branches).
- alembic_version table — a single-row table Alembic creates in your database to track which revision is currently applied.
- Online mode — migration runs against a live database connection (the default).
- Offline mode — migration generates raw SQL scripts without connecting, useful for DBAs or audit workflows.
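The revision chain described above is easy to model directly. This toy sketch (hypothetical revision IDs, plain Python rather than Alembic's internals) shows how following down_revision pointers yields the base-to-head order that upgrades run in:

```python
# Toy model of an Alembic revision chain (hypothetical IDs).
# Each revision maps to its down_revision; None marks the base.
revisions = {
    "3e4a8f1b2c": None,          # create_users_table (base)
    "9d2c1a0e7f": "3e4a8f1b2c",  # add_email_index
    "b81f00aa12": "9d2c1a0e7f",  # add_posts_table (head)
}

def upgrade_path(revisions):
    """Order revisions base-first by walking down_revision pointers."""
    # The head is the one revision that nothing else points to.
    parents = set(revisions.values())
    head = next(rev for rev in revisions if rev not in parents)
    path = []
    current = head
    while current is not None:
        path.append(current)
        current = revisions[current]
    return list(reversed(path))  # base first: the order `upgrade head` applies

print(upgrade_path(revisions))
# ['3e4a8f1b2c', '9d2c1a0e7f', 'b81f00aa12']
```

Alembic performs the real version of this walk internally, with branch and dependency handling layered on top.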
3. Quick Reference: Fast Forward
Already familiar with Python tooling? Skip the prose — here’s everything you need to go from zero to running migrations.
Cheat sheet
uv add alembic
uv run alembic init alembic
# edit alembic/env.py: set target_metadata = Base.metadata and the DATABASE_URL
uv run alembic revision --autogenerate -m "initial"
uv run alembic upgrade head
Alembic Quick Setup (annotated)
1. Install
uv add alembic
2. Init
uv run alembic init alembic
3. Config
Set the DB URL via your app settings (preferred) rather than hardcoding in alembic.ini:
# alembic.ini — leave sqlalchemy.url blank; it is set programmatically in env.py
sqlalchemy.url =
4. Edit alembic/env.py
Make these targeted changes — everything else stays as generated:
import os
from myapp.database import Base
from myapp import models # noqa: F401 — registers all models with Base.metadata
# Point Alembic at your models
target_metadata = Base.metadata
# Use your app's URL (e.g. from a settings object or env var)
config.set_main_option("sqlalchemy.url", os.environ["DATABASE_URL"])
Key options to consider:
| context.configure() kwarg | When to use |
|---|---|
| compare_type=True | Detect column type changes during autogenerate |
| compare_server_default=True | Detect server-default changes (can be noisy) |
| render_as_batch=True | Required for SQLite — rewrites ALTER TABLE as a table copy |
| include_schemas=True | When using non-default PostgreSQL schemas |
SQLite extra: pass connect_args={"check_same_thread": False} on your create_engine() call in database.py, not in Alembic itself.
5. Create & apply the first migration
# Auto-generate from model diff
uv run alembic revision --autogenerate -m "initial"
# Review the generated file in alembic/versions/, then apply
uv run alembic upgrade head
6. Daily workflow thereafter
# After changing a model:
uv run alembic revision --autogenerate -m "describe_the_change"
uv run alembic upgrade head
# Check what's applied:
uv run alembic current
# Roll back one step:
uv run alembic downgrade -1
4. Installation
Prerequisites
- Python 3.8+
- A running relational database (PostgreSQL, MySQL/MariaDB, SQLite, Oracle, MS-SQL)
- pip or your preferred package manager (uv, poetry, pdm)
Install Alembic
# Using pip (recommended for most projects)
pip install alembic
# Using uv (faster)
uv add alembic
# Using poetry
poetry add alembic
# Including the database driver together
pip install alembic psycopg2-binary # PostgreSQL
pip install alembic mysqlclient # MySQL / MariaDB
pip install alembic cx_Oracle # Oracle
pip install alembic pyodbc # MS SQL Server
Verify the installation:
alembic --version
# alembic 1.13.x
Tip: Pin Alembic to a specific version in your
requirements.txt/pyproject.toml. Alembic occasionally introduces behavioral changes between minor versions that can affect autogenerate output.
5. Quick-Start: A Brand-New Project
Let’s build a small user-management service from scratch to see the full lifecycle.
5.1 Project structure target
myapp/
├── myapp/
│   ├── __init__.py
│   ├── database.py     ← SQLAlchemy engine + session
│   └── models.py       ← Your ORM models
├── alembic/
│   ├── env.py
│   ├── script.py.mako
│   └── versions/
├── alembic.ini
└── requirements.txt
5.2 Set up the database module
# myapp/database.py
from sqlalchemy import create_engine
from sqlalchemy.orm import DeclarativeBase, sessionmaker

DATABASE_URL = "postgresql+psycopg2://user:password@localhost/mydb"

engine = create_engine(DATABASE_URL, echo=True)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

class Base(DeclarativeBase):
    pass
5.3 Define your first model
# myapp/models.py
from datetime import datetime

from sqlalchemy import String, DateTime, Boolean, func
from sqlalchemy.orm import Mapped, mapped_column

from myapp.database import Base

class User(Base):
    __tablename__ = "users"

    id: Mapped[int] = mapped_column(primary_key=True)
    email: Mapped[str] = mapped_column(String(255), unique=True, nullable=False)
    username: Mapped[str] = mapped_column(String(100), nullable=False)
    is_active: Mapped[bool] = mapped_column(Boolean, default=True)
    created_at: Mapped[datetime] = mapped_column(
        DateTime(timezone=True), server_default=func.now()
    )
5.4 Initialize Alembic
Run this once, from the root of your project:
alembic init alembic
This creates alembic.ini and the alembic/ directory with starter files.
6. Understanding the Generated Files
6.1 alembic.ini
# alembic.ini (key sections)
[alembic]
# Path to the migration scripts
script_location = alembic
# The database URL — you will override this in env.py for real projects
sqlalchemy.url = driver://user:pass@host/dbname
[loggers]
keys = root,sqlalchemy,alembic
[logger_alembic]
level = INFO
handlers =
qualname = alembic.runtime.migration
Best practice: Never hard-code credentials in alembic.ini. Override sqlalchemy.url inside env.py using an environment variable (see the Environment Variables section below).
6.2 alembic/env.py
This is the most important file — it controls how Alembic connects to your database and where it discovers your models’ metadata.
The key sections you must edit:
# alembic/env.py (after customization)
import os
from logging.config import fileConfig

from sqlalchemy import engine_from_config, pool

from alembic import context

# ── 1. Import your Base so autogenerate can see the models ──────────────────
from myapp.database import Base
from myapp import models  # noqa: F401 — ensures models are registered

# ── 2. Alembic Config object ────────────────────────────────────────────────
config = context.config

# ── 3. Override the URL from an environment variable ────────────────────────
config.set_main_option("sqlalchemy.url", os.environ["DATABASE_URL"])

# ── 4. Logging ───────────────────────────────────────────────────────────────
if config.config_file_name is not None:
    fileConfig(config.config_file_name)

# ── 5. Provide metadata for autogenerate ─────────────────────────────────────
target_metadata = Base.metadata

def run_migrations_offline() -> None:
    """Run migrations without a live database connection."""
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
    )
    with context.begin_transaction():
        context.run_migrations()

def run_migrations_online() -> None:
    """Run migrations with a live database connection."""
    connectable = engine_from_config(
        config.get_section(config.config_ini_section, {}),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )
    with connectable.connect() as connection:
        context.configure(connection=connection, target_metadata=target_metadata)
        with context.begin_transaction():
            context.run_migrations()

if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
pool.NullPool is intentional. During migrations you do not want connection pooling — each migration run should use a fresh connection and close it cleanly.
7. Creating and Running Migrations
7.1 Create a migration manually
# Creates an empty revision file in alembic/versions/
alembic revision -m "create_users_table"
Alembic generates a file like alembic/versions/3e4a8f1b2c_create_users_table.py:
"""create_users_table
Revision ID: 3e4a8f1b2c
Revises:
Create Date: 2026-04-15 09:00:00.000000
"""
from typing import Union
from alembic import op
import sqlalchemy as sa
# revision identifiers
revision: str = "3e4a8f1b2c"
down_revision: Union[str, None] = None # None = this is the first migration
branch_labels = None
depends_on = None
def upgrade() -> None:
op.create_table(
"users",
sa.Column("id", sa.Integer(), nullable=False),
sa.Column("email", sa.String(length=255), nullable=False),
sa.Column("username", sa.String(length=100), nullable=False),
sa.Column("is_active", sa.Boolean(), nullable=True),
sa.Column(
"created_at",
sa.DateTime(timezone=True),
server_default=sa.text("now()"),
nullable=True,
),
sa.PrimaryKeyConstraint("id"),
sa.UniqueConstraint("email"),
)
def downgrade() -> None:
op.drop_table("users")
7.2 Apply migrations
# Apply all pending migrations up to the latest
alembic upgrade head
# Apply a specific number of steps forward
alembic upgrade +2
# Apply to a specific revision ID
alembic upgrade 3e4a8f1b2c
7.3 Rollback migrations
# Roll back one step
alembic downgrade -1
# Roll back to a specific revision
alembic downgrade 9d2c1a0e7f
# Roll back everything (dangerous in production!)
alembic downgrade base
7.4 Inspect current state
# Show the current revision applied to the database
alembic current
# Show the full migration history
alembic history --verbose
# Show which revisions are pending
alembic history -r current:head
8. Autogenerate: Let Alembic Write the Migration for You
Autogenerate is Alembic’s killer feature. It compares your SQLAlchemy MetaData (your model definitions) against the live database schema and generates a diff.
alembic revision --autogenerate -m "add_profile_table"
Alembic will generate upgrade() and downgrade() functions automatically. Always review the generated file before applying it.
8.1 What autogenerate detects
| Detected automatically | NOT detected automatically |
|---|---|
| Added / removed tables | Changes to stored procedures |
| Added / removed columns | Changes to server-side defaults (sometimes) |
| Changes to column type | Sequence changes (driver-dependent) |
| Added / removed indexes | Changes to CHECK constraints (some dialects) |
| Added / removed unique constraints | Data-level changes |
| Added / removed foreign keys | Custom types without __repr__ |
8.2 Example workflow
# 1. Edit myapp/models.py — add a new "posts" table
# 2. Auto-generate the migration
alembic revision --autogenerate -m "add_posts_table"
# 3. Review the generated file
cat alembic/versions/<new_revision>_add_posts_table.py
# 4. Apply it
alembic upgrade head
8.3 Pro tip: include/exclude tables in autogenerate
# In env.py, filter out tables you don't own (e.g., third-party extensions)
def include_object(object, name, type_, reflected, compare_to):
    if type_ == "table" and name in {"spatial_ref_sys", "geography_columns"}:
        return False
    return True

context.configure(
    ...,
    target_metadata=target_metadata,
    include_object=include_object,
)
9. Branching, Merging, and Multi-Head Scenarios
When two developers create migrations simultaneously from the same head, you end up with two heads — a branched history. Alembic handles this gracefully.
9.1 Detecting multiple heads
alembic heads
# <rev_a> (head)
# <rev_b> (head)
9.2 Creating a merge revision
alembic merge -m "merge_heads" <rev_a> <rev_b>
This creates a new revision that has down_revision = ("<rev_a>", "<rev_b>"), resolving the branch.
alembic upgrade head # applies the merge + any pending revisions
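To see why a merge resolves the situation, here is a toy model (hypothetical IDs, not Alembic's implementation): a head is any revision that no other revision names as its parent, and the merge revision's tuple down_revision removes both old heads at once:

```python
# Toy revision graph (hypothetical IDs). down_revision becomes a tuple
# in a merge revision, exactly as in real Alembic revision files.
graph = {
    "base01": None,
    "rev_a": "base01",   # developer 1's migration
    "rev_b": "base01",   # developer 2's migration, created in parallel
}

def heads(graph):
    """A head is any revision that no other revision lists as a parent."""
    parents = set()
    for down in graph.values():
        if isinstance(down, tuple):
            parents.update(down)
        elif down is not None:
            parents.add(down)
    return sorted(rev for rev in graph if rev not in parents)

print(heads(graph))  # ['rev_a', 'rev_b']: two heads, upgrade is ambiguous
graph["merge01"] = ("rev_a", "rev_b")  # what `alembic merge` records
print(heads(graph))  # ['merge01']: a single head again
```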
9.3 Named branches (advanced)
Named branches let you track independent streams, such as separate microservices sharing a migration repository, or a “schema” branch vs. a “data” branch.
alembic revision --branch-label=data_migrations -m "backfill_users"
alembic upgrade data_migrations@head
10. Working with Multiple Databases
For multi-tenant or sharded applications, env.py can be customized to iterate over multiple URLs.
# env.py — multi-database pattern
import os

from sqlalchemy import create_engine, pool

DATABASES = {
    "tenant_a": os.environ["DB_TENANT_A"],
    "tenant_b": os.environ["DB_TENANT_B"],
}

def run_migrations_online() -> None:
    for tenant, url in DATABASES.items():
        print(f"Migrating {tenant}...")
        connectable = create_engine(url, poolclass=pool.NullPool)
        with connectable.connect() as connection:
            context.configure(connection=connection, target_metadata=target_metadata)
            with context.begin_transaction():
                context.run_migrations()
11. Offline / SQL-Script Mode
In regulated environments (banking, healthcare), DBAs often need to review SQL before it is applied.
# Generate SQL for all pending migrations (no DB connection required)
alembic upgrade head --sql > migrations.sql
# Generate SQL for a specific range
alembic upgrade 3e4a8f1b2c:9d2c1a0e7f --sql > patch.sql
The output is plain SQL that can be reviewed, stored in an audit log, and run by a DBA.
12. Environment Variables and Twelve-Factor Apps
Never commit database credentials. Use environment variables and override the URL in env.py:
# env.py — production-ready URL handling
import os
from urllib.parse import quote_plus

def get_database_url() -> str:
    # Option A: single DATABASE_URL env var (Heroku / Railway / Render style)
    if url := os.getenv("DATABASE_URL"):
        # Fix older postgres:// style URLs (Heroku quirk)
        return url.replace("postgres://", "postgresql+psycopg2://", 1)
    # Option B: individual components
    user = quote_plus(os.environ["DB_USER"])
    password = quote_plus(os.environ["DB_PASSWORD"])
    host = os.environ["DB_HOST"]
    port = os.environ.get("DB_PORT", "5432")
    name = os.environ["DB_NAME"]
    return f"postgresql+psycopg2://{user}:{password}@{host}:{port}/{name}"

config.set_main_option("sqlalchemy.url", get_database_url())
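The scheme rewrite and the quoting are both pure string handling, so you can sanity-check them in isolation (hypothetical credentials):

```python
from urllib.parse import quote_plus

def normalize_url(url):
    """Rewrite the deprecated postgres:// scheme that SQLAlchemy 1.4+ rejects."""
    return url.replace("postgres://", "postgresql+psycopg2://", 1)

print(normalize_url("postgres://app:s3cret@db.example.com:5432/app"))
# postgresql+psycopg2://app:s3cret@db.example.com:5432/app

# quote_plus protects passwords containing URL-special characters:
print(quote_plus("p@ss/word"))  # p%40ss%2Fword
```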
13. Introducing Alembic to an Existing Project — Step-by-Step
This is the most common real-world scenario: you have a running application with a database that was created without any migration tooling.
Goal: Add Alembic without breaking anything. The database schema stays exactly as-is; Alembic simply “adopts” the current state as its baseline.
Step 1 — Audit the current state
Before touching anything, take a snapshot of the existing schema.
# PostgreSQL
pg_dump --schema-only -d mydb > schema_before_alembic.sql
# MySQL
mysqldump --no-data mydb > schema_before_alembic.sql
# SQLite
sqlite3 mydb.db .schema > schema_before_alembic.sql
Commit this file to Git. It is your safety net.
Step 2 — Install Alembic and add it to dependencies
pip install alembic
# Add to requirements.txt or pyproject.toml
echo "alembic>=1.13,<2.0" >> requirements.txt
Step 3 — Initialize Alembic
alembic init alembic
Check the new files into version control immediately:
git add alembic/ alembic.ini
git commit -m "chore: initialize alembic"
Step 4 — Reflect or define your SQLAlchemy models
You have two paths:
Path A — You already have SQLAlchemy models
This is the easiest case. Just import your Base in env.py (as shown in the env.py walkthrough above).
# env.py
from myapp.database import Base
from myapp import models # noqa: F401
target_metadata = Base.metadata
Path B — You have no SQLAlchemy models (raw SQL / other ORM)
Use SQLAlchemy’s reflection to read the existing schema directly from the database:
# env.py — reflection-based approach
from sqlalchemy import MetaData

# Reflect the existing tables from the DB without a model layer
target_metadata = MetaData()

def run_migrations_online() -> None:
    connectable = engine_from_config(
        config.get_section(config.config_ini_section, {}),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )
    with connectable.connect() as connection:
        # Reflect all existing tables
        target_metadata.reflect(bind=connection)
        context.configure(connection=connection, target_metadata=target_metadata)
        with context.begin_transaction():
            context.run_migrations()
Note: The reflection approach is a stepping stone. Gradually replace it with explicit model definitions as you refactor the codebase.
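To make reflection less magical: the information it reads lives in the database's own catalog. A stdlib-only illustration (throwaway in-memory table, made up for the demo) of the raw material MetaData.reflect() works from on SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")

# sqlite_master stores the original DDL; this is what reflection reads on SQLite
row = conn.execute(
    "SELECT name, sql FROM sqlite_master WHERE type = 'table'"
).fetchone()
print(row[0])  # users

# PRAGMA table_info yields (cid, name, type, notnull, dflt_value, pk) per column
cols = [r[1] for r in conn.execute("PRAGMA table_info(users)")]
print(cols)  # ['id', 'email']
conn.close()
```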
Step 5 — Create the baseline revision
A baseline revision “stamps” the existing database state without running any SQL. It tells Alembic: “the database is already at this point.”
# Create an empty revision — this will become the baseline
alembic revision -m "baseline_existing_schema"
Open the generated file and leave both upgrade() and downgrade() empty:
# alembic/versions/abc123_baseline_existing_schema.py
"""baseline_existing_schema

Revision ID: abc123
Revises:
Create Date: 2026-04-15 10:00:00.000000

"""
revision = "abc123"
down_revision = None
branch_labels = None
depends_on = None

def upgrade() -> None:
    pass  # Database already exists — nothing to do

def downgrade() -> None:
    pass  # Cannot safely drop a pre-existing database
Step 6 — Stamp the database
Tell Alembic that the database is already at the baseline revision — without actually running the (empty) migration:
alembic stamp abc123
Verify it worked:
alembic current
# abc123 (head)
Now Alembic knows where it stands. Future alembic upgrade head commands will only apply migrations created after this baseline.
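Under the hood the effect of stamping is tiny: one row in alembic_version. This stdlib sqlite3 sketch illustrates the net effect only (it is not Alembic's actual code path):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Alembic creates this single-row table on first use; version_num is the
# entire migration state it tracks.
conn.execute("CREATE TABLE alembic_version (version_num VARCHAR(32) NOT NULL)")

# `alembic stamp abc123` amounts to recording the revision; no DDL runs:
conn.execute("INSERT INTO alembic_version (version_num) VALUES ('abc123')")

# `alembic current` then just reads that row back:
current = conn.execute("SELECT version_num FROM alembic_version").fetchone()[0]
print(current)  # abc123
conn.close()
```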
Step 7 — Verify autogenerate produces no diff
This is the critical validation step. Run autogenerate to confirm Alembic sees no difference between your models and the live database:
alembic revision --autogenerate -m "verify_no_diff"
Open the generated file. Both upgrade() and downgrade() should be empty (or have only pass). If they contain operations, your models are out of sync with the database — fix the discrepancies before continuing.
Once verified, delete this verification revision file and do not apply it:
rm alembic/versions/<rev>_verify_no_diff.py
Step 8 — Write your first real migration
Now that Alembic is bootstrapped, create your first real schema change through it:
# Edit models.py to add a new column, table, etc.
# Then:
alembic revision --autogenerate -m "add_last_login_column"
# Review the file
alembic upgrade head
Step 9 — Update your deployment process
Add the migration command to your deployment pipeline so it always runs before the application starts:
# In your startup script, Dockerfile entrypoint, or CI/CD pipeline:
alembic upgrade head && python -m uvicorn myapp.main:app
For containerized deployments:
# Dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
# The entrypoint runs migrations, then starts the server
ENTRYPOINT ["sh", "-c", "alembic upgrade head && uvicorn myapp.main:app --host 0.0.0.0 --port 8000"]
Step 10 — Team rollout checklist
Use this checklist when rolling out Alembic across a team:
[ ] alembic.ini committed to Git (credentials removed)
[ ] alembic/ directory committed to Git
[ ] DATABASE_URL set as environment variable on all environments
[ ] Baseline revision stamped on production DB: `alembic stamp <baseline_rev>`
[ ] Baseline revision stamped on staging DB: `alembic stamp <baseline_rev>`
[ ] Baseline revision stamped on each developer's local DB: `alembic stamp <baseline_rev>`
[ ] CI pipeline runs `alembic upgrade head` before tests
[ ] Deployment pipeline runs `alembic upgrade head` before app start
[ ] Team briefed: never alter the DB schema manually; always go through Alembic
[ ] README updated with migration instructions
14. Tips, Patterns, and Best Practices
✅ Always review autogenerated migrations
Autogenerate is helpful but not perfect. Always open and read the generated file before running it. Common false positives:
- Renamed columns (Alembic sees a drop + add, not a rename — fix this manually with op.alter_column)
- Server-default changes (often noisy; suppress with compare_server_default=False if not needed)
- Type mapping differences between Python types and database types
✅ One migration per logical change
Keep each migration focused on a single concern. Atomic migrations are easier to review, test, and roll back.
# Good
add_users_table.py
add_posts_table.py
add_index_on_posts_user_id.py
# Avoid
big_bang_schema_v2.py ← Hard to debug, impossible to partially roll back
✅ Always write a downgrade()
Even if you think you will never roll back, write it anyway. Production incidents at 2 AM will change your mind.
def downgrade() -> None:
    op.drop_index("ix_users_email", table_name="users")
    op.drop_column("users", "last_login")
✅ Never edit an applied migration
Once a revision has been applied to any environment (dev, staging, or production), treat it as immutable. Create a new migration instead.
✅ Use op.execute() for data migrations
Keep data migrations alongside schema migrations — don’t do them separately with one-off scripts.
def upgrade() -> None:
    # 1. Add new column
    op.add_column("users", sa.Column("full_name", sa.String(200)))
    # 2. Backfill data
    op.execute(
        """
        UPDATE users
        SET full_name = first_name || ' ' || last_name
        WHERE full_name IS NULL
        """
    )
    # 3. Make it non-nullable now that data exists
    op.alter_column("users", "full_name", nullable=False)
✅ Add a __table_args__ schema for multi-schema databases
class User(Base):
    __tablename__ = "users"
    __table_args__ = {"schema": "tenant_schema"}
And configure autogenerate to include schemas:
# env.py
context.configure(
    ...,
    include_schemas=True,
    version_table_schema="public",  # Where to put alembic_version
)
✅ Batch mode for SQLite
SQLite does not support ALTER TABLE for most operations. Use op.batch_alter_table():
def upgrade() -> None:
    with op.batch_alter_table("users") as batch_op:
        batch_op.add_column(sa.Column("phone", sa.String(20)))
        batch_op.drop_column("old_phone")
Enable batch mode globally in env.py for SQLite projects:
context.configure(
    ...,
    render_as_batch=True,  # Safe for all dialects; no-op on non-SQLite
)
✅ Keep alembic_version in your monitoring
Alert if alembic current does not match alembic head on production. Stale migrations are a common source of runtime errors.
# Add to healthcheck script
CURRENT=$(alembic current 2>/dev/null | awk '{print $1}')
HEAD=$(alembic heads 2>/dev/null | awk '{print $1}')
if [ "$CURRENT" != "$HEAD" ]; then
    echo "WARNING: Database is not at head. Current: $CURRENT, Head: $HEAD"
    exit 1
fi
✅ Use transaction_per_migration=True for safety
Each migration runs in its own transaction so a failure does not leave your database in a half-applied state:
context.configure(
    ...,
    transaction_per_migration=True,
)
Note: PostgreSQL supports transactional DDL (so this works perfectly). MySQL/InnoDB has limited support — DDL statements such as CREATE INDEX implicitly commit and cannot run inside a transaction.
15. Common Errors and How to Fix Them
FAILED: Can't locate revision identified by '<id>'
The alembic_version table references a revision that no longer exists in the versions/ directory.
# Diagnosis
alembic history # lists all known revisions
alembic current # shows DB's current revision
# Fix: if you deleted a revision file by mistake, restore it from Git
git checkout alembic/versions/<rev_id>_*.py
# Fix: if you intentionally replaced it, stamp the DB to a known-good revision
alembic stamp <valid_revision_id>
Multiple head revisions present
alembic heads
# <rev_a> (head)
# <rev_b> (head)
# Fix: merge the heads
alembic merge -m "merge_heads" <rev_a> <rev_b>
alembic upgrade head
Target database is not up to date
This means alembic current is behind alembic heads.
alembic upgrade head
autogenerate detected no changes
If you expected changes but autogenerate produces nothing:
- Confirm your models are imported in env.py (the from myapp import models line).
- Confirm target_metadata = Base.metadata points to the correct Base.
- Check for typos in __tablename__.
- Ensure include_schemas is set correctly if you use non-default schemas.
sqlalchemy.exc.OperationalError: FATAL: remaining connection slots are reserved
You are leaking connections. Ensure you use pool.NullPool in env.py during migrations:
from sqlalchemy.pool import NullPool
connectable = engine_from_config(..., poolclass=NullPool)
ProgrammingError: column "x" of relation "y" does not exist
You are running the application before running the migrations. Fix your deployment order:
alembic upgrade head # FIRST
python -m myapp # THEN
16. CI/CD Integration
GitHub Actions
# .github/workflows/ci.yml
name: CI

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_DB: testdb
          POSTGRES_USER: testuser
          POSTGRES_PASSWORD: testpass
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run migrations
        env:
          DATABASE_URL: postgresql+psycopg2://testuser:testpass@localhost/testdb
        run: alembic upgrade head
      - name: Run tests
        env:
          DATABASE_URL: postgresql+psycopg2://testuser:testpass@localhost/testdb
        run: pytest
Docker Compose (local development)
# docker-compose.yml
version: "3.9"
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypass
    ports:
      - "5432:5432"
    healthcheck:  # required for the service_healthy condition below
      test: ["CMD-SHELL", "pg_isready -U myuser -d mydb"]
      interval: 5s
      timeout: 5s
      retries: 5
  migrate:
    build: .
    command: alembic upgrade head
    environment:
      DATABASE_URL: postgresql+psycopg2://myuser:mypass@db/mydb
    depends_on:
      db:
        condition: service_healthy
  app:
    build: .
    command: uvicorn myapp.main:app --host 0.0.0.0 --port 8000 --reload
    environment:
      DATABASE_URL: postgresql+psycopg2://myuser:mypass@db/mydb
    depends_on:
      - migrate
    ports:
      - "8000:8000"
17. Quick Reference Cheat Sheet
Setup
| Command | Description |
|---|---|
| alembic init alembic | Initialize Alembic in current directory |
| alembic init --template=async alembic | Initialize with async template |
Creating Migrations
| Command | Description |
|---|---|
| alembic revision -m "description" | Create an empty migration |
| alembic revision --autogenerate -m "description" | Auto-generate from model diff |
| alembic merge -m "merge" <rev1> <rev2> | Merge two branch heads |
Applying Migrations
| Command | Description |
|---|---|
| alembic upgrade head | Apply all pending migrations |
| alembic upgrade +1 | Apply one migration forward |
| alembic upgrade <rev_id> | Upgrade to a specific revision |
| alembic downgrade -1 | Roll back one migration |
| alembic downgrade base | Roll back all migrations |
| alembic downgrade <rev_id> | Downgrade to a specific revision |
Inspection
| Command | Description |
|---|---|
| alembic current | Show current DB revision |
| alembic heads | Show all head revisions |
| alembic history | Show full migration history |
| alembic history -r current:head | Show only pending migrations |
| alembic show <rev_id> | Show details of a specific revision |
Utilities
| Command | Description |
|---|---|
| alembic stamp <rev_id> | Mark DB at a revision without running SQL |
| alembic stamp head | Mark DB at the latest revision |
| alembic upgrade head --sql | Output SQL without running it |
| alembic check | Check if there are pending upgrades (exit code 1 if yes) |
Alembic Alternatives
Alembic is the dominant choice for SQLAlchemy-based Python projects, but it is not the only tool in the ecosystem. Here’s how it compares to the realistic alternatives.
At a glance
| Tool | Language | ORM coupling | Migration style | Best for |
|---|---|---|---|---|
| Alembic | Python | SQLAlchemy | Python scripts | SQLAlchemy projects; full control over migrations |
| Django Migrations | Python | Django ORM only | Python scripts (auto) | Django apps — built-in, zero config |
| Flyway | JVM / CLI | None (SQL-first) | Plain .sql files | Teams that prefer raw SQL; polyglot stacks |
| Liquibase | JVM / CLI | None | XML / YAML / SQL | Enterprise; audit trails; multi-DB rollout |
| Yoyo Migrations | Python | None | Python / SQL | Lightweight; no ORM dependency |
| Tortoise ORM Aerich | Python | Tortoise ORM | Auto-generated | AsyncIO / FastAPI projects using Tortoise |
| SQLModel (via Alembic) | Python | SQLModel → SQLAlchemy | Same as Alembic | FastAPI + SQLModel stacks |
| Peewee Playhouse | Python | Peewee ORM | Schema diffs | Peewee ORM projects |
Django Migrations
When to use: You are already inside the Django ecosystem.
Django’s migration framework is tightly integrated — makemigrations introspects your models.py and generates Python migration files automatically. It handles squashing, data migrations, and multi-app dependency ordering out of the box.
python manage.py makemigrations
python manage.py migrate
Pros: Zero setup; rollback support; signal hooks; well-tested at scale.
Cons: Locked to Django ORM; not usable outside a Django project.
Flyway
When to use: Your team prefers writing raw SQL and wants a language-agnostic tool.
Flyway uses numbered .sql files (V1__create_users.sql, V2__add_email_index.sql). It has no ORM dependency and can be dropped into any stack.
flyway -url=jdbc:postgresql://localhost/mydb migrate
Pros: SQL-first (no abstraction layer); excellent enterprise support; widely adopted in Java/Kotlin shops.
Cons: Java runtime required (unless using the CLI binary); no Python-native feel; no downgrade by default (undo is a paid feature).
Liquibase
When to use: You need detailed audit trails, multi-database targeting, or enterprise compliance workflows.
Liquibase expresses migrations as changesets in XML, YAML, JSON, or SQL. It tracks each changeset by ID + author + file, making it highly auditable.
# changelog.yaml
databaseChangeLog:
  - changeSet:
      id: 1
      author: alice
      changes:
        - createTable:
            tableName: users
            columns:
              - column:
                  name: id
                  type: int
                  autoIncrement: true
                  constraints:
                    primaryKey: true
Pros: Rich diff/report tooling; supports rollback; database-agnostic DSL; excellent CI/CD plugins.
Cons: Verbose; Java runtime; steeper learning curve; overkill for small Python projects.
Yoyo Migrations
When to use: You want a lightweight, dependency-free migration tool without any ORM.
Yoyo uses plain .sql files or Python scripts and stores state in a _yoyo_migration table. It’s simple enough to read in five minutes.
pip install yoyo-migrations
yoyo apply --database postgresql://user:pass@host/db ./migrations
Pros: Tiny footprint; no SQLAlchemy required; supports both SQL and Python migration files.
Cons: No autogenerate; smaller community; less sophisticated branching than Alembic.
Aerich (Tortoise ORM)
When to use: You are building an async application with Tortoise ORM (common in FastAPI projects that avoid SQLAlchemy).
pip install aerich
aerich init -t myapp.database.TORTOISE_ORM
aerich init-db
aerich migrate --name "add_users_table"
aerich upgrade
Pros: Native async support; autogenerate from Tortoise models; Django-like workflow.
Cons: Tied to Tortoise ORM; smaller ecosystem than SQLAlchemy/Alembic; less mature.
Decision Guide
Are you using Django?
├── YES → Django Migrations (built-in, no choice needed)
└── NO
    ├── Are you using SQLAlchemy (sync or async)?
    │   └── YES → Alembic ✓
    ├── Using Tortoise ORM (async/FastAPI)?
    │   └── YES → Aerich
    ├── Want raw SQL files, no ORM?
    │   ├── Simple/lightweight → Yoyo
    │   ├── Multi-DB / enterprise audit → Liquibase
    │   └── SQL-first, polyglot team → Flyway
    └── Using Peewee ORM?
        └── YES → Peewee Playhouse schema migrations
Bottom line: If you are writing Python with SQLAlchemy — which covers the vast majority of non-Django Python web and data applications — Alembic is the correct default choice. The alternatives exist for specific constraints (framework lock-in, raw-SQL preference, async ORM, or enterprise compliance) rather than as general improvements.
Further Reading
- Alembic Official Documentation
- SQLAlchemy Documentation
- Alembic Cookbook (Official)
- Alembic GitHub Repository
- SQLAlchemy 2.0 Migration Guide
Written for Alembic 1.13.x and SQLAlchemy 2.x. Last updated: April 2026.