Running PostgreSQL locally used to mean a full install, fighting with config files, and dealing with version conflicts. With Docker, it’s a one-liner — and you can have multiple versions running side by side if you need them.
## Quick start — running in 60 seconds
```bash
docker run -d \
  --name postgres \
  -e POSTGRES_USER=myuser \
  -e POSTGRES_PASSWORD=mypassword \
  -e POSTGRES_DB=mydb \
  -p 5432:5432 \
  postgres:16-alpine
```
That’s it. PostgreSQL 16 is running on `localhost:5432`.

Connect with `psql`:
```bash
# Using Docker (no local psql needed)
docker exec -it postgres psql -U myuser mydb

# Using local psql if you have it installed
psql -h localhost -U myuser mydb
```
Check it’s running:

```bash
docker ps
# CONTAINER ID   IMAGE                PORTS                    NAMES
# abc123         postgres:16-alpine   0.0.0.0:5432->5432/tcp   postgres
```
## The problem with the quick start: data disappears
When you `docker rm` the container, all your data goes with it. For serious development, you need persistent storage.
## Proper setup — with persistent storage
```bash
docker run -d \
  --name postgres \
  -e POSTGRES_USER=myuser \
  -e POSTGRES_PASSWORD=mypassword \
  -e POSTGRES_DB=mydb \
  -p 5432:5432 \
  -v postgres_data:/var/lib/postgresql/data \
  --restart unless-stopped \
  postgres:16-alpine
```
The `-v postgres_data:/var/lib/postgresql/data` flag creates a Docker named volume. Your data lives in that volume even if you remove and recreate the container.
To see your volumes:

```bash
docker volume ls
# DRIVER    VOLUME NAME
# local     postgres_data
```
## docker-compose — for real projects

For any project you’re actively developing, use a `docker-compose.yml` so your database config lives next to your code:
```yaml
services:
  db:
    image: postgres:16-alpine
    container_name: myapp_db
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: mydb
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U myuser -d mydb"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  postgres_data:
```
Start it:

```bash
docker compose up -d

# Check it's healthy
docker compose ps
# NAME       STATUS         PORTS
# myapp_db   Up (healthy)   0.0.0.0:5432->5432/tcp
```
Stop and restart without losing data:

```bash
docker compose stop     # stops containers, data stays
docker compose start    # starts again
docker compose down     # removes containers, data STAYS in volume
docker compose down -v  # removes containers AND volumes (data gone)
```
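The healthcheck in the compose file also lets other services wait for the database to actually accept connections, not just for the container to exist. A sketch, assuming a hypothetical `app` service built from your project’s Dockerfile:

```yaml
services:
  app:
    build: .            # hypothetical application service
    depends_on:
      db:
        condition: service_healthy   # starts only after pg_isready succeeds
```

Without the `condition`, `depends_on` only controls start order, and your app can race the database during its initialization.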
## Connecting from your application
Your `.env` file:

```bash
DATABASE_URL=postgresql://myuser:mypassword@localhost:5432/mydb
```

Connection string format:

```
postgresql://[user]:[password]@[host]:[port]/[database]
```
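If you’d rather not hardcode the full URL, you can assemble it from its parts. A minimal sketch using the example credentials from this article (substitute your own):

```shell
# Assemble DATABASE_URL from its parts (example credentials from this article)
DB_USER=myuser
DB_PASS=mypassword
DB_HOST=localhost
DB_PORT=5432
DB_NAME=mydb

DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$DATABASE_URL"
```

This keeps each piece overridable per environment while the format stays in one place.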
Works with any client:

- Prisma: `datasource db { url = env("DATABASE_URL") }`
- node-postgres (pg): `new Pool({ connectionString: process.env.DATABASE_URL })`
- SQLAlchemy: `create_engine(os.environ["DATABASE_URL"])`
- GORM: built-in postgres dialect
## Connect with a GUI client

Your Docker PostgreSQL is just a regular PostgreSQL server on port 5432. Connect with any GUI tool:
| Tool | Connection |
|---|---|
| TablePlus | Host: localhost, Port: 5432, User/DB/Pass as set |
| DBeaver | Same — PostgreSQL connection type |
| pgAdmin | Add server: localhost:5432 |
| DataGrip | New data source → PostgreSQL |
| VS Code | SQLTools extension → PostgreSQL driver |
## Running multiple PostgreSQL versions
Docker makes version testing trivial:
```bash
# PostgreSQL 14 on port 5433
docker run -d --name pg14 \
  -e POSTGRES_PASSWORD=pass \
  -p 5433:5432 \
  postgres:14-alpine

# PostgreSQL 16 on port 5432
docker run -d --name pg16 \
  -e POSTGRES_PASSWORD=pass \
  -p 5432:5432 \
  postgres:16-alpine
```
Connect to whichever you need by changing the port.
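The pattern generalizes: give each extra version its own container name and the next free host port, while the container side stays 5432. A sketch that just prints the run command for a few versions (the port assignments are arbitrary picks):

```shell
# Print one docker run command per version, each on its own host port
PORT=5432
for VERSION in 16 15 14; do
  echo "docker run -d --name pg${VERSION} -e POSTGRES_PASSWORD=pass -p ${PORT}:5432 postgres:${VERSION}-alpine"
  PORT=$((PORT + 1))
done
```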
## Load a SQL dump into the container
```bash
# Pipe a dump file into psql inside the container
docker exec -i postgres psql -U myuser mydb < backup.sql

# Or copy the file in first
docker cp backup.sql postgres:/tmp/
docker exec -it postgres psql -U myuser mydb -f /tmp/backup.sql
```
## Dump the database out of the container
```bash
# Dump to local file
docker exec postgres pg_dump -U myuser mydb > backup.sql

# Compressed dump (custom format, restore with pg_restore)
docker exec postgres pg_dump -U myuser -Fc mydb > backup.dump
```
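For recurring backups, a timestamped filename keeps dumps from overwriting each other. A minimal sketch; the `pg_dump` line is commented out so you can adapt the container and user names first:

```shell
# Build a timestamped backup filename, e.g. backup_20240101_120000.sql
BACKUP_FILE="backup_$(date +%Y%m%d_%H%M%S).sql"
echo "$BACKUP_FILE"
# docker exec postgres pg_dump -U myuser mydb > "$BACKUP_FILE"
```

Drop it in a cron job and the dumps accumulate instead of clobbering each other.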
## Common commands
```bash
# See container logs
docker logs postgres
docker logs -f postgres   # follow

# Enter a bash shell inside the container
docker exec -it postgres bash

# Connect to psql directly
docker exec -it postgres psql -U myuser mydb

# Restart the container
docker restart postgres

# Stop without removing
docker stop postgres

# Remove container (data stays if you used a volume)
docker rm postgres

# Remove container and volume
docker stop postgres && docker rm postgres
docker volume rm postgres_data
```
## Environment variables reference

| Variable | Required | Default | What it sets |
|---|---|---|---|
| `POSTGRES_PASSWORD` | Yes | — | Password for the superuser |
| `POSTGRES_USER` | No | `postgres` | Superuser name |
| `POSTGRES_DB` | No | Same as `POSTGRES_USER` | Default database |
| `PGDATA` | No | `/var/lib/postgresql/data` | Data directory |
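To keep credentials out of your shell history, these variables can also live in a file passed to `docker run --env-file`. A sketch; the filename `pg.env` is an arbitrary choice:

```shell
# Write the variables to a file instead of passing them inline
cat > pg.env <<'EOF'
POSTGRES_USER=myuser
POSTGRES_PASSWORD=mypassword
POSTGRES_DB=mydb
EOF

cat pg.env
# Then: docker run -d --name postgres --env-file pg.env -p 5432:5432 postgres:16-alpine
```

Remember to add `pg.env` to `.gitignore`, just like a `.env` file.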
The Docker setup takes about 60 seconds and gives you a clean, isolated database that you can tear down and rebuild any time. No leftover system packages, no version conflicts, no permission issues.
Keep these handy: PostgreSQL Cheat Sheet | How to Debug a Slow SQL Query
## Related Reading

- **How to Debug a Slow SQL Query in PostgreSQL**: step-by-step — find slow queries with pg_stat_statements, read EXPLAIN ANALYZE output, identify missing indexes, fix N+1 queries, and diagnose lock contention.
- **How to Install Docker on Ubuntu, macOS and Windows**: install Docker Desktop or Docker Engine step-by-step on Ubuntu, macOS, and Windows, including post-install setup, running your first container, and Docker Compose.
- **How to Set Up a .env File and Stop Leaking Secrets**: what `.env` files are, how to load them in Node.js, Python, and Docker, the common mistakes that expose API keys, and how to manage secrets safely in production.