Docker Security Mistakes Developers Keep Making
Containers feel secure by default. They're not. Here's every mistake I've made and what I do differently now.
I've been containerizing everything for the past two years. Personal projects, internship work, class assignments that definitely didn't need Docker but I Dockerized anyway because I wanted the practice. And in that time, I've shipped some genuinely terrible Dockerfiles from a security perspective.
The thing about Docker is that it feels secure. Your app's in a container! It's isolated! Except... it frequently isn't. Not in practice. And the defaults are way more permissive than most people realize.
Here are the mistakes I've made and seen repeatedly in the wild.
Running Everything as Root
This is the big one, and almost every tutorial you'll find online does it wrong.
If your Dockerfile doesn't include a USER instruction, your application runs as root inside the container. That means if someone exploits a vulnerability in your app, they have root access in that container. And depending on your configuration, that can be leveraged to escape the container entirely.
The fix takes two lines:
RUN addgroup --system app && adduser --system --ingroup app app
USER app
That's it. Your app runs as a non-root user now. It won't fix everything, but it immediately limits the blast radius of any compromise.
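In context, a minimal sketch — note that the official node images already ship with a non-root `node` user, so on those you can skip creating one (the app path is a placeholder):

```dockerfile
# Sketch assuming an official node base image, which includes a `node` user
FROM node:20-alpine
WORKDIR /app

# --chown so the runtime user owns the files it needs
COPY --chown=node:node . .

# Everything from here on runs unprivileged
USER node
CMD ["node", "index.js"]
```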
I audited a friend's side project last semester — a Node.js API running in Docker. Root user, mounted the Docker socket (more on that later), and exposed the debug port. It was essentially a remote code execution service with a nice API wrapper.
Using latest Tags in Production
FROM node:latest
This is in every beginner tutorial and it's terrible for two reasons:
- Reproducibility — latest changes. Your build that worked yesterday might break today because the base image updated.
- Security — You can't pin to a known-good, scanned image. You're trusting that whatever latest resolves to right now doesn't have vulnerabilities that weren't in the version you tested with.
Use specific digests or at minimum version tags:
FROM node:20-alpine@sha256:abc123...
Alpine-based images also have a much smaller attack surface. The node:latest image has hundreds of packages you don't need, each one a potential vulnerability.
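To find the digest a tag currently resolves to, one way is to pull it and ask Docker (the image name is whatever you actually use):

```shell
# Pull the tag, then print the digest it resolved to
docker pull node:20-alpine
docker inspect --format '{{index .RepoDigests 0}}' node:20-alpine
```

Pin that digest in your FROM line and the base image can never silently change underneath you.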
Copying Your Entire Source Directory
COPY . .
Seems harmless. Except this copies everything — your .env file with database credentials, your .git directory with commit history (maybe containing secrets from old commits), your node_modules (which you're about to reinstall anyway), your test fixtures, your personal notes.
Use a .dockerignore file. Always.
.git
.env
.env.*
node_modules
*.md
tests/
.vscode/
I once found a production container that had the entire .git history baked in. One git log and you could see every API key that was ever committed and "removed." They weren't removed. They were right there in the history.
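One way to spot-check what COPY . . would actually pick up is to build a throwaway image from the current context and list its contents — a sketch, where busybox and the /ctx path are arbitrary choices:

```shell
# Build a disposable image that just copies the build context, then list it
printf 'FROM busybox\nCOPY . /ctx\nCMD find /ctx\n' \
  | docker build -f- -t context-check .
docker run --rm context-check
```

If .env or .git shows up in that listing, your .dockerignore isn't doing its job.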
Mounting the Docker Socket
volumes:
- /var/run/docker.sock:/var/run/docker.sock
If you see this in a Docker Compose file and the service isn't specifically a Docker management tool (like Portainer or Traefik), something's wrong. Mounting the Docker socket gives the container full control over the Docker daemon. It can start new containers, stop other containers, mount host directories — it's essentially root access to the host machine.
I see this in CI/CD setups constantly. "We need Docker-in-Docker for our build pipeline." Okay, but you've just given your build container the keys to the entire infrastructure.
If you actually need Docker-in-Docker, use rootless mode or the --userns-remap flag. Don't just mount the socket and hope for the best.
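To see what the socket grants: any process inside such a container can drive the daemon's HTTP API directly, no docker CLI required. For example, listing every container on the host:

```shell
# From inside a container with the socket mounted
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```

The same API can create privileged containers with host directories mounted — which is why socket access is equivalent to root on the host.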
Not Scanning Your Images
Your dependencies have vulnerabilities. That's not a question. Every non-trivial Docker image has known CVEs — the question is how bad they are and whether they're exploitable in your context.
Tools that exist and are free:
- Docker Scout (built into Docker Desktop now)
- Trivy by Aqua Security (my personal favorite)
- Grype by Anchore
- Snyk Container
Run trivy image your-image:tag and look at the output. The first time you do this, it will be alarming. Most of the findings will be in transitive dependencies you've never heard of. But some of them will be real.
The trick is making this part of your CI pipeline. Don't just scan once — scan on every build and set a threshold. "Fail the build if there's a CRITICAL or HIGH severity CVE" is a reasonable starting point.
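With Trivy, that threshold is one flag — --exit-code makes the scan return non-zero (and therefore fail the CI step) when findings at the listed severities exist (the image name is a placeholder):

```shell
# Fails the build on any HIGH or CRITICAL finding
trivy image --exit-code 1 --severity HIGH,CRITICAL your-image:tag
```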
Exposing Ports You Don't Need
EXPOSE 3000
EXPOSE 5432
EXPOSE 6379
EXPOSE 9229
I've seen Dockerfiles that expose the app port, the database port, the Redis port, and the Node.js debug port all at once. The EXPOSE instruction is technically just documentation, but the problem is when people do the same in their docker run command with -p.
Expose only what needs to be externally accessible. If your API talks to Postgres over an internal Docker network, there's zero reason to publish port 5432 to the host. Same for Redis. And especially the debug port.
The Node debug port (9229) is the one that really gets me. I've seen production containers with --inspect=0.0.0.0:9229 enabled. That's a remote code execution endpoint. You're literally inviting people to attach a debugger to your running application.
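In Compose terms: publish only the app port, and if you genuinely need the debugger reachable from the host, bind it to loopback rather than all interfaces (service names here are illustrative):

```yaml
services:
  api:
    build: .
    ports:
      - "3000:3000"              # the only externally published port
      # - "127.0.0.1:9229:9229"  # debug port, loopback-only, if you must
  db:
    image: postgres:16-alpine
    # no ports: section — reachable by api over the default Compose
    # network, but never published to the host
```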
Multi-Stage Builds — Seriously, Use Them
If your final image contains gcc, make, python3, and a full compiler toolchain because you needed them during the build step, your image is too big and has too many potential vulnerabilities.
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
USER node
CMD ["node", "dist/index.js"]
Your final image only has the runtime and your compiled code. No source code, no dev dependencies, no build tools. Smaller image = smaller attack surface = faster deploys.
Hardcoding Secrets in the Dockerfile
ENV DATABASE_URL=postgres://admin:supersecret@db:5432/myapp
ENV API_KEY=sk-live-abc123xyz
These values are baked into the image layer. Anyone who pulls the image can run docker history or docker inspect and see them. Every single person with access to your registry has your production credentials.
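You can see this for yourself on any image built that way (the image name is a placeholder):

```shell
# Layer-by-layer history, untruncated — ENV instructions and all
docker history --no-trunc your-image:tag

# The environment baked into the image config, secrets included
docker inspect --format '{{.Config.Env}}' your-image:tag
```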
Use runtime secrets instead:
- docker secret for Swarm
- Kubernetes Secrets
- Environment variables injected at runtime (not build time)
- A secrets manager (Vault, AWS Secrets Manager, etc.)
And if you use ARG for build-time secrets, those are also visible in the image history unless you use BuildKit's --mount=type=secret syntax. The old-school ARG approach isn't actually secret at all.
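The BuildKit form, for reference: the secret is mounted as a file for that single RUN step and never lands in a layer (the secret id and paths here are illustrative):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# The secret exists only while this RUN executes; it is not in the image
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN=$(cat /run/secrets/npm_token) npm ci
```

Built with something like docker build --secret id=npm_token,src=.npm_token . — the file on the host never enters the build context or the image history.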
The Practices I Actually Follow Now
After messing up most of the things above at least once:
- Every Dockerfile gets a non-root USER
- Pinned base image versions (or digests when I'm being careful)
- .dockerignore mirrors .gitignore plus more
- Multi-stage builds for anything that has a build step
- trivy scan in CI that blocks on HIGH/CRITICAL CVEs
- No socket mounts unless absolutely unavoidable
- Only the app port exposed
- Secrets injected at runtime, never in the image
Most of these are simple habits that take 30 seconds to implement. The defaults just happen to be insecure, and it's on us to override them.
Docker's security model is actually solid — namespaces, cgroups, seccomp profiles, capability dropping. The problem isn't Docker. The problem is that developers (myself included) treat containers as magic security boxes and skip the basics. The container boundary is real, but it's thinner than most people think, and the configuration matters way more than the technology.