Notes from Docker

Basic Commands

// pull an image
docker pull [image name]

// list the images we have locally
docker images

// list running containers
docker ps

// list all containers, including stopped ones
docker ps -a

// inspect a container
docker inspect [first 3 chars of container ID]

// start a container
docker start [first 3 chars of container ID]

// stop a container
docker stop [first 3 chars of container ID]

// remove a container
docker rm [first 3 chars of container ID]

// remove all containers
docker rm $(docker ps -a -q)

// stop all running containers
docker stop $(docker ps -q)

// remove a container and its associated anonymous volumes
docker rm -v [first 3 chars of container ID]

// remove an image
docker rmi [first 3 chars of image ID]

// remove all <none> images
docker rmi $(docker images -f "dangling=true" -q)

// run an image as a container
docker run [image name]

// remove stopped containers, unused networks, dangling images, and build cache
docker system prune

// remove all unused images, not just dangling ones
docker system prune -a

// remove all unused images, not just dangling ones, plus volumes
docker system prune -a --volumes

// run a command in a running container
docker exec [options] [container] [command]
docker exec -it app-ticket ls /path/to/file
docker exec -t 6cb260d796c6 pg_dumpall -c -U root > dump_`date +%d-%m-%Y"_"%H_%M_%S`.sql
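The backtick expression in the pg_dumpall example builds a timestamped filename on the host before the dump is redirected into it. As a standalone sketch of how it expands:

```shell
# the backticks expand to a timestamp, producing e.g. dump_05-02-2024_14_30_59.sql
# (this is the filename pattern used by the pg_dumpall example above)
fname="dump_`date +%d-%m-%Y"_"%H_%M_%S`.sql"
echo "$fname"
```

Because the redirection happens on the host, the dump file ends up in the current host directory, not inside the container.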

// copy files/folders between a container and the local filesystem
docker cp <containerId>:/file/path/within/container /host/path/target
docker cp a87a0bcf9244:/tmp/ /srv/microservices/


  • Docker Volume/Data Volume: an alias for a folder mounted on the Docker Host; associated with a container; can be shared and reused among containers; persists after the container is removed
  • Docker Host: the virtual machine/OS that runs the Docker daemon
  • Working Directory: the startup directory where any commands are executed inside the container
docker run -d -p [external port]:[internal port] -v [host location/mount folder source]:[container volume/mount folder destination] -w [working directory] [image] [command]
docker run -d -p 8080:3000 -v $(pwd):/var/www -w "/var/www" node npm start

In other words: Docker, run in the background, mapping external port 8080 to internal port 3000 (Node's default port); set up a volume that points to my source code in the current working directory and mounts it at /var/www inside the container; also use /var/www as the working directory. So when I run npm start, the call is forwarded to my project under /var/www.


Dockerfile

A text file used to build Docker images. It generates a layered filesystem that Docker turns into an image, from which containers are created; it also contains build instructions, such as setting environment variables and copying source code into the image.

# base image
FROM node

# maintainer info
LABEL maintainer="you@example.com"

# environment variable that can be used in the container
ENV NODE_PATH /var/www

# set the current working directory inside the container
WORKDIR /var/www

# copy the source from the current directory to the working directory inside the container
COPY . /var/www

# download all dependencies (cached if package.json and package-lock.json are unchanged)
RUN npm install

# data volume where the container stores data in the filesystem
VOLUME ["/var/www"]

# expose port 8080 to the outside world
EXPOSE 8080

# entry point that kicks off the container
ENTRYPOINT ["node", "server.js"]
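Once the Dockerfile is in place, it can be built into an image and run as a container; a minimal sketch, assuming the Dockerfile sits in the current directory and using the hypothetical tag node-app:

```shell
# build an image from the Dockerfile in the current directory
docker build -t node-app .

# run it in the background, mapping container port 8080 to host port 8080
docker run -d -p 8080:8080 --name node_app node-app
```

Each instruction in the Dockerfile becomes a cached layer, so rebuilds after a code-only change are fast as long as the earlier layers are unchanged.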

Linking Containers

Legacy Linking
Linking containers by name: first run a container with a name, then link to that running container by name, and repeat for additional containers.

// start mongodb
docker run -d --name my-mongodb mongo

// start node and link to my-mongodb container with an alias 'mongodb' that we use internally in container
docker run -d -p 3000:3000 --link my-mongodb:mongodb --name nodeapp node

Bridge Network
Create an isolated network; any containers run in it can communicate with each other (they do so by name).

// create network using bridge network named isolated_network
docker network create --driver bridge isolated_network

// start mongodb and add the container into isolated_network
docker run -d --net=isolated_network --name mongodb mongo

// start node and add the container into isolated_network
docker run -d --net=isolated_network --name nodeapp -p 3000:3000 node
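To check that the two containers really can reach each other by name; a sketch, assuming both containers above are running:

```shell
# show the network's configuration and its attached containers
docker network inspect isolated_network

# resolve the mongodb container's name from inside nodeapp
# (getent is available in the Debian-based node image)
docker exec nodeapp getent hosts mongodb
```

Name resolution is provided by Docker's embedded DNS on user-defined networks, which is why no --link flags are needed here.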


Docker Compose

A YAML text file used to manage the whole application lifecycle: start, stop, and rebuild services; view the status of running services; stream the log output of running services; run a one-off command on a service.


version: '3'
# what we want to run as containers once we build this docker-compose
services:
  # name of the service
  node:
    # image used for the service
    image: node:latest
    # what folder we are going to build from as the context, and the dockerfile we want to use
    build:
      context: .
      dockerfile: Dockerfile
    # name of the container
    container_name: node_app
    # environment variables automatically put into the container
    environment:
      - NODE_PATH=/var/www
    # network name to link up to
    networks:
      - isolated-network
    # external and internal ports used
    ports:
      - 8080:8080
    # host location and container volume
    volumes:
      - .:/usr/src/app/

# networks to be created to facilitate communication between containers
networks:
  isolated-network:
    driver: bridge

docker-compose command

// build or rebuild services
docker-compose build

// build or rebuild a service
docker-compose build [service]

// create containers, start them, and link them up
docker-compose up

// create container and rebuild
docker-compose up --build

// list running containers managed by docker-compose
docker-compose ps

// view logs of running containers
docker-compose logs
docker-compose logs --tail=100 -f

// stop container
docker-compose stop

// stop and remove container
docker-compose down

// stop and remove containers, all images, and volumes
docker-compose down --rmi all --volumes

// stop and remove containers, including ones for services not defined in the compose file, plus volumes
docker-compose down --remove-orphans --volumes

More about volume

When we run Docker, it takes an image, which is read-only, and adds an additional layer on top (our container), which is read-write. When a container modifies files or adds data, it uses that writable layer to do so. The problem is that the layer is deleted when the container is deleted: when we launch a new container, all our saved data is gone.

To be able to persist data, Docker came up with the idea of Volumes. Volumes are basically data storages that are outside of containers and exist on the host filesystem. There are two types of Volumes, Docker-managed volumes and bind-mount volumes.
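Docker-managed volumes can also be created and inspected directly from the CLI; a quick sketch using a hypothetical volume named my_data:

```shell
# create a named, Docker-managed volume
docker volume create my_data

# list all volumes, then show where my_data lives on the host
docker volume ls
docker volume inspect my_data

# attach it to a container
docker run -d --name db -v my_data:/var/lib/postgresql/data postgres

# remove volumes that no container uses anymore
docker volume prune
```

`docker volume inspect` reveals the mountpoint inside Docker's own area of the host filesystem, which is exactly the "Docker-managed" behavior described below.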

Bind-mount volumes
Points to a user-specified place on the host’s filesystem.

    services:
      db:
        image: postgres:latest
        ports:
          - "5432:5432"
        container_name: db
        volumes:
          - ~/postgre/data:/var/lib/postgresql/data

We use ~/postgre/data to specify a path to the location on the host's drive (host paths must be absolute, though Compose expands ~ and ./). Then we bind that location to the data location in the container, which is /var/lib/postgresql/data, the PostgreSQL image's data directory. This is great if we need to point several containers at one storage location. If we delete the container with the -v flag, the volume is still there. But remember, bind mounts are complicated to manage in large systems with many containers and volumes, and it is quite easy to overwrite data.

Docker-managed volumes
They are created automatically by the Docker daemon.

    services:
      db:
        image: postgres:latest
        ports:
          - "5432:5432"
        container_name: db
        volumes:
          - /var/lib/postgresql/data

The user can't choose the location, as these volumes are created in the portion of the host's filesystem owned by Docker. If we delete the container with the -v flag, the data will be deleted. But they are easier to manage in large systems with many containers and volumes.
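To see which Docker-managed volumes a container actually received and where they live, the inspect output can be filtered; a sketch assuming a running container named db:

```shell
# print only the mount information (volume names, host paths, container paths)
docker inspect -f '{{ json .Mounts }}' db
```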

Sharing a Docker-Managed Volume
Share a Docker-managed volume between containers.

  services:
    db1:
      image: postgres:latest
      ports:
        - "5432:5432"
      container_name: db1
      volumes:
        - shared_database:/var/lib/postgresql/data
    db2:
      image: postgres:latest
      container_name: db2
      volumes:
        - shared_database:/var/lib/postgresql/data

  volumes:
    shared_database:

We create two containers and connect them to one named volume, declared under the top-level volumes key. Whenever you delete the main database container for some reason, even with the -v flag, the volume is still there: named volumes are not removed by -v, and a volume cannot be deleted as long as it is connected to at least one container.
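The same sharing pattern works with the plain CLI; a sketch assuming the PostgreSQL image's default data directory:

```shell
# create the shared named volume once
docker volume create shared_database

# attach it to both containers; they now read and write the same data
docker run -d --name db1 -v shared_database:/var/lib/postgresql/data postgres:latest
docker run -d --name db2 -v shared_database:/var/lib/postgresql/data postgres:latest
```

Note that two PostgreSQL instances should not actually write to one data directory at the same time; the pattern is more useful for a writer plus read-only consumers, or for handing data off between containers.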

Persistent Databases Using Docker’s Volumes and MongoDB