Even early in our software engineering careers we’ve all heard the famous excuse, “but it worked on my machine!” Docker solves this problem by ensuring that containers give us an identical environment on every machine. A container is an instance of an image running as a process on your machine.
All containers are started by running a Docker image. An image is the application we want to run. Familiar examples of applications we’d want to run in a container are Node, Ruby, or Postgres. A Docker image generally contains a set of files, libraries, and dependencies. We’ll be talking a lot more about images later, but if you want to browse some now, feel free to visit the image hosting registry Docker Hub.
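As a quick sketch of what “running an image” looks like in practice (assuming you have Docker installed, and using the official `node` image from Docker Hub as an arbitrary example):

```shell
# Start a container from the official Node image; Docker pulls the
# image from Docker Hub automatically on first use.
docker run node:20 node --version

# List containers on this machine, including stopped ones.
docker ps -a
```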
Lifecycle of a Docker Container
Docker containers are prepared to die at any time: you can stop, kill, and destroy them quickly. Once you do kill a container, all data created during its lifetime is lost by default. It is in this sense that we say containers are ephemeral. By “ephemeral”, we mean that a container can be stopped and destroyed, then rebuilt and replaced, with an absolute minimum of setup and configuration.
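A minimal sketch of that lifecycle using the Docker CLI (the `nginx` image and the container name `demo` are just illustrative choices):

```shell
# Create and start a container from an image, as a background daemon (-d).
docker run -d --name demo nginx

# Stop the running container gracefully (SIGTERM, then SIGKILL after a timeout).
docker stop demo

# Destroy the container: any data written inside it is gone unless it
# was stored on a volume or bind mount.
docker rm demo
```

Rebuilding the container afterwards is just another `docker run` with the same image, which is exactly what makes containers so cheap to replace.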
Containers are perfect for short-lived tasks, and they can also run long-lived daemons like web servers. By default, all containers are created equal: they all get the same proportion of CPU cycles and block IO, and they can use as much memory as they need.
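Those defaults can be overridden per container. As a sketch, Docker’s `--cpus` and `--memory` flags cap a container’s share of the host’s resources (the container name `capped-web` is just an illustrative choice):

```shell
# Run an nginx web server as a background daemon, capped at
# one CPU core and 256 MB of RAM.
docker run -d --name capped-web --cpus="1.0" --memory="256m" nginx
```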
Here is a visualization of the Docker container lifecycle: