As I was watching a ship dock in the harbor yesterday, I couldn't help but draw a parallel with something that I've been using for over 8 years - Docker.
I wish I could send the knowledge I've gathered back in time to my past self, just starting out on this adventure. Instead, I'm sharing these insights with you, in the hope that they help smooth your journey through the vast sea of Docker.
Here's how I would summarize the essentials:
"Docker is a platform that uses containerization to make it easier to create, deploy, and run applications. It's like having a lightweight, stand-alone, and self-sufficient "virtual machine" - but instead of emulating an entire operating system, it isolates the application and its dependencies into a self-contained unit called a container. This isolation is defined in a configuration file called a Dockerfile.
A Dockerfile defines a blueprint for creating something, but it doesn't do anything on its own. When you 'build' a Dockerfile, you create a Docker image, which is like a static snapshot, or a 'class' in programming terms.
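To make that concrete, here's a minimal sketch of what a Dockerfile might look like for a hypothetical Node.js app (the base image, file names, and port are purely illustrative):

```dockerfile
# Start from an official base image (illustrative choice)
FROM node:20-alpine

# Work inside /app within the image
WORKDIR /app

# Copy the dependency manifests first and install dependencies,
# so this layer is cached when only the source code changes
COPY package*.json ./
RUN npm install

# Copy the rest of the application source
COPY . .

# Document the port the app listens on and define the startup command
EXPOSE 3000
CMD ["node", "server.js"]
```

Building it is what turns the blueprint into an image:

```bash
# Build an image named 'my-app' from the Dockerfile in the current directory
docker build -t my-app .
```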
And when you 'run' that Docker image, you create a Docker container - essentially turning on that snapshot you previously created, or creating an object instance of the class. Each container is an isolated and secure application platform - but it's much lighter and faster than a traditional virtual machine because it runs directly on the host system's kernel.
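A minimal sketch of that step, reusing the hypothetical `my-app` image from above:

```bash
# Create and start a container from the 'my-app' image,
# detached (-d) and with container port 3000 published on host port 3000
docker run -d -p 3000:3000 --name my-app-container my-app

# List running containers to confirm it's up
docker ps
```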
Docker containers, like instantiated objects, are ephemeral. Any changes made inside a running container are lost once it's removed, and won't be present in a new container created from the same image. Just as in programming, when we need data to persist beyond the lifecycle of a single object, we'd typically use a database. Docker offers a similar solution - Docker Volumes. These are storage locations shared between your host system and the container (either folders bind-mounted from the host or volumes managed by Docker itself), ensuring data exchange and persistence across container lifecycles.
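Here's a sketch of both flavors, again with purely illustrative names and paths:

```bash
# Named volume: Docker manages the storage, and it survives container removal
docker volume create app-data
docker run -d --name my-app-vol -v app-data:/app/data my-app

# Bind mount: a folder on the host is mapped straight into the container
docker run -d --name my-app-bind -v "$(pwd)/data:/app/data" my-app
```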
Each container lives in its own isolated space, but communication is enabled through Docker's networking model. Think of it as defining methods on your objects that let them interact with other objects, where each object represents a different container on the network.
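A sketch of how that looks in practice, assuming the hypothetical `my-app` image talks to a Postgres database (the names and password are illustrative only):

```bash
# Create a user-defined network; containers on it can reach each other by name
docker network create app-net

# Start a database container attached to that network
docker run -d --name db --network app-net \
  -e POSTGRES_PASSWORD=example postgres:16

# Start the app on the same network; inside it, the database
# is reachable simply at the hostname 'db'
docker run -d --name web --network app-net -p 3000:3000 my-app
```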
Finally, Docker Compose is your go-to tool for managing multiple Docker containers simultaneously. Think of it as the application's main method, where the configuration happens before the application runs. In a `docker-compose.yml` file, you define your services, volumes, and networks. Once everything's set, a single `docker-compose up` command brings your entire application to life, much like running the main method in your program.
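Here's a minimal sketch of such a `docker-compose.yml`, tying together the hypothetical app, database, volume, and network from the earlier examples (service names and the password are illustrative):

```yaml
version: "3.8"

services:
  web:
    build: .                 # build the image from the Dockerfile in this folder
    ports:
      - "3000:3000"          # publish the app's port on the host
    depends_on:
      - db                   # start the database before the app
    networks:
      - app-net

  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # illustrative only; use secrets in real setups
    volumes:
      - db-data:/var/lib/postgresql/data   # persist database files
    networks:
      - app-net

volumes:
  db-data:

networks:
  app-net:
```

A nice side effect: Compose creates the declared volumes and networks automatically on the first `docker-compose up`, so there's no need to run the `docker volume create` or `docker network create` commands from the earlier examples by hand.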
Over the years, I've learned many other Docker tips and tricks, such as how to properly reduce image size, how to use a single Dockerfile for multi-image builds, and how to debug Docker layers when things go sideways.
In the next part of the series, we'll set sail towards these more advanced waters.