24 January 2019

What is Docker? Why should I care about Docker?

Introduction to Docker as a container system as an alternative to classic virtualization.

Introduction to Docker

What is Docker?

Docker marked a turning point in the evolution of application virtualization. If you're already familiar with classic virtualization concepts, some of the introductory concepts may seem redundant; however, for those new to this world, understanding the basics is essential to fully appreciate the benefits Docker offers.

Docker is an open-source platform that enables containerization, a technology that has profoundly transformed the way applications are developed, deployed, and run. Unlike traditional virtual machines, which require an entire operating system for each instance, Docker encapsulates applications within lightweight, isolated containers, which share the host operating system kernel but operate as independent, self-contained environments. This ensures consistent and predictable behavior, regardless of the underlying machine or operating system.

From a technical point of view, Docker takes advantage of advanced features of the Linux kernel, such as cgroups (control groups) for resource management and namespaces for isolating processes, filesystems, networks, and other system resources. These technologies allow you to run multiple containers in parallel, each with its own isolated space, but with a smaller footprint than a VM, since you don't need to run a full operating system or hypervisor for each instance.

One of Docker's most obvious strengths is its ability to eliminate discrepancies between environments, dramatically reducing the compatibility issues that often arise when moving from development to production. Developers can build a container once and deploy it anywhere—on laptops, on-premises servers, the cloud, or CI/CD systems—with the confidence that it will perform the same way. This approach increases operational efficiency, accelerates the release cycle, and reduces infrastructure costs, as multiple containers can coexist on a single machine, whether physical or virtual, without requiring separate environments or duplication of system resources.

What is virtualization?

To understand the concept of virtualization, let's start with a simple metaphor. Imagine you live in a spacious house and have a friend who needs a place to stay. You have three options:

  • Make him sleep in the same room as you: you would share the space directly, but this cohabitation would most likely quickly become uncomfortable and chaotic, given the lack of separation.

  • Build him a new house on your land: a perfect solution for ensuring privacy and independence, but it requires significant time, money, and resources.

  • Host him in the guest room: in this case, everyone has their own space, but some common resources, like the kitchen, bathroom, and electricity, are shared. It's a good compromise between isolation and resource optimization.

The third option best represents the essence of virtualization: creating separate, independent environments within the same physical infrastructure, sharing resources in a controlled manner.

In the IT world, virtualization is about creating a virtual version of physical resources, such as servers, operating systems, storage, or networks. It's a technology that allows multiple isolated environments to run on a single physical hardware, making more efficient use of CPU, RAM, disk, and other components.

For example, let's say you want to run a web server on your computer, but without directly affecting the main operating system or the applications already installed. With virtualization, you can create a virtual machine (VM): a software instance that simulates an entire computer, with its own operating system, libraries, and application stack. The VM runs within your existing operating system, leveraging the resources of the underlying hardware, but operating as if it were a standalone machine.

When you launch a VM, you'll see a completely autonomous operating system appear—inside a window—just as if you'd turned on a second computer. This virtual environment is isolated from the main one, but uses the physical machine's resources: just like your friend in the guest room, who lives separately but shares electricity, gas, and water.

This ability to isolate and compartmentalize work environments has revolutionized modern computing. Virtualization allows companies to maximize hardware utilization, reduce operating costs, and facilitate testing and the rapid provisioning of new systems. It also makes workload management more flexible, promoting the scalability and resilience of the infrastructure.

Virtualization, in short, allows you to abstract the underlying hardware to create autonomous and replicable environments, representing one of the technological foundations on which data centers, clouds, and modern distributed architectures are based today.

What's different about Docker? How is it different from traditional virtualization?

Docker represents an innovative paradigm in virtualization. Unlike traditional virtualization, which relies on hypervisors such as VMware, KVM, or Hyper-V to create virtual machines (VMs) complete with an operating system, Docker uses a more lightweight approach, based on containerization.

Traditional virtualization: hypervisor isolation

In classic virtualization, each VM includes its own complete operating system, libraries, binaries, and applications. This allows multiple operating systems to run simultaneously on the same physical machine, providing a high degree of isolation. However, it also entails significant resource overhead, since each VM requires dedicated CPU, RAM, and storage. Furthermore, starting a virtual machine is slow, and the images are large and complex to manage.

Docker: Isolation through Containers

By contrast, Docker is based on a technology called containerization, which creates isolated execution environments called containers. These containers share the host operating system kernel, eliminating the need to include an entire operating system in each environment. As a result, containers are lighter: they start almost instantly, consume fewer resources, and are much easier to move, replicate, and deploy.

Summary of differences

Aspect              Traditional Virtualization     Docker (Containerization)
Isolation           Via hypervisor                 Via kernel namespaces and cgroups
Operating system    Complete for every VM          Shared (host's only)
Resource overhead   High                           Very low
Boot speed          Slow                           Almost immediate
Portability         Limited                        High (build once, run anywhere)
Image size          Large (several GB)             Small (even < 100 MB)

In short, Docker does not completely replace VMs, but it offers a more efficient, portable, and scalable alternative, particularly suited to modern development, CI/CD, and microservices architectures.

Docker for Web Developers: Consistency and Simplicity

One of the main advantages of Docker for developers is the ability to share identical development environments, avoiding inconsistencies between team members' machines.

Let's imagine we're collaborating on an application written in Node.js. To ensure everything works properly, we need to make sure we're both using the same version of Node. Minor differences between versions can cause difficult-to-diagnose bugs, due to changes in libraries or runtime behavior.

A traditional solution is to use a tool like NVM (Node Version Manager) to manage different versions of Node. We can create a .nvmrc file in the project and document the requirements, but this approach is not without problems: it requires manual configuration on each machine, is error-prone, and does not guarantee absolute uniformity.
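For comparison, the NVM-based approach boils down to a one-line version file committed to the project (the version number here is illustrative):

```
18.17.0
```

Each developer must then remember to run nvm use in the project directory, and it only works if NVM is actually installed and invoked on every machine, which is exactly the kind of manual step Docker removes.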

Docker makes everything easier

With Docker, we can define exactly which environment to use and distribute it to the entire team with 100% accuracy. The typical flow becomes:

  1. Install Docker (one-time).

  2. Write a Dockerfile with the desired environment.

  3. Build the image with docker build -t image-name . (the final dot sets the build context to the current directory).

  4. Run the container with docker run -p 3000:3000 image-name.

This process completely eliminates the need to install Node, NVM, or other dependencies on the local system. It also allows you to replicate the environment in production, reducing the classic problem of “it works on my computer, but not on the server”.

Example Dockerfile for a Node.js app

# Use a specific Node.js version as a base
FROM node:18.17.0

# Set the working directory
WORKDIR /app

# Copy package.json and package-lock.json files
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the rest of the code into the container
COPY . .

# Expose port 3000
EXPOSE 3000

# Command to start the application
CMD ["npm", "start"]

By saving this file to the project root and including it in the Git repository, any developer can recreate the exact same environment in seconds. Regardless of the operating system used (Linux, macOS, or Windows), the behavior will always be identical.
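In practice, a .dockerignore file is usually committed next to the Dockerfile so that COPY . . doesn't drag local artifacts into the image (the entries below are illustrative):

```
node_modules
npm-debug.log
.git
.env
```

Excluding node_modules matters in particular: dependencies are installed inside the container by RUN npm install, and copying the host's folder could reintroduce platform-specific binaries.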

Develop on the same environment as production

Once the app runs in a Docker development environment, you can ship the same container directly to production. If inconsistencies between two developers' machines sound like a problem, wait until code that works on your machine fails in production. It's extremely frustrating.

You have plenty of options for deploying Docker containers to production.

I like Heroku's approach because it lets you simply push your project with a Dockerfile and run it. Other platforms require additional steps, such as pushing the Docker image to a registry first. The extra steps aren't the end of the world, but they aren't necessary.

What about more complex apps?

Docker adopts the philosophy of "one process per container", which means that, in most cases, real, full-fledged applications—such as a WordPress site—will require multiple distinct containers. In a typical scenario, we'll have at least one container for the web server (Apache or Nginx) running PHP, and another for the database, such as MySQL or MariaDB. In more complex contexts, we might have additional containers for caching services (such as Redis or Memcached), load balancers, monitoring systems, or backup tools. It's therefore essential that these containers can communicate with each other securely and in a coordinated manner: and this is where the concept of container orchestration comes into play.

For local development environments, or for applications that need to run on a single server, Docker Compose is often the ideal choice. It is a simple yet powerful tool, included with the Docker installation, that allows you to define and start multiple containers simultaneously using a YAML file (docker-compose.yml). Compose also automatically creates an internal network between containers, assigning names and facilitating communication between the various services. This approach makes developing multi-container architectures accessible even to beginners, offering a fast, declarative, and easily replicable way to manage complex environments on a single machine.
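As a sketch of the WordPress scenario described above, a docker-compose.yml might look like this (the image tags, service names, and credentials are illustrative placeholders, not a production setup):

```yaml
# docker-compose.yml - minimal two-container sketch (illustrative values)
services:
  wordpress:
    image: wordpress:6.4-apache
    ports:
      - "8080:80"                    # host port 8080 -> container port 80
    environment:
      WORDPRESS_DB_HOST: db          # "db" resolves via Compose's internal network
      WORDPRESS_DB_USER: wp_user
      WORDPRESS_DB_PASSWORD: change-me
      WORDPRESS_DB_NAME: wordpress
    depends_on:
      - db
  db:
    image: mariadb:10.11
    environment:
      MARIADB_DATABASE: wordpress
      MARIADB_USER: wp_user
      MARIADB_PASSWORD: change-me
      MARIADB_ROOT_PASSWORD: change-me-too
    volumes:
      - db_data:/var/lib/mysql       # persist the database across restarts
volumes:
  db_data:
```

A single docker compose up -d then starts both services, already networked together.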

But when the application needs to scale horizontally, running containers on multiple distributed hosts, a more advanced level of orchestration is needed. In this context, the de facto standard is Kubernetes: a robust and highly configurable open-source platform designed to manage the deployment, load balancing, automatic scaling, and monitoring of containers in distributed production environments. Kubernetes allows you to define the desired application behavior declaratively and maintains it over time, even in the event of failures or upgrades. Many cloud providers and on-premises platforms that support Docker offer Kubernetes as an integrated or managed service, making it an increasingly popular solution for orchestrating complex and mission-critical architectures.
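As a rough sketch, the Node.js image built earlier could be described to Kubernetes with a minimal Deployment manifest like the following (the image name, registry, and replica count are placeholders):

```yaml
# deployment.yaml - hypothetical minimal Deployment (placeholder names)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app
spec:
  replicas: 3                        # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
        - name: node-app
          image: registry.example.com/node-app:1.0.0
          ports:
            - containerPort: 3000
```

Applying it with kubectl apply -f deployment.yaml declares the desired state; if a pod crashes or a node fails, the control plane automatically replaces it to maintain the declared number of replicas.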

Quick benefits of understanding Docker

It may not seem urgent now, but remember these words when you first encounter a problem caused by a discrepancy between development environments. It's one of the most frustrating experiences for a developer, and when it happens, you'll wish you'd adopted Docker sooner. Understanding and using Docker allows you to create consistent and predictable environments, regardless of the machine you're working on, the operating system, or the team involved. This translates into consistent and reliable application behavior, reducing unexpected issues and increasing confidence in your releases—both on your part and that of your customers or superiors.

Mastering Docker brings a set of immediate benefits (real "quick wins," in business parlance) that can make a difference even in the smallest projects. Here are the main ones:

  • Consistent development environment:
    Docker allows you to define a complete and identical development environment for all team members, regardless of their operating system or local configurations. This eliminates the class of errors known as “it works on my computer”, ensuring that the application behaves exactly the same throughout development, testing, and production. The onboarding process for new developers also becomes much faster and more efficient.

  • Application portability:
    Thanks to the build once, run anywhere approach, a containerized application can be easily moved from a local machine to a staging or production server without requiring any code or infrastructure changes. Docker encapsulates everything needed for execution, including the runtime, libraries, and environment variables, making apps highly portable and agnostic to the underlying platform.

  • Application isolation:
    Each Docker container runs in an isolated environment, with its own filesystem, environment variables, and processes, separate from other containers. This means you can run multiple applications, or different versions of the same software, on the same machine without conflicts. This is especially useful when different projects require different dependencies (for example, different versions of PHP, Node.js, or Python).

  • Resource efficiency:
    Unlike traditional VMs, containers share the host operating system kernel and use only the resources strictly necessary to run the application. This reduces computational overhead and allows many more containers to run on the same machine than equivalent VMs. In resource-constrained environments (such as laptops or VPSs), this efficiency translates into superior performance and reduced operating costs.

  • Replicability:
    Docker allows you to codify the entire build and deployment process into readable, versionable files, such as Dockerfile and docker-compose.yml. This ensures that any developer or environment can accurately reproduce the entire development and release pipeline, without surprises or variations. Every configuration change can be tracked in version control, improving transparency and collaboration.

Conclusion

In conclusion, learning to use Docker is an investment that pays off right away. It allows you to work more efficiently, reduces surprises during deployment, speeds up development cycles, and gives you the peace of mind of knowing exactly where and how your application will run. For these reasons, Docker is not just a useful tool: it has become a de facto standard in modern development, and a fundamental skill for every developer who wants to work professionally.
