DevOps: The main tools to know


There are many, many DevOps tools available, each of which can perform specific tasks. It can be difficult to find your way around, and to know where to start when you want to implement DevOps. This article describes the main DevOps tools you need to know about, and explains what they can do when implemented.

Docker - Containerisation

Docker is the main tool used for containerisation. As the name suggests, containerisation allows an application to be isolated in a container. A container holds all the elements necessary for the execution of the process, whether a web service, a database or something else. Everything is packaged into an image, from which a container can be created on any environment supporting Docker.

It is therefore not necessary to install anything other than Docker on the machine running the service, as all the elements needed for the program are already embedded in the container.

The Docker logic is "one service, one container". This logic encourages the application to be thought of in terms of microservices, and the isolation of the different parts of the application allows it, in a DevOps approach, to evolve brick by brick, as opposed to a monolithic architecture where the whole application must be redeployed at each update.
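To make the "one service, one container" idea concrete, here is a minimal sketch of a docker-compose.yml describing a hypothetical web API and its database as two separate containers; the service names, images, ports and credentials are purely illustrative.

```yaml
# docker-compose.yml - one service per container (illustrative example)
services:
  api:
    build: .                      # the application image is built from its own Dockerfile
    ports:
      - "8080:8080"               # only the API port is exposed to the host
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:15            # the database runs isolated in its own container
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
    volumes:
      - db-data:/var/lib/postgresql/data   # data persisted outside the container
volumes:
  db-data:
```

Each service can then be rebuilt and redeployed independently, which is exactly the brick-by-brick evolution described above.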

Docker has really enabled the momentum of DevOps, and is today the central building block for automating deployment processes.

- Benefits of the tool

  • Layers and image version control
  • Restoration: it is possible to rollback to a previous image
  • Microservices architectures: Docker facilitates the implementation of microservices architectures (one container = one service)
  • Easy to install: nothing to install on the machine apart from Docker itself
  • Reusability: since nothing is installed on the machine, the same image can be run on any environment
  • Isolation: services started in one container cannot affect services in another container

Kubernetes - Container Orchestration

Container orchestration is a set of tools for managing containers. The additional functionalities provided by a container orchestration tool can range from deployment to scaling of the application, including the network configuration of containers.

The main tool used at the moment (although there are others) is Kubernetes. Kubernetes provides a whole range of tools and commands that allow you to finely manage your containers. It is thus possible, for example, to quickly deploy an application update by pushing a new version of a particular container, while ensuring a roll-back system in the event of failure.

Kubernetes also takes care of the configuration of the containers (by means of environment variables), allows a load balancer to be set up, and lets you declare several instances of the application at the same time by means of replicas (scaling).
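As an illustration, here is a minimal sketch of a Kubernetes Deployment and Service for a hypothetical containerised API; the image name, port and environment variable are assumptions.

```yaml
# deployment.yaml - illustrative manifest (image, ports and variables are placeholders)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3                     # three instances of the service (scaling)
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.2.0   # pushing a new tag triggers a rolling update
          ports:
            - containerPort: 8080
          env:                    # configuration injected through environment variables
            - name: DATABASE_URL
              value: postgres://app:secret@db:5432/app
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 8080            # the Service load-balances traffic across the replicas
```

If a new version misbehaves, a command such as kubectl rollout undo deployment/api brings back the previous revision, which is the roll-back mechanism mentioned above.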

- Benefits of the tool

  • Provisioning and deployment
  • Resource allocation
  • Making containers available
  • Scaling containers up or down according to workloads in the infrastructure
  • Load balancing and traffic routing
  • Monitoring the integrity of containers
  • Securing interactions between containers


Continuous integration

Continuous integration was adopted very early by developers, who were keen to automate as many tasks as possible each time the application was updated. The objective is to carry out a series of automatic tasks each time a modification is made to the application. Most of the time, these tasks are the following: launch a series of tests, execute a linter (static analysis of the code), check for security flaws, build the application (usually in a container), then deploy the application.

Continuous integration can even include an integration test phase, allowing the interaction of the application with other components (database, third-party API, etc.) to be tested.

There are many continuous integration tools, such as Jenkins, GitLab CI, CircleCI... Most of the time, they are easy to learn: you simply describe a list of tasks to be performed.
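As an example, here is a minimal sketch of such a pipeline with GitLab CI, one of the tools cited above; it assumes a Node.js project, an illustrative registry URL and a Kubernetes deployment already in place.

```yaml
# .gitlab-ci.yml - illustrative pipeline (project layout, registry and deploy command are assumptions)
stages:
  - test
  - build
  - deploy

lint:
  stage: test
  image: node:20
  script:
    - npm ci
    - npm run lint                # static analysis of the code

tests:
  stage: test
  image: node:20
  script:
    - npm ci
    - npm test                    # automated tests on every change

build:
  stage: build
  image: docker
  services:
    - docker:dind                 # Docker-in-Docker to build and push the image
  script:
    - docker build -t registry.example.com/api:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/api:$CI_COMMIT_SHORT_SHA

deploy:
  stage: deploy
  script:
    - kubectl set image deployment/api api=registry.example.com/api:$CI_COMMIT_SHORT_SHA
  only:
    - main                        # deploy only from the main branch
```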

The continuous integration chain sends email alerts to the team, giving them quick feedback on the status of what has been delivered, allowing them to fix problems before they reach production.

- Benefits of the tool

  • Ensuring quality code in production
  • Automation of tests for each new feature/change
  • Fast feedback on every change
  • A path towards continuous deployment: automated deployments at the end of the CI chain provide instant feedback

Ansible / Puppet - Provisioning and deployment automation

Provisioning a production environment can be time-consuming and tedious. Tools like Ansible or Puppet make it possible to automate this task.

A series of tasks is defined in a script, which can contain variables and file templates. These scripts can be executed on any platform to automatically configure a server (database installation, web server setup, HTTPS configuration...) without having to redo the work manually for each infrastructure you want to set up.
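As a sketch, here is what such a script might look like as an Ansible playbook setting up a web server; the host group, package and template paths are illustrative, and a Debian/Ubuntu target is assumed.

```yaml
# webserver.yml - illustrative playbook (host group, paths and variables are assumptions)
- hosts: webservers               # group of machines defined in the inventory
  become: true                    # run the tasks with elevated privileges
  vars:
    domain_name: app.example.com
  tasks:
    - name: Install nginx
      ansible.builtin.apt:        # module for package installation (Debian/Ubuntu assumed)
        name: nginx
        state: present
        update_cache: true

    - name: Deploy the virtual host configuration from a template
      ansible.builtin.template:   # templating: variables are injected into the file
        src: templates/vhost.conf.j2
        dest: /etc/nginx/sites-available/{{ domain_name }}.conf
      notify: Reload nginx

  handlers:
    - name: Reload nginx
      ansible.builtin.service:
        name: nginx
        state: reloaded
```

Running ansible-playbook -i inventory.ini webserver.yml (the inventory file name is also illustrative) applies the same configuration to every machine listed in the inventory.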

It also ensures that all provisioned platforms are identical, which avoids ending up with divergent environments that are difficult to maintain.

Automated configuration also makes it possible to set up new machines very quickly, for example when you want to add a new pre-production server sitting between the test environment and production.

These tools also allow the deployment phase to be automated. A list of actions to update the application (on the production environment or otherwise) can be described in a role, which will be executed, for example, as the last stage of the continuous integration chain.

- Benefits of the tool (Ansible)

  • Modules: for a task, an Ansible module already exists (Example: copy a file).
  • Roles: list of instructions to perform an action (e.g. deploy a version of the application)
  • Inventory: list of machines on which to perform tasks
  • Playbook: list of roles to be performed on the inventory
  • Templating

The cloud and cloud providers

A cloud is made up of servers located remotely and accessible from anywhere, at any time, via a secure Internet connection.

Many cloud providers offer such services, and allow you to set up a turnkey server or managed database in a few clicks.

Among them: Amazon Web Services, Scaleway, Azure, Google Cloud Platform...

Coupled with Infrastructure As Code (see next point), the cloud enables environments to be generated and destroyed on the fly in a fully automated manner.

- Benefits of the tool

  • No more physical machine to manage yourself
  • Quick to set up (a few clicks for a server)
  • Flexibility
  • Easy to handle
  • Maintenance handled by the provider
  • Analytics services

Terraform - Infrastructure As Code

Infrastructure As Code makes it possible, via descriptor files or scripts, to create, manage and destroy infrastructure on the fly.

The open source Terraform tool allows you to use providers to describe the resources you want to put in place. Via a system of resources (a resource being for example a machine, an IP, a storage volume...), it is possible to create a complete infrastructure in the cloud in a few command lines.
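By way of illustration, here is a minimal sketch using the AWS provider (one of the cloud providers mentioned above) to create a single virtual machine; the region, AMI ID and instance size are placeholders.

```hcl
# main.tf - illustrative configuration (region, AMI and instance type are placeholders)
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "eu-west-3"
}

# A resource describes one element of the infrastructure, here a virtual machine
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"   # placeholder machine image ID
  instance_type = "t3.micro"

  tags = {
    Name = "web-server"
  }
}
```

Running terraform init, terraform plan and terraform apply creates the resource; terraform destroy removes it, which is what makes the on-the-fly test environments described below possible.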

Terraform scripts can be run in a continuous integration chain to set up integration test servers on the fly. They can then be destroyed at the end of the test phase to reduce infrastructure costs.

- Benefits of the tool

  • Fast and transparent deployment
  • Reduced risk of human error
  • Speed of execution
  • Deploy and destroy resources on the fly (costs)
  • Easy knowledge sharing: the whole infrastructure is described in a file


Today, it is increasingly necessary to master or at least know the main DevOps tools on the market. For developers, it is the assurance of having better control of their services, of automating certain redundant tasks and of being able to deploy infrastructures without needing the support of an Ops team.

For companies, it is a prerequisite for providing quality, scalable and robust services, and for optimising delivery processes while reducing errors. Automation is at the heart of the DevOps challenge, and helps take human error out of the equation.

There are, of course, many other tools, particularly for setting up monitoring, but those presented in this article form the basis of the technology to be known in 2021.

Tommy Alexandre

Lead developer @theTribe
