DevOps: how to implement the process?
Embracing the DevOps methodology and its philosophy is the first step toward approaching the work in a “different” way. But how is the process actually implemented?
In a previous article I illustrated the mindset you need in order to structure a DevOps process: theory is crucial to build teamwork guided by good development and, above all, good release practices.
Knowing the mindset is an important step, but you also need to be able to structure the process with the right tools, suited to the technologies chosen for your work.
At 20tab we have developed an open source template that lets us set up projects in a matter of seconds, guaranteeing Continuous Integration pipelines within a Continuous Delivery process from the very first commit. This enables us to perform automatic deployments to all the necessary environments. Continuous Integration automates the test processes and largely shares its pipeline with Continuous Delivery, a broader concept that also automates the software release process.
As we have already seen, rapid feedback lets us quickly modify and improve the software: this means responding quickly to market needs. It also means satisfying the customer at the moment of need, thus providing a service of the highest quality.
The early stages
At 20tab we do not have a role dedicated to the DevOps process, so we had to find an alternative that would keep the quality of the process high and, most importantly, would not create bottlenecks in our activities.
But let's start from the beginning.
In order to meet the expectations that arise when creating software, it is essential that every team member knows the project goals and the roadmap to follow.
- As for the goals, a good impact analysis is needed. This allows us to plan activities and optimize resources. Maurizio Delmonte tells us about it in a very interesting article.
- For the roadmap, instead, we rely on an excellent activity analysis tool: User Story Mapping, a technique Gabriele Giaccari explains in this second article.
These first two steps, within the design phase, help us build a backlog of activities that also includes project setup tasks.
Once the implementation phase of our features has started, we need to be able to deploy and release new features continuously.
In our process we generally use 3 different environments for the releases:
- Development: our development environment, corresponding to a version of the project in alpha testing. It is generally the environment where development team members and the customer's technical representatives can run the first tests. Every small integration is released to this environment daily.
- Integration: the environment where features are released after review by the customer or a technical representative. If the release to this environment succeeds, the code is ready to be released into production, so users can access the newly implemented functionality.
- Production: the production environment, identical in every respect to integration. Here, releases take place only after explicit approval by the client.
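As a rough sketch, the release policy of the three environments can be expressed as configuration. The names and values below are hypothetical illustrations, not taken from the actual template:

```python
# Illustrative per-environment release policy. Each release target gets
# its own settings; names and values here are hypothetical examples,
# not the actual 20tab template configuration.
ENVIRONMENTS = {
    "development": {
        "auto_deploy": True,   # every small integration is released daily
        "audience": "team and customer's technical representatives",
    },
    "integration": {
        "auto_deploy": True,   # released after review by the customer
        "audience": "customer or technical representative",
    },
    "production": {
        "auto_deploy": False,  # releases need explicit client approval
        "audience": "end users",
    },
}

def requires_manual_approval(env: str) -> bool:
    """Return True when a release to `env` must be explicitly approved."""
    return not ENVIRONMENTS[env]["auto_deploy"]
```

Keeping this policy in one place makes it easy for tooling to decide which environments can be deployed to automatically and which must wait for a human decision.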
As you can see from the previous figure, which represents the workflow just described, the first six phases can easily be automated. Let's look at what happens at each stage.
- Source: the moment when newly implemented features are pushed to the git repository.
- Test: the Continuous Integration pipeline runs the tests automatically. This helps us release code free of known bugs, while the automated tests let us catch any regressions.
- Build: an application build is created, i.e. a Docker image that will be used for our releases.
- Development: if the tests and automated build pass, there are no technical blockers and the code can be reviewed; once reviewed, the new changes are approved.
- User Acceptance Test: the new code is immediately released to the development system. At this point the new features are reviewed from the user's point of view, with quality control. If no issues emerge, we move to the next step: the release to the staging system (our Integration).
- Staging: this is simply the release to the Integration system. It is a necessary step to verify that everything approved in the development phase has not suffered regressions of any kind, and that it meets the requirements to go into production. Otherwise it could cause problems for the software in production, with direct consequences for the associated business.
- Production: sending new features to production must be a manual choice, dictated by business needs. If all the previous steps have been successful, we can integrate our new code with the software our users already rely on, confident that we have not caused any disruption.
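The gating logic of these stages can be sketched in a few lines of Python. This is an illustrative model, not the actual template's pipeline code: the stand-in stage functions simply return a pass/fail flag.

```python
# Minimal sketch of the stage sequence described above. A real pipeline
# would run a test suite, build a Docker image, and call deployment
# tooling; here each stage is a stand-in callable returning pass/fail.
from typing import Callable, List, Tuple

def run_pipeline(stages: List[Tuple[str, Callable[[], bool]]],
                 approve_production: bool) -> List[str]:
    """Run stages in order, stopping at the first failure.

    Production is never reached automatically: it runs only when
    `approve_production` is True, mirroring the manual business decision.
    """
    completed = []
    for name, stage in stages:
        if name == "production" and not approve_production:
            break  # wait for an explicit go-ahead from the client
        if not stage():
            break  # a failed stage blocks everything after it
        completed.append(name)
    return completed

# Example run: all stand-in stages pass, but production is not approved,
# so only the first six stages complete.
stages = [(name, lambda: True) for name in
          ["source", "test", "build", "development",
           "user_acceptance_test", "staging", "production"]]
print(run_pipeline(stages, approve_production=False))
```

The key design point is that automation covers everything up to staging, while the final step stays behind a human gate.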
The flow just described represents what, in theory, it would be best to do in order to have a consistent and solid Continuous Delivery process: running the build step before the code review ensures the build completes successfully before any reviewer's time is spent.
In our case, we have slightly modified the steps to optimize some physical resources: we perform the code review before the build. Since we apply Test Driven Development and run the build locally, that step rarely fails. Even with this change, if the build does not complete successfully, deployment to the development environment is immediately blocked.
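The difference between the two orderings, and the build gate that survives in both, can be sketched as follows (illustrative names, not actual template code):

```python
# The "textbook" ordering builds before reviewing; our variant reviews
# first to save build resources, relying on TDD and local builds to keep
# the build step from failing. In both orderings, a failed build blocks
# the development deployment. Names here are illustrative only.
TEXTBOOK_ORDER = ["test", "build", "review", "deploy_development"]
OUR_ORDER      = ["test", "review", "build", "deploy_development"]

def can_deploy(results: dict) -> bool:
    """Deploy to development only if test, review, AND build all passed.

    The ordering changes which gate fires first, but a failed build
    always blocks the deployment, whichever variant is used.
    """
    return (results.get("test", False)
            and results.get("review", False)
            and results.get("build", False))
```

Swapping two stages is a pragmatic trade-off, not a weakening of the process: the same conditions must still all hold before anything reaches the development environment.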
To automate the whole process, we decided to create an open source template that structures our projects according to our technologies and our standards, and that, of course, follows all the principles of the Agile and DevOps processes.
The template we use consists of the following main blocks:
- a service we call Backend: a Django project that generally offers the REST API and the interface to the database;
- an orchestrator: a repository where the Kubernetes configurations needed for all the environments we use are stored.
The structure is obviously tied to the technologies we use in our development, but the approach itself is independent of them.
Here you will find the model to start from when creating your projects: following a step-by-step guide, you will be able to build a working product in just a few minutes. It will provide the structure to contain everything your new software will need.
I explained all the steps in a video:
If you program with our same technology stack, you can use this template as it is.
What if you use different languages? The approach we have described applies to them as well: being a process, it is independent of the technology.
The important thing to keep in mind is that with the right mindset, good team collaboration, and adherence to programming best practices, you can achieve a good DevOps process, even without a role dedicated to that purpose.