Clouds & DevOps United!
As we discussed in the previous posts, Clouds and DevOps are two major IT trends that are closely related and complement each other.
DevOps is a philosophy that implies maximum automation of IT resource management across the modern Development + Operations lifecycle. Clouds, in turn, provide the means and tools to provision the needed IT resources in a simpler and faster way – as a service. One of the main advantages of Clouds is the ability to capture cloud infrastructure and its configuration in templates and manage it as code. This concept is known as Infrastructure as Code (IaC).
From this perspective, preparing cloud infrastructure can also be considered a kind of programming. This programming – the preparation of configuration templates – is done in a so-called declarative way: declaring or describing the desired future state of the cloud resources, their interdependencies, and interconnections, usually saved as JSON or YAML structured documents. AWS CloudFormation is one example of such a cloud service.
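As an illustration, a minimal CloudFormation-style template might declare a single virtual machine and its firewall rule like this (a sketch only – the resource names and the AMI id are placeholders):

```yaml
# Illustrative CloudFormation template: declares the desired state of
# one EC2 instance plus a security group allowing inbound HTTP.
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal web server stack (illustrative example)
Resources:
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow inbound HTTP
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro
      ImageId: ami-0123456789abcdef0   # placeholder AMI id
      SecurityGroups:
        - !Ref WebSecurityGroup
```

Note how the template only describes *what* should exist; the cloud service works out the order in which to create the resources.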
These configuration files can be stored and managed in version control systems (VCS) in the same way as application source code. The beauty of this approach is that the whole cloud infrastructure can be rolled out from a predefined template, including computing resources, databases, network & security configurations, etc.
All the resources are deployed in the proper sequence with a single click. This is maximum automation – so-called “one-touch provisioning.” But it is not the only power of DevOps tools in Clouds; let us see what else we have. We do have a lot!
DevOps as a Service
We can see DevOps as a service in the cloud from two different perspectives – DevOps tools provided as a service and DevOps automation solutions. DevOps tools provided as a service are pre-configured and usually don’t require much DevOps expertise to use. Cloud vendors offer such services; popular examples are:
- AWS CodePipeline & AWS CodeBuild
- Google Cloud Build
- Azure DevOps Services
These services can be easily integrated with existing development tools and processes.
DevOps automation solutions, on the other hand, require more DevOps expertise to configure and use, but they offer much more power and flexibility in DevOps infrastructure management. Their significant advantage is that they can be used not only in the cloud but also on-premises. Some popular DevOps automation solutions are:
- Puppet, Chef, Ansible
- Jenkins
- Terraform
As we see, DevOps tools and automation solutions together form a complete DevOps toolset for managing and operating DevOps environments in the cloud.
DevOps teams can use these tools to automate various DevOps activities, from infrastructure provisioning and configuration management to continuous integration/delivery (CI/CD) and application monitoring, and to manage DevOps environments both in the cloud and on-premises. In short, DevOps as a service in the cloud is the combination of DevOps tools and automation solutions used to manage and operate cloud DevOps environments.
Cloud Ops vs. DevOps
Now, let us see what Cloud Ops and DevOps are and how they are related. Cloud Ops can be defined as a set of processes and practices that help organizations manage and operate their cloud environments. Cloud Ops involves various activities, from provisioning and configuring cloud resources to monitoring and troubleshooting cloud applications.
DevOps, on the other hand, can be defined as a set of practices that help organizations increase their speed and agility in software development and delivery. DevOps practices help organizations automate various tasks in the software development lifecycle, from code development and testing to deployment and monitoring.
So, Cloud Ops and DevOps are two different things: Cloud Ops helps organizations manage and operate their cloud environments, while DevOps helps them increase speed and agility in software development and delivery. Yet the two are related. Cloud Ops activities can help DevOps teams automate tasks in the software development lifecycle; in turn, DevOps practices can help Cloud Ops teams increase their speed and agility in cloud environment management.
DevOps Functions And Tools
DevOps tools are not necessarily cloud-based, but many systems that started as locally deployed and managed software have evolved to offer cloud versions via the SaaS (Software as a Service) model.
Fig. 1. The DevOps Cycle
Let us describe all the stages of the DevOps cycle in detail: what functions each stage is responsible for and what tools can be used to support them.
Plan
Planning is the first step in many processes – the better you plan, the more effective your strategy will be going forward. In long waterfall-based projects, it was common practice to use classic project planning tools such as MS Project.
A Project Manager defines project phases, duration, interdependencies, and team capacity in a single project plan file. This file serves as a solid roadmap for the project and is used to track its progress throughout all the phases.
With the popularity of Agile software delivery methodologies (Scrum, Kanban, Lean, DevOps), the delivery lifecycle has shortened. Collaboration tools such as Jira, Trello, and Wrike are now used for planning iterations, assigning tasks, tracking process stages, and orchestrating the work of development teams, which are sometimes geographically distributed.
These tools are usually also integrated with audio/video conferencing & messaging tools like Slack, Zoom, and RocketChat for quick communication between team members. For personal and team planning, it is also especially important to have a practical knowledge- and information-sharing solution to manage documentation, meeting notes, and other information – Confluence, Teamwork, or similar tools can be used for this purpose.
Of course, MS Project is still a popular tool that many PMs are familiar with, so it has its Cloud version as a part of the Office 365 offering.
To build an efficient project management and collaboration environment, it is crucial to have all the systems integrated with each other so they can reference information across tools, and to have Single Sign-On (SSO) capabilities to save time and avoid the need to log in to each system separately.
Create Your Program
Developers produce code, usually on their own PC or laptop; it is pretty rare to use cloud-based tools for code editing. Integrated development environments (IDEs) and code editors remain a conservative world, since they are complex, highly customizable programs with multiple plugins and extensions; there is plenty of choice, from editors specialized for a specific programming language to general-purpose ones (Notepad++, VIM, Atom, etc.). That said, cloud-based IDEs (Codeanywhere, Cloud9, Eclipse Che) are available as a service, as are editors with an integrated runtime, such as Jupyter Notebook-based solutions (Google Colaboratory, AWS SageMaker, Azure Notebooks), used mainly for Data Science and Machine Learning.
But what do coders do with their code when they develop one solution collaboratively in a team? They need a commonly available place to store it, control and track changes, keep it consistent, and make sure code written by one developer is not overwritten by another. For this purpose, Version Control Systems (VCS) were invented. There are many products out there, but two major players share most of the market – Apache Subversion (SVN) and Git (Git has several cloud-based implementations – GitHub, GitLab, Bitbucket).
Version Control Systems allow teams to accurately track which piece of code was added, modified, or deleted and by whom, quickly restore a previous version, merge code, resolve conflicts, compare code versions, control who can perform code changes, approve or reject suggested changes, and review newly added code.
This is a handy tool for collaborative development. In the context of DevOps, it is the first part of CI/CD process automation. A typical use case is a VCS integrated with a ticketing/project management tool to relate a piece of code to a specific user ticket, request, or task on a Jira/Kanban board. Once a task is assigned to a developer, they receive a notification and take the task to work; after it is done and the code is ready, it is packaged, deployed to a test environment, tested, and delivered to production. So, by now, we have a process like this:
Ticket assigned (e.g., in Jira) -> Notification is sent to a developer (e.g., email or Slack) -> Developer uses their favorite IDE to change/add code and submits it into the VCS (e.g., Git). Is the Dev part of the DevOps cycle done at this point? Probably not yet; let us see what happens next.
Build and Package
Once the code is written, the software needs to be compiled and packaged into an installable or executable patch. This process is a perfect candidate for automation because it is the same repeatable task: compile the code, install dependencies, and package it with a particular version. Many mature software products are available on the market (Apache Ant, Maven, Gradle), including SaaS ones (Travis CI, Google Cloud Build, AWS CodeBuild).
The main objective of these tools, which originated from sets of self-developed scripts (usually Bash or PowerShell), is to automate repetitive tasks to create an installable, deployable software package ready to be rolled out to a server, virtual machine, or container.
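To make the idea concrete, here is a tiny sketch of the kind of repetitive build-and-package step these tools automate: collect the source files, stamp a version, and produce a single archive. The function name, paths, and version scheme are illustrative, not from any real build tool.

```python
# Sketch of a minimal "build and package" step: gather source files,
# embed version metadata, and emit one deployable archive.
import pathlib
import zipfile

def build_package(src_dir: str, version: str) -> str:
    """Package all .py files under src_dir into app-<version>.zip."""
    out = f"app-{version}.zip"
    with zipfile.ZipFile(out, "w") as zf:
        for path in pathlib.Path(src_dir).rglob("*.py"):
            # store files relative to the source root
            zf.write(path, path.relative_to(src_dir))
        zf.writestr("VERSION", version)  # embed build metadata
    return out
```

A real build tool adds dependency resolution, compilation, and caching on top of this basic loop, but the repeatable nature of the task is the same.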
It also helps to verify that all dependencies are collected, the code compiles without errors, and the build completes successfully. In Agile software development practice, it is usually recommended to perform a build on every code commit, so that if something is broken, it is discovered as soon as possible, with minimal impact and minimal effort to roll back to a previous working version of the code. This is the point at which a build system is integrated with the VCS. However, Git/SVN does not always trigger the build directly – a particular class of software orchestrates the whole CI/CD process, which we will describe later in this post.
During the build stage, external packages are expected to be included in a deployable patch. These packages are called artifacts and are stored in centralized storage called an artifact repository. The built patch itself can also be saved and tracked in the same repository. The most commonly used artifact repositories are JFrog Artifactory, Nexus, and DockerHub.
Fig. 4. Software Build and Package from VCS + Artifact Repository
Test & Verify
Quality Assurance is an entirely separate world with its own rules and tools, concepts and approaches, and different test types – unit, integration, system, acceptance, regression, and others. Some of them can be automated and executed as part of the CI process right after the build. The result of any testing is a decision on whether this build can be deployed – “green light” (if the number of errors is acceptably low) or “red light” (if not). With manual testing, there is always a tradeoff between the amount of testing to be done and the price of resources and time spent on it. Test automation is a great time and effort saver and should be used whenever possible.
Tools used for automated test execution depend on the technology and the type of program we want to test (web frontend or backend, mobile app, or desktop program); the most popular test tools are JUnit, Selenium, Cucumber, and SilkTest. There are also more specialized test types, such as performance or security (penetration & vulnerability) testing; they can also be automated and added to the CI/CD process.
Moreover, an extension of DevOps known as DevSecOps indicates the importance of the Security aspect.
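As a minimal illustration, here is the kind of automated unit test a CI server would run on every build (shown in pytest style; the function under test, slugify, is a made-up example, not from any tool mentioned above):

```python
# A tiny function and its automated unit tests; in CI, these run on
# every commit and gate the "green light / red light" decision.
def slugify(title: str) -> str:
    """Turn an article title into a URL slug."""
    return "-".join(title.lower().split())

def test_lowercases_and_joins_words():
    assert slugify("Clouds and DevOps") == "clouds-and-devops"

def test_handles_extra_spaces():
    assert slugify("  DevOps   Cycle ") == "devops-cycle"
```

If any assertion fails, the build is marked red and the patch never reaches deployment.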
Fig. 5. Deployment to Cloud After Successful Testing
Configure & Release
Once our code is ready, packaged, and tested, we are almost ready to deploy it to our environments. Usually, this involves a number of commands to be run around the installation: stop the running application from receiving user traffic, gracefully shut it down, download the new patch, apply it, start the new version of the application, direct user traffic back to it, and validate that the installation has finished successfully.
Of course, users would prefer not to have any service interruption during the deployment, so different approaches to gradual rollout are used. When an application is redundant – running on more than one server or container – its instances are updated one by one or in proportions (canary deployment, blue/green deployment, rolling deployment).
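A rolling deployment can be sketched in a few lines: update instances in small batches so the rest of the fleet keeps serving traffic, and abort if a health check fails. This is an illustrative simulation, with hypothetical instance records, not a real deployment tool's API.

```python
# Illustrative rolling deployment: update the fleet in batches and
# stop at the first failed health check so traffic keeps flowing.
def rolling_deploy(instances, new_version, batch_size=1):
    deployed = []
    for i in range(0, len(instances), batch_size):
        batch = instances[i:i + batch_size]
        for inst in batch:
            # real tools would drain traffic, update, then re-enable
            inst["version"] = new_version
            if not inst.get("healthy", True):
                raise RuntimeError(f"rollback needed: {inst['name']} failed health check")
        deployed.extend(batch)
    return deployed

fleet = [{"name": f"web-{n}", "version": "1.0", "healthy": True} for n in range(4)]
rolling_deploy(fleet, "1.1", batch_size=2)
print([inst["version"] for inst in fleet])
```

Canary and blue/green strategies follow the same principle but route only a fraction of traffic (or a parallel environment) to the new version first.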
What is also essential in the deployment process is a reliable rollback procedure: in case the deployed application is faulty (but somehow passed all the tests), it is an excellent idea to roll it back to the previous working state as quickly as possible. The DevOps philosophy and CI/CD approach allow frequent releases with fewer changes in each, which significantly reduces the risk of a large, complex failure that might be difficult to fix.
The most used tools for automated deployment are Chef, Puppet, Ansible, and Terraform. Cloud service providers also offer deployment automation tools for their platforms, such as AWS CodeDeploy and Google Cloud Deployment Manager.
When deploying an application to different environments, it is necessary to apply the correct security credentials and other settings, usually stored in configuration files. It is more convenient to keep these sets of parameters outside the principal application code so they can be applied dynamically. The sensitive part of these parameters – usernames, passwords, and access tokens – is called secrets and must not be stored in plain text; it must be encrypted. Public cloud providers offer special storage services for such information: parameter/secret stores.
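The principle of keeping settings and secrets out of the code can be sketched like this: non-sensitive settings are selected per environment, while secrets are read from the environment at runtime (in the cloud, they would come from a secret store instead). The setting names and the DB_PASSWORD variable are illustrative assumptions.

```python
# Sketch: per-environment settings live outside the application code,
# and secrets are injected at runtime, never hard-coded.
import os

def load_settings(env: str) -> dict:
    base = {
        "dev":  {"db_host": "db.dev.local"},
        "prod": {"db_host": "db.prod.local"},
    }[env]
    # DB_PASSWORD is an assumed variable name; a real setup would fetch
    # it from an encrypted parameter/secret store.
    base["db_password"] = os.environ.get("DB_PASSWORD", "")
    return base
```

The same build artifact can then be promoted from dev to prod unchanged; only the injected configuration differs.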
Monitor, Observe & React
After deployment, it may seem that the Dev part is done, and we step into the Ops realm.
We need to ensure that our application is healthy, serves users, performs all operations correctly, and has sufficient resources. In simple words, we need to monitor its health and performance. There are a few aspects of monitoring to take into consideration: application logs, application and environment metrics, and alarms and notifications.
In a normal situation, logs are not read by anyone; they are just written to some log storage. Reading logs only when something happens is an old-fashioned, reactive approach. It is now common to have logs analyzed continuously to detect anomalies and, if possible, predict and prevent potential issues.
Application and environment metrics can be exposed for collection by monitoring programs at different levels; these metrics can be application/platform-specific or relatively standard ones (CPU utilization, RAM, inbound/outbound network, IOPS, etc.).
Of course, as everywhere in DevOps, we want automation; thus, different alarming mechanisms can trigger automated reactions to inform an engineer (e.g., a notification) or perform remediation actions (restart, scale in/out, replace a VM/container, etc.).
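A simple alarm rule of this kind can be sketched as: if a metric stays above a threshold for several consecutive samples, trigger a reaction. The threshold, period count, and sample data below are illustrative.

```python
# Sketch of a threshold alarm: fire only when the last `periods`
# samples all exceed the threshold, to avoid reacting to one spike.
def evaluate_alarm(samples, threshold=80.0, periods=3):
    """Return True if the last `periods` samples all exceed `threshold`."""
    if len(samples) < periods:
        return False
    return all(s > threshold for s in samples[-periods:])

cpu_history = [55.0, 62.0, 85.5, 91.2, 88.7]  # CPU utilization, %
if evaluate_alarm(cpu_history):
    print("ALARM: sustained high CPU, scaling out")
```

Real monitoring systems evaluate many such rules continuously and wire the "ALARM" branch to notifications or auto-scaling actions.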
Some of the most used monitoring & log analysis systems are Prometheus, Dynatrace, Grafana, ELK, and Nagios.
Even though it may seem that Devs have nothing to do with monitoring and it is a purely Operations area of responsibility, our process forms a closed loop: while the application is being monitored and observed, the results of that observation are supplied from Ops back to Devs (in fact, we have a joint team, and this separation is only for illustration) to continuously improve the application’s features and performance, get insights into which parts are used most frequently, and so on.
We have just described the whole DevOps cycle and the tools that enable each part of the process, but we have not yet covered the overall orchestration and management of the entire process. There must be a brain that controls what is to be done, and when, by each part of our CI/CD organism.
The primary purpose of orchestration and scheduling tools is to stitch together all the details of the process, pass signals from one system to another, trigger automatic actions, and inform the DevOps team whether each part of the process has executed correctly. This type of software usually operates with workflows or pipelines and the tasks running within them, triggered by schedules or events. It is the glue that integrates the whole process and serves as a control center.
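At its core, such an orchestrator runs stages in order, stops at the first failure, and reports status. Here is a minimal sketch; the stage names match this post's cycle, while the stage bodies are stubs standing in for the real build, test, and deploy systems.

```python
# Minimal sketch of a CI/CD pipeline orchestrator: run stages in
# sequence, report each result, and halt on the first failure.
def run_pipeline(stages):
    for name, action in stages:
        ok = action()
        print(f"{name}: {'OK' if ok else 'FAILED'}")
        if not ok:
            return False  # signal the team; do not run later stages
    return True

stages = [
    ("build",  lambda: True),   # stub for the build/package system
    ("test",   lambda: True),   # stub for the automated test suite
    ("deploy", lambda: True),   # stub for the deployment tool
]
run_pipeline(stages)
```

Production tools add event triggers, parallel branches, retries, and approvals on top of this sequential core.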
Examples of CI/CD process automation tools: AWS CodePipeline, Azure Pipelines, Jenkins, Bamboo.
Cloud versions of most DevOps tools are available as services, so you can build your own CI/CD pipeline fully managed by a cloud service provider.
Fig. 7. DevOps tools from Public Clouds (AWS, Azure, GCP)
If you want to know more about DevOps, stay tuned for our other posts on our blog!
To start your journey toward becoming a DevOps engineer and excel in your career, we suggest starting with the basics.
DevOps online training can be a great way to learn the skills needed to be a DevOps engineer. DevOps is a relatively new field, and there are many online resources that can help you get up to speed quickly. The key DevOps skills you will need to learn include automation, configuration management, and monitoring.
To join our learn-by-doing Project Based Learning program, where you can build actual software and deployment scenarios and get real hands-on experience in DevOps and Clouds – visit our site!