ChatOps, AIOps, GitOps, CloudOps – these are a few of the DevOps trends on the rise, and many organizations are jumping at the chance to implement these ‘new’ practices and technologies. However, organizations must ask themselves: how many groundbreaking successes are we actually seeing from these emerging DevOps trends?
About the author
Anders Wallgren is VP of Technology Strategy at CloudBees.
To date, few revolutionary changes have come from the variety of “X-Ops”. This raises the question of whether too many businesses are focusing on adopting the latest DevOps trends, rather than concentrating their efforts on reliable, effective software development with the fundamental principles of people, tooling and collaboration at its core.
More industries are capitalizing on automation and low code processes, with some removing manual work from the DevOps equation. However, organizations require a human element in order to scale DevOps and, by removing human collaboration, we take away what is at the very heart of DevOps. The goal of DevOps is to combine separate teams with sometimes divergent missions – development, leadership and operations – and to form a closer collaboration between them to move the company mission forward faster.
What’s more, DevOps tools are not sophisticated enough to work autonomously and require humans to power them effectively. Therefore, to save businesses from being distracted by shiny new DevOps trends that appear slick and groundbreaking, this article will examine some of the many DevOps trends and current buzz-terms while delving into new practices and debunking the myths. Let’s separate fact from fiction.
Let’s start with ChatOps – a collaboration model created to ease the integration between various DevOps platforms, tools and IT management teams. ChatOps enhances the work that teams are already doing, with bots acting as team members, receiving requests and issuing instant responses back to the team. One of the really cool things about ChatOps is that it is educational: each action is logged after being issued, and team members are able to learn from previous activity.
With ChatOps, however, it is important to remember that these products are not autonomous and there is no magic element that sets them up. A human is required to write the code that ties the pieces of ChatOps together; it doesn’t just work perfectly straight out of the box. When an error occurs, manual work is required to fix it, which debunks the myth that ChatOps products can function autonomously. And if manual work is still required to power ChatOps, there is the potential for it to create further bottlenecks for teams.
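To make the collaboration model concrete, here is a minimal sketch of the ChatOps pattern described above: a bot that receives a command in chat, dispatches it to a handler, replies in-channel, and logs every action so the team can learn from previous activity. The chat transport, the `ChatOpsBot` class and the `!status` command are all hypothetical stand-ins, not any real platform’s API – note how much human-written glue code is needed even at this scale.

```python
# Minimal ChatOps-style bot sketch (hypothetical, not a real platform API).
# It receives chat messages, runs registered commands, replies, and keeps
# an audit log of every action - the "educational" trail ChatOps provides.
import datetime

class ChatOpsBot:
    def __init__(self):
        self.handlers = {}   # command name -> handler function
        self.audit_log = []  # every executed command is recorded here

    def command(self, name):
        """Register a handler for a chat command like '!status'."""
        def register(fn):
            self.handlers[name] = fn
            return fn
        return register

    def on_message(self, user, text):
        """Receive a chat message; run it if it starts with a known command."""
        parts = text.split()
        if not parts or parts[0] not in self.handlers:
            return None  # ordinary chatter, not a command for this bot
        reply = self.handlers[parts[0]](user, parts[1:])
        self.audit_log.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": user,
            "command": text,
            "reply": reply,
        })
        return reply

bot = ChatOpsBot()

@bot.command("!status")
def status(user, args):
    # A real handler would query CI/CD or monitoring systems here.
    return f"All systems nominal (requested by {user})"
```

Every piece of this – the command registry, the handlers, the wiring to a real chat product – is manual work a human has to write and maintain, which is exactly why the "autonomous ChatOps" myth doesn’t hold.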
Organizations must weigh new risks from trends
There are also a number of potential security implications with ChatOps. Most chat products do not have strong application programming interfaces (APIs) or facilities for securing information. There is, for example, a high risk of employees gaining access to a room and seeing confidential information they should not see. For industries that require incredibly robust security measures – regulated sectors such as global finance and healthcare, which hold arguably some of the most valuable information – the reality is that ChatOps may not be an appropriate tool. It is incredibly important for organizations to identify what area of the business they are trying to improve with ChatOps, and to weigh the potential risks of a new product like ChatOps before implementing it.
Low code/no code: built to streamline workflows?
Another hot trend in DevOps is low code/no code systems. These systems substitute automation tools for some of the work of skilled developers and are set up to increase the speed of software delivery. Many organizations are using low code/no code products to boost the efficiency of teams, so that both developers and operators can invest more time in improving the customer experience rather than spending long stretches writing and fixing code.
However, the concept of low code/no code is not a new idea. In the nineties, Microsoft released Visual Basic, an event-driven programming language with which businesses could easily draw an interface and have code generated for them, without having to write it from scratch. This sounds pretty groundbreaking, right? In reality, not so much. From an architectural perspective, Visual Basic was not a roaring success – it was neither scalable nor reliable. The question for organizations to consider is whether low code/no code has significantly improved since then.
AIOps – a false promise?
When looking back at the history of low code and no code systems, there appears to be a pattern of over promising and under delivering. New systems are made to look slick in demos that demonstrate how easy it is to put code together, but when you look a little closer, the idea falls apart completely. Again, low/no code is not autonomous. Human action is required to create the code. It will not spontaneously generate itself. Therefore, humans cannot be removed from the equation because they need to monitor code regularly to check that it is working efficiently and that there are no bottlenecks.
Ultimately, if the code is not maintained by humans and breaks down in the system, it will fail and no longer serve the people it was made for. Since low code/no code systems have the potential to create further bottlenecks in DevOps, it is fair to say the hype exceeds the achievements so far. A lot of money is currently being invested in developing these systems, so we can expect plenty of new and interesting tools to emerge. However, the evidence of how groundbreaking low/no code systems will be is yet to be seen, as manual work is still required to fix any issues that arise.
It must not be forgotten that AI cannot exist without humans. When you dig further into the history of AI, no major breakthroughs have been made since the 80s. Sure, we have clever algorithms that are now accessible to many more organizations, but fundamentally, AI is still very fragile. AI is simply a collection of statistics, which can only be as smart as it is created to be through the manual power of humans. In essence, it is still not very sophisticated. Societal challenges with AI remain because it is programmed to make decisions based on external training data that’s subject to bias. If a tool has the capacity to exclude sensitive variables, such as gender and ethnicity, then it cannot be truly collaborative.
Ultimately, a human-centered approach is key
After a close examination of new DevOps trends, one thing remains clear – a people-centric, collaborative approach is still needed for DevOps teams to work at their full potential and achieve meaningful results. Humans are still integral to advances in automation. Software is created for human use, and people are required to power exciting new tools such as ChatOps. New trends also bring new security challenges, and it is essential for organizations to weigh the pros and cons of adopting new practices. But one thing is universally clear: the human element in DevOps is required to make both scalability and reliability a reality.
We cannot rely on technology alone to power sophisticated IT systems; we need real people working behind the curtain. Of course automation can help teams deliver software quickly, but to believe that automation can work autonomously, without manual effort, would be foolish. A solid foundation of DevOps practices, combined with collaborative ways of working, needs to operate harmoniously with technology. So are all these new buzzwords fact or fiction? Ultimately, you’ll be the judge – but regardless of the outcome, humans shouldn’t be taken out of the equation any time soon.