This five-phase maturity model borrows its structure from the Capability Maturity Model (CMM), progressing from a base level of no effective capability through beginner, intermediate, advanced and expert stages. It is a path to the advanced capabilities befitting the DevOps major leaguers that deploy multiple times a day or even multiple times an hour. The pinnacle of continuous delivery maturity focuses on continual process improvement and optimization using the metrics and automation tools implemented in stages two through four of the model. Optimizations reduce the cycle time for code releases; eliminate software errors and the rollbacks they cause; and support more complex, parallel release pipelines for multiple concurrent software versions, including A/B experimental releases. The first stage of maturity in continuous delivery entails extending software build standards to deployment.
At the intermediate level, builds are typically triggered from the source control system on each commit, tying a specific commit to a specific build. Tagging and versioning of builds is automated, and the deployment process is standardized across all environments. Release artifacts are built only once and are designed to be deployable to any environment. The standardized deployment process also includes a foundation for automated database deployments (migrations) covering the bulk of database changes, as well as scripted runtime configuration changes. A basic delivery pipeline is in place covering all the stages from source control to production.
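As an illustration of the build-once, deploy-anywhere idea, the sketch below tags an artifact with the commit that produced it and applies environment-specific configuration only at deploy time. All file names, helper scripts and the config layout here are hypothetical:

```python
#!/usr/bin/env python3
"""Sketch of a build-once, deploy-anywhere step (hypothetical names and paths)."""
import json
import subprocess
import sys

def artifact_name(app: str) -> str:
    # The artifact is built exactly once and identified by the commit
    # that produced it, tying a specific commit to a specific build.
    sha = subprocess.check_output(
        ["git", "rev-parse", "--short", "HEAD"], text=True
    ).strip()
    return f"{app}-{sha}.tar.gz"

def deploy(artifact: str, environment: str) -> None:
    # Environment-specific settings live outside the artifact, so the same
    # package can be promoted unchanged through test, staging and production.
    with open(f"config/{environment}.json") as f:
        config = json.load(f)
    # run_migrations.sh and install.sh are placeholders for a team's
    # real automated database migration and install steps.
    subprocess.run(["./run_migrations.sh", config["database_url"]], check=True)
    subprocess.run(["./install.sh", artifact, config["target_host"]], check=True)

if __name__ == "__main__":
    deploy(artifact_name("myapp"), sys.argv[1])  # e.g. `deploy.py staging`
```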
This means no manual testing or verification is needed to pass acceptance, but typically the process still includes some exploratory testing that feeds back into the automated tests to continually improve test coverage and quality. If you correlate test coverage with change traceability, you can begin practicing risk-based testing to get better value from manual exploratory testing (see the sketch below). At the advanced level, some organizations might also start automating performance tests and security scans. Every company is unique and has its own specific challenges when it comes to changing the way it works, such as implementing Continuous Delivery. This maturity model gives you a starting point and a base for planning the company's transformation towards Continuous Delivery.
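One way to make that coverage-versus-change correlation concrete is sketched below: cross-reference the files changed in a commit with per-file test coverage, and route poorly covered changes to manual exploratory testing. The report format is assumed from coverage.py's `coverage json` output, and the 80% risk threshold is an invented cutoff:

```python
#!/usr/bin/env python3
"""Risk-based test targeting: flag changed files with weak coverage (sketch)."""
import json
import subprocess

RISK_THRESHOLD = 80.0  # invented cutoff: below this, request exploratory testing

def changed_files(base: str = "origin/main") -> list[str]:
    out = subprocess.check_output(["git", "diff", "--name-only", base], text=True)
    return [line for line in out.splitlines() if line.endswith(".py")]

def coverage_by_file(report_path: str = "coverage.json") -> dict[str, float]:
    # Format assumed from coverage.py's `coverage json` report.
    with open(report_path) as f:
        report = json.load(f)
    return {path: data["summary"]["percent_covered"]
            for path, data in report["files"].items()}

if __name__ == "__main__":
    coverage = coverage_by_file()
    for path in changed_files():
        pct = coverage.get(path, 0.0)  # unseen files count as uncovered
        if pct < RISK_THRESHOLD:
            print(f"HIGH RISK: {path} changed with {pct:.0f}% coverage "
                  f"- prioritize for exploratory testing")
```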
This setup is suitable when you deploy new models based on new data, rather than based on new ML ideas.

At this stage, DevOps teams (continuous delivery experts all adopt some form of DevOps structure) have fully automated the code build, integration and delivery pipeline. They have also automated infrastructure deployment, likely on containers and public cloud infrastructure, although VMs are also viable. Hyper-automation enables code to pass rapidly through unit, integration and functional testing, sometimes within an hour; it is how these CD masters can push several releases a day if necessary. A simplified driver for that staged flow is sketched below. Resist the tendency to treat a maturity model as prescriptive directions rather than generalized guidelines: it is a tour guidebook, not a detailed map. Also, although this continuous delivery maturity model shows a linear progression from regressive to fully automated, activities at multiple levels can and do happen simultaneously.
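The pytest commands below are placeholders for whatever test runners a team actually uses; the point is only the fail-fast ordering of the stages:

```python
#!/usr/bin/env python3
"""Fail-fast pipeline driver: unit -> integration -> functional (placeholders)."""
import subprocess
import sys
import time

STAGES = [  # substitute the team's real test runners for these commands
    ("unit", ["pytest", "tests/unit"]),
    ("integration", ["pytest", "tests/integration"]),
    ("functional", ["pytest", "tests/functional"]),
]

for name, cmd in STAGES:
    start = time.monotonic()
    result = subprocess.run(cmd)
    print(f"{name}: {'ok' if result.returncode == 0 else 'FAILED'} "
          f"in {time.monotonic() - start:.0f}s")
    if result.returncode != 0:
        sys.exit(1)  # stop the pipeline on the first failing stage
```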
Continuous improvement mechanisms are in place; for example, a dedicated tools team is set up to serve the other teams by improving tooling and automation. At this level, releases of functionality can be disconnected from the actual deployment, which gives projects a somewhat different role. A project can focus on producing requirements for one or more teams, and when all or enough of those requirements have been verified and deployed to production, the project can plan and organize the actual release to users separately.
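The model does not prescribe a mechanism for separating release from deployment, but one common approach is a feature toggle: the code ships to production dark, and the release to users is a later configuration change. A minimal sketch, assuming a hypothetical flags.json file updated at release time:

```python
"""Minimal feature-toggle sketch: deploy code dark, release by flipping config."""
import json

def is_enabled(flag: str, path: str = "flags.json") -> bool:
    # flags.json is a hypothetical runtime config file, updated at release
    # time independently of any code deployment.
    try:
        with open(path) as f:
            flags = json.load(f)
    except FileNotFoundError:
        flags = {}
    return flags.get(flag, False)  # default off: deployed but not yet released

def legacy_checkout(cart: list) -> str:
    return f"legacy checkout of {len(cart)} items"

def new_checkout(cart: list) -> str:  # new code path, shipped dark
    return f"new checkout of {len(cart)} items"

def checkout(cart: list) -> str:
    # The deploy put both paths in production; the release is the flag flip.
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)

print(checkout(["book", "pen"]))
```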
The goal of level 1 is to perform continuous training of the model by automating the ML pipeline; this lets you achieve continuous delivery of the model prediction service. To automate the process of using new data to retrain models in production, you need to introduce automated data and model validation steps to the pipeline, as well as pipeline triggers and metadata management.
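The sketch below outlines the shape of such a pipeline; every helper, threshold and metric in it is illustrative rather than taken from any specific framework:

```python
"""Sketch of a level 1 retraining pipeline; every helper here is illustrative."""
import random

def validate_data(batch: dict) -> bool:
    # Automated data validation: reject batches whose schema or statistics
    # drift from the training data the current model saw (invented checks).
    return batch["schema_ok"] and batch["drift_score"] < 0.2

def train_and_evaluate(batch: dict) -> float:
    # Stand-in for the real training step; returns an evaluation metric.
    return random.uniform(0.7, 0.95)

def validate_model(candidate_auc: float, production_auc: float) -> bool:
    # Automated model validation: promote only if the candidate beats
    # the model currently serving predictions.
    return candidate_auc > production_auc

def run_pipeline(batch: dict, production_auc: float) -> str:
    # Triggered automatically, e.g. on arrival of a new data batch.
    if not validate_data(batch):
        return "aborted: data validation failed"
    candidate = train_and_evaluate(batch)
    if not validate_model(candidate, production_auc):
        return "aborted: candidate underperforms production model"
    # Metadata management and continuous delivery of the prediction
    # service would follow here; both are elided in this sketch.
    return f"deployed candidate with AUC {candidate:.3f}"

print(run_pipeline({"schema_ok": True, "drift_score": 0.1}, production_auc=0.80))
```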
Doing this will also naturally drive an API-managed approach to describing internal dependencies, and it encourages a structured approach to managing third-party libraries. At this level the importance of applying version control to database changes also reveals itself. At the intermediate level you achieve more extensive team collaboration, as DBA, CM and Operations begin to be part of the team, or are at least frequently consulted by it. Multiple processes are consolidated so that all changes (bug fixes, new features, emergency fixes and so on) follow the same path to production. Decisions are decentralized to the team, and component ownership is defined, which gives teams the ability to build in quality and to plan for sustainable product and process improvements.
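To make version control of database changes concrete, the sketch below stores migrations as ordered scripts in the repository and records the applied schema version in the database itself. SQLite is used purely to keep the example self-contained:

```python
"""Minimal sketch of version-controlled database migrations (SQLite for brevity)."""
import sqlite3

# Each migration is an ordered, immutable (version, SQL) pair kept in
# source control alongside the application code.
MIGRATIONS = [
    (1, "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE users ADD COLUMN email TEXT"),
]

def migrate(conn: sqlite3.Connection) -> None:
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0
    for version, sql in MIGRATIONS:
        if version > current:  # apply only what this environment is missing
            conn.execute(sql)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

migrate(sqlite3.connect(":memory:"))
```

Dedicated tools such as Flyway, Liquibase and Alembic implement the same idea with checksums, rollback scripts and support for multiple database engines.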