Implementing a DevOps process can be tricky when there are so many options to incorporate. At Copado, over the last decade we've helped thousands of customers shift left and implement an optimized DevOps strategy, which we've condensed into the following points:

  1. Standardize development requests by using an ALM tool.

  2. Create a release pipeline to ensure consistency.

  3. Set environment access levels and utilize an approvals process.

  4. Incorporate a Git repository for added security.

  5. Back promote regularly to keep orgs in sync.

  6. Schedule backups to Git to keep branches up to date.

  7. Test regularly to prevent errors in production.

Standardize Development Requests:

The first step in implementing an efficient DevOps process is to standardize the way requests for improvements are made. This is a crucial step in shifting left, as introducing structure early in the process is the best way to mitigate complications downstream. There are several tools that can help here, but Application Lifecycle Management (ALM) tools like Jira make it easy to track the progress of these changes from inception to completion. We recognize the importance of this process at Copado, so we make it easy to integrate with Jira directly from Copado Essentials+ to ensure your User Stories are always up to date.

Create a Release Pipeline:

After setting up a process to manage requests for change, the next step is to ensure these changes are reliably delivered. The delivery of these changes needs to be consistent and repeatable, ensuring that any errors introduced early in the development process are identified and resolved long before the changes would ever reach production. The Work Items feature in Copado Essentials+ makes it easy to guarantee consistency by helping you create pipelines the entire team can use. A Work Item is an encapsulation of a user story and its progression through a consistent change management process, with approvals, history, visibility, and collaboration built in.

Set Organization Access Levels and Approvals:

Once a pipeline has been defined and implemented, you're well on your way to optimizing your DevOps process! However, as teams continue to grow, it can be hard to ensure everyone is following the process that has been defined. In situations like this, a Release Manager role may be utilized to oversee what gets pushed to production and what may require more work. With a Release Manager in place, development work can be confined to lower, risk-tolerant environments until it is deemed ready (by whomever holds this role) to promote to higher, less risk-tolerant environments. One way a release management process can be enforced is through the use of Organization Access Levels. These access levels can be set to allow only select members of the team to Deploy to the next environment in the pipeline, while limiting others to only Validate that their changes will successfully deploy.

If implementing a Release Manager is not an option, or even more oversight is needed, another way to ensure the process is followed is to use Approvals. Approvals can be made a required part of the deployment process by setting a minimum number of approvals needed to deploy to the next environment in your pipeline. The ability to approve or reject changes is reserved for Team Owners, or for whomever the pending Work Item is shared with (for example, more senior members of the team).

Incorporate a Git Repository:

Now that changes can be delivered in a controlled, standardized way, the process of getting these changes from development environments to production is much more efficient. But what happens when changes are accidentally overwritten, or an environment is refreshed and you lose in-progress metadata? Salesforce doesn't provide protection against metadata being overwritten, so as changes are deployed through your pipeline, additional security may be warranted. This security can be provided through the use of an industry-standard Git repository (or version control system), and Copado Essentials+ removes the need for an advanced technical skill set by making it easy to commit your metadata changes to a repository. Having your changes committed and backed up to a repository ensures that you can restore previous versions of your metadata in the event an accident does occur.
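To make the "restore a previous version" idea concrete, here is a minimal sketch using plain Git commands. The repository, file name, and contents are illustrative only, not something Copado generates:

```shell
# Sketch: restoring an earlier version of a metadata file from Git.
# Repo and file names here are hypothetical examples.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

# Commit version 1 of a (hypothetical) Apex class
echo "public class AccountService { /* v1 */ }" > AccountService.cls
git add AccountService.cls
git commit -qm "AccountService v1"

# Commit version 2 -- imagine this overwrote good work by mistake
echo "public class AccountService { /* v2, broken */ }" > AccountService.cls
git commit -qam "AccountService v2"

# Restore the file exactly as it was one commit ago
git checkout -q HEAD~1 -- AccountService.cls
```

Because every deployment is committed, any previously deployed version of any file can be recovered this way.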

If you don't have a Git repository, don't worry: they're easy to seed using Copado Essentials+. Copado Essentials+ supports all major Git providers (GitHub, GitLab, Azure DevOps, and Bitbucket), as well as our own Copado Version Control, and offers multiple deployment methods to best utilize a repository and meet users where they are in their DevOps maturity.
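Under the hood, "seeding" a repository amounts to making an initial commit of your org's metadata and pushing it to a remote. Here is a hedged sketch of that idea using a local bare repository as a stand-in remote (the directory layout and class name are illustrative):

```shell
# Sketch: seeding an empty repository with an initial metadata snapshot.
# A local bare repo stands in for a hosted remote (GitHub, GitLab, etc.).
set -e
work=$(mktemp -d); remote=$(mktemp -d)
git init -q --bare "$remote"

cd "$work"
git init -q
git checkout -q -b main
git config user.email "dev@example.com"
git config user.name "Dev"

# Pretend this metadata was just retrieved from the org
mkdir -p force-app/main/default/classes
echo "public class Hello {}" > force-app/main/default/classes/Hello.cls

git add -A
git commit -qm "Seed: initial metadata snapshot"
git remote add origin "$remote"
git push -q origin main
```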

Using Git with Work Items

Incorporating a Git repository into your Work Items pipeline is arguably the easiest way to take advantage of the powerful functionality of version control. When setting up a pipeline, any repository that's been authenticated within your Organizations tab will be available to select for use. Once selected, the last step is to choose the appropriate branch to deploy changes to at each stage in your pipeline. Now, any changes that are deployed through this pipeline will automatically be committed to the corresponding branches in your repository without any additional work on your part!
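The branch-per-stage mapping described above can be sketched with plain Git commands. The branch names and the example change are assumptions for illustration, not something Copado dictates:

```shell
# Sketch: one Git branch per pipeline stage (Dev -> UAT -> Production).
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git checkout -q -b main
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -qm "baseline" --allow-empty

# Branches mirroring the pipeline's stages
git branch uat
git branch dev

# A change deployed through the pipeline lands on each branch in turn
git checkout -q dev
echo "flow change" > Flow.xml
git add Flow.xml && git commit -qm "US-123: new flow"

git checkout -q uat  && git merge -q --no-edit dev   # promote to UAT
git checkout -q main && git merge -q --no-edit uat   # promote to Production
```

With Work Items, these commits happen automatically as part of each deployment, so the repository always mirrors the state of each environment.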

Using Git with Deployments

For teams that want granular control of what is deployed to and through their Git repository, the Deployments method can offer a way to facilitate pull requests using Copado Essentials+. By simply selecting your Git repository as the target of your deployment, Copado Essentials+ will create a feature branch (as shown below) that can then be used as part of a pull request against the branch that corresponds to the next upstream environment. Furthermore, after merging changes into the target Git branch, a CI Job can be triggered via a Git webhook to simplify the process of deploying those changes to the next upstream environment.
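The feature-branch flow described above can be sketched in plain Git. The branch names and file are hypothetical, and the pull request and webhook steps would happen in your Git provider rather than on the command line:

```shell
# Sketch of the feature-branch flow: one branch per change, merged into
# the branch that corresponds to the next upstream environment.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git checkout -q -b uat
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -qm "UAT baseline" --allow-empty

# A feature branch carrying the deployment's changes
git checkout -q -b feature/US-456-update-validation-rule
echo "<validationRule/>" > ValidationRule.xml
git add ValidationRule.xml && git commit -qm "US-456: update validation rule"

# Merging the pull request; a Git webhook on 'uat' could then
# trigger a CI Job to deploy the merged changes to the UAT org
git checkout -q uat
git merge -q --no-ff --no-edit feature/US-456-update-validation-rule
```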

Using Git with CI Jobs

CI Jobs are incredibly versatile: they allow you to automate almost any action within Copado Essentials+, including deployments to and from a Git repository. Since both Work Items and Deployments only commit metadata to a Git repository upon deployment, CI Jobs are a powerful way to ensure your in-progress metadata is committed to your repository without needing to deploy to an upstream environment. We'll touch on this more later, but CI Jobs are a great way to automate commits to a Git repository for backup purposes.

Regularly Back Promote Any Changes Made Directly In Higher Environments:

Ideally, changes would always originate in lower environments, go through a review process that includes quality gates, and ultimately be deployed through a predefined pipeline to upstream environments. However, for teams that utilize multiple developer environments to push upstream, or in cases where changes originate in upstream environments, a regular practice of back promoting these changes to lower environments is needed. Fortunately, Copado Essentials allows you to use the Deployments method to deploy to and from any environment of your choosing, and with Copado Essentials+ you can create a back promote pipeline with Work Items for streamlined back promotions, or add automation to back promotions using CI Jobs. To do so, simply create a pipeline starting from Production (or any other upstream environment) and ending with the downstream environment of your choosing. For example, see the image below:
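In Git terms, a back promotion is simply a merge flowing in the opposite direction: from the branch for an upstream environment down into a lower one. A minimal sketch, with illustrative branch and file names:

```shell
# Sketch: back promoting a hotfix made in Production down to a dev branch.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git checkout -q -b main
git config user.email "dev@example.com"
git config user.name "Dev"
echo "v1" > Config.xml
git add Config.xml && git commit -qm "baseline"
git branch dev

# A hotfix lands directly on the Production branch
echo "v1 + hotfix" > Config.xml
git commit -qam "hotfix in production"

# Back promote: merge the upstream branch down into dev,
# keeping the lower environment in sync
git checkout -q dev
git merge -q --no-edit main
```

Running this regularly keeps lower environments from drifting away from what is actually live.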

Automate Metadata Backups to Git:

As mentioned earlier, Work Items and Deployments only commit metadata to a Git repository upon deployment, so any in-progress metadata is at risk of being lost or overwritten during a sandbox refresh or back promotion. To ensure in-progress metadata is always saved, CI Jobs can be utilized to automate the task of backing up metadata to your repository. Adding this to your DevOps process is a powerful way to ensure that the latest in-progress changes are committed to a source of truth, and it gives you the added security of being able to restore metadata from any point in time, not just from what was committed during a planned org-to-org deployment.
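The core of such a backup job can be sketched as a short script: after the metadata has been retrieved into the working tree (a step assumed here, not shown), commit to a backup branch only when something actually changed. All names are illustrative, and this is a generic sketch rather than what a Copado CI Job literally runs:

```shell
# Sketch of a scheduled backup step: commit current metadata to a
# backup branch only when the retrieve produced differences.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git checkout -q -b backup
git config user.email "ci@example.com"
git config user.name "CI"
git commit -qm "initial" --allow-empty

# Pretend a metadata retrieve just wrote this in-progress file
echo "in-progress change" > InProgress.cls

git add -A
# Commit only if something changed, so the history stays meaningful
if ! git diff --cached --quiet; then
  git commit -qm "Scheduled backup $(date -u +%Y-%m-%dT%H:%MZ)"
fi
```

Scheduling this to run hourly or nightly gives you restore points between deployments, not just at them.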

Test Regularly:

To round out best practices with your Salesforce DevOps process, testing is a key component that cannot be overlooked. A robust testing solution can make the difference between smooth, predictable deployments and panicked recovery efforts when a production environment unexpectedly goes down. Testing can refer to a wide array of methods, whether that's Unit Testing or Static Code Analysis to ensure your Apex code follows best practices and will deploy without error, or automating your functional and regression testing through Copado Robotic Testing from within the Copado Essentials+ interface. Traditional automated testing (e.g., Selenium-based testing) requires learning complex coding languages and maintaining scripts over time, making it a specialized skill that takes years of experience to perform well. Copado Robotic Testing democratizes testing, lowering the barrier to entry and providing a low-code solution for something that is traditionally quite difficult to implement. By incorporating an easy-to-scale testing solution, you can shift left and catch errors early in your DevOps process.
