When talking about DevOps there are a lot of misconceptions, and just as many articles and discussions about those misconceptions. As with most "buzzword concepts", it is not very productive to discuss what it is; it is better to show what it could be. That being said, I will set the stage by presenting what I see as the closest thing to a good definition. Donovan Brown, Principal DevOps Manager at Microsoft, states the following:
DevOps is the union of people, process, and products to enable continuous delivery of value to our end users.
I will show an example of how this can look in practice, briefly presenting the concepts and tools that I have used when trying to create an effective DevOps process.
The first step when developing software, after deciding on a solution and technology, is to write the code. You might not think of this as part of the DevOps process, but I beg to differ. Coming back to the definition above, "...delivering value to our end users" of course involves actually writing the code. How we write tests covering the code is, in my opinion, an especially important factor, enabling us to release new features with confidence. The fact is that no matter how good your code review routines or manual testing regimes are, bugs will occur. Writing tests that make sure the same bug does not occur again and again is vital.
Red-green refactor is one simple technique that helps protect code against recurring bugs, as well as aiding good code quality in general. The technique involves always writing a failing test before changing any code. It can be used as a general development technique, but I find it most useful when fixing bugs. Reproducing the bug with a failing test before fixing it ensures that you do not change code that does not need changing, and in the longer run makes sure the bug is not introduced again later. By always doing this, as the code evolves over time, you build yourself a safety net, catching you in careless moments.
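As a sketch, here is how red-green refactor might look for a hypothetical bug in a small shell function (the function, its name and the bug are all made up for illustration):

```shell
#!/bin/sh
# Hypothetical bug report: slugify() drops everything after the first word.
# Red-green refactor says: reproduce the bug with a failing test first.

# The code as it was when the bug was reported:
slugify() { printf '%s' "$1" | cut -d' ' -f1 | tr 'A-Z' 'a-z'; }

# Step 1 (red): a test that reproduces the bug, written before any fix.
test_slugify() {
  result=$(slugify "Hello Brave World")
  if [ "$result" = "hello-brave-world" ]; then
    echo "PASS"
  else
    echo "FAIL: got '$result'"
  fi
}

test_slugify    # FAIL: got 'hello'

# Step 2 (green): fix the code until the same test passes.
slugify() { printf '%s' "$1" | tr 'A-Z' 'a-z' | tr ' ' '-'; }

test_slugify    # PASS
```

The failing run proves the test actually exercises the bug; only then is the fix applied, and the test stays around to catch any regression.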
BDD, or tests as documentation, is another useful way to think of tests. Writing tests that not only provide coverage but also describe the desired functionality or behavior promotes maintainable tests and code in general. I usually do this in the simplest possible way, using a Given/When/Then structure, with groups of tests describing the process.
Read also Tests that Matter
After writing code, the next step towards delivering it to the customer is integrating it with the work of others, normally through a version control system, preferably git. Integrating your work continuously is key to a well-functioning DevOps process. Committing often makes the pain of fixing a problem much more manageable. If you have caused a problem, broken a build or failed a test, you want to know straight away, not when preparing a release weeks later.
Multiple long-lived branches is the main anti-pattern causing headaches in source control and co-operation. Extra branches from which changes are merged "later on", environment-specific branches and release-specific branches are all commonly seen, when most of the time all you need is one branch, with tags to mark releases. Feature branches can be a good practice, as long as they are short-lived and have a purpose, typically code review. Having code live on parallel branches will always cause problems and is most of the time a clear sign of a flawed DevOps process. I want to highlight that this advice relates to the commits and branches that are shared; how each developer works locally is a different story. In the end it all comes down to confidence, and a complex branching strategy promotes the opposite.
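A minimal sketch of this workflow in a throwaway repository: one main branch, releases marked by tags, and a feature branch deleted as soon as it is merged (branch and file names are illustrative):

```shell
#!/bin/sh
set -e
# Throwaway repo for demonstration purposes.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
main=$(git symbolic-ref --short HEAD)   # whatever the default branch is named

echo v1 > app.txt
git add app.txt
git commit -qm "Add initial app"
git tag -a 1.0.0 -m "Release 1.0.0"     # releases are tags, not branches

# Short-lived feature branch, merged back as soon as review is done.
git checkout -qb feature/welcome-page
echo welcome > welcome.txt
git add welcome.txt
git commit -qm "Add welcome page"
git checkout -q "$main"
git merge -q --no-ff feature/welcome-page -m "Merge feature/welcome-page"
git branch -d feature/welcome-page      # delete it immediately after merging
git tag -a 1.1.0 -m "Release 1.1.0"

git tag                                 # lists 1.0.0 and 1.1.0
```

Nothing lives on a parallel branch for longer than the review takes, and the tags alone tell you exactly which commits went into each release.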
Independent of the branching strategy you use, your git history will consist of commits, and how you compose them is worth some thought. It is very common to open a git log and see meaningless commit messages: "Fixing test", "Update config" etc. This usually comes from teams not making an effort to keep their history clean, resulting in an inconsistent and poorly structured log.
Agreeing on a few simple rules makes the history not only beautiful to look at, but useful. Getting a quick overview of the commits in the current release is actually possible if the commit messages are properly formatted. I always try to follow the advice laid out in How to Write a Git Commit Message, with the most important point being to always have one short subject line, followed by a blank line and then an optional description if needed. In addition it might be desirable to connect all commits to a work item. Worth noting is that many commits will not require a long description; the subject line is often enough.
```
Make home page correctly redirect after logging in

When logging in through the home page users will be redirected to the
welcome page, without having to click the "enter page" link. Users are
more likely to stay if they do not have to click twice to enter the page.

This will affect marketing, as the ads previously shown at the splash
page will no longer be visible.

US: #456754
```
After committing changes, the code will be built and tests run. Having a build you trust is crucial when delivering changes continuously. Running strong tests, both as part of the code and as automated "GUI tests", is a big part of this, but so is relying on the build script doing the same thing every run. Projects have different requirements for how they are built, so it is important to keep the definition of the build close to the code itself. In my opinion, defining builds in Team City or VSTS is not an option. Everything should be scripted and included in source control. There are a lot of good abstractions that can be used; I have good experience with both Cake and Fake, but for most small projects I tend to simply write PowerShell.
Keeping the build together with the code has several benefits. It makes the build independent of the actual build server, changing from e.g. Team City to VSTS would require close to no effort. Being able to run the build script locally is also a nice feature, making debugging possible. As the rest of the code changes the build scripts will evolve, and having history of the changes is always useful.
Segregation of concerns is an important concept in software development, but it can also be applied to a DevOps process. Do not try to make the build server do deployment work, and vice versa. Keep build and deploy segregated. The last step of the build script should push an artifact package to a package feed, which the deployment system picks up and installs. Connecting the two, by triggering deploys from the build server etc. with the argument that it is nice to have "everything in one place", often leads to confusion and a lack of insight into the process.
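As a sketch of that last step, here is a minimal build script where a local directory stands in for the package feed (the paths, names and tarball format are illustrative; a real .NET project would compile, run tests and push e.g. a NuGet package):

```shell
#!/bin/sh
set -e
# Illustrative build script, kept in source control next to the code.
VERSION=${1:-1.2.3}        # in practice supplied by the versioning tool
FEED=${FEED:-./feed}       # a local directory standing in for a package feed

mkdir -p out "$FEED"
echo "building version $VERSION"
# ... compile and run tests here (dotnet build / dotnet test, Cake, Fake) ...
echo "app $VERSION" > out/app.txt

# Last step of the build: package the output and push the artifact to the feed.
tar -czf "$FEED/app.$VERSION.tar.gz" -C out app.txt
echo "pushed app.$VERSION.tar.gz to $FEED"
```

The build's responsibility ends at the feed; the deployment system watches the feed and takes over from there, which keeps the two systems independent.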
Versioning is the glue connecting the code, the builds, the artifacts and the deployments. You would like to have the same version number from the first time code is checked in, all the way to production. GitVersion is a simple tool that helps with this. GitVersion makes versioning consistent and automatic. The version is deduced from the git log, using configurable rules to calculate the next version based on tags, branches etc. It is all configured in the respective repositories, in a simple config, specifying how the version should be incremented, e.g. based on previous tags.
```yaml
assembly-versioning-scheme: MajorMinor
branches:
  master:
    regex: master
    increment: minor
  hotfix:
    regex: hotfix(es)?[/-]
    tag: hotfix
    increment: patch
```
The version provided by GitVersion will be used as assembly version and as the version of the artifact package. This number can be followed all the way to production: build number, assembly version, package version, release version, git tag.
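To illustrate the core idea GitVersion automates, here is a toy sketch that derives the next minor version from the most recent tag, run against a throwaway repository (real GitVersion does much more, e.g. branch rules and commit counting):

```shell
#!/bin/sh
set -e
# Throwaway repo with one existing release tag.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "Initial commit"
git tag 1.4.0

# The idea: the next version is a function of the git history alone.
latest=$(git describe --tags --abbrev=0)
major=${latest%%.*}
rest=${latest#*.}
minor=${rest%%.*}

next="$major.$((minor + 1)).0"   # bump minor, as on a master-style branch
echo "latest tag: $latest -> next version: $next"
```

Because the version is deduced from the log, every developer and every build server computes the same number with no shared counter to maintain.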
When a new package is pushed, the deployment system can automatically create a new release, using the version number from the package. This release will typically be deployed to an initial test environment, maybe running integration tests etc. If this deployment is successful, the release is promoted through a series of environments, going through more testing, and in the end released to production. This part of the process is probably where the biggest differences between projects will be seen. Some projects are coupled to other systems dictating how often it is possible to release. There can be automatic or manual testing, and a lot of other strange obstacles on the way to production. The important thing is to do the same thing every time, and minimize the possibility of human error. I usually try to avoid any clicking in a GUI when deploying, scripting and automating all actions.
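A rough sketch of such a promotion chain, with stub functions standing in for whatever the real deployment system (Octopus, VSTS, a script) actually does; the environment names and functions are illustrative:

```shell
#!/bin/sh
set -e
VERSION=1.5.0

# Stubs: a real pipeline would install the package and run real checks here.
deploy()     { echo "deploying $VERSION to $1"; }
smoke_test() { echo "smoke testing $1"; }   # failing here stops the promotion

# The same release, promoted through the same environments, every time.
for env in test staging production; do
  deploy "$env"
  smoke_test "$env"
done
echo "$VERSION released to production"
```

Because the loop is identical for every release, there is no step a human can forget or do differently on a stressful day.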
Installation is similar to building in that it can be run locally in the same manner as when deploying. The same rules apply here: no installation steps specified in VSTS or Octopus. The same framework used for the build scripts can be used to write installation scripts, with the same benefits. Typically each project in a solution will have an installation script and be pushed and deployed as its own package. E.g. one messaging service and one web application, when built, result in two packages being pushed with the same version number and subsequently deployed as the same release.
Releasing can be as easy as creating a tag in git and promoting a tested release to production. The exact steps can vary, so as you might guess, they should be a script checked in together with the code. When creating a release, the current latest commit is tagged, so that all subsequent changes belong to the next version from that point on. The tagged version will have an associated build, package and release, because the same GitVersion config is used both when building and when running the release script.
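A minimal sketch of such a release script, run here against a throwaway repository (a real script would also push the tag and trigger the promotion of the matching release to production):

```shell
#!/bin/sh
set -e
# Throwaway repo standing in for the real one.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "Tested and ready"

# The release step itself: tag the commit that was tested.
VERSION=1.5.0
git tag -a "$VERSION" -m "Release $VERSION"
# A real script would now run: git push origin "$VERSION"
# and promote the matching release to production.
echo "tagged release $VERSION"
```

From this point on, the versioning tool sees the new tag and starts calculating the next version, so the very next commit already belongs to the following release.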