I recently went to FOSDEM, the Free and Open Source Developers’ European Meeting. It is an annual conference that starts somewhat informally with a beer evening at Café Delirium. On Saturday it kicks off for real and then runs until about 17:00 on Sunday, depending on the room/track. The buses from the city centre are packed, with over 5,000 attendees […]
About three years ago we started using feature branches in our current project. Merging those branches before deployment took between half a day and a full day. Feature branches were used more or less because most team members had worked that way in earlier projects and thought it was the way to work. At the time, some of them even wanted to introduce bug branches! To them that was the right branching strategy, because it was what they had learned.
We often get stuck in a pattern of thinking; throughout our lives we have learned from the community surrounding us what the correct way of thinking and doing things is. In school we learned that 1+1 is 2. We are often stuck in that pattern, so when someone asks us “what is 1+1?”, most of us will probably say two. But what if someone suddenly says, no, it’s three? Well, I think most of us will say NO, it isn’t, it’s two. Yet 1+1 can actually be three: it’s just three characters, 1, + and 1, so it’s three. It’s NOT strange that we do what we are used to, and there is absolutely nothing wrong with us because of that. About fourteen years ago I took a course, “Psychology in the workplace”, at the university. A teacher showed us a picture of a cube. She said that only a child and a genius can take that cube apart.
No one in the classroom managed to take the cube apart. The teacher told us that during our lives we have learned a pattern, which is why we didn’t manage it. A child hasn’t learned the pattern yet, and a genius is just a genius. What I learned that day is that there may be a way to solve a problem, and even if we don’t see it, it may be the pattern we have learned that is standing in our way. But I know that we can find the elephant in the room, and slice it!
And in my team we almost did it when it came to the branch-and-merge hell! We started researching how we could reduce the merges, how we could work with the source code, and how our process could help us reduce the waste.
These days we seldom need to merge at all. We haven’t reached the ultimate goal yet, but we aren’t far from it.
At the moment we work against a single branch during an iteration, but when we are done we create a UAT branch. The UAT branch then turns into a Release branch before deployment to production. There are some problems with those branches. First, we build from them: when we do a UAT release we build the UAT branch and deploy the binaries. The problem is that you can’t really trust the code in the UAT branch, because changes may have happened since the branch was created, for example a hot fix in UAT that someone forgot to merge back to the main branch. We have even experienced a hot fix made in the release branch that was never merged into the main branch. The hot fix was released to production, but the next release reintroduced the bug, and it took a few months until it was detected again.
On our way toward a great Continuous Delivery experience, we have started to apply feature toggles and branch by abstraction, with the goal of eventually removing the extra branches and working against one single branch.
By using feature toggling, we can release unfinished code into production; we just make sure the code will never be executed. That way we can work in one single branch, avoid merges between branches, and reduce the problem we had with unmerged hot fixes in other branches. We can also continuously deliver completed features into production that are not yet enabled, and the product owner can enable them when he feels the time is right. And if a bug is found in a newly deployed feature and a critical fix is needed, the feature can simply be turned off in production.
How we use feature toggles
Something that is important with feature toggling is to remove the switch once the new feature should always be on. If we don’t remove it, we easily build up technical debt. We wanted to make it easy for us to find and remove the switches from our code, so we introduced a class for each feature and added it to a specific folder.
Note: The ConfigSwitch base class simply helps us read from the application configuration file whether the feature is on or off; we use <appSettings> to turn a feature on and off.
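The original code isn’t shown here, but a minimal sketch of what the base class and a feature class might look like could be as follows. The in-memory AppSettings dictionary is an assumption of this sketch, standing in for the real <appSettings> section that ConfigSwitch would read through the configuration API:

```csharp
using System;
using System.Collections.Generic;

// Stand-in for the <appSettings> section of the application configuration
// file (an assumption for this sketch; the real base class would read the
// application configuration instead of an in-memory dictionary).
public static class AppSettings
{
    public static IDictionary<string, string> Values { get; } =
        new Dictionary<string, string> { { "MyFeature", "true" } };
}

// Base class that reads the on/off value for a feature by key.
public abstract class ConfigSwitch
{
    private readonly string _key;

    protected ConfigSwitch(string key) { _key = key; }

    // A feature missing from the configuration is treated as off.
    public bool IsEnabled =>
        AppSettings.Values.TryGetValue(_key, out var value) &&
        bool.TryParse(value, out var enabled) && enabled;
}

// One class per feature, kept in a dedicated folder so the switch is
// easy to find and delete once the feature is permanently on.
public sealed class MyFeature : ConfigSwitch
{
    public MyFeature() : base("MyFeature") { }
}
```

Keeping each switch as its own tiny class is what makes the later cleanup a simple “delete the class, fix the compile errors” exercise.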
In the code where we choose between the new feature and the old one, depending on whether the feature is enabled, we just use the class we have created:
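Roughly like this; a minimal self-contained sketch, where OrderService and the hard-coded MyFeature stand-in are hypothetical names for illustration only (the real MyFeature derives from the ConfigSwitch base class and reads <appSettings>):

```csharp
using System;

// Minimal stand-in for the per-feature switch class described in the
// post (hard-coded here so the sketch stands alone).
public sealed class MyFeature
{
    public bool IsEnabled => true;
}

// Hypothetical call site showing the toggle in use.
public static class OrderService
{
    public static string PlaceOrder()
    {
        var myFeature = new MyFeature();

        // The whole if statement is what gets deleted once the
        // feature is permanently on.
        if (myFeature.IsEnabled)
            return "new order flow";

        return "old order flow";
    }
}
```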
We also use the feature toggle when we register our dependencies. Because we work against abstractions, we can easily replace the implementation behind an abstraction with another one. Here is our registration of dependencies:
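The original registration code isn’t shown here, so the following is only a sketch of the idea. The hand-rolled Container and the IPaymentService types are hypothetical stand-ins for whatever IoC container and abstractions the project actually uses:

```csharp
using System;
using System.Collections.Generic;

public interface IPaymentService { string Name { get; } }
public sealed class LegacyPaymentService : IPaymentService { public string Name => "legacy"; }
public sealed class NewPaymentService : IPaymentService { public string Name => "new"; }

// Toggle stand-in (the real one derives from ConfigSwitch).
public sealed class MyFeature { public bool IsEnabled => true; }

// Hand-rolled registry standing in for a real IoC container.
public static class Container
{
    private static readonly Dictionary<Type, Func<object>> Registrations =
        new Dictionary<Type, Func<object>>();

    public static void Register<T>(Func<T> factory) where T : class =>
        Registrations[typeof(T)] = factory;

    public static T Resolve<T>() where T : class =>
        (T)Registrations[typeof(T)]();
}

public static class Bootstrapper
{
    public static void RegisterDependencies()
    {
        var myFeature = new MyFeature();

        // Pick the implementation behind the abstraction based on the toggle.
        if (myFeature.IsEnabled)
            Container.Register<IPaymentService>(() => new NewPaymentService());
        else
            Container.Register<IPaymentService>(() => new LegacyPaymentService());
    }
}
```

Because consumers only depend on the abstraction, flipping the toggle swaps the implementation everywhere without touching the call sites.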
The reason we add a new class for each feature we want to turn on or off is that we can then simply remove the MyFeature class, easily find all the places in the code where it was used, and remove the if statements.
By using feature toggling we can work in one single branch, reduce merges, and continuously deliver from that single branch into production. It requires disciplined team members; for some this is a new way of thinking and working. It’s not a silver bullet; as always, “it depends!”. In some projects it may not work, in others it will. Just don’t let the elephant in the room fool you. Find it, and slice it!
If you want to know when I publish a new blog post, please feel free to follow me on twitter: @fredrikn
Over the last month I have created different deployment tools as proofs of concept. The tools have changed from push deployment to pull deployment, and from my own XML workflow and environment definition to using Microsoft Workflow. Finally I have decided to introduce the Polymelia Deploy tool. The goal is to make the tool open source. The code I have is still at a proof-of-concept level and needs some more work before it will be available.
Polymelia uses agents installed on servers. Because it uses pull deployment, no one communicates directly with an agent. This makes it much easier to install agents on servers, and no ports need to be opened. Each agent has a role, for example “Web Server” or “Application Server”. When an agent is running it asks a Controller for tasks to execute.
Because agents have roles and Polymelia uses pull deployment, we can add a new server, put an agent on it, and specify its role, for example “Web Server”. When the server is up and running, the agent asks a controller for tasks; the latest succeeded tasks are retrieved and executed. That makes it easy to add a new server to a load-balanced environment and have it automatically configured and installed once it’s up and running. There is no need to do a push deploy or change the deploy script.
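A simplified, in-memory sketch of this pull model follows. The type names and the Controller-as-an-object are assumptions for illustration; the real agent talks to the Controller over the network, and the queue lives on the Controller’s side:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// One deployment task queued by the Controller for a given role.
public sealed record DeployTask(int Version, string Role, string Name);

// Stand-in for the Controller's task queue.
public sealed class Controller
{
    private readonly List<DeployTask> _queue = new List<DeployTask>();

    public void Enqueue(DeployTask task) => _queue.Add(task);

    // Nothing is ever removed from the queue; each agent tracks what it
    // has already executed and asks only for newer tasks for its role.
    public IEnumerable<DeployTask> TasksFor(string role, int afterVersion) =>
        _queue.Where(t => t.Role == role && t.Version > afterVersion);
}

public sealed class Agent
{
    private readonly string _role;
    private int _lastExecutedVersion;

    public List<string> Executed { get; } = new List<string>();

    public Agent(string role) { _role = role; }

    // One polling cycle: ask for new tasks, execute them in order,
    // and remember how far we have come.
    public void Poll(Controller controller)
    {
        foreach (var task in controller.TasksFor(_role, _lastExecutedVersion)
                                       .OrderBy(t => t.Version))
        {
            Executed.Add(task.Name);           // "execute" the task
            _lastExecutedVersion = task.Version;
        }
    }
}
```

A freshly installed agent starts with no executed version, so its first poll naturally replays the latest tasks for its role, which is what makes the auto-configure-on-startup behaviour fall out of the design.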
In the near future the agents will be schedulable, so you can control when and how often an agent asks for tasks. The agents will also use SignalR; with SignalR, a controller can know when a new agent is added to the environment, and by using the Polymelia Deploy Client we can approve that agent before it is allowed to ask for tasks. One idea on the to-do list is to be able to specify an IP range from which new agents are automatically allowed, without needing approval.
Polymelia has at the moment just a few activities (but will get more; maybe you will help me create them ;)). One activity is the NuGet Installation activity, which takes a similar approach to Octopus Deploy: it gets binaries from an artifact repository using a NuGet server.
The packages can have configuration files that can be transformed, and variables that can replace connection strings, appSettings keys and WCF endpoints. In the near future it will be able to replace any kind of key and value in the configuration file using markers in the config file:
<add key="$VariableName$" value="$VariableName2$"/>
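A minimal sketch of how such marker replacement could work. ConfigTransformer is a hypothetical name for illustration; the real activity handles connection strings, appSettings keys and WCF endpoints specifically:

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

// Replace $Name$ markers in configuration file content with variable
// values. Unknown markers are left untouched so they remain visible.
public static class ConfigTransformer
{
    public static string ReplaceMarkers(string content,
                                        IDictionary<string, string> variables) =>
        Regex.Replace(content, @"\$(\w+)\$", match =>
            variables.TryGetValue(match.Groups[1].Value, out var value)
                ? value
                : match.Value);
}
```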
The NuGet Installation activity will also search for PowerShell scripts in the package, pass in variables, and execute the scripts. This makes it possible to use PowerShell to configure and install packages on a server. Because Polymelia is based on Microsoft Workflow, it’s possible to use pre-defined activities that reduce the need for PowerShell, such as creating an MSMQ queue, installing a service, creating an app pool, running a PowerShell script, and starting a virtual machine.
Polymelia Deploy Client
Polymelia Deploy Client is the tool used to create deployable workflows and to perform the deployment of a specific version.
When Polymelia Deploy Client is loaded we can create or select a project:
When a project is created or loaded, we are able to add environments:
When the environment(s) have been added, we can start creating our deploy tasks. The following illustrates how we can tell the Controller to start a virtual machine that has an agent installed with the role “Web Server”. When the virtual machine has started, a parallel activity executes two “Deploy to Agent” activities: one for the role “Web Server” and one for the role “Database”. The tasks added inside “Deploy to Agent” are the tasks the Controller will add to a queue. The “Web Server” role reads from the queue and executes the tasks added for that role: it gets two packages from a NuGet server and installs them on the server, in parallel.
When hitting the DEPLOY button, we specify the version we are going to deploy, and the deploy workflow is then passed to the Controller for execution. As the agents start executing tasks, they report back to the Controller, and the client can read those reports.
This project is still under development. If you are interested in contributing, please let me know. The reason I’m making this tool open source is to get the community’s help to build it, and also to give the community an open source option to use.
If you want to know when I post a blog post, please feel free to follow me on twitter: @fredrikn
Over the last twelve months I have spent a lot of time on Continuous Delivery and taken a deep dive into Team Foundation Server 2012. A commit stage is in place: we use TFS Build to build, and NuGet as an artifact repository. Now my goal is to bring the deployment to the team, letting them be part of the deployment process, maintain and create deployment scripts, and so on. I have looked around at some deployment tools, such as Octopus and InRelease; InRelease looks promising. But instead of using one of those great tools, I decided to create my own. Why, you may ask? The reason is that I want to be able to make any modifications and bring the code to the team, and, to be honest, I needed something to do in my spare time 😉
In this blog post I will describe my tool.
Pull or Push deployment
My first tool supported only push deployment. It used my own XML definition to specify environments and deployment steps. My deployment controller read the XML, downloaded the packages to deploy from a NuGet repository, and pushed them to deployment agents installed on every machine. The deployment agent’s role was to unzip the package, transform configuration files, and run PowerShell scripts. With push deployment I had more control over how a deployment went. The main problem was that I needed to make sure the deployment controller had access to the servers where the agents were installed, and that a specific network port was open in the firewall. Another problem arose when a new web server was added (for example behind a load balancer): the XML file specifying the deployment environments had to be changed, and a new push deployment had to be started to configure the server.
I decided to move to a pull deployment solution. I removed my own XML definition file and decided to use Microsoft Workflow instead. The deployment controller now uses OWIN and Katana instead of WCF (my first push tool used WCF). The deployment controller’s responsibility is to execute a deployment workflow. I added a workflow activity that can be used to perform actions on specific servers. When a deployment is started, the deployment controller adds the task to a queue. The deployment agents check the queue for tasks; if a task was added to the queue for a specific server, that agent starts executing it and reports the progress back to the deployment controller. The deployment agent downloads the packages it needs, transforms configuration files, and executes PowerShell scripts if they are part of the package being installed. Nothing is removed from the queue; the deployment agent holds the responsibility of knowing which tasks it has already executed. That way I can simply install an agent on a new server, specify that it has the role of a “webserver”, and when it starts it checks the queue for the latest tasks to execute. With this solution I can simply add a server and it will be configured automatically; no changes to the deployment script, and no triggered push deploy, are needed.
Note: My goal is not to build a highly scalable solution for hundreds of servers.
Here is an architecture overview:
The deployment controller doesn’t know anything about the deployment agents; only the agents know about the deployment controller. I use NuGet.Core to get packages from the artifact repository, and the Microsoft.Web.Xdt library for configuration file transformations. The deployment agent is a thin service; it is the workflow activity the agent executes that handles everything that should happen within the agent.
I decided to use Microsoft Workflow to specify the deployment steps to be performed, mainly because of its design editor and the way it visualizes the deployment steps. I have created Sequence, Parallel, InstallNuGetPackage and Agent activities. The reason I created custom Sequence and Parallel activities is to pass global variables to child activities: variables that can be used when a configuration file is transformed, or by a PowerShell script. Variables in Workflow are scope based, and I needed to make the variables global within a scope and its children; I was somewhat disappointed that Workflow didn’t pass parent variables to a child activity. The Agent activity serializes all of its child activities and adds them to the queue for an agent to pick up. The InstallNuGetPackage activity downloads a package from the artifact repository and handles the execution of PowerShell scripts and configuration transformations. I will probably add more activities, such as installing a service, creating MSMQ queues, and creating web applications: activities that will reduce the PowerShell scripting.
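To illustrate the idea of making parent variables visible to children, here is a simplified model. Note that this is not the actual Windows Workflow API, just a sketch of the variable-scoping behaviour described above, with hypothetical type names:

```csharp
using System.Collections.Generic;

public abstract class DeployActivity
{
    public abstract void Execute(IDictionary<string, string> variables);
}

// Custom "Sequence" that merges its own variables with those inherited
// from its parent before passing the combined set to every child, so
// children effectively see "global" variables.
public sealed class SequenceActivity : DeployActivity
{
    public Dictionary<string, string> Variables { get; } =
        new Dictionary<string, string>();
    public List<DeployActivity> Children { get; } =
        new List<DeployActivity>();

    public override void Execute(IDictionary<string, string> inherited)
    {
        var scope = new Dictionary<string, string>(inherited);
        foreach (var kv in Variables)
            scope[kv.Key] = kv.Value; // own variables shadow inherited ones
        foreach (var child in Children)
            child.Execute(scope);
    }
}

// Leaf activity that records what it saw, e.g. a variable used for a
// configuration file transformation.
public sealed class RecordingActivity : DeployActivity
{
    public string Seen = "";

    public override void Execute(IDictionary<string, string> variables) =>
        Seen = variables.TryGetValue("ConnectionString", out var v) ? v : "";
}
```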
Here is a simple deployment workflow:
Note: The above workflow simply demonstrates that an Agent activity can perform activities in sequence, and that it will perform the InstallNuGetPackage activities in parallel.
The goal of the workflow is to make it easier for other team members to set up deployment scripts by drag & drop of activities, and to have a visual overview of the script.
If you would like to follow my progress or know when I post a new blog post, please follow me on twitter: @fredrikn
I’m sitting on the train on my way home from Agila Sverige 2013. It has been two wonderful and exhausting days, with lots of pleasant, competent people, good lightning talks, and open, rewarding discussions and dialogues. My own contribution: Fredrik Wendt: Coding Dojos for companies. I was first up and didn’t have my presenter notes in front of me, which was the opposite of what I had […]
On April 30th, Fredrik Normén at Squeed will talk about Continuous Delivery and his experiences in the area from the project he is currently in at a finance company. You will get a basic understanding of the concept of Continuous Delivery, and of how Fredrik and his team have chosen to solve certain problems in their development team, all […]
This weekend I finally got away to CITCON (pronounced “kit-con”), the Continuous Integration and Testing Conference. It is an annual conference that this year is arranged on four continents. The format is open space, and the only fixed, predetermined items are when sessions start being presented, which rooms exist, and the times for sessions and breaks – everything […]
DevSum 2012 began with a keynote by Martin Laforest from the University of Waterloo in Canada. He does research on quantum computers and is a theoretical physicist. Getting a whole audience of developers to understand quantum physics is perhaps not easy, but as far as possible he did a good job, and you could at least understand parts of […]