Azure is without a doubt one of Microsoft's big developer investments. So in the December edition of the Developer Roadshow, Azure was the topic, together with the "new" buzzword IoT (Internet of Things). For those who are not familiar with Azure, it is Microsoft's cloud platform. The day was split into two sessions, a general introduction workshop on Azure and a […]
Microsoft Message Queuing, or simply MSMQ, is a tool that I've been using as part of my current assignment in developing an integration solution for a customer. In short, MSMQ provides fail-safe, message-based communication between or within applications through the use of queues.
About three years ago we started using feature branches in our project. It took between half a day and a full day to merge those branches before each deployment. Feature branches were used more or less because most team members had worked that way in earlier projects and thought it was the way to work. At the time, some of them even wanted to introduce bug branches! That was the right branching strategy for them, because that was what they had learned.
We are all often stuck in a pattern of thinking; during our lives we have learned from the community surrounding us what the correct way of thinking and doing things is. In school we learned that 1+1 is 2. We are often stuck in that pattern, so when someone asks us "what is 1+1?", most of us will probably say two. But what if someone suddenly says, no, it's three? Well, I think most of us will say NO, it isn't, it's two. 1+1 can actually be three: it's just three characters, 1, + and 1, so it's three. It's NOT strange that we do what we are used to, and there is absolutely nothing wrong with us because of that. About fourteen years ago I took a course at the university, "Psychology at the workplace". A teacher showed us a picture of a cube. She said that only a child and a genius can take that cube apart.
No one in the classroom managed to take the cube apart. The teacher told us that during our lives we have learned a pattern, which is why we didn't manage to take it apart. A child hasn't learned it yet, and a genius is just a genius. What I learned that day is that there may be a way to solve a problem and, even if we don't see it, it can be the pattern we have learned that is standing in our way. But I know that we can find the elephant in the room, and slice it!
And in my team we almost did it when it came to the branch-and-merge hell! We started researching how we could reduce the merges, how we could work with the source code, and how our process could help us reduce the waste.
At the moment we rarely need to merge at all. We haven't reached the ultimate goal yet, but we aren't far from it.
At the moment we work against a single branch during an iteration, but when we are done we create a UAT branch. The UAT branch then turns into a Release branch before deployment to production. There are some problems with those branches. First, we build from them. So when we do a UAT release we build the UAT branch and deploy the binaries. The problem with this is that you can't really trust the code in the UAT branch: since the branch was created, changes to the code may have happened, for example a hotfix made in UAT that someone forgot to merge back to the main branch. We have even experienced a hotfix made in the release branch that was never merged into the main branch. The hotfix was released to production, but the next release reintroduced the bug, and it took a few months until it was detected again.
On our way towards a great Continuous Delivery experience, we have started to apply feature toggles and branch by abstraction, with the goal of eventually removing branches and working against one single branch.
By using feature toggles we can release unfinished code into production; we just make sure that code will never be executed. That way we can work in one single branch and avoid merges between branches, and we also avoid the problem we had with hotfixes in other branches that were never merged. We can also continuously deliver completed features into production that are not yet enabled, and the product owner can enable them when he feels it's time. If a bug is found in a newly deployed feature and a critical fix is needed, the feature can simply be turned off in production.
How we use feature toggles
Something that is important when it comes to feature toggling is to remove the switch once the new feature should always be on. If we don't remove it, we easily introduce technical debt. We wanted to make it easy for us to find and remove the switches from our code, so we introduced a class for each feature and added it to a specific folder.
Note: The ConfigSwitch base class just helps us read from the application configuration file whether the feature is on or off; we use <appSettings> to turn a feature on and off.
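The feature classes themselves are not shown in this excerpt. Based on the description, a minimal sketch could look like the following (ConfigSwitch and MyFeature are the names mentioned in the post; the exact implementation is an assumption):

```csharp
using System.Configuration;

// Sketch of the ConfigSwitch base class described above. The real
// implementation is not shown in the post; this is an assumption of how it
// could read an on/off flag from <appSettings>.
public abstract class ConfigSwitch
{
    // Reads e.g. <add key="MyFeature" value="true"/> from <appSettings>.
    // A missing or unparsable key is treated as "feature off".
    public bool IsEnabled
    {
        get
        {
            var value = ConfigurationManager.AppSettings[GetType().Name];
            bool enabled;
            return bool.TryParse(value, out enabled) && enabled;
        }
    }
}

// One class per feature, placed in a specific folder so the switches are
// easy to find and remove once the feature should always be on.
public class MyFeature : ConfigSwitch
{
}
```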
In the code where we want to use either the new feature or the old one, based on whether the feature is enabled, we just use the class we created:
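The snippet is missing from this excerpt; a hedged sketch of the if statement could be (MyFeature is the class named in the post, while the old/new code paths are placeholders for illustration):

```csharp
// Hypothetical usage of the MyFeature switch; UseTheNewFeature and
// UseTheOldFeature are placeholder methods for illustration.
var myFeature = new MyFeature();

if (myFeature.IsEnabled)
{
    // New, toggled-on behavior.
    UseTheNewFeature();
}
else
{
    // Existing behavior, kept until the toggle is removed.
    UseTheOldFeature();
}
```

When the feature should always be on, the whole if statement collapses to the new code path and the MyFeature class can be deleted.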
We also use the feature toggles when we register our dependencies. Because we work against abstractions, we can easily replace the implementation behind an abstraction with another one. Here is our registration of dependencies:
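The registration code is missing from this excerpt. A sketch with a hypothetical container API could look like the following (the post does not name the container used, and the service names are made up for illustration):

```csharp
// Toggle-based dependency registration, sketched with hypothetical names:
// "container", IOrderService and its implementations are not from the post.
var myFeature = new MyFeature();

if (myFeature.IsEnabled)
{
    // The new implementation behind the abstraction.
    container.Register<IOrderService, NewOrderService>();
}
else
{
    // The current implementation, used while the feature is off.
    container.Register<IOrderService, OrderService>();
}
```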
The reason we add a new class for each feature to turn on or off is that we can simply remove the MyFeature class, easily find all the places in the code where it was used, and just remove the if statements.
By using feature toggles we can work in one single branch, reduce merges and continuously deliver from that single branch into production. It requires disciplined team members; for some this is a new way of thinking and working. It's not a silver bullet; as always, "it depends!". In some projects it may not work, in others it will. Just don't let the elephant in the room fool you. Find it, and slice it!
If you want to know when I publish a new blog post, please feel free to follow me on twitter: @fredrikn
Microsoft's development servers are configured to only allow local requests, which can cause trouble when you, for example, want to test older versions of IE from a virtual machine. A simple way around this limitation is to use Fiddler. You can also use tunnels or other proxies, but the advantage of Fiddler is that most developers already have it installed. […]
So it is autumn again, which means we have many new developer evenings ahead of us before summer comes around again. On September 19th, Javaforum and nforum run at Folkets Hus in Gothenburg. Flyers in PDF are available here for both Javaforum and nforum if you want to spread the agenda. Welcome!
On September 5th, JetBrains comes to Squeed's HQ to talk about ReSharper 8, a tool that makes writing source code more efficient. Contents: In this talk, I will take you on a journey around the various new and improved features that have been introduced in the latest incarnation of ReSharper. We'll take a look at improvements […]
Many of you have probably already heard about OWIN (Open Web Interface for .NET) and Katana, which implements OWIN and uses different components to support it, for example System.Web and System.Net.HttpListener. If you haven't heard about OWIN or Katana before, please read the following: OWIN and Katana.
One way to get started with Katana is to install the Katana Tooling. Download the tooling and unzip the downloaded zip file, then install Katana.vsix. The file includes Microsoft Visual Studio 2012 templates for Katana Web Applications and Katana Console Applications.
When the installation is completed you can start Visual Studio 2012. Create a New Project and select Templates, Visual C#. You will now see two Katana projects, the Katana Console Application and the Katana Web Application:
The Katana Console Application uses HttpListener and the Katana Web Application uses System.Web ("ASP.Net"). With the Katana Web Application you can use the web.config file, global.asax etc.; it is more or less a simple ASP.Net project. The Katana Console Application is more of a lightweight host that doesn't take advantage of ASP.Net.
Select the Katana Console Application and name it whatever you like. I will use the default suggested name.
The Katana Console Application template creates two .cs files, Program.cs and Startup.cs. The Program.cs file looks like this:
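The file contents are missing from this excerpt. Based on the description that follows (WebApp.Start, the URI and Process.Start), the template's Program.cs is roughly the following; treat it as an approximation, not the template's exact output:

```csharp
using System;
using System.Diagnostics;
using Microsoft.Owin.Hosting;

namespace KatanaConsoleApplication
{
    // Reconstructed from the description below; the exact generated file
    // is not shown in this excerpt.
    class Program
    {
        static void Main(string[] args)
        {
            var uri = "http://localhost:12345";

            // Start the OWIN host and use the Startup class for configuration.
            using (WebApp.Start<Startup>(uri))
            {
                // Open the default browser against the host.
                Process.Start(uri);

                Console.WriteLine("Listening on " + uri);
                Console.ReadLine();
            }
        }
    }
}
```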
WebApp.Start<Startup>(uri) starts listening for incoming requests on the specified URI, in this case http://localhost:12345. The generic Start method makes sure the Startup class is used during startup; the Startup class is used for configuration, more about this later. Process.Start simply starts a new process: in this case, since a URI is specified, your default Internet browser opens when the console application starts.
The Startup.cs file is used for configuration, for example to specify which middleware should be used. Middleware are the modules an OWIN host will use: instead of a normal IIS pipeline with several modules chained after each other, you add only the modules you prefer to use. This gives you the freedom to add just those you really need. Here is the Startup.cs file created by the Katana Console Application template:
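The file is missing from this excerpt. Based on the description below (the Configuration method, UseErrorPage and UseWelcomePage), the generated Startup.cs is roughly:

```csharp
using Owin;

namespace KatanaConsoleApplication
{
    // Reconstructed from the description below; the exact template output
    // is not shown in this excerpt.
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            app.UseErrorPage();   // show a detailed error page during development
            app.UseWelcomePage(); // show a welcome page when the app starts
        }
    }
}
```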
The Configuration method takes an IAppBuilder as an argument. The IAppBuilder is used to register OWIN middleware. The UseErrorPage and UseWelcomePage methods are extension methods on IAppBuilder; they add the use of an error page and show a welcome page when the application is started. It's a common pattern to add extension methods to IAppBuilder. You can remove the lines inside the Configuration method, because we aren't going to use them.
Using ASP.Net Web API with OWIN/Katana
To use ASP.Net Web API together with OWIN/Katana you need to install the ASP.Net Web API OWIN NuGet package. Enter the install command in the Package Manager Console in Visual Studio 2012 and press Enter to install ASP.Net Web API.
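The command itself is missing from this excerpt. The ASP.NET Web API OWIN integration was published as the Microsoft.AspNet.WebApi.Owin package, so the command was most likely (pre-release at the time of writing, hence the -Pre flag, mirroring the Note below):

```powershell
# Assumed package id; run in the Visual Studio Package Manager Console.
Install-Package Microsoft.AspNet.WebApi.Owin -Pre
```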
Note: Instead of downloading Katana, you can simply create an empty Console Application and, in the Package Manager Console, write: Install-Package Microsoft.AspNet.WebApi.OwinSelfHost -Pre. You can read more about it here.
The next step is to configure the use of ASP.Net Web API. In the Configuration method of the Startup.cs file add the following lines of code:
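The code lines themselves are missing from this excerpt. Based on the description in the next paragraph (HttpConfiguration, route setup and UseWebApi), they are roughly the following; the route template is the conventional Web API default, which matches the /api/Default URL used later in the post:

```csharp
// Reconstructed from the description below; added inside the Configuration
// method of Startup.cs.
var config = new HttpConfiguration();

// The conventional default Web API route: /api/{controller}/{id}.
config.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional });

app.UseWebApi(config);
```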
The HttpConfiguration class is used to set up the ASP.Net Web API routes. To use ASP.Net Web API, the IAppBuilder extension method UseWebApi is used; it takes an HttpConfiguration as an argument. This is all you need to do to set up ASP.Net Web API. The next step is to add a simple ApiController.
Add a new class, DefaultController, and make sure it inherits from the System.Web.Http.ApiController class.
Add a Get method that simply returns “Hello World”.
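The controller code is not shown in this excerpt; a minimal version matching the description is:

```csharp
using System.Web.Http;

// Minimal ApiController as described above: a Get method returning
// "Hello World". The default route maps it to /api/Default.
public class DefaultController : ApiController
{
    public string Get()
    {
        return "Hello World";
    }
}
```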
Run the application and enter http://localhost:12345/api/Default in your browser; you should now get an XML result, something like this:
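The screenshot of the result is missing from this excerpt. For a plain string return value, Web API's default XML formatter produces something along these lines:

```xml
<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">Hello World</string>
```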
You have now created a simple OWIN Host using Katana to host a simple ASP.Net Web API application.
Using NancyFx with OWIN/Katana
To use NancyFx you need to install the following NuGet package:
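The package id is missing from this excerpt. NancyFx's OWIN support ships in the Nancy.Owin package, so the command was most likely:

```powershell
# Assumed package id; run in the Visual Studio Package Manager Console.
Install-Package Nancy.Owin
```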
Open the Startup.cs file and change the Configuration method to the following code:
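The code is missing from this excerpt. With the Nancy.Owin package installed, the Configuration method becomes roughly:

```csharp
using Owin;

namespace KatanaConsoleApplication
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Let Nancy handle all incoming requests.
            app.UseNancy();
        }
    }
}
```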
This is all you need to enable NancyFx; there is no need to register routes etc. like you did when using ASP.Net Web API. The next thing to do is to add a Get "action" method. Create a new class and make sure it inherits from the Nancy.NancyModule class:
Add a constructor and then use the Get property to specify a route and the lambda expression that should run when the route is accessed with an HTTP GET request, for example returning "Hello World!":
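The module code is not shown in this excerpt; a minimal version matching the description is (the class name DefaultModule is an assumption):

```csharp
using Nancy;

// Minimal Nancy module as described above: a GET route for "/" returning
// "Hello World!". The class name is a made-up example.
public class DefaultModule : NancyModule
{
    public DefaultModule()
    {
        // Nancy (pre-2.0) route syntax: indexer on the Get property takes
        // the route pattern, the lambda is the "action".
        Get["/"] = parameters => "Hello World!";
    }
}
```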
Run the application and in your browser enter: http://localhost:12345/ and you should now see “Hello World!” in the browser.
You have now created a simple OWIN Host using Katana to host a simple NancyFx application.
In this blog post you have learned how easy it is to create a lightweight host for either ASP.Net Web API or NancyFx using OWIN and Katana.
If you want to know when I post a blog post, please feel free to follow me on twitter: @fredrikn
Often, developers and testers use their own machines for developing and testing software. The local environments can look different, have different tools installed, even different libraries; sometimes some may use technical previews (like me using …
During the last month I have created different deployment tools as proofs of concept. The tools have changed from push deployment to pull deployment, and from my own XML workflow and environment definition to using Microsoft Workflow. Finally I have decided to introduce the Polymelia Deploy tool to you. The goal is to make the tool open source. The code I have is still at a proof-of-concept level and needs some more work before it will be available.
Polymelia uses agents installed on servers. Because it uses pull deployment, no one communicates directly with an agent. This makes it much easier to install agents on servers, and no ports need to be opened. Each agent has a role, for example "Web Server" or "Application Server". When an agent is running it asks a Controller for tasks to execute.
Because agents have roles and Polymelia uses pull deployment, we can add a new server, put an agent on it and specify its role, for example "Web Server". When the server is up and running, the agent asks a controller for tasks; the latest succeeded tasks are retrieved and executed. That makes it easy to just add a new server to a load-balanced environment and have it automatically configured and installed once it's up and running. There is no need to do a push deployment or change the deploy script.
In the near future the agents will be schedulable, so you can control when and how often they ask for tasks. The agents will also use SignalR; with SignalR, a controller can know when a new agent is added to the environment, and by using the Polymelia Deploy Client we can approve that agent before it is allowed to ask for tasks. One idea on the to-do list is to be able to specify an IP range from which new agents are automatically allowed, without needing approval.
Polymelia has at the moment just a few activities (but it will get more, maybe you will help me create them ;)). One activity is a NuGet Installation activity, which works similarly to Octopus Deploy: it gets binaries from an Artifact Repository using a NuGet server.
The packages can have configuration files that can be transformed, and variables that can replace connection strings, appSettings keys and WCF endpoints. In the near future it will be able to replace any kind of key and value in the configuration file, using markers in the config file:
<add key="$VariableName$" value="$variableName2$"/>
The NuGet Installation activity will also search for PowerShell scripts in the package, pass variables to them and execute them. This makes it possible to use PowerShell to configure and install packages on a server. Because Polymelia is based on Microsoft Workflow, it's possible to use pre-defined activities that reduce the need for PowerShell, like creating an MSMQ queue, installing a service, creating an app pool, running a PowerShell script, starting a Virtual Machine etc.
Polymelia Deploy Client
Polymelia Deploy Client is the tool used to create deployable workflows, and to perform the deployment of a specific version.
When Polymelia Deploy Client is loaded we can create or select a project:
When a project is created or loaded, we are able to add environments:
When the environment(s) are added we can start creating our deploy tasks. The following illustrates how we can tell the Controller to start a Virtual Machine that has an agent installed with the role "Web Server". When the Virtual Machine has started, a parallel activity executes two "Deploy to Agent" activities, one for the role "Web Server" and one for the role "Database". The tasks added inside "Deploy to Agent" are the tasks the Controller will add to a queue. The agent with the "Web Server" role reads from the queue and executes the tasks added for that role: it gets two packages from a NuGet server and installs them on the server, in parallel.
When hitting the DEPLOY button, we specify the version we are going to deploy, and the deploy workflow is then passed to the Controller for execution. When the agents start to install tasks, they report back to the Controller, and the client can read the reports.
This project is still under development. If you are interested in contributing to this project, please let me know. The reason I'm making this tool open source is to get help from the community to build it, and also to give the community an open source option to use.
If you want to know when I post a blog post, please feel free to follow me on twitter: @fredrikn
During the last twelve months I have spent a lot of time on Continuous Delivery and a deep dive into Team Foundation Server 2012. A commit stage is set up; we use TFS Build to build, and NuGet as an Artifact Repository. Now my goal is to bring deployment to the team: let them be part of the deployment process, maintain and create deployment scripts etc. I have looked at some deployment tools like Octopus and InRelease, and InRelease looks promising. But instead of using one of those great tools, I decided to create my own. Why, you may ask? The reason is that I want to be able to make any modifications and bring the code to the team, and to be honest, I needed something to do in my spare time 😉
In this blog post I will describe my tool.
Pull or Push deployment
My first tool supported only push deployment. It used my own XML definition to specify environments and deployment steps. My deployment controller read the XML, downloaded the packages to deploy from a NuGet repository, and pushed them to deployment agents (installed on every machine). The deployment agent's role was to unzip the package, transform configuration files and run PowerShell scripts. With push deployment I had more control over how the deployment went. The main problem was that I needed to make sure the deployment controller had access to the servers where the agents were installed, and that a specific network port was open in the firewall. Another problem was that when a new web server was added (for example to a load balancer), changes had to be made to the XML file specifying the deployment environments, and a new push deployment had to be started to configure the server.
I decided to move to a pull deployment solution. I removed my own XML definition file and decided to use Microsoft Workflow instead. The deployment controller uses OWIN and Katana instead of WCF (my first push tool used WCF). The deployment controller is responsible for executing a deployment workflow. I added a workflow activity that can be used to perform actions on specific servers. When a deployment is started, the deployment controller adds the task to a queue. The deployment agent checks the queue for tasks; if a task has been added to the queue for that specific server, the agent starts to execute it and reports its progress back to the deployment controller. The deployment agent downloads the packages it needs, transforms configuration files and executes PowerShell scripts if they are part of the package to be installed. Nothing is removed from the queue; the deployment agent is responsible for knowing which tasks it has already executed. That way I can simply install an agent on a new server, specify that it has the role of a "webserver", and when it starts it checks the queue for the latest tasks to execute. With this solution I can simply add a server and it will automatically be configured; no changes to the deployment script, and no triggered push deployment, are needed.
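The agent's pull loop described above can be sketched as follows. This is an illustrative sketch only; all of the names (IDeployTask, IControllerClient, DeployAgent) are hypothetical and not Polymelia's actual API:

```csharp
using System.Collections.Generic;

// A unit of work the controller has queued: download the package,
// transform configs, run scripts. Hypothetical interface for illustration.
public interface IDeployTask
{
    string Id { get; }
    void Execute();
}

// The agent's view of the deployment controller. Hypothetical interface.
public interface IControllerClient
{
    IEnumerable<IDeployTask> GetTasksForRole(string role);
    void ReportProgress(string taskId, string status);
}

public class DeployAgent
{
    private readonly string _role;
    private readonly HashSet<string> _executed = new HashSet<string>();

    public DeployAgent(string role)
    {
        _role = role;
    }

    // One polling pass. The queue is never drained by the agent; the agent
    // itself keeps track of which tasks it has already executed, so a
    // freshly installed agent picks up the latest tasks automatically.
    public void RunOnce(IControllerClient controller)
    {
        foreach (var task in controller.GetTasksForRole(_role))
        {
            if (!_executed.Add(task.Id))
                continue; // already executed on this server

            controller.ReportProgress(task.Id, "started");
            task.Execute();
            controller.ReportProgress(task.Id, "completed");
        }
    }
}
```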
Note: I don’t have the goal to build a high scalable solution for hundreds of servers.
Here is an architecture overview:
The deployment controller doesn't know anything about the deployment agents; only the agents know about the deployment controller. I use NuGet.Core to get packages from the Artifact Repository, and the Microsoft.Web.Xdt library for configuration file transformations. The deployment agent is a thin service; it's the workflow activity the agent executes that handles everything that happens within the agent.
I decided to use Microsoft Workflow to specify the deployment steps to be performed, because of its design editor and the way it easily visualizes the deployment steps. I have created a Sequence, a Parallel, an InstallNuGetPackage and an Agent activity. The reason I created custom Sequence and Parallel activities is to make them pass global variables to child activities, variables that can be used when a configuration file is transformed, or by a PowerShell script. Variables in Workflow are scope based, and I needed to make the variables global within the scope and visible to its children; I was kind of disappointed that Workflow didn't pass parent variables to a child activity. The Agent activity serializes all of its child activities and adds them to the queue for an agent to pick up. The InstallNuGetPackage activity downloads a package from the Artifact Repository and handles the execution of PowerShell scripts and configuration transformation. I will probably add more activities, such as installing a service, creating MSMQ queues, creating web applications etc., activities that will reduce the PowerShell scripting.
Here is a simple deployment workflow:
Note: The above workflow only demonstrates that an Agent activity can perform activities in sequence; it also performs InstallNuGetPackage activities in parallel.
The goal with the workflow is to make it easier for other team members to set up deployment scripts by dragging and dropping activities, and also to have a visual overview of the script.
If you would like to follow my progress or know when I post a new blog post, please follow me on twitter: @fredrikn