As part of a new project to create a Jenkins CI server on Azure I am writing a set of PowerShell scripts to control virtual machines on Azure. For this project the plan is to use virtual machine (VM) images as a template for an 'immutable server' that will contain the Jenkins instance.
Now the actual server isn't really 'immutable' given that the Jenkins instance will update, add and delete files on the hard drive, which will obviously change the state of the server. As such the immutable idea isn't applied to the whole server but more to the configuration part of the server. The idea is that the configuration of the server will not be changed once the server is put in production. Any configuration changes (e.g. a new version of Jenkins) will be done by creating a new image, spinning up a new server based on that image and then destroying the old server and replacing it with the new one.
So in order to achieve this goal the first step will be to build an image with all the required software on it and then verify that this image has indeed been created correctly.
To create the image we first obtain a certificate that can be used for the WinRM SSL connection between the Azure VM and the local machine that is executing the creation scripts. You can either get an official one or you can use a self-signed certificate (which is obviously less secure). Two things of interest are:
- The certificate needs to have an exportable private key because otherwise it cannot be used for the WinRM connection.
- The certificate needs to be named after the connection that you expect to make. For a connection to an Azure VM this will most likely be something like `<yourservice>.cloudapp.net`.
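For a self-signed certificate a minimal sketch could look like the following. Note that the `-KeyExportPolicy` and `-KeySpec` parameters require one of the newer versions of the `New-SelfSignedCertificate` cmdlet, and the DNS name is a placeholder for your own service name:

```powershell
# Create a self-signed certificate with an exportable private key, named
# after the DNS name through which the VM will be reached.
$dnsName = 'myjenkinsservice.cloudapp.net'   # placeholder: your own service name

New-SelfSignedCertificate `
    -DnsName $dnsName `
    -CertStoreLocation 'Cert:\CurrentUser\My' `
    -KeyExportPolicy Exportable `
    -KeySpec KeyExchange
```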
Once the certificate is installed in the user certificate store we can create a new virtual machine from a given base image, e.g. a Windows 2012 R2 server image. The following PowerShell function creates a new Windows VM with a WinRM endpoint that uses the certificate that was created earlier. Note that the `New-AzureVM` function can create the cloud service and storage for the new VM if you don't specify an existing storage account and a matching cloud service.
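The core of that function, using the classic Azure PowerShell cmdlets, might look something like this sketch; the VM name, service name, instance size and location are all placeholders:

```powershell
# Build the VM configuration: base image, admin credentials and the WinRM
# certificate that secures the https listener.
$vmConfig = New-AzureVMConfig -Name 'jenkins-template' -InstanceSize 'Small' -ImageName $baseImageName |
    Add-AzureProvisioningConfig -Windows `
        -AdminUsername $adminUser `
        -Password $adminPassword `
        -WinRMCertificate $certificate

# New-AzureVM creates the cloud service as well when a location is given.
New-AzureVM -ServiceName 'myjenkinsservice' -Location 'West US' -VMs $vmConfig -WaitForBoot
```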
Once the VM is running a new PowerShell remote session can be opened to the machine in order to start the configuration of the machine. Note that this approach only seems to work for https connections because the `Get-AzureWinRMUri` function only returns the https URI; hence the need for a certificate that can be used to secure the connection.
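As a sketch, opening the remote session might look like this (the service and VM names are placeholders for your own deployment):

```powershell
# Get the https WinRM URI for the VM and open a remote session to it.
$uri = Get-AzureWinRMUri -ServiceName 'myjenkinsservice' -Name 'jenkins-template'
$credential = Get-Credential
$session = New-PSSession -ConnectionUri $uri -Credential $credential
```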
The next step is to copy all the installer files and configuration scripts to the VM. This can be done over the remoting channel.
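With a session in hand the files can be pushed across, for instance with `Copy-Item`, which supports copying to a remote session from PowerShell 5.0 onwards; the source and destination paths below are assumptions:

```powershell
# Make sure the target directory exists on the VM.
Invoke-Command -Session $session -ScriptBlock {
    New-Item -Path 'c:\temp\installers' -ItemType Directory -Force
}

# Copy installers and configuration scripts over the remoting channel.
# Requires PowerShell 5.0+ on both sides for the -ToSession switch; on older
# versions the files have to be streamed in chunks via Invoke-Command.
Copy-Item -Path 'c:\installers\*' -Destination 'c:\temp\installers' -ToSession $session -Recurse
```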
Once all the required files have been copied to the VM the configuration of the machine can be started. This can be done in many different ways, e.g. through the use of a configuration management tool or just via plain old scripts. When the configuration is complete and all the necessary clean-up has been done the time has come to turn the VM into an image. Before doing that a Windows machine will have to be sysprepped so that there are no unique identifiers in the image (and thus in the copies).
In order to sysprep an Azure VM it is necessary to execute the sysprep command through a script on the VM because sysprep fails if the command is given directly through the remoting channel. The following function creates a new PowerShell script which invokes sysprep, copies that script to the VM and then executes it. Once sysprep has completed the machine will be turned off and an image can be created.
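A minimal sketch of such a function could look as follows; the function name and the paths on the VM are assumptions, and the copy step again relies on PowerShell 5.0+:

```powershell
function Invoke-SysprepOnVm
{
    param(
        [System.Management.Automation.Runspaces.PSSession] $session
    )

    # Write a small script that runs sysprep locally on the VM. The
    # /generalize switch strips the unique identifiers and /shutdown turns
    # the machine off once sysprep is done.
    $localScript = Join-Path $env:TEMP 'sysprep-vm.ps1'
    Set-Content `
        -Path $localScript `
        -Value '& "$env:windir\System32\Sysprep\sysprep.exe" /oobe /generalize /shutdown /quiet'

    # Copy the script to the VM and execute it there, rather than invoking
    # sysprep directly through the remoting channel.
    Copy-Item -Path $localScript -Destination 'c:\temp\sysprep-vm.ps1' -ToSession $session
    Invoke-Command -Session $session -ScriptBlock { & 'c:\temp\sysprep-vm.ps1' }
}
```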
The next step is to test the new image in order to verify that all configuration changes have been applied correctly. The explanation of how the testing of a virtual machine image works is a topic for the next blog post.
The build server that is being used to build the packages for nBuildKit is AppVeyor. AppVeyor is a continuous integration system in the cloud. The way AppVeyor works is that every time a commit occurs in a GitHub project AppVeyor is notified. AppVeyor then spins up a new clean virtual machine (VM) on which your build scripts are executed. Once the build is done the VM is terminated and thrown away. This way changes made to the build environment by one build can never influence future builds.
For nBuildKit two builds were configured. The first configuration is the standard continuous integration build which generates the version numbers and templates and then creates the NuGet packages. As the final step the build artefacts are archived for later use by the second build configuration. For this configuration no special settings are required other than to tell AppVeyor to store the artefacts.
The second build configuration handles the delivery of the artefacts. This configuration gathers the build artefacts from the latest build of the first build configuration, tags the revision that was built, pushes the NuGet packages to NuGet.org and marks the given commit as a release in GitHub.
For this second configuration a few tweaks need to be made to the environment before the build can be executed. The first thing to do is to install the github-release application which provides an easy way to push release information to GitHub. A simple PowerShell script is used to set up this part of the environment:
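A sketch of such a script is given below; the download URL and the target directory are assumptions and should be adjusted to the release of the tool you actually want:

```powershell
# Download a github-release binary and unpack it into a tools directory.
# URL and paths are placeholders for the release you want to use.
$zipFile = Join-Path $env:TEMP 'github-release.zip'
Invoke-WebRequest `
    -Uri 'https://github.com/aktau/github-release/releases/download/v0.6.2/windows-amd64-github-release.zip' `
    -OutFile $zipFile

Add-Type -AssemblyName 'System.IO.Compression.FileSystem'
[System.IO.Compression.ZipFile]::ExtractToDirectory($zipFile, 'c:\tools\github-release')
```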
Once all the required tools are installed the artefacts of the selected continuous integration build need to be downloaded and placed in the correct directories. For that yet another PowerShell script is used:
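The general approach can be sketched with the AppVeyor REST API as follows; the account name, project slug, API token variable and output directory are all placeholders:

```powershell
# Get the latest build of the CI project and download its artefacts.
$headers = @{ 'Authorization' = "Bearer $env:APPVEYOR_API_TOKEN" }

$project = Invoke-RestMethod `
    -Uri 'https://ci.appveyor.com/api/projects/myaccount/nbuildkit' `
    -Headers $headers
$jobId = $project.build.jobs[0].jobId

# List the artefacts for the build job, then fetch each one into the
# directory where the delivery build expects to find them.
$artifacts = Invoke-RestMethod `
    -Uri "https://ci.appveyor.com/api/buildjobs/$jobId/artifacts" `
    -Headers $headers
foreach ($artifact in $artifacts)
{
    $fileName = $artifact.fileName
    Invoke-RestMethod `
        -Uri "https://ci.appveyor.com/api/buildjobs/$jobId/artifacts/$fileName" `
        -Headers $headers `
        -OutFile (Join-Path 'build\artifacts' (Split-Path $fileName -Leaf))
}
```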
Once all the artefacts are restored the delivery process can be executed. For nBuildKit the delivery process is executed by nBuildKit itself in the standard dogfooding approach that is so well known in the software business.
This release introduces a version provider using GitVersion and a custom version provider that can be implemented by a user. On top of that the nBuildKit.MsBuild.Projects.Common and nBuildKit.MsBuild.Projects.Common.Net packages have been merged with the C# and WiX packages.
Last year when I started this blog I decided to keep the layout as simple as possible, hence all the posts were just added to the home page and to their own page. Over time, as more posts were written, the home page got larger and larger, making it slower to load and more difficult to navigate. In order to improve this, pagination of the home page was introduced.
Once again DocPad makes this very easy because all you have to do is add the DocPad paged plugin and then update the documents you want to split. In the case of this blog only the index page needed to be split.
In order to make the paged plugin do its work the following 'properties' were added to the header:
- isPaged - Indicates that the document should be broken up into multiple pages.
- pageCollection - The collection from which the sub-documents that will fill up the current page are taken.
- pageSize - The number of sub-documents per page.
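Putting those three properties together, the front matter of the index page might look something like this sketch (the layout name, collection name and page size are assumptions):

```
---
layout: 'default'
isPaged: true
pageCollection: 'posts'
pageSize: 5
---
```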
Finally two buttons were added to the bottom of the page to navigate to the newer and the older posts.
A while ago I decided that it was time to add an archive page to the website so that there would be a place to get a quick overview of all the posts that I have written. Fortunately setting up an archive page with DocPad is relatively simple.
The first step to take was to add a new layout for the archive page.
The layout gets the list of all posts and iterates over them in chronological order. All the posts for one year are grouped together under a header titled after the year. In keeping with the layout of the rest of the site each post gets a title, the day and month, and the tags that belong to that post. In order for this specific layout to work you will need to add the `moment` node.js package.
The layout and the CSS for the archive page are heavily based on the layout created by John Ptacek.