Friday, 13 October 2017

Help! I cannot complete my Pull Request in TFS!

A quick one today: if the Complete button is greyed out in your Pull Request UI, ask your TFS Administrator to start the TFS Job Agent.


The TFS Job Agent does many things – including handling Pull Request completion.
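If you happen to be that administrator: the job agent runs as a Windows service on the application tier. On recent TFS versions the service name should be TfsJobAgent (display name “Visual Studio Team Foundation Background Job Agent”), so a quick check from an elevated PowerShell prompt looks something like this:

    # Check the job agent status and start it if it is stopped
    # (service name assumed to be TfsJobAgent – verify on your version)
    Get-Service TfsJobAgent
    Start-Service TfsJobAgent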

Tuesday, 10 October 2017

Successfully handle disruptive changes with no downtime: the TFS 2015 example

It is something I mentioned a few years ago, but the question came up again during my presentation at x-celerate.de:

How should I handle a breaking change without service interruptions?

This is a brilliant question, and the best example I can give is the TFS 2015 upgrade that introduced support for renaming Team Projects.

If you don’t want downtime for your users, the only mitigation is to introduce an intermediate migration layer that pours data out of the production stack and transforms it into the new shape you need.

The upside of this is that you are performing a very expensive and time-consuming operation out-of-band, so you can apply all the usual patterns for highly available application deployments.

The downside is that it is a costly operation: whether in compute, storage or something else, it will cost you something.

In my specific case I was able to perform a scheduled upgrade within the mandatory weekend window (yes, there was still a bit of downtime, but it was expected and due to the nature of the product – if you are building your own product you can overcome that hurdle) instead of facing days of downtime while migrating data from one schema to another. The price was lots of storage space for the temporary tables and a dedicated server to run the migration tool.

Sunday, 8 October 2017

Re-release to an environment, don’t spin up a whole new deployment!

I know this happens on a regular basis – but it caught my eye this morning as I am finishing up preparation for X-Celerate.de.

Let’s say your Release fails:


What many people do is spin up a whole new instance of the Release itself. While this works, you are missing out on something important: traceability.

In a sea of releases, with microservices and multiple moving pieces, how would you be able to trace back what happened during that failed release?

Why not re-deploy the same failed Release instead?


Doing this keeps all the details about the previous failures in one place, making the scenario much easier to recall if you need to refer to it in the future.


Oh, in case you were wondering… it was all down to my lab’s DNS server, which had cached the Kudu website of the App Service I was deploying as a 404. It is Sunday, after all…

Thursday, 5 October 2017

If you don’t package your stuff you are doing it wrong!

Packages are a thing, exactly like containers are.

I mean – who wants to spend countless hours moving files, editing configuration files and the like? Nobody, I know, but still so many people don’t take advantage of application packaging when it comes to deploying stuff!

Let’s take an average web application as an example. What reason pushes you to move loose files (DLLs, .configs, etc.) from a folder to the target server, instead of packaging your application’s components and moving those?

All you need to do is add /p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true to the MSBuild Arguments if you are using a traditional Build task.
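For reference, the very same switches work from a plain command line outside of the Build task – a sketch, where the project file name and the PackageLocation path are placeholders to adapt:

    msbuild MyWebApp.csproj /p:DeployOnBuild=true /p:WebPublishMethod=Package ^
        /p:PackageAsSingleFile=true ^
        /p:PackageLocation="C:\Drops\MyWebApp.zip"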


If you are using .NET Core you just need to select Zip Published Projects in the Publish task!


I just love the MSDeploy comeback. I am a huge supporter of this technology, because it makes life so much easier. It also has the side effect of enabling deployment to Azure in a snap, as it is one of the three supported delivery methods!

Let’s say you have a Build Definition and a Release Pipeline for this application, and you want to deploy it to Azure: all it takes is pointing the deployment task at that package.

What if you want to run the same application on your own servers instead (provided it actually works on-premises as well and does not specifically require any technology unavailable in your datacentre)?

Firstly you’ll need to create a Deployment Group in VSTS – this can be done either statically, by running the appropriate PowerShell registration script on your machines (interactively, via RDP), or dynamically with a bit of PowerShell or ARM automation if you are using IaaS. This step is required because IaaS/on-premises machines in a Deployment Group run a deployment agent.
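For reference, the registration script you get from the Deployment Groups hub essentially configures the agent in deployment group mode – something along these lines, where the account, project, group name and PAT are placeholders, and the exact switches may vary with the agent version:

    .\config.cmd --deploymentgroup --deploymentgroupname "Production" `
        --url https://myaccount.visualstudio.com --projectname "MyProject" `
        --auth PAT --token <personal-access-token> --runasservice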

Then you can run whatever scripts you need to install your application’s prerequisites and configure its settings, and finally use the IIS Web App tasks to interact with IIS.


The Package fed to the IIS Web App Deploy task is exactly the same package I used in the Azure deployment above. So you can easily have a Continuous Delivery pipeline on Azure and a different one (with the same cadence or a different one, your call) for on-premises, both starting from the same artifacts.

Containers are better – of course – but they require a minimum of ramp-up and learning in order to implement them in a production environment. Moving to MSDeploy, on the other hand, is a matter of minutes at most, and it provides a tangible improvement.

Friday, 29 September 2017

A few catches on customising the new Work Item form

If you use Team Foundation Server 2017 you already know this:


The new form is brilliant: it makes much better use of the screen space, with a better UX in general.

But what if you have forms that were already using customised fields and a specific arrangement of controls?


The answer is that Microsoft uses a best-effort transformation to automatically migrate your old form to the new one, but for some reason I found myself in a situation where two customised tabs would not migrate to the new layout.

The new customisation model is brilliant – everything is now much cleaner and easier to use. All you need to do is add what you want to the <WebLayout> tag.
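For instance, adding a custom page hosting a group and a field control could look like this – a sketch, where the page, group and field names are made up for illustration:

    <WebLayout>
      <Page Label="Risk Assessment">
        <Section>
          <Group Label="Risk">
            <Control Label="Risk Notes" Type="HtmlFieldControl"
                     FieldName="MyCompany.RiskNotes" />
          </Group>
        </Section>
      </Page>
    </WebLayout>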


You can see that you now have Page, Group and Section as containers for controls, which makes life much easier when it comes to customisation. In my case I added two new Pages, each hosting the relevant control.


Of course you can customise the display layout by using the LayoutMode attribute. All the documentation is available here.

Wednesday, 20 September 2017

How to encrypt your Team Foundation Server data tier

For all sorts of reasons (including GDPR looming over you) you might feel the need to encrypt your Team Foundation Server databases. Proper encryption at rest.

I realised this is not a really well documented scenario, but it is surprisingly easy to achieve.

What you need to do is leverage SQL Server TDE (Transparent Data Encryption), which has been available out of the box since SQL Server 2008. It acts at page level and is transparent to the application, with little overhead.

The process of enabling TDE is very well documented, and it is based on two keys (the master key and the database encryption key) and a certificate. If you have a single server as a data tier it is very straightforward – off you go.
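On a standalone data tier the whole process boils down to a handful of statements. Here is a minimal sketch – the collection database name, passwords and backup paths are placeholders you will want to change:

    USE master;
    -- 1. Master key and certificate live in the master database
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>';
    CREATE CERTIFICATE TdeCertificate WITH SUBJECT = 'TFS TDE certificate';
    -- 2. Back them up immediately: no backup, no recovery
    BACKUP CERTIFICATE TdeCertificate TO FILE = 'C:\Backup\TdeCertificate.cer'
        WITH PRIVATE KEY (FILE = 'C:\Backup\TdeCertificate.pvk',
                          ENCRYPTION BY PASSWORD = '<AnotherStrongPassword>');
    -- 3. The encryption key and the TDE switch live in the database to encrypt
    USE Tfs_DefaultCollection;
    CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256
        ENCRYPTION BY SERVER CERTIFICATE TdeCertificate;
    ALTER DATABASE Tfs_DefaultCollection SET ENCRYPTION ON;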

Now, this gets slightly more complicated if you have (like me) AlwaysOn protecting your High Availability. Well, complicated if it is the first time you approach the topic.

Working with AlwaysOn requires:

  • On the Primary Replica - creating the master key, the certificate and the database encryption key. Remember to back up the master key and the certificate.
  • On the Secondary Replica - creating the master key and the certificate. The certificate must be created from the backup taken on the Primary Replica (a sketch follows below)!
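The Secondary Replica side, for instance, boils down to restoring the certificate from the Primary’s backup – again a minimal sketch, with the same placeholder paths and passwords as above:

    USE master;
    -- The master key is per-instance, so the secondary needs its own
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>';
    -- Recreate the certificate from the files backed up on the primary
    CREATE CERTIFICATE TdeCertificate
        FROM FILE = 'C:\Backup\TdeCertificate.cer'
        WITH PRIVATE KEY (FILE = 'C:\Backup\TdeCertificate.pvk',
                          DECRYPTION BY PASSWORD = '<AnotherStrongPassword>');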

After these two steps you can enable TDE on the database hosted on the Primary Replica, and encryption will then propagate to the Secondary as part of the normal AlwaysOn synchronisation.

If your databases are already encrypted and you want to add them to an Availability Group you’ll need to do so manually – the wizard will not list encrypted databases as candidates for the AG.
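The manual route is plain T-SQL instead of the wizard – roughly like this, assuming a hypothetical Availability Group called TfsAG, and that the database (and its certificate, as above) has already been restored on the secondary WITH NORECOVERY:

    -- On the Primary Replica
    ALTER AVAILABILITY GROUP TfsAG ADD DATABASE Tfs_DefaultCollection;
    -- On the Secondary Replica, once the restored copy is in place
    ALTER DATABASE Tfs_DefaultCollection SET HADR AVAILABILITY GROUP = TfsAG;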

This SQLServerCentral.com article features a set of queries I found really helpful to get started.

A suggestion though: prepare a single script for the Primary Replica preparation, run it, prepare a single one for the Secondary Replica preparation, run it, and finally run the encryption itself as a separate query.

The reason why I say this is simple: if anything goes wrong before you encrypt the database you can easily drop the master key, the certificate or the encryption key and start again.

Finally, remember that encrypting large databases can take a long time, and the process might stall along the way because of the database size – so keep an eye on the logs and on the encryption progress, and restart it if you need to.
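A handy way of checking on it is the sys.dm_database_encryption_keys DMV, which exposes the encryption state and the progress of the encryption scan:

    SELECT DB_NAME(database_id) AS database_name,
           encryption_state,   -- 2 = encryption in progress, 3 = encrypted
           percent_complete
    FROM sys.dm_database_encryption_keys;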

Tuesday, 19 September 2017

Why Work Item rules are so important

I was on holiday for the last couple of weeks, so I had only limited coverage of what happened since the beginning of the month – but I could not miss the release of custom Work Item rules in VSTS.

Why such emphasis on my end? Well, because until now defining custom rules on Work Items meant fiddling with XML files and command-line tools, or using the Process Template Editor in Visual Studio – a UI that is a bit rough.


It is something any TFS administrator does on a regular basis though. Rules in VSTS can now be defined in a web UI with a consistent experience, and multiple sets of conditions and actions can be defined easily on the same page.


Also, this makes involving team managers in the definition of these rules extremely easy, as there is no Visual Studio, XML or command line involved anymore.