Wednesday, 20 September 2017

How to encrypt your Team Foundation Server data tier

For all sorts of reasons (including GDPR looming over you) you might feel the need to encrypt your Team Foundation Server databases – proper encryption at rest.

I realised it is not a particularly well-documented scenario, but it is surprisingly easy to achieve.

What you need to do is leverage SQL Server TDE (Transparent Data Encryption), which has been available out of the box since SQL Server 2008 (Enterprise Edition). It works at page level and is transparent to the application, with little overhead.

The process of enabling TDE is very well documented, and it is based on two keys (the master key and the database encryption key) and a certificate. If you have a single server as a data tier it is very straightforward – off you go.

Now, this gets slightly more complicated if, like me, you have AlwaysOn protecting your High Availability. Well, complicated only the first time you approach the topic.

Working with AlwaysOn requires:

  • On the Primary Replica – create the master key, the certificate and the database encryption key. Remember to back up the master key and the certificate.
  • On the Secondary Replica – create the master key and the certificate. The certificate must be created from the backup taken on the Primary Replica! (See the sketch right after this list.)
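
Roughly, and assuming the SqlServer PowerShell module (Invoke-Sqlcmd) on a machine that can reach both replicas, the preparation looks like the sketch below. Server names, passwords, file paths, the certificate name and the collection database name are placeholders of mine, not anything TFS mandates:

    # --- Primary Replica: master key, certificate, database encryption key ---
    $prepPrimary = "
    USE master;
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'MyMasterKeyPassword';
    CREATE CERTIFICATE TdeCertificate WITH SUBJECT = 'TFS TDE certificate';

    -- back up the master key and the certificate straight away
    BACKUP MASTER KEY TO FILE = 'C:\TDE\MasterKey.key'
        ENCRYPTION BY PASSWORD = 'MyBackupPassword';
    BACKUP CERTIFICATE TdeCertificate TO FILE = 'C:\TDE\TdeCertificate.cer'
        WITH PRIVATE KEY (FILE = 'C:\TDE\TdeCertificate.pvk',
                          ENCRYPTION BY PASSWORD = 'MyBackupPassword');

    -- the database encryption key lives inside the database you want to encrypt
    USE Tfs_DefaultCollection;
    CREATE DATABASE ENCRYPTION KEY
        WITH ALGORITHM = AES_256
        ENCRYPTION BY SERVER CERTIFICATE TdeCertificate;
    "
    Invoke-Sqlcmd -ServerInstance 'PRIMARY-NODE' -Query $prepPrimary

    # --- Secondary Replica: master key, plus the certificate from the backup ---
    # (copy the .cer and .pvk files from the primary to the secondary node first)
    $prepSecondary = "
    USE master;
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'MyMasterKeyPassword';
    CREATE CERTIFICATE TdeCertificate FROM FILE = 'C:\TDE\TdeCertificate.cer'
        WITH PRIVATE KEY (FILE = 'C:\TDE\TdeCertificate.pvk',
                          DECRYPTION BY PASSWORD = 'MyBackupPassword');
    "
    Invoke-Sqlcmd -ServerInstance 'SECONDARY-NODE' -Query $prepSecondary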

After these two steps you can enable TDE on the database hosted on the Primary Replica, and the change will then flow to the Secondary as AlwaysOn keeps it in sync.
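
Enabling it is then a single statement against the Primary Replica (again, server and database names are examples):

    # Enable TDE on the Primary Replica; the encrypted pages reach the
    # Secondary Replica through the normal AlwaysOn synchronisation.
    Invoke-Sqlcmd -ServerInstance 'PRIMARY-NODE' -Query "
        ALTER DATABASE Tfs_DefaultCollection SET ENCRYPTION ON;
    "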

If your databases are already encrypted and you want to add them to an Availability Group you’ll need to do so manually – the wizard will not show encrypted databases as candidates for the AG. A sketch of the manual route follows.
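
Assuming the certificate is already in place on the Secondary Replica and that a full plus a log backup of the database have been restored there WITH NORECOVERY, the manual route boils down to something like this (Availability Group, server and database names are examples of mine):

    # On the Primary Replica: add the encrypted database to the Availability Group
    Invoke-Sqlcmd -ServerInstance 'PRIMARY-NODE' -Query "
        ALTER AVAILABILITY GROUP [TfsAG] ADD DATABASE [Tfs_DefaultCollection];
    "

    # On the Secondary Replica: join the restored copy to the Availability Group
    Invoke-Sqlcmd -ServerInstance 'SECONDARY-NODE' -Query "
        ALTER DATABASE [Tfs_DefaultCollection] SET HADR AVAILABILITY GROUP = [TfsAG];
    "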

This SQLServerCentral.com article features a set of queries I found really helpful to get started.

A suggestion though: prepare a single script for the Primary Replica preparation, run it, prepare a single one for the Secondary Replica preparation, run it, and only then encrypt from a separate, final query.

The reason why I say this is simple: if anything goes wrong before you encrypt the database you can easily drop the database encryption key, the certificate and the master key and start again – as in the sketch below.
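
A rough rollback sketch for the Primary Replica (on the Secondary you would only drop the certificate and the master key):

    # Undo the preparation before any encryption has taken place;
    # the objects must be dropped in reverse order of creation.
    Invoke-Sqlcmd -ServerInstance 'PRIMARY-NODE' -Query "
        USE Tfs_DefaultCollection;
        DROP DATABASE ENCRYPTION KEY;

        USE master;
        DROP CERTIFICATE TdeCertificate;
        DROP MASTER KEY;
    "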

Eventually, remember that encrypting large databases can take a long time, and the encryption scan might stall along the way precisely because of the database size, so remember to check the logs (and the encryption state, as in the query below) so you can restart it if you need to.
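
A query like this, run against the Primary Replica, tells you where the encryption scan is at:

    # encryption_state: 2 = encryption in progress, 3 = encrypted
    Invoke-Sqlcmd -ServerInstance 'PRIMARY-NODE' -Query "
        SELECT DB_NAME(database_id) AS database_name,
               encryption_state,
               percent_complete
        FROM sys.dm_database_encryption_keys;
    "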

Tuesday, 19 September 2017

Why Work Item rules are so important

I was on holiday for the last couple of weeks, so I only had limited coverage of what happened since the beginning of the month – but I could not miss the release of custom Work Item rules on VSTS.

Why such an emphasis on my end? Well, because until now custom rules on Work Items involved fiddling with XML files and command-line tools, or using the Process Template Editor in Visual Studio – a UI that is a bit tough.

[screenshots: the old XML and Process Template Editor experience]

It is something that any TFS administrator does on a regular basis though. Rules in VSTS can now be defined in a web UI with a consistent experience, and multiple sets of conditions and actions can be defined easily on the same page.

[screenshot: the new web-based rules editor]

This also makes it extremely easy to involve team managers in the definition of these rules, as there is no Visual Studio, XML or command line involved anymore.

Wednesday, 30 August 2017

You might not know this: Service Execution History in VSTS

How would you know how many times your Service Endpoint is invoked, with what result and by whom?

[screenshot: the Execution History tab of a Service Endpoint]

It sits neatly after the Details and Roles tabs, for each Service Endpoint in the Service Endpoints list.

Friday, 18 August 2017

Shadow code indexing jobs after upgrading to TFS 2017.2

Just a quick one – if you remove the Search Server before upgrading to TFS 2017.2, you might see Git_RepositoryCodeIndexing or TFVC_RepositoryCodeIndexing jobs failing with a Microsoft.VisualStudio.Services.Search.Common.FeederException: Lots of files rejected by Elasticsearch, failing this job error.

This happens because the extension is automatically enabled on the collections, which triggers these jobs.

So check your extensions after the upgrade!

Thursday, 10 August 2017

Run a SonarQube analysis in VSTS for unsupported .NET Core projects

With some projects you might face this:

error MSB4062: The "IsTestFileByName" task could not be loaded from the assembly <path>\.sonarqube\bin\SonarQube.Integration.Tasks.dll.

It is a known issue I’m afraid, involving (among others) .NET Standard.

There is a fairly straightforward workaround IMHO. Instead of using the Scanner for MSBuild as you normally would, use the CLI scanner that is now provided by VSTS:

[screenshot: the build definition using the SonarQube Scanner CLI task, with a PowerShell Script step alongside it]

This is enough to let the scanner do its job. This approach can bring a different issue though – if you use branches to identify projects in SonarQube, or if you set some properties dynamically, a fixed, static properties file doesn’t really work.

Still, nothing really blocking. Do you see the PowerShell Script step above?

[screenshot: the PowerShell script used in that step]

This is an example of what you can do – a bit rough, as it just appends a line at the end of the file stating the branch to analyse. It could be much cleaner, but still.

Remember that you can always manipulate files on the agent, and that’s what I do. Add whatever line you want with a script like this, so that you keep the same granular control you would get by adding /d:… parameters to the regular MSBuild-based task.
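
For reference, a rough sketch of what such a script can look like – the properties file name and location and the sonar.branch property are just how my projects happen to be set up, so adjust to taste:

    # Append the branch being built to the CLI scanner's properties file,
    # so that each branch shows up as its own project in SonarQube.
    $propertiesFile = Join-Path $env:BUILD_SOURCESDIRECTORY 'sonar-project.properties'
    $branch = $env:BUILD_SOURCEBRANCHNAME

    Add-Content -Path $propertiesFile -Value "sonar.branch=$branch"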

Monday, 31 July 2017

My take on build pools optimisation for the larger deployments

If you have a large pool of Build Agents it is easy to run into a terrible headache: plenty of hardware resources to manage, capabilities, pools, queues, etc.

Bearing this in mind, having a single default pool is the last thing you want IMHO:

[screenshot: a single Default agent pool]

There are exceptions to this of course – for example if you work on a single system (loosely defined: a single product or a single suite of applications) or if you have a massive, horizontal team across the company.

Otherwise, pooling all the resources together can be a bit of a nightmare, especially if company policy gets in the way – what if each development group/product team/etc. needs to provide the hardware for its own pools?

Breaking this down means you can create pools based on corporate organisation (build/division/team/whatever), on products (one pool per product or service) or on geography.

Performance should be taken into account in any case: you can add custom capabilities to mark out what is special about your machines:

[screenshot: custom capabilities defined on an agent]

Do you need a CUDA-enabled build agent for some SDKs you are using? Add a capability. Is your codebase so legacy, or so massive, that it takes advantage of fast NVMe SSDs? Add a capability. You get the gist of it after a while.

That becomes very handy, because with capabilities you can express your perfect build requirements as demands when you trigger the build, and the system is going to choose an agent that has everything you need – saving you the hassle of manually finding it.
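
System capabilities are, after all, just what the agent advertises from the machine it runs on – environment variables included – so marking a machine can be as simple as this sketch, run elevated on the agent (the capability name is an arbitrary example of mine):

    # Create a machine-level environment variable: the agent reports it as a
    # system capability once its Windows service is restarted.
    [Environment]::SetEnvironmentVariable('HAS_NVME_STORAGE', 'true', 'Machine')

    # Restart the agent service (its exact name depends on how the agent was
    # configured) so the new capability shows up in the pool.
    Get-Service -Name 'vstsagent*' | Restart-Service

You can then add a matching demand to the build definition so that only those agents pick up the build.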

Maintaining these Build Agents is also important – that is why a Maintenance Job can be scheduled to clean up the _work folder on the agents:

[screenshot: the agent pool Maintenance Job settings]

This can have an impact on your pools – that is why you can specify that only a certain percentage of agents undergoes the job at once. Everything is audited as well, in case you need to track down things going south.

Wednesday, 26 July 2017

So many things I like in the new Release Editor!

Change is hard to swallow – it is human nature and we cannot do anything about it – so like every change, the new Release Editor can be a surprise for some.

[screenshot: the new Release Definition Editor]

To be fair, I think it is a major step forward, for a few reasons. Usability is at the top of the pile of course, as I can get a high-level overview of what my pipeline does without digging into the technical details of the process.

Then, if you look at the Artifacts section, you will see the number of sources you can choose from:

[screenshot: the available artifact source types]

VSTS being a truly interoperable DevOps platform, you are spoilt for choice – I really appreciate having Package Management in such a prominent place, because it enables all sorts of consumption scenarios for NuGet packages as a build output, including a cross-organisation open model.

Then, in the Environments section, the templates provided cover lots of scenarios, and not only cloud technologies. One that is going to be really appreciated in hybrid DevOps situations is the IIS Website and SQL Database Deployment template.

[screenshot: the IIS Website and SQL Database Deployment template]

This template creates a two-phase deployment that serves as a starting point for most on-premises deployments with IIS and SQL Server.

The Web App Deployment supports XML transformations and substitutions by default:

[screenshot: the XML transformation and substitution options of the Web App deployment]

The data side of the story is really interesting IMHO, as it employs DACPACs by default, together with .sql file and inline SQL options:

[screenshot: the database deployment options – DACPAC, SQL script files and inline SQL]

I think it is clear that I really like it!