Thursday, 10 August 2017

Run a SonarQube analysis in VSTS for unsupported .NET Core projects

With some projects you might face this:

error MSB4062: The "IsTestFileByName" task could not be loaded from the assembly <path>\.sonarqube\bin\SonarQube.Integration.Tasks.dll.

It is a known issue I’m afraid, involving (among others) .NET Standard.

There is a fairly straightforward workaround IMHO. Instead of using the Scanner for MSBuild as you normally would, use the CLI scanner that is now provided by VSTS:

[Screenshot: the SonarQube Scanner CLI task in the build definition]

This is enough to let the scanner do its job. This approach can bring a different issue, though: if you use branches to identify projects in SonarQube, or if you set properties dynamically, a fixed, static properties file doesn’t really work.
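For reference, the sonar-project.properties file the CLI scanner reads from the sources root looks something like this – a minimal sketch, all values being placeholders:

# sonar-project.properties – minimal settings for the CLI scanner
sonar.host.url=http://yoursonarserver:9000
sonar.projectKey=MyProject
sonar.projectName=My Project
sonar.projectVersion=1.0
sonar.sources=.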

Still, nothing really blocking. Do you see the PowerShell Script above? :)

[Screenshot: a PowerShell Script task tweaking the properties file]

This is an example of what you can do – a bit rough, as it just appends a line at the end of the file stating the branch to analyse. It could be much cleaner, but still :)

Remember that you can always manipulate files on the agent, and that’s what I do. Add whatever line you want with a script like this, so that you get the same granular control as adding /d:… to the parameters of the regular MSBuild task.
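A minimal sketch of such a script, assuming the properties file sits at the root of the repository and using the predefined Build.SourceBranchName variable – adjust paths and property names to taste:

# Append the branch property to the scanner's properties file.
# BUILD_SOURCESDIRECTORY and BUILD_SOURCEBRANCHNAME are predefined Team Build variables.
$propertiesFile = Join-Path $env:BUILD_SOURCESDIRECTORY "sonar-project.properties"
Add-Content -Path $propertiesFile -Value "sonar.branch=$env:BUILD_SOURCEBRANCHNAME"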

Monday, 31 July 2017

My take on build pool optimisation for larger deployments

If you have a large pool of Build Agents it is easy to run into a terrible headache: plenty of hardware resources to manage, capabilities, pools, queues, etc.

Bearing this in mind, having a single default pool is the last thing you want IMHO:

[Screenshot: a single Default agent pool]

There are exceptions to this of course, like if you work on a single system (loosely defined, like a single product or a single suite of applications) or if you have a massive, horizontal team across the company.

Otherwise pooling all the resources together can be a bit of a nightmare, especially if company policy gets in the way – what if each development group/product team/etc. needs to provide hardware for its own pools?

Breaking this down means you can create pools based on corporate organisation (build/division/team/whatever), on products (one pool per product or service) or on geography.

Performance should be taken into account in any case: you can add custom capabilities marking something special about your machines:

[Screenshot: custom capabilities on a build agent]

Do you need a CUDA-enabled build agent for some SDKs you are using? Add a capability. Is your codebase so legacy, or so massive, that it takes advantage of fast NVMe SSDs? Add a capability. You get the gist of it after a while.

That becomes very nice, because with capabilities you can define your perfect build requirements when you trigger the build, and the system is going to choose an agent that has everything you need – saving you the hassle of finding it manually.
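As a side note, system capabilities are discovered from the agent account’s environment, so you can script them too. A quick sketch – the capability name here is made up; run it on the build machine and restart the agent service so it is picked up:

# A machine-level environment variable surfaces as a system capability
# once the agent service restarts; "NVMe.Storage" is a hypothetical name.
[Environment]::SetEnvironmentVariable("NVMe.Storage", "true", "Machine")

# Restart the agent service so it re-reads the environment;
# adjust the wildcard to match your agent service name.
Get-Service "vstsagent*" | Restart-Service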

Maintaining these Build Agents is also important – that is why a Maintenance Job can be scheduled to clean up the _work folder in the agent:

[Screenshot: agent pool maintenance job settings]

This can have an impact on your pools – that is why you can specify that only a certain percentage of agents undergoes the job at once. Everything is also audited, in case you need to track down things going south.

Wednesday, 26 July 2017

So many things I like in the new Release Editor!

Change is hard to swallow – it is human nature and we cannot do anything about it :) – so like every change, the new Release Editor can be a surprise for some.

[Screenshot: the new Release Editor]

To be fair, I think it is a major step forward, for a few reasons. Usability is at the top of the pile of course, as I can get a high-level overview of what my pipeline does without digging into the technical details of the process.

Then if you look at the Artifacts section, you will see the number of sources you can choose from:

[Screenshot: the available artifact sources]

VSTS being a truly interoperable DevOps platform, you are spoilt for choice – I really appreciate having Package Management in such a prominent place, because it enables all sorts of consumption scenarios for NuGet packages as a build output, including a cross-organisation open model.

Then in the Environments section, the templates provided cover lots of scenarios, and not only with cloud technologies. One that is going to be really appreciated in hybrid DevOps situations is the IIS Website and SQL Database Deployment.

[Screenshot: the IIS Website and SQL Database Deployment template]

This template creates a two-phase deployment that serves as a starting point for most on-premises deployments with IIS and SQL Server.

The Web App Deployment supports XML transformations and substitutions by default:

[Screenshot: the Web App Deployment task’s XML transformation and substitution options]

The data side of the story is really interesting IMHO as it employs DACPACs by default, together with a .sql file and inline SQL options:

[Screenshot: the SQL Database Deploy task options]
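Under the hood, the DACPAC option boils down to SqlPackage.exe, so you can reproduce roughly what the task does from PowerShell if you ever need to debug a deployment locally – a rough sketch, with made-up server and database names, and a path that depends on your SQL Server version:

# Publish a DACPAC to a target database, roughly what the deployment task does.
& "C:\Program Files (x86)\Microsoft SQL Server\130\DAC\bin\SqlPackage.exe" `
    /Action:Publish `
    /SourceFile:"MyDatabase.dacpac" `
    /TargetServerName:"SQLSRV01" `
    /TargetDatabaseName:"MyDatabase"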

I think it is clear I really like it :)

Tuesday, 11 July 2017

Git, TFS and the Credential Manager

A colleague rang up saying he could not clone anything from the command line, but everything was fine in Visual Studio. All he got from PowerShell was an error stating he was not authorised to access the project.

He did not want to set up a PAT or SSH keys, and this behaviour was quite odd to say the least. There was also a VPN in the mix.

At the end of the day the easiest way to get around this was using the Windows Credential Manager:

[Screenshot: Windows Credential Manager]
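If you want to see what is going on from the command line first, these two commands are handy – the credential entry to look for depends on your TFS URL:

# Which credential helper is Git using? With Git for Windows this is typically "manager".
git config --get credential.helper

# List what Windows has stored; look for an entry like git:https://yourserver.
cmdkey /list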

Thursday, 29 June 2017

Some tips on Search Server for TFS 2017

Code Search is a brilliant feature based on Elasticsearch. I wanted to summarise a few tips from my relatively limited experience with Search Server for TFS 2017, which adds Code Search to your instance.

The first thing to remember is that the size of the index can be quite large (up to 30% of the size of a collection in the worst case), so plan for it in advance.

CPU and RAM should not be underestimated either: Elasticsearch can be quite an intensive process.

If you are not an expert on it (like me :)) you want to use the scripts the TFS Product Team provided. They are straightforward and extremely useful. Take a look at the documentation too, it is just as helpful.
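If you want to keep an eye on the index yourself, the underlying Elasticsearch instance exposes its standard REST APIs. A quick sketch, assuming the default port 9200 on the search server:

# Cluster health: green/yellow/red plus basic node information.
Invoke-RestMethod -Uri "http://localhost:9200/_cluster/health"

# List the indices with document counts and size on disk -
# handy to verify the sizing estimate above against reality.
Invoke-RestMethod -Uri "http://localhost:9200/_cat/indices?v"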

The indexing process can be quite tough on the TFS scheduler, so bear this in mind when you install it – otherwise you will see some jobs delayed.

This should not have an impact on users (all the core jobs have higher priority anyway), but it is worth remembering that the indexing job has a similar priority to the reporting jobs, so there is a risk of slowing reporting down.

Thursday, 22 June 2017

A few nuggets from using TFS/VSTS and SonarQube in your builds

The cool thing about SonarQube is that once it is set up it works immediately and it provides a lot of value for your teams.

After a while you will notice there are things that might be refined or improved in how you integrated the two tools; here are some I feel are quite useful.


Bind variables between Team Build and SonarQube properties

I feel this is quite important – instead of manually entering the Key, Project Name, Version, etc. you should be using your build variables. Try to reduce manual input to a minimum.
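For example, if you drive the Scanner for MSBuild from a script, the predefined variables map straight onto its arguments – a sketch, assuming the scanner is on the PATH; the same $(…) variables work directly in the build task fields too:

# Key, name and version come from Team Build variables instead of literals.
# Note: the executable is named SonarQube.Scanner.MSBuild.exe in newer scanner versions.
& MSBuild.SonarQube.Runner.exe begin `
    /k:"$env:SYSTEM_TEAMPROJECT" `
    /n:"$env:BUILD_DEFINITIONNAME" `
    /v:"$env:BUILD_BUILDNUMBER"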

Branch support

SonarQube supports branches with the sonar.branch property. This creates a separate SonarQube project you can use for comparison with other branches.
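Passing it is a one-liner – a sketch using the predefined Build.SourceBranchName variable; in the build task you would add the same /d: pair to the additional settings:

# Each branch value creates (and updates) its own SonarQube project.
& MSBuild.SonarQube.Runner.exe begin `
    /k:"MyProject" `
    /d:sonar.branch="$env:BUILD_SOURCEBRANCHNAME"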

Analyse your solution once

Don’t be lazy and just add one task at the beginning and one at the bottom – you should scan one solution at a time and complete its analysis. This solves the typical Duplicate project GUID warning you get when multiple solutions end up in the same scan.
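In other words, run one complete begin-build-end cycle per solution – a sketch with made-up solution names:

# First solution, analysed end-to-end...
& MSBuild.SonarQube.Runner.exe begin /k:"Solution.One"
& msbuild .\Solution1.sln
& MSBuild.SonarQube.Runner.exe end

# ...then the second one, in its own analysis.
& MSBuild.SonarQube.Runner.exe begin /k:"Solution.Two"
& msbuild .\Solution2.sln
& MSBuild.SonarQube.Runner.exe end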

Exclude unnecessary files

It is so easy to add a sonar.exclusions pattern – do it, and avoid scanning files you are not interested in.
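The patterns are comma-separated globs – a sketch with some typical suspects, obviously made up for illustration:

# Skip build output, generated code and third-party JavaScript.
& MSBuild.SonarQube.Runner.exe begin /k:"MyProject" `
    /d:sonar.exclusions="**/obj/**,**/*.generated.cs,**/wwwroot/lib/**"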

Wednesday, 14 June 2017

How a (home)lab helps a TFS admin

I’ve always been a fan of homelabs. It is common knowledge that I am a huge advocate of virtualisation technologies, and pretty much all my machines feature at least Hyper-V running on them.

If you are not familiar with this, a homelab is a set of machines you run at home to simulate a proper enterprise environment. This does not mean a 42U cabinet full of massive servers – even a decently-sized workstation acting as a VM host will do. The key here is enterprise, so ADDS, DNS, DHCP, the usual suspects indeed, plus your own services.

What I am going to talk about applies to corporate labs as well, although they can be less fun :)

So, if you are a TFS administrator, what are the advantages of a lab?

Testing upgrades!

Yes, test upgrades are one of the uses of a lab. But it is not just about testing the upgrade itself: a lab also helps you understand how long an upgrade will take and which crucial areas you need to be aware of.

In an ideal world you would have an exact copy of the production hardware, so you could make a very accurate forecast. This helps of course, but it can also hide which areas of your deployment are the critical ones.

Let’s take TFS 2017 – one of the most expensive steps is the migration of all the test results data in a collection to a different schema.

This is a very intensive operation, and having a lab where you know every finer detail of your hardware inside-out really helps when it comes to planning the actual upgrade, especially if you have a large deployment.

Not to mention that, in case of failure, you are not ruining anybody’s day and you can work on your own schedule.

Also, you will find that sometimes you need to experiment with settings that require a service interruption. The lab is yours, so again you are not affecting anybody, and you can go straight to the solution when it comes to the production environment.

All of that sounds reasonable and maybe too simplistic, but I have seen too many instances where there was no lab and the only strategy was test-and-revert-if-it-fails, given that Team Foundation Server is “just a DB and IIS” (yeah…).

Definitely not something you want to see or hear, trust me :)