
Thursday, August 20, 2015

Welcome BugGuardian

Some of you may have noticed that I've published only a few posts in the last months. This happened for two main reasons.

The first one is, as some of you already know, that I moved to another country far away from home, with a completely different culture and language, to start a new adventure. This inevitably ate into the already small amount of spare time I have.

The second reason I wasn't so active here, which is also the reason for this post, is that I've been working on a new project that I released today: BugGuardian.

What is BugGuardian
BugGuardian is an Open Source library, written in C# as a Shared Project, that allows you to easily create a Bug work item on your Visual Studio Online account or on your on-premises Team Foundation Server 2015 whenever your application throws an unhandled exception.
It can also be invoked manually in try/catch blocks to keep track of handled exceptions.

It supports applications written with .NET Framework 4.0 and above, and it can be used in every kind of project, including:
  • ASP.NET
  • ASP.NET MVC
  • WPF
  • Windows 8 / 8.1 Apps
  • Windows Phone 8 / 8.1 Apps
  • Universal Apps
  • Universal Windows Platform Apps (Windows 10)
  • and so on...

As I mentioned, it is OSS so you can find the sources on GitHub.

To let you install and use it without having to build the sources on your own, I have published it on NuGet. Just search for BugGuardian in the Package Manager GUI or run the following command in the Package Manager Console:
Install-Package DBTek.BugGuardian

Usage, support and Feedback
Refer to the project documentation to find examples about how to use this library. You can also find some code samples in the TestApps folder.

Anyway, just to give an example and to show how simple it is to use, this is the code you need to handle an exception and open a Bug on your VSO / TFS:

using (var creator = new DBTek.BugGuardian.Creator())
{
    creator.AddBug(myException);
}
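And, just as an illustration, tracking a handled exception from a try/catch block could look like the following (a minimal sketch that reuses the same AddBug call; DoSomeWork is only a hypothetical method that may fail):

try
{
    DoSomeWork();
}
catch (Exception ex)
{
    // Report the handled exception as a Bug work item on VSO / TFS
    using (var creator = new DBTek.BugGuardian.Creator())
    {
        creator.AddBug(ex);
    }
}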

If you encounter any issues using this library, please let me know through the Issues page and I'll fix the problem as soon as possible!

I'm waiting for your feedback :)


I want to thank my friend and fellow MVP Marco Minerva (@marcominerva) for his support, patience and code review.

Monday, July 27, 2015

Deploy a Web App to Azure with the Visual Studio Online Build

In this post we are going to see, from scratch, how to deploy a solution to an Azure Web App using the new Visual Studio Online Build platform.
To do it, we will use only the VSO web portal.

Connect Azure to VSO
First of all we have to let Visual Studio Online know that we have an Azure account we want to use as the deployment endpoint for our solution. Go to the Team Project Settings page (by clicking the button in the upper right corner of the project page), click on the "Services" tab, then on "Add new Service Connection" and then on "Azure".

At this point a pop-up will appear and we have to insert the ID of the Azure subscription which contains (or will contain) the Web App to publish, a name for the connection, and the authentication mode.
We can use either a subscription certificate or the credentials.

When done, our Azure subscription and our VSO account will be connected (for this project).

Create the build definition
Now that we have linked VSO and Azure, let's go to the Build section of the project and create a new deployment build definition.


We want to deploy to Azure, so in the "Deployment" section click on "Azure Website" (as shown in the image).

This creates a new build definition with some steps to configure.

Let's see how to proceed step-by-step.



The first one is the "real" build task. It's completely customizable, but the most important thing is to define the solution file to use as the build source. Select it as shown in the image, near the red arrow.

In the second step we have to set how to execute the post-build Unit Tests. If we don't have unit tests (that is bad...) or we don't want to execute them (also bad :) ), we can safely remove this step using the "X" button that appears when the mouse pointer hovers over it.

The third step is the most important one for our scenario, because it publishes the build result to the Azure Web App.


In its settings there is a drop-down where we can select the target Azure subscription (if the Azure service connection created earlier worked properly). Then, in the "Web App Name" textbox, we have to write the name of the Web App we want to deploy. In theory this is a drop-down as well, but currently it doesn't load the existing Web App names (don't forget that the new Build platform is still in preview).

For this reason, here are some notes about the Web App name:
  • If we use a non-existent Web App name, the deploy will create the Web App in our Azure subscription, in the selected region
  • If we use a Web App name that doesn't exist in our subscription but already exists in the selected region, the deploy process will fail
  • If we use a Web App name that already exists in our subscription but in a region different from the selected one, the deploy process will fail
  • If we use a Web App name that already exists in our subscription and in the selected region, the deploy will use it as the target and update it.


So pay attention to what we write :)

The two remaining steps allow us to index and publish the application symbols (recommended, to have more information in case of unhandled exceptions) and to publish the build artifacts somewhere.

When all the parameters are set, we can go ahead and save our build definition, giving it a name.


We can also change all the default parameters in the upper tabs to customize the build process according to our needs.

Let's deploy!
Now that we have completed all the customizations, it's time to launch the build and verify that the deploy works fine.

To start the build, click on the "Queue build" button in the build definition toolbar or in the contextual menu.



When we click on that button, VSO will queue a build based on our build definition and will execute it either on our servers or in a Microsoft datacenter (if we have chosen to use the hosted agents).
Either way, the build progress will be shown in the real-time web console that appears as soon as the build is queued.

When it starts, it first executes the build and test tasks:


Then, if both are successful, the deploy to Azure:



When all the tasks have finished successfully, we have our web application deployed and running on Azure. Easy!

Sunday, June 7, 2015

Cloud Load Test with Visual Studio Online - part 2: Visual Studio

This article is the second of a series of 3 where I write about Cloud Load Testing with Visual Studio Online.

In the first part I wrote about simple load test execution using the VSO portal.
In this article I will approach a more complex (but also more complete) solution: the integration between Visual Studio and Visual Studio Online.

Reminder
Before starting with the Cloud Load Test execution, it's important to recall a couple of aspects:
  • To use the Cloud Load Test functionality with VSO you need an MSDN Ultimate subscription
  • The application you want to test has to be publicly exposed on the web (because the load test agents are in the cloud and need to reach our app)
  • You can use a maximum of 20,000 free minutes of load testing per month. These are "virtual user minutes": for example, if you execute a 2-minute test with a constant load of 200 virtual users, you're going to consume 400 minutes.
Also, the Load Test feature is available only in Visual Studio Ultimate/Enterprise.

That said, let's see how to setup and execute Load Tests with Visual Studio and Visual Studio Online.


Introduction
If we do a load test with Visual Studio, we have plenty of settings we can configure, so this is the most tunable and customizable way to do this kind of test.
Also, after the execution we will find a lot of information, data and charts about how our application performs under load.

Last but not least, we can do a multi-step (multi-page) test, even with pages behind authentication, and we can record all the steps we want to perform in the app in the same way a user would use it (so no manual configuration is required for this task).


The Start
First of all, we have to create a test project in Visual Studio. The project template to use is "Web Performance and Load Test Project".


When we click on the "OK" button, VS creates the solution for us (as ever...) with a "WebTest1.webtest" in it. This is not a "Cloud Test" project, it's a "Web Performance" one. We need this kind of project to instruct the load test in what to do.

In the upper left of the project window, indeed, there is a small button with a big red dot: this is the button that allows us to record the "steps" through our application.


If we click on this button, it will open a browser session (typically IE) with a small panel on the left. This is the pane that "does the work"; it's generated by the "Web Test Recorder" helper that VS installs on your machine.


As a side note, remember to check that the plugin is enabled in the browser; if it isn't, nothing will show up.


Ok, now that the recording is active we can just visit and use the application we want to test, just like a normal user would. We can also perform some specific steps if we need to test specific pages, sections and so on.
To start the recording, type the URL of the application in the browser address bar and use the app; the web recorder plugin will do everything by itself.

When we have finished, just click on the "Stop" button in the recording pane. If we don't want to record some steps, we can pause the recording and resume it later.


While we navigate our application, we will notice that a lot of "stuff" is added to the recording pane. Those are all the requests our app makes.
If we select one of them and then click on the "comment" button, we can attach a comment to it to be used later.

When we click on the "Stop" button, some magic happens. First, Visual Studio grabs all the requests from the web recorder plugin, processes them and shows them in the webtest window. Then it tries to detect whether we have some "dynamic parameters" in our test.


Dynamic parameters are values that can change between one request and the same request made at a different moment: user tokens, user searches, authentication, etc. If VS detects them, we will be prompted to decide what to do. We can decide to reuse the recorded values for every test run, or to associate the parameters with a "dictionary" from which VS will pick different values at different times.


Another interesting aspect we can customize is the "Request details". In this window we can decide whether to insert a "think time" between requests (a delay) and, most importantly, the target time for the response. If during the test a request has a response time greater than the one we set, VS will consider the test failed.

Now that we have the webtest with all the steps and settings, we can add the Load Test project.


The Load Test project
To add a load test to a web performance test, just right-click on the project name, choose "Add" and then "Load test".


This action will start the "Load test wizard", which guides us through the load test setup.

In the first step, we have to select how to manage think times (the delays between requests): we can use the recorded times (the ones captured during the recording), a normal distribution (an average based on the recorded times), or remove the think times altogether. We can also insert a think time between iterations of the test.

In the second step we have to decide how much load we want to generate, and how to generate it.


We can choose a constant load (the test will start with N users and continue with the same number of users until the end) or a "step load".
In the image, for example, I decided to start the test with 10 users and then add 10 more users every 10 seconds, up to a maximum of 200 users.
Step load can be really interesting because this way we discover how our application scales under incremental load.

In the next step, we have to instruct the load test in what to do. For that reason, we will add the web performance test we created before.


Click to "Add", select the performance test and click to "Ok". The test will be added to the "test mix" window. It's called "Test Mix" because we can add as many test as we want, and we can decide the distribution of each test over the total. In this example we have only one test so the distribution will be 100%.

We can also decide, in the next steps, how to mix network types for the source users (warning: when I wrote this article VSO supported only the "LAN" network type) and the browser engines we want the test agents to use to make the requests.


As you can see in the image, we can select from a list of different browser engines and we can also set the distribution between the user agents.

In the last step, we can customize the load test duration.


We can choose between a fixed time duration or a number of test iterations to execute regardless of time. If we select a fixed time (2 minutes in my example), we can also insert a "warm-up time" (10 seconds in the example). During this time some dummy requests are sent to the application and no result data is collected. This is extremely useful if we have to test an application that could have a "cold start" (think for example of an application hosted on IIS that hasn't been used for a while, so IIS has deactivated its app pool) and it avoids false positives in the test results.

Clicking on "Finish", Visual Studio will generate the load test project and saves it in our solution.



Test execution
Now that we have all the stuff ready, we can start our test execution. But first, a little more customization: we want to execute our load test using the Visual Studio Online Cloud Load Test feature, but no one has told this to our local Load Test project yet.

To do it, open (double click) the "Local.testsettings" file in the solution and change the "Test run location" to Visual Studio Online.


Note that if you have logged in to Visual Studio with an account that owns a VSO subscription, this step is not necessary because VS has already selected it for you.

Ok, let's start the test. Click on the "Start" button in the upper left.


But what happens when you start the test?
Visual Studio connects to Visual Studio Online and queues the test. VSO then creates an on-demand virtual lab in an Azure datacenter and configures the test agents on the VMs with the given settings.


As soon as the lab is ready and configured, the test starts. We configured a warm-up period, so the test starts with it and we can see that no data is collected.


Then the agents start to generate traffic to the recorded pages of our application and the results are posted back to Visual Studio in near real time. This way we can have a preview of the test outcome.


Now we can wait for the test execution to end to get the full result set.


Results
When the test completes, we are given the results and the achieved performance. We have a lot of information available.


We have the average response time, along with the min and the max, together with average test times, average page times and user load.

We can also see from the chart and from the info that the virtual users grew from 10 to 120 (I set 200 as the maximum but the test didn't have time to reach it because it lasted only 2 minutes) and how the tested application scaled.

Also in the "Throughput" page we can see other useful informations.


In this example we don't have errors, but if any had occurred we would find them here with the generic cause, the specific type and the given error message.

But this is only a small subset of all the information we can get. If we click on the "download report" link at the top, VS will download the whole result set from Visual Studio Online and we can enjoy a spreadsheet full of data and charts which can help us better understand how our application performs under a certain load.



Conclusions
This kind of load test is the most complete we can do. As we have seen, it gives us a lot of information, metrics and charts to better understand our application's behaviour.
Moreover, the setup of these tests is simple yet very customizable, and in a short time we can gather a huge amount of useful info.

Stay tuned for the 3rd and last article of this series, which will be about Load Test APIs.

Monday, January 19, 2015

Cloud Load Test with Visual Studio Online - part 1: Web panel

This article is the first of a series of 3 where I'll write about Cloud Load Testing with Visual Studio Online.

In this first part I'm going to talk about simple load test execution, directly using the VSO portal.

Introduction
Before starting with the Cloud Load Test execution, it's important to point out a couple of aspects:

  • To use the Cloud Load Test functionality you need an MSDN Ultimate subscription
  • The application you want to test has to be publicly exposed on the web
  • You can use a maximum of 15,000 minutes of load testing per month. These are "virtual user minutes": for example, if you execute a 2-minute test with a constant load of 200 virtual users, you're going to consume 400 minutes.


That said, let's see how to set up and execute these load tests with the Visual Studio Online web interface.


The start
First of all, you have to log in to your VSO web portal and click on "Load test" in the main dashboard's menu.


After that, the settings page shows up. As you can see, it's a single form: the available settings are a bit limited, but more than sufficient to run a generic load test over our app.


The first parameter to set is the URL of the page to test (it's not possible to execute a multi-step load test from this interface), which can be the home page or, as in the example, whatever page you want. The only constraint is that the page must be accessible from the internet without credentials.

The second value to insert is the test name: it's free text, useful as a reminder.

Below these parameters there are four more, named "Test settings", useful to fine-tune the key aspects of the load to apply.

  • User Load: with this setting you can define the number of concurrent users that will connect to the given URL. Allowed values are 25, 50, 100 and 200
  • Run duration: the duration of the entire test. Possible values go from 1 to 5 minutes
  • Think-time: the delay between consecutive requests. It's useful to avoid triggering anti-hammering and anti-DoS systems. It's possible to set a waiting time of 1 second (default) or 5 seconds
  • Browser distribution: this selection sets the percentage of browser usage you want to simulate. Choosing, for example, "IE 80%, Chrome 20%", the test will be executed with agents that use the Internet Explorer engine and the Chrome engine in the given percentages


Once these settings are in place, just click on the "Test now" button to start the test.


Test execution
But what happens when you start the test?
Visual Studio Online creates an on-demand virtual lab in an Azure datacenter and configures the test agents on the VMs with the given settings:


As soon as the lab is ready and configured, the test starts. The agents start to generate traffic to the given URL and the results are posted back to our browser in near real time. This way we can have a preview of the test outcome.


In the image we can see, for example, that at about 50 seconds there is a "hole" in the requests per second the site was able to handle, associated with a huge increase in response time. Using these data we could start an analysis of our application or our infrastructure to understand why and where this anomaly happened.


Results
When the test completes, we are given the results and the achieved performance.
Next to the chart we have seen before, we have some other information.


First, we have the average response time. A value under 0.1 seconds is considered good, between 0.1 and 1 second is "not so good", and more than 1 second is bad.

After the response time, there is the total number of requests made to the web app.

Finally, there is an indication of any requests that didn't complete successfully or generated an error in the application.

Below these values there is also a table listing any errors.


In this example we don't have errors, but if any had occurred we would find them here with the generic cause, the specific type and the given error message.


Conclusions
This kind of load test isn't the most complete we can do, but it's sufficient if we want to get some info about the performance and the load handled by our application.
Moreover, the setup of these tests is extremely simple and in a short time you can gather a huge amount of useful info.

Instead, if what you need is a deep load test where you can do step-by-step navigation, maybe with credentials... don't miss the next article of this series.

Friday, December 19, 2014

Source Code file editing with Visual Studio Online

Brian Harry, on his blog, has announced the new release of Visual Studio Online. One of the new features is the long-awaited ability to edit code directly from the web portal.
Let's see how it works.
 
 
Edit
On the Team Project's Dashboard, go to the "Code" section and click on a source code file from the tree menu on the left.
 
As ever, the read-only source code box will appear in the right pane. Unlike what happened previously, however, there is now a new "Edit" button.
 
 
Clicking on it, three things happen:
  1. The toolbar changes
  2. In the left tree menu the selected file is marked with a * to let you know it's in editing mode
  3. In the right pane it's now possible to edit the source code

 
 
When the editing is completed, it's possible (and recommended...) to write a comment for the check-in into the textbox and then click on the "Save" button in the toolbar.
 
Click on "Save", the changes will be saved in the source control; it creates a changeset and shows a tooltip with the info about the result of the operation and a link to the changeset view page.
 
 
It's also possible to cancel the editing: just click on the "Discard" button in the toolbar.
Finally, if we want to verify our changes before doing the check-in, we can diff the file in source control against our version, using the button on the far right of the toolbar.
 
 
 
Upload, Create, Rename, Delete
In addition to directly editing the source code, you can also rename, delete and create files or folders, and upload new files.
 
 
To create a new file or upload one, just right-click on a folder and choose "Add file(s)".
A popup will open; here you can select whether to create a new file or to upload existing files.
 
 
To delete a file or folder, right-click on it and click on "Delete" in the contextual menu (it asks for confirmation).
 
Finally, to rename it, click on "Rename" in the same contextual menu.
 
All these operations generate a changeset, so all the changes are tracked and recorded in the source control history.

Tuesday, November 11, 2014

Manage Cloud Load Tests with the REST API

The Cloud-based Load Testing (CLT) REST APIs give you the ability to execute load tests from the cloud in an automated manner, and they can be integrated as part of your Continuous Integration/Deployment pipeline or your Test Automation.

Here’s a list of what you can do with these new APIs:
  • Create/Start/Stop a Load Test Run
  • Get Load Test Results - the set of KPI groups you are used to: Performance, Throughput, Application
  • Get Messages from the service during a run
  • Get Exceptions, if any, from the service during a run
  • Get Counter instances and Samples for a Load Test run
  • Get Application Counters for Apps configured with your load test
  • Get the list of all past load test runs - filtered by requester, date, status, etc.
These APIs work closely with the Azure APIs, because they expect to pick up test files from Azure Blob storage and to drop the results into the same Azure Blob folder.

Please note that:
  • To use the REST APIs, you'll need to enable alternate credentials and use those for authentication
  • Add "vsclt" to your account name to get redirected to the Cloud-based Load Test (CLT) service within visualstudio.com. If your account is https://abc.visualstudio.com/, then when using the APIs, specify it as https://abc.vsclt.visualstudio.com/
 
The base API pattern is:
VERB    https://{account}.vsclt.visualstudio.com/_apis/clt/{resource}[/{options}]
 
Where "resource" can be one of the following and "options" depend on the resource:
 
Test
  • for "Test Runs" (queued test runs): testruns
  • for "Test Drops" (the tests store containers): testdrops
 
Depending on Test Runs
  • for "Counter Instances": testruns/{testrunid}/counterinstances
  • for "Counter Samples": testruns/{testrunid}/countersamples
 
Application Performance Management:
  • for "APM Plugins": apm/plugins
  • for "APM Applications ": apm/applications
  • for "APM Counters ": apm/counters

Create a Test Run
To create a Cloud Load Test run, you have to make a POST call to the API, passing a set of parameters and test settings like in the example below.

Request Url:
POST    https://dbtek.vsclt.visualstudio.com/_apis/clt/testruns

Request Body:
{
  "name": "MyAppToTest.loadtest",
  "description": "nightly loadtest",
  "testSettings": {
    "cleanupCommand": "",
    "hostProcessPlatform": "x86",
    "setupCommand": ""
  },
  "testDrop": {
    "id": "fe35ed32-eaab-4178-ba7e-ad2577ee187f"
  }
}


Response:
{
  "id": "a5e0d4b9-d387-4b3e-9566-163da9c39b67",
  "name": "MyAppToTest.loadtest",
  "createdDate": "2014-11-1T08:51:27.0965365Z",
  "state": "pending",
  "subState": "none",
  "testSettings": {
    "cleanupCommand": "",
    "hostProcessPlatform": "x86",
    "setupCommand": ""
  },
  "testDrop": {
    "id": "fe35ed32-eaab-4178-ba7e-ad2577ee187f",
    "url": "https://dbtek.vsclt.visualstudio.com/_apis/clt/TestDrops/fe35ed32-eaab-4178-ba7e-ad2577ee187f"
  },
  "runSpecificDetails": {
    "duration": 180,
    "virtualUserCount": 250,
    "samplingInterval": 15
  },
  "createdBy": {
    "id": "76cabfe4-0e20-4f5b-862e-9693a68232f1",
    "displayName": "iamtheuser@dbtek.it"
  },
  "url": "https://dbtek.vsclt.visualstudio.com/_apis/clt/testruns/a5e0d4b9-d387-4b3e-9566-163da9c39b67"
}


As you can see, in the response you'll find all the data you need related to the newly created test run.

Start a Test Run
With this data we can, for example, start (queue) the test run by calling the API with the PATCH verb:

Request Url:
PATCH    https://dbtek.vsclt.visualstudio.com/_apis/clt/testruns/a5e0d4b9-d387-4b3e-9566-163da9c39b67

Request Body:
{
  "state": "queued"
}


Response:
Status code: 202


Get Test results
Finally, when the test has finished, we can get the test run results with a GET call.

Request Url:
GET    https://dbtek.vsclt.visualstudio.com/_apis/clt/testruns/a5e0d4b9-d387-4b3e-9566-163da9c39b67/results

Response:
{
  "resultsUrl": "http://127.0.0.1:10000/devstoreaccount1/ets-containerfor-aeee0697-d734-43d7-956e-e662252c265c/2150fbd4-e71c-42fd-8b90-95222a556d87/TestResult/LoadTest.ltrar.zip?sv=2012-02-12&se=2014-06-03T05%3A05%3A39Z&sr=b&si=sas_tenant_policyaeee0697-d734-43d7-956e-e662252c265c&sig=n1Tj%2BsCtiOqQu9UtcXsl%2Bn3ixP%2FVebHCKDJvfD5Tr%2FE%3D",
  "counterGroups": {
    "count": 3,
    "value": [
      {
        "groupName": "Performance",
        "url": "https://dbtek.vsclt.visualstudio.com/_apis/clt/testruns/a5e0d4b9-d387-4b3e-9566-163da9c39b67/CounterInstances?groupNames=Performance"
      },
      {
        "groupName": "Throughput",
        "url": "https://dbtek.vsclt.visualstudio.com/_apis/clt/testruns/a5e0d4b9-d387-4b3e-9566-163da9c39b67/CounterInstances?groupNames=Throughput"
      },
      {
        "groupName": "Application",
        "url": "https://dbtek.vsclt.visualstudio.com/_apis/clt/testruns/a5e0d4b9-d387-4b3e-9566-163da9c39b67/CounterInstances?groupNames=Application"
      }
    ]
  }
}



Get Test errors
You can also retrieve the list of any errors that occurred during the test. Again, use a GET call.

Request Url:
GET    https://dbtek.vsclt.visualstudio.com/_apis/clt/testRuns/47be20f0-ac4a-40cd-acb7-d9f8c44d0404/Errors

Response:
{
  "count": 2,
  "value": [
    {
      "type": "Exception",
      "subType": "UriFormatException",
      "occurrences": 50,
      "testCaseName": "ErrorsAndExceptionsWebTest",
      "scenarioName": "LoadTestingScenarioWarmupDuration",
      "request": "http://www.bing:123.com/----{GET}",
      "stackTrace": "   at System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind)\n   at System.Uri..ctor(String uriString, Boolean dontEscape)\n   at Microsoft.VisualStudio.TestTools.WebStress.WebTestTransaction..ctor(String requestUrl)\n   at Microsoft.VisualStudio.TestTools.WebStress.WebTestInstrumentedTransaction.CreateTransaction()\n   at Microsoft.VisualStudio.TestTools.WebStress.WebTestInstrumentedTransaction.Execute(WebTestCaseContext testCaseContext, AsyncCallback completionCallback, Object callerState)",
      "messageText": "Invalid URI: Invalid port specified.",
      "lastErrorDate": "2014-11-11T09:14:20.363Z"
    },
    {
      "type": "ExtractionRuleError",
      "subType": "ExtractText",
      "occurrences": 50,
      "testCaseName": "ErrorsAndExceptionsWebTest",
      "scenarioName": "LoadTestingScenarioWarmupDuration",
      "request": "http://www.bing.com/----{GET}",
      "stackTrace": "",
      "messageText": "StartsWith text was not found in the response",
      "lastErrorDate": "2014-11-11T09:14:23.663Z"
    }
  ]
}

Thursday, October 30, 2014

Place the Database under Source Control - free eBook

Normally, a source and version control system brings huge benefits in coordinating the efforts of the development team, ensuring a complete audit trail of all changes to the code files, and allowing the team to reproduce any specific revision or build.

Database developers can and should also benefit from source control's audit history and change-tracking capabilities, but there's more to it than simply placing a few database build scripts into a subfolder of the application development team's project folder in source control. Unlike application developers, database developers are not assembling files into a neat little application package, but are instead running scripts that feed off each other and off existing database objects, while negotiating the close interdependency between the code and the data.

To cover what we can call "Database Lifecycle Management", considering it a branch of ALM, RedGate has developed an interesting free eBook called "SQL Server Source Control Basics".

Unfortunately, the book's authors decided to use SVN, but you can take the core concepts and bring them to Team Foundation Server or Visual Studio Online as well.

Topics include:
  • Source control core concepts
  • Choosing a database version control system and structure
  • Branching and merging strategies
  • Automating database versioning and deployment from source control
  • An introduction to database continuous integration


The eBook gives a detailed walkthrough of database source control concepts, with code samples and clear examples.

You can download it, for free, here:

Tuesday, October 28, 2014

Visual Studio Online REST API version 1.0

Today the first official version of the Visual Studio Online REST APIs has been released: version 1.0.

Announced in May as a preview, these APIs have now reached a good level of maturity. This doesn't mean they are completely done with the APIs, but rather that a core set is now complete and, from here forward, it will be versioned for backward compatibility, so the apps that use it won't break every time the APIs are updated.

Together with this announcement, the "API Reference portal" and the "Getting started guide" have been updated.

Important
If you have existing apps using the 1.0 preview APIs, you should start migrating to the released 1.0 APIs as soon as possible. Graduated preview APIs (any API in the 1.0 set) are subject to stop working 12 weeks from today. To learn more about versioning and migrating, see the "versioning and migration page".

That said, remember that, starting from today, the Visual Studio Online REST APIs follow this pattern:

VERB https://{account}.VisualStudio.com/DefaultCollection/_apis[/{area}]/{resource}?api-version=1.0
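For example, a call that lists the Team Projects of an account follows the pattern like this (with "myaccount" as a placeholder):

GET https://myaccount.visualstudio.com/DefaultCollection/_apis/projects?api-version=1.0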

Monday, October 13, 2014

Integrate an application or a service with Visual Studio Online - MSDN Guest Post

On Friday, my second guest post was published on MSDN Italy, in which I talk about how easy it is to integrate our applications or services with Visual Studio Online using the new REST APIs and the Service Hooks.

To read the whole article (available only in Italian) just click on this link:

Monday, October 6, 2014

Easily migrate from Team Foundation Server to Visual Studio Online - MSDN Guest Post

Today my guest post was published on MSDN Italy, in which I explain how to migrate from a Team Foundation Server (the normal on-premises installation scenario) to its corresponding cloud version, Visual Studio Online, and the reasons that could push you to do so.

To read the article (available only in Italian) just click on this link:

Tuesday, September 30, 2014

Create a Work Item using REST API - Visual Studio Online

Sometimes it can be useful to add a new Work Item to our Team Project programmatically, maybe in response to a certain event, and so on.

The new "WIT REST API v1.0 (preview 2)" (released on September 4) exposed by Visual Studio Online allow us to do it.

When you create a work item, you can provide values for any of the work item fields.

To do so, you have to send an HTTP PATCH request to:

https://your_account.visualstudio.com/defaultcollection/team_project_name/_apis/wit/workitems/$work_item_type_name?api-version=1.0-preview.2

The request body has to be set using this format:

[
    {
        "op": "add",
        "path": { string }
        "value": { string or int, depending on the field }
    },
    {
        "op": "add",
        "path": "/relations/-",
        "value":
        {
            "rel": { string },
            "url": { string },
            "attributes":
            {
                { name/value pairs }
            }
        }
    }
]

An example request could be:

https://myAccount.visualstudio.com/defaultcollection/myProject/_apis/wit/workitems/$task?api-version=1.0-preview.2

[
  {
    "op": "add",
    "path": "/fields/System.Title",
    "value": "Change blog title height"
  }
]

This request produces a response like the following, in which you'll find all the information related to the newly created Work Item:

{
  "id": 88,
  "rev": 1,
  "fields": {
    "System.AreaPath": "myProject",
    "System.TeamProject": "myProject",
    "System.IterationPath": "myProject",
    "System.WorkItemType": "Task",
    "System.State": "To Do",
    "System.Reason": "New task",
    "System.CreatedDate": "2014-09-30T10:25:12.943Z",
    "System.CreatedBy": "Davide Benvegnu",
    "System.ChangedDate": "2014-09-30T10:25:12.943Z",
    "System.ChangedBy": "Davide Benvegnu,
    "System.Title": "Change blog title height"
  },
  "_links": {
    "self": {
      "href": "https://myAccount.visualstudio.com/DefaultCollection/_apis/wit/workItems/88"
    },
    "workItemUpdates": {
      "href": "https://myAccount.visualstudio.com/DefaultCollection/_apis/wit/workItems/88/updates"
    },
    "workItemRevisions": {
      "href": "https://myAccount.visualstudio.com/DefaultCollection/_apis/wit/workItems/88/revisions"
    },
    "workItemHistory": {
      "href": "https://myAccount.visualstudio.com/DefaultCollection/_apis/wit/workItems/88/history"
    },
    "html": {
      "href": "https://myAccount.visualstudio.com/web/wi.aspx?pcguid=0fa87894-6f48-4458-957d-3438b6bb9343&id=88"
    },
    "workItemType": {
      "href": "https://myAccount.visualstudio.com/DefaultCollection/c4637008-2068-4b3f-828d-a214e2ba5210/_apis/wit/workItemTypes/Task"
    },
    "fields": {
      "href": "https://myAccount.visualstudio.com/DefaultCollection/_apis/wit/fields"
    }
  },
  "url": "https://myAccount.visualstudio.com/DefaultCollection/_apis/wit/workItems/88"
}
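Just as an illustration, the same request could be sent from C# like this (a minimal sketch, assuming alternate credentials are enabled on the account; the account name, the project and the credentials are placeholders):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class CreateWorkItemSample
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Alternate credentials, sent as HTTP Basic authentication
            var credentials = Convert.ToBase64String(
                Encoding.ASCII.GetBytes("myUser:myPassword"));
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", credentials);

            // JSON Patch document with the fields to set on the new Task
            var body = @"[ { ""op"": ""add"", ""path"": ""/fields/System.Title"", ""value"": ""Change blog title height"" } ]";

            var request = new HttpRequestMessage(
                new HttpMethod("PATCH"),
                "https://myAccount.visualstudio.com/defaultcollection/myProject/_apis/wit/workitems/$task?api-version=1.0-preview.2")
            {
                // The WIT API expects the application/json-patch+json content type
                Content = new StringContent(body, Encoding.UTF8, "application/json-patch+json")
            };

            var response = await client.SendAsync(request);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}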

Tuesday, August 19, 2014

Azure Web Sites Deployment: how does it work?

Azure Web Sites supports continuous deployment from source code control and repository tools like BitBucket, CodePlex, Dropbox, Git, GitHub, Mercurial, and TFS/VSO. You can use these tools to maintain the content and code for your web site, and then quickly and easily push changes to your site when you want.

Supported Deployment types
Pushing local files to Azure by using Local Git allows you to manually push updates from a local project to your Azure Web Site, while deploying from BitBucket, CodePlex, Dropbox, GitHub, or Mercurial results in a continuous deployment process where Azure will pull in the most recent updates from your project.

While both methods result in your project being deployed to an Azure Web Site, continuous deployment is useful when you have multiple people working on a project and want to ensure that the latest version is always published regardless of who made the most recent update. Continuous deployment is also useful if you are using one of the above mentioned tools as the central repository for your application.

On the net there are plenty of articles explaining how to deploy an Azure Web Site (e.g. http://azure.microsoft.com/en-us/documentation/articles/web-sites-deploy/) or how to implement Continuous Deployment strategies (e.g. http://azure.microsoft.com/en-us/documentation/articles/web-sites-publish-source-control/).

But how does it work behind the scenes?

Well, the answer is "Kudu".

Kudu is the engine behind deployments in Azure Web Sites, but it can also run outside of Azure.
It has a somewhat unusual architecture, in the sense that it is a single-tenant rather than a multi-tenant service. This means that each Azure Web Site has its own instance of the Kudu service, completely distinct from the Kudu service instances used for other Azure sites.

It stays active in the background and "watches" for check-ins, commits, new files, completed builds and so on. When it detects something, KuduSync starts and does the "dirty work".

It's a great tool, because:
  • it's an open source project available on GitHub (https://github.com/projectkudu/kudu)
  • it's automatically installed on all Windows Azure Web Sites
  • it can use a custom deployment script (see the sketch below)
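For the custom deployment script, for instance, it's enough to drop a ".deployment" file in the root of the repository pointing to the script to run; a minimal sketch (where deploy.cmd is a placeholder for your own script):

[config]
command = deploy.cmd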

But the most important thing (imho) is this:
each deployment is created in your website's folder structure and the new deployment is copied to your site's root, leaving old deployments intact.
This means you can "roll back" to any deployment you've done in the past. This, for me, is a killer feature!

It's also possible to access the Kudu web dashboard, using a URL like "https://your_website_name.scm.azurewebsites.net/" and your deployment credentials or your Azure service admin credentials.

Kudu Dashboard
In the Kudu dashboard you can find a lot of useful information about your website's environment, together with a set of tools to manage your website and, last but not least, a complete set of REST APIs. There is also the possibility to manage web site extensions.

There is also a great video where David Ebbo and Scott Guthrie explain how Kudu works: http://channel9.msdn.com/Shows/Azure-Friday/What-is-Kudu-Azure-Web-Sites-Deployment-with-David-Ebbo

Tuesday, July 22, 2014

Visual Studio Online now supports Azure Active Directory

Yesterday Microsoft's VSALM team began deployment of the Sprint 68 work.

The biggest thing in the announcement is the next step in the rollout of Azure Active Directory (AAD) support in VS Online. They started this journey back in May with the very first flicker of AAD support at the Build conference. Then they added more support at TechEd, but they've stayed pretty quiet about it because, until this week, there was no way to convert an existing account to AAD. With this deployment they've enabled it. Officially it's in preview and you have to ask to get access to it, but they're accepting all requests, so it's nothing more than a speed bump to keep too big a rush from happening all at once. With this last set of changes, you can:


  • Associate your OrgID (AAD/AD credentials) with your MSDN subscription, if you have one, and use that to grant your VSO license
  • Create a new account bound to an AAD tenant
  • Bind an existing account to an AAD tenant
  • Unbind an account from an AAD tenant
  • Log in with either a Microsoft Account or an OrgID (AAD-only or synchronized from your on-premises Active Directory), giving you single sign-on with your corporate credentials, Office 365, etc.
To see all the details about AAD support and the other things included in the update, read the original post on MSDN by Brian Harry.

Thursday, July 10, 2014

Visual Studio Online licenses will change (in better)

Through the fall and spring, Visual Studio Online transitioned from Preview to General Availability.  That process included changes to branding, the SLA, the announcement of pricing, the end of the early adopter program and more.

Now, the VSO/TFS team has decided to roll out these two major licensing changes in the next couple of months:

  • any VS Online account will be able to have an unlimited number of "Stakeholder" users, with access to a subset of functionality, at no charge.
  • the Visual Studio Online Advanced plan will include access to all of the Test hub functionality


The team is working hard to implement these licensing changes now, and the expectation is that they've got about two sprints of work to get it all finished. That would put the effective date somewhere in the neighborhood of mid-August.

In general, the team's goal is to keep the licensing for VS Online and Team Foundation Server as "parallel" as they can, to limit how confusing it could be. As a result, they will be evolving the current "Work Item Web Access" TFS CAL exemption (currently known as "Limited" users in TFS) to match the "Stakeholder" capabilities. That will result in significantly more functionality being available to TFS users without CALs. The hope is to get that change made for Team Foundation Server 2013 Update 4.


Here you can find the original announcement by Brian Harry:
http://blogs.msdn.com/b/bharry/archive/2014/07/09/upcoming-vs-online-licensing-changes.aspx

Wednesday, July 2, 2014

Delete work items from TFS or VSO

Have you ever created a bunch of work items that you later decided you had to delete? Well, I have… especially as a user of the TFS Integration Platform. And when things go wrong there, they can really go wrong.

Now, while you can put stuff into the "Removed" state, it is still hanging around cluttering up the place. The only way to remove items out of the box is to take the ID of each work item you want to delete and execute this command for each one:

witadmin destroywi /collection:CollectionURL /id:id [/noprompt]

Well that’s just great unless you have a couple of thousand things to delete. 
So I've written a little bit of code to do it for me; in this example I created a small console program but you can use the same code in any kind of project.

using System;
using System.Linq;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

[...]

// Connect to the Team Project Collection and get the work item store
TfsTeamProjectCollection tpc = new TfsTeamProjectCollection(new Uri("http://your_tfs_url:8080/tfs/CollectionName"));
WorkItemStore store = tpc.GetService<WorkItemStore>();

// Select all the work items parked under the _TOBEDELETED area path
string query = @"SELECT [System.Id] FROM WorkItems WHERE [System.TeamProject] = 'projectName'  AND  [System.AreaPath] UNDER 'projectName\_TOBEDELETED' ORDER BY [System.Id]";

WorkItemCollection wis = store.Query(query);
// WorkItemCollection is a non-generic collection, so cast before extracting the IDs
var wisIds = wis.Cast<WorkItem>().Select(wi => wi.Id);

Console.WriteLine(string.Format("DESTROY {0} work items (they really can't be resurrected): y/n?", wis.Count));
ConsoleKeyInfo cki = Console.ReadKey();
Console.WriteLine();

if (cki.Key == ConsoleKey.Y)
{
    try
    {
        Console.WriteLine("Deleting....");
        // DestroyWorkItems permanently deletes the items and returns any operation errors
        var errors = store.DestroyWorkItems(wisIds.ToArray());
        Console.WriteLine("DONE");
        foreach (var error in errors)
        {
            Console.WriteLine(error.ToString());
        }
    }
    catch (Exception ex)
    {
        [...]
    }

}

Console.WriteLine("Finished");


The first thing that you may notice is that I search for items in a specific area path. I use _TOBEDELETED as it is obvious what is going to happen to things that end up there. Although I did work with a user who complained that all his files had gone missing. When asked where he kept them he pointed at the recycle bin on his desktop!

Anyhow, just in case you made a mistake it will let you know how many work items that you are deleting. It’s a simple check but I have had it say "100000" work items... AS you can imagine I very carefully terminated the program (never trust the 'no' option).