Our customer experiences bringing together aspects of project quality management, the Agile process and continuous integration in a real customer environment using TFS

In our previous two posts we covered the basics of the Agile process in TFS by reviewing the artifacts, how they can be used to manage quality and increase speed of delivery, and how they map into PMBoK quality management.  This post reviews some of our experiences assisting customers with implementing their project quality management process in TFS.  It also covers our approach, including an analysis of what the customer currently does to manage quality for their software products, and the planning and execution of a proof of concept (POC), or testing in TFS, before any processes or systems integrations are moved into production.  The post will focus on moving items from Hewlett Packard's Quality Center to TFS, plus the implementation of continuous integration in TFS.

Most companies we have worked with have some quality management processes in place, often documented and built on other processes or the PMBoK.  A walkthrough of the customer's process and the supporting systems is often a good place to start.  Once we have collected all of the information on the quality management processes, the supporting systems and the issues the customer has experienced with their processes, their systems, or both, we analyze the data and compare it against the out-of-the-box Agile process template in TFS.  The output of the analysis is often a spreadsheet mapping data from the company's quality management suite of choice to TFS, plus a Visio diagram of system functions and workflows, which is reviewed with the customer.  Ideas on how processes and data can be moved to TFS are often presented at the same time as the mapping review, and decisions are made to help plan the migration.

We have worked with Hewlett Packard's Quality Center (QC) to move items into TFS permanently and to decommission QC.  Some of the challenges driving the migration to TFS for customers who use both QC and TFS are:

  • QA and development teams are disconnected and don’t collaborate because they are using different tools for their work.
  • No direct linkages exist between issues discovered by the testing team and the source code owned by developers.  This causes traceability issues and can lead to the development team not being able to reproduce an issue discovered by the testing team.
  • No aggregate reporting on work is possible, since development tasks are managed in TFS while QA work is managed in QC.

The customer work we did to move QC items into TFS started with an overview of how QC was used by the testing team.  During the review, a detailed field-level mapping of data from QC to TFS was performed and documented in a spreadsheet.  This mapping drove the modification of the Agile process template's bug work item in TFS.  Figures 1 and 2 below show the difference between the out-of-the-box bug work item and the customized work item.

Figure 1: Out of the box Bug Work Item

Figure 2: Customized Bug Work Item

The workflow in QC was also documented and moved into TFS.  This required some custom development to allow for Bugs to be automatically assigned to developers or other team members. In addition to the custom development work, we were able to make use of native TFS functionality to modify the workflow that manages a Work Item’s lifecycle (in this case a Bug).
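As an illustration, the auto-assignment piece of that custom development can be sketched as a rule table keyed on the bug's area path.  This is a hypothetical simplification: the area paths, addresses and fallback owner are invented for the example, not the customer's actual rules.

```python
# Hypothetical sketch of the custom workflow logic: route a newly created
# Bug work item to an owner based on its Area Path (illustrative values).
ASSIGNMENT_RULES = {
    "Product\\UI": "dev.lead.ui@example.com",
    "Product\\Services": "dev.lead.services@example.com",
}
DEFAULT_OWNER = "triage@example.com"

def assign_owner(area_path: str) -> str:
    """Return the owner for a bug, falling back to a triage queue."""
    # Longest-prefix match so deeper area paths win over their parents.
    matches = [p for p in ASSIGNMENT_RULES if area_path.startswith(p)]
    if not matches:
        return DEFAULT_OWNER
    return ASSIGNMENT_RULES[max(matches, key=len)]
```

In the real implementation this kind of rule ran inside a TFS event handler, so a bug moving into the workflow's "assigned" state picked up its owner automatically.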

The customized bug work item and workflow were reviewed with the QA team, and modifications were made over two to three iterations.  The review was conducted through the QA team's hands-on use of the customized bug work item in TFS – team members created new bugs, added data and moved the bugs through the workflow in order to understand how their process had been translated into TFS, and provided feedback for changes.

In parallel with the customization of the bug work item, the implementation of the workflow and the testing/iterating with the QA team, an investigation of tools and manual processes was conducted in preparation for the move of existing bugs from QC to TFS.  Several options, including Scrat from Sela Group, were considered but ultimately deemed unworkable.  The high volume of bugs in QC built up over several years of use, compounded by the loose organization of functional areas in QC and the number of customizations made to TFS, led us to conclude that no commercially available tool was up to the migration task.  The customer needed a proven method to move bugs from QC to TFS in a short period of time, because no bugs could be logged or modified during the migration, effectively stopping QA work.

A reliable manual process was developed to move the bugs from QC to TFS – a group of project managers exported all QC defects destined for TFS to several Excel worksheets; they then ensured that the data in all mapped fields passed the TFS validation tests.  This process was tested through several dry runs, which gave the team an indication of the bug creation/edit blackout period required for the migration.
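The spirit of those per-row validation checks can be sketched as follows.  This is a hedged illustration only: the field names, mapping and allowed values are invented for the example and are not the customer's actual QC-to-TFS mapping.

```python
# Hypothetical sketch of the pre-import validation performed on each
# exported QC defect row before it was pushed into TFS.
FIELD_MAP = {                 # QC field     -> TFS Bug field (illustrative)
    "Summary": "Title",
    "Detected-By": "Created By",
    "Severity": "Severity",
    "Status": "State",
}
ALLOWED = {                   # allowed values per TFS field (subset)
    "Severity": {"1 - Critical", "2 - High", "3 - Medium", "4 - Low"},
    "State": {"Active", "Resolved", "Closed"},
}

def validate_row(qc_row: dict) -> list:
    """Return a list of validation errors for one exported defect row."""
    errors = []
    for qc_field, tfs_field in FIELD_MAP.items():
        value = qc_row.get(qc_field, "").strip()
        if not value:
            errors.append(f"missing {qc_field} (maps to {tfs_field})")
        elif tfs_field in ALLOWED and value not in ALLOWED[tfs_field]:
            errors.append(f"{value!r} not valid for {tfs_field}")
    return errors
```

Running every exported worksheet through checks like these before the import is what made the one-day blackout window achievable.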

After the QA team's acceptance of the bug work item and the workflow implemented in TFS, and the finalization of the manual process to move bugs from QC to TFS, the migration was executed.  The team using QC was told to halt their work on defects for one day, during which over 1,000 defects were exported to TFS.  The QA team began using TFS the following day.  The modifications to the bug template in TFS, shown earlier in Figure 2, helped ease the transition by emulating the familiar QC environment.

Today our customer uses the customized TFS bug work item and benefits from the QA and development teams using the same tools in TFS.  Linking bugs to source code allows the team to trace issues and report on testing and development activities in aggregate.

The implementation of continuous integration for another customer began with their use of JetBrains TeamCity.  The customer's objective was to add testing early in the product lifecycle, at the point of developer check-in.  By testing early, they could reduce the downstream issues that were taking up an inordinate amount of time and resources to resolve.  The customer demonstrated the build definitions in TFS that were to be used with TeamCity, and we determined that a more seamless environment, including Lab Management, could meet the customer's objectives for continuous integration.

We implemented a proof of concept with Lab Management, using golden templates to deploy test machines and environments.  The POC allowed the customer to run builds, deploy built code to test VMs and run tests prior to checking the code into TFS.  The golden template underwent several revisions as we refined the required configurations of the test machines.  The POC build included scripts to set up the required databases from scratch in the test environments.  For users of Visual Studio, the POC was a success.  However, the database team was not using Visual Studio but rather a Red Gate source control plug-in for SQL Server Management Studio.  The gated check-in policy, requiring developers to build, deploy and test code with Lab Management before final check-in to TFS, did not work for the database team using Red Gate.  The logic for the builds, deployments and tests was therefore moved into a new build definition triggered upon check-in – rather than front-loading the tests prior to formal check-in, tests are run immediately after check-in by either the development team using Visual Studio or the database team using Red Gate.  This approach continues to meet the customer's objective of testing early in the development process and identifying issues immediately after check-in.

Through our past three posts we have outlined the use of the Agile process for quality management, mapped PMBoK to Agile in TFS and provided a sample of our own customer experiences implementing quality processes and continuous integration in TFS.  As a result of having all artifacts in one place and linked together in TFS, team members spend less time looking for information, freeing up time to do more work.  Discovering issues early in the development lifecycle through continuous integration drastically reduces the time that would otherwise be spent troubleshooting problems found later in the project lifecycle.  The use of the Agile process in TFS to facilitate the management of quality, coupled with continuous integration, has helped our customers speed up development and ensure the high quality of software deliverables.

Posted in Lab Management, TFS 2013

Mapping the Project Management Body of Knowledge (PMBoK) quality management to the Agile process within TFS

In our last blog entry we described how to use the various artifacts within TFS to facilitate continuous integration, help manage quality with Agile work items and speed up work.  As a result of having all artifacts in one place and linked together in TFS, team members spend less time looking for information, freeing up time to do more work.  Discovering issues early in the development lifecycle through continuous integration drastically reduces the time that would otherwise be spent troubleshooting problems found later in the project lifecycle.  In this blog entry we map PMBoK quality areas to the quality components and process areas for Agile TFS.  As Agile software development spreads to larger organizations employing methods that reflect or are based on the PMBoK, this mapping can assist with the expansion or modification of the out-of-the-box Agile process within TFS.  The primary focus of the mapping will be on product quality (all software/deliverable-related testing and fix activities).  A secondary focus will be on project quality mapping through the data and reports that are available in TFS.

Table 1 below maps the PMBoK quality process descriptions to the Agile quality processes in TFS.

PMBoK | MSF Agile in TFS
Plan Quality Management – The process of identifying quality requirements and/or standards for the project and its deliverables and documenting how the project will demonstrate compliance with quality requirements or standards. | Planning software (deliverable) quality management with Agile in TFS maps into the PMBoK planning process – determining what to test begins with an analysis of requirements in user stories: deciding what to focus on for testing (e.g. areas that need the most coverage), how to test (e.g. a mix of automated and manual testing) and when to test (e.g. will the project use continuous integration and test as early in the SDLC as possible).  The Test Approach document that is part of the Agile process in TFS provides guidance on how the project can demonstrate that quality requirements are met by the software being developed.  Agile in TFS provides no guidance on planning project quality management; however, the process template is flexible and can accommodate plans for the management of overall project quality.
Perform Quality Assurance – The process of auditing the quality requirements and the results from quality control measurements to ensure that appropriate quality standards and operational definitions are used. | Auditing and review to ensure that what is stated in the quality management plan is actually being done is not explicit in the TFS Agile process but can nevertheless be practiced by reviewing quality metrics available through TFS reports and comparing them to the Test Approach document that is part of the Agile process in TFS.
Control Quality – The process of monitoring and recording results of executing the quality activities to assess performance and recommend necessary changes. | Controlling quality in the TFS Agile process is conducted through the execution of test plans containing test suites and test cases, and through bug generation and repair.  Controlling quality in TFS Agile can also include looking at patterns in bug reports to improve the product, e.g. code churn, code coverage, components that have a high number of bugs, or patterns where the number of bugs discovered and re-opened is falling in relation to the number being closed.


Plan Quality Management

PMBoK describes the inputs, tools and techniques, and outputs for developing a quality management plan.  Tables 3, 4 and 5 below map the PMBoK inputs, tools and techniques, and outputs for quality management planning to TFS Agile.

Table 3. Plan Quality Management Inputs mapping

PMBoK | MSF Agile in TFS
Project Management Plan (Scope, Schedule, Cost) maps to:

  • Vision document
  • Iteration length (set with the iteration backlog workbook in TFS 2010 and through the web interface for TFS 2012 and TFS 2013).  The entire project schedule can be set by planning out all iterations envisioned.
  • Hours estimated in user stories and tasks, multiplied by rate (cost)
Stakeholder register | Stakeholder matrix document
Risk register | There is no native risk register artifact in the TFS Agile template; however, a SharePoint team site list (included as part of a TFS Agile project) can be used to manage a risk register for the project.
Requirements | User stories


Table 4. Plan Quality Management Tools and Techniques mapping

PMBoK | MSF Agile in TFS
Tools and techniques are meant to facilitate the creation of the quality management plan. | There is nothing in the TFS Agile process providing guidance on the creation of a quality management plan.  How to create the plan is left to the project team, allowing them to leverage the PMBoK for guidance.


Table 5. Plan Quality Management Outputs mapping

PMBoK | MSF Agile in TFS
Quality management plan | The Test Approach document contains items covered by the quality management plan in PMBoK.  The Test Approach document focuses on deliverable quality and contains:

  • Reference to user stories (requirements) to be tested
  • Scope including:
    • Functional tests
    • Integration tests
    • Security tests
    • Load and performance tests
    • Stress tests
  • Release criteria and the thresholds (which are part of the quality metrics)
    • Code coverage minimum
    • Kind of bugs and severity
    • Performance levels that need to be met
    • Level of test automation
  • Also defined in the test approach are:
    • Roles and responsibilities
    • How bugs will be tracked (in TFS)
    • Definitions of severity and priority levels for bugs
    • When triage meetings will be held
    • Test tools
    • Schedules and milestones (at a high level – but it would be advisable to leverage the iterations in TFS for schedule).

Other artifacts – configurations, test suites and test cases in TFS and Lab Management – are combined into test plans.  See Figure 1 below for an illustration of a test plan and its component artifacts.

Process improvement plan | There is no template in TFS for a process improvement plan.
Quality metrics (defect frequency, failure rate, availability, reliability and test coverage) | The Test Approach document contains some guidance on quality metrics:

  • Release criteria and the thresholds
    • Code coverage minimum
    • Kind of bugs and severity
    • Performance levels that need to be met
    • Level of test automation
Quality checklists | There is no template in TFS for quality checklists.


Figure 1 – from MSDN – shows the test plan in TFS and its configuration, test suite and test case components.


Planning guidance from Microsoft on the Agile process states “test points are created in your test plan based on your test cases and test configurations for each test suite. A test point is a pairing of a test case with a test configuration.”

See http://msdn.microsoft.com/en-us/library/dd286682.aspx for additional information and planning guidance.


Perform Quality Assurance

Quality assurance in PMBoK is an auditing or review process that is applied throughout the project life cycle.  Auditing and review to ensure that what is stated in the quality management plan is being performed is not explicit in the TFS Agile process but can nevertheless be practiced.  Project managers or team leads can elect to review the Test Approach document on a weekly basis.  They can compare TFS reports on code coverage, bug severity, performance levels and the level of test automation with the quality metrics in the Test Approach document to determine whether quality minimums are being met.  Figures 2 and 3 below illustrate examples of reports available in TFS once testing and bug fixing begin in the project lifecycle.

Figure 2: Bug Trends


Figure 3: Burndown Chart and Burn Rate


Control Quality

Once the planning outputs are complete, the team is ready to control quality.  In PMBoK, control quality is defined as identifying the causes of poor process or product quality, taking action to eliminate them, and validating that project deliverables meet requirements and will receive final acceptance by stakeholders.  Control quality in PMBoK maps to the execution of test plans and their component configurations and test cases, which are grouped into test suites.  It also encompasses the execution of test cases on their own when continuous integration is incorporated into the Agile project process.  As part of continuous integration, tests are run every time changes are made to the code to ensure that nothing is broken and quality remains high.

PMBoK describes the inputs, tools and techniques, and outputs for controlling quality.  Tables 6, 7 and 8 below map them to TFS Agile.

Table 6. Quality Control Inputs mapping

PMBoK | MSF Agile in Team Foundation Server
Project management plan | There are a few documents that come together to help form an integrated project management plan:

  • Vision
  • Iteration length (schedule)
  • Cost
  • Project Structure
  • Test Approach
  • Test Plans (comprised of configurations, test suites and test cases)
Quality metrics | The Test Approach document contains some guidance on quality metrics:

  • Release criteria and the thresholds
  • Code coverage minimum
  • Kind of bugs and severity
  • Performance levels that need to be met
  • Level of test automation
Quality checklists | TFS Agile does not contain any quality checklist templates.
Work performance data | TFS Agile reports such as code coverage, bug severity, performance levels and the level of test automation can be used to collect work performance data.
Approved change requests | Changes are not handled through formal requests that are either approved or rejected.  The product backlog for each iteration acts as a record and can be monitored for changes.
Deliverables | The final deliverable after each iteration using Agile in TFS is a usable application.
Project documents | The project documents that can be used for the control of quality are the persona definition (describing the actions of users of the application being built) and the Test Approach.  These documents are used in combination with TFS artifacts: user stories, and test plans with their component test suites and test cases.  The personas and user stories are the baseline against which the application is measured for completeness and quality.  The test plans and their component artifacts are used to test, and their results are the quality measurements recorded in TFS.
Organizational process assets – issue and defect reporting procedures and communication | The Agile process template makes use of the bug artifact to record and track defects.  Reports such as bug status, bug trends and reactivations can be used to communicate progress on bug discovery and fixes.  Issues can be recorded and communicated through a spreadsheet that is provided with the TFS Agile template.


Table 7. Quality Control Techniques mapping

PMBoK | MSF Agile in Team Foundation Server
Seven basic quality tools | There is no equivalent in the TFS Agile process for the seven basic quality tools.  However, team members can elect to use the tools outside of TFS.
Statistical sampling | There is no equivalent in the TFS Agile process for statistical sampling.  However, the team can select user stories and their corresponding test cases for additional inspection at their discretion.  If a user story and the corresponding test cases are part of the iteration the team is working on, it is assumed that they will go through the full inspection process described below.
Inspection | Testing through the execution of test plans and their component test suites and test cases is the equivalent of inspection.  User acceptance testing of the application (deliverable) is the final inspection step.
Approved change request review | The product backlog for each iteration acts as a record and can be monitored for changes.  There is no formal approval of changes.  However, a suggested practice is to review the product backlog with stakeholders at least weekly and highlight any changes to the order of the backlog (priorities) or items that have moved from the current iteration to future iterations.


Table 8. Quality Control Outputs mapping

PMBoK | MSF Agile in TFS
Quality control measurements | Preparing for and conducting testing generates reports within TFS that can be used as quality control measurements.  Reports include:

  • Test case readiness
  • Test plan progress
  • Bug status
  • Bug trends
  • Reactivations
Validated changes | Changes are validated with tests – if application functionality is deferred from the current iteration to future iterations, the corresponding user stories and test cases are shifted out to the next iteration.  If new functionality is added, new user stories and corresponding test cases are created and added to the iteration backlog.  Passed tests serve as a record of validated changes.
Verified deliverables | A usable application that has passed testing and UAT.
Work performance information | Reports in TFS provide useful work performance information from both a quality and a general perspective.  All of the TFS reports employ the build, user story, task, test case and bug artifacts.  In addition to the reports listed for quality control measurements above, work performance information can be found in reports for:

  • Build quality indicators
  • Build success over time
  • Build summary
  • Burn rate
  • Burndown
  • Stories progress
  • Remaining work
  • Unplanned work
  • Status on all iterations
  • Stories overview
Change requests | Changes are not handled through formal requests.  The product backlog for each iteration acts as a record and can be monitored for changes.  Individual user stories can be examined for change by reviewing the history tab to trace any modifications made, e.g. updates to the description, or when a user story was moved to a future iteration or made obsolete.
Project management plan updates | Project management plan updates
Project documents updates | Project documents updates
Organizational process assets – completed checklists and lessons learned documents | The Iteration Retrospective document contains a section on lessons learned describing what went well and what could be improved in the next iteration, plus concrete actions for improvement.


A team using TFS and the Agile process has a good basis for quality management of a software project.  The absence of guidance on project quality management (as opposed to deliverable/software quality management), quality assurance and formal change request management in the Agile process is the biggest difference from PMBoK in the area of quality management.  These guidance gaps can be covered by team members who are familiar with PMBoK through the creation of a broader project quality management plan, leveraging the artifacts and reports that are available in the TFS Agile process.

The artifacts in TFS, including user stories, test plans, test suites, test cases and bugs, provide significant benefits to quality management when compared to processes driven by Word documents.  TFS contains all of the artifacts, linked together, making it easier for team members to find what they need; this frees up more time for work and generally speeds up project progress.  Empowering project team members to use these artifacts in TFS creates powerful records, facilitates quality auditing and makes overall management of the project more transparent.  In addition, all of these artifacts feed into reports that any team member can use to measure project progress and look for issues that need to be resolved.

Posted in Lab Management, TFS 2013

Continuous Integration, Quality Management and Speed

Over the past few years the industry has experienced an increase in the pace of software development. More or improved features are available in subsequent generations of software products more rapidly than in the past. At the same time, businesses and individuals have more software to choose from to get the job done or for entertainment and information purposes. This poses some special challenges for teams working in software development. They must keep up with a fast pace of product development and because of the increased competitive landscape for software, the quality of their products must remain high.

The use of Agile methods, practiced with testing and bug fixing early in the development cycle (supported by continuous integration), has contributed to both the increased speed of software development and the improved quality of the final product. The use of tools to facilitate the automation of testing and the collection/reporting of test results has also helped to increase speed and quality. Finally, team project management, particularly when focused on quality practices, has a role to play in the speed and quality of software development. This series of blog entries will explore the use of the Agile method within a particular tool – Team Foundation Server (TFS) – and focus on project quality management and continuous integration as key functions to increase the speed of software development and the production of higher-quality products.

This first entry will review continuous integration practices, test automation within TFS and how they fit into the Agile process. It will also discuss the order in which to move efficiently to full use of continuous integration with the Agile process to manage quality.

The second blog entry will provide a mapping of quality management in the Project Management Body of Knowledge (PMBoK) to the Agile process within TFS. As Agile processes spread to larger organizations employing methods that reflect the PMBoK, this mapping can assist with the creation of a hybrid approach within TFS.

The final entry will describe our experiences bringing together aspects of project quality management, the Agile process and continuous integration in a real customer environment using TFS.


Continuous integration practices and test automation within TFS

We will review the use of build definitions, tests and work items such as user stories, tasks, test cases and bugs in TFS to facilitate continuous integration, test automation and quality management. Some basic information about these artifacts will be provided in this blog. However, more detailed information can be found at http://msdn.microsoft.com/en-us/library/dd380647.aspx.
Continuous integration practices and test automation within TFS begin with the use of build definitions, automated deployments and test artifacts such as coded UI tests. Both build definitions and coded UI tests can be created and stored within TFS. Figure 1 below is an example of a build definition in TFS.

Figure 1: Example of a New Build Definition – Process configurations

The deployment of built code to a test virtual machine (VM) is automated through the build definition.

Coded UI tests are created in the Visual Studio IDE or Microsoft Test Manager by recording the steps of a manual test execution. These coded UI tests can then be stored and linked to a test case and/or build definition (as automated tests, defined by the test .dll). Coded UI tests can run after the completion of a build and deployment of the code. Build and test agents running coded UI tests must be configured to run interactively.
Once the build and test artifacts have been created and stored in TFS, continuous integration can begin. The most common triggers for builds, deployments and tests are changes made by developers to the source code. Builds can be triggered as part of a gated check-in policy, or by scanning the code stored in TFS for changes on a regular basis (e.g. hourly) and running a build. After either trigger, tests run and a report on the results is generated and sent to the relevant developers or team leads. If something is broken, it can be fixed and checked in immediately. Figure 2 below is an example of a build report. This continuous integration practice, incorporating automated builds, tests and issue reporting, keeps the code in a good, working state at all times.

Figure 2: Build and Test Results
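The second trigger style described above – scanning source control for changes on an interval – can be sketched as a simple polling loop.  This is an illustration only: `get_latest_changeset` and `queue_build` stand in for calls to the TFS API and are not real TFS function names.

```python
# Sketch of a scheduled CI trigger: poll source control on an interval
# and queue a build whenever the latest changeset id advances.
import time

def poll_and_build(get_latest_changeset, queue_build,
                   interval_seconds=3600, iterations=None):
    """Queue a build whenever a new changeset appears; return last built id."""
    last_built = None
    count = 0
    while iterations is None or count < iterations:
        latest = get_latest_changeset()
        if latest != last_built:
            queue_build(latest)   # build -> deploy -> test -> report
            last_built = latest
        count += 1
        if iterations is None or count < iterations:
            time.sleep(interval_seconds)
    return last_built
```

A gated check-in inverts this flow: the build and tests run before the changeset is committed, rather than after it is detected.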

To facilitate Agile development management, a process template is used within TFS. The process template defines the set of work item types (WITs), queries and reports that teams use to plan and track projects. We will focus on quality management, employing the user story, task, test case and bug work items in TFS. The user story work item is central to general management, while test cases, grouped into test suites and test plans, are the key to quality management. The user story describes the functionality of an application – in Agile development it is the equivalent of user requirements.

Figure 3: New User Story

Tasks can be created and linked to the user story to organize the development work. The Agile process task work item is shown below in Figure 4.

Figure 4: New Task

Work can be described in the details tab of the task. Source code can also be linked to the task, as can changesets, which are generated during code check-in. These links facilitate traceability. At a higher level, tasks can be grouped by area and iteration, and as is typical in Agile development management, iterations can be set up with start and end dates. This provides a view into how many tasks and how much work/coding/testing is planned for each iteration.
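That per-iteration view amounts to a simple roll-up of remaining task hours by iteration path, which can be sketched as follows (the iteration paths and task data are invented for the example):

```python
# Sketch: sum remaining task hours per iteration to show planned load,
# the same roll-up an iteration backlog view presents.
from collections import defaultdict

tasks = [
    {"iteration": "Release1\\Sprint1", "remaining_hours": 6},
    {"iteration": "Release1\\Sprint1", "remaining_hours": 4},
    {"iteration": "Release1\\Sprint2", "remaining_hours": 8},
]

def remaining_by_iteration(task_list):
    """Sum remaining hours per iteration path."""
    totals = defaultdict(int)
    for task in task_list:
        totals[task["iteration"]] += task["remaining_hours"]
    return dict(totals)
```

Comparing these totals against team capacity per iteration is what makes over-commitment visible before the iteration starts.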
In Figure 5 below, steps are described in a test case along with the expected results.

Figure 5: Agile Test Case

The steps can be executed manually the first time and recorded, creating a coded UI test that is then linked to the test case (see the Associated Automation tab on the test case work item). Test cases can be grouped into a test suite (see Figure 6) through Microsoft Test Manager and also constrained by iteration, creating a goal for the completion of the test suite by the end of the assigned iteration.

Figure 6: Test Suite in Microsoft Test Manager

The final WIT used in quality management in the Agile process is a bug. Failed tests can be used to generate bugs that are then assigned to the development team for resolution. Figure 7 below shows the detail that can be added to a bug, including links back to the test case that was executed to generate the bug. Screenshots, recordings of the issue and system information that point back to the VM or specific snapshot on which the issue was discovered, can help the development team reproduce the problem.

Figure 7: New Bug with attached Test Case

Bringing continuous integration into the Agile quality management process can be phased in over time. The artifacts required for continuous integration, such as build definitions, deployments and test cases, should be created first. Work items that link builds and test results back to changesets committed to source control can then be used to organize and add traceability to a team's work. The user stories and tasks are the foundation for the creation of test cases and their grouping into test suites. Adding test cases and plans later allows manual testing to elevate quality beyond what continuous integration alone can provide. Once all of the artifacts are set up in TFS and the team is using them, software development speeds up while retaining a high level of quality.

Posted in Lab Management, TFS 2013 | Comments Off

Domain Controller Upgrade

We recently upgraded our Domain Controllers from Windows Server 2003 to Windows Server 2012. All seemed to have gone fine: we were able to promote the first new DC, then the second; replication worked; dcdiag returned no errors. It looked like we were doing okay.

It wasn't until the next day, when one of our clients reported intermittent authentication issues on a massive scale, that we realized something was amiss. We have an outgoing forest trust with this client, and their users connect to our TFS services using their own domain's accounts. Furthermore, we host Lab Management for them, and the SCVMM 2012 SP1 service (the backbone of Team Foundation Server 2012 Lab Management) uses a service account from the foreign domain.

What perplexed us was the intermittent nature of these issues. It's important to point out that we still had a 2003 DC in the mix. After several frustrating hours of troubleshooting we could see that any time a user attempted to log in and the 2003 DC was used to authenticate the Foreign Security Principal, everything was fine. But when the 2012 DCs were used, the call to the foreign domain would time out and we would be kept out (after 3 long attempts). Our network engineer pored over firewall logs and found that attempts were being made over port 49162. Looking through the firewall configuration, we saw that this port was not open. It turns out that all Windows Server versions after 2003 use a different range of ports for Remote Procedure Calls (see below). This may have been breaking news in 2007, but today it is buried among run-of-the-mill upgrade procedure information.

[Image: RPC dynamic port ranges. Windows Server 2003 and earlier default to TCP 1025-5000; Windows Server 2008 and later default to TCP 49152-65535]


At this point, we had two options:

1- Restrict RPC to a fixed port so that all traffic is directed there instead of being dynamically mapped

2- Open the new range of ports to allow for communication with trusted domains

As we have other trusts in place, where clients still use older Windows Server DCs, we opted for the second, so as to avoid possible further headaches.
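If you go the second route and Windows Firewall (rather than, as in our case, a network firewall) sits in the authentication path, a sketch of what to run from an elevated prompt on the DC might look like the following; the rule name is our own, and the range assumes the Windows Server 2008+ defaults:

```shell
:: Show the dynamic port range RPC will draw from on this server
netsh int ipv4 show dynamicport tcp

:: Allow inbound traffic on the default dynamic range (49152-65535)
netsh advfirewall firewall add rule name="RPC Dynamic Ports" dir=in action=allow protocol=TCP localport=49152-65535
```

For a network firewall, the equivalent range would need to be opened between the DCs on each side of the trust.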


So, if you're planning on upgrading your Domain Controllers from Windows Server 2003, make sure that you've got the right ports open. Here is a very good and thorough blog post about planning for such an upgrade.

Posted in Active Directory, Domain Controller, Lab Management, TFS 2012 | Comments Off

HTTP 500 Error after installing TFS 2012 Prerequisites

Over the past few weeks, we've been experimenting with TFS 2012 deployments. We've been impressed with the new features and the new look. New installs in sandboxed environments were fairly straightforward, so our next project was to put the upgrade process to the test. I won't spend much time on the different upgrade paths as they're covered in countless posts online. I will say that we stood up a new TFS 2010 + SP1 instance, then restored a backup of one of our environments and made sure that it was working properly. Since this particular instance was also synced with Project Server 2010 through the Integration Feature Pack, we re-installed that aspect as well.

On to the upgrade process…

TFS 2012 relies on SharePoint 2010 to function, and the WSS 3.0 instance that was installed with TFS 2010 needs to be upgraded. So, we popped the SharePoint Server DVD in and selected "Install software prerequisites".

[Image: SharePoint installation splash screen]

This installed the following:

• Application Server Role, Web Server (IIS) Role
• Microsoft SQL Server 2008 Native Client
• Hotfix for Microsoft Windows (KB976462)
• Windows Identity Foundation (KB974405)
• Microsoft Sync Framework Runtime v1.0 (x64)
• Microsoft Chart Controls for Microsoft .NET Framework 3.5
• Microsoft Filter Pack 2.0
• Microsoft SQL Server 2008 Analysis Services ADOMD.NET
• Microsoft Server Speech Platform Runtime (x64)
• Microsoft Server Speech Recognition Language – TELE(en-US)
• SQL 2008 R2 Reporting Services SharePoint 2010 Add-in

Immediately after this step we noticed that we were no longer able to connect to our TFS 2010 collections from Visual Studio or through the web interface. Browsing to the http://servername/tfs page gave us a generic HTTP 500 error. If you find yourself in this situation, do not despair. Just follow along.

1. To investigate, in IIS, expand the "Team Foundation Server" site, right-click the "tfs" application, then Manage Application > Browse.

[Image: Browsing the tfs application in IIS]
This will open Internet Explorer (or your associated browser) to reveal a generic error page. This is not very useful, is it? So let’s get IIS to give us some more information about what exactly is going wrong.

2. Back in IIS, right-click “tfs” Application, click Explore. This will open a Windows Explorer window at the location where your TFS web service is installed.

[Image: Exploring the tfs application in IIS]

3. Make a backup of the web.config file (I just copy the file and paste it in the same location and Windows will append “Copy” to the filename)

4. Now, open an elevated Notepad process: Start > All Programs > Accessories > Right-click Notepad and click Run as Administrator.

5. File > Open in the elevated Notepad, browse to the web.config file, and open it. Search for "customErrors" and find where this is defined. You will want to change this setting to "RemoteOnly":

 <customErrors mode="On" />

 <customErrors mode="RemoteOnly" />

Save the file.

6. Now, take another look at that error message in IE. If you still have the window open, just hit refresh; otherwise, follow Step 1 again. You should see the following error:

typeloadexception: Could not load type 'System.ServiceModel.Activation.HttpModule' from assembly 'System.ServiceModel, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089'

This is caused by the prerequisites install of SharePoint Foundation 2010. It looks like a new ASP.NET version was installed but wasn't registered properly, which has broken something in IIS.

7. No matter! There is an easy fix for this. Open an elevated command prompt (Run as Administrator) and change directory to where the new .NET framework was installed (in our case it was C:\Windows\Microsoft.NET\Framework64\v4.0.30319).
Now, run the following command:

aspnet_regiis.exe -iru

This will register the newly installed framework but leave your IIS configuration intact.

8. Browse to the “tfs” application again and it should now work properly. You should also now be able to connect to your collection with Visual Studio.

At this point, you can undo the change you made to your web.config file (either delete web.config and rename your backup copy, or change "RemoteOnly" back to "On").

Posted in TFS 2012 | Comments Off

Getting the requirements for Lab Management

Like any infrastructure project, gathering requirements is an important step before setting up Lab Management.  For our customers, we start by trying to understand what their applications do.  We also look at how test environments were used, if any existed before the client decided to start using Lab Management.  The information gathered from examining existing test environments, understanding the application and mixing in best practices for LM results in a recommendation on the size of the underlying host machine for Lab Management.  Another important input when determining what will be required to host Lab Management is the client's future direction for their application(s).  If a client is planning to offer software with enhanced features, additional environments and VMs may be required to accommodate testing in parallel with current versions of their software.

Determining a buffer that will allow for some growth can also be tricky, but it's important to build in extra room nonetheless.  We have found that no matter how well you understand a customer's application and how they tested it before the introduction of Lab Management, there are always opportunities/demands for additional environments or VMs that would benefit the application lifecycle, and you don't want to be in a position where you have no capacity for growth beyond the initial requirements.  As a rule of thumb we like to provision a host that can provide for about 20% growth.
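As a toy illustration of that 20% rule of thumb, here is a quick sizing calculation; the lab size and per-VM specs are invented numbers, not a customer's actual requirements:

```shell
# Hypothetical lab: 12 test VMs at 4 GB RAM and 2 vCPUs each (illustrative only)
vms=12
ram_per_vm_gb=4
vcpus_per_vm=2

base_ram=$((vms * ram_per_vm_gb))     # RAM needed for the initial environments
base_vcpus=$((vms * vcpus_per_vm))    # vCPUs needed for the initial environments

# Provision roughly 20% headroom for growth, rounding up to whole units
host_ram=$(( (base_ram * 120 + 99) / 100 ))
host_vcpus=$(( (base_vcpus * 120 + 99) / 100 ))

echo "Host sizing: ${host_ram} GB RAM, ${host_vcpus} vCPUs"
# prints: Host sizing: 58 GB RAM, 29 vCPUs
```

The rounding up matters in practice: it is cheaper to buy a slightly larger host up front than to migrate environments to new hardware later.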

Posted in Lab Management | Comments Off

Microsoft case study on TDC hosted Lab Management

Microsoft has published a case study featuring Questionmark – a TeamDevCentral customer employing TDC's hosted Lab Management. Read about how TDC helps Questionmark use Lab Management and get a better understanding of the benefits realized by Questionmark.

Questionmark – Software Company Boosts Productivity 67 Percent with Virtual Test Labs

Posted in Lab Management | 1 Comment

Managing Lab Management!

I recall speaking with a customer years ago about introducing a new ESX host for testing purposes. He was concerned it would create a wild west where VMs would spring up willy-nilly, quickly using up resources, with no one enforcing the discipline to clean up unused VMs and return their resources to the ESX pool for use by other testers. Gathering requirements to plan out a new Lab Management environment wins only half the battle against that wild west. Once LM has been deployed and is in use, our experience is that it must be managed: some process has to be established to deploy VMs and environments after the initial test environments have been set up, and decommissioning VMs and environments once they are no longer useful is just as important.

Management can be complex. In our situation we have a lot of players. Since we host Lab Management, we own some responsibilities, such as assigning IPs to machines after they are deployed. Customers can, and very often do, deploy new environments to LM, but since environments are associated with individual team projects, QA team members from one team might not know what members of another team are doing or require of the Lab Management environment. To begin to address these coordination challenges, we have begun to publish a report for our customers of the VMs and environments and the team projects they are associated with. This gives all of the QA team members, no matter which team project they are a part of, a good view into resource use. We work with our customers through online meetings to periodically review this report and understand what plans there are for new environments. This helps us jointly set priorities and initiate clean-up activities for test environments that may no longer be useful. We will continue to improve our reports so that customers have good visibility into how much capacity LM has and how recently environments were last used, to facilitate clean-up.
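To make the idea of such a report concrete, here is a minimal sketch that counts VMs per team project; the CSV file name, column layout and VM names are all invented for illustration, standing in for whatever inventory export your virtualization platform provides:

```shell
# Hypothetical inventory export: one row per VM with its team project and last-used date
cat > lab_inventory.csv <<'EOF'
vm,team_project,last_used
qa-web-01,ContosoStore,2013-05-01
qa-sql-01,ContosoStore,2013-02-11
build-01,FabrikamApp,2013-05-20
EOF

# Count VMs per team project so every QA team can see overall resource use
awk -F, 'NR > 1 { count[$2]++ } END { for (p in count) print p ": " count[p] " VM(s)" }' lab_inventory.csv | sort
```

Sorting on the last_used column instead would give the other view we mention: which environments have gone stale and are candidates for clean-up.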

Posted in Lab Management | Comments Off