|Version 73 (modified by edunne, 6 years ago) (diff)|
The Genshi migration and port to Trac 0.11 Beta is done.
Thanks to everyone who agreed to test the plugin.
You can download the source or, preferably, use easy_install for the 0.11.x compatible version here:
If you need easy_install, check: http://peak.telecommunity.com/DevCenter/EasyInstall
Trac 0.10.x users should continue to use this version:
NOTE: the latest stable version for Trac 0.10.x is 0.4.
This includes a new validation page that lets you identify errors in your XML before you attempt to create the test run. It also adds the ability to point the test case management plugin at a branch or tag folder, so that test runs can be generated for alternative test suites (say, for an older release, or a release that is being stabilized in a branch). This also ties test runs to versions more explicitly, because the testcases that are run are the testcases in the branch or tag.
I also added some more error checking to the test validation links.
I've developed a test case management tool that uses subversion as the testcase repository and uses the ticket framework in trac to create test runs.
Some of the common problems that any testcase management tool faces are:
- assigning a group of tests to a testing resource(s)...a test run (this is what the plugin manages)
- versioning of the test cases (taken care of by subversion...you mean testcases require maintenance?)
- reporting on results and progress (trac reports...export to excel for pretty pictures)
So what is a test run?
- a set of tests
- assigned to a specific tester
- for a particular product and version
In more detail: a test run is a specific set of test cases for a particular milestone or version of the software under test, assigned to a particular tester.
Each tester owns a single instance of a test case in the form of a ticket. This allows you to track the progress of individual testers as they complete their set of test cases. As they complete a testing task they set the TRAC ticket to closed.
This is really powerful because you can take full advantage of the custom reporting engine in TRAC to look for open tickets of type testcase that belong to a particular tester, and also monitor the progress of a group of testers.
There was some confusion about this as it seemed like a single ticket would be "owned" by multiple testers. TRAC's data model makes this impossible (also I think that assigning test tickets in that way would be really clunky). The testcase is in subversion, while a test instance (in the form of a TRAC ticket), is in TRAC.
Essentially what happens is the testcase is an xml document stored in subversion. The testcasemanagementplugin creates a ticket with the test case details as the ticket description and assigns that ticket to a specific tester: one ticket per tester per testcase.
The advantage of having a testcase management tool in trac is that it removes the need to have yet another system that manages testcases...with all the usual problems of maintaining user accounts, system upgrades, and licensing issues. The other advantage is that it adds more transparency to the testing process as the tool and any testcases are right there for anyone to see.
Testcases are stored in a very simple XML format.
Example: take the testcases that you've written in a Word doc or Excel sheet and put one test case into one XML document. Check the testcases into Subversion (more on that below). I've heard some feedback that this potentially ends up in a huge set of test cases. That's only if you write test cases in the form: "Testcase 1: click the submit button...end of test case".
My personal philosophy is to encourage people to write workflow- or task-driven testcases and have the tester use their brains to find issues. I call it script-based exploratory testing; check out Exploratory Testing by James Bach at http://satisfice.com . If testers need to look at the UI in very high detail, then the test case could simply be "compare the UI to the mockup; here's the link to the screenshot: http://(screen shot checked in to subversion)". If there is no mockup, trying to describe a GUI using words is completely ridiculous, and it shouldn't be the job of a test case to document what the GUI looks like.
Grouping testcases together... You can specify collection of test cases (for example a smoke test) by specifying which test cases belong to a test template. This information is also specified in an xml file.
Note this plugin is designed to work with Subversion, although I don't use any specific subversion code. Potentially this plugin could work with other TRAC supported configuration management systems. I just haven't tried it.
Configuration steps required
- Create a testcase directory within an existing subversion project. This is where your testcase and testtemplate files go. We typically structure our development projects with a main project directory and then a source and build subdirectory. So when you add the testcase directory you might have something like this:
project/source/ <-- checked into subversion
project/build/ <-- this is not checked in but created when you build the application
project/testcases/ <-- checked into subversion
The nice thing about this is you add a lot more transparency to the testing process as testcases are bundled and versioned with the source code.
- Add testcases and commit those testcases to your subversion repository using the example format (see attachment for correct XML format of testcase files).
- Create a testtemplate file called testtemplates.xml, then specify which tests belong to which test templates, for example the smoke test (see attachment for correct XML format of the test templates file). Not all tests have to belong to a template; it's just a convenient way to group tests together. This file goes into the same project/testcases directory.
- Add the following new section to the trac.ini file:
This is important because Trac can be set up against subdirectories so you often don't link trac to your root subversion folder, so only specify the full path from the root node if that's how trac was set up.
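The contents of that section aren't shown on this page, and the real section and option names are defined by the plugin itself. Purely as a hypothetical sketch of the shape (the names below are placeholders, not the plugin's actual configuration keys), it could look something like:

```ini
; HYPOTHETICAL example -- the section and option names here are
; placeholders; check the plugin's documentation for the real keys
[testCaseManagementPlugin]
; path from the repository node that Trac is linked to, down to the
; directory holding the testcase and testtemplate XML files
SubversionPathToTestCases = /project/testcases
```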
- add a new ticket type called testcase
- I also add a custom ticket field in the trac.ini file for reporting purposes (this is not required yet...just really useful):
[ticket-custom]
testcase_result = select
testcase_result.label = Test Case Result
testcase_result.options = pass|fail|incomplete
testcase_result.value =
- Enable the plugin either through the trac admin plugin or by modifying the trac.ini file:
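Enabling through trac.ini uses Trac's standard [components] section. The module name below is an assumption; verify the exact name the installed egg registers in the Trac plugin admin panel:

```ini
[components]
; module name is a guess -- check the plugin admin panel for the real one
testcasemanagementplugin.* = enabled
```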
- Restrict ticket owners to only known users of the system. You do this by looking for the line restrict_owner in the trac.ini file.
This is actually necessary because of an oversight on my part. I incorrectly assumed that most people would have turned this on.
restrict_owner = true
A bug has been entered about this (thanks for that), and I'll be working on a solution for the next release.
I've added some helpful debugging/error checking that should help if an error occurs. Probably the most likely error is you'll specify the relative path to the testcases incorrectly.
If you have any issues, create a new ticket.
Download and Install using easy_install
You can use easy install to install the testManagementPlugin by doing this:
(this version includes bug fixes for 1551)
If you don't know what easy_install is or don't have it installed, go to: http://peak.telecommunity.com/DevCenter/PythonEggs
Windows users should add the python2X/scripts directory to their path.
Running a weekly smoke test
Step 1 (Create your test cases): Take a look at the example test case I attached at the bottom of this page. It's a very simple XML format for specifying what a test case should be.
Inside the XML file you specify details about each testcase. A test case has an id (which should probably be the file name without the extension), a component (matching a real component in your Trac project; case sensitive), a summary, a description, and a field describing the expected results of the test.
Here's what a testcase might look like...remember one testcase per file. It helps if you give the file a name that reflects what the testcase is for. For example: map01_zoom.xml
<testcase>
  <id>map01_zoom</id>
  <component>map</component>
  <summary>This test verifies correct functionality of zoom on the map.</summary>
  <description>
    Steps:
    1. Zoom around on the map.
    2. etc.
    Note you can use Trac formatting in the testcase if you want.
  </description>
  <expectedresult>The map should correctly zoom to the area specified by the user, etc.</expectedresult>
</testcase>
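As an aside (this is not part of the plugin), a file in this format is easy to work with from Python's standard library. A minimal parsing sketch, assuming the element names shown above:

```python
import xml.etree.ElementTree as ET

# Inline copy of the example testcase file from above
TESTCASE_XML = """\
<testcase>
  <id>map01_zoom</id>
  <component>map</component>
  <summary>This test verifies correct functionality of zoom on the map.</summary>
  <description>Steps: 1. Zoom around on the map. 2. etc.</description>
  <expectedresult>The map should correctly zoom to the area specified by the user.</expectedresult>
</testcase>
"""

def parse_testcase(xml_text):
    """Return a dict of the testcase fields, whitespace-stripped."""
    root = ET.fromstring(xml_text)
    fields = {}
    for tag in ("id", "component", "summary", "description", "expectedresult"):
        node = root.find(tag)
        fields[tag] = node.text.strip() if node is not None and node.text else ""
    return fields

testcase = parse_testcase(TESTCASE_XML)
print(testcase["id"])         # map01_zoom
print(testcase["component"])  # map
```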
Step 2 (Define any required grouping of testcases in the testtemplates.xml file):
An example grouping would be specifying that tests: test1, test2, and test3 all belong to the smoketest.
<templates>
  <template name="smoke">
    <testcases>
      <testcase>test1</testcase>
      <testcase>test2</testcase>
      <testcase>test3</testcase>
    </testcases>
  </template>
</templates>
Make sure the tests and template file are checked in.
Step 3 (Creating a test run): Click on the TestManagement tab on the main Trac menu.
Click on the Test Run link
Select users to assign testcases to (these have to be known users to the trac system).
Select the appropriate version, and/or milestone/sprint for this testrun.
Select the testcases, and/or the appropriate test template.
Click the generate test run button.
This will re-direct you to the custom ticket reporting page with a pre-built query that should show you all the test cases that you just created grouped by user. It will also pick up previously created testcases if they also exactly match the query criteria.
Step 4 (Run the testcases)
Open one of the created tickets and using the testcase_result custom field specify whether the test for this test run passed, failed, or was not run for whatever reason.
Close each ticket in the normal way by setting the ticket to "fixed" (or one of the other appropriate resolved states).
Step 5 (Reporting)
For reporting on testcase progress or results, use the Trac custom query page. Just filter on ticket type = 'testcase', assuming you've added that ticket type to your Trac database. Trac is going to add the ability to specify which columns appear in the custom query results, so this will only get better.
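For example, the same filter can be expressed as a Trac query link in any wiki page (a sketch; adjust the fields to your setup):

```
query:?type=testcase&group=owner
```

Grouping by owner gives you the per-tester progress view described above.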
-  by edunne on 2008-09-26 22:29:45
uploading a packaged (egg) version of the latest stable version of the genshi/unicode merge.
-  by edunne on 2008-09-26 19:22:37
creating a tags folder
-  by edunne on 2008-09-26 19:04:39
added some additional error messages, merged the unicode version with the genshi version (also that was in the previous change), and updated the version number in setup.py