An automated data integration test suite is like a funnel that scoops up abstract, unstructured and downright weird knowledge from every corner of your problem domain, condenses it into a concise form, and pipes it out to consumers (i.e. development teams) who turn it into a working system.
It fulfils three important roles that are otherwise missing.
* To define a complete and unambiguous specification held in a form that is directly useful to technical people (n.b. a written spec document is not directly useful because it is open to interpretation and assumes tacit knowledge about the problem; see the sketch below)
* To create a single, well-understood vocabulary for discussing requirements and issues during development
* To give ourselves a fast, simple way to prove functional correctness
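To make the first role concrete, here is a minimal sketch of a test case held as data rather than prose. The source and target field names and the defaulting rule are hypothetical, and the dictionary layout is just one possible shape; the point is that the expected behaviour is stated precisely enough to execute.

```python
# A hypothetical test case expressed as data: given these source rows,
# the integration must produce exactly these target rows.
test_case = {
    "name": "Customer with missing country defaults to 'UNKNOWN'",
    "source_rows": [
        {"cust_id": "C001", "name": "Acme Ltd", "country": None},
    ],
    "expected_target_rows": [
        {"customer_key": "C001", "customer_name": "Acme Ltd", "country_code": "UNKNOWN"},
    ],
}
```

A business analyst can read and challenge this, a developer can run it, and nobody has to interpret a paragraph of prose.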
So how do you build the funnel?
Let's assume you have a basic technical framework in place that allows you to store test cases and run them through the system under test. The question we need to answer is how to integrate it into your project lifecycle in a cost-effective manner. The only way to do that is to use Test Driven Development (TDD). Put very simply: write a failing test that describes a change before you write the code to implement that change.
There is plenty of information out there on how to do TDD in Object Oriented systems. I have worked through a lot of it and built systems of my own to learn the techniques (and do other useful things as well). However, Object Oriented languages allow much more modular code than ETL systems and give you much more control over the low-level details, so many of the techniques do not map perfectly into the Data Integration world. It's certainly impossible, or at least highly impractical, to start from a single test case and build up a system from there. For a start, a large part of the system already exists in the form of source and target systems and interfaces.
However, I have found that the principles and benefits of TDD are relevant to, and achievable in, Data Integration when you tweak the practices a little. Furthermore, data integration projects raise challenges around data quality and opaque interfaces to external systems that are not a concern in most OO systems. I have found that building the funnel using TDD provides an elegant and proactive solution to these problems, one that integrates well with normal project activities and does not mandate an explicit 'phase', 'activity' or 'gate'.
Here's how to do it.
Before you write any code...
1. Create an automated test harness (see the harness sketch below).
2. Work with the technical team, business users and analysts to define an initial set of test cases based on the current understanding of the business problem. Make sure you can run all your test cases in one pass, rather than having to push separate batches of test cases through the system under test.
3. Roll technical test cases such as boundary conditions and known data quality issues into the test suite.
- Do your initial data profiling here (see the profiling sketch below)
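The harness itself does not need to be elaborate. Here is a minimal sketch, assuming test cases are held as data in the shape shown earlier and that you can wrap the system under test in a single callable; `run_system_under_test` is a hypothetical hook that would, in practice, load the source rows into staging, trigger your ETL job and read the target back.

```python
from typing import Callable, Dict, List

Row = Dict[str, object]


def _canonical(rows: List[Row]) -> List[str]:
    # Normalise a result set so the comparison ignores row order,
    # since most data integration output is set-based.
    return sorted(repr(sorted(r.items())) for r in rows)


def run_suite(test_cases: List[dict],
              run_system_under_test: Callable[[List[Row]], List[Row]]) -> int:
    """Run every test case in one pass and return the number of failures."""
    failures = 0
    for case in test_cases:
        actual = run_system_under_test(case["source_rows"])
        if _canonical(actual) == _canonical(case["expected_target_rows"]):
            print(f"PASS: {case['name']}")
        else:
            failures += 1
            print(f"FAIL: {case['name']}")
            print(f"  expected: {case['expected_target_rows']}")
            print(f"  actual:   {actual}")
    print(f"{len(test_cases) - failures} of {len(test_cases)} test cases passed")
    return failures
```

The one non-negotiable property is the one in point 2: the whole suite runs in one pass from a single command.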
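For the data profiling sub-step, something as simple as the sketch below is usually enough to surface the first technical test cases. It assumes pandas is available and that a sample of the source extract can be loaded as a flat file; the file name is hypothetical.

```python
import pandas as pd

# Load a sample of the source extract (file name is hypothetical).
df = pd.read_csv("customer_extract_sample.csv")

# Null counts and distinct counts per column surface candidate test cases
# such as missing mandatory fields and unexpected code values.
profile = pd.DataFrame({
    "nulls": df.isna().sum(),
    "distinct": df.nunique(),
})
print(profile)

# Frequency counts on low-cardinality columns show the code values
# the mappings actually have to handle.
for col in df.columns:
    if df[col].nunique() <= 20:
        print(df[col].value_counts(dropna=False))
```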
At this point you will find that, simply by thinking through the problem at a detailed level, you have driven out a lot of gaps in your understanding of it and of the solution required. Some of these will be misunderstandings between the business and technical people, while others will be areas the business users and analysts had not considered in their initial discussions. The test framework has already been valuable and you don't even have any code yet!
You are now ready to start coding and you can move the test suite into "TDD" mode.
For every modification...
4. Check you have test cases for the feature. If you don't, write them (a worked example of steps 4 to 7 follows this list).
5. Run the suite to ensure the test fails (this is how you "test the test")
6. Write code to make the test pass - running the tests frequently to make sure you are on the right track
7. Once all the tests pass, refactor your code to make it production ready
8. Go to step (4) and start the next feature
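Continuing the hypothetical harness sketch from earlier (it assumes the `test_cases` list and `run_system_under_test` hook defined there), steps 4 to 7 look something like this; the new mapping rule and field names are invented for illustration.

```python
# Step 4: add a test case for a feature that does not exist yet
# (the vat_region rule and field names are hypothetical).
test_cases.append({
    "name": "German customers get VAT region 'EU'",
    "source_rows": [{"cust_id": "C002", "name": "Beta GmbH", "country": "DE"}],
    "expected_target_rows": [
        {"customer_key": "C002", "customer_name": "Beta GmbH",
         "country_code": "DE", "vat_region": "EU"},
    ],
})

# Step 5: run the whole suite and confirm the new case fails, which proves
# the test really exercises the behaviour we are about to build.
assert run_suite(test_cases, run_system_under_test) > 0

# Step 6: implement the vat_region mapping in the ETL, re-running the suite
# until every case passes. Step 7: refactor with the suite as a safety net.
```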
You must expect issues to arise once you start development. Some will be technical, others business-related, and the rest due to tests that were missed in the first pass. It is very important to assign collective ownership of the test suite to the development team so everyone has the right and ability to make changes as soon as they find a problem. In practical terms, you should have a version control strategy to ensure changes are propagated around the team in a controlled fashion, but you should avoid assigning individual ownership or setting up any sort of approval mechanism that will create a bottleneck.
Periodically, you should run a smoke test with a full volume of production or production-like data through the system to drive out issues you have missed. Similarly, if you have a lot of dependencies on other systems, you should build your test suite to stub out those dependencies but also run it regularly in a fully integrated environment to find new cases that should be added to your automated suite.
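As one hedged sketch of what stubbing a dependency can mean here: the integration depends on an interface rather than a concrete system, the automated suite plugs in a canned stub, and only the periodic integrated run touches the real thing. The class and method names below are hypothetical.

```python
from typing import Dict, List, Protocol

Row = Dict[str, object]


class CustomerSource(Protocol):
    # The interface the integration depends on, not a concrete system.
    def fetch_customers(self) -> List[Row]: ...


class LiveCrmSource:
    """Used only in the fully integrated environment (real connection omitted)."""
    def fetch_customers(self) -> List[Row]:
        raise NotImplementedError("call the real CRM interface here")


class StubCrmSource:
    """Used by the automated suite: returns exactly the rows a test case supplies."""
    def __init__(self, rows: List[Row]) -> None:
        self._rows = rows

    def fetch_customers(self) -> List[Row]:
        return list(self._rows)
```

The automated suite stays fast and deterministic against the stub, and anything surprising found in the integrated run gets fed back into the suite as a new test case.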
Footnote: Why you really can't create the tests after the code
Apart from the fact that you have already lost the benefits of the funnel during development, it's actually impossible to create a complete test suite after the event.
Ask yourself a few questions...
* How can you validate the test suite tests what you think it does unless you have made the test fail?
* How can you be sure the test suite tests everything the code base does unless it grew alongside the code base?
* How can you inject test cases into a system that was never built to have test cases inserted into it? Usually you will find a few tricky dependencies that you can't work around.
I'm not saying you should not try to build a test suite if you have to manage a codebase that does not have tests, but you should understand the limitations. You should certainly not plan to build the tests after the code if you have the option.