Part of The Inner Chapters Unbook.
Originally part of podcast episode number thirteen.
Dedicated audio available from Podiobooks.
- Most programmers think it is not their job
- QA testers do functional, black box testing
- Customers do acceptance testing
- Programmers do white box, unit testing
- Integration testing is a superset of unit testing
- Motivated self-interest
- Ensure you never have to fix a bug twice
- Freedom to change and improve code
- Another way to explain your code
- New JUnit, version 4
- An early look at JUnit 4
- Annotations nice but not all that different from reflection
- Removal of parent class requirement more interesting
- Suite wide initialization very nice
- Exception handling a definite plus
- Ignore is definitely something I've done the hard way
- Just like design, all projects, no matter how small, should include some amount of testing
- Practice makes perfect
- Be disciplined
- Especially for performance/load testing
- Only change one variable at a time
- Always cover failure modes
- Automation is your friend
- Beware the pitfalls
- TDD can lead to a certain myopia
- Some environments are just not well suited to automated unit testing, like EJB
I know you're thinking, geez, functional decomposition is at least somewhat more interesting; it has to do with programming. What does testing have to do with programming? Well, that's unfortunately a very common attitude, and it's one that I used to subscribe to myself at the start of my career. One of the things I've actually come to realize is that one of the cornerstones of solid programming practice is testing.
So functional decomposition is one, testing is another, design is a third, and refactoring is the fourth. These are probably the four fundamental practices that, if you master them, if you practice them ― I think that's the most important distinction I'm going to make about these four things in particular versus everything else I'm going to talk about ― will do the most to enhance the quality of your code and the quality of your deliverables, and to enhance what you get out of programming: being able to earn more money and make a better living by getting more senior positions, more senior projects, more senior opportunities; or, if you're doing it on a voluntary basis for an Open Source project, really improving the quality of your contribution and doing a good turn by the project you're participating in. And I happen to do both, by the way. Not only do I program for a living, but I also contribute to Open Source projects. I founded one myself, Navel; you can find it out on sourceforge.net. It's very Java-specific and maybe a little 20 per cent as far as the problem it's meant to solve and some of the technology it uses, but it's there, I'm using it at my own work now, and I put it out there for people to use and to try to improve.
But anyway, testing. Like I said, it's a very common mindset for programmers to think, "Testing's not part of my job." Testing, if your organisation is lucky, is what the QA department is for. Or it's the customer's job: to do acceptance testing, to say "Yes, this works" or "No, it doesn't."
But I would argue that those types of testing are technically black-box testing, meaning the tests are performed without any internal knowledge of the workings of the system under test. As such, there's a limited amount they're going to be able to do, especially in the face of a defect or a bug. What programmers need to learn how to do ― and there actually has been some coverage of this in the literature; I don't want to proceed with people assuming I'm ignorant of what XP has to say about programmers doing testing, and we'll get to that a little later on ― is testing of their own: white-box unit testing in particular, and beyond that, integration testing, which in my professional experience also needs to be done by programmers. I'll try to explain that a little more clearly in a second.
Both unit testing and integration testing fall much more into the bailiwick of the programmer, again because they're more white-box-style testing. They presume some internal knowledge of the workings of the system and the component under test, and therefore serve a different purpose: they are not high-level functional tests. Unit testing is atomic testing ― again, just covering definitions here ― making sure that each unit of code works as intended, works as you set out. Are your preconditions valid? Are your postconditions valid? Are there any side effects, and are they what you meant and not something unanticipated? Are there any invariants that shouldn't change through the course of the unit test? And integration testing, I think, is also very important in my experience. When you have any sort of hand-off from the programmers to non-programmers ― to either the QA department or your customer, for acceptance testing or full-on functional testing ― it's critical to do integration testing first: putting all the units together into a live system, preferably not your local development system but some shared resource, and doing a once-over of the entire system to make sure that, as all the components are plugged together, there aren't any bugs that fall out as a consequence of the integration of those components ― hence the name "integration testing."
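To make those definitions concrete, here's a toy sketch ― the `BoundedStack` class and the test names are hypothetical, not from the episode ― of what unit-level checks of a precondition, a postcondition, and an invariant might look like:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical component under test: a stack with a fixed capacity.
class BoundedStack {
    private final List<Integer> items = new ArrayList<>();
    private final int capacity;

    BoundedStack(int capacity) {
        if (capacity <= 0) throw new IllegalArgumentException("capacity must be positive");
        this.capacity = capacity;
    }

    void push(int value) {
        if (items.size() >= capacity) throw new IllegalStateException("stack is full");
        items.add(value);
    }

    int pop() {
        if (items.isEmpty()) throw new IllegalStateException("stack is empty");
        return items.remove(items.size() - 1);
    }

    int size() { return items.size(); }
}

public class BoundedStackTest {
    // Postcondition: pushing then popping returns the same value,
    // and the size (an invariant across the pair) is unchanged.
    static boolean pushPopRoundTrips() {
        BoundedStack s = new BoundedStack(4);
        int before = s.size();
        s.push(42);
        boolean popped = s.pop() == 42;
        return popped && s.size() == before;
    }

    // Precondition: popping an empty stack must fail loudly, not silently.
    static boolean popOnEmptyFails() {
        try {
            new BoundedStack(1).pop();
            return false; // no exception: the precondition was not enforced
        } catch (IllegalStateException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(pushPopRoundTrips() && popOnEmptyFails() ? "PASS" : "FAIL");
    }
}
```

Each check exercises one unit, one behaviour, with no knowledge of the rest of the system ― that's what makes it a unit test rather than a functional test.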
The workflow we used a couple of jobs ago around integration testing ― the way it was explained to me, the way it really started to make sense to me in a practical fashion ― was that the QA testers couldn't do their job if the integration testing wasn't done first: if there were some bug that popped up just from integrating two pieces of a system together for the first time, from two different developers working on them in isolation, then any number of functional tests the QA testers needed to perform beyond that were blocked off, and they just weren't able to reach that portion of their test plan.
So it also starts to get back to... you know, I try to cast these things not just in terms of what they can do for the people you work with, but also in terms of what they can do for you as a programmer. I'm a lazy programmer. I don't like to have conversations I don't have to have. I don't like to do things again and again and again.
So let me talk a minute here about motivated self-interest, which I think is a good thing, and which can be aligned very well with serving others. So don't get me wrong; I'm not being entirely callous and mercenary here. All I'm saying is that if I can find something that serves my needs ― that prevents me from duplicating work, or doing more grunt work than I really want to do ― and at the same time serves somebody else's needs, why not do that?
You know, I talk about efficiencies from time to time on this show, and this is just another case of leveraging efficiencies: looking at it from a cost-benefit perspective, in terms of selfish cost and benefit as well as selfless cost and benefit, and merging those together to make the best decision as far as what's most efficient for everybody involved. So, in terms of motivated self-interest: duplication of effort, right there. If you're doing your own unit testing, and especially if you're automating it ― I think that's the key; I missed talking about that in the previous segment ― number one, automation means you only have to describe the test once, and the machine is going to perform it for you exactly the same every single time, from now till the heat death of the universe. And if you get a bug reported back to you at any stage of the game ― during initial QA testing, during customer-acceptance testing, when the software is live in the field ― and the habit you develop is for your first response to be writing a unit test that captures and exercises that bug, then that increases your confidence that you've truly fixed it. It also ensures that, as the software evolves over time (this may be self-evident, and I apologise for being obvious), your unit test suite is going to keep stressing that same bug over and over again, and make sure you don't introduce what we refer to as a regression bug into the system ― causing a regression in behaviour.
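That "first response" habit might look like the following sketch. Everything here is hypothetical ― the `parsePercent` helper and the off-by-one bug it supposedly had are invented for illustration; the point is that the test is written straight from the bug report and then stays in the suite forever:

```java
// Hypothetical scenario: a bug report says parsePercent("100%") wrongly
// rejected the boundary value. First response: pin the bug with a test,
// then fix the code; the test guards against regression from then on.
public class PercentRegressionTest {
    // The fixed implementation. Before the (invented) fix, the range
    // check was `value >= 100`, which rejected the legal boundary value.
    static int parsePercent(String text) {
        if (!text.endsWith("%")) throw new IllegalArgumentException("missing %");
        int value = Integer.parseInt(text.substring(0, text.length() - 1));
        if (value < 0 || value > 100) throw new IllegalArgumentException("out of range");
        return value;
    }

    // The regression test, written directly from the bug report before
    // touching the code, so you can watch it go from failing to passing.
    static boolean boundaryValueAccepted() {
        return parsePercent("100%") == 100;
    }

    public static void main(String[] args) {
        System.out.println(boundaryValueAccepted() ? "PASS" : "FAIL");
    }
}
```

Once that test is in the automated suite, the machine re-checks the old bug on every run ― which is exactly the "never fix a bug twice" payoff from the outline.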
And then, in my experience, there's a more positive way of looking at regression: when you get to a point where a code base has gotten just a little too hinky, a little too crufty, and you want to do some more extensive overhauling and housekeeping, if not outright re-architecting, re-engineering, re-designing, re-factoring... If the functionality at the unit level, at least, is the same, or most of it is the same, then that automated unit test suite is a great tool to ensure that your real, low-level, systemic improvements don't completely destroy the functionality of the system.
And I've found that even in green-field development, when I'm bumping along and I think I've made a slight misstep in my design or my implementation, and I want to back it out and try something slightly different: again, if the interfaces are similar enough or the same, then the unit tests make that a whole hell of a lot easier. And when you see the green bar in the test runner, or the numbers in your test report come back saying that everything passed, that's a great feeling. Not only that, but it gives you confidence that you did the right thing, that you're doing the right thing.
My last point is going to require a little more explaining as to how it falls under motivated self-interest, but I think you'll start to get it. It's something I've hit upon before ― or maybe it's a point I've only made to my co-workers; sometimes I lose track. The idea is that automated unit tests in particular ― unit tests that are written in code ― are a great way of explaining the code itself. I've only come across one reference that put forward unit tests as source-code documentation, and I forget what the reference is; I wish that weren't the case, so I could give credit. But I wholeheartedly agree with the notion that, especially if you're building libraries or components to be used by other programmers, your unit tests are a great way to show good examples of how your code should and should not be used.
And the motivated self-interest comes in here in the same vein as self-explanatory code and good code documentation (I think this is what I was saying earlier; I might have talked about this before): if your unit tests really help explain how your code is meant to be used, that's hopefully one less conversation you have to have with somebody who's actually making use of your code. If they have more tools to understand the code without having to come and interrupt you, pulling you away from something interesting you're doing, then, as I said, that serves them well, but it also serves you well. It allows you to concentrate on things that are more interesting to you than explaining yourself.
There's a wide variety of technology choices for unit testing. The xUnit family seems to have an implementation in just about every language. Xcode, Apple's premier development environment, has incorporated unit testing into its latest revision, version 2.1. They were talking about that fairly extensively this summer at WWDC 2005, and I thought that was very cool of Apple ― including the fact that they had actually built an automated unit test library from scratch, for C++ I think. So they used JUnit for Java, they used CUnit for C, and they wrote their own for C++, and then integrated it all into the development environment so that testing becomes a very natural, easy extension of your personal development process ― probably the least enjoyable of the four pillars I laid out for a solid, personal programming process.
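On the JUnit 4 points from the outline ― annotations, and the removal of the parent-class requirement ― it's worth seeing why "annotations are nice but not all that different from reflection." This is not JUnit itself, just a dependency-free toy runner with a homegrown `@Check` annotation (both the annotation and the test class are invented for illustration) that shows the mechanism: the runner reflectively scans for marked methods instead of relying on naming conventions or a `TestCase` superclass:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// A stand-in for JUnit 4's @Test: any method carrying it is a test.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Check {}

// Note: no parent class required, unlike JUnit 3's TestCase.
class MathChecks {
    @Check
    public void additionCommutes() {
        if (2 + 3 != 3 + 2) throw new AssertionError("addition should commute");
    }

    @Check
    public void divisionTruncates() {
        if (7 / 2 != 3) throw new AssertionError("integer division should truncate");
    }
}

public class TinyRunner {
    // Reflectively find and run every @Check method; return the failure count.
    static int run(Class<?> testClass) {
        int failures = 0;
        try {
            Object instance = testClass.getDeclaredConstructor().newInstance();
            for (Method m : testClass.getDeclaredMethods()) {
                if (!m.isAnnotationPresent(Check.class)) continue;
                try {
                    m.invoke(instance);
                } catch (ReflectiveOperationException e) {
                    failures++; // the check threw: count it as a failure
                }
            }
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("could not construct test class", e);
        }
        return failures;
    }

    public static void main(String[] args) {
        System.out.println(run(MathChecks.class) == 0 ? "PASS" : "FAIL");
    }
}
```

JUnit 4 layers the rest of the outline's features ― suite-wide initialization, expected exceptions, ignored tests ― on the same annotations-plus-reflection foundation.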
Just like the discussion of functional decomposition, I want to wrap up with some guidelines, some do-and-don't rules to think about as you're incorporating or improving automated testing in your own programming process. So, just like design ― and we'll get to design in a later show ― all projects, no matter how small, should include some amount of automated unit testing. This goes back to one of my convictions in general: there is no project so small that you should short-circuit or forfeit your usual practices in putting it together. Everything you write, whether it's a 20-line shell script or a 1000-component enterprise system, should be the best software you know how to make at the time, period. There should never be a "well, this is really quick and dirty; let's just do something sub-optimal to get something out there." No. There are always levers on the flip side: arguing to reduce the feature set, to reduce scope. That's what your project managers are for; that's what your bosses are there for ― to discuss this, to bring things down to the point where you can still deliver the best-quality software you know how. Just control some other variable.
So this falls under the auspices of that personal conviction, something I strongly encourage you to adopt as a professional conviction: always write your best software, no matter what.
The second point, and I've hit upon this before, is that practice makes perfect. At first, unit testing may seem awkward, may seem a little strange. You may think, "I can go faster without it; I'll get back to it later," whatever. Just do it, whatever way you need to come at it to incorporate it into your personal practice. Whether it's a test-driven development style ("Ah, I write all my unit tests first, and then I write my code to pass them!") or "I write a little code, I write a little test, I write a little code" ― somewhere in between, what have you ― find some way so that it becomes automatic. And practice is the only way to do that. Repetition is the only way to do that. So at first you're going to have to find ways to remind yourself to do it, but then it's going to start to come more naturally. And then, I think, you'll start to develop a good intuition ― at least that's been my experience ― of "Does this really need to be tested? No, not so much; it's pretty much guaranteed to work" versus "You know, this would probably be better covered by some sort of a test, so I'm going to go ahead and do that."
I'm not saying that you need 100% coverage, that you need to be super-zealous about this like some of the XP advocates, but good coverage ― 70-80% of your code base ― is something you should strive for. And like I said, practice and repetition are going to make that easier over time. Like functional decomposition, like design, like re-factoring: the more you do it, the easier it gets, the stronger your instincts become, and the more you're able to trust them.
Be disciplined. This is something else I've touched on, and I think it goes hand-in-hand with the notion of practising something over and over again. It matters especially for performance and load testing, and you can incorporate both into automated unit tests; a lot of the same tools you use for automated unit tests, I've found, work well for load and performance testing. There are decorators in the JUnit library itself for timed tests and repeated tests, which stands testament to that. The notion here, though, is that you don't want to get into a situation where you have multiple variables and you're trying to isolate which one is responsible for some behaviour. You don't want to get into a mode of thrashing, and the only way to avoid that is to be disciplined. Take your time, take a deep breath, slow down, and remember that you only want to change one variable at a time when you're trying to isolate some behaviour ― whether it's defective behaviour or a performance issue you're trying to track down.
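A minimal, dependency-free sketch of a timed, repeated check ― similar in spirit to what the JUnit decorators (or JUnit 4's timeout support) give you, though the `withinBudget` helper and its numbers here are invented for illustration:

```java
public class TimedCheck {
    // Runs `work` `repetitions` times and reports whether the total run
    // stayed under `budgetMillis`. Repetition smooths out one-off jitter;
    // everything else (input, repetitions, budget) is held constant so
    // that only one variable changes between measurement runs.
    static boolean withinBudget(Runnable work, int repetitions, long budgetMillis) {
        long start = System.nanoTime();
        for (int i = 0; i < repetitions; i++) work.run();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        return elapsedMillis <= budgetMillis;
    }

    public static void main(String[] args) {
        // Sorting a small array 1000 times should comfortably fit the
        // (generous, illustrative) 2-second budget on any modern machine.
        boolean ok = withinBudget(() -> {
            int[] data = {5, 3, 8, 1, 9, 2};
            java.util.Arrays.sort(data);
        }, 1000, 2000);
        System.out.println(ok ? "PASS" : "FAIL");
    }
}
```

The discipline point lives in the parameters: if you change the input size, the repetition count, and the budget all at once between runs, you can no longer tell which change moved the number.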
Next: always, always, always cover your failure modes. You want to make sure that when you get unexpected input, when your code is used in an unexpected manner, it fails gracefully. I don't remember where I picked this up, but one of my mantras when it comes to failure modes is "fail early and fail often," which means that, as you're writing code, if there's an assumption in your head about something that should or shouldn't be true, put it in there. Put an assertion in there. Design by contract is a good thing. The sooner you can check it, the more defensive you can be in your programming, the better off everyone is going to be ― especially, in my experience, in C and C++, where a bad pointer dereference or a buffer overrun can cause a very strange, unrelated pathology later on in your code if it goes uncaught. This is critical. If you're able to catch a problem ― a mismatch between an array length and the actual allocation, something like that ― early on, when you've got all of the information to make sense of it, as opposed to smashing the stack later, it's just going to make your life a lot easier.
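Here's what "fail early" looks like in a sketch ― the `copyInto` helper is hypothetical, but the pattern is the general one: validate at the boundary, where you still have the context to produce a meaningful error, instead of letting bad data propagate into a confusing failure later:

```java
public class FailFast {
    // Hypothetical helper: copy one array into another, checking the
    // length assumption the moment we can still explain it clearly.
    // Without the guard, the bug would surface later as an opaque
    // ArrayIndexOutOfBoundsException far from its cause (or, in C/C++,
    // as a silent buffer overrun).
    static void copyInto(int[] source, int[] destination) {
        if (destination.length < source.length) {
            throw new IllegalArgumentException(
                "destination too small: " + destination.length + " < " + source.length);
        }
        System.arraycopy(source, 0, destination, 0, source.length);
    }

    // The failure-mode test: the unexpected input must fail loudly
    // and immediately, with a diagnosable error.
    static boolean mismatchCaughtEarly() {
        try {
            copyInto(new int[]{1, 2, 3}, new int[2]);
            return false; // the bad call was silently accepted
        } catch (IllegalArgumentException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(mismatchCaughtEarly() ? "PASS" : "FAIL");
    }
}
```

Note that the failure mode gets its own test: covering only the happy path would leave the guard itself unexercised.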
Automation is your friend. I've spent some good time talking about this in the JUnit 4 discussion. You can use code to write tests just as easily as you can to write programs, and everything you know about writing good code can be leveraged to write good automation for your unit tests. An experience I want to share here: I've seen some people write extraordinarily monolithic unit tests, without using functional decomposition or a little bit of design and re-factoring. It's still code. Write good code there, too. You're going to have to read your unit tests later on, and somebody else may have to read them to understand them, so write them well. You might be able to cut a few corners ― maybe not dropping in your copyright boilerplate notice, maybe not writing quite as verbose code documentation at the class and function level ― but still, try to write the best code possible in the automation.
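A small sketch of that decomposition advice ― the roster fixture is invented for illustration. Instead of one monolithic test method that builds data, exercises several behaviours, and checks everything in a pile, the shared setup is factored into a helper and each check is small and named for the one thing it verifies:

```java
import java.util.ArrayList;
import java.util.List;

public class DecomposedTests {
    // Shared fixture construction, written once (hypothetical data);
    // each check gets a fresh copy, so checks can't interfere.
    static List<String> freshRoster() {
        List<String> roster = new ArrayList<>();
        roster.add("ada");
        roster.add("grace");
        return roster;
    }

    // One behaviour per check, named for what it verifies.
    static boolean addGrowsRoster() {
        List<String> roster = freshRoster();
        roster.add("edsger");
        return roster.size() == 3;
    }

    static boolean removeShrinksRoster() {
        List<String> roster = freshRoster();
        roster.remove("ada");
        return roster.size() == 1;
    }

    public static void main(String[] args) {
        System.out.println(addGrowsRoster() && removeShrinksRoster() ? "PASS" : "FAIL");
    }
}
```

When a check fails, its name tells you which behaviour broke ― something a monolithic test can't do without debugging into it.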
And then I also just want to warn you off from a couple of what I consider, in my experience, to be pitfalls. I am not a terribly huge advocate of test-driven development, and I'll tell you why. I tend to fall into the camp that holds that it can lead to unmaintainable design ― that pure, bottom-up software development sacrifices a little too much of the oversight and big-picture understanding in the design of software, and therefore leads to very crufty, hard-to-maintain, and sometimes very brittle software.
This is not to say that I think all your design should be done up-front, atomically, down to the last detail. We'll talk about design in a future instalment, and you'll get a better sense of what I'm talking about here. But I think that you should take any recommendation that comes from an advocate of test-driven development with a grain of salt, and you should find the right balance that makes the most sense for you.
And then, as a closing thought: not all environments are particularly well suited to automated unit testing, and you should understand where the limitations in the automation are and maybe find other strategies to supplement it ― EJB and database testing being notorious for this, just because of the set-up and tear-down costs. But also, be careful of falling down the primrose path of going too far to make your code "unit-testable." Don't sacrifice your design and solid implementation just to make it more testable; find other ways to test it. What I'm saying here is: don't make a bad choice about the access modifiers on a class or a method, or about the packaging or encapsulation of your code, just to make it easier to test. Ease of testing is good, but as far as I'm concerned it's a secondary priority to good design and good implementation. That's all I'm saying.
So that's going to do it for this instalment, number two of The Inner Chapters, on testing. Thanks.