I got a very interesting comment from Rick Pingry and I’m taking the liberty to respond to him in post format, because I think his questions deserve it. In general, Rick expresses his doubts about going TDD all the way and raises some really good questions about the practical side of TDD. What’s interesting (to me at least) is that he works on a C++ application, so it’s good to hear that even the C++ world is catching up with some modern trends :)

TDD costs and benefits

Rick writes:

After 6 months, we still struggle with TDD a bit. We have had SOME good experiences, but we are still waiting for that “I’ll never go back” experience. It still feels a good bit slower to do the TDD. In some ways this might be due to needing to use C++, which requires the compile and link time, even if we are able to run our tests themselves in 4 seconds.

Well, it’s expected and understood that the code-writing part takes longer compared to “cowboy coding”. After all, you probably have twice as much code to write and you don’t cut corners. But having written the code is not the end. I said “TDD makes you deliver faster”, but delivering is so much more than having written the code. I’m talking about the “done done” in XP terms. So, your cowboy friend might be long “done” with the code, but he is still fixing all the issues and his solution gets more and more clunky during this process, while you finish your TDD code and it is cleaner, better designed and free of the most obvious bugs :)

But it’s important to always do a cost/benefit analysis. The cost of TDD is the time you need to write (and later maintain) the tests, and the benefits are fewer bugs and better design. It’s nice to have 100% test coverage, but if it takes you too much time to achieve it, you’re probably overdoing it.

Manual testing

Rick again:

Doing TDD does NOT mean that we don’t manually test. I would never trust my automated tests so much that I don’t run every bit of code I write through at least a couple of use cases. There have been plenty of times that I have had all passing tests and something went wrong when I ran it, something I did not expect.

That’s right and that’s also expected. I don’t think anyone in their right mind would recommend NOT actually running the application if you do TDD. Of course, you need to run it and see if it “really works” and all the small, independent, tested parts cooperate. Typically, unit tests are rather low-level (they test small, low-level “units”) and it’s easy to lose the big picture. And if you lose it too often, maybe you should write more integration tests.

Also, if your company has a QA department, I don’t think they will fire all the testers just because your application is now developed with TDD and you have 100% test coverage. Even if you write integration tests and so on, they still test YOUR understanding of the requirements/user story/specs, which simply might be wrong. That’s the “problem” with automated tests: they don’t prove the code is right per se, they prove the code is right according to someone’s understanding of what it should be doing. And they don’t guarantee that all possible edge cases have been tested and taken care of.

Exploratory programming

Next point:

Most of the time, I start off needing to do “exploratory programming”, I suppose that would be called a “spike”. Usually I need to do something manually with the 3rd party libraries to make sure it works, as mentioned above. I am not sure at first what needs to be written, much less what TESTS to write, so I take it one step at a time and investigate, building the code as I go. How do you do TDD with that? The expected answer is to throw the code away and start over. I just can’t bring myself to do that. I finished the feature, tested it manually to make sure it works, and even my “cowboy” code is not that bad as far as style goes. Throwing the code away just to add tests, or even add tests after the fact when I already know it works seems counter-productive.

My feeling is that you still try to write production code while doing exploration, and that’s why it’s hard for you to throw it away. Somewhere along the way you probably pass the point where you’re done with exploration and start writing “real” code. That’s where you should stop and switch to TDD, at least in theory. In my case, when I need to explore how something works, I write tests for it (which I later also throw away in most cases).
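To make this concrete, here is a minimal sketch (in Python, just to illustrate the idea) of exploring a library through small “learning tests” instead of a scratch script. The assertions record what you discovered about the library’s behavior, and you can throw them away afterwards or keep them as documentation:

```python
# "Learning tests": explore a library by writing tiny assertions that
# pin down its behavior, instead of poking at it in a throwaway script.
from datetime import datetime

def test_strptime_two_digit_year():
    # Exploring: does %y map "70" to 1970 or 2070? (POSIX rule: 1970.)
    parsed = datetime.strptime("70-01-02", "%y-%m-%d")
    assert parsed.year == 1970

def test_strptime_rejects_incomplete_input():
    # Exploring: what happens when the input doesn't match the format?
    try:
        datetime.strptime("2020-01", "%Y-%m-%d")
        assert False, "expected ValueError"
    except ValueError:
        pass  # good: it raises instead of guessing the missing day

test_strptime_two_digit_year()
test_strptime_rejects_incomplete_input()
```

The same approach works in C++ with any test harness: each “test” is just a recorded observation about the third-party API you are investigating.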

Here’s a fragment of the “Rails Test Prescriptions” book that tackles this problem:

Testing your application assumes that you know the right answer. And while you will have clear requirements or a definitive source of correct output some of the time, other times you don’t know what exactly the program needs to do. In this exploratory mode, TDD is less beneficial, because it’s hard to write tests if you don’t know what assertions to make about the program. Often this happens during initial development, or during a proof of concept. I find myself in this position a lot when view testing—I don’t know what to test for until I get some of the view up and visible.

In classic Extreme Programming parlance, this kind of programming is called a spike, as in, “I don’t know if we can do what we need with the Twitter API; let’s spend a day working on a spike for it.” When working in spike mode, TDD is generally not used, but it’s also the expectation that the code written during the spike is not used in production, it’s just a proof of concept.

OK, next issue:

We have a hard time working out at what level we should be writing tests. Do we write tests around every class and function, mocking out all collaborators, or do we work at our own system boundary, mocking out the external interfaces? We struggle back and forth with this one. When we go to the class and function level, we have problems with the next two issues and we do not feel we are testing real issues from the users perspective. When we stay farther out, the tests are not as clear on testing one thing, have lots of duplication, and are larger and more difficult to write, but it is easy to see the use case and we feel confident that we are testing the integrations between our various classes and can easily cover refactoring.

This is another case where you probably need to differentiate between unit tests, functional tests and integration tests. With TDD you mostly write very simple, couple-of-lines test methods. As a result, you typically get many small tests per function, testing its various aspects. But if you feel that you need more higher-level tests, go ahead and write them, too.

My rule of thumb is that every test case should exercise code from one layer only. So, if you test model code, you stub or mock the DB accesses, and for a controller you stub the model calls, and so on (in the case of MVC). Of course, there are plenty of exceptions: if your model builds some complicated query, you might want to actually run it to verify the results.
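A minimal sketch of this rule (in Python, with hypothetical `UserModel`/`UserController` names): each test stubs only the layer directly below the one it exercises.

```python
# One layer per test: the controller test stubs the model, the model
# test stubs the DB access. Neither test crosses more than one boundary.
class UserModel:
    def __init__(self, db):
        self.db = db
    def find_active(self):
        return [u for u in self.db.all_users() if u["active"]]

class UserController:
    def __init__(self, model):
        self.model = model
    def index(self):
        return {"users": self.model.find_active()}

# Controller test: the model is replaced by a trivial stub.
class StubModel:
    def find_active(self):
        return [{"name": "alice", "active": True}]

assert UserController(StubModel()).index() == \
    {"users": [{"name": "alice", "active": True}]}

# Model test: only the DB access is stubbed; the real filtering runs.
class StubDb:
    def all_users(self):
        return [{"name": "a", "active": True}, {"name": "b", "active": False}]

assert UserModel(StubDb()).find_active() == [{"name": "a", "active": True}]
```

Note that the stubs are plain hand-rolled classes; a mocking framework works just as well, but isn’t required for the rule to apply.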

The same goes for 3rd party libraries or external services your code might depend on. You shouldn’t test them or use them in your “normal” unit tests (what if that external service goes down?). Use mocks instead in your unit tests and write some integration tests specifically for them. Using mocks will also help you make sure your code can deal with various disaster scenarios: the service is not responding, the response is mangled or ambiguous, the service returns some error codes or refuses to perform the action, etc.
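For example, the disaster scenarios above are easy to simulate with fake collaborators, even though they would be very hard to reproduce against the real service. This is a hypothetical sketch (the `PaymentClient`/transport names are made up for illustration):

```python
# Simulating failure modes of an external service with fake transports.
class PaymentClient:
    def __init__(self, transport):
        self.transport = transport  # injected, so tests can fake it
    def charge(self, amount):
        try:
            reply = self.transport.post({"amount": amount})
        except TimeoutError:
            return {"ok": False, "reason": "service unavailable"}
        if reply.get("status") != "accepted":
            # covers both refusals and mangled/ambiguous responses
            return {"ok": False, "reason": reply.get("status", "mangled response")}
        return {"ok": True}

class TimeoutTransport:
    def post(self, payload):
        raise TimeoutError  # service not responding

class RefusingTransport:
    def post(self, payload):
        return {"status": "declined"}  # service refuses the action

assert PaymentClient(TimeoutTransport()).charge(10) == \
    {"ok": False, "reason": "service unavailable"}
assert PaymentClient(RefusingTransport()).charge(10) == \
    {"ok": False, "reason": "declined"}
```

A separate, slower integration test suite would then talk to the real service to verify the happy path.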

As for the duplication in the tests, remember that tests are code, too. So refactor them as you would any other code. Make your tests clear and DRY.


Another issue from Rick:

When testing every class and method, the required de-coupling to deal with TDD seems to make the code MORE complex than necessary. This may be due to needing to pass in collaborators for almost everything through dependency inversion so that we can mock out these collaborators. We have to make sure the collaborators are passed through the constructor using some kind of “Composer” class. Every code change seems to complicate the base.

Well, this also bothers me a little bit, I must admit. But that’s how Inversion of Control works and it’s a Good Thing in the long run. Your design is more extensible, and this is one of the ways TDD improves your design. Your class, which was hardcoded to write results to a file, may now use any collaborator as long as it conforms to some interface. So you can make it write to a socket or a stream or a memory buffer just by passing the appropriate object. Sure, you probably don’t need this RIGHT NOW (and it violates the YAGNI principle), but who knows what the future brings? Here’s a very good article if you need more examples.
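The file-writer example might look like this (a Python sketch with a hypothetical `Reporter` class; in C++ the sink would be an abstract interface instead of duck typing):

```python
# Dependency injection: the reporter accepts any object with write(),
# so production passes a file and tests pass an in-memory buffer.
import io

class Reporter:
    def __init__(self, sink):
        self.sink = sink  # file, socket wrapper, buffer -- anything with write()
    def report(self, results):
        for name, ok in results:
            self.sink.write(f"{name}: {'PASS' if ok else 'FAIL'}\n")

# Production would do: Reporter(open("report.txt", "w"))
# The test needs no filesystem at all:
buffer = io.StringIO()
Reporter(buffer).report([("login", True), ("logout", False)])
assert buffer.getvalue() == "login: PASS\nlogout: FAIL\n"
```

The test never touches the disk, and the same class gains socket or buffer output for free.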

Having not seen your code, I can’t say why you have so many problems with DI/IoC. Sure, it’s tedious to do it by hand; maybe you should use some framework for it (or write your own helpers if you can’t). Have you tried any of the existing DI frameworks?

Another technique I use sometimes (when DI and/or mocking feel too heavy) is subclassing the tested class, or other classes used by it, and overriding the problematic methods. Using these subclasses in the test, I can achieve the same results as with DI or mocking, but it’s easier.
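The subclass-and-override technique, sketched in Python (the `ReportSender` names are hypothetical; the point is the shape of the technique, which works the same with C++ virtual methods):

```python
# Subclass-and-override: instead of injecting a mock, subclass the
# tested class and stub out just the method that touches the network.
class ReportSender:
    def send(self, report):
        payload = self.format(report)
        self.deliver(payload)  # talks to the network in production
        return payload
    def format(self, report):
        return f"REPORT: {report}"
    def deliver(self, payload):
        raise RuntimeError("no network available in tests")

class TestableReportSender(ReportSender):
    def __init__(self):
        self.delivered = []
    def deliver(self, payload):
        # override: record the payload instead of sending it
        self.delivered.append(payload)

sender = TestableReportSender()
assert sender.send("ok") == "REPORT: ok"
assert sender.delivered == ["REPORT: ok"]
```

The real `format` logic is still exercised; only the problematic side effect is replaced.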


Rick writes:

Going along with that, we have found that having test coverage, while it helps us to be confident that we are not adding defects, actually makes refactoring HARDER.

This might indicate that your tests are too intimate with the production code. But there’s also a more general issue. I think it’s unavoidable that unit tests will require some maintenance. They’re just a bunch of code, after all. And if your requirements constantly change, having extensive unit tests might be a real PITA. But it may also be a blessing, as long as you start accommodating the changes in the tests first. Some people go and change the code first and then complain: “OMG, now half of my tests are broken!”.

But well-written tests can facilitate refactoring, too. Sometimes, while refactoring, you may break your code inadvertently, and that’s when it’s good to have unit tests in place. When I need to refactor some code, I always write tests for it first (unless it’s already tested, of course).

Back to Rick:

Specifically, we are trying to work out the proper way to do an “Extract Class”. If you had class A with some functionality you extracted into class B, Do you move the tests that were already in-place into the tests for B then test A against a mock of B? Or do you leave the tests in place for A and make B an inner class? What if other clients want to use B? These kinds of things make it so we don’t want to refactor at all so as to not break our tests. Perhaps we are testing at the wrong level?

Sorry, I can’t answer this question generally; I think it should be decided case by case. But according to my rule of thumb, B is in the same layer as A, so you don’t need to mock it. As I said before, your tests might be too dependent on implementation details (I often have the same problem, too), but you should not forsake refactoring.
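An Extract Class sketch (hypothetical `Invoice`/`TaxCalculator` names, Python for brevity): the pre-extraction tests keep asserting against A’s public behavior and pass unchanged, and since B lives in the same layer, the real B is used rather than a mock.

```python
# After extracting TaxCalculator (B) from Invoice (A), the existing
# Invoice tests still pass against its public behavior, with no mock of B.
class TaxCalculator:
    def tax(self, amount):
        return round(amount * 0.23, 2)

class Invoice:
    def __init__(self, net):
        self.net = net
        self.calculator = TaxCalculator()  # the extracted collaborator
    def total(self):
        return self.net + self.calculator.tax(self.net)

# The test written before the extraction still holds:
assert Invoice(100).total() == 123.0
# And new clients can use the extracted class directly:
assert TaxCalculator().tax(50) == 11.5
```

Only if `TaxCalculator` later grows its own slow or external dependencies would it become a candidate for stubbing in `Invoice`’s tests.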

You might want to identify the pattern in your tests that prevents or impedes refactoring and try to avoid it. From my experience, the single worst problem is that stubbing/mocking in most cases requires a very precise definition of which method is mocked, what the receiver and parameters are, etc. These are all implementation details and they’re bound to change, so you need to be very careful not to get too close to the production code. Which is difficult.

8 responses to “TDD Q&A”

  • Rick Pingry

    Thanks for the great reply. I am still working through some of your links. I think the core of my questions now come down to my tests being too intimate with my production code, as you say.

    You mention dealing on the “layer” level. Let me make sure I understand your meaning there (not being a web programmer myself). You mean layers like Database->BusinessLogic->UI kind of layers in a web page right? (perhaps there are more or less). I suppose in my case they would be organized in some other way, but the idea I am getting is that a Layer is a group of classes where we have some clear interface between the layers. Perhaps one class is the gatekeeper for the rest of the layer and may delegate functionality to other helper classes on the same layer but all of the tests for that layer go through that class and mock the functionality of the layers it coordinates with? That makes sense and is something we are moving towards, rather than the extremes of testing every single class in every layer against all of its collaborators, or only testing at our system boundary. Are we going the right direction there?

    Most importantly, I imagine that you use TDD to drive the interface between the various layers of your application, right? I imagine that you write a test which drives your understanding of the needs of that layer’s interface, then implement just enough to make it work in the simplest way possible, then refactor the code smells. Red-Green-Refactor. Do you use TDD while refactoring as well to drive the interactions between classes in a particular layer? I used to think that I had to use TDD everywhere, between every little class, but that made me go crazy and it seems my tests are all too intimate. The refactoring book I have says that you do NOT write new tests while refactoring (unless you see a code path you missed I suppose). I think this is the big point I am needing to make it all click in my head.

    thanks again
    — Rick

  • Rick Pingry

    Oh, and one other thing? For those of us not in web programming, how big is a layer (I call them Modules)? Do you come up with a general high level design of your application up front using CRC cards or UML or something and then work from there, or can you let TDD drive even deciding what the layers are by creating integration level tests and then working down?

  • szeryf

    Rick, I think your understanding of layers is perfect. One important thing I would add to it is that layers have a strictly defined hierarchy of dependencies and calling. So, typically the Business Logic layer depends on Database layer (and calls functions/methods from there), but not the other way around.

    No, I don’t think you do TDD while refactoring. Refactoring is “changing the code structure without changing its results”. So, you don’t touch your tests while refactoring, because they are there to guarantee you did not break anything by refactoring. BTW, refactoring is a part of TDD, not the opposite :)

    As for the design, in most cases it’s so simple I just come up with some general idea in my head of what the classes should be. Or extract them from existing classes if I see them emerge.

  • Andrzej Krzywda

    Excellent post! I wouldn’t write it better.

    As for “tests being too intimate with my production code” – I try to solve this problem with 2 things:

    1. Write declarative code.
    2. Eliminate (unit) tests for well written declarative code.

    There are many definitions of declarative code. My rule is to avoid nested ifs and switch statements, have short methods, etc.

  • Jeo

    Hmm….your defense of DI/IoC sounds a lot like the man building a castle to roast a rabbit.

  • szeryf

    Heh, Jeo, good point. Of course, if you go overboard with DI/IoC (or any other design pattern), it’s a lot like that :)

    But the point here is a little bit different: you need to make your code testable, which means you need a way to plug-in the collaborators and DI is the most popular way to do it. So you use (or reinvent) DI because you need it, not the other way around.

  • Rafael Ribeiro

    Rick and szeryf, it’s really good to see two people so focused on TDD.

    One of the most important things TDD gives me is: Peace of Mind!

    Rick, you said:

    “Throwing the code away just to add tests, or even add tests after the fact when I already know it works seems counter-productive.”

    When you work with a team, or even when you spend some time (for me one day is enough :D) without working on that particular code, you never know when it will STOP WORKING!

    TDD gives me relief because if something breaks my code (another developer, a refactor, a new feature, etc.) I will be notified. And I will be able to fix it as soon as possible, in the development environment, instead of being notified by the end user about a new bug (or an old one) happening in the production environment.

    I see that your posts are almost a year old; how are your (both of you) TDD approaches these days?

  • Rick Pingry

    Hi Rafael,
    It was cool to re-read some of this stuff after a year.

    My partner and I are still working on TDD. Still the only ones on our team. Still feeling like novices, but we think we are getting better. We have had a few question threads over on the GOOS threads, and a lot of great help from jbrains, Nat Pryce, and the guys over there. They are amazing, and I am the guy asking all kinds of obnoxious questions.

    Over the year, we bounced back and forth in our frustration level, from doing it to not doing it. To be honest, I could never quite buy into szeryf’s claim that it was all faster, especially since we were such noobs at it (even after working on it a year). We did learn some lessons, mostly around the fact that when testing felt frustrating, it was not because TDD was at fault, but more because there was something wrong with our design. We kept wanting to write tests based on the design we already had in our heads, and that is definitely the wrong way to do it.

    Fact is, though, that every line of code we wrote takes (it feels like) many times longer to write. This is what everyone that tries it sees first. The CLAIM is that it saves you all that time and more when trying to track down bugs, and that you never need to use a debugger. Well, to be honest, I am pretty good with a debugger, and I don’t mind using one when I need to. I also don’t believe that every bug I come across could have been prevented by TDD. So you have to weigh all of the time you WILL spend up-front against the REAL cost of debugging what could have been prevented. I would need to see some hard un-biased data to believe it. At least that was my thought until just recently …

    So, we finished the ActiveX control. The vast majority is under test and it is working well. I occasionally have some bugs, but they are almost always based on some misunderstanding of the external boundary. Whenever I need to do maintenance work, I find that it is typically easy to add new stuff, and I sometimes am tempted to not write tests first.

    So, once that was finished, I was invited to make a change to the web side and learn ASP.NET, C#. There has been a bit of a learning curve with it all, particularly getting my head wrapped around what happens on the client and what happens on the server, life-cycle of a page, and all that, but I am enjoying it.

    The biggest thing is that I am now working on the code base that everyone else is working on. Up until now, I had almost always worked on my own code. WHAT A MESS! Everything is just disgusting with duplicate code, implicit side-effects, and all kinds of dependencies, explicit and implicit. The other day I was working on code that I did not understand and did an SVN blame to figure out what was going on. I talked to the guy that did it, and as we were working through it, I started to clean it up a bit with some refactoring. He said “STOP! I know the code is pretty ugly, but you don’t know what you are going to break by changing that…”.

    And it is true. I can’t just blame the people that wrote the code now. It is a big application, and with every new thing I add, there are a zillion dependencies and I don’t want to touch anything for fear of breaking it, so I get in there, add my feature or fix with bubble gum and duct tape, and move on. I added my own little bit to the chaos. The code is the epitome of what Michael Feathers talks about in “Working Effectively With Legacy Code”. I realized that perhaps I was not seeing the full benefit of TDD before, because I was always working on something relatively small, by myself, and it never got to be such a mess that I thought TDD was worth the trouble.

    Well now I have. I can see that either you resign yourself to working like this, the way everyone I know does, or you get disciplined and do TDD. We have tried so hard to think of another way, and we just can’t come up with another way to do it. Is it hard… YES! We have learned a good bit over the last year, and now that I am using C# with all of its wonderful tools, things have become much easier. We still have questions, and sometimes those questions do not have silver bullet answers from the gurus. But we have come to the conclusion that there is simply no other way.
