Testing Keeps Me From Getting Things Done

A while ago we received these interesting questions from a developer via e-mail:

"I have been thinking about unit tests for quite a while. On the theoretical level, I would immediately agree that they are a very good thing. My experience, however, which is not yet comprehensive, has been rather disillusioning. Writing unit tests definitely costs time. Studies talk about 30% more development effort. Maybe I am doing something wrong, but I perceive a factor of 2.

What does unit testing mean for the developer? Do you have to accept that you write less production code? Is there a benefit that justifies the effort in the long run? Even without unit tests, I am not drowning in bug reports so far.

My suspicion: proponents of unit testing mostly work on libraries (in the broadest sense): non-trivial code that uses easy-to-grasp objects (with regard to the number of attributes and methods). This code should be universally applicable (not only in a specific application), and there are extremely high requirements for stability and freedom from defects (the masses will suffer if that is not the case). All of these are characteristics that do not apply to straightforward application code.

Imagine I am working on a typical controller method: fetch data from somewhere, maybe trivially wrap it in a view model, and pass it on to the frontend. Why should I unit-test that? Okay, it is not really difficult to do so, but it is tedious. When I replace the data access with a test double, I have to simulate rather broad data containers (with a lot of attributes and methods) most of the time. This requires a lot of typing and is tiring. And what did I gain? I already believed that the code works before I wrote the test.

Does it really pay off to unit-test each and every, even trivial, piece of functionality? Controllers, for instance? I mean, in the end, a well-structured application is made up of methods that, for the most part, only delegate work to collaborators.

Does an integration test, for instance starting in the controller and going all the way to the database, not provide the same benefit?"

We give our answer – with his permission – publicly, because we frequently get similar questions. Each of these questions is important, sensible, and understandable. In fact, we believe that they are different perspectives on a single, unasked core question.

Are we done yet?

Developing software successfully means working toward clear targets. These targets should be derived from acceptance criteria that are reconciled with the business. Without clear targets – we mean at the task level, not project or annual targets – developers run the risk of getting lost in their work. Most importantly, they do not know when they are done with a task.

It is prudent to document and verify acceptance criteria through automated tests. One way or another, the targets have to be defined before production code gets written. This is test-driven development, whether you want to call it that or not.
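To make this concrete, here is a minimal sketch in Python, using a hypothetical shipping-cost rule (the function name, the threshold, and the prices are all invented for illustration). The acceptance criterion "orders of 100 or more ship for free" is written down as a test before the production code exists; the test defines the target and tells us when we are done.

```python
# The acceptance criteria, written as tests FIRST. They document the
# business rule and define what "done" means for this task.
def test_orders_of_100_or_more_ship_free():
    assert shipping_cost(150.00) == 0.0

def test_smaller_orders_pay_standard_shipping():
    assert shipping_cost(20.00) == 4.95

# Only now is the production code written -- just enough to satisfy
# the criteria above. (Values and names are hypothetical.)
def shipping_cost(order_total: float) -> float:
    return 0.0 if order_total >= 100 else 4.95

test_orders_of_100_or_more_ship_free()
test_smaller_orders_pay_standard_shipping()
```

The point is not the trivial arithmetic but the order of events: the business goal existed, in executable form, before the implementation did.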

When writing automated tests after the fact is perceived as a painful, additional burden, a clear business (not technical) goal was missing at the time the production code was written. Testing retroactively leads to problems because it is not clear why the code to be tested exists. How are you supposed to write meaningful tests in a situation like that?

The primary task of a developer is not to write code but to understand a problem. Writing tests helps to understand the problem step by step. The tests do not drive development as an end in itself; they drive development to trigger the required thought processes.

Does trivial code have to be tested? It is precisely here that mistakes happen, because nobody looks closely, and precisely here that you look for problems last. A test should not distinguish between trivial and complex code. After all, it should not test implementation details but rather verify acceptance criteria.
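The reader's complaint about simulating "broad data containers" often points at exactly this: the test is coupled to implementation details instead of to an acceptance criterion. The following sketch (all class and method names are hypothetical) shows a trivial, delegating service tested through its observable behavior, with a small hand-written test double instead of a simulated wide data container.

```python
class GreetingService:
    """A trivial, delegating service -- the kind the reader asks about."""
    def __init__(self, repository):
        self._repository = repository

    def greet(self, user_id):
        name = self._repository.find_name(user_id)
        return f"Hello, {name}!"

class InMemoryRepository:
    """A hand-written test double: only the one method the test needs,
    not a simulation of a broad data container."""
    def __init__(self, names):
        self._names = names

    def find_name(self, user_id):
        return self._names[user_id]

# The test verifies the acceptance criterion ("a user is greeted by
# name"), not which internal calls the service happens to make.
service = GreetingService(InMemoryRepository({42: "Ada"}))
assert service.greet(42) == "Hello, Ada!"
```

Because the test pins down behavior rather than structure, it survives refactorings of the delegation, and the double stays a few lines long.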

Integration Tests rather than Unit Tests?

An integration test seems sufficient as long as nothing out of the ordinary happens while exercising the production code and no error occurs, especially since an integration test usually requires less effort to write. Errors in the so-called happy path are obvious because they noticeably break a feature.

The more changes are made to the code over time, the more execution paths are taken that nobody has thought of before. An integration test that only focuses on the happy path is then no longer sufficient to protect the changes against mistakes. The more execution paths you have to test, the more useful unit tests become. They are faster to execute and provide a more precise result than integration tests.
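A small sketch of how paths multiply, using a hypothetical input-parsing function (the name and the validation rules are invented for illustration). One integration test through the UI might cover the happy path; each edge case below is a separate execution path that a unit test pins down in milliseconds, without driving the whole stack through the database.

```python
def parse_quantity(raw: str) -> int:
    """Turn user input into an order quantity (hypothetical example)."""
    text = raw.strip()
    if not text:
        raise ValueError("quantity is required")
    value = int(text)  # raises ValueError for non-numeric input
    if value < 1:
        raise ValueError("quantity must be at least 1")
    return value

# Happy path -- the case everybody thinks of first.
assert parse_quantity("3") == 3

# Execution paths that accumulate over time; each is one cheap,
# precise unit-test assertion rather than one slow end-to-end run.
assert parse_quantity(" 7 ") == 7
for bad_input in ["", "   ", "0", "-2", "abc"]:
    try:
        parse_quantity(bad_input)
        raise AssertionError(f"expected ValueError for {bad_input!r}")
    except ValueError:
        pass
```

When one of these paths breaks, the failing unit test names the exact function and input, whereas a failing integration test only tells you that something, somewhere along the way, went wrong.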

Of course, you focus on the happy path first. Later, when you start to think about edge cases and special circumstances, your focus shifts from verifying that the code works today to making sure that it still works in the future, when changes are made that you have not thought about yet.

Measuring developer productivity by the amount of code written reflects a mistaken notion of software development. Bill Gates recognized this when he said:

"Measuring programming progress by lines of code is like measuring aircraft building progress by weight."

Having clear business goals leads to writing less production code. Automated testing supports exactly this.

About the authors

Sebastian Bergmann

Sebastian Bergmann, creator of PHPUnit, is an internationally sought-after expert who has played a vital role in professionalizing the PHP community.

Arne Blankerts

Arne Blankerts has been creating solutions ahead of their time for years, and finds security issues with almost magical intuition.

Stefan Priebsch

For over 20 years, Stefan Priebsch has been finding sustainable solutions using a unique blend of new ideas and proven approaches.
