Comments on: Test Driven Development is alive and well https://sdtimes.com/ca-technologies/test-driven-development-alive-well/ Software Development News Mon, 14 Aug 2017 22:21:18 +0000
By: Gordon Zuehlke https://sdtimes.com/ca-technologies/test-driven-development-alive-well/#comment-8418 Mon, 14 Aug 2017 22:21:18 +0000 https://sdtimes.com/?p=26418#comment-8418 In reply to John Keklak.

Good work.

By: Hakan Akdag https://sdtimes.com/ca-technologies/test-driven-development-alive-well/#comment-8415 Sat, 12 Aug 2017 22:07:35 +0000 https://sdtimes.com/?p=26418#comment-8415 I am not a fan of TDD but I have to say that it teaches a lot to a beginner software developer.

Imagine you work in a company as a software architect or senior developer, and you have to bring on a junior developer. Your job is to pick the right person for the company, and to do that, you should ask the right interview questions. What I want to know is whether he or she has a solid understanding of the basic concepts of OOP.

Later, your junior starts working at the company and starts coding. What should he learn first? If you already apply TDD in the project, he will learn a lot: how to write unit tests, what loose coupling is, what good architecture looks like, and so on. But without TDD he will develop code with his own techniques, which may be good or bad practices (more likely bad).

Just saying…

By: Semsudin Sefic https://sdtimes.com/ca-technologies/test-driven-development-alive-well/#comment-8391 Mon, 07 Aug 2017 09:26:09 +0000 https://sdtimes.com/?p=26418#comment-8391 Great reply John!

By: Dave Smith https://sdtimes.com/ca-technologies/test-driven-development-alive-well/#comment-8379 Fri, 04 Aug 2017 19:34:41 +0000 https://sdtimes.com/?p=26418#comment-8379 John Keklak, you should’ve written the article:)

By: John Keklak https://sdtimes.com/ca-technologies/test-driven-development-alive-well/#comment-8368 Thu, 03 Aug 2017 18:46:46 +0000 https://sdtimes.com/?p=26418#comment-8368 There is great value in writing tests before doing development, but this article barely makes the case. Instead it leans on current buzzwords, hoping that these popular terms will convince the reader, rather than offering illuminating reasons for writing tests before writing code.

Meanwhile the article more or less dodges a discussion of the main complaints about TDD, while merely acknowledging that developers often don’t like it and find it to be “extra work”.

Worse yet, the article passes along claims that are not true at all, such as HPE’s Kelly Emo remarking that “…[the] Test Driven Development is dead belief is often coupled with the belief that testing as a practice is dead”. Testing is hardly dead. It is utterly foolish to write code without testing it in some way before releasing it to users.

————

One area where creating tests before creating software has overwhelming value is the phase of a project when developers are determining exactly what a product should do. The value of such tests lies in their ability to clarify, and even prompt, communication with the “customer”.

For instance:

(Developer presents a test to Customer.)

Developer: “So, the software is going to do this in this case. Is that OK?”

Customer: “No.”

Developer: “No? Then what should it do in this case?”

Customer: “It should do something more like this.”

(Developer goes away and revises the test.)

Developer: “Here’s a revised test. Is this correct?”

Customer: “More or less.”

In this way, any sort of written requirements or verbal description is bolstered by some rather quantitative examples.

Clearly, common sense must hold sway when determining what constitutes an appropriate number and type of such supporting test cases.
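The dialogue above can be captured as an executable specification. A minimal sketch in Python, where `calculate_shipping`, its signature, and all of the rates are hypothetical illustrations (nothing here comes from the article):

```python
# Hypothetical executable specification: the function does not exist yet,
# and the rates below are made-up numbers a developer would show a customer.

def calculate_shipping(weight_kg, international):
    """Stub answering the spec: $5.00 base + $2.50/kg, tripled for international."""
    base = 5.00 + weight_kg * 2.50
    return round(base * 3, 2) if international else round(base, 2)

def test_domestic_three_kg():
    # Developer: "A 3 kg domestic package will cost $12.50. Is that OK?"
    assert calculate_shipping(3, international=False) == 12.50

def test_international_three_kg():
    # The customer already revised this case once: international is 3x domestic.
    assert calculate_shipping(3, international=True) == 37.50

if __name__ == "__main__":
    test_domestic_three_kg()
    test_international_three_kg()
```

Each test is a precise, reviewable claim about behavior — exactly the quantitative backing for written requirements described above.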

Although not formally part of the definition of TDD, Walter Kapitani of Rogue Wave endorses this practice when he says that “…[TDD] promotes the idea of understanding what you are trying to build before you start building it… It exposes weaknesses in requirements, in the architecture, and even in your test infrastructure before you start trying to build something.” Jason Hammon of TechExcel concurs.

———

Such test cases have an immediate additional use. In more than one instance in my career, I passed such tests on to quality assurance staff, who not only greatly appreciated the work I had done for them, but also had a sound foundation on which to build their “torture test” cases. Other developers were typically horrified when I did this, since most of them generally played a form of “hide and seek” with quality assurance.

In one particular project, an amusing unintended consequence arose. Once I had begun work on the actual code, quality assurance staff reported to my management (as a matter of routine reporting, and not with any sort of malicious intent) that an overwhelming percentage of the test cases for my project were failing. By comparison, test cases for most other projects were passing at a high rate. This alarmed the management and nearly cost me my job until they learned that my project had hundreds of tests, some quite complex, while most other projects had five or ten rather simple tests. Moreover, as my project progressed, the percentage of test cases that passed increased steadily, while the number of test cases did not grow much, restoring my management’s belief that I knew what I was doing.

Eventually all my test cases succeeded, and quality assurance had to be quite clever to find reasons to add more tests. Meanwhile in other projects, the number of test cases grew steadily throughout the projects (a test case was usually added each time a bug was discovered), and those projects seemed like they would never converge to a completed and robust state.

——-

Maintaining a collection of tests also has great value for identifying when a software change has broken a component of a system. I’ve had projects where 500+ tests checked components for deviations from expected behavior, alerting me in seconds to potential regressions with almost no work on my part. Usually such a collection consists of the tests created during the requirements-clarification process, augmented by additional relevant tests. Developers rarely object to such labor-saving methods.

Where TDD runs into opposition from developers is when e-v-e-r-y s-i-n-g-l-e c-h-a-n-g-e requires a developer to first contrive a test case. This is often taken to such an extreme that the development process becomes “create a test case, get it to work however you can, rinse and repeat”. This practice is more or less the formal definition of TDD, and has two major failings.

First, test maintenance grows O(n^2) in the worst case: each of n changes to the system can require revisiting up to n existing tests. Increasingly, the fraction of time spent maintaining tests overwhelms all other work. Developers detest this grunt work, which is often required for dogmatic reasons and does not deliver much value.

Second, the “create a test case, get it to work however you can, rinse and repeat” approach often leads to horrendous logic and data structures. It does not lend itself to diagramming a plan on a whiteboard, banging out the code for the skeleton of that design (i.e., writing a fair amount of code before doing any testing), filling in the obvious parts, and then testing and fixing, usually by adding missing code. Instead, it makes it difficult to create the appropriate skeleton, and generally leads to a mishmash of scripts customized to pass tests (albeit often well-refactored scripts). I’ve personally witnessed programmers literally writing scripts to get their current tests to pass.
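This failure mode can be sketched concretely. In the following hypothetical Python example (the `roman` function and its tests are invented for illustration), each test is satisfied by the cheapest possible change, so the “design” is just an accumulation of special cases:

```python
# Hypothetical sketch of "create a test case, get it to work however you can":
# each passing test adds a branch; no general structure ever emerges.

def roman(n):
    if n == 1:        # added to pass test_one
        return "I"
    if n == 2:        # added to pass test_two
        return "II"
    if n == 4:        # added to pass test_four; a bolted-on special case
        return "IV"
    raise NotImplementedError(n)  # everything untested stays unwritten

def test_one():
    assert roman(1) == "I"

def test_two():
    assert roman(2) == "II"

def test_four():
    assert roman(4) == "IV"
```

All three tests pass, yet the code embodies no plan; the whiteboard design (a value table and a loop) would never fall out of this process.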

A related issue is that TDD becomes an obstacle when it becomes obvious that a significant change in architecture or logic is needed, for instance, in changing from a single-process approach to a map-reduce approach. Typically, in such a situation, one needs to return to the whiteboard, sketch out what needs to be changed, and make these changes wholesale. During this time, it makes no sense to run any tests. The code is utterly broken until the restructuring is done. Only after the restructuring is done does it make sense to run the system to see which tests pass and which tests fail.

I can already hear TDD proponents’ responses to this failing. “Use what works”, they’ll say, “We never said not to whiteboard a design and bang out the initial code.” Well, then what is TDD? Is it merely saying, “Use tests?” In that case, TDD is nothing new. Tests have been obvious and necessary from the dawn of programming.

———

A failing related to the O(n^2) growth of maintenance work has to do with culling tests that are no longer necessary. Eventually, when fundamental changes occur in a system (say, a set of utility functions are replaced by a newer set), it makes sense to cull out the tests for the code that is no longer in use. This is usually more easily said than done, and the easiest thing to do is not to worry about culling tests at all. This only compounds the O(n^2) problem, since now developers are left wondering which tests are still relevant, and which tests are not.

———

The article also propagates a misconception about software development that is remarkably persistent, yet patently false. Software developers do not “build” systems or code. Software development is primarily about gathering knowledge to resolve unknowns. Once the necessary knowledge has been gathered, creating code is rather trivial. Of course, because of the potential for human error, it is necessary to verify with tests that the code does what it is intended to do.

Incidentally, by shifting to a software development paradigm based on resolving unknowns, many things become easier. Estimating how much time is required for a project becomes easier. Understanding the status of a project becomes easier. Discussing projects with developers becomes easier.

Tests are an integral part of this “resolving unknowns” paradigm, since they are key for identifying unknowns. For instance, a test that checks the validity of JSON code naturally leads to the question, “How is this JSON to be produced?” or perhaps even, “What is JSON code?”
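As a sketch of how such a test surfaces unknowns, here is a minimal Python example (the `is_valid_json` helper and the sample payload are hypothetical):

```python
import json

def is_valid_json(text):
    """Return True if text parses as JSON; a sketch of the validity check."""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

def test_exported_record_is_valid_json():
    # Writing this test immediately raises the unknowns mentioned above:
    # who produces this JSON, and what fields must it contain?
    exported = '{"id": 1, "name": "widget"}'  # hypothetical export payload
    assert is_valid_json(exported)

def test_truncated_export_is_rejected():
    assert not is_valid_json('{"id": 1,')
```

The test itself is trivial; its value is that it forces the questions about the payload's producer and contents to be answered before coding starts.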

——–

So, in short, writing tests before writing code is an excellent practice, and has many immediate and follow-on benefits. TDD, as it is formally defined, does not even begin to deliver these benefits. If anything, it stunts the ability of developers to create professional quality code.

By: Henrik Thor Christensen https://sdtimes.com/ca-technologies/test-driven-development-alive-well/#comment-8367 Thu, 03 Aug 2017 16:17:44 +0000 https://sdtimes.com/?p=26418#comment-8367 TDD has been the mantra for a while; by now another mantra is probably the buzz of the industry.
When a new mantra emerges, read about it, take the gold nuggets out of it and use them, but don’t make it a religion – you may mention it on your CV, but realize that it is not a new incarnation of some God.

By: Paul Gehrman https://sdtimes.com/ca-technologies/test-driven-development-alive-well/#comment-8366 Thu, 03 Aug 2017 15:11:41 +0000 https://sdtimes.com/?p=26418#comment-8366 The author makes some good points, but TDD is still dead. Good riddance. It has way too many limitations and, frankly, is and always has been faddish.
