
Extreme Danger

Programming extremism is dangerous.

Once a programmer latches on to some process or idea that makes him more successful, he tends to treat it like a subroutine, calling into it over and over again to get his job done. It becomes part of his fundamental toolkit.

However, over time, for some reason, programmers also seem to forget that the helpful process or idea they found is just a mechanism, and not some sort of natural law. As a result, sometimes the helpful process or concept itself becomes dogma. Dogma is a form of extremism: the belief that an idea is superior even when there's scant proof of its superiority, or when there is an existence proof to the contrary.

Using dogma is almost never useful except in one very specific circumstance. You can use dogma successfully to box someone very inexperienced into a particular worldview for the purpose of getting output from them: inexperienced programmers are willing to accept a limited worldview, filled with all sorts of dogma, because they often lack the experience to make reasoned decisions otherwise. Dogma actually helps inexperienced programmers produce things, because they don't get wrapped around as many axles when trying to make decisions about how to get something done: to them, dogma acts as a bridge between a problem and a solution. Without the dogma, they might never reach a solution.

But sometimes, it can hurt. Let's take, for example, treatises on testing. I see lots of blog posts with this theme: "test-driven development helps me, because, at the end of the process, I wind up with a codebase that has tests." Such posts might mention other benefits, such as "test-first" as a design aid, but the central theme of the blog post usually marvels at the idea that the author can later refactor his codebase without unintentionally breaking it. To the author of such a post, "test-driven development" is testing; there is no other kind. To suggest that you might write tests after you write a bit of code would be anathema to such a person, because they believe that the process of testing simply cannot be accomplished without test-first. And it takes a lot of time to convince them otherwise; time that could have been used to actually do something useful.

Lots of programmers do indeed test-last (or at least test-during) without any noticeable difference in code quality. I personally don't mind if people use test-first (aka test-driven) development or test-last, as long as they wind up with good tests at the end. As long as the design works out OK, and the code is tested, the path you take to reach "good tests" should be negotiable. Dictating a particular process to reach that goal is often not helpful.
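To make that concrete, here's a minimal sketch (the slugify helper and its behavior are invented for illustration, not taken from any real project). Nothing about the test below reveals whether it was written before or after the implementation; either way it pins down the same behavior and guards the same refactorings:

import re
import unittest


def slugify(title):
    # Hypothetical helper: lowercase a title and join the words with dashes.
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)


class SlugifyTests(unittest.TestCase):
    # Written first or written last, this test documents the same contract.
    def test_basic_title(self):
        self.assertEqual(slugify("Extreme Danger!"), "extreme-danger")

    def test_collapses_punctuation_and_whitespace(self):
        self.assertEqual(slugify("  Dogma,  in practice  "), "dogma-in-practice")


if __name__ == "__main__":
    unittest.main()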

The same criticism applies to any insistence that some particular technology or library or approach is always better than another, no matter what the circumstance. Questioning a dogma in such circumstances can be fraught with peril, both interpersonal and political, but to reach an optimal technical solution, it almost always has to be done. But sometimes it's better to optimize for the best interpersonal solution rather than the best technical one: "pick your battles". Deciding when to question dogma and when to leave well enough alone is an art, I think.

I think the phenomenon of dogma might be explained by the nature of the job. Programmers need to tell a really, really dumb and unforgiving box "do this, then do this" over and over and over. It's quite easy to forget that humans can operate on far more incomplete data, and can produce the same meaning from a set of inputs using radically different thought processes and activities.

Created by chrism
Last modified 2010-01-08 11:14 AM

TDD by itself doesn't win

But, in conjunction with other practices, it might lead to improved code quality.

For instance, with "Do the Simplest Thing That Could Possibly Work" and "YAGNI", TDD can help keep the implementation simpler and cleaner: you don't generalize or engineer stuff that the tests don't need. In that sense, it helps optimize the "learnability" of the code for a new programmer, or for yourself weeks or months later.
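A minimal sketch of that point (the discounted_price function and its test are hypothetical, invented for illustration): when the only test in hand asks for one concrete behavior, the simplest passing implementation stays a few lines, and the speculative "pluggable pricing strategy" layer never gets written, because no test needs it.

import unittest


def discounted_price(price, percent):
    # The simplest thing that passes the test below; no configurable
    # pricing framework, because no test asks for one.
    return round(price * (1 - percent / 100.0), 2)


class DiscountTests(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(discounted_price(20.00, 10), 18.00)


if __name__ == "__main__":
    unittest.main()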

I think you have it inside-out

The usefulness of dogma is not simply to separate "the experienced" from "the inexperienced", because we are all inexperienced to varying degrees about different subjects. Lack of experience is not the sole cause of dogma; lack of time and interest are equal partners. It would take a team of researchers several lifetimes merely to identify every assumption you or I possess, and several millennia to prove whether most of those assumptions were the "best" ones. Most are unprovable in the scientific sense, since they are not repeatable. I pity the poor programmer who is expected to question *everything*.

In my experience, everybody accepts a certain level of dogma *except* programmers. It's the desire to eradicate dogma that is explained by the job.

Evolution of TDD Dogma

The reason TDD dogma came about was that nobody writes meaningful tests afterwards. Nobody. Oh, there are people who write some tests afterwards. But usually with pretty poor coverage. It's a clean-up operation if it ever happens. And it usually reveals more about how the code should have been factored to begin with than anything else. Once people started doing TDD, other benefits accrued (knowing when to stop, coverage complete enough for regression, working out the API's usability up front, etc.). Dogma is useful as often as not, or more often. It evolves for a reason. Examining the trail of how a dogma develops is usually pretty instructive as to its usefulness.