Once a programmer latches on to some process or idea that makes him more successful, he tends to treat it like a subroutine, calling into it over and over again to get his job done. It becomes part of his fundamental toolkit.
However, over time, for some reason, programmers also seem to forget that the helpful process or idea they found is just a mechanism, not some sort of natural law. As a result, the helpful process or concept sometimes becomes dogma. Dogma is a form of extremism: the belief that an idea is superior even when there's scant proof of its superiority, or even when there's existence proof to the contrary.
Using dogma is almost never useful except in one very specific circumstance. You can use dogma successfully to box someone very inexperienced into a particular worldview for the purpose of getting output from them: inexperienced programmers are willing to accept a limited worldview, filled with all sorts of dogma, because they often lack the experience to make reasoned decisions otherwise. Dogma actually helps inexperienced programmers produce things, because they don't get wrapped around as many axles when trying to decide how to get something done: to them, dogma acts as a bridge between a problem and a solution. Without the dogma, they might never reach a solution.
But sometimes, it can hurt. Let's take, for example, treatises on testing. I see lots of blog posts with this theme: "test-driven development helps me, because, at the end of the process, I wind up with a codebase that has tests." Such posts might mention other benefits, such as "test-first" as a design aid, but the central theme of the post usually marvels at the idea that the author can later refactor his codebase without unintentionally breaking it. To the author of such a post, "test-driven development" is testing; there is no other kind. To suggest that you might write tests after you write a bit of code would be anathema to such a person, because they believe that testing simply cannot be accomplished without test-first. And it takes a lot of time to convince them otherwise; time that could have been used to actually do something useful.
Lots of programmers do indeed test-last (or at least test-during) without any noticeable difference in code quality. I personally don't mind if people use test-first (aka test-driven) development or test-last, as long as they wind up with good tests at the end. As long as the design works out OK, and the code is tested, the path you take to reach "good tests" should be negotiable. Dictating a particular process to reach that goal is often not helpful.
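To make the point concrete, here's a minimal sketch in Python; the function and test names are hypothetical examples, not taken from any particular codebase. The test below is identical whether it was written before the implementation (test-first) or after it (test-last), which is the sense in which the path to "good tests" is negotiable:

```python
def word_count(text):
    """Hypothetical function under test: count whitespace-separated words."""
    return len(text.split())

def test_word_count():
    # The value of this test is that it exists and passes;
    # nothing about it records whether it predates the implementation.
    assert word_count("dogma is a mechanism") == 4
    assert word_count("") == 0

if __name__ == "__main__":
    test_word_count()
    print("all tests passed")
```

Run it under any test runner (or directly, as above): the resulting safety net for refactoring is the same either way.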
The same criticism applies to any insistence that some particular technology or library or approach is always better than another, no matter what the circumstance. Questioning a dogma in such circumstances can be fraught with peril, both interpersonal and political, but to reach an optimal technical solution, it almost always has to be done. Then again, sometimes it's better to optimize for the best interpersonal solution rather than the best technical one: "pick your battles." Deciding when to question dogma and when to leave well enough alone is an art, I think.
I think the phenomenon of dogma might be explained by the nature of the job. Programmers need to tell a really, really dumb and unforgiving box "do this, then do this" over and over and over. It's quite easy to forget that humans can operate on far more incomplete data, and can produce the same meaning from a set of inputs using radically different thought processes and activities.