I work in a development group with perhaps 120 developers, with smaller divisions within that. Our process is somewhere between waterfall and agile, more towards the former. We do NOT have our builds executing unit tests and there is only casual use of them in the various teams. Nothing resembling TDD happens here.
We've been going through Scrum training, and are trying to use agile methods for some projects, and move others towards agile in the future.
I've been concerned about our de-emphasis of automated unit tests for quite a while. During this Scrum/Agile training process, I've tried to make the point that the lack of automated unit tests in our builds could be a problem, even more so with agile processes, specifically using short iterations. The response to this from the "movers and shakers" is that this is an XP topic, and we're implementing Scrum.
Assuming you agree with my concerns, what arguments could I present to the people who pay the bills that the development of a good automated unit test infrastructure (and understanding) needs to have higher priority?
The best argument I've seen is that it is cheaper to fix bugs early than late.
In particular, as you say, with short iterations untested code will almost certainly fail when deployed. Stopping the team to perform manual testing, then fixes, introduces uncertainty into the schedule, whereas Scrum works best with a well-defined rhythm of frequent, high-quality releases.
It can also be difficult to integrate untested code across a larger team: even the best-written specifications can be ambiguous, and most are worse. A good, robust test suite is an excellent specification of what the code actually does.
Once the code has been written, decent test coverage lets you change it with confidence that it still works as defined. In particular, the effort of regression testing is greatly reduced.
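To make that concrete, here's a minimal sketch of the kind of automated test I mean, using Python's built-in unittest module and a hypothetical `apply_discount` business rule (both names are illustrative, not from your codebase):

```python
import unittest

def apply_discount(price, rate):
    """Hypothetical business rule: discount a price, never below zero."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return max(price * (1 - rate), 0.0)

class ApplyDiscountTest(unittest.TestCase):
    """Each test pins down one piece of intended behaviour, so a
    refactoring that breaks the rule fails the build immediately
    instead of surfacing weeks later in manual testing."""

    def test_normal_discount(self):
        self.assertAlmostEqual(apply_discount(100.0, 0.25), 75.0)

    def test_zero_rate_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_rate_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 1.5)

if __name__ == "__main__":
    unittest.main()
```

Wire a suite like this into the build and it doubles as the executable specification mentioned above: anyone integrating against the code can read the tests to see exactly what it promises.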
I've seen management try to cut corners by pushing testing outside the core development function and away from the sprint cycle. In my experience that ends in tears, with the software delivered later than if proper effort had been made to find and fix the bugs early.
Perhaps it's a cultural issue, but in the UK the best practice I've seen with Scrum is not to worry too much about whether a particular part of the process is XP, Agile, Scrum or what-have-you. Rather, a policy of inspect and adapt lets a team decide for itself to improve its code by adopting a particular practice; then, if after a spike it appears to work, the practice is adopted more widely. Or not.
So you may find it best to bide your time and suggest improving test coverage at your next retrospective. Or perhaps just write the tests yourself... and watch your velocity improve!