Tests are closer to sacred cows than code
Change is the only constant in software, and we can change tests, but we should treat tests more as sacred cows than code.
In the end, they are the guardians of our application's behavior.
Sacred cow is an idiom, a figurative reference to cattle in religion and mythology. A sacred cow is a figure of speech for something considered immune from question or criticism, especially unreasonably so.
This idiom is thought to originate in American English, although similar or even identical idioms occur in many other languages.
— Wikipedia
If you are living in the Software Forest described by Kent Beck, your team, your organization, and you yourself care about building software in small and safe increments, so this post is for you and will probably resonate with you. If you live in the Desert, you probably don't have to care much about automated tests, because you don't have them.
I have said in the past that automated tests (and I don't mean only e2e tests) have the property of making the things they test harder to change.
We all agree that we want to be informed when our behaviors change, and we use tests for this. At least good tests should do this.
Following this idea, that tests are the guardians of our behaviors, tests are more sacred than our production code.
Don't get me wrong: we can, and we should, remove and change tests, but we should not do it while we are also changing production code. It's a bad idea to change your guards in the casino while someone is trying to steal your money.
Bugs are like burglars: they will find ways to get in while trying to stay invisible to us. The best weapons we have to identify bugs quickly are automated tests.
Don't change them in the middle of changes to your production code; treat them as if they were sacred cows.
So, let's assume the tests are right. Then, when a test fails, let's ask ourselves:
Why is the test failing?
Is this broken test a signal of a behavior that we want to preserve?
Is this a signal of higher-level changes that need to accompany this production code change?
Is this test still useful?
Depending on the answers to these questions, we should treat our production code change differently, because a warning signal has appeared.
Perhaps we should start thinking about how to honor that signal:
Perhaps our production change requires following the "expand and contract" method.
Perhaps API versioning is required.
Perhaps we need to create a sunset process for the old API.
Perhaps we just introduced a bug and our production code is wrong.
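As an illustration of the first option, here is a minimal sketch of what expand and contract can look like (the `PriceFormatter` class and its method names are hypothetical, invented for this example):

```python
class PriceFormatter:
    # Expand: the old behavior stays in place, still guarded by its
    # original test, which we leave untouched (the sacred cow).
    def format(self, amount):
        return f"{amount} EUR"

    # The new behavior lives alongside the old one, isolated from it,
    # with its own brand-new test. Callers migrate at their own pace.
    def format_localized(self, amount, currency="EUR"):
        return f"{amount:.2f} {currency}"

# Contract comes later: once every caller uses format_localized and
# the whole suite is green, delete the old test first, then format().
```

During the "expand" phase both behaviors and both tests coexist; the old test keeps guarding the old behavior until the last caller has migrated.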
When and how can we change a test?
Let's take extra care when changing tests.
I think we should favor isolating the old behavior, either not changing the test at all or making only tiny changes.
If the test is obsolete, let's just remove it; I prefer making tests obsolete to changing them. This is a design decision about how the new behaviors have been built.
Let's remove tests only when all tests are passing, when we are sure the old behavior can go away without breaking anything.
Then let's remove the test, and after that the old behavior; to do this, as I said before, the new behavior needs to be isolated from the old one.
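A minimal sketch of that removal order, with hypothetical pytest-style tests (the function names and the tax rate are my own, not the author's):

```python
# Step 1: both behaviors coexist, each guarded by its own test.
def legacy_total(prices):
    # Old behavior: left untouched while the change is in flight.
    return sum(prices)

def total_with_tax(prices, rate=0.21):
    # New behavior: isolated from the old one, with its own test.
    return round(sum(prices) * (1 + rate), 2)

def test_legacy_total():
    # The sacred cow: we don't touch it while changing production code.
    assert legacy_total([10, 20]) == 30

def test_total_with_tax():
    assert total_with_tax([10, 20]) == 36.3

# Step 2: only when the whole suite is green and no caller needs
# legacy_total anymore, delete test_legacy_total first, then legacy_total.
```

Because the new behavior never touched the old one, deleting the obsolete test and then the obsolete code are two safe, independent steps.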
In some way, this follows the idea of writing code as if you were going to delete it tomorrow.
But if we react too fast to a failing test, because our behavior changed and we now think the failing test is useless and can be removed, we are changing the guards at the same time we are changing our code.
A recipe for disaster.