Dogmas that govern our code: Test everything
September 27, 2023
“Test everything” is the dogma I often come across. Paired with Test-Driven Development, it becomes a pain of sorts, as well as one of the bottlenecks when it comes to getting something out fast. Or faster, whatever.
What could I possibly mean by this? Am I saying testing is bad? No, let us stop there. Any software that runs in production and is meant to be there for a long time needs some sort of testing put in place. What that means is debatable every time. In my experience, that is where the developers need to decide what it means for them. E2E testing, integration, component, unit… How and when you write those tests is up to you as well. Do you understand what you need to deliver, so you can start writing tests for it?
It is an opinion greeted with as much fire as if I had said “Scrum is a broken way to develop software”.
So let us get back to that opinion. As with anything, there is a big tradeoff when it comes to following path A vs. path B. When should you employ certain things? How do you know? When it comes to testing, the rule of thumb for me is simple: do I know what I need to deliver? If the answer is no, not gonna happen. I would rather sacrifice some of the “best” practices and get something out of the door than spend days refactoring tests due to constantly changing requirements, and deliver zero value. To borrow from TDD, think of it as:
- Red: Understand what you need to build and the value of adding complexity.
- Green: Write functional software according to those findings.
- Refactor: Take the time to refactor and follow up on those practices you may have skipped over.
And this happens more often than not. When there is a rigid set of tests in place, it takes you exponentially more time to update the tests than the actual functionality. For me, that is much worse. What constitutes a good set of tests is open for debate and evolves with the project. My opinion:
💡
Tests should be there to assist you and not slow you down.
Then again, that is just an opinion. I have worked on several projects where tests were like documentation: often not updated, hacked around just to make them pass, or plain misleading. And then a 1-hour change becomes days of going down the rabbit hole. For me, that is like having no tests. Or no documentation.
Now with some context set in place, just imagine starting on some new project, in an agile fashion, where nobody even knows yet what should be done. A simple PoC or MVP. We will figure it out as we go. What could go wrong… Anyhow, long story short: things change, more often than not. That is all fine. What is not fine is the impact on your code base and the rigid set of tests you put in place. Brittle is the word. Have you experienced frustration with code you wrote yourself that is now making your life harder? There is no one to complain to; it is just you. Insert Spiderman meme here.
This is where those dogmas and good practices sometimes get in your way. In recent years I moved away from writing tests for every single thing in my code. I started testing for behavior, and, for the catchy title, I started calling this Requirements-Driven Testing. If the project has a clear goal, then tests from the start are a great way to document decisions. At least for me. If they are the cause of pain, it is time to ask whether they add any perceived value. I can hardly write a wrong test if the requirements are clear to me and I know what I need to deliver; the other way around, unclear requirements make every test fragile.
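To make that a bit more concrete, here is a minimal sketch of what I mean, in Python with pytest. Every name in it (`register_user`, `InMemoryUserStore`) is hypothetical and just for illustration; the point is that each test asserts a requirement, such as “the same email cannot register twice”, instead of poking at internals.

```python
# A minimal sketch of requirement-driven tests. All names here are
# hypothetical; only the shape of the tests matters.
import pytest


class InMemoryUserStore:
    """Simple stand-in for whatever persistence the real project uses."""

    def __init__(self):
        self._users = {}

    def add(self, email, password):
        if email in self._users:
            raise ValueError("email already registered")
        self._users[email] = password

    def exists(self, email):
        return email in self._users


def register_user(store, email, password):
    # The behavior under test: a fresh registration succeeds,
    # a duplicate registration fails.
    store.add(email, password)


def test_registering_the_same_email_twice_is_rejected():
    store = InMemoryUserStore()
    register_user(store, "a@example.com", "s3cret")
    with pytest.raises(ValueError):
        register_user(store, "a@example.com", "other")


def test_a_registered_user_exists_afterwards():
    store = InMemoryUserStore()
    register_user(store, "a@example.com", "s3cret")
    assert store.exists("a@example.com")
```

If the store implementation changes tomorrow, these tests only break if the requirement itself changes. That is the whole trick.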
With experience, we need to start putting priorities in place. What is the point of a project that has all the best practices implemented, when the project team decides to stop it because no value was delivered in over a year? There is a delicate balance, as with this entire series on dogmas that govern our code, between the value and the quality of code. Both ends of the spectrum are equally dangerous. How do you strike that balance? For me, personally, it is listening to what is needed and having a gut feeling about where I should invest my time. You can’t always know if your decision is good or bad until you test that hypothesis. But there is a simple way to tell which way it went: if you still have a project to refactor later on, it was probably a good idea.
Examples of this in the “wild” are numerous. Hardly a single successful project out there started with the architecture they are selling you at conferences today. It started as something similar to what you currently have. So keep that in mind when thinking about what should be done and when. And this applies to all the dogmas I am going over in this series of posts.
I mostly start with a test project on every new project. Not TDD-like, but more as a way to run my code for “manual” testing purposes, without reaching for some REST client or waiting for the UI to be there. As my code matures and requirements become clearer, so does the testing solution. It grows with my understanding of the domain and what I need to deliver. Then there are projects where I need to build a projection between some system and a UI. There I really don’t see the benefit of a full set of tests at all; a REST client will do just fine to verify the contract, and Swagger to expose it to the client. When it is all in place and working as expected, I will write some contract tests that ensure the stability of my contracts.
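That “test project as a runner” idea can be as mundane as the sketch below. The `build_report` function is a made-up stand-in for whatever domain code is taking shape; the test is mostly an executable entry point while requirements settle.

```python
# Sketch: exercising domain code directly instead of going through a
# REST client or a UI. `build_report` is a hypothetical domain function.
from datetime import date


def build_report(orders, since):
    """Hypothetical domain logic: total amount per day since a cutoff."""
    totals = {}
    for day, amount in orders:
        if day >= since:
            totals[day] = totals.get(day, 0) + amount
    return totals


def test_run_report_manually():
    # Not a real assertion-heavy test yet; just a convenient way to run
    # the code and eyeball the output while requirements are in flux.
    orders = [
        (date(2023, 9, 1), 10),
        (date(2023, 9, 2), 5),
        (date(2023, 8, 31), 99),  # before the cutoff, should be excluded
    ]
    result = build_report(orders, since=date(2023, 9, 1))
    print(result)  # inspect the per-day totals while iterating
    assert result  # minimal check so the runner still fails on crashes
```

As the domain firms up, the print goes away and real assertions move in.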
The definition of requirements and stakeholders changes, so it is important to write your tests according to behavior and intended audience. At least that is how I see it, and it is what makes tests less brittle. That is not the same as writing a BDD test for an API endpoint that just reads data from a database; in my opinion, that is time wasted. If you offer an SLA on your contracts, write tests to ensure those: backward compatibility and so on, just so you don’t shoot yourself in the foot. As mentioned before, tests should be there to assist you.
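A contract test in that spirit can stay small. The payload shape and `get_order_payload` below are assumptions for illustration; in a real project that function would call your API or serializer. The test only pins down the fields you promised.

```python
# Sketch of a contract test guarding backward compatibility.
# The fields and types listed here stand in for whatever your
# published contract actually promises.
REQUIRED_FIELDS = {"id": int, "status": str, "total": float}


def get_order_payload():
    """Hypothetical stand-in for the serialized API response."""
    return {"id": 42, "status": "shipped", "total": 19.99, "extra": "ok"}


def test_order_contract_is_backward_compatible():
    payload = get_order_payload()
    for field, expected_type in REQUIRED_FIELDS.items():
        # Promised fields must stay present and correctly typed.
        # Adding new fields is fine; removals and type changes break
        # consumers, and that is exactly what this test should catch.
        assert field in payload, f"breaking change: '{field}' removed"
        assert isinstance(payload[field], expected_type)
```

Notice it says nothing about how the payload is produced; it only protects the promise you made to whoever consumes it.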
What I often struggle with is tests that test the behavior of some third-party library or vendor. In my head, they never make sense. Either you didn’t choose a product that suits your needs, or you didn’t read the release notes before upgrading. If I am using a popular database and it is a managed instance from some vendor, I am not going to connect to it directly when running tests. Spin up a local instance and run the tests against that; for the purposes of your code, it behaves the same. You are not testing replication, high availability, or the other things you pay a managed instance for. Those are enforced by the contracts and SLAs you put in place. There are edge cases to this opinion, but for most of the software built out there, I think it holds.
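As a sketch of that, here is roughly what a throwaway local Postgres looks like with testcontainers-python and SQLAlchemy. This assumes Docker is running and the `testcontainers`, `sqlalchemy`, and `psycopg2` packages are installed; the table and query are made up for the example.

```python
# Sketch: test against a disposable local Postgres instead of the
# vendor's managed instance. Assumes Docker plus the testcontainers,
# sqlalchemy, and psycopg2 packages.
from testcontainers.postgres import PostgresContainer
import sqlalchemy


def test_my_query_against_a_local_postgres():
    with PostgresContainer("postgres:15") as pg:
        engine = sqlalchemy.create_engine(pg.get_connection_url())
        with engine.begin() as conn:
            conn.execute(sqlalchemy.text(
                "CREATE TABLE items (id int PRIMARY KEY, name text)"))
            conn.execute(sqlalchemy.text(
                "INSERT INTO items VALUES (1, 'widget')"))
        with engine.connect() as conn:
            # What matters here is your code's behavior against the
            # database engine, not replication or failover -- those are
            # the vendor's SLA, not your test suite's job.
            row = conn.execute(sqlalchemy.text(
                "SELECT name FROM items WHERE id = 1")).one()
        assert row[0] == "widget"
```

The container dies with the test run, nothing leaks into the managed instance, and the test stays honest about what it actually verifies: your code.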
The dogma surrounding “test everything” has its merits but also its pitfalls. By understanding the philosophy behind TDD and its pros and cons, you can better decide when to adopt TDD strictly and when to take a more nuanced approach. While TDD can be a powerful way to improve software quality, like any tool, its effectiveness depends on where it is used. Or if it even makes sense on the project in question.
Remember, in software development, one size does not fit all. The wisdom lies in adapting methodologies to fit the project’s needs rather than adapting the project to fit a specific methodology’s dogma.
Until next time, test away.