Test code shouldn't be some throwaway script you bang out in an hour to cover weeks of work on functional code.
Test code needs to be just as readable, maintainable, and scalable as your functional code is. You can easily spend just as much time, if not more, on tests. That time should be valuable.
Just like you should avoid writing code that doesn't do anything, you should avoid writing test code that doesn't really test anything.
Are you sure you need to test that if-statement? Is it imperative to test something you know logically will never happen? And does it make any more sense to test that something does happen, when you know for sure it will happen exactly as designed?
The best practice for testing is to write tests for the complex components that might break. Not break because the computer will suddenly fail, but break because people might make bad changes to them. Or people might pass in bad values. Or people might forget what that component was doing.
Tests are to cover for human error.
Computers will not surprise you, humans will.
When you see a surprising result on a computer, the only thing off is either your understanding or someone else's implementation.

Unless you're dealing with a computer that flies above the atmosphere, where radiation mysteriously flips a bit in your code from a 1 to a 0 and that switch statement gets a value it never expected, it's best not to take a heavy-handed approach and test everything.
Tests are to protect against human failure, not computer failure. And humans can fail in a few ways:
- They try to change the design of things without fully understanding how they work. You don't need tests to protect against this, because usually the functionality will break in obvious ways. For the instances where it wouldn't be so obvious, those are great candidates for test cases.
- They forget why something was done a certain way. You can use tests for this, but usually a comment in the right area is more than enough proof against this. Especially if there's a review process and others can see obvious commentary that the changer was too lazy to read.
- They didn't think of that scenario, or that particular combination of variables. Here, tests act as a stand-in for critical thinking and planning before coding. Sure, in the process of writing the tests you figure out there are many more cases you hadn't thought of, but you would've caught them earlier if you had created a truth table during development.
Truth tables are great, but undervalued by most developers. They'd rather think up all the possible cases in their heads (because they're so smart), and then get surprised during development, or when the bugs start coming in.
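To make that concrete, here's a minimal sketch of a truth table written out as tests. Everything here is invented for illustration (pytest, a made-up `can_checkout` rule), but the shape is the point: two booleans, four rows, zero surprises.

```python
import pytest

# Hypothetical rule: a user can check out only if their cart is
# non-empty and their payment method is valid. Enumerating the truth
# table up front forces you to decide what SHOULD happen in every case.
def can_checkout(cart_has_items: bool, payment_valid: bool) -> bool:
    return cart_has_items and payment_valid

# The full truth table, written out as test cases.
@pytest.mark.parametrize(
    "cart_has_items, payment_valid, expected",
    [
        (True,  True,  True),   # happy path
        (True,  False, False),  # expired card
        (False, True,  False),  # empty cart
        (False, False, False),  # nothing to do
    ],
)
def test_can_checkout_truth_table(cart_has_items, payment_valid, expected):
    assert can_checkout(cart_has_items, payment_valid) == expected
```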
Most of your application will be simple enough that obvious breakage and a code review will catch problems. And then there'll be the 20% that is complex and bug-prone. This usually ends up being a core part of your application. That's the best area to focus your heavy testing on.
Here are two recent examples where tests were a good idea:
- I had to create a phone number prettifier, which makes the phone number look pretty as the user types. And for all possible countries.
Right away I could tell that detecting the country was going to be a problem. The system I was working with didn't distinguish the country code from the national phone number, so the servers were spitting out the entire international phone number and letting me deal with parsing it.
Sure, all numbers are "supposed" to adhere to a standard (E.164), but there's like a gazillion ways people can fuck that up. And they did. Those tests saved my job, saved the company hundreds of hours, and spared thousands of users a lot of grief.
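Roughly, the parsing step looks like this. This is a sketch using Google's libphonenumber via its Python port, `phonenumbers`, with a made-up sample number:

```python
import phonenumbers  # Python port of Google's libphonenumber

# What the server hands back: country code and national number
# mashed into one string, hopefully E.164-shaped.
raw = "+14155552671"  # made-up sample number

# With no default region (None), parsing succeeds only because the
# string carries its own "+<country code>" prefix.
parsed = phonenumbers.parse(raw, None)

print(phonenumbers.region_code_for_number(parsed))  # "US"
print(phonenumbers.is_valid_number(parsed))         # True
```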
I needed unit tests desperately to make sure I'd tested as many countries, with as many formats (both correct and incorrect), as possible. I needed these unit tests to make sure my functional code produces specific outputs from specific inputs. Fundamentals of Test-Driven Development.
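A sketch of what those tests can look like, again assuming the `phonenumbers` port; the `prettify` helper here is a made-up stand-in for the real one:

```python
import phonenumbers
import pytest


def prettify(digits: str, region: str) -> str:
    """Hypothetical stand-in for the app's prettifier: feed digits to
    libphonenumber's as-you-type formatter one at a time, the way a
    user would type them."""
    formatter = phonenumbers.AsYouTypeFormatter(region)
    result = ""
    for d in digits:
        result = formatter.input_digit(d)
    return result


# Specific inputs must give specific outputs. In a real suite there'd
# be rows for every supported country, plus mangled inputs and the
# half-typed intermediate states.
@pytest.mark.parametrize(
    "digits, region, expected",
    [
        ("4155552671", "US", "(415) 555-2671"),
    ],
)
def test_prettify(digits, region, expected):
    assert prettify(digits, region) == expected


def test_library_output_stays_pinned():
    # Pin the library's own formatting so an update that silently
    # changes behavior fails here, not in front of users.
    parsed = phonenumbers.parse("+14155552671", None)
    national = phonenumbers.format_number(
        parsed, phonenumbers.PhoneNumberFormat.NATIONAL
    )
    assert national == "(415) 555-2671"
```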
I had to make sure that the Google library I was using to parse phone numbers was working, and that future updates to it didn't change my expected results. The library turned out to be slightly different from the Android version. The unit tests saved my ass when I needed to explain why Android was working but iOS wasn't, and vice-versa. It's always good to legitimately blame Google for your problems.
- With the user's phone number and last name, I could search through their contacts and recommend who the user should share the app with.
Any recommendation engine relies on weighted graphs and algorithms to bubble up the recommended objects. It'd be good to have a pool of canned contacts where I know what the result should be, so I can validate my work as I develop. Another fundamental of TDD.
My algorithm also needs to be proof against someone (me) mucking around in it later, trying to fix some bug. Usually a bug in these kinds of things is that something should be in the recommended list but isn't. Then the developer has to figure out how to update the weights, or add more parameters, so that the outlier gets in, others who don't belong stay out, and those already in don't fall out.
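Here's a sketch of the canned-contacts idea. The names, weights, and scoring rule are all toy values invented for illustration; the point is the known-answer pool, not the scoring rule.

```python
# Hypothetical scoring: weight shared last name and call frequency.
# The weights are the part people (me) will muck with later, which is
# exactly why the expectations below exist.
WEIGHTS = {"same_last_name": 3.0, "calls_per_month": 0.5}


def recommend(contacts, user_last_name, top_n=2):
    def score(c):
        s = WEIGHTS["same_last_name"] * (c["last_name"] == user_last_name)
        s += WEIGHTS["calls_per_month"] * c["calls_per_month"]
        return s

    ranked = sorted(contacts, key=score, reverse=True)
    return [c["name"] for c in ranked[:top_n]]


# A canned pool where the right answer is known by construction.
CONTACTS = [
    {"name": "Ana", "last_name": "Diaz",  "calls_per_month": 1},
    {"name": "Bo",  "last_name": "Smith", "calls_per_month": 2},  # family
    {"name": "Cy",  "last_name": "Diaz",  "calls_per_month": 9},  # frequent
]


def test_family_and_frequent_contacts_bubble_up():
    assert recommend(CONTACTS, user_last_name="Smith") == ["Cy", "Bo"]


def test_weight_tweaks_do_not_evict_family():
    # Guard rail: if someone retunes WEIGHTS to fix one bug,
    # family members must still make the list.
    assert "Bo" in recommend(CONTACTS, user_last_name="Smith")
```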