9 Comments
Petar Ivanov:

Writing tests has a compounding effect. You slow down temporarily, but you'll be faster weeks or months in the future.

Great article, friend!

Renan:

TDD is about short cycles. Normally, I practice TDD from the inside (the core business rules) out, so the last test I write is the highest-level test possible.
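A rough sketch of what I mean, with invented names (apply_discount and Checkout are just illustrations, not from the article):

```python
# Inside-out TDD ordering, sketched with invented names.

def apply_discount(price: float, percent: float) -> float:
    """Core business rule: discounts are capped at 50%."""
    return price * (1 - min(percent, 50) / 100)

class Checkout:
    """Outermost entry point; delegates to the core rule."""
    def __init__(self) -> None:
        self._price = 0.0
        self._percent = 0.0

    def add_item(self, price: float) -> "Checkout":
        self._price += price
        return self

    def with_discount(self, percent: float) -> "Checkout":
        self._percent = percent
        return self

    def total(self) -> float:
        return apply_discount(self._price, self._percent)

# First TDD cycle: drive out the core rule directly.
def test_discount_is_capped_at_50_percent():
    assert apply_discount(price=100, percent=80) == 50

# Last TDD cycle: the highest-level test, through the public entry point.
def test_checkout_applies_capped_discount():
    assert Checkout().add_item(100).with_discount(80).total() == 50

test_discount_is_capped_at_50_percent()
test_checkout_applies_capped_discount()
```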

Do you mind sharing your thoughts on that as well?

Erik:

I would typically develop in this fashion as well. However, I have read (I think it was Kent Beck who said this, but I can't recall) that you can write all the small tests you want while developing, but once you're "done" and all your tests at the low and high levels are passing, you can start deleting the smaller tests that no longer matter. The larger tests should encompass the variations covered by the smaller ones, since they exercise similar parts of the code flow.
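Something like this, to make it concrete (an invented example, not from the article):

```python
# Once the higher-level test passes, the smaller test below covers the
# same code path and becomes a candidate for deletion.

def normalize(name: str) -> str:
    return name.strip().lower()

def greet(name: str) -> str:
    return f"Hello, {normalize(name)}!"

# Higher-level test: exercises normalize() through greet().
def test_greet_normalizes_name():
    assert greet("  ALICE ") == "Hello, alice!"

# Smaller test written while developing normalize(); redundant once the
# test above passes, so it could be deleted.
def test_normalize_trims_and_lowercases():
    assert normalize("  ALICE ") == "alice"

test_greet_normalizes_name()
test_normalize_trims_and_lowercases()
```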

However, I'm still confused by how they describe a "unit"; to me, it sounds more like an integration test. But that could just be my misunderstanding; I would love to know more.

Renan:

Reading both of these articles gave me a solid understanding of how to write behavior tests:

- Test Contra-variance by Uncle Bob (https://blog.cleancoder.com/uncle-bob/2017/10/03/TestContravariance.html)

- Additional Testing After Refactoring by Kent Beck (https://tidyfirst.substack.com/p/additional-testing-after-refactoring?utm_source=share&utm_medium=android&r=1wkrh0&triedRedirect=true)

Erik:

Wow, thanks for the articles. The test contra-variance one blew my mind and is probably along the lines of what this article was trying to say (though the one you linked did a better job). It also pairs well with the second article (I can see why you mentioned it), because the first left me with the same question: should we not write tests for the ("extracted") sub-elements at all?

I think most people do some version of both of these without knowing it (myself included). But that leads to incomplete or "messy" test contra-variance, and to redundancy.

Thanks for the articles, worth the read!

Renan:

I understand the point and I practice TDD. However, I believe that at times it is essential to focus on tests with a narrower scope, particularly when there are many input and output possibilities. Concentrating solely on high-level tests makes it hard to feed in enough input variations to cover them all. An example would be validating prohibitive business situations that should throw exceptions when specific circumstances occur.

Following this line of thought, the higher-level unit test examines a different situation than the more narrowly scoped unit test.
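For illustration, a narrow-scope test for that kind of prohibitive rule might look like this (validate_transfer and its rules are invented, and I'm assuming pytest):

```python
import pytest

class TransferNotAllowedError(Exception):
    pass

def validate_transfer(amount: float, balance: float, frozen: bool) -> None:
    """Raise when a prohibitive business situation occurs (invented rules)."""
    if frozen:
        raise TransferNotAllowedError("account is frozen")
    if amount <= 0:
        raise TransferNotAllowedError("amount must be positive")
    if amount > balance:
        raise TransferNotAllowedError("insufficient funds")

# Narrow-scope test: feeding these combinations through the highest-level
# API would be noisy; here each prohibited case is a single line.
@pytest.mark.parametrize("amount, balance, frozen", [
    (100, 50, False),   # insufficient funds
    (-10, 500, False),  # non-positive amount
    (10, 500, True),    # frozen account
])
def test_prohibited_transfers_raise(amount, balance, frozen):
    with pytest.raises(TransferNotAllowedError):
        validate_transfer(amount, balance, frozen)
```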

Thank you for the article.

Erik:

Based on the definition of a unit here, how would you then define/describe an integration test?

I agree that too many small tests can be bad and that not every function requires a test. However, I was under the impression that in the image you have under ["So What is a UNIT"](https://craftbettersoftware.com/i/148523986/so-what-is-a-unit), the diagram on the left IS-A unit test and the diagram on the right IS-AN integration test.

Would love to hear your thoughts on this. Thanks!

Pete B:

Not the author, but I think Chicago-school unit testers would typically consider an integration test to be something that interacts with an external dependency, e.g. a database.
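A minimal sketch of that distinction (invented names, using sqlite from the Python standard library as the external dependency):

```python
import sqlite3

def count_active_users(conn: sqlite3.Connection) -> int:
    return conn.execute(
        "SELECT COUNT(*) FROM users WHERE active = 1"
    ).fetchone()[0]

# Integration test in the Chicago-school sense: it exercises a real
# database engine (sqlite here) rather than a hand-rolled fake.
def test_count_active_users_against_real_engine():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, active INTEGER)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [("ada", 1), ("bob", 0), ("eve", 1)])
    assert count_active_users(conn) == 2

test_count_active_users_against_real_engine()
```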

Erik:

I would consider that an integration test as well, but what you described as an integration test is what they are calling a unit test (from my understanding of the article). Again, I could be misunderstanding; perhaps two "components" (maybe services or methods) make up one unit, while a database is different enough that it would make the test an integration test.

I just didn’t get that from reading the article.
