Goes without saying… but just in case: these are my thoughts, at this moment in time, about stuff I have some experience with, though I could have more, and I acknowledge that plenty of other people do. So, read with a grain of salt and feel free to correct me or throw in your own wisdom in the comments, in my Twitter feed or anywhere else you like.

I like to classify. It clears my thoughts and paints a clearer picture. Not necessarily the correct one, but less blurry in any case. I have come across some answers to the question “What is the difference between ATDD and BDD?”. To me they all lack some good old engineering simplicity. This is an attempt to fix that (excuse my arrogance).

I decided to approach this with a test classification.

WARNING: Please don't get hung up on the naming here (e.g. tests are also called checks or specifications). I don't care about naming as long as the meaning is clear from the context.

Tests describe cycles

I am using a test classification to describe the difference between ATDD & BDD (& TDD) because each type of test described here features prominently in one of those feedback cycles. Each feedback cycle has its own test type, along with the other traits described here.

Some traits of feedback cycles described here include:

  • test scope (amount of code under test)
  • expected test speed
  • test location
  • frequency of running
  • primary purpose
  • secondary purpose (there are more, but I wanted to keep this short-ish)
  • also known as … to provide context

So, without more rambling, here we go…

Unit tests

Cycle: TDD
Scope: One unit. (NOTE: A unit may consist of more than one class, but more on that in a later post.)
Primary: Drive the design at unit (component) level.
Secondary: Detailed functional documentation of a unit.
Location: Bundled with code.
Run freq.: Every 10-20 secs in the IDE (also on every build, everywhere).
Duration per test: Below or about 100 ms.
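To make that concrete, here's a minimal sketch of such a unit test in Python. The `bulk_discount` function is an invented example, not anyone's real code; the point is that it's pure business logic with no I/O, so the test comfortably stays under the ~100 ms budget.

```python
import unittest

# Hypothetical unit under test: pure business logic, no I/O.
def bulk_discount(quantity, unit_price):
    """Return the total price, applying a 10% discount for 100+ items."""
    total = quantity * unit_price
    return total * 0.9 if quantity >= 100 else total

class BulkDiscountTest(unittest.TestCase):
    def test_no_discount_below_threshold(self):
        self.assertEqual(bulk_discount(99, 2.0), 198.0)

    def test_ten_percent_discount_at_threshold(self):
        self.assertAlmostEqual(bulk_discount(100, 2.0), 180.0)

if __name__ == "__main__":
    unittest.main()
```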

System tests

Cycle: ATDD
Scope: One “system”, i.e. all code run as a single machine process, single deployable, defined within a single repository and/or a single IDE project.
Primary: Drive the design at system level.
Secondary: High-level functional documentation of a system.
Location: Same as unit tests (see above).
Run freq.: Same as unit tests (see above).
Duration per test: Same as unit tests (see above).
A.K.A.: end-to-end, acceptance, integration, integrated
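For illustration, here's one possible shape of such a system test, again in Python and with made-up names (`OrderSystem`, `InMemoryOrderRepository`): the whole system is exercised in a single process through its outermost entry point, with the outside world replaced by an in-memory fake so the test stays as fast as a unit test.

```python
import unittest

# Hypothetical in-memory fake standing in for a real persistence adapter,
# so the whole system can be exercised in one process without any I/O.
class InMemoryOrderRepository:
    def __init__(self):
        self.orders = {}

    def save(self, order_id, order):
        self.orders[order_id] = order

    def load(self, order_id):
        return self.orders[order_id]

# Hypothetical "system": the use-case layer wired up with the fake adapter.
class OrderSystem:
    def __init__(self, repository):
        self.repository = repository

    def place_order(self, order_id, items):
        self.repository.save(order_id, {"items": items, "status": "placed"})

    def order_status(self, order_id):
        return self.repository.load(order_id)["status"]

class PlaceOrderSystemTest(unittest.TestCase):
    def test_placed_order_is_visible_through_the_public_entry_point(self):
        system = OrderSystem(InMemoryOrderRepository())
        system.place_order("o-1", ["book"])
        self.assertEqual(system.order_status("o-1"), "placed")

if __name__ == "__main__":
    unittest.main()
```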

Acceptance tests

Cycle: BDD
Scope: The whole system. Not the system from the above section. Everything. The whole software package from a business perspective.
Primary: Provide automatic confirmation when a feature is Done (done-done, as in “start working on the next one”).
Secondary: Provide overall system state (which features are working currently) so that whether to release something or not is purely a business decision (not all features have to work in a release).
Location: In their own project and/or repo. They can be run by the same tool which runs unit and system tests, but they test a complete system deployed in a staging environment designed to resemble the production environment as closely as possible.
Run freq.: On every push to origin (git-talk for “main source repository”).
Duration per test: Depends on context, mostly long (seconds, even minutes).
A.K.A.: specification, scenario, acceptance criteria, story
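Here's a hedged sketch of what such an acceptance test could look like, assuming a staging deployment reachable over a RESTful HTTP API; the URL and endpoints are invented for the example, not a real system.

```python
import json
import unittest
import urllib.request

# Hypothetical staging URL; in practice this would come from configuration.
STAGING_URL = "https://staging.example.com"

class PlaceOrderAcceptanceTest(unittest.TestCase):
    """Runs against the system deployed in staging, through its public API."""

    def test_a_placed_order_can_be_retrieved(self):
        # Place an order through the public HTTP API (hypothetical endpoint).
        body = json.dumps({"items": ["book"]}).encode("utf-8")
        request = urllib.request.Request(
            STAGING_URL + "/orders",
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            order = json.loads(response.read())

        # Read it back and confirm the feature works end to end.
        with urllib.request.urlopen(STAGING_URL + "/orders/" + order["id"]) as response:
            fetched = json.loads(response.read())
        self.assertEqual(fetched["status"], "placed")

if __name__ == "__main__":
    unittest.main()
```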

Integration tests

Cycle: TDD (but read on!)
Scope: One adapter.
Primary: Test the adapter with the externality it is adapting.
Secondary: None.
Location: Bundled with code.
Run freq.: Manually, whenever the programmer feels like it (but likely just while TDD-ing the adapter), and on every build, everywhere.
Duration per test: Seconds (opening sockets, files, sleeping, etc.).
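As an example, a file-based adapter could get an integration test like the sketch below (the `FileOrderRepository` adapter is hypothetical). The point is that it touches the real externality, in this case the filesystem, instead of a fake.

```python
import json
import tempfile
import unittest
from pathlib import Path

# Hypothetical adapter that persists orders as JSON files on disk.
class FileOrderRepository:
    def __init__(self, directory):
        self.directory = Path(directory)

    def save(self, order_id, order):
        (self.directory / f"{order_id}.json").write_text(json.dumps(order))

    def load(self, order_id):
        return json.loads((self.directory / f"{order_id}.json").read_text())

class FileOrderRepositoryIntegrationTest(unittest.TestCase):
    """Exercises the adapter against the real filesystem, not a fake."""

    def test_round_trips_an_order_through_the_filesystem(self):
        with tempfile.TemporaryDirectory() as directory:
            repository = FileOrderRepository(directory)
            repository.save("o-1", {"items": ["book"], "status": "placed"})
            self.assertEqual(repository.load("o-1")["status"], "placed")

if __name__ == "__main__":
    unittest.main()
```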

Important bits

Let's extract the important bits out of my pile of personal opinions and come up with some (hopefully) useful advice…

Unit and system tests are run together, on each hyper-frequent TDD test run. For that they have to be fast, so they only cover business logic.

Adapters (the stuff that handles connections to the outside world) are tested with integration tests, which are slow and therefore not included in the hyper-frequent test run of TDD.
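One possible mechanism for that split, assuming Python's unittest and an opt-in environment variable (my assumption, not a prescription): the hyper-frequent run simply doesn't set the flag, while the per-build run does.

```python
import os
import unittest

# Slow tests only run when explicitly opted in, e.g. RUN_SLOW_TESTS=1 on the
# build server; the hyper-frequent TDD run leaves the flag unset and skips them.
RUN_SLOW = os.environ.get("RUN_SLOW_TESTS") == "1"

@unittest.skipUnless(RUN_SLOW, "slow integration test; set RUN_SLOW_TESTS=1 to run")
class DatabaseAdapterIntegrationTest(unittest.TestCase):
    def test_connects_to_the_real_database(self):
        ...  # real I/O against the externality would live here

if __name__ == "__main__":
    unittest.main()
```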

Acceptance tests are also slow, because they test integrated, deployed systems through their public API. While being slow, they should still be run as often as possible. Since they test a deployed system, they can only be run after the code is pushed to the main repository, from where it can be pulled, built (all bundled tests are run) and deployed to staging. Their results should be published and visible to the business side of your team (i.e. the product manager). It should be clearly visible which features are working at any point in time, so that releasing the product is a business decision. Some features may not be crucial to the release, so a version can be launched with some acceptance tests failing; that's OK as long as it's clear to the business people which features those failing tests cover.

A development workflow

Development should be done solely within nested feedback cycles.

Begin from the biggest cycle (BDD) by writing an acceptance test. To do that, you must of course build a fixture: a piece of software that can communicate with your whole system through its public APIs (e.g. RESTful HTTP).
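A minimal sketch of such a fixture, assuming the same invented HTTP endpoints as in the acceptance-test example above: a thin driver that lets the acceptance tests speak in domain terms while it handles the HTTP plumbing.

```python
import json
import urllib.request

# Hypothetical fixture/driver: acceptance tests call this object in domain
# terms, and it translates those calls onto the system's public HTTP API.
class OrderSystemDriver:
    def __init__(self, base_url):
        self.base_url = base_url

    def place_order(self, items):
        body = json.dumps({"items": items}).encode("utf-8")
        request = urllib.request.Request(
            self.base_url + "/orders",
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read())["id"]

    def order_status(self, order_id):
        with urllib.request.urlopen(self.base_url + "/orders/" + order_id) as response:
            return json.loads(response.read())["status"]
```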

When your acceptance test is finished (and of course, failing) go to the next cycle in (ATDD) by writing a system test.

When your system test is finished (and of course, failing) go to the innermost cycle (TDD) and write a unit test. By now you will have to decide some pretty specific things about your software architecture (on which I also have advice, but that's for another post).

Keep TDD-ing with unit tests until your system test passes. Write another system test, then unit test, etc.

Keep writing system tests until your acceptance test passes. Then stop. The feature is done and releasable.
