Testing problems are test results: "If automated checks are difficult to develop and maintain, does that say something about the skill of the tester, the quality of the automation interfaces, or the scope of checks? Or about something else?" (Michael Bolton, 2011).


If you run into problems automating a test, this may indicate a flaw in the design of the system under test. Of course, it could also be a problem with your test design…


So, what is needed to support good testability? Mainly observability and controllability.


Source: Alexander Tarlinder (2017)

Observability describes how well the test can determine what the system under test is doing.

"Along with observability, control is probably the key area for testability, particularly so if wanting to implement any automation." (Adam Knight, 2012).

Controllability describes whether the test can put the system under test into a predefined state.
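As a minimal sketch of these two properties (all names are invented for illustration, not taken from any particular framework), a component that takes its inputs via injection and records what it decides offers both to a test:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Thermostat:
    """Hypothetical system under test illustrating both properties."""
    # Controllability: the temperature source is injected, so a test
    # can put the system into any predefined state.
    read_temperature: Callable[[], float]
    threshold: float = 21.0
    # Observability: decisions are recorded, so a test can determine
    # what the system is doing, not just its final output.
    events: list = field(default_factory=list)

    def tick(self) -> bool:
        temp = self.read_temperature()
        heating = temp < self.threshold
        self.events.append((temp, heating))
        return heating

# A test controls the input and observes the recorded behaviour:
t = Thermostat(read_temperature=lambda: 18.5)
assert t.tick() is True          # heating turns on below the threshold
assert t.events == [(18.5, True)]
```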
Important design principles related to separation of concerns support testability, e.g.:

  • Interface Segregation design principle
  • Open Closed design principle
  • Robustness principle (Postel’s Law)
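To illustrate the first of these, here is a sketch of how Interface Segregation supports testability: because the client depends only on a one-method interface, a test double is trivial to write. All names are hypothetical:

```python
from typing import Protocol

# Interface Segregation: the client depends only on the narrow
# interface it actually uses, so a test double needs just one method.
class TemperatureSource(Protocol):
    def current(self) -> float: ...

class FanController:
    def __init__(self, source: TemperatureSource):
        self.source = source

    def should_run(self) -> bool:
        return self.source.current() > 30.0

# The test double is trivial because the interface is small:
class FixedSource:
    def __init__(self, value: float):
        self.value = value

    def current(self) -> float:
        return self.value

assert FanController(FixedSource(35.0)).should_run() is True
assert FanController(FixedSource(20.0)).should_run() is False
```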


With those design principles in mind, let’s look at microservices:

Microservices should be designed and shaped so that they fulfill the following key aspects:

  • They are responsible for only one functionality.
  • They are loosely coupled, communicate with each other via APIs, and shall not share data.
  • If dependencies are needed, consider embedding them.
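As a hypothetical illustration of the last point, a pricing service might embed a snapshot of the reference data it depends on instead of calling a separate rates service at runtime, so it can be built, tested, and released on its own (the data and names are invented for this sketch):

```python
import json

# Embedding the dependency: instead of a runtime call to a separate
# rates service (a tight coupling), the pricing service ships with a
# snapshot of the data it needs and can be tested in isolation.
EMBEDDED_RATES = json.loads('{"EUR": 1.0, "USD": 1.08}')

def price_in(currency: str, eur_amount: float) -> float:
    return round(eur_amount * EMBEDDED_RATES[currency], 2)

assert price_in("USD", 10.0) == 10.8
```

The trade-off is that the embedded snapshot must be refreshed as part of the service's own release cycle.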

By implementing design principles that focus on decoupling and support testability, microservices can be fully decoupled, which makes it possible to build, test, and release them independently, i.e. in autonomous Continuous Integration & Continuous Delivery (CI & CD) pipelines.


Note that overusing testability-related design changes may lead to test-induced design damage: "Such damage is defined as changes to your code that either facilitates a) easier test-first, b) speedy tests, or c) unit tests, but does so by harming the clarity of the code — usually through needless indirection and conceptual overhead." (David Heinemeier Hansson, 2014).

However, if design principles such as Interface Segregation, Open Closed, and the Robustness principle are followed, the software should support testability while avoiding the introduction of design damage.



See also:



Continuous Testing in DevOps…

A good blog post about Testing in DevOps, by Dan Ashby:

I’ve recently attended a number of conferences, some testing ones, some agile ones and some dev ones too. Although some of the talks were painful due to misunderstandings of one thing or another (be it testing, automation, agile, BDD, TDD, etc…), overall, I thought the conferences were pretty good and I met some new people too and got to share some stories with them.

One thing I heard a fair amount was about DevOps. Many people were talking about it. It’s a big topic! But too many people seemed hugely confused about where testing fits in this wonderful world of DevOps. Some suggested that you only need automation in DevOps, but when asked to explain, their arguments fell floppily by the wayside. Some people blatantly refused to guess at how they’d try and implement testing in DevOps.
And one person even said that no testing was required at all, as he…


What is Continuous Testing?

DevOps combines software engineering, testing, and operations. Testing is an essential part of DevOps: Continuous Delivery addresses the need to deliver to the customer fast and with high quality. This means that the tests must provide fast and meaningful feedback. Releasing with both speed and confidence requires immediate feedback from Continuous Testing.

The main goals of Continuous Testing are:

  • Fast and meaningful feedback
  • Support collaboration
  • Make integration and releases “non-events”

There is a “shift left” in testing, describing the trend towards an approach in which the teams focus on quality, work on problem prevention instead of detection, and begin testing as early as possible. The goal is to increase quality, shorten long test cycles and reduce the risk of bugs found at the end of the development cycle or in production. The “shift left” is about fast and meaningful feedback by executing tests early and at the lowest integration level possible:

  • Unit tests executed as pre-commit tests provide fast feedback and help to ensure a stable mainline.
  • Acceptance tests provide feedback on the business risks associated with a build.
  • Frequently executed performance tests detect performance regressions early.
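A pre-commit unit test of the kind listed above could look as follows; the function and test are invented for illustration, and the point is that pure logic without I/O runs in milliseconds:

```python
# Hypothetical unit under test: pure logic with no I/O, database, or
# network access, which is what makes it fast enough for pre-commit.
def discounted(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# A pytest-style test: fast, deterministic, no external dependencies.
def test_applies_discount():
    assert discounted(100.0, 25) == 75.0

test_applies_discount()
```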

You should have a large number of unit tests, complemented by acceptance and performance tests and, if at all, only a small number of automated E2E tests. Unit tests should be implemented following the Test-Driven Development (TDD) methodology, and acceptance tests can be implemented following the Acceptance Test-Driven Development (ATDD) approach. As part of this “shift left”, the whole team contributes to the tests: testing is not an isolated activity done in a separate silo but owned by a cross-functional team.

As described by Bach, testing is the evaluation of a product by learning about it through exploration and experimentation. How does this fit with Continuous Testing? You should evaluate whether the tests that are being executed are meaningful, i.e. whether they detect issues. If a test case never fails, you may want to execute it less frequently as part of CI & CD, i.e. not as part of the pre-commit tests, which are about fast feedback. Exploratory testing should be done to detect gaps in the automated tests. If you design a test for CI & CD, you must explore too, because you must think about the expected usage scenario. Also, since CI & CD is about frequent releases, those releases can be used for experimentation.
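One possible (purely hypothetical) way to act on this is to order tests by their failure history and keep only the tests that have actually detected issues in the fast pre-commit stage:

```python
# Hypothetical failure counts collected from past CI runs.
failure_history = {
    "test_login": 12,
    "test_checkout": 3,
    "test_legacy_report": 0,   # has never failed
}

def precommit_suite(history: dict, min_failures: int = 1) -> list:
    """Select tests that have detected issues, most meaningful first;
    never-failing tests move to a less frequent pipeline stage."""
    return sorted((t for t, n in history.items() if n >= min_failures),
                  key=lambda t: -history[t])

assert precommit_suite(failure_history) == ["test_login", "test_checkout"]
```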

Testing doesn’t have to stop with the release approval prior to production. As part of DevOps, there is not only a “shift left” in testing; there can also be a “shift right”: if you have already progressed to a high degree of Continuous Delivery, you may consider introducing controlled experiments in production, e.g. A/B testing, Test in Production (TiP), and Chaos Engineering. In particular, A/B tests play a vital role in the build/measure/learn approach propagated by the lean startup methodology. As Alistair Croll puts it in his Lean Analytics book (2013): “Testing is at the heart of Lean Analytics”.
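A common way to implement the A/B assignment mentioned above (sketched here with invented names) is to hash the user id into a stable bucket, so each user consistently sees the same variant:

```python
import hashlib

# Hypothetical A/B assignment: hashing the user id gives a stable,
# evenly spread split, so each user always sees the same variant.
def variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # in [0, 1)
    return "B" if bucket < split else "A"

assert variant("user-42", "new-checkout") in ("A", "B")
# Assignment is deterministic per user and experiment:
assert variant("user-42", "new-checkout") == variant("user-42", "new-checkout")
```

Keying the hash on the experiment name as well keeps assignments in different experiments statistically independent.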

Experiments in production are only possible if the software is sufficiently tested upfront: the idea is not to ship features that might not even work and let customers find the flaws. The tests are customer-facing, but they are executed as controlled experiments affecting only a small number of customers. They are made possible by rolling out features gradually, relying on feature flags that are easy to turn on or off, focusing on a short mean time to recover (MTTR), quick rollbacks, and other approaches that together make up the DevOps tool set.
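A minimal feature-flag sketch along these lines (names and configuration invented) shows both properties the text mentions: gradual percentage rollout and an instant, global off switch for quick rollback:

```python
import hashlib

# Hypothetical flag store; in practice this would live in a config
# service so it can be changed without a redeploy.
FLAGS = {"new-search": {"enabled": True, "rollout_percent": 10}}

def is_enabled(flag: str, user_id: str) -> bool:
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    # Stable per-user bucket in 0..99 for the gradual rollout.
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).digest()
    return digest[0] * 100 // 256 < cfg["rollout_percent"]

# Turning the flag off disables the feature for everyone at once:
FLAGS["new-search"]["enabled"] = False
assert is_enabled("new-search", "user-1") is False
```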

See also: