A few years back, I adopted the belief that seams were a silver bullet – that more seams unequivocally meant better software design. Recently, I’ve realized how this incorrect belief has caused damage to the systems I’ve built.
The journey began when I started learning how to build testable systems. The resources on this topic geared towards junior developers seemed to focus on how to test code which interacts with volatile dependencies.
Seams were presented as a way to truly ‘isolate’ units from their dependencies [1]. Looking back on some of the resources I learned from, the importance of seams was particularly emphasized. For example:
“The key to testing is the presence of seams … I don’t know how to test [an] application without seams”
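To make the idea concrete, here is a minimal Python sketch of a seam (all class and method names are hypothetical, for illustration only): the constructor parameter is the point where a test can substitute the volatile dependency with a test double.

```python
from abc import ABC, abstractmethod

class MailGateway(ABC):
    @abstractmethod
    def send(self, to: str, body: str) -> None: ...

class SmtpMailGateway(MailGateway):
    def send(self, to: str, body: str) -> None:
        raise RuntimeError("would talk to a real SMTP server")

class FakeMailGateway(MailGateway):
    """Test double that records messages instead of sending them."""
    def __init__(self) -> None:
        self.sent: list[tuple[str, str]] = []

    def send(self, to: str, body: str) -> None:
        self.sent.append((to, body))

class WelcomeService:
    # The constructor parameter is the seam: production wires in
    # SmtpMailGateway, tests inject FakeMailGateway.
    def __init__(self, gateway: MailGateway) -> None:
        self._gateway = gateway

    def welcome(self, email: str) -> None:
        self._gateway.send(email, "Welcome aboard!")

# In a test, the fake replaces the volatile dependency:
fake = FakeMailGateway()
WelcomeService(fake).welcome("a@example.com")
assert fake.sent == [("a@example.com", "Welcome aboard!")]
```

The seam lets the test run without any real SMTP server in sight, which is exactly what made the technique so appealing to me.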
Around the same time, I learned about the SOLID principles, and how they are employed by systems that are flexible, robust and reusable. It soon became apparent to me that seams were an application of the Dependency Inversion Principle (DIP), along with the Single Responsibility Principle (SRP).
This served as even more feedback that seams were a powerful tool. I came to the conclusion that the more seams I have, the more testable my code will be, and the more adherent it will be to the SOLID principles. Therefore, more seams unequivocally meant better design. Unfortunately, my conclusion was naïve and incorrect.
The resulting design
The systems I built, which centered around the use of seams, looked something like this:
- Many classes, each implementing singular responsibilities in the system (SRP)
- Virtually all of these classes implementing an interface, and being injected into the classes which depended on them (DIP)
- A unit test suite which took advantage of the created seams to insert test doubles
- A high-level portion of the code-base responsible for wiring all of these classes together to instantiate the system
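In a small Python sketch (all names hypothetical), this style looks something like the following: every class hides behind its own single-implementation interface, and a high-level composition root selects every implementation.

```python
from abc import ABC, abstractmethod

class OrderValidator(ABC):
    @abstractmethod
    def validate(self, order: dict) -> bool: ...

class DefaultOrderValidator(OrderValidator):
    def validate(self, order: dict) -> bool:
        return order.get("qty", 0) > 0

class PriceCalculator(ABC):
    @abstractmethod
    def total(self, order: dict) -> float: ...

class DefaultPriceCalculator(PriceCalculator):
    def total(self, order: dict) -> float:
        return order["qty"] * order["unit_price"]

class OrderProcessor:
    # Every dependency arrives through an interface-typed seam (DIP),
    # and each class does exactly one thing (SRP).
    def __init__(self, validator: OrderValidator,
                 calculator: PriceCalculator) -> None:
        self._validator = validator
        self._calculator = calculator

    def process(self, order: dict) -> float:
        if not self._validator.validate(order):
            raise ValueError("invalid order")
        return self._calculator.total(order)

# Composition root: implementation choices bubble up to the top level.
processor = OrderProcessor(DefaultOrderValidator(), DefaultPriceCalculator())
assert processor.process({"qty": 2, "unit_price": 5.0}) == 10.0
```

Note that each interface here has exactly one production implementation, a pattern that recurs throughout the problems described below.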
Unfortunately, seams were not the silver bullet I hoped they would be. Over time, I began to repeatedly encounter design pains when working with systems of the nature described.
The damage done
Explosion of interfaces
Adopting the mentality of “depend on abstractions, not concretions” [2] to create seams causes an explosion of interfaces.
The problem with these interfaces is that they almost never model a true abstraction found in the domain. Instead, they tend to model an implementation detail of how the system solves some problem in the domain.
Abstractions are powerful because they hide complexity, allowing us to easily grasp the bigger picture. When the explicit abstractions (i.e. interfaces) of a system mirror implementation details, we can no longer rely on them to outline the bigger picture, making the system harder to understand. This is amplified by the fact that injecting these interfaces causes the selection of their implementation to bubble up to higher levels of the system, further hurting encapsulation.
Not only do these false abstractions make the system harder to understand, they also make it harder to maintain. Changing implementation details of our system often means we need to change interfaces as well, since they are directly coupled.
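A hypothetical Python example of such a false abstraction (the names are mine, not from any real system): the interface encodes *how* the report happens to be produced today rather than a concept in the domain, so it must change whenever the implementation does.

```python
import json
from abc import ABC, abstractmethod

class ReportCsvWriter(ABC):
    # The name and method shape mirror the current CSV implementation
    # detail, not the domain need ("publish a report"). If the system
    # later publishes JSON instead, write_csv_rows makes no sense, so
    # the interface and all of its callers change with it.
    @abstractmethod
    def write_csv_rows(self, rows: list) -> None: ...

class ReportPublisher(ABC):
    # A domain-level abstraction survives that same change unchanged.
    @abstractmethod
    def publish(self, report: dict) -> None: ...

class JsonReportPublisher(ReportPublisher):
    def __init__(self) -> None:
        self.published: list[str] = []

    def publish(self, report: dict) -> None:
        self.published.append(json.dumps(report))

publisher = JsonReportPublisher()
publisher.publish({"total": 10})
assert publisher.published == ['{"total": 10}']
```

Swapping CSV for JSON forces a new `ReportCsvWriter` contract, while `ReportPublisher` callers never notice, which is the difference between a false abstraction and a real one.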
Lack of cohesion
As we work to create small classes, and create explicit seams between all of their interactions, the system becomes composed of many fine-grained, independent actors.
The problem here is that we begin to treat these fine-grained abstractions as the components of our system, when they are too granular and low-level to be treated as such. Doing so produces a design which lacks cohesion, and which aligns more with implementation details than with the domain itself. This tends to make the system harder to understand and maintain.
[Two system diagrams: left, “Where seam obsession takes us”; right, “What we should be aiming for”]
I’ll use the system diagrams above to help illustrate my point. Circles represent classes, arrows represent communication between them, and classes of the same colour conceptually belong together.
In the left diagram, each class behaves like an independent component, even though it makes more sense for it to be part of a larger, cohesive component. This creates a complex collaboration scheme, breaks encapsulation, and makes the system harder to conceptualize.
The right diagram serves as a better model. There is still decoupling and separation of concerns, but those concerns are organized in a cohesive way. Components are created from independent but related classes to model high-level concepts and roles. This creates a system which is easier to comprehend due to clearer abstractions, simpler collaboration, and increased encapsulation.
In the previous sub-section, I spoke about how false abstractions prevent us from utilizing the abstractions in a system to help conceptualize the problem space. These diagrams serve as good visual confirmation of that. The component abstractions in the right diagram create explicit boundaries that we can use to reason about the system.
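A minimal Python sketch of the right-hand style (with hypothetical names): the component is still built from small, single-responsibility classes internally, but only the component's boundary is visible to the rest of the system.

```python
class _TaxRules:
    """Internal collaborator: not injected or exposed system-wide."""
    def tax(self, amount: float) -> float:
        return round(amount * 0.2, 2)  # assumed flat 20% rate for the sketch

class _InvoiceFormatter:
    """Internal collaborator: formatting stays inside the component."""
    def format(self, amount: float) -> str:
        return f"Total due: {amount:.2f}"

class BillingService:
    # The component's explicit boundary: the rest of the system talks
    # to Billing through this class and never sees the pieces inside.
    def __init__(self) -> None:
        self._rules = _TaxRules()
        self._formatter = _InvoiceFormatter()

    def invoice(self, amount: float) -> str:
        return self._formatter.format(amount + self._rules.tax(amount))

assert BillingService().invoice(100.0) == "Total due: 120.00"
```

The small classes still exist and still have single responsibilities; the difference is that they collaborate behind one cohesive boundary instead of each acting as a system-wide component.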
Brittle test suite
An ideal test suite is one which gives us confidence that we can refactor and be quickly notified if we have caused a regression. We can define refactoring as follows: changing internal structure without affecting the external behaviour/output.
The typical approach to get this kind of system under test is to create test doubles at the seams, and verify the collaboration between our classes for correctness. The problem with this approach, in this context, is that we are tying our tests to the internal structure of the system.
Why? Because of poorly selected abstractions, the interfaces at the seams of the system tend to mirror implementation details, and the collaboration scheme between classes and their dependencies tend to be implementation details as well.
Since we’ve tied the tests to the internal structure of the system, and refactoring changes internal structure, tests will often break after refactoring, even if the external behaviour of the system has not changed.
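A small Python illustration of such a structure-coupled test, using `unittest.mock` (the class and its collaborators are hypothetical): the test verifies *how* the class talks to its dependencies rather than *what* it returns.

```python
from unittest.mock import Mock

class OrderService:
    def __init__(self, validator, calculator):
        self._validator = validator
        self._calculator = calculator

    def total(self, order):
        self._validator.validate(order)
        return self._calculator.total(order)

validator, calculator = Mock(), Mock()
calculator.total.return_value = 42.0
service = OrderService(validator, calculator)

# The behavioural assertion: this is what actually matters.
assert service.total({"qty": 1}) == 42.0

# The interaction assertions pin down internal structure: inline the
# validation, merge the two collaborators, or change how they are
# called, and these fail even though the observable behaviour (the
# returned total) is identical.
validator.validate.assert_called_once_with({"qty": 1})
calculator.total.assert_called_once_with({"qty": 1})
```

Every refactoring that reshapes the collaboration scheme now reads as a regression to the test suite, which is precisely the brittleness described above.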
I’d like to emphasize that I’m not claiming that there is anything inherently wrong with seams, or any other principles I’ve mentioned.
The purpose of this exercise was to unravel what led me to the incorrect beliefs that caused design damage, and to come away with some key takeaways on how I can improve moving forward.
- Use explicit abstractions like interfaces and abstract classes carefully. They should ideally encapsulate the complexity and implementation details of an important high-level concept or role in the domain
- Poorly selected abstractions display some warning signs
- Header interfaces, i.e. interfaces which only have one implementation
- Interface definitions changing frequently
- Unit tests and IoC/DI containers which have knowledge of implementation-specific classes and details, rather than high-level abstractions and behaviours
- Ensure that when applying SRP, the cohesiveness of the system is always taken into account. Enforce cohesion by using the classes created via SRP to form explicit components, which model a high-level concept in your domain
- Reserve DIP for meaningful boundaries in the domain where the Strategy pattern is relevant (e.g. swapping between implementations of a high-level component)
- Be wary when adopting any new tool or principle, and fight the urge to treat it as a ‘silver bullet’ or a ‘golden hammer’. Understand both sides of its trade-offs.
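As a sketch of a boundary that genuinely earns an interface (hypothetical names again): two real, swappable implementations of a high-level role, i.e. the Strategy pattern the takeaways above reserve DIP for.

```python
import gzip
from abc import ABC, abstractmethod

class CompressionStrategy(ABC):
    """A high-level role with more than one real implementation."""
    @abstractmethod
    def compress(self, data: bytes) -> bytes: ...

class GzipCompression(CompressionStrategy):
    def compress(self, data: bytes) -> bytes:
        return gzip.compress(data)

class NoCompression(CompressionStrategy):
    def compress(self, data: bytes) -> bytes:
        return data

class Archiver:
    # DIP pays for itself here: callers genuinely swap strategies.
    def __init__(self, strategy: CompressionStrategy) -> None:
        self._strategy = strategy

    def archive(self, data: bytes) -> bytes:
        return self._strategy.compress(data)

assert Archiver(NoCompression()).archive(b"abc") == b"abc"
assert gzip.decompress(Archiver(GzipCompression()).archive(b"abc")) == b"abc"
```

Unlike a header interface, this abstraction models a real choice in the system, so the seam it creates carries its weight.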
[1] Interestingly enough, the term ‘isolation’ with respect to testing is contentious. When Kent Beck proposed that unit tests should be isolated, he meant that unit tests should not have any effect on each other. However, it appears that the common definition of isolation has become that units of code should be completely isolated from their system while under test. Ian Cooper discusses this here.
[2] While writing this, I wanted to verify exactly how Uncle Bob described DIP. I thought, perhaps he didn’t mean always depend on an interface, since you can have abstractions which aren’t explicitly modelled by an interface. Upon further investigation, he was very clear: “Every dependency in the design should target an interface, or an abstract class. No dependency should target a concrete class.”