The “Real” Clean Architecture in Android: S.O.L.I.D. | by Denis Brandi | Jul, 2022

To start, let’s say what CA is not:

  1. A template to follow
  2. An unnecessary boilerplate that slows you down (unless you do it wrong)
  3. A new trendy architecture that works only on Android (looking at you iOS, Web, and Backend guys)

So what is Clean Architecture?

The short answer is:

Clean Architecture is the outcome of the S.O.L.I.D. principles.

Hence if you don’t follow them you are not doing CA, you are just following a template that will never make sense.

My rant against other articles is that they rarely mention the S.O.L.I.D. principles and focus only on the separation-of-concerns part (SRP); because of that, they leave the reader with more doubts than they had before reading the article: why does that use case have a single method? Why do I need that interface there? Why do I need a different object and mappers?

The answer to all these questions is in the S.O.L.I.D. principles, so in part 1 I will focus on the theory and show how each principle shapes Clean Architecture; then I will deep-dive into each component in the following articles.

What you need to realize is that S.O.L.I.D. principles are not just a myth or a bullet point you need to put on your CV.

They are the benchmark to determine how clean your code is.

You don’t know if you have the right design? Check if your code adheres to S.O.L.I.D., just like a doctor would check your blood test: if something is wrong, the blood test will tell, and the same goes for S.O.L.I.D. and your code.

Now, just because a principle has been violated, it doesn’t mean you are going to have issues; but if you do have issues or bugs, or you are less productive, you are most likely breaking one or more S.O.L.I.D. principles.

Look at the S.O.L.I.D. principles as common-sense disciplines that can help you stay out of trouble; you may not always have to use them, but you’d better know them!

Derived classes must be substitutable for their base classes.

Inheritance is the tightest coupling you can have; bad usage of it leads to highly coupled and poorly cohesive code.

Most inexperienced engineers use inheritance as the primary way to achieve reusability, but this often makes maintenance almost impossible (and just so you know, repeating code is much better than coupling code: maintenance > reusability).

Because this used to be very common in the early days of Object-Oriented Programming, even among senior developers, the LSP was the main principle used to regulate inheritance.

Let’s have a look at this over-simplistic example (which is not the usual square/rectangle example you’ll see around) where the LSP is violated:
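The gist from the original post is not reproduced here, but based on the description that follows, a minimal Kotlin sketch of the violation might look like this (the class shapes and salary figures are illustrative assumptions, not the author’s exact code):

```kotlin
// Hypothetical reconstruction: CTO subclasses Employee only to reuse
// getSalary, but it cannot honour the getLineManager contract.
open class Employee(
    private val salary: Long,
    private val lineManager: Employee?
) {
    fun getSalary(): Long = salary

    open fun getLineManager(): Employee =
        lineManager ?: throw IllegalStateException("No line manager!")
}

// The CTO sits at the top of the hierarchy, so there is no line manager.
class CTO(salary: Long) : Employee(salary, lineManager = null)

// Any code written against Employee can now crash when handed a CTO.
fun printLineManagerSalary(employee: Employee) {
    println(employee.getLineManager().getSalary()) // throws for a CTO
}
```

A `CTO` is not substitutable for an `Employee`: callers that rely on the base-class contract blow up at runtime.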

Before you wonder, no the LSP is not violated because Android Developers are underpaid (although it would make a good argument 😈).

In this scenario, the developer decided to make CTO a subclass of Employee to reuse the getSalary method, and because of that the app could now crash whenever getLineManager is called on a CTO object.

I know for certain that 99% of you wouldn’t have made this mistake, because you know that a CTO is a chief and not an employee, but relationships are not always this clear in a codebase.
In fact, I bet that you have seen many projects with massive BaseActivity, BaseFragment, BaseViewModel… classes; that is another clear example of an LSP violation.

So it is fine to use inheritance as long as I don’t break the LSP?

Instead of using inheritance and then switching to another approach if LSP is violated, you should change your way of thinking and default to composition.

99% of the things you want to do with inheritance can be done with composition so you should always favor composition over inheritance.
If by any chance composition cannot be used, even with the help of any of the G.o.F. design patterns, then you can use inheritance (by making sure you adhere to the LSP!).
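As a sketch of what “defaulting to composition” can look like (the SalaryProvider name is mine, not from the article), the salary reuse from the earlier example can be moved into a collaborator that both classes hold, so neither needs to inherit from the other:

```kotlin
// The reusable behaviour lives in a collaborator, not in a base class.
interface SalaryProvider {
    fun salary(): Long
}

class FixedSalary(private val amount: Long) : SalaryProvider {
    override fun salary(): Long = amount
}

// Both roles HAS-A SalaryProvider; neither IS-A the other.
class Employee(
    private val salaryProvider: SalaryProvider,
    private val lineManager: Employee? = null
) {
    fun getSalary(): Long = salaryProvider.salary()
    fun getLineManager(): Employee? = lineManager
}

class CTO(private val salaryProvider: SalaryProvider) {
    fun getSalary(): Long = salaryProvider.salary()
    // No getLineManager here: the method simply doesn't exist on this
    // type, so nothing can crash by calling it.
}
```

The substitutability problem disappears because the two types no longer pretend to share a contract they don’t both satisfy.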

What are the most common scenarios in which inheritance is fine to use?

There are a couple of scenarios where inheritance (IS-A relationship) is fine to use instead of composition (HAS-A relationship), or maybe it isn’t but you are forced to use it, for example:

  1. When the framework you are using requires you to use it (extending Activities, Fragments, ViewModels…)
  2. When doing data modeling (Animal -> Mammals…)

Excluding models and frameworks, usually, architectural components should not rely on inheritance, hence any architecture relying on inheritance is far from being good and clean.


Testing inheritance is always painful and requires more effort than testing composition.

When implementing a new subclass you not only have to test the new public methods added and the ones that you are overriding from the parent class, but you also have to test all the other methods of the parent class that you are not overriding!

After all, you don’t want changes to the parent class to break your subclass behavior, do you?

Testing composition instead is a piece of cake since your tests won’t care about how the collaborator works, they will only care about your collaborator’s contract which can easily be mocked/stubbed/faked… if required.

* Clean Architecture, Chapter 9 (LSP)
* Origin of LSP

A class should do one, and only one, thing.

Hmmm… you meant “A function should do one, and only one, thing?”.
The wrong principle, try again.

A class should have one, and only one, reason to change.

Better, but maybe rephrase it in a way that explains what that “reason to change” is…

A class should be responsible to one, and only one, actor (group of users or stakeholders).

Now we are talking!
I’m using the term “class” because we work in the Java-Kotlin realm, a more generic description is a “module (not a java module) or a source file that contains a cohesive set of functions and data structures”.

“Cohesion” is the keyword in the SRP: the methods of a class must be cohesive; if they are not, you should move them to another class.

Who are the actors? There are actually many, and therefore there are going to be many classes. “Logging in” and “Making a purchase” will most likely interest different actors, the onboarding team and the sales team for example. So putting them in the same class would be a violation.

To top that, the “Logging in” feature will interest not only a PO but also the UI/UX Designers that defined how the user interaction is performed and the BE team that provided you the REST API (plus other actors that might be involved).

These actors, even if all involved in the same feature will want to be independent of each other as much as possible and will not want to be affected by other actors (UI changes shouldn’t break your API code, API changes shouldn’t break your UI code, both UI changes and API changes shouldn’t break your business logic code).
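To make the actor-per-class idea concrete, here is a hypothetical sketch (the class names and trivial bodies are mine): the first class answers to both the onboarding team and the sales team, while the split version gives each actor its own class.

```kotlin
// SRP violation: two actors (onboarding team, sales team) share one
// class, so a change requested by either forces the other to re-test
// and recompile.
class UserFlows {
    fun login(email: String, password: String): Boolean =
        email.isNotBlank() && password.isNotBlank()

    fun makePurchase(itemId: String): Boolean =
        itemId.isNotBlank()
}

// SRP-compliant: one class per actor, each with one reason to change.
class LoginUseCase {
    fun login(email: String, password: String): Boolean =
        email.isNotBlank() && password.isNotBlank()
}

class MakePurchaseUseCase {
    fun makePurchase(itemId: String): Boolean =
        itemId.isNotBlank()
}
```

Now a change requested by the sales team touches MakePurchaseUseCase only, leaving the login code untouched.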

Conway’s law: The best structure for a software system is heavily influenced by the social structure of the organization that uses it so that each software module has one, and only one, reason to change.

This brings in the concept of vertical and horizontal slicing: features (login, search, purchase…) and layers (presentation, domain, data…..).

A feature set per cross-functional team.
A layer for each actor of that cross-functional team.

I will elaborate more on vertical and horizontal slicing in the next articles.

From SRP to CCP

At the component level, the SRP becomes the Common Closure Principle (CCP) which can be phrased in the following way:

Gather into components those classes that change for the same reasons and at the same times.

Separate into different components those classes that change at different times and for different reasons.

The CCP is one of the most important principles to follow for having a rock-SOLID modularization (another topic I’ll cover in the following articles).

By only applying the SRP, this is what the architecture of the system would look like:

[Diagram: Outcome of SRP]

Again, I will explain each component in the following articles.

* Clean Architecture, Chapter 7 (SRP)
* Clean Architecture, Chapter 13 (CCP)
* Patterns of Enterprise Application Architecture, Chapter 1

The most flexible systems are those in which source code dependencies refer only to abstractions, not to concretions.

This is the easiest principle to follow, and yet it is the one that developers violate most brutally.

Somehow developers think that you should have an interface “only in the case of multiple implementations”.
This couldn’t be more wrong.

Any modern Software Architecture worth its name (Hexagonal, Onion, Clean) heavily relies on the DIP.
We do not want our high-level business rules to depend upon low-level details.
We want the isolation of high-level abstractions from low-level details.

Many developers struggle to understand the problem of isolation because they are not aware of how the compiler works or of transitive dependencies, or because they start writing code from the wrong class (more on this in the next article).

Let’s use the diagram from the previous chapter (SRP) as an example, where all the arrows point in the same direction:

View -> ViewModel -> Interactor -> Repository -> Data Store...

The class ViewModel will have an import for the Interactor.
The class Interactor will have an import for the Repository… and so on.

What many developers don’t know is that the ViewModel also imports the Repository and the DataStore transitively.
Any change to the Data Store will trigger the recompilation of the Repository, which will cause the recompilation of the Interactor, then the ViewModel, and so on, up to the first class of the Flow of Control.

A good Architecture is supposed to isolate changes, not propagate them everywhere.

On top of that, the arrows’ direction not only dictates the compilation order but also dictates the first class the developer is going to write: the Data Source in this case, which, as we will see in the next articles, is the wrong class to start implementing.

If we apply the DIP to the diagram of the previous chapter, this is how it will look:

[Diagram: The outcome of SRP + DIP]
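A minimal sketch of the inversion (all names here are illustrative): the Interactor owns the Repository abstraction, and the data layer implements it, so the source-code dependency now points from data to domain while the flow of control still runs the other way.

```kotlin
// Domain layer: the high-level policy owns the abstraction it needs.
data class User(val id: String, val name: String)

interface UserRepository {
    fun getUser(id: String): User?
}

class GetUserInteractor(private val repository: UserRepository) {
    fun execute(id: String): User? = repository.getUser(id)
}

// Data layer: a low-level detail that depends on the domain abstraction.
// Changing this class no longer forces the Interactor (or anything above
// it) to recompile, because they only import UserRepository.
class InMemoryUserRepository : UserRepository {
    private val users = mutableMapOf<String, User>()

    fun add(user: User) {
        users[user.id] = user
    }

    override fun getUser(id: String): User? = users[id]
}
```

The interactor compiles against the interface alone, which is exactly the isolation the DIP is after.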


Testability is also heavily affected by this principle.
Once you have an interface, creating a test double becomes a trivial job.

But I use Mockito, I don’t need interfaces for testing…

Yes I know, you can still “mock” the behavior of the collaborator by using a mocking framework that uses reflection under the hood.
But I also know that mocking frameworks make tests slower. In fact, on my machine, every time I run a test class, Mockito adds an extra 5–10 seconds to the execution time, and for a TDDer this means waiting 5–10 extra seconds for each TDD cycle (which is still faster than not doing TDD at all, btw).

PS: the 5–10 extra seconds become a bigger problem when you run the tests on your CI, since this is the time that Mockito takes to start for each module (the same will happen with Mockk, in case you are thinking that switching frameworks would solve the problem).

The last advantage of creating your own test double instead of using a mocking framework is that you never have to reference the collaborator’s method.

For example, when mocking using Mockito you are doing something like this:

fun test1() {
    whenever(collaborator.method(any())).thenReturn(result1)
    // ...
}
fun test2() {
    whenever(collaborator.method(any())).thenReturn(result2)
    // ...
}
fun test3() {
    whenever(collaborator.method(any())).thenReturn(result3)
    // ...
}
// ... you got the point

While, when you create your own test double:

class CollaboratorTestDouble : Collaborator {
    fun mockMethod(...) { ... }

    override fun method() {
        // stub, mock, spy...
    }
}

Your tests won’t reference the method anymore:

fun test1() {
    collaborator.mockMethod(...)
    // ...
}
fun test2() {
    collaborator.mockMethod(...)
    // ...
}
fun test3() {
    collaborator.mockMethod(...)
    // ...
}

And by not referencing the collaborator’s methods, your tests are more protected from method signature changes, which means less code to update and less code to review.
Tests also look more readable.

Sometimes you may not even have to create a test double because you can just use a real implementation.
This is usually preferable when such implementation requires minimal setup (it has no collaborators) and it was created before the current class under test.

Some other times mocks just don’t work well at all.
Think about the SharedPreferences.Editor interface: every method returns the object instance, and the transaction is not executed until apply or commit is invoked.
Many developers give up on mocking here and just move the test to the androidTest folder so they can use the real preferences, but that, in my opinion, is a complete failure.
The fastest and cleanest solution is to create a fake like the following:

Fake for SharedPreferences that uses an in-memory implementation
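The original gist is not reproduced here; below is a minimal sketch of the idea. Since android.content.SharedPreferences is not available off-device in this snippet, the Prefs and PrefsEditor interfaces are simplified stand-ins for the real SharedPreferences and SharedPreferences.Editor (fewer methods, same shape), and FakePrefs is a hypothetical name:

```kotlin
// Simplified stand-ins for SharedPreferences / SharedPreferences.Editor.
interface Prefs {
    fun getString(key: String, default: String?): String?
    fun contains(key: String): Boolean
    fun edit(): PrefsEditor
}

interface PrefsEditor {
    fun putString(key: String, value: String?): PrefsEditor
    fun remove(key: String): PrefsEditor
    fun apply()
}

// In-memory fake: no mocking framework, no androidTest, runs on the JVM.
class FakePrefs : Prefs {
    private val store = mutableMapOf<String, String?>()

    override fun getString(key: String, default: String?): String? =
        if (store.containsKey(key)) store[key] else default

    override fun contains(key: String): Boolean = store.containsKey(key)

    override fun edit(): PrefsEditor = FakeEditor()

    // Like the real Editor, every method returns the editor itself and
    // nothing is written to the store until apply() is invoked.
    private inner class FakeEditor : PrefsEditor {
        private val pending = mutableMapOf<String, String?>()
        private val removals = mutableSetOf<String>()

        override fun putString(key: String, value: String?): PrefsEditor {
            pending[key] = value
            return this
        }

        override fun remove(key: String): PrefsEditor {
            removals += key
            return this
        }

        override fun apply() {
            removals.forEach { store.remove(it) }
            store.putAll(pending)
        }
    }
}
```

The fluent chaining and the deferred apply() are exactly the behaviors that mocks struggle to reproduce, and here they come for free.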

This fake uses an in-memory implementation of the SharedPreferences interface.
By using this in your test class there is no setup required, and you can pre-set your preferences by just doing
fakePreferences.putString("key", "value").apply()
just like you would with the real SharedPreferences in androidTest, with the difference that this way your tests will run on the JVM and will be much faster (and obviously faster than using mocks).

A funny story

A few years ago Mockito didn’t have support for final classes and when developers were migrating their codebases from Java to Kotlin (where classes are final by default) their tests were failing.

That was a clear sign of bad architecture, and at that point there were only two solutions: a clean one (adding an abstraction) and a dirty one (making classes open).

Which solution do you think most developers adopted? 😈

Let’s just say that I know many companies that did a big refactoring PR to remove the open keyword everywhere once Mockito released its mock-maker-inline workaround (which still doesn’t work for spy and cannot be used in androidTest, btw).

* Clean Architecture, Chapter 11 (DIP)
* Hexagonal Architecture
* Onion Architecture, Part 1
* Martin Fowler, Mocks Aren’t Stubs

Keep interfaces small so that users don’t end up depending on things they don’t need.

This one is mainly for us statically-typed language users; in fact, in dynamically-typed languages developers may even get away with just S.O.L.D. instead of S.O.L.I.D. (there are still benefits in adhering to the ISP in dynamically-typed languages, but I won’t cover them since Android is based on Java/Kotlin anyway).

The ISP can be seen as an extension of the DIP, given that you can’t have small interfaces if you don’t have interfaces at all 😉.

While the purpose of the DIP is to protect your code from the flow of control and from transitive dependencies, with the ISP we make sure our collaborators are not only interfaces (or other kinds of abstractions) but also small ones, so that we don’t depend on something we don’t use which, when changed, would cause an extra recompilation.

Let’s take a look at this example:

interface UserService {
    fun login(...)
    fun logout(...)
    fun createAccount(...)
}

class LoginViewModel(
    private val userService: UserService
) : ViewModel() {
    fun doSomething() {
        userService.login(...)
    }
}

LoginViewModel uses only one method of UserService, yet it depends on all of them plus all the object parameters they require.
This means that a change to logout will recompile LoginViewModel.
Adding or removing methods on UserService will also cause a recompilation.

How do you solve this? Simple, instead of having a big interface you create more and smaller interfaces:

interface LoginInteractor {
    fun execute(...)
}

interface LogoutInteractor {
    fun execute()
}

interface CreateAccountInteractor {
    fun execute()
}


If it wasn’t clear already, the S.O.L.I.D. principles are not independent of each other: by following one you are most likely also following some of the others, and by breaking one you are most likely also breaking some of the others (which is why most developers break most of them).

Classes can implement interfaces, so if interfaces are small then classes are most likely following the SRP.

After all, which class do you think is more likely to break the SRP?
A class with a single method or a class with 100 methods?

Sometimes, instead, you want small interfaces but slightly bigger classes.
Interfaces are preferably small because they are optimized for the classes that use them.
Classes, instead, should be responsible for one thing; this often makes them smaller, but as the SRP says, you should not split one responsibility across multiple classes, since you are meant to aggregate cohesive methods in a single class.

What to do in this case?
Simple, you make your class implement multiple interfaces!
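A sketch of that idea (the UserInteractors name, signatures, and trivial bodies are mine): one cohesive class serving several small, client-facing abstractions, in the spirit of the interactor interfaces above.

```kotlin
interface LoginInteractor {
    fun execute(email: String, password: String): Boolean
}

interface LogoutInteractor {
    fun execute()
}

// One cohesive class, several small interfaces: each client depends only
// on the abstraction it actually uses.
class UserInteractors : LoginInteractor, LogoutInteractor {
    private var loggedIn = false

    override fun execute(email: String, password: String): Boolean {
        loggedIn = email.isNotBlank() && password.isNotBlank()
        return loggedIn
    }

    override fun execute() {
        loggedIn = false
    }

    fun isLoggedIn(): Boolean = loggedIn
}

// A client sees only the small interface, never the whole class.
class LoginViewModel(private val loginInteractor: LoginInteractor) {
    fun onLoginClicked(email: String, password: String): Boolean =
        loginInteractor.execute(email, password)
}
```

LoginViewModel recompiles only when LoginInteractor changes, no matter what happens to the logout code living in the same class.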

From ISP to CRP

At the component level, the ISP becomes the Common Reuse Principle (CRP) which can be phrased in the following way:

Don’t force users of a component to depend on things they don’t need.

This is another important principle for modularization.
Classes that are reused together should be part of the same component.
Like the CCP, it tells us how to group classes, but it also tells us which classes should not be kept together in the same component.

Just like a class recompiles when an unused collaborator’s method changes, a component recompiles when an unused class of an imported component changes.


When creating a test double, it is quite annoying to have to deal with big interfaces, since you have to stub/mock all of their methods.

In order to avoid this issue, you can either use a mocking framework or adhere to the ISP.

Personally, I’m not against mocking frameworks (I actually use them in all my projects), but what I don’t like is that, by using them, developers can be tempted to break one or more principles:

  • Because you can mock classes, you might be tempted to avoid interfaces (DIP 💔)
  • Because you don’t have to mock all the methods, you might be tempted to keep big interfaces/classes (SRP and ISP 💔)

Adhering to S.O.L.I.D. principles makes testing easier and removes the need to use mocking frameworks.
Somehow Android Developers think that they can’t test their code if they don’t use mocking frameworks, unlike iOS developers that tend to stay as vanilla (classicists) as possible.
This is because Java/Kotlin library developers have done very good marketing in the past years and not because unit testing was born with mocking in mind.

My take is that choosing between Mocking Frameworks and Test Doubles is up to your team’s preference (if that is the only way your team is going to write tests then it is the right way) but your code should be testable regardless of the mocking strategy you use!

* Clean Architecture, Chapter 10 (ISP)
* Clean Architecture, Chapter 13 (CRP)

A software artifact should be open for extension but closed for modification.

The ultimate goal of software architecture is to add new features without having to rewrite or recompile any existing code.
This is because every time you modify existing code you may introduce regression bugs and break previously built functionalities, not to mention merge conflicts and very big pull requests!

The OCP is the least independent S.O.L.I.D. principle and the one you should monitor the most; in fact, breaking any other principle will most likely break the OCP too:

  1. When the LSP is violated, hence inheritance is used in the wrong way, adding a subclass may require a modification of the parent class in order not to break other subclasses.
  2. When the SRP is violated, hence a class belongs to more than one actor, whenever an actor requires a new integration, that integration will modify an existing method, which will also affect the other actors.
  3. When the DIP is violated, hence concrete classes depend on concrete classes (creating transitive dependencies too), adding a new method somewhere will require a recompilation of all the classes up to the start of the flow (and yes, if you were wondering, recompilation counts as modification too).
  4. When the ISP is violated, hence collaborators’ abstractions have more methods than required, adding a new method to a collaborator will also recompile classes that don’t use that method.

So if I follow the other 4 principles, do I get this one for free?

No, unfortunately.

There are other ways you can break the OCP without breaking the other principles, but luckily it is very easy to detect these extra scenarios.

Flag Arguments AKA “Magic Booleans”

How many times have you thought “I’ll just add a boolean here, if true I do A and if false I do B”?

That’s a modification of existing code where you support the integration of B by adding a boolean.

Instead of adding a boolean and modifying an existing function, add a new function.

For example, instead of:

fun pay(isDebitCard: Boolean) { ... }

write:

fun payWithDebitCard() { ... }
fun payWithCreditCard() { ... }


Enum arguments present the same problem as flag arguments, but the chance of breaking existing features is even higher, since an enum can have many values, unlike a boolean, which has at most 2 (unless you are passing more than one boolean 😈).

So again, instead of:

enum class PaymentType {
    DEBIT_CARD,
    CREDIT_CARD,
    BANK_TRANSFER,
    GOOGLE_PAY
}

fun pay(paymentType: PaymentType) { ... }

write:

fun payWithDebitCard() { ... }
fun payWithCreditCard() { ... }
fun payWithBankTransfer() { ... }
fun payWithGooglePay() { ... }

This way adding a payment method cannot break the previously implemented payment methods and doesn’t require recompilation of all the classes importing the enum.

Does this mean I can’t ever use flags and enums?

It is nearly impossible to write software without ever using flags and enums, so, obviously, not all your classes and functions will strictly adhere to the OCP.
The important thing is that, when you have to use a flag argument or an enum, your function does only that if/else or when statement and nothing else.
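For instance, if a PaymentType enum does reach your code from the outside, a function that contains only the when dispatch keeps the damage contained (a sketch with hypothetical names and return values, just to make the shape visible):

```kotlin
enum class PaymentType { DEBIT_CARD, CREDIT_CARD }

// Each payment path lives in its own function...
fun payWithDebitCard(): String = "debit"
fun payWithCreditCard(): String = "credit"

// ...and the only place that knows about the enum is this dispatcher,
// which does the when statement and nothing else. Because the when is
// exhaustive, adding an enum value fails compilation here and nowhere
// else in the payment logic.
fun pay(paymentType: PaymentType): String = when (paymentType) {
    PaymentType.DEBIT_CARD -> payWithDebitCard()
    PaymentType.CREDIT_CARD -> payWithCreditCard()
}
```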

* Clean Architecture, Chapter 8
* Clean Code, Chapter 3
