In part 1 of this write-up, I took the long way around, covering everything that, for me, influences code structure, organization, and implementation at the macro level. And I do believe these higher-level choices influence the lower-level details, and they should. They provide context when looking through the solution and module structure, the naming conventions, and the actual code.

This part of the journey begins at the macro level still, where we explore how a well-thought-out solution forms the bedrock of readable code. This sets the stage for a deeper dive into the world of modular design – a crucial element in achieving clarity and maintainability. As we navigate through the layers of code structure, we unravel the secrets of creating a logical and intuitive flow that guides the reader effortlessly through the logic.

But the quest doesn’t end there. We zoom in further to dissect the nuances of implementation details. Here, naming conventions, commenting strategies, and the delicate balance between simplicity and functionality come into play, each playing a pivotal role in how effectively a piece of code communicates its intent.

Let us start, it is a long one.

The solution level

The next thing in line is opening the solution in an editor of choice and seeing whether the context from the previous layer gives more insights once you get your fingers busy. This is where you will be spending most of your time and doing your work. And this is where the conversation about code readability "properly" starts.

No matter what your chosen layering, architecture, and whatnot are, if you're not following what you're preaching it will just add to the confusion. And cognitive load. If the naming is not clear, it becomes a guessing game where to find what. Or if you say "I am following paradigm X" and your implementation adds a spin-off to it, that is another notch of confusion. At least for me, that is the case. I am the type of person who will not make a big fuss about whatever poison of choice they see in your code base. I may ask questions, but I will accept it and go with the flow. On the other hand, if the reality doesn't match what I see in the code base, then I will have a stronger opinion on that, simply because it adds to my confusion based on my understanding of the chosen architecture or paradigm in question.

/src
    /domain
        /entities
        /value_objects
        /services
        /repositories
        /exceptions
        /events
    /application
        /commands
        /queries
        /dtos
        /mappers
        /interfaces
    /infrastructure
        /repository_impl
        /external_services
        /db
        /api
        /configuration
    /ui
        /web
        /mobile
/tests
    /domain
    /application
    /infrastructure
    /integration

This is just one example, one that takes a Domain-Driven Design approach. It is one of the possible things you will encounter in the wild. Whether it is good or bad is not the point I am trying to make here. It is just an example, so don't get triggered. I have my own opinions on this as well, so let me leave it at that.

Back to this. As said, if you're working with something you read in a book, found at a conference, or picked up elsewhere, use it as intended. At least at the start, until you think you know better, when the project evolves together with your understanding of it and maybe beyond the original choices. Trying to add your own spin before that "clicks" by itself is going to lead to confusion about where things sit in the natural order of things. Is it a repository, or maybe a service, or maybe something else? That is why I believe following the root material of the chosen approach is a great way to learn and stay headache-free. It will help you see whether this is the way to go or whether to take a different route. It will also help with explaining the project. You can point your new hires towards whatever inspired you originally, and you don't need to implement a glossary to translate between the two vocabularies. Ubiquitous language was not meant for that. Oh, come on.

There is nothing better for me than when you ask a simple question and it turns into a rabbit hole. We’re doing DDD, with a mix of clean architecture and we sprinkle a bit of event sourcing. And that is just these two modules. The other two are adding ports and adapters and functional concepts in non-functional language. Easy to understand, right? Right?

Into the module, we go

Now it is time to unpack the next layer of this readability onion. Or its context. We're still nowhere near code, and yet we're apparently talking about its readability. The next layer of this context is a module. It tells me what I should expect to find within this layer, and why it exists in the context of the solution. A typical example from the dotnet ecosystem is given below:

/src
    /Project.Api
        Program.cs
        /Controllers
            ValuesController.cs
        /Dto
            YourModel.cs
        appsettings.json
        appsettings.Development.json

When I see the name of the project and a structure like the one described above, I wouldn't be surprised to find things like endpoints, dependencies related to whatever protocol, and so on. There is more context here given by knowing the framework in question, input that comes from documentation or from understanding which tech stack is being used.

So with context about how code is organized within the project, the next logical step is the point of this entire write-up. The code itself.

Are we there yet

With the context out of the way, let us talk about code. Wait, context??? Yes, all of the things before this are what I keep in mind when it comes to writing code. And code readability. They all add to the reason for its existence, the place where it lives, the name, and so on. It is not an isolated thing. It is the lowest level we can talk about in this write-up, but all the layers above give it meaning. A reason to be there.

The file

The first part of this new context is the scope the working file introduces. What is contained within it is based on everything we have learned and should expect. What did documentation and guidelines tell us we should be seeing? Does reality match that wishful thinking from days of old?

The name of the file is where it all starts for me. If I see a directory structure as follows:

/src
    /Project.Api
        /Endpoints
            ListUsers.cs

It is pretty clear what I can expect when I open ListUsers.cs in my editor of choice. I will also observe the naming conventions and consistency within the project. Do they adhere to the same conventions for all files? Are the filenames an indication of logical groupings, of pieces that fit together? These things tell me what I will be looking at next when I open the file.

When opening the file, the first thing I observe is the organization within: modularity, length, cohesion. Mostly I am looking at how length and modularity are impacted by cohesion. Meaning I don't mind if a single file contains several definitions or classes, as long as they're all closely related. A "things that change together, stay together" kind of mentality. Again, there are many ways to achieve the same result; I am just giving an example of me not being triggered by a file that has a controller, model, and mapping encapsulated within it.
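To make that concrete, here is a hedged sketch in Go of what such a single, cohesive file might look like: the storage shape, the wire shape, and the mapping between them kept together because they change together. All names here are illustrative, not from any real project.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// userRecord is the shape storage gives us.
type userRecord struct {
	ID   int
	Name string
}

// UserDto is the shape we expose over the wire.
type UserDto struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

// toDto maps the storage shape to the wire shape.
// It lives next to both types because the three change together.
func toDto(r userRecord) UserDto {
	return UserDto{ID: r.ID, Name: r.Name}
}

func main() {
	dto := toDto(userRecord{ID: 1, Name: "Ada"})
	b, _ := json.Marshal(dto)
	fmt.Println(string(b)) // {"id":1,"name":"Ada"}
}
```

Splitting these three into separate files would be just as valid; the point is only that keeping them together is not a readability crime when they form one cohesive unit.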

After glancing over this part, the next thing for me is to see if the content follows the prescribed style guides. With the advancements in editors these days, I expect to be able to configure them based on the existing style and go from there. The editor will take care of this for me, and I may challenge it at some point, but personally it doesn't bother me whether you're using tabs or spaces. I just like the code to be consistent, and when the editor does its magic I don't incur extra cognitive load when just reading the text.

The next thing is going to be looking at the code.

What goes where

Another important topic for me is the consistency of code within files. No matter what coding style you're using, I find it much easier if things are kept in the same places across all files. One of the things that stuck with me from my C++ days is the notion of having "blocks" to indicate access modifiers and to organize the code.

class SomeImplementation {
private:
    // Members and methods

public:
    // Constructor and methods
};

This has changed a bit over the years, but I still "block" my code in a certain way. It keeps changing, as I try out new things and find what works best for the given situation.

public class SomeImplementation
{
    // Private members
    
    // External dependencies as private members
        
    // Constructors

    // Public properties, get only

    // Public methods

    // Private methods
}

This makes sense in my head when I read the code, as I see it as follows:

  • What internal state do I wish to keep
  • Do I depend on external implementations
  • How are the internal state and external dependencies initialized, and to what values
  • Do I need to expose my internal state to the outside
  • What am I exposing to the outside world as a "service"
  • Which internal members encapsulate logic that is leveraged within

It depends on the language of choice, but I usually try to structure my code similarly across any of them. Language features like destructors and similar I keep grouped with their respective counterparts. Organized this way, the code flows from top to bottom and I always know where to find a certain implementation just by knowing the organization of the file. Pair that with features from your editor of choice and code navigation becomes a non-issue.
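As a hedged illustration of how the same "blocking" habit carries over to another language, here is a small Go sketch (the Counter type and its members are made up for the example): unexported state first, then the constructor, then the read-only accessor and public surface, then unexported helpers.

```go
package main

import "fmt"

// Counter follows the same blocks top to bottom:
// internal state, constructor, exposed state, public
// methods, private helpers.
type Counter struct {
	// internal state
	count int
	step  int
}

// NewCounter is the constructor: state is initialized in one place.
func NewCounter(step int) *Counter {
	return &Counter{step: step}
}

// Value exposes internal state read-only, like a get-only property.
func (c *Counter) Value() int { return c.count }

// Increment is the public "service" surface.
func (c *Counter) Increment() { c.advance() }

// advance is a private helper leveraged within.
func (c *Counter) advance() { c.count += c.step }

func main() {
	c := NewCounter(2)
	c.Increment()
	fmt.Println(c.Value()) // 2
}
```

The specific order matters less than picking one order and keeping it across every file.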

The code, finally

At this point, we’re finally at the lowest layer of this post. It took a bit of time to get to this layer, but as mentioned beforehand all the parts or layers around it, for me, contribute to the understanding of the written word. With editors these days getting better every minute, they provide even more context to the code. Making the plain text more visual. Easier to grasp the language/framework constructs at a glance. What do I mean, you may ask?

using Microsoft.AspNetCore.Mvc;
using System.Collections.Generic;
using MyApp.Api.Models;

namespace MyApp.Api.Controllers;

[ApiController, Route("api/products")]
public class ProductsController : ControllerBase
{
    [HttpGet]
    public ActionResult<IEnumerable<ProductDto>> GetAll()
    {
        var products = new List<ProductDto>
        {
            new ProductDto { Id = 1, Name = "Product 1", Price = 9.99M },
            new ProductDto { Id = 2, Name = "Product 2", Price = 19.99M }
        };

        return Ok(products);
    }
}

This is working code. It is just not what you're used to seeing: plain text, no highlighting. A bit harder to understand. And that is where the editor of choice adds a bit more visual appeal to your code.

using Microsoft.AspNetCore.Mvc;
using System.Collections.Generic;
using MyApp.Api.Models;

namespace MyApp.Api.Controllers;

[ApiController, Route("api/products")]
public class ProductsController : ControllerBase
{
    [HttpGet]
    public ActionResult<IEnumerable<ProductDto>> GetAll()
    {
        var products = new List<ProductDto>
        {
            new ProductDto { Id = 1, Name = "Product 1", Price = 9.99M },
            new ProductDto { Id = 2, Name = "Product 2", Price = 19.99M }
        };

        return Ok(products);
    }
}

Much easier to follow with just a bit of color, right? Keywords are highlighted, as are assigned values, usages, and so on. This is all nice and good, but when it comes to readability and understanding, the good measure of it, for me, is the first one. When you strip away all the bells and whistles.

Yes, I understand and agree that this comes packaged. That is not the point. I am focusing on how much context I can pull out of the code when I read it and try to understand the flow and what is happening. This is a subjective feeling, depending on the layers before this one. If they gave enough insight into the implementation details, great. Reality, in my experience, is not that simple.


Depending on the language of choice and some other frameworks of the chosen stack, there may be slight differences when it comes to how code is organized. Or slight differences in the keywords and naming conventions. But for the sake of code readability, they’re pretty much the same thing.

package main

import (
    "encoding/json"
    "net/http"
)

// Product mirrors the ProductDto from the C# example.
type Product struct {
    ID    int     `json:"id"`
    Name  string  `json:"name"`
    Price float64 `json:"price"`
}

func main() {
    http.HandleFunc("/api/products", getAllHandler)
    http.ListenAndServe(":8080", nil)
}

func getAllHandler(w http.ResponseWriter, r *http.Request) {
    products := []Product{
        {ID: 1, Name: "Product 1", Price: 9.99},
        {ID: 2, Name: "Product 2", Price: 19.99},
    }

    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(products)
}

As shown in the example above, the same code in Go. Slight differences, and a bit "lower" level than the C# code, but the concepts are pretty much the same, and reading either solution will result in a similar understanding. You can follow and map the solution between the two languages. And that is what I mean by "they're pretty much the same thing".

At least, this is true for me. And I could go through a similar example in F# or any other language and its frameworks. I don't think this should come as a surprise to anyone, no matter how many "mine is better than yours" takes you see online. Yes, some languages are better at some things, others at different ones. But for the sake of understanding what code does, and the readability of a problem's solution, I have never had issues making sense of it in the end.

This is a tricky one, but I hopefully made my point. It is the same point as when I hear people say "If you want performant code, use C", as if you can't write bad code in it that would be outperformed by any other "lesser" language.


Ok, back to the subject at hand. So what does all this mean? Why all this ranting? When you throw all of this into the mix and add a problem statement, opinions start to pop up on how to mix language constructs and domain problem statements.

To make code understandable and readable. So here I went a long way around to list all the onion layers I am looking at when I am down and dirty with the actual code. And here, those choices and concepts from the surrounding layers mix in, with a language and its actual features. No matter what paradigm you're working with, I am always looking for some of the things I consider "clean and readable" code.

public class Calculator
{
    public int Add(int first, int second)
    {
        return first + second;
    }
}

Ignoring what this small example does, I would like you to read and see what kind of information this gives you, before reading my take on it.


This starts at the top level, where I see the public class. For me, it means this is "free" to use by anyone; there is no limitation inside the module or when referencing it from outside. Modifiers in a language are there for a reason and they play a vital role. So I use them carefully to declare intent about the usage of this piece of code.

I will even go out of my way if I want to make this code testable to still keep proper modifiers and ask for access to the internals I wish to test. And not just go: “Make it public, simpler” which is one of the most common attitudes I see when it comes to this kind of thing.

The next one is the name of this "contract", Calculator. It should reflect the things I would expect to find inside, things that are closely related. Cohesion and other fancy words. The naming convention will be, and should be, heavily influenced by the paradigm you said is being used. Follow the convention that is set, be that by following proposals from wherever your influence comes from, or from the documentation you have around the topic. With that special twist to the original mix. I have no problem following whatever is decided, but I will have questions if I can't match that. A standards-and-guidelines kind of thing.

Naming is really important to me. I spend time when naming things, even checking a dictionary to find the proper word to reflect my intentions. Not even joking. This also indicates to me whether I truly understand what I am writing. Thus, naming is hard. There is no truer statement made in our industry, in my opinion.


Now we’re diving deeper into the method layer. Here again, modifier. Out of the box, will this be used outside of this encapsulating class?  If so, again naming “correctly” will matter afterward. Clarity of the intended function it will perform. After that is out in clear, return type. I see what is the expected outcome that a caller will be expecting.

Now, here are some of the tradeoffs that come in with different languages of choice, which make "clarity" a bit hard to get "right". As you're well aware, there are side-effects your solution can have. Like an exception. Some languages have a clear way to specify errors in signatures; some don't.

There are solutions for it, like adding documentation to your code. Saying this can land you in "self-explanatory code" discussions. I am not sure at what point in time that came to mean zero comments. I will touch upon this in a later part.
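Go is one of the languages that puts the possible failure straight into the signature, which sidesteps part of the documentation problem. A hedged sketch (the validation rule and names are made up for illustration):

```go
package main

import (
	"errors"
	"fmt"
)

// ErrTooOld is a sentinel error; its presence in the signature
// below tells the caller about the failure path, no comment needed.
var ErrTooOld = errors.New("wishful thinking")

// validateAge returns the age back, or an error the caller
// cannot ignore without being explicit about it.
func validateAge(age int) (int, error) {
	if age >= 200 {
		return 0, ErrTooOld
	}
	return age, nil
}

func main() {
	if _, err := validateAge(250); err != nil {
		fmt.Println("rejected:", err)
	}
}
```

In languages without this, such as C#, the exception path lives outside the signature, which is exactly where a documentation comment earns its keep.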

Now we get to the name, and the principle from above still applies: clarity about the provided functionality and the reason for its existence. Don't add things that are obvious from the rest of the signature; that is one of my rules when naming a function.

As an example, I wouldn't do AddNumbers. It is redundant and, for me, defeats the purpose of the rest of the signature. It should be obvious from the input parameters and their types what you are expecting and outputting. The module or class adds additional context too, so naming like that is like ignoring everything else. Just for fun. Or even better, Calculate. I cringe every time at the sight of that kind of naming: generic, without any obvious need for it.

var calculator = new Calculator();
var result = calculator.CalculateNumbers(1, 2, CalculatorOperation.Addition);

This example above is, for me, a full-on "anti-pattern" regarding my view of readable code. The example is a bit trivial, I know, but it wouldn't be an overstatement to say I see an equivalent of it at least once every couple of weeks. Generic for the sake of it, and stating the obvious for no good reason even though it's generic. But that is just my opinion, not right or wrong.


Next in line are the input parameters, and for me they are the same kind of thing as the return type. They indicate to the caller what obligations it needs to fulfill. Here I mostly look at the number of parameters, and my rule of thumb is that at more than 3 I would look into splitting them up or combining them into a custom type. And sometimes, even with fewer parameters, a custom type as an input (or output) that simply "wraps" the language constructs tells a much better story.

public record Vector(double X, double Y);

public class Calculator
{
    public Vector Add(Vector first, Vector second)
    {
        return new Vector(
            first.X + second.X,
            first.Y + second.Y);
    }
}

I don’t know about you, but this is much cleaner for me than the following:

public class Calculator
{
    public (decimal X, decimal Y) Add(
        decimal firstX,
        decimal firstY,
        decimal secondX,
        decimal secondY)
    {
        return new (firstX + secondX, firstY + secondY);
    }
}

Language constructs that enable features around my code, for example CancellationToken in C#, I am not going to wrap or count towards the total. Simply, as stated, it is a language construct and by itself it adds a layer of understanding for the caller. With this context in mind, here is also where naming can go against what I believe in: if the team decides, as per for example Microsoft naming conventions, to append Async at the end of the function. Again, I consider it unnecessary, but pick your battles.

public record Vector(double X, double Y);

public class Calculator
{
    public Task<Vector> AddAsync(
        Vector first,
        Vector second,
        CancellationToken cancellationToken)
    {
        cancellationToken.ThrowIfCancellationRequested();

        return Task.FromResult(
            new Vector(
                first.X + second.X,
                first.Y + second.Y));
    }
}

This is part of the context I would expect to see before going into the code, in the documentation, under coding standards and guidelines. For me, it would be good enough to keep the original name, as I think a return type of Task<T> and that CancellationToken more than indicate what this is about.

This is how I deconstruct a piece of code and what is inside of it. And all of it combined gives me what this code does. It has layers and each of them can be used to add a bit more context to the layer below it. From modifiers to modules and classes, to return types, input parameters, and so on. All of it adds to the readability and clarity.

Now, let me give a bit more on the stuff that matters to me, for the sake of understanding the written code. That extends the previous.


One of the other things I often look into is boolean parameters. In my experience, they more often than not indicate flow control within the implementation. This is a personal opinion, but to me it always looks odd.

public void DoSomething(bool isOne)
{
    if (isOne) OneWay();
    else OtherWay();
}

private void OneWay() {}

private void OtherWay() {}

This is how I think when I look at this: why not expose these two private implementations instead? The "unified interface" just makes me question it more. Less code, and more thought put into why they are there, why you need a flag, and so many other reasons.
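In Go terms, the same smell and the split I would lean toward might look like this (the notification scenario and all names are made up for the example):

```go
package main

import "fmt"

// Instead of a notify(msg, byEmail bool), expose the two paths
// directly; the caller already knows which one it wants.
func notifyByEmail(msg string) string { return "email: " + msg }
func notifyBySms(msg string) string   { return "sms: " + msg }

// The flag version forces every reader to trace the branch
// to learn what the boolean actually selects.
func notify(msg string, byEmail bool) string {
	if byEmail {
		return notifyByEmail(msg)
	}
	return notifyBySms(msg)
}

func main() {
	fmt.Println(notifyByEmail("hi")) // email: hi
	fmt.Println(notify("hi", false)) // sms: hi
}
```

The two exposed functions carry their intent in their names; the boolean version hides the same intent behind a flag the caller has to decode.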

On the same note, I go through a similar process when I think about optional parameters. Or nullable ones, whatever your language of choice has. Because the first line of code will be something like this:

public void DoSomething(string? beOrNotToBe = "")
{
    if (string.IsNullOrEmpty(beOrNotToBe))
    {
        // Something
    }
}

I don’t know about you, but for me, it is almost the same thing. You’re adding more code to support a scenario that you can split into two implementations. No need to guess what happens if you change things around. Or this becomes a mandatory parameter. I would either drop this parameter, split the method in two, or introduce a complex type for those “we may need it in future” scenarios.

public class ItMustBe
{
    public ItMustBe() 
    {
        Value = string.Empty;
    }

    public ItMustBe(string value)
    {
        Value = value;
    }

    public string Value { get; private set; }
}

// Somewhere in code

public void DoSomething(ItMustBe beOrNotToBe)
{
    var value = beOrNotToBe.Value;
}

Functional languages have constructs that solve this in their own way. I am talking about the Option type in F#, as an example. Or Maybe in some other languages. It is an expressive way to describe whether the value will be there or not. I do enjoy these kinds of things within functional languages. And judging by the direction several object-oriented languages are taking, we're moving toward them having similar features in the future.
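As a hedged illustration of the idea outside F#, one could sketch an Option-like type with Go generics. This is illustrative only, not a recommendation to hand-roll one in production code:

```go
package main

import "fmt"

// Option makes "maybe absent" explicit in the type, instead of
// a nil or a sentinel value the caller has to remember to check.
type Option[T any] struct {
	value T
	ok    bool
}

// Some wraps a present value.
func Some[T any](v T) Option[T] { return Option[T]{value: v, ok: true} }

// None represents an absent value.
func None[T any]() Option[T] { return Option[T]{} }

// Get returns the value and whether it was present.
func (o Option[T]) Get() (T, bool) { return o.value, o.ok }

func main() {
	v, ok := Some("street").Get()
	fmt.Println(v, ok) // street true
}
```

The signature `Option[T]` tells the caller up front that absence is a legal state, which is exactly what an optional or nullable parameter tries and fails to say.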


Now onto a controversial subject: code comments. I honestly don't know at what point code comments became taboo. You will just get in response: "Self-documenting coooooooode!!!". It would be a non-issue when I hear it if, in most cases, reality matched what is being said.

When I think about writing a comment on some piece of my implementation, it is to provide context to that "self-documenting code". Things like references that influenced my implementation (standards, examples of solutions, etc.), reasons for the code's existence that may not be obvious, dirty hacks that you can't get away from, and so on. There is always a reason to write a good comment.

/// <summary>
/// This is a hack to solve a problem of the <Name Of Some External Api> expecting a decimal value for <name of the value> in the payload to be split into the whole part and decimal part as different properties.
/// <seealso>https://link_towards_api.documentation</seealso>
/// </summary>
/// <example>
/// var (wholePart, decimalPart) = SplitDecimal(10.20m);
/// Console.WriteLine($"Whole: {wholePart}, Decimal: {decimalPart}"); // Whole: 10, Decimal: 20
/// </example>
private static (int wholePart, int decimalPart) SplitDecimal(decimal value)
{
    int wholePart = (int)value;
    int decimalPart = (int)((value - wholePart) * 100);
    
    return (wholePart, decimalPart);
}

I consider this a valid comment for a somewhat odd piece of code that you may have no control over. I won't pretend I never need to write this kind of "hack" solution now and then. But leaving a comment to explain its existence will go a long way when someone else looks at it.

The code comments also help with documenting side-effects your implementation may have, that may not be obvious to the caller. Like exceptions being thrown, not “obvious” results, etc.

/// <summary>
/// Validates the provided date of birth to ensure it does not indicate an age of 200 years or more, then updates it on the user profile.
/// </summary>
/// <param name="dayOfBirth">The date of birth of the user to be updated on the profile.</param>
/// <exception cref="InvalidAgeException">
/// Thrown when the provided date of birth indicates an age of 200 years or more.
/// The exception message is "Wishful thinking...".
/// </exception>
public void UpdateDateOfBirth(DateOnly dayOfBirth)
{
    var oldestAllowedDateOfBirth = DateOnly.FromDateTime(DateTime.Today).AddYears(-200);

    if (dayOfBirth <= oldestAllowedDateOfBirth)
    {
        throw new InvalidAgeException("Wishful thinking...");
    }

    // Continue as normal
}

The comment above provides the caller with information on what they should be expecting in case of “invalid” usage. It is solving a problem you can’t fix without introducing some custom return type to indicate success or failure, which would spiral on.

The interfaces and contracts exposed to your external consumers are also a good place for great descriptions and examples of how to implement them and the implications around them. Clear documentation for APIs, libraries, or any code that will be used by others: descriptions of parameters, return types, and any side effects. These are nice things to have and they can make or break your library. For an API, things like OpenAPI are nice to have. The benefits are numerous, and with several libraries out there implementing the standard, adding it is no more than a couple of lines of code. And so on, and so on.

💡 Documentation is not an anti-pattern if it provides context and does not add more "noise" to already noisy code.


The length of the code is another thing I look at. I can't remember how long ago I read it, or even where, but one thing that stuck with me was: a function is too long if I press my head against the monitor and it overflows. I found it funny, and a solid rule of thumb in a sea of many. It is a good one precisely because it is funny and easy to remember.

I do tend to split longer code into smaller chunks and group them based on the implementation. These could be small classes or modules by themselves, encapsulating that particular piece of logic and giving it a bit more meaning. All of this is personal preference, but I find that long methods lead me into going back and forth to correlate things.

A smaller private method that can isolate a small piece of logic is a good start to identifying how to split up a code more neatly. Then again, sometimes things just need to be the way they are.

func fetchUserData(apiURL string) (*User, error) {
    resp, err := http.Get(apiURL)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return nil, err
    }

    user, err := mapToUserModel(body)
    if err != nil {
        return nil, err
    }
    
    return user, nil
}

func mapToUserModel(data []byte) (*User, error) {
    var user User
    err := json.Unmarshal(data, &user)
    if err != nil {
        return nil, err
    }

    return &user, nil
}

The simple example above, in Go, demonstrates what I mean. The function that invokes the API endpoint is responsible for just that, and to make it simpler to understand there is a function below it that isolates the translation of the payload into an internal model. How granular you wish to be is up to you.


And to close this story for now, the last thing on my list would be the nesting of code. I mean the kind that starts looking like an arrow. It is becoming a bit of an edge case that I come across, like the example below:

func process(data *Data) {
    if data != nil {
        if data.User != nil {
            user := data.User
            if user.Age > 18 {
                if user.Location != nil && user.Location.Country != "" {
                    location := user.Location
                    if location.Country == "USA" {
                        if location.State == "California" {
                            if location.City != "" {
                                city := location.City
                                if city == "San Francisco" {
                                } else {
                                }
                            } else {
                            }
                        } else {
                        }
                    } else {
                    }
                } else {
                }
            } else {
            }
        } else {
        }
    } else {
    }
}
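The usual way out of that arrow shape is guard clauses that return early. A hedged sketch of the same checks flattened (the types and the San Francisco rule are made up to match the example above):

```go
package main

import "fmt"

type Location struct {
	Country, State, City string
}

type User struct {
	Age      int
	Location *Location
}

type Data struct {
	User *User
}

// isSanFranciscan flattens the nested ifs into guard clauses:
// every disqualifying condition exits immediately, so the
// happy path reads top to bottom at a single indent level.
func isSanFranciscan(data *Data) bool {
	if data == nil || data.User == nil {
		return false
	}
	user := data.User
	if user.Age <= 18 || user.Location == nil {
		return false
	}
	loc := user.Location
	return loc.Country == "USA" &&
		loc.State == "California" &&
		loc.City == "San Francisco"
}

func main() {
	d := &Data{User: &User{Age: 30, Location: &Location{
		Country: "USA", State: "California", City: "San Francisco"}}}
	fmt.Println(isSanFranciscan(d)) // true
}
```

Same conditions, same result, but the reader never has to hold more than one open branch in their head.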

The deepest nesting I come across these days is usually still probably too deep. Recently I even needed to write some myself, due to some interesting stuff within an API I was integrating with. It was only 2 levels deep, and it made me uneasy. I decided to rewrite it to defaults and have a function that tells me whether the data is correct, and to avoid null checks for everything. These days it is easy to think that, just because syntactic sugar operators exist, it is fine to write things like:

var street = user?.Address?.Street ?? string.Empty;

Just reading it takes a minute to understand. I would rather prevent being able to initialize with "invalid" or null values; the code could become much simpler.

public class User
{
    public User()
    {
        Address = new Address("Not available");
    }
    
    // Rest...
    public Address Address { get; private set; }
}

public class Address
{
    public Address()
    {
        Street = string.Empty;
    }

    public Address(string street)
    {
        Street = street;
    }

    public string Street { get; private set; }
}

The Address implementation, when initialized by itself, indicates that it can be empty. While the "default" on User provides more insight and says: "Not available". It is a different context from different points of usage and composition. And neither will allow a value that adds no "meaning" or that you need to check.

If you need to enforce that a value must be provided, then enforce it at the point of creation. That way the data becomes "correct" and the rest of the code will be none the wiser, and will not worry about whether the data is present or not.
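In Go, the equivalent of enforcing it at the point of creation is a constructor function that refuses invalid input, so nothing downstream ever re-checks. A hedged sketch (the Address type here is an illustrative stand-in, not the C# one above):

```go
package main

import (
	"errors"
	"fmt"
)

// Address keeps its field unexported, so the constructor below
// is the only way to build one.
type Address struct {
	street string
}

// NewAddress rejects invalid input at the point of creation;
// an empty street can simply never enter the rest of the code.
func NewAddress(street string) (Address, error) {
	if street == "" {
		return Address{}, errors.New("street must be provided")
	}
	return Address{street: street}, nil
}

// Street exposes the value read-only.
func (a Address) Street() string { return a.street }

func main() {
	a, err := NewAddress("Main Street 1")
	fmt.Println(a.Street(), err) // Main Street 1 <nil>
}
```

Every function that receives an Address can trust it, because the one gate that creates it already did the checking.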

You can’t end up in a situation that you don’t expect if you don’t allow it. Simple as that. There are “cleaner” ways to solve this kind of thing in C#, these days, but the point is not to use synthetic sugar to explain the concept I am talking about. I have a couple of posts on this subject: Nullable data & Nullable collections.


For now, I think it is enough; there are enough opinions here that will hopefully make you think, or stop, the next time you're in a similar debate. I honestly am not sure I covered all of my views on the subject. It is a constant evolution for me, and something to reflect on in the future.

Summary

Some other things will change the way you think along the way, and one of them, in my opinion, is reading other people's code and learning more about the language you're working with, and the ecosystem around it. I sometimes cringe at the sight of attempts to do functional programming in an object-oriented language instead of just going the functional route. I have seen many attempts at this and they always end up as worse code. Less readable, harder to understand, and no better than what they set out to replace.

So don’t worry about it too much when you get into those discussions, keep an open mind when it comes to working within a team and come up with standards. There is no best way out there when it comes to personal preferences. Everyone has their take on these things and so do you. This is what I tried expressing here, and this was a road for me as well. I started only caring and thinking about the code itself and assuming that “self-explanatory” code means just that. But when you keep adding layers to the end solution, in whatever form and shape works for you, the code then becomes simpler. Easier to follow. Less cumbered as the layer above it explains it. It is only responsible for doing its job and potentially providing a context for the layer below it.

As stated at the start of this adventure, this is not meant to be a framework or anything like it. I am just trying to put into words everything that, for me, constitutes readable code, or results in it. Grouping things based on the context they provide, considering writing documentation instead of forcing my code to be "self-documenting", naming things, following the architecture and paradigms we chose, and so on. All of this results in less code and more clarity, at least for me. Code should be left to do the things it is best suited for. Making it more complex just for the sake of it is not one of them. In my opinion.


Until next time, hopefully, you didn’t give up by now.