Popular Principles of Software Engineering

What is Software Engineering?

Software engineering is a set of principles, procedures, and methods for analyzing user requirements and developing effective, reliable, and high-quality software. It is a collection of best practices, introduced by experienced industry experts, that programmers should follow during software development. At the same time, every development team has to deal with several issues to write bug-free, readable, and maintainable code. So programmers need to follow well-defined principles to design and develop large-scale software.

Advantages of software engineering principles

  • Reduce the complexity of the development process.
  • Help software teams avoid critical errors and mistakes.
  • Help teams achieve development goals efficiently.
  • Increase the quality and productivity of development.
  • Help large-scale teams work in an organized manner.

Let’s go through some of the top design principles of software engineering.

1. Separation of Concerns

When specifying the behavior of a class, we need to deal with two things: functionality and data integrity. A class is often easier to use if these two concerns are divided as much as possible via different methods. There are two specialisations of this principle:

  • The principle of modularity: This is about separating software into meaningful components according to functionality so that development will be faster and easier.
  • The principle of abstraction: Abstraction helps us separate a software component's behavior from its implementation. We need to think about each software component from two points of view: what does it do, and how does it do it?

This principle allows code reusability because each method is written to handle a separate task and can be reused for similar objectives. 
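
As a rough illustration, here is a minimal Java sketch (the BankAccount class and its method names are hypothetical, not from the article): the public method expresses what the class does, while a private helper keeps the data-integrity concern separate.

```java
// A minimal sketch of separating functionality from data integrity.
class BankAccount {
    private long balanceInCents;

    // Functionality: what the class does.
    void deposit(long amountInCents) {
        validateAmount(amountInCents);   // data-integrity concern, kept separate
        balanceInCents += amountInCents; // core behavior
    }

    // Data integrity: the invariant check lives in one dedicated method.
    private void validateAmount(long amountInCents) {
        if (amountInCents <= 0) {
            throw new IllegalArgumentException("Amount must be positive");
        }
    }
}
```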

2. Law of Demeter (The Principle of Least Knowledge)

According to the Clean Code book by Robert Martin, the Law of Demeter says that a method f of class C should only call:

  • Methods of class C
  • Methods of an object created by f
  • Methods of an object passed as an argument to f
  • Methods of an object held in an instance variable of C

The idea here is simple: talk to friends, not to strangers!

So, based on this principle, we need to divide the areas of responsibility between classes and encapsulate the logic within a class or method. There are several recommendations from this principle:

  • We need to keep software entities independent of each other. 
  • Minimum coupling: We should reduce the coupling between different classes.
  • Cohesion: We need to put related classes in the same package or module.

The Law of Demeter keeps classes from depending on the internal details of other classes and reduces inter-dependency between them. Following this idea makes our application more maintainable, understandable, and flexible.

Here is a quote from the book Clean Code by Robert Martin: "...There is a well-known heuristic called the Law of Demeter that says a module should not know about the innards of the objects it manipulates. As we saw in the last section, objects hide their data and expose operations. This means that an object should not expose its internal structure through accessors because to do so is to expose, rather than to hide, its internal structure..."
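
To make the "talk to friends, not to strangers" idea concrete, here is a minimal Java sketch (the Customer, Wallet, and PaymentService names are hypothetical): the service talks only to the customer and never reaches through it into the wallet.

```java
// A minimal sketch of the Law of Demeter.
class Wallet {
    private long balance;

    Wallet(long balance) { this.balance = balance; }

    void deduct(long amount) {
        if (amount > balance) throw new IllegalStateException("Insufficient funds");
        balance -= amount;
    }
}

class Customer {
    private final Wallet wallet = new Wallet(100);

    // The customer talks to its own wallet; callers never reach inside it.
    void pay(long amount) {
        wallet.deduct(amount);
    }
}

class PaymentService {
    // Violation would look like: customer.getWallet().deduct(amount);
    // Following the Law of Demeter, we talk to the "friend" (customer) only.
    void charge(Customer customer, long amount) {
        customer.pay(amount);
    }
}
```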

3. Avoid Premature Optimization

Optimization is necessary to build faster applications and reduce the consumption of system resources. But everything has its own time. If we optimize at the early stages of development, it may do more harm than good. The idea is simple: developing optimized code requires more time and effort, and we also need to constantly verify its correctness. So it is better to start with a simple, correct implementation rather than the most optimal one. Later, we can measure the method's performance and decide whether to design a faster or less resource-intensive algorithm.


Let's understand it from another perspective. We all agree that optimization speeds up the application and reduces resource consumption. But suppose we initially implemented the most efficient algorithm and then our requirements changed. What will happen? Our efforts to design efficient code will be wasted, and the program will be harder to change. So it is best not to waste time on premature optimization.
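
As a small illustrative example (the OrderSearch class is hypothetical), a simple, obviously correct linear scan is a reasonable first version; a tuned data structure is worth introducing only if profiling later shows this method is a bottleneck.

```java
import java.util.List;

// Start with the simple, obviously correct version; optimize later
// only if measurements show this method matters.
class OrderSearch {
    static boolean containsOrder(List<String> orderIds, String target) {
        for (String id : orderIds) {
            if (id.equals(target)) {
                return true;
            }
        }
        return false;
    }
}
```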

4. Keep it Simple, Stupid (KISS principle)

This principle came into the picture around 1960, when the U.S. Navy observed an insight about how systems function: complicated systems work worst, and simple systems work best! They found that complexity causes poor system understanding and generates more bugs.

The idea of the KISS principle: simple code should be easy to understand and flexible to modify or extend with new features. In other words, we need to avoid unnecessary complexity while building software. This looks obvious, but we often complicate things by using fancy features and, as a result, end up adding several dependencies. Here are several recommendations from this principle:

  • Whenever we add a new dependency, whether by using a new framework, adding a new feature, or some other way, we should ask whether the added complexity is worth it. In other words, we should first consider the usefulness of adding another method/class/tool, etc.
  • Our methods should be small and designed to solve only one problem. If there are many conditions, try to break them into smaller blocks of code, as shown in the sketch after this list. This makes our code cleaner and less likely to have bugs. In other words, simple code is always easier to debug and maintain.
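
Here is a minimal sketch of the second recommendation (the shipping-fee example and its names are hypothetical): one larger conditional is split into small, single-purpose methods that are easy to read and test.

```java
// A minimal sketch of keeping conditionals small and simple.
class ShippingCalculator {
    static double shippingFee(double orderTotal, boolean isPremiumMember) {
        if (qualifiesForFreeShipping(orderTotal, isPremiumMember)) {
            return 0.0;
        }
        return baseFee(orderTotal);
    }

    // Each condition gets a small, well-named method.
    private static boolean qualifiesForFreeShipping(double orderTotal, boolean isPremiumMember) {
        return isPremiumMember || orderTotal >= 50.0;
    }

    private static double baseFee(double orderTotal) {
        return orderTotal < 20.0 ? 7.0 : 5.0;
    }
}
```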

5. Don’t Repeat Yourself (DRY Principle)

The DRY principle states that repeating the same code in different places is not a good idea. Following it promotes code reusability and makes the code more maintainable, extensible, and less buggy. This principle originates from the book “The Pragmatic Programmer” by Andy Hunt and Dave Thomas.

Let's understand it from a different perspective! In software systems, there is always a need to maintain and modify code later. If some part of the code is repeated in several places, it leads to a critical challenge: a minor change in the logic must be made to the same code in several places. If someone misses one of those changes, they will face errors that cost additional time, effort, and focus. The recommended approach is:

  • We should not repeat ourselves while writing code and should avoid copy-pasting code into different places. Otherwise, future maintenance will be more complex. If any code block occurs more than twice, we should move that common logic into a separate method, as shown in the sketch after this list.
  • Every piece of data should have a single reference point or source of truth, so that changing one part of that data doesn’t require changing related code in other places.
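
A minimal Java sketch of moving repeated logic into one place (the UserService example is hypothetical): the null/blank check lives in a single helper instead of being copy-pasted into every method that needs it.

```java
// A minimal sketch of the DRY principle.
class UserService {
    void registerUser(String email) {
        String cleaned = requireNonBlank(email, "email");
        // ... create the user with 'cleaned'
    }

    void updateEmail(String email) {
        String cleaned = requireNonBlank(email, "email");
        // ... update the stored email with 'cleaned'
    }

    // Single source of truth for the validation rule.
    private static String requireNonBlank(String value, String fieldName) {
        if (value == null || value.trim().isEmpty()) {
            throw new IllegalArgumentException(fieldName + " must not be blank");
        }
        return value.trim();
    }
}
```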

6. You Aren't Gonna Need It (YAGNI Principle)

There is a common problem in developing software! Sometimes we feel that we will need some functionality in the future. But a lot of the time, due to changing software requirements, we never actually need it. In the end, some or most of these functionalities become useless.

So, according to the YAGNI principle: we should not add functionality to solve a future problem that we don’t need right now. Always implement things when you need them. In other words, this principle aims to avoid the complexity that arises from adding functionality that we think we may need in the future. Note: YAGNI comes from the software development methodology called Extreme Programming (XP).

7. SOLID Principles

SOLID is a group of object-oriented design principles, where each letter in the acronym “SOLID” represents one of the principles. When applied together, these principles help developers create code that is easy to maintain and extend over time.

It consists of design principles that first appeared in Robert C. Martin’s 2000 paper entitled "Design Principles and Design Patterns". Let’s go through each SOLID principle one by one:

Single Responsibility Principle

According to this principle, every class or method should be responsible for a single piece of functionality provided by the software, and it should entirely encapsulate that responsibility. In other words: a class or method should have only one responsibility and only one reason to change, such that only one part of the application can affect it when that part changes.


When we design our methods or classes so that each is responsible for a single piece of functionality, our code becomes easier to understand, maintain, and modify. Whenever we want to change some functionality, we know exactly where to change the code.

  • The SRP makes the code more organized and improves code readability.
  • If we have short and focused functions or classes, we can reuse them easily, so the principle contributes a lot to code reusability (see the sketch below).
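
Here is a minimal sketch (the Invoice and InvoiceRepository names are hypothetical): calculation and persistence are separate responsibilities, so each class has only one reason to change.

```java
// A minimal sketch of the Single Responsibility Principle.
class Invoice {
    private final double amount;

    Invoice(double amount) { this.amount = amount; }

    // Responsibility 1: business calculation.
    double totalWithTax(double taxRate) { return amount * (1 + taxRate); }
}

class InvoiceRepository {
    // Responsibility 2: persistence. Only this class changes if the
    // storage mechanism changes.
    void save(Invoice invoice) {
        System.out.println("Saving invoice...");
    }
}
```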

Open-Closed Principle

According to this principle, we should be able to extend the behavior of a class without modifying its existing code.

  • Open for extension: We should be able to add new features to classes/modules without changing the existing code.
  • Closed for modification: Once the existing code is working, we shouldn’t have to change it to add functionality or features.


Let's understand this from a different perspective! We start the development journey by implementing many functionalities, testing them, and releasing them to users. When we need to develop new functionalities later, the last thing we want is to change existing functionality that is working well. So rather than changing existing functionality, we try to build the new functionality on top of it.
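
A minimal sketch of extension without modification (the Discount example is hypothetical): new discount types are added as new classes, and the existing Checkout code never changes.

```java
// A minimal sketch of the Open-Closed Principle.
interface Discount {
    double apply(double price);
}

class NoDiscount implements Discount {
    public double apply(double price) { return price; }
}

class SeasonalDiscount implements Discount {
    public double apply(double price) { return price * 0.9; }
}

class Checkout {
    // Closed for modification: this method never changes when a new
    // Discount implementation (open for extension) is introduced.
    double finalPrice(double price, Discount discount) {
        return discount.apply(price);
    }
}
```

Adding, say, a coupon-based discount later would mean writing one more class that implements Discount; Checkout itself stays untouched.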

Liskov Substitution Principle

In a 1988 conference keynote address titled "Data Abstraction and Hierarchy", Barbara Liskov introduced this principle. She stated that derived classes should be substitutable for their base class(es).


In other words, an object of a child class must be able to replace an object of its parent class without breaking the program.

  • An inherited class should complement, not replace, the behavior of the base class. 
  • We should be able to substitute the child for the parent class and expect the same basic behavior.
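
Here is a minimal sketch (the PaymentMethod hierarchy is hypothetical): any subclass can stand in for the base type, and the calling code works unchanged.

```java
// A minimal sketch of the Liskov Substitution Principle.
abstract class PaymentMethod {
    abstract void pay(double amount);
}

class CardPayment extends PaymentMethod {
    void pay(double amount) { System.out.println("Paid " + amount + " by card"); }
}

class WalletPayment extends PaymentMethod {
    void pay(double amount) { System.out.println("Paid " + amount + " from wallet"); }
}

class CheckoutService {
    // Works the same for every subclass, which is exactly what LSP expects.
    void completeOrder(PaymentMethod method, double amount) {
        method.pay(amount);
    }
}
```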

Interface Segregation Principle

This principle was defined by Robert C. Martin while consulting for Xerox. Xerox had designed new printer software to perform various tasks such as stapling and faxing. As the software grew, making modifications became more and more difficult, to the point where even the slightest change required a redeployment cycle of about an hour, which made development nearly impossible.

The design problem was that almost all tasks used a single Job class. A call was made to the Job class whenever a print job or a stapling job needed to be performed. This resulted in a 'fat' class with several specific methods for various clients. Because of this design, a staple job would know about all the methods of the print job, even though there was no use for them.

The solution suggested by Martin is what we now call the Interface Segregation Principle. Instead of having one large Job class, a Staple Job interface and a Print Job interface were created to be used by the Staple and Print classes, respectively, calling methods of the Job class. Therefore, one interface was designed for each job type, all of which were implemented by the Job class.


So the Interface Segregation Principle states that a client should never be forced to depend on methods it does not use. We achieve this by making our interfaces small and focused. It would be best to split large interfaces into more specific ones focused on a particular set of functionalities so that the clients can choose to depend only on the functionalities they need.
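
To sketch the same idea in code (the class and interface names below are illustrative, not Xerox's actual design): each job type gets its own small interface, and a client that only prints depends on PrintJob alone.

```java
// A minimal sketch of the Interface Segregation Principle.
interface PrintJob {
    void print();
}

interface StapleJob {
    void staple();
}

// The full-featured device implements both small interfaces...
class MultiFunctionPrinter implements PrintJob, StapleJob {
    public void print()  { System.out.println("Printing..."); }
    public void staple() { System.out.println("Stapling..."); }
}

// ...but a client that only prints depends on PrintJob alone and never
// sees stapling methods it has no use for.
class ReportPrinter {
    void printReport(PrintJob job) {
        job.print();
    }
}
```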

Dependency Inversion Principle

Dependency inversion says that high-level modules should not depend on low-level modules; both should depend on abstractions. The interaction between two modules should be thought of as an abstract interaction, not a concrete one. In simple words, it suggests that we should depend on interfaces instead of concrete implementations wherever possible.

So, what is the reason behind this principle? The answer is simple: abstractions don’t change very often. Therefore, we can easily change the behavior of the code behind those abstractions and support its future evolution.

  • It also allows programmers to work at the interface level, not the implementation level.
  • It decouples a module from the implementation details of its dependencies. The module only knows about the behavior it depends on, not how that behavior is implemented, so we can change the implementation whenever needed without affecting the module itself (see the sketch below).
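
A minimal sketch (the MessageSender and OrderService names are hypothetical): the high-level service depends only on an abstraction and receives a concrete implementation from outside.

```java
// A minimal sketch of the Dependency Inversion Principle.
interface MessageSender {
    void send(String message);
}

class EmailSender implements MessageSender {
    public void send(String message) { System.out.println("Email: " + message); }
}

class SmsSender implements MessageSender {
    public void send(String message) { System.out.println("SMS: " + message); }
}

class OrderService {
    private final MessageSender sender;

    // The concrete implementation is injected, so it can be swapped
    // without touching this class.
    OrderService(MessageSender sender) { this.sender = sender; }

    void confirmOrder() { sender.send("Your order is confirmed"); }
}
```

From the service's point of view, new OrderService(new EmailSender()) and new OrderService(new SmsSender()) behave exactly the same.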

Other best practices of software engineering

  • Measure twice and cut once: Good project planning produces better results. So before building functionality, we should choose the right problem, the right solution approach, and the right tools, assemble the right team, define meaningful metrics to measure progress, etc.
  • The principle of consistency: Following a consistent coding style helps us read and understand code efficiently. It saves programmers a lot of time that can be spent on more critical issues. Remember: complex code might look clever, but readable code is always better!
  • The principle of generality: It is essential to design software free from unnatural restrictions and limitations. In other words, we should develop the project so that it is not limited to a narrow set of cases or functions. This helps us serve customers broadly based on their general needs.
  • Remember open source: There are many open-source options out there, and one of the biggest time wasters in software engineering is writing code to do something someone has already written.
  • Follow modern programming practices: Modern programming practices are essential for keeping up with current technology trends and meeting user requirements effectively.
  • Develop a clear understanding of requirements: Understanding user requirements via a well-defined requirement analysis process is critical for good software engineering.
  • Define a project vision: Defining and maintaining the project's vision throughout the development process is one of the most important factors for success.
  • Write good documentation: When other developers work on someone else's code, they should not be surprised or waste time figuring out what the code does. So providing good documentation for each step of development is an important part of building software projects.
  • Add sensible logging: Make sure you have a way of logging/tracing the code execution, with various log levels (e.g., informational, warning, error).

Enjoy learning, Enjoy oops!
