
The top 5 software architecture patterns: How to make the right choice



How many plots are there in Hollywood movies? Some critics say there are only five. How many ways can you structure a program? Right now, the majority of programs use one of five architectures.
Mark Richards is a Boston-based software architect who’s been thinking for more than 30 years about how data should flow through software. His new (free) book, Software Architecture Patterns, focuses on five architectures that are commonly used to organize software systems. The best way to plan new programs is to study them and understand their strengths and weaknesses.
In this article, I’ve distilled the five architectures into a quick reference covering their strengths, weaknesses, and optimal use cases. Remember that you can use multiple patterns in a single system to optimize each section of code with the best architecture. Even though they call it computer science, it’s often an art.

Layered (n-tier) architecture
Event-driven architecture
Microkernel architecture
Microservices architecture
Space-based architecture

Layered (n-tier) architecture

This approach is probably the most common because it is usually built around the database, and many applications in business naturally lend themselves to storing information in tables.
This is something of a self-fulfilling prophecy. Many of the biggest and best software frameworks—like Java EE, Drupal, and Express—were built with this structure in mind, so many of the applications built with them naturally come out in a layered architecture.
The code is arranged so the data enters the top layer and works its way down each layer until it reaches the bottom, which is usually a database. Along the way, each layer has a specific task, like checking the data for consistency or reformatting the values to keep them consistent. It’s common for different programmers to work independently on different layers.
[Figure: layered architecture diagram. Image credit: Izhaki]
The Model-View-Controller (MVC) structure, which is the standard software development approach offered by most of the popular web frameworks, is clearly a layered architecture. Just above the database is the model layer, which often contains business logic and information about the types of data in the database. At the top is the view layer, which is often CSS, JavaScript, and HTML with dynamic embedded code. In the middle, you have the controller, which has various rules and methods for transforming the data moving between the view and the model.
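To make that flow concrete, here is a minimal sketch in TypeScript; the class and field names are hypothetical, not taken from any particular framework. A request enters the controller, which calls down into the business layer, which in turn is the only code that touches the persistence layer.

```typescript
// Minimal layered sketch: a request flows down through the controller,
// business, and persistence layers; each layer talks only to the one below it.
// All names (UserRecord, UserRepository, and so on) are hypothetical.

interface UserRecord {
  id: number;
  email: string;
}

// Persistence layer: the only code that knows how data is stored.
class UserRepository {
  private rows = new Map<number, UserRecord>([
    [1, { id: 1, email: "ada@example.com" }],
  ]);
  findById(id: number): UserRecord | undefined {
    return this.rows.get(id);
  }
}

// Business (model) layer: rules and validation, no storage details.
class UserService {
  constructor(private repo: UserRepository) {}
  getUser(id: number): UserRecord {
    const user = this.repo.findById(id);
    if (!user) throw new Error(`No user with id ${id}`);
    return user;
  }
}

// Presentation (controller) layer: reshapes data for the view.
class UserController {
  constructor(private service: UserService) {}
  show(id: number): string {
    const user = this.service.getUser(id);
    return JSON.stringify({ email: user.email }); // view-ready output
  }
}

const controller = new UserController(new UserService(new UserRepository()));
console.log(controller.show(1)); // {"email":"ada@example.com"}
```

Because each layer depends only on the layer below it, the in-memory repository could be swapped for a real database driver without touching the controller.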
The advantage of a layered architecture is the separation of concerns, which means that each layer can focus solely on its role. This makes it:
  • Maintainable
  • Testable
  • Easy to assign separate "roles"
  • Easy to update and enhance layers separately
Proper layered architectures will have isolated layers that aren’t affected by certain changes in other layers, allowing for easier refactoring. This architecture can also contain additional open layers, like a service layer, that expose shared services to the business layer but can also be bypassed for speed.
Slicing up the tasks and defining separate layers is the biggest challenge for the architect. When the requirements fit the pattern well, the layers will be easy to separate and assign to different programmers.
Caveats:
  • Source code can turn into a “big ball of mud” if it is unorganized and the modules don’t have clear roles or relationships.
  • Code can end up slow thanks to what some developers call the “sinkhole anti-pattern.” Much of the code can be devoted to passing data through layers without using any logic.
  • Layer isolation, which is an important goal for the architecture, can also make it hard to understand the architecture without understanding every module.
  • Coders can skip past layers to create tight coupling and produce a logical mess full of complex interdependencies.
  • Monolithic deployment is often unavoidable, which means small changes can require a complete redeployment of the application.
Best for:
  • New applications that need to be built quickly
  • Enterprise or business applications that need to mirror traditional IT departments and processes
  • Teams with inexperienced developers who don’t understand other architectures yet
  • Applications requiring strict maintainability and testability standards

Event-driven architecture

Many programs spend most of their time waiting for something to happen. This is especially true for computers that work directly with humans, but it’s also common in areas like networks. Sometimes there’s data that needs processing, and other times there isn’t.
The event-driven architecture helps manage this by building a central unit that accepts all data and then delegates it to the separate modules that handle the particular type. This handoff is said to generate an “event,” and it is delegated to the code assigned to that type. 
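As a rough sketch of that central unit, the broker below keeps a registry of handlers per event type and delegates each published event only to the modules that subscribed to it; the event names and payloads are hypothetical.

```typescript
// Minimal event-broker sketch: the "central unit" holds a registry of
// handlers keyed by event type and delegates each event to the modules
// registered for that type. Event names and payloads are hypothetical.

type Handler = (payload: unknown) => void;

class EventBroker {
  private handlers = new Map<string, Handler[]>();

  subscribe(type: string, handler: Handler): void {
    const list = this.handlers.get(type) ?? [];
    list.push(handler);
    this.handlers.set(type, list);
  }

  // Delegation: only modules subscribed to this type see the event.
  publish(type: string, payload: unknown): void {
    for (const handler of this.handlers.get(type) ?? []) {
      handler(payload);
    }
  }
}

const broker = new EventBroker();
broker.subscribe("order.placed", (p) => console.log("billing saw", p));
broker.subscribe("order.placed", (p) => console.log("shipping saw", p));
broker.subscribe("user.signup", (p) => console.log("mailer saw", p));

broker.publish("order.placed", { orderId: 42 }); // billing and shipping fire; mailer does not
```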

Programming a web page with JavaScript involves writing the small modules that react to events like mouse clicks or keystrokes. The browser itself orchestrates all of the input and makes sure that only the right code sees the right events. Many different types of events are common in the browser, but the modules interact only with the events that concern them. This is very different from the layered architecture where all data will typically pass through all layers. Overall, event-driven architectures:
  • Are easily adaptable to complex, often chaotic environments
  • Scale easily
  • Are easily extendable when new event types appear
Caveats:
  • Testing can be complex if the modules can affect each other. While individual modules can be tested independently, the interactions between them can only be tested in a fully functioning system.
  • Error handling can be difficult to structure, especially when several modules must handle the same events.
  • When modules fail, the central unit must have a backup plan.
  • Messaging overhead can slow down processing speed, especially when the central unit must buffer messages that arrive in bursts.
  • Developing a systemwide data structure for events can be complex when the events have very different needs.
  • Maintaining a transaction-based mechanism for consistency is difficult because the modules are so decoupled and independent.
Best for:
  • Asynchronous systems with asynchronous data flow
  • Applications where the individual data blocks interact with only a few of the many modules
  • User interfaces

Microkernel architecture

Many applications have a core set of operations that are used again and again in different patterns that depend upon the data and the task at hand. The popular development tool Eclipse, for instance, will open files, annotate them, edit them, and start up background processors. The tool is famous for doing all of these jobs with Java code and then, when a button is pushed, compiling the code and running it.
In this case, the basic routines for displaying a file and editing it are part of the microkernel. The Java compiler is just an extra part that’s bolted on to support the basic features in the microkernel. Other programmers have extended Eclipse to develop code for other languages with other compilers. Many don’t even use the Java compiler, but they all use the same basic routines for editing and annotating files.
The extra features that are layered on top are often called plug-ins. Many call this extensible approach a plug-in architecture instead.
Richards likes to explain this with an example from the insurance business: “Claims processing is necessarily complex, but the actual steps are not. What makes it complex are all of the rules.”
The solution is to push some basic tasks—like asking for a name or checking on payment—into the microkernel. The different business units can then write plug-ins for the different types of claims by knitting together the rules with calls to the basic functions in the kernel.
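A minimal sketch of that idea follows; the claim fields, state codes, and rules are entirely hypothetical. The kernel exposes the frequently used routines, and each plug-in registers itself (the handshaking step) and knits its own rules together with calls into the kernel.

```typescript
// Minimal microkernel sketch: the kernel holds only the basic, frequently
// used routines; plug-ins register themselves and combine those routines
// with their own rules. All fields and rules here are hypothetical.

interface Claim {
  claimant: string;
  amount: number;
  state: string;
}

// The microkernel: core routines every plug-in can call.
const kernel = {
  lookupName: (claim: Claim) => claim.claimant,
  checkPayment: (claim: Claim) => claim.amount > 0,
};

type Plugin = (claim: Claim, k: typeof kernel) => boolean;

// The handshaking step: each plug-in registers under the claim type it handles.
const plugins = new Map<string, Plugin>();

plugins.set("NH", (claim, k) =>
  // Hypothetical New Hampshire rule layered on top of kernel calls.
  k.checkPayment(claim) && claim.amount < 100_000);

plugins.set("CA", (claim, k) =>
  k.checkPayment(claim) && k.lookupName(claim).length > 0);

function processClaim(claim: Claim): boolean {
  const plugin = plugins.get(claim.state);
  if (!plugin) throw new Error(`No plug-in registered for ${claim.state}`);
  return plugin(claim, kernel);
}

console.log(processClaim({ claimant: "Ada", amount: 5000, state: "NH" })); // true
```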
Caveats:
  • Deciding what belongs in the microkernel is often an art. It ought to hold the code that’s used frequently.
  • The plug-ins must include a fair amount of handshaking code so the microkernel is aware that the plug-in is installed and ready to work.
  • Modifying the microkernel can be very difficult or even impossible once a number of plug-ins depend upon it. The only solution is to modify the plug-ins too.
  • Choosing the right granularity for the kernel functions is difficult to do in advance but almost impossible to change later in the game.
Best for:
  • Tools used by a wide variety of people
  • Applications with a clear division between basic routines and higher order rules
  • Applications with a fixed set of core routines and a dynamic set of rules that must be updated frequently

Microservices architecture

Software can be like a baby elephant: It is cute and fun when it’s little, but once it gets big, it is difficult to steer and resistant to change. The microservice architecture is designed to help developers avoid letting their babies grow up to be unwieldy, monolithic, and inflexible. Instead of building one big program, the goal is to create a number of different tiny programs and then create a new little program every time someone wants to add a new feature. Think of a herd of guinea pigs.
“If you go onto your iPad and look at Netflix’s UI, every single thing on that interface comes from a separate service,” points out Richards. The list of your favorites, the ratings you give to individual films, and the accounting information are all delivered in separate batches by separate services. It’s as if Netflix is a constellation of dozens of smaller websites that just happens to present itself as one service.
This approach is similar to the event-driven and microkernel approaches, but it’s used mainly when the different tasks are easily separated. In many cases, different tasks can require different amounts of processing and may vary in use. The servers delivering Netflix’s content get pushed much harder on Friday and Saturday nights, so they must be ready to scale up. The servers that track DVD returns, on the other hand, do the bulk of their work during the week, just after the post office delivers the day’s mail. By implementing these as separate services, the Netflix cloud can scale them up and down independently as demand changes.
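As an illustrative sketch, here are two tiny services built on Node’s built-in http module; the ports, data, and endpoints are hypothetical. Each service owns its own data and could be deployed, updated, and scaled independently of the other.

```typescript
// Minimal microservices sketch: two independent HTTP services, each owning
// its own data. Ports and payloads are hypothetical.
import { createServer } from "node:http";

// "Favorites" service: knows nothing about ratings.
createServer((_req, res) => {
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify({ favorites: ["The Matrix", "Arrival"] }));
}).listen(4001);

// "Ratings" service: knows nothing about favorites.
createServer((_req, res) => {
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify({ ratings: { "The Matrix": 5 } }));
}).listen(4002);

// A UI or gateway composes the page from separate calls:
//   GET http://localhost:4001/  -> favorites
//   GET http://localhost:4002/  -> ratings
```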
Caveats:
  • The services must be largely independent or else interaction can cause the cloud to become imbalanced.
  • Not all applications have tasks that can be easily split into independent units.
  • Performance can suffer when tasks are spread out between different microservices. The communication costs can be significant.
  • Too many microservices can confuse users as parts of the web page appear much later than others.
Best for:
  • Websites with small components
  • Corporate data centers with well-defined boundaries
  • Rapidly developing new businesses and web applications
  • Development teams that are spread out, often across the globe

Space-based architecture

Many websites are built around a database, and they function well as long as the database is able to keep up with the load. But when usage peaks and the database can’t keep up with the constant challenge of writing a log of the transactions, the entire website fails.
The space-based architecture is designed to avoid functional collapse under high load by splitting up both the processing and the storage between multiple servers. The data is spread out across the nodes just like the responsibility for servicing calls. Some architects use the more amorphous term “cloud architecture.” The name “space-based” refers to the “tuple space” of the users, which is cut up to partition the work between the nodes. “It’s all in-memory objects,” says Richards. “The space-based architecture supports things that have unpredictable spikes by eliminating the database.”
Storing the information in RAM makes many jobs much faster, and spreading out the storage with the processing can simplify many basic tasks. But the distributed architecture can make some types of analysis more complex. Computations that must be spread out across the entire data set, such as finding an average or doing a statistical analysis, must be split up into subjobs, spread out across all of the nodes, and then aggregated once the subjobs complete.
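Here is a minimal sketch of that split-and-aggregate step; the partitioning and the numbers are hypothetical. Each in-memory node computes a partial sum and count for its own slice, and the aggregator combines the partials into the overall average without moving the raw data.

```typescript
// Minimal space-based sketch: the data set lives in memory, partitioned
// across nodes; a whole-data-set computation (an average) is split into
// per-node subjobs whose partial results are then aggregated.

type GridNode = { values: number[] };

// Three in-memory "nodes", each holding one partition of the data.
const nodes: GridNode[] = [
  { values: [10, 20, 30] },
  { values: [40, 50] },
  { values: [60] },
];

// Subjob run on each node: return a partial sum and count, not the raw data.
function partial(node: GridNode): { sum: number; count: number } {
  return {
    sum: node.values.reduce((a, b) => a + b, 0),
    count: node.values.length,
  };
}

// Aggregation step: combine the partials into the global average.
const partials = nodes.map(partial);
const total = partials.reduce((acc, p) => acc + p.sum, 0);
const count = partials.reduce((acc, p) => acc + p.count, 0);
console.log(total / count); // 35
```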
Caveats:
  • Transactional support is more difficult with RAM databases.
  • Generating enough load to test the system can be challenging, but the individual nodes can be tested independently.
  • Developing the expertise to cache the data for speed without corrupting multiple copies is difficult.
Best for:
  • High-volume data like click streams and user logs
  • Low-value data that can be lost occasionally without big consequences—in other words, not bank transactions
  • Social networks
These are Richards' big five. They may be just what you need. And if they’re not, the right solution could be a mixture of two. Or maybe even three. Be sure to download his book for free; it contains a lot more detail. If you can think of more, please let us know in the comments.
