Small independent programs with clear tasks that process information: Microservices
Microservice & Microfrontend
Microservices are autonomous applications designed around a specific function: they have clear, well-defined tasks and can work independently as well as together with other services by communicating with them. Software architecture began with monolithic applications, where the software consisted of a single program; over time this gave way to two new concepts, frontend and backend. In this model, software consisted of two distinct layers: the “frontend”, the part seen by the user, and the “backend”, where the application’s logic was written and data was stored.
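In code terms, a microservice can be as small as a single program exposing one well-defined operation over HTTP. The sketch below is a minimal illustration, not any particular production service; the product data, endpoint and port are assumptions made for the example:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# One clear task: report stock for a product. Other services would call
# this endpoint over HTTP instead of sharing this service's code or data.
STOCK = {"usb-cable": 12, "router": 3}  # illustrative in-memory data

class StockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        product = self.path.lstrip("/")
        body = json.dumps({"product": product, "stock": STOCK.get(product, 0)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

def serve(port=8001):
    # Port 8001 is an arbitrary choice for this sketch.
    HTTPServer(("", port), StockHandler).serve_forever()
```

Because the service owns its task and talks only over HTTP, it can be deployed, scaled and replaced independently of every other service in the system.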
Nowadays we rely on microservice and microfrontend architectures. In microservice architecture, the “backend”, which processes data and holds the application’s logic, is separated into its logical and functional components. In microfrontend architecture, the “frontend” is likewise separated into components to complement the microservice architecture.
Companies like Facebook, Twitter, Amazon and Hepsiburada use technology at its finest and lead the way with microservice and microfrontend architecture. At USTA, we follow this technology closely and build on it as well.
The benefits we offer our customers by using microservice & microfrontend architecture:
- No single point of failure
- Load balancing
- The ability to scale horizontally
- Easy addition of new modules or features
- The ability to run in cloud environments
- Solutions to immediate requirements through temporary resource scaling
- Elimination of obligations such as hardware installation and data backup
What are Microservices?
The characteristics defining microservices didn’t come out of the blue: the theory’s seeds were planted back in 1978, and software designers developed the idea over time. Difficulties experienced along the way, hardware-related technological improvements and the challenge of delegating work more effectively made the search for new ideas a continuous endeavor. Meanwhile, a digital evolution took place.
The reasons lying behind developers’ need for microservice structures can be summed up as authorization, dependency and performance issues, and hardware requirements that grow over time.
Initially, the software community developed solutions for individual situations. Many of the applications used in the field were independent from each other and far from integrated; data was kept on distributed, independent systems. Later, a switch to client–server structures was adopted and all data was kept in a centralized system, which made it possible to derive relatively interrelated data. However, this approach still showed weaknesses when large systems receiving inputs from many sources were at stake.
The community then started to work on what we now call Service-Oriented Architecture (SOA) in order to establish larger, integrated systems that could be reached from any point. The aim was to achieve isolation by separating functions from each other, to reduce dependency and to allow specialization. Software management thus became somewhat easier, and the opportunity arose to build systems with long-term sustainability. Dividing systems horizontally into modules that could run on different hardware and components also allowed areas of specialization to be better defined.
However, problems with SOA systems, such as service sizes, unnecessary resource expenditure during expansion and backward-compatibility issues, triggered new research in the software community. Because of the increased interdependency, new solutions were being created from scratch rather than through updates. Even though SOA has its negative aspects, it is still one of the most widely used architectures; an inevitable fact about SOA, however, is that it requires a deep understanding of its components.
Although these approaches are still in intensive use, a different architecture has been adopted for globally scaled solutions. Systems were created from well-defined documents and smaller services with simple tasks. Even though instrumenting such a system design was difficult, systems that were easier to develop and did not require deep domain knowledge began to appear. The idea’s first seeds date back to 1978, but it was not until after the millennium that microservices found acceptance. Today they are the software giants’ main architecture.
All these advances in the software world also took place at USTA, on a micro scale and with a parallel chronology. We moved from individual applications to client–server systems running on Windows, and from SOA systems to the most up-to-date microservice structures. Well-defined task descriptions allow us to divide work into parts so small that a service can be created independently of the developer’s domain knowledge.
System installation and updates are time-consuming and prone to various complications depending on the operating system, but automated deployment systems make them much easier. Thanks to an ecosystem established independently of the structures themselves, deployments can be made automatically, and the deployment phase continues autonomously while code is still being written. These mechanisms are called Continuous Integration and Continuous Deployment (CI/CD). Using CI/CD makes it possible to build better-quality systems by running control mechanisms such as unit tests, integration tests and code-quality checks during each deployment.
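The control mechanisms just described can be sketched as a minimal pipeline runner. The stage names and commands below are illustrative assumptions (not any specific CI/CD product); the point is that each stage must succeed before the next runs, so a failing test blocks the deployment:

```python
import subprocess

# Illustrative pipeline stages: each is a shell command that must succeed
# before the next one runs, mirroring a typical CI/CD flow.
PIPELINE = [
    ("unit tests", "pytest tests/unit"),
    ("integration tests", "pytest tests/integration"),
    ("code quality", "flake8 src"),
    ("build image", "docker build -t app:latest ."),
    ("deploy", "kubectl apply -f deploy.yaml"),
]

def run_pipeline(stages, runner=subprocess.run):
    """Run stages in order; stop at the first failure.

    Returns (names_of_completed_stages, failed_stage_or_None).
    """
    done = []
    for name, cmd in stages:
        result = runner(cmd, shell=True)
        if result.returncode != 0:
            return done, name  # later stages, including deploy, never run
        done.append(name)
    return done, None
```

Real CI/CD systems add caching, parallelism and rollback on top of this, but the gate-keeping logic is essentially the same.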
Microservice architecture removes the necessity of using the same technology for distinct independent structures, allowing the programming language, database and other tools best suited to each operation to be selected. By using different technological infrastructures for different functions, results are achieved faster, more easily and more safely. Microservices can also be developed in a backward-compatible way: instead of modifying an existing function, a new function is built each time a new parameter is added. This approach avoids the ruptures and compatibility issues experienced in larger systems and integrations, and offers simpler solutions.
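As a minimal sketch of this backward-compatible style (the function names and fields are hypothetical), a new requirement leads to a new versioned function rather than a change to the old one, so existing callers are never broken:

```python
# v1 is left untouched so existing callers keep working unchanged.
def get_customer_v1(customer_id: int) -> dict:
    return {"id": customer_id, "name": "example"}

# A new requirement (include subscription data) becomes a new version
# with a new parameter, instead of a breaking change to v1.
def get_customer_v2(customer_id: int, include_subscriptions: bool = False) -> dict:
    customer = get_customer_v1(customer_id)
    if include_subscriptions:
        customer["subscriptions"] = []  # would be fetched from the subscription service
    return customer
```

Callers on v1 see no change at all, while new callers opt into the richer v2 behavior.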
Long recovery processes caused by the need to analyze an entire system, or breaking changes due to forgotten steps, are no longer an issue. Another feature is that services can be moved from one project to another without being rewritten and can be used at any desired point without further development. At USTA, for example, a service prepared for individuals and institutions can easily be adapted to many other applications such as CRM, Infrastructure Management System, Lab Application, Subscription Systems, Orders and Sales, etc.
At USTA, we build ourselves a framework infrastructure by gathering services that meet general needs into a pool, and then offer our products to our customers by connecting other, specialized services to these general ones.
At USTA, we have been developing software projects for a long time now. Along with the advancements we experienced, we changed and evolved as much as the software world did. To fully understand DevOps and DevOps engineering, we should first take a look at the software development processes carried out before and after it.
Previously, the Software Development Life Cycle (SDLC) would begin with developers designing the software within their own environmental conditions and rules; once this phase was complete, development teams would install the software on servers prepared by IT and systems engineers. Security and similar concerns were handled together with other applications inside IT units, and applications that were secure and impenetrable in themselves were considered satisfactory. Infrastructure was always designed according to the developed application, and all operational processes and overhead were maintained through feedback (Operations Management).
In this traditional approach, systems are not flexible and the designed structures cannot be bent. When the number of services increased, applications had to be moved to infrastructures with bigger resources, which was a complicated process. Even when processes were automated, they required manual intervention and were time-consuming.
Agile software development methodologies, on the other hand, changed our behavior and our infrastructural needs. Dynamic development processes and their instant delivery to systems called for new planning in matters such as time and resource management.
Changes in the technology field led to an increase in distributed systems, the prevention of unnecessary resource use and, especially, the creation of infrastructure-independent systems.
At USTA, we started to build our applications on an architecture that serves general systems while keeping the systems required to operate them infrastructure-independent.
This change at USTA allowed developers to focus on more specific tasks while everything else was handled by people with interdisciplinary knowledge. Our DevOps Engineers (Development Life Cycle and Operations Management) are responsible for being involved in and running the entire system’s life cycle.
Developers and DevOps engineers always work hand in hand, and the following protocols are implemented by the DevOps engineers:
- Development, scalability and test approaches are designed by developers together with DevOps
- Parametric tasks required for applications to run independently of the infrastructure are shared
- Applications are fed automatically to the target system using CI/CD (Continuous Integration – Continuous Deployment). At this stage, the following sub-tasks are handled:
  - Tests prepared by developers are run
  - Security checks are carried out
  - Applications are installed on the relevant systems
- Systems and sub-application units are monitored; developers are provided with tools to monitor them, and feedback is given at the necessary points
- Problems that arise are forwarded to the development units and solved together where necessary
- Scalability is worked on together with the development teams
- The infrastructure required for the live system is set up, and applications are installed in that environment with CI/CD
- Log records, analysis and feedback are implemented in the live system
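The monitoring step in the list above can be sketched as a simple health-check poller. The service names and URLs below are hypothetical, and a real deployment would use a full monitoring stack with metrics and alerting rather than ad-hoc polling; the sketch only shows the basic idea of detecting unhealthy services:

```python
import urllib.request

# Hypothetical internal health endpoints, one per service.
SERVICES = {
    "orders": "http://orders.internal/health",
    "billing": "http://billing.internal/health",
}

def check(url, opener=urllib.request.urlopen):
    """Return True if the health endpoint answers 200 within the timeout."""
    try:
        with opener(url, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False  # unreachable or timed out counts as unhealthy

def unhealthy(services, opener=urllib.request.urlopen):
    """Return the names of services whose health endpoint is not OK."""
    return [name for name, url in services.items() if not check(url, opener)]
```

The list returned by `unhealthy` is what would be forwarded to the development units as feedback in the workflow described above.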