CONTAINERIZING PYTHON APPLICATIONS USING DOCKER TO BUILD A MICROSERVICE ARCHITECTURE

Rodionov K.
Cite as:
Rodionov K. CONTAINERIZING PYTHON APPLICATIONS USING DOCKER TO BUILD A MICROSERVICE ARCHITECTURE // Universum: технические науки (electronic scientific journal). 2024. 6(123). URL: https://7universum.com/ru/tech/archive/item/17827 (accessed: 18.12.2024).
DOI: 10.32743/UniTech.2024.123.6.17827

 

ABSTRACT

This article aims to explore the synergy between microservices architecture and containerization in Python applications, with a focus on their practical application and benefits. The research method involves examining key aspects such as microservices, Docker containerization, orchestration, and scaling strategies. Key findings reveal that microservices offer scalability and flexibility, while containerization ensures consistent deployment environments. Orchestration tools like Kubernetes and Docker Swarm facilitate multiple container management, and scaling strategies enhance performance and resilience. The article concludes that this convergence represents a transformative shift in software engineering, enabling robust, scalable, and maintainable systems. The novelty of this work lies in its comprehensive overview, providing insights into how these technologies interact to shape modern software development.


 

Keywords: microservices, containerization, python, docker, orchestration, scaling, kubernetes, docker swarm, software engineering, scalability.


 

Introduction

Modern software architecture has changed significantly, and the way applications are designed and deployed is driven by two key trends: microservices architecture and containerization. Both of these paradigms have a profound impact on how Python applications are developed, scaled, and maintained. This paper examines the convergence of these technologies and explores how they facilitate the creation of scalable, fault-tolerant, and flexible systems.

Microservices architecture represents a significant departure from traditional monolithic software development. Unlike monolithic applications, where all components are tightly coupled and deployed together, microservices architecture breaks the system into smaller, loosely coupled services that communicate over a network. This approach offers many benefits, such as improved scalability, better fault isolation, and easier deployment. The flexibility of microservices fits well with the dynamic nature of Python development, where frameworks such as Flask, Django, and FastAPI provide the tools needed to efficiently create and deploy independent services [4,6].

On the other hand, containerization solves a software development problem: ensuring environment consistency during development, testing and production. Docker, for example, is a leading containerization platform that packages applications and their dependencies into standardized blocks, ensuring that they run smoothly across different computing environments. This consistency is critical for microservices, where individual services may have unique dependencies and runtime environment requirements. By containerizing Python applications, developers can address compatibility issues and simplify deployment [1].

As for orchestration in microservices, it plays an important role in managing the complexities associated with running multiple containers. Orchestration tools such as Kubernetes and Docker Swarm provide mechanisms to scale, monitor, and support containerized applications, allowing organizations to focus on delivering business value. The orchestration layer is essential for service discovery, load balancing, and failover, which are critical components of a robust microservices architecture [10].

Scaling is another important aspect of microservices architecture. Traditional scaling approaches, such as vertical scaling, involve increasing a single server’s capacity. In contrast, microservices architecture provides horizontal scaling, in which individual services are replicated across multiple nodes to handle the increased load. This approach is particularly effective when combined with containerization and orchestration because it allows services to scale independently based on demand [7].

Thus, in the realm of Python application development, the intersection of microservices architecture and containerization has led to the emergence of innovative practices and tools. Python's rich ecosystem, combined with the flexibility of containers and orchestration capabilities, provides a solid foundation for building scalable, fault-tolerant, and maintainable systems.

1. Microservice Architecture

Microservice architecture is a software development paradigm that organizes an application as a collection of small, loosely coupled services (Figure 1). This architectural style has gained traction due to its ability to improve scalability, flexibility, and maintainability in complex software systems. In this context, Python has become a powerful language for building microservices due to its simplicity, extensive libraries, and robust platforms such as Flask, Django, and FastAPI.

 

Figure 1. Microservices architecture [3]

 

At its core, a microservices architecture divides an application into smaller independent services, each focused on a specific functionality. This modular approach contrasts with the traditional monolithic architecture where all components are tightly integrated into a single code base. By breaking applications into separate services, a microservices architecture allows teams to work on different system parts independently, facilitating parallel development and continuous deployment. This independence also improves fault isolation, as failures in one service do not necessarily affect others.

One of the key advantages of microservice architecture is its scalability. Unlike monolithic systems, where scaling usually involves increasing a single server's capacity (vertical scaling), microservices allow horizontal scaling, which means replicating certain services across multiple instances to handle increased load. This granular approach to scaling fits well with cloud deployments, where resources can be allocated dynamically based on demand.

However, microservices architecture also presents challenges, especially in terms of complexity. Managing multiple independent services leads to challenges related to communication, data consistency, and deployment. Communication between services, often accomplished through RESTful APIs or messaging systems, requires careful design to ensure efficiency and reliability. Data consistency becomes an issue when different services need access to shared data, requiring strategies for managing distributed data. Deployment complexity increases as the number of services grows, requiring robust tools for orchestration and monitoring.
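To make the communication challenge concrete, the following sketch simulates two services exchanging JSON over HTTP using only the Python standard library; the "user service" endpoint, the data it returns, and the port handling are illustrative assumptions, not part of any real system:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A minimal "user service" exposing one JSON endpoint.
class UserHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"user_id": 42, "name": "Alice"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example output quiet

# Port 0 asks the OS for any free port; real services would use a fixed one.
server = HTTPServer(("127.0.0.1", 0), UserHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# An "order service" calling the user service over a RESTful interface.
def fetch_user(port, timeout=2.0):
    # A timeout is essential: a hung dependency must not hang the caller.
    with urlopen(f"http://127.0.0.1:{port}/users/42", timeout=timeout) as resp:
        return json.loads(resp.read())

user = fetch_user(server.server_port)
print(user["name"])  # Alice
server.shutdown()
```

In production, the two services would run in separate processes or containers, and the caller would typically layer retries and circuit breaking on top of the timeout shown here.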

Despite these challenges, Python's versatility makes it an ideal choice for building microservices. The language offers many frameworks to suit different needs. Flask, a lightweight framework, provides simplicity and flexibility, making it suitable for small to medium-sized services. Django, a more comprehensive framework, offers built-in features such as authentication and database management that are useful for larger services. FastAPI, a relatively new framework, focuses on performance and ease of use, using modern Python features such as type hints to automatically document APIs [1,6].

To illustrate how Python can be used for microservices, consider a few examples of setting up a basic microservice using Flask, Django, and FastAPI.

1. Flask is a microframework for Python based on Werkzeug and Jinja2, known for its simplicity and minimalism. Here is a sample code snippet demonstrating a basic Flask service:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/hello', methods=['GET'])
def hello_world():
    return jsonify(message="Hello, World!")

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')

In this example, the service defines a single endpoint (/hello) that responds with a JSON message. Flask's simplicity allows developers to quickly configure and deploy microservices. By default, the application runs on port 5000 and accepts connections on all network interfaces (host='0.0.0.0').

2. Django, another popular Python framework, is often used for larger services that need more built-in features. Here is an example of setting up a basic Django microservice:

# views.py
from django.http import JsonResponse

def hello_world(request):
    return JsonResponse({'message': 'Hello, World!'})

# urls.py
from django.urls import path
from . import views

urlpatterns = [
    path('hello/', views.hello_world),
]

# settings.py
INSTALLED_APPS = [
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.staticfiles',
    'myapp',
]

# manage.py (simplified)
import os

if __name__ == '__main__':
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
    from django.core.management import execute_from_command_line
    execute_from_command_line(['manage.py', 'runserver'])

Django offers a more structured approach with a clear separation of concerns across components such as 'views', 'urls', and 'settings'. This structure is useful for larger projects with more complex requirements. In the above example, the 'hello_world' view responds with a JSON message, similar to the Flask example.

3. FastAPI is another option for microservices, especially for projects that value performance and modern Python features. Here is an example of using FastAPI:

from fastapi import FastAPI

app = FastAPI()

@app.get("/hello")
def hello_world():
    return {"message": "Hello, World!"}

if __name__ == '__main__':
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)

FastAPI supports asynchronous request handling and generates interactive API documentation automatically via OpenAPI and JSON Schema. The above example demonstrates a simple endpoint (/hello) that returns a JSON message. FastAPI's focus on performance makes it well suited for high-throughput services.
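FastAPI's automatic validation and documentation are driven by standard Python type hints. As a rough, standard-library-only illustration of that principle (not FastAPI's actual implementation), the sketch below checks call arguments against a handler's annotations:

```python
from typing import get_type_hints

def validate_args(func, **kwargs):
    """Check keyword arguments against a function's type hints --
    roughly the kind of work FastAPI automates from annotations."""
    hints = get_type_hints(func)
    hints.pop("return", None)  # only validate parameters
    for name, expected in hints.items():
        if name not in kwargs:
            raise TypeError(f"missing parameter: {name}")
        if not isinstance(kwargs[name], expected):
            raise TypeError(f"{name} must be {expected.__name__}")
    return func(**kwargs)

def get_item(item_id: int, q: str) -> dict:
    return {"item_id": item_id, "q": q}

result = validate_args(get_item, item_id=5, q="demo")
print(result)  # {'item_id': 5, 'q': 'demo'}
```

FastAPI performs far richer coercion and schema generation (via Pydantic), but the design idea is the same: the annotations serve as a single source of truth for validation and documentation.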

It is worth noting that the microservices architecture offers significant advantages in terms of scalability, flexibility, and error isolation. However, it also presents challenges related to communication, data consistency, and deployment. Python's rich ecosystem, which includes platforms such as Flask, Django, and FastAPI, provides the tools needed to build effective microservices. By understanding the strengths and weaknesses of each platform, developers can choose the right tool for their specific needs, creating the foundation for successful microservices-based applications.

2. Containerization with Docker

Containerization has revolutionized software development and deployment by offering a consistent and efficient way to package and distribute applications (Figure 2). It encapsulates applications and their dependencies into lightweight, portable containers that run identically across different environments. Docker, developed by Docker, Inc., is currently the leading containerization platform and has become the de facto standard for application containerization thanks to its ease of use, robust ecosystem, and strong community support.

 

Figure 2. Application containerization [3]

 

Containerization solves the common software development problem of "it works on my machine," which means that an application works fine on the developer's local machine but does not function properly when deployed to other environments. This problem occurs when software behaves differently in different environments due to differences in system configurations, dependencies, or operating systems. Containers solve this problem by encapsulating everything the application needs to run - such as code, libraries, and runtime environment - in a single isolated module. This provides consistency and portability, making it easy to move applications between development, testing, and production environments.

Docker provides a platform for creating, delivering and running containers. It uses operating system-level virtualization to isolate applications using Linux kernel features such as control groups and namespaces. Docker's popularity is due to several factors, including its simplicity, efficiency, and dynamic ecosystem. The Docker ecosystem includes tools for container orchestration, monitoring, and networking, which are critical for building scalable and fault-tolerant applications [10].

To containerize a Python application using Docker, the first step is to create a Dockerfile, a text file that defines the environment and instructions needed to create a Docker image. The Dockerfile defines the base image, dependencies, application code, and any necessary configuration.

Consider a simple example of a Python application displaying a "Hello, World!" message using Flask. The following Dockerfile demonstrates how to containerize this application:

# Use an official Python runtime as a parent image
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Make port 5000 available to the world outside this container
EXPOSE 5000

# Define environment variable
ENV NAME World

# Run app.py when the container launches
CMD ["python", "app.py"]

In this Dockerfile, the FROM instruction specifies a base image, in this case the official python:3.9-slim image. This image includes a minimal Python environment suitable for running the application. The WORKDIR instruction sets the working directory inside the container, and the COPY instruction copies the application code from the host machine into the container.

The RUN instruction installs the required dependencies specified in the requirements.txt file. The EXPOSE instruction makes port 5000 available, which is the default port for Flask. The ENV instruction sets an environment variable, and the CMD instruction specifies the command to run when the container starts.

To create a Docker image, navigate to the directory containing the Dockerfile and run the following command:

docker build -t hello-world-app .

This command tells Docker to build an image named hello-world-app using the Dockerfile in the current directory. The -t flag tags the image, making it easier to identify and manage.

After creating the image, run the following command to start the container:

docker run -p 5000:5000 hello-world-app

This command maps container port 5000 to host port 5000 (-p 5000:5000) and starts the container using the hello-world-app image. The application should now be accessible at http://localhost:5000.

In this way, containerization provides many benefits, including a consistent environment, simplified deployment, and resource efficiency. By isolating applications, containers avoid conflicts arising from shared dependencies or system configurations. This isolation also improves security because containers operate with limited access to the host system.

Docker's lightweight nature enables rapid development cycles, allowing developers to build, test, and deploy applications quickly. Containers are also highly portable, making them ideal for cloud deployments. The Docker ecosystem includes tools such as Docker Compose, which defines and runs multi-container applications, and Docker Swarm, which provides built-in clustering and orchestration.

3. Orchestration

Microservices orchestration is essential for managing the complexities associated with deploying, managing, and scaling multiple containers. Microservices include many independent services that must communicate efficiently while maintaining their individual functions. Orchestration helps coordinate these services, ensuring that they operate seamlessly as a single system (Figure 3). Two of the best-known orchestration tools are Kubernetes and Docker Swarm, each offering unique features and benefits.

 

Figure 3. Microservices orchestration [8]

 

Orchestration is necessary for several reasons. First, it automates the deployment, scaling, and management of containerized applications, reducing manual intervention and the risk of human error. Second, it provides mechanisms for service discovery, load balancing, and failover, which are critical to maintaining system availability and performance. Third, orchestration improves resource utilization by efficiently distributing workloads across available resources. These capabilities are important in microservice architectures, where independent services often have different resource and workload requirements [7].

Next, consider the main orchestration tools. The first is Kubernetes, an open-source platform originally developed by Google that has become the de facto standard for container orchestration. Kubernetes automates the deployment, scaling, and management of containerized applications by providing features such as self-healing, load balancing, and secrets management. Kubernetes runs in a cluster of nodes, where each node runs multiple containers managed by a central control plane. The control plane coordinates the cluster's state, ensuring that the desired number of containers run and are optimally distributed.
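The desired-state model at the heart of Kubernetes can be illustrated with one simplified reconciliation step in Python. This is a conceptual sketch, not Kubernetes' actual controller code; the container records and action strings are invented for illustration:

```python
def reconcile(desired_replicas, running):
    """One step of a simplified reconciliation loop: compare the
    desired replica count with observed containers and return the
    actions an orchestrator would take to converge the two."""
    actions = []
    healthy = [c for c in running if c["healthy"]]
    # Self-healing: replace containers that failed their health checks.
    actions += [f"restart {c['id']}" for c in running if not c["healthy"]]
    # Scale up or down toward the desired state.
    diff = desired_replicas - len(healthy)
    if diff > 0:
        actions += [f"start replica {i}" for i in range(diff)]
    elif diff < 0:
        actions += [f"stop {c['id']}" for c in healthy[:-diff]]
    return actions

state = [{"id": "web-1", "healthy": True}, {"id": "web-2", "healthy": False}]
print(reconcile(3, state))
# ['restart web-2', 'start replica 0', 'start replica 1']
```

Real controllers run this comparison continuously against the cluster's API, which is what keeps a Deployment at its declared replica count without manual intervention.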

Docker Swarm, another orchestration tool, is tightly integrated with the Docker ecosystem. It provides built-in clustering and orchestration for Docker containers, allowing developers to manage multi-container applications. Docker Swarm uses a simpler architecture compared to Kubernetes, making it easier to set up and manage small projects. Swarm's built-in features include load balancing, service discovery, and scaling, providing a simple solution for managing containerized applications [2].

To illustrate the orchestration process, consider a scenario where multiple Python microservices need to be deployed and managed. Using Kubernetes, this can be achieved by creating YAML configuration files that define the desired cluster state. Here is an example of a Kubernetes configuration for a basic microservices setup:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: my-web-app:latest
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000
  type: LoadBalancer

In this example, the Deployment specification defines a service called "web-service" that runs three replicas of the containerized application. The Service specification exposes the deployment through a load balancer, making it available on port 80. The configuration ensures that the desired state (three replicas) is preserved even if some replicas fail or are terminated.

Docker Swarm provides similar capabilities through a simple CLI-based approach. Here is an example of deploying multiple containers using Docker Swarm:

docker swarm init

docker service create --name web-service \
  --replicas 3 \
  -p 80:5000 \
  my-web-app:latest

In this example, the docker swarm init command initializes a new Swarm mode cluster, and the docker service create command creates a new service with three replicas. The -p flag maps container port 5000 to host port 80, as in the Kubernetes example. The simplicity of Docker Swarm makes it an attractive option for smaller projects or teams that are already familiar with Docker.

Orchestration provides several key benefits for a microservices architecture. First, it enables horizontal scaling, allowing services to adjust their capacity based on demand. This flexibility absorbs traffic peaks and provides consistent performance. Second, orchestration improves fault tolerance by automatically detecting and replacing failed containers, which increases system resiliency. Third, it facilitates rolling updates, allowing services to be upgraded without downtime by gradually replacing old containers with new ones.

4. Microservices scaling

Scaling is a critical aspect of microservices architecture that allows applications to adapt to changing workloads and maintain optimal performance. In the context of microservices, scaling refers to the ability to adjust the capacity of individual services to meet demand. This customization can be achieved through horizontal scaling, vertical scaling, or their combination (Figure 4). Each scaling strategy has its own benefits and challenges, which are addressed by various orchestration and containerization tools such as Docker, Kubernetes, and Docker Swarm.

 

Figure 4. Types of scaling [5]

 

Horizontal scaling, also known as "scaling out," involves adding additional service instances to distribute the workload across multiple nodes. This strategy is particularly effective for microservices because it allows individual services to scale independently based on their specific needs. Horizontal scaling improves fault tolerance by distributing the workload across multiple instances, reducing the impact of individual failures. It also improves performance by balancing the workload across available instances, preventing any individual instance from becoming a bottleneck.
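The load distribution that underlies horizontal scaling can be sketched with the simplest balancing policy, round-robin, which hands each incoming request to the next replica in turn. The replica addresses below are hypothetical:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distribute requests across service replicas in turn -- the
    simplest of the balancing policies an orchestrator applies."""

    def __init__(self, instances):
        self._instances = cycle(instances)

    def next_instance(self):
        return next(self._instances)

replicas = ["10.0.0.1:5000", "10.0.0.2:5000", "10.0.0.3:5000"]
lb = RoundRobinBalancer(replicas)

# Six requests cycle evenly over the three replicas.
assignments = [lb.next_instance() for _ in range(6)]
print(assignments)
```

Production balancers (such as the ones built into Kubernetes Services or Docker Swarm's routing mesh) add health checking and connection awareness, but the even spreading of load shown here is the core idea.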

Vertical scaling or "scaling up" involves increasing the capacity of a single instance, usually by adding additional resources such as CPU or memory. While vertical scaling can be useful for certain applications, it has limitations. Hardware constraints may limit the maximum capacity of a single instance, and vertical scaling does not improve fault tolerance because all workloads still depend on a single instance. However, vertical scaling may be easier to implement, especially when dealing with legacy systems or applications that cannot be easily distributed across multiple instances [4].

In microservices architecture, horizontal scaling is often preferred because of its flexibility and resilience. Containerization with Docker and orchestration tools such as Kubernetes or Docker Swarm facilitate horizontal scaling by enabling dynamic creation and management of service replicas.

To illustrate horizontal scaling, consider a scenario where a Python microservice needs to handle increased traffic during peak hours. Using Kubernetes, this can be achieved by defining the desired number of replicas in the deployment configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: scalable-service
spec:
  replicas: 5
  selector:
    matchLabels:
      app: scalable
  template:
    metadata:
      labels:
        app: scalable
    spec:
      containers:
      - name: scalable-container
        image: scalable-app:latest
        ports:
        - containerPort: 5000

In this example, the deployment configuration specifies five service replicas, ensuring that the workload is distributed across five instances. The Kubernetes Horizontal Pod Autoscaler can be used to automatically adjust the number of replicas based on CPU utilization or other metrics. This autoscaling ensures that the service adapts to changing workloads while maintaining performance and availability.
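The Horizontal Pod Autoscaler's core scaling rule is desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue), clamped to the configured minimum and maximum. A small Python sketch of that calculation (the CPU figures are illustrative):

```python
import math

def desired_replicas(current_replicas, current_cpu, target_cpu,
                     min_replicas=1, max_replicas=10):
    """Core of the Horizontal Pod Autoscaler's scaling rule:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to the configured replica bounds."""
    desired = math.ceil(current_replicas * current_cpu / target_cpu)
    return max(min_replicas, min(max_replicas, desired))

# 5 replicas averaging 90% CPU against a 50% target -> scale out to 9.
print(desired_replicas(5, current_cpu=90, target_cpu=50))  # 9
```

When load drops (say, average CPU falls to 20% against the same target), the same formula scales the service back down, which is what makes the adjustment demand-driven in both directions.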

Docker Swarm provides a similar approach to horizontal scaling through service management commands. For example, the following command creates a service with five replicas using Docker Swarm:

docker service create --name scalable-service \
  --replicas 5 \
  -p 80:5000 \
  scalable-app:latest

In this command, the --replicas flag specifies the desired number of instances, and the -p flag maps container port 5000 to host port 80. Docker Swarm's scaling capabilities provide efficient distribution of workloads across multiple instances, improving fault tolerance and performance.

Vertical scaling, while less common in microservices, may be necessary for certain applications or in certain scenarios. For example, a memory-intensive service may require additional RAM to efficiently process large datasets. Docker enables vertical scaling by adjusting the resource limits of containers. For example, the following command creates a container with increased memory:

docker run -d --name memory-intensive-app \
  --memory=2g \
  memory-intensive-image

In this command, the --memory flag sets the container's memory limit to 2 gigabytes, ensuring that the service has sufficient resources for its workload. However, vertical scaling has inherent limitations, such as hardware limitations and insufficient fault tolerance, making it less suitable for most microservice scenarios.

Scaling strategies in microservices architecture are closely related to orchestration and containerization tools. Kubernetes and Docker Swarm provide robust horizontal scaling mechanisms, while Docker's resource management features facilitate vertical scaling when needed. The choice of scaling strategy depends on the specific needs of each service, considering factors such as workload patterns, fault tolerance requirements, and hardware limitations.

Conclusion

By examining the convergence of microservices architecture and Python application containerization, this paper sheds light on the current trend in software development. The research covered key aspects such as the microservices approach, the benefits and challenges of containerization using Docker, orchestration techniques, and scaling strategies. Together, these topics reflect a complex but useful picture of modern software development.

Considering the broader implications, it becomes clear that the intersection of microservices architecture and containerization represents a significant evolution in software development. The ability to build, deploy, and manage scalable, fault-tolerant, and easily maintainable systems is becoming increasingly important in a world where software drives business innovation and competitiveness. The practices and tools discussed in this article provide a solid foundation for navigating this evolving environment, empowering developers and organizations to meet the demands of today's applications.

As for future directions, the continued evolution of microservices and containerization technologies promises even greater flexibility, efficiency, and resiliency. As new infrastructures, tools, and best practices emerge, developers will have more opportunities to build robust systems that meet business needs and technological advances. The future of microservices and containerization promises to be dynamic and transformative, driven by constant innovation and a relentless search for better software solutions.

 

References:

  1. Alves M., Paula H. Identifying Logging Practices in Open Source Python Containerized Application Projects // Proceedings of the XXXV Brazilian Symposium on Software Engineering. – 2021. – pp. 16-20.
  2. Comparison of Kubernetes and Docker. [Electronic resource] – Access mode: https://www.atlassian.com/ru/microservices/microservices-architecture/kubernetes-vs-docker
  3. Docker and Kubernetes – how containerization technologies differ. [Electronic resource] – Access mode: https://eternalhost.net/blog/razrabotka/docker-kubernetes
  4. Douglas F., Nieh J. Microservices and containers // IEEE Internet Computing. – 2019. – Vol. 23. – No. 6. – pp. 5-6.
  5. Horizontal vs. Vertical Cloud Scaling: Key Differences and Similarities. [Electronic resource] – Access mode: https://www.spiceworks.com/tech/cloud/articles/horizontal-vs-vertical-cloud-scaling/
  6. Keni N. D., Kak A. Adaptive containerization for microservices in distributed cloud systems // 2020 IEEE 17th Annual Consumer Communications & Networking Conference (CCNC). – IEEE, 2020. – pp. 1-6.
  7. Luo X., Ren F., Zhang T. High performance userspace networking for containerized microservices // International Conference on Service-Oriented Computing. – Cham: Springer International Publishing, 2018. – pp. 57-72.
  8. The Benefits of Microservices Choreography vs Orchestration. [Electronic resource] – Access mode: https://solace.com/blog/microservices-choreography-vs-orchestration/
  9. Will microservices become the architecture of the future? [Electronic resource] – Access mode: https://dou.ua/lenta/articles/microservices-for-future/
  10. Yepuri V. K. et al. Containerization of a polyglot microservice application using Docker and Kubernetes // arXiv preprint arXiv:2305.00600. – 2023.

 


Information about the author

Python developer, X5 Digital; Don State Technical University (DSTU), Russia, Rostov-on-Don
