SYSTEM'S LOAD REDUCTION BY USING ASYNCHRONOUS AND SYNCHRONOUS SERVICE METHODS

Cite as:
Zakirov V., Abdullaev E., Shukurov F. SYSTEM'S LOAD REDUCTION BY USING ASYNCHRONOUS AND SYNCHRONOUS SERVICE METHODS // Universum: технические науки: electronic scientific journal. 2023. No. 4(109). URL: https://7universum.com/ru/tech/archive/item/15350 (accessed: 23.11.2024).
DOI - 10.32743/UniTech.2023.109.4.15350

 

ABSTRACT

This article discusses the provision of services to consumers via synchronous and asynchronous service methods, together with the corresponding mathematical models. The load placed on systems organized in synchronous and asynchronous form is examined, as are the probabilities of loss when service is provided by the explicit-loss, waiting, and conditional-loss methods. These techniques are then used to assess the efficiency of a distance learning system: the quality indicator of the system is computed separately for the waiting, conditional-loss, and explicit-loss principles. The calculations trace how the load entering the system changes with the number of waiting places and with the permissible waiting time. Graphs of the computed results are presented, and the differences between the methods are identified through their efficiency indicators. It is found that the load in asynchronous systems is 2-3 times lower than in synchronous systems, and that employing synchronous and asynchronous service methods together serves the dual goal of reducing system load and increasing the number of users.


 

Keywords: synchronous system, asynchronous system, single-channel communication, Markov process, call intensity, flow, quality indicator of system efficiency.


 

1. Introduction

Modern information and communication technologies play a significant role in how distance education is organized today, and their functionality and serviceability largely determine the quality of the educational process [1]. There are numerous platforms for managing remote education, and their operating principles differ. In particular, synchronous and asynchronous service techniques (models) are used when designing platforms of various types [3].

In synchronous systems, user requests are loaded onto the working devices of the server servicing the system, which generates a significant amount of data circulation on the server [1]. The synchronous workflow is based on a direct connection between the user and the service provider (software, hardware, personnel, etc.), and this connection is maintained throughout the entire information exchange with the user. The service device or software responds to requests in a strict order (Fig. 1). As a result, it becomes harder for the server to fulfill subsequent requests without being overloaded, and the response time to user queries grows. A request made during service is processed only when a service device is available; otherwise it either waits for a device to become free or leaves the system.

 

Figure 1. The operation principle of the synchronous system
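As an illustration of this principle, the following Python sketch (hypothetical names and timings, not code from the article) serves requests strictly one at a time through a single service device: while the device is busy, a new request either occupies one of a bounded number of waiting places or is rejected.

import queue
import threading
import time

# A minimal sketch of the synchronous principle: a single service device,
# requests handled strictly one after another; while it is busy, new
# requests either take a waiting place or are rejected.
WAITING_PLACES = 2                       # m: number of waiting places (assumed)
requests = queue.Queue(maxsize=WAITING_PLACES)

def serve_forever() -> None:
    while True:
        req = requests.get()             # take the next request in arrival order
        time.sleep(0.5)                  # the whole exchange holds the device (assumed t = 0.5 s)
        print(f"served request {req}")
        requests.task_done()

threading.Thread(target=serve_forever, daemon=True).start()

for i in range(6):                       # a burst of incoming requests
    try:
        requests.put_nowait(i)           # occupy a waiting place if one is free
        print(f"request {i} accepted (waiting or in service)")
    except queue.Full:
        print(f"request {i} rejected: device busy, no free waiting place")
    time.sleep(0.1)

requests.join()                          # let the accepted requests finish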

 

In systems built on the asynchronous operation principle, the data required to fulfill user requests is downloaded to the user's device. As a result, the workload of the system is reduced, and its fast and permanent memory devices are not overloaded. Asynchronous systems can serve many requests concurrently: incoming requests are accepted and processed in parallel, without waiting for one to be completed before the next is taken. After a request is answered, the connection between the two parties is released until the next exchange, so the device can continue serving other requests over the same service channel. Consequently, for devices with the same technical characteristics, an asynchronous system can handle somewhat more requests than a synchronous one (Fig. 2). Unlike a synchronous system, where the connection is held during the entire service, data exchange in such a system occurs only when the data on the system server needs to be updated. Users connect to the server only when one of these updates occurs, at a random moment in time. As in synchronous systems, such connections are serviced only while a service device is free; otherwise the request either waits for a device to become free or leaves the system.

So, in each of the techniques under consideration, incoming requests arrive at random times and are served only when a service device is free. The efficiency of these two service delivery strategies can be evaluated using queueing theory (QT) models.

 

Figure 2. The principle of operation of the asynchronous system
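By contrast, the asynchronous principle can be sketched with Python's asyncio (illustrative names and delays, not code from the article): several requests are accepted as they arrive and progressed concurrently, and the channel is occupied only while an update is actually being exchanged.

import asyncio
import random

# A minimal sketch of the asynchronous principle: requests are accepted as
# they arrive and progressed concurrently; the server does not block on one
# request until it is finished before taking the next.
async def handle_request(req_id: int) -> None:
    # Only the short update exchange occupies the channel; the pause between
    # updates does not hold a service device.
    await asyncio.sleep(random.uniform(0.1, 0.5))   # exchange an update
    print(f"request {req_id}: update delivered, connection released")

async def main() -> None:
    # Six requests arriving at random moments are served concurrently.
    tasks = [asyncio.create_task(handle_request(i)) for i in range(6)]
    await asyncio.gather(*tasks)

asyncio.run(main())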

 

To use QT models, one must know the order in which service requests arrive, how long they take to serve, and how the service system is structured. The incoming stream is made up of user requests to the hardware or software that provides the service; since each user turns to the hardware or software he needs at a moment convenient for him, these requests form a random flow arriving at random moments in time. Such a flow of user requests can be handled by the explicit-loss, waiting (or conditional-loss), and combined methods. The service duration of each request varies and is determined at random. In addition, the serving system may be of full or limited availability; here, full-availability systems should be used, because such an organization of service allows system users to employ any free service device or software they need [2].

2. Methods

As noted above, users may be placed in a queue while the system is busy during synchronous service. A surge of incoming requests can then make the number of queued requests grow without bound, although queues can also be restricted. Configuring an unlimited number of waiting places, however, is economically inefficient, because it leads to unbounded waiting times. Taking this into account, it is appropriate to use the M/M/V/m, m < ∞ model as the mathematical model of the synchronous and asynchronous service methods (Fig. 3) [4, 5], where m is the number of waiting places, V is the number of service devices, and the symbols M indicate that the flow of requests and the service durations both obey exponential (Markovian) distribution laws.

 

Figure 3. User service model in synchronous and asynchronous systems

 

Here λ is the arrival rate of incoming requests and β is the service rate, i.e. β = 1/t, where t is the average service duration.
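To make the model concrete, the Python sketch below (parameter values are assumed for illustration and do not come from the article) computes the stationary state probabilities of the M/M/V/m birth-death chain directly from λ, β, V and m; the offered load is A = λ/β = λ·t Erlangs.

from math import factorial

def mmvm_state_probs(lam: float, beta: float, V: int, m: int) -> list[float]:
    """Stationary probabilities p_0 .. p_{V+m} of the M/M/V/m queue.

    lam  -- arrival rate of requests (lambda)
    beta -- service rate, beta = 1/t, where t is the mean service duration
    V    -- number of service devices
    m    -- number of waiting places
    """
    A = lam / beta                                             # offered load in Erlangs, A = lam * t
    rho = A / V
    weights = [A**i / factorial(i) for i in range(V + 1)]      # states 0 .. V
    weights += [weights[V] * rho**j for j in range(1, m + 1)]  # states V+1 .. V+m
    total = sum(weights)
    return [w / total for w in weights]

# Illustrative values (assumed): lam = 2 requests/s, t = 1 s, V = 3 devices, m = 2 places.
V, m = 3, 2
p = mmvm_state_probs(lam=2.0, beta=1.0, V=V, m=m)
print("loss probability p_{V+m}:", p[V + m])         # arriving request finds everything occupied
print("waiting probability     :", sum(p[V:V + m]))  # all devices busy, but a waiting place is free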

The quality indicators of the cited model are as follows [8-12]:

  • the probability of waiting for requests to be served

$P(\gamma > 0) = \dfrac{E_{V,V}(A)\,(1 - \rho^{m})}{(1 - \rho) + \rho\,E_{V,V}(A)\,(1 - \rho^{m})}$,                                         (3)

where $E_{V,V}(A)$ is the Erlang B formula, $A$ is the load offered to the service system, and $\rho = A/V$;

$P_{loss} = \dfrac{E_{V,V}(A)\,\rho^{m}\,(1 - \rho)}{(1 - \rho) + \rho\,E_{V,V}(A)\,(1 - \rho^{m})}$,                                            (4)

In the considered model, the probability of loss of requests is determined by expression (4).

$E_{V,V}(A) = \dfrac{A^{V}/V!}{\sum_{i=0}^{V} A^{i}/i!}$.                                                    (5)

These expressions show that if m = 0, i.e., there are no waiting places, expression (4) reduces to the Erlang B formula (5): the system becomes an explicit-loss system, and the probability of loss is determined by Erlang's B formula [6-8].

If m = ∞, expression (3) turns into the Erlang C formula (6), and the system becomes a conditional-loss (pure waiting) system. Explicit losses in this case are zero, because every request is eventually served.

$P(\gamma > 0) = \dfrac{E_{V,V}(A)}{1 - \rho\,\bigl(1 - E_{V,V}(A)\bigr)}$.                                     (6)
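A compact implementation of expressions (3)-(6) is sketched below (the function names are mine; the formulas follow the reconstruction given above and assume ρ = A/V < 1 for the waiting cases). The Erlang B value is computed with the standard stable recursion, and the results agree with the direct state-probability computation sketched earlier.

def erlang_b(V: int, A: float) -> float:
    """Erlang B formula (5), computed with the standard stable recursion."""
    b = 1.0
    for i in range(1, V + 1):
        b = A * b / (i + A * b)
    return b

def wait_prob(V: int, A: float, m: int) -> float:
    """Expression (3): probability that a request waits, with m waiting places."""
    e, rho = erlang_b(V, A), A / V
    return e * (1 - rho**m) / ((1 - rho) + rho * e * (1 - rho**m))

def loss_prob(V: int, A: float, m: int) -> float:
    """Expression (4): probability that a request is lost (all waiting places busy)."""
    e, rho = erlang_b(V, A), A / V
    return e * rho**m * (1 - rho) / ((1 - rho) + rho * e * (1 - rho**m))

def erlang_c(V: int, A: float) -> float:
    """Erlang C formula (6): waiting probability for m = infinity (requires A < V)."""
    e, rho = erlang_b(V, A), A / V
    return e / (1 - rho * (1 - e))

# Consistency checks: m = 0 reduces (4) to Erlang B; a large m approaches Erlang C.
V, A = 3, 1.5
assert abs(loss_prob(V, A, 0) - erlang_b(V, A)) < 1e-12
assert abs(wait_prob(V, A, 200) - erlang_c(V, A)) < 1e-9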

3. Results and discussion

Using the expressions above, let us now evaluate the efficiency of the distance learning system. The quality indicator of the system is constructed separately for the waiting (expectation), conditional-loss, and explicit-loss principles, and the system's efficiency is determined while varying the load entering the system, the number of waiting places, and the maximum waiting time.

Figure 4 shows how the indicators Ev,v(A), P(m>0), and P(m=∞) vary with the load introduced into the system. Three distinct cases are compared as the load increases during service under the explicit-loss, conditional-loss, and waiting principles, and the behaviour depends on the number of waiting places. When the service is offered with waiting, a certain level of load is still handled because requests are diverted to the waiting state while the system's devices are busy (red and green curves in the diagram). The difference between these two cases comes from whether the number of waiting places is finite or infinite, which affects the waiting times. A system with unlimited waiting places serves all incoming requests, so as the volume of requests grows, so do the waiting times. In the conditional-loss technique the number of waiting places is restricted, so the waiting times remain essentially constant at the cost of losing some requests (those arriving while all waiting places are occupied). In systems built on the explicit-loss principle the system loses many requests, since there are no waiting places at all (blue curve); at the same time, for requests arriving while the system's devices are free, the service time in such systems is slightly longer than under the other principles. As the graphs show, the losses rise in all three cases (Ev,v(A), P(m>0), and P(m=∞)) as the load rises, and the gap between them widens. At low loads this difference is negligible: for example, the difference between Ev,v(A) and P(m=∞) does not exceed 10% at a specific load of 0.1 Erl, but reaches 50% at a specific load of 0.5 Erl. The inequality between them is preserved throughout: Ev,v(A) < P(m>0) < P(m=∞).

 

Figure 4. Ev,v(A), P(m>0) and P(m=∞) change of indicators in relation to the load entering the system
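As a usage example (the load values and system size are illustrative, not the article's data), the helpers defined after expression (6) reproduce the qualitative picture of Figure 4: all three indicators grow with the specific load, and the gap between them widens.

# Specific load a = A/V is varied while V and m stay fixed (assumed values).
V, m = 10, 5
for a in (0.1, 0.3, 0.5, 0.7):
    A = a * V
    print(f"a = {a:.1f} Erl: Ev,v(A) = {erlang_b(V, A):.4f}, "
          f"P(m>0) = {wait_prob(V, A, m):.4f}, P(m=inf) = {erlang_c(V, A):.4f}")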

 

In contrast to the previous example, Figure 5 analyses how the losses depend on the number of waiting places when service is provided exclusively by the conditional-loss method. Increasing the number of waiting places does place some additional load on the system, and it also lengthens the response to user requests. The graph shows that, with the offered load and the number of service devices held constant, the probability of loss (an arriving request finding all waiting places occupied) decreases roughly exponentially as the number of waiting places increases. Even a small number of waiting places lowers this probability significantly: for instance, organizing a single waiting place where previously there were none reduces the losses several times. Thus, when the service devices are busy, setting up a small number of waiting places makes it possible either to reduce the losses or to reduce the required number of service devices.

 

Figure 5. Dependence of losses on the number of waiting places in the method of conditional losses
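With the same helpers from the sketch after expression (6), a few lines illustrate Figure 5 (V and A are assumed): the loss probability falls roughly geometrically as waiting places are added.

V, A = 10, 7.0                                      # assumed number of devices and offered load
for m in range(0, 9):
    print(f"m = {m}: P(loss) = {loss_prob(V, A, m):.5f}")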

 

Figure 6 shows how the losses depend on the waiting time when service is provided by the conditional-loss method. The probability of such losses can be calculated with the following expression [7]:

$P(\gamma > t_{d}) = P(\gamma > 0)\, e^{-(V - A)\,\beta\, t_{d}}$,                                                   (7)

where $t_{d}$ is the permissible waiting time of users.

 

Figure 6. Dependence of losses on the waiting time in the conditional loss method
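Expression (7), as reconstructed above, can be evaluated with one more helper built on the erlang_c function from the earlier sketch, taking P(γ > 0) from the m = ∞ case (parameter values are illustrative):

from math import exp

def wait_exceeds(V: int, A: float, beta: float, t_d: float) -> float:
    """Expression (7): probability that the waiting time exceeds t_d (M/M/V, FIFO, m = infinity)."""
    return erlang_c(V, A) * exp(-(V - A) * beta * t_d)

# Illustrative values: 10 devices, A = 7 Erl, mean service time t = 1/beta = 60 s.
for t_d in (0, 30, 60, 120):
    print(f"t_d = {t_d:3d} s: P(wait > t_d) = {wait_exceeds(10, 7.0, 1 / 60, t_d):.4f}")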

 

As the graph shows, if the load entering the system and the number of service devices are left unchanged, the probability of waiting longer than $t_{d}$ drops exponentially as the permissible waiting time grows. In practice, users of the system may be willing to wait a certain length of time for the service to begin; if this time exceeds a certain threshold, they leave the system without using the service.

Examination of the graphs above shows that, in contrast to the explicit-loss method, when requests are handled by the waiting method the probability of waiting rises quickly with the load and reaches its highest value at A = V. The system then enters a non-stationary state, and the number of pending requests and the waiting time grow substantially, which is inconvenient for users. No such situation arises when service is provided by the explicit-loss technique, which demonstrates its advantage, because when the load rises sharply it is technically and economically inefficient to increase the number of waiting places or service devices in order to raise the efficiency of the system. Therefore, when the load on the system changes sharply and reaches a certain value, changing the service method makes it possible to increase the efficiency of the system.
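The idea of switching the service method when the load reaches a critical value can be sketched as follows (the threshold and the decision rule are assumptions made for illustration, not prescribed by the article; erlang_c comes from the sketch after expression (6)):

def choose_service_method(A: float, V: int, wait_threshold: float = 0.3) -> str:
    """Pick a service discipline from the current offered load (illustrative rule).

    While the waiting probability stays acceptable, serve with waiting; once the
    load approaches V and waiting would grow sharply, switch to explicit loss
    instead of adding devices or waiting places.
    """
    if A >= V:                            # non-stationary region for the waiting method
        return "explicit loss"
    return "waiting" if erlang_c(V, A) < wait_threshold else "explicit loss"

for A in (3.0, 7.0, 9.0, 10.5):
    print(f"A = {A:4.1f} Erl -> {choose_service_method(A, V=10)}")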

Conclusion

The purpose of using the synchronous and asynchronous service methods discussed above is to lighten the system's load or, put another way, to increase the number of users the system can serve. According to preliminary calculations, under identical conditions the load on the system depends on the service method used, and it is 2-3 times lower for the asynchronous method than for the synchronous one.

The mathematical models discussed above make it possible to design synchronous and asynchronous service systems. To optimize the system's efficiency in operation, one can therefore plan the system's user base, waiting places, and waiting times, and change the system's service model depending on the load.

          

References:

  1. Abdullaev E. Technical methods of organizing a distance learning system // Scienceweb academic papers collection. 2022.
  2. Eldor Sa'dulla o'g A. et al. Asinxron nazorat usullaridan foydalangan holda magistrlik dissertatsiyalarini nazorat qilishni optimallashtirish // Journal of new century innovations. 2023. Vol. 24. No. 3. P. 22-27.
  3. Turdiyev O. A., Tukhtakhodjaev A. B., Abdullaev E. S. The model of network bandwidth when servicing multi-service traffic // Journal of Tashkent Institute of Railway Engineers. 2019. Vol. 15. No. 3. P. 70-74.
  4. Богдановский В.К. Разработка информационно-аналитической системы обучения сетевому конфигурированию // Актуальные проблемы авиации и космонавтики. 2019.
  5. Закиров В.М., Аметова А.А. Оценка качественных показателей процесса обслуживания на железнодорожном транспорте // The Scientific Heritage. 2021. No. 66-1. P. 36-39. DOI: 10.24412/9215-0365-2021-66-1-36-39.
  6. Закиров В., Абдуллаев Э. Bir kanalli sinxron tizimlarning oshkora yo'qotish va kutish usullarida xizmat ko'rsatish sifat samaradorligini aniqlash // Актуальные вопросы развития инновационно-информационных технологий на транспорте. 2022. Vol. 2. No. 2. P. 22-33.
  7. Корнышев Ю.Н., Фан Г.Л. Теория распределения информации. М.: Радио и связь, 1985. 250 с.
  8. Корнышев Ю.Н., Пшеничников А.П., Харкевич А.Д. Теория телетрафика: учебник для вузов. М.: Радио и связь, 1996. 272 с.
  9. Лившиц Б.С., Пшеничников А.П., Харкевич А.Д. Теория телетрафика. М.: Связь, 1979.
  10. Ложковский А.Г. Теория массового обслуживания в телекоммуникациях: учебник. Одесса: ОНАС им. А.С. Попова, 2012. 112 с.
  11. Gulyamov J. Warehouse accounting automated information system design // AIP Conference Proceedings. 2022. Vol. 2432. No. 1. P. 060027.
  12. Саакян Г.Р. Теория массового обслуживания. Шахты: ЮРГУЭС, 2006.
Information about the authors

Candidate of Technical Sciences, Tashkent State Transport University, Republic of Uzbekistan, Tashkent


Assistant, Tashkent State Transport University, Republic of Uzbekistan, Tashkent


Assistant, Tashkent State Transport University, Republic of Uzbekistan, Tashkent

