Development of Models for Estimating Content Access Latency with Caching in NDN Networks
Introduction: Caching reduces content access latency by bringing content closer to users and offloading the cloud layer of the network. Edge servers are used for this purpose, but the standard TCP/IP stack does not fully exploit caching capabilities. It is therefore worth considering a cache model based on the Named Data Networking (NDN) stack, which is better suited to this function.

Purpose: To develop models for estimating content access latency that account for the specifics of caching at an intermediate NDN node.

Results: We propose a simulation model implementing the Least Recently Used (LRU) caching algorithm based on discrete event simulation, as well as a mathematical model of content delivery delays. These models provide an adequate estimate of latency on each segment from the user to the cloud and for the system as a whole, and allow key cache performance characteristics to be calculated: the cache hit probability and the load on external and internal channels. Edge server-based caching reduced content access latency to 65 ms while cutting operator costs for accessing the remote cloud by up to 60%.

Practical relevance: The results of this study can be used by internet service providers and telecom operators to reduce backbone bandwidth congestion and improve subscriber quality of service when downloading popular content.

Discussion: The study used a scenario with a large catalog and a Poisson flow of requests to the edge cache, which corresponds to a regime in which the probability of request aggregation in the Pending Interest Table is low. Studying the dynamic effect of aggregation on content access latency requires a more complex traffic model (e.g., an ON-OFF or shot-noise model), which is the subject of further research.