Blockchain technology is currently penetrating various areas of the modern Information and Communications Technology community. Most of the devices involved in blockchain-related processes are specially designed for the mining aspect only, i.e., solving the computational puzzle. At the same time, wearable and mobile devices may also become part of eCommerce blockchain operation, especially while charging. This paper considers the possibility of using a large number of constrained devices to support the operation of the blockchain with a low impact on battery consumption. The utilization of such devices is expected to improve system efficiency as well as to attract a more substantial number of users. The paper contributes to the body of knowledge with a survey of the main blockchain applications for smartphones along with existing mobile blockchain projects. It also proposes a novel consensus protocol based on a combination of the Proof-of-Work (PoW), Proof-of-Activity (PoA), and Proof-of-Stake (PoS) algorithms for efficient, on-the-fly utilization on resource-constrained devices. The system was deployed in a worldwide testnet with more than two thousand smartphones and compared with other projects from the perspective of user-experience metrics. The results prove that running a PoA system on a smartphone does not significantly affect the battery lifetime, while existing PoW-based methods have a tremendous negative impact. Finally, the main open challenges and future investigation directions are outlined.
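One way battery-friendly participation of this kind can work is committee-style validator selection: a device hashes the latest block together with its own identifier and does work only if it lands inside a small committee, so most devices stay idle in a given round. The sketch below is an illustrative assumption, not the protocol proposed in the paper; all names (`is_selected_validator`, `committee_size`) are hypothetical.

```python
import hashlib

def is_selected_validator(block_hash: str, device_id: str,
                          n_validators: int, committee_size: int) -> bool:
    # Pseudo-randomly derive a slot for this device from the block hash.
    # The device participates in this round only if its slot falls inside
    # the committee, so most devices stay idle and preserve battery.
    digest = hashlib.sha256((block_hash + device_id).encode()).hexdigest()
    slot = int(digest, 16) % n_validators
    return slot < committee_size
```

The check is deterministic per (block, device) pair, so validators can verify each other's eligibility without extra messages.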
Intelligent Transportation Systems (ITS) will become an essential part of every city in the near future. They should support various vehicle-to-everything (V2X) applications that improve road safety or even enable autonomous driving. Recently, the European Telecommunications Standards Institute (ETSI) introduced the multi-access (mobile) edge computing concept as a promising solution to satisfy the V2X delay and computational requirements. Based on this concept, the tasks generated by V2X applications can be offloaded to servers at the edge of the radio access network (RAN). There is a need for a task offloading algorithm that minimizes the ITS operator expenses connected with server deployment and maintenance while satisfying the requirements of the V2X applications. Most of the existing papers in the literature pay little attention to queuing delays at the servers. In this paper, the queuing delays are analyzed by considering a general-type distribution of the task computational time. A non-linear optimization problem is formulated to minimize the ITS operator expenses subject to delay and computational resource constraints. Flexibility is further improved by requiring that the delay constraint be satisfied with a given probability. To solve this problem, a method for linearizing the problem is proposed, and consequently, an algorithm based on Integer Linear Programming (ILP) is designed. A heuristic algorithm called the Cost-effective Heuristic Algorithm for Task offloading (CHAT) is also introduced; it provides close-to-optimal results and has much lower computational complexity than the ILP algorithm. The efficiency of the CHAT algorithm is studied in several scenarios in terms of the computational time, delays, and the total server energy consumption as the cost function.
The results show that the CHAT algorithm satisfies the requirements of the V2X applications in all the considered scenarios and reduces the ITS operator expenses more than twofold compared with other algorithms proposed in the literature.
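The general shape of such a cost-aware heuristic can be illustrated with a greedy baseline (this is a simplified sketch under an M/M/1 delay assumption, not the CHAT algorithm itself; `greedy_offload` and its parameters are hypothetical names):

```python
def greedy_offload(tasks, servers, delay_bound):
    """Assign each task flow to the cheapest server that still meets the delay bound.

    tasks:   list of arrival rates (tasks/s) to place, one per flow
    servers: list of dicts with 'mu' (service rate, tasks/s) and 'cost'
    Feasibility test: M/M/1 mean sojourn time 1/(mu - load) <= delay_bound.
    """
    order = sorted(range(len(servers)), key=lambda i: servers[i]["cost"])
    load = [0.0] * len(servers)
    assignment = []
    for lam in tasks:
        placed = None
        for i in order:
            mu = servers[i]["mu"]
            new_load = load[i] + lam
            if new_load < mu and 1.0 / (mu - new_load) <= delay_bound:
                placed = i
                break
        if placed is None:
            return None  # infeasible with the current server set
        load[placed] += lam
        assignment.append(placed)
    return assignment
```

The paper's formulation is richer (general service-time distribution, probabilistic delay constraints), but the trade-off is the same: cheaper servers are filled first as long as the queuing delay stays within bound.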
Licensed-assisted access (LAA) enables the coexistence of Long-Term Evolution (LTE) and Wi-Fi in unlicensed bands, while potentially offering improved coverage and data rates. However, coexisting with conventional random-access protocols that employ listen-before-talk (LBT) makes it difficult to meet the LTE performance requirements, since delay and throughput guarantees must be delivered. In this paper, we propose a novel channel sharing mechanism for the LAA system that simultaneously provides fairness of resource allocation across the competing LTE and Wi-Fi sessions and satisfies the quality-of-service guarantees of the LTE sessions in terms of their upper delay bound and throughput. Our proposal is based on two key mechanisms: 1) LAA connection admission control for the LTE sessions and 2) adaptive duty cycle resource division. The only external information necessary for the intended operation is the current number of active Wi-Fi sessions, inferred by monitoring the shared channel. In the proposed scheme, the LAA-enabled LTE base station fully controls the shared environment by dynamically adjusting the time allocations for both Wi-Fi and LTE, while admitting only those LTE connections that would not interfere with Wi-Fi more than another Wi-Fi access point operating on the same channel would. To characterize the key performance trade-offs pertaining to the proposed operation, we develop a new analytical model. We then comprehensively investigate the performance of the developed channel sharing mechanism, confirming that it achieves a high degree of fairness between the LTE and Wi-Fi connections and provides guarantees in terms of the upper delay bound and throughput for the admitted LTE sessions. We also demonstrate that our scheme outperforms a typical LBT-based LAA implementation.
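The fairness criterion above ("no worse than one more Wi-Fi AP") suggests a simple duty-cycle split; the function below is a minimal sketch of that idea under this assumption, not the paper's actual adaptive mechanism:

```python
def duty_cycle_split(n_wifi_sessions: int, cycle_ms: float) -> tuple:
    # Treat the LAA cell as one extra Wi-Fi contender: it may claim the
    # same share of airtime one additional AP on the channel would get.
    lte_share = 1.0 / (n_wifi_sessions + 1)
    lte_ms = cycle_ms * lte_share
    return lte_ms, cycle_ms - lte_ms  # (LTE airtime, Wi-Fi airtime)
```

In the proposed scheme, the number of active Wi-Fi sessions driving this split is re-estimated from channel monitoring, so the allocation tracks the actual load.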
The development of information technology has led to a significant increase in the share of multimedia traffic in data networks. This has made it necessary to solve the following information security tasks in relation to multimedia data: protection against leakage of confidential information, as well as identification of the source of a leak; ensuring the impossibility of unauthorized changes; and copyright protection for digital objects. To solve such problems, methods of steganography and watermarking are designed that embed hidden information sequences in digital objects for various purposes. This paper provides an overview of promising research in this area. First, we provide basic information about this field of research and consider the main applications of its methods. Next, we review works demonstrating current trends in the development of methods and algorithms for data hiding in digital images. This review is not exhaustive; it focuses on contemporary works illustrating current research directions in the field of information embedding in digital images. This is the main feature of the review, distinguishing it from previously published surveys. The paper concludes with an analysis of identified problems in the field of digital steganography and digital watermarking.
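The most elementary form of image data hiding discussed in this literature is least-significant-bit (LSB) embedding; a minimal sketch on a flat list of pixel intensities (function names are illustrative):

```python
def embed_lsb(pixels, bits):
    # Replace the least significant bit of each pixel with a payload bit.
    # Changing the LSB alters intensity by at most 1, which is
    # visually imperceptible in typical 8-bit images.
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_lsb(pixels, n):
    # Read back the first n payload bits from the pixel LSBs.
    return [p & 1 for p in pixels[:n]]
```

Contemporary methods surveyed in the review go far beyond this (adaptive, transform-domain, and learning-based embedding), but LSB replacement is the common baseline they are measured against.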
Today, direct contacts between users are facilitated by the network-assisted device-to-device (D2D) technology, which employs the omnipresent cellular infrastructure for control purposes, enabling advanced mobile social applications. Together with its undisputed benefits, this novel type of connectivity creates new challenges in constructing meaningful proximity-based services with high levels of user adoption. These challenges call for a comprehensive investigation of user sociality and trust factors jointly with the appropriate technology enablers for secure and trusted D2D communications, especially in situations where cellular control is not available or reliable at all times. In this paper, we study the crucial aspects of social trust associations over proximity-based direct communication technology, with a primary focus on developing a comprehensive proof-of-concept implementation. Our recently developed prototype delivers rich functionality for dynamic management of security functions in proximate devices whenever a new device joins a secure group of users or an existing one leaves it. To characterize the behavior of our implemented demonstrator, we evaluate its practical performance in terms of computation and transmission delays from the user perspective. In addition, we outline a research roadmap that leverages our technology-related findings to construct a holistic user perspective behind dynamic, social-aware, and trusted D2D applications and services.
IEEE 802.11ah, a new amendment to the Wi-Fi standard, adapts Wi-Fi networks to the emerging Internet of Things (IoT). A key component of .11ah is the Restricted Access Window (RAW), a new channel access mechanism that reduces contention even when thousands of IoT devices operate in the same area by assigning them different channel times. This paper shows that existing studies misunderstand the RAW behavior, oversimplify its modeling, and thereby overestimate the real system throughput by several times, especially for short durations of the reserved RAW slots. The core contribution of this paper is a new mathematical model based on a completely different approach, which yields more accurate results and thereby enables better IoT system dimensioning. The developed model is suitable for many scenarios typical of IoT. It allows finding RAW parameters that optimize system performance in terms of throughput, power consumption, and packet loss ratio. The proposed solution can be used for various traffic patterns: when each device transmits a single packet, a batch of packets of random size, or has full-buffer traffic.
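To make the contention-splitting idea concrete: 802.11ah maps each station to a RAW slot from its association ID, roughly as i_slot = (AID + N_offset) mod N_RAW, which spreads stations evenly over the slots. A one-line sketch of this mapping (parameter names follow the standard's notation, simplified):

```python
def raw_slot(aid: int, n_slots: int, offset: int = 0) -> int:
    # Map a station's association ID (AID) to one of n_slots RAW slots.
    # Stations may contend for the channel only inside their own slot,
    # so each slot sees roughly 1/n_slots of the contending devices.
    return (aid + offset) % n_slots
```

The modeling question the paper addresses is what happens inside each slot, where the reduced but still random contention determines the achievable throughput.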
The advancements in multi-core central processing units have attracted new designs, ranging from mechanisms for packing a higher number of transistors into a small space, to new techniques for communications (e.g., wireless networks on chips), to new methodologies for cooling the chip. The latter two design aspects are the focus of this paper, where a microfluidic system is utilized to perform both functions. The miniaturization of microfluidic channels makes it attractive to embed them into chips to transport fluids that remove heat from the processor cores. The cooling role of on-chip microfluidic channels is extended by integrating a communication feature: information is conveyed by transporting fluid through a channel and injecting air droplets that encode the data. Protocols for microfluidic communications are applied, including physical layer functionalities and medium access protocols. The protocol design takes into consideration various properties of microfluidics. Based on the proposed system, the trade-offs between the data rate and its impact on the amount of heat that can be removed from the processor are evaluated. This system enables new forms of condensed processor design for the future, in which a microfluidic channel system with multiple integrated functionalities is embedded into multi-core processors.
Mobile social networks (MSNs) are networks of individuals with similar interests connected to each other through their mobile devices. Recently, MSNs have been proliferating rapidly, supported by emerging wireless technologies that enable more efficient communication and better networking performance across the key parameters, such as lower delay, higher data rate, and better coverage. At the same time, most MSN users do not fully recognize the importance of security on their handheld mobile devices. Due to this fact, multiple attacks aimed at capturing personal information and sensitive user data are becoming a growing concern, fueled by the avalanche of new MSN applications and services. Therefore, the goal of this work is to understand whether contemporary user equipment is susceptible to compromising its sensitive information to attackers. As an example, various information security algorithms implemented in modern smartphones are tested in an attempt to extract the said private data from traces registered with inexpensive contemporary audio cards. Our results indicate that the sampling frequency, which constitutes the strongest limitation of off-the-shelf side-channel attack equipment, only delivers low-informative traces. However, the chances of recovering sensitive data stored within a mobile device may increase significantly when utilizing more efficient analytical techniques and more complex attack equipment. Finally, we elaborate on the possible utilization of neural networks to improve the corresponding encrypted-data extraction process, while the latter part of this paper outlines solutions and practical recommendations to protect against malicious side-channel attacks and keep personal user information protected.
Energy efficiency is a significant challenge for modern wireless networking devices. It is crucial for Internet of Things devices, required for battery-supplied user devices such as smartphones, and advisable for high-performance devices such as wireless VR headsets. This article examines the ability of modern Wi-Fi devices to achieve extremely low power consumption when they rarely send and receive data. Two recently developed mechanisms, namely Target Wake Time (TWT) and Wake-Up Radio (WUR), are studied. The first allows stations to schedule frame exchanges in advance, while the second introduces a low-power radio for control information exchange. Although TWT and WUR differ significantly, they both suffer from the clock drift effect, which significantly degrades their performance in the case of rare traffic. The paper describes these mechanisms, focusing on their revolutionary features, and presents mathematical models to evaluate the impact of this effect on TWT and WUR efficiency in terms of energy and channel time consumption. The paper also proposes and thoroughly examines various approaches to the joint and separate usage of TWT and WUR in Wi-Fi networks.
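Why clock drift hurts rare traffic in particular can be seen from a back-of-the-envelope calculation: the worst-case mutual misalignment between the AP and station clocks grows linearly with the time since the last synchronization, so a station with a long wake interval must wake early (and listen longer) by that margin. A minimal sketch of this estimate (the simple linear model is an assumption; the paper's analysis is more detailed):

```python
def wake_margin_us(interval_s: float, drift_ppm: float) -> float:
    # A clock accurate to X ppm drifts up to X microseconds per second.
    # The factor 2 covers the worst case of the AP and station clocks
    # drifting in opposite directions since the last synchronization.
    return 2 * drift_ppm * interval_s
```

For example, with 20 ppm clocks and a 10 s wake interval the guard margin is already 400 µs, comparable to an entire short frame exchange, which is exactly the overhead the TWT/WUR models in the paper quantify.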
Recently standardized millimeter-wave (mmWave) band 3GPP New Radio systems are expected to bring extraordinary rates to the air interface, efficiently providing commercial-grade enhanced mobile broadband services in hotspot areas. One of the challenges of such systems is efficient offloading of data from access points (APs) to the network infrastructure. This task is of special importance for APs installed in remote areas with no transport network available. In this paper, we assess the packet-level performance of mmWave technology for cost-efficient backhauling of remote 3GPP NR AP connectivity “islands”. Using a queuing system with arrival processes of the same priority competing for transmission resources, we assess the aggregated and per-AP packet loss probability as a function of environmental conditions, mmWave system specifics, and generated traffic volume. We show that the autocorrelation in the aggregated traffic has a significant impact on the service characteristics of the mmWave backhaul and needs to be compensated for by increasing either the emitted power or the number of antenna array elements. Autocorrelation in the per-AP traffic and in the background traffic from other APs also negatively affects the per-AP packet loss probability. However, this effect is of a different magnitude and heavily depends on the fraction of per-AP traffic in the aggregated traffic stream. The developed model can be used to parameterize mmWave backhaul links as a function of the propagation environment, system design, and traffic conditions.
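As a point of reference for the loss probabilities studied here, the memoryless (uncorrelated Poisson) baseline has a closed form: the blocking probability of an M/M/1/K queue. The autocorrelated traffic in the paper produces losses above this baseline, which is why extra link budget is needed to compensate.

```python
def mm1k_loss(lam: float, mu: float, k: int) -> float:
    # Blocking probability of an M/M/1/K queue with arrival rate lam,
    # service rate mu, and K waiting+service positions:
    #   P_K = rho^K * (1 - rho) / (1 - rho^(K+1)),  rho = lam / mu,
    # with the limit 1 / (K + 1) when rho == 1.
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (k + 1)
    return (rho ** k) * (1 - rho) / (1 - rho ** (k + 1))
```

For instance, at 50% load (lam = 1, mu = 2) with a single buffer position the formula gives a loss probability of 1/3; correlated arrivals at the same mean load perform strictly worse.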
To improve the performance of Wi-Fi networks in dense deployments, the recent IEEE 802.11ax standard introduces a palette of features improving spatial reuse. A key property of these features is dynamic changes in transmit power and the interference from the neighboring devices. The paper explains the basic operation of spatial reuse features and shows that their efficiency significantly depends on how the stations select appropriate modulation and coding schemes taking into account the variable transmission conditions. Nevertheless, the majority of existing studies in the literature leave this effect out of consideration, assuming an ideal rate control algorithm and obtaining wrong results. The paper fills this gap and presents a novel statistics-based rate control algorithm that selects modulation and coding schemes taking into account the effects induced by the recent spatial reuse features. With extensive simulation, it is shown that the algorithm significantly outperforms the existing rate control algorithms, providing up to 50% higher goodput and three times lower latencies.
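The statistics-based idea can be sketched as follows: keep per-MCS counters of attempts and successes under the current interference conditions and pick the MCS with the highest estimated goodput. This is an illustrative baseline, not the algorithm proposed in the paper; the smoothing prior that keeps untried MCSs probeable is also an assumption.

```python
def select_mcs(stats):
    # stats: list of (phy_rate, attempts, successes) tuples, one per MCS.
    # Estimate success probability with Laplace smoothing so that MCSs
    # with no attempts get an optimistic prior and are eventually probed.
    best, best_goodput = 0, -1.0
    for i, (rate, attempts, successes) in enumerate(stats):
        p_success = (successes + 1) / (attempts + 2)
        goodput = rate * p_success
        if goodput > best_goodput:
            best, best_goodput = i, goodput
    return best  # index of the chosen MCS
```

The key point the paper makes is that under spatial reuse the transmit power and interference change dynamically, so such statistics must be conditioned on the spatial-reuse operating mode rather than pooled.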
To describe the dynamics of changes in the random loads of information flows, we examine a stochastic model based on a doubly stochastic Poisson process that governs the change points of the random loads. A special case of a discrete distribution for the random intensity endows the corresponding doubly stochastic Poisson subordinator for a sequence of random loads with a covariance that exactly coincides with the covariance of the fractional Ornstein-Uhlenbeck process. Applying the Lamperti transform, we obtain a self-similar random process with continuous time, wide-sense stationary increments, and one-dimensional distributions that scale the distribution of a term of the initial subordinated sequence of random loads. The Central Limit Theorem for vectors allows us to obtain in the limit, in the sense of convergence of finite-dimensional distributions, fractional Brownian motion and the fractional Ornstein-Uhlenbeck process.
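For reference, a textbook instance of the Lamperti correspondence invoked above (a standard illustration, not the paper's specific construction): if $B_H$ is a fractional Brownian motion with Hurst index $H$, then

$$Y(t) = e^{-Ht} B_H(e^{t})$$

is stationary, and for $\tau = t - s \ge 0$ its covariance is

$$\operatorname{Cov}\bigl(Y(t), Y(s)\bigr) = \cosh(H\tau) - \tfrac{1}{2}\, e^{-H\tau} \bigl(e^{\tau} - 1\bigr)^{2H}.$$

Conversely, $X(t) = t^{H} Y(\ln t)$ recovers an $H$-self-similar process, which is the direction of the transform used in the abstract.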
For a family of optimal two-dimensional circulant networks with an analytical description, two new improved versions of the shortest path search algorithm with a constant complexity estimate are obtained. A simple proof of the formulas used in the shortest path search algorithm, based on a geometric model of circulant graphs, is given. Pair exchange algorithms are presented, and their estimates are given for networks-on-chip (NoCs) with a topology in the form of the considered graphs. The new versions improve the previously proposed shortest path search algorithm for optimal generalized Petersen graphs with an analytical description. The proposed algorithm is a promising solution for use in NoCs, which was confirmed by an experimental study synthesizing NoC communication subsystems and comparing the consumed hardware resources with those of other previously developed routing algorithms.
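To fix terminology: in the circulant graph C(n; s1, s2), node i is adjacent to (i ± s1) mod n and (i ± s2) mod n. The O(n) breadth-first search below is a reference baseline only (the function name and signature are illustrative); the point of the analytical formulas in the paper is to replace exactly this search with O(1) per-hop routing decisions suitable for NoC hardware.

```python
from collections import deque

def circulant_shortest_path(n: int, steps: tuple, src: int, dst: int) -> int:
    # BFS over C(n; s1, s2, ...): each node i is adjacent to
    # (i + s) mod n and (i - s) mod n for every generator s in steps.
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        if v == dst:
            return dist[v]
        for s in steps:
            for u in ((v + s) % n, (v - s) % n):
                if u not in dist:
                    dist[u] = dist[v] + 1
                    q.append(u)
    return -1  # unreachable (only if the generators do not connect the graph)
```

For example, in C(10; 1, 3) the distance from node 0 to node 6 is 2 (two hops of length 3), which a formula-based router must reproduce without any search.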
Massive multi-core processing has recently attracted significant attention from the research community as one of the feasible solutions to satisfy constantly growing performance demands. However, this evolution path is nowadays hampered by the complexity and limited scalability of the bus-oriented intra-chip communication infrastructure. The latest advances in terahertz (THz) band wireless communications, providing extraordinary capacity at the air interface, offer a promising alternative to conventional wired solutions for intra-chip communications. Still, to invest resources in this field, manufacturers need a clear vision of the performance and scalability gains of wireless intra-chip communications. Using a comprehensive hybrid methodology combining THz ray-tracing, direct CPU traffic measurements, and cycle-accurate CPU simulations, we perform a scalability study of an x86 CPU design that is backward compatible with the current x86 architecture. We show that, by preserving the current cache coherence protocols mapped onto a star wireless communication topology that allows for tight centralized medium access control, a few hundred active cores can be efficiently supported without any notable changes in the x86 CPU logic. This important outcome allows for incremental development, where a THz-assisted x86 CPU with a few dozen cores can serve as an intermediate solution, while a truly massive multi-core system with broadcast-enabled medium access and enhanced cache coherence protocols can be the ultimate goal.
Sentiment analysis has become a powerful tool for processing and analysing expressed opinions on a large scale. While the application of sentiment analysis to English-language content has been widely examined, its applications to the Russian language remain less well studied. In this survey, we comprehensively reviewed the applications of sentiment analysis to Russian-language content and identified current challenges and future research directions. In contrast with previous surveys, we targeted the applications of sentiment analysis rather than existing sentiment analysis approaches and their classification quality. We synthesised and systematically characterised existing applied sentiment analysis studies by their source of analysed data, purpose, employed sentiment analysis approach, and primary outcomes and limitations. We presented a research agenda to improve the quality of applied sentiment analysis studies and to expand the existing research base in new directions. Additionally, to help scholars select an appropriate training dataset, we performed an additional literature review and identified publicly available sentiment datasets of Russian-language texts.