A Too-Good-to-be-True Prior to Reduce Shortcut Reliance
The use of autonomous vehicles (AVs) has increased significantly across the globe in recent years, driven by the growing familiarity with artificial intelligence approaches and their adoption in diverse application areas. Although AVs offer several benefits, such as congestion control and accident prevention, energy management and traffic flow prediction (TFP) remain challenging issues. This paper concentrates on the design of an intelligent energy management and TFP (IEMTFP) technique for AVs using a multi-objective reinforced whale optimization algorithm (RWOA) and deep learning (DL). The proposed model involves an energy management module that uses a fuzzy logic system to reach the specified engine torque with respect to different measures. For optimal tuning of the variables involved in the fuzzy logic membership functions (MFs), the RWOA is employed to further reduce energy consumption. In addition, the proposed model uses a DL-based bidirectional long short-term memory (Bi-LSTM) technique to perform TFP. To validate the efficacy of the IEMTFP technique, an extensive experimental evaluation is carried out. The results confirm the effectiveness of the IEMTFP model in terms of both energy management and TFP.
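The fuzzy energy-management module maps measured quantities to membership degrees through MFs whose shape parameters an optimizer such as the RWOA would tune. A minimal sketch, assuming a triangular MF (the abstract does not specify the MF shapes, input variables, or torque values, so all of them here are illustrative):

```python
def triangular_mf(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b.
    The breakpoints (a, b, c) are the kind of parameters an optimizer
    like the RWOA would tune to reduce energy consumption."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Membership degree of a demanded engine torque of 40 (arbitrary units)
# in a hypothetical "medium torque" fuzzy set peaking at 50:
degree = triangular_mf(40, 20, 50, 80)
```

Tuning then amounts to searching over the breakpoints (a, b, c) of each MF so that the resulting controller meets the torque demand with less energy.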
This book constitutes the refereed proceedings of the 11th International Conference on Intelligent Data Processing, IDP 2016, held in Barcelona, Spain, in October 2016.
The 11 revised full papers were carefully reviewed and selected from 52 submissions. The papers of this volume are organized in topical sections on machine learning theory with applications; intelligent data processing in life and social sciences; morphological and technological approaches to image analysis.
Objective: Brain-computer interfaces (BCIs) decode information from neural activity and send it to external devices. The use of deep learning approaches for decoding allows for automatic feature engineering within the specific decoding task. Physiologically plausible interpretation of the network parameters ensures the robustness of the learned decision rules and opens the exciting opportunity for automatic knowledge discovery. Approach: We describe a compact convolutional network-based architecture for adaptive decoding of electrocorticographic (ECoG) data into finger kinematics. We also propose a novel, theoretically justified approach to interpreting the spatial and temporal weights in architectures that combine adaptation in both space and time. The obtained spatial and frequency patterns characterizing the neuronal populations pivotal to the specific decoding task can then be interpreted by fitting appropriate spatial and dynamical models. Main results: We first tested our solution using realistic Monte Carlo simulations. Then, when applied to ECoG data from the Berlin BCI Competition IV dataset, our architecture performed comparably to the competition winners without requiring explicit feature engineering. Using the proposed approach to interpreting the network weights, we could unravel the spatial and spectral patterns of the neuronal processes underlying the successful decoding of finger kinematics from an ECoG dataset. Finally, we applied the entire pipeline to the analysis of a 32-channel EEG motor-imagery dataset and observed physiologically plausible, task-specific patterns. Significance: We described a compact and interpretable CNN architecture derived from basic principles and encompassing the knowledge in the field of neural electrophysiology. For the first time in the context of such multibranch architectures with factorized spatial and temporal processing, we presented theoretically justified weight interpretation rules. We verified our recipes using simulations and real data and demonstrated that the proposed solution offers both a good decoder and a tool for investigating the neural mechanisms of motor control.
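The weight-interpretation step can be illustrated with a rule common in the neuroimaging decoding literature, where a learned spatial filter w is converted into an interpretable activation pattern a = Cw using the sensor covariance C. This is a generic sketch of that principle, not necessarily the exact rule derived in the paper; the covariance and filter values are made up:

```python
def mat_vec(m, v):
    """Plain matrix-vector product for small dense matrices."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in m]

# Hypothetical 2x2 sensor covariance and learned spatial filter weights.
cov = [[2.0, 0.5],
       [0.5, 1.0]]
w = [1.0, -1.0]

# Activation pattern a = C @ w: unlike the raw filter, it reflects which
# sensors actually carry the decoded signal and is what one would fit
# spatial (e.g. dipole) models to.
pattern = mat_vec(cov, w)
```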
Determining the sentiment (tonality) of a text is a difficult task whose solution depends essentially on the context, the domain of study, and the amount of text data. Our analysis shows that authors in prior work do not jointly use the full range of possible transformations on the data and their combinations. The article explores a generalized approach that consists in sequentially passing through the stages of exploratory data analysis, obtaining a baseline solution, vectorization, preprocessing, hyperparameter tuning, and modeling. Experiments with iterative application of these stages show a positive increase in quality for classical machine learning algorithms and a significant increase for deep learning.
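A toy version of the first stages of such a pipeline, with hypothetical preprocessing, vectorization, and a lexicon-based baseline (the lexicon and the stage implementations are illustrative assumptions, not the article's):

```python
import re
from collections import Counter

# Hypothetical tiny polarity lexicon; a real baseline would use a much
# larger one or learn weights from labeled data.
POS, NEG = {"good", "great", "love"}, {"bad", "awful", "hate"}

def preprocess(text):
    """Stage 1: lowercase and tokenize, dropping punctuation."""
    return re.findall(r"[a-z']+", text.lower())

def vectorize(tokens):
    """Stage 2: bag-of-words counts."""
    return Counter(tokens)

def baseline_sentiment(text):
    """Stage 3: baseline solution from lexicon hit counts."""
    vec = vectorize(preprocess(text))
    score = sum(vec[w] for w in POS) - sum(vec[w] for w in NEG)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

label = baseline_sentiment("I love this film, it is really good!")
```

The later stages (hyperparameter tuning, modeling) would replace the lexicon score with a trained classifier over the same vectorized representation.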
The article analyzes the main causes and consequences of the interdisciplinary crisis of reproducibility and reliability of scientific research results that has unfolded in the social sciences in parallel with the "data revolution". This crisis is expressed not only in scientists' growing concern about the reliability of research results and about establishing practices that secure the transparency of empirical data and of the statistical software used for their analysis, but also in disputes over the limitations of the routine approach to significance testing and the feasibility of Bayesian alternatives. Some aspects of the relationship between theory and data-driven methods of searching for patterns in empirical data are briefly discussed in the context of describing a new approach to multimodel analysis aimed at evaluating model robustness and model uncertainty.
Brain-computer interfaces are a growing research field producing many implementations that find various uses in research, medical practice, and everyday life. Despite the popularity of implementations using non-invasive neuroimaging methods, a radical improvement in the channel bandwidth, and thus in decoding accuracy, is only possible with invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging modality that provides highly informative brain activity signals and calls for machine learning methods to efficiently decipher the complex spatio-temporal cortical representation of motor and cognitive function. Deep learning is the family of machine learning methods that learn representations of data with multiple levels of abstraction. We hypothesized that deep learning would reach higher accuracy in the task of decoding movement time courses than is possible with traditional signal processing approaches.
Recently, deep learning methods have been increasingly applied to spoken language technologies, including signal processing, language understanding and generation, dialogue management, as well as joint optimisation of these components (end-to-end learning). However, such methods still have limitations, and it is not yet clear that deep learning and joint optimisation are the key to the future.
Encompassing current deep learning trends and traditional knowledge-based methods, the main theme of SLT 2018 will be "Spoken Language Technology in the Era of Deep Learning: Challenges and Opportunities".
The book presents a remarkable collection of chapters covering a wide range of topics in the areas of intelligent systems and artificial intelligence, and their real-world applications. It gathers the proceedings of the Intelligent Systems Conference 2019, which attracted a total of 546 submissions from pioneering researchers, scientists, industrial engineers, and students from all around the world. These submissions underwent a double-blind peer-review process, after which 190 were selected for inclusion in these proceedings.
As intelligent systems continue to replace and sometimes outperform human intelligence in decision-making processes, they have made it possible to tackle a host of problems more effectively. This branching out of computational intelligence in several directions and use of intelligent systems in everyday applications have created the need for an international conference as a venue for reporting on the latest innovations and trends.
This book collects both theory- and application-based chapters on virtually all aspects of artificial intelligence. Presenting state-of-the-art intelligent methods and techniques for solving real-world problems, along with a vision for future research, it represents a unique and valuable asset.
The task of object tracking is vital in numerous applications such as autonomous driving, intelligent surveillance, and robotics. This task entails assigning a bounding box to an object in a video stream, given only the bounding box for that object in the first frame. In 2015, a new type of video object tracking (VOT) dataset was created that introduced rotated bounding boxes as an extension of axis-aligned ones. In this work, we introduce a novel end-to-end deep learning method based on the Transformer multi-head attention architecture. We also present a new type of loss function that takes into account both bounding box overlap and orientation. Our Deep Object Tracking model with Circular Loss Function (DOTCL) shows a considerable improvement in robustness over current state-of-the-art end-to-end deep learning models. It also outperforms state-of-the-art object tracking methods on the VOT2018 dataset in terms of the expected average overlap (EAO) metric.
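The exact circular loss is not specified in the abstract; one plausible form combining overlap and orientation uses a 1 - cos(delta theta) angular term, which is periodic and therefore does not penalize predictions that differ by a full turn. A sketch under these assumptions (the weighting alpha and the functional form are illustrative, not the paper's definition):

```python
import math

def circular_loss(iou, theta_pred, theta_gt, alpha=0.5):
    """Hypothetical tracking loss combining bounding-box overlap and
    orientation. The angular term 1 - cos(dtheta) is periodic in 2*pi,
    so rotations a full turn apart incur no extra penalty; the overlap
    term 1 - IoU vanishes for a perfectly overlapping box."""
    d_theta = theta_pred - theta_gt
    return (1.0 - iou) + alpha * (1.0 - math.cos(d_theta))
```

For example, a perfect overlap with the angle off by exactly one full turn yields zero loss, which an unwrapped squared-angle penalty would not.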
A model is considered for organizing cargo transportation between two node stations connected by a railway line that contains a certain number of intermediate stations. The cargo moves in one direction. Such a situation may occur, for example, if one of the node stations is located in a region that produces raw material for a manufacturing industry located in the region of the other node station. Freight traffic is organized by means of a number of technologies. These technologies determine the rules for taking on cargo at the initial node station, the rules of interaction between neighboring stations, and the rule for distributing cargo to the final node stations. The process of cargo transportation is governed by a given control rule. For such a model, one must determine the possible modes of cargo transportation and describe their properties. The model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of solutions satisfying the nonlocal linear restrictions is extremely narrow, which leads to the need for a "correct" extension of solutions of the system of differential equations to a class of quasi-solutions whose distinctive feature is gaps at a countable number of points. Using the fourth-order Runge–Kutta method, we were able to build these quasi-solutions numerically and determine their rate of growth. We note that the main technical difficulty consisted in obtaining quasi-solutions that satisfy the nonlocal linear restrictions. Furthermore, we investigated the dependence of the quasi-solutions, and in particular of the sizes of the gaps (jumps) of the solutions, on a number of model parameters characterizing the control rule, the cargo transportation technologies, and the intensity of cargo supply at a node station.
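The quasi-solutions were built with the classical fourth-order Runge–Kutta scheme. A minimal sketch of one RK4 step applied to a scalar ODE (the paper's actual system with nonlocal linear restrictions is not reproduced here; the test equation y' = y is only a stand-in):

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Integrate y' = y from y(0) = 1 to t = 1; the exact answer is e.
y, h = 1.0, 0.01
for i in range(100):
    y = rk4_step(lambda t, u: u, i * h, y, h)
```

The global error of RK4 is O(h^4), so with h = 0.01 the computed value agrees with e to roughly ten digits; the paper's added difficulty lies in enforcing the nonlocal restrictions on top of such stepping.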
Event logs collected by modern information and technical systems usually contain enough data for automated discovery of process models. A variety of algorithms has been developed for process model discovery, conformance checking, log-to-model alignment, comparison of process models, etc.; nevertheless, quick analysis of ad hoc selected parts of a log still lacks a full-fledged implementation. This paper describes a ROLAP-based method of multidimensional event log storage for process mining. The result of the log analysis is visualized as a directed graph representing the union of all possible event sequences, ranked by their occurrence probability. Our implementation allows the analyst to discover process models for sublogs defined by an ad hoc selection of criteria and a value of occurrence probability.
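The ranking of event sequences by occurrence probability can be sketched as a directly-follows graph whose edges are weighted by the fraction of traces containing each transition. This is a simplified stand-in for the described implementation, with a hypothetical log:

```python
from collections import Counter

# Hypothetical event log: each trace is a sequence of activity names.
log = [("a", "b", "c"), ("a", "b", "c"), ("a", "c"), ("a", "b", "d")]

def directly_follows(log):
    """Directed edges between consecutive events, weighted by the
    fraction of traces in which the transition occurs (a simple
    occurrence-probability estimate, not the paper's exact measure)."""
    edges = Counter()
    for trace in log:
        for src, dst in zip(trace, trace[1:]):
            edges[(src, dst)] += 1
    n = len(log)
    return {edge: count / n for edge, count in edges.items()}

graph = directly_follows(log)
```

Filtering this dictionary by a probability threshold yields exactly the kind of sublog-specific model an analyst would request from the ROLAP store.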
Existing approaches suggest that IT strategy should be a reflection of business strategy. In practice, however, organisations often do not follow their business strategy even when it is formally declared. Under these conditions, IT strategy can be viewed not as a plan but as an organisational shared view of the role of information systems. This approach generally reflects only a top-down perspective on IT strategy, so it can be supplemented by a strategic behaviour pattern (i.e., a more or less standard response to changes, formed as a result of previous experience) to implement a bottom-up approach. Two components that can help establish an effective reaction to new IT initiatives are proposed here: a model of IT-related decision making, and an efficiency measurement metric to estimate the maturity of business processes and of the corresponding IT. The use of the proposed tools is demonstrated in practical cases.