Investigation and development of an intelligent voice assistant for the Internet of Things using machine learning
Artificial intelligence technologies are seeing increasingly active use in everyday life, a trend driven by the emergence and wide dissemination of the Internet of Things (IoT). Autonomous devices are becoming smarter in the way they interact both with humans and with one another. These new capabilities have led to the creation of various systems for integrating smart things into Social Networks of the Internet of Things. One of the relevant trends in artificial intelligence is the recognition of natural human language. New insights in this area can lead to new means of natural human-machine interaction, in which the machine learns to understand human language, adapting to it and interacting in it. One such tool is the voice assistant, which can be integrated into many other intelligent systems. This paper describes the principles by which voice assistants function and outlines their main shortcomings and limitations. A method for creating a local voice assistant without the use of cloud services is described, which can significantly expand the applicability of such devices in the future.
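The command-routing stage of such a local assistant can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: speech-to-text is taken to be handled by some offline recognizer (e.g. Vosk or PocketSphinx), and the `LocalAssistant` class and keyword-overlap matching below are hypothetical illustrations of how intents could be dispatched without any cloud service.

```python
# Hypothetical sketch of the intent-routing stage of a local voice assistant.
# Speech-to-text is assumed to be done elsewhere by an offline recognizer;
# here we only route the transcribed text to registered handlers.

from typing import Callable, Dict, Optional


class LocalAssistant:
    """Maps keyword sets to handler functions; no cloud services involved."""

    def __init__(self) -> None:
        self._intents: Dict[frozenset, Callable[[], str]] = {}

    def register(self, keywords: set, handler: Callable[[], str]) -> None:
        self._intents[frozenset(w.lower() for w in keywords)] = handler

    def handle(self, transcript: str) -> Optional[str]:
        words = set(transcript.lower().split())
        # Pick the intent whose keywords overlap the transcript the most.
        best, best_overlap = None, 0
        for keywords, handler in self._intents.items():
            overlap = len(words & keywords)
            if overlap > best_overlap:
                best, best_overlap = handler, overlap
        return best() if best else None


assistant = LocalAssistant()
assistant.register({"light", "on"}, lambda: "turning the light on")
assistant.register({"temperature"}, lambda: "it is 21 degrees inside")

print(assistant.handle("please turn the light on"))
print(assistant.handle("what is the temperature"))
```

Because all matching runs locally, the assistant keeps working without network access; the trade-off is that keyword overlap is far cruder than a trained intent classifier.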
In Experimental Economics, laboratory and field experiments are conducted on human subjects in order to improve theoretical knowledge about human behavior in interactions. Although paying different amounts of money constrains the preferences of the subjects in experiments, the exclusive application of analytical game theory does not suffice to explain the recorded data. This necessitates the development and evaluation of more sophisticated models. In some experiments, human subjects interact with automated agents, and these agents are used to simulate human interactions. The more data is used for the evaluation, the more statistical significance can be achieved. Since huge amounts of behavioral data must be scanned for regularities, and automated agents are required to simulate and to intervene in human interactions, Machine Learning is the tool of choice for research in Experimental Economics. Moreover, modern economics extensively involves network structures, which can be modeled as graphs or more complicated relational structures. This volume contains the papers presented at the inaugural International Workshop on Experimental Economics and Machine Learning (EEML 2012) held on May 9, 2012 at the Katholieke Universiteit Leuven, Belgium. This year the committee decided to accept 8 full papers for publication in the proceedings and two abstracts for presentation at the conference. Each submission was reviewed by three program committee members on average. R. Tagiew proposes a new method for mining determinism in human strategic behavior. N. Buzun et al. present a comparison of methods and measures for overlapping community detection. A. Fishkov et al. discuss a new click model for relevance prediction in Web search. A. Drutsa et al. applied novel data visualisation techniques to socio-semantic network data. Gilabert et al. made an experimental study on the relationship between trust and budgetary slack. O. Barinova et al.
proposed using online random forest for interactive image segmentation. A. Bezzubtseva et al. built a new typology of collaboration platform users. V. Zaharchuk et al. proposed a new recommender system for interactive radio network services. D. Ignatov et al. designed a prototype system for collaborative platform data analysis.
This paper is an overview of current issues and tendencies in computational linguistics. The overview is based on the materials of COLING’2012, a conference on computational linguistics. Modern approaches to the traditional NLP domains, such as POS tagging, syntactic parsing, and machine translation, are discussed. Highlights of automated information extraction, such as fact extraction and opinion mining, are also in focus. The main tendency of modern technologies in computational linguistics is to incorporate higher levels of linguistic analysis (discourse analysis, cognitive modeling) into the models and to combine machine learning technologies with algorithmic methods based on deep expert linguistic knowledge.
The paper gives a brief introduction to multiple classifier systems and describes a particular algorithm which improves classification accuracy by recommending a classifier for each object. This recommendation rests on the hypothesis that a classifier is likely to predict the label of an object correctly if it has correctly classified the object's neighbors. The process of assigning a classifier to each object employs the apparatus of Formal Concept Analysis. We explain the principle of the algorithm on a toy example and describe experiments with real-world datasets.
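The underlying hypothesis can be sketched without the Formal Concept Analysis machinery the paper actually uses. The toy below is an assumption-based illustration only: for each test object it scores every available classifier by how many of the object's nearest training neighbors that classifier labels correctly, and recommends the best-scoring one.

```python
# Toy sketch (NOT the paper's FCA-based method) of the core hypothesis:
# recommend, for each object, the classifier that correctly labeled the
# object's nearest neighbors in the training set.

import math
from typing import Callable, List, Sequence, Tuple

Point = Sequence[float]


def recommend_classifier(
    x: Point,
    train: List[Tuple[Point, int]],
    classifiers: List[Callable[[Point], int]],
    k: int = 3,
) -> Callable[[Point], int]:
    # k nearest training objects by Euclidean distance
    neighbors = sorted(train, key=lambda pair: math.dist(x, pair[0]))[:k]

    # Score each classifier by how many neighbors it labels correctly.
    def score(clf: Callable[[Point], int]) -> int:
        return sum(clf(xi) == yi for xi, yi in neighbors)

    return max(classifiers, key=score)


# Two crude classifiers: one thresholds the first feature, one the second.
clf_a = lambda p: int(p[0] > 0.5)
clf_b = lambda p: int(p[1] > 0.5)
train = [((0.9, 0.1), 1), ((0.8, 0.2), 1), ((0.1, 0.9), 0), ((0.2, 0.8), 0)]

best = recommend_classifier((0.85, 0.15), train, [clf_a, clf_b], k=2)
print(best((0.85, 0.15)))  # prediction of the recommended classifier
```

The per-object recommendation means different regions of the feature space can be served by different classifiers, which is the sense in which such a multiple classifier system can beat any single member.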
The paper deals with the problems of creating and tuning a system for automated anaphora resolution in Russian. Such a system, combining rule-based and machine learning approaches, is introduced; it achieves an F-measure between 0.51 and 0.59. FreeLing serves as the underlying morphological layer, and an account of its quality and its influence on the anaphora resolution workflow is given. The anaphora resolution system itself is available for download and use, and comes with an online demo.
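The rule-based side of such a system can be sketched in a few lines. This is a hedged toy, not the described system: morphological attributes are assumed to be supplied by some analyzer (the role FreeLing plays in the paper), and the resolver simply takes the most recent preceding candidate that agrees with the pronoun in gender and number.

```python
# Hypothetical sketch of a rule-based anaphora resolver: agreement filter
# (gender, number) followed by recency. Morphological tags are assumed to
# come from an external analyzer; here they are given by hand.

from typing import List, NamedTuple, Optional


class Mention(NamedTuple):
    text: str
    gender: str    # "m" / "f" / "n"
    number: str    # "sg" / "pl"
    position: int  # token index in the text


def resolve(pronoun: Mention, candidates: List[Mention]) -> Optional[Mention]:
    # Keep only preceding candidates that agree in gender and number,
    # then let the closest (most recent) one win.
    matching = [c for c in candidates
                if c.position < pronoun.position
                and c.gender == pronoun.gender
                and c.number == pronoun.number]
    return max(matching, key=lambda c: c.position, default=None)


nouns = [Mention("Maria", "f", "sg", 0),
         Mention("Ivan", "m", "sg", 2),
         Mention("books", "n", "pl", 4)]
he = Mention("he", "m", "sg", 6)
print(resolve(he, nouns).text)
```

Real systems layer machine-learned ranking on top of such agreement rules precisely because recency alone mis-resolves many pronouns; the hybrid design mentioned in the abstract reflects that.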
In an effort to make reading more accessible, an automated readability formula can help students retrieve material appropriate for their language level. This study attempts to discover and analyze a set of features that can be used for single-sentence readability prediction in Russian. We test the influence of syntactic features on the predictability of structural complexity. The readability of sentences from the SynTagRus corpus was marked up manually and used for evaluation.
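A feature-based sentence readability predictor can be sketched as follows. The features and weights below are illustrative assumptions, not the paper's feature set: a few surface proxies for structural complexity are combined into a weighted score and thresholded into easy/hard.

```python
# Illustrative sketch of single-sentence readability prediction.
# The features and weights are assumptions for demonstration, not the
# study's actual feature set.

def readability_features(sentence: str) -> dict:
    words = sentence.split()
    return {
        "n_words": len(words),
        "avg_word_len": sum(map(len, words)) / len(words),
        "n_commas": sentence.count(","),          # rough proxy for clause depth
        "long_words": sum(len(w) > 7 for w in words),
    }


def predict_hard(sentence: str, threshold: float = 10.0) -> bool:
    f = readability_features(sentence)
    score = (0.3 * f["n_words"] + f["avg_word_len"]
             + 2 * f["n_commas"] + f["long_words"])
    return score > threshold


print(predict_hard("The cat sat."))
print(predict_hard("Notwithstanding considerable methodological controversies, "
                   "researchers persistently investigated multidimensional phenomena."))
```

In the study itself such hand-set weights would be replaced by a model trained against the manual markup, and surface proxies by genuine syntactic features from parsed SynTagRus trees.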
We present a universal method for algorithmic trading in the stock market which performs asymptotically at least as well as any stationary trading strategy that computes the investment at each step as a continuous function of the side information. In the process of the game, a trader makes decisions using predictions computed by a randomized well-calibrated algorithm. We use Dawid's notion of calibration with more general checking rules and a modification of Kakade and Foster's randomized rounding algorithm for computing the well-calibrated forecasts. The method of randomized calibration is combined with Vovk's method of defensive forecasting in RKHS. Unlike in statistical theory, no stochastic assumptions are made about the stock prices.
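The general shape of forecast-driven trading can be conveyed by a toy far simpler than the paper's well-calibrated, randomized forecasts: the trader below estimates the probability of an upward move from past frequencies and stakes a fraction of wealth proportional to its perceived edge. Everything here is an illustrative assumption; it is not the universal method of the paper.

```python
# Toy illustration of forecast-driven trading (NOT the paper's method):
# a naive empirical-frequency forecast of an up-move, with the bet size
# proportional to the forecast's edge over 1/2.

def run_trader(price_moves):
    """price_moves: sequence of +1 (up) or -1 (down) per step."""
    wealth, ups, total = 1.0, 0, 0
    for move in price_moves:
        p_up = ups / total if total else 0.5  # empirical forecast so far
        bet = 2 * p_up - 1                    # in [-1, 1]: long if p_up > 0.5
        wealth *= 1 + 0.1 * bet * move        # 10% of wealth at stake
        ups += move == 1
        total += 1
    return wealth


uptrend = [1, 1, 1, -1, 1, 1]
print(run_trader(uptrend))  # ends above 1.0 on this mostly-up sequence
```

On an adversarial price sequence such a naive frequency forecast is easily exploited; replacing it with a randomized well-calibrated forecaster is exactly what lets the paper drop all stochastic assumptions about prices.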
Over the past century, educational psychologists and researchers have posited many theories to explain how individuals learn, i.e. how they acquire, organize and deploy knowledge and skills. The 20th century can be considered the century of the psychology of learning and related fields of interest (such as motivation, cognition, metacognition, etc.), and it is fascinating to see the various mainstreams of learning theory, embraced and forgotten over the 20th century, and to note that the basic assumptions of early theories survived several paradigm shifts in psychology and epistemology. Beyond folk psychology and its naïve theories of learning, psychological learning theories can be grouped into some basic categories, such as behaviorist learning theories, connectionist learning theories, cognitive learning theories, constructivist learning theories, and social learning theories.
Learning theories are not limited to psychology and related fields of interest; rather, the topic of learning appears in various disciplines, such as philosophy and epistemology, education, information science, biology, and, as a result of the emergence of computer technologies, especially in the field of computer science and artificial intelligence. As a consequence, machine learning struck a chord in the 1980s and became an important field of the learning sciences in general. As the learning sciences became more specialized and complex, the various fields of interest grew widely dispersed and separated from each other; as a consequence, even today there is no comprehensive overview of the sciences of learning or of the central theoretical concepts and vocabulary on which researchers rely.
The Encyclopedia of the Sciences of Learning provides up-to-date, broad and authoritative coverage of the specific terms most used in the sciences of learning and its related fields, including relevant areas of instruction, pedagogy, cognitive sciences, and especially machine learning and knowledge engineering. This modern compendium will be an indispensable source of information for scientists, educators, engineers, and technical staff active in all fields of learning. More specifically, the Encyclopedia provides fast access to the most relevant theoretical terms; provides up-to-date, broad and authoritative coverage of the most important theories within the various fields of the learning sciences and adjacent sciences and communication technologies; and supplies clear and precise explanations of the theoretical terms, cross-references to related entries, and up-to-date references to important research and publications. The Encyclopedia also contains biographical entries on individuals who have substantially contributed to the sciences of learning; the entries are written by a distinguished panel of researchers in the various fields of the learning sciences.