Predictive Model for the Bottomhole Pressure Based on Machine Learning
The objective of this work is to develop a predictive model for multiphase wellbore flows using a machine learning approach. An artificial neural network is developed and trained on a dataset generated with a numerical simulator of full-scale transient wellbore flows. After training is completed, the neural network is used to predict one of the key parameters of the wellbore flow, namely the bottomhole pressure. The novelty of this work lies in the application of the neural network to the analysis of highly transient processes taking place in wellbores. In such processes, most of the parameters of interest can be represented by interdependent time series of variables linked through complex physical phenomena pertinent to the nature of multiphase flows. The proposed neural network with two hidden layers demonstrated the capability to predict the bottomhole pressure with a normalized root mean squared error within 5% for many complex wellbore configurations and flows. It is also shown that relatively higher prediction errors are mainly observed in the case of slug flows, where the transient nature of the flow is most pronounced. Finally, the developed model is tested on data affected by noise. It is demonstrated that although the prediction error slightly increases compared to the noise-free data, the model captures the essential features of the studied transient process. A description of the developed models, an analysis of various test cases, and possible future research directions are provided.
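As a rough illustration of the setup described above (not the authors' implementation), the sketch below trains a two-hidden-layer network on synthetic data standing in for simulator output and scores it with the normalized root mean squared error; the feature count, layer widths, and data are assumptions.

```python
# Minimal sketch: two-hidden-layer regressor for bottomhole pressure,
# evaluated by normalized RMSE. Data and hyperparameters are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical training data: each row holds lagged surface measurements
# (e.g. wellhead pressure, liquid/gas rates) for one time step.
X_train = rng.normal(size=(5000, 12))
y_train = rng.normal(loc=250.0, scale=20.0, size=5000)   # bottomhole pressure, bar
X_test = rng.normal(size=(1000, 12))
y_test = rng.normal(loc=250.0, scale=20.0, size=1000)

# Two hidden layers, as in the abstract; the layer widths are a guess.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), activation="relu",
                 max_iter=500, random_state=0),
)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

# Normalized RMSE: RMSE divided by the range of the observed target.
rmse = np.sqrt(np.mean((y_test - y_pred) ** 2))
nrmse = rmse / (y_test.max() - y_test.min())
print(f"NRMSE = {nrmse:.3f}")   # the abstract reports errors within 5% on simulator data
```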
The paper gives a brief introduction to multiple classifier systems and describes an algorithm that improves classification accuracy by recommending a classifier for each object. The recommendation rests on the hypothesis that a classifier is likely to predict the label of an object correctly if it has correctly classified the object's neighbors. The assignment of a classifier to each object relies on the apparatus of Formal Concept Analysis. We explain the principle of the algorithm on a toy example and describe experiments with real-world datasets.
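The core recommendation idea can be illustrated without the Formal Concept Analysis machinery: for each test object, pick the base classifier that correctly labeled most of the object's nearest training neighbors. The following sketch is only a toy approximation of that principle; the dataset, base classifiers, and neighborhood size are arbitrary choices.

```python
# Toy sketch of per-object classifier recommendation via neighbor accuracy.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [DecisionTreeClassifier(random_state=0), GaussianNB(),
        LogisticRegression(max_iter=1000)]
for clf in base:
    clf.fit(X_tr, y_tr)

# Which training objects each classifier gets right (computed on the training
# set itself here; a held-out validation split would be the safer choice).
correct = np.array([clf.predict(X_tr) == y_tr for clf in base])

nn = NearestNeighbors(n_neighbors=5).fit(X_tr)
_, idx = nn.kneighbors(X_te)

preds = []
for i, neigh in enumerate(idx):
    # Recommend the classifier with the best record on this object's neighbors.
    best = np.argmax(correct[:, neigh].sum(axis=1))
    preds.append(base[best].predict(X_te[i:i + 1])[0])

print("accuracy:", np.mean(np.array(preds) == y_te))
```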
The paper deals with the problems of creating and tuning a system for automated anaphora resolution in Russian. Such a system, combining rule-based and machine learning approaches, is introduced; it achieves an F-measure between 0.51 and 0.59. Freeling serves as the underlying morphological layer, and an account of its quality and of its influence on the anaphora resolution workflow is given. The anaphora resolution system itself is available for download and use and comes with an online demo.
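A minimal sketch of how the rule-based part of such a hybrid pipeline might look, assuming invented mentions and features (gender, number, position); the described system's actual rules, Freeling interface, and machine learning component are not reproduced here.

```python
# Toy rule-based antecedent selection: filter candidates by gender/number
# agreement, then fall back to a nearest-antecedent baseline. A trained
# model would replace the baseline score.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Mention:
    text: str
    gender: str    # "m" / "f" / "n"
    number: str    # "sg" / "pl"
    position: int  # token index in the text

def resolve(pronoun: Mention, candidates: list[Mention]) -> Mention | None:
    # Rule: keep only antecedents that agree in gender and number and
    # occur before the pronoun.
    agreeing = [c for c in candidates
                if c.gender == pronoun.gender
                and c.number == pronoun.number
                and c.position < pronoun.position]
    # Baseline: prefer the nearest surviving antecedent.
    return max(agreeing, key=lambda c: c.position, default=None)

candidates = [Mention("Maria", "f", "sg", 0), Mention("Ivan", "m", "sg", 3)]
pronoun = Mention("ona", "f", "sg", 7)   # Russian "she"
print(resolve(pronoun, candidates).text)  # -> Maria
```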
In an effort to make reading more accessible, an automated readability formula can help students retrieve material appropriate for their language level. This study attempts to discover and analyze a set of features that can be used for single-sentence readability prediction in Russian. We test the influence of syntactic features on the predictability of structural complexity. The readability of sentences from the SynTagRus corpus was marked up manually and used for evaluation.
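A minimal sketch of the prediction setup, assuming hypothetical syntactic features (sentence length, dependency-tree depth, subordinate clause count) regressed against manually assigned readability scores; the study's actual feature set and annotations are not reproduced here.

```python
# Toy single-sentence readability regression on invented syntactic features.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Each row: [token count, max dependency depth, subordinate clause count].
X = np.array([
    [ 5, 2, 0],
    [12, 4, 1],
    [23, 6, 2],
    [ 8, 3, 0],
    [31, 7, 3],
    [15, 5, 1],
])
y = np.array([1.0, 2.5, 4.0, 1.5, 4.5, 3.0])  # manual readability labels

model = Ridge(alpha=1.0)
scores = cross_val_score(model, X, y, cv=3, scoring="neg_mean_absolute_error")
print("MAE:", -scores.mean())
```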
Over the past century, educational psychologists and researchers have posited many theories to explain how individuals learn, i.e., how they acquire, organize, and deploy knowledge and skills. The 20th century can be considered the century of the psychology of learning and related fields of interest (such as motivation, cognition, metacognition, etc.), and it is fascinating to trace the various mainstreams of learning theory, remembered and forgotten over the 20th century, and to note that basic assumptions of early theories survived several paradigm shifts in psychology and epistemology. Beyond folk psychology and its naïve theories of learning, psychological learning theories can be grouped into some basic categories, such as behaviorist learning theories, connectionist learning theories, cognitive learning theories, constructivist learning theories, and social learning theories.
Learning theories are not limited to psychology and related fields of interest; the topic of learning appears in various disciplines, such as philosophy and epistemology, education, information science, biology, and, as a result of the emergence of computer technologies, especially in computer science and artificial intelligence. As a consequence, machine learning struck a chord in the 1980s and became an important field of the learning sciences in general. As the learning sciences became more specialized and complex, the various fields of interest spread widely and separated from each other; as a consequence, even today there is no comprehensive overview of the sciences of learning or of the central theoretical concepts and vocabulary on which researchers rely.
The Encyclopedia of the Sciences of Learning provides an up-to-date, broad, and authoritative coverage of the specific terms mostly used in the sciences of learning and its related fields, including relevant areas of instruction, pedagogy, cognitive sciences, and especially machine learning and knowledge engineering. This modern compendium will be an indispensable source of information for scientists, educators, engineers, and technical staff active in all fields of learning. More specifically, the Encyclopedia provides fast access to the most relevant theoretical terms; provides up-to-date, broad, and authoritative coverage of the most important theories within the various fields of the learning sciences and adjacent sciences and communication technologies; and supplies clear and precise explanations of the theoretical terms, cross-references to related entries, and up-to-date references to important research and publications. The Encyclopedia also contains biographical entries of individuals who have substantially contributed to the sciences of learning; the entries are written by a distinguished panel of researchers in the various fields of the learning sciences.
In Experimental Economics, laboratory and field experiments are conducted on subjects in order to improve theoretical knowledge about human behavior in interactions. Although paying different amounts of money restricts the preferences of the subjects in experiments, the exclusive application of analytical game theory does not suffice to explain the recorded data; it demands the development and evaluation of more sophisticated models. In some experiments, human subjects interact with automated agents, and these agents are used to simulate human interactions. The more data is used for the evaluation, the more statistical significance can be achieved. Since huge amounts of behavioral data are required to be scanned for regularities, and automated agents are required to simulate and intervene in human interactions, Machine Learning is the tool of choice for research in Experimental Economics. Moreover, modern economics extensively involves network structures, which can be modeled as graphs or more complicated relational structures. This volume contains the papers presented at the inaugural International Workshop on Experimental Economics and Machine Learning (EEML 2012), held on May 9, 2012 at the Katholieke Universiteit Leuven, Belgium. This year the committee decided to accept 8 full papers for publication in the proceedings and two abstracts for presentation at the conference. Each submission was reviewed by 3 program committee members on average. R. Tagiew proposes a new method for mining determinism in human strategic behavior. N. Buzun et al. present a comparison of methods and measures for overlapping community detection. A. Fishkov et al. discuss a new click model for relevance prediction in Web search. A. Drutsa et al. apply novel data visualisation techniques to socio-semantic network data. Gilabert et al. present an experimental study on the relationship between trust and budgetary slack. O. Barinova et al. propose using online random forests for interactive image segmentation. A. Bezzubtseva et al. build a new typology of collaboration platform users. V. Zaharchuk et al. propose a new recommender system for interactive radio network services. D. Ignatov et al. design a prototype system for collaborative platform data analysis.
This paper is an overview of the current issues and tendencies in computational linguistics. The overview is based on the materials of the COLING 2012 conference on computational linguistics. Modern approaches to the traditional NLP domains, such as POS tagging, syntactic parsing, and machine translation, are discussed. Highlights of automated information extraction, such as fact extraction and opinion mining, are also in focus. The main tendency of modern computational linguistics technologies is to incorporate higher levels of linguistic analysis (discourse analysis, cognitive modeling) into the models and to combine machine learning technologies with algorithmic methods based on deep expert linguistic knowledge.
We present a universal method for algorithmic trading in the stock market which performs asymptotically at least as well as any stationary trading strategy that computes the investment at each step as a continuous function of the side information. In the process of the game, a trader makes decisions using predictions computed by a randomized well-calibrated algorithm. We use Dawid's notion of calibration with more general checking rules and a modification of Kakade and Foster's randomized rounding algorithm for computing the well-calibrated forecasts. The method of randomized calibration is combined with Vovk's method of defensive forecasting in RKHS. Unlike in statistical theory, no stochastic assumptions are made about the stock prices.
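The following toy illustrates only the final step, investing a fraction of capital given by a continuous function of the current forecast; the randomized calibration and defensive forecasting components that actually produce the forecasts in the paper are not implemented, and the forecasts below are simulated.

```python
# Toy illustration: trading with a continuous function of a probability
# forecast. The forecasts are faked with some skill; producing genuinely
# well-calibrated forecasts online is the paper's actual contribution.
import numpy as np

rng = np.random.default_rng(1)
T = 1000
returns = rng.normal(0.0, 0.01, size=T)   # per-step price returns

# Simulated forecasts of "price goes up" with partial skill (illustration only).
forecast_skill = 0.6
ups = (returns > 0).astype(float)
forecasts = forecast_skill * ups + (1 - forecast_skill) * rng.uniform(size=T)

def investment(p: float) -> float:
    # Continuous (here linear) mapping from the forecast to the position,
    # in [-1, 1]: long when p > 0.5, short when p < 0.5.
    return 2.0 * p - 1.0

capital = 1.0
for p, r in zip(forecasts, returns):
    capital *= 1.0 + investment(p) * r
print(f"final capital: {capital:.3f}")
```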
Event logs collected by modern information and technical systems usually contain enough data for automated discovery of process models. A variety of algorithms has been developed for process model discovery, conformance checking, log-to-model alignment, comparison of process models, etc.; nevertheless, quick analysis of ad-hoc selected parts of a log has not yet received a full-fledged implementation. This paper describes a ROLAP-based method of multidimensional event log storage for process mining. The result of the log analysis is visualized as a directed graph representing the union of all possible event sequences, ranked by their occurrence probability. Our implementation allows the analyst to discover process models for sublogs defined by an ad-hoc selection of criteria and a threshold on occurrence probability.
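A minimal sketch of the visualization step, independent of the ROLAP storage: build a directed graph of directly-follows relations from an event log and rank transitions by their occurrence probability. The log below is invented.

```python
# Build a directly-follows graph from an event log and print transitions
# ranked by frequency, with per-source occurrence probabilities.
from collections import Counter

# Hypothetical event log: one list of activity names per case.
log = [
    ["register", "check", "approve", "archive"],
    ["register", "check", "reject", "archive"],
    ["register", "check", "approve", "archive"],
]

edges = Counter()
for trace in log:
    for a, b in zip(trace, trace[1:]):
        edges[(a, b)] += 1

total_out = Counter()
for (a, _), n in edges.items():
    total_out[a] += n

# Probability of each transition given its source activity.
for (a, b), n in sorted(edges.items(), key=lambda kv: -kv[1]):
    print(f"{a} -> {b}: {n / total_out[a]:.2f}")
```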
The objective of the Handbook of CO₂ in Power Systems is to present the state-of-the-art developments in power systems that take CO₂ emissions into account. The book covers power system operation modeling with CO₂ emission considerations, CO₂ market mechanism modeling, CO₂ regulation policy modeling, carbon price forecasting, and carbon capture modeling. For each of these subjects, at least one article authored by a world specialist in the specific domain is included.
Many electronic devices operate in a cyclic mode. This should be considered when forecasting reliability indicators at the design stage. The accuracy of the prediction, and the planning of measures to ensure reliability, depend on the correctness of estimating and accounting for the greatest possible number of factors. This, in turn, affects the overall progress of the design and, in the end, the quality and competitiveness of the product.
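As a hedged illustration of accounting for cyclic operation (the formulas and numbers are assumptions, not taken from the paper), an effective failure rate can be composed from on-state, off-state, and per-switching contributions weighted by the duty cycle:

```python
# Illustrative duty-cycle-weighted failure rate for a device in cyclic mode.
# All rates below are assumed values for the example.
lambda_on = 2.0e-6     # failures per hour while operating (assumed)
lambda_off = 0.3e-6    # failures per hour while dormant (assumed)
lambda_cycle = 1.0e-8  # failures per on/off switching cycle (assumed)
duty = 0.4             # fraction of time spent in the on state
cycles_per_hour = 0.5  # switching frequency

lambda_eff = (duty * lambda_on
              + (1 - duty) * lambda_off
              + cycles_per_hour * lambda_cycle)
mtbf_hours = 1.0 / lambda_eff
print(f"effective failure rate: {lambda_eff:.2e} 1/h, MTBF ~ {mtbf_hours:,.0f} h")
```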
Let G be a semisimple algebraic group whose decomposition into the product of simple components does not contain simple groups of type A, and let P ⊆ G be a parabolic subgroup. Extending the results of Popov, we enumerate all triples (G, P, n) such that (a) there exists an open G-orbit on the multiple flag variety G/P × G/P × … × G/P (n factors), and (b) the number of G-orbits on the multiple flag variety is finite.
I give the explicit formula for the (set-theoretical) system of resultants of m+1 homogeneous polynomials in n+1 variables.