This book constitutes the refereed proceedings of the 14th International Workshop on Enterprise and Organizational Modeling and Simulation, EOMAS 2018, held in Tallinn, Estonia, in June 2018. The main focus of EOMAS is on the role, importance, and application of modeling and simulation within the extended organizational and enterprise context. The 11 full papers presented in this volume were carefully reviewed and selected from 22 submissions. They were organized in topical sections on conceptual modeling, enterprise engineering, and formal methods.
The contemporary marketing practices (CMP) methodology attracts the attention of a substantial number of researchers in the field of strategic marketing. Over the past two decades, more than fifty papers have been published in peer-reviewed outlets analyzing the use of contemporary marketing practices in a variety of countries and industries. In this note we discuss the reliability of these studies with respect to the analytic tools they use. First, we demonstrate that standard cluster analysis is quite sensitive to small changes in the datasets, with companies frequently being assigned to different clusters. Second, the national project teams use different, often incompatible settings. Therefore, to make comparisons between countries and across industries possible, researchers must agree on a generic setup and procedures. We conclude the note by sketching the basics of this common ground.
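A minimal sketch (not taken from the studies themselves) of the kind of clustering sensitivity described above: a toy one-dimensional k-means run in which a single borderline company switches clusters after a 0.2 shift in one of its scores. The data, the initial centers, and the two-cluster setup are all hypothetical.

```python
def kmeans_1d(points, centers, iters=20):
    """Tiny Lloyd's algorithm for two clusters on 1-D data."""
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to the nearest of the two centers.
        labels = [0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
                  for p in points]
        # Recompute each center as the mean of its assigned points.
        for k in (0, 1):
            members = [p for p, lab in zip(points, labels) if lab == k]
            if members:
                centers[k] = sum(members) / len(members)
    return labels

# Hypothetical firm scores; only the last (borderline) firm differs by 0.2.
scores_a = [0.0, 1.0, 9.0, 10.0, 5.1]
scores_b = [0.0, 1.0, 9.0, 10.0, 4.9]
labels_a = kmeans_1d(scores_a, [0.0, 10.0])
labels_b = kmeans_1d(scores_b, [0.0, 10.0])
# The borderline firm lands in different clusters in the two runs,
# while the other four firms keep their assignments.
```

With real multi-indicator CMP data the effect is the same in kind: firms near cluster boundaries are reassigned by perturbations well within measurement noise.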
This state-of-the-art survey is dedicated to the memory of Emmanuil Markovich Braverman (1931-1977), a pioneer in developing machine learning theory. The 12 revised full papers and 4 short papers included in this volume were presented at the conference "Braverman Readings in Machine Learning: Key Ideas from Inception to Current State" held in Boston, MA, USA, in April 2017, commemorating the 40th anniversary of Emmanuil Braverman's death. The papers present an overview of some of Braverman's ideas and approaches. The collection is divided into three parts. The first part bridges the past and the present. Its main contents relate to the concept of the kernel function and its application to signal and image analysis as well as clustering. The second part presents a set of extensions of Braverman's work to issues of current interest both in the theory and in the applications of machine learning. The third part includes short essays by a friend, a student, and a colleague.
Originally published in 1995. In securing the future of any democracy, it is vital that the education service should provide an effective introduction to citizenship by means of a high quality and empowering curriculum in educational institutions organized and administered according to democratic principles. In this volume, educators with a variety of backgrounds and experience gained in educational institutions in both Russia and western countries address the question of the conception, justification and implementation of the idea of 'education for democracy'. This is the first publication to emerge from a collaboration of Russian and Western educators in recent times and is an enthralling account of education in countries with wide social, political and historical differences yet having common ground to share over the creation and management of their school systems.
Higher Education in Federal Countries: A Comparative Study is a unique study of higher education in nine federal countries—the United States, Canada, Australia, Germany, Mexico, Brazil, Russia, China and India. In this book, leading international scholars discuss the role of federalism and how it shapes higher education in major nation-state actors on the world stage. The editors develop an overarching comparative analysis of the dynamics of central and regional power in higher education, and the national case studies explain how each federal and federal-like higher education system has evolved and how it functions in what are highly varied contexts.
The book makes a major contribution to higher education studies and defines a new field of comparative analysis. It also provides important insights into comparative governance and the study of federalism and federal arrangements, with their particular historical, political, legal and economic dimensions.
Since 2008 the neoliberal mainstream, which once seemed steadfast, has been suffering both economic and political crises. This drives people to seek alternatives on either the right or the left. In such circumstances, alternative political forces attempt to satisfy civil society’s needs by suggesting new ideas that challenge neoliberalism. Both the leftist and the rightist revolts have therefore been a search for new growth drivers and a new balance within societies, one that takes all classes and their interests into consideration.
This process is closely tied to several major shifts. The privatization of state functions has been called into question, putting the return of public capital on the agenda. The voices of people demanding more equal access to public goods, such as education, are growing louder.
New politicians have emerged who are able to grasp this message and win broad popular support, for instance Jeremy Corbyn in the UK and Jean-Luc Mélenchon in France. However, what is behind the leftist revolt? Does it have any chance to succeed?
Contributions in this volume focus on computationally efficient algorithms and rigorous mathematical theories for analyzing large-scale networks. Researchers and students in mathematics, economics, statistics, computer science and engineering will find this collection a valuable resource filled with the latest research in network analysis. Computational aspects and applications of large-scale networks in market models, neural networks, social networks, power transmission grids, maximum clique problem, telecommunication networks, and complexity graphs are included with new tools for efficient network analysis of large-scale networks.
These proceedings result from the 7th International Conference in Network Analysis, held at the Higher School of Economics, Nizhny Novgorod, in June 2017. The conference brought together scientists, engineers, and researchers from academia, industry, and government.
Igor Pellicciari is a tenured professor at the State University of Urbino (Italy) and a senior fellow at the Higher School of Economics (Moscow). He is also a contract professor at Moscow State University and LUISS University (Rome). From 2005 to 2013 he was a Senior Expert of the European Union for Institution Building Programs, carried out in cooperation with the Russian Presidential Administration and the State Duma of the Russian Federation.
In order to understand modern Russia without falling into the most common current stereotypes (the first and most common being the image of its current president as a modern Tsar), it should be a prerequisite to analyze the substantial failure of the first Russian constitutionalization, the period which preceded the Soviet government and the entire Soviet era.
This book analyzes that period (1905–1907), the short but intense liberal era in Russia at the start of the 20th century. Thanks to it, Russia experienced one of the latest and shortest liberal periods in Europe, one in which, however, the seeds were planted for the country's later political and institutional development.
It is important to observe the revolution of 1905 and the subsequent convocation of the First Russian Duma in 1906, which turned into a lost opportunity for Russian constitutionalization and ultimately ended up a forgotten liberal revolution. Instead, over the decades Lenin’s narrative of the 1905 events became predominant: a general rehearsal for the hailed and inevitable Glorious Bolshevik Revolution of October 1917, which by contrast was considered the start of a new era and of a strong, newly legitimate political regime.
Thus, the liberal and constitutional potential of the 1905 revolution was banned for almost a century from the official political history of Soviet Russia.
Nonetheless, today all these events, and especially those generated by the parliamentary institutions, have been reevaluated in the light of their role in inspiring the constitutional transformation of the current post-soviet political system, and have a newly acquired practical significance for the modern institutional development of Russia.
From this perspective, the current effort of the Russian Federation to consolidate first and foremost liberalism and Rule of Law reforms, before dealing with the issue of a full and true procedural democratization of the country, becomes more historically understandable.
Miscommunicating Social Change analyzes the discourses of three social movements and the alternative media associated with them, revealing that the Enlightenment narrative, though widely critiqued in academia, remains the dominant way of conceptualizing social change in the name of democratization in the post-Soviet terrain. The main argument of this book is that the “progressive” imaginary, which envisages progress in the unidirectional terms of catching up with the “more advanced” Western condition, is inherently anti-democratic and deeply antagonistic. Instead of fostering an inclusive democratic process in which all strata of populations holding different views are involved, it draws solid dividing frontiers between “progressive” and “retrograde” forces, deepening existing antagonisms and provoking new ones; it also naturalizes the hierarchies of the global neocolonial/neoliberal power of the West. Using case studies of the “White Ribbons” social movement for fair elections in Russia (2012), the Ukrainian Euromaidan (2013–2014), and anti-corruption protests in Russia organized by Alexei Navalny (2017) and drawing on the theories of Ernesto Laclau, Chantal Mouffe, and Nico Carpentier, this book shows how “progressive” articulations by the social movements under consideration ended up undermining the basis of the democratic public sphere through the closure of democratic space.
Over the past several decades, a number of “highly-resourced, accelerated research universities” have been established around the world to pursue—and achieve—academic and research excellence. These institutions are entirely new, not existing universities that were reconfigured. Accelerated Universities provides case studies of eight such universities and highlights the lessons to be learned from these examples. Each of the cases is written by someone involved with leadership at the early developmental stages of the university, and provides insights that only senior executives can offer. Accelerated Universities shows that visionary leadership and generous funding combined with innovative ideas can yield impressive results in a short time. Universities aspiring to recognition among the top tier of global institutions will find this book indispensable.
The coursebook aims to develop foreign language competence among university students, as well as interlingual and intercultural communication in the professional sphere. The book offers an opportunity to master phonetic, lexical and grammatical skills, as well as listening, writing and speaking, on the basis of the documentary series "The History of the Kings and Queens of England". Students are provided with various task types which help develop language, communicative and cultural competences.
This book concludes The Industrialisation of Soviet Russia, an authoritative account of the Soviet Union’s industrial transformation between 1929 and 1939. The volume before this one covered the ‘good years’ (in economic terms) of 1934 to 1936. The present volume has a darker tone: beginning from the Great Terror, it ends with the Hitler-Stalin pact and the outbreak of World War II in Europe. During that time, Soviet society was repeatedly mobilised against internal and external enemies, and the economy provided one of the main arenas for the struggle. This was expressed in waves of repression, intensive rearmament, the increased regimentation of the workforce and the widespread use of forced labour.
What is it to be a work of art? Renowned author and critic Arthur C. Danto addresses this fundamental, complex question. Part philosophical monograph and part memoiristic meditation, What Art Is challenges the popular interpretation that art is an indefinable concept, instead bringing to light the properties that constitute universal meaning. Danto argues that despite varied approaches, a work of art is always defined by two essential criteria: meaning and embodiment, as well as one additional criterion contributed by the viewer: interpretation. Danto crafts his argument in an accessible manner that engages with both philosophy and art across genres and eras, beginning with Plato’s definition of art in The Republic, and continuing through the progress of art as a series of discoveries, including such innovations as perspective, chiaroscuro, and physiognomy. Danto concludes with a fascinating discussion of Andy Warhol’s famous shipping cartons, which are visually indistinguishable from the everyday objects they represent.
This book is the first to trace the origins and significance of positivism on a global scale. Taking their cues from Auguste Comte and John Stuart Mill, positivists pioneered a universal, experience-based culture of scientific inquiry for studying nature and society—a new science that would enlighten all of humankind. Positivists envisaged one world united by science, but their efforts spawned many. Uncovering these worlds of positivism, the volume ranges from India, the Ottoman Empire, and the Iberian Peninsula to Central Europe, Russia, and Brazil, examining positivism’s impact as one of the most far-reaching intellectual movements of the modern world. Positivists reinvented science, claiming it to be distinct from and superior to the humanities. They predicated political governance on their refashioned science of society, and as political activists, they sought and often failed to reconcile their universalism with the values of multiculturalism. Providing a genealogy of scientific governance that is sorely needed in an age of post-truth politics, this volume breaks new ground in the fields of intellectual and global history, the history of science, and philosophy.
Combining history of science and a history of universities with the new imperial history, Universities in Imperial Austria 1848–1918: A Social History of a Multilingual Space by Jan Surman analyzes the practice of scholarly migration and its lasting influence on the intellectual output in the Austrian part of the Habsburg Empire.
The Habsburg Empire and its successor states were home to developments that shaped Central Europe's scholarship well into the twentieth century. Universities became centers of both state- and nation-building, as well as of confessional resistance, placing scholars if not in conflict, then certainly at odds with the neutral international orientation of academe.
By going beyond national narratives, Surman reveals the Empire as a state with institutions divided by language but united by legislation, practices, and other influences. Such an approach gives readers a better view of how scholars gradually turned away from state-centric discourse to form distinct language communities after 1867; these influences affected scholarship, and by examining the scholarly record, Surman tracks the turn.
Drawing on archives in Austria, the Czech Republic, Poland, and Ukraine, Surman analyzes the careers of several thousand scholars from the faculties of philosophy and medicine of a number of Habsburg universities, thus covering various moments in the history of the Empire for the widest view. Universities in Imperial Austria 1848–1918 focuses on the tension between the political and linguistic spaces scholars occupied and shows that this tension did not lead to a gradual dissolution of the monarchy’s academia, but rather to an ongoing development of new strategies to cope with the cultural and linguistic multitude.
This review is an attempt to read the main ideas of Catherine Malabou’s book Before Tomorrow: Epigenesis and Rationality, with a particular emphasis upon the problem of the modifiability of the transcendental and the rejection of the a priori dimension of subjectivity within scientific and philosophical thought of a materialist orientation. Malabou’s thesis of the epigenesis of pure reason evinces the dynamical dimension of the transcendental, integrating structural and evolutionary conceptions of reason. Epigenesis secures the stability of the phenomenal world and allows for the possibility of a contingent metamorphosis of reason, thereby establishing an economy of transcendental contingency. In general, Malabou’s work has many affinities with recent phenomenological thought, although it makes few explicit references to phenomenological philosophers as such.
This article deals with the representation of tales of the Ulster Cycle in Foras Feasa ar Éirinn, written by Geoffrey Keating in the seventeenth century. Among the sources of retellings of these stories, the article focuses on that copied in Cambridge McClean MS 187, which may have been the Black Book of Molaga, the hypothetical primary source of the death tales reproduced in Foras Feasa ar Éirinn, of which editors and students of the Ulster Cycle have not been aware. On closer examination it becomes evident that the tales as represented in Keating’s work and McClean 187, as well as other tales included in the Foras, were reworkings of earlier variants of the tales. Keating did not merely copy his primary sources but rather revised them: he either rearranged the plot of the original story or modified it in accordance with his own authorial intentions.
In the recent edition of Maffeo Vegio’s Antonias, the manuscripts’ readings should be restored in two places, since the conjectures introduced by the poem’s editor distort both the text’s syntax and its metrical correctness.
The following conjecture is proposed: 'New Apuleius' 27.15 ordine cieri Stover: ordinem queri cod.: ordine moueri Shumilin.
A conjecture of longe is proposed at Verg. Aen. 12.510 based on Vergil’s Homeric model and on a probable imitation by Statius.
A mixture of argon and mercury vapor is used as the background gas in various types of gas-discharge lamps. The aim of this work was to develop a model describing the transport of electrons, ions and fast atoms in a one-dimensional low-current gas discharge in an argon–mercury mixture, and to determine how their contributions to cathode sputtering, which limits the device service life, depend on temperature. Electron motion was simulated by the Monte Carlo method of statistical modeling, whereas ion and metastable excited atom motion was described, in order to reduce computation time, by macroscopic transport equations, which yielded their flux densities at the cathode surface. Then, using the Monte Carlo method, we found the energy spectra at the cathode surface of ions and of the fast atoms generated in collisions of ions with mixture atoms, as well as the effective cathode sputtering coefficients for each type of particle. Calculations showed that the flux densities of argon ions and of the fast argon atoms produced in collisions of argon ions with slow argon atoms do not depend on temperature, while the flux densities of mercury ions and of the fast argon atoms they generate grow rapidly with temperature due to the increasing mercury content of the mixture. The modeled energy spectra of ions and fast atoms at the cathode surface show that at a low mercury content, of the order of 10^-3, the energies of mercury ions exceed those of the other types of particles, so the cathode is sputtered mainly by mercury ions, and their contribution to sputtering decreases as the mixture temperature decreases.
The aim of this review is to offer a coherent selection of previous findings on the pivotal role of teachers in nurturing moral development in their students. Four sections dissect the evidence on teaching efficacy, teaching practice, value transmission and imitative learning. Through these elements, the possibilities of a successful intervention are discussed and weighed against the unavoidable limitations and controversies.
The second part of the paper is devoted to the enumeration of r-regular maps on the torus up to all of its homeomorphisms (unsensed maps). We describe in detail the periodic orientation-reversing homeomorphisms of the torus, which turn out to be representable as glide reflections. We show that taking quotients of the torus with respect to these homeomorphisms leads to maps on the Klein bottle, the annulus and the Möbius band. Using 3- and 4-regular maps as an example, we describe the technique of enumerating quotient maps on surfaces with boundary. The recurrence relations obtained are used to enumerate unsensed r-regular maps on the torus for various r.
The paper studies two-dimensional modal logics with additional connectives (so-called Segerberg squares) and can be regarded as a continuation of the author's earlier paper (2012). It gives a new, simpler proof of the finite model property of minimal Segerberg squares using bisimulation games. It proves the finite model property for Segerberg squares of the polymodal logics T and D. It also constructs a faithful embedding of Segerberg squares into the equational theory of relation algebras.
We prove completeness for some normal modal predicate logics in the standard Kripke semantics with expanding domains. We consider quantified versions of propositional logics with the axiom of density plus some others (transitivity, confluence). The method of proof modifies the technique developed for other cases (without density) by S. Ghilardi, G. Corsi and D. Skvortsov, but now we arrange the whole construction in a game-theoretic style.
Information technologies have evolved from their traditional back-office role into a strategic resource able not only to support but also to shape business strategies. For over a decade, IT–business alignment has been ranked a top-priority management concern and is widely covered in the literature. However, conceptual studies dominate the field, and there is little research on practical ways to achieve alignment. The aim of this paper is to formalize and verify the alignment assessment model developed in our previous research, which integrates the traditional Strategic Alignment Model with the EA framework TOGAF, in an attempt to provide a practical approach to alignment evaluation and implementation. The Alloy language and Analyzer are used as the means of model formalization and verification.
This research uses an architectural approach, based on the Zachman model, to align IT and business in a chosen industrial enterprise. All development processes existing within the enterprise, together with the relations and interactions among them needed to fulfill the enterprise mission, are represented in a chosen Enterprise Architecture framework. The framework does not just capture the main attributes and components of the organization; it also gives the company an opportunity to understand and analyze crucial weaknesses and inconsistencies that need to be identified and rectified. Enterprises today use a wide range of established Enterprise Architecture frameworks, some developed for specific fields and others applicable broadly. One framework with such broad applicability is the Zachman Enterprise Architecture Framework, a unique tool for creating an architectural description and applying solutions to the challenges identified for the enterprise under consideration. The main goal of the study is therefore to provide practical guidance enabling business–IT alignment based on the Zachman Enterprise Architecture Framework.
This paper continues the author’s previous study of velocity measurement methods in navigation receivers and is devoted to their comparative analysis.
Each method of measuring velocity has a few parameters. Let us fix all parameters except one (the main one) and vary it. Each value of the varied parameter corresponds to some noise error of the velocity measurements, which can be characterized by its standard deviation, or SD (cm/s). A dynamic model of GNSS receiver motion determines the dynamic errors; the maximal dynamic error, or MDE (cm/s), is of interest here. This error depends on the “maneuver phase”, i.e., the shift of the maneuver start time relative to the start of the PLL control period and to the start of the secondary processing period. The maximum of the MDE over these shifts is of interest.
So, for each value of the varied parameter there is a pair of numbers: SD and MDE. Let us arrange these pairs in the plane, with MDE on the x-axis and SD on the y-axis. Connecting the nearest points yields a curve called an exchange diagram. Since SD and MDE vary over a wide range, the diagrams should be plotted on a logarithmic scale, i.e., in dB relative to 1 cm/s; we call them logarithmic exchange diagrams (LEDs). Different LEDs were plotted for tough and soft dynamic scenarios for different velocity measurement methods, including the conventional one frequently discussed in the literature.
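As a rough illustration of how such a diagram could be assembled, each (MDE, SD) pair is converted to dB relative to 1 cm/s. The pairs below are hypothetical, not the paper's data, and the 20·log10 convention for amplitude-like quantities is an assumption here.

```python
import math

def to_db(x_cm_s):
    """Convert an amplitude-like value in cm/s to dB relative to 1 cm/s."""
    return 20.0 * math.log10(x_cm_s)

# Hypothetical (MDE, SD) pairs in cm/s, one per value of the varied
# parameter; as the parameter grows, MDE rises while SD falls.
pairs = [(0.5, 8.0), (1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (8.0, 0.5)]

# Points of the logarithmic exchange diagram: x = MDE in dB, y = SD in dB.
led = [(to_db(mde), to_db(sd)) for mde, sd in pairs]
```

Plotting these points and joining neighbors gives the LED; comparing the curves of two methods then shows which one trades dynamic error against noise error more favorably.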
Based on the analysis, two methods are recommended for tougher dynamic conditions: one generating frequency estimates of the input signal with subsequent after-satellite filtering by a second-order tracking filter, and one based on quasi-optimal estimates of the input signal phase with subsequent after-satellite filtering by a third-order tracking filter. Under more favorable conditions, in addition to these two, a method generating coordinate increments over one period with subsequent after-coordinate filtering by a second-order tracking filter, and a method generating local coordinates with subsequent after-coordinate filtering by a third-order tracking filter, are also recommended. In conclusion, the variation of the velocity estimate SD for one of the best recommended methods was investigated as the method parameters were varied.
Velocity measurements in navigation receivers are performed in two stages. At the primary (after-satellite) processing stage, each received signal is synchronized by a separate PLL, after which an estimation block (EB) estimates the non-energy (phase and frequency) and energy (SNR) parameters of the received signal. The primary Doppler estimates may undergo after-satellite filtering to obtain secondary frequency estimates. A set of Doppler estimates is converted into primary estimates of the velocity vector projections (for example, onto the axes of a local Cartesian coordinate system) using the least squares method (LSM). The primary velocity estimates can be filtered at the secondary (after-coordinate) processing stage. The secondary velocity vector coordinates are output to users.
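The LSM step in this chain can be sketched as follows. The line-of-sight unit vectors and range-rate values are illustrative, and the receiver clock drift (normally a fourth unknown) is assumed known and removed, so only the three velocity components are estimated via the normal equations.

```python
# Hypothetical line-of-sight unit vectors (receiver -> satellite) for
# four satellites, in a local Cartesian frame.
los = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0),
       (0.577, 0.577, 0.577)]

# Range-rate (Doppler-derived) measurements in cm/s, here generated
# exactly from an assumed true velocity so the fit recovers it.
v_true = (100.0, -50.0, 25.0)
rates = [sum(e[i] * v_true[i] for i in range(3)) for e in los]

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def lsm_velocity(los, rates):
    """Least squares velocity from range rates: solve (H^T H) v = H^T y."""
    ata = [[sum(e[i] * e[j] for e in los) for j in range(3)]
           for i in range(3)]
    aty = [sum(e[i] * r for e, r in zip(los, rates)) for i in range(3)]
    d = det3(ata)
    v = []
    for k in range(3):  # Cramer's rule on the 3x3 normal equations
        m = [row[:] for row in ata]
        for i in range(3):
            m[i][k] = aty[i]
        v.append(det3(m) / d)
    return v

v_est = lsm_velocity(los, rates)
```

With noisy Doppler estimates the same solve yields the primary velocity estimates that the secondary (after-coordinate) filters then smooth.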
The present paper considers different velocity measurement methods, which differ from one another in the tracking filters used in primary and secondary processing and in the EB. Primary filters operate at the same control frequency Fc as the PLL (for instance, Fc = 200 Hz), while the LSM and secondary filters operate at a lower frequency FE < Fc (for example, FE = 100 Hz or FE = 10 Hz). To go from Fc to FE, intermediate samples are discarded. The EB generates either primary estimates of the instantaneous frequency or instantaneous phase of the input signal, or primary estimates of the input phase averaged over the control period Tc = Fc^-1. These primary estimates are fed to the primary processing filters, whose outputs are either secondary estimates of the instantaneous frequency, or estimates of the frequency and its derivative averaged over the period Tc, which are then recalculated into estimates of the instantaneous frequency. From the thinned instantaneous phase estimates, increments of these estimates over the period TE = FE^-1 are sometimes generated. Primary estimates of the velocity vector coordinates, either instantaneous or averaged over the period TE, are fed to the input of the secondary processing filters. In the first case, secondary estimates of the instantaneous velocity vector coordinates are obtained directly at the filter outputs. In the second case, the filter outputs are estimates of velocities and accelerations averaged over the period TE, which are then recalculated into estimates of the instantaneous velocity vector coordinates.
It has been shown that the frequency estimation typically used in analog systems produces a biased frequency (and hence velocity) estimate when a receiver with digital PLLs experiences constant non-zero acceleration. Various unbiased estimation algorithms have also been considered.
Depending on the scales of the periodic irregularities in the problem under study, a solution arises which describes two (“double-deck”) or three (“triple-deck”) boundary layers on the plate. We mainly study the equations describing the velocity oscillations in the boundary layers arising from the periodic irregularities and show their common nature.
The principal possibility of creating a compact optically pumped magnetic sensor for MEG, operating over a wide magnetic field range, is experimentally demonstrated.