Designing models of educational process execution in a secure distributed system
The paper describes the methods applied to create models of educational processes. These models were prepared for implementation in a secure distributed system. The methods for model search, implementation, and optimization were examined in relation to the quality of education and distance learning. In order to improve metadata, feedback-process modeling was introduced.
Multi-agent systems (MAS) with many levels and a dynamic hierarchical structure are widely used in telecommunications, transport, social, and other fields. Assuring the correctness of such systems is an important and topical issue. In this paper we consider modeling MAS with dynamic structure with the help of Nested Petri nets (NPNs). An NPN is an extension of Petri nets within the ‘nets-within-nets’ paradigm, in which tokens in a Petri net are themselves Petri nets. Net tokens have autonomous behavior and communicate with each other. In the model-driven software development process, generating code from a designed model is the most error-prone phase. The paper presents an algorithm for the automatic translation of NPN models of MAS into systems of distributed components. The suggested translation respects the distributed structure of the source model's components, preserves some important behavioral properties (such as safety, liveness, and conditional liveness), and supports fairness of the system execution. The translation allows automating the development of distributed multi-agent systems based on nested Petri net models. A translator prototype based on EJB technology was implemented and tested.
This study investigates the main problems of automating and optimizing educational processes with the help of BPMS and Big Data. Questions concerning process modeling are raised, particularly those related to the integration of process-oriented and business analysis systems. The main goal of the study is to find possible new ways to implement the ideas of metadata integrity, closed-loop process controls, data storage adapters, and hidden process discovery. These ideas are shown to be essential to the new complex type of information systems and the corresponding methodology. A new structure for this type of system is introduced, with brief explanations of the solutions and methodologies chosen for the task. The concept of process repositories, which can be found in previous works, is developed further: process repositories are shown to be the basis for creating standardized interoperable components for the global educational information system. A working prototype partially implementing the concept is demonstrated for the case of online learning resource usage. The prototype covers the key aspects: metadata descriptions, data gathering, and process mining. This leads to real prototype implementations of all elements introduced as parts of the complex theoretical structure. The study proposes means to improve the prototype and build the complete system out of it. In conclusion, examples of working applications based on the idea are listed. The new complex structure, the methodology description, and the working prototypes are the results of this study.
This paper presents how machine learning algorithms and statistical methods can be applied to data management in hybrid data storage systems. Typically, two different storage types are used to store data in such systems. Keeping infrequently used data on cheap, slow storage of the first type and frequently used data on fast, expensive storage of the second type helps to achieve an optimal performance/cost ratio for the system. We use classification algorithms to estimate the probability that the data will be frequently used in the future. Then, using risk analysis, we decide where the data should be stored. We show how to estimate the optimal number of data replicas using regression algorithms and a Hidden Markov Model. Based on the probability, the risks, and the optimal number of data replicas, our recommendation system finds an optimal data distribution in the hybrid data storage system. We present the results of implementing our method in the LHCb hybrid data storage.
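The probability-plus-risk tiering decision can be sketched as follows. This is a minimal, self-contained illustration rather than the paper's actual models: the logistic weights, the misplacement cost constants, and the replica heuristic are all placeholder assumptions standing in for the trained classifier, the risk analysis, and the regression/HMM replica estimator.

```python
import math

# Placeholder feature weights for the hot-data classifier
# (a trained classifier would supply these in practice).
WEIGHTS = {"recent_accesses": 0.8, "file_age_days": -0.05}
BIAS = -1.0

def hot_probability(features):
    """Estimate the probability that the data will be frequently used."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def choose_tier(p_hot, cost_miss_slow=10.0, cost_waste_fast=1.0):
    """Risk analysis: compare the expected cost of keeping hot data
    on the slow tier against the expected cost of wasting fast,
    expensive storage on cold data."""
    risk_slow = p_hot * cost_miss_slow          # hot data stuck on slow tier
    risk_fast = (1 - p_hot) * cost_waste_fast   # cold data on fast tier
    return "fast" if risk_slow > risk_fast else "slow"

def replica_count(p_hot, max_replicas=4):
    """Toy stand-in for the regression/HMM estimator: keep more
    replicas of data that is more likely to be hot."""
    return max(1, round(p_hot * max_replicas))
```

For example, a file with many recent accesses yields a high `hot_probability`, which drives both the placement (`"fast"`) and a larger replica count; the cost constants encode how much worse a cache miss on slow storage is than wasted fast capacity.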
Coordinating several distributed system components is an error-prone task, since the interaction of several simple components can generate rather sophisticated behavior. Verification of such systems is very difficult or even impossible because of the so-called state space explosion problem, in which the size of the system's reachability set grows exponentially with the number of interacting agents. To overcome this problem, several approaches to constructing correct models of interacting agents in a compositional way have been proposed in the literature. They define different properties and conditions that ensure the correct behavior of interacting agents. Checking these conditions may, in turn, be quite a problem. In this paper, we propose patterns for the correct composition of component models. To justify these patterns we use special net morphisms; however, to apply the patterns the user does not need to be familiar with the underlying theory.
Checking the correctness of distributed systems is one of the most difficult and urgent problems in software engineering. A combined toolset for the verification of real-time distributed systems (RTDS) is described. RTDSs are specified as statecharts in the Unified Modeling Language (UML). The semantics of statecharts is defined by means of hierarchical timed automata. The combined toolset consists of a UML statechart editor, a verification tool for model checking networks of real-time automata in UPPAAL, and a translator of UML statecharts into networks of timed automata. The focus is on the translation algorithm from UML statecharts into networks of hierarchical timed automata. To illustrate the proposed approach to the verification of RTDSs, a toy example of a real-time crossroad traffic control system is analyzed.
The explosive growth of raster data volumes in numerical simulations, remote sensing, and other fields stimulates the development of new, efficient data processing techniques. For example, the in-situ approach queries data in diverse file formats, avoiding the time-consuming import phase. However, after data are read from a file, their further processing is always carried out with code developed almost from scratch. Standalone command-line tools are one of the most popular means of in-situ processing of raster files. Decades of development and feedback have resulted in numerous feature-rich, elaborate, free, and quality-assured tools, mostly for a single machine. The paper reports the current development state of ChronosServer, a distributed system that partially delegates in-situ raster processing to external tools. The new delegation approach is anticipated to readily provide a rich collection of raster data operations at scale.
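The delegation idea can be sketched as a thin wrapper that builds a command line for an existing raster tool and either runs it locally or hands the command to a scheduler. This is a hypothetical illustration, not ChronosServer's actual API: the function name and fallback behavior are assumptions, while the flag layout follows GDAL's real `gdal_translate -projwin` subwindow syntax.

```python
import shutil
import subprocess

def delegate_crop(src, dst, xmin, ymin, xmax, ymax, tool="gdal_translate"):
    """Delegate a subwindow crop of an in-situ raster file to an
    external tool.  gdal_translate's -projwin takes the window as
    ulx uly lrx lry, i.e. (xmin, ymax, xmax, ymin)."""
    cmd = [tool, "-projwin", str(xmin), str(ymax),
           str(xmax), str(ymin), src, dst]
    if shutil.which(tool) is None:
        # Tool not installed locally: return the assembled command
        # so a scheduler could dispatch it to another cluster node.
        return cmd
    subprocess.run(cmd, check=True)  # run the external tool in place
    return cmd
```

Each additional operation (reprojection, resampling, format conversion) would need its own small adapter like this one, which is exactly the mapping from distributed-system operations to mature single-machine tools that the delegation approach relies on.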
The 4th International Conference on Educational Data Mining (EDM 2011) brings together researchers from computer science, education, psychology, psychometrics, and statistics to analyze large datasets to answer educational research questions. The conference, held in Eindhoven, The Netherlands, July 6-9, 2011, follows the three previous editions (Pittsburgh 2010, Cordoba 2009 and Montreal 2008), and a series of workshops within the AAAI, AIED, EC-TEL, ICALT, ITS, and UM conferences. The increase of e-learning resources such as interactive learning environments, learning management systems, intelligent tutoring systems, and hypermedia systems, as well as the establishment of state databases of student test scores, has created large repositories of data that can be explored to understand how students learn. The EDM conference focuses on data mining techniques for using these data to address important educational questions.
The annual ACM SIGMOD/PODS Conference is a leading international forum for database researchers, practitioners, developers, and users to explore cutting-edge ideas and results, and to exchange techniques, tools, and experiences. The conference includes a fascinating technical program with research and industrial talks, tutorials, demos, and focused workshops. It also hosts a poster session to learn about innovative technology, an industrial exhibition to meet companies and publishers, and a careers-in-industry panel with representatives from leading companies.