Proceedings of the 7th Spring/Summer Young Researchers’ Colloquium on Software Engineering, SYRCoSE 2013
The issue contains the papers presented at the 7th Spring/Summer Young Researchers' Colloquium on Software Engineering (SYRCoSE 2013) held in Kazan, Russia, on 30th and 31st of May, 2013. Papers were selected through a competitive peer review process carried out by the program committee. Both regular and research-in-progress papers were considered acceptable for the colloquium.
The topics of the colloquium include modeling of computer systems, software testing and verification, parallel and distributed systems, information search and data mining, image and speech processing and others.
Software development involves many kinds of specialists at once: database designers, business analysts, user interface designers, programmers, testers, etc. As a result, system design produces and uses a variety of models built from different points of view, at different levels of detail, and described in different modeling languages. This creates a need for model transformation, both between levels of the hierarchy and, within one level, between modeling languages, in order to build a unified model of the system and to export models to external systems. The MetaLanguage system is intended for creating visual domain-specific languages. This paper considers approaches to developing the model transformation component of the MetaLanguage system. The component performs vertical and horizontal model transformations of the “model-to-text” and “model-to-model” kinds. These transformations are based on graph grammars defined by production rules, where each rule consists of a left-hand side and a right-hand side. The algorithm for finding the left-hand side of a rule in the source model and the algorithms for applying the right-hand side of a rule are described. Transformation definitions for models in ERD notation are presented as an example.
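The rule-based rewriting described above can be sketched in a few lines. This is a deliberately minimal illustration, not the actual MetaLanguage implementation: the model is a set of labelled edges, and a hypothetical rule rewrites every edge whose label matches its left-hand side into an edge carrying the right-hand side's label.

```python
# Minimal sketch of production-rule rewriting over a model graph.
# The edge representation and the rule shape are assumptions for
# illustration; MetaLanguage's real rules are richer graph patterns.

# Model as a set of labelled edges: (source, label, target).
model = {
    ("Customer", "entity", "Order"),
    ("Order", "entity", "Product"),
}

def apply_rule(edges, lhs_label, rhs_label):
    """Rewrite every edge whose label matches the rule's left-hand
    side into an edge carrying the right-hand side's label."""
    rewritten = set()
    for src, label, dst in edges:
        if label == lhs_label:                     # left-hand side matched
            rewritten.add((src, rhs_label, dst))   # emit right-hand side
        else:
            rewritten.add((src, label, dst))       # keep edge unchanged
    return rewritten

# A "model-to-model" step: rename ERD 'entity' links to
# relational 'table_ref' links.
result = apply_rule(model, "entity", "table_ref")
```

A "model-to-text" transformation would differ only in its right-hand side, emitting strings instead of edges.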
Nested Petri nets are an extension of the Petri net formalism with net tokens for modelling multi-agent distributed systems with complex structure. While they have a number of interesting properties, NP-nets have been lacking tool support. In this paper we present NPNtool, a toolset for NP-nets that can be used to edit NP-net models and to check liveness in a compositional way. An algorithm for checking the m-bisimilarity needed for compositional liveness checking has been developed. Experimental results of using the toolset to model and check liveness of the classical dining philosophers problem are provided.
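As background for the liveness checking discussed above, the basic Petri net firing rule can be sketched as follows. This is an illustrative flat Petri net only, far simpler than nested Petri nets (no net tokens), using two dining philosophers' forks as the marking.

```python
# Illustrative flat Petri net: places hold token counts, and a
# transition (inputs, outputs) fires when all input places are marked.
# This is background for the formalism, not NPNtool's implementation.

def enabled(marking, transition):
    """A transition is enabled if every input place holds a token."""
    inputs, _ = transition
    return all(marking.get(p, 0) >= 1 for p in inputs)

def fire(marking, transition):
    """Consume one token per input place, produce one per output place."""
    inputs, outputs = transition
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] = m.get(p, 0) + 1
    return m

# Dining philosophers fragment: philosopher 1 needs both forks to eat.
marking = {"fork1": 1, "fork2": 1}
eat1 = (("fork1", "fork2"), ("eating1",))

if enabled(marking, eat1):
    marking = fire(marking, eat1)
# marking is now {"fork1": 0, "fork2": 0, "eating1": 1}
```

Liveness asks whether every transition can always eventually fire again; the classical deadlock arises when each philosopher holds one fork and no "eat" transition is enabled.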
This work presents an approach to modeling and simulating Wireless Sensor Networks (WSNs) with the nested Petri net formalism. A tool for modeling and simulating WSNs must take resources, time, and sensor cost into account. Although classical Petri nets are well suited for modeling dynamic concurrent systems, they are not expressive enough to model systems of distributed agent-sensors. The proposed tool enables the user to model WSNs visually, simulate them, and find WSN defects at early stages of development.
Model-based approaches are now widely used in the development of information systems. Models can be changed by developers during the development process, and they can also be transformed automatically: a visual model can be translated into program code, or transformed from one modeling language into another. The most suitable formal representation of a visual model is the metagraph, and the clearest way to describe changes to visual models is the approach based on graph grammars (graph rewriting). Applying a graph grammar to a model graph, however, requires finding a subgraph isomorphic to the left part of a grammar rule, which is an NP-complete problem. Several algorithms have been developed for this problem, designed for ordinary graphs and hypergraphs. In this article we consider some of them as applied to metagraphs representing models.
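The subgraph isomorphism step mentioned above can be sketched with a naive backtracking search. This is illustrative only: practical matchers (e.g. Ullmann's algorithm or VF2) prune far more aggressively, and metagraphs need extra handling for nested vertices, which is the article's subject.

```python
# Naive backtracking search for an injective mapping of the pattern
# (the left part of a grammar rule) onto a subgraph of the host graph.
# Graphs are sets of directed edges (a, b).

def find_match(pattern_edges, host_edges):
    p_nodes = sorted({n for e in pattern_edges for n in e})
    h_nodes = {n for e in host_edges for n in e}

    def extend(mapping, remaining):
        if not remaining:
            # All pattern nodes assigned: check edges are preserved.
            if all((mapping[a], mapping[b]) in host_edges
                   for a, b in pattern_edges):
                return dict(mapping)
            return None
        node, rest = remaining[0], remaining[1:]
        for candidate in h_nodes:
            if candidate in mapping.values():
                continue                 # keep the mapping injective
            mapping[node] = candidate
            found = extend(mapping, rest)
            if found:
                return found
            del mapping[node]            # backtrack
        return None

    return extend({}, p_nodes)

host = {(1, 2), (2, 3), (3, 1)}          # a directed 3-cycle
pattern = {("a", "b"), ("b", "c")}       # a directed path of length 2
match = find_match(pattern, host)        # some embedding, e.g. a->1, b->2, c->3
```

The exponential worst case of this search is exactly why the choice of matching algorithm matters when rules are applied to large model graphs.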
This paper describes our approach to document search based on the ontological resources and graph models. The approach is applicable in local networks and local computers. It can be useful for ontology engineering specialists or search specialists.
Today many problems within a particular problem domain can be solved using a DSL. To use a DSL, one must either create it or select it from existing ones. Creating a completely new DSL in most cases requires high financial and time costs. Selecting an appropriate existing DSL is a labor-intensive task, because walking through every DSL and deciding whether it can handle the problem is done manually. This problem exists because there is no DSL repository and no tools for matching a suitable DSL to a specific task. This paper presents an approach to automated detection of DSL requirements (as an ontology-based structure) and automated matching of DSLs to a specific task.
The volume of data that information systems operate on has been increasing rapidly. Data logs have long been known as a useful tool for solving a range of tasks. The amount of information written to a log over a given period leads to the so-called “big data” problem. Process-aware information systems (PAIS) make it possible to develop models of process interaction and to monitor both the accuracy of process execution and the correctness of interaction between processes. Studying PAIS logs in order to extract knowledge about the processes and construct their models belongs to the discipline of process mining. Developed tools for process mining are available, both commercial and free. We propose the concept of a new tool, DPMine, for building multistage process mining models from individual processing units connected to each other in a processing graph. The resulting model is executed (simulated) by running the process incrementally from beginning to end.
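The processing-graph idea can be sketched as units that each consume and produce log data, wired in sequence. The unit names and log format below are illustrative assumptions, not DPMine's actual API.

```python
# Hypothetical sketch of a two-unit processing graph over an event
# log: a filtering unit feeding a basic mining unit. Illustrative
# only; DPMine's real units and connections are not shown here.

log = [
    {"case": 1, "activity": "receive"},
    {"case": 1, "activity": "check"},
    {"case": 2, "activity": "receive"},
    {"case": 2, "activity": "ship"},
]

def filter_unit(events, activity):
    """Keep only events of cases that contain the given activity."""
    cases = {e["case"] for e in events if e["activity"] == activity}
    return [e for e in events if e["case"] in cases]

def miner_unit(events):
    """Count direct-follows pairs per case - a basic mining step."""
    by_case = {}
    for e in events:
        by_case.setdefault(e["case"], []).append(e["activity"])
    pairs = {}
    for trace in by_case.values():
        for a, b in zip(trace, trace[1:]):
            pairs[(a, b)] = pairs.get((a, b), 0) + 1
    return pairs

# Units connected in sequence form a tiny processing graph;
# executing it incrementally yields the mined relation.
model = miner_unit(filter_unit(log, "ship"))
# model: {("receive", "ship"): 1}
```

In a real processing graph the units would also branch and merge, and each unit would stream its output to the next rather than materialize full lists.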
This article describes the implementation of an aggregator service for real estate market offers. Advertisements are analyzed with the aid of ontologies. The set of ontologies describing specific websites can be extended, so the aggregator can be used with many diverse resources.
Creation of test programs and analysis of their execution is the main approach to system-level verification of microprocessors. Many techniques have been proposed to automate test program generation, ranging from completely random to well-directed ones. However, no “silver bullet” has been found. In good industrial practice, various methods are combined to complement each other. Unfortunately, there is no solution that could integrate all (or at least most) of the techniques in a single framework. Engineers are forced to use a number of tools, which leads to the following problems: (1) duplicate data must be maintained (each tool uses its own representation of the target design); (2) to be used together, tools need to be integrated (engineers have to deal with different formats and interfaces). This paper proposes the concept of an extendable framework (MicroTESK) that follows a unified methodology for defining test program generation techniques. The framework supports random and combinatorial generation and, more importantly, can be easily extended with new techniques implemented as framework plugins.
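The idea of random and combinatorial generators coexisting behind one interface can be sketched as follows. The instruction set, registers, and function names are illustrative assumptions and do not reflect MicroTESK's actual API.

```python
# Hypothetical sketch: two test-program generators emitting the same
# representation (lists of instruction/operand pairs), so either can
# be plugged into one generation pipeline.
import itertools
import random

INSTRUCTIONS = ["ADD", "SUB", "LOAD", "STORE"]
REGISTERS = ["r1", "r2"]

def random_generator(length, seed=0):
    """Purely random instruction sequence (seeded for reproducibility)."""
    rng = random.Random(seed)
    return [(rng.choice(INSTRUCTIONS), rng.choice(REGISTERS))
            for _ in range(length)]

def combinatorial_generator():
    """Exhaustive enumeration of all instruction/register pairs."""
    return list(itertools.product(INSTRUCTIONS, REGISTERS))

# Because both emit the same representation, a test template can mix
# them - the essence of combining generation techniques in one tool.
tests = combinatorial_generator() + random_generator(3)
```

A plugin-based framework generalizes this by letting new generator functions register themselves under the same interface.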