Building the knowledge base of an expert system for assessing military security: a methodological aspect
The methodological aspects of the knowledge base of an expert system.
Proceedings of the 25th International Conference on Database and Expert Systems Applications - DEXA 2014, Munich, Germany, September 1-4, 2014.
Nowadays, many test generation tools are developed and applied to create tests for both software applications and hardware designs. Taking into account the size and complexity of modern projects, there is an urgent need for "smart" tools that would help maximize test coverage while keeping the required effort and time to a minimum. Although each project is unique in some sense, there is a set of common generation techniques applied across a wide range of projects (random tests, combinatorial tests, tests for corner cases, etc.). In addition, projects belonging to specific domains tend to share similar test cases or use similar heuristics to generate them. A natural way to improve the quality of testing is to make the most of the experience gained while working on different projects or performing testing at different stages of the same project. To achieve this goal, a knowledge base holding information relevant to test generation would be of great help. It would facilitate reuse of test cases and generation algorithms and would allow sharing knowledge of "interesting" situations that can occur in a system under test. The paper proposes the concept of a knowledge base for test generation that can be used in a wide range of test generation tools. At ISPRAS, it is applied in tools that create test programs for microprocessors. The knowledge base is designed to store information on widely used test generation techniques and on test situations that can occur in a microprocessor design under verification.
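To make the idea concrete, below is a minimal sketch, in Python, of how such a knowledge base might store test situations and generation techniques and support their reuse across projects. All names (TestSituation, GenerationTechnique, TestKnowledgeBase) are hypothetical and do not describe the actual ISPRAS tools.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class TestSituation:
    # An "interesting" state of the system under test, e.g. a cache miss.
    name: str
    description: str
    tags: List[str] = field(default_factory=list)

@dataclass
class GenerationTechnique:
    # A named generation heuristic: random, combinatorial, corner cases, etc.
    name: str
    generate: Callable[[TestSituation], List[str]]  # produces abstract test cases

class TestKnowledgeBase:
    def __init__(self) -> None:
        self._situations: Dict[str, TestSituation] = {}
        self._techniques: Dict[str, GenerationTechnique] = {}

    def add_situation(self, situation: TestSituation) -> None:
        self._situations[situation.name] = situation

    def add_technique(self, technique: GenerationTechnique) -> None:
        self._techniques[technique.name] = technique

    def situations_for(self, tag: str) -> List[TestSituation]:
        # Reuse point: find situations shared by projects in the same domain.
        return [s for s in self._situations.values() if tag in s.tags]

# Example: register a corner-case situation once and reuse it in other projects.
kb = TestKnowledgeBase()
kb.add_situation(TestSituation("cache-miss", "load that misses the L1 cache", ["memory"]))
for situation in kb.situations_for("memory"):
    print(situation.name, "-", situation.description)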
The paper presents a framework for fast text analytics developed during the Texterra project. Texterra is a technology for multilingual text mining based on novel text processing methods that exploit knowledge extracted from user-generated content. It delivers a fast, scalable solution for text mining without expensive customization. Depending on the use case, Texterra can be used as a library, an extensible framework, or a scalable cloud-based service. The paper describes the details of the project, its use cases, and evaluation results for all developed tools. Texterra uses Wikipedia as its primary knowledge source to facilitate text mining in arbitrary documents (news, blogs, etc.). We mine the graph of Wikipedia's links to compute semantic relatedness between all concepts described in Wikipedia. As a result, we build a semantic graph with more than 5 million concepts. This graph is exploited to interpret the meanings and relationships of terms in text documents. Despite its large size, Wikipedia does not contain information about many domain-specific concepts. To increase the applicability of the technology, we developed several automatic knowledge extraction tools. These include systems for knowledge extraction from MediaWiki resources and Linked Data resources, as well as a system for extending the knowledge base with concepts described in arbitrary text documents using original information extraction techniques. In addition, the use of information from Wikipedia makes it easy to extend Texterra to support new natural languages. The paper presents an evaluation of Texterra applied to different text processing tasks (part-of-speech tagging, word sense disambiguation, keyword extraction, and sentiment analysis) for English and Russian.
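As an illustration of the link-based approach, the following Python sketch computes semantic relatedness between two concepts from the sets of articles that link to them, using the well-known Milne-Witten overlap measure. This is one common link-based measure rather than the exact formula used in Texterra, and the input data in the example is made up.

import math
from typing import Dict, Set

def relatedness(a: str, b: str, inlinks: Dict[str, Set[str]], total_articles: int) -> float:
    # Milne-Witten measure: score in [0, 1], higher means more related.
    links_a, links_b = inlinks.get(a, set()), inlinks.get(b, set())
    common = links_a & links_b
    if not links_a or not links_b or not common:
        return 0.0
    distance = (math.log(max(len(links_a), len(links_b))) - math.log(len(common))) / (
        math.log(total_articles) - math.log(min(len(links_a), len(links_b)))
    )
    return max(0.0, 1.0 - distance)

# Toy inlink sets (hypothetical data); real sets come from the Wikipedia link graph.
inlinks = {
    "Moscow": {"Russia", "Kremlin", "Volga"},
    "Saint Petersburg": {"Russia", "Kremlin", "Neva"},
}
print(relatedness("Moscow", "Saint Petersburg", inlinks, total_articles=5_000_000))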
This volume brings together twenty-four articles by eminent historians from around Europe, presented as papers at the international conference on the Crimean War (1853-1856) held in Warsaw in 2007.
The article proposes a new computer architecture, called object-attribute. A computer with this architecture has all the properties needed for artificial intelligence: abstraction of data and programs, high concurrency, isomorphism of data and programs (i.e., the possibility of painlessly changing program and data structures), training and self-training of the computer system, dataflow execution, integration of data and programs, generation of object descriptions from simple to complex, and implementation of distributed computer systems.
We investigate conjunctive query inseparability of description logic (DL) knowledge bases (KBs) with respect to a given signature, a fundamental problem for KB versioning, module extraction, forgetting and knowledge exchange. We study the data and combined complexity of deciding KB query inseparability for fragments of Horn-ALCHI, including the DLs underpinning OWL 2 QL and OWL 2 EL. While all of these DLs are P-complete for data complexity, the combined complexity ranges from P to EXPTIME and 2EXPTIME. We also resolve two major open problems for OWL 2 QL by showing that TBox query inseparability and the membership problem for universal UCQ-solutions in knowledge exchange are both EXPTIME-complete for combined complexity.
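For reference, the underlying notion can be sketched as follows (a standard formulation; the paper's exact definition may differ in detail): two KBs K1 and K2 are Σ-query inseparable if they give the same answers to all conjunctive queries formulated in the signature Σ, i.e., in LaTeX notation,

\[
  \mathcal{K}_1 \equiv_{\Sigma} \mathcal{K}_2
  \quad\text{iff}\quad
  \mathcal{K}_1 \models q(\vec{a}) \;\Longleftrightarrow\; \mathcal{K}_2 \models q(\vec{a})
  \ \text{ for every CQ } q(\vec{x}) \text{ over } \Sigma \text{ and every tuple } \vec{a} \text{ of individual names.}
\]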
I give an explicit formula for the (set-theoretical) system of resultants of m+1 homogeneous polynomials in n+1 variables.
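As a reminder of the simplest classical case (two binary forms, i.e. m = n = 1 in this numbering), the resultant is the determinant of the Sylvester matrix; for two binary quadratics f = a_0 x^2 + a_1 xy + a_2 y^2 and g = b_0 x^2 + b_1 xy + b_2 y^2 it reads, in LaTeX notation,

\[
  \operatorname{Res}(f,g) =
  \det\begin{pmatrix}
    a_0 & a_1 & a_2 & 0 \\
    0   & a_0 & a_1 & a_2 \\
    b_0 & b_1 & b_2 & 0 \\
    0   & b_0 & b_1 & b_2
  \end{pmatrix},
\]

which vanishes exactly when f and g have a common nontrivial zero over an algebraically closed field. This standard example only illustrates the notion; the general system of resultants treated in the paper concerns m+1 polynomials in n+1 variables.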