Procedia Computer Science. 3rd International Conference on Information Technology and Quantitative Management, ITQM 2015
Welcome to the Third International Conference on Information Technology and Quantitative Management (ITQM 2015), held July 21-24, 2015, in Rio de Janeiro, Brazil. The theme of ITQM 2015 is "Exploring Data Science in IT and Quantitative Management". ITQM 2015 is organized by the International Academy of Information Technology and Quantitative Management (IAITQM) and Ibmec/RJ, Brazil.
The huge number of internet-connected devices is changing the way people live, and people now look for ways to operate all their devices as efficiently as possible. Cognitive maps are used as models for strategic-level problem analysis in enterprises. In this paper, the Web of Service concept is defined, and a business model for a Social Web of Service-based company is proposed. We also develop a mathematical model of customer experience based on cognitive maps, which can then be applied to the proposed business model to improve customer experience management in the enterprise.
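Cognitive maps of this kind are often formalized as fuzzy cognitive maps: weighted digraphs whose concept activations are updated iteratively. As a hedged illustration only (the concepts, weights, and sigmoid update rule below are hypothetical, not taken from the paper's model), a minimal update loop in Python might look like:

```python
import numpy as np

def fcm_step(state, W):
    """One update of a fuzzy cognitive map: each concept's new
    activation is a sigmoid-squashed weighted sum of its inputs."""
    return 1.0 / (1.0 + np.exp(-(W.T @ state)))

# Hypothetical concepts: [service quality, response time, satisfaction].
# W[i, j] is the causal weight of concept i on concept j.
W = np.array([
    [0.0, 0.0,  0.8],   # better service quality raises satisfaction
    [0.0, 0.0, -0.5],   # longer response time lowers satisfaction
    [0.3, 0.0,  0.0],   # satisfaction feeds back into perceived quality
])
state = np.array([0.7, 0.4, 0.5])   # initial activations in [0, 1]
for _ in range(20):                  # iterate towards a fixed point
    state = fcm_step(state, W)
```

Iterating the update to a fixed point gives the steady-state activation of each concept, e.g. the modeled level of customer satisfaction under the assumed causal weights.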
This paper analyzes the responses to our survey, which collected users' basic information, social status, experience with social networking, and attitudes towards social-network-integrated e-health information systems. The survey findings show that social media users need dedicated recommendation and guidance services, especially people in urban centers with busy schedules. These users prefer to receive recommendations for minor health problems rather than go to a hospital or clinic, spend time waiting, and perhaps even return home without a proper consultation from a doctor. We therefore propose an architecture that integrates social media analytics with e-health information systems. Our findings, being the result of a controlled survey, also raise issues of respondent trust as well as security and privacy concerns in healthcare.
Everyone is talking about big data and how it will transform government. However, looking past the excitement, questions abound. How can big data be used to make intelligent decisions? Perhaps most importantly, what value will it really deliver to the government and the citizenry it serves? By reviewing the literature and summarizing insights from a series of business reports and interviews with Chief Information Officers (CIOs) of public-sector organizations and leading companies, we offer a survey for both practitioners and researchers interested in understanding big data in the public sector of the Russian Federation. Remarkable changes are currently taking place in the IT industry of the Russian Federation: new Federal Government strategies, sanctions, and a tendency towards import substitution. The paper assesses the internal and external factors that affect big data development in the Russian public sector and presents a comparative analysis of Russian and world practices in this area.
This paper is devoted to modern, axiom-based approaches to estimating external conflict in the theory of evidence. The conflict measure is defined on the set of beliefs obtained from several sources of information. It is shown that the conflict measure should be a monotone set function with respect to sets of beliefs. Some robust procedures for evaluating the conflict measure that are stable under small changes in the evidence are introduced and discussed. An analysis of the conflict among investment banks' forecasts of the share values of Russian companies is presented. In this analysis, the conflict measure estimates the inconsistency of the investment banks' recommendations, while the Shapley values of this measure on the set of evidences characterize each bank's contribution to the overall conflict. The relationship between conflict and forecast precision is also investigated.
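The abstract combines two standard ingredients: a Dempster-style conflict measure (total mass assigned to jointly contradictory focal elements) and the Shapley value of that set function over the sources. As a hedged sketch only (the banks' mass functions below are invented for illustration, and the paper's actual conflict measure may differ), the attribution of conflict to individual sources could be computed as:

```python
from itertools import combinations, product
from math import factorial

def conflict(evidences):
    """Dempster-style conflict: total product mass on combinations of
    focal elements whose intersection is empty."""
    if len(evidences) < 2:
        return 0.0
    total = 0.0
    for combo in product(*(m.items() for m in evidences)):
        sets, masses = zip(*combo)
        if not frozenset.intersection(*sets):
            p = 1.0
            for mass in masses:
                p *= mass
            total += p
    return total

def shapley(players, v):
    """Shapley value of each player for the set function v."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        val = 0.0
        for r in range(n):
            for S in combinations(others, r):
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                val += w * (v([*S, i]) - v(list(S)))
        phi[i] = val
    return phi

# Hypothetical forecasts: each bank assigns mass to share-price outcomes.
banks = {
    "bank_A": {frozenset({"up"}): 0.8, frozenset({"up", "flat"}): 0.2},
    "bank_B": {frozenset({"down"}): 0.6, frozenset({"up", "flat"}): 0.4},
    "bank_C": {frozenset({"up"}): 0.5, frozenset({"flat"}): 0.5},
}
v = lambda names: conflict([banks[name] for name in names])
phi = shapley(list(banks), v)   # each bank's contribution to the conflict
```

By the efficiency property of the Shapley value, the contributions sum exactly to the conflict of the whole set of evidences, which matches the abstract's description of the decomposition.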
This work is dedicated to the development, improvement, and application of double-layer interval weighted graphs (DLIG) for non-stationary time series forecasting. The model is intended as a universal, easy-to-use tool for modeling and forecasting non-stationary time series. We consider the double-layer version of the model because it conveys the main idea most clearly, although additional layers can be added for other purposes. The first layer of the graph is built from the empirical fluctuations of the system and captures the most likely fluctuations observed during training. The second layer, as a superstructure over the first, represents the degree of modeling error and is connected to the first-layer nodes by edges. The second layer implements supervised training aimed at minimizing that error.
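The abstract does not give the construction details of the layers, so the following is only a speculative sketch of what a first layer might look like: one-step fluctuations are binned into intervals (the nodes), transition counts weight the edges, and a forecast adds the midpoint of the most likely next interval to the last observation. The function names `build_first_layer` and `forecast_next` are hypothetical, not taken from the paper.

```python
import numpy as np

def build_first_layer(series, n_bins=5):
    """Hypothetical sketch: bin one-step fluctuations into intervals
    (nodes) and count transitions between bins (weighted edges)."""
    diffs = np.diff(series)
    edges = np.linspace(diffs.min(), diffs.max(), n_bins + 1)
    bins = np.clip(np.digitize(diffs, edges) - 1, 0, n_bins - 1)
    W = np.zeros((n_bins, n_bins))
    for a, b in zip(bins[:-1], bins[1:]):
        W[a, b] += 1
    mids = (edges[:-1] + edges[1:]) / 2   # representative fluctuation per bin
    return W, mids, bins

def forecast_next(series, W, mids, bins):
    """Forecast: last value plus the midpoint of the most likely
    next fluctuation interval from the current node."""
    nxt = int(np.argmax(W[bins[-1]]))
    return series[-1] + mids[nxt]
```

In the paper's full model, a second layer would sit on top of such a graph and track forecasting errors to refine these transitions; that part is omitted here.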