Big Data Processing Technologies in Logistics and Supply Chain Management
Nowadays, it is necessary to capture, accumulate, analyze and manage a huge amount of data while managing logistics processes. However, the world's volume of data is growing at an exponential pace, and current IT tools are becoming inadequate. The Big Data concept was created to address this problem: the idea is to accumulate, store, analyze and manage amounts of data that significantly exceed the capabilities of traditional systems. The article touches upon the definitions of Big Data and Big Data Analytics, which cover modern techniques for processing large data sets in search of hidden patterns that allow better decisions to be made and business efficiency to be improved. In addition, the article examines the main approaches to systematizing the types of analytics and the analytical techniques used to process data and generate conclusions for decision-makers. The author considers the influence of different use cases on processes in supply chains. However, despite the benefits already achieved in implementing Big Data processing technologies, the implementation methodology requires further development.
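As a minimal, purely illustrative sketch of the kind of hidden-pattern search mentioned above (not a technique taken from the article itself), the following snippet counts product categories that are frequently co-shipped; all names and data are invented:

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(transactions, min_support):
    """Count item pairs that co-occur in at least min_support transactions."""
    counts = Counter()
    for items in transactions:
        for pair in combinations(sorted(set(items)), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

# Hypothetical shipment records: each is a set of co-shipped categories.
shipments = [
    {"pallets", "shrink_wrap"},
    {"pallets", "shrink_wrap", "labels"},
    {"labels", "scanners"},
    {"pallets", "shrink_wrap"},
]

print(frequent_pairs(shipments, min_support=3))
# → {('pallets', 'shrink_wrap'): 3}
```

Pairs that clear the support threshold are candidates for rules such as "pallets are usually shipped with shrink wrap", the kind of pattern a planner could act on.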
With an increasing number of companies applying smart manufacturing (Industry 4.0) technologies, and therefore gathering records from multiple enterprise data sources, the potential for big data analytics (BDA) seems limitless. Still, not every firm that has implemented smart manufacturing reports gathering or making use of the big data emerging from those processes, let alone extracting value from them. This study investigates business value creation mechanisms from BDA in smart manufacturing. Relying on several use cases and project stories described in publicly available sources, we analyze key drivers, applications, barriers, success factors, and business benefits of BDA in smart manufacturing. We summarize our findings in a comprehensive framework capturing first- and second-order effects of BDA implementation on Industry 4.0 processes. Our work aims at contributing to the body of knowledge on BDA and smart manufacturing, and at guiding practitioners in identifying and assessing various application scenarios for those technologies.
Predictive maintenance is a powerful maintenance strategy that makes it possible to significantly reduce operation and maintenance costs of public, commercial and industrial environments. It is a complex data-driven process that tries to forecast future states of company assets. On one hand, it requires condition monitoring of components at the machine level. On the other hand, it demands the integration of the collected data with other management information systems. Digitization, and especially the advent of big data science, brings promising opportunities to create effective smart monitoring and predictive maintenance applications. The aim of this research is to examine the possibilities of a predictive maintenance framework based on the design principles of Industry 4.0 and recent developments in distributed computing, Big Data and Machine Learning. It introduces numerous enabling technologies, such as the industrial Internet of things, standardized communication protocols, and edge and cloud computing. Moreover, it takes a deeper look at data analytical techniques and tools, and analyses the performance of well-known machine learning algorithms. The paper proposes the architecture of a predictive maintenance framework based on existing software and hardware solutions. As a proof of concept, a real-life smart heating, ventilation, and air conditioning (HVAC) application system is created and tested to demonstrate the possibilities of the proposed PdM framework.
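The condition-monitoring step that predictive maintenance builds on can be sketched with a simple rolling-statistics heuristic. This is an illustrative stand-in, not the framework or the machine learning algorithms evaluated in the paper, and the sensor values below are invented:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, k=3.0):
    """Flag readings that deviate more than k standard deviations from
    the mean of the preceding window (a toy condition-monitoring rule)."""
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged

# Hypothetical HVAC vibration readings with one injected spike at index 7.
vibration = [0.50, 0.52, 0.49, 0.51, 0.50, 0.51, 0.49, 2.10, 0.50, 0.52]
print(flag_anomalies(vibration))
# → [7]
```

A real PdM pipeline would replace this rule with a trained model and feed the flags into a maintenance planning system, but the input/output shape is the same: a stream of sensor data in, a list of suspect events out.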
High performance querying and ad-hoc querying are commonly viewed as mutually exclusive goals in massively parallel processing databases. At one extreme, a database can be set up to provide the results of a single known query, so that the use of available resources is maximized and response time minimized, but at the cost of all other queries being suboptimally executed. At the other extreme, when no query is known in advance, the database must provide the information without such optimization, normally resulting in inefficient execution of all queries. This paper introduces a novel technique, highly normalized Big Data using Anchor modeling, that provides a very efficient way to store information and utilize resources, thereby providing ad-hoc querying with high performance for the first time in massively parallel processing databases. A case study of how this approach is used for a Data Warehouse at Avito over a two-year period, with estimates for and results of real data experiments carried out in HP Vertica, an MPP RDBMS, is also presented.
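The core idea of anchor modeling, storing every attribute in its own narrow table joined on the anchor's surrogate key, can be shown with a toy in-memory sketch. The real implementation described in the paper uses 6NF tables in HP Vertica; the table names and data below are invented for illustration:

```python
# AD anchor: surrogate ids only; each attribute lives in its own table.
anchor_ad = [1, 2, 3]
attr_ad_title = {1: "bike", 2: "sofa", 3: "laptop"}  # hypothetical AD_TIT
attr_ad_price = {1: 120, 2: 80, 3: 450}              # hypothetical AD_PRC

def query(ids, *attribute_tables):
    """Ad-hoc query: join the anchor with only the requested attributes."""
    return [tuple([i] + [t.get(i) for t in attribute_tables]) for i in ids]

# A price-only query never touches the title table at all.
print(query(anchor_ad, attr_ad_price))
# → [(1, 120), (2, 80), (3, 450)]
# A wider ad-hoc query simply joins more attribute tables on the same key.
print(query(anchor_ad, attr_ad_title, attr_ad_price))
# → [(1, 'bike', 120), (2, 'sofa', 80), (3, 'laptop', 450)]
```

This is why the approach helps ad-hoc workloads in an MPP setting: whatever attributes a new query asks for, only those narrow tables need to be scanned and joined.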
In this article, the problem of Big Data is examined from the standpoint of civil law, in the context of the question of whether the existing mechanisms are sufficient for the purposes of civil regulation of Big Data, or whether a qualitative review of the system of objects of civil rights, including intellectual property, is required. Within the framework of this civil-law discussion, it is proposed to consider Big Data in close connection with the formation of new knowledge, including knowledge derived from its analysis, for the purposes of using it in one's own activity or selling it on the market, and, as a result, to qualify Big Data as a special service based on Big Data technology. The emphasis on "service" focuses attention on the "dynamics" of relations and the subject of regulation. Equally, the inclusion in the concept of indications of its "information and analytical" nature and of "Big Data technology" highlights the relevant specific features. Commenting on the characteristics of various objects of civil rights, the authors note the impossibility of extending the existing legal regimes to Big Data and argue for the expediency of recognising Big Data as a new, non-traditional object of intellectual property. The proposed approach, according to the authors, makes it possible to take into account not only the differentiation of objects of intellectual property in the broadest sense, but also their inherent unity, which is manifested in the granting of special (exclusive) rights to intangible objects that are the results of the activity in question.
The International Conference on Information Systems (ICIS) is the major annual meeting of the Association for Information Systems (AIS), which has over 4,000 members representing universities in over 95 countries worldwide. It is the most prestigious gathering of academics and practitioners in the IS discipline, and provides a forum for networking and sharing of the latest ideas and highest-calibre scientific work in the IS profession. Each year, over 1,000 IS academic professionals from around the world participate in the conference program, which includes about 60 sessions and 180 presentations, in addition to keynotes and panels. The theme of ICIS 2017 is "Transforming Society with Digital Innovation".
The objective is to provide an opportunity for Big Data researchers and practitioners to build a dynamic community for open and constructive discussion and the exchange of academic and industrial experience.
The ICIS 2017 SIG on BDA Proceedings cover business process modelling, enterprise architecture, data processing and e-commerce, exploring the new industrial impacts of applied Big Data analytics, e.g. in marketing, risk assurance, logistics and quality management.
This book constitutes the refereed proceedings of the XVIII International Conference on Data Analytics and Management in Data Intensive Domains, DAMDID/RCDL 2016, held in Ershovo, Moscow, Russia, in October 2016.
The 16 revised full papers presented together with one invited talk and two keynote papers were carefully reviewed and selected from 57 submissions. The papers are organized in topical sections on semantic modeling in data intensive domains; knowledge and learning management; text mining; data infrastructures in astrophysics; data analysis; research infrastructures; and a position paper.
This study aims to investigate the effects of open innovation (OI) and big data analytics (BDA) on reflective knowledge exchange (RKE) within the context of complex collaborative networks. Specifically, it considers the relationships between sourcing knowledge from an external environment, transferring knowledge to an external environment and adopting solutions that are useful to appropriate returns from innovation.
This study analyzes the connection between the number of patent applications and the amount of OI, as well as the association between the number of patent applications and the use of BDA. Data from firms in the 27 European Union countries were retrieved from the Eurostat database for the period 2014–2019 and were investigated using an ordinary least squares regression analysis.
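The regression step described above can be illustrated with a closed-form simple OLS fit on invented numbers; the study itself uses Eurostat firm-level data for 2014-2019 and a richer specification, so both the variables and values below are hypothetical:

```python
def ols(x, y):
    """Closed-form simple OLS: slope and intercept minimizing squared error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Hypothetical data: OI collaboration intensity vs. patent applications.
oi_intensity = [1.0, 2.0, 3.0, 4.0, 5.0]
patents = [3.1, 5.0, 7.2, 8.9, 11.0]
slope, intercept = ols(oi_intensity, patents)
print(round(slope, 2), round(intercept, 2))
# → 1.97 1.13
```

A positive estimated slope is the kind of evidence such a design looks for: more OI collaboration associated with more patent applications, holding the rest of the (here omitted) controls fixed.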
Through its twofold lens of knowledge management and OI, this study sheds light on OI collaboration modes and highlights the crucial role they could play in innovation. In particular, the results suggest that OI collaboration modes have a strong effect on innovation performance, stimulating the search for RKE.
This study advances a deeper understanding of RKE, which is shown to be an important mechanism that incentivizes firms to increase their efforts in the innovation process. Further, RKE supports firms in taking full advantage of the innovative knowledge they generate within their inter-organizational networks.
The paper examines the structure, governance, and balance sheets of state-controlled banks in Russia, which accounted for over 55 percent of the total assets in the country's banking system in early 2012. The author offers a credible estimate of the size of the country's state banking sector by including banks that are indirectly owned by public organizations. Contrary to some predictions based on the theoretical literature on economic transition, he explains the relatively high profitability and efficiency of Russian state-controlled banks by pointing to their competitive position in such functions as acquisition and disposal of assets on behalf of the government. Also suggested in the paper is a different way of looking at market concentration in Russia (by consolidating the market shares of core state-controlled banks), which produces a picture of a more concentrated market than officially reported. Lastly, one of the author's interesting conclusions is that China provides a better benchmark than the formerly centrally planned economies of Central and Eastern Europe by which to assess the viability of state ownership of banks in Russia and to evaluate the country's banking sector.
The paper examines the principles for the supervision of financial conglomerates proposed by the BCBS in the consultative document published in December 2011. In addition, the article offers a number of suggestions developed by the authors as part of the HSE research team.