SpheroidPicker for automated 3D cell culture manipulation using deep learning
Recent statistics report that more than 3.7 million new cases of cancer occur in Europe yearly, and the disease accounts for approximately 20% of all deaths. High-throughput screening of cancer cell cultures has dominated the search for novel, effective anticancer therapies in the past decades. Recently, functional assays with patient-derived ex vivo 3D cell cultures have gained importance for drug discovery and precision medicine. We recently evaluated the major advancements and needs of 3D cell culture screening and concluded that strictly standardized and robust sample preparation is the most needed development. Here we propose an artificial intelligence-guided, low-cost 3D cell culture delivery system. It consists of a light microscope, a micromanipulator, a syringe pump, and a controller computer. The system performs morphology-based feature analysis on spheroids and can select uniformly sized or shaped spheroids and transfer them between various sample holders. It can pick samples from standard sample holders, including Petri dishes and microwell plates, and then transfer them to a variety of holders, up to 384-well plates. The device performs reliable semi- and fully automated spheroid transfer. This results in highly controlled experimental conditions and eliminates non-trivial side effects of sample variability, which is a key step towards next-generation precision medicine.
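To make the selection step concrete, the following minimal Python sketch filters segmented objects by size and circularity, in the spirit of the morphology-based feature analysis described above. It assumes a binary segmentation mask as input; the thresholds, the function name, and the use of scikit-image are illustrative assumptions, not the published SpheroidPicker pipeline.

# Minimal sketch of morphology-based spheroid selection (illustrative only;
# thresholds and the segmentation step are assumptions, not the actual pipeline).
import numpy as np
from skimage import measure

def select_uniform_spheroids(mask, min_area=500, max_area=5000, min_circularity=0.8):
    """Return centroids of labeled objects whose size and shape fall in range."""
    labeled = measure.label(mask)
    selected = []
    for region in measure.regionprops(labeled):
        if not (min_area <= region.area <= max_area):
            continue
        # Circularity = 4*pi*area / perimeter^2 (1.0 for a perfect disk).
        circularity = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
        if circularity >= min_circularity:
            selected.append(region.centroid)  # (row, col) target for the manipulator
    return selected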
The new technology used for processing population census results is described. The system was recently launched by Rosstat for the 2002 and 2010 censuses. It gives the user the ability to tabulate any demographic table online from microdata, with no need to download the data or install any software. Examples of results obtained with this system that are absent from the official census tabulations are given.
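The internals of the Rosstat system are not described here; as a rough illustration of on-line tabulation from microdata, the following Python sketch builds a demographic cross-table directly from record-level data. The column names and values are hypothetical.

# Sketch of tabulating a demographic table directly from census microdata
# (column names and values are hypothetical; the actual schema is not public).
import pandas as pd

microdata = pd.DataFrame({
    "region":    ["Moscow", "Moscow", "Kazan", "Kazan"],
    "sex":       ["M", "F", "M", "F"],
    "age_group": ["20-29", "20-29", "30-39", "30-39"],
})

# Cross-tabulation analogous to an on-the-fly census table.
table = pd.crosstab([microdata["region"], microdata["age_group"]],
                    microdata["sex"])
print(table)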
The quantitative assessment of the credit quality of manufacturing companies is a task of great interest to researchers and practitioners. This is underpinned by the elevated credit risk of these companies stemming from rapid technological changes. However, few studies have addressed this issue specifically for manufacturing companies. This study aimed to fill this research gap by comparing the predictive power of various methods in reproducing manufacturing companies’ public credit ratings from available financial and non-financial data. The sample included 109 manufacturing companies from developed and emerging markets over the period 2005–2016. The analysis included three methods: ordered logistic regression (OLR) and two machine learning techniques, random forest and gradient boosting. The results showed that machine learning techniques outperformed OLR in terms of predictive power. In the best model specification, random forest had an accuracy of 50%, followed by gradient boosting (47%) and OLR (25%). We also tested two types of sampling in the training and test sets: random and time-dependent. The results showed that the models’ predictive power was greater with random sampling. The inclusion of macroeconomic variables did not improve the models’ predictive power due to the rating agencies’ preferred through-the-cycle rating approach. The study’s findings have implications for the development of manufacturing firms’ internal credit ratings. They can also be useful for researchers exploring the accuracy of empirical models in predicting industrial firms’ insolvency and creditworthiness.
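As an illustration of the kind of model comparison reported above, the following Python sketch fits a logistic regression alongside random forest and gradient boosting classifiers and compares test-set accuracy. The feature matrix and rating labels are synthetic, a plain multinomial logit stands in for the ordered logit (OLR), and the hyperparameters are not those of the study.

# Sketch of the three-way model comparison (synthetic data; not the study's
# variables, hyperparameters, or results).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(109, 12))    # 109 firms, 12 financial ratios (synthetic)
y = rng.integers(0, 7, size=109)  # 7 rating classes (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    # Multinomial logit as a stand-in; an ordered model (e.g. statsmodels
    # OrderedModel) would match the paper's OLR more closely.
    "logit": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))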
High-energy physics experiments rely on the reconstruction of the trajectories of particles produced at the interaction point. This is a challenging task, especially in the high-track-multiplicity environment generated by p-p collisions at LHC energies. A typical event includes hundreds of signal examples (interesting decays) and a significant amount of noise (uninteresting examples). This work describes a modification of the Artificial Retina algorithm for fast track finding in which numerical optimization methods are adopted for the fast local track search. This approach allows a considerable reduction of the total computational time per event. Test results on a simplified simulated model of the LHCb VELO (VErtex LOcator) detector are presented. The approach is also well suited to parallel implementation on GPGPUs, which looks very attractive in the context of upcoming detector upgrades.
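A toy sketch of the idea follows, under strong simplifying assumptions: 2D hits, straight-line tracks parameterized by slope and intercept, and a Gaussian retina response maximized with a general-purpose numerical optimizer started from a coarse grid of seeds. The geometry and the response width are illustrative, not the LHCb VELO configuration.

# Toy retina-style response maximized with a numerical optimizer
# (2D straight-line tracks; hit pattern and SIGMA are illustrative).
import numpy as np
from scipy.optimize import minimize

hits = np.array([[1.0, 0.9], [2.0, 2.1], [3.0, 2.9], [1.0, -1.1], [2.0, -2.0]])
SIGMA = 0.2

def response(params):
    """Retina response: sum of Gaussian weights of hit-to-track residuals."""
    slope, intercept = params
    d = hits[:, 1] - (slope * hits[:, 0] + intercept)  # vertical residuals
    return np.sum(np.exp(-d**2 / (2 * SIGMA**2)))

# Local search from a coarse grid of seeds replaces an exhaustive grid scan.
seeds = [(s, 0.0) for s in np.linspace(-2, 2, 9)]
maxima = [minimize(lambda p: -response(p), s, method="Nelder-Mead").x for s in seeds]
print(np.round(maxima, 2))  # seeds cluster near the two true tracks (slopes ~ +1 and -1)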
This book constitutes the refereed proceedings of the First International Conference on Data Compression, Communications and Processing held in Palinuro, Italy, in June 2011.
The article is devoted to improving the quality of education in promising interdisciplinary areas of knowledge such as biomedical engineering. The experience of universities in the U.S.A. and Western Europe is described. Particular attention is paid to curriculum design and the accreditation of education and training programs.
A model is considered for organizing cargo transportation between two node stations connected by a railway line that contains a certain number of intermediate stations. The cargo moves in one direction. Such a situation may occur, for example, if one of the node stations is located in a region that produces raw materials for a manufacturing industry located in the region of the other node station. Freight traffic is organized by means of a number of technologies. These technologies determine the rules for accepting cargo at the initial node station, the rules of interaction between neighboring stations, and the rule for distributing cargo to the final node stations. The transportation process is governed by a prescribed control rule. For such a model, one must determine the possible modes of cargo transportation and describe their properties. The model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of solutions satisfying the nonlocal linear restrictions is extremely narrow, which necessitates a "correct" extension of the solutions of the system of differential equations to a class of quasi-solutions whose distinctive feature is gaps at a countable number of points. Using the fourth-order Runge–Kutta method, we were able to construct these quasi-solutions numerically and determine their rate of growth. We note that the main technical difficulty consisted in obtaining quasi-solutions that satisfy the nonlocal linear restrictions. Furthermore, we investigated how the quasi-solutions, and in particular the sizes of their gaps (jumps), depend on the model parameters characterizing the control rule, the transportation technologies, and the intensity of cargo supply at a node station.
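For reference, a generic fourth-order Runge–Kutta step of the kind used to construct the quasi-solutions is sketched below in Python. The actual right-hand side of the cargo model and the jump conditions imposed by the nonlocal restrictions are not given here; the linear system f is a placeholder.

# Classical RK4 step for y' = f(t, y); f below is a placeholder, not the
# cargo-transportation system itself.
import numpy as np

def rk4_step(f, t, y, h):
    """Advance the state y by one fourth-order Runge-Kutta step of size h."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: integrate a linear placeholder system between prescribed jump points;
# at each jump the state would be reset to satisfy the nonlocal restrictions.
f = lambda t, y: -0.5 * y
y, t, h = np.array([1.0]), 0.0, 0.01
for _ in range(100):
    y = rk4_step(f, t, y, h)
    t += h
print(t, y)  # ~ exp(-0.5) at t = 1.0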
Event logs collected by modern information and technical systems usually contain enough data for automated discovery of process models. A variety of algorithms has been developed for process model discovery, conformance checking, log-to-model alignment, comparison of process models, etc.; nevertheless, quick analysis of ad hoc selected parts of a log still lacks a full-fledged implementation. This paper describes an ROLAP-based method of multidimensional event log storage for process mining. The result of the log analysis is visualized as a directed graph representing the union of all possible event sequences, ranked by their occurrence probability. Our implementation allows the analyst to discover process models for sublogs defined by an ad hoc selection of criteria and an occurrence-probability threshold.
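A minimal sketch of the visualization step follows, assuming a simple list-of-traces log format rather than the paper's ROLAP storage: it counts directly-follows pairs, converts counts to occurrence probabilities, and keeps the edges above an analyst-chosen threshold.

# Sketch of building the ranked directed graph of event sequences from a log
# (log format, normalization, and threshold are assumptions, not the paper's design).
from collections import Counter

traces = [
    ["register", "check", "approve"],
    ["register", "check", "reject"],
    ["register", "check", "approve"],
]

edges = Counter()
for trace in traces:
    for a, b in zip(trace, trace[1:]):
        edges[(a, b)] += 1

total = sum(edges.values())
min_prob = 0.2  # ad hoc occurrence-probability cutoff chosen by the analyst
for (a, b), n in edges.most_common():
    p = n / total
    if p >= min_prob:
        print(f"{a} -> {b}  (p = {p:.2f})")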