The purpose of this paper is to extend existing theories of B2B networks to non-profit networks. The paper sheds light on the network organisational forms recently established in the academic community. The analytic induction method is used to extend B2B network concepts to a non-profit context. The concepts of B2B networks are critically analysed and applied to exploratory case studies of networks in academia. The paradox of open knowledge exchange in these networks is revealed, and an attempt is made to elucidate it. B2B network concepts should be modified before being extended to non-profits. Propositions are suggested to adapt B2B network concepts to explain non-profit networks, and questions to address in further research are developed. The main conclusions are applicable only to specific types of networks, since only academic networks are reviewed. The case study approach does not allow the findings to be generalized into a set of concepts for non-profit networks, and thus calls for further research. There may be room for achieving excellence in research by facilitating interpersonal rather than interorganisational research networks; this is important, since facilitating interpersonal networks makes it possible to escape organisational bureaucracy. The study reports on networking between non-profits, an issue largely neglected by marketing researchers, and contributes to its understanding within the frame of existing B2B network concepts.
Efficient packet classification is a core concern for network services. Traditional multi-field classification approaches, both in software and in ternary content-addressable memories (TCAMs), entail trade-offs between (memory) space and (lookup) time. In particular, TCAMs cannot efficiently represent range rules, a common class of classification rules that confine the values of packet fields to given ranges, and the exponential growth of TCAM entries with the number of fields is exacerbated when multiple fields contain ranges. In this work, we present a novel approach that identifies properties of many classifiers allowing them to be implemented in linear space with guaranteed worst-case logarithmic lookup time, while permitting the addition of further fields, including range constraints, without affecting the space and time complexities. On real-life classifiers from Cisco Systems and additional classifiers from ClassBench (with real parameters), 90–95% of the rules are handled this way; the remaining 5–10% can be stored in a TCAM and processed in parallel.
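The TCAM blow-up the abstract refers to comes from the standard encoding of a range rule as a set of ternary prefixes. A minimal sketch of that expansion (the classic prefix-splitting algorithm, not the paper's own method) illustrates why ranges are expensive:

```python
def range_to_prefixes(lo, hi, width):
    """Expand the integer range [lo, hi] on a `width`-bit field into a
    minimal list of (value, prefix_length) ternary prefixes -- the standard
    way a single range rule is encoded as multiple TCAM entries."""
    prefixes = []
    while lo <= hi:
        # Largest power-of-two block aligned at `lo` ...
        step = lo & -lo if lo > 0 else 1 << width
        # ... that still fits inside the remaining range.
        while lo + step - 1 > hi:
            step >>= 1
        prefixes.append((lo, width - (step.bit_length() - 1)))
        lo += step
    return prefixes

# On a 4-bit field, the range [1, 6] already needs 4 entries:
print(range_to_prefixes(1, 6, 4))        # [(1, 4), (2, 3), (4, 3), (6, 4)]
# Worst case is 2*width - 2 entries per field; for [1, 14] on 4 bits:
print(len(range_to_prefixes(1, 14, 4)))  # 6
```

Since each range field expands independently, a rule with several range fields costs the product of the per-field expansions, which is the exponential growth the abstract describes.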
Linguistic processing is based on a close collaboration between temporal and frontal regions connected by two pathways: the “dorsal” and “ventral pathways” (assumed to support phonological and semantic processing, respectively, in adults). We investigated here the development of these pathways at the onset of language acquisition, during the first post-natal weeks, using cross-sectional diffusion imaging in 21 healthy infants (6–22 weeks of age) and 17 young adults. We compared the bundle organization and microstructure at these two ages using tractography and original clustering analyses of diffusion tensor imaging parameters. We observed structural similarities between both groups, especially concerning the dorsal/ventral pathway segregation and the arcuate fasciculus asymmetry. We further highlighted the developmental tempos of the linguistic bundles: The ventral pathway maturation was more advanced than the dorsal pathway maturation, but the latter catches up during the first post-natal months. Its fast development during this period might relate to the learning of speech cross-modal representations and to the first combinatorial analyses of the speech input.
The existence of today's leading Internet companies is closely tied to the world of free software. The pages of «Open Source» (a supplement to the «Системный администратор» journal) have already featured articles devoted to Google and Twitter. Another prominent representative of the network industry actively involved in FLOSS development is Facebook.
The problem of real-time face recognition with large databases is addressed. An enhancement of the HOG (Histogram of Oriented Gradients) algorithm with mutual alignment of features is proposed to achieve better accuracy. A novel modification of the directed enumeration method (DEM), using ideas from the Best Bin First (BBF) search algorithm, is introduced as an alternative to the nearest-neighbour rule to avoid brute-force search. We present the results of an experimental study of face recognition on the FERET and Essex datasets. We compare the performance of our DEM modification with conventional BBF k-d trees in their well-known efficient implementation from the OpenCV library. It is shown that the proposed method yields increased computational efficiency (2–12 times faster than BBF), even in the most difficult cases where many neighbours lie at very similar distances. It is demonstrated that BBF cannot be used with our recognition algorithm, as the latter is based on a non-symmetric measure of similarity. However, we experimentally show that our recognition algorithm improves recognition accuracy in comparison with the classical HOG implementation. Finally, we show that this algorithm can be implemented efficiently when combined with the DEM.
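The details of the modified DEM are in the paper itself; as a minimal sketch of the general idea it builds on (terminating an exhaustive nearest-neighbour scan as soon as a reference falls below a distance threshold, instead of examining every template), consider the following, where the KL divergence stands in for a non-symmetric dissimilarity between normalised HOG histograms. All names here are illustrative, not the paper's API:

```python
import math

def kl_distance(query, ref, eps=1e-10):
    """Kullback-Leibler divergence between two normalised histograms.
    Note it is non-symmetric, like the similarity measure in the paper."""
    return sum(q * math.log((q + eps) / (r + eps)) for q, r in zip(query, ref))

def recognise(query, database, threshold):
    """Scan reference templates, but stop as soon as one is closer than the
    fixed threshold (the early-termination idea that DEM elaborates on)."""
    best_label, best_dist = None, float("inf")
    for label, ref in database:
        d = kl_distance(query, ref)
        if d < best_dist:
            best_label, best_dist = label, d
        if d < threshold:           # confident match: terminate the search
            break
    return best_label, best_dist

db = [("alice", [0.7, 0.2, 0.1]),
      ("bob",   [0.1, 0.8, 0.1]),
      ("carol", [0.3, 0.3, 0.4])]
print(recognise([0.68, 0.22, 0.10], db, threshold=0.05))
```

Because the measure is non-symmetric, space-partitioning structures such as BBF k-d trees (which assume a metric) are not applicable, which is exactly the incompatibility the abstract points out.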
Among the negative predictors of sexual freedom, cultural complexity has always been mentioned as the most important. However, regression analysis revealed a reverse trend within the interval between 11 and 22 points on Murdock's cumulative scale of cultural complexity. This suggests that it is futile to seek a single general set of regularities for the correlation between cultural complexity and sexual freedom; one would rather expect different sets of regularities for simple, medium-complexity, complex and supercomplex cultures. In this paper we begin with a summary analysis of research conducted on simple societies, suggesting a model of the relationships between cultural complexity and female premarital sexual freedom among foragers. We suggest that the underlying variable in this model is foraging intensification, which appears to be one of the most important preconditions for the significant growth of cultural complexity among foragers. As shown in the ethnographic record, this intensification mostly occurs through the development of hunting and/or fishing practices (i.e., in most cases, predominantly male activities). This tends to lead to a decline in the female contribution to subsistence, which in turn appears to lead to a societal decline in female status and, the general argument goes, contributes to the decrease of female premarital sexual freedom. We argue, however, that this is not the only mechanism explaining the negative correlation between cultural complexity and female premarital sexual freedom among foragers. The intensification of a foraging economy tends to lead to the rise of wealth accumulation and to the growth of such components of cultural complexity as a medium of exchange and social stratification. This situation seems to "entice" the development of modes of marriage that involve the transfer of valuables/services.
The growth of social stratification appears to have an independent influence on the decline of female premarital sexual freedom among foragers. The growth of these components of cultural complexity seems to lead to the development of slavery and polygyny, and the combined action of these factors appears to entice what we call "bride commodification", which, against the background of declining female status, naturally leads to the restriction of female premarital sexual freedom. The growth of such components of cultural complexity as political integration, fixity of settlement and community size seems to contribute to the decline of female premarital sexual freedom through the growth of social control (again, against the background of declining female status).
In this letter we present estimates of the secret-key transmission distance through free space for three different quantum key distribution protocols: the BB84 and phase-time coding protocols in the case of a strictly single-photon source, and the relativistic quantum key distribution protocol in the case of faint laser pulses.
A new approach to training Gaussian process models for classification problems is proposed. Standard methods for this task have O(n^3) complexity, where n is the size of the training set, which precludes their application to problems with large amounts of data. A number of approaches based on so-called inducing inputs have therefore been proposed in the literature. Such methods were initially developed for regression, but in a recent paper Hensman et al. (2015) developed a similar method for classification. That method uses a global lower bound on the likelihood, which is maximized with respect to the Gaussian process parameters and additional variational parameters via stochastic optimization. The computational complexity of this method is O(nm^2), where m is the number of inducing inputs, which is usually substantially smaller than n. However, the number of optimization variables is O(m^2), which makes finding the optimal parameters rather difficult for large values of m. We propose two new bounds on the marginal likelihood of the Gaussian process model with inducing inputs for classification, together with several methods for their optimization. In the new bounds, the number of numerically optimized variables does not depend on the number of inducing inputs m. As a result, the new training procedures are efficient over a wide range of values of n and m. Moreover, unlike the stochastic method of Hensman et al. (2015), the new procedures do not require the user to tune parameters, which considerably simplifies their practical use. Experiments show that the new methods demonstrate comparable or better quality than the method of Hensman et al. (2015).
The paper focuses on an application of sequential three-way decisions and granular computing to the problem of multi-class statistical recognition of objects that can be represented as a sequence of independent homogeneous (regular) segments. Since segmentation algorithms usually make it possible to choose the degree of homogeneity of the features within a segment, we propose to associate each object with a set of such piecewise-regular representations (granules). Coarse-grained granules stand for a small number of weakly homogeneous segments; conversely, a sequence with a large number of highly homogeneous small segments is considered a fine-grained granule. During recognition, sequential analysis is performed at each granularity level: the next, finer level is processed only if the decision at the current level is unreliable, with the conventional Chow's rule used as the non-commitment option. The decision at each granularity level is also proposed to be sequential. A probabilistic rough set of the distances between objects of different classes is created at each level, and if the distance between the query object and the next checked reference object falls into the negative region (i.e., it is less than a fixed threshold), the search procedure is terminated. Experimental results in face recognition with the Essex dataset and the state-of-the-art HOG features are presented. It is demonstrated that the proposed approach can increase recognition performance by a factor of 2.5–6.5 in comparison with the conventional PHOG (pyramid HOG) method.
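The coarse-to-fine control flow described above can be sketched as a rejection cascade: Chow's rule either commits to the top class or returns a non-commitment, and only a non-commitment triggers the (more expensive) finer granularity level. This is a minimal sketch under toy scores, not the paper's implementation; all names and numbers are illustrative:

```python
def classify_with_reject(scores, threshold):
    """Chow's rule: commit to the top class only if its normalised score is
    confident enough; otherwise return None (the non-commitment option)."""
    total = sum(scores.values())
    label, best = max(scores.items(), key=lambda kv: kv[1])
    return label if best / total >= threshold else None

def sequential_granular_classify(levels, threshold):
    """Process granularity levels coarse -> fine, stopping at the first
    level where the decision is reliable."""
    for cost, scores in levels:          # cost grows with finer granularity
        decision = classify_with_reject(scores, threshold)
        if decision is not None:
            return decision, cost
    # All levels were unreliable: fall back to the finest level's best guess.
    return max(levels[-1][1].items(), key=lambda kv: kv[1])[0], levels[-1][0]

# Toy class scores for one query at two granularity levels.
coarse = (1.0, {"alice": 0.40, "bob": 0.35, "carol": 0.25})  # ambiguous
fine = (4.0, {"alice": 0.80, "bob": 0.15, "carol": 0.05})    # confident
print(sequential_granular_classify([coarse, fine], threshold=0.6))
# -> ('alice', 4.0): the coarse level is rejected, the fine level accepted
```

The speed-up reported in the abstract comes from the fact that most queries are resolved at a cheap coarse level, so the expensive fine-grained matching is reached only for the ambiguous minority.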