Neural reading. Insights from the analysis of poetry generated by artificial neural networks
The creation of poems via neural networks is relatively easy nowadays, and the internet is replete with examples. What is largely missing, however, are interpretive concepts. What should be done with the results generated in this way? How can we draw scientific conclusions from them? This is all the more difficult to answer as it remains unclear where to position deep‐learning approaches in the canon of digital‐humanities methods. What is clear, however, is that humanities scholars must reckon with machines being responsible for, or at least involved in, the creation of their objects of study. After a historical introduction to automated poetry generation, we attempt to conceptualize neural‐net poetry and argue that its interpretation, i.e. the close reading of texts generated in this way on the basis of large source corpora, can be an insightful addition to the toolbox of computational literary studies; we suggest calling this approach, still in development, “neural reading.” Our main argument is that artificial neural networks are able to reproduce some of the stylistic features of a training sample, in our case poetic corpora, acting as a kind of digital echo chamber of literary history. These features are observed mainly in smaller language units, at the level of morphology, vocabulary, syntax, and prosody. Our findings open new directions for the study of style in larger corpora. We illustrate this with three Russian corpora (a selection of translated hexameters from the eighteenth to the twentieth century and the poetry of Natalia Azarova and Vladimir Vysotsky) and one German corpus (the collected poems of Friedrich Hölderlin).