‘We Need to Learn to Communicate with Artificial Intelligence Services’
An online course, ‘What is Generative AI?’, has been launched on the Open Education platform. It will help students learn how to communicate with neural networks properly so that the networks perform tasks better. Daria Kasyanenko, an expert at the Continuing Education Centre and senior lecturer at the Big Data and Information Retrieval School at the Faculty of Computer Science, spoke about how generative AI works and how to create content with its help.
— What is generative artificial intelligence?
— Generative models (GenAI) are a type of artificial intelligence that creates text, code, images, music, and other content in response to prompts.
Such models are trained on large amounts of data, learning by observing and comparing patterns. For example, if we show a model millions of pictures of a traffic light, it will gradually begin to understand that a traffic light is a rectangular box with red, yellow, and green lights.
Generative AI is mainly used for content creation. School students write essays and marketers draw up promotion plans—there are many options. But, at the same time, our ideas about artificial intelligence are greatly distorted by popular culture. It seems to us that, at best, it will solve all our problems, and, at worst, it will enslave us. Neither of these will happen in the near future.
Learn more about working with neural networks and using artificial intelligence on the portal (in Russian).
Existing models will not replace you at work (unfortunately or fortunately), but they can become a personal assistant in routine matters: for example, writing emails, proofreading a text, analysing tabular data, and summarising large texts or videos.
— How are texts generated? Why does AI, for example, produce false facts?
— Texts are created using language models. They learn from large volumes of text and grasp the nuances of a language. The system receives a task (a prompt), processes it, and returns a response. Such a model can be visualised as a kind of sage who has read all the books in the world and can reproduce the answer to any question from memory.
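This prompt-in, response-out loop can be tried in a few lines of code. The sketch below is only an illustration: it assumes the Hugging Face transformers library and the small open GPT-2 model, neither of which is mentioned in the course. A base model like this simply continues the text it is given; chat services build instruction-following behaviour on top of the same idea.

```python
# A minimal sketch of the prompt -> response loop described above.
# Assumes the Hugging Face `transformers` library and the small open "gpt2" model,
# chosen purely for illustration; the course does not prescribe any specific tools.
from transformers import pipeline

# Load a text-generation model (downloaded on the first run).
generator = pipeline("text-generation", model="gpt2")

# The prompt is the task the system receives.
prompt = "A traffic light is a signalling device that"

# The model processes the prompt and returns a continuation of the text.
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```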
However, models have so-called hallucinations, and it is because of them that errors occur. For example, you ask a model to write an essay about the great writer Neuron Neuronovich Neuronov. The model will be happy to tell you what a brilliant writer he is, and even make a list of his books. This happens when AI lacks knowledge on a topic and, like a student who did not prepare for an exam, begins to lie. This can also happen due to random system failures.
— How are images generated? Why do images sometimes have artefacts?
— Images are generated from noise (an ‘empty’ image). The model gradually refines this noise, guided by the prompt, until it produces an image similar to what the user has requested.
Generated images often run into trouble with people: extra arms and legs, unnaturally perfect facial symmetry (the uncanny valley effect), mismatched eyes, strange smiles, and so on. The more detail an image requires, the worse the model copes with the task.
The simplest workaround is to ask the model to draw a person in a pose where the arms and legs are not visible, or to draw a portrait.
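As a rough illustration of this noise-to-image process, here is a short sketch using a text-to-image diffusion pipeline. It assumes the Hugging Face diffusers library, the Stable Diffusion 2.1 checkpoint, and a GPU; none of these are named in the interview, and the prompt simply follows the portrait advice above.

```python
# A sketch of prompt-driven image generation with a diffusion model.
# Assumes the Hugging Face `diffusers` library, the Stable Diffusion 2.1 checkpoint,
# and a CUDA GPU; these are illustrative choices, not ones named in the interview.
import torch
from diffusers import StableDiffusionPipeline

# The pipeline starts from random noise and refines it step by step,
# guided by the text prompt, until a coherent image emerges.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
).to("cuda")

# A head-and-shoulders portrait sidesteps the usual trouble spots (hands and limbs).
prompt = "studio portrait of a smiling person, head and shoulders, soft lighting"

image = pipe(prompt, num_inference_steps=30).images[0]
image.save("portrait.png")
```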
— What is the human’s role in managing AI, if we are talking about an ordinary user?
— Now we need to learn to communicate with generative AI services. It may seem quite simple to ask questions in a chat and get answers. But to get a truly high-quality answer, you need to learn prompt engineering, that is, the art of composing questions for a machine correctly. There is even an entire profession: prompt engineer.
A great number of textbooks on prompting are now available, where one can learn to compose queries correctly: in summary or positional formats, with descriptions of the context and of the instructions. This is a whole science.
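As a purely hypothetical illustration of such structured queries, the snippet below contrasts a bare question with a prompt that spells out the context, the instruction, and the desired output format. The wording and the labels are made up for the example, not taken from the course.

```python
# A hypothetical comparison of a bare prompt and a structured one.
# The labels (Context / Instruction / Format / Tone) are illustrative,
# not an official template from the course.
bare_prompt = "Write about traffic lights."

structured_prompt = """
Context: you are preparing a study aid for first-year engineering students.
Instruction: explain how a traffic light cycles through its signal phases.
Format: a numbered list of no more than five steps, in plain language.
Tone: neutral and concise.
""".strip()

# The structured query gives a model far more to work with than the bare question.
print(bare_prompt)
print("---")
print(structured_prompt)
```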
During the course, we talk about how to use prompts and learn to better understand how they work.