‘We Need Our Own AI Models Trained on Local Data’

Alexey Ivanov (far right)
© HSE University

The digitalisation of the economy and the rapid development of artificial intelligence (AI) technologies pose new challenges for antitrust authorities worldwide. Major players in the AI market, equipped with significant resources, can block new entrants and set anti-competitive prices. Additionally, the use of AI raises increasingly complex ethical questions for which the global community has yet to develop answers. These and other issues were discussed at the Third BRICS+ Digital Competition Forum.

The forum took place in Rio de Janeiro in November on the sidelines of the G20 Summit. It was organised by the International BRICS Competition Law and Policy Centre at HSE University (BRICS Antitrust Centre) in collaboration with the FGV Law School and supported by Brazil's competition authority, CADE. The event brought together representatives of UNCTAD and other international organisations, antitrust authorities from the BRICS countries, and world-class researchers and experts from the BRICS nations, Mexico, Malaysia, Europe, the UK, and the US.

Artificial intelligence, while now critical to society, is spinning out of control, noted Alexey Ivanov, Director of the BRICS Antitrust Centre at HSE. Large language models and chatbots disregard fundamental principles that should ensure the safety and ethical use of AI. These technologies are already causing harm, particularly to individuals vulnerable to various kinds of manipulation.

‘Ethical principles will not be effective if the industry develops in only one direction. It is crucial for antitrust authorities to act decisively to create space for new start-ups, products, and alternative AI development pathways. If BRICS countries can establish new standards and raise the bar, it will serve as an excellent example for antitrust authorities globally,’ Alexey Ivanov asserted.

© HSE University

Partnerships and investment agreements in AI present a significant challenge for regulators. Companies within the GAMMAN group (Google, Amazon, Microsoft, Meta, Apple, and NVIDIA) are of particular interest to antitrust agencies. These giants not only develop their own AI products and platforms, but also integrate their technologies into various industries through collaborations with smaller companies and start-ups.

Payal Malik, former Economic Advisor and Head of the Economics Division at the Competition Commission of India, highlighted that such partnerships often involve data sharing and access to computational resources in addition to financial investments. These alliances may look like ordinary business agreements but in reality help the largest companies strengthen their market positions while evading antitrust scrutiny.

‘The key question is whether AI start-ups benefit more from partnerships with large companies or suffer from a lack of resources,’ Ms Malik emphasised.

Introducing new antitrust control criteria is largely ineffective because each deal in the AI and related sectors is unique, explained Elena Rovenskaya, Programme Director at the International Institute for Applied Systems Analysis (IIASA). Consequently, IIASA and the International BRICS Competition Law and Policy Centre are developing systems analysis and mapping methods to build comprehensive models of AI and other dynamic sectors. ‘Breaking down highly complex issues into specific processes and cause-and-effect relationships has already proven valuable in other fields and can be helpful in competition analysis,’ she said.

Alexey Ivanov, John Newman
© HSE University

John Newman, Professor at the University of Miami, cautioned that regulators must proceed carefully when moving toward stricter oversight of AI partnerships and agreements. The first regulator to implement such measures will be under intense scrutiny, and if their actions lack transparency or justification, it could undermine the legitimacy of regulation itself.

Luca Belli, Professor at FGV Law School in Rio de Janeiro (FGV Direito Rio), raised the issue of AI sovereignty for developing nations. He urged BRICS countries to develop their own AI models, citing Brazil's PIX payment system and India's UPI as examples of breaking the VISA and Mastercard duopoly in their respective countries.

Vikas Kathuria, Professor at BML Munjal Law School, agreed, noting that in addition to UPI, India launched the Open Network for Digital Commerce (ONDC) to challenge Amazon and Flipkart's duopoly in the local market. ‘India cannot afford bias or discrimination when implementing generative AI in various solutions. We need our own AI models trained on local data,’ he stated.

Irina Filatova
© HSE University

Irina Filatova, a member of the Russian State Duma Committee on Competition Protection, proposed that BRICS develop joint rules to combat deepfakes. These technologies can enable cybercriminals to obtain users' personal data and gain access to critical information, which can then be used for cybercrime and blackmail.

Marcela Mattiuzzo, Professor at INSPER Law School, pointed out that unscrupulous market participants might use AI to create ‘dark patterns’—design features intended to manipulate users, such as making it difficult to unsubscribe from services or leave a platform. These tools, she argued, could be seen as a way to gain competitive advantages.

Marcela Mattiuzzo (second from left)
© HSE University

‘We must define the legal powers regulators need to assess data and documentation, and to request key information, so that they can understand AI systems, investigate effectively, and ensure proper oversight,’ said Victor Oliveira Fernandes, a commissioner at CADE (Brazil).

André Vellozo, CEO of the US-based data management company DrumWave, stressed the importance of preventing tech giants from dominating the AI sector as they have done in digital markets. In his view, this requires three critical factors: consumer awareness, technological innovation, and effective regulation.

CADE Commissioner Gustavo Augusto Freitas de Lima emphasised that AI legislation must become more transparent, flexible, and easy to implement. ‘We should not view AI as a tool solely for major tech companies. AI will be used by society as a whole. It is too late to see ChatGPT as a single product—it represents hundreds and thousands of different applications over which no one will have centralised control,’ he noted.

The forum also included a session of a BRICS working group on competition issues in digital markets.