Articles

French Publishers and Authors Sue Meta for Alleged Copyright Infringement in AI Training

France’s leading publishing and authors’ associations have filed a lawsuit against Meta, accusing the U.S. tech giant of using copyrighted content without permission to train its artificial intelligence (AI) systems. The lawsuit was filed earlier this week in a Paris court, with the plaintiffs alleging copyright infringement and economic “parasitism.”

The groups behind the lawsuit include the National Publishing Union (SNE), the National Union of Authors and Composers (SNAC), and the Society of Men of Letters (SGDL), which represent authors and publishers in France. They argue that Meta, the parent company of Facebook, Instagram, and WhatsApp, has been illegally using copyright-protected material to enhance its AI models.

Maia Bensimon, general delegate of SNAC, described the actions as a form of “monumental looting.” Renaud Lefebvre, Director General of SNE, referred to the lawsuit as “David versus Goliath,” emphasizing that the legal action aims to set a precedent for the protection of copyright in the face of rapidly advancing AI technologies.

This lawsuit marks the first of its kind in France against an AI giant, though similar legal actions are already underway in other countries, particularly in the United States. In 2023, Sarah Silverman, an American actress and author, along with other plaintiffs, sued Meta for allegedly misusing their works to train its Llama language model. Other authors, including Christopher Farnsworth, have also filed lawsuits against Meta for similar claims.

In addition to Meta, OpenAI, the creator of ChatGPT, faces similar copyright lawsuits in the United States, Canada, and India over the data used to train its generative AI systems.

Chinese Military-Linked Institutions Develop AI Model Using Meta’s Llama for Strategic Applications

Chinese research bodies associated with the People’s Liberation Army (PLA) have adapted Meta’s open-source AI model, Llama, for potential military use, according to several academic papers and expert analysts. A June paper by six Chinese researchers—connected to three institutions, including the PLA’s Academy of Military Science (AMS)—revealed the development of an AI tool named “ChatBIT.” Built on Meta’s Llama 13B model, ChatBIT is tailored for military intelligence gathering and operational decision-making support.

Optimized specifically for dialogue and question-answering within military contexts, ChatBIT reportedly outperformed some comparable AI models, achieving roughly 90% of the capability of ChatGPT-4. However, the researchers did not specify the exact performance criteria or confirm whether the tool has been deployed operationally within the military.

This development marks the first confirmed attempt by Chinese military-affiliated researchers to systematically leverage Meta's open-source models, according to Sunny Cheung, a specialist in China's dual-use technologies at the Jamestown Foundation. Although Meta's acceptable-use policy bars military and nuclear applications of its models, their open-source release leaves the company with few means of enforcement. Meta reiterated this position in response to Reuters inquiries, emphasizing that any PLA use of its models is unauthorized.

While Meta supports open innovation, the use of Llama in military contexts has reignited discussions in the U.S. about potential security risks associated with open-source models. Recently, President Joe Biden signed an executive order to monitor AI developments, balancing innovation benefits with security concerns.

The AMS-affiliated researchers, including Geng Guotong and Li Weiwei, alongside colleagues from Beijing Institute of Technology and Minzu University, suggested ChatBIT could potentially aid in strategic planning, simulation training, and command decision-making as the technology progresses. While Reuters could not verify the computing power behind the model, the researchers cited a relatively modest training dataset of 100,000 military dialogue records, prompting experts such as Joelle Pineau of Meta's AI Research division to question the depth of ChatBIT's current capabilities.

This development arises as the U.S. finalizes rules to regulate investment in critical AI technologies in China. Pentagon officials have voiced ongoing concerns about the dual-use implications of open-source models, while some observers argue that China’s progress in indigenous AI research makes it challenging to prevent technological advances. William Hannas of Georgetown University’s Center for Security and Emerging Technology notes that extensive collaboration between top Chinese and American AI scientists has bolstered China’s AI goals.

Meanwhile, other PLA-linked studies describe further uses for Llama in fields such as airborne electronic warfare and intelligence policing. In April, PLA Daily emphasized AI’s potential to accelerate weapons development and enhance military training and simulation. These developments reflect China’s national strategy to close the technological gap with the U.S. in AI by 2030, underscoring the ongoing global debate over AI’s role in military advancement.