EU act will set boundaries on AI — with room for innovation

When it goes into effect, the proposed act will set rules that classify risks and govern how AI can be used in the European Union. It also aims to set a framework for European innovation using AI.
Spanish Secretary of State for Digitalization and Artificial Intelligence Carme Artigas Brugal.
Photo: Thierry Monasse

The European Union is one step closer to enacting the first major governmental regulation of its kind on artificial intelligence. The EU AI Act, if passed, will set rules around how AI can be used and define clearer consumer protections. To ensure that these rules don’t stamp out innovation, a “sandbox” framework would give entrepreneurs space to experiment.

The rules vary depending on the estimated risks and level of impact of various uses, but are designed to give citizens a way to protect themselves against misuse of AI, as well as more education about what AI can do and how it is being used. Providers of general-purpose AI systems, including the tech behind ChatGPT and DALL-E, will face transparency requirements, such as maintaining technical documentation, complying with EU copyright law and publishing summaries of the content used to train their models. Some use-cases, like those concerning health or public safety, will carry stricter obligations, while certain uses, like some applications of facial identification technology, will be banned.

For global companies, the effect is likely to ripple beyond Europe. Companies are likely to look at this as "the gold standard", especially because few other territories have yet released similar measures, says Jamie Rowlands, partner at law firm Haseltine Lake Kempner. "We have another couple of years for the act to start biting."


Fashion, beauty and retail have readily experimented with generative AI, meaning that requirements placed on their tech providers may trickle down to how brands implement these tools. Many brands have been quick to adopt AI-powered chatbots and virtual try-on tools, and these uses will "prove particularly interesting from a regulatory perspective", says The Fashion Law's Julie Zerbo, a lawyer specialising in fashion.

“While these systems are classified as presenting ‘limited’ risks under the AI Act framework, they still bring with them some requirements from a transparency standpoint. Companies will need to make users aware if and when they are interacting with AI, including when these systems generate or modify images, audio or video content,” Zerbo says.

Under the proposed legislation, providers of software, digital services and online marketplaces could be subject to regulation, as the definition of responsible "manufacturers" of AI changes, says Mamata Dutta, a partner in the regulatory team at international law firm RPC. While there is likely to be some negotiation related to foundation AI models (the broad tech behind ChatGPT, among others), the expectation is that the burden will fall primarily on the providers, says Rowlands. Still, brands will have to disclose when deepfakes are used in marketing materials, for example.

Generative AI, specifically, holds enormous potential for fashion brands, says cybersecurity and AI compliance attorney Myriah Jaworski of Clark Hill, whose clients include major e-commerce companies, global manufacturers and retailers. Jaworski points to existing uses including personalising online customer journeys, individualised skincare and eyeglasses customised to facial topography. "When the use of AI relies on [the] processing of personal data — for example, for virtual try-ons that rely on biometric information or for workforce management and employment-monitoring tools — it may require pre-market assessment under the EU's new AI Act," Jaworski says. (Already, consumers in the US have sued providers of virtual try-on technology.)

Beyond assessing whether any updates are required, specifically in data storage and how AI systems are used, manufacturers and retailers also need to consider whether they have adequate insurance in place, as it will become easier for consumers to submit complaints, Dutta adds.

This act is racing to keep up with the pace of AI innovation, as well as with calls from both consumers and developers for regulation. In September, US legislators hosted tech leaders, including Bill Gates, Mark Zuckerberg and OpenAI CEO Sam Altman (OpenAI is the maker of ChatGPT), to advise on how to develop guardrails for AI. In November, the Biden administration issued an executive order offering a set of guidelines around the use of AI and calling for safety standards around the technology.

Calls for regulation come amid fears that hard-to-follow rules could hinder innovation in the space. Earlier this year, the US government reportedly warned that the in-progress EU bill might hurt small businesses that lack the hefty resources to comply. To protect creative endeavours, the EU AI Act includes a "regulatory sandbox" framework, under which businesses can experiment under a regulator's supervision for a limited time. Similarly, the US executive order noted support for US-led innovation along with the need for global cooperation on regulation.

This approach prioritises startups, though it's unclear how regulators will define them. The sandbox is intended to let developers work without the full burden of regulatory compliance, while enabling regulators to better understand and prepare for potential challenges or needs. Nevertheless, it is unclear whether regulatory sandboxes are truly beneficial to participating businesses, given their apparent lack of protection, Dutta says. (The approach has been used in countries including Japan, Norway and the UK, in sectors such as financial services and healthcare.)

High-impact and high-risk uses will carry stricter obligations. Systems identified as specifically "high risk" are those that could harm health, safety, fundamental rights, the environment, democracy or the rule of law. Citizens will be able to file complaints and receive explanations regarding decisions made by these systems.

Some uses are banned outright, particularly those related to identifying people. These include biometric categorisation that uses "sensitive" characteristics, such as political, religious or philosophical beliefs, sexual orientation and race (except when law enforcement is searching for someone who has committed a serious crime), and the "untargeted scraping" of facial images to create facial recognition databases. The act also bans other forms of human monitoring, including "emotion recognition" in the workplace and schools, and "social scoring" based on social behaviour or personal characteristics. Exploitation and manipulation are also concerns: the act aims to ban AI systems that manipulate human behaviour to circumvent people's free will, and those that exploit vulnerabilities related to age, disability, or social or economic situation.

Those who break the rules may face fines that depend on the severity of the infringement and the size of the company: up to €35 million or 7 per cent of global revenue, whichever is higher.

Jaworski advises brands to evaluate their AI use-cases, paying particular attention to the data inputs as well as the outputs and impacts. Using a generative AI tool for merchandising or product ideation is "pretty straightforward" and relatively low-risk, Jaworski says, as long as the data used is not someone else's protected intellectual property. Conversely, AI that ranks and sorts job candidates or monitors warehouse workers is higher-risk; these systems will need to be assessed to mitigate risk, including through an AI-governance programme that ensures the data used is high-quality and reliable and does not lead to discriminatory outcomes. In some of these cases, Jaworski says, consumers should receive notice of the AI systems used on them. "All AI systems should be included in an enterprise-wide AI inventory, which is evaluated at least annually."

Brands will have at least a year to prepare. Members of the European Parliament agreed on a provisional deal last Friday, and this week they are meeting to finalise the details. The full text, expected to be released in early 2024, still needs to be formally adopted by both the Parliament and the Council to become EU law; the timeline for this is unknown, but once ratified, the law is slated to take effect as early as 2025.

With regulators looking over their shoulders, manufacturers need to balance the resources required to limit litigation against the benefits of new AI-based products, Dutta says. The overall benefits remain significant, but “particularly in a creative industry such as fashion, the importance of human input remains clear and must not be undervalued.”
