This article on the AI executive order is part of our Vogue Business membership package. To enjoy unlimited access to our weekly Technology Edit, which contains Member-only reporting and analysis and our NFT Tracker, sign up for membership here.
This week, US President Joe Biden issued an executive order on “safe, secure and trustworthy artificial intelligence”. The broad order, which serves primarily as a directive to the federal government, spans both risks and opportunities for AI. During a presentation announcing the order, Biden called it “the most consequential technology of our time” that “will accelerate at warp speed”. Fashion, like all industries, should be listening.
The order calls for new safety standards that prevent deception, new data privacy legislation, a means to address algorithmic bias and support for job training that mitigates job displacement. The order also said that AI represents an opportunity to foster US-led innovation, noting that more AI startups raised first-time capital in the US last year than the next seven countries combined. It also acknowledged the need for global cooperation on regulation.
This order follows a “Blueprint for an AI Bill of Rights” issued by the White House a year ago, which serves as a set of guidelines for AI tools, including the need for people to be protected from AI discrimination, have control over their data and that AI systems be transparent and avoidable. Meanwhile, there are ongoing discussions among EU countries over a proposed AI Act, which would classify and regulate AI tools according to their perceived level of risk.
The reaction to the executive order from the tech community has been mixed. Some in the industry say it’s over-reaching, while others think it’s not comprehensive enough, says Cathy Hackl, chief futurist and co-founder of metaverse consultancy Journey. Microsoft president Brad Smith called it “another critical step forward in the governance of AI technology”, while Alphabet’s president of global affairs and chief legal officer Kent Walker said, “We are confident that our long-standing AI responsibility practices will align with its principles.”
Fashion has been early to experiment with ways in which artificial intelligence can impact a range of functions, from creative design inspiration and fantastical ad campaigns to scaling personalised recommendations and streamlining operations in-house. Brands are evaluating tech providers — including current tech giants like Meta and Google (which have both introduced AI image manipulation and AI assistants) and startups focused on specific functions (such as customer service and virtual try-on) — and using open-source AI to build their own proprietary systems.
It’s because of these broad-sweeping uses that fashion’s creative set welcomes guidance, says Vivek Jayaram, an attorney and founder of Jayaram Law, who works with leading artists and LVMH fashion brands, among others. “Everybody that I have encountered is in favour of having regulation,” he says. However, he cautions that executive orders can be little more than expressing intentions until there is a congressional law. “Creatives are looking to Congress because there is a ton of uncertainty out here. I can’t go to brands and say, today, that ‘You need to do this or you will get sued.’”
The suggestions are a “good starting point”, says Teddy Pahagbia, who is founder and “chief executive druid” at Paris-based fashion and beauty consultancy Blvck Pixel. He worries that it might be too little, too late, compared to what is happening “behind the scenes”. “In executive corridors, the AI surge is a hot topic”, while “technology is advancing more than regulation operates”, he adds. “This EO is 10 years late — and I'm barely exaggerating.”
For brands, this serves to set the tone for how they approach AI tools going forward — even before any legislation is ultimately passed, says Beerud Sheth, CEO of conversational AI platform Gupshup. “It’s not something where you wait until real guidelines come in. You want to be moving in sync with these things and bring that into your thought process.” This applies even if brands rely on tech companies to comply with legal and ethical norms, he adds, because fashion and beauty brands are uniquely in tune with issues that their customers and category might present. “Even if you work with vendors like us — we understand the tech, but we do not know the domain as well.”
Risks span bias, privacy and jobs
The executive order outlined a number of areas in which AI presents new risks, including the ease with which AI-generated content can deceive people; Biden himself shared that he’d been confused by an AI-generated video of himself. The order called for the establishment of standards and best practices for detecting and authenticating content, including potential watermarks to label AI-generated content.
For brands, this could mean that AI-generated content, such as virtual influencers, might require a disclosure. Brands and agencies will need to pay attention to what the Department of Commerce ultimately decides in terms of watermarking AI content, Hackl says. “I have a feeling that the fashion brands’ legal teams will start to be more careful with allowing their agencies or creative teams to just experiment with AI and post content … This might slow down experimentation in some ways, but in the long run, would benefit society as it pertains to trust.”
The parameters for authentication and disclosure are still quite vague, so it’s unclear if, for example, an ad campaign made using generative art would require a watermark. “I don’t know that there are simple binary answers,” Sheth says, comparing it to the advent of Photoshop and other photographic manipulation tools and the amplification of so-called “perfect” depictions of beauty. (A recent report from The Atlantic, for example, noted that AI tends to only generate conventionally attractive people.)
On that note, the order called for tech companies to address the potential for algorithmic discrimination and bias. Biden spoke out against algorithms that make social media more addictive and expressed concerns about the effect this has on the mental well-being of teenagers. For brands, this could translate into being mindful of conversational AI experiences. For example, if someone asks for beauty product recommendations, “you can’t have AI passing judgement on what complexion or skin is good or bad,” Sheth says. Similarly, if a user says something that is inappropriate about themselves, the AI can be trained to disengage, not respond or respond in a certain way.
The order also highlighted tensions around data privacy, which has already caused a reckoning among tech companies that depend on advertising to specific audiences based on their data. It called for an evaluation of how agencies collect and use commercially available information and for guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques. Without safeguards, “AI not only makes it easier to extract, identify and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems,” the order says. For brands, this could mean more robust disclosures or permissions when it comes to personalised services. Pahagbia thinks the order could ultimately lead to strengthened customer privacy and data protection that goes further than GDPR.
The order called for a strengthening of privacy-preserving technologies, such as cryptographic (blockchain) tools, which enable people to, for example, access websites anonymously. “While acknowledging that the technology is not a panacea, blockchain is an effective tool for enhancing online data protection and privacy due to its several inherent features,” says Megan Kaspar, managing director at Firstlight Capital, which invests in many fashion and Web3 startups. “This holds particular significance as we anticipate the integration of Web3 elements in the fashion industry through innovative technologies like [NFC] chips and IoT [internet of things].”
Noticeably absent, however, was a call for frameworks for intellectual property created by brands using generative AI, Jayaram says, at a time when brands are keen to create marketing content with these new tools. For now, he says, laws do not enable creators to register IP that has been created with generative AI. (This currently applies only to images, not to physical goods based on generative designs.) “There is still no real guidance on IP other than to direct the copyright office and US Patent and Trademark Office to say, ‘We want you to give guidance on how this will work,’” he says.
Encouraging innovation locally and globally
Tech leaders found the sentiment of the order refreshing in terms of encouraging innovation. The order announced plans to help the US “continue to lead the way in innovation” by supporting research efforts and a “fair, open and competitive AI ecosystem”. It shared an intention to support accessible workforce training, and to develop principles that maximise the benefits of AI for workers by addressing job displacement. This is in addition to expanding the ability of “skilled immigrants and nonimmigrants with expertise in critical areas to study, stay and work in the United States”.
Hackl says this could benefit US brand hiring efforts because it could potentially allow for an influx of AI talent from other countries, and she applauds the order’s balance between the need to stay competitive and create a capable workforce with the need to keep consumers safe. Pahagbia thinks this might lead to more innovations in fashion tech, potentially through more funding from the federal government and the private sector.
The order also acknowledged the global nature of the technology and the need to accelerate the development of AI standards with international partners and organisations. Pahagbia says his firm is already evaluating the cross-section between the US order and the EU’s own efforts.
The EU’s proposed AI Act, which is expected to be approved this month and to go into effect next year, would be the world’s first formal set of rules on AI, according to the EU. The goal, similar to the US’s, would be to “make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly”. The act also states that AI systems should be overseen by people rather than by automation. Lawmakers are still negotiating details of the act, amid reports earlier this year that the US criticised portions of the proposal, saying that it could hamper investment and hurt smaller firms. While the EU’s proposal initially focused on how AI models are developed, the US’s approach focuses on the risks and the final uses.
Governmental and global institutions, such as the United Nations, “are finally taking things seriously”, Pahagbia says. “I hope this will drive the conversation so we can set up a global ethical framework to safeguard human rights and extend it in the digital age.”