An initial review of generative AI by the U.K.’s Competition and Markets Authority (CMA), announced back in May, has concluded with a report containing seven proposed principles to “ensure consumer protection and healthy competition are at the heart of responsible development and use of foundation models” (FMs), as it puts it.
The principles the competition watchdog has come up with for consideration, as it kicks off another round of stakeholder engagement on AI’s potential impacts on markets, are:
- Accountability: “FM developers and deployers are accountable for outputs provided to consumers”
- Access: “ongoing ready access to key inputs, without unnecessary restrictions”
- Diversity: “sustained diversity of business models, including both open and closed”
- Choice: “sufficient choice for businesses so they can decide how to use FMs”
- Flexibility: “having the flexibility to switch and/or use multiple FMs according to need”
- Fair dealing: “no anti-competitive conduct including anti-competitive self-preferencing, tying or bundling”
- Transparency: “consumers and businesses are given information about the risks and limitations of FM-generated content so they can make informed choices”
The competition watchdog is drawing on its experience regulating market contestability, combined with some early research and feedback from AI stakeholders, to pull together this first draft of pro-innovation principles. The move follows instruction from the U.K. government to existing regulators to consider AI impacts on their patches. Down the line, the CMA might thus end up promoting such a list as best practice for avoiding competition complaints at AI’s cutting edge.
Nothing is set in stone yet, though — with another update on its thinking in this area planned for early 2024. So watch this space.
Scoping AI impacts
“In this market for foundation models, there’s lots at stake for both competition and consumers. If the market works well, the best products win. And so do consumers and so do people. But if it doesn’t, people could really lose out and compelling businesses could struggle to compete. So with this review… we wanted to be on the front foot as much as possible — trying to understand what’s going on as it happens, rather than having to come in later and figure it out after the fact,” said Will Hayter, senior director for the CMA’s Digital Markets Unit (DMU), speaking in an interview with TechCrunch.
“There are tonnes of potential benefits for these models. But also, of course, some risks. And we think the benefits, or the harms, could happen quite fast. So we’ve tried to focus on some potential positive outcomes and some potential less positive outcomes. And then we’ve really thought hard about the kinds of drivers that might push in one direction or another.”
By foundation models, the CMA said its focus is on large-scale AI models that can be adapted to downstream customer applications through a process of fine-tuning, meaning they play a particular role in the AI supply chain: they’re intended to be built on by others developing customer-facing apps and services.
Hayter confirmed it’s too soon for the CMA to have an established view on how these still fast-developing AI technologies might impact markets. Nor could he say whether foundation model makers could be future candidates for bespoke regulation under the U.K.’s planned “pro-competition” reform of rules applied to Big Tech via so-called “Strategic Market Status” (a long-trailed digital regulation reboot which was recently revived by prime minister Rishi Sunak), telling us: “I think it really would be wrong to prejudge and try and forecast that at this point.”
But the regulator is clearly keen to be proactive in scoping out a flavor of cutting-edge tech with such major potential for impact.
“We think it’s important to try and nudge the market towards some of those more positive outcomes. And that’s what the set of proposed principles are trying to achieve,” said Hayter, adding: “But I really would emphasise they’re genuinely proposed at this stage.
“We produced the report but that means we can now get out and discuss both the content but also the principles with lots of different types of organisation… to see how those principles might be improved upon and also how they might be adopted to try and get to those more positive outcomes. So we’re pretty much at the beginning of the conversation, which we’re looking forward to having.”
The CMA received more than 70 responses following its call for input ahead of the review. Hayter would not be drawn on a breakdown of where this feedback came from but he suggested they’d heard from “a broad range of types of organisations — from the [AI] labs themselves, the big firms, some civil society, academics, a range of experts”, as well as conducting their own research to feed the report.
“We’ve pulled in quite a lot of varied inputs. But, again, what I’m really looking forward to now is now that we have our initial thoughts out there, and these principles, we can really use that as a bit of a framework for having the next round of conversations — and see how we might be able to work together, as collaborative as you possibly can with a wide range of types of organisations, to try and help get the market to the best possible place,” he added.
The U.K. government came up with its own set of principles to guide the development of AI back in March in its policy white paper. While the two lists have some overlap (the government’s five principles for AI are: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress), the CMA’s proposed principles are specifically targeted at potential risks that fall within its competition and consumer protection purview.
It’s also worth noting the CMA has not looked at the full spectrum of potential consumer concerns: it notes, for example, that issues like security and data protection are not considered in the initial review. Here it appears keen to stay well within its regulatory lane (issues like data security and privacy fall more obviously under the Information Commissioner’s Office, which is also issuing guidance for generative AI developers).
Asked about this potential gap, the CMA emphasized it will be working with the other U.K. regulators tasked with paying attention to AI under the government’s plan for developing context-specific guidance, pointing to the Digital Regulation Cooperation Forum (which was established by the CMA, ICO and Ofcom in July 2020, with the FCA joining as a full member in April 2021) as playing an important role in any relevant joint working.
Targeted approach
One wider question competition regulators may face is how to balance letting novel AI technologies (and business models) develop against responding to a sense of urgency in light of the tech’s scale and potency. And, well, there’s the need for speed to tackle what could be a new wave of problems being baked into digital markets that have, for years, been characterized by issues like tipping and unfair dealing, while consumers have also faced exploitative business models foisted on them by dominant platforms operating under their own self-serving T&Cs.
Such concerns lie behind the U.K. plan to reboot the domestic competition regime by adding proactive, bespoke rules that can be applied to the most powerful platforms by the DMU, where Hayter is a senior director.
The European Union already has its own flavor of ex ante digital competition reform up and running: the Digital Markets Act, which applies to Alphabet, Amazon, Apple, ByteDance, Meta and Microsoft. Germany, meanwhile, implemented its own update to competition rules targeting Big Tech a few years ago. So the U.K. is lagging behind peers in addressing tech giants’ market power.
On foundation models, Hayter suggested some possible negative scenarios vis-à-vis market impacts could flow from issues similar to those that have entrenched (current gen) Big Tech. However, he said there’s still too much uncertainty about how AI power plays will go to predict whether markets will track towards another wave of AI-fuelled concentration (i.e. powered by a few dominant foundation model makers) or blossom into the opposite: vibrant competition as businesses tap into the power of FMs.
Both scenarios are possible in his view.
“You could see scenarios where these models actually help newer challengers challenge big incumbent positions and that’ll be great… That could challenge those positions of market power,” he argued. “On the flip side, of course, depending on some of these issues that we’ve highlighted — things like how the access to key inputs might be controlled — you could get the opposite situation where actually these foundation models are a help to firms in currently strong positions to shore up those positions further.
“That will all really depend on the specific context and specific market, and you might see one scenario in one market and the opposite in another. So we need to be really, really focused on the evidence and on the specifics in the market, and be prepared to act when necessary but also not jump in too quickly.”
Despite what could be seen as an early intervention by the CMA to get a handle on emerging AI developments, Hayter’s top-line message is one that’s likely to reassure industry: The U.K. regulator won’t be rushing to rein in the cutting edge.
Any future rules (or even a set of confirmed principles) would need to be “very closely targeted”, he emphasized.
“We certainly need to be very measured, and I think whatever kind of regulation… whether it’s for competition or other reasons, that would need to be very closely targeted to specific questions and issues/problems based on real evidence,” he said, adding: “This report is certainly not suggesting jumping in and regulating… It’s trying to identify the kinds of things that would help realise the maximum potential from the technology and also to be aware of things to look out for.”
“We’re trying extremely hard not to prejudge what’s happening here,” he went on. “We’re trying to work out… what the specific drivers are that might push in one direction or another. So we do highlight access to data. And the driver that we explored in that area was whether access to proprietary data will become important. We have heard that at the moment, there’s a reasonable availability of publicly available data to train models on but it’s possible that, over time, the role of proprietary data becomes bigger — and then, as you say, that might play into the hands of companies that have large banks of proprietary data. But we don’t think that’s happened at this point. And it still might not happen — that depends on how the market develops. And similarly on the access to compute.
“That’s clearly an extremely important input to these models, which is why we highlighted access as one of these key principles. And… there’s work going on elsewhere to consider access to public cloud services in the work that Ofcom is doing and has previously proposed to send to us as a market investigation. So we’ll wait to see the outcome of that.”
“There’s a broader question about the ability of the current frameworks to respond to new developments,” he also told us during our interview. “So you will have seen us in a number of contexts explain that the existing tools — i.e. competition, and enforcement of competition and consumer law — sometimes can be a bit slow to respond to some of the specific developments. And that’s what the progress of the Digital Markets, Competition and Consumers Bill through parliament is intended to help with. But just to emphasise that again… it is all about being very targeted at very specific problems and with an intentionally high hurdle for taking any action, which is the concept of Strategic Market Status in that framework.
“The Digital Markets, Competition and Consumers Bill — the Strategic Market Status framework — gives us the broad structure to be able to tackle some of these problems that arise in digital markets. But we certainly wouldn’t want to rush in too quickly and do anything here [with FMs] because we still think this market could develop in more positive directions, particularly if these principles are backed up.”