As expected, the AI frenzy shaped this year’s biggest tech conference in Europe, the Web Summit, which took place in Lisbon over three days starting from November 14.
From startups building products on large language models (LLMs) to panels weighing the risks and opportunities of the technology, how to regulate AI was a major theme throughout the event.
However, according to Brittany Kaiser, founder of Own Your Data, data privacy protection rules need to come first.
How not to regulate AI
Kaiser said she was shocked to see the White House sign off on an executive order to regulate AI without basic legal definitions of the pieces that make up this advanced software.
“If we are not regulating how personal data is used—custodianship, fiduciary responsibilities, data ownership, data transfer laws—if we don’t have all of that federally, how are we going to start talking about advanced predictive algorithms, or artificial intelligence that is actually the most advanced use-case of basic use of data?” she pointed out during a panel on how not to regulate tech, held on the first day of the Web Summit.
Kaiser knows the implications of misusing personal data first-hand: she was the main whistleblower in the Facebook-Cambridge Analytica scandal. A former employee of the infamous UK political consulting firm, she testified before a UK Parliament committee in 2018 that the data of over 87 million Facebook users had been compromised and used to target political campaigns. Yet, five years later, the US still lacks a comprehensive federal privacy law.
According to Kaiser, data protection and privacy have been “weaponized as a political tool,” which seems to have frozen the US legislative path toward privacy. A few states have since enacted their own rules on data collection practices, with California’s being the strongest. Federally, though, the American Data Privacy and Protection Act (ADPPA) has sat untouched among proposed bills since December 2022, at the time of writing.
This privacy mess becomes even more convoluted when it comes to regulating AI.
“Perhaps, if we are lucky, that actually means that we will get data protection definitions in law, because we have to do that first in order to start talking about advanced use of data that is artificial intelligence,” said Kaiser.
US President Joe Biden signed an executive order at the end of October with directives on safety, privacy, civil rights and consumer protections. He called it “the most significant action” toward the safe deployment of AI, The Guardian reported.
About a week later, UK Prime Minister Rishi Sunak invited Biden and many other world leaders to Bletchley Park, Buckinghamshire, for the AI Safety Summit. The two days of talks culminated in the Bletchley Declaration, in which the EU and 28 nations, including the US, agreed to cooperate on AI safety. However, the world’s first comprehensive AI law, the EU AI Act, is currently causing disagreements among member states.
“I really think we are going about this in a completely wrong way,” said Kaiser. “It really had to start with the definitions of every single piece of the puzzle. So that, when we start to create new regulations, we can place where the data has been used in the technological process.”
Lawmakers, more often than not, don’t fully understand how the technology works. Misguided regulation can prevent developers and entrepreneurs from legally harnessing the potential software can offer. Or, worse, in some instances it can make users less safe.
Talking to the tech community, Kaiser said: “The best type of regulation is regulation that’s formulated with technologists that understand how the technology functions. You understand your technology better than the people who are regulating it. So engage, explain why you support something or you don’t, what you would like to see and do or not, and that’s how we get the best regulation possible.”