Earlier this month, Google added deepfake training to the list of projects forbidden on its Colaboratory service. The change was first spotted by a DeepFaceLab (DFL) developer who goes by the name ‘chervonij’ on Discord. When he tried to train his deepfake models on the platform, he received an error message:
“You may be executing code that is disallowed, and this may restrict your ability to use Colab in the future. Please note the prohibited actions specified in our FAQ.”
Google appears to have made the change under the radar and has since remained quiet on the matter. While ethics may be the first explanation that comes to mind, the actual reason might be more pragmatic.
Abusing the free resource
Deepfakes are “photoshopped” videos – fake videos showing people saying things they never actually said. Their creators leverage artificial intelligence (AI) and machine learning (ML) to produce highly convincing videos that are increasingly difficult to distinguish from legitimate content.
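Under the hood, consumer face-swap tools like DFL are commonly built around an autoencoder trained on footage of two people. The following PyTorch snippet is an illustrative toy, not DFL’s actual code: a single shared encoder paired with one decoder per identity, where the swap happens at inference time by decoding one person’s face with the other person’s decoder.

```python
# Illustrative sketch of the shared-encoder, per-identity-decoder design
# commonly used for face swapping. Network sizes are toy values.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training: each decoder learns to reconstruct its own person's faces
# through the shared encoder, using a pixel-wise reconstruction loss.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in batch of person A's faces
recon_a = decoder_a(encoder(faces_a))
loss_a = nn.functional.mse_loss(recon_a, faces_a)

# Inference (the actual swap): encode person A's face, then decode it
# with person B's decoder to render B's face in A's pose and expression.
swapped = decoder_b(encoder(faces_a))
```

Training two such decoders against hours of video is exactly the kind of long-running, GPU-hungry job that free Colab sessions were being used for.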
To be convincing, though, deepfakes require significant computing power, not unlike what the Colab service offers. This Google project lets users run Python code in the browser, with free access to computing resources including GPUs.
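That free hardware is the draw. A typical first cell in a Colab training notebook (a minimal sketch, assuming the optional GPU runtime is enabled) just confirms a GPU is attached before kicking off a job:

```python
# Check whether Colab has attached a free GPU to this session.
import torch

if torch.cuda.is_available():
    print("GPU runtime:", torch.cuda.get_device_name(0))
else:
    print("No GPU attached; select one via Runtime > Change runtime type.")
```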
As deepfakes are usually used to crack jokes, spread fake news, or create revenge porn, it’s easy to assume ethics are behind Google’s decision. However, it might also be that too many people were using Colab to create fun little deepfake videos, crowding out researchers doing more “serious” work. After all, the computing resources are free to use.
Besides deepfakes, Google doesn’t allow Colab to be used for projects such as mining cryptocurrency, running denial-of-service attacks, cracking passwords, using multiple accounts to work around access or resource usage restrictions, using a remote desktop or SSH, or connecting to remote proxies.