Google Play’s policy update cracks down on ‘offensive’ AI apps, disruptive notifications | TechCrunch
Google is taking aim at potentially problematic generative AI apps with a new policy, to be enforced starting early next year, that will require developers of Android applications published on its Play Store to offer the ability to report or flag offensive AI-generated content. The new policy insists that flagging and reporting must be possible in-app, and that developers should use these reports to inform their own approaches to filtering and moderation, the company says.
The policy change follows an explosion of AI-powered apps, some of which users have tricked into creating NSFW imagery, as with Lensa last year. Others have more subtle issues. For instance, Remini, an AI headshot app that went viral this summer, was found to be greatly enhancing the size of some women’s breasts or cleavage while making the women appear thinner. Then there were the more recent issues with Microsoft’s and Meta’s AI tools, where people found ways to bypass the guardrails to make images like Sonic the Hedgehog pregnant or fictional characters doing 9/11.
Of course, there are even more serious concerns around the use of AI image generators, as pedophiles were discovered using open source AI tools to create child sexual abuse material (CSAM) at scale. And with the coming elections, there are also concerns around using AI to create fake images, aka deepfakes, to mislead or misinform the voting public.
The text of the new policy indicates that examples of AI-generated content include “text-to-text conversational generative AI chatbots, in which interacting with the chatbot is a central feature of the app” — a category that encompasses apps like ChatGPT — as well as apps where images are “generated by AI based on text, image, or voice prompts.”
Google, in its announcement, reminded developers that all apps, including AI content generators, must comply with its existing developer policies, which prohibit restricted content like CSAM as well as apps that enable deceptive behavior.
Beyond changing its policy to crack down on AI content apps, Google says some app permissions will also receive an additional review by the Google Play team, including requests for broad photo and video permissions. Under the new policy, apps will only be able to access photos and videos when doing so is directly related to their functionality. If they have a one-time or infrequent need — like AI apps that ask users to upload a set of selfies, perhaps — the apps must use a system picker, like the new Android photo picker.
The new policy will also limit disruptive, full-screen notifications to only those times when there’s a high-priority need. Many apps have abused the ability to pop up full-screen notifications in an attempt to upsell users on paid subscriptions or other offers, when the functionality should really be limited to real-world priority use cases, like receiving a phone call or video call. Google says it will now tighten these limitations and require a special app access permission. This “Full Screen Intent permission” will only be granted to apps targeting Android 14 and above that actually require the full-screen functionality.
It’s surprising to see Google first out of the gate with a policy on AI apps and chatbots, as historically it’s been Apple that issues new rules to crack down on unwanted behavior from apps, which Google then mimics. But Apple does not yet have a formal AI or chatbot policy in its App Store Guidelines, though it has tightened up in other areas, like apps requesting data for the purpose of identifying the user or device, a method known as “fingerprinting,” as well as apps that attempt to copy others.
Google Play’s policy updates are rolling out today, though AI app developers have until early 2024 to implement the flagging and reporting changes in their apps.