This Week in AI: With Chevron’s demise, AI regulation seems dead in the water | TechCrunch
Hiya, folks, and welcome to TechCrunch’s regular AI newsletter.
This week in AI, the U.S. Supreme Court struck down “Chevron deference,” a 40-year-old ruling on federal agencies’ power that required courts to defer to agencies’ interpretations of congressional laws.
Chevron deference let agencies make their own rules when Congress left aspects of its statutes ambiguous. Now the courts will be expected to exercise their own legal judgment — and the effects could be wide-reaching. Axios’ Scott Rosenberg writes that Congress — hardly the most functional body these days — must now effectively attempt to predict the future with its legislation, as agencies can no longer apply basic rules to new enforcement circumstances.
And that could kill attempts at nationwide AI regulation for good.
Already, Congress was struggling to pass a basic AI policy framework — to the point where state regulators on both sides of the aisle felt compelled to step in. Now any regulation it writes will have to be highly specific if it’s to survive legal challenges — a seemingly intractable task, given the speed and unpredictability with which the AI industry moves.
Justice Elena Kagan brought up AI specifically during oral arguments:
Let’s imagine that Congress enacts an artificial intelligence bill and it has all kinds of delegations. Just by the nature of things and especially the nature of the subject, there are going to be all kinds of places where, although there’s not an explicit delegation, Congress has in effect left a gap. … [D]o we want courts to fill that gap, or do we want an agency to fill that gap?
Courts will fill that gap now. Or federal lawmakers will consider the exercise futile and put their AI bills to rest. Whatever the outcome ends up being, regulating AI in the U.S. just became orders of magnitude harder.
News
Google’s environmental AI costs: Google has issued its 2024 Environmental Report, an 80-plus-page document describing the company’s efforts to apply tech to environmental issues and mitigate its negative contributions. But it dodges the question of how much energy Google’s AI is using, Devin writes. (AI is notoriously power hungry.)
Figma disables design feature: Figma CEO Dylan Field says that Figma will temporarily disable its “Make Design” AI feature, which users said was ripping off the design of Apple’s Weather app.
Meta changes its AI label: After Meta started tagging photos with a “Made with AI” label in May, photographers complained that the company had been applying labels to real photos by mistake. Meta is now changing the tag to “AI info” across all of its apps in an attempt to placate critics, Ivan reports.
Robot cats, dogs and birds: Brian writes about how New York state is distributing thousands of robot animals to the elderly amid an “epidemic of loneliness.”
Apple bringing AI to the Vision Pro: Apple plans to go beyond the previously announced Apple Intelligence launches on the iPhone, iPad and Mac. According to Bloomberg’s Mark Gurman, the company is also working to bring these features to its Vision Pro mixed-reality headsets.
Research paper of the week
Text-generating models like OpenAI’s GPT-4o have become table stakes in tech. Rare are the apps that don’t make use of them these days, for tasks that range from completing emails to writing code.
But despite the models’ popularity, how these models “understand” and generate human-sounding text isn’t settled science. In an effort to peel back the layers, researchers at Northeastern University looked at tokenization, or the process of breaking down text into units called tokens that the models can more easily work with.
Today’s text-generating models process text as a series of tokens drawn from a set “token vocabulary,” where a token might correspond to a single word (“fish”) or a piece of a larger word (“sal” and “mon” in “salmon”). The vocabulary of tokens available to a model is typically determined before training, based on the characteristics of the data used to train it. But the researchers found evidence that models also develop an implicit vocabulary that maps groups of tokens — for instance, multi-token words like “northeastern” and the phrase “break a leg” — to semantically meaningful “units.”
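To make the idea concrete, here’s a minimal sketch of tokenization using a greedy longest-match strategy against a tiny, hypothetical vocabulary. Real tokenizers use learned vocabularies (e.g., from byte-pair encoding) with tens of thousands of entries; the vocabulary and matching rule here are illustrative only.

```python
# Toy illustration of tokenization: greedily match the longest substring
# found in a (hypothetical) token vocabulary. Real models learn their
# vocabularies from training data and use byte-level fallbacks.
VOCAB = {"fish", "sal", "mon", "north", "eastern", "break", "a", "leg", " "}

def tokenize(text: str, vocab: set[str]) -> list[str]:
    """Split text into tokens by repeatedly taking the longest vocab match."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible substring first, shrinking until a match.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # No vocab match: fall back to a single character.
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("salmon", VOCAB))        # ['sal', 'mon']
print(tokenize("northeastern", VOCAB))  # ['north', 'eastern']
```

The point of the researchers’ finding is that even though “northeastern” enters the model as two separate tokens like these, the model appears to internally treat the pair as one semantic unit.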
On the back of this evidence, the researchers developed a technique to “probe” any open model’s implicit vocabulary. From Meta’s Llama 2, they extracted phrases like “Lancaster,” “World Cup players” and “Royal Navy,” as well as more obscure terms like “Bundesliga players.”
The work hasn’t been peer-reviewed, but the researchers believe it could be a first step toward understanding how lexical representations form in models — and serve as a useful tool for uncovering what a given model “knows.”
Model of the week
A Meta research team has trained several models to create 3D assets (i.e., 3D shapes with textures) from text descriptions, fit for use in projects like apps and video games. While there are plenty of shape-generating models out there, Meta claims its are “state-of-the-art” and support physically based rendering, which lets developers “relight” objects to give the appearance of one or more lighting sources.
The researchers combined two models, AssetGen and TextureGen, both inspired by Meta’s Emu image generator, into a single pipeline called 3DGen to generate shapes. AssetGen converts text prompts (e.g., “a t-rex wearing a green wool sweater”) into a 3D mesh, while TextureGen ups the “quality” of the mesh and adds a texture to yield the final shape.
3DGen, which can also be used to retexture existing shapes, takes about 50 seconds from start to finish to generate one new shape.
“By combining [these models’] strengths, 3DGen achieves very-high-quality 3D object synthesis from textual prompts in less than a minute,” the researchers wrote in a technical paper. “When assessed by professional 3D artists, the output of 3DGen is preferred a majority of time compared to industry alternatives, particularly for complex prompts.”
Meta appears poised to incorporate tools like 3DGen into its metaverse game development efforts. According to a job listing, the company is seeking to research and prototype VR, AR and mixed-reality games created with the help of generative AI tech — including, presumably, custom shape generators.
Grab bag
Apple could get an observer seat on OpenAI’s board as a result of the two firms’ partnership announced last month.
Bloomberg reports that Phil Schiller, the Apple executive in charge of the App Store and Apple events, will join OpenAI’s board of directors as its second observer after Microsoft’s Dee Templeton.
Should the move come to pass, it’ll be a remarkable show of power on the part of Apple, which plans to integrate OpenAI’s AI-powered chatbot platform ChatGPT with many of its devices this year as part of a broader suite of AI features.
Apple won’t be paying OpenAI for the ChatGPT integration, reportedly having made the argument that the PR exposure is as valuable as — or more valuable than — cash. In fact, OpenAI might end up paying Apple; Apple is said to be mulling over a deal wherein it’d get a cut of revenue from any premium ChatGPT features OpenAI brings to Apple platforms.
So, as my colleague Devin Coldewey pointed out, that puts OpenAI’s close collaborator and major investor Microsoft in the awkward position of effectively subsidizing Apple’s ChatGPT integration — with little to show for it. What Apple wants, it gets, apparently — even if that means friction its partners have to smooth over.