
Businesses including Stitch Fix are already experimenting with DALL-E 2


It’s been just a few weeks since OpenAI began allowing customers to commercially use images created by DALL-E 2, its remarkably powerful AI text-to-image system. But in spite of the current technical limitations and the lack of volume licensing, not to mention an API, some pioneers say they’re already testing the system for various business use cases — awaiting the day when DALL-E 2 becomes stable enough to deploy into production.

Stitch Fix, the online service that uses recommendation algorithms to personalize apparel, says it has experimented with DALL-E 2 to visualize its products based on specific characteristics like color, fabric and style. For example, if a Stitch Fix customer asked for a “high-rise, red, stretchy, skinny jean” during the pilot, DALL-E 2 was tapped to generate images of that item, which a stylist could use to match with a similar product in Stitch Fix’s inventory.
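Stitch Fix hasn’t shared how the pilot was wired up, but the workflow it describes — structured product attributes turned into a text prompt, then into candidate images for a stylist — is simple to sketch. The snippet below is only a hypothetical illustration; the ProductRequest fields and the build_prompt and generate_candidates helpers are assumptions, with generate_candidates standing in for whatever DALL-E 2 access the team actually used.

```python
# Hypothetical sketch (not Stitch Fix's actual code): compose a DALL-E 2 style
# prompt from the attributes a client mentions, then fetch candidate images
# that a stylist could compare against inventory.
from dataclasses import dataclass


@dataclass
class ProductRequest:
    color: str
    fabric: str
    fit: str
    item: str


def build_prompt(req: ProductRequest) -> str:
    """Turn the structured attributes into a natural-language prompt."""
    return f"studio photo of a {req.color}, {req.fabric}, {req.fit} {req.item} on a plain background"


def generate_candidates(prompt: str, n: int = 4) -> list[str]:
    """Stand-in for a DALL-E 2 call; returns placeholder image URLs here."""
    return [f"https://images.example.com/generated/{i}.png" for i in range(n)]


request = ProductRequest(color="red", fabric="stretch denim", fit="high-rise skinny", item="jean")
prompt = build_prompt(request)
print(prompt)                      # the text handed to the image model
print(generate_candidates(prompt))  # candidate visuals for a stylist to review
```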

“DALL-E 2 helps us surface the most informative characteristics of a product in a visual way, ultimately helping stylists find the perfect item that matches what a client has requested in their written feedback,” a spokesperson told TechCrunch via email.

A DALL-E 2 generation from Stitch Fix’s pilot. The prompt was: “soft, olive green, great color, pockets, patterned, cute texture, long, cardigan.”

Of course, DALL-E 2 has quirks — some of which are giving early corporate users pause. Eric Silberstein, the VP of data science at e-commerce startup Klaviyo, outlines in a blog post his mixed impressions of the system as a potential marketing tool.

He notes that facial expressions on human models generated by DALL-E 2 tend to look off, that muscles and joints come out disproportionate, and that the system doesn’t always perfectly understand instructions. When Silberstein asked DALL-E 2 to create an image of a candle on a wooden table against a gray background, DALL-E 2 sometimes erased the candle’s lid and blended it into the table, or added an incongruous rim around the candle.

Silberstein’s experiments with DALL-E 2 for product visualization.

“For photos with humans and photos of humans modeling products, it could not be used as is,” Silberstein wrote. Still, he said he’d consider using DALL-E 2 for tasks like giving starting points for edits and conveying ideas to graphic artists. “For stock photos without humans and illustrations without specific branding guidelines, DALL·E 2, to my non-expert eye, could reasonably replace the ‘old way’ right now,” Silberstein continued.

Editors at Cosmopolitan came to a similar conclusion when they teamed up with digital artist Karen X. Cheng to create a cover for the magazine using DALL-E 2. Arriving at the final cover took very specific prompting from Cheng, which the editors said is illustrative of DALL-E 2’s limitations as an art generator.

But the AI weirdness works sometimes — as a feature, rather than a bug. For its Draw Ketchup campaign, Heinz had DALL-E 2 generate a series of images of ketchup bottles using natural language terms like “ketchup,” “ketchup art,” “fuzzy ketchup,” “ketchup in space,” and “ketchup renaissance.” The company invited fans to send their own prompts, which Heinz curated and shared across its social channels.

Heinz bottles as “imagined” by DALL-E 2, part of Heinz’s recent ad campaign.

“With AI imagery dominating news and social feeds, we saw a natural opportunity to extend our ‘Draw Ketchup’ campaign — rooted in the insight that Heinz is synonymous with the word ketchup — to test this theory in the AI space,” Jacqueline Chao, senior brand manager for Heinz, said in a press release.

Clearly, DALL-E 2-driven campaigns can work when AI is the subject. But several DALL-E 2 business users say they’ve wielded the system to generate assets that don’t bear the telltale signs of AI constraints.

Jacob Martin, a software engineer, used DALL-E 2 to create a logo for OctoSQL, an open source project he’s developing. For around $30 — roughly the cost of logo design services on Fiverr — Martin ended up with a cartoon image of an octopus that looks human-illustrated to the naked eye.

“The end result isn’t ideal, but I’m very happy with it,” Martin wrote in a blog post. “As far as DALL-E 2 goes, I think right now it’s still very much in a ‘first iteration’ phase for most bits and purposes — the main exception being pencil sketches; those are mind-blowingly good … I think the real breakthrough will come when DALL-E 2 gets 10x-100x cheaper and faster.”

The OctoSQL logo, generated after several attempts with DALL-E 2.

One DALL-E 2 user — Don McKenzie, the head of design at dev startup Deephaven — took the idea a step further. He tried using the system to generate thumbnail images for posts on the company’s blog, motivated by the idea that posts with images get much more engagement than those without.

“As a small team of mostly engineers, we don’t have the time or budget to commission custom artwork for every one of our blog posts,” McKenzie wrote in a blog post. “Our approach so far has been to spend 10 minutes scrolling through tangentially related but ultimately ill-fitting images from stock photo sites, download something not terrible, slap it in the front matter and hit publish.”

After spending a weekend and $45 in credits, McKenzie says he was able to replace the images on 100 or so blog posts with DALL-E 2-generated ones. It took some finagling with the prompts to get the best results, but McKenzie says it was well worth the effort.

“On average, I would say it took a couple of minutes and about four to five prompts per blog post to get something I was happy with,” he wrote. “We were spending more in money and time on stock images a month, with a worse result.”

For companies without the time to spend on brainstorming prompts, there’s already a startup trying to commercialize DALL-E 2’s asset-generating capabilities. Unstock.ai, built on top of DALL-E 2, promises “high-quality images and illustrations on demand” — for no charge, at the moment. Customers enter a prompt (e.g., “Top view of three goldfish in a bowl”) and then choose a preferred style (vector art, photorealistic, penciled) to create images, which can be cropped and resized.

Unstock.ai essentially automates prompt engineering, a concept in AI that involves embedding a description of the task in the text fed to the model. The idea is to give an AI system detailed enough instructions that it reliably accomplishes what’s being asked of it; in general, the results for a prompt like “Film still of a woman drinking coffee, walking to work, telephoto” will be much more consistent than for “A woman walking.”
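Unstock.ai hasn’t published how it rewrites prompts, but the general idea of automating prompt engineering is easy to sketch: take a user’s short description and a chosen style, then expand them into a more detailed prompt that tends to produce consistent results. The snippet below is a minimal, hypothetical illustration — the STYLE_MODIFIERS table and the engineer_prompt helper are invented for the example, not Unstock.ai’s actual logic.

```python
# Hypothetical sketch of automated prompt engineering: expand a short user
# description plus a chosen style into a more detailed, more reliable prompt.
STYLE_MODIFIERS = {
    "photorealistic": "studio photograph, sharp focus, natural lighting, high detail",
    "vector art": "flat vector illustration, clean lines, solid colors",
    "penciled": "pencil sketch, hand-drawn, cross-hatched shading",
}


def engineer_prompt(description: str, style: str) -> str:
    """Append style and framing cues so generations come out more consistent."""
    # Fall back to no modifiers if the style isn't one we know about.
    modifiers = STYLE_MODIFIERS.get(style, "")
    return f"{description}, {modifiers}".rstrip(", ")


print(engineer_prompt("Top view of three goldfish in a bowl", "vector art"))
# -> "Top view of three goldfish in a bowl, flat vector illustration, clean lines, solid colors"
```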

It’s likely a harbinger of applications to come. When contacted for comment, OpenAI declined to share numbers around DALL-E 2’s business users. But anecdotally, the demand appears to be there. Unofficial workarounds to DALL-E 2’s lack of API have sprung up across the web, strung together by devs eager to build the system into apps, services, websites, and even video games.


