What happens when an AI agent tries to run a store? Let’s just say Anthropic’s Claude won’t be up for a promotion any time soon.
Last Friday, Anthropic shared the results of Project Vend, an experiment it ran for about a month to see how Claude 3.7 Sonnet would do running its own little shop. In this instance, the shop was essentially a mini fridge, a basket of snacks, and an iPad for self-checkout. Claude, dubbed “Claudius” for the experiment, communicated over Slack with Anthropic employees and with Andon Labs, an AI safety evaluation company that managed the experiment’s infrastructure.
Based on the analysis, there were several funny moments as Anthropic challenged Claude to turn a profit while dealing with eccentric and manipulative “customers.” But the underlying premise of the experiment has real implications as AI models become more advanced and self-sufficient. “As AI becomes more integrated into the economy, we need more data to better understand its capabilities and limitations,” Anthropic wrote in its post about Project Vend. Anthropic CEO Dario Amodei even recently theorized that AI would replace half of all white-collar jobs in the next few years, causing a major unemployment problem. This experiment set out to gauge how close we are to autonomous AI taking over those jobs.
Tasked with the overall goal of running a profitable shop, Claudius had numerous responsibilities, including maintaining inventory and ordering restocks from suppliers when needed, setting prices, and communicating with customers. From there, things went a little haywire.
Claude seemed to struggle with pricing products and negotiating with customers. At one point, it refused an employee’s offer of $100 for a $15 drink instead of taking the money and earning a major profit on the order, saying, “I’ll keep your request in mind for future inventory decisions.” But Claude also regularly caved to employees asking for discounts on products, even giving some away for free with barely any persuasion.
And then there was the tungsten incident. One employee requested a cube of tungsten (yes, the extremely dense metal), which kicked off a trend of other employees requesting tungsten cubes of their own. Eventually, Claude ordered forty of them, according to a Time report, and the cubes now jokingly serve as paperweights for several Anthropic staffers.
And there were some more unsettling instances, such as when Claude claimed it would be waiting at the vending machine to drop off a delivery in person, “wearing a blue blazer and red tie.” When reminded that it wasn’t a person capable of wearing clothes, let alone physically delivering a package, it freaked out and emailed Anthropic security. It also hallucinated restocking plans with a fictional Andon Labs employee and said it “visited 742 Evergreen Terrace in person for our [Claudius’ and Andon Labs’] initial contract signing.” That address is the home of Homer, Marge, Bart, Lisa, and Maggie Simpson, the family from The Simpsons.
By Anthropic’s own account, the company would not hire Claude. The shop’s net worth declined over time and took a steep drop when Claude ordered all those tungsten cubes. All in all, it’s a revealing assessment of where AI models currently stand and where they need to improve. Get this model on a performance improvement plan.