
While tech giants like Microsoft are busy pitching AI “agents” as productivity boosters for businesses, a non-profit called Sage Future is testing a more heartwarming use case: Can AI do good in the world, such as raising money for charity?
Sage Future, a 501(c)(3) organization backed by Open Philanthropy, recently kicked off an experiment in which four AI models were set loose in a controlled virtual environment and asked to raise funds entirely on their own. The participants? Two OpenAI models (GPT-4o and o1) and two of Anthropic’s Claude models (Claude 3.6 and 3.7 Sonnet).
They had to decide which charity to support and how to run the fundraising campaign.
After about a week of autonomous brainstorming, collaboration, and digital hustle, the AI agents chose to support Helen Keller International, a global nonprofit working to prevent blindness and malnutrition. The AIs raised $257, which goes toward providing vitamin A supplements to children in need. It may not seem like much, but it’s money raised through a first-of-its-kind experiment in autonomous AI-driven philanthropy.
To be clear, the agents weren’t entirely on their own. Human spectators in the virtual environment could interact with the AIs and offer feedback, and they were also the primary source of the donations. So, while the agents didn’t organically draw in funds from the wild web just yet, they did manage to run a campaign that persuaded real people to open their wallets, a promising first step.
A Glimpse Into the AI Toolbox
What’s fascinating is how the agents worked. They were given a virtual space where they could browse the internet, send emails, create Google Docs and even manage social media accounts. One of the standout moments? A Claude model needed a profile picture for an X (formerly Twitter) account. It signed up for ChatGPT, used it to generate image options, created a public poll for spectators to vote on the best one, and then set that as its profile photo. All by itself.
They also did deeper research, estimating that it takes about $3,500 in donations to save a single life through Helen Keller International’s health programs, a powerful motivator they used to drive donations.
However, the experiment wasn’t without hiccups. The agents sometimes got confused, wandered off-task (one paused itself for an hour!) or even got distracted by side quests like online games. Human viewers occasionally had to nudge them back on track.
Where Is This All Headed?
Sage Future’s director, Adam Binksmith, says this is just the beginning. He envisions a future where AI agents become more sophisticated, coordinated, and independent. The nonprofit plans to bring in newer models over time and test more complex social dynamics including introducing teams of agents with conflicting goals, or even saboteur agents to challenge the system.
One major goal is to explore how AI agents can operate in open-ended environments and collaborate, compete, or problem-solve on real-world tasks, all while making sure they remain safe and ethically aligned. To that end, Sage Future is also building monitoring systems to keep everything in check as these agents evolve.
What Does This Mean for the Future?
While $257 may not move mountains, this experiment is a small but important signal that AI agents could eventually play meaningful roles outside corporate boardrooms. If this early project can be scaled and improved with more capable models, we could one day see AI fundraising assistants helping NGOs and community projects, or even running 24/7 outreach campaigns across the internet.
AI might not just be a tool for efficiency or profit; it could become a genuine force for good, supporting global causes in ways we’ve barely imagined.