
At Stripe Sessions, OpenAI CEO Sam Altman sat down with Stripe CEO Patrick Collison for a wide-ranging conversation about AI, startups, compute infrastructure, and the future of work. The discussion covered everything from why coding AI suddenly got good to how companies should think about adopting AI internally. Here are the key takeaways from their fireside chat.
Patrick Collison opened by noting that Stripe's metrics went parabolic starting in late 2024 and continuing into early 2025. The curves changed shape. Something clearly shifted. He asked Altman directly: why did coding models suddenly start to work?
Altman pointed to several converging factors. Raw model intelligence, meaning the reasoning horsepower, hit a critical level. There was enough of a feedback loop from people actually using the models for code. The training data reached sufficient depth. And once several labs crossed the threshold at roughly the same time, the knowledge that it was possible drove everyone to push harder.
This is a pattern Altman has seen before. He compared it to the launch of ChatGPT with GPT-3.5. Why did that specific model cause the world to flip from "not that impressive" to "this changes everything"? He still can't fully explain it. You just feel it when a model crosses that subjective line.
Collison asked what domain would feel like it has the next big unlock after coding. Altman's answer was surprising. He said it won't be a specific domain. It will be the realization of how much time people waste just trying to use a computer.
Think about how much of your day involves clicking between messaging apps, copying and pasting, responding to boring things you could clearly automate. Altman said the degree to which people will be able to sit back and watch an AI handle their drudgery will surprise them.
He also made a personal observation. Actually working this way, letting AI handle the small stuff, gave him more enjoyment from work. He didn't realize how much the little tasks dragged him down and pulled him out of a productive flow state. The subjective quality of life improvement was huge.
The most dedicated Codex users are still using it for coding. But Altman noted a tidal wave of new users coming in recently, and the depth of what they're using it for has surprised him. OpenAI's ambition is for Codex to handle all the work you do in front of a computer, not just coding.
Altman estimated they're about 10% of the way there for non-coding use cases. But now that they have a real user base pushing into these other areas, he expects progress to come very fast. The pattern is clear: once you have users doing something, you can iterate quickly to make it better.
If you're interested in building the automations and AI-powered workflows that are reshaping how work gets done, the Complete RPA Bootcamp teaches you exactly these skills, from Robotic Process Automation to agentic automation and enterprise orchestration.
One of the most practical parts of the conversation focused on what separates companies that are making the most effective use of AI from everyone else. Altman shared specific patterns he's observed from OpenAI's customers, both large and small.
Altman highlighted Toby Lütke of Shopify as the first CEO he knew who went all in on AI. Lütke didn't just issue a mandate. He got his hands dirty, personally building AI automations and making his team do the same. It wasn't a token leaderboard or a gamified initiative. It was the CEO saying: we are putting AI into everything we do, and I won't be happy if you're not doing that.
This top-down energy has since been replicated by other companies. But the pattern is consistent. When the CEO of a company says "we're going to automate ourselves" and then actually holds people to it, ideally doing it themselves, it works extremely well. There's a fractal effect throughout the organization.
OpenAI is now experimenting with sending an FDE (forward-deployed engineer) to work hands-on with CEOs, helping them automate as much of their own job as possible. The theory is that if you automate the leader's work, the benefits cascade through the entire company.
The second pattern Altman observed was being "uncomfortably permissive" with data access. He was careful to stop short of a recommendation, but he described what he's seen from the most effective companies.
These companies record their meetings. They let AI access their codebase, every Slack message, every email. Every employee gets to use it that way. The results are remarkable. But Altman acknowledged the tension. This is easier for small startups than for companies with sensitive data and compliance requirements.
He doesn't know how the world will decide the trade-offs between data privacy and AI efficiency. Some regulation will probably need to change. But the power of full data access for AI is undeniable. The companies doing it are operating at a completely different level.
Collison shared an example from Tempo, a new blockchain project that Stripe incubated with Paradigm. The Tempo team, just a couple dozen people, set up an AI orchestration tool in their Slack that handles essentially everything. You can ask it to read Google Docs, turn them into Linear tasks, write pull requests, deploy them, and then use log analysis tools to verify the deployment worked.
Collison described watching a small organization do everything in a single Slack channel as "extremely trippy." He noted it was the first time he had the experience Altman was describing, where it's clearly incredible for them even if it's hard to see how to transpose it to a larger organization like Stripe.
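The Tempo setup isn't publicly documented, but the workflow Collison describes, a doc going in and tasks, a pull request, a deploy, and a log check coming out, maps onto a simple chain of agent steps. A minimal Python sketch of that shape; every function name and step here is hypothetical, standing in for real integrations (Google Docs, Linear, GitHub, a deploy system, log search):

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowResult:
    tasks: list = field(default_factory=list)
    pr_url: str = ""
    deployed: bool = False
    verified: bool = False

# Each step is a stub for a real integration. The point is the
# chained hand-off inside one orchestrator, not the specific APIs.

def read_doc(doc_id: str) -> str:
    # Would fetch the Google Doc; here we return a canned spec.
    return f"Spec for {doc_id}: add retry logic to payment webhook"

def doc_to_tasks(spec: str) -> list:
    # In the real setup an LLM would decompose the spec into tasks.
    return [spec.split(": ", 1)[1]]

def open_pull_request(tasks: list) -> str:
    # Placeholder URL; a real agent would write and push a branch.
    return f"https://example.com/pr/{len(tasks)}"

def deploy(pr_url: str) -> bool:
    return pr_url.startswith("https://")

def verify_via_logs(deployed: bool) -> bool:
    # Stand-in for querying log-analysis tooling after the deploy.
    return deployed

def run_workflow(doc_id: str) -> WorkflowResult:
    spec = read_doc(doc_id)
    result = WorkflowResult(tasks=doc_to_tasks(spec))
    result.pr_url = open_pull_request(result.tasks)
    result.deployed = deploy(result.pr_url)
    result.verified = verify_via_logs(result.deployed)
    return result
```

The design choice worth noticing is that every stage hands a verifiable artifact to the next one, which is what lets a single Slack channel act as the interface to the whole pipeline.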
Altman agreed that there's a missing abstraction for how humans and AIs will interface at massive scale. Small companies have an advantage because they can operate as mostly-AI organizations without having to figure out the interface with hundreds or thousands of people. But he's confident the industry will figure it out.
This is exactly the kind of automation work that's becoming a career path. If you want to be the person building these AI orchestration systems rather than being replaced by them, the Complete RPA Bootcamp can help you get there, taking you from beginner to professional in RPA, agentic automation, and enterprise orchestration.
Altman shared some revealing thoughts about how he wants OpenAI to operate as a business, the massive infrastructure investment required, and why he doesn't think we're in a capex bubble.
Collison raised the concern that AI labs might progress up the stack, consuming the entire software value chain. Altman's response was direct: some labs do want that. OpenAI doesn't.
He said he admires Stripe's model because it's clearly aligned with its customers. Stripe makes more money when its customers make more money. He wants to get to a similar model for OpenAI, functioning as an infrastructure provider, potentially a forever low-margin business as long as it's huge and growing fast.
The vision is to supply what he called an "intelligence meter" that companies can buy to automate things, accelerate operations, and build products. He wants OpenAI aligned with the success of the entire distributed economic engine of the world. And he pointed out that switching costs in AI are low. You've seen how easy it is for people to switch from one coding product to another. That's actually a consequence of AI getting smarter.
Altman acknowledged that the AI infrastructure buildout will be the most expensive infrastructure project the world has ever undertaken. Revenue is ramping to meet it, and efficiency gains have been incredible, delivering far more per GPU than expected.
But demand goes up more than linearly as you drop the price of each unit of intelligence. So the question of "what is enough" doesn't have a good answer. Demand for intelligence at a low enough price is effectively uncapped.
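One way to see why "what is enough" has no answer: if demand for intelligence is price-elastic (elasticity above 1), total spending rises as the unit price falls. A toy constant-elasticity model in Python; the constants are illustrative, not OpenAI figures:

```python
def demand(price: float, k: float = 1000.0, elasticity: float = 1.5) -> float:
    """Constant-elasticity demand curve: quantity = k * price^(-elasticity)."""
    return k * price ** (-elasticity)

def total_spend(price: float) -> float:
    """Total spend = price per unit of intelligence * quantity demanded."""
    return price * demand(price)

# With elasticity > 1, halving the price more than doubles the
# quantity demanded, so total spend rises as intelligence gets
# cheaper: demand at a low enough price is effectively uncapped.
for p in (1.0, 0.5, 0.25):
    print(f"price={p:<5} quantity={demand(p):>8.0f} spend={total_spend(p):>6.0f}")
```

With these numbers, cutting the price from 1.0 to 0.25 quadruples total spend's driver (quantity goes from 1,000 to 8,000 units) while spend itself doubles, which is the shape of the curve Altman is describing.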
On whether we're in a bubble, Altman was refreshingly honest. He said he doesn't think so, but he also admitted he's never been able to figure out how to tell when you're actually in a bubble versus when people are just calling one. In his previous career as an investor, he tried to develop a framework for this and failed. The smart people who correctly called a bubble had usually also called one ten other times in the preceding years.
When asked what OpenAI headcount would look like in five years if indexed to 100 today, Altman said he'd love it to be 200. Not 10x. Just 2x. The implication is clear: AI itself will handle the scaling, not massive hiring.
He described three phases of OpenAI. Phase one was a pure research company trying to figure out how to build AGI when it sounded crazy. Phase two added building a product company. Phase three, which they're entering now, requires building a massive-scale "token factory" for the world, a new utility that needs deep full-stack integration and enormous infrastructure.
Each phase required a different management style. Altman admitted the third phase probably won't be a natural fit for him. He either needs to hire great people for it, figure out how to do things differently, or build an AI that can manage it.
The conversation shifted to broader topics including how startup investing has changed, AI's potential to accelerate scientific discovery, and what technologies excite Altman most beyond AI models.
Altman shared a fascinating observation. There used to be a type of person the startup world made fun of: the "idea guy" who had a great idea but just needed a coder to build it. At Y Combinator, teams without technical founders were difficult to make work.
That's flipped. It's now the revenge of the idea guys. People who deeply understand their users but can't code at all are suddenly fundable. Altman said he wants to fund those people now. That's a big turnaround from the previous era where technical talent was the most important ingredient on a founding team.
This shift has massive implications. If you understand a problem deeply, AI tools can now help you build the solution. The barrier to entry for starting a company has dropped dramatically.
Altman said he hopes AI's most important contribution to human quality of life will be in science. If we can discover new science at a much faster rate, whether that's new materials, cures for diseases, or other breakthroughs, life gets better because we understand science better and then figure out how to build stuff with it.
Starting with models from a few months ago, and especially with GPT-5.5, the models have gotten smart enough that excellent scientists are saying they can figure out better ideas with AI assistance. The models are making small but important discoveries. Eventually, automated labs and robots will accelerate this further.
The Arc Institute, which Collison co-founded, is targeting Alzheimer's as the first complex disease to cure. OpenAI's foundation recently made a grant to support this work. Altman said the OpenAI Foundation, which he expects to become the largest foundation in the world, will focus heavily on science and AI resilience.
Altman listed several areas beyond AI models that he's watching closely. On nuclear fusion, he made a bold prediction: a profitable fusion reactor within five years, driven partly by data center electricity demand pushing prices up. On hypersonic commercial air travel, he was less certain, guessing more than 10 years.
The conversation closed with Altman reflecting on what he hopes his specific involvement changes about AI's trajectory for the world.
Altman said the most controversial decision in OpenAI's history was what they now call iterative deployment. When they released ChatGPT, many people in the AI safety community thought it was insanely dangerous. The prevailing view was that only a small set of people thinking about AI safety should know what was coming. The technology should stay locked up. The experts would discover wonderful things in their ivory tower and share the fruits with the world, but they would control the AI.
That sat very poorly with Altman. He believed then and believes now that avoiding that kind of power concentration is extremely important. By enabling people to explore the wide opportunity space, messy as it sometimes is, the world builds a much bigger gift on top of what OpenAI provides.
Altman described himself as a believer in entrepreneurship, innovation, and the fundamental goodness of people. He said people are mostly good and mostly do amazing things with tools. His single biggest contribution, he believes, has been pushing for AI to be a democratized technology that people get to use and build on.
This philosophy directly connects to why tools like Codex and OpenClaw exist as products anyone can use, rather than capabilities locked behind institutional access. The bet is that distributed innovation will produce far more value than centralized control ever could.
For the full conversation including all the nuance, humor, and audience reactions, watch the embedded video below. It's a fascinating look at how two of tech's most influential leaders think about the future. You can find more conversations like this on the Stripe YouTube channel.