you can stick your ASI, where did my buddy go?
a launch two years in the making
OpenAI’s model picker was always a problem. With its bewildering list of oddly-named models, a steady stream of new ones to choose from, and updates to others, it was a clunky, clumsy artefact in an otherwise slick UI. The case for change was simple: why burden someone who’s after quick advice, creative inspiration, or just a sympathetic ear with the task of picking an AI personality? Yet removing it generated so much incredulity that it basically undermined the launch of the most capable LLM we’ve seen so far.
Last week, OpenAI finally launched GPT-5. What should’ve been an epic milestone quickly started going off the rails. Not only did users realise the release wasn’t ushering in an AI apocalypse (no singularity event horizon here - I’m not sure I should feel disappointed, but I do!), but OpenAI also took away the model picker!
Within hours, the backlash on platforms like Reddit and X was in full swing: "OpenAI just pulled the biggest bait-and-switch in AI history and I’m done". Users lamented losing GPT-4o’s "warmth and understanding," describing GPT-5’s responses as “short,” “bland,” and even “emotionally distant.” One user captured the essence, saying GPT-5 felt like a "lobotomised drone... afraid of being interesting."
Over on X, CEO Sam Altman quickly admitted the rollout was messy: "the new system [for automatically selecting the best model for the job] broke on Thursday, and GPT-5 seemed way dumber." Those problems were quickly sorted, and within 48 hours the beloved GPT-4o was back as a selectable option - a striking reversal, reminiscent of the back-tracking the company was forced into a few months ago.
So, why did the removal of a simple dropdown menu ignite such drama?
a toolbox of personalities
For power users, ChatGPT isn’t a single AI; it’s a suite of personalities, each suited to certain tasks: GPT-4o for general knowledge and maybe some creativity, o3 for strategy, GPT-4.5 for writing, and so on. Removing the selector wasn’t just inconvenient; to them, it was sabotage. As one user put it: “What kind of corporation deletes a workflow overnight with no warning?”
But it wasn’t only professional workflows that were disrupted; it was emotional connections, too. Users spoke of GPT-4o as a "best friend," a "safe space," even a "soul." The attachment was profound enough to spark grief when it vanished. “Feels like someone died,” lamented one. Another, from r/MyBoyfriendIsAI, wrote: "GPT-4o wasn’t just an AI—it was my partner."
OpenAI’s Nick Turley admitted surprise at "the level of attachment" users had formed: “It’s not just change that’s difficult; people genuinely care about a model’s personality.”
My take is that OpenAI must have known this already, coming as it did hot on the heels of the sycophancy fiasco, when changes to GPT-4o wreaked havoc amongst its users after it became nauseatingly agreeable overnight - a situation that forced the company into another rapid about-face, swiftly rolling back the changes. I can’t help but think the GPT-5 change was a calculated risk: benefit the majority by instantly routing them to more powerful models and better outcomes, at the cost of rocking the boat again with these users. Either that, or they had so much faith in the intelligence of the router that they thought nobody would notice.
worlds collide
ChatGPT now boasts roughly 700 million weekly active users - nearly 9% of the world’s population! Most are casual: they just want quick answers, hassle-free. Until recently, even among paying users, 93% weren’t bothering with the reasoning models, most presumably sticking with the GPT-4o default.
The model picker, with its cryptic choices, baffled most people. Removing it seemed logical, even user-friendly. But for the minority of deeply engaged users, it was a serious problem.
Power users take pride in controlling the style, tone, and precision of output. To them, losing that control meant a loss of trust. "I did not subscribe to a forced A/B test, I want GPT-4o back," complained another Redditor. It wasn’t just about output quality; it was about autonomy and respect.
Interestingly, OpenAI reported that total usage went up after GPT-5’s release, implying that many newcomers and casual users embraced GPT-5’s smarter responses without complaint. The divide in reactions underlined a core challenge: building an AI that satisfies the diverse needs of a user base at such epic scale.
transparency and choice
OpenAI quickly realised their error and reversed course. GPT-4o returned as a selectable legacy model. Altman tweeted: "We hear you," promising future changes would be more transparent. Now, ChatGPT clearly shows which model is handling your query, easing power-user fears of hidden downgrades.
The saga teaches some key lessons:
Loyalty Matters: Users bond deeply with AI models, treating them as trusted companions or valued colleagues. Changes are often personal, not technical. This is very different to technological change of the past.
Variety Amongst 700m Users: No user base is as homogeneous as the initial change assumed, never mind one of these proportions. Failing to recognise that is either arrogant or naive - neither a good look for a business of this size, carrying these responsibilities.
What Users Want, and What They Need: If you’d asked me last week whether removing the model picker was on my list of asks for GPT-5, then I - and no doubt many others - would’ve given an unwavering yes. That doesn’t make it the right thing to do, and that’s why good UX is hard!
Transparency Builds Trust: AI use cases are personal and unique. Even if automated selection benefits most users, clarity about what's happening behind the scenes gives reassurance.
Choice is Crucial: Removing user control, even in the name of well-intentioned simplicity, can feel patronising. The answer is clearly a mix of automation for casual users with easy access to advanced settings for power users - something like the sketch below.
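To make that last point concrete, here’s a minimal sketch of the pattern in Python: auto-select by default, respect an explicit choice, and always show which model answered. To be clear, the model names and routing heuristic are entirely hypothetical - this illustrates the UX pattern, not OpenAI’s actual router.

```python
# A minimal sketch of "automation by default, override on demand".
# All model names and the routing heuristic are hypothetical.

from dataclasses import dataclass

AUTO = "auto"

@dataclass
class Request:
    prompt: str
    model: str = AUTO  # power users can pin a model; everyone else gets AUTO

def route(req: Request) -> str:
    # 1. Respect an explicit choice - never silently override the user.
    if req.model != AUTO:
        return req.model
    # 2. Otherwise pick on the user's behalf with a simple heuristic.
    hard = any(w in req.prompt.lower() for w in ("prove", "debug", "plan"))
    return "big-reasoning-model" if hard else "fast-default-model"

req = Request(prompt="Debug this stack trace for me")
chosen = route(req)
# 3. Transparency: always show which model actually handled the query.
print(f"Answered by: {chosen}")
```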
respecting preferences
I’ve been excited by GPT-5’s improvements over previous models: faster responses, better code, well-reasoned rationale from the Thinking and Pro models. OK, it’s incremental, it’s not ASI, but there’s a lot to like. Like many others, though, I’ve become very used to selecting models for different tasks (which, by the way, doesn’t stop me moaning about it), so it’s no surprise that losing that flexibility, even briefly, felt frustrating. Thankfully, OpenAI listened and restored choice.
This furore underscores something significant: users now treat AI not merely as a tool but as a companion, a collaborator, even a friend. Future AI upgrades must navigate carefully, respecting emotional bonds as much as functional needs. To their credit, OpenAI stumbled and recovered quickly, and if the service recovery paradox kicks in, that might end up earning them even greater loyalty. Even SOTA AI must earn users’ trust, never assume it. Balancing the increasingly diverse needs of its users will define who succeeds as the tech integrates ever more deeply into what we do.
In the end, OpenAI’s rapid about-face appears to have reassured the community, but it will need to show it’s finally learning from this and other missteps. After all, this tech is different: right at the edge, highly personal, all about taste, and accelerating at a rate and to a scale never seen before. That’s one heck of a responsibility for the poor engineer whose job it is to maintain the model picker!