AI Version Control: Ability to Select and Lock Specific AI Model Versions
Zikang Ho
Business Problem:
Currently, when the platform updates its underlying AI models, all users are forced onto the new version immediately. In a professional production environment, even subtle changes in how a model interprets instructions can lead to unpredictable behavior and output regression.
For users who have spent significant time "Prompt Engineering" their AI for specific, high-precision tasks (such as intent routing, data extraction, or strict formatting), these forced updates often break existing logic. A model that was previously reliable may suddenly change its tone, formatting, or compliance with negative constraints, forcing teams to manually re-test and re-adjust their entire automation setup.
Desired Outcome:
- Model Version Selection: Provide a dropdown menu within the AI Agent/Workflow settings that allows users to select specific model versions (e.g., GPT 5.2 vs. latest).
- Version Locking: Allow users to "Lock" their AI to a specific version so it does not update automatically when the platform releases a new default.
- Legacy Support: Ensure that older versions remain available for a reasonable grace period, giving businesses time to migrate and test their instructions before the old version is sunset.
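To make the request concrete, the three bullets above could be sketched as a small resolution rule: a locked agent keeps its pinned version (as long as it is still within the grace period), while an unlocked agent follows the platform default. This is purely illustrative, the names (`AgentConfig`, `resolve_model`, the version strings, and the supported-version set) are hypothetical and not part of any real platform API.

```python
from dataclasses import dataclass

# All names and version strings below are hypothetical, for illustration only.
PLATFORM_DEFAULT = "gpt-5.4"                  # assumed current platform default
SUPPORTED_VERSIONS = {"gpt-5.2", "gpt-5.4"}   # versions kept during the grace period


@dataclass
class AgentConfig:
    model_version: str = "latest"  # "latest" follows platform updates
    locked: bool = False           # when True, never auto-upgrade


def resolve_model(config: AgentConfig) -> str:
    """Return the model version an agent should actually run."""
    if config.locked and config.model_version != "latest":
        if config.model_version not in SUPPORTED_VERSIONS:
            # The sunset path: a pinned version past its grace period
            # forces an explicit migration instead of a silent upgrade.
            raise ValueError(f"{config.model_version} has been sunset; please migrate")
        return config.model_version
    return PLATFORM_DEFAULT
```

Under this sketch, `resolve_model(AgentConfig())` returns the platform default, while `resolve_model(AgentConfig("gpt-5.2", locked=True))` keeps the agent on its pinned version until that version is sunset.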
Impact on AI Results:
Forced updates compromise the reliability of AI-driven automation. Without version control, we cannot guarantee a consistent experience for our customers. Providing versioning options will allow professional users to build stable, mission-critical automations without the fear of their logic breaking overnight due to platform-wide updates.
Craig Dodge
The update to GPT 5.4 has resulted in all kinds of odd, incorrect answers that were not an issue before the change. I've spent every day since re-writing instructions and knowledge documents to try and correct bad answers. I'd revert to the previous model in a heartbeat if that were an option.
Zikang Ho
Craig Dodge Thanks for sharing, Craig. We are facing the same thing at our company. For us, a stable and predictable AI is way more important than just having the latest model.
Craig Dodge
Zikang Ho for sure! I confess this is my first experience managing an AI agent. I've spent the better part of 2 months getting the AI agent's instructions and knowledge documents to the point where it answers the way we need it to. This update blew it all up. Frustrating. The last answer from tech support: "Right now, the best approach will be updating your prompt." So I brew some more coffee and get help from Claude to fix things.
Wilson Tan
Hi Craig Dodge and Zikang Ho, thank you both for raising this so openly.
I'm genuinely sorry for the disruption. Spending weeks tuning instructions only to have them shift overnight is incredibly frustrating, and "update your prompt" isn't a satisfying answer when you've already put in that work.
For context: GPT 5.4 has produced exceptional results in our testing, and we also found that it pairs especially well with Claude as a prompt-building assistant. But we fully recognize that its tighter instruction-following, while a strength in principle, changes the behavior of prompts engineered against the previous model, and that cost lands on you, not us.
Two things we're actively looking into:
- Reducing disruption and the risk of regressions when we roll out future model changes.
- A model selection capability so high-precision setups aren't at the mercy of platform-wide updates. It's firmly on our radar.
Please keep sharing specific examples where you've seen regressions; it directly shapes how we prioritize this.