The Zhitong Finance App learned that OpenAI, backed by Microsoft (MSFT.US), is developing a large language model codenamed "Garlic" to counter Google (GOOGL.US)'s recent progress in AI. According to people familiar with the matter, OpenAI's chief research officer Mark Chen told some colleagues last week that the new model performed well in the company's internal evaluations, at least on tasks such as programming and reasoning, compared with Google's Gemini 3 and Anthropic's Opus 4.5.
OpenAI plans to release a version of Garlic as soon as early next year, possibly under the name GPT-5.2 or GPT-5.5.
The news comes as Google's new AI model, Gemini 3, has proved a major success. According to reports, amid intensifying AI competition, OpenAI CEO Sam Altman has declared a "code red" effort to improve the quality of ChatGPT.
Altman told colleagues that OpenAI is preparing to launch a new reasoning model that is "ahead" of Gemini 3 in internal evaluations.
Garlic is distinct from another new model currently in development, "Shallotpeat." Altman told employees in October that Shallotpeat would help OpenAI challenge Gemini 3. Garlic, for its part, incorporates fixes for bugs the company uncovered while developing Shallotpeat during the pre-training phase. Pre-training is the first stage of model training, in which the LLM learns from data drawn from the web and other sources to understand the associations within it.
This matters because Google said last month that it had made new progress in pre-training while developing Gemini 3, a point OpenAI's leaders acknowledged.
Chen said that in developing Garlic, OpenAI resolved several long-standing pre-training issues, improving on GPT-4.5, previously its "best" and "much larger" pre-trained model. That model launched in February and has since been largely retired.
Chen said these improvements mean OpenAI can now pack into a smaller model the amount of knowledge that previously could only be obtained by building a larger one. Developing larger models generally costs more money and takes more time than developing smaller ones.
Chen added that, drawing on lessons from the Garlic project, OpenAI has set out to develop an even larger and better model.
Garlic must still complete several steps before release, including post-training (in which the model is exposed to more curated data to learn specific domains such as medicine or law, or to respond better to chatbot users), further testing, and safety evaluations.