OpenAI Announces New Team to Accelerate Scientific Research
OpenAI has established a new team called 'OpenAI for Science' and begun investigating how large language models can accelerate scientific discovery. The company also shared examples demonstrating how GPT-5 assists researchers in fields such as mathematics and physics.

A New Partner for the Scientific World
Following the transformation artificial intelligence has brought to daily life and the business world, OpenAI has now set its sights on scientific research. Through the 'OpenAI for Science' team it established in October, the company has begun investigating how large language models, particularly GPT-5, can assist scientists, and it is working to make the models better suited to scientific workflows.
Competitor Believes the Move is Late
However, some circles consider OpenAI's move 'late.' Rival Google DeepMind has run an 'AI for science' effort for years with models like AlphaFold, which revolutionized protein structure prediction, and DeepMind co-founder and CEO Demis Hassabis cites AI for science as the main motivation of his career. This development is reminiscent of the cloud and AI infrastructure race discussed in the article titled Why is Microsoft Changing the Rules of the Cloud Wars with Maia 200?
"AGI's Greatest Benefit Will Be in Science"
Vice President Kevin Weil, who heads the OpenAI for Science team, connects the team's founding purpose with the company's overall mission. Weil states, "We believe the greatest and most positive impact of artificial general intelligence (AGI) will come from its ability to accelerate science." Noting that this potential is beginning to materialize with GPT-5, Weil argues that the models have now become sufficiently good 'scientific collaborators.'
Reasoning Ability Changed the Game
OpenAI's first 'reasoning model,' o1, announced in December 2024, significantly increased language models' ability to solve math and logic problems step by step. The company claims that GPT-5.2 scores 92% on the GPQA benchmark, which tests doctoral-level biology, physics, and chemistry questions, far above GPT-4's 39%. However, some experts are cautious about what this headline performance means in practice. A similar transformative potential can be seen in deep research tools, which came to the fore in the article StepFun AI Announces Low-Cost Deep Research Agent Step-DeepResearch.
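To make the benchmark comparison concrete, here is a minimal sketch of how a GPQA-style multiple-choice score is computed: the model is asked each question, its letter answer is compared against the key, and the fraction correct is reported. It uses the OpenAI Python SDK, but the model name "gpt-5" and the two inline sample questions are illustrative assumptions, not the actual GPQA data or OpenAI's own evaluation harness.

# Minimal sketch of scoring a GPQA-style multiple-choice benchmark.
# The model name and the tiny question set below are placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical stand-ins for graduate-level multiple-choice questions.
questions = [
    {"prompt": "Which particle mediates the electromagnetic force?\nA) Gluon\nB) Photon\nC) W boson\nD) Graviton",
     "answer": "B"},
    {"prompt": "Which molecule carries amino acids to the ribosome?\nA) mRNA\nB) rRNA\nC) tRNA\nD) DNA",
     "answer": "C"},
]

correct = 0
for q in questions:
    response = client.chat.completions.create(
        model="gpt-5",  # assumed model identifier
        messages=[
            {"role": "system", "content": "Answer with a single letter: A, B, C, or D."},
            {"role": "user", "content": q["prompt"]},
        ],
    )
    prediction = response.choices[0].message.content.strip().upper()[:1]
    correct += prediction == q["answer"]  # True counts as 1, False as 0

accuracy = correct / len(questions)
print(f"Accuracy: {accuracy:.0%}")  # a 92% score means 92% of such questions answered correctly

A reported benchmark number is simply this accuracy computed over the full, held-out question set, which is why experts caution that it may not translate directly into real research performance.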
Not New Discoveries, But Forgotten Solutions
Last October, claims by OpenAI executives that GPT-5 had found solutions to unsolved mathematical problems were questioned by mathematicians. Closer examination revealed that the model had actually surfaced solutions that already existed in older research papers, including some in German-language publications. On this matter, Kevin Weil manages expectations: "If LLMs prevent us from wasting time on a problem that has already been solved, that in itself is an acceleration." Weil emphasizes that while the models have not yet made Einstein-level discoveries that would completely transform a field, they are accelerating scientists' work.
Mixed Reactions from Scientists
Researchers who took part in the case studies shared by OpenAI say GPT-5 is useful for literature review, brainstorming ideas, and designing experiments. Professor Robert Scherrer of Vanderbilt University says the model solved a problem he and his student had been struggling with for months. However, other scientists, such as statistician Nikita Zhivotovskiy of the University of California, Berkeley, argue that the models are still recombining existing knowledge rather than generating entirely new ideas, and sometimes even combine it incorrectly. The expanding role of AI also recalls the 'governance' dimension discussed in the articles AI Will No Longer Just Answer, It Will Manage People: Humans&'s Big Claim and AI Will No Longer Be Content with Answering: Is It Coming to Manage People?
Those Who Cannot Adapt May Be Disadvantaged
Biologist Professor Derya Unutmaz of the Jackson Laboratory notes that data analyses that used to take months can now be completed much faster, and that not using these tools is quickly ceasing to be an option. Experts warn that, much as with the spread of computers and the internet, researchers who do not adopt AI tools may find themselves at a disadvantage in the long term. One obstacle to this transformation is compatibility problems with legacy infrastructure and applications, as emphasized in the article titled Cloudflare Says: Making Money from AI with Old Applications is Almost Impossible.
In conclusion, OpenAI's explicit move into scientific research is read as a sign that AI-science collaboration is becoming institutionalized. How this process will transform scientific methodology, and when it will trigger genuinely groundbreaking discoveries, will only become clear over time and through critical evaluation.


