On Thursday, OpenAI officially launched GPT-Rosalind, a large language model trained specifically for biological research. Unlike the general-purpose scientific models pursued by giants such as Google and Microsoft, OpenAI has chosen a more focused path: addressing the core pain points of biological research directly.
At the launch, Wang Yunyun, head of life science products, emphasized the model's mission: to help researchers break through two long-standing barriers, the massive data accumulated over decades of genome sequencing and the field's highly specialized terminology. In practice, a geneticist who specializes in a single gene can struggle to keep up with the flood of literature from adjacent fields such as neurobiology; information overload has become a common predicament in biological research.

To address this, OpenAI has built 50 common biological workflows and access to major public databases on top of a general large-model foundation, enabling the model to connect genotypes to phenotypes, infer protein structure and function, and then screen potential drug targets. The team has also deliberately adjusted the model's "personality," strengthening its critical thinking so that it does not simply defer to users: when presented with a low-value target, the model will reject it outright.
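As a purely illustrative sketch of how such a workflow query might be composed, the snippet below builds a chat-style request that links a genotype to a phenotype and asks for candidate drug targets. The model name "gpt-rosalind", the prompt structure, and the system instruction are all assumptions for illustration; OpenAI has not published an API for this product.

```python
def build_target_screen_request(gene: str, phenotype: str) -> dict:
    """Compose a hypothetical chat-completion payload for a genotype-to-
    phenotype drug-target screen. All field values are illustrative."""
    return {
        "model": "gpt-rosalind",  # hypothetical model identifier
        "messages": [
            {
                "role": "system",
                # Mirrors the reported "personality" tuning: be critical,
                # reject low-value or unverifiable targets.
                "content": (
                    "You are a critical biology research assistant. "
                    "Reject low-value or unverifiable drug targets."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Gene: {gene}\n"
                    f"Observed phenotype: {phenotype}\n"
                    "Infer the likely protein function and propose druggable "
                    "targets, citing public database entries where possible."
                ),
            },
        ],
    }

if __name__ == "__main__":
    req = build_target_screen_request("PCSK9", "low plasma LDL cholesterol")
    print(req["model"], "-", len(req["messages"]), "messages")
```

In a real client this payload would be sent through whatever SDK or endpoint OpenAI eventually documents; the point of the sketch is only the shape of the query, genotype and phenotype in, critically filtered target candidates out.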
Naturally, challenges remain. Hallucination is still unresolved: the model may generate content that looks plausible but cannot be verified, a serious risk in rigorous scientific work. OpenAI acknowledges that there is no complete solution yet and reminds users to stay cautious. Biosecurity risks are equally concerning; if the model were misused, for example to enhance viral transmissibility, the consequences could be severe. To mitigate this, OpenAI has implemented strict access controls, currently accepting applications only from U.S.-based institutions, while a limited set of life-science plugins will gradually be rolled out to a broader audience.
