AI System Built from Human Brain Cells Achieves 78% Accuracy in Speech Recognition


iFlytek launched its integrated AI hardware and software solution at the 2025 1024 Developer Festival. By deeply integrating algorithms with hardware, it tackles recognition challenges in complex environments such as high-noise and far-field conditions, improving the accuracy of both voice and visual intelligence and marking a significant step forward in the field.
OpenAI has quietly launched the "Mercury" project, recruiting over 100 former bankers and financial experts to train an AI system. The project aims to automate the repetitive tasks of junior investment bankers, such as building complex financial models, replacing time-consuming groundwork and targeting Wall Street's core business.
At the 2025 YUNQI Conference, NetEase announced its integration of the Tongyi Qianwen AI system, reporting a 50% improvement in game development efficiency. The system focuses on natural language processing and machine learning, aiding game testing and optimization. The move demonstrates NetEase's technological foresight and offers new insights for industry innovation.
Recently, Alibaba's Tongyi Lab officially released its latest end-to-end speech recognition large model, FunAudio-ASR. The model's biggest highlight is its innovative "Context Module," which significantly improves speech recognition accuracy in high-noise environments: the hallucination rate dropped from 78.5% to 10.7%, a reduction of nearly 68 percentage points. This breakthrough sets a new benchmark for the speech recognition industry and is especially well suited to noisy settings such as meetings and public places.
Recently, the OpenAI Evals tool received a significant update, adding native audio input and evaluation features. Developers can now evaluate speech recognition and generation models directly from audio files, without the cumbersome step of text transcription. This greatly simplifies the evaluation process and makes audio application development more efficient. Previously, developers often had to convert audio content into text first, which was time-consuming and labor-intensive, and transcription errors could also affect the accuracy of the results.
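Accuracy figures like those quoted in these items are conventionally derived from word error rate (WER), the standard metric for evaluating speech recognition output (accuracy ≈ 1 − WER, so 78% accuracy corresponds to a WER of about 0.22). As a minimal, self-contained sketch of the idea, not taken from any of the tools above, the classic edit-distance computation looks like this:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words (Levenshtein distance).
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(h) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            substitution = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            deletion = d[i - 1][j] + 1
            insertion = d[i][j - 1] + 1
            d[i][j] = min(substitution, deletion, insertion)
    return d[len(r)][len(h)] / max(len(r), 1)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion out of 6 words
```

For a six-word reference with one missing word, the result is 1/6 ≈ 0.167; a perfect transcript scores 0.0.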