Google Bard (Gemini Pro) Surpasses GPT-4, Claims Second Place in LMSYS Chatbot Arena Rankings

An AI fine-tuned on just two books can mimic an author's style, outperforming human imitators in evaluations by 159 participants, including experts.
OpenAI recently sparked controversy over undisclosed model switching. Paid users reported that their GPT-4/5 requests were silently rerouted to lower-compute, safety-filtered models gpt-5-chat-safety and gpt-5-a-t-mini without prior notice, with response quality dropping sharply when handling sensitive content. Users have criticized the move as infringing on their rights to choose and to be informed, highlighting insufficient platform transparency.
["IBM's research shows that it is very easy to deceive large language models into generating malicious code or providing false security advice.","Hackers only need some basic knowledge of English and an understanding of the model's training data to easily deceive AI chatbots.","Different AI models have different sensitivities to deception, and GPT-3.5 and GPT-4 are relatively easy to deceive."]
Welcome to the AIbase [AI Daily Report] section! Spend three minutes a day to catch up on the latest AI events and understand AI industry trends and innovative AI product applications. For more AI news, visit: https://www.aibase.com/zh

1. Baidu officially releases the WENXIN Large Model 4.5 series and fully opens it to the public, featuring ten new models with various parameter configurations. The models are trained and served for inference with the PaddlePaddle framework, achieving a FLOPs utilization rate of 47%, and they perform well in multimodal and text tasks.
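For context, the FLOPs utilization figure cited here is typically computed as Model FLOPs Utilization (MFU): the FLOPs a training run actually sustains divided by the hardware's theoretical peak. Below is a minimal Python sketch of that ratio; every number in it is an illustrative assumption, not a figure from Baidu's release:

```python
def model_flops_utilization(tokens_per_sec: float,
                            params: float,
                            num_gpus: int,
                            peak_flops_per_gpu: float) -> float:
    """Approximate MFU using the common ~6 * params FLOPs-per-token
    estimate for transformer training (forward + backward pass)."""
    achieved = 6 * params * tokens_per_sec   # FLOPs/s the run actually performs
    peak = num_gpus * peak_flops_per_gpu     # theoretical hardware peak FLOPs/s
    return achieved / peak

# Hypothetical numbers for illustration only: a 100B-parameter model
# on 1,024 accelerators rated at 989 TFLOPs peak each.
mfu = model_flops_utilization(
    tokens_per_sec=8.0e5,
    params=1.0e11,
    num_gpus=1024,
    peak_flops_per_gpu=9.89e14,
)
print(f"MFU = {mfu:.0%}")  # roughly 47% with these assumed numbers
```

Higher MFU means less of the cluster's compute is lost to communication, memory stalls, and idle time; values in the 40-50% range are generally considered strong for large-scale training.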
A recent joint study by Google, Carnegie Mellon University, and MultiOn explores the use of synthetic data in training large language models. According to Epoch AI, a research institution focused on AI development, the currently available stock of high-quality text training data totals around 300 trillion tokens. However, with the rapid advancement of large models like ChatGPT, demand for training data is growing exponentially and is projected to exhaust existing resources by 2026, making synthetic data increasingly crucial.
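As a rough illustration of how such an exhaustion projection can be derived (the growth rate and starting demand below are assumptions made for this sketch, not Epoch AI's actual methodology):

```python
# Back-of-envelope projection of when cumulative training-data demand
# exceeds the available stock of high-quality text (~300T tokens).
STOCK_TOKENS = 3.0e14   # ~300 trillion tokens available (figure cited above)
demand = 2.0e13         # assumed tokens consumed by frontier training in year 0
growth = 2.5            # assumed yearly multiplier for data demand

year, used = 2023, 0.0
while used + demand < STOCK_TOKENS:
    used += demand      # cumulative tokens consumed so far
    demand *= growth    # exponential growth in per-year demand
    year += 1
print(f"Stock exhausted around {year}")
```

With these assumed figures the loop lands on 2026, consistent with the projection cited above; real estimates depend heavily on the assumed growth rate and on how much data each training run reuses.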