Research Shows: Large Language Models Learn Faster and Smarter from Human Feedback


Apple introduces two ML studies: SQUIRE enhances AI-driven UI control through fine-tuning with GPT-4o and slot query representation, while the other improves image safety review to address current technical challenges.
Wikipedia has officially banned the use of large language models to generate or rewrite article content, ending its previously ambiguous stance on AI. The new policy received overwhelming support from volunteer editors and aims to preserve content reliability by preventing AI-generated inaccuracies and plagiarism.
Google used a large language model to analyze 5 million news articles worldwide, mining the unstructured data to build a global flood prediction system. This approach addresses a long-standing limitation of traditional deep learning models, which struggle to predict floods in remote regions that lack historical weather records.
Google DeepMind and the YouTube team introduced the STATIC framework to address failures in LLM-based recommendation systems, such as generating nonexistent product IDs or violating inventory logic. The technique uses sparse transition matrices to accelerate a Trie index, enabling constrained decoding that improves the accuracy and reliability of generative retrieval.
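To make the idea of trie-based constrained decoding concrete, here is a minimal, hypothetical sketch: valid catalog IDs are stored in a trie, and at each decoding step the model is only allowed to emit tokens that keep the output a prefix of some real ID. This is an illustration of the general technique, not STATIC's actual implementation; the catalog, tokenization, and function names are all assumptions.

```python
# Illustrative sketch of trie-constrained decoding (not the STATIC code).
# Valid IDs live in a trie; at each step, only tokens that keep the
# partial output on the trie are allowed, so invalid IDs cannot be emitted.

class TrieNode:
    def __init__(self):
        self.children = {}   # token -> TrieNode
        self.is_end = False  # True if a complete valid ID ends here

def build_trie(sequences):
    """Insert each token sequence (one per valid catalog ID) into a trie."""
    root = TrieNode()
    for seq in sequences:
        node = root
        for tok in seq:
            node = node.children.setdefault(tok, TrieNode())
        node.is_end = True
    return root

def allowed_tokens(root, prefix):
    """Return the set of next tokens that keep `prefix` a valid ID prefix."""
    node = root
    for tok in prefix:
        if tok not in node.children:
            return set()  # prefix already fell off the trie: mask everything
        node = node.children[tok]
    return set(node.children)

# Toy catalog of valid product-ID token sequences (assumed tokenization).
catalog = [("SKU", "12", "34"), ("SKU", "12", "99"), ("SKU", "77", "00")]
trie = build_trie(catalog)

print(sorted(allowed_tokens(trie, ("SKU",))))       # ['12', '77']
print(sorted(allowed_tokens(trie, ("SKU", "12"))))  # ['34', '99']
print(sorted(allowed_tokens(trie, ("ABC",))))       # [] (fully masked)
```

In a real decoder, the returned set would be converted into a logit mask before sampling; the sparse-transition-matrix trick mentioned above would replace the per-step trie walk with precomputed matrix lookups for speed.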
AI startup Fundamental has emerged from stealth mode and announced a $255 million Series A funding round at a post-money valuation of $1.2 billion. The round was led by several institutions including Oak HC/FT, with the CEOs of Perplexity and Datadog participating as individual investors. Its core product is the foundation model Nexus, positioned to compete with mainstream large language models such as ChatGPT.