Microsoft's Gorilla Outperforms GPT-4 in Generating API Calls


The study found that AI-generated social media posts can be easily recognized by humans, with an accuracy rate of 70%-80%, far exceeding random levels. The research team tested multiple large language models, revealing their shortcomings in content recognition.
Artificial intelligence startup Viven uses large language models and data privacy technologies to create employee digital twins, solving the problem of critical information loss caused by vacations or time zone differences, avoiding project stagnation, and reducing time costs.
Meta Super Intelligence Lab introduced the REFRAG technology, which improves the inference speed of large language models in retrieval-augmented generation tasks by more than 30 times. This breakthrough result was published in a related paper and profoundly transforms the way AI models operate. The lab was established in California in June this year, stemming from Zuckerberg's emphasis on the Llama4 model.
Google introduces a reasoning memory framework that allows AI agents to learn from their experiences and errors, accumulating knowledge and achieving self-improvement. This technology aims to address the current limitation of large model agents being unable to grow from experience, promoting the development of AI toward more autonomous and intelligent directions.
Anthropic, in collaboration with the UK Institute for AI Safety and other institutions, found that large language models are vulnerable to data poisoning attacks. As few as 250 poisoned files can be used to implant a backdoor. Testing showed that the effectiveness of the attack is not related to the model size (600 million to 13 billion parameters), highlighting the prevalence of AI security vulnerabilities.