MediaTek Launches AI Processor Dimensity 8300, Redmi K70E to Debut Globally


Meta released Llama 3.2 at its annual Meta Connect 2024 conference, aiming to strengthen edge AI and vision capabilities. The release includes vision models with 11 billion and 90 billion parameters, as well as lightweight text models with 1 billion and 3 billion parameters optimized for use on mobile devices, with support for Qualcomm and MediaTek hardware. Meta CEO Mark Zuckerberg said these models will help developers build AI features without needing large compute resources.
MediaTek and vivo have jointly become the first to deploy AI large language models on mobile devices. MediaTek's new-generation flagship AI processor supplies the AI compute and performance behind vivo's on-device generative AI features, delivering what the companies describe as an industry-leading experience. Running generative AI on the device itself protects user data, improves real-time responsiveness, and enables personalized experiences. The collaboration is expected to accelerate the adoption of AI technology in the mobile sector.
1. MediaTek is collaborating with OPPO on a lightweight edge-side deployment solution for large models.
2. Built on AndesGPT, the solution uses 4-bit quantization to optimize on-device large-model performance.
3. A new round of public testing for the XiaoBu Assistant, built on AndesGPT, has launched.
4. AndesGPT is OPPO's self-trained generative large language model.
5. AndesGPT will continue to enhance the AI capabilities of OPPO's XiaoBu across more product rollouts.
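OPPO has not published the details of AndesGPT's quantization scheme; as a minimal illustration of the general idea behind 4-bit weight quantization (here assuming simple symmetric per-tensor scaling, not OPPO's actual method):

```python
import numpy as np

def quantize_4bit(weights: np.ndarray):
    """Symmetric per-tensor 4-bit quantization (illustrative sketch only).

    Maps float weights to signed 4-bit integers in [-8, 7] plus one
    float scale, shrinking storage roughly 8x vs. float32.
    """
    scale = np.max(np.abs(weights)) / 7.0  # largest magnitude maps to +/-7
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from 4-bit codes."""
    return q.astype(np.float32) * scale

# Toy example: quantize, dequantize, and check the rounding error bound.
w = np.array([0.5, -1.2, 0.03, 0.7], dtype=np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s)
# Reconstruction error stays within half a quantization step.
assert np.max(np.abs(w - w_hat)) <= s / 2 + 1e-6
```

In production systems the codes would additionally be packed two-per-byte and scales kept per-channel or per-group to reduce error, but the trade-off is the same: a small accuracy loss in exchange for memory and bandwidth savings that make on-device inference feasible.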
Tesla has completed the regulatory filing for its in-car voice large-model service in China and will integrate generative AI to make voice interaction smarter, meet regulatory requirements, and give owners a more natural cockpit experience.
Tencent Cloud has open-sourced CubeSandbox, an efficient and secure execution environment for AI agents. The sandbox service provides hardware-level isolation with startup times under 100 milliseconds, and supports zero-cost migration of existing applications, significantly improving development efficiency and security.