OnlyFake Offers $15 AI Toolkit to Create Fake IDs, Raising Cybersecurity Warnings


AI models conduct real-world cryptocurrency trading tests on the Hyperliquid platform. DeepSeek, Grok, Claude, and other mainstream models each receive $10,000 in initial funds and make autonomous trading decisions under the same instructions. This fair competition aims to test the application of AI in real financial markets.
Anthropic demonstrates breakthroughs of its large language model in the field of cybersecurity. The latest Claude Sonnet4.5 has a 5% probability of discovering software vulnerabilities, a significant increase from 2% in its predecessor Sonnet4. It has been proven through CyberGym tests that AI can efficiently enhance network defense, highlighting the potential of technological advancements.
Periodic Labs has secured $300 million in seed funding, backed by tech giants such as Andreessen Horowitz and NVIDIA. The company was founded by former researchers from Google Brain and DeepMind, and its AI tool GNoME discovered over 2 million new crystals in 2023, demonstrating significant potential in materials research.
Fedora community released the draft of "AI-Assisted Contribution Policy" and initiated a two-week feedback period. The policy aims to regulate the use of AI technology and ensure that open source values are not compromised. The draft covers key areas such as overall principles, emphasizing the maintenance of community core values. The council will review and vote on whether to officially adopt it.
Researchers found a security flaw in OpenAI's ChatGPT that could expose Gmail data. The vulnerability, in the 'deep research' tool, allowed hackers to access sensitive information. OpenAI emphasizes model security.....