Apple confirmed last week that WWDC 2026 will run June 8 to 12 at Apple Park, with the keynote on Monday, June 8, at 10 a.m. Pacific. The format matches the past four years: a recorded keynote streamed globally and an in-person developer event for invited attendees. The major reveal is widely expected to be Apple Intelligence 2.0, the second generation of the on-device AI platform Apple introduced at WWDC 2024 and shipped in iOS 18 that fall. Bloomberg's Mark Gurman reported in a March newsletter that Apple has been running internal review meetings on a feature set that includes a redesigned Siri, an expanded on-device large language model with reasoning capability, and a partnership with a third-party AI provider, an effort Apple has internally called the World Knowledge Answers project.
The Siri redesign is the most anticipated piece. The current generation of Apple Intelligence shipped Siri with limited screen-aware capability and the ability to take actions in a small set of system apps. Reports out of Cupertino suggest that the 2.0 version will rebuild Siri on a transformer-based architecture that combines Apple's own foundation model with a partner model for general world-knowledge queries. The partner model has been the subject of negotiation since fall 2024, when Apple first struck a deal with OpenAI for ChatGPT integration. Reports from The Information in early April indicate that the partnership has since expanded to include Anthropic's Claude as a second, optional provider, with a per-query revenue share rather than a flat fee.
The on-device side is where Apple's argument diverges from the competition's. The current foundation model on iPhone 16 Pro and later runs approximately 3 billion parameters on the Neural Engine. The next generation, called Foundation 4 in leaked internal documentation, is expected to be 7 to 9 billion parameters with a chain-of-thought reasoning layer that handles multi-step queries entirely on device, which means the privacy story stays intact for most use cases. Apple's Private Cloud Compute architecture, which handles queries beyond the on-device model's capability by routing them to Apple-controlled servers running anonymized inference, is also expected to get an upgrade with longer context windows and faster response times.
What is genuinely new is the developer side. Apple is reported to be opening the on-device model to third-party developers through a refreshed App Intents framework that gives apps direct access to the foundation model for inference and to the action graph for executing user requests. Developers have been asking for this since 2024, and the lack of model access has been one of the loudest complaints about the first generation of Apple Intelligence. If the announcement lands as described, third-party apps will be able to build features on the on-device model without sending data to a third-party API. The privacy positioning is the obvious commercial pitch, but the developer experience matters more for adoption.
The competitive context is the part Apple cannot avoid. Google released Gemini 3 in March with a major upgrade to multimodal reasoning. Microsoft's Copilot platform has been deployed at enterprise scale; the company reported in its Q2 earnings that 60 percent of Fortune 500 companies now use Copilot in some capacity. Anthropic released Claude Opus 4.6 in early April with computer-use tools that enterprise customers have begun deploying. Meta's open-source Llama 4 launched in February with a 405-billion-parameter model that outperformed several closed competitors on standard benchmarks. The argument that Apple is behind has become harder to dismiss, and the WWDC keynote is the company's first chance in a year to make the counter-argument.
The hardware side has been quieter. The iPhone 17 Pro launched in September with the A19 chip and a 35 percent faster Neural Engine, but the lineup has not been refreshed since. The M5 chip for the Mac is expected in October with a redesigned Neural Engine architecture that runs the larger on-device models without thermal throttling. The Vision Pro 2, in development since the original Vision Pro shipped in early 2024, has reportedly been pushed to a fall 2026 announcement. WWDC will likely confirm the development tools and software frameworks but not the hardware itself.
What to watch on June 8: the Siri demo, the third-party model integration, the developer access announcement, and how aggressively Apple frames its privacy story against the cloud-first competitors. The keynote runs about 90 minutes: the first 30 usually cover iOS, the next 30 macOS and the other operating systems, and the final 30 whatever Apple is most excited about. Apple Intelligence 2.0 will get the closing slot if the company believes it has caught up. If it has not, the keynote will tell that story through what gets emphasized and what gets glossed over. Either way, June 8 will answer the question that has hung over the company for two years: whether Apple has caught up in AI.