Apple's Worldwide Developers Conference opens Monday, June 8, at 10 AM Pacific at Apple Park, and the framing this year is materially different from last year's. The 2025 keynote announced Apple Intelligence as a comprehensive on-device generative AI layer for iPhone, iPad, and Mac, but the delivered functionality lagged the announcement by 8 to 12 months, and the new Siri was widely seen as falling short. Apple's developer documentation confirmed in March that the next iOS release, iOS 19, will ship with what Apple internally calls Apple Intelligence v2, bringing substantial improvements to summarization, image generation, and Siri's contextual reasoning. The Vision Pro 2, expected to ship in fall 2026, is also expected to be the first Apple device built with Apple Intelligence as its central operating layer.

The gap Apple is trying to close is real. ChatGPT, Claude, and Google Gemini have all shipped meaningful product updates in the past 90 days. OpenAI's GPT-5 timeline, confirmed Wednesday in a developer blog post, points to enterprise rollout in Q3 2026 and consumer availability in Q4. Google's Gemini 3.1 Ultra launched Tuesday with a native 2 million token context window and pricing of $12 input and $60 output per million tokens, undercutting Anthropic's Claude Opus 4.6 and OpenAI's GPT-5 Pro. Microsoft begins its 30-day general availability rollout of Copilot in August for Wave 1 enterprise customers, including JPMorgan, Bristol Myers Squibb, McKinsey, and Bain. Apple's strategy of running models on-device and refusing to send queries to cloud-based partners has limited Apple Intelligence's capability ceiling on consumer hardware.

What we know is shipping. The new Siri will integrate ChatGPT for queries that exceed on-device model capacity, similar to the existing Apple Intelligence ChatGPT integration but with a default-on flow rather than the opt-in pattern of iOS 18.2. The integration is expected to use GPT-5: free-tier users would get 50 queries per day against GPT-5 Standard, while users with a ChatGPT Plus or Pro subscription would get their full subscription benefits. Real-time translation across 47 languages will be built into FaceTime, Messages, and Phone, with the on-device model handling speech-to-text and the cloud model handling translation when a network connection is available. AirPods Pro 3 firmware will gain a conversational mode that routes microphone input through Apple Intelligence for live translation and accessibility responses.

The developer story matters more than the consumer story for near-term stock implications. Apple is expected to publish a stable Foundation Models Framework that lets developers run Apple Intelligence inference inside their apps without building separate model orchestration. Sample apps in Apple's developer documentation since March show the framework powering note summarization in Notes, suggested replies in Mail, and contextual search in Photos. The framework's pricing model has not been disclosed publicly, but Apple's developer relations team has briefed several large developer accounts on a free tier with rate limits and a paid tier with negotiated capacity. Pricing details are expected at the Platform State of the Union session on June 8 at 1 PM Pacific.
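For a sense of what the call site might look like, here is a minimal sketch modeled on the session API Apple documented for the first-generation Foundation Models framework. The v2 surface, availability, and exact signatures for iOS 19 have not been published, so every name here should be treated as provisional; the prompt text is purely illustrative.

```swift
import FoundationModels

// Provisional sketch: LanguageModelSession and respond(to:) follow the
// first-generation Foundation Models API surface; the v2 framework
// described in the article may differ.
func summarizeNote(_ note: String) async throws -> String {
    // A session wraps the on-device model with standing instructions.
    let session = LanguageModelSession(
        instructions: "Summarize this note in two sentences and list any action items."
    )
    // respond(to:) runs inference on-device and returns generated text.
    let response = try await session.respond(to: note)
    return response.content
}
```

If the v2 framework keeps this shape, the appeal for developers is that there is no model download, no API key, and no per-token billing for the on-device tier.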

For Apple's hardware story, the WWDC announcements set up the September iPhone 18 launch. The iPhone 18 Pro is expected to ship with the A19 Pro chip, which Apple's supply chain disclosures suggest will roughly double on-device Neural Engine capacity over the A18 Pro. The Vision Pro 2, expected to ship in November, is rumored to be priced at $2,499, materially below the original Vision Pro's $3,499, with the reduction enabled by dropping the external EyeSight display and moving to a lighter aluminum frame. Apple's Q1 2026 results, released last quarter, put services revenue at $30.1 billion on a record services gross margin of 76.4 percent; that margin is the metric most relevant to whether Apple Intelligence v2 unlocks new App Store and subscription revenue.

The competitive question for app developers is whether to bet on Apple Intelligence's on-device-first approach or build directly against cloud-based model providers. The on-device approach has clear privacy and latency advantages but a hard capability ceiling on current hardware. The cloud approach offers higher capability but requires API spend that scales with usage: current pricing per million tokens is $80 input and $250 output for Claude Opus 4.6, $80 and $240 for GPT-5 Pro, and $12 and $60 for Gemini 3.1 Ultra. Most production apps now run a hybrid model in which simple queries use on-device inference and complex queries route to a chosen cloud provider, and the Apple Intelligence v2 framework is expected to make that routing logic significantly easier to implement.
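The routing decision described above can be sketched in a few lines of Swift. The token threshold, the tool-use trigger, and the `Router` type are illustrative assumptions for this article, not documented Apple or provider behavior.

```swift
// Hedged sketch of hybrid on-device/cloud routing. The threshold and
// trigger conditions below are illustrative assumptions only.
enum InferenceTarget {
    case onDevice  // private, low latency, capped capability
    case cloud     // higher capability, per-token API spend
}

struct Router {
    /// Assumed context ceiling for the on-device model, in tokens.
    let onDeviceTokenLimit: Int

    func route(promptTokens: Int, needsTools: Bool) -> InferenceTarget {
        // Simple queries stay on device for privacy and latency;
        // long contexts or tool use fall through to a cloud provider.
        if needsTools || promptTokens > onDeviceTokenLimit {
            return .cloud
        }
        return .onDevice
    }
}

let router = Router(onDeviceTokenLimit: 4_096)
assert(router.route(promptTokens: 900, needsTools: false) == .onDevice)
assert(router.route(promptTokens: 20_000, needsTools: false) == .cloud)
```

In production the decision is usually richer than a token count, factoring in network availability, user subscription tier, and the per-million-token prices quoted above, but the shape of the logic is the same.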

For business users, three iOS 19 features are worth tracking. The expected Notes summarization feature is reported to handle meeting notes longer than 30 minutes and extract action items in a way current third-party tools struggle to match. Mail Smart Reply, which moves from suggested phrases to fully drafted reply paragraphs, could meaningfully change how much time the average professional spends on email. And the Calendar AI scheduler, referenced in a stub API in Apple's developer documentation, would be Apple's first direct answer to Microsoft Outlook's scheduling assistant and Calendly's enterprise tier.

What to watch on June 8. The keynote begins at 10 AM Pacific and runs roughly two hours; Apple Intelligence v2 will likely lead it, ahead of any hardware announcements. The Platform State of the Union at 1 PM Pacific is where developers will get the Foundation Models Framework details and pricing. The Apple Intelligence sessions throughout the week will reveal the actual capability of the new on-device models, with developer hands-on labs starting June 9. And a Vision Pro 2 announcement, if it appears at all, would most likely come in a Tim Cook closing segment rather than a dedicated hardware segment.