The smartphone market hasn’t seen a truly disruptive entrant since the Apple iPhone debuted in 2007. Nearly two decades on, hardware innovation has plateaued, but the software stack is evolving at breakneck speed.
Smartphones powered by large language models (LLMs) and Agentic AI promise to reshape personal computing with advanced reasoning, contextual awareness, and autonomous decision-making, even as the fate of overhyped devices such as the Rabbit R1 and Humane AI Pin serves as a cautionary note.
In this week’s edition of Mint Tech Talk:
- OpenAI’s rumoured Agentic AI smartphone
- AI Tool of the Week: ChatGPT Images 2.0
- Everyone wants a piece of Anthropic PBC
In May 2025, Sam Altman (who invested in Humane AI) brought on board legendary iPhone designer Jony Ive to build a new class of AI-first devices at OpenAI. While OpenAI has made no official comment on the topic, the company could unveil its first smartphone by late 2026 or early 2027, with mass production targeted for 2028, according to Ming-Chi Kuo, an analyst with TF International Securities (HK).
The AI smartphone is expected to blend cloud and on-device LLMs, powered by chips optimised for energy efficiency, memory management, and continuous contextual awareness. Partnerships with MediaTek and Qualcomm are reportedly under consideration, as OpenAI seeks end-to-end control over both the hardware and operating system to deliver a “comprehensive AI agent service”, says Kuo.
OpenAI has built a broad infrastructure stack spanning cloud, silicon, and data centres. The company has also outlined plans for a “superapp” that brings together ChatGPT, Codex, browsing, and broader agentic capabilities into a unified, agent-first experience. An “Agentic AI” phone, and a line-up of similar “agentic” devices, closes the loop on the hardware side.
Still, the question remains: Even if OpenAI does launch a ChatGPT phone, how different will it be from today’s Gen AI smartphones?
Devices by Samsung, Google, and Apple already integrate Gen AI deeply into their chips and operating systems, running on-device models via dedicated neural processing units. Smartphones such as the Google Pixel 10 Pro with its Tensor G5 chip and the Samsung Galaxy S26, powered by the Exynos 2600 built on a 2 nm process, highlight how far this integration has come: fast, efficient, and capable of handling meaningful on-device workloads across text, voice, image, and multimodal tasks.
The market is still early but growing. According to Intel Market Research, Gen AI smartphones were valued at about $95 billion in 2025 and are projected to reach roughly $414 billion by 2034. In 2025 alone, production reached 185 million units at an average price of $558. Demand is being driven by content creation, productivity, and emerging use cases in healthcare, education, and creative fields, while advances in model compression are pushing these capabilities into mid-range devices.
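As a quick back-of-the-envelope check on these figures (taking the reported unit and price numbers at face value), multiplying 2025 production by the average selling price gives the implied market size:

```python
# Sanity check on the reported Gen AI smartphone figures:
# 2025 production volume times average selling price.
units_2025 = 185_000_000   # units produced in 2025
avg_price_usd = 558        # average selling price in USD

implied_value = units_2025 * avg_price_usd
print(f"Implied 2025 production value: ${implied_value / 1e9:.1f} billion")
# prints: Implied 2025 production value: $103.2 billion
```

The check is worth doing because unit counts in the hundreds of millions at a $558 average price necessarily imply a market measured in billions of dollars, not millions.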
Yet today’s AI remains largely reactive—it waits for prompts. It can summarise, generate, and assist, but it does not autonomously manage tasks like travel-planning or subscription optimisation end-to-end. A true Agentic AI device, by contrast, would embed intelligence into the operating system itself, shifting interaction from “open an app” to “state a goal”.
Potential buyers?
OpenAI potentially has a captive audience. It currently has more than 900 million weekly active users, a base expected to approach one billion in the near term. Likely buyers include AI-native professionals already using tools like ChatGPT, Gen Z driving cultural adoption, and enterprises seeking workflow automation. A phone purpose-built for agentic enterprise workflows—one that can autonomously draft, send, and follow up on communications—is an automation tool, not merely a handset.
Cost and trust
Gen AI adds an estimated $120–$200 to device manufacturing costs, even as privacy concerns remain high, according to Intel Market Research. Layer in EMIs, data plans, and AI subscriptions, and pricing becomes tricky, unless, as Google and Apple suggest, AI is bundled and largely hidden from the user. IPO-bound OpenAI, for its part, has not disclosed any details.
But where’s the money?
Meanwhile, The Wall Street Journal reported that OpenAI missed its own targets for new users and revenue, raising concern among company leaders about whether it will be able to support its massive spending on data centres. Chief Financial Officer Sarah Friar has said that she is worried that OpenAI may not be able to pay for future computing contracts if revenue doesn’t grow fast enough.
OpenAI has also ended its exclusivity arrangement with Microsoft Corp., meaning the Azure parent will no longer have sole access to OpenAI’s models and products. The move opens the door for OpenAI to work with rival cloud platforms such as Google Cloud and Amazon Web Services.
Announced on 27 April, the revised agreement also removes Microsoft’s obligation to pay a revenue share on OpenAI products it resells via its cloud.
“The greater predictability in the amended agreement strengthens our joint ability to build and operate AI platforms at scale, while providing both companies the flexibility to pursue new opportunities,” the two companies said in a joint statement.
AI TOOL OF THE WEEK
By AI&Beyond, with Jaspreet Bindra and Anuj Magazine
The AI hack we unlocked today is based on: ChatGPT Images 2.0
What problem does it solve? Most teams using AI images have the same frustration: the image looks good, but falls apart if there’s text in it. Misspelled words, garbled scripts, numbers that don’t match what you asked for.
This isn’t a niche problem. Think of a compliance team that needs to communicate a regulatory change to customers across five Indian languages. Or an HR team that needs a phishing awareness poster in English and Marathi.
ChatGPT Images 2.0 is the first image model to solve this with reasoning built in. Before rendering anything, the model reads your brief, plans composition, verifies object counts, checks that every text element matches what you wrote, and only then generates.
How to access: chat.openai.com. Thinking Mode (the reasoning layer) requires a Plus, Pro, or Business subscription.
ChatGPT Images 2.0 can help you:
- Render accurate text inside images: in English, Hindi, Tamil, Japanese, Arabic, and more.
- Generate data infographics where the numbers in the image match the numbers you provided.
- Produce up to 10 brand-consistent images in a single request.
Real-world example:
Suppose a finance company needs to communicate a revised loan policy to customers across four regions and needs the notice in English, Hindi, Marathi, and Tamil, ready for WhatsApp broadcast within 48 hours, with no design team available.
Here’s how ChatGPT Images 2.0 handles it:
- Brief the model in plain language: “Create a formal customer notice. Header: IMPORTANT POLICY UPDATE. Body: [4-line policy summary]. A4 portrait format, professional design, dark blue border.” The model generates the English version.
- Switch language, keep the template: “Now render the same notice in Hindi – Devanagari script. Same layout, same design.” Then Marathi. Then Tamil. Each language version accurate, each in the correct script.
- Generate the broadcast asset: “Combine all four language versions into a 2×2 grid — one image, ready for WhatsApp.”
- Iterate if needed: “The Hindi header font is too small. Increase it, keep everything else.” The model adjusts only what you specified.
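For teams that would rather script this workflow than drive it through the chat interface, the per-language briefs can be assembled programmatically. The sketch below only builds the prompts; the template wording, the language list, and the commented-out API call are illustrative assumptions, not details confirmed by OpenAI.

```python
# Minimal sketch (not an official OpenAI recipe): build one plain-language
# image brief per target language, reusing a single notice template so the
# layout stays consistent across versions.

TEMPLATE = (
    "Create a formal customer notice. Header: IMPORTANT POLICY UPDATE. "
    "Body: {body} A4 portrait format, professional design, dark blue "
    "border. Render all text in {language} ({script} script), same "
    "layout and design as the English version."
)

LANGUAGES = [
    ("English", "Latin"),
    ("Hindi", "Devanagari"),
    ("Marathi", "Devanagari"),
    ("Tamil", "Tamil"),
]

def build_briefs(policy_summary: str) -> dict:
    """Return one image brief per target language."""
    return {
        language: TEMPLATE.format(
            body=policy_summary, language=language, script=script
        )
        for language, script in LANGUAGES
    }

briefs = build_briefs("Interest rates on new loans revised effective 1 March.")
# Each brief would then be sent to an image-generation endpoint, e.g.
# client.images.generate(model="...", prompt=briefs["Hindi"])  # hypothetical
```

Keeping the template in one place is what preserves layout and design consistency across the four language versions, mirroring the "switch language, keep the template" step above.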
What makes ChatGPT Images 2.0 special?
- Reasons before it renders: The model plans the layout, verifies text, and checks your constraints before committing to pixels, dramatically improving first-pass accuracy on complex briefs.
- Multilingual text that actually works: Accurate rendering in Indic scripts, CJK characters, Cyrillic, and Arabic in the same image—a capability no current image-only tool matches.
- Conversational iteration: Because it lives inside ChatGPT, you refine through dialogue—no re-uploading, no re-prompting from scratch, no context lost between edits.
Note: The tools and analysis featured in this section demonstrated clear value based on our internal testing. Our recommendations are entirely independent and not influenced by the tool creators.
Everyone wants a piece of Anthropic
Google will invest $10 billion in Anthropic PBC, with another $30 billion potentially to follow, strengthening the relationship between two firms that are at once partners and rivals in the race to build AI.
Anthropic has ramped up its fundraising amid the breakout success of Claude Code, an AI agent that speeds the process of writing computer software. The startup said earlier this week that it nabbed another $5 billion from Amazon, also at a valuation of $350 billion, with the option to inject another $20 billion over time.
Anthropic raised $30 billion in February, and investors have since sought to back the firm at a valuation of $800 billion or more.
AI exposure highest in skilled roles, not low-wage jobs
According to Anthropic’s Labor Market Impacts Report, professionals currently most exposed to AI are those with higher education, significant experience and above-average earnings.
The findings in this report turn conventional thinking upside down. There’s a clear tension here: the qualifications and expertise that once helped secure a stable career may now be the very factors placing workers at greater risk of disruption.
Amazon’s 6-point AI playbook for engineers
Amazon.com Inc. is formalising how it builds with AI as part of a broader push to make AI central to its engineering culture, Business Insider reports. The e-commerce firm’s massive retail division, known internally as “Stores”, has formalised its approach into a set of six “AI-native engineering tenets” designed to guide how teams should approach AI development across the organisation.
Meta to unwind $2.5-billion Manus acquisition
Meta Platforms Inc. is planning to unwind its acquisition of AI startup Manus after the Chinese government banned the transaction on national security grounds, according to a new report by The Wall Street Journal, citing sources familiar with the matter.
Any attempt to reverse the deal is expected to be complex, requiring the company to disentangle operations, data, and technology that have already been combined.
Meanwhile, a Bloomberg report noted that Manus staff have been integrated into Meta Platforms, funds have been disbursed, and the startup’s leadership has joined the company’s AI division.
Elon Musk tells his side of the OpenAI story
Elon Musk took the stand for the second day Wednesday in the landmark trial that pits the world’s richest person against Sam Altman, a fellow OpenAI co-founder he accuses of betraying promises to keep the company as a non-profit dedicated to humanity’s benefit.
Musk, who invested about $38 million in OpenAI from December 2015 through May 2017, gave his account of OpenAI’s early years, recounting how he lost confidence that Altman would keep it a non-profit. Questioned by his lawyer Steven Molo, Musk said by late 2022 he was concerned Altman was trying to “steal the charity.”