
May 15, 2026

As AI models become more common, startups are racing to build the software that connects them all together. Osaurus, an open-source Mac-only tool, takes an interesting approach: it lets users switch between different AI models – both local and cloud-based – while keeping their files and data on their own hardware.
The project grew out of an earlier idea for a desktop AI companion called Dinoki. When customers asked why they should pay for the app if they still had to pay for AI tokens, co-founder Terence Pae started thinking more seriously about running AI locally on people’s computers.
“That’s how Osaurus started,” Pae, who previously worked as a software engineer at Tesla and Netflix, told TechCrunch. “You can do pretty much everything on your Mac locally, like browsing your files, accessing your browser, accessing your system configurations. I figured this would be a great way to position Osaurus as a personal AI for individuals.”
This approach addresses a growing concern in the AI space: privacy and data control. While most AI tools require sending your data to remote servers, Osaurus keeps everything on your own machine. This matters especially for professionals in fields like healthcare and law, where data privacy is critical.
Today, Osaurus can connect with locally hosted AI models or cloud providers like OpenAI and Anthropic. Users can freely choose which AI models they want to use and keep other parts of the experience on their own hardware, including the AI’s memory and their personal files. Since different AI models have different strengths, users can switch to whichever model works best for their specific task.
This makes Osaurus what’s called a “harness” – a control layer that connects different AI models, tools, and workflows through one interface. Similar tools like OpenClaw or Hermes exist, but they’re typically aimed at developers who are comfortable with command-line interfaces. Some have also raised security concerns.
Osaurus presents an easy-to-use interface that regular consumers can navigate. It addresses security issues by running everything in a hardware-isolated virtual sandbox, which limits what the AI can access and keeps your computer and data safe.
Running AI models on your own machine is still challenging because it requires significant computing power. To run local models, your system needs at least 64 GB of RAM. For larger models like DeepSeek v4, Pae recommends systems with about 128 GB of RAM.
But Pae believes these requirements will decrease over time. “I can see the potential of it, because the intelligence per wattage – which is like the metric for local AI – has been going up significantly,” he said. “Last year, local AI could barely finish sentences, but today it can actually run tools, write code, access your browser, and order stuff from Amazon. It’s just getting better and better.”
Osaurus currently supports a wide range of models and services:
- Local models: MiniMax M2.5, Gemma 4, Qwen3.6, GPT-OSS, Llama, DeepSeek V4
- Apple’s on-device foundation models and Liquid AI’s LFM family
- Cloud services: OpenAI, Anthropic, Gemini, xAI/Grok, Venice AI, OpenRouter, Ollama, and LM Studio
The tool works as a full MCP (Model Context Protocol) server, meaning any MCP-compatible client can access its tools through it. It comes with over 20 built-in plugins for common tasks like email, calendar, vision processing, file management, web browsing, music, and more. Recent updates have added voice capabilities as well.
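For readers unfamiliar with MCP: it is a JSON-RPC 2.0 protocol in which a client first initializes a session, then lists the server’s tools and calls them by name. The sketch below only builds the raw request messages to show the protocol’s shape; the tool name `calendar.list_events` is a hypothetical placeholder, not a documented Osaurus plugin.

```python
import json

# Minimal sketch of MCP's JSON-RPC 2.0 framing. The methods shown
# (initialize, tools/list, tools/call) come from the MCP specification;
# the tool name and arguments below are hypothetical examples.
def make_request(request_id, method, params=None):
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# A client session: initialize, discover tools, then invoke one.
print(make_request(1, "initialize", {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": {"name": "demo-client", "version": "0.1"},
}))
print(make_request(2, "tools/list"))
print(make_request(3, "tools/call", {
    "name": "calendar.list_events",      # hypothetical tool name
    "arguments": {"day": "today"},
}))
```

In practice these messages travel over a transport such as stdio or HTTP, which is what lets any compatible client – not just Osaurus’s own UI – drive the same set of local tools.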
Since launching nearly a year ago, Osaurus has been downloaded more than 112,000 times, according to its website. The founders, including co-founder Sam Yoo, are currently participating in the New York-based startup accelerator Alliance.
They’re considering expanding to business customers, particularly in legal and healthcare sectors where privacy concerns make local AI processing attractive. As local AI models become more powerful, the team believes this could reduce demand for AI data centers.
“We’re seeing this explosive growth in the AI space where [cloud AI providers] have to scale up using data centers and infrastructure, but we feel like people haven’t really seen the value of the local AI yet,” Pae said. “Instead of relying on the cloud, they can actually deploy a Mac Studio on-prem, and it should use substantially less power. You still have the capabilities of the cloud, but you will not be dependent on a data center to be able to run that AI.”




