It's 2 AM. Your laptop sits closed on the desk, fans still whirring. OpenAI's updated Codex isn't sleeping—it's refactoring that legacy API wrapper you flagged last week, running your unit test suite, committing clean code to a branch you'll merge tomorrow. When you open the lid, the work is done. No browser tab. No terminal window. Just completed tasks waiting in your Codex history.
This is the shift OpenAI announced yesterday: background execution for Codex on macOS and Windows. The AI coding assistant can now commandeer your desktop while you do something else—writing emails, taking calls, or, yes, sleeping. It handles the keep-the-lights-on work: debugging sessions, test generation, documentation drafts, pull request reviews. You set the task, minimize the window, and return hours later to find the output ready.
The update goes beyond scheduling. OpenAI packed the new Codex with computer-use capabilities—letting the model directly manipulate files, navigate folders, and execute terminal commands. It gains in-app browsing for researching APIs and documentation mid-task. Image generation lands inside the workflow. A persistent memory layer retains your codebases, coding style, and project conventions across sessions. Plugin support opens the door to third-party integrations.
This is not a copilot. A copilot suggests code while you type. Codex now operates as an autonomous agent embedded in your OS—launching applications, reading files, spawning processes, monitoring task progress. The distinction matters: GitHub Copilot augments human coding. Codex replaces human coding time.
The competitive pressure is obvious. Anthropic positioned Claude as an agentic platform with computer use and deep desktop integration. OpenAI's response treats Codex as the vector for that same capability, targeting developers first but expanding into "knowledge work" broadly. The blog post explicitly calls this groundwork for a "super app"—a persistent AI layer that mediates between user and machine.
For practitioners, the immediate question is reliability. OpenAI claims background tasks won't interfere with active work, but the execution model requires trust: your OS resources, file system, and applications become Codex's workspace. One errant command could corrupt a build or overwrite a config file. The memory feature mitigates some risk by maintaining context across sessions, but production-grade autonomy demands safeguards, such as sandboxing, permission scoping, and rollback, that the company hasn't fully detailed.
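The shape of one such safeguard can be sketched in a few lines: vet every agent-proposed shell command against an allowlist before it touches the file system. Everything here, the `ALLOWED_COMMANDS` set, the `BLOCKED_FLAGS` set, and `vet_command`, is a hypothetical illustration of the pattern, not an actual Codex mechanism.

```python
import shlex

# Hypothetical guardrail for an autonomous coding agent: only a short list
# of known-safe binaries may run, and destructive flags are rejected even
# for allowlisted commands. Illustrative only; not a Codex API.
ALLOWED_COMMANDS = {"git", "pytest", "ls", "cat"}
BLOCKED_FLAGS = {"--force", "-rf", "-f"}

def vet_command(command: str) -> bool:
    """Return True only if the command's binary is allowlisted
    and no blocked flag appears among its arguments."""
    tokens = shlex.split(command)
    if not tokens:
        return False
    if tokens[0] not in ALLOWED_COMMANDS:
        return False
    return not any(tok in BLOCKED_FLAGS for tok in tokens[1:])

print(vet_command("pytest tests/"))   # True: allowlisted test run
print(vet_command("rm -rf build/"))   # False: rm is not allowlisted
```

A real deployment would need far more than string matching (sandboxed processes, scoped file-system permissions, undoable writes), but the allowlist illustrates the minimum bar: the agent proposes, a policy layer disposes.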
Pricing remains unchanged—free tier with usage limits, Plus and Pro plans for heavier workloads. The desktop app requires no additional subscription beyond existing Codex access.
The background execution model is the tell. OpenAI isn't shipping a smarter autocomplete. It's building a persistent background process that treats your development environment as its domain. The developer-tool framing is gone. What's emerging looks more like an operating system layer—one that happens to write code.