OpenAI turns Codex into a full-computer coding agent that can click, browse, remember, and run in parallel
OpenAI rolled out a major Codex update that pushes it beyond code completion into a full agentic workspace. The new release adds background computer use, browser interaction, memory, automations, image generation, and support for more than 90 new plugins.
OpenAI is trying to turn Codex from a coding assistant into an operating environment
OpenAI’s latest Codex release is not a small feature update. It is a bet that coding tools are evolving from autocomplete and chat-based help into persistent software agents that can operate across the entire development workflow. In a product post published April 16, OpenAI said it is releasing a major upgrade to Codex that lets the system work with more tools, browse the web, use a computer cursor, remember user preferences, and keep long-running tasks moving in the background.
The headline change is that Codex is no longer being framed as just something that writes code inside a terminal or editor. OpenAI is positioning it as a development command center. The company wrote that Codex can now “operate your computer alongside you,” “generate images,” “remember your preferences,” and “take on ongoing and repeatable work.” Those are meaningful shifts because they move the product from a request-response model toward something much closer to a software coworker.
What actually changed
According to OpenAI, the upgraded app now supports several capabilities that developers have been asking for as coding agents become more ambitious:
- Background computer use: Codex can see, click, and type with its own cursor on macOS.
- Parallel agents: multiple agents can work at once without interrupting the user’s own activity.
- In-app browser support: developers can comment directly on web pages and guide frontend iteration.
- Image generation: Codex can use gpt-image-1.5 to create and iterate on visuals and mockups.
- More integrations: OpenAI says it is releasing more than 90 additional plugins spanning skills, app integrations, and MCP servers.
- Remote and review workflows: Codex can now address GitHub review comments, work across multiple terminal tabs, and connect to remote devboxes over SSH (in alpha).
- Automations and memory: Codex can preserve context across conversation threads, schedule future work, and remember useful context from prior tasks.
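To make the parallel-agents idea concrete, here is a generic sketch of background tasks running concurrently while the caller stays free. This is an illustration of the pattern only; the task names are invented and Codex's actual implementation is not public.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

# Hypothetical stand-in for an agent working on one task.
def run_agent(task_name: str) -> str:
    time.sleep(0.1)  # simulate background work
    return f"{task_name}: done"

# Several agents run at once; the main thread is not blocked per-task.
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(run_agent, t)
               for t in ["fix-lint", "update-deps", "draft-tests"]]
    results = sorted(f.result() for f in as_completed(futures))

print(results)
```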
That collection matters because each feature removes a separate bottleneck in software work. Code is only one part of shipping. Developers also review pull requests, inspect visual output, dig through docs, compare versions, respond to comments, and revisit tasks later with partial context. OpenAI is effectively saying the agent should live across that whole loop.
The big quote: OpenAI wants Codex to handle more of the messy middle
“Codex can now operate your computer alongside you, work with more of the tools and apps you use everyday, generate images, remember your preferences, learn from previous actions, and take on ongoing and repeatable work.”
That quote captures the ambition cleanly. It is not just about producing code faster. It is about collapsing the friction between an idea, a spec, a UI tweak, a browser check, a PR review, and a follow-up task tomorrow morning.
OpenAI also emphasizes that Codex can now keep working after the immediate interaction ends. The company wrote that it has “expanded automations to allow re-using existing conversation threads” and that Codex “can now schedule future work for itself and wake up automatically to continue on a long-term task, potentially across days or weeks.” That is the sort of feature that pushes agent software into a different category operationally. When the system is allowed to resume work later with continuity, it starts behaving less like a tool and more like a semi-persistent worker process with product-level memory.
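The "wake up and continue" behavior described above follows a familiar pattern: checkpoint state durably so a later scheduled run can resume where the last one stopped. The toy sketch below illustrates that general mechanism only; OpenAI has not documented how Codex's automations are actually persisted, and every name here is invented.

```python
import json
import os
import tempfile

def resume_or_start(path: str) -> dict:
    # Resume from a prior checkpoint if one exists, else start fresh.
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"task": "long-migration", "steps_done": 0}

def run_step(state: dict) -> dict:
    state["steps_done"] += 1  # one unit of work per wake-up
    return state

def checkpoint(path: str, state: dict) -> None:
    with open(path, "w") as f:
        json.dump(state, f)

path = os.path.join(tempfile.mkdtemp(), "agent_state.json")

# First scheduled run: starts fresh, does one step, checkpoints.
checkpoint(path, run_step(resume_or_start(path)))

# Second run (e.g. the next day): resumes from the checkpoint and continues.
state = run_step(resume_or_start(path))
checkpoint(path, state)

print(state["steps_done"])
```

The key property is that nothing about the task lives only in a single process's memory, which is what lets work continue "across days or weeks."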
Why the browser and computer-use pieces matter
One of the chronic weaknesses of coding assistants has been their separation from the actual surfaces developers care about. Writing React code is one thing; inspecting the broken spacing in a real browser is another. OpenAI’s browser support is aimed squarely at that gap. The company says the app now includes an in-app browser where users can comment directly on pages, which should make frontend and game iteration more precise than passing screenshots back and forth.
The computer-use capability goes a step further. OpenAI wrote that “with background computer use, Codex can now use all of the apps on your computer by seeing, clicking, and typing with its own cursor.” That means Codex is not restricted to the apps with polished APIs. It can work in the awkward real world of software teams, where workflows spill across terminals, browsers, cloud dashboards, design tools, and issue trackers.
If that works reliably, it is a genuine expansion of what coding agents can practically do. If it works unreliably, it becomes a trust problem very quickly. The product challenge here is not just capability. It is predictability, supervision, and recoverability when an agent touches real interfaces.
The plugin flood is another signal
OpenAI says it is releasing more than 90 new plugins, including integrations developers will recognize immediately: Atlassian Rovo, CircleCI, CodeRabbit, GitLab Issues, Microsoft Suite, Neon by Databricks, Remotion, Render, and Superpowers. The number matters less than the pattern. Coding agents become more useful when they can pull issue context, test results, deployment status, and design artifacts into one place instead of forcing a human to relay that context manually.
There is an ecosystem play here too. By combining skills, integrations, and MCP servers, OpenAI is trying to make Codex extensible enough that teams can bend it toward their own stack rather than waiting for one canonical workflow blessed by OpenAI.
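The extensibility argument can be made concrete with a generic tool-registry sketch: teams register their own integrations, and the agent dispatches to them by name. The names and API below are invented for illustration; this does not reproduce the real MCP protocol or Codex's plugin interfaces.

```python
from typing import Callable

class ToolRegistry:
    """A minimal name-to-function registry, standing in for a plugin system."""

    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., str]] = {}

    def register(self, name: str):
        # Decorator that makes a function callable by name.
        def decorator(fn: Callable[..., str]):
            self._tools[name] = fn
            return fn
        return decorator

    def call(self, name: str, **kwargs) -> str:
        return self._tools[name](**kwargs)

registry = ToolRegistry()

@registry.register("issues.fetch")
def fetch_issue(issue_id: str) -> str:
    # A real integration would query GitLab, Jira, etc.
    return f"Issue {issue_id}: flaky test in checkout flow"

print(registry.call("issues.fetch", issue_id="PROJ-42"))
```

The point of the pattern is that the agent's capabilities grow by registration rather than by waiting for one canonical workflow.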
Memory is the subtle but important feature
Memory is easy to overlook next to computer use, but it may end up being the stickiest part of the release. OpenAI describes it as a preview feature that lets Codex “remember useful context from previous experience, including personal preferences, corrections and information that took time to gather.”
“This helps future tasks complete faster and to a level of quality previously only possible through extensive custom instructions.”
That line gets at a basic truth of real development work: teams spend enormous energy rebuilding context. Preferred coding style, deployment caveats, where logs live, which component is fragile, which reviewer always wants more tests. If memory reduces that repeated overhead without becoming noisy or overconfident, it could be one of the most practically valuable pieces of the whole update.
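As a rough mental model, the memory feature amounts to a persistent note store that later tasks can query. The sketch below is purely illustrative, with naive substring retrieval standing in for real relevance ranking; OpenAI has not published how Codex's memory is stored or retrieved.

```python
class AgentMemory:
    """A toy store of notes learned during tasks, keyed by topic."""

    def __init__(self) -> None:
        self._notes: dict[str, str] = {}

    def remember(self, key: str, note: str) -> None:
        # Later writes overwrite earlier ones, acting like a correction.
        self._notes[key] = note

    def recall(self, topic: str) -> list[str]:
        # Naive substring match; a real system would rank by relevance.
        return [note for key, note in self._notes.items() if topic in key]

memory = AgentMemory()
memory.remember("style.imports", "Group stdlib imports first, then third-party.")
memory.remember("deploy.staging", "Staging deploys require the VPN to be up.")
memory.remember("style.tests", "Reviewer asks for a regression test per bugfix.")

print(memory.recall("style"))
```

If something like this works well, the "extensive custom instructions" OpenAI mentions become notes the system accumulates on its own instead of boilerplate a human maintains.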
What this means for the coding-agent race
The release also says something broader about the state of AI coding products in 2026. The market is moving away from one-shot generation and toward systems that combine coding, environment control, long context, task persistence, and lightweight autonomy. Everyone in the category is converging on the same basic insight: developers do not need another autocomplete box nearly as much as they need a system that can carry context through a messy workflow without constantly dropping the thread.
OpenAI clearly wants Codex to be that system. It says that “developers start with Codex to write code, then increasingly use it to understand systems, gather context, review work, debug issues, coordinate with teammates, and keep longer-running work moving.” That is a bigger ambition than code generation. It is an attempt to own the flow of software work around the code itself.
The real question is trust
All of this sounds powerful, but the release raises the same central question that hangs over every agentic product: how much do users trust it to act without causing extra cleanup? Background action, browser control, image generation, and automations are only net positives if they reduce overhead rather than adding supervision burden.
OpenAI’s framing suggests it understands that risk. The company repeatedly describes Codex as a partner and emphasizes the stages between writing code, checking outputs, reviewing changes, and collaborating with the agent “in one workspace.” That sounds less like full replacement and more like a coordinated human-agent loop.
Still, the direction is unmistakable. OpenAI is no longer presenting Codex as a helper that sits on the edge of the workflow. It is trying to make Codex the place where the workflow happens.
Bottom line
This is one of the more consequential product moves in AI developer tooling this week because it ties together several trends that had previously been fragmented: computer use, browser action, memory, task scheduling, multimodal creation, and deep integration with external tools. On their own, each feature is notable. Together, they suggest OpenAI wants Codex to become less like a model you ask for code and more like a persistent development environment staffed by software agents.
If that vision lands, this update will look like an important step toward agent-native software development. If it does not, it will still be remembered as a clear marker of where the frontier companies think coding tools are going next.