One model that sees, thinks, and acts in the real world.
Today we’re launching OPAL, our physics‑native vision‑language‑action foundation model that turns raw sensory input into smooth, whole‑body motion plans across diverse robotic platforms.
OPAL (Operant Physical Agent with Language) embeds the laws of physics directly into its reasoning, producing action plans that are more capable, more coherent, and dramatically faster to generate than those of today’s generalist robot policies. OPAL is the first generalist robot policy to match models finetuned on specific tasks without the need for post-training modifications.
The OPAL framework bridges the gap between symbolic representation and neural learning, letting robots and autonomous systems build robust models of their surroundings grounded in fundamental physical principles. OPAL learns rich causal relations between objects and actions in the physical world, so it learns quickly and performs at SOTA levels with significantly fewer examples and less compute than traditional approaches.
Nyrus OPAL is the first of our causal reasoning models, a new class of VLAs that learn why their actions produce particular outcomes rather than simply which outcomes follow which actions. It sets a new SOTA on benchmarks including COLOSSEUM, CALVIN, and SimplerEnv, as well as in real-world testing.
OPAL performs especially well on tasks that require long-horizon physical planning, such as making a cup of coffee in a novel environment, and on tasks where sensor data is perturbed or inconsistent, as it often is in real-world scenarios.
OPAL's success in these domains points to the possibility of our techniques scaling to a comprehensive causal understanding of the physical world, which would dramatically expand the capabilities and use cases of robotic systems. Already, OPAL demonstrates an emergent capacity to reason abstractly about the objects around it, the actions it takes, and the systems it inhabits, and we are optimistic that these capabilities will continue to improve as we refine and scale our approach.
While OPAL is far from our vision of a General Physical Intelligence, one as capable as a human at navigating the world and able to inhabit any platform from robotic arms to jet aircraft, early results suggest that the scaling laws first observed in language models hold in our architectures, both for pre-training model size and for post-training RL. As we continue to scale OPAL, we expect a rate of improvement similar to that seen in language models.
We're interested in investigating whether the techniques we pioneered in OPAL apply in other contexts, such as improving causal reasoning in LLMs. We're specifically interested in whether post-training with OPAL-inspired methodologies can improve long-horizon task performance to the point where Iterative Distillation and Amplification could scale the architecture to complete tasks with time horizons of tens of hours.
Conventional transformers treat every token pair equally. OPAL’s topological masks weight edges by contact type, link distance, and affordance class. That turns self‑attention into a compliant, graph‑aware planner that respects physical limitations out of the box.
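As a rough illustration (not OPAL’s released code), a topology-conditioned attention bias might look something like the sketch below; the contact classes, bias weights, and tensor shapes are assumptions for exposition only.

```python
# Illustrative sketch only: names, bias terms, and weights are assumptions,
# not OPAL's actual implementation.
import torch
import torch.nn.functional as F

def topological_attention(q, k, v, contact_type, link_distance, affordance_match):
    """Self-attention with an additive bias derived from scene topology.

    q, k, v:          (batch, tokens, dim) query/key/value projections
    contact_type:     (batch, tokens, tokens) integer contact class per token pair
    link_distance:    (batch, tokens, tokens) hops in the kinematic/scene graph
    affordance_match: (batch, tokens, tokens) 1.0 where affordance classes agree
    """
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d**0.5           # standard dot-product scores

    # Learned in practice; fixed here to keep the sketch self-contained.
    contact_bias = torch.tensor([0.0, 1.0, 2.0])        # e.g. none / resting / grasped
    bias = (
        contact_bias[contact_type]                       # favor pairs in physical contact
        - 0.5 * link_distance.float()                    # penalize distant graph nodes
        + affordance_match                               # boost compatible affordances
    )
    return F.softmax(scores + bias, dim=-1) @ v          # graph-aware attention output

# Tiny smoke test with random inputs.
B, T, D = 1, 4, 8
q = k = v = torch.randn(B, T, D)
ct = torch.randint(0, 3, (B, T, T))
ld = torch.randint(0, 5, (B, T, T))
am = (torch.rand(B, T, T) > 0.5).float()
print(topological_attention(q, k, v, ct, ld, am).shape)  # torch.Size([1, 4, 8])
```

An additive bias like this keeps attention fully differentiable while steering it toward physically connected structure; a hard mask would be the stricter variant of the same idea.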
Over the course of its training, OPAL identifies cause-and-effect relationships between entities, allowing for counterfactual reasoning and robust planning in complex scenarios.
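One way to picture the kind of counterfactual query a causal world model supports: given a learned forward dynamics model, compare the predicted outcome of the action actually taken with the outcome of an alternative action from the same state. The model below is a hypothetical stand-in, not OPAL’s interface.

```python
# Hypothetical illustration of a counterfactual query over a learned dynamics model.
import torch
import torch.nn as nn

class LatentDynamics(nn.Module):
    """Predicts the next latent state from (state, action); trained on interaction data."""
    def __init__(self, state_dim=32, action_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, state_dim)
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def counterfactual_gap(model, state, taken_action, alternative_action):
    """How different would the outcome have been under the alternative action?"""
    with torch.no_grad():
        factual = model(state, taken_action)
        counterfactual = model(state, alternative_action)
    return (factual - counterfactual).norm(dim=-1)   # per-sample divergence of outcomes

model = LatentDynamics()
state = torch.randn(1, 32)
print(counterfactual_gap(model, state, torch.randn(1, 8), torch.randn(1, 8)))
```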
Perception modules project raw sensor frames into a shared manifold aligned with OPAL’s physical latent. Each adapter is plug‑and‑play: drop a new camera or tactile pad onto the robot, fine‑tune the adapter for a few thousand steps, and OPAL immediately leverages the new stream.
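For context, here is a minimal sketch of what such an adapter could look like, assuming a frozen backbone and a 256-dimensional shared latent; the module names, dimensions, and alignment target are illustrative, not OPAL’s actual interfaces.

```python
# Minimal sketch of a plug-and-play sensor adapter; all names and sizes are assumed.
import torch
import torch.nn as nn

SHARED_LATENT_DIM = 256   # dimensionality of the shared physical latent (assumed)

class SensorAdapter(nn.Module):
    """Projects raw frames from one sensor (camera, tactile pad, ...) into the shared manifold."""
    def __init__(self, sensor_dim):
        super().__init__()
        self.proj = nn.Sequential(
            nn.LayerNorm(sensor_dim),
            nn.Linear(sensor_dim, SHARED_LATENT_DIM),
            nn.GELU(),
            nn.Linear(SHARED_LATENT_DIM, SHARED_LATENT_DIM),
        )

    def forward(self, frames):
        return self.proj(frames)

# Fine-tune only the new adapter; the (hypothetical) frozen backbone is untouched.
tactile_adapter = SensorAdapter(sensor_dim=96)
optimizer = torch.optim.AdamW(tactile_adapter.parameters(), lr=1e-4)

frames = torch.randn(16, 96)                    # a batch of flattened tactile frames
target = torch.randn(16, SHARED_LATENT_DIM)     # alignment target from an existing modality
loss = nn.functional.mse_loss(tactile_adapter(frames), target)
loss.backward()
optimizer.step()
```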
By leveraging structured knowledge representations, OPAL achieves state-of-the-art performance with significantly fewer examples than traditional machine learning approaches.
| Specification | Detail |
| --- | --- |
| Architecture | Causal Reasoning Model |
| Modalities | Visual, spatial, kinesthetic |
| Inference Latency | 15–54 ms (environment-dependent) |
| Release Date | May 2025 |
For more technical details, please refer to our research paper:
Read the OPAL Paper

Interested in seeing how OPAL can transform your robotics or automation systems? Request a demonstration or access to our API by contacting our research team.
Request Demo Access