
Enterprise AI failed because of bad data engineering, not bad models. Nvidia believes it can fix that

NemoClaw is built on the assumption that enterprises can solve this themselves. The consulting industry's business model depends on them being wrong

by Defused News Writer

The most revealing thing about Nvidia's NemoClaw launch is not what it does. It is what it implies about everyone else.

As Nate B Jones argues, the AI industry has spent two years making a straightforward problem complicated, and the bill is now coming due.

OpenAI and Anthropic spent the better part of a year discovering that the businesses they partnered with could not implement their solutions.

Despite both companies achieving rapid results in-house, their enterprise rollouts struggled. Rather than simplify their approach, both turned to large consulting firms to bridge the gap. They outsourced the hard part, and handed over control of their own narrative in the process.

What NemoClaw actually is

Nvidia's answer runs in the opposite direction. NemoClaw is an enterprise-hardened version of the open-source OpenClaw framework, designed to run on local Nvidia hardware with policy-based guardrails and model constraints.

It is built on the assumption that enterprises, given the right primitives and a clean environment, can do this themselves.

Huang's strategy, Jones argues, is an ecosystem play. Every developer who contributes to OpenClaw adds value to NemoClaw, which Nvidia can sell up the stack to enterprises. It is a familiar move for a company that built its dominance by owning the chip layer and is now reaching for more of the value chain.

The approach is notable for what it does not include: a consulting arm, a change management deck, or a partnership with Accenture.

The problem was never the technology

Jones returns to Rob Pike's five rules of programming as a framework for understanding what the industry gets wrong. The rules, including measuring before optimising, avoiding complexity, and letting good data structures make algorithms self-evident, are not nostalgic. They are what agentic systems demand.

The five hard problems Jones identifies in production agent deployment are software engineering problems with known solutions: context compression, code instrumentation, linting discipline, multi-agent coordination and specification clarity. The context window fills up, so you need compression strategies. Agents write inconsistent code, so you enforce strict linting rules. Long-running tasks lose coherence, so you design around milestones.
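To make the first of those concrete, here is a minimal sketch of one context-compression strategy: keep the system prompt and the most recent turns verbatim, and collapse the middle of the transcript into a stub. All names, thresholds and the token estimate are illustrative assumptions, not any vendor's API; a production system would replace the stub with an LLM-generated summary.

```python
# Hypothetical sketch of context compression for a long-running agent.
# Assumes messages are dicts of {"role": str, "content": str} and that
# messages[0] is the system prompt, which is always preserved.

def estimate_tokens(text: str) -> int:
    # Crude proxy: roughly 4 characters per token.
    return max(1, len(text) // 4)

def compress_context(messages, budget_tokens, keep_recent=4):
    """Return a message list whose estimated size fits budget_tokens."""
    total = sum(estimate_tokens(m["content"]) for m in messages)
    if total <= budget_tokens:
        return messages  # already within budget, nothing to do

    head = messages[0]                  # system prompt, kept verbatim
    tail = messages[-keep_recent:]      # most recent turns, kept verbatim
    middle = messages[1:-keep_recent]   # older turns to be collapsed

    # In a real system this stub would be an LLM-generated summary;
    # here it only records what was dropped.
    summary = {
        "role": "system",
        "content": f"[Summary of {len(middle)} earlier messages elided "
                   f"to fit the context budget.]",
    }
    return [head, summary] + tail
```

The design choice worth noting is that compression is lossy by construction: the agent trades recall of old turns for room to keep working, which is why the summary step matters far more than the truncation step.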

None of this is new. What is new is that consultants are charging for it.

Who benefits from the complexity

The hype around agentic AI exists in part because complexity creates business. Consultants earn by making the learnable seem esoteric. Real change management, Jones argues, requires working alongside engineers, product managers and designers, not presenting them with a slide deck.

Nvidia, by backing an open-source framework and trusting developers to build on it, is making a different argument: that the fundamentals of good engineering are not a professional secret. They are teachable.

The companies that grasp that first will not need to outsource their AI strategy. The ones that do not will keep paying someone else to explain it to them.
