AI is delivering ROI in manufacturing, but scaling it remains the challenge. Without stronger data and network foundations, progress stalls.
By Nic Leszczynski, Principal Solutions Engineer, UKI, at Riverbed Technology
From the shop floor to the supply chain, artificial intelligence (AI) is already delivering measurable returns throughout manufacturing. Just ask the 87% of sector leaders who told the 2025 Riverbed Global Survey that ROI from Artificial Intelligence for IT Operations (AIOps) has met or exceeded their expectations.
The challenge, however, is scaling those gains across the enterprise. Only 12% of AI initiatives are fully deployed across manufacturing right now, and just 37% of organizations within the sector currently feel fully prepared to operationalize AI at scale. This structural divide paints AI as a work in progress: partially realized, but not yet carried through to completion.

With Industry 4.0 well under way, it’s not necessary to prove that AI pays off – the data already shows it does. But while it’s demonstrably effective in controlled tests, the majority of manufacturers are struggling to scale those benefits across their complex, real-world environments. This is likely to remain the case until the right digital foundations are in place.
According to Deloitte, ‘smart manufacturing’ – which leverages AI to drive predictive maintenance, quality assurance, supply chain optimization, and energy efficiency monitoring – is already enabling a 10–20% improvement in production output and a 10–15% gain in unlocked capacity. The net potential is therefore enormous, yet the breadth of that possible impact makes scaling AI complicated.
That’s because smart manufacturing environments depend on a reliable and secure flow of data between all devices, machines, facilities, and logistics networks. AI models must ingest, interpret, and act on this web of telemetry in real time – often across legacy systems, edge devices, and cloud platforms. Decision-makers need to be able to see and analyze this information as well.
Convoluted or bottlenecked data pipelines create operational strain, while limited visibility into system behavior makes optimization and scaling significantly more difficult. The pressure is mounting to address these issues on an infrastructural level.

The performance upsides of AI are undeniable. But a set of structural issues is keeping those benefits from scaling. Before the technology can be embraced enterprise-wide, manufacturers must establish the infrastructural rigor required to support it. New research findings paint this pursuit of AI readiness in more detail:
– 91% of manufacturers say AI data movement is critical to their strategy.
– 96% cite network performance and reliability as top considerations.
– Nearly half (47%) lack confidence in the accuracy and completeness of their data.
Taken together, these figures show there’s still a lot more progress to be made. If over nine-tenths of leaders view data movement and network performance as critical, then AI’s integration hinges on infrastructure built to move data quickly and reliably. Yet at the same time, nearly one in every two organizations does not fully trust the data being moved. Both components must be tackled to avoid counterproductive investment.
While manufacturers clearly understand what AI needs to thrive, many are still operating on digital foundations that are not yet robust and flexible enough to accommodate it at scale. Add to that the fragmentation caused by multiple observability tools, and it becomes evident why so many AI initiatives supporting production operations are unable to move past proof of concept.

On an organizational level, the success of AI will be shaped by the quality of data, the speed at which it moves, and the transparency teams have into the systems that carry it. So, before investing in full-scale deployment, it is imperative that manufacturers:
– Treat network performance and data flow as strategic priorities
Manufacturing AI systems have to operate across distributed production environments and interconnected supply chains, all driven by high-volume, real-time data. As a result, network performance can no longer be treated as a background IT concern.
Since this shift from pilot to operating model requires deliberate leadership and coordination, organizations need consistency in how their data is captured and moved. Frameworks like OpenTelemetry help achieve strategic alignment by standardizing telemetry across complex networks – creating the stable, high-functioning data backbone AI needs to integrate enterprise-wide.
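To make that concrete, below is a minimal sketch of standardized telemetry using the OpenTelemetry Python SDK. The service name, site attribute, and vibration metric are hypothetical, and the console exporter stands in for the OTLP exporter a production deployment would typically point at a collector.

```python
# A minimal, illustrative OpenTelemetry metrics pipeline.
# The service name, site label, and vibration metric are hypothetical.
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
)
from opentelemetry.sdk.resources import Resource

# Identify the emitting system with consistent resource attributes,
# so every machine reports under the same schema.
resource = Resource.create({
    "service.name": "press-line-telemetry",   # hypothetical service name
    "deployment.environment": "plant-eu-01",  # hypothetical site label
})

# Export to the console here; a real deployment would usually use an
# OTLP exporter pointed at a collector instead.
reader = PeriodicExportingMetricReader(
    ConsoleMetricExporter(), export_interval_millis=5000
)
metrics.set_meter_provider(
    MeterProvider(resource=resource, metric_readers=[reader])
)

# Instruments are declared once, with explicit units and descriptions.
meter = metrics.get_meter("factory.sensors")
vibration = meter.create_histogram(
    "machine.vibration",
    unit="mm/s",
    description="Spindle vibration velocity",
)

# Every reading carries the same attribute keys, so downstream AI models
# see a uniform stream regardless of which machine sent it.
vibration.record(4.2, attributes={"machine.id": "press-07"})
```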
– Strengthen data quality and trust
AI models are only as effective as the data they consume. And in manufacturing, where data is generated and funneled across a range of active sources, inconsistencies or inaccuracies introduce risk and compromise integrity.
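As an illustrative sketch (the record fields and plausibility range below are assumptions, not a standard manufacturing schema), a simple pre-ingestion quality gate can quarantine suspect readings before they ever reach a model:

```python
# An illustrative pre-ingestion quality gate. The record fields
# (machine_id, timestamp, temperature_c) and the plausibility range
# are assumptions for this sketch.
from datetime import datetime

def validate_reading(record: dict) -> list[str]:
    """Return a list of issues; an empty list means the record can be trusted."""
    issues = []
    if not record.get("machine_id"):
        issues.append("missing machine_id")
    try:
        datetime.fromisoformat(record.get("timestamp"))
    except (TypeError, ValueError):
        issues.append(f"unparseable timestamp: {record.get('timestamp')!r}")
    temp = record.get("temperature_c")
    if not isinstance(temp, (int, float)) or not (-40.0 <= temp <= 400.0):
        issues.append(f"temperature outside plausible range: {temp!r}")
    return issues

# Readings that fail the gate are quarantined for review rather than
# fed straight into a model.
reading = {"machine_id": "press-07",
           "timestamp": "2025-01-15T09:30:00+00:00",
           "temperature_c": 1250}
print(validate_reading(reading))
# -> ['temperature outside plausible range: 1250']
```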
– Reduce tool sprawl and blind spots so leaders can see performance gaps
With organizations typically running 13 observability tools from nine vendors, fragmentation persists as a major barrier to scaling AI. While each of these tools may solve a specific challenge, their collective overlap creates duplication and limits visibility – making it hard to understand how systems interact under AI-driven workloads.
This architectural tension is why 95% of manufacturers are going through a process of tool consolidation. In one such case, a global manufacturing organization that integrated its tech stack into a unified observability platform gained data-driven, actionable insights to improve productivity and performance. This kind of forensic clarity is what empowers organizations to scale AI with confidence and without friction.
Ambitiously, 85% of manufacturers expect to be AI-ready by 2028. But if only 37% feel fully prepared today, the viability of that timeline will be determined by how quickly organizations can close the readiness gap.
Over the next few years, AI will become even more deeply embedded than it is now – and its growing influence on production lines, supply chains, and decision-making processes will only place greater demands on telemetry and observability.
It’s never been more apparent that the manufacturers that invest in full-fidelity data and full-stack visibility now – while also adapting their strategic principles to anticipate the new demands of Industry 4.0 – will be best positioned to turn their successful AI pilots into sustained operational and commercial advantages.

Nic Leszczynski is Principal Solutions Engineer, UKI, at Riverbed Technology.