The AI Paradox: Budgets Surge, Yet Deployment Lags
A recent study by Qlik, conducted in collaboration with Enterprise Technology Research (ETR), reveals an intriguing gap between enterprises' commitment to Artificial Intelligence (AI) and its actual implementation. A staggering 97% of large enterprises have allocated budgets for agentic AI, with a significant portion planning investments exceeding $1 million, yet only 18% have fully deployed it. This disparity raises crucial questions about the challenges and opportunities in the AI landscape.
Qlik's Chief Strategy Officer, James Fisher, sheds light on this paradox, expressing surprise at the disconnect between expectations and the reality of AI adoption. He emphasizes the need to bridge the gap between talk and measurable action, a sentiment echoed in the study's findings.
But here's where it gets interesting: while the number of organizations with formal AI strategies has jumped from 37% to an impressive 69%, nearly half (46%) believe it will take three to five years to operationalize AI at scale. This highlights a mismatch between ambition and the practical challenges of implementation.
Budget Pressures and the Fragmented Funding Landscape
Fisher frames the deployment gap within the broader context of enterprise constraints. IT budgets, he notes, are consistently under pressure, and when it comes to increasing investments in AI, especially in the face of flat or declining overall budgets, difficult decisions and trade-offs become inevitable.
While 56% of organizations have dedicated budgets for AI innovation, 60% are still drawing from IT/Technology budgets and 42% from line-of-business funds; the figures overlap because many organizations tap multiple sources. This fragmented funding approach underscores the complexity of AI adoption within enterprises.
Data Foundations and the Skills Gap
Data quality, availability, and accessibility emerged as the primary barriers to AI adoption, with 56% of respondents citing these challenges. Perhaps most revealing is the confidence gap: while 77% claim confidence in distinguishing agentic AI from other tools, only 42% believe their organization has the internal expertise to design and deploy it without external support. Fisher highlights data literacy and skills transfer as persistent barriers that predate agentic AI.
Integration with existing systems ranks as the second most significant barrier, followed closely by the lack of internal expertise. Only a small fraction (13%) mentioned multi-agent systems when defining agentic AI, with most focusing on autonomous decision-making and task automation.
Security and Governance Concerns
Cybersecurity vulnerabilities topped the list of deployment concerns, cited by 61% of respondents as a primary worry. Legal and compliance exposure (51%) and a lack of explainability and auditability (47%) are also significant concerns. As for where agentic AI will land first, IT Operations is the primary target area for implementation, according to 72% of respondents.
Fisher emphasizes the evolving stakeholder landscape, with legal teams now taking a more prominent role at the table. He observes that governance policies are crucial, setting the foundation for how people work with AI and ensuring compliance with job requirements.
Successful Deployment Stories: Learning from the Early Adopters
Following the study, Fisher shared customer examples of successful AI deployment. The common thread? Each organization built AI on existing data foundations, avoiding the pitfall of attempting transformation first.
A North American specialty chemicals distributor implemented a generative AI assistant for sales and customer service within two months. By connecting the AI to existing document repositories in its Qlik environment and SharePoint, they enabled roughly 40 people to use it daily for complex product data queries. This rapid deployment allowed the company to hire commercial talent from outside the industry, confident that AI would bridge the product knowledge gap.
A global industrial manufacturer with 3,500 employees took a similar approach with unstructured technical content, setting up knowledge bases in just 15 minutes plus indexing time. This speed convinced leadership that AI could be scaled without a large upfront data engineering project.
An Asia Pacific entertainment group improved attendance forecast accuracy from 70% to over 90% by leveraging predictive capabilities on their existing Qlik Cloud Analytics deployment. This enhanced accuracy now drives labor scheduling and operational planning in near real-time.
A European food producer built AI-powered demand forecasts for premium organic meat products, reducing forecast deviations to around 1%. This not only cut over-production and costly downgrades but also lowered storage costs while supporting sustainability goals.
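Forecast figures like these (deviations around 1%, accuracy moving from 70% to over 90%) are typically expressed with a percentage-error metric such as MAPE. A minimal sketch of how that number is computed; the function name and sample demand figures are illustrative only, not drawn from the study:

```python
def mape(actual, forecast):
    """Mean absolute percentage error: average of |actual - forecast| / actual, as a percentage."""
    return sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual) * 100

# Hypothetical weekly demand (units) vs. forecast, for illustration only.
actual = [1000, 1200, 950, 1100]
forecast = [990, 1210, 960, 1095]

print(f"Forecast deviation: {mape(actual, forecast):.1f}%")  # a deviation under 1%
```

A deviation in this range is what lets a producer cut over-production: the smaller the average miss, the less safety stock and downgrade waste the plan has to absorb.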
The Key Takeaways: Starting Small and Defining Success
Fisher emphasizes the need to prove AI's value and believes the necessary tools are available. The question, he says, is how distributed these tools are across enterprise architecture and the vendor ecosystem, and what it costs organizations to integrate them.
He adds a thought-provoking caveat: "When you're holding a hammer, everything looks like a nail." Organizations, in other words, still need to learn which technologies fit which use cases rather than reaching for one tool everywhere.
The customer examples provide a compelling argument for starting small and defining success within bounded scopes. None of these organizations undertook multi-year transformation programs or resolved every data quality issue first. Instead, they connected AI to existing data foundations and demonstrated value quickly.
The Definitional Confusion and Its Impact
The definitional confusion around agentic AI may explain the wide gap between budget commitment and deployment. Only 13% mentioned multi-agent systems when defining agentic AI, arguably its distinguishing feature. Most conflated it with autonomous decision-making or task automation. This misunderstanding could make it harder for organizations to scope deployments that deliver tangible results.
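For readers unsure what "multi-agent" adds over plain task automation, a toy sketch of the coordination pattern may help. All names here are hypothetical, and real agentic frameworks delegate to LLM-backed agents rather than plain functions; the point is only the shape of the collaboration:

```python
# Toy multi-agent pattern: a planner decomposes a goal and delegates
# subtasks to specialist agents. This coordination layer, not mere
# autonomy, is what distinguishes a multi-agent system from a single
# autonomous task runner.

def research_agent(task: str) -> str:
    # Specialist agent: gathers information for a subtask.
    return f"findings on '{task}'"

def writer_agent(task: str, context: str) -> str:
    # Specialist agent: produces output from another agent's results.
    return f"draft of '{task}' using {context}"

def planner(goal: str) -> str:
    # Orchestrator: routes the goal through the specialists in sequence.
    findings = research_agent(goal)
    return writer_agent(goal, findings)

print(planner("quarterly sales summary"))
```

A single-agent automation would collapse all of this into one loop; the delegation between distinct agents is the feature only 13% of respondents named.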
Policy, Governance, and Bounded Scope: A Recipe for Success?
Fisher emphasizes policy and governance as starting points for successful AI implementation. The customer examples highlight another critical factor: bounded scope. The chemicals distributor, for instance, didn't attempt enterprise-wide transformation but connected an AI assistant to SharePoint and existing Qlik data in a matter of weeks.
After three years of research revealing persistent implementation challenges, these examples offer a glimmer of hope for organizations willing to take a more nuanced and targeted approach to AI adoption.
So, what's your take? Do you think starting small and defining success within bounded scopes is the key to unlocking AI's potential? Or do you believe a more transformative, enterprise-wide approach is necessary? Share your thoughts in the comments below!