What AI and Quantum Are Forcing Us to Rethink About Data
- Christina Richmond
- Jan 27
How AI, post-quantum risk, and geopolitics are reshaping data’s role
Richmond Advisory Group recently spoke with Jonathan Nguyen-Duy, CTO of Arqit, about the accelerating convergence of data management, post-quantum cryptography (PQC), AI, and geopolitical pressure. The conversation explored how data is shifting from a passive byproduct of digital systems into a strategic, regulated asset—one that underpins predictive security models, digital sovereignty, and trust in AI-driven outcomes.
Why this matters: as organizations race to deploy AI and prepare for a post-quantum world, their ability to prove the origin, integrity, and trustworthiness of data will increasingly determine regulatory compliance, competitive advantage, and long-term resilience.
The Future Importance of Data Management
For decades, data was treated as a byproduct of digital systems—something to be stored cheaply, processed periodically, and protected primarily at rest. That era is ending.
Data is rapidly evolving into a high-value strategic asset that shapes regulatory posture, competitive advantage, and even geopolitical alignment. As artificial intelligence and quantum technologies mature, the way organizations manage, secure, and authenticate data will determine not only operational success, but trust itself.
This shift is not incremental. It represents a structural change in how data is created, inspected, governed, and monetized across borders.
From Data Lakes to Unified Data Posture Management
One of the most important changes underway is the move toward unified data posture management. Historically, data programs were fragmented: classification lived in one tool, compliance mapping in another, data loss prevention (DLP) in yet another, and encryption policies somewhere else entirely. This siloed approach made sense when data moved slowly and jurisdictional boundaries were loosely enforced.
That world no longer exists.

Today’s data flows continuously across SaaS platforms, clouds, APIs, devices, and partners. It crosses jurisdictions in milliseconds. In response, organizations are increasingly demanding a single, coherent view of their data posture—one that unifies:
- Data classification (what the data is)
- Regulatory context (which rules apply, and where)
- Risk exposure (how it could be misused or exfiltrated)
- Cryptographic state (how it is protected, now and in the future)
Crucially, this unified posture cannot rely on static inventories or periodic scans. The industry is moving away from traditional “data lake” models—where data is dumped, stored, and analyzed after the fact—toward real-time inspection of data in transit.
"Real-time inspection of data in transit, not in static data lakes."
Enterprises increasingly want to understand data as it moves: whether it contains regulated information, whether it violates policy, whether it is properly encrypted, and whether it can be trusted. Risk, data loss prevention (DLP), and cryptographic controls are converging around live data flows rather than post-ingestion analysis. In effect, data governance is shifting from a storage problem to a motion problem.
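As an illustration of what inspection in motion might look like, here is a minimal Python sketch that classifies a payload, checks its cryptographic state, and renders a policy verdict as the data flows rather than after ingestion. The detection patterns, regime mapping, and policy rule are all invented for this example; real deployments use far richer detectors and rule engines:

```python
import re
from dataclasses import dataclass

# Hypothetical classification rules; real detectors are far more sophisticated.
PATTERNS = {
    "pii_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "pci_card": re.compile(r"\b\d{13,16}\b"),
}

@dataclass
class Verdict:
    labels: list      # what the data is
    regimes: list     # which rules apply, and where
    encrypted: bool   # cryptographic state
    allow: bool       # live policy decision

def inspect_in_transit(payload: bytes, encrypted: bool, dest_region: str) -> Verdict:
    """Classify and policy-check a payload as it moves, not after ingestion."""
    text = payload.decode("utf-8", errors="ignore")
    labels = [name for name, rx in PATTERNS.items() if rx.search(text)]
    # Illustrative mapping of labels to regulatory regimes.
    regimes = sorted({"GDPR"} if labels and dest_region == "EU" else set())
    # Block regulated data leaving unencrypted: DLP, risk, and crypto converge here.
    allow = encrypted or not labels
    return Verdict(labels, regimes, encrypted, allow)

v = inspect_in_transit(b"ssn 123-45-6789", encrypted=False, dest_region="EU")
print(v.labels, v.regimes, v.allow)  # ['pii_ssn'] ['GDPR'] False
```

The point of the sketch is the shape of the decision, not the rules: classification, regulatory context, risk, and cryptographic state are evaluated together, per flow, in real time.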
Digital Sovereignty Meets Digital Trust
As data becomes more valuable, it is also becoming more political.
Different regions are articulating different philosophies. In the U.S., the conversation tends to center on digital trust—ensuring that systems, data, and outcomes can be relied upon. Elsewhere, particularly in Europe and across emerging economic blocs, the emphasis is increasingly on digital sovereignty—who controls data, where it originates, and which laws apply.

This divergence matters because data is no longer neutral. Its origin, authenticity, and handling increasingly determine its economic value. Just as physical goods are taxed, regulated, and protected based on provenance, data is moving in the same direction. Organizations should expect data to be taxed, restricted, or privileged based on where it comes from, how it was created, and whether its authenticity can be proven.
"Taxation of data by provenance is the future."
In this environment, managing data is no longer just an IT function. It becomes a core element of corporate strategy, compliance, and even diplomacy.
The Shift from Reactive to Predictive Security
This transformation in data management parallels a deeper shift in cybersecurity itself: the move from reactive defense to predictive security models.
As industry leaders like Nguyen-Duy have noted, cybersecurity has historically been reactive. Standards were often created only after attacks were observed in the wild. Controls evolved in response to failure.
Post-quantum cryptography (PQC) represents a rare departure from that pattern. For one of the few times in modern security history, governments and enterprises are acting before widespread exploitation occurs. The scale of investment planned for cryptographic discovery, inventory, and migration reflects an acknowledgment that waiting is no longer an option.
This predictive mindset extends beyond cryptography. As AI and quantum capabilities converge, entire sectors may shift from reactive to predictive outcomes. Healthcare is a common example: instead of diagnosing disease after symptoms appear, future systems could identify risk years in advance with high confidence, enabling preventive intervention rather than invasive treatment.
The same principle applies to security operations. Organizations that can anticipate risk—based on trusted data, advanced analytics, and real-time insight—will outpace those still responding to alerts after damage is done.
Automation and the Coming Disruption of MSSPs
These changes place enormous pressure on traditional managed security service providers. Many MSSPs are still built on labor-intensive models: humans reviewing alerts, triaging incidents, and responding manually. That model does not scale in a world of machine-speed data flows and AI-driven threats.
Over the next few years, MSSPs that fail to adopt automation and machine-to-machine security will face existential risk. AI-driven systems will increasingly handle detection, policy enforcement, key management, and response without human intervention. Human expertise will remain critical—but it will move up the stack, focusing on strategy, oversight, and exception handling rather than routine operations.
Data management sits at the center of this shift. Without high-quality, trustworthy, and well-governed data, automation collapses. Bad data does not just reduce efficiency; it amplifies risk at machine speed.
Why Quantum Accelerates AI
Quantum technology is often described as the next industrial revolution not because it replaces classical computing, but because it expands what is possible.

Classical systems operate in binary—zeros and ones—forcing complex problems to be broken into sequential steps. Quantum systems, through superposition, can model many possible outcomes simultaneously. Even before fully universal quantum computers arrive, quantum-inspired architectures and specialized accelerators aligned with machine learning are beginning to reshape advanced analytics.
For AI, this means deeper inference. Quantum-enhanced approaches promise better understanding of unstructured data—text, images, signals, and patterns that are difficult for classical systems to contextualize fully. They enable richer correlations, faster optimization, and more nuanced predictions.
But none of this matters without trust.
For quantum-accelerated AI to deliver value, the underlying data must be authentic. Volume and velocity are not enough. Veracity—confidence in origin, integrity, and meaning—becomes the gating factor. In near real time, systems must be able to answer not just “what does the data say?” but “should I believe it?”
PQC, Data Provenance, and the Value of Authenticity
This is where PQC intersects directly with data management. PQC is not only about future-proofing encryption; it is foundational to data provenance—the ability to verify where data came from, how it was handled, and whether it has been altered.
A useful analogy is geographic authenticity: products such as Champagne derive much of their value from provable origin. Data is heading in the same direction. Information that can prove its source, integrity, and chain of custody will command higher value and lower risk. Data that cannot will be discounted, restricted, or rejected entirely.
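One way to picture provenance as a first-class property is a hash-chained custody log, where each handling record commits to everything before it. The sketch below uses SHA-256 and an HMAC purely for illustration; a quantum-safe deployment would substitute a PQC signature scheme (e.g. ML-DSA) for the shared-key MAC, and the record fields are invented:

```python
import hashlib
import hmac
import json

KEY = b"demo-key"  # stand-in; a real system would use PQC signatures, not a shared MAC key

def add_record(chain: list, actor: str, action: str) -> list:
    """Append a custody record that commits to the entire prior chain."""
    prev = chain[-1]["digest"] if chain else "genesis"
    body = json.dumps({"actor": actor, "action": action, "prev": prev}, sort_keys=True)
    digest = hashlib.sha256(body.encode()).hexdigest()
    tag = hmac.new(KEY, digest.encode(), hashlib.sha256).hexdigest()
    return chain + [{"body": body, "digest": digest, "tag": tag}]

def verify(chain: list) -> bool:
    """Recompute every link; tampering with any record breaks the chain."""
    prev = "genesis"
    for rec in chain:
        if json.loads(rec["body"])["prev"] != prev:
            return False
        digest = hashlib.sha256(rec["body"].encode()).hexdigest()
        if digest != rec["digest"]:
            return False
        expected = hmac.new(KEY, digest.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(rec["tag"], expected):
            return False
        prev = digest
    return True

chain = add_record([], "sensor-17", "created")
chain = add_record(chain, "gateway-eu", "encrypted")
print(verify(chain))  # True
```

Altering any field in any record, or reordering records, causes verification to fail, which is the property that lets downstream consumers discount or reject data whose custody cannot be proven.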
Achieving this requires moving beyond traditional security models.
Toward “Attested” Zero Trust, or AZT
Many current architectures claim to implement zero trust, yet still implicitly trust server-side environments and data once it reaches a cloud service. This assumption becomes increasingly fragile as workloads move across shared infrastructure and geopolitical boundaries.

By combining confidential computing* with PQC, organizations can move toward attested zero trust. In this model, cryptographic attestation certificates verify not only user identity, but also the integrity of the application environment and the data being processed. Systems can prove—cryptographically—that they are running approved code on trusted hardware, and that data has not been tampered with.
When paired with continuously rotated, quantum-safe symmetric keys, this architecture delivers end-to-end integrity for data both in transit and at rest. It enables enterprises to move even highly sensitive workloads into public cloud environments with demonstrable assurance that the data feeding AI models is authentic, uncompromised, and policy-compliant.
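As a schematic of that control flow, the toy Python sketch below gates key release on a verified attestation claim and rotates symmetric keys on short epochs. Everything here is a stand-in: real attestation involves a hardware-rooted signature chain over a TEE quote, not a simple measurement allow-list, and real key rotation is driven by the key-management plane rather than a local timer:

```python
import hashlib
import os
import time

# Hypothetical allow-list of approved code measurements (what a TEE quote would report).
APPROVED_MEASUREMENTS = {hashlib.sha256(b"approved-app-v1").hexdigest()}

def verify_attestation(quote: dict) -> bool:
    """Release keys only to environments that prove approved code on trusted hardware.
    This checks only a reported measurement, as a stand-in for real quote verification."""
    return quote.get("measurement") in APPROVED_MEASUREMENTS

class RotatingKeyStore:
    """Continuously rotated symmetric keys: short-lived, never reused past their epoch."""

    def __init__(self, epoch_seconds: float = 1.0):
        self.epoch_seconds = epoch_seconds
        self._rotate()

    def _rotate(self):
        self.key = os.urandom(32)  # fresh 256-bit symmetric key each epoch
        self.expires = time.time() + self.epoch_seconds

    def key_for(self, quote: dict) -> bytes:
        if not verify_attestation(quote):
            raise PermissionError("attestation failed: key withheld")
        if time.time() >= self.expires:
            self._rotate()
        return self.key

store = RotatingKeyStore()
good = {"measurement": hashlib.sha256(b"approved-app-v1").hexdigest()}
print(len(store.key_for(good)))  # 32
```

The design choice the sketch highlights is ordering: attestation is verified before any key material is released, so an untrusted environment never sees a usable key, and short epochs bound the blast radius of any single key.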
This is not an abstract future vision. It is rapidly becoming a prerequisite for regulated industries and globally distributed organizations.
The Interoperability Challenge Ahead
One final challenge looms large: interoperability.
As regions adopt different post-quantum standards and regulatory frameworks, organizations will need to operate across multiple cryptographic regimes simultaneously. Maintaining data provenance, trust, and compliance in this fragmented landscape will require sophisticated policy orchestration and adaptive architectures.
Unified data posture management becomes even more critical in this context. Enterprises must be able to map data flows, cryptographic controls, and regulatory requirements dynamically—without breaking operations or sacrificing trust.
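To make "operating across multiple cryptographic regimes" concrete, a minimal sketch might map jurisdictions to acceptable key-establishment suites and negotiate a shared suite per data flow. The jurisdiction-to-suite mapping below is entirely hypothetical; only the algorithm names (ML-KEM and FrodoKEM parameter sets) refer to real schemes:

```python
# Hypothetical per-jurisdiction cryptographic requirements; actual regimes and
# approved algorithm lists will differ as post-quantum standards settle.
REGIME_SUITES = {
    "US": {"ML-KEM-768", "ML-KEM-1024"},
    "EU": {"ML-KEM-1024", "FrodoKEM-976"},
    "APAC": {"FrodoKEM-1344"},
}

def negotiate_suite(src: str, dst: str) -> str:
    """Pick a key-establishment suite acceptable in both jurisdictions."""
    shared = REGIME_SUITES[src] & REGIME_SUITES[dst]
    if not shared:
        raise ValueError(f"no common cryptographic regime for {src}->{dst}")
    # Lexicographic tie-break, purely illustrative; real policy would rank by strength.
    return max(shared)

print(negotiate_suite("US", "EU"))  # ML-KEM-1024
```

A flow with no shared suite fails loudly, which is the orchestration problem in miniature: the mapping must be kept current per jurisdiction, and every cross-border flow must be resolvable against it without breaking operations.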
Data as the Operating System of the Future

The future importance of data management cannot be overstated. Data is no longer just fuel for applications; it is the operating system for trust, automation, and intelligence in a quantum-enabled world.
Organizations that treat data as a strategic asset—governed in real time, authenticated end to end, and aligned with geopolitical reality—will be positioned to lead. Those that continue to rely on static inventories, reactive security, and implicit trust will struggle to keep up.
In the next decade, competitive advantage will not belong to those with the most data, but to those who can prove their data is real, trustworthy, and fit for predictive outcomes.
*Confidential computing is a security model that protects data while it is being processed by isolating workloads in hardware-based trusted execution environments (TEEs), ensuring that data remains encrypted and inaccessible to the operating system, cloud provider, or other privileged software—and can be cryptographically attested as trustworthy.
