Mark Medum Bundegaard, Chief Product Officer at Partisia – Interview Series

Mark Medum Bundegaard, Chief Product Officer at Partisia, is a technology and product leader specializing in privacy-enhancing technologies, blockchain architecture, and secure machine learning. He leads Partisia’s product vision and roadmap, working across engineering, design, and business teams to deliver platforms built on secure multiparty computation and quantum-secure infrastructure. His background spans senior roles in banking, telecom, media, and startups, with deep expertise in cloud-native systems, distributed architecture, and applied AI, alongside experience managing large-scale operations such as Denmark’s Copenhell music festival.
Partisia is a cryptography-focused software company developing infrastructure for privacy-preserving data collaboration. Its platform enables organizations to compute on encrypted data using secure multiparty computation combined with blockchain orchestration, allowing insights to be generated without exposing sensitive information. By supporting confidential computing, decentralized identity, and privacy-first machine learning, Partisia helps enterprises unlock data value while maintaining compliance, security, and data sovereignty across industries including finance, healthcare, and digital infrastructure.
The recent joint report from Europol frames post-quantum cryptography (PQC)—encryption designed to remain secure even once quantum computers can break today’s standards—as a long-term, risk-based transition rather than a one-time algorithm swap. From what you’re seeing across financial institutions, where is the biggest gap between recognizing quantum risk and actually being able to act on it?
The biggest gap today is visibility. Most financial institutions do not yet have a complete “cryptographic bill of materials” — meaning a clear inventory of where cryptography is used, which algorithms protect which assets, and how long those assets need to remain secure.
Without that foundation, it’s difficult to prioritise migration. Even once visibility improves, replacing cryptography in regulated, legacy systems is a slow process involving software, hardware, certification, and operational changes. The challenge is not recognising the risk, but translating that awareness into an actionable migration roadmap.
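To make the idea of a cryptographic bill of materials concrete, here is a minimal sketch of what an early inventory pass can look like: it walks a directory of PEM certificates and records the signature algorithm, public-key type, and expiry of each one. The directory path and the use of Python's `cryptography` package are assumptions for illustration, not a description of Partisia's or any institution's tooling.

```python
# Sketch: build a minimal certificate inventory as raw material for a
# cryptographic bill of materials. Assumes a directory of PEM-encoded
# certificates; the path is hypothetical.
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

CERT_DIR = Path("./certs")  # hypothetical location of exported certificates

def describe(cert: x509.Certificate) -> dict:
    """Summarise the cryptographically relevant facts about one certificate."""
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        key_desc = f"RSA-{key.key_size}"
    elif isinstance(key, ec.EllipticCurvePublicKey):
        key_desc = f"EC-{key.curve.name}"
    else:
        key_desc = type(key).__name__
    return {
        "subject": cert.subject.rfc4514_string(),
        "signature_algorithm": cert.signature_algorithm_oid._name,
        "public_key": key_desc,
        "not_after": cert.not_valid_after.isoformat(),
    }

inventory = []
for pem in CERT_DIR.glob("*.pem"):
    cert = x509.load_pem_x509_certificate(pem.read_bytes())
    inventory.append(describe(cert))

# Classical RSA/EC entries are the ones exposed to quantum risk.
for entry in inventory:
    print(entry)
```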
The report notes that many organizations lack a complete inventory of their cryptography, meaning they don’t fully know where encryption is used across applications, data flows, and infrastructure. Why is this visibility still so limited in large financial institutions, and what are the most practical ways to fix it?
Modern systems are composed of many interconnected pieces of software and sub-systems that were developed independently, at different times, and by different actors. It is often hard simply to get a complete overview of which software is in use, let alone which algorithms it relies on. There is probably no “one-size-fits-all” solution.
However, companies that already have an overview of their software are probably better off. The same goes for hardware: most banks still rely on encryption keys provided through hardware security modules (HSMs), which creates a dependency chain from the application all the way down to the running hardware. When the cloud boom reached the financial industry, it made many things easier, but gaining the right level of visibility into services became harder. So, as part of the migration to PQC standards, a lot of institutions are now working to build a full overview of their keys.
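As one hypothetical illustration of that “full overview of keys”, the sketch below lists keys held in a cloud KMS and flags the classical asymmetric ones for later attention. AWS KMS via boto3 is used here purely as an example key store; HSM-backed and on-premise estates would need their own enumeration.

```python
# Sketch: enumerate keys in a cloud KMS and flag the classical asymmetric ones.
# AWS KMS via boto3 is an illustrative assumption, not a statement about any
# particular institution's setup.
import boto3

kms = boto3.client("kms")

quantum_exposed = []
for page in kms.get_paginator("list_keys").paginate():
    for key in page["Keys"]:
        meta = kms.describe_key(KeyId=key["KeyId"])["KeyMetadata"]
        spec = meta.get("KeySpec", "UNKNOWN")
        # RSA and elliptic-curve key specs rest on problems a large quantum
        # computer could solve; symmetric AES keys are comparatively safer.
        if spec.startswith("RSA") or spec.startswith("ECC"):
            quantum_exposed.append(
                {"key_id": meta["KeyId"], "spec": spec, "usage": meta["KeyUsage"]}
            )

for entry in quantum_exposed:
    print(entry)
```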
You’ve pointed out that PQC readiness often stalls due to ownership and governance issues rather than cryptographic choice itself. How do unclear responsibilities between security, IT, and product teams turn into real long-term security and compliance risk?
Cryptography sits at the intersection of security, infrastructure, and application development. When ownership is unclear, migration efforts stall — not because the algorithms are unknown, but because no single team has the mandate or visibility to drive the transition.
Over time, this creates systemic risk. Systems remain dependent on legacy cryptography longer than intended, increasing exposure and making eventual migration more complex, costly, and disruptive.
Crypto-agility—the ability to swap or upgrade cryptographic algorithms without rebuilding entire systems—is frequently cited as essential for PQC readiness. How does the lack of crypto-agility increase lock-in, technical debt, and future upgrade costs for financial institutions?
Crypto-agility determines whether cryptographic components can be replaced without redesigning entire systems. Where cryptography is deeply embedded into application logic or infrastructure, replacing it becomes expensive and operationally risky.
Institutions that build agility now will be able to transition incrementally. Those that do not may face large-scale, disruptive migrations later, particularly as standards and regulatory expectations evolve.
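One common way to build that agility is to hide the concrete algorithm behind a narrow interface and select the implementation from configuration, so a future post-quantum scheme can be registered without touching calling code. The sketch below is a generic illustration of that pattern; the names and the Ed25519 choice are assumptions for the example, not Partisia's design.

```python
# Sketch of a crypto-agile signing layer: callers depend on the Signer
# interface and a configuration value, never on a specific algorithm.
from typing import Protocol

from cryptography.hazmat.primitives.asymmetric import ed25519


class Signer(Protocol):
    algorithm: str

    def sign(self, message: bytes) -> bytes: ...
    def verify(self, message: bytes, signature: bytes) -> bool: ...


class Ed25519Signer:
    """Classical signer used today; swappable once a PQC scheme is adopted."""

    algorithm = "ed25519"

    def __init__(self) -> None:
        self._key = ed25519.Ed25519PrivateKey.generate()

    def sign(self, message: bytes) -> bytes:
        return self._key.sign(message)

    def verify(self, message: bytes, signature: bytes) -> bool:
        try:
            self._key.public_key().verify(signature, message)
            return True
        except Exception:
            return False


# Registry keyed by configuration: adding e.g. an ML-DSA (Dilithium) signer
# later means registering one more entry, not rewriting callers.
SIGNERS = {"ed25519": Ed25519Signer}

def get_signer(config: dict) -> Signer:
    return SIGNERS[config["signature_algorithm"]]()


signer = get_signer({"signature_algorithm": "ed25519"})
sig = signer.sign(b"payment instruction")
assert signer.verify(b"payment instruction", sig)
```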
From your vantage point at Partisia, operating at the cryptographic and infrastructure layer of regulated systems, which types of legacy platforms or architectural patterns are proving hardest to prepare for PQC, and why?
Highly regulated systems are often the hardest to update. This is intentional — these systems are designed for stability and assurance, not rapid change.
Transitioning them to PQC requires more than updating algorithms. It involves software updates, hardware support, recertification, and operational validation. These constraints make early planning essential, as migration timelines are measured in years, not months.
A core recommendation in the report is that organizations assess how long different data assets need to remain secure—for example, whether sensitive financial or personal data must stay confidential for years or decades. How should institutions realistically evaluate cryptographic “shelf life” when planning PQC migration?
Institutions need to assess how long specific data must remain confidential and what the consequences would be if it were exposed in the future.
Some data, such as transaction records or personal financial information, may need protection for decades. This makes it vulnerable to “harvest now, decrypt later” scenarios, where encrypted data is collected today and decrypted once quantum capabilities mature. Understanding these timelines is essential for prioritising migration.
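A common way to make those timelines concrete is Mosca's inequality: if the number of years the data must stay confidential plus the years the migration will take exceeds the estimated years until a cryptographically relevant quantum computer exists, data encrypted today is already exposed to “harvest now, decrypt later”. The sketch below applies that check to a few illustrative asset classes; every number in it is a placeholder assumption, not a forecast.

```python
# Sketch: Mosca-style check for "harvest now, decrypt later" exposure.
# x = years the data must remain confidential
# y = years the migration is expected to take
# z = assumed years until a cryptographically relevant quantum computer
# If x + y > z, data encrypted today can be harvested now and decrypted later.

ASSUMED_YEARS_TO_QUANTUM = 12  # placeholder assumption, not a forecast

assets = [
    {"name": "card transaction records", "confidential_for": 10, "migration": 4},
    {"name": "mortgage contracts",        "confidential_for": 30, "migration": 5},
    {"name": "intraday market data",      "confidential_for": 1,  "migration": 3},
]

for asset in assets:
    exposure = asset["confidential_for"] + asset["migration"] - ASSUMED_YEARS_TO_QUANTUM
    verdict = "MIGRATE FIRST" if exposure > 0 else "lower priority"
    print(f"{asset['name']}: {verdict} (margin {-exposure} years)")
```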
Many teams assume PQC work begins only once final standards are fully settled, yet the report suggests preparation must happen earlier. Over the next 12–24 months, what concrete actions should security and architecture teams prioritize even before large-scale migrations begin?
The most important step is establishing a complete inventory of cryptographic usage — understanding where cryptography is used, how it is implemented, and which systems depend on it.
This visibility allows institutions to identify high-risk systems and begin designing crypto-agile architectures that support future algorithm transitions without large-scale disruption.
There’s still a belief that delaying PQC planning saves money until quantum threats become more immediate. Based on what you’re seeing in practice, how does postponement actually increase future cost, operational complexity, and risk exposure?
Good question. Delaying preparation does not eliminate migration work — it compresses it into a shorter timeframe. Systems deployed today may remain in operation for decades, meaning decisions made now determine future risk exposure.
Early preparation allows institutions to incorporate crypto-agility into normal upgrade cycles. Waiting too long may require costly, urgent migrations under regulatory or threat pressure.
You’ve worked across cloud-native systems, blockchain infrastructure, and privacy-preserving technologies such as secure multiparty computation, which allows data to be processed without being revealed. How does PQC planning differ in these distributed or privacy-focused environments compared to traditional centralized financial systems?
At a fundamental level, the challenge is the same: identifying where cryptography is used and ensuring it can be replaced safely. Whether a system is centralized or distributed, its security ultimately depends on the strength and lifecycle of its cryptographic primitives.
The main difference is architectural visibility. Distributed and cryptography-driven systems often have clearer boundaries around key usage and verification, which can make dependencies easier to identify. But the core task — gaining visibility, planning migration, and ensuring cryptographic agility — remains the same across both environments.
Looking ahead, do you expect PQC readiness to become a regulatory compliance baseline, or could it evolve into a competitive and trust-based differentiator for financial institutions—and what early signals should investors and technology leaders be watching?
In the near term, PQC will likely emerge as part of evolving security and regulatory expectations, rather than a standalone compliance requirement.
Over time, institutions that demonstrate strong cryptographic resilience and long-term data protection will have a trust advantage. Investors and regulators are increasingly attentive to infrastructure risk, and cryptographic preparedness is becoming part of that broader resilience conversation.
Thank you for the great interview. Readers who wish to learn more should visit Partisia.












