A Short History of Tokens
It’s been over 10 years since Bitcoin first introduced blockchain technology to the world. In that time, the list of potential use cases for distributed ledgers has expanded rapidly, from digital currencies to supply chains to identity management. At their core, however, many of these use cases share a similar structure: they enable users to hold and transfer digital assets on a peer-to-peer basis. Put simply, we can now trade and track digital assets without needing a trusted central authority to manage the process.
This evolution of the space naturally led to the invention of “tokens” – digital assets on a blockchain that are ownable and transferable between individuals. Tokens are split into two main categories: those that represent a natively digital asset, and those that represent an underlying real-world asset. Leveraging this new paradigm, hundreds of thousands of different tokens have already been created on Ethereum alone, with a combined market cap of over $15 billion at the time of writing.
One of the most promising applications of tokens is the representation of real-world securities on-chain, which allows traditionally illiquid assets like commercial real estate to be fractionalized and transferred peer-to-peer. This process, known as “tokenization”, has gained significant mindshare from both legacy institutions and new start-ups, due to its potential to alleviate many existing pain points within the capital markets.
While blockchain can make it easier to transfer ownership in a technical sense, security tokens are still subject to the same laws and regulations as traditional securities. Ensuring security tokens are compliant with regulation is therefore critical to any potential tokenization, and has been a barrier to adoption to date. As seen in the chart below, regulatory uncertainty is widely considered the largest barrier to blockchain adoption.
Numerous projects have emerged in the blockchain space, each designing a protocol that attempts to simplify and standardize how security tokens are regulated, traded, and managed. On Ethereum alone, projects that have published standards tackling this problem include Securitize, Harbor, Polymath, and more. Ultimately, however, without modifications to how these protocols are currently designed, investors and exchanges will continue to experience significant friction when buying and selling tokenized securities. Why is this? Interoperability.
Interoperability is Crucial
Interoperability is one of the most significant benefits of tokenization. It allows an entire ecosystem of capital markets applications and products to integrate with one another because they share common software standards. However, to enable interoperability at the application and product level, it needs to begin at the lowest level with the tokens themselves. In the security token space, interoperability is essential for two key parties: exchanges and investors.
As an exchange, you want to be able to authorize investors for the purchase of any security token that they are eligible to buy – no matter the company that created the token. This means not having a bespoke integration with each security token, but a simple and generic integration that is uniform across all security tokens.
As an investor, you want the onboarding process to be as simple and frictionless as possible. Currently, when an investor wants to purchase shares from multiple places, they have to provide their personal information time and time again in a process called Know Your Customer (“KYC”). Blockchain has the potential to transform this process by storing this information immutably on-chain, where it can then be referenced by all security tokens. This would mean not having to repeatedly provide the same personal information every time you wish to purchase a new token; instead, only supplementary or updated information would be required after the initial registration. However, this process will only be possible if interoperability between security tokens is designed into the standards that govern the system.
Three of Ethereum’s leading security token protocols were published by Securitize, Harbor, and Polymath. All three are built upon Ethereum’s ERC-20 token standard, which they extend to enforce compliance in the trading of the security token. This is achieved by querying a second contract on the legality of each trade at the time that it happens.
Whilst named differently in the protocols, the use of a second contract is consistent throughout all three, achieving the same result: preventing non-compliant trades. This second ‘Regulator’ contract is kept up to date with users’ KYC and accreditation information by off-chain services that are authorized to do so – for example an exchange, or the token’s issuer.
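The shared pattern described above can be sketched in Python as an analogy (the real protocols are Solidity contracts, and the names below are invented for illustration): the token delegates every transfer decision to a separate Regulator, which only authorized off-chain parties may update.

```python
# Illustrative sketch only: not any protocol's actual code. The token defers
# compliance to a separate "Regulator" registry, mirroring the on-chain design.

class Regulator:
    """Holds KYC/accreditation flags, updated by authorized off-chain services."""
    def __init__(self):
        self.verified = set()    # addresses that have passed KYC
        self.authorized = set()  # parties allowed to update the registry

    def set_verified(self, caller, investor):
        if caller not in self.authorized:
            raise PermissionError("caller is not an authorized KYC provider")
        self.verified.add(investor)

    def check_transfer(self, sender, recipient):
        # Real protocols encode richer rules (holding periods, holder caps);
        # here legality is simply "both parties are verified".
        return sender in self.verified and recipient in self.verified

class SecurityToken:
    """ERC-20-like token that queries its Regulator before every transfer."""
    def __init__(self, regulator):
        self.regulator = regulator
        self.balances = {}

    def transfer(self, sender, recipient, amount):
        if not self.regulator.check_transfer(sender, recipient):
            raise ValueError("transfer rejected: not compliant")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
```

The key design point is that the compliance rules live outside the token itself, so they can be updated, or shared across tokens, without redeploying the token contract.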
Although these three components may seem like everything you need to regulate a security token (and in the simplest form, they are), it is how the components are programmed that really determines interoperability. Sadly, the protocols lack interoperability in two key areas, which will continue to cause friction and slow adoption of this technology:
- How do authorized parties update on-chain information about users?
Harbor declares in its whitepaper that it will, for the time being, be the only party authorized to update user information on-chain. The centralization of this role means that exchanges cannot update any data referenced by the Regulator. They therefore will not be able to approve new recipients of the token, preventing tokens from being easily traded outside of the Harbor platform.
Securitize have already implemented a system whereby multiple parties can be authorized, meaning investors can register their compliance information in multiple places and are not required to go through Securitize themselves. The on-chain data is then updated directly by the authorized party and can be viewed by all of Securitize’s tokens. Furthermore, to prevent investors from having to provide information multiple times, Securitize have designed an API that allows authorized parties to access the private investor information stored off-chain, enabling them to easily determine whether an individual is compliant or whether more information is needed.
Polymath has a native digital utility token called POLY that is required throughout their platform to perform various tasks, including getting an authorized party to update your on-chain data. For an individual to complete KYC, they must first purchase POLY tokens, for which there is no liquid fiat market. Instead, the individual must purchase another cryptocurrency such as Ethereum’s “ether” (ETH) using fiat, and then exchange this for POLY. The tokens can then be used on Polymath’s KYC marketplace to make a bid to a KYC provider. If the KYC provider accepts the bid, they are paid in POLY tokens to perform the KYC check for the individual. This process is clearly a significant onboarding friction on the Polymath platform, and makes the process more complex than necessary.
- How is this information about users then stored and accessed on-chain?
From looking at the whitepaper and the smart contracts on GitHub, it is technically possible for many of Harbor’s tokens to share one common Regulator contract, and with it one common source of user data; however, this is unlikely due to the differences in regulation between tokens. With no live Harbor tokens on Ethereum, it remains unclear whether this is the intention, or whether each token will be deployed with its own Regulator.
Securitize’s protocol is designed such that the Regulator contract queries a third smart contract which stores user information. This enables each token to have unique regulations encoded in its own individual Regulator, whilst still sharing a common source of user data in the third contract. As a result, when a user completes KYC for one Securitize token, their information is stored ready for them to buy future tokens.
It’s not explicitly stated in Polymath’s whitepaper whether there is a central source of compliance data stored on-chain that each Regulator interacts with, or whether tokens have their own local source of information. However, based on Polymath’s sample contracts, it appears that each token uses a local source of information, which is not shared between different tokens. While this may have advantages, this setup risks data redundancy and inconsistencies.
Take the following example: Bob has expressed interest in two Polymath security tokens, ABC and DEF, and has been approved as an investor for each of them. This information is sent to the Regulator contract for each of the tokens. A month later, Bob tries to purchase further DEF tokens but it is found that he is no longer accredited. This information is sent to DEF’s Regulator to update Bob’s investor status to be non-accredited. Now, on-chain, there is conflicting information: ABC thinks that Bob is a verified investor, however DEF disagrees. It is easy to see that having a central source of information would prevent such discrepancies from occurring.
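Bob’s situation can be made concrete with a toy Python sketch (purely illustrative; registry structures are invented), contrasting per-token registries with a single shared one:

```python
# Per-token registries: each token's Regulator keeps its own copy of Bob's status.
abc_regulator = {"bob": {"accredited": True}}
def_regulator = {"bob": {"accredited": True}}

# A month later, only DEF learns that Bob lost his accreditation.
def_regulator["bob"]["accredited"] = False

# The chain now holds contradictory facts about the same investor.
assert abc_regulator["bob"]["accredited"] != def_regulator["bob"]["accredited"]

# With one shared registry, a single update is seen by every token.
shared_registry = {"bob": {"accredited": True}}
shared_registry["bob"]["accredited"] = False
abc_view = shared_registry["bob"]["accredited"]
def_view = shared_registry["bob"]["accredited"]
assert abc_view == def_view  # both tokens agree: Bob is not accredited
```

The shared registry removes the inconsistency by construction: there is only one record to update.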
Interoperability of the Protocols
As discussed previously, there are two main parties involved in the issuance and exchange of security tokens to whom interoperability will matter greatly: exchanges and investors. Both of these parties desire a smooth experience when interacting with different security tokens. So, if using the protocols as-is, let’s take a look at how exchanges and users will be affected.
As an exchange, integrating these protocols for the purposes of transfer is easy: all of the tokens utilize the ERC-20 token standard, providing a uniform interface for transfers, approvals, and balance checks. However, deeper integration with the compliance aspect of each protocol becomes far more complex. You’ll remember that it’s not currently possible for a trusted party to become authorized on Harbor’s protocol – they will instead have to direct users to Harbor to complete KYC. To integrate with Securitize’s protocol, the trusted party must be authorized by Securitize, which then allows them to access investor KYC data through the off-chain API and to update the information held in the on-chain data store.
To integrate with Polymath’s protocol is likely the most complex. The trusted party must register as a KYC provider on Polymath’s KYC marketplace and set themselves up to receive bids in POLY tokens in return for providing KYC services. In providing KYC services to investors, the trusted party must then organize a way to ensure that the duplicative on-chain data stored about a user in each security’s Regulator does not become inconsistent.
Not only do the protocols have different interfaces that the trusted party must integrate with, each protocol also has a different way to provide error reporting to the exchange. When building an interface it is important to be able to translate any errors that occur into something that is understandable by users. For example, if a user cannot purchase a token this could be for a wide variety of reasons: the security may have a holding period that has not yet been satisfied, or may restrict the maximum number of permissible holders. To be able to communicate these messages to users, the exchange would have to integrate with a different method of error reporting for each protocol.
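A hedged sketch of this integration burden (protocol names and error formats below are invented, not taken from any real protocol): the exchange ends up writing one adapter per protocol just to turn raw rejection reasons into a user-facing message.

```python
# Hypothetical adapter: protocol "A" returns numeric status codes, protocol "B"
# returns revert-reason strings. The exchange must normalize both.

def explain_rejection(protocol, raw_error):
    if protocol == "A":  # numeric status codes (invented)
        codes = {1: "holding period not yet satisfied",
                 2: "maximum number of holders reached"}
        return codes.get(raw_error, "transfer not permitted")
    if protocol == "B":  # revert reason strings (invented)
        return raw_error.replace("ERR_", "").replace("_", " ").lower()
    return "transfer not permitted"

assert explain_rejection("A", 1) == "holding period not yet satisfied"
assert explain_rejection("B", "ERR_HOLDING_PERIOD") == "holding period"
```

A shared error-reporting convention across protocols would collapse this into a single code path.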
The different ways investor onboarding is currently designed across the protocols mean that investors will likely have to provide personal information many times, to different platforms and in different ways. This is caused by the fact that Harbor have not authorized any other parties, and Polymath require investors to bid for KYC processes using POLY tokens. The friction caused by the enforcement of these compliance methods may render investors unwilling or unable to purchase securities they would otherwise purchase.
The scale of this protocol-induced friction on investors may be somewhat alleviated by the manner in which exchanges go about integrating each of the protocols. For example, if an investor chooses to KYC on an exchange to purchase a Polymath token, that exchange, if authorized, could choose to update Securitize’s data storage at the same time. This would mean the investor’s information is on-chain in case it is needed in the future. However, if no changes are made to the current protocol designs, then the process of registering and purchasing securities will remain daunting.
The solution to this problem need not be complex. In fact, it is possible to introduce certain solutions without changing any tokens that are already live on Ethereum. An ideal solution, one that results in minimal friction for both exchanges and investors and prevents the data inconsistencies caused by having many different sources of compliance data, would closely resemble Securitize’s centralized on-chain data store; however, any such set-up must then be adopted on an industry-wide scale.
By having a central source of information on-chain, the risk of data inconsistencies is removed, and investors are able to purchase different securities through just one compliance verification. This central contract would carry out the verification that a transfer is compliant for all security tokens, and the transfer would then continue or revert. An off-chain API accessible to all authorized exchanges means that investor compliance information can be communicated to exchanges, reducing the number of times investors must be asked to provide data. Together, these aspects also massively reduce the amount of integration work required by exchanges.
The introduction of a new system like this clearly causes some complications, and a number of issues would still have to be ironed out, for example in how each exchange becomes authorized: who decides that an exchange should be trusted? Time has to be taken to design a system that allows a consensus to be reached.
The tokenization of securities is still an area that is early in development and adoption, which is in-part due to the complexities of regulatory compliance. While the publication of protocols simplifies the compliance with many of these regulations by enabling them to be enforced in the execution of every transfer, there is still a long way to go before this is a seamless process. Until we have an agreement between protocols on how investor information is stored and updated both on-chain and off-chain, there will remain significant friction throughout the registration and investment processes for all parties involved.
Under Scrutiny: How to Pass Due Diligence as a Blockchain Project – Thought Leaders
Every business is destined to undergo multiple assessments. Regulators granting licenses and permissions, potential partners, investment advisors and investors – each of them has a set of filters that a tech project must pass to be considered viable. The task gets trickier for deep tech startups utilizing blockchain, AI and other cutting-edge technologies.
This article is structured as a list of questions for a startup to check its investment readiness and prepare for a due diligence process, grouped into three broad categories: 1) technical, 2) legal, and 3) business. Starting with generic questions, we dive deeper into industry-specific ones, with particular examples in the technical part to illustrate the nuances and pitfalls a project might face, especially given the high competition in the space.
Is the proposed solution technically possible?
This might sound obvious – but many founders neglect this question while chasing a visionary technological dream, especially in deep tech areas like AI/ML, brain-computer interfaces, biotech, or blockchain. If your project exists only as a concept so far (especially if you’re not a technical founder and will hire externally), make sure that it is possible to develop before you pitch.
If the solution is not technically possible at the moment, how much time and effort is needed for research and development (R&D)? Are these estimates aligned with time and funding limitations, if there are any?
In some cases, a tech team is strong and the idea is very promising, but it might take a full five or ten years to develop and be adopted – like quantum computing for solving enterprise-grade problems in the pharmaceutical industry.
You have to be honest – and realistic – about the timing and expenses. You will certainly get this question. Here you need to distinguish between research and common software development costs: the research stage is inventing algorithms to build something that previously hasn’t been possible due to technological limitations, with uncertain results and timelines. The software development stage is building a well-understood solution, which only requires a certain period of time.
Clearly, investments at the research stage are much less predictable. However, development can also take much longer than the team originally plans, especially when a team overestimates its capacity while trying to impress investors. Make sure yours doesn’t.
In the case of a software product, does the project really need proprietary software and not a white-label solution or SaaS?
Reinventing the wheel might be seductive. However, in some cases, spending resources on the development of a new in-house technical solution can be a waste of time. If you as a startup are not offering a software innovation, it might be easier and cheaper to purchase a ready-made technical component and customize it to your particular business needs.
What are the external dependencies (e.g. libraries)? How is external software maintained?
No software is written entirely by a company’s in-house team. Every project in the world uses multiple external databases and code libraries, often open-source, maintained by global communities of developers or by corporations. The resilience of the project depends on the timely updating of external software for security and efficiency.
If you’re doing an AI project, what is the source of data? Is it sufficient? Is it available?
The viability of AI projects is extremely dependent on data quality. Algorithms may be inefficient when there is not enough data. Also, inherent biases in the data (e.g. racial) will impact the final algorithm. Furthermore, there may be a chicken-and-egg problem if the customers are a source of data and, at the same time, the main value is delivered using AI/ML. If the data is not free, its cost should be weighed against the potential added value over less advanced methods.
If you’re doing an AI project, how is the context-dependence addressed?
Even if there is plenty of data available, it may be gathered in a specific context, often being non-applicable in another. For example, if the network was able to distinguish cats and dogs indoors, it may be unable to do so outdoors.
If you’re doing a blockchain project, why should the database be distributed? In other words, why do you need a blockchain?
Many problems that are claimed to be solved with the blockchain can be solved with a simpler cryptographically protected database with a robust permission management system that can also utilize public-key cryptography if needed.
In the case of the original concept of blockchain, the database is distributed among multiple participants with all of them being able to make an input. This is not always needed. For example, an enterprise may need a database to store and process its internal data, in which case it shouldn’t be distributed. Or it may be a database of a governmental body, to which everyone should have access but only the government should be able to validate input data.
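The point about simpler alternatives can be illustrated with a minimal sketch, assuming the requirement is only tamper evidence under a single operator: a hash-chained append-only log gives that property without any distributed consensus.

```python
# Minimal sketch: a single-operator, hash-chained log. Any modification of a
# past record breaks the chain of digests, so tampering is detectable, with
# no need for a distributed ledger.

import hashlib
import json

class HashChainedLog:
    def __init__(self):
        self.entries = []  # each entry: (record, sha256(prev_hash + record))

    def append(self, record):
        prev_hash = self.entries[-1][1] if self.entries else "0" * 64
        payload = prev_hash + json.dumps(record, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append((record, digest))

    def verify(self):
        prev_hash = "0" * 64
        for record, digest in self.entries:
            payload = prev_hash + json.dumps(record, sort_keys=True)
            if digest != hashlib.sha256(payload.encode()).hexdigest():
                return False
            prev_hash = digest
        return True
```

What this deliberately lacks is exactly what blockchain adds: multiple mutually distrusting writers agreeing on the log. If only one party writes, the simpler structure usually suffices.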
If it makes sense for a database to be distributed, does blockchain have to be public?
Blockchains can broadly be divided into public and private. Public (permissionless) blockchains are ones in which anyone can host a node, thus gaining access to all recorded data and the ability to validate database updates. In private (permissioned) blockchains, only certain participants have access to the data and can validate input.
Public chains significantly reduce control over the business, as the state of the database is now controlled by multiple people scattered across multiple countries. This also means increased regulatory uncertainty, especially in heavily regulated industries or ones of systemic importance. For these reasons, the case for public chains must be really strong. In many cases, a private blockchain is enough to satisfy business requirements. For example, transaction processing requires only financial institutions participating in the blockchain, and sharing medical history data requires only the participation of hospitals.
If you’re doing a blockchain project, what are the incentives of participants to act for the benefit of the system? What are the ways to break these incentives and how are they addressed?
As a blockchain, especially a public one, is maintained by common effort, and both data quality and transaction costs depend on the participants, incentives should be designed carefully to ensure that the system is sustainable.
An example of where this is problematic is the Tezos blockchain, which utilizes the so-called Liquid Proof of Stake (LPoS) consensus algorithm. A consensus algorithm is the way in which validators agree on the new state of the ledger. In LPoS, participants can stake a certain amount of the blockchain’s native token to gain the right either to validate transactions themselves or to delegate to another trusted party, who validates transactions and distributes the reward. Although such algorithms have multiple benefits, a common point of criticism is that the incentive to become a validator oneself is questionable: a participant can delegate to someone else and, because of the competition among potential validators, still receive a significant chunk of the reward, while spending no time or computational resources on network maintenance and governance. This creates a risk of blockchain centralization and various types of attacks.
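The delegation incentive can be illustrated with a toy calculation under assumed numbers (the stake, reward rate, and fee below are invented for illustration, not actual Tezos parameters): when validator fees are competed down, a delegator keeps nearly the whole staking reward without operating anything.

```python
# Toy model of delegated staking rewards. All numbers are assumptions.

def delegator_reward(stake, reward_rate, validator_fee):
    """Reward a delegator keeps after the validator takes its fee."""
    gross = stake * reward_rate
    return gross * (1 - validator_fee)

# With a competitive 5% validator fee, a delegator staking 1000 tokens at a
# 5% reward rate keeps 47.5 of the 50-token gross reward, i.e. 95%,
# while bearing none of the validation work.
reward = delegator_reward(1000, 0.05, 0.05)
assert abs(reward - 47.5) < 1e-9
```

Under these assumptions, running a validator yields only the small fee margin over simply delegating, which is the incentive gap the criticism points at.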
How is the cybersecurity ensured?
Cybersecurity is a primary feature of any IT infrastructure, especially in the eyes of a regulator, whose main concern is protecting customers.
If you’re doing a hardware business, how is the quality of supplies ensured?
While software businesses depend on external libraries, hardware businesses depend on their suppliers for the quality of their solutions.
The next group of questions assesses the legal implications of a project: compliance costs and the limitations arising from legal requirements.
Does the company need licenses to operate legitimately, and which ones?
This point is especially important for heavily regulated industries, such as fintech. Almost any financial services require some kind of licensing, and some of them – such as MiFID II in Europe – can take up to two years or more to acquire.
Also note that in most cases you need a separate license in every country where you intend to operate and provide services, although there may be various arrangements between competent authorities, especially between EU Member States, that allow a license to be transferred or passported more easily.
How are clients identified?
All clients need to be identified, especially in the financial services industry, as well as the origin of their funds, so that the business is not used as a vehicle for money laundering. However, making customers confirm their identity may not be a great or engaging UX, negatively impacting conversion rates. The proportionality principle should be applied: the higher the risk, the stricter the measures.
Who holds the custody of the funds?
This question will be asked of any business that allows clients to deposit funds, such as investment management. Holding clients’ money and assets also requires licensing, and the project should consider a partnership with an institution holding the applicable license.
Who is liable for failures?
This happens to be one of the most neglected matters. Even if you suffer eventual reputational damage, you can still protect yourself from legal liability by building corresponding arrangements with service providers. For example, if client data is stored on third-party servers, the provider should be responsible for the data’s safekeeping. Note, though, that such arrangements will increase service costs. Sometimes providing a service for which liability may be incurred is the core business of a company. Although it is impossible to avoid liability completely in such a case, it can still be reduced, for example if employees are liable rather than the company, or if limits are imposed on the amount of liability.
Founders of blockchain projects, especially of decentralized ones, tend to consider that they hold no liability, as they don’t control the network. However, regulatory authorities may have another view as the legislation is built on the premise of a liable service provider who has the responsibility to ensure that the system operates in a due manner. Thus, the project team may become subject to claims in case of failures.
How are taxes managed?
Poorly managed, taxes can significantly reduce company profits, especially under an unfavourable double taxation regime between the countries the company operates in. Furthermore, taxation issues can make the company much less attractive as an investment opportunity. Proper optimization should be undertaken to mitigate these problems.
What is the intellectual property of the company? Is it protected? Does the company violate any IP?
There are three main points to it.
Firstly, a company may at some point become a target for patent trolls, so it should get patents and copyrights for all its relevant assets.
Secondly, in order to ship an MVP, startups may in some cases violate someone’s intellectual property, for example by using protected images, designs, UX, etc. This is unlikely to be problematic at the initial stage but may be once the company grows bigger, especially if the violated IP belongs to direct competitors.
Thirdly, IP is an asset that increases valuation and may be used for tax optimization.
How is personal data handled?
In recent years, GDPR has become an increasingly pressing issue. A basic privacy setup goes far beyond cookie disclaimers and should include the proper storage of personal data, the theft of which may result in significant lawsuits, and proper data management, such as not passing data to third parties without consent, the possibility of erasure, etc.
A blockchain may often store sensitive personal and financial data, which is strictly protected at the regulatory level. Some requirements can contradict the nature of the technology, such as the right to be forgotten or the obligation to store data on servers in the country where the person resides. It is advisable to consider not storing personal data on public blockchains at all, which enables more control over it.
What problem does the project solve?
Emerging technologies are sometimes called “a solution looking for a problem” – not unjustly. Behind an engaging narrative and brilliant technological thought, it can be easy to lose sight of the most important question: who is your target audience, and why will they use the proposed solution?
Check that the stated problem actually exists, as confirmed by potential clients. Customer surveys and tests can help a project make sure it is moving in the right direction. If a project operates in a vacuum with no direct contact with its target audience, it is a red flag for investors, as it risks meeting no demand once it goes live.
Sometimes a problem is not pressing enough for it to require a separate solution.
How is the problem currently solved? How is the proposed solution better?
In order to be adopted, a project has to offer a very clear benefit to its customers – saving someone’s time or money, fulfilling a particular need or simply providing positive emotions.
If the benefit is marginal, clients are unlikely to pay more or to bother switching to a new service at all – so make sure the project has conducted a competitive analysis and found a clearly defined niche in the market.
We once had a discussion with a project building a network of supercomputers in different countries that would solve the AI problems with built-in algorithms so that customers would only input data and choose algorithms. The problem was that in cloud computing they were competing with Amazon and Microsoft, and in AI software – with IBM. No chance they would win.
What are the core assumptions on which the business model is based? How are they validated or are going to be validated?
How, exactly, is the company going to make money? What metrics determine the profits? Are the revenue predictions realistic?
For example, if transaction fees are the main source of revenue, certain transaction volumes are expected and should be justified by market analysis.
What is the company’s place in the industry value chain? Who are the other participants the company is working with? How is supply chain sustainability managed?
No company delivers its value to end customers independently; it always works together with multiple other actors. It is critical to identify the exact added value the company provides. All other companies in the value chain are external dependencies that may pose risks and should be managed, for example through diversification.
How does the unit economics of the company work? Can it be profitable at all?
That is, does a single customer bring in more money than they cost, including processing and acquisition costs?
If the unit economics are broken, is increased revenue per customer possible, or will customers simply not pay more? Is it possible to cut costs in the future with targeted investments, for example software that reduces operational expenses, or marketing that raises credibility and thereby reduces acquisition costs?
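The unit-economics check above reduces to simple arithmetic; a toy sketch with assumed numbers (all figures invented for illustration):

```python
# A customer is only worth acquiring if lifetime revenue, net of the cost to
# serve them, exceeds the cost of acquiring them.

def unit_margin(monthly_revenue, monthly_cost_to_serve, lifetime_months, cac):
    lifetime_value = (monthly_revenue - monthly_cost_to_serve) * lifetime_months
    return lifetime_value - cac

# Broken economics: each customer loses money overall.
assert unit_margin(10, 6, 12, 60) == -12   # (10-6)*12 - 60

# Fixed either by raising revenue per customer...
assert unit_margin(15, 6, 12, 60) == 48    # (15-6)*12 - 60
# ...or by cutting acquisition cost.
assert unit_margin(10, 6, 12, 30) == 18    # (10-6)*12 - 30
```

The two branches of the calculation mirror the two remedies discussed above: raising revenue per customer, or cutting serving and acquisition costs.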
In other words, investors will look at the factors that will make the investment justified.
What is the growth strategy? How is the growth engine validated? Does it suit the business model?
To make the investment feasible, a project should have growth potential that matches the risk. For an operational, profitable business, growth expectations are lower than for a startup. The prospects of a company with viral growth potential differ significantly from those of a B2B company that must build a sales department.
Who are the direct competitors? What is the competitive advantage, if any? If there are none, what are the possible options to gain some and the expected investments? If there are some, how are they sustained?
A business does not necessarily need a competitive advantage at every point of time if the demand on the market is significantly higher than supply. However, this is not a sustainable situation, and the competition will increase. Thus, if there is no competitive advantage, you should focus on getting one. If you do have one – make sure you’re able to sustain it and adapt to the ever-changing market conditions.
Did the company use debt funding? What is the debt to earnings ratio?
Indebtedness creates additional risks for anyone engaging in business with the company, resulting in less favourable terms of collaboration, or none at all.
Who are the major company shareholders? How will they impact company direction? Do they support profitability or growth? Do they participate in operational management?
Shareholders are themselves a source of information about the business and will be scrutinized. In the financial industry, or when offering securities to the public, major shareholders and directors must pass fitness and properness checks. The company should be cautious and do its own due diligence when accepting investments, looking not only at the legal background of the investor but also at the broader impact they will have on the company’s strategy.
If a project is looking to engage serious partners, attract a significant funding round, or raise public and media awareness, it will definitely become subject to thorough scrutiny that targets not only superficial financial parameters and the quality of the idea, but also the non-sexy things, such as taxes, intellectual property, cybersecurity, and supply chain resilience. Answering these questions in advance not only makes you well-prepared for due diligence but also better able to succeed amid fierce competition in the market, and should be undertaken as early as possible.
Due diligence requires asking hard questions. But it is critical to ensure that we devote our time and money to what will have a real impact on the world.
Firsthand Overview of Digital Securities Legislation in Malta
When it comes to choosing a jurisdiction for a digital securities offering, Malta is among the first on the list. Over the past several years, Malta has carved out a unique position as the "blockchain island", fostering technological innovation through advanced blockchain legislation, friendly tax policies, and a progressive approach to regulation.
This article provides a comprehensive overview of the legal status of digital securities in Malta, based on months of research and personal communication with the Maltese regulator and local lawyers while we were structuring our platform for digital securities offerings on the island.
Digital securities in Malta are regulated, first and foremost, by traditional legislation on financial instruments and services, the most important being the Companies Act and the Investment Services Act. These acts implement provisions of EU legislation, namely MiFID II, the Prospectus Regulation, and others.
Apart from the existing body of law, Malta has also introduced specific legislation on innovative blockchain-based financial instruments that defines what is governed by the traditional legislation and what falls under the scope of the new acts.
This approach differs from that of common law countries, which do not require specific legislation to define the legal status of an innovative object, relying on existing law instead.
There are three main acts that deal specifically with digital securities:
- The Virtual Financial Assets Act (VFA Act), which defines DLT-based assets and the rules governing them
- The Malta Digital Innovation Authority Act (MDIA Act), which established the MDIA as a governing body and set out its role in regulating blockchain companies
- The Innovative Technology Arrangements and Services Act (ITAS Act), which introduced the term "innovative technology arrangement" and the procedure and conditions for licensing
A separate act regarding STOs as a fundraising method is currently under development.
Apart from that, there are several guidelines and strategies. The most relevant are the MFSA STO Consultation Paper, which outlines the MFSA's approach to STOs, and the MFSA Fintech Strategy, which, inter alia, discusses plans to establish a regulatory sandbox for fintech ventures.
Below, I am taking a closer look at the most important aspects of the existing legislation.
There are two main regulatory bodies governing digital securities in Malta: the Malta Financial Services Authority (MFSA) and the Malta Digital Innovation Authority (MDIA).
The MFSA is the single regulator of financial services in Malta, overseeing both financial services providers and issuers of all types of financial instruments. This has two implications for digital securities issuers:
- They need to work with MFSA-licensed service providers
- Their offering has to be approved by the MFSA
The role of the MDIA is to set and enforce rules and standards for technological innovation. With regard to digital securities, the regulator reviews and authorizes the technical infrastructure of crypto and security token exchanges and other infrastructure projects to make sure they are reliable and secure.
To obtain an MFSA license, you do not necessarily need MDIA authorization: in most cases a systems audit is enough, and the MDIA's opinion remains voluntary. However, if transaction volumes exceed certain thresholds, MDIA authorization becomes mandatory.
Naturally, the MDIA has limited bandwidth and cannot review every application for authorization itself, so it engages third-party MDIA-licensed systems auditors to review the technical blueprint of the proposed system. There are currently five of them, including the consulting giants KPMG and PwC. Once the audit is done, the MDIA makes the final decision on granting authorization based on the auditor's assessment, the business model, and the senior management and qualifying shareholders of the innovative technology company.
The competent authorities are pursuing three priority goals: protecting investors, supporting Malta's reputation as a "centre of excellence for technological innovation", and promoting healthy competition and choice.
The strong focus on reputation distinguishes Malta from other blockchain-friendly jurisdictions, such as Estonia. Although Malta does much to promote blockchain-based business by establishing clear legislation, creating a regulatory fintech sandbox, and so on, getting licensed here is more difficult. To get licensed, a company needs to comply with strict requirements, pass a systems audit to ensure the resilience of its infrastructure, and defend its business model.
One of the necessary requirements for authorization for any regulated activity in Malta is the so-called "fit and proper" check for all qualifying shareholders (those with a stake above 25%) and senior management, another mechanism to prevent fraud, protect investors, and safeguard Malta's reputation.
Such measures create additional credibility for a company licensed in Malta, which in turn gives reputable companies an incentive to establish business activities there.
Virtual Financial Assets Act: providing classification
The VFA Act introduced the legal framework for virtual financial assets and asset offerings in November 2018. The Act defines four types of DLT assets:
- electronic money – conventional money, accounted for on DLT
- virtual tokens – units that have value only inside a single system, for example, loyalty points
- financial instruments – assets defined by MiFID II, including, inter alia, transferable securities and units in collective investment undertakings
- virtual financial assets – everything that does not fit into any of the categories above
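The Act's classification can be read as a sequential test: an asset is checked against each conventional category in turn, and anything left over defaults to the residual "virtual financial asset" category. A minimal sketch of that logic, where the function and flag names are illustrative rather than taken from the Act:

```python
def classify_dlt_asset(is_electronic_money: bool,
                       usable_only_inside_platform: bool,
                       is_mifid_financial_instrument: bool) -> str:
    """Illustrative ordering of the VFA Act's four DLT-asset categories."""
    if is_electronic_money:
        return "electronic money"          # conventional money accounted for on DLT
    if usable_only_inside_platform:
        return "virtual token"             # e.g. loyalty points with no outside value
    if is_mifid_financial_instrument:
        return "financial instrument"      # regulated under MiFID II
    return "virtual financial asset"       # the residual category

# A freely tradable token that is neither e-money, a closed-system unit,
# nor a MiFID II instrument falls into the residual category:
print(classify_dlt_asset(False, False, False))  # prints "virtual financial asset"
```

The residual category is what makes the case-by-case treatment described below possible: anything novel lands there instead of being forced into an ill-fitting bucket.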
The beauty of the Act is that an asset that does not fall into one of the conventional categories is dealt with on a case-by-case basis. Many jurisdictions do not adopt such a granular approach, preferring to classify DLT-based assets broadly as security, utility, and payment tokens, with the same rules for every group.
While it might seem like a good idea to create a separate category for cryptocurrencies, the problem is that they can differ greatly in their essence: some are native tokens of a blockchain and others are not; some are anonymous and some are not; some are decentralized and some are not.
The VFA Act is mostly focused on procedures for the issuance and offering of virtual financial assets. However, it is unclear from the Act itself how security tokens should be classified depending on their nature. The MFSA has therefore issued further guidelines and is working on specific legislation for digital securities that will cover the industry's specific use cases.
STO Consultation Paper: defining digital securities
The STO Consultation Paper divides security token offerings into traditional and non-traditional.
Units offered during a traditional STO are classified as financial instruments under MiFID II and are thus regulated mostly by MiFID II and the Investment Services Act. At Stobox we call such units "digital securities", and they are what we mostly work with.
All other, more exotic types of investment units fall under the definition of non-traditional STOs. The most common example is a unit that provides a right to a revenue share but does not represent a company's equity, thus being a sort of derivative contract. Most security token offerings conducted so far have been of this nature, although the regulation they fall under differs depending on the jurisdiction.
MFSA has not yet issued an opinion on non-traditional STOs.
Prospectus regulation: offering & trading digital securities
The offering of digital securities is regulated mainly by the Prospectus Regulation, which requires issuers to register a Prospectus when making a public offering. However, European legislation provides exemptions under which an offering can be conducted without registering a Prospectus.
The two most widely used are:
1) an offering targeted solely at accredited investors (private sale)
2) an offering with a total consideration under EUR 5 million in the European Union over a 12-month period
Nonetheless, if the issuer seeks a listing on a trading venue, it has to comply with the listing rules and prepare a Prospectus-like Admission Document, which reduces the benefits of an exempt offering.
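As a rough illustration, the two exemptions above can be expressed as a simple eligibility check. The function name and inputs are simplified assumptions; actual eligibility depends on the full text of the Prospectus Regulation:

```python
def prospectus_exempt(all_investors_accredited: bool,
                      total_consideration_eur: float) -> bool:
    """Simplified check against the two most common Prospectus exemptions."""
    # 1) Private sale: the offer targets accredited investors only.
    if all_investors_accredited:
        return True
    # 2) Small offer: total EU consideration under EUR 5 million
    #    over a 12-month period.
    if total_consideration_eur < 5_000_000:
        return True
    return False

print(prospectus_exempt(False, 3_000_000))  # prints True  (small offer)
print(prospectus_exempt(False, 8_000_000))  # prints False (Prospectus required)
```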
However, there are secondary market arrangements that do not fall under the definition of a regulated trading venue and can therefore apply less strict admission rules. One of them is a bulletin board: a market where participants can place their buying and selling interests, but there is no automated matching. Instead, a transaction is initiated when another client agrees to the proposed terms and chooses to become the counterparty of the trade. Although there is no precedent of this kind in Malta yet, the UK's Financial Conduct Authority, which is subject to the same EU legislation, does not consider such an arrangement an MTF:
“In our view, any system that merely receives, pools, aggregates and broadcasts indications of interest, bids and offers or prices should not be considered a multilateral system. That means that a bulletin board should not be considered a multilateral system. This is because there is no reaction of one trading interest to another within these types of facilities.”
For this reason, we at Stobox are building our secondary marketplace in the form of a bulletin board, reducing the requirements for companies to be onboarded and gain access to liquidity.
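The distinction the FCA draws, broadcasting interests plus explicit acceptance rather than automated matching, can be sketched as a minimal data structure. The class and method names below are hypothetical, not Stobox's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Interest:
    poster: str
    side: str        # "buy" or "sell"
    token: str
    quantity: int
    price: float
    open: bool = True

@dataclass
class BulletinBoard:
    """Receives and broadcasts interests; never matches them automatically."""
    interests: list = field(default_factory=list)

    def post(self, interest: Interest) -> int:
        # The board only receives and displays the interest.
        self.interests.append(interest)
        return len(self.interests) - 1

    def accept(self, interest_id: int, counterparty: str):
        # A trade happens only when another client actively agrees to the
        # posted terms; there is no engine reacting to one trading
        # interest with another.
        interest = self.interests[interest_id]
        if not interest.open:
            raise ValueError("interest already taken")
        interest.open = False
        return (interest.poster, counterparty, interest.token,
                interest.quantity, interest.price)

board = BulletinBoard()
i = board.post(Interest("alice", "sell", "ACME-DS", 100, 12.5))
print(board.accept(i, "bob"))  # prints ('alice', 'bob', 'ACME-DS', 100, 12.5)
```

Note what is absent: there is no order book crossing or price-time priority. Every trade requires a deliberate `accept` by a counterparty, which is exactly why such an arrangement falls outside the MTF definition.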
Malta has introduced one of the most progressive legislative frameworks for digital securities in the world, striking a balance between investor protection and facilitating innovation. Creating comprehensive legislation from scratch is a non-trivial task that takes a lot of time, which explains why the majority of digital securities offerings to date have taken place in other jurisdictions. However, precisely because Malta has put so much time and effort into it, Maltese providers and companies can be trusted from the perspective of both long-term regulatory stability and adherence to prudential standards.
Sufficient Decentralization and Security Tokens – Thought Leaders
By Derek Edward Schloss, Director of Strategy, Security Token Academy
*Author’s Note: The following is not legal advice, but an exploration and possible interpretation of the current regulatory landscape for blockchain-based fundraising.
As it relates to the explosive blockchain industry, perhaps no theme has been dissected more than that of industry regulation.
On one hand, a number of projects have questioned whether digital assets can thrive in the U.S. without forward-thinking regulation. On the other hand, insiders argue that our regulators are doing their best to follow the laws enacted through the legislative process — really, it’s up to our lawmakers to draw the final boundaries.
The truth is likely somewhere in between.
In the midst of these arguments, the SEC has increased the volume of its “guidance by enforcement” actions, targeting bad-faith fundraisers and ICOs that have consciously ignored the presence of federal securities laws over the last few years. In 2018, the SEC doled out over a dozen enforcement actions involving digital assets and initial coin offerings. And although this year’s numbers aren’t yet available, several high-profile cases are shining light on the regulatory opacity many have criticized.
Of note, the SEC made headlines last quarter when it reached a $24 million settlement with Block.one, the firm behind the EOS blockchain. Block.one had previously sold tokens to fund the development of the EOS network, raising over $4 billion between 2017 and 2018. The SEC argued that a purchaser in the ICO would have had a reasonable expectation of future profit based on Block.one’s efforts, including its development of EOS software and promotion of the EOS blockchain, satisfying the presence of an investment contract under the Howey Test and U.S. federal securities laws. As a result of the offering’s status as a security, the SEC found that Block.one violated securities laws by not filing a registration statement for its initial offering, or qualifying for an exemption from registration.
While the $24 million settlement might appear significant, many in the blockchain community were quick to note that the amount represented less than 1% of the total capital raised during Block.one’s year-long ICO. Further, Block.one announced that the negotiated settlement resolved all ongoing matters between Block.one and the SEC, leading some to question whether the EOS token, which currently trades on exchanges and is used to power the EOS blockchain, no longer falls within the crosshairs of federal securities laws.
One day after the Block.one settlement was announced, the SEC settled with Nebulous over an unregistered token offering that took place in 2014. As part of the settlement, Nebulous did not have to register its Siacoin utility token as a security. Like the EOS token, the Siacoin token currently powers a blockchain network that’s fairly well used by a number of distributed parties (323 hosts in 43 different countries).
Two weeks later, messaging giant Telegram Inc. was sued by the SEC to enjoin the firm from flooding the U.S. capital markets with billions of Grams tokens previously sold to accredited investors. Telegram had raised $1.7 billion selling Gram tokens to over 170 accredited investors under a SAFT framework (Simple Agreement for Future Tokens). Like the EOS and Siacoin tokens, Grams were intended to eventually power the TON network.
Read together, what exactly do these three cases tell us? It’s difficult to decipher. Certainly, we know that Block.one and Nebulous originally offered investment contracts to investors — those events were unquestionably illegal offers of unregistered securities. But reading between the lines, it’s also possible to argue that both projects’ utility tokens (EOS tokens and Siacoin tokens) are not securities today, though both trade freely on cryptoasset exchanges.
Telegram’s case is more straightforward — its offering of (future) Grams tokens to investors was also deemed to be an investment contract security by the SEC, but unlike the EOS token, for example, Grams tokens appear to remain securities in the eyes of the regulatory body. As a result, the SEC successfully enjoined Telegram from flooding the U.S. capital markets with Grams tokens.
So how are Grams tokens still securities after their initial sale, while the analysis for EOS and Siacoin tokens is murkier? In its emergency action against Telegram, the SEC found that because initial purchasers expected to “reap enormous profits” once the Grams market launched, there still existed an expectation of profit reliant on the future actions of Telegram, Inc. As a result, the SEC articulated that Grams tokens remained investment contract securities, as the prongs of the Howey Test remain satisfied.
As it relates to the EOS token, recall that the SEC’s Framework for “Investment Contract” Analysis of Digital Assets published in 2019 stated that digital assets previously sold as investment contract securities could be “reevaluated at the time of later offers or sales”. In these situations, if there exists no more “reliance on the effort of others”, nor a “reasonable expectation of profit” as it relates to the investment contract security, then it’s possible the prongs of Howey will not be satisfied, and the investment contract analysis will fail. In these select circumstances, future sales of the digital asset would not be classified as sales of a security.
With this framework in mind, if we attempt to chart a through-line across the three recent SEC actions, one possible (but certainly not definitive) conclusion is that the SEC views the EOS and Sia networks to be sufficiently decentralized as the networks exist today. What factors might play into that analysis? As the SEC’s William Hinman stated in June 2018, and as later codified in the SEC’s Framework in 2019, “if the network on which the token…is sufficiently decentralized — where purchasers would no longer reasonably expect a person or group to carry out essential managerial or entrepreneurial efforts — the assets may not represent an investment contract.” Applying this analysis, it’s plausible that the EOS token successfully transitioned from an investment contract at the point of its initial sale into something more akin to a commodity today.
Alternatively, as it relates to Telegram’s TON blockchain, it’s easier to conclude that the SEC believes there still exists a reasonable expectation of profit (held by token holders) enabled by the ongoing role of an active participant (here, Telegram Inc.). As a result, the TON blockchain does not yet meet the minimum threshold for sufficient decentralization. And, without a sufficiently decentralized network, Grams tokens must remain investment contract securities — the form they originally took during the initial offering to investors. There has been no transition under those facts.
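One way to see why Grams remain securities while EOS and Siacoin arguably do not is to treat the Howey prongs as a conjunction that must be re-evaluated at the time of each later sale. The sketch below is a simplification for illustration, not a legal test:

```python
def is_investment_contract(investment_of_money: bool,
                           common_enterprise: bool,
                           expectation_of_profit: bool,
                           from_efforts_of_others: bool) -> bool:
    """All four Howey prongs must hold for an investment contract to exist."""
    return (investment_of_money and common_enterprise
            and expectation_of_profit and from_efforts_of_others)

# At the initial raise, all prongs plausibly held for every project:
print(is_investment_contract(True, True, True, True))   # prints True

# On a sufficiently decentralized network, the "efforts of others" prong
# arguably fails for later sales, so the whole analysis fails:
print(is_investment_contract(True, True, True, False))  # prints False
```

Under this framing, Telegram's ongoing role keeps the final prong satisfied for Grams, while for EOS and Siacoin that prong is the one in doubt.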
Other takeaways? When looking at these three SEC actions together, one could argue that securities laws will always apply to investment contract sales of pre-launched network tokens, regardless of the offering’s form as a SAFT or direct token sale. This fact notwithstanding, the SEC could also be acknowledging the concept of “transitional” securities as a token’s underlying network decentralizes over time.
It’s also possible that these cases can be broadly interpreted as a win for security tokens as an initial fundraising mechanism. If pre-network digital assets must always be offered as investment contracts under federal securities laws, then the token being sold should be placed inside a security token wrapper, and the project’s fundraisers must file a registration statement for the securities offering, or qualify for an exemption from registration. In addition, if a network token aims to transition into a commodity-like digital asset sometime in the future — much like Ether, EOS, or Siacoin tokens — then the token must be imbued with security token transfer restrictions until that event occurs, so that all parties remain in compliance. Security token protocols offer issuers this type of transitional regulatory compliance.
Finally, it’s possible the next wave of digital asset regulation in the U.S. will be more fluid, more accessible, and more open than any of our current legacy constructs. A reading of these cases demonstrates that our U.S. regulators may be evolving their historically rigid interpretations of securities laws to meet this transformative technology head on.
What’s certain is that many questions still remain. For example, while we may have a better conceptual understanding of when sufficient decentralization is satisfied at the network level (Ethereum, EOS, Sia), and when it certainly is not satisfied (Telegram’s TON Blockchain), we still don’t know the exact point at which decentralization is reached during a network’s lifecycle, and as a result, when that network’s underlying token has officially transitioned out of security status.
Maybe that’s for our legislators to decide.
But whether you believe the SEC’s actions represent a loud warning for the industry, or a sign that our regulators are willing to play ball and speak the same language as the digital asset world, it’s undeniable that the increasing clarity provided will ensure the industry’s evolution — in one direction or another.