If you could hire an Artificial Intelligence (AI) powered personal assistant to work with you exclusively, and it would become better at its job the more time it spent helping you, would you want that assistant to fully belong to you? Or would you prefer to rent that assistant from a company you knew very little about?
In 2017, Juniper Research issued a report estimating that, by 2022, 55% of households worldwide would own a voice assistant, like Amazon’s Alexa or Google Assistant. As the technology matures, AI will permeate nearly every form of human activity and environment, becoming an invisible force that guides human interactions. Given, however, that AI is integrated into technology that supports schools, hospitals, and homes, information on its design (how it was trained, how it manages personal data, and so on) ought to be accessible to all. Invisibility and inaccessibility should not be features of AI when it is so intimately linked to human self-determination. From this reality stems our ethical imperative: building a decentralized and democratic AI.
Distributed Ledger Technologies (DLTs) are crucial tools for enabling a decentralized AI ecosystem that is (1) transparent, (2) rich in participants, and (3) accountable. Multiple organizations are now working at the forefront of combining AI and blockchain to achieve similar goals. The Decentralized Artificial Intelligence Alliance (DAIA), for example, has been assembling AI and blockchain projects into a consortium aimed at sharing best practices and gold standards to foster positive growth within the AI industry.
Decentralized AI: A Guide
Transparency is key to a healthier development environment for AI. Fittingly, it is a feature held dear by the crypto industry and one that is structurally attainable through decentralized blockchain technology. Transparency ensures that algorithmic models are built to receive the scrutiny and input of any potentially affected individual and thus reflect more than the assumptions or biases of their developers. The same goes for the datasets used to train AI systems; recognizing our biases and teaching machines about our common values is difficult in a closed environment. The risk we, as a society, incur by delegating the responsibility of developing ethical guidelines for AI to a handful of companies is too high. Values differ vastly from religion to religion, culture to culture, country to country. If any bias is encoded into AI today, it will very likely come from datasets originated by one-dimensional Silicon Valley giants.
There are too many cases of racial or socioeconomic bias in decisions made by AIs that now support the functioning of critical institutions, governmental or private. For an AI to operate beyond the limitations of a small group of performance-pressured people, it needs collective, abundant input. It needs to be exposed to the characteristics of an ever-growing and diverse set of individuals. And the internet seems like a good place to start.
It should also be noted that ethical features are “open to progressive increase” and not “bounded between 0 and 1”. To put it differently, a feature such as transparency should be viewed as making an algorithm increasingly fair relative to another algorithm, not as making it 100% fair, just as nothing can be made 100% fast.
Open development, by virtue of exposing a large number of people to a piece of code, allows us to identify the points of convergence among what is likely to be an international group of participants, and to train our systems according to a wider societal context. The most popular inclinations become the foundations from which the logic of the AI emerges. By keeping the code accessible to the public, multiple avenues can be tested concurrently according to the different interpretations and functions of the technology in its unique applications.
Among other advantages, the ability to relate problems (theorems) to their solutions (proofs) in a transparent manner is particularly suited to complex tasks such as making heterogeneous processes interoperate, linking machine understanding with human understanding, and enabling deep levels of introspection and meta-learning. In other words, understanding what we reason about is needed for introspection, which is needed for complex AI. Open-source development facilitates cooperation among engineers with different beliefs, cultures, and educations, and allows them and their systems to interact at a higher level of mutual knowledge and understanding.
To enable as many people as possible to contribute to the development of new AI systems, monetization must be made possible. The best way to monetize open-source technology is by creating an ecosystem that incentivizes collaboration on any use case that an interested party would like to see developed.
To incentivize the ecosystem, you need to enable trustless cooperation and reward developmental efforts. A balance between supply and demand needs to be struck: AI developers stand on one side; individuals, companies, NGOs, and governments on the other.
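The idea of trustless cooperation can be made concrete with a simple escrow pattern: a buyer locks payment for an AI service up front, and funds are released to the developer only once the work is delivered. The sketch below is purely illustrative; the class and method names (`Escrow`, `deposit`, `release`) are hypothetical and do not correspond to any real protocol or API, and on an actual blockchain the delivery confirmation would come from the protocol itself rather than a boolean flag.

```python
# Minimal, hypothetical sketch of trustless cooperation via escrow.
# Neither party has to trust the other: the buyer cannot withhold payment
# after delivery, and the developer cannot be paid without delivering.

class Escrow:
    def __init__(self, buyer: str, developer: str, price: int):
        self.buyer = buyer
        self.developer = developer
        self.price = price
        self.funded = False
        self.released = False

    def deposit(self, amount: int) -> None:
        # The buyer locks funds up front, before any work is done.
        if amount != self.price:
            raise ValueError("deposit must match the agreed price")
        self.funded = True

    def release(self, result_delivered: bool) -> int:
        # Funds move to the developer only once delivery is confirmed.
        if not (self.funded and result_delivered):
            raise RuntimeError("cannot release: unfunded or work undelivered")
        self.released = True
        return self.price


escrow = Escrow(buyer="alice", developer="bob", price=100)
escrow.deposit(100)
payout = escrow.release(result_delivered=True)
print(payout)  # 100
```

The same pattern generalizes to the supply-and-demand balance described above: demand-side parties fund requests, and supply-side developers are rewarded automatically on fulfillment.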
On the demand side, demand itself needs to be diversified. The ecosystem would benefit from highly detailed and personalized requests for new AI systems. Each party on the demand side would thereby fuel, and benefit from, an ever-widening set of applications using AI technology. Novel or niche tools and methodologies will spring from these complex demands, to the benefit of the wider community of developers.
In turn, increasing and widening demand will serve as a significant trigger for more specialized AI expertise on the supply side. The more complex demand becomes, the more diversity in AI knowledge and skill sets will be necessary. Coupled with that is a strong sense of cooperation between the different subfields of AI invoked by that demand, which allows for faster development. No code would need to be written twice, overlap between projects would be positive and encouraged, and new projects would build upon state-of-the-art findings instead of starting from scratch. Cooperation can also extend to the safe, anonymized sharing of data. Sharing data between projects will unlock exponential growth for systems that are inherently dependent on the volume and quality of the data they receive. Projects such as Ocean Protocol, a decentralized data exchange protocol that facilitates frictionless exchange and sharing of datasets, would be instrumental in such an ecosystem. It is important to note, however, that such projects are still constrained by scaling challenges yet to be solved by the Ethereum blockchain. For example, there is currently not enough space on the Ethereum blockchain to allow “privacy-preserving cryptographic data markets to work for more than a few hundred thousand people”, as Vitalik Buterin reminded us.
As a direct consequence of open cooperation, new demand will consistently reach further into novel use cases, as past use cases in the developer ecosystem would be readily reproducible through publicly available code. The ecosystem enables increasing demand which, in turn, fuels the ecosystem, and a virtuous cycle is established.
The creation of democratic AI requires the active participation of anyone who wishes to have a say in how a given design choice could affect them or the community they represent. Even once the design is set and the AI deployed, individual intervention and modification should still be possible for any user of the AI or participant in the ecosystem. Democratic governance of open technology is a familiar concept to most blockchain projects. Blockchain technology would allow a large number of stakeholders to hold the reins of development and have a chance to express their vision for the ecosystem.
The ecosystem outlined above will benefit from participants’ ability to vote on various propositions pertaining to developments in the ecosystem. For example, building a missile-guidance AI system on top of open systems that were built by independent groups for non-harmful purposes could engender a dispute. The ability to vote on the rules of the ecosystem, the rules of interaction, the level of explainability required of every AI system, and even dispute resolutions would provide a degree of accountability that is almost absent in proprietary systems and public discourse today. The ecosystem in question could, for example, be formed in such a way that it strictly prohibits any AI system that is not benevolent, or at least not obviously geared towards harmful applications vis-à-vis human beings. Conversely, if the actors of the ecosystem are largely malevolent, we would likely see reprehensible technology emerge from their joint efforts. The perennial democratic problem of “the tyranny of the majority”, in which the interests of the majority of voters are favored to the detriment of those in the minority, is just as relevant in this ecosystem. Yet Churchill’s words, “democracy is the worst form of government, except for all the rest”, still ring true today.
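On-chain governance votes of this kind are commonly stake-weighted. The toy sketch below shows such a tally; the voters, stake figures, and the `tally` function are invented for illustration and do not describe any particular project's governance mechanism.

```python
# Illustrative stake-weighted tally for an ecosystem proposition
# (e.g. whether to permit a disputed use of shared AI components).

from collections import defaultdict


def tally(votes: list[tuple[str, int, str]]) -> dict[str, int]:
    """Sum stake-weighted votes; each vote is a (voter, stake, choice) tuple."""
    totals: dict[str, int] = defaultdict(int)
    for _voter, stake, choice in votes:
        totals[choice] += stake
    return dict(totals)


votes = [
    ("alice", 300, "reject"),   # made-up stakes, for illustration only
    ("bob", 150, "approve"),
    ("carol", 200, "reject"),
]
result = tally(votes)
print(result)  # {'reject': 500, 'approve': 150}
```

Note that stake weighting makes the tyranny-of-the-majority concern tangible: a single large stakeholder can outvote many small ones, which is why real governance designs often add quorums, vote caps, or quadratic weighting.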
After decades of efforts to define appropriate ethical principles to guide the use and development of digital technologies, the debate, albeit incomplete, seems to have shifted from the what (principles) to the how (application).
In environmental ethics, the concept of “greenwashing” refers to the malpractice of private or public actors seeking to appear more nature-friendly than they actually are, in order to distract a given audience from the disappointing reality of a project’s actual priorities. This form of misinformation is “often achieved by spending a fraction of the resources needed to tackle the actual ethical problems, by concentrating on mere marketing, advertising, or other public relations activities”, to present a veneer of integrity. AI development is too important (as is the environment) to become another victim of “greenwashing” operations. The what has been defined and is being actively worked on, the how is being crash-tested by many projects as we write, and an open ecosystem improving AI is one answer to how ethical principles can be applied to AI.
What is left now is the who. Who will uphold these principles and take the necessary steps to join such an ecosystem? This is a sincere call for businesses to consider using decentralized AI when looking to streamline their operations or create impactful AI for society. Overcoming the human problem of trust and strict ownership might be the social solution to the technological problem of building better performing and safer AI.
Ultimately, the ambition of an open and decentralized AI ecosystem, as described above, is to create AI that is for the people and owned by the people. The aforementioned principles, as abstractions, support this vision by acting as normative constraints, and by offering guidance on how to go about applying ethics in AI and creating higher levels of user autonomy —a core goal in AI ethics. In this open environment, users are given the tools to reproduce and alter an AI system, but more importantly, they are given the possibility of trusting an AI they use.
We are at a turning point in history, witnessing one of the most significant technological leaps ever to occur. The rise of thinking machines is too critical to be left in the hands of the few and too impactful to be conceived behind the closed doors of tech giants.
SingularityNET has been building a decentralized marketplace for AI services since its inception in 2017. It lets anyone create, share, and monetize AI services at scale. Since its beta launch in February 2019, the network has delivered 40+ live AI services alongside strong developer activity. We have also partnered and are currently working with more than 20 companies and institutions, including UNESCO, the government of Malta, and Domino’s Pizza. To learn more about our journey to build truly decentralized AI, follow us on Medium at: https://blog.singularitynet.io/
Arif Khan is the Chief Marketing Officer of the SingularityNET foundation. He is a top writer for Artificial Intelligence on Medium.com and his work has been covered by VentureBeat, The Wall Street Journal and the New York Times. He grew up in Singapore and presently calls Washington DC home.