SV1. Digital distrust
Digital distrust may arise from known or perceived risks to safety and inclusion. It is debilitating to the success of DPI and to the development and adoption of new innovations that enrich DPI. Like discrimination, distrust in DPI is often tied to pre-existing social factors that must be acknowledged and understood in order to be addressed effectively. Whatever its cause, digital distrust presents serious risks to the legitimacy, effectiveness, adoption and sustainability of DPI systems, and may extend to distrust in all digital services and in government institutions in general.
SV2. Weak rule of law
Weak rule of law limits the ability of normative frameworks that prescribe legal, regulatory and ethical requirements to mitigate risks effectively. Because DPI can amplify the political, social and economic power of those who control these systems, there is a risk that this concentrated power undermines the conventional institutions responsible for upholding the rule of law and escapes essential checks and balances, potentially leading to abuses. Concentration of power in the form of monopolies may inhibit innovation, limit services and their features, and leave poor quality of service unchecked. Inadequate accountability within the community of innovators and service providers that constitute a DPI can lead to malicious use, harms and circumvention of the law with relative anonymity.
SV3. Weak institutions
Weak institutions diminish the effectiveness and legitimacy of safeguards by failing to implement necessary policies and practices. The failure to contextualize institutional needs also jeopardizes the value and effectiveness of DPI. Insufficient institutional capacity, mechanisms and resources to fulfil necessary roles represent a pervasive risk to DPI, as does the absence of appropriate institutions to oversee the entire DPI life cycle. A lack of will or wherewithal among key agencies and stakeholders in the ecosystem to coordinate (or cooperate) on a whole-of-society approach to DPI diminishes its value and impact.
SV4. Technical shortcomings
Technical shortcomings can be detrimental to DPI safeguards. Risks arise when systems are not designed to prevent safety, inclusivity and other harms, are poorly implemented or are inadequately tested. Vulnerabilities include security risks to the DPI itself and to people; design that fails to account for specific persons (by gender, age, disability, etc.); inappropriate technology choices leading to non-standard, non-interoperable or excessively costly solutions; restricted, conditional or encumbered ownership of full solutions; inadequate DPI skills and competencies; and issues related to sustainability. Among other harms, technical shortcomings erode trust in DPI, particularly where users have limited digital literacy and are still building confidence.
SV5. Unsustainability
Unsustainability of a DPI poses significant risks to those who have invested in and rely on its services, and limits adoption by its potential users and by those of other DPI systems. Such risks arise from inadequate value to users and from shortfalls in design, maintenance, improvement, updates and resourcing. Financial threats include high operational and maintenance costs, hardware and software obsolescence, and compromised components. Vendor lock-in limits flexibility and adaptability to new technologies, leading to long-term costs and other challenges. Additionally, without strategies to reduce carbon footprints and manage the environmental impact of discarded electrical and electronic equipment (“e-waste”), the environmental impact of DPI could jeopardize its role in advancing environmental sustainability goals, and in turn its own sustainability. The consequences of DPI unsustainability are significant because of its broad societal impact.
RI1. Discrimination
Discrimination in any of its forms (e.g. racial, socioeconomic, gender, disability, age, linguistic, geographic, cultural) reduces access to opportunities, economic empowerment, essential services such as health and education, and participation in public and economic life. Avoiding discrimination is particularly important in digital ID systems that provide access to social and emergency services and government services, and that enable the broader economy. Discrimination is a leading cause of statelessness globally, with affected persons often excluded from identification and other systems. The digitalization of ID and other systems risks perpetuating existing disenfranchisement.
RI2. Unequal access
Unequal access to DPI is caused not only by discrimination but also by the digital divide and other technology shortfalls (electricity, Internet connectivity, smartphones and computers), as well as by socioeconomic barriers (poverty, general education, digital literacy), infrastructure and service gaps in particular geographic areas, language barriers and disability. Human rights harms arise when access to critical public information and services is impossible because of unequal access to DPI and to the social and economic structures it relies on.
RI3. Exclusion
Exclusion also occurs when enrolment in DPI systems is onerous, impossible or distressing, particularly when enrolment is a mandatory requirement for access to public information or services. This often imposes a hidden cost on vulnerable individuals, who may need to rely on others for assistance. In developing countries, where resources for support may be limited, the lack of alternative methods for accessing services is a prevalent risk. Courts may need to intervene to protect the rights of excluded individuals. Exclusion can lead to market power concentration, resulting in higher service costs, reduced choice and lower service quality.
RI4. Disempowerment
Disempowerment may be caused by DPI systems that restrict individuals’ control over their personal data, threatening autonomy and human agency. The threat is exacerbated when people have little understanding of the possible use and reuse of the data, of the associated impact, and of how, or whether, they can exercise control over it. Mandatory data provision can also erode human agency and, in some jurisdictions, violate human rights and civil liberties, and may even be unconstitutional.
RS1. Privacy vulnerability
Privacy vulnerability occurs when personal information is processed (shared, stored or used) without consent, beyond reasonable privacy expectations, or is misused to cause harm. Such breaches can lead to physical, financial, psychological, emotional and reputational damage. Significant risks include identity theft and fraud, especially in financial services such as payments and credit, where victims may face severe financial losses. Privacy breaches may also enable governments to unlawfully access and misuse data to infringe upon human rights, for example through unauthorized surveillance.
RS2. Digital insecurity
Digital insecurity extends beyond privacy vulnerabilities, encompassing service outages, sector-wide disruptions and other forms of systemic instability. Inadequately secured systems are susceptible to exploitation for malicious purposes, including the sabotage of critical infrastructure, unlawful surveillance, suppression of speech and assembly, espionage, and the destabilization of nations. The repercussions of digital insecurity are extensive, including financial loss, physical danger, reputational damage and more.
RS3. Physical insecurity
Physical insecurity often stems from digital insecurity. For example, physical harm may result when medical records in a data exchange system are compromised. Intrusive surveillance may expose persons’ movements and places of residence to tracking, harassment or coercion. The safety of asylum seekers is threatened when their identities and movements are traceable, potentially leading to persecution, discrimination or denial of protection. Poorly secured DPI can also deny stateless persons legal protections or access to essential services, and can be exploited to threaten the safety of individuals who express dissenting opinions or engage in lawful protest, through retaliation, persecution or other forms of physical harm.
RS4. Lack of recourse
Lack of recourse refers to the absence or inadequacy of effective remedies and redress mechanisms for rights violations, which leaves persons affected by DPI risks with no means of mitigating the harms caused to them. This deficiency undermines the integrity of DPI systems, eroding public trust and reducing adoption rates. That, in turn, challenges the sustainability of DPI, diminishes its effectiveness and creates significant obstacles to realizing its potential benefits.
DPI systems comprise standards (including protocols), technological systems and services that operate at the intersection of individuals on the one hand, and public and private entities that hold institutionalized political and economic power, on the other. Risks therefore derive from failures and inadequacies in the overarching legal, regulatory and ethical (normative) frameworks in which they operate, encompassing all organizations and stakeholders that have a role related to DPI service delivery. Risks also lie within the technological systems themselves.
Risks to safety: RS1. Privacy vulnerability; RS2. Digital insecurity; RS3. Physical insecurity; RS4. Lack of recourse.
Risks to inclusion: RI1. Discrimination; RI2. Unequal access; RI3. Exclusion; RI4. Disempowerment.
Systemic vulnerabilities: SV1. Digital distrust; SV2. Weak rule of law; SV3. Weak institutions; SV4. Technical shortcomings; SV5. Unsustainability.