Intelligent & Digital Roadway Infrastructure for Vehicles Integrated with Next-Gen Technologies

By Rada Stoilova – Legal Expert in International and Human Rights Law, Law and Internet Foundation

As Europe advances towards connected, cooperative, and automated mobility (CCAM), AI has taken a leading role in improving driving conditions and the efficiency of road infrastructure. The iDriving project, funded by Horizon Europe, reflects this evolution by developing an AI-powered system designed to enhance road safety, support proactive road monitoring and maintenance, and improve accessibility and traffic efficiency through advanced analytics and sensor technologies.

Yet alongside innovation comes the responsibility to ensure that these systems are developed and deployed in line with EU values and fundamental rights, as set out in the EU Treaties and the Charter of Fundamental Rights of the European Union. Central to this responsibility is the principle of fairness and non-discrimination. As emphasised in the EU's Ethics Guidelines for Trustworthy AI, the fairness principle calls for systems that avoid unfair bias, respect human rights, and ensure equal treatment for all individuals. Guided by this principle, AI must be designed and deployed in a way that actively incorporates fairness at every stage, ensuring that the perspectives and needs of diverse users are consistently considered.

This blog post examines how fairness can be put into practice, focusing on diverse and representative datasets, inclusivity, accessibility, and ongoing monitoring, so that AI systems serve all users equitably.

Diverse and Representative Datasets  

AI systems can unintentionally perpetuate bias if the datasets used for training, validation, or operation are incomplete, unrepresentative, or skewed toward specific groups. Dataset bias arises when the collected data does not accurately reflect the populations or contexts the AI system is intended to serve, which can lead to uneven performance, discriminatory outcomes, or the marginalisation of certain groups. For example, an AI system trained predominantly on data featuring a particular gender or racial group may fail to accurately recognise road users from underrepresented groups, potentially creating safety risks or unequal treatment for these users.

Bias in datasets can emerge in multiple ways: some groups may be over- or under-represented, data collection methods may introduce systematic errors, temporal changes may render historical data less relevant, and measurement approaches may inadvertently disadvantage certain populations. These forms of bias can be compounded by incomplete governance, poor documentation, or lack of oversight, resulting in unintended prejudice or discrimination. 
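To make this concrete, the short sketch below shows one way such representation and performance gaps can be surfaced: it compares each group's share of the evaluation data with the model's accuracy for that group, and large gaps on either measure are a signal to rebalance or re-collect data. The records, group names, and field names are hypothetical placeholders, not taken from any actual iDriving dataset.

```python
from collections import defaultdict

# Hypothetical evaluation records: each holds a road-user group, the
# ground-truth label, and the model's prediction. Field names are
# illustrative assumptions, not a real project schema.
records = [
    {"group": "pedestrian_adult", "label": 1, "prediction": 1},
    {"group": "pedestrian_child", "label": 1, "prediction": 1},
    {"group": "wheelchair_user",  "label": 1, "prediction": 0},
    # ... in practice, thousands of records per group
]

def representation_and_accuracy(records):
    """Report each group's share of the data and the per-group accuracy."""
    counts = defaultdict(int)
    correct = defaultdict(int)
    for r in records:
        counts[r["group"]] += 1
        correct[r["group"]] += int(r["label"] == r["prediction"])
    total = len(records)
    for group in sorted(counts):
        share = counts[group] / total
        accuracy = correct[group] / counts[group]
        print(f"{group}: {share:.1%} of data, accuracy {accuracy:.1%}")

representation_and_accuracy(records)
```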

Since such biases have real-world consequences, high-risk AI systems are required under the EU AI Act to undergo rigorous assessment and mitigation of bias in their datasets, while the General Data Protection Regulation (GDPR) provides complementary safeguards to ensure that personal data is processed in a manner that avoids discriminatory effects. Compliance involves not only using diverse and representative datasets but also implementing robust data governance practices, including thorough documentation of data origins, annotation, cleaning, enrichment, and continuous evaluation for gaps or inaccuracies. Where special categories of personal data are necessary for bias detection and correction, strict security measures must be enforced to protect individuals’ rights, and the use of such data must be justified, documented, and limited to the purpose of ensuring fairness. 

To comply with these frameworks, AI systems should:

- be trained, validated, and operated on diverse, representative datasets that reflect the populations and contexts they are intended to serve;
- be supported by robust data governance, with documented data origins, annotation, cleaning, and enrichment processes;
- be continuously evaluated for gaps, inaccuracies, and emerging bias in their data;
- where special categories of personal data are needed for bias detection and correction, process such data under strict security measures, with its use justified, documented, and limited to the purpose of ensuring fairness.
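As an illustration of what documented data governance can look like in practice, the following is a minimal, hypothetical sketch of a machine-readable dataset record. The field names simply mirror the duties listed above (provenance, annotation, cleaning, enrichment, known gaps, and bias review); they do not reproduce any official AI Act template.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetRecord:
    """Illustrative documentation record kept alongside one training dataset.

    Field names are hypothetical; they mirror the governance duties
    described in the text, not a prescribed regulatory format.
    """
    name: str
    source: str                     # where and how the data was collected
    collection_period: str          # temporal coverage, to spot stale data
    annotation_process: str         # who labelled the data, and how
    cleaning_steps: list[str] = field(default_factory=list)
    enrichment_steps: list[str] = field(default_factory=list)
    known_gaps: list[str] = field(default_factory=list)  # documented blind spots
    last_bias_review: date | None = None

# Example entry (all values invented for illustration)
record = DatasetRecord(
    name="roadside_camera_frames_v2",
    source="fixed roadside cameras, three urban pilot sites",
    collection_period="2023-06 to 2024-05",
    annotation_process="two independent annotators plus adjudication",
    cleaning_steps=["deduplication", "removal of corrupted frames"],
    known_gaps=["few night-time rural scenes", "underrepresented mobility aids"],
    last_bias_review=date(2024, 6, 1),
)
```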

Inclusivity and Stakeholder Engagement 

Reflecting the real-world experiences of diverse populations is achieved not only through representative datasets but also through active stakeholder and public engagement throughout the development, deployment, and governance of AI systems. Such engagement involves municipalities, advocacy groups, community representatives, and end-users via consultations, surveys, workshops, and participatory committees.

Public engagement provides a platform for users to share feedback, identify barriers to accessibility, and flag potential biases in system operation. Iterative consultation allows developers to refine policies, features, and AI tools in alignment with fairness, equity, and respect for diversity. For instance, stakeholder input can guide the prioritisation of mobility solutions for vulnerable road users, ensuring that AI recommendations are safe, accessible, and culturally sensitive. Transparent communication and inclusive language are essential throughout these interactions: they help participants understand how their input informs system design, how decisions are made, and how data is collected and used, fostering trust, accountability, and equitable participation.

Accessible AI Systems 

Another fundamental aspect of fairness is accessibility, which refers to the design of systems, services, and technologies in a way that enables all individuals, regardless of their abilities, needs, or contexts, to use them effectively. In AI design, ensuring accessibility means that technologies are usable and beneficial for everyone, including those with physical, sensory, or cognitive impairments, as well as users with varying levels of digital literacy or language proficiency. 

For instance, an AI-based traffic management system may incorporate user-friendly interfaces, high-contrast visual displays, voice commands, speech recognition, and screen readers to support users with mobility, visual, or hearing impairments. Simplified navigation, large buttons, and customisable settings enhance usability for older adults, while multilingual support ensures interaction for individuals from diverse linguistic backgrounds. Attention to device diversity, digital literacy, and context-specific conditions, such as urban versus rural mobility, further ensures equitable access and prevents exclusion of underrepresented user groups. 
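As a minimal sketch of how such options might be modelled in software, an interface could carry per-user accessibility preferences and select language and output channel accordingly. All option names, the sample messages, and the fallback behaviour below are illustrative assumptions, not features of any specific system.

```python
from dataclasses import dataclass

@dataclass
class AccessibilityPreferences:
    """Hypothetical per-user settings for a traffic-information interface."""
    high_contrast: bool = False
    large_controls: bool = False
    voice_commands: bool = False
    screen_reader_output: bool = False
    language: str = "en"

def render_alert(message_by_language: dict[str, str],
                 prefs: AccessibilityPreferences) -> str:
    """Pick the user's language (falling back to English) and tag the
    output channel so downstream components know how to present it."""
    text = message_by_language.get(prefs.language, message_by_language["en"])
    channel = "spoken" if prefs.screen_reader_output else "visual"
    return f"[{channel}] {text}"

# Example: a German-speaking user relying on screen-reader output
alert = {"en": "Road works ahead", "de": "Baustelle voraus"}
prefs = AccessibilityPreferences(screen_reader_output=True, language="de")
print(render_alert(alert, prefs))  # -> [spoken] Baustelle voraus
```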

Continuous Monitoring 

Finally, beyond design and deployment, continuous monitoring is essential to maintain fairness and inclusivity over time. AI systems operate in dynamic environments where user behaviour, traffic patterns, and societal conditions can evolve, potentially introducing new biases or accessibility challenges. By implementing ongoing evaluation mechanisms, developers can identify and address these emerging issues, ensuring that the system remains equitable and effective for all users. This proactive approach reinforces accountability and signals a long-term commitment to ethical AI practices.
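A minimal sketch of such an evaluation mechanism follows, with hypothetical metric names and an illustrative threshold: a recurring job recomputes per-group accuracy on recent data and raises an alert when subgroup performance drifts apart.

```python
def fairness_gap(per_group_accuracy: dict[str, float]) -> float:
    """Largest accuracy difference between any two groups."""
    values = per_group_accuracy.values()
    return max(values) - min(values)

def check_fairness(per_group_accuracy: dict[str, float],
                   tolerance: float = 0.05) -> None:
    """Flag when subgroup performance drifts apart.

    The 5-point tolerance is an illustrative placeholder; in practice the
    threshold and escalation path would be agreed with stakeholders and
    documented as part of the monitoring plan.
    """
    gap = fairness_gap(per_group_accuracy)
    if gap > tolerance:
        # In a real deployment this would notify the responsible team
        # and trigger a documented review, not just print.
        print(f"ALERT: fairness gap {gap:.1%} exceeds tolerance {tolerance:.1%}")
    else:
        print(f"OK: fairness gap {gap:.1%} within tolerance")

# Example: metrics recomputed on the latest month of data (values invented)
check_fairness({"urban": 0.94, "rural": 0.87, "night": 0.91})
```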

Driving Change: Why Ethics Must Lead the Way in AI 

As AI emerges as a significant force shaping innovation across technological domains, ethical considerations must guide its development in order to safeguard fundamental rights and deliver meaningful, positive change. To guarantee that no one is left behind and that the technology advances the common good, values such as diversity, non-discrimination, accessibility, and social equity should be placed at the centre of AI design. By integrating human-centred values into the development process at every level and regularly checking systems for bias, mistakes, and unintended consequences, we can create AI that not only benefits society but also secures a promising future for the technology itself.