Material Issues in the Blossoming World of AI

Today, AI is becoming ever more present in supply chains, administration, customer service and daily activities of almost every sector.


Categorization of material issues through the lifecycle of AI systems

Today, AI is becoming ever more present in supply chains, administration, customer service and the daily activities of almost every sector. While the financial, health and marketing sectors are known early developers and users of these technologies, sectors such as retail are now investing billions into what is already a rapidly evolving field. Although AI still seems a far-away, abstract concept to many people, its growing visibility in a rapidly expanding commercial market has raised legal and ethical questions about its impact on society and the environment. Questions begin to emerge such as "Will I lose my job to AI?" (ethical) or "Can AI use my data for malicious intent?" (legal). At the same time, civil society has concerned itself with holding big companies and decision-makers accountable, requiring a new level of transparency and ethics.

These societal concerns require a new way of thinking about how to ensure that the impacts of an AI system's performance are socially desirable, environmentally friendly, and ethically and legally acceptable. Understanding where these impacts occur and how they will affect your company allows you to report these findings and address concerns about AI that are sure to arise, now or in the near future. Categorizing material issues through the lifecycle of an AI system will thus help AI designers, developers, system integrators, deployers and users make informed decisions on how to improve their AI.

MATERIALITY AND ARTIFICIAL INTELLIGENCE

As society begins to understand and consciously engage with AI, stakeholder expectations about how a company's AI will impact society and the environment will also grow. As a result, companies should expect AI to become an issue worth assessing in a material way.

But what does it mean to define material issues and why is it important?

An issue is material to a company if it meets two conditions: first, it impacts the company in terms of growth, cost, risk or trust; second, it is important to the company's stakeholders, such as consumers, customers, employees, governments, investors, NGOs and suppliers. A material issue can thus have a major impact on the financial, economic, reputational and legal aspects of a business, as well as on the company's system of internal and external stakeholders. A materiality assessment is an analysis process that helps define which aspects are most important to a company and its stakeholders. Figure 1 offers a visualization of the prioritization of material issues.

Companies undergo such an assessment as part of engaging stakeholders and reporting on their business practices, while also identifying the most urgent and important impact areas for themselves and their stakeholders. Investors, in particular, will often want to know how the identified issues are integrated into business strategy and what impact these material issues will have on value creation. While companies choose different mediums for their disclosures, materiality ensures that important issues are not missed: omitting material issues could otherwise lead to reputational damage or distrust among stakeholders.
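The two-axis prioritization described above can be sketched in code. The following is a minimal illustrative model, not YAGHMA's actual tool: issue names, the 1–5 scoring scale and the materiality threshold are all hypothetical assumptions chosen for the example.

```python
# Hypothetical materiality prioritization: each issue is scored 1-5 on
# importance to stakeholders and on impact on the business (the two axes
# of a materiality matrix), then ranked by combined score.
issues = {
    "bias and discrimination": {"stakeholders": 5, "business": 4},
    "carbon footprint of AI":  {"stakeholders": 3, "business": 2},
    "data protection":         {"stakeholders": 5, "business": 5},
    "social isolation":        {"stakeholders": 2, "business": 1},
}

def prioritize(issues, threshold=7):
    """Rank issues by combined score and flag those at or above an
    (assumed) materiality threshold."""
    ranked = sorted(issues.items(),
                    key=lambda kv: kv[1]["stakeholders"] + kv[1]["business"],
                    reverse=True)
    return [(name, s["stakeholders"] + s["business"],
             s["stakeholders"] + s["business"] >= threshold)
            for name, s in ranked]

for name, score, material in prioritize(issues):
    print(f"{name}: {score}" + (" (material)" if material else ""))
```

In practice the scores would come from stakeholder engagement and internal analysis rather than fixed numbers, but the ranking logic mirrors how a materiality matrix separates high-priority issues from the rest.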

[Figure 1: Materiality matrix, plotting importance to stakeholders against impact on the business]

Material issues in AI

While the materiality assessment allows for the prioritization of material issues relevant to the company and its stakeholders, categorizing those issues into the four impact areas (ethical, legal, social and environmental) throughout the lifecycle of the AI system allows the company to review whether the prioritized impact performances are being upheld, holding the company accountable in all respects of its business and throughout the lifecycle of its AI. By AI lifecycle, we refer to three stages: the design or development stage, the deployment stage, and the use stage. At each stage the AI system will have different performance impacts as well as different criteria against which to weigh them, so it is important to assess the system against its performance criteria at every stage. At the design stage, for example, material issues such as security, purpose, and bias and discrimination (to name a few) can be assessed against the system's performance.

However, even with the most carefully designed AI systems, manufacturers, programmers and designers will not be able to control or predict what the system will experience once it has left their care. Take bias: in 2018, Amazon discontinued its AI-based recruitment system because it was biased against women. While the designers and developers may not have intended the system to be biased, it had learned the bias from historical data. In such a situation, assessing the AI for bias at all stages of its lifecycle and implementing iteration phases to audit its impact could have mitigated such societal effects. The same applies to the deployment and use phases. While different AI systems will, no doubt, have differing impacts and requirements during their lifecycles, discovering their material issues will allow you to realize when and how to prioritize them.
YAGHMA applies the materiality assessment and provides a materiality matrix covering the ethical, legal, environmental and social aspects of an AI system throughout its lifecycle. The following offers examples of each:

  • Ethical impacts of AI: e.g. human rights, privacy and surveillance, bias and discrimination, role of human judgement

  • Legal impacts of AI: e.g. data protection, intellectual property, decision-making (public)

  • Societal impacts of AI: e.g. automation leading to job loss, social isolation

  • Environmental impacts of AI: e.g. carbon footprint of AI, green data
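One way to picture the categorization above is a small registry that tags each material issue with an impact area and the lifecycle stages at which it should be reviewed. This is a hedged sketch only: the stage and area names follow the article, but the data structure, function names and example entries are hypothetical.

```python
# Lifecycle stages and impact areas as named in the article.
LIFECYCLE_STAGES = ("design", "deployment", "use")
IMPACT_AREAS = ("ethical", "legal", "societal", "environmental")

registry = []

def register(issue, area, stages):
    """Record a material issue, its impact area, and the lifecycle
    stages at which it must be assessed."""
    assert area in IMPACT_AREAS and all(s in LIFECYCLE_STAGES for s in stages)
    registry.append({"issue": issue, "area": area, "stages": set(stages)})

# Example entries (hypothetical assignments). Bias is registered for every
# stage, echoing the Amazon recruitment example: bias that escapes design
# review can still surface in deployment and use.
register("bias and discrimination", "ethical", LIFECYCLE_STAGES)
register("data protection", "legal", ("design", "deployment"))
register("carbon footprint of AI", "environmental", ("design", "use"))

def issues_at(stage):
    """All registered material issues to assess at a given lifecycle stage."""
    return [e["issue"] for e in registry if stage in e["stages"]]

print(issues_at("use"))
```

The point of the structure is that a single issue can, and often should, appear at more than one stage, so an audit at deployment or use does not silently assume the design-stage assessment still holds.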

In this way, YAGHMA can help you assess and substantially improve these impact performances of your AI, make informed decisions more quickly and efficiently on how to improve your AI system, and grow the trustworthiness of future AI. In conclusion, YAGHMA's AI impact assessment tool helps ensure that your AI system is socially desirable, environmentally friendly, and ethically and legally acceptable.

The overall process includes:

1) the materiality assessment;

2) contextualizing the identified material issues;

3) evaluating and assessing these material issues against your company’s priorities;

4) tailoring an AI project strategy based on the results;

5) auditing/monitoring to ensure the AI system remains in line with your company’s priorities.


Accompanying innovation to build a better tomorrow

Address

YAGHMA, Poortweg 6C,

2612 PA Delft, 

Netherlands

Contact us

Get in touch for inquiries and collaboration opportunities.

Legal and Privacy

Privacy Policy

Terms of Service

Cookie Policy

Follow us

Copyright © 2025 YAGHMA, All rights reserved.
