Enabling a Trustworthy AI Model Lifecycle

New solutions for assessing and monitoring the impacts of Artificial Intelligence (AI) systems are needed to ensure positive impact in a complex and evolving legal and regulatory environment. An AI model lifecycle assessment is a technical evaluation that helps identify and address potential risks and unintended consequences of AI systems across a business, engendering trust and building supportive structures around AI decision-making. To enable a trustworthy AI model lifecycle, a series of qualitative and quantitative checks is performed across all lifecycle phases. YAGHMA helps companies move from trustworthy-AI principles and values to practice (and build trust) by defining and implementing solutions across the AI model lifecycle.

In our work, the AI system lifecycle comprises four phases:

i) ‘design, data and models’: a context-dependent sequence encompassing planning and design, data collection and processing, and model building;

ii) ‘verification and validation’;

iii) ‘deployment’; and

iv) ‘operation, usage and monitoring’.

Figure 1 illustrates the AI model lifecycle phases. These phases are often iterative rather than strictly sequential, and the decision to retire an AI system from operation may be taken at any point during the operation, use and monitoring phase.
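The iterative phase structure described above can be sketched as a small state model. The phase names follow the list above, but the transition table and helper below are an illustrative reading of "iterative, not strictly sequential", not an official YAGHMA schema:

```python
from enum import Enum

class Phase(Enum):
    """The four AI model lifecycle phases listed above."""
    DESIGN_DATA_MODELS = "design, data and models"
    VERIFICATION_VALIDATION = "verification and validation"
    DEPLOYMENT = "deployment"
    OPERATION_USAGE_MONITORING = "operation, usage and monitoring"

# Sentinel for taking a system out of operation.
RETIRED = "retired"

# Phases are iterative, not strictly sequential: verification can send a
# system back to design, and monitoring can trigger redesign or redeployment.
TRANSITIONS = {
    Phase.DESIGN_DATA_MODELS: {Phase.VERIFICATION_VALIDATION},
    Phase.VERIFICATION_VALIDATION: {Phase.DESIGN_DATA_MODELS, Phase.DEPLOYMENT},
    Phase.DEPLOYMENT: {Phase.OPERATION_USAGE_MONITORING},
    # Retirement may be decided at any point during operation, use and monitoring.
    Phase.OPERATION_USAGE_MONITORING: {
        Phase.DESIGN_DATA_MODELS, Phase.DEPLOYMENT, RETIRED,
    },
}

def can_move(current: Phase, target) -> bool:
    """Check whether a lifecycle transition is allowed in this sketch."""
    return target in TRANSITIONS[current]
```

For example, `can_move(Phase.VERIFICATION_VALIDATION, Phase.DESIGN_DATA_MODELS)` is true (a failed check sends the system back to design), while jumping straight from design to deployment is not allowed.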

YAGHMA supports AI actors across the AI model lifecycle in creating trustworthy AI systems: from designing, modelling, analysing and developing to operating, integrating, deploying and updating them. During each lifecycle phase, the AI Impact Assessment Tool runs a number of processes to improve the AI system against prioritized values.

Our AI Impact Assessment Tool receives feedback from each phase in order to measure and manage both the positive and negative impacts of the AI system against ethical, legal, societal and environmental requirements. At each phase, the AI system’s stakeholders are consulted to improve and adjust the system. Once trustworthy-AI value priorities are defined, we assess them against potential legal expectations and internationally applied ethical guidelines, such as those of the EU AI Act, the OECD and the IEEE. Through this, the AI model’s ecosystem is continuously adapted and improved over time.
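As a rough sketch of such a feedback loop — where all names, weights and scoring logic are illustrative assumptions and not the tool's actual interface — per-phase impact findings could be aggregated against prioritized values like this:

```python
from dataclasses import dataclass, field

@dataclass
class ImpactLedger:
    """Collects per-phase impact findings and scores them against prioritized values.

    Hypothetical sketch of the feedback loop described in the text; not
    YAGHMA's actual AI Impact Assessment Tool.
    """
    # Prioritized trustworthy-AI values; higher weight = higher priority.
    value_weights: dict
    # Each finding: (lifecycle phase, value affected, impact in [-1, 1]).
    findings: list = field(default_factory=list)

    def record(self, phase: str, value: str, impact: float) -> None:
        # Positive numbers record positive impacts, negative numbers negative ones.
        self.findings.append((phase, value, impact))

    def weighted_score(self) -> float:
        # Aggregate all findings, weighting each by its value's priority.
        return sum(self.value_weights.get(value, 0.0) * impact
                   for _phase, value, impact in self.findings)

# Illustrative usage: stakeholder-agreed priorities, then per-phase feedback.
ledger = ImpactLedger(value_weights={"privacy": 0.5, "fairness": 0.3,
                                     "transparency": 0.2})
ledger.record("design, data and models", "privacy", -0.4)  # e.g. broad data collection
ledger.record("deployment", "transparency", 0.8)           # e.g. model cards published
```

A net score below zero on a high-priority value would then feed back into the relevant lifecycle phase for adjustment, mirroring the consultation-and-improvement loop described above.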

The insights gained from the AI Impact Assessment Tool and the AI Taxonomy enable the design, deployment and uptake of an AI system with explicit consideration for individual and societal trustworthy-AI values, such as transparency, sustainability, privacy, fairness and accountability, as well as values typically considered in systems engineering, such as efficiency and effectiveness.

Through our tools, YAGHMA guides companies in sharing sufficient and appropriate information about their AI systems with their stakeholders, building trust in AI across AI lifecycle ecosystems. To this end, we build understanding of the ethical content of AI solutions, enrich information about the extraction and prioritisation of core trustworthy-AI values, improve the explainability of AI solutions, and advise on the availability of the collected information during development as well as throughout the deployment and use phases.

Accompanying innovation to build a better tomorrow

Address

YAGHMA, Poortweg 6C,

2612 PA Delft, 

Netherlands

Contact us

Get in touch for inquiries and collaboration opportunities.

Legal and Privacy

Privacy Policy

Terms of Service

Cookie Policy

Follow us

Copyright © 2025 YAGHMA, All rights reserved.

