{"id":7019,"date":"2026-05-04T14:48:40","date_gmt":"2026-05-04T14:48:40","guid":{"rendered":"https:\/\/pixlex.it\/?p=7019"},"modified":"2026-05-04T14:58:46","modified_gmt":"2026-05-04T14:58:46","slug":"trasparenza-spiegabilita-sistemi-ia-ai-act","status":"publish","type":"post","link":"https:\/\/pixlex.it\/en\/ai-transparency-explainability-ai-act\/","title":{"rendered":"Meaning of transparency and explainability of an AI system and legal consequences"},"content":{"rendered":"<div data-elementor-type=\"wp-post\" data-elementor-id=\"7019\" class=\"elementor elementor-7019\" data-elementor-post-type=\"post\">\n\t\t\t\t<div class=\"elementor-element elementor-element-e040c71 e-flex e-con-boxed e-con e-parent\" data-id=\"e040c71\" data-element_type=\"container\" data-settings=\"{&quot;content_width&quot;:&quot;boxed&quot;}\" data-core-v316-plus=\"true\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t<div class=\"elementor-element elementor-element-64ec92b elementor-widget elementor-widget-image\" data-id=\"64ec92b\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<style>\/*! 
elementor - v3.19.0 - 07-02-2024 *\/\n.elementor-widget-image{text-align:center}.elementor-widget-image a{display:inline-block}.elementor-widget-image a img[src$=\".svg\"]{width:48px}.elementor-widget-image img{vertical-align:middle;display:inline-block}<\/style>\t\t\t\t\t\t\t\t\t\t<img fetchpriority=\"high\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/pixlex.it\/wp-content\/uploads\/2026\/05\/pexels-googledeepmind-17483868-1024x576.jpg\" class=\"attachment-large size-large wp-image-7022\" alt=\"\" srcset=\"https:\/\/pixlex.it\/wp-content\/uploads\/2026\/05\/pexels-googledeepmind-17483868-1024x576.jpg 1024w, https:\/\/pixlex.it\/wp-content\/uploads\/2026\/05\/pexels-googledeepmind-17483868-300x169.jpg 300w, https:\/\/pixlex.it\/wp-content\/uploads\/2026\/05\/pexels-googledeepmind-17483868-768x432.jpg 768w, https:\/\/pixlex.it\/wp-content\/uploads\/2026\/05\/pexels-googledeepmind-17483868-1536x864.jpg 1536w, https:\/\/pixlex.it\/wp-content\/uploads\/2026\/05\/pexels-googledeepmind-17483868-2048x1152.jpg 2048w, https:\/\/pixlex.it\/wp-content\/uploads\/2026\/05\/pexels-googledeepmind-17483868-18x10.jpg 18w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-6dda262 e-flex e-con-boxed e-con e-parent\" data-id=\"6dda262\" data-element_type=\"container\" data-settings=\"{&quot;content_width&quot;:&quot;boxed&quot;}\" data-core-v316-plus=\"true\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t<div class=\"elementor-element elementor-element-d877b2a elementor-widget elementor-widget-text-editor\" data-id=\"d877b2a\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<style>\/*! 
elementor - v3.19.0 - 07-02-2024 *\/\n.elementor-widget-text-editor.elementor-drop-cap-view-stacked .elementor-drop-cap{background-color:#69727d;color:#fff}.elementor-widget-text-editor.elementor-drop-cap-view-framed .elementor-drop-cap{color:#69727d;border:3px solid;background-color:transparent}.elementor-widget-text-editor:not(.elementor-drop-cap-view-default) .elementor-drop-cap{margin-top:8px}.elementor-widget-text-editor:not(.elementor-drop-cap-view-default) .elementor-drop-cap-letter{width:1em;height:1em}.elementor-widget-text-editor .elementor-drop-cap{float:left;text-align:center;line-height:1;font-size:50px}.elementor-widget-text-editor .elementor-drop-cap-letter{display:inline-block}<\/style>\t\t\t\t<h2><span lang=\"EN-US\">The need for a trustworthy AI<\/span><\/h2><p>The recent popularity of AI systems (especially LLMs), AI agents and, more generally, AI-based tools and applications has led to massive use of this technology across all corners of our social interactions. 
This includes any type of business, social networks, personal life and so on.<\/p><p>Beyond questions regarding the political power gained by the developers of such systems, major attention is directed towards the risks related to the use of AI systems and the resulting incidents.<\/p><p>If you are curious to dive deeper into the typical risks, incidents and hazards related to the use of AI systems, you can check the MIT AI Risk Repository or the OECD AI Incidents and Hazards Monitor.<\/p><p>The latter shows quite clearly that the number of AI-related incidents is rising rapidly alongside the surging use of AI systems:<\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-2ffcde9 e-flex e-con-boxed e-con e-parent\" data-id=\"2ffcde9\" data-element_type=\"container\" data-settings=\"{&quot;content_width&quot;:&quot;boxed&quot;}\" data-core-v316-plus=\"true\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t<div class=\"elementor-element elementor-element-fd206f7 elementor-widget elementor-widget-image\" data-id=\"fd206f7\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"1024\" height=\"439\" src=\"https:\/\/pixlex.it\/wp-content\/uploads\/2026\/05\/explainable-AI-1024x439.png\" class=\"attachment-large size-large wp-image-7021\" alt=\"explainable AI\" srcset=\"https:\/\/pixlex.it\/wp-content\/uploads\/2026\/05\/explainable-AI-1024x439.png 1024w, https:\/\/pixlex.it\/wp-content\/uploads\/2026\/05\/explainable-AI-300x129.png 300w, https:\/\/pixlex.it\/wp-content\/uploads\/2026\/05\/explainable-AI-768x329.png 768w, https:\/\/pixlex.it\/wp-content\/uploads\/2026\/05\/explainable-AI-18x8.png 18w, https:\/\/pixlex.it\/wp-content\/uploads\/2026\/05\/explainable-AI.png 1411w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" 
\/>\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-e6f081d e-flex e-con-boxed e-con e-parent\" data-id=\"e6f081d\" data-element_type=\"container\" data-settings=\"{&quot;content_width&quot;:&quot;boxed&quot;}\" data-core-v316-plus=\"true\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t<div class=\"elementor-element elementor-element-2691438 elementor-widget elementor-widget-text-editor\" data-id=\"2691438\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<p>In order to ensure that such risks are contained during the development, training and use of AI systems, several actors globally have developed risk frameworks and principles. These measures and principles aim to promote the use of a \u201ctrustworthy AI\u201d: this term refers to \u201ccharacteristics which help relevant stakeholders understand whether the AI system meets their expectations\u201d (see ISO\/IEC 22989:2022(E)).<\/p><p>For AI systems to be considered trustworthy, they often need to satisfy a range of criteria that matter to different stakeholders. 
While these criteria may sometimes influence each other and lead to trade-offs, strengthening AI trustworthiness generally helps mitigate adverse risks.<\/p><p>Typically, the characteristics of trustworthiness are tightly intertwined with social and organizational practices; the data used to train and operate AI systems; the choice of models and algorithms; the design and governance decisions made by developers; and the ways humans contribute expertise, oversight, and accountability during deployment.<\/p><p>Human judgment is essential when selecting the metrics used to evaluate these characteristics and when setting the specific threshold values those metrics must meet.<\/p><h2>Principles of a trustworthy AI<\/h2><p>While different AI risk frameworks may weight the characteristics of a trustworthy AI differently (e.g. giving more or less weight to privacy), there is a general coherence among the globally published documents and guidelines.<\/p><p>Typical principles of a trustworthy AI include (i) fairness, (ii) safety, (iii) privacy and security, (iv) transparency, (v) explainability and (vi) accountability.<\/p><h2>The concept of \u201ctransparency\u201d<\/h2><p>The term \u201ctransparency\u201d is broad and flexible, as its meaning may vary according to the context. Generally, it involves communicating appropriate information about the AI system to stakeholders. This may include, for example, explaining how the system works, how technical and non-technical documentation is maintained across the AI life cycle, the system\u2019s goals and limitations, design choices, models and so on.<\/p><p>Alongside information strictly regarding the system, transparency duties and best practices also include informing stakeholders about the data used in development (e.g. 
training, validation and testing data).<\/p><p>It is important to note that transparency typically does not mean a duty to disclose source code, other proprietary code or proprietary datasets, but rather enabling people to understand how an AI system is developed, trained and deployed, and how it works in certain uses or environments.<\/p><h2>The concept of \u201cexplainability\u201d<\/h2><p>The concept of explainability differs from that of transparency. In particular, it refers to explaining to users how the AI system produces a certain output or reaches a specific decision. This means giving clear and accessible information to stakeholders so that they can understand what elements led to a certain outcome and how people (negatively) affected by the outcome can challenge it.<\/p><p>Transparency and explainability both enable a more trustworthy AI, but they are not synonyms. While transparency relates to describing the system (both technical and non-technical aspects), explainability specifically relates to describing how the system goes from input to output and what factors influence the outcome.<\/p><p>Consequently, a system may be transparent but not explainable, or vice versa.<\/p><h2>What to do in practice<\/h2><p>Providers operating in Europe or with European customers will be bound to the principles of transparency and explainability by hard law, namely the AI Act. This regulation imposes a duty to design and develop high-risk AI systems \u201cin such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system\u2019s output and use it appropriately\u201d. In particular, under Art. 
13 (3) of the AI Act, such high-risk AI systems shall be accompanied by instructions which provide a wide range of information to deployers regarding the provider, the system itself and the data (training, validation and testing), as well as information enabling deployers to interpret the output.<\/p><p>Similarly, Art. 50 of the AI Act imposes certain transparency obligations on providers and deployers of certain AI systems that expose the public to particular risks. For example, providers of AI systems that interact directly with the public, or deployers of AI systems that generate or manipulate image, audio or video content constituting a deep fake, shall inform the public that it is interacting with an AI system or with AI-generated content.<\/p><p>For companies it is fundamental to be able to show and explain the AI system, the data it uses, how it works and how it reaches outcomes. This is typically done via documentation and easy-to-understand explanations (e.g. documents, diagrams and so on).<\/p><p>\u00a0<\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>","protected":false},"excerpt":{"rendered":"<p>The need for a trustworthy AI The recent popularity of AI systems (in particular LLMs), AI agents and, more generally, AI-based tools and applications is leading to massive use of this technology in all areas of our social interactions. 
This includes any type of business, [&hellip;]<\/p>","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[27],"tags":[79],"class_list":["post-7019","post","type-post","status-publish","format-standard","hentry","category-ti","tag-artificial-intelligence"],"_links":{"self":[{"href":"https:\/\/pixlex.it\/en\/wp-json\/wp\/v2\/posts\/7019","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pixlex.it\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/pixlex.it\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/pixlex.it\/en\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/pixlex.it\/en\/wp-json\/wp\/v2\/comments?post=7019"}],"version-history":[{"count":4,"href":"https:\/\/pixlex.it\/en\/wp-json\/wp\/v2\/posts\/7019\/revisions"}],"predecessor-version":[{"id":7025,"href":"https:\/\/pixlex.it\/en\/wp-json\/wp\/v2\/posts\/7019\/revisions\/7025"}],"wp:attachment":[{"href":"https:\/\/pixlex.it\/en\/wp-json\/wp\/v2\/media?parent=7019"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/pixlex.it\/en\/wp-json\/wp\/v2\/categories?post=7019"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/pixlex.it\/en\/wp-json\/wp\/v2\/tags?post=7019"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}