What is Xpdeep?

Trustworthy Deep Models

Xpdeep technology fits into the broader framework of trustworthy AI and eXplainable Artificial Intelligence (XAI), which is a strategic priority in many fields, particularly in critical and sensitive areas such as defence, environmental protection and industry. While deep learning models are among the most powerful tools in AI, essential for processing large and complex data such as sensor data, images, and video, they are also among the most opaque and difficult to interpret. This lack of transparency significantly limits their deployment and potential. Xpdeep offers a new self-explainable deep learning framework, enabling the simultaneous generation of deep models and their complete and intelligible explanations, without compromising performance.

Xpdeep: An Explainable Deep Learning Framework

Xpdeep offers a unique self-explainable deep learning framework that, through an end-to-end process, enables the simultaneous generation of robust and high-performing deep models along with their complete and intelligible explanations.

Xpdeep provides AI/ML engineers and scientists with the tools needed to understand, optimize, and explain their deep models to meet business objectives and constraints, and to address issues of trust, compliance, and risk management related to deployment in software or equipment.

When are Xpdeep explanations required?

  • For deployment in trusted software and hardware environments.
  • To ensure regulatory compliance.
  • To identify risk factors in specific contexts such as legal, certification, insurance, and financial domains.

What does Xpdeep explain and why?

  • Explain the model and its inferences, to build adoption and trust.
  • Explain generated inferences, to analyze and control future predictions.
  • Explain past predictions, to audit and investigate critical scenarios and incidents.

Xpdeep: A Versatile Deep Learning Framework with State-of-the-Art Performance

Xpdeep's explainable deep models have demonstrated their superiority over state-of-the-art deep models in several challenging applications, including landing runway detection, drone collision avoidance, and human activity recognition.

Xpdeep can be applied to a wide range of tasks, both basic and advanced, such as multi-target regression, joint classification-regression analysis, synchronous or asynchronous forecasting, and early anomaly and object detection. Xpdeep technology has enabled the easy and efficient analysis of large volumes of data, up to 100 million individuals, as well as very high-resolution images.

Recent applications demonstrate the impressive capabilities of Xpdeep technology, particularly its ability to:

  • Learn powerful, explainable deep models.
  • Explain existing deep models while preserving their performance without increasing their complexity.
  • Provide rich and intelligible explanations, with quality (reliability and robustness) proven through extensive experimentation.
  • Easily reduce the complexity of deep models by using strategic explanations.
  • Speed up inference times by reducing the complexity of the learned model.

Xpdeep Models: Rapid Design and Integration

Xpdeep is a groundbreaking AI/ML tool designed to integrate easily into production. It enables model design and training to be tailored to business needs.

As an explainable AI/ML framework, Xpdeep enables you to:

  • Design powerful, explainable models from scratch.
  • Explain existing models without altering their architecture (such as YOLO or DETR).
  • Deliver clear explanations via XpViz, a customizable interface designed for both AI experts and non-experts.
  • Provide explanations for both the functioning of the deep model and its inferences.
  • Quickly identify model errors and instabilities for more efficient and faster optimization.
  • Identify redundant input factors to build more efficient and less complex models.
  • Adjust model complexity to manage underfitting and overfitting in predictive regions.

As a production-ready solution, Xpdeep models:

  • Are compatible with any deep learning architecture (CNN, LSTM, Transformer, etc.).
  • Are applicable to many data types (tabular, time series, images, text, etc.).
  • Are compatible with PyTorch through Xpdeep APIs, enabling the training, validation, and testing of explainable deep models to follow standard procedures (a minimal sketch of such a procedure follows this list).
  • Can be deployed either in the cloud or on-premises.
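
To ground the PyTorch compatibility point above, here is a minimal sketch of the standard train/validate procedure that, per this page, Xpdeep models follow. Since the Xpdeep API itself is not documented here, a plain PyTorch module stands in for an explainable Xpdeep model; everything in the sketch is ordinary PyTorch, and none of it is Xpdeep-specific code.

```python
# Minimal sketch: the standard PyTorch training/validation loop referred to
# above. A plain nn.Sequential stands in for an Xpdeep explainable model,
# whose actual classes are not shown in this document.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic tabular data: 1,000 individuals, 16 features, 3 classes.
X = torch.randn(1000, 16)
y = torch.randint(0, 3, (1000,))
train_loader = DataLoader(TensorDataset(X[:800], y[:800]), batch_size=64, shuffle=True)
val_loader = DataLoader(TensorDataset(X[800:], y[800:]), batch_size=64)

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    model.train()
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss_fn(model(xb), yb).backward()
        optimizer.step()

    # An explainable model would be validated and tested the same way.
    model.eval()
    correct = 0
    with torch.no_grad():
        for xb, yb in val_loader:
            correct += (model(xb).argmax(dim=1) == yb).sum().item()
    print(f"epoch {epoch}: val accuracy = {correct / 200:.2f}")
```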

Xpdeep Explanations and their Representations

Unlike state-of-the-art post-hoc explanation approaches, Xpdeep provides two types of explanations:

  1. Explanations related to the deep model and its functioning: these explanations provide detailed insights into the model's decisions, including the key features involved and their significance. They also offer diagnostic explanations to assess the model's quality, uncover its strengths and weaknesses, and help optimize both its performance and complexity.
  2. Causal explanations related to the model's inferences: these explanations reveal the factors that caused or influenced the predictions generated by the model. They are generated for a single individual (local explanations) or for a group of individuals (semi-local explanations).

Both types of explanations — those related to the model's functioning and its inferences and their causes — are essential for designing trustworthy and robust deep models. They also play a crucial role in controlling and refining the model's operation and predictions. Most existing explainable methods (post-hoc methods) are limited to explaining an individual's inference (local explanations), without providing explanations for the learned model or its functioning, and they are often ineffective at explaining inferences for groups of individuals.

Model functioning explanations are represented by a graph called the "Model Decision Graph", which illustrates the hierarchy of decisions learned by the model to achieve a given task. It is characterized by the following features (a simplified structural sketch follows this list):

  • Each node in the graph represents a decision learned by the model, allowing the population to be segmented into as many groups as there are branches.
  • Each decision is multifactorial and nonlinear, giving it greater expressive power than tree-based methods such as decision trees, Random Forest, and XGBoost.
  • Each decision is expressed in an intelligible way, understandable by both AI experts and non-experts, within the input data space.
  • The leaves of the graph represent the final predictive regions.
  • Each prediction is generated through a sequence of decisions (from top to bottom) that outline a path in the Model Decision Graph.
  • Each node and leaf provides explanations, including quality metrics, performance metrics, statistics, charts, and more, related to the population of individuals associated with that node and leaf.
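
For illustration only, here is one hypothetical, heavily simplified way such a graph could be represented as a data structure. None of the class or field names below come from Xpdeep; they merely mirror the properties listed above: internal nodes carry a learned multifactorial decision plus per-node statistics, and leaves are the final predictive regions.

```python
# Hypothetical, simplified structure mirroring the Model Decision Graph
# properties listed above. All names and values are invented for this sketch.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class DecisionNode:
    # Input features involved in this learned decision and their importance.
    feature_importance: dict[str, float]
    # Quality/performance statistics for the sub-population at this node.
    node_stats: dict[str, float] = field(default_factory=dict)
    # One child per branch of the decision (empty for a leaf).
    branches: list[DecisionNode] = field(default_factory=list)
    # Final predictive region label, set only on leaves.
    region: str | None = None

    def is_leaf(self) -> bool:
        return not self.branches


# Toy graph: a root decision segmenting the population into two predictive regions.
root = DecisionNode(
    feature_importance={"heart_rate": 0.7, "acceleration": 0.3},
    node_stats={"population_share": 1.0, "accuracy": 0.94},
    branches=[
        DecisionNode({"heart_rate": 1.0}, {"population_share": 0.6}, region="walking"),
        DecisionNode({"acceleration": 1.0}, {"population_share": 0.4}, region="running"),
    ],
)
```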

Causal explanations are represented by a graph called the "Inference Graph": an instance of the Model Decision Graph that depicts all the decisions involved in generating predictions for an individual or a group of individuals. It is characterized by the following features (a sketch follows this list):

  • Inference decisions may involve part or all of the Model Decision Graph.
  • The nature of the explanations at each node or leaf is identical to that of the Model Decision Graph, but the values may differ as they are related to the individuals subject to inference.
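
Continuing the hypothetical sketch above (it reuses the DecisionNode toy structure), an Inference Graph can be pictured as the sub-path of the Model Decision Graph that one individual actually traverses, with node-level values recomputed for that individual. The routing rule below is invented purely for illustration and does not reflect Xpdeep's actual decision mechanism.

```python
# Continuation of the DecisionNode sketch above: derive the "Inference
# Graph" path for one individual as the root-to-leaf sequence of decisions
# used for its prediction. The routing rule is invented for illustration.
def trace_inference(node: DecisionNode, individual: dict[str, float]) -> list[DecisionNode]:
    """Return the sequence of nodes (root to leaf) used for one prediction."""
    path = [node]
    while not node.is_leaf():
        # Invented routing: follow the branch whose features score highest
        # for this individual.
        node = max(
            node.branches,
            key=lambda child: sum(
                individual.get(feat, 0.0) * weight
                for feat, weight in child.feature_importance.items()
            ),
        )
        path.append(node)
    return path


path = trace_inference(root, {"heart_rate": 0.9, "acceleration": 0.1})
print(" -> ".join(n.region or "decision" for n in path))  # decision -> walking
```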

Xpdeep vs. Post-Hoc Explanation Methods

The Xpdeep solution outperforms post-hoc explanation methods in several key aspects, summarized as follows:

  • Model explanation by design: post-hoc methods, which operate externally on 'black-box' models, often fall short of providing a comprehensive understanding of how these models work. In contrast, Xpdeep integrates explainability directly into its design, offering clear and detailed insights into the model's decision-making process, including the features influencing each decision, as well as its strengths and weaknesses.
  • Reliable and robust explanations: post-hoc explanations, being generated from the outside, often lack precision and comprehensiveness. In contrast, Xpdeep produces explanations directly from within the model, eliminating the need for post-processing or human intervention and yielding more accurate and detailed insights.
  • Explanations at different levels of granularity: unlike post-hoc approaches, which offer explanations on an individual basis, Xpdeep provides explanations not only for individual cases but also for groups of individuals and entire datasets.
  • Rich and comprehensive explanations: while post-hoc methods typically offer explanations as weight vectors for individual cases, Xpdeep provides an interactive and flexible interface that delivers detailed explanations immediately after model training, facilitating a deeper understanding of both the model's decisions and its inferences.
  • Native explainable deep models for time series data: explanations provided by post-hoc methods for time series data are often adapted from techniques originally designed for image data, making them unsuitable for complex temporal data. In contrast, Xpdeep offers native explanation solutions specifically tailored to time series data, accommodating various complexities (univariate or multivariate, synchronous or asynchronous) and tasks (classification, forecasting, multi-target horizons, etc.).