{"id":1916,"date":"2021-06-07T10:47:00","date_gmt":"2021-06-07T08:47:00","guid":{"rendered":"https:\/\/insights.ieseg.fr\/?p=1916"},"modified":"2024-03-19T14:40:48","modified_gmt":"2024-03-19T13:40:48","slug":"implementing-interpretable-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/insights.ieseg.fr\/en\/resource-center\/big-data-ai\/implementing-interpretable-artificial-intelligence\/","title":{"rendered":"Implementing interpretable Artificial Intelligence (AI) in an organization: an interview with professors Coussement and De Caigny"},"content":{"rendered":"\n<p><em><strong>Companies worldwide are increasingly turning to Artificial Intelligence (AI). Recent findings of <a href=\"https:\/\/www.mckinsey.com\/business-functions\/mckinsey-analytics\/our-insights\/global-survey-the-state-of-ai-in-2020#\" target=\"_blank\" rel=\"noreferrer noopener\">a global survey by McKinsey &amp; Company<\/a>, for example, show that half of respondents reported that their organization had adopted AI in at least one business function (marketing, finance, etc.) and that some of these companies \u201cplan to invest even more in AI in response to the COVID-19 pandemic and its acceleration of all things digital\u201d.<\/strong><\/em><\/p>\n\n\n\n<p>Kristof COUSSEMENT, professor at <a href=\"https:\/\/insights.ieseg.fr\/en\/about-ieseg\/\">I\u00c9SEG School of Management<\/a> and director of the I\u00c9SEG center of excellence for marketing analytics (<a href=\"https:\/\/icma.ieseg.fr\/\">ICMA<\/a>), and professor Arno DE CAIGNY are experts in big data analytics and the application of AI models in organizations. We spoke to them about their research into the development of interpretable AI (i.e. 
AI that can be used to provide and explain practical insights for improved business decision making) and asked them to give some concrete advice for organizations on its implementation.<\/p>\n\n\n\n<h2><strong>Could you briefly explain what interpretable Artificial Intelligence is and why it is important?<\/strong><\/h2>\n\n\n\n<p><em><strong>Kristof COUSSEMENT<\/strong><\/em>: AI and the use of data science continue to grow in importance. They are now deployed across a variety of business functions, including marketing (for example, with customer relationship management and direct marketing), but also in finance for credit risk or fraud detection, as well as by HR teams to improve recruitment or detect burn-out. There are several key components to consider to successfully implement AI: <strong>data<\/strong> in its different forms, including numeric data and unstructured data such as text, audio, and images; the <strong>various AI methodologies<\/strong> used to crunch this data (including machine learning and deep learning); and finally the <strong>insights and knowledge generated<\/strong> for organizations, so that value is added to their businesses. These are the three main pillars of the AI process.<\/p>\n\n\n\n<p>We clearly see in our field that many organizations nowadays focus on the algorithmic side of AI, trying to implement the most efficient and effective algorithms. We believe that this is certainly important, but companies should not just look at the algorithms themselves; they should focus on understanding what the algorithms are doing and on delivering effective insights and knowledge through them. 
This is where interpretable AI comes into play: it refers to the ability of business users to understand and explain the data science model in question and to capitalize on this knowledge for improved decision-making.<\/p>\n\n\n\n<p>It therefore connects the technological and methodological side of data science with the business knowledge that exists within an organization. It is crucial that these go hand in hand to build trust within an organization. If you do not gain the trust of the people using your algorithms, they will never be implemented effectively. Finally, this understanding of algorithms enables managers to act on the data, set up concrete plans of action, and potentially avoid any biases that might be introduced by an algorithm.<\/p>\n\n\n\n<h2><strong>Can you explain the findings of some of your recent research?<\/strong><\/h2>\n\n\n\n<p><em><strong>Arno DE CAIGNY<\/strong><\/em>: We have published a number of papers in this area, but I would like to highlight one paper published at the end of 2018, which looks at interpretable AI in the field of customer relationship management*. This research focuses specifically on customer churn prediction, one of the key pillars of CRM. Put simply, churn refers to the rate at which a company loses customers, for example when they stop buying products or services. Our paper is listed as one of the most cited publications since 2018 in the CNRS cat. 1 European Journal of Operational Research, a demonstration of the importance of this topic in today\u2019s business and academic context.<\/p>\n\n\n\n<p>We focused on the development of a new predictive tool that can be used to help managers reduce churn. There are two important components for such tools: the first is predictive performance \u2013 identifying which customers are likely to leave. The second is interpretability. 
This means that the model created should provide insights into the factors that might be driving customers away, so that organizations can put measures in place to improve customer retention.<\/p>\n\n\n\n<p>Our model (the <strong>logit leaf<\/strong> model) combines both of these elements: predictive performance and interpretability of insights. It creates different segments of customers, which boosts predictive performance and allows the drivers of churn to be interpreted at the customer segment level rather than across the entire customer base.<\/p>\n\n\n\n<p>Imagine, for example, that you have two different types of customers who leave \u2013 those leaving because of price or cost factors and those who leave due to poor customer service. Companies could therefore benefit from targeting these different types of customers with different incentives to retain them. The <strong>logit leaf<\/strong> model allows organizations to achieve this goal by segmenting and separating groups of customers, while traditional models have generally taken a single, overall approach to this problem.<\/p>\n\n\n\n<h2><strong>What are the main messages that companies should bear in mind when developing or implementing algorithms to enhance customer relationship management?<\/strong><\/h2>\n\n\n\n<p><em><strong>Kristof COUSSEMENT and Arno DE CAIGNY<\/strong><\/em>: There are several messages that we would give to companies concerning interpretable AI.<\/p>\n\n\n\n<p>1) The first relates to <strong>the richness and quality of the data<\/strong>. This is key and is the first step to successfully implementing interpretable AI in an organization. If you crunch poor data, you will get very poor results and insights. It\u2019s not just a question of quantity; it\u2019s important to have a richness or diversity of data: not just numerical data, but also data potentially coming from call centers, social media, etc.<\/p>\n\n\n\n<p>2) The second relates to <strong>the choice of algorithm<\/strong>. 
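<\/p>\n\n\n\n<p>To make this choice more tangible in code, the sketch below is loosely inspired by the logit leaf idea discussed above: a shallow decision tree first segments the customer base, and an interpretable logistic regression is then fitted within each segment, so every group exposes its own churn drivers. This is an illustrative sketch on synthetic data, not the authors\u2019 implementation, and all feature names are invented.<\/p>

```python
# Sketch of a logit-leaf-style approach: a shallow tree segments
# customers, then one logistic regression is fitted per segment
# so churn drivers can be read off per group (illustrative only).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 3))            # invented features, e.g. price sensitivity, service calls, tenure
churn = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=600) > 0).astype(int)

# Step 1: segment the customer base with a shallow, readable tree.
tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=60, random_state=0)
segments = tree.fit(X, churn).apply(X)   # leaf id per customer

# Step 2: an interpretable logistic regression inside each segment.
segment_models = {}
for leaf in np.unique(segments):
    mask = segments == leaf
    if len(np.unique(churn[mask])) < 2:  # skip segments with a single class
        continue
    segment_models[leaf] = LogisticRegression().fit(X[mask], churn[mask])

# Each segment now exposes its own coefficients (its churn drivers).
for leaf, model in segment_models.items():
    print('segment', leaf, 'drivers:', model.coef_.round(2))
```

<p>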
Data scientists often refer to black box and white box solutions. With the former, you essentially drop in the data and the black box crunches the information without offering deeper knowledge of the process. While black box models are generally good for predictive performance, they do not provide the interpretable insights of a white box solution. It is important, therefore, that organizations carefully balance the choice of their algorithmic models. Of course, the choice of model really depends on a company\u2019s maturity in terms of data analytics. For example, for startups or young companies that might not yet be working with data analytics, interpretable models are likely to be more useful for convincing business users of their value.<\/p>\n\n\n\n<p>3) The <strong>presentation of the outcomes<\/strong> is crucial. Therefore, everything that deals with data visualization is key. The findings need to be easily understandable and explainable for managers in the organization.<\/p>\n\n\n\n<p>4) The final point is that <strong>interpretable AI needs to be actionable<\/strong>. This means that the AI outcomes need to provide insights into actions that can be put in place. For example, with customer loyalty schemes, models might provide clear insights into which groups of customers should potentially be moved into a more or less rewarding program to drive engagement.<\/p>\n\n\n\n<p>As a final consideration, interpretable AI requires adding a \u201chuman\u201d touch to the technological and methodological field of data science. For example, over recent years, various profiles have been added alongside the traditional <em>data scientist<\/em>, <em>data engineer<\/em>, and <em>data analyst<\/em> roles. For instance, many organizations hire <em>data strategists<\/em>, who are positioned between the business and the data science department. 
These <em>data strategists<\/em>&nbsp;have both the technical and methodological competences in AI and an in-depth understanding of a company\u2019s strategy and vision.<\/p>\n\n\n<div class=\"methodologie\">\r\n\t\t<div class=\"element\">\r\n\t\t<p class=\"title\">ICMA<\/p>\r\n\t\t<p>ICMA is the Center of Excellence for Marketing Analytics of I\u00c9SEG School of Management. It is a knowledge hub formed by a team of academic experts with a proven track record in the field of marketing analytics that aims to support teaching, research and business projects.<\/p>\n<\/div>\r\n<\/div>\n\n\n<p>*De Caigny, A., Coussement, K., &amp; De Bock, K. W. (2018). <a href=\"https:\/\/doi.org\/10.1016\/j.ejor.2018.02.009\" target=\"_blank\" rel=\"noreferrer noopener\">A new hybrid classification algorithm for customer churn prediction based on logistic regression and decision trees. European Journal of Operational Research, 269(2), 760\u2013772<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Companies worldwide are increasingly turning to Artificial Intelligence (AI). Recent findings of a global survey by McKinsey &amp; Company, for example, show that half of respondents reported that their organization had adopted AI in at least one business function (marketing, finance, etc.) 
and that some of these companies \u201cplan to invest even more in AI <a href=\"https:\/\/insights.ieseg.fr\/en\/resource-center\/big-data-ai\/implementing-interpretable-artificial-intelligence\/\" class=\"more-link\">&#8230;<span class=\"screen-reader-text\">  Implementing interpretable Artificial Intelligence (AI) in an organization: an interview with professors Coussement and De Caigny<\/span><\/a><\/p>\n","protected":false},"author":4,"featured_media":1908,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[482,489],"tags":[343,251,431,18,445],"article-type":[12],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v19.5.1 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Implementing interpretable artificial Intelligence (AI) : interview<\/title>\n<meta name=\"description\" content=\"Implementing interpretable Artificial Intelligence (AI) in an organization: an interview with professors Coussement and De Caigny\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/insights.ieseg.fr\/en\/resource-center\/big-data-ai\/implementing-interpretable-artificial-intelligence\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Implementing interpretable artificial Intelligence (AI) : interview\" \/>\n<meta property=\"og:description\" content=\"Implementing interpretable Artificial Intelligence (AI) in an organization: an interview with professors Coussement and De Caigny\" \/>\n<meta property=\"og:url\" content=\"https:\/\/insights.ieseg.fr\/en\/resource-center\/big-data-ai\/implementing-interpretable-artificial-intelligence\/\" \/>\n<meta property=\"og:site_name\" content=\"I\u00c9SEG Insights\" \/>\n<meta property=\"article:published_time\" 
content=\"2021-06-07T08:47:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-03-19T13:40:48+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/insights.ieseg.fr\/wp-content\/uploads\/2022\/08\/iStock-1172878142-1.jpg-1200px-1.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1200\" \/>\n\t<meta property=\"og:image:height\" content=\"710\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Alice Goyau\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Alice Goyau\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/insights.ieseg.fr\/en\/resource-center\/big-data-ai\/implementing-interpretable-artificial-intelligence\/\",\"url\":\"https:\/\/insights.ieseg.fr\/en\/resource-center\/big-data-ai\/implementing-interpretable-artificial-intelligence\/\",\"name\":\"Implementing interpretable artificial Intelligence (AI) : interview\",\"isPartOf\":{\"@id\":\"https:\/\/insights.ieseg.fr\/#website\"},\"datePublished\":\"2021-06-07T08:47:00+00:00\",\"dateModified\":\"2024-03-19T13:40:48+00:00\",\"author\":{\"@id\":\"https:\/\/insights.ieseg.fr\/#\/schema\/person\/fd2e6555f747249b815351e47eb76c04\"},\"description\":\"Implementing interpretable Artificial Intelligence (AI) in an organization: an interview with professors Coussement and De 
Caigny\",\"breadcrumb\":{\"@id\":\"https:\/\/insights.ieseg.fr\/en\/resource-center\/big-data-ai\/implementing-interpretable-artificial-intelligence\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/insights.ieseg.fr\/en\/resource-center\/big-data-ai\/implementing-interpretable-artificial-intelligence\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/insights.ieseg.fr\/en\/resource-center\/big-data-ai\/implementing-interpretable-artificial-intelligence\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"I\u00c9SEG Insights\",\"item\":\"https:\/\/insights.ieseg.fr\/en\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Big Data &amp; AI\",\"item\":\"https:\/\/insights.ieseg.fr\/resource-center\/big-data-ia\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Implementing interpretable Artificial Intelligence (AI) in an organization: an interview with professors Coussement and De Caigny\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/insights.ieseg.fr\/#website\",\"url\":\"https:\/\/insights.ieseg.fr\/\",\"name\":\"I\u00c9SEG Insights\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/insights.ieseg.fr\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/insights.ieseg.fr\/#\/schema\/person\/fd2e6555f747249b815351e47eb76c04\",\"name\":\"Alice Goyau\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/insights.ieseg.fr\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/ac04b4b53ce0324d446254c669d2d481?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/ac04b4b53ce0324d446254c669d2d481?s=96&d=mm&r=g\",\"caption\":\"Alice 
Goyau\"},\"url\":\"https:\/\/insights.ieseg.fr\/en\/resource-center\/author\/a-goyau-894651\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Implementing interpretable artificial Intelligence (AI) : interview","description":"Implementing interpretable Artificial Intelligence (AI) in an organization: an interview with professors Coussement and De Caigny","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/insights.ieseg.fr\/en\/resource-center\/big-data-ai\/implementing-interpretable-artificial-intelligence\/","og_locale":"en_US","og_type":"article","og_title":"Implementing interpretable artificial Intelligence (AI) : interview","og_description":"Implementing interpretable Artificial Intelligence (AI) in an organization: an interview with professors Coussement and De Caigny","og_url":"https:\/\/insights.ieseg.fr\/en\/resource-center\/big-data-ai\/implementing-interpretable-artificial-intelligence\/","og_site_name":"I\u00c9SEG Insights","article_published_time":"2021-06-07T08:47:00+00:00","article_modified_time":"2024-03-19T13:40:48+00:00","og_image":[{"width":1200,"height":710,"url":"https:\/\/insights.ieseg.fr\/wp-content\/uploads\/2022\/08\/iStock-1172878142-1.jpg-1200px-1.jpg","type":"image\/jpeg"}],"author":"Alice Goyau","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Alice Goyau","Est. 
reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/insights.ieseg.fr\/en\/resource-center\/big-data-ai\/implementing-interpretable-artificial-intelligence\/","url":"https:\/\/insights.ieseg.fr\/en\/resource-center\/big-data-ai\/implementing-interpretable-artificial-intelligence\/","name":"Implementing interpretable artificial Intelligence (AI) : interview","isPartOf":{"@id":"https:\/\/insights.ieseg.fr\/#website"},"datePublished":"2021-06-07T08:47:00+00:00","dateModified":"2024-03-19T13:40:48+00:00","author":{"@id":"https:\/\/insights.ieseg.fr\/#\/schema\/person\/fd2e6555f747249b815351e47eb76c04"},"description":"Implementing interpretable Artificial Intelligence (AI) in an organization: an interview with professors Coussement and De Caigny","breadcrumb":{"@id":"https:\/\/insights.ieseg.fr\/en\/resource-center\/big-data-ai\/implementing-interpretable-artificial-intelligence\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/insights.ieseg.fr\/en\/resource-center\/big-data-ai\/implementing-interpretable-artificial-intelligence\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/insights.ieseg.fr\/en\/resource-center\/big-data-ai\/implementing-interpretable-artificial-intelligence\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"I\u00c9SEG Insights","item":"https:\/\/insights.ieseg.fr\/en\/"},{"@type":"ListItem","position":2,"name":"Big Data &amp; AI","item":"https:\/\/insights.ieseg.fr\/resource-center\/big-data-ia\/"},{"@type":"ListItem","position":3,"name":"Implementing interpretable Artificial Intelligence (AI) in an organization: an interview with professors Coussement and De Caigny"}]},{"@type":"WebSite","@id":"https:\/\/insights.ieseg.fr\/#website","url":"https:\/\/insights.ieseg.fr\/","name":"I\u00c9SEG 
Insights","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/insights.ieseg.fr\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/insights.ieseg.fr\/#\/schema\/person\/fd2e6555f747249b815351e47eb76c04","name":"Alice Goyau","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/insights.ieseg.fr\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/ac04b4b53ce0324d446254c669d2d481?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/ac04b4b53ce0324d446254c669d2d481?s=96&d=mm&r=g","caption":"Alice Goyau"},"url":"https:\/\/insights.ieseg.fr\/en\/resource-center\/author\/a-goyau-894651\/"}]}},"_links":{"self":[{"href":"https:\/\/insights.ieseg.fr\/en\/wp-json\/wp\/v2\/posts\/1916"}],"collection":[{"href":"https:\/\/insights.ieseg.fr\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/insights.ieseg.fr\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/insights.ieseg.fr\/en\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/insights.ieseg.fr\/en\/wp-json\/wp\/v2\/comments?post=1916"}],"version-history":[{"count":4,"href":"https:\/\/insights.ieseg.fr\/en\/wp-json\/wp\/v2\/posts\/1916\/revisions"}],"predecessor-version":[{"id":7133,"href":"https:\/\/insights.ieseg.fr\/en\/wp-json\/wp\/v2\/posts\/1916\/revisions\/7133"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/insights.ieseg.fr\/en\/wp-json\/wp\/v2\/media\/1908"}],"wp:attachment":[{"href":"https:\/\/insights.ieseg.fr\/en\/wp-json\/wp\/v2\/media?parent=1916"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/insights.ieseg.fr\/en\/wp-json\/wp\/v2\/categories?post=1916"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/insights.ieseg.fr\/en\/wp-json\/wp\/v2\/tags?post=1916"},{"taxonomy":"article-type","embeddable":tru
e,"href":"https:\/\/insights.ieseg.fr\/en\/wp-json\/wp\/v2\/article-type?post=1916"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}