Explainable Recommendation: A Survey and New Perspectives (Chapter 1, personal notes)
- Abstract
- 1. Introduction
- 1.1 Explainable Recommendation
- 1.2 A Historical Overview
- 1.3 Classification of the Methods
- 1.4 Explainability and Effectiveness
- 1.5 Explainability and Interpretability
- 1.6 How to Read the Survey
The full survey has 7 chapters and 80+ pages; following the table of contents, I selected parts to read and take notes on.
Abstract
Introduces the purpose (attempts to) and role (helps to) of explainable recommendation, plus the structure of the survey.
-
Purpose and role
Goal of explainable recommendation: models that generate recommendations that are both high-quality and intuitively explainable.
Explainable recommendation attempts to develop models that generate not only high-quality recommendations but also intuitive explanations.
The explanations may be post-hoc or come directly from an explainable model.
The explanations may either be post-hoc or directly come from an explainable model.
Explainable recommendation helps improve the transparency, persuasiveness, effectiveness, trustworthiness, and satisfaction of recommender systems, and also helps system designers debug the system.
Explainable recommendation helps to improve the transparency, persuasiveness, effectiveness, trustworthiness, and satisfaction of recommendation systems. It also facilitates system designers for better system debugging.
Structure of the survey
The authors first position explainable recommendation within recommender system research via the 5W classification (what, when, who, where, and why).
We first highlight the position of explainable recommendation in recommender system research by categorizing recommendation problems into the 5W, i.e., what, when, who, where, and why.
The survey's other three contributions:
- trace the historical development of explainable recommendation
- analyze current explainable recommendation from two angles: information source and algorithmic mechanism
- survey applications of explainable recommendation
1)We provide a chronological research timeline of explainable recommendation, including user study approaches in the early years and more recent model-based approaches
2)We provide a two-dimensional taxonomy to classify existing explainable recommendation research: one dimension is the information source (or display style) of the explanations, and the other dimension is the algorithmic mechanism to generate explainable recommendations.
3)We summarize how explainable recommendation applies to different recommendation tasks, such as product recommendation, social recommendation, and POI recommendation.
1. Introduction
1.1 Explainable Recommendation
Explainable recommendation addresses the why question in personalized recommendation (quote follows): it not only provides recommendation results but also explains why those items are recommended.
Explainable recommendation refers to personalized recommendation algorithms that address the problem of why– they not only provide users or system designers with recommendation results, but also explanations to clarify why such items are recommended.
By the 5W questions, personalized recommendation can be divided into:
- when: time-aware recommendation
- where: location-based recommendation
- who: social recommendation
- what: application-aware recommendation
- why: explainable recommendation
Specifically, personalized recommendation research can be classified into the 5W problems –when, where, who, what, and why, corresponding to time-aware recommendation (when), location-based recommendation (where), social recommendation (who), application-aware recommendation (what), and explainable recommendation (why), where explainable recommendation aims to answer why-type questions in recommender systems.
Explainable recommendation models fall into two types (mirroring human cognitive decision-making: think then act, versus act then explain):
- model-intrinsic
The decision mechanism is transparent, so explanations come naturally.
- model-agnostic
Also called the post-hoc explanation approach. The decision process may be opaque, but an explanation mechanism generates explanations after the decision is made.
- Explainable recommendation models can either be model-intrinsic or model-agnostic.
- The model-intrinsic approach develops interpretable models, whose decision mechanism is transparent, and thus, we can naturally provide explanations for the model decisions (Zhang et al., 2014a)
- The model-agnostic approach(Wang et al., 2018d), or sometimes called the post-hoc explanation approach (Peake and Wang, 2018), allows the decision mechanism to be a blackbox. Instead, it develops an explanation model to generate explanations after a decision has been made
1.2 A Historical Overview
Although the term explainable recommendation was coined only recently (Zhang et al., 2014a), the authors also review its pre-history.
Early personalized recommendation was mostly content-based or collaborative-filtering-based.
-
Content-based
Models user and item profiles with content information, e.g.:
- price, color, and brand of goods in e-commerce;
- genre, director, and duration of movies in review systems.
A related survey article the authors recommend:
Ferwerda, B., K. Swelsen, and E. Yang (2012). “Explaining content-based recommendations”. New York. 1–24.
-
Collaborative filtering (CF-based)
Leverages the wisdom of the crowd, and can require less effort than content-based methods, but its explanations are less intuitive than content-based ones.
For item recommendation, for example, the explanations divide into:
-
user-based CF
Explained as: recommending items that users similar to you also liked, i.e., "users that are similar to you loved this item"
-
item-based CF
Explained as: recommending items similar to those the user liked in the past, i.e., "the item is similar to your previously loved items"
A related summary paper the authors recommend:
Sarwar, B., G. Karypis, J. Konstan, and J. Riedl (2001).“Item-based collaborative filtering recommendation algorithms”. In:Proceedings of the 10th International Conference on World Wide Web. ACM.285–295.
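The item-based explanation style above can be sketched minimally as follows (the rating matrix and item indices are invented toy data, not from the survey): the already-rated item most similar to the target item supplies the "similar to your previously loved items" explanation.

```python
import numpy as np

# Hypothetical toy user-item rating matrix (rows: users, columns: items);
# zeros denote unobserved ratings. Values are illustrative only.
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two item rating column vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return (a @ b) / denom if denom else 0.0

def item_based_explanation(R, user, target_item):
    """Explain recommending `target_item` via the most similar item
    the user has already rated."""
    rated = [j for j in range(R.shape[1])
             if R[user, j] > 0 and j != target_item]
    best = max(rated, key=lambda j: cosine_sim(R[:, j], R[:, target_item]))
    return (f"Item {target_item} is recommended because it is similar "
            f"to item {best}, which you rated before.")

print(item_based_explanation(R, user=0, target_item=2))
```

A user-based variant would instead compare rows of `R` (users) and phrase the explanation as "users similar to you loved this item".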
-
Later, Latent Factor Models (LFM) were brought into CF; Matrix Factorization (MF) is one of them. Prediction accuracy improved, but explainability remained difficult.
Explainable recommendation then emerged: the Explicit Factor Model (EFM) ties the latent dimensions to explicit features.
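To make concrete why latent factors are hard to explain, here is a minimal MF sketch (toy data and plain SGD; this is not the EFM model itself): the learned factors approximate ratings well, but the individual dimensions carry no human-readable meaning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy observed (user, item, rating) triples -- illustrative data only.
ratings = [(0, 0, 5.0), (0, 1, 4.0), (1, 0, 4.0), (1, 2, 1.0), (2, 2, 5.0)]
n_users, n_items, k = 3, 3, 2                 # k latent factors

P = 0.1 * rng.standard_normal((n_users, k))   # user factor matrix
Q = 0.1 * rng.standard_normal((n_items, k))   # item factor matrix

lr, reg = 0.05, 0.01
for _ in range(500):                          # plain SGD on squared error
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]
        pu = P[u].copy()                      # cache before updating
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * pu - reg * Q[i])

# The prediction is a sum over latent dimensions which, unlike explicit
# features such as price or genre, have no human-readable interpretation.
print(round(float(P[0] @ Q[0]), 2))           # should be close to the observed 5.0
```

EFM's contribution, as the notes say, is to align such latent dimensions with explicit item features so that each dimension becomes interpretable.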
Deep learning (DL) methods have also been introduced into recommendation, with the same difficulty of explainability.
PS: a paper questioning whether DL really improves recommendation performance:
Dacrema, M. F., P. Cremonesi, and D. Jannach (2019). “Are we really making much progress? A worrying analysis of recent neural recommendation approaches”. In: Proceedings of the 13th ACM Conference on Recommender Systems. ACM. 101–109.
1.3 Classification of the Methods
Current explainable recommendation research is classified along two orthogonal dimensions:
-
The information source or display style of the explanations
From the human-computer interaction (HCI) perspective (e.g., textual sentence explanation, or visual explanation)
-
The model to generate such explanations
From the machine learning (ML) perspective. Potential explainable models include the nearest-neighbor, matrix factorization, topic modeling, graph models, deep learning, knowledge reasoning, association rule mining, and others.
Any combination along the two dimensions constitutes a subproblem of explainable recommendation.
The two dimensions are also related, because the type of information determines how explanations can be presented.
these two principles are closely related to each other because the type of information usually determines how the explanations can be displayed.
Based on these two dimensions, the authors summarize part of the literature in Table 1.1; blank cells may mean no related work exists yet.
1.4 Explainability and Effectiveness
Explainability and effectiveness can be conflicting goals that require a trade-off.
Explainability and effectiveness could sometimes be conflicting goals
in model design that we have to trade-off (Ricci et al., 2011)
But the two need not conflict. For example, deep representation learning can enable recommendation models that are both effective and explainable.
While recent evidence also suggests that these two goals may not necessarily conflict with each other when designing recommendation models (Bilgic et al., 2004; Zhang et al., 2014a). For example, state-of-the-art techniques – such as the deep representation learning approaches– can help us to design recommendation models that are both effective and explainable.
1.5 Explainability and Interpretability
Explainability and interpretability are closely related concepts.
The latter is one of the ways to achieve the former.
In general, interpretability is one of the approaches to achieve explainability.
Explainable AI (XAI) aims to explain results to users/system designers.
So a model can be either interpretable or non-interpretable.
To achieve the goal, the model can be either interpretable or non-interpretable.
Examples:
- Interpretable: linear models (e.g., linear regression) and tree models (e.g., decision trees).
- Non-interpretable: neural attention mechanisms, natural language explanations, and many post-hoc explanation models.
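As a minimal illustration of the interpretable case (the feature names and data below are invented for the example), a linear model's learned weights give an additive, human-readable contribution per feature:

```python
import numpy as np

# Hypothetical item features for a toy rating predictor (names invented).
feature_names = ["price_match", "brand_affinity", "genre_match"]
X = np.array([[0.9, 0.2, 0.8],
              [0.1, 0.9, 0.3],
              [0.7, 0.6, 0.9],
              [0.2, 0.1, 0.1]])
y = np.array([4.4, 2.0, 4.7, 0.8])        # observed ratings

# Ordinary least squares: the weight vector w is directly readable.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Each term w_j * x_j is an additive contribution to item 2's score,
# so the largest term directly explains the recommendation.
contributions = dict(zip(feature_names, w * X[2]))
top = max(contributions, key=contributions.get)
print(f"Recommended mainly because of '{top}'.")
# -> Recommended mainly because of 'genre_match'.
```

A decision tree is interpretable in the same spirit: the root-to-leaf path of split conditions is itself the explanation. Non-interpretable models lack such a direct readout, which is why post-hoc explanation models are needed.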
1.6 How to Read the Survey
Mainly lists the background knowledge to prepare in advance.
-
Recommender system background:
content-based recommendation (Pazzani and Billsus, 2007)
Pazzani, M. J. and D. Billsus (2007). “Content-based recommendation systems”. In: The Adaptive Web. Springer. 325–341.
collaborative filtering (Ekstrand et al., 2011)
Ekstrand, M. D. et al. (2011). "Collaborative filtering recommender systems". Foundations and Trends® in Human–Computer Interaction. 4(2): 81–173.
evaluation of recommender systems (Shani and Gunawardana, 2011)
Shani, G. and A. Gunawardana (2011). "Evaluating recommendation systems". In: Recommender Systems Handbook. Springer. 257–297.
-
Explainability-related research:
explanations in recommender systems from a user study perspective
Tintarev, N. and J. Masthoff (2007a). "A survey of explanations in recommender systems". In: Data Engineering Workshop, 2007 IEEE 23rd International Conference. IEEE. 801–810.
interpretable machine learning
Lipton, Z. C. (2018). “The mythos of model interpretability”. Communications of the ACM. 61(10): 36–43.
Molnar, C. (2019). Interpretable Machine Learning. Leanpub.
explainable AI in general
Gunning, D. (2017). “Explainable artificial intelligence (XAI)”. Defense Advanced Research Projects Agency (DARPA).
Samek, W., T. Wiegand, and K.-R. Müller (2017). “Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models”. arXiv preprint arXiv:1708.08296.
Yongfeng Zhang and Xu Chen (2020), "Explainable Recommendation: A Survey and New Perspectives", Foundations and Trends® in Information Retrieval: Vol. 14, No. 1, pp. 1–101. DOI: 10.1561/1500000066.