Introduction

Music catalogues for online retail have become immense over the past decades. In 2013 the iTunes music catalogue comprised over 26 million tracks, with users having downloaded over 25 billion songs [2]. Today virtually anyone can create music and upload it to a music database such as Bandcamp, iTunes, or Last.fm [2, 3, 32]. Well-known artists and tracks make up only a very small portion of this item space, a phenomenon known as the long tail [33]. As a result, finding new, interesting music has become a challenging task. Recommender systems try to alleviate this problem by filtering the item repository based on a user’s music taste. Taste can be modelled by analyzing user preferences and tracking user behaviour, e.g., by analyzing a user’s listening history [43].

Ever since computer engineers started to develop such systems, a wide range of algorithms has been designed and implemented to compute item recommendations [8, 35, 42, 43], each with its own advantages and disadvantages.

There are two commonly applied recommendation strategies [43]:

  • Content-based filtering (CBF): Using chosen or modelled item features to define similarity between items in the user profile and candidate suggestions;
  • Collaborative filtering (CF): Using the overlap between the item sets of user profiles to find possible suggestions in the difference of those item sets.
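As an illustration of the second strategy, the following is a minimal user-based collaborative filtering sketch. The data, thresholds, and artist names are hypothetical, chosen only to show the idea of overlap-based similarity; this is not the actual Last.fm algorithm.

```javascript
// Similarity between two users = Jaccard overlap of their item sets.
function jaccard(setA, setB) {
  const intersection = [...setA].filter((x) => setB.has(x)).length;
  const union = new Set([...setA, ...setB]).size;
  return union === 0 ? 0 : intersection / union;
}

// Candidate suggestions = items of sufficiently similar users that are
// not yet in the target user's own item set, ranked by accumulated similarity.
function recommend(target, profiles, minSimilarity = 0.2) {
  const scores = new Map(); // item -> accumulated similarity
  for (const [user, items] of Object.entries(profiles)) {
    if (user === target) continue;
    const sim = jaccard(profiles[target], items);
    if (sim < minSimilarity) continue;
    for (const item of items) {
      if (!profiles[target].has(item)) {
        scores.set(item, (scores.get(item) || 0) + sim);
      }
    }
  }
  return [...scores.entries()].sort((a, b) => b[1] - a[1]).map(([item]) => item);
}

// Toy listening profiles (placeholder data).
const profiles = {
  alice: new Set(["Radiohead", "Portishead", "Massive Attack"]),
  bob:   new Set(["Radiohead", "Portishead", "Burial"]),
  carol: new Set(["ABBA", "Boney M."]),
};

console.log(recommend("alice", profiles)); // ["Burial"]
```

Here "alice" and "bob" share two of four distinct artists (similarity 0.5), so the artist only "bob" listens to is suggested; "carol" has no overlap and is ignored.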

Although recommender systems have proven successful in terms of prediction accuracy, the success of a recommender system also relies on the end user’s trust in its recommendations. If users do not know why a particular item is recommended to them, they may be reluctant to check it out. Herlocker et al. [19] describe this issue as the black box problem. To improve acceptance of recommendations, they propose building an explanation system that presents the user with a white box model of the recommender system’s rationale.

There are different ways in which explanation systems can be designed. An ambitious approach would be to explain each step of the recommendation algorithm, but this is not always possible or desired. Other examples of how additional context can be provided for explanations are indicating which tracks in a user’s music library are closely related to the given recommendations, giving the system’s confidence in the accuracy of the suggestions, et cetera [19].

Over the course of the last decade a wide range of explanation systems have been implemented. Many of these also use visualizations to explore user and/or item relationships [4, 17, 18, 41, 62].

Thesis objective

The initial thesis objectives as described in [30] are two-fold:

  • Conducting a literature study on techniques for the visualization of music suggestions;
  • The design, implementation and evaluation of an interactive visualization that will allow the user to gain insight into the recommendation process as well as actively steer the process.

The literature study describes recommender systems and their rationale, different visualization techniques, how users gain insight into visualization, and a number of visual explanation systems. In this context we will compose a new white box model that can be used as an explanation system for the collaborative recommendation rationale. This initial design is tested and improved through a number of iterations, resulting in an application that satisfies a number of criteria. These criteria are based on a set of evaluation properties, as described in the next subsections.

Evaluation properties

The explanation system will be evaluated based on the seven aims described by Tintarev and Masthoff [53], listed in table 1.1. In addition, learnability (Learn.) and memorability (Mem.), two usability properties described by Nielsen [37], are evaluated.

Table 1.1 : Explanation aims. Table adapted from Tintarev and Masthoff [53]

Aim Definition
Transparency (Tra.) Explain how the system works.
Scrutability (Scr.) Allow users to tell the system it is wrong.
Trust Increase users’ confidence in the system.
Effectiveness (Efk.) Help users make good decisions.
Persuasiveness (Pers.) Convince users to try or buy.
Efficiency (Efc.) Help users make decisions faster.
Satisfaction (Sat.) Increase the ease of use or enjoyment.

Evaluation methodology

Transparency is tested by evaluating insight into the recommendation process based on North’s evaluation method. We will use the think aloud protocol to obtain observational data. In particular we are looking for a user to make “domain specific inferences and hypotheses” [40].

Satisfaction, efficiency, and learnability are tested through think-aloud usability testing and a summative system usability scale (SUS) questionnaire. SUS is a Likert scale method consisting of 10 questions, listed in table 1.2, that investigates the subjective usability of an application [7]. Memorability is tested by asking test users who participated in previous iterations to explain the recommender rationale again at the beginning of the test.

Table 1.2 : System usability scale questions.

Label Question
Q1 I think that I would like to use this system frequently.
Q2 I found the system unnecessarily complex.
Q3 I thought the system was easy to use.
Q4 I think that I would need the support of a technical person to be able to use this system.
Q5 I found the various functions in this system were well integrated.
Q6 I thought there was too much inconsistency in this system.
Q7 I would imagine that most people would learn to use this system very quickly.
Q8 I found the system very cumbersome to use.
Q9 I felt very confident using the system.
Q10 I needed to learn a lot of things before I could get going with this system.
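Following Brooke’s scoring scheme for SUS [7], each question is answered on a 1–5 Likert scale; odd-numbered (positively worded) items contribute their response minus 1, even-numbered (negatively worded) items contribute 5 minus their response, and the sum is multiplied by 2.5 to yield a score between 0 and 100. A minimal sketch of this computation:

```javascript
// SUS scoring per Brooke [7]: responses is an array of 10 answers (1-5),
// in the order Q1..Q10 of table 1.2.
function susScore(responses) {
  if (responses.length !== 10) throw new Error("expected 10 responses");
  const sum = responses.reduce(
    (acc, r, i) => acc + (i % 2 === 0 ? r - 1 : 5 - r), // odd Q: r-1, even Q: 5-r
    0
  );
  return sum * 2.5; // scale to 0..100
}

// Best possible answers (5 on positive items, 1 on negative items) give 100.
console.log(susScore([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])); // 100
```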

Trust, persuasiveness, and effectiveness are evaluated through direct feedback from the test subjects.

Success criteria

We aim to build a system that is accessible to non-expert users with an average to high interest in music. To achieve this, we strive for positive results in terms of overall usability, learnability, memorability, and transparency. By providing transparency, we hope to alleviate problems with regard to trust and effectiveness.

The visual explanation system

The white box model developed for this thesis tries to explain the recommendation rationale of collaborative filtering. The network structure inherent to this algorithm is visualized as a graph. Users are eliminated from the graph and are instead represented as implicit information in the remaining edges. To retain some contextual information, users are listed next to the graph; when one of them is selected, their corresponding edges and items are highlighted.
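The idea of folding users into edges can be sketched as follows: items become nodes, an edge connects two items whenever some user has both in their profile, and the users themselves are stored on the edge so they remain available for highlighting. The data and function names below are illustrative, not the thesis implementation.

```javascript
// Build an item-item graph from user profiles. Each edge keeps the set of
// users that induced it, so user context is implicit rather than shown as nodes.
function buildItemGraph(profiles) {
  const edges = new Map(); // "itemA||itemB" (sorted) -> Set of users
  for (const [user, items] of Object.entries(profiles)) {
    const list = [...items].sort();
    for (let i = 0; i < list.length; i++) {
      for (let j = i + 1; j < list.length; j++) {
        const key = `${list[i]}||${list[j]}`;
        if (!edges.has(key)) edges.set(key, new Set());
        edges.get(key).add(user); // user becomes edge metadata
      }
    }
  }
  return edges;
}

const graph = buildItemGraph({
  alice: new Set(["Radiohead", "Portishead"]),
  bob: new Set(["Radiohead", "Portishead", "Burial"]),
});
// The Portishead-Radiohead edge is induced by both users:
console.log(graph.get("Portishead||Radiohead")); // Set { 'alice', 'bob' }
```

Selecting a user in the side list then reduces to highlighting every edge whose user set contains that user.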

Last.fm applies a CF-based approach to generate artist recommendations [32, 33, 58]. The model we have developed will be applied on this recommender system.

The application created for this thesis is a page action Chrome extension that injects HTML and JavaScript into the recommendations page of Last.fm at http://last.fm/home/recs. The application makes use of several JavaScript libraries, such as D3 and jQuery, as well as a JavaScript library by Felix Bruns that facilitates the use of the Last.fm API.
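As a sketch, such a page action extension can be declared in the extension manifest roughly as follows. The field names follow Chrome’s extension manifest format; the file names and version are placeholders rather than the actual project layout.

```json
{
  "name": "SoundSuggest",
  "version": "1.0",
  "manifest_version": 2,
  "page_action": { "default_title": "SoundSuggest" },
  "content_scripts": [{
    "matches": ["*://*.last.fm/home/recs*"],
    "js": ["jquery.js", "d3.js", "lastfm.api.js", "soundsuggest.js"],
    "css": ["soundsuggest.css"]
  }],
  "permissions": ["tabs", "*://*.last.fm/*"]
}
```

The `matches` pattern restricts the injected scripts to the recommendations page, which is what makes the extension a page action rather than a browser-wide one.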

Figure 1.1 shows the application. The application can be found in the Google Chrome web store. The source code for all the projects that have been developed for this thesis is available at https://www.github.com/soundsuggest/. For additional information on the project beyond this thesis, go to https://soundsuggest.wordpress.com/.


Figure 1.1 : The SoundSuggest application.

Next chapters

The rest of this thesis text is organized as follows. Chapter 2 presents a literature study on recommender systems, visualization techniques, insight gaining and visual explanation systems. Chapter 3 tries to identify the target audience and how the application can be used. Chapter 4 describes the testing methodology and the different iterations. Chapter 5 looks at the technologies that were used to develop the application and the architecture of the application, and discusses some of the specifics of the implementation. Chapter 6 concludes the thesis text. It provides an interpretation of the application’s evaluation results, further conclusions, and a reflection on future work and opportunities.
