changeset 41:9ef82da57c17
background
author    mariano
date      Sun, 01 May 2016 03:08:27 +0100
parents   967b0369ba07
children  4f7cf0afbad1
files     musicweb.bib musicweb.tex
diffstat  2 files changed, 17 insertions(+), 15 deletions(-)
--- a/musicweb.bib	Sun May 01 02:57:01 2016 +0100
+++ b/musicweb.bib	Sun May 01 03:08:27 2016 +0100
@@ -2,10 +2,10 @@
 %% http://bibdesk.sourceforge.net/
 
-%% Created for George Fazekas at 2016-05-01 02:56:50 +0100
+%% Created for George Fazekas at 2016-05-01 02:56:50 +0100
 
-%% Saved with string encoding Unicode (UTF-8)
+%% Saved with string encoding Unicode (UTF-8)
@@ -224,3 +224,13 @@
 	Url = {http://doi.acm.org/10.1145/2814895.2814922},
 	Year = {2015},
 	Bdsk-Url-1 = {http://doi.acm.org/10.1145/2814895.2814922}}
+
+@incollection{passant2010dbrec,
+	title={dbrec—music recommendations using DBpedia},
+	author={Passant, Alexandre},
+	booktitle={The Semantic Web--ISWC 2010},
+	pages={209--224},
+	year={2010},
+	publisher={Springer}
+}
+
--- a/musicweb.tex	Sun May 01 02:57:01 2016 +0100
+++ b/musicweb.tex	Sun May 01 03:08:27 2016 +0100
@@ -182,16 +182,6 @@
 
 MusicWeb is an application which offers the user the possibility of exploring editorial, cultural and musical links between artists. It gathers, extracts and manages musical metadata from many different sources and connects them in informative ways. This paper deals with the different ways in which MusicWeb collects these resources and shapes them into high-level information. We will first review various knowledge-based web resources available to MusicWeb. We will then introduce the application itself and detail the architecture used to analyse and extract data. Before the final conclusions and discussion of future work, we will examine the experience of interfacing with the application and how users can explore and discover new musical paths.
 
-\begin{itemize}
-\item related work
-  \begin{itemize}
-  \item http://musikipedia.org/ MUSIKIPEDIA, some paper by the guy who made it, Mohamed Sordo
-  \item Kurt's thesis on the similarity ontology
-  \item Phuong Nguyen, Paolo Tomeo, Tommaso Di Noia and Eugenio Di Sciascio: Content-based recommendations via DBpedia and Freebase
-  \item music recommendation dbpedia
-  \end{itemize}
-\item very brief intro to the role of music related data sources on the web and what they are
-\end{itemize}
 
 \section{Background}\label{sec:background}
 
 Researchers have realised the usefulness of musical metadata and have for some time tried to collect and exploit it for knowledge representation. Several ontologies have been and continue to be developed to link music metadata, such as the music ontology\cite{DBLP:conf/ismir/RaimondASG07}, which defines all objects in the process of creation, interpretation and distribution of music; the similarity ontology\cite{jacobson2011}, which allows for associations based on similarity of all musical elements contained in the music ontology; the studio ontology, which can be used to describe all elements in music studio environments\cite{fazekas2011studio}; or the audio effects ontology\cite{wilmering2013}, permitting the description of audio effects employed in music production processes.
 
 Linked music metadata is full of promise. However, most attempts to make use of linked metadata to guide music discovery have stressed some aspects of metadata while ignoring others. Pachet identifies three types of musical metadata \cite{Pachet2005}:
@@ -201,9 +191,11 @@
 \item Acoustic metadata: data extracted from audio files using music information retrieval methods.
 \end{enumerate}
 
 Of these, only the first has been exploited to a significant degree. Web resources for music discovery which employ linked data, such as musicbrainz or lastfm, rely mostly on editorial metadata for linking. Commercial recommendation systems make use of cultural metadata, mainly through collaborative filtering.
-To our knowledge the first recommedation system based on linked data was proposed in \cite{celma2008foafing}, which used web crawling to gather data which could then be offered to the user. Recommendation was based on profiling the user's listening habits and foaf connections. A further step was taken in \cite{heitmann2010}, in which the author addresses common problems in recommender system such the new item problem or the new user problem.
-Nguyen \emph{et al}\cite{nguyen2015} explore the effectiveness of recommendation systems based on knowledge encyclopedias such as dbpedia and freenet.
+To our knowledge, the first recommendation system based on linked data was proposed in \cite{celma2008foafing}, which used web crawling to gather data that could then be offered to the user. Recommendation was based on profiling the user's listening habits and foaf connections. A further step was taken in \cite{heitmann2010}, in which the author addresses common problems in recommender systems such as the new item problem or the new user problem. dbrec, a recommender system presented in \cite{passant2010dbrec}, recommended music obtained from dbpedia by computing a measure of semantic distance as the number of indirect and distinct links between resources in a graph. The system offered the user an explanation for each recommendation, listing the resources shared by the recommended artists.
+Nguyen \emph{et al.}\cite{nguyen2015} explore the effectiveness of recommendation systems based on knowledge encyclopedias such as dbpedia and freebase. The authors compute several different similarity measures over linked data extracted from both datasets, which they then feed to a recommender system.
+There are several web resources offering services similar to MusicWeb. One of them is musikipedia\footnote{http://musikipedia.org/}, where the user can visit a page for an artist to listen to music or watch videos. The user can also follow links to other artists connected to the current one, and an explanation of each connection is offered. Links are extracted from dbpedia and cover all categories shared between artists.
+Dbpedia and freebase are two of the most common sources of linked data available. There are several other sources of music metadata. Acousticbrainz\footnote{https://acousticbrainz.org/} is a crowd-sourced information resource which contains low- and high-level music metadata, including audio and editorial features. Acousticbrainz is run with the participation of musicbrainz, which is itself a major repository of linked editorial metadata.
@@ -299,7 +291,7 @@
 
 \subsection{Content-based linking}\label{sec:mir}
 
-Content-based Music Information Retrieval (MIR) \cite{casey08} facilitates applications that rely on perceptual, statistical, semantic or musical features derived from audio using digital signal processing and machine learning methods. These features may include statistical aggregates computed from time-frequency representations extracted over short time windows. For instance, spectral centroid is said to correlate with the perceived brightness of a sound [Schubert et.al., 2006], therefore it may be used in the characterisation in timbral similarity between music pieces. More complex representations include features that are extracted using a perceptually motivated algorithm. Mel-Frequency Cepstral Coefficients (MFCCs) for instance are often used in speech recognition as well as in estimating music similarity. Higher-level musical features include keys, chords, tempo, rhythm, as well as semantic features like genre or mood, with specific algorithms to extract this information from audio.
+Content-based Music Information Retrieval (MIR) \cite{casey08} facilitates applications that rely on perceptual, statistical, semantic or musical features derived from audio using digital signal processing and machine learning methods. These features may include statistical aggregates computed from time-frequency representations extracted over short time windows. For instance, the spectral centroid is said to correlate with the perceived brightness of a sound [Schubert et al., 2006]; therefore it may be used in the characterisation of timbral similarity between music pieces. More complex representations include features that are extracted using perceptually motivated algorithms. Mel-Frequency Cepstral Coefficients (MFCCs), for instance, are often used in speech recognition as well as in estimating music similarity. Higher-level musical features include keys, chords, tempo and rhythm, as well as semantic features like genre or mood, with specific algorithms to extract this information from audio.
 
 % Content-based features are increasingly used in music recommendation systems to overcome issues such as infrequent access of lesser known pieces in large music catalogues (the ``long tail'' problem) or the difficulty of recommending new pieces without user ratings in systems that employ collaborative filtering (``cold start'' problem) \cite{Celma2010}.
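The Background hunk above summarises dbrec's approach: recommendations are ranked by a semantic distance derived from the number of distinct direct and indirect links between resources in the DBpedia graph. The following Python sketch only illustrates that link-counting idea; it is not Passant's actual LDSD measure, and the triples, resource names and weighting are invented for the example.

```python
# Illustrative sketch (not Passant's exact LDSD formula): estimate a semantic
# distance between two resources by counting the distinct properties that link
# them, either directly or through a shared neighbour in the graph.
from collections import defaultdict

def semantic_distance(triples, a, b):
    """triples: iterable of (subject, predicate, object) tuples, e.g. from DBpedia."""
    out = defaultdict(set)   # resource -> {(predicate, neighbour)} outgoing links
    inc = defaultdict(set)   # resource -> {(predicate, neighbour)} incoming links
    for s, p, o in triples:
        out[s].add((p, o))
        inc[o].add((p, s))

    # Direct links: distinct predicates connecting a and b in either direction.
    direct = {p for p, o in out[a] if o == b} | {p for p, o in out[b] if o == a}

    # Indirect links: distinct (predicate, resource) pairs shared by a and b,
    # i.e. resources both point to, or resources that point to both of them.
    indirect = (out[a] & out[b]) | (inc[a] & inc[b])

    links = len(direct) + len(indirect)
    return 1.0 / (1.0 + links)   # more shared links -> smaller distance

# Toy example with hypothetical DBpedia-style triples:
triples = [
    ("dbr:Nick_Cave", "dbo:genre", "dbr:Post-punk"),
    ("dbr:PJ_Harvey", "dbo:genre", "dbr:Post-punk"),
    ("dbr:PJ_Harvey", "dbo:associatedAct", "dbr:Nick_Cave"),
]
print(semantic_distance(triples, "dbr:Nick_Cave", "dbr:PJ_Harvey"))
```

The shared-link sets (`direct` and `indirect`) are also what would back the textual explanations the paper mentions, since they name the resources two artists have in common.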
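The content-based linking paragraph names the spectral centroid as a low-level feature that correlates with perceived brightness. As a rough illustration of how such a frame-level feature is computed (plain NumPy here, not the MIR toolchain MusicWeb actually uses), the centroid is the magnitude-weighted mean frequency of a windowed frame's spectrum.

```python
# Minimal sketch of one content-based feature mentioned above: the spectral
# centroid of a single short-time frame. Generic illustration only.
import numpy as np

def spectral_centroid(frame, sample_rate):
    """Spectral centroid (in Hz) of one windowed audio frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    if spectrum.sum() == 0:
        return 0.0
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

# Toy usage: a 440 Hz sine has its centroid near 440 Hz, while adding
# high-frequency content (a "brighter" sound) pushes the centroid up.
sr = 44100
t = np.arange(2048) / sr
dull = np.sin(2 * np.pi * 440 * t)
bright = dull + 0.5 * np.sin(2 * np.pi * 8000 * t)
print(spectral_centroid(dull, sr), spectral_centroid(bright, sr))
```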