h1. Notes on first meeting
h2. Topics discussed (roughly)
** What are the main research use cases for an Audio Features Ontology (AF)?
** Are they served well by the existing AF?
** If not, what are the most important extensions we need to make?
** Does the fundamental structure of the ontology need to be changed?
** Personal Objectives: what are we going to do with a modified/re-engineered ontology?
** What is the relation of AF to existing software and projects, including:
*** software like Sonic Annotator, SV, SAWA, AudioDB and other tools
*** projects like OMRAS2, EASAIER (and the EASAIER client), and the new Semantic Media/Semantic Audio grants
h2. Use cases:
h3. Thomas:
** drive audio effects with audio features -> adaptive effects (controlling effects)
** KM-like use case: associating audio effects with audio features, e.g. a pitch shifter won't change onsets
** part of the AFX ontology
** more audio features
** technical classification of audio effects
h3. Steve:
** Finding structure, repeated sequences of features
** Beat-related features, BPM/tempo (is major/minor an audio feature? not necessarily)
** Chords => Chord ontology
** Melody and notes
h3. George:
** Improve SAWA
** Facilitate the development of intelligent music production systems
** Release large content-based metadata repositories in RDF (see the sketch after this list)
** Re-release the MSD in RDF (??)
** Deploy a knowledge-based environment for content-based audio analysis, based on the concept of the Knowledge Machine, that can combine multiple modalities
** Research reproducibility, using ontologies as a model for exchanging research data
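As a first concrete picture of the RDF-repository use case, here is a minimal Turtle sketch of a single published feature annotation. The :Onset class and the :onset_1/:signalTimeline resources are placeholders, and the event:/tl: terms are from the C4DM Event and Timeline ontologies that AF builds on; treat the exact modelling as an assumption, not an agreed design.

<pre>
@prefix event: <http://purl.org/NET/c4dm/event.owl#> .
@prefix tl:    <http://purl.org/NET/c4dm/timeline.owl#> .
@prefix xsd:   <http://www.w3.org/2001/XMLSchema#> .
@prefix :      <http://example.org/analysis#> .

# The signal's own timeline, and one detected onset placed on it.
:signalTimeline a tl:TimeLine .

:onset_1 a :Onset ;                        # stand-in for an AF onset class
    event:time [
        a tl:Instant ;
        tl:onTimeLine :signalTimeline ;
        tl:at "PT1.486S"^^xsd:duration     # position within the signal
    ] .
</pre>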
h2. Open issues:
h3. Domain and scope:
** Are musicological concepts outside the scope of an AF?
** Feature categories to consider: physical features, acoustic features, perceptual features, DSP-type features, musical features
h3. Fundamental structure of the existing AF:
** Does it serve you well?
** For example, loudness is defined as a kind of segment in AF, which does not fit a perceptual attribute well (see the sketch below).
** What depth do we want? (both in terms of scope and the level of detail in describing a feature extraction workflow)
** How does AF relate to the DSP workflows used to extract the features?
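To make the loudness point concrete, the Turtle sketch below contrasts segment-style modelling (roughly how the note above describes AF today) with an attribute-style alternative. Every name here (:LoudnessSegment, :onInterval, :loudness, :value, :unit) is invented for illustration and is not existing AF vocabulary.

<pre>
@prefix tl:  <http://purl.org/NET/c4dm/timeline.owl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix :    <http://example.org/analysis#> .

# (a) Segment-style: loudness is a thing that occupies an interval of the
# signal timeline, much like a structural segment would.
:loud_1 a :LoudnessSegment ;
    :onInterval [ a tl:Interval ;
                  tl:onTimeLine :signalTimeline ;
                  tl:at       "PT0S"^^xsd:duration ;
                  tl:duration "PT3S"^^xsd:duration ] .

# (b) Attribute-style: loudness is a quantity attached to the segment
# (or signal) it describes, with an explicit value and unit.
:seg_1 :loudness [ :value "-23.0"^^xsd:float ;
                   :unit  "LUFS" ] .
</pre>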
h2. Existing resources:
h3. Some work related to Steve's use cases, segmentation and ontologies:
** SALAMI Project: Kevin Page, Dave De Roure http://salami.music.mcgill.ca/
** The Segment Ontology: http://users.ox.ac.uk/~oerc0033/preprints/admire2011.pdf
** PopStructure Ontology: Kurt Jacobson, unpublished.
(Example available: http://wiki.musicontology.com/index.php/Structural_annotations_of_%22Can%27t_buy_me_love%22_by_the_Beatles)
** Similarity Ontology: Kurt Jacobson http://grasstunes.net/ontology/musim/musim.html
h2. Ideas for new ontologies:
** Steve has worked on an acoustics-related ontology
** Creating a DSP ontology:
*** include processing steps down to mathematical operations (this could take advantage of the math: namespace in CWM: http://www.w3.org/DesignIssues/Notation3.html); see the sketch after this list
*** describe common DSP parameters
** create an Acoustics ontology
** describe musicological concepts
** describe concepts related to cognitive and perceptual issues
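As a rough illustration of the "down to math operations" idea, the N3 sketch below describes a hypothetical gain step and derives its output with CWM's math:product builtin. All dsp: terms (dsp:GainStep, dsp:inputValue, dsp:gainFactor, dsp:outputValue) are invented placeholders for a future DSP ontology; only the math: namespace comes from CWM.

<pre>
@prefix dsp:  <http://example.org/dsp#> .                # hypothetical DSP ontology namespace
@prefix math: <http://www.w3.org/2000/10/swap/math#> .   # CWM built-in math functions

# Rule: for a gain step, output = input * gainFactor.
{ ?step a dsp:GainStep ;
        dsp:inputValue ?in ;
        dsp:gainFactor ?g .
  ( ?in ?g ) math:product ?out . }
=>
{ ?step dsp:outputValue ?out . } .
</pre>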
h2. Currently missing features
** MFCCs are missing (see the sketch after this list)
** Rhythmogram
** RMS energy
** combined features, e.g. weighted combinations or statistical averages over features
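To make the MFCC gap concrete, here is one possible shape for the missing term, sketched in Turtle. The afx: namespace and every afx: name (afx:MFCC, afx:DenseFeature, afx:frameSize, afx:hopSize) are invented for illustration and are not part of the current AF.

<pre>
@prefix afx:  <http://example.org/af-extension#> .   # hypothetical extension namespace
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# A framewise (dense) feature class for MFCCs.
afx:MFCC a owl:Class ;
    rdfs:label "MFCC"@en ;
    rdfs:comment "Mel-frequency cepstral coefficients computed over fixed-size frames."@en ;
    rdfs:subClassOf afx:DenseFeature .    # hypothetical superclass for framewise features

# Frame parameters shared by dense features.
afx:frameSize a owl:DatatypeProperty ;
    rdfs:domain afx:DenseFeature .
afx:hopSize a owl:DatatypeProperty ;
    rdfs:domain afx:DenseFeature .
</pre>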
h2. Development issues
** chaining, combination, weighting
** how to associate features with arbitrary data
** summary feature types
** similarity matrices (SM): are they part of the ontology?
** how to describe salience: can you hear it, can you perceive it, is there agreement?
** how to describe weighting and confidence (see the sketch after this list)
** mood, music psychology, cognition, emotion (perception?)
** provenance => music provenance
** need for deprecation and versioning
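On the weighting/confidence question, one lightweight option is a plain datatype property on the feature instance itself. The sketch below uses an invented afx:confidence property and a stand-in :Onset class purely to illustrate that option, not as an agreed design.

<pre>
@prefix afx:   <http://example.org/af-extension#> .   # hypothetical extension namespace
@prefix event: <http://purl.org/NET/c4dm/event.owl#> .
@prefix xsd:   <http://www.w3.org/2001/XMLSchema#> .
@prefix :      <http://example.org/analysis#> .

# An extracted onset carrying the extractor's confidence score.
:onset_42 a :Onset ;                       # stand-in for an AF onset class
    event:time :instant_42 ;
    afx:confidence "0.83"^^xsd:float .     # hypothetical property: confidence in [0,1]
</pre>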
h2. Long term objectives:
Some concrete tasks that can be done as the outcome of the collaboration:
** A version of Sonic Annotator that produces output adhering to the new ontology
** Are we making people happier by doing so?
** gradual transition period?
** extend other software toolkits, e.g. a version of Marsyas in C++
** multitrack processing using Sonic Annotator (this feature might come along soon)
h2. Immediate tasks (TODO):
** collect more resources
** verify the relationship between AF as it is and other feature/segmentation ontologies
** what other software uses it?
** papers and literature review
** relation to projects, e.g. SIEMAC
** collect features that we need
** scope (extend the diagram of the set of ontologies)
** collect specific application examples from existing processing chains/workflows
** collect software/projects that use/produce audio features:
*** plugins, LADSPA, VAMP, Marsyas, CLAM, libextract, COMirva, MIRtoolbox, Supercollider, other frameworks