Music Tech Fest 2013: MUSIC INFORMATICS RESEARCH GROUP



So the group was formed in 2005, and it was originally called the Centre for Computational Creativity. Right now we've got eight members of staff, and the research areas we're working on are basically music information retrieval, music signal analysis, computational musicology, representing musical knowledge, and also applications of these techniques to other areas like environmental sounds or biological sounds. We've had a couple of research projects, like the i-Maestro project, which was on music education, and right now we're running the SLICKMEM project, which is about semantic linking of metadata for early music.

One of the things we're working on is music signal analysis, which can be described as machine listening: how to make machines understand music. This is about understanding the notes in a recording, the instruments present, and how to separate the sources. One popular application of this is automatic music transcription, which is how to convert an audio recording into some form of music notation. This is an example of an audio recording of Mozart, and if we transcribe it, it sounds a bit like this. You'll notice that a few notes were missed here and there, but the system mostly works, and we are able to transcribe music into some sort of notation.

Another application we're working on is optical music recognition for early-music tablature. We have really old handwritten tablature of lute music, and we're able to do optical music recognition on it: detect the notes, follow the voices across the piece, and come up with a modern notation for each voice. And now I'm handing over to Srikanth, who's going to talk about music prediction.

Yeah, so the basic question we're trying to answer with these models for analysing music is why music sounds the way it does: why do some of us like certain kinds of music while others don't like the same kind, how do our musical tastes develop, and so on. What we do is use statistical and probabilistic models and train them on musical data. For example, it's like saying that you develop a taste for music by listening to some kind of music over and over again, or that your taste develops depending on what kind of music you listen to. In the same way, we train these models on data containing different kinds of sequential patterns, like pitch sequences or different kinds of rhythms, and we see how the models summarise that information for themselves. This can help you understand notions of similarity and style in music, it gives you different perspectives on creativity and how a creative work actually comes about, and it also tells you a little about familiarity and preference.

Another interesting thing you can do with these models is that, once you've trained them on some data to understand that music, you can also have them generate music. This is an example from a model that has been trained on Bach chorales. The idea is to make the generated music sound like the music the model has been exposed to.

Another way of looking at music similarity is to go simply by what the signal looks like. You take the audio signal of each piece, you analyse it, you break it down into several components, and then you use this information to understand notions of similarity in music.
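As a rough illustration of the transcription idea described above, here is a minimal sketch of the monophonic case using the open-source librosa library: estimate a fundamental frequency per frame, round it to the nearest MIDI pitch, and report a note whenever the pitch changes. The audio file name is a made-up placeholder, and transcribing polyphonic recordings like the Mozart example takes considerably more machinery than this.

```python
# Minimal monophonic transcription sketch using librosa.
# Polyphonic transcription, as in the Mozart example, needs far more than this.
import numpy as np
import librosa

def transcribe_monophonic(path):
    y, sr = librosa.load(path)
    # Frame-wise fundamental-frequency estimation with pYIN.
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz('C2'),
                                 fmax=librosa.note_to_hz('C7'), sr=sr)
    midi = np.round(librosa.hz_to_midi(f0))
    notes, prev = [], None
    for frame, (m, v) in enumerate(zip(midi, voiced)):
        if not v or np.isnan(m):
            prev = None
            continue
        if m != prev:  # a new note starts at this frame
            onset = librosa.frames_to_time(frame, sr=sr)
            notes.append((float(onset), librosa.midi_to_note(int(m))))
            prev = m
    return notes  # list of (onset time in seconds, note name)

# print(transcribe_monophonic('mozart_excerpt.wav'))  # hypothetical file name
```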
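The sequence models Srikanth describes are more sophisticated than this, but the train-then-generate idea can be sketched with something as simple as a first-order Markov chain over MIDI pitches. The tiny corpus below is an invented stand-in for real training data such as the Bach chorales.

```python
# Toy illustration of the "train a sequence model, then generate" idea:
# a first-order Markov chain over MIDI pitches.
import random
from collections import defaultdict, Counter

def train_markov(sequences):
    """Count pitch-to-pitch transitions in a corpus of pitch sequences."""
    transitions = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            transitions[a][b] += 1
    return transitions

def generate(transitions, start, length=16):
    """Sample a new pitch sequence from the learned transition counts."""
    out = [start]
    for _ in range(length - 1):
        counts = transitions.get(out[-1])
        if not counts:
            break
        pitches, weights = zip(*counts.items())
        out.append(random.choices(pitches, weights=weights)[0])
    return out

# Hypothetical corpus: two short melodies as lists of MIDI pitches.
corpus = [[60, 62, 64, 65, 67, 65, 64, 62, 60],
          [67, 65, 64, 62, 60, 62, 64, 67]]
model = train_markov(corpus)
print(generate(model, start=60))
```

Higher-order or fully probabilistic models of the kind mentioned in the talk would capture longer-range structure that a first-order chain misses.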
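For the signal-level view of similarity just mentioned, one common recipe (not necessarily the exact features the group uses) is to break each recording into frames, summarise them with timbral features such as MFCCs, and compare the summaries. The file names are placeholders.

```python
# One common recipe for signal-based similarity: summarise each track with
# mean MFCCs and compare the summaries with cosine similarity.
import numpy as np
import librosa

def track_summary(path):
    y, sr = librosa.load(path)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # timbre features per frame
    return mfcc.mean(axis=1)                             # one vector per track

def similarity(path_a, path_b):
    a, b = track_summary(path_a), track_summary(path_b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# print(similarity('song_a.wav', 'song_b.wav'))  # hypothetical file names
```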
So apart from analysing music, we are also interested in ways of talking about music and actually formalising descriptions of music. How can we transport musical information over the web? How can we build datasets which are generic enough to describe many kinds of music, but also specific enough to contain all the information about particular musical pieces? We've done research in Semantic Web ontologies, and we've developed an ontology for melodies, chord progressions and music: a Semantic Web ontology for music which helps to display and to formalise the kind of melodies we've seen analysed before.

Another project of ours is modelling similarity in general. We've just seen melodic similarity from Srikanth, and melodic similarity is only one facet, one aspect, of what you could call music similarity. We would probably all agree that different people in this room would have different ideas of how to describe the similarity of certain pieces: you would focus on different aspects of the music, and there would be the influence of different cultures, of your context, and of course also of psychoacoustics, which all play into your idea of music similarity. So we try to build computational similarity models which include all these aspects of culture, context, psychoacoustics and music perception. The central question in modelling music similarity is: which features of music are important, in which context, and to which person? And can we make models which are generic enough to recommend or retrieve music based on music similarity, so that you could use them in the end in a nice web app or something?

In order to get this data, and other data as well, we've developed a system called CASimIR, which is a game-with-a-purpose framework designed to collect data about music, basically annotations about music, from players of the game, and I'm going to give a short demo of it. This is the Spot the Odd Song Out game; this is just the first login form. Spot the Odd Song Out is available on Facebook and you can all play it: just search for "Spot the Odd Song Out" on Facebook. We are interested in some information about our participants, because we want to relate culture and other personal attributes to the music data you give us. When we start the game we come to a menu, we can pick a nice avatar, and we can just start a match. Participants in this game (it's a multiplayer game) can listen to songs... oh, sorry, that was [inaudible] for me. So there are three songs. In some cases you hear three different songs, and in other cases you actually hear only two different songs and one is repeated; we use those cases to check whether the person playing our game is making sensible decisions. The task is to spot the odd song out: the song which is different out of the three, or the most different from the others. In this case we've got two which are the same and one which is different, and I will choose that one. Here we've been playing with other players, which are in fact computer players, and we've all selected the one song which is different, which is great; it tells the system that the participants make some kind of sense.
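To give a flavour of what a machine-readable description of music can look like, here is a small RDF sketch in the spirit of the Semantic Web ontologies mentioned above, built with the rdflib library. The namespace, property names and the piece itself are hypothetical placeholders rather than the group's actual ontology.

```python
# Sketch of describing a chord progression as RDF data.
# The vocabulary below is a hypothetical placeholder, not the group's ontology.
from rdflib import Graph, Namespace, Literal, RDF, URIRef

EX = Namespace("http://example.org/music#")  # hypothetical vocabulary
g = Graph()
g.bind("ex", EX)

piece = URIRef("http://example.org/pieces/prelude-1")
g.add((piece, RDF.type, EX.Piece))
g.add((piece, EX.title, Literal("Prelude No. 1")))

# A short chord progression attached to the piece, one node per chord.
for i, chord in enumerate(["C", "F", "G", "C"]):
    node = URIRef(f"http://example.org/pieces/prelude-1/chord/{i}")
    g.add((node, RDF.type, EX.Chord))
    g.add((node, EX.label, Literal(chord)))
    g.add((node, EX.position, Literal(i)))
    g.add((piece, EX.hasChord, node))

print(g.serialize(format="turtle"))
```

Serialising to a web-friendly format like Turtle is what makes such descriptions easy to publish and link, which is the point of the exercise.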
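As a toy illustration of the facet-weighting idea behind these similarity models (which features matter, in which context, to which person), a combined score might be sketched like this, with entirely made-up facet scores and weights.

```python
# Combine several similarity facets (melodic, rhythmic, timbral, ...) with
# weights that could depend on the listener or context. Values are made up.
def combined_similarity(facet_scores, weights):
    """facet_scores and weights are dicts keyed by facet name, scores in [0, 1]."""
    total = sum(weights.values())
    return sum(weights[f] * facet_scores.get(f, 0.0) for f in weights) / total

print(combined_similarity(
    {'melody': 0.8, 'rhythm': 0.4, 'timbre': 0.6},
    {'melody': 2.0, 'rhythm': 1.0, 'timbre': 1.0}))
```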
Another task in this game was developed by KTH in Stockholm, and it collects temporal data: you listen to a music clip and then you tap the rhythm to the music, something like this. That's the task. I wasn't very good at tapping, so I probably won't get many points. And indeed I didn't get any points; this is my score. But the idea behind this is to collect information about how people tap tempo, whether they tap the quarter notes or other note values, and how they perceive rhythm.

So we have developed this framework, and we want to encourage other people, maybe also people in this room, to contribute more modules like Spot the Odd Song Out or the tap-tempo module to the game, to make it more interesting. It's very easy: it's all based on HTML5 and JavaScript, and it works on mobile phones and in browsers.

Okay, so as I said, we are very interested in collaborations in general, as a research group. If you're an app developer and have ideas for new apps, contribute to the framework. If you've got any creative feedback on the other things we've talked about, about melodic similarity or automatic transcription: did you find anything especially interesting? Would you like to see something at your music school which uses these technologies? Would you like to use the applications we've shown here in, say, music education? Finally, of course, we're interested in research collaboration and discussion of the work we've just shown you. Please visit our website if you've got any further questions. The website is mirg.soi.city.ac.uk, and you can also just google "MIRG group". Thank you very much.

Lovely, thank you. Thank you, City Uni. Does anybody have any...
