Welcome to the 2nd MILC Workshop
MILC 2019 is held in conjunction with the 24th International Conference on Intelligent User Interfaces (IUI 2019) and takes place on March 20th, 2019 at the Marriott Marina Del Rey in Los Angeles, CA, USA. (Room: Pacific 2)
- MILC Workshop Proceedings online
- Call for demos open to all! If you want to present your intelligent music interface in the open demo session, send an email to email@example.com
- Workshop Program online
- We are excited and honored to have Masataka Goto as the keynote speaker at this year’s MILC workshop!
9:00-9:10 Welcome (Proceedings Preface / IUI Overview Paper)
9:10-10:30 Session 1 (Short Papers)
- A Web-Based System For Suggesting New Practice Material To Music Learners Based On Chord Content. Johan Pauwels and Mark Sandler (15+5’)
- Groove Explorer: An Intelligent Visual Interface for Drum Loop Library Navigation. Fred Bruford, Mathieu Barthet, SKoT McDonald and Mark Sandler (15+5’)
- Curating Generative Raw Audio Music with D.O.M.E.. CJ Carr and Zack Zukowski (15+5’)
- Creating Immersive Electronic Music from the Sonic Activity of Environmental Soundscapes. Eli Stine (15+5’)
10:30-11:00 Open Demo Session during Coffee Break
This is an open session for demonstrating intelligent music interfaces, both from workshop contributions and from outside. Current list of demos:
- A Web-Based System For Suggesting New Practice Material To Music Learners Based On Chord Content. Johan Pauwels
- A Minimal Template for Interactive Web-Based Demonstrations of Musical Machine Learning. Vibert Thio
- TextTimeline: Visualizing Vocalized Timing of Singing Voice along Display Text. Tomoyasu Nakano, Jun Kato, and Masataka Goto
11:00-12:00 Session 2 (Long Papers)
- Towards a Hybrid Recommendation System for a Sound Library. Jason Smith, Dillon Weeks, Mikhail Jacob, Jason Freeman and Brian Magerko (25+5’)
- A Minimal Template for Interactive Web-Based Demonstrations of Musical Machine Learning. Vibert Thio, Hao-Min Liu, Yin-Cheng Yeh and Yi-Hsuan Yang (25+5’)
12:00-13:00 Keynote by Masataka Goto: Intelligent Music Interfaces Based on Music Signal Analysis
13:00 Workshop Closing
Keynote Talk by Masataka Goto
Intelligent Music Interfaces Based on Music Signal Analysis
In this talk I will present intelligent music interfaces demonstrating how end users can benefit from automatic analysis of music signals (automatic music-understanding technologies) based on signal processing and/or machine learning. I will also introduce our recent efforts in deploying research-level music interfaces as public web services and platforms that enrich music experiences. They can analyze and visualize music content on the web, enable music-synchronized control of computer-graphics animation and robots, and provide an audience of hundreds with a bring-your-own-device experience of music-synchronized animations on smartphones. In the future, further advances in music signal analysis and the music interfaces built on it will make interaction between people and music more active and enriching.
Masataka Goto, National Institute of Advanced Industrial Science and Technology (AIST)
Masataka Goto received the Doctor of Engineering degree from Waseda University in 1998. He is currently a Prime Senior Researcher at the National Institute of Advanced Industrial Science and Technology (AIST). In 1992 he was one of the first to start working on automatic music understanding and has since been at the forefront of research in music technologies and music interfaces based on those technologies. Over the past 26 years he has published more than 250 papers in refereed journals and international conferences and has received 46 awards, including several best paper awards, best presentation awards, the Tenth Japan Academy Medal, and the Tenth JSPS PRIZE. He has served as a committee member of over 110 scientific societies and conferences, including the General Chair of the 10th and 15th International Society for Music Information Retrieval Conferences (ISMIR 2009 and 2014). In 2016, as the Research Director he began a 5-year research project (OngaACCEL Project) on music technologies, a project funded by the Japan Science and Technology Agency (ACCEL, JST).
Today’s music ecosystem is permeated by digital technology — from recording to production to distribution to consumption. Intelligent technologies and interfaces play a crucial role during all these steps. On the music creation side, tools and interfaces like new sensor-based musical instruments or software like digital audio workstations (DAWs) and sound and sample browsers support creativity. Generative systems can support novice and professional musicians by automatically synthesising new sounds or even new musical material. On the music consumption side, tools and interfaces such as recommender systems, automatic radio stations, or active listening applications allow users to navigate the virtually endless spaces of music repositories.
Both ends of the music market therefore heavily rely on and benefit from intelligent approaches that enable users to access sound and music in unprecedented manners. This ongoing trend draws from manifold areas such as interactive machine learning, music information retrieval (MIR) — in particular content-based retrieval systems — recommender systems, human-computer interaction, and adaptive systems, to name but a few prominent examples. Following the successful first edition held in Tokyo in 2018, the 2nd Workshop on Intelligent Music Interfaces for Listening and Creation (MILC 2019) will bring together researchers from these communities and provide a forum for the latest trends in user-centric machine learning and interfaces for music consumption and creation.
Exemplary Topics of Interest
- Music and audio search and browsing interfaces
- Adaptive music user interfaces
- Music learning interfaces
- Music recommender systems
- Gamification in music interfaces
- Novel visualization paradigms
- New technologies for human expression, creativity, and embodied interaction
- Machine learning for new digital musical instruments
- Gestural interfaces for music creation and listening
- Accessible music making technologies
- Intelligent systems for music composition
- User modeling for personalized music interfaces
All papers must be original and not simultaneously submitted to another journal or conference. We solicit three types of submissions:
- Full papers (up to 6 pages)
- Short papers (up to 4 pages)
- Demo papers (up to 4 pages)
Submissions must follow the standard SIGCHI format, using one of the following templates: LaTeX, Microsoft Word. Note that references count towards the page limits.
Please anonymize your submission (double-blind reviewing policy) and submit your paper via EasyChair. Submissions will be reviewed by at least three members of the program committee. Authors of accepted submissions will be required to attend and give a presentation at the workshop. The workshop proceedings are to be published in the joint proceedings of the ACM IUI 2019 Workshops.
Important Dates (all deadlines 23:59 Anywhere on Earth)
- Deadline for paper submission (extended):
December 14th, 2018 (originally December 7th, 2018)
- Acceptance notification for paper submissions:
January 14th, 2019
- Deadline for final copy of accepted papers:
February 15th, 2019
- Workshop date: March 20, 2019
Organizers
- Peter Knees, TU Wien, Austria
- Markus Schedl, Johannes Kepler University Linz, Austria
- Rebecca Fiebrink, Goldsmiths, University of London, UK
Program Committee
- Baptiste Caramiaux, IRCAM
- Mark Cartwright, NYU
- Bruce Ferwerda, Jönköping University
- Fabien Gouyon, Pandora Inc.
- Masataka Goto, AIST
- Dietmar Jannach, AAU Klagenfurt
- Vikas Kumar, University of Minnesota
- Cárthach Ó Nuanáin, melodrive Inc.
- Adam Roberts, Google
- Gabriel Vigliensoni, McGill University
- Richard Vogl, TU Wien