TENOR 2016


Friday 27 May 2016
Centre for Music and Science
Cambridge University Faculty of Music
West Road - Cambridge CB3 9DP

Composing and Improvising with DJster Autobus

This workshop centres on the question of how to control a powerful generative music application from within a music notation environment. The focus will be on DJster, a port of Clarence Barlow's legacy software AUTOBUSK. Following the original design concept, DJster exists in two flavours, implemented in plain Max as well as Max for Live: a Live device for real-time generation of musical events, and a "Scorepion" (a plug-in for MaxScore) for non-real-time generation aimed at notating musical structures to compose with. The two implementations can share presets, allowing users to set and tweak parameters in the Live device and export those settings to the composition/notation environment. During the workshop I will explain the technology behind DJster, give an introduction to both implementations and discuss practical applications in experimental, microtonal and film music composition.
Georg Hajdu
Hajdu, born in Göttingen, Germany, is among the first composers of his generation dedicated to the combination of music, science and computer technology. After studies in Cologne and at the Center for New Music and Audio Technologies (CNMAT), he received his Ph.D. from UC Berkeley. In 1996, following residencies at IRCAM and the ZKM, Karlsruhe, he co-founded the ensemble WireWorks. In 1999, he produced his full-length opera Der Sprung. In May 2002, his Internet performance environment Quintet.net was employed in a Munich Biennale opera performance. He co-founded the European Bridges Ensemble for networked music performance in 2005 and organized several editions of the Music in the Global Village Conference (Budapest, 2007 and 2008) as well as the first conference entirely dedicated to the Bohlen-Pierce scale (Boston, 2010).
In addition to his compositions, which are characterized by a pluralistic attitude and have earned him several international prizes, among them the IBM Prize of the Ensemble Modern, Georg Hajdu has published articles on several topics at the intersection of music and science. His areas of interest include multimedia, microtonality, and algorithmic, interactive and networked composition. As a software designer he has created tools for musicians, including the music notation software MaxScore (co-developed with New York guitarist and composer Nick Didkovsky).
Georg Hajdu is professor of multimedia composition at the Hamburg University of Music and Theatre and founding director of the Center for Microtonal Music and Multi-Media (ZM4).
Participants should bring a recent Windows laptop or MacBook with Max 7.1 and Ableton Live 9.6 installed (authorisation of the software is not required). They will also be required to install MaxScore and DJster from http://computermusicnotation.com and http://djster.georghajdu.de respectively. Temporary MaxScore licenses will be provided.
Georg Hajdu (georghajdu@mac.com)

The Trinity Test: Workshop on Unified Notations for Practices and Pedagogies in Music and Programming

Source code and music notation both seek to provide expressive and efficient specifications that define the behaviour of systems or agents over time - be they computers or musicians - for subsequent live, interactive, or repeatable execution. This workshop looks at the creative and pedagogical opportunities for notations that draw on and bridge both programming and musical practices and ontologies. Exploration and experimentation are supported by the Manhattan music coding environment (Nash, 2014), which combines a text-based pattern sequencer notation with spreadsheet formula-style code expressions, enabling a continuum of musical expression from traditional/manual note arranging to increasingly algorithmic and generative configurations. With the aid of Sam Aaron, other technologies that bridge programming and music, such as Sonic Pi, will also be considered and explored.
More information:
C. Nash, “Manhattan: End-User Programming for Music,” in Proceedings of NIME 2014, pp. 28–33.
Friday 27 May, from 12:30 to 15:30

12:30 Welcome and Introductions (15m)
Host and delegates introduce themselves, briefly describing their background and respective areas of interest or expertise.
12:45 Opening Remarks (30m, Chris Nash & Sam Aaron)
Audio/visual presentation with live software examples (e.g. Manhattan, Sonic Pi, Excel, Logic, Max/MSP), highlighting concepts, notations and usability issues in digital music practices or programming language design.
13:15 Initial Questions / Discussion (15m)
Open discussion amongst delegates of the issues raised in the opening presentation, specifically in the context of the delegates’ own experiences and research interests. Used to help frame and guide subsequent interactive sessions.
13:30 Interactive Session 1 (Example Exercises, 45m)
Structured exercises from provided materials, designed to introduce delegates to the fundamentals of the Manhattan tool, while also providing specific examples of programming concepts (e.g. variables, arrays, iteration, functions, conditional statements) presented in a practical musical context.
14:15 Interactive Session 2 (Experimentation, 60m)
Drawing on and combining their own musical and programming experiences, delegates are invited to join one of two activity groups, both using Manhattan (or other tools, such as Sonic Pi) to experiment with new ideas: Group CM (Code to Music) discusses, explores, and develops musical expressions of ideas from programming or algorithmic / generative music; Group MC (Music to Code) explores the use of formulae to encapsulate traditional music practices, forms, or works (common practice music, MIDI arranging, electronic, folk and popular styles).
15:15 Closing Discussion (15m)
Brief review of issues and findings (or research questions) that emerged, and call for interest in further research / collaboration.
Chris Nash (principal organizer)
is a professional programmer and composer – currently Senior Lecturer in Music Technology at UWE Bristol, teaching software development for audio, sound, and music (DSP, C/C++, Max/MSP). His research focuses on digital notations, HCI in music, virtuosity, end-user computing, systematic musicology, and pedagogies for music and programming.
Sam Aaron (guest speaker)
is a musician, researcher, and developer at both Cambridge University and the Raspberry Pi Foundation. He is an expert in programming language design and semantics, and the creator of the Sonic Pi project, which uses music to engage children and other non-coders in programming. He is also an active live-coding performer.
Sam Hunt (technical support, recorder)
is a post-graduate researcher at UWE Bristol, currently completing a PhD in music content analysis and generative applications in digital music composition, supervised by Chris Nash.
As the workshop is an exploration of end-user programming, no specific expertise is required. However, it would particularly suit people with backgrounds, research interests or experience in: notation, composition (modern or common-practice), sequencing, programming (usage and semantics), pedagogy, virtuosity, live coding or usability.
Participants are likely (and encouraged) to bring and use their own laptops; the software will be supplied on USB sticks, which participants may keep. They should indicate their preferred OS (OS X or Windows), or whether they need a computer to be provided.
Chris Nash (chris.nash@uwe.ac.uk)

The Theory and Practice of Animated Music Notation

This workshop has been designed to investigate the theory and practice of Animated Music Notation, and will be divided into four sections. The purpose of Section 1 is to provide an historical and technological context, as well as a general introduction to contemporary animated scoring practices and animated music notation. In Section 2, a taxonomy of high-level and low-level animated score functionalities and symbols will be posited in order to provide a consistent terminology with which to approach animated scores in theory and practice. In Section 3, attendees will be encouraged to participate in a series of hands-on explorations of a variety of animated score functionalities, and in Section 4, a theory of animated music notation will be presented alongside an extension of the hands-on practices experienced in Section 3. Each section has been designed to encourage discussion both during and upon its completion. Workshop attendees are strongly encouraged to bring instruments, although this is by no means necessary.
Ryan Ross Smith
Fremont Center, NY-based composer and performer Ryan Ross Smith has performed as a pianist and electronicist throughout the US, Europe and UK, including performances at MoMA, PS1, La MaMa [NYC] and Le Centre Pompidou [Paris, FR], and has had his compositions performed throughout North America, Iceland, Australia and the UK. He has presented his work and research on Animated Music Notation at conferences including NIME, TENOR 2015, ICLI, ISEA and the Deep Listening Conference, and has lectured at colleges and universities. Smith is currently a PhD candidate at the Rensselaer Polytechnic Institute in Troy, NY. For more information, please visit ryanrosssmith.com and animatednotation.com
Attendees are encouraged to bring instruments of some sort, but, of course, this is not required. That said, an instrument can be as simple as two rocks.
Ryan Ross Smith (ryanrosssmith@gmail.com)

Practical information

Workshops will take place at:

Centre for Music and Science
Cambridge University Faculty of Music
West Road
Cambridge CB3 9DP

Walking from Anglia Ruskin to West Road takes about 30 minutes.

Participation is free; however, you must register, as the number of places is limited.
You can register for the workshops when you register for the TENOR conference: simply tick the workshop options.