What we do

We hear a new world of sound that is ubiquitous, ever-changing and one that we own, since it originates within us.

We are creating the sound of the 2020s today: collaborative, contextual, behaviouristic biomusic. Informed by nature and made possible through science, it lets anyone with a pulse enjoy an emergent, authentic and personally satisfying experience that goes beyond music and genres, straight to the body and directly to the brain.

We would like you to join us in our musical quest to unite people, places and things in a new, infinite world of sound.

Sign up for the beta here

How it works

We want to make a continuous, dynamic music experience out of everyday life, whatever it may be. The world and our actions in it form a complex, adaptive event process, which we turn into sounds that follow musical rules and interact with each other.

This kind of music is relevant, informative and engaging: it requires no active operation, instead using natural movements and inferred states to reflect the user's relationship with the surrounding environment.

The environment provides musical matter, energy and information, including that of other users. We are developing a massively multi-performer music system, holarchic in form, built on social music and emotional contagion.

Sensor data from the phone is processed and mapped to hundreds of musical parameters. Information pulled from web services or from other devices in your surroundings contributes to the synthesis model.
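
As a simplified sketch of this mapping (the parameter names, ranges and conversion factors below are illustrative assumptions, not our production synthesis model), each normalised sensor reading can drive one or more synthesis parameters at once:

```python
# Minimal sketch of sensor-to-parameter mapping (hypothetical names and ranges,
# not the production synthesis model).

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Clamp a sensor reading to its expected range and map it linearly."""
    value = max(in_lo, min(in_hi, value))
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

# One sensor reading can drive several musical parameters at once.
def map_accelerometer(magnitude_g):
    return {
        "filter_cutoff_hz": scale(magnitude_g, 0.0, 2.0, 300.0, 8000.0),
        "reverb_mix":       scale(magnitude_g, 0.0, 2.0, 0.6, 0.1),
    }

def map_heart_rate(bpm):
    return {
        "lfo_rate_hz":  scale(bpm, 40, 180, 0.1, 3.0),
        "note_density": scale(bpm, 40, 180, 0.2, 1.0),
    }

if __name__ == "__main__":
    params = {**map_accelerometer(1.2), **map_heart_rate(72)}
    print(params)
```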

Biomechanical movements and other activities create a dynamic soundscape to be physically explored in time. Optional wearables, like activity bands or sports watches, measure heart rate, skin temperature, galvanic skin response, movement and more.

Respiration phase, amplitude and rate create a natural breath controller and provide a strong biofeedback channel. Heart rates between users can be synchronised (we support Ableton Link). We can already send and receive musical phrases that depend on body postures, gestures, distance or context.
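
A minimal sketch of how a respiration signal could become a breath controller, assuming a simple smoothing-and-normalisation scheme (the constants and class below are illustrative, not our actual signal chain):

```python
# Sketch: turn a raw respiration signal into a breath-controller value in [0, 1].
# The smoothing constant and output mapping are illustrative assumptions.

class BreathController:
    def __init__(self, smoothing=0.1):
        self.smoothing = smoothing   # exponential smoothing factor per sample
        self.level = 0.0             # smoothed respiration amplitude
        self.lo = float("inf")       # running minimum (full exhale)
        self.hi = float("-inf")      # running maximum (full inhale)

    def update(self, sample):
        # Smooth the raw sensor sample to suppress noise.
        self.level += self.smoothing * (sample - self.level)
        # Track the observed breathing range so the output stays normalised.
        self.lo = min(self.lo, self.level)
        self.hi = max(self.hi, self.level)
        span = self.hi - self.lo
        return 0.0 if span == 0 else (self.level - self.lo) / span

if __name__ == "__main__":
    import math
    ctrl = BreathController()
    # Simulated respiration: ~0.25 Hz sine sampled at 20 Hz.
    for n in range(200):
        value = ctrl.update(math.sin(2 * math.pi * 0.25 * n / 20))
    print(round(value, 2))
```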

The app learns and detects gestures and hand positions. You can also set locations and beacons for further interactions.
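
A toy illustration of gesture learning and detection using nearest-neighbour matching on feature vectors (the features and distance threshold are assumptions, not our actual recogniser):

```python
# Toy sketch of gesture learning and detection via nearest-neighbour matching.
# The feature vectors and distance threshold are illustrative assumptions.

import math

class GestureDetector:
    def __init__(self, threshold=0.5):
        self.templates = {}        # gesture name -> recorded feature vector
        self.threshold = threshold

    def learn(self, name, features):
        self.templates[name] = features

    def detect(self, features):
        """Return the closest learned gesture, or None if nothing is close enough."""
        best, best_dist = None, float("inf")
        for name, template in self.templates.items():
            dist = math.dist(features, template)
            if dist < best_dist:
                best, best_dist = name, dist
        return best if best_dist <= self.threshold else None

detector = GestureDetector()
detector.learn("raise_hand", [0.1, 0.9, 0.0])
detector.learn("shake", [0.8, 0.2, 0.7])
print(detector.detect([0.15, 0.85, 0.05]))  # "raise_hand"
```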

Depending on your current context, our sound engine automatically adapts the soundscape created by your movement in the city. Tempo is controlled by your step rate when walking or running, by your vehicle's speed when travelling, or synced to your heart rate when standing still.
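
In rough pseudocode terms, the tempo source selection might look like this (the activity labels and conversion factors are assumptions for illustration):

```python
# Sketch of tempo source selection (activity labels and conversion factors
# are illustrative assumptions, not the app's exact logic).

def choose_tempo(activity, step_rate_spm=None, speed_kmh=None, heart_rate_bpm=None):
    """Return a tempo in BPM from the most relevant signal for the activity."""
    if activity in ("walking", "running") and step_rate_spm:
        return step_rate_spm                     # one beat per step
    if activity == "driving" and speed_kmh:
        return 60 + speed_kmh                    # speed offsets a base tempo
    if activity == "stationary" and heart_rate_bpm:
        return heart_rate_bpm                    # sync to the pulse
    return 90                                    # fallback tempo

print(choose_tempo("running", step_rate_spm=160))     # 160
print(choose_tempo("stationary", heart_rate_bpm=62))  # 62
```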

A special ambient mode is triggered by a sitting or supine position.

Local weather pulled from a meteorological service controls several aspects of the sound. Signals include wind speed, wind direction, air temperature, air pressure, humidity, visibility, cloud cover, UV index and weather type.
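
For illustration, each weather signal can be normalised and assigned to a different layer of the soundscape (the specific assignments below are hypothetical):

```python
# Sketch: map weather observations to soundscape parameters
# (the specific assignments are hypothetical).

def normalise(value, lo, hi):
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def weather_to_params(obs):
    return {
        "noise_layer_level": normalise(obs["wind_speed_ms"], 0, 25),
        "brightness":        normalise(obs["air_temperature_c"], -20, 35),
        "pad_darkness":      normalise(obs["cloud_cover_pct"], 0, 100),
        "shimmer_amount":    normalise(obs["uv_index"], 0, 11),
    }

observation = {
    "wind_speed_ms": 7.0,
    "air_temperature_c": 12.0,
    "cloud_cover_pct": 80.0,
    "uv_index": 2.0,
}
print(weather_to_params(observation))
```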

Additionally, time of day, day of week, seasons and astronomical phenomena create slow, sometimes large changes in the melodies or the sound itself. Altitude, weather and the position of your vehicle form part of the music. Course, heading and proximity to other elements create higher-level changes in the music.
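
One way to picture these slow modulators, shown here only for the clock and calendar, is as normalised phases that cycle over a day, a week or a year (a simplified assumption, not our exact model):

```python
# Sketch: derive slow modulation signals from the clock and calendar
# (a simplified assumption, not the exact production model).

from datetime import datetime

def slow_modulators(now=None):
    now = now or datetime.now()
    day_phase = (now.hour * 3600 + now.minute * 60 + now.second) / 86400.0
    week_phase = now.weekday() / 7.0
    year_phase = (now.timetuple().tm_yday - 1) / 365.0
    return {"day": day_phase, "week": week_phase, "year": year_phase}

# Each phase in [0, 1) can drive e.g. a very slow drift of scale, timbre or melody.
print(slow_modulators(datetime(2017, 6, 21, 12, 0)))
```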

Waypoints, destinations and landmarks influence musical parameters from a distance. Their proximity, character and bearing are made audible in the music.
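
A rough sketch of the underlying geometry: great-circle distance and bearing to a landmark, with a hypothetical mapping of distance to loudness:

```python
# Sketch: compute distance and bearing to a landmark and make them audible
# (the mapping of distance to gain is a hypothetical example).

import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (metres) and initial bearing (degrees) between two points."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist, bearing

def landmark_gain(distance_m, audible_radius_m=500.0):
    """Closer landmarks sound louder; silent beyond the audible radius."""
    return max(0.0, 1.0 - distance_m / audible_radius_m)

dist, brg = distance_and_bearing(60.1699, 24.9384, 60.1719, 24.9414)
print(round(dist), round(brg), round(landmark_gain(dist), 2))
```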

In addition to geofences, we set up Estimote beacons in special locations, such as shops. Footfall and dwell time become musical parameters. B2B customers can announce and define their physical presence musically.
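
As a sketch, dwell time inside a beacon region can be accumulated and turned into a musical parameter such as layer intensity (the thresholds and mapping below are hypothetical):

```python
# Sketch: turn beacon dwell time into a musical parameter
# (the threshold and output mapping are hypothetical).

class BeaconDwell:
    def __init__(self, full_intensity_after_s=120.0):
        self.full_after = full_intensity_after_s
        self.entered_at = None

    def on_enter(self, timestamp_s):
        self.entered_at = timestamp_s

    def on_exit(self):
        self.entered_at = None

    def intensity(self, timestamp_s):
        """0 outside the region, ramping to 1 after a couple of minutes inside."""
        if self.entered_at is None:
            return 0.0
        dwell = timestamp_s - self.entered_at
        return min(1.0, dwell / self.full_after)

shop = BeaconDwell()
shop.on_enter(0.0)
print(shop.intensity(30.0))   # 0.25 after 30 seconds in the shop
print(shop.intensity(180.0))  # 1.0 after three minutes
```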

Coupling places and movement with musical entrainment redefines our relationship to music and results in the embodiment of sound.

Partners

Our hardware ecosystem now includes Suunto's Movesense platform, with the upcoming respiration sensors from our partner iBreve to follow soon. We welcome additional modules, sensors and APIs for support!

B2B services

You can include our tech in your app or your event. We are happy to provide:

- custom development based on our tech

- licensed or white label apps

Case studies

Beddit tracks your sleep using a thin-film ballistocardiography sensor on the mattress. We prototyped and created different smart alarm versions and sleep musification models. The product was adapted for sonification and visualisation of biosignals at marketing events.

Beddit demo, Slush 2014.

Marfle: We researched and created a vehicular sonification for ships, using technology by Marfle, a fleet monitoring company. A limited-edition vinyl record was produced from the resulting music.

Helsinki Design Week 2015.

The Art and Science of Biomusic

Nobel laureate Roger Sperry argued that the vertebrate brain evolved in order to control motor activity, its basic function being to transform sensory patterns into patterns of motor coordination. We translate data derived from these patterns into music information.

Most music we listen to has a function: to help us focus, relax, mask noise, vent our mood or escape into another world. Often we are focused on the task at hand and effectively in background listening mode. Vocal music, the dominant form of popular music, is not ideal as background music, as it causes distraction.

Music listening is often effectively cognitive entrainment that accompanies, relaxes and stimulates us during our everyday activities. We can now create such soundscapes that adapt to our movements and actions, time of day, seasons and weather, in order to create variations and changes while informing us of events in our environment.

Why it works

Music appreciation is highly contextual, which lets us make certain assumptions based on available data. For example, implicit actions express underlying social drivers and are good indicators of future activities.

According to Juslin, Friberg, & Bresin (2002), listeners consider tempo curves derived from human motion profiles more musical and expressive than simple tempo changes. Previously, Todd (1992, 1994, 1995, 1999) proposed a model that relates intensity contours in music to human motion perception. Andrei C. Miu and Jonna K. Vuoskoski write that process theories have identified empathy and contagion as mechanisms by which music may induce emotions.

The ancestral mechanism that gives us our disposition to music can be harnessed for subconscious, biomechanical generation of synth music. A closed biofeedback loop is often experienced by the participant, for example through pulse sonification or locomotor-respiratory coupling. Interoception deepens the embodiment of sound. Context- and location-awareness enables situated cognition, or exteroception. Passive, non-volitional operation allows the dorsal system to communicate location-related musical information using the peripheral attention system of the brain. This can be used to improve situational awareness.

Theories of embodied cognition and situated cognition state that we respond emotionally to music more than half of the time that we listen to it. Somatic theory states that bodily feedback modulates the experience of emotion.

Research has demonstrated that users gravitate spontaneously towards media in which proximity can be simulated accurately. The technology becomes more successful and effective as the perceived proximity increases.

Using this methodology, we take one complex phenomenon, music, and couple it with another, human locomotion. The end result is functional music that makes people move in various ways.

Sound becomes a personalised two-way channel of information. The world becomes a set of ever-changing instruments that you play with your activities. Yet the world also plays you, through topography, architecture, vehicles and other participants. As you leverage your musical subconscious, you become part of the circuit, of the synth. You become sound.

Benefits

Biomusic.cc empowers users musically by turning their movements, activities and environment into a music generator that accompanies and sometimes drives their daily life.

Combining sensory modalities increases the robustness of perception. When cues are presented simultaneously in several modalities, they can be detected more accurately, faster and at lower thresholds than when presented separately.

It is possible to increase learning by reinforcing information through more than one modality. Coupling auditory and visual stimuli that communicate the same information may therefore improve processing and memory.

According to Joel Beckerman (Manmade Music), sound triggers are associated with reward, which drives behaviour.

The app enables smooth transition from exploratory mode (dorsal stream, everyday listening) to information-extraction mode (ventral stream, active musical listening). Not all aspects are under user control, however. Entrainment to environmental stimuli can be termed asymmetrical entrainment, because one cannot influence the entraining rhythm, such as the alternation of light and dark, seasons or commuter traffic schedules. Circadian, infradian and ultradian rhythms are sonified in our interaction model.

Our app is different because:

- Social mode adds musical user-to-user communication and emergent group behaviour.

- Zero-UI minimises distraction and screen-time.

- Biofeedback encourages vigorous everyday activities.

- Synthesis-based audio ensures interesting and dynamic soundscapes.

- IoT integration creates situated, hyperlocal experiences that can be commercialised.

Research

Producing expressive and entertaining music based on biosignal measurement is an interesting research topic. We have so far published one paper on our technology, at the 11th International Symposium on Intelligent Data Analysis in October 2012.

Paalasmaa, J., Murphy, D. J. & Holmqvist, O., 2012. Analysis of Noisy Biosignals for Musical Performance. 11th International Symposium on Intelligent Data Analysis (IDA 2012). Lecture Notes in Computer Science 7619, pp. 241–252. Springer, Heidelberg. doi: 10.1007/978-3-642-34156-4_23

We are honoured to have been chosen to present our work in a demo at the 10th Audio Mostly conference at Queen Mary University of London, in August 2017.

Some videos


An early prototype connected to Korg Gadget.


Biomusic samples: