Research 1

Prompting questions

Before starting my research, I decided it would be best to lay out the questions I was hoping to answer.

How will user input be mapped to sound properties?

In my initial plan, I created a table setting out some of my options for basic shape-to-sound property mapping. I will likely draw on these, but some research is needed to better understand the tactile relationship between these shape properties and the sound they “should” make.
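To make the idea of a property mapping concrete, here is a minimal sketch in Python. The property names (size, roughness, height) and the parameter ranges are illustrative assumptions, not the mappings from my actual plan:

```python
def map_shape_to_sound(size, roughness, height):
    """Map normalised (0..1) shape properties to sound parameters.

    The choices below are hypothetical examples of a mapping table:
    - larger shapes -> lower pitch (220 Hz down to 55 Hz)
    - rougher surfaces -> more harmonic richness
    - vertical position -> brightness
    """
    return {
        "frequency_hz": 220.0 * (0.25 ** size),
        "harmonic_richness": roughness,
        "brightness": height,
    }

params = map_shape_to_sound(size=0.5, roughness=0.2, height=0.8)
print(params["frequency_hz"])  # 110.0 Hz, halfway down the log range
```

The point of keeping the mapping in one function like this is that alternatives from the table can be swapped in and compared without touching the rest of the synthesis code.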

How can the gap between signal and perception be bridged?

The way a sound is perceived is very different from its actual signal properties. How can I write an efficient mapping function that feels synaesthetically responsive to the user, without feeling overly technical? Striking the balance between precision and simplicity will be difficult, but it is vital for the final product.
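One well-known instance of this gap is that pitch is roughly logarithmic in frequency and loudness roughly logarithmic in amplitude, so a linear control mapped directly to either feels wrong. A minimal sketch of perceptually motivated mappings (the ranges and dB floor are my own illustrative choices):

```python
import math

def slider_to_frequency(x, f_min=110.0, f_max=1760.0):
    """Map a linear 0..1 control to frequency so that equal control
    steps feel like equal pitch steps (logarithmic interpolation)."""
    return f_min * (f_max / f_min) ** x

def slider_to_amplitude(x, floor_db=-60.0):
    """Map a linear 0..1 control to amplitude so that equal control
    steps feel like roughly equal loudness steps (dB interpolation)."""
    if x <= 0.0:
        return 0.0  # treat the bottom of the range as silence
    db = floor_db * (1.0 - x)       # floor_db at x≈0, 0 dB at x=1
    return 10.0 ** (db / 20.0)

print(slider_to_frequency(0.5))  # geometric midpoint: 440.0 Hz
```

Hiding the logarithms inside small functions like these is one way to keep the user-facing side simple while the signal side stays precise.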

What will be used to control timbre?

Timbre is a quality of sound that is difficult to describe in terms of signal properties, as it is perceived very differently from listener to listener. I want to understand how I can give the user control over timbre, whether that takes the form of presets built from sampled instruments/sounds or some other configuration.
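If the preset route were taken, one simple representation would be additive synthesis: each preset is a list of relative harmonic amplitudes. The preset names and weights below are illustrative guesses, not measurements of real instruments:

```python
import math

# Hypothetical timbre presets: relative amplitudes of harmonics 1..n.
PRESETS = {
    "pure":   [1.0],                      # sine only
    "reedy":  [1.0, 0.0, 0.5, 0.0, 0.3],  # odd harmonics dominate
    "bright": [1.0, 0.7, 0.5, 0.35, 0.25],
}

def sample(preset, freq, t):
    """One sample of an additive tone at time t (seconds)."""
    return sum(
        amp * math.sin(2 * math.pi * freq * (n + 1) * t)
        for n, amp in enumerate(PRESETS[preset])
    )

# A "pure" preset is just a sine wave, so it starts at zero.
print(sample("pure", 440.0, 0.0))  # 0.0
```

Sampled instruments would replace the harmonic lists with recorded waveforms, but the user-facing idea is the same: timbre becomes a named choice rather than a set of raw signal parameters.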

Focuses

I also laid out the following research focuses to guide the work further:

  • Relationship between music and emotion
  • Relationship between sound and physical sensation
  • Relationship between hearing and sight, with a focus on how the senses interact

After searching on Google Scholar, I came across The Sonification Handbook. This source has been vital to my understanding of “sonification” as a concept and how it can be applied to technology. You can read my initial write-up here, where I detail my findings and takeaways.