Design Approach

April 21st, 2018

Equations

I plan on using statistical equations to characterize the distribution of frequencies that make up a particular formant. For example, if I analyze each member of my group singing a particular vowel, I can collect the formant frequencies and build a distribution to find the mean frequency of each formant for that vowel. Since men’s and women’s formants differ slightly, I will also likely need to compute those differences to see whether blend among women differs from blend among men.

I will likely use the following equations, which I’ve sourced from http://integral-table.com/downloads/stats.pdf:

Mean:          μ = (1/n)∑xi

Std. Deviation:          sx = sqrt[ (1/(n−1)) ∑ (xi − x̄)² ]

z-score:          z = (x−μ)/σ

Correlation:          r = (1/(n−1)) ∑ (from i=1 to n) [(xi − x̄)/sx][(yi − ȳ)/sy]
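
To make these formulas concrete, here is a minimal Python sketch (using NumPy) of how I might apply them to formant measurements; the frequency values below are made-up placeholders, not real data.

```python
import numpy as np

# Hypothetical F1 and F2 measurements (Hz) for five group members
# all singing the same vowel -- placeholder values, not real data.
f1_hz = np.array([710.0, 695.0, 740.0, 725.0, 688.0])
f2_hz = np.array([1090.0, 1075.0, 1120.0, 1102.0, 1068.0])

n = len(f1_hz)
mean_f1 = f1_hz.sum() / n                                 # mu = (1/n) * sum(x_i)
s_f1 = np.sqrt(((f1_hz - mean_f1) ** 2).sum() / (n - 1))  # s_x, with the n-1 denominator
z_f1 = (f1_hz - mean_f1) / s_f1                           # z-score of each member's F1

# Correlation between the members' F1 and F2 values (same r as the formula above).
r = np.corrcoef(f1_hz, f2_hz)[0, 1]

print(f"mean F1 = {mean_f1:.1f} Hz, s = {s_f1:.1f} Hz, r(F1, F2) = {r:.2f}")
print("z-scores:", np.round(z_f1, 2))
```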

 

Tools

Praat

In order to analyze the different frequencies present when a speaker says or sings a certain vowel, I will be using a program called Praat. Developed at the University of Amsterdam, this software allows a user to record a vocal sound and then analyze the different frequencies produced. I plan to use the functions that create a spectrogram and a waveform. The spectrogram not only shows dark bands for the different formants, but can also superimpose formant tracks so the user can better visualize how each formant changes over time. There is also an annotation feature that lets the user segment the speech signal into phonemes, which produces readily understandable images to present alongside the data.

Here’s a screenshot of analyzing a word in Praat:
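
Although Praat itself is a desktop program, its analysis functions can also be scripted from Python through the third-party parselmouth library, so here is a minimal sketch of how I might pull a formant track out of a recording; the file name is a hypothetical placeholder, and this assumes parselmouth (praat-parselmouth) is installed.

```python
import numpy as np
import parselmouth  # pip install praat-parselmouth

# Load a (hypothetical) recording of one singer holding a vowel.
snd = parselmouth.Sound("vowel.wav")

# Run Praat's Burg formant analysis; by default it tracks up to
# five formants below 5500 Hz.
formants = snd.to_formant_burg()

# Sample F1 and F2 every 10 ms across the recording.
times = np.arange(snd.xmin, snd.xmax, 0.01)
f1 = [formants.get_value_at_time(1, t) for t in times]
f2 = [formants.get_value_at_time(2, t) for t in times]

# Frames where Praat could not estimate a formant come back as NaN.
print("mean F1:", round(np.nanmean(f1)), "Hz")
print("mean F2:", round(np.nanmean(f2)), "Hz")
```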

Audacity

I plan on doing further processing of the audio I collect in Audacity. Audacity allows for more intricate audio editing than Praat, so I will likely use it for filtering and for attempting to alter formants in order to improve the blend in audio segments.
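
I expect to do this kind of editing interactively in Audacity, but the same sort of filtering can be sketched in Python as well; here is a minimal band-pass example (assuming SciPy, with a hypothetical file name and a made-up frequency band) that keeps only the region around one formant.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

# Load a (hypothetical) mono recording of a sung vowel.
rate, samples = wavfile.read("segment.wav")
samples = samples.astype(np.float64)

# Band-pass filter around a made-up formant region, e.g. 500-1000 Hz.
low_hz, high_hz = 500.0, 1000.0
sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=rate, output="sos")

# Zero-phase filtering so the timing of the formant is not shifted.
filtered = sosfiltfilt(sos, samples)

# Write the result back out (cast to 16-bit integers for a standard WAV file).
wavfile.write("segment_bandpassed.wav", rate, filtered.astype(np.int16))
```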

Python

Things are still in the works, but I hope to have a component of my project that takes in an audio sample and returns which vowel is being produced. If that proves too difficult or not applicable enough to my project, I hope to find another way to use Python in how I analyze formants and calculate blend, with the goal of creating an interface through which someone who knows nothing about formants or the science of sound waves can still analyze blend. This will likely live in a Jupyter notebook, since it makes for an approachable user interface.
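
As a first sketch of that vowel-identification idea, the classifier could be as simple as comparing a sample’s first two formants against reference values; the numbers below are rough, approximate averages for spoken vowels and are only placeholders until I have measurements from my own recordings.

```python
# Rough, approximate F1/F2 reference values in Hz -- placeholders only;
# the real reference points would come from my own recordings.
VOWEL_REFERENCES = {
    "i": (270, 2290),
    "e": (530, 1840),
    "a": (730, 1090),
    "o": (570, 840),
    "u": (300, 870),
}

def classify_vowel(f1: float, f2: float) -> str:
    """Return the reference vowel whose (F1, F2) pair is closest to the input."""
    def distance(vowel: str) -> float:
        ref_f1, ref_f2 = VOWEL_REFERENCES[vowel]
        return (f1 - ref_f1) ** 2 + (f2 - ref_f2) ** 2
    return min(VOWEL_REFERENCES, key=distance)

# Example: a sample with F1 around 720 Hz and F2 around 1100 Hz comes out as "a".
print(classify_vowel(720, 1100))
```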

Introduction and Background

April 13th, 2018

My project considers the science behind a cappella singing. I will be studying the waveforms of different voices, individually and combined, in order to examine the effects of frequency and vowel/phoneme quality on blend. The motivation for this project comes from my experience in my a cappella group, Under Construction, combined with what I have learned in my phonetics and phonology course.

When my group rehearses our music, one of the most challenging things to do (besides staying in key) is blending with the other members of the group. Even when we’re all singing the correct notes, if someone’s vowel differs from the rest, they stick out and the piece sounds bad. We need to blend on high notes as well as low notes; basses need to be able to blend with sopranos. Similarly, we need to sound uniform when we’re singing the same vowels/phones, but I wonder whether some phonemic sounds are more conducive to blend based solely on the phonetic properties visible in their waveforms.

I plan to complete this analysis by first recording members of my a cappella group in Audacity. I will then import the files into a phonetics program called Praat, in which I will graph and analyze the formants present in different sounds and examine how they differ when multiple people are singing, when those people are or are not attempting to blend, and so on.
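
As one concrete (and hedged) example of that comparison step, once I have two singers’ formant tracks sampled at the same time points, a simple way to quantify their mismatch is the average difference between the tracks; the values below are hypothetical.

```python
import numpy as np

# Hypothetical F1 tracks (Hz) for two singers holding the same vowel,
# sampled at the same time points after analysis in Praat.
soprano_f1 = np.array([690.0, 700.0, 705.0, 702.0, 698.0])
bass_f1 = np.array([655.0, 668.0, 690.0, 701.0, 699.0])

# Mean and maximum absolute difference: simple "mismatch" numbers in Hz.
mean_mismatch = np.mean(np.abs(soprano_f1 - bass_f1))
max_mismatch = np.max(np.abs(soprano_f1 - bass_f1))

print(f"mean F1 mismatch: {mean_mismatch:.1f} Hz")
print(f"max F1 mismatch: {max_mismatch:.1f} Hz")
```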

Some research has been done to study choral blend scientifically. I found an article in the Journal of Research in Music Education called “An Acoustical Study of Individual Voices in Choral Blend” that discusses how singers’ formants differ when they are singing a solo piece versus trying to blend with a choir (https://doi.org/10.1177/0022429480028002…).

Here is a link to my a cappella group performing at a concert in a hall at Yale. This example is especially interesting because we’d never performed in this space before, and the room acoustics made it difficult to hear one another, which likely had a huge effect on our blend. https://www.youtube.com

Hopefully, through this project, I can analyze audio and video clips like the one above and figure out how to achieve better blend in a cappella music by going all the way down to the level of the waveforms.