SpeechMark Publications and Presentations

Improving the Accuracy of Automatic Detection of Emotions From Speech

Computers that can recognize human emotions could react appropriately to a user’s needs and provide more human-like interactions.

Some of the applications of emotion recognition:

  • Diagnostic tool for medical purposes
  • Onboard driver-monitoring systems that keep the driver alert if stress is detected [1]
  • Similar systems in aircraft cockpits
  • Online tutoring

Our contributions:

  • Use new combinations of acoustic feature sets to improve the performance of emotion recognition from speech (see the sketch after this list)
  • Provide a comparison of feature sets for detecting different emotions
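
For a rough, generic illustration of what combining feature sets can look like in code (not the specific features, classifier, or configuration evaluated in this work), the sketch below concatenates spectral statistics (MFCCs) with prosodic statistics (pitch and energy) and fits an off-the-shelf SVM. The librosa and scikit-learn calls, feature choices, and parameters are illustrative assumptions only.

```python
# Illustrative sketch only: combining a spectral feature set with a prosodic
# feature set for emotion classification. The features, parameters, and
# classifier are assumptions, not the configuration reported in this work.
import numpy as np
import librosa
from sklearn.svm import SVC

def combined_features(wav_path):
    """Concatenate MFCC statistics (spectral) with pitch and energy
    statistics (prosodic) into a single feature vector."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)       # spectral envelope
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)            # prosody: pitch track
    rms = librosa.feature.rms(y=y)[0]                        # prosody: frame energy
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        [f0.mean(), f0.std()],
        [rms.mean(), rms.std()],
    ])

def train_emotion_classifier(wav_paths, labels):
    """Fit a simple SVM on the combined feature vectors."""
    X = np.vstack([combined_features(p) for p in wav_paths])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf
```

Different feature-set combinations can then be compared simply by swapping which statistics are concatenated before training.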

Read More… Download PDF

Measurement of Child Speech Complexity Using Acoustic Landmark Detection

Dysphonia negatively affects speech intelligibility, especially in the presence of background noise; however, no clinical tool exists to measure this deficit. Landmark (LM) analysis may serve as the basis of such a tool.

The analysis identifies characteristic patterns of abrupt change in the speech signal over time and assigns each pattern a particular “landmark.” In this way, it describes speech as a sequence of LMs.
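
As a simplified sketch of that abrupt-change idea (not the detection algorithm used in this work), the snippet below flags frames where energy in a frequency band jumps sharply and records them as candidate landmarks; the band limits, thresholds, and frame sizes are arbitrary assumptions.

```python
# Simplified sketch of the abrupt-change idea behind landmark (LM) analysis.
# Band edges, thresholds, and frame sizes are illustrative assumptions; this
# is not the detector used in the study.
import numpy as np
from scipy.signal import stft

def candidate_landmarks(y, sr, band=(1200, 7000), hop_s=0.010, jump_db=9.0):
    """Return (time, '+'/'-') pairs where band energy rises (+) or
    falls (-) abruptly between successive analysis frames."""
    nperseg = int(0.025 * sr)
    f, t, Z = stft(y, fs=sr, nperseg=nperseg,
                   noverlap=nperseg - int(hop_s * sr))
    in_band = (f >= band[0]) & (f <= band[1])
    energy_db = 10 * np.log10(np.sum(np.abs(Z[in_band]) ** 2, axis=0) + 1e-12)
    marks = []
    for i, jump in enumerate(np.diff(energy_db)):
        if jump >= jump_db:
            marks.append((t[i + 1], "+"))    # abrupt onset (e.g., burst, frication)
        elif jump <= -jump_db:
            marks.append((t[i + 1], "-"))    # abrupt offset
    return marks                             # speech as a sequence of candidate LMs
```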

Read More… Download PDF

Measurement of Child Speech Complexity Using Acoustic Landmark Detection

An important measure of intelligibility in young children is the ability to articulate complex syllables [1-4]. The development of well-formed syllables in infancy has been shown to be a significant predictor of later communication skills [1-4]. Children with delayed speech acquisition do not show this same developmental trend, and deviations in syllable acquisition may serve as a diagnostic marker of future speech delay.

Read More… Download PDF

Deep Brain Stimulation May Contribute to Dysarthria in Patients with Parkinson’s Disease as Detected by Objective Measures

Dysarthria is found in approximately 80% of patients with Parkinson’s Disease (PD) and significantly limits communication as the severity worsens. Surgical implantation of deep brain stimulators (DBS) into the subthalamic nucleus (STN) has become more common and is an effective treatment for the motoric symptoms of PD. However, the effect of DBS on speech is equivocal.

We have developed computer algorithms that quickly and objectively analyze the speech of PD patients, allowing clinicians to assess the effect of DBS programming or other therapies on speech.

Read More… Download PDF

Spontaneous Vocalization Change in Infants with Severe Impairments Using visiBabble

Children with difficulty producing speech sounds can practice sounds in play, even prelinguistically. visiBabble is a prototype computer-based program that responds with customized animations to targeted types of infant vocalizations. The program automatically recognizes acoustic-phonetic characteristics of the vocalizations and can selectively respond to utterances with varying levels of complexity (e.g. multisyllable utterances).
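
A minimal sketch of such a contingent-response loop is shown below; the syllable-counting heuristic and the play_animation() hook are placeholders introduced for illustration and do not reflect visiBabble’s actual implementation.

```python
# Minimal sketch of a contingent-response loop of the kind described above.
# The syllable heuristic and play_animation() hook are placeholders; this is
# not the visiBabble implementation.
import numpy as np

def estimate_syllables(energy_db, threshold_db=-30.0):
    """Very rough syllable count: number of onsets where frame energy
    crosses the threshold from below."""
    voiced = energy_db > threshold_db
    onsets = np.flatnonzero(voiced[1:] & ~voiced[:-1])
    return int(voiced[0]) + len(onsets)

def respond_to_utterance(energy_db, play_animation, min_syllables=2):
    """Reinforce only utterances at or above the targeted complexity level."""
    if estimate_syllables(energy_db) >= min_syllables:
        play_animation()   # e.g., reward a multisyllable utterance with an animation
```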

This poster reports syllable production changes of three children with physical and speech impairments, ages 1-4, in response to visiBabble reinforcement. Results include immediate effects of visiBabble reinforcement on infant vocalizations as well as longer-term effects of home visiBabble practice on spontaneous sound production.

Read More… Download PDF

Landmark-based Analysis of Sleep-Deprived Speech

There is a common perception that speech articulation becomes “slurred”, or less precisely articulated, under sleep deprivation conditions. There have been few studies of speech under sleep deprivation. Morris et al. (1960) and Harrison & Horne (1997) found that listeners heard a difference between speech recorded under rested and sleep-deprived conditions.

Read More… Download PDF

A Platform for Automated Acoustic Analysis for Assistive Technology

While physical, neurological, oral/motor, and cognitive impairments can all significantly impact speech, people with disabilities may still be best able to communicate with computers through vocalization.

Aspects of vocal articulation are highly sensitive markers for many neurological conditions. As a source of data, recordings are

  • non-invasive,
  • inexpensive to collect, and
  • easily integrated into existing research and clinical protocols.

Read More… Download PDF

Objective Data on Clear Speech: Does it Help in Training Audiology Students?

Typical speakers instinctively use a “CLEAR” speaking style when they are instructed to “speak as if your listener is hearing impaired” or “speak as if your listener is not a native speaker of your language.” CLEAR speech is more intelligible to hearing-impaired listeners by about 17% (1, 2, 3). The ability to automatically detect differences between a speaker’s ordinary speech patterns and their most intelligible speech would clearly be helpful in clinical training and telemedicine applications.

Here we describe a Landmark-based computer program (4, 5) that detects articulatory differences between “CLEAR” and “CONVERSATIONAL” styles of speech. Landmark-based speech analysis takes advantage of the fact that important articulatory events, such as voicing and frication, show characteristic patterns of abrupt change in the speech signal. These patterns are detected by an automated computer system and assigned to a particular type of Landmark.
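
As a rough sketch of how the two styles might then be compared (not the program described here), the snippet below tallies landmark labels per second for a CLEAR and a CONVERSATIONAL recording and reports per-label rate differences; the detector output format and label set are assumptions.

```python
# Sketch of comparing Landmark profiles across speaking styles. The landmark
# lists are assumed to be (time, label) pairs from some detector; the label
# set and the rate comparison are illustrative assumptions.
from collections import Counter

def landmark_rates(landmarks, duration_s):
    """Count each landmark label and normalize to occurrences per second."""
    counts = Counter(label for _, label in landmarks)
    return {label: n / duration_s for label, n in counts.items()}

def style_difference(clear_lms, clear_dur_s, conv_lms, conv_dur_s):
    """Per-label rate difference between CLEAR and CONVERSATIONAL recordings."""
    clear = landmark_rates(clear_lms, clear_dur_s)
    conv = landmark_rates(conv_lms, conv_dur_s)
    return {lab: clear.get(lab, 0.0) - conv.get(lab, 0.0)
            for lab in set(clear) | set(conv)}
```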

Read More… Download PDF

Using Landmark Detection to Measure Effective Clear Speech

A number of studies have established that normal native speakers of a language know how to improve their intelligibility to listeners under intelligibility-challenging conditions (Uchanski, 2005).

This “Clear Speech” speaking style is significantly more intelligible to listeners; the average Clear Speech benefit is 15-17% for normal-hearing listeners in noise and for hearing-impaired listeners in quiet (Uchanski, 2005).

Read More… Download PDF

Acoustic Analysis of PD Speech

Parkinson’s disease (PD) is an idiopathic neurodegenerative disease caused by loss of dopamine-producing cells in the substantia nigra of the basal ganglia, affecting over one-half million people in the U.S., most over age 50. Its major symptoms are muscular rigidity, bradykinesia, resting tremor, and postural instability. An estimated 70%–90% of patients with PD also develop speech or voice disorders…

Karen Chenausky, Joel MacAuslan, and Richard Goldhor, “Acoustic Analysis of PD Speech,” Parkinson’s Disease, vol. 2011, Article ID 435232, 13 pages, 2011. doi:10.4061/2011/435232

Read More… Download PDF