SpeechMark Publications and Presentations

Deep Brain Stimulation May Contribute to Dysarthria in Patients with Parkinson’s Disease as Detected by Objective Measures

Craig Van Horne, M.D., Ph.D.; Joel MacAuslan, Ph.D.; Karen Chenausky, M.S., CF-SLP; Carla Massari
Dysarthria is found in approximately 80% of patients with Parkinson’s disease (PD) and significantly limits communication as its severity worsens. Surgical implantation of deep brain stimulators (DBS) into the subthalamic nucleus (STN) has become more common and is an effective treatment for the motor symptoms of PD. However, reports of the effect of DBS on speech are equivocal.

We have developed computer algorithms that quickly and objectively analyze the speech of PD patients, allowing clinicians to assess the effect of DBS programming or other therapies on speech.

Read More… Download PDF

Spontaneous Vocalization Change in Infants with Severe Impairments Using visiBabble

Harriet Fell, Joel MacAuslan, Cynthia J. Cress, Cara Stoll, Kara Medeiros, Jennifer Rosacker, Emily Kurz, Jenna Beckman
Children with difficulty producing speech sounds can practice sounds in play, even prelinguistically. visiBabble is a prototype computer-based program that responds with customized animations to targeted types of infant vocalizations. The program automatically recognizes acoustic-phonetic characteristics of the vocalizations and can selectively respond to utterances with varying levels of complexity (e.g. multisyllable utterances).
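
As a rough illustration of the selective-response idea, the sketch below (in Python) triggers reinforcement only when an analyzed utterance meets a complexity target such as a minimum syllable count. The feature names and the animation call are hypothetical placeholders, not the visiBabble implementation.

```python
# Toy sketch of selective reinforcement: play the reward animation only when the
# analyzed utterance meets a complexity target. Feature names and the animation
# call are hypothetical placeholders, not the visiBabble code.

def should_reinforce(features, min_syllables=2, require_voicing=True):
    """Decide whether an utterance qualifies for the customized animation."""
    if require_voicing and not features.get("voiced", False):
        return False
    return features.get("syllable_count", 0) >= min_syllables

features = {"voiced": True, "syllable_count": 3}   # e.g. output of the acoustic-phonetic analysis
if should_reinforce(features):
    print("play reward animation")                 # stand-in for the customized animation
```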

This poster reports syllable production changes of three children with physical and speech impairments, ages 1-4, in response to visiBabble reinforcement. Results include immediate effects of visiBabble reinforcement on infant vocalizations as well as longer-term effects of home visiBabble practice on spontaneous sound production.

Read More… Download PDF

Landmark-based Analysis of Sleep-Deprived Speech

Suzanne Boyce, Joel MacAuslan, Ann Bradlow, Rajka Smiljanic
There is a common perception that speech articulation becomes “slurred”, or less precisely articulated, under sleep deprivation conditions. However, there have been few studies of speech under sleep deprivation. Morris et al. (1960) and Harrison & Horne (1997) found that listeners heard a difference between speech recorded under rested and sleep-deprived conditions.

Read More… Download PDF

A Platform for Automated Acoustic Analysis for Assistive Technology

Harriet Fell, Lorin Wilde, Suzanne Boyce, Keshi Dai, Joel MacAuslan
While physical, neurological, oral/motor, and cognitive impairments can all significantly impact speech, people with disabilities may still be best able to communicate with computers through vocalization.

Aspects of vocal articulation are highly sensitive markers for many neurological conditions. As a source of data, recordings are

  • non-invasive,
  • inexpensive to collect, and
  • easily integrated into existing research and clinical protocols.

Read More… Download PDF

Objective Data on Clear Speech: Does it Help in Training Audiology Students?

Boyce, S. E., Balvalli, S. N., MacAuslan, J., Clark, J. C., Martin, D.
Typical speakers instinctively use a “CLEAR” speaking style when they are instructed to “speak as if your listener is hearing impaired” or “speak as if your listener is not a native speaker of your language”. CLEAR speech is more intelligible to hearing-impaired listeners by about 17% (1, 2, 3). The ability to automatically detect differences between a speaker’s ordinary speech patterns and their most intelligible speech would clearly be helpful in clinical training and telemedicine applications.

Here we describe a Landmark-based computer program (4, 5) to detect articulatory differences between “CLEAR” and “CONVERSATIONAL” styles of speech. Landmark-based speech analysis takes advantage of the fact that important articulatory events, such as voicing and frication, show characteristic patterns of abrupt change in the speech signal. These patterns are detected by an automated computer system and assigned to a particular type of Landmark.
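
As an illustration of the general idea (not the program described here), the sketch below marks frames where a band-limited energy track rises or falls abruptly. The band edges, frame length, and threshold are assumptions for the example.

```python
# Illustrative sketch of abrupt-change detection in a band-limited energy track.
# Band edges, frame length, and threshold are assumptions for the example only.
import numpy as np
from scipy.signal import butter, sosfilt

def band_energy_db(x, fs, lo_hz, hi_hz, frame_sec=0.01):
    """Log energy (dB) of x band-passed to [lo_hz, hi_hz], one value per frame."""
    sos = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=fs, output="sos")
    y = sosfilt(sos, x)
    hop = int(frame_sec * fs)
    frames = [y[i:i + hop] for i in range(0, len(y) - hop + 1, hop)]
    return 10 * np.log10([np.mean(f ** 2) + 1e-12 for f in frames])

def abrupt_changes(energy_db, threshold_db=6.0):
    """Frame indices where energy jumps by more than threshold_db between frames,
    labeled '+' for an abrupt rise and '-' for an abrupt fall."""
    d = np.diff(energy_db)
    return [(int(i), "+" if d[i] > 0 else "-")
            for i in np.flatnonzero(np.abs(d) > threshold_db)]
```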

Read More… Download PDF

Using Landmark Detection to Measure Effective Clear Speech (2005)

Suzanne E. Boyce, Jean Krause, Sarah Hamilton, Rajka Smiljanic, Ann Bradlow, Ahmed Rivera Campos, Joel MacAuslan
A number of studies have established that normal native speakers of a language know how to improve their intelligibility to listeners under intelligibility-challenging conditions (Uchanski, 2005).

This “Clear Speech” speaking style is significantly more intelligible to listeners; the average Clear Speech benefit is 15–17% to normal-hearing listeners in noise and to hearing-impaired listeners in quiet (Uchanski, 2005).

Read More… Download PDF

Acoustic Analysis of PD Speech (2011)

Karen Chenausky, Joel MacAuslan, and Richard Goldhor
Parkinson’s disease (PD) is an idiopathic neurodegenerative disease caused by loss of dopamine-producing cells in the substantia nigra of the basal ganglia, affecting over one-half million people in the U.S., most over age 50. Its major symptoms are muscular rigidity, bradykinesia, resting tremor, and postural instability. An estimated 70%–90% of patients with PD also develop speech or voice disorders…

Read More… Download PDF

Automatic Syllabic Cluster Analysis of Children’s Speech Data to Identify Speech Disorders (2015)

Marisha Speights, Suzanne Boyce, Joel MacAuslan, and Harriet Fell

This research investigates syllabic complexity in children with normal and disordered speech production using a computerized method of analysis.

Automatic Syllabic Cluster Analysis based upon landmark theory (Stevens, 1992, 2002; Liu, 1996; Howitt, 2000; Fell & MacAuslan, 2005) is used to automate the analysis of child speech.

The algorithm automatically measures acoustic changes that correspond to syllable patterns and provides a fast method for measuring complexity in syllable production without the need for phonetic transcription.
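
As a rough illustration (not the published algorithm), the sketch below groups a sequence of detected landmarks into syllable-like clusters by temporal proximity and reports cluster counts as a simple complexity measure. The landmark labels and the 350 ms gap are assumptions for the example.

```python
# Toy sketch of grouping a detected landmark sequence into syllable-like clusters.
# The landmark labels and the grouping rule are illustrative, not the published method.

def syllabic_clusters(landmarks, max_gap=0.35):
    """Group (time_sec, label) landmarks into clusters whenever successive
    landmarks are closer than max_gap seconds; returns a list of clusters."""
    clusters = []
    for t, label in sorted(landmarks):
        if clusters and t - clusters[-1][-1][0] <= max_gap:
            clusters[-1].append((t, label))
        else:
            clusters.append([(t, label)])
    return clusters

utterance = [(0.10, "+g"), (0.18, "+s"), (0.30, "-s"), (0.95, "+b"), (1.02, "+s"), (1.20, "-g")]
clusters = syllabic_clusters(utterance)
print(len(clusters), [len(c) for c in clusters])   # cluster count and per-cluster landmark counts
```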

Read More… Download PDF

Poster from the SpeechMark exhibit at the Acoustical Society of America Meeting (2014)

The landmarks (LMs) are placed at instants of abrupt change of energy occurring simultaneously across multiple frequency ranges and at multiple time scales.
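
A toy sketch of the multiple-time-scale idea follows: a change point is kept only if it appears in both a finely and a coarsely smoothed energy track. The smoothing lengths, threshold, and tolerance are illustrative assumptions, not the poster’s parameters.

```python
# Toy sketch of the multiple-time-scale idea: keep only change points that appear
# in both a finely and a coarsely smoothed energy track (all parameters assumed).
import numpy as np

def change_points(energy_db, smooth_frames, threshold_db):
    """Indices where the smoothed energy track changes by more than threshold_db per frame."""
    kernel = np.ones(smooth_frames) / smooth_frames
    smoothed = np.convolve(energy_db, kernel, mode="same")
    return set(np.flatnonzero(np.abs(np.diff(smoothed)) > threshold_db))

def two_scale_landmarks(energy_db, fine=3, coarse=9, threshold_db=5.0, tolerance=2):
    """Fine-scale change points that are confirmed by a coarse-scale change point nearby."""
    fine_pts = change_points(energy_db, fine, threshold_db)
    coarse_pts = change_points(energy_db, coarse, threshold_db)
    return sorted(i for i in fine_pts
                  if any(abs(i - j) <= tolerance for j in coarse_pts))
```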

Read More… Download PDF

SpeechMark Acoustic Landmark Tool: Application to Voice Pathology (2013)

Suzanne Boyce, Marisha Speights, Keiko Ishikawa, Joel MacAuslan

One area of voice research that has historically been understudied is the interaction between voice pathology and acoustic aspects of the speech signal that affect intelligibility. Landmark-based software tools are particularly suited to fast, automatic analysis of small, non-lexical differences in the acoustic signal reflecting the production of speech. We are building a tool set that provides fast, automatic summary statistics for measures of speech acoustics based on Stevens’ paradigm of landmarks, points in an utterance around which information about articulatory events can be extracted. This paper explores the use of landmark analysis for evaluation of intelligibility-based measures of vocal pathology.

Index Terms: speech analysis, landmarks, voice pathology.
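
As a minimal sketch of what summary statistics over landmarks can look like (an assumed interface, not the SpeechMark tool set), the example below reduces a list of detected landmarks to per-label counts and an overall landmark rate.

```python
# Minimal sketch (assumed interface, not the SpeechMark tool set): turn a list of
# detected landmarks into per-utterance summary statistics.
from collections import Counter

def landmark_summary(landmarks, duration_sec):
    """landmarks: list of (time_sec, label) pairs; returns counts per label and an overall rate."""
    counts = Counter(label for _, label in landmarks)
    return {
        "counts": dict(counts),                          # e.g. {"+g": 2, "-g": 2, "+b": 1}
        "total": len(landmarks),
        "rate_per_sec": len(landmarks) / duration_sec,   # overall landmark rate
    }

example = [(0.12, "+g"), (0.40, "-g"), (0.55, "+b"), (0.60, "+g"), (1.10, "-g")]
print(landmark_summary(example, duration_sec=1.5))
```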

Read More… Download PDF