Landmark-based Analysis of Sleep-Deprived Speech

Suzanne Boyce, Joel MacAuslan, Ann Bradlow, Rajka Smiljanic
There is a common perception that speech articulation becomes “slurred,” or less precisely articulated, under sleep-deprivation conditions. However, there have been few studies of speech under sleep deprivation. Morris et al. (1960) and Harrison & Horne (1997) found that listeners could hear a difference between speech recorded under rested and sleep-deprived conditions.


A Platform for Automated Acoustic Analysis for Assistive Technology

Harriet Fell, Lorin Wilde, Suzanne Boyce, Keshi Dai, Joel MacAuslan
While physical, neurological, oral/motor, and cognitive impairments can all significantly impact speech, people with disabilities may still be best able to communicate with computers through vocalization.

Aspects of vocal articulation are highly sensitive markers for many neurological conditions. As a source of data, recordings are

  • non-invasive,
  • inexpensive to collect, and
  • easily integrated into existing research and clinical protocols.


Objective Data on Clear Speech: Does it Help in Training Audiology Students?

Boyce, S. E., Balvalli, S. N., MacAuslan, J., Clark, J. C., Martin, D.
Typical speakers instinctively use a “CLEAR” speaking style when they are instructed to “speak as if your listener is hearing impaired” or “speak as if your listener is not a native speaker of your language.” CLEAR speech is more intelligible to hearing-impaired listeners by about 17% (1, 2, 3). The ability to automatically detect differences between a speaker’s ordinary speech patterns and their most intelligible speech would clearly be helpful in clinical training and telemedicine applications.

Here we describe a Landmark-based computer program (4, 5) that detects articulatory differences between “CLEAR” and “CONVERSATIONAL” styles of speech. Landmark-based speech analysis takes advantage of the fact that important articulatory events, such as voicing and frication, show characteristic patterns of abrupt change in the speech signal. These patterns are detected by an automated computer system and assigned to a particular Landmark type.
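As a rough illustration of the abrupt-change idea (a sketch only, not SpeechMark’s actual algorithm), the following Python fragment flags frames where log energy rises or falls sharply; the frame size and the 9 dB threshold are invented for the example:

```python
import numpy as np

def abrupt_landmarks(signal, sr, frame_ms=10, thresh_db=9.0):
    """Flag frames where frame energy changes abruptly.
    Simplified stand-in for landmark detection: a real detector
    works band-by-band and at multiple time scales."""
    frame = int(sr * frame_ms / 1000)
    n = len(signal) // frame
    energy = np.array([np.sum(signal[i*frame:(i+1)*frame]**2)
                       for i in range(n)])
    energy_db = 10 * np.log10(energy + 1e-12)   # floor avoids log(0)
    delta = np.diff(energy_db)                  # frame-to-frame change
    onsets = np.where(delta > thresh_db)[0]     # abrupt energy rises
    offsets = np.where(delta < -thresh_db)[0]   # abrupt energy falls
    return onsets * frame_ms / 1000, offsets * frame_ms / 1000
```

On a signal that is silent, then a tone, then silent again, this returns one onset time and one offset time at the two transitions.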


Using Landmark Detection to Measure Effective Clear Speech (2005)

Suzanne E. Boyce, Jean Krause, Sarah Hamilton, Rajka Smiljanic, Ann Bradlow, Ahmed Rivera Campos, Joel MacAuslan
A number of studies have established that normal native speakers of a language know how to improve their intelligibility to listeners under intelligibility-challenging conditions (Uchanski, 2005).

This “Clear Speech” speaking style is significantly more intelligible to listeners; the average Clear Speech benefit is 15–17% for normal-hearing listeners in noise and for hearing-impaired listeners in quiet (Uchanski, 2005).


Frication Peak Landmarks

Landmarks (LMs) are acoustically identifiable points in an utterance. They come in the form of abrupt transitions (abrupt LMs) and peaks (peak LMs) of some contour or contours.
Until now the only peak type has been Vowel, computed by vowel_lms.

For vowels, the peak is the point of maximum harmonic power and often corresponds to the maximum opening of the mouth.
Frication-type peak landmarks are computed using…
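As a toy illustration of peak-type landmarks (a sketch under simplifying assumptions, not the actual vowel_lms or frication-peak algorithm), one can mark local maxima of a power contour that stand out from the surrounding minima by some minimum prominence; the prominence rule and threshold here are invented for the example:

```python
def peak_landmarks(contour, min_prominence=3.0):
    """Return indices of peak landmarks in a (log-)power contour.
    A peak is a local maximum that rises at least min_prominence
    above the lower of the minima on either side (simplified)."""
    peaks = []
    for i in range(1, len(contour) - 1):
        if contour[i] > contour[i - 1] and contour[i] >= contour[i + 1]:
            left = min(contour[:i])        # lowest value before the peak
            right = min(contour[i + 1:])   # lowest value after the peak
            if contour[i] - max(left, right) >= min_prominence:
                peaks.append(i)
    return peaks
```

For a contour like [0, 1, 5, 1, 0, 1, 2, 1, 0], only the tall bump at index 2 clears the prominence test; the small bump at index 6 does not.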


Using the SpeechMark MATLAB Toolbox for Syllabic Cluster Analysis

The SpeechMark MATLAB Toolbox is a platform-independent add-in to MATLAB, the language and computation environment developed by MathWorks. This Toolbox adds acoustic landmark detection and visualization tools, methods, and scripts to MATLAB.

This product is a standard MATLAB toolbox. To use it, a valid instance of MATLAB (version R2010b or newer) must be installed, as well as a valid version of the MATLAB Signal Processing Toolbox.


Acoustic Analysis of PD Speech (2011)

Karen Chenausky, Joel MacAuslan, and Richard Goldhor
Parkinson’s disease (PD) is an idiopathic neurodegenerative disease caused by loss of dopamine-producing cells in the substantia nigra of the basal ganglia, affecting over one-half million people in the U.S., most over age 50. Its major symptoms are muscular rigidity, bradykinesia, resting tremor, and postural instability. An estimated 70%–90% of patients with PD also develop speech or voice disorders…


Automatic Syllabic Cluster Analysis of Children’s Speech Data to Identify Speech-Disorders (2015)

Marisha Speights, Suzanne Boyce, Joel MacAuslan, and Harriet Fell

This research investigates syllabic complexity in children with normal and disordered speech production using a computerized method of analysis.

Automatic Syllabic Cluster Analysis based upon landmark theory (Stevens 1992, 2002; Liu 1996; Howitt 2000; Fell & MacAuslan, 2005) is used to automate the analysis of child speech.

The algorithm automatically measures acoustic changes that correspond to syllable patterns and provides a fast method for measuring complexity in syllable production without the need for phonetic transcription.


Towards Development of an Automatic Analysis for Assessment of Dysphonic Speech

Voiced interval duration was greater in dysphonic speech: unexpected vocal fold onsets and offsets occurred more frequently. Other syllabic cluster parameters were not significant predictors of dysphonic speech.

The analysis may have been affected by the wide variability seen in the dysphonic group (especially the vocal fold paralysis group).


Poster from the SpeechMark exhibit at the Acoustical Society of America Meeting (2014)

The LMs are placed at instants of abrupt energy change occurring simultaneously across multiple frequency ranges and at multiple time scales.
