About SpeechMark

We are a group of researchers and developers who are interested in speech production and speech acoustics. We work at Speech Technology and Applied Research Corp., the University of Cincinnati, and Northeastern University.

In our own work, we have developed automatic tools for detecting, counting, and analyzing acoustic events in speech signals that speech scientists commonly use to measure differences in speech articulation. We have used these tools successfully to identify differences between the vocalizations of at-risk and typically developing infants [Fell et al., 2002, 1998], between sleep-deprived and normal speech in adults [Boyce, MacAuslan, et al., 2008], between the speech of Parkinson’s Disease patients with and without deep-brain stimulation [Chenausky et al., 2011, “Acoustic Analysis of PD Speech”], and between the speech of individuals in a normal state and under emotional stress [Fell & MacAuslan, 2003].

Our software is based on the detection of speech-acoustic landmarks and is particularly suited to fast, automatic analysis of small, non-lexical differences in production of the same speech material by the same speaker.

The National Institutes of Health has recognized that our tools can have wide application to other scientists’ research projects and has funded us to further develop and distribute our software.

To make our landmark-based SpeechMark products available to the wider scientific community, we have developed a MATLAB® toolbox and a suite of independent applications and plugins for existing software platforms such as WaveSurfer and R.


SpeechMark Team

Dr. Richard Goldhor

Senior Research Fellow and Board Member, Speech Technology and Applied Research Corp. (STAR)

Richard Goldhor is a Senior Research Fellow at STAR Corp. He received his PhD from MIT’s Speech Communications group and has over 25 years of experience developing innovative speech and signal-processing algorithms and commercially successful consumer products. He directed the software development of the world’s first commercial 1000-word speech recognizer for Kurzweil AI (now Nuance), was co-founder and Chief Technical Officer of AudioFile, Inc. (a developer of Windows-based audio products), and was co-founder and VP of Engineering at Enounce, Inc., where he managed the development of consumer products for timescale modification (i.e., slowing down and speeding up) of digital multimedia. He has extensive experience mentoring technology-transfer efforts: moving innovative but still-imperfect technology from research contexts into effective, commercially successful products.

Dr. Joel MacAuslan

President and Chief Science Officer, Speech Technology and Applied Research Corp. (STAR)

Joel MacAuslan, PhD, developed the landmark-based and other speech-processing methods and algorithms that underlie SpeechMark, including (with Prof. Fell) the SpeechMark MATLAB function toolbox and (with Profs. Fell and Boyce, variously) the applications for several projects that have used these functions. Dr. MacAuslan began his career in 1980 developing image-processing algorithms for a division of Kodak, drawing on degrees in applied mathematics and astrophysics. He founded STAR in 1995 to perform research funded by the US National Institutes of Health, especially on commercially focused applications in image, signal, and speech processing.

Dr. Harriet Fell

Professor Emeritus, College of Computer and Information Science, Northeastern University, Boston, MA

Harriet Fell is Professor Emeritus in the College of Computer and Information Science at Northeastern University. She works on the design and development of assistive technology and on medical applications. She was the Principal Investigator on the Early Vocalization Project, the first software designed specifically to analyze pre-speech utterances. With Joel MacAuslan, she designed and developed the visiBabble system to encourage syllabic utterances in children at risk of being non-speaking. She has also applied speech processing to detect non-verbal information in speech, such as emotional stress. She has worked with Richard Goldhor, Joel MacAuslan, and Suzanne Boyce to produce SpeechMark, a software tool for scientists interested in speech articulation as a marker for neurological status. She is currently co-PI, with Tim Bickmore, on an NSF project to build a virtual co-presenter, a software agent that can present a talk along with a human speaker.

Dr. Suzanne Boyce

Professor, Dept. of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH

Suzanne Boyce has worked on acoustic analysis of speech throughout her professional life: through undergraduate and graduate school (A.B., Harvard University), doctoral study (Ph.D., Yale University), two postdoctoral positions in speech recognition and signal processing for speech (M.I.T. and Boston University), and a further graduate degree in Speech-Language Pathology (C.A.G.S., Boston University) specializing in articulation disorders. She has had continuous research support from the National Institutes of Health since 1995. Her most recent projects include ultrasound technology for speech sound disorders and automatic detection of changes in the acoustic signal that correspond to changes in the precision of speech articulation, whether as a result of accommodation to listener need or of health status.

Dr. Marisha Speights Atkins

Ph.D., CCC-SLP, Assistant Professor, Dept. of Communication Sciences and Disorders, Auburn University, Auburn, AL

Marisha Speights Atkins, Ph.D., CCC-SLP, is an assistant professor in the Department of Communication Sciences and Disorders at Auburn University. Clinically, she specializes in the assessment and treatment of speech sound and language disorders in children. As a doctoral student at the University of Cincinnati, she focused on applying acoustic analysis of speech to the assessment of child speech sound disorders. Her current research focuses on the development and translation of innovative acoustic-based tools for assessing speech and expressive language disorders in preschool-age children. She collaborates with computer scientists and software engineers to research and develop computer-based approaches for identifying speech-production differences between children who acquire speech along expected developmental trajectories and those with speech sound disorders that affect intelligibility. She received an NIH pre-doctoral grant to investigate innovative acoustic tools for neuroscience.

Dr. Keiko Ishikawa

M.M., M.A., CCC-SLP, Ph.D. Candidate, Dept. of Communication Sciences and Disorders, University of Cincinnati

Keiko Ishikawa, CCC-SLP, is a speech-language pathologist and Ph.D. candidate in the Department of Communication Sciences and Disorders at the University of Cincinnati. She also holds a certificate in clinical and translational research from an NIH-funded program at the university. She is a recipient of the Kara Singh Memorial Fund Graduate Student Scholarship from the American Speech-Language-Hearing Foundation and of a Ph.D. scholarship from the Council of Academic Programs in Communication Sciences and Disorders. Clinically, she has worked at the Voice and Speech Laboratory at the Massachusetts Eye and Ear Infirmary and at the Blaine Block Institute for Voice Analysis and Rehabilitation. Her research has focused on molecular mechanisms of vocal fold wound healing and on the development of tools for measuring intelligibility deficits in people with voice disorders (dysphonia). She conducts studies to evaluate the suitability of landmark-based models for the acoustic characterization of dysphonic speech. Her research interests also include applying data-mining techniques to analyze disordered speech and associated clinical data. The overarching goal of her projects is to increase clinical efficiency and effectiveness in the treatment of intelligibility deficits through the use of automated acoustic tools.
