
How Do We Learn to Speak and Read?

By: Kenneth Best

Do you remember how you learned to speak? Most people do not recall learning how to talk, or know how it is that they can understand others. The process requires complex coordination: air moving from the lungs works with the larynx, palate, jaw, tongue, and lips to form the vowels and consonants that express a thought originating in the neural network of the brain.

You may recall the difficult process of learning how to read – associating a letter of the alphabet with a sound and then putting letters together to form words and sentences. In comparison, learning to speak may seem to come to us more naturally.

Ultimately, finding the answers behind how we learn to speak and read could help those who have an impaired ability to speak or understand others, as well as assist those who have difficulty learning to read and write.

UConn researchers at Haskins Laboratories in New Haven, Conn., use the latest technology, including brain scans, to study the science of the spoken and written word and to uncover the cognitive and neurobiological foundations of speech and language.

UConn’s Experts

UConn faculty and alumni associated with world-renowned Haskins Laboratories in New Haven, Conn., have been working on the science of the spoken and written word for more than four decades. Founded in 1935 by Caryl Haskins and Franklin Cooper, Haskins is an independent, interdisciplinary research center affiliated with UConn and Yale University.

“We have a literacy crisis in this country,” says Philip Rubin ’73 MA, ’75 Ph.D., Haskins chief executive officer and former director of the Division of Behavioral and Cognitive Sciences at the National Science Foundation. “Many of our kids struggle with reading. At the heart, what we do is address those that are struggling. What makes them different than kids who don’t struggle … is the kind of work that we’re doing.”

The National Center for Education Statistics reports that about 22 percent of adults in the United States have “minimal literacy skills,” meaning they can read some words but cannot understand simple forms, such as a job application, or instructions, such as how to operate a computer.

Haskins researchers have been responsible for major scientific advances in speech and reading, including the development of the first reading machine for the blind, which ultimately led to the synthesis of artificial speech in computers. One of the scientists who conducted early research on the device was the late Alvin Liberman, a psychologist who served as director of Haskins for a decade and helped create the Department of Linguistics in the College of Liberal Arts and Sciences (CLAS) in Storrs. Liberman and Donald Shankweiler, professor emeritus of psychology in CLAS, collaborated with other Haskins colleagues in 1967 to produce “Perception of the Speech Code,” a landmark study published in Psychological Review that remains among the most cited papers in the literature of psychology.

“Haskins Labs in the 1950s was beginning to ask the question: What are the bits of sound, physical sound, that are conveying consonants and vowels?” says Shankweiler. “That was not an easy question to answer. Speech recognition is still less than perfect, but it depended very much on the research done at Haskins Labs over the past 40 to 50 years.”

Shankweiler says the link between speech and reading results in literacy, which provides the key to unlocking the ability to learn. “One of the main advantages of reading is that we are not limited by the speech we hear,” he says. “We extend our knowledge through print. A scholar will learn more through print than the spoken word. It’s a way to expand our use of language to increase knowledge.”

Read more at UConn Today.

