Nov 24
======

Clopper/Pisoni 2005
-------------------

Questions:

Why did Peterson and Barney (1952) take only the first and second formant frequency measurements?

In the work of Bradlow and Bent (2003), why did they report that non-native listeners perform better than native listeners on speech intelligibility tasks involving non-native speech samples? What type of samples did they use?

Clopper and Pisoni recommend several ways to improve Preston's map task, including exposing a listener to a short clip, requiring direct perception of speech. Does this really get rid of any biases related to working memory? Some variants are still more salient to naïve listeners than others (perhaps that's the point) (pg. 316).

I understand that part of the reason Niedzielski (1999) (cited in Clopper and Pisoni, 2005) used synthesized vowel tokens was to make sure that the same "voice" was producing both the carrier speech and the tokens in question (/aj, aw/ as [ʌj, ʌw] before voiceless consonants) (pg. 319). Have there been any studies that directly compare how naïve listeners rate synthesized speech versus human talkers, and the accuracy of the resulting dialect ratings? I think it would be difficult for one speaker to talk with dialect features they don't possess (it might sound 'unnatural' to a listener), so wouldn't it be better to have different talkers produce the speech? Controlling the talkers for other variables like gender and race/ethnicity, however, raises issues as to who is chosen as the representative speaker of a given dialect (in the American studies, unless stated otherwise, the speakers are white).

How can one apply exemplar theory to the perception of linguistic variation (besides experience as an effect, as suggested by Clopper and Pisoni)? (p. 333)

Comments:

Even 20+ years after Purnell, Idsardi, and Baugh's (1999) study (cited in Clopper and Pisoni, 2005), more recent research (Wright 2023) has shown that speakers of 'non-standard' varieties of American English fare better in neighborhoods whose social demographics match those of the given non-standard variety. Linguistic profiling occurs not only when hearing speech but also when reading it in text (Hanson and Hawley 2011, cited in Wright 2023). One interesting thing to note is that the Fair Housing Act of 1968 protects homebuyers based on their membership in protected classes (e.g. race, nationality, (dis)ability), but the law is currently interpreted as applying only to discrimination that occurs in physical proximity (Hillier 2005, cited in Wright 2023). This therefore excludes interactions between potential homebuyers and landlords over the phone or by text.

Wright, K. (2023). Housing policy and linguistic profiling: An audit study of three American dialects. Language, 99(2). https://muse.jhu.edu/pub/24/article/900094/pdf

It was mentioned that a shift in methodology would allow additional insights into the underlying process of how we as listeners make our decisions in categorization tasks. One possible research question was whether some varieties might be easier to identify than others. But what makes a variety easier to identify? Isn't that dependent on where the listener and the speaker are from?

It was stated that naïve listeners can make reliable judgements about where an unfamiliar talker is from. However, performance in categorization tasks is quite low (accuracy of around 30%) even after perceptual training. Don't these results refute that statement?

It is very interesting that participants rate their own variety as the most understandable and pleasant, and that non-standard varieties are rated lower in terms of intelligence but higher in terms of pleasantness.
It is also worth noting that our perception of speech sounds is influenced by our knowledge, and that this can be manipulated.

Why is variation among white speakers much more regionally based than variation among African-American speakers?

It is also interesting to note that women are generally ahead of men in language change. Why is that?