Chapter 1 Introduction: Experiments and Variables

Each chapter of this book will involve the analysis of data from a single perceptual experiment carried out by a group of fifteen listeners. In this chapter we will discuss the data for our experiment, introduce concepts related to variables and probabilities, and provide a very basic introduction to R along the way. As noted in the preface, a working knowledge of R is assumed and a familiarity with basic statistics is probably helpful, though not strictly necessary. The preface also provides suggested readings for those wanting to do some background reading on R or statistical inference, and provides information about the software you need installed to follow along with the examples in the book.

1.1 Chapter pre-cap

In this chapter, we discuss the use of experiments for scientific inference, in addition to some inherent problems associated with inferences based on limited numbers of observations. After this, we describe the perceptual experiment that will be analyzed in this book, including a discussion of its structure, aims, procedures, and resulting data. Following the introduction of the experiment, we discuss variables and the way that we use these in experiments to make inferences. After that, we introduce some R data types, and the relationship between these and different kinds of variables is presented. Finally, we present some ways to visualize different sorts of variables and the relationships between them.

1.2 Experiments and effects

An experiment is a procedure or process that can help answer some research question. Obviously, when defined so broadly almost anything can be an experiment. In fact, when a child touches a hot stove to see what ‘red’ feels like, they are conducting an experiment which provides essential information about their world. In an academic context, experiments are expected to be scientific. However, there is no definition of scientific that is not socially and historically contingent. What is considered ‘scientific’ is determined by what scientists in a specific time and place consider to be scientific, and this can change, and has changed, substantially over time.

At the moment, in most contexts, a research project is ‘scientific’ when it generally conforms to the scientific method. Of course, just as with science and scientific, there is no single scientific method, no single ‘true’ definition that can be referred to. Instead, the scientific method consists generally of a process in which researchers: 1) Ask questions based on gaps in their knowledge about the world, 2) Collect data using codified procedures developed to avoid certain pitfalls and maximize the chance that the collected data can answer their questions, 3) Evaluate their questions in light of their data, and 4) Reach conclusions where possible, and synthesize their conclusions with their previous knowledge about the world.

Modern ‘scientific’ work usually involves the collection of empirical measurements, the quantification of patterns in these measurements, and the qualitative description of the quantitative patterns in the measurements. As a result, much modern scientific work yields large quantities of numeric values, observed under different conditions, which the researcher must then (statistically) analyze in order to understand. For example, imagine an experiment about whether caffeine makes people read faster. Subjects are asked to drink either a cup of coffee or a cup of decaf. After a 30-minute wait they are asked to read a passage aloud and the duration of the reading is measured. Basically, we are measuring two different values, “the amount of time it takes people to read a passage of text after drinking decaf”, and “the amount of time it takes people to read a passage of text after drinking regular coffee”.

The experiment outlined above allows us to ask: is “the amount of time it takes people to read a passage of text after drinking decaf” the same as “the amount of time it takes people to read a passage of text after drinking regular coffee”? Another way to look at this is that we are interested in the effect of caffeine on reading times. By ‘effect’ we mean the degree to which caffeine is associated with changes in the characteristics of our observation (reading times) in some way. For example, if the average reading times were the same in both groups we would conclude that “caffeine has no effect on reading times”. In contrast, if reading times were 800 milliseconds shorter in the caffeine group, we might conclude “caffeine has the effect of reducing reading times by 800 milliseconds”.

The relationship between statistical effects and causality is tricky. In some cases, like with caffeine and reading times, it seems reasonable that the caffeine is actually causing the faster reading, since caffeine is associated with increased energy. However, we want to be sure not to imply that effects are necessarily causal. For example, decaf might also speed up reading (relative to cold water) due to a placebo effect. This would suggest that some of the decrease in reading times with regular coffee is due to people’s expectations regarding its effect, in addition to the actual effect of caffeine. So, faster reading after drinking regular coffee allows us to establish an association between these observations, but does not in any way prove that one is causing the other. As a result, our use of the term effect should be interpreted as meaning ‘association’ rather than ‘cause’.

Our experiment on reading times is specifically constructed to investigate the effect of caffeine on reading times. If the speakers in our experiment were randomly assigned to conditions, there is no particular reason to expect that their reading times would be different in the absence of the caffeine. So, if we find that people read faster in the caffeine group, we may establish an association between caffeine and an increase in speaking rate. Combined with our knowledge of the physiological effect of caffeine on people, we may conclude that this association suggests that the caffeine consumption causes the difference in reading times. However, it’s important to keep in mind that it is our world knowledge that allows us to make a causal inference and not the statistical association on its own. If we found a statistical association between wearing a green t-shirt and reading times, we should be more hesitant to make claims about a causal relationship.

This same logic applies in situations where we do not randomly assign subjects to groups, as long as we are careful in creating equivalent groups. Consider the same experiment about speaking rate carried out with groups based on speaker gender rather than drinking coffee. In this case the question would be “is the amount of time it takes men to read this passage of text the same amount of time that it takes women to read this passage of text”. If the speakers are generally similar in important characteristics (e.g., dialect, cultural background, education levels) apart from gender, then any group differences may be attributable to the effect of gender on speaking rate (although establishing causal relationships is a difficult thing, as noted above, and involves more than simply randomization).

What we are describing in the above paragraphs are controlled experiments, experiments where the researcher takes an active role in ensuring the ‘fairness’ of the experiment. The notions of control and fairness are somewhat hazy, and are perhaps more gradient than discrete (i.e. ‘controlled’ vs. ‘uncontrolled’). However, some situations clearly do not lead to ‘fair’ outcomes. For example, what if the caffeine group of readers were all first language English speakers, and the decaf group had a substantial number of second language speakers with little experience with English? The caffeine group may very well read faster simply because they are more polished readers, independent of any effects of caffeine. Whenever possible, researchers avoid situations like this by exerting control over their experiments, both in the structure of their experiments and in the recruiting and assignment of their participants to experimental conditions.

Due to random between-speaker variation (among other things), there is no chance whatsoever that average reading times across both groups will be exactly identical, even if caffeine has no effect on reading times at all. In addition, if you re-ran the experiment with the same people and sentences, there is no chance that both group means would be the same across experiments. Basically, any two given group means are always expected to differ due to chance alone. And yet, there is the possibility that caffeinated reading times are systematically different - i.e. different in a way that the random variation of groups across replications is not. So, how can we ever establish that our measures are actually different and don’t just appear to be different because of randomness? It is precisely this problem that has motivated scientists to use statistical analyses to help answer their research questions.

1.2.1 Experiments and inference

This book is about statistical inference. We will talk about the ‘statistics’ part in more detail in the next chapter, but we can talk about the ‘inference’ part now. Inference is a form of reasoning that allows you to go from a set of observations or premises to a conclusion about some facts. For example, you may arrive at a newly discovered island and see white cats wandering around. If you are there for a while and continue to observe only white cats, you may conclude “all the cats on this island are white”. If you do this you have made what is called an inductive inference: You have gone from a set of observations (the cats you saw) to a general conclusion about all the cats on the island.

Often, experiments are not just about observing and measuring certain effects, but also about drawing inferences regarding those effects. For example, in the reading time experiment described above, the researchers are not specifically interested in the reading times of the people in the experiments (i.e., the cats they saw) but rather about the reading times of people more generally (i.e. all the cats on the island).

Since inductive inference seeks to go from limited observation to general rules or principles, it has a central weakness. For example, your inference that only white cats exist on the island is on solid ground until you see a cat that is not white. Can you be sure this won’t happen? You can’t, because you don’t know what you don’t know and you can’t be sure that what hasn’t happened yet will never happen. This is called the problem of induction and it is a fundamental weakness of inductive reasoning.

Another problem faced when using experiments to gather knowledge is the fallacy of affirming the consequent, also known as the fallacy of the converse. Affirming the consequent arises when a researcher works backwards from their “then” statement to their “if” statement. For example consider the statement: “If caffeine speeds up reading, then reading times will be faster for the caffeine group”. Even if it’s true that the reading times are faster for the caffeine group, it is not necessarily true that it is caused by the caffeine.

The problem is that the caffeine group reading faster is a necessary but not sufficient condition for the conclusion that ‘caffeine speeds up reading’. Affirming the consequent does not mean that if/then statements are not useful, but simply that the truth of the ‘then’ part cannot be used to prove the truth of the ‘if’ part in a logically necessary manner. For example, consider the statement: “If I am the king of England, then this coin flip will be heads”. This silly example shows that the truth of the second part of the statement does not prove the first.

We actually do think it would be reasonable to conclude that caffeine is causing the increase in reading times based on the experiment outlined above, given enough participants. However, it’s useful to be aware of the fundamental limitations of trying to understand general patterns given limited sets of observations. It’s also useful to think about how we can reason in a way that might minimize the odds of inferential mistakes, especially by including our general knowledge of the world (and the specific topic) in our reasoning. For example, rather than observing white cats and leaving it at that, we can ask: Why are the cats white? Do evolutionary pressures cause them to be white? How do their genetics ensure that all members of the species will be white? Is there any chance non-white cats could enter into the population? Considering the answers to questions like this, in combination with our observations, can make inferences like “all cats on this island are white” more reliable.

The examples above involved the effect of caffeine on reading times. We are interested in generalizing to the human population based on what is a tiny sample of humans, relatively speaking. If we make the claim “caffeine speeds up reading times”, are we extending that to all humans, or at least to all English speakers? Past, present and future? That is a bold claim based on a small number of data points, or it would be in the total absence of any world knowledge and prior expectations. Of course, we know that caffeine is a stimulant and seems reasonably likely to make people read faster. The finding fits within our larger world view and, as a result, we may accept it as likely to be ‘true’. In contrast, suppose that the two groups had instead drunk plain water, one ‘regular’ and one dyed with blue food coloring. In this situation we may be skeptical of a large effect for the food coloring. This is because there is no reason to suppose that there is an effect. Since this finding does not conform to any prior knowledge about the world, it is the sort of inference that may turn out to be less reliable, in the long run.

1.3 Our experiment

As noted above, each chapter in this book will feature the analysis of data from the same perceptual experiment. In this section we provide an overview of the experiment, its design, the general research questions it can address, and an overview of the resulting data. A more thorough explanation of the issues at hand and the design of the experiment can be found in chapter 13.

1.3.1 Our experiment: Introduction

Any two speakers will likely ‘sound’ different from each other even when they are saying the ‘same’ word. These between-speaker differences can, in some cases, be systematically associated with speaker characteristics such as age, height, and gender. So, tall speakers tend to sound one way, while shorter speakers tend to sound another way. As a result, although it may sound odd to talk about how tall someone sounds, listeners are able to use the acoustic information in a speaker’s voice to guess the speaker’s age, gender, size, and so on. This information is referred to as the speaker’s indexical characteristics: Social and physical information regarding the speaker that is understood from the way someone speaks.

We can ask two different questions with respect to assessments of indexical characteristics from speech: 1) Are they accurate, and 2) How do listeners arrive at their guesses? Generally speaking, listeners are often not very accurate in their judgments of indexical characteristics; however, they are very consistent in the errors that they tend to make. For example, if one voice is incorrectly assumed to belong to a particular sort of speaker, it will often be the case that this mistake is a regular occurrence.

The ‘guessing’ of speaker characteristics is dominated by two acoustic cues: Voice pitch and voice resonance. Voice pitch can be thought of as the ‘note’ someone produces with their speech. When you sing you produce different notes by producing different pitches. The pitch of a sound is related to the vibration rate of the thing that produced the sound, because repetitive vibration produces a repetitive sound wave that humans perceive as a pitch. Human voice pitch is regulated by changing the vibration rate of the vocal folds in your larynx. You can feel this vibration if you hum a song and press your fingers against the middle of the front of your neck. Pitch is an auditory sensation, a feeling you have in relation to a periodic acoustic event, a sound. When you hear two sounds, if you can order them based on which sounds ‘lower/higher’ than the other, then they differ in pitch. Since this quality cannot be directly measured, scientists measure the fundamental frequency (f0) of the sound to quantify its pitch. The f0 of a sound is measured in Hertz (Hz), which measures how many times a sound wave repeats itself in a second.

Smaller things tend to vibrate at higher rates than larger things. This holds for vocal folds as well; shorter vocal folds tend to vibrate at higher rates than longer vocal folds, thereby producing speech with a higher pitch. As a result, larger speakers tend to produce speech with a lower pitch than smaller speakers. Since the vocal folds grow as one ages into adulthood, voice pitch is a good indicator of age between young childhood and adulthood, but is less useful to distinguish adults of different ages. Basically, pitch may help you distinguish a 5 year old from an 18 year old, but not an 18 year old from a 30 year old.

In addition to general age-related changes, the vocal folds tend to increase in size during puberty for most males so that post-pubescent males tend to produce speech with a lower pitch than the rest of the human population. As a result of these relations, a voice with a lower voice pitch is more likely to be produced by someone who is older, taller, and more likely to be male than a voice with a higher pitch. The relationships between age, height, gender and f0 are presented in Figure 1.1. The height information used throughout this book is available in the height_data data included in the bmmb package and is from Fryar et al. (2012).
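
If you would like to inspect this height data yourself, it can be loaded from the bmmb package as shown below. We omit the output here, and the exact structure of this data is not important for this chapter.

# load the package and the height data from Fryar et al. (2012)
library ("bmmb")
data (height_data)

# inspect the structure of the height data
str (height_data)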

Figure 1.1: (left) Average height of males and females in the United States of America, organized by age (Fryar et al. (2012)). (middle) Average f0 produced by male and female speakers from 5 years of age until adulthood (Lee et al. 1999). (right) Average acoustic vocal-tract length (VTL) of male and female speakers from 5 years of age until adulthood (Lee et al. 1999). The average adult male VTL was set to 15 cm. Error bars indicate two standard deviations.

Resonance can be thought of as the ‘size’ of a sound. For example, a violin and a cello can be playing the same note (with the same pitch), but the cello will ‘sound’ bigger. This is because lower frequencies resonate in its larger structure. In the same way, speakers with longer vocal tracts (the space from the vocal folds to the lips) tend to ‘sound’ bigger by producing speech with lower frequencies overall. We don’t really have good words to describe what resonance ‘sounds’ like, but a small resonance (short vocal tract) sounds ‘heliumy’. When a person breathes helium and speaks, their pitch does not go up, but their resonance frequencies increase. Acoustically and perceptually, this mimics the effects of having a very short vocal tract (for more information on this, see chapter 13).

Long vocal tracts sound like slow motion speech (think of someone saying “noooooooooooooooooo….” when something bad is happening in slow motion in a movie), and this is because slowing down the playback of a recording simulates a lowering of resonance frequencies in speech (along with the pitch). In fact, size simulation by resonance manipulation is how the recordings for ‘Alvin and the chipmunks’ were originally created. A low-pitched male singer was recorded singing abnormally slowly, and the recording was sped up in order to simulate speech with a very high resonance (and an associated very short vocal tract). If you wonder what a decreased resonance sounds like, there is a gas called ‘sulfur hexafluoride’ that has this effect (because it is very dense). Examples of people decreasing their voice resonance by inhaling this gas can be found on YouTube.

There are many ways to measure the resonance of a voice. In our data we will use speech acoustics to directly estimate the length of the vocal tract that produced the sound, in centimeters (in the manner described in chapter 13). So, our measure of voice resonance will not be acoustic at all, but will instead be an estimate of the physical vocal-tract length expected to have produced the speech sound.

In general, a lower voice resonance suggests a longer vocal tract length in centimeters. There is a strong positive relationship between vocal-tract length and body length (i.e. height) across the entire human population. This means that if a person is taller than another, their vocal-tract is expected to be longer and their voice resonance is expected to be lower. Since height increases from birth into adulthood, this means that voice resonance can be used to predict both height and age. In addition, adult males tend to be somewhat taller than adult females in most populations, with an average difference of about 15 cm in the United States. As a result, voice resonance can be used to infer the gender of adult speakers, and possibly that of children as well. These relationships are shown in Figure 1.1.

In summary, voice pitch and voice resonance are independent ways that someone can acoustically ‘sound’ bigger/smaller, older/younger, and male/female. The experiment to be described below is a perceptual experiment involving behavioral measures, meaning we observed people’s behavior in reaction to the stimuli. In this experiment, human listeners listened to auditory stimuli (words) and were asked to answer questions regarding what they heard. The experiment was designed to investigate the way that speech acoustics are used by listeners to determine the age, gender, and height of speakers, and the way that these decisions affect each other.

1.3.2 Our experimental methods

Our listeners were 15 speakers of American English. Listeners were presented with the word “heed” produced by 139 different speakers of Michigan English. These speech samples were recorded by Hillenbrand et al. (1995) and are available on the GitHub page associated with this book. So, this experiment featured 139 unique stimulus sounds that the listeners in the experiment were asked to respond to. The stimuli used were productions by 48 adult females, 45 adult males, 19 girls (10-12 years of age), and 27 boys (10-12 years of age). These speakers showed substantial variation in their voice pitch and resonance as measured by their f0 and estimated vocal-tract length (as will be discussed in section 1.5).

In addition to the natural acoustic variation that exists between speakers, voice resonance was also manipulated experimentally. A second version of each stimulus was created by shifting the spectral envelope down by 8%, simulating an increase in speaker size of approximately 8% (all other things being equal). The downward shift of the spectral envelope results in a perceptually lower resonance which should ‘sound bigger’ to listeners. This acoustic manipulation is similar to the one carried out to make voices such as those of ‘Alvin and the Chipmunks’ sound small, but in reverse (and pitch was not affected).

Each listener responded to all 278 stimuli (139 speakers x 2 resonance levels), for a total of 4170 observations across all listeners (15 listeners x 278 stimuli). Stimuli were presented one at a time, randomized along all stimulus dimensions. This means that tokens were thrown in one big pile and selected at random in a way that the properties of an upcoming stimulus were never predictable based on the previous one. For each trial, listeners were presented with a single word at random and were asked to:

  1. Indicate whether they thought the speaker was a “boy 10-12 years old”, a “girl 10-12 years old”, a “man 18+ years old”, or a “woman 18+ years old”. This is the apparent speaker category.

  2. Estimate the height of the speaker in feet and inches (converted to centimeters for this book). This is the apparent speaker height.

Our intention is to analyze the apparent speaker category and height judgments provided by listeners in order to address specific research questions (discussed in the following section). To do this, we will use acoustic descriptions of the different speakers’ voices, focusing on the fundamental frequency of their speech, and the vocal-tract length implied by their speech (estimated as described in chapter 13). In addition, we will investigate how listeners’ judgments about speaker age, gender, and height can affect each other and the use of speech acoustics.

1.3.3 Our research questions

This experiment is meant to investigate how listeners use speech acoustics to estimate the height of unknown talkers. The results will also let us investigate the relationship between the perception of talker size and the perception of talker category. Specific research questions will be discussed in each chapter, however, a general overview will be provided here. The expectations to be outlined below are based on the empirical relationships between these measurements and characteristics outlined above, shown in figure 1.1, and on previous research (discussed in chapter 13).

The assumption is that listeners are familiar with the relationships between apparent speaker characteristics and speech acoustics, and ‘somehow’ use the information in speech to guess speaker characteristics. For example, if we know that a speaker with an f0 of 100 Hz is usually an adult male and is usually about 176 cm tall, we might expect listeners will identify speech stimuli with an f0 near 100 Hz as produced by an adult male speaker who is about 176 cm tall.

Listeners were asked to provide two responses, speaker height and speaker group. The four speaker groups can be split according to two characteristics: The age of the group and the gender of the group (boy = male child, girl = female child, man = male adult, woman = female adult). So, we can consider that listeners reported the height, the age and the gender of the speaker, for each sound they listened to.

Lower frequencies, whether f0 or resonances, are expected to be associated with taller and older speakers. For postpubescent speakers, low frequencies, particularly in f0, can also be an indicator of maleness. In general, we expect that the perception of maleness will be associated with the perception of taller speakers, in particular for older speakers. The perception of adultness should be associated with taller speakers for either gender, however, the difference in height between boys and men tends to be larger than that between girls and women.

Finally, it’s also possible that the acoustic information in voices is used differently based on the apparent category of the speaker. For example, maybe listeners use f0 one way when they think the speaker is an adult and another way when they think the speaker is a child. In addition, it’s possible that different listeners use acoustic information in idiosyncratic ways that are systematic within-listener, but which differ arbitrarily from each other between listeners.

1.3.4 Our experimental data

The data associated with this experiment is available in the bmmb package (discussed in the preface), and can be accessed using the code below:

# load package
library ("bmmb")

# load data
data (exp_data_all)

The code above loads our data and places it into our workspace in an object called exp_data_all. Below we use the head function to see the first six lines of the data for the experiment. Our data is in long format so each row is a different individual observation and each column is a different piece of information regarding that observation. Each individual trial (a single row) represents an individual listener’s response to a single stimulus word played to them. So, we know that this data frame has 4170 rows to represent the 4170 observations in our data.

# see first 6 rows
head (exp_data_all)
##   L C height R S C_v  vtl  f0 dur G A G_v A_v
## 1 1 g  165.6 a 1   b 12.2 277 237 f c   m   c
## 2 1 w  173.2 b 1   b 12.2 277 237 f a   m   c
## 3 1 w  165.6 a 2   b 12.4 287 317 f a   m   c
## 4 1 g  147.8 b 2   b 12.4 287 317 f c   m   c
## 5 1 g  165.6 a 3   b 11.6 219 277 f c   m   c
## 6 1 g  158.8 b 3   b 11.6 219 277 f c   m   c

If this were data that you collected and wanted to analyze, you would likely have it somewhere on your hard drive in a csv file, or some equivalent data file. If you were to open this data in Excel (or similar software) you would see your data arranged in rows and columns. Below we write our data out as a csv file so that we can have a look at it outside of R.

write.csv (exp_data_all, "exp_data_all.csv", row.names = FALSE)

We can get more information about our data using the str function, which tells us that our data is stored in a data frame. A data frame is a collection of vectors that can be of different types, but which must be of the same length. A vector is a collection of elements of the same kind. Below, we see that the str function tells us about the vectors that make up our data frame.

str (exp_data_all)
## 'data.frame':    4170 obs. of  13 variables:
##  $ L     : int  1 1 1 1 1 1 1 1 1 1 ...
##  $ C     : chr  "g" "w" "w" "g" ...
##  $ height: num  166 173 166 148 166 ...
##  $ R     : Factor w/ 2 levels "a","b": 1 2 1 2 1 2 1 2 1 2 ...
##  $ S     : int  1 1 2 2 3 3 4 4 5 5 ...
##  $ C_v   : Factor w/ 4 levels "b","g","m","w": 1 1 1 1 1 1 1 1 1 1 ...
##  $ vtl   : num  12.2 12.2 12.4 12.4 11.6 11.6 11.9 11.9 12.1 12.1 ...
##  $ f0    : int  277 277 287 287 219 219 260 260 244 244 ...
##  $ dur   : int  237 237 317 317 277 277 318 318 242 242 ...
##  $ G     : chr  "f" "f" "f" "f" ...
##  $ A     : chr  "c" "a" "a" "c" ...
##  $ G_v   : Factor w/ 2 levels "f","m": 2 2 2 2 2 2 2 2 2 2 ...
##  $ A_v   : Factor w/ 2 levels "a","c": 2 2 2 2 2 2 2 2 2 2 ...

We see four kinds of vectors in our data: int indicating that the vector contains integers, num indicating that the vector contains floating point numbers (i.e. numbers that can have decimal points), chr indicating that the vector contains elements made up of characters (i.e. letters or words), or numbers being treated as if they were letters (i.e. as symbols with no numeric value), and Factor which indicates that the vector contains categorical predictors (discussed below). The information represented in each column of our data frame is:

  • L: An integer from 1-15 indicating which listener responded to the trial.
  • C: A letter representing the apparent speaker category (b=boy, g=girl, m=man, w=woman) reported by the listener for each trial.
  • height: A floating-point number representing the height (in centimeters) reported for the speaker on each trial.
  • R: A letter representing the resonance scaling for the stimulus on each trial. The coding is a (actual) for the unmodified resonance and b (big) for the modified resonance (intended to sound bigger).
  • S: An integer from 1-139 indicating which speaker produced the trial stimulus.
  • C_v: A letter representing the veridical (actual) speaker category (b=boy, g=girl, m=man, w=woman) for each trial.
  • vtl: An estimate of the speaker’s vocal-tract length in centimeters.
  • f0: The vowel fundamental frequency (f0) measured in Hertz.
  • dur: The duration of the vowel sound, in milliseconds.
  • G: The apparent gender of the speaker indicated by the listener, f (female) or m (male).
  • A: The apparent age of the speaker indicated by the listener, a (adult) or c (child).
  • G_v: The veridical (actual) gender of the speaker, f (female) or m (male).
  • A_v: The veridical (actual) age of the speaker, a (adult) or c (child).

Since more than one person who read earlier versions of this book complained about the short variable names, we want to explain why we used names like A and not apparent_age, and L instead of Listener. When we get to more complicated models, A:G:f0:L is a manageable amount of characters for a variable name, while apparent_age:apparent_gender:f0:Listener is not. The latter may be nicer in the short term but becomes impossible to deal with in a compact manner in plots and on the page. The only way to feature compact descriptions and print-outs for the more complicated models in the second half of this book was to go with the short variable names from the start. By using the short names, we can have consistent naming in the text, in the formal mathematical descriptions of our models, and in the information printed in the R console. Since the same data is used in every chapter and the same handful of variables are used in every chapter, we hope that this decision will not be too vexing for the reader.

We can access the individual vectors that make up our data frame in many ways. One way is to add a $ after the name of our data frame, and then write the name of the vector after. This is shown below for our vector of heights.

exp_data_all$height

Running the command above will write out the entire vector to your screen, all 4170 observations of height responses that make up our data. Using the head function will show you the first six elements of an object, and you can get specific elements of the vector using brackets as shown below.

# show the first six
head(exp_data_all$height)
## [1] 165.6 173.2 165.6 147.8 165.6 158.8

# show the first element
exp_data_all$height[1]
## [1] 165.6

# show elements 2 to 6
exp_data_all$height[2:6]
## [1] 173.2 165.6 147.8 165.6 158.8

Below, we use two sets of brackets to retrieve the height vector using its position in the data frame (first example), or its name (second example).

head(exp_data_all[[3]])
## [1] 165.6 173.2 165.6 147.8 165.6 158.8

head(exp_data_all[["height"]])
## [1] 165.6 173.2 165.6 147.8 165.6 158.8

We can also retrieve the height vector by using a single set of brackets as shown below. This method relies on treating the data frame as a matrix whose elements are arranged on a grid. Each element of the grid can then be accessed by providing x and y grid coordinates in single brackets as in [x,y]. Below we retrieve the entire third column by specifying a column number (or name) but leaving the row number unspecified.

head(exp_data_all[,3])
## [1] 165.6 173.2 165.6 147.8 165.6 158.8

head(exp_data_all[,"height"])
## [1] 165.6 173.2 165.6 147.8 165.6 158.8

We use the same method to recover the entire first row of the data frame, and then the second element of the first row (or, from another perspective, the first element of the second column).

exp_data_all[1,]
##   L C height R S C_v  vtl  f0 dur G A G_v A_v
## 1 1 g  165.6 a 1   b 12.2 277 237 f c   m   c

exp_data_all[1,2]
## [1] "g"

1.4 Variables

Each of the columns in the exp_data_all data frame can be thought of as a different variable. Variables are placeholders for some value, whether we know it or not. For example you can say “my weight is \(x\) pounds”, or “this data represents a response provided by experimental subject \(x\)”. In our data, our variables take on different values from trial to trial, and the values of these variables tell us about the different outcomes and conditions associated with the trial. In this section we’re going to discuss different aspects of variables, especially as they pertain to the analysis of experimental data.

1.4.1 Populations and samples

Anything that varies from observation to observation in an unpredictable manner can be thought of as a random variable. For example, your exact weight varies from day to day around your ‘average’ weight. In principle, you could probably explain exactly why your weight varies if you were so inclined. However, in practice you are probably not exactly sure why your weight is a bit higher one day and a bit lower the next. So, your weight is a random variable not necessarily because it is impossible to know why it varies, but simply because you don’t currently have the means to predict its exact value for any given observation.

In order to answer questions about reasonable values for variables of interest, scientists often collect measurements of that variable. These measurements can help us understand the most probable values of this variable, and the expected range of the variable, even if its value for any given observation is unpredictable. For example, although you may not know your exact weight on any given day, if you weigh yourself with some regularity you may have enough observations to have a pretty good idea of what your weight might be tomorrow. In addition, your expectation may be so strong that a large deviation from it would be more likely to result in your buying a new scale than believing the measurement.

A sample is a finite set of observations - measurements of a variable - that you actually have. The population is the entire, possibly infinite, set of possible outcomes associated with the random variable. For example, the population of “f0 produced by adult women in the United States” contains all possible values of f0 produced by the entire set of women from the United States. Our sample is the specific set of observations we have from the speakers we observed.

Usually, a scientist will collect a sample to make inferences about the population. In other words, we are interested in the general behavior of the variable itself, not just of the small number of instances that we observed. For example, Hillenbrand et al. collected their data to make inferences about speakers of American (or Michigan) English in general, and not because they were particularly interested in the speakers in their sample. Similarly, we are not specifically interested in the opinions of the 15 listeners in our data, but in what their behavior might tell us about the population of human listeners in general.

1.4.2 Dependent and Independent Variables

We can make a very basic distinction between variables that we want to explain or understand, and variables that we use to explain and understand. The variables we want to explain are our dependent variables, they are usually the variables we measure or observe in an experiment. The variables that we use to explain and understand our measurements are our independent variables (sometimes called explanatory variables).

Dependent variables can often be random, which means their values are not knowable a priori (before observation). For example, you may have some expectation about what your weight might be before you get on a scale, but in general you can’t know exactly what it will say with certainty before collecting the observation. Although the exact values of our dependent variables can vary somewhat unpredictably from trial to trial, in the context of an experiment there is the general expectation that these values will depend in some way on the other variables in the experiment. For example, in the experiment we will analyze in this book we modified stimuli so that some are expected to ‘sound’ bigger than others. As a result, we expect an association between values of apparent height and the R (Resonance) variable in our data. In other words, the value of apparent height depends on the value of R.

Variables that help predict the response (dependent) variable are sometimes referred to as independent variables because their values are not considered to depend on those of the other predictors. More specifically, we can say the values of our independent variables are not assumed to depend on the values of the other variables within the context of our experiment, or in a manner that directly relates to the relevant research questions.

Our experiment has two response variables: the apparent height (height) reported for each trial, and the apparent speaker category (C) reported for each trial. Our experiment also involves several variables that could be used to understand our responses (i.e. every other variable in the data). Whether a variable is dependent or independent depends on the research question and on the structure of the model more than on some inherent property of variables and data. For example, the data in exp_data_all could be used to understand variation in voice pitch (f0) across speaker groups. In this case f0 would be the dependent variable and the veridical speaker category (C_v) would be the independent variable. Another researcher may choose to model how perceived height varies as a function of f0 and speaker group. In this case height would be the dependent variable and f0 and C_v would be the independent variables.
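
To give a rough sense of the first framing, in which f0 is the dependent variable, the code below summarizes average f0 for each veridical speaker category. This is only a descriptive summary rather than a statistical model, and we omit the output here.

# mean f0 (in Hz) for each veridical speaker category
tapply (exp_data_all$f0, exp_data_all$C_v, mean)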

1.4.3 Categorical variables and ‘factors’

Categorical variables, also sometimes called nominal, are variables that take on some set of non-numeric, usually character values. Often, categorical variables are the labels that we apply to objects or groups of objects. For example, gender is a categorical variable with possible values of ‘male’ and ‘female’ among others. In our experiment data, C, S, L, R, C_v, A, G, A_v, and G_v are categorical variables. Categorical predictors are often called factors. Factors can take on a limited number of values, called levels. For example if your factor is “word category” your factor levels may be “verb” and “noun”. If your factor is “first language” your levels may be “Mandarin”, “English”, “Italian”, and “Hindi”.

A factor is a data type in R. A vector of factors is very similar to a vector of words, but it has some additional properties that are useful. For example, consider our C_v predictor, which tells us which category each speaker falls into. When C_v is treated as a vector of factors, rather than a character vector, our nominal labels will have associated numerical values. Many R functions turn nominal (non-numeric) predictors into factors, and doing this yourself gives you control over how this will be handled.

# see the first 6 observations
head (exp_data_all$C_v)   
## [1] b b b b b b
## Levels: b g m w

# it has levels
levels(exp_data_all$C_v)  
## [1] "b" "g" "m" "w"

# each level has numerical values
table (exp_data_all$C_v, as.numeric (exp_data_all$C_v))  
##    
##        1    2    3    4
##   b  810    0    0    0
##   g    0  570    0    0
##   m    0    0 1350    0
##   w    0    0    0 1440

By default, factor levels are ordered alphabetically. You can control this behavior by re-ordering the factor levels as below:

# re order
exp_data_all$C_v_f = factor (exp_data_all$C_v, levels = c('w','m','g','b'))

# the new order is evident
levels (exp_data_all$C_v_f)
## [1] "w" "m" "g" "b"

# note that 'm' is now the second category
xtabs ( ~ exp_data_all$C_v + exp_data_all$C_v_f)
##                 exp_data_all$C_v_f
## exp_data_all$C_v    w    m    g    b
##                b    0    0    0  810
##                g    0    0  570    0
##                m    0 1350    0    0
##                w 1440    0    0    0

Although our factors seem to have an ‘order’, this is only because items can only be discussed and presented one at a time, and so there must be some order in our nominal variables at some level of organization. For example, when presenting effects and plotting figures, you literally do have to decide to show one effect first and another second. However, the ordering of factors is exchangeable, meaning that it does not in any way affect our analysis. For example, the listeners and speakers in our experiment received unique numbers. However, listener 1 is not the listener who ‘most’ has the quality of listener, and speaker 8 is not twice the speaker (or anything else) that speaker 4 is. In other words, although we must commit to some order in our factors to organize our data, this ordering is arbitrary and not meaningful.

There is a special kind of categorical variable, called an ordinal variable, where the ordering of the categories is meaningful. These variables are halfway between numbers and labels: They faithfully represent the order (rank) of categories but not the magnitude of the difference between values. For example, consider the first, second, and third place runners in a race. These are ordinal labels. You know who finished before/after who, but don’t know anything about how much of a difference there was between the runners. As a result, these variables seem to have some of the properties of numbers, while not being totally like ‘real’ numbers. We will discuss the prediction of ordinal dependent variables in more detail in chapter 12.
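
R can represent ordinal variables directly using ordered factors. Below is a minimal sketch using a hypothetical variable (finishing places in a race, not part of our data) showing that the ordering of the levels is preserved and can be used in comparisons.

# a hypothetical ordinal variable: finishing places in a race
place = factor (c("third", "first", "second"),
                levels = c("first", "second", "third"), ordered = TRUE)

# ordered factors support order comparisons between levels
place[1] > place[2]
## [1] TRUE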

1.4.4 Quantitative variables

Unlike nominal variables, quantitative variables let us represent the relative ordering of different observations and the relative differences between them. Some examples of quantitative variables are time, frequency, and weight. In our experiment data, height is a quantitative dependent variable, and f0, vtl, and dur are quantitative independent variables.

A distinction is made between continuous and discrete quantitative variables. Continuous variables have infinitely small spaces between adjacent elements (like the real numbers), at least in principle. On the other hand discrete variables have gaps between the possible values of the variable, like the integers. For example, things like time are naturally continuous while things like counts are naturally discrete.

When we use a quantitative variable as our dependent variable, there is usually the expectation that it is continuous rather than discrete. However, in practice all measures stored on computers are discrete, and many continuous values (e.g. reaction times) can only be measured up to some maximal precision, resulting in discrete values. For example, a chronometer that measures reaction times to the millisecond contains only 1000 possible values between zero and one second. Similarly, human height is difficult to measure to much less than a centimeter of precision, making height measurements effectively discrete. Below are some questions that will help you decide if you should treat a variable as quantitative, even if it is discrete:

  • Is the variable on a ratio or interval scale? This is a prerequisite for a quantitative value to be used as a dependent variable. An interval scale means that differences between values are meaningful, and a ratio scale additionally means that 0 is meaningful.

  • Is the underlying value continuous? Many variables are discrete in practice due to limitations in measurement. However, if the underlying value is continuous (e.g., height, time) then this can motivate treating the measurement as a quantitative dependent variable since fractional values ‘make sense’. For example, even if you measure time only down to the nearest millisecond, a value of 0.5 milliseconds is possible and interpretable. In contrast, a value of 0.5 people is not.

  • Are there a large number (>50) of possible values the measured variable can take? For example a die can only take on 6 quantitative values, which is not enough.

  • Are most/all of the observed values far from their bounds? Human height does not really get much smaller than about 50 cm or much larger than about 220 cm, so it is technically bounded. However, in most cases our observations are expected to be far from these boundaries.

If you answered yes to all or most of these questions, it is probably ok to treat your discrete variable as if it were quantitative, though this determination really needs to be made on a case by case basis.
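
For example, one simple check related to the third question is to count the number of distinct values a variable takes on in your sample, as shown below for our apparent height judgments (output omitted).

# number of distinct apparent height values in our data
length (unique (exp_data_all$height))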

1.4.5 Logical variables

Before finishing with variables, we need to talk about one type that does not appear in our data, but that will come up often. These are referred to as Boolean variables in many other contexts; in R they are referred to as logical variables. Logical variables can only take one of two values: TRUE and FALSE. Below we use two equal signs to test for the equality of two values, and != to check for an inequality. Notice that we can check for the equality of numbers or characters.

2 == 1
## [1] FALSE

"hello" == "hello"
## [1] TRUE

"hello" != "hello"
## [1] FALSE

We can also check for inequalities between numbers:

2 > 1
## [1] TRUE

2 >= 1
## [1] TRUE

2 < 1
## [1] FALSE

2 <= 1
## [1] FALSE

One useful fact is that the logical values of TRUE and FALSE have numeric values of 1 and 0, as seen below. In each case, TRUE is equal to 1 so the expression evaluates to 2.

TRUE + 1
## [1] 2

(2 == 2) + 1
## [1] 2

When logical operators are applied to vectors, the operation is evaluated for each element of the vector, as below, and a vector of logical values is returned. When combined with the numeric values of logical variables, this means that we can easily calculate the number of times a certain condition was met in the vector.

# are the values less than or equal to 3?
c(1,2,3,4,5,6,7,8,9,10) <= 3
##  [1]  TRUE  TRUE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE

Below, we find whether each element of the vector is less than or equal to three. This results in a vector of logical values equivalent to a vector of ones and zeros. When we find the sum of the vector of logical values, we find the number of times in which the condition was met. Below, we see that three of the elements in this vector satisfy our condition.

logical_vector = c(1,2,3,4,5,6,7,8,9,10) <= 3

as.numeric (logical_vector)
##  [1] 1 1 1 0 0 0 0 0 0 0

sum (logical_vector)
## [1] 3

sum (c(1,2,3,4,5,6,7,8,9,10) <= 3)
## [1] 3

There is one other very important use for vectors of logical values, and this is to extract subsets of your data that meet certain conditions. Below we create a vector of logical values that indicate whether the f0 for a trial is below 175 Hz or not. We can see that this vector has 4170 elements, one for every row in our data, and that 1290 trials satisfied our condition. This is nothing more than a bigger version of the same process we just carried out above with our logical_vector.

# TRUE if f0 < 175
f0_idx = exp_data_all$f0 < 175

str (f0_idx)
##  logi [1:4170] FALSE FALSE FALSE FALSE FALSE FALSE ...

sum (f0_idx)
## [1] 1290

Recall that we can access individual rows of our data frames, that is the individual observations of our data, by placing this information before a comma inside brackets following the name of the data frame (as seen below). When we use a logical vector in this way, the effect is to include every row that equals TRUE and to omit every row that equals FALSE in the vector. Below we use our f0_idx vector to create a new data frame called low_f0 containing only productions with f0 below 175 Hz.

# get only rows where f0 < 175, i.e. where f0_idx is TRUE
low_f0 = exp_data_all[f0_idx,]

nrow(low_f0)
## [1] 1290

max(low_f0$f0)
## [1] 172

We can use the ! operator, which basically means ‘not’, to flip each TRUE to FALSE (and vice versa). When f0_idx is flipped to select a subset of a data frame, the result is to select those rows where speaker f0 is greater than or equal to 175 Hz.

# get only rows where f0 >= 175, i.e. where f0_idx is FALSE
high_f0 = exp_data_all[!f0_idx,]

nrow(high_f0)
## [1] 2880

min(high_f0$f0)
## [1] 175

1.5 Inspecting our data

After running an experiment but before your statistical analysis, you should inspect the patterns in your data. This gives you an opportunity to make sure the data has the characteristics you expect, and that there weren’t any errors during the collection of your data or with the design of your experiment.

1.5.1 Inspecting categorical variables

One of the most useful functions for understanding the distribution of categorical variables is the table function. This function makes a cross tabulation (or contingency table) of the variables passed to the function. If a single factor is passed, the function returns the number of times each level of the factor is found in the data. Since each of our listeners listened to 278 stimuli, we expect that each level of the factor L (representing listeners) will appear 278 times in our data, confirmed below.

table (exp_data_all$L)
## 
##   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15 
## 278 278 278 278 278 278 278 278 278 278 278 278 278 278 278

We can use this approach to confirm basic expectations about our data, and to rule out problems with the design of the experiment. This is always a good idea since mistakes happen, and sometimes only get noticed when attempting to process the data. For example, if any of the levels above appeared more than or fewer than 278 times, we would have a problem.
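
Rather than scanning the table by eye, we can also ask R to check this expectation directly. Since the table above shows 278 responses for every listener, the check below returns TRUE.

# check that every listener contributed exactly 278 responses
all (table (exp_data_all$L) == 278)
## [1] TRUE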

We can also provide two (or more) factors at a time and the table function will return counts for every combination of factor levels. The table below reflects the fact that each listener heard 54 boys, 38 girls, 90 men, and 96 women, for a total of 278 responses per listener. When you provide multiple factors to table, it will vary the first factor along the rows of the table and the second factor along the columns of the table. If a third factor is provided, it makes a different table for factors one and two, for each level of factor three. More and more factors can be provided to the function, but these tables become harder and harder to work with.

# table of listener and veridical speaker category
table (exp_data_all$C_v, exp_data_all$L)
##    
##      1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
##   b 54 54 54 54 54 54 54 54 54 54 54 54 54 54 54
##   g 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38
##   m 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90
##   w 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96

Below we see that unlike our veridical categories, the distribution of apparent speaker categories varies across listeners. This is because the equal distribution of speakers for each listener is an aspect of the experimental design. However, the manner in which listeners interpreted each voice, whether they thought it sounded like a boy or girl for example, may vary across individual listeners.

# table of listener and apparent speaker category
table (exp_data_all$C, exp_data_all$L)
##    
##       1   2   3   4   5   6   7   8   9  10  11  12  13  14  15
##   b  42  89  59  90  59  59  67  58  51  12  68  40  98  95  63
##   g  58  45  24  50  44  34  32  29  44  60  24  34  16  23  43
##   m  83  89  90  84  90  92  88  90  89  93  86  95  88  88  89
##   w  95  55 105  54  85  93  91 101  94 113 100 109  76  72  83

We can visualize relationships between categorical variables using a mosaic plot. In figure 1.2 we see mosaic plots representing the two tables shown immediately above. Mosaic plots use rectangles of different sizes to reflect the relative frequencies of different combinations of categorical variables. In the left mosaic plot we see that the size of the rectangle for each category is identical across listeners. This tells us these variables do not affect each other: Changing the listener does not affect the distribution of veridical speaker category in any way. In contrast, the distribution of apparent speaker category is affected by the listener and this is shown in the right plot where columns differ randomly from each other.

Figure 1.2: Comparisons of mosaic plots showing variables that do not (left), and do (right), affect each other.
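
The mosaic plots in Figure 1.2 (and Figure 1.3 below) were not necessarily produced exactly this way, but similar plots can be made from our cross tabulations using the mosaicplot function included with R, for example as shown below. Replacing C with C_v produces a plot like the left panel instead.

# apparent speaker category by listener, similar to the right panel of Figure 1.2
mosaicplot (table (exp_data_all$L, exp_data_all$C),
            main = "", xlab = "Listener", ylab = "Apparent category")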

Below we make a three-dimensional table, and inspect the table and each dimension. Notice that to index the table along the third dimension we need to add two commas inside the brackets.

tmp_tab = table (exp_data_all$C, exp_data_all$L, exp_data_all$R)
tmp_tab        # the full three-dimensional table
tmp_tab[,,1]   # counts for the unmodified ('actual') resonance level
tmp_tab[,,2]   # counts for the modified ('big') resonance level

When we plot the relationship between apparent speaker class, listener, and resonance (i.e. actual vs. big), we see a three-way relationship between the variables. First, we see that the chances of observing different speaker categorizations depend on the listener. Second, we see that the chances of observing each category depend on resonance. And third, we see that the effect of resonance potentially affects each listener in a somewhat different way. The first chapters of this book will focus on understanding patterns in continuous variables. However, we will discuss the prediction and modeling of categorical dependent variables beginning in chapter 10.

Figure 1.3: Mosaic plots highlighting a three-way relationship: The two-way listener by apparent speaker class relationship varies as a function of the third variable, resonance (indicated along the top of each plot).

1.5.2 Inspecting quantitative variables

Using R, we can easily find useful information about any quantitative variable. Below, we calculate the sample mean, the number of observations, the sample standard deviation, and some important quantiles for our speaker height judgments. The quantiles of a set of observations are values such that a given percentage of observations fall above and below the value. Quantiles are found by ordering the observations and selecting the observation that is greater than x% of the sampled values and less than (100-x)% of the sampled values. For example, the 50% quantile, also called the median, is the value such that 50% of the distribution is below it and 50% is above it, and the 25% quantile (the first quartile) is the value such that 25% of the distribution is below it and 75% is above it.

# calculate the mean
mean (exp_data_all$height)
## [1] 162.8

# find the number of observations
length (exp_data_all$height)
## [1] 4170
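
We can also compute the sample standard deviation mentioned above; its value is not reproduced here.

# find the sample standard deviation
sd (exp_data_all$height)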

# find quantiles
quantile (exp_data_all$height)
##    0%   25%   50%   75%  100% 
## 106.7 154.9 164.8 173.5 198.1

We can use this information to make some basic, and potentially useful, statements about our data. The mean and median are 162.8 and 164.8 cm respectively, and height values range from 106.7 to 198.1 cm. However, there are not many observations at the extremes, and 50% of values are between 154.9 and 173.5 cm. We know this because these are the values of the first and third quartiles, the quantiles that divide our distribution into four equal parts. Since 75-25=50, we know that 50% of the distribution of observations must fall inside these boundaries.
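
This spread can also be computed directly: the IQR function returns the difference between the third and first quartiles (its value is not shown here).

# interquartile range of the apparent height judgments
IQR (exp_data_all$height)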

We can look at the distribution of apparent height judgments in several ways, as seen in figure 1.4. In the top row each point indicates an individual response. Points are jittered (randomly shifted) along the y axis to make them easier to distinguish, so that dense and sparse regions can be compared.
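
A plot like those in the top row of figure 1.4 can be made by plotting the height judgments against a jittered constant, for example:

# individual apparent heights, jittered along the y axis
plot (exp_data_all$height, jitter (rep (1, length (exp_data_all$height))),
      xlab = "Apparent height (cm)", ylab = "", yaxt = 'n', pch = 16)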

Figure 1.4: Each column presents the same data in three different ways: (top) As individual jittered points, (middle) as a boxplot, and (bottom) as a histogram.

In the middle row we see a box plot of the same data. The edges of the box correspond to the 25% and 75% quantiles of the distribution, and the line in the middle of it corresponds to the median. So, the box spans the interquartile range of your observations and 50% of observations are contained in the box. The whiskers extend from the edges of the box; by default, they reach the most extreme observations that fall within 1.5 times the interquartile range of the box edges. The whiskers are simply intended to give you an estimate of the amount of ‘typical’ variation in your sample. Beyond the whiskers we see individual outliers, points considered to be substantially different from the rest of the sample. We can see that the boxplot does a good job of summarizing the information in the top plots, and provides information related to both average apparent height values and to the expected variability in these values.
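
The middle-row panels were made with the boxplot function; a simplified version of the code in section 1.8 is:

# horizontal boxplot of all apparent height judgments
boxplot (exp_data_all$height, horizontal = TRUE, xlab = "Apparent height (cm)")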

The bottom row presents what is known as a histogram of the same data. The histogram divides the x axis into a set of discrete sections (‘bins’), and gives you the count (or frequency) of observations in each bin. Bins with lots of observations are relatively taller (more dense) than bins with fewer observations in them. As a result, histograms can be used to summarize where observations tend to be. For example, we can see that the bins under the interquartile range have the most observations, and that values further from the median value become increasingly less frequent. In addition, histograms can provide us with information that boxplots can’t. For example, in the right column we see that our distribution of height judgments actually has a little gap at around 150 cm. This information does not really come across in the boxplot representation of the same data.
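
The bottom-row histograms come from the hist function; a simplified version of the code in section 1.8 is:

# histogram of all apparent height judgments, using approximately 40 bins
hist (exp_data_all$height, breaks = 40, main = "", xlab = "Apparent height (cm)")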

Scatter plots represent two quantitative variables at a time using a set of points in a coordinate space. Each point represents a single observation: the x-axis location represents the value of one variable, and the y-axis location represents the value of the other variable. Scatter plots are useful for understanding relationships between quantitative predictors. Below we consider the relationships between our quantitative predictors using a pairs plot (pairs). A pairs plot creates scatter plots for all pairs of quantitative variables provided, resulting in an \(n \times n\) grid of panels for \(n\) variables (with the variable names along the diagonal). Each plot below contains a single point for each different stimulus used in this experiment (height values represent averages across all listeners).
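
Figure 1.5 was made with code along the following lines (the full version appears in section 1.8). Apparent heights are first averaged for each unique combination of the acoustic variables, and the averaged data is then passed to pairs:

# average apparent height for each stimulus
agg_data = aggregate (height ~ f0 + vtl + dur, data = exp_data_all, FUN = mean)

# scatter plots for all pairs of variables
pairs (agg_data, pch = 16)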

Figure 1.5: A pairs plot of the continuous variables in our data, showing different sorts of relationships between: f0 (fundamental frequency in Hertz), vtl (vocal-tract length in centimeters), dur (duration in milliseconds), and height (apparent speaker height in centimeters).

In the plot above we can see several apparent relationships between our quantitative variables. For example, pitch (f0) and vocal-tract length (vtl) are negatively related. This means that as the value of f0 increases (left to right), the value of vtl decreases (top to bottom). In other words, if the f0-vtl relationship were a hill it would have a negative (decreasing) slope. In contrast, we see that height and vtl enter into a positive relationship: as vtl increases, height also increases. Finally, we see that duration (dur) and height do not seem to have much of a relationship. Unlike the other two scatter plots, which look a bit like ramps or lines, the scatter plot of dur and height resembles a Rorschach test inkblot. This suggests either that these two variables are not strongly related, or that the nature of the relationship is more complicated than what can be understood using these simple plots.

1.5.3 Exploring continuous and categorical variables together

We can also consider the relationships between our quantitative and categorical variables. One way to do this is with the boxplot function, using a formula of the form:

boxplot (y ~ factor)

A call like this makes a set of boxplots for the variable y, with a separate box for each level of the factor. In figure 1.6, we see different quantitative variables organized according to veridical speaker category. For example, the left plot shows the distribution of observations of f0 for boys, girls, men, and women respectively. In this case the differences between the boxplots for each level of the factor tell us about the values of f0 usually observed for speakers in that category.
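
For example, the left panel of figure 1.6 can be reproduced (minus some styling, see section 1.8) with:

# f0 distributions for each veridical speaker category
boxplot (f0 ~ C_v, data = exp_data_all, ylab = "f0 (Hz)")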

Figure 1.6: Boxplots showing the distribution of different quantitative variables in our data according to the veridical speaker categories of boy (b), girl (g), man (m), and woman (w).

Another way to think of the relationships between our categorical and quantitative variables is using scatter plots with different points, as in figure 1.7. In the scatter plots below, each point indicates a single speaker from our experiment, and the position of each point is determined by the f0, vocal-tract length, and average apparent height for the speaker. However, rather than plot using meaningless symbols, each point is labeled using a letter indicating the veridical category that the speaker falls into. Plots like the ones below help us understand the relationship between our acoustic predictors and our speaker categories. For example, it’s clear that adult males are fairly distinct acoustically compared to the other speaker categories. In addition, it seems that boys, girls, and women are easier to separate along the vocal-tract length dimension than the f0 dimension.
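
Plots like those in figure 1.7 can be made by averaging the data for each speaker, setting up an empty plot, and then adding category labels with text. A simplified version of the code in section 1.8 is:

# average acoustics and apparent height for each speaker
agg_data = aggregate (cbind(f0,vtl,dur,height) ~ C_v + S, data = exp_data_all, FUN = mean)

# empty plot, then label each speaker with its veridical category
plot (agg_data$vtl, agg_data$height, type = 'n',
      xlab = "Vocal-tract length (cm)", ylab = "Apparent height (cm)")
text (agg_data$vtl, agg_data$height, labels = agg_data$C_v)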

Figure 1.7: Speakers plotted according to their fundamental frequency (f0), vocal-tract length, and average apparent height. Letters indicate if speaker is a boy (b), girl (g), man (m), or woman (w).

1.6 Exercises

The analyses in the main body of the text all involve only the unmodified ‘actual’ resonance level. Responses for the stimuli with the simulated ‘big’ resonance will be reserved for exercises throughout. You can get the ‘big’ resonance data in the exp_ex data frame, or all the data in the exp_data_all data frame. For now, it would be useful to compare the results in exp_data and exp_ex using the techniques covered in this chapter. The data is quite similar but not exactly the same, so familiarizing yourself with apparent height for exp_ex, and any differences between exp_data and exp_ex, will help your analyses later on.
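
For example, assuming exp_data and exp_ex are loaded and contain the same columns, a comparison of the apparent height judgments might begin with:

# compare apparent heights for the 'actual' and 'big' resonance data
quantile (exp_data$height)
quantile (exp_ex$height)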

1.7 References

Fryar, C. D., Gu, Q., & Ogden, C. L. (2012). Anthropometric reference data for children and adults: United States, 2007-2010. Vital and Health Statistics. Series 11, Data from the National Health Survey, 252, 1–48.

1.8 Plot Code


################################################################################
### Figure 1.1
################################################################################
library (bmmb)
data (height_data)
data (lee_etal_data)

par (mfrow = c(1,3), mar = c(4.1,4.1,1,1))
plot (height_data$age[height_data$gender=="f"]-.1,
      height_data$height[height_data$gender=="f"],
      pch=16,col=2,lwd=2,cex=1.5, ylim = c(70,205),type='b', xlab="Age (years)",
      ylab = "Height (cm)",xlim=c(2,21),cex.axis=1.3,cex.lab=1.3)
lines (height_data$age[height_data$gender=="m"]+.1,
       height_data$height[height_data$gender=="m"],
      pch=16,col=4,lwd=2,cex=1.5,type='b')
grid()
legend (13,110,legend = c("Female","Male"), col = c(2,4),pch=16,cex=1.2,
        pt.cex=1.5, bty='n')

phonTools::errorbar(height_data$age[height_data$gender=="f"]-.1,
                    height_data$height[height_data$gender=="f"],
                    height_data$sd[height_data$gender=="f"]*2,col=2,lwd=1,length=0.051)
phonTools::errorbar(height_data$age[height_data$gender=="m"]+.1,
                    height_data$height[height_data$gender=="m"],
                    height_data$sd[height_data$gender=="m"]*2,col=4,lwd=1,length=0.051)

rect (9.5,120,12.5,177,lwd=2,border="forestgreen",lty=1)
rect (17.5,143,20.5,205,lwd=2,border="forestgreen",lty=1)


lee_etal_data$f0_original=lee_etal_data$f0

lee_etal_data$f0 = log(lee_etal_data$f0_original)
aggdmu = aggregate (f0 ~ age+gender, FUN = mean, data = lee_etal_data)
aggdmu[,3] = exp(aggdmu[,3])

lee_etal_data$f0 = log(lee_etal_data$f0sd/lee_etal_data$f0_original)
aggdsd = aggregate (f0 ~ age+gender, FUN = sd, data = lee_etal_data)
aggdsd[,3] = (exp(aggdsd[,3])-1)*aggdmu[,3] 

aggdmuf0 = aggdmu
aggdsdf0 = aggdsd

plot (aggdmu$age[aggdmu$gender=="f"]-.1,aggdmu$f0[aggdmu$gender=="f"],
      pch=16,col=2,lwd=2,cex=1.5, ylim = c(75,370),type='b', xlab="Age (years)",
      ylab = "f0 (Hz)",xlim=c(2,21),cex.axis=1.3,cex.lab=1.3)
lines (aggdmu$age[aggdmu$gender=="m"]+.1,aggdmu$f0[aggdmu$gender=="m"],
      pch=16,col=4,lwd=2,cex=1.5,type='b')
grid()

phonTools::errorbars(aggdmu$age[aggdmu$gender=="f"]-.1,aggdmu$f0[aggdmu$gender=="f"],
                   aggdsd$f0[aggdsd$gender=="f"]*2,col=2,lwd=1,length=0.051)
phonTools::errorbars(aggdmu$age[aggdmu$gender=="m"]+.1,aggdmu$f0[aggdmu$gender=="m"],
                   aggdsd$f0[aggdsd$gender=="m"]*2,col=4,lwd=1,length=0.051)


gbars = (log(lee_etal_data$f1)+log(lee_etal_data$f2)+log(lee_etal_data$f3))/3

aggdmu = aggregate (gbars ~ age+gender, FUN = mean, data = lee_etal_data)
aggdmu[,3] = (aggdmu[,3])

aggdmu$vtl = exp(-(aggdmu$gbars - min(aggdmu$gbars)))
aggdmu$vtl = 15 * aggdmu$vtl

vtl = (log(lee_etal_data$f1sd/lee_etal_data$f1)+
         log(lee_etal_data$f2sd/lee_etal_data$f2)+
         log(lee_etal_data$f3sd/lee_etal_data$f3))/3

aggdsd = aggregate (vtl ~ age+gender, FUN = mean, data = lee_etal_data)
aggdsd[,3] = exp(aggdsd[,3])*aggdmu$vtl

aggdmuvtl = aggdmu
aggdsdvtl = aggdsd

plot (aggdmu$age[aggdmu$gender=="f"]-.1,aggdmu$vtl[aggdmu$gender=="f"],
      pch=16,col=2,lwd=2,cex=1.5, ylim = c(8.5,17),type='b', xlab="Age (years)",
      ylab = "VTL (cm)",xlim=c(2,21),cex.axis=1.3,cex.lab=1.3)
lines (aggdmu$age[aggdmu$gender=="m"]+.1,aggdmu$vtl[aggdmu$gender=="m"],
      pch=16,col=4,lwd=2,cex=1.5,type='b')
grid()

phonTools::errorbars(aggdmu$age[aggdmu$gender=="f"]-.1,
                   aggdmu$vtl[aggdmu$gender=="f"],
                   aggdsd$vtl[aggdsd$gender=="f"]*2,col=2,lwd=1,length=0.051)
phonTools::errorbars(aggdmu$age[aggdmu$gender=="m"]+.1,
                   aggdmu$vtl[aggdmu$gender=="m"],
                   aggdsd$vtl[aggdsd$gender=="m"]*2,col=4,lwd=1,length=0.051)


################################################################################
### Figure 1.2
################################################################################

par (mfrow = c(1,2), mar=c(2,2,1,.5))
plot (t(table (exp_data_all$C_v, exp_data_all$L)), main = '',xlab='Listener', 
      col = bmmb::cols[2:5],ylab = 'Veridical Speaker Class')
plot(t(table (exp_data_all$C, exp_data_all$L)), main = '',xlab='Listener', 
     col = bmmb::cols[2:5],ylab = 'Apparent Speaker Class')

################################################################################
### Figure 1.3
################################################################################

tmp_tab = table (exp_data_all$C, exp_data_all$L, exp_data_all$R)

par (mfrow = c(1,2), mar=c(2,2,2.5,0.5))
plot (t(tmp_tab[,,1]), main = 'Actual Resonance',xlab='Listener', 
      col = bmmb::cols[2:5],
      ylab = 'Apparent Speaker Class')
plot(t(tmp_tab[,,2]), main = 'Big Resonance',xlab='Listener', 
     col = bmmb::cols[2:5],
     ylab = 'Apparent Speaker Class')

################################################################################
### Figure 1.4
################################################################################

mens_height = exp_data_all$height[exp_data_all$C_v=='m']

set.seed(7)
par (mfcol = c(3,2), mar = c(0.5,4,0.5,1), oma = c(4,0,2,0))
plot (mens_height, jitter (rep(1,length(mens_height))), xlim = c(100, 210), ylim = c(.95,1.05),
      yaxt='n',ylab='', pch = 16, col = yellow, xaxt='n')
mtext (side =3, outer = FALSE, text = "Adult male apparent heights", line = 1)
boxplot (mens_height, horizontal = TRUE, ylim = c(100, 210), col = coral,xaxt='n')
hist (mens_height,main="", col = teal, xlim = c(100, 210),breaks=40,cex.lab=1.3,
      cex.axis=1.3, xlab = "")
mtext (side =1, outer = FALSE, text = "Apparent height (cm)", line = 3)
box()

plot (exp_data_all$height, jitter (rep(1,length(exp_data_all$height))), xlim = c(100, 210), ylim = c(.95,1.05),
      yaxt='n',ylab='', pch = 16, col = yellow, xaxt='n')
mtext (side =3, outer = FALSE, text = "All apparent heights", line = 1)
boxplot (exp_data_all$height, horizontal = TRUE, ylim = c(100, 210), col = coral,xaxt='n')
hist (exp_data_all$height,main="", col = teal, xlim = c(100, 210),breaks=40,cex.lab=1.3,
      cex.axis=1.3,xlab = "")
mtext (side =1, outer = FALSE, text = "Apparent height (cm)", line = 3)

box()

################################################################################
### Figure 1.5
################################################################################

agg_data = aggregate (height~f0+vtl+dur, data = exp_data_all, FUN=mean)
pairs (agg_data, col = lavender,pch=16)


################################################################################
### Figure 1.6
################################################################################

par (mfrow = c(1,3), mar = c(3,4.2,1,1))
boxplot (f0 ~ C_v, data = exp_data_all, col = bmmb::cols[1:4],cex.lab=1.3,cex.axis=1.3,
         xlab="",ylab="f0 (Hz)")
boxplot (vtl ~ C_v, data = exp_data_all, col = bmmb::cols[5:8],cex.lab=1.3,cex.axis=1.3,
         xlab="",ylab="Vocal-tract Length (cm)")
boxplot (dur ~ C_v, data = exp_data_all, col = bmmb::cols[9:12],cex.lab=1.3,cex.axis=1.3,
         xlab="",ylab="Duration (ms)")

agg_data = aggregate (cbind(f0,vtl,dur,height)~C_v+S, data = exp_data_all, FUN=mean)


par (mfrow = c(1,3), mar = c(4,4,1,1))

plot (agg_data$vtl, agg_data$height, type = 'n',ylab="Apparent height (cm)",xlab="Vocal-tract length (cm)",
      cex.lab=1.3,cex.axis=1.3)
text (agg_data$vtl, agg_data$height, labels = agg_data$C_v, 
      col = bmmb::cols[2:5][factor(agg_data$C_v)], cex = 1.5)

plot (agg_data$f0, agg_data$height, type = 'n',xlab="f0 (Hz)",ylab="Apparent height (cm)",
      cex.lab=1.3,cex.axis=1.3)
text (agg_data$f0, agg_data$height, labels = agg_data$C_v, 
      col = bmmb::cols[2:5][factor(agg_data$C_v)], cex = 1.5)

plot (agg_data$vtl, agg_data$f0, type = 'n',ylab="f0 (Hz)",xlab="Vocal-tract length (cm)",
      cex.lab=1.3,cex.axis=1.3)
text (agg_data$vtl, agg_data$f0, labels = agg_data$C_v, 
      col = bmmb::cols[2:5][factor(agg_data$C_v)], cex = 1.5)

################################################################################
### Figure 10.1
################################################################################


library (bmmb)
data (exp_data)
options (contrasts = c('contr.sum','contr.sum'))

tab = table (exp_data$S, exp_data$C_v)
mod_cat = apply (tab, 1,which.max)


exp_data$vtl = exp_data$vtl - mean (exp_data$vtl)

aggd = aggregate (cbind ( height, A=="a", G=="f", vtl,f0, vtl) ~ S + C_v, 
                      data = exp_data, FUN = mean)

par (mfrow = c(2,2), mar = c(2,4,1,1.5), oma = c(3,1,0,0))
plot (aggd$vtl, aggd$height, cex =2, col = bmmb::cols[c(2:5)][factor(aggd$C_v)], 
      xlim=c(-3,3),  pch=16,lwd=2,ylim = c(130,185),xlab = "",
      ylab="Apparent height (inches)",cex.lab=1.2,cex.axis=1.2)
grid()
abline (lm(aggd$height~aggd$vtl)$coefficients, lwd=2)
points (aggd$vtl, aggd$height, cex =2, pch=16,lwd=2,
      col = bmmb::cols[c(4,6)][aggd$group])

legend (.8,165, legend = c("Boys","Girls","Men","Women"),lwd=2,lty=0,
        col = cols[2:5], bty='n',pch=16,pt.cex=2)
plot (exp_data$vtl, exp_data$G=='f', cex =2, col = bmmb::cols[c(2:5)][factor(aggd$C_v)], yaxt='n',
      xlim=c(-3,3),  pch=16,lwd=2,ylim = c(-.1,1.1),xlab = "",
      ylab="G == 'f'",cex.lab=1.2,cex.axis=1.2)
grid()
abline (lm(aggd[,5]~aggd$vtl)$coefficients, lwd=2)
points (aggd$vtl, aggd[,5], cex =2, pch=16,lwd=2,
      col = bmmb::cols[c(4,6)][aggd$group])
abline (h=.5)
axis (side=2, at=0:1)

plot (aggd$vtl, aggd[,5], cex =2, col = bmmb::cols[c(2:5)][factor(aggd$C_v)], 
      xlim=c(-3,3),  pch=16,lwd=2,ylim = c(-.1,1.1),xlab = "",
      ylab="P(G  = 'f')",cex.lab=1.2,cex.axis=1.2)
grid()
abline (lm(aggd[,5]~aggd$vtl)$coefficients, lwd=2)
points (aggd$vtl, aggd[,5], cex =2, pch=16,lwd=2,
      col = bmmb::cols[c(4,6)][aggd$group])
abline (h=.5)

plot (aggd$vtl, logit(aggd[,5]), cex =2, col = bmmb::cols[c(2:5)][factor(aggd$C_v)], 
      xlim=c(-3,3),  pch=16,lwd=2,ylim = c(-6.1,6.1),xlab = "",
      ylab="Logit (P(G  = 'f'))",cex.lab=1.2,cex.axis=1.2)
grid()
abline (lm(logit(aggd[,5])~aggd$vtl)$coefficients, lwd=2)
points (aggd$vtl, logit(aggd[,5]), cex =2, pch=16,lwd=2,
      col = bmmb::cols[c(4,6)][aggd$group])
abline (h=0)

mtext (side=1,text="Centered VTL (cm)", outer = TRUE, line = 1.5, cex=0.9)

################################################################################
### Figure 10.2
################################################################################

x = seq (-8,8,.01)
y = x

par (mfrow = c(1,3), mar=c(4,4,3,1))
plot (x,y, type = 'l',lwd=4, col=bmmb::deepgreen, xlim=c(-7,7), main = "y = x",
      xlab = "Predictor", ylab = "Logits",cex.lab=1.3,cex.axis=1.2)
abline (h=0,v=seq(-8,8,2),lty=3)

plot (x,bmmb::inverse_logit (y), type = 'l',lwd=4, col=bmmb::darkorange, xlim=c(-7,7), 
      main = "y = inverse logit ( x )", xlab = "Predictor", ylab="Probability",
      cex.lab=1.3,cex.axis=1.2)
abline (h=c(0,1,.5),v=seq(-8,8,2),lty=3)

plot (x,bmmb::logit(bmmb::inverse_logit (y)), type = 'l',lwd=4, 
      col=bmmb::"lavender", xlim=c(-7,7), cex.lab=1.3,cex.axis=1.2,
      main = "y = logit ( inverse logit ( x ) )", xlab = "Predictor", ylab="Logits")
abline (h=0,v=seq(-8,8,2),lty=3)

################################################################################
### Figure 10.3
################################################################################

x = seq (-3.5,3.5,.01)
y = x

par (mfrow = c(1,2), mar=c(4,4,1,1))

plot (x,inverse_logit (y), type = 'l',lwd=3, col=darkorange, xlim=c(-3,3),
      xlab="Logit",ylab="Probability", ylim = c(0,1),yaxs='i',xaxs='i')
abline (h=seq(0,1,.1),v=(-9:9),lty=3)
abline (h = 0.5, lwd=2)

plot (x,y, type = 'l',lwd=3, col=deepgreen, xlim=c(-3,3),xlab="Predictor",
      ylab = "Logit",yaxs='i',xaxs='i')
abline (h=logit(seq(0.1,0.9,.1)),v=c(-9:9),lty=3)
abline (h = 0, lwd=2)


################################################################################
### Figure 10.4
################################################################################

tab = table (exp_data$S, exp_data$C)
mod_cat = apply (tab, 1,which.max)

aggd = aggregate (cbind ( height, A=="a", G=="f", vtl,f0, vtl) ~ S + C_v, 
                      data = exp_data, FUN = mean)
aggd$C_v = factor(aggd$C_v)

cffs = gender_vtl_hypothesis[,1]

par (mfrow = c(1,2), mar = c(3,4,1,1))

plot (aggd$vtl, logit(aggd[,5]), cex =2, ylim = c(-5,5),xlab="",
      ylab = "Logit (P(F==1))", col = bmmb::cols[c(2:5)][mod_cat],pch=16,
      lwd=2, xlim =range (exp_data$vtl))
grid()
abline (h=0,lty=3)
curve ( (cffs[1] + cffs[4]*x), xlim =range (exp_data$vtl), add = TRUE, 
        col = 1, lwd=4)
curve ( (cffs[3] + cffs[6]*x), xlim =range (exp_data$vtl), add = TRUE, 
        col = bmmb::cols[10], lwd=4, lty=1)
curve ( (cffs[2] + cffs[5]*x), xlim =range (exp_data$vtl), add = TRUE, 
        col = bmmb::cols[14], lwd=4, lty=1)

legend (0.8,4.5, legend = c("Boys","Girls","Men","Women"),lwd=2,lty=0,
        col = bmmb::cols[2:5], bty='n',pch=16,pt.cex=1.5)

plot (aggd$vtl, (aggd[,5]), cex =2, ylim = c(0,1),xlab="",
      ylab = "P(F==1)", col = bmmb::cols[c(2:5)][mod_cat],
      pch=16,lwd=2, xlim =range (exp_data$vtl))
grid()
abline (h=.50,lty=3)

curve ( inverse_logit(cffs[1] + cffs[4]*x), xlim =range (exp_data$vtl), 
        add = TRUE, col = 1, lwd=4)
curve ( inverse_logit(cffs[3] + cffs[6]*x), xlim =range (exp_data$vtl), 
        add = TRUE, col = bmmb::cols[10], lwd=4, lty=1)
curve ( inverse_logit(cffs[2] + cffs[5]*x), xlim =range (exp_data$vtl), 
        add = TRUE, col = bmmb::cols[14], lwd=4, lty=1)

legend (-2,-1, legend = c("Boys","Girls","Men","Women"),lwd=2,lty=0,
        col = bmmb::cols[3:6], bty='n',pch=1,pt.cex=1.5)

mtext (side=1, text = "Centered VTL (cm)", outer = TRUE, cex = 1, line=-1)

legend (0.8,.9, legend = c("Overall","Adult","Child"),lwd=5,
        col = c(1,bmmb::cols[c(14,10)]), bty='n')


################################################################################
### Figure 10.5
################################################################################


knitr::include_graphics("_main_files/figure-html/Figure 10.5.jpg")

# jpeg ("_main_files/figure-html/Figure 10.5.jpg",4800,1800,res=600)
# 
# tab = table (exp_data$S, exp_data$C_v)
# mod_cat = apply (tab, 1,which.max)
# 
# muvtl = round (mean(exp_data$vtl),1)
# 
# aggd = aggregate (cbind ( height, A=="a", G=="f", vtl,f0, vtl) ~ S + C_v,
#                       data = exp_data, FUN = mean)
# aggd$C_v = factor(aggd$C_v)
# 
# cffs = gender_vtl_hypothesis[,1]
# 
# bounds = boundaries_1[,1]
# 
# par (mfrow = c(1,2), mar = c(4.1,4.1,1,1))
# layout (mat = matrix(c(1,2,1,3,1,4,1,5),4,2,byrow=TRUE))
# 
# plot (aggd$vtl, bmmb::logit(aggd[,5]), cex =2, ylim = c(-5,5),xlab="",
#       ylab = "Logit (P(F==1))", col = bmmb::cols[c(2:5)][mod_cat],pch=16,
#       lwd=2, xlim =range (exp_data$vtl),cex.lab = 1.3,cex.axis=1.3,
#       xaxt = 'n')
# axis (at = -2:2, labels = (-2:2) + muvtl,
#       cex.axis = 1.3, side=1)
# abline (h=0,lty=3)
# 
# curve ( (cffs[1] + cffs[4]*x), xlim =range (exp_data$vtl), add = TRUE,
#         col = 1, lwd=3)
# curve ( (cffs[3] + cffs[6]*x), xlim =range (exp_data$vtl), add = TRUE,
#         col = bmmb::cols[10], lwd=4, lty=1)
# curve ( (cffs[2] + cffs[5]*x), xlim =range (exp_data$vtl), add = TRUE,
#         col = bmmb::cols[14], lwd=4, lty=1)
# 
# legend (0.8,4.7, legend = c("Boys","Girls","Men","Women"),lwd=2,lty=0,
#         col = bmmb::cols[2:5], bty='n',pch=16,pt.cex=1.5,cex=1.5)
# 
# abline (v = c(0.24,.655,-.99), col = c(1,bmmb::cols[14],bmmb::cols[10]),
#         lwd=2,lty=3)
# 
# par (mar = c(.5,.5,.5,.5))
# 
# bound = bounds[3]
# plot (0,xlim = c(-2,2),ylim=c(0,1),xaxt='n',yaxt='n',type='n')
# rect(-3, -1, bound, 2, col=bmmb::cols[3])
# rect(bound, -1, 3, 2, col=bmmb::cols[2])
# text (c(-1.5,1.2),c(0.5,0.5), c("Girl","Boy"), cex = 2, col = 0)
# abline (v = bounds[3], lwd = 4, col = bmmb::cols[10])
# 
# bound = bounds[1]
# plot (0,xlim = c(-2,2),ylim=c(0,1),xaxt='n',yaxt='n',type='n')
# rect(-3, -1, bound, 2, col=bmmb::cols[8])
# rect(bound, -1, 3, 2, col=bmmb::cols[7])
# text (c(-1.5,1.2),c(0.5,0.5), c("Female","Male"), cex = 2, col = 0)
# abline (v = bounds[1], lwd = 3, col = 1)
# 
# bound = bounds[2]
# plot (0,xlim = c(-2,2),ylim=c(0,1),yaxt='n',type='n',
#       cex.axis = 1.3, xaxt = 'n')
# rect(-3, -1, bound, 2, col=bmmb::cols[5])
# rect(bound, -1, 3, 2, col=bmmb::cols[4])
# text (c(-1.5,1.2),c(0.5,0.5), c("Woman","Man"), cex = 2, col = 0)
# axis (at = -2:2, labels = (-2:2) + muvtl,
#       cex.axis = 1.3, side=1)
# abline (v = bounds[2], lwd = 4, col = bmmb::cols[14])
# 
# mtext (side=1, "Centered vocal-tract length (cm)", line = 3)
# 
# dev.off()

################################################################################
### Figure 10.6
################################################################################


par (mfrow = c(1,2), mar = c(4,4,1,1))

p1 = (tapply (exp_data$Female, exp_data$G_v, mean))
p2 = (tapply (adults$Female, adults$G_v, mean))
p3 = (tapply (children$Female, children$G_v, mean))

plot (c(1,1), p1, type = 'b', pch=16, xlim = c(0.8,3.2), ylim = c(0,1),
      ylab= "P(F==1)", col=c(1,1),xaxt='n',xlab='')
#points (1, mean(p1), cex=1.5,pch=16)
points (c(1,1), p1, col=c(3,2), pch=16,cex=1.5)
lines (c(2,2), p2, type = 'b', pch=16)
points (c(2,2), p2, col=c(3,2), pch=16,cex=1.5)
#points (2, mean(p2), cex=1.5,pch=16)
lines (c(3,3), p3, type = 'b', pch=16)
#points (3, mean(p3), cex=1.5,pch=16)
points (c(3,3), p3, col=c(3,2), pch=16,cex=1.5)
abline (h = 0.5)
axis (side=1,at=1:3, labels=c("All","Adults","Children"))

legend (2.3, 0.9, legend=c("H","FA"),col=c(3,2),pch=16,bty='n',pt.cex=1.3)

p1 = bmmb::logit (tapply (exp_data$Female, exp_data$G_v, mean))
p2 = bmmb::logit (tapply (adults$Female, adults$G_v, mean))
p3 = bmmb::logit (tapply (children$Female, children$G_v, mean))

plot (c(1,1), p1, type = 'b', pch=16, xlim = c(0.8,3.2), ylim = c(-4,2.5),
      ylab= "Logit (P(F==1))", col=c(1,1),xaxt='n',xlab='')
points (1, mean(p1), cex=1.5,pch=16)
points (c(1,1), p1, col=c(3,2), pch=16,cex=1.5)
lines (c(2,2), p2, type = 'b', pch=16)
points (2, mean(p2), cex=1.5,pch=16)
points (c(2,2), p2, col=c(3,2), pch=16,cex=1.5)
lines (c(3,3), p3, type = 'b', pch=16)
points (3, mean(p3), cex=1.5,pch=16)
points (c(3,3), p3, col=c(3,2), pch=16,cex=1.5)
axis (side=1,at=1:3, labels=c("All","Adults","Children"))

abline (h = 0)

#############################################################################
### Figure 10.7
#############################################################################

biases1 = bmmb::short_hypothesis (
  model_gender_dt,
  hypothesis = c("Intercept+A_v1 = 0"),group="L", scope="coef")
biases2 = bmmb::short_hypothesis (
  model_gender_dt,
  hypothesis = c("Intercept-A_v1 = 0"),group="L", scope="coef")

sensitivities1 = bmmb::short_hypothesis (
  model_gender_dt,
  hypothesis = c("2*(F_v+F_v:A_v1) = 0"),group="L", scope="coef")
sensitivities2 = bmmb::short_hypothesis (
  model_gender_dt,
  hypothesis = c("2*(F_v-F_v:A_v1) = 0"),group="L", scope="coef")


par (mar = c(4,4.2,1,.2))
layout (m=t(c(1,2,3)), widths = c(.30,.4,.4))
bmmb::brmplot (gender_dt_hypothesis[1:3,], ylim = c(-3,13), col = bmmb::cols[7],
               nudge = -.01, labels="", ylab = "Logits",cex.lab=1.2,cex.axis=1.2)
bmmb::brmplot (gender_dt_hypothesis[4:6,], add = TRUE, col = bmmb::cols[8], 
               nudge = .01, labels="")
axis (side = 1, at = 1:3, labels = c("All","Adults","Children"),cex.axis=1.2)

par (mar = c(4,.1,1,.2))
bmmb::brmplot (biases1, ylim = c(-3,13), col = bmmb::cols[7],yaxt='n',
               labels=1:15, cex.lab=1.2,cex.axis=1.2)
bmmb::brmplot (sensitivities1, add = TRUE, col = bmmb::cols[8], labels="")

par (mar = c(4,.1,1,.2))
bmmb::brmplot (biases2, ylim = c(-3,13), col = bmmb::cols[7],yaxt='n',
               labels = 1:15,cex.lab=1.2,cex.axis=1.2)
bmmb::brmplot (sensitivities2, add = TRUE, col = bmmb::cols[8], pch=16, labels="")

legend (5, 11, legend =c("Sensitivity", "Bias"), pch=16,lwd=2,
        col = bmmb::cols[c(8,7)], cex = 1.5, bty='n')

################################################################################
### Figure 10.8
################################################################################

tmp = bmmb::exp_data
par (mfrow = c(1,1), mar = c(4,4,1,1))

plot (density (tmp$vtl[tmp$C_v=="b"]), xlim = c(11,16),
      ylim = c(0,1.3),lwd=2,col=cols[2],main="",xlab="Vocal-Tract Length (cm)")
polygon (density (tmp$vtl[tmp$C_v=="b"]),
       lwd=2,col=cols[2])
polygon (density (tmp$vtl[tmp$C_v=="g"]),
       lwd=2,col=cols[3])
polygon (density (tmp$vtl[tmp$C_v=="w"]),
       lwd=2,col=cols[5])
polygon (density (tmp$vtl[tmp$C_v=="m"]),
       lwd=2,col=cols[4])

legend (13.7,1.2,legend = c("boys","girls","men","women"),col=bmmb::cols[2:5],
        pch=16,bty='n',cex=1.0)


###############################################################################
### Figure 11.1
###############################################################################

agg_data = aggregate (height~f0+vtl+C_v, data = bmmb::exp_data, FUN=mean)
agg_data = agg_data[nrow(agg_data):1,]

ptcex = agg_data[,4] - min(agg_data[,4])
ptcex = 1 + (ptcex / max(ptcex))*1.5

par (mfrow = c(1,3), mar = c(4,4,1,1))
plot (agg_data[,c(2,4)],pch=16,col = bmmb::cols[2:5][factor(agg_data[,3])], 
      cex=ptcex,xlab="VTL (cm)",ylab="Apparent height (cm)",cex.lab=1.3,cex.axis=1.3)
abline (lm(agg_data[,c(4)]~agg_data[,c(2)]),lwd=2,lty=2)
grid()
plot (agg_data[,c(1,4)],pch=16,col = bmmb::cols[2:5][factor(agg_data[,3])], 
      cex=ptcex,xlab="f0 (Hz)",ylab="Apparent height (cm)",cex.lab=1.3,cex.axis=1.3)
abline (lm(agg_data[,c(4)]~agg_data[,c(1)]),lwd=2,lty=2)
grid()

plot (agg_data[,c(1,2)],pch=16,col = bmmb::cols[2:5][factor(agg_data[,3])], 
      cex=ptcex,xlab="f0 (Hz)",ylab="VTL (cm)",cex.lab=1.3,cex.axis=1.3)
grid()


################################################################################
### Figure 11.2
################################################################################

knitr::include_graphics("_main_files/figure-html/Figure 11.2.jpg")

# jpeg ("_main_files/figure-html/Figure 11.2.jpg",4800,2700,res=600)
# 
# agg_data = aggregate (height~f0+vtl+C_v, data = bmmb::exp_data, FUN=mean)
# colnames(agg_data) = c("f0","VTL","C_v","height")
# 
# agg_data = aggregate (height~f0+vtl+C_v, data = bmmb::exp_data, FUN=mean)
# agg_data = agg_data[nrow(agg_data):1,]
# 
# ptcex = agg_data[,4] - min(agg_data[,4])
# ptcex = 1 + (ptcex / max(ptcex))*1.5
# 
# par (mfrow = c(1,2), mar = c(0,1,0,2), oma = c(0,0,0,0))
# 
# tmp = agg_data[,c(2,1,4)]
# s3d = scatterplot3d::scatterplot3d (tmp, color = bmmb::cols[2:5][factor(agg_data[,3])],
#                      pch=16, angle = 55,type = "p",ylim=c(90,310),
#                      cex.symbols=rev(ptcex)*.9, zlab="Apparent height (cm)",
#                      xlab="Vocal-tract length (cm)",
#                      ylab="f0 (Hz)")
# 
# my.lm <- lm(height ~ vtl+f0, data = agg_data)
# s3d$plane3d(my.lm, col = 1,lty=3, draw_polygon=TRUE,
#             polygon_args = list(col = rgb(0, 0, 0, 0.2)))
# 
# s3d = scatterplot3d::scatterplot3d (agg_data[,c(1,2,4)],pch=16, angle = 55,
#                                     color = bmmb::cols[2:5][factor(agg_data[,3])],
#                                     type = "p",xlim=c(90,310),
#                                     cex.symbols=rev(ptcex)*.9,
#                                     zlab="Apparent height (cm)",
#                                     ylab="Vocal-tract length (cm)",
#                                     xlab="f0 (Hz)")
# my.lm <- lm(height ~ f0+vtl, data = agg_data)
# s3d$plane3d(my.lm, col = 1,lty=3, draw_polygon=TRUE,
#             polygon_args = list(col = rgb(0, 0, 0, 0.1)))
# dev.off()

################################################################################
### Figure 11.3
################################################################################

f1 = function (a,b,c) a * 1 + b *1  
f2 = function (a,b,c) a*b*2
f3 = function (a,b,c) a * 1 + b *1 + a*b*2

x <- y <- seq(-1, 1, length= 12)

par (mfrow = c(3,3), mar = c(1,1,0,0), oma = c(0,0,0,0))

z <- outer(x, y, f1)
persp(x, y, z, col = bmmb::cols[2],theta = 0,zlim = c(-3,3))
persp(x, y, z, col = bmmb::cols[2],theta = -90,zlim = c(-3,3))
persp(x, y, z, col = bmmb::cols[2],theta = 45,zlim = c(-3,3))

z <- outer(x, y, f2)
persp(x, y, z, col = bmmb::cols[3],theta = 0,zlim = c(-3,3))
persp(x, y, z, col = bmmb::cols[3],theta = -90,zlim = c(-3,3))
persp(x, y, z, col = bmmb::cols[3],theta = 45,zlim = c(-3,3))

z <- outer(x, y, f3)
persp(x, y, z, col = bmmb::cols[4],theta = 0,zlim = c(-3,5))
persp(x, y, z, col = bmmb::cols[4],theta = -90,zlim = c(-3,5))
persp(x, y, z, col = bmmb::cols[4],theta = 45,zlim = c(-3,5))


################################################################################
### Figure 11.4
################################################################################
pts = lme4::fixef (lme_model_height_vtl_f0)[-1]
err_bars = summary (lme_model_height_vtl_f0)$coefficients[-1,2]

fixefs = brms::fixef (model_height_vtl_f0)[-1,]
good = (fixefs[,3] < -0.5 & fixefs[,4] < -0.5) | (fixefs[,3] > 0.5 & fixefs[,4] > 0.5) 
pchs = ifelse (good, 16,1)

par (mfrow = c(1,1), mar = c(6,4,1,1))
bmmb::brmplot (brms::fixef (model_height_vtl_f0)[-1,], ylim = c(-7,10),las=2, pch=pchs,lwd=2, ylab = "Centimeters")
points (brms::fixef (model_height_vtl_f0)[-1,1], pch=pchs,lwd=2,cex=1.5)
points ((1:15)+.2, pts, cex=1.5,lwd=2,col=2,pch=16)
segments((1:15)+.2, pts-2*err_bars,(1:15)+.2, pts+2*err_bars,lwd=2,col=2)

abline (h = c(-0.5,0.5), lty = 3, col = bmmb::cols[6],lwd=2)

################################################################################
### Figure 11.5
################################################################################

Ssd = attr(lme4::VarCorr(lme_model_height_vtl_f0)[[1]],'stddev')
Lsd = attr(lme4::VarCorr(lme_model_height_vtl_f0)[[2]],'stddev')
lmer_vars = c(sigma(lme_model_height_vtl_f0), Lsd, Ssd)
vars_final = bmmb::banova (model_height_vtl_f0, superpopulation =TRUE)
vars_final = vars_final[vars_final$cluster != "fixefs",]

layout (m=1:3, heights = c(.4,.3,.3))
par (mar = c(5,4,.5,1), oma = c(.1,.1,.1,.1))
bmmb::brmplot (vars_final, ylim = c(0,8.5), las = 2,cex.axis=1.3,cex.lab=1.2,
               col=bmmb::cols[14],yaxs="i", labels = "",ylab="Centimeters")
points (lmer_vars, cex=3,lwd=3,col=bmmb::cols[2],pch=4)
axis (1:nrow(vars_final), labels = rep("",nrow(vars_final)),side=1)
text (1:nrow(vars_final),-.5,rownames(vars_final), srt=35, 
      xpd = TRUE, adj = 1, cex=1.2)

pts = lme4::ranef(lme_model_height_vtl_f0)$L[,4]

par (mar = c(.5,4,.5,1))
bmmb::brmplot (brms::ranef(model_height_vtl_f0)$L[,,4],labels="", 
               col = bmmb::cols[8],ylab="Centimeters",cex.lab=1.2)
points ((1:15), pts, cex=3,lwd=3,col=bmmb::cols[7],pch=4)

corrs_lme = attr(lme4::VarCorr(lme_model_height_vtl_f0)$L,"correlation")
corrs_lme = corrs_lme[lower.tri (corrs_lme)]
corrs_brms = bmmb::getcorrs(model_height_vtl_f0, "L")

bmmb::brmplot (corrs_brms, ylim = c(-.97,.97), las = 2, cex.lab=1.2,
               labels = "",line=FALSE,col=bmmb::cols[4],ylab="Centimeters")
abline (h=0)
points (corrs_lme, cex=1.5,lwd=3,pch=4, col = bmmb::cols[12])

################################################################################
### Figure 11.6
################################################################################

par (mfrow = c(1,1), mar = c(4,4,.5,.5))

bmmb::brmplot (brms::ranef (model_height_vtl_f0)$L[,,"Intercept"], 
               col = bmmb::cols[9], line=TRUE, xlab = "Listener",ylab="Centimeters")
bmmb::brmplot (brms::ranef (model_height_vtl_f0)$L[,,"vtl:G1"], add = TRUE, 
         nudge = 0.1, labels = "", col = bmmb::cols[10],pch=17)

legend (11,10, legend =c("Intercept:L","VTL:G:L"),pch=16:17,horiz = TRUE,bty='n',
        col=bmmb::cols[9:10],pt.cex=1.5)

################################################################################
### Figure 11.7
################################################################################

par (mar = c(.125,2,.125,.125), mfrow = c(2,1), oma = c(6.5,2,1,0))
banovaplot (banova_height_vtl_f0_finite[-2,], las = 2,line=FALSE,labels = "",
            yaxs="i", ylim = c(0,10))
abline (h=0)
legend (10,10,legend = c("Error","Fix.Eff","L.Ran.Eff","S.Ran.Eff"),
        pch=16,bty='n',col = bmmb::cols[2:5], horiz = TRUE)
box()
banovaplot (banova_height_vtl_f0_super[-2,], las = 2, line=FALSE,yaxs="i",
            ylim = c(0,10))
abline (h=0)
box()
mtext (side=2,outer=TRUE,line=1,"Centimeters")


################################################################################
### Figure 11.8
################################################################################

sds_reduced = getsds(model_height_vtl_f0_reduced)
sds = getsds(model_height_vtl_f0)
use = rownames(sds) %in% rownames(sds_reduced)
sds = sds[use,]

par (mfrow = c(1,2), mar = c(4,4,1,1))
layout (m = t(c(1,2)),widths = c(.55,.45))
brmplot (fixef (model_height_vtl_f0)[c(2,3,4,5,7,11),], ylim = c(-6,10),las=2,
         ylab = "Centimeters", col = bmmb::cols[2])
brmplot (fixef (model_height_vtl_f0_reduced)[c(3,2,4,5,7,6),],
         add = TRUE, nudge = 0.2, col = bmmb::cols[12],labels = "")
abline (h=0)

legend (4, 10, legend = c("Full","Reduced"),bty='n',pch=16,
        col=bmmb::cols[c(2,12)], pt.cex = 1.3)

brmplot (sds, las = 2, ylab = "Centimeters", col = bmmb::cols[2])
brmplot (sds_reduced, add = TRUE, col = bmmb::cols[12], nudge = .2, labels = "")


################################################################################
### Figure 11.9
################################################################################

f1 = function (a,b,c) a * 1 + b *1  
x <- y <- seq(-3, 3, length= 12)

par (mfrow = c(2,3), mar = c(1,1,0,0), oma = c(0,0,0,0))

z <- outer(x, y, f1)
persp(x, y, z, col = bmmb::cols[14],theta = 0,zlim = c(-6.5,6.5))
persp(x, y, z, col = bmmb::cols[14],theta = -90,zlim = c(-6.5,6.5))
persp(x, y, z, col = bmmb::cols[14],theta = 45,zlim = c(-6.5,6.5))

z <- outer(x, y, f1)
persp(x, y, inverse_logit(z), col = bmmb::cols[13],theta = 0,zlim = c(0,1))
persp(x, y, inverse_logit(z), col = bmmb::cols[13],theta = -90,zlim = c(0,1))
persp(x, y, inverse_logit(z), col = bmmb::cols[13],theta = 45,zlim = c(-0,1))


################################################################################
### Figure 11.10
################################################################################

f1 = function (a,b,c) a * 1 + b *1 + a*b*1  

x <- y <- seq(-3, 3, length= 12)

par (mfrow = c(2,3), mar = c(1,1,0,0), oma = c(0,0,0,0))

z <- outer(x, y, f1)
persp(x, y, z, col = bmmb::cols[10],theta = 0,zlim = c(-9.5,15.5))
persp(x, y, z, col = bmmb::cols[10],theta = -90,zlim = c(-9.5,15.5))
persp(x, y, z, col = bmmb::cols[10],theta = 45,zlim = c(-9.5,15.5))

z <- outer(x, y, f1)
persp(x, y, ztop(z), col = bmmb::cols[6],theta = 0,zlim = c(0,1))
persp(x, y, ztop(z), col = bmmb::cols[6],theta = -90,zlim = c(0,1))
persp(x, y, ztop(z), col = bmmb::cols[6],theta = 45,zlim = c(-0,1))


################################################################################
### Figure 11.11
################################################################################

par ( mfrow = c(1,2), mar = c(5.5,4,1,1))
layout (mat= t(c(1,2)), widths = c(.4,.6))
bmmb::brmplot (brms::fixef(model_gender_vtl_f0), line = TRUE, las = 2,
               ylab = "Centimeters")
par (mar = c(5.5,2,1,1))
bmmb::banovaplot (banova_gender_vtl_f0, line = TRUE, las = 2)

################################################################################
### Figure 11.12
################################################################################

knitr::include_graphics("_main_files/figure-html/Figure 11.12.jpg")

# jpeg ("_main_files/figure-html/Figure 11.12.jpg",4800,1650,res=600)
# # y = (-b*x - a - z) / (dx+c)
# # fixef(model_gender_vtl_f0)
# 
# tmp = bmmb::exp_data
# tmp = tmp[tmp$R=='a',]
# 
# tmp$vtl_original = tmp$vtl
# mu_vtl = mean (tmp$vtl_original)
# tmp$vtl = tmp$vtl - mean (tmp$vtl)
# 
# tmp$f0_original = tmp$f0
# mu_f0 = mean (tmp$f0_original)
# tmp$f0 = tmp$f0 - mean(tmp$f0)
# tmp$f0 = tmp$f0 / 100
# 
# aggd = aggregate (cbind ( height, A=="a", G=="f", vtl,f0, vtl) ~ S + C_v,
#                       data = tmp, FUN = mean)
# aggd$C = ""
# aggd$C[aggd[,4]>= 0.5 & aggd[,5]>= 0.5] = "w"
# aggd$C[aggd[,4]>= 0.5 & aggd[,5]<= 0.5] = "m"
# aggd$C[aggd[,4]<= 0.5 & aggd[,5]>= 0.5] = "g"
# aggd$C[aggd[,4]<= 0.5 & aggd[,5]<= 0.5] = "b"
# #table(aggd$C)
# 
# tab = table (tmp$S, tmp$C)
# mod_cat = apply (tab, 1,which.max)
# 
# par (mfrow = c(1,3), mar = c(4,.25,.5,.25), oma = c(0,4.5,0,0))
# 
# plot (aggd$vtl,aggd$f0, cex =1.2, col = bmmb::cols[c(2:5)][factor(aggd$C)],
#       pch=16,lwd=2, xlab = "",ylab="Height (inches)")
# grid()
# points (aggd$vtl, aggd$f0, cex =1.2, pch=16,lwd=2,
#       col = bmmb::cols[c(2:5)][factor(aggd$C)])
# 
# curve ((-b_all*x - a_all) / (d_all*x+c_all), from = -6, to = -c_all/d_all,
#        add = TRUE,lwd=2,col=bmmb::cols[7])
# curve ((-b_all*x - a_all) / (d_all*x+c_all), from = -c_all/d_all,
#        to = 6, add = TRUE,lwd=2,col=bmmb::cols[7])
# 
# curve ((-b_adult*x - a_adult) / (d_adult*x+c_adult), from = -6,
#        to = -c_adult/d_adult-.05, add = TRUE,
#        lwd=2, lty=2,col=bmmb::cols[10])
# curve ((-b_adult*x - a_adult) / (d_adult*x+c_adult), from = -c_adult/d_adult+.05,
#        to = 6, add = TRUE,
#        lwd=2, lty=2,col=bmmb::cols[10])
# 
# curve ((-b_child*x - a_child) / (d_child*x+c_child), from = -6,
#        to = -c_child/d_child, add = TRUE,
#        lwd=2, lty=2,col=bmmb::cols[8])
# curve ((-b_child*x - a_child) / (d_child*x+c_child), from = -c_child/d_child,
#        to = 6, add = TRUE,
#        lwd=2, lty=2,col=bmmb::cols[8])
# 
# plot (aggd$vtl,aggd$f0, cex =1.2, col = bmmb::cols[c(2:5)][factor(aggd$C)],
#       pch=16,lwd=2, xlab = "",ylab="Height (inches)",xlim = c(-4,4),ylim = c(-2,2),
#       yaxt='n')
# grid()
# points (aggd$vtl, aggd$f0, cex =1.2, pch=16,lwd=2,
#       col = bmmb::cols[c(2:5)][factor(aggd$C)])
# 
# curve ((-b_all*x - a_all) / (d_all*x+c_all), from = -6, to = -c_all/d_all, add = TRUE,lwd=2,col=bmmb::cols[7])
# curve ((-b_all*x - a_all) / (d_all*x+c_all), from = -c_all/d_all, to = 6, add = TRUE,lwd=2,col=bmmb::cols[7])
# 
# curve ((-b_adult*x - a_adult) / (d_adult*x+c_adult), from = -6,
#        to = -c_adult/d_adult-.05, add = TRUE,
#        lwd=2, lty=2,col=bmmb::cols[10])
# curve ((-b_adult*x - a_adult) / (d_adult*x+c_adult), from = -c_adult/d_adult+.05,
#        to = 6, add = TRUE,
#        lwd=2, lty=2,col=bmmb::cols[10])
# 
# curve ((-b_child*x - a_child) / (d_child*x+c_child), from = -6,
#        to = -c_child/d_child, add = TRUE,
#        lwd=2, lty=2,col=bmmb::cols[8])
# curve ((-b_child*x - a_child) / (d_child*x+c_child), from = -c_child/d_child,
#        to = 6, add = TRUE,
#        lwd=2, lty=2,col=bmmb::cols[8])
# 
# plot (aggd$vtl,aggd$f0, cex =1.2, col = bmmb::cols[c(2:5)][factor(aggd$C)],
#       xlim=c(-4,4), ylim = c(-2,2),  pch=16,lwd=2, xlab = "",yaxt='n',
#       ylab="Height (inches)")
# grid()
# 
# rect (-5,-3,5,3,col=2)
# 
# x = seq(-c_adult/d_adult+.05,5,0.1)
# y = (-b_adult*x - a_adult) / (d_adult*x+c_adult)
# polygon (c(-5,x,5),c(-3,y,-3),col="seagreen")
# 
# x = seq(-5 , -c_adult/d_adult-.05,0.1)
# y = (-b_adult*x - a_adult) / (d_adult*x+c_adult)
# polygon (c(-5,x,5),c(3,y,3),col="seagreen")
# 
# points (aggd$vtl, aggd$f0, cex =1.2, pch=16,lwd=2,
#       col = 0)
# 
# mtext (side=1, outer = TRUE,line=-1, "Centered vocal-tract length (cm)", cex=0.9)
# mtext (side=2, outer = TRUE,line=3, "Centered f0 (Hz) / 100",cex=0.9)
# dev.off()

################################################################################
### Figure 11.13
################################################################################


knitr::include_graphics("_main_files/figure-html/Figure 11.13.jpg")

# jpeg ("_main_files/figure-html/Figure 11.13.jpg",4800,1650,res=600)
# 
# fixef_1 = brms::fixef (model_gender_vtl_f0)
# fixef_2 = brms::fixef (model_gender_vtl_f0_reduced)
# 
# # y = (-b*x - a - z) / (dx+c)
# # fixef(model_gender_vtl_f0)
# 
# tmp = bmmb::exp_data
# tmp = tmp[tmp$R=='a',]
# 
# tmp$vtl_original = tmp$vtl
# mu_vtl = mean (tmp$vtl_original)
# tmp$vtl = tmp$vtl - mean (tmp$vtl)
# 
# tmp$f0_original = tmp$f0
# mu_f0 = mean (tmp$f0_original)
# tmp$f0 = tmp$f0 - mean(tmp$f0)
# tmp$f0 = tmp$f0 / 100
# 
# aggd = aggregate (cbind ( height, A=="a", G=="f", vtl,f0, vtl) ~ S + C_v,
#                       data = tmp, FUN = mean)
# aggd$C = ""
# aggd$C[aggd[,4]>= 0.5 & aggd[,5]>= 0.5] = "w"
# aggd$C[aggd[,4]>= 0.5 & aggd[,5]<= 0.5] = "m"
# aggd$C[aggd[,4]<= 0.5 & aggd[,5]>= 0.5] = "g"
# aggd$C[aggd[,4]<= 0.5 & aggd[,5]<= 0.5] = "b"
# #table(aggd$C)
# 
# tab = table (tmp$S, tmp$C)
# mod_cat = apply (tab, 1,which.max)
# 
# par (mfrow = c(1,3), mar = c(4,.25,.5,.25), oma = c(0,4.5,0,0))
# 
# plot (aggd$vtl,aggd$f0, cex =1.2, col = bmmb::cols[c(2:5)][factor(aggd$C)],
#       xlim=c(-2.5,3),  pch=16,lwd=2, xlab = "",
#       ylab="Height (inches)")
# grid()
# points (aggd$vtl, aggd$f0, cex =1.2, pch=16,lwd=2,
#       col = bmmb::cols[c(2:5)][aggd$C])
# 
# legend (1,300, legend = c("Boys","Girls","Men","Women"),lwd=2,lty=0,
#         col = bmmb::cols[2:5], bty='n',pch=16,pt.cex=1.5)
# 
# 
# curve ((-b_all*x - a_all) / (c_all), from = -3, to = 3, add = TRUE,lwd=2,col=bmmb::cols[7])
# curve ((-b_adult*x - a_adult)/ (c_adult), from = -3, to = 3, add = TRUE,lwd=2, lty=2,col=bmmb::cols[10])
# curve ((-b_child*x - a_child)/ (c_child), from = -3, to = 3, add = TRUE,lwd=2, lty=2,col=bmmb::cols[8])
# 
# plot (aggd$vtl,aggd$f0, cex =1.2, col = bmmb::cols[c(2:5)][factor(aggd$C)],
#       xlim=c(-2.5,3),  pch=16,lwd=2, xlab = "",yaxt='n',
#       ylab="Height (inches)")
# grid()
# points (aggd$vtl, aggd$f0, cex =1.2, pch=16,lwd=2,
#       col = bmmb::cols[c(2:5)][aggd$C])
# 
# legend (1,300, legend = c("Boys","Girls","Men","Women"),lwd=2,lty=0,
#         col = bmmb::cols[2:5], bty='n',pch=16,pt.cex=1.5)
# 
# curve ((-b_all_reduced*x - a_all_reduced) / (c_all_reduced), from = -3, to = 3, add = TRUE,lwd=2,col=bmmb::cols[7])
# curve ((-b_adult_reduced*x - a_adult_reduced)/ (c_adult_reduced), from = -3, to = 3, add = TRUE,lwd=2, lty=2,col=bmmb::cols[10])
# curve ((-b_child_reduced*x - a_child_reduced)/ (c_child_reduced), from = -3, to = 3, add = TRUE,lwd=2, lty=2,col=bmmb::cols[8])
# 
# 
# plot (aggd$vtl,aggd$f0, cex =1.2, col = bmmb::cols[c(2:5)][factor(aggd$C)],
#       xlim=c(-2.5,3),  pch=16,lwd=2, xlab = "",yaxt='n',
#       ylab="Height (inches)")
# grid()
# 
# xlim = par()$usr[1:2]
# ylim = par()$usr[3:4]
# 
# x = seq(-3,4,0.1)
# y = (-b_adult_reduced*x - a_adult_reduced)/ (c_adult_reduced)
# polygon (c(-3,x,4),c(-3,y,-3),col="seagreen")
# 
# y = (-b_adult_reduced*x - a_adult_reduced)/ (c_adult_reduced)
# polygon (c(-3,x,1),c(3,y,3),col=2)
# 
# points (aggd$vtl, aggd$f0, cex =1.2, pch=16,lwd=2,
#       col = 0)
# 
# mtext (side=1, outer = TRUE,line=-1, "Centered vocal-tract length (cm)", cex=0.9)
# mtext (side=2, outer = TRUE,line=3, "Centered f0 (Hz) / 100",cex=0.9, adj=.65)
# dev.off()

################################################################################
### Figure 11.14
################################################################################
point_col = tapply (exp_data$C_v, exp_data$S,findmode)
point_col = as.numeric(factor(point_col))

par (mfrow = c(1,2), mar = c(1,.5,1,.5), oma = c(0,4,0,0))

plot (model_gender_vtl_f0$criteria$loo$diagnostics$pareto_k, ylab="", 
      ylim = c(0,1.5),col=bmmb::cols[1+point_col],pch=16, xaxt='n')
abline (h = c(0.5,.7,1), col = deeppurple, lwd = 2, lty=2)
plot (model_gender_vtl_f0_reduced$criteria$loo$diagnostics$pareto_k, ylab="", 
      ylim = c(0,1.5),col=bmmb::cols[1+point_col],pch=16, yaxt='n', xaxt='n')
abline (h = c(0.5,.7,1), col = deeppurple, lwd = 2, lty=2)
mtext (side = 2, "Pareto k", outer = TRUE, line= 2.5, cex = 1.2)

legend (10, 1.4, legend=(c("boy","girl","man","woman")), col = bmmb::cols[2:5],pch=16,
        horiz = TRUE, bty = 'n',x.intersp = .8)

################################################################################
### Figure 11.15
################################################################################

tmp = bmmb::exp_data
tmp = tmp[tmp$R=='a',]

tmp$vtl_original = tmp$vtl
mu_vtl = mean (tmp$vtl_original)
tmp$vtl = tmp$vtl - mean (tmp$vtl)

tmp$f0_original = tmp$f0 
mu_f0 = mean (tmp$f0_original)
tmp$f0 = tmp$f0 - mean(tmp$f0)
tmp$f0 = tmp$f0 / 100

point_col = tapply (tmp$C_v, tmp$S,findmode)
point_col = as.numeric(factor(point_col))

par (mfrow = c(1,3), mar = c(4,4,1,1))

plot (p_preds[,1], model_gender_vtl_f0$criteria$loo$diagnostics$pareto_k,
      col=bmmb::cols[1+point_col],pch=16, ylim = c(0,1.5),cex.lab=1.2,cex.axis=1.2,
      xlab="Expected Probbility of a Female Response",ylab="Pareto k")

legend (0.2,1.3,legend=c("boy","girl","man","woman"),bty='n',pch=16,
        col = bmmb::cols[2:5],cex=1.2)

aggd = aggregate (pareto_k ~ f0 + vtl + S + A_v, data = tmp, FUN = mean)
aggd = aggd[aggd$A_v=="a",]

tmp = tmp[tmp$A_v=="a",]
point_col = tapply (tmp$C_v, tmp$S,findmode)
point_col = as.numeric(factor(point_col))+2

plot(aggd[,2:1], cex = aggd$pareto_k*4, col = cols[1+point_col], pch=16,lwd=2,
     xlab = "Centered VTL (cm)", xlim = c(-2.2,3), cex.lab=1.2,cex.axis=1.2,
     ylim = c(-1.2,1.1), ylab = "Centered f0 (Hz)/100")

curve ((-b_adult*x - a_adult) / (d_adult*x+c_adult), from = -6, xlab="Centered VTL (cm)",
       ylab = "Centered f0 (Hz)/100", to = -c_adult/d_adult-.01, add = FALSE, lwd=2, lty=1,col=1, 
       xlim = c(-2.2,3), ylim = c(-1.2,1.1),cex.lab=1.2,cex.axis=1.2)
curve ((-b_adult*x - a_adult) / (d_adult*x+c_adult), from = -c_adult/d_adult+0.01, 
     to = 6, add = TRUE, lwd=2, lty=1,col=1)

colors = colorRampPalette(c("blue",bmmb::cols[4],bmmb::darkorange))(13)
for (i in seq(-62,-2,2)){
  curve ((-b_adult*x - a_adult - i) / (d_adult*x+c_adult), from = -6, 
         to = -c_adult/d_adult-0.01, add = TRUE, lwd=1, lty=1,col=bmmb::cols[5])
  curve ((-b_adult*x - a_adult - i) / (d_adult*x+c_adult), from = -c_adult/d_adult+0.01, 
       to = 6, add = TRUE, lwd=1, lty=1,col=bmmb::cols[5])
}

for (i in seq(2,62,2)){
  curve ((-b_adult*x - a_adult - i) / (d_adult*x+c_adult), from = -6, 
         to = -c_adult/d_adult-0.01, add = TRUE, lwd=1, lty=1,col=bmmb::cols[4])
  curve ((-b_adult*x - a_adult - i) / (d_adult*x+c_adult), from = -c_adult/d_adult+0.01, 
       to = 6, add = TRUE, lwd=1, lty=1,col=bmmb::cols[4])
  
}
points(aggd[,2:1], cex = 1, col = "grey40", pch=16,lwd=2)

################################################################################
### Figure 12.1
###############################################################################

x = seq (-3,3,0.1)

z1 = 0 + x*0
z2 = -3 + x*-3
z3 =-3 + x*3

scores = cbind (z1,z2,z3)

softmax = function (scores){
  ps = scores*0
  for (i in 1:nrow (scores)){
    ps[i,] = exp(scores[i,]) / sum(exp(scores[i,]))
  }
  ps
}

ps = softmax (scores)
ps2 = softmax (scores[,-3])

par (mfrow = c(2,2), mar=c(.2,4,.2,1), oma = c(4,1,.5,0))
plot (x, z2, type = 'l',col=4, ylim = c(-15,8),xlim=c(-3,3),xaxt='n',
      ylab = "Logit",lwd=3,cex.lab=1.3,xaxs='i')
grid()
lines (x,z1,col=1,lwd=3)

plot (x, ps2[,2], type = 'l',col=4, ylim = c(0,1),xlim=c(-3,3),xaxt='n',
      ylab = "P(Y=C)",lwd=3,cex.lab=1.3,xaxs='i')
grid()
lines (x,ps2[,1],col=1, lty=2,lwd=3)

plot (x, z2, type = 'l',col=4, ylim = c(-15,8),xlim=c(-3,3),
      ylab = "Score",lwd=3,cex.lab=1.3,xaxs='i')
grid()
lines (x,z1,col=1,lwd=3)
lines (x,z3,col=2,lwd=3)
mtext (side=1, "Predictor", line=2.7,lwd=3)

plot (x, ps[,2], type = 'l',col=4, ylim = c(0,1),xlim=c(-3,3),
      ylab = "P(Y=C)",lwd=3,cex.lab=1.3,xaxs='i')
grid()
lines (x,ps[,1],col=1,lwd=3)
lines (x,ps[,3], col=2,lwd=3)
mtext (side=1, "Predictor", line =2.7)


################################################################################
### Figure 12.2
###############################################################################

params = fixef (model_multinomial)
params = fixef (model_multinomial, summary = FALSE)
params = cbind (colMeans (params[,1:3]), 
                colMeans (params[,c(4,6,8)]),
                colMeans (params[,c(5,7,9)]))

params = rbind (c(0,0,0), params[1:3,])

tab = table (exp_data$S, exp_data$C)
mod_cat = apply (tab, 1,which.max)

par (mfrow = c(2,3), mar = c(0.2,0.2,0.2,0.2), oma = c(4.5,4.5,0.5,0.5))

plot(model_multinomial$data$vtl,model_multinomial$data$f0, ylim = c(-1.25,1.25), 
     xlim = c(-2.5,3), pch = 16, col = bmmb::cols[2:5][mod_cat],cex=2,
     xaxt='n')
abline (find_intersection (params[1,], params[2,]),lwd=2, col = bmmb::cols[7]) # b vs g

text (1.5,1, "boy-girl", cex=1.5)

plot(model_multinomial$data$vtl,model_multinomial$data$f0, ylim = c(-1.25,1.25), 
     xlim = c(-2.5,3), pch = 16, col = bmmb::cols[2:5][mod_cat],cex=2,
     yaxt='n',xaxt='n')
abline (find_intersection (params[1,], params[3,]),lwd=2, col = bmmb::cols[8]) # b vs m

text (1.5,1, "boy-man", cex=1.5)

legend (-2.3,-0.00, legend=c("b","g","m","w"),col=bmmb::cols[2:5],
        pch=16,pt.cex=1.75,bty='n', cex=1.20)

plot(model_multinomial$data$vtl,model_multinomial$data$f0, ylim = c(-1.25,1.25), 
     xlim = c(-2.5,3), pch = 16, col = bmmb::cols[2:5][mod_cat],cex=2,
     yaxt='n',xaxt='n')
abline (find_intersection (params[1,], params[4,]),lwd=2, col = bmmb::cols[9]) # b vs w

text (1.5,1, "boy-woman", cex=1.5)

plot(model_multinomial$data$vtl,model_multinomial$data$f0, ylim = c(-1.25,1.25), 
     xlim = c(-2.5,3), pch = 16, col = bmmb::cols[2:5][mod_cat],cex=2)
abline (find_intersection (params[2,], params[3,]),lwd=2, col = bmmb::cols[10]) # g vs m

text (1.5,1, "girl-man", cex=1.5)

plot(model_multinomial$data$vtl,model_multinomial$data$f0, ylim = c(-1.25,1.25), 
     xlim = c(-2.5,3), pch = 16, col = bmmb::cols[2:5][mod_cat],cex=2,
     yaxt='n')
abline (find_intersection (params[2,], params[4,]),lwd=2, col = bmmb::cols[13]) # g vs w

text (1.5,1, "girl-woman", cex=1.5)

plot(model_multinomial$data$vtl,model_multinomial$data$f0, ylim = c(-1.25,1.25), 
     xlim = c(-2.5,3), pch = 16, col = bmmb::cols[2:5][mod_cat],cex=2,
     yaxt='n')
abline (find_intersection (params[3,], params[4,]),lwd=2, col = bmmb::cols[14]) # m vs w

text (1.5,1, "man-woman", cex=1.5)

mtext (side=1, "Centered vocal tract length (cm)", outer = TRUE, line=3)
mtext (side=2, "Centered f0 / 100 (Hz)", outer = TRUE, line=3)



################################################################################
### Figure 12.3
################################################################################


params = fixef (model_multinomial)
params = fixef (model_multinomial, summary = FALSE)
params = cbind (colMeans (params[,1:3]), 
                colMeans (params[,c(4,6,8)]),
                colMeans (params[,c(5,7,9)]))

params = rbind (c(0,0,0), params[1:3,])

tab = table (exp_data$S, exp_data$C)
mod_cat = apply (tab, 1,which.max)

par (mfrow = c(1,2), mar = c(.0,.0,.21,.2),oma=c(4,4,.1,.1))

plot(model_multinomial$data$vtl,model_multinomial$data$f0, ylim = c(-1.25,1.25), 
     xlim = c(-2.5,3), pch = 16, col = bmmb::cols[2:5][mod_cat])
abline (find_intersection (params[1,], params[2,]),lwd=3, col = bmmb::cols[7])
abline (find_intersection (params[1,], params[3,]),lwd=3, col = bmmb::cols[8])
abline (find_intersection (params[1,], params[4,]),lwd=3, col = bmmb::cols[9])
abline (find_intersection (params[2,], params[3,]),lwd=3, col = bmmb::cols[10])
abline (find_intersection (params[2,], params[4,]),lwd=3, col = bmmb::cols[13])
abline (find_intersection (params[3,], params[4,]),lwd=3, col = bmmb::cols[14])

points(model_multinomial$data$vtl,model_multinomial$data$f0, pch = 16, col = bmmb::cols[2:5][mod_cat], cex = 2)
#points(model_multinomial$data$vtl,model_multinomial$data$f0, pch = 1, col = 1,
#     lwd=2, cex=2)

plot(model_multinomial$data$vtl,model_multinomial$data$f0, ylim = c(-1.25,1.25), 
     xlim = c(-2.5,3), pch = 16, col = bmmb::cols[2:5][mod_cat],ylab="",yaxt="n")

maps = make_map (t(params))

plot_map (maps, colors = adjustcolor (bmmb::cols[2:5], alpha.f=.2), new=FALSE)

points(model_multinomial$data$vtl,model_multinomial$data$f0, pch = 16, col = bmmb::cols[2:5][mod_cat], cex = 2)
#points(model_multinomial$data$vtl,model_multinomial$data$f0, pch = 1, col = 1,
#     lwd=2, cex=2)

mtext (side=1, "Centered vocal tract length (cm)", outer = TRUE, line=2.5)
mtext (side=2, "Centered f0 / 100 (Hz)", outer = TRUE, line=2.5)

legend (-0.5,1.3, legend=c("b","g","m","w"),col=bmmb::cols[2:5],
        pch=16,pt.cex=1.25,bty='n', cex=1.0, horiz = TRUE)
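
# make_map()/plot_map() (helpers loaded earlier) appear to partition the
# vtl x f0 plane into the model's classification territories: each region is
# shaded with the color of the category that has the highest predicted
# probability there, so the boundary lines in the left panel and the shaded
# regions in the right panel convey the same information.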

################################################################################
### Figure 12.4
################################################################################


params = fixef (model_multinomial, summary = FALSE)
params = cbind (colMeans (params[,1:3]), 
                colMeans (params[,c(4,6,8)]),
                colMeans (params[,c(5,7,9)]))

params = rbind (c(0,0,0), params[1:3,])

tab = table (exp_data$S, exp_data$C)
mod_cat = apply (tab, 1,which.max)

cats = apply(multi_pred[,1,],1,which.max)
#table (exp_data$C, cats)
tab2 = table (exp_data$S, cats)
mod_cat2 = apply (tab2, 1,which.max)

cats = apply(multi_pred_re[,1,],1,which.max)
#table (exp_data$C, cats)
tab2 = table (exp_data$S, cats)
mod_cat3 = apply (tab2, 1,which.max)
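
# mod_cat is each speaker's modal category according to the listeners'
# responses; mod_cat2 and mod_cat3 are the analogous modal categories based on
# the model's posterior predictions, presumably without (multi_pred) and with
# (multi_pred_re) the random effects, matching the panel titles below.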


par (mfrow = c(1,3), mar = c(.0,.0,.21,.2),oma=c(4,4,2,.1))

plot(model_multinomial$data$vtl,model_multinomial$data$f0, ylim = c(-1.25,1.25), 
     xlim = c(-2.5,3), pch = 16, col = bmmb::cols[2:5][mod_cat],ylab="")
maps = make_map (t(params))
mtext (side=3, text = "Listener Judgments", line = 0.5)

plot_map (maps, colors = adjustcolor (bmmb::cols[2:5], alpha.f=.2), new=FALSE)
points(model_multinomial$data$vtl,model_multinomial$data$f0, pch = 16, col = bmmb::cols[2:5][mod_cat], cex = 2)

legend (-0.9,1.3, legend=c("b","g","m","w"),col=bmmb::cols[2:5],
        pch=16,pt.cex=1.5,bty='n', cex=1.3, horiz = TRUE)

plot(model_multinomial$data$vtl,model_multinomial$data$f0, ylim = c(-1.25,1.25), 
     xlim = c(-2.5,3), pch = 16, col = bmmb::cols[2:5][mod_cat2],ylab="",yaxt="n")
plot_map (maps, colors = adjustcolor (bmmb::cols[2:5], alpha.f=.2), new=FALSE)
points(model_multinomial$data$vtl,model_multinomial$data$f0, pch = 16, col = bmmb::cols[2:5][mod_cat3], cex = 2)
mtext (side=3, text = "Model Predictions (with RE)", line = 0.5)

plot(model_multinomial$data$vtl,model_multinomial$data$f0, ylim = c(-1.25,1.25), 
     xlim = c(-2.5,3), pch = 16, col = bmmb::cols[2:5][mod_cat3],ylab="",yaxt="n")
plot_map (maps, colors = adjustcolor (bmmb::cols[2:5], alpha.f=.2), new=FALSE)
points(model_multinomial$data$vtl,model_multinomial$data$f0, pch = 16, col = bmmb::cols[2:5][mod_cat2], cex = 2)
mtext (side=3, text = "Model Predictions (no RE)", line = 0.5)

mtext (side=1, "Centered vocal tract length (cm)", outer = TRUE, line=2.5)
mtext (side=2, "Centered f0 / 100 (Hz)", outer = TRUE, line=2.5)


################################################################################
### Figure 12.5
################################################################################

sranef = ranef(model_multinomial)$S

tab = table (exp_data$S, exp_data$C_v)
real_cat = apply (tab, 1,which.max)
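
# sranef holds the by-speaker random effects; as labeled below, the first three
# terms are the speaker intercepts for the girl, man and woman categories
# ('boy' is the reference). real_cat is each speaker's veridical category
# (from C_v) and is used to color the points.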

par (mfrow = c(3,1), mar =c(.1,4,.1,.5), oma = c(.5,0,.5,0))
#brmplot((sranef[,,1]+sranef[,,2]+sranef[,,3])/3, labels = '', 
#        col = bmmb::cols[real_cat+1], ylim = c(-8,8))
#text (1,7,"Boy category speaker intercepts", pos=4,cex=1.4)

brmplot(sranef[,,1], labels = '', col = bmmb::cols[real_cat+1], ylim = c(-8,8))
text (1,7,"Girl category speaker intercepts", pos=4,cex=1.4)

legend (1,-4.5, legend=c("b","g","m","w"),col=bmmb::cols[2:5],
        pch=16,pt.cex=1.5,bty='n', cex=1.1, horiz = TRUE)

brmplot(sranef[,,2], labels = '', col = bmmb::cols[real_cat+1], ylim = c(-8,8))
text (1,7,"Man category speaker intercepts", pos=4,cex=1.4)

brmplot(sranef[,,3], labels = '', col = bmmb::cols[real_cat+1], ylim = c(-8,8))
text (1,7,"Woman category speaker intercepts", pos=4,cex=1.4)


################################################################################
### Figure 12.6
################################################################################


params = fixef (model_multinomial_noS, summary = FALSE)
params = cbind (colMeans (params[,1:3]), 
                colMeans (params[,c(4,6,8)]),
                colMeans (params[,c(5,7,9)]))

params = rbind (c(0,0,0), params[1:3,])

tab = table (exp_data$S, exp_data$C)
mod_cat = apply (tab, 1,which.max)

cats = apply(multi_pred_noS[,1,],1,which.max)
#table (exp_data$C, cats)
tab2 = table (exp_data$S, cats)
mod_cat2 = apply (tab2, 1,which.max)

cats = apply(multi_pred_re_noS[,1,],1,which.max)
#table (exp_data$C, cats)
tab2 = table (exp_data$S, cats)
mod_cat3 = apply (tab2, 1,which.max)


par (mfrow = c(1,3), mar = c(.0,.0,.21,.2),oma=c(4,4,2,.1))

plot(model_multinomial$data$vtl,model_multinomial$data$f0, ylim = c(-1.25,1.25), 
     xlim = c(-2.5,3), pch = 16, col = bmmb::cols[2:5][mod_cat],ylab="")

maps_noS = make_map (t(params))

mtext (side=3, text = "Listener Judgments", line = 0.5)

legend (-2.5,-0.15, legend=c("b","g","m","w"),col=bmmb::cols[2:5],
        pch=16,pt.cex=1.5,bty='n', cex=1.15)

plot_map (maps_noS, colors = adjustcolor (bmmb::cols[2:5], alpha.f=.2), new=FALSE)
points(model_multinomial$data$vtl,model_multinomial$data$f0, pch = 16, col = bmmb::cols[2:5][mod_cat], cex = 2)

plot(model_multinomial$data$vtl,model_multinomial$data$f0, ylim = c(-1.25,1.25), 
     xlim = c(-2.5,3), pch = 16, col = bmmb::cols[2:5][mod_cat2],ylab="",yaxt="n")
plot_map (maps_noS, colors = adjustcolor (bmmb::cols[2:5], alpha.f=.2), new=FALSE)
points(model_multinomial$data$vtl,model_multinomial$data$f0, pch = 16, col = bmmb::cols[2:5][mod_cat3], cex = 2)
mtext (side=3, text = "Model Predictions (with RE)", line = 0.5)

plot(model_multinomial$data$vtl,model_multinomial$data$f0, ylim = c(-1.25,1.25), 
     xlim = c(-2.5,3), pch = 16, col = bmmb::cols[2:5][mod_cat3],ylab="",yaxt="n")
plot_map (maps_noS, colors = adjustcolor (bmmb::cols[2:5], alpha.f=.2), new=FALSE)
points(model_multinomial$data$vtl,model_multinomial$data$f0, pch = 16, col = bmmb::cols[2:5][mod_cat2], cex = 2)
mtext (side=3, text = "Model Predictions (no RE)", line = 0.5)

mtext (side=1, "Centered vocal tract length (cm)", outer = TRUE, line=2.5)
mtext (side=2, "Centered f0 / 100 (Hz)", outer = TRUE, line=2.5)


################################################################################
### Figure 12.7
################################################################################


params = fixef (model_multinomial_noS, summary = FALSE)
params = cbind (colMeans (params[,1:3]), 
                colMeans (params[,c(4,6,8)]),
                colMeans (params[,c(5,7,9)]))

params = rbind (c(0,0,0), params[1:3,])

tab = table (exp_data$S, exp_data$C)
mod_cat = apply (tab, 1,which.max)

cats = apply(multi_pred_noS[,1,],1,which.max)
tab2 = table (exp_data$S, cats)
mod_cat2 = apply (tab2, 1,which.max)

cats = apply(multi_pred_re_noS[,1,],1,which.max)
tab2 = table (exp_data$S, cats)
mod_cat3 = apply (tab2, 1,which.max)


par (mfrow = c(1,2), mar = c(.0,.0,.0,.2),oma=c(3.5,3.5,2,.1))

plot(model_multinomial$data$vtl,model_multinomial$data$f0, ylim = c(-1.25,1.25), 
     xlim = c(-2.5,3), pch = 16, col = bmmb::cols[2:5][mod_cat],ylab="")

#maps_noS = make_map (t(params))

mtext (side=3, text = "model_multinomial", line = 0.5)

legend (-0.5,1.3, legend=c("b","g","m","w"),col=bmmb::cols[2:5],
        pch=16,pt.cex=1.25,bty='n', cex=1.0, horiz = TRUE)

plot_map (maps, colors = adjustcolor (bmmb::cols[2:5], alpha.f=.2), new=FALSE)
points(model_multinomial$data$vtl,model_multinomial$data$f0, pch = 16, col = bmmb::cols[2:5][mod_cat], cex = 2)

plot(model_multinomial$data$vtl,model_multinomial$data$f0, ylim = c(-1.25,1.25), 
     xlim = c(-2.5,3), pch = 16, col = bmmb::cols[2:5][mod_cat2],ylab="",yaxt="n")
plot_map (maps_noS, colors = adjustcolor (bmmb::cols[2:5], alpha.f=.2), new=FALSE)
points(model_multinomial$data$vtl,model_multinomial$data$f0, pch = 16, col = bmmb::cols[2:5][mod_cat], cex = 2)
mtext (side=3, text = "model_multinomial_noS", line = 0.5)

mtext (side=1, "Centered vocal tract length (cm)", outer = TRUE, line=2.5)
mtext (side=2, "Centered f0 / 100 (Hz)", outer = TRUE, line=2.5)

################################################################################
### Figure 12.8
################################################################################

par (mar = c(4,8,1,1))
brmplot (fixef(model_multinomial)[c(1,4,5,2,6,7,3,8,9),], las = 2,
         horizontal = FALSE,col = 0, xlim = c(-5,6), xlab = "Coefficients")
abline (h = seq(1,10),lty=3,col="grey")
brmplot (fixef(model_multinomial)[c(1,4,5,2,6,7,3,8,9),], add = TRUE,
         nudge = -0.2, labels = "", horizontal = FALSE, pch=16,
         col = rep(bmmb::cols[3:5],each=3))
brmplot (fixef(model_multinomial_noS)[c(1,4,5,2,6,7,3,8,9),], add = TRUE,
         nudge = 0.2, labels = "", horizontal = FALSE, pch=17,
         col = rep(bmmb::cols[3:5],each=3))

legend (3,9.5,legend =c("With Speaker","No Speaker"),bty='n',
        pch=16:17,pt.cex=1.3)

################################################################################
### Figure 12.9
################################################################################

# size group: 1 = children (boys and girls), 2 = women, 3 = men
SG = rep (0, nrow (exp_data))
SG[exp_data$C=='m'] = 3
SG[exp_data$C=='w'] = 2
SG[exp_data$C=='g'] = 1
SG[exp_data$C=='b'] = 1


x = seq (100,200,.1)
y = seq (100,200,.1)

par (mfrow = c(1,2), mar = c(.1,.1,.1,.1), oma = c(4,4,.5,.5))

boxplot (height ~ SG, data = exp_data,col=bmmb::cols[c(1,11,10)],ylim = c(100,200),
         xlab = "Apparent Speaker Category",names=c("Child","Woman","Man"))

tmp = aggregate (height ~ SG, data = exp_data, FUN = mean)
abline (h = mean (tmp[1:2,2]), lwd = 2, col = bmmb::cols[15])
abline (h = mean (tmp[2:3,2]), lwd = 2, col = bmmb::cols[15])
boxplot (height ~ SG, data = exp_data,col=bmmb::cols[c(1,11,10)],
         add = TRUE,xlab = "",xaxt='n')
mtext (side=1,text="Size Group", line = 2.5)
mtext (side=2,text="Apparent Height (cm)", line = 2.75)

plot (x,y, yaxt='n',type = 'l', xlim = c(120,190),ylim = c(100,200),xaxt='n')
rect (100,90,200,mean(tmp[1:2,2]), col = bmmb::cols[1],border=bmmb::cols[1])
rect (100,mean(tmp[1:2,2]),200,mean(tmp[2:3,2]), col = bmmb::cols[11],
      border=bmmb::cols[11])
rect (100,mean(tmp[2:3,2]),200,400, col = bmmb::cols[10],border=bmmb::cols[10])
lines (x,y, lwd=4, col = bmmb::cols[2])

points (c(140,160,180),c(140,160,180),pch=16,col=bmmb::cols[2],cex=2)
abline (h = mean (tmp[1:2,2]), lwd = 6, col = bmmb::cols[15])
abline (h = mean (tmp[2:3,2]), lwd = 6, col = bmmb::cols[15])

text (118,c(140,163,195), c("Small", "Medium", "Large"),cex=1.1,pos=4)
axis (side=1, at = seq(110,190,length.out=6), labels = 11:16)
mtext (side=1,text="Vocal tract length (cm)", line = 2.75)

################################################################################
### Figure 12.10
################################################################################

tmp = aggregate (height ~ SG, data = exp_data, FUN = mean)

x = seq (-35,35,.1)
y = dlogis(x,0,5)

x_2 = seq (100,230,.1)
y_2 = x_2


par (mfrow = c(1,1), mar = c(4,4,.1,4), oma = c(.5,.5,.51,.51))
#layout (mat = t(c(1,2)), widths = c(.6,.4))

plot (x,y, yaxt='s',type = 'l', xlim = c(120,190),ylim = c(105,215),
      xlab = "Vocal-tract length (cm)",xaxt='n',ylab="Apparent Height (cm)")
rect (100,90,200,mean(tmp[1:2,2]), col = bmmb::cols[1],border=bmmb::cols[1])
rect (100,mean(tmp[1:2,2]),200,mean(tmp[2:3,2]), col = bmmb::cols[11],border=bmmb::cols[11])
rect (100,mean(tmp[2:3,2]),200,400, col = bmmb::cols[10],border=bmmb::cols[10])
lines (x_2,y_2, lwd=4, col = bmmb::cols[2])


axis (side=4, at = seq(120,200,20), labels = seq(120,200,20)/5)

# draw a sideways logistic density, a vertical reference line, and a point at
# each of three vocal-tract length values
for (spot in c(140, 160, 180)){
  lines (-y*150+spot, x+spot, lwd=2)
  lines (rep(spot,length(x)), x+spot, lwd=2)
  points (spot, spot, pch=16, col=bmmb::cols[2], cex=2)
}

text (118,c(140,163,195), c("Small", "Medium", "Large"),cex=1.1, pos=4)

abline (h = mean (tmp[1:2,2]), lwd = 5, col = bmmb::cols[15])
abline (h = mean (tmp[2:3,2]), lwd = 5, col = bmmb::cols[15])

axis (side=1, at = seq(110,190,length.out=6), labels = 11:16)
lines (x,y, lwd=4, col = bmmb::cols[2])

mtext (side=4, "Bigness", line=2.2)

################################################################################
### Figure 12.11
################################################################################

par (mfcol = c(2,3), mar = c(.1,.2,.11,.2), oma = c(2.5,4,3,.5))

x = seq(-5,5,.01)
plot (x, dnorm (x), type = 'l', xaxt='n', xaxs='i',lwd=4,col=bmmb::cols[5],yaxt='n')
mtext (side=2,"Density", line=2.5)
mtext (side=3,"Standard Normal", line=1)
abline (v = 0,lty=3,col="grey",lwd=3)
plot (x, pnorm (x), type = 'l', xaxs='i',lwd=4,col=bmmb::cols[5])
mtext (side=2,"Cumulative Probability", line=2.5)
segments (-6,.5,0,.5,lty=3,col="grey",lwd=3)
segments (0,-5,0,.5,lty=3,col="grey",lwd=3)
text (0.1,.2,"x=0", cex = 1.5,pos=4)
text (-3,.6,"y=0.5", cex = 1.5,pos=4)

x = seq(-10,10,.01)
plot (x, dlogis (x), type = 'l', xaxt='n', yaxt='n', xaxs='i',lwd=5,col=bmmb::cols[2])
mtext (side=3,"Logistic", line=1)
abline (v = 2,lty=3,col="grey",lwd=3)
plot (x, plogis (x), type = 'l', yaxt='n', xaxs='i',lwd=5,col=bmmb::cols[2])
segments (-16,.88,2,.88,lty=3,col="grey",lwd=3)
segments (2,-5,2,.88,lty=3,col="grey",lwd=3)
text (2.1,.2,"x=2", cex = 1.5,pos=4)
text (-7,.8,"y=0.88", cex = 1.5,pos=4)

x = seq(-10,10,.01)
plot (x, dlogis (x), type = 'l', xaxt='n', yaxt='n', xaxs='i',lwd=5,col=bmmb::cols[1])
mtext (side=3,"Logistic", line=1)
abline (v = c(2,-1),lty=3,col="grey",lwd=3)
plot (x, plogis (x), type = 'l', yaxt='n', xaxs='i',lwd=5,col=bmmb::cols[1])
segments (-16,.88,2,.88,lty=3,col="grey",lwd=3)
segments (-16,.27,-1,.27,lty=3,col="grey",lwd=3)
segments (2,-5,2,.88,lty=3,col="grey",lwd=3)
segments (-1,-5,-1,.27,lty=3,col="grey",lwd=3)
text (2.1,.2,"x=2", cex = 1.5,pos=4)
text (-7,.8,"y=0.88", cex = 1.5,pos=4)

text (-5.11,.1,"x= -1", cex = 1.5,pos=4)
text (-7,.35,"y=0.27", cex = 1.5,pos=4)
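
# The dashed guide lines above mark exact CDF values; the annotated y values
# can be checked directly with the corresponding distribution functions:
round (pnorm (0), 2)     # 0.50
round (plogis (2), 2)    # 0.88
round (plogis (-1), 2)   # 0.27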

################################################################################
### Figure 12.12
################################################################################

x = seq (20,44,.01)
y = dlogis(x,30,1)

x_2 = seq (-4,4.4,.05)
y_2 = x_2

par (mfrow = c(3,1), mar = c(.5,1,.5,1), oma = c(4,.5,.51,.51))

plot (x+3.2,y*2, type = 'n', xlim = c(22,42),ylim = c(0.02,.8),
      xlab = "",xaxt='n',ylab="",yaxt='n')
rect (1,0,31,1, col = bmmb::cols[1],border=bmmb::cols[1])
rect (31,0,34,1, col = bmmb::cols[11],border=bmmb::cols[11])
rect (34,0,50,1, col = bmmb::cols[10],border=bmmb::cols[10])
lines (x-2,y*2.5, xaxs='i',type='l',lwd=2)
abline (v = c(31,34),lwd=5,col=bmmb::cols[15])
abline (v = 28, lty=3)

text (25, .45, "P(SG=1)=0.95", cex=1.1)
text (32.5, .3, "P(SG=2)=0.04", cex=1.1)
text (40, .3, "P(SG=3)=0.01", cex=1.1)

plot (x+3.2,y*2, type = 'n', xlim = c(22,42),ylim = c(0.02,.8),
      xlab = "Bigness",xaxt='n',ylab="",yaxt='n')
rect (1,0,31,1, col = bmmb::cols[1],border=bmmb::cols[1])
rect (31,0,34,1, col = bmmb::cols[11],border=bmmb::cols[11])
rect (34,0,50,1, col = bmmb::cols[10],border=bmmb::cols[10])
lines (x+2,y*2.5, type = 'l', lwd=2)
abline (v = c(31,34),lwd=5,col=bmmb::cols[15])
abline (v = 32, lty=3)

text (25, .3, "P(SG=1)=0.26", cex=1.1)
text (32.5, .2, "P(SG=2)=0.61", cex=1.1)
text (40, .3, "P(SG=3)=0.12", cex=1.1)

plot (x+3.2,y*2, type = 'n', xlim = c(22,42),ylim = c(0.02,.8),
      xlab = "Bigness",xaxt='n',ylab="",yaxt='n')
rect (1,0,31,1, col = bmmb::cols[1],border=bmmb::cols[1])
rect (31,0,34,1, col = bmmb::cols[11],border=bmmb::cols[11])
rect (34,0,50,1, col = bmmb::cols[10],border=bmmb::cols[10])
axis (side=1, at = seq(24,40,4),cex.axis=1.2)
lines (x+6,y*2.5, type = 'l', lwd=2)
abline (v = c(31,34),lwd=5,col=bmmb::cols[15])
abline (v = 36, lty=3)

text (25, .3, "P(SG=1)=0.01", cex=1.1)
text (32.5, .35, "P(SG=2)=0.11", cex=1.1)
text (40, .3, "P(SG=3)=0.88", cex=1.1)

mtext (side=1,"Bigness", line=2.75)

################################################################################
### Figure 12.13
################################################################################

fixefs = fixef(model_ordinal_disc)
thresholds = fixefs[1:2,1]
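
# The first two fixed effects of the ordinal model are its thresholds
# (cutpoints) on the latent scale. listener_scale, computed earlier, presumably
# holds each listener's estimated logistic scale parameter, used below to draw
# listener-specific latent distributions.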

xaxts = c('n','n','n','n','n',
          'n','n','n','n','n',
          's','s','s','s','s')

ymaxs = c(.2,.2,.2,.2,.2,
          .2,.2,.2,.2,.2,
          .35,.35,.35,.35,.35)

par (mfrow = c(3,5), mar = c(.15,.2,.2,.2), oma = c(4,3,1,1))
i=1
for (i in 1:15){
  x = seq (-15,15,.1)
  y = dlogis (x,0,listener_scale[i,1])
  max_y = max(y)
  y = y/max_y
  plot (x, y, type = 'l',yaxt='n',xaxt='n', 
        yaxs = 'i',xaxs='i', ylim = c(0,1.4)) #ylim=c(0,ymaxs[i])
  
  if (xaxts[i]=='s') axis(side=1,at = seq(-12,12,4))
  x = seq (-15,thresholds[1],.1)
  y = dlogis (x,0,listener_scale[i,1])
  y = y/max_y
  y = c(0,y,0)
  x = c(x[1],x,x[length(x)])
  polygon (x,y,col=bmmb::cols[1],border=bmmb::cols[1])
  
  x = seq (thresholds[1],thresholds[2],.1)
  y = dlogis (x,0,listener_scale[i,1])
  y = y/max_y
  y = c(0,y,0)
  x = c(x[1],x,x[length(x)])
  polygon (x,y,col=bmmb::cols[11],border=bmmb::cols[11])
  
  x = seq (thresholds[2],15,.1)
  y = dlogis (x,0,listener_scale[i,1])
  y = y/max_y
  y = c(0,y,0)
  x = c(x[1],x,x[length(x)])
  polygon (x,y,col=bmmb::cols[10],border=bmmb::cols[10])
  
  x = seq (-15,15,.1)
  y = dlogis (x,0,listener_scale[i,1])
  y = y/max_y
  lines (x, y, lwd=2)
  abline (v = thresholds,col=bmmb::cols[15], lwd=2)
  
  p1 = plogis (thresholds[1],0,listener_scale[i,1])
  p3 = 1 - plogis (thresholds[2],0,listener_scale[i,1])
  p2 = (1-p3) - p1
  
  text (-8,1.2,round(p1,2))
  text (0,1.2,round(p2,2))
  text (8,1.2,round(p3,2))
}
mtext (side=1,"Centered vocal-tract length (cm)", outer = TRUE, line = 2.9, cex = 0.9)

################################################################################
### Figure 13.1
################################################################################

knitr::include_graphics("_main_files/figure-html/Figure 13.1.jpg")

# jpeg ("_main_files/figure-html/Figure 13.1.jpg",4800,1800,res=600)
# library (bmmb)
# data(height_data)
# data (lee_etal_data)
# 
# lee_etal_data$f0_original=lee_etal_data$f0
# lee_etal_data$f0 = log(lee_etal_data$f0_original)
# 
# aggdmu = aggregate (f0 ~ age+gender, FUN = mean, data = lee_etal_data)
# aggdmu[,3] = exp(aggdmu[,3])
# 
# lee_etal_data$f0 = log(lee_etal_data$f0sd/lee_etal_data$f0_original)
# aggdsd = aggregate (f0 ~ age+gender, FUN = sd, data = lee_etal_data)
# aggdsd[,3] = (exp(aggdsd[,3])-1)*aggdmu[,3]
# 
# aggdmuf0 = aggdmu
# aggdsdf0 = aggdsd
# 
# par (mfrow = c(1,3), mar = c(4.1,4.1,1,1))
# 
# plot (height_data$age[height_data$gender=="f"]-.1,
#       height_data$height[height_data$gender=="f"],
#       pch=16,col=2,lwd=2,cex=1.5, ylim = c(70,205),type='b', xlab="Age (years)",
#       ylab = "Height (cm)",xlim=c(2,21),cex.axis=1.3,cex.lab=1.3)
# lines (height_data$age[height_data$gender=="m"]+.1,
#        height_data$height[height_data$gender=="m"],
#        pch=16,col=4,lwd=2,cex=1.5,type='b')
# grid()
# legend (11,120,legend = c("Female","Male"), col = c(2,4),pch=16,cex=1.2,pt.cex=1.5)
# 
# 
# phonTools::errorbar(height_data$age[height_data$gender=="f"]-.1,
#                     height_data$height[height_data$gender=="f"],
#                     height_data$sd[height_data$gender=="f"]*2,col=2,lwd=1,length=0.051)
# phonTools::errorbar(height_data$age[height_data$gender=="m"]+.1,
#                     height_data$height[height_data$gender=="m"],
#                     height_data$sd[height_data$gender=="m"]*2,col=4,lwd=1,length=0.051)
# 
# rect (9.5,128,12.5,168,lwd=2,border="forestgreen",lty=1)
# rect (17.5,150,20.5,190,lwd=2,border="forestgreen",lty=1)
# 
# plot (aggdmu$age[aggdmu$gender=="f"]-.1,aggdmu$f0[aggdmu$gender=="f"],
#       pch=16,col=2,lwd=2,cex=1.5, ylim = c(50,370),type='b', xlab="Age (years)",
#       ylab = "f0 (Hz)",xlim=c(2,21),cex.axis=1.3,cex.lab=1.3)
# lines (aggdmu$age[aggdmu$gender=="m"]+.1,aggdmu$f0[aggdmu$gender=="m"],
#       pch=16,col=4,lwd=2,cex=1.5,type='b')
# grid()
# 
# phonTools::errorbars(aggdmu$age[aggdmu$gender=="f"]-.1,aggdmu$f0[aggdmu$gender=="f"],
#                    aggdsd$f0[aggdsd$gender=="f"]*2,col=2,lwd=1,length=0.051)
# phonTools::errorbars(aggdmu$age[aggdmu$gender=="m"]+.1,aggdmu$f0[aggdmu$gender=="m"],
#                    aggdsd$f0[aggdsd$gender=="m"]*2,col=4,lwd=1,length=0.051)
# 
# plot (height_data$height[height_data$gender=="f"][-c(1,2,3,19)],
#       aggdmu$f0[aggdmu$gender=="f"], pch=16,col=2,lwd=6,cex=2.75, ylim = c(50,350),
#       type='b', xlab="Height (cm)", ylab = "f0 (Hz)",xlim=c(95,205),cex.axis=1.3,
#       cex.lab=1.3)
# lines (height_data$height[height_data$gender=="m"][-c(1,2,3,19)],
#        aggdmu$f0[aggdmu$gender=="m"],
#       pch=16,col=4,lwd=6,cex=2.75,type='b')
# grid()
# 
# count = 1
# for (i in (1:19)[-c(1,2,3,19)]){
#   phonTools::sdellipse (
#     matrix(c((height_data$sd[height_data$gender=="f"][count]),0,
#              0,(aggdsd$f0[aggdsd$gender=="f"][count]))^2,2,2),
#     means = c(height_data$height[height_data$gender=="f"][i],
#               aggdmu$f0[aggdmu$gender=="f"][count]), col = 2,lty=3)
# 
#   phonTools::sdellipse (
#     matrix(c((height_data$sd[height_data$gender=="m"][count]),0,
#              0,(aggdsd$f0[aggdsd$gender=="m"][count]))^2,2,2),
#     means = c(height_data$height[height_data$gender=="m"][i],
#               aggdmu$f0[aggdmu$gender=="m"][count]), col = 4, lty=3)
# 
#   count = count + 1
# }
# dev.off()


################################################################################
### Figure 13.2
################################################################################

data (lee_etal_data)

gbars = (log(lee_etal_data$f1)+log(lee_etal_data$f2)+log(lee_etal_data$f3))/3

aggdmu = aggregate (gbars ~ age+gender, FUN = mean, data = lee_etal_data)
aggdmu[,3] = (aggdmu[,3])

aggdmu$vtl = exp(-(aggdmu$gbars - min(aggdmu$gbars)))
aggdmu$vtl = 15 * aggdmu$vtl
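# The two lines above convert mean log formant frequencies (gbars) into
# relative vocal-tract lengths: since VTL is (roughly) inversely proportional
# to formant frequencies, exp(-(gbars - min(gbars))) scales the groups relative
# to the group with the lowest formants (the longest vocal tracts, presumably
# the oldest males), which is anchored at 15 cm.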

vtl = (log(lee_etal_data$f1sd/lee_etal_data$f1)+
         log(lee_etal_data$f2sd/lee_etal_data$f2)+
         log(lee_etal_data$f3sd/lee_etal_data$f3))/3

aggdsd = aggregate (vtl ~ age+gender, FUN = mean, data = lee_etal_data)
aggdsd[,3] = exp(aggdsd[,3])*aggdmu$vtl

aggdmuvtl = aggdmu
aggdsdvtl = aggdsd

par (mfrow = c(1,3), mar = c(4.1,4.1,1,1))
plot (height_data$age[height_data$gender=="f"]-.1,
      height_data$height[height_data$gender=="f"],
      pch=16,col=2,lwd=2,cex=1.5, ylim = c(70,205),type='b', xlab="Age (years)",
      ylab = "Height (cm)",xlim=c(2,21),cex.axis=1.3,cex.lab=1.3)
lines (height_data$age[height_data$gender=="m"]+.1,
       height_data$height[height_data$gender=="m"],
      pch=16,col=4,lwd=2,cex=1.5,type='b')
grid()
legend (11,120,legend = c("Female","Male"), col = c(2,4),pch=16,cex=1.2,pt.cex=1.5)

phonTools::errorbar(height_data$age[height_data$gender=="f"]-.1,
                    height_data$height[height_data$gender=="f"],
                    height_data$sd[height_data$gender=="f"]*2,col=2,lwd=1,length=0.051)
phonTools::errorbar(height_data$age[height_data$gender=="m"]+.1,
                    height_data$height[height_data$gender=="m"],
                    height_data$sd[height_data$gender=="m"]*2,col=4,lwd=1,length=0.051)

rect (9.5,128,12.5,168,lwd=2,border="forestgreen",lty=1)
rect (17.5,150,20.5,190,lwd=2,border="forestgreen",lty=1)

plot (aggdmu$age[aggdmu$gender=="f"]-.1,aggdmu$vtl[aggdmu$gender=="f"],
      pch=16,col=2,lwd=2,cex=1.5, ylim = c(8.5,17),type='b', xlab="Age (years)",
      ylab = "VTL (cm)",xlim=c(2,21),cex.axis=1.3,cex.lab=1.3)
lines (aggdmu$age[aggdmu$gender=="m"]+.1,aggdmu$vtl[aggdmu$gender=="m"],
      pch=16,col=4,lwd=2,cex=1.5,type='b')
grid()


phonTools::errorbars(aggdmu$age[aggdmu$gender=="f"]-.1,
                   aggdmu$vtl[aggdmu$gender=="f"],
                   aggdsd$vtl[aggdsd$gender=="f"]*2,col=2,lwd=1,length=0.051)
phonTools::errorbars(aggdmu$age[aggdmu$gender=="m"]+.1,
                   aggdmu$vtl[aggdmu$gender=="m"],
                   aggdsd$vtl[aggdsd$gender=="m"]*2,col=4,lwd=1,length=0.051)

plot (height_data$height[height_data$gender=="f"][-c(1,2,3,19)],
      aggdmu$vtl[aggdmu$gender=="f"],
      pch=16,col=2,lwd=6,cex=2.75, ylim = c(8.5,17.5),type='b', xlab="Height (cm)",
      ylab = "VTL (cm)",xlim=c(105,185),cex.axis=1.3,cex.lab=1.3)
lines (height_data$height[height_data$gender=="m"][-c(1,2,3,19)],
       aggdmu$vtl[aggdmu$gender=="m"],
      pch=16,col=4,lwd=6,cex=2.75,type='b')
grid()

count = 1
for (i in (1:19)[-c(1,2,3,19)]){
  phonTools::sdellipse (
    matrix((c(height_data$sd[height_data$gender=="f"][count],0,
             0,aggdsd$vtl[aggdsd$gender=="f"][count]))^2,2,2), 
    means = c(height_data$height[height_data$gender=="f"][i],
              aggdmu$vtl[aggdmu$gender=="f"][count]), col = 2,lty=3)
  
  phonTools::sdellipse (
    matrix((c(height_data$sd[height_data$gender=="m"][count],0,
             0,aggdsd$vtl[aggdsd$gender=="m"][count]))^2,2,2), 
    means = c(height_data$height[height_data$gender=="m"][i],
              aggdmu$vtl[aggdmu$gender=="m"][count]), col = 4, lty=3)
  
  count = count + 1
}


################################################################################
### Figure 13.3
################################################################################

library (bmmb)
data(height_data)
data (lee_etal_data)

lee_etal_data$f0_original=lee_etal_data$f0
lee_etal_data$f0 = log(lee_etal_data$f0_original)

aggdmu = aggregate (f0 ~ age+gender, FUN = mean, data = lee_etal_data)
aggdmu[,3] = exp(aggdmu[,3])

lee_etal_data$f0 = log(lee_etal_data$f0sd/lee_etal_data$f0_original)
aggdsd = aggregate (f0 ~ age+gender, FUN = sd, data = lee_etal_data)
aggdsd[,3] = (exp(aggdsd[,3])-1)*aggdmu[,3]

aggdmuf0 = aggdmu
aggdsdf0 = aggdsd

par (mfrow = c(1,2), mar = c(4.1,.2,.5,.25), oma = c(0,4,0,0))


plot (aggdmuvtl$vtl,aggdmuf0$f0, pch=16,col=2,lwd=2,cex=2.5, ylim = c(90,320),type='n', 
      xlab="Vocal-tract length (cm)", ylab = "f0 (Hz)",xlim=c(9,16.5),cex.axis=1.1,cex.lab=1.1)
grid()

text (aggdmuvtl$vtl,aggdmuf0$f0, 5:19, col = rep(bmmb::cols[7:8], each = 15),cex=1.2)
mtext (side=2, "f0 (Hz)", line = 3, cex = 1.1)


legend (9.5,200,legend=c("female","male"), col=bmmb::cols[7:8],pch=16,bty='n')

use = aggdmuvtl$age %in% c(10,12,19)
plot (aggdmuvtl$vtl[use],aggdmuf0$f0[use], pch=16,col=2,lwd=2,
      cex=1.5, ylim = c(90,320),type='n', xlab="Vocal-tract length (cm)", 
      ylab = "f0 (Hz)",xlim=c(9,16.75),cex.axis=1.1,cex.lab=1.1,yaxt='n')
grid()

colss=bmmb::cols[c(3,3,5,2,2,4)]
text (aggdmuvtl$vtl[use],aggdmuf0$f0[use], rep(5:19,2)[use], col = colss, cex=1.5)

col_count = 1
for (i in c(6,8,15)){
  
  phonTools::sdellipse (
    matrix((c(aggdsdvtl$vtl[aggdsdvtl$gender=="f"][i],0,
             0,aggdsdf0$f0[aggdsdf0$gender=="f"][i]))^2,2,2), 
    means = c(aggdmuvtl$vtl[aggdmuvtl$gender=="f"][i],
              aggdmuf0$f0[aggdmuf0$gender=="f"][i]), 
    col = bmmb::cols[c(3,3,5)][col_count], lty = 1)
  
  phonTools::sdellipse (
    matrix((c(aggdsdvtl$vtl[aggdsdvtl$gender=="m"][i],0,
             0,aggdsdf0$f0[aggdsdf0$gender=="m"][i]))^2,2,2), 
    means = c(aggdmuvtl$vtl[aggdmuvtl$gender=="m"][i],
              aggdmuf0$f0[aggdmuf0$gender=="m"][i]), 
    col = bmmb::cols[c(2,2,4)][col_count], lty = 1)
  
  col_count = col_count + 1
}



################################################################################
### Figure 13.4
################################################################################

#aggd = aggregate (cbind (height,f0,vtl) ~ S + C_v, data = exp_data, FUN = mean)

#tab = table (exp_data$S, exp_data$C)
#mod_cat = apply (tab, 1,which.max)
mod_cat = as.numeric(factor(exp_data$C))

par (mfrow = c(1,3), mar = c(4.2,.25,1,.25), oma = c(0,4,0,0))
boxplot (height ~ A+G, data = exp_data, col = bmmb::cols[c(5,3,4,2)],cex.lab=1.2,
         cex.axis=1.2,ylim=c(100,195), names = c("b","g","m","w"),
         ylab="Apparent height (cm)", xlab="Apparent Speaker Category")
grid ()
boxplot (height ~ A+G, data = exp_data, col = bmmb::cols[c(5,3,4,2)],add=TRUE,ylim=c(100,195), xaxt='n',yaxt='n',
         ylab="", xlab="")

mtext(side=2,"Apparent height (cm)",line=3,cex=0.9)
plot (exp_data$f0_original, exp_data$height, pch=16,ylab="Apparent height (cm)",
      col = bmmb::cols[1+mod_cat], cex=1,ylim=c(100,195), #xlim = c(-1.2,1.2),
      cex.lab=1.2, cex.axis=1.2,yaxt='n',xlab="f0 (Hz)")
grid ()
plot (exp_data$vtl_original, exp_data$height, pch=16,ylab="Apparent height (cm)", 
      col = bmmb::cols[1+mod_cat], cex=1,ylim=c(100,195), #xlim = c(-2.5,2.9),
      cex.lab=1.2, cex.axis=1.2,yaxt='n',xlab="VTL (cm)")
legend (14.1,139, legend = c("boy","girl","man","woman"), pch=16,col=bmmb::cols[2:5],
        bty='n', cex = 1.3)
grid ()

################################################################################
### Figure 13.5
################################################################################


par (mfrow = c(1,2), mar = c(4,6,1,1))

fixefs = fixef (height_model)[-1,]
good = (fixefs[,3] < -0.5 & fixefs[,4] < -0.5) | (fixefs[,3] > 0.5 & fixefs[,4] > 0.5) 
pchs = ifelse (good, 16,1)
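
# filled points (pch 16) mark coefficients whose 95% credible interval
# (columns 3 and 4 of the fixef() output) lies entirely outside +/- 0.5 cm;
# the remaining coefficients are drawn as open circles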

brmplot (fixefs, xlim = c(-7,10),las=2, #ylim = c(-1,12),
               pch=pchs,lwd=2,horizontal=FALSE,xlab="Centimeters")
abline (v = c(-0.5,0.5), lty = 3, col = bmmb::cols[6],lwd=2)

sds = bmmb::get_sds (height_model)
good = (sds[,3] < -0.5 & sds[,4] < -0.5) | (sds[,3] > 0.5 & sds[,4] > 0.5) 
pchs = ifelse (good, 16,1)

brmplot (sds, xlim = c(0,10),las=2, pch=pchs,lwd=2,xaxs='i',
               horizontal=FALSE,xlab="Centimeters")
abline (v = c(0.5), lty = 3, col = bmmb::cols[6],lwd=2)


################################################################################
### Figure 13.6
################################################################################

mod_cat = as.numeric(factor(exp_data$C))

par (mar = c(4.2,4.2,3,1.5))
layout (mat=t(c(1:3)), widths=c(.27,.27,.46))
brmplot (age_intercepts, labels = c("adult","child"),ylab="Apparent height (cm)",
         cex.lab=1.3,cex.axis=1.3, main = "Intercept")
brmplot (age_vtl_slopes, labels = c("adult","child"),ylab="Centimeters",
         cex.lab=1.3,cex.axis=1.3, main = "VTL slope")

tmp = exp_data$vtl - mean (exp_data$vtl)
aggd = aggregate (cbind (height,f0,tmp) ~ S + C_v, data = exp_data, FUN = mean)

plot (aggd$tmp, aggd$height, pch=16, col = bmmb::cols[1+mod_cat], 
      cex=2,ylim=c(130,182),xlab="Centered VTL (cm)", ylab = "Apparent Height (cm)",
         cex.lab=1.3,cex.axis=1.3)

#cffs = fixef(height_model)[1:2,1]
#abline (cffs[1], cffs[2], lwd=3,col=1)

abline (age_intercepts[1,1], age_vtl_slopes[1,1], lwd=3,col=2)
abline (age_intercepts[2,1], age_vtl_slopes[2,1], lwd=3, col=4)
legend (1,152, legend = c("boy","girl","man","woman"), pch=16,col=bmmb::cols[2:5],
        bty='n', cex = 1.3)


################################################################################
### Figure 13.7
################################################################################

qq = aggregate (height ~ S + A + C_v, 
                data = exp_data, FUN = mean)
a = qq[qq[,2]=='a',]
c = qq[qq[,2]=='c',]

inter = intersect(a[,1],c[,1])


tmpp = exp_data[exp_data$S %in% inter,]
tmpp = tapply (tmpp$A=="a", tmpp$S, mean)
tmp = abs (tmpp - 0.5)

a = a[a[,1] %in% inter,]
c = c[c[,1] %in% inter,]

#hist(a[,3] - c[,3])

listener_gender_effects = 
  bmmb::short_hypothesis(height_model, "-(G1+A1:G1)=0",scope="coef",group="L")

age_intercepts = bmmb::short_hypothesis(
  height_model, c("2*(A1+A1:G1) = 0","2*(A1-A1:G1) = 0",
                  "2*(G1+A1:G1) = 0","2*(G1-A1:G1) = 0"))

par (mar =c(4,4,1.5,1))
layout (mat = t(c(1:2)), widths = c(.4,.6))
hist(a[tmp < 0.3,4] - c[tmp < 0.3,4], xlim = c(0,30), main = '',
     xlab = "Apparent height difference (cm)", col = bmmb::cols[1])

brmplot (listener_gender_effects[,1:4], col = bmmb::cols,
         labels = 1:15,xlab = "Listener", ylab = "Centimeters")
mtext (side=3,line=0.5,"Effect for apparent maleness")
#brmplot (age_intercepts, labels = c("w-g","m-b","w-m","g-b"))



################################################################################
### Figure 13.8
################################################################################

exp_data$F = as.numeric(exp_data$G=='f')

aggd = aggregate (cbind (exp_data$F,f0_original,vtl_original) ~ 
                    S + C_v, data = exp_data, FUN = mean)
colnames (aggd)[3]="Female"
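
# aggd$Female is each speaker's proportion of 'female' responses (the mean of
# the 0/1 indicator F), alongside the speaker's mean f0 and vocal-tract length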

tab = table (exp_data$S, exp_data$C)
mod_cat = apply (tab, 1,which.max)

par (mfrow = c(1,2), mar = c(4,.2,0,.2), oma = c(0,4,1,.5))
plot (aggd$f0_original, aggd$Female, pch=16, col = bmmb::cols[1+mod_cat], 
      cex=1.5,ylim=c(0,1), ylab="P(female response)", xlab="Fundamental frequency (Hz)")
legend (95,0.99,legend=c("boy","girl","man","woman"),col=bmmb::cols[2:5],
        pch=16,bty='n',pt.cex=1.2)
abline (h=.5,lty=3)
mtext (side=2,"P(female response)", line = 3)
plot (aggd$vtl_original, aggd$Female, pch=16, col = bmmb::cols[1+mod_cat], 
      cex=1.5,ylim=c(0,1), ylab="", xlab="Vocal-tract length (cm)",
      yaxt='n')
abline (h=.5,lty=3)

################################################################################
### Figure 13.9
################################################################################

par (mfrow = c(1,2), mar = c(4,5,1,.25), oma = c(0,0,0,1))

fixefs = fixef (gender_model)
good = (fixefs[,3] < -0.5 & fixefs[,4] < -0.5) | (fixefs[,3] > 0.5 & fixefs[,4] > 0.5) 
pchs = 16

brmplot (fixefs, xlim = c(-4,4),las=2, #ylim = c(-1,12),
               pch=pchs,lwd=2,horizontal=FALSE,xlab="Logits",line=TRUE)
#abline (v = c(-0.5,0.5), lty = 3, col = bmmb::cols[6],lwd=2)
#abline (v=0,lty=3,lwd=2)

sds = bmmb::get_sds (gender_model)
good = (sds[,3] < -0.5 & sds[,4] < -0.5) | (sds[,3] > 0.5 & sds[,4] > 0.5) 
#pchs = ifelse (good, 16,1)

brmplot (sds, xlim = c(0,2.5),las=2, pch=pchs,lwd=2,xaxs='i',
               horizontal=FALSE,xlab="Logits")
#abline (v = c(0.5), lty = 3, col = bmmb::cols[6],lwd=2)


################################################################################
### Figure 13.10
################################################################################

knitr::include_graphics("_main_files/figure-html/Figure 13.10.jpg")

# jpeg ("Figure 13.10.jpg",4800,1650,res=600)
# #get fixed effect parameters
# samples = brms::fixef (gender_model, summary = FALSE)
# 
# # get a,b,c coefficients for overall plane
# a_all_reduced = mean (samples[,"Intercept"])
# b_all_reduced = mean (samples[,"vtl"])
# c_all_reduced = mean (samples[,"f0"])
# 
# # get a,b,c coefficients for adult plane
# a_adult_reduced = mean (samples[,"Intercept"] + samples[,"A1"])
# b_adult_reduced = mean (samples[,"vtl"] + samples[,"vtl:A1"])
# c_adult_reduced = mean (samples[,"f0"] + samples[,"f0:A1"])
# 
# # get a,b,c coefficients for child plane
# a_child_reduced = mean (samples[,"Intercept"] - samples[,"A1"])
# b_child_reduced = mean (samples[,"vtl"] - samples[,"vtl:A1"])
# c_child_reduced = mean (samples[,"f0"] - samples[,"f0:A1"])
# 
# fixef_2 = brms::fixef (gender_model)
# 
# # y = (-b*x - a - z) / (dx+c)
# # fixef(model_gender_vtl_f0)
# 
# tmp = bmmb::exp_data
# tmp = tmp[tmp$R=='a',]
# 
# tmp$vtl_original = tmp$vtl
# mu_vtl = mean (tmp$vtl_original)
# tmp$vtl = tmp$vtl - mean (tmp$vtl)
# 
# tmp$f0_original = tmp$f0
# mu_f0 = mean (tmp$f0_original)
# tmp$f0 = tmp$f0 - mean(tmp$f0)
# tmp$f0 = tmp$f0 / 100
# 
# aggd = aggregate (cbind ( height, A=="a", G=="f", vtl,f0, vtl) ~ S + C_v,
#                       data = tmp, FUN = mean)
# aggd$C = ""
# aggd$C[aggd[,4]>= 0.5 & aggd[,5]>= 0.5] = "w"
# aggd$C[aggd[,4]>= 0.5 & aggd[,5]<= 0.5] = "m"
# aggd$C[aggd[,4]<= 0.5 & aggd[,5]>= 0.5] = "g"
# aggd$C[aggd[,4]<= 0.5 & aggd[,5]<= 0.5] = "b"
# #table(aggd$C)
# 
# tab = table (tmp$S, tmp$C)
# mod_cat = apply (tab, 1,which.max)
# 
# par (mfrow = c(1,3), mar = c(4,.25,.5,.25), oma = c(0,4,0,0))
# 
# plot (aggd$vtl,aggd$f0, cex =1.2, col = bmmb::cols[c(2:5)][factor(aggd$C)],
#       xlim=c(-2.5,3),  pch=16,lwd=2, xlab = "",yaxt='s',
#       ylab="Height (inches)")
# grid()
# points (aggd$vtl, aggd$f0, cex =1.2, pch=16,lwd=2,
#       col = bmmb::cols[c(2:5)][aggd$C])
# 
# legend (-2.5, -0.25, legend = c("Boys","Girls","Men","Women"),lwd=2,lty=0,
#         col = bmmb::cols[2:5], bty='n',pch=16,pt.cex=1.5, x.intersp=.25)
# 
# curve ((-b_all_reduced*x - a_all_reduced) / (c_all_reduced), from = -3, to = 3, add = TRUE,lwd=2,col=bmmb::cols[7])
# curve ((-b_adult_reduced*x - a_adult_reduced)/ (c_adult_reduced), from = -3, to = 3, add = TRUE,lwd=2, lty=2,col=bmmb::cols[10])
# curve ((-b_child_reduced*x - a_child_reduced)/ (c_child_reduced), from = -3, to = 3, add = TRUE,lwd=2, lty=2,col=bmmb::cols[8])
# 
# mtext ("Centered f0 (Hz/100)", side=2, line = 2.7, cex = 0.8)
# 
# plot (aggd$vtl,aggd$f0, cex =1.2, col = bmmb::cols[c(2:5)][factor(aggd$C)],
#       xlim=c(-2.5,3),  pch=16,lwd=2, xlab = "",yaxt='n',
#       ylab="Height (inches)")
# grid()
# 
# xlim = par()$usr[1:2]
# ylim = par()$usr[3:4]
# 
# x = seq(-3,4,0.1)
# y = (-b_adult_reduced*x - a_adult_reduced)/ (c_adult_reduced)
# polygon (c(-3,x,4),c(-3,y,-3),col="seagreen")
# 
# y = (-b_adult_reduced*x - a_adult_reduced)/ (c_adult_reduced)
# polygon (c(-3,x,1),c(3,y,3),col=2)
# 
# points (aggd$vtl, aggd$f0, cex =1.2, pch=16,lwd=2,
#       col = 0)
# box()
# 
# for ( i in seq(-20,20,2)){
#   curve ((-b_adult_reduced*x - a_adult_reduced+i)/ (c_adult_reduced),
#          from = -5, to = 5, add = TRUE,lwd=1, lty=3,col=0)
# }
# curve ((-b_adult_reduced*x - a_adult_reduced)/ (c_adult_reduced),
#          from = -5, to = 5, add = TRUE,lwd=2, lty=1,col=1)
# 
# 
# plot (aggd$vtl,aggd$f0, cex =1.2, col = bmmb::cols[c(2:5)][factor(aggd$C)],
#       xlim=c(-2.5,3),  pch=16,lwd=2, xlab = "",yaxt='n',
#       ylab="Height (inches)")
# grid()
# 
# xlim = par()$usr[1:2]
# ylim = par()$usr[3:4]
# 
# x = seq(-3,4,0.1)
# y = (-b_child_reduced*x - a_child_reduced)/ (c_child_reduced)
# polygon (c(-3,x,4),c(-3,y,-3),col="cyan3")
# 
# y = (-b_child_reduced*x - a_child_reduced)/ (c_child_reduced)
# polygon (c(-3,x,1),c(3,y,3),col="darkgoldenrod1")
# 
# points (aggd$vtl, aggd$f0, cex =1.2, pch=16,lwd=2,
#       col = 0)
# box()
# 
# for ( i in seq(-20,20,2)){
#   curve ((-b_child_reduced*x - a_child_reduced+i)/ (c_child_reduced),
#          from = -5, to = 5, add = TRUE,lwd=1, lty=3,col=0)
# }
# curve ((-b_child_reduced*x - a_child_reduced)/ (c_child_reduced),
#          from = -5, to = 5, add = TRUE,lwd=2, lty=1,col=1)
# 
# mtext ("Centered vocal-tract length (cm)", side = 1, outer = TRUE, line=-1,cex=0.9)
# dev.off()


################################################################################
### Figure 13.11
################################################################################

# y = (-b*x - a - z) / (dx+c)
# fixef(model_gender_vtl_f0)

tmp = bmmb::exp_data
tmp = tmp[tmp$R=='a',]

tmp$vtl_original = tmp$vtl
tmp$vtl = tmp$vtl - mean (tmp$vtl)

tmp$f0_original = tmp$f0 
tmp$f0 = tmp$f0 - mean(tmp$f0)
tmp$f0 = tmp$f0 / 100

aggd = aggregate (cbind ( height, A=="a", G=="f", vtl,f0, vtl) ~ S + A_v + G_v + C_v, 
                      data = tmp, FUN = mean)
#aggd = aggd[aggd$A_v=='a',]

a = listener_coefficients_adult[1:15,1]
b = listener_coefficients_adult[16:30,1]
c = listener_coefficients_adult[31:45,1]

ac = listener_coefficients_child[1:15,1]
bc = listener_coefficients_child[16:30,1]
cc = listener_coefficients_child[31:45,1]
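
# a, b and c hold each listener's intercept, vtl slope and f0 slope for the
# gender model given apparent-adult responses (ac, bc, cc: apparent-child),
# presumably extracted earlier as listener-specific coefficients. The curve()
# calls below draw each listener's 50% boundary, f0 = (-b*vtl - a) / c, i.e.
# the line where the predicted log-odds of a 'female' response equal zero.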

par (mar = c(.2,.2,.2,.2), oma = c(4,4,.5,.5), mfrow = c(4,4))

plot (aggd$vtl,aggd$f0, cex =1.2, col = bmmb::cols[c(2:5)][factor(aggd$C_v)], 
      pch=16,lwd=2, xlab = "",ylab="Centered f0 (Hz/100)",xaxt = 'n',
        xlim=c(-2.5,2.7),ylim=c(-1.2,1.2))
grid()

for (i in 1:15)
  curve ((-b[i]*x - a[i]) / (c[i]), from = -3, to = 3, 
         add = TRUE,lwd=2,col=bmmb::cols[8],xaxt='n')

for (i in 1:15)
  curve ((-bc[i]*x - ac[i]) / (cc[i]), from = -3, to = 3, 
         add = TRUE,lwd=2,col=bmmb::cols[7],xaxt='n')

for (i in 1:15){
  xaxt = "n"
  yaxt = "n"
  if (i %in% 12:15) xaxt = "s"
  if (i %in% c(4,8,12)) yaxt = "s"
  tmpp = tmp[tmp$L==i,]
  plot (tmpp$vtl,tmpp$f0, cex =1.2, col = bmmb::cols[2:5][factor(tmpp$C)],
        pch=16,lwd=2, xlab = "",ylab="Centered f0 (Hz/100)",xaxt=xaxt,yaxt=yaxt,
        xlim=c(-2.5,2.7),ylim=c(-1.2,1.2))
  grid()
  curve ((-b[i]*x - a[i]) / (c[i]), from = -5, to = 3, add = TRUE,
         lwd=2,col=bmmb::cols[8])
  curve ((-bc[i]*x - ac[i]) / (cc[i]), from = -5, to = 3, add = TRUE,
         lwd=2,col=bmmb::cols[7])
  text (-1.5,-.75,i,cex=1.5)
}
  mtext (side=1,outer=TRUE, "Centered vocal-tract length (cm)", line = 2.9,cex=0.9)
  mtext (side=2,outer=TRUE, "Centered f0 (Hz/100)", line = 2.9, cex=0.9)

################################################################################
### Figure 13.12
################################################################################

tab = table (exp_data$S, exp_data$C_v)
mod_cat = apply (tab, 1,which.max)

par (mar = c(1,4,1,1), oma = c(0,0,0,0))
brmplot (ranef(gender_model)$S[,,1], col = bmmb::cols[mod_cat+1],labels="",
         ylab="Logits")

legend (1,4,legend = c("b","g","m","w"), col = bmmb::cols[2:5],pch=16,bty='n',
        horiz=TRUE)