Sunday, November 15, 2015
Confidence intervals for standard deviation
Here is a link to a post about estimating sigmax using sx, n and the chi square table. The degrees of freedom will be n - 1.
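For anyone who wants to check the table work by machine, here is a minimal Python sketch of the same idea, assuming scipy is available; the function name and the sample values of sx and n are placeholders, not numbers from the linked post.

from math import sqrt
from scipy.stats import chi2

def sigma_interval(sx, n, confidence=0.95):
    """Return (low, high) bounds for the population standard deviation sigma_x."""
    df = n - 1                              # degrees of freedom is n - 1
    alpha = 1 - confidence
    chi_hi = chi2.ppf(1 - alpha / 2, df)    # upper chi-square critical value
    chi_lo = chi2.ppf(alpha / 2, df)        # lower chi-square critical value
    return sqrt(df * sx**2 / chi_hi), sqrt(df * sx**2 / chi_lo)

print(sigma_interval(sx=0.69, n=36))        # hypothetical sx and n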
Thursday, November 12, 2015
Using the Consumer Price Index table
The Consumer Price Index
This week, students got a green handout sheet with the consumer price index number for the years 1950 to 2015. The simplest way to use it goes as follows.
How did prices change from 1960 to 1980? We use the two CPI values.
CPI(1960) = 29.6
CPI(1980) = 82.4
82.4/29.6 = 2.78378.....
29.6/82.4 = .35922...
What these numbers mean is that, on average, a $10 item in 1960 sold for 10*2.78378... = $27.84 in 1980, while a $10 item in 1980 would have sold for 10*.35922... = $3.59 in 1960.
We can use this to figure out the cost of living increase in any given year by dividing the CPI for that year by the CPI for the previous year. For example, the rate in 1975 would be
CPI(1975)/CPI(1974) = 53.8/49.3 = 1.09127789... This is 1 + rate, so rounded to the nearest tenth of a percent we would have 9.1% and to the nearest hundredth of a percent it would be 9.13%.
Let's look at the number CPI(1980)/CPI(1960) = 2.783783... This gives us how much prices increased over the 20-year period from 1960 to 1980. To get the average increase over those 20 years, we take (2.783783)^(1/20) = 1.05252... This is 1 + rate, so the average rate = .05252... or 5.3% if rounded to the nearest tenth of a percent and 5.25% rounded to the nearest hundredth.
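Here is a quick Python sketch of the same arithmetic, using only the CPI values quoted above; think of it as a calculator check, not a new method.

cpi = {1960: 29.6, 1974: 49.3, 1975: 53.8, 1980: 82.4}

# Price change factor from 1960 to 1980, and the reverse direction.
print(cpi[1980] / cpi[1960])            # about 2.78378
print(cpi[1960] / cpi[1980])            # about 0.35922

# Inflation rate for a single year: CPI(year)/CPI(year - 1) is 1 + rate.
rate_1975 = cpi[1975] / cpi[1974] - 1
print(round(rate_1975 * 100, 2))        # about 9.13 percent

# Average annual increase over the 20 years from 1960 to 1980.
avg_rate = (cpi[1980] / cpi[1960]) ** (1 / 20) - 1
print(round(avg_rate * 100, 2))         # about 5.25 percent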
Thursday, November 5, 2015
Answers to Quiz 8
Here are the answers to Quiz 8, which will be counted as a lab instead of a quiz. A new Quiz 8 will be given on Tuesday, with checksums.
Data set: 1990-2009 n = 20
R² = .5956
yp = .1830x - 361.3035
What is the value of yp when you plug in 1990? 2.8665
What is the value of yp when you plug in 2009? 6.3435
95% confidence threshold = .1971 Does R² surpass it? Yes
99% confidence threshold = .3147 Does R² surpass it? Yes
====================
Data set: 2000-2009 n = 10
R² = .1601
yp = .1006x - 196.0848
What is the value of yp when you plug in 2000? 5.1152
What is the value of yp when you plug in 2009? 6.0206
95% confidence threshold = .3994 Does R² surpass it? No
99% confidence threshold = .5852 Does R² surpass it? No
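If you want to check these answers on a computer instead of a calculator, here is a small Python sketch that reproduces the arithmetic; the slopes, intercepts and thresholds are the ones listed above.

def yp(slope, intercept, x):
    """Evaluate the regression prediction yp = slope * x + intercept."""
    return slope * x + intercept

print(round(yp(0.1830, -361.3035, 1990), 4))    # 2.8665
print(round(yp(0.1830, -361.3035, 2009), 4))    # 6.3435
print(round(yp(0.1006, -196.0848, 2000), 4))    # 5.1152
print(round(yp(0.1006, -196.0848, 2009), 4))    # 6.0206

# Does R² surpass the confidence thresholds?
print(0.5956 > 0.1971, 0.5956 > 0.3147)    # True True -> surpasses both
print(0.1601 > 0.3994, 0.1601 > 0.5852)    # False False -> surpasses neither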
Tuesday, November 3, 2015
The frequency table solution for TI-83 and TI-84
Put the list of numbers in one list (let's say L1) and the frequencies in a separate list (for simplicity, make it L2 in this example). Under the STAT menu, in the CALC sub-menu, choose
2-Var Stats L1, L2
Assuming the frequencies are in the second list, the sum of the y values is n, the size of the sample, and the sum of xy is the sum of all the x values. You then need to divide (sum of xy)/(sum of y) to get the average x-bar.
As for the median, you have to do it by hand, just like the folks with the TI-30xIIs.
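For students who prefer to check this on a computer, here is a minimal Python sketch of the same frequency-table work; the weighted mean mirrors the 2-Var Stats trick and the median is the by-hand step. The tiny data set at the bottom is made up, not one of the class lists.

def freq_mean_median(values, freqs):
    """Return (mean, median) for data given as values with frequencies."""
    n = sum(freqs)                                       # same as the sum of y
    total = sum(v * f for v, f in zip(values, freqs))    # same as the sum of xy
    mean = total / n

    expanded = []                                        # write out every data point
    for v, f in zip(values, freqs):
        expanded.extend([v] * f)
    expanded.sort()
    mid = n // 2
    median = expanded[mid] if n % 2 == 1 else (expanded[mid - 1] + expanded[mid]) / 2
    return mean, median

print(freq_mean_median([1, 2, 3], [4, 5, 1]))            # (1.7, 2.0)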
Here are the lists from class, the first number x and the second number f(x). The answers are in the comments.
This year's daily differences, 2015 - (avg. 1999-2014)
-9, 1
-8, 3
-7, 4
-6, 7
-5, 13
-4, 15
-3, 12
-2, 16
-1, 11
0, 24
1, 27
2, 24
3, 24
4, 25
5, 18
6, 14
7, 11
8, 8
9, 14
10, 5
11, 7
12, 1
13, 1
14, 4
15, 2
16, 3
17, 5
18, 4
20, 2
21, 1
Here is the list of wins by NFL teams in 2014
12, 5
11, 4
10, 3
9, 4
8, 2
7, 4
6, 3
5, 1
4, 2
3, 2
2, 2
Answers in the comments.
Tuesday, October 27, 2015
Correction to Homework 9 answers
The number for the final test statistic on the back page should be
t = (75.20-71.70)/sqrt(3.49²/10+.67²/10) = 3.114, which means we should reject H0.
Sorry for the mistake.
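For anyone who wants to verify the corrected number on a machine, here is a one-line Python check of the same arithmetic.

from math import sqrt
t = (75.20 - 71.70) / sqrt(3.49**2 / 10 + 0.67**2 / 10)
print(round(t, 3))    # 3.114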
Thursday, October 22, 2015
Two sets of data
Dealing with tests of two statistics from two samples, inferring differences (or not) between the underlying populations, both for proportions and averages.
Two sets of data of the same size and using the calculator's two variable mode.
The one-variable method with two data sets of the same size: matched pairs.
Tuesday, October 20, 2015
Links to posts on Bayesian probability
An easier method for probabilities of false positives and false negatives
Here are the previous posts about Bayesian probability, which in our case means we want to understand the difference between the probabilities of false positive tests and false negative tests.
In class we used a contingency table method to find the probabilities of false negatives and false positives. This is useful if we want to work with doing a second sample, as we did when the false positive probability was so high. If we just want to find probabilities on a single test, here is an easier method.
e = error rate and a = accuracy rate, where e + a = 1
p = proportion of trait in the population and q = 1 - p
probability of a false positive = eq/(eq + pa)
probability of a false negative = ep/(ep + qa)
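Here is a minimal Python sketch of those two formulas, using the same letters; the example rates at the bottom are made up for illustration, not from a class problem.

def false_positive(e, p):
    """Probability a positive test result is wrong: eq/(eq + pa)."""
    a, q = 1 - e, 1 - p
    return e * q / (e * q + p * a)

def false_negative(e, p):
    """Probability a negative test result is wrong: ep/(ep + qa)."""
    a, q = 1 - e, 1 - p
    return e * p / (e * p + q * a)

# Hypothetical test: 2% error rate, trait found in 1% of the population.
print(round(false_positive(0.02, 0.01), 4))
print(round(false_negative(0.02, 0.01), 4))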
Friday, October 16, 2015
Mistake in Homework 8
The checksum in the top set of numbers is incorrect. Instead of 221,689.1, it should be 212,104.69.
Thanks to Kayla Simmons for catching the mistake.
Thursday, October 15, 2015
Notes for October 13 and 15
Testing a hypothesis about the average of an underlying population (mux) based on the average of a sample (x-bar)
Let's assume we have information about an average from a population. For example, it is regularly assumed that human body temperature is 98.6° Fahrenheit or that the average IQ is 100. Another example: we would like to test whether 2015 in Oakland is warmer than average, comparing it to the average temperatures for 1999 to 2014. Here are the steps to take to create a null hypothesis H0 and an alternate hypothesis HA, determine a confidence level at which we will reject H0, and use a sample's statistics to see if the data warrants the rejection of the null hypothesis or it fails to meet that standard.
Setting the two hypotheses: The null hypothesis is always an equality and the alternate hypothesis an inequality. There are three kinds of inequalities.
One tailed high: Consider a drug that is supposed to increase muscle mass. The only kind of result that will impress us is one that shows that increase. This would set the two hypotheses as
H0: mux = constant
HA: mux > constant
One tailed low: Instead let's say we have a drug that is supposed to reduce cholesterol. We want to see results where the average goes down.
H0: mux = constant
HA: mux < constant
Two tailed: In class, we looked at data that seems to indicate human body temperature is not 98.6° Fahrenheit. When this claim was made, it was not made clear if the temperature was now higher or lower and in this case, any significant difference would be a surprising result.
H0: mux = constant
HA: mux != constant (The equal sign with the slash isn't available in this text editor. The symbols '!=' are used in some computer languages to mean inequality.)
Setting the confidence level: We may have some leeway as to whether our confidence level is 90%, 95% or 99%. In most experiments I've seen published about scientific statements, the 99% confidence level is standard.
Using the numbers from the experiment: We will need x-bar, sx and n to produce our test statistic t = (x-bar - mux)/(sx/sqrt(n)). We will also use n to get the degrees of freedom, which in this case is n-1.
Example: In class we had a set where n = 36, so degrees of freedom would be 35.
Threshold for one-tailed high test in this situation: Our test stat t would have to be greater than 2.438.
Threshold for one-tailed low test in this situation: Our test stat t would have to be less than -2.438.
Threshold for two-tailed test in this situation: Our test stat t would have to be greater than 2.724 -OR- less than -2.724.
When we plugged in the values 97.96 for the sample average and 0.69 for the sample standard deviation we got (97.96 - 98.6)/0.69 * sqrt(36) = -5.57. This number is well beyond our low threshold of -2.724, so we would reject the null hypothesis. The technical statement in this case would be
"We are 99% confident from the evidence our our sample that the average human body temperature is not 98.6° Fahrenheit." Notice that we cannot say what the true value is from this, though most samples with fairly large n place the true number now at around 98.2° Fahrenheit. We cannot be certain if the temperature has changed over time or if the means of measurement have become more accurate.
Notes on Bayesian probabilities.
Type I (false positive) and Type II (false negative) errors
I've seen a lot of explanations of Type I and Type II errors, but this photo collage is my favorite.
When we set a confidence level, what we are doing is setting the probability of a Type I error. The null hypothesis can always be interpreted as "nothing special is happening" and Type I errors mean we think something special is happening when it isn't. In many cases, false positives are more disruptive than false negatives, so we are more interested in limiting Type I errors than we are in limiting Type II errors. There will be more discussion of this in class on Oct. 15 and the notes will also be posted here.
Tuesday, October 6, 2015
Notes for October 6 and 8.
The posts about the confidence intervals for proportions.
The explanation of Fisher's idea, the hypothesis test.
The differences between p, p-hat and p-value.
Let's consider the lady tasting tea and how we would test her using a z-score. This is a reasonably simple calculation, though it isn't as precise when n is small as using the numbers we get from looking at the problem as sampling with replacement.
In this case, the null hypothesis is given by the equation
H0: p = 0.5
What this states is that she is "just guessing" between milk-in-tea and tea-in-milk, so she should have a 50-50 chance to be right (or wrong) each time.
Let's say we set our confidence level at 99% and tested her six times, and she went six for six. That means p-hat = 6/6 = 1. We now have all the numbers we need to get our z-score. Using sqrt to stand in for square root, in our calculator we would type
(1 - .5)/sqrt(.5*.5/6) = 2.4494897...,
which is 2.45 when rounded to the nearest hundredth. We use this to look up the p-value on the goldenrod sheet and get .9929. This is the p-value, the test statistic we use to make our decision whether to reject H0 or fail to reject. Because this is a high tailed test and .9929 >= .99, we reject the null hypothesis, which means we think she is not just guessing, given the results of this test.
Translated into English, we are 99% convinced she actually has some talent at telling the difference between the two ways of making tea, but we still hold out the 1% possibility that she was just a very lucky guesser.
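Here is a quick Python sketch of that z-score; p = 0.5 comes from the null hypothesis and p-hat = 6/6 from her answers.

from math import sqrt

p, n = 0.5, 6
p_hat = 6 / 6

z = (p_hat - p) / sqrt(p * (1 - p) / n)
print(round(z, 2))    # 2.45, the value we look up on the goldenrod sheet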
As for the three numbers that all use the letter p, once again:
p is the probability we get by the definition of the test. In this case it was 0.5, but we will do other tests where the number can be something else.
p-hat is the proportion of answers she got right if we do the high tailed test, or the proportion of answers she got wrong if we use the more accurate low tailed test.
The p-value is the proportion we get from the test statistic, and this is what we use to decide whether we will reject or fail to reject the null hypothesis H0.
Tuesday, September 29, 2015
Notes for September 29 and October 1
Here are links to posts about how to get average and both standard deviations on the TI-30XIIs.
Here is a link to a post about t-scores and their use in confidence intervals.
Here is a link to the posts about Confidence of Victory.
A list of the major things that can go wrong with samples.
Too small a sample size. A very small sample will have huge confidence intervals for the values of proportions for categorical variables, which should be a red flag for anyone reading it. But often, people only mention n and the confidence intervals as afterthoughts, and many papers have been published and quoted in much larger publications before anyone noticed how small the samples were.
Convenience sampling: Our class could be considered a sample of students at Laney, but is it representative? It's convenient for me to get information from the students, but groups of students who would be ignored include:
1. Students whose majors do not require statistics
2. Students who primarily take night classes or distance learning classes
3. Students who primarily take Monday and Wednesday classes
It's not inevitable that excluding these groups would change the proportions of males and females, for example, but a convenience sample is always suspect.
Self-selection. Internet polls on websites might ask you about politics or sports or entertainment. You are under no compulsion to answer the questions and you do so only because the topic interests you. Instead of being convenient for the researcher, self-selection polls are convenient for the responders. Almost every such poll will have a disclaimer stating "not a scientific poll" and the numbers aren't a good place to start using statistical methods to find out about the underlying population.
Leading questions. In polling data for opinions, leading questions can create bias.
Under-sampling and over-sampling of demographic groups. I have been following polls for several elections now and in nearly every poll, someone will complain that some group is under-represented: too many conservatives or too many liberals, too many men or too many women, not enough people from outside of major cities or too many from outside major cities, some age group under- or over-represented.
No sample is completely perfect, but honest sampling companies do work at using acceptable methods.
Thursday, September 24, 2015
Tuesday, September 15, 2015
Notes for September 15th and 17th
Link to a post about the shared birthday problem.
Link to a post about the Game Show problem, a.k.a. the Monty Hall problem. (many topics discussed, this topic at the bottom of the post.)
Probability of r successes in n dependent trials using sampling without replacement, which is like drawing cards from a deck.
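As a small illustration of sampling without replacement, here is a Python sketch of the hypergeometric calculation; the card example is made up, not one from the linked post.

from math import comb

def prob_r_successes(r, n, successes_in_pop, pop_size):
    """P(exactly r successes in n draws without replacement)."""
    failures_in_pop = pop_size - successes_in_pop
    return comb(successes_in_pop, r) * comb(failures_in_pop, n - r) / comb(pop_size, n)

# Chance of exactly 2 hearts in a 5-card hand from a standard 52-card deck.
print(round(prob_r_successes(2, 5, 13, 52), 4))    # about 0.2743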
A new use for independent probability: Missing a rare side effect. Let us consider a drug company running tests on a new drug. The tests are designed to check the drug's effectiveness in comparison to other drugs on the market, but they are also designed to see if the subjects experience side effects. If you've ever listened to a drug commercial on TV, you know that some side effects can be quite dangerous. If the probability of a side effect is p and the size of the sample is n, the expected value for the frequency is np.
Example: Let's say the drug company is testing a new drug on 500 subjects. Let's also stipulate there is a fairly rare side effect that we should see in 1% of the population, so p = .01. 500 * .01 = 5, so the expected value of people with the side effect in the sample is 5. Since the expected value is a whole number, this means the most likely number of people with the side effect is 5. Let's do the binomial distribution for 4, 5 and 6, rounding to four places after the decimal.
Probability of exactly 4 people out of 500 having the side effect:
500 nCr 4 * .01 ^ 4 * .99 ^ 496 = .1760 or 17.60%
Probability of exactly 5 people out of 500 having the side effect:
500 nCr 5 * .01 ^ 5 * .99 ^ 495 = .1764 or 17.64%
Probability of exactly 6 people out of 500 having the side effect:
500 nCr 6 * .01 ^ 6 * .99 ^ 494 = .1470 or 14.70%
As we can see, the odds of 5 out of 500 are slightly greater than 4 out of 500, and about 3% more than 6 out of 500. No other outcome is more likely than 5 out of 500.
Here's a different question: what are the chances of 0 out of 500? The reason to ask this is if the trial misses the side effect completely and drug goes to market, the company could face a lot of lawsuits they didn't expect when the side effect starts showing up in the much larger sample of patients taking the drug.
Probability of 0 people out of 500 having the side effect:
500 nCr 0 * .01 ^ 0 * .99 ^ 500 = .0066 or 0.66%
(Note: when we have "n choose 0" the answer is always 1, and likewise any non-zero number raised to the power of 0 is 1. For this problem only, we can just type in the last term (1 - p)^n.)
Because the sample was large enough and the side effect was not all that rare, the odds of a sample missing this side effect are relatively low. But what if the side effect were rarer, say 1 in 400, which is the decimal .0025? This changes the numbers, of course. The expected value is now 500 * .0025 = 1.25, which means the most likely event should be either 1 person or maybe 2 people showing the side effect. Let's look at 0, 1 and 2 people having the side effect.
Probability of exactly 0 people out of 500 having the side effect:
500 nCr 0 * .0025 ^ 0 * .9975 ^ 500 = .2861 or 28.61%
Probability of exactly 1 person out of 500 having the side effect:
500 nCr 1 * .0025 ^ 1 * .9975 ^ 499 = .3585 or 35.85%
Probability of exactly 2 people out of 500 having the side effect:
500 nCr 2 * .0025 ^ 2 * .9975 ^ 498 = .2242 or 22.42%
So the most likely event is to have one person showing the side effect, which will happen about 36% of the time. But the next most likely event is not 2 out of 500 but 0 out of 500, which happens over 28% of the time. 1 in 400 people showing a side effect might not seem that high, but a successful drug can be given to hundreds of thousands of patients, possibly more, and having 1 in every 400 showing a very bad side effect could get very expensive for the company.
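Here is a minimal Python sketch of the binomial calculations above; plain Python keeps the n nCr r * p^r * (1-p)^(n-r) formula visible.

from math import comb

def binom_prob(r, n, p):
    """P(exactly r successes in n independent trials with success rate p)."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

# Side effect in 1% of the population, sample of 500 subjects.
for r in (4, 5, 6, 0):
    print(r, round(binom_prob(r, 500, 0.01), 4))      # .1760, .1764, .1470, .0066

# Rarer side effect: 1 in 400, so p = .0025.
for r in (0, 1, 2):
    print(r, round(binom_prob(r, 500, 0.0025), 4))    # .2861, .3585, .2242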
Here are some practice problems. Assume the sample size is n = 1000 and we are interested in 0 people showing the side effect. Round the answers to the nearest tenth of a percent.
a) the side effect shows up in 1 in 500 patients
b) the side effect shows up in 1 in 1,000 patients
c) the side effect shows up in 1 in 1,500 patients
Answers in the comments.
Thursday, September 10, 2015
Notes for the week of September 8 and 10, 2015
Here is a link to a discussion of probabilities from a contingency table.
Here is a link to a discussion of Pascal's triangle and the binomial coefficients, used when looking at the probabilities of events from series of independent trials - like flipping coins or rolling dice - and some dependent trials - like drawing cards from a deck.
Thursday, September 3, 2015
Homework 2 (due September 8)
Last four points of homework and older posts about z-score and lookup tables.
4 points for Homework 2
Here is the list of wins for the 30 NBA teams in the 2014-15 season, including playoff wins. Find the z-score for the highest number of wins and the lowest number of wins (note: the list is not in order) and determine if these two numbers of wins count as outliers by the following method.
If z >= 3, the value is very unusually high.
If z >= 2, the value is unusually high.
If z <= -2, the value is unusually low.
If z <= -3, the value is very unusually low.
List
68 76 56 49 52 43 40 40 38 37 33 32 25 18 17 83 65 62 52 61 58 51 45 45 39 38 30 29 21 16
Average = 43.97
Standard deviation = 16.99
Round z-score to the nearest hundredth.
z(high value) = ______________
Is this any kind of outlier, and if so, which one? ____________
z(low value) = ______________
Is this any kind of outlier, and if so, which one? ____________
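Here is a minimal Python sketch of the z-score outlier check; the x value at the bottom is a placeholder, not one of the homework answers.

def z_label(x, mean, sd):
    """Return the rounded z-score and the outlier label from the rules above."""
    z = round((x - mean) / sd, 2)
    if z >= 3:
        label = "very unusually high"
    elif z >= 2:
        label = "unusually high"
    elif z <= -3:
        label = "very unusually low"
    elif z <= -2:
        label = "unusually low"
    else:
        label = "not an outlier"
    return z, label

print(z_label(50, 43.97, 16.99))    # hypothetical x with the class mean and standard deviation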
===============
If you are looking for more information on look-up tables, follow this link to a previous post.
Here is a link to some practice problems for raw scores to z-scores to proportions and vice versa.
Tuesday, August 25, 2015
Links to all posts about the Homework 1 topics
The five number summary and outliers
Stem and leaf plots
Scales other than percent
Here is one link to several posts about the five number summary and the outlier test involved.
The post at the very end is the explanation, the posts at the top have practice problems.
Here is the link for the posts about stem and leaf plots.
Here is the link for the posts about scales other than percent.