
Stats without Tears
Solutions for Chapter 5

Updated 20 Sept 2014 (What’s New?)
Copyright © 2012–2017 by Stan Brown



Problem Set 1

1 (a) There are three coins, and each has two possible outcomes, so the sample space will have 2³ = 8 entries.
 

(b) S = { HHH, HHT, HTH, HTT, THH, THT, TTH, TTT }

(c) Three events out of eight equally likely events: P(2H) = 3/8

Common mistake: Sometimes students write the sample space correctly but miss one of the combinations of 2 heads. I wish I could offer some “magic bullet” for counting correctly, but the only advice I have is just to be really careful.
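Python isn’t required for this course, but if you’d like a brute-force check of the count, this short sketch enumerates the sample space and counts the two-head outcomes:

```python
from itertools import product

# Brute-force check: list all 2^3 = 8 outcomes of three coin flips.
sample_space = ["".join(flips) for flips in product("HT", repeat=3)]
print(sample_space)

# Count the outcomes with exactly two heads.
two_heads = [s for s in sample_space if s.count("H") == 2]
print(len(two_heads), "out of", len(sample_space))  # 3 out of 8
```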

2
Service type        Prob.
Landline and cell   58.2%
Landline only       37.4%
Cell only            2.8%
No phone             1.6%
Total              100.0%

(a) In a probability model, the probabilities must add to 1 (= 100%). The given probabilities add to 62.6%. What is the missing 37.4%? They’ve accounted for cell and landline, cell only, and nothing; the remaining possibility is landline only. The completed model is shown above.

(b) P(Landline) = P(Landline only) + P(Landline and cell)

P(Landline) = 37.4% + 58.2% = 95.6%

Remark: “Landline” and “cell” are not disjoint events, because a given household could have both. But “landline only” and “landline and cell” are disjoint, because a given household can’t simultaneously have a landline without a cell phone and a landline with one.

3 No, because the events are not disjoint. The figures are for being struck or attacked, not killed. You’d have to be pretty unlucky to be struck by lightning and attacked by a shark in the same year, but it could happen. If the question were about being killed by lightning or by a shark, then the events would be disjoint and you could add the probabilities.
4 (a) P(not A) = 1−P(A) = 1−0.7 → P(not A) = 0.3

(b) That A and B are complementary means that one or the other must happen, but not both. Therefore P(B) = P(not A) → P(B) = 0.3

(c) Since the events are complementary, they can’t both happen: P(A and B) = 0

Common mistake: Many students get (c) wrong, giving an answer of 1. If events are complementary, they can’t both happen at the same time. That means P(A and B) must be 0, the probability of something impossible.

Maybe those students were thinking of P(A or B). If A and B are complementary, then one or the other must happen, so P(A or B) = P(A) + P(B) = 1. But part (c) was about probability and, not probability or.

5 Yes, because the events are disjoint or mutually exclusive: a person might have both cancer and heart disease, but the death certificate will list one cause of death. (1/5 + 1/7 ≈ 34%.)
6 P(divorced | man) is the probability that a randomly selected man is divorced, or the proportion of men who are divorced. P(man | divorced) is the probability that a randomly selected divorced person is a man, or the proportion of divorced persons that are men.
7 If the probability of a future event is zero, then that event is impossible. If the probability of a past event is zero, that just means that it didn’t happen in the cases that were studied, not that it couldn’t have happened.

This is the difference between theoretical and empirical probability. A truly impossible event has a theoretical probability of zero. But the 0 out of 412 figure is an empirical probability (based on past experience). Empirical probabilities are just estimates of the “real” theoretical probability. From the empirical 0/412, you can tell that the theoretical probability is very low, but not necessarily zero. In plain language, an unresolved complaint is unlikely, but just because it hasn’t happened yet doesn’t mean it can’t happen.

8 13/52 or 1/4

Common mistake: Students often try some sort of complicated calculation here. You would have to do that if conditions were stated on all five of those cards, but they weren’t. Think about it: any card has a 1/4 chance of being a spade.

9 S = { HH, HT, TH, TT }
(a) Three outcomes (HH, HT, TH) have at least one head. One of the three has both coins heads. Therefore the probability is 1/3.
(b) Two outcomes (HH, HT) have heads on the first coin. One of the two has both coins heads. Therefore the probability is 1/2.
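Not part of the original solution, but conditional probability by counting lends itself to a quick Python check: restrict the sample space to the given condition, then count within it.

```python
from fractions import Fraction

S = ["HH", "HT", "TH", "TT"]  # four equally likely outcomes

# (a) Condition on "at least one head": restrict to outcomes containing H.
at_least_one = [s for s in S if "H" in s]
p_a = Fraction(at_least_one.count("HH"), len(at_least_one))
print(p_a)  # 1/3

# (b) Condition on "first coin is heads".
first_head = [s for s in S if s[0] == "H"]
p_b = Fraction(first_head.count("HH"), len(first_head))
print(p_b)  # 1/2
```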
10

(a) 0.0171 × 0.0171 ≈ 0.0003

(b) The events are not independent. When a married couple are at home together or out together, any attack that involves one of them will involve the other also.

11 (a) P(divorced) = 22.8/219.7 ≈ 0.1038

(b) About 10.38% of American adults in 2006 were divorced. If you randomly selected an American adult in 2006, there was a 0.1038 probability that he or she was divorced.

(c) Empirical or experimental

(d) P(divorcedC) = 1−P(divorced) = 1−22.8/219.7 ≈ 0.8962
About 89.62% of American adults in 2006 were not divorced (or, had a marital status other than divorced).

(e) P(man and married) = 63.6/219.7 ≈ 0.2895 (You can’t use a formula on this one.)

(f) Add up P(man) and P(not man but married):

P(man or married) = 106.2/219.7 + 64.1/219.7 ≈ 0.7751

Alternative solution: By formula:

P(man or married) = P(man) + P(married) − P(man and married)

P(man or married) = 106.2/219.7 + 127.7/219.7 − 63.6/219.7 = 0.7751

Remember, math “or” means one or the other or both.

(g) What proportion of males were never married? 30.3/106.2 = 28.53%.

(h) P(man | married) uses the sub-subgroup of men within the subgroup of married persons.

P(man | married) = 63.6/127.7 = 0.4980

49.80% of married persons were men.

Remark: You might be surprised that it’s under 50%. Isn’t polygamy illegal in the US? Yes, it is. But the table considers only resident adults. Women tend to marry slightly earlier than men, so fewer grooms than brides are under 18. Also, soldiers deployed abroad are more likely to be male.

(i) P(married | man) uses the sub-subgroup of married persons within the subgroup of men.

P(married | man) = 63.6/106.2 ≈ 0.5989

59.89% of men were married.
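If you want to verify all the arithmetic in this problem at once, here is an optional Python sketch using the table counts (in millions of adults) quoted in the solution:

```python
# Counts in millions of adults, from the 2006 marital-status table.
total = 219.7
divorced = 22.8
men = 106.2
married = 127.7
man_and_married = 63.6

p_divorced = divorced / total                                  # (a)
p_not_divorced = 1 - p_divorced                                # (d)
p_man_or_married = (men + married - man_and_married) / total   # (f), "or" formula
p_man_given_married = man_and_married / married                # (h)
p_married_given_man = man_and_married / men                    # (i)

print(round(p_divorced, 4))           # 0.1038
print(round(p_not_divorced, 4))       # 0.8962
print(round(p_man_or_married, 4))     # 0.7751
print(round(p_man_given_married, 4))  # 0.498
print(round(p_married_given_man, 4))  # 0.5989
```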

12 P(five cards, all diamonds) = (13/52) × (12/51) × (11/50) × (10/49) × (9/48) ≈ 0.0005
(I was surprised that the probability is that high, about once every 2000 hands. And the probability of being dealt a five-card flush of any suit is four times that, about once in every 500 hands.)
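An optional Python check of that product of fractions (not part of the original solution) confirms the once-in-2000 figure:

```python
from fractions import Fraction
from math import prod

# Five diamonds in a row, without replacement:
# (13/52)(12/51)(11/50)(10/49)(9/48)
p_diamond_flush = prod(Fraction(13 - k, 52 - k) for k in range(5))
print(float(p_diamond_flush))      # ≈ 0.000495, about once in 2000 hands
print(float(4 * p_diamond_flush))  # any suit: ≈ 0.00198, about once in 500 hands
```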
13

(a) 3 of 20 M&Ms are yellow, so 17 are not yellow. You want the probability of three non-yellows in a row:
(17/20)×(16/19)×(15/18) ≈ 0.5965

(b) The probability is zero, since there are only two reds to start with.

14 You’re being asked about all three possibilities: two fail, one fails, none fail. Therefore the three probabilities must add up to 1, and you need to compute only two of them. It’s also important to note that the companies are independent: whether one fails has nothing to do with whether the other fails. (Without knowing that the companies are independent, you could not compute the probability that both fail.)

(a) Since the companies are independent, you can use the simple multiplication rule:

P(A bankrupt and W bankrupt) = P(A bankrupt) × P(W bankrupt)

P(A bankrupt and W bankrupt) = .9 × .8 = 0.72

At this point you could compute (b), but it’s a little messy because you need the probability that A fails and W is okay, plus the probability that A is okay and W fails. (c) looks easier, so do that first.

(c) “Neither bankrupt” means both are okay. Again, the events are independent so you can use the simple multiplication rule.

P(neither bankrupt) = P(A okay and W okay)

P(A okay) = 1−.9 = 0.1; P(W okay) = 1−.8 = 0.2

P(neither bankrupt) = .1 × .2 = 0.02

(b) is now a piece of cake.

P(only one bankrupt) = 1 − P(both bankrupt) − P(none bankrupt)

P(only one bankrupt) = 1 − .72 − .02 = 0.26

Remark: If you have time, it’s always good to check your work and work out (b) the long way. You have only independent events (whether A is okay or fails, whether W is okay or fails) and disjoint events (A fails and W okay, A okay and W fails). The “okay” probabilities were computed in part (c).

P(only one bankrupt) = (A bankrupt and W okay) or (A okay and W bankrupt)

P(only one bankrupt) = (.9 × .2) + (.1 × .8) = 0.26

Common mistake: When working this out the long way, students often solve only half the problem. But when you have probability of exactly one out of two, you have to consider both A-and-not-W and W-and-not-A.

You can’t use the “or” formula here, even if you studied it. That computes the probability of one or the other or both, but you need the probability of one or the other but not both.

Remark: If you computed all three probabilities the long way, pause a moment to check your work by adding them to make sure you get 1. Whenever possible, check your work with a second type of computation.
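That check-by-adding can be done in a few lines of Python, if you care to; this optional sketch computes all three probabilities and confirms they total 1:

```python
# Independent failures: P(A bankrupt) = 0.9, P(W bankrupt) = 0.8.
p_a_fail, p_w_fail = 0.9, 0.8

p_both = p_a_fail * p_w_fail                                           # (a)
p_neither = (1 - p_a_fail) * (1 - p_w_fail)                            # (c)
p_exactly_one = p_a_fail * (1 - p_w_fail) + (1 - p_a_fail) * p_w_fail  # (b), both sequences

print(round(p_both, 4))         # 0.72
print(round(p_neither, 4))      # 0.02
print(round(p_exactly_one, 4))  # 0.26
print(round(p_both + p_neither + p_exactly_one, 4))  # 1.0
```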

15 (a) (You can assume independence because it’s a small sample from a large population.) P(red1 and red2 and red3) = 0.13×0.13×0.13 ≈ 0.0022

(b) P(red) = 0.13; P(redC) = 1−0.13 = 0.87.
P(red1C and red2C and red3C) = 0.87×0.87×0.87 or 0.87³ = 0.6585

Common mistake: Students sometimes compute 1−.13³. But .13³ is the probability that all three are red, so 1−.13³ is the probability that fewer than three (0, 1, or 2) are red. You need the probability that zero are red, not the probability that 0, 1, or 2 are red. Think carefully about where your “not” condition must be applied!

(c) The complement is your friend with “at least” problems. The complement of “at least one is green” is “none of them is green”, which is the same as “every one is something other than green.”
P(green) = 0.16, P(non-green) = 1−0.16 = 0.84.
P(≥1 green of 3) = 1 − P(0 green of 3) = 1 − P(3 non-green of 3) = 1−0.84³ ≈ 0.4073

(d) (Sequences are the most practical way to solve this one.)
(A) G1 and G2C and G3C; (B) G1C and G2 and G3C; (C) G1C and G2C and G3
.16×(1−.16)×(1−.16) + (1−.16)×.16×(1−.16) + (1−.16)×(1−.16)×.16 ≈ 0.3387
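Here is an optional Python check of all four parts (not in the original solution), using P(red) = 0.13 and P(green) = 0.16 with the same independence assumption:

```python
p_red, p_green = 0.13, 0.16

p_all_red = p_red ** 3                                   # (a) all three red
p_no_red = (1 - p_red) ** 3                              # (b) zero red
p_at_least_one_green = 1 - (1 - p_green) ** 3            # (c) complement rule
p_exactly_one_green = 3 * p_green * (1 - p_green) ** 2   # (d) three orderings

print(round(p_all_red, 4))             # 0.0022
print(round(p_no_red, 4))              # 0.6585
print(round(p_at_least_one_green, 4))  # 0.4073
print(round(p_exactly_one_green, 4))   # 0.3387
```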

16 In “at least” and “no more than” probability problems, the complement is often your friend. The complement of “at least one had not attended” is “all had attended”. If the fans are randomly selected, their attendance is independent and you can use the simple multiplication rule.

P(all 5 attended) = 0.45^5 = 0.0185

P(at least 1 had not attended) = 1 − 0.0185 = 0.9815

17 Sequences are the way to go here:

(cherry1 and orange2) or (orange1 and cherry2)

Common mistake: There are two ways to get one of each: cherry followed by orange and orange followed by cherry. You have to consider both probabilities.

There are 11+9 = 20 sourballs in all, and Grace is choosing the sourballs without replacement (one would hope!), so the probabilities are:

(11/20)×(9/19) + (9/20)×(11/19) = 99/190 or about 0.5211
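Optionally, Python’s exact fractions make it easy to confirm the 99/190 answer:

```python
from fractions import Fraction

# 11 cherry + 9 orange sourballs, two draws without replacement,
# one of each flavor in either order.
p = Fraction(11, 20) * Fraction(9, 19) + Fraction(9, 20) * Fraction(11, 19)
print(p)         # 99/190
print(float(p))  # ≈ 0.5211
```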

18 The complement is your friend, and the complement of “win at least once in 5 years” is “win 0 times in 5 years” or “lose 5 times in 5 years”.

P(win ≥1) = 1−P(win 0) = 1−P(lose 5).

P(lose) = 1−P(win) = 1−(1/500) = 499/500

P(lose 5) = [P(lose)]^5 = (499/500)^5 = 0.9900

P(win ≥1) = 1−P(lose 5) = 1−0.9900 = 0.0100 or 1.00%

Common mistake: If you compute 1−(499/500)^5 in one step and get 0.00996008, be careful with your rounding! 0.00996... rounds to 0.0100 or 1%, not 0.0010 or 0.1%.

Common mistake: 1/500 + 1/500 + ... is wrong. You can add probabilities only when events are disjoint, and wins in the various years are not disjoint events. It is possible (however unlikely) to win more than once; otherwise it would make no sense for the problem to talk about winning “at least once”.

Common mistake: You can’t multiply 5 by anything. Take an analogy: the probability of heads in one coin flip is 50%. Does that mean that the probability of heads in four flips is 4×50% = 200%? Obviously not! Any process that leads to a probability >1 must be incorrect.

Common mistake: 1−(1/500)^5 is wrong. (1/500)^5 is the probability of winning five years in a row, so 1−(1/500)^5 is the probability of winning 0 to 4 times. What the problem asks is the probability of winning 1 to 5 times.
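The complement computation is easy to verify in Python, if you like (not part of the original solution):

```python
p_win = 1 / 500  # chance of winning in any one year

p_lose_5 = (1 - p_win) ** 5           # lose all five years
p_win_at_least_once = 1 - p_lose_5    # complement: win one or more times

print(round(p_lose_5, 4))             # 0.99
print(round(p_win_at_least_once, 4))  # 0.01
```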

19 (a), (b), and (c) are all the possibilities there are, so the probabilities must total 1. You can compute two of them and then subtract from 1 to get the third.

(a) P(not first and not second) = P(not first) × P(not second) = (1−.7)×(1−.6) = 0.12

(c) P(first and second) = P(first) × P(second) = .7×.6 = 0.42

(b) 1−.12−.42 = 0.46

Alternative: You could compute (b) directly too, using sequences:

P(exactly one copy recorded) =

P(first and not second) + P(second and not first) =

P(first)×(1−P(second)) + P(second)×(1−P(first)) =

.7×(1−.6) + .6×(1−.7) = 0.46

A very common mistake on problems like this is writing down only one of the sequences. When you have exactly one success (or exactly any definite number), almost always there are multiple ways to get to that outcome.

You can’t use the “or” formula here, even if you studied it. That computes the probability of one or the other or both, but you need the probability of one or the other but not both.

Problem Set 2

20 (a) P(ticket on route A) = P(taking route A) × P(speed trap on route A) = 0.2×0.4 = 0.08. In the same way, the probabilities of getting a ticket on routes B, C, D are 0.1×0.3 = 0.03, 0.5×0.2 = 0.10, and 0.2×0.3 = 0.06. He can’t take more than one route to work on a given day, so those are disjoint events. The probability that he gets a ticket on any one morning is therefore 0.08+0.03+0.10+0.06 = 0.27.

(b) The probability of not getting a ticket on a given morning is 1−0.27 = 0.73. The probability of getting no tickets on five mornings in a row is therefore 0.73^5 ≈ 0.2073 or about 21%.
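Because the routes are disjoint, part (a) is a sum of products, which this optional Python sketch tallies:

```python
# Each route: (probability of taking it, probability of a speed trap there).
routes = {"A": (0.2, 0.4), "B": (0.1, 0.3), "C": (0.5, 0.2), "D": (0.2, 0.3)}

# (a) Disjoint routes, so add the ticket probabilities for each route.
p_ticket = sum(p_route * p_trap for p_route, p_trap in routes.values())
print(round(p_ticket, 2))            # 0.27

# (b) No tickets on five independent mornings.
p_no_ticket_5_days = (1 - p_ticket) ** 5
print(round(p_no_ticket_5_days, 4))  # 0.2073
```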

21 Two events A and B are independent if P(A|B) = P(A).

P(man) = 106.2/219.7 ≈ 0.4834

P(man|divorced) = 9.7/22.8 ≈ 0.4254

Since P(man|divorced) ≠ P(man), the events are not independent.

Alternative solution:  You could equally well show that P(divorced|man) ≠ P(divorced):

P(divorced|man) = 9.7/106.2 ≈ 0.0913

P(divorced) = 22.8/219.7 ≈ 0.1038
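An optional Python check of the independence test, using the same table counts in millions:

```python
# Table counts in millions: all adults, men, divorced adults, divorced men.
total, men, divorced, divorced_men = 219.7, 106.2, 22.8, 9.7

p_man = men / total
p_man_given_divorced = divorced_men / divorced

print(round(p_man, 4))                 # 0.4834
print(round(p_man_given_divorced, 4))  # 0.4254

# Independence would require these to be equal; they clearly aren't.
print(abs(p_man - p_man_given_divorced) > 0.01)  # True: not independent
```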

22 What’s the probability of ten of the same flip in a row? In other words, given either result on the first flip, what’s the probability that the next nine will be the same? That must be (1/2)^9 = 1/512. You therefore expect this to happen about once in every 500 flips, or about twice in every thousand.
23 P(open door) = P(unlocked) + P(locked)×P(right key)
P(open door) = 0.5 + 0.5×(2/5) = 0.7
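This two-branch tree (unlocked, or locked with a lucky key choice) can be checked with a couple of lines of Python, if you care to:

```python
# Branch 1: door is unlocked (probability 0.5) -- it opens.
# Branch 2: door is locked (0.5) and the chosen key is one of the 2 right
#           keys out of 5.
p_unlocked = 0.5
p_right_key = 2 / 5

p_open = p_unlocked + (1 - p_unlocked) * p_right_key
print(p_open)  # 0.7
```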


Updates and new info: https://BrownMath.com/swt/
