ADDITIONAL MATHEMATICS PROJECT WORK 2016, P.PINANG STATE (TIMUR LAUT) - ANSWER http://addmathsprojectwork.blogspot.my/
PART 1
a)
INTRODUCTION
Probability is a way of expressing knowledge or belief that an event will occur or has occurred. In mathematics the concept has been given an exact meaning in probability theory, which is used extensively in such areas of study as mathematics, statistics, finance, gambling, science, and philosophy to draw conclusions about the likelihood of potential events and the underlying mechanics of complex systems. Probability has a dual aspect: on the one hand the probability or likelihood of hypotheses given the evidence for them, and on the other hand the behavior of stochastic processes such as the throwing of dice or coins. The study of the former is historically older, appearing for example in the law of evidence, while the mathematical treatment of dice began with the work of Pascal and Fermat in the 1650s. Probability is distinguished from statistics. While statistics deals with data and inferences from it, (stochastic) probability deals with the stochastic (random) processes which lie behind data or outcomes.

HISTORY

Probable and likely and their cognates in other modern languages derive from medieval learned Latin probabilis and verisimilis, deriving from Cicero and generally applied to an opinion to mean plausible or generally approved.
Ancient and medieval law of evidence developed a grading of degrees of proof, probabilities, presumptions and half-proof to deal with the uncertainties of evidence in court. In Renaissance times, betting was discussed in terms of odds such as "ten to one" and maritime insurance premiums were estimated based on intuitive risks, but there was no theory on how to calculate such odds or premiums.
The mathematical methods of probability arose in the correspondence of Pierre de Fermat and Blaise Pascal (1654) on such questions as the fair division of the stake in an interrupted game of chance. Christiaan Huygens (1657) gave a comprehensive treatment of the subject. Jacob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's The Doctrine of Chances (1718) put probability on a sound mathematical footing, showing how to calculate a wide range of complex probabilities. Bernoulli proved a version of the fundamental law of large numbers, which states that in a large number of trials, the average of the outcomes is likely to be very close to the expected value - for example, in 1000 throws of a fair coin, it is likely that there are close to 500 heads (and the larger the number of throws, the closer to half-and-half the proportion is likely to be). The power of probabilistic methods in dealing with uncertainty was shown by Gauss's determination of the orbit of Ceres from a few observations. The theory of errors used the
method of least squares to correct error-prone observations, especially in astronomy, based on the assumption of a normal distribution of errors to determine the most likely true value. Towards the end of the nineteenth century, a major success of explanation in terms of probabilities was the statistical mechanics of Ludwig Boltzmann and J. Willard Gibbs, which explained properties of gases such as temperature in terms of the random motions of large numbers of particles. The field of the history of probability itself was established by Isaac Todhunter's monumental History of the Mathematical Theory of Probability from the Time of Pascal to that of Lagrange
(1865). Probability and statistics became closely connected through the work on hypothesis testing of R. A. Fisher and Jerzy Neyman, which is now widely applied in biological and psychological experiments and in clinical trials of drugs. A hypothesis, for example that a drug is usually effective, gives rise to a probability distribution that would be observed if the hypothesis is true. If observations approximately agree with the hypothesis, it is confirmed; if not, the hypothesis is rejected. The theory of stochastic processes broadened into such areas as Markov processes and Brownian motion, the random movement of tiny particles suspended in a fluid. That provided a model for the study of random fluctuations in stock markets, leading to the use of sophisticated probability models in mathematical finance, including such successes as the widely used Black-Scholes formula for the valuation of options. The twentieth century also saw long-running disputes on the interpretations of probability. In mid-century frequentism was dominant, holding that probability means long-run relative frequency in a large number of trials. At the end of the century there was some revival of the Bayesian view, according to which the fundamental notion of probability is how well a proposition is supported by the evidence for it.

APPLICATIONS
Two major applications of probability theory in everyday life are in risk assessment and in trade on commodity markets. Governments typically apply probabilistic methods in environmental regulation, where it is called "pathway analysis", often measuring well-being using methods that are stochastic in nature, and choosing projects to undertake based on statistical analyses of their probable effect on the population as a whole. A good example is the effect of the perceived probability of any widespread Middle East conflict on oil prices, which has ripple effects in the economy as a whole. An assessment by a commodity trader that a war is more likely rather than less likely sends prices up or down, and signals that opinion to other traders. Accordingly, the probabilities are neither assessed independently nor necessarily very rationally. The theory of behavioral finance emerged to describe the effect of such groupthink on pricing, on policy, and on peace and conflict. It can reasonably be said that the discovery of rigorous methods to assess and combine probability assessments has had a profound effect on modern society. Accordingly, it may be of some importance to most citizens to understand how odds and probability assessments are made, and how they contribute to reputations and to decisions, especially in a democracy.
Another significant application of probability theory in everyday life is reliability. Many consumer products, such as automobiles and consumer electronics, utilize reliability theory in the design of the product in order to reduce the probability of failure. The probability of failure may be closely associated with the product's warranty.

b)
Empirical Probability of an event is an "estimate" that the event will happen based on how often the event occurs after collecting data or running an experiment (in a large number of trials). It is based specifically on direct observations or experiences.

Empirical Probability Formula

P(E) = (number of times the event E occurs) / (total number of trials)

where P(E) = probability that an event, E, will occur.
Example: A survey was conducted to determine students' favorite breeds of dogs. Each student chose only one breed.

Dog    Collie   Spaniel   Lab   Boxer   Pitbull   Other
#      10       15        35    8       5         12

What is the probability that a student's favorite dog breed is Lab?

Answer: 35 out of the 85 students chose Lab. The probability is P(Lab) = 35/85 = 7/17.
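As a quick illustration, the survey calculation above can be reproduced in a few lines of code. This is a minimal sketch (the variable names and the use of Python are our own choice, not part of the original project):

```python
from fractions import Fraction

# Survey results: each student chose exactly one favorite breed.
survey = {"Collie": 10, "Spaniel": 15, "Lab": 35,
          "Boxer": 8, "Pitbull": 5, "Other": 12}

total_students = sum(survey.values())          # 85
p_lab = Fraction(survey["Lab"], total_students)

print(p_lab)          # 7/17
print(float(p_lab))   # 0.4117...
```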
Theoretical Probability of an event is the number of ways that the event can occur, divided by the total number of outcomes. It is finding the probability of events that come from a sample space of known equally likely outcomes.

Theoretical Probability Formula

P(E) = n(E) / n(S)

where P(E) = probability that an event, E, will occur; n(E) = number of equally likely outcomes of E; n(S) = number of equally likely outcomes of sample space S.
Example 1: Find the probability of rolling a six on a fair die.

Answer: The sample space for rolling a die has 6 equally likely results: {1, 2, 3, 4, 5, 6}. The probability of rolling a 6 is one out of 6, or P(6) = 1/6.
Example 2: Find the probability of tossing a fair die and getting an odd number.

Answer:
event E: tossing an odd number
outcomes in E: {1, 3, 5}
sample space S: {1, 2, 3, 4, 5, 6}
P(E) = n(E) / n(S) = 3/6 = 1/2
Comparing Theoretical Probability and Empirical Probability
Karen and Jason roll two dice 50 times and record the sum of each roll in the accompanying chart.
1.) What is their empirical probability of rolling a 7?
2.) What is the theoretical probability of rolling a 7?
3.) How do the empirical and theoretical probabilities compare?
Solution:
1.) The empirical probability (experimental or observed probability) is 13/50 = 26%, since a sum of 7 appeared 13 times in the 50 recorded rolls.
2.) The theoretical probability (based upon what is possible when working with two dice) is 6/36 = 1/6 ≈ 16.7% (see Table 1 of possible sums when rolling two dice in Part 3).
3.) Karen and Jason rolled more 7's than would be expected theoretically.
Recorded sums of the 50 rolls: 3, 5, 5, 4, 6, 7, 7, 5, 9, 10, 12, 9, 6, 5, 7, 8, 7, 4, 11, 6, 8, 8, 10, 6, 7, 4, 4, 5, 7, 9, 9, 7, 8, 11, 6, 5, 4, 7, 7, 4, 3, 6, 7, 7, 7, 8, 6, 7, 8, 9
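The comparison can be checked with a short script. The sketch below (Python, our own illustration) counts the sevens in Karen and Jason's data and compares the empirical result with the theoretical 1/6:

```python
from fractions import Fraction

rolls = [3, 5, 5, 4, 6, 7, 7, 5, 9, 10, 12, 9, 6, 5, 7, 8, 7, 4, 11, 6,
         8, 8, 10, 6, 7, 4, 4, 5, 7, 9, 9, 7, 8, 11, 6, 5, 4, 7, 7, 4,
         3, 6, 7, 7, 7, 8, 6, 7, 8, 9]

empirical = Fraction(rolls.count(7), len(rolls))   # 13/50
theoretical = Fraction(6, 36)                      # 1/6

print(f"empirical   P(7) = {empirical} = {float(empirical):.1%}")      # 26.0%
print(f"theoretical P(7) = {theoretical} = {float(theoretical):.1%}")  # 16.7%
```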
PART 2
a)
There are three players, referred to as P1, P2 and P3. A die is a cube with six faces, and the faces carry 1, 2, 3, 4, 5 and 6 dots respectively. Thus, the possible outcomes are: {1, 2, 3, 4, 5, 6}
b) When two dice are tossed simultaneously, the possible outcomes are as shown in the table below:
DIE 2:        1       2       3       4       5       6
DIE 1
  1         (1,1)   (1,2)   (1,3)   (1,4)   (1,5)   (1,6)
  2         (2,1)   (2,2)   (2,3)   (2,4)   (2,5)   (2,6)
  3         (3,1)   (3,2)   (3,3)   (3,4)   (3,5)   (3,6)
  4         (4,1)   (4,2)   (4,3)   (4,4)   (4,5)   (4,6)
  5         (5,1)   (5,2)   (5,3)   (5,4)   (5,5)   (5,6)
  6         (6,1)   (6,2)   (6,3)   (6,4)   (6,5)   (6,6)
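The 36-outcome sample space can also be generated programmatically. A minimal sketch (Python, our own illustration, not part of the original project):

```python
from itertools import product

# Cartesian product of the two dice gives all 36 equally likely outcomes.
sample_space = list(product(range(1, 7), repeat=2))

print(len(sample_space))   # 36
print(sample_space[:6])    # [(1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (1, 6)]
```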
PART 3
a)
Table 1

Sum of dots on both turned-up faces (x)   Possible outcomes                         Probability, P(x)
2                                         (1,1)                                     1/36
3                                         (1,2),(2,1)                               2/36
4                                         (1,3),(2,2),(3,1)                         3/36
5                                         (1,4),(2,3),(3,2),(4,1)                   4/36
6                                         (1,5),(2,4),(3,3),(4,2),(5,1)             5/36
7                                         (1,6),(2,5),(3,4),(4,3),(5,2),(6,1)       6/36
8                                         (2,6),(3,5),(4,4),(5,3),(6,2)             5/36
9                                         (3,6),(4,5),(5,4),(6,3)                   4/36
10                                        (4,6),(5,5),(6,4)                         3/36
11                                        (5,6),(6,5)                               2/36
12                                        (6,6)                                     1/36
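Table 1 can be reproduced by grouping the sample space by sum. A sketch (Python, our own illustration):

```python
from collections import defaultdict
from fractions import Fraction
from itertools import product

# Group the 36 equally likely outcomes by the sum of the two faces.
outcomes_by_sum = defaultdict(list)
for a, b in product(range(1, 7), repeat=2):
    outcomes_by_sum[a + b].append((a, b))

for x in range(2, 13):
    p = Fraction(len(outcomes_by_sum[x]), 36)
    print(x, outcomes_by_sum[x], p)
```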
b)
A = {the two numbers are the same} = {(1,1),(2,2),(3,3),(4,4),(5,5),(6,6)}
P(A) = n(A) / n(S) = 6/36 = 1/6
B = {the product of the two numbers is greater than 25} = {(5,6),(6,5),(6,6)}
(5 × 5 = 25 does not exceed 25, so (5,5) is excluded)
P(B) = n(B) / n(S) = 3/36 = 1/12
C = {both numbers are prime or the difference between the two numbers is even}
  = {(2,2),(2,3),(2,5),(3,2),(3,3),(3,5),(5,2),(5,3),(5,5)} ∪ {(1,3),(1,5),(2,4),(2,6),(3,1),(3,5),(4,2),(4,6),(5,1),(5,3),(6,2),(6,4)}
P(C) = 9/36 + 12/36 − 2/36 = 19/36
(the outcomes (3,5) and (5,3) appear in both sets, so they are subtracted once to avoid double counting)
D = {the sum of the two numbers is odd and both numbers are perfect squares}
  = {(1,2),(1,4),(1,6),(2,1),(2,3),(2,5),(3,2),(3,4),(3,6),(4,1),(4,3),(4,5),(5,2),(5,4),(5,6),(6,1),(6,3),(6,5)} ∩ {(1,1),(1,4),(4,1),(4,4)}
  = {(1,4),(4,1)}
P(D) = n(D) / n(S) = 2/36 = 1/18
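These four probabilities can be double-checked by enumerating the sample space. The sketch below (Python, our own illustration; the helper names are ours) tests each event condition directly:

```python
from fractions import Fraction
from itertools import product

S = list(product(range(1, 7), repeat=2))   # 36 equally likely outcomes
PRIMES = {2, 3, 5}
SQUARES = {1, 4}   # perfect squares that can appear on a die face

def prob(event):
    """Theoretical probability: favourable outcomes over 36."""
    return Fraction(sum(1 for o in S if event(*o)), len(S))

print(prob(lambda a, b: a == b))                                   # 1/6
print(prob(lambda a, b: a * b > 25))                               # 1/12
print(prob(lambda a, b: (a in PRIMES and b in PRIMES)
                        or (a != b and (a - b) % 2 == 0)))         # 19/36
print(prob(lambda a, b: (a + b) % 2 == 1
                        and a in SQUARES and b in SQUARES))        # 1/18
```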
PART 4
a)

Sum of the two numbers (x)   Frequency (f)   fx     fx²
2                            2               4      8
3                            5               15     45
4                            5               20     80
5                            1               5      25
6                            4               24     144
7                            8               56     392
8                            8               64     512
9                            5               45     405
10                           7               70     700
11                           3               33     363
12                           2               24     288
Total                        50              360    2962
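The summary statistics below can be verified with a short script. A sketch (Python, our own illustration; the same helper can be reused on the n = 100 table in part (c)):

```python
from math import sqrt

def grouped_stats(freq):
    """Mean, variance and standard deviation from a {value: frequency} table."""
    n = sum(freq.values())
    mean = sum(x * f for x, f in freq.items()) / n
    var = sum(x * x * f for x, f in freq.items()) / n - mean ** 2
    return mean, var, sqrt(var)

freq_50 = {2: 2, 3: 5, 4: 5, 5: 1, 6: 4, 7: 8, 8: 8, 9: 5, 10: 7, 11: 3, 12: 2}
print(grouped_stats(freq_50))   # (7.2, 7.4, 2.7203) up to floating-point rounding
```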
Mean = 360 ÷ 50 = 7.2
Variance = (2962 ÷ 50) − 7.2² = 7.4
Standard deviation = √7.4 = 2.72

b) When the number of tosses of the two dice is increased to 100, the value of the mean will also change.
c) The two dice were tossed 100 times, with the following results:

x       f       fx      fx²
2       6       12      24
3       9       27      81
4       11      44      176
5       12      60      300
6       13      78      468
7       10      70      490
8       7       56      448
9       12      108     972
10      7       70      700
11      6       66      726
12      7       84      1008
Total   100     Σfx = 675   Σfx² = 5393

Mean = Σfx / Σf = 675 / 100 = 6.75
Variance = Σfx² / Σf − (mean)² = 5393/100 − 6.75² = 8.3675
Standard deviation = √8.3675 = 2.8927

Comment on the prediction: the prediction in (b) is proven, since the mean changed from 7.2 (n = 50) to 6.75 (n = 100).
PART 5

When two dice are tossed simultaneously, the actual mean and variance of the sum of all dots on the turned-up faces can be determined by using the formulae below:

Mean = Σ x·P(x)
Variance = Σ x²·P(x) − (mean)²
a)
Based on Table 1, the actual mean, the variance and the standard deviation of the sum of all dots on the turned-up faces are determined by using the formulae given.
x      x²      P(x)     x·P(x)    x²·P(x)
2      4       1/36     1/18      1/9
3      9       2/36     1/6       1/2
4      16      3/36     1/3       4/3
5      25      4/36     5/9       25/9
6      36      5/36     5/6       5
7      49      6/36     7/6       49/6
8      64      5/36     10/9      80/9
9      81      4/36     1         9
10     100     3/36     5/6       25/3
11     121     2/36     11/18     121/18
12     144     1/36     1/3       4

Σ x·P(x) = 7        Σ x²·P(x) = 329/6

Mean = Σ x·P(x) = 7
Variance = Σ x²·P(x) − (mean)² = 329/6 − 7² = 35/6 ≈ 5.8333
Standard deviation = √5.8333 = 2.4152
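The same theoretical values can be computed exactly from the distribution in Table 1. A sketch (Python, our own illustration), using exact fractions to avoid rounding:

```python
from fractions import Fraction
from itertools import product
from math import sqrt

# P(x) for the sum of two dice, derived from the 36-outcome sample space.
p = {x: Fraction(sum(1 for a, b in product(range(1, 7), repeat=2)
                     if a + b == x), 36) for x in range(2, 13)}

mean = sum(x * px for x, px in p.items())                       # 7
variance = sum(x * x * px for x, px in p.items()) - mean ** 2   # 35/6
print(mean, variance, sqrt(variance))                           # 7 35/6 2.4152...
```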
b) The table below compares the mean, variance and standard deviation obtained in Part 4 with the theoretical values from Part 5.

                      PART 4 (n = 50)   PART 4 (n = 100)   PART 5 (theoretical)
Mean                  7.2               6.75               7.00
Variance              7.4               8.3675             5.8333
Standard deviation    2.72              2.8927             2.4152
We can see that the mean, variance and standard deviation obtained through the experiment in Part 4 are different from, but close to, the theoretical values in Part 5. The empirical mean stayed close to the theoretical mean of 7 (7.2 at n = 50 and 6.75 at n = 100), in accordance with the Law of Large Numbers. Nevertheless, the empirical variance and empirical standard deviation obtained in Part 4 moved further from the theoretical values in Part 5. This appears to contradict the Law of Large Numbers, probably because:
a. The sample (n = 100) is not large enough for the values of the mean, variance and standard deviation to settle down.
b. The Law of Large Numbers is a statement about long-run behaviour, not a guarantee for any particular experiment; deviations like this are still possible, though their probability shrinks as the number of trials grows.
In conclusion, the empirical mean, variance and standard deviation can differ from the theoretical values. As the number of trials (the sample size) gets bigger, the empirical values should get closer to the theoretical values. However, deviations are still possible, especially when the number of trials is not large enough.
c) The range of the mean: 6 ≤ mean ≤ 7.2

Conjecture: As the number of tosses, n, increases, the mean will get closer to 7, the theoretical mean. The image below supports this conjecture: we can see that after about 500 tosses the empirical mean becomes very close to the theoretical mean, which is 3.5. (Take note that this is an experiment of tossing 1 die, not 2 dice as in our experiment.)

[Image: running average of single-die rolls converging towards 3.5 as the number of tosses grows.]
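The conjecture can also be tested for two dice directly. The sketch below (Python, our own illustration) tracks the running mean of the sum as the number of tosses grows; it should approach the theoretical mean of 7:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

total, n = 0, 0
checkpoints = {50, 100, 500, 5000, 50000}
while n < 50000:
    n += 1
    total += random.randint(1, 6) + random.randint(1, 6)
    if n in checkpoints:
        print(f"n = {n:6d}   running mean = {total / n:.4f}")
# The running mean drifts towards the theoretical mean of 7 as n grows.
```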
FURTHER EXPLORATION
In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed. For example, a single roll of a six-sided die produces one of the numbers 1, 2, 3, 4, 5, 6, each with equal probability. Therefore, the expected value of a single die roll is
(1 + 2 + 3 + 4 + 5 + 6) / 6 = 3.5

According to the law of large numbers, if a large number of dice are rolled, the average of their values (sometimes called the sample mean) is likely to be close to 3.5, with the accuracy increasing as more dice are rolled. Similarly, when a fair coin is flipped once, the expected value of the number of heads is equal to one half. Therefore, according to the law of large numbers, the proportion of heads in a large number of coin flips should be roughly one half. In particular, the proportion of heads after n flips will almost surely converge to one half as n approaches infinity. Though the proportion of heads (and tails) approaches half, almost surely the absolute (nominal) difference in the number of heads and tails will become large as the number of flips becomes large. That is, the probability that the absolute difference is a small number approaches zero as the number of flips becomes large. Also, almost surely the ratio of the absolute difference to the number of flips will approach zero. Intuitively, the expected absolute difference grows, but at a slower rate than the number of flips, as the number of flips grows.

The LLN is important because it "guarantees" stable long-term results for random events. For example, while a casino may lose money in a single spin of the roulette wheel, its earnings will tend towards a predictable percentage over a large number of spins. Any winning streak by a player will eventually be overcome by the parameters of the game. It is important to remember that the LLN only applies (as the name indicates) when a large number of observations are considered. There is no principle that a small number of observations will converge to the expected value or that a streak of one value will immediately be "balanced" by the others. See the Gambler's fallacy.
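The coin-flip behaviour described above, where the proportion of heads converges to one half while the absolute head/tail difference tends to grow, can be seen in a small simulation. A sketch (Python, our own illustration):

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

heads = 0
for n in range(1, 1_000_001):
    heads += random.randint(0, 1)   # 1 = heads, 0 = tails
    if n in (100, 10_000, 1_000_000):
        diff = abs(heads - (n - heads))   # |#heads - #tails|
        print(f"n = {n:7d}   proportion = {heads / n:.4f}   |H - T| = {diff}")
# The proportion settles near 0.5, while |H - T| typically keeps growing.
```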