Tugas Ichakj


7/28/2019 Tugas Ichakj

A chi-squared test, also referred to as a chi-square test or χ² test, is any statistical hypothesis test in which the sampling distribution of the test statistic is a chi-squared distribution when the null hypothesis is true, or any test in which this is asymptotically true, meaning that the sampling distribution (if the null hypothesis is true) can be made to approximate a chi-squared distribution as closely as desired by making the sample size large enough.

Some examples of chi-squared tests where the chi-squared distribution is only approximately valid:

- Pearson's chi-squared test, also known as the chi-squared goodness-of-fit test or chi-squared test for independence. When the chi-squared test is mentioned without any modifiers or without other precluding context, this test is usually meant (for an exact test used in place of χ², see Fisher's exact test).
- Yates's correction for continuity, also known as Yates's chi-squared test.
- Cochran–Mantel–Haenszel chi-squared test.
- McNemar's test, used in certain 2 × 2 tables with pairing.
- Tukey's test of additivity.
- The portmanteau test in time-series analysis, testing for the presence of autocorrelation.
- Likelihood-ratio tests in general statistical modelling, for testing whether there is evidence of the need to move from a simple model to a more complicated one (where the simple model is nested within the complicated one).

One case where the distribution of the test statistic is an exact chi-squared distribution is the test that the variance of a normally distributed population has a given value based on a sample variance. Such a test is uncommon in practice because values of variances to test against are seldom known exactly.

Chi-squared test for variance in a normal population

If a sample of size n is taken from a population having a normal distribution, then there is a well-known result (see distribution of the sample variance) which allows a test to be made of whether the variance of the population has a pre-determined value. For example, a manufacturing process might have been in stable condition for a long period, allowing a value for the variance to be determined essentially without error. Suppose that a variant of the process is being tested, giving rise to a small sample of n product items whose variation is to be tested. The test statistic T in this instance could be set to be the sum of squares about the sample mean, divided by the nominal value for the variance (i.e. the value to be tested as holding). Then T has a chi-squared distribution with n − 1 degrees of freedom. For example, if the sample size is 21, the acceptance region for T for a significance level of 5% is the interval 9.59 to 34.17.
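The test statistic T just described can be sketched in a few lines of Python. The sample data below is a placeholder (not from the text), and the 5% acceptance region 9.59 to 34.17 for n = 21 is quoted from the paragraph above rather than computed, since chi-squared quantiles require a table or a library routine (e.g. scipy.stats.chi2.ppf).

```python
def variance_test_statistic(sample, nominal_variance):
    # T = (sum of squares about the sample mean) / nominal variance;
    # under the null hypothesis T has a chi-squared distribution
    # with n - 1 degrees of freedom.
    n = len(sample)
    mean = sum(sample) / n
    sum_sq = sum((x - mean) ** 2 for x in sample)
    return sum_sq / nominal_variance

# Placeholder sample of n = 21 items: mean 10, sum of squares about mean 770
sample = list(range(21))
T = variance_test_statistic(sample, nominal_variance=38.5)  # -> 20.0

# Acceptance region quoted from the text for n = 21 at the 5% level:
inside = 9.59 < T < 34.17  # True: do not reject the nominal variance
```

Here T = 20.0 falls inside the acceptance region, so the nominal variance of 38.5 would not be rejected for this (made-up) sample.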


    Chi-Square Test

Chi-square is a statistical test commonly used to compare observed data with data we would expect to obtain according to a specific hypothesis. For example, if, according to Mendel's laws, you expected 10 of 20 offspring from a cross to be male and the actual observed number was 8 males, then you might want to know about the "goodness of fit" between the observed and expected. Were the deviations (differences between observed and expected) the result of chance, or were they due to other factors? How much deviation can occur before you, the investigator, must conclude that something other than chance is at work, causing the observed to differ from the expected? The chi-square test is always testing what scientists call the null hypothesis, which states that there is no significant difference between the expected and observed result.

The formula for calculating chi-square (χ²) is:

χ² = Σ (o − e)² / e

That is, chi-square is the sum of the squared difference between observed (o) and expected (e) data (or the deviation, d), divided by the expected data in all possible categories.
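The formula translates directly into code. A minimal sketch in Python (the function name is ours), applied to the earlier male/female example, where 8 males and therefore 12 females were observed against 10 of each expected:

```python
def chi_square(observed, expected):
    # chi-square = sum over all categories of (o - e)^2 / e
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 8 males observed vs 10 expected, so 12 females observed vs 10 expected:
result = chi_square([8, 12], [10, 10])  # -> 0.8
```

Note that both categories contribute: (8 − 10)²/10 + (12 − 10)²/10 = 0.4 + 0.4 = 0.8.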

For example, suppose that a cross between two pea plants yields a population of 880 plants, 639 with green seeds and 241 with yellow seeds. You are asked to propose the genotypes of the parents. Your hypothesis is that the allele for green is dominant to the allele for yellow and that the parent plants were both heterozygous for this trait. If your hypothesis is true, then the predicted ratio of offspring from this cross would be 3:1 (based on Mendel's laws) as predicted from the results of the Punnett square (Figure B.1).

Figure B.1 - Punnett Square. Predicted offspring from cross between green and yellow-seeded plants. Green (G) is dominant (3/4 green; 1/4 yellow).

To calculate χ², first determine the number expected in each category. If the ratio is 3:1 and the total number of observed individuals is 880, then the expected numerical values should be 660 green and 220 yellow.

Chi-square requires that you use numerical values, not percentages or ratios.


Then calculate χ² using this formula, as shown in Table B.1. Note that we get a value of 2.668 for χ². But what does this number mean? Here's how to interpret the χ² value:

1. Determine degrees of freedom (df). Degrees of freedom can be calculated as the number of categories in the problem minus 1. In our example, there are two categories (green and yellow); therefore, there is 1 degree of freedom.

2. Determine a relative standard to serve as the basis for accepting or rejecting the hypothesis. The relative standard commonly used in biological research is p > 0.05. The p value is the probability that the deviation of the observed from that expected is due to chance alone (no other forces acting). In this case, using p > 0.05, you would expect any deviation to be due to chance alone 5% of the time or less.

3. Refer to a chi-square distribution table (Table B.2). Using the appropriate degrees of freedom, locate the value closest to your calculated chi-square in the table. Determine the closest p (probability) value associated with your chi-square and degrees of freedom. In this case (χ² = 2.668), the p value is about 0.10, which means that there is a 10% probability that any deviation from expected results is due to chance only. Based on our standard of p > 0.05, this is within the range of acceptable deviation. In terms of your hypothesis for this example, the observed chi-square is not significantly different from expected. The observed numbers are consistent with those expected under Mendel's law.
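The interpretation above can be checked numerically instead of with a table. For exactly 1 degree of freedom the chi-squared upper-tail probability has the closed form P(X > x) = 1 − erf(√(x/2)), which Python's standard library can evaluate; general df would need an incomplete-gamma routine (e.g. scipy.stats.chi2.sf) or the printed table. A sketch using the pea data:

```python
from math import erf, sqrt

def chi_square(observed, expected):
    # chi-square = sum over all categories of (o - e)^2 / e
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def p_value_1df(x):
    # Upper-tail probability of the chi-squared distribution with
    # 1 degree of freedom: P(X > x) = 1 - erf(sqrt(x / 2)).
    return 1.0 - erf(sqrt(x / 2.0))

stat = chi_square([639, 241], [660, 220])  # ~2.67 unrounded
p = p_value_1df(stat)                      # ~0.10, as the table lookup gives
accept = p > 0.05                          # True: within acceptable deviation
```

The unrounded statistic is about 2.673 (the text's 2.668 reflects intermediate rounding to three digits), and p ≈ 0.10 > 0.05, matching the table-based conclusion.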

Step-by-Step Procedure for Testing Your Hypothesis and Calculating Chi-Square

1. State the hypothesis being tested and the predicted results. Gather the data by conducting the proper experiment (or, if working genetics problems, use the data provided in the problem).

2. Determine the expected numbers for each observational class. Remember to use numbers, not percentages.

Chi-square should not be calculated if the expected value in any category is less than 5.

3. Calculate χ² using the formula. Complete all calculations to three significant digits. Round off your answer to two significant digits.

4. Use the chi-square distribution table to determine significance of the value.

a. Determine degrees of freedom and locate the value in the appropriate column.
b. Locate the value closest to your calculated χ² on that degrees of freedom (df) row.
c. Move up the column to determine the p value.


5. State your conclusion in terms of your hypothesis.

a. If the p value for the calculated χ² is p > 0.05, accept your hypothesis. The deviation is small enough that chance alone accounts for it. A p value of 0.6, for example, means that there is a 60% probability that any deviation from expected is due to chance only. This is within the range of acceptable deviation.
b. If the p value for the calculated χ² is p < 0.05, reject your hypothesis, and conclude that some factor other than chance is operating for the deviation to be so great. For example, a p value of 0.01 means that there is only a 1% chance that this deviation is due to chance alone. Therefore, other factors must be involved.

The chi-square test will be used to test for the "goodness of fit" between observed and expected data from several laboratory investigations in this lab manual.