1.1 The critical question in testing hypotheses in linear regression
1.2 Adding additional independent variables to a regression equation
1.3 Statistical significance in testing additional variables indicates
1.4 In simple linear regression, the goal is to
1.5 The slope of the regression line in multiple regression is represented by
1.6 The intercept of the regression line in multiple regression is represented by
1.7 The sum of Y minus Y prime squared is also called
1.8 In hypothesis testing in regression, when there is no model the predicted value of Y is
1.9 The value that minimizes the sum of squared deviations around a constant prediction of Y is
1.10 This component can be partitioned into two parts, that which the model can account for and that which it cannot, and the two parts sum to the total.
1.11 Mean squares in an ANOVA source table for a regression analysis are
1.12 The F ratio in an ANOVA source table is calculated by
1.13 With 1 and 19 degrees of freedom, an F ratio of 4.307 would
1.14 Significance testing of the intercept in simple linear regression
1.15 Significance testing of the slope given the intercept has already entered the model in simple linear regression
1.16 In hypothesis testing in linear regression, if the slope of the regression line is equal to zero
1.17 In hypothesis testing in linear regression, one degree of freedom is lost for
1.18 In a multiple regression model with a single independent variable, the value of the multiple R will be
1.19 The multiple correlation coefficient (R)
1.20 A linear prediction model that contains both the terms of a simpler model and additional terms ____ than the simpler model.
1.21 The value of b0 in the prediction model Y' = b0 that minimizes the value of Σ(Y - Y')² is
1.22 What is the value of the Sum of Squares due to Regression (b1|b0)?
1.23 What is the value of Mean Square Residual?
1.24 Dichotomous variables
1.25 What will be the value of b0 in Y' = b0 + b1X, given the following table of means?
1.26 What will be the value of b1 in Y' = b0 + b1X, given the following table of means?
1.27 What will be the value of sy.x, given the following table of means?
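The table of means referenced in items 1.25 through 1.27 is not reproduced here, but the relationships these items test follow directly from 0 and 1 coding (see items 1.32 and 1.33): the intercept equals the mean of the group coded 0, and the slope equals the difference between the group means. A minimal sketch of that arithmetic; the two group means below are hypothetical stand-ins, and Python is used here only for illustration:

    # Regression Y' = b0 + b1*X with a dichotomous X coded 0 and 1.
    # The group means are hypothetical stand-ins for the table of means
    # referenced in items 1.25-1.27.
    mean_group0, mean_group1 = 50.0, 58.0
    b0 = mean_group0                  # intercept: mean of the 0-coded group
    b1 = mean_group1 - mean_group0    # slope: difference between group means
    print(b0, b1)                     # 50.0 8.0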
1.28 What effect will increasing the "Effect Size" in the interactive exercise demonstrating dichotomous variables in regression have on the resulting regression equation?
1.29 The most flexible technique in hypothesis testing is
1.30 With a dichotomous independent variable and an interval dependent variable, which of the following hypothesis tests would result in a different exact significance level from the others?
1.31 The preferred coding of dichotomous variables is
1.32 The intercept of the regression line when the X variable is dichotomous and coded 0 and 1 is
1.33 The slope of the regression line when the X variable is dichotomous and coded 0 and 1 is
1.34 The correlation between two variables when the X variable is dichotomous and coded 0 and 1 will be negative when
1.35 The correlation between two variables when the X variable is dichotomous and coded 0 and 1 will be larger when
1.36 The slope of the regression line when the Y variable is dichotomous and coded 0 and 1 is
1.37 The values for the F ratios when the dependent measure is dichotomous using the Regression and Compare Means procedures will be
1.38 The advantage of using the Regression approach compared to the traditional t-test approach is
1.39 When there are unequal numbers of scores in each group, the Mean Square Within Groups is
2.1 In a linear transformation
2.2 In a linear transformation where Y = 10 + 3*X1 - 2*X2, if X1=10 and X2=20, the resulting value for Y would be
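The arithmetic behind a stem like item 2.2 can be checked by direct substitution. A minimal sketch; Python is used here only for illustration, since the stems themselves assume hand computation or SPSS:

    # Y = 10 + 3*X1 - 2*X2 evaluated at X1 = 10 and X2 = 20 (item 2.2).
    X1, X2 = 10, 20
    Y = 10 + 3 * X1 - 2 * X2
    print(Y)   # 10 + 30 - 40 = 0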
2.3 In a linear transformation with two variables
2.4 In a linear transformation where Y = 10 + 3*X1 - 2*X2, if the mean of X1=20, the mean of X2=10, the standard deviation of X1=4, the standard deviation of X2=6, and the correlation coefficient between X1 and X2 r=.5, the resulting value for the mean of Y would be
2.5 In a linear transformation where Y = 10 + 3*X1 - 2*X2, if the mean of X1=20, the mean of X2=10, the standard deviation of X1=4, the standard deviation of X2=6, and the correlation coefficient between X1 and X2 r=.5, the resulting value for the standard deviation of Y would be
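Items 2.4 and 2.5 rest on the standard moment rules for a linear transformation: means combine linearly (the correlation does not affect the mean), while the variance picks up a covariance term, Var(Y) = w1²s1² + w2²s2² + 2·w1·w2·r·s1·s2. A minimal sketch of that arithmetic using the values given in the stems:

    import math

    # Y = 10 + 3*X1 - 2*X2 with the moments given in items 2.4 and 2.5.
    m1, m2 = 20, 10          # means of X1 and X2
    s1, s2 = 4, 6            # standard deviations of X1 and X2
    r = 0.5                  # correlation between X1 and X2
    w0, w1, w2 = 10, 3, -2   # transformation weights

    mean_Y = w0 + w1 * m1 + w2 * m2           # 10 + 60 - 20 = 50
    var_Y = (w1**2 * s1**2 + w2**2 * s2**2
             + 2 * w1 * w2 * r * s1 * s2)     # 144 + 144 - 144 = 144
    print(mean_Y, math.sqrt(var_Y))           # 50 12.0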
2.6 In a linear transformation with two variables, the variance of the transformed variable will be larger than the sum of the squared weights times the variances of the two variables
2.7 In a linear transformation with two variables, the variance of the transformed variable will be equal to the sum of the squared weights times the variances of the two variables
2.8 Which of the lines on the grid best describes the equation Y = 10 + 16*X1 + 12*X2?
2.9 Which of the lines on the grid best describes the equation Y = -100 - 16*X1 - 12*X2?
2.10 Which of the lines on the grid best describes the equation Y = 100 + X1 + 7*X2?
2.11 Which of the lines on the grid best describes the equation Y = -100 - 16*X1 + 12*X2?
2.12 How does increasing the value of w0 in the equation Y = w0 + w1*X1 + w2*X2 change the graph of the line?
2.13 Which of the following transformations would share the same rotated axis as Y = 20 + 3*X1 - 4*X2?
2.14 The same line can represent two different linear transformations of two variables if
2.16 Which of the following transformations is a normalized linear transformation
2.17 Which of the following transformations is a normalized linear transformation
2.18 Where X' = w1X1 + w2X2, X'' = -w2X1 + w1X2, and w1² + w2² = 1.00
2.19 The transformation Y = 5*X1 - 8*X2 could be normalized to
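Item 2.19 uses the normalization rule implied by item 2.18: dividing both weights by the length of the weight vector makes w1² + w2² = 1.00. A minimal sketch:

    import math

    # Normalizing Y = 5*X1 - 8*X2 (item 2.19).
    w1, w2 = 5, -8
    length = math.sqrt(w1**2 + w2**2)        # sqrt(89) ≈ 9.434
    nw1, nw2 = w1 / length, w2 / length
    print(nw1, nw2, nw1**2 + nw2**2)         # ≈ 0.530 -0.848 1.0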
2.20 The advantage of standardized normalized transformations when maximizing total variance is
2.21 A rotation perpendicular to Y = .6*X1 - .8*X2 would be
2.22 If two variables have been standardized and then transformed with two standardized normalized perpendicular transformations, the sum of the variances of the transformed variables will be
2.23 The weights that maximize the variance of a normalized linear transformation
2.24 The maximum and minimum variances in two standard normalized perpendicular transformations
2.25 The sum of the variances of two standard normalized perpendicular linear transformations will be ______ no matter what the weights.
2.26 Which program in SPSS can be used to find eigenvectors and eigenvalues of linear transformations?
2.27 Linear transformations may be used
3.1 Discriminant function analysis
3.2 The method of choice when desiring to classify individuals into known groups is
3.3 When predicting whether a student would pass, withdraw, or fail a class, the statistical method of choice would be
3.4 Results of a discriminant function analysis
3.5 The simplest case of discriminant function analysis has
3.6 The simplest discriminant function analysis is identical to
3.7 The variable that might best discriminate between male and female students is
3.8 When discrimination between groups is low in overlapping relative frequency polygons
3.9 In discriminant function analysis, the underlying assumption is that the distribution of the interval variable(s) is modeled by
3.10 Assumptions made when computing probabilities of group membership in discriminant function analysis include
3.11 In the discriminant function analysis program that allows the student to explore the relationship between different generating functions (poor, medium, or good discrimination; equal or unequal variances), sample size, and the resulting model based on the sample, the larger the sample size of each group
3.12 In discriminant function analysis, suppose that the model for a given group had a value of 201 for mu and 10 for sigma, what would be the probability of the data given the group for a score of 213?
3.13 In discriminant function analysis, suppose that the model for a given group had a value of 201 for mu and 18 for sigma, what would be the probability of the data given the group for a score of 213?
3.14 In discriminant function analysis, suppose that the model for a given group had a value of 201 for mu and 10 for sigma, what score would have an identical probability of the data given the group as a score of 213?
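Items 3.12 through 3.14 assume the group model is a normal distribution (item 3.9), so P(D|G) can be read as the height of the normal curve at the observed score; that reading is an assumption here, not something the stems state outright. A minimal sketch, with the density written out so nothing beyond the standard library is needed:

    import math

    def normal_density(x, mu, sigma):
        # Height of the normal curve N(mu, sigma) at x.
        z = (x - mu) / sigma
        return math.exp(-0.5 * z**2) / (sigma * math.sqrt(2 * math.pi))

    print(normal_density(213, 201, 10))   # item 3.12: z = 1.2
    print(normal_density(213, 201, 18))   # item 3.13: z ≈ 0.67
    # Item 3.14: by symmetry, 189 = 201 - 12 gives the same density as 213.
    print(normal_density(189, 201, 10) == normal_density(213, 201, 10))  # True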
3.15 P(G|D) may be interpreted as
3.16 P(D|G) may be interpreted as
3.17 In general, in discriminant function analysis, the larger the difference between the means of the groups,
3.18 In a recent election, the local bond issue for the school system was soundly defeated. If a researcher wished to predict voting behavior (for or against) using discriminant function analysis, she would set the prior probabilities
3.19 A high posterior probability
3.21 If the prior probability of belonging to a particular group is very low
3.22 In discriminant function analysis, transformation from prior to posterior probabilities is
3.23 For two groups, if the prior probabilities were .7 and .3, respectively, and P(D|G) was .5 and .3, what would be the P(G|D) for group 1?
3.24 For two groups, if the prior probabilities were .7 and .3, respectively, and P(D|G) was .5 and .3, what would be the P(G|D) for group 2?
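Items 3.23 and 3.24 are a direct application of Bayes' theorem: P(G|D) = P(G)P(D|G) / Σ P(G)P(D|G). A minimal sketch with the values from the stems:

    # Posterior probabilities for items 3.23 and 3.24.
    prior = [0.7, 0.3]            # prior probabilities for groups 1 and 2
    p_d_given_g = [0.5, 0.3]      # P(D|G) for groups 1 and 2
    joint = [p * d for p, d in zip(prior, p_d_given_g)]   # [0.35, 0.09]
    total = sum(joint)                                    # 0.44
    posterior = [j / total for j in joint]
    print(posterior)              # ≈ [0.795, 0.205]; the two sum to 1.0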
3.25 The posterior probabilities of two groups will be equal to their prior probabilities if
3.26 Everything else being equal, increasing the prior probability for group 1 in a two-groups discriminant function analysis
3.27 The sum of the posterior probabilities for all groups will
3.28 The sum of the prior probabilities for all groups will
3.29 The default value for prior probabilities in SPSS Discriminant Function Analysis is
3.30 The score value which most likely generated this table is
3.31 The prior probabilities of group membership in this table
3.32 An individual making the score which generated this table would
3.33 Classification probabilities in SPSS discriminant function analysis output do not include
3.34 When using dichotomous groups and a single dependent variable, the unstandardized canonical discriminant function coefficients presented in the SPSS discriminant function analysis are
3.35 When using dichotomous groups and a single dependent variable, the canonical correlation coefficient presented in the SPSS discriminant function analysis
3.36 Using the SPSS discriminant function analysis program, accuracy in prediction can be best assessed using
3.37 In the following SPSS output from a discriminant function analysis predicting a student's program from movie preferences, discriminant function one best discriminated between
3.38 In the following SPSS output from a discriminant function analysis predicting a student's program from movie preferences, discriminant function two best discriminated between
3.39 In the following SPSS output from a discriminant function analysis predicting a student's program from movie preferences, discriminant function two worst discriminated between
3.40 In the following SPSS output from a discriminant function analysis predicting a student's program from movie preferences, suppose a student had standard scores of 0.0 for Sixth Sense, -1.5 for Pinocchio, 1.3 for Nutty Professor, 1.1 for Thelma and Louise, and 0.0 for Little Big Man. This student would most likely be classified as belonging to what program?
3.41 In the following SPSS output from a discriminant function analysis predicting a student's program from movie preferences, suppose a student had standard scores of 0.0 for Sixth Sense, 1.0 for Pinocchio, 0.3 for Nutty Professor, -1.1 for Thelma and Louise, and 0.0 for Little Big Man. This student would most likely be classified as belonging to what program?
3.42 In the following SPSS output from a discriminant function analysis predicting a student's program from movie preferences, the group with the largest number of predicted members was
3.43 In the following SPSS output from a discriminant function analysis predicting a student's program from movie preferences, the group with the highest correct classification rate was
3.44 In the following SPSS output from a discriminant function analysis predicting a student's program from movie preferences, the overall correct classification rate was
3.45 In the following SPSS output from a discriminant function analysis predicting a student's program from movie preferences, suppose a student had standard scores of 0.0 for Sixth Sense, 0.0 for Pinocchio, 0.0 for Nutty Professor, 0.0 for Thelma and Louise, and -1.0 for Little Big Man. This student would most likely be classified as belonging to what program?
4.1 The proximities matrix
4.2 A cluster analysis technique that requires that the number of groups desired is known beforehand is
4.3 The last individual to join a cluster group was
4.4 The first individual to join a cluster group was
4.5 The number of clear grouping clusters is
4.6 In doing a cluster analysis, Stuetzle (1995) recommends
5.1 The difference between simple linear regression and multiple regression is
5.2 The standardized regression weights
5.3 The equation Y" = b0 + b1X1 + b2X2 describes
5.4 A large residual for a given individual means
5.5 In a regression equation predicting points in a graduate statistics course, unstandardized regression weights were -10.37, 1.33, and .78 for the constant term, a measure of intellectual ability, and a measure of motivation. What would be the predicted number of points for a student with a score of 123 on the measure of intellectual ability and a score of 154 on the measure of motivation?
5.6 In a regression equation predicting points in a graduate statistics course, unstandardized regression weights were -10.37, 1.33, and .78 for the constant term, a measure of intellectual ability, and a measure of motivation. For a student with a score of 123 on the measure of intellectual ability, a score of 154 on the measure of motivation, and an observed number of points of 298, the residual would be
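Items 5.5 and 5.6 are straight substitution into the prediction equation, with the residual defined as observed minus predicted. A minimal sketch with the values from the stems:

    # Predicted points and residual for items 5.5 and 5.6.
    b0, b_ability, b_motivation = -10.37, 1.33, 0.78
    ability, motivation = 123, 154
    predicted = b0 + b_ability * ability + b_motivation * motivation
    residual = 298 - predicted          # observed minus predicted
    print(predicted, residual)          # ≈ 273.34 and 24.66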
5.7 Relatively small values for residuals in a multiple regression equation can be interpreted as
5.8 Assumptions made when using analysis of variance in multiple regression include
5.9 The correlation coefficient between X and Y (rxy) will be different from the multiple correlation coefficient (Ryx) when
5.10 The value of the multiple correlation coefficient, R, is
5.11 If a new statistician found a multiple correlation coefficient of .54 when the previous statistician found a multiple correlation coefficient of .32 on the same data but with different variables, you should
5.12 If a new statistician found a multiple correlation coefficient of -.54 when the previous statistician found a multiple correlation coefficient of .32 on the same data but with different variables, you should
5.13 The coefficient of determination is
5.14 Which of the following will be the largest?
5.15 Which of the following will be the smallest?
5.16 In multiple regression, the unadjusted R2 value will
5.17 The "adjustment" in the "Adjusted R Square" is for
5.18 The denominator in the definitional formula for the standard error of estimate in linear regression is
5.19 The larger the value of the unadjusted R squared the
5.20 If the sum of squared residuals was 133.48 in a linear regression model with twenty-five scores, two independent variables, and a constant term, the standard error of estimate would be
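Item 5.20 uses the definitional formula for the standard error of estimate, s = sqrt(SS_residual / (N - k - 1)), where k counts the independent variables and one further degree of freedom goes to the constant term (see item 5.18). A minimal sketch:

    import math

    # Standard error of estimate for item 5.20.
    ss_residual, n, k = 133.48, 25, 2
    see = math.sqrt(ss_residual / (n - k - 1))   # sqrt(133.48 / 22)
    print(see)                                   # ≈ 2.46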
5.21 In the ANOVA table produced when doing a multiple regression with SPSS the Sum of Squares due to Regression is
5.22 When adding additional terms to a multiple regression model
5.24 Sequential hypothesis testing in multiple regression models is done using SPSS by
5.25 If the R squared change is statistically significant
5.26 The value in the Sig. column of the coefficients table in multiple regression
5.27 The Sig. level provided by SPSS on the Coefficients table in the Multiple Regression procedure
5.28 A linear regression equation predicting a single dependent variable from two independent variables can be represented as a
5.29 In a sequential multiple regression modeling procedure predicting Dept, adding Indp02 after Indp01 is already in the regression model would most likely
5.30 In a sequential multiple regression modeling procedure predicting Dept, adding Indp04 after Indp01 is already in the regression model would most likely
5.31 In a multiple regression model, which of the following two variables in combination would most likely best predict Dept?
5.32 In a sequential multiple regression modeling procedure predicting Dept, adding Indp03 after Indp01 is already in the regression model
5.33 In a sequential multiple regression modeling procedure predicting Dept01, the significance level on the coefficients table for Indp03 when both Indp01 and Indp03 have been entered in the regression model would be
5.34 In a sequential multiple regression modeling procedure predicting Dept01, the significance level on the coefficients table for Indp01 when only Indp01 has been entered in the regression model would be
5.35 In a sequential multiple regression modeling procedure predicting Dept, the significance level for R squared change on the summary table for Indp01 after Indp03 has been entered in the regression model would be
5.36 In a sequential multiple regression modeling procedure predicting Dept01, the predicted value of Dept01 when Indp01=30 and Indp03=20 would be
5.37 In a sequential multiple regression modeling procedure predicting Dept01, the predicted standard score value of Dept01 when the standard score of Indp01=1.30 and the standard score of Indp03=-2.20 would be
5.38 If two independent variables are highly correlated
5.39 When predicting Y from two predictor variables, if the predictor variables are uncorrelated then
5.40 When two predictor variables are highly correlated in multiple regression
5.41 A suppressor variable relationship is possible when
6.1 In multiple regression there may be many
6.2 In multiple regression, the unstandardized regression weights
6.3 In multiple regression, if there are K independent variables and N observations, how many parameters must be estimated?
6.4 The largest multiple R predicting a dependent variable from a single independent variable can be found
6.5 The largest unadjusted multiple R predicting a dependent variable from a sample of data can be found using
6.6 The difference between the unadjusted and adjusted multiple R squared will be greatest when
6.7 The value in the significance column on the coefficients table in multiple regression is the
6.8 Entering blocks of independent variables in an order determined by the statistician is done using
6.9 A mantra that could be associated with the author of the text would be
6.10 As additional independent variables are added or subtracted from a multiple regression model, which of the following values will remain constant for a given independent variable?
6.11 Collinearity occurs when
6.12 The following figure presents example SPSS output from a study predicting number of offences from various predictor variables.
6.13 The following figure presents example SPSS output from a study predicting number of offences from various predictor variables. The analysis could be best described as
6.14 The following figure presents example SPSS output from a study predicting number of offences from various predictor variables. The best predictor, given all the other variables were already entered in the model would be
6.15 The following figure presents example SPSS output from a study predicting number of offences from various predictor variables. The variable that least predicts given all the other variables are entered into the model would be
6.16 The following figure presents example SPSS output from a study predicting number of offences from various predictor variables. The conclusion reached with respect to the three IQ measures could best be stated as
6.17 The following figure presents example SPSS output from a study predicting number of offences from various predictor variables. The conclusion reached with respect to the AGE@REF measure could best be stated as
6.18 The following figure presents example SPSS output predicting current salary using a regression analysis of only the clerical workers in the Employees.sav file included with the SPSS package.
6.19 The following figure presents example SPSS output predicting current salary using a regression analysis of only the clerical workers in the Employees.sav file included with the SPSS package. With respect to the Months since hire variable
6.20 The following figure presents example SPSS output predicting current salary using a regression analysis of only the clerical workers in the Employees.sav file included with the SPSS package. In making a 95% confidence interval for a given score using all the included variables, the range of the confidence interval would be approximately
6.21 The following figure presents example SPSS output predicting current salary using a regression analysis of only the clerical workers in the Employees.sav file included with the SPSS package. Conclusions with respect to gender and minority status might include
6.22 The following figure presents example SPSS output predicting current salary using a regression analysis of only the clerical workers in the Employees.sav file included with the SPSS package. Which of the following variables has results that go against common sense?
6.23 The following figure presents example SPSS output predicting current salary using a regression analysis of only the clerical workers in the Employees.sav file included with the SPSS package. Everything else being equal, the best predictor of current salary would be
6.24 A full model in multiple regression will likely be "less significant" than a partial model
6.25 In a step-up regression, the next variable to be entered into the regression equation
6.26 The step-up and step-down procedures in multiple regression
6.27 In a step-down regression procedure, the next variable to be removed will have the
6.28 The step-up and step-down regression procedures
6.29 The next variable to be eliminated in a step-down regression procedure
6.31 Using a similar multiple regression analysis on two separate samples of data
7.1 Dummy coding involves
7.2 When performing a hierarchical regression analysis, one of the degrees of freedom for an R squared change hypothesis test is
7.3 When performing a hierarchical regression analysis, adding a single independent variable that is uncorrelated with all other independent variables will result in an R square change of
7.4 When possible, the statistician generally prefers
7.5 To use a categorical variable with 8 levels in a multiple regression model, it would be necessary to create __ dummy coded variables
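Item 7.5 follows from the general rule that a categorical variable with g levels needs g - 1 dummy coded variables, one level serving as the reference. A minimal illustration; pandas is used here only for convenience (the stems themselves assume SPSS):

    import pandas as pd

    # A categorical variable with 8 levels (item 7.5).
    levels = pd.Series(list("ABCDEFGH"), name="group")
    dummies = pd.get_dummies(levels, drop_first=True)   # drop one reference level
    print(dummies.shape[1])                             # 8 - 1 = 7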
7.6 ANOVA is a special case of
7.7 If a researcher is interested only in the overall gain in predictive power of a categorical variable with three or more levels when using a multiple regression model,
7.8 When the dummy codes are correlated with each other,
7.9 Which of the following values would most likely be different from the others?
7.10 Dummy codes are sometimes called
7.11 When a set of dummy codes are orthogonal,
7.12 The selection of a set of contrasts makes a difference
7.13 In general, the set of contrasts selected and tested
7.14 It is conventional to construct contrasts with a sum equal to
7.15 When using a set of orthogonal contrasts, the correlation coefficient between the contrast and the dependent variable will equal
7.16 The General Linear Model program in SPSS
7.17 When two categorical variables are used in a multiple regression model, the total number of groups will be
7.18 When dummy coding for two or more categorical variables, interaction contrasts can be found by
7.19 When there are equal sample sizes in each combination of two categorical variables the order of entry of blocks of main effects and interaction effects
7.20 When there are unequal sample sizes in each combination of two categorical variables the order of entry of blocks of main effects and interaction effects
7.21 In most real-life regression analyses with two categorical variables one may expect to find
7.22 When fitting too many parameters with too few data points
7.23 Which of the following statements is true?
7.24 Changing the order that main and interaction effects are entered into a multiple regression equation will have an effect on the R2 change values if
7.25 In a study with two categorical variables, A and B, if A had 5 levels and B had 4 levels, how many contrasts would be necessary to explain the variance predicted by the combinations of these variables?
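Item 7.25 follows from the degrees-of-freedom bookkeeping for contrasts: a levels of A contribute a - 1 contrasts, b levels of B contribute b - 1, and the interaction contributes (a - 1)(b - 1), for a total of ab - 1. A minimal sketch:

    # Contrast count for A with 5 levels and B with 4 levels (item 7.25).
    a_levels, b_levels = 5, 4
    a_main = a_levels - 1                  # 4
    b_main = b_levels - 1                  # 3
    interaction = a_main * b_main          # 12
    print(a_main + b_main + interaction)   # 19, i.e. (5 * 4) - 1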
7.26 Which contrast compared the two types of music?
7.27 Which contrast was least significant?
7.28 White noise was significantly different from street noise
7.29 White noise was significantly different from absolute quiet
7.30 The music and crying baby were more disruptive than the other conditions
8.1 The marginal means will be the mean of the cell means when
8.2 Main effects can be observed as
8.3 A main effect is defined as
8.4 An interaction effect can be defined as
8.5 If the lines in a plot of cell means in an A × B design are parallel
8.6 If the cell means of two different sets of data are identical, the effects discovered using an ANOVA
9.1 In ANOVA the assumption of an interval scale is important for the
9.2 Causal inferences are easiest with
9.3 Which of the following is an attribute of a treatment factor?
9.6 The gender of the subject, if used as a factor in an experimental design, would be
9.7 Blocking factors are often useful in an experimental design because
9.8 Which of the following is not an attribute of a fixed factor?
9.9 The Subjects factor will always be a
9.10 In a finger-tapping experiment subjects (S) tapped twice (T) with both their right and left hands (H). Each subject participated in either a caffeine or a no caffeine condition (C). Gender (A) was also included as a factor in the experimental design. This design could best be described as
9.11 In a finger-tapping experiment subjects (S) tapped twice (T) with either their right or left hand (H). Each subject participated in either a caffeine or a no caffeine condition (C). Gender (A) was also included as a factor in the experimental design. This design could best be described as
9.12 In an experimental design looking at achievement test performance, students (S) were either male or female (A) and belonged to one of the six classrooms (C), three each in two different schools (B). Which of the following best describes this experimental design?
9.13 In a finger-tapping experiment subjects (S) tapped twice (T) with either their right or left hand (H). Each subject participated in either a caffeine or a no caffeine condition (C). Gender (A) was also included as a factor in the experimental design. If 20 subjects were desired for each possible combination of factors, how many subjects would be needed?
9.14 In a finger-tapping experiment subjects (S) tapped twice (T) with both their right and left hands (H). Each subject participated in either a caffeine or a no caffeine condition (C). Gender (A) was also included as a factor in the experimental design. If 20 subjects were desired for each possible combination of factors, how many subjects would be needed?
9.15 In a finger-tapping experiment subjects (S) tapped twice (T) with both their right and left hands (H). Each subject participated in both a caffeine and a no caffeine condition (C). Gender (A) was also included as a factor in the experimental design. If 20 subjects were desired for each possible combination of factors, how many subjects would be needed?
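Items 9.13 through 9.15 differ only in which factors are within subjects. Reading "20 subjects for each possible combination of factors" as 20 per between-subjects cell (an assumption the stems leave implicit), subjects are reused across within-subjects factors (tapping twice, using both hands, or serving in both caffeine conditions), and the counts work out as sketched below:

    per_cell = 20
    # Item 9.13: hand (2), caffeine (2), and gender (2) all between subjects.
    print(per_cell * 2 * 2 * 2)   # 160
    # Item 9.14: both hands within subjects; caffeine (2) and gender (2) between.
    print(per_cell * 2 * 2)       # 80
    # Item 9.15: hands and caffeine within subjects; only gender (2) between.
    print(per_cell * 2)           # 40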
9.16 Given equal numbers of levels of factors S, A, B, and C, which of the following designs would require the greatest number of variables in a data file in SPSS?
9.17 Ninety-six children are subjects in a study of perceptual discrimination. Half of the children are six years old, and half are nine years old. Half the subjects are tested with two-dimensional objects and half with three-dimensional objects. Half are required to discriminate on the basis of shape, half on the basis of color. Thus there are eight groups differing in age (A), dimensions (D), and relevant cue (C). We have the following hypotheses: (a) Nine-year-olds will make fewer errors than six-year-olds. (b) On the average, it will be easier to discriminate three-dimensional objects than two-dimensional objects. (c) The difference between two- and three-dimensional objects will be more marked for six-year-olds than for nine-year-olds. (d) The difference between two- and three-dimensional objects will hold for shape, but not for color. The hypothesis corresponding to the D main effect is
9.18 Ninety-six children are subjects in a study of perceptual discrimination. Half of the children are six years old, and half are nine years old. Half the subjects are tested with two-dimensional objects and half with three-dimensional objects. Half are required to discriminate on the basis of shape, half on the basis of color. Thus there are eight groups differing in age (A), dimensions (D), and relevant cue (C). We have the following hypotheses: (a) Nine-year-olds will make fewer errors than six-year-olds. (b) On the average, it will be easier to discriminate three-dimensional objects than two-dimensional objects. (c) The difference between two- and three-dimensional objects will be more marked for six-year-olds than for nine-year-olds. (d) The difference between two- and three-dimensional objects will hold for shape, but not for color. The hypothesis corresponding to the C main effect is
9.19 Ninety-six children are subjects in a study of perceptual discrimination. Half of the children are six years old, and half are nine years old. Half the subjects are tested with two-dimensional objects and half with three-dimensional objects. Half are required to discriminate on the basis of shape, half on the basis of color. Thus there are eight groups differing in age (A), dimensions (D), and relevant cue (C). We have the following hypotheses: (a) Nine-year-olds will make fewer errors than six-year-olds. (b) On the average, it will be easier to discriminate three-dimensional objects than two-dimensional objects. (c) The difference between two- and three-dimensional objects will be more marked for six-year-olds than for nine-year-olds. (d) The difference between two- and three-dimensional objects will hold for shape, but not for color. The hypothesis corresponding to the AxD interaction effect is
9.20 Ninety-six children are subjects in a study of perceptual discrimination. Half of the children are six years old, and half are nine years old. Half the subjects are tested with two-dimensional objects and half with three-dimensional objects. Half are required to discriminate on the basis of shape, half on the basis of color. Thus there are eight groups differing in age (A), dimensions (D), and relevant cue (C). We have the following hypotheses: (a) Nine-year-olds will make fewer errors than six-year-olds. (b) On the average, it will be easier to discriminate three-dimensional objects than two-dimensional objects. (c) The difference between two- and three-dimensional objects will be more marked for six-year-olds than for nine-year-olds. (d) The difference between two- and three-dimensional objects will hold for shape, but not for color. The hypothesis corresponding to the AxC interaction effect is
9.21 Ninety-six children are subjects in a study of perceptual discrimination. Half of the children are six years old, and half are nine years old. Half the subjects are tested with two-dimensional objects and half with three-dimensional objects. Half are required to discriminate on the basis of shape, half on the basis of color. Thus there are eight groups differing in age (A), dimensions (D), and relevant cue (C). We have the following hypotheses: (a) Nine-year-olds will make fewer errors than six-year-olds. (b) On the average, it will be easier to discriminate three-dimensional objects than two-dimensional objects. (c) The difference between two- and three-dimensional objects will be more marked for six-year-olds than for nine-year-olds. (d) The difference between two- and three-dimensional objects will hold for shape, but not for color. The hypothesis corresponding to the DxC interaction effect is
9.22 Ninety-six children are subjects in a study of perceptual discrimination. Half of the children are six years old, and half are nine years old. Half the subjects are tested with two-dimensional objects and half with three-dimensional objects. Half are required to discriminate on the basis of shape, half on the basis of color. Thus there are eight groups differing in age (A), dimensions (D), and relevant cue (C). We have the following hypotheses: (a) Nine-year-olds will make fewer errors than six-year-olds. (b) On the average, it will be easier to discriminate three-dimensional objects than two-dimensional objects. (c) The difference between two- and three-dimensional objects will be more marked for six-year-olds than for nine-year-olds. (d) The difference between two- and three-dimensional objects will hold for shape, but not for color. The hypothesis corresponding to the AxDxC interaction effect is
9.23 Ninety-six children are subjects in a study of perceptual discrimination. Half of the children are six years old, and half are nine years old. Half the subjects are tested with two-dimensional objects and half with three-dimensional objects. Half are required to discriminate on the basis of shape, half on the basis of color. Thus there are eight groups differing in age (A), dimensions (D), and relevant cue (C). The hypothesis that Nine-year-olds will make fewer errors than six-year-olds is:
9.24 Ninety-six children are subjects in a study of perceptual discrimination. Half of the children are six years old, and half are nine years old. Half the subjects are tested with two-dimensional objects and half with three-dimensional objects. Half are required to discriminate on the basis of shape, half on the basis of color. Thus there are eight groups differing in age (A), dimensions (D), and relevant cue (C). The hypothesis that On the average, it will be easier to discriminate three-dimensional objects than two-dimensional objects is:
9.25 Ninety-six children are subjects in a study of perceptual discrimination. Half of the children are six years old, and half are nine years old. Half the subjects are tested with two-dimensional objects and half with three-dimensional objects. Half are required to discriminate on the basis of shape, half on the basis of color. Thus there are eight groups differing in age (A), dimensions (D), and relevant cue (C). The hypothesis that The difference between two- and three-dimensional objects will be more marked for six-year-olds than for nine-year-olds is:
9.26 A three-way interaction can best be described as
9.27 In the following table of means, which effect could be significant?
9.28 In the following table of means, which effect could be significant?
9.29 In the following ANOVA table, which effect is significant?
9.30 In the following ANOVA table, which effect is significant?
9.31 In the following ANOVA table, which effect is not significant?
9.32 In the following table of means, which of the following effects could be significant?
9.33 In the following table of means, which of the following effects could be significant?
9.34 In the following table of means, which of the following effects could be significant?
9.35 In the following table of means, which of the following effects could not be significant?
9.36 In the following table of means, which of the following effects could be significant?
9.37 In the following table of means, which of the following effects could be significant?
9.38 In the following table of means, which of the following effects could be significant?