In scientific data gathering, there are three different ways to collect the data needed to examine the relationships being studied. These three methods tell you different things and have their own strengths and weaknesses. They are the experimental, the quasi-experimental, and the correlational method.
In the experimental method, the scientist tries to isolate the causal effect of a single variable. To see whether a particular variable has a consequence, the scientist must hold all other variables constant and change only the one thing being studied. To make sure the experiment is done correctly, who is exposed to the independent variable must be decided at random (Salkind, 2014, p. 8). For example, say a scientist thinks that college men wearing a baseball cap will be able to run faster. The scientist randomly assigns baseball caps to half of the study group and then measures the running speeds of all the participants. If the study finds that the average speed of the cap wearers was in fact faster than the average speed of the non-cap wearers, then the hypothesis is supported.
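A minimal sketch of this design, with invented participant names and sprint times rather than data from any actual study, could look like the following:

```python
import random
import statistics

# Hypothetical participant pool; names and times are invented for illustration.
participants = [f"runner_{i}" for i in range(20)]

# Random assignment: chance, not the scientist, decides who gets a cap.
random.shuffle(participants)
cap_group = participants[:10]      # wear a baseball cap
control_group = participants[10:]  # no cap

# Pretend we measured a 100 m sprint time (seconds) for everyone.
sprint_times = {name: random.uniform(12.0, 16.0) for name in participants}

cap_mean = statistics.mean(sprint_times[p] for p in cap_group)
control_mean = statistics.mean(sprint_times[p] for p in control_group)

print(f"cap group mean:     {cap_mean:.2f} s")
print(f"control group mean: {control_mean:.2f} s")

# A lower mean time for the cap group would be consistent with the hypothesis,
# though a real study would also test whether the difference is statistically
# significant rather than treating it as proof.
```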
Not all variables are as easily tested as the effect of baseball caps on the running speed of college-age men. Sometimes what is to be measured is not so easy to control. If the experiment designer cannot choose who receives a variable, a measure of control is lost. This is where the quasi-experimental method comes in (Salkind, 2014, p. 9). An example where a quasi-experimental method would be used is a study of whether, all other things being equal, left-handed or right-handed people are better at chess. Since nature has already chosen who will be left-handed and who will be right-handed, the element of randomness has been taken away from the experimenter. The results of a quasi-experiment may be less certain than those of a strictly experimental method because the left-handers may possess other traits besides their dominant hand that skew the results.
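To make the contrast concrete, here is a small sketch with made-up player records; the point is only that the groups are formed by a pre-existing trait rather than by random assignment:

```python
import statistics

# Hypothetical records: handedness is a pre-existing trait, so the scientist
# can only group players by it, not assign it at random.
players = [
    {"name": "A", "handedness": "left",  "chess_rating": 1620},
    {"name": "B", "handedness": "right", "chess_rating": 1540},
    {"name": "C", "handedness": "left",  "chess_rating": 1710},
    {"name": "D", "handedness": "right", "chess_rating": 1580},
    {"name": "E", "handedness": "right", "chess_rating": 1650},
]

left = [p["chess_rating"] for p in players if p["handedness"] == "left"]
right = [p["chess_rating"] for p in players if p["handedness"] == "right"]

print(f"left-handed mean rating:  {statistics.mean(left):.0f}")
print(f"right-handed mean rating: {statistics.mean(right):.0f}")

# Because the groups were formed by nature rather than by random assignment,
# any difference in means may reflect other traits that travel with handedness,
# not handedness itself.
```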
The final method for looking at relationships
between two variables is the correlational method. In this method, no experiment is run; instead, the scientist looks at two sets of data to see whether there is a relationship between them. Does one go up while the other goes down? Alternatively, do they move in tandem? If they do either one, then the variables are said to be correlated. The problem with looking for correlation is that the scientist cannot tell whether there is a direct causal relationship (Salkind, 2014, p. 10). Say a scientist looks at the sales of Happy Meals in America as well as the average weight of American children. If the scientist sees that both variables increased over the same period, then a
correlation can be said to exist. The issue is that there is no way to say
directly what caused what. Did children gain weight because they were eating
too many Happy Meals, or did already-obese children demand more Happy Meals?
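As a rough illustration, using invented yearly figures rather than real sales or weight data, the strength of such co-movement is often summarized by a correlation coefficient, which by itself says nothing about causation:

```python
import math

# Invented figures purely for illustration: Happy Meal sales (millions) and
# average child weight (kg) over the same five years.
sales = [310, 325, 340, 360, 375]
weight = [31.2, 31.8, 32.1, 32.9, 33.4]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

r = pearson(sales, weight)
print(f"correlation coefficient r = {r:.2f}")

# An r near +1 means the two series rise together, but the number alone says
# nothing about which one (if either) caused the other.
```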
The overall result is that the more control a scientist has over the independent variables being studied, the more certain they can be of the validity of their results. More control is the desired starting point, but it is not always possible to attain; that is why the other methods exist.