[RUME] How Can We Measure Student Learning? - Response to Statistician Ling

Richard Hake rrhake at earthlink.net
Wed May 17 23:35:55 EDT 2006


If you reply to this very long (34 kB) post, please don't hit the 
reply button without first pruning the copy of this post that may 
appear in your reply down to a few relevant lines; otherwise the 
entire already-archived post may be needlessly resent to subscribers.

ABSTRACT: I respond point by point to 9 deliciously provocative 
sections of a recent EdStat post <http://tinyurl.com/jk86h> by 
statistician Bob Ling (2006a), concluding that psychologists are dead 
wrong in expressing negative interest :-( in the pre/post testing 
issue.

In my EdStat post of 14 May 2006 titled "Re: How Can We Measure 
Student Learning?" [Hake (2006a)] I wrote [bracketed by lines 
"HHHHHHHHH. . . ."]:

HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
EdStat's Bob Ling, in his generally thoughtful EdStat post of 14 May 
2006 01:01:14-0400 made some excellent points with which I agree, but 
then placed himself in very bad company. . .[I should have said with 
respect to this issue only!]. . .  by responding to [my statement 
that psychologists, as a group, have shown zero or even negative 
interest in assessing the effectiveness of their own introductory 
courses by means of definitive pre/post testing] as follows:

"I find a rare agreement with the psychologists on this issue. On 
many of the courses I've taken myself, I already knew 75% or more of 
the course content before I started, but I gained enough insight from 
world-class teachers to make the courses truly rewarding.  Did I 
learn less than those with zero knowledge (pre) and 50% knowledge 
(post)? I think not."

Bob is evidently unaware that my term "DEFINITIVE pre/post testing" 
means NOT using the actual average class gain but instead the 
NORMALIZED gain.

The single student normalized gain is:

g = (actual gain) / (maximum possible gain)

    = (%post - %pre)  /   (100% - %pre).

So perhaps Bob's g would have been (95% - 75%) / (100% - 75%) = 0.8,

while his classmates' g would have been the much lower (50% - 0%) /
(100% - 0%) = 0.5. . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . .
The origin, justification, and utility of the normalized gain are 
explained in, e.g., the article "The Physics Education Reform Effort: 
A Possible Model for Higher Education" [Hake (2005a)] from which the 
snippets in my post "How Can We Measure Student Learning?" [Hake 
(2006b)] were extracted, and in "Re: The Value of Pre/post Testing" 
[Hake (2006c)].
HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH

I thank Bob Ling (2006a) for his deliciously provocative response 
that set me to "noting and contriving" [see the signature quote]. 
Ling wrote [bracketed with "LLLLLLL. . . . ."; I have divided his 
post into 9 parts 1, 2, 3, . . . so as to facilitate a response to 
each part]:

LLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLL
. . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . .
1. [the definition and example of single student normalized gain 
given by Hake (2006a)]. . . is only pedantic arithmetic using [his] 
definition.  I was using % of total content of the course merely as a 
simple example using simple arithmetic.

2. If we are going to speak in terms of SUBSTANCE of your definition, 
then you immediately would have gone into undefined areas of "gain".

3. What if I already knew MORE than the total content of the course 
when I began?  Would my normalized gain be 0/0 ?   Or do you now have 
a new system of percentages where a part may be greater than the 
whole?

4. In any event, it was a rare case of my agreement with 
psychologists, on pure COMMONSENSE alone, without any special 
pedantic definition of "gain".

5. Everything you cited was by yourself, and citing "posts" that are 
unrefereed and unpublished (in the accepted meaning of scholarly 
publications). Did you count that kind of posts as your publications 
before your became a Professor Emeritus?

6. I can readily argue against the rationale of your "normalization" 
which has many possible definitions in general,. . . . .

7. . . . .but in your example, an infinitesimal gain could be equated 
with a 100% gain, where the concept of % gain itself is far from 
being an unequivocal measure. But it's not worth my time to go into 
the rhetorical argument.

8. Suffice to say that given your cited SECOND paragraph, if an idea 
is meritorious, why is it only gradually gaining foothold in 
"introductory" courses in specific subject areas, rather than 
universal, in all subject areas at all levels, such as MY idea on 
exams which I used to evaluate college freshmen as well as advanced 
graduate students taking my courses who  already had Ph.D. or M.D. 
degrees.

9. Perhaps the psychologists are right in your characterization of 
"negative interest" in the pre/post testing issue.  :-)
LLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLL

Responding below to each of the above 9 sections of Ling's post:

11111111111111111111111111111111111111111
1. Ling: "[the definition and example of single student normalized 
gain given by Hake (2006a)]. . . is only pedantic arithmetic using 
[his] definition.  I was using % of total content of the course 
merely as a simple example using simple arithmetic."

Hake's response: Perhaps I was not clear enough. Let me try again, 
drawing on Ling's excellent example. Consider the %pre, %post, 
actual gain G = (%post - %pre), and normalized gain 
g = G / G(max possible) = (%post - %pre) / (100 - %pre), of two 
students A and B:


Student    %post    %pre      G       g

   A         95       75      20      0.8

   B         50        0      50      0.5

If I understand him correctly (please correct me if I'm wrong), Ling 
implies that pre/post testing yields *misleading* results, as 
suggested by the fact that student B achieves an *actual* gain  G = 
50, whereas student A, for whom the course is actually more 
effective, receives the much lower *actual* gain G = 20. 

On the other hand, Hake says pre/post testing yields *valuable* 
results, as suggested by the fact that student B achieves a 
*normalized* gain  g = 0.5, whereas student A, for whom the course is 
actually more effective, receives the much higher g = 0.8.

Since physics education researchers (PERs) generally use the 
*normalized* gain g, rather than the *actual* gain G, as a gauge of 
course effectiveness, Ling's argument is totally irrelevant to the 
issue of the validity of PER's use of pre/post testing to gauge 
course effectiveness.
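For readers who prefer code to arithmetic, the two gain measures can 
be sketched in Python (a minimal illustration; the function names are 
mine, not from the literature):

```python
def actual_gain(pre, post):
    """Actual gain G = %post - %pre."""
    return post - pre

def normalized_gain(pre, post):
    """Normalized gain g = (%post - %pre) / (100 - %pre),
    i.e., the fraction of the maximum possible gain achieved."""
    return (post - pre) / (100 - pre)

# Student A: 75% pre, 95% post
G_A = actual_gain(75, 95)        # 20
g_A = normalized_gain(75, 95)    # 20 / 25 = 0.8

# Student B: 0% pre, 50% post
G_B = actual_gain(0, 50)         # 50
g_B = normalized_gain(0, 50)     # 50 / 100 = 0.5
```

Student B wins on actual gain G, but student A, for whom the course 
closed a larger fraction of the remaining gap, wins on normalized 
gain g.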


22222222222222222222222222222222222222222
2. Ling: "If we are going to speak in terms of SUBSTANCE of your 
definition, then you immediately would have gone into undefined areas 
of 'gain'."

Hake's response: "Undefined areas of gain"?? The areas may be 
"undefined" for Ling, who appears to have zero knowledge of the 
literature in this area, but they are NOT undefined for the 
cognoscenti. As far as I know, what I call the "normalized gain" g 
was first utilized over a half-century ago by the 
psychologists/sociologists Hovland et al. (1949), who called it the 
"effectiveness index"; then, independently, by Frank Gery (1972), who 
called it the "gap-closing factor"; and then, independently, by Hake 
(1998a,b; 2002a,b), who called it the "normalized gain."


33333333333333333333333333333333333333333
3. Ling: "What if I already knew MORE than the total content of the 
course when I began?  Would my normalized gain be 0/0 ?  Or do you 
now have a new system of percentages where a part may be greater than 
the whole?"

Hake's response: In the extremely unlikely event that a student 
scores 100% on the pretest and 100% on the posttest, then her/his 
normalized gain is (as indicated by Ling):

g = (%post - %pre) / (100 - %pre) =  0 / 0

and therefore indeterminate.

But this circumstance, were it to occur, would have little effect on 
the standard analysis of pre/post test results for *courses*. In 
comparing the effectiveness of introductory physics courses, as in 
Hake (1998a,b), it is customary to use the AVERAGE normalized gain 
<g> defined as:

<g> = (<%post> - <%pre>) / (100 - <%pre>)

where the angle brackets signify course averages.

This AVERAGE normalized gain  <g> is ordinarily within about 5% of 
what I call g-ave, the average of the single-student gains g for a 
course, as discussed in Sec. V (and also footnote #46) of Hake 
(1998a).
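As a sketch of that claim (hypothetical scores; the variable and 
function names are mine), <g> computed from class-average scores and 
g-ave computed from individual gains typically land close together:

```python
def class_avg_gain(pre, post):
    """<g>: normalized gain computed from the class-average scores."""
    mpre = sum(pre) / len(pre)
    mpost = sum(post) / len(post)
    return (mpost - mpre) / (100 - mpre)

def avg_single_gain(pre, post):
    """g-ave: average of the single-student normalized gains."""
    gs = [(b - a) / (100 - a) for a, b in zip(pre, post)]
    return sum(gs) / len(gs)

# Hypothetical three-student class:
pre = [40, 50, 60]
post = [70, 80, 90]
# <g> = (80 - 50) / (100 - 50) = 0.6
# g-ave = (0.5 + 0.6 + 0.75) / 3 ≈ 0.617
```

The two statistics differ here by under 2% of the maximum gain, 
consistent with the roughly 5% agreement cited above.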

In "Assessment of Physics Teaching Methods" [Hake (2002b)], I wrote 
[bracketed by lines "HHHHHHH. . . ."]:

HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
As a statistic for comparison of courses and for meta-analyses, the 
class average <g> is better, in my opinion, than g-ave because the 
latter: (a) must exclude students who score 100% on the pretest and 
thus achieve an infinite or indeterminate g; and (b) may introduce 
skewing due to outliers who score near 100% on the pretest and less 
on the posttest, such that their g's are large and negative. The 
selective removal of outliers so as to avoid "(b)" by various 
different investigators with different outlier criteria will lead to 
a degree of uncertainty in comparing normalized gains of different 
courses.
HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
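A sketch of point "(a)" above (hypothetical scores): a student who 
scores 100% on the pretest makes that student's g indeterminate, so 
g-ave must drop her/him, while <g> remains well defined because it 
averages the scores before dividing:

```python
def class_avg_gain(pre, post):
    """<g>: uses class-average scores, so a 100% pretest merely
    raises <%pre>; no 0/0 division occurs."""
    mpre = sum(pre) / len(pre)
    mpost = sum(post) / len(post)
    return (mpost - mpre) / (100 - mpre)

def g_ave(pre, post):
    """g-ave: averages single-student g's, excluding students whose
    100% pretest would give the indeterminate g = 0/0."""
    gs = [(b - a) / (100 - a) for a, b in zip(pre, post) if a < 100]
    return sum(gs) / len(gs)

# One perfect-pretest student plus two ordinary students:
pre = [100, 50, 0]
post = [100, 80, 60]
# <g> = (80 - 50) / (100 - 50) = 0.6, with no student excluded;
# g-ave = (0.6 + 0.6) / 2 = 0.6, after excluding the first student.
```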


44444444444444444444444444444444444444444
4. Ling: "In any event, it was a rare case of my agreement with 
psychologists, on pure COMMONSENSE alone, without any special 
pedantic definition of 'gain'."

Hake's response: As indicated in "1" above, Ling's agreement with 
psychologists on the basis of his "commonsense," is irrelevant to the 
issue of validity of PER's use of the normalized gain as a gauge of 
course effectiveness. In my opinion, Ling's discounting of the 
normalized gain as "pedantic" indicates his abysmal ignorance of the 
literature in this area.


55555555555555555555555555555555555555555
5. Ling (referring to Hake): "Everything you cited . . .[in Hake 
(2006a)]. . . was by yourself, and citing (sic) "posts" that are 
unrefereed and unpublished (in the accepted meaning of scholarly 
publications). Did you count that kind of posts as your publications 
before your became a Professor Emeritus?"

Hake's response: If Ling had taken the time to access the posts cited 
in Hake (2006a) by clicking on the hot-linked URL's he would have 
found many citations to my articles in the peer-reviewed scholarly 
literature, e.g., Hake (1998a,b; 2002a,b). At the time I became 
Professor Emeritus I had authored about 80 papers on condensed-matter 
physics (1956-1989) and education (1987-2000) in the peer reviewed 
scholarly literature as indicated in my VITA at 
<http://www.physics.indiana.edu/~hake>.

6666666666666666666666666666666666666
6. Ling: "I can readily argue against the rationale of your 
"normalization" which has many possible definitions in general,. . . 
. ."

Hake's response: As indicated by Don Burrill (2006), the use of the 
adjective "normalized" in "normalized gain" has been previously 
discussed on various lists, including all those to which this post is 
addressed.

As I stated in Hake (2002c, 2003) [see those posts for the references]:

HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
I think there is a physics/psychometric language gap here. To a 
physicist "to normalize" CAN mean merely "to divide a quantity by 
some parameter so as to make the 'normalized' quantity useful" (i.e., 
"reduce it to a norm or standard" in accord with Webster's 3rd 
unabridged), and does not necessarily have anything to do with the 
statistical 'normal' curve.

For example, a "normalized weight D" of a BODY of weight "W" and 
volume "V" could be DEFINED as:

       D = W/V . . . . . . . . . . . . . . . .  (1)

where D is just the "weight density". Ever since the work of 
Archimedes (231 + or - 21 BC) it's been known by the scientific 
cognoscenti that the density "D" IS A MUCH BETTER INDICATOR OF 
WHETHER OR NOT A BODY WILL SINK OR FLOAT IN WATER THAN IS THE WEIGHT 
"W." For example, a BODY of density D will float if D < D(water).

Similarly the "normalized gain g" for a TREATMENT is DEFINED (Hovland
1949, Gery 1972, Hake 1998a) as:

       g =  Actual Gain/[Actual Gain(max)] . .  (2)

Ever since the work of Hovland et al. (1949) it's been known by the 
pre/post cognoscenti (up until about 1998 probably fewer than 100 
people worldwide) that g IS A MUCH BETTER INDICATOR OF THE EXTENT TO 
WHICH A TREATMENT IS EFFECTIVE THAN IS EITHER actual gain OR 
posttest. For example, if the TREATMENT yields g > 0.3 for a 
mechanics course, then the course is in the "interactive-engagement 
zone."
HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH


77777777777777777777777777777777777777777
7. Ling: ". . . . but in your example, an infinitesimal gain could be 
equated with a 100% gain, where the concept of % gain itself is far 
from being an unequivocal measure."

Hake's response: Don Burrill (2006) correctly noted that an 
"infinitesimal gain" has no operational meaning (i.e., it can't be 
measured). Therefore Ling's statement is meaningless [even despite 
the anti-positivist vigilantes; see Phillips (2000)]. But in response 
to Burrill, Ling (2006b) uncharitably accused him of the cardinal sin 
of never having studied either mathematics or calculus!


88888888888888888888888888888888888888888
8. Ling:  "Suffice to say that given your cited SECOND paragraph, if 
an idea is meritorious, why is it only gradually gaining foothold in 
'introductory' courses in specific subject areas, rather than 
universal, in all subject areas at all levels, such as MY idea on 
exams which I used to evaluate college freshmen as well as advanced 
graduate students taking my courses who  already had Ph.D. or M.D. 
degrees."

[The second paragraph cited in Hake (2006a) was:

"Despite the nay-sayers, pre/post testing is gradually gaining a 
foothold in introductory astronomy, biology, chemistry, computer 
science, economics, engineering, and physics courses; see Hake 
(2004) for references."]

Hake's response: Ling appears to think that HIS idea on exams "used 
to evaluate college freshmen as well as advanced graduate students 
taking my courses who already had Ph.D. or M.D. degrees" is 
"universal." I assume he must mean *potentially* universal since, as 
indicated in "The Physics Education Reform Effort: A Possible Model 
for Higher Education" [Hake (2005a)]:

"Wilbert McKeachie (1987) has pointed out that the time-honored gauge 
of student learning - course exams and final grades - typically 
measures lower-level educational objectives such as memory of facts 
and definitions rather than higher-level outcomes such as critical 
thinking and problem solving."

I wonder if Ling has made any effort to publish and propagate his 
idea on teacher-authored exams so as to promote the universality of 
his idea and thereby promote the much-needed reform of higher 
education.

As to Ling's question "why is [pre/post testing] only gradually 
gaining a foothold in 'introductory' courses in specific subject 
areas, rather than universal," perhaps Ling, being a statistician, 
has never heard of INERTIA [see, e.g., "Eleven Quotes in Honor of 
Inertia," Hake (2006d)].

99999999999999999999999999999999999999999
9. Ling: "Perhaps the psychologists are right in your 
characterization of "negative interest" in the pre/post testing 
issue.  :-)"

Hake's response: For the reasons indicated in Hake (2005b,c,d), I 
think the psychologists are dead wrong in expressing negative 
interest :-( in the pre/post testing issue.


Richard Hake, Emeritus Professor of Physics, Indiana University
24245 Hatteras Street, Woodland Hills, CA 91367
<rrhake at earthlink.net>
<http://www.physics.indiana.edu/~hake>
<http://www.physics.indiana.edu/~sdi>

"Conflict is the gadfly of thought. It stirs us to observation and 
memory. It instigates to invention. It shocks us out of sheep-like 
passivity, and sets us at noting and contriving. Not that it always 
effects this result; but that conflict is a sine qua non of 
reflection and ingenuity."
    John Dewey "Morals Are Human," Dewey: Middle Works, Vol.14, p. 207.


REFERENCES [Tiny URL's courtesy <http://tinyurl.com/create.php>]
Burrill, D. 2006. "Re: How Can We Measure Student Learning?" EdStat 
post of 15 May 2006 15:22:19-0400; online at
<http://lists.psu.edu/cgi-bin/wa?A2=ind0605&L=edstat-l&T=0&X=130D0B43FD760512FF&Y=rrhake%40earthlink.net&P=5103> 
or more compactly at <http://tinyurl.com/z58hm>.

Gery, F.W. 1972. "Does mathematics matter?" in A. Welch, ed.,
"Research papers in economic education." Joint Council on Economic
Education. pp. 142-157.

Hake, R.R. 1998a. "Interactive-engagement vs traditional methods: A
six-thousand-student survey of mechanics test data for introductory 
physics courses," Am. J. Phys. 66: 64-74; online as ref. 24 at
<http://www.physics.indiana.edu/~hake>, or simply click on 
<http://www.physics.indiana.edu/~sdi/ajpv3i.pdf> (84 kB). A 
comparison of the pre- to post-test average normalized gain <g> for 
62 introductory high-school, college, and university physics courses 
enrolling a total 6542 students showed that fourteen "traditional" 
(T) courses  (N = 2084) which made little or no use of 
interactive-engagement (IE) methods achieved an average gain <g>T-ave 
= 0.23 plus or minus 0.04 (std dev), regardless of the experience, 
enthusiasm, talents, and motivation of the lecturers.  In sharp 
contrast, forty-eight courses (N = 4458) which made substantial use 
of IE methods achieved an average gain <g>IE-ave = 0.48 plus or minus 
0.14 (std dev), almost two standard deviations of <g>IE-ave above 
that of the traditional courses. Here: (a) the average normalized 
gain <g> is the actual gain  [<%post> - <%pre>] divided by the 
maximum possible gain [100% - <%pre>] where the angle brackets 
indicate the class averages; (b) IE courses are operationally defined 
as those designed at least in part to promote conceptual 
understanding through interactive engagement of students in heads-on 
(always) and hands-on (usually) activities which yield immediate 
feedback through discussion with peers and/or instructors; (c) T 
courses are operationally defined as those reported by 
instructors to make little or no use of IE methods, relying primarily 
on passive-student lectures, recipe labs, and algorithmic problem 
exams. More recently, average normalized gain differences between T 
and IE courses that are consistent with the work of Hake (1998a,b) 
have been reported by many other physics education research groups as 
referenced in Hake (2002a,b).

Hake, R.R. 1998b. "Interactive-engagement methods in introductory 
mechanics courses," online as ref. 25 at 
<http://www.physics.indiana.edu/~hake>, or simply click on 
<http://www.physics.indiana.edu/~sdi/IEM-2b.pdf> (108 kB).  Submitted 
on 6/19/98 to the Physics Education Research Supplement (PERS) to Am. 
J. Phys. but rejected by its editor on the grounds that the very 
transparent Physical Review-type data tables were too complex! PER 
suffers because it has no Physical-Review-type archival journal. 
This paper is a crucial companion paper to Hake (1998a): average 
pre/post test scores, standard deviations, instructional methods, 
materials used, institutions, and instructors for each of the survey 
courses of Hake (1998a) are tabulated and referenced. In addition the 
paper includes: (a) case histories for the seven IE courses of Hake 
(1998a) whose effectiveness as gauged by pre-to-post test gains was 
close to those of T courses, (b) advice for implementing IE methods, 
and (c) suggestions for further research.

Hake, R.R. 2002a. "Lessons from the physics education reform effort," 
Ecology and Society 5(2): 28; online at
<http://www.ecologyandsociety.org/vol5/iss2/art28/>. Ecology and Society
(formerly Conservation Ecology) is a free online "peer-reviewed 
journal of integrative science and fundamental policy research" with 
about 11,000 subscribers in about 108 countries.

Hake, R.R. 2002b. "Assessment of Physics Teaching Methods," 
Proceedings of the UNESCO-ASPEN Workshop on Active Learning in 
Physics, Univ. of Peradeniya, Sri Lanka, 2-4 Dec. 2002; also online 
as ref. 29 at
<http://www.physics.indiana.edu/~hake/>, or download directly by clicking on
<http://www.physics.indiana.edu/~hake/Hake-SriLanka-Assessb.pdf> (84 kB)

Hake, R.R. 2002c. "Re: Normalized Gain," online at
<http://listserv.nd.edu/cgi-bin/wa?A2=ind0204&L=pod&O=A&P=4112>. Post 
of 11 Apr 2002 20:25:41-0700 to ASSESS, AERA-D, EvalTalk, Math-Learn, 
PhysLrnR, and POD. See the correction at 
<http://listserv.nd.edu/cgi-bin/wa?A2=ind0204&L=pod&O=A&P=4375>.

Hake, R.R. 2003. "Re: Normalized Gain," online at
<http://listserv.nd.edu/cgi-bin/wa?A2=ind0311&L=pod&O=A&P=19019>. 
Post of 25 Nov 2003 15:30:06-0800 to ASSESS, EvalTalk, EdStat, 
Math-Learn, PhysLrnR, and POD.

Hake, R.R. 2004. "Re: Measuring Content Knowledge," POD posts of 14 
& 15 Mar 2004, online at
<http://listserv.nd.edu/cgi-bin/wa?A2=ind0403&L=pod&P=R13279&I=-3> and
<http://listserv.nd.edu/cgi-bin/wa?A2=ind0403&L=pod&P=R13963&I=-3>.

Hake, R. R. 2005a. "The Physics Education Reform Effort: A Possible 
Model for Higher Education," online at 
<http://www.physics.indiana.edu/~hake/NTLF42.pdf> (100 kB). [41 
SCHOLARLY PEER REVIEWED LITERATURE REFERENCES - 4 BY HAKE.] This is a 
slightly updated version of an article that was (a) published in the 
National Teaching and Learning Forum 15(1), December 2005, online to 
subscribers at 
<http://www.ntlf.com/FTPSite/issues/v15n1/physics.htm>, and (b) 
disseminated by the Tomorrow's Professor list 
<http://ctl.stanford.edu/Tomprof/postings.html> as Msg. 698 on 14 Feb 
2006.

Hake, R.R. 2005b. "Re: Why Don't Psychologists Research the 
Effectiveness of Their Own Introductory Courses?" online at 
<http://tinyurl.com/muvy6>. Post of 20 Jan 2005 16:29:56-0800 to 
PsychTeacher (rejected) & PhysLrnR.

Hake, R.R. 2005c. "Do Psychologists Research the Effectiveness of 
Their Own Introductory Courses?" TIPS post of 19 Feb 2005 
07:58:43-0800; online at
<http://www.mail-archive.com/tips@acsun.frostburg.edu/msg13133.html>.

Hake, R.R. 2005d. "Do Psychologists Research the Effectiveness of 
Their Courses? Hake Responds to Sternberg," online at 
<http://tinyurl.com/n9dp6>. Post of 21 Jul 2005 22:55:31-0700 to 
AERA-C, AERA-D, AERA-J, AERA-L, ASSESS, EvalTalk, PhysLrnR, POD, & 
STLHE-L, TeachingEdPsych.

Hake, R.R. 2006a. "Re: How Can We Measure Student Learning?"  EdStat 
post of 14 May 2006 18:50:16-0700; online at
<http://lists.psu.edu/cgi-bin/wa?A2=ind0605&L=edstat-l&T=0&O=D&X=7D01A01F07EC4A1CBF&Y=rrhake%40earthlink.net&P=4378>, 
or more compactly at <http://tinyurl.com/ho82j>.

Hake, R.R. 2006b. "How Can We Measure Student Learning?" online at
<http://listserv.nd.edu/cgi-bin/wa?A2=ind0605&L=pod&O=D&P=5631> and (addenda)
<http://listserv.nd.edu/cgi-bin/wa?A2=ind0605&L=pod&O=D&P=5744>. Post 
of 13 May 2006 to AERA-A, AERA-B, AERA-C, AERA-D, AERA-J, AERA-K, 
AERA-L, ASSESS, Biopi-L, Chemed-L, EdStat, EvalTalk, IFETS, ITFORUM, 
PhysLrnR, Phys-L, POD, PsychTeacher, RUME, TeachingEdPsych, & TIPS. 
[16 SCHOLARLY PEER REVIEWED LITERATURE REFERENCES - 2 BY HAKE.]

Hake, R.R. 2006c. "Re: The Value of Pre/post Testing," online at
<http://listserv.nd.edu/cgi-bin/wa?A2=ind0603&L=pod&P=R16153&I=-3>. 
Post of 22 Mar 2006 to AERA-D, AERA-L, ARN-L, ASSESS, EDDRA, 
EdStat, EvalTalk, PhysLrnR, POD, & STLHE-L. [11 SCHOLARLY PEER 
REVIEWED LITERATURE REFERENCES - 4 BY HAKE.]

Hake, R.R. 2006d. "Eleven Quotes in Honor of Inertia," online on the 
archives of HOPOS (History of the Philosophy of Science). 
<http://listserv.nd.edu/archives/hopos-l.html>, or more compactly at 
<http://tinyurl.com/hdver>.  Post of 24 Apr 2006 09:56:43-0700 to 
AmericanPhilosophy, HOPOS, IFETS, & WilliamJames-L. One must 
subscribe to HOPOS to access its archives, but it takes only a few 
minutes to subscribe by following the simple directions at
<http://listserv.nd.edu/archives/hopos-l.html> / "Join or leave the 
list (or change settings)" where "/" means "click on." If you're 
busy, then subscribe using the "NOMAIL" option under "Miscellaneous." 
Then, as a subscriber, you may access the archives and/or post 
messages at any time, while receiving NO MAIL from the list!

Hovland, C. I., A. A. Lumsdaine, and F. D. Sheffield. 1949. "A 
baseline for measurement of percentage change," in C. I. Hovland, A. 
A. Lumsdaine, and F. D. Sheffield, eds. 1965, "Experiments on mass 
communication." Wiley (first published in 1949). Reprinted as pages 
77-82 in P. F. Lazarsfeld and M. Rosenberg, eds. 1955. "The language 
of social research: a reader in the methodology of social research." 
Free Press.

Ling, R. 2006a. "Re: How Can We Measure Student Learning?" EdStat 
post of 15 May 2006 00:59:20-0400; online at
<http://lists.psu.edu/cgi-bin/wa?A2=ind0605&L=edstat-l&T=0&O=D&X=7D01A01F07EC4A1CBF&Y=rrhake%40earthlink.net&P=4494>, 
or more compactly at  <http://tinyurl.com/jk86h>.

Ling, R. 2006b. "Re: How Can We Measure Student Learning?" EdStat 
post of 15 May 2006 19:17:28-0400; online at
<http://lists.psu.edu/cgi-bin/wa?A2=ind0605&L=edstat-l&T=0&X=130D0B43FD760512FF&Y=rrhake%40earthlink.net&P=5214>, 
or more compactly at <http://tinyurl.com/n3n5g>.

McKeachie, W.J. 1987. "Instructional evaluation: Current issues and 
possible improvements," Journal of Higher Education 58(3): 344-350.

Phillips, D.C. 2000. "Expanded social scientist's bestiary: a guide 
to fabled threats to, and defenses of, naturalistic social science." 
Rowman & Littlefield.

