2018 Keynote Speakers
The material presented on this site is provided expressly for non-commercial educational purposes. By using the videos and accompanying instructional materials, you agree not to reproduce, distribute, modify, transmit, reuse, or adapt the material contained on this site.

Samuel J. Messick Memorial Lecture
Sponsored by Educational Testing Service

Pamela Moss
University of Michigan


Evolving Our Knowledge Infrastructures in Measurement / Recovering Messick’s Singerian Approach to Inquiry


There is a growing call in science and technology studies for critical attention to the infrastructures through which we generate knowledge about the human and natural worlds. Such Knowledge Infrastructures (KIs) entail networks of people, practices, norms, material and conceptual artifacts, standards, technologies, institutions, and the relationships among them. They embody prior work that does not need to be reconsidered at the start of a new venture; in other words, they reflect what we take for granted as the ways things are done (Edwards et al., 2013; Slota & Bowker, 2017; adapted from Moss & Lagoze, 2018).

Our canonical understandings of validity theory in measurement, where Messick’s work has been seminal, are part of a robust KI supporting test development and evaluation and, in turn, action-orienting inferences about test takers, where other KIs implicated in local contexts come into play.

A recent multi-disciplinary conference on KIs asked: (1) How are KIs changing? (2) How do KIs reinforce or redistribute authority, influence, and power? (3) How can we best study, know, and imagine today’s (and tomorrow’s) KIs? (Edwards et al., 2013).

With his argument for a “Singerian approach to inquiry,” Messick (1989) has also given us tools to address questions like these. He called on us to evaluate one approach to inquiry in terms of others “to elucidate or disrupt” the focal inquiry processes. With Messick’s argument as inspiration and resource, I explore answers to the critical questions raised by KI scholars, focusing in particular on how our KIs in measurement might evolve to better support interpretation and use of test scores in local contexts.

Edwards, P. N., Jackson, S. J., Chalmers, M. K., Bowker, G. C., Borgman, C. L., Ribes, D., Burton, M., & Calvert, S. (2013). Knowledge infrastructures: Intellectual frameworks and research challenges. Ann Arbor, MI: Deep Blue. Retrieved from http://hdl.handle.net/2027.42/97552.

Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13-103). New York: American Council on Education/Macmillan.

Moss, P., & Lagoze, C. (2018). Evolving knowledge infrastructures in education. Introduction to symposium presented at the Annual Meeting of the American Educational Research Association, New York, USA.

Slota, S. C., & Bowker, G. C. (2017). How infrastructures matter. In U. Felt, R. Fouché, C. A. Miller, & L. Smith-Doerr (Eds.), The handbook of science and technology studies (pp. 529-554). Cambridge, MA: The MIT Press.

Pamela Moss is a Professor of Education at the University of Michigan’s School of Education. Her work lies at the intersections of educational assessment, philosophy of social science, and interpretive or qualitative research methods. Her approach to the study of assessment engages the critical potential of dialogue across research discourses--educational measurement, hermeneutics and critical theory, and sociocultural studies--sometimes to complement and sometimes to challenge established theory and practice in assessment. Her current research agenda focuses on validity theory in educational assessment, assessment as a social practice, and approaches to methodological pluralism in research policy and practice. Sam Messick’s (1989) call for a “Singerian mode of inquiry,” in which one method of inquiry is critically evaluated in terms of another, provided an early inspiration for this multi-methodological agenda.

Moss is an elected member of the US National Academy of Education and a fellow of the American Educational Research Association (AERA). She was a member of the Joint Committee revising the 1999 AERA, APA, and NCME Standards for Educational and Psychological Testing, a member of the US National Research Council's Committee on Assessment and Teacher Quality, and chair of AERA's Task Force on Standards for Reporting on Empirical Social Science Research in AERA Publications. She co-founded (with Mark Wilson and Paul De Boeck) the journal Measurement: Interdisciplinary Research and Perspectives. She recently completed a chapter on “Engaging Methodological Pluralism” (2016) with Ed Haertel for the fifth edition of AERA’s Handbook of Research on Teaching.


Alan Davies Lecture
Sponsored by The British Council


Joseph Lo Bianco
University of Melbourne


No Policy without Testing! How the language of policy persuasion and persuasive language helps to make testing count


In policy-making discussions, testing has a particular kind of presence and power because of what tests produce, or are assumed to produce: empirical, statistically presented evidence of phenomena, characterized by objectivity, reliability, and validity. Policy making is a special field of talk that involves decision making about the disbursement of public resources, which, as classically understood in liberal democratic states, are collected from taxpayers on the assumption that they in turn receive representation. In effect, this holds that when citizens become taxpayers and surrender their hard-earned income, they are entitled in exchange to participate in decisions about the use and deployment of that income forgone. The rallying phrase accompanying this contract of governance is “No Taxation without Representation,” attributed to James Otis in 1761, expressing the resentment of American colonists at being taxed by a British parliament to which they sent no representatives.

This talk will connect the promise of language testing, and its role (assumed and actual) in decision making in the various domains in which it is applied, with the work and ‘talk’ of language testing. I will reflect on Alan Davies’s contribution to the contract of governance through his generation of both an ethical discourse of testing and a participatory mode of policy making. Testing is seen, in this view, as a way to make information “honest”.


Joseph Lo Bianco is Professor of Language and Literacy Education in the Melbourne Graduate School of Education at the University of Melbourne, Australia, and a past president of the Australian Academy of the Humanities. He specialises in language policy studies, bilingualism and intercultural education, and research and action on peace and conflict in multi-ethnic settings. Since 2012 he has directed a multi-country project on language policy and social cohesion in conflict-affected settings in Southeast Asia for UNICEF and has conducted large-scale policy workshops for high-level policy officials across Asia under the auspices of UNESCO. He has an extensive list of publications, with a strong recent focus on social cohesion, peace, and conflict mitigation in multi-ethnic settings. As author of the Australian National Policy on Languages in 1987, he was entrusted with its initial implementation, under which he assisted Professor Tim McNamara in establishing the Language Testing Research Centre at the University of Melbourne; this began a long friendship and collaboration with Alan Davies, the founding Director of the Centre.



Distinguished Achievement Award Lecture
Sponsored by Cambridge Assessment English and ILTA


Carolyn E. Turner
McGill University


The methodological evolution in language testing/assessment research and the role of community: A personal view

Looking at the diverse methods, methodologies, and study designs employed in language testing/assessment research today, one might think it was always that way. That would be far from the truth. What is intriguing about our present research context is the way we got here. I would argue that our present repertoire of research methods evolved due to many factors (e.g., the trajectory of research issues and questions that demanded new ways to explore and investigate phenomena, as in other disciplines in the social sciences), but the main contributing factor I would like to focus on is the role of community and the different generations represented within it. I’m not referring to just any scholarly community; I’m referring to our language assessment/testing community as manifested in our annual meeting, LTRC.

As a developing researcher, I was initially influenced by the warm welcome I received at my first LTRC, back in 1989 in San Antonio, Texas, where I presented my PhD work. I also couldn’t help but notice the passionate and colorful debates surrounding paper presentations (including my own, which used structural equation modeling with the computer program LISREL, then rather new in our field’s research). The LTRC debates would frequently include the methods used and the study design. What was refreshing and valuable, however, was that the dialogue would spill out after the presentations into other spaces where we had fun together--coffee breaks, meals, late evening events--and this is where much exploring, modeling, and learning continued in terms of diverse methodologies for our research questions. As a growing scholar initially educated in quantitative methods but feeling the need to add a qualitative dimension, and later a combined analysis (mixed methods research, MMR), I was motivated by this sense of community and encouraged by the importance placed upon the research design of a study.

In this talk, I would like to build around the theme of the methodological evolution in language testing/assessment research, but explore it specifically through a community perspective and the impact of that perspective. I will draw on my own experiences in several instances and demonstrate the questioning, but tolerant, approach to new ways of addressing research questions exemplified in our community’s methodological trajectory. In my view, it is part of our collective identity and a positive force for novice and veteran researchers alike. I will argue that this is not always the case in other disciplines, and that this approach has served our community well in making advances on the issues at hand.


Carolyn E. Turner was Associate Professor of Second Language Education in the Department of Integrated Studies in Education at McGill University until her retirement in December 2015. Her courses focused on classroom assessment, language testing, and research methods. She continues to be active in research and in service to her professional community. Her research examines language testing/assessment in educational and healthcare settings. More recently, her focus has also been on learning-oriented assessment (LOA) in classroom contexts. She is co-authoring a book with James Purpura on LOA in which they examine the potential of learning as integrated into assessments and how assessments serve learning when embedded in teaching. Carolyn served as President of ILTA (2009 & 2010); was Associate Editor of the journal Language Assessment Quarterly from its inception until 2012 and remains on its Editorial Advisory Board; and was a founding member of the Canadian Association for Language Assessment/Association canadienne d’évaluation des langues. She has worked with organizations on high-stakes testing issues, for example the TOEFL Committee of Examiners (ETS), the Test Advisory Panel for Paragon Testing Enterprises, the International Civil Aviation Organization (ICAO) Aviation English Language Test Service (AELTS), and the Quebec Ministry of Education. Her publications appear in journals such as Language Testing, Language Assessment Quarterly, TESOL Quarterly, Canadian Modern Language Review, and Health Communication, as well as in chapters in edited collections. In addition, Carolyn has valued supervising graduate students and mentoring young faculty, and she is encouraged by their increasing involvement and presence in the international language testing/assessment community.
