Robert Lado Memorial Award

The purpose of this award is to recognize the best paper presentation by a graduate student at the Language Testing Research Colloquium (LTRC), the annual meeting of the International Language Testing Association (ILTA). The recipient receives an award of US$500.

The Robert Lado Memorial Award, also known as the ILTA Best Graduate Student Paper Award, was initiated in 1996 in recognition of the late Professor Robert Lado, by all accounts the founder of modern language testing research and development and an internationally renowned scholar. Each year at LTRC a committee, appointed by the President of ILTA, selects the winner from among the papers delivered by graduate students. These papers will already have passed a rigorous review process simply to be accepted for presentation at the conference. Papers co-authored by graduate students and more senior scholars are not eligible for consideration.

The criteria used in recent years to evaluate the student papers are given below. Although content (the study) and form (the actual presentation) are intertwined, the content is weighted more heavily (60%) than the form (40%). With respect to content, a distinction is made between empirical and non-empirical studies.

The Study (60%)

Empirical papers

Significance: Importance of the topic, originality of the research, and contribution to the field, including appropriate reference to relevant literature (embedding)

Design: The care taken in the research design (including appropriateness of design, use of instruments, and data-collection procedures)

Methodologies: The appropriateness of the quantitative and/or qualitative methodologies used to address the research questions and the care taken in data analysis and interpretation

Conclusions: Validity of the conclusions, including awareness of the limitations of the study and its conclusions

Theory into practice: Potential applied uses of the knowledge gained by the specific research

Non-empirical papers

Significance: Importance of the topic, originality of the research, and contribution to the understanding of the topic

Coverage: The extent to which major works in the area are covered to appropriate depth

Analysis: Appropriateness and depth of the categorization, integration, synthesis, and critique of the relevant literature

Conclusions: Validity of the conclusions, including awareness of the limitations of the study and its conclusions

Theory into practice: Potential applied uses of the knowledge gained by the specific research


The Presentation (40%)

The professionalism and clarity of the presentation itself (the ease with which it can be followed, coherence, advance organizers, audience awareness, and effective use of media, e.g., slides, overheads, handouts)

The sufficiency of the information presented to allow the audience to make a reasonable evaluation of the research

Time management and appropriate and effective handling of questions

Awardees include:

2019 (Atlanta, Georgia, USA)

Christine Marie Barron (OISE, University of Toronto). Reading self-concept and reading achievement in monolingual and multilingual students: A cross-panel multiple-group SEM analysis.

2018 (Auckland, New Zealand)

Fauve De Backer (Ghent University, Belgium). The effect of multilingual assessment on the science achievement of pupils

2017 (Bogota, Colombia)

Saerhim Oh (Teachers College, Columbia University). Investigating second language learners' use of linguistic tools and its effect in writing assessment.

2016 (Palermo, Italy)

Kellie Frost (University of Melbourne). The dynamics of test impact in the context of Australia's skilled migration policy: Reinterpreting language constructs and scores.

2015 (Toronto, Canada)

Benjamin Kremmel (University of Innsbruck). The more, the merrier? Issues in measuring vocabulary size.

2014 (Amsterdam, Netherlands)

Maryam Wagner (Ontario Institute for Studies in Education/University of Toronto). Use of a diagnostic rubric for assessing writing: Students’ perceptions of cognitive diagnostic feedback.

2013 (Seoul, South Korea)

Jonathan Schmidgall (University of California, Los Angeles). Modelling speaker proficiencies, comprehensibility, and perceived competence in a language use domain.

2012 (Princeton University, Princeton, NJ, USA)

Ikkyu Choi (University of California, Los Angeles). Modeling the Structure of Passage-Based Tests: An Application of a Two-Tier Full Information Item Factor Analysis.

2011 (University of Michigan, Ann Arbor, MI, USA)

Heejeong Jeong (University of Illinois at Urbana-Champaign). Past, present, and future of language assessment courses: Are we headed in the right direction?

2010 (University of Cambridge, UK)

Ron Martinez (University of Nottingham).  Evidence of lack of processing of multiword lexical items in reading tests.

2009 (Denver, Colorado, USA)

Jiyoon Lee (University of Pennsylvania). The analysis of test takers’ performances under their test-interlocutor influence in a paired speaking assessment.

2008 (Hangzhou, China)

May Tan (McGill University). Bilingual high-stakes mathematics and science exams in Malaysia: Pedagogical and linguistic issues.

2007 (Barcelona, Spain)

Spiros Papageorgiou (Lancaster University). Investigating perceptions of language ability in the CEFR Scales.

2006 (Melbourne, Australia)

Lyn May (University of Melbourne). "Effective interaction" in a paired candidate EAP speaking test.

2005 (Ottawa, Canada)

Lorena Llosa (University of California, Los Angeles). Validating the use of a standards-based classroom assessment of English proficiency: A multitrait-multimethod approach.

2004 (Temecula, California, United States)

Vipavee Vongpumivitch (University of California, Los Angeles). Measuring the knowledge of text structure in academic English as a Second Language.

2003 (Reading, United Kingdom)

Lindsay Brooks (University of Toronto). An investigation of the interactions in paired oral proficiency testing.

Jianda Liu (City University of Hong Kong). Use of multiple-choice completion test for Chinese EFL learners.

2002 (Hong Kong, China)

Nathan T. Carr, Michael J. Pan, and Xiaoming Xi (University of California, Los Angeles). Construct refinement and automated scoring in web-based testing.

Sang-Keun Shin and Priya Abeywickrama (University of California, Los Angeles). Why not non-native varieties of English as test input?

2001 (St. Louis, Missouri, United States)

Yeonsuk Cho (University of Illinois). Examining a process-oriented writing assessment for large scale EAP assessment.

2000 (Vancouver, British Columbia, Canada)

Tom Lumley (Hong Kong Polytechnic University). Assessment criteria in a large-scale writing test: What do they really mean to the raters?

1999 (Tsukuba, Japan)

Amy Yamashiro (Temple University, Japan). Using structural equation modeling to validate a rating scale.

1998 (Monterey, California)

Annie Brown (University of Melbourne). Interview style and candidate performance in the IELTS oral interview.

1997 (Orlando, Florida, United States)

Cathie Elder (The University of Melbourne). Is it fair to assess native and non-native speakers together on school foreign language examinations?

1996 (Tampere, Finland)

Vivien Berry (The University of Hong Kong). Ethical considerations when assessing oral proficiency in Pairs.

Tom Lumley and Annie Brown (University of Melbourne). Interlocutor variability in specific-purpose language performance tests.
