ILTA Student Travel Award

The ILTA Student Travel Award is intended to support graduate students in attending and presenting at LTRC. The recipient will receive an award of US$750 towards travel expenses for attending LTRC and official recognition in the LTRC Conference Program. This award also includes a one-year free membership to ILTA and may additionally include a waiver of the LTRC conference registration fee (to be determined by the LTRC Organizing Committee).

Graduate students may apply for this award if their proposal to present at LTRC has been accepted. The proposal should be for a research paper or symposium paper; applications for poster or work in progress proposals will not be considered. The student must be listed as an author, but need not be sole or first author. For co-authored papers, co-authors must be other students rather than supervisors, professors or other senior colleagues.

Graduate students are eligible for consideration if they:

1. were enrolled in a doctoral program at the time of submission

2. are not receiving other financial support that covers a significant portion of the expenses to attend LTRC

3. do not study or live at the LTRC host site

4. have not won the ILTA Student Travel Award before.

Evaluations are based on the applicant's financial need and the quality of the paper abstract. The Selection Committee may recommend making more than one travel award in any given year; the number of awards depends on available resources.


Awardees include:

2018 (Auckland, New Zealand)
Clarissa Lau (University of Toronto). How Young Learners’ Interest Can Be Used to Support a Learner-Oriented Assessment Approach (co-authored with Megan Vincett).

Simon Davidson (University of Melbourne). The domain expert perspective on workplace readiness: Investigating the standards set on the writing component of an English language proficiency test for health professionals.

2017 (Bogota, Colombia)
Saerhim Oh (Teachers College, Columbia University). Investigating Second Language Learners’ Use of Linguistic Tools and Its Effect in Writing Assessment.

2016 (Palermo, Italy)

Sharon Yahalom (University of Melbourne). Nurses’ perspectives of the qualities of referral letters: Towards profession-oriented assessment criteria.

2015 (Toronto, Canada)

Laura Ballard and Shinhye Lee (Michigan State University). How young children respond to computerized reading and speaking test tasks.

Yueting Xu (Hong Kong University). Language Assessment Literacy in practice: A case study of a Chinese university English teacher.

Naoki Ikeda (University of Melbourne). Investigating constructs of L2 pragmatics through L2 learners’ oral discourse and interview data.

2014 (Amsterdam, Netherlands)

Jon Trace (University of Hawai'i). Building a better rubric: Towards a more robust description of academic writing proficiency (with Gerriet Janssen and Valerie Meier).

Maryam Wagner (Ontario Institute for Studies in Education/University of Toronto). Use of a diagnostic rubric for assessing writing: Students’ perceptions of cognitive diagnostic feedback.

2013 (Seoul, South Korea)

Nick Zhiwei Bi (University of Sydney). An investigation into the nature of strategic competence through test-takers’ lexico-grammatical test performance.

Chih-Kai (Cary) Lin (University of Illinois at Urbana-Champaign). Handling sparse data in performance-based language assessment under the generalizability theory framework.

Soo Jung Youn (University of Hawai'i at Mānoa). Validating task-based assessment of L2 pragmatics in interaction using mixed methods.

2012 (Princeton, USA)

John Pill (The University of Melbourne). Using health professionals’ feedback commentary to inform the validation of an ESP rating scale.

Ikkyu Choi (University of California, Los Angeles). Modeling the structure of passage-based tests: An application of a two-tier full information item factor analysis.

Huei-Lien Hsu (University of Illinois at Urbana-Champaign). The impact of World Englishes on oral proficiency assessment: rater attitude, rating tendency and challenges.

Jing Xu (Iowa State University). Collocations in learner speech: Accuracy, complexity and context.

2011 (Michigan, USA)

Christine Doe (Queen's University). Validation of a large-scale assessment for diagnostic purposes across three contexts: Scoring, teaching, and learning.

Yujie Jia (University of California, Los Angeles). Justifying score-based interpretations from a second language oral test: Multi-group confirmatory factor analysis.

2010 (Cambridge, UK)

Heike Neumann (McGill University). What does it take to make a grade? Teachers assessing grammatical ability in L2 writing with rating scales.

Youngsoon So (UCLA). The dimensionality of test scores on a second language reading comprehension test: Implications for accurate estimation of a test-taker’s level of L2 reading comprehension.

Liu Weiwei (The Hong Kong Polytechnic University). A multi-dimensional approach to the exploration of rater variability: The impacts of gender and occupational knowledge on the assessment of ESP oral proficiency.

2009 (Denver, USA)

Hongwen Cai (University of California at Los Angeles). Clustering to inform standard setting in an oral test for EFL learners.

Yao Hill (University of Hawai'i). DIF investigation in TOEFL iBT reading comprehension: Interaction between content knowledge and language proficiency.

Talia Isaacs (McGill University). Judgments of L2 comprehensibility, accentedness and fluency: The listeners' perspective.

Jiyoon Lee (University of Pennsylvania). Analysis of test-takers' performance under test-interlocutors' influences in paired speaking assessment.

Yi-Ching Pan (The University of Melbourne). Washback from English certification exit requirements: A conflict between teaching and learning.

Anja Roemhild (University of Nebraska). Assessing domain-general and domain-specific academic English language proficiency.

Jie (Jennifer) Zhang (Guangdong University of Foreign Studies). Exploring rating process and rater belief: Transparentizing rater variability.

2008 (Hangzhou, China)

Luke Harding (University of Melbourne). Test-taker attitudes towards diverse-accented speakers in EAP listening assessment: The verbal guise approach.

Jiyoung Kim (University of Illinois, Urbana-Champaign). Justifying intended test effects to stakeholders by using effect arguments: A case of development and validation of an ESL writing test.

2007 (Barcelona, Spain)

Khalid Barkaoui (Ontario Institute for Studies in Education of the University of Toronto). Effects of thinking aloud on ESL rater performance: A FACETS analysis.

Taejoon Park (Teachers College, Columbia University). Investigating the construct validity of the Community English Writing Test.

2006 (Melbourne, Australia)

Gary Ockey (University of California, Los Angeles). Validating the group oral discussion task: The effect of personality type.

Tomoko Horai (Roehampton University). Intra-task comparison in a monologic oral performance test: The impact of task manipulation on performance.

Jung Tae Kim, Hyeong-Jong Lee, Jinshu Li and Carsten Wilmes (University of Illinois, Urbana-Champaign). Group dynamics in language test development.

2005 (Ottawa, Canada)

Youngmi Yun (University of Illinois at Urbana-Champaign). Factors explaining EFL learners’ performance in a timed-essay test: A structural equation modeling approach.

Ute Knoch (University of Auckland). Individual feedback to enhance rater training: Does it work?
