Workshop 1 - Data and Model Visualizations for Effective Communication of Statistical Results
Ikkyu Choi, Educational Testing Service
Statistics is about using data to make a best guess, and to quantify how bad that guess is (Rao, 1997). Despite these intuitive goals, the communication of statistical results has been challenging, even within academic communities. This challenge is particularly pertinent to language assessment professionals, who frequently need to communicate statistical concepts and results to peers, students, and/or assessment stakeholders. Effective visualizations of data and statistical models can facilitate such communication by replacing technical terminology with straightforward visual representations.
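As a concrete illustration of Rao's framing, the sketch below computes a "best guess" (the sample mean) and a quantification of "how bad that guess is" (the standard error) for a made-up set of test scores. The data and names are hypothetical, not taken from the workshop materials.

```python
import statistics

# Hypothetical test scores, for illustration only
scores = [72, 85, 78, 90, 66, 81, 75, 88]

best_guess = statistics.mean(scores)        # the best guess: sample mean
sd = statistics.stdev(scores)               # sample standard deviation
std_error = sd / len(scores) ** 0.5         # how bad the guess is: standard error

print(f"estimate = {best_guess:.2f} +/- {std_error:.2f}")
```

In a visualization of the kind the workshop describes, this pair of numbers might appear as a point with an error bar, letting an audience read "estimate and uncertainty" without the terms "mean" or "standard error" ever being spoken.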
Workshop 2 - Developing and Using Rating Scales for Writing Assessment
Sara T. Cushing (Weigle), Georgia State University
In this hands-on workshop we will cover fundamental considerations for developing, using, and validating rating scales for writing assessment, both in classroom settings and for large-scale tests. In the first part of the workshop we will discuss the role of the rating scale in writing assessment in terms of scoring validity (Shaw & Weir, 2007) and weigh the advantages and disadvantages of different types of rating scales (i.e., holistic vs. analytic scales). We will generate our own criteria for scoring writing for a specific assessment purpose with a specific group of learners and will experience different methods of rating scale construction, including the exploration of online resources for scale development. We will also use different published rating scales to evaluate second language writing samples and discuss the benefits and challenges of using different types of scales. We will discuss best practices in training, monitoring, and providing feedback to raters in large-scale assessments, as well as how rating scales can be used in classrooms to lessen teachers' marking load, provide useful feedback to students, and support self-assessment and peer assessment. Depending on participant interests and experience, we will cover the following additional topics: aligning scales to standards such as the CEFR, investigating and reducing various types of rater bias, and interpreting and using published quantitative and qualitative research on rating scales to inform our own research and practice.
Reference: Shaw, S. D., & Weir, C. J. (2007). Examining writing: Research and practice in assessing second language writing (Vol. 26). Cambridge: Cambridge University Press.
Workshop 3 - Current and Emerging Applications of Information Technologies for Language Assessment
Erik Voss, Northeastern University, USA
The role of technology in language learning and assessment is expanding rapidly. Computers are used to develop, deliver, and automatically score language tests. Teachers, test developers, and language testing researchers can benefit professionally from learning about potential capabilities and challenges of assessing language through computer technology (Chapelle & Douglas, 2006). This workshop will explore information technologies that make it possible to assess language learners in technology-enhanced language learning environments. We will focus on how current and developing computer technologies limit or contribute to assessment task design and score interpretation.
Part I: We will begin by examining information technologies currently used for assessing language in online education and classroom-based programs, such as multimedia, social media, audio and video hardware, video conferencing software, and mobile computing. Discussion topics will include human-computer interaction, digital literacy and anxiety, game-based approaches, social media, and natural language processing. Hands-on activities will contribute to the discussion of how different language technologies may affect language performance and how technology can alter the construct definition or ability being measured.
Part II: Building on the discussions and activities from part one, part two will include examples of sophisticated and emerging technologies (e.g., speech recognition, facial recognition, and virtual reality) that could advance current language assessment frameworks, task types, and forms of human-computer interaction. Consideration will be given to the use of commercial technologies and data analytics for expanding knowledge of testing concepts and future directions (especially in terms of artificial intelligence and augmented reality). This part will also address criteria, principles, and argument-based validation approaches for evaluating assessments that use technology.
Reference: Chapelle, C. A. & Douglas, D. (2006). Assessing language through computer technology. Cambridge: Cambridge University Press.