Workshops

Workshop 1 - Data and Model Visualizations for Effective Communication of Statistical Results

Ikkyu Choi, Educational Testing Service

Statistics is about using data to make a best guess, and to quantify how bad that guess is (Rao, 1997). Despite these intuitive goals, the communication of statistical results has been challenging, even within academic communities. This communication challenge is particularly pertinent to language assessment professionals who frequently need to communicate about statistical concepts and results with peers, students, and/or assessment stakeholders. Effective visualizations of data and statistical models can facilitate such communications by replacing technical terminology with straightforward visual representations.

The goal of this workshop is to provide participants with an understanding of principles for effective data and model visualizations, and hands-on experience in creating them. All visualizations in this workshop will be demonstrated and created in real time using R, a free software environment for statistical computing and graphics. The corresponding R code will be provided to participants in an editable format for their own projects.

Participants should bring their own laptops (with any major operating system) with the most current version of R installed. Required packages will be installed as part of the workshop. Previous experience with R will not be required.

Part 1 (Day 1, Morning): The first part of this workshop will provide theoretical and practical background for constructing effective data visualizations. We will begin with contrasting examples of data visualizations that facilitate or hinder the communication of intended messages, and derive from them a set of principles for effective statistical graphics. We will then learn the basics of R and its graphics engine, and use them to create “facilitating” data visualization examples.
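
As a flavor of what Part 1 will cover, the sketch below uses base R graphics to draw a simple, clearly labeled scatterplot. The built-in mtcars data set and the specific labels are illustrative assumptions, not workshop materials.

# A minimal base-R sketch of a "facilitating" visualization:
# a scatterplot with informative labels and a message-oriented title.
# The mtcars data set is an assumption; the workshop data are not specified.
plot(mtcars$wt, mtcars$mpg,
     xlab = "Car weight (1,000 lbs)",
     ylab = "Fuel efficiency (miles per gallon)",
     main = "Heavier cars tend to be less fuel efficient",
     pch  = 19)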

Part 2 (Day 1, Afternoon): The second part of this workshop will focus on refining data visualizations for presentations and publications. This part will be organized as a series of activities, each of which will attempt to recreate a published data visualization example. Through the activities, we will learn how to customize data representation, add labels and legends, and emphasize focal points. We will also cover a number of export options and discuss which are preferable depending on the operating system and the target outlet.
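
As an illustration of the kind of refinement and export work planned for Part 2, the hedged sketch below colors points by group, adds a legend, and writes the figure to a PDF file. The grouping variable, colors, and file name are assumptions made for illustration only.

# Sketch: refine a base-R plot and export it as a PDF for publication.
pdf("refined_scatterplot.pdf", width = 6, height = 4)  # open a PDF graphics device
plot(mtcars$wt, mtcars$mpg,
     col = ifelse(mtcars$am == 1, "blue", "gray40"), pch = 19,
     xlab = "Car weight (1,000 lbs)", ylab = "Miles per gallon")
legend("topright", legend = c("Manual transmission", "Automatic transmission"),
       col = c("blue", "gray40"), pch = 19, bty = "n")
dev.off()  # close the device, which writes the file to disk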

Part 3 (Day 2, Morning): In the third part of this workshop, we will focus on the visualization of statistical models. We will learn how to visualize statistical models popular among language assessment professionals, such as linear regression models, multilevel models, and factor analysis/item response theory models, by presenting them as straight lines or curves drawn on top of the data. We will compare the resulting visualizations with the standard outputs from the models, and discuss how effective visualizations can show what the models actually do and how well they fit the data. We will also discuss how to represent the uncertainty of the models to signify the quality of the information contained in the graphics.
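
The sketch below shows one way such a model visualization might look in base R: a simple linear regression fitted with lm() is drawn over the raw data together with a 95% confidence band to convey the model's uncertainty. The data set and styling are assumptions, not workshop materials.

# Sketch: a fitted regression line with a 95% confidence band on top of the data.
fit  <- lm(mpg ~ wt, data = mtcars)                       # fit a simple linear model
wts  <- seq(min(mtcars$wt), max(mtcars$wt), length.out = 100)
pred <- predict(fit, newdata = data.frame(wt = wts), interval = "confidence")
plot(mtcars$wt, mtcars$mpg, pch = 19,
     xlab = "Car weight (1,000 lbs)", ylab = "Miles per gallon")
polygon(c(wts, rev(wts)), c(pred[, "lwr"], rev(pred[, "upr"])),
        col = adjustcolor("steelblue", alpha.f = 0.3), border = NA)  # uncertainty band
lines(wts, pred[, "fit"], col = "steelblue", lwd = 2)                # fitted line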

Part 4 (Day 2, Afternoon): The last part of this workshop will introduce interactive and dynamic visualizations. We will learn how to create interactive graphs that take user input and return visualized results, as well as dynamic representations of time series and/or repeated events. We will then see how popular statistical and measurement concepts, such as p-values, power, and reliability, can be communicated with interactive and/or dynamic visualizations in an intuitive and straightforward manner.
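
One common way to build such interactive graphs in R is the shiny package; naming it here is an assumption, since the workshop description does not specify packages. The toy app below lets the user vary the sample size and shows the distribution of simulated p-values under the null hypothesis.

# Hedged sketch: a minimal interactive visualization of p-values (requires the shiny package).
library(shiny)

ui <- fluidPage(
  sliderInput("n", "Sample size per group:", min = 5, max = 200, value = 30),
  plotOutput("pvals")
)

server <- function(input, output) {
  output$pvals <- renderPlot({
    # simulate 1,000 two-sample t-tests with no true group difference
    p <- replicate(1000, t.test(rnorm(input$n), rnorm(input$n))$p.value)
    hist(p, breaks = 20, xlab = "p-value",
         main = "Simulated p-values under the null hypothesis")
  })
}

shinyApp(ui, server)  # launches the app in a browser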

Reference: Rao, C. R. (1997). Statistics and truth: Putting chance to work (2nd ed.). River Edge, NJ: World Scientific.

Workshop 2 - Developing and Using Rating Scales for Writing Assessment

Sara T. Cushing, Georgia State University

In this hands-on workshop we will cover fundamental considerations for developing, using, and validating rating scales for writing assessment, both in classroom settings and for large-scale tests. In the first part of the workshop we will discuss the role of the rating scale in writing assessment in terms of scoring validity (Shaw & Weir, 2007) and the advantages and disadvantages of different types of rating scales (i.e., holistic vs. analytic scales). We will generate our own criteria for scoring writing for a specific assessment purpose with a specific group of learners, and we will experience different methods of rating scale construction, including the exploration of online resources for scale development. We will also use different published rating scales to evaluate second language writing samples and discuss the benefits and challenges of using different types of scales. We will discuss best practices for training, monitoring, and providing feedback to raters in large-scale assessments, as well as how rating scales can be used in classrooms to lessen teachers’ marking load, provide useful feedback to students, and make use of self-assessment and peer assessment. Depending on participant interests and experience, we will cover the following additional topics: aligning scales to standards such as the CEFR, investigating and reducing various types of rater bias, and interpreting and using published quantitative and qualitative research on rating scales to inform our own research and practice.

Reference: Shaw, S. D., & Weir, C. J. (2007). Examining writing: Research and practice in assessing second language writing (Vol. 26). Cambridge: Cambridge University Press.

Workshop 3 - Current and Emerging Applications of Information Technologies for Language Assessment

Erik Voss, Northeastern University, USA

The role of technology in language learning and assessment is expanding rapidly. Computers are used to develop, deliver, and automatically score language tests. Teachers, test developers, and language testing researchers can benefit professionally from learning about potential capabilities and challenges of assessing language through computer technology (Chapelle & Douglas, 2006). This workshop will explore information technologies that make it possible to assess language learners in technology-enhanced language learning environments. We will focus on how current and developing computer technologies limit or contribute to assessment task design and score interpretation.

Part I: We will begin by examining information technologies currently used for assessing language in online education and classroom-based programs, such as multimedia, social media, audio and video hardware, video conferencing software, and mobile computing. Discussion topics will include human-computer interaction, digital literacy and anxiety, game-based approaches, social media, and natural language processing. Hands-on activities will contribute to the discussion of how different language technologies may affect language performance and how technology can alter the construct definition or ability being measured.

Part II: Building on the discussions and activities from part one, part two will include examples of sophisticated and emerging technologies (e.g., speech recognition, facial recognition, and virtual reality) that could advance current language assessment frameworks, task types, and forms of human-computer interaction. Consideration will be given to the use of commercial technologies and data analytics for expanding knowledge of testing concepts and future directions (especially in terms of artificial intelligence and augmented reality). This part will also address criteria, principles, and argument-based validation approaches for evaluating assessment using technology.

Reference: Chapelle, C. A., & Douglas, D. (2006). Assessing language through computer technology. Cambridge: Cambridge University Press.
