LTRC 2020 Invited Symposia
Invited Symposium 1     |    Invited Symposium 2



Invited Symposium 1: Arabic language assessment

Organizer: Atta Gebril, The American University in Cairo, agebril@aucegypt.edu

Discussant: Micheline Chalhoub-Deville, University of North Carolina-Greensboro

This symposium taps into issues related to the assessment of the Arabic language, a topic that has not received due attention in the language assessment literature. The unique characteristics of Arabic, such as diglossia and its orthography, make it critical to investigate the challenges and potential involved in its assessment. Other issues of interest in this discussion include the use of different proficiency frameworks and the validation of Arabic language tests. Another area of interest is the rather traditional approach to Arabic language assessment in different settings. The symposium brings together a group of scholars who are interested in Arabic language assessment and who have been involved in a number of test development projects. The first presentation lays out the conceptual and operational challenges associated with the development and validation of standardized Arabic proficiency tests. The second delves into the process of linking Arabic proficiency tests to the Common European Framework of Reference and the need to revisit current Arabic proficiency models, taking into consideration the linguacultural attributes of Arabic proficiency. Building on these ideas, the third presentation reports on the validation of a battery of reading tests for school-aged children in a Middle Eastern context. Within a relatively similar setting, the fourth presentation investigates the washback effects of an early literacy assessment tool and provides some useful recommendations for stakeholders working in the region. The symposium concludes with our discussant, who provides a critical take on Arabic language assessment and suggests future courses of action.


Paper 1:
Key challenges in the development and validation of standardized Arabic tests

Wael Amer, University College London

This presentation explores the key conceptual and operational challenges that may undermine the development and validation of standardized Arabic proficiency tests, drawing on the presenter's experience in developing such tests. First, the presentation taps into the definition of Arabic proficiency constructs and their elusive nature, given the significant variance between Modern Standard Arabic (MSA) and the different dialects used in the Arab region. Another conceptual challenge is the dynamic tension between the traditional framework of reference for the Arabic language (which relies heavily on grammar and morphology in describing Arabic as a means of communication) and modern frameworks of reference such as the CEFR and ACTFL, which tend to focus more on language skills than on grammar in their approach to describing language use domains. While this tension creates difficulties for construct definition, it also influences the inferences drawn from an Arabic test, along with the corresponding decisions and score uses. With regard to operational challenges, the presentation will address issues related to the agencies that may develop or validate standardized Arabic proficiency tests; accordingly, the assessment literacy of the stakeholders of a standardized Arabic proficiency test will be discussed. Challenges pertaining to business development, technology, and management will also be explored. Finally, the presenter will propose a number of guidelines for Arabic test developers, language teachers, administrators, and policy makers.


Paper 2: Linking Arabic proficiency assessment to the Common European Framework of Reference

Rahaf Alabar, University of Cambridge

The mapping of Arabic proficiency assessment to the Common European Framework of Reference (CEFR), the focus of this paper, is based on initial exploratory theoretical research entailing a discussion of the content and constructs of models of Arabic language proficiency. The prevailing models of Arabic proficiency and the CEFR are based on different approaches to language learning and, as a consequence, to testing, so the underlying constructs of the Arabic proficiency models and the CEFR are not the same. The current post-communicative approach to language learning and teaching is a social-constructivist one, which is at the heart of the CEFR's action-oriented model. On this view, the ultimate learning outcome is to create a lifelong learner, a member of a learning community, who is willing to discover new perspectives and acquire new life skills, which may or may not be reflected in the existing models of Arabic proficiency. The paper therefore compares and contrasts these models with the specifications of the CEFR model, aiming to explore how and where they converge and diverge. It provides a description of the content and constructs each of the Arabic models comprises. This is followed by an analytical discussion of how relevant these two aspects of the models are to the specifications of the action-oriented model of the CEFR, and of the implications for the assessment of the four skills: reading, writing, listening, and speaking. The paper then recommends that models of Arabic proficiency be revisited in order to account for the linguacultural attributes of Arabic proficiency. It also proposes that new forms of proficiency constructs, giving due weight to actional and interactional competences, be devised in order to meaningfully align Arabic proficiency assessment with the CEFR.

Paper 3: Development and standardization of an Arabic reading test for primary and middle schools in Kuwait

Abdessattar Mahfoudhi, Australian College of Kuwait & Center for Child Evaluation and Teaching, Kuwait

Mohamed Roshdy, Center for Child Evaluation and Teaching, Kuwait

John Everatt, University of Canterbury, New Zealand

This paper reports a series of studies that aimed to develop assessments of Arabic reading and spelling for Kuwaiti students in grades 2 to 9 of mainstream government schools. The test is meant to be used alongside other tests of memory, phonological processing, morphological processing, orthographic processing, and oral language to assess reading difficulties within this population. Data collection in this research project included measures of word reading accuracy, sentence reading fluency, comprehension fluency, reading comprehension, word spelling, and spelling choice. The tests were first developed for primary school and later adapted minimally for middle school to avoid floor effects. They were piloted twice on a primary school sample of about 120 boys and 120 girls before the standardization, which covered all educational districts with a total of 1,258 students (equal numbers of boys and girls). The same procedure was followed for the middle school sample of 1,252 students. After the piloting stages, which included deleting some items and reordering others (based on tests of item reliability and item difficulty), the measures showed a high level of internal reliability (Cronbach's alpha). Correlation and factor analyses were also used; the results showed high levels of internal reliability and high cross-test correlations, and factor analysis showed that the tests loaded on a single factor. The tests also showed progression in performance with age. The results support the need to develop tests for secondary school and beyond and to adapt them for other Arab countries.
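For readers less familiar with the reliability statistic the abstract mentions, a minimal sketch of a Cronbach's alpha calculation is shown below. The score matrix here is simulated purely for illustration (a common "ability" factor plus item noise) and is not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated responses: one shared "ability" plus item-level noise,
# so the 20 items are highly intercorrelated and alpha is high
rng = np.random.default_rng(42)
ability = rng.normal(0, 1, size=(500, 1))
scores = ability + rng.normal(0, 0.5, size=(500, 20))
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

The higher the intercorrelation among items, the closer alpha approaches 1; values reported as "high internal reliability" in studies like this one are typically above .80.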

Paper 4: Early reading assessment: A washback effect beyond imagination

Hanada Taha Thomure, Endowed Professor of Arabic Language at Zayed University

Research provides evidence that certain early literacy skills can predict young learners' later reading achievement. These early reading skills include letter knowledge, phonemic awareness, decoding, vocabulary, fluency, and comprehension. Assessing early reading skills in students learning to read has been practised for decades. Teachers in effective educational systems become closely acquainted with these assessment skills and tools during their teacher preparation programs and later as practising teachers. To examine this, twenty-five Arabic language teachers of grades one to three in five public schools in Dubai took part in four months of early reading skills training offered by the researcher. The participants received a two-hour training session once a week on topics including phonemic awareness, the alphabetic principle, phonological awareness, oral reading fluency, sight words, vocabulary building, reading aloud, shared reading, guided reading, independent reading, and comprehension strategies. They were then observed in their classrooms once every two weeks and given immediate feedback after class in a meeting attended by the observed teacher, the researcher who observed the class, a representative from the Ministry of Education, and the school principal. A total of 1,600 students in grades one to three were tested before and after the teacher training pilot. A paired-samples t-test was conducted to explore the impact of the intervention on student scores. The analysis shows a statistically significant difference between the pre-test (M = 34.76, SD = 38.56) and the post-test (M = 40.18, SD = 40.28), t(8551) = -27.84, p < .001 (two-tailed). The mean test score increased by 5.41 points, with a 95% confidence interval for the pre-post difference ranging from -5.79 to -5.03. The eta squared statistic (.08) indicated a moderate effect size.
This study has important implications for early reading assessment tools and for Arabic language teacher preparation and training.
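The statistics reported above (a negative t because pre-test scores are subtracted from higher post-test scores, and eta squared as effect size) can be reproduced in outline as follows. The pre/post scores here are simulated for illustration only and are not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(35, 10, size=300)           # simulated pre-test scores
post = pre + rng.normal(5.4, 4.0, size=300)  # simulated gain after training

# Paired-samples t-test: t is negative when post-test scores are higher
t, p = stats.ttest_rel(pre, post)

# Eta squared for a paired design: t^2 / (t^2 + df), df = n - 1
df = len(pre) - 1
eta_sq = t**2 / (t**2 + df)
print(f"t({df}) = {t:.2f}, p = {p:.3g}, eta^2 = {eta_sq:.2f}")
```

With simulated data of this size and gain, the test is significant at p < .001; the eta squared formula is the conventional one for a paired t-test, where values around .06 are often read as moderate.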



Micheline Chalhoub-Deville is Professor at the University of North Carolina at Greensboro (UNCG). She is President of ILTA. She is the recipient of the Outstanding Young Scholar Award from the ETS TOEFL Program, the ILTA Best Article Award, and the UNCG SOE Senior Scholar Award. She is the founder of MwALT.

 

 

 

Atta Gebril is an associate professor and director of the MATESOL program, the American University in Cairo. He serves as associate editor of Language Assessment Quarterly and also as an editorial board member of Assessing Writing and TESOL journal.  He is the recipient of the 2018 ILTA Best Article Award.

 

 

Wael Amer is a language testing professional with 15 years of experience. He holds a master's degree in educational and social research and is currently completing his PhD in educational assessment and measurement at the UCL Institute of Education, London, UK. Wael is interested in issues pertaining to the assessment literacy of language educators and stakeholders.

 

 

 

Rahaf Alabar is a UK-based researcher in the field of Language Education. She is currently working as an Assessment Manager at Cambridge Assessment. She holds an MA in TESOL and a Ph.D. in Language Education from Goldsmiths, University of London. Her research interest is in the fields of Language Assessment and Teacher Education.

 

 

Abdessatar Mahfoudhi (PhD) is the Head of the English Department at the Australian College of Kuwait and an educational consultant at the Center for Child Evaluation & Teaching in Kuwait. His research interests are in language, literacy, and language-based disabilities, with a focus on Arabic as a first language and English as an additional language.

 

 

John Everatt is a Professor of Education at the University of Canterbury, New Zealand. He received a PhD from the University of Nottingham in the UK, and has lectured on education and psychology programmes, and conducted research related to language and reading across many parts of the world.

 

 

 

Mohamed Roshdy is an educational psychologist who specializes in special education and counselling. He is deputy head of the research unit at the Center for Child Evaluation and Teaching, Kuwait, and an adjunct professor at the University of Kuwait, Faculty of Education.

 

 

 

 

Hanada Taha Thomure is the Endowed Professor of Arabic Language and director of the Arabic Language Center for Research & Development at Zayed University, UAE.  Previously, she served as Acting Dean of Bahrain Teachers College,  where she joined as Associate Dean in 2010. Her main research foci include teacher training, curriculum design, and national literacy strategies.

 

 



Invited Symposium 2: Multilingual assessment in Africa and the MENA region

Organizer: Albert Weideman, University of the Free State, South Africa, albert@lcat.design

Discussant: John Read, University of Auckland, New Zealand

Although African scholars and those in the MENA region may not always have been sufficiently mindful of this, all of our assessments of language ability are carried out in a multilingual context. We often need to assess language ability across languages, as in the case of some high-stakes tests that are used to control access to university study and are provided in more than one language. It also applies to international tests of language ability administered from time to time at primary or intermediate school level. In each of these cases, it follows that test equivalence, or at least comparability, is of the utmost importance if the tests are to be fair to all, regardless of which language version is taken. Have we done enough to ensure fairness in this respect? What steps have been taken to ensure equitable outcomes despite variation in the language of the measurement instruments?

A further complication arises when one considers the battle to promote multilingualism at all levels: primary, secondary and higher education. There are pockets of research and scientific investigation that have kept interest in multilingual solutions alive: in some countries there has been both policy and materials development work for mother tongues in primary education, for example, as well as comparative work in measuring academic literacy in languages other than English at university level.

Innovative solutions have been proposed, and strategic insights have influenced the practical implementation of multilingual assessment in many cases. The papers featured in this symposium bring together both this design expertise and the data on which cases for such designs have been built, and across a multiplicity of languages.

Paper 1: Validating the highest performance standard of a test of academic literacy for students from different home language backgrounds

Kabelo Sebolai, Stellenbosch University

In the last two decades of the post-apartheid era, the language policies of higher education institutions in South Africa have been a contested terrain, with many such policies changing to a lesser or greater extent. Stellenbosch University (SU) is one former Afrikaans-medium institution whose current language policy has shifted towards promoting multilingualism centred on Afrikaans, English, and isiXhosa, the three languages most spoken in the Western Cape. While parallel efforts are ostensibly made to promote the other two languages at this university, English continues to be the most dominant of the three in the classroom. The university uses the English version of the National Benchmark Test in Academic Literacy (NBT AL) to measure levels of academic language readiness among first-time entering students. The aim of the study underpinning this presentation was to determine how accurately the highest performance standard set for the NBT AL classifies students from the three language backgrounds of interest to SU's language policy into those who are likely to do well in their first year of university and those who are unlikely to do so. To accomplish this aim, a sensitivity and specificity analysis of the scores obtained on this test by a total of 13,858 students at SU was carried out in relation to average first-year performance. The results show that the test possessed a better classification rate for one group than it did for the other two.
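The sensitivity and specificity analysis described above asks how well a cut score separates students who later succeed from those who do not. A minimal sketch follows; the cut score and the tiny score arrays are hypothetical, chosen only to make the two rates easy to verify by hand.

```python
import numpy as np

def sensitivity_specificity(scores: np.ndarray,
                            succeeded: np.ndarray,
                            cut: float) -> tuple[float, float]:
    """Classification accuracy of a cut score against first-year outcomes.

    sensitivity: proportion of successful students the cut score admits;
    specificity: proportion of unsuccessful students it flags as at risk.
    """
    predicted_ready = scores >= cut
    sensitivity = (predicted_ready & succeeded).sum() / succeeded.sum()
    specificity = (~predicted_ready & ~succeeded).sum() / (~succeeded).sum()
    return float(sensitivity), float(specificity)

# Hypothetical test scores and first-year pass/fail outcomes
scores = np.array([72, 41, 85, 55, 30, 64])
succeeded = np.array([True, False, True, True, False, False])
print(sensitivity_specificity(scores, succeeded, cut=60))
```

A cut score that is well calibrated for one language group but not another would show different sensitivity/specificity pairs when this calculation is run per group, which is the comparison the study reports.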

Paper 2: University students’ utilisation of information in simultaneous exposure to the same text in different languages

Gustav Butler, North-West University

This paper reports on research that used eye-tracking technology to investigate the focus, attention allocation, and reading comprehension of first-year, university-level Sesotho home-language students when presented simultaneously with the same text (the text comprehension section of the Test of Academic Literacy Levels) in Sesotho and in English (the Sesotho having been translated from the English). The primary aim of the research was to determine whether students, when presented with high-stakes reading material (such as an academic literacy test) in both languages at the same time, would use the opportunity to access the resources of both languages in order to best understand the text. A crucial question was whether students would show better understanding of the text when they utilised both languages rather than only one. The paper therefore further presents an analysis of students' reading comprehension scores for both contexts, viz. having had access to the reading material in only Sesotho or English, and having been presented with the text in both languages at the same time. Students' results for the comprehension section of the English and Sesotho academic literacy tests are thus compared to the results obtained in the eye-tracking experiment, where they had the opportunity to make use of both languages for text comprehension. The findings of this research may have important implications for how learning and, more specifically, assessment opportunities could be created that utilise the multilingual minds of the majority of South African students.

Paper 3: Challenges to test adaptation and translation within complex linguistic contexts

Samira ElAtia, University of Alberta

The guidelines of the International Test Commission for the translation and adaptation of tests highlight the importance in such processes of considering "the whole cultural context within which a test is to be used." Similarly, several standards in the Standards for Educational and Psychological Testing (AERA, APA, NCME) require the revalidation of tests once such changes have been made, in order to ensure comparability. At the macro-linguistic level, there may be a presupposition that standard versions of both the original language of the test and the target language of its translated and adapted version exist and are equivalent; no consideration is given to the legal and sociolinguistic nuances that may be present. However, languages are complex: equivalency can become contested when one language is dominant and the other is a minority language, or where regional differences exist and resources for test development are fragmented. There may be a legal mandate to have versions of a test in various languages, but not enough resources to support proper procedures for developing versions in lesser-used languages. My own research has shown that linguistic diversity within variations of the same language will produce group differences in test results and performance. Hence, tests developed in minority and isolated language contexts tend to be scarce, and the focus on commercial tests developed using one standard language becomes the norm. This presentation will focus on the complexity of test development where test translation and adaptation are employed, with specific reference to French, Arabic, and Berber.

Paper 4: Bilingualism and trilingualism: Some enduring issues in language assessment

Kassim Shaaban, American University of Beirut

The paper addresses issues faced in evaluating the language proficiency of applicants to higher education institutions in Lebanon, a multilingual country where the native language, Arabic, as well as two foreign languages, French and English, are part of the linguistic landscape. The three languages are also languages of daily communication and of instruction in schools and universities. The study looks at the performance of Lebanese university-bound students on proficiency tests in English, French, and Arabic at two universities, one English-medium and the other French-medium. The proficiency tests cover the primary language of instruction, English or French, as well as Arabic. Preliminary analysis shows that about 20-25 percent of students obtain the required scores in English and French, while only about 15-18 percent pass the Arabic language test. Possible reasons for this relatively unsatisfactory performance in all three languages are explored, including multilingualism, marginalization of the native language, test-taking skills, individual characteristics, socioeconomic disparities, and unequal access to quality education. The study also highlights an apparent weakness in the reading part of the tests, especially as it relates to critical comprehension. In its discussion, the paper concentrates on the issues involved in teaching in a language other than the mother tongue and the implications of that for students' national, social, and individual identity construction and for academic achievement in the foreign language and in the subject matter. It also discusses the threats to culture, religion, and identity posed by teaching in a foreign language in a region where these three factors are considered vital.


 

Albert Weideman is Professor of Applied Language Studies at the University of the Free State. He is the deputy chairperson of the Network of Expertise in Language Assessment (NExLA). His research focuses on how language assessment, course design, and policy relate to a theory of applied linguistics.

E-mail address: albert@lcat.design

 

John Read is a specialist in vocabulary assessment and the testing of English for academic purposes. Co-editor of Language Testing (2002-06) and ILTA President (2011-12), he is the author of Assessing English proficiency for university study (Palgrave Macmillan, 2015); and editor of Post-admission language assessment of university students (Springer, 2016).


 

 

Kabelo Sebolai is the deputy director in the Language and Communication Development section of the Language Centre at Stellenbosch University. He is the chairperson of the Network of Expertise in Language Assessment (NExLA). His research interest revolves around academic literacy teaching and assessment and their relationship to academic performance.

 

 

 

Gustav Butler is the director of the research focus area: Understanding and Processing Language in Complex Settings (UPSET) at North-West University. His research interests include the development of academic literacy with a specific focus on academic writing, the assessment of postgraduate academic literacy, and the translation of academic literacy tests.

 

 

 

Samira ElAtia is professor of education and Associate Dean, Graduate Studies at the Campus Saint-Jean of the University of Alberta. Her research focuses on fairness in bilingual assessment. She has served on expert panels of several international testing agencies, and is currently president of the Canadian Association of Language Assessment.

 

 

 

Kassim Shaaban is Professor of Applied Linguistics, American University of Beirut. His experience includes teacher training, program evaluation, and curriculum development. His research interests cover language planning, multilingualism, and assessment. His current research focus is on the social, economic, and educational impact of language policies in the Arab world.

 

