Scientific Research
Written Mediation in the National Language Proficiency Certification Examinations: An investigation of hybrid linguistic formations arising from the regulatory effect of the source text (2009)
Maria Stathopoulou
Faculty of English Language and Literature
School of Philosophy, National and Kapodistrian University of Athens
Abstract
The concept of mediation is central to this study. The findings of the research derive from the analysis of scripts written by B2-level candidates in the examinations for the State Certificate of Language Proficiency (KPG), and specifically of texts produced in response to mediation tasks, i.e., tasks requiring the transfer of information from Greek into English in a manner appropriate, in each case, to the communicative context defined by the task. The point of departure of the research was the acceptance of Dendrinos's (2007c) view that the Greek source text plays a regulatory role in relation to the text produced; the aim of the study is to investigate the degree to which, and the way in which, the source text regulates the output text, resulting in the production of hybrid linguistic formations. The notion of hybridity, introduced by Dendrinos (2007c) to describe the linguistic blends that result from mediation activity, is therefore also of great importance for this study, which focuses on the linguistic hybridisations (blends) found in the scripts that candidates produced on the basis of the mediation task.
The scripts analysed were two hundred and forty (240) in number and were drawn from the electronic databank of the Research Centre for English Language of the University of Athens. The research was carried out in three phases. In the first phase, the candidates' lexicogrammatical choices that showed signs of 'regulation' and constituted linguistic hybridisations were recorded and categorised. In the second phase, the scripts of candidates who performed highly in English were examined separately and compared with those that received average marks, in order to detect whether the candidates' level of language proficiency, or their writing ability, affects the type and number of linguistic 'deviations' resulting from the regulatory effect of the source text on the output text. Finally, sixty (60) candidate scripts were examined which had been written in response to a different kind of semi-guided writing task, one not based on a complete text, let alone a text written in another language; in this task, guidance as to the content, purpose and text type of the script is provided in English. These scripts were then compared with scripts produced through mediation by the same candidates in the same examination periods. The results at this point confirmed what we had already suspected, namely that the number and type of linguistic hybridisations would not be the same in the two cases. Indeed, we found a large number of hybrid linguistic formations, a fact which confirms our initial general hypothesis, namely that the source text inevitably acts in a regulatory way upon the text produced.
Scientific Research
Factors affecting writing and written mediation task difficulty
Vasso Oikonomidou
Faculty of English Language and Literature
School of Philosophy, National and Kapodistrian University of Athens
Abstract
The aim of this study is to investigate the variables that affect writing task difficulty in the context of foreign language examinations. In particular, it focuses on the writing component of the KPG exams, where candidates are asked to produce two written scripts ("writing production" and "mediation") based on visual and verbal prompts, in accordance with the KPG test specifications. Taking into consideration the mean scores of the activities across different examination periods, the first stage of the study is to distinguish between 'easier' and 'more difficult' activities and to identify the text types in which candidates seem to perform better. In the second stage, the researcher attempts to define task difficulty and its effect on the outcome through an analysis of the discourse features of candidate scripts. In particular, tasks are analysed and changes in task design over time are explored; moreover, candidates' expected performance is described and compared with their actual performance. The third stage of the study explores perceptions of writing task difficulty through questionnaires and interviews involving four different groups: candidates, raters, task designers and EFL teachers. The investigation is both quantitative and qualitative. The study may yield interesting results for test designers, who need to consider the kinds of difficulty that candidates at a specific level are usually confronted with. The results are also expected to prove useful to foreign language teachers and materials writers who are interested in designing 'fair' activities. On the whole, the findings of this study may be of wider use not only in the area of Language Testing, and specifically in writing and written mediation task design, but also in the field of Foreign Language Didactics and its particular concern with the development of EFL writing skills.
Scientific Research
Towards the validation of the KPG activities of oral performance
Eleftheria Nteliou
Faculty of English Language and Literature
School of Philosophy, National and Kapodistrian University of Athens
Abstract
Test takers' proficiency in a foreign language is measured through their performance on tasks designed to examine specific aspects of the construct of language ability. This construct is thoroughly described in the test specifications and informs the construction of the scale of assessment criteria, which discriminates performance from level to level. Given that task design should conform to the test designers' expectations regarding test takers' language ability at a certain level of proficiency, tasks are expected to elicit specific linguistic features, thus affecting language production.
This study focuses on the oral tasks designed for Activities 2 and 3 in the speaking module of the English KPG exams, at levels B1 and B2. Its aim is to determine how specific task characteristics are expected to influence oral language production, by eliciting particular lexicogrammatical elements, which may differ from level to level. For that purpose, the first stage of the research deals with the linguistic description and analysis of the oral tasks designed for levels B1 and B2, in order to determine the lexicogrammatical characteristics that are expected to be elicited when test takers actually perform the speaking tasks. The theoretical background on which the analytical categories of oral task description are based draws from the systemic functional approach to language use, which also determines the construct of language ability in the KPG test specifications. The second part of the research aims at empirically specifying how the expectations at the oral task design stage are realised in actual test performance, thus providing evidence for the assumptions made in the first stage of the research. For that reason, this part deals with the transcription and discourse analysis of a number of simulated interviews at levels B1 and B2 and attempts to shed light on what really happens during oral production and mediation at these two levels.
Since the results of this study are based on empirical research, they may prove particularly useful for the KPG test designers (as well as for the designers of other examination systems), because their work will be based on evidence of what B1 and B2 level candidates are actually able to do with language, thus reducing reliance on intuition regarding spoken language potential at these two levels and leading to the creation of improved tasks. Moreover, this study will contribute to the ongoing validation of the KPG speaking tests, because it explores the issue of inter-activity variability and will likely lead to a reformulation of the criteria on the assessment scale. Apart from language testing experts, the findings may also be helpful to foreign language teachers, both in preparing students for the B1 and B2 level exams and in their effort to maximise their students' oral ability through the use of appropriate tasks at these two levels of proficiency.
Scientific Research
Interlocutor performance variability in language proficiency testing: the case of the Greek State Certificate Examinations
Xenia Delieza
Faculty of English Language and Literature
School of Philosophy, National and Kapodistrian University of Athens
Abstract
The assessment of oral production constitutes a real challenge in the field of Language Testing, since such assessment is inevitably somewhat subjective and yields scores that are not always reliable, given the many variables that affect candidates' oral performance. My current interest lies in thoroughly investigating one of these variables: the examiner himself or herself.
The context of my investigation is the Greek state examinations for foreign language proficiency, the English exams in particular, and specifically the component which aims at assessing oral production and mediation. These exams, known as the KPG exams (the initials standing for Kratiko Pistopiitiko Glossomathias, meaning State Certificate of Language Proficiency), are based on the scales set by the Council of Europe as described in the Common European Framework of Reference for Languages (hereafter CEFR).
The purpose of my research is to critically describe the discourse practices of examiners during the oral KPG tests, and the way these practices interfere with candidates' output on the one hand and with the rating of their communicative performance on the other. In other words, my research focuses on the role of the oral examiners as interlocutors and also as raters. The way this role is enacted is a major variable (or 'facet', as it is often called in the literature) which can interact with other variables to affect candidate output and examiner rating.
This area of study is inextricably linked with the demand for more thorough examiner training and for monitoring of examiner practices, with a view to increasing the possibilities for (a) valid interlocutor performance and (b) inter- and intra-rater reliability. Through the analysis and interpretation of data collected before, during and after the English KPG exams, my ultimate aim is to create a reliable tool on the basis of which the examiner-as-interlocutor discourse can be observed and evaluated. This tool can then be used to examine systematically the degree to which examiners comply with the standardisation norms of the specific examination: that is, whether they follow the specific instructions for conducting the test, whether they adhere to the interlocutor frames prescribed by the test designers, how they use the evaluation criteria, and which variables affect the final rating of candidates' performance. Answers to such questions may yield information leading to a revision of the individual processes within the examination, with a view to improving it in terms of validity and reliability. The study of examiner practices as a variable affecting outcomes in different examinations presents itself as an area of widespread interest in the international testing arena, where the demand for coherence and transparency in language certification has been repeatedly accentuated, especially since the introduction of the CEFR. In addition, the oral test of the KPG exams, as part of a new language testing battery, provides unexplored territory awaiting research, the results of which could contribute to the understanding of the nature of foreign language performance, give insight into aspects of variation which can detract from reliability and validity, and bring to light possible ways of coping with such variation.
