Extending the Reading Performance of Low-Level Readers Through the Use of a Diagnostic Assessment Tool in Explicit Strategic Reading Instruction

Explicit instruction of reading strategies has often been recommended for developing L2 or EFL learners with poor reading performance. This paper aims to examine how effective the approach is when combined with a diagnostic assessment tool, both on participants' reading performance at the literal and inferential levels and on their attitudes towards reading. The experimental study involved two groups of ninth-grade students in West Manggarai, East Nusa Tenggara, who were identified as low-level readers. The findings reveal that while both the control and experimental groups made significant progress in their reading performance, the experimental group yielded a larger effect size than the control group. It is concluded that the use of the diagnostic assessment tool was instrumental in extending the participants' reading performance. Classroom implications and suggestions for future research are provided based on these findings.


INTRODUCTION
Acquiring reading skills is perceived as a key to success in school and life in many local, national, and international societies. Therefore, "teaching children to read accurately, fluently, and with adequate comprehension is critical" (Snowling & Hulme, 2011), and the pressure to do so is ever increasing in an era where being able to read in one or two foreign languages has become a requisite in many academic and professional contexts. Specifically, the ability to read English effectively for academic purposes is widely recognized as a basic skill in a wide range of secondary and tertiary education settings (Grabe, 2014).
While no one disputes the importance of reading strategies in enhancing readers' reading skills, the best way to teach those skills in an L2 or foreign language is still a subject to be investigated by many language teachers and specialists. While research in L1 reading contexts is extensive, Grabe and Stoller (2013) argue that not only is research on L2 strategic processing in reading surprisingly limited (see also Grabe, 2014), but there is also a lack of connection between research and strategy instruction practices in class. Meanwhile, research has revealed that successful L2 or EFL readers are those who know how to apply effective strategies to reach their reading goals. Thus, the challenge is how to move from theory to intervention and assessment.
The missing link mentioned by Grabe can be seen in the Indonesian context, where many adolescent learners fall into the category of low-level readers. The 2015 PISA report showed that out of 46% of Indonesian 15-year-olds, 86% read at PISA Level 2, which means they can draw simple conclusions by connecting the text to their prior knowledge (OECD, 2016). However, in its report released in 2019, PISA announced that Indonesian adolescent readers scored an average of 371, at Level 1a (OECD, 2019). At this level, readers can demonstrate the ability to read for explicit or stated information by connecting the text to general knowledge. These data imply that more studies in the area of strategic reading are needed to explore and identify effective instruction or approaches to teaching reading that teachers can apply in their classrooms. Uribe-Enciso (2015) identified that reading strategies in general share the following characteristics: they involve conscious cognitive processes, they are used as problem-solving tools in comprehending a text, and they serve to facilitate reading comprehension. In other words, reading strategies help readers become effective readers. Thus, strategic reading instruction is instruction in how to apply reading strategies to reach the goal of a reading activity. In the context of L2 strategic reading instruction, there is an extensive body of research that champions the effect of explicit instruction of reading strategies (Lencioni, 2013).
Explicit instruction is grounded in the belief that students do not learn simply by going through a set of activities over a given amount of time; they must receive instruction. Archer and Hughes (2011) believe that the way to help students achieve maximum success is through explicit instruction, which they define as "instruction that is systematic, direct, engaging, and success oriented . . . explicit instruction is helpful not only when discovery is impossible, but when discovery may be inaccurate, inadequate, incomplete, or inefficient." In reading classes, this approach calls for the teacher's step-by-step instruction, scaffolded according to learners and instructional contexts, to help learners clearly understand, practice, and master reading skills. Since explicit instruction emphasizes guiding students through each step of strategy application and initial practice, and providing practice after each step, struggling readers can find the support they need with some level of confidence. This instruction has been found to be particularly helpful for beginners or low-level language learners rather than more able readers (Lencioni, 2013), and at the middle school level and above (Williams, 2007).
When teaching so that poor readers can understand a text in English, the question often posed is how much a teacher can expect his or her students to understand the text, since understanding can be attained to different degrees. Thus, the notions of surface and deep understanding, or literal, inferential, and even evaluative comprehension, are crucial for teachers to be aware of before, during, and after instruction. In the context of poor readers, for example, scaffolding instruction from literal to inferential comprehension is critical. Considered the first level of comprehension, literal comprehension requires a reader to extract information that is explicitly stated in a passage (Basaraba, Yovanoff, Alonzo, & Tindal, 2013). At this level, comprehension depends heavily on readers' word recognition to accurately identify meaning at the word as well as the sentence level (Perfetti, Landi, & Oakhill, 2005). This means that when EFL readers with a limited amount of vocabulary are presented with a text, reading instruction should be designed to help them attain understanding of explicit information in the text without getting overwhelmed or even trapped by the task of decoding the words, especially unknown words. The more critical question, then, is how reading instruction can further extend low-level readers' skills to the point where they can understand information that is not explicitly stated in the text. This inferential comprehension, or deep level of comprehension, is the result of readers' ability to connect the author's message or intention with their own background knowledge or experience while at the same time exercising control over how much prior knowledge can be assigned to the message.
For poor readers in particular, inferential comprehension also includes decoding unknown words by guessing their meaning from context (lexical inferencing), since it "involves making informed guesses as to the meaning of a word in the light of all available linguistic cues in combination with the learner's general knowledge of the world, her awareness of the context and her relevant linguistic knowledge" (Haastrup, 1991, as quoted by Hu & Nassaji, 2014). Strategic instruction that equips readers to resolve problems in lexical inferencing is crucial, since it has been found that contextual guessing is often favoured by readers over consulting or ignoring the words (Çetinavcı, 2014). Hence, understanding the cognitive processing demands that inferential comprehension poses on EFL poor readers, and how reading instruction can promote this process, warrants further exploration.
In assessing low-level readers' reading performance, alternative assessments have been preferred because they are less intimidating than traditional assessments. Pierce and O'Malley (1992) characterized alternative assessment as "any method of finding out what a student knows or can do that is intended to show growth and inform instruction and is not a standardized or traditional test". As opposed to traditional assessment, which relies heavily on summative assessment, alternative (to) assessment is characterized by its focus on ongoing progress in language using nonconventional strategies, and involves techniques that can be used within the context of instruction and easily incorporated into the daily activities of the school or classroom (Coombe, Purmensky, & Davidson, 2012). This suggests that alternative ways of assessing learners recognize the variety of students' needs, interests, and learning styles, and thus may take whatever forms or strategies teachers find most suitable and applicable to support learners' progress. The word "different", however, must not be taken to suggest a totally new approach or strategy to assessment practice, but rather a range of ways of doing assessment, designed and catered to a certain purpose of assessment at a given time. Thus, alternative (to) assessment offers flexibility in both the methods and the timing of assessment, compared to its traditional counterpart.
Aware of poor English literacy among junior high school students in West Manggarai, the Regional Education and Sports Office of West Manggarai, East Nusa Tenggara held an educational training program for 9th graders across the region. The programs were held from 2017 to 2020 and aimed to equip students to achieve the graduate competency standards in reading by the end of their school year. One key challenge in meeting the objective of the program, besides the poor reading performance of the participants, was how to promote assessment for learning and assessment as learning rather than focusing on assessment of learning, so that participants would not leave the program with merely numeric information about their reading performance, but would be equipped with the reading strategies they attained through the program and driven to set their own goals and assess themselves along the process of learning. Hence, this study was conceived with a single broad aim: to investigate the effectiveness of a diagnostic assessment tool, as a means of promoting assessment for learning and assessment as learning, incorporated into explicit reading instruction for EFL low-level readers. The diagnostic assessment tool is an inventory of reading skills ordered in two general categories, namely literal and inferential types of questions, together with the numbers of the corresponding items in a reading comprehension test. The tool was a researcher-designed inventory and was first introduced to the previous year's cohort, yet no studies had been conducted on its effectiveness in developing students' reading competence and use of reading strategies. Thus, this study aims to answer the following questions: (1) What is the effect of the use of a diagnostic assessment tool, combined with explicit reading instruction, on adolescent readers with a very low level of reading proficiency?
(2) How does the tool, combined with the explicit instruction, affect their literal and inferential comprehension of various English texts?
(3) What are readers' attitudes towards practicing reading strategies with or without the use of the diagnostic assessment tool?

Research method
The study took place in the 2020 program, in which the researcher was also the facilitator of the English classes. The research design was a true experimental study in which participants were randomly selected and assigned to the control and experimental groups. This was in line with the program designed by the Regional Education and Sports Office of West Manggarai. Schools were asked to send two or more students to participate in the program. Upon their arrival, students were randomly assigned to different classes right before they took a reading comprehension pre-test. The program ran for two days, for a total of 10 hours of instruction, excluding the pre-test and post-test.
Both the control and experimental groups were explicitly taught cognitive reading strategies that are commonly used to find literal and inferential information. In this study, strategies for understanding literal information include identifying specific information, the topic of a passage, and the purpose of a text, while strategies for understanding inferential information are those used to find implied main ideas, infer the writer's intention or the reader's response, draw conclusions and make inferences, and guess meaning from context.
Meanwhile, in the experimental group, participants were exposed to a diagnostic assessment tool which helped them track their reading performance and reflect on the strategies that they used or needed to focus on the next time they came across similar reading tasks.
In order to capture students' attitudes in using the diagnostic assessment tool during the intervention, classroom observation was carried out with the help of local teachers who accompanied their students in the program.

Subjects
The targeted population of this study was 9th graders in West Manggarai Region, East Nusa Tenggara. The sample consisted of 57 students from 31 schools across the region. They were divided into a control group of 28 students and an experimental group of 29 students.

Materials
The pre-test and post-test used in this study were each a reading comprehension test of 50 questions taken from the English National Examination administered between 2017 and 2019. The reading texts used in the intervention were taken from various sources, including those used in the National Examination. Varying in length up to around 260 words, the texts ranged from short functional texts, such as invitations and announcements, to longer informational texts, such as report, descriptive, and narrative texts.
In the experimental group, the diagnostic assessment tool table shown in Picture 1 below was utilized. After each test, the facilitator dictated the numbers of the questions that corresponded to each reading skill tested.

Research procedure
Data for this study were taken from the pre-test administered at the beginning of the program, a reading comprehension test given in the middle of the intervention, a post-test at the end of the program, and the diagnostic assessment tool tables filled in after each test or practice, all of which were completed by the participants over the course of the study.
Right after the pre-test, participants recorded their performance by filling out the diagnostic assessment tool in order to identify the types of mistakes they made and their top three most challenging types of reading questions. The result was used as a personal target of improvement as well as a group target, which was then compared to later reading performances.
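The tally-and-rank step described above can be sketched in code. The paper does not publish the tool's exact skill-to-item mapping, so the skill categories, question numbers, and function names below are hypothetical; this is only a minimal sketch of how circled wrong answers might be turned into a ranked list of the most challenging question types:

```python
from collections import Counter

# Hypothetical mapping from reading skills to the question numbers testing them;
# in the real program these numbers were dictated by the facilitator after each test.
SKILL_MAP = {
    "specific information": [1, 4, 7, 12],
    "stated main idea": [2, 9, 15],
    "implied main idea": [3, 10, 18],
    "inference / conclusion": [5, 11, 20],
    "guessing meaning from context": [6, 14, 19],
}

def diagnose(wrong_numbers, skill_map=SKILL_MAP, top=3):
    """Tally a student's wrong answers per skill and return the `top` weakest skills."""
    tally = Counter()
    for skill, numbers in skill_map.items():
        missed = sum(1 for n in numbers if n in wrong_numbers)
        if missed:
            tally[skill] = missed
    return [skill for skill, _ in tally.most_common(top)]

# A student who circled these question numbers as wrong:
print(diagnose({3, 10, 18, 5, 11, 6}))
# → ['implied main idea', 'inference / conclusion', 'guessing meaning from context']
```

The ranked list would then serve as the student's personal improvement target, much as the paper table described above does.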
During the intervention, the researcher acted as the facilitator, giving explicit instruction on reading strategies, including explaining when, where, how, and why they can or should be used to find literal or inferential information. After a reading strategy was modeled, participants tried to apply it, in groups or individually. After individual practice, participants were given time to record their performance on the diagnostic tables. They then reflected on the results by comparing the diagnostic assessment tool tables of the individual practices with those of the pre-test and subsequent practices.
At the end of the program, participants once again filled out the diagnostic assessment tool based on their corresponding performance on the post-test.

Findings on between-group comparison on reading comprehension before and after intervention
To examine reading performance before and after the intervention in the control and experimental groups, independent and paired samples t-tests were performed. The data are shown in Table 1 below. The data revealed that prior to the intervention there was no significant difference between the two groups (t-score = 0.83, p > .05). Thus, although the control group had a higher mean score, both groups were comparable before they received the intervention. The mean scores of the pre-tests also indicate that the participants of both groups can rightly be categorized as low-level readers, as on average they could only answer 15 to 16 of the 50 questions in each test correctly. Furthermore, the within-group statistics show that the gain of the control group after the intervention was significant (t-score = 2.17, p < .05). This indicates that the intervention was effective in improving the group's use of reading comprehension strategies. Meanwhile, there was a significant positive effect within the experimental group (t-score = 6.89, p < .05), suggesting that the group scored significantly higher at post-test than at pre-test. Comparing the means of both groups before and after the intervention, the experimental group achieved a greater improvement in reading scores than the control group. Hence, the data confirm previous findings that explicit instruction of reading strategies focused on training students' cognitive reading strategies can significantly improve reading performance. However, the data also reveal that low-level readers can improve further when they are given a facilitative tool that helps them reflect on their reading performance and strategy use.
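The within-group statistics above come from paired-samples t-tests on pre/post scores. As a hedged illustration (the raw scores are not published, so the scores below are made up), the t statistic and a paired-samples Cohen's d can be computed from the difference scores like this:

```python
import math

def paired_t_and_d(pre, post):
    """Paired-samples t statistic and Cohen's d computed from the difference scores."""
    n = len(pre)
    diffs = [b - a for a, b in zip(pre, post)]
    mean_d = sum(diffs) / n
    # sample standard deviation of the differences (n - 1 in the denominator)
    sd_d = math.sqrt(sum((x - mean_d) ** 2 for x in diffs) / (n - 1))
    t = mean_d / (sd_d / math.sqrt(n))  # test statistic with df = n - 1
    d = mean_d / sd_d                   # effect size of the gain
    return t, d

# Hypothetical pre/post numbers of correct answers (out of 50) for one group:
pre = [15, 16, 14, 15, 17, 16, 15, 14]
post = [20, 22, 18, 21, 23, 20, 19, 18]
t, d = paired_t_and_d(pre, post)
print(f"t = {t:.2f}, d = {d:.2f}")
```

With real data, t would be compared to the critical value for df = n − 1 (e.g., df = 28 for the experimental group of 29 students) to obtain the reported p-values.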
In the context of reading instruction for EFL low-level readers, where a lack of vocabulary is often said to be the main reason for poor literacy in English, the findings of this study suggest that teaching reading strategies can be effective in developing learners' reading performance and strategy use even when they have a small vocabulary size or initially poor reading performance.

Findings on within-group control on reading performance on literal and inferential questions
To investigate the effect of the explicit instruction of reading strategies combined with the diagnostic assessment tool on participants' skills in answering literal and inferential questions, paired samples t-tests were conducted by comparing the numbers of correct answers given for each type of question. There were 28 literal and 22 inferential questions in the pre-test, and 26 and 24 respectively in the post-test. The result is illustrated in Table 2 below. After the intervention, scores on both literal and inferential reading questions were significantly higher than in the pre-test (t-scores = 4.23 and 6.43, respectively). This amounts to an increase of about 15% in literal understanding of a reading passage and about 12% in understanding of inferential information.
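The percentage figures above can be reproduced with a one-line computation. The paper does not state the exact basis of the percentages, so this sketch assumes the gain in mean correct answers is expressed relative to the number of questions of that type; the mean values used here are hypothetical:

```python
def percent_gain(pre_correct, post_correct, n_items):
    """Mean gain in correct answers as a percentage of the items of that question type."""
    return 100.0 * (post_correct - pre_correct) / n_items

# Hypothetical mean correct answers per question type, before and after:
literal = percent_gain(pre_correct=9.0, post_correct=13.0, n_items=26)     # ~15%
inferential = percent_gain(pre_correct=6.0, post_correct=8.9, n_items=24)  # ~12%
print(f"literal: {literal:.1f}%  inferential: {inferential:.1f}%")
```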
These findings suggest that the explicit instruction of reading strategies and the diagnostic assessment tool not only improved participants' overall reading performance but also supported their development in dealing with more difficult reading tasks, such as guessing meaning from context, making inferences, and drawing conclusions by activating their existing lexical knowledge and prior knowledge. Thus, it can rightly be stated that it is possible to teach EFL low-level readers to read between the lines, even though this is often found to be a very challenging reading task for EFL learners in general.

Findings on group dynamics based on classroom observations
To further investigate the results conveyed by the descriptive statistics and the underlying factors that might have contributed to them, classroom observations were carried out. They sought to understand participants' reactions to the use of the diagnostic assessment tool and its role in developing their awareness in setting reading goals and assessing those goals. The class observations were done by the researcher and a couple of English teachers who accompanied their students in the training.
First of all, a quick informal survey done in each group revealed that fewer than five students had ever heard of or knew about reading strategies, and those strategies were related only to reading for literal information, such as reading for specific information or for stated main ideas. None of them was aware of how to read for inferential information in a text. In addition, the use of dictionaries, usually pocket bilingual ones, was heavily encouraged by teachers when students were struggling with comprehension. Hardly any of the participants knew how to guess meanings from context to solve their comprehension problems. This survey was confirmed when participants identified the numbers of incorrect answers in the pre-test, for both literal and inferential questions, on the diagnostic assessment tool. It was found that the top three most frequent incorrect answers fell on identifying specific information in a text, making inferences and drawing conclusions, and interpreting words or phrases used by writers. There was complete unanimity on this result in both groups. Even though the result was not surprising for the researcher, for the participants it was the first time they could see the specific problems they had when reading a text in English. Some students looked perplexed by the number of mistakes they made on each type of question. Circling each number of the questions they got wrong could potentially raise one's awareness of one's performance, especially when one had to circle too many question numbers. Hasselgreen argues that formative assessment "should be an integral part of young language learning teaching and can be carried out in many forms. . . . Task used should be those that lead to learning" (Hasselgreen, 2012). Hence, this awareness-raising activity served a couple of purposes that directed the instruction in each group.
First, it informed the whole class which strategies to learn first, second, and so on. Second, it helped each participant set specific personal reading goals throughout the program based on their previous reading performance. This component of the intervention, which was not present in the control group, was a potential aspect that contributed to the different effect on reading comprehension performance in the experimental group.
Next, following the format of explicit instruction, the facilitator first modeled a reading strategy before participants tried to use it on their own or in groups. The scaffolded instruction gave a structure that participants could follow, making many of them feel comfortable because a model was given first and they could rely on the facilitator to assist them when they were in doubt. A few participants in the experimental group even showed initiative in assisting friends who were having problems keeping up with filling in the diagnostic assessment tools. To some extent, the tool also developed participants' confidence in identifying types of reading questions as they got more familiar with the strategies, the types of reading questions, and the structure of the reading program.
Furthermore, carefully scaffolded instruction when teaching strategies for reading inferential information was imperative in teaching low-level readers. In the program, the modeling was structured from using a picture to a text, and from using a single sentence to two and then three sentences. Besides that, teaching a skill in both Bahasa Indonesia and English proved beneficial, since participants could clearly follow the facilitator's train of thought. In Indonesian contexts, especially in parts of the country where there is more than one dialect or local language, such as East Nusa Tenggara, teaching fully in English is the least common practice, yet many teachers are not comfortable with the decision to code-switch during a lesson. This study offers evidence that translanguaging should not be perceived as the least favorable approach to classroom communication or deemed ineffective in bringing about meaningful changes in students' English use. Learners' L1 should thus be considered a resource of knowledge which can be used to help them draw on what they know in the L1 in order to perform as well as possible in the L2. Moreover, the L1 provides a sense of security and validates learners' lived experiences, allowing them to express themselves and take risks with English (Ellis, 2012). More recent research that looked into teachers' and learners' opinions about the use of L1 in the foreign language classroom revealed that both teachers and students tended to favor the use of L1; the findings of that study suggest that a balanced and careful use of L1 in English classes does not seem to reduce students' exposure to the target language and can even be beneficial to the learning process and needed to increase comprehension (Ismaili, 2015).
Last but not least, one very obvious classroom dynamic observed in the experimental group was that every time participants filled in the diagnostic assessment tools, added up the numbers of wrong answers, and compared their current performance with previous ones, their progress could be seen explicitly, as shown in Picture 2 below. This process visually added valuable information to participants' learning process, goal setting, and actual reading performance.

Picture 2. Sample of diagnostic assessment tool filled by a participant
It drove participants to expect progress, or at least to be curious to find out whether progress had been achieved. Those who strove to make progress appeared more active and enthusiastic in class. Although both groups received affirmation and encouragement from the researcher during the intervention, the experimental group, due to the use of the diagnostic assessment tool, appeared more active and developed a sense of achievement throughout the program. This observation is in line with the alternative (to) assessment approach called performance-based assessment, which is more student-centered in nature than traditional approaches. For young learners, performance-based assessment makes language learning visible. Here, visible means more than just scores gained from skills-oriented tests; such scoring hides a great deal of the linguistic and social competence embedded in language skills. Thus, visible refers to observable use of language and all the other faculties that come with it, including cultural understanding, intrapersonal and interpersonal skills, and pragmatic skills. All these aspects warrant further application in EFL classrooms and studies so that learners who are at risk of missing opportunities to develop their skills can thrive and gain confidence when reading. Meanwhile, teachers can take advantage of monitoring students' progress while students take an active role in monitoring their own progress and goal setting.

CONCLUSION
The use of a diagnostic assessment tool along with explicit instruction of reading strategies has been shown to have significant positive effects on EFL low-level readers' reading comprehension performance. The findings show that the combined approach has potential for teaching readers with a very limited vocabulary size some reading strategies, even more challenging ones such as reading between the lines or drawing conclusions. This should give teachers some level of confidence that they do not need to dedicate class hours merely to vocabulary learning. Having said that, merely having students read a lot of passages or answer reading comprehension questions, even with some guidance from teachers, might not result in significant progress in reading skills or in awareness of using specific reading strategies to navigate difficult reading tasks. The model of the diagnostic assessment tool utilized in this study could also be used in combination with explicit instruction of listening strategies, since both are receptive skills that are processed at a cognitive level that students often cannot tap into.
Future experiments in that area or skill, at different levels of English competence, or with modifications of the categorizations would also be valuable in unlocking the potential of such a diagnostic tool, especially for developing students' metacognitive awareness of strategy use.