Recommendations for Solving Equity Gaps at James Monroe High School, Virginia
Michael Whitener
School of Education, Liberty University
In partial fulfillment of EDUC 816
Interview Questions
Central Question:
How can the gaps in college readiness between students from low-income and underserved communities and those from wealthy and majority groups be eliminated?
Interview Questions
1. What parameters/indicators are used to determine whether a student is college-ready?
This question is crucial for identifying whether the instructors are aware of the factors that contribute to college readiness among their students. Several indicators influence college readiness, and these parameters are typically combined before determining whether a high school student is college-ready. Some indicators accurately reflect students’ college preparedness, while others give a false picture. Leeds and Mokher (2019) showed that using placement tests to assign students to developmental courses results in frequent misplacement. Using data from Florida, the authors concluded that it might be preferable to choose cutoffs that minimize misplacement rather than to adopt new metrics (Leeds & Mokher, 2019). They also proposed that each state use metrics suited to its own context when determining college readiness among students.
From the answer obtained, the researcher will understand whether the metrics used to determine college readiness at the school reflect its context. Understanding the connection between those metrics and the school will enable the researcher to identify the cause of low college readiness among students from low-income and underserved communities. Leeds and Mokher (2019) also mention grade point average (GPA) and placement test scores as measures that frequently lead to misplacement. Thus, the first interview question seeks to determine whether the school’s approach to rating college readiness is appropriate by identifying the parameters used.
2. How are the college readiness metrics incorporated into the curriculum at the high school level?
This question is essential for understanding whether the school curriculum is designed to help learners prepare for college education. Across the United States, education policy requires high schools to prepare graduating students for college; therefore, the responsibility of preparing students for college falls on instructors. Castellano et al. (2016) studied the effects of programs of study (POS) on preparing students for college and careers. Using structural equation analysis, they tested the effects of POS enrollment and participation in a career and technical education (CTE) course sequence on GPA and graduation (Castellano et al., 2016). The findings showed that enrollment in a POS increased graduation rates among learners and led to higher retention (Castellano et al., 2016). Therefore, the second interview question will help the researcher understand whether the problems in college readiness result from the existing curriculum. Responses to the question will help identify the best way to prepare students for college education. The researcher will also be able to determine whether the teaching methods related to college preparedness are inclusive or exclusive. Sometimes the metrics may be biased, favoring some students while sidelining others; in the end, the favored students will perform better than the rest. Thus, how college preparedness is incorporated into the curriculum is critical to understanding whether any biases exist.
3. What are some of the possible causes of low college readiness for students from low-income and underserved communities?
The purpose of this question is to understand the learner- or school-specific dynamics that could be contributing to poor college preparedness among high school students. Leeds and Mokher (2019) noted in their study that relying on metrics such as test scores can lead to poor preparedness and misplacement. If the school uses test scores to determine college preparedness, some students are placed at a disadvantage. The researcher will also learn whether issues such as discrimination are possible causes of poor college preparedness among marginalized learners at the school.
The question will also be critical for understanding how other internal school metrics vary between students from low-income and underserved communities and those from majority groups. As noted above, stakeholders use particular metrics to estimate students’ college readiness. When the respondent answers this question, it will be easy to follow up and identify the key areas that leave learners from underserved communities poorly prepared for college. For instance, if the school uses test scores to measure college readiness, the researcher can narrow the focus to the specific subjects included in the study. Furthermore, the researcher may want to understand whether each learner has equal access to the required learning materials. Using the test scores, the researcher may also analyze whether the teaching methods embrace learner diversity or are exclusive. From that angle, it is possible to understand which variables influence test score outcomes and how they can be addressed to reduce the college preparedness gap. Another reason for asking the question is to determine whether the affected learners are responsible for their failures or whether their teachers should be held accountable. The response will also help the researcher determine whether the existing interventions to increase college preparedness are viable. Every appropriate solution starts with a comprehensive understanding of the problem’s origin.
References
Castellano, M. E., Richardson, G. B., Sundell, K., & Stone, J. R. (2016). Preparing students for college and career in the United States: The effects of career-themed programs of study on high school performance. Vocations and Learning, 10(1), 47–70.
Leeds, D. M., & Mokher, C. G. (2019). Improving indicators of college readiness: Methods for optimally placing students into multiple levels of postsecondary coursework. Educational Evaluation and Policy Analysis, 42(1), 87–109.