Much of the focus of education has been on the core academic subjects, which are necessary but not sufficient for students to integrate seamlessly into the workforce. Precision Exams listens closely to subject-matter experts from a range of industries to keep our content current and relevant to today's jobs. Each of our exams is reviewed on a one- to three-year cycle by a group of subject-matter experts (SMEs) from industry and education. We closely monitor rapidly advancing fields to stay abreast of changes: when industries adapt and change, so do we. Below, we share our career skills standards and exam creation process with you.
Starting with teams of 5-10 SMEs, we develop each Career Skills Standard Set and Exam through a multi-step creation process. Guided by their industry knowledge, the SMEs:

1. research and gather current, in-market standards related to the topic;
2. correlate the researched standards to establish a common base of knowledge at an end-of-program level;
3. separate the standards into levels, from baseline knowledge to job entry-level skills to in-depth knowledge and skills; and
4. dissect the end-of-program standards into smaller course-level standard sets sized for a standard semester course.

These course-level standard sets are then distributed to a larger SME audience for review and feedback. Once the standards are finalized, SMEs create exam items, forms, and cut scores.
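The leveling and dissection steps can be pictured with a simple data model. The sketch below is purely illustrative: the class names (Standard, CourseStandardSet), the three-level scheme, and the dissect helper are hypothetical stand-ins, not Precision Exams' actual tooling.

```python
from dataclasses import dataclass, field
from enum import Enum

class Level(Enum):
    BASELINE = 1    # foundational knowledge
    JOB_ENTRY = 2   # job entry-level knowledge and skills
    IN_DEPTH = 3    # in-depth knowledge and skills

@dataclass
class Standard:
    text: str     # the standard statement drafted by the SMEs
    level: Level  # assigned during the leveling step

@dataclass
class CourseStandardSet:
    course: str   # one semester-length course
    standards: list[Standard] = field(default_factory=list)

def dissect(end_of_program: list[Standard],
            courses: dict[str, Level]) -> list[CourseStandardSet]:
    """Split an end-of-program standard set into course-level sets by level."""
    return [
        CourseStandardSet(course=name,
                          standards=[s for s in end_of_program if s.level == level])
        for name, level in courses.items()
    ]
```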
Career Skills Exam items are written to measure higher levels of Bloom's Taxonomy. Generally, each exam includes items at the knowledge, comprehension, application, and analysis levels, while the Performance Requirements specifications are written to measure skills at the synthesis and evaluation levels. Each exam follows a multifaceted development process, including:
ALPHA-PHASE CONTENT REVIEW
This review allows SME and psychometric staff to ensure that each exam item (1) is rationally related to the specified testing objective, as demonstrated by a chain of reasoning that includes scenario statements; (2) operates at the appropriate level of cognitive skill; and (3) adheres to general psychometric principles for developing valid and reliable exams.
ITEM ANALYSIS FROM BETA TESTING
Item analysis computes and examines the statistical properties of examinees' responses to individual test items. The item parameters commonly examined fall into three general categories: item validity (item difficulty and the mean and variance of item responses, which describe the distribution of responses to a single item); item discrimination (indices that describe the degree of relationship between responses to the item and total exam scores); and item reliability (a function of item variance and the item's relationship to total exam scores).
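To make these categories concrete, here is a minimal sketch of classical item statistics computed from a matrix of scored (0/1) beta responses. The point-biserial correlation as the discrimination index and the item reliability index (item standard deviation times discrimination) are standard classical test theory choices, shown here as assumptions rather than a description of Precision Exams' proprietary analysis.

```python
import numpy as np

def item_statistics(responses: np.ndarray, item: int) -> dict:
    """Classical item statistics from a (num_examinees x num_items) 0/1 matrix."""
    scores = responses[:, item]            # responses to the single item
    rest = responses.sum(axis=1) - scores  # total score excluding this item

    difficulty = scores.mean()             # p-value: proportion answering correctly
    variance = scores.var(ddof=1)          # spread of responses to the item

    # Discrimination: point-biserial correlation between item and rest-score
    discrimination = np.corrcoef(scores, rest)[0, 1]

    # Item reliability index: item standard deviation times its discrimination
    reliability = np.sqrt(variance) * discrimination

    return {"difficulty": difficulty, "variance": variance,
            "discrimination": discrimination, "reliability": reliability}

# Example: five beta examinees, four items
beta = np.array([[1, 0, 1, 1],
                 [1, 1, 1, 0],
                 [0, 0, 1, 0],
                 [1, 1, 1, 1],
                 [0, 0, 0, 0]])
print(item_statistics(beta, item=0))
```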
POST-BETA ITEM SELECTION AND BLUEPRINT MATCHING
Here the psychometric staff prepares a report that details problematic items and recommends a subset of items for form assembly based on item performance criteria, the maximum allowed test time, and coverage of the skills specified in the exam blueprint. SMEs, in consultation with the psychometric staff, decide which items may be used on "live" exams, which should be discarded, and which should be rewritten and retested.
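One way to picture this step is a greedy selection that favors the best-performing items for each blueprint objective while respecting the time budget. The sketch below is an illustration only; real form assembly balances many more constraints, and the final decisions rest with the SMEs.

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    objective: str         # blueprint objective the item measures
    discrimination: float  # from beta item analysis
    avg_seconds: float     # average beta response time

def assemble_form(pool: list[Item],
                  blueprint: dict[str, int],
                  max_seconds: float) -> list[Item]:
    """Pick the most discriminating items per objective within the time budget."""
    form: list[Item] = []
    used = 0.0
    for objective, needed in blueprint.items():
        candidates = sorted((i for i in pool if i.objective == objective),
                            key=lambda i: i.discrimination, reverse=True)
        for item in candidates[:needed]:
            if used + item.avg_seconds <= max_seconds:
                form.append(item)
                used += item.avg_seconds
    return form
```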
TIME SELECTION MATRIX AND MAXIMUM EXAM TIME ANALYSIS AND ADJUSTMENT
Average item response times are collected and analyzed during psychometric beta testing and continually monitored throughout the lifespan of the exam. Item times are initially used to qualify and reinforce the selection of objective criteria that match the exam specification guidelines. Once the psychometric item selection process has concluded, analyses are performed to determine the average time beta examinees required to complete each item. On the basis of this information, a final round of item selection is performed (if necessary) to adjust the sum of the average completion times to the specified exam sitting time.
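A minimal sketch of this timing analysis, assuming a matrix of per-item response times from beta testing (the function and variable names are illustrative):

```python
import numpy as np

def projected_sitting_time(response_times: np.ndarray) -> tuple[np.ndarray, float]:
    """Average beta completion time per item and the projected total exam time.

    response_times: (num_examinees x num_items) matrix of seconds per item.
    """
    per_item = response_times.mean(axis=0)  # average time each item required
    return per_item, float(per_item.sum())

# If the projected total exceeds the specified sitting time, a final round of
# item selection swaps or drops items until the sum fits the limit.
times = np.array([[40.0, 95.0, 60.0],
                  [55.0, 80.0, 70.0]])
per_item, total = projected_sitting_time(times)
print(per_item, total)  # [47.5 87.5 65. ] 200.0
```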
PASSING STANDARD METHODS AND INTERPRETATION
After final forms are developed, each exam passes through three stages of analysis (SME exam experience, SME item-by-item rating, and SME group rating) against the psychometric standard of the minimally competent candidate. These analyses are then synthesized and reviewed by a committee for approval.
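The section does not name the rating method, but a modified-Angoff calculation is one common way to turn SME item-by-item ratings of the minimally competent candidate into a cut score; the sketch below works under that assumption.

```python
import numpy as np

def angoff_cut_score(ratings: np.ndarray) -> float:
    """Cut score from SME ratings, modified-Angoff style (an assumed method).

    ratings: (num_SMEs x num_items) matrix; each entry is an SME's estimate of
    the probability that a minimally competent candidate answers the item right.
    """
    per_item = ratings.mean(axis=0)  # group rating for each item
    return float(per_item.sum())     # expected raw score of a borderline candidate

# Three SMEs rating a four-item exam
ratings = np.array([[0.6, 0.7, 0.5, 0.8],
                    [0.5, 0.8, 0.6, 0.7],
                    [0.7, 0.6, 0.5, 0.9]])
print(angoff_cut_score(ratings))  # ~2.63 raw points out of 4
```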
Exam monitoring is ongoing once an exam is released. Item-level exam results are monitored at regular intervals to ensure that the specified psychometric parameters remain in force as the sample size, or number of examination results, increases. Exam validity claims are structured upon Samuel Messick's "integrated" model of test validity, which looks to a variety of "empirical evidence and theoretical rationales" to evaluate "the adequacy and appropriateness of inferences and actions based on test scores…" Each exam is designed and developed through a series of interrelated activities: careful development of test standards; development of psychometrically sound domains of the knowledge, skills, and abilities required to demonstrate those standards; programming of sophisticated, computer-based assessments that directly assess those skills; and demonstrations that the exam's validity meets approved testing guidelines.
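As a rough illustration of ongoing item-level monitoring, the sketch below flags items whose live difficulty drifts beyond a tolerance from the beta baseline once enough live results accumulate. The threshold, minimum sample size, and function name are all hypothetical.

```python
def flag_drifting_items(baseline_p: dict[str, float],
                        live_correct: dict[str, int],
                        live_attempts: dict[str, int],
                        tolerance: float = 0.10,
                        min_sample: int = 100) -> list[str]:
    """Flag items whose live difficulty drifts beyond tolerance from beta."""
    flagged = []
    for item_id, p_beta in baseline_p.items():
        n = live_attempts.get(item_id, 0)
        if n < min_sample:  # wait for a stable sample before judging drift
            continue
        p_live = live_correct[item_id] / n
        if abs(p_live - p_beta) > tolerance:
            flagged.append(item_id)
    return flagged

print(flag_drifting_items({"ITEM-01": 0.70}, {"ITEM-01": 52}, {"ITEM-01": 100}))
# ['ITEM-01'] -> live p = 0.52 drifted more than 0.10 from beta p = 0.70
```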
By maintaining quality CTE standards, Precision Exams works every day to help educators focus on training highly skilled workers who hold industry certifications.