At least 350 languages are spoken in American homes.
Nearly a quarter of all US households speak a language other than English.* Yet speakers of most non-English languages lack opportunities to demonstrate proficiency and gain recognition for their language skills.
The National Examinations in World Languages
Developed by American Councils, The National Examinations in World Languages (NEWL®) is an online language assessment in four critical languages: Arabic (MSA), Korean, Portuguese (global), and Russian. The exams measure functional language proficiency across four skills: reading, listening, speaking, and writing.
Designed for both traditional foreign language learners and heritage learners, NEWL® exams produce AP®-style score reports that can be used to apply for college credit and/or placement.
Learn more about NEWL here.
ILR Assessment Tools
We develop foreign language proficiency tests based on the Interagency Language Roundtable (ILR) language proficiency guidelines. These tests are used by The Language Flagship, a national foreign language education program sponsored and funded by the National Security Education Program (NSEP).
The exams are used for qualification, pre-departure, and post-return assessment of program participants, and measure proficiency at ILR levels 0+, 1, 1+, 2, 2+, 3, and 3+. Reading, listening, writing, and speaking tests are offered in:
- Arabic (MSA)
ACTFL Proficiency Guidelines - Tools
We develop and administer foreign language proficiency tests based on the American Council on the Teaching of Foreign Languages (ACTFL) proficiency guidelines, which range from Novice Mid to Superior.
Tests developed under the ACTFL scale are used for pre-program and post-program testing for study abroad programs, such as the Advanced Russian Language and Area Studies Program, and the National Security Language Initiative for Youth (NSLI-Y). Language assessments for these programs measure proficiency in Arabic, Chinese, and Russian.
All test data are analyzed to ensure reliability and validity. Our psychometricians collect response data under field-test and operational conditions, analyze them with statistical and psychometric models, and conduct item review meetings and standard-setting workshops with language specialists, stakeholders, and educational measurement experts. Both classical test theory and item response theory approaches are used.
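As a rough illustration (not American Councils' actual models, and using made-up response data), the two approaches can be contrasted in a few lines of Python: classical test theory summarizes an item by its proportion-correct difficulty, while a simple one-parameter (Rasch) item response theory model expresses the probability of a correct answer as a function of examinee ability and item difficulty.

```python
import math

def ctt_difficulty(responses):
    """Classical test theory: item difficulty as the proportion of
    examinees answering correctly (the item's p-value)."""
    return sum(responses) / len(responses)

def rasch_prob(theta, b):
    """One-parameter (Rasch) IRT model: probability that an examinee
    of ability theta answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical scored responses (1 = correct) to a single item
responses = [1, 0, 1, 1, 0, 1, 1, 0]
print(ctt_difficulty(responses))               # 0.625: 5 of 8 answered correctly
print(round(rasch_prob(theta=1.0, b=0.0), 3))  # 0.731: able examinee, average item
```

The CTT p-value depends on the sample of examinees tested, whereas the IRT probability separates examinee ability from item difficulty, which is one reason operational programs often use both.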
Reliability is the degree to which an assessment is consistent, e.g., how closely the scores from two parallel forms correlate with each other.
Validity is the degree to which an assessment reflects the performance or proficiency it is supposed to measure. For instance, students with more language education should usually score higher on related language assessments.
Our language testing initiatives are powered by an end-to-end online testing system called the American Councils Language Assessment Support System, or ACLASS. The system supports version-controlled item development, test form creation, registration and scheduling, exam administration and monitoring, and score reporting. To date, ACLASS has been used to administer more than 12,000 online language exams.
*Source: US Census Bureau, 2015