State Rubrics · March 14, 2026 · 11 min read

M-STAR Teacher Evaluation in Mississippi: A Complete 2026 Guide

Everything Mississippi administrators need to know about M-STAR compliance and AI-assisted evaluation

By The Upraiser Team

[Image: Diverse elementary classroom in Mississippi with teacher circulating among students during an M-STAR observation]

What Is M-STAR?

The Mississippi Statewide Teacher Appraisal Rubric (M-STAR) is the official teacher evaluation framework mandated by the Mississippi Department of Education (MDE). Introduced to replace a patchwork of district-level instruments, M-STAR provides a single, research-based rubric that every public school district in the state must use to evaluate classroom teachers.

M-STAR is grounded in the work of Charlotte Danielson and Robert Marzano but customized for Mississippi's standards and student demographics. It connects teacher practice directly to the Mississippi College- and Career-Readiness Standards and is designed to be both a compliance instrument and a professional growth tool.

~1,000 — Public schools in Mississippi using M-STAR

Because Mississippi is a smaller state with roughly 1,000 public schools, many administrators find that M-STAR-specific resources are harder to come by online compared to larger-state frameworks like T-TESS (Texas) or TEAM (Tennessee). This guide aims to fill that gap with a comprehensive, practical reference for 2026 and beyond.

M-STAR Domains and Indicators

M-STAR organizes teacher performance into four domains, each containing multiple indicators that evaluators score during classroom observations. Understanding the structure is essential for fair, consistent evaluation.

Domain 1: Lesson Design

This domain examines how teachers plan instruction aligned to Mississippi standards. Evaluators look for clearly articulated learning objectives, appropriate use of student data to inform planning, and instructional strategies matched to diverse learner needs. Indicators include alignment to standards, use of assessment data, and differentiation.

Domain 2: Student Understanding

Domain 2 focuses on how well teachers communicate content and engage students in meaningful learning. Indicators cover content knowledge and accuracy, instructional delivery, questioning strategies that promote higher-order thinking, and the use of academic vocabulary aligned to Mississippi standards.

Domain 3: Culture and Learning Environment

The classroom environment domain assesses relationship building, behavior management, and the overall culture of high expectations. Evaluators look for evidence of respectful interactions, established routines and procedures, equitable engagement of all students, and a physical environment conducive to learning.

Domain 4: Professional Responsibilities

This domain extends beyond the classroom to encompass teacher professionalism, collaboration, communication with families, and contribution to the school community. It includes reflective practice, participation in professional development, and adherence to ethical standards.

Key distinction: Domains 1-3 are evaluated primarily through classroom observation, while Domain 4 draws on artifacts, documentation, and administrator records. Most AI-assisted tools focus on Domains 1-3 where audio and visual evidence from the classroom is available.

M-STAR Scoring Levels and Observation Requirements

M-STAR uses a four-level performance rating scale for each indicator. Understanding what distinguishes each level is critical for calibrated, defensible evaluations.

The Four Performance Levels

  • Distinguished: Teacher performance significantly exceeds expectations. Evidence shows consistent, exceptional practice that could serve as a model for other educators. Students demonstrate ownership of learning.
  • Effective: Teacher meets the professional standard. Instruction is well-planned, content delivery is accurate, and students are engaged. This is the target rating for proficient teachers.
  • Emerging: Teacher demonstrates inconsistent or developing practice. Some elements are present but not yet reliable. Teachers at this level need targeted support and coaching.
  • Unsatisfactory: Teacher performance does not meet minimum standards. Practice is ineffective or harmful to student learning. This rating triggers formal intervention and improvement planning.

Observation minimums: Mississippi requires at least two formal observations per year for non-tenured teachers and at least one for tenured teachers. Districts may require additional observations. Each formal observation must include a pre-conference, the observation itself (minimum 30 minutes), and a post-conference with written feedback.

Scoring Aggregation

Individual indicator scores roll up to domain-level ratings, which then combine into an overall summative rating. Mississippi policy weights classroom-observed domains (1-3) more heavily than professional responsibilities (Domain 4). Evaluators must provide specific evidence from the observation for every indicator scored -- a rating without evidence is not compliant.
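For readers who want to see the rollup mechanics concretely, here is a minimal sketch of a weighted domain aggregation. The indicator scores, domain weights, and the three-indicators-per-domain layout below are illustrative assumptions for demonstration only, not official MDE values.

```python
# Illustrative M-STAR-style score rollup. All numbers here are
# assumptions for demonstration, not official MDE weights or cutoffs.

def domain_rating(indicator_scores):
    """Average a domain's indicator scores (each rated 1-4)."""
    return sum(indicator_scores) / len(indicator_scores)

def summative_rating(domain_ratings, weights):
    """Combine domain ratings using per-domain weights that sum to 1.0."""
    return sum(domain_ratings[d] * weights[d] for d in domain_ratings)

# Example: Domains 1-3 (classroom-observed) weighted more heavily
# than Domain 4 (Professional Responsibilities).
domains = {
    1: domain_rating([3, 3, 4]),  # Lesson Design
    2: domain_rating([3, 2, 3]),  # Student Understanding
    3: domain_rating([4, 3, 3]),  # Culture and Learning Environment
    4: domain_rating([3, 3]),     # Professional Responsibilities
}
weights = {1: 0.3, 2: 0.3, 3: 0.3, 4: 0.1}  # assumed weighting

overall = summative_rating(domains, weights)
print(round(overall, 2))  # prints 3.1
```

The key point the sketch illustrates: a single weak indicator in a heavily weighted domain moves the summative rating more than the same score in Domain 4, which is why evidence quality in Domains 1-3 matters most.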

Mississippi-Specific Compliance Requirements

Beyond the rubric itself, Mississippi administrators must navigate several state-specific compliance requirements that govern how M-STAR evaluations are conducted, documented, and reported.

Evaluator Certification

All evaluators must complete MDE-approved M-STAR training before conducting evaluations. This certification must be renewed periodically, and districts are responsible for maintaining records of evaluator credentials. Using uncertified evaluators can jeopardize the legal defensibility of evaluation results.

Documentation and Evidence Standards

Every M-STAR evaluation must include written evidence supporting each indicator rating. MDE expects evaluators to cite specific, observable teacher behaviors and student responses rather than subjective impressions. This evidence-gathering requirement is where many administrators struggle with time management -- a single observation can require 30-45 minutes of post-observation writing.

Timeline and Reporting

Districts must complete all summative evaluations and submit data to MDE by designated deadlines, typically in May. Late or incomplete submissions can affect district accreditation status. Administrators need a reliable system for tracking observation completion across their building or district.

30-45 min — Average time administrators spend writing evidence per observation

Connection to Licensure and Growth Plans

M-STAR results directly impact teacher licensure renewal in Mississippi. Teachers rated Unsatisfactory must be placed on an improvement plan, and consecutive low ratings can affect employment status. Conversely, Distinguished and Effective ratings support career advancement pathways. This high-stakes connection makes accurate, well-evidenced evaluations essential.

Common Challenges for Mississippi Administrators

School leaders across Mississippi face a consistent set of challenges when implementing M-STAR. Understanding these pain points is the first step toward solving them.

Time Burden on Principals

Mississippi principals, particularly in small and rural schools, often serve as the sole evaluator for their entire staff. A building with 25 teachers requires between 25 and 50 formal observations per year at minimum, depending on the mix of tenured and non-tenured staff, each with pre-conferences, post-conferences, and detailed written evidence. For a principal already managing discipline, family communications, and instructional leadership, this workload is unsustainable without efficient tools.

Inconsistent Scoring Across Evaluators

Even with M-STAR training, scoring calibration remains a challenge. Research consistently shows that different evaluators can watch the same lesson and arrive at different ratings. This inconsistency undermines teacher trust in the evaluation system and creates legal risk when ratings are contested.

Evidence Quality and Specificity

MDE expects evidence that references specific teacher and student behaviors, but many evaluators default to vague statements like "good classroom management" or "students were engaged." Writing specific, rubric-aligned evidence requires deep familiarity with M-STAR indicators and the discipline to capture details in real time during observations.

Limited M-STAR-Specific Resources

Unlike Texas (T-TESS) or Tennessee (TEAM), Mississippi's smaller education ecosystem means fewer third-party training resources, scoring guides, and professional development materials specifically designed for M-STAR. Many administrators rely on MDE's official materials supplemented by general Danielson or Marzano resources, which do not always map cleanly to M-STAR's specific indicators.

"I know what effective teaching looks like, but translating a 45-minute observation into rubric-aligned evidence for every indicator takes longer than the observation itself." -- Mississippi elementary school principal

How AI Maps to M-STAR Domains with Classroom Audio Evidence

AI-powered evaluation tools like Upraiser address the core M-STAR challenges by automating the most time-consuming part of the process: capturing evidence and mapping it to rubric indicators. Here is how the technology works in a Mississippi context.

Audio-to-Evidence Pipeline

During a classroom observation, the evaluator records the lesson audio using a phone or tablet. The AI transcribes the entire lesson, creating a searchable, timestamped record of every teacher statement, student response, and instructional interaction. This transcript becomes the evidence base for M-STAR scoring.

Automatic Domain Mapping

Upraiser's AI is trained on M-STAR's specific domains and indicators. It analyzes the transcript and identifies evidence that maps to each indicator -- for example, a teacher's use of higher-order questioning maps to Domain 2, while evidence of established routines maps to Domain 3. The AI produces a draft evaluation with specific transcript citations for every rating.
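To make the mapping idea concrete, here is a toy sketch of tagging timestamped transcript lines with M-STAR domains. Upraiser's actual approach uses trained models on the full rubric; the keyword lookup, domain labels, and sample transcript below are purely hypothetical illustrations of the concept.

```python
# Toy illustration of mapping transcript excerpts to M-STAR domains.
# A production system uses trained models; this keyword lookup is an
# assumption-laden sketch of the general idea only.

DOMAIN_KEYWORDS = {
    "Domain 2: Student Understanding": ["why do you think", "explain your reasoning"],
    "Domain 3: Culture and Learning Environment": ["routine", "transition"],
}

def tag_evidence(transcript_lines):
    """Return (domain, timestamped citation) pairs where a keyword matches."""
    evidence = []
    for timestamp, line in transcript_lines:
        lowered = line.lower()
        for domain, keywords in DOMAIN_KEYWORDS.items():
            if any(kw in lowered for kw in keywords):
                evidence.append((domain, f"[{timestamp}] {line}"))
    return evidence

transcript = [
    ("09:02", "Why do you think the author chose that word?"),
    ("09:05", "Remember our transition routine: voices off, materials away."),
]
for domain, citation in tag_evidence(transcript):
    print(domain, "->", citation)
```

Even this toy version shows the payoff: every suggested rating arrives attached to a timestamped quote, which is exactly the specific, observable evidence MDE expects.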

The evaluator stays in control. AI-generated scores are always drafts. The certified evaluator reviews every rating, adjusts as needed, and adds professional context that the AI cannot capture (such as Domain 4 evidence). The tool accelerates evidence gathering; it does not replace professional judgment.

Consistency and Calibration Benefits

Because the AI applies M-STAR criteria consistently across every observation, it helps reduce scoring drift between evaluators. Principals and district leaders can compare AI-suggested ratings with human ratings to identify calibration gaps and target evaluator training. Over time, this produces more reliable and defensible evaluation data across the district.

Time Savings for Mississippi Schools

The most immediate benefit is time. Instead of spending 30-45 minutes per observation writing evidence from memory, administrators receive a complete draft with citations within minutes of the lesson ending. For a Mississippi principal conducting 40 observations per year, this can recover 20-30 hours annually -- time that goes back to instructional leadership, coaching, and supporting teachers.
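The arithmetic behind that estimate is straightforward, assuming the 40-observations-per-year workload described above:

```python
# Back-of-the-envelope check of the time-savings claim:
# 40 observations per year at 30-45 minutes of evidence writing each.
observations_per_year = 40
minutes_low, minutes_high = 30, 45

hours_low = observations_per_year * minutes_low / 60    # 20.0 hours
hours_high = observations_per_year * minutes_high / 60  # 30.0 hours
print(f"{hours_low:.0f}-{hours_high:.0f} hours per year")  # prints 20-30 hours per year
```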

20-30 hrs — Annual time savings per evaluator with AI-assisted evidence gathering

Practical M-STAR Tips and Best Practices for 2026

Whether you are a first-year assistant principal or a veteran superintendent, these practical strategies will help you get the most out of M-STAR in the 2025-2026 school year.

1. Front-Load Your Observation Calendar

Do not wait until spring. Schedule all formal observations by October and aim to complete first-round observations before winter break. This gives teachers time to act on feedback before summative ratings are due, and it protects your schedule from the inevitable spring disruptions.

2. Use the Pre-Conference Strategically

The pre-conference is not just a compliance checkbox. Use it to ask teachers which M-STAR domains they are focusing on for growth. This gives you a lens for the observation and makes post-conference feedback more targeted and meaningful.

3. Script Observable Evidence in Real Time

During the observation, focus on scripting what you see and hear rather than making judgments in the moment. Record exact teacher questions, student responses, and transitions. This raw evidence is far more useful (and defensible) than interpretive notes. If you are using audio recording, you have this evidence automatically.

4. Calibrate With Your Evaluation Team

If your district has multiple evaluators, schedule at least two calibration sessions per year where all evaluators watch the same lesson video and independently score it using M-STAR. Compare results, discuss discrepancies, and agree on evidence thresholds for each rating level.

5. Connect M-STAR to Professional Development

M-STAR data should drive your PD planning. If building-wide data shows that Domain 2 (Student Understanding) is the weakest area, invest PD hours there rather than running generic sessions. This creates a visible link between evaluations and teacher support that builds buy-in.

6. Keep Domain 4 Evidence Running Year-Round

Professional Responsibilities evidence does not come from a single observation. Maintain a simple tracking system for committee participation, parent communication logs, PD hours, and collaboration evidence throughout the year. Collecting Domain 4 evidence retroactively in May is a common and avoidable mistake.

Looking ahead: MDE continues to refine M-STAR guidance annually. Stay current with the Mississippi Department of Education's Office of Teaching and Leading for any rubric updates or policy changes that affect the 2026-2027 evaluation cycle.

Getting Started With M-STAR and AI

Mississippi administrators do not have to choose between compliance and efficiency. AI-assisted evaluation tools designed for M-STAR let you maintain the rigor MDE expects while dramatically reducing the time burden on your evaluation team.

Upraiser supports M-STAR alongside 23 other state frameworks, so districts with teachers licensed in multiple states can use a single platform. The system produces M-STAR-aligned evidence reports that are ready for post-conferences, improvement plans, and MDE submission.

For Mississippi schools and districts looking to modernize their evaluation process without sacrificing quality, the combination of M-STAR's research-based rubric and AI-powered evidence capture is a practical path forward for the 2025-2026 school year and beyond.

See M-STAR scoring powered by AI

Watch Upraiser analyze a Mississippi classroom observation and produce M-STAR-aligned feedback with evidence citations from the actual lesson.

Request an M-STAR Demo

© 2026 Upraiser, Inc. All rights reserved.