State Rubrics · February 3, 2026 · 14 min read

The Complete Guide to T-TESS Evaluation in Texas (2026)

How AI is helping Texas administrators save hours on T-TESS classroom observations

By The Upraiser Team


What Is T-TESS and Why Does It Matter?

The Texas Teacher Evaluation and Support System (T-TESS) is the state-adopted framework for evaluating and supporting teachers across Texas public schools. Developed by the Texas Education Agency (TEA) in partnership with educators, T-TESS replaced the legacy PDAS system in the 2016-2017 school year with a singular goal: to shift teacher evaluation from a compliance checkbox to a meaningful growth tool.

Unlike its predecessor, T-TESS was designed around the principle that evaluation should be a continuous cycle of feedback and improvement, not a once-a-year event. The framework centers on what actually matters: observable teaching practices and their impact on student learning.

~350,000 -- teachers evaluated under T-TESS across Texas each year

For the roughly 9,000 public schools in Texas, T-TESS is not optional. Every campus administrator -- principals, assistant principals, and designated appraisers -- must be T-TESS certified to conduct evaluations. With approximately 18,000 principals and APs responsible for evaluating hundreds of thousands of teachers annually, the scale of this system is staggering. And that scale is precisely where the challenge lies.

T-TESS evaluations are thorough by design. The framework demands that appraisers collect evidence across four domains and sixteen dimensions, attend pre- and post-conferences, and produce documentation that supports each rating with specific, observable evidence. Done well, this process transforms teaching practice. Done under the crushing weight of administrative workload, it becomes a paper exercise that helps no one.

The 4 T-TESS Domains: A Deep Dive

T-TESS organizes teacher performance into four domains, each containing specific dimensions that appraisers score during the evaluation cycle. Understanding these domains deeply is essential for both evaluators conducting observations and teachers preparing for them.

Domain 1: Planning

Domain 1 is evaluated primarily through the pre-conference and lesson plan review, not during the classroom observation itself. This is a critical distinction that many new appraisers miss. Planning is assessed across four dimensions:

  • Dimension 1.1 -- Standards and Alignment: The teacher designs clear, well-organized lessons aligned to the Texas Essential Knowledge and Skills (TEKS). Look for evidence that the teacher can articulate vertical alignment and has considered how the lesson fits within the broader unit and curriculum.
  • Dimension 1.2 -- Data and Assessment: The teacher uses formal and informal assessment data to plan instruction and monitor student progress. Evidence includes differentiated activities based on assessment results, flexible grouping strategies, and clear formative checkpoints within the lesson design.
  • Dimension 1.3 -- Knowledge of Students: The teacher demonstrates understanding of students' backgrounds, interests, and learning needs. Distinguished teachers proactively seek information about students and use it to design culturally responsive, differentiated instruction.
  • Dimension 1.4 -- Activities: The teacher plans engaging, relevant activities that build on prior knowledge and promote higher-order thinking. Activities should be sequenced logically and provide multiple entry points for diverse learners.

Domain 2: Instruction

Domain 2 is the heart of the classroom observation. This is where the appraiser spends 45 minutes or more collecting evidence of teaching in action. It contains five dimensions:

  • Dimension 2.1 -- Achieving Expectations: The teacher sets high expectations and supports all students in meeting them. Watch for evidence of growth mindset language, scaffolded support, and persistent re-engagement of struggling learners.
  • Dimension 2.2 -- Content Knowledge and Expertise: The teacher demonstrates deep, flexible knowledge of the content area. Distinguished teachers anticipate student misconceptions, connect content to real-world applications, and promote disciplinary literacy.
  • Dimension 2.3 -- Communication: The teacher communicates clearly with students using verbal, nonverbal, and written methods. Look for precise academic language, well-structured explanations, and responsive adjustments when students indicate confusion.
  • Dimension 2.4 -- Differentiation: The teacher adapts instruction to meet the needs of diverse learners. Evidence includes tiered assignments, flexible grouping, varied questioning strategies, and multiple means of demonstrating mastery.
  • Dimension 2.5 -- Monitor and Adjust: The teacher uses formative assessment throughout the lesson to check for understanding and adjusts instruction accordingly. Distinguished teachers make real-time pivots that are seamless and responsive to student data.

Domain 3: Learning Environment

Domain 3 is observed alongside instruction but focuses specifically on the climate and culture of the classroom. It contains three dimensions:

  • Dimension 3.1 -- Classroom Environment, Routines and Procedures: Students know what to do and transitions are smooth. In a Distinguished classroom, students themselves manage routines, monitor their own behavior, and take ownership of the learning environment.
  • Dimension 3.2 -- Managing Student Behavior: The teacher maintains a respectful, productive environment through clear expectations and consistent follow-through. Distinguished teachers rarely need to address behavior because the culture is self-sustaining.
  • Dimension 3.3 -- Classroom Culture: Students feel safe to take intellectual risks. Look for evidence of student voice, collaborative learning, and a genuine sense of community where mistakes are treated as learning opportunities.

Domain 4: Professional Practices and Responsibilities

Domain 4 is not observed during the classroom visit. It is assessed through the post-conference, professional development records, and documentation of the teacher's contributions beyond the classroom. It contains four dimensions:

  • Dimension 4.1 -- Professional Demeanor and Ethics: The teacher complies with legal and ethical requirements, including the Educators' Code of Ethics.
  • Dimension 4.2 -- Goal Setting: The teacher sets measurable professional growth goals aligned to T-TESS dimensions and campus or district priorities, and uses student data to inform and refine those goals.
  • Dimension 4.3 -- Professional Development: The teacher pursues relevant, self-directed professional learning and applies new strategies in the classroom.
  • Dimension 4.4 -- School Community Involvement: The teacher contributes to the school community through leadership, collaboration, and family engagement.

T-TESS Tip: Domains 1 and 4 are assessed outside the classroom observation. Effective appraisers use the pre-conference (Domain 1) and post-conference (Domain 4) as genuine professional conversations, not interrogations. The quality of evidence you gather in these conferences directly impacts the accuracy and defensibility of your summative ratings.

T-TESS Scoring: The 5 Performance Levels

Each of the 16 dimensions is scored on a five-point scale. Understanding the distinctions between levels is critical for inter-rater reliability -- and it is where most scoring inconsistencies occur.

  • Distinguished (5): The teacher is a model for others. Performance at this level is aspirational -- it represents practices that are innovative, student-driven, and consistently exceptional. Students often drive the learning, self-assess, and hold each other accountable. This rating should be rare and supported by compelling evidence.
  • Accomplished (4): The teacher demonstrates a thorough understanding and consistently strong application of the dimension. Teaching at this level is effective, intentional, and clearly impacts student learning. This is the target for experienced, proficient educators.
  • Proficient (3): The teacher meets the standard. Practices are effective and appropriate, though they may lack the consistency or depth of an Accomplished rating. For new teachers, Proficient is a strong starting point. For veteran teachers, remaining at Proficient across all dimensions warrants a growth conversation.
  • Developing (2): The teacher is working toward proficiency but demonstrates inconsistent or emerging practices. A Developing rating is not punitive -- it signals an area where targeted coaching and support will have the greatest impact.
  • Improvement Needed (1): The teacher's performance is below the expected standard and may be negatively impacting student learning. This rating triggers formal support and documentation requirements and should be accompanied by clear, specific evidence and a growth plan.

Scoring Calibration Tip: The most common scoring error in T-TESS is "central tendency" -- clustering ratings at Proficient (3) to avoid difficult conversations. TEA training materials emphasize that appraisers should use the full scale and that every rating must be anchored to specific, observable evidence from the rubric descriptors, not general impressions.

The T-TESS Observation Cycle: Pre-Conference, Observation, Post-Conference

T-TESS is built around a three-phase observation cycle designed to create a meaningful feedback loop between appraiser and teacher. Each phase has a distinct purpose, and skipping or rushing any phase undermines the entire system.

Phase 1: Pre-Conference

The pre-conference is a structured conversation that takes place before the classroom observation, typically within the preceding days. During this meeting, the teacher walks the appraiser through the upcoming lesson, explaining alignment to TEKS, assessment strategies, differentiation plans, and knowledge of students. This is where Domain 1 (Planning) evidence is primarily collected.

Effective pre-conferences last 15-20 minutes and feel like a collaborative planning discussion, not an interview. The appraiser should leave with a clear picture of what to expect during the observation and what the teacher's intentional choices were.

Phase 2: Classroom Observation

The observation itself should last a minimum of 45 minutes, though TEA recommends a full class period when possible. During this time, the appraiser is scripting -- capturing verbatim teacher and student language, noting instructional moves, and documenting evidence across Domains 2 (Instruction) and 3 (Learning Environment).

This is where the time burden hits hardest. A conscientious appraiser is simultaneously listening to instruction, writing detailed notes, tracking student engagement across the room, monitoring questioning strategies, noting differentiation moves, and mentally mapping observations to specific T-TESS dimensions. The cognitive load is enormous, and it is the primary reason that evidence quality varies so dramatically between evaluators.

Phase 3: Post-Conference

The post-conference should occur within 10 working days of the observation. This is where the appraiser shares evidence, discusses ratings, and collaborates with the teacher on growth areas. Domain 4 (Professional Practices and Responsibilities) evidence is gathered here alongside the teacher's self-reflection on the observed lesson.

The post-conference is the moment where evaluation becomes coaching. When done well, teachers leave the post-conference with specific, actionable next steps tied to rubric dimensions. When done poorly -- when it is rushed or generic -- teachers feel the evaluation was performative, and the entire cycle loses its developmental purpose.

3-5 hours -- average time per complete T-TESS evaluation cycle (pre-conference through post-conference documentation)

The Real Pain Points: Why Texas Administrators Are Struggling

Talk to any Texas principal or AP about T-TESS and you will hear the same themes. The framework itself is well-designed -- most administrators genuinely believe it can improve teaching. The problem is implementation at scale.

The Time Equation Does Not Work

Consider the math. A typical Texas elementary school has 30-40 teachers. Each teacher requires at minimum one formal observation cycle per year (pre-conference, observation, post-conference, documentation). New teachers, teachers on growth plans, and teachers requesting additional feedback may require two or three cycles. At 3-5 hours per cycle, a principal with 35 teachers is looking at 105 to 175 hours per year dedicated solely to T-TESS. That is 13 to 22 full working days -- and that is before walkthroughs, informal observations, and all the other responsibilities competing for a principal's attention.
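The arithmetic above can be sketched in a few lines. The 35-teacher campus and the 3-5 hour range are the figures from this article; the 8-hour workday used to convert hours into days is an assumption:

```python
# Back-of-envelope T-TESS time budget for one campus.
TEACHERS = 35                 # typical elementary campus (article's example)
HOURS_PER_CYCLE = (3, 5)      # low and high estimates per complete cycle
WORKDAY_HOURS = 8             # assumed length of a full working day

low, high = (TEACHERS * h for h in HOURS_PER_CYCLE)
print(f"{low}-{high} hours per year on formal T-TESS cycles")   # 105-175 hours
print(f"{round(low / WORKDAY_HOURS)}-{round(high / WORKDAY_HOURS)} working days")  # 13-22 days
```

And that total assumes exactly one cycle per teacher; campuses with new teachers or growth plans run meaningfully higher.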

The result is predictable: observations get compressed, documentation gets thin, and post-conferences become 10-minute conversations instead of genuine coaching sessions. The evaluation system designed to support teachers becomes the very thing that prevents principals from supporting them.

Inconsistent Scoring Between Appraisers

Inter-rater reliability is the Achilles' heel of any rubric-based evaluation system. Even with TEA's certification training, two T-TESS-certified appraisers can watch the same lesson and produce meaningfully different ratings. Research consistently shows that observer calibration degrades over time without regular recalibration exercises, and most districts lack the resources to conduct them.

This inconsistency erodes trust. When a teacher receives a Developing rating from one appraiser and a Proficient from another for the same practice, the credibility of the entire system suffers. Teachers begin to see evaluation as subjective and arbitrary rather than as a reliable tool for growth.

Documentation Is the Bottleneck

T-TESS requires that every rating be supported by specific evidence. That means the handwritten or typed notes from a 45-minute observation must be organized, mapped to dimensions, and translated into clear, defensible justifications for each score. Many administrators report spending more time on post-observation documentation than on the observation itself.

~18,000 -- principals and APs conducting T-TESS evaluations across Texas

How AI Maps to Each T-TESS Domain

This is where the conversation shifts from "what is T-TESS" to "how do we actually make it work at scale." AI-assisted evaluation tools like Upraiser do not replace the appraiser's professional judgment. They eliminate the mechanical bottlenecks that prevent appraisers from exercising that judgment effectively.

Here is how the technology maps to each domain:

Domain 2 (Instruction) -- Where AI Has the Greatest Impact

During the classroom observation, Upraiser records classroom audio and transcribes it in real time using AssemblyAI. This transcript becomes a verbatim record of every instructional move, question, student response, and teacher redirect that occurred during the lesson. No more frantically scribbling notes while trying to watch 30 students simultaneously.

The AI then analyzes the transcript against each Domain 2 dimension. For Dimension 2.1 (Achieving Expectations), it identifies instances of growth mindset language, scaffolding, and re-engagement strategies. For Dimension 2.2 (Content Knowledge), it flags moments where the teacher connected content to prior learning, addressed misconceptions, or extended student thinking. For Dimension 2.3 (Communication), it analyzes the clarity of teacher explanations, the ratio of teacher talk to student talk, and the precision of academic vocabulary. For Dimension 2.4 (Differentiation), it identifies instances of flexible grouping, tiered questioning, and multiple response modalities. For Dimension 2.5 (Monitor and Adjust), it detects formative assessment checkpoints and the teacher's responsive adjustments.

Every AI-generated rating comes with specific evidence citations -- timestamped excerpts from the transcript that support the score. The appraiser reviews these citations, adjusts ratings based on contextual knowledge the AI cannot observe (like the student who was having a rough morning), and produces a final evaluation that is more thorough and better-documented than what most humans could produce alone in twice the time.
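As a rough illustration of the kind of record this workflow produces -- the field names, example excerpt, and timestamp here are hypothetical, not Upraiser's actual data model:

```python
# Sketch of a dimension rating with timestamped evidence citations,
# where the appraiser's final judgment sits alongside the AI suggestion.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class EvidenceCitation:
    timestamp: str   # mm:ss offset into the observation recording
    excerpt: str     # verbatim transcript excerpt supporting the score

@dataclass
class DimensionRating:
    dimension: str                       # e.g. "2.5 Monitor and Adjust"
    suggested_score: int                 # AI-suggested 1-5 performance level
    citations: list[EvidenceCitation] = field(default_factory=list)
    appraiser_score: int | None = None   # final human rating overrides the AI

rating = DimensionRating(
    dimension="2.5 Monitor and Adjust",
    suggested_score=4,
    citations=[EvidenceCitation("17:42", "Thumbs up if you're ready to try one on your own.")],
)
rating.appraiser_score = 4  # appraiser confirms after reviewing the excerpt
```

The point of the structure is the last field: the AI proposes and cites, but the human score is what is recorded.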

Domain 3 (Learning Environment) -- Audio + Visual Evidence

Classroom culture lives in the details: the tone of voice a teacher uses when redirecting behavior, the way students talk to each other during collaborative work, the smoothness of transitions. AI analysis of the audio transcript captures these moments -- the respectful redirection at minute 12, the student-to-student academic discourse at minute 23, the seamless transition into group work at minute 31. When paired with images captured during the observation, AI can also analyze classroom organization, anchor charts, and visual evidence of routines and procedures.

Domains 1 and 4 -- Conference Documentation

While Domains 1 and 4 are assessed outside the observation, AI tools accelerate the documentation process. The verbatim transcript from the observation provides natural jumping-off points for the post-conference: "At minute 17, I noticed you asked three consecutive questions at the recall level before shifting to analysis. Walk me through your thinking there." This level of specificity transforms the post-conference from a vague conversation into a precision coaching session.

How It Works in Practice: An appraiser walks into a classroom, starts a recording on their phone, and focuses entirely on watching instruction. After the observation, the AI produces a draft analysis mapped to all 16 T-TESS dimensions with evidence citations. The appraiser reviews, adjusts, and finalizes -- typically in under 30 minutes instead of 2+ hours. The result is a more thorough, better-documented evaluation completed in a fraction of the time.

Improving Inter-Rater Reliability with AI-Assisted Scoring

Inter-rater reliability is arguably the most important quality metric in any evaluation system. If two trained appraisers cannot agree on what "Proficient" looks like in Dimension 2.4, the entire rating system lacks validity. This is not a theoretical concern -- it is a daily reality in Texas schools.

TEA addresses this through certification training and calibration videos, but the effect fades. Studies on observation-based evaluation systems consistently find that rater accuracy declines within months of initial training without ongoing recalibration. Most Texas districts do not have the bandwidth for quarterly calibration sessions.

AI-assisted scoring addresses this problem structurally. The AI applies the same rubric descriptors, with the same interpretation, to every observation. It does not have bad days. It does not unconsciously score the veteran teacher more generously than the first-year teacher. It does not rush through the last five evaluations of the semester because spring break is approaching.

This does not mean the AI is always right. It means the AI provides a consistent baseline that appraisers can calibrate against. When an appraiser's rating diverges from the AI's suggested score, it prompts a reflective question: "What evidence am I seeing that the AI missed? Or what evidence is the AI surfacing that I overlooked?" This built-in calibration mechanism helps appraisers stay anchored to the rubric language rather than drifting toward personal interpretation.
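One way such a divergence check might be sketched -- the dimension labels, scores, and one-level threshold are all illustrative assumptions, not a description of any particular product's logic:

```python
# Flag dimensions where the appraiser's rating diverges from the AI baseline
# by more than a threshold, prompting the reflective review described above.
def divergent_dimensions(appraiser: dict, ai: dict, threshold: int = 1) -> list:
    """Return dimensions whose human and AI scores differ by more than threshold."""
    return sorted(
        dim for dim in appraiser
        if abs(appraiser[dim] - ai.get(dim, appraiser[dim])) > threshold
    )

appraiser_scores = {"2.1": 4, "2.3": 3, "2.4": 2}
ai_scores        = {"2.1": 4, "2.3": 3, "2.4": 4}
print(divergent_dimensions(appraiser_scores, ai_scores))  # ['2.4'] -> worth a second look
```

A flagged dimension is not a verdict either way; it is the cue to re-read the rubric descriptors and the evidence before finalizing.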

For districts with multiple appraisers -- especially large districts where a teacher might be evaluated by a principal one year and an AP the next -- this consistency is transformative. Teachers receive feedback calibrated to the same standard regardless of who conducts the observation.

District-Level Benefit: When AI provides a consistent scoring baseline across all campuses, district leadership can identify genuine performance trends rather than artifacts of appraiser variation. Data from T-TESS evaluations becomes actionable at the system level, informing professional development investments and coaching resource allocation.

T-TESS Compliance Tips for the 2025-2026 School Year

TEA continues to refine T-TESS guidance, and staying current with requirements is essential for defensible evaluations. Here are the key compliance considerations for the 2025-2026 school year:

  • Appraiser Certification: All T-TESS appraisers must complete TEA-approved certification training. Recertification is required every five years, but districts are encouraged to provide annual calibration exercises. Ensure your appraiser certifications are current before conducting any formal observations.
  • Observation Length: TEA recommends a minimum 45-minute observation for formal evaluations. Observations shorter than a full class period may not provide sufficient evidence across all observable dimensions. Document the start and end time of every observation.
  • Timeline Requirements: Post-conferences must occur within 10 working days of the observation. The summative evaluation conference should occur no later than 15 working days before the last day of instruction. Build your observation calendar backward from these deadlines to avoid end-of-year compression.
  • Evidence Documentation: Every rating must be supported by specific, observable evidence tied to the dimension's rubric descriptors. General statements like "good classroom management" are insufficient. Reference specific moments, teacher language, and student behaviors.
  • Teacher Self-Assessment: Teachers should complete their self-assessment using the T-TESS rubric before the beginning-of-year conference. This self-assessment is a critical component of the goal-setting process and informs the teacher's professional development plan.
  • Student Growth Measures: T-TESS incorporates student growth as one component of the overall evaluation. Ensure your campus has established clear, measurable student growth goals aligned to the Student Learning Objectives (SLO) framework or district-adopted alternative.
  • Technology-Assisted Observations: TEA guidance permits the use of audio and video recording tools during observations when they comply with district policy and applicable FERPA regulations. Ensure your district's board-approved technology use policy covers AI-assisted evaluation tools, and obtain necessary consent documentation.
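The 10-working-day post-conference window above can be computed with a small helper. This sketch counts Monday-Friday only; it does not model district holidays or closure days, which a real observation calendar would need to account for:

```python
# Compute the date n working days (Mon-Fri) after a given start date.
from datetime import date, timedelta

def working_days_after(start: date, n: int) -> date:
    """Date n working days after start; holidays are not modeled."""
    d = start
    while n > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            n -= 1
    return d

# A Tuesday observation: last allowable day for the post-conference.
observation = date(2025, 10, 7)
print(working_days_after(observation, 10))  # 2025-10-21
```

Running the same helper backward from the last day of instruction (15 working days out, per the summative deadline) is how you find the latest safe date for end-of-year observations.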

Record Retention: Texas school districts must retain teacher evaluation records in accordance with the Texas State Library and Archives Commission retention schedules. T-TESS documentation -- including observation notes, conference records, and summative evaluations -- should be retained for a minimum of five years. Digital evaluation platforms that maintain secure, organized archives simplify this compliance requirement significantly.

Why Texas Administrators Are Choosing Upraiser for T-TESS

Upraiser was built by educators who have lived the T-TESS cycle from both sides of the desk. Our founding team includes a 17-year veteran principal who has personally conducted thousands of teacher evaluations. We did not build a generic observation tool and retrofit it for Texas -- we built a platform that understands T-TESS at the dimension level.

Here is what that looks like in practice:

  • Full T-TESS Alignment: Upraiser scores against all 16 T-TESS dimensions across all 4 domains, using the exact rubric language from the TEA-published framework. Every suggested rating is accompanied by timestamped evidence citations from the classroom transcript.
  • 60-70% Time Reduction: Administrators consistently report that AI-assisted evaluations cut the post-observation documentation time from 2+ hours to under 30 minutes. That is not 30 minutes of less-thorough documentation -- it is 30 minutes of better-documented, evidence-rich evaluation.
  • 24 State Frameworks Supported: If your district or consulting group operates across state lines, Upraiser supports T-TESS alongside 23 other state frameworks including TEAM (Tennessee), M-STAR (Mississippi), Danielson FFT, and more. One platform, one workflow, every rubric.
  • Coaching-Integrated Workflow: T-TESS is meant to be a support system, not just a rating system. Upraiser's coaching tools let you conduct lightweight coaching observations that build on evaluation data, track teacher growth over time, and maintain continuity between observation cycles.
  • Consulting Group Support: For educational service centers and consulting organizations supporting multiple Texas districts, Upraiser provides multi-school management, contract tracking, and portfolio-level analytics -- all scoped to T-TESS domains.

The goal is not to automate evaluation. The goal is to give Texas administrators their time back so they can do what T-TESS was actually designed for: having meaningful coaching conversations that improve teaching and learning.

See T-TESS scoring in action

Watch Upraiser analyze a classroom observation and produce T-TESS-aligned scores across all 4 domains -- with evidence citations for every dimension.

Request a T-TESS Demo