Presenters (TEval, http://teval.net/):

  • Gabriela Weaver, Special Assistant to the Provost for Educational Initiatives, University of Massachusetts-Amherst
  • Ann Austin, Associate Dean for Research and Professor of Higher Education, Michigan State University
  • Noah Finkelstein, Professor, University of Colorado Boulder (https://spot.colorado.edu/~finkelsn/)
  • Mark Graham, Research Scientist and Director, Yale University (https://stem-perl.yale.edu)
  • Andrea Greenhoot, Director/Professor, University of Kansas (https://psych.ku.edu/andrea-follmer-greenhoot)
  • Doug Ward, Associate professor, Associate director, University of Kansas
Public Discussion


  • Danielle Watt

    Facilitator
    Director of Education, Outreach, Diversity
    May 13, 2019 | 12:41 p.m.

    Thank you for sharing your great project to transform the methods of evaluating STEM teaching. Although it is early in your study, have you seen an increase in student participation in the evaluation of courses?

    Do you evaluate at the end of the course, to compare with the traditional evaluation, or is the evaluation distributed at different times during the course, say at midterm and at the end, to provide an opportunity for intervention if needed?

  • Doug Ward

    Co-Presenter
    Associate professor, Associate director
    May 13, 2019 | 03:03 p.m.

    I haven't seen any change in student participation in the process at KU. Average response rates on student surveys of teaching range between 50% and 60%. That's one reason we see a need to bring in additional evidence to the evaluation process. I recently led a committee that looked into ways to improve response rates in student surveys. (See 4 ways to increase participation in student surveys of teaching.) One thing we found was that instructors who reach out to students and explain the importance of their feedback generally have higher response rates. Given that, I could see that instructors who use the TEval system might be more engaged in the evaluation process and thus solicit more responses from students.

    The TEval system is intended to create a holistic evaluation for promotion and tenure or for annual review. Some aspects could certainly be used during a semester or for an individual course, but it was designed for evaluating a body of work. It is really up to instructors to conduct their own formative evaluations at midterm or at other points in the class. Those formative evaluations would be good evidence to use in the TEval system to show how an instructor was working to improve a class.      

     
  • Danielle Watt

    Facilitator
    Director of Education, Outreach, Diversity
    May 16, 2019 | 12:31 a.m.

    Thank you for sharing the article. I see you also acknowledge the use of mid-term evaluations to increase student feedback, but they can also affect at least two of the areas on the rubric: reflection and mentoring. I really like the categories the rubric addresses, but I wonder whether a resource list will be provided to facilitate discussion around the 7 areas (or designated areas, depending on the program), particularly when it comes to climate, mentoring, and reflection.

  • Gabriela Weaver

    Lead Presenter
    Special Assistant to the Provost for Educational Initiatives
    May 16, 2019 | 12:38 p.m.

    That is an excellent idea!  Thank you for that, Danielle.

  • Doug Ward

    Co-Presenter
    Associate professor, Associate director
    May 13, 2019 | 01:04 p.m.

    Our team is working to create a more nuanced approach to evaluating teaching. In doing so, we hope to expand the use of evidence-based teaching practices and better understand the way change takes place in different university settings. 

    Most faculty members we have worked with see a need for a more robust approach to evaluating teaching. They have also raised several questions that we would love to hear your thoughts about:

    • How do we add depth and nuance to the evaluation system without adding an enormous amount of time for individuals and committees?
    • What evidence best reflects the work that goes into good teaching?
    • How can a system that provides feedback to help faculty members improve also be used as an evaluation tool for promotion and tenure, and for annual review?
    • How do we help faculty become more comfortable giving nuanced and sometimes critical feedback about teaching when the norm has been that everyone is above average?
  • A Daniel Johnson

    Higher Ed Faculty
    May 13, 2019 | 09:08 p.m.

    Great project, and long overdue! We're in the process of discussing teaching evaluation, and this approach sounds promising. Would you be willing to post a copy of the baseline rubric that you illustrated in the video, to help us ask more specific questions?

     

  • Doug Ward

    Co-Presenter
    Associate professor, Associate director
    May 14, 2019 | 10:51 a.m.

    Sure thing. You will find a link to the KU version of the rubric here (it's under Benchmarks Project, which is what we call it at KU), or download a PDF of the rubric directly here.

  • A Daniel Johnson

    Higher Ed Faculty
    May 16, 2019 | 04:02 p.m.

    Thanks much! This is going to be really helpful.

     

  • Sehoya Cotner

    Higher Ed Faculty
    May 14, 2019 | 10:39 a.m.

    Great video and admirable project! I look forward to learning more about TEval in the near future.

  • Doug Ward

    Co-Presenter
    Associate professor, Associate director
    May 14, 2019 | 10:56 a.m.

    Thank you, Sehoya! We have been sharing information about the project at various conferences, and we have some articles in the works.

  • Ellis Bell

    Researcher
    May 14, 2019 | 03:09 p.m.

    What a great and important project!

  • Gabriela Weaver

    Lead Presenter
    Special Assistant to the Provost for Educational Initiatives
    May 14, 2019 | 04:13 p.m.

    Thank you, Ellis.  We'd love to get feedback about how you see this project being applied in various academic settings and how it could be helpful. 

  • Phillip Eaglin, PhD

    Facilitator
    Founder and CEO
    May 14, 2019 | 06:29 p.m.

    Great to see a strong focus on improving teaching in the university classroom! And having a rubric allows instructors to reflect on where they are and how far they can set their goals. Questions: Are you comparing end-of-course feedback from students across semesters? Is there an opportunity for instructors to receive support from instructional specialists based on the rubric? How often does that occur?

  • Gabriela Weaver

    Lead Presenter
    Special Assistant to the Provost for Educational Initiatives
    May 14, 2019 | 08:19 p.m.

    Thank you for your comments, Phillip.  Because our project is examining how different departments implement a rubric of this type into their teaching evaluation practices, different models are emerging for specific ways of collecting, examining, and using feedback.  Several departments intend to use the rubric to look for change, or growth, in a faculty member's teaching.  In that case, comparisons across semesters will be an integral part of the process.

    Numerous departments see this rubric playing a role in formative feedback for faculty, and opportunities for improvement through various avenues of faculty development (some internal to the department, some more campus-based) will be part of that formative development.  The frequency with which different departments will carry out an evaluation of faculty teaching will, again, vary by department.  Some departments are exploring an every-other-year model for their pre-tenure faculty and/or an every-third-year model for some of their faculty.

    Our project is not yet at the point where any of the universities involved have standardized or required the rubric process.  If and when that takes place, we anticipate that the experiences of these "pilot" departments will be considered when making decisions about the types of parameters that you're asking about.  Happy to provide more details if my answer did not fully address your questions.  Gabriela

     
  • Marcelo Worsley

    Facilitator
    Assistant Professor
    May 14, 2019 | 09:41 p.m.

    This looks like a great project that many more of our institutions should take up. Can you talk about any challenges or lessons learned?

    Additionally, I wondered whether part of the discussion that happens at the annual TEval conference/seminar involves how some elements of the teaching rubric may be (or appear to be) in conflict with one another.

    Finally, to what extent is the TEval rubric meant to be shared with others? Many of the existing end-of-semester systems, at least, are used to gather information that students might use when considering which classes to take. Do you intend for TEval to also be used in this way?

  • Gabriela Weaver

    Lead Presenter
    Special Assistant to the Provost for Educational Initiatives
    May 15, 2019 | 01:05 p.m.

    Thank you for your comments and questions, Marcelo.  We have not had anyone bring up concerns about apparent contradictions in the rubric.  If any exist, we very much want to know about them.  Can you give me some specifics?

    The challenges and lessons to date (20 months into the project) revolve primarily around the workload that this type of holistic assessment entails, compared to a student survey.  The departments and institutions that are participating feel that the shift in workload is balanced by having a more reliable and valid sense of faculty teaching.  However, our piloting process is working to find that "sweet spot" where the effort is matched with the value gained, which means identifying procedures/practices that departments can use to integrate this into faculty workloads.

    The extent to which results of a TEval rubric-based evaluation would be made public is ultimately up to the institution/departments in which it is used.  At this time, the rubric is intended for departments to evaluate faculty members, particularly for formative development, annual reviews, and/or promotion reviews.  We did not *intend* for this information to be made publicly available, in the same way that those types of faculty reviews are generally not made available.  But if an institution determines that it would use it that way, particularly if it is an institution with a culture of making student survey results publicly available, then that would be its prerogative.  We are simply developing and assessing a framework.

    Gabriela

  • Matt Fisher

    Higher Ed Faculty
    May 15, 2019 | 02:32 p.m.

    The video talks about potential impact on how teaching is evaluated at research universities. But as someone who has taught at small liberal arts colleges for almost three decades (including seven years as a department chair and two years on our promotion and tenure committee), I think the potential benefit of this work is much wider than just R-1 schools. I plan to look at the rubric this summer and see what ideas would be appropriate to share with my colleagues.

    I would encourage the TEval leadership team to disseminate this work as widely as possible and in ways that make it clear these ideas should be considered by a wide range of higher education institutions.

     
  • Gabriela Weaver

    Lead Presenter
    Special Assistant to the Provost for Educational Initiatives
    May 15, 2019 | 04:55 p.m.

    Hello Matt.  I completely agree with you.  This approach can be applied in a variety of institution types and across a broad array of disciplines.  We can only report the work through the lens of public R1s because that is the classification of our three pilot sites.  Our conference presentations through AAC&U are geared toward any institution of higher education that would be interested, and there will be additional public convenings specifically about this topic in the coming year.  (We do not have all the details yet.)

    Thank you for your comments.  Gabriela

  • Barbara Rogoff

    Researcher
    May 17, 2019 | 02:39 a.m.

    Hi there -- This is very helpful!  I forwarded the link to our Center for Innovations in Teaching and Learning, at UCSC.  Thanks!

  • Noah Finkelstein

    Co-Presenter
    Professor
    May 20, 2019 | 11:10 a.m.

    Sorry to be slow to the party here -- but great to see (and hear from) you, Barbara! I'd be very pleased to connect with our CITL colleagues next time I'm out!

  • Doug Ward

    Co-Presenter
    Associate professor, Associate director
    May 17, 2019 | 11:39 a.m.

    Thank you, Barbara! If you have any questions, please let us know.

  • Diana Bairakatrova

    Higher Ed Faculty
    May 20, 2019 | 11:33 a.m.

    Thank you for sharing your work! Your project is very important and many institutions can learn from it.

    Thanks!

  • Noah Finkelstein

    Co-Presenter
    Professor
    May 20, 2019 | 11:36 a.m.

    Thank you, Diana!  Our hope is that this will be useful in supporting the wide array of faculty engagement and duties (such as the work at the ACE(D) Lab!).  Don't hesitate to let us know if these are tools we can share with folks at VT.

  • Diana Bairakatrova

    Higher Ed Faculty
    May 20, 2019 | 11:42 a.m.

    Thank you Noah!

  • Further posting is closed as the showcase has ended.