Auburn University Senate Minutes

 

October 9, 2001

 

Broun Hall Auditorium

 

Absent:  W. Bergen, R. Norton, N. Godwin, M. El-Halwagi, J. Gluhman, R. Good, R. Locy, B. Hames, R. Kenworthy, C. Skelton, M. Reinke, J. DeRuiter, J. Bannon, J. LaPrade, CPT Hageman, LTC Buchanan, CPT McMurtrie.

 

Absent (substitute): H. Guffey (J.R. Harris), B. Bowman (L. Crowley), R. Crocker (L. Gerber), C. Rodger (D. Reonard), H. Cummings (C. Bourie).

 

The meeting was called to order at 3:00 p.m.

 

The minutes of the September 4, 2001 meeting were approved as corrected. They may be viewed on the Senate home page at www.auburn.edu/administration/governance/senate/schedule.html.

 

Announcements

 

A.  Announcements from the President’s Office – Dr. William Walker

 

There are four or five items I want to mention to you. First, the Board of Trustees passed the fiscal 02 budget this past Friday. It is not a particularly good budget, but it is the best we could do under the circumstances. It failed to meet many of the needs that we have at Auburn. For your information, the budget total is about $522 million. The main campus portion of that, excluding the Experiment Station and Extension System, is about $388 million. Of that $388 million, the state appropriation is about $133 million.

 

Although we were unable to meet everything that we would list as an urgent need, we were able to get some things done that I was very pleased about. The first of those is the tuition remission. The Board did approve the remission of 50% of tuition for spouses and dependents. The specific policy about what is a spouse and a dependent and how it will be applied is to be determined. Dr. Large will be working that up over the next few weeks. I presume that it will be presented to the Board for their approval. Also, they approved changing the insurance premiums. I want to express thanks for your endorsing this proposal. If you have forgotten, the split prior to this decision by the Board was that the employee pays 40% of the insurance premiums and the University pays 60%. The premiums keep going up annually, and we were getting to the point where some employees just couldn’t afford insurance. So, what the Board approved was a change: if an individual is making less than $20,000, then instead of the split being 40% employee/60% University as it is for the rest of us, the employee will pay 20% and the University will pay 80%. For employees making between $20,000 and $30,000, the employee will pay 30% and the University will pay 70%. So, I think that is a step in the right direction and I, again, appreciate the leadership the Senate provided in getting that through. Also, the Board picked up the complete GTA allocation. You recall from last year we decided to phase that in over a three-year period, and we recommended this year to fund it all. The Board decided to do that, so we now have in place tuition funding for all of our graduate teaching assistants.

 

Clearly, the next issue, with respect to graduate students, is tuition remission for GRAs. Hopefully, we will be able to do something about that at some point in the future. I would like to do something about it this coming year. I would add, as some of you are aware, many of our GRAs are currently receiving tuition remission from one source or another.

 

Because of the “great unknown” with respect to what is happening with state funding, we are going into this current fiscal year with the largest proration reserve in the history of this University. Don Large has salted away about $6 million, which is about 4.5% of the state appropriation, anticipating pretty dire financial circumstances within this fiscal year. We hear reports of severe proration, but at the same time we hear the governor say he will not prorate at all. Obviously that leads to great confusion for those of us who are trying to anticipate what is going to happen realistically. At this juncture, all we know to do is to put as much aside as we possibly can in anticipation. If, as we might dream, we do not need all of that for proration, that would be wonderful, and clearly, later in the year we would be prepared to find a way to spend some of those funds.

 

The last announcement I have is that we are going to be establishing a Teaching and Learning Center, called the Biggio Teaching and Learning Center. It comes about as the result of a gift from Mr. and Mrs. Biggio, who were longtime supporters of this University. The gift is approximately $10 million; it will go into an endowment, and that endowment will pay the expenses of a Teaching and Learning Center that has been designed and recommended by a group of faculty. I believe it will be located in the library, when the SACS self-study office moves out.

 

That ends my announcements. I would be happy to take questions.

 

Connor Bailey, Steering Committee: I would also like to thank David Housel at the Athletic Department for making a very generous contribution to the University. As a one-time contribution, the Athletic Department is contributing half a million dollars for the upcoming fiscal year, and on an ongoing basis it is contributing $200,000 a year for a five-year period, for a total of a million and a half dollars. I have said thank-you to David. I want more. I also want to encourage us to move toward institutionalizing such gifts. As magnanimous as that gift is, I think we need to build it in. My understanding for this current fiscal year is that the Athletic Department is going to be contributing to the general fund of the University roughly $377,000. This is out of a direct-cost budget of roughly $27 million, an unrestricted fund in the budget, not counting the endowments and restricted funds that they have. The University has something called a general administrative component that comes out of indirect costs. Those of us who do contracts and grants know we pay 45% or so for on-campus research; 6.4% of that is for the general administrative component. I would like to suggest that the central administration consider charging the Athletic Department this 6.4% on their own direct costs. I would not include the bonded indebtedness, which is probably in the range of $3 million for Athletics. Setting that aside, they’ve got $24 million; that’s $1.4+ million a year. I think that is a reasonable figure. David Housel has recently expressed his satisfaction, as is appropriate given the recent circumstances of Athletics, with their facilities, staffing, and salaries, while the rest of the University is now in relatively hard times. I think it is appropriate that the Athletic Department has made this contribution and I am glad that David Housel has done so.
I think in realistic terms that more should be expected, and I think we need to build it in as a normal function of our budgeting. There are two southern universities where Athletics contributes 6%; this is based on information that came to Don Large’s office. Others pay much less. But we would not be out of line with our peer institutions in charging this 6.4%, so I would ask you to talk about this with your cabinet and discuss it with David Housel. If you would, for a moment, give us your response.

 

Dr. Walker:  I think it is basically a good idea and have no objection to it at all. The aspect of Athletics that I view as being very valuable is the fact that it pays for itself. There are some questionable charges out there that we are incurring that they really ought to be paying, but setting those aside, I think it is very important that Athletics pay for itself. I am hesitant to ask them to incur continuing costs. In the environment in which they are operating now, that is fine. Should everything take a downturn, for example two or three losing seasons, their returns would go down considerably. I think the CEO of this place should then be willing to stand firm and say “no, we are not going to fund Athletics out of the general fund.” That has not always been the case here. Twenty-plus years ago, Athletics was drawing money out of the general fund. They haven’t the last several years, and I think that ought to be something we strive for. I would not want such a contribution by Athletics to carry an implicit commitment on the part of the University to dip into the general fund on their behalf.

 

Don Large, VP of Finance:  Dr. Walker, why don’t I follow up on what Connor is suggesting when we crank up the Budget Advisory Committee in the upcoming year. I can make it one of the first agenda items to have Pat Davis come and speak to the group, so that we can also advise the Senate on the process.

 

Larry Gerber, History (substitute): Just as a follow-up, having been a member of the Budget Committee and knowing that for many years the issue has been discussed within the Committee, but without a substantial increase in the Athletic Department’s contribution to the University, it is not as if this would be a brand-new issue. I do have some concern. (I am a season ticket holder for both basketball and football seasons; I am not anti-athletics.) My understanding is that in the past when we have had bond issues, the credit of the University as a whole is behind any bond issue for Athletic Department expansion. And we are seeing in the paper that the Board wishes to go ahead with the stadium expansion with the skyboxes. If we have a bond issue and the good faith and credit of the University makes that possible, I think that is worth something to the Athletic Department and should be recognized as well. There are a lot of aspects to this that are obviously not new. I am sure you are aware of the perception problems as far as where our priorities are when the Athletic Department can proudly claim that it competes with our peers, in that we have a stadium and facilities that are second to none, but the rest of the University is struggling.

 

Dr. Walker:  I think you are right. I think using the good name of the University does have value. I think one could make a claim for making a charge for that. In fact, we do that with the logos and so on.

 

Bruce Gladden, Immediate Past-Chair:  I agree with much of what Connor and Larry are saying here, but I think the first thing to do would be to make sure the Athletic Department is paying 100%. We need to find out what that is and make sure that is what they are paying, because there are a lot of hidden costs. I do appreciate the fact that only 15% of the programs in the country pay their way, and Auburn is one of those. But I think we need to find out exactly what all the costs are and go from there.

 

Dr. Walker:  I agree with that. One thing I have learned over the past few months is that it is easy to say “find out what all the costs are.” Then when you think you have it figured out, someone comes in with something you didn’t even think about. I think you are right and we need to find out as best we can what all the costs are.

 

Michael Watkins, Philosophy:  I am curious what counts as money that Athletics brings in. For example, these “prime parking spaces” that many people get close to the stadium. Obviously, they pay to park there so they can go to the game. But those parking spaces themselves do not belong to Athletics. During the week, I get to park there.  Does that money go to Athletics?

 

Dr. Walker: I believe it goes to Athletics. It is part of the scholarship and suite ticket package that those buyers get. It has a value associated with it, and the accountants can tell you exactly what that value is, because that has to be deducted if they are going to charge off part of that ticket price as a charitable contribution.

 

Dr. Watkins:  As a quick follow-up, I don’t see that we are making the case that that is money Athletics has made. That is money that is credited to them. This is just one of the issues with the bookkeeping: it is not kept in a way that makes clear whether Athletics is paying its own way or not.

 

Dr. Walker: There clearly is an absence of a priori principles involved when you’re dealing with Athletics. Don’t quote that.

 

Herb Rotfeld, Steering Committee:  When I was with a group that met with Dr. Weary, he noted that the term “interim” for an administrator means a short-term appointment whose holder is not a candidate for the final position. One member of our group at that time said that here at Auburn it seemed to mean heir apparent for the job. Leaving aside your label as interim, you have been told by a member of the Board that you will have interim in your title for a long time. In fact, even if we started the search tomorrow, you would be interim President longer than we’ve had some Presidents. That aside, I also note that we have an interim Provost, interim Dean of Business, interim Dean of the College of Liberal Arts, interim Dean of Education, interim Dean of the Graduate School, and acting Associate Dean of the Graduate School, so I would think that you are going to be interim long enough that we can begin searches for these positions. Or are we going to wait until we are Auburn interim University?

 

Dr. Walker:  I intend to start searches for several of those positions, assuming that the faculty and staff involved are in agreement with that process. With respect to my position, don’t ever forget that I also have some say in the length of my tenure in this position.

 

Connor Bailey, Steering Committee:  It has come to my attention that we do some public relations work on this campus, and that some of this was contracted out. Can you confirm that we have a contract with Rick Hartselle for $6,000 a month for University public relations? I understand he has a similar contract with Athletics and a contract with the Office of Research. Focusing on the first of those three contracts, which together total $16,000 a month, can you tell us what this contract brings us and what it does for the University? I am assuming these are public dollars.

 

Dr. Walker:  He is doing work for Athletics, as I understand, work for the Board, and work for Mike Moriarty.

 

Dr. Bailey: I am referring to the Board work.

 

Dr. Walker:  You would have to talk to Grant Davis about that. He is working on some of the Board’s publications, and I don’t know what else. With respect to Moriarty, how much did we bring in from Washington last year of special appropriations?

 

Dr. Michael Moriarty, VP for Research: About $29 million, but none of that had Rick Hartselle’s fingers on it.  It was due to a relationship with someone else.

 

Dr. Bailey: These are public dollars being used in these different contracts.

 

Dr. Large: As far as I know they are. I don’t know about Athletics, but with the Board it is.

 

Dr. Walker: Athletics doesn’t have any public dollars.

 

Dr. Large:  Well, their auxiliary funds are technically public, but their Tigers Unlimited funds are not. They are private.

 

B. Announcements from the Senate Chair – Dr. Jim Bradley

 

I agree with Dr. Walker that we can feel good about the priorities that this group set about a year ago and passed on to the administration, which then passed them on to the Board of Trustees. It was a really good thing that Bruce asked us to do, to prioritize our wishes for the University budget, and I think we ought to do that again this year. One other item relative to the Board meeting, germane to something Bruce brought up a minute ago, is the cost of the Athletic Department. Christine Curtis made a report about the problems of trashed buildings and cleanup after football games, and she mentioned some of the solutions in terms of policies like locking buildings. She didn’t have time to get to everything because we were rushed to finish the meeting by 10:30. In her written report, she had an itemized list of all the cleanup and re-landscaping necessary; she estimated $200,000 per football season as a cost to the University, which precisely matches the donation we just received.

 

We have now received nominees from the University faculty for the faculty positions on the committees of the Board of Trustees. Tomorrow at 1:00 the Rules Committee will be meeting to go over those nominees and any additional nominees the committee members have. We will be selecting three nominees for each of the four faculty spots on Board committees.

 

Something else related to the Board is that I continue to have infrequent, but somewhat regular discussions with Trustee Miller about the Academic Affairs Committee. Our discussions have so far been limited to the revision of the charge of that committee. That committee has been merged with the Priorities and Planning Committee of the Board, and he has invited me to converse with him about the revision of the charge.

 

The Administrator Evaluation Committee has been meeting off and on for well over a year, and the forms have finally been completed and approved by the Interim Provost. There are plans to do that survey of Department Heads and Deans sometime early this spring. Bruce Gladden has been chairing that committee.

 

A couple of other quick items: there has also been an ad hoc Senate committee on sabbatical policy, chaired by me, which has existed for about a year. The committee will be reporting its recommendations this winter. The recommendations will be of two varieties: revisions of eligibility for sabbaticals to bring us in line with the semester system, and recommendations aimed at equalizing opportunities for sabbaticals across campus. At the present time, as most of you know, those opportunities are not equal.

 

Finally, the Senate has now been assigned office space in Samford Hall. We have our own conference room and an anteroom greeting area with room for two desks. Thanks to Dr. Hendrix’s efforts on the Concessions Committee, we have funds to buy furniture as well. So, as soon as we have the place scanned for listening devices, we’ll be moving into that office. The conference room is 102 Samford. The office is half the area where Dr. Muse was located for the past several months (room 100).

 

Action Items

 

A. Motions from the ad hoc Committee on Outreach Scholarship Assessment – Dr. Wayne Flynt

 

Beginning of Outreach motions-

 

Motions to accept recommendations from the Ad Hoc Committee on Outreach Scholarship Assessment

Motion 1

Adopt the following definition of "outreach":


a. Outreach


Outreach refers to the function of applying academic expertise to the direct benefit of external audiences in support of university and unit missions. A faculty endeavor may be regarded as outreach scholarship for purposes of tenure and promotion if: (1) there is a substantive link with significant human needs and societal problems, issues, or concerns; (2) there is a direct application of knowledge to significant human needs and societal problems, issues, or concerns; (3) there is a utilization of the faculty member's academic and professional expertise; (4) the ultimate purpose is for the public or common good; (5) new knowledge is generated for the discipline and/or the audience or clientele; and (6) there is a clear link between the program / activities and an appropriate academic unit's mission.



Motion 2

Refer the Report of the Ad Hoc Committee on Outreach Scholarship Assessment (available at www.auburn.edu/outreach/facultyhandbook) to the Handbook Committee for review and implementation consistent with motion 1.


Explanation


These motions define outreach and forward the report of the Ad Hoc Committee to the Handbook Committee. The purpose is to establish a means of assessing outreach scholarship so that it can be counted in the tenure and promotion process. The report does not change the weight assigned to outreach but provides a method for documenting and assessing quality in outreach. The Handbook Committee will have opportunity to review the recommendations, consult with the Tenure and Promotion Committee and others as appropriate, and bring their recommendations back to the University Senate for discussion and action.


The definition in motion 1 is a slightly modified version of the definition in Faculty Participation in Outreach Scholarship (the Flynt committee report), which was accepted by the University Senate in 1997. It subsumes the activities of Cooperative Extension and it also includes other faculty activities that apply academic expertise to the direct benefit of external audiences in support of university and unit missions. It excludes any forms of community service that do not apply academic expertise and/or do not support university or unit missions. Thus, a physics professor's participation in the Youth Experience in Science program would be considered outreach but the same professor's service on jury duty would not be.


It is not assumed that all outreach activities reflect the quality and impact necessary to be counted for tenure and promotion. Thus, the second sentence in Motion 1 adds restrictions for the purpose of identifying and setting standards for outreach scholarship. An activity would have to meet all of the six conditions, including the generation of new knowledge for the discipline and/or the audience or clientele. Exactly how significant a problem or outreach impact would have to be is a matter to be worked out over time by tenure and promotion committees, just as the requirements for research publications have varied over time. The report of the Ad Hoc Committee offers a format in which candidates can document and reviewers can assess the significance of outreach contributions.



End of Outreach motions-

 

Dr. Bradley: The action to be taken on this subject is in the form of two motions. The first simply addresses the definition of outreach scholarship. The second motion says that the committee’s report will be handed to the Faculty Handbook Committee for implementation. Whatever that Committee decides should go in the Handbook will come back to the Senate for a two-thirds vote of approval. Given that at the last Senate meeting we had a nice presentation of this by Wayne Flynt, brought it to a vote, and then discovered we didn’t have a quorum, I suggest that I simply ask whether you are ready to go ahead and vote. Dr. Flynt was going to be here to answer any questions, but he was called out of town. He did provide this statement, which is relatively short:

 

October 5, 2001

    I strongly urge the University Senate to pass the motion
recommended by the Ad Hoc Committee on Outreach Scholarship Assessment.
Literally dozens of individuals have worked for years to ensure that high
quality outreach, based on solid academic research/scholarship, presented
effectively to non-traditional audiences, tied closely to one's field,
which both enriches scholarship and improves quality of life in Alabama,
be fully recognized within the university's reward system. I believe this
component of scholarship can be fairly and rigorously assessed, and I also
believe it will strengthen Auburn's ties to the people of this state.

 

 

Larry Gerber, History (substitute):  Do we have the motion on the floor?

 

Dr. Bradley: No. I am about to ask whether you would like to have further discussion, at which point we will have some people field questions other than myself.

 

Dr. Gerber:  I have one question of clarification. I support my colleagues’ proposal, but I do have one suggestion, and I would like someone who was involved in the drafting to respond as to whether I need to make a motion or not. So, I have a question first and a possible motion.

 

Dr. Bradley: Let’s go ahead and enter any discussion that we need to prior to taking a vote on this. We have David Wilson and Robert Montjoy, and also Barb Struempler who can answer any questions.

 

Dr. Gerber:  My question concerns the wording of the definition itself. With the six conditions that are listed, it doesn’t explicitly say that all six conditions must be met. That statement is made in the explanation, but not in the definition itself.

 

Barb Struempler, Chair-Elect:  A couple of Senate meetings ago, right before the six conditions there used to be an “and/or.” The “or” was removed. The “and” implies that all six must be met in order to have merit. It was “and/or” at the July meeting; after that meeting it was edited to remove the “or.” So, in order to qualify you must meet all six conditions. I think it sets some very high standards for outreach types of activities. I’m glad I am a professor now and not going forward, because it clearly sets some very high standards. I think the underlying philosophy is to try to encompass more than just Extension types of activities, which you are probably quite familiar with. This hopefully will bring in some other types of outreach and collect them so that we can highlight what types of outreach we do at Auburn University. But you have to have all six.

 

Dr. Gerber:  I understand that from the explanation. If before the first number we wrote “if all the following conditions are met,” would that make it clearer? I only suggest that if you consider it a friendly amendment.

 

Dr. Struempler:  I would not have a problem with that.

 

Dr.  Gerber: I so move. The wording would be “if all the following conditions are met.”

 

Dr. Bradley:  I would consider this a friendly amendment, so I will just write it in. We do have motion #1 on the floor right now; it stands as friendly amended.

 

The question was called.

 

Kem Krueger, Pharmacology:  Item #5 says “new knowledge is generated for the discipline.” How does that differ from research?

 

David Wilson, VP Outreach: [answer to question inaudible]

 

The motion passed with a unanimous voice vote.

 

Dr. Bradley:  Now we have motion 2. Motion 2 simply states that we will pass this report on to the Faculty Handbook Committee along with motion #1, and ask them to devise an implementation, which they will bring back to the Senate.

 

[Name inaudible]:  I am a visitor, but in the parentheses you have “to the to the.”

 

Dr. Bradley: That is a friendly correction. Any discussion on this motion?

 

The motion passed with a unanimous voice vote.

 

Information Items

 

A. Report from the Teaching Effectiveness Committee – Dr. Jeff Fergus

 

I am here to report on the recommendations of the Teaching Effectiveness Committee on modifying the teaching evaluation form. First, you will notice that the written document has been available since June, because we were originally going to present at the June meeting. This process began about two years ago, so actually, the previous year’s Teaching Effectiveness Committee was involved.

 

Beginning of report-

 

Report on Recommended Changes to the Teaching Effectiveness Survey

 

The Teaching Effectiveness Committee recommends that the 8 questions currently on the Teaching Effectiveness Survey be replaced with the 9 questions shown below and that the survey be renamed the “Teaching Evaluation Survey.”

 

 

The committee recommends that the results of the survey be used, along with other evaluation methods, to identify potential teaching problems, which may require further investigation.  The results of the survey should NOT be used to rank faculty, nor as the sole means of evaluating teaching effectiveness.

 

Current Questions

 

1.   The instructor explained the course material clearly.

2.   The instructor was actively helpful when students had problems.

3.   The instructor was well prepared for each class.

4.   The instructor spoke audibly and clearly.

5.   The instructor stimulated my thinking.

6.   The instructor made the course objectives and my responsibilities clear to me at the beginning of the course.

7.   The instructor motivated me to do my best work.

8.   The instructor organized the class well throughout the quarter.

 

Proposed Questions

 

Evaluation of Instructor

1.   The instructor clearly stated the course objectives and student responsibilities.

2.   The instructor was well organized.

3.   The instructor was prepared for class.

4.   The instructor clearly explained the course material.

5.   The instructor communicated effectively.

6.   The instructor was helpful.

7.   Overall, the instructor was effective.

 

Evaluation of Course

8.   The course content was interesting.

9.   The course content was valuable / beneficial.

 

 

SACS Requirements for Teaching Evaluation

 

The SACS Criteria for Accreditation contains statements requiring the evaluation of both undergraduate (“Instruction must be evaluated regularly and the results used to ensure quality instruction.” – from Criterion 4.2.4) and graduate (“There must be frequent, systematic evaluation of graduate instruction and, if appropriate, revision of the instructional process based on the results of this evaluation.” – from Criterion 4.3.5) instruction.  In addition, the Report of the SACS Reaffirmation Committee (April 6-9, 1993) contained the following two statements specifically addressing the teaching effectiveness survey:

 

Recommendation 12:  The committee recommends that the teaching evaluation form be reviewed and that the institution establish a procedure that insures the regular evaluation of instruction and the development of plans to improve instruction in areas found to be deficient.

 

Suggestion 9:  The committee suggests that the review of the teaching evaluation form be accomplished by a committee comprised of students and faculty representatives.

 

The institution must respond to recommendations from SACS, so the Teaching Effectiveness Committee has reviewed the Teaching Effectiveness Survey.

 

Process for Evaluation of Current Survey

 

The Teaching Effectiveness Committee began revising the Teaching Effectiveness Survey in Fall 1999.   Revision of the form was required to reflect changes (e.g. course numbering, college names) associated with the transition to the semester system.  The committee considered revising the questions at that time, but then decided that more time was needed to make proper changes.

 

In Spring 2000, the Teaching Effectiveness Survey was sent to all teaching faculty, who were asked to rate each question as to whether it should or should not be included and to provide comments on the survey.  The results of this survey indicated the faculty objected most strongly to the questions regarding stimulation (#5) and motivation (#7).  The two most recurring suggestions for additional questions were the need for a question on overall effectiveness and questions on the course content (as opposed to the teaching).

 

In Summer 2000, based on the results of the faculty survey, a set of revised questions was developed and pilot tested.  The pilot test consisted of having 647 students in 29 sections complete two surveys (the current survey as well as a pilot survey).  In the pilot test, for a given instructor the overall scores from the two surveys were essentially the same.  One interesting result of the survey was that the scores on questions related to the course content were consistently lower than those for the instructor.  Another interesting comparison was that the scores for the instructor being organized and those for being prepared were significantly different.  A recurring comment from the faculty survey was that these two questions were redundant, but the results of the pilot test indicate that this may not be the case.  The only rewording that resulted in a significant difference in scores was broadening the question on speaking clearly (#4) to include effective communication.  The scores on the revised question were lower, presumably because both verbal and non-verbal forms of communication were being considered in response to the revised question.

 

In Fall 2000, based on the results of the pilot test, the survey was again revised.  Input from students and faculty on whether they preferred the current survey or the revised survey was obtained.  This input was obtained at an AAUP Forum on Teaching Effectiveness, from student organizations and from several classes.  Most faculty and students preferred the revised questions to the current questions.  Although the revised questions were generally preferred, there were some recurring comments from a minority of the respondents, which will be discussed below.

 

Summary of Revisions

 

The revised questions are generally more succinct than the current questions.  Other than rewording, the significant changes are summarized below:

 

Addition of questions on course content: The questions on the course content were added to allow students to express their dissatisfaction with the course without necessarily giving low scores for instruction.  In addition, information on the students’ attitude towards the course is useful in interpreting the survey results. 

 

Addition of question on overall effectiveness: The addition of a question on the overall effectiveness was the most common suggestion from the faculty survey.

 

Elimination of motivation / stimulation questions:  The questions on motivation and stimulation received the lowest scores and raised the most objections in the faculty survey.

 

Student / Faculty Input on Revised Questions

 

Students and faculty generally preferred the revised questions, but a minority of faculty and students had concerns about the revised survey.  The recurring concerns are given below, along with some explanation of the committee’s response.

 

The motivation / stimulation questions are important and should be included: Motivation is more important in distinguishing great teachers from adequate teachers than in distinguishing adequate teachers from poor teachers.  The committee believes the survey can only effectively distinguish poor teaching from adequate teaching.  Even with the motivation question, other criteria would be needed to identify exceptional teaching (e.g., for awards), so the questions are not necessary.

 

The survey has mixed purposes (evaluation / improving effectiveness) and is not effective for improving teaching:  The survey should be used only to identify problems in teaching – NOT to rank instructors and not as a sole source for evaluation.

 

The results are misused (e.g., ranking; type of course / attitude of students not considered) or not used (i.e., lack of impact):  This is an issue in the interpretation and use of the results and cannot be addressed by modifying the questions.

 

Issues for Further Evaluation

 

The Teaching Effectiveness Committee recognizes that there are still improvements that can and should be made on the process for evaluating teaching effectiveness.  Revision of the survey is just the first step.

 

There is considerable interest in mid-term evaluations, because they provide rapid response and the results can benefit the students in the class providing the input.  However, students may fear retribution for negative responses in a mid-term survey, so more candid (and thus more valid) responses are expected at the end of the term.  Also, many changes are more effectively made when the instructor next teaches the class rather than in the middle of the term.  In addition, faculty desiring mid-term input can implement individual custom surveys.  While the committee believes that the current survey is most effective if administered at the end of the term, it recognizes the value of mid-term evaluations.  Although individual instructors can develop and administer their own custom surveys, the development of a set of evaluation or assessment instruments, from which instructors could choose, would be helpful.

 

The committee considered the possibility of administering the mid-term, or possibly the end-of-term, survey via the Internet.  On-line surveys would reduce the time and cost associated with the survey.  However, there are security and privacy issues that must be addressed.  In addition, the response rate of on-line surveys may be lower than that of surveys administered in the classroom.  Instructors can currently use WebCT to develop and administer on-line surveys to their classes.

 

As mentioned above, the committee believes that the teaching evaluation survey should be used along with other assessment techniques.  Although this is already stated in the Faculty Handbook, it appears that some programs rely primarily on the teaching effectiveness survey for evaluating teaching effectiveness.  The development of a set of alternative assessment instruments or techniques may facilitate and encourage the use of a variety of assessment methods.

 

The survey or other assessment methods can be used to evaluate teaching effectiveness, but the important next step is to use the results to improve the quality of instruction.  Although this will be done primarily at the departmental level, the development of programs to help faculty develop and improve teaching skills would be helpful.

 

The university will be establishing a Center for Teaching and Learning.  Although the center is still being organized, one objective of the center is to improve teaching, which should include addressing the remaining issues listed above.  The Teaching Effectiveness Committee plans to work in coordination with the Center for Teaching and Learning to establish processes to support continual improvement of the quality of instruction at Auburn University.

 

- End of report -

 

Dr. Fergus: The reason we did this is related to the SACS criteria. There are two criteria requiring that we do some sort of evaluation. Not only that, but in the last SACS review there was a recommendation (recommendations are items we must respond to) to review this survey. That is the primary reason why we did this.

 

This whole process started two years ago, in the Fall of 1999, when we looked at the survey. We had to make changes to the format because of the change from quarters to semesters. After we did that, we started to look at the questions and decided that we didn’t have time to study them adequately at that point. So, we decided to take more time to review the content of the questions.

 

The first thing we did, as you may remember, in the Spring of 2000, was to send out a survey to all faculty. Basically, we sent out the current survey and asked whether each of the current questions should be there or not. The main result was that the two questions involving motivation and stimulated thinking were least favored, by far. Comments suggested a need for a question about overall effectiveness, and there were a lot of complaints about the misuse of the information from the form. Based on that, we looked at the questions and revised them in the Summer of 2000. We developed a pilot survey with 12 questions, some reworded questions and some new questions. We sent that out to 600-plus students in about 29 sections. They filled out both the pilot survey and the current survey, and we compared the results.

 

We found that a number of people thought that two questions, one asking whether the instructor was prepared and the other whether he or she was organized, were redundant. However, there was a significant difference in scores between the two. Moreover, in general, instruction was rated higher than the courses, and that difference was fairly consistent.

 

So, based on that, we developed revised questions again and looked for some more information. At an AAUP forum on teaching effectiveness, we got some input on what we had proposed at that time. We also sent the survey out to students in various classes and to other faculty, and we found that there were still a lot of people who thought that the motivation and stimulation questions should be in it. So we looked at that again. There were a lot of questions about having the survey at midterm rather than at the end of the term. Again, there were a lot of complaints over use and misuse.

 

Again, we revised it. First, I want you to know that we don’t think we’ve solved everything, even though we think we have an improved survey. There are still issues that we need to look at. We think there is value in midterm surveys, but we think this particular instrument is best implemented at the end of the term. I think you get better feedback when students aren’t concerned over retribution, and for the purpose of evaluating the course, it is a better time.

 

The possibility of online surveys is something we should look at. It is certainly more convenient for analyzing and collecting data. On the other hand, the response rate may be lower.

 

We certainly don’t think that the survey should be the only means of assessment, so those other means have to be developed. Another thing is that even if this survey evaluates teaching effectively, we need a system in place to do something about it if there are problems. The Center for Teaching and Learning seems to be a place where some of these things can be done. The plan is for this Committee to work with that Center to go beyond just modifying the surveys.

 

I will summarize the changes we have made and show you the revised questions. First of all, one of the things we did was to expand the question “The instructor spoke audibly and clearly.” We wanted to expand that beyond just speaking to include written forms of communication as well. That is one of the changes.

 

We made several of the questions significantly shorter and took out some of the qualifiers. For example, this first item is referring to the question, “The instructor was actively helpful when students had problems.” We got rid of the “actively” and “students had problems.”

 

As I mentioned, we added questions about the course content. Again, that is to distinguish between the student being dissatisfied with the course or with the teaching. We think that would be useful in interpreting the results to see if part of the reason students are dissatisfied is because they don’t want to be taking the course.

 

We did add a question on overall effectiveness. We eliminated the questions on motivation and stimulation. We talked about this quite a lot, because there were a lot of comments that these really were important: a good teacher should be able to motivate students and stimulate thinking. But the reason we left those off is that we think the use of this survey is really to distinguish someone who is teaching adequately from someone who is doing a lousy job. We don’t think that this survey can really distinguish the great teachers from the adequate teachers, and that’s where those questions are most useful, for finding the inspiring teachers. We decided that this survey should not be used for that purpose, so those questions are not on it.

 

Our recommendation is that the eight questions we currently use be replaced with the nine questions I just showed you and that we call the survey the Teaching Evaluation Survey. One of the problems that people have with this survey is that it mixes up evaluation with improving effectiveness. We think again that the purpose of this survey is to identify if there are problems. If a problem is identified, there needs to be more evaluation of other types to identify if there really is a problem.

 

So, the committee recommends that the results of this survey be used along with other evaluation methods to identify potential problems that may require further investigation. The results of this survey should not be used to rank faculty or used as a sole means of evaluating teaching.

 

Are there any questions?

 

Dr. Bradley: The value of some discussion now, since we are not voting on anything today, is that Jeff would get some feedback, so he could take some comments back to the committee and incorporate anything necessary before they bring it for a vote before the Senate some time later.

 

Dr. Fergus:  I am not actually on the committee anymore, so I could agree to do anything, I guess. Peter Hastie, who is the new chair, was unable to be here. I think he said he would make sure someone else on the committee was here.

 

Herb Rotfeld, Steering Committee:  When I was on the Rules Committee several years back, there was a tendency to appoint people to this committee who scored high on the teacher evaluation form. That is part of why I am having the problem here. You start by saying SACS wants these reports to be ways of improving teaching. Then you talk about assessing to find out adequate versus bad teaching.

 

I’ll ask the management questions. What research drove you in compiling this report? None is cited; none is listed. There are hundreds of studies done in everything from American Psychologist to marketing journals and academic journals on how to construct these things and why these forms are a waste of time for students and faculty. I don’t find anything in here on research driving this report in how it is put together, how it should be used, or how it could help us improve the quality of teaching at Auburn University.

 

Secondly, I would like to see how the data should be analyzed. What usually happens when you present something like this with ordinal data is that people report ratio scores, with averages as the bottom line. A member of the P&T Committee one time said that 4.2 was average teaching. That was her bottom line. I don’t know what the Committee says now. I do know that I hear some administrators talk this way all the time. How should this data be analyzed? Nothing is said in the report.

 

In fact, in terms of putting the questions together, I am bothered because you said that you wanted a certain question in there so that students could express themselves. This is not supposed to be an exercise in venting. It is a substantive effort to improve the quality of teaching at Auburn. I saw nothing in this report that says how this material should be used in teaching. In fact, that’s the bottom question here: how should the data from this be used to make decisions? Exactly what sort of data could be used in which ways? When a management person gives a report saying here is the data we are going to collect, he or she is also saying if the data comes out this way, we do this, and so on. There is nothing in this report that indicates exactly how this data will be used.

 

Basically, in terms of compiling, we have your statement saying you want other data collected, with no talk of what that data will be. You talk of midterm evaluations in your intuitive statements, but you have no research in here saying how they will be done. You only have an intuitive statement saying that the end of the semester is better than midterm. Why? I don’t know.

 

I have a study with an evaluation form that made a comparison between students who took a class and students who watched a 10-minute video of that same teacher with the sound off and then filled out a teacher evaluation form. The scores were the same. I have heard that some schools have up to 29 items on their forms. Personally, I dislike the evaluation form you’ve come up with. This is the one form of quantitative data that comes out, and administrators will use it to rank-order faculty. It is going to replace everything else. I think if you are going to spend all of these years revising the evaluation form, you need to say: this is the form, and the answers to this form are going to improve the quality of teaching at Auburn. That is not in your report.

 

Dr. Fergus:  We actually talked about that. We decided that has to be done at the departmental level. This is just a form that will indicate if there might be a problem. If an instructor gets a score that is low for that type of class, and there are different standards for, let’s say, Math 160 and a graduate level course, there might be a problem and something needs to be done. I agree that this isn’t enough. This is the first step.

 

Isabelle Thompson, Senate Secretary:  I am the former chair of this Committee, and Jeff inherited this whole problem from me. The one thing I told the committee when we started looking at this form was, let’s please not do that; people are going to hate us if we mess with this form. But the Committee thought that the form was bad enough that we should reconsider the questions. The first thing we did was to go to the web and look at what other universities were doing. We did indeed do our review of the research, Herb, quite a bit. We found, of course, that people have different opinions about these forms and that their usefulness varies. We also knew from the beginning that there has to be a form; SACS requires there to be a form. It seems to me that the work Jeff and the committee have done since I was fortunate enough to rotate off has been to take what we already knew, be very realistic about what we had to do, and contextualize it for the Auburn community. I know how much work we did before I left, and they’ve done quite a bit more. I also have great faith in the statistical analysis that was done. I would just like to commend Jeff for doing an extraordinary job on this.

 

William Gale, Mechanical Engineering: [speaks in opposition to large numbers of questions on the teaching evaluation form]

 

Dr. Fergus:  To follow that up, from personal experience, the most useful thing for me is the comments anyway. The numbers give me an overall feel. If it were a longer survey, there would be fewer comments, and the comments are what I find most useful.

 

Cindy Brunner, Pathobiology:  Herb, I am here to tell you that you don’t have to get teaching effectiveness survey points to be on this committee, because I am on it. I don’t think Herb told us that this form needed 29 or 30 questions. I think what he is suggesting is that there is a great degree of variability in how this task is approached. I think he was taking the Committee to task for presenting only this as evidence of its activities over the past several years. One of the reasons I volunteered for this Committee is that I am eagerly anticipating the new Center. This is something that has been discussed for years and has never come to pass, and I think this is a good thing. I am certain that the Teaching Effectiveness Committee is going to provide more than just questions for assessing effectiveness. But I think that tinkering with the questions was needed.

 

Mike Solomon, Consumer Affairs:  I agree generally with Herb in the sense that, in my experience, these forms can be manipulated and used in favor of a person or not. In my experience as a Department Head, I know that sometimes these ratings can be inflated by the instructor, and it is very simple to do that, because basically happy campers give good marks for instruction. How does that happen? We give them A’s. Grade inflation is part of the issue here. I am just wondering if the Committee considered asking the students what they anticipate their grade to be. I know that some other institutions use that to moderate the final score, in the sense that if a student is expecting to get a C or a D in a course, he or she doesn’t tend to be very generous in the ratings. Perhaps that should be taken into account so that we don’t just reward professors who give A’s and B’s.

 

Dr. Fergus: We did talk about that, and part of the form has that on it; it is on the front of the form. You have to take it with a grain of salt, because what the student says he expects is not always accurate. But it is there. Another thing that could be done is to use more of the information that is already on the form.

 

Christa Slaton, Political Science:  One thing I might suggest the Committee do as a next step, when you write your recommendations, is to articulate ways in which the forms have been misused in the past, so that you state things much more clearly and give more guidance, because there is a lot of variation between Departments. If the Committee works to collect examples of how the form is misused and states that in its recommendations, I think that would be helpful.

 

Herb Rotfeld, Steering Committee:  I would like to follow up. My basic issue here is how should this data be analyzed? You say you have the request about the expected grade on the front, but there is no statement about how that data should be used. The data analysis reported on this has traditionally been a collection of frequency counts and averages, which is an improper use of the information. So, I really think you need to say how the data from this form should be analyzed and exactly how the data should be used to help teaching, and you should consider a set of qualitative items. I know of a Department Chair who insisted on doing all the department evaluations himself; he came to the students and said that pay raises would be influenced by what they said. This has a nice impact on evaluation scores.

 

Dr. Fergus: We did talk about that, but it implies not only how the data are going to be used but also how they will be responded to. All of that we thought was beyond what we could do; we thought it could be done in the departments and with the Teaching and Learning Center. That was the reason we stopped and decided that we couldn’t solve the whole problem. We didn’t talk a lot about qualitative questions. We thought this should be a general instrument, and part of the problem is that every particular program has a different type of question it would like to ask. Finding a common set of questions beyond general comments is difficult.

 

Randy Pipes, Counseling and Counseling Psychology:  Just a couple of things I would like the Committee to consider. Item 1 is a compound sentence. Normally, in constructing items of measurement, you don’t use compound statements. I would ask the Committee to edit that in some way or make it two items. Secondly, looking at the seven items that would be used to evaluate the instructor, it seems to me that five of them center around organizational and communication issues. They may not distinguish average teaching from quality teaching, but I am still concerned that five out of the seven items seem to focus on speaking clearly and being organized. I would like to see a reconsideration of those items to see if you could beef them up a bit.

 

Dr. Fergus: I explained the reason we decided not to include the motivation question, but I can bring that up to the Committee again. In terms of what the instructor knows, that came up a lot. The general thought was that students really aren’t in a position to judge what the professor knows. They can judge whether it sounds like the professor knows what he or she is talking about, but it is difficult for them to say whether the professor knows his stuff or not. An example is when a student says a teacher didn’t know algebra or calculus. Obviously, the instructor did, but the student had problems. That was the problem with that.

 

Dr. Pipes: My point there is not about a specific item. I am just raising the question of how much of the instrument is devoted to something that really doesn’t capture the substantive aspect of teaching.

 

Ralph Paxton, Pharmacology:  I am also struck by the lack of the word “learn.” Is this to be a measure of what students have learned? There is not one question in there that gives them the opportunity to say “I learned a lot” or “I learned very little.”

 

Dr. Fergus:  We talked about making it a teaching and learning survey at one point, but the thinking was if we did that it was like opening a can of worms. Once you decide to try to find out what the student is learning, there are a lot more questions you have to ask and a lot more you have to do. We thought that was beyond what we are trying to do here.

 

Dr. Paxton: I guess I am confused. What is the point of teaching, then?

 

Dr. Fergus:  Well, this survey is an evaluation of how well the instructor taught. That is only one measure.

 

Dr. Paxton: But isn’t learning a big issue in that? Otherwise, I could be the best teacher in the world and not teach them anything.

 

Dr. Fergus:  It is. I see your point, but I guess we thought that was going to need a different instrument to address that properly.

 

Dr. Paxton:  One last thought. It would seem more appropriate to call something like this a student satisfaction survey, because it has nothing to do with learning. We are not asking whether they learned anything or not.

 

Cindy Brunner, Pathobiology:  Ralph, there are institutions that call these student satisfaction forms. It sounds really ridiculous, but sometimes students don’t seem to be effective judges of teaching. We give exams that, to me, clearly show whether the students are learning what we expect them to. Students will swear up and down that they didn’t learn a darn thing in a course, when the fact is they scored very well on an exam and were basically clueless on day one. I know it sounds unreasonable, but I think we are already testing that.

 

Ralph Henderson, Clinical Sciences: Herb, let’s say the questions were valid. I am curious as to what analytical tools of management you would suggest. Sincerely, there is a role for management people in the development of this, because it is very difficult to assess someone’s capabilities with relatively short opportunities for evaluation.

 

Dr. Rotfeld:  I will give a general answer. I sent the report off to a couple of friends, former department heads who teach organizational management, and I would relate their comments, but they are not compliments. They said this was a student satisfaction survey.

 

Dr. Bradley:  Just a little Robert’s Rules interjection: we need to avoid getting into conversations with each other, so please address your comments to Jeff.

 

Dr. Bradley:  All of these comments will appear in the minutes. You can encourage Peter Hastie to read the minutes of this meeting.

 

B. Report from the SACS Self-Study – Dr. Gene Clothiaux

 

Let me be quick and brief. Seven committees have been activated and they are at work. They have been broken up into subcommittees to handle different parts of the criteria for accreditation. This will go on until May, at which time we hope everything will be finished, if not before.

 

We have conducted two surveys, one for the staff and the other for the administrative and professional group. The response from the staff was about 34%, and the response from the administrative and professional group was about 43%. At first we thought it would be higher, but it didn’t turn out that way.

 

If you wish to see the questions that were asked of each group, you can view them on the web. We have a site for SACS on the web. It is not accessible anywhere except at the address www.auburn.edu/academic/provost/sacs. We have asked to have that put on the homepage, but haven’t succeeded in getting it there yet. We would also like to get it on the Provost’s page so you could access it from the front. As it is, you have to type the address in if you want to see it.

 

We have the Self-Study from 10 years ago and the visiting team’s review from 10 years ago. If you go to that SACS review and click on it, you’ll get all of the requirements, the recommendations, suggestions, etc.

 

We have the committees for this time, and we have a timetable. If you go to SACS Commission, you’ll get the homepage for SACS, and from there you can get any information you want, including the criteria for accreditation. If you want to look at the SACS survey, you can go there and link to the questions. Once you link to them, you will also see the word “result” at the end. If you click on “result,” you will bring up that question and find a breakdown of how the people in a particular group felt about that particular question.

 

We are in the process of preparing surveys and questionnaires for Deans, Department Heads, and faculty. That was done 10 years ago. The faculty survey 10 years ago had something like 134 questions. I don’t know what percentage answered; I couldn’t find that. This is all in the archives in the library. I don’t think we’ll have 134 questions this time. I figured that there are a little over 90 people serving on the committees for SACS, and if those 90 people reply, we’ll have about 10 percent. The hope is to get a bit more than that; we hope to get something around 40-50%. In fact, I was against putting out a faculty survey to begin with, because the response is usually so low, but the Steering Committee feels otherwise, so we will go ahead with it.

 

The question comes up whether we should survey the part-time faculty and GTAs. Ten years ago they surveyed the alumni, so you ask, do we survey the alumni? Then you ask, do we survey the Board of Trustees? We will have to see.

 

The Steering Committee meets tomorrow and we will have some answers.

 

There is really not much to report except that this thing is ongoing.

 

The meeting was adjourned at 4:30 p.m.