Transcript of the Senate Meeting
January 15, 2019

Because the projection equipment in room 238 had to be unplugged and restarted, the meeting started about 7 minutes late.

Michael Baginski, Chair: Welcome to the January 15 meeting of the University Senate.

If you are a senator or a substitute for a senator, please be sure to sign in on the sheet at the front of the room, and be sure to pick up your clicker there as well. Next, we need to establish a quorum. We have 87 Senators in the Senate and we need 45 for a quorum. Please press A on your clicker to show you are present.
Let the record show that we have 67 present to start the meeting, so a quorum is established. I now call the meeting to order.

I would like to remind senators and guests of some basic procedures for the Senate meeting. The rules of the Senate require that senators or substitute senators be allowed to speak first; after they are done, guests are welcome to speak. If you'd like to speak about an issue or ask a question, please go to the microphone on either side aisle. When it is your turn, state your name, whether or not you are a senator, and the unit you represent.

Please limit yourself to one or two questions. Unless you are making a motion for an amendment before the Senate, meet with the speaker afterward to continue any further discussion.

The agenda today was set by the Senate Steering Committee and posted on the Web site in advance; it is now up on the screen.
The first order of business is to approve the minutes for the meeting of October 9, 2018. Those minutes have been posted on the Senate Web site. Are there any additions, changes, or corrections to the minutes? Hearing none, the minutes are approved by unanimous consent; that is different from last time.

Next we need to approve the minutes for the Senate meeting of November 13, 2018. Those minutes have also been posted on the Senate Web site. Are there any additions, changes, or corrections to these minutes? Alright, hearing none, the minutes are approved by unanimous consent.

Now, I personally have a statement to make, in view of some of the things that happened recently. I have a few brief remarks regarding collegiality and decorum in the University Senate. Let me speak frankly about this. I have not been the best example of how to treat others in the Senate, and for this I apologize. That said, I know many of us feel passionately about certain issues and it's difficult to keep our emotions in check, but regardless of our personal feelings about policy or individuals, I ask everybody to treat everyone with respect. Please do not insult or ridicule anyone or say anything that questions a person's integrity or motives. You may always ask questions about policy, but I ask that everyone refrain from personal attacks, from statements that call a person's character into question, and, to be frank about it, from insults. For example, if you disagree with a policy or procedure of the Senate, or with the procedures for making a change, or if you are uncertain or feel that a policy is unfair, then follow Robert's Rules of Order to state clearly in your objection why it's unfair. Or you can discuss it with me after the meeting or later in the week.

Now I would like to introduce the officers of the Senate and our administrative assistant.

Dan Svyantek is the immediate past chair, Nedret Billor is the chair-elect, Dr. Beverly Marshal is the secretary this year, and Adrienne Wilson is the secretary-elect. Herbert Jack Rotfeld is our parliamentarian. Finally, our administrative assistant is Laura Kloberg.

For the first item of business, Ralph Kingston, chair of the Faculty Handbook Review Committee, will present two items.

[3:08]
Ralph Kingston, chair of Faculty Handbook Review Committee: Hello everybody.

I am going to make this quick, as usual.

We have two sets of changes that we want to make to the Constitution of the Senate, and by extension to the Faculty Handbook, today. The first one that you see here is a change to Article 5, Procedures, Chapter 2, which responds to the unofficial vote you had last semester.

The proposal here is to reduce the number of Senate meetings each year by two. It is a proposal that comes from the Senate Executive Committee and, as you basically already know, it received clear unofficial support from you all in the straw poll last semester. More particularly, this involves removing the meeting that usually takes place in August, which will be replaced by an orientation for new senators. Those of you who have been in the Senate a long time probably don't remember this, but those of us who are new to Senate business know how bewildering the Senate can actually be. There is a lot of in-the-know conversation and a lot of acronyms, and it is basically felt that an orientation for new senators would help improve the collegiality we just heard about.

So, removing the meeting in August and replacing it with an orientation for new senators is the first part. The meeting that usually takes place in June, which, if you look back at Senate minutes, always struggles to achieve a quorum, is also to be removed. And you don't have to look back far: last June a quorum requiring 44 senators was not established, since only 36 senators were recorded as present. In June 2017, 45 senators were present and a quorum was established. In June 2016, 44 senators were present, the bare minimum for a quorum. None of those meetings met the two-thirds requirement for certain kinds of votes or amendments, which is 58 votes, including the vote requirement that will apply when this issue comes up for a vote at the next Senate meeting.

It would not be right for me not to point out that removing these meetings creates a significant gap of 3 to 4 months every year when the Senate will not meet in regular session. The fix for that, of course, would be for senators to commit to attending the June meeting.
That is the information you need on this proposed policy change. Does anybody have any questions before I move on to the next one? (no response)

The second change that the Faculty Handbook Review Committee has for you today concerns the Retention Committee. There is a two-fold change going on here. The primary change is the transformation of the Retention Committee into the Enrollment and Retention Committee. The reason to make this change is to provide a mechanism for faculty to monitor and give input on university enrollment policies. Enrollment is a significant issue at present, as colleges are choosing how to balance their responses to the budget model and as the university as a whole is growing its total student enrollment. Adding enrollment to the committee's charge gives faculty a means to provide meaningful input on enrollment issues as well as on diversity policies and initiatives.

The second change you see up here is technical, caused by changes to the university organization chart, the administrative structure. We no longer have an Associate Provost for Undergraduate Affairs. My understanding is the Provost's designee will be somebody in a position to feed enrollment data into the committee as required.

Does anybody have any questions on this change? Both items will come up for a vote at the next Senate meeting; today I am just presenting them. I feel as if I am daring you…I just lost my dare.

Kelli Shomaker, VP for Business and Finance and CFO:
I just noticed the composition of the committee. Why would we not have the Vice President for Enrollment on that committee?

Ralph Kingston, chair of Faculty Handbook Review Committee:
I asked exactly the same question and I was assured that the Provost’s designee basically takes care of that. I could go and research…

Kelli Shomaker, VP for Business and Finance and CFO: Enrollment works directly for the President, not the Provost, so I thought that would be a designee.

Ralph Kingston, chair of Faculty Handbook Review Committee:
I will ask a couple more questions and have a more definitive answer for you before you have to vote next time. I asked the same question and that's what I was told; great minds think alike. Thank you very much. [9:20]

Michael Baginski, Chair:
Next is an information item: Todd Steury is going to present on the use of student evaluations of teaching.

Todd Steury, member of the Teaching Effectiveness Committee: Thank you, my name is Todd Steury. I am the past chair of the Teaching Effectiveness Committee, and I am also an associate professor in the School of Forestry and Wildlife Sciences.

Today I want to tell you about a report that the Teaching Effectiveness Committee will be submitting very shortly. As background to this report: the SET questions, as most of you know, were revised in 2017. SET stands for student evaluation of teaching; these are the questions asked of all students at the end of every semester for most of the classes here at the university. The previous questions had been used since 2011, so it was about time for a revision. The revision process, which began in 2016, included an in-depth review of existing research on student-evaluation-of-teaching questions and SETs in general.

Final TEC Report on SETs (pdf)

In this review the committee found that SETs appear to be of limited value for summative evaluations of teachers. Summative evaluations are absolute evaluations of teaching effectiveness, that is, of how good a teacher is. So we set out to develop questions of greatest value for formative evaluations of teaching; formative evaluations focus on improving teaching. We came up with questions based on Chickering and Gamson's seven principles of good practice in undergraduate education. I will just quickly put those up for you to see. They have to do with the instructor interacting with students: Did the instructor provide opportunities to cooperate with classmates? Did the instructor convey high expectations regarding the workload? Did the instructor provide an evaluation of the student's progress? Did the instructor provide opportunities to apply their learning? Did the instructor prompt the student to think critically about the course material? And finally, did the instructor provide an environment that was supportive of learning?

Now, one thing you'll notice in these seven questions is that there is no global question, such as "The overall effectiveness of my teacher was…" or "My overall enjoyment of this class was…" I'll come back to that as I go.

So, the Teaching Effectiveness Committee set out to put together a report on what we found in our research, and we had several goals. One was, again, to review the existing literature on the utility and validity of SETs. Another was to review the expert advice on how SETs should be used for evaluations of teaching effectiveness. And lastly, we wanted to come up with specific recommendations on how teachers should be evaluated at Auburn University, recommendations that are both fair and efficient. One of the reasons that SET scores are often used is that they are easy; it's just a number. So we wanted to come up with ways of evaluating teachers that are similarly efficient to use.

In our review of the existing literature on SETs we came to a number of conclusions. Now, there is a fair amount of controversy in the literature on the use of SETs, but there are some key conclusions that can be drawn regardless of the source.

First of all, students are not qualified to evaluate faculty for teaching effectiveness; this seems to be a pretty strong conclusion no matter what source you look at. Second, the evidence for the validity of SETs for measuring individual teaching effectiveness is weak at best. And finally, we found that there are many sources of bias in SET scores. [13:38]

So, let me break each of these down. First of all, students are not qualified to evaluate the effectiveness of a teacher. Specifically, there are numerous skills that define teaching effectiveness, such as knowledge, methods, course design, use of technology, course materials, grading, and a whole bunch of other things, and students just don't have the information or the knowledge necessary to evaluate a teacher with respect to those skills. Instead, SETs are widely accepted as a way to gather the collective views of a group of students about their experience. Okay?

Second, are SETs a valid measure of effectiveness? One of the things we looked at really closely is whether SET questions are valid instruments for measuring effectiveness; that is, do they actually give some indication of how effective a teacher is? Now, one of the best sources of evidence for the validity of SETs is meta-analysis. A meta-analysis, for those of you that don't know, is a study in which researchers collect numerous studies on a topic and essentially do a quantitative review: they take all the estimates of the effect in question and combine them using meta-analytic statistics to come up with a single overall measure, in this case of the validity of student-evaluation-of-teaching questions. A number of meta-analyses have been published on the validity of student evaluations of teaching, including Cohen (1981, 1982, 1983), Feldman (1989), McCallum (1984), and Clayson (2009), [15:19] and interestingly, all of these studies did find that SETs are valid measures of teaching effectiveness in aggregate. I'll come back to that in just a second.
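[Editor's note: for readers curious about the mechanics of that combining step, here is a minimal sketch of the simplest, fixed-effect version using Fisher's z-transform. The per-study correlations and sample sizes below are invented for illustration; they are not taken from the studies cited.]

```python
# Minimal fixed-effect meta-analysis of correlations (invented toy numbers).
import numpy as np

r = np.array([0.30, 0.15, 0.45, 0.10])   # hypothetical per-study correlations
n = np.array([20, 50, 12, 80])           # hypothetical per-study sample sizes

z = np.arctanh(r)                        # Fisher z-transform stabilizes the variance
w = n - 3                                # inverse-variance weights, since var(z) = 1/(n - 3)
z_pooled = np.sum(w * z) / np.sum(w)     # weighted mean in z-space
print(round(float(np.tanh(z_pooled)), 2))  # back-transform to a pooled correlation
```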

They all found that the correlation coefficient, that is, the strength of the relationship between SET scores and student learning, ranged between .13 and .44. Just to explain how they did this: they looked at studies where there were multiple sections of a course and correlated the SET scores from each of those sections with the grades on the final exam or the grades in the course, some measure of student learning. Okay. Now, the important point that I want to make is that there is a positive correlation between SET scores and student learning. So SET scores might be a good measure for evaluating teaching effectiveness, right?
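[Editor's note: as a concrete reading of that multi-section design, here is a sketch in which each data point is one section of the same course. All numbers are invented, not drawn from any of the studies described.]

```python
# Correlate each section's mean SET score with its mean final-exam grade.
import numpy as np
from scipy import stats

set_means = [4.1, 3.6, 4.5, 3.2, 4.8, 3.9]         # mean SET score per section (toy values)
exam_means = [78.0, 74.0, 81.0, 70.0, 80.0, 77.0]  # mean final-exam grade per section

r, p = stats.pearsonr(set_means, exam_means)
print(f"r = {r:.2f}")   # one study's correlation, the kind the meta-analyses pool
```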

Here's the inherent problem. This is an example data set showing the relationship, with SET scores on the x-axis along the bottom and some measure of student achievement in learning on the y-axis along the side, and the data here have a correlation coefficient, an R-value, of about .5, so higher than the best value found in any of those meta-analyses. Now, you can see that there is a positive relationship there, but the problem is that if you look at any individual scores, they don't carry a lot of meaning for evaluating an individual instructor. For example, I've selected two points here. If we look at this point down here in red, this instructor got an average score of pretty close to 6, but his students didn't actually learn very much. Compare that with this instructor up here in the corner, who got an average score of 2, and yet his students learned an awful lot. So this is the inherent problem with SET scores: while they may be correlated with student learning, they aren't necessarily good measures of how well an individual instructor has done.
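[Editor's note: that picture is easy to reproduce in a simulation. The data below are invented and constructed so the aggregate correlation comes out near 0.5, yet individual instructors still land far from the trend line.]

```python
# Even with r ~ 0.5 in aggregate, individual instructors scatter widely.
import numpy as np

rng = np.random.default_rng(1)
n = 200
set_score = rng.uniform(1, 6, n)                     # each instructor's average SET score
noise = rng.normal(0, set_score.std() * np.sqrt(3), n)
learning = set_score + noise                         # built so corr(set, learning) ~ 0.5

print(round(float(np.corrcoef(set_score, learning)[0, 1]), 2))  # aggregate r ~ 0.5
top20 = learning[np.argsort(set_score)[-20:]]        # the 20 highest-rated instructors
print(round(float(np.mean(top20 < np.median(learning))), 2))    # some still below median learning
```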

The R-value has real meaning here. If the R-value is .44, then about 80% of the variation in SET scores is due to something other than how much the students learned, and only about 20% of the variation in SET scores is driven by how much the students learned. So these are valid instruments for measuring student learning in aggregate; you can use them to look across departments or across universities, but not at individual instructors. Now, as I said, 80% of the variation in SET scores is driven by something else, and that something else can be bias. There are a lot of studies that have demonstrated that there are biases in…I'm sorry, I got ahead of myself. [18:17] One of the things that recent re-reviews of these meta-analyses found was that none of the previous meta-analytical studies considered something called small sample bias. Small sample bias is the idea that studies with very small sample sizes need really strong effects to get a significant P-value, and studies without significant P-values don't get published. So the idea behind small sample bias is that perhaps the reason we see relationships in these meta-analyses is that the studies that did get published had small sample sizes, and those small sample sizes meant that they reported really large effects. [19:10]
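[Editor's note: before turning to that re-analysis, the 80/20 arithmetic above is worth pinning down. The split is just the squared correlation, the proportion of variance explained, computed here at the meta-analyses' best-case value:

\[
R^2 = (0.44)^2 \approx 0.19, \qquad 1 - R^2 \approx 0.81
\]

so roughly 20% of the variation in SET scores tracks student learning, and roughly 80% does not.]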
So, Uttl in 2017 re-analyzed each of these previous meta-analyses, and here are some of his results. What you can see here is each of the studies used by Feldman in 1989 and each of the studies that Clayson used in 2009, with the studies sorted from the largest sample size at the top to the smallest sample size at the bottom. This dotted line, right here in the middle, is a correlation coefficient of zero, so basically no effect.

What you see is that the large studies tended to show no effect, but as the sample sizes of the studies got smaller and smaller, they showed bigger and bigger effects. So this was clear evidence of small sample bias in the previous meta-analysis studies. What Uttl did is re-analyze the previous data, adjusting the correlation coefficients for small sample bias, and he found that the adjusted R-values ranged from .05 to .27, so even smaller than previously found. Furthermore, Uttl analyzed a new data set and found that its adjusted R ranged from -.02 to -.04. So the point is that even though previous studies suggested there is a relationship and that SETs are valid measures of learning, those results may have been due to small sample bias, and there may not really be any relationship at all.
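[Editor's note: the mechanism is easy to see in a quick simulation. Everything below is hypothetical, with a true correlation of zero and invented study sizes, purely to illustrate how a publication filter inflates small-study effects.]

```python
# If only studies with p < .05 get published, small studies report
# inflated effects even when the true correlation is zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
for n in (10, 25, 100, 400):                 # hypothetical study sample sizes
    published = []
    for _ in range(2000):                    # many simulated studies per size
        x = rng.normal(size=n)               # SET scores: pure noise
        y = rng.normal(size=n)               # learning measure: unrelated noise
        r, p = stats.pearsonr(x, y)
        if p < 0.05:                         # the publication filter
            published.append(abs(r))
    print(n, round(float(np.mean(published)), 2))  # mean published |r| shrinks as n grows
```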

Now, as I mentioned, 80% of the variation in SET scores is driven by something else, and one source of that variation could be bias. Numerous studies have found bias in SET scores. Those biases can be caused by faculty rank, faculty gender, student gender, faculty expressiveness, student motivation, whether the course is required or not, the expected grade, the level of the course (freshman or sophomore level as opposed to junior or senior level), class size, academic discipline, and student workload.

We also looked at the global questions. Again, global questions are the overall questions, such as what the instructor's overall effectiveness was. Generally, we found that global questions were subject to even more problems with bias than the other questions. In addition, most of the experts agree that global questions are particularly inappropriate for evaluating instructors. Besides having weaker validity, global questions have low reliability. Reliability is the idea that if you ask a student to score an instructor on one day and then ask them to give a score a week later, they will give very different scores; so there is low repeatability in the scores that individual students give.

Experts also agree that it is generally not fair to evaluate something as complex as the teaching an instructor does over an entire semester based on a single score, and most of the experts agree that if you are going to use scores, you should use all the scores, not just a single global question, or at most a mean of the scores rather than a global question. The experts also believe that using global scores violates the standards related to personnel evaluations.

So, what are the implications of the committee's findings? SETs are not valid instruments for measuring the effectiveness of individual teachers, and as a rule SETs should not be used in a summative manner for evaluating teachers. Again, that means evaluating the absolute effectiveness of a teacher.

So, what are the recommendations of the Teaching Effectiveness Committee? Again, we recommend that SETs not be used in any kind of summative manner. SETs could be used in a formative manner. What I mean by that is not just that they could be used to improve teaching, but that, regardless of the absolute scores, do scores improve over time? If you have an instructor whose scores consistently improve over time, that may, may, indicate an instructor who cares about teaching and puts effort into trying to improve their teaching effectiveness. [23:40] Additionally, if SET scores are going to be used, we have a number of recommendations related to the number of evaluations. In general, the experts agree that a minimum of 6 to 8 courses total needs to be used. Courses with fewer than 10 raters should be excluded. For summative employment decisions there is a minimum 70% response-rate requirement. Is anyone getting 70% response rates? For formative considerations the bar is much lower: a 30% response rate is acceptable. Again, the TEC recommends that no global questions be used. [24:25]
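[Editor's note: read literally, those thresholds amount to a simple screening rule. Here is a hedged sketch of one way to encode them; the function name, field names, and data shape are the editor's, not the committee's.]

```python
# Hypothetical screening of SET data against the thresholds stated above.
def sets_usable(courses, summative=True):
    """courses: list of dicts with 'raters' (int) and 'response_rate' (0-1)."""
    kept = [c for c in courses if c["raters"] >= 10]   # exclude courses with <10 raters
    if len(kept) < 6:                                  # need at least 6-8 courses total
        return False
    floor = 0.70 if summative else 0.30                # response-rate minimum by use
    return all(c["response_rate"] >= floor for c in kept)

# Example: six courses with 42% response rates pass for formative use only.
courses = [{"raters": 25, "response_rate": 0.42}] * 6
print(sets_usable(courses, summative=False))   # True
print(sets_usable(courses, summative=True))    # False: below the 70% floor
```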

Again, the use of all scores is preferred, or, if you must, the means of the scores from the 7 questions.
Finally, we recommend that there be no comparisons with other faculty; such comparisons are still prone to bias due to gender, race, or other characteristics. And most importantly, the TEC recommends that SETs be used together with other metrics of teaching. There are a number of different metrics that could be generated with respect to teaching effectiveness. These could include peer review of course material; peer review of course instruction, which is having one of your colleagues sit in on your class; review of metrics of teaching effort by supervisors (for example, were course policies followed and grades submitted on time, those kinds of things); review by expert sources like the Biggio Center; exit and alumni ratings; employer ratings; teaching scholarship (is the teacher publishing in peer-reviewed outlets?); teaching awards; learning outcome measures; teaching portfolios; and finally, self-evaluations of teaching.

I will point out that you could generate rubrics for any of these particular metrics, and those rubrics would then give you a score. Again, we wanted to come up with recommendations that are efficient, so you could generate scores for these metrics and then use them very efficiently.

That’s it, with that I will take any questions. [26:01]

Spencer Durham, pharmacy practice, senator: Thank you for the presentation, it was very informative. Some of my colleagues actually wanted me to ask a question today: are the questions you are talking about specific to the undergraduate programs, or should we also be using them in the professional and graduate schools? Could you comment on that?

Todd Steury, member of the Teaching Effectiveness Committee: Yes. When the TEC set out to make these questions we intended for them to be used rather generally. We recognize there are limitations, there are certain situations in which these questions couldn't be used. And graduate classes, of course, present a particular problem because a lot of the time graduate classes are small or very focused in nature, so they could be used for graduate classes, but whether or not they should be used kind of depends upon the individual class. Does that answer your question?

Spencer Durham, pharmacy practice, senator: Thank you.

Ed Youngblood, senator, Communication and Journalism: This may be outside of what you are talking about. You were talking about scoring with peer evaluations and things like that, have you all looked at any measures that people have developed that you would recommend?

Todd Steury, member of the Teaching Effectiveness Committee: In our report right now, we have an appendix that is an example of a peer review rubric that could be used. We didn't want to get too specific in the exact rubrics that would be used so we provided an example. We generally feel that these things should be developed by the individual units.

Ed Youngblood, senator, Communication and Journalism: Is that report on the Web site?

Todd Steury, member of the Teaching Effectiveness Committee: It's not up yet, but we should submit it pretty soon.

Ed Youngblood, senator, Communication and Journalism: We are in the middle of trying to develop exactly that kind of thing, so that might be a good starting point for us. Thank you, very informative. [28:00]

Roy Hartfield, Aerospace Engineering, not a senator: There is some language in the Handbook, put there probably 25 years ago, that has to do with student evaluations not being used in promotion and tenure decisions. It is used, in (? could not understand what was said) the Handbook, regularly. Is there any interest in your committee in making changes to that language or…?

Todd Steury, member of the Teaching Effectiveness Committee: The TEC doesn't have the power to make any changes to the Handbook, so these are just the recommendations of the committee. Certainly, if the Faculty Handbook Committee wanted to make changes or the Faculty Senate was interested in making changes to the Faculty Handbook the TEC would definitely be willing to work with anybody in terms of those changes, but it is out of our hands. All we can do is make recommendations.

Roy Hartfield, Aerospace Engineering, not a senator: Thank you.

Todd Steury, member of the Teaching Effectiveness Committee: Anything else? Okay, thank you very much.

Michael Baginski, Chair:
Thank you very much. [29:29]

Now, Jennifer Kerpelman, the interim Vice President for Research, is going to talk about supporting research. First, I just want to say one thing about these teaching evaluations: I remember when we had paper evaluations and you would get a much higher response rate, but I don't know if we could ever go back to that.

Jennifer Kerpelman, Interim Vice President for Research:
I'm going to give you an update on seven diverse ways that we've been supporting research, or things that are in process for supporting research. On a number of these initiatives and problem-solving activities, the Faculty Research Committee and the Associate Deans for Research have been working together collaboratively to move things forward.

The first activity, which we have had for a number of years, is the Intramural Grant Program, or IGP. This year we have 3 categories that people have submitted proposals to. One is faculty-initiated research projects; the second area is team research; and the third is the good-to-great category (this one is newer), where people submit to an external funder and don't get selected for funding, but get reviews suggesting their work is promising. They use the IGP funding to improve their proposal and get more of the data they need, then resubmit, and hopefully are successful with the revision.

There's also a separate category, with separate funding, for the cyber initiative. You can see up here that we have had workshops and received 95 submissions (that's 10 more than last year, which is great); the panel will meet in February, and we will announce the winners of the IGP in March.

We are also currently conducting an evaluation of the IGP, looking across time to see how effective it is and how much added value it brings, both in people's success at obtaining funding and in other indicators of the visibility of their work. We will have more to share about that in the future. [32:42]

The second thing that I'd like to share is the plan for incentivizing scholarship. As you know, we had a Scholarship Incentive Program, the SIP, and that was terminated. The Senate ultimately asked that the Provost determine something to replace it. So the Provost had Emmett Winn, Mike Fogle, and me get together and work on some guidelines to help move this forward. We then received feedback from FRC members and from the Associate Deans for Research, and also gathered feedback from others, which helped us improve the guidelines. The Provost also had a chance to take a look at what we were developing; it was then sent out to the units for the academic leadership to review the guidelines, see what they thought about them, and provide input. That came back to us, we incorporated the input, and it is now with the Provost for review. There may be more input before it ultimately goes to the units so they can develop incentive programs tailored to what is incentivizing for the faculty in their colleges, schools, and departments. The only requirement is that everybody needs to have some scholarship incentive plan in place for their faculty; it can be at the department level or at the school or college level. There may also be some central components that help with the scholarship incentives. So that is still in process, something to be on the lookout for, and it will be moving forward this year.

A third area is the faculty and student research symposia. I don't get as many questions about these anymore; it used to be that when we talked about them, people would say, "What is that?" So I think we've been doing them long enough now that most people are aware of them and are participating, which is great. The next student symposium is going to be April 9; more will come out about that, and the announcements have gone out for students to submit their abstracts. The faculty symposium, which I will focus on here, is designed to raise the visibility of faculty research across campus. It is an opportunity for faculty to network and meet each other across disciplines, and to learn what others are doing on a topic that may be similar to your interests but with a very different approach from a very different discipline. A third thing that we strengthened just this past year, and want to continue to strengthen, is the opportunity for potential external partners and funders to participate in at least some aspects of the faculty symposium. This past year we had some round tables where external partners met with faculty about common areas and exchanged information. That was something we pilot tested this past year, and we will likely grow more of those opportunities in years to come.

One more thing on this: we also have the creative scholarship showcase, which has grown in excitement and size every year since we started it. This past year it was at the Jule Collins Smith Museum, and the opening night was spectacular. Then for 2 weeks the work was exhibited and we had special events. I anticipate that it will continue to be a very good event; it takes place every other year.

Another thing that we are working on to support research, something that has certainly come up in a lot of different discussions over a number of years, is the research infrastructure and how we need to improve it to really supply core support for our research. This is becoming more important as we move into more interdisciplinary activity, because we have to have the supports that allow us to be successful in our work together. So we have committees and sub-committees that include a diverse group of people: FRC is represented, and we have Associate Deans for Research, other faculty, and staff on these different committees. We are looking at the areas you see here: equipment, space, computing/IT and statistical support, and, recently added, personnel, that is, the people in administration who help support our research activities. These groups are going full tilt. They have been giving reports, we will have another report later this month, and ultimately each group will produce recommendations on what our needs are, what our gaps are, and what the priorities are, where we really need to grow and improve the infrastructure to support our research activity. This will be given to the incoming Vice President for Research, of course, to use as planning is made to enhance these supports. [38:22]

Speaking of interdisciplinary work, we hear more and more about the importance of bringing different disciplines and different expertise together, because the problems and challenges we face in society today, and the questions we have, really require different kinds of expertise coming together. Three of the ways that Auburn has been supporting interdisciplinary research are what you see here.

We have the Health Sciences Research Initiative, which has had a lot of attention and activity. It is an area where every school and college has an interest and activity. We have brought in a consultant, Marietta Harrison, who has been working with us since last fall. She has made a couple of visits to campus, and we had an open forum that was very well attended and participated in. She will be back at the end of January: on January 28, if you haven't seen it yet, she will hold another open forum from 11 to 12:30 in the same location, the large auditorium-style classroom in the School of Nursing, and people can also Zoom in if you are not able to be there in person. She has submitted an interim report and asked for input from those of us who are working closely with her; highlights from her interim report will come out very soon. We now have a Web site up for the Health Science Initiative. We are getting our money's worth with her; she is really attending to who we are, our capacity, and our opportunities, and I think it is going to be very useful. The Steering Committee will be working with that. We have a lot more meetings set up for her visit; she will be with us January 28 through February 1, and she may have another visit after that. We should have an excellent final report that will help us chart the course forward with our Health Science Initiative.

Among the other interdisciplinary areas we are working in, as I am sure you are aware, is PAIR. We have teams that received funding, and we have been working very closely with those teams to make sure they have what they need to be successful. We will be receiving their 6-month reports this month and will provide them with additional input, support, and guidance to ensure that they continue to move forward in the ways that they would like.

We also have a lot of teams that came together because of the PAIR competition who said, "Well, we got together, we really got excited, and we want to continue working together and looking for support." So our office has been working with some of those groups to help them identify ways to move forward even though they did not get the PAIR funding.

Then one other way: a few years ago we had the strategic cluster hires. We have 5 clusters that are continuing to work together and grow their activities. We are having them provide updates at our University Research Committee meetings. We continue to reach out and work with them, and we are looking at their accomplishments, gathering data on what they are producing as clusters and on what we can do to continue to support their efforts. Going back to the infrastructure assessment, some of that will be critical for helping all of these teams continue to make progress.

A couple more updates. This next one is in process. It is invisible to a lot of us as faculty, except when we are frustrated because we are waiting too long to get something processed, be it a protocol or a proposal. Electronic research administration uses technology to link databases, which helps us be much more effective and efficient, reduces the administrative burden on faculty, and also enhances the capacity of those who support our research activity on the administrative side to be more effective.

So, we have had IBM come in and do an assessment of the current state of where we are, and a lot of pain points were identified. We followed up with listening sessions to learn more about the pain points that seemed most prominent. We have had IBM come back, and they are helping us right now to envision the end state as an R1 institution: what do we need to do to stay an R1, and to really function as an R1 in terms of how we manage our research activity? They are helping us with that and with developing a journey map. And we are having electronic research administration vendors come in over this month and next to provide demonstrations of what their systems do. A lot of this is behind the scenes of your day-to-day activity, but as we make improvements, when we are doing research and need the support of these systems, we will see the benefits. We are also making improvements to the systems we have, based on what we've learned.

The final thing I want to bring up has actually been going on in pilot phases for a little while, but we are trying to make it a better-known and somewhat more formal process; we call it the Facilitated Solutions Process. Sometimes when we are having frustrations or problems around our research activity, we feel like there is nowhere to turn. What do you do? Where possible, we have people working closely with us who can solve the problem, but sometimes things aren't as easy to solve; they are more complex, or just challenging. So some of us got together, because I am an associate dean for research in my typical job, and we see these frustrations and think there has to be a better way to work together. We have a couple of associate deans for research, a couple of folks from the Vice President for Research office, and a couple of folks from the Business Office working together to facilitate bringing stakeholders together when there is a problem that really needs more attention, moderating the process so that everybody feels they are being heard and is part of the solution we are moving toward, helping make sure we have the right decision maker in place so that we can actually do something about it, and then testing how the solution we've identified works.

An example from over the summer: there were some differences of view about the payment of research participant incentives and how that was working. Thresholds had been set such that if you were over a certain threshold, you had additional paperwork and reporting to do, and faculty felt those thresholds were too low. We seemed to be at an impasse, or weren't totally working it out. So we got together; the FRC actually took a real lead in this. We brought in ADRs, faculty, and folks from the Business Office, and we had a dialogue about the situation, what people's concerns were, and possible ways to solve it. Ultimately it resulted in an improved policy that everybody felt they could live with and that moved everybody in the direction they had hoped. That was a positive outcome.

I think the group I've been working with and I can use a similar process for other kinds of challenges we face with our research. We are working on making this more visible, even getting a Web page up, so that if you have something you think rises to the level where this kind of support might be helpful, you can reach out to us. For now, I can be the point person, so let me know if any of you are thinking right now, "I've got one I'd like to bring forward." We also have input from a survey that went out in the fall, which some of you may have taken, asking about all the different ways you experience things that affect research, both directly with the research offices and with things like parking. From those surveys and the IBM discussions, we have some areas that may be rising to the top that our group might put out there as part of our continuing effort to improve support for researchers and research activity, so that we have fewer frustrations, and so that when we do have frustrations, we have a way to address them.

Those are my highlights. If you have questions about anything I shared, I am happy to address them now, and if you have additional things you'd like to talk about, we can talk outside of this meeting. Anything? Okay.

Michael Baginski, Chair: 
This concludes our formal agenda for today.

Is there any unfinished business? Hearing none.

Is there any new business? … Hearing none, I now adjourn the meeting. [48:28]