Interview with Jennifer Sweeney

Jennifer Sweeney teaches at the College of Computing and Informatics at Drexel University and in the Department of Information Studies at UCLA, and is a program evaluation consultant for libraries and other public agencies and nonprofits. Prior to joining Drexel, Dr. Sweeney developed measurement instruments for K-16 educational interventions for the University of California, Davis School of Education, and provided evaluation services for the California Center for the Book, the California Library Association, and Smith & Lehmann Consulting. She is scheduled to teach a series of classes for Library Juice Academy, which we are calling the “Painless Research” series. We describe the series as follows:

The Painless Research Series provides an overview of basic research techniques needed by library managers and other staff, whether in broad topic areas such as service quality, customer satisfaction, and operational metrics, or with specific tools such as surveys and focus groups. Participants develop skills in formulating typical research questions and strategies, making use of existing studies and data, collecting and analyzing data, and tailoring presentations for different audiences.

Jennifer Sweeney agreed to do an interview here, to help give people a better sense of what will be covered in these classes, what needs they address, and a little bit about herself as the instructor.

Jennifer, thanks for agreeing to do this interview. I’d like to start out by asking a little bit about your background, how you came to be qualified to teach this series of classes.

I started out as a reference librarian in a small technical consulting firm and then later in a couple of college and university libraries, but I was always interested in the research side of just about every question that came across the desk. After a while I realized I wanted to focus more on research in my work, so I left reference and found a great job as an analyst in the library at the University of California, Davis, handling all sorts of data collection and research projects related to running a large ARL library. It was the best job ever; I was totally bitten by the research bug.

The next step for me was a PhD in information studies, where I started to notice something a little disturbing about the quality of the research in our field: it’s not that great. There are plenty of great research ideas, but far too many actual studies have problems with their research methods in one way or another. Faulty assumptions, inappropriate strategies, flawed analysis, you name it. Our field of library science/information studies/information science, whatever you want to call it, is a fascinating and multi-faceted discipline, but one with a weak research foundation.

So I figured I could help students and practitioners by presenting basic research methods in a clear and understandable way…so that when they go out to do research, they won’t make the same mistakes. That’s the basis for the series I’m doing for LJA.

Just so it’s clear to readers: While some people who take your classes might be interested in doing research for publication, the focus is on research that would be done within an institution to better manage services. But the basic principles are the same. It seems to me that one thing that is special about this series of courses is that your background gives you the ability to apply high methodological standards to concrete situations. But I wonder, are the methodological issues easier to deal with when you’re just looking to improve decision-making versus forming the kind of general claims that are made in academic research?

The methodological issues are not necessarily any easier to deal with in applied research settings. There are a couple of reasons for this. One is that in the real world, there are always other things going on that influence the research environment; you cannot conduct a true controlled experiment the same way you would in a laboratory. The problem in LIS is that when we do conduct our “quasi-experiments” (which we do all the time), we don’t take the time to explore and account for those other variables that could be affecting our results. When we fail to account for these factors, we run a greater risk of coming to false conclusions.

The other reason is that “action research” that is done in work settings for decision making often involves some kind of evaluation, which creates another set of complexities because we are now bringing value judgments into the mix. The stakeholders have to agree on what’s important, and how (or whether) the results will be used. These questions directly affect the way the research is conducted. And stakeholders don’t always agree on things!

That is a helpful bit of orientation to “action research” as you call it. I wonder if you could outline the four classes for readers. What are they about and what will participants take away?

The course series “Painless Research” is designed to provide a basic set of skills for library administration or public services staff who need to evaluate their services but have no research or evaluation experience. The idea is to get you started with some knowledge and hands-on activities, explained in plain English. Evaluation research is not hard to do, but you need to know the techniques so you don’t make expensive mistakes or waste your time on useless measures.

A lot of research texts are hard to digest, so I try to present concepts in everyday language.

We start out with the course on “Evaluating Service Quality,” which focuses on how to gather and analyze information about how library users feel about services, what they want, and so on. We target a few key areas: what service quality is, why measuring quality is different from measuring other things, which techniques you should use, and how to use the results to customize staff training and help the library improve.

The second course in the series, “Easy Patron Surveys,” gets into the details of survey design and implementation. Surveys are kind of mysterious to a lot of us, but they are really a lot of fun. Question design, sampling, and basic advice on how to get a survey out there and collect good data are the highlights of this course.
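
To give a flavor of the sampling step, here is a minimal sketch in Python (not course material; the patron list and sample size are invented for the example) showing how a simple random sample of survey invitees might be drawn using only the standard library:

    import random

    # Hypothetical patron list; in practice this might come from an ILS
    # export or a registration database.
    patrons = [f"patron_{i:04d}" for i in range(1, 2501)]

    SAMPLE_SIZE = 300  # invented target; depends on the precision you need

    random.seed(42)  # fixed seed only so the draw can be reproduced and audited
    invitees = random.sample(patrons, SAMPLE_SIZE)  # draws without replacement

    print(f"Inviting {len(invitees)} of {len(patrons)} patrons")

Because random.sample draws without replacement, no patron is invited twice, and every patron on the list has the same chance of selection, which is the property that lets you generalize from the respondents to the whole patron population.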

“Getting to Know Your Users Through Interviews and Focus Groups” covers the in-depth qualitative data gathering that you can’t do with surveys. Talking to people and capturing what is said entails a totally different skill set. I do a lot of interviewing and focus groups, and it can be pretty intense while also immensely satisfying and fun. But you need the tools and skills to be able to get the information you need, because it is a much more labor-intensive activity than a survey.

We wind up the series with “Everyday Statistics for Librarians.” I’ve been working with library students and working professionals for years now, and the feedback I get is that it’s not that the math is difficult (it’s not); it’s just never been explained very well. We focus on just a few of the most useful functions, and it’s really fascinating to get to use math to describe and explain trends, and even do some forecasting. This is how you generate data to base decisions on, to make a case for a grant project, and so on. Mistakes can be expensive, and in our state of constant evolution, we need all the solid information we can get our hands on.
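
As one illustration of the kind of everyday statistics described here, the following minimal Python sketch (the monthly circulation counts are invented for the example) summarizes a year of data and fits a simple linear trend to produce a rough forecast:

    import statistics  # standard library; linear_regression needs Python 3.10+

    # Hypothetical monthly circulation counts for one year.
    circulation = [4120, 3980, 4310, 4505, 4290, 3875,
                   3650, 3720, 4480, 4610, 4725, 4850]

    # Descriptive statistics: a plain-English summary of the year.
    print("mean:  ", round(statistics.mean(circulation)))
    print("median:", statistics.median(circulation))
    print("stdev: ", round(statistics.stdev(circulation)))

    # A least-squares line through the twelve months gives a crude trend
    # and a rough forecast for month 13.
    months = list(range(1, 13))
    slope, intercept = statistics.linear_regression(months, circulation)
    print("trend per month:", round(slope, 1))
    print("rough forecast for month 13:", round(intercept + slope * 13))

A straight-line fit is the simplest possible forecasting model; with strongly seasonal data (as library circulation often is), it should be read as a rough direction-of-travel estimate rather than a prediction.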

I think it’s a great group of courses. I want to thank you for designing them, and I look forward to some good interaction. I think we can close the interview here, but I will just add for readers that if you have any questions about these courses, feel free to contact Jennifer Sweeney at jennifer.sweeney@comcast.net.

Thank you for the opportunity, Rory. I am looking forward to working with everyone this summer and fall!
