Tuesday, July 18, 2006

Is virtual reference successful? Part II (Hint: yes it is)

In Part II I'd like to respond specifically to two comments. The first is from Morgan Fielman; the second is from Pascal Lupien, author of the article I'm discussing.

RESPONDING TO FIELMAN

Commenter Morgan Fielman wrote, "The original poster seems to have missed the point of this article, which is primarily about software."

No, I get that the point of the article is primarily about the software and not the customer experience. But the article is so broadly written and lacking in detail that it ends up saying nothing more specific than "VR software has problems."

My questions, unanswered by the article, are: What software? What problems? Two of the products Lupien writes about (Tutor.com's software and QuestionPoint) recently underwent complete overhauls, in effect becoming entirely new products. It is unclear from Lupien's article which versions he's writing about, but my sense is that it's the older ones. If that's correct, then most of the article is, at best, moot.

Granted, the larger issue of whether or not the software is effective is valid and warrants exploration and discussion. Fielman goes on to ask, "but how can customers be satisfied when the software we use is so poor?" I say that's the wrong question. The question is, "Are customers satisfied?" In our customers' experience the answer is yes, they are satisfied. We didn't find this out by polling 20 libraries. We found this out by asking the customers.

Another good question might be, "Do the problems with VR software affect the quality of the customer experience, and if so, how and to what extent?" There are many people at collaborative VR services looking at a lot of data to answer that question. Lupien's article suggests that problems with software affect the customer experience but offers no actual data to back it up. He mentions problems with popup windows, problems with Windows Service Pack 2, and problems with serving customers who use Macs, but he is not specific about which software products exhibit which problems, or to what extent. And again, Lupien is not clear which versions of Tutor and QuestionPoint he's talking about. The newer versions of both products work for Mac users and have no Service Pack 2 issues that I'm aware of.

Fielman concludes his comment by saying "original VR supporters have realized that this service just isn't cutting it." The fact is our service has been cutting it for almost 5 years, and we have the hard data and glowing customer comments to prove it. If your VR service isn't cutting it, you need to ask why. Are your staff trained on the software? Are they enthusiastic? What are your customer service standards? Do your librarians give kick-ass customer service in f2f encounters? What quality control mechanisms do you have in place? Do you examine your transcripts for quality? Do you have regular and convenient service hours? Are you available 24/7? (Going 24/7 made a huge difference in our usage, even though usage mostly grew during hours we were already open -- go figure…) And finally, but certainly not least: do you consistently and effectively market your service to your customers? Do they know you exist???

If your service ain't cutting it, maybe you need to answer these questions before blaming the software, which is an easy way out. Consider that here in New Jersey, using standard VR software (currently QP, formerly Tutor/LSSI's eGain-based software), we're cutting it and then some. Other statewide collaboratives are doing quite well too. And we're all working very diligently with our respective vendors to ensure that our VR platforms are stable and highly functional. While the occasional glitch here and there can be a real and undeniable pain in the ass, it hasn't prevented us from delivering a high quality and slightly mind-blowing experience to our customers.

RESPONDING TO LUPIEN

First, I'd like to thank Pascal Lupien for taking the time to offer an extremely well-written and thoughtful comment in response to my first post. I'd like to assure him that, contrary to his assertion, I've read his article thoroughly, a few times over. I have no problem with bad news about VR; I just want accurate and somewhat substantiated news. I'm offering up the reality of my experience at QandANJ to counter the broad statements that Lupien makes. Now to some of his specific comments.

He writes, "Perhaps these results aren't what proponents of VR would prefer to hear, but they do represent a problem that needs to be discussed, for the sake of our users."

I do not consider myself a proponent of VR; I consider myself a proponent of libraries. It is my desire that libraries remain relevant to our customers by offering a suite of high quality services. Collaborative VR is one such service, offering our customers 24/7 access where and when they want it. I want to see libraries changing their customers' perceptions about what libraries can offer them. I want libraries to blow customer expectations out of the water. I want libraries to be around in 50 years. It is not that I don't want to hear bad news about VR software. I'm perfectly open to hearing about the problems with the current stable of VR software offerings. It's just that I want to hear facts, not conjecture. And I want those facts to be couched in some meaningful context and always tied back, to whatever extent possible, to the impact on our customers. I didn't get this from Lupien's article.

Lupien writes, "To respond to the person who claimed that software is the last thing that matters about VR, I say tell that to the user who is unable to log in because she uses a Mac, or because her computer has pop-up blockers. Tell that to the user who is 'kicked off' in the middle of a session because the VR software does not function properly with the library's licensed databases. These things happen regularly, and this article makes an attempt to discuss them."

I'm pleased to see Lupien talking directly about the impact on customers. Clearly we agree that it would be optimal if VR software worked across all platforms, had no problems with pop-up blockers, and worked 100% of the time so no user was ever "kicked off." I am not suggesting that these problems don't exist; I am asking to what extent they exist, and to what extent they impact the customer's experience and satisfaction with VR service. Because Lupien fails to identify what versions of the various VR products he tested, and is repeatedly non-specific regarding his data, the article fails to answer these questions.

Lupien grants that "many regular VR users appreciate the service," and says he wasn't contesting that fact. Our experience suggests that it is not "many" but most.

Lupien writes, "Shouldn't we be thinking about these potential users as well, rather than focusing on those who already use and appreciate the service? Shouldn't we be trying to determine if one software product could help us to improve the experience for all users, not merely the satisfied ones? Perhaps some would fear doing this, as it would reveal that their VR service isn't as successful and user-friendly as they like to claim?"

Yes, we should absolutely be thinking about our potential users, and we should always be shooting for a platform that will provide high quality service to everyone. Again, it's a matter of facts and context. Lupien's article disappoints me on both counts.

Lupien writes, "The point of this article is to focus on users who are unable to log in to begin with, who encounter technical problems during a transaction, or who choose not to use the service because they would be required to disable pop-up blockers or use a particular browser, etc. We'll never know how these users feel about VR, because they don't get far enough into a VR transaction to make…comments."

Actually, we do have a way of knowing. We ask. Yes sir, right there on the front page of QandANJ we say, "Click here to give us feedback on how our new software is working for you." Here's a sample of what we find: since May 1st (79 days), we have received 23 comments. 16 of them were specifically technical (some were positive; some were of the nature "it wasn't fast enough"). One comment came from a Mac user; 3 came from customers accessing us through the AOL interface and browser. So Mr. Lupien, we do make an effort to compile and monitor such information, looking for problematic trends with an eye on improving the service.
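
For the curious, here's a minimal sketch of the kind of tallying this involves. It's in Python with made-up records and category tags; our actual feedback form, database, and field names aren't shown here, so treat everything below as illustrative.

    from collections import Counter
    from datetime import date

    # Hypothetical feedback records -- the real QandANJ feedback storage
    # isn't public, so these fields are stand-ins for illustration.
    # Each entry: (date received, staff-assigned category, comment text)
    feedback = [
        (date(2006, 5, 3), "technical", "it wasn't fast enough"),
        (date(2006, 5, 9), "mac", "couldn't log in from my Mac"),
        (date(2006, 6, 2), "aol", "window froze in the AOL browser"),
        # ... the rest of the 23 comments would go here
    ]

    # Tally comments per category to spot problematic trends.
    counts = Counter(tag for _, tag, _ in feedback)
    total = len(feedback)
    for tag, n in counts.most_common():
        print(f"{tag}: {n} of {total} comments ({n / total:.0%})")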

Finally, Lupien suggests that I have not been keeping up with the VR literature and if I had "taken the time to consider some of the issues discussed in this article before jumping on that user-centric high horse" I would have "come away with a better understanding of what is happening beyond QandANJ."

I can assure Mr. Lupien that I keep up quite well with the VR literature, thank you, and I'm familiar with Coffman and Arret's article, which you can read here (right at the bottom of the page, after Brenda Bailey-Hainer's reasoned response). And if speaking from a place of fact and experience instead of conjecture and generality puts me on a high horse, then what can I say? Giddyup.

In Part III (much shorter, I promise) I'll address the VR software versus IM question.

Epilogue: Customer comment from today: "I am exceedingly impressed. First time in ages I felt like I was getting something positive for my tax dollars." (Our funders sure hate to see this... Ha Ha.)

Wednesday, July 12, 2006

Is virtual reference successful? Part I (Hint: yes it is)

Pascal Lupien begins his recent article on virtual reference ("Virtual Reference in the Age of Pop-Up Blockers, Firewalls, and Service Pack 2," Online, Jul/Aug 2006, Vol. 30, Issue 4) by declaring that "the evidence indicates that libraries are not satisfied with the service." Say what? Aside from the fact that the statement is so overly broad as to be false on its face (which libraries? which services?), it's not about whether the libraries are satisfied with the service, IT'S ABOUT WHETHER THE CUSTOMERS ARE SATISFIED WITH THE SERVICE.

The fact that Lupien goes on for nearly 3500 words with nary a mention of customer satisfaction epitomizes to me the worst of librarian-centric thinking at the expense of customer experience. 3500 words with:
  • No mention of how VR customers love and rave about the convenience of the service.
  • No mention of how VR customers love and rave about having a live person available to assist them with their information needs.
  • No mention of how VR has changed our customers' perceptions of what libraries can offer them.
  • No mention of how VR has helped make libraries more relevant to our customers by meeting their needs and exceeding their expectations.

I am feeling weary after reading Lupien's article. Weary because there is so much wrong with it that it almost demands a line-by-line critique in the spirit of Twain on Fenimore Cooper. Well, Lupien isn't Fenimore Cooper and I'm certainly not Twain, and besides, I'm really, really tired.

So let me address a few errors, raise a few eyebrows (two, to be precise) and share some of my own experience - uh, make that our customers’ experience - with VR via QandANJ.

A moment to share my creds: I've been involved with QandANJ since its inception in 2001 (before that, actually), helping to build, manage, and promote the service. I've looked at thousands of transcripts and thousands of customer feedback forms. I know that our usage is through the roof. We handle as many "calls" as we can, limited only by our ability to offer deeper staffing. I know that our customers tend to be very satisfied, and I know WHY our customers tend to be very satisfied. If you want to delve deeper into our stats and findings, take a look at this presentation from the VRD Conference in 2003 (there's more here). The numbers may be a little dated, but the story they tell and the trends they point to remain just as true today.

I'm not making this stuff up... Here's one of my favorite comments:

[screenshot of customer comment]

If you think this is cherry picking, it ain't. We get our share of negative comments too (usually from younger users, usually wanting "faster, faster, faster" service). The reality is our customers are happy. Why? Here's what they tell us:

[screenshots of customer comments]

We have hundreds of single-spaced pages with thousands of comments that go on and on in this vein. There are many other successful collaborative VR projects, like those in Maryland, Colorado, and Cleveland, that could show you similar comments from their satisfied customers. The challenge isn't attracting the customers, it's growing the staffing of the service to keep pace with the demand!

In part 2, I'll get a bit more nit-picky with other elements of Lupien's article.

Saturday, July 08, 2006

International Live Chat Study finds 30% Ready Reference Questions

Although many folks have declared Ready Reference to be dead in this Googleized reference environment, we reported preliminary results of a large international study that found 30% of live chat questions to be of this type! Another interesting finding: users were inappropriate (rude, impatient, goofing off, inappropriate questions or language) in less than 1% of the transcripts!

Since returning from ALA in New Orleans I have been scrambling to get caught up (OY!) and have been wanting to post these findings and others reported in 2 presentations that Lynn Connaway (of OCLC) and I gave at the conference. The PowerPoint slides have just been posted to the "Seeking Synchronicity" grant website and can be viewed by clicking on the links below. Both presentations provide preliminary results from our 2-year grant project, supported by IMLS, which is now 3/4 of the way through its first year.

The 1st presentation was for the QuestionPoint User's Group Meeting on 6/25/06 and was called "Seeking Synchronicity: Evaluating Virtual Reference Transcripts"

This presentation discussed the results of our analysis of 256 randomly selected live chat transcripts from QuestionPoint and of the 7 focus groups we conducted (with live chat reference users, non-users, and librarians). As can be seen in the ppt, we did the following analyses of the chat transcripts; here's a glimpse of some of our results (see the ppt for more, and a quick illustrative sketch after the list):

  • Geographical Distribution: the most questions were received by California, Maryland, and Australia; the most questions were referred/answered by California, Australia, and Maryland.
  • Type of Library Receiving Question: most by Consortium, Public, University, Medical, Law, State.
  • Type of Question Asked (using Katz/Kaske/Arnold categories): Subject Search, 37%; Ready Reference, 30%; Procedural, 25%; Inappropriate, <1%.
  • Subject of Question (using Dewey Decimal Classification): Social Science, 42%; History & Geography, 21%; Science, 11%.
  • Service Duration: mean 13 min. 53 sec.; median 10 min. 37 sec.
  • Interpersonal Communication: found dimensions that facilitated or were barriers to positive chat interactions (see ppt for more detail).
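
Just to make the mechanics concrete, here's a toy sketch of how coded transcripts yield the kind of distribution and duration figures above. It's in Python with invented records; the study's actual coding sheet and data aren't reproduced here, so every field name is illustrative.

    import statistics
    from collections import Counter

    # Invented sample records -- stand-ins for the study's 256 coded transcripts.
    # Each transcript is coded with a Katz/Kaske/Arnold question type and a
    # measured session duration in seconds.
    transcripts = [
        {"qtype": "Subject Search", "duration_sec": 833},
        {"qtype": "Ready Reference", "duration_sec": 412},
        {"qtype": "Procedural", "duration_sec": 1290},
        # ... the real sample has 256 of these
    ]

    # Percentage distribution of question types across the sample
    counts = Counter(t["qtype"] for t in transcripts)
    n = len(transcripts)
    for qtype, c in counts.most_common():
        print(f"{qtype}: {c / n:.0%}")

    # Service duration: reporting both mean and median is sensible, since a
    # handful of marathon sessions can pull the mean well above the median.
    durations = [t["duration_sec"] for t in transcripts]
    print(f"mean: {statistics.mean(durations) / 60:.1f} min, "
          f"median: {statistics.median(durations) / 60:.1f} min")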

The 2nd presentation was for the Library Research Round Table forum on 6/24/06 and was called "Face-Work in Chat Reference Encounters." We've analyzed an international sample of 226 live chat transcripts from QuestionPoint using a framework from the work of the great sociologist Erving Goffman (yes, Erving, not Irving!). Results of this research provide us with a way to understand how important ritual behavior (like greetings, closings, apologies, polite behavior, etc.) is in live chat as well as in face-to-face reference encounters.

Hope the above tantalizes you enough to take a look at the ppt slides. Handouts will also be available soon at the grant website!