I recently had this question from Kelly:
I can really relate to your post as our organization has undergone upheaval after upheaval and is looking at another round of multiple changes. Some of the upheavals really could have been done better with an evolutionary approach. However, how does one implement new technology that includes new functionality without disruption?
For example, our portal has been evolutionary -- if anything it's been impeded by an incremental pace (but I'll leave that for a blog post!). However, MPOW is in the process of implementing a new telephony system that will include all employees -- not just call center agents. This software adds VOIP, headsets, presence awareness, and messaging for all employees. These are disruptive features that will require quite a bit of training. I know that if it could have been done incrementally -- it would be better accepted. Yet -- how can one do this in this example?
Great question and one that, in my opinion, illustrates that no matter how hard we try to integrate change into our regular organizational structure we will at some point always face moments of disruptive change. Disruptive change is not always bad! The situation you describe, moving to VOIP telephony, will be good in the long run. There may be growing pains, certainly, but the end result should prove the effort worthwhile.
But moving to a VOIP system is not something that is planned overnight, and it is this sort of big move that can benefit from having a structure already in place that allows an organization to implement change smoothly and with as little stress placed upon customers and staff as possible.
You want to build change into your organizational structure. Constant, smooth change -- evolutionary and not revolutionary -- better allows the organization to move forward without the seismic fits and starts so commonly associated with the major upheavals of discontinuous change. But this type of change is not easy to institutionalize. In order for this smooth change to become a hallmark of your organization you need to build it into every stage of your planning structure. By building this type of change into the organization's structure you will be better situated to deal with the occasional disruptive change that is bound to come along.
There are many ways to integrate change into an organization's structure, but my favorite way is to create an environment where customers and staff are involved in facilitating change and maintaining the ability to change at all levels. This is something that Laura Savastinuk and I have been thinking a lot about, and we discuss it in our upcoming book on Library 2.0. It's about taking the steps necessary to implement services while, at the same time, establishing a regular process for reviewing and evaluating the worth of those services.
This institutionalized change follows a three-step cycle:
All three stages require that staff from all levels within the library participate. They also call for customer participation or input. The key factor in the success of this plan is the use of vertical teams.
Vertical teams, very simply put, are team structures that include staff from all levels of an organization – from frontline to the director level, and everyone in between. Vertical teams, like vertical communications, serve to flatten the organization, reinforce the sense of worth for staff from all levels of the company, and instill a sense of responsibility that everyone feels towards everyone else.
Once someone from every level is on board, planning for a new VOIP system will be a much more open and inclusive project. Having frontline staff play a role in the planning and roll-out phases, as well as the training and post-roll-out evaluative phase, will mean that all staff will be discussing and thinking about the new service. This doesn't mean that everyone will be on board, but it does mean that none of the process should be "out of the blue" or dangerously disruptive. Staff from every level will be a stakeholder in the process and will play a role in the project's success or failure. Sometimes incremental change is simply not feasible, and the more open and inclusive your planning process can be the greater the reward in the end.
O’Reilly has taken steps to consolidate use of the term “Web 2.0”, claiming it as a service mark. This has caused several worried library folk to contact me regarding “Library 2.0” and its usage.
I first published the term "Library 2.0" in September of 2005. I have always considered the term "Library 2.0", used alone or in combinations such as "Library 2.0 Conference", to be in the public domain, usable by anyone, and not subject to trademark or service mark registration. I would hate to see this changed by anyone attempting to turn the term itself into a commercial venture.
For years we've been feeling depth charges rumbling throughout this profession: MARC, some say, should die, and traditional cataloging ain't lookin' too good, either. It's time to dis-integrate the catalog, weave it into the Web, and push forward to the future.
Coburn was one of the key players in the first Internet technology boom of the 1990s, and he’s a player in the new Web 2.0 surge as well. His company, Coburn Ventures, employs theories of change to target and invest in new technology companies. His book, The Change Function, argues that for technology ventures to succeed the level of change that is placed upon end users should be incremental or evolutionary, and not revolutionary. Technologies that rely upon the “build it and they will come” theory are bound to fail. Users, Coburn argues, are resistant to major change, and people are only willing to change when “the pain in moving to a new technology is lower than the pain of staying in the status quo”. In other words, if it’s easy enough to use and does something pretty valuable then it will succeed, no matter its price. Take the iPod for example, or Amazon.com.
Successful companies must look to their users and find out what they want. But the current technology industry, Coburn argues, is supplier-centric. They don't look to their users and try to find incremental improvements that users are willing to adopt on a large scale. Instead, most technology companies look for the big kill, the huge product that will "revolutionize" the way people do something. Unfortunately, as Coburn points out, users don't always want a revolutionary new do-everything satellite-enabled-PDA-talking-phone; sometimes they just want an easier-to-use mobile phone.
There are good counterarguments that can be made against Coburn's theory. Sometimes radical change is very quickly embraced, as was the case with the emergence of the web and the now ubiquitous home computer. But I want to take this discussion to libraries, and here is where I really get that "why did it have to be written" feeling.
Coburn’s argument parallels what many proponents of library change, including myself, have been saying all along – for change to be successful it must be continuous, regular, and almost imperceptible. Successful change is not the old school variety of change that comes every few years and is accompanied by massive upheavals, frightened staff, and upset customers. Successful change is constant change, and constant change cannot be discontinuous or fractured. Constant change is fluid; it’s evolutionary, not revolutionary.
Note: Eric Schnell at The Medium is the Message has an excellent post on Thomas Kuhn and paradigm shifts. Take a look.
Simple! Right? Of course, elements of Library 2.0 (sans the name) were being discussed, debated, and even in a few rare instances implemented before we had this conversation. But we now had a name for this new model for library service, and after a few more trips to Starbucks, Michael launched LibraryCrunch in September 2005. In October 2005 I attempted to Google "Library 2.0" and came up with nothing close to the concepts that we were discussing and were beginning to be discussed by others in the blogosphere. Now that same search yields a variety of results.
Library 2.0 faced early criticism before it could even be clearly defined. There was a very early critical analysis of the concept when it was still evolving which ultimately seemed to increase interest in the idea.
Library 2.0 has come a long way in the past 10 months. It has survived despite the controversy and criticism. We now have a clear, yet fluid definition: user collaboration, constant and purposeful change, and reaching the long tail. Librarians from around the world are discussing this concept. It is a topic at conferences, courses are being taught on it, there are a growing number of librarians blogging it, and books are being written on it, including one by Michael and me. It will be exciting to see what will happen over the next year with Library 2.0.
This post has been removed. On 5/19/06 I received an email from the CEO of the "1.0 Delivery Company" (herein referred to as "the corporation") that the ALA contracted to manage the Library 2.0 Boot Camp. That email led me to believe that legal action was being threatened.
I stand by my assertion that "the corporation" chose the wrong tools to teach a course on Library 2.0. I remain convinced that there exist better open source options instead of the proprietary and less flexible tools that "the corporation" chose. WordPress, Drupal, and certain tools from 37 Signals, as well as other options, would have allowed the students to use the same 2.0 tools in class that they will be using outside of class. This argument is very well made in the comments to a post on Wandering Eyre.
As one of several original writers on the concept and theory of Library 2.0, I believe I am well positioned to express my personal opinion regarding anyone's attempt to interpret and teach on this topic. My opinion continues to be that there are others better qualified to manage a course on Library 2.0. That being said, I have no desire or intent to defame or harm "the corporation", and in fact I do not believe that I have harmed or defamed "the corporation". However, I have no desire to meet "the corporation" in court, despite the fact that I believe my criticism of their business practices was protected speech.
See also Flickr blog.
Lots of cool new things such as pull-down menus from member icons and some slick new navigation menus. Plus, Organizr no longer requires Flash and uses JavaScript instead.
Michael Stephens and Jenny Levine are teaching a course for ALA titled Library 2.0, and as part of that course they asked me to submit a brief podcast discussing John Blyberg’s four Transformative Realms and how several things at my branch library fit into that model.
The volume is a bit low, but that’s my fault and not theirs!
Or download MP3 and listen at your leisure.
If you believe, as I do, that there is a crisis in library education that threatens the very existence of libraries and librarianship, you are likely to draw a negative reaction from a variety of people. First, there are the millenniarist librarians and pseudo-librarians who, intoxicated with self-indulgence and technology, will dismiss you as a "Luddite" or worse. They and their yips and yawps can safely be left to their acronymic backwaters and the dubious delights of clicking and surfing.
--Michael Gorman in American Libraries, May 2006
I have yet to stray into the overtly political on this blog, and I make every effort to be as open and accepting as possible with all viewpoints. I certainly understand that my enjoyment of technology is not shared by all librarians, and I make it my goal when discussing technology to do it in a way that is neither complicated nor evangelical. Mr. Gorman's words are frighteningly elitist and epitomize the arrogant and condescending parochialism that many of us are fleeing.
EDIT: As of this writing the above piece is not on the ALA American Libraries public page. However, you can read the article on ebrary if you are an ALA member. The article is also available on ProQuest Research Library.
We all have traditional library practices we hold dear. For some people it's being able to offer classical literature to the reading public. For me it's knowing that our customers can ask us almost anything and expect a good and solid answer. But we also have sacred cows -- things we never question and hold onto far longer than we should. For me, the two items below are sacred cows I would like to see changed. Please remember that I am speaking from a public library perspective, and your opinion may (almost certainly) differ.
1. Why do we place author-name spine labels on hardcover books when staff and customers can find the same books without the labels? How much do we spend each year to make and place (and replace) these labels -- not just sticker cost but staff time? I like genre labels, but author labels?
For example, of these books waiting to be shelved only one has a spine label. Are spine labels needed on the other two?
2. Is Dewey still serving us well in non-fiction? If you have genre labels on fiction books, why not have subject labels on non-fiction books and then simply shelve your materials by subject area (and by author within each subject area)? Imagine having all the computer books in one place, the baby name books with parenting, and Plato and Aristotle and Nietzsche all in one area called philosophy. Too radical?
Are we really keeping our "collections organized so that [our] users can easily locate the resources they need"? Perhaps very large facilities can justify Dewey. But does Dewey classification best serve buildings under thirty- or forty-thousand square feet?
Are we spending too much time trying to force our users to utilize Dewey -- and losing too many users despite our effort? If we really want to make our users comfortable and serve them well (and allow them to serve themselves well) then perhaps some of us need to reexamine our shelving and labeling methods.
Certainly there are many arguments against subject-based non-fiction shelving (notice I did not say bookstore-style shelving). When you scale to very large collections -- say several thousand titles on flowers -- then you probably want a more rigid structure. But if you have collections on that scale then you probably also fall outside 90% of today's public libraries.
“It’s as much today about the channels as it is about the information...it seems to me the notion of information literacy has moved beyond merely knowing, accessing and evaluating the usefulness of information. Today it's also about having a comfort level with the preferred channels (& formats) that our users expect to be able to find and retrieve information in. These channels include, among others, RSS feeds, Instant Messenger, SMS/Text Messaging, podcasts, videocasts, wikis, remixes, mashups etc.”
There exists today such a huge number of channels that we need to be aware of. Some may argue that this quantity is far beyond the ability of the frontline librarian to manage. But I wonder about the way this information map really works. If these channels are viewed as paths or roads to a single destination -- "the answer" or "the resource" -- then the librarian can still play a vital role in assisting customers in finding the best road to reach that destination. But knowing the best road is not enough, as not everyone likes taking the highway.
I still get the vast majority of my information from print and online reading -- literally crawling the print publications or websites I prefer and looking for interesting nuggets here and there. I know how primitive this is, but I've used Netvibes and FeedBurner and Bloglines and, in the end, RSS aggregators always end up leaving me feeling overwhelmed and underinformed. I still make daily use of Netvibes, but if I look at my information flow I see that I use email and IM (and the social networks that go with those two tools) as my secondary information channels right after my print and online reading. RSS also falls into this second tier. Following as my third tier of information channels are Flickr, Wikipedia and numerous other wikis, text messaging, and the telephone (be it Skype or Vonage).
What I'm driving at here is that I, and everyone else, choose the channels that I am most comfortable using. I think Helene is right on when she says "it's ...about having a comfort level with the preferred channels". This does not mean I choose the best channel for any one particular type of information, or even that I choose the channel rationally. But the librarian -- behind the desk or online or wherever -- must deal with the reality of my channel choices and work within the boundaries I have drawn. If I want information on coping with a dependent parent, and I am uncomfortable with scholarly materials or simply feel more comfortable in a less formal information environment, then I may be shown several social networks, wikis, and podcasts that address my concerns. The librarian directs me in this manner because I am far more likely to see these more personal tools as dependable resources for my information needs. The librarian's role here is to direct me to credible social tools, online or local, while at the same time reminding me that there are other, more scholarly and perhaps more factual resources available should I desire.
Social Searching, another Helene Blowers post, fits here in conjunction with Microsoft's recent announcement of Windows Live QnA. This social search service has the capability of being both a very broad search engine and a far more narrow, circle-of-trust style search engine that will allow users to choose those who they think are best situated to serve their information needs. In this case, the role of the librarian may extend to all of us, following the idea that in today's online environment "Everybody has a librarian inside".
I didn't intend to write this rambling piece but Helene's posts made me think. Yesterday's examination of knowledge management still has me thinking about ways to capture and preserve and, most importantly, disseminate the great amount of information we deal with daily. Plug in the need to be aware of several channels for reaching that information and our task increases substantially.
Your desk librarian spends twenty minutes answering a question on voter turnout by demographic in your county.
The library’s telephone help line steers a student to a unique online resource to help him answer a complicated homework assignment question that seems to come up every year.
Your IT department troubleshoots and solves a customer’s emailed question regarding a problem they had accessing your catalog through the newest version of Firefox.
Three questions, three answers. The people who took the time to answer these questions were proud of their work, and rightfully so. They were faced with difficult or complicated queries and they invested the time and expertise to find the correct answers. As librarians we do this every day. We find answers or solve problems for our customers, and most of the time we do a very good job.
But then we fall down. We invest the time, which is really money, and then we don't bank the investment. The individual librarian with the explicit knowledge may remember the transaction and be able to use it in the future, and may even share it with a coworker or two, but ultimately the answer and the question that provoked it will be lost.
Librarians are, in general, not good at knowledge management. We fail to adequately capture the knowledge we create so that it can be used over and over again, either by librarians or library customers.
Take, for instance, the above example of the emailed question that the IT department fielded. IT personnel could have entered both the question and answer into a knowledge base that would then have been accessible to anyone having the same question. Likewise with so many of our desk, telephone, and IM questions – we excel at answering but fail at managing those questions.
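The capture-and-reuse step described above can be sketched in a few lines of code. This is only an illustrative toy, not any particular product; the `KnowledgeBase` class and its method names are invented for the example:

```python
# A toy knowledge base: capture each answered question once,
# then let anyone searching later reuse the work instead of
# re-solving it. (Illustrative only -- names are invented.)

class KnowledgeBase:
    def __init__(self):
        self.entries = []  # each entry: question, answer, and tags

    def capture(self, question, answer, tags):
        """Bank the transaction so the invested time isn't lost."""
        self.entries.append(
            {"question": question, "answer": answer, "tags": set(tags)}
        )

    def search(self, term):
        """Return every stored entry whose question or tags match."""
        term = term.lower()
        return [
            e for e in self.entries
            if term in e["question"].lower()
            or term in {t.lower() for t in e["tags"]}
        ]

kb = KnowledgeBase()
kb.capture(
    "Catalog won't load in the newest Firefox",
    "Clear the cached stylesheet; fixed in catalog patch 2.1.",
    ["catalog", "firefox", "IT"],
)

# A later librarian (or customer) finds the answer instead of
# repeating the troubleshooting from scratch.
matches = kb.search("firefox")
print(matches[0]["answer"])
```

In practice this role is filled by a wiki, a help-desk knowledge base, or similar software; the point is simply that the question and answer are recorded once and become searchable by everyone.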
Into this equation we need to add the internal knowledge that our library holds. You know the department head who’s about to retire? How much tacit knowledge does she have that has not been harnessed, and how can we turn it into explicit (and captured) knowledge? Multiply this by every individual in your library and you’ll quickly see that there is a huge amount of knowledge out there that can very easily be lost.
So what can we do? Perhaps the first thing we need to do is take a close look at how much money every question/answer transaction actually costs us. With this figure in hand we will comprehend the amount of money (and intellectual capital) lost with every transaction, and knowledge of this large amount may serve as incentive to change. Then we need to look at those veteran staff who harbor such a wealth of tacit knowledge and think about the loss we would suffer if they left without imparting their knowledge to others.
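To make the cost point concrete, here is a back-of-the-envelope calculation. The hourly rate and minutes are invented placeholders, not real figures from any library budget:

```python
# Rough cost of one reference transaction.
# All numbers below are hypothetical placeholders.
hourly_rate = 25.00   # fully loaded staff cost per hour, in dollars
minutes_spent = 20    # e.g. the voter-turnout question above

cost_per_transaction = hourly_rate * minutes_spent / 60
print(f"${cost_per_transaction:.2f}")  # about $8.33 for a single answer

# If the same question is re-answered from scratch ten times a year,
# the cost is paid ten times over; captured once, it is paid once.
annual_repeat_cost = cost_per_transaction * 10
print(f"${annual_repeat_cost:.2f}")
```

Even at these modest assumed figures, one uncaptured twenty-minute answer that recurs repeats into a real line item; multiply by every desk, phone, and IM question and the incentive becomes clear.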
There are numerous technologies available to assist us in better managing our knowledge. Knowledge bases, document and email management, wikis, and blogs can all provide assistance. Another area to look to for guidance is the commercial help desk where corporations strive for maximum efficiency and effectiveness. As a service organization we have the luxury of being able to invest more time into providing better, higher quality answers than most commercial help desks, but this does not mean that we cannot learn from help desks and use the tools they use to better serve our customers.
By laying out clear goals we can better visualize where we want to go. A program aimed at managing the internal and external knowledge within our libraries could begin with these simple goals:
One other goal for any library serious about managing knowledge is the creation of one centralized authority responsible for system-wide KM initiatives. This could be a new position (such as a Chief Knowledge Officer) or, more likely, a newly assigned responsibility within an existing department. The key, as with any new initiative, is to have one person (or team) that is responsible and accountable for that initiative. Spreading such a large goal out among many different departments or personnel is simply a recipe for failure.
Any serious knowledge management initiative will not be easy. Training staff to record their knowledge is difficult but not impossible. New technologies can facilitate this effort. Upfront costs may also seem high until you analyze the actual costs involved in not capturing this knowledge – and therefore repeating over and over again the same reference transactions, the same studies, and the same projects and investigations.
It looks like one of Microsoft’s newest services is about to launch. We got to see this when I was out in Seattle in January but we were under a non-disclosure agreement.
Called QnA, this new service will harness the expert in everyone (well, at least on one subject, right?) and offer answers to questions, much like Google Answers or Yahoo Answers.
The new service goes into private beta soon, so I’ll write more about this as I get to test it. In the meantime, here are some screenshots courtesy of Microsoft:
This new Windows Live Search vertical offering will help consumers simply find what they need, from anywhere, by providing a place for people to ask any question, get credible answers and vote on the quality of the responses on any given topic from a large community of helpful people, not just experts. This service will allow consumers to tap into the power of the online community by facilitating a melting pot of human knowledge that isn't easily accessible or available on the Internet. Topics will range from business, health, arts, sports, technology and more, such as:

* Does ivy kill trees?
* What's a good, inexpensive moving company in Seattle?
* Any great ideas on getting motivated to exercise?
* What's the best chocolate chip cookie recipe?
Key features include:
* In a one-to-many system, consumers may pose questions to the Windows Live QnA community, thereby creating a store of human knowledge containing facts, opinions and experiences on topics ranging from business, health, arts, sports, technology and more.
* People can then rate answers, and reputation-based scoring is available so you and others know which sources are most reputable.
* Questions are tagged so others can easily find similar or related questions and answers to learn from.
* The ability to mark and remove inappropriate content.
Ultimately, QnA will be deeply integrated with Windows Live Search, providing a rich, integrated searching service -- enabling you to search and find answers on the Web, or from experts on a given topic, as part of a vertical search experience.

Windows Live QnA beta is the latest example of our efforts to continue to redefine search to make it faster and more relevant for our consumers with live connections to information they want. We want to put the consumer in control of their search experience, customize it for their context, present search results in a usable format, and empower users to make their own choices.
See also coverage from SearchEngineWatch.
The first article, “Smells Like Teen Progress”, looks at how Condé Nast Publications is about to launch a website for teen girls whose content will be primarily created by its users. “It may be premature to call this the dawn of Digital Magazine 2.0, but at least in some quarters Digital Magazine 1.5 isn’t wholly far-fetched.”
The other article, “O Click All Ye Faithful” (BW really is good at article titles), looks at Sister Judith Zoebelein, a former resident of Long Island and now the Vatican’s editorial director of the Internet Office of the Holy See. Sister Judith is making it her goal to create a MySpace-like website for Catholics to interact. “Collaboration is the key…people will be able to find each other and work together online, and then go back and use what they have learned or done in their own communities”.
[At this time, both articles are subscriber-only. Links will be posted as the articles become available.]