Today I heard, for the first time, the phrase “information snacks,” which refers to small, easily digested chunks of information that readers gravitate toward because they’re too lazy to read an actual article. As with most terms related to the commodification of information, generally in reference to Web content, this one immediately turned my stomach. It sounds like the sort of phrase that someone in Knowledge Management (i.e. watered-down librarianship) would throw around during a meeting. “Let’s think of our website as the table, and our fat slovenly readers have just wedged their guts in between it and their office chairs. They are already beginning to salivate onto their keyboards, so what can we serve them as starters? Information wedges with ranch dressing, anyone?”
Since I have been doing way too much data entry lately for a person with a master’s degree, I have been thinking about automation and its role in information production, collection, retrieval, and dissemination. Inevitably, the more automation is introduced into information management, the greater the risk of so-called “dirty data”: information that has been soiled by odd characters, extra spaces, and other aberrations resulting from imports and exports between different platforms and programs, the automatic populating of fields, and the like. Cleaning up these aberrations is usually no small task, and therefore, to my knowledge, infrequently done. Many people consider the savings they gain from dispensing with the personnel no longer needed to input data to be worth what they see as a minor loss in quality. These people think it’s more important to get more information out on the table than for that information to be in the best “digestible” condition possible. I think of these people as the fast food vendors of information, and frankly, they disgust me. They are usually people who have wormed their way into positions of influence over how information will be disseminated without having the proper credentials for doing so (e.g. training and education in librarianship). Often they are people with sales, marketing, and advertising backgrounds who think that the package the information comes wrapped in is more important than the information itself. Never mind that once you’ve opened the package, you can’t find what you’re looking for, and if you’re lucky enough to finally do so, it’s marred by weird characters and gaping holes in the text.
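To make the “dirty data” problem concrete, here is a minimal sketch of the kind of clean-up pass that rarely gets done. The mojibake table, field values, and function name are my own illustrations, not drawn from any real catalog or system:

```python
import re
import unicodedata

# Illustrative table of mis-encoded sequences commonly left behind when
# UTF-8 text is exported and re-imported as Latin-1 (hypothetical examples).
MOJIBAKE = {
    "â€™": "’",   # mangled right single quotation mark
    "â€œ": "“",   # mangled left double quotation mark
    "Â ": " ",    # non-breaking space debris
}

def clean_field(value: str) -> str:
    """Repair a single imported field: fix mojibake, strip stray control
    characters, and collapse runs of whitespace."""
    # Repair the most common mis-encoded sequences.
    for bad, good in MOJIBAKE.items():
        value = value.replace(bad, good)
    # Drop non-printable control characters, keeping ordinary whitespace.
    value = "".join(
        ch for ch in value
        if unicodedata.category(ch) != "Cc" or ch in "\t\n"
    )
    # Collapse runs of whitespace and trim the ends.
    return re.sub(r"\s+", " ", value).strip()

# A made-up example of a field soiled in transit:
print(clean_field("The  Great\x00 Gatsby â€™26  "))  # → The Great Gatsby ’26
```

Real record clean-up is of course far messier than this (encodings have to be detected, not guessed, and MARC fields carry their own rules), which is exactly why it so often goes undone.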
I realize I’ve begun to ramble here, but there is a point. What happened to manufacturing with the Industrial Revolution is happening now with information. The Web has transformed how people think about and interact with information, and more than ever before it is being thought of as a product to be marketed and sold. In manufacturing, when the transition occurred from handmade products to machine-made ones, there was a tremendous increase in production numbers, but with an accompanying loss in quality and durability. This loss was considered to be of a collateral nature; it was offset by the huge increase in profit. Now, we see the same thing happening with information, but what are the implications of a loss in quality when it comes to information? It seems to me that there are potentially even more far-reaching effects than those that resulted from the Industrial Revolution. Often, the consumers of flawed information on the Web have no idea of the flaws, and so they take this information at face value. From working in a public library, I know that many people are indiscriminate in their consumption of online information. They do not know how to evaluate where the information is coming from; they think it is all legitimate. Getting them to the point where they can effectively evaluate the quality of the information is one step. But then it becomes even more important for the “respected” information purveyors (I’m thinking mainly of libraries and academic websites in general) to act responsibly in terms of the information they are disseminating, and how easy they are making it for their users to discover it.
Everyone is always looking for corners to cut, particularly where there is money to be made. Well, some things shouldn’t be done faster and more efficiently. Books should be cataloged by people, and catalog records shouldn’t be dumped into a library’s catalog from another source without first adapting them to local practice. Articles should be indexed by real live human beings using keywords from a thesaurus that was also created by humans. Search interfaces should be powerful, yet easy to use, and allow for searching with accuracy and precision. And the resulting information should be displayed in a clean and legible format. Is that so much to ask? Apparently it is.
Ben / November 21, 2007
That’s way too much information to process in one sitting from my Google Reader window. Couldn’t you have just provided a nice abstract at the top?