Serendipity? Fate? Coincidence? Regardless of how you define the pattern of events that led to the Health Science Honor Society, Eta Sigma Gamma, pitching their organization at the same class I was prepping to teach, I have to say that the final result has been great. As part of some new initiatives, we’ve been exploring new areas for developing partnerships with other groups on campus. Eta Sigma Gamma had just published an article about distracted driving attitudes and behaviors here at our campus. With the option of either starting a new research project or developing their paper into another form of scholarship, we seemed to have found a match. In various meetings and email exchanges, the students, our display coordinator, and I were able to develop display ideas. Here are the results of the project!
With scholarly communications on the brain, I was thrilled to learn about Microsoft Academic Search (MAS). Building on the work of resources like Google Scholar, which gathers and indexes journal articles, and Scopus and Web of Science, which gather information about how and where articles are cited by other researchers, Microsoft Academic Search brings these two functions together in a *free* search. MAS also enhances the information with direct comparisons of scholarly output by organization and department. Some features, like the Call for Papers (CFP) Calendar, are very much in beta but could become an overwhelmingly useful tool for academics managing the many, many CFPs and conference submission deadlines.
To better understand my resource comparisons, I took a sample author and compared the citation results in MAS and Scopus; images of my search results appear in the slideshow below. The author profiles in MAS and Scopus both listed four publications, suggesting that perhaps this information was correct. Yet, when I looked at the four articles listed in each, I found that MAS had a duplicate entry for one article and, therefore, completely missed a publication. Using the Help on MAS, I could see all of the publishers they are working with and identify that the missing article was from Wolters Kluwer/Lippincott Williams & Wilkins. For the health sciences, nursing, and medicine, they are a pretty major publisher, so the absence of their information in MAS would be a significant hindrance. At the same time, the openness with which Microsoft lists their sources made tracking this down easy.
At the article level, in Scopus, the article “Breast cancer disparities and decision-making among U.S. women” was cited 24 times since its publication in 2007 (although Scopus lists the default “Cited by since 1996”) and has 84 references within the paper. In MAS, only 60 references and 15 citing papers are listed. What happened to the rest? This may be one of the drawbacks of the *free* service. Looking a little closer at the details, it appears that Scopus is more complete, while MAS currently stops at about 2010 for its citations. I’m guessing a similar limitation may be the issue with the references. While the resource appears to be in a very early beta phase, may need some data corrections, and currently still doesn’t seem to account for other measures of impact such as social media sharing, it looks like I have another new toy to share with faculty in the Fall….
For updates on MAS, you can stalk, er, follow them more directly @MSFTAcademic
Stemming from a conversation with one of my faculty members, I began trying to define, explain, and provide support for the concepts of impact factors and other journal/article evaluation tools. Being a smaller, more instruction-focused campus, we don’t currently use impact factor ratings as part of the scholarship evaluation of our faculty. However, more and more, my faculty are collaborating with researchers at other institutions, so they (and I) need to know how to speak the language of modern scholarly communication. To help you field questions in this area, here are a few key terms to be familiar with and tools you can use to support your curious faculty.
- measures the number of citations the average article in a journal receives over a span of about two years
- started in the 1960s by Thomson Reuters
- originally used as a collection development tool to help identify most popular journals for library purchasing
- impact factor for a journal is now incorporated as a way to evaluate individual article impact, influencing where authors try to publish
- Problem: its influence peaked around the 1990s, as the advent and increased use of the internet have moved people away from print resources and away from journals as the sole venue for scholarly communication
- Journal Citation Reports (for JMUers) – Thomson Reuters tool to look up impact factors
- Journal Citation Reports (Product site)
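The two-year impact factor described in the list above is ultimately simple arithmetic. Here is a minimal sketch, using made-up numbers for a hypothetical journal rather than real JCR data:

```python
def impact_factor(citations, citable_articles):
    """Two-year impact factor: citations received this year to articles
    published in the previous two years, divided by the number of
    citable articles published in those two years."""
    return citations / citable_articles

# Hypothetical journal: its 2008-2009 articles were cited 450 times in 2010,
# and it published 150 citable articles across 2008-2009.
print(impact_factor(450, 150))  # → 3.0
```

This is why the measure favors fields with fast citation cycles: only citations to the two most recent years of output count toward the numerator.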
Eigenfactor – “measure of the journal’s total importance to the scientific community” – aka big journals=big scores
Altmetrics – tracking systems that attempt to note not just electronic article usage in digital venues like Twitter or CiteULike, but also other information resources like datasets or blogs. This is tough to tackle, but the various tools below are starting to develop some interesting methodologies
- Altmetrics look up tools
- At the Medical Library Association conference in Seattle, I also saw an awesome poster by Drew Wright about Altmetrics.
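At its core, the altmetrics idea described above is an aggregation problem: collect mention counts from many sources and combine them into a single indicator. Here is a toy sketch of that aggregation; the source names and weights are invented for illustration, and real tools use their own data and formulas:

```python
# Invented weights for illustration only -- not any real tool's formula.
WEIGHTS = {"twitter": 1, "citeulike": 2, "blog": 5, "news": 8}

def altmetric_score(mentions):
    """Combine per-source mention counts (dict: source -> count)
    into one weighted score; unknown sources are ignored."""
    return sum(WEIGHTS.get(source, 0) * count
               for source, count in mentions.items())

# A hypothetical article mentioned 12 times on Twitter, twice on blogs,
# and bookmarked 3 times on CiteULike:
print(altmetric_score({"twitter": 12, "blog": 2, "citeulike": 3}))  # → 28
```

The hard part, as the tools above are discovering, is not the arithmetic but deciding what counts as a mention and how much each source should weigh.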
So, like many things in the digital age, the increased retrievability and shareability of research is changing how we consider the value of research. In the full circle of things, I wonder how these other metrics, particularly Altmetrics, can impact our collection development, too. I look forward to discussing these concepts and more at the ACRL Scholarly Communications Roadshow at JMU.
Google, the elephant in every room right now, is starting to see the rewards of its large-scale digitization project. First, the Google bookstore has opened up, fostering a potential rivalry with Amazon. From the minimal spot checking I’ve done of titles and price comparisons, Google and Amazon seem evenly matched at about $9.99 a digital book. The main differences are the method of reading and accessing your works and which devices will or will not work with your new eBook. Amazon likes to have you download your work with the option of syncing information back to its cloud; Google wants you to read in the cloud and download only if necessary. Either way, both are designed for multi-platform reading, so you can start your book on your iPad and then continue right where you left off on your iPhone when stuck waiting at a doctor’s appointment, for example. In short, no more clunky carrying. However, the differences between the two remain in platform accessibility. Google works with a lot of devices, but not some of the major players like Blackberries (WTF?!) and Amazon’s Kindle (less of a shock here). Amazon’s establishment in the market and Google’s non-development of a hardware device still make Amazon the ubiquitous eBookstore. Further research TBDAG (To be done after grading).
The other, less flashy Google news is the growing use of the scanned Google books to develop what the New York Times calls Humanities 2.0, or the statistical analysis of word usage over time within works. Below is a sample of what you can do with such statistical analysis:
Having been a literature person, I don’t find this trend all that new. Pawing through concordances of Dante’s Commedia, such as Terrill Shepard’s, is still common in literary analysis these days. Through these methods, scholars have identified trends in word usage, such as Inferno, Purgatorio, and Paradiso all ending with the word stelle (the Italian plural for stars). What a lovely way to end a religious epic! (And what a lovely name for a girl….) However, for works that have not been read millions of times over seven centuries, Google’s ability to digitize both major and minor works from the Victorian age and allow similar linguistic analysis has vast potential to revolutionize the world of literary research. That said, I think the “Handle with Care” concept is also useful to make sure we don’t lose the art of literary analysis and criticism, since statistics are only part of accessing a culture, a time, and the psyche of a character and author. I’m intrigued to see how this vast plethora of information will continue to transform literature and what we think of as research today. What are your thoughts? Is all of this a boon or a bust?
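The kind of word-usage-over-time analysis described above boils down to counting a word’s relative frequency per year across a dated corpus. Here is a minimal sketch with a tiny invented corpus (Google’s actual pipeline and data are far larger and more sophisticated):

```python
from collections import Counter

def word_frequency_by_year(corpus, word):
    """Relative frequency of `word` per year, given a corpus of
    (year, text) pairs -- the core of an ngram-style analysis.
    Inputs here are illustrative, not Google's data."""
    totals = Counter()  # total tokens per year
    hits = Counter()    # occurrences of `word` per year
    for year, text in corpus:
        tokens = text.lower().split()
        totals[year] += len(tokens)
        hits[year] += tokens.count(word.lower())
    return {year: hits[year] / totals[year] for year in sorted(totals)}

sample = [
    (1850, "the whale and the sea and the whale"),
    (1900, "the engine and the factory"),
]
print(word_frequency_by_year(sample, "whale"))  # → {1850: 0.25, 1900: 0.0}
```

Even this toy version shows why normalization matters: raw counts would mostly reflect how much was published in a given year, not how fashionable a word was.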
The most exciting, intriguing read this week has undoubtedly been the confession piece “The Shadow Scholar” from the Chronicle of Higher Education. The article is a tell-all confessional from the point of view of a writer for a term paper mill, and the details are eerily familiar. Poor writing skills, pressure, and easy access to funds all appear as trademark components keeping this industry quite alive, including a better-than-average salary for this individual. For anyone working with writing students, I recommend this article as a Must Read.
For those who don’t despise Wikipedia, you’ll be gaining more ground in the credibility department, as UC Berkeley has started gearing assignments, students, and professors toward improving the accuracy of the site’s contents, at least in regard to public policy. You can read more about this collaboration and its lasting impact on students in the UC Berkeley PR article, “UC Berkeley students help improve Wikipedia’s credibility.”
Given some other fun events going on in my life (yay for upcoming holidays!), I have a few more backlogged items I’ll have to get to later, such as McKinsey Consulting’s report “Winning by Degrees: The Strategies of Highly-Productive Higher-Education Institutions.”
Although I started using Twitter almost a year ago now (wow, has time flown by), this week the universe seems to want me to move beyond my current, occasional playing with the social media resource and learn more about the power that has led this VC-run idea to become the phenomenon that it is. As part of a new routine, I’m making more time to walk, and hence I need more audiobooks as a needed distraction during said exercise. In Joel Comm’s Twitter Power, I’ve gotten a well-crafted review of the various other social media sites, heard how they compare and contrast with Twitter, and I look forward to hearing more of his advice and techniques for making Twitter work for me. While some of the advice thus far (I’m about 1.5 hours into an almost 6.5-hour audiobook) is a bit tediously obvious (make sure you choose the right username so people can find you; make sure to link your website to your Twitter profile, etc.), his additional advice on how to add multiple websites to your profile gives me hope that I’ll actually learn something from the book. The book does have a sales/advertising bent, but I figure the methods will still apply to the general outreach my library may need to promote programs, events, and new resources.
On another note, my ProjectMuse Twitter feed helped me stumble upon a Society for Scholarly Publishing blog entry regarding the relationship between Twitter and scholarly communication. For the academics among us, Clarke’s concise discussion does an excellent job providing shorthand notes on the Twitter discussion as well as indicating Twitter’s value as a general social media tool apart from others like Facebook. Furthermore, he ties the topic back into the idea of scholarly communication today. To say the least, I highly recommend perusing this entry, even for the avid librarian Twitter user.
Now, in relation to all of these, what have you, in the nebulous fog of the blogosphere, discovered in your Twitter-riffic adventures?