Although I am an alum of SJSU’s SLIS program, I mostly filter out the steady stream of emails I still receive from the school about courses, talks, and student groups. One message, though, caught my eye:
Abram, a rather prolific figure in the library world, had popped up across my research and news radar several times, but I’ve lacked the follow-through to consistently read his blog, Stephen’s Lighthouse. However, watching his recorded talk was truly one of the most interesting and engaging presentations I’ve seen on libraries (all types) in very possibly forever (so far). Abram presents great content, applicable situations, and sassy humor to illustrate some of our profession’s foibles and areas for improvement. If you don’t want to spend an hour watching the talk, here are my main takeaways:
- Geotagging (changing the answers based on the space/audience): why aren’t we doing this? His example: public consumer health data. Do teens need the same information as senior citizens about HIV/AIDS? No, so why do we keep helping them in the same, uniform way?
- We need to inform our users that they are being manipulated by geotagging, search engine optimization, and content farms like AOL
- Our future for reference service is in providing transformational (learning) interactions, not transactional (end-product search results list only) interactions
- The value of reading – do we really need to be persnickety about how people read? Nah. Instead, focus on the fact that they are reading at all. Plus, learn about some other eReading apps: 24symbols and Bookish
- So that manipulation factor? It’s not just search engine results. In short, Apple seems to take a very narrow view of 47 U.S.C. § 230(c)(2)(A), the “Good Samaritan” provision on blocking and screening of offensive material. Apple’s iPhone Developer Program License Agreement allows Apple to determine what is or isn’t acceptable for you to buy with your own money. This isn’t just an issue of obscene or pornographic materials, but of things interpreted as defamatory. For example, Mark Fiore, the Pulitzer Prize-winning satirist, had his NewsToons political cartoon app rejected for “making fun of the Balloon Boy hoax and the pair that famously crashed a White House party.”
- Things we can do to get back in touch with our community’s needs, not our own navel-gazing:
- Act Like a User Day (e.g., in an ADA sense). Can people navigate the library well in a wheelchair? Do the websites we design actually make sense when used with a screen reader? In my one-armed state, I’ve become acutely aware of the limits on doing things like using pump soap dispensers, cutting food with a fork and knife, or getting a fair score on Xbox Kinect games (penalties for a missing arm are bogus).
- Digital Download Day: a petting zoo of different technologies, plus showing users how to download their content to any device (which pushes staff training and comfort).
In an attempt to blog my Mid-Atlantic Chapter Medical Library Association continuing education experience, I wrote the following post. With the event now over two months past, I thought publishing it here would be better than leaving it in the “pending review” category of the MAC-MLA WordPress blog, especially since I still refer to these notes when talking with my students. Enjoy.
EBM? PICO(TT)? FRISBE? What the frank? As a new health sciences and nursing librarian, I’ve found myself encountering this new alphabet soup without a lot of context or deeper understanding of how to 1) identify meaning from these letter scrambles and 2) help my students, researchers, and clinicians apply these concepts to their studies and practice. Perhaps lacking all but the cape, Connie Schardt, Evidence Based Medicine Superhero, taught a great two-part session today on “Evidence Based Medicine and the Medical Librarian.” While the packets of close to 100 pages looked overwhelming, they greatly supported the session by providing hands-on activities, such as how to identify a study type or question from the abstract. Patient problem, Intervention of a hypothetical test item, Comparison to other intervention options or placebos, Outcome review, and understanding the Type of question and the Type of study make up a series of ideas that need a bit of mental unpacking, which the activities definitely provided. The end-of-session Jeopardy rounds were also a great way to test our understanding in a faster, more fluid format.

However, despite the traction gained in the first half, I personally found the evaluation component the most helpful and substantial for my ability to teach health information literacy. FRISBE (not frisbee, as in the non-traditional golf game) provides a good checklist to help evaluate studies for bias. From my own work, the search is never the hardest part of research. For the bullet-point friendly, FRISBE stands for:
- F – Follow-up (no missing persons, please)
- R – Randomized population assignments and concealed allocation of people to different groups
- I – Intention to treat
- S – Similar baseline between the randomized groups
- B – Blinded service and treatment within the populations
- E – Equal treatment of both populations
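For the code-inclined, the checklist above could even be sketched as a quick scoring aid. This is a minimal illustration only; the `score_article` function and its tallying scheme are my own invention, not anything from the session:

```python
# FRISBE criteria from the session, phrased as appraisal questions.
FRISBE = {
    "F": "Follow-up: were all enrolled participants accounted for?",
    "R": "Randomized assignment with concealed allocation?",
    "I": "Intention-to-treat analysis?",
    "S": "Similar baseline characteristics between groups?",
    "B": "Blinding within the populations?",
    "E": "Equal treatment of both populations?",
}

def score_article(answers: dict) -> str:
    """Summarize which FRISBE criteria an article satisfies."""
    met = [k for k in FRISBE if answers.get(k)]
    missed = [k for k in FRISBE if not answers.get(k)]
    return f"Met {len(met)}/6 criteria; weak on: {', '.join(missed) or 'none'}"

# Example: a trial with concealed randomization and good follow-up, but no blinding.
print(score_article({"F": True, "R": True, "I": True,
                     "S": True, "B": False, "E": True}))
```

Running the example prints `Met 5/6 criteria; weak on: B`, which mirrors the kind of concrete justification we worked toward in the session.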
When we applied this scheme to an article, we could start fleshing out and justifying in concrete terms why an article was strong or weak. Now, after sharing my comments from the afternoon, does anyone else have any advice, tips, or tricks for a librarian new to the field?
As part of a project to provide additional resources to our student workers about how to identify peer-reviewed resources (since EBSCO is a bit of a crapshoot), I decided to create a short tutorial to define the terminology, show how search results may or may not identify peer-review-age, and demonstrate how to navigate the peer-review identification authority, Ulrich’s. With the help of our amazing Center for Information Technology (CIT), I got a rundown on Camtasia vs. Captivate recording options, equipment, and systems. Thinking this would be on the shorter side (under 10 minutes), I decided to test out Camtasia since I was already familiar with Jing. Plus, our Camtasia setup allowed submission of videos to a Relay server for automatic captioning instead of manually editing the captions in Captivate….or so I thought. Being the responsible person I am, I drafted the script ahead of time (apart from some minor tweaks) and figured it would be great. However, here are some of the captions Relay produced:
My text: “…about the types of resources students use for their research.”
Relay-suggested caption: “…every horse themed use for their research.”
My text: “In particular, professors are requiring students to use peer-reviewed articles for their research resources.”
Relay-suggested caption: “the killer perfect and I are requiring didn’t eat Peer Reviewed articles for their Easter treat”
And that’s just within the first 10 seconds. Other gems include:
My text: “so use the dropdown box in QuickSearch to limit your search to Just Scholarly Articles”
Relay-suggested caption: “the is the goddamn boxing cricket to many a six contests scholarly article”
So for a video under 5 minutes, I probably spent about another hour fixing the captions. To view the final product (at least of this round), go to JMUtube and check out “How to Identify Peer-Reviewed or Refereed Resources.”