Wednesday, November 18, 2009
Formally published papers that have been through a traditional prepublication peer review process remain the most important means of communicating science today. Researchers depend on them to learn about the latest advances in their fields and to report their own findings. The intentions of traditional peer review are certainly noble: ... . In principle, this system enables science to move forward on the collective confidence of previously published work. Unfortunately, the traditional system has inspired methods of measuring impact that are suboptimal for their intended use.
Peer-reviewed journals have served an important purpose in evaluating submitted papers and readying them for publication. In theory, one could browse the pages of the most relevant journals to stay current with research on a particular topic. But as the scientific community has grown, so has the number of journals—to the point where over 800,000 new articles appeared in PubMed in 2008 ... and the total is now over 19 million ... . The sheer number makes it impossible for any scientist to read every paper relevant to their research, and a difficult choice has to be made about which papers to read. Journals help by categorizing papers by subject, but there remain in most fields far too many journals and papers to follow.
As a result, we need good filters for quality, importance, and relevance to apply to the scientific literature. There are many we could use, but the majority of scientists filter by preferentially reading articles from specific journals—those they view as the highest quality and the most important. These selections are highly subjective, but the authors' personal experience is that most scientists, when pressed, will point to the Thomson ISI Journal Impact Factor as an external and “objective” measure for ranking the impact of specific journals and the individual articles within them.
Yet the impact factor, which averages the number of citations per eligible article in each journal, is deeply flawed both in principle and in practice as a tool for filtering the literature. It is mathematically problematic ... with around 80% of a journal impact factor attributable to around 20% of the papers, even for journals like Nature ... . It is very sensitive to the categorisation of papers as “citeable” ... and it is controlled by a private company that does not have any obligation to make the underlying data or processes of analysis available. [snip]
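The skew described here is easy to see in a toy calculation; the citation counts below are invented for illustration and are not drawn from any real journal:

```python
# Toy illustration of why a journal-level average hides article-level skew:
# a handful of highly cited papers dominate the mean.

citations = [120, 85, 60, 15, 9, 4, 3, 2, 1, 1]  # one journal's articles

# The impact factor is essentially the journal-wide average citation count.
impact_factor = sum(citations) / len(citations)

# Share of all citations contributed by the top 20% of papers.
top_20_percent = sorted(citations, reverse=True)[: len(citations) // 5]
share_from_top = sum(top_20_percent) / sum(citations)
```

With these invented numbers the journal's "average" is 30 citations per article, yet two papers out of ten account for roughly two thirds of all citations, so the average describes almost none of the individual articles.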
Though the impact factor is flawed, it may be useful for evaluating journals in some contexts, and other more sophisticated metrics for journals are emerging ... . But for the job of assessing the importance of specific papers, the impact factor—or any other journal-based metric for that matter—cannot escape an even more fundamental problem: it is simply not designed to capture qualities of individual papers.
If choosing which articles to read on the basis of journal-level metrics is not effective, then we need a measure of importance that tells us about the article. It makes sense that when choosing which of a set of articles to read, we should turn to “article-level metrics,” yet in practice data on individual articles are rarely considered, let alone seriously measured.
Perhaps the main reason for this absence is a practical one. Accurately determining the importance of an article takes years and is very difficult to do objectively. The “gold standard” of article impact is formal citations in the scholarly literature, but citation metrics have their own challenges. One is that citation metrics do not take the “sentiment” of the citation into account, so while an article that is heavily cited for being wrong is perhaps important in its own way ... , using citation counts without any context can be misleading. The biggest problem, though, is the time-delay inherent in citations. [snip]
The Trouble with Comments
A common solution proposed for getting rapid feedback on scientific publications is inspired by the success of many Web-based commenting forums. Sites like Stack Overflow, Wikipedia, and Hacker News each have an expert community that contributes new information and debates its value and accuracy. It is not difficult to imagine translating this dynamic into a scholarly research setting where scientists discuss interesting papers. A spirited, intelligent comment thread can also help raise the profile of an article and engage the broader community in a conversation about the science.
Unfortunately, commenting in the scientific community simply hasn't worked, at least not generally. [snip]
Part of this resistance to commenting may relate to technical issues, but the main reason is likely social. For one thing, researchers are unsure how to behave in this new space. We are used to criticizing articles in the privacy of offices and local journal clubs, not in a public, archived forum. [snip]
Another issue is that the majority of people making hiring and granting decisions do not consider commenting a valuable contribution. [snip]
Then there is simply the size of the community. [snip] But it also means that if only 100 people read a paper, it will be lucky if even one of them leaves a comment.
Technical Solutions to Social Problems
Given the lack of incentive, are there ways of capturing article-level metrics from what researchers do anyway? A simple way of measuring interest in a specific paper might be via usage and download statistics; for example, how many times a paper has been viewed or downloaded, how many unique users have shown an interest, or how long they lingered. [snip] These statistics may not be completely accurate, but they are consistent, comparable, and considered sufficiently immune to cheating to be the basis of a billion-dollar Web advertising industry.
A more important criticism of download statistics is that they are a crude measure of actual use. How many of the downloaded papers are even read, let alone digested in detail and acted upon? What we actually want to measure is how much influence an article has, not how many people clicked on the download button thinking they “might read it later.” A more valuable metric might be the number of people who have actively chosen to include the paper in their own personal library. [snip]
Examples of such tools are Zotero, Citeulike, Connotea, and Mendeley, which all allow the researcher to collect papers into their library while they are browsing on the Web, often in a single click using convenient “bookmarklets.” The user usually has the option of adding tags, comments, or ratings as part of the bookmarking process. [snip]
Metrics collected by reference management software are especially intriguing because they offer a measure of active interest without requiring researchers to do anything more than what they are already doing. Scientists collect the papers they find interesting, take notes on them, and store the information in a place that is accessible and useful to them. [snip]
Part of the solution to encouraging valuable contributions, then, may simply be that the default settings involve sharing and that people rarely change them. A potentially game-changing incentive, however, may be the power to influence peers. [snip]
It is too early to tell whether any specific tools will last, but they already demonstrate an important principle: a tool that works within the workflow that researchers are already using can more easily capture and aggregate useful information. [snip]
The Great Thing about Metrics…Is That There Are So Many to Choose From
There are numerous article-level metrics ... and each has its own advantages and problems. Citation counts are an excellent measure of influence and impact but are very slow to collect. Download statistics are rapid to collect but may be misleading. Comments can provide valuable and immediate feedback, but are currently sparse ... . Bookmarking statistics can be both rapid to collect and contain high-quality information but are largely untested and require the widespread adoption of unfamiliar tools. Alongside these we have “expert ratings” by services such as Faculty of 1000 and simple rating schemes.
“Other Indicators of Impact” include ratings and comments, which, like page views, are immediate but may offer more insight because users are more likely to have read the article and found it compelling enough to respond. Additional other indicators are bookmarks, used by some people to keep track of articles of interest to them, and blog posts and trackbacks, which indicate where else on the Web the article has been mentioned and can be useful for linking to a broader discussion. It is clear that all of the types of data provide different dimensions, which together can give a clearer picture of an article's impact.
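One way to see these different dimensions side by side is to put each metric on a common 0-1 scale. The sketch below is a hypothetical illustration: the article names, metric names, and counts are all invented, and no real service is implied.

```python
# Hypothetical sketch: placing several article-level metrics side by side
# on a common 0-1 scale so their different dimensions can be compared.
# All names and counts here are invented for illustration.

ARTICLES = {
    "article-a": {"citations": 12, "downloads": 3400, "comments": 2, "bookmarks": 45},
    "article-b": {"citations": 1, "downloads": 9800, "comments": 14, "bookmarks": 12},
}

def normalize(metric, articles):
    """Scale one metric to [0, 1] relative to the best-scoring article."""
    top = max(a[metric] for a in articles.values()) or 1
    return {name: a[metric] / top for name, a in articles.items()}

def metric_profile(articles):
    """Each article's normalized score on every metric dimension."""
    metrics = next(iter(articles.values()))
    profile = {name: {} for name in articles}
    for m in metrics:
        for name, score in normalize(m, articles).items():
            profile[name][m] = round(score, 2)
    return profile

profile = metric_profile(ARTICLES)
```

A profile like this makes the tradeoffs in the text visible: an article can lead on downloads while trailing badly on citations, which is exactly why no single number suffices.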
[snip] As recently shown ... , scientific impact is not a simple concept that can be described by a single number. The key point is that journal impact factor is a very poor measure of article impact. And, obviously, the fact that an article is highly influential by any measure does not necessarily mean it should be.
Many researchers will continue to rely on journals as filters, but the more you can incorporate effective filtering tools into your research process, the more you will stay up-to-date with advancing knowledge. The question is not whether you should take article-level metrics seriously but how you can use them most effectively to assist your own research endeavours. We need sophisticated metrics to ask sophisticated questions about different aspects of scientific impact and we need further research into both the most effective measurement techniques and the most effective uses of these in policy and decision making. For this reason we strongly support efforts to collect and present diverse types of article-level metrics without any initial presumptions as to which metric is most valuable. [snip]
As Clay Shirky famously said ... , you can complain about information overload but the only way to deal with it is to build and use better filters. It is no longer sufficient to depend on journals as your only filter; instead, it is time to start evaluating papers on their own merits. Our only options are to publish less or to filter more effectively, and any response that favours publishing less doesn't make sense, either logistically, financially, or ethically. The issue is not how to stop people from publishing, it is how to build better filters, both systematically and individually. At the same time, we can use available tools, networks, and tools built on networks to help with this task.
So in the spirit of science, let's keep learning and experimenting, and keep the practice and dissemination of science evolving for the times.
!!! Thanks To / Garrett Eastman / Librarian / Rowland Institute at Harvard / For The HeadsUp !!!
>>> While These Insights and Suggestions Are An Important Contribution To The Conversation, In Many Ways The Views And Recommendations Are Far From Radical <<<
See My Presentation Delivered At the Workshop On Peer Review, Trieste, Italy, May 23-24, 2003
"Alternative Peer Review: Quality Management for 21st Century Scholarship"
>>> See In Particular > 'Seize The E!' Section >>> Embrace the potential of the digital environment to facilitate access, retrieval, use, and navigation of electronic scholarship.
>>It's A Large PPT (200+ Slides) But IMHO ... Well Worth The Experience [:-)]<<
The Big Picture(sm): Visual Browsing in Web and non-Web Databases
To ReQuote T.S. Eliot >
"Where is the wisdom we have lost in knowledge? Where is the knowledge that we have lost in information?"/ T.S. Eliot / The Rock (1934) pt.1
To Quote Me >
"It's Not About Publication, It's About Ideas"
>>> We Now Have The Computational Power To Make Real-Time Conceptual Navigation An EveryDay Occurrence <<<
!!! Let Us Use It To Navigate Ideas !!!
Indeed Let Us Continue "... experimenting, and keep the practice and dissemination of science evolving for the times."
Thursday, October 29, 2009
eCAT is an electronic lab notebook (ELN) developed by Axiope Limited.
It is the first online ELN, the first ELN to be developed in close collaboration with lab scientists, and the first ELN to be targeted at researchers in non-commercial institutions. eCAT was developed in response to feedback from users of a predecessor product. By late 2006 the basic concept had been clarified: a highly scalable web-based collaboration tool that possessed the basic capabilities of commercial ELNs, i.e. a permissions system, controlled sharing, an audit trail, electronic signature and search, and a front end that looked like the electronic counterpart to a paper notebook.
During the development of the beta version feedback was incorporated from many groups including the FDA's Center for Biologics Evaluation & Research, Uppsala University, Children's Hospital Boston, Alex Swarbrick's lab at the Garvan Institute in Sydney and Martin Spitaler at Imperial College. More than 100 individuals and groups worldwide then participated in the beta testing between September 2008 and June 2009. The generally positive response is reflected in the following quote about how one lab is making use of eCAT: "Everyone uses it as an electronic notebook, so they can compile the diverse collections of data that we generate as biologists, such as images and spreadsheets. We use it to take minutes of meetings. We also use it to manage our common stocks of antibodies, plasmids and so on. Finally, perhaps the most important feature for us is the ability to link records, reagents and experiments."
By developing eCAT in close collaboration with lab scientists, Axiope has come up with a practical and easy-to-use product that meets the needs of scientists to manage, store and share data online. eCAT is already being perceived as a product that labs can continue to use as their data management and sharing grow in scale and complexity.
The complete article is [now] available as a provisional PDF
The fully formatted PDF and HTML versions are in production [10-29-09]
!!! Thanks To / Garrett Eastman / Librarian / Rowland Institute at Harvard / For The HeadsUp !!!
Wednesday, September 16, 2009
As discussed recently, we at PLoS feel that there is much to be gained from assessing research articles on their own merits rather than on the basis of the journal (and its impact factor) where the work happens to be published. [snip]
PLoS has therefore embarked on a program to aggregate a range of available data about an article and place that data on the article itself. The data are found on the new tab called ‘Metrics’, available on all articles. A reader can now scan the various metrics to determine the extent to which the article has been viewed, cited, covered in the media and so forth. With the addition of usage data to the article-level metrics we have taken another step towards providing the community with valuable data that can be used and analyzed.
In order to make article-level metrics as open and useful as possible, we are providing our entire dataset as a downloadable spreadsheet and we encourage interested researchers to download the data and perform their own analyses.
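The sort of analysis PLoS invites here can be sketched in a few lines. The column names and DOIs below are illustrative assumptions, not the actual headers or contents of the PLoS spreadsheet:

```python
# Sketch of a minimal analysis over a downloaded article-level metrics
# spreadsheet. SAMPLE stands in for the real file; its column names and
# DOIs are invented for illustration.

import csv
import io

SAMPLE = """doi,views,pdf_downloads,citations
10.1371/journal.pone.0000001,5200,810,14
10.1371/journal.pone.0000002,1900,240,3
10.1371/journal.pone.0000003,8700,1200,9
"""

def top_by(rows, column, n=2):
    """Rank articles by one metric column, highest first."""
    return sorted(rows, key=lambda r: int(r[column]), reverse=True)[:n]

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
most_viewed = [r["doi"] for r in top_by(rows, "views")]
most_cited = [r["doi"] for r in top_by(rows, "citations")]
```

Even this toy example shows why releasing the raw data matters: the most-viewed article and the most-cited article need not be the same one, and only open data lets a reader check.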
YouTube Video (Thanks To Garrett Eastman / Librarian / Rowland Institute At Harvard / For The HeadsUp)
Article-Level Download Metrics—What Are They Good For?
Sunday, September 13, 2009
Friday/ September 18, 1:15pm / Pound Hall Room 100 (Map) / Free and Open to the Public /
In Person > RSVP Requested / Webcast > Live at 1:15 pm ET.
This event is co-sponsored by the Harvard Business School Knowledge and Library Services, Harvard Law School Library, and the Office for Scholarly Communication.
In the future, frontier research in many fields will increasingly require the collaboration of globally distributed groups of researchers needing access to distributed computing, data resources and support for remote access to expensive, multi-national specialized facilities such as telescopes and accelerators or specialist data archives.
There is also a general belief that an important road to innovation will be provided by multi-disciplinary and collaborative research – from bio-informatics and earth systems science to social science and archeology. There will also be an explosion in the amount of research data collected in the next decade - petabytes will be common in many fields. These future research requirements constitute the 'eResearch' agenda.
Powerful software services will be widely deployed on top of the academic research networks to form the necessary 'Cyberinfrastructure' to provide a collaborative research environment for the global academic community.
The difficulties in combining data and information from distributed sources, the multi-disciplinary nature of research and collaboration, and the need to present researchers with tooling that enables them to express what they want to do rather than how to do it all highlight the need for an ecosystem of Semantic Computing technologies.
Such technologies will further facilitate information sharing and discovery, will enable reasoning over information, and will allow us to start thinking about knowledge and how it can be handled by computers. This talk will review the elements of this vision and explain the need for semantic-oriented computing by exploring eResearch projects that have successfully applied relevant technologies, and will consider the anticipated impact on scholarly communication as we know it today.
It will also suggest that a software + service model with scientific services delivered from the cloud will become an increasingly accepted model for research.
Lee Dirks is the Director of Education & Scholarly Communications in Microsoft’s External Research division, where he manages a variety of research programs related to open access to research data, interoperability of archives and repositories, preservation of digital information, as well as the application of new technologies to facilitate teaching and learning in higher education. A 20-year veteran across multiple information management fields,
Lee holds an M.L.S. degree from the University of North Carolina-Chapel Hill as well as a post-masters degree in Preservation Administration from Columbia University. In addition to past positions at Columbia and with OCLC (Preservation Resources), Lee has held a variety of roles at Microsoft since joining the company in 1996, namely as the corporate archivist, then corporate librarian, and as a senior manager in the corporate market research organization. In addition to participation on several (US) National Science Foundation task forces, Lee also teaches as adjunct faculty at the iSchool at the University of Washington, and serves on the advisory boards for the University of Washington Libraries as well as the iSchool's Master of Science in Information Science program.
During his career, his team's work on the library intranet site at Microsoft was recognized as a "Center of Excellence Award for Technology" in 2003 by the Special Library Association's (SLA) Business & Finance Division. Additionally, Lee was presented with the 2006 Microsoft Marketing Excellence Award by Microsoft CEO Steve Ballmer – for a marketing & engineering partnership around a breakthrough market opportunity analysis process which is now a standard operating procedure across Microsoft.
Sunday, September 6, 2009
Examples of engagement include writing a blog post in response to someone else, bookmarking an article, leaving a comment on a blog, or clicking a link to read a news item.
PostRank measures engagement by analyzing the types and frequency of an audience's interaction with online content. An item's PostRank score represents how interesting and relevant people have found it to be. The more interesting or relevant an item is, the more work they will do to share or respond to that item so interactions that require more effort are weighted higher.
PostRank scoring is based on analysis of the "5 Cs" of engagement: creating, critiquing, chatting, collecting, and clicking. By collecting interaction engagement metrics in these categories, the overall engagement score is calculated and the PostRank value is determined.
The 5 Cs of Engagement
The strongest form of engagement is demonstrated by using an item as inspiration to create your own, for example, writing your own blog post that responds to or refutes someone else's blog post. Creation requires the most thought and investment of time, actively generates conversation, and therefore indicates the highest level of engagement.
Reading a blog post and then leaving a comment requires an investment of time, thought and effort (or sometimes just typing and name-calling...), and is a form of conversation. However, it requires less effort than writing a whole blog post. So while it is an important action, it does not indicate as much engagement as Creating.
Sharing and discussing information can often be started with one click, so it doesn't require a major investment of effort. However, a desire to share is a strong indication of relevance, and the act of sharing and its ensuing discussion are acts of conversation. Use of social media applications like Twitter encourages both the sharing of information and the resulting conversations. As a result, social media "chatting" indicates a good level of engagement.
Bookmarking and submitting items to social sites also tend to be "one-click" actions. They are intentional acts of archiving and sharing, but don't require much time or effort. However, the sharing that occurs often sparks conversations, so Collecting does demonstrate some engagement.
Activities like clicks and page views indicate lower engagement because they're passive interactions. Clicking a link to read a blog post doesn't require much work, and you're not giving anything back except your reading time. It is an intentional act, however, and thus indicates a mild level of interest and engagement, which may grow after the item is read.
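The idea running through the five categories is a weighted sum: interactions that take more effort count for more. The sketch below illustrates that idea only; the coefficients are invented and are not PostRank's actual weights, which were never published on this page.

```python
# Minimal sketch of effort-weighted engagement scoring across the "5 Cs".
# The weight values are invented for illustration.

WEIGHTS = {
    "creating": 10.0,   # e.g. a blog post responding to the item
    "critiquing": 5.0,  # e.g. a comment on the item
    "chatting": 3.0,    # e.g. a tweet or other social mention
    "collecting": 2.0,  # e.g. a bookmark or social-site submission
    "clicking": 1.0,    # e.g. a click-through or page view
}

def engagement_score(counts):
    """Weighted sum of interaction counts across the 5 Cs."""
    return sum(WEIGHTS[c] * counts.get(c, 0) for c in WEIGHTS)

# One blog post, four comments, ten tweets, five hundred clicks:
item = {"creating": 1, "critiquing": 4, "chatting": 10, "clicking": 500}
score = engagement_score(item)
```

Note how the weighting embodies the text's argument: a single responding blog post moves the score as much as ten clicks, because creation signals far more engagement than passive reading.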
[snip]
Engagement Sources We Track
Engagement sources evolve as new and interesting ways of interacting with online content emerge. Here are several examples of engagement data sources that are included in PostRank:
- Views - Real-time > Pageviews within RSS readers and via PostRank widgets
- Clicks - Real-time > Clicks within RSS readers and via PostRank widgets
- Comments - Periodic updates > The number of comments on the item
- Google Trackbacks - Periodic updates > The number of links to the item from other websites
- FriendFeed - Real-time >The number of comments and likes on the item
- Digg - Real-time > The number of diggs, and comments on the item
- Reddit - Real-time > The number of comments and votes (up and down) on the item
- Tumblr - Real-time > The number of Tumblr mentions
- del.icio.us - Real-time > The number of bookmarks saved
- Ma.gnolia - Real-time > The number of bookmarks saved
- Diigo - Real-time > The number of bookmarks saved
- Furl - Real-time > The number of bookmarks saved
- Twitter - Real-time > The number of Twitter mentions
- Jaiku - Real-time > The number of Jaiku mentions
- Identi.ca - Real-time > The number of Identi.ca mentions
- Brightkite - Real-time > The number of Brightkite mentions
- Twit Army - Real-time > The number of Twit Army mentions
- Blip - Real-time > The number of Blip mentions
- Feecle - Real-time > The number of Feecle mentions
- MexicoDiario - Real-time > The number of MexicoDiario mentions
Press & Web 2.0 Media Coverage
Saturday, September 5, 2009
Open access combined with Web 2.0 networking tools is fast changing the traditional journals’ functions and framework and the publishers’ role. As content becomes more and more available online, in digital repositories and on the web, an integrated, interconnected, multidisciplinary information environment is evolving and Oldenburg’s model disintegrates: the journal is no longer the main unit of reference for scholarly output, as it used to be mainly for the STM disciplines; scholars’ attention is now deeply concentrated at the article level.
New journal models are thus evolving. In the first part of this presentation the authors discuss these new experimental journal models, i.e., overlay journals, interjournals, and different-levels journals. In the second part of the presentation the authors draw readers’ attention to the role commercial publishers could play in this seamless digital writing arena. According to the authors, publishers should concentrate much more on value-added services for authors, readers, and libraries, such as navigational services, discovery services, archiving, and ex-post evaluation services.
The growth of open-access scientific literature and the new Web 2.0 tools are rapidly changing the traditional functions of the scientific journal. Henry Oldenburg’s model disintegrates and the journal ceases to be the principal intellectual output of research, since scholars’ attention is now concentrated entirely at the article level (from discovery through to the new evaluation metrics).
Traditional journals retain a value that is now tied mainly to advancement in the academic career rather than to keeping current with science. New journal models are emerging in this context: "overlay journals", "interjournals", and "different levels journals". Since content is no longer the added value of a publication, what role falls to scientific publishers today? The authors argue that the future of scientific publishing lies in the digital context, that is, in offering differentiated value-added services for authors, readers, and libraries.
Source and Full Text Available At
Sunday, August 9, 2009
BMJ 2009;339:b2680 / Published 21 July 2009, doi:10.1136/bmj.b2680
Objective
To understand belief in a specific scientific claim by studying the pattern of citations among papers stating it.
Design
A complete citation network was constructed from all PubMed indexed English literature papers addressing the belief that β amyloid, a protein accumulated in the brain in Alzheimer’s disease, is produced by and injures skeletal muscle of patients with inclusion body myositis. Social network theory and graph theory were used to analyse this network.
Main outcome measures
Citation bias, amplification, and invention, and their effects on determining authority.
Results
The network contained 242 papers and 675 citations addressing the belief, with 220 553 citation paths supporting it. Unfounded authority was established by citation bias against papers that refuted or weakened the belief; amplification, the marked expansion of the belief system by papers presenting no data addressing it; and forms of invention such as the conversion of hypothesis into fact through citation alone. Extension of this network into text within grants funded by the National Institutes of Health and obtained through the Freedom of Information Act showed the same phenomena present and sometimes used to justify requests for funding.
Conclusions
Citation is both an impartial scholarly method and a powerful form of social communication. Through distortions in its social use that include bias, amplification, and invention, citation can be used to generate information cascades resulting in unfounded authority of claims. Construction and analysis of a claim-specific citation network may clarify the nature of a published belief system and expose distorted methods of social citation.
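The striking gap between 242 papers and 220 553 supporting citation paths reflects how paths multiply combinatorially in a citation network. A toy sketch of the path counting, over a four-paper graph invented purely for illustration:

```python
# Toy reconstruction of the path-counting idea: the number of distinct
# citation chains leading back to a founding claim can far exceed the
# number of papers. The graph below is invented for illustration.

from functools import lru_cache

# Edges point from a citing paper to the papers it cites.
CITES = {
    "p4": ["p2", "p3"],
    "p3": ["p1", "p2"],
    "p2": ["p1"],
    "p1": [],  # the founding paper stating the claim
}

@lru_cache(maxsize=None)
def paths_to_claim(paper, claim="p1"):
    """Count distinct citation paths from `paper` back to the claim paper."""
    if paper == claim:
        return 1
    return sum(paths_to_claim(cited, claim) for cited in CITES[paper])

total = sum(paths_to_claim(p) for p in CITES if p != "p1")
```

Even with only four papers there are already six distinct paths back to the founding claim; each new paper citing earlier members of the belief system multiplies the count, which is how a modest literature can generate hundreds of thousands of supporting paths.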
Diversion, Invention, and Socialized Medicine
1.1. Motivation
1.2. Analytical focus
1.3. Objectives
1.4. Approach
2. Characteristics of scientific knowledge infrastructures
2.1. Theoretical analysis
2.2. Empirical analysis: Emerging knowledge infrastructures
2.3. Visions of scientific knowledge infrastructures
2.4. Synthesis
3. Structure of scientific knowledge
3.1. Objectives
3.2. Theoretical foundations
3.3. Object-oriented model of scientific knowledge
3.4. Elements of scientific knowledge
4. Implications
4.1. Feasibility: IS Cybrarium
4.2. Conclusion
Source and Detailed Table Of Contents
Saturday, August 8, 2009
Maureen Dowd: If you were out with a girl and she started twittering about it in the middle, would that be a deal-breaker or a turn-on?
Maureen Dowd: Why did you think the answer to e-mail was a new kind of e-mail?
Biz Stone: With Twitter, it’s as easy to unfollow as it is to follow.
—The New York Times, 2009 (1)
For centuries a small number of writers were confronted by many thousands of readers. This changed toward the end of the last century. It began with the daily press opening to its readers space for 'letters to the editor.' And today... at any moment the reader is ready to turn into a writer.
—Walter Benjamin, 1931 (2)
All registered users are able to add Notes, Comments, and Ratings to any article...Highlight the text to be annotated, and then click the 'Add a note to the text' link in the right-hand navigation menu of the article ...Notes can be started at any point within the text, but for ease of reading we ask that you do not begin Notes in the middle of words.
—Public Library of Science, 2009 (3)
THE SANDBOX OF IDEAS
It’s reassuring to read that our colleagues at The Public Library of Science have remained true to the integrity of the word, if not the sentence or thought. PLoS One has raised this banner for verbal integrity in a cheery commercial entitled "PLoS Journals Sandbox: A Place to Learn and Play (3) ." The new format, which permits instant interruption of on-line, formal scientific papers, is certainly in keeping with the temper of our time. Were this to have been the practice in old-fashioned print libraries, many of our journals would by now resemble kitty litter.
In the Age of Twitter we’ve become accustomed to bell-tones and roving thumbs in every venue of human life. We call it social networking when we summon up Facebook, YouTube, or MySpace—and it’s no longer limited to teenagers. Twitter and the other social networks have been used by nearly one in five online adults ages 25 to 34 (4). Nowadays, in the plenary sessions of national scientific meetings, one sees heads bowed in homage to the Holy Book of Face or tweeting to Twitter in fewer than 140 characters of text.
Biz Stone, the founder of Twitter, explains:
Twitter is a service for friends, family, and co–workers to communicate and stay connected through the exchange of quick, frequent answers to one simple question: What are you doing? (5)
And as for science: what are we doing? Today, on screens large and small, every online scientific paper is just a cursor stroke away. That makes it possible, as Benjamin predicted, for any reader to turn into a writer. No surprise, then, that PLoS and other new venture journals encourage us to adorn the digital text with notes and comments, blogs and tweets. [snip] Right on to the Public Library of Science! How fitting it is that PLoS, the youngest kid on the block of reputable science journals, is out to compete in the sandbox of ideas (3) .
ENDANGERED SPECIES OF PRINT
It’s no secret that scientific journals have been losing readers of their printed versions to the greater audience on the web. For many scientific journals, the number of "hits" they receive daily online is a factor or two greater than their monthly print circulation. [snip] The printed word still retains a good chunk of older devotees, but even these are as likely as their younger colleagues to prefer electronic to printed copies of their favorite journals (8). [snip]
This sea change in the way that information is handled and supported has worried many and frightened a few (9). We might recall that scientific journals as we know them are relatively recent arrivals on the scene and have moved along paths trod by the general culture. [snip] Science and publishing became professionalized at the dawn of the Enlightenment. The two oldest scientific journals on record are The Philosophical Transactions of the Royal Society (London) and the Journal de Scavants (Paris), both founded in 1665. Originally filled with material of general interest for fellow citizens of "the republic of letters," they soon morphed into publications that reported the most rigorous science of the day (10). [snip]
IT HAS NOT ESCAPED OUR NOTICE
The mold was struck for the modern scientific paper between the two world wars. [snip] Today the acronymic IMRaD formula (Introduction, Methods, Results, and Discussion) is required by all reputable journals, including this one. But there’s always been wiggle-room around the canonical IMRaD format; most journals are enlivened by letters to the editor, rebuttals, conference proceedings, abstracts of meetings, news reports, etc. Walter Benjamin’s description in 1931 of the marketplace of print still applies to the market in scientific ideas:
Today there is hardly a gainfully employed European who could not, in principle, find an opportunity to publish somewhere or other comments on his work, grievances, documentary reports, or that sort of thing. Thus, the distinction between author and public is about to lose its basic character (13).
He would have loved texting and Twitter; I can imagine his pleasure at running his thumbs over the passing comments and pertinent grievances as he "follows" and "unfollows" as both author and reader.
In this context, one can only imagine what the epochal Watson-Crick paper would look like these days on PLoS. Their 1953 paper was written as a "Letter to the Editor" in Nature and never underwent peer review. John Maddox, editor-in-chief at the time, later admitted that "the Crick and Watson paper could not have been refereed: its correctness is self-evident." That’s a matter of dispute, as we’ll see (14). The Watson-Crick paper begins with:
We wish to suggest a structure for the salt of deoxyribose nucleic acid (D.N.A.). This structure has novel features which are of considerable biological interest...

The ending of the paper is perhaps the best known in scientific prose:
It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.

But for many of us, the real action is in the acknowledgments at the end:
We are much indebted to Dr. Jerry Donohue for constant advice and criticism, especially on interatomic distances. We have also been stimulated by a knowledge of the general nature of the unpublished experimental results and ideas of Dr. M. H. F. Wilkins, Dr. R. E. Franklin and their co-workers at King’s College, London (15).

One need only imagine what tweets, twoops, formal corrections, and comments might decorate these passages on PLoS ONE today. Pauling, Chargaff, Avery, Meselson, Cairns, Donohue, Perutz, Franklin, and Wilkins would have had their say:
This structure has novel features (COMMENT: YEAH! HYDROGEN BONDING, LINUS!) which are of considerable biological interest. (COMMENT: FOR WHICH I WROTE THE CHEMISTRY, ERWIN) (FORMAL CORRECTION: IT’S THE GENETIC MATERIAL, YOU FOOLS! GENES! OSWALD)

It has not escaped our notice that the specific pairing (FORMAL CORRECTION: BASE PAIRING A/T=G/C, ERWIN) we have postulated immediately suggests a possible copying mechanism for the genetic material. (COMMENT: LIKE WHAT? CONSERVED? SEMI? MATT) (COMMENT: MORTAL OR IMMORTAL? CAIRNS)

We are much indebted to Dr. Jerry Donohue for constant advice and criticism, especially on interatomic distances. (FORMAL CORRECTION: SEZ YOU! I TOLD YOU ABOUT THE KETO TO ENOL TAUTOMERS. YOU KNEW SQUAT FROM THE CHEMISTRY! JERRY) We have also been stimulated by a knowledge of the general nature (FORMAL CORRECTION: I SHOWED YOU THEIR PICTURES, MAX) of the unpublished experimental results and ideas of Dr. M. H. F. Wilkins, Dr. R. E. Franklin (FORMAL CORRECTION: YOU PEEKED, "DARK LADY") and their co-workers at King’s College, London. (COMMENT: OUR TWO FOLLOWING PAPERS ARE DATA, YOURS IS A LEAP, MAURY)

ARCADES TO THE BORDER
Walter Benjamin (1895–1940), the quintessential European intellect and literary omnivore, would have loved having a COMMENTS and FORMAL CORRECTIONS option at his fingertips. [snip]
More to the point: much of the Arcades Project prefigures the home page of a social network on the Web. Benjamin literally explores a network: the linked indoor shopping arcades of nineteenth-century Paris, the Passages (16). I imagine a Benjamin today, reincarnated as the perennial flâneur, who follows a path in the Arcade of Panoramas. He stops occasionally at one site or another. The flâneur ambles (surfs) along a protected space (MySpace) in which bustling crowds are reflected in shiny Windows. He adjusts his cravat in a store-front mirror (Facebook), and when the bell-tone rings in his pocket, he takes out his timepiece (Blackberry). He looks past his mirror image (YouTube) to find two generations of followers (Twitter).
Were Benjamin to log on to Twitter, he’d have thousands of tweets on hand to send to generations of followers. [snip] In the century of the common man, film was art without "aura" and accessible to all:
Magician and surgeon compare to painter and cameraman. The painter maintains in his work a natural distance from reality, the cameraman penetrates deeply into its web...Thus, for contemporary man the representation of reality by the film is incomparably more significant than that of the painter, since it offers, precisely because of the thoroughgoing permeation of reality with mechanical equipment, an aspect of reality which is free of all equipment (2).

I can see Benjamin now tweeting, now twoopsing, now blogging, now surfing, now scrolling. His thumbs move quickly over the tiny keys—the sandbox of images in sight. He tweets directly to Biz Stone and the other followers of WB (his nom-de-tweet), an upbeat quote from Paul Valéry (1928). Valéry and WB were sure that other great gadgets would soon supplant celluloid film:
Pretty good prediction, no? Isn’t that "simple movement of the hand" what the thumbs are doing these days on a Blackberry? The quote is also about twice the 140 characters that Biz Stone permits, but heck, WB could have split it in two.
It’s less than 140 characters. I’d bet that Benjamin would have been at home in our new world of texting and tweets, blogs and hand-helds. In the Age of Twitter, he’d be ready to play in the sandbox of ideas, and we await his FASEB Journal essay in "Milestones."
Source and Full Text (Open Access?)
Thursday, August 6, 2009
ViXra is an open repository for new scientific articles. It does not endorse e-prints accepted on its website, nor does it review them against criteria such as correctness or author's credentials. It has been founded by scientists who find they are unable to submit their articles to arXiv.org because of Cornell University's policy of endorsements and moderation, designed to filter out e-prints that they consider inappropriate.
In 1991 the electronic e-print archive, now known as arXiv.org, was founded at Los Alamos National Laboratory. In the early days of the World Wide Web it was open to submissions from all scientific researchers, but gradually a policy of moderation was employed to block articles that the administrators considered unsuitable. In 2004 this was replaced by a system of endorsements to reduce the workload and place the responsibility of moderation on the endorsers. The stated intention was to permit anybody from the scientific community to continue contributing. However, many of us who had successfully submitted e-prints before then found that we were no longer able to do so. Even those with doctorates in physics and long histories of publication in scientific journals can no longer contribute to the arXiv unless they can find an endorser in a suitable research institution.
The policies of Cornell University, which now controls the arXiv, are so strict that even when someone succeeds in finding an endorser, their e-print may still be rejected or moved to the "physics" category of the arXiv, where it is likely to get less attention. Those who endorse articles that Cornell finds unsuitable are under threat of losing their right to endorse, or even their own ability to submit e-prints. Given the harm this might cause to their careers, it is no surprise that endorsers are very conservative when considering articles from people they do not know. [snip]
It is inevitable that viXra will therefore contain e-prints that many scientists will consider clearly wrong and unscientific. However, it will also be a repository for new ideas that the scientific establishment is not currently willing to consider. Other perfectly conventional e-prints will be found here simply because the authors were not able to find a suitable endorser for the arXiv or because they prefer a more open system. It is our belief that anybody who considers themselves to have done scientific work should have the right to place it in an archive in order to communicate the idea to a wide public. They should also be allowed to stake their claim of priority in case the idea is recognised as important in the future.
In part viXra.org is a parody of arXiv.org to highlight Cornell University's unacceptable censorship policy. It is also an experiment to see what kind of scientific work is being excluded by the arXiv. But most of all it is a serious and permanent e-print archive for scientific work. Unlike arXiv.org it is truly open to scientists from all walks of life.
Fledgling site challenges arXiv server