Posted by: bluesyemre | January 19, 2021

Evaluating #Publishers as partners with #Libraries and #HigherEducation

Editor’s Note: Today’s post is by Rachel Caldwell. Rachel is the scholarly communication librarian at the University of Tennessee, Knoxville. She holds a dual Master’s degree in Information Science and Library Science from Indiana University Bloomington. Prior to her position in scholarly communication, she worked in museums and as an instruction librarian.

No library can subscribe to every publication that might be of interest to its communities. Most academic libraries base collections and resource-allocation decisions on quantifiable metrics, such as cost per use. These are important considerations, but they are not holistic. Traditional, quantifiable, collections-based metrics overlook a wide range of important aspects of the relationships between academic libraries, their institutions, and their suppliers. One such aspect is whether the business practices of a supplier or vendor align with the mission and values of the library.

Many vendors supplying academic libraries with collections and other resources engage in practices that are not only markedly out of step with the values of libraries but also misaligned with the broader values of many public institutions of higher education (HE). In North America, this is especially true among land-grant colleges and universities, both public and private, whose missions prioritize conducting research for the public good and providing broad access to education. Publishers and citation index providers are important examples of vendors often out of sync with such values, not least because of the tremendous amounts of money institutions of HE pay them. Journal publishers are particularly visible because of their work with faculty as authors, the role of journal metrics in retention, tenure, and promotion (RTP) decisions, and the recent spate of libraries canceling their “big deal” packages. All of this creates a great deal of tension between journal publishers, authors, institutions, and libraries, often centered on the “quality” of publications (using the Impact Factor, or IF, as a measure) versus the “costliness” of access, including not only subscriptions but also open access article processing charges (APCs).

To address this misalignment, libraries would benefit from evaluating how well a publisher’s practices align with the values of libraries, public and land-grant institutions, and some learned societies. Such a system would do several things. For example, it would inform library decision-makers about licensing choices, as well as aid in communication with a library’s campus community about the nature of the decisions it may face. To be clear, such an evaluation system would not be used by the typical institution as the sole metric for licensing decision-making, nor would it be used only in licensing decisions.

In a recent article, I proposed such a system provisionally: Publishers Acting as Partners with Public Institutions of Higher Education and Land-grant Universities (PAPPIHELU, or PAPPI for short). It is important to note that this system has not been vetted or used by any library or institution; it is a provisional system intended to provide an example of how values-based metrics could be applied to journal publishers. The evaluation criteria are based on values widely held by libraries, land-grant institutions of HE, and some learned societies. Notably, these societies serve primarily as a proxy for researchers, who often share many of the same values as libraries and HE institutions. Under the PAPPI evaluation system, a journal publisher’s practices in ten different categories earn points toward an overall score that reveals how well those practices align with the values of libraries, land-grant institutions, and some learned societies.

In other words, this provisional system evaluates how well a journal publisher acts as a partner with land-grant institutions and libraries. The partnership criteria stem from three key values identified by examining the histories and missions (including those of relevant professional organizations) of the three entities:

  • Democratization of information and education emphasizes the value of public access to knowledge, a value held by libraries and institutions, especially land-grant institutions. PAPPI criteria relating to this value include public access and open access (OA) to scholarship/research (and not just through immediate “gold” or “diamond/platinum” OA publishing); a publisher’s history of lobbying against government policies meant to broaden access to research and scholarly literature; and permitting reuse of articles for noncommercial educational purposes.
  • A second value, information exchange, relates to making research/scholarship findable and shareable through open, interoperable metadata and long-term, nonprofit-run preservation. Information exchange ensures that literature reviews are well informed, leading to solid studies and scholarship; thus, many learned societies share this value with academic libraries. Criteria related to this value include the following: the publisher voluntarily contributes metadata to repositories and indexes; metadata is openly accessible; authors are identified with ORCID iDs; and journals are archived or preserved with CLOCKSS, Portico, or a similar service.
  • Finally, all three entities (academic libraries, land-grant institutions of HE, and some learned societies) value the sustainability of scholarship as a scholarly enterprise. This value relates to the agency of authors and institutions and is especially important in evaluating whether a publisher acts as a partner. Related criteria include not only reasonable APCs but also policies that allow researchers at any institution to contribute to the scholarly record if their work meets peer-review standards; researchers retaining agency over their work through retention of copyright and/or the least-restrictive Creative Commons licenses that facilitate access and reuse; whether contracts with libraries/institutions require non-disclosure agreements; and permitting text and data mining at no cost for scholarly purposes. (For particulars on these categories and credits, please see the PAPPI wiki.)

Points earned by a publisher result in PAPPI designations similar to the U.S. Green Building Council’s Leadership in Energy and Environmental Design (LEED) certification for buildings. In both, the designations represent levels of achievement in meeting the system’s outlined practices. LEED has four levels of certification (platinum, gold, silver, and certified) for the construction of buildings. PAPPI proposes three levels of credit for publishers’ practices: PAPPI Tier 1 for publishers earning at least 62 of the 89 possible points (70%); PAPPI Tier 2 for those earning at least 45 points (51%); and PAPPI Tier 3 for those earning at least 27 points (30%).
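
To make the arithmetic concrete, here is a minimal sketch of the tier mapping in Python. The thresholds come from the proposal above; the category names and point values in the example are illustrative placeholders drawn from criteria mentioned in this post, not the actual credits in the PAPPI wiki.

```python
# Minimal sketch of PAPPI tier assignment. Thresholds are from the
# proposal above: 89 possible points; Tier 1 >= 62, Tier 2 >= 45,
# Tier 3 >= 27. Category names and point values below are illustrative
# placeholders, not the actual credits from the PAPPI wiki.

MAX_POINTS = 89
TIERS = [(62, "Tier 1"), (45, "Tier 2"), (27, "Tier 3")]

def pappi_tier(category_points):
    """Sum per-category points and map the total to a PAPPI tier."""
    total = sum(category_points.values())
    for threshold, tier in TIERS:
        if total >= threshold:
            return f"PAPPI {tier} ({total}/{MAX_POINTS} points)"
    return f"No tier ({total}/{MAX_POINTS} points)"

# Hypothetical publisher scored on a few of the ten categories:
example = {
    "public/open access": 12,
    "discoverability (metadata, ORCID)": 8,
    "preservation (CLOCKSS/Portico)": 5,
    "copyright and licensing": 10,
    "publishing practices (incl. COPE)": 11,
}
print(pappi_tier(example))  # -> PAPPI Tier 2 (46/89 points)
```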

To test this partnership scoring system, I evaluated the practices of four different journal publishers. I included Evolutionary Ecology Limited (EEL) and the Society for Neuroscience (SfN), which publish one and two journals, respectively. I also evaluated Elsevier and the University of California/California Digital Library’s eScholarship Publishing, taking a random sample of journal titles from each of their respective publication lists. While, unsurprisingly, the journals published by the library and by the learned society scored highest (PAPPI was designed with library and society values in mind), a few observations from the results bear special attention.

[Table: Scores for Selected Publishers. A more detailed breakdown of these scores is available in the PAPPI wiki. Table reused under a CC BY license from “A Provisional System to Evaluate Journal Publishers Based on Partnership Practices and Values Shared with Academic Institutions and Libraries”, https://doi.org/10.3390/publications8030039.]

I deliberately chose to evaluate a society publisher that was a member of the Scientific Society Publisher Alliance (SSPA), in part because many SSPA members publish their own journals. Those not working with a larger publisher are able to set their own publishing policies and priorities in line with their missions and values. Though not all societies share the values of libraries and land-grant institutions, many SSPA members do. For example, SSPA members publish both hybrid OA journals and fully OA journals, meaning that they do not publish all articles “gold” OA by default; they still maintain subscription-only content. But unlike many hybrid publishers, they make much of their subscription-only research publicly accessible after brief embargo periods.

Observation #1: Many SSPA members make 100% of articles available in PubMed Central (PMC) and in their journals after a three-, six-, or twelve-month embargo, not just the articles that authors paid to make immediately OA through the “gold” option, nor just those with NIH funding, taking the significant onus of “green” open archiving off institutions and authors. (Of the eleven society members, four deposit all journal content in PMC, SfN among them, and two deposit all content from half or more of their journals.) This, and the fact that they are mission-driven organizations grounded in supporting researchers and students in myriad ways beyond journal publishing, earned the SSPA-member publisher a high score as a partner with HE and libraries. Libraries may want to reach out to these publishers, and to the faculty making up these societies’ memberships, to begin cultivating conversations about partnerships and collaboration if they have not already done so.

In the same vein, it is no surprise that eScholarship Publishing scored highly. It is a member of both the Library Publishing Coalition and the Open Access Scholarly Publishers Association (OASPA), publishing only gold or platinum OA journals. Its APCs are low or nonexistent, and it operates with a mission-driven business model. Naturally, its journals benefit from the relationship with the library: metadata is well understood by the library publisher, so the eScholarship journals’ discoverability scores were comparatively high. In contrast, the faculty-owned publisher EEL scored well in many areas but suffered from low scores in discoverability and publishing practices (e.g., preservation, membership in industry organizations), the practices most aligned with the value of information exchange.

While its information exchange scores were low, EEL’s scores for practices related to democratization of access and sustainability (copyright terms, educational reuse, and business practices) were relatively high.

Observation #2: Allotting institutional resources to EEL’s journal, or to any journal in a similar position, to improve its visibility and impact makes sense when viewed through a partnership lens. That is, if a library is considering canceling an EEL subscription, it may want to offer the publisher additional (and different) support instead. This could begin as a consultation with librarians on metadata creation or preservation practices and might grow into an ongoing partnership for related services. This observation demonstrates PAPPI’s utility as a means of identifying potential partners that could benefit not just from libraries’ budgets but, first and foremost, from librarians’ knowledge of discoverability and best practices in publishing. These are intellectual resources for which librarians should be recognized, and of which likeminded, nonprofit publishers should avail themselves.

Elsevier scored lowest of the four evaluated publishers, largely due to practices related to democratization of access and sustainability of scholarship as a scholarly enterprise (i.e., its copyright and licensing terms, business model, and business practices, such as a history of lobbying against OA policies), but one criterion deserves special attention in the Elsevier evaluation. In “Publishing Practices,” three publishers, including Elsevier, scored highly, but of these, Elsevier was the only member of the Committee on Publication Ethics (COPE). COPE membership earns a publisher the highest number of points possible for any single credit in PAPPI (5 points). If the society publisher, SfN, became a COPE member (which, based on its current practices, could seemingly be done easily, as it meets nearly every COPE requirement), then the learned society would have the highest score in Publishing Practices. This is significant because the PAPPI metrics in Publishing Practices are arguably the least controversial, being focused not on rights, permissions, access, or business practices, but on practices and standards that scholarly publishers have, by and large, agreed to set for themselves. In other words, if a publisher scores highly in Publishing Practices, and especially if it is also a COPE member, it could be considered a “good publisher.” With these metrics in mind, SfN could be considered at least as good a publisher as Elsevier, if not better.

This leads to Observation #3: SfN’s Publishing Practices score demonstrates that it does as much as or more than the largest publisher in following and creating best practices, yet it and many other learned societies and small publishers often struggle more than larger publishers with visibility, attracting submissions, and improving their journals’ impact factors (IFs). A high IF is important to a journal’s success, and while there is broad agreement that many current uses of the IF are deeply flawed, HE continues to rely on it for many decisions, spurring a large number of submissions to the journals with the highest IFs. The situation is a catch-22. Values-based metrics could ease (not erase) some of the tension caused by relying on one dominant but flawed quantitative evaluation for journals (the IF) by providing additional metrics by which to compare publishers. In this case, the metrics would reflect commitment to access, information exchange, and the sustainability of scholarship as a scholarly enterprise.
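
For readers who want the number behind all this, the IF in question is the standard two-year journal Impact Factor (the Journal Citation Reports definition). For a given year $Y$:

$$\mathrm{IF}_Y = \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}$$

Nothing in this calculation reflects access, licensing, or preservation practices, which is exactly the gap a values-based metric like PAPPI is meant to fill.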

It is worth noting that Eugene Garfield, the creator of the IF, intended for librarians to use it in making subscription and collection decisions. Its application has since shifted in powerful ways. It is not wild to suggest that librarians want additional metrics by which to make resource decisions. Nor is it wild to suggest that publishers want additional metrics by which to demonstrate their support for researchers and highlight their contributions to the sustainability of scholarship.

No single metric can substitute for the IF. However, metrics comparing researchers’ individual scholarly outputs now include not only which journals accepted their work (and those journals’ IFs) but also individual citation metrics (including the Hirsch index, or h-index) and alternative metrics, for a more robust understanding of an author’s research contributions. And the San Francisco Declaration on Research Assessment (DORA) states that none of this is an adequate substitute for actually reading and considering the scholarly merit of each work. Hopefully, faculty are looking at all of these aspects when considering another researcher’s “success,” not just the accepting journals’ IFs. (Admittedly, it is an ongoing effort, but these additional metrics certainly help rather than hinder it.)
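
As an aside, the h-index mentioned above is straightforward to compute. Here is a minimal Python sketch of the standard definition: the largest h such that an author has h papers with at least h citations each.

```python
def h_index(citations):
    """Return the largest h such that the author has h papers
    with at least h citations each (standard Hirsch definition)."""
    ranked = sorted(citations, reverse=True)  # highest counts first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

print(h_index([25, 8, 5, 4, 3, 1]))  # -> 4 (four papers with >= 4 citations)
```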

Just as we continue to add depth to our evaluation of research itself, metrics that compare publishers’ practices can improve our understanding of each publisher’s interest in contributing toward the health of institutions and libraries and the independence of the scholarly enterprise. Hopefully, institutions and libraries look at all of these aspects when considering what it is that makes a publisher a good partner, not just its journals’ IFs.

This shift in perspective could influence resource allocation decisions in the future. For example, SfN publishes two journals, one fully OA and one hybrid. Many libraries will not pay APCs for a hybrid journal, yet those same libraries pay APCs to gold OA journals run by publishers that do not act as partners. Instead, a publisher with high scores across all PAPPI categories might warrant a library paying all APCs within reason, regardless of whether the journal is hybrid. Thus, publishers contributing to green OA efforts, through PMC or other means, might receive increased support if their practices align with partnership practices overall. On the flip side, a publisher with low scores, especially in practices related to democratization of access, may warrant little or no APC coverage by the library, and perhaps even a significant review of library subscriptions with that publisher.

Because PAPPI scores quantify practices and values so that publishers can be compared more easily and holistically, they could also aid libraries in making subscription cancellation decisions. Compare, for example, a PAPPI score to the metric of cost per use (CPU). Just as the IF is useful, flawed, and pervasive, so too is the CPU metric in library collection decisions. CPU is practical but also limited and lacking in vision, both in recognizing what libraries have to offer as partners with publishers and in capturing what benefits or harms publishers might bring to libraries and institutions. For example, if a publisher is contributing to public HE missions outside of libraries, perhaps through mission-driven support for graduate students and early career researchers, or through short embargo periods followed by publisher-fulfilled open archiving, these are important factors to consider, because library collection budgets, in many ways, belong to HE as a whole. Such publisher practices are simply not reflected in a CPU analysis.

What if, in addition to CPU, PAPPI scores factored into budgetary decisions and led to more cost-sharing opportunities between libraries and their institutions? Perhaps costs from any publisher with a sufficient level of usage and a high enough PAPPI score would be fully funded by the library (according to a library’s budget allowances), but a sufficient CPU and low PAPPI score would trigger a cost-share with university administration or particular colleges or departments. If colleges or administrations were asked to pay a percentage of costs invoiced by publishers that do not meet partnership values, that could lead to a number of interesting conversations. What the library saves could go into supporting open scholarship infrastructure, à la David Lewis.
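
As a minimal sketch of what such a policy could look like in practice: the CPU ceiling and the 60/40 cost-share split below are invented for illustration (the Tier 1 threshold is reused from the proposal above), and any real policy would set its own numbers.

```python
# Hypothetical budgeting rule combining cost per use (CPU) with a PAPPI
# score, per the cost-sharing scenario above. The CPU ceiling and the
# cost-share split are invented for illustration.

def funding_decision(annual_cost, annual_uses, pappi_points):
    cpu = annual_cost / annual_uses if annual_uses else float("inf")
    acceptable_cpu = cpu <= 15.00   # assumed CPU ceiling, in dollars
    tier_1 = pappi_points >= 62     # Tier 1 threshold from the proposal
    if acceptable_cpu and tier_1:
        return "fully funded by the library"
    if acceptable_cpu:
        return "cost-share: library 60%, college/department 40%"
    return "flag subscription for review"

# A well-used Tier 1 publisher: CPU = $12.00, 70 PAPPI points.
print(funding_decision(annual_cost=12000, annual_uses=1000, pappi_points=70))
# -> fully funded by the library
```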

Put another way, if PAPPI scores were applied by an institution or library, a PAPPI Tier 1 publisher might receive an increase in support from the library. A PAPPI Tier 2 publisher that scores highly in democratization of access and sustainability of the scholarly enterprise might also receive an increase in support. Resource allocations to PAPPI Tier 2 publishers with low scores in these same values would warrant further review, as would any allocation to a Tier 3 publisher or to a publisher falling below Tier 3.

PAPPI scores could be relevant to HE in other ways, too. Values-based metrics may be utilized by researchers, especially those at land-grant universities, for whom democratization of access and research for the public good can be institutional imperatives. In their CVs, these researchers might list not only the IF (which identifies top journals by journal citation counts) but also the PAPPI tier (which identifies top publishers by values and partner practices) of any journal in which they publish, identify any COPE member journals/publishers, and present a variety of other metrics to give a robust presentation of their work and its reach.

Additionally, a publisher’s PAPPI score, as a measure of partnership practices, could also be shared across an institution to help inform decision-making about the publisher’s parent company and their other products (e.g., data analytics products). And PAPPI, or any other values-based evaluation system, could expand beyond publishers to form the basis of similar scoring systems for any vendor: analytics companies, textbook publishers, scholarly monograph publishers, digital content providers in the humanities, citation index providers in the sciences, and so on. It would take work to develop appropriate metrics, but PAPPI provides a model for doing so.

There have been ongoing conversations about how libraries and institutions might evaluate publishers and vendors. PAPPI is just one model. It considers libraries not as islands but as organizations responsive to, and working with, researchers and institutions. Finding shared values among members of our community and using those values to evaluate organizations with which we work is an effort that impacts, and should include, all of us. Especially now, amidst the COVID-19 pandemic, as libraries and institutions decide how to spend shrinking budgets and diminishing energies, values-based metrics suggest that investing in the work of likeminded, “like-missioned” partners, and supporting under-resourced communities, may be a way to find real solutions to resource concerns. Even if somewhat costly in the short-term, pursuing these solutions matters now more than ever for long-term sustainability and self-determination. The PAPPI system for evaluating publisher practices is one way, albeit provisional, to identify publishing partners who have earned our support. I hope we can figure out how to give it to them.

Rachel Caldwell


https://bit.ly/39ICEJ6

