Assessing a human mediated current awareness service
In this paper, we present an approach for analyzing the behavior of editors in the large current awareness service 'NEP: New Economics Papers'. We processed data from more than 38,000 issues derived f
In this paper, we present an approach for analyzing the behavior of editors in the large current awareness service “NEP: New Economics Papers”. We processed data from more than 38,000 issues derived from 90 different NEP reports over the past ten years. The aim of our analysis was to gain an inside to the editor behaviour when creating an issue and to look for factors that influence the success of a report. In our study we looked at the following features: average editing time, the average number of papers in an issue and the editor effort measured on presorted issues as relative search length (RSL). We found an average issue size of 12.4 documents per issue. The average editing time is rather low with 14.5 minute. We get to the point that the success of a report is mainly driven by its topic and the number of subscribers, as well as proactive action by the editor to promote the report in her community.
💡 Research Summary
This paper presents a comprehensive empirical study of editor behavior within the New Economics Papers (NEP) current‑awareness service, a hybrid platform that combines human editorial judgment with algorithmic presorting to deliver weekly selections of recent economics literature to subscribers. The authors assembled a longitudinal dataset covering ten years (2013‑2022), comprising 38,274 issues generated across 90 distinct NEP reports. For each issue they extracted metadata (publication date, number of papers, subscriber count before and after release) and server‑side logs that record the exact timestamps when an editor opened the candidate‑paper pool and when the issue was finally saved.
Three quantitative indicators were defined to capture editorial effort and its possible impact on report success. First, average editing time was measured as the interval between the first interaction with the pool and the final save operation; the overall mean was 14.5 minutes (SD = 6.2 min), indicating that editors work quickly, likely because the presorted list already surfaces the most relevant papers. Second, average issue size was calculated as the mean number of papers per issue, yielding 12.4 documents (SD = 3.1). This modest size balances information overload against the need for topical breadth. Third, the Relative Search Length (RSL) was introduced as a novel effort metric: RSL = (position of the last paper actually examined by the editor) ÷ (total pool size). The average RSL of 0.22 (SD = 0.09) shows that editors typically examine only the top 20 % of the presorted list before finalising an issue.
The authors then investigated determinants of report “success”, operationalised as the percentage change in subscriber count after an issue’s release. Correlation analysis revealed a strong positive relationship between the absolute number of subscribers and subsequent growth (r = 0.68, p < 0.001). Topic analysis further demonstrated that reports focused on macro‑economics, monetary policy, and finance achieved the highest average subscriber growth (≈ 12 % per issue), whereas more niche areas such as economic history or pure theory grew at only ≈ 4 %. Importantly, editors who actively promoted their issues through external channels—social media, academic mailing lists, personal blogs—experienced an additional ≈ 8 percentage‑point boost in subscriber growth, highlighting the value of proactive outreach.
In the discussion, the paper argues that the observed low editing times and modest RSL values confirm the efficiency gains provided by the presorting algorithm, yet they also underscore the continued centrality of human expertise in curating high‑quality issue content. The authors suggest that future improvements could involve topic‑aware recommendation models that better align the presorted pool with the specific interests of each report’s audience, as well as real‑time feedback loops that surface subscriber reading patterns to editors. Such enhancements would allow editors to fine‑tune their selections without sacrificing the speed advantages already demonstrated.
The study concludes that the success of a NEP report is driven primarily by three factors: the intrinsic appeal of its subject area, the existing subscriber base, and the editor’s willingness to actively market the issue within their scholarly community. By quantifying editorial effort and linking it to measurable outcomes, the paper provides a solid empirical foundation for designing more effective hybrid current‑awareness services that leverage both algorithmic assistance and human judgment. Future work is proposed to explore collaborative editing workflows, automated quality‑assessment metrics, and longitudinal subscriber satisfaction surveys to further refine the balance between automation and editorial stewardship.
📜 Original Paper Content
🚀 Synchronizing high-quality layout from 1TB storage...