Binary Scientific Star Coauthors Core Size

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

It is examined whether the relationship $J = A/r^{\alpha}$, and the subsequent coauthor core notion (Ausloos 2013), between the number ($J$) of joint publications (JP) by a “main scientist” (LI) and her/his coauthors (CAs), can be extended to a team-like system. This is done by considering that each coauthor can be so strongly tied to the LI that the two form a *binary scientific star* (BSS) system with respect to their other collaborators. Moreover, publications in peer-review journals and in “proceedings”, often thought to be of “different quality”, are distinguished separately. The role of the time interval over which $J$ and $\alpha$ are measured is also examined, and new indirect measures are introduced. To make the point, two LI cases with numerous CAs are studied. It is found that only a few BSS need be examined in detail. The exponent $\alpha$ turns out to depend only weakly on the “second scientist”, but remains “size” and “publication type” dependent, according to the number of CAs or JP. The CA core value is likewise found to depend on (CA or JP) size and publication type, but stays within an understandable range. Somewhat unexpectedly, no marked qualitative difference in the binary scientific star CA core value is found between publications in peer-review journals and in proceedings. In conclusion, some remarks are made on partner cooperation in BSS teams. It is suggested that such measures can serve as criteria for distinguishing the roles of scientists in a team.


💡 Research Summary

The paper investigates whether the co‑author core concept introduced by Ausloos (2013) can be extended from a single “leading investigator” (LI) to a team‑oriented framework. In the original work, the number of joint publications (J) that an LI has with each co‑author (CA) follows a Zipf‑like law J ∝ 1/r, where r is the rank of the CA by productivity. Ausloos later refined this to the power law J = A / r^α (α ≤ 1) and defined the mₐ index as the largest rank r for which r ≤ J(r). The mₐ index therefore measures the size of the LI’s “core” of most important collaborators, analogous to the h‑index for papers.
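As a concrete illustration, the mₐ rule above (the largest rank r with r ≤ J(r)) can be computed directly from a list of joint‑publication counts. A minimal sketch follows; the counts in the example are invented for illustration, not taken from the paper:

```python
def m_a_index(joint_pubs):
    """m_a core index: largest rank r such that r <= J(r),
    where J(r) is the r-th largest joint-publication count."""
    ranked = sorted(joint_pubs, reverse=True)  # J(1) >= J(2) >= ...
    m = 0
    for r, j in enumerate(ranked, start=1):
        if r <= j:
            m = r
        else:
            break  # counts are non-increasing, so no later rank can qualify
    return m

# Hypothetical counts of joint publications with 10 coauthors:
counts = [20, 15, 9, 7, 5, 4, 4, 2, 1, 1]
print(m_a_index(counts))  # -> 5 (rank 5 still has J(5) = 5 >= 5)
```

The logic mirrors the h‑index computation, with coauthors playing the role of papers.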

The present study introduces the notion of a “binary scientific star” (BSS), i.e., the pair formed by an LI and a specific CA. The authors ask whether the same power‑law holds for the distribution of joint publications within each BSS, and whether the parameters α and mₐ depend on the type of publication (peer‑reviewed journal versus “proceedings”, the latter including conference papers, book chapters, encyclopedia entries) or on the time window considered.

Two prolific scientists are examined: H. E. Stanley (HES) and M. Ausloos (MA). Their publication records comprise more than 1,100 papers (HES) and 600 papers (MA), with roughly 600 and 300 distinct co‑authors respectively. The data are split into four categories: (i) all publications, (ii) only journal articles (j), (iii) only proceedings‑type items (p), and (iv) the sum of (j) and (p). Moreover, each category is divided into two time intervals – a long early period of roughly 30 years and a more recent period of roughly 10 years – yielding 18 distinct data sets.

Statistical analysis proceeds by ranking each CA (or each BSS partner) according to the number of joint publications J(r), plotting log J versus log r, and fitting the power law J = A / r^α by non‑linear least squares. The goodness of fit is assessed via the coefficient of determination R². The mₐ index is obtained directly from the ranked list as the largest r satisfying r ≤ J(r). Additional aggregate measures are introduced: the total number of joint publications X = ∑_{r=1}^{r_M} J(r), and the skewness (skw) and kurtosis (krt) of the J‑distribution, which capture its asymmetry and tail heaviness.
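The fitting step can be sketched as follows. The paper fits J = A / r^α by non‑linear least squares; as a simple first‑pass equivalent, the sketch below fits a straight line to the log‑log data (log J = log A − α log r) by ordinary least squares and reports α, A and R². The counts are illustrative, not from the paper:

```python
import math

def fit_power_law(joint_pubs):
    """Fit J(r) = A / r**alpha on ranked counts (all counts must be > 0).
    Returns (alpha, A, R^2) from an OLS line on the log-log data."""
    ranked = sorted(joint_pubs, reverse=True)
    xs = [math.log(r) for r in range(1, len(ranked) + 1)]
    ys = [math.log(j) for j in ranked]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx              # slope = -alpha
    intercept = my - slope * mx    # intercept = log A
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1.0 - ss_res / ss_tot
    return -slope, math.exp(intercept), r2

# Hypothetical ranked joint-publication counts for one LI:
counts = [32, 16, 11, 8, 6, 5, 5, 4, 3, 3, 2, 2, 2, 1, 1]
alpha, A, r2 = fit_power_law(counts)
print(f"alpha = {alpha:.2f}, A = {A:.1f}, R^2 = {r2:.3f}")
```

On data generated exactly as J(r) = A / r, the fit recovers α = 1 with R² = 1; real ranked counts deviate from the line, which is what the R² diagnostic in the paper quantifies.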

Key findings are:

  1. Power‑law validity – Across all 18 data sets and for each of the four selected BSS pairs (HES & SH, HES & SB, MA & RC, MA & NV) the log‑log plots are essentially linear. R² values range from 0.86 to 0.99, confirming that the J = A / r^α model is appropriate for describing co‑author productivity both at the LI level and within individual BSS.

  2. Exponent α – The fitted α values lie between 0.85 and 1.14. α shows a weak dependence on the “second scientist” (the CA forming the BSS) but is largely stable across publication types. Proceedings tend to yield slightly lower α (i.e., a flatter decay) than journal articles, yet the differences are not statistically significant.

  3. Core size mₐ – For the full LI‑wide co‑author set, mₐ ranges from 10 to 26. Within BSS, mₐ values are smaller, typically between 5 and 15, reflecting that only a handful of collaborators dominate the joint output of a given pair. Proceedings generally produce a modest reduction in mₐ (by 2–3 units) compared with journals, but the core remains within an understandable range.

  4. King and Queen effects – Some BSS exhibit an upturn at r = 1 (the “king effect”) and a plateau or gentle decline for r = 2‑3 (the “queen effect”). These patterns indicate that a single CA can be exceptionally tightly coupled to the LI, while the next few collaborators still retain relatively high productivity.

  5. Distribution shape – All J‑distributions are positively skewed (skw > 3) and leptokurtic (krt > 10), confirming that a small number of co‑authors account for a disproportionate share of joint publications—a hallmark of clustered collaboration networks.

  6. Temporal evolution – Comparing the early (≈30 yr) and later (≈10 yr) windows reveals a slight decrease in α (≈0.05) and a modest increase in mₐ (1–2 units) in the more recent period. This suggests that research teams have grown larger, with a broader core of collaborators, while the rank‑productivity decay becomes marginally less steep.

The authors conclude that the BSS framework, together with the mₐ index, provides a concise quantitative description of scientific teamwork. The lack of a pronounced difference between journal and proceedings outputs challenges the common perception that proceedings are of lower “quality”. Moreover, the identification of king/queen effects offers a simple visual cue for detecting dominant partnerships within a research group.

Potential applications include: (i) assessing the role and influence of individual scientists within large collaborations, (ii) informing funding agencies or institutions about the structure and stability of research teams, and (iii) guiding the design of policies that encourage balanced collaboration rather than over‑reliance on a few “star” partners.

Future work is suggested in several directions: extending the analysis to higher‑order “stars” (triples, quadruples), integrating citation impact to combine productivity and influence, testing the methodology across diverse disciplines, and exploring dynamic network models that capture the evolution of BSS relationships over time.

