How to project a bipartite network?
One-mode projection is extensively used to compress bipartite networks. Since a one-mode projection is always less informative than the bipartite representation, a proper weighting method is required to better retain the original information. In this article, inspired by network-based resource-allocation dynamics, we propose a weighting method that can be directly applied to extract the hidden information of networks, with remarkably better performance than the widely used global ranking method as well as collaborative filtering. This work not only provides a credible method for compressing bipartite networks, but also points toward a better solution of a long-standing challenge in modern information science: how to do personal recommendation.
💡 Research Summary
The paper addresses a fundamental problem in network science and information retrieval: how to compress a bipartite (two‑mode) network into a one‑mode projection without discarding essential structural information. Traditional one‑mode projections simply count the number of shared neighbors between two nodes of the same type, thereby ignoring the heterogeneity of those neighbors and often leading to severe information loss. This limitation is especially problematic for recommendation systems, where the quality of inferred similarity directly impacts recommendation accuracy.
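To make the information loss concrete, here is a minimal sketch of the traditional shared-neighbor projection on a small, invented bipartite adjacency matrix (the matrix is an illustrative assumption, not data from the paper):

```python
import numpy as np

# Hypothetical toy bipartite adjacency matrix A: rows are X-nodes,
# columns are Y-nodes; A[i, l] = 1 if x_i is connected to y_l.
A = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
])

# Naive one-mode projection onto the X side: w[i, j] counts the
# neighbors shared by x_i and x_j. Every shared neighbor contributes
# the same amount, regardless of its degree -- this is exactly the
# heterogeneity that the naive projection discards.
W = A @ A.T
np.fill_diagonal(W, 0)  # self-weights carry no similarity information
```

Note that a hub neighbor connected to everyone inflates all pairwise counts equally, which is why degree-aware weighting (introduced next) matters.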
To overcome these shortcomings, the authors propose a weighting scheme inspired by the resource‑allocation (RA) dynamics that have been successfully applied to link prediction and diffusion processes. In the RA‑based projection, each node on one side of the bipartite graph distributes a unit of “resource” equally among its incident edges; each neighbor then redistributes the received resource equally among its own connections back to the original side. The amount of resource that finally flows from node j to node i on the same side becomes the weight w_{ij}. Formally, the weight can be expressed as
w_{ij} = \frac{1}{k(x_j)} \sum_{l=1}^{q} \frac{a_{il}\, a_{jl}}{k(y_l)},

where q is the number of nodes on the opposite side, k(\cdot) denotes node degree, and a_{il} = 1 if x_i is connected to y_l (and 0 otherwise).
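The two-step redistribution above can be computed in vectorized form. A minimal sketch, again on an invented toy adjacency matrix (an assumption for illustration, not data from the paper):

```python
import numpy as np

# Hypothetical bipartite adjacency matrix: rows are X-nodes x_i,
# columns are Y-nodes y_l; A[i, l] = 1 if x_i is connected to y_l.
A = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
], dtype=float)

kx = A.sum(axis=1)  # degrees k(x_j) of the X-nodes
ky = A.sum(axis=0)  # degrees k(y_l) of the Y-nodes

# Resource-allocation weights: W[i, j] is the fraction of x_j's unit
# resource that reaches x_i after the two redistribution steps, i.e.
#   w_ij = (1 / k(x_j)) * sum_l a_il * a_jl / k(y_l).
# (A / ky)[i, l]           = a_il / k(y_l)   -- second step
# (A / kx[:, None])[j, l]  = a_jl / k(x_j)   -- first step
W = (A / ky) @ (A / kx[:, None]).T
```

A useful sanity check on the asymmetric weights: each column of W sums to 1, because every node's unit of resource is conserved through both redistribution steps.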