Anne-Marie Kermarrec, François Taïani, and Juan M. Tirado

Scaling Out Link Prediction with SNAPLE: 1 Billion Edges and Beyond

16th ACM/IFIP/USENIX International Middleware Conference, Vancouver, Canada, 7-11 December 2015, pp. 247-258 (12 p.)

A growing number of organizations are seeking to analyze extra-large graphs in a timely and resource-efficient manner. With some graphs containing well over a billion elements, these organizations are turning to distributed graph-computing platforms that can scale out easily in existing data-centers and clouds. Unfortunately, such platforms usually impose programming models that can be ill-suited to typical graph computations, fundamentally undermining their potential benefits. In this paper, we consider how the emblematic problem of link prediction can be implemented efficiently in gather-apply-scatter (GAS) platforms, a popular distributed graph-computation model. Our proposal, called SNAPLE, exploits a novel, highly localized vertex-scoring technique, and minimizes the cost of data flow while maintaining prediction quality. When used within GraphLab, SNAPLE can scale to very large graphs that a standard implementation of link prediction on GraphLab cannot handle. More precisely, we show that SNAPLE can process a graph containing 1.4 billion edges on a 256-core cluster in less than three minutes, with no penalty in the quality of predictions. This result corresponds to an over-linear speedup of 30 against a 20-core standalone machine running a non-distributed state-of-the-art solution.

ACM Copyright Notice: © ACM, 2015. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the 16th Annual Middleware Conference (Middleware '15).
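For readers unfamiliar with the problem the paper addresses: link prediction typically scores non-adjacent vertex pairs by comparing their neighbourhoods. The sketch below illustrates this general family with a classic Jaccard-coefficient baseline; it is not the SNAPLE algorithm itself, and the graph and function names are illustrative only.

```python
# Minimal illustration of neighbourhood-based link prediction
# (the general family SNAPLE belongs to). This is a classic
# Jaccard-coefficient baseline, NOT the SNAPLE scoring technique.
from itertools import combinations

def jaccard_scores(adjacency):
    """Score every non-adjacent vertex pair u, v by the Jaccard
    coefficient of their neighbourhoods: |N(u) & N(v)| / |N(u) | N(v)|."""
    scores = {}
    for u, v in combinations(sorted(adjacency), 2):
        if v in adjacency[u]:
            continue  # edge already exists: nothing to predict
        union = adjacency[u] | adjacency[v]
        if union:
            scores[(u, v)] = len(adjacency[u] & adjacency[v]) / len(union)
    return scores

# Tiny example graph: vertices a and d are not connected,
# but share both of their neighbours (b and c).
graph = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d"},
    "d": {"b", "c"},
}
scores = jaccard_scores(graph)
# (a, d) is the only candidate pair and scores 1.0,
# so a--d is the predicted link.
```

A distributed GAS implementation would decompose such a score into per-edge gather steps aggregated at each vertex; the paper's contribution is doing this with highly localized data flow.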


