Beyond Worst-Case (In)approximability of Nonsubmodular Influence Maximization (1710.02827v2)

Published 8 Oct 2017 in cs.SI and cs.DS

Abstract: We consider the problem of maximizing the spread of influence in a social network by choosing a fixed number of initial seeds, formally referred to as the influence maximization problem. It admits a $(1-1/e)$-factor approximation algorithm if the influence function is submodular. Otherwise, in the worst case, the problem is NP-hard to approximate to within a factor of $N^{1-\varepsilon}$. This paper studies whether this worst-case hardness result can be circumvented by making assumptions about either the underlying network topology or the cascade model. All of our assumptions are motivated by many real-life social network cascades. First, we present strong inapproximability results for a very restricted class of networks called the (stochastic) hierarchical blockmodel, a special case of the well-studied (stochastic) blockmodel in which relationships between blocks admit a tree structure. We also provide a dynamic-programming-based polynomial-time algorithm which optimally computes a directed variant of the influence maximization problem on hierarchical blockmodel networks. Our algorithm indicates that the inapproximability result is due to the bidirectionality of influence between agent-blocks. Second, we present strong inapproximability results for a class of influence functions that are "almost" submodular, called 2-quasi-submodular. Our inapproximability results hold even for any 2-quasi-submodular $f$ fixed in advance. This result also indicates that the "threshold" between submodularity and nonsubmodularity is sharp with respect to the approximability of influence maximization.
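
To make the $(1-1/e)$ guarantee mentioned in the abstract concrete, below is a minimal sketch of the classic greedy seed-selection algorithm for monotone submodular influence functions (in the style of Kempe, Kleinberg, and Tardos, 2003), with spread estimated by Monte Carlo simulation of the independent cascade model. The graph representation, the uniform edge probability `p`, and the trial count are illustrative assumptions; this is not the paper's own algorithm, which concerns the nonsubmodular regime.

```python
import random

def simulate_cascade(graph, seeds, p=0.1):
    """One independent-cascade run; returns the set of activated nodes.

    graph: dict mapping each node to a list of out-neighbors.
    seeds: initially active nodes.
    p: uniform per-edge activation probability (an assumption).
    """
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        node = frontier.pop()
        for neighbor in graph.get(node, []):
            # Each newly active node gets one chance to activate each neighbor.
            if neighbor not in active and random.random() < p:
                active.add(neighbor)
                frontier.append(neighbor)
    return active

def estimate_spread(graph, seeds, p=0.1, trials=200):
    """Monte Carlo estimate of the expected influence of a seed set."""
    total = sum(len(simulate_cascade(graph, seeds, p)) for _ in range(trials))
    return total / trials

def greedy_influence_max(graph, k, p=0.1, trials=200):
    """Greedily add the seed with the largest estimated marginal gain.

    When the influence function is monotone and submodular, this greedy
    rule achieves the (1-1/e)-approximation; without submodularity the
    guarantee breaks down, which is the setting the paper studies.
    """
    seeds = set()
    for _ in range(k):
        base = estimate_spread(graph, seeds, p, trials)
        best_node, best_gain = None, float("-inf")
        for node in graph:
            if node in seeds:
                continue
            gain = estimate_spread(graph, seeds | {node}, p, trials) - base
            if gain > best_gain:
                best_node, best_gain = node, gain
        seeds.add(best_node)
    return seeds

if __name__ == "__main__":
    toy = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
    print(greedy_influence_max(toy, k=2))
```

The lazy-greedy (CELF) optimization is the standard way to make this practical on large networks, since submodularity lets stale marginal-gain estimates serve as upper bounds.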

Citations (16)
