
Simulating systematic bias in attributed social networks and its effect on rankings of minority nodes (2010.11546v2)

Published 22 Oct 2020 in cs.SI

Abstract: Network analysis provides powerful tools for learning about a variety of social systems. However, most analyses implicitly assume that the relational data under consideration is error-free and reliable, and that it accurately reflects the system to be analysed. Especially if the network consists of multiple groups, this assumption conflicts with a range of systematic biases, measurement errors and other inaccuracies that are well documented in the literature. To investigate the effects of such errors, we introduce a framework for simulating systematic bias in attributed networks. Our framework enables us to model erroneous edge observations that are driven by external node attributes, or errors arising from the (hidden) network structure itself. We illustrate how systematic inaccuracies distort the conclusions of a concrete network analysis task: the representation of minorities in degree-based rankings. By analysing synthetic and real networks with varying homophily levels and group sizes, we find that introducing systematic edge errors can either strongly increase or strongly decrease the ranking of the minority. The observed effect depends both on the type of edge error considered and on the level of homophily in the system. We thus conclude that the implications of systematic bias in edge data depend on an interplay between network topology and the type of systematic error. This emphasises the need for an error-model framework such as the one developed here, which provides a first step towards studying the effects of systematic edge uncertainty in various network analysis tasks.
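The pipeline the abstract describes can be sketched in a few lines: generate a homophilic attributed network with a minority group, apply an attribute-driven edge error (here, inter-group edges are missed with some probability), and compare the minority's share of the top of the degree ranking before and after. This is a minimal illustrative sketch, not the paper's exact generative or error model; all function names and parameters (`homophilic_graph`, `h`, `p_drop`, etc.) are our own assumptions.

```python
import random

def homophilic_graph(n, minority_frac, h, m_edges, seed=0):
    # Illustrative attributed-network generator (not the paper's exact model):
    # each new node links to m_edges earlier nodes, preferring same-group
    # neighbours with weight h and cross-group neighbours with weight 1 - h.
    rng = random.Random(seed)
    group = [1 if rng.random() < minority_frac else 0 for _ in range(n)]
    edges = set()
    for v in range(1, n):
        targets = list(range(v))
        for _ in range(min(m_edges, v)):
            weights = [h if group[u] == group[v] else 1 - h for u in targets]
            u = rng.choices(targets, weights=weights)[0]
            edges.add((min(u, v), max(u, v)))
    return group, edges

def drop_intergroup_edges(group, edges, p_drop, seed=0):
    # Systematic, attribute-driven error: each inter-group edge is
    # independently missed (not observed) with probability p_drop.
    rng = random.Random(seed)
    return {(u, v) for (u, v) in edges
            if group[u] == group[v] or rng.random() >= p_drop}

def minority_share_topk(group, edges, k):
    # Fraction of minority nodes among the k highest-degree nodes.
    deg = [0] * len(group)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    top = sorted(range(len(group)), key=lambda i: -deg[i])[:k]
    return sum(group[i] for i in top) / k

if __name__ == "__main__":
    group, true_edges = homophilic_graph(n=500, minority_frac=0.2,
                                         h=0.8, m_edges=4)
    observed_edges = drop_intergroup_edges(group, true_edges, p_drop=0.5)
    print("minority share in top 50 (true):    ",
          minority_share_topk(group, true_edges, k=50))
    print("minority share in top 50 (observed):",
          minority_share_topk(group, observed_edges, k=50))
```

Varying `h` (homophily) and `p_drop` in this sketch reproduces the qualitative point of the abstract: the same error model can raise or lower the minority's visibility in the ranking depending on the topology it acts on.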
