Code Word Detection in Fraud Investigations using a Deep-Learning Approach (2103.09606v1)

Published 17 Mar 2021 in cs.CL and cs.AI

Abstract: In modern litigation, fraud investigators often face an overwhelming number of documents that must be reviewed throughout a matter. In the majority of legal cases, fraud investigators do not know beforehand exactly what they are looking for, nor where to find it. In addition, fraudsters may use deception to hide their behaviour and intentions by using code words. Effectively, this means fraud investigators are looking for a needle in the haystack without knowing what the needle looks like. As part of a larger research program, we use a framework that applies text-mining and machine learning techniques to expedite the investigation process. We structure this framework around three well-known methods in fraud investigations: (i) the fraud triangle, (ii) the golden ("W") investigation questions, and (iii) the analysis of competing hypotheses. With this framework, it is possible to automatically organize investigative data, making it easier for investigators to find answers to typical investigative questions. In this research, we focus on one component of this framework: the identification of code word usage by fraudsters. To this end, a novel annotated synthetic data set is created containing such code words hidden in normal email communication. Subsequently, a range of machine learning techniques is employed to detect these code words. We show that the state-of-the-art BERT model significantly outperforms other methods on this task. With this result, we demonstrate that deep neural language models can reliably (F1 score of 0.9) be applied in fraud investigations for the detection of code words.
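The abstract frames code word detection as a classification task over email text solved with BERT. Below is a minimal sketch of one plausible setup, token classification with the Hugging Face transformers library; the base checkpoint, label names, and example sentence are illustrative assumptions, not the authors' code, and the classification head would first need fine-tuning on the paper's annotated synthetic email data set before its predictions mean anything.

```python
# Hypothetical sketch: code word detection as BERT token classification.
# Not the paper's implementation; model name, labels, and text are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Two labels: O = ordinary token, CODE = suspected code word.
labels = ["O", "CODE"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels)
)

# Example email sentence; in the paper's setting, the model would be
# fine-tuned on the annotated synthetic emails before inference.
text = "Ship the pineapples to the usual address before Friday."
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**enc).logits  # shape: (1, seq_len, num_labels)
pred = logits.argmax(dim=-1)[0]   # per-token label ids

tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
for tok, p in zip(tokens, pred):
    print(f"{tok:>12}  {labels[p]}")
```

The reported F1 score of 0.9 refers to the fine-tuned model evaluated on the held-out synthetic data; with the untrained head above, the printed labels are random.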
