
Generative Pretraining at Scale: Transformer-Based Encoding of Transactional Behavior for Fraud Detection

Published 22 Dec 2023 in cs.LG and cs.AI (arXiv:2312.14406v1)

Abstract: In this work, we introduce an autoregressive model based on Generative Pretrained Transformer (GPT) architectures, tailored for fraud detection in payment systems. Our approach addresses token explosion and reconstructs behavioral sequences, yielding a nuanced understanding of transactional behavior through temporal and contextual analysis. Through unsupervised pretraining, the model learns strong feature representations without the need for labeled data. We further integrate a differential convolutional approach to enhance anomaly detection, strengthening the security and efficacy of one of the largest online payment merchants in China. The model's scalability and adaptability promise broad applicability across transactional contexts.
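To make the "token explosion" problem concrete: naively treating every distinct transaction value as a vocabulary token blows up the vocabulary, so transaction fields are typically discretized before being fed to an autoregressive model. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual encoding scheme; the field names, bucket count, and log-scale bucketing are illustrative assumptions.

```python
import math

# Illustrative assumption: a small number of log-scale buckets keeps the
# amount vocabulary bounded regardless of how many distinct amounts occur.
AMOUNT_BUCKETS = 16

def amount_token(amount: float) -> str:
    """Map a non-negative amount to a coarse log-scale bucket token."""
    bucket = min(AMOUNT_BUCKETS - 1, int(math.log10(amount + 1.0) * 4))
    return f"AMT_{bucket}"

def encode_transaction(txn: dict) -> list:
    """Flatten one transaction into a short sequence of discrete tokens.
    The chosen fields (merchant category, amount, hour) are hypothetical."""
    return [
        f"MCC_{txn['merchant_category']}",
        amount_token(txn["amount"]),
        f"HOUR_{txn['timestamp_hour']}",
    ]

def encode_history(txns: list) -> list:
    """Concatenate per-transaction tokens into one behavioral sequence,
    with a separator so an autoregressive model can learn event boundaries."""
    seq = []
    for t in txns:
        seq.extend(encode_transaction(t))
        seq.append("<SEP>")
    return seq
```

A GPT-style model would then be pretrained on such sequences with a next-token objective, no fraud labels required; bucketing trades fine-grained amount precision for a fixed vocabulary size.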

