LLaMA-E: Empowering E-commerce Authoring with Object-Interleaved Instruction Following (2308.04913v2)

Published 9 Aug 2023 in cs.CL, cs.AI, and cs.IR

Abstract: E-commerce authoring entails creating engaging, diverse, and targeted content to enhance preference elicitation and the retrieval experience. While LLMs have revolutionized content generation, they often fall short in e-commerce applications due to their limited memorization of domain-specific features. This paper proposes LLaMA-E, a set of unified e-commerce authoring models that address the contextual preferences of customers, sellers, and platforms, the essential objects in e-commerce operation. We design an instruction set derived from the tasks of ads generation, query-enhanced product title rewriting, product classification, purchase intent speculation, and general e-commerce Q&A. The instruction formulation ensures interleaved coverage of the presented and required object features, allowing the base models to be aligned to parameterise e-commerce knowledge comprehensively. The proposed LLaMA-E models achieve state-of-the-art evaluation performance and exhibit advantages in zero-shot practical applications. To our knowledge, this is the first LLM tailored to empower authoring applications with comprehensive scenario understanding by integrating features focused on the participating objects.

Authors (6)
  1. Kaize Shi (9 papers)
  2. Xueyao Sun (5 papers)
  3. Dingxian Wang (14 papers)
  4. Yinlin Fu (2 papers)
  5. Guandong Xu (93 papers)
  6. Qing Li (429 papers)
Citations (3)