
Optimizing and Fine-tuning Large Language Model for Urban Renewal (2311.15490v1)

Published 27 Nov 2023 in cs.CL and cs.AI

Abstract: This study explores adaptive applications of large language models (LLMs) in urban renewal and aims to improve model performance and text-generation quality on knowledge question-answering (QA) tasks. Based on ChatGLM, we automatically generate QA datasets from an urban renewal scientific literature corpus in a self-instruct manner and then jointly fine-tune the model using the Prefix tuning and LoRA methods to create an LLM for urban renewal. By guiding the LLM to generate QA data from prompt words and given text, datasets in the urban renewal field can be obtained quickly, providing data support for fine-tuning LLMs. The experimental results show that the proposed joint fine-tuning method significantly improves LLM performance on the QA tasks: compared with LoRA fine-tuning alone, it improves the BLEU and ROUGE metrics on the test set by about 5%, and compared with the model before fine-tuning, it improves them by about 15%-20%. This study demonstrates the effectiveness and superiority of joint Prefix and LoRA fine-tuning of ChatGLM for urban renewal knowledge QA tasks and provides a new approach for fine-tuning LLMs on urban renewal-related tasks.
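
The self-instruct data-generation step described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the model name, prompt wording, the expected "Q:/A:" reply format, and the `generate_qa_pairs` helper are all assumptions made for the example.

```python
# Hypothetical sketch: prompt ChatGLM to produce QA pairs from literature
# passages and collect them into an instruction-tuning dataset.
import json
import re
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "THUDM/chatglm-6b"  # assumed base checkpoint; the paper builds on ChatGLM

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code=True)
model = AutoModel.from_pretrained(MODEL_NAME, trust_remote_code=True).half().cuda().eval()

PROMPT_TEMPLATE = (
    "Read the following passage from the urban renewal literature and write "
    "{n} question-answer pairs a domain expert might ask.\n"
    "Format each pair as:\nQ: <question>\nA: <answer>\n\nPassage:\n{passage}"
)

def generate_qa_pairs(passage: str, n: int = 3):
    """Ask the model for QA pairs about one passage and parse the reply."""
    prompt = PROMPT_TEMPLATE.format(n=n, passage=passage)
    # ChatGLM's remote code exposes a `chat` helper; other models would use `generate`.
    reply, _history = model.chat(tokenizer, prompt, history=[])
    pairs = re.findall(r"Q[::]\s*(.+?)\s*A[::]\s*(.+?)(?=\nQ[::]|\Z)", reply, flags=re.S)
    return [{"instruction": q.strip(), "output": a.strip()} for q, a in pairs]

if __name__ == "__main__":
    passages = ["Urban renewal projects in dense city cores often involve ..."]  # corpus excerpts
    dataset = [qa for p in passages for qa in generate_qa_pairs(p)]
    with open("urban_renewal_qa.json", "w", encoding="utf-8") as f:
        json.dump(dataset, f, ensure_ascii=False, indent=2)
```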

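For the parameter-efficient fine-tuning itself, the sketch below shows only the LoRA half of the setup using the Hugging Face peft library; the rank, alpha, dropout, and target-module choices are assumptions, and the joint combination with Prefix tuning reported in the paper is not reproduced here.

```python
# Hypothetical sketch: attach LoRA adapters to ChatGLM with peft.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModel

model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # low-rank update dimension (assumed value)
    lora_alpha=32,                        # scaling factor (assumed value)
    lora_dropout=0.1,
    target_modules=["query_key_value"],   # ChatGLM's fused attention projection
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only the LoRA matrices are trainable
```

The base weights stay frozen; training such a wrapped model with a standard causal-LM loop updates only the low-rank adapter matrices.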
Authors (8)
  1. Xi Wang (275 papers)
  2. Xianyao Ling (1 paper)
  3. Tom Zhang (5 papers)
  4. Xuecao Li (2 papers)
  5. Shaolan Wang (1 paper)
  6. Zhixing Li (9 papers)
  7. Liang Zhang (357 papers)
  8. Peng Gong (11 papers)
Citations (5)