The repository contains the code that accompanies our SIGDIAL 2024 paper.
> **Note**
> A Japanese version of this README is available in [README_ja.md](README_ja.md).
User reviews on e-commerce and review sites are crucial for making purchase decisions, although creating detailed reviews is time-consuming and labor-intensive. In this study, we propose a novel use of dialogue systems to facilitate user review creation by generating reviews from information gathered during interview dialogues with users. To validate our approach, we implemented our system using GPT-4 and conducted comparative experiments from the perspectives of system users and review readers. The results indicate that participants who used our system rated their interactions positively. Additionally, reviews generated by our system required less editing to achieve user satisfaction compared to those by the baseline. We also evaluated the reviews from the readers’ perspective and found that our system-generated reviews are more helpful than those written by humans. Despite challenges with the fluency of the generated reviews, our method offers a promising new approach to review writing.
Our system is designed to facilitate user review writing via an interview dialogue. It first interviews the user to gather detailed product information, then generates a review based on the dialogue history. The application is built with Flask and uses OpenAI models at each stage. Customization options allow users to adjust the model, language, and other parameters via `settings/config.py`.
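The two-stage flow described above (interview first, then review generation from the dialogue history) can be sketched as follows. The prompt wording and helper names here are illustrative assumptions, not the repository's actual prompts; see `modules/` and `settings/config.py` for the real implementation.

```python
# Sketch of the two-stage pipeline: build the chat messages for the next
# interview question, then build the messages that turn the full dialogue
# history into a review. Prompts and function names are assumptions.

def build_interview_messages(history):
    """Message list asking the model for the next interview question."""
    messages = [{
        "role": "system",
        "content": "You are an interviewer gathering details about a product the user purchased.",
    }]
    for turn in history:
        role = "user" if turn["speaker"] == "user" else "assistant"
        messages.append({"role": role, "content": turn["text"]})
    return messages


def build_review_messages(history):
    """Message list asking the model to write a review from the dialogue."""
    dialogue = "\n".join(f'{t["speaker"]}: {t["text"]}' for t in history)
    return [
        {"role": "system",
         "content": "Write a first-person product review based on the interview below."},
        {"role": "user", "content": dialogue},
    ]


history = [
    {"speaker": "system", "text": "What did you buy?"},
    {"speaker": "user", "text": "A wireless keyboard."},
]
print(len(build_interview_messages(history)))  # system prompt + 2 dialogue turns
```

Either message list would then be passed to an OpenAI chat-completion call with the model and temperature taken from the configuration.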
```
├── app.py               # Flask application entry point
├── modules/             # Core modules for generation and utilities
├── resources/
│   ├── questionnaire/   # Predefined questions for baseline system
│   └── guidance/        # Guidance text for users
├── settings/            # Configuration file (`config.py`)
├── templates/           # HTML templates
├── static/              # Static assets (CSS, JavaScript, images)
├── data/                # Logs and completed session outputs
├── requirements.txt     # Python dependencies
├── run.sh               # Shell script to start the application
├── README.md            # English README file
└── README_ja.md         # Japanese README file
```
Follow these steps to get up and running quickly:

- **Clone the repository**

  ```bash
  git clone https://github.com/UEC-InabaLab/InterviewToReview.git
  cd InterviewToReview
  ```

- **Install dependencies**

  ```bash
  pip install -r requirements.txt
  ```

- **Configure your API key**

  Create a `.env` file in the project root and add your OpenAI API key:

  ```
  API_KEY="{YOUR_OPENAI_API_KEY}"
  ```

  You can create the `.env` file by copying the provided `.env.example` file.

- **Start the application**

  ```bash
  bash run.sh
  ```

  This launches a local Flask server (by default at http://127.0.0.1:8000).
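The key-loading step above can be sketched as follows. The application itself may load `.env` with a library such as python-dotenv; this hand-rolled parser is only an illustrative assumption to show the expected file format.

```python
# Minimal sketch: read KEY=VALUE pairs from a .env file, stripping
# surrounding double quotes, so API_KEY is available at startup.
from pathlib import Path


def load_env(path=".env"):
    env = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, value = line.split("=", 1)
        env[key.strip()] = value.strip().strip('"')
    return env


# Demo: write an example .env and read the key back.
Path(".env").write_text('API_KEY="sk-example"\n')
print(load_env()["API_KEY"])
```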
System settings are defined in `settings/config.py`. You can customize:

- `MODEL`: The OpenAI model to use (e.g., `gpt-4`).
- `BOT_TYPE`: Dialogue mode (`gpt` or `rule-based`).
  - If `rule-based`, the system uses fixed questions from `./resources/questionnaire/`. For more details, please see Section 4.1.2 of our paper.
  - If `BOT_TYPE` is set to `gpt`, the system uses the OpenAI model specified in `MODEL` to generate questions for each turn of the dialogue.
- `LANG`: Language code (`ja` for Japanese, `en` for English).
- `MIN_QUESTIONS`, `MAX_QUESTIONS`: Range for the number of interview questions (i.e., the number of dialogue turns).
- `TEMPERATURE_INTERVIEW`, `TEMPERATURE_REVIEW`, `TEMPERATURE_RATING`: Temperature settings for the LLM responses during the interview, review text generation, and rating prediction phases.
- `SAVE_COMPLETED`: Enable or disable saving of completed sessions.
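A `settings/config.py` covering the options listed above might look like the sketch below. All values are illustrative assumptions, not the shipped defaults; consult the file in the repository for the actual settings.

```python
# Illustrative settings/config.py (example values only).
MODEL = "gpt-4"             # OpenAI model used at each stage
BOT_TYPE = "gpt"            # "gpt" or "rule-based"
LANG = "ja"                 # "ja" for Japanese, "en" for English
MIN_QUESTIONS = 5           # minimum number of interview questions (dialogue turns)
MAX_QUESTIONS = 10          # maximum number of interview questions
TEMPERATURE_INTERVIEW = 0.7 # sampling temperature for interview questions
TEMPERATURE_REVIEW = 0.7    # sampling temperature for review generation
TEMPERATURE_RATING = 0.0    # sampling temperature for rating prediction
SAVE_COMPLETED = True       # save completed sessions under data/
```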
- Confirmed to work on Python 3.10.12.
```bibtex
@inproceedings{tanaka2024user,
  title     = {User Review Writing via Interview with Dialogue Systems},
  author    = {Yoshiki Tanaka and Michimasa Inaba},
  booktitle = {Proceedings of the 25th Annual Meeting of the Special Interest Group on Discourse and Dialogue},
  year      = {2024},
  url       = {https://aclanthology.org/2024.sigdial-1.37/}
}
```
