🚀
Cuff-KT: Tackling Learners' Real-time Learning Pattern Adjustment via Tuning-Free Knowledge State-Guided Model Updating (KDD 2025)
PyTorch implementation of Cuff-KT.
Place the assist15, assist17, comp, xes3g5m, and dbe-kt22 source files in the dataset directory, then process each dataset with the corresponding command below:
python preprocess_data.py --data_name assistments15
python preprocess_data.py --data_name assistments17
python preprocess_data.py --data_name comp
python preprocess_data.py --data_name xes3g5m
python preprocess_data.py --data_name dbe_kt22
You can also download the datasets from dataset and place them in the dataset directory.
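To preprocess all five datasets in one go, a minimal shell loop over the same commands works (assuming you run it from the repository root):

```bash
# Preprocess every dataset in sequence, using the same flags as the commands above.
for name in assistments15 assistments17 comp xes3g5m dbe_kt22; do
    python preprocess_data.py --data_name "$name"
done
```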
The statistics of the five datasets after processing are as follows:
| Datasets | #learners | #questions | #concepts | #interactions |
|---|---|---|---|---|
| assist15 | 17,115 | 100 | 100 | 676,288 |
| assist17 | 1,708 | 3,162 | 411 | 934,638 |
| comp | 5,000 | 7,460 | 445 | 668,927 |
| xes3g5m | 5,000 | 7,242 | 1,221 | 1,771,657 |
| dbe-kt22 | 1,186 | 212 | 127 | 306,904 |
Clone this repository and create a conda environment:
conda create -n cuff python=3.11.9
conda activate cuff
pip install -r requirements.txt
Alternatively, download the prebuilt environment package from environment and execute the following steps in sequence (a consolidated sketch follows the list):
- Navigate to the conda installation directory: /anaconda (or miniconda)/envs/
- Create a folder named cuff in that directory
- Extract the downloaded environment package into that folder using the command:
tar -xzvf cuff.tar.gz -C /anaconda (or miniconda)/envs/cuff/
- Activate the environment:
conda activate cuff
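For reference, the whole sequence can be scripted as below. This is a minimal sketch, assuming Anaconda is installed at ~/anaconda3 (adjust the path for miniconda or a custom install) and that cuff.tar.gz is in the current directory:

```bash
# Consolidated version of the unpacking steps above.
# ~/anaconda3 is an assumed install location; change it to match your setup.
ENVS_DIR="$HOME/anaconda3/envs"
mkdir -p "$ENVS_DIR/cuff"
tar -xzvf cuff.tar.gz -C "$ENVS_DIR/cuff/"
conda activate cuff   # run in an interactive shell where conda is initialized
```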
You can execute experiments directly using the following commands:
- Controllable Parameter Generation
CUDA_VISIBLE_DEVICES=0 python main.py --exp intra --model_name [dkt, atdkt] --data_name [assistments15, assistments17, comp, xes3g5m, dbe_kt22] --method cuff --rank 1 --control [ecod, pca, iforest, lof, cuff] --ratio [0, 0.2, 0.4, 0.6, 0.8, 1]
CUDA_VISIBLE_DEVICES=0 python main.py --exp intra --model_name [dkvmn, stablekt, dimkt, diskt] --data_name [assistments15, assistments17, comp, xes3g5m, dbe_kt22] --method cuff --rank 1 --control [ecod, pca, iforest, lof, cuff] --ratio [0, 0.2, 0.4, 0.6, 0.8, 1] --convert True
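The bracketed values denote one choice per run. To sweep a flag, e.g. all --ratio values for a single configuration, a loop like the following can be used (dkt on assistments15 with ecod control is an arbitrary example combination):

```bash
# Sweep the ratio values for one model/dataset/control combination.
for ratio in 0 0.2 0.4 0.6 0.8 1; do
    CUDA_VISIBLE_DEVICES=0 python main.py --exp intra --model_name dkt \
        --data_name assistments15 --method cuff --rank 1 \
        --control ecod --ratio "$ratio"
done
```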
- Tuning-Free and Fast Prediction
- baselines
CUDA_VISIBLE_DEVICES=0 python main.py --exp [intra, inter] --model_name [dkt, atdkt] --data_name [assistments15, assistments17, comp, xes3g5m, dbe_kt22]
CUDA_VISIBLE_DEVICES=0 python main.py --exp [intra, inter] --model_name [dkvmn, stablekt, dimkt, diskt] --data_name [assistments15, assistments17, comp, xes3g5m, dbe_kt22] --convert True
CUDA_VISIBLE_DEVICES=0 python main.py --exp [intra, inter] --model_name [dkt, atdkt] --data_name [assistments15, assistments17, comp, xes3g5m, dbe_kt22] --method [fft, adapter, bitfit]
CUDA_VISIBLE_DEVICES=0 python main.py --exp [intra, inter] --model_name [dkvmn, stablekt, dimkt, diskt] --data_name [assistments15, assistments17, comp, xes3g5m, dbe_kt22] --method [fft, adapter, bitfit] --convert True
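To run all three fine-tuning baselines on one configuration, a loop such as the following can be used (dkt on assistments17 with the intra split is an arbitrary example):

```bash
# Run each fine-tuning baseline (fft, adapter, bitfit) in turn on one configuration.
for method in fft adapter bitfit; do
    CUDA_VISIBLE_DEVICES=0 python main.py --exp intra --model_name dkt \
        --data_name assistments17 --method "$method"
done
```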
- cuff-kt
CUDA_VISIBLE_DEVICES=0 python main.py --exp [intra, inter] --model_name [dkt, atdkt] --data_name [assistments15, assistments17, comp, xes3g5m, dbe_kt22] --method cuff --rank 1
CUDA_VISIBLE_DEVICES=0 python main.py --exp [intra, inter] --model_name [dkvmn, stablekt, dimkt, diskt] --data_name [assistments15, assistments17, comp, xes3g5m, dbe_kt22] --method cuff --rank 1 --convert True
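For example, a single concrete Cuff-KT run (dkt backbone, intra split, assistments15), instantiating the bracket notation above, would be:

```bash
# One concrete instantiation of the cuff-kt command above.
CUDA_VISIBLE_DEVICES=0 python main.py --exp intra --model_name dkt \
    --data_name assistments15 --method cuff --rank 1
```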
- Flexible Application
CUDA_VISIBLE_DEVICES=0 python main.py --exp [intra, inter] --model_name [dkt, atdkt] --data_name [assistments15, assistments17, comp, xes3g5m, dbe_kt22] --method cuff+ --rank 1
CUDA_VISIBLE_DEVICES=0 python main.py --exp [intra, inter] --model_name [dkvmn, stablekt, dimkt, diskt] --data_name [assistments15, assistments17, comp, xes3g5m, dbe_kt22] --method cuff+ --rank 1 --convert True
If you find our work valuable, we would appreciate your citation:
@misc{zhou2025cuffkttacklinglearnersrealtime,
title={Cuff-KT: Tackling Learners' Real-time Learning Pattern Adjustment via Tuning-Free Knowledge State Guided Model Updating},
author={Yiyun Zhou and Zheqi Lv and Shengyu Zhang and Jingyuan Chen},
year={2025},
eprint={2505.19543},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2505.19543},
}