Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!
arXiv (2023)
doi: 10.48550/arXiv.2310.03693

Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, Peter Henderson