
Commit 23d1188

committed
Update ArmoRM blog
1 parent 7354603 commit 23d1188

File tree

1 file changed: +12 −1 lines changed
  • content/posts/2024-05-29-multi-objective-reward-modeling

content/posts/2024-05-29-multi-objective-reward-modeling/index.md

Lines changed: 12 additions & 1 deletion
@@ -86,7 +86,18 @@ We provide the implementation details of our ArmoRM model, including the archite
 - **Parameter Initialization**: [FsfairX-LLaMA3-RM-v0.1](https://huggingface.co/sfairXC/FsfairX-LLaMA3-RM-v0.1), a [Bradley-Terry reward model](https://rlhflow.github.io/posts/2024-03-23-bradley-terry-reward-model/) trained from [Llama3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), using our [RLHFlow codebase for Reward Modeling](https://github.com/RLHFlow/RLHF-Reward-Modeling/tree/main/bradley-terry-rm).
 - **Training**: Linear Probing (training the newly initialized linear layer only while keeping all transformer layers frozen)
   - We tried full fine-tuning from [Llama3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and [FsfairX-LLaMA3-RM-v0.1](https://huggingface.co/sfairXC/FsfairX-LLaMA3-RM-v0.1) using the approach of [Linear-Probing then Full Fine-Tuning](https://arxiv.org/abs/2202.10054) (LP-FT), but we have not found a notable performance improvement over this simple linear probing approach. Therefore, we stick to the linear probing approach for its efficiency (in terms of compute costs and memory requirements).
-- **Datasets**: [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer), [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback), [BeaverTails](https://huggingface.co/datasets/PKU-Alignment/BeaverTails), [Argilla-Capybara](https://huggingface.co/datasets/argilla/Capybara-Preferences-Filtered), [Argilla-Math-Preferences](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo), [CodeUltraFeedback](https://huggingface.co/datasets/coseal/CodeUltraFeedback)
+- **Datasets**: [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer), [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback), [BeaverTails](https://huggingface.co/datasets/PKU-Alignment/BeaverTails), [Argilla-Capybara](https://huggingface.co/datasets/argilla/Capybara-Preferences-Filtered), [Argilla-Math-Preferences](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo), [CodeUltraFeedback](https://huggingface.co/datasets/coseal/CodeUltraFeedback), [Argilla-OpenOrca](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs)
+- **Objectives**: We have $k=19$ reward objectives in total obtained from the datasets:
+  - [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer): `helpsteer-helpfulness`, `helpsteer-correctness`, `helpsteer-coherence`,
+    `helpsteer-complexity`, `helpsteer-verbosity`
+  - [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback): `ultrafeedback-overall_score`, `ultrafeedback-instruction_following`, `ultrafeedback-truthfulness`, `ultrafeedback-honesty`, `ultrafeedback-helpfulness`
+  - The [Argilla-Math-Preferences](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo) dataset shares the objective `ultrafeedback-instruction_following`
+  - [BeaverTails](https://huggingface.co/datasets/PKU-Alignment/BeaverTails): `beavertails-is_safe`
+  - [CodeUltraFeedback](https://huggingface.co/datasets/coseal/CodeUltraFeedback): `code-complexity`,
+    `code-style`, `code-explanation`, `code-instruction-following`, `code-readability`
+  - [Prometheus](https://huggingface.co/datasets/prometheus-eval/Feedback-Collection): `prometheus-score`
+  - [Argilla-Capybara](https://huggingface.co/datasets/argilla/Capybara-Preferences-Filtered): `argilla-overall_quality`
+  - [Argilla-OpenOrca](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs): `argilla-judge_lm`
 - **Data Processing**: When merging multiple datasets with absolute ratings (e.g., [UltraFeedback](https://arxiv.org/abs/2310.01377) and [HelpSteer](https://arxiv.org/abs/2311.09528)), we observe some issues with the data. Here, we present the issues and our approach to tackle them:
   1. **Different Rating Scales**: Different datasets may have different scales for the ratings. For instance, [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer?row=0) has a rating scale of 0-4, while [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback)'s is 1-10. We linearly transform all ratings so that they fall between 0 and 1. For [BeaverTails](https://huggingface.co/datasets/PKU-Alignment/BeaverTails) with True/False ratings (indicating safe or unsafe), we treat True as 1 and False as 0.
   2. **Similar Objectives**: There are some very similar objectives from different datasets. For example, the `Helpfulness` objective appears in both HelpSteer and UltraFeedback, and the `Correctness` objective of HelpSteer is quite similar to the `Truthfulness` of UltraFeedback. After carefully examining the datasets, we decided to treat similar objectives as separate objectives, as they are rated by different judges following different rubrics. For instance, data from HelpSteer are rated by 200 U.S.-based human annotators following customized rubrics, and UltraFeedback data are labeled with GPT-4 following another set of rubrics.
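The **Training** bullet in the diff above describes linear probing: only the newly initialized linear layer is trained while every transformer layer stays frozen. Below is a minimal sketch of that setup, assuming PyTorch and the Hugging Face `transformers` API; the last-token pooling, the MSE regression loss, and the optimizer settings are illustrative assumptions, not the exact ArmoRM training code.

```python
# Minimal linear-probing sketch (illustrative assumptions, not the exact ArmoRM code):
# freeze the reward-model backbone and train only a new k-way linear regression head.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

backbone_name = "sfairXC/FsfairX-LLaMA3-RM-v0.1"  # the parameter initialization described above
tokenizer = AutoTokenizer.from_pretrained(backbone_name)
backbone = AutoModel.from_pretrained(backbone_name, torch_dtype=torch.bfloat16)
backbone.requires_grad_(False)  # keep all transformer layers frozen
backbone.eval()

k = 19  # number of reward objectives
head = nn.Linear(backbone.config.hidden_size, k)  # the newly initialized linear layer

optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)  # assumed hyperparameters
loss_fn = nn.MSELoss()


def train_step(prompt_response: str, ratings: torch.Tensor) -> float:
    """One linear-probing step; `ratings` is a (k,) tensor of labels rescaled to [0, 1]."""
    inputs = tokenizer(prompt_response, return_tensors="pt")
    with torch.no_grad():  # no gradients flow through the frozen backbone
        hidden = backbone(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
    features = hidden[:, -1, :].float()  # last-token embedding as the sequence feature
    preds = head(features).squeeze(0)  # (k,) multi-objective reward predictions
    loss = loss_fn(preds, ratings)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the frozen backbone never receives gradients, its last-token features can also be computed once and cached, which is what makes linear probing cheap in compute and memory compared with full fine-tuning or LP-FT.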
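Point 1 under **Data Processing** rescales every rating linearly onto [0, 1]. Here is a small sketch of that transform, using the scale bounds quoted above; the helper names are ours for illustration, not from the ArmoRM codebase.

```python
# Linear rescaling of absolute ratings onto [0, 1]; bounds follow the datasets above.
RATING_SCALES = {
    "helpsteer": (0, 4),       # HelpSteer ratings range over 0-4
    "ultrafeedback": (1, 10),  # UltraFeedback ratings range over 1-10
}


def normalize_rating(value: float, dataset: str) -> float:
    """Map a raw rating onto [0, 1] with a linear transform."""
    low, high = RATING_SCALES[dataset]
    return (value - low) / (high - low)


def normalize_safety_label(is_safe: bool) -> float:
    """BeaverTails uses True/False labels: treat True (safe) as 1 and False as 0."""
    return 1.0 if is_safe else 0.0


assert normalize_rating(4, "helpsteer") == 1.0      # top HelpSteer rating maps to 1
assert normalize_rating(1, "ultrafeedback") == 0.0  # lowest UltraFeedback rating maps to 0
assert normalize_safety_label(False) == 0.0
```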
