Alignment has greatly improved the output quality of large language models (LLMs) at the cost of diversity, yielding highly similar outputs across generations. We propose Base-Aligned Model Collaboration (BACo), an inference-time token-level model collaboration framework that dynamically combines a base LLM with its aligned counterpart to optimize diversity and quality. BACo employs routing strategies that determine, at each token, which model to decode from based on next-token prediction uncertainty and the semantic role of the predicted content. Prior diversity-promoting methods, such as retraining, prompt engineering, and multi-sampling, improve diversity but often degrade quality or require costly decoding or post-training. In contrast, BACo achieves both high diversity and quality post hoc within a single pass, while offering strong controllability. We explore a family of routing strategies across three open-ended generation tasks and 13 metrics covering diversity and quality. BACo consistently surpasses state-of-the-art inference-time baselines; with our best router, it achieves a 21.3% joint improvement in diversity and quality. Human evaluations mirror these improvements. The results suggest that collaboration between base and aligned models can optimize and control diversity and quality.
Figure 1: BACo is an inference-time token-level model collaboration framework that combines a base model's diversity with its aligned counterpart's quality. (A) A comparison of generated outputs. The aligned model produces high-quality but low-diversity outputs, while the base model produces high-diversity but low-quality outputs. BACo optimizes both diversity and quality by dynamically routing between them. Token probabilities are shown in grey next to the text boxes. (B) Illustration of the diversity-quality trade-off space. Single models face a steep trade-off, where improving diversity by adjusting the decoding configuration (e.g., increasing temperature) degrades quality. BACo achieves a better Pareto curve and allows easy traversal of this frontier by adjusting the router's threshold. The examples in this figure are simplified for clarity.
Example of generation at inference time. The example shows four parallel generations, demonstrating that BACo achieves better diversity while maintaining quality. The implementation is available on the GitHub page.
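To make the routing idea concrete, here is a minimal sketch of uncertainty-based token-level routing between two models. This is an illustrative toy, not the paper's implementation: the models are stand-in functions returning next-token distributions, the router uses a simple entropy threshold `tau` (the exact routing rule and the direction of the fallback are our assumptions for illustration), and decoding is greedy for determinism.

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a token -> probability mapping."""
    return -sum(p * math.log(p) for p in probs.values() if p > 0)

def route_and_decode(base_step, aligned_step, prompt_tokens, max_tokens, tau):
    """Token-level routing sketch: at each step, query the aligned model;
    if its next-token entropy is at most tau (it is confident), decode
    from it for quality, otherwise fall back to the base model for
    diversity. Greedy decoding keeps this toy deterministic; a real
    system would sample."""
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        aligned_probs = aligned_step(tokens)
        if entropy(aligned_probs) <= tau:
            probs = aligned_probs   # confident: use the aligned model
        else:
            probs = base_step(tokens)  # uncertain: route to the base model
        tokens.append(max(probs, key=probs.get))
    return tokens

# Toy "models": fixed next-token distributions regardless of context.
aligned = lambda ctx: {"a": 0.9, "b": 0.1}  # low entropy (~0.33 nats)
base = lambda ctx: {"x": 0.5, "y": 0.5}     # high entropy (~0.69 nats)
```

Sweeping `tau` traverses the diversity-quality frontier described in Figure 1(B): a low threshold routes more tokens to the base model (more diversity), a high threshold favors the aligned model (more quality).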
@article{wang2025optimizing,
title={Optimizing Diversity and Quality through Base-Aligned Model Collaboration},
author={Wang, Yichen and Yang, Chenghao and Huang, Tenghao and Chen, Muhao and May, Jonathan and Lee, Mina},
journal={arXiv preprint arXiv:2511.05650},
year={2025}
}