PromptBench is a comprehensive framework from Microsoft for the standardized evaluation of large language models (LLMs). It provides a broad set of benchmarks and tools for assessing model capabilities across diverse tasks, enabling robust model comparisons and supporting LLM research.