How does Gemini 2.5 handle long contexts?

Figure 7 | (Left) Total memorization rates for both exact and approximate memorization. The Gemini 2.X model family memorizes significantly less than all prior models. (Right) Personal information memorization rates. We observed no instances of personal information in outputs classified as memorization for Gemini 2.X, and no instances of high-severity personal data in outputs classified as memorization from prior Gemini models.

Gemini 2.5 models build on the success of Gemini 1.5 in processing long-context queries[1]. New modeling advances allow Gemini 2.5 Pro to surpass Gemini 1.5 Pro on long-context input sequences of up to 1M tokens[1].
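A minimal sketch of issuing such a long-context request, assuming the google-genai Python SDK is installed and an API key is available; the model name, file path, and question below are illustrative placeholders, not details from the report.

```python
# Sketch: send a book-length document as a single long-context request,
# assuming the google-genai Python SDK (pip install google-genai).
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

# Load a long document (e.g. a book-length transcript) as plain text.
with open("long_document.txt", encoding="utf-8") as f:
    document = f.read()

# Check the request fits within the model's context window before sending.
token_count = client.models.count_tokens(
    model="gemini-2.5-pro", contents=document
)
print(f"Input tokens: {token_count.total_tokens}")

# Ask a question that requires reasoning over the whole document.
response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents=[document, "Summarize the key arguments made in this document."],
)
print(response.text)
```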

Both Gemini 2.5 Pro and Gemini 2.5 Flash can process long-form text, whole codebases, and long-form audio and video data[1]. Modeling and data advances improved quality across the full million-token context[1].
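As one illustration of the codebase use case, the sketch below concatenates an entire repository into a single prompt, again assuming the google-genai Python SDK; the repository path, model choice, and prompt are hypothetical.

```python
# Sketch: feed a whole codebase as one long-context prompt,
# assuming the google-genai Python SDK (pip install google-genai).
from pathlib import Path

from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

# Concatenate every Python source file, with a path header per file so
# the model can refer to files by name in its answer.
repo_root = Path("my_project")
parts = []
for path in sorted(repo_root.rglob("*.py")):
    parts.append(f"# File: {path}\n{path.read_text(encoding='utf-8')}")
codebase = "\n\n".join(parts)

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[codebase, "Identify modules with no test coverage and suggest tests."],
)
print(response.text)
```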