
fix: update _evals_common to be compatible with litellm >=1.83.0#6599

Closed
quad2524 wants to merge 6 commits into googleapis:main from quad2524:issue-6598-litellm-version

Conversation

@quad2524
Contributor

Summary
This PR updates the litellm dependency to version 1.83.0 or higher. This upgrade is necessary to remediate known security vulnerabilities present in version 1.82.7.

Because litellm introduced changes to how models and providers are validated, I have also updated the internal utility functions and associated tests to maintain compatibility.

Changes
Dependency Update: Bumped litellm version requirement in setup files.

Core Logic (_evals_common): Updated _is_litellm_model to use the newer get_llm_provider pattern. This ensures we accurately validate model strings against LiteLLM's supported provider list.

Test Suite:

Refactored mocks to account for the new return signature of litellm.get_llm_provider, which now includes additional metadata (model, provider, etc.).

Updated get_valid_models mocks to ensure consistent behavior during unit testing.
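The validation pattern described above can be sketched roughly as follows. The names `is_litellm_model` and `provider_resolver` are illustrative, not the repo's actual identifiers; the 4-tuple return shape mirrors the `litellm.get_llm_provider` signature exercised in the updated tests, and is emulated here with `unittest.mock` so the sketch runs without litellm installed.

```python
from unittest import mock


def is_litellm_model(model: str, provider_resolver) -> bool:
    """Return True if the resolver recognizes the model string.

    Mirrors the _is_litellm_model logic in this PR: litellm's
    get_llm_provider raises ValueError for unrecognized models.
    """
    try:
        provider_resolver(model)
        return True
    except ValueError:
        return False


# Emulate get_llm_provider's newer return signature:
# (model, provider, dynamic_api_key, api_base).
ok = mock.Mock(return_value=("gpt-4o", "openai", None, None))
bad = mock.Mock(side_effect=ValueError("unknown provider"))

print(is_litellm_model("gpt-4o", ok))        # → True
print(is_litellm_model("not-a-model", bad))  # → False
```

In the real module the resolver is simply `litellm.get_llm_provider`; injecting it as a parameter here just keeps the sketch self-contained and easy to exercise with mocks, which is also how the updated unit tests stub it out.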

Fixes #6598 🦕

@quad2524 quad2524 requested a review from a team as a code owner April 16, 2026 19:32
@product-auto-label product-auto-label Bot added size: s Pull request size is small. api: vertex-ai Issues related to the googleapis/python-aiplatform API. labels Apr 16, 2026
@quad2524 quad2524 force-pushed the issue-6598-litellm-version branch from 9590ca7 to 24dff9c Compare April 16, 2026 19:38
@quad2524 quad2524 changed the title chore: Update litellm version for vulnerability remediation fix: update litellm to >=1.83.0 to resolve security vulnerability Apr 16, 2026
@matthew29tang
Contributor

Can you resolve the merge conflicts? Then I can run the workflow checks

@matthew29tang matthew29tang self-assigned this Apr 20, 2026
@quad2524
Contributor Author

> Can you resolve the merge conflicts? Then I can run the workflow checks

Done

@jsondai
Contributor

jsondai commented Apr 20, 2026

LGTM

@matthew29tang matthew29tang added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Apr 20, 2026
@yoshi-kokoro yoshi-kokoro removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Apr 20, 2026
@matthew29tang
Contributor

Hi, the lint has failed. Can you fix it?

nox > black --check --diff docs google vertexai tests noxfile.py setup.py
--- /tmpfs/src/github/python-aiplatform/vertexai/_genai/_evals_common.py	2026-04-20 22:00:03+00:00
+++ /tmpfs/src/github/python-aiplatform/vertexai/_genai/_evals_common.py	2026-04-20 22:30:57.589545+00:00
@@ -738,11 +738,11 @@
     if litellm is None:
         return False
 
     try:
         litellm.get_llm_provider(model)
-        return True 
+        return True
     except ValueError:
         return False
 
 
 def _is_gemini_model(model: str) -> bool:
would reformat /tmpfs/src/github/python-aiplatform/vertexai/_genai/_evals_common.py
--- /tmpfs/src/github/python-aiplatform/tests/unit/vertexai/genai/test_evals.py	2026-04-20 22:00:03+00:00
+++ /tmpfs/src/github/python-aiplatform/tests/unit/vertexai/genai/test_evals.py	2026-04-20 22:31:04.453888+00:00
@@ -3673,16 +3673,22 @@
     def test_run_inference_with_litellm_openai_request_format(
         self,
         mock_api_client_fixture,
     ):
         """Tests inference with LiteLLM where the row contains a chat completion request body."""
-        with mock.patch(
-            "vertexai._genai._evals_common.litellm"
-        ) as mock_litellm, mock.patch(
-            "vertexai._genai._evals_common._call_litellm_completion"
-        ) as mock_call_litellm_completion:
-            mock_litellm.get_llm_provider.return_value = ("gpt-4o", "openai", None , None)
+        with (
+            mock.patch("vertexai._genai._evals_common.litellm") as mock_litellm,
+            mock.patch(
+                "vertexai._genai._evals_common._call_litellm_completion"
+            ) as mock_call_litellm_completion,
+        ):
+            mock_litellm.get_llm_provider.return_value = (
+                "gpt-4o",
+                "openai",
+                None,
+                None,
+            )
             prompt_df = pd.DataFrame(
                 [
                     {
                         "model": "gpt-4o",
                         "messages": [
would reformat /tmpfs/src/github/python-aiplatform/tests/unit/vertexai/genai/test_evals.py

@quad2524
Contributor Author

> Hi, the lint has failed. Can you fix it?

Fixed linting

@matthew29tang matthew29tang added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Apr 21, 2026
@yoshi-kokoro yoshi-kokoro removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Apr 21, 2026
@matthew29tang matthew29tang added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Apr 21, 2026
@yoshi-kokoro yoshi-kokoro removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Apr 21, 2026
@matthew29tang
Contributor

matthew29tang commented Apr 21, 2026

Appreciate the quick turnarounds on the PR feedback.

We have received some urgent internal guidance: we want to keep the same lower bound, but we need a stricter upper bound on the minor version given the impact the litellm vulnerability had.

I plan to proceed with #6617, and then we can merge in your PR with the updates to the evals files. Is that okay with you?

@quad2524
Contributor Author

That works for me. Thanks!

@matthew29tang
Contributor

Great, #6617 is now merged in. Can you revert your changes in setup.py?

@matthew29tang matthew29tang changed the title fix: update litellm to >=1.83.0 to resolve security vulnerability fix: update _evals_common litellm to be compatible with litellm >=1.83.0 Apr 21, 2026
@matthew29tang matthew29tang changed the title fix: update _evals_common litellm to be compatible with litellm >=1.83.0 fix: update _evals_common to be compatible with litellm >=1.83.0 Apr 21, 2026
@matthew29tang matthew29tang added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Apr 21, 2026
@yoshi-kokoro yoshi-kokoro removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Apr 21, 2026
@matthew29tang matthew29tang added kokoro:force-run Add this label to force Kokoro to re-run the tests. and removed kokoro:force-run Add this label to force Kokoro to re-run the tests. labels Apr 21, 2026
@yoshi-kokoro yoshi-kokoro removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Apr 21, 2026
@matthew29tang matthew29tang added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Apr 21, 2026
@yoshi-kokoro yoshi-kokoro removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Apr 21, 2026
@matthew29tang matthew29tang added the ready to pull Ready to be merged into the codebase. label Apr 21, 2026
copybara-service Bot pushed a commit that referenced this pull request Apr 21, 2026
--
24dff9c by Alan <argnarf@gmail.com>:

fix: update litellm to >=1.83.0 to resolve security vulnerability

--
8cda891 by Alan <argnarf@gmail.com>:

fix linting errors

--
fb1efd0 by Alan <argnarf@gmail.com>:

Removed version pinning from PR

COPYBARA_INTEGRATE_REVIEW=#6599 from quad2524:issue-6598-litellm-version 18c9d68
PiperOrigin-RevId: 903440345
@matthew29tang
Contributor

Thanks for the contribution! It's now merged as ac5a5e4

copybara-service Bot pushed a commit that referenced this pull request Apr 27, 2026
--
68eaca8 by Casey West <caseywest@google.com>:

fix(deps): bump litellm cap to >=1.83.7 for additional CVE remediation

The current cap of <1.83.7 (set in #6617) clears CVE-2026-35030 in
litellm 1.83.0 but excludes four additional CVEs patched in 1.83.7:
GHSA-r75f-5x8p-qvmc, GHSA-jjhc-v7c2-5hh6, GHSA-xqmj-j6mv-4862,
GHSA-69x8-hrgq-fjj8 (disclosed 2026-04-11/24).

Required by google/adk-python#5489, which pins
litellm>=1.83.7,<=1.83.14 in its own dependencies and currently fails
to install alongside google-cloud-aiplatform[evaluation] because of
this cap. Requested by @sasha-gitg in the ADK PR review. The code
adaptation for litellm 1.83.x already shipped in #6599
(vertexai/_genai/_evals_common.py via get_llm_provider), so this is
purely a version-pin change.

Verified: nox -s lint and nox -s lint_setup_py pass; the
litellm-touching tests in tests/unit/vertexai/genai/test_evals.py
pass against installed litellm at both 1.83.7 (lower bound) and
1.83.14 (upper bound).
COPYBARA_INTEGRATE_REVIEW=#6645 from cwest:topic/bump-litellm-cap 638e6fa
PiperOrigin-RevId: 906452948
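As a quick sanity check on the version window implied by the commit message above (lower bound 1.83.7 from this cap bump, upper bound 1.83.14 from the adk-python pin), here is a minimal stdlib-only sketch; `satisfies_window` is an illustrative helper, not part of either repo.

```python
def parse(version: str) -> tuple:
    """Parse a dotted release string like '1.83.14' into an int tuple."""
    return tuple(int(part) for part in version.split("."))


# Bounds taken from the commit message above: this repo's new lower
# bound and adk-python's upper pin on litellm.
LOWER, UPPER = parse("1.83.7"), parse("1.83.14")


def satisfies_window(version: str) -> bool:
    """True if the version lies inside both constraint sets."""
    return LOWER <= parse(version) <= UPPER


print(satisfies_window("1.83.7"))   # → True
print(satisfies_window("1.83.14"))  # → True
print(satisfies_window("1.83.0"))   # → False (excluded by the CVE fixes)
```

Tuple comparison works here because both versions use the same three-component scheme; a real resolver would use PEP 440 semantics via the `packaging` library instead.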
