
Conversation


@Tyler-Rak Tyler-Rak commented Dec 7, 2025

User description

Fixes #2120

Problem

GPT-5 models were hardcoded to use reasoning_effort='minimal', ignoring the user's config.reasoning_effort setting.

Solution

Modified litellm_ai_handler.py to:

  • Read reasoning_effort from configuration instead of hardcoding
  • Support values: 'none', 'low', 'medium', 'high'
  • Default to 'none' for non-thinking models, 'low' for thinking models
  • Add logging to show which reasoning effort level is being used
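The selection rule described above can be sketched as a small standalone function (a simplified sketch; the function name select_reasoning_effort is illustrative, not the actual handler API):

```python
# Simplified sketch of the effort-selection logic from the PR,
# not the exact handler code.
SUPPORTED_EFFORTS = ['none', 'low', 'medium', 'high']

def select_reasoning_effort(model: str, config_effort) -> str:
    """Pick the reasoning_effort for a GPT-5 model.

    Uses the config value when it is one of the supported efforts;
    otherwise falls back to 'low' for thinking models and 'none'
    for non-thinking models.
    """
    if config_effort in SUPPORTED_EFFORTS:
        return config_effort
    return 'low' if model.endswith('_thinking') else 'none'
```

Note that an invalid or missing config value never raises; it simply fails the membership test and the model-dependent default is used.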

Testing

Verified with a GPT-5 model that:

  • Config setting reasoning_effort = "medium" is now respected
  • Logs confirm: "Using reasoning_effort=medium for GPT-5 model (from config)"
  • Model correctly uses medium reasoning effort in API calls
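For reference, the setting exercised in testing lives in the user's configuration file. A minimal .pr_agent.toml fragment (assuming the [config] table, since the setting is read as config.reasoning_effort) would look like:

```toml
[config]
# Supported values: "none", "low", "medium", "high"
reasoning_effort = "medium"
```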

PR Type

Bug fix


Description

  • Respect user's reasoning_effort config setting for GPT-5 models

  • Replace hardcoded 'minimal'/'low' values with configurable defaults

  • Support reasoning effort values: 'none', 'low', 'medium', 'high'

  • Add logging to show which reasoning effort level is being used


Diagram Walkthrough

flowchart LR
  A["GPT-5 Model Request"] --> B{"Model Type?"}
  B -->|"Thinking Model"| C["Use config or default 'low'"]
  B -->|"Non-thinking Model"| D["Use config or default 'none'"]
  C --> E["Set reasoning_effort parameter"]
  D --> E
  E --> F["Log effort level used"]

File Walkthrough

Relevant files
Bug fix: litellm_ai_handler.py - Make GPT-5 reasoning_effort configurable with smart defaults

pr_agent/algo/ai_handlers/litellm_ai_handler.py

  • Read reasoning_effort from config instead of hardcoding values
  • Use 'low' as default for thinking models, 'none' for non-thinking
    models
  • Validate config value against supported efforts list
  • Add info logging to display which reasoning effort is being used
+17/-8   

Previously, GPT-5 models had reasoning_effort hardcoded to 'minimal',
which caused two issues:
1. 'minimal' is not supported by some models (e.g., gpt-5.1-codex)
2. User's config.reasoning_effort setting was completely ignored

This fix:
- Reads and respects user's reasoning_effort config value
- Uses valid defaults: 'none' for non-thinking models, 'low' for thinking
- Adds logging to show which value is being used

Fixes qodo-ai#2120

qodo-merge-for-open-source bot commented Dec 7, 2025

PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
🟢
No security concerns identified. No security vulnerabilities were detected by AI analysis; human verification is advised for critical code.
Ticket Compliance
🟡
🎫 #2120
  • 🟢 Fix the root cause in litellm_ai_handler.py where reasoning_effort is hardcoded
  • Respect the user's reasoning_effort configuration setting instead of hardcoding values
  • Allow users to configure reasoning_effort in .pr_agent.toml and have it properly applied
  • 🔴 Fix the issue where gpt-5.1 and gpt-5.1-codex always receive reasoning_effort='minimal', causing API failures
  • Support the correct reasoning_effort values for gpt-5.1-codex ('low', 'medium', 'high') and avoid using 'minimal'
  • Verify that the fix works correctly with actual OpenAI API calls for the gpt-5.1-codex model
  • Confirm that the default 'none' value works for gpt-5.1 non-thinking models
  • Test that gpt-5.1-codex properly accepts 'low', 'medium', and 'high' values from config
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
🟢
Consistent Naming Conventions

Objective: All new variables, functions, and classes must follow the project's established naming
standards

Status: Passed

No Dead or Commented-Out Code

Objective: Keep the codebase clean by ensuring all submitted code is active and necessary

Status: Passed

Robust Error Handling

Objective: Ensure potential errors and edge cases are anticipated and handled gracefully throughout
the code

Status: Passed

Single Responsibility for Functions

Objective: Each function should have a single, well-defined responsibility

Status: Passed

When relevant, utilize early return

Objective: In a code snippet containing multiple logic conditions (such as 'if-else'), prefer an early return on edge cases over deep nesting

Status: Passed

Compliance status legend:
🟢 - Fully Compliant
🟡 - Partially Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label


qodo-merge-for-open-source bot commented Dec 7, 2025

PR Code Suggestions ✨

Explore these optional code suggestions:

Category: General
Refactor logic to improve readability
Suggestion Impact:The commit implements the exact refactoring suggested. It stores the validation result in `is_config_valid` variable, adds a `source` variable to track whether config or default is used, restructures the conditional logic to eliminate redundant checks, adds a warning log for invalid configuration values, and updates the info log to use the `source` variable with quoted effort value.

code diff:

-
-                if model.endswith('_thinking'):
-                    # For thinking models, use config value or default to 'low'
-                    effort = config_effort if config_effort in supported_efforts else 'low'
+                is_config_valid = config_effort in supported_efforts
+                source = "config"
+
+                if is_config_valid:
+                    effort = config_effort
                 else:
-                    # For non-thinking models, use config value or default to 'none'
-                    # If 'none' fails for specific models (e.g., codex), they should set config to 'low'
-                    effort = config_effort if config_effort in supported_efforts else 'none'
+                    source = "default"
+                    if config_effort is not None:
+                        get_logger().warning(
+                            f"Invalid reasoning_effort '{config_effort}' in config. "
+                            f"Using default. Supported values: {supported_efforts}"
+                        )
+                    if model.endswith('_thinking'):
+                        effort = 'low'
+                    else:
+                        effort = 'none'
 
                 thinking_kwargs_gpt5 = {
                     "reasoning_effort": effort,
                     "allowed_openai_params": ["reasoning_effort"],
                 }
-                get_logger().info(f"Using reasoning_effort={effort} for GPT-5 model (from {'config' if config_effort in supported_efforts else 'default'})")
+                get_logger().info(f"Using reasoning_effort='{effort}' for GPT-5 model (from {source})")

Refactor the reasoning_effort logic to remove redundant checks by storing the
validation result in a variable, and add a warning log for invalid configuration
values.

pr_agent/algo/ai_handlers/litellm_ai_handler.py [295-313]

 # Respect user's reasoning_effort config setting
 # Supported values: 'none', 'low', 'medium', 'high'
 # Note: gpt-5.1 supports 'none', but gpt-5.1-codex does not
 config_effort = get_settings().config.reasoning_effort
 supported_efforts = ['none', 'low', 'medium', 'high']
+is_config_valid = config_effort in supported_efforts
+source = "config"
 
-if model.endswith('_thinking'):
-    # For thinking models, use config value or default to 'low'
-    effort = config_effort if config_effort in supported_efforts else 'low'
+if is_config_valid:
+    effort = config_effort
 else:
-    # For non-thinking models, use config value or default to 'none'
-    # If 'none' fails for specific models (e.g., codex), they should set config to 'low'
-    effort = config_effort if config_effort in supported_efforts else 'none'
+    source = "default"
+    if config_effort is not None:
+        get_logger().warning(
+            f"Invalid reasoning_effort '{config_effort}' in config. "
+            f"Using default. Supported values: {supported_efforts}"
+        )
+    if model.endswith('_thinking'):
+        effort = 'low'
+    else:
+        effort = 'none'
 
 thinking_kwargs_gpt5 = {
     "reasoning_effort": effort,
     "allowed_openai_params": ["reasoning_effort"],
 }
-get_logger().info(f"Using reasoning_effort={effort} for GPT-5 model (from {'config' if config_effort in supported_efforts else 'default'})")
+get_logger().info(f"Using reasoning_effort='{effort}' for GPT-5 model (from {source})")

[Suggestion processed]

Suggestion importance[1-10]: 6


Why: The suggestion correctly identifies redundant checks and proposes a refactoring that improves readability. It also enhances usability by adding a warning log for invalid configuration values, which is a valuable improvement.

Impact: Low
Category: Learned best practice
Use safe configuration attribute access

Use safe attribute access with getattr() to handle cases where
config.reasoning_effort may not exist. This prevents AttributeError if the
configuration attribute is missing or None.

pr_agent/algo/ai_handlers/litellm_ai_handler.py [298-307]

-config_effort = get_settings().config.reasoning_effort
+config = getattr(get_settings(), 'config', None)
+config_effort = getattr(config, 'reasoning_effort', None) if config else None
 supported_efforts = ['none', 'low', 'medium', 'high']
 
 if model.endswith('_thinking'):
     # For thinking models, use config value or default to 'low'
     effort = config_effort if config_effort in supported_efforts else 'low'
 else:
     # For non-thinking models, use config value or default to 'none'
     # If 'none' fails for specific models (e.g., codex), they should set config to 'low'
     effort = config_effort if config_effort in supported_efforts else 'none'
Suggestion importance[1-10]: 6


Why: Relevant best practice - Use safe dictionary access methods like .get() with default values when accessing configuration attributes that may not exist, to prevent AttributeError exceptions at runtime.

Impact: Low
  • Author self-review: I have reviewed the PR code suggestions, and addressed the relevant ones.

- Store validation result in variable to avoid redundant checks
- Add warning log when invalid reasoning_effort value is configured
- Improve source tracking in info log
- Makes code more maintainable and easier to debug
@Tyler-Rak
Author

Code review suggestions addressed:

Suggestion 1 (Refactor logic): Implemented in commit 956f366

  • Improved readability by storing validation result in variable
  • Added warning log for invalid configuration values

Suggestion 2 (getattr defensive coding): Not implementing

  • get_settings() always returns a valid Dynaconf object
  • config.reasoning_effort is always defined with default value "medium" in configuration.toml
  • Existing validation already handles None/invalid values
  • Adding getattr() would add unnecessary complexity
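The claim that the existing validation already handles None can be checked directly: a None (or otherwise unsupported) value simply fails the membership test and falls through to the default, so no getattr() guard is required. A standalone illustration mirroring the PR's validation (resolve is a hypothetical helper, not code from the PR):

```python
supported_efforts = ['none', 'low', 'medium', 'high']

def resolve(config_effort, default='none'):
    # The membership test rejects None and any unsupported string,
    # so the default is used without raising an error.
    return config_effort if config_effort in supported_efforts else default

print(resolve(None))      # falls back to 'none'
print(resolve('bogus'))   # falls back to 'none'
print(resolve('medium'))  # passes through as 'medium'
```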

All relevant suggestions have been addressed.



Development

Successfully merging this pull request may close these issues.

gpt-5.1 and gpt-5.1-codex always receive reasoning_effort="minimal", causing failures with the OpenAI API
