fix: address critical architecture gaps vs core philosophy #1274
MervinPraison merged 3 commits into main from
Conversation
Fixes 3 critical architecture violations:

Gap 1: YAML Feature Parity - Added missing features
- Added autonomy, guardrails, approval support to YAML configuration
- Updated agents_generator.py to extract and pass these configs to PraisonAgent
- Added CLI field mapping (--trust -> approval, --guardrail -> guardrails)

Gap 2: Agent Constructor Parameter Cleanup - Improved API
- Replaced Optional[Any] type annotations with proper Union types
- Added forward references for better IDE support and type checking

Gap 3: Database Implementation Configuration - Made constants configurable
- Added tool_output_limit field to OutputConfig (default 16000)
- Made hardcoded DEFAULT_TOOL_OUTPUT_LIMIT configurable via OutputConfig
- Updated agent.tool_output_limit usage in tool_execution.py

All changes maintain backward compatibility and follow AGENTS.md guidelines.
Tested: Basic agent creation, YAML parsing, configurable limits work correctly.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
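The Gap 3 wiring can be sketched as follows. This is a minimal stand-in assuming the names from the PR description (`OutputConfig`, `tool_output_limit`, `DEFAULT_TOOL_OUTPUT_LIMIT`); it is not the actual SDK implementation, only an illustration of making a hardcoded limit configurable with a fallback:

```python
from dataclasses import dataclass
from typing import Optional

# The previously hardcoded constant, per the PR description.
DEFAULT_TOOL_OUTPUT_LIMIT = 16000

@dataclass
class OutputConfig:
    # Hypothetical minimal stand-in for the real OutputConfig.
    tool_output_limit: int = DEFAULT_TOOL_OUTPUT_LIMIT

def truncate_tool_output(text: str, config: Optional[OutputConfig] = None) -> str:
    """Apply the configured character limit, falling back to the default."""
    limit = getattr(config, "tool_output_limit", DEFAULT_TOOL_OUTPUT_LIMIT)
    return text[:limit]

print(len(truncate_tool_output("x" * 20000, OutputConfig(tool_output_limit=5000))))  # 5000
print(len(truncate_tool_output("x" * 20000)))  # 16000
```

Using `getattr` with a default keeps backward compatibility: agents built without an `OutputConfig` keep the old behavior.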
@coderabbitai review

/review

@gemini review this PR

✅ Actions performed: Review triggered.
Code Review by Qodo
1.
📝 Walkthrough

Type annotations in the Agent constructor are updated to use forward references. Support for a configurable tool_output_limit is added via OutputConfig, and the agents generator gains autonomy, guardrails, and approval handling from YAML and CLI.

Changes
Sequence Diagram(s)

sequenceDiagram
participant Config as OutputConfig
participant Agent as Agent.__init__
participant Exec as ToolExecutionMixin
Config->>Agent: tool_output_limit defined
Agent->>Agent: Extract tool_output_limit<br/>from OutputConfig or use default
Agent->>Agent: Store self.tool_output_limit
Note over Agent: Instance ready with<br/>configured output limit
Exec->>Agent: Query self.tool_output_limit
Agent-->>Exec: Return configured limit
Exec->>Exec: Apply limit to tool output<br/>truncation logic
sequenceDiagram
participant CLI as CLI Arguments
participant Gen as AgentsGenerator
participant YAML as YAML Config
participant Agent as PraisonAgent
CLI->>Gen: --autonomy, --guardrail, --approval
Gen->>Gen: _merge_cli_config:<br/>Map --guardrail→guardrails<br/>Convert --trust→approval
Gen->>YAML: Extract autonomy, guardrails,<br/>approval from role details
YAML->>Agent: Pass as constructor<br/>parameters
Agent->>Agent: Initialize with advanced<br/>safety features
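The CLI mapping step in the diagram above can be sketched roughly as below. The `_merge_cli_config` name comes from the diagram; the exact mapping table and argument shape are assumptions, not the real agents_generator.py code:

```python
def merge_cli_config(cli_args: dict) -> dict:
    """Map CLI flag names onto Agent constructor parameter names.

    Rough sketch of the _merge_cli_config step shown in the diagram;
    the real implementation lives in agents_generator.py.
    """
    field_mappings = {
        "trust": "approval",        # --trust -> approval
        "guardrail": "guardrails",  # --guardrail -> guardrails
        "autonomy": "autonomy",     # --autonomy passes through unchanged
    }
    overrides = {}
    for cli_field, agent_field in field_mappings.items():
        value = cli_args.get(cli_field)
        if value is not None:  # only flags the user actually set become overrides
            overrides[agent_field] = value
    return overrides

print(merge_cli_config({"trust": "safe", "guardrail": True}))
# {'approval': 'safe', 'guardrails': True}
```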
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 inconclusive)
Code Review
This pull request introduces a configurable tool_output_limit for agents, allowing users to control the maximum character count for tool outputs via OutputConfig. It also refines the type annotations in the Agent constructor for better clarity and updates the agent generator to support CLI overrides for autonomy, guardrails, and approval (including a mapping from --trust to approval). Feedback was provided regarding the new type hints in agent.py, which were found to be overly restrictive by omitting supported types like str and dict for several parameters, and missing necessary TYPE_CHECKING imports for forward-referenced types.
    memory: Optional[Union[bool, 'MemoryConfig', 'MemoryManager']] = None,
    knowledge: Optional[Union[bool, List[str], 'KnowledgeConfig', 'Knowledge']] = None,
    planning: Optional[Union[bool, 'PlanningConfig']] = False,
    reflection: Optional[Union[bool, 'ReflectionConfig']] = None,
    guardrails: Optional[Union[bool, Callable, 'GuardrailConfig']] = None,
    web: Optional[Union[bool, 'WebConfig']] = None,
    context: Optional[Union[bool, 'ManagerConfig', 'ContextManager']] = None,
    autonomy: Optional[Union[bool, Dict[str, Any], 'AutonomyConfig']] = None,
    verification_hooks: Optional[List[Any]] = None,  # Deprecated: use autonomy=AutonomyConfig(verification_hooks=[...])
-   output: Optional[Union[str, Any]] = None,  # Union[str preset, OutputConfig]
-   execution: Optional[Union[str, Any]] = None,  # Union[str preset, ExecutionConfig]
-   templates: Optional[Any] = None,  # TemplateConfig
-   caching: Optional[Union[bool, Any]] = None,  # Union[bool, CachingConfig]
-   hooks: Optional[Union[List[Any], Any]] = None,  # Union[list, HooksConfig]
-   skills: Optional[Union[List[str], Any]] = None,  # Union[list, SkillsConfig]
-   approval: Optional[Union[bool, Any]] = None,  # Union[bool, ApprovalProtocol backend]
+   output: Optional[Union[str, 'OutputConfig']] = None,
+   execution: Optional[Union[str, 'ExecutionConfig']] = None,
+   templates: Optional['TemplateConfig'] = None,
+   caching: Optional[Union[bool, 'CachingConfig']] = None,
+   hooks: Optional[Union[List[Any], 'HooksConfig']] = None,
+   skills: Optional[Union[List[str], 'SkillsConfig']] = None,
+   approval: Optional[Union[bool, 'ApprovalProtocol']] = None,
    tool_timeout: Optional[int] = None,  # P8/G11: Timeout in seconds for each tool call
-   learn: Optional[Union[bool, Any]] = None,  # Union[bool, LearnConfig] - Continuous learning (peer to memory)
+   learn: Optional[Union[bool, 'LearnConfig']] = None,  # Continuous learning (peer to memory)
The updated type annotations for the Agent constructor are overly restrictive and omit several valid input types supported by the implementation.

- Missing `str` support: Parameters like `memory`, `knowledge`, `planning`, `reflection`, `guardrails`, `web`, `caching`, `hooks`, and `skills` all support string presets or single sources via the `resolve` utility (e.g., `memory="redis"`, `planning="gpt-4o"`), but `str` is missing from their `Union` types.
- Incomplete `approval` type: The `approval` parameter supports `str` (for presets like "safe"), `'ApprovalConfig'`, and `'ApprovalProtocol'`, but the current hint only includes `bool` and `'ApprovalProtocol'`.
- Missing `dict` for `learn`: The `learn` parameter supports a configuration dictionary, which is not reflected in the type hint.
- Missing `TYPE_CHECKING` imports: Forward-referenced types such as `'MemoryManager'`, `'Knowledge'`, `'ManagerConfig'`, `'ContextManager'`, `'AutonomyConfig'`, and `'ApprovalProtocol'` are used in string literals but are not imported in the `TYPE_CHECKING` block at the top of the file. This will prevent type checkers and IDEs from resolving these types, defeating the purpose of the improvement.
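The `TYPE_CHECKING` point can be illustrated with a generic, self-contained sketch (the module name `heavy_module` is a placeholder, not the PraisonAI layout): string annotations only resolve for type checkers if the names are importable under `TYPE_CHECKING`, while the import never runs at runtime, avoiding circular-import cost:

```python
from typing import TYPE_CHECKING, Optional, Union

if TYPE_CHECKING:
    # Imported only for static analysis; never executed at runtime,
    # so circular imports and heavy dependencies are avoided.
    from heavy_module import MemoryConfig  # placeholder module/class names

def init_agent(memory: Optional[Union[bool, str, "MemoryConfig"]] = None) -> str:
    # The string annotation "MemoryConfig" is resolved by type checkers via
    # the TYPE_CHECKING import above; at runtime it simply stays a string.
    return type(memory).__name__

print(init_agent("redis"))  # str
print(init_agent(True))     # bool
```

Without the `TYPE_CHECKING` import, tools like Ruff flag the forward reference as F821 (undefined name), which is exactly the failure mode described above.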
Suggested change (add the missing str, dict, ApprovalConfig, and LearnConfig variants):

    memory: Optional[Union[bool, str, 'MemoryConfig', 'MemoryManager']] = None,
    knowledge: Optional[Union[bool, str, List[str], 'KnowledgeConfig', 'Knowledge']] = None,
    planning: Optional[Union[bool, str, 'PlanningConfig']] = False,
    reflection: Optional[Union[bool, str, 'ReflectionConfig']] = None,
    guardrails: Optional[Union[bool, str, Callable, 'GuardrailConfig']] = None,
    web: Optional[Union[bool, str, 'WebConfig']] = None,
    context: Optional[Union[bool, 'ManagerConfig', 'ContextManager']] = None,
    autonomy: Optional[Union[bool, Dict[str, Any], 'AutonomyConfig']] = None,
    verification_hooks: Optional[List[Any]] = None,  # Deprecated: use autonomy=AutonomyConfig(verification_hooks=[...])
    output: Optional[Union[str, 'OutputConfig']] = None,
    execution: Optional[Union[str, 'ExecutionConfig']] = None,
    templates: Optional['TemplateConfig'] = None,
    caching: Optional[Union[bool, str, 'CachingConfig']] = None,
    hooks: Optional[Union[str, List[Any], 'HooksConfig']] = None,
    skills: Optional[Union[str, List[str], 'SkillsConfig']] = None,
    approval: Optional[Union[bool, str, 'ApprovalConfig', 'ApprovalProtocol']] = None,
    tool_timeout: Optional[int] = None,  # P8/G11: Timeout in seconds for each tool call
    learn: Optional[Union[bool, Dict[str, Any], 'LearnConfig']] = None,  # Continuous learning (peer to memory)
    # Extract YAML configuration for advanced features
    autonomy_config = details.get('autonomy')
    guardrails_config = details.get('guardrails')
    approval_config = details.get('approval')

    agent = PraisonAgent(
        name=role_filled,
        role=role_filled,
2. YAML approval dict crashes 🐞 Bug | Correctness
AgentsGenerator passes YAML approval: values directly into Agent(approval=...), but a YAML mapping becomes a dict and is stored as _approval_backend; later tool execution calls backend.request_approval(...) and will raise at runtime because dict has no such method.
Agent Prompt
## Issue description
YAML `approval:` mappings are passed as plain dicts into `Agent(approval=...)`. The agent stores unknown types as `_approval_backend`, and tool execution later calls `backend.request_approval(...)`, which will crash for dict inputs.
## Issue Context
- YAML is loaded with `yaml.safe_load`, so mappings become dicts.
- `Agent.__init__` supports `approval=True/False`, permission preset strings, `ApprovalConfig`, and backend objects implementing `ApprovalProtocol`.
## Fix Focus Areas
- src/praisonai/praisonai/agents_generator.py[429-446]
- src/praisonai/praisonai/agents_generator.py[1199-1219]
- src/praisonai-agents/praisonaiagents/agent/agent.py[1613-1652]
## Suggested fix
In `AgentsGenerator._run_praisonai`, detect `isinstance(approval_config, dict)` and convert it into a valid `approval=` value (backend object or `ApprovalConfig`). Reuse the wrapper’s existing resolver:
- import `resolve_approval_config` from `praisonai.cli.features.approval`
- map YAML fields (e.g., `backend`/`backend_name`, `approve_all_tools`/`all_tools`, `approval_timeout`/`timeout`) to `resolve_approval_config(...)`
If you want this supported in the core SDK too, additionally handle `dict` in `Agent.__init__` by constructing `ApprovalConfig` (but ensure any string backend names are resolved to actual backend instances before use).
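A minimal sketch of the suggested conversion follows. The `ApprovalConfig` fields and the YAML key aliases are taken from the prompt above and are assumptions about the schema, not the SDK's actual definitions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ApprovalConfig:
    # Hypothetical minimal stand-in for the SDK's ApprovalConfig.
    backend: Optional[str] = None
    approve_all_tools: bool = False
    approval_timeout: Optional[int] = None

def resolve_yaml_approval(value):
    """Convert a raw YAML `approval:` value into something Agent can accept.

    yaml.safe_load turns mappings into plain dicts; without this conversion a
    dict would be stored as _approval_backend and crash later when tool
    execution calls backend.request_approval(...).
    """
    if isinstance(value, dict):
        # Map both field spellings mentioned in the prompt above.
        return ApprovalConfig(
            backend=value.get("backend", value.get("backend_name")),
            approve_all_tools=value.get("approve_all_tools", value.get("all_tools", False)),
            approval_timeout=value.get("approval_timeout", value.get("timeout")),
        )
    return value  # bool, preset string, or backend object pass through unchanged

cfg = resolve_yaml_approval({"backend": "cli", "all_tools": True, "timeout": 30})
print(cfg)  # ApprovalConfig(backend='cli', approve_all_tools=True, approval_timeout=30)
print(resolve_yaml_approval("safe"))  # safe
```

Passing non-dict values through unchanged preserves the existing `True`/`False`/preset behavior, so the fix stays backward compatible.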
@copilot Do a thorough review of this PR. Read ALL existing reviewer comments above first. Review areas:
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/praisonai-agents/praisonaiagents/agent/agent.py (1)
793-803: ⚠️ Potential issue | 🟠 Major

Validate tool_output_limit before storing it. Line 793 accepts any value from config without bounds/type checks. Invalid values (e.g., "5000" or -1) can break or invert truncation behavior downstream.

🔧 Proposed fix

-        tool_output_limit = getattr(_output_config, 'tool_output_limit', DEFAULT_TOOL_OUTPUT_LIMIT)
+        raw_tool_output_limit = getattr(_output_config, 'tool_output_limit', DEFAULT_TOOL_OUTPUT_LIMIT)
+        if not isinstance(raw_tool_output_limit, int) or raw_tool_output_limit <= 0:
+            raise ValueError(
+                f"Agent '{name or 'Agent'}': output.tool_output_limit must be a positive integer. "
+                "Set output=OutputConfig(tool_output_limit=16000) or another positive value."
+            )
+        tool_output_limit = raw_tool_output_limit

As per coding guidelines: "Error handling: Fail fast with clear error messages; include remediation hints in exceptions".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/praisonai-agents/praisonaiagents/agent/agent.py` around lines 793 - 803, Validate the value retrieved from _output_config for tool_output_limit before assigning: ensure tool_output_limit is an integer and >= 0 (optionally accept numeric strings by converting with int() only if purely numeric); if validation fails, raise a ValueError with a clear remediation message referencing tool_output_limit (e.g., "tool_output_limit must be a non-negative integer, got: ...") and, if desired, cap excessively large values to DEFAULT_TOOL_OUTPUT_LIMIT or another defined MAX to avoid inverted/truncated behavior downstream; update the code paths that set tool_output_limit to perform this check and error handling where DEFAULT_TOOL_OUTPUT_LIMIT, _output_config, and tool_output_limit are used.
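The validation the review asks for can be sketched as a standalone helper; the function name and error wording here are illustrative, not the actual fix that landed:

```python
DEFAULT_TOOL_OUTPUT_LIMIT = 16000

def validate_tool_output_limit(value) -> int:
    """Fail fast on invalid limits, per the guideline quoted above.

    bool is rejected explicitly because bool is a subclass of int in Python,
    so isinstance(True, int) alone would let True slip through as 1.
    """
    if isinstance(value, bool) or not isinstance(value, int) or value <= 0:
        raise ValueError(
            f"tool_output_limit must be a positive integer, got {value!r}. "
            f"Set output=OutputConfig(tool_output_limit={DEFAULT_TOOL_OUTPUT_LIMIT}) "
            "or another positive value."
        )
    return value

print(validate_tool_output_limit(5000))  # 5000
for bad in ("5000", -1, 0, True):
    try:
        validate_tool_output_limit(bad)
    except ValueError:
        print("rejected:", bad)
```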
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/praisonai-agents/praisonaiagents/agent/agent.py`:
- Around line 479-496: The function parameter type annotations use undefined
forward-references and an incorrect name: add imports for MemoryConfig,
KnowledgeConfig, Knowledge, ContextManager, AutonomyConfig, ApprovalProtocol,
and LearnConfig into the existing if TYPE_CHECKING: block so those names are
available for forward references; replace the non-existent ManagerConfig
referenced for the context parameter with ContextConfig (or use ContextConfig
and/or ContextManager as appropriate) to match context/models.py; and expand the
approval parameter annotation to include str and ApprovalConfig (and any
Approval backend interface type used at runtime, e.g., ApprovalBackend) so it
matches runtime behavior (accepts presets like "safe"/"read_only",
ApprovalConfig instances, and backend instances). Ensure you update the
signature symbols memory, knowledge, context, autonomy, approval, and learn
accordingly.
In `@src/praisonai-agents/praisonaiagents/config/feature_configs.py`:
- Around line 629-631: The tool_output_limit field currently allows invalid
values; add validation at the config model boundary (the class where
tool_output_limit: int = 16000 is defined) to fail fast on non-integer or
non-positive inputs by raising a clear ValueError; implement this using the
model's validation hook (e.g., a Pydantic `@validator`("tool_output_limit") or the
model's __post_init__/root_validator) to coerce/check type and ensure
tool_output_limit > 0 and include a descriptive error message referencing
tool_output_limit.
In `@src/praisonai/praisonai/agents_generator.py`:
- Around line 272-273: The loop in agents_generator.py uses an unused variable
yaml_field in "for cli_field, yaml_field in field_mappings.items()"; change it
to iterate only keys (e.g., "for cli_field in field_mappings:" or "for cli_field
in field_mappings.keys():") so yaml_field is removed, and keep the existing body
that references cli_field and agent_overrides unchanged (ensure no other code
expects yaml_field).
---
Outside diff comments:
In `@src/praisonai-agents/praisonaiagents/agent/agent.py`:
- Around line 793-803: Validate the value retrieved from _output_config for
tool_output_limit before assigning: ensure tool_output_limit is an integer and
>= 0 (optionally accept numeric strings by converting with int() only if purely
numeric); if validation fails, raise a ValueError with a clear remediation
message referencing tool_output_limit (e.g., "tool_output_limit must be a
non-negative integer, got: ...") and, if desired, cap excessively large values
to DEFAULT_TOOL_OUTPUT_LIMIT or another defined MAX to avoid inverted/truncated
behavior downstream; update the code paths that set tool_output_limit to perform
this check and error handling where DEFAULT_TOOL_OUTPUT_LIMIT, _output_config,
and tool_output_limit are used.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 04843664-b768-4d1c-a2e0-08dc52f5835d
📒 Files selected for processing (4)
- src/praisonai-agents/praisonaiagents/agent/agent.py
- src/praisonai-agents/praisonaiagents/agent/tool_execution.py
- src/praisonai-agents/praisonaiagents/config/feature_configs.py
- src/praisonai/praisonai/agents_generator.py
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# 1) Confirm current undefined-name lint errors on this file
ruff check src/praisonai-agents/praisonaiagents/agent/agent.py --select F821
# 2) Confirm runtime branches accept values not represented by the annotation
rg -n -C2 "approval: Optional\\[Union\\[bool" src/praisonai-agents/praisonaiagents/agent/agent.py
rg -n -C3 "isinstance\\(approval, str\\)|isinstance\\(approval, ApprovalConfig\\)" src/praisonai-agents/praisonaiagents/agent/agent.py
# 3) Locate symbol definitions/import paths before adding TYPE_CHECKING imports
rg -n "class (MemoryManager|Knowledge|ManagerConfig|ContextManager|AutonomyConfig|ApprovalProtocol|ApprovalConfig|LearnConfig)\\b" src/praisonai-agents/praisonaiagents --type py

Repository: MervinPraison/PraisonAI
Length of output: 5358
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# 1) Find where approval parameter is actually used/validated in __init__
echo "=== Approval handling in __init__ ==="
rg -n "approval" src/praisonai-agents/praisonaiagents/agent/agent.py -A 2 -B 2 | head -100
# 2) Find class definitions for the 7 flagged types
echo -e "\n=== Class definitions ==="
rg -n "^class (MemoryManager|Knowledge|ManagerConfig|ContextManager|AutonomyConfig|ApprovalProtocol|LearnConfig)" src/praisonai-agents/praisonaiagents --type py
# 3) Check current TYPE_CHECKING imports in agent.py
echo -e "\n=== Current TYPE_CHECKING block ==="
rg -n "if TYPE_CHECKING:" src/praisonai-agents/praisonaiagents/agent/agent.py -A 20

Repository: MervinPraison/PraisonAI
Length of output: 6321
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# 1) Find MemoryManager and MemoryConfig
echo "=== MemoryManager and MemoryConfig ==="
rg -n "^class (MemoryManager|MemoryConfig)" src/praisonai-agents/praisonaiagents --type py
# 2) Check what ManagerConfig is (or if it exists)
echo -e "\n=== ManagerConfig search ==="
rg -n "class ManagerConfig" src/praisonai-agents/praisonaiagents --type py
# 3) Check how context parameter is actually used in __init__
echo -e "\n=== Context parameter handling ==="
rg -n "self\._context|self\.context|isinstance\(context" src/praisonai-agents/praisonaiagents/agent/agent.py -A 1 -B 1 | head -80
# 4) Check imports at start of agent.py
echo -e "\n=== Existing imports in agent.py ==="
head -100 src/praisonai-agents/praisonaiagents/agent/agent.py | grep -E "^from|^import"

Repository: MervinPraison/PraisonAI
Length of output: 222
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# 1) Search broadly for MemoryManager
echo "=== MemoryManager search ==="
rg -i "memorymanager" src/praisonai-agents/praisonaiagents --type py
# 2) Check context initialization in agent.py to understand parameter intent
echo -e "\n=== Context initialization/usage ==="
rg -n "self\._context_manager|self\.context" src/praisonai-agents/praisonaiagents/agent/agent.py | head -30
# 3) Look at line 485 context annotation and find what's initialized there
echo -e "\n=== Lines 900-1300 looking for context init ==="
sed -n '900,1300p' src/praisonai-agents/praisonaiagents/agent/agent.py | grep -n -A 3 -B 3 "context"
# 4) Check what types memory parameter actually accepts at runtime
echo -e "\n=== Memory parameter handling ==="
rg -n "isinstance\(memory" src/praisonai-agents/praisonaiagents/agent/agent.py -A 2 | head -50Repository: MervinPraison/PraisonAI
Length of output: 1705
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# 1) Find if MemoryManager is imported or defined anywhere
echo "=== MemoryManager in memory module ==="
rg -n "MemoryManager" src/praisonai-agents/praisonaiagents/memory --type py | head -20
# 2) Check memory parameter type handling at runtime
echo -e "\n=== Memory parameter runtime handling ==="
sed -n '1300,1400p' src/praisonai-agents/praisonaiagents/agent/agent.py | grep -n -A 5 -B 5 "memory"
# 3) Get actual MemoryConfig and MemoryManager type alias definition
echo -e "\n=== MemoryParam type alias ==="
rg -n "MemoryParam|MemoryManager" src/praisonai-agents/praisonaiagents/config/feature_configs.py -B 2 -A 2
# 4) Check for knowledge initialization to understand Knowledge vs KnowledgeConfig
echo -e "\n=== Knowledge parameter handling ==="
rg -n "isinstance\(knowledge" src/praisonai-agents/praisonaiagents/agent/agent.py -A 3 | head -40
# 5) Check what's imported in agent.py from memory module
echo -e "\n=== Memory imports in agent.py ==="
rg -n "from.*memory" src/praisonai-agents/praisonaiagents/agent/agent.py

Repository: MervinPraison/PraisonAI
Length of output: 105
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# 1) List what's in the memory module
echo "=== Memory module structure ==="
find src/praisonai-agents/praisonaiagents/memory -type f -name "*.py" | head -20
# 2) Get MemoryParam definition fully
echo -e "\n=== MemoryParam definition ==="
rg -n "MemoryParam" src/praisonai-agents/praisonaiagents/config/feature_configs.py -B 1 -A 1
# 3) Check if Any is actually a stand-in for MemoryManager
echo -e "\n=== MemoryManager comment/note ==="
rg -n "MemoryManager" src/praisonai-agents/praisonaiagents --type py -B 2 -A 2
# 4) Check approval/protocols to see if ApprovalConfig is there
echo -e "\n=== ApprovalConfig definition ==="
rg -n "^class ApprovalConfig" src/praisonai-agents/praisonaiagents --type py -B 1 -A 5
# 5) Verify context parameter annotation intent (line 485)
echo -e "\n=== Context parameter checks ==="
sed -n '480,490p' src/praisonai-agents/praisonaiagents/agent/agent.py

Repository: MervinPraison/PraisonAI
Length of output: 5663
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Check what config types exist for context module
echo "=== Context module configs ==="
rg -n "^class.*Config" src/praisonai-agents/praisonaiagents/context --type py
# Check memory module for manager type
echo -e "\n=== Memory module classes ==="
rg -n "^class" src/praisonai-agents/praisonaiagents/memory/memory.py | head -20
# Find all *Config classes that might relate to the 7 flagged types
echo -e "\n=== All *Config classes ==="
rg -n "^class.*Config" src/praisonai-agents/praisonaiagents/config/feature_configs.py | grep -E "(Memory|Planning|Reflection|Guardrail|Web|Output|Execution|Template|Caching|Hooks|Skills|Learn)"
# Check if there's a PlanningConfig, ReflectionConfig etc
echo -e "\n=== Search for specific configs ==="
rg -n "^class (PlanningConfig|ReflectionConfig|GuardrailConfig|WebConfig|OutputConfig|ExecutionConfig|TemplateConfig|CachingConfig|HooksConfig|SkillsConfig)" src/praisonai-agents/praisonaiagents --type py

Repository: MervinPraison/PraisonAI
Length of output: 2038
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Definitively check if ManagerConfig exists anywhere
echo "=== ManagerConfig search ==="
rg -n "class ManagerConfig" src/praisonai-agents/praisonaiagents --type py
# Check what config is actually used for context in code
echo -e "\n=== Context config usage in agent.py ==="
sed -n '1750,1960p' src/praisonai-agents/praisonaiagents/agent/agent.py | grep -n "isinstance\|context" | head -30
# Verify imports at top of agent.py to see what's already imported
echo -e "\n=== Imports in agent.py (first 200 lines) ==="
sed -n '1,200p' src/praisonai-agents/praisonaiagents/agent/agent.py | grep "^from\|^import"
# Check if MemoryManager is used anywhere else in codebase
echo -e "\n=== MemoryManager usage outside agent.py ==="
rg -n "MemoryManager" src/praisonai-agents/praisonaiagents --type py | grep -v agent.py

Repository: MervinPraison/PraisonAI
Length of output: 95
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Check what context parameter actually resolves to during init
echo "=== Context parameter processing ==="
sed -n '1750,2000p' src/praisonai-agents/praisonaiagents/agent/agent.py | head -80
# Check if there's a context config anywhere
echo -e "\n=== ContextConfig vs ContextManager ==="
rg -n "class (ContextConfig|ContextManager)" src/praisonai-agents/praisonaiagents/context --type py -A 3
# Look at exact annotation line and compare to what's passed
echo -e "\n=== Check what gets passed for context param ==="
rg -n "self\._context_param|self\.context_param" src/praisonai-agents/praisonaiagents/agent/agent.py
# Search where context parameter is stored/used
echo -e "\n=== How context param is stored ==="
rg -n "context_param\|_context_param" src/praisonai-agents/praisonaiagents/agent/agent.py | head -20

Repository: MervinPraison/PraisonAI
Length of output: 2823
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Check if there's a ManagerConfig somewhere or if it's a typo
echo "=== Search broader for Manager ==="
rg -n "class.*Manager" src/praisonai-agents/praisonaiagents/context --type py | grep -i config
# What does context parameter actually accept?
echo -e "\n=== Check context param handling in property getter ==="
sed -n '1880,1970p' src/praisonai-agents/praisonaiagents/agent/agent.py
# Verify which types exist for annotations
echo -e "\n=== Summary: which types are actually defined ==="
echo "Knowledge: $(rg -c 'class Knowledge' src/praisonai-agents/praisonaiagents/knowledge/knowledge.py)"
echo "ContextManager: $(rg -c 'class ContextManager' src/praisonai-agents/praisonaiagents/context/manager.py)"
echo "ManagerConfig: $(rg -c 'class ManagerConfig' src/praisonai-agents/praisonaiagents)"
echo "MemoryManager: $(rg -c 'class MemoryManager' src/praisonai-agents/praisonaiagents)"
echo "ApprovalConfig: $(rg -c 'class ApprovalConfig' src/praisonai-agents/praisonaiagents/approval/protocols.py)"

Repository: MervinPraison/PraisonAI
Length of output: 101
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Just list what's defined
echo "=== Memory module __init__.py exports ==="
cat src/praisonai-agents/praisonaiagents/memory/__init__.py
echo -e "\n=== ContextManager full definition start ==="
head -20 src/praisonai-agents/praisonaiagents/context/manager.py
echo -e "\n=== Check for MemoryManager in any __init__.py ==="
rg "MemoryManager" src/praisonai-agents/praisonaiagents/memory/__init__.py
echo -e "\n=== What is Memory class in memory.py ==="
sed -n '179,190p' src/praisonai-agents/praisonaiagents/memory/memory.py

Repository: MervinPraison/PraisonAI
Length of output: 7274
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Check if Memory class in memory.py might be the "MemoryManager"
echo "=== Memory class in memory.py ==="
sed -n '179,195p' src/praisonai-agents/praisonaiagents/memory/memory.py
# Check what approval actually does at runtime with str
echo -e "\n=== Approval str handling ==="
sed -n '1617,1635p' src/praisonai-agents/praisonaiagents/agent/agent.py
# Final check: are there other imports/usages showing what the types should be?
echo -e "\n=== How approval imports ApprovalConfig at runtime ==="
rg -n "from.*approval.*import.*ApprovalConfig" src/praisonai-agents/praisonaiagents/agent/agent.py

Repository: MervinPraison/PraisonAI
Length of output: 1964
Type annotations have forward-reference and definition issues that require fixes.

The annotations at lines 479-496 have three distinct problems:

- Forward references (F821): Names like `MemoryConfig`, `Knowledge`, `ContextManager`, `AutonomyConfig`, `ApprovalProtocol`, and `LearnConfig` are flagged as undefined because they are not in the `TYPE_CHECKING` block. These need imports in the `if TYPE_CHECKING:` section (currently at line 178).
- `ManagerConfig` does not exist: Line 485 references `ManagerConfig`, which is not defined anywhere in the codebase. This should be `ContextConfig` (from context/models.py:246) or removed if only `ContextManager` is needed.
- Approval annotation is incomplete: The line 494 annotation only includes `bool` and `ApprovalProtocol`, but runtime code (lines 1621-1644) accepts `str` (permission presets like "safe", "read_only"), `ApprovalConfig` objects, and backend instances. The annotation must include these.
🔧 Proposed fix

```diff
 if TYPE_CHECKING:
+    from ..approval.protocols import ApprovalConfig, ApprovalProtocol
+    from ..config.feature_configs import LearnConfig, MemoryConfig
+    from ..context.models import ContextConfig
+    from ..context.manager import ContextManager
+    from ..knowledge.knowledge import Knowledge
+    from ..agent.autonomy import AutonomyConfig
     from ..task.task import Task
     from .handoff import Handoff, HandoffConfig, HandoffResult
     from ..rag.models import RAGResult, ContextPack
     from ..eval.results import EvaluationLoopResult

-    context: Optional[Union[bool, 'ManagerConfig', 'ContextManager']] = None,
+    context: Optional[Union[bool, 'ContextManager']] = None,
     autonomy: Optional[Union[bool, Dict[str, Any], 'AutonomyConfig']] = None,
     verification_hooks: Optional[List[Any]] = None,  # Deprecated: use autonomy=AutonomyConfig(verification_hooks=[...])
     output: Optional[Union[str, 'OutputConfig']] = None,
     execution: Optional[Union[str, 'ExecutionConfig']] = None,
     templates: Optional['TemplateConfig'] = None,
     caching: Optional[Union[bool, 'CachingConfig']] = None,
     hooks: Optional[Union[List[Any], 'HooksConfig']] = None,
     skills: Optional[Union[List[str], 'SkillsConfig']] = None,
-    approval: Optional[Union[bool, 'ApprovalProtocol']] = None,
+    approval: Optional[Union[bool, str, 'ApprovalConfig', 'ApprovalProtocol']] = None,
     tool_timeout: Optional[int] = None,  # P8/G11: Timeout in seconds for each tool call
```

🧰 Tools
🪛 Ruff (0.15.7)
[error] 479-479: Undefined name MemoryManager
(F821)
[error] 480-480: Undefined name Knowledge
(F821)
[error] 485-485: Undefined name ManagerConfig
(F821)
[error] 485-485: Undefined name ContextManager
(F821)
[error] 486-486: Undefined name AutonomyConfig
(F821)
[error] 494-494: Undefined name ApprovalProtocol
(F821)
[error] 496-496: Undefined name LearnConfig
(F821)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/praisonai-agents/praisonaiagents/agent/agent.py` around lines 479 - 496,
The function parameter type annotations use undefined forward-references and an
incorrect name: add imports for MemoryConfig, KnowledgeConfig, Knowledge,
ContextManager, AutonomyConfig, ApprovalProtocol, and LearnConfig into the
existing if TYPE_CHECKING: block so those names are available for forward
references; replace the non-existent ManagerConfig referenced for the context
parameter with ContextConfig (or use ContextConfig and/or ContextManager as
appropriate) to match context/models.py; and expand the approval parameter
annotation to include str and ApprovalConfig (and any Approval backend interface
type used at runtime, e.g., ApprovalBackend) so it matches runtime behavior
(accepts presets like "safe"/"read_only", ApprovalConfig instances, and backend
instances). Ensure you update the signature symbols memory, knowledge, context,
autonomy, approval, and learn accordingly.
```python
# Tool output limit - maximum characters for tool output
# Default: 16000 chars (≈ 4000 tokens) to prevent runaway tool outputs
tool_output_limit: int = 16000
```
Validate tool_output_limit at config boundary.
Please fail fast for non-integer or non-positive values. Invalid values can produce broken truncation math downstream.
🔧 Proposed fix
```diff
 @dataclass
 class OutputConfig:
 @@
     tool_output_limit: int = 16000
+
+    def __post_init__(self) -> None:
+        if not isinstance(self.tool_output_limit, int) or self.tool_output_limit <= 0:
+            raise ValueError(
+                "OutputConfig.tool_output_limit must be a positive integer."
+            )
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
# Tool output limit - maximum characters for tool output
# Default: 16000 chars (≈ 4000 tokens) to prevent runaway tool outputs
tool_output_limit: int = 16000

def __post_init__(self) -> None:
    if not isinstance(self.tool_output_limit, int) or self.tool_output_limit <= 0:
        raise ValueError(
            "OutputConfig.tool_output_limit must be a positive integer."
        )
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/praisonai-agents/praisonaiagents/config/feature_configs.py` around lines
629 - 631, The tool_output_limit field currently allows invalid values; add
validation at the config model boundary (the class where tool_output_limit: int
= 16000 is defined) to fail fast on non-integer or non-positive inputs by
raising a clear ValueError; implement this using the model's validation hook
(e.g., a Pydantic `@validator`("tool_output_limit") or the model's
__post_init__/root_validator) to coerce/check type and ensure tool_output_limit
> 0 and include a descriptive error message referencing tool_output_limit.
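The suggested validator can be exercised as a stand-alone sketch. The real `OutputConfig` has many more fields; the explicit `bool` check below is an extra safeguard beyond the reviewer's diff, since `bool` is a subclass of `int` and would otherwise pass `True` as a limit of 1:

```python
from dataclasses import dataclass


@dataclass
class OutputConfig:
    # Default mirrors the PR: 16000 chars ≈ 4000 tokens.
    tool_output_limit: int = 16000

    def __post_init__(self) -> None:
        # Fail fast at the config boundary instead of producing broken
        # truncation math downstream.
        if (isinstance(self.tool_output_limit, bool)
                or not isinstance(self.tool_output_limit, int)
                or self.tool_output_limit <= 0):
            raise ValueError(
                "OutputConfig.tool_output_limit must be a positive integer."
            )


print(OutputConfig().tool_output_limit)  # 16000
for bad in (0, -5, "many", True):
    try:
        OutputConfig(tool_output_limit=bad)
    except ValueError:
        print(f"rejected: {bad!r}")
```

Because `__post_init__` runs on every construction, the same check also covers configs deserialized from YAML once they pass through the dataclass constructor.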
```python
for cli_field, yaml_field in field_mappings.items():
    if cli_field in agent_overrides:
```
Fix unused loop variable in field mapping loop.
yaml_field is declared but never used (Ruff B007).
🔧 Proposed fix
```diff
-    for cli_field, yaml_field in field_mappings.items():
+    for cli_field, _yaml_field in field_mappings.items():
```

🧰 Tools
🪛 Ruff (0.15.7)
[warning] 272-272: Loop control variable yaml_field not used within loop body
Rename unused yaml_field to _yaml_field
(B007)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/praisonai/praisonai/agents_generator.py` around lines 272 - 273, The loop
in agents_generator.py uses an unused variable yaml_field in "for cli_field,
yaml_field in field_mappings.items()"; change it to iterate only keys (e.g.,
"for cli_field in field_mappings:" or "for cli_field in field_mappings.keys():")
so yaml_field is removed, and keep the existing body that references cli_field
and agent_overrides unchanged (ensure no other code expects yaml_field).
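Both fixes silence B007 when the mapped value is unused; if the loop body later performs the actual CLI-to-YAML rename (the feature this mapping exists for, per the PR description), `.items()` becomes legitimate again. A self-contained sketch of that rename variant, with illustrative field names:

```python
# Hypothetical mapping mirroring the PR description:
# --trust -> approval, --guardrail -> guardrails
field_mappings = {"trust": "approval", "guardrail": "guardrails"}
agent_overrides = {"trust": True, "model": "gpt-4o-mini"}

for cli_field, yaml_field in field_mappings.items():
    if cli_field in agent_overrides:
        # Rename the CLI key to its YAML equivalent; yaml_field IS used here,
        # so Ruff B007 no longer fires.
        agent_overrides[yaml_field] = agent_overrides.pop(cli_field)

print(agent_overrides)  # {'model': 'gpt-4o-mini', 'approval': True}
```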
Agent-Logs-Url: https://github.com/MervinPraison/PraisonAI/sessions/908b1ad0-b9c1-4fda-a330-bba68180b90c

Co-authored-by: MervinPraison <454862+MervinPraison@users.noreply.github.com>
Here's the thorough review across all requested areas (commit 7cb2fa7 fixes a minor docstring issue found during review):

1. Bloat check ✅
2. Security ✅
3. Performance ✅
4. Tests
5. Backward compatibility ✅
6. Code quality — one issue fixed, one to note

Fixed (7cb2fa7): the minor docstring issue noted above.

To note: when both `trust` and an explicit approval config are supplied, the mapping loop overwrites the config:

```python
# If agent_overrides = {'trust': True, 'approval': SomeConfig}
# After loop: {'approval': True} ← SomeConfig is silently discarded
```

This is probably acceptable ("trust" implies full auto-approval), but documenting this precedence rule would avoid confusion. Also: the forward reference string annotations (e.g., …)

7. DRY / naming ✅
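The precedence behavior can be reproduced in isolation. Names here are illustrative, and the dict literal stands in for a real approval config object:

```python
field_mappings = {"trust": "approval"}  # CLI flag -> YAML field, per the PR
agent_overrides = {"trust": True, "approval": {"mode": "safe"}}  # both supplied

for cli_field, yaml_field in field_mappings.items():
    if cli_field in agent_overrides:
        # --trust wins: the explicit approval config is silently overwritten.
        agent_overrides[yaml_field] = agent_overrides.pop(cli_field)

print(agent_overrides)  # {'approval': True} — the dict config is gone
```

An alternative would be to skip the rename when the target key already holds an explicit value, or to raise on the conflict; either way the chosen rule should be documented.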
@claude verify and fix if any issues |
Claude finished @MervinPraison's task — View job

Fixed all critical issues identified by code reviewers ✅

I have successfully verified and fixed all 4 critical issues identified by the reviewers, following AGENTS.md architecture principles.

Todo List:
Fixes Implemented:

✅ Issue #1: Type hints exclude supported inputs
✅ Issue #2: YAML approval dict crashes
✅ Issue #3: Mixin missing tool_output_limit
✅ Issue #4: CLI flags not forwarded
✅ Additional fixes from CodeRabbit:
Architecture Compliance:

All fixes follow AGENTS.md principles:
Commit:
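Issue #3's "safe fallback" can be sketched independently of the codebase. The class and method names below are hypothetical; only `getattr`'s default argument is the point — agents created before `OutputConfig` gained `tool_output_limit` may lack the attribute entirely:

```python
DEFAULT_TOOL_OUTPUT_LIMIT = 16000  # default cited in the PR description


class ToolExecutionMixin:
    def _truncate(self, output: str) -> str:
        # getattr keeps the old default instead of raising AttributeError
        # when the attribute was never set.
        limit = getattr(self, "tool_output_limit", DEFAULT_TOOL_OUTPUT_LIMIT)
        return output if len(output) <= limit else output[:limit] + "...[truncated]"


class LegacyAgent(ToolExecutionMixin):
    pass  # no tool_output_limit attribute at all


class ConfiguredAgent(ToolExecutionMixin):
    def __init__(self, limit: int):
        self.tool_output_limit = limit


print(ConfiguredAgent(10)._truncate("x" * 50))  # capped at 10 chars
print(len(LegacyAgent()._truncate("x" * 20000)))  # capped at the default
```

This keeps pre-existing agent subclasses working unchanged, which matches the PR's backward-compatibility claim.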
- Fix type annotations in Agent.__init__ to include missing str, dict types
- Add missing TYPE_CHECKING imports for forward references
- Fix YAML approval dict handling to prevent runtime crashes
- Add safe fallback for ToolExecutionMixin.tool_output_limit
- Forward missing CLI flags (autonomy, guardrail, approval) to YAML config
- Add validation to OutputConfig.tool_output_limit
- Fix unused variable warning in field mappings loop

Resolves issues found by CodeRabbitAI, Qodo, Gemini, and Copilot reviewers.

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>