orka.cli package

OrKa CLI Package

Modular CLI architecture with backward compatibility.

This package provides a clean modular structure for the OrKa CLI while maintaining 100% backward compatibility with existing code. All functions that were previously available in orka.orka_cli are now properly organized into focused modules but remain accessible through the same import patterns.

Architecture Overview

Core Modules:

- types - Type definitions for events and payloads
- core - Core functionality including run_cli_entrypoint
- utils - Shared utilities like setup_logging
- parser - Command-line argument parsing logic

Command Modules:

- memory/ - Memory management commands (stats, cleanup, configure, watch)
- orchestrator/ - Orchestrator operations (run commands)

Backward Compatibility: All existing imports continue to work:

```python
# These imports still work exactly as before
from orka.orka_cli import run_cli_entrypoint, memory_stats, setup_logging

# Module usage also works
import orka.orka_cli
result = orka.orka_cli.run_cli_entrypoint(config, input_text)
```

Benefits of Modular Structure:

- Easier maintenance and testing
- Clear separation of concerns
- Improved code organization
- Extensible architecture for new features

async orka.cli.run_cli_entrypoint(config_path: str, input_text: str, log_to_file: bool = False) → Dict[str, Any] | List[Event] | str[source]

🚀 Primary programmatic entry point - run OrKa workflows from any application.

What makes this special:

- Universal Integration: Call OrKa from any Python application seamlessly
- Flexible Output: Returns structured data perfect for further processing
- Production Ready: Handles errors gracefully with comprehensive logging
- Development Friendly: Optional file logging for debugging workflows

Integration Patterns:

1. Simple Q&A Integration:

```python
result = await run_cli_entrypoint(
    "configs/qa_workflow.yml",
    "What is machine learning?",
    log_to_file=False,
)
# Returns: {"answer_agent": "Machine learning is..."}
```

2. Complex Workflow Integration:

```python
result = await run_cli_entrypoint(
    "configs/content_moderation.yml",
    user_generated_content,
    log_to_file=True,  # Debug complex workflows
)
# Returns: {"safety_check": True, "sentiment": "positive", "topics": ["tech"]}
```

3. Batch Processing Integration:

```python
results = []
for item in dataset:
    result = await run_cli_entrypoint(
        "configs/classifier.yml",
        item["text"],
        log_to_file=False,
    )
    results.append(result)
```

Return Value Intelligence:

- Dict: Agent outputs mapped by agent ID (most common)
- List: Complete event trace for debugging complex workflows
- String: Simple text output for basic workflows
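A minimal sketch of handling all three return shapes from application code; the config path and agent ID below are hypothetical placeholders, not names shipped with OrKa:

```python
import asyncio

from orka.cli import run_cli_entrypoint


async def answer(text: str):
    # "configs/qa_workflow.yml" and "answer_agent" are placeholder names
    result = await run_cli_entrypoint("configs/qa_workflow.yml", text)

    if isinstance(result, dict):
        # Most common case: agent outputs keyed by agent ID
        return result.get("answer_agent")
    if isinstance(result, list):
        # Full event trace: pull the human-readable messages for inspection
        return [event["payload"]["message"] for event in result]
    # Plain-string output from simple workflows
    return result


print(asyncio.run(answer("What is machine learning?")))
```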

Perfect for:

- Web applications needing AI capabilities
- Data processing pipelines with AI components
- Microservices requiring intelligent decision making
- Research applications with custom AI workflows
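For the web-application case, a hedged sketch of an async HTTP endpoint wrapping the entry point; FastAPI, Pydantic, and the config path are assumptions for illustration, not part of OrKa itself:

```python
from fastapi import FastAPI
from pydantic import BaseModel

from orka.cli import run_cli_entrypoint

app = FastAPI()


class Question(BaseModel):
    text: str


@app.post("/ask")
async def ask(question: Question):
    # "configs/qa_workflow.yml" is a placeholder path for your own workflow
    result = await run_cli_entrypoint("configs/qa_workflow.yml", question.text)
    return {"result": result}
```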

orka.cli.setup_logging(verbose: bool = False)[source]

Set up logging configuration.

orka.cli.memory_cleanup(args)[source]

Clean up expired memory entries.

orka.cli.memory_configure(args)[source]

Enhanced memory configuration with RedisStack testing.

orka.cli.memory_stats(args)[source]

Display memory usage statistics.

orka.cli.memory_watch(args)[source]

Modern TUI interface with Textual (default) or Rich fallback.

async orka.cli.run_orchestrator(args)[source]

Run the orchestrator with the given configuration.

orka.cli.create_parser()[source]

Create and configure the main argument parser.

orka.cli.setup_subcommands(parser)[source]

Set up all subcommands and their arguments.
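A hedged sketch of wiring these two helpers together. The subcommand and flag names in the example invocation are assumptions based on the command modules listed above; the actual CLI layout may differ:

```python
from orka.cli import create_parser, setup_subcommands

# Build the main parser and register all subcommands on it
parser = create_parser()
setup_subcommands(parser)

# Hypothetical invocation; real subcommand names may differ
args = parser.parse_args(["memory", "stats"])
print(args)
```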

class orka.cli.Event[source]

Bases: TypedDict

🎯 Complete event record - comprehensive tracking of orchestration activities.

Purpose: Captures complete context for every action in your AI workflow, providing full traceability and enabling sophisticated monitoring and debugging.

Event Lifecycle:

1. Creation: Agent generates event with rich context
2. Processing: Event flows through orchestration pipeline
3. Storage: Event persisted to memory for future analysis
4. Analysis: Event used for monitoring, debugging, and optimization

Fields:

- agent_id: Which agent generated this event
- event_type: What type of action occurred
- timestamp: Precise timing for performance analysis
- payload: Rich event data with status and context
- run_id: Links events across a single workflow execution
- step: Sequential ordering within the workflow

agent_id: str
event_type: str
timestamp: str
payload: EventPayload
run_id: str | None
step: int | None

class orka.cli.EventPayload[source]

Bases: TypedDict

📊 Event payload structure - standardized data format for orchestration events.

Purpose: Provides consistent structure for all events flowing through OrKa workflows, enabling reliable monitoring, debugging, and analytics across complex AI systems.

Fields:

- message: Human-readable description of what happened
- status: Machine-readable status for automated processing
- data: Rich structured data for detailed analysis and debugging

message: str
status: str
data: Dict[str, Any] | None
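Since both types are TypedDicts, they are constructed as plain dictionaries. A small illustrative sketch with made-up values for every field:

```python
from orka.cli import Event, EventPayload

payload: EventPayload = {
    "message": "Answer generated successfully",  # human-readable description
    "status": "success",                         # machine-readable status
    "data": {"tokens_used": 128},                # optional structured detail
}

event: Event = {
    "agent_id": "answer_agent",           # which agent produced the event
    "event_type": "agent.completed",      # what kind of action occurred (example value)
    "timestamp": "2024-01-01T00:00:00Z",  # precise timing for analysis
    "payload": payload,
    "run_id": "run-123",                  # links events within one workflow run
    "step": 1,                            # sequential position in the workflow
}
```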

Subpackages

Submodules