API Reference Overview

The Veotools API is organized into several modules, each handling specific aspects of video generation and processing.

Complete API Documentation

The following sections provide auto-generated documentation from the source code docstrings.

Core Module

veotools.core

CLASS DESCRIPTION
VeoClient

Singleton client for Google GenAI API interactions.

StorageManager

Manage output directories for videos, frames, and temp files.

ProgressTracker

Track and report progress for long-running operations.

ModelConfig

Configuration and capabilities for different Veo video generation models.

Classes

VeoClient

VeoClient()

Singleton client for Google GenAI API interactions.

This class implements a singleton pattern to ensure only one client instance is created throughout the application lifecycle. It manages the authentication and connection to Google's Generative AI API.

ATTRIBUTE DESCRIPTION
client

The underlying Google GenAI client instance.

RAISES DESCRIPTION
ValueError

If GEMINI_API_KEY environment variable is not set.

Examples:

>>> client = VeoClient()
>>> api_client = client.client
>>> # Use api_client for API calls

Initialize the GenAI client with API key from environment.

The client is only initialized once, even if __init__ is called multiple times.

RAISES DESCRIPTION
ValueError

If GEMINI_API_KEY is not found in environment variables.

METHOD DESCRIPTION
__new__

Create or return the singleton instance.

Source code in src/veotools/core.py
def __init__(self):
    """Initialize the GenAI client with API key from environment.

    The client is only initialized once, even if __init__ is called multiple times.

    Raises:
        ValueError: If GEMINI_API_KEY is not found in environment variables.
    """
    if self._client is None:
        api_key = os.getenv('GEMINI_API_KEY')
        if not api_key:
            raise ValueError("GEMINI_API_KEY not found in .env file")
        self._client = genai.Client(api_key=api_key)
Attributes
client property
client

Get the Google GenAI client instance.

RETURNS DESCRIPTION

genai.Client: The initialized GenAI client.

Functions
__new__
__new__()

Create or return the singleton instance.

RETURNS DESCRIPTION
VeoClient

The singleton VeoClient instance.

Source code in src/veotools/core.py
def __new__(cls):
    """Create or return the singleton instance.

    Returns:
        VeoClient: The singleton VeoClient instance.
    """
    if cls._instance is None:
        cls._instance = super().__new__(cls)
    return cls._instance
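
Because __new__ always returns the cached instance, repeated constructions share a single underlying GenAI client. A minimal sketch (assumes GEMINI_API_KEY is set in the environment):

from veotools.core import VeoClient

# Both constructions resolve to the same singleton, so only one
# genai.Client is ever created for the process.
a = VeoClient()
b = VeoClient()
assert a is b
assert a.client is b.client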

StorageManager

StorageManager(base_path: Optional[str] = None)

Manage output directories for videos, frames, and temp files.

Default resolution order for the base path:

1. VEO_OUTPUT_DIR environment variable (if set)
2. Current working directory (./output)
3. Package-adjacent directory (../output) as a last resort

An explicit base_path argument takes precedence over all of the above.

METHOD DESCRIPTION
get_video_path

Get the full path for a video file.

get_frame_path

Get the full path for a frame image file.

get_temp_path

Get the full path for a temporary file.

cleanup_temp

Remove all files from the temporary directory.

get_url

Convert a file path to a file:// URL.

Source code in src/veotools/core.py
def __init__(self, base_path: Optional[str] = None):
    """Manage output directories for videos, frames, and temp files.

    Default resolution order for base path:
    1. VEO_OUTPUT_DIR environment variable (if set)
    2. Current working directory (./output)
    3. Package-adjacent directory (../output) as a last resort
    """
    resolved_base: Path

    # 1) Environment override
    env_base = os.getenv("VEO_OUTPUT_DIR")
    if base_path:
        resolved_base = Path(base_path)
    elif env_base:
        resolved_base = Path(env_base)
    else:
        # 2) Prefer CWD/output for installed packages (CLI/scripts)
        cwd_candidate = Path.cwd() / "output"
        try:
            cwd_candidate.mkdir(parents=True, exist_ok=True)
            resolved_base = cwd_candidate
        except Exception:
            # 3) As a last resort, place beside the installed package
            try:
                package_root = Path(__file__).resolve().parents[2]
                candidate = package_root / "output"
                candidate.mkdir(parents=True, exist_ok=True)
                resolved_base = candidate
            except Exception:
                # Final fallback: user home
                resolved_base = Path.home() / "output"

    self.base_path = resolved_base
    self.base_path.mkdir(parents=True, exist_ok=True)

    self.videos_dir = self.base_path / "videos"
    self.frames_dir = self.base_path / "frames"
    self.temp_dir = self.base_path / "temp"

    for dir_path in [self.videos_dir, self.frames_dir, self.temp_dir]:
        dir_path.mkdir(parents=True, exist_ok=True)
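
A short sketch of the resolution order; the paths below are illustrative only:

import os
from veotools.core import StorageManager

# An explicit argument wins over the VEO_OUTPUT_DIR override.
os.environ["VEO_OUTPUT_DIR"] = "/tmp/veo_env_output"      # illustrative path
explicit = StorageManager(base_path="/tmp/veo_explicit")  # illustrative path
print(explicit.base_path)   # /tmp/veo_explicit

# Without an argument, the environment variable is used next.
from_env = StorageManager()
print(from_env.base_path)   # /tmp/veo_env_output
print(from_env.videos_dir)  # /tmp/veo_env_output/videos
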
Functions
get_video_path
get_video_path(filename: str) -> Path

Get the full path for a video file.

PARAMETER DESCRIPTION
filename

Name of the video file.

TYPE: str

RETURNS DESCRIPTION
Path

Full path to the video file in the videos directory.

TYPE: Path

Examples:

>>> manager = StorageManager()
>>> path = manager.get_video_path("output.mp4")
>>> print(path)  # /path/to/output/videos/output.mp4
Source code in src/veotools/core.py
def get_video_path(self, filename: str) -> Path:
    """Get the full path for a video file.

    Args:
        filename: Name of the video file.

    Returns:
        Path: Full path to the video file in the videos directory.

    Examples:
        >>> manager = StorageManager()
        >>> path = manager.get_video_path("output.mp4")
        >>> print(path)  # /path/to/output/videos/output.mp4
    """
    return self.videos_dir / filename
get_frame_path
get_frame_path(filename: str) -> Path

Get the full path for a frame image file.

PARAMETER DESCRIPTION
filename

Name of the frame file.

TYPE: str

RETURNS DESCRIPTION
Path

Full path to the frame file in the frames directory.

TYPE: Path

Examples:

>>> manager = StorageManager()
>>> path = manager.get_frame_path("frame_001.jpg")
>>> print(path)  # /path/to/output/frames/frame_001.jpg
Source code in src/veotools/core.py
def get_frame_path(self, filename: str) -> Path:
    """Get the full path for a frame image file.

    Args:
        filename: Name of the frame file.

    Returns:
        Path: Full path to the frame file in the frames directory.

    Examples:
        >>> manager = StorageManager()
        >>> path = manager.get_frame_path("frame_001.jpg")
        >>> print(path)  # /path/to/output/frames/frame_001.jpg
    """
    return self.frames_dir / filename
get_temp_path
get_temp_path(filename: str) -> Path

Get the full path for a temporary file.

PARAMETER DESCRIPTION
filename

Name of the temporary file.

TYPE: str

RETURNS DESCRIPTION
Path

Full path to the file in the temp directory.

TYPE: Path

Examples:

>>> manager = StorageManager()
>>> path = manager.get_temp_path("processing.tmp")
>>> print(path)  # /path/to/output/temp/processing.tmp
Source code in src/veotools/core.py
def get_temp_path(self, filename: str) -> Path:
    """Get the full path for a temporary file.

    Args:
        filename: Name of the temporary file.

    Returns:
        Path: Full path to the file in the temp directory.

    Examples:
        >>> manager = StorageManager()
        >>> path = manager.get_temp_path("processing.tmp")
        >>> print(path)  # /path/to/output/temp/processing.tmp
    """
    return self.temp_dir / filename
cleanup_temp
cleanup_temp()

Remove all files from the temporary directory.

This method safely removes all files in the temp directory while preserving the directory structure. Errors during deletion are silently ignored.

Examples:

>>> manager = StorageManager()
>>> manager.cleanup_temp()
>>> # All temp files are now deleted
Source code in src/veotools/core.py
def cleanup_temp(self):
    """Remove all files from the temporary directory.

    This method safely removes all files in the temp directory while preserving
    the directory structure. Errors during deletion are silently ignored.

    Examples:
        >>> manager = StorageManager()
        >>> manager.cleanup_temp()
        >>> # All temp files are now deleted
    """
    for file in self.temp_dir.glob("*"):
        try:
            file.unlink()
        except Exception:
            pass
get_url
get_url(path: Path) -> Optional[str]

Convert a file path to a file:// URL.

PARAMETER DESCRIPTION
path

Path to the file.

TYPE: Path

RETURNS DESCRIPTION
Optional[str]

Optional[str]: File URL if the file exists, None otherwise.

Examples:

>>> manager = StorageManager()
>>> video_path = manager.get_video_path("test.mp4")
>>> url = manager.get_url(video_path)
>>> print(url)  # file:///absolute/path/to/output/videos/test.mp4
Source code in src/veotools/core.py
def get_url(self, path: Path) -> Optional[str]:
    """Convert a file path to a file:// URL.

    Args:
        path: Path to the file.

    Returns:
        Optional[str]: File URL if the file exists, None otherwise.

    Examples:
        >>> manager = StorageManager()
        >>> video_path = manager.get_video_path("test.mp4")
        >>> url = manager.get_url(video_path)
        >>> print(url)  # file:///absolute/path/to/output/videos/test.mp4
    """
    if path.exists():
        return f"file://{path.absolute()}"
    return None

ProgressTracker

ProgressTracker(callback: Optional[Callable] = None)

Track and report progress for long-running operations.

This class provides a simple interface for tracking progress updates during video generation and processing operations. It supports custom callbacks or falls back to logging.

ATTRIBUTE DESCRIPTION
callback

Function to call with progress updates.

current_progress

Current progress percentage (0-100).

logger

Logger instance for default progress reporting.

Examples:

>>> def my_callback(msg: str, pct: int):
...     print(f"{msg}: {pct}%")
>>> tracker = ProgressTracker(callback=my_callback)
>>> tracker.start("Processing")
>>> tracker.update("Halfway", 50)
>>> tracker.complete("Done")

Initialize the progress tracker.

PARAMETER DESCRIPTION
callback

Optional callback function that receives (message, percent). If not provided, uses default logging.

TYPE: Optional[Callable] DEFAULT: None

METHOD DESCRIPTION
default_progress

Default progress callback that logs to the logger.

update

Update progress and trigger callback.

start

Mark the start of an operation (0% progress).

complete

Mark the completion of an operation (100% progress).

Source code in src/veotools/core.py
def __init__(self, callback: Optional[Callable] = None):
    """Initialize the progress tracker.

    Args:
        callback: Optional callback function that receives (message, percent).
                 If not provided, uses default logging.
    """
    self.callback = callback or self.default_progress
    self.current_progress = 0
    self.logger = logging.getLogger(__name__)
Functions
default_progress
default_progress(message: str, percent: int)

Default progress callback that logs to the logger.

PARAMETER DESCRIPTION
message

Progress message.

TYPE: str

percent

Progress percentage.

TYPE: int

Source code in src/veotools/core.py
def default_progress(self, message: str, percent: int):
    """Default progress callback that logs to the logger.

    Args:
        message: Progress message.
        percent: Progress percentage.
    """
    self.logger.info(f"{message}: {percent}%")
update
update(message: str, percent: int)

Update progress and trigger callback.

PARAMETER DESCRIPTION
message

Progress message to display.

TYPE: str

percent

Current progress percentage (0-100).

TYPE: int

Source code in src/veotools/core.py
def update(self, message: str, percent: int):
    """Update progress and trigger callback.

    Args:
        message: Progress message to display.
        percent: Current progress percentage (0-100).
    """
    self.current_progress = percent
    self.callback(message, percent)
start
start(message: str = 'Starting')

Mark the start of an operation (0% progress).

PARAMETER DESCRIPTION
message

Starting message, defaults to "Starting".

TYPE: str DEFAULT: 'Starting'

Source code in src/veotools/core.py
def start(self, message: str = "Starting"):
    """Mark the start of an operation (0% progress).

    Args:
        message: Starting message, defaults to "Starting".
    """
    self.update(message, 0)
complete
complete(message: str = 'Complete')

Mark the completion of an operation (100% progress).

PARAMETER DESCRIPTION
message

Completion message, defaults to "Complete".

TYPE: str DEFAULT: 'Complete'

Source code in src/veotools/core.py
def complete(self, message: str = "Complete"):
    """Mark the completion of an operation (100% progress).

    Args:
        message: Completion message, defaults to "Complete".
    """
    self.update(message, 100)
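
Putting the methods together: a tracker constructed without a callback reports through the standard logging module. A minimal sketch with hypothetical work items:

import logging
from veotools.core import ProgressTracker

logging.basicConfig(level=logging.INFO)

tracker = ProgressTracker()  # no callback: default_progress logs via logger.info
items = ["clip_a", "clip_b", "clip_c"]  # hypothetical work items
tracker.start("Processing")
for i, item in enumerate(items, 1):
    # ... do the real work for `item` here ...
    tracker.update(f"Processed {item}", int(i / len(items) * 100))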

ModelConfig

Configuration and capabilities for different Veo video generation models.

This class manages model-specific configurations and builds generation configs based on model capabilities. It handles feature availability, parameter validation, and safety settings.

ATTRIBUTE DESCRIPTION
MODELS

Dictionary of available models and their configurations.

METHOD DESCRIPTION
get_config

Get configuration for a specific model.

build_generation_config

Build a generation configuration based on model capabilities.

Functions
get_config classmethod
get_config(model: str) -> dict

Get configuration for a specific model.

PARAMETER DESCRIPTION
model

Model identifier (with or without "models/" prefix).

TYPE: str

RETURNS DESCRIPTION
dict

Model configuration dictionary containing capabilities and defaults.

TYPE: dict

Examples:

>>> config = ModelConfig.get_config("veo-3.0-fast-generate-preview")
>>> print(config["name"])  # "Veo 3.0 Fast"
>>> print(config["supports_duration"])  # False
Source code in src/veotools/core.py
@classmethod
def get_config(cls, model: str) -> dict:
    """Get configuration for a specific model.

    Args:
        model: Model identifier (with or without "models/" prefix).

    Returns:
        dict: Model configuration dictionary containing capabilities and defaults.

    Examples:
        >>> config = ModelConfig.get_config("veo-3.0-fast-generate-preview")
        >>> print(config["name"])  # "Veo 3.0 Fast"
        >>> print(config["supports_duration"])  # False
    """
    if model.startswith("models/"):
        model = model.replace("models/", "")

    return cls.MODELS.get(model, cls.MODELS["veo-3.0-fast-generate-preview"])
build_generation_config classmethod
build_generation_config(model: str, **kwargs) -> GenerateVideosConfig

Build a generation configuration based on model capabilities.

This method creates a GenerateVideosConfig object with parameters appropriate for the specified model. It validates parameters against model capabilities and handles safety settings.

PARAMETER DESCRIPTION
model

Model identifier to use for generation.

TYPE: str

**kwargs

Generation parameters including:

- number_of_videos: Number of videos to generate (default: 1)
- duration_seconds: Video duration (if supported by model)
- enhance_prompt: Whether to enhance the prompt (if supported)
- fps: Frames per second (if supported)
- aspect_ratio: Video aspect ratio (e.g., "16:9")
- negative_prompt: Negative prompt for generation
- person_generation: Person generation setting
- safety_settings: List of safety settings
- cached_content: Cached content handle

DEFAULT: {}

RETURNS DESCRIPTION
GenerateVideosConfig

types.GenerateVideosConfig: Configuration object for video generation.

RAISES DESCRIPTION
ValueError

If aspect_ratio is not supported by the model.

Examples:

>>> config = ModelConfig.build_generation_config(
...     "veo-3.0-fast-generate-preview",
...     number_of_videos=2,
...     aspect_ratio="16:9"
... )
Source code in src/veotools/core.py
@classmethod
def build_generation_config(cls, model: str, **kwargs) -> types.GenerateVideosConfig:
    """Build a generation configuration based on model capabilities.

    This method creates a GenerateVideosConfig object with parameters
    appropriate for the specified model. It validates parameters against
    model capabilities and handles safety settings.

    Args:
        model: Model identifier to use for generation.
        **kwargs: Generation parameters including:
            - number_of_videos: Number of videos to generate (default: 1)
            - duration_seconds: Video duration (if supported by model)
            - enhance_prompt: Whether to enhance the prompt (if supported)
            - fps: Frames per second (if supported)
            - aspect_ratio: Video aspect ratio (e.g., "16:9")
            - negative_prompt: Negative prompt for generation
            - person_generation: Person generation setting
            - safety_settings: List of safety settings
            - cached_content: Cached content handle

    Returns:
        types.GenerateVideosConfig: Configuration object for video generation.

    Raises:
        ValueError: If aspect_ratio is not supported by the model.

    Examples:
        >>> config = ModelConfig.build_generation_config(
        ...     "veo-3.0-fast-generate-preview",
        ...     number_of_videos=2,
        ...     aspect_ratio="16:9"
        ... )
    """
    config = cls.get_config(model)

    params = {
        "number_of_videos": kwargs.get("number_of_videos", 1)
    }

    if config["supports_duration"] and "duration_seconds" in kwargs:
        params["duration_seconds"] = kwargs["duration_seconds"]

    if config["supports_enhance"]:
        params["enhance_prompt"] = kwargs.get("enhance_prompt", False)

    if config["supports_fps"] and "fps" in kwargs:
        params["fps"] = kwargs["fps"]

    # Aspect ratio (e.g., "16:9"; Veo 3 limited to 16:9; Veo 2 supports 16:9 and 9:16)
    if config.get("supports_aspect_ratio") and "aspect_ratio" in kwargs and kwargs["aspect_ratio"]:
        ar = str(kwargs["aspect_ratio"])  # normalize
        model_key = model.replace("models/", "")
        if model_key.startswith("veo-3.0"):
            allowed = {"16:9"}
        elif model_key.startswith("veo-2.0"):
            allowed = {"16:9", "9:16"}
        else:
            allowed = {"16:9"}
        if ar not in allowed:
            raise ValueError(f"aspect_ratio '{ar}' not supported for model '{model_key}'. Allowed: {sorted(allowed)}")
        params["aspect_ratio"] = ar

    # Docs-backed pass-throughs
    if "negative_prompt" in kwargs and kwargs["negative_prompt"]:
        params["negative_prompt"] = kwargs["negative_prompt"]

    if "person_generation" in kwargs and kwargs["person_generation"]:
        # Person generation options vary by model/region; pass through as provided
        params["person_generation"] = kwargs["person_generation"]

    # Safety settings (optional, SDK >= 1.30.0 for some modalities). Accept either
    # a list of dicts {category, threshold} or already-constructed types.SafetySetting.
    safety_settings = kwargs.get("safety_settings")
    if safety_settings:
        normalized: list = []
        for item in safety_settings:
            try:
                if hasattr(item, "category") and hasattr(item, "threshold"):
                    normalized.append(item)
                elif isinstance(item, dict):
                    normalized.append(types.SafetySetting(
                        category=item.get("category"),
                        threshold=item.get("threshold"),
                    ))
            except Exception:
                # Ignore malformed entries
                continue
        if normalized:
            params["safety_settings"] = normalized

    # Cached content handle (best-effort pass-through if supported)
    if "cached_content" in kwargs and kwargs["cached_content"]:
        params["cached_content"] = kwargs["cached_content"]

    # Construct config, dropping unknown fields if the SDK doesn't support them
    try:
        return types.GenerateVideosConfig(**params)
    except TypeError:
        # Remove optional fields that may not be recognized by this client version
        for optional_key in ["safety_settings", "cached_content"]:
            params.pop(optional_key, None)
        return types.GenerateVideosConfig(**params)
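
A sketch of the normalization and fallback behavior; the model id, safety category, and threshold strings below are illustrative assumptions, not confirmed values:

from veotools.core import ModelConfig

# Safety settings may be passed as plain dicts; build_generation_config
# normalizes them into types.SafetySetting objects. If this SDK version's
# GenerateVideosConfig rejects the optional fields, they are dropped and
# the config is rebuilt without them.
config = ModelConfig.build_generation_config(
    "veo-2.0-generate-001",  # illustrative id (unknown ids fall back to the default config)
    aspect_ratio="9:16",     # 9:16 is allowed for veo-2.0 models
    negative_prompt="blurry, low quality",
    safety_settings=[
        {"category": "HARM_CATEGORY_HARASSMENT",  # illustrative category
         "threshold": "BLOCK_ONLY_HIGH"},         # illustrative threshold
    ],
)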

Models Module

veotools.models

CLASS DESCRIPTION
JobStatus

Enumeration of possible job statuses for video generation tasks.

VideoMetadata

Metadata information for a video file.

VideoResult

Result object for video generation operations.

WorkflowStep

Individual step in a video processing workflow.

Workflow

Container for a multi-step video processing workflow.

Classes

JobStatus

Bases: Enum

Enumeration of possible job statuses for video generation tasks.

ATTRIBUTE DESCRIPTION
PENDING

Job has been created but not yet started.

PROCESSING

Job is currently being processed.

COMPLETE

Job has finished successfully.

FAILED

Job has failed with an error.

VideoMetadata

VideoMetadata(fps: float = 24.0, duration: float = 0.0, width: int = 0, height: int = 0)

Metadata information for a video file.

ATTRIBUTE DESCRIPTION
fps

Frames per second of the video.

duration

Duration of the video in seconds.

width

Width of the video in pixels.

height

Height of the video in pixels.

frame_count

Total number of frames in the video.

Examples:

>>> metadata = VideoMetadata(fps=30.0, duration=10.0, width=1920, height=1080)
>>> print(metadata.frame_count)  # 300
>>> print(metadata.to_dict())

Initialize video metadata.

PARAMETER DESCRIPTION
fps

Frames per second (default: 24.0).

TYPE: float DEFAULT: 24.0

duration

Video duration in seconds (default: 0.0).

TYPE: float DEFAULT: 0.0

width

Video width in pixels (default: 0).

TYPE: int DEFAULT: 0

height

Video height in pixels (default: 0).

TYPE: int DEFAULT: 0

METHOD DESCRIPTION
to_dict

Convert metadata to a dictionary.

Source code in src/veotools/models.py
def __init__(self, fps: float = 24.0, duration: float = 0.0, 
             width: int = 0, height: int = 0):
    """Initialize video metadata.

    Args:
        fps: Frames per second (default: 24.0).
        duration: Video duration in seconds (default: 0.0).
        width: Video width in pixels (default: 0).
        height: Video height in pixels (default: 0).
    """
    self.fps = fps
    self.duration = duration
    self.width = width
    self.height = height
    self.frame_count = int(fps * duration) if duration > 0 else 0
Functions
to_dict
to_dict() -> Dict[str, Any]

Convert metadata to a dictionary.

RETURNS DESCRIPTION
Dict[str, Any]

Dict[str, Any]: Dictionary containing all metadata fields.

Source code in src/veotools/models.py
def to_dict(self) -> Dict[str, Any]:
    """Convert metadata to a dictionary.

    Returns:
        Dict[str, Any]: Dictionary containing all metadata fields.
    """
    return {
        "fps": self.fps,
        "duration": self.duration,
        "width": self.width,
        "height": self.height,
        "frame_count": self.frame_count
    }

VideoResult

VideoResult(path: Optional[Path] = None, operation_id: Optional[str] = None)

Result object for video generation operations.

This class encapsulates all information about a video generation task, including its status, progress, metadata, and any errors.

ATTRIBUTE DESCRIPTION
id

Unique identifier for this result.

path

Path to the generated video file.

url

URL to access the video (if available).

operation_id

Google API operation ID for tracking.

status

Current status of the generation job.

progress

Progress percentage (0-100).

metadata

Video metadata (fps, duration, resolution).

prompt

Text prompt used for generation.

model

Model used for generation.

error

Error information if generation failed.

created_at

Timestamp when the job was created.

completed_at

Timestamp when the job completed.

Examples:

>>> result = VideoResult()
>>> result.update_progress("Generating", 50)
>>> print(result.status)  # JobStatus.PROCESSING
>>> result.update_progress("Complete", 100)
>>> print(result.status)  # JobStatus.COMPLETE

Initialize a video result.

PARAMETER DESCRIPTION
path

Optional path to the video file.

TYPE: Optional[Path] DEFAULT: None

operation_id

Optional Google API operation ID.

TYPE: Optional[str] DEFAULT: None

METHOD DESCRIPTION
to_dict

Convert the result to a JSON-serializable dictionary.

update_progress

Update the progress of the video generation.

mark_failed

Mark the job as failed with an error.

Source code in src/veotools/models.py
def __init__(self, path: Optional[Path] = None, operation_id: Optional[str] = None):
    """Initialize a video result.

    Args:
        path: Optional path to the video file.
        operation_id: Optional Google API operation ID.
    """
    self.id = str(uuid4())
    self.path = path
    self.url = None
    self.operation_id = operation_id
    self.status = JobStatus.PENDING
    self.progress = 0
    self.metadata = VideoMetadata()
    self.prompt = None
    self.model = None
    self.error = None
    self.created_at = datetime.now()
    self.completed_at = None
Functions
to_dict
to_dict() -> Dict[str, Any]

Convert the result to a JSON-serializable dictionary.

RETURNS DESCRIPTION
Dict[str, Any]

Dict[str, Any]: Dictionary representation of the video result.

Source code in src/veotools/models.py
def to_dict(self) -> Dict[str, Any]:
    """Convert the result to a JSON-serializable dictionary.

    Returns:
        Dict[str, Any]: Dictionary representation of the video result.
    """
    return {
        "id": self.id,
        "path": str(self.path) if self.path else None,
        "url": self.url,
        "operation_id": self.operation_id,
        "status": self.status.value,
        "progress": self.progress,
        "metadata": self.metadata.to_dict(),
        "prompt": self.prompt,
        "model": self.model,
        "error": str(self.error) if self.error else None,
        "created_at": self.created_at.isoformat(),
        "completed_at": self.completed_at.isoformat() if self.completed_at else None
    }
update_progress
update_progress(message: str, percent: int)

Update the progress of the video generation.

Automatically updates the status based on progress:

- 0%: PENDING
- 1-99%: PROCESSING
- 100%: COMPLETE

PARAMETER DESCRIPTION
message

Progress message (currently unused but kept for API compatibility).

TYPE: str

percent

Progress percentage (0-100).

TYPE: int

Source code in src/veotools/models.py
def update_progress(self, message: str, percent: int):
    """Update the progress of the video generation.

    Automatically updates the status based on progress:
    - 0%: PENDING
    - 1-99%: PROCESSING
    - 100%: COMPLETE

    Args:
        message: Progress message (currently unused but kept for API compatibility).
        percent: Progress percentage (0-100).
    """
    self.progress = percent
    if percent >= 100:
        self.status = JobStatus.COMPLETE
        self.completed_at = datetime.now()
    elif percent > 0:
        self.status = JobStatus.PROCESSING
mark_failed
mark_failed(error: Exception)

Mark the job as failed with an error.

PARAMETER DESCRIPTION
error

The exception that caused the failure.

TYPE: Exception

Source code in src/veotools/models.py
def mark_failed(self, error: Exception):
    """Mark the job as failed with an error.

    Args:
        error: The exception that caused the failure.
    """
    self.status = JobStatus.FAILED
    self.error = error
    self.completed_at = datetime.now()
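
A typical pattern is to wrap the generation call and record any failure on the result. A minimal sketch with a stand-in error:

from veotools.models import VideoResult, JobStatus

result = VideoResult()
try:
    raise RuntimeError("generation failed")  # stand-in for a real API call
except Exception as e:
    result.mark_failed(e)

assert result.status is JobStatus.FAILED
print(result.to_dict()["error"])  # "generation failed"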

WorkflowStep

WorkflowStep(action: str, params: Dict[str, Any])

Individual step in a video processing workflow.

ATTRIBUTE DESCRIPTION
id

Unique identifier for this step.

action

Action to perform (e.g., "generate", "stitch").

params

Parameters for the action.

result

Result of executing this step.

created_at

Timestamp when the step was created.

Initialize a workflow step.

PARAMETER DESCRIPTION
action

The action to perform.

TYPE: str

params

Parameters for the action.

TYPE: Dict[str, Any]

METHOD DESCRIPTION
to_dict

Convert the step to a dictionary.

Source code in src/veotools/models.py
def __init__(self, action: str, params: Dict[str, Any]):
    """Initialize a workflow step.

    Args:
        action: The action to perform.
        params: Parameters for the action.
    """
    self.id = str(uuid4())
    self.action = action
    self.params = params
    self.result = None
    self.created_at = datetime.now()
Functions
to_dict
to_dict() -> Dict[str, Any]

Convert the step to a dictionary.

RETURNS DESCRIPTION
Dict[str, Any]

Dict[str, Any]: Dictionary representation of the workflow step.

Source code in src/veotools/models.py
def to_dict(self) -> Dict[str, Any]:
    """Convert the step to a dictionary.

    Returns:
        Dict[str, Any]: Dictionary representation of the workflow step.
    """
    return {
        "id": self.id,
        "action": self.action,
        "params": self.params,
        "result": self.result.to_dict() if self.result else None,
        "created_at": self.created_at.isoformat()
    }

Workflow

Workflow(name: Optional[str] = None)

Container for a multi-step video processing workflow.

Workflows allow chaining multiple operations like generation, stitching, and processing into a single managed flow.

ATTRIBUTE DESCRIPTION
id

Unique identifier for this workflow.

name

Human-readable name for the workflow.

steps

List of workflow steps to execute.

TYPE: List[WorkflowStep]

current_step

Index of the currently executing step.

created_at

Timestamp when the workflow was created.

updated_at

Timestamp of the last update.

Examples:

>>> workflow = Workflow("my_video_project")
>>> workflow.add_step("generate", {"prompt": "sunset"})
>>> workflow.add_step("stitch", {"videos": ["a.mp4", "b.mp4"]})
>>> print(len(workflow.steps))  # 2

Initialize a workflow.

PARAMETER DESCRIPTION
name

Optional name for the workflow. If not provided, generates a timestamp-based name.

TYPE: Optional[str] DEFAULT: None

METHOD DESCRIPTION
add_step

Add a new step to the workflow.

to_dict

Convert the workflow to a dictionary.

from_dict

Create a workflow from a dictionary.

Source code in src/veotools/models.py
def __init__(self, name: Optional[str] = None):
    """Initialize a workflow.

    Args:
        name: Optional name for the workflow. If not provided,
             generates a timestamp-based name.
    """
    self.id = str(uuid4())
    self.name = name or f"workflow_{datetime.now().strftime('%Y%m%d_%H%M%S')}"
    self.steps: List[WorkflowStep] = []
    self.current_step = 0
    self.created_at = datetime.now()
    self.updated_at = datetime.now()
Functions
add_step
add_step(action: str, params: Dict[str, Any]) -> WorkflowStep

Add a new step to the workflow.

PARAMETER DESCRIPTION
action

The action to perform.

TYPE: str

params

Parameters for the action.

TYPE: Dict[str, Any]

RETURNS DESCRIPTION
WorkflowStep

The created workflow step.

TYPE: WorkflowStep

Source code in src/veotools/models.py
def add_step(self, action: str, params: Dict[str, Any]) -> WorkflowStep:
    """Add a new step to the workflow.

    Args:
        action: The action to perform.
        params: Parameters for the action.

    Returns:
        WorkflowStep: The created workflow step.
    """
    step = WorkflowStep(action, params)
    self.steps.append(step)
    self.updated_at = datetime.now()
    return step
to_dict
to_dict() -> Dict[str, Any]

Convert the workflow to a dictionary.

RETURNS DESCRIPTION
Dict[str, Any]

Dict[str, Any]: Dictionary representation of the workflow.

Source code in src/veotools/models.py
def to_dict(self) -> Dict[str, Any]:
    """Convert the workflow to a dictionary.

    Returns:
        Dict[str, Any]: Dictionary representation of the workflow.
    """
    return {
        "id": self.id,
        "name": self.name,
        "steps": [step.to_dict() for step in self.steps],
        "current_step": self.current_step,
        "created_at": self.created_at.isoformat(),
        "updated_at": self.updated_at.isoformat()
    }
from_dict classmethod
from_dict(data: Dict[str, Any]) -> Workflow

Create a workflow from a dictionary.

PARAMETER DESCRIPTION
data

Dictionary containing workflow data.

TYPE: Dict[str, Any]

RETURNS DESCRIPTION
Workflow

Reconstructed workflow instance.

TYPE: Workflow

Examples:

>>> data = {"id": "123", "name": "test", "current_step": 2}
>>> workflow = Workflow.from_dict(data)
>>> print(workflow.name)  # "test"
Source code in src/veotools/models.py
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'Workflow':
    """Create a workflow from a dictionary.

    Args:
        data: Dictionary containing workflow data.

    Returns:
        Workflow: Reconstructed workflow instance.

    Examples:
        >>> data = {"id": "123", "name": "test", "current_step": 2}
        >>> workflow = Workflow.from_dict(data)
        >>> print(workflow.name)  # "test"
    """
    workflow = cls(name=data.get("name"))
    workflow.id = data["id"]
    workflow.current_step = data.get("current_step", 0)
    return workflow
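
Note that from_dict restores only the id, name, and current_step fields; serialized steps are not reconstructed. A quick round-trip sketch:

from veotools.models import Workflow

wf = Workflow("demo")
wf.add_step("generate", {"prompt": "sunset"})

restored = Workflow.from_dict(wf.to_dict())
print(restored.name)          # "demo"
print(restored.current_step)  # 0
print(len(restored.steps))    # 0 -- steps are not round-tripped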

Video Processing Module

veotools.process.extractor

Frame extraction and video info utilities for Veo Tools.

Enhancements:

- get_video_info now first attempts to use ffprobe for accurate metadata (fps, duration, width, height). If ffprobe is unavailable, it falls back to OpenCV-based probing.

FUNCTION DESCRIPTION
extract_frame

Extract a single frame from a video at the specified time offset.

extract_frames

Extract multiple frames from a video at specified time offsets.

get_video_info

Extract comprehensive metadata from a video file.

Functions

extract_frame

extract_frame(video_path: Path, time_offset: float = -1.0, output_path: Optional[Path] = None) -> Path

Extract a single frame from a video at the specified time offset.

Extracts and saves a frame from a video file as a JPEG image. Supports both positive time offsets (from start) and negative offsets (from end). Uses OpenCV for video processing and automatically manages storage paths.

PARAMETER DESCRIPTION
video_path

Path to the input video file.

TYPE: Path

time_offset

Time in seconds at which to extract the frame. Positive values are measured from the start, negative values from the end. Defaults to -1.0 (1 second from the end).

TYPE: float DEFAULT: -1.0

output_path

Optional custom path for saving the extracted frame. If None, auto-generates a path using StorageManager.

TYPE: Optional[Path] DEFAULT: None

RETURNS DESCRIPTION
Path

The path where the extracted frame was saved.

TYPE: Path

RAISES DESCRIPTION
FileNotFoundError

If the input video file doesn't exist.

RuntimeError

If frame extraction fails (e.g., invalid time offset).

Examples:

Extract the last frame:

>>> frame_path = extract_frame(Path("video.mp4"))
>>> print(f"Frame saved to: {frame_path}")

Extract frame at 5 seconds:

>>> frame_path = extract_frame(Path("video.mp4"), time_offset=5.0)

Extract with custom output path:

>>> custom_path = Path("my_frame.jpg")
>>> frame_path = extract_frame(
...     Path("video.mp4"),
...     time_offset=10.0,
...     output_path=custom_path
... )
Source code in src/veotools/process/extractor.py
def extract_frame(
    video_path: Path,
    time_offset: float = -1.0,
    output_path: Optional[Path] = None
) -> Path:
    """Extract a single frame from a video at the specified time offset.

    Extracts and saves a frame from a video file as a JPEG image. Supports both
    positive time offsets (from start) and negative offsets (from end). Uses
    OpenCV for video processing and automatically manages storage paths.

    Args:
        video_path: Path to the input video file.
        time_offset: Time in seconds at which to extract the frame. Positive
            values are measured from the start, negative values from the end.
            Defaults to -1.0 (1 second from the end).
        output_path: Optional custom path for saving the extracted frame. If None,
            auto-generates a path using StorageManager.

    Returns:
        Path: The path where the extracted frame was saved.

    Raises:
        FileNotFoundError: If the input video file doesn't exist.
        RuntimeError: If frame extraction fails (e.g., invalid time offset).

    Examples:
        Extract the last frame:
        >>> frame_path = extract_frame(Path("video.mp4"))
        >>> print(f"Frame saved to: {frame_path}")

        Extract frame at 5 seconds:
        >>> frame_path = extract_frame(Path("video.mp4"), time_offset=5.0)

        Extract with custom output path:
        >>> custom_path = Path("my_frame.jpg")
        >>> frame_path = extract_frame(
        ...     Path("video.mp4"),
        ...     time_offset=10.0,
        ...     output_path=custom_path
        ... )
    """
    if not video_path.exists():
        raise FileNotFoundError(f"Video not found: {video_path}")

    storage = StorageManager()
    cap = cv2.VideoCapture(str(video_path))

    try:
        fps = cap.get(cv2.CAP_PROP_FPS)
        total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        duration = total_frames / fps if fps > 0 else 0

        if time_offset < 0:
            target_time = max(0, duration + time_offset)
        else:
            target_time = min(duration, time_offset)

        target_frame = int(target_time * fps)

        cap.set(cv2.CAP_PROP_POS_FRAMES, target_frame)
        ret, frame = cap.read()

        if not ret:
            raise RuntimeError(f"Failed to extract frame at {target_time:.1f}s")

        if output_path is None:
            filename = f"frame_{video_path.stem}_at_{target_time:.1f}s.jpg"
            output_path = storage.get_frame_path(filename)

        cv2.imwrite(str(output_path), frame)

        return output_path

    finally:
        cap.release()
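
Combining extract_frame with get_video_info makes relative positions easy to target. A sketch that assumes a local video.mp4 exists:

from pathlib import Path
from veotools.process.extractor import extract_frame, get_video_info

video = Path("video.mp4")  # assumed local file
info = get_video_info(video)

# Grab the frame at the midpoint of the clip.
midpoint_frame = extract_frame(video, time_offset=info["duration"] / 2)
print(midpoint_frame)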

extract_frames

extract_frames(video_path: Path, times: list, output_dir: Optional[Path] = None) -> list

Extract multiple frames from a video at specified time offsets.

Extracts and saves multiple frames from a video file as JPEG images. Each time offset can be positive (from start) or negative (from end). Uses OpenCV for efficient batch frame extraction.

PARAMETER DESCRIPTION
video_path

Path to the input video file.

TYPE: Path

times

List of time offsets in seconds. Each can be positive (from start) or negative (from end).

TYPE: list

output_dir

Optional directory for saving frames. If None, uses StorageManager's default frame directory.

TYPE: Optional[Path] DEFAULT: None

RETURNS DESCRIPTION
list

List of Path objects where the extracted frames were saved. Order matches the input times list.

TYPE: list

RAISES DESCRIPTION
FileNotFoundError

If the input video file doesn't exist.

Examples:

Extract frames at multiple timestamps:

>>> frame_paths = extract_frames(
...     Path("video.mp4"),
...     [0.0, 5.0, 10.0, -1.0]  # Start, 5s, 10s, and 1s from end
... )
>>> print(f"Extracted {len(frame_paths)} frames")

Extract to custom directory:

>>> output_dir = Path("extracted_frames")
>>> frame_paths = extract_frames(
...     Path("movie.mp4"),
...     [1.0, 2.0, 3.0],
...     output_dir=output_dir
... )
Note

Failed frame extractions are silently skipped. The returned list may contain fewer paths than input times if some extractions fail.

Source code in src/veotools/process/extractor.py
def extract_frames(
    video_path: Path,
    times: list,
    output_dir: Optional[Path] = None
) -> list:
    """Extract multiple frames from a video at specified time offsets.

    Extracts and saves multiple frames from a video file as JPEG images. Each
    time offset can be positive (from start) or negative (from end). Uses
    OpenCV for efficient batch frame extraction.

    Args:
        video_path: Path to the input video file.
        times: List of time offsets in seconds. Each can be positive (from start)
            or negative (from end).
        output_dir: Optional directory for saving frames. If None, uses
            StorageManager's default frame directory.

    Returns:
        list: List of Path objects where the extracted frames were saved.
            Order matches the input times list.

    Raises:
        FileNotFoundError: If the input video file doesn't exist.

    Examples:
        Extract frames at multiple timestamps:
        >>> frame_paths = extract_frames(
        ...     Path("video.mp4"),
        ...     [0.0, 5.0, 10.0, -1.0]  # Start, 5s, 10s, and 1s from end
        ... )
        >>> print(f"Extracted {len(frame_paths)} frames")

        Extract to custom directory:
        >>> output_dir = Path("extracted_frames")
        >>> frame_paths = extract_frames(
        ...     Path("movie.mp4"),
        ...     [1.0, 2.0, 3.0],
        ...     output_dir=output_dir
        ... )

    Note:
        Failed frame extractions are silently skipped. The returned list may
        contain fewer paths than input times if some extractions fail.
    """
    if not video_path.exists():
        raise FileNotFoundError(f"Video not found: {video_path}")

    storage = StorageManager()
    cap = cv2.VideoCapture(str(video_path))
    frames = []

    try:
        fps = cap.get(cv2.CAP_PROP_FPS)
        total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        duration = total_frames / fps if fps > 0 else 0

        for i, time_offset in enumerate(times):
            if time_offset < 0:
                target_time = max(0, duration + time_offset)
            else:
                target_time = min(duration, time_offset)

            target_frame = int(target_time * fps)

            cap.set(cv2.CAP_PROP_POS_FRAMES, target_frame)
            ret, frame = cap.read()

            if ret:
                if output_dir:
                    output_path = output_dir / f"frame_{i:03d}_at_{target_time:.1f}s.jpg"
                else:
                    filename = f"frame_{video_path.stem}_{i:03d}_at_{target_time:.1f}s.jpg"
                    output_path = storage.get_frame_path(filename)

                cv2.imwrite(str(output_path), frame)
                frames.append(output_path)

        return frames

    finally:
        cap.release()
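
For evenly spaced sampling, the offsets can be derived from the clip duration. A sketch that assumes a local video.mp4 exists:

from pathlib import Path
from veotools.process.extractor import extract_frames, get_video_info

video = Path("video.mp4")  # assumed local file
duration = get_video_info(video)["duration"]

# Five evenly spaced sample points across the clip.
times = [duration * i / 5 for i in range(5)]
paths = extract_frames(video, times)
print(f"Extracted {len(paths)} of {len(times)} requested frames")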

get_video_info

get_video_info(video_path: Path) -> dict

Extract comprehensive metadata from a video file.

Retrieves video metadata including frame rate, duration, dimensions, and frame count. First attempts to use ffprobe for accurate metadata extraction, falling back to OpenCV if ffprobe is unavailable. This dual approach ensures maximum compatibility and accuracy.

PARAMETER DESCRIPTION
video_path

Path to the input video file.

TYPE: Path

RETURNS DESCRIPTION
dict

Video metadata containing:

- fps (float): Frames per second
- frame_count (int): Total number of frames
- width (int): Video width in pixels
- height (int): Video height in pixels
- duration (float): Video duration in seconds

TYPE: dict

RAISES DESCRIPTION
FileNotFoundError

If the input video file doesn't exist.

Examples:

Get basic video information:

>>> info = get_video_info(Path("video.mp4"))
>>> print(f"Duration: {info['duration']:.2f}s")
>>> print(f"Resolution: {info['width']}x{info['height']}")
>>> print(f"Frame rate: {info['fps']} fps")

Check if video has expected properties:

>>> info = get_video_info(Path("movie.mp4"))
>>> if info['fps'] > 30:
...     print("High frame rate video")
>>> if info['width'] >= 1920:
...     print("HD or higher resolution")
Note
  • ffprobe (from FFmpeg) provides more accurate metadata when available
  • OpenCV fallback may have slight inaccuracies in frame rate calculation
  • All numeric values are guaranteed to be non-negative
  • Returns 0.0 for fps/duration if video properties cannot be determined
Source code in src/veotools/process/extractor.py
def get_video_info(video_path: Path) -> dict:
    """Extract comprehensive metadata from a video file.

    Retrieves video metadata including frame rate, duration, dimensions, and frame count.
    First attempts to use ffprobe for accurate metadata extraction, falling back to
    OpenCV if ffprobe is unavailable. This dual approach ensures maximum compatibility
    and accuracy.

    Args:
        video_path: Path to the input video file.

    Returns:
        dict: Video metadata containing:
            - fps (float): Frames per second
            - frame_count (int): Total number of frames
            - width (int): Video width in pixels
            - height (int): Video height in pixels
            - duration (float): Video duration in seconds

    Raises:
        FileNotFoundError: If the input video file doesn't exist.

    Examples:
        Get basic video information:
        >>> info = get_video_info(Path("video.mp4"))
        >>> print(f"Duration: {info['duration']:.2f}s")
        >>> print(f"Resolution: {info['width']}x{info['height']}")
        >>> print(f"Frame rate: {info['fps']} fps")

        Check if video has expected properties:
        >>> info = get_video_info(Path("movie.mp4"))
        >>> if info['fps'] > 30:
        ...     print("High frame rate video")
        >>> if info['width'] >= 1920:
        ...     print("HD or higher resolution")

    Note:
        - ffprobe (from FFmpeg) provides more accurate metadata when available
        - OpenCV fallback may have slight inaccuracies in frame rate calculation
        - All numeric values are guaranteed to be non-negative
        - Returns 0.0 for fps/duration if video properties cannot be determined
    """
    if not video_path.exists():
        raise FileNotFoundError(f"Video not found: {video_path}")

    # Try ffprobe for precise metadata
    try:
        cmd = [
            "ffprobe", "-v", "error",
            "-print_format", "json",
            "-show_format",
            "-show_streams",
            str(video_path)
        ]
        res = subprocess.run(cmd, capture_output=True, text=True, check=True)
        data = json.loads(res.stdout or "{}")
        video_stream = None
        for s in data.get("streams", []):
            if s.get("codec_type") == "video":
                video_stream = s
                break
        if video_stream:
            # FPS can be in r_frame_rate or avg_frame_rate as "num/den"
            fps_val = 0.0
            for key in ("avg_frame_rate", "r_frame_rate"):
                rate = video_stream.get(key)
                if isinstance(rate, str) and "/" in rate:
                    num, den = rate.split("/", 1)
                    try:
                        num_f, den_f = float(num), float(den)
                        if den_f > 0:
                            fps_val = num_f / den_f
                            break
                    except Exception:
                        pass
            width = int(video_stream.get("width", 0) or 0)
            height = int(video_stream.get("height", 0) or 0)
            duration = None
            # Prefer format duration
            if "format" in data and data["format"].get("duration"):
                try:
                    duration = float(data["format"]["duration"])  # seconds
                except Exception:
                    duration = None
            if duration is None and video_stream.get("duration"):
                try:
                    duration = float(video_stream["duration"])  # seconds
                except Exception:
                    duration = None
            frame_count = int(fps_val * duration) if fps_val and duration else 0
            return {
                "fps": fps_val or 0.0,
                "frame_count": frame_count,
                "width": width,
                "height": height,
                "duration": duration or 0.0,
            }
    except (subprocess.CalledProcessError, FileNotFoundError, json.JSONDecodeError):
        # Fall back to OpenCV below
        pass

    # Fallback: OpenCV probing
    cap = cv2.VideoCapture(str(video_path))
    try:
        fps = cap.get(cv2.CAP_PROP_FPS)
        frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        duration = frame_count / fps if fps and fps > 0 else 0
        return {
            "fps": fps or 0.0,
            "frame_count": frame_count,
            "width": width,
            "height": height,
            "duration": duration,
        }
    finally:
        cap.release()
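
Because the function returns zeros rather than raising when properties cannot be determined, callers should validate before dividing. A defensive sketch (clip.mp4 is an assumed local file):

from pathlib import Path
from veotools.process.extractor import get_video_info

info = get_video_info(Path("clip.mp4"))
if info["fps"] > 0 and info["duration"] > 0:
    print(f"{info['width']}x{info['height']} @ {info['fps']:.2f} fps, "
          f"{info['duration']:.1f}s (~{info['frame_count']} frames)")
else:
    print("Video properties could not be determined")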

Video Stitching Module

veotools.stitch.seamless

Seamless video stitching for Veo Tools.

FUNCTION DESCRIPTION
stitch_videos

Seamlessly stitch multiple videos together into a single continuous video.

stitch_with_transitions

Stitch videos together with custom transition videos between them.

create_transition_points

Extract frames from two videos to analyze potential transition points.

Functions

stitch_videos

stitch_videos(video_paths: List[Path], overlap: float = 1.0, output_path: Optional[Path] = None, on_progress: Optional[Callable] = None) -> VideoResult

Seamlessly stitch multiple videos together into a single continuous video.

Combines multiple video files into one continuous video by concatenating them with optional overlap trimming. All videos are resized to match the dimensions of the first video. The output is optimized with H.264 encoding for broad compatibility.

PARAMETER DESCRIPTION
video_paths

List of paths to video files to stitch together, in order.

TYPE: List[Path]

overlap

Duration in seconds to trim from the end of each video (except the last one) to create smooth transitions. Defaults to 1.0.

TYPE: float DEFAULT: 1.0

output_path

Optional custom output path. If None, auto-generates a path using StorageManager.

TYPE: Optional[Path] DEFAULT: None

on_progress

Optional callback function called with progress updates (message, percent).

TYPE: Optional[Callable] DEFAULT: None

RETURNS DESCRIPTION
VideoResult

Object containing the stitched video path, metadata, and operation details.

TYPE: VideoResult

RAISES DESCRIPTION
ValueError

If no videos are provided or if fewer than 2 videos are found.

FileNotFoundError

If any input video file doesn't exist.

RuntimeError

If video processing fails.

Examples:

Stitch videos with default overlap:

>>> video_files = [Path("part1.mp4"), Path("part2.mp4"), Path("part3.mp4")]
>>> result = stitch_videos(video_files)
>>> print(f"Stitched video: {result.path}")

Stitch without overlap:

>>> result = stitch_videos(video_files, overlap=0.0)

Stitch with progress tracking:

>>> def show_progress(msg, pct):
...     print(f"Stitching: {msg} ({pct}%)")
>>> result = stitch_videos(
...     video_files,
...     overlap=2.0,
...     on_progress=show_progress
... )

Custom output location:

>>> result = stitch_videos(
...     video_files,
...     output_path=Path("final_movie.mp4")
... )
Note
  • Videos are resized to match the first video's dimensions
  • Uses H.264 encoding with CRF 23 for good quality/size balance
  • Automatically handles frame rate consistency
  • FFmpeg is used for final encoding if available, otherwise uses OpenCV
Source code in src/veotools/stitch/seamless.py
def stitch_videos(
    video_paths: List[Path],
    overlap: float = 1.0,
    output_path: Optional[Path] = None,
    on_progress: Optional[Callable] = None
) -> VideoResult:
    """Seamlessly stitch multiple videos together into a single continuous video.

    Combines multiple video files into one continuous video by concatenating them
    with optional overlap trimming. All videos are resized to match the dimensions
    of the first video. The output is optimized with H.264 encoding for broad
    compatibility.

    Args:
        video_paths: List of paths to video files to stitch together, in order.
        overlap: Duration in seconds to trim from the end of each video (except
            the last one) to create smooth transitions. Defaults to 1.0.
        output_path: Optional custom output path. If None, auto-generates a path
            using StorageManager.
        on_progress: Optional callback function called with progress updates (message, percent).

    Returns:
        VideoResult: Object containing the stitched video path, metadata, and operation details.

    Raises:
        ValueError: If no videos are provided or if fewer than 2 videos are found.
        FileNotFoundError: If any input video file doesn't exist.
        RuntimeError: If video processing fails.

    Examples:
        Stitch videos with default overlap:
        >>> video_files = [Path("part1.mp4"), Path("part2.mp4"), Path("part3.mp4")]
        >>> result = stitch_videos(video_files)
        >>> print(f"Stitched video: {result.path}")

        Stitch without overlap:
        >>> result = stitch_videos(video_files, overlap=0.0)

        Stitch with progress tracking:
        >>> def show_progress(msg, pct):
        ...     print(f"Stitching: {msg} ({pct}%)")
        >>> result = stitch_videos(
        ...     video_files,
        ...     overlap=2.0,
        ...     on_progress=show_progress
        ... )

        Custom output location:
        >>> result = stitch_videos(
        ...     video_files,
        ...     output_path=Path("final_movie.mp4")
        ... )

    Note:
        - Videos are resized to match the first video's dimensions
        - Uses H.264 encoding with CRF 23 for good quality/size balance
        - Automatically handles frame rate consistency
        - FFmpeg is used for final encoding if available, otherwise uses OpenCV
    """
    if not video_paths:
        raise ValueError("No videos provided to stitch")

    storage = StorageManager()
    progress = ProgressTracker(on_progress)
    result = VideoResult()

    try:
        progress.start("Preparing")

        for path in video_paths:
            if not path.exists():
                raise FileNotFoundError(f"Video not found: {path}")

        first_info = get_video_info(video_paths[0])
        fps = first_info["fps"]
        width = first_info["width"]
        height = first_info["height"]

        if output_path is None:
            filename = f"stitched_{result.id[:8]}.mp4"
            output_path = storage.get_video_path(filename)

        fourcc = cv2.VideoWriter_fourcc(*'mp4v')
        temp_path = output_path.parent / f"temp_{output_path.name}"
        out = cv2.VideoWriter(str(temp_path), fourcc, fps, (width, height))

        total_frames_written = 0
        total_videos = len(video_paths)

        for i, video_path in enumerate(video_paths):
            is_last_video = (i == total_videos - 1)
            percent = int((i / total_videos) * 90)
            progress.update(f"Processing {i+1}/{total_videos}", percent)

            cap = cv2.VideoCapture(str(video_path))
            total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

            if not is_last_video and overlap > 0:
                frames_to_trim = int(fps * overlap)
                frames_to_use = max(1, total_frames - frames_to_trim)
            else:
                frames_to_use = total_frames

            frame_count = 0
            while frame_count < frames_to_use:
                ret, frame = cap.read()
                if not ret:
                    break

                if frame.shape[1] != width or frame.shape[0] != height:
                    frame = cv2.resize(frame, (width, height))

                out.write(frame)
                frame_count += 1
                total_frames_written += 1

            cap.release()

        out.release()

        import subprocess
        try:
            cmd = [
                "ffmpeg", "-i", str(temp_path),
                "-c:v", "libx264",
                "-preset", "fast",
                "-crf", "23",
                "-pix_fmt", "yuv420p",
                "-movflags", "+faststart",
                "-y",
                str(output_path)
            ]
            subprocess.run(cmd, check=True, capture_output=True)
            temp_path.unlink()
        except subprocess.CalledProcessError:
            import shutil
            shutil.move(str(temp_path), str(output_path))

        result.path = output_path
        result.url = storage.get_url(output_path)
        result.metadata = VideoMetadata(
            fps=fps,
            duration=total_frames_written / fps if fps > 0 else 0,
            width=width,
            height=height
        )

        progress.complete("Complete")
        result.update_progress("Complete", 100)

    except Exception as e:
        result.mark_failed(e)
        raise

    return result
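
Because overlap seconds are trimmed from the end of every clip except the last, the expected output duration is the sum of the clip durations minus overlap * (n - 1). A sketch that assumes three local clips:

from pathlib import Path
from veotools.process.extractor import get_video_info
from veotools.stitch.seamless import stitch_videos

clips = [Path("part1.mp4"), Path("part2.mp4"), Path("part3.mp4")]  # assumed local files
overlap = 1.0

durations = [get_video_info(p)["duration"] for p in clips]
expected = sum(durations) - overlap * (len(clips) - 1)

result = stitch_videos(clips, overlap=overlap)
print(f"Expected ~{expected:.1f}s, got {result.metadata.duration:.1f}s")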

stitch_with_transitions

stitch_with_transitions(video_paths: List[Path], transition_videos: List[Path], output_path: Optional[Path] = None, on_progress: Optional[Callable] = None) -> VideoResult

Stitch videos together with custom transition videos between them.

Combines multiple videos by inserting transition videos between each pair of main videos. The transitions are placed between consecutive videos to create smooth, cinematic connections between scenes.

PARAMETER DESCRIPTION
video_paths

List of main video files to stitch together, in order.

TYPE: List[Path]

transition_videos

List of transition videos to insert between main videos. Must have exactly len(video_paths) - 1 transitions.

TYPE: List[Path]

output_path

Optional custom output path. If None, auto-generates a path using StorageManager.

TYPE: Optional[Path] DEFAULT: None

on_progress

Optional callback function called with progress updates (message, percent).

TYPE: Optional[Callable] DEFAULT: None

RETURNS DESCRIPTION
VideoResult

Object containing the final stitched video with transitions.

TYPE: VideoResult

RAISES DESCRIPTION
ValueError

If the number of transition videos doesn't match the requirement (should be one less than the number of main videos).

FileNotFoundError

If any video file doesn't exist.

Examples:

Add transitions between three video clips:

>>> main_videos = [Path("scene1.mp4"), Path("scene2.mp4"), Path("scene3.mp4")]
>>> transitions = [Path("fade1.mp4"), Path("fade2.mp4")]
>>> result = stitch_with_transitions(main_videos, transitions)
>>> print(f"Final video with transitions: {result.path}")

With progress tracking:

>>> def track_progress(msg, pct):
...     print(f"Processing: {msg} - {pct}%")
>>> result = stitch_with_transitions(
...     main_videos,
...     transitions,
...     on_progress=track_progress
... )
Note

This function uses stitch_videos internally with overlap=0 to preserve transition videos exactly as provided.

Source code in src/veotools/stitch/seamless.py
def stitch_with_transitions(
    video_paths: List[Path],
    transition_videos: List[Path],
    output_path: Optional[Path] = None,
    on_progress: Optional[Callable] = None
) -> VideoResult:
    """Stitch videos together with custom transition videos between them.

    Combines multiple videos by inserting transition videos between each pair
    of main videos. The transitions are placed between consecutive videos to
    create smooth, cinematic connections between scenes.

    Args:
        video_paths: List of main video files to stitch together, in order.
        transition_videos: List of transition videos to insert between main videos.
            Must have exactly len(video_paths) - 1 transitions.
        output_path: Optional custom output path. If None, auto-generates a path
            using StorageManager.
        on_progress: Optional callback function called with progress updates (message, percent).

    Returns:
        VideoResult: Object containing the final stitched video with transitions.

    Raises:
        ValueError: If the number of transition videos doesn't match the requirement
            (should be one less than the number of main videos).
        FileNotFoundError: If any video file doesn't exist.

    Examples:
        Add transitions between three video clips:
        >>> main_videos = [Path("scene1.mp4"), Path("scene2.mp4"), Path("scene3.mp4")]
        >>> transitions = [Path("fade1.mp4"), Path("fade2.mp4")]
        >>> result = stitch_with_transitions(main_videos, transitions)
        >>> print(f"Final video with transitions: {result.path}")

        With progress tracking:
        >>> def track_progress(msg, pct):
        ...     print(f"Processing: {msg} - {pct}%")
        >>> result = stitch_with_transitions(
        ...     main_videos,
        ...     transitions,
        ...     on_progress=track_progress
        ... )

    Note:
        This function uses stitch_videos internally with overlap=0 to preserve
        transition videos exactly as provided.
    """
    if len(transition_videos) != len(video_paths) - 1:
        raise ValueError(f"Need {len(video_paths)-1} transitions for {len(video_paths)} videos")

    combined_paths = []
    for i, video in enumerate(video_paths[:-1]):
        combined_paths.append(video)
        combined_paths.append(transition_videos[i])
    combined_paths.append(video_paths[-1])

    return stitch_videos(
        combined_paths,
        overlap=0,
        output_path=output_path,
        on_progress=on_progress
    )
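For clarity, the interleaving performed above can be reproduced in plain Python (an illustration only, not a veotools API call):

video_paths = ["scene1.mp4", "scene2.mp4", "scene3.mp4"]
transitions = ["fade1.mp4", "fade2.mp4"]

combined = []
for clip, fade in zip(video_paths[:-1], transitions):
    combined += [clip, fade]
combined.append(video_paths[-1])

# Main clips and transitions alternate, ending on the last main clip:
assert combined == ["scene1.mp4", "fade1.mp4", "scene2.mp4", "fade2.mp4", "scene3.mp4"]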

create_transition_points

create_transition_points(video_a: Path, video_b: Path, extract_points: Optional[dict] = None) -> tuple

Extract frames from two videos to analyze potential transition points.

Extracts representative frames from two videos that can be used to analyze how well they might transition together. Typically extracts the ending frame of the first video and the beginning frame of the second video.

PARAMETER DESCRIPTION
video_a

Path to the first video file.

TYPE: Path

video_b

Path to the second video file.

TYPE: Path

extract_points

Optional dictionary specifying extraction points:

  • "a_end": Time offset for frame extraction from video_a (default: -1.0)
  • "b_start": Time offset for frame extraction from video_b (default: 1.0)

If None, uses default values.

TYPE: Optional[dict] DEFAULT: None

RETURNS DESCRIPTION
tuple

A tuple containing (frame_a_path, frame_b_path) where:

  • frame_a_path: Path to extracted frame from video_a
  • frame_b_path: Path to extracted frame from video_b

TYPE: tuple

RAISES DESCRIPTION
FileNotFoundError

If either video file doesn't exist.

RuntimeError

If frame extraction fails for either video.

Examples:

Extract transition frames with defaults:

>>> frame_a, frame_b = create_transition_points(
...     Path("clip1.mp4"),
...     Path("clip2.mp4")
... )
>>> print(f"Transition frames: {frame_a}, {frame_b}")

Custom extraction points:

>>> points = {"a_end": -2.0, "b_start": 0.5}
>>> frame_a, frame_b = create_transition_points(
...     Path("scene1.mp4"),
...     Path("scene2.mp4"),
...     extract_points=points
... )
Note
  • Default extracts 1 second before the end of video_a
  • Default extracts 1 second after the start of video_b
  • Negative values in extract_points count from the end of the video
  • These frames can be used to analyze color, composition, or content similarity for better transition planning
Source code in src/veotools/stitch/seamless.py
def create_transition_points(
    video_a: Path,
    video_b: Path,
    extract_points: Optional[dict] = None
) -> tuple:
    """Extract frames from two videos to analyze potential transition points.

    Extracts representative frames from two videos that can be used to analyze
    how well they might transition together. Typically extracts the ending frame
    of the first video and the beginning frame of the second video.

    Args:
        video_a: Path to the first video file.
        video_b: Path to the second video file.
        extract_points: Optional dictionary specifying extraction points:
            - "a_end": Time offset for frame extraction from video_a (default: -1.0)
            - "b_start": Time offset for frame extraction from video_b (default: 1.0)
            If None, uses default values.

    Returns:
        tuple: A tuple containing (frame_a_path, frame_b_path) where:
            - frame_a_path: Path to extracted frame from video_a
            - frame_b_path: Path to extracted frame from video_b

    Raises:
        FileNotFoundError: If either video file doesn't exist.
        RuntimeError: If frame extraction fails for either video.

    Examples:
        Extract transition frames with defaults:
        >>> frame_a, frame_b = create_transition_points(
        ...     Path("clip1.mp4"),
        ...     Path("clip2.mp4")
        ... )
        >>> print(f"Transition frames: {frame_a}, {frame_b}")

        Custom extraction points:
        >>> points = {"a_end": -2.0, "b_start": 0.5}
        >>> frame_a, frame_b = create_transition_points(
        ...     Path("scene1.mp4"),
        ...     Path("scene2.mp4"),
        ...     extract_points=points
        ... )

    Note:
        - Default extracts 1 second before the end of video_a
        - Default extracts 1 second after the start of video_b
        - Negative values in extract_points count from the end of the video
        - These frames can be used to analyze color, composition, or content
          similarity for better transition planning
    """
    from ..process.extractor import extract_frame

    if extract_points is None:
        extract_points = {
            "a_end": -1.0,
            "b_start": 1.0
        }

    frame_a = extract_frame(video_a, extract_points.get("a_end", -1.0))
    frame_b = extract_frame(video_b, extract_points.get("b_start", 1.0))

    return frame_a, frame_b
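Since the Note suggests the frames can feed a color or composition check, here is a hedged sketch of such a check using OpenCV (the comparison itself is not part of veotools; it only assumes extract_frame writes image files that cv2 can read):

import cv2
import numpy as np
from pathlib import Path

from veotools.stitch.seamless import create_transition_points

frame_a, frame_b = create_transition_points(Path("scene1.mp4"), Path("scene2.mp4"))
img_a = cv2.imread(str(frame_a)).astype(np.float32)
img_b = cv2.imread(str(frame_b)).astype(np.float32)

# Euclidean distance between mean BGR colors; a larger gap suggests a harsher cut
# and therefore a stronger case for generating a transition clip.
color_gap = float(np.linalg.norm(img_a.mean(axis=(0, 1)) - img_b.mean(axis=(0, 1))))
print(f"Mean-color gap between transition points: {color_gap:.1f}")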

Bridge API Module

veotools.api.bridge

CLASS DESCRIPTION
Bridge

A fluent API bridge for chaining video generation and processing operations.

Classes

Bridge

Bridge(name: Optional[str] = None)

A fluent API bridge for chaining video generation and processing operations.

The Bridge class provides a convenient, chainable interface for combining multiple video operations like generation, stitching, and media management. It maintains an internal workflow and media queue to track operations and intermediate results.

ATTRIBUTE DESCRIPTION
workflow

Workflow object tracking all operations performed.

media_queue

List of media file paths in processing order.

TYPE: List[Path]

results

List of VideoResult objects from generation operations.

TYPE: List[VideoResult]

storage

StorageManager instance for file operations.

Examples:

Basic text-to-video generation:

>>> bridge = Bridge("my_project")
>>> result = bridge.generate("A cat playing").save()

Chain multiple generations and stitch:

>>> bridge = (Bridge("movie_project")
...     .generate("Opening scene")
...     .generate("Middle scene")
...     .generate("Ending scene")
...     .stitch(overlap=1.0)
...     .save(Path("final_movie.mp4")))

Image-to-video with continuation:

>>> bridge = (Bridge()
...     .add_media("photo.jpg")
...     .generate("The person starts walking")
...     .generate("They walk into the distance")
...     .stitch())
METHOD DESCRIPTION
with_progress

Set a progress callback for all subsequent operations.

add_media

Add media files to the processing queue.

generate

Generate a video using text prompt and optional media input.

generate_transition

Generate a transition video between the last two media items.

stitch

Stitch all videos in the queue into a single continuous video.

save

Save the final result to a specified path or return the current path.

get_workflow

Get the workflow object containing all performed operations.

to_dict

Convert the workflow to a dictionary representation.

clear

Clear the media queue, removing all queued media files.

Source code in src/veotools/api/bridge.py
def __init__(self, name: Optional[str] = None):
    self.workflow = Workflow(name)
    self.media_queue: List[Path] = []
    self.results: List[VideoResult] = []
    self.storage = StorageManager()
    self._on_progress: Optional[Callable] = None
Functions
with_progress
with_progress(callback: Callable) -> Bridge

Set a progress callback for all subsequent operations.

PARAMETER DESCRIPTION
callback

Function called with progress updates (message: str, percent: int).

TYPE: Callable

RETURNS DESCRIPTION
Bridge

Self for method chaining.

TYPE: Bridge

Examples:

>>> def show_progress(msg, pct):
...     print(f"{msg}: {pct}%")
>>> bridge = Bridge().with_progress(show_progress)
Source code in src/veotools/api/bridge.py
def with_progress(self, callback: Callable) -> 'Bridge':
    """Set a progress callback for all subsequent operations.

    Args:
        callback: Function called with progress updates (message: str, percent: int).

    Returns:
        Bridge: Self for method chaining.

    Examples:
        >>> def show_progress(msg, pct):
        ...     print(f"{msg}: {pct}%")
        >>> bridge = Bridge().with_progress(show_progress)
    """
    self._on_progress = callback
    return self
add_media
add_media(media: Union[str, Path, List[Union[str, Path]]]) -> Bridge

Add media files to the processing queue.

Adds one or more media files (images or videos) to the internal queue. These files can be used as inputs for subsequent generation operations.

PARAMETER DESCRIPTION
media

Single media path, or list of media paths to add to the queue.

TYPE: Union[str, Path, List[Union[str, Path]]]

RETURNS DESCRIPTION
Bridge

Self for method chaining.

TYPE: Bridge

Examples:

Add a single image:

>>> bridge = Bridge().add_media("photo.jpg")

Add multiple videos:

>>> files = ["video1.mp4", "video2.mp4", "video3.mp4"]
>>> bridge = Bridge().add_media(files)

Chain with Path objects:

>>> bridge = Bridge().add_media(Path("input.mp4"))
Source code in src/veotools/api/bridge.py
def add_media(self, media: Union[str, Path, List[Union[str, Path]]]) -> 'Bridge':
    """Add media files to the processing queue.

    Adds one or more media files (images or videos) to the internal queue.
    These files can be used as inputs for subsequent generation operations.

    Args:
        media: Single media path, or list of media paths to add to the queue.

    Returns:
        Bridge: Self for method chaining.

    Examples:
        Add a single image:
        >>> bridge = Bridge().add_media("photo.jpg")

        Add multiple videos:
        >>> files = ["video1.mp4", "video2.mp4", "video3.mp4"]
        >>> bridge = Bridge().add_media(files)

        Chain with Path objects:
        >>> bridge = Bridge().add_media(Path("input.mp4"))
    """
    if isinstance(media, list):
        for m in media:
            self.media_queue.append(Path(m))
            self.workflow.add_step("add_media", {"path": str(m)})
    else:
        self.media_queue.append(Path(media))
        self.workflow.add_step("add_media", {"path": str(media)})
    return self
generate
generate(prompt: str, model: str = 'veo-3.0-fast-generate-preview', **kwargs) -> Bridge

Generate a video using text prompt and optional media input.

Generates a video based on the prompt and the most recent media in the queue. The generation method is automatically selected based on the media type:

  • No media: text-to-video generation
  • Image media: image-to-video generation
  • Video media: video continuation generation

PARAMETER DESCRIPTION
prompt

Text description for video generation.

TYPE: str

model

Veo model to use. Defaults to "veo-3.0-fast-generate-preview".

TYPE: str DEFAULT: 'veo-3.0-fast-generate-preview'

**kwargs

Additional generation parameters including:

  • extract_at: Time offset for video continuation (float)
  • duration_seconds: Video duration (int)
  • person_generation: Person policy (str)
  • enhance: Whether to enhance prompt (bool)

DEFAULT: {}

RETURNS DESCRIPTION
Bridge

Self for method chaining.

TYPE: Bridge

RAISES DESCRIPTION
RuntimeError

If video generation fails.

Examples:

Text-to-video generation:

>>> bridge = Bridge().generate("A sunset over mountains")

Image-to-video with existing media:

>>> bridge = (Bridge()
...     .add_media("landscape.jpg")
...     .generate("Clouds moving across the sky"))

Video continuation:

>>> bridge = (Bridge()
...     .add_media("scene1.mp4")
...     .generate("The action continues", extract_at=-2.0))

Custom model and parameters:

>>> bridge = Bridge().generate(
...     "A dancing robot",
...     model="veo-2.0",
...     duration_seconds=10,
...     enhance=True
... )
Source code in src/veotools/api/bridge.py
def generate(self, prompt: str, model: str = "veo-3.0-fast-generate-preview", 
             **kwargs) -> 'Bridge':
    """Generate a video using text prompt and optional media input.

    Generates a video based on the prompt and the most recent media in the queue.
    The generation method is automatically selected based on the media type:
    - No media: text-to-video generation
    - Image media: image-to-video generation
    - Video media: video continuation generation

    Args:
        prompt: Text description for video generation.
        model: Veo model to use. Defaults to "veo-3.0-fast-generate-preview".
        **kwargs: Additional generation parameters including:
            - extract_at: Time offset for video continuation (float)
            - duration_seconds: Video duration (int)
            - person_generation: Person policy (str)
            - enhance: Whether to enhance prompt (bool)

    Returns:
        Bridge: Self for method chaining.

    Raises:
        RuntimeError: If video generation fails.

    Examples:
        Text-to-video generation:
        >>> bridge = Bridge().generate("A sunset over mountains")

        Image-to-video with existing media:
        >>> bridge = (Bridge()
        ...     .add_media("landscape.jpg")
        ...     .generate("Clouds moving across the sky"))

        Video continuation:
        >>> bridge = (Bridge()
        ...     .add_media("scene1.mp4")
        ...     .generate("The action continues", extract_at=-2.0))

        Custom model and parameters:
        >>> bridge = Bridge().generate(
        ...     "A dancing robot",
        ...     model="veo-2.0",
        ...     duration_seconds=10,
        ...     enhance=True
        ... )
    """
    step = self.workflow.add_step("generate", {
        "prompt": prompt,
        "model": model,
        **kwargs
    })

    if self.media_queue:
        last_media = self.media_queue[-1]

        if last_media.suffix.lower() in ['.jpg', '.jpeg', '.png', '.gif', '.bmp']:
            result = generate_from_image(
                last_media,
                prompt,
                model=model,
                on_progress=self._on_progress,
                **kwargs
            )
        else:
            result = generate_from_video(
                last_media,
                prompt,
                extract_at=kwargs.pop("extract_at", -1.0),
                model=model,
                on_progress=self._on_progress,
                **kwargs
            )
    else:
        result = generate_from_text(
            prompt,
            model=model,
            on_progress=self._on_progress,
            **kwargs
        )

    step.result = result
    self.results.append(result)

    if result.path:
        self.media_queue.append(result.path)

    return self
generate_transition
generate_transition(prompt: Optional[str] = None, model: str = 'veo-3.0-fast-generate-preview') -> Bridge

Generate a transition video between the last two media items.

Creates a smooth transition video that bridges the gap between the two most recent media items in the queue. The transition is generated from a frame extracted near the end of the second-to-last video.

PARAMETER DESCRIPTION
prompt

Description of the desired transition. If None, uses a default "smooth cinematic transition between scenes".

TYPE: Optional[str] DEFAULT: None

model

Veo model to use. Defaults to "veo-3.0-fast-generate-preview".

TYPE: str DEFAULT: 'veo-3.0-fast-generate-preview'

RETURNS DESCRIPTION
Bridge

Self for method chaining.

TYPE: Bridge

RAISES DESCRIPTION
ValueError

If fewer than 2 media items are in the queue.

Examples:

Generate default transition:

>>> bridge = (Bridge()
...     .add_media(["scene1.mp4", "scene2.mp4"])
...     .generate_transition())

Custom transition prompt:

>>> bridge = (Bridge()
...     .generate("Day scene")
...     .generate("Night scene")
...     .generate_transition("Gradual sunset transition"))
Note

The transition video is inserted between the last two media items, creating a sequence like: [media_a, transition, media_b, ...]

Source code in src/veotools/api/bridge.py
def generate_transition(self, prompt: Optional[str] = None, 
                       model: str = "veo-3.0-fast-generate-preview") -> 'Bridge':
    """Generate a transition video between the last two media items.

    Creates a smooth transition video that bridges the gap between the two most
    recent media items in the queue. The transition is generated from a frame
    extracted near the end of the second-to-last video.

    Args:
        prompt: Description of the desired transition. If None, uses a default
            "smooth cinematic transition between scenes".
        model: Veo model to use. Defaults to "veo-3.0-fast-generate-preview".

    Returns:
        Bridge: Self for method chaining.

    Raises:
        ValueError: If fewer than 2 media items are in the queue.

    Examples:
        Generate default transition:
        >>> bridge = (Bridge()
        ...     .add_media(["scene1.mp4", "scene2.mp4"])
        ...     .generate_transition())

        Custom transition prompt:
        >>> bridge = (Bridge()
        ...     .generate("Day scene")
        ...     .generate("Night scene")
        ...     .generate_transition("Gradual sunset transition"))

    Note:
        The transition video is inserted between the last two media items,
        creating a sequence like: [media_a, transition, media_b, ...]
    """
    if len(self.media_queue) < 2:
        raise ValueError("Need at least 2 media items to create transition")

    media_a = self.media_queue[-2]
    media_b = self.media_queue[-1]

    if not prompt:
        prompt = "smooth cinematic transition between scenes"

    step = self.workflow.add_step("generate_transition", {
        "media_a": str(media_a),
        "media_b": str(media_b),
        "prompt": prompt,
        "model": model
    })

    result = generate_from_video(
        media_a,
        prompt,
        extract_at=-0.5,
        model=model,
        on_progress=self._on_progress
    )

    step.result = result
    self.results.append(result)

    if result.path:
        self.media_queue.insert(-1, result.path)

    return self
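To make the Note above concrete: inserting at index -1 places the new clip just before the last queue item, so a queue of two items gains the transition in the middle (a plain-Python illustration):

queue = ["scene1.mp4", "scene2.mp4"]
queue.insert(-1, "transition.mp4")  # same operation as media_queue.insert(-1, result.path)
assert queue == ["scene1.mp4", "transition.mp4", "scene2.mp4"]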
stitch
stitch(overlap: float = 1.0) -> Bridge

Stitch all videos in the queue into a single continuous video.

Combines all video files in the media queue into one seamless video. Non-video files (images) are automatically filtered out. The result replaces the entire media queue.

PARAMETER DESCRIPTION
overlap

Duration in seconds to trim from the end of each video (except the last) for smooth transitions. Defaults to 1.0.

TYPE: float DEFAULT: 1.0

RETURNS DESCRIPTION
Bridge

Self for method chaining.

TYPE: Bridge

RAISES DESCRIPTION
ValueError

If fewer than 2 videos are available for stitching.

Examples:

Stitch with default overlap:

>>> bridge = (Bridge()
...     .generate("Scene 1")
...     .generate("Scene 2")
...     .generate("Scene 3")
...     .stitch())

Stitch without overlap:

>>> bridge = bridge.stitch(overlap=0.0)

Stitch with longer transitions:

>>> bridge = bridge.stitch(overlap=2.5)
Note

After stitching, the media queue contains only the final stitched video.

Source code in src/veotools/api/bridge.py
def stitch(self, overlap: float = 1.0) -> 'Bridge':
    """Stitch all videos in the queue into a single continuous video.

    Combines all video files in the media queue into one seamless video.
    Non-video files (images) are automatically filtered out. The result
    replaces the entire media queue.

    Args:
        overlap: Duration in seconds to trim from the end of each video
            (except the last) for smooth transitions. Defaults to 1.0.

    Returns:
        Bridge: Self for method chaining.

    Raises:
        ValueError: If fewer than 2 videos are available for stitching.

    Examples:
        Stitch with default overlap:
        >>> bridge = (Bridge()
        ...     .generate("Scene 1")
        ...     .generate("Scene 2")
        ...     .generate("Scene 3")
        ...     .stitch())

        Stitch without overlap:
        >>> bridge = bridge.stitch(overlap=0.0)

        Stitch with longer transitions:
        >>> bridge = bridge.stitch(overlap=2.5)

    Note:
        After stitching, the media queue contains only the final stitched video.
    """
    if len(self.media_queue) < 2:
        raise ValueError("Need at least 2 videos to stitch")

    video_paths = [
        p for p in self.media_queue 
        if p.suffix.lower() in ['.mp4', '.avi', '.mov', '.mkv']
    ]

    if len(video_paths) < 2:
        raise ValueError("Need at least 2 videos to stitch")

    step = self.workflow.add_step("stitch", {
        "videos": [str(p) for p in video_paths],
        "overlap": overlap
    })

    result = stitch_videos(
        video_paths,
        overlap=overlap,
        on_progress=self._on_progress
    )

    step.result = result
    self.results.append(result)

    if result.path:
        self.media_queue = [result.path]

    return self
save
save(output_path: Optional[Union[str, Path]] = None) -> Path

Save the final result to a specified path or return the current path.

Saves the most recent media file in the queue to the specified output path, or returns the current path if no output path is provided.

PARAMETER DESCRIPTION
output_path

Optional destination path. If provided, copies the current result to this location. If None, returns the current file path.

TYPE: Optional[Union[str, Path]] DEFAULT: None

RETURNS DESCRIPTION
Path

The path where the final result is located.

TYPE: Path

RAISES DESCRIPTION
ValueError

If no media is available to save.

Examples:

Save to custom location:

>>> final_path = bridge.save("my_video.mp4")
>>> print(f"Video saved to: {final_path}")

Get current result path:

>>> current_path = bridge.save()
>>> print(f"Current result: {current_path}")

Save with Path object:

>>> output_dir = Path("outputs")
>>> final_path = bridge.save(output_dir / "final_video.mp4")
Source code in src/veotools/api/bridge.py
def save(self, output_path: Optional[Union[str, Path]] = None) -> Path:
    """Save the final result to a specified path or return the current path.

    Saves the most recent media file in the queue to the specified output path,
    or returns the current path if no output path is provided.

    Args:
        output_path: Optional destination path. If provided, copies the current
            result to this location. If None, returns the current file path.

    Returns:
        Path: The path where the final result is located.

    Raises:
        ValueError: If no media is available to save.

    Examples:
        Save to custom location:
        >>> final_path = bridge.save("my_video.mp4")
        >>> print(f"Video saved to: {final_path}")

        Get current result path:
        >>> current_path = bridge.save()
        >>> print(f"Current result: {current_path}")

        Save with Path object:
        >>> output_dir = Path("outputs")
        >>> final_path = bridge.save(output_dir / "final_video.mp4")
    """
    if not self.media_queue:
        raise ValueError("No media to save")

    last_media = self.media_queue[-1]

    if output_path:
        output_path = Path(output_path)
        import shutil
        shutil.copy2(last_media, output_path)
        return output_path

    return last_media
get_workflow
get_workflow() -> Workflow

Get the workflow object containing all performed operations.

RETURNS DESCRIPTION
Workflow

The workflow tracking all operations and their parameters.

TYPE: Workflow

Examples:

>>> bridge = Bridge("project").generate("A scene").stitch()
>>> workflow = bridge.get_workflow()
>>> print(workflow.name)
Source code in src/veotools/api/bridge.py
def get_workflow(self) -> Workflow:
    """Get the workflow object containing all performed operations.

    Returns:
        Workflow: The workflow tracking all operations and their parameters.

    Examples:
        >>> bridge = Bridge("project").generate("A scene").stitch()
        >>> workflow = bridge.get_workflow()
        >>> print(workflow.name)
    """
    return self.workflow
to_dict
to_dict() -> dict

Convert the workflow to a dictionary representation.

RETURNS DESCRIPTION
dict

Dictionary containing workflow steps and metadata.

TYPE: dict

Examples:

>>> bridge = Bridge("test").generate("Scene")
>>> workflow_dict = bridge.to_dict()
>>> print(workflow_dict.keys())
Source code in src/veotools/api/bridge.py
def to_dict(self) -> dict:
    """Convert the workflow to a dictionary representation.

    Returns:
        dict: Dictionary containing workflow steps and metadata.

    Examples:
        >>> bridge = Bridge("test").generate("Scene")
        >>> workflow_dict = bridge.to_dict()
        >>> print(workflow_dict.keys())
    """
    return self.workflow.to_dict()
clear
clear() -> Bridge

Clear the media queue, removing all queued media files.

RETURNS DESCRIPTION
Bridge

Self for method chaining.

TYPE: Bridge

Examples:

>>> bridge = Bridge().add_media(["a.mp4", "b.mp4"]).clear()
>>> # Media queue is now empty
Source code in src/veotools/api/bridge.py
def clear(self) -> 'Bridge':
    """Clear the media queue, removing all queued media files.

    Returns:
        Bridge: Self for method chaining.

    Examples:
        >>> bridge = Bridge().add_media(["a.mp4", "b.mp4"]).clear()
        >>> # Media queue is now empty
    """
    self.media_queue.clear()
    return self

Functions

MCP API Module

veotools.api.mcp_api

MCP-friendly API wrappers for Veo Tools.

This module exposes small, deterministic, JSON-first functions intended for use in Model Context Protocol (MCP) servers. It builds on top of the existing blocking SDK functions by providing a non-blocking job lifecycle:

  • generate_start(params) -> submits a generation job and returns immediately
  • generate_get(job_id) -> fetches job status/progress/result
  • generate_cancel(job_id) -> requests cancellation for a running job

It also provides environment/system helpers:

  • preflight() -> checks API key, ffmpeg, and filesystem permissions
  • version() -> returns package and key dependency versions

Design notes:

  • Jobs are persisted as JSON files under StorageManager's base directory ("output/ops"). This allows stateless MCP handlers to inspect progress and results across processes.
  • A background thread runs the blocking generation call and updates job state via the JobStore. Cancellation is cooperative: the on_progress callback checks a cancel flag in the persisted job state and raises Cancelled.
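A minimal end-to-end sketch of this lifecycle, assuming GEMINI_API_KEY is set and the functions are imported from veotools.api.mcp_api (the source path shown throughout this section):

import time

from veotools.api.mcp_api import generate_cancel, generate_get, generate_start

job = generate_start({"prompt": "A sunset over mountains"})
deadline = time.time() + 600  # arbitrary 10-minute budget for this sketch

while True:
    status = generate_get(job["job_id"])
    if status["status"] in ("complete", "failed", "cancelled"):
        break
    if time.time() > deadline:
        # Cooperative cancellation: takes effect at the next progress checkpoint.
        generate_cancel(job["job_id"])
    time.sleep(5)

print(status.get("result") or status.get("error_message"))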

FUNCTION DESCRIPTION
preflight

Check environment and system prerequisites for video generation.

version

Report package and dependency versions in a JSON-friendly format.

generate_start

Start a video generation job and return immediately with job details.

generate_get

Get the current status and results of a generation job.

generate_cancel

Request cancellation of a running generation job.

list_models

List available video generation models with their capabilities.

cache_create_from_files

Create a cached content handle from local file paths.

cache_get

Retrieve cached content metadata by cache name.

cache_list

List all cached content entries with their metadata.

cache_update

Update TTL or expiration time for a cached content entry.

cache_delete

Delete a cached content entry by name.

Classes

Cancelled

Bases: Exception

Exception raised to signal cooperative cancellation of a generation job.

This exception is raised internally when a job's cancel_requested flag is set to True, allowing for graceful termination of long-running operations.

JobRecord dataclass

JobRecord(job_id: str, status: str, progress: int, message: str, created_at: float, updated_at: float, cancel_requested: bool, kind: str, params: Dict[str, Any], result: Optional[Dict[str, Any]] = None, error_code: Optional[str] = None, error_message: Optional[str] = None, remote_operation_id: Optional[str] = None)

Data class representing a generation job's state and metadata.

Stores all information about a generation job including status, progress, parameters, results, and error information. Used for job persistence and state management across processes.

ATTRIBUTE DESCRIPTION
job_id

Unique identifier for the job.

TYPE: str

status

Current job status (pending|processing|complete|failed|cancelled).

TYPE: str

progress

Progress percentage (0-100).

TYPE: int

message

Current status message.

TYPE: str

created_at

Unix timestamp when job was created.

TYPE: float

updated_at

Unix timestamp of last update.

TYPE: float

cancel_requested

Whether cancellation has been requested.

TYPE: bool

kind

Generation type (text|image|video).

TYPE: str

params

Dictionary of generation parameters.

TYPE: Dict[str, Any]

result

Optional result data when job completes.

TYPE: Optional[Dict[str, Any]]

error_code

Optional error code if job fails.

TYPE: Optional[str]

error_message

Optional error description if job fails.

TYPE: Optional[str]

remote_operation_id

Optional ID from the remote API operation.

TYPE: Optional[str]

METHOD DESCRIPTION
to_json

Convert the job record to JSON string representation.

Functions
to_json
to_json() -> str

Convert the job record to JSON string representation.

RETURNS DESCRIPTION
str

JSON string representation of the job record.

TYPE: str

Source code in src/veotools/api/mcp_api.py
def to_json(self) -> str:
    """Convert the job record to JSON string representation.

    Returns:
        str: JSON string representation of the job record.
    """
    return json.dumps(asdict(self), ensure_ascii=False)

JobStore

JobStore(storage: Optional[StorageManager] = None)

File-based persistence layer for generation jobs.

Manages storage and retrieval of job records using JSON files in the filesystem. Each job is stored as a separate JSON file under the output/ops/{job_id}.json path structure.

This design allows stateless MCP handlers to inspect job progress and results across different processes and sessions.

ATTRIBUTE DESCRIPTION
storage

StorageManager instance for base path management.

ops_dir

Directory path where job files are stored.

Initialize the job store with optional custom storage manager.

PARAMETER DESCRIPTION
storage

Optional StorageManager instance. If None, creates a new one.

TYPE: Optional[StorageManager] DEFAULT: None

METHOD DESCRIPTION
create

Create a new job record on disk.

read

Read a job record from disk.

update

Update a job record with new values and persist to disk.

request_cancel

Request cancellation of a job by setting the cancel flag.

Source code in src/veotools/api/mcp_api.py
def __init__(self, storage: Optional[StorageManager] = None):
    """Initialize the job store with optional custom storage manager.

    Args:
        storage: Optional StorageManager instance. If None, creates a new one.
    """
    self.storage = storage or StorageManager()
    self.ops_dir = self.storage.base_path / "ops"
    self.ops_dir.mkdir(exist_ok=True)
Functions
create
create(record: JobRecord) -> None

Create a new job record on disk.

PARAMETER DESCRIPTION
record

JobRecord instance to persist.

TYPE: JobRecord

RAISES DESCRIPTION
OSError

If file creation fails.

Source code in src/veotools/api/mcp_api.py
def create(self, record: JobRecord) -> None:
    """Create a new job record on disk.

    Args:
        record: JobRecord instance to persist.

    Raises:
        OSError: If file creation fails.
    """
    path = self._path(record.job_id)
    path.write_text(record.to_json(), encoding="utf-8")
read
read(job_id: str) -> Optional[JobRecord]

Read a job record from disk.

PARAMETER DESCRIPTION
job_id

The unique job identifier.

TYPE: str

RETURNS DESCRIPTION
JobRecord

The job record if found, None otherwise.

TYPE: Optional[JobRecord]

RAISES DESCRIPTION
JSONDecodeError

If the stored JSON is invalid.

Source code in src/veotools/api/mcp_api.py
def read(self, job_id: str) -> Optional[JobRecord]:
    """Read a job record from disk.

    Args:
        job_id: The unique job identifier.

    Returns:
        JobRecord: The job record if found, None otherwise.

    Raises:
        json.JSONDecodeError: If the stored JSON is invalid.
    """
    path = self._path(job_id)
    if not path.exists():
        return None
    data = json.loads(path.read_text(encoding="utf-8"))
    return JobRecord(**data)
update
update(record: JobRecord, **updates: Any) -> JobRecord

Update a job record with new values and persist to disk.

PARAMETER DESCRIPTION
record

The JobRecord instance to update.

TYPE: JobRecord

**updates

Key-value pairs of attributes to update.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
JobRecord

The updated job record.

TYPE: JobRecord

RAISES DESCRIPTION
OSError

If file write fails.

Source code in src/veotools/api/mcp_api.py
def update(self, record: JobRecord, **updates: Any) -> JobRecord:
    """Update a job record with new values and persist to disk.

    Args:
        record: The JobRecord instance to update.
        **updates: Key-value pairs of attributes to update.

    Returns:
        JobRecord: The updated job record.

    Raises:
        OSError: If file write fails.
    """
    for k, v in updates.items():
        setattr(record, k, v)
    record.updated_at = time.time()
    self._path(record.job_id).write_text(record.to_json(), encoding="utf-8")
    return record
request_cancel
request_cancel(job_id: str) -> Optional[JobRecord]

Request cancellation of a job by setting the cancel flag.

PARAMETER DESCRIPTION
job_id

The unique job identifier.

TYPE: str

RETURNS DESCRIPTION
JobRecord

Updated job record if found, None otherwise.

TYPE: Optional[JobRecord]

RAISES DESCRIPTION
OSError

If file write fails.

Source code in src/veotools/api/mcp_api.py
def request_cancel(self, job_id: str) -> Optional[JobRecord]:
    """Request cancellation of a job by setting the cancel flag.

    Args:
        job_id: The unique job identifier.

    Returns:
        JobRecord: Updated job record if found, None otherwise.

    Raises:
        OSError: If file write fails.
    """
    record = self.read(job_id)
    if not record:
        return None
    record.cancel_requested = True
    record.updated_at = time.time()
    self._path(job_id).write_text(record.to_json(), encoding="utf-8")
    return record
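Because records are plain JSON files, they can also be inspected without the class. A hedged sketch, assuming the default output/ base directory and the output/ops/<job_id>.json layout described above:

import json
from pathlib import Path

def read_job_record(job_id: str, base: Path = Path("output")) -> dict:
    # Mirrors the persistence layout used by JobStore: output/ops/<job_id>.json
    return json.loads((base / "ops" / f"{job_id}.json").read_text(encoding="utf-8"))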

Functions

preflight

preflight() -> Dict[str, Any]

Check environment and system prerequisites for video generation.

Performs comprehensive system checks to ensure all required dependencies and configurations are available for successful video generation operations. This includes API key validation, FFmpeg availability, and filesystem permissions.

RETURNS DESCRIPTION
dict

JSON-serializable dictionary containing:

  • ok (bool): Overall system readiness status
  • gemini_api_key (bool): Whether GEMINI_API_KEY is set
  • ffmpeg (dict): FFmpeg installation status and version info
  • write_permissions (bool): Whether output directory is writable
  • base_path (str): Absolute path to the base output directory

TYPE: Dict[str, Any]

Examples:

>>> status = preflight()
>>> if not status['ok']:
...     print("System not ready for generation")
...     if not status['gemini_api_key']:
...         print("Please set GEMINI_API_KEY environment variable")
...     if not status['ffmpeg']['installed']:
...         print("Please install FFmpeg for video processing")
>>> else:
...     print(f"System ready! Output directory: {status['base_path']}")
Note

This function is designed to be called before starting any video generation operations to ensure the environment is properly configured.

Source code in src/veotools/api/mcp_api.py
def preflight() -> Dict[str, Any]:
    """Check environment and system prerequisites for video generation.

    Performs comprehensive system checks to ensure all required dependencies
    and configurations are available for successful video generation operations.
    This includes API key validation, FFmpeg availability, and filesystem permissions.

    Returns:
        dict: JSON-serializable dictionary containing:
            - ok (bool): Overall system readiness status
            - gemini_api_key (bool): Whether GEMINI_API_KEY is set
            - ffmpeg (dict): FFmpeg installation status and version info
            - write_permissions (bool): Whether output directory is writable
            - base_path (str): Absolute path to the base output directory

    Examples:
        >>> status = preflight()
        >>> if not status['ok']:
        ...     print("System not ready for generation")
        ...     if not status['gemini_api_key']:
        ...         print("Please set GEMINI_API_KEY environment variable")
        ...     if not status['ffmpeg']['installed']:
        ...         print("Please install FFmpeg for video processing")
        >>> else:
        ...     print(f"System ready! Output directory: {status['base_path']}")

    Note:
        This function is designed to be called before starting any video generation
        operations to ensure the environment is properly configured.
    """
    """Check environment and system prerequisites.

    Returns a JSON-serializable dict with pass/fail details.
    """
    storage = StorageManager()
    base = storage.base_path

    # API key
    api_key_present = bool(os.getenv("GEMINI_API_KEY"))

    # ffmpeg
    ffmpeg_installed = False
    ffmpeg_version = None
    try:
        res = subprocess.run(["ffmpeg", "-version"], capture_output=True, text=True)
        if res.returncode == 0:
            ffmpeg_installed = True
            first_line = (res.stdout or res.stderr).splitlines()[0] if (res.stdout or res.stderr) else ""
            ffmpeg_version = first_line.strip()
    except FileNotFoundError:
        ffmpeg_installed = False

    # write permissions
    write_permissions = False
    try:
        base.mkdir(exist_ok=True)
        test_file = base / ".write_test"
        test_file.write_text("ok", encoding="utf-8")
        test_file.unlink()
        write_permissions = True
    except Exception:
        write_permissions = False

    return {
        "ok": api_key_present and write_permissions,
        "gemini_api_key": api_key_present,
        "ffmpeg": {"installed": ffmpeg_installed, "version": ffmpeg_version},
        "write_permissions": write_permissions,
        "base_path": str(base.resolve()),
    }

version

version() -> Dict[str, Any]

Report package and dependency versions in a JSON-friendly format.

Collects version information for veotools and its key dependencies, providing a comprehensive overview of the current software environment. Useful for debugging and support purposes.

RETURNS DESCRIPTION
dict

Dictionary containing:

  • veotools (str|None): veotools package version
  • dependencies (dict): Versions of key Python packages:
      • google-genai: Google GenerativeAI library version
      • opencv-python: OpenCV library version
      • requests: HTTP requests library version
      • python-dotenv: Environment file loader version
  • ffmpeg (str|None): FFmpeg version string if available

TYPE: Dict[str, Any]

Examples:

>>> versions = version()
>>> print(f"veotools: {versions['veotools']}")
>>> print(f"Google GenAI: {versions['dependencies']['google-genai']}")
>>> if versions['ffmpeg']:
...     print(f"FFmpeg: {versions['ffmpeg']}")
>>> else:
...     print("FFmpeg not available")
Note

Returns None for any package that cannot be found or queried. This is expected behavior and not an error condition.

Source code in src/veotools/api/mcp_api.py
def version() -> Dict[str, Any]:
    """Report package and dependency versions in a JSON-friendly format.

    Collects version information for veotools and its key dependencies,
    providing a comprehensive overview of the current software environment.
    Useful for debugging and support purposes.

    Returns:
        dict: Dictionary containing:
            - veotools (str|None): veotools package version
            - dependencies (dict): Versions of key Python packages:
                - google-genai: Google GenerativeAI library version
                - opencv-python: OpenCV library version  
                - requests: HTTP requests library version
                - python-dotenv: Environment file loader version
            - ffmpeg (str|None): FFmpeg version string if available

    Examples:
        >>> versions = version()
        >>> print(f"veotools: {versions['veotools']}")
        >>> print(f"Google GenAI: {versions['dependencies']['google-genai']}")
        >>> if versions['ffmpeg']:
        ...     print(f"FFmpeg: {versions['ffmpeg']}")
        >>> else:
        ...     print("FFmpeg not available")

    Note:
        Returns None for any package that cannot be found or queried.
        This is expected behavior and not an error condition.
    """
    """Report package and dependency versions in a JSON-friendly format."""
    from importlib.metadata import PackageNotFoundError, version as pkg_version
    import veotools as veo

    def safe_ver(name: str) -> Optional[str]:
        try:
            return pkg_version(name)
        except PackageNotFoundError:
            return None
        except Exception:
            return None

    ffmpeg_info = None
    try:
        res = subprocess.run(["ffmpeg", "-version"], capture_output=True, text=True)
        if res.returncode == 0:
            ffmpeg_info = (res.stdout or res.stderr).splitlines()[0].strip()
    except Exception:
        ffmpeg_info = None

    return {
        "veotools": getattr(veo, "__version__", None),
        "dependencies": {
            "google-genai": safe_ver("google-genai"),
            "opencv-python": safe_ver("opencv-python"),
            "requests": safe_ver("requests"),
            "python-dotenv": safe_ver("python-dotenv"),
        },
        "ffmpeg": ffmpeg_info,
    }

generate_start

generate_start(params: Dict[str, Any]) -> Dict[str, Any]

Start a video generation job and return immediately with job details.

Initiates a video generation job in the background and returns immediately with job tracking information. The actual generation runs asynchronously and can be monitored using generate_get().

PARAMETER DESCRIPTION
params

Generation parameters dictionary containing:

  • prompt (str): Required text description for generation
  • model (str, optional): Model to use (defaults to veo-3.0-fast-generate-preview)
  • input_image_path (str, optional): Path to input image for image-to-video
  • input_video_path (str, optional): Path to input video for continuation
  • extract_at (float, optional): Time offset for video continuation
  • options (dict, optional): Additional model-specific options

TYPE: Dict[str, Any]

RETURNS DESCRIPTION
dict

Job information containing:

  • job_id (str): Unique job identifier for tracking
  • status (str): Initial job status ("processing")
  • progress (int): Initial progress (0)
  • message (str): Status message
  • kind (str): Generation type (text|image|video)
  • created_at (float): Job creation timestamp

TYPE: Dict[str, Any]

RAISES DESCRIPTION
ValueError

If required parameters are missing or invalid.

FileNotFoundError

If input media files don't exist.

Examples:

Start text-to-video generation:

>>> job = generate_start({"prompt": "A sunset over mountains"})
>>> print(f"Job started: {job['job_id']}")

Start image-to-video generation:

>>> job = generate_start({
...     "prompt": "The person starts walking",
...     "input_image_path": "photo.jpg"
... })

Start video continuation:

>>> job = generate_start({
...     "prompt": "The action continues",
...     "input_video_path": "scene1.mp4",
...     "extract_at": -2.0
... })

Start with custom model and options:

>>> job = generate_start({
...     "prompt": "A dancing robot",
...     "model": "veo-2.0",
...     "options": {"duration_seconds": 10, "enhance": True}
... })
Note

The job runs in a background thread. Use generate_get() to check progress and retrieve results when complete.

Source code in src/veotools/api/mcp_api.py
def generate_start(params: Dict[str, Any]) -> Dict[str, Any]:
    """Start a video generation job and return immediately with job details.

    Initiates a video generation job in the background and returns immediately
    with job tracking information. The actual generation runs asynchronously
    and can be monitored using generate_get().

    Args:
        params: Generation parameters dictionary containing:
            - prompt (str): Required text description for generation
            - model (str, optional): Model to use (defaults to veo-3.0-fast-generate-preview)
            - input_image_path (str, optional): Path to input image for image-to-video
            - input_video_path (str, optional): Path to input video for continuation
            - extract_at (float, optional): Time offset for video continuation
            - options (dict, optional): Additional model-specific options

    Returns:
        dict: Job information containing:
            - job_id (str): Unique job identifier for tracking
            - status (str): Initial job status ("processing")
            - progress (int): Initial progress (0)
            - message (str): Status message
            - kind (str): Generation type (text|image|video)
            - created_at (float): Job creation timestamp

    Raises:
        ValueError: If required parameters are missing or invalid.
        FileNotFoundError: If input media files don't exist.

    Examples:
        Start text-to-video generation:
        >>> job = generate_start({"prompt": "A sunset over mountains"})
        >>> print(f"Job started: {job['job_id']}")

        Start image-to-video generation:
        >>> job = generate_start({
        ...     "prompt": "The person starts walking",
        ...     "input_image_path": "photo.jpg"
        ... })

        Start video continuation:
        >>> job = generate_start({
        ...     "prompt": "The action continues",
        ...     "input_video_path": "scene1.mp4",
        ...     "extract_at": -2.0
        ... })

        Start with custom model and options:
        >>> job = generate_start({
        ...     "prompt": "A dancing robot",
        ...     "model": "veo-2.0",
        ...     "options": {"duration_seconds": 10, "enhance": True}
        ... })

    Note:
        The job runs in a background thread. Use generate_get() to check
        progress and retrieve results when complete.
    """
    """Start a generation job and return immediately.

    Expected params keys:
      - prompt: str (required)
      - model: str (optional; default used by underlying SDK)
      - input_image_path: str (optional)
      - input_video_path: str (optional)
      - extract_at: float (optional; for video continuation)
      - options: dict (optional; forwarded to SDK functions)
    """
    _validate_generate_inputs(params)

    kind = "text"
    if params.get("input_image_path"):
        kind = "image"
    elif params.get("input_video_path"):
        kind = "video"

    store = JobStore()
    record = _build_job(kind, params)
    store.create(record)

    # Start background worker
    worker = threading.Thread(target=_run_generation, args=(record.job_id,), daemon=True)
    worker.start()

    return {
        "job_id": record.job_id,
        "status": record.status,
        "progress": record.progress,
        "message": record.message,
        "kind": record.kind,
        "created_at": record.created_at,
    }

generate_get

generate_get(job_id: str) -> Dict[str, Any]

Get the current status and results of a generation job.

Retrieves the current state of a generation job including progress, status, and results if complete. This function can be called repeatedly to monitor job progress.

PARAMETER DESCRIPTION
job_id

The unique job identifier returned by generate_start().

TYPE: str

RETURNS DESCRIPTION
dict

Job status information containing:

  • job_id (str): The job identifier
  • status (str): Current status (processing|complete|failed|cancelled)
  • progress (int): Progress percentage (0-100)
  • message (str): Current status message
  • kind (str): Generation type (text|image|video)
  • remote_operation_id (str|None): Remote API operation ID if available
  • updated_at (float): Last update timestamp
  • result (dict, optional): Generation results when status is "complete"
  • error_code (str, optional): Error code if status is "failed"
  • error_message (str, optional): Error description if status is "failed"

TYPE: Dict[str, Any]

Dict[str, Any]

If job_id is not found, returns:

  • error_code (str): "VALIDATION"
  • error_message (str): Error description

Examples:

Check job progress:

>>> status = generate_get(job_id)
>>> print(f"Progress: {status['progress']}% - {status['message']}")

Wait for completion:

>>> import time
>>> while True:
...     status = generate_get(job_id)
...     if status['status'] == 'complete':
...         print(f"Video ready: {status['result']['path']}")
...         break
...     elif status['status'] == 'failed':
...         print(f"Generation failed: {status['error_message']}")
...         break
...     time.sleep(5)

Handle different outcomes:

>>> status = generate_get(job_id)
>>> if status['status'] == 'complete':
...     video_path = status['result']['path']
...     metadata = status['result']['metadata']
...     print(f"Success! Video: {video_path}")
...     print(f"Duration: {metadata['duration']}s")
... elif status['status'] == 'failed':
...     print(f"Error ({status['error_code']}): {status['error_message']}")
... else:
...     print(f"Still processing: {status['progress']}%")
Source code in src/veotools/api/mcp_api.py
def generate_get(job_id: str) -> Dict[str, Any]:
    """Get the current status and results of a generation job.

    Retrieves the current state of a generation job including progress,
    status, and results if complete. This function can be called repeatedly
    to monitor job progress.

    Args:
        job_id: The unique job identifier returned by generate_start().

    Returns:
        dict: Job status information containing:
            - job_id (str): The job identifier
            - status (str): Current status (processing|complete|failed|cancelled)
            - progress (int): Progress percentage (0-100)
            - message (str): Current status message
            - kind (str): Generation type (text|image|video)
            - remote_operation_id (str|None): Remote API operation ID if available
            - updated_at (float): Last update timestamp
            - result (dict, optional): Generation results when status is "complete"
            - error_code (str, optional): Error code if status is "failed"
            - error_message (str, optional): Error description if status is "failed"

        If job_id is not found, returns:
            - error_code (str): "VALIDATION"
            - error_message (str): Error description

    Examples:
        Check job progress:
        >>> status = generate_get(job_id)
        >>> print(f"Progress: {status['progress']}% - {status['message']}")

        Wait for completion:
        >>> import time
        >>> while True:
        ...     status = generate_get(job_id)
        ...     if status['status'] == 'complete':
        ...         print(f"Video ready: {status['result']['path']}")
        ...         break
        ...     elif status['status'] == 'failed':
        ...         print(f"Generation failed: {status['error_message']}")
        ...         break
        ...     time.sleep(5)

        Handle different outcomes:
        >>> status = generate_get(job_id)
        >>> if status['status'] == 'complete':
        ...     video_path = status['result']['path']
        ...     metadata = status['result']['metadata']
        ...     print(f"Success! Video: {video_path}")
        ...     print(f"Duration: {metadata['duration']}s")
        ... elif status['status'] == 'failed':
        ...     print(f"Error ({status['error_code']}): {status['error_message']}")
        ... else:
        ...     print(f"Still processing: {status['progress']}%")
    """
    """Get the current status of a generation job."""
    store = JobStore()
    record = store.read(job_id)
    if not record:
        return {"error_code": "VALIDATION", "error_message": f"job_id not found: {job_id}"}

    payload: Dict[str, Any] = {
        "job_id": record.job_id,
        "status": record.status,
        "progress": record.progress,
        "message": record.message,
        "kind": record.kind,
        "remote_operation_id": record.remote_operation_id,
        "updated_at": record.updated_at,
    }
    if record.result:
        payload["result"] = record.result
    if record.error_code:
        payload["error_code"] = record.error_code
        payload["error_message"] = record.error_message
    return payload

generate_cancel

generate_cancel(job_id: str) -> Dict[str, Any]

Request cancellation of a running generation job.

Attempts to cancel a generation job that is currently processing. Cancellation is cooperative - the job will stop at the next progress update checkpoint. Already completed or failed jobs cannot be cancelled.

PARAMETER DESCRIPTION
job_id

The unique job identifier to cancel.

TYPE: str

RETURNS DESCRIPTION
dict

Cancellation response containing:

  • job_id (str): The job identifier
  • status (str): "cancelling" if request was accepted

TYPE: Dict[str, Any]

Dict[str, Any]

If job_id is not found, returns:

  • error_code (str): "VALIDATION"
  • error_message (str): Error description

Examples:

Cancel a running job:

>>> response = generate_cancel(job_id)
>>> if 'error_code' not in response:
...     print(f"Cancellation requested for job {response['job_id']}")
... else:
...     print(f"Cancel failed: {response['error_message']}")

Check if cancellation succeeded:

>>> generate_cancel(job_id)
>>> time.sleep(2)
>>> status = generate_get(job_id)
>>> if status['status'] == 'cancelled':
...     print("Job successfully cancelled")
Note

Cancellation may not be immediate; the job stops at the next progress checkpoint. Monitor with generate_get() to confirm cancellation.
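
Because cancellation is cooperative, polling until the status actually changes is more reliable than a single sleep. A minimal sketch (the retry count and interval are illustrative choices, not API requirements):

>>> import time
>>> generate_cancel(job_id)
>>> for _ in range(10):
...     status = generate_get(job_id)
...     if status['status'] in ('cancelled', 'complete', 'failed'):
...         break
...     time.sleep(2)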

Source code in src/veotools/api/mcp_api.py
def generate_cancel(job_id: str) -> Dict[str, Any]:
    """Request cancellation of a running generation job.

    Attempts to cancel a generation job that is currently processing.
    Cancellation is cooperative - the job will stop at the next progress
    update checkpoint. Already completed or failed jobs cannot be cancelled.

    Args:
        job_id: The unique job identifier to cancel.

    Returns:
        dict: Cancellation response containing:
            - job_id (str): The job identifier
            - status (str): "cancelling" if request was accepted

        If job_id is not found, returns:
            - error_code (str): "VALIDATION"
            - error_message (str): Error description

    Examples:
        Cancel a running job:
        >>> response = generate_cancel(job_id)
        >>> if 'error_code' not in response:
        ...     print(f"Cancellation requested for job {response['job_id']}")
        ... else:
        ...     print(f"Cancel failed: {response['error_message']}")

        Check if cancellation succeeded:
        >>> generate_cancel(job_id)
        >>> time.sleep(2)
        >>> status = generate_get(job_id)
        >>> if status['status'] == 'cancelled':
        ...     print("Job successfully cancelled")

    Note:
        Cancellation may not be immediate - the job will stop at the next
        progress checkpoint. Monitor with generate_get() to confirm cancellation.
    """
    """Request cancellation of a running generation job."""
    store = JobStore()
    record = store.request_cancel(job_id)
    if not record:
        return {"error_code": "VALIDATION", "error_message": f"job_id not found: {job_id}"}
    return {"job_id": job_id, "status": "cancelling"}

list_models

list_models(include_remote: bool = True) -> Dict[str, Any]

List available video generation models with their capabilities.

Retrieves information about available Veo models, including their capabilities, default settings, and performance characteristics. Combines the static model registry with optional remote model discovery.

PARAMETER DESCRIPTION
include_remote

Whether to include models discovered from the remote API. If True, attempts to fetch additional model information from Google's API. If False, returns only the static model registry. Defaults to True.

TYPE: bool DEFAULT: True

RETURNS DESCRIPTION
dict

Model information containing:
- models (list): List of model dictionaries, each containing:
    - id (str): Model identifier (e.g., "veo-3.0-fast-generate-preview")
    - name (str): Human-readable model name
    - capabilities (dict): Feature flags:
        - supports_duration (bool): Can specify custom duration
        - supports_enhance (bool): Can enhance prompts
        - supports_fps (bool): Can specify frame rate
        - supports_audio (bool): Can generate audio
    - default_duration (float|None): Default video duration in seconds
    - generation_time (float|None): Estimated generation time in seconds
    - source (str): Data source ("static", "remote", or "static+remote")

TYPE: Dict[str, Any]

Examples:

List all available models:

>>> models = list_models()
>>> for model in models['models']:
...     print(f"{model['name']} ({model['id']})")
...     if model['capabilities']['supports_duration']:
...         print(f"  Default duration: {model['default_duration']}s")

Find models with specific capabilities:

>>> models = list_models()
>>> audio_models = [
...     m for m in models['models']
...     if m['capabilities']['supports_audio']
... ]
>>> print(f"Found {len(audio_models)} models with audio support")

Use only static model registry:

>>> models = list_models(include_remote=False)
>>> static_models = [m for m in models['models'] if m['source'] == 'static']
Note

Results are cached for 10 minutes to improve performance. Remote model discovery failures are silently ignored; the static registry is always available.
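
Because listings are cached on disk for 10 minutes, a stale result can be refreshed by removing the cache file before the next call. Per the source below, the file lives at ops_dir / "models.json" under the JobStore directory; this sketch assumes JobStore is importable alongside list_models:

>>> cache_path = JobStore().ops_dir / "models.json"
>>> cache_path.unlink(missing_ok=True)
>>> models = list_models()  # repopulates the cache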

Source code in src/veotools/api/mcp_api.py
def list_models(include_remote: bool = True) -> Dict[str, Any]:
    """List available video generation models with their capabilities.

    Retrieves information about available Veo models including their capabilities,
    default settings, and performance characteristics. Combines static model
    registry with optional remote model discovery.

    Args:
        include_remote: Whether to include models discovered from the remote API.
            If True, attempts to fetch additional model information from Google's API.
            If False, returns only the static model registry. Defaults to True.

    Returns:
        dict: Model information containing:
            - models (list): List of model dictionaries, each containing:
                - id (str): Model identifier (e.g., "veo-3.0-fast-generate-preview")
                - name (str): Human-readable model name
                - capabilities (dict): Feature flags:
                    - supports_duration (bool): Can specify custom duration
                    - supports_enhance (bool): Can enhance prompts
                    - supports_fps (bool): Can specify frame rate
                    - supports_audio (bool): Can generate audio
                - default_duration (float|None): Default video duration in seconds
                - generation_time (float|None): Estimated generation time in seconds
                - source (str): Data source ("static", "remote", or "static+remote")

    Examples:
        List all available models:
        >>> models = list_models()
        >>> for model in models['models']:
        ...     print(f"{model['name']} ({model['id']})")
        ...     if model['capabilities']['supports_duration']:
        ...         print(f"  Default duration: {model['default_duration']}s")

        Find models with specific capabilities:
        >>> models = list_models()
        >>> audio_models = [
        ...     m for m in models['models']
        ...     if m['capabilities']['supports_audio']
        ... ]
        >>> print(f"Found {len(audio_models)} models with audio support")

        Use only static model registry:
        >>> models = list_models(include_remote=False)
        >>> static_models = [m for m in models['models'] if m['source'] == 'static']

    Note:
        Results are cached for 10 minutes to improve performance. Remote model
        discovery failures are silently ignored - static registry is always available.
    """
    """List available models and capability flags.

    Returns a JSON dict: { models: [ {id, name, capabilities, default_duration, generation_time, source} ] }
    """
    models: Dict[str, Dict[str, Any]] = {}

    # Seed from static registry
    for model_id, cfg in ModelConfig.MODELS.items():
        models[model_id] = {
            "id": model_id,
            "name": cfg.get("name", model_id),
            "capabilities": {
                "supports_duration": cfg.get("supports_duration", False),
                "supports_enhance": cfg.get("supports_enhance", False),
                "supports_fps": cfg.get("supports_fps", False),
                "supports_audio": cfg.get("supports_audio", False),
            },
            "default_duration": cfg.get("default_duration"),
            "generation_time": cfg.get("generation_time"),
            "source": "static",
        }

    # Optionally merge from remote discovery (best-effort)
    if include_remote:
        try:
            client = VeoClient().client
            if hasattr(client, "models") and hasattr(client.models, "list"):
                for remote in client.models.list():
                    # Expect names like "models/veo-3.0-fast-generate-preview"
                    raw_name = getattr(remote, "name", "") or ""
                    model_id = raw_name.replace("models/", "") if raw_name else getattr(remote, "base_model_id", None)
                    if not model_id:
                        continue
                    entry = models.get(model_id, {
                        "id": model_id,
                        "name": getattr(remote, "display_name", model_id),
                        "capabilities": {},
                    })
                    entry["source"] = (entry.get("source") or "") + ("+remote" if entry.get("source") else "remote")
                    models[model_id] = entry
        except Exception:
            # Ignore remote discovery errors; static list is sufficient
            pass

    # Basic cache to disk for 10 minutes
    try:
        store = JobStore()
        cache_path = store.ops_dir / "models.json"
        now = time.time()
        if cache_path.exists():
            try:
                cached = json.loads(cache_path.read_text(encoding="utf-8"))
                if now - float(cached.get("updated_at", 0)) < 600:
                    # Cache is still fresh; return the cached payload
                    return cached.get("data", {"models": list(models.values())})
            except Exception:
                pass
        payload = {"models": list(models.values())}
        cache_path.write_text(json.dumps({"updated_at": now, "data": payload}), encoding="utf-8")
        return payload
    except Exception:
        return {"models": list(models.values())}

cache_create_from_files

cache_create_from_files(model: str, files: list[str], system_instruction: Optional[str] = None) -> Dict[str, Any]

Create a cached content handle from local file paths.

Uploads local files to create a cached content context that can be reused across multiple API calls for efficiency. This is particularly useful when working with large files or when making multiple requests with the same context.

PARAMETER DESCRIPTION
model

The model identifier to associate with the cached content.

TYPE: str

files

List of local file paths to upload and cache.

TYPE: list[str]

system_instruction

Optional system instruction to include with the cache.

TYPE: Optional[str] DEFAULT: None

RETURNS DESCRIPTION
dict

Cache creation result containing:
- name (str): Unique cache identifier for future reference
- model (str): The associated model identifier
- system_instruction (str|None): The system instruction if provided
- contents_count (int): Number of files successfully cached

TYPE: Dict[str, Any]

On failure, returns:
- error_code (str): Error classification
- error_message (str): Detailed error description

Examples:

Cache multiple reference images:

>>> result = cache_create_from_files(
...     "veo-3.0-fast-generate-preview",
...     ["ref1.jpg", "ref2.jpg", "ref3.jpg"],
...     "These are reference images for style consistency"
... )
>>> if 'name' in result:
...     cache_name = result['name']
...     print(f"Cache created: {cache_name}")
... else:
...     print(f"Cache creation failed: {result['error_message']}")

Cache video reference:

>>> result = cache_create_from_files(
...     "veo-2.0",
...     ["reference_video.mp4"]
... )
Note

Files are uploaded to Google's servers as part of the caching process. Ensure you have appropriate permissions for the files and comply with Google's usage policies.
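
A typical lifecycle chains the cache helpers together: create the cache, confirm it exists, and extend its lifetime. A sketch (the model ID, file names, and TTL are placeholders):

>>> result = cache_create_from_files(
...     "veo-3.0-fast-generate-preview",
...     ["style_ref1.jpg", "style_ref2.jpg"],
...     "Match this visual style"
... )
>>> if 'name' in result:
...     cache_name = result['name']
...     cache_update(cache_name, ttl_seconds=3600)  # keep the cache for an hour
...     print(cache_get(cache_name))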

Source code in src/veotools/api/mcp_api.py
def cache_create_from_files(model: str, files: list[str], system_instruction: Optional[str] = None) -> Dict[str, Any]:
    """Create a cached content handle from local file paths.

    Uploads local files to create a cached content context that can be reused
    across multiple API calls for efficiency. This is particularly useful when
    working with large files or when making multiple requests with the same context.

    Args:
        model: The model identifier to associate with the cached content.
        files: List of local file paths to upload and cache.
        system_instruction: Optional system instruction to include with the cache.

    Returns:
        dict: Cache creation result containing:
            - name (str): Unique cache identifier for future reference
            - model (str): The associated model identifier
            - system_instruction (str|None): The system instruction if provided
            - contents_count (int): Number of files successfully cached

        On failure, returns:
            - error_code (str): Error classification
            - error_message (str): Detailed error description

    Examples:
        Cache multiple reference images:
        >>> result = cache_create_from_files(
        ...     "veo-3.0-fast-generate-preview",
        ...     ["ref1.jpg", "ref2.jpg", "ref3.jpg"],
        ...     "These are reference images for style consistency"
        ... )
        >>> if 'name' in result:
        ...     cache_name = result['name']
        ...     print(f"Cache created: {cache_name}")
        ... else:
        ...     print(f"Cache creation failed: {result['error_message']}")

        Cache video reference:
        >>> result = cache_create_from_files(
        ...     "veo-2.0",
        ...     ["reference_video.mp4"]
        ... )

    Raises:
        The function catches all exceptions and returns them as error dictionaries
        rather than raising them directly.

    Note:
        Files are uploaded to Google's servers as part of the caching process.
        Ensure you have appropriate permissions for the files and comply with
        Google's usage policies.
    """
    """Create a cached content handle from local file paths.

    Returns { name, model, system_instruction?, contents_count } or { error_code, error_message } on failure.
    """
    try:
        client = VeoClient().client
        uploaded = []
        for f in files:
            p = Path(f)
            if not p.exists():
                return {"error_code": "VALIDATION", "error_message": f"File not found: {f}"}
            uploaded.append(client.files.upload(file=p))
        cfg = types.CreateCachedContentConfig(
            contents=uploaded,
            system_instruction=system_instruction if system_instruction else None,
        )
        cache = client.caches.create(model=model, config=cfg)
        return {
            "name": getattr(cache, "name", None),
            "model": model,
            "system_instruction": system_instruction,
            "contents_count": len(uploaded),
        }
    except Exception as e:
        return {"error_code": "UNKNOWN", "error_message": str(e)}

cache_get

cache_get(name: str) -> Dict[str, Any]

Retrieve cached content metadata by cache name.

Fetches information about a previously created cached content entry, including lifecycle information like expiration times and creation dates.

PARAMETER DESCRIPTION
name

The unique cache identifier returned by cache_create_from_files().

TYPE: str

RETURNS DESCRIPTION
dict

Cache metadata containing:
- name (str): The cache identifier
- ttl (str|None): Time-to-live if available
- expire_time (str|None): Expiration timestamp if available
- create_time (str|None): Creation timestamp if available

TYPE: Dict[str, Any]

On failure, returns:
- error_code (str): Error classification
- error_message (str): Detailed error description

Examples:

Check cache status:

>>> cache_info = cache_get(cache_name)
>>> if 'error_code' not in cache_info:
...     print(f"Cache {cache_info['name']} is active")
...     if cache_info.get('expire_time'):
...         print(f"Expires: {cache_info['expire_time']}")
... else:
...     print(f"Cache not found: {cache_info['error_message']}")
Note

Available metadata fields may vary depending on the Google GenAI library version and cache configuration.
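
Given that field availability varies, defensive access is safest. A minimal sketch:

>>> info = cache_get(cache_name)
>>> if 'error_code' not in info:
...     for field in ('ttl', 'expire_time', 'create_time'):
...         if field in info:
...             print(f"{field}: {info[field]}")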

Source code in src/veotools/api/mcp_api.py
def cache_get(name: str) -> Dict[str, Any]:
    """Retrieve cached content metadata by cache name.

    Fetches information about a previously created cached content entry,
    including lifecycle information like expiration times and creation dates.

    Args:
        name: The unique cache identifier returned by cache_create_from_files().

    Returns:
        dict: Cache metadata containing:
            - name (str): The cache identifier
            - ttl (str|None): Time-to-live if available
            - expire_time (str|None): Expiration timestamp if available
            - create_time (str|None): Creation timestamp if available

        On failure, returns:
            - error_code (str): Error classification
            - error_message (str): Detailed error description

    Examples:
        Check cache status:
        >>> cache_info = cache_get(cache_name)
        >>> if 'error_code' not in cache_info:
        ...     print(f"Cache {cache_info['name']} is active")
        ...     if cache_info.get('expire_time'):
        ...         print(f"Expires: {cache_info['expire_time']}")
        ... else:
        ...     print(f"Cache not found: {cache_info['error_message']}")

    Note:
        Available metadata fields may vary depending on the Google GenAI
        library version and cache configuration.
    """
    """Retrieve cached content metadata by name.

    Returns minimal metadata; fields vary by library version.
    """
    try:
        client = VeoClient().client
        cache = client.caches.get(name=name)
        out: Dict[str, Any] = {"name": getattr(cache, "name", name)}
        # Attempt to surface lifecycle info when available
        for k in ("ttl", "expire_time", "create_time"):
            v = getattr(cache, k, None)
            if v is not None:
                out[k] = v
        return out
    except Exception as e:
        return {"error_code": "UNKNOWN", "error_message": str(e)}

cache_list

cache_list() -> Dict[str, Any]

List all cached content entries with their metadata.

Retrieves a list of all cached content entries accessible to the current API key, including their metadata and lifecycle information.

RETURNS DESCRIPTION
dict

Cache listing containing:
- caches (list): List of cache entries, each containing:
    - name (str): Cache identifier
    - model (str|None): Associated model if available
    - display_name (str|None): Human-readable name if available
    - create_time (str|None): Creation timestamp if available
    - update_time (str|None): Last update timestamp if available
    - expire_time (str|None): Expiration timestamp if available
    - usage_metadata (dict|None): Usage statistics if available

TYPE: Dict[str, Any]

On failure, returns:
- error_code (str): Error classification
- error_message (str): Detailed error description

Examples:

List all caches:

>>> cache_list_result = cache_list()
>>> if 'caches' in cache_list_result:
...     for cache in cache_list_result['caches']:
...         print(f"Cache: {cache['name']}")
...         if cache.get('model'):
...             print(f"  Model: {cache['model']}")
...         if cache.get('expire_time'):
...             print(f"  Expires: {cache['expire_time']}")
... else:
...     print(f"Failed to list caches: {cache_list_result['error_message']}")

Find caches by model:

>>> result = cache_list()
>>> if 'caches' in result:
...     veo3_caches = [
...         c for c in result['caches']
...         if c.get('model', '').startswith('veo-3')
...     ]
Note

Metadata availability depends on the Google GenAI library version and individual cache configurations.

Source code in src/veotools/api/mcp_api.py
def cache_list() -> Dict[str, Any]:
    """List all cached content entries with their metadata.

    Retrieves a list of all cached content entries accessible to the current
    API key, including their metadata and lifecycle information.

    Returns:
        dict: Cache listing containing:
            - caches (list): List of cache entries, each containing:
                - name (str): Cache identifier
                - model (str|None): Associated model if available
                - display_name (str|None): Human-readable name if available
                - create_time (str|None): Creation timestamp if available
                - update_time (str|None): Last update timestamp if available
                - expire_time (str|None): Expiration timestamp if available
                - usage_metadata (dict|None): Usage statistics if available

        On failure, returns:
            - error_code (str): Error classification
            - error_message (str): Detailed error description

    Examples:
        List all caches:
        >>> cache_list_result = cache_list()
        >>> if 'caches' in cache_list_result:
        ...     for cache in cache_list_result['caches']:
        ...         print(f"Cache: {cache['name']}")
        ...         if cache.get('model'):
        ...             print(f"  Model: {cache['model']}")
        ...         if cache.get('expire_time'):
        ...             print(f"  Expires: {cache['expire_time']}")
        ... else:
        ...     print(f"Failed to list caches: {cache_list_result['error_message']}")

        Find caches by model:
        >>> result = cache_list()
        >>> if 'caches' in result:
        ...     veo3_caches = [
        ...         c for c in result['caches']
        ...         if c.get('model', '').startswith('veo-3')
        ...     ]

    Note:
        Metadata availability depends on the Google GenAI library version
        and individual cache configurations.
    """
    """List cached content metadata entries.

    Returns { caches: [ {name, model?, display_name?, create_time?, update_time?, expire_time?, usage_metadata?} ] }
    """
    try:
        client = VeoClient().client
        items = []
        for cache in client.caches.list():
            entry: Dict[str, Any] = {"name": getattr(cache, "name", None)}
            for k in ("model", "display_name", "create_time", "update_time", "expire_time", "usage_metadata"):
                v = getattr(cache, k, None)
                if v is not None:
                    entry[k] = v
            items.append(entry)
        return {"caches": items}
    except Exception as e:
        return {"error_code": "UNKNOWN", "error_message": str(e)}

cache_update

cache_update(name: str, ttl_seconds: Optional[int] = None, expire_time_iso: Optional[str] = None) -> Dict[str, Any]

Update TTL or expiration time for a cached content entry.

Modifies the lifecycle settings of an existing cached content entry. You can specify either a TTL (time-to-live) in seconds or an absolute expiration time, but not both.

PARAMETER DESCRIPTION
name

The unique cache identifier to update.

TYPE: str

ttl_seconds

Optional time-to-live in seconds (e.g., 300 for 5 minutes).

TYPE: Optional[int] DEFAULT: None

expire_time_iso

Optional timezone-aware ISO-8601 datetime string (e.g., "2024-01-15T10:30:00Z").

TYPE: Optional[str] DEFAULT: None

RETURNS DESCRIPTION
dict

Update result containing:
- name (str): The cache identifier
- expire_time (str|None): New expiration time if available
- ttl (str|None): New TTL setting if available
- update_time (str|None): Update timestamp if available

TYPE: Dict[str, Any]

On failure, returns:
- error_code (str): Error classification
- error_message (str): Detailed error description

Examples:

Extend cache TTL to 1 hour:

>>> result = cache_update(cache_name, ttl_seconds=3600)
>>> if 'error_code' not in result:
...     print(f"Cache TTL updated: {result.get('ttl')}")
... else:
...     print(f"Update failed: {result['error_message']}")

Set specific expiration time:

>>> result = cache_update(
...     cache_name,
...     expire_time_iso="2024-12-31T23:59:59Z"
... )

Extend by 30 minutes:

>>> result = cache_update(cache_name, ttl_seconds=1800)
Note
  • Only one of ttl_seconds or expire_time_iso should be provided
  • TTL is relative to the current time when the update is processed
  • expire_time_iso should be in UTC timezone for consistency
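
When an absolute deadline is needed, a timezone-aware ISO-8601 string can be built with the standard library. A sketch (the two-hour offset is arbitrary):

>>> from datetime import datetime, timedelta, timezone
>>> expire = (datetime.now(timezone.utc) + timedelta(hours=2)).isoformat()
>>> result = cache_update(cache_name, expire_time_iso=expire)
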
Source code in src/veotools/api/mcp_api.py
def cache_update(name: str, ttl_seconds: Optional[int] = None, expire_time_iso: Optional[str] = None) -> Dict[str, Any]:
    """Update TTL or expiration time for a cached content entry.

    Modifies the lifecycle settings of an existing cached content entry.
    You can specify either a TTL (time-to-live) in seconds or an absolute
    expiration time, but not both.

    Args:
        name: The unique cache identifier to update.
        ttl_seconds: Optional time-to-live in seconds (e.g., 300 for 5 minutes).
        expire_time_iso: Optional timezone-aware ISO-8601 datetime string
            (e.g., "2024-01-15T10:30:00Z").

    Returns:
        dict: Update result containing:
            - name (str): The cache identifier
            - expire_time (str|None): New expiration time if available
            - ttl (str|None): New TTL setting if available
            - update_time (str|None): Update timestamp if available

        On failure, returns:
            - error_code (str): Error classification
            - error_message (str): Detailed error description

    Examples:
        Extend cache TTL to 1 hour:
        >>> result = cache_update(cache_name, ttl_seconds=3600)
        >>> if 'error_code' not in result:
        ...     print(f"Cache TTL updated: {result.get('ttl')}")
        ... else:
        ...     print(f"Update failed: {result['error_message']}")

        Set specific expiration time:
        >>> result = cache_update(
        ...     cache_name,
        ...     expire_time_iso="2024-12-31T23:59:59Z"
        ... )

        Extend by 30 minutes:
        >>> result = cache_update(cache_name, ttl_seconds=1800)

    Raises:
        Returns error dict instead of raising exceptions directly.

    Note:
        - Only one of ttl_seconds or expire_time_iso should be provided
        - TTL is relative to the current time when the update is processed
        - expire_time_iso should be in UTC timezone for consistency
    """
    """Update TTL or expire_time for a cache (one or the other).

    - ttl_seconds: integer seconds for TTL (e.g., 300)
    - expire_time_iso: timezone-aware ISO-8601 datetime string
    """
    try:
        client = VeoClient().client
        cfg_kwargs: Dict[str, Any] = {}
        if ttl_seconds is not None:
            cfg_kwargs["ttl"] = f"{int(ttl_seconds)}s"
        if expire_time_iso:
            cfg_kwargs["expire_time"] = expire_time_iso
        if not cfg_kwargs:
            return {"error_code": "VALIDATION", "error_message": "Provide ttl_seconds or expire_time_iso"}
        updated = client.caches.update(
            name=name,
            config=types.UpdateCachedContentConfig(**cfg_kwargs),
        )
        out: Dict[str, Any] = {"name": getattr(updated, "name", name)}
        for k in ("expire_time", "ttl", "update_time"):
            v = getattr(updated, k, None)
            if v is not None:
                out[k] = v
        return out
    except Exception as e:
        return {"error_code": "UNKNOWN", "error_message": str(e)}

cache_delete

cache_delete(name: str) -> Dict[str, Any]

Delete a cached content entry by name.

Permanently removes a cached content entry and all associated files from the system. This action cannot be undone.

PARAMETER DESCRIPTION
name

The unique cache identifier to delete.

TYPE: str

RETURNS DESCRIPTION
dict

Deletion result containing:
- deleted (bool): True if deletion was successful
- name (str): The cache identifier that was deleted

TYPE: Dict[str, Any]

On failure, returns:
- error_code (str): Error classification
- error_message (str): Detailed error description

Examples:

Delete a specific cache:

>>> result = cache_delete(cache_name)
>>> if result.get('deleted'):
...     print(f"Cache {result['name']} deleted successfully")
... else:
...     print(f"Deletion failed: {result.get('error_message')}")

Delete with error handling:

>>> result = cache_delete("non-existent-cache")
>>> if 'error_code' in result:
...     print(f"Error: {result['error_message']}")
Note

Deletion is permanent and cannot be reversed. Ensure you no longer need the cached content before calling this function.
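
Combining cache_list with cache_delete gives a simple bulk cleanup. A sketch (deletion is permanent, so filter the listing carefully before deleting):

>>> listing = cache_list()
>>> for cache in listing.get('caches', []):
...     if cache.get('name'):
...         print(cache_delete(cache['name']))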

Source code in src/veotools/api/mcp_api.py
def cache_delete(name: str) -> Dict[str, Any]:
    """Delete a cached content entry by name.

    Permanently removes a cached content entry and all associated files
    from the system. This action cannot be undone.

    Args:
        name: The unique cache identifier to delete.

    Returns:
        dict: Deletion result containing:
            - deleted (bool): True if deletion was successful
            - name (str): The cache identifier that was deleted

        On failure, returns:
            - error_code (str): Error classification
            - error_message (str): Detailed error description

    Examples:
        Delete a specific cache:
        >>> result = cache_delete(cache_name)
        >>> if result.get('deleted'):
        ...     print(f"Cache {result['name']} deleted successfully")
        ... else:
        ...     print(f"Deletion failed: {result.get('error_message')}")

        Delete with error handling:
        >>> result = cache_delete("non-existent-cache")
        >>> if 'error_code' in result:
        ...     print(f"Error: {result['error_message']}")

    Note:
        Deletion is permanent and cannot be reversed. Ensure you no longer
        need the cached content before calling this function.
    """
    """Delete a cached content entry by name."""
    try:
        client = VeoClient().client
        client.caches.delete(name)
        return {"deleted": True, "name": name}
    except Exception as e:
        return {"error_code": "UNKNOWN", "error_message": str(e)}