API Reference Overview¶
The Veotools API is organized into several modules, each handling specific aspects of video generation and processing.
Complete API Documentation¶
The following sections provide auto-generated documentation from the source code docstrings.
Core Module¶
veotools.core
¶
| CLASS | DESCRIPTION |
|---|---|
| VeoClient | Singleton client for Google GenAI API interactions. |
| StorageManager | Manage output directories for videos, frames, and temp files. |
| ProgressTracker | Track and report progress for long-running operations. |
| ModelConfig | Configuration and capabilities for different Veo video generation models. |
Classes¶
VeoClient
¶
Singleton client for Google GenAI API interactions.
This class implements a singleton pattern to ensure only one client instance is created throughout the application lifecycle. It manages the authentication and connection to Google's Generative AI API.
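The singleton behavior described above can be sketched in a few lines (`SingletonClient` is a hypothetical stand-in for illustration; the real `VeoClient` additionally handles authentication against the GenAI API):

```python
class SingletonClient:
    """Minimal singleton sketch: __new__ returns one shared instance."""

    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance


a = SingletonClient()
b = SingletonClient()
print(a is b)  # True: both names refer to the same instance
```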
| ATTRIBUTE | DESCRIPTION |
|---|---|
| client | The underlying Google GenAI client instance. |

| RAISES | DESCRIPTION |
|---|---|
| ValueError | If GEMINI_API_KEY environment variable is not set. |
Examples:
>>> client = VeoClient()
Initialize the GenAI client with the API key from the environment. The client is only initialized once, even if __init__ is called multiple times.

| RAISES | DESCRIPTION |
|---|---|
| ValueError | If GEMINI_API_KEY is not found in environment variables. |

| METHOD | DESCRIPTION |
|---|---|
| __new__ | Create or return the singleton instance. |
Source code in src/veotools/core.py
StorageManager
¶
Manage output directories for videos, frames, and temp files.
Default resolution order for the base path:

1. VEO_OUTPUT_DIR environment variable (if set)
2. Current working directory (./output)
3. Package-adjacent directory (../output) as a last resort
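The first two steps of this resolution order can be sketched as follows (`resolve_output_dir` is a hypothetical helper; the package-adjacent fallback is omitted for brevity):

```python
import os
from pathlib import Path


def resolve_output_dir() -> Path:
    """Sketch of the assumed resolution order for the output base path."""
    env = os.environ.get("VEO_OUTPUT_DIR")
    if env:
        # 1. Environment variable wins when set
        return Path(env)
    # 2. Otherwise fall back to ./output under the current working directory
    return Path.cwd() / "output"


os.environ["VEO_OUTPUT_DIR"] = "/tmp/veo_demo"
assert resolve_output_dir() == Path("/tmp/veo_demo")

del os.environ["VEO_OUTPUT_DIR"]
assert resolve_output_dir() == Path.cwd() / "output"
```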
| METHOD | DESCRIPTION |
|---|---|
| get_video_path | Get the full path for a video file. |
| get_frame_path | Get the full path for a frame image file. |
| get_temp_path | Get the full path for a temporary file. |
| cleanup_temp | Remove all files from the temporary directory. |
| get_url | Convert a file path to a file:// URL. |
Source code in src/veotools/core.py
Functions¶
get_video_path
¶
Get the full path for a video file.
| PARAMETER | DESCRIPTION |
|---|---|
| filename | Name of the video file. |

| RETURNS | DESCRIPTION |
|---|---|
| Path | Full path to the video file in the videos directory. |
Examples:
>>> manager = StorageManager()
>>> path = manager.get_video_path("output.mp4")
>>> print(path) # /path/to/output/videos/output.mp4
Source code in src/veotools/core.py
get_frame_path
¶
Get the full path for a frame image file.
| PARAMETER | DESCRIPTION |
|---|---|
| filename | Name of the frame file. |

| RETURNS | DESCRIPTION |
|---|---|
| Path | Full path to the frame file in the frames directory. |
Examples:
>>> manager = StorageManager()
>>> path = manager.get_frame_path("frame_001.jpg")
>>> print(path) # /path/to/output/frames/frame_001.jpg
Source code in src/veotools/core.py
get_temp_path
¶
Get the full path for a temporary file.
| PARAMETER | DESCRIPTION |
|---|---|
| filename | Name of the temporary file. |

| RETURNS | DESCRIPTION |
|---|---|
| Path | Full path to the file in the temp directory. |
Examples:
>>> manager = StorageManager()
>>> path = manager.get_temp_path("processing.tmp")
>>> print(path) # /path/to/output/temp/processing.tmp
Source code in src/veotools/core.py
cleanup_temp
¶
Remove all files from the temporary directory.
This method safely removes all files in the temp directory while preserving the directory structure. Errors during deletion are silently ignored.
Examples:
>>> manager = StorageManager()
>>> manager.cleanup_temp()
Source code in src/veotools/core.py
get_url
¶
Convert a file path to a file:// URL.
| PARAMETER | DESCRIPTION |
|---|---|
| path | Path to the file. |

| RETURNS | DESCRIPTION |
|---|---|
| Optional[str] | File URL if the file exists, None otherwise. |
Examples:
>>> manager = StorageManager()
>>> video_path = manager.get_video_path("test.mp4")
>>> url = manager.get_url(video_path)
>>> print(url) # file:///absolute/path/to/output/videos/test.mp4
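The existence check and conversion can be sketched with pathlib (`get_url` below is a hypothetical stand-in illustrating the documented contract; `Path.as_uri` produces the file:// form):

```python
import os
import tempfile
from pathlib import Path


def get_url(path: Path):
    """Return a file:// URL only when the file exists, else None (assumed contract)."""
    return path.resolve().as_uri() if path.exists() else None


# Create a throwaway file to demonstrate the existing-file branch
fd, name = tempfile.mkstemp(suffix=".mp4")
os.close(fd)

url = get_url(Path(name))
print(url.startswith("file://"))           # True
print(get_url(Path("no_such_file.mp4")))   # None

os.remove(name)
```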
Source code in src/veotools/core.py
ProgressTracker
¶
Track and report progress for long-running operations.
This class provides a simple interface for tracking progress updates during video generation and processing operations. It supports custom callbacks or falls back to logging.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| callback | Function to call with progress updates. |
| current_progress | Current progress percentage (0-100). |
| logger | Logger instance for default progress reporting. |
Examples:
>>> def my_callback(msg: str, pct: int):
... print(f"{msg}: {pct}%")
>>> tracker = ProgressTracker(callback=my_callback)
>>> tracker.start("Processing")
>>> tracker.update("Halfway", 50)
>>> tracker.complete("Done")
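The tracker lifecycle shown above can be sketched with a minimal stand-in (`MiniTracker` is hypothetical and not part of veotools; it mirrors the described semantics that start() reports 0% and complete() reports 100%):

```python
class MiniTracker:
    """Toy progress tracker: forwards (message, percent) to a callback."""

    def __init__(self, callback=None):
        self.callback = callback or (lambda msg, pct: None)
        self.current_progress = 0

    def update(self, message: str, percent: int):
        self.current_progress = percent
        self.callback(message, percent)

    def start(self, message: str = "Starting"):
        self.update(message, 0)       # start is always 0%

    def complete(self, message: str = "Complete"):
        self.update(message, 100)     # complete is always 100%


events = []
t = MiniTracker(callback=lambda m, p: events.append((m, p)))
t.start("Processing")
t.update("Halfway", 50)
t.complete("Done")
print(events)  # [('Processing', 0), ('Halfway', 50), ('Done', 100)]
```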
Initialize the progress tracker.
| PARAMETER | DESCRIPTION |
|---|---|
| callback | Optional callback function that receives (message, percent). If not provided, uses default logging. |

| METHOD | DESCRIPTION |
|---|---|
| default_progress | Default progress callback that logs to the logger. |
| update | Update progress and trigger callback. |
| start | Mark the start of an operation (0% progress). |
| complete | Mark the completion of an operation (100% progress). |
Source code in src/veotools/core.py
Functions¶
default_progress
¶
Default progress callback that logs to the logger.
| PARAMETER | DESCRIPTION |
|---|---|
| message | Progress message. |
| percent | Progress percentage. |
update
¶
Update progress and trigger callback.
| PARAMETER | DESCRIPTION |
|---|---|
| message | Progress message to display. |
| percent | Current progress percentage (0-100). |
Source code in src/veotools/core.py
start
¶
Mark the start of an operation (0% progress).
| PARAMETER | DESCRIPTION |
|---|---|
| message | Starting message, defaults to "Starting". |
complete
¶
Mark the completion of an operation (100% progress).
| PARAMETER | DESCRIPTION |
|---|---|
| message | Completion message, defaults to "Complete". |
ModelConfig
¶
Configuration and capabilities for different Veo video generation models.
This class manages model-specific configurations and builds generation configs based on model capabilities. It handles feature availability, parameter validation, and safety settings.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| MODELS | Dictionary of available models and their configurations. |

| METHOD | DESCRIPTION |
|---|---|
| get_config | Get configuration for a specific model. |
| build_generation_config | Build a generation configuration based on model capabilities. |
Functions¶
get_config
classmethod
¶
Get configuration for a specific model.
| PARAMETER | DESCRIPTION |
|---|---|
| model | Model identifier (with or without "models/" prefix). |

| RETURNS | DESCRIPTION |
|---|---|
| dict | Model configuration dictionary containing capabilities and defaults. |
Examples:
>>> config = ModelConfig.get_config("veo-3.0-fast-generate-preview")
>>> print(config["name"]) # "Veo 3.0 Fast"
>>> print(config["supports_duration"]) # False
Source code in src/veotools/core.py
build_generation_config
classmethod
¶
Build a generation configuration based on model capabilities.
This method creates a GenerateVideosConfig object with parameters appropriate for the specified model. It validates parameters against model capabilities and handles safety settings.
| PARAMETER | DESCRIPTION |
|---|---|
| model | Model identifier to use for generation. |
| **kwargs | Generation parameters, including: number_of_videos (number of videos to generate, default: 1), duration_seconds (video duration, if supported by the model), enhance_prompt (whether to enhance the prompt, if supported), fps (frames per second, if supported), aspect_ratio (e.g., "16:9"), negative_prompt (negative prompt for generation), person_generation (person generation setting), safety_settings (list of safety settings), cached_content (cached content handle). |

| RETURNS | DESCRIPTION |
|---|---|
| types.GenerateVideosConfig | Configuration object for video generation. |

| RAISES | DESCRIPTION |
|---|---|
| ValueError | If aspect_ratio is not supported by the model. |
Examples:
>>> config = ModelConfig.build_generation_config(
... "veo-3.0-fast-generate-preview",
... number_of_videos=2,
... aspect_ratio="16:9"
... )
Source code in src/veotools/core.py
Models Module¶
veotools.models
¶
| CLASS | DESCRIPTION |
|---|---|
| JobStatus | Enumeration of possible job statuses for video generation tasks. |
| VideoMetadata | Metadata information for a video file. |
| VideoResult | Result object for video generation operations. |
| WorkflowStep | Individual step in a video processing workflow. |
| Workflow | Container for a multi-step video processing workflow. |
Classes¶
JobStatus
¶
Bases: Enum
Enumeration of possible job statuses for video generation tasks.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| PENDING | Job has been created but not yet started. |
| PROCESSING | Job is currently being processed. |
| COMPLETE | Job has finished successfully. |
| FAILED | Job has failed with an error. |
VideoMetadata
¶
Metadata information for a video file.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| fps | Frames per second of the video. |
| duration | Duration of the video in seconds. |
| width | Width of the video in pixels. |
| height | Height of the video in pixels. |
| frame_count | Total number of frames in the video. |
Examples:
>>> metadata = VideoMetadata(fps=30.0, duration=10.0, width=1920, height=1080)
>>> print(metadata.frame_count) # 300
>>> print(metadata.to_dict())
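The derived frame_count in the example above follows from fps and duration; a one-line sketch of the assumed arithmetic (`frame_count` is a hypothetical helper):

```python
def frame_count(fps: float, duration: float) -> int:
    # Assumed derivation: total frames = fps x duration, truncated to int.
    return int(fps * duration)


print(frame_count(30.0, 10.0))  # 300, matching the example above
```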
Initialize video metadata.
| PARAMETER | DESCRIPTION |
|---|---|
| fps | Frames per second (default: 24.0). |
| duration | Video duration in seconds (default: 0.0). |
| width | Video width in pixels (default: 0). |
| height | Video height in pixels (default: 0). |

| METHOD | DESCRIPTION |
|---|---|
| to_dict | Convert metadata to a dictionary. |
Source code in src/veotools/models.py
VideoResult
¶
Result object for video generation operations.
This class encapsulates all information about a video generation task, including its status, progress, metadata, and any errors.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| id | Unique identifier for this result. |
| path | Path to the generated video file. |
| url | URL to access the video (if available). |
| operation_id | Google API operation ID for tracking. |
| status | Current status of the generation job. |
| progress | Progress percentage (0-100). |
| metadata | Video metadata (fps, duration, resolution). |
| prompt | Text prompt used for generation. |
| model | Model used for generation. |
| error | Error information if generation failed. |
| created_at | Timestamp when the job was created. |
| completed_at | Timestamp when the job completed. |
Examples:
>>> result = VideoResult()
>>> result.update_progress("Generating", 50)
>>> print(result.status) # JobStatus.PROCESSING
>>> result.update_progress("Complete", 100)
>>> print(result.status) # JobStatus.COMPLETE
Initialize a video result.
| PARAMETER | DESCRIPTION |
|---|---|
| path | Optional path to the video file. |
| operation_id | Optional Google API operation ID. |

| METHOD | DESCRIPTION |
|---|---|
| to_dict | Convert the result to a JSON-serializable dictionary. |
| update_progress | Update the progress of the video generation. |
| mark_failed | Mark the job as failed with an error. |
Source code in src/veotools/models.py
Functions¶
to_dict
¶
Convert the result to a JSON-serializable dictionary.
| RETURNS | DESCRIPTION |
|---|---|
| Dict[str, Any] | Dictionary representation of the video result. |
Source code in src/veotools/models.py
update_progress
¶
Update the progress of the video generation.
Automatically updates the status based on progress:

- 0%: PENDING
- 1-99%: PROCESSING
- 100%: COMPLETE
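This percent-to-status mapping can be sketched directly (`JobStatus` here mirrors the enum documented earlier; `status_for` is a hypothetical helper, not the library's API):

```python
from enum import Enum


class JobStatus(Enum):
    PENDING = "pending"
    PROCESSING = "processing"
    COMPLETE = "complete"
    FAILED = "failed"


def status_for(percent: int) -> JobStatus:
    """0% -> PENDING, 1-99% -> PROCESSING, 100% -> COMPLETE."""
    if percent <= 0:
        return JobStatus.PENDING
    if percent >= 100:
        return JobStatus.COMPLETE
    return JobStatus.PROCESSING


print(status_for(50))  # JobStatus.PROCESSING
```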
| PARAMETER | DESCRIPTION |
|---|---|
| message | Progress message (currently unused but kept for API compatibility). |
| percent | Progress percentage (0-100). |
Source code in src/veotools/models.py
mark_failed
¶
Mark the job as failed with an error.
| PARAMETER | DESCRIPTION |
|---|---|
| error | The exception that caused the failure. |
WorkflowStep
¶
Individual step in a video processing workflow.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| id | Unique identifier for this step. |
| action | Action to perform (e.g., "generate", "stitch"). |
| params | Parameters for the action. |
| result | Result of executing this step. |
| created_at | Timestamp when the step was created. |
Initialize a workflow step.
| PARAMETER | DESCRIPTION |
|---|---|
| action | The action to perform. |
| params | Parameters for the action. |

| METHOD | DESCRIPTION |
|---|---|
| to_dict | Convert the step to a dictionary. |
Source code in src/veotools/models.py
Functions¶
to_dict
¶
Convert the step to a dictionary.
| RETURNS | DESCRIPTION |
|---|---|
| Dict[str, Any] | Dictionary representation of the workflow step. |
Source code in src/veotools/models.py
Workflow
¶
Container for a multi-step video processing workflow.
Workflows allow chaining multiple operations like generation, stitching, and processing into a single managed flow.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| id | Unique identifier for this workflow. |
| name | Human-readable name for the workflow. |
| steps | List of workflow steps to execute. |
| current_step | Index of the currently executing step. |
| created_at | Timestamp when the workflow was created. |
| updated_at | Timestamp of the last update. |
Examples:
>>> workflow = Workflow("my_video_project")
>>> workflow.add_step("generate", {"prompt": "sunset"})
>>> workflow.add_step("stitch", {"videos": ["a.mp4", "b.mp4"]})
>>> print(len(workflow.steps)) # 2
Initialize a workflow.
| PARAMETER | DESCRIPTION |
|---|---|
| name | Optional name for the workflow. If not provided, generates a timestamp-based name. |

| METHOD | DESCRIPTION |
|---|---|
| add_step | Add a new step to the workflow. |
| to_dict | Convert the workflow to a dictionary. |
| from_dict | Create a workflow from a dictionary. |
Source code in src/veotools/models.py
Functions¶
add_step
¶
add_step(action: str, params: Dict[str, Any]) -> WorkflowStep
Add a new step to the workflow.
| PARAMETER | DESCRIPTION |
|---|---|
| action | The action to perform. |
| params | Parameters for the action. |

| RETURNS | DESCRIPTION |
|---|---|
| WorkflowStep | The created workflow step. |
Source code in src/veotools/models.py
to_dict
¶
Convert the workflow to a dictionary.
| RETURNS | DESCRIPTION |
|---|---|
| Dict[str, Any] | Dictionary representation of the workflow. |
Source code in src/veotools/models.py
from_dict
classmethod
¶
from_dict(data: Dict[str, Any]) -> Workflow
Create a workflow from a dictionary.
| PARAMETER | DESCRIPTION |
|---|---|
| data | Dictionary containing workflow data. |

| RETURNS | DESCRIPTION |
|---|---|
| Workflow | Reconstructed workflow instance. |
Examples:
>>> data = {"id": "123", "name": "test", "current_step": 2}
>>> workflow = Workflow.from_dict(data)
>>> print(workflow.name) # "test"
Source code in src/veotools/models.py
Video Processing Module¶
veotools.process.extractor
¶
Frame extraction and video info utilities for Veo Tools.
Enhancements:

- get_video_info now first attempts to use ffprobe for accurate metadata (fps, duration, width, height). If ffprobe is unavailable, it falls back to OpenCV-based probing.
| FUNCTION | DESCRIPTION |
|---|---|
| extract_frame | Extract a single frame from a video at the specified time offset. |
| extract_frames | Extract multiple frames from a video at specified time offsets. |
| get_video_info | Extract comprehensive metadata from a video file. |
Functions¶
extract_frame
¶
extract_frame(video_path: Path, time_offset: float = -1.0, output_path: Optional[Path] = None) -> Path
Extract a single frame from a video at the specified time offset.
Extracts and saves a frame from a video file as a JPEG image. Supports both positive time offsets (from start) and negative offsets (from end). Uses OpenCV for video processing and automatically manages storage paths.
| PARAMETER | DESCRIPTION |
|---|---|
| video_path | Path to the input video file. |
| time_offset | Time in seconds where to extract the frame. Positive values are from the start, negative values from the end. Defaults to -1.0 (1 second from the end). |
| output_path | Optional custom path for saving the extracted frame. If None, auto-generates a path using StorageManager. |

| RETURNS | DESCRIPTION |
|---|---|
| Path | The path where the extracted frame was saved. |

| RAISES | DESCRIPTION |
|---|---|
| FileNotFoundError | If the input video file doesn't exist. |
| RuntimeError | If frame extraction fails (e.g., invalid time offset). |
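How a time offset maps to a frame index can be sketched as follows (`frame_index` is a hypothetical helper; the assumed rule, per the parameter description, is that negative offsets count back from the end of the video):

```python
def frame_index(time_offset: float, duration: float, fps: float) -> int:
    """Map a (possibly negative) time offset to a frame index."""
    # Negative offsets are relative to the end of the video
    t = duration + time_offset if time_offset < 0 else time_offset
    return int(t * fps)


print(frame_index(-1.0, 10.0, 30.0))  # 270 (1 second before the end)
print(frame_index(5.0, 10.0, 30.0))   # 150 (5 seconds from the start)
```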
Examples:
Extract the last frame:
>>> frame_path = extract_frame(Path("video.mp4"))
Extract frame at 5 seconds:
>>> frame_path = extract_frame(Path("video.mp4"), time_offset=5.0)
Extract with custom output path:
>>> custom_path = Path("my_frame.jpg")
>>> frame_path = extract_frame(
... Path("video.mp4"),
... time_offset=10.0,
... output_path=custom_path
... )
Source code in src/veotools/process/extractor.py
extract_frames
¶
Extract multiple frames from a video at specified time offsets.
Extracts and saves multiple frames from a video file as JPEG images. Each time offset can be positive (from start) or negative (from end). Uses OpenCV for efficient batch frame extraction.
| PARAMETER | DESCRIPTION |
|---|---|
| video_path | Path to the input video file. |
| times | List of time offsets in seconds. Each can be positive (from start) or negative (from end). |
| output_dir | Optional directory for saving frames. If None, uses StorageManager's default frame directory. |

| RETURNS | DESCRIPTION |
|---|---|
| list | List of Path objects where the extracted frames were saved. Order matches the input times list. |

| RAISES | DESCRIPTION |
|---|---|
| FileNotFoundError | If the input video file doesn't exist. |
Examples:
Extract frames at multiple timestamps:
>>> frame_paths = extract_frames(
... Path("video.mp4"),
... [0.0, 5.0, 10.0, -1.0] # Start, 5s, 10s, and 1s from end
... )
>>> print(f"Extracted {len(frame_paths)} frames")
Extract to custom directory:
>>> output_dir = Path("extracted_frames")
>>> frame_paths = extract_frames(
... Path("movie.mp4"),
... [1.0, 2.0, 3.0],
... output_dir=output_dir
... )
Note
Failed frame extractions are silently skipped. The returned list may contain fewer paths than input times if some extractions fail.
Source code in src/veotools/process/extractor.py
get_video_info
¶
Extract comprehensive metadata from a video file.
Retrieves video metadata including frame rate, duration, dimensions, and frame count. First attempts to use ffprobe for accurate metadata extraction, falling back to OpenCV if ffprobe is unavailable. This dual approach ensures maximum compatibility and accuracy.
| PARAMETER | DESCRIPTION |
|---|---|
| video_path | Path to the input video file. |

| RETURNS | DESCRIPTION |
|---|---|
| dict | Video metadata containing: fps (float, frames per second), frame_count (int, total number of frames), width (int, pixels), height (int, pixels), duration (float, seconds). |

| RAISES | DESCRIPTION |
|---|---|
| FileNotFoundError | If the input video file doesn't exist. |
Examples:
Get basic video information:
>>> info = get_video_info(Path("video.mp4"))
>>> print(f"Duration: {info['duration']:.2f}s")
>>> print(f"Resolution: {info['width']}x{info['height']}")
>>> print(f"Frame rate: {info['fps']} fps")
Check if video has expected properties:
>>> info = get_video_info(Path("movie.mp4"))
>>> if info['fps'] > 30:
... print("High frame rate video")
>>> if info['width'] >= 1920:
... print("HD or higher resolution")
Note
- ffprobe (from FFmpeg) provides more accurate metadata when available
- OpenCV fallback may have slight inaccuracies in frame rate calculation
- All numeric values are guaranteed to be non-negative
- Returns 0.0 for fps/duration if video properties cannot be determined
Source code in src/veotools/process/extractor.py
Video Stitching Module¶
veotools.stitch.seamless
¶
Seamless video stitching for Veo Tools.
| FUNCTION | DESCRIPTION |
|---|---|
| stitch_videos | Seamlessly stitch multiple videos together into a single continuous video. |
| stitch_with_transitions | Stitch videos together with custom transition videos between them. |
| create_transition_points | Extract frames from two videos to analyze potential transition points. |
Functions¶
stitch_videos
¶
stitch_videos(video_paths: List[Path], overlap: float = 1.0, output_path: Optional[Path] = None, on_progress: Optional[Callable] = None) -> VideoResult
Seamlessly stitch multiple videos together into a single continuous video.
Combines multiple video files into one continuous video by concatenating them with optional overlap trimming. All videos are resized to match the dimensions of the first video. The output is optimized with H.264 encoding for broad compatibility.
| PARAMETER | DESCRIPTION |
|---|---|
| video_paths | List of paths to video files to stitch together, in order. |
| overlap | Duration in seconds to trim from the end of each video (except the last one) to create smooth transitions. Defaults to 1.0. |
| output_path | Optional custom output path. If None, auto-generates a path using StorageManager. |
| on_progress | Optional callback function called with progress updates (message, percent). |

| RETURNS | DESCRIPTION |
|---|---|
| VideoResult | Object containing the stitched video path, metadata, and operation details. |

| RAISES | DESCRIPTION |
|---|---|
| ValueError | If no videos are provided or if fewer than 2 videos are found. |
| FileNotFoundError | If any input video file doesn't exist. |
| RuntimeError | If video processing fails. |
Examples:
Stitch videos with default overlap:
>>> video_files = [Path("part1.mp4"), Path("part2.mp4"), Path("part3.mp4")]
>>> result = stitch_videos(video_files)
>>> print(f"Stitched video: {result.path}")
Stitch without overlap:
>>> result = stitch_videos(video_files, overlap=0.0)
Stitch with progress tracking:
>>> def show_progress(msg, pct):
... print(f"Stitching: {msg} ({pct}%)")
>>> result = stitch_videos(
... video_files,
... overlap=2.0,
... on_progress=show_progress
... )
Custom output location:
>>> result = stitch_videos(video_files, output_path=Path("final_cut.mp4"))
Note
- Videos are resized to match the first video's dimensions
- Uses H.264 encoding with CRF 23 for good quality/size balance
- Automatically handles frame rate consistency
- FFmpeg is used for final encoding if available, otherwise uses OpenCV
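The effect of overlap trimming on total duration can be sketched as follows (`stitched_duration` is a hypothetical helper modeling the behavior described above: every clip except the last loses `overlap` seconds):

```python
def stitched_duration(durations: list[float], overlap: float = 1.0) -> float:
    """Expected output length after trimming `overlap` seconds from all but the last clip."""
    if not durations:
        return 0.0
    return sum(durations) - overlap * (len(durations) - 1)


# Three 8-second clips with 1s overlap: 24 - 2 = 22 seconds
print(stitched_duration([8.0, 8.0, 8.0], overlap=1.0))  # 22.0
```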
Source code in src/veotools/stitch/seamless.py
stitch_with_transitions
¶
stitch_with_transitions(video_paths: List[Path], transition_videos: List[Path], output_path: Optional[Path] = None, on_progress: Optional[Callable] = None) -> VideoResult
Stitch videos together with custom transition videos between them.
Combines multiple videos by inserting transition videos between each pair of main videos. The transitions are placed between consecutive videos to create smooth, cinematic connections between scenes.
| PARAMETER | DESCRIPTION |
|---|---|
| video_paths | List of main video files to stitch together, in order. |
| transition_videos | List of transition videos to insert between main videos. Must have exactly len(video_paths) - 1 transitions. |
| output_path | Optional custom output path. If None, auto-generates a path using StorageManager. |
| on_progress | Optional callback function called with progress updates (message, percent). |

| RETURNS | DESCRIPTION |
|---|---|
| VideoResult | Object containing the final stitched video with transitions. |

| RAISES | DESCRIPTION |
|---|---|
| ValueError | If the number of transition videos doesn't match the requirement (should be one less than the number of main videos). |
| FileNotFoundError | If any video file doesn't exist. |
Examples:
Add transitions between three video clips:
>>> main_videos = [Path("scene1.mp4"), Path("scene2.mp4"), Path("scene3.mp4")]
>>> transitions = [Path("fade1.mp4"), Path("fade2.mp4")]
>>> result = stitch_with_transitions(main_videos, transitions)
>>> print(f"Final video with transitions: {result.path}")
With progress tracking:
>>> def track_progress(msg, pct):
... print(f"Processing: {msg} - {pct}%")
>>> result = stitch_with_transitions(
... main_videos,
... transitions,
... on_progress=track_progress
... )
Note
This function uses stitch_videos internally with overlap=0 to preserve transition videos exactly as provided.
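The resulting ordering can be sketched as a simple interleave (`interleave` is a hypothetical helper illustrating the documented constraint that there must be exactly one transition per adjacent pair):

```python
def interleave(videos: list[str], transitions: list[str]) -> list[str]:
    """Place one transition between each pair of consecutive videos."""
    if len(transitions) != len(videos) - 1:
        raise ValueError("need exactly len(videos) - 1 transitions")
    out = []
    for i, v in enumerate(videos):
        out.append(v)
        if i < len(transitions):
            out.append(transitions[i])
    return out


print(interleave(["scene1", "scene2", "scene3"], ["fade1", "fade2"]))
# ['scene1', 'fade1', 'scene2', 'fade2', 'scene3']
```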
Source code in src/veotools/stitch/seamless.py
create_transition_points
¶
create_transition_points(video_a: Path, video_b: Path, extract_points: Optional[dict] = None) -> tuple
Extract frames from two videos to analyze potential transition points.
Extracts representative frames from two videos that can be used to analyze how well they might transition together. Typically extracts the ending frame of the first video and the beginning frame of the second video.
| PARAMETER | DESCRIPTION |
|---|---|
| video_a | Path to the first video file. |
| video_b | Path to the second video file. |
| extract_points | Optional dictionary specifying extraction points: "a_end" is the time offset for frame extraction from video_a (default: -1.0); "b_start" is the offset for video_b (default: 1.0). If None, uses default values. |

| RETURNS | DESCRIPTION |
|---|---|
| tuple | A tuple (frame_a_path, frame_b_path): the paths to the frames extracted from video_a and video_b. |

| RAISES | DESCRIPTION |
|---|---|
| FileNotFoundError | If either video file doesn't exist. |
| RuntimeError | If frame extraction fails for either video. |
Examples:
Extract transition frames with defaults:
>>> frame_a, frame_b = create_transition_points(
... Path("clip1.mp4"),
... Path("clip2.mp4")
... )
>>> print(f"Transition frames: {frame_a}, {frame_b}")
Custom extraction points:
>>> points = {"a_end": -2.0, "b_start": 0.5}
>>> frame_a, frame_b = create_transition_points(
... Path("scene1.mp4"),
... Path("scene2.mp4"),
... extract_points=points
... )
Note
- Default extracts 1 second before the end of video_a
- Default extracts 1 second after the start of video_b
- Negative values in extract_points count from the end of the video
- These frames can be used to analyze color, composition, or content similarity for better transition planning
Source code in src/veotools/stitch/seamless.py
Bridge API Module¶
veotools.api.bridge
¶
| CLASS | DESCRIPTION |
|---|---|
| Bridge | A fluent API bridge for chaining video generation and processing operations. |
Classes¶
Bridge
¶
A fluent API bridge for chaining video generation and processing operations.
The Bridge class provides a convenient, chainable interface for combining multiple video operations like generation, stitching, and media management. It maintains an internal workflow and media queue to track operations and intermediate results.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| workflow | Workflow object tracking all operations performed. |
| media_queue | List of media file paths in processing order. |
| results | List of VideoResult objects from generation operations. |
| storage | StorageManager instance for file operations. |
Examples:
Basic text-to-video generation:
>>> bridge = Bridge().generate("A sunset over the ocean")
Chain multiple generations and stitch:
>>> bridge = (Bridge("movie_project")
... .generate("Opening scene")
... .generate("Middle scene")
... .generate("Ending scene")
... .stitch(overlap=1.0)
... .save(Path("final_movie.mp4")))
Image-to-video with continuation:
>>> bridge = (Bridge()
... .add_media("photo.jpg")
... .generate("The person starts walking")
... .generate("They walk into the distance")
... .stitch())
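The fluent chaining shown above can be sketched with a toy class (`MiniBridge` is illustrative only, not the real Bridge: each method records a step and returns self so calls can be chained):

```python
class MiniBridge:
    """Toy fluent-API sketch: every operation returns self for chaining."""

    def __init__(self, name: str = "untitled"):
        self.name = name
        self.steps = []

    def add_media(self, path: str) -> "MiniBridge":
        self.steps.append(("add_media", path))
        return self

    def generate(self, prompt: str) -> "MiniBridge":
        self.steps.append(("generate", prompt))
        return self

    def stitch(self, overlap: float = 1.0) -> "MiniBridge":
        self.steps.append(("stitch", overlap))
        return self


b = MiniBridge("demo").add_media("photo.jpg").generate("walks").stitch()
print(len(b.steps))  # 3 recorded operations
```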
| METHOD | DESCRIPTION |
|---|---|
| with_progress | Set a progress callback for all subsequent operations. |
| add_media | Add media files to the processing queue. |
| generate | Generate a video using text prompt and optional media input. |
| generate_transition | Generate a transition video between the last two media items. |
| stitch | Stitch all videos in the queue into a single continuous video. |
| save | Save the final result to a specified path or return the current path. |
| get_workflow | Get the workflow object containing all performed operations. |
| to_dict | Convert the workflow to a dictionary representation. |
| clear | Clear the media queue, removing all queued media files. |
Source code in src/veotools/api/bridge.py
Functions¶
with_progress
¶
with_progress(callback: Callable) -> Bridge
Set a progress callback for all subsequent operations.
| PARAMETER | DESCRIPTION |
|---|---|
| callback | Function called with progress updates (message: str, percent: int). |

| RETURNS | DESCRIPTION |
|---|---|
| Bridge | Self for method chaining. |
|
Examples:
>>> def show_progress(msg, pct):
... print(f"{msg}: {pct}%")
>>> bridge = Bridge().with_progress(show_progress)
Source code in src/veotools/api/bridge.py
add_media
¶
add_media(media: Union[str, Path, List[Union[str, Path]]]) -> Bridge
Add media files to the processing queue.
Adds one or more media files (images or videos) to the internal queue. These files can be used as inputs for subsequent generation operations.
| PARAMETER | DESCRIPTION |
|---|---|
| media | Single media path, or list of media paths to add to the queue. |

| RETURNS | DESCRIPTION |
|---|---|
| Bridge | Self for method chaining. |
|
Examples:
Add a single image:
>>> bridge = Bridge().add_media("photo.jpg")
Add multiple videos:
>>> bridge = Bridge().add_media(["clip1.mp4", "clip2.mp4"])
Chain with Path objects:
>>> bridge = Bridge().add_media(Path("image.png")).generate("The scene comes alive")
Source code in src/veotools/api/bridge.py
generate
¶
generate(prompt: str, model: str = 'veo-3.0-fast-generate-preview', **kwargs) -> Bridge
Generate a video using text prompt and optional media input.
Generates a video based on the prompt and the most recent media in the queue. The generation method is automatically selected based on the media type:

- No media: text-to-video generation
- Image media: image-to-video generation
- Video media: video continuation generation
| PARAMETER | DESCRIPTION |
|---|---|
| prompt | Text description for video generation. |
| model | Veo model to use. Defaults to "veo-3.0-fast-generate-preview". |
| **kwargs | Additional generation parameters, including: extract_at (time offset for video continuation, float), duration_seconds (video duration, int), person_generation (person policy, str), enhance (whether to enhance the prompt, bool). |

| RETURNS | DESCRIPTION |
|---|---|
| Bridge | Self for method chaining. |

| RAISES | DESCRIPTION |
|---|---|
| RuntimeError | If video generation fails. |
Examples:
Text-to-video generation:
>>> bridge = Bridge().generate("A cat chasing a laser pointer")
Image-to-video with existing media:
>>> bridge = (Bridge()
... .add_media("landscape.jpg")
... .generate("Clouds moving across the sky"))
Video continuation:
>>> bridge = (Bridge()
... .add_media("scene1.mp4")
... .generate("The action continues", extract_at=-2.0))
Custom model and parameters:
>>> bridge = Bridge().generate(
... "A dancing robot",
... model="veo-2.0",
... duration_seconds=10,
... enhance=True
... )
Source code in src/veotools/api/bridge.py
generate_transition
¶
generate_transition(prompt: Optional[str] = None, model: str = 'veo-3.0-fast-generate-preview') -> Bridge
Generate a transition video between the last two media items.
Creates a smooth transition video that bridges the gap between the two most recent media items in the queue. The transition is generated from a frame extracted near the end of the second-to-last video.
PARAMETER | DESCRIPTION |
---|---|
`prompt` | Description of the desired transition. If None, uses a default "smooth cinematic transition between scenes". TYPE: `Optional[str]` DEFAULT: `None` |
`model` | Veo model to use. TYPE: `str` DEFAULT: `'veo-3.0-fast-generate-preview'` |

RETURNS | DESCRIPTION |
---|---|
`Bridge` | Self for method chaining. |

RAISES | DESCRIPTION |
---|---|
`ValueError` | If fewer than 2 media items are in the queue. |
Examples:
Generate default transition:
>>> bridge = (Bridge()
...     .add_media("scene_a.mp4")
...     .add_media("scene_b.mp4")
...     .generate_transition())
Custom transition prompt:
>>> bridge = (Bridge()
... .generate("Day scene")
... .generate("Night scene")
... .generate_transition("Gradual sunset transition"))
Note
The transition video is inserted between the last two media items, creating a sequence like: [media_a, transition, media_b, ...]
Source code in src/veotools/api/bridge.py
stitch
¶
stitch(overlap: float = 1.0) -> Bridge
Stitch all videos in the queue into a single continuous video.
Combines all video files in the media queue into one seamless video. Non-video files (images) are automatically filtered out. The result replaces the entire media queue.
PARAMETER | DESCRIPTION |
---|---|
`overlap` | Duration in seconds to trim from the end of each video (except the last) for smooth transitions. TYPE: `float` DEFAULT: `1.0` |

RETURNS | DESCRIPTION |
---|---|
`Bridge` | Self for method chaining. |

RAISES | DESCRIPTION |
---|---|
`ValueError` | If fewer than 2 videos are available for stitching. |
Examples:
Stitch with default overlap:
>>> bridge = (Bridge()
... .generate("Scene 1")
... .generate("Scene 2")
... .generate("Scene 3")
... .stitch())
Stitch without overlap:
>>> bridge = bridge.stitch(overlap=0.0)
Stitch with longer transitions:
>>> bridge = bridge.stitch(overlap=2.0)
Note
After stitching, the media queue contains only the final stitched video.
Source code in src/veotools/api/bridge.py
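The overlap arithmetic can be illustrated with a small sketch (a hypothetical helper for reasoning about durations; the real stitching operates on video frames):

```python
from typing import Sequence

def stitched_duration(durations: Sequence[float], overlap: float = 1.0) -> float:
    """Expected length of the stitched result: every clip except the
    last is trimmed by `overlap` seconds before concatenation (sketch)."""
    if len(durations) < 2:
        raise ValueError("need at least 2 videos to stitch")
    return sum(d - overlap for d in durations[:-1]) + durations[-1]

# Three 8-second clips with the default 1.0s overlap yield a 22-second result.
```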
save
¶
Save the final result to a specified path or return the current path.
Saves the most recent media file in the queue to the specified output path, or returns the current path if no output path is provided.
PARAMETER | DESCRIPTION |
---|---|
`output_path` | Optional destination path. If provided, copies the current result to this location. If None, returns the current file path. |

RETURNS | DESCRIPTION |
---|---|
`Path` | The path where the final result is located. |

RAISES | DESCRIPTION |
---|---|
`ValueError` | If no media is available to save. |
Examples:
Save to custom location:
>>> final = bridge.save("outputs/final.mp4")
Get current result path:
>>> current = bridge.save()
Save with Path object:
>>> from pathlib import Path
>>> final = bridge.save(Path("outputs/final.mp4"))
Source code in src/veotools/api/bridge.py
get_workflow
¶
get_workflow() -> Workflow
Get the workflow object containing all performed operations.
RETURNS | DESCRIPTION |
---|---|
`Workflow` | The workflow tracking all operations and their parameters. |
Examples:
>>> bridge = Bridge("project").generate("A scene").stitch()
>>> workflow = bridge.get_workflow()
>>> print(workflow.name)
Source code in src/veotools/api/bridge.py
to_dict
¶
Convert the workflow to a dictionary representation.
RETURNS | DESCRIPTION |
---|---|
`dict` | Dictionary containing workflow steps and metadata. |
Examples:
>>> bridge = Bridge("test").generate("Scene")
>>> workflow_dict = bridge.to_dict()
>>> print(workflow_dict.keys())
Source code in src/veotools/api/bridge.py
Functions¶
MCP API Module¶
veotools.api.mcp_api
¶
MCP-friendly API wrappers for Veo Tools.
This module exposes small, deterministic, JSON-first functions intended for use in Model Context Protocol (MCP) servers. It builds on top of the existing blocking SDK functions by providing a non-blocking job lifecycle:
- generate_start(params) -> submits a generation job and returns immediately
- generate_get(job_id) -> fetches job status/progress/result
- generate_cancel(job_id) -> requests cancellation for a running job
It also provides environment/system helpers:

- preflight() -> checks API key, ffmpeg, and filesystem permissions
- version() -> returns package and key dependency versions

Design notes:

- Jobs are persisted as JSON files under StorageManager's base directory ("output/ops"). This allows stateless MCP handlers to inspect progress and results across processes.
- A background thread runs the blocking generation call and updates job state via the JobStore. Cancellation is cooperative: the on_progress callback checks a cancel flag in the persisted job state and raises Cancelled.
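A minimal polling loop over this non-blocking lifecycle might look like the following (`wait_for_job` is a hypothetical helper; in real use `fetch` would be `lambda: generate_get(job_id)`):

```python
import time
from typing import Any, Callable, Dict

TERMINAL_STATES = {"complete", "failed", "cancelled"}

def wait_for_job(fetch: Callable[[], Dict[str, Any]],
                 poll_seconds: float = 5.0,
                 max_polls: int = 120) -> Dict[str, Any]:
    """Poll a generate_get-style fetcher until the job reaches a terminal state."""
    for _ in range(max_polls):
        status = fetch()
        if status.get("status") in TERMINAL_STATES:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("job did not finish within the polling budget")
```

Because job state lives in JSON files on disk, the same loop works from a different process than the one that called generate_start.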
FUNCTION | DESCRIPTION |
---|---|
`preflight` | Check environment and system prerequisites for video generation. |
`version` | Report package and dependency versions in a JSON-friendly format. |
`generate_start` | Start a video generation job and return immediately with job details. |
`generate_get` | Get the current status and results of a generation job. |
`generate_cancel` | Request cancellation of a running generation job. |
`list_models` | List available video generation models with their capabilities. |
`cache_create_from_files` | Create a cached content handle from local file paths. |
`cache_get` | Retrieve cached content metadata by cache name. |
`cache_list` | List all cached content entries with their metadata. |
`cache_update` | Update TTL or expiration time for a cached content entry. |
`cache_delete` | Delete a cached content entry by name. |
Classes¶
Cancelled
¶
Bases: Exception
Exception raised to signal cooperative cancellation of a generation job.
This exception is raised internally when a job's cancel_requested flag is set to True, allowing for graceful termination of long-running operations.
JobRecord
dataclass
¶
JobRecord(job_id: str, status: str, progress: int, message: str, created_at: float, updated_at: float, cancel_requested: bool, kind: str, params: Dict[str, Any], result: Optional[Dict[str, Any]] = None, error_code: Optional[str] = None, error_message: Optional[str] = None, remote_operation_id: Optional[str] = None)
Data class representing a generation job's state and metadata.
Stores all information about a generation job including status, progress, parameters, results, and error information. Used for job persistence and state management across processes.
ATTRIBUTE | DESCRIPTION |
---|---|
`job_id` | Unique identifier for the job. TYPE: `str` |
`status` | Current job status (one of pending, processing, complete, failed, cancelled). TYPE: `str` |
`progress` | Progress percentage (0-100). TYPE: `int` |
`message` | Current status message. TYPE: `str` |
`created_at` | Unix timestamp when the job was created. TYPE: `float` |
`updated_at` | Unix timestamp of the last update. TYPE: `float` |
`cancel_requested` | Whether cancellation has been requested. TYPE: `bool` |
`kind` | Generation type (one of text, image, video). TYPE: `str` |
`params` | Dictionary of generation parameters. TYPE: `Dict[str, Any]` |
`result` | Optional result data when the job completes. TYPE: `Optional[Dict[str, Any]]` |
`error_code` | Optional error code if the job fails. TYPE: `Optional[str]` |
`error_message` | Optional error description if the job fails. TYPE: `Optional[str]` |
`remote_operation_id` | Optional ID from the remote API operation. TYPE: `Optional[str]` |
METHOD | DESCRIPTION |
---|---|
`to_json` | Convert the job record to JSON string representation. |
JobStore
¶
JobStore(storage: Optional[StorageManager] = None)
File-based persistence layer for generation jobs.
Manages storage and retrieval of job records as JSON files on the filesystem. Each job is stored as a separate file under the `output/ops/{job_id}.json` path structure.
This design allows stateless MCP handlers to inspect job progress and results across different processes and sessions.
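The on-disk layout implies a path scheme like the following (a sketch of the documented `output/ops/{job_id}.json` convention, not the class's actual code):

```python
from pathlib import Path

def job_path(base_dir: Path, job_id: str) -> Path:
    """Location of a persisted job record under the documented layout."""
    return base_dir / "ops" / f"{job_id}.json"

# e.g. job_path(Path("output"), "abc123") -> output/ops/abc123.json
```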
ATTRIBUTE | DESCRIPTION |
---|---|
`storage` | StorageManager instance for base path management. |
`ops_dir` | Directory path where job files are stored. |
Initialize the job store with optional custom storage manager.
PARAMETER | DESCRIPTION |
---|---|
`storage` | Optional StorageManager instance. If None, creates a new one. TYPE: `Optional[StorageManager]` DEFAULT: `None` |
METHOD | DESCRIPTION |
---|---|
`create` | Create a new job record on disk. |
`read` | Read a job record from disk. |
`update` | Update a job record with new values and persist to disk. |
`request_cancel` | Request cancellation of a job by setting the cancel flag. |
Source code in src/veotools/api/mcp_api.py
Functions¶
create
¶
create(record: JobRecord) -> None
Create a new job record on disk.
PARAMETER | DESCRIPTION |
---|---|
`record` | JobRecord instance to persist. TYPE: `JobRecord` |

RAISES | DESCRIPTION |
---|---|
`OSError` | If file creation fails. |
Source code in src/veotools/api/mcp_api.py
read
¶
read(job_id: str) -> Optional[JobRecord]
Read a job record from disk.
PARAMETER | DESCRIPTION |
---|---|
`job_id` | The unique job identifier. TYPE: `str` |

RETURNS | DESCRIPTION |
---|---|
`Optional[JobRecord]` | The job record if found, None otherwise. |

RAISES | DESCRIPTION |
---|---|
`JSONDecodeError` | If the stored JSON is invalid. |
Source code in src/veotools/api/mcp_api.py
update
¶
Update a job record with new values and persist to disk.
PARAMETER | DESCRIPTION |
---|---|
`record` | The JobRecord instance to update. TYPE: `JobRecord` |
`**updates` | Key-value pairs of attributes to update. |

RETURNS | DESCRIPTION |
---|---|
`JobRecord` | The updated job record. |

RAISES | DESCRIPTION |
---|---|
`OSError` | If file write fails. |
Source code in src/veotools/api/mcp_api.py
request_cancel
¶
request_cancel(job_id: str) -> Optional[JobRecord]
Request cancellation of a job by setting the cancel flag.
PARAMETER | DESCRIPTION |
---|---|
`job_id` | The unique job identifier. TYPE: `str` |

RETURNS | DESCRIPTION |
---|---|
`Optional[JobRecord]` | Updated job record if found, None otherwise. |

RAISES | DESCRIPTION |
---|---|
`OSError` | If file write fails. |
Source code in src/veotools/api/mcp_api.py
Functions¶
preflight
¶
Check environment and system prerequisites for video generation.
Performs comprehensive system checks to ensure all required dependencies and configurations are available for successful video generation operations. This includes API key validation, FFmpeg availability, and filesystem permissions.
RETURNS | DESCRIPTION |
---|---|
`dict` | JSON-serializable dictionary containing: `ok` (bool): overall system readiness status; `gemini_api_key` (bool): whether GEMINI_API_KEY is set; `ffmpeg` (dict): FFmpeg installation status and version info; `write_permissions` (bool): whether the output directory is writable; `base_path` (str): absolute path to the base output directory. |
Examples:
>>> status = preflight()
>>> if not status['ok']:
... print("System not ready for generation")
... if not status['gemini_api_key']:
... print("Please set GEMINI_API_KEY environment variable")
... if not status['ffmpeg']['installed']:
... print("Please install FFmpeg for video processing")
>>> else:
... print(f"System ready! Output directory: {status['base_path']}")
Note
This function is designed to be called before starting any video generation operations to ensure the environment is properly configured.
Source code in src/veotools/api/mcp_api.py
version
¶
Report package and dependency versions in a JSON-friendly format.
Collects version information for veotools and its key dependencies, providing a comprehensive overview of the current software environment. Useful for debugging and support purposes.
RETURNS | DESCRIPTION |
---|---|
`dict` | Dictionary containing: `veotools` (str or None): veotools package version; `dependencies` (dict): versions of key Python packages (`google-genai`: Google GenAI library version; `opencv-python`: OpenCV library version); `ffmpeg` (str or None): FFmpeg version string when available (as used in the example below). |
Examples:
>>> versions = version()
>>> print(f"veotools: {versions['veotools']}")
>>> print(f"Google GenAI: {versions['dependencies']['google-genai']}")
>>> if versions['ffmpeg']:
... print(f"FFmpeg: {versions['ffmpeg']}")
>>> else:
... print("FFmpeg not available")
Note
Returns None for any package that cannot be found or queried. This is expected behavior and not an error condition.
Source code in src/veotools/api/mcp_api.py
generate_start
¶
Start a video generation job and return immediately with job details.
Initiates a video generation job in the background and returns immediately with job tracking information. The actual generation runs asynchronously and can be monitored using generate_get().
PARAMETER | DESCRIPTION |
---|---|
`params` | Generation parameters dictionary containing: `prompt` (str, required): text description for generation; `model` (str, optional): model to use (defaults to veo-3.0-fast-generate-preview); `input_image_path` (str, optional): path to input image for image-to-video; `input_video_path` (str, optional): path to input video for continuation; `extract_at` (float, optional): time offset for video continuation; `options` (dict, optional): additional model-specific options. |

RETURNS | DESCRIPTION |
---|---|
`dict` | Job information containing: `job_id` (str): unique job identifier for tracking; `status` (str): initial job status ("processing"); `progress` (int): initial progress (0); `message` (str): status message; `kind` (str): generation type (text, image, or video); `created_at` (float): job creation timestamp. |

RAISES | DESCRIPTION |
---|---|
`ValueError` | If required parameters are missing or invalid. |
`FileNotFoundError` | If input media files don't exist. |
Examples:
Start text-to-video generation:
>>> job = generate_start({"prompt": "A sunset over mountains"})
>>> print(f"Job started: {job['job_id']}")
Start image-to-video generation:
>>> job = generate_start({
... "prompt": "The person starts walking",
... "input_image_path": "photo.jpg"
... })
Start video continuation:
>>> job = generate_start({
... "prompt": "The action continues",
... "input_video_path": "scene1.mp4",
... "extract_at": -2.0
... })
Start with custom model and options:
>>> job = generate_start({
... "prompt": "A dancing robot",
... "model": "veo-2.0",
... "options": {"duration_seconds": 10, "enhance": True}
... })
Note
The job runs in a background thread. Use generate_get() to check progress and retrieve results when complete.
Source code in src/veotools/api/mcp_api.py
generate_get
¶
Get the current status and results of a generation job.
Retrieves the current state of a generation job including progress, status, and results if complete. This function can be called repeatedly to monitor job progress.
PARAMETER | DESCRIPTION |
---|---|
`job_id` | The unique job identifier returned by generate_start(). TYPE: `str` |

RETURNS | DESCRIPTION |
---|---|
`dict` | Job status information containing: `job_id` (str): the job identifier; `status` (str): current status (processing, complete, failed, or cancelled); `progress` (int): progress percentage (0-100); `message` (str): current status message; `kind` (str): generation type (text, image, or video); `remote_operation_id` (str or None): remote API operation ID if available; `updated_at` (float): last update timestamp; `result` (dict, present when status is "complete"); `error_code` and `error_message` (str, present when status is "failed"). |
`Dict[str, Any]` | If `job_id` is not found, returns `error_code` (str) "VALIDATION" and `error_message` (str) describing the error. |
Examples:
Check job progress:
>>> status = generate_get(job_id)
>>> print(f"Progress: {status['progress']}% - {status['message']}")
Wait for completion:
>>> import time
>>> while True:
... status = generate_get(job_id)
... if status['status'] == 'complete':
... print(f"Video ready: {status['result']['path']}")
... break
... elif status['status'] == 'failed':
... print(f"Generation failed: {status['error_message']}")
... break
... time.sleep(5)
Handle different outcomes:
>>> status = generate_get(job_id)
>>> if status['status'] == 'complete':
... video_path = status['result']['path']
... metadata = status['result']['metadata']
... print(f"Success! Video: {video_path}")
... print(f"Duration: {metadata['duration']}s")
... elif status['status'] == 'failed':
... print(f"Error ({status['error_code']}): {status['error_message']}")
... else:
... print(f"Still processing: {status['progress']}%")
Source code in src/veotools/api/mcp_api.py
generate_cancel
¶
Request cancellation of a running generation job.
Attempts to cancel a generation job that is currently processing. Cancellation is cooperative - the job will stop at the next progress update checkpoint. Already completed or failed jobs cannot be cancelled.
PARAMETER | DESCRIPTION |
---|---|
`job_id` | The unique job identifier to cancel. TYPE: `str` |

RETURNS | DESCRIPTION |
---|---|
`dict` | Cancellation response containing: `job_id` (str): the job identifier; `status` (str): "cancelling" if the request was accepted. |
`Dict[str, Any]` | If `job_id` is not found, returns `error_code` (str) "VALIDATION" and `error_message` (str) describing the error. |
Examples:
Cancel a running job:
>>> response = generate_cancel(job_id)
>>> if 'error_code' not in response:
... print(f"Cancellation requested for job {response['job_id']}")
... else:
... print(f"Cancel failed: {response['error_message']}")
Check if cancellation succeeded:
>>> generate_cancel(job_id)
>>> time.sleep(2)
>>> status = generate_get(job_id)
>>> if status['status'] == 'cancelled':
... print("Job successfully cancelled")
Note
Cancellation may not be immediate - the job will stop at the next progress checkpoint. Monitor with generate_get() to confirm cancellation.
Source code in src/veotools/api/mcp_api.py
list_models
¶
List available video generation models with their capabilities.
Retrieves information about available Veo models including their capabilities, default settings, and performance characteristics. Combines static model registry with optional remote model discovery.
PARAMETER | DESCRIPTION |
---|---|
`include_remote` | Whether to include models discovered from the remote API. If True, attempts to fetch additional model information from Google's API; if False, returns only the static model registry. TYPE: `bool` DEFAULT: `True` |

RETURNS | DESCRIPTION |
---|---|
`dict` | Model information containing `models` (list) of model dictionaries, each with: `id` (str): model identifier (e.g., "veo-3.0-fast-generate-preview"); `name` (str): human-readable model name; `capabilities` (dict): feature flags `supports_duration`, `supports_enhance`, `supports_fps`, `supports_audio` (all bool); `default_duration` (float or None): default video duration in seconds; `generation_time` (float or None): estimated generation time in seconds; `source` (str): data source ("static", "remote", or "static+remote"). |
Examples:
List all available models:
>>> models = list_models()
>>> for model in models['models']:
... print(f"{model['name']} ({model['id']})")
... if model['capabilities']['supports_duration']:
... print(f" Default duration: {model['default_duration']}s")
Find models with specific capabilities:
>>> models = list_models()
>>> audio_models = [
... m for m in models['models']
... if m['capabilities']['supports_audio']
... ]
>>> print(f"Found {len(audio_models)} models with audio support")
Use only static model registry:
>>> models = list_models(include_remote=False)
>>> static_models = [m for m in models['models'] if m['source'] == 'static']
Note
Results are cached for 10 minutes to improve performance. Remote model discovery failures are silently ignored - static registry is always available.
Source code in src/veotools/api/mcp_api.py
cache_create_from_files
¶
cache_create_from_files(model: str, files: list[str], system_instruction: Optional[str] = None) -> Dict[str, Any]
Create a cached content handle from local file paths.
Uploads local files to create a cached content context that can be reused across multiple API calls for efficiency. This is particularly useful when working with large files or when making multiple requests with the same context.
PARAMETER | DESCRIPTION |
---|---|
`model` | The model identifier to associate with the cached content. TYPE: `str` |
`files` | List of local file paths to upload and cache. TYPE: `list[str]` |
`system_instruction` | Optional system instruction to include with the cache. TYPE: `Optional[str]` DEFAULT: `None` |

RETURNS | DESCRIPTION |
---|---|
`dict` | Cache creation result containing: `name` (str): unique cache identifier for future reference; `model` (str): the associated model identifier; `system_instruction` (str or None): the system instruction if provided; `contents_count` (int): number of files successfully cached. |
`Dict[str, Any]` | On failure, returns `error_code` (str): error classification, and `error_message` (str): detailed error description. |
Examples:
Cache multiple reference images:
>>> result = cache_create_from_files(
... "veo-3.0-fast-generate-preview",
... ["ref1.jpg", "ref2.jpg", "ref3.jpg"],
... "These are reference images for style consistency"
... )
>>> if 'name' in result:
... cache_name = result['name']
... print(f"Cache created: {cache_name}")
... else:
... print(f"Cache creation failed: {result['error_message']}")
Cache video reference:
>>> result = cache_create_from_files(
...     "veo-3.0-fast-generate-preview",
...     ["reference_clip.mp4"]
... )
Note
Files are uploaded to Google's servers as part of the caching process. Ensure you have appropriate permissions for the files and comply with Google's usage policies.
Source code in src/veotools/api/mcp_api.py
cache_get
¶
Retrieve cached content metadata by cache name.
Fetches information about a previously created cached content entry, including lifecycle information like expiration times and creation dates.
PARAMETER | DESCRIPTION |
---|---|
`name` | The unique cache identifier returned by cache_create_from_files(). TYPE: `str` |

RETURNS | DESCRIPTION |
---|---|
`dict` | Cache metadata containing: `name` (str): the cache identifier; `ttl` (str or None): time-to-live if available; `expire_time` (str or None): expiration timestamp if available; `create_time` (str or None): creation timestamp if available. |
`Dict[str, Any]` | On failure, returns `error_code` (str): error classification, and `error_message` (str): detailed error description. |
Examples:
Check cache status:
>>> cache_info = cache_get(cache_name)
>>> if 'error_code' not in cache_info:
... print(f"Cache {cache_info['name']} is active")
... if cache_info.get('expire_time'):
... print(f"Expires: {cache_info['expire_time']}")
... else:
... print(f"Cache not found: {cache_info['error_message']}")
Note
Available metadata fields may vary depending on the Google GenAI library version and cache configuration.
Source code in src/veotools/api/mcp_api.py
cache_list
¶
List all cached content entries with their metadata.
Retrieves a list of all cached content entries accessible to the current API key, including their metadata and lifecycle information.
RETURNS | DESCRIPTION |
---|---|
`dict` | Cache listing containing `caches` (list) of entries, each with: `name` (str): cache identifier; `model` (str or None): associated model if available; `display_name` (str or None): human-readable name if available; `create_time`, `update_time`, `expire_time` (str or None): lifecycle timestamps if available; `usage_metadata` (dict or None): usage statistics if available. |
`Dict[str, Any]` | On failure, returns `error_code` (str): error classification, and `error_message` (str): detailed error description. |
Examples:
List all caches:
>>> cache_list_result = cache_list()
>>> if 'caches' in cache_list_result:
... for cache in cache_list_result['caches']:
... print(f"Cache: {cache['name']}")
... if cache.get('model'):
... print(f" Model: {cache['model']}")
... if cache.get('expire_time'):
... print(f" Expires: {cache['expire_time']}")
... else:
... print(f"Failed to list caches: {cache_list_result['error_message']}")
Find caches by model:
>>> result = cache_list()
>>> if 'caches' in result:
... veo3_caches = [
... c for c in result['caches']
... if c.get('model', '').startswith('veo-3')
... ]
Note
Metadata availability depends on the Google GenAI library version and individual cache configurations.
Source code in src/veotools/api/mcp_api.py
cache_update
¶
cache_update(name: str, ttl_seconds: Optional[int] = None, expire_time_iso: Optional[str] = None) -> Dict[str, Any]
Update TTL or expiration time for a cached content entry.
Modifies the lifecycle settings of an existing cached content entry. You can specify either a TTL (time-to-live) in seconds or an absolute expiration time, but not both.
PARAMETER | DESCRIPTION |
---|---|
`name` | The unique cache identifier to update. TYPE: `str` |
`ttl_seconds` | Optional time-to-live in seconds (e.g., 300 for 5 minutes). TYPE: `Optional[int]` DEFAULT: `None` |
`expire_time_iso` | Optional timezone-aware ISO-8601 datetime string (e.g., "2024-01-15T10:30:00Z"). TYPE: `Optional[str]` DEFAULT: `None` |

RETURNS | DESCRIPTION |
---|---|
`dict` | Update result containing: `name` (str): the cache identifier; `expire_time` (str or None): new expiration time if available; `ttl` (str or None): new TTL setting if available; `update_time` (str or None): update timestamp if available. |
`Dict[str, Any]` | On failure, returns `error_code` (str): error classification, and `error_message` (str): detailed error description. |
Examples:
Extend cache TTL to 1 hour:
>>> result = cache_update(cache_name, ttl_seconds=3600)
>>> if 'error_code' not in result:
... print(f"Cache TTL updated: {result.get('ttl')}")
... else:
... print(f"Update failed: {result['error_message']}")
Set specific expiration time:
>>> result = cache_update(cache_name, expire_time_iso="2024-01-15T10:30:00Z")
Extend by 30 minutes:
>>> result = cache_update(cache_name, ttl_seconds=1800)
Note
- Only one of ttl_seconds or expire_time_iso should be provided
- TTL is relative to the current time when the update is processed
- expire_time_iso should be in UTC timezone for consistency
Source code in src/veotools/api/mcp_api.py
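Since expire_time_iso must be timezone-aware, constructing it from a naive datetime is an easy mistake. A sketch of a safe construction in UTC (`expire_time_in` is a hypothetical helper):

```python
from datetime import datetime, timedelta, timezone

def expire_time_in(minutes: int) -> str:
    """Timezone-aware ISO-8601 expiry `minutes` from now, in UTC."""
    expiry = datetime.now(timezone.utc) + timedelta(minutes=minutes)
    return expiry.isoformat()

# e.g. cache_update(cache_name, expire_time_iso=expire_time_in(30))
```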
cache_delete
¶
Delete a cached content entry by name.
Permanently removes a cached content entry and all associated files from the system. This action cannot be undone.
PARAMETER | DESCRIPTION |
---|---|
`name` | The unique cache identifier to delete. TYPE: `str` |

RETURNS | DESCRIPTION |
---|---|
`dict` | Deletion result containing: `deleted` (bool): True if deletion was successful; `name` (str): the cache identifier that was deleted. |
`Dict[str, Any]` | On failure, returns `error_code` (str): error classification, and `error_message` (str): detailed error description. |
Examples:
Delete a specific cache:
>>> result = cache_delete(cache_name)
>>> if result.get('deleted'):
... print(f"Cache {result['name']} deleted successfully")
... else:
... print(f"Deletion failed: {result.get('error_message')}")
Delete with error handling:
>>> result = cache_delete("non-existent-cache")
>>> if 'error_code' in result:
... print(f"Error: {result['error_message']}")
Note
Deletion is permanent and cannot be reversed. Ensure you no longer need the cached content before calling this function.