77 Commits

Author SHA1 Message Date
affe933fec fix(video): Fix video timestamp handling and encoding parameter issues
- Normalize video start timestamps to zero, preventing a frozen first frame after muxing when the source material has a non-zero starting PTS
- Change the setpts filter expression to the setpts=PTS-STARTPTS form
- Apply the normalized timestamp handling to all speed-adjustment scenarios
- Add a video encoding parameter test file to ensure B-frames are disabled under every hardware acceleration mode
- Add B-frame-disable test cases for software, QSV, and CUDA acceleration
2026-03-06 17:30:34 +08:00
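The setpts change described above can be sketched as a tiny filter builder (the helper name is hypothetical, not the repository's actual function):

```python
def build_speed_filter(speed: float) -> str:
    """Build a video filter that zeroes the start timestamp before a speed change."""
    # Subtract STARTPTS first so a non-zero source start PTS cannot leave
    # the first frame frozen after muxing, then scale PTS for the speed change.
    return f"setpts=(PTS-STARTPTS)/{speed}"
```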
379e0bf999 fix(video): Refine video encoding parameter configuration
- Change maxrate handling: when maxrate is valid, set -b:v to maxrate
- Adjust the bufsize calculation from 2x to 1x maxrate, better suited to strict bitrate control for short videos
- Add QSV hardware-accelerated encoding parameter support
- Fix the condition that decides when the NVENC VBV model takes effect
- Update the peak-bitrate cap implementation for CRF/CQ modes
2026-03-04 18:05:00 +08:00
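A minimal sketch of the rate-control change above (the function name and kbps units are assumptions for illustration):

```python
def get_rate_control_args(maxrate_kbps):
    """Return FFmpeg rate-control args; empty when no cap is configured."""
    if not maxrate_kbps or maxrate_kbps <= 0:
        return []
    # When maxrate is valid, pin -b:v to maxrate and use a 1x-maxrate
    # bufsize (down from 2x) for strict rate control on short clips.
    rate = f"{maxrate_kbps}k"
    return ["-b:v", rate, "-maxrate", rate, "-bufsize", rate]
```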
d54e6e948f fix(video): Fix PTS/DTS regressions caused by B-frames in HLS streams
- Add the -bf 0 option to hardware-accelerated encoding parameters to disable B-frames
- Add the B-frame-disable setting for CUDA hardware acceleration
- Add the B-frame-disable parameter for software encoding
- Add a comment explaining that B-frames are disabled to avoid TS segment boundary issues
2026-03-04 11:25:08 +08:00
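Disabling B-frames is just an extra flag appended to the encode argument list; a sketch (hypothetical helper name):

```python
def with_bframes_disabled(encode_args):
    """Append -bf 0 so the encoder emits no B-frames."""
    # B-frames can push DTS behind PTS at TS segment boundaries in HLS
    # output, so they are disabled for every acceleration mode.
    return [*encode_args, "-bf", "0"]
```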
ca90336905 feat(video): Add a maximum-bitrate cap for video encoding
- Add a maxrate parameter to get_video_encode_args to cap peak bitrate
- Allow CRF/CQ modes to control quality and peak bitrate at the same time
- Automatically compute bufsize as 2x maxrate
- Update the encoding-parameter methods in VideoHandler to pass the bitrate cap through
- Apply the bitrate settings from the output spec in the compose and render modules
- Remove the static VIDEO_ENCODE_ARGS constant in favor of dynamic parameter generation
2026-03-04 10:03:33 +08:00
34e7d84d52 refactor(video): Rework the video cropping implementation
- Replace the crop_size field with a crop_scale float field to control the zoom factor
- Rename the face_pos field to crop_pos to unify crop-position control
- Remove the zoom_cut and crop_size fields to simplify the crop parameters
- Add a _build_crop_filter static method that centralizes crop-filter construction
- Improve the crop algorithm to crop precisely by target aspect ratio and scale factor
- Unify image and video cropping logic, eliminating duplicated code
- Parse the cropScale parameter defensively to guard against invalid values
- Improve crop-position parsing with float coordinates and exception handling
2026-02-27 13:37:42 +08:00
9dd5b6237d refactor(worker): Merge the render and TS packaging tasks into a single pipeline
- Merge the RENDER_SEGMENT_VIDEO and PACKAGE_SEGMENT_TS task types into RENDER_SEGMENT_TS
- Remove the standalone PackageSegmentTsHandler and fold its logic into RenderSegmentTsHandler
- Update the GPU resource allocation configuration in the task executor
- Update the unit tests for the new task type name
- Keep compatibility markers for the historical task types in the TaskType enum
- Update task-type references in the constant definitions and default capability configuration
- Add precise video trimming and TS packaging to the render handler
2026-02-11 14:30:24 +08:00
c2ece02ecf feat(audio): Add global audio fade-in/fade-out support
- Add get_global_audio_fade_in_ms and get_global_audio_fade_out_ms methods to the Task class
- Extend the audio mixing logic in prepare_audio.py to accept global fade parameters
- Add a _build_global_fade_filters static method that builds the global fade filters
- Update the mix-command construction to apply the global fades after mixing
- Support global fades in the no-BGM and BGM-only cases
- Append the global fade filters after the amix step, independent of segment-level audio effects
2026-02-11 11:53:34 +08:00
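The global fade construction above might look like this (names and filter layout are assumptions based on the commit message, not the actual implementation):

```python
def build_global_fade_filters(fade_in_ms: int, fade_out_ms: int, total_ms: int) -> str:
    """Build the afade filter chain applied after the amix step."""
    parts = []
    if fade_in_ms > 0:
        # Fade in from the very start of the mixed track.
        parts.append(f"afade=t=in:st=0:d={fade_in_ms / 1000}")
    if fade_out_ms > 0:
        # Fade out so the track reaches silence exactly at total_ms.
        start = max(0.0, (total_ms - fade_out_ms) / 1000)
        parts.append(f"afade=t=out:st={start}:d={fade_out_ms / 1000}")
    return ",".join(parts)
```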
952b8f5c01 feat(video): Add original-speed (ospeed) effect support
- Add an ospeed effect type to the constant definitions for legacy-template compatibility
- Implement get_ospeed_params in the task domain to parse the speed parameters
- Combine the speed and ospeed effect calculations in the video render handler
- Update duration calculations to account for ospeed speed changes
- Add ospeed parameter validation and boundary handling
- Add full ospeed unit-test coverage across scenarios
2026-02-10 12:20:20 +08:00
3cb2f8d02a test(handlers): Add unit tests for parallel transfers in the base handler
- Test the concurrent download behavior of download_files_parallel
- Verify that parallel uploads collect URLs correctly
- Test that downloads record the lock-wait-time span attribute
- Verify that lock wait time is zero for cache-less downloads
- Test that uploads set detailed span attributes
- Add parameter-parsing tests for video render effects
- Implement storage-service upload metrics
2026-02-07 18:29:54 +08:00
ef4cf549c4 feat(storage): Improve file upload with detailed metrics tracking
- Add an upload_file_with_metrics method to the storage service that returns the upload result plus detailed metrics
- Collect full metrics for uploads, including HTTP attempt count, retry count, and status codes
- Integrate OpenTelemetry tracing to record key upload attributes and error marks
- Improve the cache write-back logic, logging cache-write failures
- Track metrics for rclone uploads and record fallbacks to HTTP
- Optimize the local file-size check to avoid repeated filesystem calls
- Add more detailed error logs including upload method, status code, and error type
2026-02-07 18:29:20 +08:00
16ea45ad1c perf(cache): Optimize cache downloads and add performance metrics
- Implement cache-lock acquisition with wait-time accounting
- Add a get_or_download_with_metrics method that returns detailed performance metrics
- Record lock wait time, lock acquisition status, and cache-path usage in the tracing span
- Skip unnecessary lock acquisition on the cache-hit path
- Split cache-file readiness checks and copying into standalone methods
- Handle the case where the cache lock times out but a ready cache file is still usable
- Add unit tests covering cache locking and metrics reporting
2026-02-07 03:45:52 +08:00
ad4a9cc869 feat(video): Add a video zoom effect
- Add the zoom effect type and its parameter parsing to EffectConfig
- Implement get_zoom_params to retrieve the zoom parameters
- Update the docstrings with a zoom effect format example
- Extend the render logic to handle the zoom effect in filter_complex
- Implement the video-filter construction for the zoom effect
- Unify how the cameraShot and zoom effects stack
2026-02-07 01:25:20 +08:00
88aa3adca1 feat(base): Add per-task file-transfer concurrency
- Introduce ThreadPoolExecutor for parallel downloads and uploads
- Add download_files_parallel and upload_files_parallel methods
- Add the TASK_DOWNLOAD_CONCURRENCY and TASK_UPLOAD_CONCURRENCY configuration options
- Parse and validate the concurrency settings from environment variables
- Apply parallel downloads in several handlers to speed up file fetching
- Update the .env.example configuration template
- Remove the FFmpeg command log length limit
2026-02-07 00:38:43 +08:00
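The parallel-transfer idea reduces to mapping a transfer function over a bounded thread pool; a sketch (the real methods also record metrics and span attributes):

```python
from concurrent.futures import ThreadPoolExecutor


def download_files_parallel(urls, download_one, concurrency=4):
    """Download all URLs on a bounded pool; results keep the input order."""
    # pool.map preserves input ordering in its results even though the
    # individual downloads run concurrently.
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(download_one, urls))
```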
d955def63c feat(tracing): Improve logging and tracing for file downloads and uploads
- Prefix logs with task context to make individual tasks traceable
- Record file source URLs and upload URLs as span attributes
- Lower the storage service's info-level logs to debug to reduce noise
- Add debug logs for file access URLs
- Adjust the root log level so DEBUG records reach the handlers
- Fix the error-log format used after retries are exhausted
2026-02-07 00:26:01 +08:00
9d16d3c6af feat(gpu): Add QSV hardware acceleration support
- Implement QSV device initialization for Intel integrated graphics
- Separate the QSV and CUDA device initialization flows
- Add QSV device validation and configuration handling
- Update device detection to cover the different acceleration types
- Implement QSV device-name formatting and availability flagging
2026-02-07 00:25:43 +08:00
9b373dea34 feat(tracing): Integrate OpenTelemetry distributed tracing
- Trace file downloads, uploads, and FFmpeg execution in base.py
- Trace API requests and mark errors in api_client.py
- Trace lease renewals in lease_service.py
- Trace the full task execution path in task_executor.py
- Add a util/tracing.py module providing unified tracing-context management
- Add OTEL configuration options to .env.example
- Initialize and shut down tracing in index.py
2026-02-07 00:11:01 +08:00
c9a6133be9 fix(logger): Fix the log directory path after PyInstaller packaging
- Use sys.frozen to distinguish packaged builds from development runs
- In packaged builds, place logs next to the sys.executable directory
- In development, keep using the current file's directory for logs
- Prevent log files from disappearing with the temporary extraction directory after packaging
2026-02-06 14:02:14 +08:00
dd2d40c55b feat(logger): Rework the logging configuration
- Add RotatingFileHandler-based log rotation
- Configure separate handlers for the console, the full log file, and the error log file
- Apply per-handler log-level filtering
- Ensure the log directory exists and initialize logging correctly
- Remove the previous basic logging setup
2026-02-04 18:06:06 +08:00
c57524f174 feat(video): Add source-duration probing and last-frame freeze padding
- Probe the source video's actual duration and compute the effective duration after speed changes
- Detect and log a warning when the source video is too short
- Compute the shortfall and freeze the last frame to pad it out
- Update FFmpeg command construction to support duration padding
- Merge the transition-overlap freeze with the shortfall freeze
- Pass the parameters needed for duration probing
2026-02-04 17:59:46 +08:00
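The shortfall freeze can be expressed as a tpad filter that clones the last frame; a sketch (helper name assumed):

```python
def freeze_pad_filter(effective_ms: int, target_ms: int) -> str:
    """Return a tpad filter padding the clip to target_ms, or '' if long enough."""
    shortfall_ms = target_ms - effective_ms
    if shortfall_ms <= 0:
        return ""
    # Clone the last frame for the duration of the shortfall.
    return f"tpad=stop_mode=clone:stop_duration={shortfall_ms / 1000}"
```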
eeb21cada3 perf(render_video): Optimize the keyframe-interval strategy
- Compute the GOP size dynamically from the total frame count to avoid excessive keyframes
- Use the full frame count as the GOP for short videos so only the first frame is a keyframe
- Keep the one-keyframe-per-2-seconds policy for normal videos
- Force the first frame to be a keyframe
- Tune the minimum keyframe interval to improve encoding efficiency
2026-02-04 17:46:01 +08:00
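The dynamic GOP policy above can be sketched as (assuming the 2-second rule from the commit message; the function name is illustrative):

```python
def compute_gop(total_frames: int, fps: int) -> int:
    """Pick a GOP size: one keyframe per 2 s, or a single keyframe for short clips."""
    normal_gop = fps * 2  # one keyframe every 2 seconds
    if total_frames <= normal_gop:
        # Short clip: make the whole clip one GOP so only frame 0 is a keyframe.
        return max(total_frames, 1)
    return normal_gop
```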
a70573395b feat(cache): Harden the cache lock with process-liveness detection
- Write and read lock metadata recording the holder's PID and start time
- Check process liveness to prevent deadlocks caused by PID reuse
- Detect and automatically clean up stale locks
- Use psutil for system process monitoring
- Skip active lock files during cache cleanup
- Store lock metadata as JSON
2026-01-28 23:41:53 +08:00
ffb9d5390e feat(video): Add width/height parameters to video rendering
- Pass width and height parameters through render_video
- Add scale filter support for overlays
- Include the scaling logic in the filter_complex string
- Apply the specified dimensions correctly in the overlay flow
- Document the new parameters
2026-01-27 17:03:56 +08:00
6126856361 feat(cache): Cache uploaded files
- Add successfully uploaded files to the cache
- Add an add_to_cache method for local file caching
- Write cache entries atomically
- Use the lock mechanism to prevent concurrent conflicts
- Trigger the cache cleanup policy automatically
- Log cache operations in detail
2026-01-26 10:41:26 +08:00
a6263398ed fix(storage): Handle URL-encoded characters correctly
- Add URL decoding so encoded characters (such as %2F for /) are handled
- Fix the encoding issue in the URL matching logic
- Ensure replacements handle already-encoded path characters correctly
2026-01-24 23:33:50 +08:00
885b69233a feat(storage): Log upload destinations
- Import urllib.parse.unquote for URL decoding
- Log the upload destination URL when uploading via rclone
- Make upload debugging and monitoring easier
2026-01-24 23:29:37 +08:00
9158854411 fix(video): Fix path handling when merging MP4s
- Write only the TS file names (relative paths) in concat.txt
- Remove the unnecessary backslash replacement
- Ensure the FFmpeg concat input recognizes the file paths correctly
2026-01-24 22:59:35 +08:00
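The relative-path fix amounts to writing only basenames into the concat list, so FFmpeg resolves each entry relative to the list file's own directory; a sketch (function name assumed):

```python
import os


def build_concat_list(ts_files):
    """Build concat.txt content with basenames only, one 'file' directive per line."""
    lines = [f"file '{os.path.basename(p)}'" for p in ts_files]
    return "\n".join(lines) + "\n"
```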
634dc6c855 fix(cache): Stop cache cleanup from deleting files in use
- Add a file-lock check so in-use cache files are not deleted
- Detect the lock state by extracting the cache_key from the file name
- Verify the lock file's existence before deleting, to clean up safely
- Log skipped locked files at debug level
2026-01-24 22:57:57 +08:00
ca9093504f fix(video): Fix the freeze-frame effect implementation
- Update the cameraShot implementation comment to state the freeze-frame behavior explicitly
- Use tpad instead of freezeframes for a more accurate freeze
- Update the filter-chain parameters so the freeze duration is applied correctly
- Rename variables for readability
- Adjust the concat inputs so the video streams connect correctly
2026-01-21 16:48:05 +08:00
ceba9a17a4 fix(video): Fix the color-range shift after a video overlay ends
- Unify the color range at the end of the overlay chain, so the range does not flip from tv to pc when the overlay ends
- Add format=yuv420p and setrange=tv to keep the color range consistent
- Keep the picture stable after the video overlay finishes
2026-01-21 16:24:38 +08:00
7acae2f708 fix(video): Fix color-space conversion in hardware-accelerated processing
- Work around CUDA/QSV hardware download supporting only nv12 output
- Use a two-step conversion: download to nv12 first, then convert to yuv420p
- Ensure correct color-space conversion when blending with RGBA/YUVA
- Document the hardware-accelerated filter-chain formats
2026-01-21 16:14:40 +08:00
ed8dca543e fix(video): Fix color-space conversion in hardware-accelerated filters
- Change the post-download format from nv12 to yuv420p
- Ensure correct color-space conversion when blending overlays in RGBA/YUVA formats
- Fix color issues when complex filters (lut3d, overlay, crop, etc.) run on hardware surfaces
2026-01-21 16:12:53 +08:00
0a7a0dac89 feat(video): Support video-format overlay rendering
- Add support for .mov video overlays
- Make video overlays disappear automatically when they end
- Change the parameter from has_overlay to overlay_file
- Add an is_video_overlay parameter to distinguish image and video overlays
- Set the overlay filter parameters dynamically based on the file type
- Update function signatures and docstrings for the new overlay support
2026-01-21 15:24:58 +08:00
797507d24b feat(storage): Detect Content-Type for file uploads
- Add a file-extension to Content-Type mapping table
- Implement a function that derives the Content-Type from the file extension
- Promote the upload debug log to info level and include the Content-Type
- Replace the default application/octet-stream with the correct Content-Type
- Support mp
2026-01-21 15:01:22 +08:00
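The detection described above reduces to a dictionary lookup keyed by file extension; a sketch (the mapping entries shown are illustrative, not the repository's actual table):

```python
import os

CONTENT_TYPE_MAP = {
    ".mp4": "video/mp4",
    ".mov": "video/quicktime",
    ".ts": "video/mp2t",
    ".jpg": "image/jpeg",
    ".png": "image/png",
}


def get_content_type(filename, default="application/octet-stream"):
    """Map the file extension to a Content-Type, falling back to octet-stream."""
    ext = os.path.splitext(filename)[1].lower()
    return CONTENT_TYPE_MAP.get(ext, default)
```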
f7ca07b9db debug(storage): Add upload URL debug logging
- Log the HTTP URL at debug level during uploads
2026-01-21 14:56:55 +08:00
4d5e57f61b feat(task): Restrict GPU scheduling to task types that need it
- Add a GPU_REQUIRED_TASK_TYPES set listing the task types that need GPU acceleration
- Acquire a GPU device only for task types that require one
- Release GPU devices only when one was actually allocated
- Improve logging and resource management
2026-01-21 14:54:58 +08:00
b291f33486 feat(material-cache): Add a cache lock to prevent concurrent conflicts
- Implement cross-process cache lock acquisition and release
- Use UUID-based unique temporary file names during downloads to avoid concurrent overwrites
- Add a timeout and polling interval to bound lock waits
- Skip lock files and in-flight temporary files during cleanup
- Add tests verifying the cache lock works

fix(ffmpeg): Improve FFmpeg command execution and error handling

- Default the log level to error to reduce noise
- Fix how the subprocess run arguments are passed
- Truncate error messages safely to avoid decoding empty values

refactor(system-info): Optimize system-info collection and caching

- Cache FFmpeg version and codec information to avoid repeated queries
- Add a TTL cache for system information
- Cache the GPU-check status to avoid repeated detection
- Separate static system information from dynamic information

refactor(storage): Improve HTTP upload/download resource management

- Use context managers to ensure request connections are closed
- Build rclone commands as argument lists instead of strings
- Limit the length of captured stderr in error handling
- Handle responses carefully to avoid resource leaks
2026-01-19 20:03:18 +08:00
0cc96a968b feat(gpu): Add multi-GPU scheduling support
- Add a GPUDevice dataclass describing GPU devices
- Extend WorkerConfig with a gpu_devices option
- Read the multi-GPU configuration from the GPU_DEVICES environment variable
- Implement a round-robin GPUScheduler
- Extend FFmpeg argument generation to target a specific device
- Track the current GPU device in thread-local storage
- Integrate GPU device allocation into the task executor
- Auto-detect and validate GPU devices
- Add related logging and status monitoring
2026-01-19 18:34:03 +08:00
e5c5a181d3 feat(config): Add environment-variable loading
- Integrate python-dotenv to support .env files
- Call load_dotenv() in the main function
- Load the environment configuration automatically
2026-01-18 18:16:19 +08:00
f27490e9e1 feat(task): Support image materials in video rendering
- Add an IMAGE_EXTENSIONS constant listing the supported image formats
- Implement get_material_type, preferring the server-provided type and falling back to the URL suffix
- Add an is_image_material method to identify image materials
- Extend RenderSegmentVideoHandler with an image-to-video flow
- Implement _convert_image_to_video to turn a static image into a video
- Detect the material type before choosing the input file extension during download
- Add the image-to-video conversion logic
- Renumber the steps to match the new flow
- Improve the error messages for the HTTP/HTTPS protocol check
2026-01-18 13:52:46 +08:00
10c57a387f feat(config): Update the environment configuration template
- Change the default API_ENDPOINT to a local development address
- Add the WORKER_ID option
- Add the HW_ACCEL hardware-acceleration option
- Add the material cache options CACHE_ENABLED, CACHE_DIR, and CACHE_MAX_SIZE_GB
- Add the HTTP_REPLACE_MAP download URL mapping option
- Update the upload-method options and related parameters
- Regroup the options and their comments
2026-01-17 17:43:36 +08:00
a72e1ef1a1 fix(video): Escape colons in LUT paths
- Escape colons in LUT paths to avoid clashing with FFmpeg filter syntax
- Keep the existing backslash conversion
- Ensure LUT file paths parse correctly in FFmpeg commands
2026-01-17 16:57:16 +08:00
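Since `:` separates filter options in FFmpeg, a Windows drive letter like `C:` inside a LUT path must be escaped; a sketch of the fix (helper name assumed):

```python
def escape_lut_path(path: str) -> str:
    """Make a LUT file path safe for use inside an FFmpeg filter argument."""
    # Convert backslashes first (the pre-existing behavior), then escape
    # colons so they are not read as filter-option separators.
    return path.replace("\\", "/").replace(":", "\\:")
```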
095e203fe6 feat(task): Strengthen material URL handling and validation
- Document the priority logic of get_material_url in detail
- Add a get_source_ref method for retrieving the material source reference
- Add a get_bound_material_url method for retrieving the bound material URL
- Validate the HTTP URL format in the video render handler
- Return detailed errors and debug logs when a material URL is malformed
- Return E_SPEC_INVALID on validation failure, prompting the server to supply a valid boundMaterialUrl
2026-01-17 16:22:01 +08:00
fe757408b6 feat(cache): Add a material cache to avoid repeated downloads
- Add cache configuration options: enable flag, cache directory, and maximum size
- Implement a MaterialCache class for cache storage and retrieval
- Extend download_file with a cached download mode
- Add LRU-based cache cleanup to manage disk space
- Pick defaults that suit local development
- Add cache statistics and monitoring
2026-01-17 15:07:12 +08:00
d5cd0dca03 fix(api): Fix a null-value error when parsing the task list
- Change data.get('data', {}).get('tasks', []) to data.get('data', {}).get('tasks') or []
- Prevent a parsing exception when the tasks field is None
- Handle responses that lack a tasks field entirely
2026-01-17 14:35:58 +08:00
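The subtlety here is that `dict.get('tasks', [])` still returns `None` when the key is present but null; `or []` covers both the missing and the null case. A minimal sketch:

```python
def parse_tasks(data: dict) -> list:
    """Extract the task list, tolerating both a missing and a null 'tasks' field."""
    # .get('tasks', []) would return None for {"tasks": null}; `or []` fixes that.
    return data.get("data", {}).get("tasks") or []
```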
2bded11a03 feat(task): Add transition-related properties and methods
- Add get_transition_type, get_transition_ms, and has_transition for the transition type and duration
- Add get_overlap_tail_ms, get_transition_in_type, get_transition_in_ms, and related methods for incoming transitions
- Add get_transition_out_type, get_transition_out_ms, has_transition_out, and related methods for outgoing transitions
- Add get_overlap_head_ms and get_overlap_tail_ms_v2 to compute head and tail overlap durations
- Use the new transition methods to compute overlap in the render-video handler
2026-01-14 09:30:09 +08:00
71bd2e59f9 feat(video): Add hardware acceleration support
- Define hardware-acceleration type constants (none, qsv, cuda)
- Configure QSV and CUDA encoding parameters and presets
- Add hardware-acceleration options to WorkerConfig
- Derive encoding parameters dynamically from the acceleration type
- Add FFmpeg hardware decoding and filter arguments
- Detect and report the system's hardware-acceleration support
- Report the acceleration configuration and support status through the API client
2026-01-13 13:34:27 +08:00
a26c44a3cd feat(video): Add video effect processing
- Define the supported effect types (camera freeze, zoom, blur) in the constants module
- Create an Effect dataclass in the task domain that parses effect configs from strings
- Parse cameraShot parameters with default handling
- Extend RenderSpec with a method returning the effect list
- Integrate effect-filter construction into the video render handler
- Build the cameraShot filter_complex graph
- Add fps parameter support and improve overlay detection
- Make effects compatible with transition overlap
2026-01-13 09:31:39 +08:00
9c6186ecd3 feat(video): Add video transition support
- Add COMPOSE_TRANSITION to TASK_TYPES
- Define the TRANSITION_TYPES constant covering multiple transition effects
- Add COMPOSE_TRANSITION to the TaskType enum
- Create a TransitionConfig dataclass for transition settings
- Add transition_in and transition_out properties to RenderSpec
- Add transition-related methods to the Task class
- Add a ComposeTransitionHandler for transition composition tasks
- Extend PackageSegmentTsHandler to package transition segments
- Extend RenderSegmentVideoHandler to generate overlap regions
- Register the transition handler in TaskExecutor
2026-01-12 22:41:22 +08:00
2911a4eff8 refactor(core): Remove the legacy FFmpeg business logic and rework the constants
- Delete the legacy biz/ffmpeg.py and biz/task.py business modules
- Delete the entity/ffmpeg.py FFmpeg task entity class
- Delete the legacy config/__init__.py initialization
- Update constant/__init__.py from split v1/v2 definitions to a unified version
- Switch handlers/base.py from OSS-related imports to the storage service
- Add a subprocess_args utility for cross-platform process arguments
- Add a probe_video_info function for probing video information
- Add a probe_duration_json function for probing media duration
2026-01-12 17:01:18 +08:00
24de32e6bb feat(render): Implement the render-system v2 core architecture
- Define the v2-supported task type constants
- Bump the software version to 0.0.9
- Define the unified v2 audio/video encoding parameters
- Implement the get_sys_info_v2 system-information utility
- Add get_capabilities and _get_gpu_info
- Create the core module with the TaskHandler abstract base class
- Add render-system design documents: cluster architecture, v2 PRD, and Worker PRD
- Implement the task-handler abstract base class and its interface contract
2026-01-12 17:01:18 +08:00
357c0afb3b feat(util): Support common FFmpeg arguments via environment variable
- Accept common FFmpeg arguments through the FFMPEG_COMMON_ARGS environment variable
- Merge those arguments into each FFmpeg invocation
- Keep the existing argument-passing mechanism unchanged
2026-01-10 22:51:48 +08:00
8de0564fef feat(biz): Allow the FFmpeg task launcher to read max workers from the environment
- Change the start_ffmpeg_task max_workers parameter default to None
- Read the FFMPEG_MAX_WORKERS environment variable
- Use the environment value when max_workers is None, otherwise use the passed value
- Keep the tracer and task-analysis behavior unchanged
2026-01-10 18:28:00 +08:00
c61f6d7521 refactor(ffmpeg): Improve FFmpeg task handling
- Import as_completed to support concurrent task execution
- Run subtasks on a thread pool to improve throughput
- Add a task-count metric
- Fail fast, cancelling the remaining tasks on the first error
- Strengthen exception handling and error logging
- Add a max-worker-count parameter
2026-01-01 00:09:31 +08:00
4ef57a208e feat(oss): Add HTTP_REPLACE_MAP environment variable support
- Implement _apply_http_replace_map for URL replacement
- Apply HTTP_REPLACE_MAP when uploading files
- Add an http_url attribute to the trace span
- Allow URL replacement rules to be configured via the environment
2025-12-31 17:28:38 +08:00
a415d8571d chore(constant): Bump the software version to 0.0.8
- Update SOFTWARE_VERSION from 0.0.6 to 0.0.8

feat(util/oss): Support a custom rclone configuration file path

- Read the RCLONE_CONFIG_FILE environment variable to select the config file
- Fall back to rclone.conf when RCLONE_CONFIG_FILE is empty
- Pass --config to rclone so the selected file is applied
2025-12-12 16:00:34 +08:00
4af52d5a54 fix(ffmpeg): Fix video trim timestamps
- Add a setpts filter after trim to reset timestamps
- Fix timestamp computation for the skip, tail, and show effects
- Ensure trimmed segments start at timestamp zero
- Avoid playback problems caused by discontinuous timestamps
2025-12-09 18:07:48 +08:00
d7704005b6 feat(entity/ffmpeg.py): Add grid4 effect support
Add support for the grid4 effect in ffmpeg.py. It lets users build a four-pane grid layout in which each pane shows a different clip, with configurable delays for richer visuals. Changes:
- Parse the grid4 effect parameters, defaulting to 1 when none are given
- Split the video stream into four parts at the given or default resolution
- Apply scaling and time delays to each split stream
- Create a black background and use the overlay filter to place each processed stream, forming the final four-pane grid
2025-09-18 17:01:03 +08:00
f85ccea933 feat(constant): Bump the software version to 0.0.6
- Update SOFTWARE_VERSION from '0.0.5' to '0.0.6' in constant/__init__.py
- Add new video-effect handling to entity/ffmpeg.py, supporting showing a clip for a specific duration
2025-09-18 09:42:57 +08:00
0c7181911e refactor(entity): Improve speed-change and zoom handling
- Switch the speed-change implementation from minterpolate to setpts, avoiding PTS conflicts
- Simplify the zoom-effect handling
2025-09-12 18:01:41 +08:00
cf43f6379e feat(ffmpeg): Use minterpolate instead of fps for speed changes
- Switch speed changes from direct frame-rate adjustment to the minterpolate filter
- Set the fps and mi_mode parameters for smooth slow motion
- Avoid the PTS conflicts that direct frame-rate changes could cause
2025-09-12 16:50:53 +08:00
ce8854404b fix(entity): Fix PTS conflicts during slow motion
- Implement slow motion by changing the frame rate
- Avoid the PTS conflicts caused by the setpts filter
- Restructure the code for readability and maintainability
2025-09-12 14:54:01 +08:00
c36e838d4f fix(entity): Fix the zoom effect
fix(entity): Remove setpts from the FFmpeg scale and crop filters

Remove the setpts=PTS-STARTPTS directive from the FFmpeg scale, crop, and tail filters. In some cases, such as with the zoompan filter, it could break video processing; removing it improves stability and correctness.

fix(entity): Fix the center-point calculation in the zoom effect

- Correct the center-point formulas for static and dynamic zoom separately
- Keep the image center correct across zoom factors

fix(entity): Fix the FFmpeg zoompan filter parameters

- Change the zoompan 'z=' parameter to a unified format, fixing a parsing problem FFmpeg could hit on some videos

feat(zoom): Support a custom center point for the zoom effect

- Parse posJson to compute and set the zoom center point
- Replace the scale and crop filters with zoompan to support dynamic zoom
- Improve static zoom so it applies across the whole video duration

fix(entity): Fix the FFmpeg command for the zoom effect

- Add escape characters to zoom_expr to fix FFmpeg parsing
- Adjust the scale and crop filter parameters for more accurate processing
2025-09-12 13:20:10 +08:00
1571934943 fix(entity): Fix center-crop math and harden JSON parsing
- Handle exceptions when parsing posJson so invalid JSON cannot crash the program
- Fix the rounding in the center-crop calculation so the crop position is accurate
2025-09-07 01:45:56 +08:00
35693ac83c build(constant): Bump the software version
- Update SOFTWARE_VERSION from '0.0.4' to '0.0.5'
2025-09-06 15:44:24 +08:00
d154f2c74d feat(api): Add the zoom_cut template attribute
Add a zoom_cut attribute to the template info, exposing the template's zoom-crop settings.
bd0c44b17f Tail effect 2025-08-12 14:22:26 +08:00
432472fd19 Fix a logic issue 2025-08-09 10:57:45 +08:00
8f0250df43 Pass the skip_if_exist default via argv 2025-08-08 13:58:01 +08:00
0209c5de3f Render a single template independently 2025-08-08 13:58:01 +08:00
51e7d21f84 Frame skipping and zoom 2025-08-08 13:58:01 +08:00
0770cb361d vsync 2025-08-05 17:43:01 +08:00
2f694da5fd HEVC support and template re-download 2025-08-05 12:43:27 +08:00
bf912037d1 LUT support 2025-08-01 17:24:14 +08:00
1119a7b030 Improve the onlyIf check 2025-08-01 17:24:14 +08:00
5282e58a10 Support zoom_cut 2025-07-21 10:58:07 +08:00
f7141e5d4e Thread-span support 2025-07-19 14:07:39 +08:00
f23bcfdd25 Reduce the chunk size 2025-07-18 13:54:38 +08:00
46 changed files with 7602 additions and 1484 deletions


@@ -1,11 +1,73 @@
TEMPLATE_DIR=template/
API_ENDPOINT=https://zhentuai.com/task/v1
# ===================
# API configuration
# ===================
API_ENDPOINT=http://127.0.0.1:18084/api
ACCESS_KEY=TEST_ACCESS_KEY
WORKER_ID=1
# ===================
# Directory configuration
# ===================
TEMP_DIR=tmp/
#REDIRECT_TO_URL=https://renderworker-deuvulkhes.cn-shanghai.fcapp.run/
# QSV
ENCODER_ARGS="-c:v h264_qsv -global_quality 28 -look_ahead 1"
# NVENC
#ENCODER_ARGS="-c:v h264_nvenc -cq:v 24 -preset:v p7 -tune:v hq -profile:v high"
UPLOAD_METHOD="rclone"
RCLONE_REPLACE_MAP="https://oss.zhentuai.com|alioss://frametour-assets,https://frametour-assets.oss-cn-shanghai.aliyuncs.com|alioss://frametour-assets"
# ===================
# Concurrency and scheduling
# ===================
#MAX_CONCURRENCY=4              # Maximum concurrent tasks
#HEARTBEAT_INTERVAL=5           # Heartbeat interval (seconds)
#LEASE_EXTENSION_THRESHOLD=60   # Lease renewal threshold (seconds): how early to renew
#LEASE_EXTENSION_DURATION=300   # Lease renewal duration (seconds)
# ===================
# Capabilities
# ===================
# Supported task types, comma-separated; all types by default
#CAPABILITIES=RENDER_SEGMENT_VIDEO,PREPARE_JOB_AUDIO,PACKAGE_SEGMENT_TS,FINALIZE_MP4
# ===================
# Timeouts
# ===================
#FFMPEG_TIMEOUT=3600    # FFmpeg execution timeout (seconds)
#DOWNLOAD_TIMEOUT=300   # Download timeout (seconds)
#UPLOAD_TIMEOUT=600     # Upload timeout (seconds)
#TASK_DOWNLOAD_CONCURRENCY=4   # Parallel downloads per task (1-16)
#TASK_UPLOAD_CONCURRENCY=2     # Parallel uploads per task (1-16)
# ===================
# Hardware acceleration and multi-GPU
# ===================
# Hardware acceleration type: none, qsv, cuda
HW_ACCEL=none
# GPU device list (comma-separated device indices)
# When unset: auto-detect all devices
# Single device example: GPU_DEVICES=0
# Multi-device example: GPU_DEVICES=0,1,2
#GPU_DEVICES=0,1
# ===================
# Material cache
# ===================
#CACHE_ENABLED=true    # Whether to enable the material cache
#CACHE_DIR=            # Cache directory, defaults to TEMP_DIR/cache
#CACHE_MAX_SIZE_GB=0   # Maximum cache size (GB); 0 means unlimited
# ===================
# URL mapping (intranet download acceleration)
# ===================
# Format: src1|dst1,src2|dst2
#HTTP_REPLACE_MAP="https://cdcdn.zhentuai.com|http://192.168.10.254:9000"
# ===================
# Upload configuration
# ===================
# Upload method: HTTP by default, rclone optional
#UPLOAD_METHOD=rclone
#RCLONE_CONFIG_FILE=   # Path to the rclone configuration file
#RCLONE_REPLACE_MAP="https://oss.example.com|alioss://bucket"
# ===================
# OTel tracing
# ===================
# Whether to enable OTel tracing (default true)
#OTEL_ENABLED=true

.gitignore vendored

@@ -32,3 +32,5 @@ target/
venv/
cython_debug/
.env
.serena
.claude

app.py

@@ -1,40 +0,0 @@
import time
import flask
import config
import biz.task
import template
from telemetry import init_opentelemetry
from template import load_local_template
from util import api

load_local_template()

import logging

LOGGER = logging.getLogger(__name__)
init_opentelemetry(batch=False)
app = flask.Flask(__name__)


@app.get('/health/check')
def health_check():
    return api.sync_center()


@app.post('/')
def do_nothing():
    return "NOOP"


@app.post('/<task_id>')
def do_task(task_id):
    task_info = api.get_task_info(task_id)
    local_template_info = template.get_template_def(task_info.get("templateId"))
    template_info = api.get_template_info(task_info.get("templateId"))
    if local_template_info:
        if local_template_info.get("updateTime") != template_info.get("updateTime"):
            template.download_template(task_info.get("templateId"))
    biz.task.start_task(task_info)
    return "OK"


if __name__ == '__main__':
    app.run(host="0.0.0.0", port=9998)


@@ -1,155 +0,0 @@
import json
import os.path
import time
from concurrent.futures import ThreadPoolExecutor

from opentelemetry.trace import Status, StatusCode

from entity.ffmpeg import FfmpegTask
import logging
from util import ffmpeg, oss
from util.ffmpeg import fade_out_audio
from telemetry import get_tracer

logger = logging.getLogger('biz/ffmpeg')


def parse_ffmpeg_task(task_info, template_info):
    tracer = get_tracer(__name__)
    with tracer.start_as_current_span("parse_ffmpeg_task") as span:
        tasks = []
        # Middle segments
        task_params_str = task_info.get("taskParams", "{}")
        span.set_attribute("task_params", task_params_str)
        task_params: dict = json.loads(task_params_str)
        task_params_orig = json.loads(task_params_str)
        with tracer.start_as_current_span("parse_ffmpeg_task.download_all") as sub_span:
            with ThreadPoolExecutor(max_workers=8) as executor:
                param_list: list[dict]
                for param_list in task_params.values():
                    for param in param_list:
                        url = param.get("url")
                        if url.startswith("http"):
                            _, fn = os.path.split(url)
                            executor.submit(oss.download_from_oss, url, fn, True)
                executor.shutdown(wait=True)
        for part in template_info.get("video_parts"):
            source, ext_data = parse_video(part.get('source'), task_params, template_info)
            if not source:
                logger.warning("no video found for part: " + str(part))
                continue
            only_if = part.get('only_if', '')
            if only_if:
                if not check_placeholder_exist(only_if, task_params_orig):
                    logger.info("because only_if exist, placeholder: %s not exist, skip part: %s", only_if, part)
                    continue
            sub_ffmpeg_task = FfmpegTask(source)
            sub_ffmpeg_task.resolution = template_info.get("video_size", "")
            sub_ffmpeg_task.annexb = True
            sub_ffmpeg_task.ext_data = ext_data or {}
            sub_ffmpeg_task.frame_rate = template_info.get("frame_rate", 25)
            sub_ffmpeg_task.center_cut = part.get("crop_mode", None)
            for effect in part.get('effects', []):
                sub_ffmpeg_task.add_effect(effect)
            for lut in part.get('filters', []):
                sub_ffmpeg_task.add_lut(os.path.join(template_info.get("local_path"), lut))
            for audio in part.get('audios', []):
                sub_ffmpeg_task.add_audios(os.path.join(template_info.get("local_path"), audio))
            for overlay in part.get('overlays', []):
                sub_ffmpeg_task.add_overlay(os.path.join(template_info.get("local_path"), overlay))
            tasks.append(sub_ffmpeg_task)
        output_file = "out_" + str(time.time()) + ".mp4"
        task = FfmpegTask(tasks, output_file=output_file)
        task.resolution = template_info.get("video_size", "")
        overall = template_info.get("overall_template")
        task.center_cut = template_info.get("crop_mode", None)
        task.frame_rate = template_info.get("frame_rate", 25)
        # if overall.get('source', ''):
        #     source, ext_data = parse_video(overall.get('source'), task_params, template_info)
        #     task.add_inputs(source)
        #     task.ext_data = ext_data or {}
        for effect in overall.get('effects', []):
            task.add_effect(effect)
        for lut in overall.get('filters', []):
            task.add_lut(os.path.join(template_info.get("local_path"), lut))
        for audio in overall.get('audios', []):
            task.add_audios(os.path.join(template_info.get("local_path"), audio))
        for overlay in overall.get('overlays', []):
            task.add_overlay(os.path.join(template_info.get("local_path"), overlay))
        return task


def parse_video(source, task_params, template_info):
    if source.startswith('PLACEHOLDER_'):
        placeholder_id = source.replace('PLACEHOLDER_', '')
        new_sources = task_params.get(placeholder_id, [])
        _pick_source = {}
        if type(new_sources) is list:
            if len(new_sources) == 0:
                logger.debug("no video found for placeholder: " + placeholder_id)
                return None, _pick_source
            else:
                _pick_source = new_sources.pop(0)
                new_sources = _pick_source.get("url")
        if new_sources.startswith("http"):
            _, source_name = os.path.split(new_sources)
            oss.download_from_oss(new_sources, source_name, True)
            return source_name, _pick_source
        return new_sources, _pick_source
    return os.path.join(template_info.get("local_path"), source), None


def check_placeholder_exist(placeholder_id, task_params):
    if placeholder_id in task_params:
        new_sources = task_params.get(placeholder_id, [])
        if type(new_sources) is list:
            if len(new_sources) == 0:
                return False
            else:
                return True
        return True
    return False


def start_ffmpeg_task(ffmpeg_task):
    tracer = get_tracer(__name__)
    with tracer.start_as_current_span("start_ffmpeg_task") as span:
        for task in ffmpeg_task.analyze_input_render_tasks():
            result = start_ffmpeg_task(task)
            if not result:
                return False
        ffmpeg_task.correct_task_type()
        span.set_attribute("task.type", ffmpeg_task.task_type)
        span.set_attribute("task.center_cut", str(ffmpeg_task.center_cut))
        span.set_attribute("task.frame_rate", ffmpeg_task.frame_rate)
        span.set_attribute("task.resolution", str(ffmpeg_task.resolution))
        span.set_attribute("task.ext_data", json.dumps(ffmpeg_task.ext_data))
        result = ffmpeg.start_render(ffmpeg_task)
        if not result:
            span.set_status(Status(StatusCode.ERROR))
            return False
        span.set_status(Status(StatusCode.OK))
        return True


def clear_task_tmp_file(ffmpeg_task):
    for task in ffmpeg_task.analyze_input_render_tasks():
        clear_task_tmp_file(task)
    try:
        if os.getenv("TEMPLATE_DIR") not in ffmpeg_task.get_output_file():
            os.remove(ffmpeg_task.get_output_file())
            logger.info("delete tmp file: " + ffmpeg_task.get_output_file())
        else:
            logger.info("skip delete template file: " + ffmpeg_task.get_output_file())
    except OSError:
        logger.warning("delete tmp file failed: " + ffmpeg_task.get_output_file())
        return False
    return True


def probe_video_info(ffmpeg_task):
    # Probe the video's width, height, and duration
    return ffmpeg.probe_video_info(ffmpeg_task.get_output_file())


@@ -1,44 +0,0 @@
import json

from opentelemetry.trace import Status, StatusCode

from biz.ffmpeg import parse_ffmpeg_task, start_ffmpeg_task, clear_task_tmp_file, probe_video_info, fade_out_audio
from telemetry import get_tracer
from template import get_template_def
from util import api


def start_task(task_info):
    tracer = get_tracer(__name__)
    with tracer.start_as_current_span("start_task") as span:
        task_info = api.normalize_task(task_info)
        span.set_attribute("task", json.dumps(task_info))
        span.set_attribute("scenicId", task_info.get("scenicId", "?"))
        span.set_attribute("templateId", task_info.get("templateId"))
        template_info = get_template_def(task_info.get("templateId"))
        api.report_task_start(task_info)
        ffmpeg_task = parse_ffmpeg_task(task_info, template_info)
        result = start_ffmpeg_task(ffmpeg_task)
        if not result:
            span.set_status(Status(StatusCode.ERROR))
            return api.report_task_failed(task_info)
        width, height, duration = probe_video_info(ffmpeg_task)
        span.set_attribute("probe.width", width)
        span.set_attribute("probe.height", height)
        span.set_attribute("probe.duration", duration)
        # Audio fade-out
        new_fn = fade_out_audio(ffmpeg_task.get_output_file(), duration)
        ffmpeg_task.set_output_file(new_fn)
        oss_result = api.upload_task_file(task_info, ffmpeg_task)
        if not oss_result:
            span.set_status(Status(StatusCode.ERROR))
            return api.report_task_failed(task_info)
        # We already probed the video's width, height, and duration
        clear_task_tmp_file(ffmpeg_task)
        api.report_task_success(task_info, videoInfo={
            "width": width,
            "height": height,
            "duration": duration
        })
        span.set_status(Status(StatusCode.OK))
        return None


@@ -1,16 +0,0 @@
import datetime
import logging
from logging.handlers import TimedRotatingFileHandler
from dotenv import load_dotenv
load_dotenv()
logging.basicConfig(level=logging.INFO)
root_logger = logging.getLogger()
rf_handler = TimedRotatingFileHandler('all_log.log', when='midnight')
rf_handler.setFormatter(logging.Formatter("[%(asctime)s][%(name)s]%(levelname)s - %(message)s"))
rf_handler.setLevel(logging.DEBUG)
f_handler = TimedRotatingFileHandler('error.log', when='midnight')
f_handler.setLevel(logging.ERROR)
f_handler.setFormatter(logging.Formatter("[%(asctime)s][%(name)s][:%(lineno)d]%(levelname)s - - %(message)s"))
root_logger.addHandler(rf_handler)
root_logger.addHandler(f_handler)


@@ -1,8 +1,99 @@
SUPPORT_FEATURE = (
    'simple_render_algo',
    'gpu_accelerate',
    'gpu_accelerate',
    'rapid_download',
    'rclone_upload',
# -*- coding: utf-8 -*-
"""
Constant definitions

v2 constants for the Render Worker v2 API.
"""

# Software version
SOFTWARE_VERSION = '2.0.0'

# Supported task types
TASK_TYPES = (
    'RENDER_SEGMENT_TS',
    'COMPOSE_TRANSITION',
    'PREPARE_JOB_AUDIO',
    'FINALIZE_MP4',
)
SOFTWARE_VERSION = '0.0.2'

# Default capabilities
DEFAULT_CAPABILITIES = list(TASK_TYPES)

# Supported transition types (mapped to FFmpeg xfade parameters)
TRANSITION_TYPES = (
    'fade',        # Crossfade (default)
    'dissolve',    # Dissolve
    'wipeleft',    # Wipe left
    'wiperight',   # Wipe right
    'wipeup',      # Wipe up
    'wipedown',    # Wipe down
    'slideleft',   # Slide left
    'slideright',  # Slide right
    'slideup',     # Slide up
    'slidedown',   # Slide down
)

# Supported effect types
EFFECT_TYPES = (
    'cameraShot',  # Camera freeze effect: freeze the frame at a given time
    'zoom',        # Zoom effect (reserved)
    'blur',        # Blur effect (reserved)
    'ospeed',      # Original-speed effect (legacy-template compatibility)
)

# Hardware acceleration types
HW_ACCEL_NONE = 'none'  # Pure software encode/decode
HW_ACCEL_QSV = 'qsv'    # Intel Quick Sync Video (integrated/discrete graphics)
HW_ACCEL_CUDA = 'cuda'  # NVIDIA NVENC/NVDEC
HW_ACCEL_TYPES = (HW_ACCEL_NONE, HW_ACCEL_QSV, HW_ACCEL_CUDA)

# Unified video encoding parameters (software encoding, from the integration docs)
VIDEO_ENCODE_PARAMS = {
    'codec': 'libx264',
    'preset': 'medium',
    'profile': 'main',
    'level': '4.0',
    'crf': '23',
    'pix_fmt': 'yuv420p',
}

# QSV hardware-accelerated video encoding parameters (Intel Quick Sync)
VIDEO_ENCODE_PARAMS_QSV = {
    'codec': 'h264_qsv',
    'preset': 'medium',      # QSV supports: veryfast, faster, fast, medium, slow, slower, veryslow
    'profile': 'main',
    'level': '4.0',
    'global_quality': '23',  # QSV uses global_quality instead of crf (1-51, lower is higher quality)
    'look_ahead': '1',       # Enable look-ahead analysis for better quality
    'pix_fmt': 'nv12',       # QSV hardware surface format
}

# CUDA hardware-accelerated video encoding parameters (NVIDIA NVENC)
VIDEO_ENCODE_PARAMS_CUDA = {
    'codec': 'h264_nvenc',
    'preset': 'p4',        # NVENC presets p1-p7 (p1 fastest, p7 slowest/highest quality); p4 ≈ medium
    'profile': 'main',
    'level': '4.0',
    'rc': 'vbr',           # Rate control mode: variable bitrate
    'cq': '23',            # Quality value for constant-quality mode (0-51)
    'pix_fmt': 'yuv420p',  # NVENC input format (converted automatically)
}

# Unified audio encoding parameters
AUDIO_ENCODE_PARAMS = {
    'codec': 'aac',
    'bitrate': '128k',
    'sample_rate': '48000',
    'channels': '2',
}

# Error codes
ERROR_CODES = {
    'E_INPUT_UNAVAILABLE': 'Material not accessible',
    'E_FFMPEG_FAILED': 'FFmpeg execution failed',
    'E_UPLOAD_FAILED': 'Upload failed',
    'E_SPEC_INVALID': 'Invalid render spec',
    'E_TIMEOUT': 'Execution timed out',
    'E_UNKNOWN': 'Unknown error',
}

core/__init__.py Normal file

@@ -0,0 +1,12 @@
# -*- coding: utf-8 -*-
"""
Core abstraction layer

Contains core interface definitions such as the task-handler abstract base class.
"""
from core.handler import TaskHandler

__all__ = [
    'TaskHandler',
]

core/handler.py Normal file

@@ -0,0 +1,79 @@
# -*- coding: utf-8 -*-
"""
Task handler abstract base class

Defines the interface contract for task handlers.
"""
from abc import ABC, abstractmethod
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from domain.task import Task, TaskType
    from domain.result import TaskResult


class TaskHandler(ABC):
    """
    Task handler abstract base class

    Every task handler must inherit from this class and implement the required methods.
    """

    @abstractmethod
    def handle(self, task: 'Task') -> 'TaskResult':
        """
        Main task-processing entry point

        Args:
            task: The task entity

        Returns:
            TaskResult: The task result (success or failure)
        """
        pass

    @abstractmethod
    def get_supported_type(self) -> 'TaskType':
        """
        Return the task type this handler supports

        Returns:
            TaskType: The supported task type enum value
        """
        pass

    def before_handle(self, task: 'Task') -> None:
        """
        Pre-processing hook (optional override)

        For preparation before the task runs, such as logging or resource checks.

        Args:
            task: The task entity
        """
        pass

    def after_handle(self, task: 'Task', result: 'TaskResult') -> None:
        """
        Post-processing hook (optional override)

        For cleanup after the task runs, such as releasing resources or recording statistics.

        Args:
            task: The task entity
            result: The task result
        """
        pass

    def validate_task(self, task: 'Task') -> bool:
        """
        Validate the task (optional override)

        Args:
            task: The task entity

        Returns:
            bool: Whether the task is valid
        """
        return True

domain/__init__.py Normal file

@@ -0,0 +1,24 @@
# -*- coding: utf-8 -*-
"""
Domain model layer

Contains the core data structures: task entities, results, configuration, and so on.
"""
from domain.task import Task, TaskType, TaskStatus, RenderSpec, OutputSpec, AudioSpec, AudioProfile
from domain.result import TaskResult, ErrorCode, RETRY_CONFIG
from domain.config import WorkerConfig

__all__ = [
    'Task',
    'TaskType',
    'TaskStatus',
    'RenderSpec',
    'OutputSpec',
    'AudioSpec',
    'AudioProfile',
    'TaskResult',
    'ErrorCode',
    'RETRY_CONFIG',
    'WorkerConfig',
]

domain/config.py Normal file

@@ -0,0 +1,182 @@
# -*- coding: utf-8 -*-
"""
Worker 配置模型
定义 Worker 运行时的配置参数。
"""
import logging
import os
from dataclasses import dataclass, field
from typing import List, Optional
from constant import HW_ACCEL_NONE, HW_ACCEL_QSV, HW_ACCEL_CUDA, HW_ACCEL_TYPES
logger = logging.getLogger(__name__)
# 默认支持的任务类型
DEFAULT_CAPABILITIES = [
"RENDER_SEGMENT_TS",
"PREPARE_JOB_AUDIO",
"FINALIZE_MP4"
]
@dataclass
class WorkerConfig:
"""
Worker 配置
包含 Worker 运行所需的所有配置参数。
"""
# API 配置
api_endpoint: str
access_key: str
worker_id: str
# 并发控制
max_concurrency: int = 4
# 心跳配置
heartbeat_interval: int = 5 # 秒
# 租约配置
lease_extension_threshold: int = 60 # 秒,提前多久续期
lease_extension_duration: int = 300 # 秒,每次续期时长
# 目录配置
temp_dir: str = "/tmp/render_worker"
# 能力配置
capabilities: List[str] = field(default_factory=lambda: DEFAULT_CAPABILITIES.copy())
# FFmpeg 配置
ffmpeg_timeout: int = 3600 # 秒,FFmpeg 执行超时
# 下载/上传配置
download_timeout: int = 300 # 秒,下载超时
upload_timeout: int = 600 # 秒,上传超时
# 硬件加速配置
hw_accel: str = HW_ACCEL_NONE # 硬件加速类型: none, qsv, cuda
# GPU 设备配置(多显卡调度)
gpu_devices: List[int] = field(default_factory=list) # 空列表表示使用默认设备
# 素材缓存配置
cache_enabled: bool = True # 是否启用素材缓存
cache_dir: str = "" # 缓存目录,默认为 temp_dir/cache
cache_max_size_gb: float = 0 # 最大缓存大小(GB),0 表示不限制
@classmethod
def from_env(cls) -> 'WorkerConfig':
"""Create a configuration from environment variables."""
# API endpoint; prefer the V2 version
api_endpoint = os.getenv('API_ENDPOINT_V2') or os.getenv('API_ENDPOINT', '')
if not api_endpoint:
raise ValueError("API_ENDPOINT_V2 or API_ENDPOINT environment variable is required")
# Access key
access_key = os.getenv('ACCESS_KEY', '')
if not access_key:
raise ValueError("ACCESS_KEY environment variable is required")
# Worker ID
worker_id = os.getenv('WORKER_ID', '100001')
# Concurrency
max_concurrency = int(os.getenv('MAX_CONCURRENCY', '4'))
# Heartbeat interval
heartbeat_interval = int(os.getenv('HEARTBEAT_INTERVAL', '5'))
# Lease configuration
lease_extension_threshold = int(os.getenv('LEASE_EXTENSION_THRESHOLD', '60'))
lease_extension_duration = int(os.getenv('LEASE_EXTENSION_DURATION', '300'))
# Temp directory
temp_dir = os.getenv('TEMP_DIR', os.getenv('TEMP', '/tmp/render_worker'))
# Capability list
capabilities_str = os.getenv('CAPABILITIES', '')
if capabilities_str:
capabilities = [c.strip() for c in capabilities_str.split(',') if c.strip()]
else:
capabilities = DEFAULT_CAPABILITIES.copy()
# FFmpeg timeout
ffmpeg_timeout = int(os.getenv('FFMPEG_TIMEOUT', '3600'))
# Download/upload timeouts
download_timeout = int(os.getenv('DOWNLOAD_TIMEOUT', '300'))
upload_timeout = int(os.getenv('UPLOAD_TIMEOUT', '600'))
# Hardware acceleration
hw_accel = os.getenv('HW_ACCEL', HW_ACCEL_NONE).lower()
if hw_accel not in HW_ACCEL_TYPES:
hw_accel = HW_ACCEL_NONE
# GPU device list (for multi-GPU scheduling)
gpu_devices_str = os.getenv('GPU_DEVICES', '')
gpu_devices: List[int] = []
if gpu_devices_str:
try:
gpu_devices = [int(d.strip()) for d in gpu_devices_str.split(',') if d.strip()]
except ValueError:
logger.warning(f"Invalid GPU_DEVICES value: {gpu_devices_str}, using auto-detect")
gpu_devices = []
# Material cache configuration
cache_enabled = os.getenv('CACHE_ENABLED', 'true').lower() in ('true', '1', 'yes')
cache_dir = os.getenv('CACHE_DIR', '')  # empty string means use the default path
cache_max_size_gb = float(os.getenv('CACHE_MAX_SIZE_GB', '0'))
return cls(
api_endpoint=api_endpoint,
access_key=access_key,
worker_id=worker_id,
max_concurrency=max_concurrency,
heartbeat_interval=heartbeat_interval,
lease_extension_threshold=lease_extension_threshold,
lease_extension_duration=lease_extension_duration,
temp_dir=temp_dir,
capabilities=capabilities,
ffmpeg_timeout=ffmpeg_timeout,
download_timeout=download_timeout,
upload_timeout=upload_timeout,
hw_accel=hw_accel,
gpu_devices=gpu_devices,
cache_enabled=cache_enabled,
cache_dir=cache_dir if cache_dir else os.path.join(temp_dir, 'cache'),
cache_max_size_gb=cache_max_size_gb
)
def get_work_dir_path(self, task_id: str) -> str:
"""Return the working directory path for a task."""
return os.path.join(self.temp_dir, f"task_{task_id}")
def ensure_temp_dir(self) -> None:
"""Ensure the temp directory exists."""
os.makedirs(self.temp_dir, exist_ok=True)
def is_hw_accel_enabled(self) -> bool:
"""Whether hardware acceleration is enabled."""
return self.hw_accel != HW_ACCEL_NONE
def is_qsv(self) -> bool:
"""Whether QSV hardware acceleration is in use."""
return self.hw_accel == HW_ACCEL_QSV
def is_cuda(self) -> bool:
"""Whether CUDA hardware acceleration is in use."""
return self.hw_accel == HW_ACCEL_CUDA
def has_multi_gpu(self) -> bool:
"""Whether more than one GPU is configured."""
return len(self.gpu_devices) > 1
def get_gpu_devices(self) -> List[int]:
"""Return a copy of the GPU device list."""
return self.gpu_devices.copy()
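The parsing patterns inside `from_env()` can be exercised in isolation. A minimal sketch using hypothetical environment values; only the `CAPABILITIES` / `GPU_DEVICES` parsing is reproduced here, not the full `WorkerConfig` construction:

```python
import os

# Hypothetical environment, mirroring the variables from_env() reads.
os.environ.update({
    "CAPABILITIES": "RENDER_SEGMENT_TS, FINALIZE_MP4",
    "GPU_DEVICES": "0,1",
})

# The same comma-split-and-strip pattern used in WorkerConfig.from_env():
capabilities = [c.strip() for c in os.environ["CAPABILITIES"].split(",") if c.strip()]
gpu_devices = [int(d.strip()) for d in os.environ["GPU_DEVICES"].split(",") if d.strip()]

print(capabilities)  # ['RENDER_SEGMENT_TS', 'FINALIZE_MP4']
print(gpu_devices)   # [0, 1]
```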

31
domain/gpu.py Normal file

@@ -0,0 +1,31 @@
# -*- coding: utf-8 -*-
"""
GPU device model
Defines the data structure for a GPU device.
"""
from dataclasses import dataclass
from typing import Optional
@dataclass
class GPUDevice:
"""
GPU device information
Attributes:
index: device index (matches the GPU ID in nvidia-smi)
name: device name (e.g. "NVIDIA GeForce RTX 3090")
memory_total: total VRAM in MB, optional
available: whether the device is usable
"""
index: int
name: str
memory_total: Optional[int] = None
available: bool = True
def __str__(self) -> str:
status = "available" if self.available else "unavailable"
mem_info = f", {self.memory_total}MB" if self.memory_total else ""
return f"GPU[{self.index}]: {self.name}{mem_info} ({status})"

105
domain/result.py Normal file

@@ -0,0 +1,105 @@
# -*- coding: utf-8 -*-
"""
Task result model
Defines error codes, retry configuration, and the task-result data structures.
"""
from enum import Enum
from dataclasses import dataclass
from typing import Optional, Dict, Any, List
class ErrorCode(Enum):
"""Error code enum"""
E_INPUT_UNAVAILABLE = "E_INPUT_UNAVAILABLE"  # material unreachable / 404
E_FFMPEG_FAILED = "E_FFMPEG_FAILED"  # FFmpeg execution failed
E_UPLOAD_FAILED = "E_UPLOAD_FAILED"  # upload failed
E_SPEC_INVALID = "E_SPEC_INVALID"  # invalid renderSpec
E_TIMEOUT = "E_TIMEOUT"  # execution timed out
E_UNKNOWN = "E_UNKNOWN"  # unknown error
# Retry configuration
RETRY_CONFIG: Dict[ErrorCode, Dict[str, Any]] = {
ErrorCode.E_INPUT_UNAVAILABLE: {
'max_retries': 3,
'backoff': [1, 2, 5]  # retry intervals in seconds
},
ErrorCode.E_FFMPEG_FAILED: {
'max_retries': 2,
'backoff': [1, 3]
},
ErrorCode.E_UPLOAD_FAILED: {
'max_retries': 3,
'backoff': [1, 2, 5]
},
ErrorCode.E_SPEC_INVALID: {
'max_retries': 0,  # never retried
'backoff': []
},
ErrorCode.E_TIMEOUT: {
'max_retries': 2,
'backoff': [5, 10]
},
ErrorCode.E_UNKNOWN: {
'max_retries': 1,
'backoff': [2]
},
}
@dataclass
class TaskResult:
"""
Task result
Wraps the outcome of a task execution: success data or failure information.
"""
success: bool
data: Optional[Dict[str, Any]] = None
error_code: Optional[ErrorCode] = None
error_message: Optional[str] = None
@classmethod
def ok(cls, data: Dict[str, Any]) -> 'TaskResult':
"""Create a success result."""
return cls(success=True, data=data)
@classmethod
def fail(cls, error_code: ErrorCode, error_message: str) -> 'TaskResult':
"""Create a failure result."""
return cls(
success=False,
error_code=error_code,
error_message=error_message
)
def to_report_dict(self) -> Dict[str, Any]:
"""
Convert to the reporting format.
Used to shape the data when reporting results to the API.
"""
if self.success:
return {'result': self.data}
else:
return {
'errorCode': self.error_code.value if self.error_code else 'E_UNKNOWN',
'errorMessage': self.error_message or 'Unknown error'
}
def can_retry(self) -> bool:
"""Whether this result can be retried."""
if self.success:
return False
if not self.error_code:
return True
config = RETRY_CONFIG.get(self.error_code, {})
return config.get('max_retries', 0) > 0
def get_retry_config(self) -> Dict[str, Any]:
"""Return the retry configuration for this result's error code."""
if not self.error_code:
return {'max_retries': 1, 'backoff': [2]}
return RETRY_CONFIG.get(self.error_code, {'max_retries': 1, 'backoff': [2]})
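The backoff table above can drive a simple retry planner. A minimal sketch with a trimmed-down copy of `RETRY_CONFIG` keyed by error-code strings (the real table is keyed by `ErrorCode` members, and the executor integration is assumed):

```python
# Illustrative subset of the retry table; delays are in seconds.
RETRY_CONFIG = {
    "E_UPLOAD_FAILED": {"max_retries": 3, "backoff": [1, 2, 5]},
    "E_SPEC_INVALID": {"max_retries": 0, "backoff": []},
}

def plan_retries(error_code: str) -> list:
    """Return the delays that would be slept between attempts.

    Unknown codes use the same fallback as get_retry_config():
    one retry after 2 seconds.
    """
    cfg = RETRY_CONFIG.get(error_code, {"max_retries": 1, "backoff": [2]})
    return cfg["backoff"][: cfg["max_retries"]]

print(plan_retries("E_UPLOAD_FAILED"))  # [1, 2, 5]
print(plan_retries("E_SPEC_INVALID"))   # []
```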

624
domain/task.py Normal file

@@ -0,0 +1,624 @@
# -*- coding: utf-8 -*-
"""
Task domain model
Defines task types, the task entity, render spec, output spec, and related structures.
"""
import os
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from math import isfinite
from typing import Dict, Any, Optional, List
from urllib.parse import urlparse, unquote
# Supported image extensions
IMAGE_EXTENSIONS = {'.jpg', '.jpeg', '.png', '.webp', '.bmp', '.gif'}
class TaskType(Enum):
"""Task type enum"""
RENDER_SEGMENT_TS = "RENDER_SEGMENT_TS"  # render + package TS (merges the former RENDER_SEGMENT_VIDEO and PACKAGE_SEGMENT_TS)
COMPOSE_TRANSITION = "COMPOSE_TRANSITION"  # compose transition effects
PREPARE_JOB_AUDIO = "PREPARE_JOB_AUDIO"  # produce the job-level audio
FINALIZE_MP4 = "FINALIZE_MP4"  # produce the final MP4
# Deprecated: legacy task types, kept for compatibility
RENDER_SEGMENT_VIDEO = "RENDER_SEGMENT_VIDEO"
PACKAGE_SEGMENT_TS = "PACKAGE_SEGMENT_TS"
# Supported transition types (mapped to FFmpeg xfade parameters)
TRANSITION_TYPES = {
'fade': 'fade',  # cross-fade (default)
'dissolve': 'dissolve',  # dissolve
'wipeleft': 'wipeleft',  # wipe left
'wiperight': 'wiperight',  # wipe right
'wipeup': 'wipeup',  # wipe up
'wipedown': 'wipedown',  # wipe down
'slideleft': 'slideleft',  # slide left
'slideright': 'slideright',  # slide right
'slideup': 'slideup',  # slide up
'slidedown': 'slidedown',  # slide down
}
# Supported effect types
EFFECT_TYPES = {
'cameraShot',  # camera freeze-frame effect
'zoom',  # zoom effect (reserved)
'blur',  # blur effect (reserved)
'ospeed',  # raw speed-change effect (legacy template compatibility)
}
class TaskStatus(Enum):
"""Task status enum"""
PENDING = "PENDING"
RUNNING = "RUNNING"
SUCCESS = "SUCCESS"
FAILED = "FAILED"
@dataclass
class TransitionConfig:
"""
Transition configuration
Entry/exit transition settings for RENDER_SEGMENT_VIDEO tasks.
"""
type: str = "fade"  # transition type
duration_ms: int = 500  # transition duration in milliseconds
@classmethod
def from_dict(cls, data: Optional[Dict]) -> Optional['TransitionConfig']:
"""Create a TransitionConfig from a dict."""
if not data:
return None
trans_type = data.get('type', 'fade')
# Validate that the transition type is supported
if trans_type not in TRANSITION_TYPES:
trans_type = 'fade'
return cls(
type=trans_type,
duration_ms=int(data.get('durationMs', 500))
)
def get_overlap_ms(self) -> int:
"""Overlap duration per side: half of the transition duration."""
return self.duration_ms // 2
def get_ffmpeg_transition(self) -> str:
"""Return the FFmpeg xfade parameter."""
return TRANSITION_TYPES.get(self.type, 'fade')
@dataclass
class Effect:
"""
Effect configuration
Format: type:params
Examples:
- cameraShot:3,1 freezes the frame at second 3 for 1 second
- zoom:1.5,1.2,2 starts at second 1.5, zooms to 1.2x, and lasts 2 seconds
"""
effect_type: str  # effect type
params: str = ""  # parameter string
@classmethod
def from_string(cls, effect_str: str) -> Optional['Effect']:
"""
Parse an Effect from a string.
Format: type:params, or just type when there are no parameters.
"""
if not effect_str:
return None
parts = effect_str.split(':', 1)
effect_type = parts[0].strip()
if effect_type not in EFFECT_TYPES:
return None
params = parts[1].strip() if len(parts) > 1 else ""
return cls(effect_type=effect_type, params=params)
@classmethod
def parse_effects(cls, effects_str: Optional[str]) -> List['Effect']:
"""
Parse an effects string.
Format: effect1|effect2|effect3
Example: cameraShot:3,1|zoom:1.5,1.2,2
"""
if not effects_str:
return []
effects = []
for part in effects_str.split('|'):
effect = cls.from_string(part.strip())
if effect:
effects.append(effect)
return effects
def get_camera_shot_params(self) -> tuple:
"""
Get cameraShot effect parameters.
Returns:
(start_sec, duration_sec): start time and hold duration in seconds
"""
if self.effect_type != 'cameraShot':
return (0, 0)
if not self.params:
return (3, 1)  # defaults
parts = self.params.split(',')
try:
start = int(parts[0]) if len(parts) >= 1 else 3
duration = int(parts[1]) if len(parts) >= 2 else 1
return (start, duration)
except ValueError:
return (3, 1)
def get_zoom_params(self) -> tuple:
"""
Get zoom effect parameters.
Returns:
(start_sec, scale_factor, duration_sec): start time, zoom factor, and duration in seconds
"""
if self.effect_type != 'zoom':
return (0.0, 1.2, 1.0)
default_start_sec = 0.0
default_scale_factor = 1.2
default_duration_sec = 1.0
if not self.params:
return (default_start_sec, default_scale_factor, default_duration_sec)
parts = [part.strip() for part in self.params.split(',')]
try:
start_sec = float(parts[0]) if len(parts) >= 1 and parts[0] else default_start_sec
scale_factor = float(parts[1]) if len(parts) >= 2 and parts[1] else default_scale_factor
duration_sec = float(parts[2]) if len(parts) >= 3 and parts[2] else default_duration_sec
except ValueError:
return (default_start_sec, default_scale_factor, default_duration_sec)
if not isfinite(start_sec) or start_sec < 0:
start_sec = default_start_sec
if not isfinite(scale_factor) or scale_factor <= 1.0:
scale_factor = default_scale_factor
if not isfinite(duration_sec) or duration_sec <= 0:
duration_sec = default_duration_sec
return (start_sec, scale_factor, duration_sec)
def get_ospeed_params(self) -> float:
"""Get the ospeed parameter: a PTS multiplier (> 0); returns 1.0 when invalid."""
if self.effect_type != 'ospeed':
return 1.0
if not self.params:
return 1.0
try:
factor = float(self.params.strip())
except ValueError:
return 1.0
if not isfinite(factor) or factor <= 0:
return 1.0
return factor
@dataclass
class RenderSpec:
"""
Render specification
Defines the video rendering parameters for RENDER_SEGMENT_VIDEO tasks.
"""
crop_enable: bool = False
crop_scale: float = 1.0
speed: str = "1.0"
lut_url: Optional[str] = None
overlay_url: Optional[str] = None
effects: Optional[str] = None
video_crop: Optional[str] = None
crop_pos: Optional[str] = None
transitions: Optional[str] = None
# Transition configuration (new in PRD v2)
transition_in: Optional[TransitionConfig] = None  # entry transition
transition_out: Optional[TransitionConfig] = None  # exit transition
@classmethod
def from_dict(cls, data: Optional[Dict]) -> 'RenderSpec':
"""Create a RenderSpec from a dict."""
if not data:
return cls()
# Safely parse cropScale: accept a float or numeric string; invalid values fall back to 1.0
try:
crop_scale = float(data.get('cropScale', 1.0))
if crop_scale <= 0 or not isfinite(crop_scale):
crop_scale = 1.0
except (ValueError, TypeError):
crop_scale = 1.0
return cls(
crop_enable=data.get('cropEnable', False),
crop_scale=crop_scale,
speed=str(data.get('speed', '1.0')),
lut_url=data.get('lutUrl'),
overlay_url=data.get('overlayUrl'),
effects=data.get('effects'),
video_crop=data.get('videoCrop'),
crop_pos=data.get('cropPos'),
transitions=data.get('transitions'),
transition_in=TransitionConfig.from_dict(data.get('transitionIn')),
transition_out=TransitionConfig.from_dict(data.get('transitionOut'))
)
def has_transition_in(self) -> bool:
"""Whether there is an entry transition."""
return self.transition_in is not None and self.transition_in.duration_ms > 0
def has_transition_out(self) -> bool:
"""Whether there is an exit transition."""
return self.transition_out is not None and self.transition_out.duration_ms > 0
def get_overlap_head_ms(self) -> int:
"""Head overlap duration in milliseconds."""
if self.has_transition_in():
return self.transition_in.get_overlap_ms()
return 0
def get_overlap_tail_ms(self) -> int:
"""Tail overlap duration in milliseconds."""
if self.has_transition_out():
return self.transition_out.get_overlap_ms()
return 0
def get_effects(self) -> List['Effect']:
"""Return the parsed effect list."""
return Effect.parse_effects(self.effects)
@dataclass
class OutputSpec:
"""
Output specification
Defines the video output parameters for RENDER_SEGMENT_VIDEO tasks.
"""
width: int = 1080
height: int = 1920
fps: int = 30
bitrate: int = 4000000
codec: str = "h264"
@classmethod
def from_dict(cls, data: Optional[Dict]) -> 'OutputSpec':
"""Create an OutputSpec from a dict."""
if not data:
return cls()
return cls(
width=data.get('width', 1080),
height=data.get('height', 1920),
fps=data.get('fps', 30),
bitrate=data.get('bitrate', 4000000),
codec=data.get('codec', 'h264')
)
@dataclass
class AudioSpec:
"""
Audio specification
Per-segment overlay audio used by PREPARE_JOB_AUDIO tasks.
"""
audio_url: Optional[str] = None
volume: float = 1.0
fade_in_ms: int = 10
fade_out_ms: int = 10
start_ms: int = 0
delay_ms: int = 0
loop_enable: bool = False
@classmethod
def from_dict(cls, data: Optional[Dict]) -> Optional['AudioSpec']:
"""Create an AudioSpec from a dict."""
if not data:
return None
return cls(
audio_url=data.get('audioUrl'),
volume=float(data.get('volume', 1.0)),
fade_in_ms=int(data.get('fadeInMs', 10)),
fade_out_ms=int(data.get('fadeOutMs', 10)),
start_ms=int(data.get('startMs', 0)),
delay_ms=int(data.get('delayMs', 0)),
loop_enable=data.get('loopEnable', False)
)
@dataclass
class AudioProfile:
"""
Audio profile
Global audio parameters for PREPARE_JOB_AUDIO tasks.
"""
sample_rate: int = 48000
channels: int = 2
codec: str = "aac"
@classmethod
def from_dict(cls, data: Optional[Dict]) -> 'AudioProfile':
"""Create an AudioProfile from a dict."""
if not data:
return cls()
return cls(
sample_rate=data.get('sampleRate', 48000),
channels=data.get('channels', 2),
codec=data.get('codec', 'aac')
)
@dataclass
class Task:
"""
Task entity
Represents a render task awaiting execution.
"""
task_id: str
task_type: TaskType
priority: int
lease_expire_time: datetime
payload: Dict[str, Any]
@classmethod
def from_dict(cls, data: Dict) -> 'Task':
"""Create a Task from an API response dict."""
lease_time_str = data.get('leaseExpireTime', '')
# Parse ISO 8601 timestamps
if lease_time_str:
if lease_time_str.endswith('Z'):
lease_time_str = lease_time_str[:-1] + '+00:00'
try:
lease_expire_time = datetime.fromisoformat(lease_time_str)
except ValueError:
# Fall back to the current time when parsing fails (the lease is then treated as immediately due for renewal)
lease_expire_time = datetime.now()
else:
lease_expire_time = datetime.now()
return cls(
task_id=str(data['taskId']),
task_type=TaskType(data['taskType']),
priority=data.get('priority', 0),
lease_expire_time=lease_expire_time,
payload=data.get('payload', {})
)
def get_job_id(self) -> str:
"""Return the job ID."""
return str(self.payload.get('jobId', ''))
def get_segment_id(self) -> Optional[str]:
"""Return the segment ID, if any."""
segment_id = self.payload.get('segmentId')
return str(segment_id) if segment_id else None
def get_plan_segment_index(self) -> int:
"""Return the planned segment index."""
return int(self.payload.get('planSegmentIndex', 0))
def get_duration_ms(self) -> int:
"""Return the duration in milliseconds."""
return int(self.payload.get('durationMs', 5000))
def get_material_url(self) -> Optional[str]:
"""
Return the material URL.
Prefers boundMaterialUrl (an actually downloadable HTTP URL);
falls back to sourceRef (which may be a slot reference).
Returns:
The material URL, or None when neither is present.
"""
return self.payload.get('boundMaterialUrl') or self.payload.get('sourceRef')
def get_source_ref(self) -> Optional[str]:
"""Return the material source reference (a slot identifier such as device:xxx)."""
return self.payload.get('sourceRef')
def get_bound_material_url(self) -> Optional[str]:
"""Return the bound material URL (an actually downloadable HTTP URL)."""
return self.payload.get('boundMaterialUrl')
def get_material_type(self) -> str:
"""
Return the material type.
Prefers the server-provided materialType field;
if absent, infers the type from the URL extension.
Returns:
The material type: "video" or "image".
"""
# Prefer the server-provided type
material_type = self.payload.get('materialType')
if material_type in ('video', 'image'):
return material_type
# Fallback: infer from the URL extension
material_url = self.get_material_url()
if material_url:
parsed = urlparse(material_url)
path = unquote(parsed.path)
_, ext = os.path.splitext(path)
if ext.lower() in IMAGE_EXTENSIONS:
return 'image'
# Default to video
return 'video'
def is_image_material(self) -> bool:
"""Whether the material is an image."""
return self.get_material_type() == 'image'
def get_render_spec(self) -> RenderSpec:
"""Return the render specification."""
return RenderSpec.from_dict(self.payload.get('renderSpec'))
def get_output_spec(self) -> OutputSpec:
"""Return the output specification."""
return OutputSpec.from_dict(self.payload.get('output'))
def get_transition_type(self) -> Optional[str]:
"""Return the transition type (from the top level of TaskPayload)."""
return self.payload.get('transitionType')
def get_transition_ms(self) -> int:
"""Return the transition duration in milliseconds (from the top level of TaskPayload)."""
return int(self.payload.get('transitionMs', 0))
def has_transition(self) -> bool:
"""Whether a transition is configured."""
return self.get_transition_ms() > 0
def get_overlap_tail_ms(self) -> int:
"""
Return the tail overlap duration in milliseconds.
The transition sits between this segment and the next, so this segment renders overlap extra frames at its tail.
overlap = transitionMs / 2
"""
return self.get_transition_ms() // 2
def get_transition_in_type(self) -> Optional[str]:
"""Return the entry transition type (from the previous segment's exit transition)."""
return self.payload.get('transitionInType')
def get_transition_in_ms(self) -> int:
"""Return the entry transition duration in milliseconds."""
return int(self.payload.get('transitionInMs', 0))
def get_transition_out_type(self) -> Optional[str]:
"""Return the exit transition type (this segment's own transition configuration)."""
return self.payload.get('transitionOutType')
def get_transition_out_ms(self) -> int:
"""Return the exit transition duration in milliseconds."""
return int(self.payload.get('transitionOutMs', 0))
def has_transition_in(self) -> bool:
"""Whether there is an entry transition."""
return self.get_transition_in_ms() > 0
def has_transition_out(self) -> bool:
"""Whether there is an exit transition."""
return self.get_transition_out_ms() > 0
def get_overlap_head_ms(self) -> int:
"""
Return the head overlap duration in milliseconds.
The entry transition comes from the previous segment, so this segment renders overlap extra frames at its head.
overlap = transitionInMs / 2
"""
return self.get_transition_in_ms() // 2
def get_overlap_tail_ms_v2(self) -> int:
"""
Return the tail overlap duration in milliseconds (new field names).
The exit transition sits between this segment and the next, so this segment renders overlap extra frames at its tail.
overlap = transitionOutMs / 2
"""
return self.get_transition_out_ms() // 2
def get_bgm_url(self) -> Optional[str]:
"""Return the BGM URL."""
return self.payload.get('bgmUrl')
def get_total_duration_ms(self) -> int:
"""Return the total duration in milliseconds."""
return int(self.payload.get('totalDurationMs', 0))
def get_segments(self) -> List[Dict]:
"""Return the segment list."""
return self.payload.get('segments', [])
def get_global_audio_fade_in_ms(self) -> int:
"""Return the global audio fade-in duration in milliseconds; 0 disables fade-in."""
return int(self.payload.get('globalAudioFadeInMs', 0))
def get_global_audio_fade_out_ms(self) -> int:
"""Return the global audio fade-out duration in milliseconds; 0 disables fade-out."""
return int(self.payload.get('globalAudioFadeOutMs', 0))
def get_audio_profile(self) -> AudioProfile:
"""Return the audio profile."""
return AudioProfile.from_dict(self.payload.get('audioProfile'))
def get_video_url(self) -> Optional[str]:
"""Return the video URL (used by PACKAGE_SEGMENT_TS)."""
return self.payload.get('videoUrl')
def get_audio_url(self) -> Optional[str]:
"""Return the audio URL (used by PACKAGE_SEGMENT_TS)."""
return self.payload.get('audioUrl')
def get_start_time_ms(self) -> int:
"""Return the start time in milliseconds."""
return int(self.payload.get('startTimeMs', 0))
def get_m3u8_url(self) -> Optional[str]:
"""Return the m3u8 URL (used by FINALIZE_MP4)."""
return self.payload.get('m3u8Url')
def get_ts_list(self) -> List[str]:
"""Return the TS list (used by FINALIZE_MP4)."""
return self.payload.get('tsList', [])
# ========== COMPOSE_TRANSITION helpers ==========
def get_transition_id(self) -> Optional[str]:
"""Return the transition ID (used by COMPOSE_TRANSITION)."""
return self.payload.get('transitionId')
def get_prev_segment(self) -> Optional[Dict]:
"""Return the previous segment info (used by COMPOSE_TRANSITION)."""
return self.payload.get('prevSegment')
def get_next_segment(self) -> Optional[Dict]:
"""Return the next segment info (used by COMPOSE_TRANSITION)."""
return self.payload.get('nextSegment')
def get_transition_config(self) -> Optional[TransitionConfig]:
"""Return the transition configuration (used by COMPOSE_TRANSITION)."""
return TransitionConfig.from_dict(self.payload.get('transition'))
# ========== PACKAGE_SEGMENT_TS transition helpers ==========
def is_transition_segment(self) -> bool:
"""Whether this is a transition segment (used by PACKAGE_SEGMENT_TS)."""
return self.payload.get('isTransitionSegment', False)
def should_trim_head(self) -> bool:
"""Whether the head overlap should be trimmed (used by PACKAGE_SEGMENT_TS)."""
return self.payload.get('trimHead', False)
def should_trim_tail(self) -> bool:
"""Whether the tail overlap should be trimmed (used by PACKAGE_SEGMENT_TS)."""
return self.payload.get('trimTail', False)
def get_trim_head_ms(self) -> int:
"""Return the head trim duration in milliseconds."""
return int(self.payload.get('trimHeadMs', 0))
def get_trim_tail_ms(self) -> int:
"""Return the tail trim duration in milliseconds."""
return int(self.payload.get('trimTailMs', 0))
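The effects-string grammar used by `RenderSpec.effects` (effects separated by `|`, each written as `type` or `type:params`) can be sketched standalone. This mirrors `Effect.parse_effects` above without importing the module; unknown types are silently dropped, just as `Effect.from_string` returns None for them:

```python
from typing import List, Optional, Tuple

# Same whitelist as EFFECT_TYPES in domain/task.py.
EFFECT_TYPES = {"cameraShot", "zoom", "blur", "ospeed"}

def parse_effects(effects_str: Optional[str]) -> List[Tuple[str, str]]:
    """Split an effects string into (type, params) pairs, dropping unknown types."""
    result = []
    for part in (effects_str or "").split("|"):
        part = part.strip()
        if not part:
            continue
        head, _, params = part.partition(":")
        if head.strip() in EFFECT_TYPES:
            result.append((head.strip(), params.strip()))
    return result

print(parse_effects("cameraShot:3,1|zoom:1.5,1.2,2|bogus:x"))
# [('cameraShot', '3,1'), ('zoom', '1.5,1.2,2')]
```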


@@ -1,380 +0,0 @@
import json
import os
import time
import uuid
from typing import Any
DEFAULT_ARGS = ("-shortest",)
ENCODER_ARGS = ("-c:v", "h264", ) if not os.getenv("ENCODER_ARGS", False) else os.getenv("ENCODER_ARGS", "").split(" ")
VIDEO_ARGS = ("-profile:v", "high", "-level:v", "4", )
AUDIO_ARGS = ("-c:a", "aac", "-b:a", "128k", "-ar", "48000", "-ac", "2", )
MUTE_AUDIO_INPUT = ("-f", "lavfi", "-i", "anullsrc=cl=stereo:r=48000", )
class FfmpegTask(object):
effects: list[str]
def __init__(self, input_file, task_type='copy', output_file=''):
self.annexb = False
if type(input_file) is str:
if input_file.endswith(".ts"):
self.annexb = True
self.input_file = [input_file]
elif type(input_file) is list:
self.input_file = input_file
else:
self.input_file = []
self.zoom_cut = None
self.center_cut = None
self.ext_data = {}
self.task_type = task_type
self.output_file = output_file
self.mute = True
self.speed = 1
self.frame_rate = 25
self.resolution = None
self.subtitles = []
self.luts = []
self.audios = []
self.overlays = []
self.effects = []
def __repr__(self):
_str = f'FfmpegTask(input_file={self.input_file}, task_type={self.task_type}'
if len(self.luts) > 0:
_str += f', luts={self.luts}'
if len(self.audios) > 0:
_str += f', audios={self.audios}'
if len(self.overlays) > 0:
_str += f', overlays={self.overlays}'
if self.annexb:
_str += f', annexb={self.annexb}'
if self.effects:
_str += f', effects={self.effects}'
if self.mute:
_str += f', mute={self.mute}'
_str += f', center_cut={self.center_cut}'
return _str + ')'
def analyze_input_render_tasks(self):
for i in self.input_file:
if type(i) is str:
continue
elif isinstance(i, FfmpegTask):
if i.need_run():
yield i
def need_run(self):
"""
Decide whether this task actually needs to run.
:rtype: bool
:return: True if the task must be executed rather than copied through
"""
if self.annexb:
return True
# TODO: copy from url
return not self.check_can_copy()
def add_inputs(self, *inputs):
self.input_file.extend(inputs)
def add_overlay(self, *overlays):
for overlay in overlays:
if str(overlay).endswith('.ass'):
self.subtitles.append(overlay)
else:
self.overlays.append(overlay)
self.correct_task_type()
def add_audios(self, *audios):
self.audios.extend(audios)
self.correct_task_type()
self.check_audio_track()
def add_lut(self, *luts):
self.luts.extend(luts)
self.correct_task_type()
def add_effect(self, *effects):
self.effects.extend(effects)
self.correct_task_type()
def get_output_file(self):
if self.task_type == 'copy':
return self.input_file[0]
if self.output_file == '':
self.set_output_file()
return self.output_file
def correct_task_type(self):
if self.check_can_copy():
self.task_type = 'copy'
elif self.check_can_concat():
self.task_type = 'concat'
else:
self.task_type = 'encode'
def check_can_concat(self):
if len(self.luts) > 0:
return False
if len(self.overlays) > 0:
return False
if len(self.subtitles) > 0:
return False
if len(self.effects) > 0:
return False
if self.speed != 1:
return False
if self.zoom_cut is not None:
return False
if self.center_cut is not None:
return False
return True
def check_can_copy(self):
if len(self.luts) > 0:
return False
if len(self.overlays) > 0:
return False
if len(self.subtitles) > 0:
return False
if len(self.effects) > 0:
return False
if self.speed != 1:
return False
if len(self.audios) >= 1:
return False
if len(self.input_file) > 1:
return False
if self.zoom_cut is not None:
return False
if self.center_cut is not None:
return False
return True
def check_audio_track(self):
...
def get_ffmpeg_args(self):
args = ['-y', '-hide_banner']
if self.task_type == 'encode':
input_args = []
filter_args = []
output_args = [*VIDEO_ARGS, *AUDIO_ARGS, *ENCODER_ARGS, *DEFAULT_ARGS]
if self.annexb:
output_args.append("-bsf:v")
output_args.append("h264_mp4toannexb")
output_args.append("-reset_timestamps")
output_args.append("1")
video_output_str = "[0:v]"
audio_output_str = ""
audio_track_index = 0
effect_index = 0
for input_file in self.input_file:
input_args.append("-i")
if type(input_file) is str:
input_args.append(input_file)
elif isinstance(input_file, FfmpegTask):
input_args.append(input_file.get_output_file())
if self.center_cut == 1:
pos_json_str = self.ext_data.get('posJson', '{}')
pos_json = json.loads(pos_json_str)
_v_w = pos_json.get('imgWidth', 1)
_f_x = pos_json.get('ltX', 0)
_f_x2 = pos_json.get('rbX', 0)
_x = f'{float((_f_x2 + _f_x)/(2 * _v_w)) :.4f}*iw-ih*ih/(2*iw)'
filter_args.append(f"{video_output_str}crop=x={_x}:y=0:w=ih*ih/iw:h=ih[v_cut{effect_index}]")
video_output_str = f"[v_cut{effect_index}]"
effect_index += 1
for effect in self.effects:
if effect.startswith("cameraShot:"):
param = effect.split(":", 2)[1]
if param == '':
param = "3,1,0"
_split = param.split(",")
start = 3
duration = 1
rotate_deg = 0
if len(_split) >= 3:
if _split[2] == '':
rotate_deg = 0
else:
rotate_deg = int(_split[2])
if len(_split) >= 2:
duration = float(_split[1])
if len(_split) >= 1:
start = float(_split[0])
_start_out_str = "[eff_s]"
_mid_out_str = "[eff_m]"
_end_out_str = "[eff_e]"
filter_args.append(f"{video_output_str}split=3{_start_out_str}{_mid_out_str}{_end_out_str}")
filter_args.append(f"{_start_out_str}select=lt(n\\,{int(start*self.frame_rate)}){_start_out_str}")
filter_args.append(f"{_end_out_str}select=gt(n\\,{int(start*self.frame_rate)}){_end_out_str}")
filter_args.append(f"{_mid_out_str}select=eq(n\\,{int(start*self.frame_rate)}){_mid_out_str}")
filter_args.append(f"{_mid_out_str}tpad=start_mode=clone:start_duration={duration:.4f}{_mid_out_str}")
if rotate_deg != 0:
filter_args.append(f"{_mid_out_str}rotate=PI*{rotate_deg}/360{_mid_out_str}")
# filter_args.append(f"{video_output_str}trim=start=0:end={start+duration},tpad=stop_mode=clone:stop_duration={duration},setpts=PTS-STARTPTS{_start_out_str}")
# filter_args.append(f"tpad=start_mode=clone:start_duration={duration},setpts=PTS-STARTPTS{_start_out_str}")
# filter_args.append(f"{_end_out_str}trim=start={start}{_end_out_str}")
video_output_str = f"[v_eff{effect_index}]"
# filter_args.append(f"{_end_out_str}{_start_out_str}overlay=eof_action=pass{video_output_str}")
filter_args.append(f"{_start_out_str}{_mid_out_str}{_end_out_str}concat=n=3:v=1:a=0,setpts=N/{self.frame_rate}/TB{video_output_str}")
effect_index += 1
elif effect.startswith("ospeed:"):
param = effect.split(":", 2)[1]
if param == '':
param = "1"
if param != "1":
# Video speed change
effect_index += 1
filter_args.append(f"{video_output_str}setpts={param}*PTS[v_eff{effect_index}]")
video_output_str = f"[v_eff{effect_index}]"
elif effect.startswith("zoom:"):
...
...
for lut in self.luts:
filter_args.append(f"{video_output_str}lut3d=file={lut}{video_output_str}")
if self.resolution:
filter_args.append(f"{video_output_str}scale={self.resolution.replace('x', ':')}[v]")
video_output_str = "[v]"
for overlay in self.overlays:
input_index = input_args.count("-i")
input_args.append("-i")
input_args.append(overlay)
if os.getenv("OLD_FFMPEG"):
filter_args.append(f"{video_output_str}[{input_index}:v]scale2ref=iw:ih[v]")
else:
filter_args.append(f"{video_output_str}[{input_index}:v]scale=rw:rh[v]")
filter_args.append(f"[v][{input_index}:v]overlay=1:eof_action=endall[v]")
video_output_str = "[v]"
for subtitle in self.subtitles:
filter_args.append(f"{video_output_str}ass={subtitle}[v]")
video_output_str = "[v]"
output_args.append("-map")
output_args.append(video_output_str)
output_args.append("-r")
output_args.append(f"{self.frame_rate}")
if self.mute:
input_index = input_args.count("-i")
input_args += MUTE_AUDIO_INPUT
filter_args.append(f"[{input_index}:a]acopy[a]")
audio_track_index += 1
audio_output_str = "[a]"
else:
audio_output_str = "[0:a]"
audio_track_index += 1
for audio in self.audios:
input_index = input_args.count("-i")
input_args.append("-i")
input_args.append(audio.replace("\\", "/"))
audio_track_index += 1
filter_args.append(f"{audio_output_str}[{input_index}:a]amix=duration=shortest:dropout_transition=0:normalize=0[a]")
audio_output_str = "[a]"
if audio_output_str:
output_args.append("-map")
output_args.append(audio_output_str)
_filter_args = [] if len(filter_args) == 0 else ["-filter_complex", ";".join(filter_args)]
return args + input_args + _filter_args + output_args + [self.get_output_file()]
elif self.task_type == 'concat':
# inputs that cannot be merged via annexb
input_args = []
output_args = [*DEFAULT_ARGS]
filter_args = []
audio_output_str = ""
audio_track_index = 0
# output_args
if len(self.input_file) == 1:
_file = self.input_file[0]
from util.ffmpeg import probe_video_audio
if type(_file) is str:
input_args += ["-i", _file]
self.mute = not probe_video_audio(_file)
elif isinstance(_file, FfmpegTask):
input_args += ["-i", _file.get_output_file()]
self.mute = not probe_video_audio(_file.get_output_file())
else:
_tmp_file = "tmp_concat_" + str(time.time()) + ".txt"
from util.ffmpeg import probe_video_audio
with open(_tmp_file, "w", encoding="utf-8") as f:
for input_file in self.input_file:
if type(input_file) is str:
f.write("file '" + input_file + "'\n")
elif isinstance(input_file, FfmpegTask):
f.write("file '" + input_file.get_output_file() + "'\n")
input_args += ["-f", "concat", "-safe", "0", "-i", _tmp_file]
self.mute = not probe_video_audio(_tmp_file, "concat")
output_args.append("-map")
output_args.append("0:v")
output_args.append("-c:v")
output_args.append("copy")
if self.mute:
input_index = input_args.count("-i")
input_args += MUTE_AUDIO_INPUT
audio_output_str = f"[{input_index}:a]"
audio_track_index += 1
else:
audio_output_str = "[0:a]"
audio_track_index += 1
for audio in self.audios:
input_index = input_args.count("-i")
input_args.append("-i")
input_args.append(audio.replace("\\", "/"))
audio_track_index += 1
filter_args.append(f"{audio_output_str}[{input_index}:a]amix=duration=shortest:dropout_transition=0:normalize=0[a]")
audio_output_str = "[a]"
if audio_output_str:
output_args.append("-map")
if audio_track_index <= 1:
output_args.append(audio_output_str[1:-1])
else:
output_args.append(audio_output_str)
output_args += AUDIO_ARGS
if self.annexb:
output_args.append("-bsf:v")
output_args.append("h264_mp4toannexb")
output_args.append("-bsf:a")
output_args.append("setts=pts=DTS")
output_args.append("-f")
output_args.append("mpegts" if self.annexb else "mp4")
_filter_args = [] if len(filter_args) == 0 else ["-filter_complex", ";".join(filter_args)]
return args + input_args + _filter_args + output_args + [self.get_output_file()]
elif self.task_type == 'copy':
if len(self.input_file) == 1:
if type(self.input_file[0]) is str:
if self.input_file[0] == self.get_output_file():
return []
return args + ["-i", self.input_file[0]] + ["-c", "copy", self.get_output_file()]
return []
def set_output_file(self, file=None):
if file is None:
if self.output_file == '':
if self.annexb:
self.output_file = "rand_" + str(uuid.uuid4()) + ".ts"
else:
self.output_file = "rand_" + str(uuid.uuid4()) + ".mp4"
else:
if isinstance(file, FfmpegTask):
if file == self:
return
self.output_file = file.get_output_file()
if type(file) is str:
self.output_file = file
def check_annexb(self):
for input_file in self.input_file:
if type(input_file) is str:
if self.task_type == 'encode':
return self.annexb
elif self.task_type == 'concat':
return False
elif self.task_type == 'copy':
return self.annexb
else:
return False
elif isinstance(input_file, FfmpegTask):
if not input_file.check_annexb():
return False
return True
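The ospeed branch above rewrites timestamps with a bare `setpts={factor}*PTS`. Per the commit history, speed handling was later rebased onto `setpts=PTS-STARTPTS` so sources whose first PTS is non-zero do not produce a frozen first frame after muxing. A minimal sketch of such a filter builder, under the assumption that the fixed form anchors output PTS at zero (this is illustrative, not this deleted file's code):

```python
# Build a setpts filter-graph fragment with zero-anchored timestamps.
# A factor > 1 slows the video down; a factor < 1 speeds it up.

def build_speed_filter(label_in: str, label_out: str, factor: float) -> str:
    if factor == 1:
        # No speed change: just normalize the start timestamp to zero.
        return f"{label_in}setpts=PTS-STARTPTS{label_out}"
    return f"{label_in}setpts=(PTS-STARTPTS)*{factor}{label_out}"

print(build_speed_filter("[0:v]", "[v]", 0.5))
# [0:v]setpts=(PTS-STARTPTS)*0.5[v]
```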

20
handlers/__init__.py Normal file

@@ -0,0 +1,20 @@
# -*- coding: utf-8 -*-
"""
Task handler layer
Contains the concrete handler implementations for each task type.
"""
from handlers.base import BaseHandler
from handlers.render_video import RenderSegmentTsHandler
from handlers.compose_transition import ComposeTransitionHandler
from handlers.prepare_audio import PrepareJobAudioHandler
from handlers.finalize_mp4 import FinalizeMp4Handler
__all__ = [
'BaseHandler',
'RenderSegmentTsHandler',
'ComposeTransitionHandler',
'PrepareJobAudioHandler',
'FinalizeMp4Handler',
]

handlers/base.py Normal file

@@ -0,0 +1,981 @@
# -*- coding: utf-8 -*-
"""
Base class for task handlers
Provides shared functionality for all handlers.
"""
import os
import json
import logging
import shutil
import tempfile
import subprocess
import threading
from concurrent.futures import ThreadPoolExecutor, as_completed
from abc import ABC
from typing import Optional, List, Dict, Any, Tuple, TYPE_CHECKING
from opentelemetry.trace import SpanKind
from core.handler import TaskHandler
from domain.task import Task
from domain.result import TaskResult, ErrorCode
from domain.config import WorkerConfig
from services import storage
from services.cache import MaterialCache
from util.tracing import (
bind_trace_context,
capture_otel_context,
get_current_task_context,
mark_span_error,
start_span,
)
from constant import (
HW_ACCEL_NONE, HW_ACCEL_QSV, HW_ACCEL_CUDA,
VIDEO_ENCODE_PARAMS, VIDEO_ENCODE_PARAMS_QSV, VIDEO_ENCODE_PARAMS_CUDA
)
if TYPE_CHECKING:
from services.api_client import APIClientV2
logger = logging.getLogger(__name__)
def get_video_encode_args(hw_accel: str = HW_ACCEL_NONE, maxrate: Optional[int] = None) -> List[str]:
"""
Build video encoding arguments for the configured hardware acceleration.
Args:
hw_accel: Hardware acceleration type (none, qsv, cuda)
maxrate: Maximum bitrate in bps, used to cap the peak bitrate in CRF/CQ mode.
For example, 4000000 means 4 Mbps.
bufsize is set to 1x maxrate (a one-second window), suitable for strict rate control on short (<10 s) videos.
Returns:
List of FFmpeg video encoding arguments
"""
has_maxrate = maxrate is not None and maxrate > 0
if hw_accel == HW_ACCEL_QSV:
params = VIDEO_ENCODE_PARAMS_QSV
args = [
'-c:v', params['codec'],
'-preset', params['preset'],
'-profile:v', params['profile'],
'-level', params['level'],
'-global_quality', params['global_quality'],
'-look_ahead', params['look_ahead'],
# Disable B-frames to avoid PTS/DTS regressions at HLS boundaries between independent TS segments
'-bf', '0',
]
elif hw_accel == HW_ACCEL_CUDA:
params = VIDEO_ENCODE_PARAMS_CUDA
args = [
'-c:v', params['codec'],
'-preset', params['preset'],
'-profile:v', params['profile'],
'-level', params['level'],
'-rc', params['rc'],
'-cq', params['cq'],
# With maxrate set, use it as -b:v so the NVENC VBV model actually takes effect;
# without maxrate, keep -b:v 0 (pure CQ quality mode)
'-b:v', f'{maxrate // 1000}k' if has_maxrate else '0',
# Disable B-frames to avoid PTS/DTS regressions at HLS boundaries between independent TS segments
'-bf', '0',
]
else:
# Software encoding (default)
params = VIDEO_ENCODE_PARAMS
args = [
'-c:v', params['codec'],
'-preset', params['preset'],
'-profile:v', params['profile'],
'-level', params['level'],
'-crf', params['crf'],
'-pix_fmt', params['pix_fmt'],
# Disable B-frames to avoid PTS/DTS regressions at HLS boundaries between independent TS segments
'-bf', '0',
]
# CRF/CQ with a maxrate cap: keep quality control while limiting the peak bitrate
# bufsize = 1x maxrate (a one-second window): a tighter VBV constraint for short videos, so an oversized buffer cannot let the bitrate run away
if has_maxrate:
maxrate_k = f'{maxrate // 1000}k'
bufsize_k = maxrate_k # 1x maxrate; converges faster for short videos
args.extend(['-maxrate', maxrate_k, '-bufsize', bufsize_k])
return args
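The maxrate handling above can be exercised in isolation. The sketch below is a hypothetical standalone helper (not part of the module) that mirrors the kbps formatting and the 1x-maxrate bufsize choice:

```python
from typing import List, Optional

def build_rate_cap_args(maxrate: Optional[int]) -> List[str]:
    """Return the -maxrate/-bufsize pair used to cap CRF/CQ peak bitrate.

    bufsize equals 1x maxrate (a one-second VBV window), which keeps the
    rate constraint tight for short (<10 s) clips.
    """
    if maxrate is None or maxrate <= 0:
        return []  # pure quality mode, no peak-rate cap
    maxrate_k = f"{maxrate // 1000}k"
    return ["-maxrate", maxrate_k, "-bufsize", maxrate_k]
```

For example, `build_rate_cap_args(4_000_000)` yields `['-maxrate', '4000k', '-bufsize', '4000k']`, while `None` or `0` yields an empty list.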
def get_hwaccel_decode_args(hw_accel: str = HW_ACCEL_NONE, device_index: Optional[int] = None) -> List[str]:
"""
Build hardware-accelerated decoding arguments (placed before the input file).
Args:
hw_accel: Hardware acceleration type (none, qsv, cuda)
device_index: GPU device index, used for multi-GPU scheduling
Returns:
List of FFmpeg hardware-accelerated decoding arguments
"""
if hw_accel == HW_ACCEL_CUDA:
# CUDA hardware-accelerated decoding
args = ['-hwaccel', 'cuda']
# Select the device in multi-GPU mode
if device_index is not None:
args.extend(['-hwaccel_device', str(device_index)])
args.extend(['-hwaccel_output_format', 'cuda'])
return args
elif hw_accel == HW_ACCEL_QSV:
# QSV hardware-accelerated decoding
args = ['-hwaccel', 'qsv']
# QSV uses -qsv_device on Windows
if device_index is not None:
args.extend(['-qsv_device', str(device_index)])
args.extend(['-hwaccel_output_format', 'qsv'])
return args
else:
return []
def get_hwaccel_filter_prefix(hw_accel: str = HW_ACCEL_NONE) -> str:
"""
Get the hardware-acceleration filter prefix (hwdownload from GPU to CPU).
Note: since most complex filters (lut3d, overlay, crop, ...) do not support
hardware surfaces, the hardware surface must be downloaded to system memory
at the start of the filter chain.
CUDA/QSV hwdownload only outputs nv12, so a two-step conversion is needed:
1. hwdownload,format=nv12 - download from GPU to CPU
2. format=yuv420p - convert to the standard format (ensures correct colors when blending with RGBA/YUVA overlays)
Args:
hw_accel: Hardware acceleration type
Returns:
hwdownload filter string to prepend to the filter chain
"""
if hw_accel == HW_ACCEL_CUDA:
return 'hwdownload,format=nv12,format=yuv420p,'
elif hw_accel == HW_ACCEL_QSV:
return 'hwdownload,format=nv12,format=yuv420p,'
else:
return ''
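As a usage sketch, the prefix is simply concatenated in front of a CPU-only filter chain. This hypothetical helper hard-codes the `'cuda'`/`'qsv'` strings in place of the `HW_ACCEL_*` constants:

```python
def apply_hw_prefix(filter_chain: str, hw_accel: str) -> str:
    """Prepend the hwdownload prefix so CPU-only filters (lut3d, overlay,
    crop, ...) receive system-memory yuv420p frames instead of GPU surfaces."""
    if hw_accel in ('cuda', 'qsv'):
        return 'hwdownload,format=nv12,format=yuv420p,' + filter_chain
    return filter_chain
```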
# v2 unified video encoding arguments (software encoding, kept for backward compatibility)
VIDEO_ENCODE_ARGS = get_video_encode_args(HW_ACCEL_NONE)
# v2 unified audio encoding arguments
AUDIO_ENCODE_ARGS = [
'-c:a', 'aac',
'-b:a', '128k',
'-ar', '48000',
'-ac', '2',
]
FFMPEG_LOGLEVEL = 'error'
def subprocess_args(include_stdout: bool = True) -> Dict[str, Any]:
"""
Build cross-platform subprocess arguments.
When packaged with PyInstaller --noconsole on Windows, special handling is needed to avoid popping up a console window.
Args:
include_stdout: Whether to capture stdout
Returns:
Keyword-argument dict for subprocess.run
"""
ret: Dict[str, Any] = {}
# Windows-specific handling
if hasattr(subprocess, 'STARTUPINFO'):
si = subprocess.STARTUPINFO()
si.dwFlags |= subprocess.STARTF_USESHOWWINDOW
ret['startupinfo'] = si
ret['env'] = os.environ
# Redirect stdin to avoid "handle is invalid" errors
ret['stdin'] = subprocess.PIPE
if include_stdout:
ret['stdout'] = subprocess.PIPE
return ret
def probe_video_info(video_file: str) -> Tuple[int, int, float]:
"""
Probe video info (width, height, duration).
Args:
video_file: Path to the video file
Returns:
(width, height, duration) tuple; (0, 0, 0) on failure
"""
try:
result = subprocess.run(
[
'ffprobe', '-v', 'error',
'-select_streams', 'v:0',
'-show_entries', 'stream=width,height:format=duration',
'-of', 'csv=s=x:p=0',
video_file
],
capture_output=True,
timeout=30,
**subprocess_args(False)
)
if result.returncode != 0:
logger.warning(f"ffprobe failed for {video_file}")
return 0, 0, 0
output = result.stdout.decode('utf-8').strip()
if not output:
return 0, 0, 0
lines = output.split('\n')
if len(lines) >= 2:
wh = lines[0].strip()
duration_str = lines[1].strip()
width, height = wh.split('x')
return int(width), int(height), float(duration_str)
return 0, 0, 0
except Exception as e:
logger.warning(f"probe_video_info error: {e}")
return 0, 0, 0
def probe_duration_json(file_path: str) -> Optional[float]:
"""
Probe media duration via ffprobe JSON output.
Args:
file_path: Path to the media file
Returns:
Duration in seconds; None on failure
"""
try:
result = subprocess.run(
[
'ffprobe', '-v', 'error',
'-show_entries', 'format=duration',
'-of', 'json',
file_path
],
capture_output=True,
timeout=30,
**subprocess_args(False)
)
if result.returncode != 0:
return None
data = json.loads(result.stdout.decode('utf-8'))
duration = data.get('format', {}).get('duration')
return float(duration) if duration else None
except Exception as e:
logger.warning(f"probe_duration_json error: {e}")
return None
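The JSON path can be unit-tested without running ffprobe by splitting out the parsing step. A minimal sketch (hypothetical helper, assuming the shape of ffprobe's `-of json` output):

```python
import json
from typing import Optional

def parse_ffprobe_duration(raw: bytes) -> Optional[float]:
    """Extract format.duration from `ffprobe -show_entries format=duration
    -of json` output; returns None when the field is absent or empty."""
    data = json.loads(raw.decode('utf-8'))
    duration = data.get('format', {}).get('duration')
    return float(duration) if duration else None
```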
class BaseHandler(TaskHandler, ABC):
"""
Base class for task handlers
Provides shared functionality for all handlers, including:
- Temporary directory management
- File download/upload
- FFmpeg command execution
- GPU device management (multi-GPU scheduling)
- Logging
"""
# Thread-local storage for the current thread's GPU device index
_thread_local = threading.local()
DEFAULT_TASK_DOWNLOAD_CONCURRENCY = 4
DEFAULT_TASK_UPLOAD_CONCURRENCY = 2
MAX_TASK_TRANSFER_CONCURRENCY = 16
def __init__(self, config: WorkerConfig, api_client: 'APIClientV2'):
"""
Initialize the handler
Args:
config: Worker configuration
api_client: API client
"""
self.config = config
self.api_client = api_client
self.material_cache = MaterialCache(
cache_dir=config.cache_dir,
enabled=config.cache_enabled,
max_size_gb=config.cache_max_size_gb
)
self.task_download_concurrency = self._resolve_task_transfer_concurrency(
"TASK_DOWNLOAD_CONCURRENCY",
self.DEFAULT_TASK_DOWNLOAD_CONCURRENCY
)
self.task_upload_concurrency = self._resolve_task_transfer_concurrency(
"TASK_UPLOAD_CONCURRENCY",
self.DEFAULT_TASK_UPLOAD_CONCURRENCY
)
def _resolve_task_transfer_concurrency(self, env_name: str, default_value: int) -> int:
"""Read and normalize the per-task transfer concurrency setting."""
raw_value = os.getenv(env_name)
if raw_value is None or not raw_value.strip():
return default_value
try:
parsed_value = int(raw_value.strip())
except ValueError:
logger.warning(
f"Invalid {env_name} value '{raw_value}', using default {default_value}"
)
return default_value
if parsed_value < 1:
logger.warning(f"{env_name} must be >= 1, forcing to 1")
return 1
if parsed_value > self.MAX_TASK_TRANSFER_CONCURRENCY:
logger.warning(
f"{env_name}={parsed_value} exceeds limit {self.MAX_TASK_TRANSFER_CONCURRENCY}, "
f"using {self.MAX_TASK_TRANSFER_CONCURRENCY}"
)
return self.MAX_TASK_TRANSFER_CONCURRENCY
return parsed_value
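The clamping rules read: blank or unparsable values fall back to the default, values below 1 are forced to 1, and values above the cap are clamped. A standalone sketch (hypothetical helper, logging omitted):

```python
from typing import Optional

def clamp_concurrency(raw: Optional[str], default: int, maximum: int = 16) -> int:
    """Normalize an environment-variable concurrency setting."""
    if raw is None or not raw.strip():
        return default          # unset or blank -> default
    try:
        value = int(raw.strip())
    except ValueError:
        return default          # unparsable -> default
    if value < 1:
        return 1                # floor at 1
    return min(value, maximum)  # cap at the hard limit
```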
def download_files_parallel(
self,
download_jobs: List[Dict[str, Any]],
timeout: Optional[int] = None
) -> Dict[str, Dict[str, Any]]:
"""
Download multiple files in parallel within a single task.
Args:
download_jobs: List of download jobs. Fields per item:
- key: Unique identifier
- url: Download URL
- dest: Destination file path
- required: Whether the file is critical (optional, default True)
- use_cache: Whether to use the cache (optional, default True)
timeout: Per-file download timeout in seconds
Returns:
Mapping of key -> result dict:
- success: Whether the download succeeded
- url: Original URL
- dest: Destination file path
- required: Whether the file is critical
"""
if not download_jobs:
return {}
normalized_jobs: List[Dict[str, Any]] = []
seen_keys = set()
for download_job in download_jobs:
job_key = str(download_job.get("key", "")).strip()
job_url = str(download_job.get("url", "")).strip()
job_dest = str(download_job.get("dest", "")).strip()
if not job_key or not job_url or not job_dest:
raise ValueError("Each download job must include non-empty key/url/dest")
if job_key in seen_keys:
raise ValueError(f"Duplicate download job key: {job_key}")
seen_keys.add(job_key)
normalized_jobs.append({
"key": job_key,
"url": job_url,
"dest": job_dest,
"required": bool(download_job.get("required", True)),
"use_cache": bool(download_job.get("use_cache", True)),
})
if timeout is None:
timeout = self.config.download_timeout
parent_otel_context = capture_otel_context()
task_context = get_current_task_context()
task_prefix = f"[task:{task_context.task_id}] " if task_context else ""
results: Dict[str, Dict[str, Any]] = {}
def _run_download_job(download_job: Dict[str, Any]) -> bool:
with bind_trace_context(parent_otel_context, task_context):
return self.download_file(
download_job["url"],
download_job["dest"],
timeout=timeout,
use_cache=download_job["use_cache"],
)
max_workers = min(self.task_download_concurrency, len(normalized_jobs))
if max_workers <= 1:
for download_job in normalized_jobs:
is_success = _run_download_job(download_job)
results[download_job["key"]] = {
"success": is_success,
"url": download_job["url"],
"dest": download_job["dest"],
"required": download_job["required"],
}
else:
with ThreadPoolExecutor(
max_workers=max_workers,
thread_name_prefix="TaskDownload",
) as executor:
future_to_job = {
executor.submit(_run_download_job, download_job): download_job
for download_job in normalized_jobs
}
for completed_future in as_completed(future_to_job):
download_job = future_to_job[completed_future]
is_success = False
try:
is_success = bool(completed_future.result())
except Exception as exc:
logger.error(
f"{task_prefix}Parallel download raised exception for "
f"key={download_job['key']}: {exc}"
)
results[download_job["key"]] = {
"success": is_success,
"url": download_job["url"],
"dest": download_job["dest"],
"required": download_job["required"],
}
success_count = sum(1 for item in results.values() if item["success"])
logger.debug(
f"{task_prefix}Parallel download completed: {success_count}/{len(normalized_jobs)}"
)
return results
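Callers typically scan the returned mapping for failed required downloads before proceeding. A minimal consumer sketch (hypothetical helper over the result shape documented above):

```python
from typing import Any, Dict, Optional

def first_failed_required(results: Dict[str, Dict[str, Any]]) -> Optional[str]:
    """Return the key of the first required download that failed, or None."""
    for key, item in results.items():
        if item["required"] and not item["success"]:
            return key
    return None
```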
def upload_files_parallel(
self,
upload_jobs: List[Dict[str, Any]]
) -> Dict[str, Dict[str, Any]]:
"""
Upload multiple files in parallel within a single task.
Args:
upload_jobs: List of upload jobs. Fields per item:
- key: Unique identifier
- task_id: Task ID
- file_type: File type (video/audio/ts/mp4)
- file_path: Local file path
- file_name: File name (optional)
- required: Whether the file is critical (optional, default True)
Returns:
Mapping of key -> result dict:
- success: Whether the upload succeeded
- url: Access URL after upload (None on failure)
- file_path: Local file path
- required: Whether the file is critical
"""
if not upload_jobs:
return {}
normalized_jobs: List[Dict[str, Any]] = []
seen_keys = set()
for upload_job in upload_jobs:
job_key = str(upload_job.get("key", "")).strip()
task_id = str(upload_job.get("task_id", "")).strip()
file_type = str(upload_job.get("file_type", "")).strip()
file_path = str(upload_job.get("file_path", "")).strip()
if not job_key or not task_id or not file_type or not file_path:
raise ValueError(
"Each upload job must include non-empty key/task_id/file_type/file_path"
)
if job_key in seen_keys:
raise ValueError(f"Duplicate upload job key: {job_key}")
seen_keys.add(job_key)
normalized_jobs.append({
"key": job_key,
"task_id": task_id,
"file_type": file_type,
"file_path": file_path,
"file_name": upload_job.get("file_name"),
"required": bool(upload_job.get("required", True)),
})
parent_otel_context = capture_otel_context()
task_context = get_current_task_context()
task_prefix = f"[task:{task_context.task_id}] " if task_context else ""
results: Dict[str, Dict[str, Any]] = {}
def _run_upload_job(upload_job: Dict[str, Any]) -> Optional[str]:
with bind_trace_context(parent_otel_context, task_context):
return self.upload_file(
upload_job["task_id"],
upload_job["file_type"],
upload_job["file_path"],
upload_job.get("file_name")
)
max_workers = min(self.task_upload_concurrency, len(normalized_jobs))
if max_workers <= 1:
for upload_job in normalized_jobs:
result_url = _run_upload_job(upload_job)
results[upload_job["key"]] = {
"success": bool(result_url),
"url": result_url,
"file_path": upload_job["file_path"],
"required": upload_job["required"],
}
else:
with ThreadPoolExecutor(
max_workers=max_workers,
thread_name_prefix="TaskUpload",
) as executor:
future_to_job = {
executor.submit(_run_upload_job, upload_job): upload_job
for upload_job in normalized_jobs
}
for completed_future in as_completed(future_to_job):
upload_job = future_to_job[completed_future]
result_url = None
try:
result_url = completed_future.result()
except Exception as exc:
logger.error(
f"{task_prefix}Parallel upload raised exception for "
f"key={upload_job['key']}: {exc}"
)
results[upload_job["key"]] = {
"success": bool(result_url),
"url": result_url,
"file_path": upload_job["file_path"],
"required": upload_job["required"],
}
success_count = sum(1 for item in results.values() if item["success"])
logger.debug(
f"{task_prefix}Parallel upload completed: {success_count}/{len(normalized_jobs)}"
)
return results
# ========== GPU device management ==========
def set_gpu_device(self, device_index: int) -> None:
"""
Set the GPU device index for the current thread
Called by TaskExecutor before task execution.
Args:
device_index: GPU device index
"""
self._thread_local.gpu_device = device_index
def get_gpu_device(self) -> Optional[int]:
"""
Get the GPU device index for the current thread
Returns:
GPU device index, or None if not set
"""
return getattr(self._thread_local, 'gpu_device', None)
def clear_gpu_device(self) -> None:
"""
Clear the GPU device index for the current thread
Called by TaskExecutor after task execution.
"""
if hasattr(self._thread_local, 'gpu_device'):
del self._thread_local.gpu_device
# ========== FFmpeg argument builders ==========
def get_video_encode_args(self, maxrate: Optional[int] = None) -> List[str]:
"""
Get video encoding arguments for the current configuration
Args:
maxrate: Maximum bitrate in bps, used to cap the peak bitrate
Returns:
List of FFmpeg video encoding arguments
"""
return get_video_encode_args(self.config.hw_accel, maxrate=maxrate)
def get_hwaccel_decode_args(self) -> List[str]:
"""
Get hardware-accelerated decoding arguments (with device selection)
Returns:
List of FFmpeg hardware-accelerated decoding arguments
"""
device_index = self.get_gpu_device()
return get_hwaccel_decode_args(self.config.hw_accel, device_index)
def get_hwaccel_filter_prefix(self) -> str:
"""
Get the hardware-acceleration filter prefix
Returns:
hwdownload filter string to prepend to the filter chain
"""
return get_hwaccel_filter_prefix(self.config.hw_accel)
def before_handle(self, task: Task) -> None:
"""Pre-handle hook"""
logger.debug(f"[task:{task.task_id}] Before handle: {task.task_type.value}")
def after_handle(self, task: Task, result: TaskResult) -> None:
"""Post-handle hook"""
status = "success" if result.success else "failed"
logger.debug(f"[task:{task.task_id}] After handle: {status}")
def create_work_dir(self, task_id: str = None) -> str:
"""
Create a temporary work directory
Args:
task_id: Task ID (used in the directory name)
Returns:
Path to the work directory
"""
# Ensure the temp root directory exists
os.makedirs(self.config.temp_dir, exist_ok=True)
# Create a unique work directory
prefix = f"task_{task_id}_" if task_id else "task_"
work_dir = tempfile.mkdtemp(dir=self.config.temp_dir, prefix=prefix)
logger.debug(f"Created work directory: {work_dir}")
return work_dir
def cleanup_work_dir(self, work_dir: str) -> None:
"""
Clean up a temporary work directory
Args:
work_dir: Path to the work directory
"""
if not work_dir or not os.path.exists(work_dir):
return
try:
shutil.rmtree(work_dir)
logger.debug(f"Cleaned up work directory: {work_dir}")
except Exception as e:
logger.warning(f"Failed to cleanup work directory {work_dir}: {e}")
def download_file(self, url: str, dest: str, timeout: int = None, use_cache: bool = True) -> bool:
"""
Download a file (with cache support)
Args:
url: File URL
dest: Destination path
timeout: Timeout in seconds
use_cache: Whether to use the cache (default True)
Returns:
Whether the download succeeded
"""
if timeout is None:
timeout = self.config.download_timeout
task_context = get_current_task_context()
task_prefix = f"[task:{task_context.task_id}] " if task_context else ""
logger.debug(f"{task_prefix}Downloading from: {url} -> {dest}")
with start_span(
"render.task.file.download",
kind=SpanKind.CLIENT,
attributes={
"render.file.source_url": url,
"render.file.destination": dest,
"render.file.use_cache": use_cache,
},
) as span:
try:
lock_wait_ms = 0
lock_acquired = False
cache_path_used = "unknown"
if use_cache:
result, cache_metrics = self.material_cache.get_or_download_with_metrics(
url,
dest,
timeout=timeout
)
lock_wait_ms = int(cache_metrics.get("lock_wait_ms", 0))
lock_acquired = bool(cache_metrics.get("lock_acquired", False))
cache_path_used = str(cache_metrics.get("cache_path_used", "unknown"))
else:
result = storage.download_file(url, dest, timeout=timeout)
cache_path_used = "direct"
if span is not None:
span.set_attribute("render.file.lock_wait_ms", lock_wait_ms)
span.set_attribute("render.file.lock_acquired", lock_acquired)
span.set_attribute("render.file.cache_path_used", cache_path_used)
if result:
file_size = os.path.getsize(dest) if os.path.exists(dest) else 0
logger.debug(f"{task_prefix}Downloaded: {url} -> {dest} ({file_size} bytes)")
if span is not None:
span.set_attribute("render.file.size_bytes", file_size)
return result
except Exception as e:
mark_span_error(span, str(e), ErrorCode.E_INPUT_UNAVAILABLE.value)
logger.error(f"{task_prefix}Download failed: {e}")
logger.debug(f"{task_prefix}Download source address: {url}")
return False
def upload_file(
self,
task_id: str,
file_type: str,
file_path: str,
file_name: str = None
) -> Optional[str]:
"""
Upload a file and return its access URL
Args:
task_id: Task ID
file_type: File type (video/audio/ts/mp4)
file_path: Local file path
file_name: File name (optional)
Returns:
Access URL; None on failure
"""
local_file_exists = os.path.exists(file_path)
local_file_size = os.path.getsize(file_path) if local_file_exists else 0
with start_span(
"render.task.file.upload",
kind=SpanKind.CLIENT,
attributes={
"render.file.type": file_type,
"render.file.path": file_path,
"render.file.timeout_seconds": self.config.upload_timeout,
"render.file.local_exists": local_file_exists,
"render.file.local_size_bytes": local_file_size,
},
) as span:
upload_info = self.api_client.get_upload_url(task_id, file_type, file_name)
if not upload_info:
mark_span_error(span, "get upload url failed", ErrorCode.E_UPLOAD_FAILED.value)
logger.error(f"[task:{task_id}] Failed to get upload URL")
return None
upload_url = upload_info.get('uploadUrl')
access_url = upload_info.get('accessUrl')
if not upload_url:
mark_span_error(span, "invalid upload url response", ErrorCode.E_UPLOAD_FAILED.value)
logger.error(f"[task:{task_id}] Invalid upload URL response")
return None
logger.debug(
f"[task:{task_id}] Upload target address: uploadUrl={upload_url}, accessUrl={access_url}"
)
if span is not None:
span.set_attribute("render.file.upload_url", upload_url)
if access_url:
span.set_attribute("render.file.access_url", access_url)
try:
result, upload_metrics = storage.upload_file_with_metrics(
upload_url,
file_path,
timeout=self.config.upload_timeout,
)
upload_method = str(upload_metrics.get("upload_method", "unknown"))
http_attempts = int(upload_metrics.get("http_attempts", 0))
http_retry_count = int(upload_metrics.get("http_retry_count", 0))
http_status_code = int(upload_metrics.get("http_status_code", 0))
http_replace_applied = bool(upload_metrics.get("http_replace_applied", False))
content_type = str(upload_metrics.get("content_type", ""))
error_type = str(upload_metrics.get("error_type", ""))
rclone_attempted = bool(upload_metrics.get("rclone_attempted", False))
rclone_succeeded = bool(upload_metrics.get("rclone_succeeded", False))
rclone_fallback_http = bool(upload_metrics.get("rclone_fallback_http", False))
if span is not None:
span.set_attribute("render.file.upload_success", bool(result))
span.set_attribute("render.file.upload_method", upload_method)
span.set_attribute("render.file.http_attempts", http_attempts)
span.set_attribute("render.file.http_retry_count", http_retry_count)
span.set_attribute("render.file.http_replace_applied", http_replace_applied)
span.set_attribute("render.file.rclone_attempted", rclone_attempted)
span.set_attribute("render.file.rclone_succeeded", rclone_succeeded)
span.set_attribute("render.file.rclone_fallback_http", rclone_fallback_http)
if content_type:
span.set_attribute("render.file.content_type", content_type)
if http_status_code > 0:
span.set_attribute("render.file.http_status_code", http_status_code)
if error_type:
span.set_attribute("render.file.error_type", error_type)
if result:
file_size = local_file_size if local_file_size > 0 else os.path.getsize(file_path)
logger.info(
f"[task:{task_id}] Uploaded: {file_path} ({file_size} bytes)"
)
logger.debug(f"[task:{task_id}] Uploaded access address: {access_url or upload_url}")
if span is not None:
span.set_attribute("render.file.size_bytes", file_size)
cache_write_back = "skipped"
if access_url:
cache_added = self.material_cache.add_to_cache(access_url, file_path)
cache_write_back = "success" if cache_added else "failed"
if not cache_added:
logger.warning(f"[task:{task_id}] Upload cache write back failed: {file_path}")
if span is not None:
span.set_attribute("render.file.cache_write_back", cache_write_back)
return access_url
mark_span_error(
span,
f"upload failed(method={upload_method}, status={http_status_code}, retries={http_retry_count}, error={error_type})",
ErrorCode.E_UPLOAD_FAILED.value
)
logger.error(
f"[task:{task_id}] Upload failed: {file_path}, method={upload_method}, "
f"http_status={http_status_code}, retries={http_retry_count}, error_type={error_type}"
)
return None
except Exception as e:
mark_span_error(span, str(e), ErrorCode.E_UPLOAD_FAILED.value)
logger.error(f"[task:{task_id}] Upload error: {e}")
return None
def run_ffmpeg(
self,
cmd: List[str],
task_id: str,
timeout: int = None
) -> bool:
"""
Run an FFmpeg command
Args:
cmd: FFmpeg command argument list
task_id: Task ID (for logging)
timeout: Timeout in seconds
Returns:
Whether the command succeeded
"""
if timeout is None:
timeout = self.config.ffmpeg_timeout
cmd_to_run = list(cmd)
if cmd_to_run and cmd_to_run[0] == 'ffmpeg' and '-loglevel' not in cmd_to_run:
cmd_to_run[1:1] = ['-loglevel', FFMPEG_LOGLEVEL]
# Log the full command (no length limit)
cmd_str = ' '.join(cmd_to_run)
logger.info(f"[task:{task_id}] FFmpeg: {cmd_str}")
with start_span(
"render.task.ffmpeg.run",
attributes={
"render.ffmpeg.timeout_seconds": timeout,
"render.ffmpeg.command": cmd_str,
},
) as span:
try:
run_args = subprocess_args(False)
run_args['stdout'] = subprocess.DEVNULL
run_args['stderr'] = subprocess.PIPE
result = subprocess.run(
cmd_to_run,
timeout=timeout,
**run_args
)
if span is not None:
span.set_attribute("render.ffmpeg.return_code", result.returncode)
if result.returncode != 0:
stderr = (result.stderr or b'').decode('utf-8', errors='replace')[:1000]
logger.error(f"[task:{task_id}] FFmpeg failed (code={result.returncode}): {stderr}")
mark_span_error(span, stderr or "ffmpeg failed", ErrorCode.E_FFMPEG_FAILED.value)
return False
return True
except subprocess.TimeoutExpired:
logger.error(f"[task:{task_id}] FFmpeg timeout after {timeout}s")
mark_span_error(span, f"timeout after {timeout}s", ErrorCode.E_TIMEOUT.value)
return False
except Exception as e:
logger.error(f"[task:{task_id}] FFmpeg error: {e}")
mark_span_error(span, str(e), ErrorCode.E_FFMPEG_FAILED.value)
return False
def probe_duration(self, file_path: str) -> Optional[float]:
"""
Probe the duration of a media file
Args:
file_path: File path
Returns:
Duration in seconds; None on failure
"""
# Try the JSON output method first
duration = probe_duration_json(file_path)
if duration is not None:
return duration
# Fall back to the legacy method
try:
_, _, duration = probe_video_info(file_path)
return float(duration) if duration else None
except Exception as e:
logger.warning(f"Failed to probe duration: {file_path} -> {e}")
return None
def get_file_size(self, file_path: str) -> int:
"""
Get the size of a file
Args:
file_path: File path
Returns:
File size in bytes
"""
try:
return os.path.getsize(file_path)
except Exception:
return 0
def ensure_file_exists(self, file_path: str, min_size: int = 0) -> bool:
"""
Ensure a file exists and meets a minimum size
Args:
file_path: File path
min_size: Minimum size in bytes
Returns:
Whether the requirements are met
"""
if not os.path.exists(file_path):
return False
return os.path.getsize(file_path) >= min_size

handlers/compose_transition.py Normal file

@@ -0,0 +1,287 @@
# -*- coding: utf-8 -*-
"""
Transition composition handler
Handles COMPOSE_TRANSITION tasks: blends the overlap regions of two adjacent segments to produce a transition effect.
Uses the FFmpeg xfade filter to implement a variety of transitions.
"""
import os
import logging
from typing import List, Optional
from handlers.base import BaseHandler
from domain.task import Task, TaskType, TransitionConfig, TRANSITION_TYPES
from domain.result import TaskResult, ErrorCode
logger = logging.getLogger(__name__)
class ComposeTransitionHandler(BaseHandler):
"""
Transition composition handler
Responsibilities:
- Download the previous segment's video (including the tail overlap)
- Download the next segment's video (including the head overlap)
- Compose the transition with the xfade filter
- Upload the transition video artifact
Key constraints:
- A transition task must wait until RENDER_SEGMENT_VIDEO has completed for both adjacent segments
- Output encoding parameters must match the segment videos to keep later TS packaging compatible
- The transition video carries no audio track (audio is handled centrally by PREPARE_JOB_AUDIO)
"""
def get_supported_type(self) -> TaskType:
return TaskType.COMPOSE_TRANSITION
def handle(self, task: Task) -> TaskResult:
"""Handle a transition composition task"""
work_dir = self.create_work_dir(task.task_id)
try:
# Parse parameters
transition_id = task.get_transition_id()
prev_segment = task.get_prev_segment()
next_segment = task.get_next_segment()
transition_config = task.get_transition_config()
output_spec = task.get_output_spec()
# Validate parameters
if not transition_id:
return TaskResult.fail(
ErrorCode.E_SPEC_INVALID,
"Missing transitionId"
)
if not prev_segment or not prev_segment.get('videoUrl'):
return TaskResult.fail(
ErrorCode.E_SPEC_INVALID,
"Missing prevSegment.videoUrl"
)
if not next_segment or not next_segment.get('videoUrl'):
return TaskResult.fail(
ErrorCode.E_SPEC_INVALID,
"Missing nextSegment.videoUrl"
)
if not transition_config:
return TaskResult.fail(
ErrorCode.E_SPEC_INVALID,
"Missing transition config"
)
# Get the overlap durations
overlap_tail_ms = prev_segment.get('overlapTailMs', 0)
overlap_head_ms = next_segment.get('overlapHeadMs', 0)
transition_duration_ms = transition_config.duration_ms
# Validate the overlap durations
if overlap_tail_ms <= 0 or overlap_head_ms <= 0:
return TaskResult.fail(
ErrorCode.E_SPEC_INVALID,
f"Invalid overlap duration: tail={overlap_tail_ms}ms, head={overlap_head_ms}ms"
)
logger.info(
f"[task:{task.task_id}] Composing transition: {transition_config.type}, "
f"duration={transition_duration_ms}ms, "
f"overlap_tail={overlap_tail_ms}ms, overlap_head={overlap_head_ms}ms"
)
# 1. Download the adjacent segment videos in parallel
prev_video_file = os.path.join(work_dir, 'prev_segment.mp4')
next_video_file = os.path.join(work_dir, 'next_segment.mp4')
download_results = self.download_files_parallel([
{
'key': 'prev_video',
'url': prev_segment['videoUrl'],
'dest': prev_video_file,
'required': True
},
{
'key': 'next_video',
'url': next_segment['videoUrl'],
'dest': next_video_file,
'required': True
}
])
prev_result = download_results.get('prev_video')
if not prev_result or not prev_result['success']:
return TaskResult.fail(
ErrorCode.E_INPUT_UNAVAILABLE,
f"Failed to download prev segment video: {prev_segment['videoUrl']}"
)
next_result = download_results.get('next_video')
if not next_result or not next_result['success']:
return TaskResult.fail(
ErrorCode.E_INPUT_UNAVAILABLE,
f"Failed to download next segment video: {next_segment['videoUrl']}"
)
# 2. Probe the actual duration of the previous segment
prev_duration = self.probe_duration(prev_video_file)
if not prev_duration:
return TaskResult.fail(
ErrorCode.E_FFMPEG_FAILED,
"Failed to probe prev segment duration"
)
# 3. Build the transition composition command
output_file = os.path.join(work_dir, 'transition.mp4')
cmd = self._build_command(
prev_video_file=prev_video_file,
next_video_file=next_video_file,
output_file=output_file,
prev_duration_sec=prev_duration,
overlap_tail_ms=overlap_tail_ms,
overlap_head_ms=overlap_head_ms,
transition_config=transition_config,
output_spec=output_spec
)
# 4. Run FFmpeg
if not self.run_ffmpeg(cmd, task.task_id):
return TaskResult.fail(
ErrorCode.E_FFMPEG_FAILED,
"FFmpeg transition composition failed"
)
# 5. Validate the output file
if not self.ensure_file_exists(output_file, min_size=1024):
return TaskResult.fail(
ErrorCode.E_FFMPEG_FAILED,
"Transition output file is missing or too small"
)
# 6. Probe the actual output duration
actual_duration = self.probe_duration(output_file)
actual_duration_ms = int(actual_duration * 1000) if actual_duration else transition_duration_ms
# 7. Upload the artifact
transition_video_url = self.upload_file(task.task_id, 'video', output_file)
if not transition_video_url:
return TaskResult.fail(
ErrorCode.E_UPLOAD_FAILED,
"Failed to upload transition video"
)
return TaskResult.ok({
'transitionVideoUrl': transition_video_url,
'actualDurationMs': actual_duration_ms
})
except Exception as e:
logger.error(f"[task:{task.task_id}] Unexpected error: {e}", exc_info=True)
return TaskResult.fail(ErrorCode.E_UNKNOWN, str(e))
finally:
self.cleanup_work_dir(work_dir)
def _build_command(
self,
prev_video_file: str,
next_video_file: str,
output_file: str,
prev_duration_sec: float,
overlap_tail_ms: int,
overlap_head_ms: int,
transition_config: TransitionConfig,
output_spec
) -> List[str]:
"""
Build the transition composition command
Composes the transition with the xfade filter:
1. Extract the tail overlap region from the previous segment
2. Extract the head overlap region from the next segment
3. Blend the two with xfade
Notes:
- Transition videos are very short, so the GOP size needs special handling
- The first frame must be a keyframe for later TS packaging
Args:
prev_video_file: Path to the previous segment video
next_video_file: Path to the next segment video
output_file: Output file path
prev_duration_sec: Total duration of the previous segment in seconds
overlap_tail_ms: Tail overlap duration in milliseconds
overlap_head_ms: Head overlap duration in milliseconds
transition_config: Transition configuration
output_spec: Output specification
Returns:
FFmpeg command argument list
"""
# Compute timing parameters
overlap_tail_sec = overlap_tail_ms / 1000.0
overlap_head_sec = overlap_head_ms / 1000.0
# Start position of the previous segment's tail overlap
tail_start_sec = prev_duration_sec - overlap_tail_sec
# Transition duration: the smaller of the two overlaps, so both inputs fully cover the crossfade window
# Note: xfade output duration = overlap_tail + overlap_head - duration
transition_duration_sec = min(overlap_tail_sec, overlap_head_sec)
# Resolve the xfade transition type
xfade_transition = transition_config.get_ffmpeg_transition()
# Build the filter graph:
# [0:v] trim extracts the previous segment's tail overlap
# [1:v] trim extracts the next segment's head overlap
# xfade blends the two clips
filter_complex = (
f"[0:v]trim=start={tail_start_sec},setpts=PTS-STARTPTS[v0];"
f"[1:v]trim=end={overlap_head_sec},setpts=PTS-STARTPTS[v1];"
f"[v0][v1]xfade=transition={xfade_transition}:duration={transition_duration_sec}:offset=0[outv]"
)
cmd = [
'ffmpeg', '-y', '-hide_banner',
'-i', prev_video_file,
'-i', next_video_file,
'-filter_complex', filter_complex,
'-map', '[outv]',
]
# Encoding arguments (resolved dynamically from the hardware acceleration config)
cmd.extend(self.get_video_encode_args(maxrate=output_spec.bitrate))
# Frame rate
fps = output_spec.fps
# Estimate the output frame count
# xfade output duration ≈ overlap_tail + overlap_head - transition_duration
output_duration_sec = overlap_tail_sec + overlap_head_sec - transition_duration_sec
total_frames = int(output_duration_sec * fps)
# Size the GOP dynamically: for short videos the GOP must not exceed the total frame count
# Guarantee at least one keyframe (the first frame); minimum GOP = 1
if total_frames <= 1:
gop_size = 1
elif total_frames < fps:
# Clips shorter than one second use the full frame count as the GOP (a single keyframe at the start)
gop_size = total_frames
else:
# Normal case: one keyframe per second (denser than the standard two seconds, suited to short clips)
gop_size = fps
cmd.extend(['-r', str(fps)])
cmd.extend(['-g', str(gop_size)])
cmd.extend(['-keyint_min', str(min(gop_size, fps // 2 or 1))])
# Force the first frame to be a keyframe
cmd.extend(['-force_key_frames', 'expr:eq(n,0)'])
# No audio
cmd.append('-an')
# Output file
cmd.append(output_file)
return cmd
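The timing and GOP arithmetic above can be checked with a standalone sketch (hypothetical function reproducing the same formulas):

```python
from typing import Tuple

def transition_plan(overlap_tail_ms: int, overlap_head_ms: int, fps: int) -> Tuple[float, float, int]:
    """Return (xfade_duration_sec, output_duration_sec, gop_size).

    xfade duration is the smaller overlap; output duration is
    tail + head - duration; the GOP shrinks to fit short outputs.
    """
    tail = overlap_tail_ms / 1000.0
    head = overlap_head_ms / 1000.0
    duration = min(tail, head)
    output_duration = tail + head - duration
    total_frames = int(output_duration * fps)
    if total_frames <= 1:
        gop = 1                 # degenerate clip: single keyframe
    elif total_frames < fps:
        gop = total_frames      # sub-second clip: one keyframe total
    else:
        gop = fps               # one keyframe per second
    return duration, output_duration, gop
```

For equal 500 ms overlaps at 30 fps this gives a 0.5 s crossfade, a 0.5 s output, and a 15-frame GOP.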

handlers/finalize_mp4.py Normal file

@@ -0,0 +1,200 @@
# -*- coding: utf-8 -*-
"""
Final MP4 merge handler
Handles FINALIZE_MP4 tasks: merges all TS segments into the final downloadable MP4 file.
"""
import os
import logging
from typing import List
from handlers.base import BaseHandler
from domain.task import Task, TaskType
from domain.result import TaskResult, ErrorCode
logger = logging.getLogger(__name__)
class FinalizeMp4Handler(BaseHandler):
"""
Final MP4 merge handler
Responsibilities:
- Download all TS segments
- Merge them with the concat demuxer
- Produce the final MP4 (remux, no re-encoding)
- Upload the MP4 artifact
Key constraints:
- Prefer remuxing (stream copy, no re-encoding)
- Use the aac_adtstoasc bitstream filter for audio
"""
def get_supported_type(self) -> TaskType:
return TaskType.FINALIZE_MP4
def handle(self, task: Task) -> TaskResult:
"""Handle an MP4 finalization task"""
work_dir = self.create_work_dir(task.task_id)
try:
# Get the TS list
ts_list = task.get_ts_list()
m3u8_url = task.get_m3u8_url()
if not ts_list and not m3u8_url:
return TaskResult.fail(
ErrorCode.E_SPEC_INVALID,
"Missing tsList or m3u8Url"
)
output_file = os.path.join(work_dir, 'final.mp4')
if ts_list:
# Option 1: use the TS list
result = self._process_ts_list(task, work_dir, ts_list, output_file)
else:
# Option 2: use the m3u8 URL
result = self._process_m3u8(task, work_dir, m3u8_url, output_file)
if not result.success:
return result
# Validate the output file
if not self.ensure_file_exists(output_file, min_size=4096):
return TaskResult.fail(
ErrorCode.E_FFMPEG_FAILED,
"MP4 output file is missing or too small"
)
# Get the file size
file_size = self.get_file_size(output_file)
# Upload the artifact
mp4_url = self.upload_file(task.task_id, 'mp4', output_file)
if not mp4_url:
return TaskResult.fail(
ErrorCode.E_UPLOAD_FAILED,
"Failed to upload MP4"
)
return TaskResult.ok({
'mp4Url': mp4_url,
'fileSizeBytes': file_size
})
except Exception as e:
logger.error(f"[task:{task.task_id}] Unexpected error: {e}", exc_info=True)
return TaskResult.fail(ErrorCode.E_UNKNOWN, str(e))
finally:
self.cleanup_work_dir(work_dir)
def _process_ts_list(
self,
task: Task,
work_dir: str,
ts_list: List[str],
output_file: str
) -> TaskResult:
"""
Process using a TS list
Args:
task: task entity
work_dir: working directory
ts_list: list of TS URLs
output_file: output file path
Returns:
TaskResult
"""
# 1. Download all TS segments in parallel
download_jobs = []
for i, ts_url in enumerate(ts_list):
download_jobs.append({
'key': str(i),
'url': ts_url,
'dest': os.path.join(work_dir, f'seg_{i}.ts'),
'required': True
})
download_results = self.download_files_parallel(download_jobs)
ts_files = []
for i, ts_url in enumerate(ts_list):
result = download_results.get(str(i))
if not result or not result['success']:
return TaskResult.fail(
ErrorCode.E_INPUT_UNAVAILABLE,
f"Failed to download TS segment {i}: {ts_url}"
)
ts_files.append(result['dest'])
logger.info(f"[task:{task.task_id}] Downloaded {len(ts_files)} TS segments")
# 2. Create the concat file list
concat_file = os.path.join(work_dir, 'concat.txt')
with open(concat_file, 'w', encoding='utf-8') as f:
for ts_file in ts_files:
# FFmpeg resolves concat entries relative to the directory of concat.txt, so the bare filename suffices
ts_filename = os.path.basename(ts_file)
f.write(f"file '{ts_filename}'\n")
# 3. Build the merge command (remux, no re-encoding)
cmd = [
'ffmpeg', '-y', '-hide_banner',
'-f', 'concat',
'-safe', '0',
'-i', concat_file,
'-c', 'copy', # stream copy, no re-encoding
'-bsf:a', 'aac_adtstoasc', # audio bitstream filter
output_file
]
# 4. Run FFmpeg
if not self.run_ffmpeg(cmd, task.task_id):
return TaskResult.fail(
ErrorCode.E_FFMPEG_FAILED,
"MP4 concatenation failed"
)
return TaskResult.ok({})
def _process_m3u8(
self,
task: Task,
work_dir: str,
m3u8_url: str,
output_file: str
) -> TaskResult:
"""
Process using an m3u8 URL
Args:
task: task entity
work_dir: working directory
m3u8_url: m3u8 URL
output_file: output file path
Returns:
TaskResult
"""
# Build the command
cmd = [
'ffmpeg', '-y', '-hide_banner',
'-protocol_whitelist', 'file,http,https,tcp,tls',
'-i', m3u8_url,
'-c', 'copy',
'-bsf:a', 'aac_adtstoasc',
output_file
]
# Run FFmpeg
if not self.run_ffmpeg(cmd, task.task_id):
return TaskResult.fail(
ErrorCode.E_FFMPEG_FAILED,
"MP4 conversion from m3u8 failed"
)
return TaskResult.ok({})
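The remux path in `_process_ts_list` boils down to a concat list file plus a stream-copy command. A minimal self-contained sketch (`build_concat_remux` and the file names are illustrative, not part of the handler):

```python
import os

def build_concat_remux(work_dir: str, ts_names: list) -> list:
    """Write a concat list and return the remux command (no re-encoding)."""
    # Entries in concat.txt are resolved relative to the list file's directory
    concat_path = os.path.join(work_dir, 'concat.txt')
    with open(concat_path, 'w', encoding='utf-8') as f:
        for name in ts_names:
            f.write(f"file '{name}'\n")
    # '-c copy' copies streams; aac_adtstoasc rewraps ADTS AAC for the MP4 container
    return ['ffmpeg', '-y', '-hide_banner',
            '-f', 'concat', '-safe', '0', '-i', concat_path,
            '-c', 'copy', '-bsf:a', 'aac_adtstoasc',
            os.path.join(work_dir, 'final.mp4')]
```

Because nothing is re-encoded, merge time is dominated by I/O; the bitstream filter is required because TS carries AAC in ADTS framing, while MP4 expects raw AAC with an AudioSpecificConfig.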

handlers/prepare_audio.py (new file, 347 lines)
# -*- coding: utf-8 -*-
"""
Global audio preparation handler
Handles PREPARE_JOB_AUDIO tasks: generates the continuous audio track for the whole video.
"""
import os
import logging
from typing import List, Dict, Optional
from handlers.base import BaseHandler, AUDIO_ENCODE_ARGS
from domain.task import Task, TaskType, AudioSpec, AudioProfile
from domain.result import TaskResult, ErrorCode
logger = logging.getLogger(__name__)
class PrepareJobAudioHandler(BaseHandler):
"""
Global audio preparation handler
Responsibilities:
- Download the global BGM
- Download per-segment overlay SFX
- Build the complex mixing command
- Run the mix
- Upload the audio artifact
Key constraints:
- The global BGM is generated once, continuously, across the full duration
- amix normalize=1 is forbidden
- Apply only very short fades (5-20 ms) to overlay tracks
- Never fade the BGM at segment boundaries
"""
def get_supported_type(self) -> TaskType:
return TaskType.PREPARE_JOB_AUDIO
def handle(self, task: Task) -> TaskResult:
"""Handle an audio preparation task"""
work_dir = self.create_work_dir(task.task_id)
try:
# Parse parameters
total_duration_ms = task.get_total_duration_ms()
if total_duration_ms <= 0:
return TaskResult.fail(
ErrorCode.E_SPEC_INVALID,
"Invalid totalDurationMs"
)
total_duration_sec = total_duration_ms / 1000.0
audio_profile = task.get_audio_profile()
bgm_url = task.get_bgm_url()
segments = task.get_segments()
# 1. Download the BGM and overlay SFX in parallel
bgm_file = os.path.join(work_dir, 'bgm.mp3') if bgm_url else None
download_jobs = []
if bgm_url and bgm_file:
download_jobs.append({
'key': 'bgm',
'url': bgm_url,
'dest': bgm_file,
'required': False
})
sfx_download_candidates = []
for i, seg in enumerate(segments):
audio_spec_data = seg.get('audioSpecJson')
if not audio_spec_data:
continue
audio_spec = AudioSpec.from_dict(audio_spec_data)
if not audio_spec or not audio_spec.audio_url:
continue
sfx_file = os.path.join(work_dir, f'sfx_{i}.mp3')
job_key = f'sfx_{i}'
sfx_download_candidates.append({
'key': job_key,
'file': sfx_file,
'spec': audio_spec,
'segment': seg
})
download_jobs.append({
'key': job_key,
'url': audio_spec.audio_url,
'dest': sfx_file,
'required': False
})
download_results = self.download_files_parallel(download_jobs)
if bgm_url:
bgm_result = download_results.get('bgm')
if not bgm_result or not bgm_result['success']:
logger.warning(f"[task:{task.task_id}] Failed to download BGM")
bgm_file = None
sfx_files = []
for sfx_candidate in sfx_download_candidates:
sfx_result = download_results.get(sfx_candidate['key'])
if sfx_result and sfx_result['success']:
sfx_files.append({
'file': sfx_candidate['file'],
'spec': sfx_candidate['spec'],
'segment': sfx_candidate['segment']
})
else:
logger.warning(f"[task:{task.task_id}] Failed to download SFX {sfx_candidate['key']}")
# 2. Build the audio mixing command
output_file = os.path.join(work_dir, 'audio_full.aac')
global_fade_in_ms = task.get_global_audio_fade_in_ms()
global_fade_out_ms = task.get_global_audio_fade_out_ms()
cmd = self._build_audio_command(
bgm_file=bgm_file,
sfx_files=sfx_files,
output_file=output_file,
total_duration_sec=total_duration_sec,
audio_profile=audio_profile,
global_fade_in_ms=global_fade_in_ms,
global_fade_out_ms=global_fade_out_ms
)
# 3. Run FFmpeg
if not self.run_ffmpeg(cmd, task.task_id):
return TaskResult.fail(
ErrorCode.E_FFMPEG_FAILED,
"Audio mixing failed"
)
# 4. Validate the output file
if not self.ensure_file_exists(output_file, min_size=1024):
return TaskResult.fail(
ErrorCode.E_FFMPEG_FAILED,
"Audio output file is missing or too small"
)
# 5. Upload the artifact
audio_url = self.upload_file(task.task_id, 'audio', output_file)
if not audio_url:
return TaskResult.fail(
ErrorCode.E_UPLOAD_FAILED,
"Failed to upload audio"
)
return TaskResult.ok({
'audioUrl': audio_url
})
except Exception as e:
logger.error(f"[task:{task.task_id}] Unexpected error: {e}", exc_info=True)
return TaskResult.fail(ErrorCode.E_UNKNOWN, str(e))
finally:
self.cleanup_work_dir(work_dir)
def _build_audio_command(
self,
bgm_file: Optional[str],
sfx_files: List[Dict],
output_file: str,
total_duration_sec: float,
audio_profile: AudioProfile,
global_fade_in_ms: int = 0,
global_fade_out_ms: int = 0
) -> List[str]:
"""
Build the audio mixing command
Args:
bgm_file: BGM file path (optional)
sfx_files: list of overlay SFX entries
output_file: output file path
total_duration_sec: total duration in seconds
audio_profile: audio profile
global_fade_in_ms: global audio fade-in duration in ms; 0 disables it
global_fade_out_ms: global audio fade-out duration in ms; 0 disables it
Returns:
FFmpeg command argument list
"""
sample_rate = audio_profile.sample_rate
channels = audio_profile.channels
# Build the global afade filters (applied to the final mixed audio, after amix)
global_fade_filters = self._build_global_fade_filters(
total_duration_sec, global_fade_in_ms, global_fade_out_ms
)
# Case 1: no BGM and no overlay SFX -> generate silence
if not bgm_file and not sfx_files:
if global_fade_filters:
return [
'ffmpeg', '-y', '-hide_banner',
'-f', 'lavfi',
'-i', f'anullsrc=r={sample_rate}:cl=stereo',
'-t', str(total_duration_sec),
'-af', ','.join(global_fade_filters),
'-c:a', 'aac', '-b:a', '128k',
output_file
]
return [
'ffmpeg', '-y', '-hide_banner',
'-f', 'lavfi',
'-i', f'anullsrc=r={sample_rate}:cl=stereo',
'-t', str(total_duration_sec),
'-c:a', 'aac', '-b:a', '128k',
output_file
]
# Case 2: BGM only, no overlay SFX
if not sfx_files:
af_arg = []
if global_fade_filters:
af_arg = ['-af', ','.join(global_fade_filters)]
return [
'ffmpeg', '-y', '-hide_banner',
'-i', bgm_file,
'-t', str(total_duration_sec),
*af_arg,
'-c:a', 'aac', '-b:a', '128k',
'-ar', str(sample_rate), '-ac', str(channels),
output_file
]
# Case 3: BGM + overlay SFX -> complex filter graph
inputs = []
if bgm_file:
inputs.extend(['-i', bgm_file])
for sfx in sfx_files:
inputs.extend(['-i', sfx['file']])
filter_parts = []
input_idx = 0
# BGM processing (or generate a silent base track)
if bgm_file:
filter_parts.append(
f"[0:a]atrim=0:{total_duration_sec},asetpts=PTS-STARTPTS,"
f"apad=whole_dur={total_duration_sec}[bgm]"
)
input_idx = 1
else:
filter_parts.append(
f"anullsrc=r={sample_rate}:cl=stereo,"
f"atrim=0:{total_duration_sec}[bgm]"
)
input_idx = 0
# Overlay SFX processing
sfx_labels = []
for i, sfx in enumerate(sfx_files):
idx = input_idx + i
spec = sfx['spec']
seg = sfx['segment']
# Compute timing parameters
start_time_ms = seg.get('startTimeMs', 0)
duration_ms = seg.get('durationMs', 5000)
delay_ms = start_time_ms + spec.delay_ms
delay_sec = delay_ms / 1000.0
duration_sec = duration_ms / 1000.0
# Fade parameters (very short, 5-20 ms)
fade_in_sec = spec.fade_in_ms / 1000.0
fade_out_sec = spec.fade_out_ms / 1000.0
# Volume
volume = spec.volume
label = f"sfx{i}"
sfx_labels.append(f"[{label}]")
# Build the filter: delay + fades + volume
# Note: fades are applied only to overlay tracks, never to the BGM
sfx_filter = (
f"[{idx}:a]"
f"adelay={int(delay_ms)}|{int(delay_ms)},"
f"afade=t=in:st={delay_sec}:d={fade_in_sec},"
f"afade=t=out:st={delay_sec + duration_sec - fade_out_sec}:d={fade_out_sec},"
f"volume={volume}"
f"[{label}]"
)
filter_parts.append(sfx_filter)
# Mix (critical: normalize=0, normalization is forbidden)
# dropout_transition=0 means no ramp when an input ends
mix_inputs = "[bgm]" + "".join(sfx_labels)
num_inputs = 1 + len(sfx_files)
# With a global fade, amix outputs to an intermediate label and afade is appended; otherwise output directly to [out]
if global_fade_filters:
filter_parts.append(
f"{mix_inputs}amix=inputs={num_inputs}:duration=first:"
f"dropout_transition=0:normalize=0[mixed]"
)
filter_parts.append(
f"[mixed]{','.join(global_fade_filters)}[out]"
)
else:
filter_parts.append(
f"{mix_inputs}amix=inputs={num_inputs}:duration=first:"
f"dropout_transition=0:normalize=0[out]"
)
filter_complex = ';'.join(filter_parts)
cmd = ['ffmpeg', '-y', '-hide_banner'] + inputs + [
'-filter_complex', filter_complex,
'-map', '[out]',
'-c:a', 'aac', '-b:a', '128k',
'-ar', str(sample_rate), '-ac', str(channels),
output_file
]
return cmd
@staticmethod
def _build_global_fade_filters(
total_duration_sec: float,
global_fade_in_ms: int,
global_fade_out_ms: int
) -> List[str]:
"""
Build the list of global audio fade-in/fade-out filters
Appended after the amix output; acts on the final mixed audio.
Independent of the segment-level fadeInMs/fadeOutMs in audioSpecJson (which only affect individual overlay SFX).
Args:
total_duration_sec: total duration in seconds
global_fade_in_ms: global fade-in duration in ms; 0 disables it
global_fade_out_ms: global fade-out duration in ms; 0 disables it
Returns:
list of afade filter strings; may be empty
"""
filters = []
if global_fade_in_ms > 0:
fade_in_sec = global_fade_in_ms / 1000.0
filters.append(f"afade=t=in:st=0:d={fade_in_sec}")
if global_fade_out_ms > 0:
fade_out_sec = global_fade_out_ms / 1000.0
fade_out_start = total_duration_sec - fade_out_sec
filters.append(f"afade=t=out:st={fade_out_start}:d={fade_out_sec}")
return filters
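The graph that case 3 assembles can be illustrated for one BGM plus a single SFX. This is a simplified sketch of the filter string only (the helper name `build_mix_graph` and its fixed two-input shape are illustrative assumptions, not the handler's API):

```python
def build_mix_graph(total_sec: float, delay_ms: int, dur_sec: float,
                    fade_ms: int, volume: float) -> str:
    """Simplified case-3 graph: padded BGM + one delayed SFX, mixed with amix."""
    delay_sec = delay_ms / 1000.0
    fade_sec = fade_ms / 1000.0
    parts = [
        # BGM: trim to length, reset PTS, pad with silence to the full duration
        f"[0:a]atrim=0:{total_sec},asetpts=PTS-STARTPTS,apad=whole_dur={total_sec}[bgm]",
        # SFX: delay, very short fades at both ends, then volume
        f"[1:a]adelay={delay_ms}|{delay_ms},"
        f"afade=t=in:st={delay_sec}:d={fade_sec},"
        f"afade=t=out:st={delay_sec + dur_sec - fade_sec}:d={fade_sec},"
        f"volume={volume}[sfx0]",
        # normalize=0 keeps absolute levels; dropout_transition=0 avoids ramps at input EOF
        "[bgm][sfx0]amix=inputs=2:duration=first:dropout_transition=0:normalize=0[out]",
    ]
    return ';'.join(parts)

graph = build_mix_graph(30.0, 2000, 5.0, 10, 0.8)
```

With `normalize=1` (the amix default), every added input would pull the BGM level down; pinning `normalize=0` is what keeps the mix at authored loudness, which is why the class docstring forbids normalization.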

handlers/render_video.py (new file, 988 lines)
# -*- coding: utf-8 -*-
"""
Render + TS packaging handler
Handles RENDER_SEGMENT_TS tasks: renders raw material into video and packages it as a TS segment.
Supports frame-freeze generation for transition overlap regions and precise trimming.
"""
import os
import logging
from typing import List, Optional, Tuple
from urllib.parse import urlparse, unquote
from handlers.base import BaseHandler
from domain.task import Task, TaskType, RenderSpec, OutputSpec, Effect, IMAGE_EXTENSIONS
from domain.result import TaskResult, ErrorCode
logger = logging.getLogger(__name__)
def _get_extension_from_url(url: str) -> str:
"""Extract the file extension from a URL"""
parsed = urlparse(url)
path = unquote(parsed.path)
_, ext = os.path.splitext(path)
return ext.lower() if ext else ''
class RenderSegmentTsHandler(BaseHandler):
"""
Render + TS packaging handler
Responsibilities:
- Download the material file
- Download the LUT file (if any)
- Download the overlay (if any)
- Download the audio (if any)
- Build the FFmpeg render command
- Run the render (with frame-freeze generation for overlap regions)
- Trim overlap regions (when required)
- Package as a TS segment
- Upload the artifacts
"""
def get_supported_type(self) -> TaskType:
return TaskType.RENDER_SEGMENT_TS
def handle(self, task: Task) -> TaskResult:
"""Handle a video render task"""
work_dir = self.create_work_dir(task.task_id)
try:
# Parse parameters
material_url = task.get_material_url()
if not material_url:
return TaskResult.fail(
ErrorCode.E_SPEC_INVALID,
"Missing material URL (boundMaterialUrl or sourceRef)"
)
# Check the URL format: must be HTTP or HTTPS
if not material_url.startswith(('http://', 'https://')):
source_ref = task.get_source_ref()
bound_url = task.get_bound_material_url()
logger.error(
f"[task:{task.task_id}] Invalid material URL format: '{material_url}'. "
f"boundMaterialUrl={bound_url}, sourceRef={source_ref}. "
f"Server should provide boundMaterialUrl with HTTP/HTTPS URL."
)
return TaskResult.fail(
ErrorCode.E_SPEC_INVALID,
f"Invalid material URL: '{material_url}' is not a valid HTTP/HTTPS URL. "
f"Server must provide boundMaterialUrl."
)
render_spec = task.get_render_spec()
output_spec = task.get_output_spec()
duration_ms = task.get_duration_ms()
# 1. Detect the material type and determine the input file extension
is_image = task.is_image_material()
if is_image:
# Image material: derive the extension from the URL
ext = _get_extension_from_url(material_url)
if not ext or ext not in IMAGE_EXTENSIONS:
ext = '.jpg' # default extension
input_file = os.path.join(work_dir, f'input{ext}')
else:
input_file = os.path.join(work_dir, 'input.mp4')
# 2. Build parallel download jobs (main material + optional LUT + optional overlay + optional audio)
audio_url = task.get_audio_url()
audio_file = None
lut_file = os.path.join(work_dir, 'lut.cube') if render_spec.lut_url else None
overlay_file = None
if render_spec.overlay_url:
# Derive the file extension from the URL suffix
overlay_url_lower = render_spec.overlay_url.lower()
if overlay_url_lower.endswith('.jpg') or overlay_url_lower.endswith('.jpeg'):
overlay_ext = '.jpg'
elif overlay_url_lower.endswith('.mov'):
overlay_ext = '.mov'
else:
overlay_ext = '.png'
overlay_file = os.path.join(work_dir, f'overlay{overlay_ext}')
download_jobs = [
{
'key': 'material',
'url': material_url,
'dest': input_file,
'required': True
}
]
if render_spec.lut_url and lut_file:
download_jobs.append({
'key': 'lut',
'url': render_spec.lut_url,
'dest': lut_file,
'required': False
})
if render_spec.overlay_url and overlay_file:
download_jobs.append({
'key': 'overlay',
'url': render_spec.overlay_url,
'dest': overlay_file,
'required': False
})
if audio_url:
audio_file = os.path.join(work_dir, 'audio.aac')
download_jobs.append({
'key': 'audio',
'url': audio_url,
'dest': audio_file,
'required': True
})
download_results = self.download_files_parallel(download_jobs)
material_result = download_results.get('material')
if not material_result or not material_result['success']:
return TaskResult.fail(
ErrorCode.E_INPUT_UNAVAILABLE,
f"Failed to download material: {material_url}"
)
if render_spec.lut_url:
lut_result = download_results.get('lut')
if not lut_result or not lut_result['success']:
logger.warning(f"[task:{task.task_id}] Failed to download LUT, continuing without it")
lut_file = None
if render_spec.overlay_url:
overlay_result = download_results.get('overlay')
if not overlay_result or not overlay_result['success']:
logger.warning(f"[task:{task.task_id}] Failed to download overlay, continuing without it")
overlay_file = None
if audio_url:
audio_dl = download_results.get('audio')
if not audio_dl or not audio_dl['success']:
return TaskResult.fail(
ErrorCode.E_INPUT_UNAVAILABLE,
f"Failed to download audio: {audio_url}"
)
# 3. Convert image material to video
if is_image:
video_input_file = os.path.join(work_dir, 'input_video.mp4')
if not self._convert_image_to_video(
image_file=input_file,
output_file=video_input_file,
duration_ms=duration_ms,
output_spec=output_spec,
render_spec=render_spec,
task_id=task.task_id
):
return TaskResult.fail(
ErrorCode.E_FFMPEG_FAILED,
"Failed to convert image to video"
)
# Use the converted video as the input
input_file = video_input_file
logger.info(f"[task:{task.task_id}] Image converted to video successfully")
# 4. Probe the source duration (video material only)
# Used to detect insufficient duration, which is padded by freezing the last frame
source_duration_sec = None
if not is_image:
source_duration = self.probe_duration(input_file)
if source_duration:
source_duration_sec = source_duration
speed = float(render_spec.speed) if render_spec.speed else 1.0
if speed > 0:
# Compute the effective duration after the speed change
effective_duration_sec = source_duration_sec / speed
required_duration_sec = duration_ms / 1000.0
# Log when the source video is too short
if effective_duration_sec < required_duration_sec:
shortage_sec = required_duration_sec - effective_duration_sec
logger.warning(
f"[task:{task.task_id}] Source video duration insufficient: "
f"effective={effective_duration_sec:.2f}s (speed={speed}), "
f"required={required_duration_sec:.2f}s, "
f"will freeze last frame for {shortage_sec:.2f}s"
)
# 5. Compute overlap durations (for transition frame freezing)
# Head overlap: from the previous segment's outgoing transition
overlap_head_ms = task.get_overlap_head_ms()
# Tail overlap: this segment's outgoing transition
overlap_tail_ms = task.get_overlap_tail_ms_v2()
# 6. Build the FFmpeg command
output_file = os.path.join(work_dir, 'output.mp4')
cmd = self._build_command(
input_file=input_file,
output_file=output_file,
render_spec=render_spec,
output_spec=output_spec,
duration_ms=duration_ms,
lut_file=lut_file,
overlay_file=overlay_file,
overlap_head_ms=overlap_head_ms,
overlap_tail_ms=overlap_tail_ms,
source_duration_sec=source_duration_sec
)
# 7. Run FFmpeg
if not self.run_ffmpeg(cmd, task.task_id):
return TaskResult.fail(
ErrorCode.E_FFMPEG_FAILED,
"FFmpeg rendering failed"
)
# 8. Validate the output file
if not self.ensure_file_exists(output_file, min_size=4096):
return TaskResult.fail(
ErrorCode.E_FFMPEG_FAILED,
"Output file is missing or too small"
)
# 9. Overlap trimming (only for non-transition segments with an overlap that needs trimming)
is_transition_seg = task.is_transition_segment()
trim_head = task.should_trim_head()
trim_tail = task.should_trim_tail()
trim_head_ms = task.get_trim_head_ms()
trim_tail_ms = task.get_trim_tail_ms()
needs_video_trim = not is_transition_seg and (
(trim_head and trim_head_ms > 0) or
(trim_tail and trim_tail_ms > 0)
)
processed_video = output_file
if needs_video_trim:
processed_video = os.path.join(work_dir, 'trimmed_video.mp4')
trim_cmd = self._build_trim_command(
video_file=output_file,
output_file=processed_video,
trim_head_ms=trim_head_ms if trim_head else 0,
trim_tail_ms=trim_tail_ms if trim_tail else 0,
output_spec=output_spec
)
logger.info(f"[task:{task.task_id}] Trimming video: head={trim_head_ms}ms, tail={trim_tail_ms}ms")
if not self.run_ffmpeg(trim_cmd, task.task_id):
return TaskResult.fail(
ErrorCode.E_FFMPEG_FAILED,
"Video trim failed"
)
if not self.ensure_file_exists(processed_video, min_size=1024):
return TaskResult.fail(
ErrorCode.E_FFMPEG_FAILED,
"Trimmed video file is missing or too small"
)
# 10. Package as TS
start_time_ms = task.get_start_time_ms()
start_sec = start_time_ms / 1000.0
duration_sec = duration_ms / 1000.0
ts_output = os.path.join(work_dir, 'segment.ts')
ts_cmd = self._build_ts_package_command(
video_file=processed_video,
audio_file=audio_file,
output_file=ts_output,
start_sec=start_sec,
duration_sec=duration_sec
)
if not self.run_ffmpeg(ts_cmd, task.task_id):
return TaskResult.fail(
ErrorCode.E_FFMPEG_FAILED,
"TS packaging failed"
)
if not self.ensure_file_exists(ts_output, min_size=1024):
return TaskResult.fail(
ErrorCode.E_FFMPEG_FAILED,
"TS output file is missing or too small"
)
# 11. Probe the EXTINF duration + upload the TS
actual_duration = self.probe_duration(ts_output)
extinf_duration = actual_duration if actual_duration else duration_sec
ts_url = self.upload_file(task.task_id, 'ts', ts_output)
if not ts_url:
return TaskResult.fail(
ErrorCode.E_UPLOAD_FAILED,
"Failed to upload TS"
)
return TaskResult.ok({
'tsUrl': ts_url,
'extinfDurationSec': extinf_duration
})
except Exception as e:
logger.error(f"[task:{task.task_id}] Unexpected error: {e}", exc_info=True)
return TaskResult.fail(ErrorCode.E_UNKNOWN, str(e))
finally:
self.cleanup_work_dir(work_dir)
@staticmethod
def _build_crop_filter(
render_spec: 'RenderSpec',
width: int,
height: int,
task_id: str = ''
) -> Optional[str]:
"""
Build the crop filter
When crop_enable is set: crop to the target aspect ratio as the base, scaled down by crop_scale, with crop_pos controlling the position (centered by default).
Returns:
the crop filter string, or None when no cropping is needed
"""
if render_spec.crop_enable:
scale = render_spec.crop_scale
target_ratio = width / height
# Parse the crop position, defaulting to center
fx, fy = 0.5, 0.5
if render_spec.crop_pos:
try:
fx, fy = map(float, render_spec.crop_pos.split(','))
except ValueError:
logger.warning(f"[task:{task_id}] Invalid crop position: {render_spec.crop_pos}, using center")
fx, fy = 0.5, 0.5
# Base: the largest target-ratio rectangle inside the source, then divided by the scale factor
return (
f"crop='min(iw,ih*{target_ratio})/{scale}':'min(ih,iw/{target_ratio})/{scale}':"
f"'(iw-min(iw,ih*{target_ratio})/{scale})*{fx}':"
f"'(ih-min(ih,iw/{target_ratio})/{scale})*{fy}'"
)
return None
def _convert_image_to_video(
self,
image_file: str,
output_file: str,
duration_ms: int,
output_spec: OutputSpec,
render_spec: RenderSpec,
task_id: str
) -> bool:
"""
Convert an image to a video
Uses FFmpeg to turn a static image into a video of the given duration,
applying scale/pad and speed handling along the way.
Args:
image_file: input image path
output_file: output video path
duration_ms: target duration in ms
output_spec: output spec
render_spec: render spec
task_id: task ID (for logging)
Returns:
whether the conversion succeeded
"""
width = output_spec.width
height = output_spec.height
fps = output_spec.fps
# Compute the actual duration (accounting for speed changes)
speed = float(render_spec.speed) if render_spec.speed else 1.0
if speed <= 0:
speed = 1.0
# Actual playback duration after the speed change
actual_duration_sec = (duration_ms / 1000.0) / speed
# Build the FFmpeg command
cmd = [
'ffmpeg', '-y', '-hide_banner',
'-loop', '1', # loop the input image
'-i', image_file,
'-t', str(actual_duration_sec), # output duration
]
# Build the filters: scale and pad to the target size
filters = []
# Crop handling
crop_filter = self._build_crop_filter(render_spec, width, height, task_id)
if crop_filter:
filters.append(crop_filter)
# Scale and pad
filters.append(
f"scale={width}:{height}:force_original_aspect_ratio=decrease,"
f"pad={width}:{height}:(ow-iw)/2:(oh-ih)/2:black"
)
# Pixel-format conversion (for compatibility)
filters.append("format=yuv420p")
cmd.extend(['-vf', ','.join(filters)])
# Compute the total frame count and adjust the GOP dynamically
total_frames = int(actual_duration_sec * fps)
if total_frames <= 1:
gop_size = 1
elif total_frames < fps:
gop_size = total_frames
else:
gop_size = fps * 2 # normal case: one keyframe every 2 seconds
# Encode args
cmd.extend([
'-c:v', 'libx264',
'-preset', 'fast',
'-crf', '18',
'-r', str(fps),
'-g', str(gop_size),
'-keyint_min', str(min(gop_size, fps // 2 or 1)),
'-force_key_frames', 'expr:eq(n,0)',
'-an', # no audio
output_file
])
logger.info(f"[task:{task_id}] Converting image to video: {actual_duration_sec:.2f}s at {fps}fps")
return self.run_ffmpeg(cmd, task_id)
def _build_trim_command(
self,
video_file: str,
output_file: str,
trim_head_ms: int,
trim_tail_ms: int,
output_spec
) -> List[str]:
"""
Build the precise video trim command (re-encoding)
Uses the trim filter for frame-accurate trimming rather than the keyframe-aligned -ss/-t options.
Args:
video_file: input video path
output_file: output video path
trim_head_ms: head trim duration in ms
trim_tail_ms: tail trim duration in ms
output_spec: output spec
Returns:
FFmpeg command argument list
"""
original_duration = self.probe_duration(video_file)
if not original_duration:
original_duration = 10.0
trim_head_sec = trim_head_ms / 1000.0
trim_tail_sec = trim_tail_ms / 1000.0
start_time = trim_head_sec
end_time = original_duration - trim_tail_sec
vf_filter = f"trim=start={start_time}:end={end_time},setpts=PTS-STARTPTS"
cmd = [
'ffmpeg', '-y', '-hide_banner',
'-i', video_file,
'-vf', vf_filter,
]
cmd.extend(self.get_video_encode_args(maxrate=output_spec.bitrate))
fps = output_spec.fps
cmd.extend(['-r', str(fps)])
output_duration_sec = end_time - start_time
total_frames = int(output_duration_sec * fps)
if total_frames <= 1:
gop_size = 1
elif total_frames < fps:
gop_size = total_frames
else:
gop_size = fps
cmd.extend(['-g', str(gop_size)])
cmd.extend(['-keyint_min', str(min(gop_size, fps // 2 or 1))])
cmd.extend(['-force_key_frames', 'expr:eq(n,0)'])
cmd.append('-an')
cmd.append(output_file)
return cmd
def _build_ts_package_command(
self,
video_file: str,
audio_file: Optional[str],
output_file: str,
start_sec: float,
duration_sec: float
) -> List[str]:
"""
Build the TS packaging command
Muxes the video together with the matching audio interval into a TS segment.
The video uses copy mode (it has already been rendered/trimmed).
Supports an audio-less mode (video-only TS).
Args:
video_file: video file path (already processed)
audio_file: audio file path (optional; None produces a video-only TS)
output_file: output file path
start_sec: audio start time in seconds
duration_sec: audio duration in seconds
Returns:
FFmpeg command argument list
"""
cmd = [
'ffmpeg', '-y', '-hide_banner',
'-i', video_file,
]
if audio_file:
cmd.extend(['-ss', str(start_sec), '-t', str(duration_sec), '-i', audio_file])
cmd.extend(['-map', '0:v:0', '-map', '1:a:0', '-c:v', 'copy', '-c:a', 'copy'])
else:
cmd.extend(['-c:v', 'copy'])
cmd.extend([
'-output_ts_offset', str(start_sec),
'-muxdelay', '0',
'-muxpreload', '0',
'-f', 'mpegts',
output_file
])
return cmd
def _build_command(
self,
input_file: str,
output_file: str,
render_spec: RenderSpec,
output_spec: OutputSpec,
duration_ms: int,
lut_file: Optional[str] = None,
overlay_file: Optional[str] = None,
overlap_head_ms: int = 0,
overlap_tail_ms: int = 0,
source_duration_sec: Optional[float] = None
) -> List[str]:
"""
Build the FFmpeg render command
Args:
input_file: input file path
output_file: output file path
render_spec: render spec
output_spec: output spec
duration_ms: target duration in ms
lut_file: LUT file path (optional)
overlay_file: overlay file path (optional)
overlap_head_ms: head overlap duration in ms
overlap_tail_ms: tail overlap duration in ms
source_duration_sec: actual source duration in seconds, used to detect insufficient duration
Returns:
FFmpeg command argument list
"""
cmd = ['ffmpeg', '-y', '-hide_banner']
# Hardware-accelerated decode args (must precede the input file)
hwaccel_args = self.get_hwaccel_decode_args()
if hwaccel_args:
cmd.extend(hwaccel_args)
# Input file
cmd.extend(['-i', input_file])
# Overlay input
if overlay_file:
cmd.extend(['-i', overlay_file])
# Build the video filter chain
filters = self._build_video_filters(
render_spec=render_spec,
output_spec=output_spec,
duration_ms=duration_ms,
lut_file=lut_file,
overlay_file=overlay_file,
overlap_head_ms=overlap_head_ms,
overlap_tail_ms=overlap_tail_ms,
source_duration_sec=source_duration_sec
)
# Apply the filters
# Detect the filter_complex format (contains semicolons or bracketed labels)
is_filter_complex = ';' in filters or (filters.startswith('[') and ']' in filters)
if is_filter_complex or overlay_file:
# Process with filter_complex
cmd.extend(['-filter_complex', filters])
elif filters:
cmd.extend(['-vf', filters])
# Encode args (resolved dynamically from the hardware-acceleration config)
cmd.extend(self.get_video_encode_args(maxrate=output_spec.bitrate))
# Frame rate
fps = output_spec.fps
cmd.extend(['-r', str(fps)])
# Duration (including overlap regions)
total_duration_ms = duration_ms + overlap_head_ms + overlap_tail_ms
duration_sec = total_duration_ms / 1000.0
cmd.extend(['-t', str(duration_sec)])
# Adjust GOP size dynamically: for short clips the GOP must not exceed the total frame count
total_frames = int(duration_sec * fps)
if total_frames <= 1:
gop_size = 1
elif total_frames < fps:
# Clips shorter than 1 s: use the full frame count as the GOP (single keyframe at the start)
gop_size = total_frames
else:
# Normal case: one keyframe every 2 seconds
gop_size = fps * 2
cmd.extend(['-g', str(gop_size)])
cmd.extend(['-keyint_min', str(min(gop_size, fps // 2 or 1))])
# Force the first frame to be a keyframe
cmd.extend(['-force_key_frames', 'expr:eq(n,0)'])
# No audio (video segments carry no audio)
cmd.append('-an')
# Output file
cmd.append(output_file)
return cmd
def _build_video_filters(
self,
render_spec: RenderSpec,
output_spec: OutputSpec,
duration_ms: int,
lut_file: Optional[str] = None,
overlay_file: Optional[str] = None,
overlap_head_ms: int = 0,
overlap_tail_ms: int = 0,
source_duration_sec: Optional[float] = None
) -> str:
"""
Build the video filter chain
Args:
render_spec: render spec
output_spec: output spec
duration_ms: target duration in ms
lut_file: LUT file path
overlay_file: overlay file path (supports png/jpg images and mov videos)
overlap_head_ms: head overlap duration in ms
overlap_tail_ms: tail overlap duration in ms
source_duration_sec: actual source duration in seconds, used to detect insufficient duration
Returns:
filter string
"""
filters = []
width = output_spec.width
height = output_spec.height
fps = output_spec.fps
# Determine the overlay type
has_overlay = overlay_file is not None
is_video_overlay = has_overlay and overlay_file.lower().endswith('.mov')
# Parse effects
effects = render_spec.get_effects()
has_complex_effect = any(
effect.effect_type in {'cameraShot', 'zoom'}
for effect in effects
)
# Hardware acceleration requires hwdownload first (download GPU surfaces to system memory)
hwaccel_prefix = self.get_hwaccel_filter_prefix()
if hwaccel_prefix:
# Strip the trailing comma and use it as the first filter
filters.append(hwaccel_prefix.rstrip(','))
# 1. Speed handling (merges RenderSpec.speed with the ospeed effect)
speed = float(render_spec.speed) if render_spec.speed else 1.0
if speed <= 0:
speed = 1.0
ospeed_factor = 1.0
for effect in effects:
if effect.effect_type == 'ospeed':
ospeed_factor = effect.get_ospeed_params()
break
combined_pts_factor = (1.0 / speed) * ospeed_factor
# Normalize the start timestamp to zero; a non-zero source start PTS would freeze the first frame after muxing
if combined_pts_factor != 1.0:
filters.append(f"setpts={combined_pts_factor}*(PTS-STARTPTS)")
else:
filters.append("setpts=PTS-STARTPTS")
# 2. LUT color grading
if lut_file:
# Backslashes in the path must be converted and colons escaped (colons are special characters in FFmpeg filter syntax)
lut_path = lut_file.replace('\\', '/').replace(':', r'\:')
filters.append(f"lut3d='{lut_path}'")
# 3. Crop handling
crop_filter = self._build_crop_filter(render_spec, width, height)
if crop_filter:
filters.append(crop_filter)
# 4. Scale and pad
scale_filter = (
f"scale={width}:{height}:force_original_aspect_ratio=decrease,"
f"pad={width}:{height}:(ow-iw)/2:(oh-ih)/2:black"
)
filters.append(scale_filter)
# 5. Effects (cameraShot / zoom require filter_complex)
if has_complex_effect:
return self._build_filter_complex_with_effects(
base_filters=filters,
effects=effects,
fps=fps,
width=width,
height=height,
has_overlay=has_overlay,
is_video_overlay=is_video_overlay,
overlap_head_ms=overlap_head_ms,
overlap_tail_ms=overlap_tail_ms,
use_hwdownload=bool(hwaccel_prefix),
duration_ms=duration_ms,
render_spec=render_spec,
source_duration_sec=source_duration_sec
)
# 6. Frame freezing (tpad) - for transition overlap regions and padding short sources
# Note: tpad must be applied after scaling
tpad_parts = []
# Compute whether extra tail freezing is needed (source video too short)
extra_tail_freeze_sec = 0.0
if source_duration_sec is not None:
# Use the previously computed combined_pts_factor
effective_duration_sec = source_duration_sec * combined_pts_factor
required_duration_sec = duration_ms / 1000.0
# If the source video is too short, freeze the last frame to make up the difference
if effective_duration_sec < required_duration_sec:
extra_tail_freeze_sec = required_duration_sec - effective_duration_sec
if overlap_head_ms > 0:
# Head freeze: hold the first frame for the given duration
head_duration_sec = overlap_head_ms / 1000.0
tpad_parts.append(f"start_mode=clone:start_duration={head_duration_sec}")
# Tail freeze: combine the overlap freeze with the short-source freeze
total_tail_freeze_sec = (overlap_tail_ms / 1000.0) + extra_tail_freeze_sec
if total_tail_freeze_sec > 0:
# Hold the last frame for the given duration
tpad_parts.append(f"stop_mode=clone:stop_duration={total_tail_freeze_sec}")
if tpad_parts:
filters.append(f"tpad={':'.join(tpad_parts)}")
# 7. Assemble the final filter
if has_overlay:
# Use the filter_complex format
base_filters = ','.join(filters) if filters else 'copy'
overlay_scale = f"scale={width}:{height}"
# Video overlays use eof_action=pass (disappear after EOF); image overlays keep the default behavior (stay visible)
overlay_params = 'eof_action=pass' if is_video_overlay else ''
overlay_filter = f"overlay=0:0:{overlay_params}" if overlay_params else 'overlay=0:0'
# Video overlays need the color range unified at the end, to keep the range from flipping from tv to pc after the overlay ends
range_fix = ',format=yuv420p,setrange=tv' if is_video_overlay else ''
return f"[0:v]{base_filters}[base];[1:v]{overlay_scale}[overlay];[base][overlay]{overlay_filter}{range_fix}"
else:
return ','.join(filters) if filters else ''
def _build_filter_complex_with_effects(
self,
base_filters: List[str],
effects: List[Effect],
fps: int,
width: int,
height: int,
has_overlay: bool = False,
is_video_overlay: bool = False,
overlap_head_ms: int = 0,
overlap_tail_ms: int = 0,
use_hwdownload: bool = False,
duration_ms: int = 0,
render_spec: Optional[RenderSpec] = None,
source_duration_sec: Optional[float] = None
) -> str:
"""
Build the filter_complex graph that includes effects
cameraShot / zoom effects are all handled here and stacked in effects order.
Args:
base_filters: base filter list
effects: effect list
fps: frame rate
width: output width
height: output height
has_overlay: whether an overlay is present
is_video_overlay: whether the overlay is a video format (e.g. .mov)
overlap_head_ms: head overlap duration
overlap_tail_ms: tail overlap duration
use_hwdownload: whether hardware-accelerated decode is in use (hwdownload already included in base_filters)
duration_ms: target duration in ms
render_spec: render spec (for speed parameters)
source_duration_sec: actual source duration in seconds, used to detect insufficient duration
Returns:
filter string in filter_complex format
"""
filter_parts = []
# Base filter chain
base_chain = ','.join(base_filters) if base_filters else 'copy'
# Current output label
current_output = '[v_base]'
filter_parts.append(f"[0:v]{base_chain}{current_output}")
# Process each effect
effect_idx = 0
for effect in effects:
if effect.effect_type == 'cameraShot':
start_sec, duration_sec = effect.get_camera_shot_params()
if start_sec <= 0 or duration_sec <= 0:
continue
# cameraShot implementation (freeze-frame effect):
# 1. fps + split
# 2. Branch A: trim(0, start) + tpad freeze for duration seconds
# 3. Branch B: trim(start, end)
# 4. concat the branches
split_out_a = f'[eff{effect_idx}_a]'
split_out_b = f'[eff{effect_idx}_b]'
frozen_out = f'[eff{effect_idx}_frozen]'
rest_out = f'[eff{effect_idx}_rest]'
effect_output = f'[v_eff{effect_idx}]'
# fps + split
filter_parts.append(
f"{current_output}fps=fps={fps},split{split_out_a}{split_out_b}"
)
# Branch A: trim(0, start) + tpad freeze
# tpad=stop_mode=clone holds the last frame for the given duration
filter_parts.append(
f"{split_out_a}trim=start=0:end={start_sec},setpts=PTS-STARTPTS,"
f"tpad=stop_mode=clone:stop_duration={duration_sec}{frozen_out}"
)
# Branch B: trim starting from start
filter_parts.append(
f"{split_out_b}trim=start={start_sec},setpts=PTS-STARTPTS{rest_out}"
)
# concat the two branches
filter_parts.append(
f"{frozen_out}{rest_out}concat=n=2:v=1:a=0{effect_output}"
)
current_output = effect_output
effect_idx += 1
elif effect.effect_type == 'zoom':
start_sec, scale_factor, duration_sec = effect.get_zoom_params()
if start_sec < 0 or scale_factor <= 1.0 or duration_sec <= 0:
continue
zoom_end_sec = start_sec + duration_sec
base_out = f'[eff{effect_idx}_base]'
zoom_source_out = f'[eff{effect_idx}_zoom_src]'
zoom_scaled_out = f'[eff{effect_idx}_zoom_scaled]'
effect_output = f'[v_eff{effect_idx}]'
zoom_enable = f"'between(t,{start_sec},{zoom_end_sec})'"
filter_parts.append(
f"{current_output}split=2{base_out}{zoom_source_out}"
)
filter_parts.append(
f"{zoom_source_out}scale=iw*{scale_factor}:ih*{scale_factor},"
f"crop={width}:{height}:(in_w-{width})/2:(in_h-{height})/2{zoom_scaled_out}"
)
filter_parts.append(
f"{base_out}{zoom_scaled_out}overlay=0:0:enable={zoom_enable}{effect_output}"
)
current_output = effect_output
effect_idx += 1
# Frame freeze (tpad) - used for transition overlap regions and for padding a too-short source
tpad_parts = []
# Compute whether an extra tail freeze is needed (source video too short)
extra_tail_freeze_sec = 0.0
if source_duration_sec is not None and render_spec is not None and duration_ms > 0:
speed = float(render_spec.speed) if render_spec.speed else 1.0
if speed <= 0:
speed = 1.0
ospeed_factor = 1.0
for effect in effects:
if effect.effect_type == 'ospeed':
ospeed_factor = effect.get_ospeed_params()
break
combined_pts_factor = (1.0 / speed) * ospeed_factor
effective_duration_sec = source_duration_sec * combined_pts_factor
required_duration_sec = duration_ms / 1000.0
# If the source video is too short, freeze the last frame to pad the duration
if effective_duration_sec < required_duration_sec:
extra_tail_freeze_sec = required_duration_sec - effective_duration_sec
if overlap_head_ms > 0:
head_duration_sec = overlap_head_ms / 1000.0
tpad_parts.append(f"start_mode=clone:start_duration={head_duration_sec}")
# Tail freeze: merge the overlap freeze with the short-source freeze
total_tail_freeze_sec = (overlap_tail_ms / 1000.0) + extra_tail_freeze_sec
if total_tail_freeze_sec > 0:
tpad_parts.append(f"stop_mode=clone:stop_duration={total_tail_freeze_sec}")
if tpad_parts:
tpad_output = '[v_tpad]'
filter_parts.append(f"{current_output}tpad={':'.join(tpad_parts)}{tpad_output}")
current_output = tpad_output
# Final output
if has_overlay:
# Overlay handling
# Video overlays use eof_action=pass (disappear when finished); image overlays keep the default behavior (stay visible)
overlay_params = 'eof_action=pass' if is_video_overlay else ''
overlay_filter = f"overlay=0:0:{overlay_params}" if overlay_params else 'overlay=0:0'
overlay_scale = f"scale={width}:{height}"
overlay_output = '[v_overlay]'
# A video overlay needs a unified color range at the end, so the range does not flip from tv to pc after the overlay ends
range_fix = ',format=yuv420p,setrange=tv' if is_video_overlay else ''
filter_parts.append(f"[1:v]{overlay_scale}{overlay_output}")
filter_parts.append(f"{current_output}{overlay_output}{overlay_filter}{range_fix}")
else:
# Drop the final output label so the last filter feeds the output directly
if filter_parts:
last_filter = filter_parts[-1]
# Strip the trailing output label
if last_filter.endswith(current_output):
filter_parts[-1] = last_filter[:-len(current_output)]
return ';'.join(filter_parts)
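The freeze-frame (cameraShot) graph and the tail-freeze computation above can be sketched as standalone helpers. These are illustrative reimplementations, not code from this module — the names `build_camera_shot_filter` and `extra_tail_freeze_sec` are hypothetical:

```python
def build_camera_shot_filter(fps: int, start_sec: float, duration_sec: float) -> str:
    """Sketch of the freeze-frame graph built above: split the stream,
    freeze the frame reached at `start_sec` for `duration_sec` seconds
    via tpad, then concat the frozen branch with the remainder."""
    return (
        f"[0:v]fps=fps={fps},split[a][b];"
        f"[a]trim=start=0:end={start_sec},setpts=PTS-STARTPTS,"
        f"tpad=stop_mode=clone:stop_duration={duration_sec}[frozen];"
        f"[b]trim=start={start_sec},setpts=PTS-STARTPTS[rest];"
        f"[frozen][rest]concat=n=2:v=1:a=0[out]"
    )


def extra_tail_freeze_sec(source_sec: float, required_ms: int,
                          speed: float = 1.0, ospeed: float = 1.0) -> float:
    """How long tpad must freeze the last frame when the speed-adjusted
    source is shorter than the target duration (mirrors the logic above)."""
    speed = speed if speed > 0 else 1.0
    # Combined PTS factor: render speed stretches/compresses, ospeed scales on top
    effective = source_sec * (1.0 / speed) * ospeed
    return max(required_ms / 1000.0 - effective, 0.0)
```

A 4 s source asked to fill 5 s needs a 1 s tail freeze; at 2x speed the same source only covers 2 s and needs 3 s.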

index.py

@@ -1,42 +1,237 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
RenderWorker v2 entry point
A render worker speaking the v2 API protocol, handling the following task types:
- RENDER_SEGMENT_TS: render a video segment and package it as TS
- PREPARE_JOB_AUDIO: generate the job-wide audio
- FINALIZE_MP4: produce the final MP4
Usage:
python index.py
Environment variables:
API_ENDPOINT_V2: v2 API endpoint (or use API_ENDPOINT)
ACCESS_KEY: worker authentication key
WORKER_ID: worker ID (default 100001)
MAX_CONCURRENCY: maximum concurrency (default 4)
HEARTBEAT_INTERVAL: heartbeat interval in seconds (default 5)
TEMP_DIR: temporary file directory
"""
import sys
import time
import signal
import logging
import os
from logging.handlers import RotatingFileHandler
from dotenv import load_dotenv
from domain.config import WorkerConfig
from services.api_client import APIClientV2
from services.task_executor import TaskExecutor
from constant import SOFTWARE_VERSION
from util.tracing import initialize_tracing, shutdown_tracing
# Logging configuration
def setup_logging():
"""Configure the logging system: output to console and files"""
# Log format
log_format = '[%(asctime)s] [%(levelname)s] [%(name)s] %(message)s'
date_format = '%Y-%m-%d %H:%M:%S'
formatter = logging.Formatter(log_format, date_format)
# Get the root logger
root_logger = logging.getLogger()
# Let DEBUG records reach the handlers (each handler's own level decides what is persisted)
root_logger.setLevel(logging.DEBUG)
# Clear existing handlers (avoid duplicates)
root_logger.handlers.clear()
# 1. Console handler (WARNING and above only)
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.WARNING)
console_handler.setFormatter(formatter)
root_logger.addHandler(console_handler)
# Determine the log directory: after PyInstaller packaging, __file__ points into the temporary extraction directory and logs would be lost with it
# Use sys.frozen to detect a packaged build; when frozen, use the executable's directory
if getattr(sys, 'frozen', False):
log_dir = os.path.dirname(sys.executable)
else:
log_dir = os.path.dirname(os.path.abspath(__file__))
# 2. All-levels log file handler (all_log.log)
all_log_path = os.path.join(log_dir, 'all_log.log')
all_log_handler = RotatingFileHandler(
all_log_path,
maxBytes=10*1024*1024, # 10MB
backupCount=5,
encoding='utf-8'
)
all_log_handler.setLevel(logging.DEBUG) # record all levels
all_log_handler.setFormatter(formatter)
root_logger.addHandler(all_log_handler)
# 3. Error log file handler (error.log)
error_log_path = os.path.join(log_dir, 'error.log')
error_log_handler = RotatingFileHandler(
error_log_path,
maxBytes=10*1024*1024, # 10MB
backupCount=5,
encoding='utf-8'
)
error_log_handler.setLevel(logging.ERROR) # ERROR and above only
error_log_handler.setFormatter(formatter)
root_logger.addHandler(error_log_handler)
# Initialize the logging system
setup_logging()
logger = logging.getLogger('worker')
class WorkerV2:
"""
v2 render worker main class
Responsible for:
- loading configuration
- initializing the API client
- managing the task executor
- running the main loop
- handling graceful shutdown
"""
def __init__(self):
"""初始化 Worker"""
# 加载配置
try:
self.config = WorkerConfig.from_env()
except ValueError as e:
logger.error(f"Configuration error: {e}")
sys.exit(1)
tracing_enabled = initialize_tracing(self.config.worker_id, SOFTWARE_VERSION)
logger.info("OTel tracing %s", "enabled" if tracing_enabled else "disabled")
# Initialize the API client
self.api_client = APIClientV2(self.config)
# Initialize the task executor
self.task_executor = TaskExecutor(self.config, self.api_client)
# Running state
self.running = True
# Ensure the temp directory exists
self.config.ensure_temp_dir()
# Register signal handlers
self._setup_signal_handlers()
def _setup_signal_handlers(self):
"""设置信号处理器"""
# Windows 不支持 SIGTERM
signal.signal(signal.SIGINT, self._signal_handler)
if hasattr(signal, 'SIGTERM'):
signal.signal(signal.SIGTERM, self._signal_handler)
def _signal_handler(self, signum, frame):
"""
信号处理,优雅退出
Args:
signum: 信号编号
frame: 当前栈帧
"""
signal_name = signal.Signals(signum).name
logger.info(f"Received signal {signal_name}, initiating shutdown...")
self.running = False
def run(self):
"""主循环"""
logger.info("=" * 60)
logger.info("RenderWorker v2 Starting")
logger.info("=" * 60)
logger.info(f"Worker ID: {self.config.worker_id}")
logger.info(f"API Endpoint: {self.config.api_endpoint}")
logger.info(f"Max Concurrency: {self.config.max_concurrency}")
logger.info(f"Heartbeat Interval: {self.config.heartbeat_interval}s")
logger.info(f"Capabilities: {', '.join(self.config.capabilities)}")
logger.info(f"Temp Directory: {self.config.temp_dir}")
logger.info("=" * 60)
consecutive_errors = 0
max_consecutive_errors = 10
while self.running:
try:
# Heartbeat sync and task pull
current_task_ids = self.task_executor.get_current_task_ids()
tasks = self.api_client.sync(current_task_ids)
# Submit new tasks
for task in tasks:
if self.task_executor.submit_task(task):
logger.info(f"Submitted task: {task.task_id} ({task.task_type.value})")
# Reset the error counter
consecutive_errors = 0
# Wait for the next heartbeat
time.sleep(self.config.heartbeat_interval)
except KeyboardInterrupt:
logger.info("Keyboard interrupt received")
self.running = False
except Exception as e:
LOGGER.error(f"Error deleting file {file_path}", exc_info=e)
sleep(5)
for task in task_list:
print("start task:", task)
try:
biz.task.start_task(task)
except Exception as e:
LOGGER.error("task_start error", exc_info=e)
consecutive_errors += 1
logger.error(f"Worker loop error ({consecutive_errors}/{max_consecutive_errors}): {e}", exc_info=True)
# Too many consecutive errors: back off for longer
if consecutive_errors >= max_consecutive_errors:
logger.error("Too many consecutive errors, waiting 30 seconds...")
time.sleep(30)
consecutive_errors = 0
else:
time.sleep(5)
# Graceful shutdown
self._shutdown()
def _shutdown(self):
"""Graceful shutdown"""
logger.info("Shutting down...")
# Wait for running tasks to finish
current_count = self.task_executor.get_current_task_count()
if current_count > 0:
logger.info(f"Waiting for {current_count} running task(s) to complete...")
# Shut down the executor
self.task_executor.shutdown(wait=True)
# Close the API client
self.api_client.close()
shutdown_tracing()
logger.info("Worker stopped")
def main():
"""Main entry point"""
# Load the .env file if present
load_dotenv()
logger.info(f"RenderWorker v{SOFTWARE_VERSION}")
# Create and run the worker
worker = WorkerV2()
worker.run()
if __name__ == '__main__':
main()
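The environment variables documented in the module docstring are consumed by `WorkerConfig.from_env()` (defined in domain/config.py, which is not shown in this diff). A minimal sketch under that assumption — `WorkerConfigSketch` is a hypothetical stand-in, using the documented defaults:

```python
import os
from dataclasses import dataclass


@dataclass
class WorkerConfigSketch:
    """Hypothetical minimal stand-in for WorkerConfig, illustrating the
    env-var contract documented above (not the real implementation)."""
    api_endpoint: str
    access_key: str
    worker_id: str = '100001'
    max_concurrency: int = 4
    heartbeat_interval: int = 5

    @classmethod
    def from_env(cls) -> 'WorkerConfigSketch':
        # API_ENDPOINT_V2 takes precedence; API_ENDPOINT is the fallback
        endpoint = os.environ.get('API_ENDPOINT_V2') or os.environ.get('API_ENDPOINT')
        access_key = os.environ.get('ACCESS_KEY')
        if not endpoint or not access_key:
            # WorkerV2.__init__ catches ValueError and exits with status 1
            raise ValueError('API_ENDPOINT_V2/API_ENDPOINT and ACCESS_KEY are required')
        return cls(
            api_endpoint=endpoint.rstrip('/'),
            access_key=access_key,
            worker_id=os.environ.get('WORKER_ID', '100001'),
            max_concurrency=int(os.environ.get('MAX_CONCURRENCY', '4')),
            heartbeat_interval=int(os.environ.get('HEARTBEAT_INTERVAL', '5')),
        )
```

Note the trailing-slash strip on the endpoint, matching `APIClientV2`'s `config.api_endpoint.rstrip('/')`.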


@@ -1,7 +1,8 @@
requests~=2.32.3
psutil~=6.1.0
python-dotenv~=1.0.1
opentelemetry-api~=1.35.0
opentelemetry-sdk~=1.35.0
opentelemetry-exporter-otlp~=1.35.0
opentelemetry-instrumentation-threading~=0.56b0
flask~=3.1.0

services/__init__.py Normal file

@@ -0,0 +1,18 @@
# -*- coding: utf-8 -*-
"""
Service layer
Contains the API client, task executor, lease service, storage service, and related components.
"""
from services.api_client import APIClientV2
from services.lease_service import LeaseService
from services.task_executor import TaskExecutor
from services import storage
__all__ = [
'APIClientV2',
'LeaseService',
'TaskExecutor',
'storage',
]

services/api_client.py Normal file

@@ -0,0 +1,485 @@
# -*- coding: utf-8 -*-
"""
v2 API client
Implements communication with the render server's v2 interface.
"""
import logging
import subprocess
import time
import requests
from typing import Dict, List, Optional, Any
from urllib.parse import urlparse
from opentelemetry.trace import SpanKind, Status, StatusCode
from domain.task import Task
from domain.config import WorkerConfig
from util.system import get_hw_accel_info_str
from util.tracing import inject_trace_headers, mark_span_error, start_span
logger = logging.getLogger(__name__)
class APIClientV2:
"""
v2 API client
Handles all HTTP communication with the render server.
"""
SYSTEM_INFO_TTL_SECONDS = 30
def __init__(self, config: WorkerConfig):
"""
初始化 API 客户端
Args:
config: Worker 配置
"""
self.config = config
self.base_url = config.api_endpoint.rstrip('/')
self.access_key = config.access_key
self.worker_id = config.worker_id
self.session = requests.Session()
self._ffmpeg_version: Optional[str] = None
self._codec_info: Optional[str] = None
self._hw_accel_info: Optional[str] = None
self._gpu_info: Optional[str] = None
self._gpu_info_checked = False
self._static_system_info: Optional[Dict[str, Any]] = None
self._system_info_cache: Optional[Dict[str, Any]] = None
self._system_info_cache_ts = 0.0
# Set default request headers
self.session.headers.update({
'Content-Type': 'application/json',
'Accept': 'application/json'
})
def _request_with_trace(
self,
method: str,
url: str,
*,
task_id: Optional[str] = None,
span_name: str = "",
**kwargs: Any,
) -> requests.Response:
request_kwargs = dict(kwargs)
headers = request_kwargs.pop("headers", None)
if task_id:
request_kwargs["headers"] = inject_trace_headers(headers)
elif headers:
request_kwargs["headers"] = headers
parsed_url = urlparse(url)
attributes = {
"http.request.method": method.upper(),
"url.path": parsed_url.path,
"server.address": parsed_url.hostname or "",
}
if parsed_url.port:
attributes["server.port"] = parsed_url.port
name = span_name or f"render.api.{method.lower()}"
with start_span(name, task_id=task_id, kind=SpanKind.CLIENT, attributes=attributes) as span:
try:
response = self.session.request(method=method, url=url, **request_kwargs)
except Exception as exc:
mark_span_error(span, str(exc), "HTTP_REQUEST_ERROR")
raise
if span is not None:
span.set_attribute("http.response.status_code", response.status_code)
if response.status_code >= 400:
span.set_status(Status(StatusCode.ERROR, f"HTTP {response.status_code}"))
return response
def sync(self, current_task_ids: List[str]) -> List[Task]:
"""
心跳同步并拉取任务
Args:
current_task_ids: 当前正在执行的任务 ID 列表
Returns:
List[Task]: 新分配的任务列表
"""
url = f"{self.base_url}/render/v2/worker/sync"
# Convert task IDs to integers (the server expects []int64)
task_ids_int = [int(tid) for tid in current_task_ids if tid.isdigit()]
payload = {
'accessKey': self.access_key,
'workerId': self.worker_id,
'capabilities': self.config.capabilities,
'maxConcurrency': self.config.max_concurrency,
'currentTaskCount': len(current_task_ids),
'currentTaskIds': task_ids_int,
'ffmpegVersion': self._get_ffmpeg_version(),
'codecInfo': self._get_codec_info(),
'systemInfo': self._get_system_info()
}
try:
resp = self.session.post(url, json=payload, timeout=10)
resp.raise_for_status()
data = resp.json()
if data.get('code') != 200:
logger.warning(f"Sync failed: {data.get('message')}")
return []
# Parse the task list
tasks = []
for task_data in (data.get('data') or {}).get('tasks') or []:
try:
task = Task.from_dict(task_data)
tasks.append(task)
except Exception as e:
logger.error(f"Failed to parse task: {e}")
if tasks:
logger.info(f"Received {len(tasks)} new tasks")
return tasks
except requests.exceptions.Timeout:
logger.warning("Sync timeout")
return []
except requests.exceptions.RequestException as e:
logger.error(f"Sync request error: {e}")
return []
except Exception as e:
logger.error(f"Sync error: {e}")
return []
def report_start(self, task_id: str) -> bool:
"""
报告任务开始
Args:
task_id: 任务 ID
Returns:
bool: 是否成功
"""
url = f"{self.base_url}/render/v2/task/{task_id}/start"
try:
resp = self._request_with_trace(
method="POST",
url=url,
task_id=task_id,
span_name="render.task.api.report_start",
json={'workerId': self.worker_id},
timeout=10,
)
if resp.status_code == 200:
logger.debug(f"[task:{task_id}] Start reported")
return True
else:
logger.warning(f"[task:{task_id}] Report start failed: {resp.status_code}")
return False
except Exception as e:
logger.error(f"[task:{task_id}] Report start error: {e}")
return False
def report_success(self, task_id: str, result: Dict[str, Any]) -> bool:
"""
报告任务成功
Args:
task_id: 任务 ID
result: 任务结果数据
Returns:
bool: 是否成功
"""
url = f"{self.base_url}/render/v2/task/{task_id}/success"
try:
resp = self._request_with_trace(
method="POST",
url=url,
task_id=task_id,
span_name="render.task.api.report_success",
json={
'workerId': self.worker_id,
'result': result
},
timeout=10,
)
if resp.status_code == 200:
logger.debug(f"[task:{task_id}] Success reported")
return True
else:
logger.warning(f"[task:{task_id}] Report success failed: {resp.status_code}")
return False
except Exception as e:
logger.error(f"[task:{task_id}] Report success error: {e}")
return False
def report_fail(self, task_id: str, error_code: str, error_message: str) -> bool:
"""
报告任务失败
Args:
task_id: 任务 ID
error_code: 错误码
error_message: 错误信息
Returns:
bool: 是否成功
"""
url = f"{self.base_url}/render/v2/task/{task_id}/fail"
try:
resp = self._request_with_trace(
method="POST",
url=url,
task_id=task_id,
span_name="render.task.api.report_fail",
json={
'workerId': self.worker_id,
'errorCode': error_code,
'errorMessage': error_message[:1000] # limit the length
},
timeout=10,
)
if resp.status_code == 200:
logger.debug(f"[task:{task_id}] Failure reported")
return True
else:
logger.warning(f"[task:{task_id}] Report fail failed: {resp.status_code}")
return False
except Exception as e:
logger.error(f"[task:{task_id}] Report fail error: {e}")
return False
def get_upload_url(self, task_id: str, file_type: str, file_name: str = None) -> Optional[Dict[str, str]]:
"""
获取上传 URL
Args:
task_id: 任务 ID
file_type: 文件类型(video/audio/ts/mp4)
file_name: 文件名(可选)
Returns:
Dict 包含 uploadUrl 和 accessUrl,失败返回 None
"""
url = f"{self.base_url}/render/v2/task/{task_id}/uploadUrl"
payload = {'fileType': file_type}
if file_name:
payload['fileName'] = file_name
try:
resp = self._request_with_trace(
method="POST",
url=url,
task_id=task_id,
span_name="render.task.api.get_upload_url",
json=payload,
timeout=10,
)
if resp.status_code == 200:
data = resp.json()
if data.get('code') == 200:
return data.get('data')
logger.warning(f"[task:{task_id}] Get upload URL failed: {resp.status_code}")
return None
except Exception as e:
logger.error(f"[task:{task_id}] Get upload URL error: {e}")
return None
def extend_lease(self, task_id: str, extension: int = None) -> bool:
"""
延长租约
Args:
task_id: 任务 ID
extension: 延长秒数(默认使用配置值)
Returns:
bool: 是否成功
"""
if extension is None:
extension = self.config.lease_extension_duration
url = f"{self.base_url}/render/v2/task/{task_id}/extend-lease"
try:
resp = self._request_with_trace(
method="POST",
url=url,
task_id=task_id,
span_name="render.task.api.extend_lease",
params={
'workerId': self.worker_id,
'extension': extension
},
timeout=10,
)
if resp.status_code == 200:
logger.debug(f"[task:{task_id}] Lease extended by {extension}s")
return True
else:
logger.warning(f"[task:{task_id}] Extend lease failed: {resp.status_code}")
return False
except Exception as e:
logger.error(f"[task:{task_id}] Extend lease error: {e}")
return False
def get_task_info(self, task_id: str) -> Optional[Dict]:
"""
获取任务详情
Args:
task_id: 任务 ID
Returns:
任务详情字典,失败返回 None
"""
url = f"{self.base_url}/render/v2/task/{task_id}"
try:
resp = self._request_with_trace(
method="GET",
url=url,
task_id=task_id,
span_name="render.task.api.get_task_info",
timeout=10,
)
if resp.status_code == 200:
data = resp.json()
if data.get('code') == 200:
return data.get('data')
return None
except Exception as e:
logger.error(f"[task:{task_id}] Get task info error: {e}")
return None
def _get_ffmpeg_version(self) -> str:
"""获取 FFmpeg 版本"""
if self._ffmpeg_version is not None:
return self._ffmpeg_version
try:
result = subprocess.run(
['ffmpeg', '-version'],
capture_output=True,
text=True,
timeout=5
)
first_line = result.stdout.split('\n')[0]
if 'version' in first_line:
parts = first_line.split()
for i, part in enumerate(parts):
if part == 'version' and i + 1 < len(parts):
self._ffmpeg_version = parts[i + 1]
return self._ffmpeg_version
self._ffmpeg_version = 'unknown'
return self._ffmpeg_version
except Exception:
self._ffmpeg_version = 'unknown'
return self._ffmpeg_version
def _get_codec_info(self) -> str:
"""获取支持的编解码器信息"""
if self._codec_info is not None:
return self._codec_info
try:
result = subprocess.run(
['ffmpeg', '-codecs'],
capture_output=True,
text=True,
timeout=5
)
# Check for common codecs
codecs = []
output = result.stdout
if 'libx264' in output:
codecs.append('libx264')
if 'libx265' in output or 'hevc' in output:
codecs.append('libx265')
if 'aac' in output:
codecs.append('aac')
if 'libfdk_aac' in output:
codecs.append('libfdk_aac')
self._codec_info = ', '.join(codecs) if codecs else 'unknown'
return self._codec_info
except Exception:
self._codec_info = 'unknown'
return self._codec_info
def _get_system_info(self) -> Dict[str, Any]:
"""获取系统信息"""
try:
now = time.monotonic()
if (
self._system_info_cache
and now - self._system_info_cache_ts < self.SYSTEM_INFO_TTL_SECONDS
):
return self._system_info_cache
import platform
import psutil
if self._hw_accel_info is None:
self._hw_accel_info = get_hw_accel_info_str()
if self._static_system_info is None:
self._static_system_info = {
'os': platform.system(),
'cpu': f"{psutil.cpu_count()} cores",
'memory': f"{psutil.virtual_memory().total // (1024**3)}GB",
'hwAccelConfig': self.config.hw_accel, # configured hardware acceleration
'hwAccelSupport': self._hw_accel_info, # hardware acceleration supported by the system
}
info = dict(self._static_system_info)
info.update({
'cpuUsage': f"{psutil.cpu_percent()}%",
'memoryAvailable': f"{psutil.virtual_memory().available // (1024**3)}GB",
})
# Try to fetch GPU info
gpu_info = self._get_gpu_info()
if gpu_info:
info['gpu'] = gpu_info
self._system_info_cache = info
self._system_info_cache_ts = now
return info
except Exception:
return {}
def _get_gpu_info(self) -> Optional[str]:
"""获取 GPU 信息"""
if self._gpu_info_checked:
return self._gpu_info
self._gpu_info_checked = True
try:
result = subprocess.run(
['nvidia-smi', '--query-gpu=name', '--format=csv,noheader'],
capture_output=True,
text=True,
timeout=5
)
if result.returncode == 0:
gpu_name = result.stdout.strip().split('\n')[0]
self._gpu_info = gpu_name
except Exception:
self._gpu_info = None
return self._gpu_info
def close(self):
"""关闭会话"""
self.session.close()
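The response-envelope handling in `sync()` above can be isolated as a small pure function. This is an illustrative sketch (`parse_sync_response` is a hypothetical name, mirroring the checks in `sync()`): a non-200 envelope `code` yields an empty list, and a missing or null `data.tasks` is tolerated.

```python
from typing import Any, Dict, List


def parse_sync_response(data: Dict[str, Any]) -> List[Dict[str, Any]]:
    """Unwrap the {code, message, data: {tasks: [...]}} envelope returned
    by /render/v2/worker/sync; return raw task dicts, or [] on any miss."""
    if data.get('code') != 200:
        return []
    # 'data' may be absent or null; 'tasks' may be absent, null, or empty
    return (data.get('data') or {}).get('tasks') or []
```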

services/cache.py Normal file

@@ -0,0 +1,583 @@
# -*- coding: utf-8 -*-
"""
Material cache service
Provides download caching for materials, so identical materials are not downloaded repeatedly.
"""
import json
import os
import hashlib
import logging
import shutil
import time
import uuid
from typing import Any, Dict, Optional, Tuple
from urllib.parse import urlparse, unquote
import psutil
from services import storage
logger = logging.getLogger(__name__)
def _extract_cache_key(url: str) -> str:
"""
从 URL 提取缓存键
去除签名等查询参数,保留路径作为唯一标识。
Args:
url: 完整的素材 URL
Returns:
缓存键(URL 路径的 MD5 哈希)
"""
parsed = urlparse(url)
# Use scheme + host + path as the unique identifier (ignore signatures and other query params)
cache_key_source = f"{parsed.scheme}://{parsed.netloc}{unquote(parsed.path)}"
return hashlib.md5(cache_key_source.encode('utf-8')).hexdigest()
def _get_file_extension(url: str) -> str:
"""
从 URL 提取文件扩展名
Args:
url: 素材 URL
Returns:
文件扩展名(如 .mp4, .png),无法识别时返回空字符串
"""
parsed = urlparse(url)
path = unquote(parsed.path)
_, ext = os.path.splitext(path)
return ext.lower() if ext else ''
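A quick illustration of the cache-key scheme: two URLs that differ only in (signed) query parameters map to the same key. The helper below is a standalone copy of `_extract_cache_key` above, duplicated here purely for demonstration:

```python
import hashlib
from urllib.parse import urlparse, unquote


def extract_cache_key(url: str) -> str:
    """Standalone copy of _extract_cache_key: MD5 over scheme://host/path,
    ignoring the query string entirely."""
    parsed = urlparse(url)
    source = f"{parsed.scheme}://{parsed.netloc}{unquote(parsed.path)}"
    return hashlib.md5(source.encode('utf-8')).hexdigest()
```

This is what lets a re-signed CDN URL (new `sig`/`exp` parameters) still hit the existing cache entry.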
class MaterialCache:
"""
Material cache manager
Handles cached storage and retrieval of material files.
"""
LOCK_TIMEOUT_SEC = 30.0
LOCK_POLL_INTERVAL_SEC = 0.1
LOCK_STALE_SECONDS = 24 * 60 * 60
DOWNLOAD_LOCK_TIMEOUT_SEC = 5.0
def __init__(self, cache_dir: str, enabled: bool = True, max_size_gb: float = 0):
"""
初始化缓存管理器
Args:
cache_dir: 缓存目录路径
enabled: 是否启用缓存
max_size_gb: 最大缓存大小(GB),0 表示不限制
"""
self.cache_dir = cache_dir
self.enabled = enabled
self.max_size_bytes = int(max_size_gb * 1024 * 1024 * 1024) if max_size_gb > 0 else 0
if self.enabled:
os.makedirs(self.cache_dir, exist_ok=True)
logger.info(f"Material cache initialized: {cache_dir}")
def get_cache_path(self, url: str) -> str:
"""
获取素材的缓存文件路径
Args:
url: 素材 URL
Returns:
缓存文件的完整路径
"""
cache_key = _extract_cache_key(url)
ext = _get_file_extension(url)
filename = f"{cache_key}{ext}"
return os.path.join(self.cache_dir, filename)
def _get_lock_path(self, cache_key: str) -> str:
"""获取缓存锁文件路径"""
assert self.cache_dir
return os.path.join(self.cache_dir, f"{cache_key}.lock")
def _write_lock_metadata(self, lock_fd: int, lock_path: str) -> bool:
"""写入锁元数据,失败则清理锁文件"""
try:
try:
process_start_time = psutil.Process(os.getpid()).create_time()
except Exception as e:
process_start_time = None
logger.warning(f"Cache lock process start time error: {e}")
metadata = {
'pid': os.getpid(),
'process_start_time': process_start_time,
'created_at': time.time()
}
with os.fdopen(lock_fd, 'w', encoding='utf-8') as lock_file:
json.dump(metadata, lock_file)
return True
except Exception as e:
try:
os.close(lock_fd)
except Exception:
pass
self._remove_lock_file(lock_path, f"write metadata failed: {e}")
return False
def _read_lock_metadata(self, lock_path: str) -> Optional[dict]:
"""读取锁元数据,失败返回 None(兼容历史空锁文件)"""
try:
with open(lock_path, 'r', encoding='utf-8') as lock_file:
data = json.load(lock_file)
return data if isinstance(data, dict) else None
except Exception:
return None
def _is_process_alive(self, pid: int, expected_start_time: Optional[float]) -> bool:
"""判断进程是否存活并校验启动时间(防止 PID 复用)"""
try:
process = psutil.Process(pid)
if expected_start_time is None:
return process.is_running()
actual_start_time = process.create_time()
return abs(actual_start_time - expected_start_time) < 1.0
except psutil.NoSuchProcess:
return False
except Exception as e:
logger.warning(f"Cache lock process check error: {e}")
return True
def _is_lock_stale(self, lock_path: str) -> bool:
"""判断锁是否过期(进程已退出或超过最大存活时长)"""
if not os.path.exists(lock_path):
return False
now = time.time()
metadata = self._read_lock_metadata(lock_path)
if metadata:
created_at = metadata.get('created_at')
if isinstance(created_at, (int, float)) and now - created_at > self.LOCK_STALE_SECONDS:
return True
pid = metadata.get('pid')
pid_value = int(pid) if isinstance(pid, int) or (isinstance(pid, str) and pid.isdigit()) else None
expected_start_time = metadata.get('process_start_time')
expected_start_time_value = (
expected_start_time if isinstance(expected_start_time, (int, float)) else None
)
if pid_value is not None and not self._is_process_alive(pid_value, expected_start_time_value):
return True
return self._is_lock_stale_by_mtime(lock_path, now)
return self._is_lock_stale_by_mtime(lock_path, now)
def _is_lock_stale_by_mtime(self, lock_path: str, now: float) -> bool:
"""基于文件时间判断锁是否过期"""
try:
mtime = os.path.getmtime(lock_path)
return now - mtime > self.LOCK_STALE_SECONDS
except Exception as e:
logger.warning(f"Cache lock stat error: {e}")
return False
def _remove_lock_file(self, lock_path: str, reason: str = "") -> bool:
"""删除锁文件"""
try:
os.remove(lock_path)
if reason:
logger.info(f"Cache lock removed: {lock_path} ({reason})")
return True
except FileNotFoundError:
return True
except Exception as e:
logger.warning(f"Cache lock remove error: {e}")
return False
def _acquire_lock(self, cache_key: str, timeout_sec: Optional[float] = None) -> Optional[str]:
"""获取缓存锁(跨进程安全)"""
if not self.enabled:
return None
wait_timeout_sec = self.LOCK_TIMEOUT_SEC if timeout_sec is None else max(float(timeout_sec), 0.0)
lock_path = self._get_lock_path(cache_key)
deadline = time.monotonic() + wait_timeout_sec
while True:
try:
fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
if not self._write_lock_metadata(fd, lock_path):
return None
return lock_path
except FileExistsError:
if self._is_lock_stale(lock_path):
removed = self._remove_lock_file(lock_path, "stale lock")
if removed:
continue
if time.monotonic() >= deadline:
logger.warning(f"Cache lock timeout ({wait_timeout_sec:.1f}s): {lock_path}")
return None
time.sleep(self.LOCK_POLL_INTERVAL_SEC)
except Exception as e:
logger.warning(f"Cache lock error: {e}")
return None
def _acquire_lock_with_wait(
self,
cache_key: str,
timeout_sec: Optional[float] = None
) -> Tuple[Optional[str], int]:
"""获取缓存锁并返回等待时长(毫秒)"""
start_time = time.monotonic()
lock_path = self._acquire_lock(cache_key, timeout_sec=timeout_sec)
lock_wait_ms = max(int((time.monotonic() - start_time) * 1000), 0)
return lock_path, lock_wait_ms
def _release_lock(self, lock_path: Optional[str]) -> None:
"""释放缓存锁"""
if not lock_path:
return
self._remove_lock_file(lock_path)
def is_cached(self, url: str) -> Tuple[bool, str]:
"""
检查素材是否已缓存
Args:
url: 素材 URL
Returns:
(是否已缓存, 缓存文件路径)
"""
if not self.enabled:
return False, ''
cache_path = self.get_cache_path(url)
exists = os.path.exists(cache_path) and os.path.getsize(cache_path) > 0
return exists, cache_path
def _is_cache_file_ready(self, cache_path: str) -> bool:
"""缓存文件是否已就绪(存在且大小大于 0)"""
try:
return os.path.exists(cache_path) and os.path.getsize(cache_path) > 0
except Exception:
return False
def _copy_cache_to_dest(self, cache_path: str, dest: str) -> Tuple[bool, int]:
"""将缓存文件复制到目标路径并返回结果与文件大小"""
try:
shutil.copy2(cache_path, dest)
try:
os.utime(cache_path, None)
except Exception as e:
logger.debug(f"Failed to update cache access time: {e}")
file_size = os.path.getsize(dest) if os.path.exists(dest) else 0
return True, file_size
except Exception as e:
logger.warning(f"Failed to copy from cache: {e}")
return False, 0
def get_or_download(
self,
url: str,
dest: str,
timeout: int = 300,
max_retries: int = 5
) -> bool:
"""兼容旧接口:返回下载是否成功。"""
result, _ = self.get_or_download_with_metrics(
url=url,
dest=dest,
timeout=timeout,
max_retries=max_retries,
)
return result
def get_or_download_with_metrics(
self,
url: str,
dest: str,
timeout: int = 300,
max_retries: int = 5
) -> Tuple[bool, Dict[str, Any]]:
"""
从缓存获取素材,若未缓存则下载并缓存,并返回关键指标。
Args:
url: 素材 URL
dest: 目标文件路径(任务工作目录中的路径)
timeout: 下载超时时间(秒)
max_retries: 最大重试次数
Returns:
(是否成功, 指标字典)
"""
metrics: Dict[str, Any] = {
"lock_wait_ms": 0,
"lock_acquired": False,
"cache_path_used": "unknown",
}
# Ensure the destination directory exists
dest_dir = os.path.dirname(dest)
if dest_dir:
os.makedirs(dest_dir, exist_ok=True)
# Download directly when the cache is disabled
if not self.enabled:
result = storage.download_file(url, dest, max_retries=max_retries, timeout=timeout)
metrics["cache_path_used"] = "direct"
return result, metrics
cache_key = _extract_cache_key(url)
cache_path = self.get_cache_path(url)
def _try_serve_from_cache(log_prefix: str, delete_on_failure: bool = False) -> bool:
if not self._is_cache_file_ready(cache_path):
return False
copied, file_size = self._copy_cache_to_dest(cache_path, dest)
if copied:
metrics["cache_path_used"] = "cache"
logger.info(f"{log_prefix}: {url[:80]}... -> {dest} ({file_size} bytes)")
return True
if delete_on_failure:
try:
os.remove(cache_path)
except Exception:
pass
return False
if _try_serve_from_cache("Cache hit"):
return True, metrics
lock_path, lock_wait_ms = self._acquire_lock_with_wait(
cache_key,
timeout_sec=self.DOWNLOAD_LOCK_TIMEOUT_SEC,
)
metrics["lock_wait_ms"] = lock_wait_ms
if not lock_path:
if _try_serve_from_cache("Cache hit after lock timeout"):
return True, metrics
logger.warning(f"Cache lock unavailable, downloading without cache: {url[:80]}...")
result = storage.download_file(url, dest, max_retries=max_retries, timeout=timeout)
metrics["cache_path_used"] = "direct"
return result, metrics
metrics["lock_acquired"] = True
try:
if _try_serve_from_cache("Cache hit", delete_on_failure=True):
return True, metrics
# Cache miss: download into the cache directory
logger.debug(f"Cache miss: {url[:80]}...")
# Download to a temp file first (unique name, avoids concurrent overwrites)
temp_cache_path = os.path.join(
self.cache_dir,
f"{cache_key}.{uuid.uuid4().hex}.downloading"
)
try:
if not storage.download_file(url, temp_cache_path, max_retries=max_retries, timeout=timeout):
# Download failed: clean up the temp file
if os.path.exists(temp_cache_path):
os.remove(temp_cache_path)
return False, metrics
if not os.path.exists(temp_cache_path) or os.path.getsize(temp_cache_path) <= 0:
if os.path.exists(temp_cache_path):
os.remove(temp_cache_path)
return False, metrics
# Download succeeded: atomically replace the cache file
os.replace(temp_cache_path, cache_path)
# Copy to the destination path
if not _try_serve_from_cache("Downloaded and cached", delete_on_failure=False):
return False, metrics
# Check whether cache cleanup is needed
if self.max_size_bytes > 0:
self._cleanup_if_needed()
return True, metrics
except Exception as e:
logger.error(f"Cache download error: {e}")
# Clean up the temp file
if os.path.exists(temp_cache_path):
try:
os.remove(temp_cache_path)
except Exception:
pass
return False, metrics
finally:
self._release_lock(lock_path)
def add_to_cache(self, url: str, source_path: str) -> bool:
"""
将本地文件添加到缓存
Args:
url: 对应的 URL(用于生成缓存键)
source_path: 本地文件路径
Returns:
是否成功
"""
if not self.enabled:
return False
if not os.path.exists(source_path):
logger.warning(f"Source file not found for cache: {source_path}")
return False
cache_key = _extract_cache_key(url)
lock_path = self._acquire_lock(cache_key)
if not lock_path:
logger.warning(f"Cache lock unavailable for adding: {url[:80]}...")
return False
try:
cache_path = self.get_cache_path(url)
# Copy to a temp file first
temp_cache_path = os.path.join(
self.cache_dir,
f"{cache_key}.{uuid.uuid4().hex}.adding"
)
shutil.copy2(source_path, temp_cache_path)
# Atomic replace
os.replace(temp_cache_path, cache_path)
# Refresh the access time
os.utime(cache_path, None)
logger.info(f"Added to cache: {url[:80]}... <- {source_path}")
# Check for cleanup
if self.max_size_bytes > 0:
self._cleanup_if_needed()
return True
except Exception as e:
logger.error(f"Failed to add to cache: {e}")
if 'temp_cache_path' in locals() and os.path.exists(temp_cache_path):
try:
os.remove(temp_cache_path)
except Exception:
pass
return False
finally:
self._release_lock(lock_path)
def _cleanup_if_needed(self) -> None:
"""
检查并清理缓存(LRU 策略)
当缓存大小超过限制时,删除最久未访问的文件。
"""
if self.max_size_bytes <= 0:
return
try:
# Collect all cache files and their info
cache_files = []
total_size = 0
for filename in os.listdir(self.cache_dir):
if filename.endswith('.downloading') or filename.endswith('.lock'):
continue
file_path = os.path.join(self.cache_dir, filename)
if os.path.isfile(file_path):
stat = os.stat(file_path)
cache_files.append({
'path': file_path,
'size': stat.st_size,
'atime': stat.st_atime
})
total_size += stat.st_size
# Under the limit: nothing to clean up
if total_size <= self.max_size_bytes:
return
# Sort by access time (least recently accessed first)
cache_files.sort(key=lambda x: x['atime'])
# Delete files until the total drops below 80% of the limit
target_size = int(self.max_size_bytes * 0.8)
deleted_count = 0
for file_info in cache_files:
if total_size <= target_size:
break
# Derive the cache_key from the filename and check for a lock (a lock means the file is in use)
filename = os.path.basename(file_info['path'])
cache_key = os.path.splitext(filename)[0]
lock_path = self._get_lock_path(cache_key)
if os.path.exists(lock_path):
if self._is_lock_stale(lock_path):
self._remove_lock_file(lock_path, "cleanup stale lock")
else:
# The file is in use by another task; skip deletion
logger.debug(f"Cache cleanup: skipping locked file: {filename}")
continue
try:
os.remove(file_info['path'])
total_size -= file_info['size']
deleted_count += 1
except Exception as e:
logger.warning(f"Failed to delete cache file: {e}")
if deleted_count > 0:
logger.info(f"Cache cleanup: deleted {deleted_count} files, current size: {total_size / (1024*1024*1024):.2f} GB")
except Exception as e:
logger.warning(f"Cache cleanup error: {e}")
def clear(self) -> None:
"""Clear all cached files"""
if not self.enabled:
return
try:
if os.path.exists(self.cache_dir):
shutil.rmtree(self.cache_dir)
os.makedirs(self.cache_dir, exist_ok=True)
logger.info("Cache cleared")
except Exception as e:
logger.error(f"Failed to clear cache: {e}")
def get_stats(self) -> dict:
"""
Get cache statistics
Returns:
A dict containing cache statistics
"""
if not self.enabled or not os.path.exists(self.cache_dir):
return {'enabled': False, 'file_count': 0, 'total_size_mb': 0}
file_count = 0
total_size = 0
for filename in os.listdir(self.cache_dir):
if filename.endswith('.downloading') or filename.endswith('.lock'):
continue
file_path = os.path.join(self.cache_dir, filename)
if os.path.isfile(file_path):
file_count += 1
total_size += os.path.getsize(file_path)
return {
'enabled': True,
'cache_dir': self.cache_dir,
'file_count': file_count,
'total_size_mb': round(total_size / (1024 * 1024), 2),
'max_size_gb': self.max_size_bytes / (1024 * 1024 * 1024) if self.max_size_bytes > 0 else 0
}
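The LRU policy in `_cleanup_if_needed` above can be exercised on its own. Below is a minimal, self-contained sketch of the same idea (the function name `evict_lru` is illustrative, not the class method): scan the cache directory, then delete the least-recently-accessed files until the total size drops to 80% of the limit.

```python
import os
import tempfile

def evict_lru(cache_dir: str, max_size_bytes: int, target_ratio: float = 0.8) -> int:
    """Delete least-recently-accessed cache files until the total size
    drops to target_ratio * max_size_bytes. Returns the number deleted."""
    entries = []
    total = 0
    for name in os.listdir(cache_dir):
        # Skip in-progress downloads and lock files, as the real method does
        if name.endswith(('.downloading', '.lock')):
            continue
        path = os.path.join(cache_dir, name)
        if not os.path.isfile(path):
            continue
        stat = os.stat(path)
        entries.append((stat.st_atime, stat.st_size, path))
        total += stat.st_size
    if total <= max_size_bytes:
        return 0
    target = int(max_size_bytes * target_ratio)
    deleted = 0
    for _, size, path in sorted(entries):  # oldest access time first
        if total <= target:
            break
        os.remove(path)
        total -= size
        deleted += 1
    return deleted
```

Unlike the real method, this sketch omits the stale-lock check, so it is only a model of the eviction order, not a drop-in replacement.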

services/gpu_scheduler.py Normal file

@@ -0,0 +1,186 @@
# -*- coding: utf-8 -*-
"""
GPU scheduler
Provides round-robin scheduling across multiple GPU devices.
"""
import logging
import threading
from typing import List, Optional
from domain.config import WorkerConfig
from domain.gpu import GPUDevice
from util.system import get_all_gpu_info, validate_gpu_device
from constant import HW_ACCEL_CUDA, HW_ACCEL_QSV
logger = logging.getLogger(__name__)
class GPUScheduler:
"""
GPU scheduler
Implements round-robin scheduling across multiple GPU devices.
Thread-safe; supports concurrent task execution.
Usage:
scheduler = GPUScheduler(config)
# when executing a task
device_index = scheduler.acquire()
try:
# run the task
pass
finally:
scheduler.release(device_index)
"""
def __init__(self, config: WorkerConfig):
"""
Initialize the scheduler
Args:
config: Worker configuration
"""
self._config = config
self._devices: List[GPUDevice] = []
self._next_index: int = 0
self._lock = threading.Lock()
self._enabled = False
# Initialize the device list
self._init_devices()
def _init_devices(self) -> None:
"""Initialize the GPU device list"""
# Only initialize when hardware acceleration is enabled
if self._config.hw_accel not in (HW_ACCEL_CUDA, HW_ACCEL_QSV):
logger.info("Hardware acceleration not enabled, GPU scheduler disabled")
return
configured_devices = self._config.gpu_devices
if self._config.hw_accel == HW_ACCEL_QSV:
# QSV uses the Intel iGPU and has no nvidia-smi; initialize from config or the default device
self._devices = self._init_qsv_devices(configured_devices)
elif configured_devices:
# CUDA: use the configured devices, validated via nvidia-smi
self._devices = self._validate_configured_devices(configured_devices)
else:
# CUDA: auto-detect all NVIDIA devices
self._devices = self._auto_detect_devices()
if self._devices:
self._enabled = True
device_info = ', '.join(str(d) for d in self._devices)
logger.info(f"GPU scheduler initialized with {len(self._devices)} device(s): {device_info}")
else:
logger.warning("No GPU devices available, scheduler disabled")
def _init_qsv_devices(self, configured_indices: List[int]) -> List[GPUDevice]:
"""
Initialize the QSV device list
QSV uses the Intel iGPU, so nvidia-smi is not available for detection.
If GPU_DEVICES is configured, trust the configuration as-is; otherwise fall back to device 0.
Args:
configured_indices: Configured device indices (may be empty)
Returns:
List of QSV devices
"""
indices = configured_indices if configured_indices else [0]
return [
GPUDevice(index=idx, name=f"QSV-{idx}", available=True)
for idx in indices
]
def _validate_configured_devices(self, indices: List[int]) -> List[GPUDevice]:
"""
Validate the configured device list
Args:
indices: Configured device indices
Returns:
Devices that passed validation
"""
devices = []
for index in indices:
if validate_gpu_device(index):
devices.append(GPUDevice(
index=index,
name=f"GPU-{index}",
available=True
))
else:
logger.warning(f"GPU device {index} is not available, skipping")
return devices
def _auto_detect_devices(self) -> List[GPUDevice]:
"""
Auto-detect all available GPUs
Returns:
List of detected devices
"""
all_devices = get_all_gpu_info()
# Filter out unavailable devices
return [d for d in all_devices if d.available]
@property
def enabled(self) -> bool:
"""Whether the scheduler is enabled"""
return self._enabled
@property
def device_count(self) -> int:
"""Number of devices"""
return len(self._devices)
def acquire(self) -> Optional[int]:
"""
Get the next available GPU device (round-robin)
Returns:
GPU device index, or None if the scheduler is disabled or has no devices
"""
if not self._enabled or not self._devices:
return None
with self._lock:
device = self._devices[self._next_index]
self._next_index = (self._next_index + 1) % len(self._devices)
logger.debug(f"Acquired GPU device: {device.index}")
return device.index
def release(self, device_index: Optional[int]) -> None:
"""
Release a GPU device
The current implementation is stateless round-robin, so this method only logs.
Args:
device_index: Device index
"""
if device_index is not None:
logger.debug(f"Released GPU device: {device_index}")
def get_status(self) -> dict:
"""
Get scheduler status
Returns:
Status dict
"""
return {
'enabled': self._enabled,
'device_count': len(self._devices),
'devices': [
{'index': d.index, 'name': d.name, 'available': d.available}
for d in self._devices
],
'hw_accel': self._config.hw_accel,
}
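The `acquire` method above is a plain round-robin cursor guarded by a lock. That pattern can be sketched standalone (the class name `RoundRobin` is illustrative, not part of this codebase):

```python
import threading

class RoundRobin:
    """Thread-safe round-robin selection over a fixed list of device indices."""
    def __init__(self, devices):
        self._devices = list(devices)
        self._next = 0
        self._lock = threading.Lock()

    def acquire(self):
        # Mirror GPUScheduler.acquire: None when no devices are available
        if not self._devices:
            return None
        with self._lock:
            device = self._devices[self._next]
            self._next = (self._next + 1) % len(self._devices)
            return device
```

Because the lock only protects the cursor update, `acquire` never blocks on busy devices; like the real scheduler, it distributes work without tracking per-device load.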

services/lease_service.py Normal file

@@ -0,0 +1,121 @@
# -*- coding: utf-8 -*-
"""
Lease renewal service
A background thread periodically renews the lease for a running task.
"""
import logging
import threading
import time
from typing import TYPE_CHECKING, Any, Optional
if TYPE_CHECKING:
from services.api_client import APIClientV2
from util.tracing import TaskTraceContext
from util.tracing import bind_trace_context, start_span
logger = logging.getLogger(__name__)
class LeaseService:
"""
Lease renewal service
Periodically calls the API from a background thread to extend the task lease,
preventing long-running tasks from being reclaimed when the lease expires.
"""
def __init__(
self,
api_client: 'APIClientV2',
task_id: str,
interval: int = 60,
extension: int = 300,
parent_otel_context: Any = None,
task_trace_context: Optional['TaskTraceContext'] = None,
):
"""
Initialize the lease service
Args:
api_client: API client
task_id: Task ID
interval: Renewal interval in seconds (default 60)
extension: Extension granted per renewal in seconds (default 300)
parent_otel_context: Parent OpenTelemetry context to bind in the renewal thread
task_trace_context: Task trace context to bind in the renewal thread
"""
self.api_client = api_client
self.task_id = task_id
self.interval = interval
self.extension = extension
self.parent_otel_context = parent_otel_context
self.task_trace_context = task_trace_context
self.running = False
self.thread: Optional[threading.Thread] = None
self._stop_event = threading.Event()
def start(self):
"""Start the lease renewal thread"""
if self.running:
logger.warning(f"[task:{self.task_id}] Lease service already running")
return
self.running = True
self._stop_event.clear()
self.thread = threading.Thread(
target=self._run,
name=f"LeaseService-{self.task_id}",
daemon=True
)
self.thread.start()
logger.debug(f"[task:{self.task_id}] Lease service started (interval={self.interval}s)")
def stop(self):
"""Stop the lease renewal thread"""
if not self.running:
return
self.running = False
self._stop_event.set()
if self.thread and self.thread.is_alive():
self.thread.join(timeout=5)
logger.debug(f"[task:{self.task_id}] Lease service stopped")
def _run(self):
"""Main loop of the renewal thread"""
with bind_trace_context(self.parent_otel_context, self.task_trace_context):
while self.running:
if self._stop_event.wait(timeout=self.interval):
break
if self.running:
self._extend_lease()
def _extend_lease(self):
"""Perform one lease renewal"""
with start_span(
"render.task.lease.extend",
task_id=self.task_id,
attributes={"render.lease.extension_seconds": self.extension},
):
try:
success = self.api_client.extend_lease(self.task_id, self.extension)
if success:
logger.debug(f"[task:{self.task_id}] Lease extended by {self.extension}s")
else:
logger.warning(f"[task:{self.task_id}] Failed to extend lease")
except Exception as e:
logger.warning(f"[task:{self.task_id}] Lease extension error: {e}")
def __enter__(self):
"""Context manager entry"""
self.start()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
"""Context manager exit"""
self.stop()
return False
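Because `LeaseService` implements `__enter__`/`__exit__`, callers can wrap task execution in a `with` block and renewal stops automatically. The timer-loop pattern it uses (an `Event` whose `wait` doubles as both the sleep and the stop signal) can be sketched with a stripped-down stand-in; `PeriodicRenewer` is a hypothetical name, and the real service additionally binds trace context and calls the API client.

```python
import threading

class PeriodicRenewer:
    """Minimal stand-in for LeaseService (illustrative only): calls `renew`
    every `interval` seconds on a daemon thread until stopped."""
    def __init__(self, renew, interval=1.0):
        self._renew = renew
        self._interval = interval
        self._stop = threading.Event()
        self._thread = None

    def __enter__(self):
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()
        return self

    def _run(self):
        # Event.wait returns False on timeout (time to renew) and True when stopped
        while not self._stop.wait(self._interval):
            self._renew()

    def __exit__(self, exc_type, exc_val, exc_tb):
        self._stop.set()
        self._thread.join(timeout=5)
        return False
```

Using `Event.wait(interval)` instead of `time.sleep(interval)` is what lets `stop` interrupt the loop immediately rather than after up to a full interval.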

services/storage.py Normal file

@@ -0,0 +1,296 @@
# -*- coding: utf-8 -*-
"""
Storage service
Provides file upload/download helpers, supporting OSS signed URLs and the HTTP_REPLACE_MAP environment variable.
"""
import os
import logging
import subprocess
from typing import Any, Dict, Optional, Tuple
from urllib.parse import unquote
import requests
logger = logging.getLogger(__name__)
# Mapping from file extension to Content-Type
_CONTENT_TYPE_MAP = {
'.mp4': 'video/mp4',
'.aac': 'audio/aac',
'.ts': 'video/mp2t',
'.m4a': 'audio/mp4',
}
def _get_content_type(file_path: str) -> str:
"""
Get the Content-Type from the file extension
Args:
file_path: File path
Returns:
Content-Type string
"""
ext = os.path.splitext(file_path)[1].lower()
return _CONTENT_TYPE_MAP.get(ext, 'application/octet-stream')
def _apply_http_replace_map(url: str) -> str:
"""
Rewrite a URL using the HTTP_REPLACE_MAP environment variable
Args:
url: Original URL
Returns:
Rewritten URL
"""
replace_map = os.getenv("HTTP_REPLACE_MAP", "")
if not replace_map:
return url
new_url = url
replace_list = [i.split("|", 1) for i in replace_map.split(",") if "|" in i]
for src, dst in replace_list:
new_url = new_url.replace(src, dst)
if new_url != url:
logger.debug(f"HTTP_REPLACE_MAP: {url} -> {new_url}")
return new_url
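`HTTP_REPLACE_MAP` is a comma-separated list of `src|dst` pairs applied in order. Factored out as a pure function (an illustrative helper, not part of the module's API), the rewrite is just:

```python
def apply_replace_map(url: str, replace_map: str) -> str:
    """Apply comma-separated "src|dst" substitution pairs to a URL, in order."""
    for pair in replace_map.split(","):
        if "|" in pair:
            src, dst = pair.split("|", 1)
            url = url.replace(src, dst)
    return url
```

A typical use is swapping a public OSS host for an internal endpoint, e.g. `HTTP_REPLACE_MAP="http://internal-oss|https://cdn.example.com"` (hostnames here are made up for illustration).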
def upload_file(url: str, file_path: str, max_retries: int = 5, timeout: int = 60) -> bool:
"""Backward-compatible wrapper: returns only whether the upload succeeded."""
result, _ = upload_file_with_metrics(
url=url,
file_path=file_path,
max_retries=max_retries,
timeout=timeout,
)
return result
def upload_file_with_metrics(
url: str,
file_path: str,
max_retries: int = 5,
timeout: int = 60
) -> Tuple[bool, Dict[str, Any]]:
"""
Upload a file to OSS via a signed URL
Args:
url: Signed URL
file_path: Local file path
max_retries: Maximum retry count
timeout: Timeout in seconds
Returns:
(success, upload metrics)
"""
metrics: Dict[str, Any] = {
"upload_method": "none",
"file_size_bytes": 0,
"content_type": "",
"http_attempts": 0,
"http_retry_count": 0,
"http_status_code": 0,
"http_replace_applied": False,
"rclone_attempted": False,
"rclone_succeeded": False,
"rclone_fallback_http": False,
"error_type": "",
}
if not os.path.exists(file_path):
logger.error(f"File not found: {file_path}")
metrics["error_type"] = "file_not_found"
return False, metrics
file_size = os.path.getsize(file_path)
metrics["file_size_bytes"] = file_size
logger.info(f"Uploading: {file_path} ({file_size} bytes)")
# Check whether rclone should be used for upload
if os.getenv("UPLOAD_METHOD") == "rclone":
metrics["rclone_attempted"] = True
logger.debug(f"Uploading to: {url}")
result = _upload_with_rclone(url, file_path)
metrics["rclone_succeeded"] = result
if result:
metrics["upload_method"] = "rclone"
return True, metrics
# Fall back to HTTP when rclone fails
metrics["rclone_fallback_http"] = True
# Rewrite the URL via HTTP_REPLACE_MAP
http_url = _apply_http_replace_map(url)
metrics["http_replace_applied"] = http_url != url
content_type = _get_content_type(file_path)
metrics["content_type"] = content_type
metrics["upload_method"] = "rclone_fallback_http" if metrics["rclone_fallback_http"] else "http"
logger.debug(f"Uploading to: {http_url} (Content-Type: {content_type})")
retries = 0
while retries < max_retries:
metrics["http_attempts"] = retries + 1
try:
with open(file_path, 'rb') as f:
with requests.put(
http_url,
data=f,
stream=True,
timeout=timeout,
headers={"Content-Type": content_type}
) as response:
status_code = int(getattr(response, 'status_code', 0) or 0)
metrics["http_status_code"] = status_code
response.raise_for_status()
logger.info(f"Upload succeeded: {file_path}")
metrics["error_type"] = ""
return True, metrics
except requests.exceptions.Timeout:
retries += 1
metrics["http_retry_count"] = retries
metrics["error_type"] = "timeout"
logger.warning(f"Upload timed out. Retrying {retries}/{max_retries}...")
except requests.exceptions.RequestException as e:
retries += 1
metrics["http_retry_count"] = retries
metrics["error_type"] = "request_exception"
response_obj = getattr(e, 'response', None)
status_code = getattr(response_obj, 'status_code', 0) if response_obj is not None else 0
if isinstance(status_code, int) and status_code > 0:
metrics["http_status_code"] = status_code
logger.warning(f"Upload failed ({e}). Retrying {retries}/{max_retries}...")
logger.error(f"Upload failed after {max_retries} retries: {file_path}")
return False, metrics
def _upload_with_rclone(url: str, file_path: str) -> bool:
"""
Upload a file with rclone
Args:
url: Target URL
file_path: Local file path
Returns:
Whether the upload succeeded
"""
replace_map = os.getenv("RCLONE_REPLACE_MAP", "")
if not replace_map:
return False
config_file = os.getenv("RCLONE_CONFIG_FILE", "")
# Rewrite the URL
new_url = url
replace_list = [i.split("|", 1) for i in replace_map.split(",") if "|" in i]
for src, dst in replace_list:
new_url = new_url.replace(src, dst)
new_url = new_url.split("?", 1)[0]  # strip query parameters
new_url = unquote(new_url)  # decode percent-encoded characters (e.g. %2F -> /)
if new_url == url:
return False
if new_url.startswith(("http://", "https://")):
logger.warning("rclone upload skipped: URL still starts with http after replace")
logger.debug(f"rclone upload skipped address: {new_url}")
return False
cmd = [
"rclone",
"copyto",
"--no-check-dest",
"--ignore-existing",
"--multi-thread-chunk-size",
"8M",
"--multi-thread-streams",
"8",
]
if config_file:
cmd.extend(["--config", config_file])
cmd.extend([file_path, new_url])
logger.debug(f"rclone command: {' '.join(cmd)}")
result = subprocess.run(cmd, capture_output=True, text=True)
if result.returncode == 0:
logger.info(f"rclone upload succeeded: {file_path}")
return True
stderr = (result.stderr or '').strip()
stderr = stderr[:500] if stderr else ""
logger.warning(f"rclone upload failed (code={result.returncode}): {file_path} {stderr}")
return False
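The URL rewrite inside `_upload_with_rclone` (apply replace pairs, strip the query string, percent-decode, then reject results that are unchanged or still HTTP) can be isolated as a pure function. This sketch uses the hypothetical name `to_rclone_remote`, with `None` standing in for the function's `return False` paths:

```python
from typing import Optional
from urllib.parse import unquote

def to_rclone_remote(url: str, replace_map: str) -> Optional[str]:
    """Rewrite a signed HTTP URL into an rclone remote path.
    Returns None when no rule applied or the result is still an HTTP URL."""
    new_url = url
    for pair in replace_map.split(","):
        if "|" in pair:
            src, dst = pair.split("|", 1)
            new_url = new_url.replace(src, dst)
    new_url = new_url.split("?", 1)[0]   # signature query params are meaningless to rclone
    new_url = unquote(new_url)           # e.g. %2F -> /
    if new_url == url or new_url.startswith(("http://", "https://")):
        return None
    return new_url
```

The example mapping below (`oss:bucket` as an rclone remote name) is invented for illustration; the actual remotes come from `RCLONE_REPLACE_MAP` and the rclone config file.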
def download_file(
url: str,
file_path: str,
max_retries: int = 5,
timeout: int = 30,
skip_if_exist: bool = False
) -> bool:
"""
Download a file via a signed URL
Args:
url: Signed URL
file_path: Local file path
max_retries: Maximum retry count
timeout: Timeout in seconds
skip_if_exist: Skip the download if the file already exists
Returns:
Whether the download succeeded
"""
# Skip if the file already exists and skipping is enabled
if skip_if_exist and os.path.exists(file_path):
logger.debug(f"File exists, skipping download: {file_path}")
return True
logger.debug(f"Downloading: {url}")
# Ensure the destination directory exists
file_dir = os.path.dirname(file_path)
if file_dir:
os.makedirs(file_dir, exist_ok=True)
# Rewrite the URL via HTTP_REPLACE_MAP
http_url = _apply_http_replace_map(url)
retries = 0
while retries < max_retries:
try:
with requests.get(http_url, timeout=timeout, stream=True) as response:
response.raise_for_status()
with open(file_path, 'wb') as f:
for chunk in response.iter_content(chunk_size=8192):
if chunk:
f.write(chunk)
file_size = os.path.getsize(file_path)
logger.info(f"Download succeeded: {file_path} ({file_size} bytes)")
return True
except requests.exceptions.Timeout:
retries += 1
logger.warning(f"Download timed out. Retrying {retries}/{max_retries}...")
except requests.exceptions.RequestException as e:
retries += 1
logger.warning(f"Download failed ({e}). Retrying {retries}/{max_retries}...")
logger.error(f"Download failed after {max_retries} retries")
logger.debug(f"Download failed source address: {url}")
return False

services/task_executor.py Normal file

@@ -0,0 +1,289 @@
# -*- coding: utf-8 -*-
"""
Task executor
Manages concurrent task execution, coordinating handlers, the lease service, and related components.
"""
import logging
import threading
from concurrent.futures import ThreadPoolExecutor, Future
from typing import Dict, Optional, TYPE_CHECKING
from domain.task import Task, TaskType
# Task types that require GPU acceleration
GPU_REQUIRED_TASK_TYPES = {
TaskType.RENDER_SEGMENT_TS,
TaskType.COMPOSE_TRANSITION,
}
from domain.config import WorkerConfig
from core.handler import TaskHandler
from services.lease_service import LeaseService
from services.gpu_scheduler import GPUScheduler
from util.tracing import (
capture_otel_context,
get_current_task_context,
mark_span_error,
start_span,
task_trace_scope,
)
if TYPE_CHECKING:
from services.api_client import APIClientV2
logger = logging.getLogger(__name__)
class TaskExecutor:
"""
Task executor
Responsible for scheduling and executing tasks concurrently, including:
- Registering and managing task handlers
- Tracking task execution state
- Coordinating lease renewal
- Reporting execution results
"""
def __init__(self, config: WorkerConfig, api_client: 'APIClientV2'):
"""
Initialize the task executor
Args:
config: Worker configuration
api_client: API client
"""
self.config = config
self.api_client = api_client
# Task handler registry
self.handlers: Dict[TaskType, TaskHandler] = {}
# Tracking of in-flight tasks
self.current_tasks: Dict[str, Task] = {}
self.current_futures: Dict[str, Future] = {}
# Thread pool
self.executor = ThreadPoolExecutor(
max_workers=config.max_concurrency,
thread_name_prefix="TaskWorker"
)
# Lock for thread safety
self.lock = threading.Lock()
# GPU scheduler (when hardware acceleration is enabled)
self.gpu_scheduler = GPUScheduler(config)
if self.gpu_scheduler.enabled:
logger.info(f"GPU scheduler enabled with {self.gpu_scheduler.device_count} device(s)")
# Register handlers
self._register_handlers()
def _register_handlers(self):
"""Register all task handlers"""
# Import lazily to avoid circular dependencies
from handlers.render_video import RenderSegmentTsHandler
from handlers.compose_transition import ComposeTransitionHandler
from handlers.prepare_audio import PrepareJobAudioHandler
from handlers.finalize_mp4 import FinalizeMp4Handler
handlers = [
RenderSegmentTsHandler(self.config, self.api_client),
ComposeTransitionHandler(self.config, self.api_client),
PrepareJobAudioHandler(self.config, self.api_client),
FinalizeMp4Handler(self.config, self.api_client),
]
for handler in handlers:
task_type = handler.get_supported_type()
self.handlers[task_type] = handler
logger.debug(f"Registered handler for {task_type.value}")
def get_current_task_ids(self) -> list:
"""
Get the IDs of currently executing tasks
Returns:
List of task IDs
"""
with self.lock:
return list(self.current_tasks.keys())
def get_current_task_count(self) -> int:
"""
Get the number of currently executing tasks
Returns:
Task count
"""
with self.lock:
return len(self.current_tasks)
def can_accept_task(self) -> bool:
"""
Check whether a new task can be accepted
Returns:
Whether a new task can be accepted
"""
return self.get_current_task_count() < self.config.max_concurrency
def submit_task(self, task: Task) -> bool:
"""
Submit a task to the thread pool
Args:
task: Task entity
Returns:
Whether submission succeeded
"""
with self.lock:
# Check whether the task is already running
if task.task_id in self.current_tasks:
logger.warning(f"[task:{task.task_id}] Task already running, skipping")
return False
# Check the concurrency limit
if len(self.current_tasks) >= self.config.max_concurrency:
logger.info(
f"[task:{task.task_id}] Max concurrency reached "
f"({self.config.max_concurrency}), rejecting task"
)
return False
# Check that a handler is registered for this task type
if task.task_type not in self.handlers:
logger.error(f"[task:{task.task_id}] No handler for type: {task.task_type.value}")
return False
# Record the task
self.current_tasks[task.task_id] = task
# Submit to the thread pool
future = self.executor.submit(self._process_task, task)
self.current_futures[task.task_id] = future
logger.info(f"[task:{task.task_id}] Submitted ({task.task_type.value})")
return True
def _process_task(self, task: Task):
"""
Process a single task (runs in the thread pool)
Args:
task: Task entity
"""
task_id = task.task_id
handler = self.handlers.get(task.task_type)
device_index = None
lease_service = None
with task_trace_scope(task, span_name="render.task.execute") as task_span:
logger.info(f"[task:{task_id}] Starting {task.task_type.value}")
lease_service = LeaseService(
self.api_client,
task_id,
interval=self.config.lease_extension_threshold,
extension=self.config.lease_extension_duration,
parent_otel_context=capture_otel_context(),
task_trace_context=get_current_task_context(),
)
with start_span("render.task.lease.start"):
lease_service.start()
needs_gpu = task.task_type in GPU_REQUIRED_TASK_TYPES
if needs_gpu and self.gpu_scheduler.enabled:
with start_span("render.task.gpu.acquire"):
device_index = self.gpu_scheduler.acquire()
if device_index is not None:
logger.info(f"[task:{task_id}] Assigned to GPU device {device_index}")
try:
with start_span("render.task.report.start"):
self.api_client.report_start(task_id)
if not handler:
raise ValueError(f"No handler for task type: {task.task_type}")
if device_index is not None:
handler.set_gpu_device(device_index)
with start_span("render.task.handler.before"):
handler.before_handle(task)
with start_span("render.task.handler.execute"):
result = handler.handle(task)
with start_span("render.task.handler.after"):
handler.after_handle(task, result)
if result.success:
with start_span("render.task.report.success"):
self.api_client.report_success(task_id, result.data)
if task_span is not None:
task_span.set_attribute("render.task.result", "success")
logger.info(f"[task:{task_id}] Completed successfully")
else:
error_code = result.error_code.value if result.error_code else 'E_UNKNOWN'
with start_span("render.task.report.fail"):
self.api_client.report_fail(task_id, error_code, result.error_message or '')
mark_span_error(task_span, result.error_message or "task failed", error_code)
logger.error(f"[task:{task_id}] Failed: {result.error_message}")
except Exception as e:
mark_span_error(task_span, str(e), "E_UNKNOWN")
logger.error(f"[task:{task_id}] Exception: {e}", exc_info=True)
with start_span("render.task.report.exception"):
self.api_client.report_fail(task_id, 'E_UNKNOWN', str(e))
finally:
if handler:
handler.clear_gpu_device()
if device_index is not None:
with start_span("render.task.gpu.release"):
self.gpu_scheduler.release(device_index)
if lease_service is not None:
with start_span("render.task.lease.stop"):
lease_service.stop()
with self.lock:
self.current_tasks.pop(task_id, None)
self.current_futures.pop(task_id, None)
def shutdown(self, wait: bool = True):
"""
Shut down the executor
Args:
wait: Whether to wait for all tasks to finish
"""
logger.info("Shutting down task executor...")
# Shut down the thread pool
self.executor.shutdown(wait=wait)
# Clear state
with self.lock:
self.current_tasks.clear()
self.current_futures.clear()
logger.info("Task executor shutdown complete")
def get_handler(self, task_type: TaskType) -> Optional[TaskHandler]:
"""
Get the handler for the given task type
Args:
task_type: Task type
Returns:
Handler instance, or None if not registered
"""
return self.handlers.get(task_type)
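`submit_task` above performs its admission checks (duplicate task id, concurrency cap, handler availability) atomically under one lock. The first two checks can be sketched with a hypothetical `BoundedTracker` (illustrative only; the real executor also hands the task to a `ThreadPoolExecutor`):

```python
import threading

class BoundedTracker:
    """Sketch of the in-flight bookkeeping in submit_task: rejects
    duplicates and enforces a concurrency cap under a single lock."""
    def __init__(self, max_concurrency: int):
        self._max = max_concurrency
        self._tasks = set()
        self._lock = threading.Lock()

    def try_add(self, task_id: str) -> bool:
        with self._lock:
            if task_id in self._tasks or len(self._tasks) >= self._max:
                return False
            self._tasks.add(task_id)
            return True

    def done(self, task_id: str) -> None:
        with self._lock:
            self._tasks.discard(task_id)
```

Doing both checks inside one critical section matters: checking the count and then adding under separate lock acquisitions would let two concurrent submissions both pass the cap.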


@@ -1,35 +0,0 @@
import os
from constant import SOFTWARE_VERSION
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter as OTLPSpanHttpExporter
from opentelemetry.sdk.resources import DEPLOYMENT_ENVIRONMENT, HOST_NAME, Resource, SERVICE_NAME, SERVICE_VERSION
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, SimpleSpanProcessor
def get_tracer(name):
return trace.get_tracer(name)
# Initialize OpenTelemetry
def init_opentelemetry(batch=True):
# Set the service name and host name
resource = Resource(attributes={
SERVICE_NAME: "RENDER_WORKER",
SERVICE_VERSION: SOFTWARE_VERSION,
DEPLOYMENT_ENVIRONMENT: "Python",
HOST_NAME: os.getenv("ACCESS_KEY"),
})
# Report over HTTP
if batch:
span_processor = BatchSpanProcessor(OTLPSpanHttpExporter(
endpoint="https://oltp.jerryyan.top/v1/traces",
))
else:
span_processor = SimpleSpanProcessor(OTLPSpanHttpExporter(
endpoint="https://oltp.jerryyan.top/v1/traces",
))
trace_provider = TracerProvider(resource=resource, active_span_processor=span_processor)
trace.set_tracer_provider(trace_provider)

template/.gitignore vendored

@@ -1 +0,0 @@
**/*


@@ -1,126 +0,0 @@
import json
import os
import logging
from telemetry import get_tracer
from util import api, oss
TEMPLATES = {}
logger = logging.getLogger("template")
def check_local_template(local_name):
template_def = TEMPLATES[local_name]
base_dir = template_def.get("local_path")
for video_part in template_def.get("video_parts", []):
source_file = video_part.get("source", "")
if str(source_file).startswith("http"):
# download file
...
elif str(source_file).startswith("PLACEHOLDER_"):
continue
else:
if not os.path.isabs(source_file):
source_file = os.path.join(base_dir, source_file)
if not os.path.exists(source_file):
logger.error(f"{source_file} not found, please check the template definition")
raise Exception(f"{source_file} not found, please check the template definition")
for audio in video_part.get("audios", []):
if not os.path.isabs(audio):
audio = os.path.join(base_dir, audio)
if not os.path.exists(audio):
logger.error(f"{audio} not found, please check the template definition")
raise Exception(f"{audio} not found, please check the template definition")
for lut in video_part.get("luts", []):
if not os.path.isabs(lut):
lut = os.path.join(base_dir, lut)
if not os.path.exists(lut):
logger.error(f"{lut} not found, please check the template definition")
raise Exception(f"{lut} not found, please check the template definition")
for mask in video_part.get("overlays", []):
if not os.path.isabs(mask):
mask = os.path.join(base_dir, mask)
if not os.path.exists(mask):
logger.error(f"{mask} not found, please check the template definition")
raise Exception(f"{mask} not found, please check the template definition")
def load_template(template_name, local_path):
global TEMPLATES
logger.info(f"Loading video template definition: {template_name} ({local_path})")
template_def_file = os.path.join(local_path, "template.json")
if os.path.exists(template_def_file):
TEMPLATES[template_name] = json.load(open(template_def_file, 'rb'))
TEMPLATES[template_name]["local_path"] = local_path
try:
check_local_template(template_name)
logger.info(f"Finished loading template {template_name}")
except Exception as e:
logger.error(f"Template definition file {template_def_file} is invalid; retrying template download", exc_info=e)
download_template(template_name)
def load_local_template():
for template_name in os.listdir(os.getenv("TEMPLATE_DIR")):
if template_name.startswith("_"):
continue
if template_name.startswith("."):
continue
target_path = os.path.join(os.getenv("TEMPLATE_DIR"), template_name)
if os.path.isdir(target_path):
load_template(template_name, target_path)
def get_template_def(template_id):
if template_id not in TEMPLATES:
download_template(template_id)
return TEMPLATES.get(template_id)
def download_template(template_id):
tracer = get_tracer(__name__)
with tracer.start_as_current_span("download_template"):
template_info = api.get_template_info(template_id)
if not os.path.isdir(template_info['local_path']):
os.makedirs(template_info['local_path'])
# download template assets
overall_template = template_info['overall_template']
video_parts = template_info['video_parts']
def _download_assets(_template):
if 'source' in _template:
if str(_template['source']).startswith("http"):
_, _fn = os.path.split(_template['source'])
new_fp = os.path.join(template_info['local_path'], _fn)
oss.download_from_oss(_template['source'], new_fp)
if _fn.endswith(".mp4"):
from util.ffmpeg import re_encode_and_annexb
new_fp = re_encode_and_annexb(new_fp)
_template['source'] = os.path.relpath(new_fp, template_info['local_path'])
if 'overlays' in _template:
for i in range(len(_template['overlays'])):
overlay = _template['overlays'][i]
if str(overlay).startswith("http"):
_, _fn = os.path.split(overlay)
oss.download_from_oss(overlay, os.path.join(template_info['local_path'], _fn))
_template['overlays'][i] = _fn
if 'luts' in _template:
for i in range(len(_template['luts'])):
lut = _template['luts'][i]
if str(lut).startswith("http"):
_, _fn = os.path.split(lut)
oss.download_from_oss(lut, os.path.join(template_info['local_path'], _fn))
_template['luts'][i] = _fn
if 'audios' in _template:
for i in range(len(_template['audios'])):
if str(_template['audios'][i]).startswith("http"):
_, _fn = os.path.split(_template['audios'][i])
oss.download_from_oss(_template['audios'][i], os.path.join(template_info['local_path'], _fn))
_template['audios'][i] = _fn
_download_assets(overall_template)
for video_part in video_parts:
_download_assets(video_part)
with open(os.path.join(template_info['local_path'], 'template.json'), 'w', encoding='utf-8') as f:
json.dump(template_info, f)
load_template(template_id, template_info['local_path'])
def analyze_template(template_id):
...


@@ -0,0 +1,238 @@
# -*- coding: utf-8 -*-
import os
from contextlib import contextmanager
from types import SimpleNamespace
import pytest
from domain.config import WorkerConfig
from domain.result import TaskResult
from domain.task import TaskType
from handlers.base import BaseHandler
class _DummyApiClient:
pass
class _DummyHandler(BaseHandler):
def handle(self, task):
return TaskResult.ok({})
def get_supported_type(self):
return TaskType.RENDER_SEGMENT_TS
def _create_handler(tmp_path):
config = WorkerConfig(
api_endpoint='http://127.0.0.1:18084/api',
access_key='TEST_ACCESS_KEY',
worker_id='test-worker',
temp_dir=str(tmp_path),
cache_enabled=False,
cache_dir=str(tmp_path / 'cache')
)
return _DummyHandler(config, _DummyApiClient())
def test_download_files_parallel_collects_success_and_failure(tmp_path, monkeypatch):
handler = _create_handler(tmp_path)
handler.task_download_concurrency = 3
captured_calls = []
def _fake_download(url, dest, timeout=None, use_cache=True):
captured_calls.append((url, dest, timeout, use_cache))
os.makedirs(os.path.dirname(dest), exist_ok=True)
with open(dest, 'wb') as file_obj:
file_obj.write(b'1')
return not url.endswith('/fail')
monkeypatch.setattr(handler, 'download_file', _fake_download)
results = handler.download_files_parallel(
[
{'key': 'first', 'url': 'https://example.com/first', 'dest': str(tmp_path / 'first.bin')},
{'key': 'second', 'url': 'https://example.com/fail', 'dest': str(tmp_path / 'second.bin')},
{'key': 'third', 'url': 'https://example.com/third', 'dest': str(tmp_path / 'third.bin'), 'use_cache': False},
],
timeout=15,
)
assert len(captured_calls) == 3
assert results['first']['success'] is True
assert results['second']['success'] is False
assert results['third']['success'] is True
assert any(call_item[3] is False for call_item in captured_calls)
def test_download_files_parallel_rejects_duplicate_key(tmp_path):
handler = _create_handler(tmp_path)
with pytest.raises(ValueError, match='Duplicate download job key'):
handler.download_files_parallel(
[
{'key': 'dup', 'url': 'https://example.com/1', 'dest': str(tmp_path / '1.bin')},
{'key': 'dup', 'url': 'https://example.com/2', 'dest': str(tmp_path / '2.bin')},
]
)
def test_upload_files_parallel_collects_urls(tmp_path, monkeypatch):
handler = _create_handler(tmp_path)
handler.task_upload_concurrency = 2
def _fake_upload(task_id, file_type, file_path, file_name=None):
if file_type == 'video':
return f'https://cdn.example.com/{task_id}/{file_name or "video.mp4"}'
return None
monkeypatch.setattr(handler, 'upload_file', _fake_upload)
results = handler.upload_files_parallel(
[
{
'key': 'video_output',
'task_id': 'task-1',
'file_type': 'video',
'file_path': str(tmp_path / 'video.mp4'),
'file_name': 'output.mp4',
},
{
'key': 'audio_output',
'task_id': 'task-1',
'file_type': 'audio',
'file_path': str(tmp_path / 'audio.aac'),
},
]
)
assert results['video_output']['success'] is True
assert results['video_output']['url'] == 'https://cdn.example.com/task-1/output.mp4'
assert results['audio_output']['success'] is False
assert results['audio_output']['url'] is None
def test_download_file_sets_lock_wait_ms_span_attribute(tmp_path, monkeypatch):
handler = _create_handler(tmp_path)
destination = tmp_path / "download.bin"
class _FakeSpan:
def __init__(self):
self.attributes = {}
def set_attribute(self, key, value):
self.attributes[key] = value
fake_span = _FakeSpan()
@contextmanager
def _fake_start_span(name, kind=None, attributes=None):
if attributes:
fake_span.attributes.update(attributes)
yield fake_span
def _fake_get_or_download_with_metrics(url, dest, timeout=300, max_retries=5):
os.makedirs(os.path.dirname(dest), exist_ok=True)
with open(dest, 'wb') as file_obj:
file_obj.write(b'abc')
return True, {"lock_wait_ms": 1234, "lock_acquired": True, "cache_path_used": "cache"}
monkeypatch.setattr("handlers.base.start_span", _fake_start_span)
monkeypatch.setattr(
handler.material_cache,
"get_or_download_with_metrics",
_fake_get_or_download_with_metrics
)
assert handler.download_file("https://example.com/file.bin", str(destination), timeout=1, use_cache=True)
assert fake_span.attributes["render.file.lock_wait_ms"] == 1234
assert fake_span.attributes["render.file.lock_acquired"] is True
assert fake_span.attributes["render.file.cache_path_used"] == "cache"
def test_download_file_without_cache_sets_lock_wait_ms_zero(tmp_path, monkeypatch):
handler = _create_handler(tmp_path)
destination = tmp_path / "download-no-cache.bin"
class _FakeSpan:
def __init__(self):
self.attributes = {}
def set_attribute(self, key, value):
self.attributes[key] = value
fake_span = _FakeSpan()
@contextmanager
def _fake_start_span(name, kind=None, attributes=None):
if attributes:
fake_span.attributes.update(attributes)
yield fake_span
def _fake_storage_download(url, dest, timeout=30):
os.makedirs(os.path.dirname(dest), exist_ok=True)
with open(dest, 'wb') as file_obj:
file_obj.write(b'def')
return True
monkeypatch.setattr("handlers.base.start_span", _fake_start_span)
monkeypatch.setattr("handlers.base.storage.download_file", _fake_storage_download)
assert handler.download_file("https://example.com/file.bin", str(destination), timeout=1, use_cache=False)
assert fake_span.attributes["render.file.lock_wait_ms"] == 0
assert fake_span.attributes["render.file.lock_acquired"] is False
assert fake_span.attributes["render.file.cache_path_used"] == "direct"
def test_upload_file_sets_detailed_span_attributes(tmp_path, monkeypatch):
handler = _create_handler(tmp_path)
source_path = tmp_path / "upload.mp4"
source_path.write_bytes(b"abc123")
fake_span_attributes = {}
class _FakeSpan:
def set_attribute(self, key, value):
fake_span_attributes[key] = value
@contextmanager
def _fake_start_span(name, kind=None, attributes=None):
if attributes:
fake_span_attributes.update(attributes)
yield _FakeSpan()
handler.api_client = SimpleNamespace(
get_upload_url=lambda *args, **kwargs: {
"uploadUrl": "https://example.com/upload",
"accessUrl": "https://cdn.example.com/output.mp4",
}
)
monkeypatch.setattr("handlers.base.start_span", _fake_start_span)
monkeypatch.setattr(
"handlers.base.storage.upload_file_with_metrics",
lambda *args, **kwargs: (
True,
{
"upload_method": "http",
"http_attempts": 2,
"http_retry_count": 1,
"http_status_code": 200,
"http_replace_applied": True,
"content_type": "video/mp4",
"error_type": "",
"rclone_attempted": False,
"rclone_succeeded": False,
"rclone_fallback_http": False,
},
)
)
monkeypatch.setattr(handler.material_cache, "add_to_cache", lambda *args, **kwargs: True)
access_url = handler.upload_file("task-1", "video", str(source_path), "output.mp4")
assert access_url == "https://cdn.example.com/output.mp4"
assert fake_span_attributes["render.file.upload_success"] is True
assert fake_span_attributes["render.file.upload_method"] == "http"
assert fake_span_attributes["render.file.http_attempts"] == 2
assert fake_span_attributes["render.file.http_retry_count"] == 1
assert fake_span_attributes["render.file.http_status_code"] == 200
assert fake_span_attributes["render.file.http_replace_applied"] is True
assert fake_span_attributes["render.file.content_type"] == "video/mp4"
assert fake_span_attributes["render.file.cache_write_back"] == "success"

View File

@@ -0,0 +1,101 @@
# -*- coding: utf-8 -*-
import os
from services.cache import MaterialCache, _extract_cache_key
def test_cache_lock_acquire_release(tmp_path):
cache = MaterialCache(cache_dir=str(tmp_path), enabled=True, max_size_gb=0)
cache_key = _extract_cache_key("https://example.com/path/file.mp4?token=abc")
lock_path = cache._acquire_lock(cache_key)
assert lock_path
assert os.path.exists(lock_path)
cache._release_lock(lock_path)
assert not os.path.exists(lock_path)
def test_get_or_download_cache_hit_does_not_wait_lock(tmp_path, monkeypatch):
cache = MaterialCache(cache_dir=str(tmp_path), enabled=True, max_size_gb=0)
url = "https://example.com/path/video.mp4?token=abc"
cache_path = cache.get_cache_path(url)
with open(cache_path, 'wb') as file_obj:
file_obj.write(b'cached-data')
destination = tmp_path / "result.bin"
def _unexpected_acquire(*args, **kwargs):
raise AssertionError("cache hit path should not acquire lock")
monkeypatch.setattr(cache, "_acquire_lock", _unexpected_acquire)
assert cache.get_or_download(url, str(destination), timeout=1) is True
assert destination.read_bytes() == b'cached-data'
def test_get_or_download_lock_timeout_can_still_use_ready_cache(tmp_path, monkeypatch):
cache = MaterialCache(cache_dir=str(tmp_path), enabled=True, max_size_gb=0)
url = "https://example.com/path/audio.aac?token=abc"
cache_path = cache.get_cache_path(url)
with open(cache_path, 'wb') as file_obj:
file_obj.write(b'audio-cache')
destination = tmp_path / "audio.aac"
download_called = {"value": False}
monkeypatch.setattr(cache, "_acquire_lock", lambda *args, **kwargs: None)
def _fake_download(*args, **kwargs):
download_called["value"] = True
return False
monkeypatch.setattr("services.cache.storage.download_file", _fake_download)
assert cache.get_or_download(url, str(destination), timeout=1) is True
assert destination.read_bytes() == b'audio-cache'
assert download_called["value"] is False
def test_get_or_download_uses_short_lock_timeout(tmp_path, monkeypatch):
cache = MaterialCache(cache_dir=str(tmp_path), enabled=True, max_size_gb=0)
url = "https://example.com/path/segment.ts?token=abc"
destination = tmp_path / "segment.ts"
captured = {"timeout_sec": None}
def _fake_acquire(cache_key, timeout_sec=None):
captured["timeout_sec"] = timeout_sec
return None
monkeypatch.setattr(cache, "_acquire_lock", _fake_acquire)
monkeypatch.setattr("services.cache.storage.download_file", lambda *args, **kwargs: True)
assert cache.get_or_download(url, str(destination), timeout=1) is True
assert captured["timeout_sec"] == cache.DOWNLOAD_LOCK_TIMEOUT_SEC
def test_get_or_download_with_metrics_cache_hit_wait_zero(tmp_path):
cache = MaterialCache(cache_dir=str(tmp_path), enabled=True, max_size_gb=0)
url = "https://example.com/path/hit.mp4?token=abc"
cache_path = cache.get_cache_path(url)
with open(cache_path, 'wb') as file_obj:
file_obj.write(b'hit-data')
destination = tmp_path / "hit.mp4"
success, metrics = cache.get_or_download_with_metrics(url, str(destination), timeout=1)
assert success is True
assert metrics["lock_wait_ms"] == 0
assert metrics["lock_acquired"] is False
assert metrics["cache_path_used"] == "cache"
def test_get_or_download_with_metrics_reports_lock_wait_ms(tmp_path, monkeypatch):
cache = MaterialCache(cache_dir=str(tmp_path), enabled=True, max_size_gb=0)
url = "https://example.com/path/miss.mp4?token=abc"
destination = tmp_path / "miss.mp4"
monkeypatch.setattr(cache, "_acquire_lock_with_wait", lambda *args, **kwargs: (None, 4321))
monkeypatch.setattr("services.cache.storage.download_file", lambda *args, **kwargs: True)
success, metrics = cache.get_or_download_with_metrics(url, str(destination), timeout=1)
assert success is True
assert metrics["lock_wait_ms"] == 4321
assert metrics["lock_acquired"] is False
assert metrics["cache_path_used"] == "direct"
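The lock helpers exercised above (`_acquire_lock` / `_release_lock`) are not shown in this diff; the following is a minimal sketch of the exclusive-create lock pattern the tests imply, assuming a poll-until-timeout design. The function names and polling interval are illustrative, not the real `MaterialCache` implementation.

```python
import os
import time

def acquire_lock(lock_path, timeout_sec=5.0, poll_sec=0.05):
    """Try to create the lock file exclusively; return its path on success, None on timeout."""
    deadline = time.monotonic() + timeout_sec
    while time.monotonic() < deadline:
        try:
            # O_CREAT | O_EXCL fails atomically if another process holds the lock file
            fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return lock_path
        except FileExistsError:
            time.sleep(poll_sec)
    return None

def release_lock(lock_path):
    """Remove the lock file; tolerate a missing file so release is idempotent."""
    try:
        os.remove(lock_path)
    except FileNotFoundError:
        pass
```

This matches the acquire/release semantics asserted in `test_cache_lock_acquire_release`: the lock file exists while held and is gone after release.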

View File

@@ -0,0 +1,235 @@
# -*- coding: utf-8 -*-
import pytest
from domain.config import WorkerConfig
from domain.task import Effect, OutputSpec, RenderSpec
from handlers.render_video import RenderSegmentTsHandler
class _DummyApiClient:
pass
def _create_handler(tmp_path):
config = WorkerConfig(
api_endpoint='http://127.0.0.1:18084/api',
access_key='TEST_ACCESS_KEY',
worker_id='test-worker',
temp_dir=str(tmp_path),
cache_enabled=False,
cache_dir=str(tmp_path / 'cache')
)
return RenderSegmentTsHandler(config, _DummyApiClient())
def test_get_zoom_params_with_valid_values():
effect = Effect.from_string('zoom:1.5,1.35,2')
assert effect is not None
start_sec, scale_factor, duration_sec = effect.get_zoom_params()
assert start_sec == pytest.approx(1.5)
assert scale_factor == pytest.approx(1.35)
assert duration_sec == pytest.approx(2.0)
@pytest.mark.parametrize(
'effect_str',
[
'zoom:-1,0.9,-2',
'zoom:nan,inf,0',
'zoom:bad,value,data',
'zoom:,,',
],
)
def test_get_zoom_params_invalid_values_fallback_to_default(effect_str):
effect = Effect.from_string(effect_str)
assert effect is not None
assert effect.get_zoom_params() == (0.0, 1.2, 1.0)
def test_build_command_with_zoom_uses_filter_complex(tmp_path):
handler = _create_handler(tmp_path)
render_spec = RenderSpec(effects='zoom:1.5,1.4,2')
output_spec = OutputSpec(width=1080, height=1920, fps=30)
command = handler._build_command(
input_file='input.mp4',
output_file='output.mp4',
render_spec=render_spec,
output_spec=output_spec,
duration_ms=6000,
)
assert '-filter_complex' in command
assert '-vf' not in command
def test_build_video_filters_zoom_and_camera_shot_stack_in_order(tmp_path):
handler = _create_handler(tmp_path)
render_spec = RenderSpec(effects='cameraShot:3,1|zoom:1,1.2,2')
output_spec = OutputSpec(width=1080, height=1920, fps=30)
filters = handler._build_video_filters(
render_spec=render_spec,
output_spec=output_spec,
duration_ms=8000,
source_duration_sec=10.0,
)
camera_shot_marker = 'concat=n=2:v=1:a=0'
zoom_marker = "overlay=0:0:enable='between(t,1.0,3.0)'"
assert camera_shot_marker in filters
assert zoom_marker in filters
assert filters.index(camera_shot_marker) < filters.index(zoom_marker)
# ---------- ospeed tests ----------
def test_get_ospeed_params_with_valid_values():
effect = Effect.from_string('ospeed:2')
assert effect is not None
assert effect.get_ospeed_params() == pytest.approx(2.0)
effect2 = Effect.from_string('ospeed:0.5')
assert effect2 is not None
assert effect2.get_ospeed_params() == pytest.approx(0.5)
@pytest.mark.parametrize(
'effect_str',
[
'ospeed:0',
'ospeed:-1',
'ospeed:nan',
'ospeed:inf',
'ospeed:abc',
'ospeed:',
],
)
def test_get_ospeed_params_invalid_values_fallback(effect_str):
effect = Effect.from_string(effect_str)
assert effect is not None
assert effect.get_ospeed_params() == 1.0
def test_get_ospeed_params_no_params():
effect = Effect.from_string('ospeed')
assert effect is not None
assert effect.get_ospeed_params() == 1.0
def test_ospeed_does_not_trigger_filter_complex(tmp_path):
handler = _create_handler(tmp_path)
render_spec = RenderSpec(effects='ospeed:2')
output_spec = OutputSpec(width=1080, height=1920, fps=30)
command = handler._build_command(
input_file='input.mp4',
output_file='output.mp4',
render_spec=render_spec,
output_spec=output_spec,
duration_ms=6000,
)
assert '-vf' in command
assert '-filter_complex' not in command
# Verify the setpts filter
vf_idx = command.index('-vf')
vf_value = command[vf_idx + 1]
assert 'setpts=2.0*(PTS-STARTPTS)' in vf_value
def test_default_speed_always_normalizes_pts(tmp_path):
handler = _create_handler(tmp_path)
render_spec = RenderSpec()
output_spec = OutputSpec(width=1080, height=1920, fps=30)
filters = handler._build_video_filters(
render_spec=render_spec,
output_spec=output_spec,
duration_ms=6000,
source_duration_sec=10.0,
)
assert 'setpts=PTS-STARTPTS' in filters
def test_ospeed_combined_with_speed(tmp_path):
handler = _create_handler(tmp_path)
# speed=2 → 1/2=0.5, ospeed=3 → 0.5*3=1.5
render_spec = RenderSpec(speed='2', effects='ospeed:3')
output_spec = OutputSpec(width=1080, height=1920, fps=30)
filters = handler._build_video_filters(
render_spec=render_spec,
output_spec=output_spec,
duration_ms=6000,
source_duration_sec=10.0,
)
assert 'setpts=1.5*(PTS-STARTPTS)' in filters
def test_ospeed_with_complex_effects(tmp_path):
handler = _create_handler(tmp_path)
render_spec = RenderSpec(effects='ospeed:2|zoom:1,1.2,2')
output_spec = OutputSpec(width=1080, height=1920, fps=30)
filters = handler._build_video_filters(
render_spec=render_spec,
output_spec=output_spec,
duration_ms=8000,
source_duration_sec=10.0,
)
# zoom triggers filter_complex
assert '[v_base]' in filters
# setpts stays in the base chain
assert 'setpts=2.0*(PTS-STARTPTS)' in filters
# zoom is applied normally
assert "overlay=0:0:enable='between(t,1.0,3.0)'" in filters
def test_only_first_ospeed_is_used(tmp_path):
handler = _create_handler(tmp_path)
render_spec = RenderSpec(effects='ospeed:2|ospeed:5')
output_spec = OutputSpec(width=1080, height=1920, fps=30)
filters = handler._build_video_filters(
render_spec=render_spec,
output_spec=output_spec,
duration_ms=6000,
source_duration_sec=10.0,
)
assert 'setpts=2.0*(PTS-STARTPTS)' in filters
assert 'setpts=5.0*(PTS-STARTPTS)' not in filters
def test_ospeed_affects_tpad_calculation(tmp_path):
handler = _create_handler(tmp_path)
# ospeed:2 stretches a 10s source to 20s of effective duration; a 6s target needs no tpad
render_spec = RenderSpec(effects='ospeed:2')
output_spec = OutputSpec(width=1080, height=1920, fps=30)
filters = handler._build_video_filters(
render_spec=render_spec,
output_spec=output_spec,
duration_ms=6000,
source_duration_sec=10.0,
)
assert 'tpad' not in filters
# For contrast: without ospeed, a 10s source with a 15s target needs 5s of tpad
render_spec_no_ospeed = RenderSpec()
filters_no_ospeed = handler._build_video_filters(
render_spec=render_spec_no_ospeed,
output_spec=output_spec,
duration_ms=15000,
source_duration_sec=10.0,
)
assert 'tpad' in filters_no_ospeed
assert 'stop_duration=5.0' in filters_no_ospeed
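The expectations above imply a single PTS multiplier derived from both parameters: `speed` divides the factor (faster playback) while `ospeed` multiplies it (stretched timestamps), and any shortfall against the target duration is padded with `tpad`. A small sketch of that arithmetic, inferred from the assertions; the helper names are hypothetical, not the handler's actual API.

```python
def pts_factor(speed=None, ospeed=None):
    """Combined setpts multiplier: speed N compresses (1/N), ospeed M stretches (×M)."""
    factor = 1.0
    if speed:
        factor /= float(speed)
    if ospeed:
        factor *= float(ospeed)
    return factor

def setpts_expr(speed=None, ospeed=None):
    """Build the normalized setpts expression the tests assert on."""
    factor = pts_factor(speed, ospeed)
    if factor == 1.0:
        return 'setpts=PTS-STARTPTS'
    return f'setpts={factor}*(PTS-STARTPTS)'

def tpad_sec(source_sec, target_sec, factor):
    """Padding needed after speed adjustment; zero when the stretched source covers the target."""
    return max(0.0, target_sec - source_sec * factor)
```

With `speed='2'` and `ospeed=3` the factor is `(1/2) * 3 = 1.5`, reproducing the `setpts=1.5*(PTS-STARTPTS)` expectation, and `ospeed:2` stretches a 10 s source to 20 s so a 6 s target needs no tpad.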

View File

@@ -0,0 +1,81 @@
# -*- coding: utf-8 -*-
import requests
from services import storage
class _FakeResponse:
def __init__(self, status_code=200):
self.status_code = status_code
def raise_for_status(self):
return None
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
return False
def test_upload_file_with_metrics_file_not_found(tmp_path):
missing_path = tmp_path / "missing.mp4"
success, metrics = storage.upload_file_with_metrics(
"https://example.com/upload",
str(missing_path),
max_retries=1,
timeout=1,
)
assert success is False
assert metrics["error_type"] == "file_not_found"
assert metrics["upload_method"] == "none"
def test_upload_file_with_metrics_http_success(tmp_path, monkeypatch):
source_path = tmp_path / "video.mp4"
source_path.write_bytes(b"content")
monkeypatch.setattr("services.storage.requests.put", lambda *args, **kwargs: _FakeResponse(200))
success, metrics = storage.upload_file_with_metrics(
"https://example.com/upload",
str(source_path),
max_retries=3,
timeout=1,
)
assert success is True
assert metrics["upload_method"] == "http"
assert metrics["http_attempts"] == 1
assert metrics["http_retry_count"] == 0
assert metrics["http_status_code"] == 200
assert metrics["content_type"] == "video/mp4"
def test_upload_file_with_metrics_retry_then_success(tmp_path, monkeypatch):
source_path = tmp_path / "audio.aac"
source_path.write_bytes(b"audio-bytes")
call_counter = {"count": 0}
def _fake_put(*args, **kwargs):
call_counter["count"] += 1
if call_counter["count"] == 1:
raise requests.exceptions.Timeout()
return _FakeResponse(200)
monkeypatch.setattr("services.storage.requests.put", _fake_put)
success, metrics = storage.upload_file_with_metrics(
"https://example.com/upload",
str(source_path),
max_retries=3,
timeout=1,
)
assert success is True
assert metrics["http_attempts"] == 2
assert metrics["http_retry_count"] == 1
assert metrics["http_status_code"] == 200
assert metrics["error_type"] == ""
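The retry behavior asserted above can be sketched as a generic attempt loop that records the same metric keys. This is an illustrative reconstruction under assumed semantics, not the actual `storage.upload_file_with_metrics` code.

```python
import time

def put_with_retries(do_put, max_retries=3, backoff_sec=0.0):
    """Run do_put() up to max_retries times, recording attempt metrics as the tests expect."""
    metrics = {'http_attempts': 0, 'http_retry_count': 0,
               'http_status_code': None, 'error_type': ''}
    for attempt in range(1, max_retries + 1):
        metrics['http_attempts'] = attempt
        metrics['http_retry_count'] = attempt - 1
        try:
            status = do_put()
        except Exception as exc:
            # Record the failure class; retry unless this was the final attempt
            metrics['error_type'] = type(exc).__name__
            if attempt == max_retries:
                return False, metrics
            time.sleep(backoff_sec)
            continue
        metrics['http_status_code'] = status
        metrics['error_type'] = ''  # a later success clears earlier transient errors
        return True, metrics
    return False, metrics
```

One timeout followed by a success yields `http_attempts == 2` and `http_retry_count == 1`, matching `test_upload_file_with_metrics_retry_then_success`.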

View File

@@ -0,0 +1,51 @@
# -*- coding: utf-8 -*-
import importlib
from types import SimpleNamespace
import util.tracing as tracing_module
def _create_task_stub():
task_type = SimpleNamespace(value="RENDER_SEGMENT_TS")
return SimpleNamespace(
task_id="task-1001",
task_type=task_type,
get_job_id=lambda: "job-2002",
get_segment_id=lambda: "seg-3003",
)
def test_task_trace_scope_sets_and_resets_context(monkeypatch):
monkeypatch.setenv("OTEL_ENABLED", "false")
tracing = importlib.reload(tracing_module)
assert tracing.initialize_tracing("worker-1", "2.0.0") is False
assert tracing.get_current_task_context() is None
with tracing.task_trace_scope(_create_task_stub()) as span:
assert span is None
context = tracing.get_current_task_context()
assert context is not None
assert context.task_id == "task-1001"
assert context.task_type == "RENDER_SEGMENT_TS"
assert context.job_id == "job-2002"
assert context.segment_id == "seg-3003"
with tracing.start_span("render.task.sample.step") as child_span:
assert child_span is None
assert tracing.get_current_task_context() is None
def test_bind_trace_context_restores_previous_context(monkeypatch):
monkeypatch.setenv("OTEL_ENABLED", "false")
tracing = importlib.reload(tracing_module)
tracing.initialize_tracing("worker-1", "2.0.0")
context = tracing.TaskTraceContext(task_id="task-1", task_type="FINALIZE_MP4")
assert tracing.get_current_task_context() is None
with tracing.bind_trace_context(None, context):
assert tracing.get_current_task_context() == context
assert tracing.get_current_task_context() is None
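The bind/restore behavior these tests check maps naturally onto `contextvars` tokens; below is a minimal sketch, assuming the real `util.tracing` uses a similar mechanism (names simplified, no OpenTelemetry dependency).

```python
import contextvars
from contextlib import contextmanager

# Holds the current task context; None outside any task scope
_task_context = contextvars.ContextVar('task_context', default=None)

def get_current_task_context():
    return _task_context.get()

@contextmanager
def bind_trace_context(context):
    """Set the task context for the duration of the with-block, then restore the previous value."""
    token = _task_context.set(context)
    try:
        yield context
    finally:
        # reset() restores whatever was bound before, even under nesting
        _task_context.reset(token)
```

The token-based `reset` is what makes the "restores previous context" assertion hold even when scopes nest.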

View File

@@ -0,0 +1,25 @@
# -*- coding: utf-8 -*-
from constant import HW_ACCEL_CUDA, HW_ACCEL_NONE, HW_ACCEL_QSV
from handlers.base import get_video_encode_args
def _assert_bframe_disabled(args):
assert '-bf' in args
bf_index = args.index('-bf')
assert args[bf_index + 1] == '0'
def test_get_video_encode_args_disable_b_frames_for_software():
args = get_video_encode_args(HW_ACCEL_NONE)
_assert_bframe_disabled(args)
def test_get_video_encode_args_disable_b_frames_for_qsv():
args = get_video_encode_args(HW_ACCEL_QSV)
_assert_bframe_disabled(args)
def test_get_video_encode_args_disable_b_frames_for_cuda():
args = get_video_encode_args(HW_ACCEL_CUDA)
_assert_bframe_disabled(args)
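A hedged sketch of an argument builder that would satisfy these tests; the encoder names per hardware path are assumptions, and only the unconditional `-bf 0` requirement is taken from the tests themselves.

```python
def build_encode_args(hw_accel='none'):
    """Illustrative encode-arg builder; not the real handlers.base.get_video_encode_args."""
    if hw_accel == 'cuda':
        args = ['-c:v', 'h264_nvenc']
    elif hw_accel == 'qsv':
        args = ['-c:v', 'h264_qsv']
    else:
        args = ['-c:v', 'libx264']
    # Disable B-frames regardless of encoder: B-frame reordering can make
    # PTS/DTS step backwards at TS segment boundaries in HLS output.
    args += ['-bf', '0']
    return args
```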

util/__init__.py Normal file
View File

@@ -0,0 +1,15 @@
# -*- coding: utf-8 -*-
"""
Utility module.

Provides helper functions such as system-information collection.
"""
from util.system import get_sys_info, get_capabilities, get_gpu_info, get_ffmpeg_version
__all__ = [
'get_sys_info',
'get_capabilities',
'get_gpu_info',
'get_ffmpeg_version',
]

View File

@@ -1,256 +0,0 @@
import json
import logging
import os
import threading
import requests
from opentelemetry.trace import Status, StatusCode
import util.system
from telemetry import get_tracer
from util import oss
session = requests.Session()
logger = logging.getLogger(__name__)
def normalize_task(task_info):
...
return task_info
def sync_center():
"""
Fetch the task list from the API.
:return: list of tasks
"""
from template import TEMPLATES, download_template
try:
response = session.post(os.getenv('API_ENDPOINT') + "/sync", json={
'accessKey': os.getenv('ACCESS_KEY'),
'clientStatus': util.system.get_sys_info(),
'templateList': [{'id': t.get('id', ''), 'updateTime': t.get('updateTime', '')} for t in
TEMPLATES.values()]
}, timeout=10)
response.raise_for_status()
except requests.RequestException as e:
logger.error("Request failed: %s", e)
return []
data = response.json()
logger.debug("Task fetch result: %s", data)
if data.get('code', 0) == 200:
templates = data.get('data', {}).get('templates', [])
tasks = data.get('data', {}).get('tasks', [])
else:
tasks = []
templates = []
logger.warning("Failed to fetch tasks")
if os.getenv("REDIRECT_TO_URL"):
for task in tasks:
logger.info("Redirecting task %s to the configured address: %s", task.get("id"), os.getenv("REDIRECT_TO_URL"))
url = f"{os.getenv('REDIRECT_TO_URL')}{task.get('id')}"
threading.Thread(target=requests.post, args=(url,)).start()
return []
for template in templates:
template_id = template.get('id', '')
if template_id:
logger.info("Updating template: %s", template_id)
download_template(template_id)
return tasks
def get_template_info(template_id):
"""
Fetch template info from the API.
:param template_id: template id
:type template_id: str
:rtype: Template
:return: template info
"""
tracer = get_tracer(__name__)
with tracer.start_as_current_span("get_template_info"):
with tracer.start_as_current_span("get_template_info.request") as req_span:
try:
req_span.set_attribute("http.method", "POST")
req_span.set_attribute("http.url", '{0}/template/{1}'.format(os.getenv('API_ENDPOINT'), template_id))
response = session.post('{0}/template/{1}'.format(os.getenv('API_ENDPOINT'), template_id), json={
'accessKey': os.getenv('ACCESS_KEY'),
}, timeout=10)
req_span.set_attribute("http.status_code", response.status_code)
req_span.set_attribute("http.response", response.text)
response.raise_for_status()
except requests.RequestException as e:
req_span.set_attribute("api.error", str(e))
logger.error("Request failed: %s", e)
return None
data = response.json()
logger.debug("Template info result: %s", data)
remote_template_info = data.get('data', {})
template = {
'id': template_id,
'updateTime': remote_template_info.get('updateTime', template_id),
'scenic_name': remote_template_info.get('scenicName', '景区'),
'name': remote_template_info.get('name', '模版'),
'video_size': remote_template_info.get('resolution', '1920x1080'),
'frame_rate': 25,
'overall_duration': 30,
'video_parts': [
]
}
def _template_normalizer(template_info):
_template = {}
_placeholder_type = template_info.get('isPlaceholder', -1)
if _placeholder_type == 0:
# Fixed video
_template['source'] = template_info.get('sourceUrl', '')
elif _placeholder_type == 1:
# Placeholder
_template['source'] = "PLACEHOLDER_" + template_info.get('sourceUrl', '')
_template['mute'] = template_info.get('mute', True)
_template['crop_mode'] = template_info.get('cropEnable', None)
else:
_template['source'] = None
_overlays = template_info.get('overlays', '')
if _overlays:
_template['overlays'] = _overlays.split(",")
_audios = template_info.get('audios', '')
if _audios:
_template['audios'] = _audios.split(",")
_luts = template_info.get('luts', '')
if _luts:
_template['luts'] = _luts.split(",")
_only_if = template_info.get('onlyIf', '')
if _only_if:
_template['only_if'] = _only_if
_effects = template_info.get('effects', '')
if _effects:
_template['effects'] = _effects.split("|")
return _template
# outer template definition
overall_template = _template_normalizer(remote_template_info)
template['overall_template'] = overall_template
# inner (child) template definitions
inter_template_list = remote_template_info.get('children', [])
for children_template in inter_template_list:
parts = _template_normalizer(children_template)
template['video_parts'].append(parts)
template['local_path'] = os.path.join(os.getenv('TEMPLATE_DIR'), str(template_id))
with get_tracer("api").start_as_current_span("get_template_info.template") as res_span:
res_span.set_attribute("normalized.response", json.dumps(template))
return template
def report_task_success(task_info, **kwargs):
tracer = get_tracer(__name__)
with tracer.start_as_current_span("report_task_success"):
with tracer.start_as_current_span("report_task_success.request") as req_span:
try:
req_span.set_attribute("http.method", "POST")
req_span.set_attribute("http.url",
'{0}/{1}/success'.format(os.getenv('API_ENDPOINT'), task_info.get("id")))
response = session.post('{0}/{1}/success'.format(os.getenv('API_ENDPOINT'), task_info.get("id")), json={
'accessKey': os.getenv('ACCESS_KEY'),
**kwargs
}, timeout=10)
req_span.set_attribute("http.status_code", response.status_code)
req_span.set_attribute("http.response", response.text)
response.raise_for_status()
req_span.set_status(Status(StatusCode.OK))
except requests.RequestException as e:
req_span.set_attribute("api.error", str(e))
logger.error("Request failed: %s", e)
return None
def report_task_start(task_info):
tracer = get_tracer(__name__)
with tracer.start_as_current_span("report_task_start"):
with tracer.start_as_current_span("report_task_start.request") as req_span:
try:
req_span.set_attribute("http.method", "POST")
req_span.set_attribute("http.url",
'{0}/{1}/start'.format(os.getenv('API_ENDPOINT'), task_info.get("id")))
response = session.post('{0}/{1}/start'.format(os.getenv('API_ENDPOINT'), task_info.get("id")), json={
'accessKey': os.getenv('ACCESS_KEY'),
}, timeout=10)
req_span.set_attribute("http.status_code", response.status_code)
req_span.set_attribute("http.response", response.text)
response.raise_for_status()
req_span.set_status(Status(StatusCode.OK))
except requests.RequestException as e:
req_span.set_attribute("api.error", str(e))
logger.error("Request failed: %s", e)
return None
def report_task_failed(task_info, reason=''):
tracer = get_tracer(__name__)
with tracer.start_as_current_span("report_task_failed") as span:
span.set_attribute("task_id", task_info.get("id"))
span.set_attribute("reason", reason)
with tracer.start_as_current_span("report_task_failed.request") as req_span:
try:
req_span.set_attribute("http.method", "POST")
req_span.set_attribute("http.url",
'{0}/{1}/fail'.format(os.getenv('API_ENDPOINT'), task_info.get("id")))
response = session.post('{0}/{1}/fail'.format(os.getenv('API_ENDPOINT'), task_info.get("id")), json={
'accessKey': os.getenv('ACCESS_KEY'),
'reason': reason
}, timeout=10)
req_span.set_attribute("http.status_code", response.status_code)
req_span.set_attribute("http.response", response.text)
response.raise_for_status()
req_span.set_status(Status(StatusCode.OK))
except requests.RequestException as e:
req_span.set_attribute("api.error", str(e))
req_span.set_status(Status(StatusCode.ERROR))
logger.error("Request failed: %s", e)
return None
def upload_task_file(task_info, ffmpeg_task):
tracer = get_tracer(__name__)
with get_tracer("api").start_as_current_span("upload_task_file") as span:
logger.info("Starting file upload: %s", task_info.get("id"))
span.set_attribute("file.id", task_info.get("id"))
with tracer.start_as_current_span("upload_task_file.request_upload_url") as req_span:
try:
req_span.set_attribute("http.method", "POST")
req_span.set_attribute("http.url",
'{0}/{1}/uploadUrl'.format(os.getenv('API_ENDPOINT'), task_info.get("id")))
response = session.post('{0}/{1}/uploadUrl'.format(os.getenv('API_ENDPOINT'), task_info.get("id")),
json={
'accessKey': os.getenv('ACCESS_KEY'),
}, timeout=10)
req_span.set_attribute("http.status_code", response.status_code)
req_span.set_attribute("http.response", response.text)
response.raise_for_status()
req_span.set_status(Status(StatusCode.OK))
except requests.RequestException as e:
span.set_attribute("api.error", str(e))
req_span.set_status(Status(StatusCode.ERROR))
logger.error("Request failed: %s", e)
return False
data = response.json()
url = data.get('data', "")
logger.info("Uploading file for task %s to %s", task_info.get("id"), url)
return oss.upload_to_oss(url, ffmpeg_task.get_output_file())
def get_task_info(id):
try:
response = session.get(os.getenv('API_ENDPOINT') + "/" + id + "/info", params={
'accessKey': os.getenv('ACCESS_KEY'),
}, timeout=10)
response.raise_for_status()
except requests.RequestException as e:
logger.error("Request failed: %s", e)
return []
data = response.json()
logger.debug("Task info result: %s", data)
if data.get('code', 0) == 200:
return data.get('data', {})

View File

@@ -1,244 +0,0 @@
import json
import logging
import os
import subprocess
from datetime import datetime
from typing import Optional, IO
from opentelemetry.trace import Status, StatusCode
from entity.ffmpeg import FfmpegTask, ENCODER_ARGS, VIDEO_ARGS, AUDIO_ARGS, MUTE_AUDIO_INPUT
from telemetry import get_tracer
logger = logging.getLogger(__name__)
def re_encode_and_annexb(file):
with get_tracer("ffmpeg").start_as_current_span("re_encode_and_annexb") as span:
span.set_attribute("file.path", file)
if not os.path.exists(file):
span.set_status(Status(StatusCode.ERROR))
return file
logger.info("ReEncodeAndAnnexb: %s", file)
has_audio = bool(probe_video_audio(file))
ffmpeg_process = subprocess.run(["ffmpeg", "-y", "-hide_banner", "-vsync", "cfr", "-i", file,
*(() if has_audio else MUTE_AUDIO_INPUT),
"-map", "0:v", "-map", "0:a" if has_audio else "1:a",
*VIDEO_ARGS, "-bsf:v", "h264_mp4toannexb",
*AUDIO_ARGS, "-bsf:a", "setts=pts=DTS",
*ENCODER_ARGS, "-shortest", "-fflags", "+genpts",
"-f", "mpegts", file + ".ts"])
logger.info(" ".join(ffmpeg_process.args))
span.set_attribute("ffmpeg.args", json.dumps(ffmpeg_process.args))
logger.info("ReEncodeAndAnnexb: %s, returned: %s", file, ffmpeg_process.returncode)
span.set_attribute("ffmpeg.code", ffmpeg_process.returncode)
if ffmpeg_process.returncode == 0:
span.set_status(Status(StatusCode.OK))
span.set_attribute("file.size", os.path.getsize(file+".ts"))
# os.remove(file)
return file+".ts"
else:
span.set_status(Status(StatusCode.ERROR))
return file
def start_render(ffmpeg_task: FfmpegTask):
tracer = get_tracer(__name__)
with tracer.start_as_current_span("start_render") as span:
span.set_attribute("ffmpeg.task", str(ffmpeg_task))
if not ffmpeg_task.need_run():
ffmpeg_task.set_output_file(ffmpeg_task.input_file[0])
span.set_status(Status(StatusCode.OK))
return True
ffmpeg_args = ffmpeg_task.get_ffmpeg_args()
if len(ffmpeg_args) == 0:
ffmpeg_task.set_output_file(ffmpeg_task.input_file[0])
span.set_status(Status(StatusCode.OK))
return True
ffmpeg_process = subprocess.run(["ffmpeg", "-progress", "-", "-loglevel", "error", *ffmpeg_args], stderr=subprocess.PIPE, **subprocess_args(True))
span.set_attribute("ffmpeg.args", json.dumps(ffmpeg_process.args))
logger.info(" ".join(ffmpeg_process.args))
ffmpeg_final_out = handle_ffmpeg_output(ffmpeg_process.stdout)
span.set_attribute("ffmpeg.out", ffmpeg_final_out)
logger.info("FINISH TASK, OUTPUT IS %s", ffmpeg_final_out)
code = ffmpeg_process.returncode
span.set_attribute("ffmpeg.code", code)
if code != 0:
span.set_attribute("ffmpeg.err", str(ffmpeg_process.stderr))
span.set_status(Status(StatusCode.ERROR, "FFmpeg exited abnormally"))
logger.error("FFMPEG ERROR: %s", ffmpeg_process.stderr)
return False
span.set_attribute("ffmpeg.out_file", ffmpeg_task.output_file)
try:
file_size = os.path.getsize(ffmpeg_task.output_file)
span.set_attribute("file.size", file_size)
if file_size < 4096:
span.set_status(Status(StatusCode.ERROR, "Output file too small"))
logger.error("FFMPEG ERROR: OUTPUT FILE IS TOO SMALL")
return False
except OSError as e:
span.set_attribute("file.size", 0)
span.set_attribute("file.error", e.strerror)
span.set_status(Status(StatusCode.ERROR, "Output file not found"))
logger.error("FFMPEG ERROR: OUTPUT FILE NOT FOUND")
return False
span.set_status(Status(StatusCode.OK))
return True
def handle_ffmpeg_output(stdout: Optional[bytes]) -> str:
out_time = "0:0:0.0"
if stdout is None:
print("[!]STDOUT is null")
return out_time
speed = "0"
for line in stdout.split(b"\n"):
if line == b"":
break
if line.strip() == b"progress=end":
# processing finished
break
if line.startswith(b"out_time="):
out_time = line.replace(b"out_time=", b"").decode().strip()
if line.startswith(b"speed="):
speed = line.replace(b"speed=", b"").decode().strip()
print("[ ]Speed:", out_time, "@", speed)
return out_time+"@"+speed
def duration_str_to_float(duration_str: str) -> float:
_duration = datetime.strptime(duration_str, "%H:%M:%S.%f") - datetime(1900, 1, 1)
return _duration.total_seconds()
def probe_video_info(video_file):
tracer = get_tracer(__name__)
with tracer.start_as_current_span("probe_video_info") as span:
span.set_attribute("video.file", video_file)
# Get width, height and duration
result = subprocess.run(
["ffprobe", '-v', 'error', '-select_streams', 'v:0', '-show_entries', 'stream=width,height:format=duration', '-of',
'csv=s=x:p=0', video_file],
stderr=subprocess.STDOUT,
**subprocess_args(True)
)
span.set_attribute("ffprobe.args", json.dumps(result.args))
span.set_attribute("ffprobe.code", result.returncode)
if result.returncode != 0:
span.set_status(Status(StatusCode.ERROR))
return 0, 0, 0
all_result = result.stdout.decode('utf-8').strip()
span.set_attribute("ffprobe.out", all_result)
if all_result == '':
span.set_status(Status(StatusCode.ERROR))
return 0, 0, 0
span.set_status(Status(StatusCode.OK))
wh, duration = all_result.split('\n')
width, height = wh.strip().split('x')
return int(width), int(height), float(duration)
def probe_video_audio(video_file, type=None):
tracer = get_tracer(__name__)
with tracer.start_as_current_span("probe_video_audio") as span:
span.set_attribute("video.file", video_file)
args = ["ffprobe", "-hide_banner", "-v", "error", "-select_streams", "a", "-show_entries", "stream=index", "-of", "csv=p=0"]
if type == 'concat':
args.append("-safe")
args.append("0")
args.append("-f")
args.append("concat")
args.append(video_file)
logger.info(" ".join(args))
result = subprocess.run(args, stderr=subprocess.STDOUT, **subprocess_args(True))
span.set_attribute("ffprobe.args", json.dumps(result.args))
span.set_attribute("ffprobe.code", result.returncode)
logger.info("probe_video_audio: %s", result.stdout.decode('utf-8').strip())
if result.returncode != 0:
return False
if result.stdout.decode('utf-8').strip() == '':
return False
return True
# Fade the audio out over the final 2 seconds
def fade_out_audio(file, duration, fade_out_sec=2):
if isinstance(duration, str):
try:
duration = float(duration)
except Exception as e:
logger.error("duration is not float: %s", e)
return file
tracer = get_tracer(__name__)
with tracer.start_as_current_span("fade_out_audio") as span:
span.set_attribute("audio.file", file)
if duration <= fade_out_sec:
return file
else:
new_fn = file + "_.mp4"
if os.path.exists(new_fn):
os.remove(new_fn)
logger.info("delete tmp file: " + new_fn)
try:
process = subprocess.run(["ffmpeg", "-i", file, "-c:v", "copy", "-c:a", "aac", "-af", "afade=t=out:st=" + str(duration - fade_out_sec) + ":d=" + str(fade_out_sec), "-y", new_fn], **subprocess_args(True))
span.set_attribute("ffmpeg.args", json.dumps(process.args))
logger.info(" ".join(process.args))
if process.returncode != 0:
span.set_status(Status(StatusCode.ERROR))
logger.error("FFMPEG ERROR: %s", process.stderr)
return file
else:
span.set_status(Status(StatusCode.OK))
return new_fn
except Exception as e:
span.set_status(Status(StatusCode.ERROR))
logger.error("FFMPEG ERROR: %s", e)
return file
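`fade_out_audio` derives the `afade` expression from the clip duration; a small sketch of just that string construction (the helper name is hypothetical):

```python
def build_fade_out_filter(duration: float, fade_out_sec: float = 2) -> str:
    # The fade starts `fade_out_sec` seconds before the end of the clip
    return "afade=t=out:st=" + str(duration - fade_out_sec) + ":d=" + str(fade_out_sec)

print(build_fade_out_filter(10.0))  # afade=t=out:st=8.0:d=2
```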
# Create a set of arguments which make a ``subprocess.Popen`` (and
# variants) call work with or without Pyinstaller, ``--noconsole`` or
# not, on Windows and Linux. Typical use::
#
# subprocess.call(['program_to_run', 'arg_1'], **subprocess_args())
#
# When calling ``check_output``::
#
# subprocess.check_output(['program_to_run', 'arg_1'],
# **subprocess_args(False))
def subprocess_args(include_stdout=True):
# The following is true only on Windows.
if hasattr(subprocess, 'STARTUPINFO'):
# On Windows, subprocess calls will pop up a command window by default
# when run from Pyinstaller with the ``--noconsole`` option. Avoid this
# distraction.
si = subprocess.STARTUPINFO()
si.dwFlags |= subprocess.STARTF_USESHOWWINDOW
# Windows doesn't search the path by default. Pass it an environment so
# it will.
env = os.environ
else:
si = None
env = None
# ``subprocess.check_output`` doesn't allow specifying ``stdout``::
#
# Traceback (most recent call last):
# File "test_subprocess.py", line 58, in <module>
# **subprocess_args(stdout=None))
# File "C:\Python27\lib\subprocess.py", line 567, in check_output
# raise ValueError('stdout argument not allowed, it will be overridden.')
# ValueError: stdout argument not allowed, it will be overridden.
#
# So, add it only if it's needed.
if include_stdout:
ret = {'stdout': subprocess.PIPE}
else:
ret = {}
# On Windows, running this from the binary produced by Pyinstaller
# with the ``--noconsole`` option requires redirecting everything
# (stdin, stdout, stderr) to avoid an OSError exception
# "[Error 6] the handle is invalid."
ret.update({'stdin': subprocess.PIPE,
'startupinfo': si,
'env': env})
return ret
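The recipe's own docstring shows the typical call shape; here is a condensed, runnable restatement (the Windows `STARTUPINFO` branch is inert on POSIX hosts, so only the pipe redirections apply there):

```python
import os
import subprocess

# Condensed restatement of subprocess_args() above.
def subprocess_args(include_stdout=True):
    si = None
    env = None
    if hasattr(subprocess, 'STARTUPINFO'):  # Windows-only branch
        si = subprocess.STARTUPINFO()
        si.dwFlags |= subprocess.STARTF_USESHOWWINDOW
        env = os.environ
    ret = {'stdout': subprocess.PIPE} if include_stdout else {}
    ret.update({'stdin': subprocess.PIPE, 'startupinfo': si, 'env': env})
    return ret

# Typical use, as suggested by the comment block above:
result = subprocess.run(['echo', 'hello'], **subprocess_args(True))
print(result.stdout.decode().strip())  # hello
```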


@@ -1,121 +0,0 @@
import logging
import os
import requests
from opentelemetry.trace import Status, StatusCode
from telemetry import get_tracer
logger = logging.getLogger(__name__)
def upload_to_oss(url, file_path):
"""
Upload a file to OSS via a pre-signed URL.
:param str url: pre-signed URL
:param str file_path: local file path
:return bool: True on success
"""
tracer = get_tracer(__name__)
with tracer.start_as_current_span("upload_to_oss") as span:
span.set_attribute("file.url", url)
span.set_attribute("file.path", file_path)
span.set_attribute("file.size", os.path.getsize(file_path))
max_retries = 5
retries = 0
if os.getenv("UPLOAD_METHOD") == "rclone":
with tracer.start_as_current_span("rclone_to_oss") as r_span:
replace_map = os.getenv("RCLONE_REPLACE_MAP", "")
r_span.set_attribute("rclone.replace_map", replace_map)
if replace_map:
replace_list = [i.split("|", 1) for i in replace_map.split(",")]
new_url = url
for (_src, _dst) in replace_list:
new_url = new_url.replace(_src, _dst)
new_url = new_url.split("?", 1)[0]
r_span.set_attribute("rclone.target_dir", new_url)
if new_url != url:
result = os.system(f"rclone copyto --no-check-dest --ignore-existing --multi-thread-chunk-size 32M --multi-thread-streams 8 {file_path} {new_url}")
r_span.set_attribute("rclone.result", result)
if result == 0:
span.set_status(Status(StatusCode.OK))
return True
else:
span.set_status(Status(StatusCode.ERROR))
while retries < max_retries:
with tracer.start_as_current_span("upload_to_oss.request") as req_span:
req_span.set_attribute("http.retry_count", retries)
try:
req_span.set_attribute("http.method", "PUT")
req_span.set_attribute("http.url", url)
with open(file_path, 'rb') as f:
response = requests.put(url, data=f, stream=True, timeout=60, headers={"Content-Type": "video/mp4"})
req_span.set_attribute("http.status_code", response.status_code)
req_span.set_attribute("http.response", response.text)
response.raise_for_status()
req_span.set_status(Status(StatusCode.OK))
span.set_status(Status(StatusCode.OK))
return True
except requests.exceptions.Timeout:
req_span.set_attribute("http.error", "Timeout")
req_span.set_status(Status(StatusCode.ERROR))
retries += 1
logger.warning(f"Upload timed out. Retrying {retries}/{max_retries}...")
except Exception as e:
req_span.set_attribute("http.error", str(e))
req_span.set_status(Status(StatusCode.ERROR))
retries += 1
logger.warning(f"Upload failed. Retrying {retries}/{max_retries}...")
span.set_status(Status(StatusCode.ERROR))
return False
def download_from_oss(url, file_path, skip_if_exist=False):
"""
Download a file from OSS via a pre-signed URL.
:param skip_if_exist: skip the download if the file already exists
:param str url: pre-signed URL
:param Union[LiteralString, str, bytes] file_path: destination file path
:return bool: True on success
"""
tracer = get_tracer(__name__)
with tracer.start_as_current_span("download_from_oss") as span:
span.set_attribute("file.url", url)
span.set_attribute("file.path", file_path)
if skip_if_exist and os.path.exists(file_path):
span.set_attribute("file.exist", True)
span.set_attribute("file.size", os.path.getsize(file_path))
return True
logging.info("download_from_oss: %s", url)
file_dir, file_name = os.path.split(file_path)
if file_dir:
if not os.path.exists(file_dir):
os.makedirs(file_dir)
max_retries = 5
retries = 0
while retries < max_retries:
with tracer.start_as_current_span("download_from_oss.request") as req_span:
req_span.set_attribute("http.retry_count", retries)
try:
req_span.set_attribute("http.method", "GET")
req_span.set_attribute("http.url", url)
response = requests.get(url, timeout=15)  # per-request timeout in seconds
req_span.set_attribute("http.status_code", response.status_code)
response.raise_for_status()  # treat HTTP errors as failures instead of writing the error body to disk
with open(file_path, 'wb') as f:
f.write(response.content)
req_span.set_attribute("file.size", os.path.getsize(file_path))
req_span.set_status(Status(StatusCode.OK))
span.set_status(Status(StatusCode.OK))
return True
except requests.exceptions.Timeout:
req_span.set_attribute("http.error", "Timeout")
req_span.set_status(Status(StatusCode.ERROR))
retries += 1
logger.warning(f"Download timed out. Retrying {retries}/{max_retries}...")
except Exception as e:
req_span.set_attribute("http.error", str(e))
req_span.set_status(Status(StatusCode.ERROR))
retries += 1
logger.warning(f"Download failed. Retrying {retries}/{max_retries}...")
span.set_status(Status(StatusCode.ERROR))
return False


@@ -1,24 +1,345 @@
# -*- coding: utf-8 -*-
"""
系统信息工具
提供系统信息采集功能。
"""
import logging
import os
import platform
from datetime import datetime
import subprocess
from typing import Optional, Dict, Any, List
import psutil
from constant import SOFTWARE_VERSION, DEFAULT_CAPABILITIES, HW_ACCEL_NONE, HW_ACCEL_QSV, HW_ACCEL_CUDA
from domain.gpu import GPUDevice
logger = logging.getLogger(__name__)
def get_sys_info():
"""
Collect system information.
Returns:
dict: system information dictionary
"""
mem = psutil.virtual_memory()
info = {
'client_datetime': datetime.now().isoformat(),
'os': platform.system(),
'cpu': f"{os.cpu_count()} cores",
'memory': f"{mem.total // (1024**3)}GB",
'cpuUsage': f"{psutil.cpu_percent()}%",
'memoryAvailable': f"{mem.available // (1024**3)}GB",
'pythonVersion': platform.python_version(),
'version': SOFTWARE_VERSION,
}
# Try to collect GPU info
gpu_info = get_gpu_info()
if gpu_info:
info['gpu'] = gpu_info
return info
def get_capabilities():
"""
List the capabilities supported by this worker.
Returns:
list: capability list
"""
return DEFAULT_CAPABILITIES.copy()
def get_gpu_info() -> Optional[str]:
"""
Try to query GPU information.
Returns:
str: GPU name, or None on failure
"""
try:
# Try nvidia-smi
result = subprocess.run(
['nvidia-smi', '--query-gpu=name', '--format=csv,noheader'],
capture_output=True,
text=True,
timeout=5
)
if result.returncode == 0:
gpu_name = result.stdout.strip().split('\n')[0]
return gpu_name
except Exception:
pass
return None
def get_ffmpeg_version() -> str:
"""
Get the FFmpeg version.
Returns:
str: FFmpeg version string
"""
try:
result = subprocess.run(
['ffmpeg', '-version'],
capture_output=True,
text=True,
timeout=5
)
if result.returncode == 0:
first_line = result.stdout.split('\n')[0]
# Parse the version number, e.g. "ffmpeg version 6.0 ..."
parts = first_line.split()
for i, part in enumerate(parts):
if part == 'version' and i + 1 < len(parts):
return parts[i + 1]
except Exception:
pass
return 'unknown'
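`get_ffmpeg_version` scans the banner's first line for the token after `version`; the same scan as a standalone, testable sketch (helper name hypothetical):

```python
def parse_ffmpeg_version(first_line: str) -> str:
    # Take the word immediately following the 'version' token
    parts = first_line.split()
    for i, part in enumerate(parts):
        if part == 'version' and i + 1 < len(parts):
            return parts[i + 1]
    return 'unknown'

print(parse_ffmpeg_version("ffmpeg version 6.0 Copyright (c) 2000-2023"))  # 6.0
print(parse_ffmpeg_version("unexpected banner"))  # unknown
```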
def check_ffmpeg_encoder(encoder: str) -> bool:
"""
Check whether FFmpeg supports the given encoder.
Args:
encoder: encoder name, e.g. 'h264_nvenc', 'h264_qsv'
Returns:
bool: whether the encoder is supported
"""
try:
result = subprocess.run(
['ffmpeg', '-hide_banner', '-encoders'],
capture_output=True,
text=True,
timeout=5
)
if result.returncode == 0:
return encoder in result.stdout
except Exception:
pass
return False
def check_ffmpeg_decoder(decoder: str) -> bool:
"""
Check whether FFmpeg supports the given decoder.
Args:
decoder: decoder name, e.g. 'h264_cuvid', 'h264_qsv'
Returns:
bool: whether the decoder is supported
"""
try:
result = subprocess.run(
['ffmpeg', '-hide_banner', '-decoders'],
capture_output=True,
text=True,
timeout=5
)
if result.returncode == 0:
return decoder in result.stdout
except Exception:
pass
return False
def check_ffmpeg_hwaccel(hwaccel: str) -> bool:
"""
Check whether FFmpeg supports the given hardware acceleration method.
Args:
hwaccel: hwaccel method, e.g. 'cuda', 'qsv', 'dxva2', 'd3d11va'
Returns:
bool: whether the hwaccel method is supported
"""
try:
result = subprocess.run(
['ffmpeg', '-hide_banner', '-hwaccels'],
capture_output=True,
text=True,
timeout=5
)
if result.returncode == 0:
return hwaccel in result.stdout
except Exception:
pass
return False
def detect_hw_accel_support() -> Dict[str, Any]:
"""
Detect the system's hardware acceleration support.
Returns:
dict: hardware acceleration support info
{
'cuda': {
'available': bool,
'gpu': str or None,
'encoder': bool, # h264_nvenc
'decoder': bool, # h264_cuvid
},
'qsv': {
'available': bool,
'encoder': bool, # h264_qsv
'decoder': bool, # h264_qsv
},
'recommended': str # recommended acceleration: 'cuda', 'qsv', 'none'
}
}
"""
result = {
'cuda': {
'available': False,
'gpu': None,
'encoder': False,
'decoder': False,
},
'qsv': {
'available': False,
'encoder': False,
'decoder': False,
},
'recommended': HW_ACCEL_NONE
}
# Detect CUDA/NVENC support
gpu_info = get_gpu_info()
if gpu_info:
result['cuda']['gpu'] = gpu_info
result['cuda']['available'] = check_ffmpeg_hwaccel('cuda')
result['cuda']['encoder'] = check_ffmpeg_encoder('h264_nvenc')
result['cuda']['decoder'] = check_ffmpeg_decoder('h264_cuvid')
# Detect QSV support
result['qsv']['available'] = check_ffmpeg_hwaccel('qsv')
result['qsv']['encoder'] = check_ffmpeg_encoder('h264_qsv')
result['qsv']['decoder'] = check_ffmpeg_decoder('h264_qsv')
# Recommend an acceleration method (prefer CUDA, then QSV)
if result['cuda']['available'] and result['cuda']['encoder']:
result['recommended'] = HW_ACCEL_CUDA
elif result['qsv']['available'] and result['qsv']['encoder']:
result['recommended'] = HW_ACCEL_QSV
return result
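The recommendation rule above prefers CUDA, then QSV, then software; a standalone restatement of just that rule, assuming the `HW_ACCEL_*` constants are the strings `'cuda'`/`'qsv'`/`'none'`:

```python
# Hypothetical standalone version of the selection logic in
# detect_hw_accel_support(): an accelerator is only recommended
# when both the hwaccel and its H.264 encoder are present.
def recommend_hw_accel(support: dict) -> str:
    if support['cuda']['available'] and support['cuda']['encoder']:
        return 'cuda'
    if support['qsv']['available'] and support['qsv']['encoder']:
        return 'qsv'
    return 'none'

support = {
    'cuda': {'available': False, 'encoder': False},
    'qsv': {'available': True, 'encoder': True},
}
print(recommend_hw_accel(support))  # qsv
```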
def get_hw_accel_info_str() -> str:
"""
Build a human-readable string describing hardware acceleration support.
Returns:
str: hardware acceleration support description
"""
support = detect_hw_accel_support()
parts = []
if support['cuda']['available']:
gpu = support['cuda']['gpu'] or 'Unknown GPU'
status = 'encoder+decoder' if support['cuda']['encoder'] and support['cuda']['decoder'] else (
'encoder only' if support['cuda']['encoder'] else 'decoder only' if support['cuda']['decoder'] else 'hwaccel only'
)
parts.append(f"CUDA({gpu}, {status})")
if support['qsv']['available']:
status = 'encoder+decoder' if support['qsv']['encoder'] and support['qsv']['decoder'] else (
'encoder only' if support['qsv']['encoder'] else 'decoder only' if support['qsv']['decoder'] else 'hwaccel only'
)
parts.append(f"QSV({status})")
if not parts:
return "No hardware acceleration available"
return ', '.join(parts) + f" [recommended: {support['recommended']}]"
def get_all_gpu_info() -> List[GPUDevice]:
"""
List all NVIDIA GPUs.
Queries every GPU device via nvidia-smi.
Returns:
list of GPU devices; empty list on failure
"""
try:
result = subprocess.run(
[
'nvidia-smi',
'--query-gpu=index,name,memory.total',
'--format=csv,noheader,nounits'
],
capture_output=True,
text=True,
timeout=10
)
if result.returncode != 0:
return []
devices = []
for line in result.stdout.strip().split('\n'):
if not line.strip():
continue
parts = [p.strip() for p in line.split(',')]
if len(parts) >= 2:
index = int(parts[0])
name = parts[1]
memory = int(parts[2]) if len(parts) >= 3 else None
devices.append(GPUDevice(
index=index,
name=name,
memory_total=memory,
available=True
))
return devices
except Exception as e:
logger.warning(f"Failed to detect GPUs: {e}")
return []
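`get_all_gpu_info` parses nvidia-smi's `csv,noheader,nounits` rows; the same parse as a pure function over a sample payload (the helper name and plain-dict result are illustrative, not the actual `GPUDevice` type):

```python
def parse_gpu_csv(stdout: str):
    # Each row: "index, name, memory.total" (memory may be absent)
    devices = []
    for line in stdout.strip().split('\n'):
        if not line.strip():
            continue
        parts = [p.strip() for p in line.split(',')]
        if len(parts) >= 2:
            memory = int(parts[2]) if len(parts) >= 3 else None
            devices.append({'index': int(parts[0]), 'name': parts[1], 'memory_total': memory})
    return devices

print(parse_gpu_csv("0, NVIDIA GeForce RTX 4090, 24564\n1, Tesla T4, 15360"))
```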
def validate_gpu_device(index: int) -> bool:
"""
Check whether the GPU device at the given index is available.
Args:
index: GPU device index
Returns:
whether the device is available
"""
try:
result = subprocess.run(
[
'nvidia-smi',
'-i', str(index),
'--query-gpu=name',
'--format=csv,noheader'
],
capture_output=True,
text=True,
timeout=5
)
return result.returncode == 0 and bool(result.stdout.strip())
except Exception:
return False

util/tracing.py Normal file

@@ -0,0 +1,260 @@
# -*- coding: utf-8 -*-
"""
OTel tracing utilities.
Provides unified tracing initialization, task-context management, and Span creation.
"""
import logging
import os
from contextlib import contextmanager, nullcontext
from contextvars import ContextVar
from dataclasses import dataclass
from typing import Any, Dict, Iterator, Mapping, Optional
from opentelemetry import context as otel_context
from opentelemetry import propagate, trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource, SERVICE_NAME, SERVICE_VERSION
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.trace import Span, SpanKind, Status, StatusCode
logger = logging.getLogger(__name__)
_DEFAULT_SERVICE_NAME = "RenderWorkerNext"
_DEFAULT_TRACER_NAME = "render.worker"
_OTEL_EXPORTER_OTLP_ENDPOINT = "https://oltp.jerryyan.top/v1/traces"
_TASK_ID_ATTR = "render.task.id"
_TASK_TYPE_ATTR = "render.task.type"
_JOB_ID_ATTR = "render.job.id"
_SEGMENT_ID_ATTR = "render.segment.id"
_ERROR_CODE_ATTR = "render.error.code"
_ERROR_MESSAGE_ATTR = "render.error.message"
_TRUE_VALUES = {"1", "true", "yes", "on"}
_TRACING_INITIALIZED = False
_TRACING_ENABLED = False
_TRACER_PROVIDER: Optional[TracerProvider] = None
_CURRENT_TASK_CONTEXT: ContextVar[Optional["TaskTraceContext"]] = ContextVar(
"render_worker_task_trace_context",
default=None,
)
@dataclass(frozen=True)
class TaskTraceContext:
"""任务维度的 tracing 上下文。"""
task_id: str
task_type: str
job_id: str = ""
segment_id: str = ""
def to_attributes(self) -> Dict[str, str]:
attributes = {
_TASK_ID_ATTR: self.task_id,
_TASK_TYPE_ATTR: self.task_type,
}
if self.job_id:
attributes[_JOB_ID_ATTR] = self.job_id
if self.segment_id:
attributes[_SEGMENT_ID_ATTR] = self.segment_id
return attributes
def _parse_bool(value: Optional[str], default: bool) -> bool:
if value is None:
return default
return value.strip().lower() in _TRUE_VALUES
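`_parse_bool` backs the `OTEL_ENABLED` switch; a standalone sketch of its semantics (truthy values are case-insensitive, `None` falls back to the default):

```python
_TRUE_VALUES = {"1", "true", "yes", "on"}

# Standalone restatement of _parse_bool() above
def parse_bool(value, default):
    if value is None:
        return default
    return value.strip().lower() in _TRUE_VALUES

print(parse_bool(" Yes ", False))  # True
print(parse_bool("off", True))    # False
```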
def is_tracing_enabled() -> bool:
return _TRACING_ENABLED
def initialize_tracing(worker_id: str, service_version: str) -> bool:
"""
Initialize OTel tracing.
"""
global _TRACING_INITIALIZED
global _TRACING_ENABLED
global _TRACER_PROVIDER
if _TRACING_INITIALIZED:
return _TRACING_ENABLED
_TRACING_INITIALIZED = True
if not _parse_bool(os.getenv("OTEL_ENABLED"), default=True):
logger.info("OTel tracing disabled by OTEL_ENABLED")
_TRACING_ENABLED = False
return False
service_name = _DEFAULT_SERVICE_NAME
attributes: Dict[str, str] = {
SERVICE_NAME: service_name,
SERVICE_VERSION: service_version,
"render.worker.id": str(worker_id),
}
resource = Resource.create(attributes)
tracer_provider = TracerProvider(resource=resource)
tracer_provider.add_span_processor(
BatchSpanProcessor(
OTLPSpanExporter(endpoint=_OTEL_EXPORTER_OTLP_ENDPOINT)
)
)
trace.set_tracer_provider(tracer_provider)
_TRACING_ENABLED = True
if trace.get_tracer_provider() is tracer_provider:
_TRACER_PROVIDER = tracer_provider
logger.info("OTel tracing initialized (service=%s, worker=%s)", service_name, worker_id)
return True
def shutdown_tracing() -> None:
"""优雅关闭 tracing provider,刷新剩余 span。"""
global _TRACING_ENABLED
if not _TRACING_ENABLED:
return
provider = _TRACER_PROVIDER
if provider is not None:
try:
provider.shutdown()
except Exception as exc:
logger.warning("Failed to shutdown tracing provider: %s", exc)
_TRACING_ENABLED = False
def build_task_trace_context(task: Any) -> TaskTraceContext:
task_id = str(getattr(task, "task_id", ""))
task_type_obj = getattr(task, "task_type", "")
task_type = str(getattr(task_type_obj, "value", task_type_obj))
job_id = ""
if hasattr(task, "get_job_id"):
job_id = str(task.get_job_id() or "")
segment_id = ""
if hasattr(task, "get_segment_id"):
segment_value = task.get_segment_id()
segment_id = str(segment_value) if segment_value is not None else ""
return TaskTraceContext(
task_id=task_id,
task_type=task_type,
job_id=job_id,
segment_id=segment_id,
)
def get_current_task_context() -> Optional[TaskTraceContext]:
return _CURRENT_TASK_CONTEXT.get()
def capture_otel_context() -> Any:
return otel_context.get_current()
@contextmanager
def bind_trace_context(parent_otel_context: Any, task_context: Optional[TaskTraceContext]) -> Iterator[None]:
"""
Bind a parent OTel context and task context on the current thread.
Used to continue a task trace across threads (e.g. the lease-renewal thread).
"""
otel_token = None
task_token = None
if parent_otel_context is not None:
otel_token = otel_context.attach(parent_otel_context)
if task_context is not None:
task_token = _CURRENT_TASK_CONTEXT.set(task_context)
try:
yield
finally:
if task_token is not None:
_CURRENT_TASK_CONTEXT.reset(task_token)
if otel_token is not None:
otel_context.detach(otel_token)
@contextmanager
def task_trace_scope(task: Any, span_name: str = "render.task.process") -> Iterator[Optional[Span]]:
"""创建任务根 Span 并绑定任务上下文。"""
task_context = build_task_trace_context(task)
task_token = _CURRENT_TASK_CONTEXT.set(task_context)
span_cm = nullcontext(None)
if _TRACING_ENABLED:
tracer = trace.get_tracer(_DEFAULT_TRACER_NAME)
span_cm = tracer.start_as_current_span(span_name, kind=SpanKind.CONSUMER)
try:
with span_cm as span:
if span is not None:
for key, value in task_context.to_attributes().items():
span.set_attribute(key, value)
yield span
finally:
_CURRENT_TASK_CONTEXT.reset(task_token)
@contextmanager
def start_span(
name: str,
*,
attributes: Optional[Mapping[str, Any]] = None,
kind: SpanKind = SpanKind.INTERNAL,
task_id: Optional[str] = None,
) -> Iterator[Optional[Span]]:
"""
Create a child Span within the task.
Returns a null context when tracing is disabled, or when there is no current task context and no explicit task_id is given.
"""
task_context = get_current_task_context()
should_trace = _TRACING_ENABLED and (task_context is not None or bool(task_id))
if not should_trace:
with nullcontext(None) as span:
yield span
return
tracer = trace.get_tracer(_DEFAULT_TRACER_NAME)
with tracer.start_as_current_span(name, kind=kind) as span:
if task_context is not None:
for key, value in task_context.to_attributes().items():
span.set_attribute(key, value)
if task_id and (task_context is None or task_context.task_id != task_id):
span.set_attribute(_TASK_ID_ATTR, task_id)
if attributes:
for key, value in attributes.items():
if value is not None:
span.set_attribute(key, value)
yield span
def mark_span_error(span: Optional[Span], message: str, error_code: str = "") -> None:
"""标记 Span 为错误状态。"""
if span is None:
return
if error_code:
span.set_attribute(_ERROR_CODE_ATTR, error_code)
if message:
span.set_attribute(_ERROR_MESSAGE_ATTR, message[:500])
span.set_status(Status(StatusCode.ERROR, message[:200]))
def inject_trace_headers(headers: Optional[Mapping[str, str]] = None) -> Dict[str, str]:
"""向 HTTP 头注入当前 trace 上下文。"""
carrier = dict(headers) if headers else {}
if _TRACING_ENABLED:
propagate.inject(carrier)
return carrier
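With OTel's default W3C Trace Context propagator, `propagate.inject` writes a single `traceparent` header into the carrier; a sketch of that header's layout (helper name hypothetical):

```python
# traceparent: version "00", 32-hex-digit trace id, 16-hex-digit
# parent span id, 2-hex-digit flags ("01" = sampled).
def build_traceparent(trace_id: int, span_id: int, sampled: bool = True) -> str:
    return f"00-{trace_id:032x}-{span_id:016x}-{'01' if sampled else '00'}"

headers = {"Content-Type": "application/json"}
headers["traceparent"] = build_traceparent(0x1234, 0xabcd)
print(headers["traceparent"])  # 00-00000000000000000000000000001234-000000000000abcd-01
```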