## Overview
The `ConversionManager` is the core orchestration system for Frame’s conversion tasks. It manages a queue of pending tasks, enforces concurrency limits, and coordinates worker processes.
## Architecture
The manager uses an actor-like pattern with message passing:
- Main event loop - processes messages and manages queue state
- Worker tasks - spawned per conversion, run FFmpeg
- Message channel - async communication between components
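A minimal sketch of this actor-like loop using std's `mpsc` channel. The real implementation is async, and the `Msg` type here is illustrative, not Frame's actual `ManagerMessage`:

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative message type; the real enum is ManagerMessage in manager.rs.
enum Msg {
    Enqueue(u64),
    Completed(u64),
    Shutdown,
}

// Spawn the "manager" loop, drive it with a few messages, and return how
// many tasks remain queued at shutdown.
fn run_demo() -> usize {
    let (tx, rx) = mpsc::channel::<Msg>();

    // Main event loop: sole owner of the queue state, reacts only to messages.
    let manager = thread::spawn(move || {
        let mut queued: Vec<u64> = Vec::new();
        for msg in rx {
            match msg {
                Msg::Enqueue(id) => queued.push(id),
                Msg::Completed(id) => queued.retain(|&q| q != id),
                Msg::Shutdown => break,
            }
        }
        queued.len()
    });

    tx.send(Msg::Enqueue(1)).unwrap();
    tx.send(Msg::Enqueue(2)).unwrap();
    tx.send(Msg::Completed(1)).unwrap();
    tx.send(Msg::Shutdown).unwrap();
    manager.join().unwrap()
}

fn main() {
    println!("{}", run_demo()); // prints "1"
}
```

Because the loop is the only owner of the queue, no locking is needed around queue mutations; all other components communicate through the channel.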
## `ManagerMessage` Types
Messages sent through the internal channel (manager.rs:27-33):
- Add task - adds a new task to the queue; triggers queue processing
- Concurrency changed - the max concurrency setting changed; triggers queue processing to start additional tasks if slots are available
- `TaskStarted` - a worker has started processing; contains the task ID and the FFmpeg process PID
- `TaskCompleted` - the task finished successfully; contains the task ID; triggers queue processing
- `TaskError` - the task failed; contains the task ID and error details; triggers queue processing
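Taken together, the channel messages might look like the enum below. `TaskStarted`, `TaskCompleted`, and `TaskError` match the names used later in this document; `AddTask` and `SetMaxConcurrency` are illustrative guesses for the first two variants:

```rust
// Hedged sketch of the ManagerMessage enum; AddTask and SetMaxConcurrency
// are assumed names, the other three appear in this document.
enum ManagerMessage {
    AddTask(u64),             // add a new task; triggers queue processing
    SetMaxConcurrency(usize), // limit changed; may start more tasks
    TaskStarted(u64, u32),    // task ID + FFmpeg process PID
    TaskCompleted(u64),       // success; triggers queue processing
    TaskError(u64, String),   // failure details; triggers queue processing
}

fn describe(msg: &ManagerMessage) -> &'static str {
    match msg {
        ManagerMessage::AddTask(_) => "enqueue",
        ManagerMessage::SetMaxConcurrency(_) => "reconfigure",
        ManagerMessage::TaskStarted(_, _) => "started",
        ManagerMessage::TaskCompleted(_) => "completed",
        ManagerMessage::TaskError(_, _) => "error",
    }
}

fn main() {
    println!("{}", describe(&ManagerMessage::TaskStarted(7, 12345))); // prints "started"
}
```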
## Task Lifecycle
### 1. Enqueue
- Task added to queue (`VecDeque`)
- Task ID added to `queued_ids` set (prevents duplicates)
- Removed from `cancelled_tasks` (allows re-queuing)
- `process_queue()` called to potentially start the task
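The enqueue steps above can be sketched as follows; the struct and its field types are assumptions based on the names used in this document:

```rust
use std::collections::{HashSet, VecDeque};

// Assumed shape of the queue-side state; field names follow this document.
struct Queue {
    queue: VecDeque<u64>,
    queued_ids: HashSet<u64>,
    cancelled_tasks: HashSet<u64>,
}

impl Queue {
    fn enqueue(&mut self, id: u64) -> bool {
        // queued_ids prevents duplicate queue entries.
        if !self.queued_ids.insert(id) {
            return false;
        }
        // Clearing the cancelled flag allows re-queuing a cancelled task.
        self.cancelled_tasks.remove(&id);
        self.queue.push_back(id);
        true // the caller would now invoke process_queue()
    }
}

fn main() {
    let mut q = Queue {
        queue: VecDeque::new(),
        queued_ids: HashSet::new(),
        cancelled_tasks: HashSet::new(),
    };
    assert!(q.enqueue(1));
    assert!(!q.enqueue(1)); // duplicate rejected
}
```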
### 2. Started
When a worker slot is available:
- Task removed from queue
- Task ID removed from `queued_ids`
- Task ID added to `running_tasks` map
- Worker spawned with `run_ffmpeg_worker()`
- Worker sends `TaskStarted(id, pid)` message
- PID stored in `active_tasks` map

If the task was cancelled in the meantime:
- Process terminated immediately
- Task removed from `running_tasks` and `active_tasks`
### 3. Progress Updates
Worker emits events during execution.

### 4. Completion
On success:
- Worker sends `TaskCompleted(id)` message
- Task removed from `running_tasks`, `cancelled_tasks`, and `active_tasks`
- `conversion-completed` event emitted with the output path
- `process_queue()` called to start the next queued task
On error:
- Worker sends `TaskError(id, error)` message
- Same cleanup as completion
- `conversion-error` event emitted
- Error logged to the console and sent to the frontend
### 5. Cancellation
User-initiated cancel:
- Task ID added to `cancelled_tasks` set
- If running, the FFmpeg process is terminated (SIGKILL/TerminateProcess)
- Temporary upscale directory cleaned up
- If queued but not yet started, the task is skipped when dequeued
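A sketch of the cancellation bookkeeping, under the assumption that `cancelled_tasks` and `active_tasks` are keyed by task ID as described above:

```rust
use std::collections::{HashMap, HashSet};

// Assumed shape of the cancellation-related state.
struct CancelState {
    cancelled_tasks: HashSet<u64>,
    active_tasks: HashMap<u64, u32>, // task ID -> FFmpeg PID
}

impl CancelState {
    /// Returns the PID to terminate, if the task is currently running.
    fn cancel(&mut self, id: u64) -> Option<u32> {
        // Mark cancelled so a queued-but-unstarted task is skipped on dequeue.
        self.cancelled_tasks.insert(id);
        // If running, the caller terminates this PID (SIGKILL/TerminateProcess)
        // and cleans up the temporary upscale directory.
        self.active_tasks.remove(&id)
    }
}

fn main() {
    let mut s = CancelState {
        cancelled_tasks: HashSet::new(),
        active_tasks: HashMap::new(),
    };
    s.active_tasks.insert(7, 4242);
    assert_eq!(s.cancel(7), Some(4242)); // running: PID returned for kill
    assert_eq!(s.cancel(8), None);       // queued-only: just flagged
    assert!(s.cancelled_tasks.contains(&8));
}
```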
## Concurrency Commands
### get_max_concurrency
Retrieves the current maximum number of concurrent conversions allowed.

Response: the current maximum concurrency setting. Default: `2`.
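The original signature and example blocks did not survive, so here is a hedged sketch of what the command's core read might look like, assuming the limit lives in an `AtomicUsize`:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Assumed storage: the doc says max concurrency is an AtomicUsize with a
// default of 2; the static here stands in for manager state.
static MAX_CONCURRENCY: AtomicUsize = AtomicUsize::new(2);

// Sketch of the command body; the real get_max_concurrency presumably reads
// the manager's state the same way.
fn get_max_concurrency() -> usize {
    MAX_CONCURRENCY.load(Ordering::Relaxed)
}

fn main() {
    println!("{}", get_max_concurrency()); // prints "2"
}
```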
### set_max_concurrency
Updates the maximum number of concurrent conversions. Takes effect immediately.

Parameters: the new maximum concurrency value; must be at least 1.

Behavior:
- When increased: the queue is processed immediately, starting additional tasks up to the new limit
- When decreased: running tasks continue; no new tasks start until the running count drops below the new limit

Errors:
- `InvalidInput` - value is 0 or negative
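A hedged sketch of the validation and store; the real command would also trigger `process_queue()` after raising the limit so free slots fill immediately:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Stand-in for manager state; the real limit lives in the manager.
static MAX_CONCURRENCY: AtomicUsize = AtomicUsize::new(2);

// Sketch: validate, store, then (in the real code) trigger queue processing.
fn set_max_concurrency(value: usize) -> Result<(), String> {
    if value < 1 {
        // Corresponds to the InvalidInput error described above.
        return Err("InvalidInput: concurrency must be at least 1".into());
    }
    MAX_CONCURRENCY.store(value, Ordering::Relaxed);
    Ok(())
}

fn main() {
    assert!(set_max_concurrency(4).is_ok());
    assert_eq!(MAX_CONCURRENCY.load(Ordering::Relaxed), 4);
    assert!(set_max_concurrency(0).is_err()); // rejected
}
```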
## Process Control
### Pause/Resume Implementation
Uses platform-specific process suspension. On Unix (macOS/Linux), SIGSTOP suspends the FFmpeg process and SIGCONT resumes it.
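A Unix-only sketch of suspension via signals, shelling out to the `kill` utility to stay self-contained (the real code presumably sends signals directly rather than spawning `kill`):

```rust
use std::process::Command;

// Sends a named signal to a PID via the `kill` utility. Unix-only sketch;
// returns true if the signal was delivered.
fn send_signal(pid: u32, sig: &str) -> bool {
    Command::new("kill")
        .arg("-s")
        .arg(sig)
        .arg(pid.to_string())
        .status()
        .map(|s| s.success())
        .unwrap_or(false)
}

fn main() {
    // A long-running dummy process stands in for FFmpeg.
    let mut child = Command::new("sleep").arg("30").spawn().expect("spawn sleep");
    let pid = child.id();
    assert!(send_signal(pid, "STOP")); // pause: process stops consuming CPU
    assert!(send_signal(pid, "CONT")); // resume
    child.kill().expect("cleanup");    // cleanup only; cancellation uses SIGKILL
    let _ = child.wait();              // reap the child
}
```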
### Termination
Cancellation uses forceful termination: SIGKILL on Unix, TerminateProcess on Windows.

## Queue Processing Algorithm
The `process_queue()` function (manager.rs:201-246) runs whenever:
- New task enqueued
- Task completed/failed
- Concurrency limit changed
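The loop this describes can be sketched as follows, with tasks cancelled while still queued being skipped on dequeue; the signature and types are assumptions, not the actual `process_queue()`:

```rust
use std::collections::{HashMap, HashSet, VecDeque};

// Sketch: start queued tasks until the concurrency limit is reached,
// skipping tasks that were cancelled while waiting in the queue.
fn process_queue(
    queue: &mut VecDeque<u64>,
    queued_ids: &mut HashSet<u64>,
    cancelled: &mut HashSet<u64>,
    running: &mut HashMap<u64, ()>,
    max_concurrency: usize,
) {
    while running.len() < max_concurrency {
        let Some(id) = queue.pop_front() else { break };
        queued_ids.remove(&id);
        if cancelled.remove(&id) {
            continue; // cancelled before it started: skip
        }
        running.insert(id, ()); // the real code spawns run_ffmpeg_worker() here
    }
}

fn main() {
    let mut queue: VecDeque<u64> = VecDeque::from([1, 2, 3]);
    let mut queued: HashSet<u64> = queue.iter().copied().collect();
    let mut cancelled: HashSet<u64> = HashSet::from([2]);
    let mut running: HashMap<u64, ()> = HashMap::new();
    process_queue(&mut queue, &mut queued, &mut cancelled, &mut running, 2);
    // Task 2 was cancelled while queued, so tasks 1 and 3 start.
    assert!(running.contains_key(&1) && running.contains_key(&3));
    assert!(!running.contains_key(&2));
}
```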
## State Synchronization
Thread-safe state is managed with:
- `AtomicUsize` - max concurrency (lock-free reads)
- `Mutex<HashMap>` - active tasks map (PID lookup)
- `Mutex<HashSet>` - cancelled tasks set (cancellation checks)
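These three pieces of state might be grouped roughly like this; the struct itself is an assumption, only the field names come from this document:

```rust
use std::collections::{HashMap, HashSet};
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};

// Hedged sketch of the shared state layout described above.
struct SharedState {
    max_concurrency: AtomicUsize,           // lock-free reads
    active_tasks: Mutex<HashMap<u64, u32>>, // task ID -> PID
    cancelled_tasks: Mutex<HashSet<u64>>,   // cancellation checks
}

fn main() {
    // Arc lets the event loop and workers share one state instance.
    let state = Arc::new(SharedState {
        max_concurrency: AtomicUsize::new(2),
        active_tasks: Mutex::new(HashMap::new()),
        cancelled_tasks: Mutex::new(HashSet::new()),
    });

    // Reading the limit takes no lock; the maps lock only briefly.
    assert_eq!(state.max_concurrency.load(Ordering::Relaxed), 2);
    state.active_tasks.lock().unwrap().insert(7, 4242);
    assert_eq!(state.active_tasks.lock().unwrap().get(&7), Some(&4242));
    assert!(state.cancelled_tasks.lock().unwrap().is_empty());
}
```

The atomic for the concurrency limit avoids taking a lock on the hot path of queue processing, while the rarely contended maps stay behind mutexes.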
## Cleanup Operations
### Temporary Upscale Directory
When ML upscaling is used, frames are extracted to a temporary directory, which is cleaned up on:
- Task completion (success or error)
- Task cancellation
## Error Handling
Manager-specific errors:
- `Channel` - internal message channel closed (fatal)
- `TaskNotFound` - pause/resume/cancel on a non-existent task
- `Shell` - failed to send a signal to the process (SIGSTOP/SIGKILL)
- `InvalidInput` - invalid concurrency value (0 or negative)
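These variants might be modeled as a plain error enum; the definitions below are a sketch, not the actual types:

```rust
use std::fmt;

// Hedged sketch of the manager error variants listed above.
#[derive(Debug)]
enum ManagerError {
    Channel,              // internal message channel closed (fatal)
    TaskNotFound(u64),    // operation on a non-existent task
    Shell(String),        // failed to send a signal (SIGSTOP/SIGKILL)
    InvalidInput(String), // e.g. invalid concurrency value
}

impl fmt::Display for ManagerError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ManagerError::Channel => write!(f, "internal message channel closed"),
            ManagerError::TaskNotFound(id) => write!(f, "no such task: {id}"),
            ManagerError::Shell(sig) => write!(f, "failed to send signal: {sig}"),
            ManagerError::InvalidInput(msg) => write!(f, "invalid input: {msg}"),
        }
    }
}

fn main() {
    println!("{}", ManagerError::TaskNotFound(9)); // prints "no such task: 9"
}
```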
## Performance Considerations
### Default Concurrency
Default: 2 concurrent tasks (types.rs:3)

Rationale:
- Video encoding is CPU/GPU intensive
- Multiple tasks compete for resources
- Higher concurrency may reduce overall throughput
### Queue Strategy
FIFO (first-in, first-out) using `VecDeque`:
- Fair processing order
- Predictable completion for users
- Simple implementation
### Duplicate Prevention
- `queued_ids` `HashSet` prevents duplicate queue entries
- `running_tasks` `HashMap` prevents double execution
- Task ID uniqueness is the frontend's responsibility