Dispatcharr provides flexible stream management with support for multiple streaming backends, transcoding profiles, and intelligent connection handling.
## Overview
The streaming system controls how video content is delivered to clients:
- **Stream Profiles**: Configure how streams are processed and delivered
- **Multiple Backends**: FFmpeg, VLC, Streamlink, or custom scripts
- **Connection Management**: Track and limit concurrent connections
- **User Agents**: Control client compatibility and identification
- **Failover**: Automatic switching between stream sources
- **Real-time Monitoring**: Track active connections and bandwidth
## Stream Profiles
Profiles define how streams are processed before delivery to clients.
### Profile Types

| Type | How It Works | Best For |
|------|--------------|----------|
| Proxy | Pass-through with minimal processing | Lowest latency, no transcoding |
| Redirect | Direct client to original URL | Zero server bandwidth usage |
| FFmpeg Transcoding | Re-encode with custom parameters | Quality/bandwidth optimization |
| Custom Backends | Streamlink, VLC, or custom scripts | Maximum flexibility |
### Profile Configuration

```python
class StreamProfile:
    name = CharField()          # Profile identifier
    command = CharField()       # Executable path
    parameters = TextField()    # Command-line arguments
    is_active = BooleanField()
    user_agent = ForeignKey(UserAgent)
    locked = BooleanField()     # Prevent deletion/modification
```
### Built-in Profiles

```python
name = "Proxy"
command = ""     # Built-in proxy functionality
parameters = ""
locked = True    # Cannot be deleted
```
**How it works:**

1. Dispatcharr fetches the stream
2. Minimal buffering
3. Passes through to client
4. Tracks connection statistics

**Use cases:**

- Default streaming mode
- Low server resources
- No transcoding needed
```python
name = "Redirect"
command = ""     # Built-in redirect functionality
parameters = ""
locked = True
```
**How it works:**

1. Returns HTTP 302 redirect
2. Client connects directly to source
3. Zero server bandwidth

**Use cases:**

- Minimize server load
- Client has direct access to source
- No connection tracking needed
> **Warning:** Redirect mode exposes the original stream URL to clients. Use it only when that is acceptable.
```python
name = "FFmpeg 720p"
command = "ffmpeg"
parameters = "-i {streamUrl} -c:v libx264 -preset veryfast -b:v 2M -maxrate 2.5M -bufsize 5M -vf scale=-2:720 -c:a aac -b:a 128k -f mpegts pipe:1"
user_agent = ForeignKey(UserAgent)
```
**Parameters:**

- `{streamUrl}`: Replaced with the actual stream URL
- `{userAgent}`: Replaced with the user agent string
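Launching a profile boils down to substituting these placeholders into the parameter template and spawning the process. A minimal sketch of that step (the function name and example URL are illustrative, not Dispatcharr's actual API):

```python
import shlex

def build_command(command, parameters, stream_url, user_agent):
    # Substitute the documented placeholders into the parameter template
    args = parameters.replace("{streamUrl}", stream_url) \
                     .replace("{userAgent}", user_agent)
    # shlex.split respects quoting, so quoted arguments survive intact
    return [command] + shlex.split(args)

cmd = build_command(
    "ffmpeg",
    "-i {streamUrl} -c:v libx264 -f mpegts pipe:1",
    "http://example.com/stream.ts",
    "VLC/3.0.16 LibVLC/3.0.16",
)
# cmd is now an argv list suitable for subprocess.Popen(cmd, stdout=PIPE)
```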
Common transcoding profiles:

| Profile | Resolution | Video Bitrate | Audio Bitrate | Use Case |
|---------|------------|---------------|---------------|----------|
| Low | 480p | 800 Kbps | 96 Kbps | Mobile, slow connections |
| Medium | 720p | 2 Mbps | 128 Kbps | Standard streaming |
| High | 1080p | 5 Mbps | 192 Kbps | HD streaming |
| 4K | 2160p | 15 Mbps | 256 Kbps | Premium quality |
```python
name = "Streamlink Best"
command = "streamlink"
parameters = "--http-header 'User-Agent={userAgent}' --stdout {streamUrl} best"
```
**Benefits:**

- Handles HLS/DASH streams natively
- Built-in quality selection
- Plugin ecosystem for various platforms

**Use cases:**

- Complex stream formats
- Adaptive bitrate streams
- Platform-specific handling
## User Agents
Control how Dispatcharr identifies itself to stream sources:
```python
class UserAgent:
    name = CharField()          # Friendly name
    user_agent = CharField()    # Full UA string
    description = CharField()
    is_active = BooleanField()
```
### Common User Agents
```python
# VLC Media Player
user_agent = "VLC/3.0.16 LibVLC/3.0.16"

# Safari on macOS
user_agent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15"

# iOS Safari
user_agent = "Mozilla/5.0 (iPhone; CPU iPhone OS 15_0 like Mac OS X) AppleWebKit/605.1.15"

# FFmpeg
user_agent = "Lavf/58.76.100"
```
> **Tip:** Some IPTV providers require specific user agents. Configure these in Settings → User Agents and assign them to stream profiles.
## Connection Management

### M3U Profile Connections
Each M3U account can have multiple profiles with connection limits:
```python
class StreamProfile:  # M3U account profile
    max_streams = PositiveIntegerField()  # 0 = unlimited
    is_default = BooleanField()
    is_active = BooleanField()
```
### Redis Connection Tracking

```python
# Track active streams
redis.set(f"channel_stream:{channel_id}", stream_id)
redis.set(f"stream_profile:{stream_id}", profile_id)

# Track profile connections
redis.incr(f"profile_connections:{profile_id}")

# Check limits
current = int(redis.get(f"profile_connections:{profile_id}") or 0)
if max_streams > 0 and current >= max_streams:
    ...  # Try the next profile or stream
```
### Connection Lifecycle

1. **Stream Request**: Client requests a channel stream
2. **Stream Selection**: The algorithm finds an available stream:
   - Check for an existing active stream
   - Try streams in order
   - Check profile connection limits
   - Assign the first available
3. **Connection Tracking**: Redis keys track:
   - Active stream for the channel
   - Profile being used
   - Connection count per profile
4. **Stream Delivery**: Content is delivered via the selected profile
5. **Release**: When the stream ends:
   - Remove Redis tracking keys
   - Decrement the profile connection count
   - Free the slot for the next request
### Connection Limits Example

```text
Channel: ESPN
├── Stream 1: Provider A
│   ├── Profile A (max: 2 connections) → FULL (2/2)
│   └── Profile B (max: 5 connections) → Available (3/5)
└── Stream 2: Provider B
    └── Profile C (unlimited) → Available

New viewer connects:
1. Try Stream 1 / Profile A → Full, skip
2. Try Stream 1 / Profile B → Assign ✓
```
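The first-fit walk shown above can be sketched in a few lines. This is a simplification that uses plain dicts in place of Django models and Redis counters, not the actual selection code:

```python
def select_stream(streams, connections):
    """Return the first (stream_id, profile_id) pair with a free slot.

    `streams` is an ordered list of dicts like
    {"id": 1, "profiles": [{"id": "A", "max_streams": 2}, ...]};
    `connections` maps profile id -> current connection count.
    """
    for stream in streams:
        for profile in stream["profiles"]:
            current = connections.get(profile["id"], 0)
            # max_streams == 0 means unlimited
            if profile["max_streams"] == 0 or current < profile["max_streams"]:
                return stream["id"], profile["id"]
    return None  # every stream/profile is saturated

espn = [
    {"id": 1, "profiles": [{"id": "A", "max_streams": 2},
                           {"id": "B", "max_streams": 5}]},
    {"id": 2, "profiles": [{"id": "C", "max_streams": 0}]},
]
# Profile A is full (2/2), so the new viewer lands on Stream 1 / Profile B
assert select_stream(espn, {"A": 2, "B": 3}) == (1, "B")
```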
## Stream Hash System
Unique identification for streams across M3U refreshes:
```python
class Stream:
    stream_hash = CharField(unique=True, db_index=True)

    @classmethod
    def generate_hash_key(cls, name, url, tvg_id, keys=None,
                          m3u_id=None, group=None, stream_id=None,
                          account_type=None):
        # Configurable hash keys
        keys = keys or ['name', 'url', 'tvg_id']

        # For XtreamCodes, use stream_id instead of the URL
        if account_type == 'XC' and stream_id:
            url = stream_id  # Stable across credential changes

        values = {'name': name, 'url': url, 'tvg_id': tvg_id,
                  'group': group, 'm3u_id': m3u_id}

        # Build the hash from the selected fields
        hash_parts = {key: values[key] for key in keys}
        serialized = json.dumps(hash_parts, sort_keys=True)
        return hashlib.sha256(serialized.encode()).hexdigest()
```
### Why Hashing?

Streams maintain identity across M3U refreshes:

```text
Initial Import:
Stream: "CNN HD"
├── URL: http://provider.com:8080/user/pass/12345
├── Hash: abc123...
└── Assigned to Channel 100

After Credential Change:
Stream: "CNN HD"
├── URL: http://provider.com:8080/newuser/newpass/12345
├── Hash: abc123... (same!)
└── Still assigned to Channel 100
```
For XC accounts the hash is based on `stream_id`, so channel assignments persist even when credentials change.
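The effect is easy to verify with a standalone re-implementation of the hashing scheme (the provider URL and field values here are made up for illustration):

```python
import hashlib
import json

def stream_hash(values, keys=("name", "tvg_id", "url")):
    # Serialize only the configured fields, sorted for a stable digest
    parts = {key: values[key] for key in keys}
    return hashlib.sha256(json.dumps(parts, sort_keys=True).encode()).hexdigest()

old = {"name": "CNN HD", "tvg_id": "cnn.us",
       "url": "http://provider.example:8080/user/pass/12345",
       "stream_id": "12345"}
new = dict(old, url="http://provider.example:8080/newuser/newpass/12345")

# Hashing the raw URL breaks identity when credentials rotate...
assert stream_hash(old) != stream_hash(new)
# ...but hashing the stable stream_id preserves it
assert (stream_hash(old, keys=("name", "tvg_id", "stream_id"))
        == stream_hash(new, keys=("name", "tvg_id", "stream_id")))
```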
### Stale Stream Detection

```python
class Stream:
    last_seen = DateTimeField()   # Updated on refresh
    is_stale = BooleanField()     # Not seen recently
```
During an M3U refresh:

1. All existing streams are marked as not recently seen
2. Streams present in the new M3U have `last_seen` updated
3. Streams not updated are marked as stale
4. Stale streams can be auto-deleted
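The refresh bookkeeping above can be modeled in a few lines. Plain dicts stand in for `Stream` rows, and the 24-hour staleness window is an assumed value, not Dispatcharr's actual setting:

```python
from datetime import datetime, timedelta, timezone

def apply_refresh(streams, seen_hashes, now=None, stale_after=timedelta(hours=24)):
    # Streams present in the new M3U get last_seen bumped;
    # anything not seen within `stale_after` is flagged stale.
    now = now or datetime.now(timezone.utc)
    for s in streams:
        if s["stream_hash"] in seen_hashes:
            s["last_seen"] = now
        s["is_stale"] = (now - s["last_seen"]) > stale_after

now = datetime.now(timezone.utc)
streams = [
    {"stream_hash": "abc", "last_seen": now - timedelta(days=3), "is_stale": False},
    {"stream_hash": "def", "last_seen": now - timedelta(days=3), "is_stale": False},
]
# Only "abc" appears in the refreshed playlist, so "def" becomes stale
apply_refresh(streams, seen_hashes={"abc"}, now=now)
```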
### Configurable Hash Keys

```python
# From CoreSettings
M3U_HASH_KEYS = "name,url,tvg_id"  # Comma-separated

# Examples:
"name,url"      # Hash by name + URL
"tvg_id"        # Hash by TVG ID only
"name,group"    # Hash by name + group
"url,m3u_id"    # Hash by URL + M3U account
```
> **Warning:** Changing hash keys causes every stream to generate a new hash, breaking existing channel assignments. Only change this during initial setup.
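Validating the setting before use avoids silently hashing on a typo'd field name. This guard is a suggestion, not part of Dispatcharr:

```python
VALID_KEYS = {"name", "url", "tvg_id", "group", "m3u_id"}

def parse_hash_keys(setting):
    # Split the comma-separated CoreSettings value into a key list
    keys = [k.strip() for k in setting.split(",") if k.strip()]
    unknown = [k for k in keys if k not in VALID_KEYS]
    if unknown:
        raise ValueError(f"Unknown hash key(s): {unknown}")
    # Fall back to the default combination when the setting is empty
    return keys or ["name", "url", "tvg_id"]
```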
## Stream Statistics

### Real-Time Metrics

```python
class Stream:
    current_viewers = PositiveIntegerField()
    stream_stats = JSONField()    # Video codec, resolution, etc.
    stream_stats_updated_at = DateTimeField()
```
**Tracked statistics:**

```json
{
  "video_codec": "h264",
  "resolution": "1920x1080",
  "bitrate_kbps": 5000,
  "fps": 29.97,
  "audio_codec": "aac",
  "audio_channels": 2
}
```
### Viewer Counting

```python
def get_total_viewers(channel_id):
    """Get viewer count from Redis"""
    redis_client = RedisClient.get_client()
    return int(redis_client.get(f"channel:{channel_id}:viewers") or 0)
```
Viewer counts are displayed in real time on channel listings and statistics pages.
## Transcoding Workflows

### Common Transcoding Scenarios
**Problem:** High-bitrate source streams exceed client bandwidth.
**Solution:** Transcode to a lower bitrate.

```bash
# Profile: Mobile Optimized
ffmpeg -i {streamUrl} \
  -c:v libx264 -preset veryfast \
  -b:v 800k -maxrate 1M -bufsize 2M \
  -vf scale=-2:480 \
  -c:a aac -b:a 96k \
  -f mpegts pipe:1
```

Result: 480p @ 800 Kbps for mobile devices
**Problem:** Inconsistent audio levels across channels.
**Solution:** Apply dynamic audio processing.

```bash
# Profile: Normalized Audio
ffmpeg -i {streamUrl} \
  -c:v copy \
  -af loudnorm=I=-16:LRA=11:TP=-1.5 \
  -c:a aac -b:a 192k \
  -f mpegts pipe:1
```

Result: Consistent audio levels using EBU R128 loudness normalization
**Problem:** CPU overload with multiple transcodes.
**Solution:** Use GPU hardware encoding.

```bash
# Profile: NVIDIA NVENC
ffmpeg -hwaccel cuda -i {streamUrl} \
  -c:v h264_nvenc -preset p4 \
  -b:v 3M -maxrate 4M -bufsize 8M \
  -c:a copy \
  -f mpegts pipe:1
```

Result: 5-10x faster transcoding than CPU encoding
### Profile Assignment Priority

```python
# From channels/models.py
def get_stream_profile(self):
    """Get stream profile with priority:
    1. Channel-specific profile
    2. Stream-specific profile
    3. Default system profile
    """
    if self.stream_profile:
        return self.stream_profile
    if self.stream and self.stream.stream_profile:
        return self.stream.stream_profile
    return StreamProfile.objects.get(
        id=CoreSettings.get_default_stream_profile_id()
    )
```
## Advanced Features

### Dynamic Profile Switching

```python
def update_stream_profile(self, new_profile_id):
    """Switch profile for an active stream"""
    redis_client = RedisClient.get_client()

    # Get current stream and profile
    stream_id = redis_client.get(f"channel_stream:{self.id}")
    current_profile_id = redis_client.get(f"stream_profile:{stream_id}")

    # Update profile mapping
    redis_client.set(f"stream_profile:{stream_id}", new_profile_id)

    # Adjust connection counters
    redis_client.decr(f"profile_connections:{current_profile_id}")
    redis_client.incr(f"profile_connections:{new_profile_id}")
```
Profile switching allows changing transcoding parameters without disconnecting viewers. Useful for adaptive bitrate scenarios.
### Custom Stream Properties

```python
class Stream:
    custom_properties = JSONField(default=dict)
```
Store provider-specific metadata:
```json
{
  "provider_id": "12345",
  "quality_tags": ["HD", "50fps", "H.265"],
  "language_tracks": ["eng", "spa", "fra"],
  "subtitle_langs": ["eng", "spa"],
  "drm_protected": false
}
```
### Stream Channel Numbers

```python
class Stream:
    stream_chno = FloatField()  # Provider channel number
```
Preserve original channel numbering from provider:
```text
Provider Channel 2.1 → Dispatcharr Channel 102
├── Original: stream_chno = 2.1
└── Remapped: channel_number = 102.0
```
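A remapping like the one above could be as simple as offsetting the provider's major channel number into a local range. This is a hypothetical rule for illustration; Dispatcharr lets you assign `channel_number` however you like:

```python
def remap_chno(stream_chno, base=100):
    # Offset the integer part of the provider number into the local range,
    # e.g. 2.1 -> 102.0 with base=100 (sub-channel digits are dropped)
    return float(base + int(stream_chno))

assert remap_chno(2.1) == 102.0
```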
## Real-World Use Cases

### Multi-Tier Quality System

```text
Channel Setup:
├── Profile A: Direct Proxy (Priority users)
├── Profile B: 1080p Transcode (Standard users)
└── Profile C: 720p Transcode (Mobile users)

Connection Distribution:
├── 10 Priority users → Direct streams
├── 50 Standard users → 1080p (max CPU load)
└── 100+ Mobile users → 720p (low bandwidth)
```
### Geographic Distribution

```text
Setup: VPN-routed Dispatcharr
├── Server: Netherlands (geo-unrestricted)
├── Provider: UK IPTV (geo-locked)
└── Clients: Worldwide

Result:
- All streams route through the VPN
- Clients access geo-locked content
- Single VPN connection, multiple clients
- Bandwidth optimization via transcoding
```
### Failover Architecture

```text
Channel: Premium Sports
├── Stream 1: Primary Provider
│   ├── Profile 1 (3 connections) → Active
│   └── Profile 2 (5 connections) → Backup
├── Stream 2: Secondary Provider
│   └── Profile 3 (unlimited) → Backup
└── Stream 3: Tertiary Provider
    └── Profile 4 (unlimited) → Last resort

Failover Logic:
1. 3 viewers → Stream 1 / Profile 1
2. 4th viewer → Stream 1 / Profile 2 (Profile 1 full)
3. 9th viewer → Stream 2 / Profile 3 (Stream 1 exhausted)
4. Providers 1 & 2 down → Stream 3 / Profile 4
```
### Hardware Transcoding Farm

```text
Server Configuration:
├── 4x NVIDIA GPUs
├── 8x Transcoding profiles (2 per GPU)
└── Load balanced across GPUs

Capacity:
├── 32 concurrent 1080p transcodes
├── 64 concurrent 720p transcodes
└── Automatic fallback to CPU if GPUs are full
```
## Best Practices

- **Minimize latency**: Use Proxy or Redirect profiles; avoid transcoding when possible
- **Reduce bandwidth**: Transcode to lower bitrates; use hardware acceleration
- **Pool connections**: Redis-based tracking enables efficient resource allocation
- **Monitor health**: Real-time statistics with automatic failover on errors
> **Note:** Monitor system resources (CPU, RAM, GPU) when running transcoding profiles, and adjust concurrent connection limits accordingly.