```
This HTML5 player uses HLS (HTTP Live Streaming) format, which is automatically generated by IIS Media Services when you enable mobile device support.
## Required Redistributables
To ensure your application functions correctly with IIS Smooth Streaming, include the following redistributables:
- SDK redistributables for your specific VisioForge SDK
- MP4 redistributables:
  - For x86 architectures: [VisioForge.DotNet.Core.Redist.MP4.x86](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.MP4.x86/)
  - For x64 architectures: [VisioForge.DotNet.Core.Redist.MP4.x64](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.MP4.x64/)
You can add these packages through the NuGet Package Manager in Visual Studio or via the Package Manager Console:
```
Install-Package VisioForge.DotNet.Core.Redist.MP4.x64
```
## Advanced Configuration Options
For production environments, consider these additional configurations:
- **Multiple bitrate encoding**: Configure your VisioForge SDK to encode at multiple bitrates for optimal adaptive streaming (an illustrative bitrate ladder follows this list)
- **Custom manifest settings**: Modify the Smooth Streaming manifest for specialized playback requirements
- **Authentication**: Implement token-based authentication for secure streaming
- **Content encryption**: Enable DRM protection for sensitive content
- **Load balancing**: Configure multiple publishing points behind a load balancer for high-traffic scenarios
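The exact API for multi-bitrate output depends on your SDK version; as a minimal, SDK-agnostic sketch, a bitrate ladder can be modeled as a simple table of output tiers (the tuple shape and values below are illustrative, not VisioForge SDK types or recommendations):
```csharp
// Illustrative bitrate ladder for adaptive streaming; values are typical
// examples only, not prescriptive settings.
var ladder = new (int Width, int Height, int VideoBitrateKbps)[]
{
    (1920, 1080, 6000), // Full HD tier
    (1280, 720,  3000), // HD tier
    (854,  480,  1500), // SD tier
    (640,  360,  800),  // low-bandwidth tier
};

// Each tier is encoded as a separate stream and referenced from the
// Smooth Streaming manifest so clients can switch based on bandwidth.
```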
## Troubleshooting Common Issues
- **Connection failures**: Verify firewall settings allow traffic on the streaming port (typically 80 or 443)
- **Playback stuttering**: Check server resources and consider increasing buffer settings
- **Mobile compatibility issues**: Ensure mobile format generation is enabled and test across multiple devices
- **Quality issues**: Adjust encoding parameters and bitrate ladder configuration
## Conclusion
IIS Smooth Streaming, when implemented with VisioForge SDKs, provides a robust solution for adaptive video delivery across diverse network conditions and devices. By following this comprehensive guide, you can configure, implement, and optimize Smooth Streaming for your .NET applications.
For additional code samples and implementation examples, visit our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples).
---
*This documentation is provided by VisioForge. For additional support or information about our SDKs, please visit [www.visioforge.com](https://www.visioforge.com).*
---END OF PAGE---
# Local File: .\dotnet\general\network-streaming\index.md
---
title: Network Streaming Guide for .NET Development
description: Learn how to implement RTMP, RTSP, HLS, and NDI streaming in .NET applications. Includes code examples for live broadcasting, hardware acceleration, and integration with major streaming platforms.
sidebar_label: Network Streaming
order: 16
---
# Comprehensive Network Streaming Guide
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## Introduction to Network Streaming
Network streaming enables real-time transmission of audio and video content across the internet or local networks. VisioForge's comprehensive SDKs provide powerful tools for implementing various streaming protocols in your .NET applications, allowing you to create professional-grade broadcasting solutions with minimal development effort.
This guide covers all streaming options available in VisioForge SDKs, including implementation details, best practices, and code examples to help you select the most appropriate streaming technology for your specific requirements.
## Streaming Protocol Overview
VisioForge SDKs support a wide range of streaming protocols, each with unique advantages for different use cases:
### Real-Time Protocols
- **[RTMP (Real-Time Messaging Protocol)](rtmp.md)**: Industry-standard protocol for low-latency live streaming, widely used for live broadcasting to CDNs and streaming platforms
- **[RTSP (Real-Time Streaming Protocol)](rtsp.md)**: Ideal for IP camera integration and surveillance applications, offering precise control over media sessions
- **[SRT (Secure Reliable Transport)](srt.md)**: Advanced protocol designed for high-quality, low-latency video delivery over unpredictable networks
- **[NDI (Network Device Interface)](ndi.md)**: Professional-grade protocol for high-quality, low-latency video transmission over local networks
### HTTP-Based Streaming
- **[HLS (HTTP Live Streaming)](hls-streaming.md)**: Apple-developed protocol that breaks streams into downloadable segments, offering excellent compatibility with browsers and mobile devices
- **[HTTP MJPEG Streaming](http-mjpeg.md)**: Simple implementation for streaming motion JPEG over HTTP connections
- **[IIS Smooth Streaming](iis-smooth-streaming.md)**: Microsoft's adaptive streaming technology for delivering media through IIS servers
### Platform-Specific Solutions
- **[Windows Media Streaming (WMV)](wmv.md)**: Microsoft's native streaming format, ideal for Windows-centric deployments
- **[Adobe Flash Media Server](adobe-flash.md)**: Legacy streaming solution for Flash-based applications
### Cloud & Social Media Integration
- **[AWS S3](aws-s3.md)**: Direct streaming to Amazon Web Services S3 storage
- **[YouTube Live](youtube.md)**: Simplified integration with YouTube's live streaming platform
- **[Facebook Live](facebook.md)**: Direct broadcasting to Facebook's streaming service
## Key Components of Network Streaming
### Video Encoders
VisioForge SDKs provide multiple encoding options to balance quality, performance, and compatibility:
#### Software Encoders
- **OpenH264**: Cross-platform software-based H.264 encoder
- **AVENC H264**: FFmpeg-based software encoder
#### Hardware-Accelerated Encoders
- **NVENC H264/HEVC**: NVIDIA GPU-accelerated encoding
- **QSV H264/HEVC**: Intel Quick Sync Video acceleration
- **AMF H264/HEVC**: AMD GPU-accelerated encoding
- **Apple Media H264**: macOS-specific hardware acceleration
## Best Practices for Network Streaming
### Performance Optimization
1. **Hardware acceleration**: Leverage GPU-based encoding where available for reduced CPU usage
2. **Resolution and framerate**: Match output to content type (60fps for gaming, 30fps for general content)
3. **Bitrate allocation**: Allocate 80-90% of bandwidth to video and 10-20% to audio (for example, on a 5 Mbps uplink: roughly 4-4.5 Mbps for video and 0.5-1 Mbps for audio)
### Network Reliability
1. **Connection testing**: Verify upload speed before streaming
2. **Error handling**: Implement reconnection logic for disrupted streams (see the sketch after this list)
3. **Monitoring**: Track streaming metrics in real-time to identify issues
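As a minimal reconnection sketch (assuming a hypothetical `StartStreamingAsync` wrapper around your capture engine's start call):
```csharp
// Retry loop with exponential backoff for disrupted streams.
// StartStreamingAsync is a hypothetical placeholder, not a VisioForge API.
const int maxAttempts = 5;
for (int attempt = 1; attempt <= maxAttempts; attempt++)
{
    try
    {
        await StartStreamingAsync(); // hypothetical start call
        break; // streaming started successfully
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Attempt {attempt} failed: {ex.Message}");
        await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt))); // back off before retrying
    }
}
```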
### Quality Assurance
1. **Pre-streaming checks**: Validate encoder settings and output parameters
2. **Quality monitoring**: Regularly check stream quality during broadcast
3. **Platform compliance**: Follow platform-specific requirements (YouTube, Facebook, etc.)
## Troubleshooting Common Issues
1. **Encoding overload**: If experiencing frame drops, reduce resolution or bitrate
2. **Connection failures**: Verify network stability and server addresses
3. **Audio/video sync**: Ensure proper timestamp synchronization between streams
4. **Platform rejection**: Confirm compliance with platform-specific requirements
5. **Hardware acceleration failures**: Verify driver installation and compatibility
## Conclusion
Network streaming with VisioForge SDKs provides a comprehensive solution for implementing professional-grade media broadcasting in your .NET applications. By understanding the available protocols and following best practices, you can create high-quality streaming experiences for your users across multiple platforms.
For protocol-specific implementation details, refer to the dedicated guides linked throughout this document.
---END OF PAGE---
# Local File: .\dotnet\general\network-streaming\ndi.md
---
title: NDI Network Video Streaming Integration Guide
description: Learn how to implement high-performance NDI streaming in .NET applications. Step-by-step guide for developers to set up low-latency video/audio transmission over IP networks with code examples and best practices.
sidebar_label: NDI
---
# Network Device Interface (NDI) Streaming Integration
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## What is NDI and Why Use It?
The VisioForge SDK's integration of Network Device Interface (NDI) technology provides a transformative solution for professional video production and broadcasting workflows. NDI has emerged as a leading industry standard for live production, enabling high-quality, ultra-low-latency video streaming over conventional Ethernet networks.
NDI significantly simplifies the process of sharing and managing multiple video streams across diverse devices and platforms. When implemented within the VisioForge SDK, it facilitates seamless transmission of high-definition video and audio content from servers to clients with exceptional performance characteristics. This makes the technology particularly valuable for applications including:
- Live broadcasting and streaming
- Professional video conferencing
- Multi-camera production setups
- Remote production workflows
- Educational and corporate presentation environments
The inherent flexibility and efficiency of NDI streaming technology substantially reduces dependency on specialized hardware configurations, delivering a cost-effective alternative to traditional SDI-based systems for professional-grade live video production.
## Installation Requirements
### Prerequisites for NDI Implementation
To successfully implement NDI streaming functionality within your application, you must install one of the following official NDI software packages:
1. **[NDI SDK](https://ndi.video/download-ndi-sdk/)** - Recommended for developers who need comprehensive access to NDI functionality
2. **[NDI Tools](https://ndi.video/tools/)** - Suitable for basic implementation and testing scenarios
These packages provide the necessary runtime components that enable NDI communication across your network infrastructure.
## Cross-Platform NDI Output Implementation
[!badge variant="dark" size="xl" text="VideoCaptureCoreX"] [!badge variant="dark" size="xl" text="VideoEditCoreX"] [!badge variant="dark" size="xl" text="MediaBlocksPipeline"]
### Understanding the NDIOutput Class Architecture
The `NDIOutput` class serves as the core implementation framework for NDI functionality within the VisioForge SDK ecosystem. This class encapsulates configuration properties and processing logic required for high-performance video-over-IP transmission using the NDI protocol. The architecture enables broadcast-quality video and audio transmission across standard network infrastructure without specialized hardware requirements.
#### Class Definition and Interface Implementation
```csharp
public class NDIOutput : IVideoEditXBaseOutput, IVideoCaptureXBaseOutput, IOutputVideoProcessor, IOutputAudioProcessor
```
The class implements several interfaces that provide comprehensive functionality for different output scenarios:
- `IVideoEditXBaseOutput` - Provides integration with video editing workflows
- `IVideoCaptureXBaseOutput` - Enables direct capture-to-NDI streaming capabilities
- `IOutputVideoProcessor` - Allows for advanced video processing during output
- `IOutputAudioProcessor` - Facilitates audio processing and manipulation in the NDI pipeline
### Configuration Properties
#### Video Processing Pipeline
```csharp
public MediaBlock CustomVideoProcessor { get; set; }
```
This property allows developers to extend the NDI streaming pipeline with custom video processing functionality. By assigning a custom `MediaBlock` implementation, you can integrate specialized video filters, transformations, or analysis algorithms before content is transmitted via NDI.
#### Audio Processing Pipeline
```csharp
public MediaBlock CustomAudioProcessor { get; set; }
```
Similar to the video processor property, this allows for insertion of custom audio processing logic. Common applications include dynamic audio level adjustment, noise reduction, or specialized audio effects that enhance the streaming experience.
#### NDI Sink Configuration
```csharp
public NDISinkSettings Sink { get; set; }
```
This property contains the comprehensive configuration parameters for the NDI output sink, including essential settings such as stream identification, compression options, and network transmission parameters.
### Constructor Overloads
#### Basic Constructor with Stream Name
```csharp
public NDIOutput(string name)
```
Creates a new NDI output instance with the specified stream name, which will identify this NDI source on the network.
**Parameters:**
- `name`: String identifier for the NDI stream visible to receivers on the network
#### Advanced Constructor with Pre-configured Settings
```csharp
public NDIOutput(NDISinkSettings settings)
```
Creates a new NDI output instance with comprehensive pre-configured sink settings for advanced implementation scenarios.
**Parameters:**
- `settings`: A fully configured `NDISinkSettings` object containing all required NDI configuration parameters
### Core Methods
#### Stream Identification
```csharp
public string GetFilename()
```
Returns the configured name of the NDI stream. This method maintains compatibility with file-based output interfaces in the SDK architecture.
**Returns:** The current NDI stream identifier
```csharp
public void SetFilename(string filename)
```
Updates the NDI stream identifier. This method is primarily used for compatibility with other output types that use filename-based identification.
**Parameters:**
- `filename`: The updated name for the NDI stream
#### Encoder Management
```csharp
public Tuple[] GetVideoEncoders()
```
Returns an empty array as NDI handles video encoding internally through its proprietary technology.
**Returns:** Empty array of encoder tuples
```csharp
public Tuple[] GetAudioEncoders()
```
Returns an empty array as NDI handles audio encoding internally through its proprietary technology.
**Returns:** Empty array of encoder tuples
## Implementation Examples
### Media Blocks SDK Implementation
The following example demonstrates how to configure an NDI output using the Media Blocks SDK architecture:
```cs
// Create an NDI output block with a descriptive stream name
var ndiSink = new NDISinkBlock("VisioForge Production Stream");
// Connect video source to the NDI output
// CreateNewInput method establishes a video input channel for the NDI sink
pipeline.Connect(videoSource.Output, ndiSink.CreateNewInput(MediaBlockPadMediaType.Video));
// Connect audio source to the NDI output
// CreateNewInput method establishes an audio input channel for the NDI sink
pipeline.Connect(audioSource.Output, ndiSink.CreateNewInput(MediaBlockPadMediaType.Audio));
```
### Video Capture SDK Implementation
This example shows how to integrate NDI streaming within the Video Capture SDK framework:
```cs
// Initialize NDI output with a network-friendly stream name
var ndiOutput = new NDIOutput("VisioForge_Studio_Output");
// Add the configured NDI output to the video capture pipeline
core.Outputs_Add(ndiOutput); // core represents the VideoCaptureCoreX instance
```
## Windows-Specific NDI Implementation
[!badge variant="dark" size="xl" text="VideoCaptureCore"] [!badge variant="dark" size="xl" text="VideoEditCore"]
For Windows-specific implementations, the SDK provides additional configuration options through the VideoCaptureCore or VideoEditCore components.
### Step-by-Step Implementation Guide
#### 1. Enable Network Streaming
First, activate the network streaming functionality:
```cs
VideoCapture1.Network_Streaming_Enabled = true;
```
#### 2. Configure Audio Streaming
Enable audio transmission alongside video content:
```cs
VideoCapture1.Network_Streaming_Audio_Enabled = true;
```
#### 3. Select NDI Protocol
Specify NDI as the preferred streaming format:
```csharp
VideoCapture1.Network_Streaming_Format = NetworkStreamingFormat.NDI;
```
#### 4. Create and Configure NDI Output
Initialize the NDI output with a descriptive name:
```cs
var streamName = "VisioForge NDI Streamer";
var ndiOutput = new NDIOutput(streamName);
```
#### 5. Assign the Output
Connect the configured NDI output to the video capture pipeline:
```cs
VideoCapture1.Network_Streaming_Output = ndiOutput;
```
#### 6. Generate the NDI URL (Optional)
For debugging or sharing purposes, you can generate the standard NDI protocol URL:
```cs
string ndiUrl = $"ndi://{System.Net.Dns.GetHostName()}/{streamName}";
Debug.WriteLine(ndiUrl);
```
## Advanced Integration Considerations
When implementing NDI streaming in production environments, consider the following factors:
- **Network bandwidth requirements** - NDI streams can consume significant bandwidth depending on resolution and framerate (see the rough estimate sketch after this list)
- **Quality vs. latency tradeoffs** - Configure appropriate compression settings based on your specific use case
- **Multicast vs. unicast distribution** - Determine the optimal network transmission method based on your infrastructure
- **Hardware acceleration options** - Leverage GPU acceleration where available for improved performance
- **Discovery mechanism** - Consider how NDI sources will be discovered across network segments
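For rough capacity planning, full-bandwidth NDI is commonly cited at around 125 Mbps for 1080p60; a simple pixel-rate scaling from that figure gives an order-of-magnitude estimate (actual usage varies with content and NDI mode):
```csharp
// Order-of-magnitude bandwidth estimate for full-bandwidth NDI,
// scaled from the commonly cited ~125 Mbps figure for 1080p60.
static double EstimateNdiMbps(int width, int height, double fps)
{
    const double referenceMbps = 125.0;                   // ~1080p60 full NDI
    const double referencePixelRate = 1920.0 * 1080 * 60;
    return referenceMbps * (width * height * fps) / referencePixelRate;
}

Console.WriteLine($"{EstimateNdiMbps(1280, 720, 30):F0} Mbps"); // roughly 28 Mbps
```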
## Related Components
- **NDISinkSettings** - Provides detailed configuration options for the NDI output sink
- **NDISinkBlock** - Implements the core NDI output functionality referenced in NDISinkSettings
- **MediaBlockPadMediaType** - Enum used to specify the type of media (video or audio) for input connections
---
Visit our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples) for additional code samples and implementation examples.
---END OF PAGE---
# Local File: .\dotnet\general\network-streaming\rtmp.md
---
title: RTMP Live Streaming for .NET Applications
description: Learn how to implement RTMP streaming in .NET apps with practical code examples. Covers hardware acceleration, cross-platform support, error handling, and integration with popular streaming platforms like YouTube and Facebook Live.
sidebar_label: RTMP
---
# RTMP Streaming with VisioForge SDKs
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## Introduction to RTMP Streaming
RTMP (Real-Time Messaging Protocol) is a robust communication protocol designed for high-performance transmission of audio, video, and data between a server and a client. VisioForge SDKs provide comprehensive support for RTMP streaming, enabling developers to create powerful streaming applications with minimal effort.
This guide covers implementation details for RTMP streaming across different VisioForge products, including cross-platform solutions and Windows-specific integrations.
## Cross-Platform RTMP Implementation
[!badge variant="dark" size="xl" text="VideoCaptureCoreX"] [!badge variant="dark" size="xl" text="VideoEditCoreX"] [!badge variant="dark" size="xl" text="MediaBlocksPipeline"]
The `RTMPOutput` class serves as the central configuration point for RTMP streaming in cross-platform scenarios. It implements multiple interfaces including `IVideoEditXBaseOutput` and `IVideoCaptureXBaseOutput`, making it versatile for both video editing and capture workflows.
### Setting Up RTMP Output
To begin implementing RTMP streaming, you need to create and configure an `RTMPOutput` instance:
```csharp
// Initialize with streaming URL
var rtmpOutput = new RTMPOutput("rtmp://your-streaming-server/stream-key");
// Alternatively, set the URL after initialization
rtmpOutput = new RTMPOutput();
rtmpOutput.Sink.Location = "rtmp://your-streaming-server/stream-key";
```
### Integration with VisioForge SDKs
#### Video Capture SDK Integration
```csharp
// Add RTMP output to the Video Capture SDK engine
core.Outputs_Add(rtmpOutput, true); // core is an instance of VideoCaptureCoreX
```
#### Video Edit SDK Integration
```csharp
// Set RTMP as the output format for Video Edit SDK
core.Output_Format = rtmpOutput; // core is an instance of VideoEditCoreX
```
#### Media Blocks SDK Integration
```csharp
// Create an RTMP sink block
var rtmpSink = new RTMPSinkBlock(new RTMPSinkSettings()
{
    Location = "rtmp://streaming-server/stream"
});
// Connect video and audio encoders to the RTMP sink
pipeline.Connect(h264Encoder.Output, rtmpSink.CreateNewInput(MediaBlockPadMediaType.Video));
pipeline.Connect(aacEncoder.Output, rtmpSink.CreateNewInput(MediaBlockPadMediaType.Audio));
```
## Video Encoder Configuration
### Supported Video Encoders
VisioForge provides extensive support for various video encoders, making it possible to optimize streaming based on available hardware:
- **OpenH264**: Default software encoder for most platforms
- **NVENC H264**: Hardware-accelerated encoding for NVIDIA GPUs
- **QSV H264**: Intel Quick Sync Video acceleration
- **AMF H264**: AMD GPU-based acceleration
- **HEVC/H265**: Various implementations including MF HEVC, NVENC HEVC, QSV HEVC, and AMF H265
### Implementing Hardware-Accelerated Encoding
For optimal performance, it's recommended to utilize hardware acceleration when available:
```csharp
// Check for NVIDIA encoder availability and use if present
if (NVENCH264EncoderSettings.IsAvailable())
{
    rtmpOutput.Video = new NVENCH264EncoderSettings();
}
// Fall back to OpenH264 if hardware acceleration isn't available
else
{
    rtmpOutput.Video = new OpenH264EncoderSettings();
}
```
## Audio Encoder Configuration
### Supported Audio Encoders
The SDK supports multiple AAC encoder implementations:
- **VO-AAC**: Default for non-Windows platforms
- **AVENC AAC**: Cross-platform implementation
- **MF AAC**: Default for Windows platforms
```csharp
// Configure MF AAC encoder on Windows platforms
rtmpOutput.Audio = new MFAACEncoderSettings();
// For macOS or other platforms
rtmpOutput.Audio = new VOAACEncoderSettings();
```
## Platform-Specific Considerations
### Windows Implementation
On Windows platforms, the default configuration uses:
- OpenH264 for video encoding
- MF AAC for audio encoding
Additionally, Windows supports Microsoft Media Foundation HEVC encoding for high-efficiency streaming.
### macOS Implementation
For macOS applications, the system uses:
- AppleMediaH264EncoderSettings for video encoding
- VO-AAC for audio encoding
### Automatic Platform Detection
The SDK handles platform differences automatically through conditional compilation:
```csharp
#if __MACOS__
Video = new AppleMediaH264EncoderSettings();
#else
Video = new OpenH264EncoderSettings();
#endif
```
## Best Practices for RTMP Streaming
### 1. Encoder Selection Strategy
Always verify encoder availability before attempting to use hardware acceleration:
```csharp
// Check for Intel Quick Sync availability
if (QSVH264EncoderSettings.IsAvailable())
{
    rtmpOutput.Video = new QSVH264EncoderSettings();
}
// Check for NVIDIA acceleration
else if (NVENCH264EncoderSettings.IsAvailable())
{
    rtmpOutput.Video = new NVENCH264EncoderSettings();
}
// Fall back to software encoding
else
{
    rtmpOutput.Video = new OpenH264EncoderSettings();
}
```
### 2. Error Handling
Implement robust error handling to manage streaming failures gracefully:
```csharp
try
{
    var rtmpOutput = new RTMPOutput(streamUrl);
    // Configure and start streaming
}
catch (Exception ex)
{
    logger.LogError($"RTMP streaming initialization failed: {ex.Message}");
    // Implement appropriate error recovery
}
```
### 3. Resource Management
Ensure proper disposal of resources when streaming is complete:
```csharp
// In your cleanup routine
if (rtmpOutput != null)
{
    rtmpOutput.Dispose();
    rtmpOutput = null;
}
```
## Advanced RTMP Configuration
### Dynamic Encoder Selection
For applications that need to adapt to different environments, you can enumerate available encoders:
```csharp
var rtmpOutput = new RTMPOutput();
var availableVideoEncoders = rtmpOutput.GetVideoEncoders();
var availableAudioEncoders = rtmpOutput.GetAudioEncoders();
// Present options to users or select based on system capabilities
```
### Custom Sink Configuration
Fine-tune streaming parameters using the RTMPSinkSettings class:
```csharp
rtmpOutput.Sink = new RTMPSinkSettings
{
    Location = "rtmp://streaming-server/stream"
};
```
## Windows-Specific RTMP Implementation
[!badge variant="dark" size="xl" text="VideoCaptureCore"] [!badge variant="dark" size="xl" text="VideoEditCore"]
For Windows-only applications, VisioForge provides an alternative implementation using FFmpeg:
```csharp
// Enable network streaming
VideoCapture1.Network_Streaming_Enabled = true;
// Set streaming format to RTMP using FFmpeg
VideoCapture1.Network_Streaming_Format = NetworkStreamingFormat.RTMP_FFMPEG_EXE;
// Create and configure FFmpeg output
var ffmpegOutput = new FFMPEGEXEOutput();
ffmpegOutput.FillDefaults(DefaultsProfile.MP4_H264_AAC, true);
ffmpegOutput.OutputMuxer = OutputMuxer.FLV;
// Assign output to the capture component
VideoCapture1.Network_Streaming_Output = ffmpegOutput;
// Enable audio streaming (required for many services)
VideoCapture1.Network_Streaming_Audio_Enabled = true;
```
## Streaming to Popular Platforms
### YouTube Live
```csharp
// Format: rtmp://a.rtmp.youtube.com/live2/ + [YouTube stream key]
VideoCapture1.Network_Streaming_URL = "rtmp://a.rtmp.youtube.com/live2/xxxx-xxxx-xxxx-xxxx";
```
### Facebook Live
```csharp
// Format: rtmps://live-api-s.facebook.com:443/rtmp/ + [Facebook stream key]
VideoCapture1.Network_Streaming_URL = "rtmps://live-api-s.facebook.com:443/rtmp/xxxx-xxxx-xxxx-xxxx";
```
### Custom RTMP Servers
```csharp
// Connect to any RTMP server
VideoCapture1.Network_Streaming_URL = "rtmp://your-streaming-server:1935/live/stream";
```
## Performance Optimization
To achieve optimal streaming performance:
1. **Use hardware acceleration** when available to reduce CPU load
2. **Monitor resource usage** during streaming to identify bottlenecks
3. **Adjust resolution and bitrate** based on available bandwidth
4. **Implement adaptive bitrate** for varying network conditions
5. **Consider GOP size** and keyframe intervals for streaming quality (a short worked example follows this list)
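GOP size is usually specified in frames, so the value follows directly from frame rate and the desired keyframe spacing (the same arithmetic as the keyframe-interval example in the UDP guide):
```csharp
// Keyframe (GOP) interval in frames = frame rate × seconds between keyframes.
int frameRate = 30;
double secondsBetweenKeyframes = 2.0; // shorter GOPs improve recovery after loss, at a bitrate cost
int gopSizeFrames = (int)(frameRate * secondsBetweenKeyframes); // 60 frames
// Apply this value to your encoder's keyframe-interval setting;
// the exact property name varies by encoder settings class.
```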
## Troubleshooting Common Issues
- **Connection Failures**: Verify server URL format and network connectivity
- **Encoder Errors**: Confirm hardware encoder availability and drivers
- **Performance Issues**: Monitor CPU/GPU usage and adjust encoding parameters
- **Audio/Video Sync**: Check timestamp synchronization settings
## Conclusion
VisioForge's RTMP implementation provides developers with a powerful, flexible framework for creating robust streaming applications. By leveraging the appropriate SDK components and following the best practices outlined in this guide, you can create high-performance streaming solutions that work across platforms and integrate with popular streaming services.
## Related Resources
- [Streaming to Adobe Flash Media Server](adobe-flash.md)
- [YouTube Streaming Integration](youtube.md)
- [Facebook Live Implementation](facebook.md)
---END OF PAGE---
# Local File: .\dotnet\general\network-streaming\rtsp.md
---
title: RTSP Video Streaming Implementation in .NET
description: Learn how to implement RTSP streaming in .NET applications with hardware acceleration, cross-platform support, and best practices. Master video encoding, server configuration, and real-time streaming for security cameras and live broadcasting.
sidebar_label: RTSP Streaming
---
# Mastering RTSP Streaming with VisioForge SDKs
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## Introduction to RTSP
The Real-Time Streaming Protocol (RTSP) is a network control protocol designed for use in entertainment and communications systems to control streaming media servers. It acts like a "network remote control," allowing users to play, pause, and stop media streams. VisioForge SDKs harness the power of RTSP to provide robust video and audio streaming capabilities.
Our SDKs integrate RTSP with industry-standard codecs like **H.264 (AVC)** for video and **Advanced Audio Coding (AAC)** for audio. H.264 offers excellent video quality at relatively low bitrates, making it ideal for streaming over various network conditions. AAC provides efficient and high-fidelity audio compression. This powerful combination ensures reliable, high-definition audiovisual streaming suitable for demanding applications such as:
* **Security and Surveillance:** Delivering clear, real-time video feeds from IP cameras.
* **Live Broadcasting:** Streaming events, webinars, or performances to a wide audience.
* **Video Conferencing:** Enabling smooth, high-quality communication.
* **Remote Monitoring:** Observing industrial processes or environments remotely.
This guide delves into the specifics of implementing RTSP streaming using VisioForge SDKs, covering both modern cross-platform approaches and legacy Windows-specific methods.
## Cross-Platform RTSP Output (Recommended)
[!badge variant="dark" size="xl" text="VideoCaptureCoreX"] [!badge variant="dark" size="xl" text="VideoEditCoreX"] [!badge variant="dark" size="xl" text="MediaBlocksPipeline"]
The modern VisioForge SDKs (`CoreX` versions and Media Blocks) provide a flexible and powerful cross-platform RTSP server implementation built upon the robust GStreamer framework. This approach offers greater control, wider codec support, and compatibility across Windows, Linux, macOS, and other platforms.
### Core Component: `RTSPServerOutput`
The `RTSPServerOutput` class is the central configuration point for establishing an RTSP stream within the Video Capture or Video Edit SDKs (`CoreX` versions). It acts as a bridge between your capture/edit pipeline and the underlying RTSP server logic.
**Key Responsibilities:**
* **Interface Implementation:** Implements `IVideoEditXBaseOutput` and `IVideoCaptureXBaseOutput`, allowing seamless integration as an output format in both editing and capture scenarios.
* **Settings Management:** Holds the `RTSPServerSettings` object, which contains all the detailed configuration parameters for the server instance.
* **Codec Specification:** Defines the video and audio encoders that will be used to compress the media before streaming.
**Supported Encoders:**
VisioForge provides access to a wide array of encoders, allowing optimization based on hardware capabilities and target platforms:
* **Video Encoders:**
  * **Hardware-Accelerated (Recommended for performance):**
    * `NVENC` (NVIDIA): Leverages dedicated encoding hardware on NVIDIA GPUs.
    * `QSV` (Intel Quick Sync Video): Utilizes integrated GPU capabilities on Intel processors.
    * `AMF` (AMD Advanced Media Framework): Uses encoding hardware on AMD GPUs/APUs.
  * **Software-Based (Platform-independent, higher CPU usage):**
    * `OpenH264`: A widely compatible H.264 software encoder.
    * `VP8` / `VP9`: Royalty-free video codecs developed by Google, offering good compression (often used with WebRTC, but available here).
  * **Platform-Specific:**
    * `MF HEVC` (Media Foundation HEVC): Windows-specific H.265/HEVC encoder for higher efficiency compression.
* **Audio Encoders:**
  * **AAC Variants:**
    * `VO-AAC`: A versatile, cross-platform AAC encoder.
    * `AVENC AAC`: Utilizes FFmpeg's AAC encoder.
    * `MF AAC`: Windows Media Foundation AAC encoder.
  * **Other Formats:**
    * `MP3`: Widely compatible but less efficient than AAC.
    * `OPUS`: Excellent low-latency codec, ideal for interactive applications.
### Configuring the Stream: `RTSPServerSettings`
This class encapsulates all the parameters needed to define the behavior and properties of your RTSP server.
**Detailed Properties:**
* **Network Configuration:**
  * `Port` (int): The TCP port the server listens on for incoming RTSP connections. The default is `8554`, a common alternative to the standard (often restricted) port 554. Ensure this port is open in firewalls.
  * `Address` (string): The IP address the server binds to.
    * `"127.0.0.1"` (Default): Listens only for connections from the local machine.
    * `"0.0.0.0"`: Listens on all available network interfaces (use for public access).
    * Specific IP (e.g., `"192.168.1.100"`): Binds only to that specific network interface.
  * `Point` (string): The path component of the RTSP URL (e.g., `/live`, `/stream1`). Clients will connect to `rtsp://<Address>:<Port><Point>`. Default is `"/live"`.
* **Stream Configuration:**
  * `VideoEncoder` (IVideoEncoderSettings): An instance of a video encoder settings class (e.g., `OpenH264EncoderSettings`, `NVENCH264EncoderSettings`). This defines the codec, bitrate, quality, etc.
  * `AudioEncoder` (IAudioEncoderSettings): An instance of an audio encoder settings class (e.g., `VOAACEncoderSettings`). Defines audio codec parameters.
  * `Latency` (TimeSpan): Controls the buffering delay introduced by the server to smooth out network jitter. Default is 250 milliseconds. Higher values increase stability but also delay.
* **Authentication:**
  * `Username` (string): If set, clients must provide this username for basic authentication.
  * `Password` (string): If set, clients must provide this password along with the username.
* **Server Identity:**
  * `Name` (string): A friendly name for the server, sometimes displayed by client applications.
  * `Description` (string): A more detailed description of the stream content or server purpose.
* **Convenience Property:**
  * `URL` (Uri): Automatically constructs the full RTSP connection URL based on the `Address`, `Port`, and `Point` properties.
### The Engine: `RTSPServerBlock` (Media Blocks SDK)
When using the Media Blocks SDK, the `RTSPServerBlock` represents the actual GStreamer-based element that performs the streaming.
**Functionality:**
* **Media Sink:** Acts as a terminal point (sink) in a media pipeline, receiving encoded video and audio data.
* **Input Pads:** Provides distinct `VideoInput` and `AudioInput` pads for connecting upstream video and audio sources/encoders.
* **GStreamer Integration:** Manages the underlying GStreamer `rtspserver` and related elements necessary for handling client connections and streaming RTP packets.
* **Availability Check:** The static `IsAvailable()` method allows checking if the necessary GStreamer plugins for RTSP streaming are present on the system.
* **Resource Management:** Implements `IDisposable` for proper cleanup of network sockets and GStreamer resources when the block is no longer needed.
### Practical Usage Examples
#### Example 1: Basic Server Setup (VideoCaptureCoreX / VideoEditCoreX)
```csharp
// 1. Choose and configure encoders
// Use hardware acceleration if available, otherwise fallback to software
var videoEncoder = H264EncoderBlock.GetDefaultSettings();
var audioEncoder = new VOAACEncoderSettings(); // Reliable cross-platform AAC
// 2. Configure server network settings
var settings = new RTSPServerSettings(videoEncoder, audioEncoder)
{
    Port = 8554,
    Address = "0.0.0.0", // Accessible from other machines on the network
    Point = "/livefeed"
};
// 3. Create the output object
var rtspOutput = new RTSPServerOutput(settings);
// 4. Integrate with the SDK engine
// For VideoCaptureCoreX:
// videoCapture is an initialized instance of VideoCaptureCoreX
videoCapture.Outputs_Add(rtspOutput);
// For VideoEditCoreX:
// videoEdit is an initialized instance of VideoEditCoreX
// videoEdit.Output_Format = rtspOutput; // Set before starting editing/playback
```
#### Example 2: Media Blocks Pipeline
```csharp
// Assume 'pipeline' is an initialized MediaBlocksPipeline
// Assume 'videoSource' and 'audioSource' provide unencoded media streams
// 1. Create video and audio encoder settings
var videoEncoder = H264EncoderBlock.GetDefaultSettings();
var audioEncoder = new VOAACEncoderSettings();
// 2. Create RTSP server settings with a specific URL
var serverUri = new Uri("rtsp://192.168.1.50:8554/cam1");
var rtspSettings = new RTSPServerSettings(serverUri, videoEncoder, audioEncoder)
{
    Description = "Camera Feed 1 - Warehouse"
};
// 3. Create the RTSP Server Block
if (!RTSPServerBlock.IsAvailable())
{
    Console.WriteLine("RTSP Server components not available. Check GStreamer installation.");
    return;
}
var rtspSink = new RTSPServerBlock(rtspSettings);
// 4. Connect sources directly to the RTSP server block; the block applies its own encoders internally
pipeline.Connect(videoSource.Output, rtspSink.VideoInput);
pipeline.Connect(audioSource.Output, rtspSink.AudioInput);
// 5. Start the pipeline
await pipeline.StartAsync();
```
#### Example 3: Advanced Configuration with Authentication
```csharp
// Using settings from Example 1...
var secureSettings = new RTSPServerSettings(videoEncoder, audioEncoder)
{
    Port = 8555, // Use a different port
    Address = "192.168.1.100", // Bind to a specific internal IP
    Point = "/secure",
    Username = "viewer",
    Password = "VerySecretPassword!",
    Latency = TimeSpan.FromMilliseconds(400), // Slightly higher latency
    Name = "SecureStream",
    Description = "Authorized access only"
};
var secureRtspOutput = new RTSPServerOutput(secureSettings);
// Add to VideoCaptureCoreX or set for VideoEditCoreX as before
// videoCapture.Outputs_Add(secureRtspOutput);
```
### Streaming Best Practices
1. **Encoder Selection Strategy:**
   * **Prioritize Hardware:** Always prefer hardware encoders (NVENC, QSV, AMF) when available on the target system. They drastically reduce CPU load, allowing for higher resolutions, frame rates, or more simultaneous streams.
   * **Software Fallback:** Use `OpenH264` as a reliable software fallback for broad compatibility when hardware acceleration isn't present or suitable.
   * **Codec Choice:** H.264 remains the most widely compatible codec for RTSP clients. HEVC offers better compression, but client support might be less universal.
2. **Latency Tuning:**
   * **Interactivity vs. Stability:** Lower latency (e.g., 100-200ms) is crucial for applications like video conferencing but makes the stream more susceptible to network hiccups.
   * **Broadcast/Surveillance:** Higher latency (e.g., 500ms-1000ms+) provides larger buffers, improving stream resilience over unstable networks (like Wi-Fi or the internet) at the cost of increased delay. Start with the default (250ms) and adjust based on observed stream quality and requirements.
3. **Network Configuration:**
   * **Security First:** Implement `Username` and `Password` authentication for any stream not intended for public anonymous access.
   * **Binding Address:** Use `"0.0.0.0"` cautiously. For enhanced security, bind explicitly to the network interface (`Address`) intended for client connections if possible.
   * **Firewall Rules:** Meticulously configure system and network firewalls to allow incoming TCP connections on the chosen RTSP `Port`. Also, remember that RTP/RTCP (used for the actual media data) often use dynamic UDP ports; firewalls might need helper modules (like `nf_conntrack_rtsp` on Linux) or broad UDP port ranges opened (less secure).
4. **Resource Management:**
   * **Dispose Pattern:** RTSP server instances hold network resources (sockets) and potentially complex GStreamer pipelines. *Always* ensure they are disposed of correctly using `using` statements or explicit `.Dispose()` calls in `finally` blocks to prevent resource leaks (see the sketch after this list).
   * **Graceful Shutdown:** When stopping the capture or edit process, ensure the output is properly removed or the pipeline is stopped cleanly to allow the RTSP server to shut down gracefully.
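A minimal disposal sketch for the Media Blocks case, assuming a `StopAsync` counterpart to the `StartAsync` call shown in Example 2:
```csharp
// Ensure the RTSP server block is disposed even if streaming fails.
var rtspSink = new RTSPServerBlock(rtspSettings);
try
{
    // ... connect inputs as in Example 2 ...
    await pipeline.StartAsync();
    // ... stream until done ...
}
finally
{
    await pipeline.StopAsync(); // assumed stop call; stop cleanly so the server shuts down gracefully
    rtspSink.Dispose();         // release sockets and GStreamer resources
}
```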
### Performance Considerations
Optimizing RTSP streaming involves balancing quality, latency, and resource usage:
1. **Encoder Impact:** This is often the biggest factor.
   * **Hardware:** Significantly lower CPU usage, higher potential throughput. Requires compatible hardware and drivers.
   * **Software:** High CPU load, especially at higher resolutions/framerates. Limits the number of concurrent streams on a single machine but works universally.
2. **Latency vs. Bandwidth:** Lower latency settings can sometimes lead to increased peak bandwidth usage as the system has less time to smooth out data transmission.
3. **Resource Monitoring:**
   * **CPU:** Keep a close eye on CPU usage, particularly with software encoders. Overload leads to dropped frames and stuttering.
   * **Memory:** Monitor RAM usage, especially if handling multiple streams or complex Media Blocks pipelines.
   * **Network:** Ensure the server's network interface has sufficient bandwidth for the configured bitrate, resolution, and number of connected clients. Required bandwidth = (Video Bitrate + Audio Bitrate) × Number of Clients (a worked example follows this list).
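A quick worked example of that formula:
```csharp
// Required uplink bandwidth = (video bitrate + audio bitrate) × number of clients.
double videoMbps = 4.0;   // e.g., a 1080p H.264 stream
double audioMbps = 0.128; // 128 kbps AAC
int clients = 10;
double requiredMbps = (videoMbps + audioMbps) * clients;
Console.WriteLine($"Required bandwidth: {requiredMbps:F1} Mbps"); // 41.3 Mbps
```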
## Windows-Only RTSP Output (Legacy)
[!badge variant="dark" size="xl" text="VideoCaptureCore"] [!badge variant="dark" size="xl" text="VideoEditCore"]
Older versions of the SDK (`VideoCaptureCore`, `VideoEditCore`) included a simpler, Windows-specific RTSP output mechanism. While functional, it offers less flexibility and codec support compared to the cross-platform `RTSPServerOutput`. **It is generally recommended to use the `CoreX` / Media Blocks approach for new projects.**
### How it Works
This method leverages built-in Windows components or specific bundled filters. Configuration is done directly via properties on the `VideoCaptureCore` or `VideoEditCore` object.
### Sample Configuration Code
```csharp
// Assuming VideoCapture1 is an instance of VisioForge.Core.VideoCapture.VideoCaptureCore
// 1. Enable network streaming globally for the component
VideoCapture1.Network_Streaming_Enabled = true;
// 2. Specifically enable audio streaming (optional, default might be true)
VideoCapture1.Network_Streaming_Audio_Enabled = true;
// 3. Select the desired RTSP format.
// RTSP_H264_AAC_SW indicates software encoding for both H.264 and AAC.
// Other options might exist depending on installed filters/components.
VideoCapture1.Network_Streaming_Format = VisioForge.Types.VFNetworkStreamingFormat.RTSP_H264_AAC_SW;
// 4. Configure Encoder Settings (using MP4Output as a container)
// Even though we aren't creating an MP4 file, the MP4Output class
// is used here to hold H.264 and AAC encoder settings.
var mp4OutputSettings = new VisioForge.Types.Output.MP4Output();
// Configure H.264 settings within mp4OutputSettings
// (Specific properties depend on the SDK version, e.g., bitrate, profile)
// mp4OutputSettings.Video_H264... = ...;
// Configure AAC settings within mp4OutputSettings
// (e.g., bitrate, sample rate)
// mp4OutputSettings.Audio_AAC... = ...;
// 5. Assign the settings container to the network streaming output
VideoCapture1.Network_Streaming_Output = mp4OutputSettings;
// 6. Define the RTSP URL clients will use
// The server will automatically listen on the specified port (5554 here).
VideoCapture1.Network_Streaming_URL = "rtsp://localhost:5554/vfstream";
// Use machine's actual IP instead of localhost for external access.
// After configuration, start the capture/playback as usual
// VideoCapture1.Start();
```
**Note:** This legacy method often relies on DirectShow filters or Media Foundation transforms available on the specific Windows system, making it less predictable and portable than the GStreamer-based cross-platform solution.
---
For more detailed examples and advanced use cases, explore the code samples provided in our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples).
---END OF PAGE---
# Local File: .\dotnet\general\network-streaming\srt.md
---
title: Implementing SRT Protocol Streaming in .NET
description: Learn how to integrate SRT (Secure Reliable Transport) protocol for low-latency video streaming in .NET applications. Includes code examples, hardware acceleration options, and best practices for reliable video delivery.
sidebar_label: SRT
---
# SRT Streaming Implementation Guide for VisioForge .NET SDKs
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## What is SRT and Why Should You Use It?
SRT (Secure Reliable Transport) is a high-performance streaming protocol designed for delivering high-quality, low-latency video across unpredictable networks. Unlike traditional streaming protocols, SRT excels in challenging network conditions by incorporating unique error recovery mechanisms and encryption features.
The VisioForge .NET SDKs provide comprehensive support for SRT streaming through an intuitive configuration API, enabling developers to implement secure, reliable video delivery in their applications with minimal effort.
## Getting Started with SRT in VisioForge
### Supported SDK Platforms
[!badge variant="dark" size="xl" text="VideoCaptureCoreX"] [!badge variant="dark" size="xl" text="VideoEditCoreX"] [!badge variant="dark" size="xl" text="MediaBlocksPipeline"]
### Basic SRT Configuration
Implementing SRT streaming in your application starts with specifying your streaming destination URL. The SRT URL follows a standard format that includes protocol, host, and port information.
#### Video Capture SDK Implementation
```csharp
// Initialize SRT output with destination URL
var srtOutput = new SRTOutput("srt://streaming-server:1234");
// Add the configured SRT output to your capture engine
videoCapture.Outputs_Add(srtOutput, true); // videoCapture is an instance of VideoCaptureCoreX
```
#### Media Blocks SDK Implementation
```csharp
// Create an SRT sink block with appropriate settings
var srtSink = new SRTMPEGTSSinkBlock(new SRTSinkSettings() { Uri = "srt://:8888" });
// Configure encoders for SRT compatibility
h264Encoder.Settings.ParseStream = false; // Disable parsing for H264 encoder
// Connect your video encoder to the SRT sink
pipeline.Connect(h264Encoder.Output, srtSink.CreateNewInput(MediaBlockPadMediaType.Video));
// Connect your audio encoder to the SRT sink
pipeline.Connect(aacEncoder.Output, srtSink.CreateNewInput(MediaBlockPadMediaType.Audio));
```
## Video Encoding Options for SRT Streaming
The VisioForge SDKs offer flexible encoding options to balance quality, performance, and hardware utilization. You can choose from software-based encoders or hardware-accelerated options based on your specific requirements.
### Software-Based Video Encoders
- **OpenH264**: The default cross-platform encoder that provides excellent compatibility across different environments
### Hardware-Accelerated Video Encoders
- **NVIDIA NVENC (H.264/HEVC)**: Leverages NVIDIA GPU acceleration for high-performance encoding
- **Intel Quick Sync Video (H.264/HEVC)**: Utilizes Intel's dedicated media processing hardware
- **AMD AMF (H.264/H.265)**: Enables hardware acceleration on AMD graphics processors
- **Microsoft Media Foundation HEVC**: Windows-specific hardware-accelerated encoder
#### Example: Configuring NVIDIA Hardware Acceleration
```csharp
// Set SRT output to use NVIDIA hardware acceleration
srtOutput.Video = new NVENCH264EncoderSettings();
```
## Audio Encoding for SRT Streams
Audio quality is critical for many streaming applications. The VisioForge SDKs provide multiple audio encoding options:
- **VO-AAC**: Cross-platform AAC encoder with consistent performance
- **AVENC AAC**: FFmpeg-based AAC encoder with extensive configuration options
- **MF AAC**: Microsoft Media Foundation AAC encoder (Windows-only)
The SDK automatically selects the most appropriate default audio encoder based on the platform (an explicit selection sketch follows this list):
- Windows systems default to MF AAC
- Other platforms default to VO AAC
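If you need to pin the choice explicitly, a platform check like the following should work; this sketch assumes `SRTOutput` exposes an `Audio` property analogous to `RTMPOutput.Audio` shown in the RTMP guide:
```csharp
// Explicit platform-based audio encoder selection (assumes srtOutput.Audio
// mirrors the RTMPOutput.Audio property from the RTMP guide).
if (OperatingSystem.IsWindows())
{
    srtOutput.Audio = new MFAACEncoderSettings(); // Media Foundation AAC
}
else
{
    srtOutput.Audio = new VOAACEncoderSettings(); // cross-platform AAC
}
```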
## Platform-Specific Optimizations
### Windows-Specific Features
When running on Windows systems, the SDK can leverage Microsoft Media Foundation frameworks:
- MF AAC encoder provides efficient audio encoding
- MF HEVC encoder delivers high-quality, efficient video compression
### macOS Optimizations
On macOS platforms, the SDK automatically selects:
- Apple Media H264 encoder for optimized video encoding
- VO AAC encoder for reliable audio encoding
## Advanced SRT Configuration Options
### Custom Media Processing Pipeline
For applications with specialized requirements, the SDK supports custom processing for both video and audio streams:
```csharp
// Add custom video processing before encoding
srtOutput.CustomVideoProcessor = new SomeMediaBlock();
// Add custom audio processing before encoding
srtOutput.CustomAudioProcessor = new SomeMediaBlock();
```
These processors enable you to implement filters, transformations, or analytics before encoding and transmission.
### SRT Sink Configuration
Fine-tune your SRT connection using the SRTSinkSettings class:
```csharp
// Update the SRT destination URI
srtOutput.Sink.Uri = "srt://new-server:5678";
```
## Best Practices for SRT Streaming
### Optimizing Encoder Selection
1. **Hardware Acceleration Priority**: Always choose hardware-accelerated encoders when available. The performance benefits are significant, particularly for high-resolution streaming.
2. **Smart Fallback Mechanisms**: Implement encoder availability checks to automatically fall back to software encoding if hardware acceleration is unavailable:
```csharp
if (NVENCH264EncoderSettings.IsAvailable())
{
    srtOutput.Video = new NVENCH264EncoderSettings();
}
else
{
    srtOutput.Video = new OpenH264EncoderSettings();
}
```
### Performance Optimization
1. **Bitrate Configuration**: Carefully adjust encoder bitrates based on your content type and target network conditions. Higher bitrates increase quality but require more bandwidth.
2. **Resource Monitoring**: Monitor CPU and GPU usage during streaming to identify potential bottlenecks. If CPU usage is consistently high, consider switching to hardware acceleration.
3. **Latency Management**: Configure appropriate buffer sizes based on your latency requirements. Smaller buffers reduce latency but may increase susceptibility to network fluctuations.
## Troubleshooting SRT Implementations
### Common Issues and Solutions
#### Encoder Initialization Failures
- **Problem**: Selected encoder fails to initialize or throws exceptions
- **Solution**: Verify the encoder is supported on your platform and that required drivers are installed and up-to-date
#### Streaming Connection Problems
- **Problem**: Unable to establish SRT connection
- **Solution**: Confirm the SRT URL format is correct and that specified ports are open in all firewalls and network equipment
#### Performance Bottlenecks
- **Problem**: High CPU usage or dropped frames during streaming
- **Solution**: Consider switching to hardware-accelerated encoders or reducing resolution/bitrate
## Integration Examples
### Complete SRT Streaming Setup
```csharp
// Create and configure SRT output
var srtOutput = new SRTOutput("srt://streaming-server:1234");
// Configure video encoding - try hardware acceleration with fallback
if (NVENCH264EncoderSettings.IsAvailable())
{
    var nvencSettings = new NVENCH264EncoderSettings();
    nvencSettings.Bitrate = 4000000; // 4 Mbps
    srtOutput.Video = nvencSettings;
}
else
{
    var softwareSettings = new OpenH264EncoderSettings();
    softwareSettings.Bitrate = 2000000; // 2 Mbps for software encoding
    srtOutput.Video = softwareSettings;
}
// Add to capture engine
videoCapture.Outputs_Add(srtOutput, true);
// Start streaming
videoCapture.Start();
```
## Conclusion
SRT streaming in VisioForge .NET SDKs provides a powerful solution for high-quality, low-latency video delivery across challenging network conditions. By leveraging the flexible encoder options and configuration capabilities, developers can implement robust streaming solutions for a wide range of applications.
Whether you're building a live streaming platform, video conferencing solution, or content delivery system, the SRT protocol's combination of security, reliability, and performance makes it an excellent choice for modern video applications.
For more information about specific encoders or advanced configuration options, refer to the comprehensive VisioForge SDK documentation.
---END OF PAGE---
# Local File: .\dotnet\general\network-streaming\udp.md
---
title: UDP Video and Audio Streaming in .NET
description: Learn how to implement high-performance UDP streaming for video and audio in .NET applications. Detailed guide covers encoding, configuration, multicast support, and best practices for real-time media transmission.
sidebar_label: UDP
---
# UDP Streaming with VisioForge SDKs
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## Introduction to UDP Streaming
The User Datagram Protocol (UDP) is a lightweight, connectionless transport protocol that provides a simple interface between network applications and the underlying IP network. Unlike TCP, UDP offers minimal overhead and doesn't guarantee packet delivery, making it ideal for real-time applications where speed is crucial and occasional packet loss is acceptable.
VisioForge SDKs offer robust support for UDP streaming, enabling developers to implement high-performance, low-latency streaming solutions for various applications, including live broadcasts, video surveillance, and real-time communication systems.
## Key Features and Capabilities
The VisioForge SDK suite provides comprehensive UDP streaming functionality with the following key features:
### Video and Audio Codec Support
- **Video Codecs**: Full support for H.264 (AVC) and H.265 (HEVC), offering excellent compression efficiency while maintaining high video quality.
- **Audio Codec**: Advanced Audio Coding (AAC) support, providing superior audio quality at lower bitrates compared to older audio codecs.
### MPEG Transport Stream (MPEG-TS)
The SDK utilizes MPEG-TS as the container format for UDP streaming. MPEG-TS offers several advantages:
- Designed specifically for transmission over potentially unreliable networks
- Built-in error correction capabilities
- Support for multiplexing multiple audio and video streams
- Low latency characteristics ideal for live streaming
### FFMPEG Integration
VisioForge SDKs leverage the power of FFMPEG for UDP streaming, ensuring:
- High performance encoding and streaming
- Wide compatibility with various networks and receiving clients
- Reliable packet handling and stream management
### Unicast and Multicast Support
- **Unicast**: Point-to-point transmission from a single sender to a single receiver
- **Multicast**: Efficient distribution of the same content to multiple recipients simultaneously without duplicating bandwidth at the source
## Technical Implementation Details
UDP streaming in VisioForge SDKs involves several key technical components:
1. **Video Encoding**: Source video is compressed using H.264 or HEVC encoders with configurable parameters for bitrate, resolution, and frame rate.
2. **Audio Encoding**: Audio streams are processed through AAC encoders with adjustable quality settings.
3. **Multiplexing**: Video and audio streams are combined into a single MPEG-TS container.
4. **Packetization**: The MPEG-TS stream is divided into UDP packets of appropriate size for network transmission.
5. **Transmission**: Packets are sent over the network to specified unicast or multicast addresses.
The implementation prioritizes low latency while maintaining sufficient quality for professional applications. Advanced buffering mechanisms help manage network jitter and ensure smooth playback at the receiving end.
## Windows-only UDP Output Implementation
[!badge variant="dark" size="xl" text="VideoCaptureCore"] [!badge variant="dark" size="xl" text="VideoEditCore"]
### Step 1: Enable Network Streaming
The first step is to enable network streaming functionality in your application. This is done by setting the `Network_Streaming_Enabled` property to true:
```cs
VideoCapture1.Network_Streaming_Enabled = true;
```
### Step 2: Configure Audio Streaming (Optional)
If your application requires audio streaming alongside video, enable it with:
```cs
VideoCapture1.Network_Streaming_Audio_Enabled = true;
```
### Step 3: Set the Streaming Format
Specify UDP as the streaming format by setting the `Network_Streaming_Format` property to `UDP_FFMPEG_EXE`:
```cs
VideoCapture1.Network_Streaming_Format = NetworkStreamingFormat.UDP_FFMPEG_EXE;
```
### Step 4: Configure the UDP Stream URL
Set the destination URL for your UDP stream. For a basic unicast stream to localhost:
```cs
VideoCapture1.Network_Streaming_URL = "udp://127.0.0.1:10000?pkt_size=1316";
```
The `pkt_size` parameter defines the UDP packet size. The value 1316 holds exactly seven 188-byte MPEG-TS packets and fits comfortably within the standard 1500-byte Ethernet MTU, allowing efficient transmission while avoiding IP fragmentation.
### Step 5: Multicast Configuration (Optional)
For multicast streaming to multiple receivers, use a multicast address (typically in the range 224.0.0.0 to 239.255.255.255):
```cs
VideoCapture1.Network_Streaming_URL = "udp://239.101.101.1:1234?ttl=1&pkt_size=1316";
```
The additional parameters include:
- **ttl**: Time-to-live value that determines how many network hops the packets can traverse
- **pkt_size**: Packet size as explained above
### Step 6: Configure Output Settings
Finally, configure the streaming output parameters using the `FFMPEGEXEOutput` class:
```cs
var ffmpegOutput = new FFMPEGEXEOutput();
ffmpegOutput.FillDefaults(DefaultsProfile.MP4_H264_AAC, true);
ffmpegOutput.OutputMuxer = OutputMuxer.MPEGTS;
VideoCapture1.Network_Streaming_Output = ffmpegOutput;
```
This code:
1. Creates a new FFMPEG output configuration
2. Applies default settings for H.264 video and AAC audio
3. Specifies MPEG-TS as the container format
4. Assigns this configuration to the streaming output
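Once streaming starts, you can verify the output on the receiving side with any MPEG-TS-capable player. For example, using ffplay (shipped with FFMPEG):
```
ffplay udp://127.0.0.1:10000
```
For multicast streams, point the player at the multicast address instead, e.g. `udp://239.101.101.1:1234`.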
## Advanced Configuration Options
### Bitrate Management
For optimal streaming performance, consider adjusting the video and audio bitrates based on your network capacity:
```cs
ffmpegOutput.VideoSettings.Bitrate = 2500000; // 2.5 Mbps for video
ffmpegOutput.AudioSettings.Bitrate = 128000; // 128 kbps for audio
```
### Resolution and Frame Rate
Lower resolutions and frame rates reduce bandwidth requirements:
```cs
VideoCapture1.Video_Resize_Enabled = true;
VideoCapture1.Video_Resize_Width = 1280; // 720p resolution
VideoCapture1.Video_Resize_Height = 720;
VideoCapture1.Video_FrameRate = 30; // 30 fps
```
### Buffer Size Configuration
Adjusting buffer sizes can help manage latency vs. stability trade-offs:
```cs
VideoCapture1.Network_Streaming_BufferSize = 8192; // in KB
```
## Best Practices for UDP Streaming
### Network Considerations
1. **Bandwidth Assessment**: Ensure sufficient bandwidth for your target quality. As a guideline:
   - SD quality (480p): 1-2 Mbps
   - HD quality (720p): 2.5-4 Mbps
   - Full HD (1080p): 4-8 Mbps
2. **Network Stability**: UDP doesn't guarantee packet delivery. In unstable networks, consider:
   - Reducing resolution or bitrate
   - Implementing application-level error recovery
   - Using forward error correction when available
3. **Firewall Configuration**: Ensure that UDP ports are open on both sender and receiver firewalls.
### Performance Optimization
1. **Hardware Acceleration**: When available, enable hardware acceleration for encoding:
```cs
ffmpegOutput.VideoSettings.HWAcceleration = HWAcceleration.Auto;
```
2. **Keyframe Intervals**: For lower latency, reduce keyframe (I-frame) intervals:
```cs
ffmpegOutput.VideoSettings.KeyframeInterval = 60; // One keyframe every 2 seconds at 30 fps
```
3. **Preset Selection**: Choose encoding presets based on your CPU capacity and latency requirements:
```cs
ffmpegOutput.VideoSettings.EncoderPreset = H264EncoderPreset.Ultrafast; // Lowest latency, higher bitrate
// or
ffmpegOutput.VideoSettings.EncoderPreset = H264EncoderPreset.Medium; // Balance between quality and CPU load
```
## Troubleshooting Common Issues
1. **Stream Not Receiving**: Verify network connectivity, port availability, and firewall settings.
2. **High Latency**: Check network congestion, reduce bitrate, or adjust buffer sizes.
3. **Poor Quality**: Increase bitrate, adjust encoding settings, or check for network packet loss.
4. **Audio/Video Sync Issues**: Ensure proper timestamp synchronization in your application.
## Conclusion
UDP streaming with VisioForge SDKs provides a powerful solution for real-time video and audio transmission with minimal latency. By leveraging H.264/HEVC video codecs, AAC audio, and MPEG-TS packaging, developers can create robust streaming applications suitable for a wide range of use cases.
The flexibility of the SDK allows for fine-tuning of all streaming parameters, enabling optimization for specific network conditions and quality requirements. Whether implementing a simple point-to-point stream or a complex multicast distribution system, VisioForge's UDP streaming capabilities provide the necessary tools for success.
---
Visit our [GitHub](https://github.com/visioforge/.Net-SDK-s-samples) page to get more code samples and working demonstrations of UDP streaming implementations.
---END OF PAGE---
# Local File: .\dotnet\general\network-streaming\wmv.md
---
title: WMV Network Streaming with .NET Development
description: Learn how to implement Windows Media Video (WMV) streaming in .NET applications. Step-by-step guide for developers covering setup, configuration, client connections, and performance optimization for network video streaming.
sidebar_label: Windows Media Video
---
# Windows Media Video (WMV) Network Streaming Implementation Guide
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge variant="dark" size="xl" text="VideoCaptureCore"]
## Introduction to WMV Streaming Technology
Windows Media Video (WMV) represents a versatile and powerful streaming technology developed by Microsoft. As an integral component of the Windows Media framework, WMV has established itself as a reliable solution for efficiently delivering video content across networks. This format utilizes sophisticated compression algorithms that substantially reduce file sizes while maintaining acceptable visual quality, making it particularly well-suited for streaming applications where bandwidth optimization is critical.
The WMV format supports an extensive range of video resolutions and bitrates, allowing developers to tailor their streaming implementations to accommodate varying network conditions and end-user requirements. This adaptability makes WMV an excellent choice for applications that need to serve diverse client environments with different connectivity constraints.
## Technical Overview of WMV Format
### Key Features and Capabilities
WMV implements the Advanced Systems Format (ASF) container, which provides several technical advantages for streaming applications:
- **Efficient compression**: Employs codec technology that balances quality with file size
- **Scalable bitrate adjustment**: Adapts to available bandwidth conditions
- **Error resilience**: Built-in mechanisms for packet loss recovery
- **Content protection**: Supports Digital Rights Management (DRM) when required
- **Metadata support**: Allows embedding of descriptive information about the stream
### Technical Specifications
| Feature | Specification |
|---------|---------------|
| Codec | VC-1 (primarily) |
| Container | ASF (Advanced Systems Format) |
| Supported resolutions | Up to 4K UHD (depending on profile) |
| Bitrate range | 10 Kbps to 20+ Mbps |
| Audio support | WMA (Windows Media Audio) |
| Streaming protocols | HTTP, RTSP, MMS |
## Windows-Only WMV Streaming Implementation
[!badge variant="dark" size="xl" text="VideoCaptureCore"]
The VisioForge SDK provides a robust framework for implementing WMV streaming in Windows environments. This implementation allows applications to broadcast video over networks while simultaneously capturing to a file if desired.
### Implementation Prerequisites
Before implementing WMV streaming in your application, ensure the following requirements are met:
1. Your development environment includes the VisioForge Video Capture SDK
2. Required redistributables are installed (details provided in the Deployment section)
3. Your application targets Windows operating systems
4. Network ports are properly configured and accessible
### Step-by-Step Implementation Guide
#### 1. Initialize the Video Capture Component
Begin by setting up the core video capture component in your application:
```cs
// Initialize the VideoCapture component
var VideoCapture1 = new VisioForge.Core.VideoCaptureCore();
// Configure basic capture settings (adjust as needed)
// ...
```
#### 2. Enable Network Streaming
To activate network streaming functionality, you need to enable it explicitly and set the format to WMV:
```cs
// Enable network streaming
VideoCapture1.Network_Streaming_Enabled = true;
// Set the streaming format to WMV
VideoCapture1.Network_Streaming_Format = NetworkStreamingFormat.WMV;
```
#### 3. Configure WMV Output Settings
Create and configure a WMV output object with appropriate settings:
```cs
// Create WMV output configuration
var wmvOutput = new WMVOutput();
// Optional: Configure WMV-specific settings
wmvOutput.Bitrate = 2000000; // 2 Mbps
wmvOutput.KeyFrameInterval = 3; // seconds between keyframes
wmvOutput.Quality = 85; // Quality setting (0-100)
// Apply WMV output configuration
VideoCapture1.Network_Streaming_Output = wmvOutput;
// Set network port for client connections
VideoCapture1.Network_Streaming_Network_Port = 12345;
// Optional: Set maximum number of concurrent clients (default is 10)
VideoCapture1.Network_Streaming_Max_Clients = 25;
```
#### 4. Start the Streaming Process
Once everything is configured, you can start the streaming process:
```cs
// Start the streaming process
try
{
    VideoCapture1.Start();

    // The streaming URL is now available for clients
    string streamingUrl = VideoCapture1.Network_Streaming_URL;

    // Display or log the streaming URL for client connections
    Console.WriteLine($"Streaming available at: {streamingUrl}");
}
catch (Exception ex)
{
    // Handle any exceptions during streaming initialization
    Console.WriteLine($"Streaming error: {ex.Message}");
}
```
### Advanced Configuration Options
#### Custom WMV Profiles
For more precise control over your WMV stream, you can implement custom encoding profiles:
```cs
// Create custom WMV profile
var customProfile = new WMVProfile();
customProfile.VideoCodec = WMVVideoCodec.WMV9;
customProfile.AudioCodec = WMVAudioCodec.WMAudioV9;
customProfile.VideoBitrate = 1500000; // 1.5 Mbps
customProfile.AudioBitrate = 128000; // 128 Kbps
customProfile.BufferWindow = 5000; // Buffer window in milliseconds
// Apply custom profile
wmvOutput.Profile = customProfile;
VideoCapture1.Network_Streaming_Output = wmvOutput;
```
## Client-Side Connection Implementation
Clients can connect to the WMV stream using Windows Media Player or any application that supports the Windows Media streaming protocol. The connection URL follows this format:
```
http://[server_ip]:[port]/
```
For example:
```
http://192.168.1.100:12345/
```
### Sample Client Connection Code
For programmatic connections to the WMV stream in client applications:
```cs
// Client-side WMV stream connection using Windows Media Player control
using System.Windows.Forms;

public partial class StreamViewerForm : Form
{
    public StreamViewerForm(string streamUrl)
    {
        InitializeComponent();

        // Assuming you have a Windows Media Player control named 'wmPlayer' on your form
        wmPlayer.URL = streamUrl;
        wmPlayer.Ctlcontrols.play();
    }
}
```
## Performance Optimization
When implementing WMV network streaming, consider these optimization strategies:
1. **Adjust bitrate based on network conditions**: Lower bitrates for constrained networks (a simple selection sketch follows this list)
2. **Balance keyframe intervals**: Frequent keyframes improve seek performance but increase bandwidth
3. **Monitor CPU usage**: WMV encoding can be CPU-intensive; adjust quality settings accordingly
4. **Implement network quality detection**: Adapt streaming parameters dynamically
5. **Consider buffer settings**: Larger buffers improve stability but increase latency
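Building on the first point, here is a minimal sketch of bitrate selection; the helper name and the 25% head-room factor are illustrative assumptions, not SDK requirements:
```cs
// Hypothetical helper: pick a WMV bitrate from measured bandwidth (in bps),
// leaving roughly 25% headroom for network jitter and protocol overhead.
int SelectWmvBitrate(long measuredBandwidthBps)
{
    var target = (int)(measuredBandwidthBps * 0.75);

    // Clamp to a sensible range for WMV network streaming
    return Math.Max(500_000, Math.Min(target, 8_000_000));
}

wmvOutput.Bitrate = SelectWmvBitrate(4_000_000); // ~4 Mbps measured -> 3 Mbps stream
```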
## Troubleshooting Common Issues
| Issue | Possible Solution |
|-------|-------------------|
| Connection failures | Verify network port is open in firewall settings |
| Poor video quality | Increase bitrate or adjust compression settings |
| High CPU usage | Reduce resolution or frame rate |
| Client buffering | Adjust buffer window settings or reduce bitrate |
| Authentication errors | Verify credentials on both server and client |
## Deployment Requirements
### Required Redistributables
To successfully deploy applications using WMV streaming functionality, include the following redistributable packages:
- Video capture redist [x86](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.VideoCapture.x86/) [x64](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.VideoCapture.x64/)
### Installation Commands
Using NuGet Package Manager:
```
Install-Package VisioForge.DotNet.Core.Redist.VideoCapture.x64
```
Or for 32-bit systems:
```
Install-Package VisioForge.DotNet.Core.Redist.VideoCapture.x86
```
## Conclusion
WMV network streaming provides a reliable way to broadcast video content across networks in Windows environments. The VisioForge SDK simplifies implementation with its comprehensive API while giving developers fine-grained control over streaming parameters. By following the guidelines in this document, you can create robust streaming applications that deliver high-quality video content to multiple clients simultaneously.
For more advanced implementations and additional code samples, visit our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples).
---END OF PAGE---
# Local File: .\dotnet\general\network-streaming\youtube.md
---
title: YouTube Live Streaming Integration for .NET Apps
description: Learn how to implement YouTube RTMP streaming in .NET applications with step-by-step guidance on video encoders, audio configuration, and cross-platform optimization. Includes code examples and best practices for developers.
sidebar_label: YouTube Streaming
---
# YouTube Live Streaming with VisioForge SDKs
## Introduction to YouTube Streaming Integration
The YouTube RTMP output functionality in VisioForge SDKs enables developers to create robust .NET applications that stream high-quality video content directly to YouTube. This implementation leverages various video and audio encoders to optimize streaming performance across different hardware configurations and platforms. This comprehensive guide provides detailed instructions on setting up, configuring, and troubleshooting YouTube streaming in your applications.
## Supported SDK Platforms
[!badge variant="dark" size="xl" text="VideoCaptureCoreX"] [!badge variant="dark" size="xl" text="VideoEditCoreX"] [!badge variant="dark" size="xl" text="MediaBlocksPipeline"]
All major VisioForge SDK platforms provide cross-platform capabilities for YouTube streaming, ensuring consistent functionality across Windows, macOS, and other operating systems.
## Understanding the YouTubeOutput Class
The `YouTubeOutput` class serves as the primary interface for YouTube streaming configuration, offering extensive customization options including:
- **Video encoder selection and configuration**: Choose from multiple hardware-accelerated and software-based encoders
- **Audio encoder selection and configuration**: Configure AAC audio encoders with custom parameters
- **Custom video and audio processing**: Apply filters and transformations before streaming
- **YouTube-specific sink settings**: Fine-tune streaming parameters specific to YouTube's requirements
## Getting Started: Basic Setup Process
### Stream Key Configuration
The foundation of any YouTube streaming implementation begins with your YouTube stream key. This authentication token connects your application to your YouTube channel:
```csharp
// Initialize YouTube output with your stream key
var youtubeOutput = new YouTubeOutput("your-youtube-stream-key");
```
## Video Encoder Configuration Options
### Comprehensive Video Encoder Support
The SDK provides support for multiple video encoders, each optimized for different hardware environments and performance requirements:
| Encoder Type | Platform/Hardware | Performance Characteristics |
|--------------|-------------------|----------------------------|
| OpenH264 | Cross-platform (software) | CPU-intensive, widely compatible |
| NVENC H264 | NVIDIA GPUs | Hardware-accelerated, reduced CPU usage |
| QSV H264 | Intel CPUs with Quick Sync | Hardware-accelerated, efficient |
| AMF H264 | AMD GPUs | Hardware-accelerated for AMD hardware |
| HEVC/H265 | Various (where supported) | Higher compression efficiency |
### Dynamic Encoder Selection
The system intelligently selects default encoders based on the platform (OpenH264 on most platforms, Apple Media H264 on macOS). Developers can override these defaults to leverage specific hardware capabilities:
```csharp
// Example: Using NVIDIA NVENC encoder if available
if (NVENCH264EncoderSettings.IsAvailable())
{
    youtubeOutput.Video = new NVENCH264EncoderSettings();
}
```
### Configuring Video Encoding Parameters
Each encoder supports customization of various parameters to optimize streaming quality and performance:
```csharp
var videoSettings = new OpenH264EncoderSettings
{
    Bitrate = 4500000,     // 4.5 Mbps
    KeyframeInterval = 60, // Keyframe every 2 seconds at 30fps
    // Add other encoder-specific settings as needed
};

youtubeOutput.Video = videoSettings;
```
## Audio Encoder Configuration
### Supported AAC Audio Encoders
The SDK supports multiple AAC audio encoders to ensure optimal audio quality across different platforms:
- **VO-AAC**: Default for non-Windows platforms, providing consistent audio encoding
- **AVENC AAC**: Alternative cross-platform option with different performance characteristics
- **MF AAC**: Windows-specific encoder leveraging Media Foundation
### Audio Encoder Configuration Example
```csharp
// Example: Configure audio encoder settings
var audioSettings = new VOAACEncoderSettings
{
    Bitrate = 128000,  // 128 kbps
    SampleRate = 48000 // 48 kHz (YouTube recommended)
};

youtubeOutput.Audio = audioSettings;
```
## Platform-Specific Optimization Strategies
### Windows-Specific Features
- Leverages Media Foundation (MF) encoders for optimal Windows performance
- Provides extended HEVC/H265 encoding capabilities
- Defaults to MF AAC for audio encoding, optimized for the Windows platform
### macOS Implementation Considerations
- Automatically utilizes Apple Media H264 encoder for native performance
- Implements VO-AAC for audio encoding with macOS optimization
### Cross-Platform Compatibility Layer
- Falls back to OpenH264 for video on platforms without specific optimizations
- Utilizes VO-AAC for consistent audio encoding across diverse environments
## Best Practices for Optimal Streaming
### Hardware-Aware Encoder Selection
- Always verify encoder availability before implementing hardware-accelerated options
- Implement fallback mechanisms to OpenH264 when specialized hardware is unavailable (see the sketch after this list)
- Consider platform-specific encoder capabilities when designing cross-platform applications
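A minimal fallback chain might look like the sketch below. `IsAvailable()` is demonstrated for NVENC earlier in this guide; its presence on the QSV and AMF settings classes is assumed here by analogy:
```csharp
// Pick the first available hardware encoder, falling back to software OpenH264.
if (NVENCH264EncoderSettings.IsAvailable())
{
    youtubeOutput.Video = new NVENCH264EncoderSettings();
}
else if (QSVH264EncoderSettings.IsAvailable()) // assumed API, mirroring NVENC
{
    youtubeOutput.Video = new QSVH264EncoderSettings();
}
else if (AMFH264EncoderSettings.IsAvailable()) // assumed API, mirroring NVENC
{
    youtubeOutput.Video = new AMFH264EncoderSettings();
}
else
{
    youtubeOutput.Video = new OpenH264EncoderSettings();
}
```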
### YouTube-Optimized Stream Settings
- Adhere to YouTube's recommended bitrates for your target resolution
- Implement the standard 2-second keyframe interval (60 frames at 30fps)
- Configure 48 kHz audio sample rate to meet YouTube's audio specifications
### Robust Error Management
- Develop comprehensive error handling for connection issues
- Implement continuous monitoring of encoder performance
- Create diagnostic tools to evaluate stream health during operation
## Complete Implementation Examples
### VideoCaptureCoreX/VideoEditCoreX Integration
This example demonstrates a complete YouTube streaming implementation with error handling for VideoCaptureCoreX/VideoEditCoreX:
```csharp
try
{
    var youtubeOutput = new YouTubeOutput("your-stream-key");

    // Configure video encoding
    if (NVENCH264EncoderSettings.IsAvailable())
    {
        youtubeOutput.Video = new NVENCH264EncoderSettings
        {
            Bitrate = 4500000,
            KeyframeInterval = 60
        };
    }

    // Configure audio encoding
    youtubeOutput.Audio = new MFAACEncoderSettings
    {
        Bitrate = 128000,
        SampleRate = 48000
    };

    // Additional sink settings if needed
    youtubeOutput.Sink.CustomProperty = "value";

    // Add the output to the video capture instance
    core.Outputs_Add(youtubeOutput, true); // core is an instance of VideoCaptureCoreX

    // Or set the output for the video edit instance
    videoEdit.Output_Format = youtubeOutput; // videoEdit is an instance of VideoEditCoreX
}
catch (Exception ex)
{
    // Handle initialization errors
    Console.WriteLine($"Failed to initialize YouTube output: {ex.Message}");
}
```
### Media Blocks SDK Implementation
For developers using the Media Blocks SDK, this example shows how to connect encoder components with the YouTube sink:
```csharp
// Create the YouTube sink block (using RTMP)
var youtubeSinkBlock = new YouTubeSinkBlock(new YouTubeSinkSettings("streaming key"));

// Connect the video encoder to the sink block (pipeline, h264Encoder and
// aacEncoder are assumed to have been created earlier during pipeline setup)
pipeline.Connect(h264Encoder.Output, youtubeSinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));

// Connect the audio encoder to the sink block
pipeline.Connect(aacEncoder.Output, youtubeSinkBlock.CreateNewInput(MediaBlockPadMediaType.Audio));
```
## Troubleshooting Common Issues
### Encoder Initialization Problems
- Verify hardware encoder availability through system diagnostics
- Ensure system meets all requirements for your chosen encoder
- Confirm proper installation of hardware-specific drivers for GPU acceleration
### Stream Connection Failures
- Validate stream key format and expiration status
- Test network connectivity to YouTube's streaming servers
- Verify YouTube service status through official channels
### Performance Optimization
- Monitor system resource utilization during streaming sessions
- Adjust encoding bitrates and settings based on available resources
- Consider switching to hardware acceleration when CPU usage is excessive
## Additional Resources and Documentation
- [Official YouTube Live Streaming Documentation](https://support.google.com/youtube/topic/9257891)
- [YouTube Technical Stream Requirements](https://support.google.com/youtube/answer/2853702)
By leveraging these detailed configuration options and best practices, developers can create robust YouTube streaming applications using VisioForge SDKs that deliver high-quality content while optimizing system resource utilization across multiple platforms.
---END OF PAGE---
# Local File: .\dotnet\general\output-formats\avi.md
---
title: AVI File Output Guide for .NET SDK Development
description: Learn how to implement AVI file output in .NET applications with step-by-step examples. Covers video and audio encoding options, hardware acceleration, cross-platform support, and best practices for developers working with multimedia container formats.
sidebar_label: AVI
---
# AVI File Output in VisioForge .NET SDKs
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
AVI (Audio Video Interleave) is a Microsoft-developed multimedia container format that stores both audio and video data in a single file with synchronized playback. It supports both compressed and uncompressed data, offering flexibility while sometimes resulting in larger file sizes.
## Technical Overview of AVI Format
AVI files use a RIFF (Resource Interchange File Format) structure to organize data. This format divides content into chunks, with each chunk containing either audio or video frames. Key technical aspects include:
- Container format supporting multiple audio and video codecs
- Interleaved audio and video data for synchronized playback
- Maximum file size of 4GB in standard AVI (extended to 16EB in OpenDML AVI)
- Support for multiple audio tracks and subtitles
- Widely supported across platforms and media players
Despite newer container formats like MP4 and MKV offering more features, AVI remains valuable for certain workflows due to its simplicity and compatibility with legacy systems.
## Cross-Platform AVI Implementation
[!badge variant="dark" size="xl" text="VideoCaptureCoreX"] [!badge variant="dark" size="xl" text="VideoEditCoreX"] [!badge variant="dark" size="xl" text="MediaBlocksPipeline"]
The [AVIOutput](https://api.visioforge.org/dotnet/api/VisioForge.Core.Types.Output.AVIOutput.html) class provides a robust way to configure and generate AVI files with various encoding options.
### Setting Up AVI Output
Create an `AVIOutput` instance by specifying a target filename:
```csharp
var aviOutput = new AVIOutput("output_video.avi");
```
This constructor automatically initializes default encoders:
- Video: OpenH264 encoder
- Audio: MP3 encoder
### Video Encoder Options
Configure video encoding through the `Video` property with several available encoders:
#### Standard Encoder
```csharp
// Open-source H.264 encoder for general use
aviOutput.Video = new OpenH264EncoderSettings();
```
#### Hardware-Accelerated Encoders
```csharp
// NVIDIA GPU acceleration
aviOutput.Video = new NVENCH264EncoderSettings(); // H.264
aviOutput.Video = new NVENCHEVCEncoderSettings(); // HEVC
// Intel Quick Sync acceleration
aviOutput.Video = new QSVH264EncoderSettings(); // H.264
aviOutput.Video = new QSVHEVCEncoderSettings(); // HEVC
// AMD GPU acceleration
aviOutput.Video = new AMFH264EncoderSettings(); // H.264
aviOutput.Video = new AMFHEVCEncoderSettings(); // HEVC
```
#### Special Purpose Encoder
```csharp
// Motion JPEG for high-quality frame-by-frame encoding
aviOutput.Video = new MJPEGEncoderSettings();
```
### Audio Encoder Options
The `Audio` property lets you configure audio encoding settings:
```csharp
// Standard MP3 encoding
aviOutput.Audio = new MP3EncoderSettings();
// AAC encoding options
aviOutput.Audio = new VOAACEncoderSettings();
aviOutput.Audio = new AVENCAACEncoderSettings();
aviOutput.Audio = new MFAACEncoderSettings(); // Windows only
```
### Integration with SDK Components
#### Video Capture SDK
```csharp
var core = new VideoCaptureCoreX();
core.Outputs_Add(aviOutput, true);
```
#### Video Edit SDK
```csharp
var core = new VideoEditCoreX();
core.Output_Format = aviOutput;
```
#### Media Blocks SDK
```csharp
var aac = new VOAACEncoderSettings();
var h264 = new OpenH264EncoderSettings();
var aviSinkSettings = new AVISinkSettings("output.avi");
var aviOutput = new AVIOutputBlock(aviSinkSettings, h264, aac);
```
### File Management
You can get or change the output filename after initialization:
```csharp
// Get current filename
string currentFile = aviOutput.GetFilename();
// Set new filename
aviOutput.SetFilename("new_output.avi");
```
### Complete Example
Here's a full example showing how to configure AVI output with hardware acceleration:
```csharp
// Create AVI output with specified filename
var aviOutput = new AVIOutput("high_quality_output.avi");
// Configure hardware-accelerated NVIDIA H.264 encoding
aviOutput.Video = new NVENCH264EncoderSettings();
// Configure AAC audio encoding
aviOutput.Audio = new VOAACEncoderSettings();
```
## Windows-Specific AVI Implementation
[!badge variant="dark" size="xl" text="VideoCaptureCore"] [!badge variant="dark" size="xl" text="VideoEditCore"]
The Windows-only components provide additional options for AVI output configuration.
### Basic Setup
Create the AVIOutput object:
```csharp
var aviOutput = new AVIOutput();
```
### Configuration Methods
#### Method 1: Using Settings Dialog
```csharp
var aviSettingsDialog = new AVISettingsDialog(
    VideoCapture1.Video_Codecs.ToArray(),
    VideoCapture1.Audio_Codecs.ToArray());

aviSettingsDialog.ShowDialog(this);
aviSettingsDialog.SaveSettings(ref aviOutput);
```
#### Method 2: Programmatic Configuration
First, get available codecs:
```csharp
// Populate codec lists
foreach (string codec in VideoCapture1.Video_Codecs)
{
    cbVideoCodecs.Items.Add(codec);
}

foreach (string codec in VideoCapture1.Audio_Codecs)
{
    cbAudioCodecs.Items.Add(codec);
}
```
Then set video and audio settings:
```csharp
// Configure video
aviOutput.Video_Codec = cbVideoCodecs.Text;
// Configure audio
aviOutput.ACM.Name = cbAudioCodecs.Text;
aviOutput.ACM.Channels = 2;
aviOutput.ACM.BPS = 16;
aviOutput.ACM.SampleRate = 44100;
aviOutput.ACM.UseCompression = true;
```
### Implementation
Apply settings and start capture:
```csharp
// Set output format
VideoCapture1.Output_Format = aviOutput;
// Set capture mode
VideoCapture1.Mode = VideoCaptureMode.VideoCapture;
// Set output file path
VideoCapture1.Output_Filename = "output.avi";
// Start capture
await VideoCapture1.StartAsync();
```
## Best Practices for AVI Output
### Encoder Selection Guidelines
1. **General-Purpose Applications**
   - OpenH264 provides good compatibility and quality
   - Suitable for most standard development scenarios
2. **Performance-Critical Applications**
   - Use hardware-accelerated encoders (NVENC, QSV, AMF) when available, as sketched after this list
   - Offers significant performance advantages with minimal quality loss
3. **Quality-Focused Applications**
   - HEVC encoders provide better compression at similar quality
   - MJPEG for scenarios requiring frame-by-frame accuracy
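For example, a minimal availability check for the second case (reusing the `IsAvailable()` pattern shown elsewhere in this documentation; its presence on each settings class is an assumption):

```csharp
// Prefer NVIDIA hardware H.264 encoding when present; otherwise fall back
// to the cross-platform OpenH264 software encoder.
if (NVENCH264EncoderSettings.IsAvailable())
{
    aviOutput.Video = new NVENCH264EncoderSettings();
}
else
{
    aviOutput.Video = new OpenH264EncoderSettings();
}
```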
### Audio Encoding Recommendations
- MP3: Good compatibility with reasonable quality
- AAC: Better quality-to-size ratio, preferred for newer applications
- Choose based on your target platform and quality requirements
### Platform Considerations
- Some encoders are platform-specific:
  - MF HEVC and MF AAC encoders are Windows-only
  - Hardware-accelerated encoders require appropriate GPU support
- Check encoder availability with `GetVideoEncoders()` and `GetAudioEncoders()` when developing cross-platform applications
### Error Handling Tips
- Always verify encoder availability before use
- Implement fallback encoders for platform-specific scenarios
- Check file write permissions before setting output paths
## Troubleshooting Common Issues
### Codec Not Found
If you encounter "Codec not found" errors:
```csharp
// Check if codec is available before using
if (!VideoCapture1.Video_Codecs.Contains("H264"))
{
    // Fall back to another codec or show error
    MessageBox.Show("H264 codec not available. Please install required codecs.");
    return;
}
```
### File Write Permission Issues
Handle permission-related errors:
```csharp
try
{
    // Test write permissions
    using (var fs = File.Create(outputPath, 1, FileOptions.DeleteOnClose)) { }

    // If successful, proceed with AVI output
    aviOutput.SetFilename(outputPath);
}
catch (UnauthorizedAccessException)
{
    // Handle permission error
    MessageBox.Show("Cannot write to the specified location. Please select another folder.");
}
```
### Memory Issues with Large Files
For handling large file recording:
```csharp
// Split recording into multiple files when size limit is reached
void SetupLargeFileRecording()
{
    var aviOutput = new AVIOutput("recording_part1.avi");

    // Set file size limit (3.5GB to stay under the 4GB AVI limit)
    aviOutput.MaxFileSize = (long)(3.5 * 1024 * 1024 * 1024);

    // Enable auto-split functionality
    aviOutput.AutoSplit = true;
    aviOutput.SplitNamingPattern = "recording_part{0}.avi";

    // Apply to Video Capture
    var core = new VideoCaptureCoreX();
    core.Outputs_Add(aviOutput, true);
}
```
## Required Dependencies
### Video Capture SDK .Net
- [x86 Redist](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.VideoCapture.x86/)
- [x64 Redist](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.VideoCapture.x64/)
### Video Edit SDK .Net
- [x86 Redist](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.VideoEdit.x86/)
- [x64 Redist](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.VideoEdit.x64/)
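Install the package matching your SDK and target architecture from the NuGet Package Manager console, for example:
```
Install-Package VisioForge.DotNet.Core.Redist.VideoCapture.x64
```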
## Additional Resources
- [VisioForge API Documentation](https://api.visioforge.org/dotnet/)
- [Sample Projects Repository](https://github.com/visioforge/.Net-SDK-s-samples)
- [Support and Community Forums](https://support.visioforge.com/)
---END OF PAGE---
# Local File: .\dotnet\general\output-formats\custom.md
---
title: DirectShow Custom Video Format Integration in .NET
description: Learn how to implement custom video output formats using DirectShow filters in .NET applications. Step-by-step guide for developers to create specialized video processing pipelines with codec configuration and format handling.
sidebar_label: Custom Output Formats
---
# Creating Custom Video Output Formats with DirectShow Filters
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net)
[!badge variant="dark" size="xl" text="VideoCaptureCore"] [!badge variant="dark" size="xl" text="VideoEditCore"]
## Overview
Working with video in .NET applications often requires custom output formats to meet specific project requirements. The VisioForge SDKs provide powerful capabilities to implement custom format outputs using DirectShow filters, giving developers precise control over audio and video processing pipelines.
This guide demonstrates practical techniques for implementing custom output formats that work seamlessly with both the Video Capture SDK .NET and Video Edit SDK .NET, allowing you to tailor your video applications to exact specifications.
## Why Use Custom Output Formats?
Custom output formats offer several advantages for .NET developers:
- Support for specialized video codecs not available in standard formats
- Fine-grained control over video and audio compression settings
- Integration with third-party DirectShow filters
- Ability to create proprietary or industry-specific output formats
- Optimization for specific use cases (streaming, archiving, editing)
## Getting Started with CustomOutput
The `CustomOutput` class is the cornerstone for configuring custom output settings in VisioForge SDKs. This class enables you to define and configure the filters used in your video processing pipeline.
Start by initializing a new instance:
```cs
var customOutput = new CustomOutput();
```
While our examples use the `VideoCaptureCore` class, developers using Video Edit SDK .NET can apply the same techniques with `VideoEditCore`.
## Implementation Strategies
There are two primary approaches to implementing custom format output with DirectShow filters:
### Strategy 1: Three-Component Pipeline
This modular approach divides the processing pipeline into three distinct components:
1. Audio codec
2. Video codec
3. Multiplexer (file format container)
This separation provides maximum flexibility and control over each stage of the process. You can use either standard DirectShow filters or specialized codecs for audio and video components.
#### Retrieving Available Codecs
Begin by populating your UI with available codecs and filters:
```cs
// Populate video codec options
foreach (string codec in VideoCapture1.Video_Codecs)
{
    videoCodecDropdown.Items.Add(codec);
}

// Populate audio codec options
foreach (string codec in VideoCapture1.Audio_Codecs)
{
    audioCodecDropdown.Items.Add(codec);
}

// Get all available DirectShow filters
foreach (string filter in VideoCapture1.DirectShow_Filters)
{
    directShowAudioFilters.Items.Add(filter);
    directShowVideoFilters.Items.Add(filter);
    multiplexerFilters.Items.Add(filter);
    fileWriterFilters.Items.Add(filter);
}
```
#### Configuring the Pipeline Components
Next, set up your video and audio processing components based on user selections:
```cs
// Set up video codec
if (useStandardVideoCodec.Checked)
{
    customOutput.Video_Codec = videoCodecDropdown.Text;
    customOutput.Video_Codec_UseFiltersCategory = false;
}
else
{
    customOutput.Video_Codec = directShowVideoFilters.Text;
    customOutput.Video_Codec_UseFiltersCategory = true;
}

// Set up audio codec
if (useStandardAudioCodec.Checked)
{
    customOutput.Audio_Codec = audioCodecDropdown.Text;
    customOutput.Audio_Codec_UseFiltersCategory = false;
}
else
{
    customOutput.Audio_Codec = directShowAudioFilters.Text;
    customOutput.Audio_Codec_UseFiltersCategory = true;
}

// Configure the multiplexer
customOutput.MuxFilter_Name = multiplexerFilters.Text;
customOutput.MuxFilter_IsEncoder = false;
```
#### Custom File Writer Configuration
For specialized outputs that require a dedicated file writer:
```cs
// Enable special file writer if needed
customOutput.SpecialFileWriter_Needed = useCustomFileWriter.Checked;
customOutput.SpecialFileWriter_FilterName = fileWriterFilters.Text;
```
This approach gives you granular control over each stage of the encoding process, making it ideal for complex output requirements.
### Strategy 2: All-in-One Filter
This streamlined approach uses a single DirectShow filter that combines the functionality of the multiplexer, video codec, and audio codec. The SDK intelligently handles detection of the filter's capabilities, determining whether it:
- Can directly write files without assistance
- Requires the standard DirectShow File Writer filter
- Needs a specialized file writer filter
Implementation is straightforward:
```cs
// Populate filter options from available DirectShow filters
foreach (string filter in VideoCapture1.DirectShow_Filters)
{
    filterDropdown.Items.Add(filter);
}

// Configure the all-in-one filter
customOutput.MuxFilter_Name = selectedFilter.Text;
customOutput.MuxFilter_IsEncoder = true;

// Set up specialized file writer if required
customOutput.SpecialFileWriter_Needed = requiresCustomWriter.Checked;
customOutput.SpecialFileWriter_FilterName = fileWriterFilter.Text;
```
This approach is simpler to implement but offers less granular control over individual components of the encoding process.
## Simplifying Configuration with Dialog UI
For a more user-friendly implementation, VisioForge provides a built-in settings dialog that handles the configuration of custom formats:
```cs
// Create and configure the settings dialog
CustomFormatSettingsDialog settingsDialog = new CustomFormatSettingsDialog(
    VideoCapture1.Video_Codecs.ToArray(),
    VideoCapture1.Audio_Codecs.ToArray(),
    VideoCapture1.DirectShow_Filters.ToArray());
// Apply settings to your CustomOutput instance
settingsDialog.SaveSettings(ref customOutput);
```
This dialog provides a complete UI for configuring all aspects of custom output formats, saving development time while still offering full flexibility.
## Implementing the Output Process
After configuring your custom format settings, you need to apply them to your capture or edit process:
```cs
// Apply the custom format configuration
VideoCapture1.Output_Format = customOutput;
// Set the capture mode
VideoCapture1.Mode = VideoCaptureMode.VideoCapture;
// Specify output file path
VideoCapture1.Output_Filename = "output_video.mp4";
// Start the capture or encoding process
await VideoCapture1.StartAsync();
```
## Performance Considerations
When implementing custom output formats, keep these performance tips in mind:
- DirectShow filters vary in efficiency and resource usage
- Test your filter combinations with typical input media
- Some third-party filters may introduce additional latency
- Consider memory usage when processing high-resolution video
- Filter compatibility may vary across different Windows versions
## Required Packages
To use custom DirectShow filters, ensure you have the appropriate redistributable packages installed:
### Video Capture SDK .Net
- [x86 Package](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.VideoCapture.x86/)
- [x64 Package](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.VideoCapture.x64/)
### Video Edit SDK .Net
- [x86 Package](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.VideoEdit.x86/)
- [x64 Package](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.VideoEdit.x64/)
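Install the package matching your SDK and target architecture via the NuGet Package Manager console, for example:
```
Install-Package VisioForge.DotNet.Core.Redist.VideoCapture.x64
```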
## Troubleshooting
Common issues when working with custom DirectShow filters include:
- Filter compatibility conflicts
- Missing codecs or dependencies
- Registration issues with COM components
- Memory leaks in third-party filters
- Performance bottlenecks with complex filter graphs
If you encounter problems, verify that all required filters are properly registered on your system and that you have the latest versions of both the filters and the VisioForge SDK.
## Conclusion
Custom output formats using DirectShow filters provide powerful capabilities for .NET developers working with video applications. Whether you choose the flexibility of a three-component pipeline or the simplicity of an all-in-one filter approach, VisioForge's SDKs give you the tools you need to create exactly the output format your application requires.
---
For more code samples and implementation examples, visit our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples).
---END OF PAGE---
# Local File: .\dotnet\general\output-formats\ffmpeg-exe.md
---
title: FFMPEG Integration for VisioForge Video SDKs
description: Implement powerful FFMPEG.exe output in VisioForge .Net SDKs for video capture, editing, and processing. Learn how to configure video codecs, hardware acceleration, custom encoding parameters, and optimize performance for professional video applications.
sidebar_label: FFMPEG (exe)
---
# FFMPEG.exe Integration with VisioForge .Net SDKs
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net)
[!badge variant="dark" size="xl" text="VideoCaptureCore"] [!badge variant="dark" size="xl" text="VideoEditCore"]
## Introduction to FFMPEG Output in .NET
This guide provides detailed instructions for implementing FFMPEG.exe output in Windows applications using VisioForge's .NET SDKs. The integration works with both [Video Capture SDK .NET](https://www.visioforge.com/video-capture-sdk-net) and [Video Edit SDK .NET](https://www.visioforge.com/video-edit-sdk-net), utilizing the `VideoCaptureCore` and `VideoEditCore` engines.
FFMPEG functions as a powerful multimedia framework that enables developers to output to a wide variety of video and audio formats. Its flexibility stems from extensive codec support and granular control over encoding parameters for both video and audio streams.
## Why Use FFMPEG with VisioForge SDKs?
Integrating FFMPEG into your VisioForge-powered applications provides several technical advantages:
- **Format versatility**: Support for virtually all modern container formats
- **Codec flexibility**: Access to both open-source and proprietary codecs
- **Performance optimization**: Options for CPU and GPU acceleration
- **Customization depth**: Fine-grained control over encoding parameters
- **Cross-platform compatibility**: Consistent output on different systems
## Key Features and Capabilities
### Supported Output Formats
FFMPEG supports numerous container formats, including but not limited to:
- MP4 (MPEG-4 Part 14)
- WebM (VP8/VP9 with Vorbis/Opus)
- MKV (Matroska)
- AVI (Audio Video Interleave)
- MOV (QuickTime)
- WMV (Windows Media Video)
- FLV (Flash Video)
- TS (MPEG Transport Stream)
### Hardware Acceleration Options
Modern video encoding benefits from hardware acceleration technologies that significantly improve encoding speed and efficiency:
- **Intel QuickSync**: Leverages Intel integrated graphics for H.264 and HEVC encoding
- **NVIDIA NVENC**: Utilizes NVIDIA GPUs for accelerated encoding (requires compatible NVIDIA graphics card)
- **AMD AMF/VCE**: Employs AMD graphics processors for encoding acceleration
### Video Codec Support
The integration offers access to multiple video codecs with customizable parameters:
- **H.264/AVC**: Industry standard with excellent quality-to-size ratio
- **H.265/HEVC**: Higher efficiency codec for 4K+ content
- **VP9**: Google's open video codec used in WebM
- **AV1**: Next-generation open codec (where supported)
- **MPEG-2**: Legacy codec for DVD and broadcast compatibility
- **ProRes**: Professional codec for editing workflows
## Implementation Process
### 1. Setting Up Your Development Environment
Before implementing FFMPEG output, ensure your development environment is properly configured:
1. Create a new or open an existing .NET project
2. Install the appropriate VisioForge SDK NuGet packages
3. Add FFMPEG dependency packages (detailed in the Dependencies section)
4. Import the necessary namespaces in your code:
```csharp
using VisioForge.Core.Types;
using VisioForge.Core.Types.VideoCapture;
using VisioForge.Core.Types.VideoEdit;
```
### 2. Initializing FFMPEG Output
Start by creating an instance of `FFMPEGEXEOutput` to handle your output configuration:
```csharp
var ffmpegOutput = new FFMPEGEXEOutput();
```
This object will serve as the container for all your encoding settings and preferences.
### 3. Configuring Output Container Format
Set your desired output container format using the `OutputMuxer` property:
```csharp
ffmpegOutput.OutputMuxer = OutputMuxer.MP4;
```
Other common container options include:
- `OutputMuxer.MKV` - For Matroska container
- `OutputMuxer.WebM` - For WebM format
- `OutputMuxer.AVI` - For AVI format
- `OutputMuxer.MOV` - For QuickTime container
### 4. Video Encoder Configuration
FFMPEG provides multiple video encoder options. Select and configure the appropriate encoder based on your requirements and available hardware:
#### Standard CPU-Based H.264 Encoding
```csharp
var videoEncoder = new H264MFSettings
{
    Bitrate = 5000000,
    RateControlMode = RateControlMode.CBR
};

ffmpegOutput.Video = videoEncoder;
```
#### Hardware-Accelerated NVIDIA Encoding
```csharp
var nvidiaEncoder = new H264NVENCSettings
{
    Bitrate = 8000000 // 8 Mbps
};

ffmpegOutput.Video = nvidiaEncoder;
```
#### Hardware-Accelerated Intel QuickSync Encoding
```csharp
var intelEncoder = new H264QSVSettings
{
    Bitrate = 6000000
};

ffmpegOutput.Video = intelEncoder;
```
#### HEVC/H.265 Encoding for Higher Efficiency
```csharp
var hevcEncoder = new HEVCQSVSettings
{
    Bitrate = 3000000
};

ffmpegOutput.Video = hevcEncoder;
```
### 5. Audio Encoder Configuration
Configure your audio encoding settings based on quality requirements and target platform compatibility:
```csharp
var audioEncoder = new BasicAudioSettings
{
    Bitrate = 192000,   // 192 kbps
    Channels = 2,       // Stereo
    SampleRate = 48000, // 48 kHz - professional standard
    Encoder = AudioEncoder.AAC,
    Mode = AudioMode.CBR
};

ffmpegOutput.Audio = audioEncoder;
```
### 6. Final Configuration and Execution
Apply all settings and start the encoding process:
```csharp
// Apply format settings
core.Output_Format = ffmpegOutput;
// Set operation mode
core.Mode = VideoCaptureMode.VideoCapture; // For Video Capture SDK
// core.Mode = VideoEditMode.Convert; // For Video Edit SDK
// Set output path
core.Output_Filename = "output.mp4";
// Begin processing
await core.StartAsync();
```
## Required Dependencies
Install the following NuGet packages based on your target architecture to ensure proper functionality:
### Video Capture SDK Dependencies
```cmd
Install-Package VisioForge.DotNet.Core.Redist.VideoCapture.x64
Install-Package VisioForge.DotNet.Core.Redist.FFMPEGEXE.x64
```
For x86 targets:
```cmd
Install-Package VisioForge.DotNet.Core.Redist.VideoCapture.x86
Install-Package VisioForge.DotNet.Core.Redist.FFMPEGEXE.x86
```
### Video Edit SDK Dependencies
```cmd
Install-Package VisioForge.DotNet.Core.Redist.VideoEdit.x64
Install-Package VisioForge.DotNet.Core.Redist.FFMPEGEXE.x64
```
For x86 targets:
```cmd
Install-Package VisioForge.DotNet.Core.Redist.VideoEdit.x86
Install-Package VisioForge.DotNet.Core.Redist.FFMPEGEXE.x86
```
## Troubleshooting and Optimization
### Common Issues and Solutions
- **Codec not found errors**: Ensure you've installed the correct FFMPEG package with proper codec support
- **Hardware acceleration failures**: Verify GPU compatibility and driver versions
- **Performance issues**: Adjust thread count and encoding preset based on available CPU resources
- **Output quality problems**: Fine-tune bitrate, profile, and encoding parameters
### Performance Optimization Tips
- Use hardware acceleration when available
- Choose appropriate presets based on your quality/speed requirements
- Set reasonable bitrates based on content type and resolution (see the sketch below)
- Consider two-pass encoding for non-realtime scenarios requiring highest quality
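As a rough starting point for bitrate selection, a bits-per-pixel heuristic can derive a target from resolution and frame rate. The helper below is a sketch; the ~0.1 bpp figure is a common H.264 rule of thumb, not an SDK constant:
```csharp
// Hypothetical helper: estimate an H.264 bitrate from resolution and frame rate
// using a bits-per-pixel heuristic (~0.1 bpp is a common starting point).
static int EstimateBitrate(int width, int height, double fps, double bitsPerPixel = 0.1)
{
    return (int)(width * height * fps * bitsPerPixel);
}

// 1080p at 30 fps -> roughly 6.2 Mbps
var videoEncoder = new H264MFSettings
{
    Bitrate = EstimateBitrate(1920, 1080, 30)
};
```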
## Additional Resources
For more code samples and implementation examples, visit our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples).
To learn more about FFMPEG parameters and capabilities, refer to the [official FFMPEG documentation](https://ffmpeg.org/documentation.html).
---END OF PAGE---
# Local File: .\dotnet\general\output-formats\gif.md
---
title: GIF Animation Encoding for .NET Development
description: Learn how to implement and optimize GIF animation encoding in .NET applications. Explore frame rate control, resolution settings, and performance tuning with detailed code examples for both cross-platform and Windows environments.
sidebar_label: GIF
---
# GIF Encoder
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The GIF encoder is a component of the VisioForge SDK that enables video encoding to the GIF format. This document provides detailed information about the GIF encoder settings and implementation guidelines.
## Cross-platform GIF output
[!badge variant="dark" size="xl" text="VideoCaptureCoreX"] [!badge variant="dark" size="xl" text="VideoEditCoreX"] [!badge variant="dark" size="xl" text="MediaBlocksPipeline"]
The GIF encoder settings are managed through the `GIFEncoderSettings` class, which provides configuration options for controlling the encoding process.
### Properties
1. **Repeat**
   - Type: `uint`
   - Description: Controls the number of times the GIF animation will repeat
   - Values:
     - `uint.MaxValue` (the unsigned equivalent of `-1`): loop forever
     - `0..n`: finite number of repetitions
2. **Speed**
   - Type: `int`
   - Description: Controls the encoding speed
   - Range: 1 to 30 (higher values result in faster encoding)
   - Default: 10
## Implementation Guide
### Basic Usage
Here's a basic example of how to configure and use the GIF encoder:
```csharp
using VisioForge.Core.Types.X.VideoEncoders;
// Create and configure GIF encoder settings
var settings = new GIFEncoderSettings
{
    Repeat = 0, // Play once
    Speed = 15  // Set encoding speed to 15
};
```
### Advanced Configuration
For more controlled GIF encoding, you can adjust the settings based on your specific needs:
```csharp
// Configure for an infinitely looping GIF with maximum encoding speed
var settings = new GIFEncoderSettings
{
    Repeat = uint.MaxValue, // Loop forever
    Speed = 30              // Maximum encoding speed
};

// Configure for optimal quality
var qualitySettings = new GIFEncoderSettings
{
    Repeat = 1, // Play twice
    Speed = 1   // Slowest encoding speed for best quality
};
```
## Best Practices
1. **Speed Selection**
   - For best quality, use lower speed values (1-5)
   - For balanced quality and performance, use medium speed values (6-15)
   - For fastest encoding, use higher speed values (16-30)
2. **Memory Considerations**
   - Higher speed values consume more memory during encoding
   - For large videos, consider using lower speed values to manage memory usage
3. **Loop Configuration**
   - Use `Repeat = uint.MaxValue` for infinite loops
   - Set specific repeat counts for presentation-style GIFs
   - Use `Repeat = 0` for single-play GIFs
## Performance Optimization
When encoding videos to GIF format, consider these optimization strategies:
```csharp
// Optimize for web delivery
var webOptimizedSettings = new GIFEncoderSettings
{
    Repeat = uint.MaxValue, // Infinite loop for web playback
    Speed = 20              // Fast encoding for web content
};

// Optimize for quality
var qualityOptimizedSettings = new GIFEncoderSettings
{
    Repeat = 1, // Single repeat
    Speed = 3   // Slower encoding for better quality
};
```
### Example Implementation
Here's a complete example showing how to set up GIF output.
Add the GIF output to the Video Capture SDK core instance (the `GIFOutput` class and its filename constructor are assumed here by analogy with the other X-engine output classes):
```csharp
var gifOutput = new GIFOutput("output.gif"); // assumed X-engine GIF output class
var core = new VideoCaptureCoreX();
core.Outputs_Add(gifOutput, true);
```
Set the output format for the Video Edit SDK core instance:
```csharp
var core = new VideoEditCoreX();
core.Output_Format = gifOutput;
```
Create a Media Blocks GIF output instance:
```csharp
var gifSettings = new GIFEncoderSettings();
var gifOutput = new GIFEncoderBlock(gifSettings, "output.gif");
```
## Windows-only GIF output
[!badge variant="dark" size="xl" text="VideoCaptureCore"] [!badge variant="dark" size="xl" text="VideoEditCore"]
The `AnimatedGIFOutput` class is a specialized configuration class within the `VisioForge.Core.Types.Output` namespace that handles settings for generating animated GIF files. This class is designed to work with both video capture and video editing operations, implementing both `IVideoEditBaseOutput` and `IVideoCaptureBaseOutput` interfaces.
The primary purpose of this class is to provide a configuration container for controlling how video content is converted into animated GIF format. It allows users to specify key parameters such as frame rate and output dimensions, which are crucial for creating optimized animated GIFs from video sources.
### Properties
#### ForcedVideoHeight
- Type: `int`
- Purpose: Specifies a forced height for the output GIF
- Usage: Set this property when you need to resize the output GIF to a specific height, regardless of the input video dimensions
- Example: `gifOutput.ForcedVideoHeight = 480;`
#### ForcedVideoWidth
- Type: `int`
- Purpose: Specifies a forced width for the output GIF
- Usage: Set this property when you need to resize the output GIF to a specific width, regardless of the input video dimensions
- Example: `gifOutput.ForcedVideoWidth = 640;`
#### FrameRate
- Type: `VideoFrameRate`
- Default Value: 2 frames per second
- Purpose: Controls how many frames per second the output GIF will contain
- Note: The default value of 2 fps is chosen to balance file size and animation smoothness for typical GIF usage
### Constructor
```csharp
public AnimatedGIFOutput()
```
The constructor initializes a new instance with default settings:
- Sets the frame rate to 2 fps using `new VideoFrameRate(2)`
- All other properties are initialized to their default values
### Serialization Methods
#### Save()
- Returns: `string`
- Purpose: Serializes the current configuration to JSON format
- Usage: Use this method when you need to save or transfer the configuration
- Example:
```csharp
var gifOutput = new AnimatedGIFOutput();
gifOutput.ForcedVideoWidth = 800;
string jsonConfig = gifOutput.Save();
```
#### Load(string json)
- Parameters: `json` - A JSON string containing serialized configuration
- Returns: `AnimatedGIFOutput`
- Purpose: Creates a new instance from a JSON configuration string
- Usage: Use this method to restore a previously saved configuration
- Example:
```csharp
string jsonConfig = "..."; // Your saved JSON configuration
var gifOutput = AnimatedGIFOutput.Load(jsonConfig);
```
### Best Practices and Usage Guidelines
1. Frame Rate Considerations
   - The default 2 fps is suitable for most basic animations
   - Increase the frame rate for smoother animations, but be aware of file size implications
   - Consider using higher frame rates (e.g., 10-15 fps) for complex motion
2. Resolution Settings
   - Only set ForcedVideoWidth/Height when you specifically need to resize
   - Maintain aspect ratio by setting width and height proportionally
   - Consider target platform limitations when choosing dimensions
3. Performance Optimization
   - Lower frame rates result in smaller file sizes
   - Consider the balance between quality and file size based on your use case
   - Test different configurations to find the optimal settings for your needs
### Example Usage
Here's a complete example of configuring and using the AnimatedGIFOutput class:
```csharp
// Create a new instance with default settings
var gifOutput = new AnimatedGIFOutput();
// Configure the output settings
gifOutput.ForcedVideoWidth = 800;
gifOutput.ForcedVideoHeight = 600;
gifOutput.FrameRate = new VideoFrameRate(5); // 5 fps
// Apply the settings to the core
core.Output_Format = gifOutput; // core is an instance of VideoCaptureCore or VideoEditCore
core.Output_Filename = "output.gif";
```
### Common Scenarios and Solutions
#### Creating Web-Optimized GIFs
```csharp
var webGifOutput = new AnimatedGIFOutput
{
    ForcedVideoWidth = 480,
    ForcedVideoHeight = 270,
    FrameRate = new VideoFrameRate(5)
};
```
#### High-Quality Animation Settings
```csharp
var highQualityGif = new AnimatedGIFOutput
{
    FrameRate = new VideoFrameRate(15)
};
```
---END OF PAGE---
# Local File: .\dotnet\general\output-formats\index.md
---
title: Video & Audio Format Guide for .NET Development
description: Learn about video and audio formats for .NET applications - from MP4 and WebM to AVI and MKV. Includes practical implementation examples, codec comparisons, and a detailed compatibility matrix for developers.
sidebar_label: Output Formats
order: 17
---
# Output Formats for .NET Media SDKs
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## Introduction
The VisioForge .NET SDKs support a wide range of output formats for video, audio, and media projects. Selecting the right format is crucial for ensuring compatibility, optimizing file size, and maintaining quality appropriate for your target platform. This guide covers all available formats, their technical specifications, use cases, and implementation details to help developers make informed decisions.
## Choosing the Right Format
When selecting an output format, consider these key factors:
- **Target platform** - Some formats work better on specific devices or browsers
- **Quality requirements** - Different codecs provide varying levels of quality at different bitrates
- **File size constraints** - Some formats offer better compression than others
- **Processing overhead** - Encoding complexity varies between formats
- **Streaming requirements** - Certain formats are optimized for streaming scenarios
## Video Container Formats
### AVI (Audio Video Interleave)
[AVI](avi.md) is a classic container format developed by Microsoft that supports various video and audio codecs.
**Key features:**
- Wide compatibility with Windows applications
- Supports virtually any DirectShow-compatible video and audio codec
- Simple structure makes it reliable for video editing workflows
- Better suited for archiving than streaming
### MP4 (MPEG-4 Part 14)
[MP4](mp4.md) is one of the most versatile and widely used container formats in modern applications.
**Key features:**
- Excellent compatibility across devices and platforms
- Supports advanced codecs including H.264, H.265/HEVC, and AAC
- Optimized for streaming and progressive download
- Efficient storage with good quality-to-size ratio
**Supported video codecs:**
- H.264 (AVC) - Balance of quality and compatibility
- H.265 (HEVC) - Better compression but higher encoding overhead
- MPEG-4 Part 2 - Older codec with wider compatibility
**Supported audio codecs:**
- AAC - Industry standard for digital audio compression
- MP3 - Widely supported legacy format
### WebM
[WebM](webm.md) is an open-source container format designed specifically for web use.
**Key features:**
- Royalty-free format ideal for web applications
- Native support in most modern browsers
- Excellent for streaming video content
- Supports VP8, VP9, and AV1 video codecs
**Technical considerations:**
- VP9 offers ~50% bitrate reduction compared to H.264 at similar quality
- AV1 provides even better compression but with significantly higher encoding complexity
- Works well with HTML5 video elements without plugins
### MKV (Matroska)
[MKV](mkv.md) is a flexible container format that can hold virtually any type of audio or video.
**Key features:**
- Supports multiple audio, video, and subtitle tracks
- Can contain almost any codec
- Great for archiving and high-quality storage
- Supports chapters and attachments
**Best uses:**
- Media archives requiring multiple tracks
- High-quality video storage
- Projects requiring complex chapter structures
### Additional Container Formats
- [MOV](mov.md) - Apple's QuickTime container format
- [MPEG-TS](mpegts.md) - Transport Stream format optimized for broadcasting
- [MXF](mxf.md) - Material Exchange Format used in professional video production
- [Windows Media Video](wmv.md) - Microsoft's proprietary format
## Audio-Only Formats
### MP3 (MPEG-1 Audio Layer III)
[MP3](../audio-encoders/mp3.md) remains one of the most widely supported audio formats.
**Key features:**
- Near-universal compatibility
- Configurable bitrate for quality vs. size tradeoffs
- VBR (Variable Bit Rate) option for optimized file sizes
### AAC in M4A Container
[M4A](../audio-encoders/aac.md) provides better audio quality than MP3 at the same bitrate.
**Key features:**
- Better compression efficiency than MP3
- Good compatibility with modern devices
- Supports advanced audio features like multichannel audio
### Other Audio Formats
- [FLAC](../audio-encoders/flac.md) - Lossless audio format for high-quality archiving
- [OGG Vorbis](../audio-encoders/vorbis.md) - Open-source alternative to MP3 with better quality at lower bitrates
## Specialized Formats
### GIF (Graphics Interchange Format)
[GIF](gif.md) is useful for creating short, silent animations.
**Key features:**
- Wide web compatibility
- Limited to 256 colors per frame
- Support for transparency
- Ideal for short, looping animations
### Custom Output Format
[Custom output format](custom.md) allows integration with third-party DirectShow filters.
**Key features:**
- Maximum flexibility for specialized requirements
- Integration with commercial or custom codecs
- Support for proprietary formats
## Advanced Output Options
### FFMPEG Integration
[FFMPEG EXE](ffmpeg-exe.md) integration provides access to FFMPEG's extensive codec library.
**Key features:**
- Support for virtually any format FFMPEG can handle
- Advanced encoding options
- Custom command line arguments for fine-tuned control
## Performance Optimization Tips
When working with video output formats, consider these optimization strategies:
1. **Match format to use case** - Use streaming-optimized formats for web delivery
2. **Consider hardware acceleration** - Many modern codecs support GPU acceleration (see the sketch after this list)
3. **Use appropriate bitrates** - Higher isn't always better; find the sweet spot for your content
4. **Test on target devices** - Ensure compatibility before finalizing format choice
5. **Enable multi-threading** - Take advantage of multiple CPU cores for faster encoding
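To make tip 2 concrete, here is a minimal sketch of the encoder-selection pattern used throughout these SDKs; it assumes the `MP4Output` and encoder settings classes described on the MP4 page:

```csharp
// Prefer a hardware H.264 encoder when the GPU supports it;
// otherwise fall back to the software OpenH264 encoder.
var output = new MP4Output("output.mp4");
if (NVENCH264EncoderSettings.IsAvailable())
{
    output.Video = new NVENCH264EncoderSettings();
}
else
{
    output.Video = new OpenH264EncoderSettings();
}
```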
## Implementation Best Practices
- Configure proper keyframe intervals for streaming formats
- Set appropriate bitrate constraints for target platforms
- Use two-pass encoding for highest quality output when time permits
- Consider audio quality requirements alongside video format decisions
## Format Compatibility Matrix
| Format | Windows | macOS | iOS | Android | Web Browsers |
|--------|---------|-------|-----|---------|--------------|
| MP4 (H.264) | ✓ | ✓ | ✓ | ✓ | ✓ |
| WebM (VP9) | ✓ | ✓ | Partial | ✓ | ✓ |
| MKV | ✓ | With players | With players | With players | ✗ |
| AVI | ✓ | With players | Limited | Limited | ✗ |
| MP3 | ✓ | ✓ | ✓ | ✓ | ✓ |
---
Visit our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples) for more code samples and implementation examples. Our documentation is continuously updated to reflect new features and optimizations available in the latest SDK releases.
---END OF PAGE---
# Local File: .\dotnet\general\output-formats\mkv.md
---
title: MKV Container Format for .NET Video Applications
description: Learn how to implement MKV output in .NET applications with hardware-accelerated encoding, multiple audio tracks, and custom video processing. Master video and audio encoding options for high-performance multimedia applications.
sidebar_label: MKV (Matroska)
---
# MKV Output in VisioForge .NET SDKs
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
[!badge variant="dark" size="xl" text="VideoCaptureCoreX"] [!badge variant="dark" size="xl" text="VideoEditCoreX"] [!badge variant="dark" size="xl" text="MediaBlocksPipeline"]
## Introduction to MKV Format
MKV (Matroska Video) is a flexible, open-standard container format that can hold an unlimited number of video, audio, and subtitle tracks in one file. The VisioForge SDKs provide robust support for MKV output with various encoding options to meet diverse development requirements.
This format is particularly valuable for developers working on applications that require:
- Multiple audio tracks or languages
- High-quality video with multiple codec options
- Cross-platform compatibility
- Support for metadata and chapters
## Getting Started with MKV Output
The `MKVOutput` class serves as the primary interface for generating MKV files in VisioForge SDKs. You can initialize it with default settings or specify custom encoders to match your application's needs.
### Basic Implementation
```csharp
// Create MKV output with default encoders
var mkvOutput = new MKVOutput("output.mkv");
// Or specify custom encoders during initialization
var videoEncoder = new NVENCH264EncoderSettings();
var audioEncoder = new MFAACEncoderSettings();
mkvOutput = new MKVOutput("output.mkv", videoEncoder, audioEncoder);
```
## Video Encoding Options
The MKV format supports multiple video codecs, giving developers flexibility in balancing quality, performance, and compatibility. VisioForge SDKs offer both software and hardware-accelerated encoders.
### H.264 Encoder Options
H.264 remains one of the most widely supported video codecs, providing excellent compression and quality:
- **OpenH264**: Software-based encoder, used as default when hardware acceleration isn't available
- **NVENC H.264**: NVIDIA GPU-accelerated encoding for superior performance
- **QSV H.264**: Intel Quick Sync Video technology for hardware acceleration
- **AMF H.264**: AMD GPU-accelerated encoding option
### HEVC (H.265) Encoder Options
For applications requiring higher compression efficiency or 4K content:
- **MF HEVC**: Windows Media Foundation implementation (Windows-only)
- **NVENC HEVC**: NVIDIA GPU acceleration for H.265
- **QSV HEVC**: Intel Quick Sync implementation for H.265
- **AMF HEVC**: AMD GPU acceleration for H.265 encoding
### Setting a Video Encoder
```csharp
mkvOutput.Video = new NVENCH264EncoderSettings();
```
## Audio Encoding Options
Audio quality is equally important for most applications. VisioForge SDKs provide several audio encoder options for MKV output:
### Supported Audio Codecs
- **AAC Encoders**:
- **VO AAC**: Default choice for non-Windows platforms
- **AVENC AAC**: FFMPEG AAC implementation
- **MF AAC**: Windows Media Foundation implementation (default on Windows)
- **Alternative Audio Formats**:
- **MP3**: Common format with wide compatibility
- **Vorbis**: Open source audio codec
- **OPUS**: Modern codec with excellent quality-to-size ratio
### Configuring Audio Encoding
```csharp
// Platform-specific audio encoder selection
#if NET_WINDOWS
var aacSettings = new MFAACEncoderSettings
{
    Bitrate = 192,
    SampleRate = 48000
};
mkvOutput.Audio = aacSettings;
#else
var aacSettings = new VOAACEncoderSettings
{
    Bitrate = 192,
    SampleRate = 44100
};
mkvOutput.Audio = aacSettings;
#endif
// Or use OPUS for better quality at lower bitrates
var opusSettings = new OPUSEncoderSettings
{
    Bitrate = 128,
    Channels = 2
};
mkvOutput.Audio = opusSettings;
```
## Advanced MKV Configuration
### Custom Video and Audio Processing
For applications that require special processing, you can integrate custom MediaBlock processors:
```csharp
// Add a video processor for effects or transformations
var textOverlayBlock = new TextOverlayBlock(new TextOverlaySettings("Hello world!"));
mkvOutput.CustomVideoProcessor = textOverlayBlock;
// Add audio processing
var volumeBlock = new VolumeBlock() { Level = 1.2 }; // Boost volume by 20%
mkvOutput.CustomAudioProcessor = volumeBlock;
```
### Sink Settings Management
Control output file properties through the sink settings:
```csharp
// Change output filename
mkvOutput.Sink.Filename = "processed_output.mkv";
// Get current filename
string currentFile = mkvOutput.GetFilename();
// Update filename with timestamp
string timestamp = DateTime.Now.ToString("yyyyMMdd_HHmmss");
mkvOutput.SetFilename($"recording_{timestamp}.mkv");
```
## Integration with VisioForge SDK Components
### With Video Capture SDK
```csharp
// Initialize capture core
var captureCore = new VideoCaptureCoreX();
// Configure video and audio source
// ...
// Add MKV output to recording pipeline
var mkvOutput = new MKVOutput("capture.mkv");
captureCore.Outputs_Add(mkvOutput, true);
// Start recording
await captureCore.StartAsync();
```
### With Video Edit SDK
```csharp
// Initialize editing core
var editCore = new VideoEditCoreX();
// Add input sources
// ...
// Configure MKV output with hardware acceleration
var h265Encoder = new NVENCHEVCEncoderSettings
{
    Bitrate = 10000
};
var mkvOutput = new MKVOutput("edited.mkv", h265Encoder);
editCore.Output_Format = mkvOutput;
// Process the file
await editCore.StartAsync();
```
### With Media Blocks SDK
```csharp
// Create a pipeline
var pipeline = new MediaBlocksPipeline();
// Add a source block (file, camera, or network source)
// var sourceBlock = ...;
// Configure MKV output
var aacEncoder = new VOAACEncoderSettings();
var h264Encoder = new OpenH264EncoderSettings();
var mkvSinkSettings = new MKVSinkSettings("processed.mkv");
var mkvOutput = new MKVOutputBlock(mkvSinkSettings, h264Encoder, aacEncoder);
pipeline.AddBlock(mkvOutput);
// Connect the source streams to the MKV output block;
// the MKVOutputBlock wraps the encoders created from the settings above
pipeline.Connect(sourceBlock.VideoOutput, mkvOutput.CreateNewInput(MediaBlockPadMediaType.Video));
pipeline.Connect(sourceBlock.AudioOutput, mkvOutput.CreateNewInput(MediaBlockPadMediaType.Audio));
// Start the pipeline
await pipeline.StartAsync();
```
## Hardware Acceleration Benefits
Hardware-accelerated encoding offers significant advantages for developers building real-time or batch processing applications:
1. **Reduced CPU Load**: Offloads encoding to dedicated hardware
2. **Faster Processing**: Up to 5-10x performance improvement
3. **Power Efficiency**: Lower energy consumption, important for mobile apps
4. **Higher Quality**: Some hardware encoders provide better quality-per-bitrate
## Best Practices for Developers
When implementing MKV output in your applications, consider these recommendations:
1. **Always check hardware availability** before using GPU-accelerated encoders (see the sketch after this list)
2. **Select appropriate bitrates** based on content type and resolution
3. **Use platform-specific encoders** where possible for optimal performance
4. **Test on target platforms** to ensure compatibility
5. **Consider quality-size trade-offs** based on your application's needs
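As a minimal sketch of recommendation 1, combining the availability checks and encoder classes shown earlier on this page:

```csharp
// Pick the best available H.264 encoder for the MKV output,
// falling back to software encoding when no GPU encoder is present.
var mkvOutput = new MKVOutput("output.mkv");
if (NVENCH264EncoderSettings.IsAvailable())
{
    mkvOutput.Video = new NVENCH264EncoderSettings();
}
else if (QSVH264EncoderSettings.IsAvailable())
{
    mkvOutput.Video = new QSVH264EncoderSettings();
}
else
{
    mkvOutput.Video = new OpenH264EncoderSettings(); // software fallback
}
```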
## Conclusion
The MKV format provides developers with a flexible, robust container for video content in .NET applications. With VisioForge SDKs, you can leverage hardware acceleration, advanced encoding options, and custom processing to create high-performance video applications.
By understanding the available encoders and configuration options, you can optimize your implementation for specific hardware platforms while maintaining cross-platform compatibility where needed.
---END OF PAGE---
# Local File: .\dotnet\general\output-formats\mov.md
---
title: MOV File Encoding with VisioForge .NET SDKs
description: Learn how to implement high-performance MOV file output in your .NET applications using VisioForge SDKs. This developer guide covers hardware-accelerated encoding options, cross-platform implementation, audio/video configuration, and integration workflows for professional video applications.
sidebar_label: MOV
---
# MOV File Output for .NET Video Applications
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
[!badge variant="dark" size="xl" text="VideoCaptureCoreX"] [!badge variant="dark" size="xl" text="VideoEditCoreX"] [!badge variant="dark" size="xl" text="MediaBlocksPipeline"]
## Introduction to MOV Output in VisioForge
The MOV container format is widely used for video storage in professional environments and Apple ecosystems. VisioForge's .NET SDKs provide robust cross-platform support for generating MOV files with customizable encoding options. The `MOVOutput` class serves as the primary interface for configuring and generating these files across Windows, macOS, and Linux environments.
MOV files created with VisioForge SDKs can leverage hardware acceleration through NVIDIA, Intel, and AMD encoders, making them ideal for performance-critical applications. This guide walks through the essential steps for implementing MOV output in .NET video applications.
### When to Use MOV Format
MOV is particularly well-suited for:
- Video editing workflows
- Projects requiring Apple ecosystem compatibility
- Professional video production pipelines
- Applications needing metadata preservation
- High-quality archival purposes
## Getting Started with MOV Output
The `MOVOutput` class ([API reference](https://api.visioforge.org/dotnet/api/VisioForge.Core.Types.X.Output.MOVOutput.html)) provides the foundation for MOV file generation with VisioForge SDKs. It encapsulates the configuration of video and audio encoders, processing parameters, and sink settings.
### Basic Implementation
Creating a MOV output requires minimal code:
```csharp
// Create a MOV output targeting the specified filename
var movOutput = new MOVOutput("output.mov");
```
This simple implementation automatically:
- Selects NVENC H264 encoder if available (falls back to OpenH264)
- Chooses the appropriate AAC encoder for your platform (MF AAC on Windows, VO-AAC elsewhere)
- Configures MOV container settings for broad compatibility
### Default Configuration Behavior
The default configuration delivers balanced performance and compatibility across platforms. However, for specialized use cases, you'll likely need to customize encoder settings, which we'll cover in the following sections.
## Video Encoder Options for MOV Files
MOV output supports a variety of video encoders to accommodate different performance, quality, and compatibility requirements. The choice of encoder significantly impacts processing speed, resource consumption, and output quality.
### Supported Video Encoders
The MOV output supports these video encoders:
| Encoder | Technology | Platform | Best For |
|---------|------------|----------|----------|
| OpenH264 | Software | Cross-platform | Compatibility |
| NVENC H264 | NVIDIA GPU | Cross-platform | Performance |
| QSV H264 | Intel GPU | Cross-platform | Efficiency |
| AMF H264 | AMD GPU | Cross-platform | Performance |
| MF HEVC | Software | Windows only | Quality |
| NVENC HEVC | NVIDIA GPU | Cross-platform | Quality/Performance |
| QSV HEVC | Intel GPU | Cross-platform | Efficiency |
| AMF H265 | AMD GPU | Cross-platform | Quality/Performance |
### Configuring Video Encoders
Set a specific video encoder with code like this:
```csharp
// For NVIDIA hardware-accelerated encoding
movOutput.Video = new NVENCH264EncoderSettings() {
    Bitrate = 5000, // 5 Mbps
};
// For software-based encoding with OpenH264
movOutput.Video = new OpenH264EncoderSettings() {
    RateControl = RateControlMode.VBR,
    Bitrate = 2500 // 2.5 Mbps
};
```
### Encoder Selection Strategy
When implementing MOV output, consider these factors for encoder selection:
1. **Hardware availability** - Check if GPU acceleration is available
2. **Quality requirements** - HEVC offers better quality at lower bitrates
3. **Processing speed** - Hardware encoders provide significant speed advantages
4. **Platform compatibility** - Some encoders are Windows-specific
A multi-tier approach often works best, checking for the fastest available encoder and falling back as needed:
```csharp
// Try NVIDIA, then Intel, then software encoding
if (NVENCH264EncoderSettings.IsAvailable())
{
    movOutput.Video = new NVENCH264EncoderSettings();
}
else if (QSVH264EncoderSettings.IsAvailable())
{
    movOutput.Video = new QSVH264EncoderSettings();
}
else
{
    movOutput.Video = new OpenH264EncoderSettings();
}
```
## Audio Encoder Options
Audio quality is critical for most video applications. The SDK provides several audio encoders optimized for different use cases.
### Supported Audio Encoders
| Encoder | Type | Platform | Quality | Use Case |
|---------|------|----------|---------|----------|
| MP3 | Software | Cross-platform | Good | Web distribution |
| VO-AAC | Software | Cross-platform | Excellent | Professional use |
| AVENC AAC | Software | Cross-platform | Very good | General purpose |
| MF AAC | Hardware-accelerated | Windows only | Excellent | Windows apps |
### Audio Encoder Configuration
Implementing audio encoding requires minimal code:
```csharp
// MP3 configuration
movOutput.Audio = new MP3EncoderSettings() {
    Bitrate = 320, // 320 kbps high quality
    Channels = 2 // Stereo
};
// Or AAC for better quality (Windows)
movOutput.Audio = new MFAACEncoderSettings() {
    Bitrate = 192 // 192 kbps
};
// Cross-platform AAC implementation
movOutput.Audio = new VOAACEncoderSettings() {
    Bitrate = 192,
    SampleRate = 48000
};
```
### Platform-Specific Audio Considerations
To handle platform differences elegantly, use conditional compilation:
```csharp
// Select appropriate encoder based on platform
#if NET_WINDOWS
movOutput.Audio = new MFAACEncoderSettings();
#else
movOutput.Audio = new VOAACEncoderSettings();
#endif
```
## Advanced MOV Output Customization
Beyond basic configuration, VisioForge SDKs enable powerful customization of MOV output through media processing blocks and sink settings.
### Custom Processing Pipeline
For specialized video processing needs, the SDK provides media block integration:
```csharp
// Add custom video processing
movOutput.CustomVideoProcessor = new SomeMediaBlock();
// Add custom audio processing
movOutput.CustomAudioProcessor = new SomeMediaBlock();
```
### MOV Sink Configuration
Fine-tune the MOV container settings for specialized requirements:
```csharp
// Configure sink settings
movOutput.Sink.Filename = "new_output.mov";
```
### Dynamic Encoder Detection
Your application can intelligently select encoders based on system capabilities:
```csharp
// Get available video encoders
var videoEncoders = movOutput.GetVideoEncoders();
// Get available audio encoders
var audioEncoders = movOutput.GetAudioEncoders();
// Display available options to users or auto-select
foreach (var encoder in videoEncoders)
{
Console.WriteLine($"Available encoder: {encoder.Name}");
}
```
## Integration with VisioForge SDK Core Components
The MOV output integrates seamlessly with the core SDK components for video capture, editing, and processing.
### Video Capture Integration
Add MOV output to a capture workflow:
```csharp
// Create and configure capture core
var core = new VideoCaptureCoreX();
// Add capture devices
// ..
// Add configured MOV output
core.Outputs_Add(movOutput, true);
// Start capture
await core.StartAsync();
```
### Video Edit SDK Integration
Incorporate MOV output in video editing:
```csharp
// Create edit core and configure project
var core = new VideoEditCoreX();
// Add input file
// ...
// Set MOV as output format
core.Output_Format = movOutput;
// Process the video
await core.StartAsync();
```
### Media Blocks SDK Implementation
For direct media pipeline control:
```csharp
// Create encoder instances
var aac = new VOAACEncoderSettings();
var h264 = new OpenH264EncoderSettings();
// Configure MOV sink
var movSinkSettings = new MOVSinkSettings("output.mov");
// Create output block
// Note: MP4OutputBlock also handles MOV output (the MP4 container format is derived from QuickTime MOV)
var movOutput = new MP4OutputBlock(movSinkSettings, h264, aac);
// Add to pipeline
pipeline.AddBlock(movOutput);
```
## Platform Compatibility Notes
While VisioForge's MOV implementation is cross-platform, some features are platform-specific:
### Windows-Specific Features
- MF HEVC video encoder provides optimized encoding on Windows
- MF AAC audio encoder offers hardware acceleration on compatible systems
### Cross-Platform Features
- OpenH264, NVENC, QSV, and AMF encoders work across operating systems
- VO-AAC and AVENC AAC provide consistent audio encoding everywhere
## Conclusion
The MOV output capability in VisioForge .NET SDKs provides a powerful and flexible solution for creating high-quality video files. By leveraging hardware acceleration where available and falling back to optimized software implementations when needed, the SDK ensures excellent performance across platforms.
For more information, refer to the [VisioForge API documentation](https://api.visioforge.org/dotnet/api/VisioForge.Core.Types.X.Output.MOVOutput.html) or explore other output formats in our documentation.
---END OF PAGE---
# Local File: .\dotnet\general\output-formats\mp4.md
---
title: MP4 Video Output Integration for .NET
description: Learn how to implement MP4 video output in .NET applications using hardware-accelerated encoders. Guide covers H.264/HEVC encoding, audio configuration, and best practices for optimal video processing performance.
sidebar_label: MP4
---
# MP4 file output
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
MP4 (MPEG-4 Part 14), introduced in 2001, is a digital multimedia container format most commonly used to store video and audio. It also supports subtitles and images. MP4 is known for its high compression and compatibility across various devices and platforms, making it a popular choice for streaming and sharing.
Capturing videos from a webcam and saving them to a file is a common requirement in many applications. One way to achieve this is by using a software development kit (SDK) like VisioForge Video Capture SDK .Net, which provides an easy-to-use API for capturing and processing videos in C#.
To capture video in MP4 format using Video Capture SDK, configure the video output format using one of the MP4 output classes. Several software and hardware video encoders are available, including Intel QuickSync, Nvidia NVENC, and AMD/ATI encoders.
## Cross-platform MP4 output
[!badge variant="dark" size="xl" text="VideoCaptureCoreX"] [!badge variant="dark" size="xl" text="VideoEditCoreX"] [!badge variant="dark" size="xl" text="MediaBlocksPipeline"]
The [MP4Output](https://api.visioforge.org/dotnet/api/VisioForge.Core.Types.X.Output.MP4Output.html?q=MP4Output) class provides a flexible and powerful way to configure MP4 video output settings for video capture and editing operations. This guide will walk you through how to use the MP4Output class effectively, covering its key features and common usage patterns.
MP4Output implements several important interfaces:
- IVideoEditXBaseOutput
- IVideoCaptureXBaseOutput
It also supports Media Block creation, making it suitable for both video editing and capture scenarios while providing extensive control over video and audio processing.
### Basic Usage
The simplest way to create an MP4Output instance is using the constructor with a filename:
```csharp
var output = new MP4Output("output.mp4");
```
This creates an MP4Output with default video and audio encoder settings. On Windows, it will use OpenH264 for video encoding and Media Foundation AAC for audio encoding by default.
### Video Encoder Configuration
The MP4Output class supports multiple video encoders through its `Video` property. Here are the supported video encoders:
**[H.264 Encoders](../video-encoders/h264.md)**
- OpenH264EncoderSettings (Default, CPU)
- AMFH264EncoderSettings (AMD)
- NVENCH264EncoderSettings (NVIDIA)
- QSVH264EncoderSettings (Intel Quick Sync)
**[HEVC (H.265) Encoders](../video-encoders/hevc.md)**
- MFHEVCEncoderSettings (Windows only)
- AMFH265EncoderSettings (AMD)
- NVENCHEVCEncoderSettings (NVIDIA)
- QSVHEVCEncoderSettings (Intel Quick Sync)
You can check the availability of specific encoders using the `IsAvailable` method:
```csharp
if (NVENCH264EncoderSettings.IsAvailable())
{
    output.Video = new NVENCH264EncoderSettings();
}
```
Example of configuring a specific video encoder:
```csharp
var output = new MP4Output("output.mp4");
output.Video = new NVENCH264EncoderSettings(); // Use NVIDIA encoder
```
### Audio Encoder Configuration
The `Audio` property allows you to specify the audio encoder. Supported audio encoders include:
- [VOAACEncoderSettings](../audio-encoders/aac.md)
- [AVENCAACEncoderSettings](../audio-encoders/aac.md)
- [MFAACEncoderSettings](../audio-encoders/aac.md) (Windows only)
- [MP3EncoderSettings](../audio-encoders/mp3.md)
Example of setting a custom audio encoder:
```csharp
var output = new MP4Output("output.mp4");
output.Audio = new MP3EncoderSettings();
```
The MP4Output class automatically selects appropriate default encoders based on the platform.
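To see which defaults were selected on the current platform, you can inspect the `Video` and `Audio` properties after construction (a quick diagnostic sketch):

```csharp
var output = new MP4Output("output.mp4");
// The constructor fills Video and Audio with platform defaults,
// e.g. OpenH264 and MF AAC on Windows.
Console.WriteLine($"Video encoder: {output.Video?.GetType().Name}");
Console.WriteLine($"Audio encoder: {output.Audio?.GetType().Name}");
```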
### Sample code
Add the MP4 output to the Video Capture SDK core instance:
```csharp
var core = new VideoCaptureCoreX();
core.Outputs_Add(output, true);
```
Set the output format for the Video Edit SDK core instance:
```csharp
var core = new VideoEditCoreX();
core.Output_Format = output;
```
Create a Media Blocks MP4 output instance:
```csharp
var aac = new VOAACEncoderSettings();
var h264 = new OpenH264EncoderSettings();
var mp4SinkSettings = new MP4SinkSettings("output.mp4");
var mp4Output = new MP4OutputBlock(mp4SinkSettings, h264, aac);
```
### Best Practices
**Hardware Acceleration**: When possible, use hardware-accelerated encoders (NVENC, AMF, QSV) for better performance:
```csharp
var output = new MP4Output("output.mp4");
if (NVENCH264EncoderSettings.IsAvailable())
{
    output.Video = new NVENCH264EncoderSettings();
}
```
**Encoder Selection**: Use the provided methods to enumerate available encoders:
```csharp
var output = new MP4Output("output.mp4");
var availableVideoEncoders = output.GetVideoEncoders();
var availableAudioEncoders = output.GetAudioEncoders();
```
### Common Issues and Solutions
1. **File Access**: The MP4Output constructor attempts to verify write access by creating and immediately deleting a test file. Ensure the application has proper permissions to the output directory.
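For example, you can surface permission problems early by constructing the output inside a guard (a sketch; the exact exception type thrown may vary):

```csharp
try
{
    var output = new MP4Output(@"C:\Recordings\output.mp4");
}
catch (Exception ex) // e.g. UnauthorizedAccessException or IOException
{
    Console.WriteLine($"Cannot write to the output folder: {ex.Message}");
    // Fall back to a user-writable location such as LocalApplicationData
}
```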
2. **Encoder Availability**: Hardware encoders might not be available on all systems. Always provide a fallback:
```csharp
var output = new MP4Output("output.mp4");
if (!NVENCH264EncoderSettings.IsAvailable())
{
    output.Video = new OpenH264EncoderSettings(); // Fallback to software encoder
}
```
3. **Platform Compatibility**: Some encoders are platform-specific. Use conditional compilation or runtime checks when targeting multiple platforms:
```csharp
#if NET_WINDOWS
output.Audio = new MFAACEncoderSettings();
#else
output.Audio = new MP3EncoderSettings();
#endif
```
## Windows-only MP4 output
[!badge variant="dark" size="xl" text="VideoCaptureCore"] [!badge variant="dark" size="xl" text="VideoEditCore"]
The same sample code can be used for Video Edit SDK .Net. Use the VideoEditCore class instead of VideoCaptureCore.
### CPU encoder or Intel QuickSync GPU encoder
Create an `MP4Output` object for MP4 output.
```cs
var mp4Output = new MP4Output();
```
Set MP4 mode to `CPU_QSV`.
```cs
mp4Output.MP4Mode = MP4Mode.CPU_QSV;
```
Set video settings.
```cs
mp4Output.Video.Profile = H264Profile.ProfileMain; // H264 profile
mp4Output.Video.Level = H264Level.Level4; // H264 level
mp4Output.Video.Bitrate = 2000; // bitrate
// optional parameters
mp4Output.Video.MBEncoding = H264MBEncoding.CABAC; //CABAC / CAVLC
mp4Output.Video.BitrateAuto = false; // true to use auto bitrate
mp4Output.Video.RateControl = H264RateControl.VBR; // rate control - CBR or VBR
```
Set AAC audio settings.
```cs
mp4Output.Audio_AAC.Bitrate = 192;
mp4Output.Audio_AAC.Version = AACVersion.MPEG4; // MPEG-4 / MPEG-2
mp4Output.Audio_AAC.Output = AACOutput.RAW; // RAW or ADTS
mp4Output.Audio_AAC.Object = AACObject.Low; // type of AAC
```
### Nvidia NVENC encoder
Create the `MP4Output` object for MP4 output.
```cs
var mp4Output = new MP4Output();
```
Set MP4 mode to `NVENC`.
```cs
mp4Output.MP4Mode = MP4Mode.NVENC;
```
Set the video settings.
```cs
mp4Output.Video_NVENC.Profile = NVENCVideoEncoderProfile.H264_Main; // H264 profile
mp4Output.Video_NVENC.Level = NVENCEncoderLevel.H264_4; // H264 level
mp4Output.Video_NVENC.Bitrate = 2000; // bitrate
// optional parameters
mp4Output.Video_NVENC.RateControl = NVENCRateControlMode.VBR; // rate control - CBR or VBR
```
Set the audio settings.
```cs
mp4Output.Audio_AAC.Bitrate = 192;
mp4Output.Audio_AAC.Version = AACVersion.MPEG4; // MPEG-4 / MPEG-2
mp4Output.Audio_AAC.Output = AACOutput.RAW; // RAW or ADTS
mp4Output.Audio_AAC.Object = AACObject.Low; // type of AAC
```
### CPU/GPU encoders
Using MP4 HW output, you can use hardware-accelerated encoders by Intel (QuickSync), Nvidia (NVENC), and AMD/ATI.
Create `MP4HWOutput` object for MP4 HW output.
```cs
var mp4Output = new MP4HWOutput();
```
Get available encoders.
```cs
var availableEncoders = VideoCaptureCore.HWEncodersAvailable();
// or
var availableEncoders = VideoEditCore.HWEncodersAvailable();
```
Depending on available encoders, select video codec.
```cs
mp4Output.Video.Codec = MFVideoEncoder.MS_H264; // Microsoft H264
mp4Output.Video.Profile = MFH264Profile.Main; // H264 profile
mp4Output.Video.Level = MFH264Level.Level4; // H264 level
mp4Output.Video.AvgBitrate = 2000; // bitrate
// optional parameters
mp4Output.Video.CABAC = true; // CABAC / CAVLC
mp4Output.Video.RateControl = MFCommonRateControlMode.CBR; // rate control
// many other parameters are available
```
Set audio settings.
```cs
mp4Output.Audio.Bitrate = 192;
mp4Output.Audio.Version = AACVersion.MPEG4; // MPEG-4 / MPEG-2
mp4Output.Audio.Output = AACOutput.RAW; // RAW or ADTS
mp4Output.Audio.Object = AACObject.Low; // type of AAC
```
Now, we can apply MP4 output settings to the core class (VideoCaptureCore or VideoEditCore) and start video capture or editing.
### Apply video capture settings
Set MP4 format settings for output.
```cs
core.Output_Format = mp4Output;
```
Set a video capture mode (or video convert mode if you use Video Edit SDK).
```cs
core.Mode = VideoCaptureMode.VideoCapture;
```
Set a file name (ensure you have write access rights).
```cs
core.Output_Filename = "output.mp4";
```
Start video capture (convert) to a file.
```cs
await core.StartAsync();
```
Finally, when we're done capturing the video, we need to stop the video capture and release the resources. We can do this by calling the `StopAsync` method of the `VideoCaptureCore` class.
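For example, using the `core` instance from the snippets above:

```cs
await core.StopAsync();
```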
### Required redists
- Video Capture SDK redist [x86](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.VideoCapture.x86/) [x64](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.VideoCapture.x64/)
- Video Edit SDK redist [x86](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.VideoEdit.x86/) [x64](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.VideoEdit.x64/)
- MP4 redist [x86](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.MP4.x86/) [x64](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.MP4.x64/)
---
Visit our [GitHub](https://github.com/visioforge/.Net-SDK-s-samples) page to get more code samples.
---END OF PAGE---
# Local File: .\dotnet\general\output-formats\mpegts.md
---
title: MPEG-TS File Output Guide for .NET
description: Learn how to implement MPEG Transport Stream (MPEG-TS) file output in .NET applications. Covers video and audio encoding options, hardware acceleration, cross-platform considerations, and best practices for developers working with media streaming.
sidebar_label: MPEG-TS
---
# MPEG-TS Output
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The MPEG-TS (Transport Stream) output module in the VisioForge SDKs provides functionality for creating MPEG transport stream files with various video and audio encoding options. This guide explains how to configure and use the `MPEGTSOutput` class effectively.
## Cross-platform MPEG-TS output
[!badge variant="dark" size="xl" text="VideoCaptureCoreX"] [!badge variant="dark" size="xl" text="VideoEditCoreX"] [!badge variant="dark" size="xl" text="MediaBlocksPipeline"]
To create a new MPEG-TS output, use the following constructor:
```csharp
// Initialize with AAC audio (recommended)
var output = new MPEGTSOutput("output.ts", useAAC: true);
```
You can also use MP3 audio instead of AAC:
```csharp
// Initialize with MP3 audio instead of AAC
var output = new MPEGTSOutput("output.ts", useAAC: false);
```
### Video Encoding Options
The [MPEGTSOutput](https://api.visioforge.org/dotnet/api/VisioForge.Core.Types.X.Output.MPEGTSOutput.html) supports multiple video encoders through the `Video` property. Available encoders include:
**[H.264 Encoders](../video-encoders/h264.md)**
- OpenH264 (Software-based)
- NVENC H.264 (NVIDIA GPU acceleration)
- QSV H.264 (Intel Quick Sync)
- AMF H.264 (AMD GPU acceleration)
**[H.265/HEVC Encoders](../video-encoders/hevc.md)**
- MF HEVC (Windows Media Foundation, Windows only)
- NVENC HEVC (NVIDIA GPU acceleration)
- QSV HEVC (Intel Quick Sync)
- AMF H.265 (AMD GPU acceleration)
Example of setting a specific video encoder:
```csharp
// Check if NVIDIA encoder is available
if (NVENCH264EncoderSettings.IsAvailable())
{
    output.Video = new NVENCH264EncoderSettings();
}
else
{
    // Fall back to OpenH264
    output.Video = new OpenH264EncoderSettings();
}
```
### Audio Encoding Options
The following audio encoders are supported through the `Audio` property:
**[AAC Encoders](../audio-encoders/aac.md)**
- VO-AAC (Cross-platform)
- AVENC AAC
- MF AAC (Windows only)
**[MP3 Encoder](../audio-encoders/mp3.md)**:
- MP3EncoderSettings
Example of setting an audio encoder:
```csharp
// For Windows platforms
output.Audio = new MFAACEncoderSettings();
```
```csharp
// For cross-platform compatibility
output.Audio = new VOAACEncoderSettings();
```
```csharp
// Using MP3 instead of AAC
output.Audio = new MP3EncoderSettings();
```
### File Management
You can get or set the output filename after initialization:
```csharp
// Get current filename
string currentFile = output.GetFilename();
// Change output filename
output.SetFilename("new_output.ts");
```
### Advanced Features
#### Custom Processing
The MPEGTSOutput supports custom video and audio processing through MediaBlocks:
```csharp
// Add custom video processing
output.CustomVideoProcessor = new YourCustomVideoProcessor();
// Add custom audio processing
output.CustomAudioProcessor = new YourCustomAudioProcessor();
```
#### Sink Settings
The output uses MP4SinkSettings for configuration:
```csharp
// Access sink settings
output.Sink.Filename = "modified_output.ts";
```
### Platform Considerations
- Some encoders (MF AAC, MF HEVC) are only available on Windows platforms
- Cross-platform applications should use platform-agnostic encoders like VO-AAC for audio
### Best Practices
1. **Hardware Acceleration**: When available, prefer hardware-accelerated encoders (NVENC, QSV, AMF) over software encoders for better performance.
2. **Audio Codec Selection**: Use AAC for better compatibility and quality unless you have specific requirements for MP3.
3. **Error Handling**: Always check for encoder availability before using hardware-accelerated options:
```csharp
if (NVENCH264EncoderSettings.IsAvailable())
{
    // Use NVIDIA encoder
}
else if (QSVH264EncoderSettings.IsAvailable())
{
    // Fall back to Intel Quick Sync
}
else
{
    // Fall back to software encoding
}
```
4. **Cross-Platform Compatibility**: For cross-platform applications, ensure you're using encoders available on all target platforms or implement appropriate fallbacks.
### Implementation Example
Here's a complete example showing how to create and configure an MPEG-TS output:
```csharp
var output = new MPEGTSOutput("output.ts", useAAC: true);
// Configure video encoder
if (NVENCH264EncoderSettings.IsAvailable())
{
    output.Video = new NVENCH264EncoderSettings();
}
else if (QSVH264EncoderSettings.IsAvailable())
{
    output.Video = new QSVH264EncoderSettings();
}
else
{
    output.Video = new OpenH264EncoderSettings();
}
// Configure audio encoder based on platform
#if NET_WINDOWS
output.Audio = new MFAACEncoderSettings();
#else
output.Audio = new VOAACEncoderSettings();
#endif
// Optional: Add custom processing
output.CustomVideoProcessor = new YourCustomVideoProcessor();
output.CustomAudioProcessor = new YourCustomAudioProcessor();
```
## Windows-only MPEG-TS output
[!badge variant="dark" size="xl" text="VideoCaptureCore"] [!badge variant="dark" size="xl" text="VideoEditCore"]
The `MPEGTSOutput` class provides configuration settings for MPEG Transport Stream (MPEG-TS) output in the VisioForge video processing framework. This class inherits from `MFBaseOutput` and implements the `IVideoCaptureBaseOutput` interface, enabling it to be used specifically for video capture scenarios with MPEG-TS formatting.
### Class Hierarchy
```text
MFBaseOutput
└── MPEGTSOutput
```
### Inherited Video Settings
The [MPEGTSOutput](https://api.visioforge.org/dotnet/api/VisioForge.Core.Types.Output.MPEGTSOutput.html) class inherits video encoding capabilities from [MFBaseOutput](https://api.visioforge.org/dotnet/api/VisioForge.Core.Types.Output.MFBaseOutput.html), which includes:
**Video Encoding Configuration**: Through the `Video` property of type [MFVideoEncoderSettings](https://api.visioforge.org/dotnet/api/VisioForge.Core.Types.Output.MFVideoEncoderSettings.html), supporting:
- Multiple codec options (H.264/H.265) with hardware acceleration support
- Bitrate control (CBR/VBR)
- Quality settings
- Frame type and GOP structure configuration
- Interlacing options
- Resolution and aspect ratio controls
### Inherited Audio Settings
Audio configuration is handled through the inherited `Audio` property of type [M4AOutput](https://api.visioforge.org/dotnet/api/VisioForge.Core.Types.Output.M4AOutput.html), which provides AAC audio encoding with configurable:
- Version (default: MPEG-4)
- Object type (default: AAC-LC)
- Bitrate (default: 128 kbps)
- Output format (default: RAW)
### Usage
#### Basic Implementation
```csharp
// Create VideoCaptureCore instance
var core = new VideoCaptureCore();
// Set output filename
core.Output_Filename = "output.ts";
// Create MPEG-TS output
var mpegtsOutput = new MPEGTSOutput();
// Configure video settings
mpegtsOutput.Video.Codec = MFVideoEncoder.MS_H264;
mpegtsOutput.Video.AvgBitrate = 2000; // 2 Mbps
mpegtsOutput.Video.RateControl = MFCommonRateControlMode.CBR;
// Configure audio settings
mpegtsOutput.Audio.Bitrate = 128; // 128 kbps
mpegtsOutput.Audio.Version = AACVersion.MPEG4;
core.Output_Format = mpegtsOutput;
```
#### Serialization Support
The class provides built-in JSON serialization support for saving and loading configurations:
```csharp
// Save configuration
string jsonConfig = mpegtsOutput.Save();
// Load configuration
MPEGTSOutput loadedConfig = MPEGTSOutput.Load(jsonConfig);
```
### Default Configuration
The `MPEGTSOutput` class initializes with these default settings:
#### Video Defaults (inherited from MFBaseOutput)
- Average Bitrate: 2000 kbps
- Codec: Microsoft H.264
- Profile: Main
- Level: 4.2
- Rate Control: CBR
- Quality vs Speed: 85
- Maximum Reference Frames: 2
- GOP Size: 50 frames
- B-Picture Count: 0
- Low Latency Mode: Disabled
- CABAC: Disabled
- Interlace Mode: Progressive
#### Audio Defaults
- Bitrate: 128 kbps
- AAC Version: MPEG-4
- AAC Object Type: Low Complexity (LC)
- Output Format: RAW
### Best Practices
1. **Bitrate Configuration**:
   - For streaming applications, ensure the combined video and audio bitrates are within your target bandwidth
   - Consider using VBR for storage scenarios and CBR for streaming
2. **Hardware Acceleration**:
   - When available, use hardware-accelerated encoders (QSV, NVENC, AMD) for better performance
   - Fall back to MS_H264/MS_H265 when hardware acceleration is unavailable
3. **Quality Optimization** (see the sketch after this list):
   - For higher quality at the cost of performance, increase the `QualityVsSpeed` value
   - Enable CABAC for better compression efficiency in non-low-latency scenarios
   - Adjust `MaxKeyFrameSpacing` based on your specific use case (lower values for better seeking, higher values for better compression)
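A short sketch of recommendation 3, assuming the inherited `MFVideoEncoderSettings` property names referenced above (`QualityVsSpeed`, `CABAC`, `MaxKeyFrameSpacing`):

```csharp
var mpegtsOutput = new MPEGTSOutput();
// Favor quality over encoding speed (the default is 85).
mpegtsOutput.Video.QualityVsSpeed = 95;
// CABAC improves compression when low latency is not required.
mpegtsOutput.Video.CABAC = true;
// Lower keyframe spacing improves seeking at some cost in compression.
mpegtsOutput.Video.MaxKeyFrameSpacing = 25;
```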
### Technical Notes
1. **MPEG-TS Characteristics**:
   - Suitable for streaming and broadcasting applications
   - Provides error resilience through packet-based structure
   - Supports multiple programs and elementary streams
2. **Performance Considerations**:
   - Low latency mode trades quality for reduced encoding delay
   - B-frames improve compression but increase latency
   - Hardware acceleration can significantly reduce CPU usage
### Required redists
- Video Capture SDK redist [x86](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.VideoCapture.x86/) [x64](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.VideoCapture.x64/)
- Video Edit SDK redist [x86](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.VideoEdit.x86/) [x64](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.VideoEdit.x64/)
- MP4 redist [x86](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.MP4.x86/) [x64](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.MP4.x64/)
---
Visit our [GitHub](https://github.com/visioforge/.Net-SDK-s-samples) page to get more code samples.
---END OF PAGE---
# Local File: .\dotnet\general\output-formats\mxf.md
---
title: Professional MXF Integration for .NET Applications
description: Master MXF output implementation in VisioForge SDKs with detailed code samples for professional video workflows. Learn hardware acceleration, codec optimization, cross-platform considerations, and best practices for broadcast-ready MXF files in your .NET applications.
sidebar_label: MXF
---
# MXF Output in VisioForge .NET SDKs
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
[!badge variant="dark" size="xl" text="VideoCaptureCoreX"] [!badge variant="dark" size="xl" text="VideoEditCoreX"] [!badge variant="dark" size="xl" text="MediaBlocksPipeline"]
Material Exchange Format (MXF) is an industry-standard container format designed for professional video applications. It's widely adopted in broadcast environments, post-production workflows, and archival systems. VisioForge SDKs provide robust cross-platform MXF output capabilities that allow developers to integrate this professional format into their applications.
## Understanding MXF Format
MXF serves as a wrapper that can contain various types of video and audio data along with metadata. The format was designed to address interoperability issues in professional video workflows:
- **Industry Standard**: Adopted by major broadcasters worldwide
- **Professional Metadata**: Supports extensive technical and descriptive metadata
- **Versatile Container**: Compatible with numerous audio and video codecs
- **Cross-Platform**: Supported across Windows, macOS, and Linux
## Getting Started with MXF Output
Implementing MXF output in VisioForge SDKs requires just a few steps. The basic setup involves:
1. Creating an MXF output object
2. Specifying video and audio stream types
3. Configuring encoder settings
4. Adding the output to your pipeline
### Basic Implementation
Here's the foundational code to create an MXF output:
```csharp
var mxfOutput = new MXFOutput(
filename: "output.mxf",
videoStreamType: MXFVideoStreamType.H264,
audioStreamType: MXFAudioStreamType.MPEG
);
```
This minimal implementation creates a valid MXF file with default encoding settings. For professional applications, you'll typically want to customize the encoding parameters further.
## Video Encoding Options for MXF
The quality and compatibility of your MXF output largely depends on your choice of video encoder. VisioForge SDKs support multiple encoder options to balance performance, quality, and compatibility.
### Hardware-Accelerated Encoders
For optimal performance in real-time applications, hardware-accelerated encoders are recommended:
#### NVIDIA NVENC Encoders
```csharp
// Check availability first
if (NVENCH264EncoderSettings.IsAvailable())
{
    var nvencSettings = new NVENCH264EncoderSettings
    {
        Bitrate = 8000, // 8 Mbps
    };
    mxfOutput.Video = nvencSettings;
}
```
#### Intel Quick Sync Video (QSV) Encoders
```csharp
if (QSVH264EncoderSettings.IsAvailable())
{
    var qsvSettings = new QSVH264EncoderSettings
    {
        Bitrate = 8000,
    };
    mxfOutput.Video = qsvSettings;
}
```
#### AMD Advanced Media Framework (AMF) Encoders
```csharp
if (AMFH264EncoderSettings.IsAvailable())
{
    var amfSettings = new AMFH264EncoderSettings
    {
        Bitrate = 8000,
    };
    mxfOutput.Video = amfSettings;
}
```
### Software-Based Encoders
When hardware acceleration isn't available, software encoders provide reliable alternatives:
#### OpenH264 Encoder
```csharp
var openH264Settings = new OpenH264EncoderSettings
{
    Bitrate = 8000,
};
mxfOutput.Video = openH264Settings;
```
### High-Efficiency Video Coding (HEVC/H.265)
For applications requiring higher compression efficiency:
```csharp
// NVIDIA HEVC encoder
if (NVENCHEVCEncoderSettings.IsAvailable())
{
    var nvencHevcSettings = new NVENCHEVCEncoderSettings
    {
        Bitrate = 5000, // Lower bitrate possible with HEVC
    };
    mxfOutput.Video = nvencHevcSettings;
}
```
## Audio Encoding for MXF Files
While video often gets the most attention, proper audio encoding is crucial for professional MXF outputs. VisioForge SDKs offer multiple audio encoder options:
### AAC Encoders
AAC is the preferred codec for most professional applications:
```csharp
// Media Foundation AAC (Windows-only)
#if NET_WINDOWS
var mfAacSettings = new MFAACEncoderSettings
{
    Bitrate = 192, // 192 kbps
    SampleRate = 48000 // Professional standard
};
mxfOutput.Audio = mfAacSettings;
#else
// Cross-platform AAC alternative
var voAacSettings = new VOAACEncoderSettings
{
    Bitrate = 192,
    SampleRate = 48000
};
mxfOutput.Audio = voAacSettings;
#endif
```
### MP3 Encoder
For maximum compatibility:
```csharp
var mp3Settings = new MP3EncoderSettings
{
    Bitrate = 320, // 320 kbps
    SampleRate = 48000,
    ChannelMode = MP3ChannelMode.Stereo
};
mxfOutput.Audio = mp3Settings;
```
## Advanced MXF Configuration
### Custom Processing Pipelines
One of the powerful features of VisioForge SDKs is the ability to add custom processing to your MXF output chain:
```csharp
// Add custom video processing
mxfOutput.CustomVideoProcessor = yourVideoProcessingBlock;
// Add custom audio processing
mxfOutput.CustomAudioProcessor = yourAudioProcessingBlock;
```
### Sink Configuration
Fine-tune your MXF output with sink settings:
```csharp
// Access sink settings
mxfOutput.Sink.Filename = "new_output.mxf";
```
## Cross-Platform Considerations
Building applications that work across different platforms requires careful planning:
```csharp
// Platform-specific encoder selection
var mxfOutput = new MXFOutput(
filename: "output.mxf",
videoStreamType: MXFVideoStreamType.H264,
audioStreamType: MXFAudioStreamType.MPEG
);
#if NET_WINDOWS
// Windows-specific settings
if (QSVH264EncoderSettings.IsAvailable())
{
    mxfOutput.Video = new QSVH264EncoderSettings();
    mxfOutput.Audio = new MFAACEncoderSettings();
}
#elif NET_MACOS
// macOS-specific settings
mxfOutput.Video = new OpenH264EncoderSettings();
mxfOutput.Audio = new VOAACEncoderSettings();
#else
// Linux fallback
mxfOutput.Video = new OpenH264EncoderSettings();
mxfOutput.Audio = new MP3EncoderSettings();
#endif
```
## Error Handling and Validation
Robust MXF implementations require proper error handling:
```csharp
try
{
    // Create MXF output
    var mxfOutput = new MXFOutput(
        filename: Path.Combine(outputDirectory, "output.mxf"),
        videoStreamType: MXFVideoStreamType.H264,
        audioStreamType: MXFAudioStreamType.MPEG
    );
    // Validate encoder availability
    if (!OpenH264EncoderSettings.IsAvailable())
    {
        throw new ApplicationException("No compatible H.264 encoder found");
    }
    // Validate output directory
    var directoryInfo = new DirectoryInfo(Path.GetDirectoryName(mxfOutput.Sink.Filename));
    if (!directoryInfo.Exists)
    {
        Directory.CreateDirectory(directoryInfo.FullName);
    }
    pipeline.AddBlock(mxfOutput);
    // Connect blocks
    // ...
}
catch (Exception ex)
{
    logger.LogError($"MXF output error: {ex.Message}");
    // Implement fallback strategy
}
```
## Performance Optimization
For optimal MXF output performance:
1. **Prioritize Hardware Acceleration**: Always check for and use hardware encoders first
2. **Buffer Management**: Adjust buffer sizes based on system capabilities
3. **Parallel Processing**: Utilize multi-threading where appropriate
4. **Preset Selection**: Choose encoder presets based on quality vs. speed requirements
## Complete Implementation Example
Here's a full example demonstrating MXF implementation with fallback options:
```csharp
// Create MXF output with specific stream types
var mxfOutput = new MXFOutput(
filename: "output.mxf",
videoStreamType: MXFVideoStreamType.H264,
audioStreamType: MXFAudioStreamType.MPEG
);
// Configure video encoder with prioritized fallback chain
if (NVENCH264EncoderSettings.IsAvailable())
{
    var nvencSettings = new NVENCH264EncoderSettings
    {
        Bitrate = 8000,
    };
    mxfOutput.Video = nvencSettings;
}
else if (QSVH264EncoderSettings.IsAvailable())
{
    var qsvSettings = new QSVH264EncoderSettings
    {
        Bitrate = 8000,
    };
    mxfOutput.Video = qsvSettings;
}
else if (AMFH264EncoderSettings.IsAvailable())
{
    var amfSettings = new AMFH264EncoderSettings
    {
        Bitrate = 8000,
    };
    mxfOutput.Video = amfSettings;
}
else
{
    // Software fallback
    var openH264Settings = new OpenH264EncoderSettings
    {
        Bitrate = 8000,
    };
    mxfOutput.Video = openH264Settings;
}
// Configure platform-optimized audio
#if NET_WINDOWS
mxfOutput.Audio = new MFAACEncoderSettings
{
    Bitrate = 192,
    SampleRate = 48000
};
#else
mxfOutput.Audio = new VOAACEncoderSettings
{
    Bitrate = 192,
    SampleRate = 48000
};
#endif
// Add to pipeline and start
pipeline.AddBlock(mxfOutput);
// Connect blocks
// ...
// Start the pipeline
await pipeline.StartAsync();
```
By following this guide, you can implement professional-grade MXF output in your applications using VisioForge .NET SDKs, ensuring compatibility with broadcast workflows and post-production systems.
---END OF PAGE---
# Local File: .\dotnet\general\output-formats\webm.md
---
title: WebM Video Output for .NET - Developer Guide
description: Master WebM video implementation in .NET with detailed code examples for VP8, VP9, and AV1 codecs. Learn optimization strategies for quality, performance, and file size to create efficient web-ready videos across Windows and cross-platform applications.
sidebar_label: WebM
---
# WebM Video Output in VisioForge .NET SDKs
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## What is WebM?
WebM is an open-source, royalty-free media file format optimized for web delivery. Developed to provide efficient video streaming with minimal processing requirements, WebM has become a standard for HTML5 video content. The format supports modern codecs including VP8 and VP9 for video compression, along with Vorbis and Opus for audio encoding.
The key advantages of WebM include:
- **Web-optimized performance** with fast loading times
- **Broad browser support** across major platforms
- **High-quality video** at smaller file sizes
- **Open-source licensing** without royalty costs
- **Efficient streaming** capabilities for media applications
## Windows Implementation
[!badge variant="dark" size="xl" text="VideoCaptureCore"] [!badge variant="dark" size="xl" text="VideoEditCore"]
On Windows platforms, VisioForge's implementation leverages the [WebMOutput](https://api.visioforge.org/dotnet/api/VisioForge.Core.Types.Output.WebMOutput.html) class from the `VisioForge.Core.Types.Output` namespace.
### Basic Configuration
To quickly implement WebM output in your Windows application:
```csharp
using VisioForge.Core.Types.Output;
// Initialize WebM output settings
var webmOutput = new WebMOutput();
// Configure essential parameters
webmOutput.Video_Mode = VP8QualityMode.Realtime;
webmOutput.Video_EndUsage = VP8EndUsageMode.VBR;
webmOutput.Video_Encoder = WebMVideoEncoder.VP8;
webmOutput.Video_Bitrate = 2000;
webmOutput.Audio_Quality = 80;
// Apply to your core instance
var core = new VideoCaptureCore(); // or VideoEditCore
core.Output_Format = webmOutput;
core.Output_Filename = "output.webm";
```
### Video Quality Settings
Fine-tuning your WebM video quality involves balancing several parameters:
```csharp
var webmOutput = new WebMOutput();
// Quality parameters
webmOutput.Video_MinQuantizer = 4; // Lower values = higher quality (range: 0-63)
webmOutput.Video_MaxQuantizer = 48; // Upper quality bound (range: 0-63)
webmOutput.Video_Bitrate = 2000; // Target bitrate in kbps
// Encode with multiple threads for better performance
webmOutput.Video_ThreadCount = 4; // Adjust based on available CPU cores
```
### Keyframe Control
Proper keyframe configuration is crucial for efficient streaming and seeking:
```csharp
// Keyframe settings
webmOutput.Video_Keyframe_MinInterval = 30; // Minimum frames between keyframes
webmOutput.Video_Keyframe_MaxInterval = 300; // Maximum frames between keyframes
webmOutput.Video_Keyframe_Mode = VP8KeyframeMode.Auto;
```
### Performance Optimization
Balance encoding speed and quality with these parameters:
```csharp
// Performance settings
webmOutput.Video_CPUUsed = 0; // Range: -16 to 16 (higher = faster encoding, lower quality)
webmOutput.Video_LagInFrames = 25; // Frame look-ahead buffer (higher = better quality)
webmOutput.Video_ErrorResilient = true; // Enable for streaming applications
```
### Buffer Management
For streaming applications, proper buffer configuration improves playback stability:
```csharp
// Buffer settings
webmOutput.Video_Decoder_Buffer_Size = 6000; // Buffer size in milliseconds
webmOutput.Video_Decoder_Buffer_InitialSize = 4000; // Initial buffer fill level
webmOutput.Video_Decoder_Buffer_OptimalSize = 5000; // Target buffer level
// Rate control fine-tuning
webmOutput.Video_UndershootPct = 50; // Allows bitrate to drop below target
webmOutput.Video_OvershootPct = 50; // Allows bitrate to exceed target temporarily
```
## Cross-Platform Implementation
[!badge variant="dark" size="xl" text="VideoCaptureCoreX"] [!badge variant="dark" size="xl" text="VideoEditCoreX"] [!badge variant="dark" size="xl" text="MediaBlocksPipeline"]
For cross-platform applications, VisioForge provides the [WebMOutput](https://api.visioforge.org/dotnet/api/VisioForge.Core.Types.X.Output.WebMOutput.html) class from the `VisioForge.Core.Types.X.Output` namespace, offering enhanced codec flexibility.
### Basic Setup
```csharp
using VisioForge.Core.Types.X.Output;
using VisioForge.Core.Types.X.VideoEncoders;
using VisioForge.Core.Types.X.AudioEncoders;
// Create WebM output
var webmOutput = new WebMOutput("output.webm");
// Configure video encoder (VP8)
webmOutput.Video = new VP8EncoderSettings();
// Configure audio encoder (Vorbis)
webmOutput.Audio = new VorbisEncoderSettings();
```
### Integration with Video Capture SDK
```csharp
// Add WebM output to Video Capture SDK
var core = new VideoCaptureCoreX();
core.Outputs_Add(webmOutput, true);
```
### Integration with Video Edit SDK
```csharp
// Set WebM as output format for Video Edit SDK
var core = new VideoEditCoreX();
core.Output_Format = webmOutput;
```
### Integration with Media Blocks SDK
```csharp
// Create encoders
var vorbis = new VorbisEncoderSettings();
var vp9 = new VP9EncoderSettings();
// Configure WebM output block
var webmSettings = new WebMSinkSettings("output.webm");
var webmOutput = new WebMOutputBlock(webmSettings, vp9, vorbis);
// Add to your pipeline
// pipeline.AddBlock(webmOutput);
```
## Codec Selection Guide
### Video Codecs
VisioForge SDKs support multiple video codecs for WebM:
1. **VP8**
- Faster encoding speed
- Lower computational requirements
- Wider compatibility with older browsers
- Good quality for standard video
2. **VP9**
- Better compression efficiency (30-50% smaller files vs. VP8)
- Higher quality at the same bitrate
- Slower encoding performance
- Ideal for high-resolution content
3. **AV1**
- Next-generation codec with superior compression
- Highest quality per bit
- Significantly higher encoding complexity
- Best for situations where encoding time isn't critical
For codec-specific settings, refer to our dedicated documentation pages:
- [VP8/VP9 Configuration](../video-encoders/vp8-vp9.md)
- [AV1 Configuration](../video-encoders/av1.md)
### Audio Codecs
Two primary audio codec options are available:
1. **Vorbis**
- Established codec with good overall quality
- Compatible with all WebM-supporting browsers
- Default choice for most applications
2. **Opus**
- Superior audio quality, especially at low bitrates
- Better for voice content and music
- Lower latency for streaming applications
- More efficient for bandwidth-constrained scenarios
For detailed audio settings, see:
- [Vorbis Configuration](../audio-encoders/vorbis.md)
- [Opus Configuration](../audio-encoders/opus.md)
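Putting this guidance together, a cross-platform configuration for high-quality web delivery might pair VP9 video with Opus audio. The sketch below assumes an `OpusEncoderSettings` class living alongside the `VorbisEncoderSettings` shown earlier in `VisioForge.Core.Types.X.AudioEncoders`:
```csharp
using VisioForge.Core.Types.X.Output;
using VisioForge.Core.Types.X.VideoEncoders;
using VisioForge.Core.Types.X.AudioEncoders;
// Create the WebM output container
var webmOutput = new WebMOutput("output.webm");
// VP9: better compression efficiency than VP8 at the same bitrate
webmOutput.Video = new VP9EncoderSettings();
// Opus: superior audio quality at low bitrates (class name assumed)
webmOutput.Audio = new OpusEncoderSettings();
```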
## Optimization Strategies
### For Video Quality
To achieve the highest possible video quality:
- Use VP9 or AV1 for video encoding
- Set lower quantizer values (higher quality)
- Increase `LagInFrames` for better lookahead analysis
- Use 2-pass encoding for offline video processing
- Set higher bitrates for complex visual content
```csharp
// Quality-focused VP9 configuration
var vp9 = new VP9EncoderSettings
{
Bitrate = 3000, // Higher bitrate for better quality
Speed = 0, // Slowest/highest quality encoding
};
```
### For Real-time Applications
When low latency is critical:
- Choose VP8 for faster encoding
- Use single-pass encoding
- Set `CPUUsed` to higher values
- Use smaller frame lookahead buffers
- Configure shorter keyframe intervals
```csharp
// Low-latency VP8 configuration
var vp8 = new VP8EncoderSettings
{
EndUsage = VP8EndUsageMode.CBR, // Constant bitrate for predictable streaming
Speed = 8, // Faster encoding
Deadline = VP8Deadline.Realtime, // Prioritize speed over quality
ErrorResilient = true // Better recovery from packet loss
};
```
### For File Size Efficiency
To minimize storage requirements:
- Use VP9 or AV1 for maximum compression
- Enable two-pass encoding
- Set appropriate target bitrates
- Use Variable Bit Rate (VBR) encoding
- Avoid unnecessary keyframes
```csharp
// Storage-optimized configuration
var av1 = new AV1EncoderSettings
{
EndUsage = AOMEndUsage.VBR, // Variable bitrate for efficiency
TwoPass = true, // Enable multi-pass encoding
CpuUsed = 2, // Balance between speed and compression
KeyframeMaxDistance = 300 // Fewer keyframes = smaller files
};
```
## Dependencies
To implement WebM output, add the appropriate NuGet packages to your project:
- For x86 platforms: [VisioForge.DotNet.Core.Redist.WebM.x86](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.WebM.x86)
- For x64 platforms: [VisioForge.DotNet.Core.Redist.WebM.x64](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.WebM.x64)
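You can add these packages through the NuGet Package Manager in Visual Studio or via the Package Manager console:
```
Install-Package VisioForge.DotNet.Core.Redist.WebM.x64
```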
## Learning Resources
For additional implementation examples and more advanced scenarios, visit our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples) containing sample code for all VisioForge SDKs.
---END OF PAGE---
# Local File: .\dotnet\general\output-formats\wmv.md
---
title: WMV File Output and Encoding Guide
description: Learn how to implement Windows Media Video (WMV) encoding in .NET applications. Covers audio/video configuration, streaming options, profile management, and cross-platform solutions with code examples.
sidebar_label: Windows Media Video
---
# Windows Media Video encoders
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
This documentation covers the Windows Media Video (WMV) encoding capabilities available in VisioForge, including both Windows-specific and cross-platform solutions.
## Windows-only output
[!badge variant="dark" size="xl" text="VideoCaptureCore"] [!badge variant="dark" size="xl" text="VideoEditCore"]
The [WMVOutput](https://api.visioforge.org/dotnet/api/VisioForge.Core.Types.Output.WMVOutput.html) class provides comprehensive Windows Media encoding capabilities for both audio and video on Windows platforms.
### Audio Encoding Features
The `WMVOutput` class offers several audio-specific configuration options:
- Custom audio codec selection
- Audio format customization
- Multiple stream modes
- Bitrate control
- Quality settings
- Language support
- Buffer size management
### Rate Control Modes
WMV encoding supports four rate control modes through the `WMVStreamMode` enum:
1. CBR (Constant Bitrate)
2. VBRQuality (Variable Bitrate based on quality)
3. VBRBitrate (Variable Bitrate with target bitrate)
4. VBRPeakBitrate (Variable Bitrate with peak bitrate constraint)
### Configuration Modes
The encoder can be configured in several ways using the `WMVMode` enum:
- ExternalProfile: Load settings from a profile file
- ExternalProfileFromText: Load settings from a text string
- InternalProfile: Use built-in profiles
- CustomSettings: Manual configuration
- V8SystemProfile: Use Windows Media 8 system profiles
### Sample Code
Create new WMV custom output configuration:
```csharp
var wmvOutput = new WMVOutput
{
// Basic configuration
Mode = WMVMode.CustomSettings,
// Audio settings
Custom_Audio_StreamPresent = true,
Custom_Audio_Mode = WMVStreamMode.VBRQuality,
Custom_Audio_Quality = 98,
Custom_Audio_PeakBitrate = 192000,
Custom_Audio_PeakBufferSize = 3,
// Optional language setting
Custom_Audio_LanguageID = "en-US"
};
```
Using an internal profile:
```csharp
var profileWmvOutput = new WMVOutput
{
Mode = WMVMode.InternalProfile,
Internal_Profile_Name = "Windows Media Video 9 for Local Network (768 kbps)"
};
```
Network streaming configuration:
```csharp
var streamingWmvOutput = new WMVOutput
{
Mode = WMVMode.CustomSettings,
Network_Streaming_WMV_Maximum_Clients = 20,
Custom_Audio_Mode = WMVStreamMode.CBR
};
```
### Custom Profile Configuration
Custom profiles give you the most flexibility by allowing you to configure every aspect of the encoding process. Here are several examples for different scenarios:
High-quality video streaming configuration:
```csharp
var highQualityConfig = new WMVOutput
{
Mode = WMVMode.CustomSettings,
// Video settings
Custom_Video_StreamPresent = true,
Custom_Video_Mode = WMVStreamMode.VBRQuality,
Custom_Video_Quality = 95,
Custom_Video_Width = 1920,
Custom_Video_Height = 1080,
Custom_Video_FrameRate = 30.0,
Custom_Video_KeyFrameInterval = 4,
Custom_Video_Smoothness = 80,
Custom_Video_Buffer_UseDefault = false,
Custom_Video_Buffer_Size = 4000,
// Audio settings
Custom_Audio_StreamPresent = true,
Custom_Audio_Mode = WMVStreamMode.VBRQuality,
Custom_Audio_Quality = 98,
Custom_Audio_Format = "48kHz 16bit Stereo",
Custom_Audio_PeakBitrate = 320000,
Custom_Audio_PeakBufferSize = 3,
// Profile metadata
Custom_Profile_Name = "High Quality Streaming",
Custom_Profile_Description = "1080p streaming profile with high quality audio",
Custom_Profile_Language = "en-US"
};
```
Low bandwidth configuration for mobile streaming:
```csharp
var mobileLowBandwidthConfig = new WMVOutput
{
Mode = WMVMode.CustomSettings,
// Video settings optimized for mobile
Custom_Video_StreamPresent = true,
Custom_Video_Mode = WMVStreamMode.CBR,
Custom_Video_Bitrate = 800000, // 800 kbps
Custom_Video_Width = 854,
Custom_Video_Height = 480,
Custom_Video_FrameRate = 24.0,
Custom_Video_KeyFrameInterval = 5,
Custom_Video_Smoothness = 60,
// Audio settings for low bandwidth
Custom_Audio_StreamPresent = true,
Custom_Audio_Mode = WMVStreamMode.CBR,
Custom_Audio_PeakBitrate = 64000, // 64 kbps
Custom_Audio_Format = "44kHz 16bit Mono",
Custom_Profile_Name = "Mobile Low Bandwidth",
Custom_Profile_Description = "480p optimized for mobile devices"
};
```
Audio-focused configuration for music content:
```csharp
var audioFocusedConfig = new WMVOutput
{
Mode = WMVMode.CustomSettings,
// High quality audio settings
Custom_Audio_StreamPresent = true,
Custom_Audio_Mode = WMVStreamMode.VBRQuality,
Custom_Audio_Quality = 99,
Custom_Audio_Format = "96kHz 24bit Stereo",
Custom_Audio_PeakBitrate = 512000,
Custom_Audio_PeakBufferSize = 4,
// Minimal video settings
Custom_Video_StreamPresent = true,
Custom_Video_Mode = WMVStreamMode.VBRBitrate,
Custom_Video_Bitrate = 500000,
Custom_Video_Width = 1280,
Custom_Video_Height = 720,
Custom_Video_FrameRate = 25.0,
Custom_Profile_Name = "Audio Focus",
Custom_Profile_Description = "High quality audio configuration for music content"
};
```
### Internal Profile Usage
Internal profiles provide pre-configured settings optimized for common scenarios. Here are examples of using different internal profiles:
Standard broadcast quality profile:
```csharp
var broadcastProfile = new WMVOutput
{
Mode = WMVMode.InternalProfile,
Internal_Profile_Name = "Windows Media Video 9 Advanced Profile",
Custom_Video_TVSystem = WMVTVSystem.NTSC // Optional TV system override
};
```
Web streaming profile:
```csharp
var webStreamingProfile = new WMVOutput
{
Mode = WMVMode.InternalProfile,
Internal_Profile_Name = "Windows Media Video 9 for Broadband (2 Mbps)",
Network_Streaming_WMV_Maximum_Clients = 100 // Optional streaming override
};
```
Low latency profile for live streaming:
```csharp
var liveStreamingProfile = new WMVOutput
{
Mode = WMVMode.InternalProfile,
Internal_Profile_Name = "Windows Media Video 9 Screen (Low Rate)",
Network_Streaming_WMV_Maximum_Clients = 50
};
```
### External Profile Configuration
External profiles allow you to load encoding settings from files or text. This is useful for sharing configurations across different projects or storing multiple configurations:
Loading profile from a file:
```csharp
var fileBasedProfile = new WMVOutput
{
Mode = WMVMode.ExternalProfile,
External_Profile_FileName = @"C:\Profiles\HighQualityStreaming.prx"
};
```
Loading profile from text configuration:
```csharp
var textBasedProfile = new WMVOutput
{
Mode = WMVMode.ExternalProfileFromText,
External_Profile_Text = @"
"
};
```
Saving and loading profiles programmatically:
```csharp
async Task SaveAndLoadProfile(WMVOutput profile, string filename)
{
// Save profile configuration to JSON
string jsonConfig = profile.Save();
await File.WriteAllTextAsync(filename, jsonConfig);
// Load profile configuration from JSON
string loadedJson = await File.ReadAllTextAsync(filename);
WMVOutput loadedProfile = WMVOutput.Load(loadedJson);
}
```
Example usage of profile saving/loading:
```csharp
var profile = new WMVOutput
{
Mode = WMVMode.CustomSettings,
// ... configure settings ...
};
await SaveAndLoadProfile(profile, "encoding_profile.json");
```
### Working with Legacy Windows Media 8 Profiles
For compatibility with older systems, you can use Windows Media 8 system profiles:
Using Windows Media 8 profile:
```csharp
var wmv8Profile = new WMVOutput
{
Mode = WMVMode.V8SystemProfile,
V8ProfileName = "Windows Media Video 8 for Dial-up Access (28.8 Kbps)",
};
```
Customizing streaming settings for Windows Media 8 profiles:
```csharp
var wmv8StreamingProfile = new WMVOutput
{
Mode = WMVMode.V8SystemProfile,
V8ProfileName = "Windows Media Video 8 for Local Area Network (384 Kbps)",
Network_Streaming_WMV_Maximum_Clients = 25,
Custom_Video_TVSystem = WMVTVSystem.PAL // Optional TV system override
};
```
### Apply settings to your core object
```csharp
var core = new VideoCaptureCore(); // or VideoEditCore
core.Output_Format = wmvOutput;
core.Output_Filename = "output.wmv";
```
## Cross-platform WMV output
[!badge variant="dark" size="xl" text="VideoCaptureCoreX"] [!badge variant="dark" size="xl" text="VideoEditCoreX"] [!badge variant="dark" size="xl" text="MediaBlocksPipeline"]
The `WMVEncoderSettings` class provides a cross-platform solution for WMV encoding using GStreamer technology.
### Features
- Platform-independent implementation
- Integration with GStreamer backend
- Simple configuration interface
- Availability checking
### Sample Code
Add the WMV output to the Video Capture SDK core instance:
```csharp
var wmvOutput = new WMVOutput("output.wmv");
var core = new VideoCaptureCoreX();
core.Outputs_Add(wmvOutput, true);
```
Set the output format for the Video Edit SDK core instance:
```csharp
var wmvOutput = new WMVOutput("output.wmv");
var core = new VideoEditCoreX();
core.Output_Format = wmvOutput;
```
Create a Media Blocks WMV output instance:
```csharp
var wma = new WMAEncoderSettings();
var wmv = new WMVEncoderSettings();
var sinkSettings = new ASFSinkSettings("output.wmv");
var wmvOutput = new WMVOutputBlock(sinkSettings, wmv, wma);
```
### Choosing Between Encoders
Consider the following factors when choosing between Windows-specific `WMVOutput` and cross-platform `WMVEncoderSettings`:
#### Windows-Specific WMVOutput
- Pros:
- Full access to Windows Media format features
- Advanced rate control options
- Network streaming support
- Profile-based configuration
- Cons:
- Windows-only compatibility
- Requires Windows Media components
#### Cross-Platform WMV
- Pros:
- Platform independence
- Simpler implementation
- Cons:
- More limited feature set
- Basic configuration options only
## Best Practices
1. Always check encoder availability before use, especially with cross-platform implementations (see the sketch after this list)
2. Use appropriate rate control modes based on your quality and bandwidth requirements
3. Consider using internal profiles for common scenarios when using WMVOutput
4. Implement proper error handling for codec availability checks
5. Test encoding performance across different platforms when using cross-platform solutions
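As a minimal sketch of practices 1 and 4, assuming the cross-platform `WMVEncoderSettings` and `WMAEncoderSettings` classes expose a static `IsAvailable()` check (availability checking is listed among the cross-platform features above):
```csharp
// Verify the GStreamer-based WMV/WMA encoders before configuring output
if (WMVEncoderSettings.IsAvailable() && WMAEncoderSettings.IsAvailable())
{
    var core = new VideoCaptureCoreX();
    core.Outputs_Add(new WMVOutput("output.wmv"), true);
}
else
{
    // Encoders missing: fall back to another container, e.g. MP4 with H.264
}
```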
---
Visit our [GitHub](https://github.com/visioforge/.Net-SDK-s-samples) page to get more code samples.
---END OF PAGE---
# Local File: .\dotnet\general\video-effects\add.md
---
title: Implementing Video Effects in .NET Applications
description: Master the implementation of video effects in .NET with this detailed tutorial. Learn to add, update, and configure video effect parameters in multiple SDK environments including capture, playback, and editing applications with practical C# code examples.
sidebar_label: Implementing Video Effects
---
# Implementing Video Effects in .NET SDK Applications
Video effects can significantly enhance the visual quality and user experience of your media applications. This guide demonstrates how to properly implement and manage video effects across various .NET SDK environments.
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Player SDK .Net"](https://www.visioforge.com/media-player-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## Implementation Overview
When working with video processing in .NET applications, you'll often need to apply various effects to enhance or modify the video content. The following sections explain the process step-by-step.
## C# Code Implementation
### Example: Lightness Effect in Media Player SDK
This detailed example demonstrates how to implement a lightness effect, which is a common video enhancement technique. The same implementation approach applies to Video Edit SDK .Net and Video Capture SDK .Net environments.
### Step 1: Define the Effect Interface
First, you need to declare the appropriate interface for your desired effect:
```cs
IVideoEffectLightness lightness;
```
### Step 2: Retrieve or Create the Effect Instance
Each effect requires a unique identifier. The following code checks if the effect already exists in the SDK control:
```cs
var effect = MediaPlayer1.Video_Effects_Get("Lightness");
```
### Step 3: Add the Effect if Not Present
If the effect doesn't exist yet, you'll need to instantiate and add it to your video processing pipeline:
```cs
if (effect == null)
{
lightness = new VideoEffectLightness(true, 100);
MediaPlayer1.Video_Effects_Add(lightness);
}
```
### Step 4: Update Existing Effect Parameters
If the effect is already present, you can modify its parameters to achieve the desired visual outcome:
```cs
else
{
lightness = effect as IVideoEffectLightness;
if (lightness != null)
{
lightness.Value = 100;
}
}
```
## Important Implementation Notes
For proper functionality, ensure you enable effects processing before starting video playback or capture:
* Set the `Video_Effects_Enable` property to `true` before calling any `Play()` or `Start()` methods (see the example after this list)
* Effects will not be applied if this property is not enabled
* Changing effect parameters during playback will update the visual output in real-time
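A minimal example of the required order, assuming a Media Player SDK control named `MediaPlayer1` as in the code above:
```csharp
// 1. Enable effects processing BEFORE starting playback
MediaPlayer1.Video_Effects_Enable = true;
// 2. Add the desired effect (its parameters can still be changed at runtime)
MediaPlayer1.Video_Effects_Add(new VideoEffectLightness(true, 100));
// 3. Start playback - the effect is applied to the video stream
MediaPlayer1.Play();
```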
## System Requirements
To successfully implement video effects in your .NET application, you'll need:
* SDK redistributable packages properly installed
* Sufficient system resources for real-time video processing
* Appropriate .NET framework version
## Additional Resources
For more advanced implementations and examples of video effect techniques, explore the sample projects in the repository linked below.
---
Visit our [GitHub](https://github.com/visioforge/.Net-SDK-s-samples) repository for additional code samples and complete projects.
---END OF PAGE---
# Local File: .\dotnet\general\video-effects\image-overlay.md
---
title: Adding Image Overlays to Video Streams
description: Learn how to overlay images, animated GIFs, and transparent PNGs on video streams in .NET. Step-by-step guide with code examples for implementing image overlays using different formats and transparency effects.
sidebar_label: Image Overlay
---
# Image overlay
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Player SDK .Net"](https://www.visioforge.com/media-player-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
[!badge variant="dark" size="xl" text="VideoCaptureCore"] [!badge variant="dark" size="xl" text="MediaPlayerCore"] [!badge variant="dark" size="xl" text="VideoEditCore"]
## Introduction
This example demonstrates how to overlay an image on a video stream.
JPG, PNG, BMP, and GIF images are supported.
## Sample code
Most simple image overlay with image added from a file with custom position:
```csharp
var effect = new VideoEffectImageLogo(true, "imageoverlay");
effect.Filename = @"logo.png";
effect.Left = 100;
effect.Top = 100;
VideoCapture1.Video_Effects_Add(effect);
```
### Transparent image overlay
SDK fully supports transparency in PNG images. If you want to set a custom transparency level, you can use the `TransparencyLevel` property with a range (0..255).
```csharp
var effect = new VideoEffectImageLogo(true, "imageoverlay");
effect.Filename = @"logo.jpg";
effect.TransparencyLevel = 50;
VideoCapture1.Video_Effects_Add(effect);
```
### Animated GIF overlay
You can overlay an animated GIF image on a video stream. The SDK will play the GIF animation in the overlay.
```csharp
var effect = new VideoEffectImageLogo(true, "imageoverlay");
effect.Filename = @"animated.gif";
effect.Animated = true;
effect.AnimationEnabled = true;
VideoCapture1.Video_Effects_Add(effect);
```
### Image overlay from `System.Drawing.Bitmap`
You can overlay an image from a `System.Drawing.Bitmap` object.
```csharp
var effect = new VideoEffectImageLogo(true, "imageoverlay");
effect.MemoryBitmap = new Bitmap("logo.jpg");
VideoCapture1.Video_Effects_Add(effect);
```
### Image overlay from RGB/RGBA byte array
You can overlay an image from RGB/RGBA data.
```csharp
// add image logo
var effect = new VideoEffectImageLogo(true, "imageoverlay");
// load image from JPG file
var bitmap = new Bitmap("logo.jpg");
// lock bitmap data and save to byte data (IntPtr)
var bitmapData = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
var pixels = Marshal.AllocCoTaskMem(bitmapData.Stride * bitmapData.Height);
NativeAPI.CopyMemory(pixels, bitmapData.Scan0, bitmapData.Stride * bitmapData.Height);
bitmap.UnlockBits(bitmapData);
// set data to effect
effect.Bitmap = pixels;
// set bitmap properties
effect.BitmapWidth = bitmap.Width;
effect.BitmapHeight = bitmap.Height;
effect.BitmapDepth = 3; // RGB24
// free bitmap
bitmap.Dispose();
// add effect
VideoCapture1.Video_Effects_Add(effect);
```
---
Visit our [GitHub](https://github.com/visioforge/.Net-SDK-s-samples) page to get more code samples.
---END OF PAGE---
# Local File: .\dotnet\general\video-effects\index.md
---
title: Advanced Video Effects & Processing for .Net SDKs
description: Enhance your applications with powerful video effects, overlays, and processing capabilities for .Net developers. Learn how to implement professional-grade visual effects, text/image overlays, and custom video processing in your .Net applications.
sidebar_label: Video Effects And Processing
order: 15
---
# Video Effects and Processing for .Net Applications
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net)[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Player SDK .Net"](https://www.visioforge.com/media-player-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## Introduction
Our .Net SDKs provide developers with an extensive array of video effects and processing capabilities. These powerful tools enable you to transform raw video content into polished, professional-quality media. Whether you need to add dynamic overlays, apply visual effects, or perform advanced video manipulation, these SDKs deliver the functionality required for sophisticated media applications.
## Available Video Effect Categories
### Real-time Effects
* Color correction and grading
* Blur and sharpening filters
* Noise reduction algorithms
* Chroma key (green screen) processing
### Video Enhancement
* Resolution upscaling
* Frame rate conversion
* Dynamic contrast adjustment
* HDR tone mapping
## Overlay Capabilities
* [Text overlay](text-overlay.md) - Add customizable text with control over font, size, color, and animation
* [Image overlay](image-overlay.md) - Incorporate logos, watermarks, and graphic elements with transparency support
## Video Processing Features
### Transformation Operations
* Rotation, scaling, and cropping
* Picture-in-picture effects
* Custom aspect ratio conversion
* Video composition and layering
### Advanced Processing
* Timeline-based editing capabilities
* Transition effects between scenes
* Audio-video synchronization tools
* Performance-optimized processing pipeline
* [Video sample grabber](video-sample-grabber.md) - Extract frames and process video data in real-time
## Integration Methods
Our SDKs are designed for seamless integration with your .Net applications. The architecture allows for both simple implementations and advanced customizations to meet your specific project requirements.
## More Information
Numerous additional video effects and processing features are available in the SDKs. Please refer to the documentation for the specific SDK you are using for detailed implementation examples and API references.
---
Visit our [GitHub](https://github.com/visioforge/.Net-SDK-s-samples) page to access more code samples and implementation examples.
---END OF PAGE---
# Local File: .\dotnet\general\video-effects\text-overlay.md
---
title: Advanced Text Overlays for .NET Video Processing
description: Learn how to implement custom text overlays in video streams with complete control over font, size, color, position, rotation, and animation effects. Perfect for adding timestamps, captions, and dynamic text to your .NET video applications.
sidebar_label: Text Overlay
---
# Implementing Text Overlays in Video Streams
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Player SDK .Net"](https://www.visioforge.com/media-player-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
[!badge variant="dark" size="xl" text="VideoCaptureCore"] [!badge variant="dark" size="xl" text="MediaPlayerCore"] [!badge variant="dark" size="xl" text="VideoEditCore"]
## Introduction
Text overlays provide a powerful way to enhance video streams with dynamic information, branding, captions, or timestamps. This guide explores how to implement fully customizable text overlays with precise control over appearance, positioning, and animations.
## Classic Engine Implementation
Our classic engines (VideoCaptureCore, MediaPlayerCore, VideoEditCore) offer a straightforward API for adding text to video streams.
### Basic Text Overlay Implementation
The following example demonstrates a simple text overlay with custom positioning:
```csharp
var effect = new VideoEffectTextLogo(true, "textoverlay");
// set position
effect.Left = 20;
effect.Top = 20;
// set Font (System.Drawing.Font)
effect.Font = new Font("Arial", 40);
// set text
effect.Text = "Hello, world!";
// set text color
effect.FontColor = Color.Yellow;
MediaPlayer1.Video_Effects_Add(effect);
```
### Dynamic Information Display Options
#### Timestamp and Date Display
You can automatically display current date, time, or video timestamp information using specialized modes:
```csharp
// set mode and mask
effect.Mode = TextLogoMode.DateTime;
effect.DateTimeMask = "yyyy-MM-dd. hh:mm:ss";
```
The SDK supports custom formatting masks for timestamps and dates, allowing precise control over the displayed information format. Frame number display requires no additional configuration.
### Animation and Transition Effects
#### Implementing Fade Effects
Create smooth text appearances and disappearances with customizable fade effects:
```csharp
// add the fade-in
effect.FadeIn = true;
effect.FadeInDuration = TimeSpan.FromMilliseconds(5000);
// add the fade-out
effect.FadeOut = true;
effect.FadeOutDuration = TimeSpan.FromMilliseconds(5000);
```
### Text Rotation Options
Rotate your text overlay to match your design requirements:
```csharp
// set rotation mode
effect.RotationMode = TextRotationMode.Rm90;
```
### Text Flip Transformations
Apply mirror effects to your text for creative presentations:
```csharp
// set flip mode
effect.FlipMode = TextFlipMode.XAndY;
```
## X-Engine Implementation
Our newer X-engines (VideoCaptureCoreX, MediaPlayerCoreX, VideoEditCoreX) provide an enhanced API with additional features.
### Basic X-Engine Text Overlay
```csharp
// text overlay
var textOverlay = new TextOverlayVideoEffect();
// set position
textOverlay.XPad = 20;
textOverlay.YPad = 20;
textOverlay.HorizontalAlignment = TextOverlayHAlign.Left;
textOverlay.VerticalAlignment = TextOverlayVAlign.Top;
// set font (FontSettings)
textOverlay.Font = new FontSettings("Arial", "Bold", 24);
// set text
textOverlay.Text = "Hello, world!";
// set text color
textOverlay.Color = SKColors.Yellow;
// add the effect
await videoCapture1.Video_Effects_AddOrUpdateAsync(textOverlay);
```
### Advanced Dynamic Content Display
#### Video Timestamp Integration
Display the current position within the video:
```csharp
// text overlay
var textOverlay = new TextOverlayVideoEffect();
// set text
textOverlay.Text = "Timestamp: ";
// set Timestamp mode
textOverlay.Mode = TextOverlayMode.Timestamp;
// add the effect
await videoCapture1.Video_Effects_AddOrUpdateAsync(textOverlay);
```
#### System Time Integration
Show the current system time alongside your video content:
```csharp
// text overlay
var textOverlay = new TextOverlayVideoEffect();
// set text
textOverlay.Text = "Time: ";
// set System Time mode
textOverlay.Mode = TextOverlayMode.SystemTime;
// add the effect
await videoCapture1.Video_Effects_AddOrUpdateAsync(textOverlay);
```
## Best Practices for Text Overlays
- Consider readability against different backgrounds
- Use appropriate font sizes for the target display resolution
- Implement fade effects for less intrusive overlays
- Test performance impact with complex text effects
---
For more code examples and implementation details, visit our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples).
---END OF PAGE---
# Local File: .\dotnet\general\video-effects\video-sample-grabber.md
---
title: Video sample grabber usage
description: C# code sample - how to use video sample grabber in Video Capture SDK .Net, Media Player SDK .Net, Video Edit SDK .Net.
sidebar_label: Video Sample Grabber Usage
---
# Video sample grabber usage
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Player SDK .Net"](https://www.visioforge.com/media-player-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## Getting RAW video frames as an unmanaged memory pointer inside a frame structure
+++ X-engines
```csharp
// Subscribe to the video frame buffer event
VideoCapture1.OnVideoFrameBuffer += OnVideoFrameBuffer;
private void OnVideoFrameBuffer(object sender, VideoFrameXBufferEventArgs e)
{
// Process the VideoFrameX object
ProcessFrame(e.Frame);
// If you've modified the frame and want to update the video stream
e.UpdateData = true;
}
// Example of processing a VideoFrameX frame - adjusting brightness
private void ProcessFrame(VideoFrameX frame)
{
// Only process RGB/BGR/RGBA/BGRA formats
if (frame.Format != VideoFormatX.RGB &&
frame.Format != VideoFormatX.BGR &&
frame.Format != VideoFormatX.RGBA &&
frame.Format != VideoFormatX.BGRA)
{
return;
}
// Get the data as a byte array for manipulation
byte[] data = frame.ToArray();
// Determine the pixel size based on format
int pixelSize = (frame.Format == VideoFormatX.RGB || frame.Format == VideoFormatX.BGR) ? 3 : 4;
// Brightness factor (1.2 = 20% brighter, 0.8 = 20% darker)
float brightnessFactor = 1.2f;
// Process each pixel
for (int i = 0; i < data.Length; i += pixelSize)
{
// Adjust R, G, B channels
for (int j = 0; j < 3; j++)
{
int newValue = (int)(data[i + j] * brightnessFactor);
data[i + j] = (byte)Math.Min(255, newValue);
}
}
// Copy the modified data back to the frame
Marshal.Copy(data, 0, frame.Data, data.Length);
}
```
+++ Classic engines
```csharp
// Subscribe to the video frame buffer event
VideoCapture1.OnVideoFrameBuffer += OnVideoFrameBuffer;
private void OnVideoFrameBuffer(object sender, VideoFrameBufferEventArgs e)
{
// Process the VideoFrame structure
ProcessFrame(e.Frame);
// If you've modified the frame and want to update the video stream
e.UpdateData = true;
}
// Example of processing a VideoFrame - adjusting brightness
private void ProcessFrame(VideoFrame frame)
{
// Only process RGB format for this example
if (frame.Info.Colorspace != RAWVideoColorSpace.RGB24)
{
return;
}
// Get the data as a byte array for manipulation
byte[] data = frame.ToArray();
// Brightness factor (1.2 = 20% brighter, 0.8 = 20% darker)
float brightnessFactor = 1.2f;
// Process each pixel (RGB24 format = 3 bytes per pixel)
for (int i = 0; i < data.Length; i += 3)
{
// Adjust R, G, B channels
for (int j = 0; j < 3; j++)
{
int newValue = (int)(data[i + j] * brightnessFactor);
data[i + j] = (byte)Math.Min(255, newValue);
}
}
// Copy the modified data back to the frame
Marshal.Copy(data, 0, frame.Data, data.Length);
}
```
+++ Media Blocks SDK
```csharp
// Create and set up video sample grabber block
var videoSampleGrabberBlock = new VideoSampleGrabberBlock(VideoFormatX.RGB);
videoSampleGrabberBlock.OnVideoFrameBuffer += OnVideoFrameBuffer;
private void OnVideoFrameBuffer(object sender, VideoFrameXBufferEventArgs e)
{
// Process the VideoFrameX object
ProcessFrame(e.Frame);
// If you've modified the frame and want to update the video stream
e.UpdateData = true;
}
// Example of processing a VideoFrameX frame - adjusting brightness
private void ProcessFrame(VideoFrameX frame)
{
if (frame.Format != VideoFormatX.RGB)
{
return;
}
// Get the data as a byte array for manipulation
byte[] data = frame.ToArray();
// Brightness factor (1.2 = 20% brighter, 0.8 = 20% darker)
float brightnessFactor = 1.2f;
// Process each pixel (RGB format = 3 bytes per pixel)
for (int i = 0; i < data.Length; i += 3)
{
// Adjust R, G, B channels
for (int j = 0; j < 3; j++)
{
int newValue = (int)(data[i + j] * brightnessFactor);
data[i + j] = (byte)Math.Min(255, newValue);
}
}
// Copy the modified data back to the frame
Marshal.Copy(data, 0, frame.Data, data.Length);
}
```
+++
## Working with bitmap frames
If you need to work with managed Bitmap objects instead of raw memory pointers, you can use the `OnVideoFrameBitmap` event of the `core` classes or the SampleGrabberBlock:
```csharp
// Subscribe to the bitmap frame event
VideoCapture1.OnVideoFrameBitmap += OnVideoFrameBitmap;
private void OnVideoFrameBitmap(object sender, VideoFrameBitmapEventArgs e)
{
// Process the Bitmap object
ProcessBitmap(e.Frame);
// If you've modified the bitmap and want to update the video stream
e.UpdateData = true;
}
// Example of processing a Bitmap - adjusting brightness
private void ProcessBitmap(Bitmap bitmap)
{
// Use Bitmap methods or Graphics to manipulate the image
// This example uses ColorMatrix for brightness adjustment
// Create a graphics object from the bitmap
using (Graphics g = Graphics.FromImage(bitmap))
{
// Create a color matrix for brightness adjustment
float brightnessFactor = 1.2f; // 1.0 = no change, >1.0 = brighter, <1.0 = darker
ColorMatrix colorMatrix = new ColorMatrix(new float[][]
{
new float[] {brightnessFactor, 0, 0, 0, 0},
new float[] {0, brightnessFactor, 0, 0, 0},
new float[] {0, 0, brightnessFactor, 0, 0},
new float[] {0, 0, 0, 1, 0},
new float[] {0, 0, 0, 0, 1}
});
// Create an ImageAttributes object and set the color matrix
using (ImageAttributes attributes = new ImageAttributes())
{
attributes.SetColorMatrix(colorMatrix);
// Draw the image with the brightness adjustment
g.DrawImage(bitmap,
new Rectangle(0, 0, bitmap.Width, bitmap.Height),
0, 0, bitmap.Width, bitmap.Height,
GraphicsUnit.Pixel, attributes);
}
}
}
```
## Working with SkiaSharp for cross-platform applications
For cross-platform applications, the VideoSampleGrabberBlock provides the ability to work with SkiaSharp, a high-performance 2D graphics API for .NET. This is especially useful for applications targeting multiple platforms including mobile and web.
### Using the OnVideoFrameSKBitmap event
```csharp
// First, add the SkiaSharp NuGet package to your project
// Install-Package SkiaSharp
// Import necessary namespaces
using SkiaSharp;
using VisioForge.Core.MediaBlocks.VideoProcessing;
using VisioForge.Core.Types.X.Events;
// Create a VideoSampleGrabberBlock with RGBA or BGRA format
// Note: OnVideoFrameSKBitmap event works only with RGBA or BGRA formats
var videoSampleGrabberBlock = new VideoSampleGrabberBlock(VideoFormatX.BGRA);
// Enable the SaveLastFrame property if you want to take snapshots later
videoSampleGrabberBlock.SaveLastFrame = true;
// Subscribe to the SkiaSharp bitmap event
videoSampleGrabberBlock.OnVideoFrameSKBitmap += OnVideoFrameSKBitmap;
// Event handler for SkiaSharp bitmap frames
private void OnVideoFrameSKBitmap(object sender, VideoFrameSKBitmapEventArgs e)
{
// Process the SKBitmap
ProcessSKBitmap(e.Frame);
// Note: Unlike VideoFrameBitmapEventArgs, VideoFrameSKBitmapEventArgs does not have
// an UpdateData property as it's designed for frame viewing/analysis only
}
// Example of processing an SKBitmap - adjusting brightness
private void ProcessSKBitmap(SKBitmap bitmap)
{
// Create a new bitmap to hold the processed image
using (var surface = SKSurface.Create(new SKImageInfo(bitmap.Width, bitmap.Height)))
{
var canvas = surface.Canvas;
// Set up a paint with a color filter for brightness adjustment
using (var paint = new SKPaint())
{
// Create a brightness filter (1.2 = 20% brighter)
float brightnessFactor = 1.2f;
var colorMatrix = new float[]
{
brightnessFactor, 0, 0, 0, 0,
0, brightnessFactor, 0, 0, 0,
0, 0, brightnessFactor, 0, 0,
0, 0, 0, 1, 0
};
paint.ColorFilter = SKColorFilter.CreateColorMatrix(colorMatrix);
// Draw the original bitmap with the brightness filter applied
canvas.DrawBitmap(bitmap, 0, 0, paint);
// If you need to get the result as a new SKBitmap:
var processedImage = surface.Snapshot();
using (var processedBitmap = SKBitmap.FromImage(processedImage))
{
// Use processedBitmap for further operations or display
// For example, display it in a SkiaSharp view
// mySkiaView.SKBitmap = processedBitmap.Copy();
}
}
}
}
```
### Taking snapshots with SkiaSharp
```csharp
// Create a method to capture and save a snapshot
private void CaptureSnapshot(string filePath)
{
// Make sure SaveLastFrame was enabled on the VideoSampleGrabberBlock
if (videoSampleGrabberBlock.SaveLastFrame)
{
// Get the last frame as an SKBitmap
using (var bitmap = videoSampleGrabberBlock.GetLastFrameAsSKBitmap())
{
if (bitmap != null)
{
// Save the bitmap to a file
using (var image = SKImage.FromBitmap(bitmap))
using (var data = image.Encode(SKEncodedImageFormat.Png, 100))
using (var stream = File.OpenWrite(filePath))
{
data.SaveTo(stream);
}
}
}
}
}
```
### Advantages of using SkiaSharp
1. **Cross-platform compatibility**: Works on Windows, macOS, Linux, iOS, Android, and WebAssembly
2. **Performance**: Provides high-performance graphics processing
3. **Modern API**: Offers a comprehensive set of drawing, filtering, and transformation functions
4. **Memory efficiency**: More efficient memory management compared to System.Drawing
5. **No platform dependencies**: No dependency on platform-specific imaging libraries
## Frame processing information
You can get video frames from live sources or files using the `OnVideoFrameBuffer` and `OnVideoFrameBitmap` events.
The `OnVideoFrameBuffer` event is faster and provides the unmanaged memory pointer for the decoded frame. The `OnVideoFrameBitmap` event is slower, but you get the decoded frame as the `Bitmap` class object.
### Understanding the frame objects
- **VideoFrameX** (X-engines): Contains frame data, dimensions, format, timestamp, and methods for manipulating raw video data
- **VideoFrame** (Classic engines): Similar structure but with a different memory layout
- **Common properties**:
- Width/Height: Frame dimensions
- Format/Colorspace: Pixel format (RGB, BGR, RGBA, etc.)
- Stride: Number of bytes per scan line
- Timestamp: Frame's position in the video timeline
- Data: Pointer to unmanaged memory with pixel data
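For packed formats, the raw buffer size follows directly from these properties:
```csharp
// For packed formats (e.g., RGB24), each scan line occupies Stride bytes,
// so the full frame buffer is Stride * Height bytes
int expectedBytes = frame.Stride * frame.Height;
```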
### Important considerations
1. The frame's pixel format affects how you process the data:
- RGB/BGR: 3 bytes per pixel
- RGBA/BGRA/ARGB: 4 bytes per pixel (with alpha channel)
- YUV formats: Different component arrangements
2. Set `e.UpdateData = true` if you've modified the frame data and want the changes to be visible in the video stream.
3. For processing that requires multiple frames or complex operations, consider using a buffer or queue to store frames (see the sketch after this list).
4. When using `OnVideoFrameSKBitmap`, select either RGBA or BGRA as the frame format when creating the VideoSampleGrabberBlock.
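As an illustration of point 3, here is a minimal sketch of a bounded frame queue; frames are copied into managed arrays via `ToArray()` so they stay valid after the event handler returns (the 30-frame bound is an arbitrary example):
```csharp
using System.Collections.Concurrent;
private readonly ConcurrentQueue<byte[]> _frameQueue = new ConcurrentQueue<byte[]>();
private void OnVideoFrameBuffer(object sender, VideoFrameXBufferEventArgs e)
{
    // Copy the unmanaged frame data into a managed array before queueing
    _frameQueue.Enqueue(e.Frame.ToArray());
    // Keep the queue bounded so memory use stays predictable
    while (_frameQueue.Count > 30)
    {
        _frameQueue.TryDequeue(out _);
    }
}
```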
---
Visit our [GitHub](https://github.com/visioforge/.Net-SDK-s-samples) page to get more code samples.
---END OF PAGE---
# Local File: .\dotnet\general\video-encoders\av1.md
---
title: AV1 encoders usage in VisioForge .Net SDKs
description: AV1 encoders usage in Video Capture SDK .Net, Video Edit SDK .Net, and Media Blocks SDK .Net
sidebar_label: AV1
---
# AV1 Encoders
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
[!badge variant="dark" size="xl" text="VideoCaptureCoreX"] [!badge variant="dark" size="xl" text="VideoEditCoreX"] [!badge variant="dark" size="xl" text="MediaBlocksPipeline"]
VisioForge supports multiple AV1 encoder implementations, each with its own unique features and capabilities. This document covers the available encoders and their configuration options.
Currently, AV1 encoders are supported only in the cross-platform engines: `VideoCaptureCoreX`, `VideoEditCoreX`, and `Media Blocks SDK`.
## Available Encoders
1. [AMD AMF AV1 Encoder (AMF)](https://api.visioforge.org/dotnet/api/VisioForge.Core.Types.X.VideoEncoders.AMFAV1EncoderSettings.html)
2. [NVIDIA NVENC AV1 Encoder (NVENC)](https://api.visioforge.org/dotnet/api/VisioForge.Core.Types.X.VideoEncoders.NVENCAV1EncoderSettings.html)
3. [Intel QuickSync AV1 Encoder (QSV)](https://api.visioforge.org/dotnet/api/VisioForge.Core.Types.X.VideoEncoders.QSVAV1EncoderSettings.html)
4. [AOM AV1 Encoder](https://api.visioforge.org/dotnet/api/VisioForge.Core.Types.X.VideoEncoders.AOMAV1EncoderSettings.html)
5. [RAV1E Encoder](https://api.visioforge.org/dotnet/api/VisioForge.Core.Types.X.VideoEncoders.RAV1EEncoderSettings.html)
You can use AV1 encoders with [WebM output](../output-formats/webm.md) or for network streaming.
## AMD AMF AV1 Encoder
The AMD AMF AV1 encoder provides hardware-accelerated encoding using AMD graphics cards.
### Features
- Multiple quality presets
- Variable bitrate control modes
- GOP size control
- QP (Quantization Parameter) control
- Smart Access Video support
### Rate Control Modes
- `Default`: Depends on Usage
- `CQP`: Constant QP
- `LCVBR`: Latency Constrained VBR
- `VBR`: Peak Constrained VBR
- `CBR`: Constant Bitrate
### Sample Usage
```csharp
var encoderSettings = new AMFAV1EncoderSettings
{
Bitrate = 3000, // 3 Mbps
GOPSize = 30, // GOP size of 30 frames
Preset = AMFAV1EncoderPreset.Quality, // Quality preset
RateControl = AMFAV1RateControlMode.VBR, // Variable Bitrate mode
Usage = AMFAV1EncoderUsage.Transcoding, // Transcoding usage
MaxBitrate = 5000, // 5 Mbps max bitrate
QpI = 26, // I-frame QP
QpP = 26, // P-frame QP
RefFrames = 1, // Number of reference frames
SmartAccessVideo = false // Smart Access Video disabled
};
```
## NVIDIA NVENC AV1 Encoder
NVIDIA's NVENC AV1 encoder provides hardware-accelerated encoding using NVIDIA GPUs.
### Features
- Multiple encoding presets
- Adaptive B-frame support
- Temporal AQ (Adaptive Quantization)
- VBV (Video Buffering Verifier) buffer control
- Spatial AQ support
### Rate Control Modes
- `Default`: Default mode
- `ConstQP`: Constant Quantization Parameter
- `CBR`: Constant Bitrate
- `VBR`: Variable Bitrate
- `CBR_LD_HQ`: Low-delay CBR, high quality
- `CBR_HQ`: CBR, high quality (slower)
- `VBR_HQ`: VBR, high quality (slower)
### Sample Usage
```csharp
var encoderSettings = new NVENCAV1EncoderSettings
{
Bitrate = 3000, // 3 Mbps
Preset = NVENCPreset.HighQuality, // High quality preset
RateControl = NVENCRateControl.VBR, // Variable Bitrate mode
GOPSize = 75, // GOP size of 75 frames
MaxBitrate = 5000, // 5 Mbps max bitrate
BFrames = 2, // 2 B-frames between I and P
RCLookahead = 8, // 8 frames lookahead
TemporalAQ = true, // Enable temporal AQ
Tune = NVENCTune.HighQuality, // High quality tuning
VBVBufferSize = 6000 // 6000k VBV buffer
};
```
## Intel QuickSync AV1 Encoder
Intel's QuickSync AV1 encoder provides hardware-accelerated encoding using Intel GPUs.
### Features
- Low latency mode support
- Configurable target usage
- Reference frame control
- Flexible GOP size settings
### Rate Control Modes
- `CBR`: Constant Bitrate
- `VBR`: Variable Bitrate
- `CQP`: Constant Quantizer
### Sample Usage
```csharp
var encoderSettings = new QSVAV1EncoderSettings
{
Bitrate = 2000, // 2 Mbps
LowLatency = false, // Standard latency mode
TargetUsage = 4, // Balanced quality/speed
GOPSize = 30, // GOP size of 30 frames
MaxBitrate = 4000, // 4 Mbps max bitrate
QPI = 26, // I-frame QP
QPP = 28, // P-frame QP
RateControl = QSVAV1EncRateControl.VBR, // Variable Bitrate mode
RefFrames = 1 // Number of reference frames
};
```
## AOM AV1 Encoder
The Alliance for Open Media (AOM) AV1 encoder is a software-based reference implementation.
### Features
- Buffer control settings
- CPU usage optimization
- Frame dropping support
- Multi-threading capabilities
- Super-resolution support
### Rate Control Modes
- `VBR`: Variable Bit Rate Mode
- `CBR`: Constant Bit Rate Mode
- `CQ`: Constrained Quality Mode
- `Q`: Constant Quality Mode
### Sample Usage
```csharp
var encoderSettings = new AOMAV1EncoderSettings
{
BufferInitialSize = TimeSpan.FromMilliseconds(4000),
BufferOptimalSize = TimeSpan.FromMilliseconds(5000),
BufferSize = TimeSpan.FromMilliseconds(6000),
CPUUsed = 4, // CPU usage level
DropFrame = 0, // Disable frame dropping
RateControl = AOMAV1EncoderEndUsageMode.VBR, // Variable Bitrate mode
TargetBitrate = 256, // 256 Kbps
Threads = 0, // Auto thread count
UseRowMT = true, // Enable row-based threading
SuperResMode = AOMAV1SuperResolutionMode.None // No super-resolution
};
```
## RAV1E Encoder
RAV1E is a fast and safe AV1 encoder written in Rust.
### Features
- Speed preset control
- Quantizer settings
- Key frame interval control
- Low latency mode
- Psychovisual tuning
### Sample Usage
```csharp
var encoderSettings = new RAV1EEncoderSettings
{
Bitrate = 3000, // 3 Mbps
LowLatency = false, // Standard latency mode
MaxKeyFrameInterval = 240, // Maximum keyframe interval
MinKeyFrameInterval = 12, // Minimum keyframe interval
MinQuantizer = 0, // Minimum quantizer value
Quantizer = 100, // Base quantizer value
SpeedPreset = 6, // Speed preset (0-10)
Tune = RAV1EEncoderTune.Psychovisual // Psychovisual tuning
};
```
## General Usage Notes
1. All encoders implement the `IAV1EncoderSettings` interface, providing a consistent way to create encoder blocks (see the sketch after this list).
2. Each encoder has its own specific set of optimizations and trade-offs.
3. Hardware encoders (AMF, NVENC, QSV) generally provide better performance but may have specific hardware requirements.
4. Software encoders (AOM, RAV1E) offer more flexibility but may require more CPU resources.
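As an illustration of note 1, encoder selection can be decoupled from block creation through the common interface. This sketch assumes an `AV1EncoderBlock` class, named by analogy with the `H264EncoderBlock` used elsewhere in this documentation:
```csharp
bool preferHardware = true; // e.g., decided by a GPU-detection step
IAV1EncoderSettings settings = preferHardware
    ? (IAV1EncoderSettings)new NVENCAV1EncoderSettings { Bitrate = 3000 }
    : new RAV1EEncoderSettings { Bitrate = 3000 };
// The block accepts any settings implementation through the interface
var encoder = new AV1EncoderBlock(settings);
```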
## Recommendations
- For AMD GPUs: Use AMF encoder
- For NVIDIA GPUs: Use NVENC encoder
- For Intel GPUs: Use QSV encoder
- For maximum quality: Use AOM encoder
- For CPU-efficient encoding: Use RAV1E encoder
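In practice, these recommendations combine into a fallback cascade. The sketch below assumes each hardware settings class exposes a static `IsAvailable()` check, matching the pattern the H.264 encoder settings use in the MXF output example:
```csharp
IAV1EncoderSettings av1Settings;
if (NVENCAV1EncoderSettings.IsAvailable())        // NVIDIA GPUs
{
    av1Settings = new NVENCAV1EncoderSettings { Bitrate = 3000 };
}
else if (QSVAV1EncoderSettings.IsAvailable())     // Intel GPUs
{
    av1Settings = new QSVAV1EncoderSettings { Bitrate = 3000 };
}
else if (AMFAV1EncoderSettings.IsAvailable())     // AMD GPUs
{
    av1Settings = new AMFAV1EncoderSettings { Bitrate = 3000 };
}
else
{
    av1Settings = new RAV1EEncoderSettings { Bitrate = 3000 }; // software fallback
}
```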
## Best Practices
1. Always check encoder availability before using it
2. Set appropriate bitrates based on your target resolution and framerate
3. Use appropriate GOP sizes based on your content type
4. Consider the trade-off between quality and encoding speed
5. Test different rate control modes to find the best fit for your use case
---END OF PAGE---
# Local File: .\dotnet\general\video-encoders\h264.md
---
title: H264 encoders usage in VisioForge .Net SDKs
description: H264 encoders usage in Video Capture SDK .Net, Video Edit SDK .Net, and Media Blocks SDK .Net
sidebar_label: H264
---
# H264 Encoders
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
[!badge variant="dark" size="xl" text="VideoCaptureCoreX"] [!badge variant="dark" size="xl" text="VideoEditCoreX"] [!badge variant="dark" size="xl" text="MediaBlocksPipeline"]
This document provides detailed information about available H264 encoders, their features, rate control options, and usage examples.
For Windows-only engines check the [MP4 output](../output-formats/mp4.md) page.
## Overview
The following H264 encoders are available:
1. AMD AMF H264 Encoder (GPU-accelerated)
2. NVIDIA NVENC H264 Encoder (GPU-accelerated)
3. Intel QSV H264 Encoder (GPU-accelerated)
4. OpenH264 Encoder (Software)
5. Apple Media H264 Encoder (Hardware-accelerated for Apple devices)
6. VAAPI H264 Encoder (Linux hardware acceleration)
7. Various OMX-based encoders (Platform-specific)
## AMD AMF H264 Encoder
AMD's Advanced Media Framework (AMF) provides hardware-accelerated encoding on AMD GPUs.
### Key Features
- Hardware-accelerated encoding
- Multiple preset options (Balanced, Speed, Quality)
- Configurable GOP size
- CABAC entropy coding support
- Various rate control methods
### Rate Control Options
```csharp
public enum AMFH264EncoderRateControl
{
Default = -1, // Default, depends on usage
CQP = 0, // Constant QP
CBR = 1, // Constant bitrate
VBR = 2, // Peak constrained VBR
LCVBR = 3 // Latency Constrained VBR
}
```
### Sample Usage
```csharp
var settings = new AMFH264EncoderSettings
{
Bitrate = 5000, // 5 Mbps
CABAC = true,
RateControl = AMFH264EncoderRateControl.CBR,
Preset = AMFH264EncoderPreset.Quality,
Profile = AMFH264EncoderProfile.Main,
Level = AMFH264EncoderLevel.Level4_2,
GOPSize = 30
};
var encoder = new H264EncoderBlock(settings);
```
## NVIDIA NVENC H264 Encoder
NVIDIA's hardware-based video encoder provides efficient H264 encoding on NVIDIA GPUs.
### Key Features
- Hardware-accelerated encoding
- B-frame support
- Adaptive quantization
- Multiple reference frames
- Weighted prediction
- Look-ahead support
### Rate Control Options
Inherited from NVENCBaseEncoderSettings with additional H264-specific options:
- Constant Bitrate (CBR)
- Variable Bitrate (VBR)
- Constant QP (CQP)
- Quality-based VBR
### Sample Usage
```csharp
var settings = new NVENCH264EncoderSettings
{
Bitrate = 5000,
MaxBitrate = 8000,
RCLookahead = 20,
BFrames = 2,
Profile = NVENCH264Profile.High,
Level = NVENCH264Level.Level4_2,
TemporalAQ = true
};
var encoder = new H264EncoderBlock(settings);
```
## Intel Quick Sync Video (QSV) H264 Encoder
Intel's hardware-based video encoder available on Intel processors with integrated graphics.
### Key Features
- Hardware-accelerated encoding
- Low latency mode
- Multiple rate control methods
- B-frame support
- Intelligent rate control options
### Rate Control Options
```csharp
public enum QSVH264EncRateControl
{
CBR = 1, // Constant Bitrate
VBR = 2, // Variable Bitrate
CQP = 3, // Constant Quantizer
AVBR = 4, // Average Variable Bitrate
LA_VBR = 8, // Look Ahead VBR
ICQ = 9, // Intelligent CQP
VCM = 10, // Video Conferencing Mode
LA_ICQ = 11, // Look Ahead ICQ
LA_HRD = 13, // HRD compliant LA
QVBR = 14 // Quality-defined VBR
}
```
### Sample Usage
```csharp
var settings = new QSVH264EncoderSettings
{
Bitrate = 5000,
MaxBitrate = 8000,
RateControl = QSVH264EncRateControl.VBR,
Profile = QSVH264EncProfile.High,
Level = QSVH264EncLevel.Level4_2,
LowLatency = true,
BFrames = 2
};
var encoder = new H264EncoderBlock(settings);
```
## OpenH264 Encoder
Cisco's open-source H264 software encoder.
### Key Features
- Software-based encoding
- Multiple complexity levels
- Scene change detection
- Adaptive quantization
- Denoising support
### Rate Control Options
```csharp
public enum OpenH264RCMode
{
Quality = 0, // Quality mode
Bitrate = 1, // Bitrate mode
Buffer = 2, // Buffer based
Off = -1 // Rate control off
}
```
### Sample Usage
```csharp
var settings = new OpenH264EncoderSettings
{
Bitrate = 5000,
RateControl = OpenH264RCMode.Bitrate,
Profile = OpenH264Profile.Main,
Level = OpenH264Level.Level4_2,
Complexity = OpenH264Complexity.Medium,
EnableDenoise = true,
SceneChangeDetection = true
};
var encoder = new H264EncoderBlock(settings);
```
## Apple Media H264 Encoder
Hardware-accelerated encoder for Apple platforms.
### Key Features
- Hardware acceleration on Apple devices
- Real-time encoding support
- Frame reordering options
- Quality-based encoding
### Sample Usage
```csharp
var settings = new AppleMediaH264EncoderSettings
{
Bitrate = 5000,
AllowFrameReordering = true,
Quality = 0.8,
Realtime = true
};
var encoder = new H264EncoderBlock(settings);
```
## VAAPI H264 Encoder
Video Acceleration API encoder for Linux systems.
### Key Features
- Hardware acceleration on Linux
- Multiple profile support
- Trellis quantization
- B-frame support
- Various rate control methods
### Rate Control Options
```csharp
public enum VAAPIH264RateControl
{
CQP = 1, // Constant QP
CBR = 2, // Constant bitrate
VBR = 4, // Variable bitrate
VBRConstrained = 5, // Constrained VBR
ICQ = 7, // Intelligent CQP
QVBR = 8 // Quality-defined VBR
}
```
### Sample Usage
```csharp
var settings = new VAAPIH264EncoderSettings
{
Bitrate = 5000,
RateControl = VAAPIH264RateControl.CBR,
Profile = VAAPIH264EncoderProfile.Main,
MaxBFrames = 2,
Trellis = true,
CABAC = true
};
var encoder = new H264EncoderBlock(settings);
```
## OpenMAX (OMX) H264 Encoders Guide
OpenMAX (OMX) is a royalty-free, cross-platform API for accelerated multimedia components. It allows hardware-accelerated codecs to be developed, integrated, and accessed programmatically in a consistent way across operating systems and silicon platforms.
### OMX Google H264 Encoder
This is a baseline implementation primarily targeted at Android platforms.
```csharp
var settings = new OMXGoogleH264EncoderSettings();
// Configure via Properties dictionary
settings.Properties["some_key"] = "value";
settings.ParseStream = true; // Enable stream parsing (disable for SRT)
```
Key characteristics:
- Generic implementation
- Suitable for most Android devices
- Configurable through properties dictionary
- Minimal direct parameter exposure for maximum compatibility
### OMX Qualcomm H264 Encoder
Optimized for Qualcomm Snapdragon platforms, this encoder leverages hardware acceleration capabilities.
```csharp
var settings = new OMXQualcommH264EncoderSettings
{
Bitrate = 6_000, // 6 Mbps
IFrameInterval = 2, // Keyframe every 2 seconds
ParseStream = true // Enable stream parsing
};
```
Key features:
- Direct bitrate control
- I-frame interval management
- Hardware acceleration on Qualcomm platforms
- Additional properties available through dictionary
### OMX Exynos H264 Encoder
Specifically designed for Samsung Exynos platforms:
```csharp
var settings = new OMXExynosH264EncoderSettings();
// Configure platform-specific options
settings.Properties["quality_level"] = "high";
settings.Properties["hardware_acceleration"] = "true";
```
Characteristics:
- Samsung hardware optimization
- Flexible configuration through properties
- Hardware acceleration support
- Platform-specific optimizations
### OMX SPRD H264 Encoder
Designed for Spreadtrum (UNISOC) platforms:
```csharp
var settings = new OMXSPRDH264EncoderSettings
{
Bitrate = 6_000, // Target bitrate
IFrameInterval = 2, // GOP size in seconds
ParseStream = true // Stream parsing flag
};
```
Features:
- Hardware acceleration for SPRD chips
- Direct bitrate control
- Keyframe interval management
- Additional platform-specific properties
## Common Properties and Usage
All OMX encoders implement the same `IH264EncoderSettings` interface used by the other H264 encoders:
```csharp
// Common interface implementation
public interface IH264EncoderSettings
{
bool ParseStream { get; set; }
KeyFrameDetectedDelegate KeyFrameDetected { get; set; }
H264EncoderType GetEncoderType();
MediaBlock CreateBlock();
}
```
Properties dictionary usage:
```csharp
// Generic way to set platform-specific options
settings.Properties["hardware_acceleration"] = "true";
settings.Properties["quality_preset"] = "balanced";
settings.Properties["thread_count"] = "4";
```
## Best Practices
1. **Encoder Selection**
- Use hardware encoders (AMD, NVIDIA, Intel) when available for better performance
- Fall back to OpenH264 when hardware encoding is not available (see the selection sketch at the end of this section)
- Use platform-specific encoders (Apple Media, VAAPI) when targeting specific platforms
2. **Rate Control Selection**
- Use CBR for streaming applications where consistent bitrate is important
- Use VBR for offline encoding where quality is more important than bitrate consistency
- Use CQP for highest quality when bitrate is not a concern
- Consider using look-ahead options for better quality when latency is not critical
3. **Performance Optimization**
- Adjust GOP size based on content type (smaller for high motion, larger for static content)
- Enable CABAC for better compression efficiency when latency is not critical
- Use appropriate profile and level for target devices
- Consider B-frames for better compression but be aware of latency impact
4. **Platform Detection**:
```csharp
if (OMXSPRDH264EncoderSettings.IsAvailable())
{
// Use SPRD encoder
}
else if (OMXQualcommH264EncoderSettings.IsAvailable())
{
// Fall back to Qualcomm
}
else
{
// Fall back to Google implementation
}
```
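For desktop targets, the same availability-check pattern can drive encoder selection. Below is a minimal sketch; it assumes each settings class exposes the static `IsAvailable()` method that this SDK documents for the OMX and HEVC settings classes:

```csharp
// Pick the best available H264 encoder for the current machine,
// preferring hardware encoders and falling back to software OpenH264.
// Assumes the static IsAvailable() pattern applies to these classes.
IH264EncoderSettings GetOptimalH264Encoder()
{
    if (NVENCH264EncoderSettings.IsAvailable())
    {
        return new NVENCH264EncoderSettings { Bitrate = 5000 };
    }

    if (AMFH264EncoderSettings.IsAvailable())
    {
        return new AMFH264EncoderSettings { Bitrate = 5000 };
    }

    if (QSVH264EncoderSettings.IsAvailable())
    {
        return new QSVH264EncoderSettings { Bitrate = 5000 };
    }

    // Software fallback works on any system.
    return new OpenH264EncoderSettings { Bitrate = 5000 };
}
```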
## Platform-Specific Considerations
1. **Qualcomm Platforms**:
- Best performance with native bitrate settings
- Optimal for streaming when I-frame interval is 2-3 seconds
- Hardware acceleration should be enabled when possible
2. **Exynos Platforms**:
- Properties dictionary offers more fine-grained control
- Consider using platform-specific quality presets
- Monitor hardware acceleration status
3. **SPRD Platforms**:
- Keep bitrate within platform capabilities
- Use I-frame interval appropriate for content type
- Consider memory constraints when setting properties
4. **General OMX**:
- Always test on target hardware
- Monitor encoder performance metrics
- Have fallback options ready
- Consider power consumption impact
---END OF PAGE---
# Local File: .\dotnet\general\video-encoders\hevc.md
---
title: HEVC Encoding with VisioForge .Net SDKs
description: Learn how to implement hardware HEVC encoding with AMD, NVIDIA, and Intel GPUs in your .NET applications
sidebar_label: HEVC
---
# HEVC Hardware Encoding in .NET Applications
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
[!badge variant="dark" size="xl" text="VideoCaptureCoreX"] [!badge variant="dark" size="xl" text="VideoEditCoreX"] [!badge variant="dark" size="xl" text="MediaBlocksPipeline"]
This guide explores hardware-accelerated HEVC (H.265) encoding options available in VisioForge .NET SDKs. We'll cover implementation details for AMD, NVIDIA, and Intel GPU encoders, helping you choose the right solution for your video processing needs.
For Windows-specific output formats, refer to our [MP4 output documentation](../output-formats/mp4.md).
## Hardware HEVC Encoders Overview
Modern GPUs offer powerful hardware encoding capabilities that significantly outperform software-based solutions. VisioForge SDKs support three major hardware HEVC encoders:
- **AMD AMF** - For AMD Radeon GPUs
- **NVIDIA NVENC** - For NVIDIA GeForce and professional GPUs
- **Intel QuickSync** - For Intel CPUs with integrated graphics
Each encoder provides unique features and optimization options. Let's explore their capabilities and implementation details.
## AMD AMF HEVC Encoder
AMD's Advanced Media Framework (AMF) delivers hardware-accelerated HEVC encoding on compatible Radeon GPUs. It balances encoding speed, quality, and efficiency for various scenarios.
### Key Features and Settings
- **Rate Control Methods**:
- `CQP` (Constant QP) for fixed quality settings
- `LCVBR` (Latency Constrained VBR) for streaming
- `VBR` (Variable Bitrate) for offline encoding
- `CBR` (Constant Bitrate) for reliable bandwidth usage
- **Usage Profiles**:
- Transcoding (highest quality)
- Ultra Low Latency (for real-time applications)
- Low Latency (for interactive streaming)
- Web Camera (optimized for webcam sources)
- **Quality Presets**: Balance between encoding speed and output quality
### Implementation Example
```csharp
var encoder = new AMFHEVCEncoderSettings
{
Bitrate = 3000, // 3 Mbps target bitrate
MaxBitrate = 5000, // 5 Mbps peak bitrate
RateControl = AMFHEVCEncoderRateControl.CBR,
// Quality optimization
Preset = AMFHEVCEncoderPreset.Quality,
Usage = AMFHEVCEncoderUsage.Transcoding,
// GOP and frame settings
GOPSize = 30, // Keyframe interval
QP_I = 22, // I-frame quantization parameter
QP_P = 22, // P-frame quantization parameter
RefFrames = 1 // Reference frames count
};
```
## NVIDIA NVENC HEVC Encoder
NVIDIA's NVENC technology provides dedicated encoding hardware on GeForce and professional GPUs, offering excellent performance and quality across various bitrates.
### Key Capabilities
- **Multiple Profile Support**:
- Main (8-bit)
- Main10 (10-bit HDR)
- Main444 (high color precision)
- Extended bit depth options (12-bit)
- **Advanced Encoding Features**:
- B-frame support with adaptive placement
- Temporal Adaptive Quantization
- Weighted Prediction
- Look-ahead rate control
- **Performance Presets**: From quality-focused to ultra-fast encoding
### Implementation Example
```csharp
var encoder = new NVENCHEVCEncoderSettings
{
// Bitrate configuration
Bitrate = 3000, // 3 Mbps target
MaxBitrate = 5000, // 5 Mbps maximum
// Profile settings
Profile = NVENCHEVCProfile.Main,
Level = NVENCHEVCLevel.Level5_1,
// Quality enhancement options
BFrames = 2, // Number of B-frames
BAdaptive = true, // Adaptive B-frame placement
TemporalAQ = true, // Temporal adaptive quantization
WeightedPrediction = true, // Improves quality for fades
RCLookahead = 20, // Frames to analyze for rate control
// Buffer settings
VBVBufferSize = 0 // Use default buffer size
};
```
## Intel QuickSync HEVC Encoder
Intel QuickSync leverages the integrated GPU present in modern Intel processors for efficient hardware encoding, making it accessible without a dedicated graphics card.
### Key Features
- **Versatile Rate Control Options**:
- `CBR` (Constant Bitrate)
- `VBR` (Variable Bitrate)
- `CQP` (Constant Quantizer)
- `ICQ` (Intelligent Constant Quality)
- `VCM` (Video Conferencing Mode)
- `QVBR` (Quality-defined VBR)
- **Optimization Settings**:
- Target Usage parameter (quality vs speed balance)
- Low-latency mode for streaming
- HDR conformance controls
- Closed caption insertion options
- **Profile Support**:
- Main (8-bit)
- Main10 (10-bit HDR)
### Implementation Example
```csharp
var encoder = new QSVHEVCEncoderSettings
{
// Bitrate settings
Bitrate = 3000, // 3 Mbps target
MaxBitrate = 5000, // 5 Mbps peak
RateControl = QSVHEVCEncRateControl.VBR,
// Quality tuning
TargetUsage = 4, // 1=Best quality, 7=Fastest encoding
// Stream structure
GOPSize = 30, // Keyframe interval
RefFrames = 2, // Reference frames
// Feature configuration
Profile = QSVHEVCEncProfile.Main,
LowLatency = false, // Enable for streaming
// Advanced options
CCInsertMode = QSVHEVCEncSEIInsertMode.Insert,
DisableHRDConformance = false
};
```
## Quality Presets for Simplified Configuration
All encoders support standardized quality presets through the `VideoQuality` enum, providing a simplified configuration approach:
- **Low**: 1 Mbps target, 2 Mbps max (for basic streaming)
- **Normal**: 3 Mbps target, 5 Mbps max (for standard content)
- **High**: 6 Mbps target, 10 Mbps max (for detailed content)
- **Very High**: 15 Mbps target, 25 Mbps max (for premium quality)
### Using Quality Presets
```csharp
// For AMD AMF
var amfEncoder = new AMFHEVCEncoderSettings(VideoQuality.High);
// For NVIDIA NVENC
var nvencEncoder = new NVENCHEVCEncoderSettings(VideoQuality.High);
// For Intel QuickSync
var qsvEncoder = new QSVHEVCEncoderSettings(VideoQuality.High);
```
## Hardware Detection and Fallback Strategy
A robust implementation should check for encoder availability and implement appropriate fallbacks:
```csharp
// Create the most appropriate encoder for the current system
IHEVCEncoderSettings GetOptimalHEVCEncoder()
{
if (AMFHEVCEncoderSettings.IsAvailable())
{
return new AMFHEVCEncoderSettings(VideoQuality.High);
}
else if (NVENCHEVCEncoderSettings.IsAvailable())
{
return new NVENCHEVCEncoderSettings(VideoQuality.High);
}
else if (QSVHEVCEncoderSettings.IsAvailable())
{
return new QSVHEVCEncoderSettings(VideoQuality.High);
}
else
{
// Fall back to software encoder if no hardware is available
return new SoftwareHEVCEncoderSettings(VideoQuality.High);
}
}
```
## Best Practices for HEVC Encoding
### 1. Encoder Selection
- **AMD GPUs**: Best for applications where you know users have AMD hardware
- **NVIDIA GPUs**: Provides consistent quality across generations, ideal for professional applications
- **Intel QuickSync**: Great universal option when a dedicated GPU isn't guaranteed
### 2. Rate Control Selection
- **Streaming**: Use CBR for consistent bandwidth utilization
- **VoD Content**: VBR provides better quality at the same file size
- **Archival**: CQP ensures consistent quality regardless of content complexity
### 3. Performance Optimization
- Lower the reference frames count for faster encoding
- Adjust GOP size based on content type (smaller for high motion, larger for static scenes)
- Consider disabling B-frames for ultra-low latency applications
### 4. Quality Enhancement
- Enable adaptive quantization features for content with varying complexity
- Use weighted prediction for content with fades or gradual transitions
- Implement look-ahead when encoding quality is more important than latency
## Common Troubleshooting
1. **Encoder unavailability**: Ensure GPU drivers are up-to-date
2. **Lower than expected quality**: Check if quality presets match your content type
3. **Performance issues**: Monitor GPU utilization and adjust settings accordingly
4. **Compatibility problems**: Verify target devices support the selected HEVC profile
## Conclusion
Hardware-accelerated HEVC encoding offers significant performance advantages for .NET applications dealing with video processing. By leveraging AMD AMF, NVIDIA NVENC, or Intel QuickSync through VisioForge SDKs, you can achieve optimal balance between quality, speed, and efficiency.
Choose the right encoder and settings based on your specific requirements, target audience, and content type to deliver the best possible experience in your applications.
Start by detecting available hardware encoders, implementing appropriate quality settings, and testing across various content types to ensure optimal results.
---END OF PAGE---
# Local File: .\dotnet\general\video-encoders\index.md
---
title: Complete Guide to Video Encoders in VisioForge .NET SDKs
description: Detailed overview of video encoders for .NET developers using Video Capture, Video Edit, and Media Blocks SDKs - features, performance, and implementation
sidebar_label: Video Encoders
order: 19
---
# Video Encoders in VisioForge .NET SDKs
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## Introduction to Video Encoders
Video encoders are essential components in multimedia processing applications, responsible for compressing video data while maintaining optimal quality. VisioForge .NET SDKs incorporate multiple advanced encoders to meet diverse development requirements across different platforms and use cases.
This guide provides detailed information about each encoder's capabilities, performance characteristics, and implementation details to help .NET developers make informed decisions for their multimedia applications.
## Hardware vs. Software Encoding
When developing video processing applications, choosing between hardware and software encoders significantly impacts application performance and user experience.
### Hardware-Accelerated Encoders
Hardware encoders utilize dedicated processing units (GPUs or specialized hardware):
- **Advantages**: Lower CPU usage, higher encoding speeds, improved battery efficiency
- **Use cases**: Real-time streaming, live video processing, mobile applications
- **Examples in our SDK**: NVIDIA NVENC, AMD AMF, Intel QuickSync
### Software Encoders
Software encoders run on the CPU without specialized hardware:
- **Advantages**: Greater compatibility, more quality control options, platform independence
- **Use cases**: High-quality offline encoding, environments without compatible hardware
- **Examples in our SDK**: OpenH264, Software MJPEG encoder
## Available Video Encoders
Our SDKs provide extensive encoder options to accommodate various project requirements:
### H.264 (AVC) Encoders
H.264 remains one of the most widely used video codecs, offering excellent compression efficiency and broad compatibility.
#### Key Features:
- Multiple profile support (Baseline, Main, High)
- Adjustable bitrate controls (CBR, VBR, CQP)
- B-frame and reference frame configuration
- Hardware acceleration options from major vendors
[Learn more about H.264 encoders →](h264.md)
### HEVC (H.265) Encoders
HEVC delivers superior compression efficiency compared to H.264, enabling higher quality video at the same bitrate or comparable quality at lower bitrates.
#### Key Features:
- Approximately 50% better compression than H.264
- 8-bit and 10-bit color depth support
- Multiple hardware acceleration options
- Advanced rate control mechanisms
[Learn more about HEVC encoders →](hevc.md)
### AV1 Encoder
AV1 represents the next generation of video codecs, offering superior compression efficiency particularly suited for web streaming.
#### Key Features:
- Royalty-free open standard
- Better compression than HEVC
- Increasing browser and device support
- Optimized for web content delivery
[Learn more about AV1 encoder →](av1.md)
### MJPEG Encoders
Motion JPEG provides frame-by-frame JPEG compression, useful for specific applications where individual frame access is important.
#### Key Features:
- Simple implementation
- Low encoding latency
- Independent frame access
- Hardware and software implementations
[Learn more about MJPEG encoders →](mjpeg.md)
### VP8 and VP9 Encoders
These open codecs developed by Google offer royalty-free alternatives with good compression efficiency.
#### Key Features:
- Open-source implementation
- Competitive quality-to-bitrate ratio
- Wide web browser support
- Suitable for WebM container format
[Learn more about VP8/VP9 encoders →](vp8-vp9.md)
### Windows Media Video Encoder
The WMV encoder provides compatibility with Windows ecosystem and legacy applications.
#### Key Features:
- Native Windows integration
- Multiple profile options
- Compatible with Windows Media framework
- Efficient for Windows-centric deployments
[Learn more about WMV encoder →](../output-formats/wmv.md)
## Encoder Selection Guidelines
Selecting the optimal encoder depends on various factors:
### Platform Compatibility
- **Windows**: All encoders supported
- **macOS**: Apple Media encoders, OpenH264, AV1
- **Linux**: VAAPI, OpenH264, software implementations
### Hardware Requirements
When using hardware-accelerated encoders, verify system compatibility:
```csharp
// Check availability of hardware encoders
if (NVENCEncoderSettings.IsAvailable())
{
// Use NVIDIA encoder
}
else if (AMFEncoderSettings.IsAvailable())
{
// Use AMD encoder
}
else if (QSVEncoderSettings.IsAvailable())
{
// Use Intel encoder
}
else
{
// Fallback to software encoder
}
```
### Quality vs. Performance Tradeoffs
Different encoders offer varying balances between quality and encoding speed:
| Encoder Type | Quality | Performance | CPU Usage |
|--------------|---------|-------------|-----------|
| NVENC H.264 | Good | Excellent | Very Low |
| NVENC HEVC | Very Good | Very Good | Very Low |
| AMF H.264 | Good | Very Good | Very Low |
| QSV H.264 | Good | Excellent | Very Low |
| OpenH264 | Good-Excellent | Moderate | High |
| AV1 | Excellent | Poor-Moderate | Very High |
### Encoding Scenarios
- **Live streaming**: Prefer hardware encoders with CBR rate control
- **Video recording**: Hardware encoders with VBR for better quality/size balance
- **Offline processing**: Quality-focused encoders with VBR or CQP
- **Low-latency applications**: Hardware encoders with low-latency presets
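For example, the live-streaming scenario might map to a configuration like the following sketch, which reuses the `QSVH264EncoderSettings` type and properties documented on the H.264 encoders page:

```csharp
// Live streaming: constant bitrate plus low-latency mode, no B-frames.
var liveStreamSettings = new QSVH264EncoderSettings
{
    Bitrate = 4000,                          // kbps; match your upload bandwidth
    RateControl = QSVH264EncRateControl.CBR, // steady bandwidth for streaming
    LowLatency = true,                       // minimize encoder-side buffering
    BFrames = 0                              // avoid frame-reordering delay
};
```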
## Performance Optimization
Maximize encoder efficiency with these best practices:
1. **Match output resolution to content requirements** - Avoid unnecessary upscaling
2. **Select appropriate bitrates** - Higher isn't always better; target your delivery medium
3. **Choose encoder presets wisely** - Faster presets use less CPU but may reduce quality
4. **Enable scene detection** for improved quality at scene changes
5. **Use hardware acceleration** when available for real-time applications
## Conclusion
VisioForge .NET SDKs provide a comprehensive set of video encoders to meet diverse requirements across different platforms and use cases. By understanding the strengths and configurations of each encoder, developers can create high-performance video applications with optimal quality and efficiency.
For specific encoder configuration details, refer to the dedicated documentation pages for each encoder type linked throughout this guide.
---END OF PAGE---
# Local File: .\dotnet\general\video-encoders\mjpeg.md
---
title: Motion JPEG (MJPEG) Encoders in VisioForge .NET SDKs
description: Complete guide to implementing MJPEG video encoders in .NET applications using VisioForge SDKs, with CPU and GPU acceleration options
sidebar_label: Motion JPEG
---
# Motion JPEG (MJPEG) Video Encoders for .NET Applications
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
[!badge variant="dark" size="xl" text="VideoCaptureCoreX"] [!badge variant="dark" size="xl" text="VideoEditCoreX"] [!badge variant="dark" size="xl" text="MediaBlocksPipeline"]
## Introduction to MJPEG Encoding in VisioForge
The VisioForge .NET SDK suite provides robust Motion JPEG (MJPEG) encoder implementations designed for efficient video processing in your applications. MJPEG remains a popular choice for many video applications due to its simplicity, compatibility, and specific use cases where frame-by-frame compression is advantageous.
This documentation provides a detailed exploration of the two MJPEG encoder options available in the VisioForge library:
1. CPU-based MJPEG encoder - The default implementation utilizing processor resources
2. GPU-accelerated Intel QuickSync MJPEG encoder - Hardware-accelerated option for compatible systems
Both implementations offer developers flexible configuration options while maintaining the core MJPEG functionality through the unified `IMJPEGEncoderSettings` interface.
## What is MJPEG and Why Use It?
Motion JPEG (MJPEG) is a video compression format where each video frame is compressed separately as a JPEG image. Unlike more modern codecs such as H.264 or H.265 that use temporal compression across frames, MJPEG treats each frame independently.
### Key Advantages of MJPEG
- **Frame-by-frame processing**: Each frame maintains independent quality without temporal artifacts
- **Lower latency**: Minimal processing delay makes it suitable for real-time applications
- **Editing friendly**: Individual frame access simplifies non-linear editing workflows
- **Resilience to motion**: Maintains quality during scenes with significant movement
- **Universal compatibility**: Works across platforms without specialized hardware decoders
- **Simplified development**: Straightforward implementation in various programming environments
### Common Use Cases
MJPEG encoding is particularly valuable in scenarios such as:
- **Security and surveillance systems**: Where frame quality and reliability are critical
- **Video capture applications**: Real-time video recording with minimal latency
- **Medical imaging**: When individual frame fidelity is essential
- **Industrial vision systems**: For consistent frame-by-frame analysis
- **Multimedia editing software**: Where rapid seeking and frame extraction is required
- **Streaming in bandwidth-limited environments**: Where consistent quality is preferred over file size
## MJPEG Implementation in VisioForge
Both MJPEG encoder implementations in VisioForge SDKs derive from the `IMJPEGEncoderSettings` interface, ensuring a consistent approach regardless of which encoder you choose. This design allows for easy switching between implementations based on performance requirements and hardware availability.
### Core Interface and Common Properties
The shared interface exposes essential properties and methods:
- **Quality**: Integer value from 10-100 controlling compression level
- **CreateBlock()**: Factory method to generate the encoder processing block
- **IsAvailable()**: Static method to verify encoder support on the current system
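As a rough sketch, the contract looks like this (member names come from the list above; exact signatures in the SDK may differ, and `IsAvailable()` is a static method on each concrete settings class rather than an interface member):

```csharp
// Simplified sketch of the shared MJPEG encoder settings contract.
public interface IMJPEGEncoderSettings
{
    int Quality { get; set; }   // 10-100; higher values mean better quality
    MediaBlock CreateBlock();   // factory for the encoder pipeline block
}
```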
## CPU-based MJPEG Encoder
The CPU-based encoder serves as the default implementation, providing reliable encoding across virtually all system configurations. It performs all encoding operations using the CPU, making it a universally compatible choice for MJPEG encoding.
### Features and Specifications
- **Processing method**: Pure CPU-based encoding
- **Quality range**: 10-100 (higher values = better quality, larger files)
- **Default quality**: 85 (balances quality and file size)
- **Performance characteristics**: Scales with CPU cores and processing power
- **Memory usage**: Moderate, dependent on frame resolution and processing settings
- **Compatibility**: Works on any system supporting the .NET runtime
- **Specialized hardware**: None required
### Detailed Implementation Example
```csharp
// Import the necessary VisioForge namespaces
using VisioForge.Core.Types.Output;
// Create a new instance of the CPU-based encoder settings
var mjpegSettings = new MJPEGEncoderSettings();
// Configure quality (10-100)
mjpegSettings.Quality = 85; // Default balanced quality
// Optional: Verify encoder availability
if (MJPEGEncoderSettings.IsAvailable())
{
// Create the encoder processing block
var encoderBlock = mjpegSettings.CreateBlock();
// Add the encoder block to your processing pipeline
pipeline.AddBlock(encoderBlock);
// Additional pipeline configuration
// ...
// Start the encoding process
await pipeline.StartAsync();
}
else
{
// Handle encoder unavailability
Console.WriteLine("CPU-based MJPEG encoder is not available on this system.");
}
```
### Quality-to-Size Relationship
The quality setting directly affects both the visual quality and resulting file size:
| Quality Setting | Visual Quality | File Size | Recommended Use Case |
|----------------|---------------|-----------|----------------------|
| 10-30 | Very Low | Smallest | Archival, minimal bandwidth |
| 31-60 | Low | Small | Web previews, thumbnails |
| 61-80 | Medium | Moderate | Standard recording |
| 81-95 | High | Large | Professional applications |
| 96-100 | Maximum | Largest | Critical visual analysis |
## Intel QuickSync MJPEG Encoder
For systems with compatible Intel hardware, the QuickSync MJPEG encoder offers GPU-accelerated encoding performance. This implementation leverages Intel's QuickSync Video technology to offload encoding operations from the CPU to dedicated media processing hardware.
### Hardware Requirements
- Intel CPU with integrated graphics supporting QuickSync Video
- Supported processor families:
- Intel Core i3/i5/i7/i9 (6th generation or newer recommended)
- Intel Xeon with compatible graphics
- Select Intel Pentium and Celeron processors with HD Graphics
### Features and Advantages
- **Hardware acceleration**: Dedicated media processing engines
- **Quality range**: 10-100 (same as CPU-based encoder)
- **Default quality**: 85
- **Preset profiles**: Four predefined quality configurations
- **Reduced CPU load**: Frees processor resources for other tasks
- **Power efficiency**: Lower energy consumption during encoding
- **Performance gain**: Up to 3x faster than CPU-based encoding (hardware dependent)
### Implementation Examples
#### Basic Implementation
```csharp
// Import required namespaces
using VisioForge.Core.Types.Output;
// Create QuickSync MJPEG encoder with default settings
var qsvEncoder = new QSVMJPEGEncoderSettings();
// Verify hardware support
if (QSVMJPEGEncoderSettings.IsAvailable())
{
// Set custom quality value
qsvEncoder.Quality = 90; // Higher quality setting
// Create and add encoder block
var encoderBlock = qsvEncoder.CreateBlock();
pipeline.AddBlock(encoderBlock);
// Continue pipeline setup
}
else
{
// Fall back to CPU-based encoder
Console.WriteLine("QuickSync hardware not detected. Falling back to CPU encoder.");
var cpuEncoder = new MJPEGEncoderSettings();
pipeline.AddBlock(cpuEncoder.CreateBlock());
}
```
#### Using Preset Quality Profiles
```csharp
// Create encoder with preset quality profile
var highQualityEncoder = new QSVMJPEGEncoderSettings(VideoQuality.High);
// Or select other preset profiles
var lowQualityEncoder = new QSVMJPEGEncoderSettings(VideoQuality.Low);
var normalQualityEncoder = new QSVMJPEGEncoderSettings(VideoQuality.Normal);
var veryHighQualityEncoder = new QSVMJPEGEncoderSettings(VideoQuality.VeryHigh);
// Check availability and create encoder block
if (QSVMJPEGEncoderSettings.IsAvailable())
{
var encoderBlock = highQualityEncoder.CreateBlock();
// Use encoder in pipeline
}
```
### Quality Preset Mapping
The QuickSync implementation provides convenient preset quality profiles that map to specific quality values:
| Preset Profile | Quality Value | Suitable Applications |
|---------------|--------------|----------------------|
| Low | 60 | Surveillance, monitoring, archiving |
| Normal | 75 | Standard recording, web content |
| High | 85 | Default for most applications |
| VeryHigh | 95 | Professional video production |
## Performance Optimization Guidelines
Achieving optimal MJPEG encoding performance requires careful consideration of several factors:
### System Configuration Recommendations
1. **Memory allocation**: Ensure sufficient RAM for frame buffering (minimum 8GB recommended)
2. **Storage throughput**: Use SSD storage for best write performance during encoding
3. **CPU considerations**: Multi-core processors benefit the CPU-based encoder
4. **GPU drivers**: Keep Intel graphics drivers updated for QuickSync performance
5. **Background processes**: Minimize competing system processes during encoding
### Code-Level Optimization Techniques
1. **Frame size selection**: Consider downscaling before encoding for better performance
2. **Quality selection**: Balance visual requirements against performance needs
3. **Pipeline design**: Minimize unnecessary processing stages before encoding
4. **Error handling**: Implement graceful fallback between encoder types
5. **Threading model**: Respect the threading model of the VisioForge pipeline
## Best Practices for MJPEG Implementation
To ensure reliable and efficient MJPEG encoding in your applications:
1. **Always check availability**: Use the `IsAvailable()` method before creating encoder instances
2. **Implement encoder fallback**: Have CPU-based encoding as a backup when QuickSync is unavailable
3. **Quality testing**: Test different quality settings with your specific video content
4. **Performance monitoring**: Monitor CPU/GPU usage during encoding to identify bottlenecks
5. **Exception handling**: Handle potential encoder initialization failures gracefully
6. **Version compatibility**: Ensure SDK version compatibility with your development environment
7. **License validation**: Verify proper licensing for your production environment (points 1, 2, and 5 are combined in the sketch below)
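A minimal defensive helper that combines availability checks, fallback, and exception handling, reusing the settings types from the examples above (the `MediaBlock` return type is an assumption based on the encoder interfaces elsewhere in this documentation):

```csharp
// Prefer QuickSync when present, fall back to the CPU encoder,
// and surface initialization failures instead of crashing pipeline setup.
MediaBlock CreateMjpegEncoderBlock()
{
    try
    {
        if (QSVMJPEGEncoderSettings.IsAvailable())
        {
            return new QSVMJPEGEncoderSettings(VideoQuality.High).CreateBlock();
        }

        return new MJPEGEncoderSettings().CreateBlock();
    }
    catch (Exception ex)
    {
        Console.WriteLine($"MJPEG encoder initialization failed: {ex.Message}");
        throw;
    }
}
```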
## Troubleshooting Common Issues
### QuickSync Availability Problems
- Ensure Intel drivers are up-to-date
- Verify BIOS settings haven't disabled integrated graphics
- Check for competing GPU-accelerated applications
### Performance Issues
- Monitor system resource usage during encoding
- Reduce input frame resolution or frame rate if necessary
- Consider quality setting adjustments
### Quality Problems
- Increase quality settings for better visual results
- Examine source material for pre-existing quality issues
- Consider frame pre-processing for problematic source material
## Conclusion
The VisioForge .NET SDK provides flexible MJPEG encoding options suitable for a wide range of development scenarios. By understanding the characteristics and configuration options of both the CPU-based and QuickSync implementations, developers can make informed decisions about which encoder best fits their application requirements.
Whether prioritizing universal compatibility with the CPU-based encoder or leveraging hardware acceleration with the QuickSync implementation, the consistent interface and comprehensive feature set enable efficient video processing while maintaining the frame-independent nature of MJPEG encoding that makes it valuable for specific video processing applications.
---END OF PAGE---
# Local File: .\dotnet\general\video-encoders\vp8-vp9.md
---
title: Implementing VP8 and VP9 Encoders in VisioForge .Net SDK
description: Learn how to configure VP8 and VP9 video encoders in VisioForge SDK for optimal streaming, recording and processing performance
sidebar_label: VP8/VP9
---
# VP8 and VP9 Video Encoders Guide
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
This guide shows you how to implement VP8 and VP9 video encoding in VisioForge .NET SDKs. You'll learn about the available encoder options and how to optimize them for your specific application needs.
## Encoder Options Overview
VisioForge SDK provides multiple encoder implementations based on your platform requirements:
### Windows Platform Encoders
[!badge variant="dark" size="xl" text="VideoCaptureCore"] [!badge variant="dark" size="xl" text="VideoEditCore"]
- Software-based VP8 and VP9 encoders configured through the [WebMOutput](https://api.visioforge.org/dotnet/api/VisioForge.Core.Types.Output.WebMOutput.html) class
### Cross-Platform X-Engine Options
[!badge variant="dark" size="xl" text="VideoCaptureCoreX"] [!badge variant="dark" size="xl" text="VideoEditCoreX"] [!badge variant="dark" size="xl" text="MediaBlocksPipeline"]
- VP8 software encoder via [VP8EncoderSettings](https://api.visioforge.org/dotnet/api/VisioForge.Core.Types.X.VideoEncoders.VP8EncoderSettings.html)
- VP9 software encoder via [VP9EncoderSettings](https://api.visioforge.org/dotnet/api/VisioForge.Core.Types.X.VideoEncoders.VP9EncoderSettings.html)
- Hardware-accelerated Intel GPU VP9 encoder via [QSVVP9EncoderSettings](https://api.visioforge.org/dotnet/api/VisioForge.Core.Types.X.VideoEncoders.QSVVP9EncoderSettings.html) for integrated GPUs
## Bitrate Control Strategies
All VP8 and VP9 encoders support different bitrate control modes to match your application requirements:
### Constant Bitrate (CBR)
CBR maintains consistent bitrate throughout the encoding process, making it ideal for:
- Live streaming applications
- Scenarios with bandwidth limitations
- Real-time video communication
**Implementation Examples:**
With `WebMOutput` (Windows):
```csharp
var webmOutput = new WebMOutput();
webmOutput.Video_EndUsage = VP8EndUsageMode.CBR;
webmOutput.Video_Encoder = WebMVideoEncoder.VP8;
webmOutput.Video_Bitrate = 2000; // 2 Mbps
```
With `VP8EncoderSettings`:
```csharp
var vp8 = new VP8EncoderSettings();
vp8.RateControl = VPXRateControl.CBR;
vp8.TargetBitrate = 2000; // 2 Mbps
```
With `VP9EncoderSettings`:
```csharp
var vp9 = new VP9EncoderSettings();
vp9.RateControl = VPXRateControl.CBR;
vp9.TargetBitrate = 2000; // 2 Mbps
```
With Intel GPU encoder:
```csharp
var vp9qsv = new QSVVP9EncoderSettings();
vp9qsv.RateControl = QSVVP9EncRateControl.CBR;
vp9qsv.Bitrate = 2000; // 2 Mbps
```
### Variable Bitrate (VBR)
VBR dynamically adjusts bitrate based on content complexity, best for:
- Non-live video encoding
- Scenarios prioritizing visual quality over file size
- Content with varying visual complexity
**Implementation Examples:**
With `WebMOutput` (Windows):
```csharp
var webmOutput = new WebMOutput();
webmOutput.Video_EndUsage = VP8EndUsageMode.VBR;
webmOutput.Video_Encoder = WebMVideoEncoder.VP8;
webmOutput.Video_Bitrate = 3000; // 3 Mbps target
```
With `VP8EncoderSettings`:
```csharp
var vp8 = new VP8EncoderSettings();
vp8.RateControl = VPXRateControl.VBR;
vp8.TargetBitrate = 3000;
```
With `VP9EncoderSettings`:
```csharp
var vp9 = new VP9EncoderSettings();
vp9.RateControl = VPXRateControl.VBR;
vp9.TargetBitrate = 3000;
```
With Intel GPU encoder:
```csharp
var vp9qsv = new QSVVP9EncoderSettings();
vp9qsv.RateControl = QSVVP9EncRateControl.VBR;
vp9qsv.Bitrate = 3000;
```
## Quality-Focused Encoding Modes
These modes prioritize consistent visual quality over specific bitrate targets:
### Constant Quality (CQ) Mode
Available for software VP8 and VP9 encoders:
```csharp
var vp8 = new VP8EncoderSettings();
vp8.RateControl = VPXRateControl.CQ;
vp8.CQLevel = 20; // Quality level (0-63, lower values = better quality)
```
```csharp
var vp9 = new VP9EncoderSettings();
vp9.RateControl = VPXRateControl.CQ;
vp9.CQLevel = 20;
```
### Intel QSV Quality Modes
Intel's hardware encoder supports two quality-focused modes:
**Intelligent Constant Quality (ICQ):**
```csharp
var vp9qsv = new QSVVP9EncoderSettings();
vp9qsv.RateControl = QSVVP9EncRateControl.ICQ;
vp9qsv.ICQQuality = 25; // 20-27 recommended for balanced quality
```
**Constant Quantization Parameter (CQP):**
```csharp
var vp9qsv = new QSVVP9EncoderSettings();
vp9qsv.RateControl = QSVVP9EncRateControl.CQP;
vp9qsv.QPI = 26; // I-frame QP
vp9qsv.QPP = 28; // P-frame QP
```
## VP9 Performance Optimization
VP9 encoders offer additional features for enhanced performance:
### Adaptive Quantization
Improves visual quality by allocating more bits to complex areas:
```csharp
var vp9 = new VP9EncoderSettings();
vp9.AQMode = VPXAdaptiveQuantizationMode.Variance; // Enable variance-based AQ
```
### Parallel Processing
Speeds up encoding through multi-threading and tile-based processing:
```csharp
var vp9 = new VP9EncoderSettings();
vp9.FrameParallelDecoding = true; // Enable parallel frame processing
vp9.RowMultithread = true; // Enable row-based multithreading
vp9.TileColumns = 6; // Set number of tile columns (log2)
vp9.TileRows = 0; // Set number of tile rows (log2)
```
## Error Resilience Settings
Both VP8 and VP9 support error resilience for robust streaming over unreliable networks:
Using `WebMOutput` (Windows):
```csharp
var webmOutput = new WebMOutput();
webmOutput.Video_ErrorResilient = true; // Enable error resilience
```
Using software encoders:
```csharp
var vpx = new VP8EncoderSettings(); // or VP9EncoderSettings
vpx.ErrorResilient = VPXErrorResilientFlags.Default | VPXErrorResilientFlags.Partitions;
```
## Performance Tuning Options
Optimize encoding performance with these settings:
```csharp
var vpx = new VP8EncoderSettings(); // or VP9EncoderSettings
vpx.CPUUsed = 0; // Range: -16 to 16, higher values favor speed over quality
vpx.NumOfThreads = 4; // Specify number of encoding threads
vpx.TokenPartitions = VPXTokenPartitions.Eight; // Enable parallel token processing
```
## Best Practices for VP8/VP9 Encoding
### Rate Control Selection
Choose the appropriate rate control mode based on your application:
- **CBR** for live streaming and real-time communication
- **VBR** for offline encoding where quality is the priority
- **Quality-based modes** (CQ, ICQ, CQP) for highest possible quality regardless of bitrate
### Performance Optimization
- Adjust `CPUUsed` to balance quality and encoding speed
- Enable multithreading for faster encoding on multi-core systems
- Use tile-based parallelism in VP9 for better hardware utilization
### Error Recovery
- Enable error resilience when streaming over unreliable networks
- Configure token partitioning for improved error recovery
- Consider frame reordering limitations for low-latency applications
### Quality Optimization
- Use adaptive quantization in VP9 for better quality distribution
- Consider two-pass encoding for offline encoding scenarios
- Adjust quantizer settings based on content type and target quality
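Putting rate control and hardware selection together, here is a hedged sketch that prefers the Intel QSV VP9 encoder when present; it assumes `QSVVP9EncoderSettings` follows the SDK-wide static `IsAvailable()` pattern:

```csharp
// Prefer Intel hardware VP9 when present; fall back to software VP9.
if (QSVVP9EncoderSettings.IsAvailable())
{
    var vp9qsv = new QSVVP9EncoderSettings();
    vp9qsv.RateControl = QSVVP9EncRateControl.CBR;
    vp9qsv.Bitrate = 2000; // 2 Mbps
    // use vp9qsv in your output/pipeline configuration
}
else
{
    var vp9 = new VP9EncoderSettings();
    vp9.RateControl = VPXRateControl.CBR;
    vp9.TargetBitrate = 2000; // 2 Mbps
    // use vp9 in your output/pipeline configuration
}
```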
By following this guide, you'll be able to effectively implement and configure VP8 and VP9 encoders in your VisioForge .NET applications for optimal performance and quality.
---END OF PAGE---
# Local File: .\dotnet\install\avalonia.md
---
title: Integrate Media SDKs with Avalonia Applications
description: Learn how to implement powerful video and media capabilities in cross-platform Avalonia projects. This guide covers setup, configuration, and optimization across Windows, macOS, Linux, Android, and iOS platforms, with platform-specific requirements and best practices for seamless integration.
sidebar_label: Avalonia
order: 14
---
# Building Media-Rich Avalonia Applications with VisioForge
## Framework Overview
Avalonia UI stands out as a versatile, truly cross-platform .NET UI framework with support spanning desktop environments (Windows, macOS, Linux) and mobile platforms (iOS and Android). VisioForge enhances this ecosystem through the specialized `VisioForge.DotNet.Core.UI.Avalonia` package, which delivers high-performance multimedia controls tailored for Avalonia's architecture.
Our suite of SDKs empowers Avalonia developers with extensive multimedia capabilities:
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Player SDK .Net"](https://www.visioforge.com/media-player-sdk-net)
## Setup and Configuration
### Essential Package Installation
Creating an Avalonia application with VisioForge multimedia capabilities requires installing several key NuGet components:
1. Avalonia-specific UI layer: `VisioForge.DotNet.Core.UI.Avalonia`
2. Core functionality package: `VisioForge.DotNet.Core` (or specialized SDK variant)
3. Platform-specific native bindings (covered in detail in later sections)
Add these to your project manifest (`.csproj`):
```xml
<ItemGroup>
  <!-- Versions are illustrative; use the latest releases from nuget.org -->
  <PackageReference Include="VisioForge.DotNet.Core" Version="*" />
  <PackageReference Include="VisioForge.DotNet.Core.UI.Avalonia" Version="*" />
</ItemGroup>
```
### Avalonia Initialization Architecture
A key advantage of VisioForge's Avalonia integration is its seamless initialization model. Unlike some frameworks requiring explicit global setup, the Avalonia controls become available immediately once the core package is referenced.
Your standard Avalonia bootstrap code in `Program.cs` remains unchanged:
```csharp
using Avalonia;
using System;
namespace YourAppNamespace;
class Program
{
[STAThread]
public static void Main(string[] args) => BuildAvaloniaApp()
.StartWithClassicDesktopLifetime(args);
public static AppBuilder BuildAvaloniaApp()
=> AppBuilder.Configure()
.UsePlatformDetect()
.LogToTrace();
}
```
### Implementing the VideoView Component
The `VideoView` control serves as the central rendering element. Integrate it into your `.axaml` files using:
1. First, declare the VisioForge namespace:
```xml
xmlns:vf="clr-namespace:VisioForge.Core.UI.Avalonia;assembly=VisioForge.Core.UI.Avalonia"
```
2. Then, implement the control in your layout structure:
```xml
<!-- x:Name is illustrative -->
<vf:VideoView x:Name="videoView"
              HorizontalAlignment="Stretch"
              VerticalAlignment="Stretch" />
```
This control adapts automatically to the platform-specific rendering pipeline while maintaining a consistent API surface.
## Desktop Platform Integration
### Windows Implementation Guide
Windows deployment requires specific native components packaged as NuGet references.
#### Core Windows Components
Add the following Windows-specific packages to your desktop project:
```xml
<ItemGroup>
  <!-- Package name is illustrative; see the deployment guide for the exact
       Windows redistributable list for your SDK and target architecture -->
  <PackageReference Include="VisioForge.CrossPlatform.Core.Windows.x64" Version="*" />
</ItemGroup>
```
#### Advanced Media Format Support
For extended codec compatibility, include the size-optimized UPX variant of the libAV libraries:
```xml
<ItemGroup>
  <!-- Illustrative name for the UPX-compressed libAV redistributable -->
  <PackageReference Include="VisioForge.CrossPlatform.Libav.Windows.x64.UPX" Version="*" />
</ItemGroup>
```
The UPX variant delivers significant size optimization while maintaining full codec compatibility.
### macOS Integration
For macOS deployment:
#### Native Binding Package
Include the macOS-specific native components:
```xml
<ItemGroup>
  <!-- Illustrative name; check the deployment guide for the current macOS package -->
  <PackageReference Include="VisioForge.CrossPlatform.Core.macOS" Version="*" />
</ItemGroup>
```
#### Framework Configuration
Configure your project with the appropriate macOS framework target:
```xml
<TargetFramework>net8.0-macos14.0</TargetFramework>
<OutputType>Exe</OutputType>
```
### Linux Deployment
Linux support includes:
#### Framework Configuration
Set up the appropriate target framework for Linux environments:
```xml
<TargetFramework>net8.0</TargetFramework>
<OutputType>Exe</OutputType>
```
#### System Dependencies
For Linux deployment, ensure required system libraries are available on the target system. Unlike Windows and macOS which use NuGet packages, Linux may require system-level dependencies. Consult the VisioForge Linux documentation for specific platform requirements.
## Mobile Development
### Android Configuration
Android implementation requires additional steps unique to Avalonia's Android integration model:
#### Java Interoperability Layer
The VisioForge Android implementation requires a binding bridge between .NET and Android native APIs:
1. Obtain the Java binding project from the [VisioForge samples repository](https://github.com/visioforge/.Net-SDK-s-samples) in the `AndroidDependency` directory
2. Add the appropriate binding project to your solution:
- Use `VisioForge.Core.Android.X8.csproj` for .NET 8 applications
3. Reference this project in your Android head project:
```xml
<ItemGroup>
  <!-- Adjust the relative path to where the binding project sits in your solution -->
  <ProjectReference Include="..\AndroidDependency\VisioForge.Core.Android.X8.csproj" />
</ItemGroup>
```
#### Android-Specific Package
Add the Android redistributable package:
```xml
<ItemGroup>
  <!-- Illustrative name; check the deployment guide for the current Android package -->
  <PackageReference Include="VisioForge.CrossPlatform.Core.Android" Version="*" />
</ItemGroup>
```
#### Runtime Permissions
Configure the `AndroidManifest.xml` with appropriate permissions:
- `android.permission.CAMERA`
- `android.permission.RECORD_AUDIO`
- `android.permission.READ_EXTERNAL_STORAGE`
- `android.permission.WRITE_EXTERNAL_STORAGE`
- `android.permission.INTERNET`
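In `AndroidManifest.xml`, these are declared with standard `uses-permission` entries:

```xml
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.INTERNET" />
```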
### iOS Development
iOS integration with Avalonia requires:
#### Native Components
Add the iOS-specific redistributable to your iOS head project:
```xml
<ItemGroup>
  <!-- Illustrative name; check the deployment guide for the current iOS package -->
  <PackageReference Include="VisioForge.CrossPlatform.Core.iOS" Version="*" />
</ItemGroup>
```
#### Important Implementation Notes
- Physical device testing is essential, as simulator support is limited
- Update your `Info.plist` with privacy descriptions:
- `NSCameraUsageDescription` for camera access
- `NSMicrophoneUsageDescription` for audio recording
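For example, in `Info.plist` (the description strings are placeholders):

```xml
<key>NSCameraUsageDescription</key>
<string>This app uses the camera for video capture.</string>
<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone for audio recording.</string>
```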
## Performance Engineering
Maximize application performance with these Avalonia-specific optimizations:
1. Enable hardware acceleration when supported by the underlying platform
2. Implement adaptive resolution scaling based on device capabilities
3. Optimize memory usage patterns, especially for mobile targets
4. Utilize Avalonia's compositing model effectively by minimizing visual tree complexity around the `VideoView`
## Troubleshooting Guide
### Media Format Problems
- **Playback failures**:
- Ensure all platform packages are correctly referenced
- Verify codec availability for the target media format
- Check for platform-specific format restrictions
### Performance Concerns
- **Slow playback or rendering**:
- Enable hardware acceleration where available
- Reduce processing resolution when appropriate
- Utilize Avalonia's threading model correctly
### Deployment Challenges
- **Platform-specific runtime errors**:
- Validate target framework specifications
- Verify native dependency availability
- Ensure proper provisioning for mobile targets
## Multi-Platform Project Architecture
VisioForge's Avalonia integration excels with a specialized multi-headed project structure. The `SimplePlayerMVVM` sample demonstrates this architecture:
- **Core shared project** (`SimplePlayerMVVM.csproj`): Contains cross-platform views, view models, and shared logic with conditional multi-targeting:
```xml
<PropertyGroup>
  <!-- Property tags reconstructed from the garbled sample; "enable", "latest",
       and "true" map to typical Nullable / LangVersion / ImplicitUsings values -->
  <Nullable>enable</Nullable>
  <LangVersion>latest</LangVersion>
  <ImplicitUsings>true</ImplicitUsings>
  <!-- Per-OS multi-targeting (conditions reconstructed) -->
  <TargetFrameworks Condition="$([MSBuild]::IsOSPlatform('windows'))">net8.0-android;net8.0-ios;net8.0-windows</TargetFrameworks>
  <TargetFrameworks Condition="$([MSBuild]::IsOSPlatform('osx'))">net8.0-android;net8.0-ios;net8.0-macos14.0</TargetFrameworks>
  <TargetFrameworks Condition="$([MSBuild]::IsOSPlatform('linux'))">net8.0-android;net8.0</TargetFrameworks>
</PropertyGroup>
```
- **Platform-specific head projects**:
- `SimplePlayerMVVM.Android.csproj`: Contains Android-specific configuration and binding references
- `SimplePlayerMVVM.iOS.csproj`: Handles iOS initialization and dependencies
- `SimplePlayerMVVM.Desktop.csproj`: Manages desktop platform detection and appropriate redistributable loading
For simpler desktop-only applications, `SimpleVideoCaptureA.csproj` provides a streamlined model with platform detection occurring within a single project file.
## Conclusion
VisioForge's Avalonia integration offers a sophisticated approach to cross-platform multimedia development that leverages Avalonia's unique architectural advantages. Through carefully structured platform-specific components and a unified API, developers can build rich media applications that span desktop and mobile platforms without compromising on performance or capabilities.
For complete code examples and sample applications, visit our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples), which contains specialized Avalonia demonstrations in the Video Capture SDK X and Media Player SDK X sections.
---END OF PAGE---
# Local File: .\dotnet\install\index.md
---
title: .NET SDKs Installation Guide for Developers
description: Complete guide for installing multimedia .NET SDKs in Visual Studio, Rider, and other IDEs. Learn step-by-step installation methods, platform-specific configuration, framework support, and troubleshooting for Windows, macOS, iOS, Android, and Linux environments.
sidebar_label: Installation
order: 21
---
# VisioForge .NET SDKs Installation Guide
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Player SDK .Net"](https://www.visioforge.com/media-player-sdk-net)
VisioForge offers powerful multimedia SDKs for .NET developers that enable advanced video capture, editing, playback, and media processing capabilities in your applications. This guide covers everything you need to know to properly install and configure our SDKs in your development environment.
## Available .NET SDKs
VisioForge provides several specialized SDKs to address different multimedia needs:
- [Video Capture SDK .Net](https://www.visioforge.com/video-capture-sdk-net) - For capturing video from cameras, screen recording, and streaming
- [Video Edit SDK .Net](https://www.visioforge.com/video-edit-sdk-net) - For video editing, processing, and format conversion
- [Media Blocks SDK .Net](https://www.visioforge.com/media-blocks-sdk-net) - For building custom media processing pipelines
- [Media Player SDK .Net](https://www.visioforge.com/media-player-sdk-net) - For creating custom media players with advanced features
## Installation Methods
You can install our SDKs using two primary methods:
### Using Setup Files
The setup file installation method is recommended for Windows development environments. This approach:
1. Automatically installs all required dependencies
2. Configures Visual Studio integration
3. Includes sample projects to help you get started quickly
4. Provides documentation and additional resources
Setup files can be downloaded from the respective SDK product pages on our website.
### Using NuGet Packages
For cross-platform development or CI/CD pipelines, our NuGet packages offer flexibility and easy integration:
```cmd
Install-Package VisioForge.DotNet.Core
```
Additional UI-specific packages may be required depending on your target platform:
```cmd
Install-Package VisioForge.DotNet.Core.UI.MAUI
Install-Package VisioForge.DotNet.Core.UI.WinUI
Install-Package VisioForge.DotNet.Core.UI.Avalonia
```
## IDE Integration and Setup
Our SDKs seamlessly integrate with popular .NET development environments:
### Visual Studio Integration
[Visual Studio](visual-studio.md) offers the most complete experience with our SDKs:
- Full IntelliSense support for SDK components
- Built-in debugging for media processing components
- Designer support for visual controls
- NuGet package management
For detailed Visual Studio setup instructions, see our [Visual Studio integration guide](visual-studio.md).
### JetBrains Rider Integration
[Rider](rider.md) provides excellent cross-platform development support:
- Full code completion for SDK APIs
- Smart navigation features for exploring SDK classes
- Integrated NuGet package management
- Cross-platform debugging capabilities
For Rider-specific instructions, visit our [Rider integration documentation](rider.md).
### Visual Studio for Mac
[Visual Studio for Mac](visual-studio-mac.md) users can develop applications for macOS, iOS, and Android:
- Built-in NuGet package manager for installing SDK components
- Project templates for quick setup
- Integrated debugging tools
Learn more in our [Visual Studio for Mac setup guide](visual-studio-mac.md).
## Platform-Specific Configuration
### Target Framework Configuration
Each operating system requires specific target framework settings for optimal compatibility:
#### Windows Applications
Windows applications must use the `-windows` target framework suffix:
```xml
<TargetFramework>net8.0-windows</TargetFramework>
```
This enables access to Windows-specific APIs and UI frameworks like WPF and Windows Forms.
#### Android Development
Android projects require the `-android` framework suffix:
```xml
<TargetFramework>net8.0-android</TargetFramework>
```
Ensure that Android workloads are installed in your development environment:
```
dotnet workload install android
```
#### iOS Development
iOS applications must use the `-ios` target framework:
```xml
<TargetFramework>net8.0-ios</TargetFramework>
```
iOS development requires a Mac with Xcode installed, even when using Visual Studio on Windows.
#### macOS Applications
macOS native applications use either the `-macos` or `-maccatalyst` framework:
```xml
<TargetFramework>net8.0-macos</TargetFramework>
```
For .NET MAUI applications targeting macOS, use:
```xml
<TargetFramework>net8.0-maccatalyst</TargetFramework>
```
#### Linux Development
Linux applications use the standard target framework without a platform suffix:
```xml
<TargetFramework>net8.0</TargetFramework>
```
No Linux-specific .NET workload is required; the standard `net8.0` target is sufficient. Native dependencies may need to be installed through your distribution's package manager instead; see the deployment guide for details.
## Special Framework Support
### .NET MAUI Applications
[MAUI projects](maui.md) require special configuration:
- Add the `VisioForge.DotNet.Core.UI.MAUI` NuGet package
- Configure platform-specific permissions in your project
- Use MAUI-specific video view controls
See our [detailed MAUI guide](maui.md) for complete instructions.
### Avalonia UI Framework
[Avalonia projects](avalonia.md) provide a cross-platform UI alternative:
- Install the `VisioForge.DotNet.Core.UI.Avalonia` package
- Use Avalonia-specific video rendering controls
- Configure platform-specific dependencies
Our [Avalonia integration guide](avalonia.md) provides complete setup instructions.
## SDK Initialization for Cross-Platform Engines
Our SDKs include both Windows-specific DirectShow engines (like `VideoCaptureCore`) and cross-platform X-engines (like `VideoCaptureCoreX`). The X-engines require explicit initialization and cleanup.
### Initializing the SDK
Before using any X-engine components, initialize the SDK:
```csharp
// Initialize at application startup
VisioForge.Core.VisioForgeX.InitSDK();
// Or use the async version
await VisioForge.Core.VisioForgeX.InitSDKAsync();
```
### Cleaning Up Resources
When your application exits, properly release resources:
```csharp
// Clean up at application exit
VisioForge.Core.VisioForgeX.DestroySDK();
// Or use the async version
await VisioForge.Core.VisioForgeX.DestroySDKAsync();
```
Failing to initialize or clean up properly may result in memory leaks or unstable behavior.
## Video Rendering Controls
Each UI framework requires specific video view controls to display media content:
### Windows Forms
```csharp
// Add reference to VisioForge.DotNet.Core
using VisioForge.Core.UI.WinForms;
// In your form
videoView = new VideoView();
this.Controls.Add(videoView);
```
### WPF Applications
```csharp
// Add reference to VisioForge.DotNet.Core
using VisioForge.Core.UI.WPF;
// The VideoView control itself is declared in XAML (see the sketch below)
```
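A minimal XAML sketch, assuming the namespace mapping used elsewhere in this documentation (attribute values are illustrative):

```xml
<!-- On the root Window element -->
xmlns:wpf="clr-namespace:VisioForge.Core.UI.WPF;assembly=VisioForge.Core"

<!-- In your layout -->
<wpf:VideoView x:Name="videoView" Width="640" Height="480" />
```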
### MAUI Applications
```csharp
// Add reference to VisioForge.DotNet.Core.UI.MAUI
using VisioForge.Core.UI.MAUI;
// The VideoView control itself is declared in XAML (see the sketch below)
```
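A minimal XAML sketch for MAUI, using the `vf` namespace mapping shown in the MAUI guide below (attribute values are illustrative):

```xaml
<!-- On the root ContentPage element -->
xmlns:vf="clr-namespace:VisioForge.Core.UI.MAUI;assembly=VisioForge.Core.UI.MAUI"

<!-- In your layout -->
<vf:VideoView x:Name="videoView" HorizontalOptions="Fill" VerticalOptions="Fill" />
```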
### Avalonia UI
```csharp
// Add reference to VisioForge.DotNet.Core.UI.Avalonia
using VisioForge.Core.UI.Avalonia;
// The VideoView control itself is declared in XAML (see the sketch below)
```
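A minimal XAML sketch for Avalonia; the namespace mapping here is an assumption modeled on the other UI packages, so verify it against the Avalonia integration guide:

```xaml
<!-- On the root element (namespace mapping is an assumption) -->
xmlns:vf="clr-namespace:VisioForge.Core.UI.Avalonia;assembly=VisioForge.Core.UI.Avalonia"

<!-- In your layout -->
<vf:VideoView x:Name="videoView" />
```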
## Native Dependencies Management
Our SDKs leverage native libraries for optimal performance. These dependencies must be properly managed for deployment:
- Windows: Included automatically with setup installation or NuGet packages
- macOS/iOS: Bundled with NuGet packages but require proper app signing
- Android: Included in NuGet packages with proper architecture support
- Linux: May require additional system packages depending on distribution
For detailed deployment instructions, see our [deployment guide](../deployment-x/index.md).
## Troubleshooting Common Installation Issues
If you encounter issues during installation:
1. Verify target framework compatibility with your project type
2. Ensure all required workloads are installed (`dotnet workload list`)
3. Check for dependency conflicts in your project
4. Confirm proper SDK initialization for X-engines
5. Review platform-specific requirements in our documentation
## Sample Code and Resources
We maintain an extensive collection of sample applications on our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples) to help you get started quickly with our SDKs.
These examples cover common scenarios like:
- Video capture from cameras and screens
- Media playback with custom controls
- Video editing and processing
- Cross-platform development
Visit our repository for the latest code examples and best practices for using our SDKs.
---
For additional support or questions, please contact our technical support team or visit our documentation portal.
---END OF PAGE---
# Local File: .\dotnet\install\maui.md
---
title: Integrate Media SDKs with .NET MAUI Applications
description: Learn how to implement powerful video and media capabilities in cross-platform .NET MAUI projects. This guide covers setup, configuration, and optimization across Windows, Android, iOS, and macOS platforms, with platform-specific requirements and best practices for seamless integration.
sidebar_label: MAUI
order: 15
---
# Integrating VisioForge SDKs with .NET MAUI Applications
## Overview
.NET Multi-platform App UI (MAUI) enables developers to build cross-platform applications for mobile and desktop from a single codebase. VisioForge provides comprehensive support for MAUI applications through the `VisioForge.Core.UI.MAUI` package, which contains specialized UI controls designed specifically for the .NET MAUI platform.
Our SDKs enable powerful multimedia capabilities across all MAUI-supported platforms:
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Player SDK .Net"](https://www.visioforge.com/media-player-sdk-net)
## Getting Started
### Installation
To begin using VisioForge with your MAUI project, install the required NuGet packages:
1. The core UI package: `VisioForge.Core.UI.MAUI`
2. Platform-specific redistributable (detailed in platform sections below)
### SDK Initialization
Proper initialization is essential for the VisioForge SDKs to function correctly within your MAUI application. This process must be completed in your `MauiProgram.cs` file.
```csharp
using SkiaSharp.Views.Maui.Controls.Hosting;
using VisioForge.Core.UI.MAUI;

public static class MauiProgram
{
    public static MauiApp CreateMauiApp()
    {
        var builder = MauiApp.CreateBuilder();
        builder
            .UseMauiApp<App>()
            // Initialize the SkiaSharp package by adding the line below
            .UseSkiaSharp()
            // Initialize the VisioForge MAUI package by adding the line below
            .ConfigureMauiHandlers(handlers => handlers.AddVisioForgeHandlers())
            // After initializing the VisioForge MAUI package, optionally add additional fonts
            .ConfigureFonts(fonts =>
            {
                fonts.AddFont("OpenSans-Regular.ttf", "OpenSansRegular");
                fonts.AddFont("OpenSans-Semibold.ttf", "OpenSansSemibold");
            });

        // Continue initializing your .NET MAUI app here

        return builder.Build();
    }
}
```
## Using VisioForge Controls in XAML
The `VideoView` control is the primary interface for displaying video content in your MAUI application. To use VisioForge controls in your XAML files:
1. Add the VisioForge namespace to your XAML file:
```xaml
xmlns:vf="clr-namespace:VisioForge.Core.UI.MAUI;assembly=VisioForge.Core.UI.MAUI"
```
2. Add the VideoView control to your layout:
```xaml
<!-- Attribute values are illustrative; adjust to your layout -->
<vf:VideoView x:Name="videoView"
              HorizontalOptions="Fill"
              VerticalOptions="Fill" />
```
The VideoView control adapts to the native rendering capabilities of each platform while providing a consistent API for your application code.
## Platform-Specific Configuration
### Android Implementation
Android requires additional configuration steps to ensure proper operation:
#### 1. Add Java Bindings Library
The VisioForge SDK relies on native Android functionality that requires a custom Java bindings library:
1. Clone the binding library from our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/AndroidDependency)
2. Add the appropriate project to your solution:
- Use `VisioForge.Core.Android.X8.csproj` for .NET 8
- Use `VisioForge.Core.Android.X9.csproj` for .NET 9
3. Add the reference to your project file:
```xml
<!-- The path is illustrative; point it at the binding project you added to the solution -->
<ItemGroup>
  <ProjectReference Include="..\AndroidDependency\VisioForge.Core.Android.X8.csproj" />
</ItemGroup>
```
#### 2. Add Android Redistributable Package
Include the Android-specific redistributable package:
```xml
<!-- Package name and version are assumptions; verify the exact redistributable
     in the deployment guide before shipping -->
<ItemGroup>
  <PackageReference Include="VisioForge.CrossPlatform.Core.Android" Version="*" />
</ItemGroup>
```
#### 3. Android Permissions
Ensure your AndroidManifest.xml includes the necessary permissions for camera, microphone, and storage access, depending on your application's functionality. Common required permissions include the following (see the manifest sketch after this list):
- `android.permission.CAMERA`
- `android.permission.RECORD_AUDIO`
- `android.permission.READ_EXTERNAL_STORAGE`
- `android.permission.WRITE_EXTERNAL_STORAGE`
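A corresponding AndroidManifest.xml fragment might look like the following; include only the permissions your application actually needs:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
</manifest>
```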
### iOS Configuration
iOS integration requires fewer steps but has some important considerations:
#### 1. Add iOS Redistributable
Add the iOS-specific package to your project:
```xml
<!-- Package name and version are assumptions; verify against the deployment guide -->
<ItemGroup>
  <PackageReference Include="VisioForge.CrossPlatform.Core.iOS" Version="*" />
</ItemGroup>
```
#### 2. Important Notes for iOS Development
- **Use physical devices**: The SDK requires testing on physical iOS devices rather than simulators for full functionality.
- **Privacy descriptions**: Add the necessary usage description strings in your Info.plist file for camera and microphone access (see the sketch after this list):
- `NSCameraUsageDescription`
- `NSMicrophoneUsageDescription`
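An Info.plist fragment with illustrative description strings (word them for your own application):

```xml
<key>NSCameraUsageDescription</key>
<string>This application uses the camera to capture video.</string>
<key>NSMicrophoneUsageDescription</key>
<string>This application uses the microphone to record audio.</string>
```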
### macOS Configuration
For macOS Catalyst applications:
#### 1. Configure Runtime Identifiers
To ensure your application works correctly on both Intel and Apple Silicon Macs, specify the appropriate runtime identifiers:
```xml
<RuntimeIdentifiers>maccatalyst-x64;maccatalyst-arm64</RuntimeIdentifiers>
```
#### 2. Enable Trimming
For optimal performance on macOS, enable the PublishTrimmed option:
```xml
<PublishTrimmed>true</PublishTrimmed>
```
For more detailed information about macOS deployment, refer to our [macOS](../deployment-x/macOS.md) documentation page.
### Windows Configuration
For Windows applications, you need to include several redistributable packages:
#### 1. Add Base Windows Redistributables
Include the core Windows packages:
```xml
<!-- Package name and version are assumptions; verify against the deployment guide -->
<ItemGroup>
  <PackageReference Include="VisioForge.CrossPlatform.Core.Windows.x64" Version="*" />
</ItemGroup>
```
#### 2. Add Extended Codec Support (Optional but Recommended)
For enhanced media format support, include the libAV (FFMPEG) package:
```xml
<!-- Package name and version are assumptions; verify against the deployment guide -->
<ItemGroup>
  <PackageReference Include="VisioForge.CrossPlatform.Libav.Windows.x64" Version="*" />
</ItemGroup>
```
### Performance Optimization
For optimal performance across platforms:
1. Use hardware acceleration when available
2. Adjust video resolution based on the target device capabilities
3. Consider memory constraints on mobile devices when processing large media files
## Troubleshooting Common Issues
- **Blank video display**: Ensure proper permissions are granted on mobile platforms
- **Missing codecs**: Verify all platform-specific redistributable packages are correctly installed
- **Performance issues**: Check that hardware acceleration is enabled when available
- **Deployment errors**: Confirm runtime identifiers are correctly specified for the target platform
## Conclusion
The VisioForge SDK provides a comprehensive solution for adding powerful multimedia capabilities to your .NET MAUI applications. By following the platform-specific setup instructions and best practices outlined in this guide, you can create rich cross-platform applications with advanced video and audio features.
For additional examples and sample code, visit our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples).
---END OF PAGE---
# Local File: .\dotnet\install\rider.md
---
title: Integrate .Net SDKs into JetBrains Rider | Tutorial
description: Learn how to integrate .Net SDKs with JetBrains Rider in this step-by-step tutorial. From project setup to adding NuGet packages, UI components, and platform dependencies - master cross-platform development with WPF, MAUI, WinUI, and Avalonia integration for Windows, macOS, iOS and Android apps.
sidebar_label: JetBrains Rider
order: 12
---
# .Net SDKs Integration with JetBrains Rider
## Introduction
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Player SDK .Net"](https://www.visioforge.com/media-player-sdk-net)
This comprehensive guide walks you through the process of installing and configuring VisioForge .Net SDKs within JetBrains Rider, a powerful cross-platform IDE for .NET development. While we'll use a Windows application with WPF as our primary example, these installation steps can be readily adapted for macOS, iOS, or Android applications as well. JetBrains Rider provides a consistent development experience across Windows, macOS, and Linux platforms, making it an excellent choice for cross-platform .NET development.
## Creating Your Project
### Setting Up a Modern Project Structure
Begin by launching JetBrains Rider and creating a new project. For this tutorial, we'll use WPF (Windows Presentation Foundation) as our framework. It's crucial to utilize the modern project format, which provides enhanced compatibility with VisioForge SDKs and offers a more streamlined development experience.
1. Open JetBrains Rider
2. Select "Create New Solution" from the welcome screen
3. Choose "WPF Application" from the available templates
4. Configure your project settings, ensuring you select the modern project format
5. Click "Create" to generate your project structure

## Adding Required NuGet Packages
### Installing the Main SDK Package
Each VisioForge SDK has a corresponding main package that provides core functionality. You'll need to select the appropriate package based on which SDK you're working with.
1. Right-click on your project in the Solution Explorer
2. Select the "Manage NuGet Packages" menu item
3. In the NuGet Package Manager, search for the VisioForge package that corresponds to your desired SDK
4. Select the latest stable version and click "Install"

### Available Main SDK Packages
Choose from the following main packages based on your development needs:
- [VisioForge.DotNet.VideoCapture](https://www.nuget.org/packages/VisioForge.DotNet.VideoCapture) - For applications requiring video capture functionality
- [VisioForge.DotNet.VideoEdit](https://www.nuget.org/packages/VisioForge.DotNet.VideoEdit) - For video editing and processing applications
- [VisioForge.DotNet.MediaPlayer](https://www.nuget.org/packages/VisioForge.DotNet.MediaPlayer) - For media playback applications
- [VisioForge.DotNet.MediaBlocks](https://www.nuget.org/packages/VisioForge.DotNet.MediaBlocks) - For applications requiring modular media processing capabilities
### Adding the UI Package (If Needed)
The main SDK package already contains the core UI components for WinForms, WPF, Android, and Apple platforms.
For other platforms, install the UI package that corresponds to your chosen UI framework.
### Available UI Packages
Depending on your target platform and UI framework, choose from these UI packages:
- The core package already includes UI components for WinForms, WPF, Android, and Apple platforms
- [VisioForge.DotNet.Core.UI.WinUI](https://www.nuget.org/packages/VisioForge.DotNet.Core.UI.WinUI) - For Windows applications using the modern WinUI framework
- [VisioForge.DotNet.Core.UI.MAUI](https://www.nuget.org/packages/VisioForge.DotNet.Core.UI.MAUI) - For cross-platform applications using .NET MAUI
- [VisioForge.DotNet.Core.UI.Avalonia](https://www.nuget.org/packages/VisioForge.DotNet.Core.UI.Avalonia) - For cross-platform applications using Avalonia UI
## Integrating VideoView Control (Optional)
### Adding Video Preview Capabilities
If your application requires video preview functionality, you'll need to add the VideoView control to your user interface. This can be accomplished either through XAML markup or programmatically in your code-behind file. Below, we'll demonstrate how to add it via XAML.
#### Step 1: Add the WPF Namespace
First, add the necessary namespace reference to your XAML file:
```xml
xmlns:wpf="clr-namespace:VisioForge.Core.UI.WPF;assembly=VisioForge.Core"
```
#### Step 2: Add the VideoView Control
Then, add the VideoView control to your layout:
```xml
<!-- Attribute values are illustrative -->
<wpf:VideoView x:Name="videoView" Width="640" Height="480" />
```
This control provides a canvas where video content can be displayed in real-time, essential for applications that involve video capture, editing, or playback.
## Adding Required Redistribution Packages
### Platform-Specific Dependencies
Depending on your target platform, chosen product, and the specific engine you're utilizing, additional redistribution packages may be needed to ensure proper functionality across all deployment environments.
For comprehensive information about which redistribution packages are required for your specific scenario, please consult the Deployment documentation page for your selected VisioForge product. These resources provide detailed guidance on:
- Required system dependencies
- Platform-specific considerations
- Deployment optimization strategies
- Runtime requirements
Following these deployment guidelines will ensure your application functions correctly on end-user systems without missing dependencies or runtime errors.
## Additional Resources
For more examples and detailed implementation guides, visit our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples), which contains numerous code samples demonstrating various features and integration scenarios.
Our documentation portal also offers comprehensive API references, detailed tutorials, and best practice guides to help you make the most of VisioForge SDKs in your JetBrains Rider projects.
## Conclusion
By following this installation guide, you've successfully integrated VisioForge .Net SDKs with JetBrains Rider, setting the foundation for developing powerful media applications. The combination of VisioForge's robust media processing capabilities and JetBrains Rider's intelligent development environment provides an ideal platform for creating sophisticated media applications across multiple platforms.
---END OF PAGE---
# Local File: .\dotnet\install\visual-studio-mac.md
---
title: Integrate .NET SDKs with Visual Studio for Mac
description: Learn how to install, configure, and implement .NET SDKs in Visual Studio for Mac for macOS and iOS development. This step-by-step guide covers environment setup, package installation, UI component configuration, and troubleshooting to help you build powerful multimedia applications for Apple platforms.
sidebar_label: Visual Studio for Mac
order: 13
---
# Complete Guide to Integrating VisioForge .NET SDKs with Visual Studio for Mac
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Player SDK .Net"](https://www.visioforge.com/media-player-sdk-net)
## Introduction to VisioForge SDKs on macOS
VisioForge provides powerful multimedia SDKs for .NET developers working on macOS and iOS platforms. This detailed guide will walk you through the entire process of integrating these SDKs into your Visual Studio for Mac projects. While this tutorial primarily focuses on macOS application development, the same principles apply to iOS applications with minimal adaptations.
By following this guide, you'll learn how to properly set up your development environment, install the necessary packages, configure UI components, and prepare your application for deployment. This knowledge will serve as a solid foundation for building sophisticated multimedia applications using VisioForge technology.
## Prerequisites for Development
Before starting the integration process, ensure you have:
- Visual Studio for Mac (latest version recommended)
- .NET SDK installed (minimum version 6.0)
- Basic knowledge of C# and .NET development
- Administrative access to your macOS system
- Active internet connection for NuGet package downloads
- Optional: Xcode for storyboard editing
Having these prerequisites in place will ensure a smooth installation process and prevent common setup issues.
## Setting Up a New macOS Project
Let's begin by creating a new macOS project in Visual Studio for Mac. This will serve as the foundation for our VisioForge SDK integration.
### Creating the Project Structure
1. Launch Visual Studio for Mac.
2. Select **File > New Solution** from the menu bar.
3. In the template selection dialog, navigate to **.NET > App**.
4. Choose **macOS Application** as your project template.
5. Configure your project settings, including:
- Project name (choose something descriptive)
- Organization identifier (typically in reverse domain format)
- Target framework (.NET 6.0 or later recommended)
- Solution name (can match your project name)
6. Click **Create** to generate your project template.
This creates a basic macOS application with the standard project structure required for VisioForge SDK integration.

## Installing VisioForge SDK Packages
After creating your project, the next step is to install the necessary VisioForge SDK packages via NuGet. These packages contain the core functionality and UI components required for multimedia operations.
### Adding the Main SDK Package
Each VisioForge product line has a dedicated main package that contains the core functionality. You'll need to choose the appropriate package based on your development requirements.
1. Right-click on your project in the Solution Explorer.
2. Select **Manage NuGet Packages** from the context menu.
3. Click on the **Browse** tab in the NuGet Package Manager.
4. In the search box, type "VisioForge" to find all available packages.
5. Select one of the following packages based on your requirements:
Available NuGet packages:
- [VisioForge.DotNet.VideoCapture](https://www.nuget.org/packages/VisioForge.DotNet.VideoCapture) - For video capture, webcam, and screen recording functionality
- [VisioForge.DotNet.VideoEdit](https://www.nuget.org/packages/VisioForge.DotNet.VideoEdit) - For video editing, processing, and conversion
- [VisioForge.DotNet.MediaPlayer](https://www.nuget.org/packages/VisioForge.DotNet.MediaPlayer) - For media playback and streaming
- [VisioForge.DotNet.MediaBlocks](https://www.nuget.org/packages/VisioForge.DotNet.MediaBlocks) - For advanced media processing workflows
6. Click **Add Package** to install your selected package.
7. Accept any license agreements that appear.
The installation process will automatically resolve dependencies and add references to your project.

### Adding the Apple UI Package
For macOS and iOS applications, you'll need the Apple-specific UI components that allow VisioForge SDKs to integrate with native UI elements.
1. In the NuGet Package Manager, search for "VisioForge.DotNet.UI.Apple".
2. Select the package from the results list.
3. Click **Add Package** to install.
This package includes specialized controls designed specifically for Apple platforms, ensuring proper visual integration and performance optimization.

## Integrating Video Preview Capabilities
Most multimedia applications require video preview functionality. VisioForge SDKs provide specialized controls for this purpose that integrate seamlessly with macOS applications.
### Adding the VideoView Control
The VideoView control is the primary component for displaying video content in your application. Here's how to add it to your interface:
1. Open your application's main storyboard file by double-clicking it in the Solution Explorer.
2. Visual Studio for Mac will open Xcode Interface Builder for storyboard editing.
3. From the Object Library, find the **Custom View** control.
4. Drag the Custom View control onto your window where you want the video to appear.
5. Set appropriate constraints to ensure proper sizing and positioning.
6. Using the Identity Inspector, set a descriptive name for your Custom View (e.g., "videoViewHost").
7. Save your changes and return to Visual Studio for Mac.
This Custom View will serve as a container for the VisioForge VideoView control, which will be added programmatically.


### Initializing the VideoView in Code
After adding the container Custom View, you need to initialize the VideoView control programmatically:
1. Open your ViewController.cs file.
2. Add the necessary using directives at the top of the file:
```csharp
using VisioForge.Core.UI.Apple;
using CoreGraphics;
using AppKit; // provides NSColor and NSViewResizingMask
```
3. Add a private field to your ViewController class to hold the VideoView reference:
```csharp
private VideoViewGL _videoView;
```
4. Modify the ViewDidLoad method to initialize and add the VideoView:
```csharp
public override void ViewDidLoad()
{
    base.ViewDidLoad();

    // Create the VideoView sized to its host view and add it as a subview
    _videoView = new VideoViewGL(new CGRect(0, 0, videoViewHost.Bounds.Width, videoViewHost.Bounds.Height));
    this.videoViewHost.AddSubview(_videoView);

    // Configure VideoView properties
    _videoView.AutoresizingMask = NSViewResizingMask.WidthSizable | NSViewResizingMask.HeightSizable;
    _videoView.BackgroundColor = NSColor.Black;

    // Additional initialization code
    InitializeMediaComponents();
}

private void InitializeMediaComponents()
{
    // Initialize your VisioForge SDK components here
    // For example, for MediaPlayer:
    // var player = new MediaPlayer();
    // player.VideoView = _videoView;
    // Additional configuration...
}
```
This code creates a new VideoViewGL instance (optimized for hardware acceleration), sizes it to match your container view, and adds it as a subview. The AutoresizingMask property ensures that the video view resizes properly when the window size changes.
## Adding Required Redistribution Packages
VisioForge SDKs rely on various native libraries and components that must be included in your application bundle. These dependencies vary based on the specific SDK you're using and your target platform.
Check the [deployment documentation](../deployment-x/index.md) for detailed information on which redistribution packages are required for your specific scenario.
## Troubleshooting Common Issues
If you encounter issues during installation or integration, consider these common solutions:
1. **Missing dependencies**: Ensure all required redistribution packages are installed
2. **Build errors**: Verify that your project targets a compatible .NET version
3. **Runtime crashes**: Check for platform-specific initialization issues
4. **Black video display**: Verify that the VideoView is properly initialized and added to the view hierarchy
5. **Performance issues**: Consider enabling hardware acceleration where available
For more specific troubleshooting guidance, refer to the VisioForge documentation or contact their support team.
## Next Steps and Resources
Now that you've successfully integrated VisioForge SDKs into your Visual Studio for Mac project, you can explore more advanced features and capabilities:
- Create custom video processing workflows
- Implement recording and capture functionality
- Develop sophisticated media editing features
- Build streaming media applications
### Additional Resources
- Visit our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples) for code samples and example projects
- Join the [developer forum](https://support.visioforge.com/) to connect with other developers
- Subscribe to our newsletter for updates on new features and best practices
By following this guide, you've established a solid foundation for developing powerful multimedia applications on macOS and iOS using VisioForge SDKs and Visual Studio for Mac.
---END OF PAGE---
# Local File: .\dotnet\install\visual-studio.md
---
title: Integrating .NET SDKs with Visual Studio
description: Learn how to properly install and configure multimedia .NET SDKs in Microsoft Visual Studio with this detailed step-by-step guide. Covers NuGet package installation, manual setup methods, UI framework integration, and best practices for professional video capture and editing applications.
sidebar_label: Visual Studio
order: 14
---
# Comprehensive Guide to Integrating .NET SDKs with Visual Studio
[!badge size="xl" target="blank" variant="info" text="Video Capture SDK .Net"](https://www.visioforge.com/video-capture-sdk-net) [!badge size="xl" target="blank" variant="info" text="Video Edit SDK .Net"](https://www.visioforge.com/video-edit-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net) [!badge size="xl" target="blank" variant="info" text="Media Player SDK .Net"](https://www.visioforge.com/media-player-sdk-net)
## Introduction to VisioForge .NET SDKs
VisioForge offers a powerful suite of multimedia SDKs for .NET developers, enabling you to build feature-rich applications with advanced video capture, editing, playback, and media processing capabilities. This comprehensive guide will walk you through the process of integrating these SDKs into your Visual Studio projects, ensuring a smooth development experience.
For professional developers working on multimedia applications, properly integrating these SDKs is crucial for optimal performance and functionality. Our recommended approach is to use NuGet packages, which simplifies dependency management and ensures you're always using the latest features and bug fixes.
## Installation Methods Overview
There are two primary methods to install VisioForge .NET SDKs:
1. **NuGet Package Installation** (Recommended): The modern, streamlined approach that handles dependencies automatically and simplifies updates.
2. **Manual Installation**: A traditional approach for specialized scenarios, though generally not recommended for most projects.
We'll cover both methods in detail, but strongly encourage the NuGet approach for most development scenarios.
## NuGet Package Installation (Recommended Method)
NuGet is the package manager for .NET, providing a centralized way to incorporate libraries into your projects without the hassle of manual file management. Here's a detailed walkthrough of integrating VisioForge SDKs using NuGet.
### Step 1: Create or Open Your .NET Project
First, you'll need a WinForms, WPF, or other .NET project. We recommend using the modern SDK-style project format for optimal compatibility.
#### Creating a New Project
1. Launch Visual Studio (2019 or 2022 recommended)
2. Select "Create a new project"
3. Filter templates by "C#" and either "WPF" or "Windows Forms"
4. Choose "WPF Application" or "Windows Forms Application" with the .NET Core/5/6+ framework
5. Ensure you select the modern SDK-style project format (this is the default in newer Visual Studio versions)

#### Configuring the Project
After creating a new project, you'll need to configure basic settings:
1. Enter your project name (use a descriptive name relevant to your application)
2. Choose an appropriate location and solution name
3. Select your target framework (.NET 6 or newer recommended for best performance and features)
4. Click "Create" to generate the project structure

### Step 2: Access NuGet Package Manager
Once your project is open in Visual Studio:
1. Right-click on your project in Solution Explorer
2. Select "Manage NuGet Packages..." from the context menu
3. The NuGet Package Manager will open in the center pane
This interface provides search functionality and package browsing to easily find and install the VisioForge components you need.

### Step 3: Install the UI Package for Your Framework
VisioForge SDKs offer specialized UI components for different .NET frameworks. You'll need to select the appropriate UI package based on your project type.
1. In the NuGet Package Manager, switch to the "Browse" tab
2. Search for "VisioForge.DotNet.Core.UI"
3. Select the appropriate UI package for your project type from the search results

#### Available UI Packages
VisioForge supports a wide range of UI frameworks. Choose the one that matches your project:
- **[VisioForge.DotNet.Core.UI.WinUI](https://www.nuget.org/packages/VisioForge.DotNet.Core.UI.WinUI)**: For modern Windows UI applications
- **[VisioForge.DotNet.Core.UI.MAUI](https://www.nuget.org/packages/VisioForge.DotNet.Core.UI.MAUI)**: For cross-platform applications using .NET MAUI
- **[VisioForge.DotNet.Core.UI.Avalonia](https://www.nuget.org/packages/VisioForge.DotNet.Core.UI.Avalonia)**: For cross-platform desktop applications using Avalonia UI
These UI packages provide the necessary controls and components specifically designed for video rendering and interaction within your chosen framework.
### Step 4: Install the Core SDK Package
After installing the UI package, you'll need to add the main SDK package for your specific multimedia needs:
1. Return to the NuGet Package Manager "Browse" tab
2. Search for the specific VisioForge SDK you need (e.g., "VisioForge.DotNet.VideoCapture")
3. Click "Install" on the appropriate package

#### Available Core SDK Packages
Choose the SDK that aligns with your application's requirements:
- **[VisioForge.DotNet.VideoCapture](https://www.nuget.org/packages/VisioForge.DotNet.VideoCapture)**: For applications that need to capture video from cameras, screen recording, or other sources
- **[VisioForge.DotNet.VideoEdit](https://www.nuget.org/packages/VisioForge.DotNet.VideoEdit)**: For video editing, processing, and conversion applications
- **[VisioForge.DotNet.MediaPlayer](https://www.nuget.org/packages/VisioForge.DotNet.MediaPlayer)**: For creating media players with advanced playback controls
- **[VisioForge.DotNet.MediaBlocks](https://www.nuget.org/packages/VisioForge.DotNet.MediaBlocks)**: For building complex media processing pipelines
Each package includes comprehensive documentation, and you can install multiple packages if your application requires different multimedia capabilities.
### Step 5: Implementing the VideoView Control (Optional)
The VideoView control is crucial for applications that need to display video content. You can add it to your UI using XAML (for WPF) or through the designer (for WinForms).
#### For WPF Applications
Add the required namespace to your XAML file:
```xml
xmlns:wpf="clr-namespace:VisioForge.Core.UI.WPF;assembly=VisioForge.Core"
```
Then add the VideoView control to your layout:
```xml
<!-- Attribute values are illustrative -->
<wpf:VideoView x:Name="videoView" Width="640" Height="480" />
```

The VideoView control will appear in your designer:

#### For WinForms Applications
1. Open the form in designer mode
2. Locate the VisioForge controls in the toolbox (if they don't appear, right-click the toolbox and select "Choose Items...")
3. Drag and drop the VideoView control onto your form
4. Adjust the size and position properties as needed
### Step 6: Install Required Redistribution Packages
Depending on your specific implementation, you may need additional redistribution packages:
1. Return to the NuGet Package Manager
2. Search for "VisioForge.DotNet.Redist" to see available redistribution packages
3. Install the ones relevant to your platform and SDK choice

The required redistribution packages vary based on:
- Target operating system (Windows, macOS, Linux)
- Hardware acceleration requirements
- Specific codecs and formats your application will use
- Backend engine configuration
Consult the specific Deployment documentation page for your selected product to determine which redistribution packages are necessary for your application.
## Manual Installation (Alternative Method)
While we generally don't recommend manual installation due to its complexity and potential for configuration issues, there are specific scenarios where it might be necessary. Follow these steps if NuGet isn't an option for your project:
1. Download the [complete SDK installer](https://files.visioforge.com/trials/visioforge_sdks_installer_dotnet_setup.exe) from our website
2. Run the installer with administrator privileges and follow the on-screen instructions
3. Create your WinForms or WPF project in Visual Studio
4. Add references to the installed SDK libraries:
- Right-click "References" in Solution Explorer
- Select "Add Reference"
- Navigate to the installed SDK location
- Select the required DLL files
5. Configure the Visual Studio Toolbox:
- Right-click the Toolbox and select "Add Tab"
- Name the new tab "VisioForge"
- Right-click the tab and select "Choose Items..."
- Browse to the SDK installation directory
- Select `VisioForge.Core.dll`
6. Drag and drop the VideoView control onto your form or window
This manual approach requires additional configuration for deployment and updates must be managed manually.
## Advanced Configuration and Best Practices
For production applications, consider these additional implementation details:
- **License Management**: Implement proper license validation at application startup
- **Error Handling**: Add comprehensive error handling around SDK initialization and operation
- **Performance Optimization**: Configure hardware acceleration and threading based on your target devices
- **Resource Management**: Implement proper disposal of SDK resources to prevent memory leaks
## Troubleshooting Common Issues
If you encounter problems during installation or implementation:
- Verify your project targets a supported .NET version
- Ensure all required redistributable packages are installed
- Check for NuGet package version compatibility
- Review the SDK documentation for platform-specific requirements
## Conclusion and Next Steps
With the VisioForge .NET SDKs properly installed in your Visual Studio project, you're now ready to leverage their powerful multimedia capabilities. The NuGet installation method ensures you have the correct dependencies and simplifies future updates.
To deepen your understanding and maximize the potential of these SDKs:
- Explore our [comprehensive code samples on GitHub](https://github.com/visioforge/.Net-SDK-s-samples)
- Review the product-specific documentation for advanced features
- Join our developer community forums for support and best practices
By following this guide, you've established a solid foundation for developing sophisticated multimedia applications with VisioForge and Visual Studio.
---END OF PAGE---
# Local File: .\dotnet\mediablocks\index.md
---
title: Media Blocks SDK for .NET Integration Guide
description: Learn how to leverage Media Blocks SDK for .NET to build powerful multimedia applications. Discover how to play, edit, and capture video content with our modular SDK designed for developers. Explore our extensive guide to video encoding, processing, and rendering features.
sidebar_label: Media Blocks SDK .Net
order: 14
---
# Media Blocks SDK for .NET Development Platform
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## What is Media Blocks SDK?
Media Blocks SDK for .NET empowers developers to engineer sophisticated multimedia applications with precision and flexibility. This powerful toolkit provides everything needed to implement professional-grade video playback, non-linear editing systems, and multi-source camera capture solutions.
The modular architecture allows developers to select and combine only the specific components required for each project, optimizing both performance and resource usage in your applications.
## Why Choose Media Blocks for Your Project?
Our component-based approach gives you granular control over your media pipeline. Each specialized block handles a distinct function within the multimedia processing chain:
- High-performance H264/H265 video encoding
- Professional-grade logo and watermark insertion
- Multi-stream mixing and composition
- Hardware-accelerated video rendering
- Cross-platform compatibility
This modular design enables you to construct precisely the multimedia processing workflow your application requires, without unnecessary overhead.
[Get Started with Media Blocks SDK](GettingStarted/index.md)
## Core SDK Components and Capabilities
### Audio Processing Components
- [Audio Encoders](AudioEncoders/index.md) - Convert raw audio streams to AAC, MP3, and other compressed formats with customizable quality settings
- [Audio Processing](AudioProcessing/index.md) - Apply dynamic filters, enhance sound quality, and manipulate audio characteristics in real-time
- [Audio Rendering](AudioRendering/index.md) - Output processed audio to physical devices with precise timing and synchronization
### Video Processing Components
- [Video Encoders](VideoEncoders/index.md) - Generate optimized video streams with support for multiple codecs and container formats
- [Video Processing](VideoProcessing/index.md) - Transform, filter and enhance video content with effects, color correction, and image adjustments
- [Video Rendering](VideoRendering/index.md) - Display video content across different output technologies with hardware acceleration
- [Live Video Compositor](LiveVideoCompositor/index.md) - Combine multiple video sources in real-time with transitions and effects
### Input/Output System Components
- [Bridges](Bridge/index.md) - Connect and synchronize different component types within your processing pipeline
- [Decklink](Decklink/index.md) - Integrate with professional Blackmagic Design video capture and playback hardware
- [Sinks](Sinks/index.md) - Direct processed media to files, streams, network destinations, and other output targets
- [Sources](Sources/index.md) - Ingest media from cameras, files, network streams, and other input devices
- [Special](Special/index.md) - Implement specialized functionality with our extended component collection
## Essential Developer Resources
- [Deployment Guide](../deployment-x/index.md)
- [Changelog](../changelog.md)
- [End User License Agreement](../../eula.md)
- [API Documentation](https://api.visioforge.com/dotnet/api/index.html)
## Technical Support and Community
Our dedicated development team provides responsive support to ensure your success with Media Blocks SDK. Join our active developer community to exchange implementation strategies, optimization techniques, and custom solutions.
---END OF PAGE---
# Local File: .\dotnet\mediablocks\AudioEncoders\index.md
---
title: Audio Encoders for .NET Media Processing
description: Comprehensive guide to audio compression formats including AAC, MP3, FLAC, and more with VisioForge Media Blocks SDK for .NET. Learn implementation with code examples.
sidebar_label: Audio Encoders
order: 19
---
# Audio encoders blocks
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
Audio encoding is the process of converting raw audio data into a compressed format. This process is essential for reducing the size of audio files, making them easier to store and stream over the internet. VisioForge Media Blocks SDK provides a wide range of audio encoders that support various formats and codecs.
## Availability checks
Before using any encoder, you should check if it's available on the current platform. Each encoder block provides a static `IsAvailable()` method for this purpose:
```csharp
// For most encoders
if (EncoderBlock.IsAvailable())
{
    // Use the encoder
}

// For AAC encoder which requires passing settings
if (AACEncoderBlock.IsAvailable(settings))
{
    // Use the AAC encoder
}
```
This check is important because not all encoders are available on all platforms. Always perform this check before attempting to use an encoder to avoid runtime errors.
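For example, a minimal selection sketch that prefers AAC and falls back to MP3 when AAC is unavailable (the `MediaBlock` base type for the local variable is an assumption made for illustration):

```csharp
// Prefer AAC; fall back to MP3 if AAC is not available on this platform.
var aacSettings = AACEncoderBlock.GetDefaultSettings();

MediaBlock audioEncoder; // base type assumed for illustration
if (AACEncoderBlock.IsAvailable(aacSettings))
{
    audioEncoder = new AACEncoderBlock(aacSettings);
}
else if (MP3EncoderBlock.IsAvailable())
{
    audioEncoder = new MP3EncoderBlock(new MP3EncoderSettings() { Bitrate = 192 });
}
else
{
    throw new NotSupportedException("No suitable audio encoder is available on this platform.");
}
```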
## AAC encoder
`AAC (Advanced Audio Coding)`: A lossy compression format known for its efficiency and superior sound quality compared to MP3, widely used in digital music and broadcasting.
AAC encoder is used for encoding files in MP4, MKV, M4A and some other formats, as well as for network streaming using RTSP and HLS.
Use the `AACEncoderSettings` class to set the parameters.
### Block info
Name: AACEncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | PCM/IEEE | 1
Output | AAC | 1
### Constructor options
```csharp
// Constructor with custom settings
public AACEncoderBlock(IAACEncoderSettings settings)
// Constructor without parameters (uses default settings)
public AACEncoderBlock() // Uses GetDefaultSettings() internally
```
### Settings
The `AACEncoderBlock` works with any implementation of the `IAACEncoderSettings` interface. Different implementations are available depending on the platform:
- `AVENCAACEncoderSettings` - Available on Windows and macOS/Linux (preferred when available)
- `MFAACEncoderSettings` - Windows Media Foundation implementation (Windows only)
- `VOAACEncoderSettings` - Used on Android and iOS
You can use the static `GetDefaultSettings()` method to get the optimal encoder settings for the current platform:
```csharp
var settings = AACEncoderBlock.GetDefaultSettings();
```
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->AACEncoderBlock;
AACEncoderBlock-->MP4SinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var aacEncoderBlock = new AACEncoderBlock(new MFAACEncoderSettings() { Bitrate = 192 });
pipeline.Connect(fileSource.AudioOutput, aacEncoderBlock.Input);
var m4aSinkBlock = new MP4SinkBlock(new MP4SinkSettings(@"output.m4a"));
pipeline.Connect(aacEncoderBlock.Output, m4aSinkBlock.CreateNewInput(MediaBlockPadMediaType.Audio));
await pipeline.StartAsync();
```
## ADPCM encoder
`ADPCM (Adaptive Differential Pulse Code Modulation)`: A type of audio compression that reduces the bit rate required for audio storage and transmission while maintaining audio quality through adaptive prediction.
ADPCM encoder is used for embedding audio streams in DV, WAV and AVI formats.
Use the `ADPCMEncoderSettings` class to set the parameters.
### Block info
Name: ADPCMEncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | PCM/IEEE | 1
Output | ADPCM | 1
### Constructor options
```csharp
// Constructor with block align parameter
public ADPCMEncoderBlock(int blockAlign = 1024)
```
The `blockAlign` parameter defines the block alignment in bytes. The default value is 1024.
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->ADPCMEncoderBlock;
ADPCMEncoderBlock-->WAVSinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var adpcmEncoderBlock = new ADPCMEncoderBlock(new ADPCMEncoderSettings());
pipeline.Connect(fileSource.AudioOutput, adpcmEncoderBlock.Input);
var wavSinkBlock = new WAVSinkBlock(@"output.wav");
pipeline.Connect(adpcmEncoderBlock.Output, wavSinkBlock.Input);
await pipeline.StartAsync();
```
## ALAW encoder
`ALAW (A-law algorithm)`: A standard companding algorithm used in digital communications systems to optimize the dynamic range of an analog signal for digitizing.
ALAW encoder is used for embedding audio streams in WAV format or transmitting over IP.
Use the `ALAWEncoderSettings` class to set the parameters.
### Block info
Name: ALAWEncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | PCM/IEEE | 1
Output | ALAW | 1
### Constructor options
```csharp
// Default constructor
public ALAWEncoderBlock()
```
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->ALAWEncoderBlock;
ALAWEncoderBlock-->WAVSinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var alawEncoderBlock = new ALAWEncoderBlock(new ALAWEncoderSettings());
pipeline.Connect(fileSource.AudioOutput, alawEncoderBlock.Input);
var wavSinkBlock = new WAVSinkBlock(@"output.wav");
pipeline.Connect(alawEncoderBlock.Output, wavSinkBlock.Input);
await pipeline.StartAsync();
```
## FLAC encoder
`FLAC (Free Lossless Audio Codec)`: A lossless audio compression format that preserves audio quality while significantly reducing file size compared to uncompressed formats like WAV.
FLAC encoder is used for encoding audio in FLAC format.
Use the `FLACEncoderSettings` class to set the parameters.
### Block info
Name: FLACEncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | PCM/IEEE | 1
Output | FLAC | 1
### Constructor options
```csharp
// Constructor with settings
public FLACEncoderBlock(FLACEncoderSettings settings)
```
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->FLACEncoderBlock;
FLACEncoderBlock-->FileSinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var flacEncoderBlock = new FLACEncoderBlock(new FLACEncoderSettings());
pipeline.Connect(fileSource.AudioOutput, flacEncoderBlock.Input);
var fileSinkBlock = new FileSinkBlock(@"output.flac");
pipeline.Connect(flacEncoderBlock.Output, fileSinkBlock.Input);
await pipeline.StartAsync();
```
## MP2 encoder
`MP2 (MPEG-1 Audio Layer II)`: An older audio compression format that preceded MP3, still used in some broadcasting applications due to its efficiency at specific bitrates.
MP2 encoder is used for transmitting over IP or embedding to AVI/MPEG-2 formats.
Use the `MP2EncoderSettings` class to set the parameters.
### Block info
Name: MP2EncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | PCM/IEEE | 1
Output | audio/mpeg | 1
### Constructor options
```csharp
// Constructor with settings
public MP2EncoderBlock(MP2EncoderSettings settings)
```
The `MP2EncoderSettings` class allows you to configure parameters such as:
- Bitrate (default: 192 kbps)
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->MP2EncoderBlock;
MP2EncoderBlock-->FileSinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var mp2EncoderBlock = new MP2EncoderBlock(new MP2EncoderSettings() { Bitrate = 192 });
pipeline.Connect(fileSource.AudioOutput, mp2EncoderBlock.Input);
var fileSinkBlock = new FileSinkBlock(@"output.mp2");
pipeline.Connect(mp2EncoderBlock.Output, fileSinkBlock.Input);
await pipeline.StartAsync();
```
## MP3 encoder
`MP3 (MPEG Audio Layer III)`: A popular lossy audio format that revolutionized digital music distribution by compressing files while retaining a reasonable sound quality.
An MP3 encoder can convert audio streams into MP3 files or embed MP3 audio streams in formats like AVI, MKV, and others.
Use the `MP3EncoderSettings` class to set the parameters.
### Block info
Name: MP3EncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | PCM/IEEE | 1
Output | audio/mpeg | 1
### Constructor options
```csharp
// Constructor with settings and optional parser flag
public MP3EncoderBlock(MP3EncoderSettings settings, bool addParser = false)
```
The `addParser` parameter is used to add a parser to the output stream, which is required for certain streaming applications like RTMP (YouTube/Facebook) streaming.
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->MP3EncoderBlock;
MP3EncoderBlock-->FileSinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var mp3EncoderBlock = new MP3EncoderBlock(new MP3EncoderSettings() { Bitrate = 192 });
pipeline.Connect(fileSource.AudioOutput, mp3EncoderBlock.Input);
var fileSinkBlock = new FileSinkBlock(@"output.mp3");
pipeline.Connect(mp3EncoderBlock.Output, fileSinkBlock.Input);
await pipeline.StartAsync();
```
### Streaming to RTMP example
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
// Add parser is set to true for RTMP streaming
var mp3EncoderBlock = new MP3EncoderBlock(new MP3EncoderSettings() { Bitrate = 192 }, addParser: true);
pipeline.Connect(fileSource.AudioOutput, mp3EncoderBlock.Input);
// Connect to RTMP sink
var rtmpSink = new RTMPSinkBlock(new RTMPSinkSettings("rtmp://streaming-server/live/stream"));
pipeline.Connect(mp3EncoderBlock.Output, rtmpSink.CreateNewInput(MediaBlockPadMediaType.Audio));
await pipeline.StartAsync();
```
## OPUS encoder
`OPUS`: A highly efficient lossy audio compression format designed for the internet with low latency and high audio quality, making it ideal for real-time applications like WebRTC.
OPUS encoder is used for embedding audio streams in WebM or OGG formats.
Use the `OPUSEncoderSettings` class to set the parameters.
### Block info
Name: OPUSEncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | PCM/IEEE | 1
Output | OPUS | 1
### Constructor options
```csharp
// Constructor with settings
public OPUSEncoderBlock(OPUSEncoderSettings settings)
```
The `OPUSEncoderSettings` class allows you to configure parameters such as:
- Bitrate (default: 128 kbps)
- Audio bandwidth
- Frame size and other encoding parameters
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->OPUSEncoderBlock;
OPUSEncoderBlock-->WebMSinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var opusEncoderBlock = new OPUSEncoderBlock(new OPUSEncoderSettings() { Bitrate = 192 });
pipeline.Connect(fileSource.AudioOutput, opusEncoderBlock.Input);
var webmSinkBlock = new WebMSinkBlock(new WebMSinkSettings(@"output.webm"));
pipeline.Connect(opusEncoderBlock.Output, webmSinkBlock.CreateNewInput(MediaBlockPadMediaType.Audio));
await pipeline.StartAsync();
```
## Speex encoder
`Speex`: A patent-free audio compression format designed specifically for speech, offering high compression rates while maintaining clarity for voice recordings.
Speex encoder is used for embedding audio streams in OGG format.
Use the `SpeexEncoderSettings` class to set the parameters.
### Block info
Name: SpeexEncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | PCM/IEEE | 1
Output | Speex | 1
### Constructor options
```csharp
// Constructor with settings
public SpeexEncoderBlock(SpeexEncoderSettings settings)
```
The `SpeexEncoderSettings` class allows you to configure parameters such as:
- Mode (SpeexMode): NarrowBand, WideBand, UltraWideBand
- Quality
- Complexity
- VAD (Voice Activity Detection)
- DTX (Discontinuous Transmission)
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->SpeexEncoderBlock;
SpeexEncoderBlock-->OGGSinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var speexEncoderBlock = new SpeexEncoderBlock(new SpeexEncoderSettings() { Mode = SpeexMode.NarrowBand });
pipeline.Connect(fileSource.AudioOutput, speexEncoderBlock.Input);
var oggSinkBlock = new OGGSinkBlock(@"output.ogg");
pipeline.Connect(speexEncoderBlock.Output, oggSinkBlock.Input);
await pipeline.StartAsync();
```
## Vorbis encoder
`Vorbis`: An open-source, lossy audio compression format designed as a free alternative to MP3, often used within the OGG container format.
Vorbis encoder is used for embedding audio streams in OGG or WebM formats.
Use the `VorbisEncoderSettings` class to set the parameters.
### Block info
Name: VorbisEncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | PCM/IEEE | 1
Output | Vorbis | 1
### Constructor options
```csharp
// Constructor with settings
public VorbisEncoderBlock(VorbisEncoderSettings settings)
```
The `VorbisEncoderSettings` class allows you to configure parameters such as:
- BaseQuality: A float value between 0.0 and 1.0 that determines the quality of the encoded audio
- Bitrate: Alternative bitrate-based configuration
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->VorbisEncoderBlock;
VorbisEncoderBlock-->OGGSinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var vorbisEncoderBlock = new VorbisEncoderBlock(new VorbisEncoderSettings() { BaseQuality = 0.5f });
pipeline.Connect(fileSource.AudioOutput, vorbisEncoderBlock.Input);
var oggSinkBlock = new OGGSinkBlock(@"output.ogg");
pipeline.Connect(vorbisEncoderBlock.Output, oggSinkBlock.Input);
await pipeline.StartAsync();
```
## WAV encoder
`WAV (Waveform Audio File Format)`: An uncompressed audio format that preserves audio quality but results in larger file sizes compared to compressed formats.
WAV encoder is used for encoding audio into WAV format.
Use the `WAVEncoderSettings` class to set the parameters.
### Block info
Name: WAVEncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | PCM/IEEE | 1
Output | WAV | 1
### Constructor options
```csharp
// Constructor with settings
public WAVEncoderBlock(WAVEncoderSettings settings)
```
The `WAVEncoderSettings` class allows you to configure various parameters for the WAV format.
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->WAVEncoderBlock;
WAVEncoderBlock-->FileSinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var wavEncoderBlock = new WAVEncoderBlock(new WAVEncoderSettings());
pipeline.Connect(fileSource.AudioOutput, wavEncoderBlock.Input);
var fileSinkBlock = new FileSinkBlock(@"output.wav");
pipeline.Connect(wavEncoderBlock.Output, fileSinkBlock.Input);
await pipeline.StartAsync();
```
## WavPack encoder
`WavPack`: A free and open-source lossless audio compression format that offers high compression rates while maintaining excellent audio quality, supporting hybrid lossy/lossless modes.
WavPack encoder is used for encoding audio in WavPack format, which is ideal for archiving audio with perfect fidelity.
Use the `WavPackEncoderSettings` class to set the parameters.
### Block info
Name: WavPackEncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | PCM/IEEE | 1
Output | WavPack | 1
### Constructor options
```csharp
// Constructor with settings
public WavPackEncoderBlock(WavPackEncoderSettings settings)
```
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->WavPackEncoderBlock;
WavPackEncoderBlock-->FileSinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var wavpackEncoderBlock = new WavPackEncoderBlock(new WavPackEncoderSettings());
pipeline.Connect(fileSource.AudioOutput, wavpackEncoderBlock.Input);
var fileSinkBlock = new FileSinkBlock(@"output.wv");
pipeline.Connect(wavpackEncoderBlock.Output, fileSinkBlock.Input);
await pipeline.StartAsync();
```
## WMA encoder
`WMA (Windows Media Audio)`: A proprietary audio compression format developed by Microsoft, offering various compression levels and features for different audio applications.
The WMA encoder is used for encoding audio in the WMA format.
Use the `WMAEncoderSettings` class to set the parameters.
### Block info
Name: WMAEncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | PCM/IEEE | 1
Output | WMA | 1
### Constructor options
```csharp
// Constructor with settings
public WMAEncoderBlock(WMAEncoderSettings settings)
```
The `WMAEncoderSettings` class allows you to configure parameters such as:
- Bitrate (default: 128 kbps)
- Quality settings
- VBR (Variable Bit Rate) options
### Default settings
You can use the static method to get default settings:
```csharp
var settings = WMAEncoderBlock.GetDefaultSettings();
```
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->WMAEncoderBlock;
WMAEncoderBlock-->ASFSinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var wmaEncoderBlock = new WMAEncoderBlock(new WMAEncoderSettings() { Bitrate = 192 });
pipeline.Connect(fileSource.AudioOutput, wmaEncoderBlock.Input);
var asfSinkBlock = new ASFSinkBlock(@"output.wma");
pipeline.Connect(wmaEncoderBlock.Output, asfSinkBlock.CreateNewInput(MediaBlockPadMediaType.Audio));
await pipeline.StartAsync();
```
## Resource management
All encoder blocks implement `IDisposable` and have internal cleanup mechanisms. It's recommended to properly dispose of them when they're no longer needed:
```csharp
// Option 1: using statement (automatic disposal)
using (var encoder = new MP3EncoderBlock(settings))
{
    // Use encoder
}

// Option 2: manual disposal (a distinct variable name avoids a scope conflict with Option 1)
var encoder2 = new MP3EncoderBlock(settings);
try
{
    // Use encoder2
}
finally
{
    encoder2.Dispose();
}
```
## Platforms
Windows, macOS, Linux, iOS, Android.
Note that not all encoders are available on all platforms. Always use the `IsAvailable()` method to check for availability before using an encoder.
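For example, a minimal sketch of such a check, assuming `IsAvailable()` is exposed as a static method on the encoder block, as it is on other blocks shown in this documentation:
```csharp
// Guard encoder creation with an availability check before building the pipeline
if (!WMAEncoderBlock.IsAvailable())
{
    Console.WriteLine("WMA encoder is not available on this platform.");
    return;
}

var wmaEncoderBlock = new WMAEncoderBlock(WMAEncoderBlock.GetDefaultSettings());
```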
---END OF PAGE---
# Local File: .\dotnet\mediablocks\AudioProcessing\index.md
---
title: .Net Audio Processing & Effect Blocks
description: Explore a comprehensive set of .NET audio processing and effect blocks for building powerful audio pipelines. Includes converters, resamplers, mixers, EQs, and more.
sidebar_label: Audio Processing and Effects
---
# Audio processing and effect blocks
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
VisioForge Media Blocks SDK .Net includes a set of audio processing and effect blocks that allow you to create audio processing pipelines for your applications.
The blocks can be connected to each other to create a processing pipeline.
Most of the blocks are available for all platforms, including Windows, Linux, macOS, Android, and iOS.
## Basic Audio Processing
### Audio Converter
The audio converter block converts audio from one format to another.
#### Block info
Name: AudioConverterBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Uncompressed audio | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->AudioConverterBlock;
AudioConverterBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var audioConverter = new AudioConverterBlock();
pipeline.Connect(fileSource.AudioOutput, audioConverter.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(audioConverter.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Audio Resampler
The audio resampler block changes the sample rate of an audio stream.
#### Block info
Name: AudioResamplerBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Uncompressed audio | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->AudioResamplerBlock;
AudioResamplerBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
// Resample to 48000 Hz, stereo
var settings = new AudioResamplerSettings(AudioFormatX.S16LE, 48000, 2);
var audioResampler = new AudioResamplerBlock(settings);
pipeline.Connect(fileSource.AudioOutput, audioResampler.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(audioResampler.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Audio Timestamp Corrector
The audio timestamp corrector block can add or remove frames to correct the timestamps of an input stream coming from unstable sources.
#### Block info
Name: AudioTimestampCorrectorBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Uncompressed audio | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->AudioTimestampCorrectorBlock;
AudioTimestampCorrectorBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var settings = new AudioTimestampCorrectorSettings();
var corrector = new AudioTimestampCorrectorBlock(settings);
pipeline.Connect(fileSource.AudioOutput, corrector.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(corrector.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Volume
The volume block allows you to control the volume of the audio stream.
#### Block info
Name: VolumeBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Uncompressed audio | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->VolumeBlock;
VolumeBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
// Volume: 0.0 (silence) to 1.0 (normal) or higher (amplification)
var volume = new VolumeBlock(0.8);
pipeline.Connect(fileSource.AudioOutput, volume.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(volume.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Audio mixer
The audio mixer block mixes multiple audio streams into one. It mixes streams regardless of their format, converting as necessary.
All input streams will be synchronized. The mixer block handles the conversion of different input audio formats to a common format for mixing. By default, it will try to match the format of the first connected input, but this can be explicitly configured.
Use the `AudioMixerSettings` class to set the custom output format. This is useful if you need a specific sample rate, channel layout, or audio format (like S16LE, Float32LE, etc.) for the mixed output.
#### Block info
Name: AudioMixerBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | Multiple (dynamically created)
Output | Uncompressed audio | 1
#### The sample pipeline
```mermaid
graph LR;
VirtualAudioSourceBlock#1-->AudioMixerBlock;
VirtualAudioSourceBlock#2-->AudioMixerBlock;
AudioMixerBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var audioSource1Block = new VirtualAudioSourceBlock(new VirtualAudioSourceSettings());
var audioSource2Block = new VirtualAudioSourceBlock(new VirtualAudioSourceSettings());
// Configure the mixer with specific output settings if needed
// For example, to output 48kHz, 2-channel, S16LE audio:
// var mixerSettings = new AudioMixerSettings() { Format = new AudioInfoX(AudioFormatX.S16LE, 48000, 2) };
// var audioMixerBlock = new AudioMixerBlock(mixerSettings);
var audioMixerBlock = new AudioMixerBlock(new AudioMixerSettings());
// Each call to CreateNewInput() adds a new input to the mixer
var inputPad1 = audioMixerBlock.CreateNewInput();
pipeline.Connect(audioSource1Block.Output, inputPad1);
var inputPad2 = audioMixerBlock.CreateNewInput();
pipeline.Connect(audioSource2Block.Output, inputPad2);
// Output the mixed audio to the default audio renderer
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(audioMixerBlock.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Controlling Individual Input Streams
You can control the volume and mute state of individual input streams connected to the `AudioMixerBlock`.
The `streamIndex` for these methods corresponds to the order in which the inputs were added via `CreateNewInput()` or `CreateNewInputLive()` (starting from 0).
* **Set Volume**: Use the `SetVolume(int streamIndex, double value)` method. The `value` ranges from 0.0 (silence) to 1.0 (normal volume), and can be higher for amplification (e.g., up to 10.0, though specifics might depend on the underlying implementation limits).
* **Set Mute**: Use the `SetMute(int streamIndex, bool value)` method. Set `value` to `true` to mute the stream and `false` to unmute it.
```csharp
// Assuming audioMixerBlock is already created and inputs are connected
// Set volume of the first input stream (index 0) to 50%
audioMixerBlock.SetVolume(0, 0.5);
// Mute the second input stream (index 1)
audioMixerBlock.SetMute(1, true);
```
#### Dynamic Input Management (Live Pipeline)
The `AudioMixerBlock` supports adding and removing inputs dynamically while the pipeline is running:
* **Adding Inputs**: Use the `CreateNewInputLive()` method to get a new input pad that can be connected to a source. The underlying GStreamer elements will be set up to handle the new input.
* **Removing Inputs**: Use the `RemoveInputLive(MediaBlockPad blockPad)` method. This will disconnect the specified input pad and clean up associated resources.
This is particularly useful for applications where the number of audio sources can change during operation, such as a live mixing console or a conferencing application.
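A minimal sketch of adding and later removing a live input while the pipeline is running, using a `VirtualAudioSourceBlock` as the extra source:
```csharp
// The pipeline is already running with audioMixerBlock connected to a renderer.
// Add a new source on the fly:
var extraSource = new VirtualAudioSourceBlock(new VirtualAudioSourceSettings());
var livePad = audioMixerBlock.CreateNewInputLive();
pipeline.Connect(extraSource.Output, livePad);

// ... later, remove the live input and free its resources:
audioMixerBlock.RemoveInputLive(livePad);
```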
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Audio sample grabber
The audio sample grabber block allows you to access the raw audio samples from the audio stream.
#### Block info
Name: AudioSampleGrabberBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Uncompressed audio | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->AudioSampleGrabberBlock;
AudioSampleGrabberBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var audioSampleGrabber = new AudioSampleGrabberBlock();
audioSampleGrabber.SampleGrabbed += (sender, args) =>
{
    // Process the raw audio samples here
    // args.AudioData - audio samples
    // args.AudioFormat - audio format
};
pipeline.Connect(fileSource.AudioOutput, audioSampleGrabber.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(audioSampleGrabber.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
## Audio Effects
### Amplify
The amplify block amplifies an audio stream by an amplification factor. Several clipping modes are available.
Configure the block with a clipping method and an amplification level.
#### Block info
Name: AmplifyBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Uncompressed audio | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->AmplifyBlock;
AmplifyBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var amplify = new AmplifyBlock(AmplifyClippingMethod.Normal, 2.0);
pipeline.Connect(fileSource.AudioOutput, amplify.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(amplify.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Echo
The echo block adds echo effect to the audio stream.
#### Block info
Name: EchoBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Uncompressed audio | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->EchoBlock;
EchoBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
// Delay in ms, strength 0.0 - 1.0
var echo = new EchoBlock(500, 0.5);
pipeline.Connect(fileSource.AudioOutput, echo.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(echo.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Karaoke
The karaoke block applies a karaoke effect to the audio stream, removing center-panned vocals.
#### Block info
Name: KaraokeBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Uncompressed audio | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->KaraokeBlock;
KaraokeBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var settings = new KaraokeAudioEffect();
var karaoke = new KaraokeBlock(settings);
pipeline.Connect(fileSource.AudioOutput, karaoke.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(karaoke.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Reverberation
The reverberation block adds reverb effects to the audio stream.
#### Block info
Name: ReverberationBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Uncompressed audio | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->ReverberationBlock;
ReverberationBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var settings = new ReverberationAudioEffect();
var reverb = new ReverberationBlock(settings);
pipeline.Connect(fileSource.AudioOutput, reverb.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(reverb.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Wide Stereo
The wide stereo block enhances the stereo image of the audio.
#### Block info
Name: WideStereoBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Uncompressed audio | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->WideStereoBlock;
WideStereoBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var settings = new WideStereoAudioEffect();
var wideStereo = new WideStereoBlock(settings);
pipeline.Connect(fileSource.AudioOutput, wideStereo.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(wideStereo.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
## Equalization and Filtering
### Balance
The balance block allows you to control the balance between the left and right channels.
#### Block info
Name: AudioBalanceBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Uncompressed audio | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->AudioBalanceBlock;
AudioBalanceBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
// Balance: -1.0 (full left) to 1.0 (full right), 0.0 - center
var balance = new AudioBalanceBlock(0.5);
pipeline.Connect(fileSource.AudioOutput, balance.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(balance.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Equalizer (10 bands)
The 10-band equalizer block provides a 10-band equalizer for audio processing.
#### Block info
Name: Equalizer10Block.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Uncompressed audio | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->Equalizer10Block;
Equalizer10Block-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
// Create 10-band equalizer with all bands set to 0 dB
var equalizer = new Equalizer10Block(0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
// Or set bands individually
equalizer.SetBand(0, 3); // Band 0 (31 Hz) to +3 dB
equalizer.SetBand(1, 2); // Band 1 (62 Hz) to +2 dB
equalizer.SetBand(9, -3); // Band 9 (16 kHz) to -3 dB
pipeline.Connect(fileSource.AudioOutput, equalizer.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(equalizer.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Equalizer (Parametric)
The parametric equalizer block provides a parametric equalizer for audio processing.
#### Block info
Name: EqualizerParametricBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Uncompressed audio | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->EqualizerParametricBlock;
EqualizerParametricBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
// Create parametric equalizer
var equalizer = new EqualizerParametricBlock();
// Set up to 4 bands
equalizer.SetBand(0, 100, 1.0, 3); // Band 0: 100 Hz frequency, 1.0 Q, +3 dB gain
equalizer.SetBand(1, 1000, 1.5, -2); // Band 1: 1000 Hz frequency, 1.5 Q, -2 dB gain
pipeline.Connect(fileSource.AudioOutput, equalizer.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(equalizer.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Chebyshev Band Pass/Reject
The Chebyshev band pass/reject block applies a band pass or band reject filter to the audio stream using Chebyshev filters.
#### Block info
Name: ChebyshevBandPassRejectBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Uncompressed audio | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->ChebyshevBandPassRejectBlock;
ChebyshevBandPassRejectBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var settings = new ChebyshevBandPassRejectAudioEffect();
var filter = new ChebyshevBandPassRejectBlock(settings);
pipeline.Connect(fileSource.AudioOutput, filter.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(filter.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Chebyshev Limit
The Chebyshev limit block applies low-pass or high-pass filtering to the audio using Chebyshev filters.
#### Block info
Name: ChebyshevLimitBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Uncompressed audio | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->ChebyshevLimitBlock;
ChebyshevLimitBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var settings = new ChebyshevLimitAudioEffect();
var filter = new ChebyshevLimitBlock(settings);
pipeline.Connect(fileSource.AudioOutput, filter.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(filter.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
## Dynamic Processing
### Compressor/Expander
The compressor/expander block provides dynamic range compression or expansion.
#### Block info
Name: CompressorExpanderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Uncompressed audio | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->CompressorExpanderBlock;
CompressorExpanderBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var compressor = new CompressorExpanderBlock(0.5, 0.9, 0.1, 0.5);
pipeline.Connect(fileSource.AudioOutput, compressor.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(compressor.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Scale/Tempo
The scale/tempo block allows you to change the playback tempo of the audio stream.
#### Block info
Name: ScaleTempoBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Uncompressed audio | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->ScaleTempoBlock;
ScaleTempoBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
// Scale tempo by factor (1.0 is normal, 0.5 is half-speed, 2.0 is double-speed)
var scaleTempo = new ScaleTempoBlock(1.5);
pipeline.Connect(fileSource.AudioOutput, scaleTempo.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(scaleTempo.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
## Analysis and Metering
### VU Meter
The VU meter block allows you to measure the volume level of the audio stream.
#### Block info
Name: VUMeterBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Uncompressed audio | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->VUMeterBlock;
VUMeterBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var vuMeter = new VUMeterBlock();
vuMeter.VolumeUpdated += (sender, args) =>
{
    // Left channel volume in dB
    var leftVolume = args.LeftVolume;
    // Right channel volume in dB
    var rightVolume = args.RightVolume;
    Console.WriteLine($"Left: {leftVolume:F2} dB, Right: {rightVolume:F2} dB");
};
pipeline.Connect(fileSource.AudioOutput, vuMeter.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(vuMeter.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
---END OF PAGE---
# Local File: .\dotnet\mediablocks\AudioRendering\index.md
---
title: Audio Rendering Block for .NET Media Processing
description: Explore the powerful Audio Rendering Block for cross-platform audio output in .NET applications. Learn implementation techniques, performance optimization, and device management for Windows, macOS, Linux, iOS, and Android development.
sidebar_label: Audio Rendering
---
# Audio Rendering Block: Cross-Platform Audio Output Processing
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## Introduction to Audio Rendering
The Audio Renderer Block serves as a critical component in media processing pipelines, enabling applications to output audio streams to sound devices across multiple platforms. This versatile block handles the complex task of converting digital audio data into audible sound through the appropriate hardware interfaces, making it an essential tool for developers building audio-enabled applications.
Audio rendering requires careful management of hardware resources, buffer settings, and timing synchronization to ensure smooth, uninterrupted playback. This block abstracts these complexities and provides a unified interface for audio output across diverse computing environments.
## Core Functionality
The Audio Renderer Block accepts uncompressed audio streams and outputs them to either the default audio device or a user-selected alternative. It provides essential audio playback controls including:
- Volume adjustment with precise decibel control
- Mute functionality for silent operation
- Device selection from available system audio outputs
- Buffering settings to optimize for latency or stability
These capabilities allow developers to create applications with professional-grade audio output without needing to implement platform-specific code for each target operating system.
## Underlying Technology
### Platform-Specific Implementation
The `AudioRendererBlock` supports various platform-specific audio rendering technologies. It can be configured to use a specific audio device and API (see Device Management section). When instantiated using its default constructor (e.g., `new AudioRendererBlock()`), it attempts to select a suitable default audio API based on the operating system:
- **Windows**: The default constructor typically uses DirectSound. The block supports multiple audio APIs including:
- DirectSound: Provides low-latency output with broad compatibility
- WASAPI (Windows Audio Session API): Offers exclusive mode for highest quality
- ASIO (Audio Stream Input/Output): Professional-grade audio with minimal latency for specialized hardware
- **macOS**: Utilizes the CoreAudio framework. The default constructor will typically select a CoreAudio-based device for:
- High-resolution audio output
- Native integration with macOS audio subsystem
- Support for audio units and professional equipment
(Note: For macOS, an `OSXAudioSinkBlock` is also available for direct interaction with the platform-specific GStreamer sink in specialized scenarios.)
- **Linux**: Implements ALSA (Advanced Linux Sound Architecture). The default constructor will typically select an ALSA-based device for:
- Direct hardware access
- Comprehensive device support
- Integration with the Linux audio stack
- **iOS**: Employs CoreAudio, optimized for mobile. The default constructor will typically select a CoreAudio-based device, enabling features like:
- Power-efficient rendering
- Background audio capabilities
- Integration with iOS audio session management
(Note: For developers requiring more direct control over the iOS-specific GStreamer sink or having advanced use cases, the SDK also provides `IOSAudioSinkBlock` as a distinct media block.)
- **Android**: Defaults to using OpenSL ES to provide:
- Low-latency audio output
- Hardware acceleration when available
## OSXAudioSinkBlock: Direct macOS Audio Output
The `OSXAudioSinkBlock` is a platform-specific media block designed for advanced scenarios where direct interaction with the macOS GStreamer audio sink is required. This block is useful for developers who need low-level control over audio output on macOS devices, such as custom device selection or integration with other native components.
### Key Features
- Direct access to the macOS audio sink
- Device selection via `DeviceID`
- Suitable for specialized or professional audio applications on macOS
### Settings: `OSXAudioSinkSettings`
The `OSXAudioSinkBlock` requires an `OSXAudioSinkSettings` object to specify the audio output device. The `OSXAudioSinkSettings` class allows you to define:
- `DeviceID`: The ID of the macOS audio output device (starting from 0)
Example:
```csharp
using VisioForge.Core.Types.X.Sinks;
// Select the first available audio device (DeviceID = 0)
var osxSettings = new OSXAudioSinkSettings { DeviceID = 0 };
// Create the macOS audio sink block
var osxAudioSink = new OSXAudioSinkBlock(osxSettings);
```
### Availability Check
You can check if the `OSXAudioSinkBlock` is available on the current platform:
```csharp
bool isAvailable = OSXAudioSinkBlock.IsAvailable();
```
### Integration Example
Below is a minimal example of integrating `OSXAudioSinkBlock` into a media pipeline:
```csharp
var pipeline = new MediaBlocksPipeline();
// Set up your audio source block as needed
var audioSourceBlock = new VirtualAudioSourceBlock(new VirtualAudioSourceSettings());
// Define settings for the sink
var osxSettings = new OSXAudioSinkSettings { DeviceID = 0 };
var osxAudioSink = new OSXAudioSinkBlock(osxSettings);
// Connect the source to the macOS audio sink
pipeline.Connect(audioSourceBlock.Output, osxAudioSink.Input);
await pipeline.StartAsync();
```
## IOSAudioSinkBlock: Direct iOS Audio Output
The `IOSAudioSinkBlock` is a platform-specific media block designed for advanced scenarios where direct interaction with the iOS GStreamer audio sink is required. This block is useful for developers who need low-level control over audio output on iOS devices, such as custom audio routing, format handling, or integration with other native components.
### Key Features
- Direct access to the iOS GStreamer audio sink
- Fine-grained control over audio format, sample rate, and channel count
- Suitable for specialized or professional audio applications on iOS
### Settings: `AudioInfoX`
The `IOSAudioSinkBlock` requires an `AudioInfoX` object to specify the audio format. The `AudioInfoX` class allows you to define:
- `Format`: The audio sample format (e.g., `AudioFormatX.S16LE`, `AudioFormatX.F32LE`, etc.)
- `SampleRate`: The sample rate in Hz (e.g., 44100, 48000)
- `Channels`: The number of audio channels (e.g., 1 for mono, 2 for stereo)
Example:
```csharp
using VisioForge.Core.Types.X;
// Define audio format: 16-bit signed little-endian, 44100 Hz, stereo
var audioInfo = new AudioInfoX(AudioFormatX.S16LE, 44100, 2);
// Create the iOS audio sink block
var iosAudioSink = new IOSAudioSinkBlock(audioInfo);
```
### Availability Check
You can check if the `IOSAudioSinkBlock` is available on the current platform:
```csharp
bool isAvailable = IOSAudioSinkBlock.IsAvailable();
```
### Integration Example
Below is a minimal example of integrating `IOSAudioSinkBlock` into a media pipeline:
```csharp
var pipeline = new MediaBlocksPipeline();
// Set up your audio source block as needed
var audioSourceBlock = new VirtualAudioSourceBlock(new VirtualAudioSourceSettings());
// Define audio format for the sink
var audioInfo = new AudioInfoX(AudioFormatX.S16LE, 44100, 2);
var iosAudioSink = new IOSAudioSinkBlock(audioInfo);
// Connect the source to the iOS audio sink
pipeline.Connect(audioSourceBlock.Output, iosAudioSink.Input);
await pipeline.StartAsync();
```
## Technical Specifications
### Block Information
Name: AudioRendererBlock
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input audio | uncompressed audio | 1 |
### Audio Format Support
The Audio Renderer Block accepts a wide range of uncompressed audio formats:
- Sample rates: 8kHz to 192kHz
- Bit depths: 8-bit, 16-bit, 24-bit, and 32-bit (floating point)
- Channel configurations: Mono, stereo, and multichannel (up to 7.1 surround)
This flexibility allows developers to work with everything from basic voice applications to high-fidelity music and immersive audio experiences.
## Device Management
### Enumerating Available Devices
The Audio Renderer Block lets you discover and select from the available audio output devices on the system via the static `GetDevicesAsync` method:
```csharp
// Get a list of all audio output devices on the current system
var availableDevices = await AudioRendererBlock.GetDevicesAsync();
// Optionally specify the API to use
var directSoundDevices = await AudioRendererBlock.GetDevicesAsync(AudioOutputDeviceAPI.DirectSound);
// Display device information
foreach (var device in availableDevices)
{
    Console.WriteLine($"Device: {device.Name}");
}
// Create a renderer with a specific device
var audioRenderer = new AudioRendererBlock(availableDevices[0]);
```
### Default Device Handling
When no specific device is selected, the block automatically routes audio to the system's default output device. The no-parameter constructor attempts to select an appropriate default device based on the platform:
```csharp
// Create with default device
var audioRenderer = new AudioRendererBlock();
```
The block also monitors device status, handling scenarios such as:
- Device disconnection during playback
- Default device changes by the user
- Audio endpoint format changes
## Performance Considerations
### Latency Management
Audio rendering latency is critical for many applications. The block provides configuration options through the `Settings` property and synchronization control via the `IsSync` property:
```csharp
// Control synchronization behavior
audioRenderer.IsSync = true; // Enable synchronization (default)
// Check if a specific API is available on this platform
bool isDirectSoundAvailable = AudioRendererBlock.IsAvailable(AudioOutputDeviceAPI.DirectSound);
```
### Volume and Mute Control
The AudioRendererBlock provides precise volume control and mute functionality:
```csharp
// Set volume (0.0 to 1.0 range)
audioRenderer.Volume = 0.8; // 80% volume
// Get current volume
double currentVolume = audioRenderer.Volume;
// Mute/unmute
audioRenderer.Mute = true; // Mute audio
audioRenderer.Mute = false; // Unmute audio
// Check mute state
bool isMuted = audioRenderer.Mute;
```
### Resource Utilization
The Audio Renderer Block is designed for efficiency, with optimizations for:
- CPU usage during playback
- Memory footprint for buffer management
- Power consumption on mobile devices
## Integration Examples
### Basic Pipeline Setup
The following example demonstrates how to set up a simple audio rendering pipeline using a virtual audio source:
```csharp
var pipeline = new MediaBlocksPipeline();
var audioSourceBlock = new VirtualAudioSourceBlock(new VirtualAudioSourceSettings());
// Create audio renderer with default settings
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(audioSourceBlock.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
### Real-World Audio Pipeline
For a more practical application, here's how to capture system audio and render it:
```mermaid
graph LR;
SystemAudioSourceBlock-->AudioRendererBlock;
```
```csharp
var pipeline = new MediaBlocksPipeline();
// Capture system audio from the first available audio input device
var audioDevice = (await DeviceEnumerator.Shared.AudioSourcesAsync())[0];
var systemAudioSource = new SystemAudioSourceBlock(audioDevice.CreateSourceSettings(audioDevice.Formats[0].ToFormat()));
// Configure the audio renderer
var audioRenderer = new AudioRendererBlock();
audioRenderer.Volume = 0.8f; // 80% volume
// Connect blocks
pipeline.Connect(systemAudioSource.Output, audioRenderer.Input);
// Start processing
await pipeline.StartAsync();
// Allow audio to play for 10 seconds
await Task.Delay(TimeSpan.FromSeconds(10));
// Stop the pipeline
await pipeline.StopAsync();
```
## Compatibility and Platform Support
The Audio Renderer Block is designed for cross-platform operation, supporting:
- Windows 10 and later
- macOS 10.13 and later
- Linux (Ubuntu, Debian, Fedora)
- iOS 12.0 and later
- Android 8.0 and later
This wide platform support enables developers to create consistent audio experiences across different operating systems and devices.
## Conclusion
The Audio Renderer Block provides developers with a powerful, flexible solution for audio output across multiple platforms. By abstracting the complexities of platform-specific audio APIs, it allows developers to focus on creating exceptional audio experiences without worrying about the underlying implementation details.
Whether building a simple media player, a professional audio editing application, or a real-time communications platform, the Audio Renderer Block provides the tools needed for high-quality, reliable audio output.
---END OF PAGE---
# Local File: .\dotnet\mediablocks\AudioVisualizers\index.md
---
title: .Net Audio Visualizer Blocks
description: Explore a comprehensive set of .NET audio visualizer blocks for building powerful audio-reactive applications. Includes Spacescope, Spectrascope, Synaescope, and Wavescope.
sidebar_label: Audio Visualizers
---
# Audio visualizer blocks
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
VisioForge Media Blocks SDK .Net includes a set of audio visualizer blocks that allow you to create audio-reactive visualizations for your applications. These blocks take audio input and produce video output representing the audio characteristics.
The blocks can be connected to other audio and video processing blocks to create complex media pipelines.
Most of the blocks are available for all platforms, including Windows, Linux, MacOS, Android, and iOS.
## Spacescope
The Spacescope block is a simple audio visualization element that maps the left and right audio channels to X and Y coordinates, respectively, creating a Lissajous-like pattern. This visualizes the phase relationship between the channels. The appearance, such as using dots or lines and colors, can be customized via `SpacescopeSettings`.
#### Block info
Name: SpacescopeBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Video | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->SpacescopeBlock;
SpacescopeBlock-->VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3"; // Or any audio source
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
// Settings can be customized, e.g., for shader, line thickness, etc.
// The style (dots, lines, color-dots, color-lines) can be set in SpacescopeSettings.
var spacescopeSettings = new SpacescopeSettings();
var spacescope = new SpacescopeBlock(spacescopeSettings);
pipeline.Connect(fileSource.AudioOutput, spacescope.Input);
// Assuming you have a video view control (VideoView1) to display the output
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(spacescope.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
## Spectrascope
The Spectrascope block is a simple spectrum visualization element. It renders the frequency spectrum of the audio input as a series of bars.
#### Block info
Name: SpectrascopeBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Video | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->SpectrascopeBlock;
SpectrascopeBlock-->VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3"; // Or any audio source
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var spectrascope = new SpectrascopeBlock();
pipeline.Connect(fileSource.AudioOutput, spectrascope.Input);
// Assuming you have a video view control (VideoView1) to display the output
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(spectrascope.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
## Synaescope
The Synaescope block is an audio visualization element that analyzes frequencies and out-of-phase properties of the audio. It draws this analysis as dynamic clouds of stars, creating colorful and abstract patterns.
#### Block info
Name: SynaescopeBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Video | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->SynaescopeBlock;
SynaescopeBlock-->VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3"; // Or any audio source
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
// Settings can be customized for Synaescope.
// For example, to set a specific shader effect (if available in SynaescopeSettings):
// var synaescopeSettings = new SynaescopeSettings() { Shader = SynaescopeShader.LibVisualCurrent };
// var synaescope = new SynaescopeBlock(synaescopeSettings);
var synaescope = new SynaescopeBlock(); // Default settings
pipeline.Connect(fileSource.AudioOutput, synaescope.Input);
// Assuming you have a video view control (VideoView1) to display the output
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(synaescope.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
## Wavescope
The Wavescope block is a simple audio visualization element that renders the audio waveforms, similar to an oscilloscope display. The drawing style (dots, lines, colors) can be configured using `WavescopeSettings`.
#### Block info
Name: WavescopeBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed audio | 1
Output | Video | 1
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->WavescopeBlock;
WavescopeBlock-->VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3"; // Or any audio source
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
// Settings can be customized, e.g., for style, mono/stereo mode, etc.
// The style (dots, lines, color-dots, color-lines) can be set in WavescopeSettings.
var wavescopeSettings = new WavescopeSettings();
var wavescope = new WavescopeBlock(wavescopeSettings);
pipeline.Connect(fileSource.AudioOutput, wavescope.Input);
// Assuming you have a video view control (VideoView1) to display the output
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(wavescope.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
---END OF PAGE---
# Local File: .\dotnet\mediablocks\AWS\index.md
---
title: .Net Media AWS S3 Blocks Guide
description: Explore a complete guide to .Net Media SDK AWS S3 source and sink blocks. Learn how to read from and write to AWS S3 for your media processing pipelines.
sidebar_label: Amazon Web Services
---
# AWS S3 Blocks - VisioForge Media Blocks SDK .Net
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
AWS S3 blocks enable interaction with Amazon Simple Storage Service (S3) to read media files as sources or write media files as sinks within your pipelines.
## AWSS3SinkBlock
The `AWSS3SinkBlock` allows you to write media data from your pipeline to a file in an AWS S3 bucket. This is useful for storing recorded media, transcoded files, or other outputs directly to cloud storage.
#### Block info
Name: `AWSS3SinkBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input | Auto (depends on connected block) | 1 |
#### Settings
The `AWSS3SinkBlock` is configured using `AWSS3SinkSettings`. Key properties:
- `Uri` (string): The S3 URI where the media file will be written (e.g., "s3://your-bucket-name/path/to/output/file.mp4").
- `AccessKeyId` (string): Your AWS Access Key ID.
- `SecretAccessKey` (string): Your AWS Secret Access Key.
- `Region` (string): The AWS region where the bucket is located (e.g., "us-east-1").
- `SessionToken` (string, optional): AWS session token, if using temporary credentials.
- `EndpointUrl` (string, optional): Custom S3-compatible endpoint URL.
- `ContentType` (string, optional): The MIME type of the content being uploaded (e.g., "video/mp4").
- `StorageClass` (string, optional): S3 storage class (e.g., "STANDARD", "INTELLIGENT_TIERING").
- `ServerSideEncryption` (string, optional): Server-side encryption method (e.g., "AES256", "aws:kms").
- `ACL` (string, optional): Access Control List for the uploaded object (e.g., "private", "public-read").
#### The sample pipeline
```mermaid
graph LR;
SystemVideoSourceBlock-->VideoEncoderBlock;
VideoEncoderBlock-->MuxerBlock;
SystemAudioSourceBlock-->AudioEncoderBlock;
AudioEncoderBlock-->MuxerBlock;
MuxerBlock-->AWSS3SinkBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create video source (e.g., webcam)
var videoDevice = (await DeviceEnumerator.Shared.VideoSourcesAsync())[0];
var videoSourceSettings = new VideoCaptureDeviceSourceSettings(videoDevice);
var videoSource = new SystemVideoSourceBlock(videoSourceSettings);
// Create audio source (e.g., microphone)
var audioDevice = (await DeviceEnumerator.Shared.AudioSourcesAsync())[0];
var audioSourceSettings = audioDevice.CreateSourceSettings(audioDevice.Formats[0].ToFormat());
var audioSource = new SystemAudioSourceBlock(audioSourceSettings);
// Create video encoder
var h264Settings = new OpenH264EncoderSettings(); // Example encoder settings
var videoEncoder = new H264EncoderBlock(h264Settings);
// Create audio encoder
var opusSettings = new OpusEncoderSettings(); // Example encoder settings
var audioEncoder = new OpusEncoderBlock(opusSettings);
// Create a muxer (e.g., MP4MuxBlock)
var mp4MuxSettings = new MP4MuxSettings();
var mp4Muxer = new MP4MuxBlock(mp4MuxSettings);
// Configure AWSS3SinkSettings
var s3SinkSettings = new AWSS3SinkSettings
{
    Uri = "s3://your-bucket-name/output/recorded-video.mp4",
    AccessKeyId = "YOUR_AWS_ACCESS_KEY_ID",
    SecretAccessKey = "YOUR_AWS_SECRET_ACCESS_KEY",
    Region = "your-aws-region", // e.g., "us-east-1"
    ContentType = "video/mp4"
};
var s3Sink = new AWSS3SinkBlock(s3SinkSettings);
// Connect video path
pipeline.Connect(videoSource.Output, videoEncoder.Input);
pipeline.Connect(videoEncoder.Output, mp4Muxer.CreateNewInput(MediaBlockPadMediaType.Video));
// Connect audio path
pipeline.Connect(audioSource.Output, audioEncoder.Input);
pipeline.Connect(audioEncoder.Output, mp4Muxer.CreateNewInput(MediaBlockPadMediaType.Audio));
// Connect muxer to S3 sink
pipeline.Connect(mp4Muxer.Output, s3Sink.Input);
// Check if AWSS3Sink is available
if (!AWSS3SinkBlock.IsAvailable())
{
    Console.WriteLine("AWS S3 Sink Block is not available. Check SDK redistributables.");
    return;
}
// Start pipeline
await pipeline.StartAsync();
// ... wait for recording to finish ...
// Stop pipeline
await pipeline.StopAsync();
```
#### Remarks
You can check if the `AWSS3SinkBlock` is available at runtime using the static method `AWSS3SinkBlock.IsAvailable()`. This ensures that the necessary underlying GStreamer plugins and AWS SDK components are present.
#### Platforms
Windows, macOS, Linux. (Availability depends on GStreamer AWS plugin and AWS SDK support on these platforms).
---END OF PAGE---
# Local File: .\dotnet\mediablocks\Bridge\index.md
---
title: Link Media Pipelines - Bridge Blocks Guide
description: Learn to use Bridge blocks for linking and dynamically switching media pipelines for audio, video, and subtitles in .Net applications.
sidebar_label: Video and Audio Bridges
---
# Bridge blocks
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
Bridges can be used to link two pipelines and dynamically switch between them. For example, you can switch between different files or cameras in the first pipeline without interrupting streaming in the second pipeline.
To link a source and a sink, give them the same channel name; each bridge pair is identified by a unique channel name.
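A minimal sketch of pairing a sink and a source by channel name; the `Channel` property name here is a hypothetical illustration, so check the actual members of `BridgeAudioSinkSettings` and `BridgeAudioSourceSettings`:
```csharp
// "Channel" is a hypothetical property name used for illustration only;
// the point is that both sides of the bridge must share the same name.
var sinkSettings = new BridgeAudioSinkSettings() { Channel = "mic-bridge" };
var sourceSettings = new BridgeAudioSourceSettings() { Channel = "mic-bridge" };
```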
## Bridge audio sink and source
Bridges can be used to connect different media pipelines and use them independently. `BridgeAudioSourceBlock` is used to connect to `BridgeAudioSinkBlock` and supports uncompressed audio.
### Block info
#### BridgeAudioSourceBlock information
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Output audio | uncompressed audio | 1 |
#### BridgeAudioSinkBlock information
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input audio | uncompressed audio | 1 |
### Sample pipelines
#### First pipeline with an audio source and a bridge audio sink
```mermaid
graph LR;
VirtualAudioSourceBlock-->BridgeAudioSinkBlock;
```
#### Second pipeline with a bridge audio source and an audio renderer
```mermaid
graph LR;
BridgeAudioSourceBlock-->AudioRendererBlock;
```
### Sample code
The source pipeline with virtual audio source and bridge audio sink.
```csharp
// create source pipeline
var sourcePipeline = new MediaBlocksPipeline();
// create virtual audio source and bridge audio sink
var audioSourceBlock = new VirtualAudioSourceBlock(new VirtualAudioSourceSettings());
var bridgeAudioSink = new BridgeAudioSinkBlock(new BridgeAudioSinkSettings());
// connect source and sink
sourcePipeline.Connect(audioSourceBlock.Output, bridgeAudioSink.Input);
// start pipeline
await sourcePipeline.StartAsync();
```
The sink pipeline with bridge audio source and audio renderer.
```csharp
// create sink pipeline
var sinkPipeline = new MediaBlocksPipeline();
// create bridge audio source and audio renderer
var bridgeAudioSource = new BridgeAudioSourceBlock(new BridgeAudioSourceSettings());
var audioRenderer = new AudioRendererBlock();
// connect source and sink
sinkPipeline.Connect(bridgeAudioSource.Output, audioRenderer.Input);
// start pipeline
await sinkPipeline.StartAsync();
```
## Bridge video sink and source
Bridges can be used to connect different media pipelines and use them independently. `BridgeVideoSinkBlock` is used to connect to the `BridgeVideoSourceBlock` and supports uncompressed video.
### Blocks info
#### BridgeVideoSinkBlock information
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | uncompressed video | 1 |
#### BridgeVideoSourceBlock information
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Output video | uncompressed video | 1 |
### Sample pipelines
#### First pipeline with a video source and a bridge video sink
```mermaid
graph LR;
VirtualVideoSourceBlock-->BridgeVideoSinkBlock;
```
#### Second pipeline with a bridge video source and a video renderer
```mermaid
graph LR;
BridgeVideoSourceBlock-->VideoRendererBlock;
```
### Sample code
Source pipeline with a virtual video source and bridge video sink.
```csharp
// create source pipeline
var sourcePipeline = new MediaBlocksPipeline();
// create virtual video source and bridge video sink
var videoSourceBlock = new VirtualVideoSourceBlock(new VirtualVideoSourceSettings());
var bridgeVideoSink = new BridgeVideoSinkBlock(new BridgeVideoSinkSettings());
// connect source and sink
sourcePipeline.Connect(videoSourceBlock.Output, bridgeVideoSink.Input);
// start pipeline
await sourcePipeline.StartAsync();
```
Sink pipeline with a bridge video source and video renderer.
```csharp
// create sink pipeline
var sinkPipeline = new MediaBlocksPipeline();
// create bridge video source and video renderer
var bridgeVideoSource = new BridgeVideoSourceBlock(new BridgeVideoSourceSettings());
var videoRenderer = new VideoRendererBlock(sinkPipeline, VideoView1);
// connect source and sink
sinkPipeline.Connect(bridgeVideoSource.Output, videoRenderer.Input);
// start pipeline
await sinkPipeline.StartAsync();
```
## Bridge subtitle sink and source
Bridges can be used to connect different media pipelines and use them independently. `BridgeSubtitleSourceBlock` is used to connect to the `BridgeSubtitleSinkBlock` and supports the text media type.
### Block info
#### BridgeSubtitleSourceBlock information
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Output | text | 1 |
#### BridgeSubtitleSinkBlock information
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input | text | 1 |
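No dedicated sample is shown for the subtitle bridge; the following sketch is by analogy with the audio and video bridges above and assumes the subtitle bridge blocks follow the same settings pattern:
```csharp
// Assumption: subtitle bridge settings classes mirror the audio/video bridge pattern
// First pipeline: a subtitle-producing block feeds the bridge subtitle sink
var sourcePipeline = new MediaBlocksPipeline();
var bridgeSubtitleSink = new BridgeSubtitleSinkBlock(new BridgeSubtitleSinkSettings());
// sourcePipeline.Connect(subtitleSource.Output, bridgeSubtitleSink.Input);

// Second pipeline: the bridge subtitle source feeds a text-consuming block
var sinkPipeline = new MediaBlocksPipeline();
var bridgeSubtitleSource = new BridgeSubtitleSourceBlock(new BridgeSubtitleSourceSettings());
// sinkPipeline.Connect(bridgeSubtitleSource.Output, textConsumer.Input);
```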
## Proxy source
The proxy source and proxy sink blocks form a pair that can be used to connect different media pipelines and use them independently.
### Block info
Name: ProxySourceBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Output | Any uncompressed | 1 |
### Sample pipelines
#### First pipeline with a video source and a proxy video sink
```mermaid
graph LR;
VirtualVideoSourceBlock-->ProxySinkBlock;
```
#### Second pipeline with a proxy video source and a video renderer
```mermaid
graph LR;
ProxySourceBlock-->VideoRendererBlock;
```
### Sample code
```csharp
// source pipeline with virtual video source and proxy sink
var sourcePipeline = new MediaBlocksPipeline();
var videoSourceBlock = new VirtualVideoSourceBlock(new VirtualVideoSourceSettings());
var proxyVideoSink = new ProxySinkBlock();
sourcePipeline.Connect(videoSourceBlock.Output, proxyVideoSink.Input);
// sink pipeline with proxy video source and video renderer
var sinkPipeline = new MediaBlocksPipeline();
var proxyVideoSource = new ProxySourceBlock(proxyVideoSink);
var videoRenderer = new VideoRendererBlock(sinkPipeline, VideoView1);
sinkPipeline.Connect(proxyVideoSource.Output, videoRenderer.Input);
// start pipelines
await sourcePipeline.StartAsync();
await sinkPipeline.StartAsync();
```
## Platforms
All bridge blocks are supported on Windows, macOS, Linux, iOS, and Android.
---END OF PAGE---
# Local File: .\dotnet\mediablocks\Decklink\index.md
---
title: Blackmagic Decklink Integration for .NET Developers
description: Integrate professional Blackmagic Decklink devices for high-quality audio/video capture and rendering in your .NET applications. Learn to implement SDI, HDMI inputs/outputs, configure multiple devices, and build advanced media workflows with our comprehensive code examples and API.
sidebar_label: Blackmagic Decklink
---
# Blackmagic Decklink Integration with Media Blocks SDK
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## Introduction to Decklink Integration
The VisioForge Media Blocks SDK for .NET provides robust support for Blackmagic Decklink devices, enabling developers to implement professional-grade audio and video functionality in their applications. This integration allows for seamless capture and rendering operations using Decklink's high-quality hardware.
Our SDK includes specialized blocks designed specifically for Decklink devices, giving you full access to their capabilities including SDI, HDMI, and other inputs/outputs. These blocks are optimized for performance and offer a straightforward API for implementing complex media workflows.
### Key Capabilities
- **Audio Capture and Rendering**: Capture and output audio through Decklink devices
- **Video Capture and Rendering**: Capture and output video in various formats and resolutions
- **Multiple Device Support**: Work with multiple Decklink devices simultaneously
- **Professional I/O Options**: Utilize SDI, HDMI, and other professional interfaces
- **High-Quality Processing**: Maintain professional video/audio quality throughout the pipeline
- **Combined Audio/Video Blocks**: Simplified handling of synchronized audio and video streams with dedicated source and sink blocks.
## System Requirements
Before using the Decklink blocks, ensure your system meets these requirements:
- **Hardware**: Compatible Blackmagic Decklink device
- **Software**: Blackmagic Decklink SDK or drivers installed
## Decklink Block Types
The SDK provides several block types for working with Decklink devices:
1. **Audio Sink Block**: For audio output to Decklink devices.
2. **Audio Source Block**: For audio capture from Decklink devices.
3. **Video Sink Block**: For video output to Decklink devices.
4. **Video Source Block**: For video capture from Decklink devices.
5. **Video + Audio Sink Block**: For synchronized video and audio output to Decklink devices using a single block.
6. **Video + Audio Source Block**: For synchronized video and audio capture from Decklink devices using a single block.
Each block type is designed to handle specific media operations while maintaining synchronization and quality.
## Working with Decklink Audio Sink Block
The Decklink Audio Sink block enables audio output to Blackmagic Decklink devices. This block handles the complexities of audio timing and device interfacing.
### Device Enumeration
Before creating an audio sink block, you'll need to enumerate available devices:
```csharp
var devices = await DecklinkAudioSinkBlock.GetDevicesAsync();
foreach (var item in devices)
{
Console.WriteLine($"Found device: {item.Name}, Device Number: {item.DeviceNumber}");
}
```
This code retrieves all available Decklink devices that support audio output functionality.
### Block Creation and Configuration
Once you've identified the target device, you can create and configure the audio sink block:
```csharp
// Get the first available device
var deviceInfo = (await DecklinkAudioSinkBlock.GetDevicesAsync()).FirstOrDefault();
// Create settings for the selected device
DecklinkAudioSinkSettings audioSinkSettings = null;
if (deviceInfo != null)
{
audioSinkSettings = new DecklinkAudioSinkSettings(deviceInfo);
// Example: audioSinkSettings.DeviceNumber = deviceInfo.DeviceNumber; (already set by constructor)
// Further configuration:
// audioSinkSettings.BufferTime = TimeSpan.FromMilliseconds(100);
// audioSinkSettings.IsSync = true;
}
// Create the block with configured settings
var decklinkAudioSink = new DecklinkAudioSinkBlock(audioSinkSettings);
```
### Key Audio Sink Settings
The `DecklinkAudioSinkSettings` class includes properties like:
- `DeviceNumber`: The output device instance to use.
- `BufferTime`: Minimum latency reported by the sink (default: 50ms).
- `AlignmentThreshold`: Timestamp alignment threshold (default: 40ms).
- `DiscontWait`: Time to wait before creating a discontinuity (default: 1s).
- `IsSync`: Enables synchronization (default: true).
### Connecting to the Pipeline
The audio sink block includes an `Input` pad that accepts audio data from other blocks in your pipeline:
```csharp
// Example: Connect an audio source/encoder to the Decklink audio sink
audioEncoder.Output.Connect(decklinkAudioSink.Input);
```
## Working with Decklink Audio Source Block
The Decklink Audio Source block enables capturing audio from Blackmagic Decklink devices. It supports various audio formats and configurations.
### Device Enumeration
Enumerate available audio source devices:
```csharp
var devices = await DecklinkAudioSourceBlock.GetDevicesAsync();
foreach (var item in devices)
{
Console.WriteLine($"Available audio source: {item.Name}, Device Number: {item.DeviceNumber}");
}
```
### Block Creation and Configuration
Create and configure the audio source block:
```csharp
// Get the first available device
var deviceInfo = (await DecklinkAudioSourceBlock.GetDevicesAsync()).FirstOrDefault();
// Create settings for the selected device
DecklinkAudioSourceSettings audioSourceSettings = null;
if (deviceInfo != null)
{
// create settings object
audioSourceSettings = new DecklinkAudioSourceSettings(deviceInfo);
// Further configuration:
// audioSourceSettings.Channels = DecklinkAudioChannels.Ch2;
// audioSourceSettings.Connection = DecklinkAudioConnection.Embedded;
// audioSourceSettings.Format = DecklinkAudioFormat.S16LE; // SampleRate is fixed at 48000
}
// Create the block with the configured settings
var audioSource = new DecklinkAudioSourceBlock(audioSourceSettings);
```
### Key Audio Source Settings
The `DecklinkAudioSourceSettings` class includes properties like:
- `DeviceNumber`: The input device instance to use.
- `Channels`: Audio channels to capture (e.g., `DecklinkAudioChannels.Ch2`, `Ch8`, `Ch16`). Default `Ch2`.
- `Format`: Audio sample format (e.g., `DecklinkAudioFormat.S16LE`). Default `S16LE`. Sample rate is fixed at 48000 Hz.
- `Connection`: Audio connection type (e.g., `DecklinkAudioConnection.Embedded`, `AES`, `Analog`). Default `Auto`.
- `BufferSize`: Internal buffer size in frames (default: 5).
- `DisableAudioConversion`: Set to `true` to disable internal audio conversion. Default `false`.
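As an illustration, the sketch below configures capture of eight channels of embedded SDI audio using the enum members listed above; the values are illustrative, not required.
```csharp
var settings = new DecklinkAudioSourceSettings(deviceInfo)
{
    Channels = DecklinkAudioChannels.Ch8,          // capture 8 embedded channels
    Connection = DecklinkAudioConnection.Embedded, // audio embedded in the SDI signal
    Format = DecklinkAudioFormat.S16LE             // 16-bit little-endian samples at 48 kHz
};
var audioSource = new DecklinkAudioSourceBlock(settings);
```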
### Connecting to the Pipeline
The audio source block provides an `Output` pad that can connect to other blocks:
```csharp
// Example: Connect the audio source to an audio encoder or processor
audioSource.Output.Connect(audioProcessor.Input);
```
## Working with Decklink Video Sink Block
The Decklink Video Sink block enables video output to Blackmagic Decklink devices, supporting various video formats and resolutions.
### Device Enumeration
Find available video sink devices:
```csharp
var devices = await DecklinkVideoSinkBlock.GetDevicesAsync();
foreach (var item in devices)
{
Console.WriteLine($"Available video output device: {item.Name}, Device Number: {item.DeviceNumber}");
}
```
### Block Creation and Configuration
Create and configure the video sink block:
```csharp
// Get the first available device
var deviceInfo = (await DecklinkVideoSinkBlock.GetDevicesAsync()).FirstOrDefault();
// Create settings for the selected device
DecklinkVideoSinkSettings videoSinkSettings = null;
if (deviceInfo != null)
{
videoSinkSettings = new DecklinkVideoSinkSettings(deviceInfo);
// Configure video output format and mode
videoSinkSettings.Mode = DecklinkMode.HD1080i60;
videoSinkSettings.VideoFormat = DecklinkVideoFormat.YUV_10bit; // 10-bit YUV pixel format
// Optional: Additional configuration
// videoSinkSettings.KeyerMode = DecklinkKeyerMode.Internal;
// videoSinkSettings.KeyerLevel = 128;
// videoSinkSettings.Profile = DecklinkProfileID.Default;
// videoSinkSettings.TimecodeFormat = DecklinkTimecodeFormat.RP188Any;
}
// Create the block with the configured settings
var decklinkVideoSink = new DecklinkVideoSinkBlock(videoSinkSettings);
```
### Key Video Sink Settings
The `DecklinkVideoSinkSettings` class includes properties like:
- `DeviceNumber`: The output device instance to use.
- `Mode`: Specifies the video resolution and frame rate (e.g., `DecklinkMode.HD1080i60`, `HD720p60`). Default `Unknown`.
- `VideoFormat`: Defines the pixel format using `DecklinkVideoFormat` enum (e.g., `DecklinkVideoFormat.YUV_8bit`, `YUV_10bit`). Default `YUV_8bit`.
- `KeyerMode`: Controls keying/compositing options using `DecklinkKeyerMode` (if supported by the device). Default `Off`.
- `KeyerLevel`: Sets the keyer level (0-255). Default `255`.
- `Profile`: Specifies the Decklink profile to use with `DecklinkProfileID`.
- `TimecodeFormat`: Specifies the timecode format for playback using `DecklinkTimecodeFormat`. Default `RP188Any`.
- `IsSync`: Enables synchronization (default: true).
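As an example, the following sketch configures 720p output with the internal keyer enabled for a roughly 50% blend. Keyer support depends on the device, so treat the values as illustrative.
```csharp
var settings = new DecklinkVideoSinkSettings(deviceInfo)
{
    Mode = DecklinkMode.HD720p60,               // 1280x720 at 60 fps
    VideoFormat = DecklinkVideoFormat.YUV_8bit, // 8-bit YUV pixel format
    KeyerMode = DecklinkKeyerMode.Internal,     // composite over the incoming signal (device-dependent)
    KeyerLevel = 128                            // ~50% blend (0-255)
};
var videoSink = new DecklinkVideoSinkBlock(settings);
```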
## Working with Decklink Video Source Block
The Decklink Video Source block allows capturing video from Blackmagic Decklink devices, supporting various input formats and resolutions.
### Device Enumeration
Enumerate video capture devices:
```csharp
var devices = await DecklinkVideoSourceBlock.GetDevicesAsync();
foreach (var item in devices)
{
Console.WriteLine($"Available video capture device: {item.Name}, Device Number: {item.DeviceNumber}");
}
```
### Block Creation and Configuration
Create and configure the video source block:
```csharp
// Get the first available device
var deviceInfo = (await DecklinkVideoSourceBlock.GetDevicesAsync()).FirstOrDefault();
// Create settings for the selected device
DecklinkVideoSourceSettings videoSourceSettings = null;
if (deviceInfo != null)
{
videoSourceSettings = new DecklinkVideoSourceSettings(deviceInfo);
// Configure video input format and mode
videoSourceSettings.Mode = DecklinkMode.HD1080i60;
videoSourceSettings.Connection = DecklinkConnection.SDI;
// videoSourceSettings.VideoFormat = DecklinkVideoFormat.Auto; // Often used with Mode=Auto
}
// Create the block with configured settings
var videoSourceBlock = new DecklinkVideoSourceBlock(videoSourceSettings);
```
### Key Video Source Settings
The `DecklinkVideoSourceSettings` class includes properties like:
- `DeviceNumber`: The input device instance to use.
- `Mode`: Specifies the expected input resolution and frame rate (e.g., `DecklinkMode.HD1080i60`). Default `Unknown`.
- `Connection`: Defines which physical input to use, using `DecklinkConnection` enum (e.g., `DecklinkConnection.HDMI`, `DecklinkConnection.SDI`). Default `Auto`.
- `VideoFormat`: Specifies the video format type for input, using `DecklinkVideoFormat` enum. Default `Auto` (especially when `Mode` is `Auto`).
- `Profile`: Specifies the Decklink profile using `DecklinkProfileID`. Default `Default`.
- `DropNoSignalFrames`: If `true`, drops frames marked as having no input signal. Default `false`.
- `OutputAFDBar`: If `true`, extracts and outputs AFD/Bar data as Meta. Default `false`.
- `OutputCC`: If `true`, extracts and outputs Closed Captions as Meta. Default `false`.
- `TimecodeFormat`: Specifies the timecode format using `DecklinkTimecodeFormat`. Default `RP188Any`.
- `DisableVideoConversion`: Set to `true` to disable internal video conversion. Default `false`.
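For instance, a capture configuration that auto-detects the input mode on HDMI and extracts closed captions might look like this sketch (values are illustrative):
```csharp
var settings = new DecklinkVideoSourceSettings(deviceInfo)
{
    Mode = DecklinkMode.Unknown,          // auto-detect resolution and frame rate
    Connection = DecklinkConnection.HDMI, // use the HDMI input
    VideoFormat = DecklinkVideoFormat.Auto, // let the device report the pixel format
    DropNoSignalFrames = true,            // skip frames produced without an input signal
    OutputCC = true                       // emit Closed Captions as metadata
};
var videoSource = new DecklinkVideoSourceBlock(settings);
```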
## Working with Decklink Video + Audio Source Block
The `DecklinkVideoAudioSourceBlock` simplifies capturing synchronized video and audio streams from a single Decklink device.
### Device Enumeration and Configuration
Device selection is managed through `DecklinkVideoSourceSettings` and `DecklinkAudioSourceSettings`. You would typically enumerate video devices using `DecklinkVideoSourceBlock.GetDevicesAsync()` and audio devices using `DecklinkAudioSourceBlock.GetDevicesAsync()`, then configure the respective settings objects for the chosen device. The `DecklinkVideoAudioSourceBlock` itself also provides `GetDevicesAsync()` which enumerates video sources.
```csharp
// Enumerate video devices (for video part of the combined source)
var videoDeviceInfo = (await DecklinkVideoAudioSourceBlock.GetDevicesAsync()).FirstOrDefault(); // or DecklinkVideoSourceBlock.GetDevicesAsync()
var audioDeviceInfo = videoDeviceInfo == null
    ? null
    : (await DecklinkAudioSourceBlock.GetDevicesAsync()).FirstOrDefault(d => d.DeviceNumber == videoDeviceInfo.DeviceNumber); // Example: match by device number, guarding against a missing video device
DecklinkVideoSourceSettings videoSettings = null;
if (videoDeviceInfo != null)
{
videoSettings = new DecklinkVideoSourceSettings(videoDeviceInfo);
videoSettings.Mode = DecklinkMode.HD1080i60;
videoSettings.Connection = DecklinkConnection.SDI;
}
DecklinkAudioSourceSettings audioSettings = null;
if (audioDeviceInfo != null)
{
audioSettings = new DecklinkAudioSourceSettings(audioDeviceInfo);
audioSettings.Channels = DecklinkAudioChannels.Ch2;
}
// Create the block with configured settings
if (videoSettings != null && audioSettings != null)
{
var decklinkVideoAudioSource = new DecklinkVideoAudioSourceBlock(videoSettings, audioSettings);
// Connect outputs
// decklinkVideoAudioSource.VideoOutput.Connect(videoProcessor.Input);
// decklinkVideoAudioSource.AudioOutput.Connect(audioProcessor.Input);
}
```
### Block Creation and Configuration
You instantiate `DecklinkVideoAudioSourceBlock` by providing pre-configured `DecklinkVideoSourceSettings` and `DecklinkAudioSourceSettings` objects.
```csharp
// Assuming videoSettings and audioSettings are configured as above
var videoAudioSource = new DecklinkVideoAudioSourceBlock(videoSettings, audioSettings);
```
### Connecting to the Pipeline
The block provides separate `VideoOutput` and `AudioOutput` pads:
```csharp
// Example: Connect to video and audio processors/encoders
videoAudioSource.VideoOutput.Connect(videoEncoder.Input);
videoAudioSource.AudioOutput.Connect(audioEncoder.Input);
```
## Working with Decklink Video + Audio Sink Block
The `DecklinkVideoAudioSinkBlock` simplifies sending synchronized video and audio streams to a single Decklink device.
### Device Enumeration and Configuration
Similar to the combined source, device selection is managed via `DecklinkVideoSinkSettings` and `DecklinkAudioSinkSettings`. Enumerate devices using `DecklinkVideoSinkBlock.GetDevicesAsync()` and `DecklinkAudioSinkBlock.GetDevicesAsync()`.
```csharp
var videoSinkDeviceInfo = (await DecklinkVideoSinkBlock.GetDevicesAsync()).FirstOrDefault();
var audioSinkDeviceInfo = videoSinkDeviceInfo == null
    ? null
    : (await DecklinkAudioSinkBlock.GetDevicesAsync()).FirstOrDefault(d => d.DeviceNumber == videoSinkDeviceInfo.DeviceNumber); // Example: match by device number, guarding against a missing video device
DecklinkVideoSinkSettings videoSinkSettings = null;
if (videoSinkDeviceInfo != null)
{
videoSinkSettings = new DecklinkVideoSinkSettings(videoSinkDeviceInfo);
videoSinkSettings.Mode = DecklinkMode.HD1080i60;
videoSinkSettings.VideoFormat = DecklinkVideoFormat.YUV_8bit;
}
DecklinkAudioSinkSettings audioSinkSettings = null;
if (audioSinkDeviceInfo != null)
{
audioSinkSettings = new DecklinkAudioSinkSettings(audioSinkDeviceInfo);
}
// Create the block
if (videoSinkSettings != null && audioSinkSettings != null)
{
var decklinkVideoAudioSink = new DecklinkVideoAudioSinkBlock(videoSinkSettings, audioSinkSettings);
// Connect inputs
// videoEncoder.Output.Connect(decklinkVideoAudioSink.VideoInput);
// audioEncoder.Output.Connect(decklinkVideoAudioSink.AudioInput);
}
```
### Block Creation and Configuration
Instantiate `DecklinkVideoAudioSinkBlock` with configured `DecklinkVideoSinkSettings` and `DecklinkAudioSinkSettings`.
```csharp
// Assuming videoSinkSettings and audioSinkSettings are configured
var videoAudioSink = new DecklinkVideoAudioSinkBlock(videoSinkSettings, audioSinkSettings);
```
### Connecting to the Pipeline
The block provides separate `VideoInput` and `AudioInput` pads:
```csharp
// Example: Connect from video and audio encoders
videoEncoder.Output.Connect(videoAudioSink.VideoInput);
audioEncoder.Output.Connect(videoAudioSink.AudioInput);
```
## Advanced Usage Examples
### Synchronized Audio/Video Capture
**Using separate source blocks:**
```csharp
// Create the pipeline
var pipeline = new MediaBlocksPipeline();
// Assume videoSourceSettings and audioSourceSettings are configured for the same device/timing
var videoSource = new DecklinkVideoSourceBlock(videoSourceSettings);
var audioSource = new DecklinkAudioSourceBlock(audioSourceSettings);
// Create an MP4 encoder
var mp4Settings = new MP4SinkSettings("output.mp4");
var sink = new MP4SinkBlock(mp4Settings);
// Create video encoder
var videoEncoder = new H264EncoderBlock();
// Create audio encoder
var audioEncoder = new AACEncoderBlock();
// Connect video and audio sources
pipeline.Connect(videoSource.Output, videoEncoder.Input);
pipeline.Connect(audioSource.Output, audioEncoder.Input);
// Connect video encoder to sink
pipeline.Connect(videoEncoder.Output, sink.CreateNewInput(MediaBlockPadMediaType.Video));
// Connect audio encoder to sink
pipeline.Connect(audioEncoder.Output, sink.CreateNewInput(MediaBlockPadMediaType.Audio));
// Start the pipeline
await pipeline.StartAsync();
```
**Using `DecklinkVideoAudioSourceBlock` for simplified synchronized capture:**
If you use `DecklinkVideoAudioSourceBlock` (as configured in its dedicated section), the source setup becomes:
```csharp
// Assuming videoSourceSettings and audioSourceSettings are configured for the same device
var videoAudioSource = new DecklinkVideoAudioSourceBlock(videoSourceSettings, audioSourceSettings);
// ... (encoders and sink setup as above) ...
// Connect video and audio from the combined source
pipeline.Connect(videoAudioSource.VideoOutput, videoEncoder.Input);
pipeline.Connect(videoAudioSource.AudioOutput, audioEncoder.Input);
// ... (connect encoders to sink and start pipeline as above) ...
```
This ensures that audio and video are sourced from the Decklink device in a synchronized manner by the SDK.
## Troubleshooting Tips
- **No Devices Found**: Ensure Blackmagic drivers/SDK are installed and up-to-date. Check if the device is recognized by Blackmagic Desktop Video Setup.
- **Format Mismatch**: Verify the device supports your selected video/audio mode, format, and connection type. For sources with `Mode = DecklinkMode.Unknown` (auto-detect), ensure a stable signal is present.
- **Performance Issues**: Check system resources (CPU, RAM, disk I/O). Consider lowering resolution/framerate if issues persist.
- **Signal Detection**: For input devices, check cable connections and ensure the source device is outputting a valid signal.
- **"Unable to build ...Block" errors**: Double-check that all settings are valid for the selected device and mode. Ensure the correct `DeviceNumber` is used if multiple Decklink cards are present.
## Sample Applications
For complete working examples, refer to these sample applications:
- [Decklink Demo](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK/WPF/CSharp/Decklink%20Demo)
## Conclusion
The Blackmagic Decklink blocks in the VisioForge Media Blocks SDK provide a powerful and flexible way to integrate professional video and audio hardware into your .NET applications. By leveraging the specific source and sink blocks, including the combined audio/video blocks, you can efficiently implement complex capture and playback workflows. Always refer to the specific settings classes for detailed configuration options.
For additional support or questions, please refer to our [documentation](https://www.visioforge.com/documentation) or contact our support team.
---END OF PAGE---
# Local File: .\dotnet\mediablocks\Demuxers\index.md
---
title: .Net Media Demuxer Blocks Guide
description: Explore a complete guide to .Net Media SDK demuxer blocks. Learn about MPEG-TS, QT (MP4/MOV), and Universal demuxers for your media processing pipelines.
sidebar_label: Demuxers
---
# Demuxer Blocks - VisioForge Media Blocks SDK .Net
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
Demuxer blocks are essential components in media processing pipelines. They take a multimedia stream, typically from a file or network source, and separate it into its constituent elementary streams, such as video, audio, and subtitles. This allows for individual processing or rendering of each stream. VisioForge Media Blocks SDK .Net provides several demuxer blocks to handle various container formats.
## MPEG-TS Demux Block
The `MPEGTSDemuxBlock` is used to demultiplex MPEG Transport Streams (MPEG-TS). MPEG-TS is a standard format for transmission and storage of audio, video, and Program and System Information Protocol (PSIP) data. It is commonly used in digital television broadcasting and streaming.
### Block info
Name: `MPEGTSDemuxBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input | MPEG-TS Data | 1 |
| Output video | Depends on stream content | 0 or 1+ |
| Output audio | Depends on stream content | 0 or 1+ |
| Output subtitle | Depends on stream content | 0 or 1+ |
| Output metadata | Depends on stream content | 0 or 1+ |
### Settings
The `MPEGTSDemuxBlock` is configured using `MPEGTSDemuxSettings`.
Key properties of `MPEGTSDemuxSettings`:
- `Latency` (`TimeSpan`): Gets or sets the latency. Default is 700 milliseconds.
- `ProgramNumber` (int): Gets or sets the program number. Use -1 for default/automatic selection.
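For example, a settings object tuned for a lower-latency source might look like this (the values are illustrative):
```csharp
var settings = new MPEGTSDemuxSettings
{
    Latency = TimeSpan.FromMilliseconds(500), // lower than the 700 ms default for reduced delay
    ProgramNumber = -1                        // -1 = automatic program selection
};
```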
### The sample pipeline
This example shows how to connect a source (like `HTTPSourceBlock` for a network stream or `UniversalSourceBlock` for a local file that outputs raw MPEG-TS data) to `MPEGTSDemuxBlock`, and then connect its outputs to respective renderer blocks.
```mermaid
graph LR;
DataSourceBlock -- MPEG-TS Data --> MPEGTSDemuxBlock;
MPEGTSDemuxBlock -- Video Stream --> VideoRendererBlock;
MPEGTSDemuxBlock -- Audio Stream --> AudioRendererBlock;
MPEGTSDemuxBlock -- Subtitle Stream --> SubtitleOverlayOrRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Assume 'dataSourceBlock' is a source block providing MPEG-TS data
// For example, a UniversalSourceBlock reading a .ts file or an HTTP source.
// var dataSourceBlock = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync("input.ts"));
// For this example, let's assume dataSourceBlock.Output provides the MPEG-TS stream.
var mpegTSDemuxSettings = new MPEGTSDemuxSettings();
// mpegTSDemuxSettings.ProgramNumber = 1; // Optionally select a specific program
// Create MPEG-TS Demuxer Block
// Constructor parameters control which streams to attempt to render
var mpegTSDemuxBlock = new MPEGTSDemuxBlock(
renderVideo: true,
renderAudio: true,
renderSubtitle: true,
renderMetadata: false);
// Connect the data source to the demuxer's input
// pipeline.Connect(dataSourceBlock.Output, mpegTSDemuxBlock.Input); // Assuming dataSourceBlock is defined
// Create renderers
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1 is your display control
var audioRenderer = new AudioRendererBlock();
// var subtitleRenderer = ... ; // A block to handle subtitle rendering or overlay
// Connect demuxer outputs
if (mpegTSDemuxBlock.VideoOutput != null)
{
pipeline.Connect(mpegTSDemuxBlock.VideoOutput, videoRenderer.Input);
}
if (mpegTSDemuxBlock.AudioOutput != null)
{
pipeline.Connect(mpegTSDemuxBlock.AudioOutput, audioRenderer.Input);
}
if (mpegTSDemuxBlock.SubtitleOutput != null)
{
// pipeline.Connect(mpegTSDemuxBlock.SubtitleOutput, subtitleRenderer.Input); // Connect to a subtitle handler
}
// Start pipeline
// await pipeline.StartAsync(); // Start once dataSourceBlock is connected
```
### Remarks
- Ensure that the input to `MPEGTSDemuxBlock` is raw MPEG-TS data. If you are using a `UniversalSourceBlock` with a `.ts` file, it might already demultiplex the stream. In such cases, `MPEGTSDemuxBlock` might be used if `UniversalSourceBlock` is configured to output the raw container stream or if the stream comes from a source like `SRTRAWSourceBlock`.
- The availability of video, audio, or subtitle outputs depends on the content of the MPEG-TS stream.
### Platforms
Windows, macOS, Linux, iOS, Android.
## QT Demux Block (MP4/MOV)
The `QTDemuxBlock` is designed to demultiplex QuickTime (QT) container formats, which include MP4 and MOV files. These formats are widely used for storing video, audio, and other multimedia content.
### Block info
Name: `QTDemuxBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input | MP4/MOV Data | 1 |
| Output video | Depends on stream content | 0 or 1+ |
| Output audio | Depends on stream content | 0 or 1+ |
| Output subtitle | Depends on stream content | 0 or 1+ |
| Output metadata | Depends on stream content | 0 or 1+ |
### Settings
The `QTDemuxBlock` does not have a dedicated settings class; it is configured implicitly through its constructor parameters (`renderVideo`, `renderAudio`, etc.). The underlying GStreamer element `qtdemux` handles the demultiplexing automatically.
### The sample pipeline
This example shows how to connect a source block that outputs raw MP4/MOV data to `QTDemuxBlock`, and then connect its outputs to respective renderer blocks.
```mermaid
graph LR;
DataSourceBlock -- MP4/MOV Data --> QTDemuxBlock;
QTDemuxBlock -- Video Stream --> VideoRendererBlock;
QTDemuxBlock -- Audio Stream --> AudioRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Assume 'dataSourceBlock' is a source block providing MP4/MOV data.
// This could be a StreamSourceBlock feeding raw MP4 data, or a custom source.
// For typical file playback, UniversalSourceBlock directly provides decoded streams.
// QTDemuxBlock is used when you have the container data and need to demux it within the pipeline.
// Example: var fileStream = File.OpenRead("myvideo.mp4");
// var streamSource = new StreamSourceBlock(fileStream); // StreamSourceBlock provides raw data
// Create QT Demuxer Block
// Constructor parameters control which streams to attempt to render
var qtDemuxBlock = new QTDemuxBlock(
renderVideo: true,
renderAudio: true,
renderSubtitle: false,
renderMetadata: false);
// Connect the data source to the demuxer's input
// pipeline.Connect(streamSource.Output, qtDemuxBlock.Input); // Assuming streamSource is defined
// Create renderers
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1
var audioRenderer = new AudioRendererBlock();
// Connect demuxer outputs
if (qtDemuxBlock.VideoOutput != null)
{
pipeline.Connect(qtDemuxBlock.VideoOutput, videoRenderer.Input);
}
if (qtDemuxBlock.AudioOutput != null)
{
pipeline.Connect(qtDemuxBlock.AudioOutput, audioRenderer.Input);
}
// Start pipeline
// await pipeline.StartAsync(); // Start once dataSourceBlock is connected and pipeline is built
```
### Remarks
- `QTDemuxBlock` is typically used when you have a stream of MP4/MOV container data that needs to be demultiplexed within the pipeline (e.g., from a `StreamSourceBlock` or a custom data source).
- For playing local MP4/MOV files, `UniversalSourceBlock` is often more convenient as it handles both demuxing and decoding.
- The availability of outputs depends on the actual streams present in the MP4/MOV file.
### Platforms
Windows, macOS, Linux, iOS, Android.
## Universal Demux Block
The `UniversalDemuxBlock` provides a flexible way to demultiplex various media container formats based on provided settings or inferred from the input stream. It can handle formats like AVI, MKV, MP4, MPEG-TS, FLV, OGG, and WebM.
This block requires `MediaFileInfo` to be provided for proper initialization of its output pads, as the number and type of streams can vary greatly between files.
### Block info
Name: `UniversalDemuxBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input | Various Container Data | 1 |
| Output video | Depends on stream content and `renderVideo` flag | 0 to N |
| Output audio | Depends on stream content and `renderAudio` flag | 0 to N |
| Output subtitle | Depends on stream content and `renderSubtitle` flag | 0 to N |
| Output metadata | Depends on stream content and `renderMetadata` flag | 0 or 1 |
(N is the number of respective streams in the media file)
### Settings
The `UniversalDemuxBlock` is configured using an implementation of `IUniversalDemuxSettings`. The specific settings class depends on the container format you intend to demultiplex.
- `UniversalDemuxerType` (enum): Specifies the type of demuxer to use. Can be `Auto`, `MKV`, `MP4`, `AVI`, `MPEGTS`, `MPEGPS`, `FLV`, `OGG`, `WebM`.
- Based on the `UniversalDemuxerType`, you would create a corresponding settings object:
- `AVIDemuxSettings`
- `FLVDemuxSettings`
- `MKVDemuxSettings`
- `MP4DemuxSettings`
- `MPEGPSDemuxSettings`
- `MPEGTSDemuxSettings` (includes `Latency` and `ProgramNumber` properties)
- `OGGDemuxSettings`
- `WebMDemuxSettings`
- `UniversalDemuxSettings` (for `Auto` type)
The `UniversalDemuxerTypeHelper.CreateSettings(UniversalDemuxerType type)` method can be used to create the appropriate settings object.
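For example, to create settings for a known container type without constructing the settings class directly:
```csharp
// Create MKV demuxer settings via the helper
IUniversalDemuxSettings demuxSettings =
    UniversalDemuxerTypeHelper.CreateSettings(UniversalDemuxerType.MKV);
```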
### Constructor
`UniversalDemuxBlock(IUniversalDemuxSettings settings, MediaFileInfo info, bool renderVideo = true, bool renderAudio = true, bool renderSubtitle = false, bool renderMetadata = false)`
`UniversalDemuxBlock(MediaFileInfo info, bool renderVideo = true, bool renderAudio = true, bool renderSubtitle = false, bool renderMetadata = false)` (uses `UniversalDemuxSettings` for auto type detection)
**Crucially, `MediaFileInfo` must be provided to the constructor.** This object, typically obtained by analyzing the media file beforehand (e.g., using `MediaInfoReader`), informs the block about the number and types of streams, allowing it to create the correct number of output pads.
### The sample pipeline
This example demonstrates using `UniversalDemuxBlock` to demultiplex a file. Note that a data source block providing the raw file data to the `UniversalDemuxBlock` is implied.
```mermaid
graph LR;
DataSourceBlock -- Container Data --> UniversalDemuxBlock;
UniversalDemuxBlock -- Video Stream 1 --> VideoRendererBlock1;
UniversalDemuxBlock -- Audio Stream 1 --> AudioRendererBlock1;
UniversalDemuxBlock -- Subtitle Stream 1 --> SubtitleHandler1;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// 1. Obtain MediaFileInfo for your media file
var mediaInfoReader = new MediaInfoReader(Context); // Assuming Context is your logging context
MediaFileInfo mediaInfo = await mediaInfoReader.GetInfoAsync("path/to/your/video.mkv");
if (mediaInfo == null)
{
Console.WriteLine("Failed to get media info.");
return;
}
// 2. Choose or create Demuxer Settings
// Example: Auto-detect demuxer type
IUniversalDemuxSettings demuxSettings = new UniversalDemuxSettings();
// Or, specify a type, e.g., for an MKV file:
// IUniversalDemuxSettings demuxSettings = new MKVDemuxSettings();
// Or, for MPEG-TS with specific program:
// var mpegTsSettings = new MPEGTSDemuxSettings { ProgramNumber = 1 };
// IUniversalDemuxSettings demuxSettings = mpegTsSettings;
// 3. Create UniversalDemuxBlock
var universalDemuxBlock = new UniversalDemuxBlock(
demuxSettings,
mediaInfo,
renderVideo: true, // Process video streams
renderAudio: true, // Process audio streams
renderSubtitle: true // Process subtitle streams
);
// 4. Connect a data source that provides the raw file stream to UniversalDemuxBlock's input.
// This step is crucial and depends on how you get the file data.
// For instance, using a FileSource configured to output raw data, or a StreamSourceBlock.
// Example with a hypothetical RawFileSourceBlock (not a standard block, for illustration):
// var rawFileSource = new RawFileSourceBlock("path/to/your/video.mkv");
// pipeline.Connect(rawFileSource.Output, universalDemuxBlock.Input);
// 5. Connect outputs
// Video outputs (MediaBlockPad[])
var videoOutputs = universalDemuxBlock.VideoOutputs;
if (videoOutputs.Length > 0)
{
// Example: connect the first video stream
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1
pipeline.Connect(videoOutputs[0], videoRenderer.Input);
}
// Audio outputs (MediaBlockPad[])
var audioOutputs = universalDemuxBlock.AudioOutputs;
if (audioOutputs.Length > 0)
{
// Example: connect the first audio stream
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(audioOutputs[0], audioRenderer.Input);
}
// Subtitle outputs (MediaBlockPad[])
var subtitleOutputs = universalDemuxBlock.SubtitleOutputs;
if (subtitleOutputs.Length > 0)
{
// Example: connect the first subtitle stream to a conceptual handler
// var subtitleHandler = new MySubtitleHandlerBlock();
// pipeline.Connect(subtitleOutputs[0], subtitleHandler.Input);
}
// Metadata output (if renderMetadata was true and metadata stream exists)
var metadataOutputs = universalDemuxBlock.MetadataOutputs;
if (metadataOutputs.Length > 0 && metadataOutputs[0] != null)
{
// Handle metadata stream
}
// Start pipeline after all connections are made
// await pipeline.StartAsync();
```
### Remarks
- **`MediaFileInfo` is mandatory** for `UniversalDemuxBlock` to correctly initialize its output pads based on the streams present in the file.
- The `renderVideo`, `renderAudio`, and `renderSubtitle` flags in the constructor determine if outputs for these stream types will be created and processed. If set to `false`, respective streams will be ignored (or sent to internal null renderers if present in the file but not rendered).
- The `UniversalDemuxBlock` is powerful for scenarios where you need to explicitly manage the demuxing process for various formats or select specific streams from files with multiple tracks.
- For simple playback of common file formats, `UniversalSourceBlock` often provides a more straightforward solution as it integrates demuxing and decoding. `UniversalDemuxBlock` offers more granular control.
### Platforms
Windows, macOS, Linux, iOS, Android. (Platform support for specific formats may depend on underlying GStreamer plugins.)
---END OF PAGE---
# Local File: .\dotnet\mediablocks\GettingStarted\camera.md
---
title: Creating Camera Applications with Media Blocks SDK
description: Learn how to build powerful camera viewing applications with Media Blocks SDK .Net. This step-by-step tutorial covers device enumeration, format selection, camera configuration, pipeline creation, and video rendering for desktop and mobile platforms.
sidebar_label: Camera Applications
---
# Building Camera Applications with Media Blocks SDK
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## Introduction
This comprehensive guide demonstrates how to create a fully functional camera viewing application using the Media Blocks SDK .Net. The SDK provides a robust framework for capturing, processing, and displaying video streams across multiple platforms including Windows, macOS, iOS, and Android.
## Architecture Overview
To create a camera viewer application, you'll need to understand two fundamental components:
1. **System Video Source** - Captures the video stream from connected camera devices
2. **Video Renderer** - Displays the captured video on screen with configurable settings
These components work together within a pipeline architecture that manages media processing.
## Essential Media Blocks
To build a camera application, you need to add the following blocks to your pipeline:
- **[System Video Source Block](../Sources/index.md)** - Connects to and reads from camera devices
- **[Video Renderer Block](../VideoRendering/index.md)** - Displays the video with configurable rendering options
## Setting Up the Pipeline
### Creating the Base Pipeline
First, create a pipeline object that will manage the media flow:
```csharp
using VisioForge.Core.MediaBlocks;
// Initialize the pipeline
var pipeline = new MediaBlocksPipeline();
// Add error handling
pipeline.OnError += (sender, args) =>
{
Console.WriteLine($"Pipeline error: {args.Message}");
};
```
### Camera Device Enumeration
Before adding a camera source, you need to enumerate the available devices and select one:
```csharp
// Get all available video devices asynchronously
var videoDevices = await DeviceEnumerator.Shared.VideoSourcesAsync();
// Display available devices (useful for user selection)
foreach (var device in videoDevices)
{
Console.WriteLine($"Device: {device.Name} [{device.API}]");
}
// Select the first available device (ensure videoDevices is not empty in production code)
var selectedDevice = videoDevices[0];
```
### Camera Format Selection
Each camera supports different resolutions and frame rates. You can enumerate and select the optimal format:
```csharp
// Display available formats for the selected device
foreach (var format in selectedDevice.VideoFormats)
{
Console.WriteLine($"Format: {format.Width}x{format.Height} {format.Format}");
// Display available frame rates for this format
foreach (var frameRate in format.FrameRateList)
{
Console.WriteLine($" Frame Rate: {frameRate}");
}
}
// Select the optimal format (in this example, we look for HD resolution)
var hdFormat = selectedDevice.GetHDVideoFormatAndFrameRate(out var frameRate);
var formatToUse = hdFormat ?? selectedDevice.VideoFormats[0];
```
## Configuring Camera Settings
### Creating Source Settings
Configure the camera source settings with your selected device and format:
```csharp
// Create camera settings with the selected device and format
var videoSourceSettings = new VideoCaptureDeviceSourceSettings(selectedDevice)
{
Format = formatToUse.ToFormat()
};
// Set the desired frame rate (selecting the highest available)
if (formatToUse.FrameRateList.Count > 0)
{
videoSourceSettings.Format.FrameRate = formatToUse.FrameRateList.Max();
}
// Optional: Enable force frame rate to maintain consistent timing
videoSourceSettings.Format.ForceFrameRate = true;
// Platform-specific settings
#if __ANDROID__
// Android-specific settings
videoSourceSettings.VideoStabilization = true;
#elif __IOS__ && !__MACCATALYST__
// iOS-specific settings
videoSourceSettings.Position = IOSVideoSourcePosition.Back;
videoSourceSettings.Orientation = IOSVideoSourceOrientation.Portrait;
#endif
```
### Creating the Video Source Block
Now create the system video source block with your configured settings:
```csharp
// Create the video source block
var videoSource = new SystemVideoSourceBlock(videoSourceSettings);
```
## Setting Up Video Display
### Creating the Video Renderer
Add a video renderer to display the captured video:
```csharp
// Create the video renderer and connect it to your UI component
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
// Optional: Configure renderer settings
videoRenderer.Settings.IsSync = true;
```
### Advanced Renderer Configuration
For more control over video rendering, you can customize renderer settings:
```csharp
// Enable snapshot capabilities
videoRenderer.Settings.EnableSnapshot = true;
// Configure subtitle overlay if needed
videoRenderer.SubtitleEnabled = false;
```
## Connecting the Pipeline
Connect the video source to the renderer to establish the media flow:
```csharp
// Connect the output of the video source to the input of the renderer
pipeline.Connect(videoSource.Output, videoRenderer.Input);
```
## Managing the Pipeline Lifecycle
### Starting the Pipeline
Start the pipeline to begin capturing and displaying video:
```csharp
// Start the pipeline asynchronously
await pipeline.StartAsync();
```
### Taking Snapshots
Capture still images from the video stream:
```csharp
// Take a snapshot and save it as a JPEG file
await videoRenderer.Snapshot_SaveAsync("camera_snapshot.jpg", SkiaSharp.SKEncodedImageFormat.Jpeg, 90);
// Or get the snapshot as a bitmap for further processing
var bitmap = await videoRenderer.Snapshot_GetAsync();
```
### Stopping the Pipeline
When finished, properly stop the pipeline:
```csharp
// Stop the pipeline asynchronously
await pipeline.StopAsync();
```
## Platform-Specific Considerations
The Media Blocks SDK supports cross-platform development with specific optimizations:
- **Windows**: Supports both Media Foundation and Kernel Streaming APIs
- **macOS/iOS**: Utilizes AVFoundation for optimal performance
- **Android**: Provides access to camera features like stabilization and orientation
## Error Handling and Troubleshooting
Implement proper error handling to ensure a stable application:
```csharp
try
{
// Pipeline operations
await pipeline.StartAsync();
}
catch (Exception ex)
{
Console.WriteLine($"Error starting pipeline: {ex.Message}");
// Handle the exception appropriately
}
```
## Complete Implementation Example
This example demonstrates a complete camera viewer implementation:
```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using VisioForge.Core.MediaBlocks;
using VisioForge.Core.MediaBlocks.Sources;
using VisioForge.Core.MediaBlocks.VideoRendering;
using VisioForge.Core.Types.X.Sources;
public class CameraViewerExample
{
private MediaBlocksPipeline _pipeline;
private SystemVideoSourceBlock _videoSource;
private VideoRendererBlock _videoRenderer;
public async Task InitializeAsync(IVideoView videoView)
{
// Create pipeline
_pipeline = new MediaBlocksPipeline();
_pipeline.OnError += (s, e) => Console.WriteLine(e.Message);
// Enumerate devices
var devices = await DeviceEnumerator.Shared.VideoSourcesAsync();
if (devices.Length == 0)
{
throw new Exception("No camera devices found");
}
// Select device and format
var device = devices[0];
var format = device.GetHDOrAnyVideoFormatAndFrameRate(out var frameRate);
// Create settings
var settings = new VideoCaptureDeviceSourceSettings(device);
if (format != null)
{
settings.Format = format.ToFormat();
if (frameRate != null && !frameRate.IsEmpty)
{
settings.Format.FrameRate = frameRate;
}
}
// Create blocks
_videoSource = new SystemVideoSourceBlock(settings);
_videoRenderer = new VideoRendererBlock(_pipeline, videoView);
// Build pipeline
_pipeline.AddBlock(_videoSource);
_pipeline.AddBlock(_videoRenderer);
_pipeline.Connect(_videoSource.Output, _videoRenderer.Input);
// Start pipeline
await _pipeline.StartAsync();
}
public async Task StopAsync()
{
if (_pipeline != null)
{
await _pipeline.StopAsync();
_pipeline.Dispose();
}
}
public async Task TakeSnapshotAsync(string filename)
{
await _videoRenderer.Snapshot_SaveAsync(filename,
SkiaSharp.SKEncodedImageFormat.Jpeg, 90);
}
}
```
## Conclusion
With Media Blocks SDK .Net, building powerful camera applications becomes straightforward. The component-based architecture provides flexibility and performance across platforms while abstracting the complexities of camera device integration.
For complete source code examples, please visit our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK/WPF/CSharp/Simple%20Capture%20Demo).
---END OF PAGE---
# Local File: .\dotnet\mediablocks\GettingStarted\device-enum.md
---
title: Complete Guide to Media Device Enumeration in .NET
description: Learn how to efficiently enumerate video cameras, audio inputs/outputs, Blackmagic Decklink devices, NDI sources, and GenICam/GigE Vision cameras in your .NET applications using the Media Blocks SDK. This tutorial provides practical code examples for device discovery and integration.
sidebar_label: Device Enumeration
order: 0
---
# Complete Guide to Media Device Enumeration in .NET
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The Media Blocks SDK provides a powerful and efficient way to discover and work with various media devices in your .NET applications. This guide will walk you through the process of enumerating different types of media devices using the SDK's `DeviceEnumerator` class.
## Introduction to Device Enumeration
Device enumeration is a critical first step when developing applications that interact with media hardware. The `DeviceEnumerator` class provides a centralized way to detect and list all available media devices connected to your system.
The SDK uses a singleton pattern for device enumeration, making it easy to access the functionality from anywhere in your code:
```csharp
// Access the shared DeviceEnumerator instance
var enumerator = DeviceEnumerator.Shared;
```
## Discovering Video Input Devices
### Standard Video Sources
To list all available video input devices (webcams, capture cards, virtual cameras):
```csharp
var videoSources = await DeviceEnumerator.Shared.VideoSourcesAsync();
foreach (var device in videoSources)
{
Debug.WriteLine($"Video device found: {device.Name}");
// You can access additional properties here if needed
}
```
The `VideoCaptureDeviceInfo` objects returned provide detailed information about each device, including device name, internal identifiers, and API type.
## Working with Audio Devices
### Enumerating Audio Input Sources
To discover microphones and other audio input devices:
```csharp
var audioSources = await DeviceEnumerator.Shared.AudioSourcesAsync();
foreach (var device in audioSources)
{
Debug.WriteLine($"Audio input device found: {device.Name}");
// Additional device information can be accessed here
}
```
You can also filter audio devices by their API type:
```csharp
// Get only audio sources for a specific API
var audioSources = await DeviceEnumerator.Shared.AudioSourcesAsync(AudioCaptureDeviceAPI.DirectSound);
```
### Finding Audio Output Devices
For speakers, headphones, and other audio output destinations:
```csharp
var audioOutputs = await DeviceEnumerator.Shared.AudioOutputsAsync();
foreach (var device in audioOutputs)
{
Debug.WriteLine($"Audio output device found: {device.Name}");
// Process device information as needed
}
```
Similar to audio sources, you can filter outputs by API:
```csharp
// Get only audio outputs for a specific API
var audioOutputs = await DeviceEnumerator.Shared.AudioOutputsAsync(AudioOutputDeviceAPI.DirectSound);
```
## Professional Blackmagic Decklink Integration
### Decklink Video Input Sources
For professional video workflows using Blackmagic hardware:
```csharp
var decklinkVideoSources = await DeviceEnumerator.Shared.DecklinkVideoSourcesAsync();
foreach (var device in decklinkVideoSources)
{
Debug.WriteLine($"Decklink video input: {device.Name}");
// You can work with specific Decklink properties here
}
```
### Decklink Audio Input Sources
To access audio channels from Decklink devices:
```csharp
var decklinkAudioSources = await DeviceEnumerator.Shared.DecklinkAudioSourcesAsync();
foreach (var device in decklinkAudioSources)
{
Debug.WriteLine($"Decklink audio input: {device.Name}");
// Process Decklink audio device information
}
```
### Decklink Video Output Destinations
For sending video to Decklink output devices:
```csharp
var decklinkVideoOutputs = await DeviceEnumerator.Shared.DecklinkVideoSinksAsync();
foreach (var device in decklinkVideoOutputs)
{
Debug.WriteLine($"Decklink video output: {device.Name}");
// Access output device properties as needed
}
```
### Decklink Audio Output Destinations
For routing audio to Decklink hardware outputs:
```csharp
var decklinkAudioOutputs = await DeviceEnumerator.Shared.DecklinkAudioSinksAsync();
foreach (var device in decklinkAudioOutputs)
{
Debug.WriteLine($"Decklink audio output: {device.Name}");
// Work with audio output configuration here
}
```
## Network Device Integration
### NDI Sources Discovery
To find NDI sources available on your network:
```csharp
var ndiSources = await DeviceEnumerator.Shared.NDISourcesAsync();
foreach (var device in ndiSources)
{
Debug.WriteLine($"NDI source discovered: {device.Name}");
// Process NDI-specific properties and information
}
```
### ONVIF Network Camera Discovery
To find IP cameras supporting the ONVIF protocol:
```csharp
// Set a timeout for discovery (2 seconds in this example)
var timeout = TimeSpan.FromSeconds(2);
var onvifDevices = await DeviceEnumerator.Shared.ONVIF_ListSourcesAsync(timeout, null);
foreach (var deviceUri in onvifDevices)
{
Debug.WriteLine($"ONVIF camera found at: {deviceUri}");
// Connect to the camera using the discovered URI
}
```
## Industrial Camera Support
### Basler Industrial Cameras
For applications requiring Basler industrial cameras:
```csharp
var baslerCameras = await DeviceEnumerator.Shared.BaslerSourcesAsync();
foreach (var device in baslerCameras)
{
Debug.WriteLine($"Basler camera detected: {device.Name}");
// Access Basler-specific camera features
}
```
### Allied Vision Industrial Cameras
To work with Allied Vision cameras in your application:
```csharp
var alliedCameras = await DeviceEnumerator.Shared.AlliedVisionSourcesAsync();
foreach (var device in alliedCameras)
{
Debug.WriteLine($"Allied Vision camera found: {device.Name}");
// Configure Allied Vision specific parameters
}
```
### Spinnaker SDK Compatible Cameras
For cameras supporting the Spinnaker SDK (Windows only):
```csharp
#if NET_WINDOWS
var spinnakerCameras = await DeviceEnumerator.Shared.SpinnakerSourcesAsync();
foreach (var device in spinnakerCameras)
{
Debug.WriteLine($"Spinnaker SDK camera: {device.Name}");
Debug.WriteLine($"Model: {device.Model}, Vendor: {device.Vendor}");
Debug.WriteLine($"Resolution: {device.WidthMax}x{device.HeightMax}");
// Work with camera-specific properties
}
#endif
```
### Generic GenICam Standard Cameras
For other industrial cameras supporting the GenICam standard:
```csharp
var genicamCameras = await DeviceEnumerator.Shared.GenICamSourcesAsync();
foreach (var device in genicamCameras)
{
Debug.WriteLine($"GenICam compatible device: {device.Name}");
Debug.WriteLine($"Model: {device.Model}, Vendor: {device.Vendor}");
Debug.WriteLine($"Protocol: {device.Protocol}, Serial: {device.SerialNumber}");
// Work with standard GenICam features
}
```
## Device Monitoring
The SDK also supports monitoring device connections and disconnections:
```csharp
// Start monitoring for video device changes
await DeviceEnumerator.Shared.StartVideoSourceMonitorAsync();
// Start monitoring for audio device changes
await DeviceEnumerator.Shared.StartAudioSourceMonitorAsync();
await DeviceEnumerator.Shared.StartAudioSinkMonitorAsync();
// Subscribe to device change events
DeviceEnumerator.Shared.OnVideoSourceAdded += (sender, device) =>
{
Debug.WriteLine($"New video device connected: {device.Name}");
};
DeviceEnumerator.Shared.OnVideoSourceRemoved += (sender, device) =>
{
Debug.WriteLine($"Video device disconnected: {device.Name}");
};
```
## Platform-Specific Considerations
### Windows
On Windows, the SDK can detect USB device connection and removal events at the system level:
```csharp
#if NET_WINDOWS
// Subscribe to system-wide device events
DeviceEnumerator.Shared.OnDeviceAdded += (sender, args) =>
{
// Refresh device lists when new hardware is connected
RefreshDeviceLists();
};
DeviceEnumerator.Shared.OnDeviceRemoved += (sender, args) =>
{
// Update UI when hardware is disconnected
RefreshDeviceLists();
};
#endif
```
By default, Media Foundation device enumeration is disabled to avoid duplication with DirectShow devices. You can enable it if needed:
```csharp
#if NET_WINDOWS
// Enable Media Foundation device enumeration if required
DeviceEnumerator.Shared.IsEnumerateMediaFoundationDevices = true;
#endif
```
### iOS and Android
On mobile platforms, the SDK handles the required permission requests when enumerating devices:
```csharp
#if __IOS__ || __ANDROID__
// This will automatically request camera permissions if needed
var videoSources = await DeviceEnumerator.Shared.VideoSourcesAsync();
// This will automatically request microphone permissions if needed
var audioSources = await DeviceEnumerator.Shared.AudioSourcesAsync();
#endif
```
## Best Practices for Device Enumeration
When working with device enumeration in production applications:
1. Always handle cases where no devices are found
2. Consider caching device lists when appropriate to improve performance
3. Implement proper exception handling for device access failures
4. Provide clear user feedback when required devices are missing
5. Use the async methods to avoid blocking the UI thread during enumeration
6. Clean up resources by calling `Dispose()` when you're done with the DeviceEnumerator
```csharp
// Proper cleanup when done
DeviceEnumerator.Shared.Dispose();
```
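A short sketch that applies several of these practices (empty-list handling, exception handling, async enumeration); it assumes it runs inside an async method:
```csharp
try
{
    var videoSources = await DeviceEnumerator.Shared.VideoSourcesAsync();
    if (videoSources.Length == 0)
    {
        // Practices 1 and 4: handle the no-device case with clear user feedback
        Console.WriteLine("No cameras found. Please connect a camera and retry.");
        return;
    }
    foreach (var device in videoSources)
    {
        Console.WriteLine($"Camera: {device.Name}");
    }
}
catch (Exception ex)
{
    // Practice 3: device access can fail (drivers, permissions, busy devices)
    Console.WriteLine($"Device enumeration failed: {ex.Message}");
}
```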
---END OF PAGE---
# Local File: .\dotnet\mediablocks\GettingStarted\index.md
---
title: Media Blocks SDK .Net - Developer Quick Start Guide
description: Learn to integrate Media Blocks SDK .Net into your applications with our detailed tutorial. From installation to implementation, discover how to create powerful multimedia pipelines, process video streams, and build robust media applications.
sidebar_label: Getting Started
order: 20
---
# Media Blocks SDK .Net - Developer Quick Start Guide
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## Introduction
This guide provides a comprehensive walkthrough for integrating the Media Blocks SDK .Net into your applications. The SDK is built around a modular pipeline architecture, enabling you to create, connect, and manage multimedia processing blocks for video, audio, and more. Whether you're building video processing tools, streaming solutions, or multimedia applications, this guide will help you get started quickly and correctly.
## SDK Installation Process
The SDK is distributed as a NuGet package for easy integration into your .Net projects. Install it using:
```bash
dotnet add package VisioForge.DotNet.MediaBlocks
```
For platform-specific requirements and additional installation details, refer to the [detailed installation guide](../../install/index.md).
## Core Concepts and Architecture
### MediaBlocksPipeline
- The central class for managing the flow of media data between processing blocks.
- Handles block addition, connection, state management, and event handling.
- Implements `IMediaBlocksPipeline` and exposes events such as `OnError`, `OnStart`, `OnPause`, `OnResume`, `OnStop`, and `OnLoop`.
### MediaBlock and Interfaces
- Each processing unit is a `MediaBlock` (or a derived class), implementing the `IMediaBlock` interface.
- Key interfaces:
- `IMediaBlock`: Base interface for all blocks. Defines properties for `Name`, `Type`, `Input`, `Inputs`, `Output`, `Outputs`, and methods for pipeline context and YAML export.
- `IMediaBlockDynamicInputs`: For blocks that support dynamic input creation (e.g., mixers).
- `IMediaBlockInternals`/`IMediaBlockInternals2`: For internal pipeline management, building, and post-connection logic.
- `IMediaBlockRenderer`: For blocks that render media (e.g., video/audio renderers), with a property to control stream synchronization.
- `IMediaBlockSink`/`IMediaBlockSource`: For blocks that act as sinks (outputs) or sources (inputs).
- `IMediaBlockSettings`: For settings objects that can create blocks.
### Pads and Media Types
- Blocks are connected via `MediaBlockPad` objects, which have a direction (`In`/`Out`) and a media type (`Video`, `Audio`, `Subtitle`, `Metadata`, `Auto`).
- Pads can be connected/disconnected, and their state can be queried.
### Block Types
- The SDK provides a wide range of built-in block types (see `MediaBlockType` enum in the source code) for sources, sinks, renderers, effects, and more.
## Creating and Managing a Pipeline
### 1. Initialize the SDK (if required)
```csharp
using VisioForge.Core;
// Initialize the SDK at application startup
VisioForgeX.InitSDK();
```
### 2. Create a Pipeline and Blocks
```csharp
using VisioForge.Core.MediaBlocks;
// Create a new pipeline instance
var pipeline = new MediaBlocksPipeline();
// Example: Create a virtual video source and a video renderer
var virtualSource = new VirtualVideoSourceBlock(new VirtualVideoSourceSettings());
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // VideoView1 is your UI control
// Add blocks to the pipeline
pipeline.AddBlock(virtualSource);
pipeline.AddBlock(videoRenderer);
```
### 3. Connect Blocks
```csharp
// Connect the output of the source to the input of the renderer
pipeline.Connect(virtualSource.Output, videoRenderer.Input);
```
- You can also use `pipeline.Connect(sourceBlock, targetBlock)` to connect default pads, or connect multiple pads for complex graphs.
- For blocks supporting dynamic inputs, use the `IMediaBlockDynamicInputs` interface.
### 4. Start and Stop the Pipeline
```csharp
// Start the pipeline asynchronously
await pipeline.StartAsync();
// ... later, stop processing
await pipeline.StopAsync();
```
### 5. Resource Cleanup
```csharp
// Dispose of the pipeline when done
pipeline.Dispose();
```
### 6. SDK Cleanup (if required)
```csharp
// Release all SDK resources at application shutdown
VisioForgeX.DestroySDK();
```
## Error Handling and Events
- Subscribe to pipeline events for robust error and state management:
```csharp
pipeline.OnError += (sender, args) =>
{
Console.WriteLine($"Pipeline error: {args.Message}");
// Implement your error handling logic here
};
pipeline.OnStart += (sender, args) =>
{
Console.WriteLine("Pipeline started");
};
pipeline.OnStop += (sender, args) =>
{
Console.WriteLine("Pipeline stopped");
};
```
## Advanced Features
- **Dynamic Block Addition/Removal:** You can add or remove blocks at runtime as needed.
- **Pad Management:** Use `MediaBlockPad` methods to query and manage pad connections.
- **Hardware/Software Decoder Selection:** Use helper methods in `MediaBlocksPipeline` for hardware acceleration.
- **Segment Playback:** Set `StartPosition` and `StopPosition` properties for partial playback (see the sketch after this list).
- **Debugging:** Export pipeline graphs for debugging using provided methods.
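For example, segment playback might look like the following sketch; it assumes `StartPosition` and `StopPosition` are `TimeSpan` properties set before starting the pipeline:
```csharp
// Play only the portion between 10 s and 30 s of the source
pipeline.StartPosition = TimeSpan.FromSeconds(10); // assumed TimeSpan property
pipeline.StopPosition = TimeSpan.FromSeconds(30);  // assumed TimeSpan property
await pipeline.StartAsync();
```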
## Example: Minimal Pipeline Setup
```csharp
using VisioForge.Core.MediaBlocks;
var pipeline = new MediaBlocksPipeline();
var source = new VirtualVideoSourceBlock(new VirtualVideoSourceSettings());
var renderer = new VideoRendererBlock(pipeline, videoViewControl);
pipeline.AddBlock(source);
pipeline.AddBlock(renderer);
pipeline.Connect(source.Output, renderer.Input);
await pipeline.StartAsync();
// ...
await pipeline.StopAsync();
pipeline.Dispose();
```
## Reference: Key Interfaces
- `IMediaBlock`: Base interface for all blocks.
- `IMediaBlockDynamicInputs`: For blocks with dynamic input support.
- `IMediaBlockInternals`, `IMediaBlockInternals2`: For internal pipeline logic.
- `IMediaBlockRenderer`: For renderer blocks.
- `IMediaBlockSink`, `IMediaBlockSource`: For sink/source blocks.
- `IMediaBlockSettings`: For block settings objects.
- `IMediaBlocksPipeline`: Main pipeline interface.
- `MediaBlockPad`, `MediaBlockPadDirection`, `MediaBlockPadMediaType`: For pad management.
## Further Reading and Samples
- [Complete Pipeline Implementation](pipeline.md)
- [Media Player Development Guide](player.md)
- [Camera Viewer Application Tutorial](camera.md)
- [GitHub repository with code samples](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK)
For a full list of block types and advanced usage, consult the SDK API reference and source code.
---END OF PAGE---
# Local File: .\dotnet\mediablocks\GettingStarted\pipeline.md
---
title: Media Blocks Pipeline Core for Media Processing
description: Discover how to efficiently utilize the Media Blocks Pipeline to create powerful media applications for video playback, recording, and streaming. Learn essential pipeline operations including creation, block connections, error handling, and proper resource management.
sidebar_label: Pipeline Core Usage
order: 0
---
# Media Blocks Pipeline: Core Functionality
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## Overview of Pipeline and Block Structure
The Media Blocks SDK is built around the `MediaBlocksPipeline` class, which manages a collection of modular processing blocks. Each block implements the `IMediaBlock` interface or one of its specialized variants. Blocks are connected via input and output pads, allowing for flexible media processing chains.
### Main Block Interfaces
- **IMediaBlock**: Base interface for all blocks. Exposes properties for name, type, input/output pads, and methods for YAML conversion and pipeline context retrieval.
- **IMediaBlockDynamicInputs**: For blocks (like muxers) that can create new inputs dynamically. Methods: `CreateNewInput(mediaType)` and `GetInput(mediaType)`.
- **IMediaBlockInternals**: Internal methods for pipeline integration (e.g., `SetContext`, `Build`, `CleanUp`, `GetElement`, `GetCore`).
- **IMediaBlockInternals2**: For post-connection logic (`PostConnect()`).
- **IMediaBlockRenderer**: For renderer blocks, exposes `IsSync` property.
- **IMediaBlockSettings**: For settings/configuration objects that can create a block (`CreateBlock()`).
- **IMediaBlockSink**: For sink blocks, exposes filename/URL getter/setter.
- **IMediaBlockSource**: For source blocks (currently only commented-out pad accessors).
### Pads and Media Types
- **MediaBlockPad**: Represents a connection point (input/output) on a block. Has direction (`In`/`Out`), media type (`Video`, `Audio`, `Subtitle`, `Metadata`, `Auto`), and connection logic.
- **Pad connection**: Use `pipeline.Connect(outputPad, inputPad)` or `pipeline.Connect(block1.Output, block2.Input)`. For dynamic inputs, use `CreateNewInput()` on the sink block.
## Setting Up Your Pipeline Environment
### Creating a New Pipeline Instance
The first step in working with Media Blocks is instantiating a pipeline object:
```csharp
using VisioForge.Core.MediaBlocks;
// Create a standard pipeline instance
var pipeline = new MediaBlocksPipeline();
// Optionally, you can assign a name to your pipeline for easier identification
pipeline.Name = "MainVideoPlayer";
```
### Implementing Robust Error Handling
Media applications must handle various error scenarios that may occur during operation. Implementing proper error handling ensures your application remains stable:
```csharp
// Subscribe to error events to capture and handle exceptions
pipeline.OnError += (sender, args) =>
{
// Log the error message
Debug.WriteLine($"Pipeline error occurred: {args.Message}");
// Implement appropriate error recovery based on the message
if (args.Message.Contains("Access denied"))
{
// Handle permission issues
}
else if (args.Message.Contains("File not found"))
{
// Handle missing file errors
}
};
```
## Managing Media Timing and Navigation
### Retrieving Duration and Position Information
Accurate timing control is essential for media applications:
```csharp
// Get the total duration of the media (returns TimeSpan.Zero for live streams)
var duration = await pipeline.DurationAsync();
Console.WriteLine($"Media duration: {duration.TotalSeconds} seconds");
// Get the current playback position
var position = await pipeline.Position_GetAsync();
Console.WriteLine($"Current position: {position.TotalSeconds} seconds");
```
### Implementing Seeking Functionality
Enable your users to navigate through media content with seeking operations:
```csharp
// Basic seeking to a specific time position
await pipeline.Position_SetAsync(TimeSpan.FromSeconds(10));
// Seeking with keyframe alignment for more efficient navigation
await pipeline.Position_SetAsync(TimeSpan.FromMinutes(2), seekToKeyframe: true);
// Advanced seeking with start and stop positions for partial playback
await pipeline.Position_SetRangeAsync(
TimeSpan.FromSeconds(30), // Start position
TimeSpan.FromSeconds(60) // Stop position
);
```
## Controlling Pipeline Execution Flow
### Starting Media Playback
Control the playback of media with these essential methods:
```csharp
// Start playback immediately
await pipeline.StartAsync();
// Preload media without starting playback (useful for reducing startup delay)
await pipeline.StartAsync(onlyPreload: true);
await pipeline.ResumeAsync(); // Start the preloaded pipeline when ready
```
### Managing Playback States
Monitor and control the pipeline's current execution state:
```csharp
// Check the current state of the pipeline
var state = pipeline.State;
if (state == PlaybackState.Play)
{
Console.WriteLine("Pipeline is currently playing");
}
// Subscribe to important state change events
pipeline.OnStart += (sender, args) =>
{
Console.WriteLine("Pipeline playback has started");
UpdateUIForPlaybackState();
};
pipeline.OnStop += (sender, args) =>
{
Console.WriteLine("Pipeline playback has stopped");
Console.WriteLine($"Stopped at position: {args.Position.TotalSeconds} seconds");
ResetPlaybackControls();
};
pipeline.OnPause += (sender, args) =>
{
Console.WriteLine("Pipeline playback is paused");
UpdatePauseButtonState();
};
pipeline.OnResume += (sender, args) =>
{
Console.WriteLine("Pipeline playback has resumed");
UpdatePlayButtonState();
};
```
### Pausing and Resuming Operations
Implement pause and resume functionality for better user experience:
```csharp
// Pause the current playback
await pipeline.PauseAsync();
// Resume playback from paused state
await pipeline.ResumeAsync();
```
### Stopping Pipeline Execution
Properly terminate pipeline operations:
```csharp
// Standard stop operation
await pipeline.StopAsync();
// Force stop in time-sensitive scenarios (may affect output file integrity)
await pipeline.StopAsync(force: true);
```
## Building Media Processing Chains
### Connecting Media Processing Blocks
The true power of the Media Blocks SDK comes from connecting specialized blocks to create processing chains:
```csharp
// Basic connection between two blocks
pipeline.Connect(block1.Output, block2.Input);
// Connect blocks with specific media types
pipeline.Connect(videoSource.GetOutputPadByType(MediaBlockPadMediaType.Video),
videoEncoder.GetInputPadByType(MediaBlockPadMediaType.Video));
```
Different blocks may have multiple specialized inputs and outputs (see the sketch after this list):
- Standard I/O: `Input` and `Output` properties
- Media-specific I/O: `VideoOutput`, `AudioOutput`, `VideoInput`, `AudioInput`
- Arrays of I/O: `Inputs[]` and `Outputs[]` for complex blocks
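For instance, a file source exposing separate video and audio streams can feed two dedicated renderers through its media-specific outputs (`fileSource`, `videoRenderer`, and `audioRenderer` are previously created blocks):
```csharp
// Route each media type from the source to its dedicated renderer
pipeline.Connect(fileSource.VideoOutput, videoRenderer.Input);
pipeline.Connect(fileSource.AudioOutput, audioRenderer.Input);
```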
### Working with Dynamic Input Blocks
Some advanced sink blocks dynamically create inputs on demand:
```csharp
// Create a specialized MP4 muxer for recording
var mp4Muxer = new MP4SinkBlock(new MP4SinkSettings("output_recording.mp4"));
// Request a new video input from the muxer
var videoInput = mp4Muxer.CreateNewInput(MediaBlockPadMediaType.Video);
// Connect a video source to the newly created input
pipeline.Connect(videoSource.Output, videoInput);
// Similarly for audio
var audioInput = mp4Muxer.CreateNewInput(MediaBlockPadMediaType.Audio);
pipeline.Connect(audioSource.Output, audioInput);
```
This flexibility enables complex media processing scenarios with multiple input streams.
## Proper Resource Management
### Disposing Pipeline Resources
Media applications can consume significant system resources. Always properly dispose of pipeline objects:
```csharp
// Synchronous disposal pattern
try
{
// Use pipeline
}
finally
{
pipeline.Dispose();
}
```
For modern applications, use the asynchronous pattern to prevent UI freezing:
```csharp
// Asynchronous disposal (preferred for UI applications)
try
{
// Use pipeline
}
finally
{
await pipeline.DisposeAsync();
}
```
### Using 'using' Statements for Automatic Cleanup
Leverage C# language features for automatic resource management:
```csharp
// Automatic disposal with 'using' statement
using (var pipeline = new MediaBlocksPipeline())
{
// Configure and use pipeline
await pipeline.StartAsync();
// Pipeline will be automatically disposed when exiting this block
}
// C# 8.0+ using declaration
using var pipeline = new MediaBlocksPipeline();
// Pipeline will be disposed when the containing method exits
```
## Advanced Pipeline Features
### Playback Rate Control
Adjust playback speed for slow-motion or fast-forward effects:
```csharp
// Get current playback rate
double currentRate = await pipeline.Rate_GetAsync();
// Set playback rate (1.0 is normal speed)
await pipeline.Rate_SetAsync(0.5); // Slow motion (half speed)
await pipeline.Rate_SetAsync(2.0); // Double speed
```
### Loop Playback Configuration
Implement continuous playback functionality:
```csharp
// Enable looping for continuous playback
pipeline.Loop = true;
// Listen for loop events
pipeline.OnLoop += (sender, args) =>
{
Console.WriteLine("Media has looped back to start");
UpdateLoopCounter();
};
```
### Debug Mode for Development
Enable debugging features during development:
```csharp
// Enable debug mode for more detailed logging
pipeline.Debug_Mode = true;
pipeline.Debug_Dir = Path.Combine(Environment.GetFolderPath(
Environment.SpecialFolder.MyDocuments), "PipelineDebugLogs");
```
## Block Types Reference
The SDK provides a wide range of block types for sources, processing, and sinks. See the `MediaBlockType` enum in the source code for a full list of available block types.
## Notes
- The pipeline supports both synchronous and asynchronous methods for starting, stopping, and disposing. Prefer asynchronous methods in UI or long-running applications.
- Events are available for error handling, state changes, and stream information.
- Use the correct interface for each block type to access specialized features (e.g., dynamic inputs, rendering, settings).
---END OF PAGE---
# Local File: .\dotnet\mediablocks\GettingStarted\player.md
---
title: Media Blocks SDK .Net Player Implementation Guide
description: Learn how to build a robust video player application with Media Blocks SDK .Net. This step-by-step tutorial covers essential components including source blocks, video rendering, audio output configuration, pipeline creation, and advanced playback controls for .NET developers.
sidebar_label: Player Sample
---
# Building a Feature-Rich Video Player with Media Blocks SDK
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
This detailed tutorial walks you through the process of creating a professional-grade video player application using Media Blocks SDK .Net. By following these instructions, you'll understand how to implement key functionalities including media loading, playback control, and audio-video rendering.
## Essential Components for Your Player Application
To construct a fully functional video player, your application pipeline requires these critical building blocks:
- [Universal source](../Sources/index.md) - This versatile component handles media input from various sources, allowing your player to read and process video files from local storage or network streams.
- [Video renderer](../VideoRendering/index.md) - The visual component responsible for displaying video frames on screen with proper timing and formatting.
- [Audio renderer](../AudioRendering/index.md) - Manages sound output, ensuring synchronized audio playback alongside your video content.
## Setting Up the Media Pipeline
### Creating the Foundation
The first step in developing your player involves establishing the media pipeline—the core framework that manages data flow between components.
```csharp
using VisioForge.Core.MediaBlocks;
var pipeline = new MediaBlocksPipeline();
```
### Implementing Error Handling
Robust error management is essential for a reliable player application. Subscribe to the pipeline's error events to capture and respond to exceptions.
```csharp
pipeline.OnError += (sender, args) =>
{
Console.WriteLine(args.Message);
// Additional error handling logic can be implemented here
};
```
### Setting Up Event Listeners
For complete control over your player's lifecycle, implement event handlers for critical state changes:
```csharp
pipeline.OnStart += (sender, args) =>
{
// Execute code when pipeline starts
Console.WriteLine("Playback started");
};
pipeline.OnStop += (sender, args) =>
{
// Execute code when pipeline stops
Console.WriteLine("Playback stopped");
};
```
## Configuring Media Blocks
### Initializing the Source Block
The Universal Source Block serves as the entry point for media content. Configure it with the path to your media file:
```csharp
var sourceSettings = await UniversalSourceSettings.CreateAsync(new Uri(filePath));
var fileSource = new UniversalSourceBlock(sourceSettings);
```
During initialization, the SDK automatically analyzes the file to extract crucial metadata about video and audio streams, enabling proper configuration of downstream components.
### Setting Up Video Display
To render video content on screen, create and configure a Video Renderer Block:
```csharp
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
```
The renderer requires two parameters: a reference to your pipeline and the UI control where video frames will be displayed.
### Configuring Audio Output
For audio playback, you'll need to select and initialize an appropriate audio output device:
```csharp
var audioRenderers = await DeviceEnumerator.Shared.AudioOutputsAsync();
var audioRenderer = new AudioRendererBlock(audioRenderers[0]);
```
This code retrieves available audio output devices and configures the first available option for playback.
## Establishing Component Connections
Once all blocks are configured, you must establish connections between them to create a cohesive media flow:
```csharp
pipeline.Connect(fileSource.VideoOutput, videoRenderer.Input);
pipeline.Connect(fileSource.AudioOutput, audioRenderer.Input);
```
These connections define the path data takes through your application:
- Video data flows from the source to the video renderer
- Audio data flows from the source to the audio renderer
For files containing only video or audio, you can selectively connect only the relevant outputs.
### Validating Media Content
Before playback, you can inspect available streams using the Universal Source Settings:
```csharp
var mediaInfo = await sourceSettings.ReadInfoAsync();
bool hasVideo = mediaInfo.VideoStreams.Count > 0;
bool hasAudio = mediaInfo.AudioStreams.Count > 0;
```
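Combining this check with the connection step, the outputs can be wired conditionally (a short sketch using the blocks created earlier):
```csharp
// Connect only the streams that are actually present in the media file
if (hasVideo)
{
    pipeline.Connect(fileSource.VideoOutput, videoRenderer.Input);
}

if (hasAudio)
{
    pipeline.Connect(fileSource.AudioOutput, audioRenderer.Input);
}
```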
## Controlling Media Playback
### Starting Playback
To begin media playback, call the pipeline's asynchronous start method:
```csharp
await pipeline.StartAsync();
```
Once executed, your application will begin rendering video frames and playing audio through the configured outputs.
### Managing Playback State
To halt playback, invoke the pipeline's stop method:
```csharp
await pipeline.StopAsync();
```
This gracefully terminates all media processing and releases associated resources.
## Advanced Implementation
For a complete implementation example with additional features like seeking, volume control, and full-screen support, refer to our comprehensive source code on [GitHub](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK/WPF/CSharp/Simple%20Player%20Demo%20WPF).
The repository contains working demonstrations for various platforms including WPF, Windows Forms, and cross-platform .NET applications.
---END OF PAGE---
# Local File: .\dotnet\mediablocks\Guides\rtsp-save-original-stream.md
---
title: Save Original RTSP Stream (No Video Re-encoding)
description: Learn how to save an RTSP stream to file (MP4) from your IP camera without re-encoding video. This guide covers how to record RTSP video streams, a common task when users want to save camera footage. Alternatives like ffmpeg save rtsp stream or VLC save rtsp stream to file exist, but this method uses .NET with VisioForge Media Blocks for programmatic control.
sidebar_label: Save RTSP Video without Re-encoding
order: 20
---
# How to Save RTSP Stream to File: Record IP Camera Video without Re-encoding
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## Table of Contents
- [How to Save RTSP Stream to File: Record IP Camera Video without Re-encoding](#how-to-save-rtsp-stream-to-file-record-ip-camera-video-without-re-encoding)
- [Table of Contents](#table-of-contents)
- [Overview](#overview)
- [Core Features](#core-features)
- [Core Concept](#core-concept)
- [Prerequisites](#prerequisites)
- [Code Sample: RTSPRecorder Class](#code-sample-rtsprecorder-class)
- [Explanation of the Code](#explanation-of-the-code)
- [How to Use the `RTSPRecorder`](#how-to-use-the-rtsprecorder)
- [Key Considerations](#key-considerations)
- [Full GitHub Sample](#full-github-sample)
- [Best Practices](#best-practices)
- [Troubleshooting](#troubleshooting)
## Overview
This guide demonstrates how to save an RTSP stream to an MP4 file by capturing the original video stream from an RTSP IP camera without re-encoding the video. This approach is highly beneficial for preserving the original video quality from cameras and minimizing CPU usage when you need to record footage. The audio stream can be passed through or, optionally, re-encoded for better compatibility, allowing you to save the complete streaming data. Tools like FFmpeg and VLC offer command-line or UI-based methods to record an RTSP stream; however, this guide focuses on a programmatic approach using the VisioForge Media Blocks SDK for .NET developers who need to create applications that connect to and record video from RTSP cameras.
## Core Features
- **Direct Stream Recording**: Save RTSP camera feeds without quality loss
- **CPU-Efficient Processing**: No video re-encoding required
- **Flexible Audio Handling**: Pass-through or re-encode audio as needed
- **Professional Integration**: Programmatic control for enterprise applications
- **High Performance**: Optimized for continuous recording
We will be using the VisioForge Media Blocks SDK, a powerful .NET library for building custom media processing applications, to effectively save RTSP to file.
## Core Concept
The main idea is to take the raw video stream from the RTSP source and directly send it to a file sink (e.g., MP4 muxer) without any decoding or encoding steps for the video. This is a common requirement for recording RTSP streams with maximum fidelity.
- **Video Stream**: Passed through directly from the RTSP source to the MP4 sink. This ensures the original video data is saved, crucial for applications that need to record high-quality footage from cameras.
- **Audio Stream**: Can either be passed through directly (if the original audio codec is compatible with the MP4 container) or re-encoded (e.g., to AAC) to ensure compatibility and potentially reduce file size when you save the RTSP stream.
## Prerequisites
You'll need the VisioForge Media Blocks SDK. You can add it to your .NET project via NuGet:
```xml
<!-- Core Media Blocks SDK package (package ID assumed; check NuGet for the current version) -->
<PackageReference Include="VisioForge.DotNet.MediaBlocks" Version="*" />
```
Depending on your target platform (Windows, macOS, Linux, including ARM-based systems like Jetson Nano for embedded camera applications), you will also need the corresponding native runtime packages. For example, on Windows to record video:
```xml
<!-- Windows x64 native runtime package (package ID assumed; see the Deployment Guide below for exact IDs) -->
<PackageReference Include="VisioForge.CrossPlatform.Core.Windows.x64" Version="*" />
```
For detailed information about deployment requirements and platform-specific dependencies, please refer to our [Deployment Guide](../../deployment-x/index.md). Checking these details ensures your video stream capture application works correctly.
Refer to the `RTSP Capture Original.csproj` file in the sample project for a complete list of dependencies for different platforms.
## Code Sample: RTSPRecorder Class
The following C# code defines an `RTSPRecorder` class that encapsulates the logic for capturing and saving the RTSP stream.
```csharp
using System;
using System.Threading.Tasks;
using VisioForge.Core.MediaBlocks;
using VisioForge.Core.MediaBlocks.AudioEncoders;
using VisioForge.Core.MediaBlocks.Sinks;
using VisioForge.Core.MediaBlocks.Sources;
using VisioForge.Core.MediaBlocks.Special;
using VisioForge.Core.Types.Events;
using VisioForge.Core.Types.X.AudioEncoders;
using VisioForge.Core.Types.X.Sinks;
using VisioForge.Core.Types.X.Sources;
namespace RTSPCaptureOriginalStream
{
/// <summary>
/// RTSPRecorder class encapsulates the RTSP recording functionality to save RTSP stream to file.
/// It uses the MediaBlocks SDK to create a pipeline that connects an
/// RTSP source (like an IP camera) to an MP4 sink (file).
/// </summary>
public class RTSPRecorder : IAsyncDisposable
{
/// <summary>
/// The MediaBlocks pipeline that manages the flow of media data.
/// </summary>
public MediaBlocksPipeline Pipeline { get; private set; }
// Private fields for the MediaBlock components
private MediaBlock _muxer; // MP4 container muxer (sink)
private RTSPRAWSourceBlock _rtspRawSource; // RTSP stream source (provides raw streams)
private DecodeBinBlock _decodeBin; // Optional: Audio decoder (if re-encoding audio)
private AACEncoderBlock _audioEncoder; // Optional: AAC audio encoder (if re-encoding audio)
private EventHandler<ErrorsEventArgs> _pipelineErrorHandler; // Pipeline error handler, stored so it can be detached later
private bool disposedValue; // Flag to prevent multiple disposals
/// <summary>
/// Event fired when an error occurs in the pipeline.
/// </summary>
public event EventHandler<ErrorsEventArgs> OnError;
/// <summary>
/// Event fired when a status message is available.
/// </summary>
public event EventHandler<string> OnStatusMessage;
/// <summary>
/// Output filename for the MP4 recording.
/// </summary>
public string Filename { get; set; } = "output.mp4";
/// <summary>
/// Whether to re-encode audio to AAC format (recommended for compatibility).
/// If false, audio is passed through.
/// </summary>
public bool ReencodeAudio { get; set; } = true;
/// <summary>
/// Starts the recording session by creating and configuring the MediaBlocks pipeline.
/// </summary>
/// <param name="rtspSettings">RTSP source configuration settings.</param>
/// <returns>True if the pipeline started successfully, false otherwise.</returns>
public async Task<bool> StartAsync(RTSPRAWSourceSettings rtspSettings)
{
// Create a new MediaBlocks pipeline
Pipeline = new MediaBlocksPipeline();
_pipelineErrorHandler = (sender, e) => OnError?.Invoke(this, e);
Pipeline.OnError += _pipelineErrorHandler; // Bubble up errors via a stored handler so it can be detached later
OnStatusMessage?.Invoke(this, "Creating pipeline to record RTSP stream...");
// 1. Create the RTSP source block.
// RTSPRAWSourceBlock provides raw, un-decoded elementary streams (video and audio) from your IP camera or other RTSP cameras.
_rtspRawSource = new RTSPRAWSourceBlock(rtspSettings);
// 2. Create the MP4 sink (muxer) block.
// This block will write the media streams into an MP4 file.
_muxer = new MP4SinkBlock(new MP4SinkSettings(Filename));
// 3. Connect Video Stream (Passthrough)
// Create a dynamic input pad on the muxer for the video stream.
// We connect the raw video output from the RTSP source directly to the MP4 sink.
// This ensures the video is not re-encoded when you record the camera feed.
var inputVideoPad = (_muxer as IMediaBlockDynamicInputs).CreateNewInput(MediaBlockPadMediaType.Video);
Pipeline.Connect(_rtspRawSource.VideoOutput, inputVideoPad);
OnStatusMessage?.Invoke(this, "Video stream connected (passthrough for original quality video).");
// 4. Connect Audio Stream (Optional Re-encoding)
// This section handles how the audio from the RTSP stream is processed and saved to the file.
if (rtspSettings.AudioEnabled)
{
// Create a dynamic input pad on the muxer for the audio stream.
var inputAudioPad = (_muxer as IMediaBlockDynamicInputs).CreateNewInput(MediaBlockPadMediaType.Audio);
if (ReencodeAudio)
{
// If audio re-encoding is enabled (e.g., to AAC for compatibility):
OnStatusMessage?.Invoke(this, "Setting up audio re-encoding to AAC for the recording...");
// Create a decoder block that only handles audio.
// We need to decode the original audio before re-encoding it to save the MP4 stream with compatible audio.
_decodeBin = new DecodeBinBlock(videoDisabled: true, audioDisabled: false, subtitlesDisabled: true)
{
// We can disable the internal audio converter if we're sure about the format
// or if the encoder handles conversion. For AAC, it's generally fine.
DisableAudioConverter = true
};
// Create an AAC encoder with default settings.
_audioEncoder = new AACEncoderBlock(new AVENCAACEncoderSettings());
// Connect the audio processing pipeline:
// RTSP audio output -> Decoder -> AAC Encoder -> MP4 Sink audio input
Pipeline.Connect(_rtspRawSource.AudioOutput, _decodeBin.Input);
Pipeline.Connect(_decodeBin.AudioOutput, _audioEncoder.Input);
Pipeline.Connect(_audioEncoder.Output, inputAudioPad);
OnStatusMessage?.Invoke(this, "Audio stream connected (re-encoding to AAC for MP4 file).");
}
else
{
// If audio re-encoding is disabled, connect RTSP audio directly to the muxer.
// Note: This may cause issues if the original audio format is not
// compatible with the MP4 container (e.g., G.711 PCMU/PCMA) when trying to save the RTSP stream.
// Common compatible formats include AAC. Check your camera's audio format.
Pipeline.Connect(_rtspRawSource.AudioOutput, inputAudioPad);
OnStatusMessage?.Invoke(this, "Audio stream connected (passthrough). Warning: Compatibility depends on original camera audio format for the file.");
}
}
// 5. Start the pipeline to record video
OnStatusMessage?.Invoke(this, "Starting recording pipeline to save RTSP stream to file...");
bool success = await Pipeline.StartAsync();
if (success)
{
OnStatusMessage?.Invoke(this, "Recording pipeline started successfully.");
}
else
{
OnStatusMessage?.Invoke(this, "Failed to start recording pipeline.");
}
return success;
}
/// <summary>
/// Stops the recording by stopping the MediaBlocks pipeline.
/// </summary>
/// <returns>True if the pipeline stopped successfully, false otherwise.</returns>
public async Task<bool> StopAsync()
{
if (Pipeline == null)
return false;
OnStatusMessage?.Invoke(this, "Stopping recording pipeline...");
bool success = await Pipeline.StopAsync();
if (success)
{
OnStatusMessage?.Invoke(this, "Recording pipeline stopped successfully.");
}
else
{
OnStatusMessage?.Invoke(this, "Failed to stop recording pipeline.");
}
// Detach the error handler to prevent issues if StopAsync is called multiple times
// or before DisposeAsync
if (Pipeline != null)
{
Pipeline.OnError -= _pipelineErrorHandler;
}
return success;
}
/// <summary>
/// Asynchronously disposes of the RTSPRecorder and all its resources.
/// Implements the IAsyncDisposable pattern for proper resource cleanup.
/// </summary>
public async ValueTask DisposeAsync()
{
if (!disposedValue)
{
if (Pipeline != null)
{
Pipeline.OnError -= _pipelineErrorHandler; // Ensure detachment (removing a null handler is a no-op)
await Pipeline.DisposeAsync();
Pipeline = null;
}
// Dispose of all MediaBlock components
// Using 'as IDisposable' for safe casting and disposal.
(_muxer as IDisposable)?.Dispose();
_muxer = null;
_rtspRawSource?.Dispose();
_rtspRawSource = null;
_decodeBin?.Dispose();
_decodeBin = null;
_audioEncoder?.Dispose();
_audioEncoder = null;
disposedValue = true;
}
}
}
}
```
## Explanation of the Code
1. **`RTSPRecorder` Class**: This class is central to helping a user save RTSP stream to file.
- Implements `IAsyncDisposable` for proper resource management.
- `Pipeline`: The `MediaBlocksPipeline` object that orchestrates the media flow.
- `_rtspRawSource`: An `RTSPRAWSourceBlock` is used. The "RAW" part is key here: it provides the elementary streams (video and audio) from the camera without attempting to decode them initially.
- `_muxer`: An `MP4SinkBlock` is used to write the incoming video and audio streams into an MP4 file.
- `_decodeBin` and `_audioEncoder`: These are optional blocks used only if `ReencodeAudio` is true. `_decodeBin` decodes the original audio from the IP camera, and `_audioEncoder` (e.g., `AACEncoderBlock`) re-encodes it to a more compatible format like AAC.
- `Filename`: Specifies the output MP4 file path where the video will be saved.
- `ReencodeAudio`: A boolean property to control audio processing. If `true`, audio is re-encoded to AAC. If `false`, audio is passed through directly. Check your camera audio format for compatibility if set to false.
2. **`StartAsync(RTSPRAWSourceSettings rtspSettings)` Method**: This method initiates the process to **record RTSP stream**.
- Initializes `MediaBlocksPipeline`.
- **RTSP Source**: Creates `_rtspRawSource` with `RTSPRAWSourceSettings`. These settings include the URL (the path to your camera's stream), credentials for user access, and audio capture settings.
- **MP4 Sink**: Creates `_muxer` (MP4 sink) with the target filename.
- **Video Path (Passthrough)**:
- A new dynamic input pad for video is created on the `_muxer`.
- `Pipeline.Connect(_rtspRawSource.VideoOutput, inputVideoPad);` This line directly connects the raw video output from the RTSP source to the MP4 muxer's video input. No re-encoding occurs for the video stream.
- **Audio Path (Conditional)**: Determines how audio from the **camera** is handled when you **save to file**.
- If `rtspSettings.AudioEnabled` is true:
- A new dynamic input pad for audio is created on the `_muxer`.
- If `ReencodeAudio` is `true` (recommended for wider file compatibility):
- `_decodeBin` is created to decode the incoming audio from the camera. It's configured to only process audio (`audioDisabled: false`).
- `_audioEncoder` (e.g., `AACEncoderBlock`) is created.
- The pipeline is connected: `_rtspRawSource.AudioOutput` -> `_decodeBin.Input` -> `_decodeBin.AudioOutput` -> `_audioEncoder.Input` -> `_audioEncoder.Output` -> `inputAudioPad` (muxer's audio input).
- If `ReencodeAudio` is `false`:
- `Pipeline.Connect(_rtspRawSource.AudioOutput, inputAudioPad);` The raw audio output from the camera source is connected directly to the MP4 muxer. *Caution*: This relies on the original audio codec from the camera being compatible with the MP4 container (e.g., AAC). Formats like G.711 (PCMU/PCMA) are common in RTSP cameras but are not standard in MP4 and might lead to playback issues or require specialized players if you save this way. Check your camera's documentation.
- Starts the pipeline using `Pipeline.StartAsync()` to begin the streaming video record process.
3. **`StopAsync()` Method**: Stops the `Pipeline`.
4. **`DisposeAsync()` Method**:
- Cleans up all resources, including the pipeline and individual media blocks.
## How to Use the `RTSPRecorder`
Here's a basic example of how you might use the `RTSPRecorder` class:
```csharp
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using VisioForge.Core; // For VisioForgeX.DestroySDK()
using VisioForge.Core.Types.X.Sources; // For RTSPRAWSourceSettings
using RTSPCaptureOriginalStream; // Namespace of your RTSPRecorder class
class Demo
{
static async Task Main(string[] args)
{
Console.WriteLine("RTSP Camera to MP4 Capture (Original Video Stream)");
Console.WriteLine("-------------------------------------------------");
string rtspUrl = "rtsp://your_camera_ip:554/stream_path"; // Replace with your RTSP URL
string username = "admin"; // Replace with your username, or empty if none
string password = "password"; // Replace with your password, or empty if none
string outputFilePath = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyVideos), "rtsp_original_capture.mp4");
Directory.CreateDirectory(Path.GetDirectoryName(outputFilePath));
Console.WriteLine($"Capturing from: {rtspUrl}");
Console.WriteLine($"Saving to: {outputFilePath}");
Console.WriteLine("Press any key to stop recording...");
var cts = new CancellationTokenSource();
RTSPRecorder recorder = null;
try
{
recorder = new RTSPRecorder
{
Filename = outputFilePath,
ReencodeAudio = true // Set to false to pass through audio (check compatibility)
};
recorder.OnError += (s, e) => Console.WriteLine($"ERROR: {e.Message}");
recorder.OnStatusMessage += (s, msg) => Console.WriteLine($"STATUS: {msg}");
// Configure RTSP source settings
var rtspSettings = new RTSPRAWSourceSettings(new Uri(rtspUrl), audioEnabled: true)
{
Login = username,
Password = password,
// Adjust other settings as needed, e.g., transport protocol
// RTSPTransport = VisioForge.Core.Types.RTSPTransport.TCP,
};
if (await recorder.StartAsync(rtspSettings))
{
Console.ReadKey(true); // Wait for a key press to stop
}
else
{
Console.WriteLine("Failed to start recording. Check status messages and RTSP URL/credentials.");
}
}
catch (Exception ex)
{
Console.WriteLine($"An unexpected error occurred: {ex.Message}");
}
finally
{
if (recorder != null)
{
Console.WriteLine("Stopping recording...");
await recorder.StopAsync();
await recorder.DisposeAsync();
Console.WriteLine("Recording stopped and resources disposed.");
}
// Important: Clean up VisioForge SDK resources on application exit
VisioForgeX.DestroySDK();
}
Console.WriteLine("Press any key to exit.");
Console.ReadKey(true);
}
}
```
## Key Considerations
- **Audio Compatibility (Passthrough)**: If you choose `ReencodeAudio = false`, ensure the camera's audio codec (e.g., AAC, MP3) is compatible with the MP4 container. Common RTSP audio codecs like G.711 (PCMU/PCMA) are generally not directly supported in MP4 files and will likely result in silent audio or playback errors. Re-encoding to AAC is generally safer for wider compatibility.
- **Network Conditions**: RTSP streaming is sensitive to network stability, so ensure a reliable network connection to the camera.
- **Error Handling**: Robust applications should implement thorough error handling by subscribing to the `OnError` event of the `RTSPRecorder` (or directly from the `MediaBlocksPipeline`).
- **Resource Management**: Always `DisposeAsync` the `RTSPRecorder` instance (and thus the `MediaBlocksPipeline`) when done to free up resources. `VisioForgeX.DestroySDK()` should be called once when your application exits.
## Full GitHub Sample
For a complete, runnable console application demonstrating these concepts, including user input for RTSP details and dynamic duration display, please refer to the official VisioForge samples repository:
- **[RTSP Capture Original Stream Sample on GitHub](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK/Console/RTSP%20Capture%20Original)**
This sample provides a more comprehensive example and showcases additional features.
## Best Practices
- Always implement proper error handling
- Monitor network stability for reliable streaming
- Use appropriate audio encoding settings
- Manage system resources effectively
- Implement proper cleanup procedures
## Troubleshooting
Common problem areas to investigate when saving RTSP streams:
- Network connectivity problems
- Audio codec compatibility
- Resource management
- Stream initialization errors
- Recording storage considerations
---
This guide provides a foundational understanding of how to save an RTSP stream's original video while flexibly handling the audio stream using the VisioForge Media Blocks SDK. By leveraging the `RTSPRAWSourceBlock` and direct pipeline connections, you can achieve efficient, high-quality recordings.
---END OF PAGE---
# Local File: .\dotnet\mediablocks\LiveVideoCompositor\index.md
---
title: .Net Live Video Compositor
description: Master real-time video compositing in .Net. Add/remove multiple live video/audio sources and outputs on the fly. Build dynamic streaming & recording apps.
sidebar_label: Live Video Compositor
---
# Live Video Compositor
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
Live Video Compositor is a part of the [VisioForge Media Blocks SDK .Net](https://www.visioforge.com/media-blocks-sdk-net) that allows you to add and remove sources and outputs in real time to a pipeline.
This allows you to create applications that simultaneously handle multiple video and audio sources.
For example, the LVC allows you to start streaming to YouTube at just the right moment while simultaneously recording video to disk.
Using the LVC, you can create an application similar to OBS Studio.
Each source and output has its unique identifier that can be used to add and remove sources and outputs in real time.
Each source and output has its own independent pipeline that can be started and stopped.
## Features
- Supports multiple video and audio sources
- Supports multiple video and audio outputs
- Setting the position and size of video sources
- Setting the transparency of video sources
- Setting the volume of audio sources
## LiveVideoCompositor class
The `LiveVideoCompositor` is the main class that allows the addition and removal of live sources and outputs to the pipeline. When creating it, it is necessary to specify the resolution and frame rate to use. All sources with a different frame rate will be automatically converted to the frame rate specified when creating the LVC.
`LiveVideoCompositorSettings` allows you to set the video and audio parameters. Key properties include:
- `MixerType`: Specifies the video mixer type (e.g., `LVCMixerType.OpenGL`, `LVCMixerType.D3D11` (Windows only), or `LVCMixerType.CPU`).
- `AudioEnabled`: A boolean indicating whether the audio stream is enabled.
- `VideoWidth`, `VideoHeight`, `VideoFrameRate`: Define the output video resolution and frame rate.
- `AudioFormat`, `AudioSampleRate`, `AudioChannels`: Define the output audio parameters.
- `VideoView`: An optional `IVideoView` for rendering video output directly.
- `AudioOutput`: An optional `AudioRendererBlock` for rendering audio output directly.
You should also plan for the maximum number of sources and outputs when designing your application; note that this limit is not a direct parameter of `LiveVideoCompositorSettings`.
### Sample code
1. Create a new instance of the `LiveVideoCompositor` class.
```csharp
var settings = new LiveVideoCompositorSettings(1920, 1080, VideoFrameRate.FPS_25);
// Optionally, configure other settings like MixerType, AudioEnabled, etc.
// settings.MixerType = LVCMixerType.OpenGL;
// settings.AudioEnabled = true;
var compositor = new LiveVideoCompositor(settings);
```
2. Add video and audio sources and outputs (see below)
3. Start the pipeline.
```csharp
await compositor.StartAsync();
```
## LVC Video Input
The `LVCVideoInput` class is used to add video sources to the LVC pipeline. The class allows you to set the video parameters and the rectangle of the video source.
You can use any block that has a video output pad. For example, you can use `VirtualVideoSourceBlock` to create a virtual video source or `SystemVideoSourceBlock` to capture video from the webcam.
Key properties for `LVCVideoInput` include:
- `Rectangle`: Defines the position and size of the video source within the compositor's output.
- `ZOrder`: Determines the stacking order of overlapping video sources.
- `ResizePolicy`: Specifies how the video source should be resized if its aspect ratio differs from the target rectangle (`LVCResizePolicy.Stretch`, `LVCResizePolicy.Letterbox`, `LVCResizePolicy.LetterboxToFill`).
- `VideoView`: An optional `IVideoView` to preview this specific input source.
### Usage
When creating an `LVCVideoInput` object, you must specify the `MediaBlock` to be used as the video data source, along with `VideoFrameInfoX` describing the video, a `Rect` for its placement, and whether it should `autostart`.
### Sample code
#### Virtual video source
The sample code below shows how to create an `LVCVideoInput` object with a `VirtualVideoSourceBlock` as the video source.
```csharp
var rect = new Rect(0, 0, 640, 480);
var name = "Video source [Virtual]";
var settings = new VirtualVideoSourceSettings();
var info = new VideoFrameInfoX(settings.Width, settings.Height, settings.FrameRate);
var src = new LVCVideoInput(name, _compositor, new VirtualVideoSourceBlock(settings), info, rect, true);
// Optionally, set ZOrder or ResizePolicy
// src.ZOrder = 1;
// src.ResizePolicy = LVCResizePolicy.Letterbox;
if (await _compositor.Input_AddAsync(src))
{
// added successfully
}
else
{
src.Dispose();
}
```
#### Screen source
For Desktop platforms, we can capture the screen. The sample code below shows how to create an `LVCVideoInput` object with a `ScreenSourceBlock` as the video source.
```csharp
var settings = new ScreenCaptureDX9SourceSettings();
settings.CaptureCursor = true;
settings.Monitor = 0;
settings.FrameRate = new VideoFrameRate(30);
settings.Rectangle = new Rectangle(0, 0, 1920, 1080);
var rect = new Rect(0, 0, 640, 480);
var name = $"Screen source";
var info = new VideoFrameInfoX(settings.Rectangle.Width, settings.Rectangle.Height, settings.FrameRate);
var src = new LVCVideoInput(name, _compositor, new ScreenSourceBlock(settings), info, rect, true);
// Optionally, set ZOrder or ResizePolicy
// src.ZOrder = 0;
// src.ResizePolicy = LVCResizePolicy.Stretch;
if (await _compositor.Input_AddAsync(src))
{
// added successfully
}
else
{
src.Dispose();
}
```
#### System video source (webcam)
The sample code below shows how to create an `LVCVideoInput` object with a `SystemVideoSourceBlock` as the video source.
We use the `DeviceEnumerator` class to get the video source devices. The first video device will be used as the video source. The first video format of the device will be used as the video format.
```csharp
VideoCaptureDeviceSourceSettings settings = null;
var device = (await DeviceEnumerator.Shared.VideoSourcesAsync())[0];
if (device != null)
{
var formatItem = device.VideoFormats[0];
if (formatItem != null)
{
settings = new VideoCaptureDeviceSourceSettings(device)
{
Format = formatItem.ToFormat()
};
settings.Format.FrameRate = dlg.FrameRate; // 'dlg' is the format-selection dialog in the full sample; substitute a frame rate supported by your device
}
}
if (settings == null)
{
MessageBox.Show(this, "Unable to configure video capture device.");
return;
}
var name = $"Camera source [{device.Name}]";
var rect = new Rect(0, 0, 1280, 720);
var videoInfo = new VideoFrameInfoX(settings.Format.Width, settings.Format.Height, settings.Format.FrameRate);
var src = new LVCVideoInput(name, _compositor, new SystemVideoSourceBlock(settings), videoInfo, rect, true);
// Optionally, set ZOrder or ResizePolicy
// src.ZOrder = 2;
// src.ResizePolicy = LVCResizePolicy.LetterboxToFill;
if (await _compositor.Input_AddAsync(src))
{
// added successfully
}
else
{
src.Dispose();
}
```
## LVC Audio Input
The `LVCAudioInput` class is used to add audio sources to the LVC pipeline. The class allows you to set the audio parameters and the volume of the audio source.
You can use any block that has an audio output pad. For example, you can use the `VirtualAudioSourceBlock` to create a virtual audio source or `SystemAudioSourceBlock` to capture audio from the microphone.
### Usage
When creating an `LVCAudioInput` object, you must specify the `MediaBlock` to be used as the audio data source, along with `AudioInfoX` (which requires format, channels, and sample rate) and whether it should `autostart`.
### Sample code
#### Virtual audio source
The sample code below shows how to create an `LVCAudioInput` object with a `VirtualAudioSourceBlock` as the audio source.
```csharp
var name = "Audio source [Virtual]";
var settings = new VirtualAudioSourceSettings();
var info = new AudioInfoX(settings.Format, settings.SampleRate, settings.Channels);
var src = new LVCAudioInput(name, _compositor, new VirtualAudioSourceBlock(settings), info, true);
if (await _compositor.Input_AddAsync(src))
{
// added successfully
}
else
{
src.Dispose();
}
```
#### System audio source (DirectSound in Windows)
The sample code below shows how to create an `LVCAudioInput` object with a `SystemAudioSourceBlock` as the audio source.
We use the `DeviceEnumerator` class to get the audio devices. The first audio device is used as the audio source. The first audio format of the device is used as the audio format.
```csharp
DSAudioCaptureDeviceSourceSettings settings = null;
AudioCaptureDeviceFormat deviceFormat = null;
var device = (await DeviceEnumerator.Shared.AudioSourcesAsync(AudioCaptureDeviceAPI.DirectSound))[0];
if (device != null)
{
var formatItem = device.Formats[0];
if (formatItem != null)
{
deviceFormat = formatItem.ToFormat();
settings = new DSAudioCaptureDeviceSourceSettings(device, deviceFormat);
}
}
if (settings == null)
{
MessageBox.Show(this, "Unable to configure audio capture device.");
return;
}
var name = $"Audio source [{device.Name}]";
var info = new AudioInfoX(deviceFormat.Format, deviceFormat.SampleRate, deviceFormat.Channels);
var src = new LVCAudioInput(name, _compositor, new SystemAudioSourceBlock(settings), info, true);
if (await _compositor.Input_AddAsync(src))
{
// added successfully
}
else
{
src.Dispose();
}
```
## LVC Video Output
The `LVCVideoOutput` class is used to add video outputs to the LVC pipeline. You can start and stop the output pipeline independently from the main pipeline.
### Usage
When creating an `LVCVideoOutput` object, you must specify the `MediaBlock` to be used as the video data output, its `name`, a reference to the `LiveVideoCompositor`, and whether it should `autostart` with the main pipeline. An optional processing `MediaBlock` can also be provided. Usually, this element is used to save the video as a file or stream it (without audio).
For video+audio outputs, use the `LVCVideoAudioOutput` class.
You can use the `SuperMediaBlock` to build a custom block pipeline for video output. For example, you can combine a video encoder, a muxer, and a file writer to save the video to a file.
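### Sample code
A minimal sketch of adding a video-only output; the constructor argument order (name, compositor, output block, autostart) is assumed by analogy with `LVCAudioOutput`, and the video sink block is a placeholder:
```csharp
// 'videoFileSink' is a placeholder for a video-only sink block,
// e.g., a SuperMediaBlock combining an encoder, a muxer, and a file writer
var output = new LVCVideoOutput("Video file output", _compositor, videoFileSink, false);
if (await _compositor.Output_AddAsync(output))
{
    // added successfully
}
else
{
    output.Dispose();
}
```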
## LVC Audio Output
The `LVCAudioOutput` class is used to add audio outputs to the LVC pipeline. You can start and stop the output pipeline independently from the main pipeline.
### Usage
When creating an `LVCAudioOutput` object, you must specify the `MediaBlock` to be used as the audio data output, its `name`, a reference to the `LiveVideoCompositor`, and whether it should `autostart`.
### Sample code
#### Add an audio renderer
Add an audio renderer to the LVC pipeline. You need to create an `AudioRendererBlock` object and then create an `LVCAudioOutput` object. Finally, add the output to the compositor.
The first device is used as an audio output.
```csharp
var audioRenderer = new AudioRendererBlock((await DeviceEnumerator.Shared.AudioOutputsAsync())[0]);
var audioRendererOutput = new LVCAudioOutput("Audio renderer", _compositor, audioRenderer, true);
await _compositor.Output_AddAsync(audioRendererOutput, true);
```
#### Add an MP3 output
Add an MP3 output to the LVC pipeline. You need to create an `MP3OutputBlock` object and then create an `LVCAudioOutput` object. Finally, add the output to the compositor.
```csharp
var outputFile = "output.mp3"; // output file path, also used as the output's name
var mp3Output = new MP3OutputBlock(outputFile, new MP3EncoderSettings());
var output = new LVCAudioOutput(outputFile, _compositor, mp3Output, false);
if (await _compositor.Output_AddAsync(output))
{
// added successfully
}
else
{
output.Dispose();
}
```
## LVC Video/Audio Output
The `LVCVideoAudioOutput` class is used to add video+audio outputs to the LVC pipeline. You can start and stop the output pipeline independently from the main pipeline.
### Usage
When creating an `LVCVideoAudioOutput` object, you must specify the `MediaBlock` to be used as the video+audio data output, its `name`, a reference to the `LiveVideoCompositor`, and whether it should `autostart`. Optional processing `MediaBlock`s for video and audio can also be provided.
### Sample code
#### Add an MP4 output
```csharp
var outputFile = "output.mp4"; // output file path, also used as the output's name
var mp4Output = new MP4OutputBlock(new MP4SinkSettings(outputFile), new OpenH264EncoderSettings(), new MFAACEncoderSettings());
var output = new LVCVideoAudioOutput(outputFile, _compositor, mp4Output, false);
if (await _compositor.Output_AddAsync(output))
{
// added successfully
}
else
{
output.Dispose();
}
```
#### Add a WebM output
```csharp
var outputFile = "output.webm"; // output file path, also used as the output's name
var webmOutput = new WebMOutputBlock(new WebMSinkSettings(outputFile), new VP8EncoderSettings(), new VorbisEncoderSettings());
var output = new LVCVideoAudioOutput(outputFile, _compositor, webmOutput, false);
if (await _compositor.Output_AddAsync(output))
{
// added successfully
}
else
{
output.Dispose();
}
```
## LVC Video View Output
The `LVCVideoViewOutput` class is used to add video view to the LVC pipeline. You can use it to display the video on the screen.
### Usage
When creating an `LVCVideoViewOutput` object, you must specify the `IVideoView` control to be used, its `name`, a reference to the `LiveVideoCompositor`, and whether it should `autostart`. An optional processing `MediaBlock` can also be provided.
### Sample code
```csharp
var name = "[VideoView] Preview";
var videoRendererOutput = new LVCVideoViewOutput(name, _compositor, VideoView1, true);
await _compositor.Output_AddAsync(videoRendererOutput);
```
VideoView1 is a `VideoView` object that is used to display the video. Each platform / UI framework has its own `VideoView` implementation.
You can add several `LVCVideoViewOutput` objects to the LVC pipeline to display the video on different displays.
---
[Sample application on GitHub](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK/WPF/CSharp/Live%20Video%20Compositor%20Demo)
---END OF PAGE---
# Local File: .\dotnet\mediablocks\Nvidia\index.md
---
title: .Net Media Nvidia Blocks Guide
description: Explore a complete guide to .Net Media SDK Nvidia blocks. Learn about Nvidia-specific blocks for your media processing pipelines.
sidebar_label: Nvidia
---
# Nvidia Blocks - VisioForge Media Blocks SDK .Net
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
Nvidia blocks leverage Nvidia GPU capabilities for accelerated media processing tasks such as data transfer, video conversion, and resizing.
## NVDataDownloadBlock
Nvidia data download block. Downloads data from Nvidia GPU to system memory.
#### Block info
Name: NVDataDownloadBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | Video (GPU memory) | 1 |
| Output video | Video (system memory) | 1 |
#### The sample pipeline
```mermaid
graph LR;
NVCUDAConverterBlock-->NVDataDownloadBlock-->VideoRendererBlock;
```
#### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// create a source that outputs to GPU memory (e.g., a decoder or another Nvidia block)
// For example, NVDataUploadBlock or an NV-accelerated decoder
var upstreamNvidiaBlock = new NVDataUploadBlock(); // Conceptual: assume this block is properly configured
// create Nvidia data download block
var nvDataDownload = new NVDataDownloadBlock();
// create video renderer block
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1 is your display control
// connect blocks
// pipeline.Connect(upstreamNvidiaBlock.Output, nvDataDownload.Input); // Connect GPU source to download block
// pipeline.Connect(nvDataDownload.Output, videoRenderer.Input); // Connect download block (system memory) to renderer
// start pipeline
// await pipeline.StartAsync();
```
#### Remarks
This block is used to transfer video data from the Nvidia GPU's memory to the main system memory. This is typically needed when a GPU-processed video stream needs to be accessed by a component that operates on system memory, like a CPU-based encoder or a standard video renderer.
Ensure that the correct Nvidia drivers and CUDA toolkit are installed for this block to function.
Use `NVDataDownloadBlock.IsAvailable()` to check if the block can be used.
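Because availability depends on the installed GPU and drivers, guard block creation with this check; a minimal sketch:
```csharp
// Fall back to a CPU path when no suitable Nvidia GPU/driver is present
if (NVDataDownloadBlock.IsAvailable())
{
    var nvDataDownload = new NVDataDownloadBlock();
    // ... build the GPU-accelerated branch of the pipeline ...
}
else
{
    // ... use CPU-based processing instead ...
}
```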
#### Platforms
Windows, Linux (Requires Nvidia GPU and appropriate drivers/SDK).
## NVDataUploadBlock
Nvidia data upload block. Uploads data to Nvidia GPU from system memory.
#### Block info
Name: NVDataUploadBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | Video (system memory) | 1 |
| Output video | Video (GPU memory) | 1 |
#### The sample pipeline
```mermaid
graph LR;
SystemVideoSourceBlock-->NVDataUploadBlock-->NVH264EncoderBlock;
```
#### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// create a video source (e.g., SystemVideoSourceBlock or UniversalSourceBlock)
var videoSource = new UniversalSourceBlock(); // Conceptual: assume this block is properly configured
// videoSource.Filename = "input.mp4";
// create Nvidia data upload block
var nvDataUpload = new NVDataUploadBlock();
// create an Nvidia accelerated encoder (e.g., NVH264EncoderBlock)
// var nvEncoder = new NVH264EncoderBlock(new NVH264EncoderSettings()); // Conceptual
// connect blocks
// pipeline.Connect(videoSource.VideoOutput, nvDataUpload.Input); // Connect system memory source to upload block
// pipeline.Connect(nvDataUpload.Output, nvEncoder.Input); // Connect upload block (GPU memory) to NV encoder
// start pipeline
// await pipeline.StartAsync();
```
#### Remarks
This block is used to transfer video data from main system memory to the Nvidia GPU's memory. This is typically a prerequisite for using Nvidia-accelerated processing blocks like encoders, decoders, or filters that operate on GPU memory.
Ensure that the correct Nvidia drivers and CUDA toolkit are installed for this block to function.
Use `NVDataUploadBlock.IsAvailable()` to check if the block can be used.
#### Platforms
Windows, Linux (Requires Nvidia GPU and appropriate drivers/SDK).
## NVVideoConverterBlock
Nvidia video converter block. Performs color space conversions and other video format conversions using the Nvidia GPU.
#### Block info
Name: NVVideoConverterBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | Video (GPU memory) | 1 |
| Output video | Video (GPU memory, possibly different format) | 1 |
#### The sample pipeline
```mermaid
graph LR;
NVDataUploadBlock-->NVVideoConverterBlock-->NVDataDownloadBlock;
```
#### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// Assume video data is already in GPU memory via NVDataUploadBlock or an NV-decoder
// var nvUploadedSource = new NVDataUploadBlock(); // Conceptual
// pipeline.Connect(systemMemorySource.Output, nvUploadedSource.Input);
// create Nvidia video converter block
var nvVideoConverter = new NVVideoConverterBlock();
// Specific conversion settings might be applied here if the block has properties for them.
// Assume we want to download the converted video back to system memory
// var nvDataDownload = new NVDataDownloadBlock(); // Conceptual
// connect blocks
// pipeline.Connect(nvUploadedSource.Output, nvVideoConverter.Input);
// pipeline.Connect(nvVideoConverter.Output, nvDataDownload.Input);
// pipeline.Connect(nvDataDownload.Output, videoRenderer.Input); // Or to another system memory component
// start pipeline
// await pipeline.StartAsync();
```
#### Remarks
The `NVVideoConverterBlock` is used for efficient video format conversions (e.g., color space, pixel format) leveraging the Nvidia GPU. This is often faster than CPU-based conversions, especially for high-resolution video. It typically operates on video data already present in GPU memory.
Ensure that the correct Nvidia drivers and CUDA toolkit are installed.
Use `NVVideoConverterBlock.IsAvailable()` to check if the block can be used.
#### Platforms
Windows, Linux (Requires Nvidia GPU and appropriate drivers/SDK).
## NVVideoResizeBlock
Nvidia video resize block. Resizes video frames using the Nvidia GPU.
#### Block info
Name: NVVideoResizeBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | Video (GPU memory) | 1 |
| Output video | Video (GPU memory, resized) | 1 |
#### Settings
The `NVVideoResizeBlock` is configured using a `VisioForge.Core.Types.Size` object passed to its constructor.
- `Resolution` (`VisioForge.Core.Types.Size`): Specifies the target output resolution (Width, Height) for the video.
#### The sample pipeline
```mermaid
graph LR;
NVDataUploadBlock-->NVVideoResizeBlock-->NVH264EncoderBlock;
```
#### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// target resolution for resizing
var targetResolution = new VisioForge.Core.Types.Size(1280, 720);
// 'videoSource' (system-memory source) is created and configured elsewhere
// upload frames from system memory to GPU memory
var nvUpload = new NVDataUploadBlock();
// create Nvidia video resize block
var nvVideoResize = new NVVideoResizeBlock(targetResolution);
// encode the resized frames with an Nvidia H264 encoder
var nvEncoder = new NVH264EncoderBlock(new NVH264EncoderSettings());
// connect blocks
pipeline.Connect(videoSource.Output, nvUpload.Input);
pipeline.Connect(nvUpload.Output, nvVideoResize.Input);
pipeline.Connect(nvVideoResize.Output, nvEncoder.Input);
// connect nvEncoder.Output to a sink block (not shown)
// start pipeline
await pipeline.StartAsync();
```
#### Remarks
The `NVVideoResizeBlock` performs video scaling operations efficiently using the Nvidia GPU. This is useful for adapting video streams to different display resolutions or encoding requirements. It typically operates on video data already present in GPU memory.
Ensure that the correct Nvidia drivers and CUDA toolkit are installed.
Use `NVVideoResizeBlock.IsAvailable()` to check if the block can be used.
#### Platforms
Windows, Linux (Requires Nvidia GPU and appropriate drivers/SDK).
---END OF PAGE---
# Local File: .\dotnet\mediablocks\OpenCV\index.md
---
title: .Net Media OpenCV Blocks Guide
description: Explore a complete guide to .Net Media SDK OpenCV blocks. Learn about various OpenCV video processing capabilities.
sidebar_label: OpenCV
---
# OpenCV Blocks - VisioForge Media Blocks SDK .Net
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
OpenCV (Open Source Computer Vision Library) blocks provide powerful video processing capabilities within the VisioForge Media Blocks SDK .Net. These blocks enable a wide range of computer vision tasks, from basic image manipulation to complex object detection and tracking.
To use OpenCV blocks, add the `VisioForge.CrossPlatform.OpenCV.Windows.x64` NuGet package (or the corresponding package for your platform) to your project.
Most OpenCV blocks require a `videoconvert` element before them so that the input video stream is in a compatible format; the SDK inserts this internally when you initialize the block.
## CV Dewarp Block
The CV Dewarp block applies dewarping effects to a video stream, which can correct distortions from wide-angle lenses, for example.
### Block info
Name: `CVDewarpBlock` (GStreamer element: `dewarp`).
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Input video | Uncompressed video | 1 |
| Output video | Uncompressed video | 1 |
### Settings
The `CVDewarpBlock` is configured using `CVDewarpSettings`. Key properties:
- `DisplayMode` (`CVDewarpDisplayMode` enum): Specifies the display mode for dewarping (e.g., `SinglePanorama`, `DoublePanorama`). Default is `CVDewarpDisplayMode.SinglePanorama`.
- `InnerRadius` (double): Inner radius for dewarping.
- `InterpolationMethod` (`CVDewarpInterpolationMode` enum): Interpolation method used (e.g., `Bilinear`, `Bicubic`). Default is `CVDewarpInterpolationMode.Bilinear`.
- `OuterRadius` (double): Outer radius for dewarping.
- `XCenter` (double): X-coordinate of the center for dewarping.
- `XRemapCorrection` (double): X-coordinate remap correction factor.
- `YCenter` (double): Y-coordinate of the center for dewarping.
- `YRemapCorrection` (double): Y-coordinate remap correction factor.
### Sample pipeline
```mermaid
graph LR;
SystemVideoSourceBlock-->CVDewarpBlock;
CVDewarpBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Assuming SystemVideoSourceBlock is already created and configured as 'videoSource'
// Create Dewarp settings
var dewarpSettings = new CVDewarpSettings
{
DisplayMode = CVDewarpDisplayMode.SinglePanorama, // Example mode, default is SinglePanorama
InnerRadius = 0.2, // Example value
OuterRadius = 0.8, // Example value
XCenter = 0.5, // Example value, default is 0.5
YCenter = 0.5, // Example value, default is 0.5
// InterpolationMethod = CVDewarpInterpolationMode.Bilinear, // This is the default
};
var dewarpBlock = new CVDewarpBlock(dewarpSettings);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1
// Connect blocks
pipeline.Connect(videoSource.Output, dewarpBlock.Input0);
pipeline.Connect(dewarpBlock.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux.
### Remarks
Ensure the VisioForge OpenCV NuGet package is referenced in your project.
## CV Dilate Block
The CV Dilate block performs a dilation operation on the video stream. Dilation is a morphological operation that typically expands bright regions and shrinks dark regions.
### Block info
Name: `CVDilateBlock` (GStreamer element: `cvdilate`).
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Input video | Uncompressed video | 1 |
| Output video | Uncompressed video | 1 |
### Settings
This block does not have specific settings beyond the default behavior. The dilation is performed with a default structuring element.
### Sample pipeline
```mermaid
graph LR;
SystemVideoSourceBlock-->CVDilateBlock;
CVDilateBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Assuming SystemVideoSourceBlock is already created and configured as 'videoSource'
var dilateBlock = new CVDilateBlock();
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1
// Connect blocks
pipeline.Connect(videoSource.Output, dilateBlock.Input0);
pipeline.Connect(dilateBlock.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux.
### Remarks
Ensure the VisioForge OpenCV NuGet package is referenced in your project.
## CV Edge Detect Block
The CV Edge Detect block uses the Canny edge detector algorithm to find edges in the video stream.
### Block info
Name: `CVEdgeDetectBlock` (GStreamer element: `edgedetect`).
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Input video | Uncompressed video | 1 |
| Output video | Uncompressed video | 1 |
### Settings
The `CVEdgeDetectBlock` is configured using `CVEdgeDetectSettings`. Key properties:
- `ApertureSize` (int): Aperture size for the Sobel operator (e.g., 3, 5, or 7). Default is 3.
- `Threshold1` (int): First threshold for the hysteresis procedure. Default is 50.
- `Threshold2` (int): Second threshold for the hysteresis procedure. Default is 150.
- `Mask` (bool): If true, the output is a mask; otherwise, it's the original image with edges highlighted. Default is `false`.
### Sample pipeline
```mermaid
graph LR;
SystemVideoSourceBlock-->CVEdgeDetectBlock;
CVEdgeDetectBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Assuming SystemVideoSourceBlock is already created and configured as 'videoSource'
var edgeDetectSettings = new CVEdgeDetectSettings
{
ApertureSize = 3, // default is 3
Threshold1 = 100, // example value; default is 50
Threshold2 = 200, // example value; default is 150
Mask = true // example value; default is false
};
var edgeDetectBlock = new CVEdgeDetectBlock(edgeDetectSettings);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1
// Connect blocks
pipeline.Connect(videoSource.Output, edgeDetectBlock.Input0);
pipeline.Connect(edgeDetectBlock.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux.
### Remarks
Ensure the VisioForge OpenCV NuGet package is referenced in your project.
## CV Equalize Histogram Block
The CV Equalize Histogram block equalizes the histogram of a video frame using the `cvEqualizeHist` function. This typically improves the contrast of the image.
### Block info
Name: `CVEqualizeHistogramBlock` (GStreamer element: `cvequalizehist`).
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Input video | Uncompressed video | 1 |
| Output video | Uncompressed video | 1 |
### Settings
This block does not have specific settings beyond the default behavior.
### Sample pipeline
```mermaid
graph LR;
SystemVideoSourceBlock-->CVEqualizeHistogramBlock;
CVEqualizeHistogramBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Assuming SystemVideoSourceBlock is already created and configured as 'videoSource'
var equalizeHistBlock = new CVEqualizeHistogramBlock();
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1
// Connect blocks
pipeline.Connect(videoSource.Output, equalizeHistBlock.Input0);
pipeline.Connect(equalizeHistBlock.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux.
### Remarks
Ensure the VisioForge OpenCV NuGet package is referenced in your project.
## CV Erode Block
The CV Erode block performs an erosion operation on the video stream. Erosion is a morphological operation that typically shrinks bright regions and expands dark regions.
### Block info
Name: `CVErodeBlock` (GStreamer element: `cverode`).
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Input video | Uncompressed video | 1 |
| Output video | Uncompressed video | 1 |
### Settings
This block does not have specific settings beyond the default behavior. The erosion is performed with a default structuring element.
### Sample pipeline
```mermaid
graph LR;
SystemVideoSourceBlock-->CVErodeBlock;
CVErodeBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Assuming SystemVideoSourceBlock is already created and configured as 'videoSource'
var erodeBlock = new CVErodeBlock();
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1
// Connect blocks
pipeline.Connect(videoSource.Output, erodeBlock.Input0);
pipeline.Connect(erodeBlock.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux.
### Remarks
Ensure the VisioForge OpenCV NuGet package is referenced in your project.
## CV Face Blur Block
The CV Face Blur block detects faces in the video stream and applies a blur effect to them.
### Block info
Name: `CVFaceBlurBlock` (GStreamer element: `faceblur`).
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Input video | Uncompressed video | 1 |
| Output video | Uncompressed video | 1 |
### Settings
The `CVFaceBlurBlock` is configured using `CVFaceBlurSettings`. Key properties:
- `MainCascadeFile` (string): Path to the XML file for the primary Haar cascade classifier used for face detection (e.g., `haarcascade_frontalface_default.xml`). Default is `"haarcascade_frontalface_default.xml"`.
- `MinNeighbors` (int): Minimum number of neighbors each candidate rectangle should have to retain it. Default is 3.
- `MinSize` (`Size`): Minimum possible object size. Objects smaller than this are ignored. Default `new Size(30, 30)`.
- `ScaleFactor` (double): How much the image size is reduced at each image scale. Default is 1.25.
Note: `ProcessPaths(Context)` should be called on the settings object to ensure correct path resolution for cascade files.
### Sample pipeline
```mermaid
graph LR;
SystemVideoSourceBlock-->CVFaceBlurBlock;
CVFaceBlurBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Assuming SystemVideoSourceBlock is already created and configured as 'videoSource'
var faceBlurSettings = new CVFaceBlurSettings
{
MainCascadeFile = "haarcascade_frontalface_default.xml", // Adjust path as needed, this is the default
MinNeighbors = 5, // Example value, default is 3
ScaleFactor = 1.2, // Example value, default is 1.25
// MinSize = new VisioForge.Core.Types.Size(30, 30) // This is the default
};
// It's important to call ProcessPaths if you are not providing an absolute path
// and relying on SDK's internal mechanisms to locate the file, especially when deployed.
// faceBlurSettings.ProcessPaths(pipeline.Context); // or pass appropriate context
var faceBlurBlock = new CVFaceBlurBlock(faceBlurSettings);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1
// Connect blocks
pipeline.Connect(videoSource.Output, faceBlurBlock.Input0);
pipeline.Connect(faceBlurBlock.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux.
### Remarks
This block requires Haar cascade XML files for face detection. These files are typically bundled with OpenCV distributions. Ensure the path to `MainCascadeFile` is correctly specified. The `ProcessPaths` method on the settings object can help resolve paths if files are placed in standard locations known to the SDK.
## CV Face Detect Block
The CV Face Detect block detects faces, and optionally eyes, noses, and mouths, in the video stream using Haar cascade classifiers.
### Block info
Name: `CVFaceDetectBlock` (GStreamer element: `facedetect`).
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Input video | Uncompressed video | 1 |
| Output video | Uncompressed video | 1 |
### Settings
The `CVFaceDetectBlock` is configured using `CVFaceDetectSettings`. Key properties:
- `Display` (bool): If `true`, draws rectangles around detected features on the output video. Default is `true`.
- `MainCascadeFile` (string): Path to the XML for the primary Haar cascade. Default is `"haarcascade_frontalface_default.xml"`.
- `EyesCascadeFile` (string): Path to the XML for eyes detection. Default is `"haarcascade_mcs_eyepair_small.xml"`. Optional.
- `NoseCascadeFile` (string): Path to the XML for nose detection. Default is `"haarcascade_mcs_nose.xml"`. Optional.
- `MouthCascadeFile` (string): Path to the XML for mouth detection. Default is `"haarcascade_mcs_mouth.xml"`. Optional.
- `MinNeighbors` (int): Minimum neighbors for candidate retention. Default 3.
- `MinSize` (`Size`): Minimum object size. Default `new Size(30, 30)`.
- `MinDeviation` (int): Minimum standard deviation. Default 0.
- `ScaleFactor` (double): Image size reduction factor at each scale. Default 1.25.
- `UpdatesMode` (`CVFaceDetectUpdates` enum): Controls how updates/events are posted (`EveryFrame`, `OnChange`, `OnFace`, `None`). Default `CVFaceDetectUpdates.EveryFrame`.
Note: `ProcessPaths(Context)` should be called on the settings object for cascade files.
### Events
- `FaceDetected`: Occurs when faces (and other enabled features) are detected. Provides `CVFaceDetectedEventArgs` with an array of `CVFace` objects and a timestamp.
- `CVFace` contains `Rect` for `Position`, `Nose`, `Mouth`, and a list of `Rect` for `Eyes`.
### Sample pipeline
```mermaid
graph LR;
SystemVideoSourceBlock-->CVFaceDetectBlock;
CVFaceDetectBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Assuming SystemVideoSourceBlock is already created and configured as 'videoSource'
var faceDetectSettings = new CVFaceDetectSettings
{
MainCascadeFile = "haarcascade_frontalface_default.xml", // Adjust path, default
EyesCascadeFile = "haarcascade_mcs_eyepair_small.xml", // Adjust path, default, optional
// NoseCascadeFile = "haarcascade_mcs_nose.xml", // Optional, default
// MouthCascadeFile = "haarcascade_mcs_mouth.xml", // Optional, default
Display = true, // Default
UpdatesMode = CVFaceDetectUpdates.EveryFrame, // Default, possible values: EveryFrame, OnChange, OnFace, None
MinNeighbors = 5, // Example value, default is 3
ScaleFactor = 1.2, // Example value, default is 1.25
// MinSize = new VisioForge.Core.Types.Size(30,30) // Default
};
// faceDetectSettings.ProcessPaths(pipeline.Context); // or appropriate context
var faceDetectBlock = new CVFaceDetectBlock(faceDetectSettings);
faceDetectBlock.FaceDetected += (s, e) =>
{
Console.WriteLine($"Timestamp: {e.Timestamp}, Faces found: {e.Faces.Length}");
foreach (var face in e.Faces)
{
Console.WriteLine($" Face at [{face.Position.Left},{face.Position.Top},{face.Position.Width},{face.Position.Height}]");
if (face.Eyes.Any())
{
Console.WriteLine($" Eyes at [{face.Eyes[0].Left},{face.Eyes[0].Top},{face.Eyes[0].Width},{face.Eyes[0].Height}]");
}
}
};
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1
// Connect blocks
pipeline.Connect(videoSource.Output, faceDetectBlock.Input0);
pipeline.Connect(faceDetectBlock.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux.
### Remarks
Requires Haar cascade XML files. The `ProcessBusMessage` method in the C# class handles parsing messages from the GStreamer element to fire the `FaceDetected` event.
## CV Hand Detect Block
The CV Hand Detect block detects hand gestures (fist or palm) in the video stream using Haar cascade classifiers. It internally resizes the input video to 320x240 for processing.
### Block info
Name: `CVHandDetectBlock` (GStreamer element: `handdetect`).
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Input video | Uncompressed video | 1 |
| Output video | Uncompressed video | 1 |
### Settings
The `CVHandDetectBlock` is configured using `CVHandDetectSettings`. Key properties:
- `Display` (bool): If `true`, draws rectangles around detected hands on the output video. Default is `true`.
- `FistCascadeFile` (string): Path to the XML for fist detection. Default is `"fist.xml"`.
- `PalmCascadeFile` (string): Path to the XML for palm detection. Default is `"palm.xml"`.
- `ROI` (`Rect`): Region of interest for detection. Coordinates are relative to the 320x240 processed image. Default is `new Rect()` (0,0,0,0), which means the full frame.
Note: `ProcessPaths(Context)` should be called on the settings object for cascade files.
### Events
- `HandDetected`: Occurs when hands are detected. Provides `CVHandDetectedEventArgs` with an array of `CVHand` objects.
- `CVHand` contains `Rect` for `Position` and `CVHandGesture` for `Gesture` (Fist or Palm).
### Sample pipeline
```mermaid
graph LR;
SystemVideoSourceBlock-->CVHandDetectBlock;
CVHandDetectBlock-->VideoRendererBlock;
```
Note: The `CVHandDetectBlock` internally includes a `videoscale` element to resize input to 320x240 before the `handdetect` GStreamer element.
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Assuming SystemVideoSourceBlock is already created and configured as 'videoSource'
var handDetectSettings = new CVHandDetectSettings
{
FistCascadeFile = "fist.xml", // Adjust path, default
PalmCascadeFile = "palm.xml", // Adjust path, default
Display = true, // Default
ROI = new VisioForge.Core.Types.Rect(0, 0, 320, 240) // Example: full frame of scaled image, default is new Rect()
};
// handDetectSettings.ProcessPaths(pipeline.Context); // or appropriate context
var handDetectBlock = new CVHandDetectBlock(handDetectSettings);
handDetectBlock.HandDetected += (s, e) =>
{
Console.WriteLine($"Hands found: {e.Hands.Length}");
foreach (var hand in e.Hands)
{
Console.WriteLine($" Hand at [{hand.Position.Left},{hand.Position.Top},{hand.Position.Width},{hand.Position.Height}], Gesture: {hand.Gesture}");
}
};
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1
// Connect blocks
pipeline.Connect(videoSource.Output, handDetectBlock.Input0);
pipeline.Connect(handDetectBlock.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux.
### Remarks
Requires Haar cascade XML files for fist and palm detection. The input video is internally scaled to 320x240 for processing by the `handdetect` element. The `ProcessBusMessage` method handles GStreamer messages to fire `HandDetected`.
## CV Laplace Block
The CV Laplace block applies a Laplace operator to the video stream, which highlights regions of rapid intensity change, often used for edge detection.
### Block info
Name: `CVLaplaceBlock` (GStreamer element: `cvlaplace`).
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Input video | Uncompressed video | 1 |
| Output video | Uncompressed video | 1 |
### Settings
The `CVLaplaceBlock` is configured using `CVLaplaceSettings`. Key properties:
- `ApertureSize` (int): Aperture size for the Sobel operator used internally (e.g., 1, 3, 5, or 7). Default 3.
- `Scale` (double): Optional scale factor for the computed Laplacian values. Default 1.
- `Shift` (double): Optional delta value that is added to the results prior to storing them. Default 0.
- `Mask` (bool): If true, the output is a mask; otherwise, it's the original image with the effect applied. Default is true.
### Sample pipeline
```mermaid
graph LR;
SystemVideoSourceBlock-->CVLaplaceBlock;
CVLaplaceBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Assuming SystemVideoSourceBlock is already created and configured as 'videoSource'
var laplaceSettings = new CVLaplaceSettings
{
ApertureSize = 3, // Example value
Scale = 1.0, // Example value
Shift = 0.0, // Example value
Mask = true
};
var laplaceBlock = new CVLaplaceBlock(laplaceSettings);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1
// Connect blocks
pipeline.Connect(videoSource.Output, laplaceBlock.Input0);
pipeline.Connect(laplaceBlock.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux.
### Remarks
Ensure the VisioForge OpenCV NuGet package is referenced in your project.
## CV Motion Cells Block
The CV Motion Cells block detects motion in a video stream by dividing the frame into a grid of cells and analyzing changes within these cells.
### Block info
Name: `CVMotionCellsBlock` (GStreamer element: `motioncells`).
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Input video | Uncompressed video | 1 |
| Output video | Uncompressed video | 1 |
### Settings
The `CVMotionCellsBlock` is configured using `CVMotionCellsSettings`. Key properties:
- `CalculateMotion` (bool): Enable or disable motion calculation. Default `true`.
- `CellsColor` (`SKColor`): Color to draw motion cells if `Display` is true. Default `SKColors.Red`.
- `DataFile` (string): Path to a data file for loading/saving cell configuration. Extension is handled separately by `DataFileExtension`.
- `DataFileExtension` (string): Extension for the data file (e.g., "dat").
- `Display` (bool): If `true`, draws the grid and motion indication on the output video. Default `true`.
- `Gap` (`TimeSpan`): Interval after which motion is considered finished and a "motion finished" bus message is posted. Default `TimeSpan.FromSeconds(5)`. Note that this is a time interval, not a pixel gap between cells.
- `GridSize` (`Size`): Number of cells in the grid (Width x Height). Default `new Size(10, 10)`.
- `MinimumMotionFrames` (int): Minimum number of frames motion must be detected in a cell to trigger. Default 1.
- `MotionCellsIdx` (string): Comma-separated string of cell indices (e.g., "0:0,1:1") to monitor for motion.
- `MotionCellBorderThickness` (int): Thickness of the border for cells with detected motion. Default 1.
- `MotionMaskCellsPos` (string): String defining cell positions for a motion mask.
- `MotionMaskCoords` (string): String defining coordinates for a motion mask.
- `PostAllMotion` (bool): Post all motion events. Default `false`.
- `PostNoMotion` (`TimeSpan`): Time after which a "no motion" event is posted if no motion is detected. Default `TimeSpan.Zero` (disabled).
- `Sensitivity` (double): Motion sensitivity. Expected range might be 0.0 to 1.0. Default `0.5`.
- `Threshold` (double): Threshold for motion detection, representing the fraction of cells that need to have moved. Default `0.01`.
- `UseAlpha` (bool): Use alpha channel for drawing. Default `true`.
### Events
- `MotionDetected`: Occurs when motion is detected or changes state. Provides `CVMotionCellsEventArgs`:
- `Cells`: String indicating which cells have motion (e.g., "0:0,1:2").
- `StartedTime`: Timestamp when motion began in the current event scope.
- `FinishedTime`: Timestamp when motion finished (if applicable to the event).
- `CurrentTime`: Timestamp of the current frame related to the event.
- `IsMotion`: Boolean indicating if the event signifies motion (`true`) or no motion (`false`).
### Sample pipeline
```mermaid
graph LR;
SystemVideoSourceBlock-->CVMotionCellsBlock;
CVMotionCellsBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Assuming SystemVideoSourceBlock is already created and configured as 'videoSource'
var motionCellsSettings = new CVMotionCellsSettings
{
GridSize = new VisioForge.Core.Types.Size(8, 6), // Example: 8x6 grid, default is new Size(10,10)
Sensitivity = 0.75, // Example value, C# default is 0.5. Represents sensitivity.
Threshold = 0.05, // Example value, C# default is 0.01. Represents fraction of moved cells.
Display = true, // Default is true
CellsColor = SKColors.Aqua, // Example color, default is SKColors.Red
PostNoMotion = TimeSpan.FromSeconds(5) // Post no_motion after 5s of inactivity, default is TimeSpan.Zero
};
var motionCellsBlock = new CVMotionCellsBlock(motionCellsSettings);
motionCellsBlock.MotionDetected += (s, e) =>
{
if (e.IsMotion)
{
Console.WriteLine($"Motion DETECTED at {e.CurrentTime}. Cells: {e.Cells}. Started: {e.StartedTime}");
}
else
{
Console.WriteLine($"Motion FINISHED or NO MOTION at {e.CurrentTime}. Finished: {e.FinishedTime}");
}
};
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1
// Connect blocks
pipeline.Connect(videoSource.Output, motionCellsBlock.Input0);
pipeline.Connect(motionCellsBlock.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux.
### Remarks
The `ProcessBusMessage` method handles GStreamer messages to fire `MotionDetected`. Event structure provides timestamps for motion start, finish, and current event time.
## CV Smooth Block
The CV Smooth block applies various smoothing (blurring) filters to the video stream.
### Block info
Name: `CVSmoothBlock` (GStreamer element: `cvsmooth`).
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Input video | Uncompressed video | 1 |
| Output video | Uncompressed video | 1 |
### Settings
The `CVSmoothBlock` is configured using `CVSmoothSettings`. Key properties:
- `Type` (`CVSmoothType` enum): Type of smoothing filter to apply (`Blur`, `Gaussian`, `Median`, `Bilateral`). Default `CVSmoothType.Gaussian`.
- `KernelWidth` (int): Width of the kernel for `Blur`, `Gaussian`, `Median` filters. Default 3.
- `KernelHeight` (int): Height of the kernel for `Blur`, `Gaussian`, `Median` filters. Default 3.
- `Width` (int): Width of the area to blur. Default `int.MaxValue` (full frame).
- `Height` (int): Height of the area to blur. Default `int.MaxValue` (full frame).
- `PositionX` (int): X position for the blur area. Default 0.
- `PositionY` (int): Y position for the blur area. Default 0.
- `Color` (double): Sigma for color space (for Bilateral filter) or standard deviation (for Gaussian if `SpatialSigma` is 0). Default 0.
- `SpatialSigma` (double): Sigma for coordinate space (for Bilateral and Gaussian filters). For Gaussian, if 0, it's calculated from `KernelWidth`/`KernelHeight`. Default 0.
### Sample pipeline
```mermaid
graph LR;
SystemVideoSourceBlock-->CVSmoothBlock;
CVSmoothBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Assuming SystemVideoSourceBlock is already created and configured as 'videoSource'
var smoothSettings = new CVSmoothSettings
{
Type = CVSmoothType.Gaussian, // Example: Gaussian blur, also the default
KernelWidth = 5, // Kernel width, default is 3
KernelHeight = 5, // Kernel height, default is 3
SpatialSigma = 1.5 // Sigma for Gaussian. If 0 (default), it's calculated from kernel size.
};
var smoothBlock = new CVSmoothBlock(smoothSettings);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1
// Connect blocks
pipeline.Connect(videoSource.Output, smoothBlock.Input0);
pipeline.Connect(smoothBlock.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux.
### Remarks
Ensure the VisioForge OpenCV NuGet package is referenced in your project. Which GStreamer element properties (`color`, `spatial`, `kernel-width`, `kernel-height`) take effect depends on the chosen `Type`. Use `KernelWidth` and `KernelHeight` for the kernel dimensions; `Width`, `Height`, `PositionX`, and `PositionY` restrict the blur to a sub-region when it should not cover the full frame.
## CV Sobel Block
The CV Sobel block applies a Sobel operator to the video stream, which is used to calculate the derivative of an image intensity function, typically for edge detection.
### Block info
Name: `CVSobelBlock` (GStreamer element: `cvsobel`).
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Input video | Uncompressed video | 1 |
| Output video | Uncompressed video | 1 |
### Settings
The `CVSobelBlock` is configured using `CVSobelSettings`. Key properties:
- `XOrder` (int): Order of the derivative x. Default 1.
- `YOrder` (int): Order of the derivative y. Default 1.
- `ApertureSize` (int): Size of the extended Sobel kernel (1, 3, 5, or 7). Default 3.
- `Mask` (bool): If true, the output is a mask; otherwise, it's the original image with the effect applied. Default is true.
### Sample pipeline
```mermaid
graph LR;
SystemVideoSourceBlock-->CVSobelBlock;
CVSobelBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Assuming SystemVideoSourceBlock is already created and configured as 'videoSource'
var sobelSettings = new CVSobelSettings
{
XOrder = 1, // Default is 1. Used for order of the derivative X.
YOrder = 0, // Example: Use 0 for Y-order to primarily detect vertical edges. C# class default is 1.
ApertureSize = 3, // Default is 3. Size of the extended Sobel kernel.
Mask = true // Default is true. Output as a mask.
};
var sobelBlock = new CVSobelBlock(sobelSettings);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1
// Connect blocks
pipeline.Connect(videoSource.Output, sobelBlock.Input0);
pipeline.Connect(sobelBlock.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux.
### Remarks
Ensure the VisioForge OpenCV NuGet package is referenced in your project.
## CV Template Match Block
The CV Template Match block searches for occurrences of a template image within the video stream.
### Block info
Name: `CVTemplateMatchBlock` (GStreamer element: `templatematch`).
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Input video | Uncompressed video | 1 |
| Output video | Uncompressed video | 1 |
### Settings
The `CVTemplateMatchBlock` is configured using `CVTemplateMatchSettings`. Key properties:
- `TemplateImage` (string): Path to the template image file (e.g., PNG, JPG) to search for.
- `Method` (`CVTemplateMatchMethod` enum): The comparison method to use (e.g., `Sqdiff`, `CcorrNormed`, `CcoeffNormed`). Default `CVTemplateMatchMethod.Correlation`.
- `Display` (bool): If `true`, draws a rectangle around the best match on the output video. Default `true`.
### Events
- `TemplateMatch`: Occurs when a template match is found. Provides `CVTemplateMatchEventArgs`:
- `Rect`: A `Types.Rect` object representing the location (x, y, width, height) of the best match.
- `Result`: A double value representing the quality or result of the match, depending on the method used.
### Sample pipeline
```mermaid
graph LR;
SystemVideoSourceBlock-->CVTemplateMatchBlock;
CVTemplateMatchBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Assuming SystemVideoSourceBlock is already created and configured as 'videoSource'
// Ensure the template image file exists and is accessible.
var templateMatchSettings = new CVTemplateMatchSettings("path/to/your/template.png") // Adjust path as needed
{
// Method: Specifies the comparison method.
// Example: CVTemplateMatchMethod.CcoeffNormed is often a good choice.
// C# class default is CVTemplateMatchMethod.Correlation.
Method = CVTemplateMatchMethod.CcoeffNormed,
// Display: If true, draws a rectangle around the best match.
// C# class default is true.
Display = true
};
var templateMatchBlock = new CVTemplateMatchBlock(templateMatchSettings);
templateMatchBlock.TemplateMatch += (s, e) =>
{
Console.WriteLine($"Template matched at [{e.Rect.Left},{e.Rect.Top},{e.Rect.Width},{e.Rect.Height}] with result: {e.Result}");
};
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1
// Connect blocks
pipeline.Connect(videoSource.Output, templateMatchBlock.Input0);
pipeline.Connect(templateMatchBlock.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux.
### Remarks
Ensure the VisioForge OpenCV NuGet package and a valid template image file are available. The `ProcessBusMessage` method handles GStreamer messages to fire the `TemplateMatch` event.
## CV Text Overlay Block
The CV Text Overlay block renders text onto the video stream using OpenCV drawing functions.
### Block info
Name: `CVTextOverlayBlock` (GStreamer element: `opencvtextoverlay`).
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Input video | Uncompressed video | 1 |
| Output video | Uncompressed video | 1 |
### Settings
The `CVTextOverlayBlock` is configured using `CVTextOverlaySettings`. Key properties:
- `Text` (string): The text string to overlay. Default: `"Default text"`.
- `X` (int): X-coordinate of the bottom-left corner of the text string. Default: `50`.
- `Y` (int): Y-coordinate of the text baseline, measured from the top of the frame (OpenCV's origin is the top-left corner). Default: `50`.
- `FontWidth` (double): Font scale factor that is multiplied by the font-specific base size. Default: `1.0`.
- `FontHeight` (double): Font scale factor for the height, analogous to `FontWidth`; the underlying GStreamer element typically exposes a single font scale, so `FontWidth` alone is usually sufficient. Default: `1.0`.
- `FontThickness` (int): Thickness of the lines used to draw a text. Default: `1`.
- `Color` (`SKColor`): Color of the text. Default: `SKColors.Black`.
### Sample pipeline
```mermaid
graph LR;
SystemVideoSourceBlock-->CVTextOverlayBlock;
CVTextOverlayBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Assuming SystemVideoSourceBlock is already created and configured as 'videoSource'
var textOverlaySettings = new CVTextOverlaySettings
{
Text = "VisioForge MediaBlocks.Net ROCKS!", // Default: "Default text"
X = 20, // X position of the text start. Default: 50
Y = 40, // Y position of the text baseline (from top). Default: 50
FontWidth = 1.2, // Font scale. Default: 1.0
FontHeight = 1.2, // Font scale (usually FontWidth is sufficient for opencvtextoverlay). Default: 1.0
FontThickness = 2, // Default: 1
Color = SKColors.Blue // Default: SKColors.Black
};
var textOverlayBlock = new CVTextOverlayBlock(textOverlaySettings);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1
// Connect blocks
pipeline.Connect(videoSource.Output, textOverlayBlock.Input0);
pipeline.Connect(textOverlayBlock.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux.
### Remarks
Ensure the VisioForge OpenCV NuGet package is referenced. The GStreamer properties `colorR`, `colorG`, `colorB` are set based on the `Color` property.
## CV Tracker Block
The CV Tracker block implements various object tracking algorithms to follow an object defined by an initial bounding box in a video stream.
### Block info
Name: `CVTrackerBlock` (GStreamer element: `cvtracker`).
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Input video | Uncompressed video | 1 |
| Output video | Uncompressed video | 1 |
### Settings
The `CVTrackerBlock` is configured using `CVTrackerSettings`. Key properties:
- `Algorithm` (`CVTrackerAlgorithm` enum): Specifies the tracking algorithm (`Boosting`, `CSRT`, `KCF`, `MedianFlow`, `MIL`, `MOSSE`, `TLD`). Default: `CVTrackerAlgorithm.MedianFlow`.
- `InitialRect` (`Rect`): The initial bounding box (Left, Top, Width, Height) of the object to track. Default: `new Rect(50, 50, 100, 100)`.
- `DrawRect` (bool): If `true`, draws a rectangle around the tracked object on the output video. Default: `true`.
### Sample pipeline
```mermaid
graph LR;
SystemVideoSourceBlock-->CVTrackerBlock;
CVTrackerBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Assuming SystemVideoSourceBlock is already created and configured as 'videoSource'
var trackerSettings = new CVTrackerSettings
{
Algorithm = CVTrackerAlgorithm.CSRT, // CSRT is often a good general-purpose tracker. Default: CVTrackerAlgorithm.MedianFlow
InitialRect = new VisioForge.Core.Types.Rect(150, 120, 80, 80), // Define your initial object ROI. Default: new Rect(50, 50, 100, 100)
DrawRect = true // Default: true
};
var trackerBlock = new CVTrackerBlock(trackerSettings);
// Note: the tracker initializes with InitialRect.
// To re-target tracking on a new object or location at runtime:
// 1. Pause or stop the pipeline.
// 2. Update trackerBlock.Settings.InitialRect (or create new CVTrackerSettings).
//    Directly modifying InitialRect on a running pipeline may not re-initialize
//    the underlying GStreamer element; you may need to remove and re-add the
//    block, or check the SDK documentation for live update capabilities.
// 3. Resume or restart the pipeline.
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1
// Connect blocks
pipeline.Connect(videoSource.Output, trackerBlock.Input0);
pipeline.Connect(trackerBlock.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux.
### Remarks
Ensure the VisioForge OpenCV NuGet package is referenced. The choice of tracking algorithm can significantly impact performance and accuracy. Some algorithms (like CSRT, KCF) are generally more robust than older ones (like Boosting, MedianFlow). Some trackers might require OpenCV contrib modules to be available in your OpenCV build/distribution.
---END OF PAGE---
# Local File: .\dotnet\mediablocks\OpenGL\index.md
---
title: .Net Media OpenGL Video Effects Guide
description: Explore a comprehensive guide to OpenGL video effects available in VisioForge Media Blocks SDK .Net. Learn about various effects, their settings, and related OpenGL functionalities.
sidebar_label: OpenGL Effects
---
# OpenGL Video Effects - VisioForge Media Blocks SDK .Net
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
OpenGL video effects in VisioForge Media Blocks SDK .Net allow for powerful, hardware-accelerated manipulation of video streams. These effects can be applied to video content processed within an OpenGL context, typically via blocks like `GLVideoEffectsBlock` or custom OpenGL rendering pipelines. This guide covers the available effects, their configuration settings, and other related OpenGL types.
## Base Effect: `GLBaseVideoEffect`
All OpenGL video effects inherit from the `GLBaseVideoEffect` class, which provides common properties and events.
| Property | Type | Description |
|----------|-----------------------|--------------------------------------------------|
| `Name` | `string` | The internal name of the effect (read-only). |
| `ID` | `GLVideoEffectID` | The unique identifier for the effect (read-only). |
| `Index` | `int` | The index of the effect in a chain. |
**Events:**
- `OnUpdate`: Occurs when effect properties need to be updated in the pipeline. Call `OnUpdateCall()` to trigger it.
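For example, to change an effect's parameters while the pipeline is running, set the properties and then call `OnUpdateCall()` so the change is pushed to the OpenGL pipeline. A minimal sketch, assuming `colorBalance` is a `GLColorBalanceVideoEffect` (described below) that has already been added to a running pipeline:
```csharp
// Adjust effect parameters at runtime
colorBalance.Brightness = 0.25; // -1.0 to 1.0; 0 means no change
colorBalance.Saturation = 1.5;  // 0.0 and up; 1 means no change
// Notify the pipeline that the effect's properties changed
colorBalance.OnUpdateCall();
```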
## Available Video Effects
This section details the various OpenGL video effects you can use. These effects are typically added to a `GLVideoEffectsBlock` or a similar OpenGL processing element.
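The sketch below shows the general workflow. The effect-adding method name `Video_Effects_Add`, the parameterless effect constructors, and the pin names are assumptions here; check the actual `GLVideoEffectsBlock` API.
```csharp
var pipeline = new MediaBlocksPipeline();
// 'videoSource' and 'videoRenderer' are created and configured elsewhere
// OpenGL effects block hosting a chain of effects
var glEffects = new GLVideoEffectsBlock();
// Add effects to the chain (hypothetical method name)
glEffects.Video_Effects_Add(new GLGrayscaleVideoEffect());
glEffects.Video_Effects_Add(new GLSepiaVideoEffect());
pipeline.Connect(videoSource.Output, glEffects.Input);
pipeline.Connect(glEffects.Output, videoRenderer.Input);
await pipeline.StartAsync();
```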
### Alpha Effect (`GLAlphaVideoEffect`)
Replaces a selected color with an alpha channel or sets/adjusts the existing alpha channel.
**Properties:**
| Property | Type | Default Value | Description |
|--------------------|--------------------------|------------------|--------------------------------------------------------|
| `Alpha` | `double` | `1.0` | The value for the alpha channel. |
| `Angle` | `float` | `20` | The size of the colorcube to change (sensitivity radius for color matching). |
| `BlackSensitivity` | `uint` | `100` | The sensitivity to dark colors. |
| `Mode` | `GLAlphaVideoEffectMode` | `Set` | The method used for alpha modification. |
| `NoiseLevel` | `float` | `2` | The size of noise radius (pixels to ignore around the matched color). |
| `CustomColor` | `SKColor` | `SKColors.Green` | Custom color value for `Custom` chroma key mode. |
| `WhiteSensitivity` | `uint` | `100` | The sensitivity to bright colors. |
**Associated Enum: `GLAlphaVideoEffectMode`**
Defines the mode of operation for the Alpha video effect.
| Value | Description |
|----------|----------------------------------------|
| `Set` | Set/adjust alpha channel directly using the `Alpha` property. |
| `Green` | Chroma Key on pure green. |
| `Blue` | Chroma Key on pure blue. |
| `Custom` | Chroma Key on the color specified by `CustomColor`. |
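For instance, a green-screen (chroma key) configuration might look like this sketch (constructor assumed parameterless; the effect would then be added to a GL effects block as shown earlier):
```csharp
// Chroma key on pure green with default sensitivities
var alphaEffect = new GLAlphaVideoEffect
{
    Mode = GLAlphaVideoEffectMode.Green,
    Angle = 20,             // color cube size (matching radius)
    NoiseLevel = 2,         // noise radius in pixels
    BlackSensitivity = 100, // sensitivity to dark colors
    WhiteSensitivity = 100  // sensitivity to bright colors
};
```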
### Blur Effect (`GLBlurVideoEffect`)
Applies a blur effect using a 9x9 separable convolution. This effect does not have additional configurable properties beyond those inherited from `GLBaseVideoEffect`.
### Bulge Effect (`GLBulgeVideoEffect`)
Creates a bulge distortion on the video. This effect does not have additional configurable properties beyond those inherited from `GLBaseVideoEffect`.
### Color Balance Effect (`GLColorBalanceVideoEffect`)
Adjusts the color balance of the video, including brightness, contrast, hue, and saturation.
**Properties:**
| Property | Type | Default Value | Description |
|--------------|----------|---------------|--------------------------------------------------|
| `Brightness` | `double` | `0` | Adjusts brightness (-1.0 to 1.0, 0 means no change). |
| `Contrast` | `double` | `1` | Adjusts contrast (0.0 to infinity, 1 means no change). |
| `Hue` | `double` | `0` | Adjusts hue (-1.0 to 1.0, 0 means no change). |
| `Saturation` | `double` | `1` | Adjusts saturation (0.0 to infinity, 1 means no change). |
### Deinterlace Effect (`GLDeinterlaceVideoEffect`)
Applies a deinterlacing filter to the video.
**Properties:**
| Property | Type | Default Value | Description |
|----------|-----------------------|-----------------|-------------------------------------|
| `Method` | `GLDeinterlaceMethod` | `VerticalBlur` | The deinterlacing method to use. |
**Associated Enum: `GLDeinterlaceMethod`**
Defines the method for the Deinterlace video effect.
| Value | Description |
|----------------|-----------------------------------------|
| `VerticalBlur` | Vertical blur method. |
| `MAAD` | Motion Adaptive: Advanced Detection. |
### Fish Eye Effect (`GLFishEyeVideoEffect`)
Applies a fish-eye lens distortion effect. This effect does not have additional configurable properties beyond those inherited from `GLBaseVideoEffect`.
### Flip Effect (`GLFlipVideoEffect`)
Flips or rotates the video.
**Properties:**
| Property | Type | Default Value | Description |
|----------|---------------------|---------------|-------------------------------------|
| `Method` | `GLFlipVideoMethod` | `None` | The flip or rotation method to use. |
**Associated Enum: `GLFlipVideoMethod`**
Defines the video flip or rotation method.
| Value | Description |
|----------------------|----------------------------------------------|
| `None` | No rotation. |
| `Clockwise` | Rotate clockwise 90 degrees. |
| `Rotate180` | Rotate 180 degrees. |
| `CounterClockwise` | Rotate counter-clockwise 90 degrees. |
| `HorizontalFlip` | Flip horizontally. |
| `VerticalFlip` | Flip vertically. |
| `UpperLeftDiagonal` | Flip across upper left/lower right diagonal. |
| `UpperRightDiagonal` | Flip across upper right/lower left diagonal. |
### Glow Lighting Effect (`GLGlowLightingVideoEffect`)
Adds a glow lighting effect to the video. This effect does not have additional configurable properties beyond those inherited from `GLBaseVideoEffect`.
### Grayscale Effect (`GLGrayscaleVideoEffect`)
Converts the video to grayscale. This effect does not have additional configurable properties beyond those inherited from `GLBaseVideoEffect`.
### Heat Effect (`GLHeatVideoEffect`)
Applies a heat signature-like effect to the video. This effect does not have additional configurable properties beyond those inherited from `GLBaseVideoEffect`.
### Laplacian Effect (`GLLaplacianVideoEffect`)
Applies a Laplacian edge detection filter.
**Properties:**
| Property | Type | Default Value | Description |
|----------|---------|---------------|-------------------------------------------------------------------|
| `Invert` | `bool` | `false` | If `true`, inverts colors to get dark edges on a bright background. |
### Light Tunnel Effect (`GLLightTunnelVideoEffect`)
Creates a light tunnel visual effect. This effect does not have additional configurable properties beyond those inherited from `GLBaseVideoEffect`.
### Luma Cross Processing Effect (`GLLumaCrossProcessingVideoEffect`)
Applies a luma cross-processing (often "xpro") effect. This effect does not have additional configurable properties beyond those inherited from `GLBaseVideoEffect`.
### Mirror Effect (`GLMirrorVideoEffect`)
Applies a mirror effect to the video. This effect does not have additional configurable properties beyond those inherited from `GLBaseVideoEffect`.
### Resize Effect (`GLResizeVideoEffect`)
Resizes the video to the specified dimensions.
**Properties:**
| Property | Type | Description |
|----------|-------|----------------------------------------|
| `Width` | `int` | The target width for the video resize. |
| `Height` | `int` | The target height for the video resize.|
### Sepia Effect (`GLSepiaVideoEffect`)
Applies a sepia tone effect to the video. This effect does not have additional configurable properties beyond those inherited from `GLBaseVideoEffect`.
### Sin City Effect (`GLSinCityVideoEffect`)
Applies a "Sin City" movie style effect (grayscale with red highlights). This effect does not have additional configurable properties beyond those inherited from `GLBaseVideoEffect`.
### Sobel Effect (`GLSobelVideoEffect`)
Applies a Sobel edge detection filter.
**Properties:**
| Property | Type | Default Value | Description |
|----------|---------|---------------|-------------------------------------------------------------------|
| `Invert` | `bool` | `false` | If `true`, inverts colors to get dark edges on a bright background. |
### Square Effect (`GLSquareVideoEffect`)
Applies a "square" distortion or pixelation effect. This effect does not have additional configurable properties beyond those inherited from `GLBaseVideoEffect`.
### Squeeze Effect (`GLSqueezeVideoEffect`)
Applies a squeeze distortion effect. This effect does not have additional configurable properties beyond those inherited from `GLBaseVideoEffect`.
### Stretch Effect (`GLStretchVideoEffect`)
Applies a stretch distortion effect. This effect does not have additional configurable properties beyond those inherited from `GLBaseVideoEffect`.
### Transformation Effect (`GLTransformationVideoEffect`)
Applies 3D transformations to the video, including rotation, scaling, and translation.
**Properties:**
| Property | Type | Default Value | Description |
|----------------|---------|---------------|-----------------------------------------------------------------------|
| `FOV` | `float` | `90.0f` | Field of view angle in degrees for perspective projection. |
| `Ortho` | `bool` | `false` | If `true`, uses orthographic projection; otherwise, perspective. |
| `PivotX` | `float` | `0.0f` | X-coordinate of the rotation pivot point (0 is center). |
| `PivotY` | `float` | `0.0f` | Y-coordinate of the rotation pivot point (0 is center). |
| `PivotZ` | `float` | `0.0f` | Z-coordinate of the rotation pivot point (0 is center). |
| `RotationX` | `float` | `0.0f` | Rotation around the X-axis in degrees. |
| `RotationY` | `float` | `0.0f` | Rotation around the Y-axis in degrees. |
| `RotationZ` | `float` | `0.0f` | Rotation around the Z-axis in degrees. |
| `ScaleX` | `float` | `1.0f` | Scale multiplier for the X-axis. |
| `ScaleY` | `float` | `1.0f` | Scale multiplier for the Y-axis. |
| `TranslationX` | `float` | `0.0f` | Translation along the X-axis (universal coordinates [0-1]). |
| `TranslationY` | `float` | `0.0f` | Translation along the Y-axis (universal coordinates [0-1]). |
| `TranslationZ` | `float` | `0.0f` | Translation along the Z-axis (universal coordinates [0-1], depth). |
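As an illustration, the sketch below rotates the video around the Z-axis and scales it down (constructor assumed parameterless):
```csharp
// Rotate 45 degrees around the Z axis and shrink to 80%
var transform = new GLTransformationVideoEffect
{
    RotationZ = 45.0f,
    ScaleX = 0.8f,
    ScaleY = 0.8f,
    FOV = 90.0f,   // default perspective field of view
    Ortho = false  // keep perspective projection
};
```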
### Twirl Effect (`GLTwirlVideoEffect`)
Applies a twirl distortion effect. This effect does not have additional configurable properties beyond those inherited from `GLBaseVideoEffect`.
### X-Ray Effect (`GLXRayVideoEffect`)
Applies an X-ray like visual effect. This effect does not have additional configurable properties beyond those inherited from `GLBaseVideoEffect`.
## OpenGL Effect Identification: `GLVideoEffectID` Enum
This enumeration lists all available OpenGL video effect types, used by `GLBaseVideoEffect.ID`.
| Value | Description |
|------------------|-------------------------------------------|
| `ColorBalance` | The color balance effect. |
| `Grayscale` | The grayscale effect. |
| `Resize` | The resize effect. |
| `Deinterlace` | The deinterlace effect. |
| `Flip` | The flip effect. |
| `Blur` | Blur with 9x9 separable convolution effect. |
| `FishEye` | The fish eye effect. |
| `GlowLighting` | The glow lighting effect. |
| `Heat` | The heat signature effect. |
| `LumaX` | The luma cross processing effect. |
| `Mirror` | The mirror effect. |
| `Sepia` | The sepia effect. |
| `Square` | The square effect. |
| `XRay` | The X-ray effect. |
| `Stretch` | The stretch effect. |
| `LightTunnel` | The light tunnel effect. |
| `Twirl` | The twirl effect. |
| `Squeeze` | The squeeze effect. |
| `SinCity` | The sin city movie gray-red effect. |
| `Bulge` | The bulge effect. |
| `Sobel` | The sobel effect. |
| `Laplacian` | The laplacian effect. |
| `Alpha` | The alpha channels effect. |
| `Transformation` | The transformation effect. |
## OpenGL Rendering and View Configuration
These types assist in configuring how video is rendered or viewed in an OpenGL context, especially for specialized scenarios like VR or custom display setups.
### Equirectangular View Settings (`GLEquirectangularViewSettings`)
Manages settings for rendering equirectangular (360-degree) video, commonly used in VR applications. Implements `IVRVideoControl`.
**Properties:**
| Property | Type | Default | Description |
|-----------------|--------------|-------------------|------------------------------------------------|
| `VideoWidth` | `int` | (readonly) | Width of the source video. |
| `VideoHeight` | `int` | (readonly) | Height of the source video. |
| `FieldOfView` | `float` | `80.0f` | Field of view in degrees. |
| `Yaw` | `float` | `0.0f` | Yaw (rotation around Y-axis) in degrees. |
| `Pitch` | `float` | `0.0f` | Pitch (rotation around X-axis) in degrees. |
| `Roll` | `float` | `0.0f` | Roll (rotation around Z-axis) in degrees. |
| `Mode` | `VRMode` | `Equirectangular` | The VR mode (supports `Equirectangular`). |
**Methods:**
- `IsModeSupported(VRMode mode)`: Checks if the specified `VRMode` is supported.
**Events:**
- `SettingsChanged`: Occurs when any view setting is changed.
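A minimal sketch of adjusting the view at runtime, assuming `viewSettings` is a `GLEquirectangularViewSettings` instance attached to a renderer:
```csharp
// Pan and tilt the 360-degree view in response to user input
viewSettings.Yaw += 5.0f;         // look 5 degrees to the right
viewSettings.Pitch = 10.0f;       // tilt slightly upward
viewSettings.FieldOfView = 95.0f; // widen the view
// Check mode support before assigning it
if (viewSettings.IsModeSupported(VRMode.Equirectangular))
{
    viewSettings.Mode = VRMode.Equirectangular;
}
```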
### Video Renderer Settings (`GLVideoRendererSettings`)
Configures general properties for an OpenGL-based video renderer.
**Properties:**
| Property | Type | Default | Description |
|--------------------|-------------------------------|-------------|----------------------------------------------------------------------|
| `ForceAspectRatio` | `bool` | `true` | Whether scaling will respect the original aspect ratio. |
| `IgnoreAlpha` | `bool` | `true` | Whether alpha channel will be ignored (treated as black). |
| `PixelAspectRatio` | `System.Tuple<int, int>` | `(0, 1)` | Pixel aspect ratio of the display device (numerator, denominator). |
| `Rotation` | `GLVideoRendererRotateMethod` | `None` | Specifies the rotation applied to the video. |
**Associated Enum: `GLVideoRendererRotateMethod`**
Defines rotation methods for the OpenGL video renderer.
| Value | Description |
|------------------|-----------------------------------------|
| `None` | No rotation. |
| `_90C` | Rotate 90 degrees clockwise. |
| `_180` | Rotate 180 degrees. |
| `_90CC` | Rotate 90 degrees counter-clockwise. |
| `FlipHorizontal` | Flip horizontally. |
| `FlipVertical` | Flip vertically. |
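A minimal configuration sketch (constructor assumed parameterless):
```csharp
var rendererSettings = new GLVideoRendererSettings
{
    ForceAspectRatio = true,                    // keep the original aspect ratio (default)
    IgnoreAlpha = true,                         // treat alpha as black (default)
    Rotation = GLVideoRendererRotateMethod._90C // rotate 90 degrees clockwise
};
```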
## Custom OpenGL Shaders
Allows for the application of custom GLSL shaders to the video stream.
### Shader Definition (`GLShader`)
Represents a pair of vertex and fragment shaders.
**Properties:**
| Property | Type | Description |
|------------------|----------|-----------------------------------------------|
| `VertexShader` | `string` | The GLSL source code for the vertex shader. |
| `FragmentShader` | `string` | The GLSL source code for the fragment shader. |
**Constructors:**
- `GLShader()`
- `GLShader(string vertexShader, string fragmentShader)`
### Shader Settings (`GLShaderSettings`)
Configures custom GLSL shaders for use in the pipeline.
**Properties:**
| Property | Type | Description |
|------------|--------------------------------------|--------------------------------------------------|
| `Vertex` | `string` | The GLSL source code for the vertex shader. |
| `Fragment` | `string` | The GLSL source code for the fragment shader. |
| `Uniforms` | `Dictionary<string, object>` | A dictionary of uniform variables (parameters) to be passed to the shaders. |
**Constructors:**
- `GLShaderSettings()`
- `GLShaderSettings(string vertex, string fragment)`
- `GLShaderSettings(GLShader shader)`
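A minimal sketch using the documented two-argument constructor. The GLSL below follows common GStreamer `glshader` conventions (`v_texcoord` varying, `tex` sampler); the exact attribute and uniform names expected by the SDK's OpenGL pipeline are an assumption here:
```csharp
// Fragment shader that outputs the inverted colors of the input frame
const string fragment = @"
varying vec2 v_texcoord;
uniform sampler2D tex;
void main()
{
    vec4 c = texture2D(tex, v_texcoord);
    gl_FragColor = vec4(1.0 - c.rgb, c.a);
}";
// Passing null for the vertex shader to use a default pass-through (assumption)
var shaderSettings = new GLShaderSettings(null, fragment);
// Optional uniforms passed to the shaders (dictionary assumed initialized)
shaderSettings.Uniforms["strength"] = 1.0f;
```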
## Image Overlays in OpenGL
Provides settings for overlaying static images onto a video stream within an OpenGL context.
### Overlay Settings (`GLOverlaySettings`)
Defines the properties of an image overlay.
**Properties:**
| Property | Type | Default | Description |
|------------|----------|---------|---------------------------------------------------|
| `Filename` | `string` | (N/A) | Path to the image file (read-only after init). |
| `Data` | `byte[]` | (N/A) | Image data as a byte array (read-only after init).|
| `X` | `int` | | X-coordinate of the overlay's top-left corner. |
| `Y` | `int` | | Y-coordinate of the overlay's top-left corner. |
| `Width` | `int` | | Width of the overlay. |
| `Height` | `int` | | Height of the overlay. |
| `Alpha` | `double` | `1.0` | Opacity of the overlay (0.0 transparent to 1.0 opaque). |
**Constructor:**
- `GLOverlaySettings(string filename)`
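A minimal sketch; `logo.png` is a placeholder path:
```csharp
// Overlay a logo in the top-left corner at 80% opacity
var overlay = new GLOverlaySettings("logo.png")
{
    X = 10,
    Y = 10,
    Width = 200,
    Height = 100,
    Alpha = 0.8
};
```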
## OpenGL Video Mixing
These types are used to configure an OpenGL-based video mixer, allowing multiple video streams to be combined and composited.
### Mixer Settings (`GLVideoMixerSettings`)
Extends `VideoMixerBaseSettings` for OpenGL-specific video mixing. It manages a list of `GLVideoMixerStream` objects and inherits properties like `Width`, `Height`, and `FrameRate`.
**Methods:**
- `AddStream(GLVideoMixerStream stream)`: Adds a stream to the mixer.
- `RemoveStream(GLVideoMixerStream stream)`: Removes a stream from the mixer.
- `SetStream(int index, GLVideoMixerStream stream)`: Replaces a stream at a specific index.
**Constructors:**
- `GLVideoMixerSettings(int width, int height, VideoFrameRate frameRate)`
- `GLVideoMixerSettings(int width, int height, VideoFrameRate frameRate, List<GLVideoMixerStream> streams)`
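A sketch of a 1080p mixer with a full-screen background and a picture-in-picture inset, using the documented constructors. The `Rect` coordinate convention and the `VideoFrameRate(30)` constructor are assumptions here:
```csharp
var mixerSettings = new GLVideoMixerSettings(1920, 1080, new VideoFrameRate(30));
// Full-screen background stream at the lowest z-order
var background = new GLVideoMixerStream(new Rect(0, 0, 1920, 1080), zorder: 0);
// Inset stream in the bottom-right quarter, drawn on top, slightly transparent
var inset = new GLVideoMixerStream(new Rect(1280, 720, 1920, 1080), zorder: 1, alpha: 0.9);
mixerSettings.AddStream(background);
mixerSettings.AddStream(inset);
```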
### Mixer Stream (`GLVideoMixerStream`)
Extends `VideoMixerStream` and defines properties for an individual stream within the OpenGL video mixer. Inherits `Rectangle`, `ZOrder`, and `Alpha` from `VideoMixerStream`.
**Properties:**
| Property | Type | Default | Description |
|---------------------------------|-------------------------------|------------------------------|--------------------------------------------------|
| `Crop` | `Rect` | (N/A) | Crop rectangle for the input stream. |
| `BlendConstantColorAlpha` | `double` | `0` | Alpha component for constant blend color. |
| `BlendConstantColorBlue` | `double` | `0` | Blue component for constant blend color. |
| `BlendConstantColorGreen` | `double` | `0` | Green component for constant blend color. |
| `BlendConstantColorRed` | `double` | `0` | Red component for constant blend color. |
| `BlendEquationAlpha` | `GLVideoMixerBlendEquation` | `Add` | Blend equation for the alpha channel. |
| `BlendEquationRGB` | `GLVideoMixerBlendEquation` | `Add` | Blend equation for RGB channels. |
| `BlendFunctionDestinationAlpha` | `GLVideoMixerBlendFunction` | `OneMinusSourceAlpha` | Blend function for destination alpha. |
| `BlendFunctionDesctinationRGB` | `GLVideoMixerBlendFunction` | `OneMinusSourceAlpha` | Blend function for destination RGB. |
| `BlendFunctionSourceAlpha` | `GLVideoMixerBlendFunction` | `One` | Blend function for source alpha. |
| `BlendFunctionSourceRGB` | `GLVideoMixerBlendFunction` | `SourceAlpha` | Blend function for source RGB. |
**Constructor:**
- `GLVideoMixerStream(Rect rectangle, uint zorder, double alpha = 1.0)`
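A sketch that composites a full-frame stream with a picture-in-picture stream over a 1080p canvas; the `Rect(left, top, width, height)` and `VideoFrameRate(30)` constructor shapes are assumptions:

```csharp
// Sketch: 1920x1080 @ 30 fps mixer with a main stream and a PiP stream.
var mixerSettings = new GLVideoMixerSettings(1920, 1080, new VideoFrameRate(30));

var mainStream = new GLVideoMixerStream(new Rect(0, 0, 1920, 1080), zorder: 0);
var pipStream = new GLVideoMixerStream(new Rect(1380, 60, 480, 270), zorder: 1, alpha: 0.9);

mixerSettings.AddStream(mainStream);
mixerSettings.AddStream(pipStream);
```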
### Blend Equation (`GLVideoMixerBlendEquation` Enum)
Specifies how source and destination colors are combined during blending.
| Value | Description |
|-------------------|-------------------------------------------------|
| `Add` | Source + Destination |
| `Subtract` | Source - Destination |
| `ReverseSubtract` | Destination - Source |
### Blend Function (`GLVideoMixerBlendFunction` Enum)
Defines factors for source and destination colors in blending operations. (Rs, Gs, Bs, As are source color components; Rd, Gd, Bd, Ad are destination; Rc, Gc, Bc, Ac are constant color components).
| Value | Description |
|----------------------------|---------------------------------------------|
| `Zero` | Factor is (0, 0, 0, 0). |
| `One` | Factor is (1, 1, 1, 1). |
| `SourceColor` | Factor is (Rs, Gs, Bs, As). |
| `OneMinusSourceColor` | Factor is (1-Rs, 1-Gs, 1-Bs, 1-As). |
| `DestinationColor` | Factor is (Rd, Gd, Bd, Ad). |
| `OneMinusDestinationColor` | Factor is (1-Rd, 1-Gd, 1-Bd, 1-Ad). |
| `SourceAlpha` | Factor is (As, As, As, As). |
| `OneMinusSourceAlpha` | Factor is (1-As, 1-As, 1-As, 1-As). |
| `DestinationAlpha` | Factor is (Ad, Ad, Ad, Ad). |
| `OneMinusDestinationAlpha` | Factor is (1-Ad, 1-Ad, 1-Ad, 1-Ad). |
| `ConstantColor` | Factor is (Rc, Gc, Bc, Ac). |
| `OneMinusContantColor` | Factor is (1-Rc, 1-Gc, 1-Bc, 1-Ac). |
| `ConstantAlpha` | Factor is (Ac, Ac, Ac, Ac). |
| `OneMinusContantAlpha` | Factor is (1-Ac, 1-Ac, 1-Ac, 1-Ac). |
| `SourceAlphaSaturate` | Factor is (min(As, 1-Ad), min(As, 1-Ad), min(As, 1-Ad), 1). |
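Continuing the mixer sketch above, additive compositing (a "glow" effect) can be configured per stream; the property and enum names follow the tables in this section:

```csharp
// Sketch: additive blending for the PiP stream (result = Source + Destination).
pipStream.BlendEquationRGB = GLVideoMixerBlendEquation.Add;
pipStream.BlendFunctionSourceRGB = GLVideoMixerBlendFunction.One;
pipStream.BlendFunctionDesctinationRGB = GLVideoMixerBlendFunction.One; // property name as documented above
```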
## Virtual Test Sources for OpenGL
These settings classes are used to configure virtual sources that generate test patterns directly within an OpenGL context.
### Virtual Video Source Settings (`GLVirtualVideoSourceSettings`)
Configures a source block (`GLVirtualVideoSourceBlock`) that produces test video data. Implements `IMediaPlayerBaseSourceSettings` and `IVideoCaptureBaseVideoSourceSettings`.
**Properties:**
| Property | Type | Default | Description |
|-------------|----------------------------|------------------------|--------------------------------------------------|
| `Width` | `int` | `1280` | Width of the output video. |
| `Height` | `int` | `720` | Height of the output video. |
| `FrameRate` | `VideoFrameRate` | `30/1` (30 fps) | Frame rate of the output video. |
| `IsLive` | `bool` | `true` | Indicates if the source is live. |
| `Mode` | `GLVirtualVideoSourceMode` | (N/A - must be set) | Specifies the type of test pattern to generate. |
**Associated Enum: `GLVirtualVideoSourceMode`**
Defines the test pattern generated by `GLVirtualVideoSourceBlock`.
| Value | Description |
|---------------|------------------------------|
| `SMPTE` | SMPTE 100% color bars. |
| `Snow` | Random (television snow). |
| `Black` | 100% Black. |
| `White` | 100% White. |
| `Red` | Solid Red color. |
| `Green` | Solid Green color. |
| `Blue` | Solid Blue color. |
| `Checkers1` | Checkerboard pattern (1px). |
| `Checkers2` | Checkerboard pattern (2px). |
| `Checkers4` | Checkerboard pattern (4px). |
| `Checkers8` | Checkerboard pattern (8px). |
| `Circular` | Circular pattern. |
| `Blink` | Blinking pattern. |
| `Mandelbrot` | Mandelbrot fractal. |
**Methods:**
- `Task<MediaFileInfo> ReadInfoAsync()`: Asynchronously reads media information (returns synthetic info based on these settings).
- `MediaBlock CreateBlock()`: Creates a `GLVirtualVideoSourceBlock` instance configured with these settings.
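Putting it together, a sketch of a 720p SMPTE-bars source (the `VideoFrameRate(30)` constructor shape is an assumption):

```csharp
// Sketch: a 1280x720 @ 30 fps SMPTE color-bars source block.
var vsrcSettings = new GLVirtualVideoSourceSettings
{
    Width = 1280,
    Height = 720,
    FrameRate = new VideoFrameRate(30),
    IsLive = true,
    Mode = GLVirtualVideoSourceMode.SMPTE
};

var vsrcBlock = vsrcSettings.CreateBlock(); // returns a configured GLVirtualVideoSourceBlock
```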
---END OF PAGE---
# Local File: .\dotnet\mediablocks\Outputs\index.md
---
title: .Net Media Output Blocks Guide
description: Explore a complete guide to .Net Media SDK output blocks. Learn about file and network sinks for your media processing pipelines.
sidebar_label: Outputs
---
# Output Blocks - VisioForge Media Blocks SDK .Net
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
Output blocks, also known as sinks, are responsible for writing media data to files, network streams, or other destinations. They are typically the last blocks in any media processing chain. VisioForge Media Blocks SDK .Net provides a comprehensive collection of output blocks for various formats and protocols.
This guide covers file output blocks like MP4, AVI, WebM, MKV, and network streaming blocks for protocols such as RTMP (used by YouTube and Facebook Live).
## AVI Output Block
The `AVIOutputBlock` is used to create AVI files. It combines video and audio encoders with a file sink to produce `.avi` files.
### Block info
Name: `AVIOutputBlock`.
| Pin direction | Media type | Expected Encoders |
| --- | :---: | :---: |
| Input Video | various | H264 (default), other `IVideoEncoder` compatible encoders |
| Input Audio | various | AAC (default), MP3, other `IAudioEncoder` compatible encoders |
### Settings
The `AVIOutputBlock` is configured using `AVISinkSettings` along with settings for the chosen video and audio encoders (e.g., `IH264EncoderSettings` and `IAACEncoderSettings` or `MP3EncoderSettings`).
Key `AVISinkSettings` properties:
- `Filename` (string): The path to the output AVI file.
Constructors:
- `AVIOutputBlock(string filename)`: Uses default H264 video and AAC audio encoders.
- `AVIOutputBlock(AVISinkSettings sinkSettings, IH264EncoderSettings h264settings, IAACEncoderSettings aacSettings)`: Uses specified H264 video and AAC audio encoders.
- `AVIOutputBlock(AVISinkSettings sinkSettings, IH264EncoderSettings h264settings, MP3EncoderSettings mp3Settings)`: Uses specified H264 video and MP3 audio encoders.
### The sample pipeline
```mermaid
graph LR;
VideoSource-->VideoEncoder;
AudioSource-->AudioEncoder;
VideoEncoder-->AVIOutputBlock;
AudioEncoder-->AVIOutputBlock;
```
Alternatively, when a source provides already-encoded data, or when the `AVIOutputBlock` manages its encoders internally based on the provided settings:
```mermaid
graph LR;
UniversalSourceBlock--Video Output-->AVIOutputBlock;
UniversalSourceBlock--Audio Output-->AVIOutputBlock;
```
### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// create video source (example: virtual source)
var videoSource = new VirtualVideoSourceBlock(new VirtualVideoSourceSettings());
// create audio source (example: virtual source)
var audioSource = new VirtualAudioSourceBlock(new VirtualAudioSourceSettings());
// create AVI output block
// This constructor uses default H264 video and AAC audio encoders internally.
var aviOutput = new AVIOutputBlock("output.avi");
// Create inputs for the AVI output block
var videoInputPad = aviOutput.CreateNewInput(MediaBlockPadMediaType.Video);
var audioInputPad = aviOutput.CreateNewInput(MediaBlockPadMediaType.Audio);
// connect video path
pipeline.Connect(videoSource.Output, videoInputPad);
// connect audio path
pipeline.Connect(audioSource.Output, audioInputPad);
// start pipeline
await pipeline.StartAsync();
// ... later, to stop ...
// await pipeline.StopAsync();
```
### Remarks
The `AVIOutputBlock` internally manages encoder instances (like `H264Encoder` and `AACEncoder` or `MP3Encoder`) based on the provided settings. Ensure the necessary GStreamer plugins and SDK components for these encoders and the AVI muxer are available.
To check availability:
`AVIOutputBlock.IsAvailable(IH264EncoderSettings h264settings, IAACEncoderSettings aacSettings)`
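For example, a minimal availability check before constructing the block (using the default encoder settings helpers shown elsewhere in this guide):

```csharp
// Sketch: verify the AVI muxer and default encoders are usable before building the pipeline.
var h264Check = H264EncoderBlock.GetDefaultSettings();
var aacCheck = AACEncoderBlock.GetDefaultSettings();
if (!AVIOutputBlock.IsAvailable(h264Check, aacCheck))
{
    // Handle missing GStreamer plugins or SDK redistributables here.
}
```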
### Platforms
Primarily Windows. Availability on other platforms depends on GStreamer plugin support for AVI muxing and the chosen encoders.
## Facebook Live Output Block
The `FacebookLiveOutputBlock` is designed to stream video and audio to Facebook Live using RTMP. It internally uses H.264 video and AAC audio encoders.
### Block info
Name: `FacebookLiveOutputBlock`.
| Pin direction | Media type | Expected Encoders |
| --- | :---: | :---: |
| Input Video | various | H.264 (internal) |
| Input Audio | various | AAC (internal) |
### Settings
The `FacebookLiveOutputBlock` is configured using `FacebookLiveSinkSettings`, `IH264EncoderSettings`, and `IAACEncoderSettings`.
Key `FacebookLiveSinkSettings` properties:
- `Url` (string): The RTMP URL provided by Facebook Live for streaming.
Constructor:
- `FacebookLiveOutputBlock(FacebookLiveSinkSettings sinkSettings, IH264EncoderSettings h264settings, IAACEncoderSettings aacSettings)`
### The sample pipeline
```mermaid
graph LR;
VideoSource-->FacebookLiveOutputBlock;
AudioSource-->FacebookLiveOutputBlock;
```
### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// create video source (e.g., SystemVideoSourceBlock)
var videoSource = new SystemVideoSourceBlock(videoSourceSettings); // Assuming videoSourceSettings are configured
// create audio source (e.g., SystemAudioSourceBlock)
var audioSource = new SystemAudioSourceBlock(audioSourceSettings); // Assuming audioSourceSettings are configured
// configure Facebook Live sink settings
var fbSinkSettings = new FacebookLiveSinkSettings("rtmp://your-facebook-live-url/your-stream-key");
// configure H.264 encoder settings (use defaults or customize)
var h264Settings = H264EncoderBlock.GetDefaultSettings();
h264Settings.Bitrate = 4000000; // Example: 4 Mbps
// configure AAC encoder settings (use defaults or customize)
var aacSettings = AACEncoderBlock.GetDefaultSettings();
aacSettings.Bitrate = 128000; // Example: 128 Kbps
// create Facebook Live output block
var facebookOutput = new FacebookLiveOutputBlock(fbSinkSettings, h264Settings, aacSettings);
// Create inputs for the Facebook Live output block
var videoInputPad = facebookOutput.CreateNewInput(MediaBlockPadMediaType.Video);
var audioInputPad = facebookOutput.CreateNewInput(MediaBlockPadMediaType.Audio);
// connect video path
pipeline.Connect(videoSource.Output, videoInputPad);
// connect audio path
pipeline.Connect(audioSource.Output, audioInputPad);
// start pipeline
await pipeline.StartAsync();
// ... later, to stop ...
// await pipeline.StopAsync();
```
### Remarks
This block encapsulates the necessary H.264 and AAC encoders and the RTMP sink (`FacebookLiveSink`).
Ensure that the `FacebookLiveSink`, `H264Encoder`, and `AACEncoder` components are available. `FacebookLiveOutputBlock.IsAvailable()` (which delegates to `FacebookLiveSink.IsAvailable()`) can be used to check this before building the pipeline.
### Platforms
Windows, macOS, Linux, iOS, Android (Platform availability depends on GStreamer RTMP support and encoder availability).
## FLAC Output Block
The `FLACOutputBlock` is used for creating FLAC (Free Lossless Audio Codec) audio files. It takes uncompressed audio data, encodes it using a FLAC encoder, and saves it to a `.flac` file.
### Block info
Name: `FLACOutputBlock`.
| Pin direction | Media type | Expected Encoders |
| --- | :---: | :---: |
| Input Audio | uncompressed audio | FLAC (internal) |
### Settings
The `FLACOutputBlock` is configured with a filename and `FLACEncoderSettings`.
Key `FLACEncoderSettings` properties (refer to `FLACEncoderSettings` documentation for full details):
- Quality level, compression level, etc.
Constructor:
- `FLACOutputBlock(string filename, FLACEncoderSettings settings)`
### The sample pipeline
```mermaid
graph LR;
AudioSource-->FLACOutputBlock;
```
### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// create audio source (example: virtual audio source)
var audioSource = new VirtualAudioSourceBlock(new VirtualAudioSourceSettings());
// configure FLAC encoder settings
var flacSettings = new FLACEncoderSettings();
// flacSettings.Quality = 8; // Example: Set quality level (0-8, default is 5)
// create FLAC output block
var flacOutput = new FLACOutputBlock("output.flac", flacSettings);
// connect audio path
pipeline.Connect(audioSource.Output, flacOutput.Input);
// start pipeline
await pipeline.StartAsync();
// ... later, to stop ...
// await pipeline.StopAsync();
```
### Remarks
This block combines a `FLACEncoder` and a `FileSink` internally.
To check if the block and its dependencies are available:
`FLACOutputBlock.IsAvailable()` (This checks for `FLACEncoder` and `FileSink` availability).
### Platforms
Windows, macOS, Linux, iOS, Android (Platform availability depends on GStreamer FLAC encoder and file sink support).
## M4A Output Block
The `M4AOutputBlock` creates M4A (MPEG-4 Audio) files, commonly used for AAC encoded audio. It uses an AAC audio encoder and an MP4 sink to produce `.m4a` files.
### Block info
Name: `M4AOutputBlock`.
| Pin direction | Media type | Expected Encoders |
| --- | :---: | :---: |
| Input Audio | various | AAC (internal) |
### Settings
The `M4AOutputBlock` is configured using `MP4SinkSettings` and `IAACEncoderSettings`.
Key `MP4SinkSettings` properties:
- `Filename` (string): The path to the output M4A file.
Key `IAACEncoderSettings` properties (refer to `AACEncoderSettings` for details):
- Bitrate, profile, etc.
Constructors:
- `M4AOutputBlock(string filename)`: Uses default AAC encoder settings.
- `M4AOutputBlock(MP4SinkSettings sinkSettings, IAACEncoderSettings aacSettings)`: Uses specified AAC encoder settings.
### The sample pipeline
```mermaid
graph LR;
AudioSource-->M4AOutputBlock;
```
### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// create audio source (example: virtual audio source)
var audioSource = new VirtualAudioSourceBlock(new VirtualAudioSourceSettings());
// configure M4A output block with default AAC settings
var m4aOutput = new M4AOutputBlock("output.m4a");
// Or, with custom AAC settings:
// var sinkSettings = new MP4SinkSettings("output.m4a");
// var aacSettings = AACEncoderBlock.GetDefaultSettings();
// aacSettings.Bitrate = 192000; // Example: 192 Kbps
// var m4aOutput = new M4AOutputBlock(sinkSettings, aacSettings);
// Create input for the M4A output block
var audioInputPad = m4aOutput.CreateNewInput(MediaBlockPadMediaType.Audio);
// connect audio path
pipeline.Connect(audioSource.Output, audioInputPad);
// start pipeline
await pipeline.StartAsync();
// ... later, to stop ...
// await pipeline.StopAsync();
```
### Remarks
The `M4AOutputBlock` internally manages an `AACEncoder` and an `MP4Sink`.
To check availability:
`M4AOutputBlock.IsAvailable(IAACEncoderSettings aacSettings)`
### Platforms
Windows, macOS, Linux, iOS, Android (Platform availability depends on GStreamer MP4 muxer and AAC encoder support).
## MKV Output Block
The `MKVOutputBlock` is used to create Matroska (MKV) files. MKV is a flexible container format that can hold various video, audio, and subtitle streams. This block combines specified video and audio encoders with an MKV sink.
### Block info
Name: `MKVOutputBlock`.
| Pin direction | Media type | Expected Encoders |
| --- | :---: | :---: |
| Input Video | various | `IVideoEncoder` (e.g., H.264, HEVC, VPX, AV1) |
| Input Audio | various | `IAudioEncoder` (e.g., AAC, MP3, Vorbis, Opus, Speex) |
### Settings
The `MKVOutputBlock` is configured using `MKVSinkSettings`, along with settings for the chosen video (`IVideoEncoder`) and audio (`IAudioEncoder`) encoders.
Key `MKVSinkSettings` properties:
- `Filename` (string): The path to the output MKV file.
Constructors:
- `MKVOutputBlock(MKVSinkSettings sinkSettings, IVideoEncoder videoSettings, IAudioEncoder audioSettings)`
### The sample pipeline
```mermaid
graph LR;
VideoSource-->VideoEncoder;
AudioSource-->AudioEncoder;
VideoEncoder-->MKVOutputBlock;
AudioEncoder-->MKVOutputBlock;
```
More directly, if `MKVOutputBlock` handles encoder instantiation internally:
```mermaid
graph LR;
VideoSource-->MKVOutputBlock;
AudioSource-->MKVOutputBlock;
```
### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// create video source (example: virtual source)
var videoSource = new VirtualVideoSourceBlock(new VirtualVideoSourceSettings());
// create audio source (example: virtual source)
var audioSource = new VirtualAudioSourceBlock(new VirtualAudioSourceSettings());
// configure MKV sink settings
var mkvSinkSettings = new MKVSinkSettings("output.mkv");
// configure video encoder (example: H.264)
var h264Settings = H264EncoderBlock.GetDefaultSettings();
// h264Settings.Bitrate = 5000000; // Example
// configure audio encoder (example: AAC)
var aacSettings = AACEncoderBlock.GetDefaultSettings();
// aacSettings.Bitrate = 128000; // Example
// create MKV output block
var mkvOutput = new MKVOutputBlock(mkvSinkSettings, h264Settings, aacSettings);
// Create inputs for the MKV output block
var videoInputPad = mkvOutput.CreateNewInput(MediaBlockPadMediaType.Video);
var audioInputPad = mkvOutput.CreateNewInput(MediaBlockPadMediaType.Audio);
// connect video path
pipeline.Connect(videoSource.Output, videoInputPad);
// connect audio path
pipeline.Connect(audioSource.Output, audioInputPad);
// start pipeline
await pipeline.StartAsync();
// ... later, to stop ...
// await pipeline.StopAsync();
```
### Remarks
The `MKVOutputBlock` internally manages the specified video and audio encoder instances (e.g., `H264Encoder`, `HEVCEncoder`, `AACEncoder`, `VorbisEncoder`, etc.) and an `MKVSink`.
Supported video encoders include H.264, HEVC, VPX (VP8/VP9), AV1.
Supported audio encoders include AAC, MP3, Vorbis, Opus, Speex.
To check availability (example with H.264 and AAC):
`MKVOutputBlock.IsAvailable(IH264EncoderSettings h264settings, IAACEncoderSettings aacSettings)`
### Platforms
Windows, macOS, Linux, iOS, Android (Platform availability depends on GStreamer MKV muxer and chosen encoder support).
## MP3 Output Block
The `MP3OutputBlock` is used for creating MP3 audio files. It encodes uncompressed audio data using an MP3 encoder and saves it to an `.mp3` file.
### Block info
Name: `MP3OutputBlock`.
| Pin direction | Media type | Expected Encoders |
| --- | :---: | :---: |
| Input Audio | uncompressed audio | MP3 (internal) |
### Settings
The `MP3OutputBlock` is configured with a filename and `MP3EncoderSettings`.
Key `MP3EncoderSettings` properties (refer to `MP3EncoderSettings` documentation for full details):
- Bitrate, quality, channel mode, etc.
Constructor:
- `MP3OutputBlock(string filename, MP3EncoderSettings mp3Settings)`
### The sample pipeline
```mermaid
graph LR;
AudioSource-->MP3OutputBlock;
```
### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// create audio source (example: virtual audio source)
var audioSource = new VirtualAudioSourceBlock(new VirtualAudioSourceSettings());
// configure MP3 encoder settings
var mp3Settings = new MP3EncoderSettings();
// mp3Settings.Bitrate = 192; // Example: Set bitrate to 192 kbps
// mp3Settings.Quality = MP3Quality.Best; // Example: Set quality
// create MP3 output block
var mp3Output = new MP3OutputBlock("output.mp3", mp3Settings);
// connect audio path
pipeline.Connect(audioSource.Output, mp3Output.Input);
// start pipeline
await pipeline.StartAsync();
// ... later, to stop ...
// await pipeline.StopAsync();
```
### Remarks
This block combines an `MP3Encoder` and a `FileSink` internally.
To check if the block and its dependencies are available:
`MP3OutputBlock.IsAvailable()` (This checks for `MP3Encoder` and `FileSink` availability).
### Platforms
Windows, macOS, Linux, iOS, Android (Platform availability depends on GStreamer MP3 encoder (e.g., LAME) and file sink support).
## MP4 Output Block
The `MP4OutputBlock` is used for creating MP4 files. It can combine various video and audio encoders with an MP4 sink to produce `.mp4` files.
### Block info
Name: `MP4OutputBlock`.
| Pin direction | Media type | Expected Encoders |
| --- | :---: | :---: |
| Input Video | various | `IVideoEncoder` (e.g., H.264, HEVC) |
| Input Audio | various | `IAudioEncoder` (e.g., AAC, MP3) |
### Settings
The `MP4OutputBlock` is configured using `MP4SinkSettings`, along with settings for the chosen video (`IVideoEncoder`, typically `IH264EncoderSettings` or `IHEVCEncoderSettings`) and audio (`IAudioEncoder`, typically `IAACEncoderSettings` or `MP3EncoderSettings`) encoders.
Key `MP4SinkSettings` properties:
- `Filename` (string): The path to the output MP4 file.
- Can also be `MP4SplitSinkSettings` for recording in segments.
Constructors:
- `MP4OutputBlock(string filename)`: Uses default H.264 video and AAC audio encoders.
- `MP4OutputBlock(MP4SinkSettings sinkSettings, IH264EncoderSettings h264settings, IAACEncoderSettings aacSettings)`
- `MP4OutputBlock(MP4SinkSettings sinkSettings, IH264EncoderSettings h264settings, MP3EncoderSettings mp3Settings)`
- `MP4OutputBlock(MP4SinkSettings sinkSettings, IHEVCEncoderSettings hevcSettings, IAACEncoderSettings aacSettings)`
- `MP4OutputBlock(MP4SinkSettings sinkSettings, IHEVCEncoderSettings hevcSettings, MP3EncoderSettings mp3Settings)`
### The sample pipeline
```mermaid
graph LR;
VideoSource-->VideoEncoder;
AudioSource-->AudioEncoder;
VideoEncoder-->MP4OutputBlock;
AudioEncoder-->MP4OutputBlock;
```
If `MP4OutputBlock` uses its default internal encoders:
```mermaid
graph LR;
VideoSource-->MP4OutputBlock;
AudioSource-->MP4OutputBlock;
```
### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// create video source (example: virtual source)
var videoSource = new VirtualVideoSourceBlock(new VirtualVideoSourceSettings());
// create audio source (example: virtual source)
var audioSource = new VirtualAudioSourceBlock(new VirtualAudioSourceSettings());
// create MP4 output block with default H.264 video and AAC audio encoders
var mp4Output = new MP4OutputBlock("output.mp4");
// Or, with custom H.264 and AAC settings:
// var sinkSettings = new MP4SinkSettings("output.mp4");
// var h264Settings = H264EncoderBlock.GetDefaultSettings();
// h264Settings.Bitrate = 8000000; // Example: 8 Mbps
// var aacSettings = AACEncoderBlock.GetDefaultSettings();
// aacSettings.Bitrate = 192000; // Example: 192 Kbps
// var mp4Output = new MP4OutputBlock(sinkSettings, h264Settings, aacSettings);
// Create inputs for the MP4 output block
var videoInputPad = mp4Output.CreateNewInput(MediaBlockPadMediaType.Video);
var audioInputPad = mp4Output.CreateNewInput(MediaBlockPadMediaType.Audio);
// connect video path
pipeline.Connect(videoSource.Output, videoInputPad);
// connect audio path
pipeline.Connect(audioSource.Output, audioInputPad);
// start pipeline
await pipeline.StartAsync();
// ... later, to stop ...
// await pipeline.StopAsync();
```
### Remarks
The `MP4OutputBlock` internally manages video (e.g., `H264Encoder`, `HEVCEncoder`) and audio (e.g., `AACEncoder`, `MP3Encoder`) encoder instances along with an `MP4Sink`.
To check availability (example with H.264 and AAC):
`MP4OutputBlock.IsAvailable(IH264EncoderSettings h264settings, IAACEncoderSettings aacSettings)`
### Platforms
Windows, macOS, Linux, iOS, Android (Platform availability depends on GStreamer MP4 muxer and chosen encoder support).
## OGG Opus Output Block
The `OGGOpusOutputBlock` is used for creating Ogg Opus audio files. It encodes uncompressed audio data using an Opus encoder and multiplexes it into an Ogg container, saving to an `.opus` or `.ogg` file.
### Block info
Name: `OGGOpusOutputBlock`.
| Pin direction | Media type | Expected Encoders |
| --- | :---: | :---: |
| Input Audio | uncompressed audio | Opus (internal) |
### Settings
The `OGGOpusOutputBlock` is configured with a filename and `OPUSEncoderSettings`.
Key `OPUSEncoderSettings` properties (refer to `OPUSEncoderSettings` documentation for full details):
- Bitrate, complexity, frame duration, audio type (voice/music), etc.
Constructor:
- `OGGOpusOutputBlock(string filename, OPUSEncoderSettings settings)`
### The sample pipeline
```mermaid
graph LR;
AudioSource-->OGGOpusOutputBlock;
```
### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// create audio source (example: virtual audio source)
var audioSource = new VirtualAudioSourceBlock(new VirtualAudioSourceSettings());
// configure Opus encoder settings
var opusSettings = new OPUSEncoderSettings();
// opusSettings.Bitrate = 64000; // Example: Set bitrate to 64 kbps
// opusSettings.AudioType = OpusEncoderAudioType.Music; // Example
// create OGG Opus output block
var oggOpusOutput = new OGGOpusOutputBlock("output.opus", opusSettings);
// connect audio path
pipeline.Connect(audioSource.Output, oggOpusOutput.Input);
// start pipeline
await pipeline.StartAsync();
// ... later, to stop ...
// await pipeline.StopAsync();
```
### Remarks
This block combines an `OPUSEncoder` and an `OGGSink` internally.
To check if the block and its dependencies are available:
`OGGOpusOutputBlock.IsAvailable()` (this checks for `OGGSink`, `OPUSEncoder`, and `FileSink`; for file output, the `FileSink` functionality may be handled by the `OGGSink` itself).
### Platforms
Windows, macOS, Linux, iOS, Android (Platform availability depends on GStreamer Ogg muxer and Opus encoder support).
## OGG Speex Output Block
The `OGGSpeexOutputBlock` is used for creating Ogg Speex audio files, typically for voice. It encodes uncompressed audio data using a Speex encoder, multiplexes it into an Ogg container, and saves to an `.spx` or `.ogg` file.
### Block info
Name: `OGGSpeexOutputBlock`.
| Pin direction | Media type | Expected Encoders |
| --- | :---: | :---: |
| Input Audio | uncompressed audio | Speex (internal) |
### Settings
The `OGGSpeexOutputBlock` is configured with a filename and `SpeexEncoderSettings`.
Key `SpeexEncoderSettings` properties (refer to `SpeexEncoderSettings` documentation for full details):
- Quality, complexity, encoding mode (VBR/ABR/CBR), etc.
Constructor:
- `OGGSpeexOutputBlock(string filename, SpeexEncoderSettings settings)`
### The sample pipeline
```mermaid
graph LR;
AudioSource-->OGGSpeexOutputBlock;
```
### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// create audio source (example: virtual audio source)
var audioSource = new VirtualAudioSourceBlock(new VirtualAudioSourceSettings());
// configure Speex encoder settings
var speexSettings = new SpeexEncoderSettings();
// speexSettings.Quality = 8; // Example: Set quality (0-10)
// speexSettings.Mode = SpeexEncoderMode.VBR; // Example: Use Variable Bitrate
// create OGG Speex output block
var oggSpeexOutput = new OGGSpeexOutputBlock("output.spx", speexSettings);
// connect audio path
pipeline.Connect(audioSource.Output, oggSpeexOutput.Input);
// start pipeline
await pipeline.StartAsync();
// ... later, to stop ...
// await pipeline.StopAsync();
```
### Remarks
This block combines a `SpeexEncoder` and an `OGGSink` internally.
To check if the block and its dependencies are available:
`OGGSpeexOutputBlock.IsAvailable()` (this checks for `OGGSink`, `SpeexEncoder`, and `FileSink`; for file output, the `FileSink` functionality may be handled by the `OGGSink` itself).
### Platforms
Windows, macOS, Linux, iOS, Android (Platform availability depends on GStreamer Ogg muxer and Speex encoder support).
## OGG Vorbis Output Block
The `OGGVorbisOutputBlock` is used for creating Ogg Vorbis audio files. It encodes uncompressed audio data using a Vorbis encoder, multiplexes it into an Ogg container, and saves to an `.ogg` file.
### Block info
Name: `OGGVorbisOutputBlock`.
| Pin direction | Media type | Expected Encoders |
| --- | :---: | :---: |
| Input Audio | uncompressed audio | Vorbis (internal) |
### Settings
The `OGGVorbisOutputBlock` is configured with a filename and `VorbisEncoderSettings`.
Key `VorbisEncoderSettings` properties (refer to `VorbisEncoderSettings` documentation for full details):
- Quality, bitrate, managed/unmanaged bitrate settings, etc.
Constructor:
- `OGGVorbisOutputBlock(string filename, VorbisEncoderSettings settings)`
### The sample pipeline
```mermaid
graph LR;
AudioSource-->OGGVorbisOutputBlock;
```
### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// create audio source (example: virtual audio source)
var audioSource = new VirtualAudioSourceBlock(new VirtualAudioSourceSettings());
// configure Vorbis encoder settings
var vorbisSettings = new VorbisEncoderSettings();
// vorbisSettings.Quality = 0.8f; // Example: Set quality (0.0 to 1.0)
// vorbisSettings.Bitrate = 128000; // Example if not using quality based encoding
// create OGG Vorbis output block
var oggVorbisOutput = new OGGVorbisOutputBlock("output.ogg", vorbisSettings);
// connect audio path
pipeline.Connect(audioSource.Output, oggVorbisOutput.Input);
// start pipeline
await pipeline.StartAsync();
// ... later, to stop ...
// await pipeline.StopAsync();
```
### Remarks
This block combines a `VorbisEncoder` and an `OGGSink` internally.
To check if the block and its dependencies are available:
`OGGVorbisOutputBlock.IsAvailable()` (this checks for `OGGSink`, `VorbisEncoder`, and `FileSink`; for file output, the `FileSink` functionality may be handled by the `OGGSink` itself).
### Platforms
Windows, macOS, Linux, iOS, Android (Platform availability depends on GStreamer Ogg muxer and Vorbis encoder support).
## WebM Output Block
The `WebMOutputBlock` is used for creating WebM files, typically containing VP8 or VP9 video and Vorbis audio. It combines a VPX video encoder and a Vorbis audio encoder with a WebM sink.
### Block info
Name: `WebMOutputBlock`.
| Pin direction | Media type | Expected Encoders |
| --- | :---: | :---: |
| Input Video | various | VPX (VP8/VP9 - internal) |
| Input Audio | various | Vorbis (internal) |
### Settings
The `WebMOutputBlock` is configured using `WebMSinkSettings`, `IVPXEncoderSettings` (for VP8 or VP9), and `VorbisEncoderSettings`.
Key `WebMSinkSettings` properties:
- `Filename` (string): The path to the output WebM file.
Key `IVPXEncoderSettings` properties (refer to `VPXEncoderSettings` for details):
- Bitrate, quality, speed, threads, etc.
Key `VorbisEncoderSettings` properties:
- Quality, bitrate, etc.
Constructor:
- `WebMOutputBlock(WebMSinkSettings sinkSettings, IVPXEncoderSettings videoEncoderSettings, VorbisEncoderSettings vorbisSettings)`
### The sample pipeline
```mermaid
graph LR;
VideoSource-->WebMOutputBlock;
AudioSource-->WebMOutputBlock;
```
### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// create video source (example: virtual source)
var videoSource = new VirtualVideoSourceBlock(new VirtualVideoSourceSettings());
// create audio source (example: virtual source)
var audioSource = new VirtualAudioSourceBlock(new VirtualAudioSourceSettings());
// configure WebM sink settings
var webmSinkSettings = new WebMSinkSettings("output.webm");
// configure VPX encoder settings (example: VP9)
var vp9Settings = new VPXEncoderSettings(VPXEncoderMode.VP9);
// vp9Settings.Bitrate = 2000000; // Example: 2 Mbps
// vp9Settings.Speed = VP9Speed.Fast; // Example
// configure Vorbis encoder settings
var vorbisSettings = new VorbisEncoderSettings();
// vorbisSettings.Quality = 0.7f; // Example: Set quality
// create WebM output block
var webmOutput = new WebMOutputBlock(webmSinkSettings, vp9Settings, vorbisSettings);
// Create inputs for the WebM output block
var videoInputPad = webmOutput.CreateNewInput(MediaBlockPadMediaType.Video);
var audioInputPad = webmOutput.CreateNewInput(MediaBlockPadMediaType.Audio);
// connect video path
pipeline.Connect(videoSource.Output, videoInputPad);
// connect audio path
pipeline.Connect(audioSource.Output, audioInputPad);
// start pipeline
await pipeline.StartAsync();
// ... later, to stop ...
// await pipeline.StopAsync();
```
### Remarks
The `WebMOutputBlock` internally manages a `VPXEncoder` (for VP8/VP9), a `VorbisEncoder`, and a `WebMSink`.
To check availability:
`WebMOutputBlock.IsAvailable(IVPXEncoderSettings videoEncoderSettings)`
### Platforms
Windows, macOS, Linux, iOS, Android (Platform availability depends on GStreamer WebM muxer, VPX encoder, and Vorbis encoder support).
## Separate Output Block
The `SeparateOutputBlock` provides a flexible way to configure custom output pipelines, allowing you to specify distinct video and audio encoders, processors, and a final writer/sink. It uses bridge sources (`BridgeVideoSourceBlock`, `BridgeAudioSourceBlock`) to tap into the main pipeline, enabling recording independently from preview or other processing chains.
### Block info
Name: `SeparateOutputBlock`.
This block itself doesn't have direct input pads in the traditional sense; it orchestrates a sub-pipeline.
### Settings
The `SeparateOutputBlock` is configured using the `SeparateOutput` settings object.
Key `SeparateOutput` properties:
- `Sink` (`MediaBlock`): The final sink/muxer for the output (e.g., `MP4OutputBlock`, `FileSink`). Must implement `IMediaBlockDynamicInputs` if separate encoders are used, or `IMediaBlockSinkAllInOne` if it handles encoding internally.
- `VideoEncoder` (`MediaBlock`): An optional video encoder block.
- `AudioEncoder` (`MediaBlock`): An optional audio encoder block.
- `VideoProcessor` (`MediaBlock`): An optional video processing block to insert before the video encoder.
- `AudioProcessor` (`MediaBlock`): An optional audio processing block to insert before the audio encoder.
- `Writer` (`MediaBlock`): An optional writer block that consumes the output of the `Sink` (e.g., for custom file writing, or for network streaming when the `Sink` is only a muxer).
- `GetFilename()`: Method to retrieve the configured output filename if applicable.
Constructor:
- `SeparateOutputBlock(MediaBlocksPipeline pipeline, SeparateOutput settings, BridgeVideoSourceSettings bridgeVideoSourceSettings, BridgeAudioSourceSettings bridgeAudioSourceSettings)`
### The conceptual pipeline
This block creates an independent processing branch. For video:
```mermaid
graph LR;
MainVideoPath --> BridgeVideoSink;
BridgeVideoSourceBlock --> OptionalVideoProcessor --> VideoEncoder --> SinkOrWriter;
```
For audio:
```mermaid
graph LR;
MainAudioPath --> BridgeAudioSink;
BridgeAudioSourceBlock --> OptionalAudioProcessor --> AudioEncoder --> SinkOrWriter;
```
### Sample code
```csharp
// Assuming 'pipeline' is your main MediaBlocksPipeline
// Assuming 'mainVideoSourceOutputPad' and 'mainAudioSourceOutputPad' are outputs from your main sources
// 1. Configure Bridge Sinks in your main pipeline
var bridgeVideoSinkSettings = new BridgeVideoSinkSettings("sep_video_bridge");
var bridgeVideoSink = new BridgeVideoSinkBlock(bridgeVideoSinkSettings);
pipeline.Connect(mainVideoSourceOutputPad, bridgeVideoSink.Input);
var bridgeAudioSinkSettings = new BridgeAudioSinkSettings("sep_audio_bridge");
var bridgeAudioSink = new BridgeAudioSinkBlock(bridgeAudioSinkSettings);
pipeline.Connect(mainAudioSourceOutputPad, bridgeAudioSink.Input);
// 2. Configure Bridge Sources for the SeparateOutputBlock's sub-pipeline
var bridgeVideoSourceSettings = new BridgeVideoSourceSettings("sep_video_bridge");
var bridgeAudioSourceSettings = new BridgeAudioSourceSettings("sep_audio_bridge");
// 3. Configure encoders and sink for the SeparateOutput
var h264Settings = H264EncoderBlock.GetDefaultSettings();
var videoEncoder = new H264EncoderBlock(h264Settings);
var aacSettings = AACEncoderBlock.GetDefaultSettings();
var audioEncoder = new AACEncoderBlock(aacSettings);
var mp4SinkSettings = new MP4SinkSettings("separate_output.mp4");
var mp4Sink = new MP4OutputBlock(mp4SinkSettings, h264Settings, aacSettings); // Using MP4OutputBlock which handles muxing.
// Alternatively, use a raw MP4Sink and connect encoders to it.
// 4. Configure SeparateOutput settings
var separateOutputSettings = new SeparateOutput(
    sink: mp4Sink,              // mp4Sink acts as the final writer here
    videoEncoder: videoEncoder, // redundant when the sink (an MP4OutputBlock) already encodes internally
    audioEncoder: audioEncoder  // pass encoders explicitly only when the sink is a raw muxer (see below)
);
// A more typical setup when the sink is just a muxer (e.g., new MP4Sink(mp4SinkRawSettings)):
// var separateOutputSettings = new SeparateOutput(
//     sink: rawMp4Muxer,
//     videoEncoder: videoEncoder,
//     audioEncoder: audioEncoder
// );
// 5. Create the SeparateOutputBlock (this will internally connect its components)
var separateOutput = new SeparateOutputBlock(pipeline, separateOutputSettings, bridgeVideoSourceSettings, bridgeAudioSourceSettings);
// 6. Build the sources, encoders, and sink used by SeparateOutputBlock
// Note: Building these might be handled by the pipeline if they are added to it,
// or might need to be done explicitly if they are part of a sub-graph not directly in the main pipeline's block list.
// The SeparateOutputBlock's Build() method will handle building its internal sources (_videoSource, _audioSource)
// and the provided encoders/sink if they haven't been built.
// pipeline.Add(bridgeVideoSink);
// pipeline.Add(bridgeAudioSink);
// pipeline.Add(separateOutput); // Add the orchestrator block
// Start main pipeline
// await pipeline.StartAsync(); // This will also start the separate output processing via bridges
// To change filename later:
// separateOutput.SetFilenameOrURL("new_separate_output.mp4");
```
### Remarks
The `SeparateOutputBlock` is an orchestrator for a sub-pipeline that is fed by bridge sinks/sources from the main pipeline. It enables recording or streaming configurations that can be started, stopped, or modified somewhat independently of the main preview or processing chain.
The `VideoEncoder`, `AudioEncoder`, `Sink`, and `Writer` components must be built correctly. The `SeparateOutputBlock.Build()` method attempts to build these components.
### Platforms
Depends on the components used within the `SeparateOutput` configuration (encoders, sinks, processors). Generally cross-platform if GStreamer elements are available.
## WMV Output Block
The `WMVOutputBlock` is used for creating Windows Media Video (WMV) files. It uses WMV video (`WMVEncoder`) and WMA audio (`WMAEncoder`) encoders with an ASF (Advanced Systems Format) sink to produce `.wmv` files.
### Block info
Name: `WMVOutputBlock`.
| Pin direction | Media type | Expected Encoders |
| --- | :---: | :---: |
| Input Video | various | WMV (internal) |
| Input Audio | various | WMA (internal) |
### Settings
The `WMVOutputBlock` is configured using `ASFSinkSettings`, `WMVEncoderSettings`, and `WMAEncoderSettings`.
Key `ASFSinkSettings` properties:
- `Filename` (string): The path to the output WMV file.
Key `WMVEncoderSettings` properties (refer to `WMVEncoderSettings` documentation):
- Bitrate, GOP size, quality, etc.
Key `WMAEncoderSettings` properties (refer to `WMAEncoderSettings` documentation):
- Bitrate, WMA version, etc.
Constructors:
- `WMVOutputBlock(string filename)`: Uses default WMV video and WMA audio encoder settings.
- `WMVOutputBlock(ASFSinkSettings sinkSettings, WMVEncoderSettings videoSettings, WMAEncoderSettings audioSettings)`: Uses specified encoder settings.
### The sample pipeline
```mermaid
graph LR;
VideoSource-->WMVOutputBlock;
AudioSource-->WMVOutputBlock;
```
### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// create video source (example: virtual source)
var videoSource = new VirtualVideoSourceBlock(new VirtualVideoSourceSettings());
// create audio source (example: virtual source)
var audioSource = new VirtualAudioSourceBlock(new VirtualAudioSourceSettings());
// create WMV output block with default settings
var wmvOutput = new WMVOutputBlock("output.wmv");
// Or, with custom settings:
// var asfSinkSettings = new ASFSinkSettings("output.wmv");
// var wmvEncSettings = WMVEncoderBlock.GetDefaultSettings();
// wmvEncSettings.Bitrate = 3000000; // Example: 3 Mbps
// var wmaEncSettings = WMAEncoderBlock.GetDefaultSettings();
// wmaEncSettings.Bitrate = 160000; // Example: 160 Kbps
// var wmvOutput = new WMVOutputBlock(asfSinkSettings, wmvEncSettings, wmaEncSettings);
// Create inputs for the WMV output block
var videoInputPad = wmvOutput.CreateNewInput(MediaBlockPadMediaType.Video);
var audioInputPad = wmvOutput.CreateNewInput(MediaBlockPadMediaType.Audio);
// connect video path
pipeline.Connect(videoSource.Output, videoInputPad);
// connect audio path
pipeline.Connect(audioSource.Output, audioInputPad);
// start pipeline
await pipeline.StartAsync();
// ... later, to stop ...
// await pipeline.StopAsync();
```
### Remarks
The `WMVOutputBlock` internally manages `WMVEncoder`, `WMAEncoder`, and `ASFSink`.
To check availability:
`WMVOutputBlock.IsAvailable()`
### Platforms
Primarily Windows. Availability on other platforms depends on GStreamer plugin support for ASF muxing, WMV, and WMA encoders (which may be limited outside of Windows).
## YouTube Output Block
The `YouTubeOutputBlock` is designed for streaming video and audio to YouTube Live using RTMP. It internally utilizes H.264 video and AAC audio encoders.
### Block info
Name: `YouTubeOutputBlock`.
| Pin direction | Media type | Expected Encoders |
| --- | :---: | :---: |
| Input Video | various | H.264 (internal) |
| Input Audio | various | AAC (internal) |
### Settings
The `YouTubeOutputBlock` is configured using `YouTubeSinkSettings`, `IH264EncoderSettings`, and `IAACEncoderSettings`.
Key `YouTubeSinkSettings` properties:
- `Url` (string): The RTMP URL provided by YouTube Live for streaming (e.g., "rtmp://a.rtmp.youtube.com/live2/YOUR-STREAM-KEY").
Constructor:
- `YouTubeOutputBlock(YouTubeSinkSettings sinkSettings, IH264EncoderSettings h264settings, IAACEncoderSettings aacSettings)`
### The sample pipeline
```mermaid
graph LR;
VideoSource-->YouTubeOutputBlock;
AudioSource-->YouTubeOutputBlock;
```
### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// create video source (e.g., SystemVideoSourceBlock)
var videoSource = new SystemVideoSourceBlock(videoSourceSettings); // Assuming videoSourceSettings are configured
// create audio source (e.g., SystemAudioSourceBlock)
var audioSource = new SystemAudioSourceBlock(audioSourceSettings); // Assuming audioSourceSettings are configured
// configure YouTube sink settings
var ytSinkSettings = new YouTubeSinkSettings("rtmp://a.rtmp.youtube.com/live2/YOUR-STREAM-KEY");
// configure H.264 encoder settings (use defaults or customize per YouTube recommendations)
var h264Settings = H264EncoderBlock.GetDefaultSettings();
// h264Settings.Bitrate = 6000000; // Example: 6 Mbps for 1080p
// h264Settings.UsagePreset = H264UsagePreset.None; // Adjust based on performance/quality needs
// configure AAC encoder settings (use defaults or customize per YouTube recommendations)
var aacSettings = AACEncoderBlock.GetDefaultSettings();
// aacSettings.Bitrate = 128000; // Example: 128 Kbps stereo
// create YouTube output block
var youTubeOutput = new YouTubeOutputBlock(ytSinkSettings, h264Settings, aacSettings);
// Create inputs for the YouTube output block
var videoInputPad = youTubeOutput.CreateNewInput(MediaBlockPadMediaType.Video);
var audioInputPad = youTubeOutput.CreateNewInput(MediaBlockPadMediaType.Audio);
// connect video path
pipeline.Connect(videoSource.Output, videoInputPad);
// connect audio path
pipeline.Connect(audioSource.Output, audioInputPad);
// start pipeline
await pipeline.StartAsync();
// ... later, to stop ...
// await pipeline.StopAsync();
```
### Remarks
This block encapsulates the H.264 and AAC encoders and the RTMP sink (`YouTubeSink`).
Ensure that the `YouTubeSink`, `H264Encoder`, and `AACEncoder` are available. `YouTubeOutputBlock.IsAvailable(IH264EncoderSettings h264settings, IAACEncoderSettings aacSettings)` can be used to check this.
It is crucial to configure the encoder settings (bitrate, resolution, frame rate) according to YouTube's live-streaming recommendations to ensure optimal quality and compatibility.
### Platforms
Windows, macOS, Linux, iOS, Android (Platform availability depends on GStreamer RTMP support and H.264/AAC encoder availability).
---END OF PAGE---
# Local File: .\dotnet\mediablocks\Parsers\index.md
---
title: .Net Media Parser Blocks Guide
description: Explore a complete guide to .Net Media SDK parser blocks. Learn about various video and audio parsers for your media processing pipelines.
sidebar_label: Parsers
---
# Parser Blocks - VisioForge Media Blocks SDK .Net
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
Parser blocks are essential components in media processing pipelines. They are used to parse elementary streams, which might be raw or partially processed, to extract metadata, and to prepare the streams for further processing like decoding or multiplexing. VisioForge Media Blocks SDK .Net offers a variety of parser blocks for common video and audio codecs.
## Video Parser Blocks
### AV1 Parser Block
The `AV1ParseBlock` is used to parse AV1 video elementary streams. It helps in identifying frame boundaries and extracting codec-specific information.
#### Block info
Name: `AV1ParseBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | AV1 video | 1 |
| Output video | AV1 video | 1 |
#### The sample pipeline
```mermaid
graph LR;
DataSourceBlock["Data Source (e.g., File or Network)"] --> AV1ParseBlock;
AV1ParseBlock --> AV1DecoderBlock["AV1 Decoder Block"];
AV1DecoderBlock --> VideoRendererBlock["Video Renderer Block"];
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
---
### H.263 Parser Block
The `H263ParseBlock` is designed to parse H.263 video elementary streams. This is useful for older video conferencing and mobile video applications.
#### Block info
Name: `H263ParseBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | H.263 video | 1 |
| Output video | H.263 video | 1 |
#### The sample pipeline
```mermaid
graph LR;
DataSourceBlock["Data Source"] --> H263ParseBlock;
H263ParseBlock --> H263DecoderBlock["H.263 Decoder Block"];
H263DecoderBlock --> VideoRendererBlock["Video Renderer Block"];
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
---
### H.264 Parser Block
The `H264ParseBlock` parses H.264 (AVC) video elementary streams. This is one of the most widely used video codecs. The parser helps in identifying NAL units and other stream properties.
#### Block info
Name: `H264ParseBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | H.264 video | 1 |
| Output video | H.264 video | 1 |
#### The sample pipeline
```mermaid
graph LR;
PushDataSource["Push Data Source (H.264 NALUs)"] --> H264ParseBlock;
H264ParseBlock --> H264DecoderBlock["H.264 Decoder Block"];
H264DecoderBlock --> VideoRendererBlock["Video Renderer Block"];
```
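#### Sample code

A hedged sketch of the diagrammed pipeline. The parameterless parser/decoder constructors and the `h264Source`/`VideoView1` placeholders are assumptions; the push-source setup that delivers the H.264 NAL units is application-specific:

```csharp
// Sketch: parse, decode, and render an H.264 elementary stream.
var pipeline = new MediaBlocksPipeline();

// 'h264Source' stands for any block whose output delivers H.264 elementary stream data.
var parser = new H264ParseBlock();
var decoder = new H264DecoderBlock();
var renderer = new VideoRendererBlock(pipeline, VideoView1); // VideoView1: your UI video view

pipeline.Connect(h264Source.Output, parser.Input);
pipeline.Connect(parser.Output, decoder.Input);
pipeline.Connect(decoder.Output, renderer.Input);

await pipeline.StartAsync();
```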
#### Platforms
Windows, macOS, Linux, iOS, Android.
---
### H.265 Parser Block
The `H265ParseBlock` parses H.265 (HEVC) video elementary streams. H.265 offers better compression than H.264. The parser helps in identifying NAL units and other stream properties.
#### Block info
Name: `H265ParseBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | H.265 video | 1 |
| Output video | H.265 video | 1 |
#### The sample pipeline
```mermaid
graph LR;
PushDataSource["Push Data Source (H.265 NALUs)"] --> H265ParseBlock;
H265ParseBlock --> H265DecoderBlock["H.265 Decoder Block"];
H265DecoderBlock --> VideoRendererBlock["Video Renderer Block"];
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
---
### JPEG 2000 Parser Block
The `JPEG2000ParseBlock` is used to parse JPEG 2000 video streams. JPEG 2000 is a wavelet-based compression standard that can be used for still images and video.
#### Block info
Name: `JPEG2000ParseBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | JPEG 2000 video | 1 |
| Output video | JPEG 2000 video | 1 |
#### The sample pipeline
```mermaid
graph LR;
DataSourceBlock["Data Source"] --> JPEG2000ParseBlock;
JPEG2000ParseBlock --> JPEG2000DecoderBlock["JPEG 2000 Decoder Block"];
JPEG2000DecoderBlock --> VideoRendererBlock["Video Renderer Block"];
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
---
### MPEG-1/2 Video Parser Block
The `MPEG12VideoParseBlock` parses MPEG-1 and MPEG-2 video elementary streams. These are older but still relevant video codecs, especially MPEG-2 for DVDs and broadcast.
#### Block info
Name: `MPEG12VideoParseBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | MPEG-1/2 video | 1 |
| Output video | MPEG-1/2 video | 1 |
#### The sample pipeline
```mermaid
graph LR;
DataSourceBlock["Data Source"] --> MPEG12VideoParseBlock;
MPEG12VideoParseBlock --> MPEGVideoDecoderBlock["MPEG-1/2 Decoder Block"];
MPEGVideoDecoderBlock --> VideoRendererBlock["Video Renderer Block"];
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
---
### MPEG-4 Video Parser Block
The `MPEG4ParseBlock` parses MPEG-4 Part 2 video elementary streams (often referred to as DivX/Xvid in its early forms).
#### Block info
Name: `MPEG4ParseBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | MPEG-4 video | 1 |
| Output video | MPEG-4 video | 1 |
#### The sample pipeline
```mermaid
graph LR;
DataSourceBlock["Data Source"] --> MPEG4ParseBlock;
MPEG4ParseBlock --> MPEG4DecoderBlock["MPEG-4 Decoder Block"];
MPEG4DecoderBlock --> VideoRendererBlock["Video Renderer Block"];
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
---
### PNG Parser Block
The `PNGParseBlock` is used to parse PNG image data. While PNG is primarily an image format, this parser can be useful in scenarios where PNG images are part of a stream or need to be processed within the Media Blocks pipeline.
#### Block info
Name: `PNGParseBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | PNG image data | 1 |
| Output video | PNG image data | 1 |
#### The sample pipeline
```mermaid
graph LR;
DataSourceBlock["Data Source (PNG data)"] --> PNGParseBlock;
PNGParseBlock --> PNGDecoderBlock["PNG Decoder Block"];
PNGDecoderBlock --> VideoRendererBlock["Video Renderer Block (or Image Overlay)"];
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
---
### VC-1 Parser Block
The `VC1ParseBlock` parses VC-1 video elementary streams. VC-1 was developed by Microsoft and was used in Blu-ray Discs and Windows Media Video.
#### Block info
Name: `VC1ParseBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | VC-1 video | 1 |
| Output video | VC-1 video | 1 |
#### The sample pipeline
```mermaid
graph LR;
DataSourceBlock["Data Source"] --> VC1ParseBlock;
VC1ParseBlock --> VC1DecoderBlock["VC-1 Decoder Block"];
VC1DecoderBlock --> VideoRendererBlock["Video Renderer Block"];
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
---
### VP9 Parser Block
The `VP9ParseBlock` parses VP9 video elementary streams. VP9 is an open and royalty-free video coding format developed by Google, often used for web video.
#### Block info
Name: `VP9ParseBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | VP9 video | 1 |
| Output video | VP9 video | 1 |
#### The sample pipeline
```mermaid
graph LR;
DataSourceBlock["Data Source"] --> VP9ParseBlock;
VP9ParseBlock --> VP9DecoderBlock["VP9 Decoder Block"];
VP9DecoderBlock --> VideoRendererBlock["Video Renderer Block"];
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
---
## Audio Parser Blocks
### MPEG Audio Parser Block
The `MPEGAudioParseBlock` parses MPEG audio elementary streams, which includes MP1, MP2, and MP3 audio.
#### Block info
Name: `MPEGAudioParseBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input audio | MPEG audio | 1 |
| Output audio | MPEG audio | 1 |
#### The sample pipeline
```mermaid
graph LR;
DataSourceBlock["Data Source (MP3 data)"] --> MPEGAudioParseBlock;
MPEGAudioParseBlock --> MP3DecoderBlock["MP3 Decoder Block"];
MP3DecoderBlock --> AudioRendererBlock["Audio Renderer Block"];
```
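#### Sample code

A hedged sketch mirroring the diagram above; the parameterless constructors and the `mp3Source` placeholder are assumptions:

```csharp
// Sketch: parse an MP3 elementary stream, decode it, and play it.
var pipeline = new MediaBlocksPipeline();

var parser = new MPEGAudioParseBlock();
var decoder = new MP3DecoderBlock();
var audioRenderer = new AudioRendererBlock();

pipeline.Connect(mp3Source.Output, parser.Input); // 'mp3Source': any block outputting MPEG audio data
pipeline.Connect(parser.Output, decoder.Input);
pipeline.Connect(decoder.Output, audioRenderer.Input);

await pipeline.StartAsync();
```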
#### Platforms
Windows, macOS, Linux, iOS, Android.
---END OF PAGE---
# Local File: .\dotnet\mediablocks\Sinks\index.md
---
title: .Net Media Sinks - File & Network Streaming
description: Discover .Net media sink blocks for saving or streaming audio/video. Learn about file sinks like MP4, MKV, AVI, and network sinks such as RTMP, HLS, SRT for versatile media output.
sidebar_label: Sinks
---
# Sinks
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
Sinks are blocks that save or stream data; they are the last blocks in a pipeline.
Optionally, some sinks expose output pins to pass data to the next block in the pipeline.
The SDK provides many different sinks for a variety of purposes.
**File sinks**
The following file sinks are available:
- [ASF](#asf)
- [AVI](#avi)
- [File](#raw-file)
- [MKV](#mkv)
- [MOV](#mov)
- [MP4](#mp4)
- [MPEG-PS](#mpeg-ps)
- [MPEG-TS](#mpeg-ts)
- [MXF](#mxf)
- [OGG](#ogg)
- [WAV](#wav)
- [WebM](#webm)
**Network streaming**
The following network streaming sinks are available:
- [Facebook Live](#facebook-live)
- [HLS](#hls)
- [MJPEG over HTTP](#mjpeg-over-http)
- [NDI](#ndi)
- [SRT](#srt)
- [SRT MPEG-TS](#srt-mpeg-ts)
- [RTMP](#rtmp)
- [Shoutcast](#shoutcast)
- [YouTube Live](#youtube-live)
## File Sinks
### ASF
ASF (Advanced Systems Format) is a Microsoft digital container format used to store multimedia data, designed to be platform-independent and to support scalable media types such as audio and video.
Use the `ASFSinkSettings` class to set the parameters.
#### Block info
Name: ASFSinkBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input audio | audio/x-raw | one or more |
| | audio/mpeg | |
| | audio/x-ac3 | |
| | audio/x-alaw | |
| | audio/x-mulaw | |
| | audio/x-wma | |
| Input video | video/x-raw | one or more |
| | image/jpeg | |
| | video/x-divx | |
| | video/x-msmpeg | |
| | video/mpeg | |
| | video/x-h263 | |
| | video/x-h264 | |
| | video/x-dv | |
| | video/x-huffyuv | |
| | video/x-wmv | |
| | video/x-jpc | |
| | video/x-vp8 | |
| | image/png | |
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->WMVEncoderBlock;
UniversalSourceBlock-->WMAEncoderBlock;
WMVEncoderBlock-->ASFSinkBlock;
WMAEncoderBlock-->ASFSinkBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var audioEncoderBlock = new WMAEncoderBlock(new WMAEncoderSettings());
pipeline.Connect(fileSource.AudioOutput, audioEncoderBlock.Input);
var videoEncoderBlock = new WMVEncoderBlock(new WMVEncoderSettings());
pipeline.Connect(fileSource.VideoOutput, videoEncoderBlock.Input);
var sinkBlock = new ASFSinkBlock(new ASFSinkSettings(@"output.wmv"));
pipeline.Connect(audioEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Audio));
pipeline.Connect(videoEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### AVI
AVI (Audio Video Interleave) is a multimedia container format introduced by Microsoft. It enables simultaneous audio-with-video playback by alternating segments of audio and video data.
Use the `AVISinkSettings` class to set the parameters.
#### Block info
Name: AVISinkBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input audio | audio/x-raw | one or more |
| | audio/mpeg | |
| | audio/x-ac3 | |
| | audio/x-alaw | |
| | audio/x-mulaw | |
| Input video | video/x-raw | one or more |
| | image/jpeg | |
| | video/x-divx | |
| | video/x-msmpeg | |
| | video/mpeg | |
| | video/x-h263 | |
| | video/x-h264 | |
| | video/x-dv | |
| | video/x-huffyuv | |
| | image/png | |
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->MP3EncoderBlock;
UniversalSourceBlock-->DIVXEncoderBlock;
MP3EncoderBlock-->AVISinkBlock;
DIVXEncoderBlock-->AVISinkBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var audioEncoderBlock = new MP3EncoderBlock(new MP3EncoderSettings() { Bitrate = 192 });
pipeline.Connect(fileSource.AudioOutput, audioEncoderBlock.Input);
var videoEncoderBlock = new DIVXEncoderBlock(new DIVXEncoderSettings());
pipeline.Connect(fileSource.VideoOutput, videoEncoderBlock.Input);
var sinkBlock = new AVISinkBlock(new AVISinkSettings(@"output.avi"));
pipeline.Connect(audioEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Audio));
pipeline.Connect(videoEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### RAW File
Universal output to a file. This sink is used inside all other, higher-level sinks (e.g., the MP4 sink). It can also be used to write RAW video or audio to a file.
#### Block info
Name: FileSinkBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input | Any stream format | 1 |
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->MP3EncoderBlock;
MP3EncoderBlock-->FileSinkBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var mp3EncoderBlock = new MP3EncoderBlock(new MP3EncoderSettings() { Bitrate = 192 });
pipeline.Connect(fileSource.AudioOutput, mp3EncoderBlock.Input);
var fileSinkBlock = new FileSinkBlock(@"output.mp3");
pipeline.Connect(mp3EncoderBlock.Output, fileSinkBlock.Input);
await pipeline.StartAsync();
```
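Because `FileSinkBlock` accepts any stream format, you can also write unencoded data directly, for example dumping decoded audio samples as-is. A minimal sketch (note that the resulting file has no container header):
```csharp
var pipeline = new MediaBlocksPipeline();
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri("test.mp3")));
// No encoder in the chain: the raw decoded audio is written as-is
var fileSinkBlock = new FileSinkBlock(@"output.raw");
pipeline.Connect(fileSource.AudioOutput, fileSinkBlock.Input);
await pipeline.StartAsync();
```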
#### Platforms
Windows, macOS, Linux, iOS, Android.
### MKV
MKV (Matroska) is an open-standard, free container format, similar to MP4 and AVI but with more flexibility and advanced features.
Use the `MKVSinkSettings` class to set the parameters.
#### Block info
Name: MKVSinkBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input audio | audio/x-raw | one or more |
| | audio/mpeg | |
| | audio/x-ac3 | |
| | audio/x-alaw | |
| | audio/x-mulaw | |
| | audio/x-wma | |
| | audio/x-vorbis | |
| | audio/x-opus | |
| | audio/x-flac | |
| Input video | video/x-raw | one or more |
| | image/jpeg | |
| | video/x-divx | |
| | video/x-msmpeg | |
| | video/mpeg | |
| | video/x-h263 | |
| | video/x-h264 | |
| | video/x-h265 | |
| | video/x-dv | |
| | video/x-huffyuv | |
| | video/x-wmv | |
| | video/x-jpc | |
| | video/x-vp8 | |
| | video/x-vp9 | |
| | video/x-theora | |
| | image/png | |
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->VorbisEncoderBlock;
UniversalSourceBlock-->VP9EncoderBlock;
VorbisEncoderBlock-->MKVSinkBlock;
VP9EncoderBlock-->MKVSinkBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var audioEncoderBlock = new VorbisEncoderBlock(new VorbisEncoderSettings() { Bitrate = 192 });
pipeline.Connect(fileSource.AudioOutput, audioEncoderBlock.Input);
var videoEncoderBlock = new VP9EncoderBlock(new VP9EncoderSettings() { Bitrate = 2000 });
pipeline.Connect(fileSource.VideoOutput, videoEncoderBlock.Input);
var sinkBlock = new MKVSinkBlock(new MKVSinkSettings(@"output.mkv"));
pipeline.Connect(audioEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Audio));
pipeline.Connect(videoEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### MOV
MOV (QuickTime File Format) is a multimedia container format developed by Apple for storing video, audio, and other time-based media. It supports various codecs and is widely used for multimedia content on Apple platforms, and also in professional video editing.
Use the `MOVSinkSettings` class to set the parameters.
#### Block info
Name: MOVSinkBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input audio | audio/x-raw | one or more |
| | audio/mpeg | |
| | audio/x-ac3 | |
| | audio/x-alaw | |
| | audio/x-mulaw | |
| | audio/AAC | |
| Input video | video/x-raw | one or more |
| | image/jpeg | |
| | video/x-divx | |
| | video/x-msmpeg | |
| | video/mpeg | |
| | video/x-h263 | |
| | video/x-h264 | |
| | video/x-h265 | |
| | video/x-dv | |
| | video/x-huffyuv | |
| | image/png | |
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->AACEncoderBlock;
UniversalSourceBlock-->H264EncoderBlock;
AACEncoderBlock-->MOVSinkBlock;
H264EncoderBlock-->MOVSinkBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var audioEncoderBlock = new AACEncoderBlock(new AACEncoderSettings() { Bitrate = 192 });
pipeline.Connect(fileSource.AudioOutput, audioEncoderBlock.Input);
var videoEncoderBlock = new H264EncoderBlock(new OpenH264EncoderSettings());
pipeline.Connect(fileSource.VideoOutput, videoEncoderBlock.Input);
var sinkBlock = new MOVSinkBlock(new MOVSinkSettings(@"output.mov"));
pipeline.Connect(audioEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Audio));
pipeline.Connect(videoEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### MP4
MP4 (MPEG-4 Part 14) is a digital multimedia container format used to store video, audio, and other data such as subtitles and images. It's widely used for sharing video content online and is compatible with a wide range of devices and platforms.
Use the `MP4SinkSettings` class to set the parameters.
#### Block info
Name: MP4SinkBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input audio | audio/x-raw | one or more |
| | audio/mpeg | |
| | audio/x-ac3 | |
| | audio/x-alaw | |
| | audio/x-mulaw | |
| | audio/AAC | |
| Input video | video/x-raw | one or more |
| | image/jpeg | |
| | video/x-divx | |
| | video/x-msmpeg | |
| | video/mpeg | |
| | video/x-h263 | |
| | video/x-h264 | |
| | video/x-h265 | |
| | video/x-dv | |
| | video/x-huffyuv | |
| | image/png | |
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->AACEncoderBlock;
UniversalSourceBlock-->H264EncoderBlock;
AACEncoderBlock-->MP4SinkBlock;
H264EncoderBlock-->MP4SinkBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var audioEncoderBlock = new AACEncoderBlock(new AACEncoderSettings() { Bitrate = 192 });
pipeline.Connect(fileSource.AudioOutput, audioEncoderBlock.Input);
var videoEncoderBlock = new H264EncoderBlock(new OpenH264EncoderSettings());
pipeline.Connect(fileSource.VideoOutput, videoEncoderBlock.Input);
var sinkBlock = new MP4SinkBlock(new MP4SinkSettings(@"output.mp4"));
pipeline.Connect(audioEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Audio));
pipeline.Connect(videoEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### MPEG-PS
MPEG-PS (MPEG Program Stream) is a container format for multiplexing digital audio, video, and other data. It is designed for reasonably reliable media, such as DVDs, CD-ROMs, and other disc media.
Use the `MPEGPSSinkSettings` class to set the parameters.
#### Block info
Name: MPEGPSSinkBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input audio | audio/x-raw | one or more |
| | audio/mpeg | |
| | audio/x-ac3 | |
| | audio/x-alaw | |
| | audio/x-mulaw | |
| Input video | video/x-raw | one or more |
| | image/jpeg | |
| | video/x-msmpeg | |
| | video/mpeg | |
| | video/x-h263 | |
| | video/x-h264 | |
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->MP2EncoderBlock;
UniversalSourceBlock-->MPEG2EncoderBlock;
MP2EncoderBlock-->MPEGPSSinkBlock;
MPEG2EncoderBlock-->MPEGPSSinkBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var audioEncoderBlock = new MP2EncoderBlock(new MP2EncoderSettings() { Bitrate = 192 });
pipeline.Connect(fileSource.AudioOutput, audioEncoderBlock.Input);
var videoEncoderBlock = new MPEG2EncoderBlock(new MPEG2EncoderSettings());
pipeline.Connect(fileSource.VideoOutput, videoEncoderBlock.Input);
var sinkBlock = new MPEGPSSinkBlock(new MPEGPSSinkSettings(@"output.mpg"));
pipeline.Connect(audioEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Audio));
pipeline.Connect(videoEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### MPEG-TS
MPEG-TS (MPEG Transport Stream) is a standard digital container format for transmission and storage of audio, video, and Program and System Information Protocol (PSIP) data. It is used in broadcast systems such as DVB, ATSC and IPTV.
Use the `MPEGTSSinkSettings` class to set the parameters.
#### Block info
Name: MPEGTSSinkBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input audio | audio/x-raw | one or more |
| | audio/mpeg | |
| | audio/x-ac3 | |
| | audio/x-alaw | |
| | audio/x-mulaw | |
| | audio/AAC | |
| Input video | video/x-raw | one or more |
| | image/jpeg | |
| | video/x-msmpeg | |
| | video/mpeg | |
| | video/x-h263 | |
| | video/x-h264 | |
| | video/x-h265 | |
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->AACEncoderBlock;
UniversalSourceBlock-->H264EncoderBlock;
AACEncoderBlock-->MPEGTSSinkBlock;
H264EncoderBlock-->MPEGTSSinkBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var audioEncoderBlock = new AACEncoderBlock(new AACEncoderSettings() { Bitrate = 192 });
pipeline.Connect(fileSource.AudioOutput, audioEncoderBlock.Input);
var videoEncoderBlock = new H264EncoderBlock(new OpenH264EncoderSettings());
pipeline.Connect(fileSource.VideoOutput, videoEncoderBlock.Input);
var sinkBlock = new MPEGTSSinkBlock(new MPEGTSSinkSettings(@"output.ts"));
pipeline.Connect(audioEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Audio));
pipeline.Connect(videoEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### MXF
MXF (Material Exchange Format) is a container format for professional digital video and audio media, developed to address issues such as file exchange, interoperability, and to improve project workflow between production houses and content/equipment providers.
Use the `MXFSinkSettings` class to set the parameters.
#### Block info
Name: MXFSinkBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input audio | audio/x-raw | one or more |
| | audio/mpeg | |
| | audio/x-ac3 | |
| | audio/x-alaw | |
| | audio/x-mulaw | |
| | audio/AAC | |
| Input video | video/x-raw | one or more |
| | image/jpeg | |
| | video/x-divx | |
| | video/x-msmpeg | |
| | video/mpeg | |
| | video/x-h263 | |
| | video/x-h264 | |
| | video/x-h265 | |
| | video/x-dv | |
| | image/png | |
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->PCMEncoderBlock;
UniversalSourceBlock-->DIVXEncoderBlock;
PCMEncoderBlock-->MXFSinkBlock;
DIVXEncoderBlock-->MXFSinkBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var audioBlock = new PCMEncoderBlock(new PCMEncoderSettings());
pipeline.Connect(fileSource.AudioOutput, audioBlock.Input);
var videoEncoderBlock = new DIVXEncoderBlock(new DIVXEncoderSettings());
pipeline.Connect(fileSource.VideoOutput, videoEncoderBlock.Input);
var sinkBlock = new MXFSinkBlock(new MXFSinkSettings(@"output.mxf"));
pipeline.Connect(audioBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Audio));
pipeline.Connect(videoEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### OGG
OGG is a free, open container format designed for efficient streaming and manipulation of high quality digital multimedia. It is developed by the Xiph.Org Foundation and supports audio codecs like Vorbis, Opus, and FLAC, and video codecs like Theora.
Use the `OGGSinkSettings` class to set the parameters.
#### Block info
Name: OGGSinkBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input audio | audio/x-raw | one or more |
| | audio/x-vorbis | |
| | audio/x-flac | |
| | audio/x-speex | |
| | audio/x-celt | |
| | audio/x-opus | |
| Input video | video/x-raw | one or more |
| | video/x-theora | |
| | video/x-dirac | |
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->VorbisEncoderBlock;
UniversalSourceBlock-->TheoraEncoderBlock;
VorbisEncoderBlock-->OGGSinkBlock;
TheoraEncoderBlock-->OGGSinkBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var audioEncoderBlock = new VorbisEncoderBlock(new VorbisEncoderSettings() { Bitrate = 192 });
pipeline.Connect(fileSource.AudioOutput, audioEncoderBlock.Input);
var videoEncoderBlock = new TheoraEncoderBlock(new TheoraEncoderSettings());
pipeline.Connect(fileSource.VideoOutput, videoEncoderBlock.Input);
var sinkBlock = new OGGSinkBlock(new OGGSinkSettings(@"output.ogg"));
pipeline.Connect(audioEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Audio));
pipeline.Connect(videoEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### WAV
WAV (Waveform Audio File Format) is an audio file format standard developed by IBM and Microsoft for storing audio bitstreams on PCs. It is the main format used on Windows systems for raw and typically uncompressed audio.
Use the `WAVSinkSettings` class to set the parameters.
#### Block info
Name: WAVSinkBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input audio | audio/x-raw | one |
| | audio/x-alaw | |
| | audio/x-mulaw | |
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->PCMEncoderBlock;
PCMEncoderBlock-->WAVSinkBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp3";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var audioBlock = new PCMEncoderBlock(new PCMEncoderSettings());
pipeline.Connect(fileSource.AudioOutput, audioBlock.Input);
var sinkBlock = new WAVSinkBlock(new WAVSinkSettings(@"output.wav"));
pipeline.Connect(audioBlock.Output, sinkBlock.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### WebM
WebM is an open, royalty-free media file format designed for the web. WebM defines the file container structure as well as the video and audio formats it carries. WebM files consist of video streams compressed with the VP8 or VP9 video codecs and audio streams compressed with the Vorbis or Opus audio codecs.
Use the `WebMSinkSettings` class to set the parameters.
#### Block info
Name: WebMSinkBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input audio | audio/x-raw | one or more |
| | audio/x-vorbis | |
| | audio/x-opus | |
| Input video | video/x-raw | one or more |
| | video/x-vp8 | |
| | video/x-vp9 | |
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->VorbisEncoderBlock;
UniversalSourceBlock-->VP9EncoderBlock;
VorbisEncoderBlock-->WebMSinkBlock;
VP9EncoderBlock-->WebMSinkBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var audioEncoderBlock = new VorbisEncoderBlock(new VorbisEncoderSettings() { Bitrate = 192 });
pipeline.Connect(fileSource.AudioOutput, audioEncoderBlock.Input);
var videoEncoderBlock = new VP9EncoderBlock(new VP9EncoderSettings());
pipeline.Connect(fileSource.VideoOutput, videoEncoderBlock.Input);
var sinkBlock = new WebMSinkBlock(new WebMSinkSettings(@"output.webm"));
pipeline.Connect(audioEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Audio));
pipeline.Connect(videoEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
## Network Streaming Sinks
### RTMP
`RTMP (Real-Time Messaging Protocol)`: Developed by Adobe, RTMP is a protocol used for streaming audio, video, and data over the Internet, optimized for high-performance transmission. It enables efficient, low-latency communication, commonly used in live broadcasting like sports events and concerts.
Use the `RTMPSinkSettings` class to set the parameters.
#### Block info
Name: RTMPSinkBlock.
| Pin direction | Media type | Pins count |
| --- |:------------:|:-----------:|
| Input audio | audio/mpeg [1,2,4] | one |
| | audio/x-adpcm | |
| | PCM [U8, S16LE] | |
| | audio/x-speex | |
| | audio/x-mulaw | |
| | audio/x-alaw | |
| | audio/x-nellymoser | |
| Input video | video/x-h264 | one |
#### The sample pipeline
```mermaid
graph LR;
VirtualVideoSourceBlock-->H264EncoderBlock;
VirtualAudioSourceBlock-->AACEncoderBlock;
H264EncoderBlock-->RTMPSinkBlock;
AACEncoderBlock-->RTMPSinkBlock;
```
#### Sample code
```csharp
// Pipeline
var pipeline = new MediaBlocksPipeline();
// video and audio sources
var virtualVideoSource = new VirtualVideoSourceSettings
{
Width = 1280,
Height = 720,
FrameRate = VideoFrameRate.FPS_25,
};
var videoSource = new VirtualVideoSourceBlock(virtualVideoSource);
var virtualAudioSource = new VirtualAudioSourceSettings
{
Channels = 2,
SampleRate = 44100,
};
var audioSource = new VirtualAudioSourceBlock(virtualAudioSource);
// H264/AAC encoders
var h264Encoder = new H264EncoderBlock(new OpenH264EncoderSettings());
var aacEncoder = new AACEncoderBlock();
pipeline.Connect(videoSource.Output, h264Encoder.Input);
pipeline.Connect(audioSource.Output, aacEncoder.Input);
// RTMP sink
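// NOTE: set the target RTMP URL (and stream key, if required) on RTMPSinkSettings
// before starting; the exact property depends on the SDK version, so check the
// RTMPSinkSettings reference. Settings are left at their defaults here for brevity.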
var sink = new RTMPSinkBlock(new RTMPSinkSettings());
pipeline.Connect(h264Encoder.Output, sink.CreateNewInput(MediaBlockPadMediaType.Video));
pipeline.Connect(aacEncoder.Output, sink.CreateNewInput(MediaBlockPadMediaType.Audio));
// Start
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Facebook Live
Facebook Live is a feature that allows live streaming of video on Facebook. The livestream can be published to personal profiles, pages, or groups.
Use the `FacebookLiveSinkSettings` class to set the parameters.
#### Block info
Name: FacebookLiveSinkBlock.
| Pin direction | Media type | Pins count |
| --- |:------------:|:-----------:|
| Input audio | audio/mpeg [1,2,4] | one |
| | audio/x-adpcm | |
| | PCM [U8, S16LE] | |
| | audio/x-speex | |
| | audio/x-mulaw | |
| | audio/x-alaw | |
| | audio/x-nellymoser | |
| Input video | video/x-h264 | one |
#### The sample pipeline
```mermaid
graph LR;
VirtualVideoSourceBlock-->H264EncoderBlock;
VirtualAudioSourceBlock-->AACEncoderBlock;
H264EncoderBlock-->FacebookLiveSinkBlock;
AACEncoderBlock-->FacebookLiveSinkBlock;
```
#### Sample code
```csharp
// Pipeline
var pipeline = new MediaBlocksPipeline();
// video and audio sources
var virtualVideoSource = new VirtualVideoSourceSettings
{
Width = 1280,
Height = 720,
FrameRate = VideoFrameRate.FPS_25,
};
var videoSource = new VirtualVideoSourceBlock(virtualVideoSource);
var virtualAudioSource = new VirtualAudioSourceSettings
{
Channels = 2,
SampleRate = 44100,
};
var audioSource = new VirtualAudioSourceBlock(virtualAudioSource);
// H264/AAC encoders
var h264Encoder = new H264EncoderBlock(new OpenH264EncoderSettings());
var aacEncoder = new AACEncoderBlock();
pipeline.Connect(videoSource.Output, h264Encoder.Input);
pipeline.Connect(audioSource.Output, aacEncoder.Input);
// Facebook Live sink
var sink = new FacebookLiveSinkBlock(new FacebookLiveSinkSettings(
"https://facebook.com/rtmp/...",
"your_stream_key"));
pipeline.Connect(h264Encoder.Output, sink.CreateNewInput(MediaBlockPadMediaType.Video));
pipeline.Connect(aacEncoder.Output, sink.CreateNewInput(MediaBlockPadMediaType.Audio));
// Start
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### HLS
HLS (HTTP Live Streaming) is an HTTP-based adaptive streaming communications protocol developed by Apple. It enables adaptive bitrate streaming by breaking the stream into a sequence of small HTTP-based file segments, typically using MPEG-TS fragments as the container.
Use the `HLSSinkSettings` class to set the parameters.
#### Block info
Name: HLSSinkBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input audio | audio/mpeg | one or more |
| | audio/x-ac3 | |
| | audio/x-alaw | |
| | audio/x-mulaw | |
| | audio/AAC | |
| Input video | video/x-raw | one or more |
| | image/jpeg | |
| | video/x-msmpeg | |
| | video/mpeg | |
| | video/x-h263 | |
| | video/x-h264 | |
| | video/x-h265 | |
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->AACEncoderBlock;
UniversalSourceBlock-->H264EncoderBlock1;
UniversalSourceBlock-->H264EncoderBlock2;
UniversalSourceBlock-->H264EncoderBlock3;
AACEncoderBlock-->HLSSinkBlock;
H264EncoderBlock1-->HLSSinkBlock;
H264EncoderBlock2-->HLSSinkBlock;
H264EncoderBlock3-->HLSSinkBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var audioEncoderBlock = new AACEncoderBlock(new AACEncoderSettings() { Bitrate = 192 });
pipeline.Connect(fileSource.AudioOutput, audioEncoderBlock.Input);
// 3 video encoders with different bitrates for adaptive streaming
var videoEncoderBlock1 = new H264EncoderBlock(new OpenH264EncoderSettings { Bitrate = 3000, Width = 1920, Height = 1080 });
var videoEncoderBlock2 = new H264EncoderBlock(new OpenH264EncoderSettings { Bitrate = 1500, Width = 1280, Height = 720 });
var videoEncoderBlock3 = new H264EncoderBlock(new OpenH264EncoderSettings { Bitrate = 800, Width = 854, Height = 480 });
pipeline.Connect(fileSource.VideoOutput, videoEncoderBlock1.Input);
pipeline.Connect(fileSource.VideoOutput, videoEncoderBlock2.Input);
pipeline.Connect(fileSource.VideoOutput, videoEncoderBlock3.Input);
// Configure HLS sink
var hlsSettings = new HLSSinkSettings("./output/")
{
PlaylistName = "playlist.m3u8",
SegmentDuration = 6,
PlaylistType = HLSPlaylistType.Event,
HTTPServerEnabled = true,
HTTPServerPort = 8080
};
var sinkBlock = new HLSSinkBlock(hlsSettings);
// Connect audio
pipeline.Connect(audioEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Audio));
// Connect video variants
pipeline.Connect(videoEncoderBlock1.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Video, "1080p"));
pipeline.Connect(videoEncoderBlock2.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Video, "720p"));
pipeline.Connect(videoEncoderBlock3.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Video, "480p"));
await pipeline.StartAsync();
```
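With `HTTPServerEnabled` set, the sink serves the generated playlist and segments over its built-in HTTP server. Assuming the usual URL layout (playlist name appended to the server root; verify against your configuration), the stream from the sample above would be reachable as shown below:
```csharp
// Assumed URL layout for the built-in HTTP server configured above
Console.WriteLine("HLS playlist (assumed URL): http://localhost:8080/playlist.m3u8");
```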
#### Platforms
Windows, macOS, Linux, iOS, Android.
### MJPEG over HTTP
HTTP MJPEG (Motion JPEG) Live is a video streaming format where each video frame is compressed separately as a JPEG image and transmitted over HTTP. It is widely used in IP cameras and webcams due to its simplicity, although it is less efficient than modern codecs.
Use the `HTTPMJPEGLiveSinkSettings` class to set the parameters.
#### Block info
Name: HTTPMJPEGLiveSinkBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | video/x-raw | one |
| | image/jpeg | |
#### The sample pipeline
```mermaid
graph LR;
VirtualVideoSourceBlock-->MJPEGEncoderBlock;
MJPEGEncoderBlock-->HTTPMJPEGLiveSinkBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create virtual video source
var virtualVideoSource = new VirtualVideoSourceSettings
{
Width = 1280,
Height = 720,
FrameRate = VideoFrameRate.FPS_30,
};
var videoSource = new VirtualVideoSourceBlock(virtualVideoSource);
// MJPEG encoder
var mjpegEncoder = new MJPEGEncoderBlock(new MJPEGEncoderSettings { Quality = 80 });
pipeline.Connect(videoSource.Output, mjpegEncoder.Input);
// HTTP MJPEG server
var sink = new HTTPMJPEGLiveSinkBlock(new HTTPMJPEGLiveSinkSettings
{
Port = 8080,
Path = "/stream"
});
pipeline.Connect(mjpegEncoder.Output, sink.Input);
// Start
await pipeline.StartAsync();
Console.WriteLine("MJPEG stream available at http://localhost:8080/stream");
Console.WriteLine("Press any key to stop...");
Console.ReadKey();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### NDI
NDI (Network Device Interface) is a royalty-free video transport standard developed by NewTek that enables video-compatible products to communicate, deliver, and receive broadcast-quality video in a high-quality, low-latency manner over standard Ethernet networks.
Use the `NDISinkSettings` class to set the parameters.
#### Block info
Name: NDISinkBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input audio | audio/x-raw | one |
| Input video | video/x-raw | one |
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->NDISinkBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var sinkBlock = new NDISinkBlock(new NDISinkSettings("My NDI Stream"));
pipeline.Connect(fileSource.AudioOutput, sinkBlock.AudioInput);
pipeline.Connect(fileSource.VideoOutput, sinkBlock.VideoInput);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux.
### SRT
SRT (Secure Reliable Transport) is an open source video transport protocol that enables the delivery of high-quality, secure, low-latency video across unpredictable networks like the public internet. It was developed by Haivision.
Use the `SRTSinkSettings` class to set the parameters.
#### Block info
Name: SRTSinkBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input | Any stream format | 1 |
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->MP4MuxerBlock;
MP4MuxerBlock-->SRTSinkBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
// Create a multiplexer block to combine audio and video
var muxer = new MP4MuxerBlock();
pipeline.Connect(fileSource.AudioOutput, muxer.CreateNewInput(MediaBlockPadMediaType.Audio));
pipeline.Connect(fileSource.VideoOutput, muxer.CreateNewInput(MediaBlockPadMediaType.Video));
// Create SRT sink in caller mode (connecting to a listener)
var srtSettings = new SRTSinkSettings
{
Host = "srt-server.example.com",
Port = 1234,
Mode = SRTMode.Caller,
Latency = 200, // milliseconds
Passphrase = "optional-encryption-passphrase"
};
var srtSink = new SRTSinkBlock(srtSettings);
pipeline.Connect(muxer.Output, srtSink.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### SRT MPEG-TS
SRT MPEG-TS is a combination of the SRT transport protocol with MPEG-TS container format. This allows secure, reliable transport of MPEG-TS streams over public networks, which is useful for broadcast and professional video workflows.
Use the `SRTMPEGTSSinkSettings` class to set the parameters.
#### Block info
Name: SRTMPEGTSSinkBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input audio | audio/x-raw | one or more |
| | audio/mpeg | |
| | audio/x-ac3 | |
| | audio/x-alaw | |
| | audio/x-mulaw | |
| | audio/AAC | |
| Input video | video/x-raw | one or more |
| | image/jpeg | |
| | video/x-msmpeg | |
| | video/mpeg | |
| | video/x-h263 | |
| | video/x-h264 | |
| | video/x-h265 | |
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->AACEncoderBlock;
UniversalSourceBlock-->H264EncoderBlock;
AACEncoderBlock-->SRTMPEGTSSinkBlock;
H264EncoderBlock-->SRTMPEGTSSinkBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var audioEncoderBlock = new AACEncoderBlock(new AACEncoderSettings() { Bitrate = 192 });
pipeline.Connect(fileSource.AudioOutput, audioEncoderBlock.Input);
var videoEncoderBlock = new H264EncoderBlock(new OpenH264EncoderSettings());
pipeline.Connect(fileSource.VideoOutput, videoEncoderBlock.Input);
// Configure SRT MPEG-TS sink
var srtMpegtsSinkSettings = new SRTMPEGTSSinkSettings
{
Host = "srt-server.example.com",
Port = 1234,
Mode = SRTMode.Caller,
Latency = 200,
Passphrase = "optional-encryption-passphrase"
};
var sinkBlock = new SRTMPEGTSSinkBlock(srtMpegtsSinkSettings);
pipeline.Connect(audioEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Audio));
pipeline.Connect(videoEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### YouTube Live
YouTube Live is a live streaming service provided by YouTube. It allows creators to broadcast live videos to their audience through the YouTube platform.
Use the `YouTubeSinkSettings` class to set the parameters.
#### Block info
Name: YouTubeSinkBlock.
| Pin direction | Media type | Pins count |
| --- |:------------:|:-----------:|
| Input audio | audio/mpeg [1,2,4] | one |
| | audio/x-adpcm | |
| | PCM [U8, S16LE] | |
| | audio/x-speex | |
| | audio/x-mulaw | |
| | audio/x-alaw | |
| | audio/x-nellymoser | |
| Input video | video/x-h264 | one |
#### The sample pipeline
```mermaid
graph LR;
VirtualVideoSourceBlock-->H264EncoderBlock;
VirtualAudioSourceBlock-->AACEncoderBlock;
H264EncoderBlock-->YouTubeSinkBlock;
AACEncoderBlock-->YouTubeSinkBlock;
```
#### Sample code
```csharp
// Pipeline
var pipeline = new MediaBlocksPipeline();
// video and audio sources
var virtualVideoSource = new VirtualVideoSourceSettings
{
Width = 1920,
Height = 1080,
FrameRate = VideoFrameRate.FPS_30,
};
var videoSource = new VirtualVideoSourceBlock(virtualVideoSource);
var virtualAudioSource = new VirtualAudioSourceSettings
{
Channels = 2,
SampleRate = 48000,
};
var audioSource = new VirtualAudioSourceBlock(virtualAudioSource);
// H264/AAC encoders
var h264Settings = new OpenH264EncoderSettings
{
Bitrate = 4000, // 4 Mbps for 1080p
KeyframeInterval = 2 // Keyframe every 2 seconds
};
var h264Encoder = new H264EncoderBlock(h264Settings);
var aacSettings = new AACEncoderSettings
{
Bitrate = 192 // 192 kbps for audio
};
var aacEncoder = new AACEncoderBlock(aacSettings);
pipeline.Connect(videoSource.Output, h264Encoder.Input);
pipeline.Connect(audioSource.Output, aacEncoder.Input);
// YouTube Live sink
var sink = new YouTubeSinkBlock(new YouTubeSinkSettings(
"rtmp://a.rtmp.youtube.com/live2/",
"your_youtube_stream_key"));
pipeline.Connect(h264Encoder.Output, sink.CreateNewInput(MediaBlockPadMediaType.Video));
pipeline.Connect(aacEncoder.Output, sink.CreateNewInput(MediaBlockPadMediaType.Audio));
// Start
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Shoutcast
`Shoutcast` is a service for streaming media over the internet to media players, using its own cross-platform proprietary software. It allows digital audio content, primarily in MP3 or High-Efficiency Advanced Audio Coding (HE-AAC) format, to be broadcast. The most common use of Shoutcast is for creating or listening to Internet audio broadcasts.
Use the `ShoutcastSinkSettings` class to set the parameters.
#### Block info
Name: ShoutcastSinkBlock.
| Pin direction | Media type | Pins count |
| ------------- | :----------------: | :--------: |
| Input audio | audio/mpeg | one |
| | audio/aac | |
| | audio/x-aac | |
#### The sample pipeline
```mermaid
graph LR;
subgraph MainPipeline
direction LR
A[Audio Source e.g. UniversalSourceBlock or VirtualAudioSourceBlock] --> B{Optional Audio Encoder e.g. MP3EncoderBlock};
B --> C[ShoutcastSinkBlock];
end
subgraph AlternativeIfSourceEncoded
A2[Encoded Audio Source] --> C2[ShoutcastSinkBlock];
end
```
#### Sample code
```csharp
// Pipeline
var pipeline = new MediaBlocksPipeline();
// Audio source (e.g., from a file with MP3/AAC or raw audio)
var universalSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri("input.mp3")));
// Or use VirtualAudioSourceBlock for live raw audio input:
// var audioSource = new VirtualAudioSourceBlock(new VirtualAudioSourceSettings { Channels = 2, SampleRate = 44100 });
// Optional: Audio Encoder (if source is raw audio or needs re-encoding for Shoutcast)
// Example: MP3EncoderBlock if Shoutcast server expects MP3
var mp3Encoder = new MP3EncoderBlock(new MP3EncoderSettings() { Bitrate = 128 }); // kbps, matching the other MP3 samples on this page
pipeline.Connect(universalSource.AudioOutput, mp3Encoder.Input);
// If using VirtualAudioSourceBlock: pipeline.Connect(audioSource.Output, mp3Encoder.Input);
// Shoutcast sink
// Configure the Shoutcast/Icecast server connection details
var shoutcastSettings = new ShoutcastSinkSettings
{
IP = "your-shoutcast-server-ip", // Server hostname or IP address
Port = 8000, // Server port
Mount = "/mountpoint", // Mount point (e.g., "/stream", "/live.mp3")
Password = "your-password", // Source password for the server
Protocol = ShoutProtocol.ICY, // ShoutProtocol.ICY for Shoutcast v1/v2 (e.g., icy://)
// ShoutProtocol.HTTP for Icecast 2.x (e.g., http://)
// ShoutProtocol.XAudiocast for older Shoutcast/XAudioCast
// Metadata for the stream
StreamName = "My Radio Stream",
Genre = "Various",
Description = "My awesome internet radio station",
URL = "http://my-radio-website.com", // Homepage URL for your stream (shows up in directory metadata)
Public = true, // Set to true to list on public directories (if server supports)
Username = "source" // Username for authentication (often "source"; check server config)
// Other stream parameters like audio bitrate, samplerate, channels are typically determined
// by the properties of the encoded input audio stream fed to the ShoutcastSinkBlock.
};
var shoutcastSink = new ShoutcastSinkBlock(shoutcastSettings);
// Connect encoder's output (or source's audio output if already encoded and compatible) to Shoutcast sink
pipeline.Connect(mp3Encoder.Output, shoutcastSink.Input);
// If source is already encoded and compatible (e.g. MP3 file to MP3 Shoutcast):
// pipeline.Connect(universalSource.AudioOutput, shoutcastSink.Input);
// Start the pipeline
await pipeline.StartAsync();
// For display purposes, you can construct a string representing the connection:
string protocolScheme = shoutcastSettings.Protocol switch
{
ShoutProtocol.ICY => "icy",
ShoutProtocol.HTTP => "http",
ShoutProtocol.XAudiocast => "xaudiocast", // Note: actual scheme might be http for XAudiocast
_ => "unknown"
};
Console.WriteLine($"Streaming to Shoutcast server: {protocolScheme}://{shoutcastSettings.IP}:{shoutcastSettings.Port}{shoutcastSettings.Mount}");
Console.WriteLine($"Stream metadata URL (for directories): {shoutcastSettings.URL}");
Console.WriteLine("Press any key to stop the stream...");
Console.ReadKey();
// Stop the pipeline (important for graceful disconnection and resource cleanup)
await pipeline.StopAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
---END OF PAGE---
# Local File: .\dotnet\mediablocks\Sources\index.md
---
title: .Net Media Source Blocks Guide
description: Explore a complete guide to .Net Media SDK source blocks. Learn about hardware, file, network, and virtual sources for your media processing pipelines.
sidebar_label: Sources
---
# Source Blocks - VisioForge Media Blocks SDK .Net
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
Source blocks provide data to the pipeline and are typically the first blocks in any media processing chain. VisioForge Media Blocks SDK .Net provides a comprehensive collection of source blocks for various inputs including hardware devices, files, networks, and virtual sources.
## Hardware Source Blocks
### System Video Source
SystemVideoSourceBlock is used to access webcams and other video capture devices.
#### Block info
Name: SystemVideoSourceBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Output video | uncompressed video | 1 |
#### Enumerate available devices
Use the `DeviceEnumerator.Shared.VideoSourcesAsync()` method to get a list of available devices and their specifications: available resolutions, frame rates, and video formats.
This method returns a list of `VideoCaptureDeviceInfo` objects. Each `VideoCaptureDeviceInfo` object provides detailed information about a capture device.
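A short enumeration sketch that lists each device and its formats; the `Name` property and the format item's string representation are assumptions here and may differ slightly between SDK versions:
```csharp
var devices = await DeviceEnumerator.Shared.VideoSourcesAsync();
foreach (var device in devices)
{
    Console.WriteLine(device.Name); // assumed display-name property
    foreach (var format in device.VideoFormats)
    {
        // Each format item exposes the resolution and a FrameRateList (see the sample below)
        Console.WriteLine($"  {format}");
    }
}
```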
#### The sample pipeline
```mermaid
graph LR;
SystemVideoSourceBlock-->VideoRendererBlock;
```
#### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// create video source
VideoCaptureDeviceSourceSettings videoSourceSettings = null;
// select the first device
var device = (await DeviceEnumerator.Shared.VideoSourcesAsync())[0];
if (device != null)
{
// select the first format (maybe not the best, but it is just a sample)
var formatItem = device.VideoFormats[0];
if (formatItem != null)
{
videoSourceSettings = new VideoCaptureDeviceSourceSettings(device)
{
Format = formatItem.ToFormat()
};
// select the first frame rate
videoSourceSettings.Format.FrameRate = formatItem.FrameRateList[0];
}
}
// create video source block using the selected device and format
var videoSource = new SystemVideoSourceBlock(videoSourceSettings);
// create video renderer block
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
// connect blocks
pipeline.Connect(videoSource.Output, videoRenderer.Input);
// start pipeline
await pipeline.StartAsync();
```
#### Sample applications
- [Simple Video Capture Demo (WPF)](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK/WPF/CSharp/Simple%20Capture%20Demo)
#### Remarks
You can specify an API to use during the device enumeration (refer to the `VideoCaptureDeviceAPI` enum description under `SystemVideoSourceBlock` for typical values). Android and iOS platforms have only one API, while Windows and Linux have multiple APIs.
#### Platforms
Windows, macOS, Linux, iOS, Android.
### System Audio Source
SystemAudioSourceBlock is used to access mics and other audio capture devices.
#### Block info
Name: SystemAudioSourceBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Output audio | uncompressed audio | 1 |
#### Enumerate available devices
Use the `DeviceEnumerator.Shared.AudioSourcesAsync()` method to get a list of available devices and their specifications.
Select a device and one of its formats to create the source settings.
#### The sample pipeline
```mermaid
graph LR;
SystemAudioSourceBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// create audio source block
IAudioCaptureDeviceSourceSettings audioSourceSettings = null;
// select first device
var device = (await DeviceEnumerator.Shared.AudioSourcesAsync())[0];
if (device != null)
{
// select first format
var formatItem = device.Formats[0];
if (formatItem != null)
{
audioSourceSettings = device.CreateSourceSettings(formatItem.ToFormat());
}
}
// create audio source block using selected device and format
var audioSource = new SystemAudioSourceBlock(audioSourceSettings);
// create audio renderer block
var audioRenderer = new AudioRendererBlock();
// connect blocks
pipeline.Connect(audioSource.Output, audioRenderer.Input);
// start pipeline
await pipeline.StartAsync();
```
#### Capture audio from speakers (loopback)
Currently, loopback audio capture is supported only on Windows. Use the `LoopbackAudioCaptureDeviceSourceSettings` class to create the source settings for loopback audio capture.
WASAPI2 is used as the default API for loopback audio capture. You can specify the API to use during device enumeration.
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// create audio source block
var deviceItem = (await DeviceEnumerator.Shared.AudioOutputsAsync(AudioOutputDeviceAPI.WASAPI2))[0];
if (deviceItem == null)
{
return;
}
var audioSourceSettings = new LoopbackAudioCaptureDeviceSourceSettings(deviceItem);
var audioSource = new SystemAudioSourceBlock(audioSourceSettings);
// create audio renderer block
var audioRenderer = new AudioRendererBlock();
// connect blocks
pipeline.Connect(audioSource.Output, audioRenderer.Input);
// start pipeline
await pipeline.StartAsync();
```
#### Sample applications
- [Audio Capture Demo](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK/WPF/CSharp/Audio%20Capture%20Demo)
- [Simple Capture Demo](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK/WPF/CSharp/Simple%20Capture%20Demo)
#### Remarks
You can specify an API to use during the device enumeration. Android and iOS platforms have only one API, while Windows and Linux have multiple APIs.
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Basler Source Block
The Basler source block supports Basler USB3 Vision and GigE cameras.
The Pylon SDK or Runtime should be installed to use the camera source.
#### Block info
Name: BaslerSourceBlock.
| Pin direction | Media type | Pins count |
|-----------------|:--------------------:|:-----------:|
| Output video | Uncompressed | 1 |
#### The sample pipeline
```mermaid
graph LR;
BaslerSourceBlock-->VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// get Basler source info by enumerating sources
var sources = await DeviceEnumerator.Shared.BaslerSourcesAsync();
var sourceInfo = sources[0];
// create Basler source
var source = new BaslerSourceBlock(new BaslerSourceSettings(sourceInfo));
// create video renderer for VideoView
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
// connect
pipeline.Connect(source.Output, videoRenderer.Input);
// start
await pipeline.StartAsync();
```
#### Sample applications
- [Basler Source Demo (WPF)](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK/WPF/CSharp/Basler%20Source%20Demo)
#### Platforms
Windows, Linux.
### Spinnaker/FLIR Source Block
The Spinnaker/FLIR source supports connecting to FLIR cameras using the Spinnaker SDK.
To use the `SpinnakerSourceBlock`, you first need to enumerate available Spinnaker cameras and then configure the source using `SpinnakerSourceSettings`.
#### Enumerate Devices & `SpinnakerCameraInfo`
Use `DeviceEnumerator.Shared.SpinnakerSourcesAsync()` to get a list of `SpinnakerCameraInfo` objects. Each `SpinnakerCameraInfo` provides details about a detected camera:
- `Name` (string): Unique identifier or name of the camera. Often a serial number or model-serial combination.
- `NetworkInterfaceName` (string): Name of the network interface if it's a GigE camera.
- `Vendor` (string): Camera vendor name.
- `Model` (string): Camera model name.
- `SerialNumber` (string): Camera's serial number.
- `FirmwareVersion` (string): Camera's firmware version.
- `SensorSize` (`Size`): Reports the sensor dimensions (Width, Height). You might need to call a method on `SpinnakerCameraInfo` like `ReadInfo()` (if available, or implied by enumeration) to populate this.
- `WidthMax` (int): Maximum sensor width.
- `HeightMax` (int): Maximum sensor height.
You select a `SpinnakerCameraInfo` object from the list to initialize `SpinnakerSourceSettings`.
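A minimal enumeration sketch that prints the properties listed above; depending on the SDK version, some fields may only be populated after the camera info has been read:
```csharp
var cameras = await DeviceEnumerator.Shared.SpinnakerSourcesAsync();
foreach (var cam in cameras)
{
    // Properties as documented above; availability may vary by camera model
    Console.WriteLine($"{cam.Name}: {cam.Vendor} {cam.Model}, S/N {cam.SerialNumber}, FW {cam.FirmwareVersion}");
}
```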
#### Settings
The `SpinnakerSourceBlock` is configured using `SpinnakerSourceSettings`. Key properties:
- `Name` (string): The name of the camera (from `SpinnakerCameraInfo.Name`) to use.
- `Region` (`Rect`): Defines the Region of Interest (ROI) to capture from the camera sensor. Set X, Y, Width, Height.
- `FrameRate` (`VideoFrameRate`): The desired frame rate.
- `PixelFormat` (`SpinnakerPixelFormat` enum): The desired pixel format (e.g., `RGB`, `Mono8`, `BayerRG8`). Default `RGB`.
- `OffsetX` (int): X offset for the ROI on the sensor (default 0). Often implicitly part of `Region.X`.
- `OffsetY` (int): Y offset for the ROI on the sensor (default 0). Often implicitly part of `Region.Y`.
- `ExposureMinimum` (int): Minimum exposure time for auto-exposure algorithm (microseconds, e.g., 10-29999999). Default 0 (auto/camera default).
- `ExposureMaximum` (int): Maximum exposure time for auto-exposure algorithm (microseconds). Default 0 (auto/camera default).
- `ShutterType` (`SpinnakerSourceShutterType` enum): Type of shutter (e.g., `Rolling`, `Global`). Default `Rolling`.
Constructor:
`SpinnakerSourceSettings(string deviceName, Rect region, VideoFrameRate frameRate, SpinnakerPixelFormat pixelFormat = SpinnakerPixelFormat.RGB)`
#### Block info
Name: SpinnakerSourceBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Output video | various | one or more |
#### The sample pipeline
```mermaid
graph LR;
SpinnakerSourceBlock-->VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var sources = await DeviceEnumerator.Shared.SpinnakerSourcesAsync();
var sourceSettings = new SpinnakerSourceSettings(sources[0].Name, new VisioForge.Core.Types.Rect(0, 0, 1280, 720), new VideoFrameRate(10));
var source = new SpinnakerSourceBlock(sourceSettings);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(source.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
#### Requirements
- Spinnaker SDK installed.
#### Platforms
Windows.
### Allied Vision Source Block
The Allied Vision Source Block enables integration with Allied Vision cameras using the Vimba SDK. It allows capturing video streams from these industrial cameras.
#### Block info
Name: AlliedVisionSourceBlock.
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
AlliedVisionSourceBlock-->VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Enumerate Allied Vision cameras
var alliedVisionCameras = await DeviceEnumerator.Shared.AlliedVisionSourcesAsync();
if (alliedVisionCameras.Count == 0)
{
Console.WriteLine("No Allied Vision cameras found.");
return;
}
var cameraInfo = alliedVisionCameras[0]; // Select the first camera
// Create Allied Vision source settings
// Width, height, x, y are optional and depend on whether you want to set a specific ROI
// If null, it might use default/full sensor resolution. Camera.ReadInfo() should be called.
cameraInfo.ReadInfo(); // Ensure camera info like Width/Height is read
var alliedVisionSettings = new AlliedVisionSourceSettings(
cameraInfo,
width: cameraInfo.Width, // Or a specific ROI width
height: cameraInfo.Height // Or a specific ROI height
);
// Optionally configure other settings
alliedVisionSettings.ExposureAuto = VmbSrcExposureAutoModes.Continuous;
alliedVisionSettings.Gain = 10; // Example gain value
var alliedVisionSource = new AlliedVisionSourceBlock(alliedVisionSettings);
// Create video renderer
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1 is your display control
// Connect blocks
pipeline.Connect(alliedVisionSource.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
#### Requirements
- Allied Vision Vimba SDK must be installed.
#### Sample applications
- Refer to samples demonstrating industrial camera integration if available.
#### Platforms
Windows, macOS, Linux.
### Blackmagic Decklink Source Block
For information about Decklink sources, see [Decklink](../Decklink/index.md).
## File Source Blocks
### Universal Source Block
A universal source that decodes video and audio files/network streams and provides uncompressed data to the connected blocks.
The block supports MP4, WebM, AVI, TS, MKV, MP3, AAC, M4A, and many other formats. If the FFMPEG redist is available, all decoders available in FFMPEG are also supported.
#### Settings
The `UniversalSourceBlock` is configured through `UniversalSourceSettings`. It's recommended to create settings using the static factory method `await UniversalSourceSettings.CreateAsync(...)`.
Key properties and parameters for `UniversalSourceSettings`:
- **URI/Filename**:
- `UniversalSourceSettings.CreateAsync(string filename, bool renderVideo = true, bool renderAudio = true, bool renderSubtitle = false)`: Creates settings from a local file path.
- `UniversalSourceSettings.CreateAsync(System.Uri uri, bool renderVideo = true, bool renderAudio = true, bool renderSubtitle = false)`: Creates settings from a `System.Uri` (a file URI or a network URI like HTTP or RTSP, though dedicated blocks are often preferred for network streams). On iOS, a `Foundation.NSUrl` is used.
- The `renderVideo`, `renderAudio`, `renderSubtitle` booleans control which streams are processed. The `CreateAsync` method may update these based on actual stream availability in the media file/stream if `ignoreMediaInfoReader` is `false` (default).
- `StartPosition` (`TimeSpan?`): Sets the starting position for playback.
- `StopPosition` (`TimeSpan?`): Sets the stopping position for playback.
- `VideoCustomFrameRate` (`VideoFrameRate?`): If set, video frames will be dropped or duplicated to match this custom frame rate.
- `UseAdvancedEngine` (bool): If `true` (default, except Android where it's `false`), uses an advanced engine with stream selection support.
- `DisableHWDecoders` (bool): If `true` (default `false`, except Android where it's `true`), hardware-accelerated decoders will be disabled, forcing software decoding.
- `MPEGTSProgramNumber` (int): For MPEG-TS streams, specifies the program number to select (default -1, meaning automatic selection or first program).
- `ReadInfoAsync()`: Asynchronously reads media file information (`MediaFileInfo`). This is called internally by `CreateAsync` unless `ignoreMediaInfoReader` is true.
- `GetInfo()`: Gets the cached `MediaFileInfo`.
The `UniversalSourceBlock` itself is then instantiated with these settings: `new UniversalSourceBlock(settings)`.
The `Filename` property on `UniversalSourceBlock` instance (as seen in older examples) is a shortcut that internally creates basic `UniversalSourceSettings`. Using `UniversalSourceSettings.CreateAsync` provides more control.
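A brief sketch combining the recommended factory method with the position properties described above:
```csharp
// Create settings with explicit stream selection and trim points
var settings = await UniversalSourceSettings.CreateAsync(
    "test.mp4", renderVideo: true, renderAudio: true, renderSubtitle: false);
settings.StartPosition = TimeSpan.FromSeconds(10);
settings.StopPosition = TimeSpan.FromSeconds(30);
var fileSource = new UniversalSourceBlock(settings);
```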
#### Block info
Name: UniversalSourceBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Output audio | depends on decoder | one or more |
| Output video | depends on decoder | one or more |
| Output subtitle | depends on decoder | one or more |
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->VideoRendererBlock;
UniversalSourceBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var fileSource = new UniversalSourceBlock();
fileSource.Filename = "test.mp4";
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(fileSource.VideoOutput, videoRenderer.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(fileSource.AudioOutput, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Sample applications
- [Simple Player Demo (WPF)](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK/WPF/CSharp/Simple%20Player%20Demo%20WPF)
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Subtitle Source Block
The Subtitle Source Block loads subtitles from a file and outputs them as a subtitle stream, which can then be overlaid on video or rendered separately.
#### Block info
Name: `SubtitleSourceBlock`.
| Pin direction | Media type | Pins count |
|-----------------|:--------------------:|:-----------:|
| Output subtitle | Subtitle data | 1 |
#### Settings
The `SubtitleSourceBlock` is configured using `SubtitleSourceSettings`. Key properties include:
- `Filename` (string): The path to the subtitle file (e.g., .srt, .ass).
#### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock --> SubtitleOverlayBlock;
SubtitleSourceBlock --> SubtitleOverlayBlock;
SubtitleOverlayBlock --> VideoRendererBlock;
UniversalSourceBlock --> AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create subtitle source settings
var subtitleSettings = new SubtitleSourceSettings("path/to/your/subtitles.srt");
var subtitleSource = new SubtitleSourceBlock(subtitleSettings);
// Example: Overlaying subtitles on a video from UniversalSourceBlock
var fileSource = await UniversalSourceSettings.CreateAsync("path/to/your/video.mp4");
var universalSource = new UniversalSourceBlock(fileSource);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
var audioRenderer = new AudioRendererBlock();
// This is a conceptual overlay. Actual implementation might need a specific subtitle overlay block.
// For simplicity, let's assume a downstream block can consume a subtitle stream,
// or you connect it to a block that renders subtitles on the video.
// Example with a hypothetical SubtitleOverlayBlock:
// var subtitleOverlay = new SubtitleOverlayBlock(); // Assuming such a block exists
// pipeline.Connect(universalSource.VideoOutput, subtitleOverlay.VideoInput);
// pipeline.Connect(subtitleSource.Output, subtitleOverlay.SubtitleInput);
// pipeline.Connect(subtitleOverlay.Output, videoRenderer.Input);
// pipeline.Connect(universalSource.AudioOutput, audioRenderer.Input);
// For a simple player without explicit overlay shown here:
pipeline.Connect(universalSource.VideoOutput, videoRenderer.Input);
pipeline.Connect(universalSource.AudioOutput, audioRenderer.Input);
// How subtitles from subtitleSource.Output are used would depend on the rest of the pipeline design.
// This block primarily provides the subtitle stream.
Console.WriteLine("Subtitle source created. Connect its output to a compatible block like a subtitle overlay or renderer.");
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android (depends on subtitle parsing capabilities).
### Stream Source Block
The Stream Source Block allows reading media data from a `System.IO.Stream`. This is useful for playing media from memory, embedded resources, or custom stream providers without needing a temporary file. The format of the data within the stream must be parsable by the underlying media framework (GStreamer).
#### Block info
Name: `StreamSourceBlock`.
(Pin information is dynamic, similar to `UniversalSourceBlock`, based on stream content. Typically, it would have an output that connects to a demuxer/decoder like `DecodeBinBlock`, or provide decoded audio/video pins if it includes demuxing/decoding capabilities.)
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Output data | Varies (raw stream)| 1 |
_Alternatively, if it decodes:_
| Output video | Depends on stream | 0 or 1 |
| Output audio | Depends on stream | 0 or 1+ |
#### Settings
The `StreamSourceBlock` is typically instantiated directly with a `System.IO.Stream`. The `StreamSourceSettings` class serves as a wrapper to provide this stream.
- `Stream` (`System.IO.Stream`): The input stream containing the media data. The stream must be readable and, if seeking is required by the pipeline, seekable.
#### The sample pipeline
If `StreamSourceBlock` outputs raw data that needs decoding:
```mermaid
graph LR;
StreamSourceBlock -- Stream Data --> DecodeBinBlock;
DecodeBinBlock -- Video Output --> VideoRendererBlock;
DecodeBinBlock -- Audio Output --> AudioRendererBlock;
```
If `StreamSourceBlock` handles decoding internally (less common for a generic stream source):
```mermaid
graph LR;
StreamSourceBlock -- Video Output --> VideoRendererBlock;
StreamSourceBlock -- Audio Output --> AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Example: Load a video file into a MemoryStream
byte[] fileBytes = File.ReadAllBytes("path/to/your/video.mp4");
var memoryStream = new MemoryStream(fileBytes);
// StreamSourceSettings is a container for the stream.
var streamSettings = new StreamSourceSettings(memoryStream);
// The CreateBlock method of StreamSourceSettings would typically return new StreamSourceBlock(streamSettings.Stream)
var streamSource = streamSettings.CreateBlock() as StreamSourceBlock;
// Or, more directly: var streamSource = new StreamSourceBlock(memoryStream);
// Create video and audio renderers
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1
var audioRenderer = new AudioRendererBlock();
// Connect outputs. Commonly, a StreamSourceBlock provides raw data to a DecodeBinBlock.
var decodeBin = new DecodeBinBlock();
pipeline.Connect(streamSource.Output, decodeBin.Input); // Assuming a single 'Output' pin on StreamSourceBlock
pipeline.Connect(decodeBin.VideoOutput, videoRenderer.Input);
pipeline.Connect(decodeBin.AudioOutput, audioRenderer.Input);
await pipeline.StartAsync();
// Important: Ensure the stream remains open and valid for the duration of playback.
// Dispose of the stream when the pipeline is stopped or disposed.
// Consider this in relation to pipeline.DisposeAsync() or similar cleanup.
// memoryStream.Dispose(); // Typically after pipeline.StopAsync() and pipeline.DisposeAsync()
```
#### Remarks
The `StreamSourceBlock` itself will attempt to read from the provided stream. The success of playback depends on the format of the data in the stream and the availability of appropriate demuxers and decoders in the subsequent parts of the pipeline (often managed via `DecodeBinBlock`).
#### Platforms
Windows, macOS, Linux, iOS, Android.
### CDG Source Block
The CDG Source Block is designed to play CD+G (Compact Disc + Graphics) files, commonly used for karaoke. It decodes both the audio track and the low-resolution graphics stream.
#### Block info
Name: CDGSourceBlock.
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Output audio | Uncompressed audio | 1 |
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
CDGSourceBlock -- Audio --> AudioRendererBlock;
CDGSourceBlock -- Video --> VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create CDG source settings
var cdgSettings = new CDGSourceSettings(
"path/to/your/file.cdg", // Path to the CDG graphics file
"path/to/your/file.mp3" // Path to the corresponding audio file (MP3, WAV, etc.)
);
// If audioFilename is null or empty, audio will be ignored.
var cdgSource = new CDGSourceBlock(cdgSettings);
// Create video renderer
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1 is your display control
pipeline.Connect(cdgSource.VideoOutput, videoRenderer.Input);
// Create audio renderer (if audio is to be played)
if (!string.IsNullOrEmpty(cdgSettings.AudioFilename) && cdgSource.AudioOutput != null)
{
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(cdgSource.AudioOutput, audioRenderer.Input);
}
// Start pipeline
await pipeline.StartAsync();
```
#### Remarks
Requires both a `.cdg` file for graphics and a separate audio file (e.g., MP3, WAV) for the music.
#### Platforms
Windows, macOS, Linux, iOS, Android.
## Network Source Blocks
### VNC Source Block
The VNC Source Block allows capturing video from a VNC (Virtual Network Computing) or RFB (Remote Framebuffer) server. This is useful for streaming the desktop of a remote machine.
#### Block info
Name: `VNCSourceBlock`.
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Output video | Uncompressed video | 1 |
#### Settings
The `VNCSourceBlock` is configured using `VNCSourceSettings`. Key properties include:
- `Host` (string): The hostname or IP address of the VNC server.
- `Port` (int): The port number of the VNC server.
- `Password` (string): The password for VNC server authentication, if required.
- `Uri` (string): Alternatively, a full RFB URI (e.g., "rfb://host:port").
- `Width` (int): Desired output width. The actual capture dimensions are ultimately determined by the VNC server.
- `Height` (int): Desired output height.
- `Shared` (bool): Whether to share the desktop with other clients (default `true`).
- `ViewOnly` (bool): If `true`, no input (mouse/keyboard) is sent to the VNC server (default `false`).
- `Incremental` (bool): Whether to use incremental updates (default `true`).
- `UseCopyrect` (bool): Whether to use copyrect encoding (default `false`).
- `RFBVersion` (string): RFB protocol version (default "3.3").
- `OffsetX` (int): X offset for screen scraping.
- `OffsetY` (int): Y offset for screen scraping.
#### The sample pipeline
```mermaid
graph LR;
VNCSourceBlock-->VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Configure VNC source settings
var vncSettings = new VNCSourceSettings
{
Host = "your-vnc-server-ip", // or use Uri
Port = 5900, // Standard VNC port
Password = "your-password", // if any
// Width = 1920, // Optional: desired width
// Height = 1080, // Optional: desired height
};
var vncSource = new VNCSourceBlock(vncSettings);
// Create video renderer
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1 is your display control
// Connect blocks
pipeline.Connect(vncSource.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
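Alternatively, the server can be specified as a single RFB URI via the `Uri` property documented above; a minimal sketch:

```csharp
// Alternative configuration using a single RFB URI instead of Host/Port.
var vncUriSettings = new VNCSourceSettings
{
    Uri = "rfb://your-vnc-server-ip:5900",
    Password = "your-password" // if required
};
var vncUriSource = new VNCSourceBlock(vncUriSettings);
```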
#### Platforms
Windows, macOS, Linux (Depends on underlying GStreamer VNC plugin availability).
### RTSP Source Block
The RTSP source supports connection to IP cameras and other devices supporting the RTSP protocol.
Supported video codecs: H264, HEVC, MJPEG.
Supported audio codecs: AAC, MP3, PCM, G726, G711, and some others if FFMPEG redist is installed.
#### Block info
Name: RTSPSourceBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Output audio | Depends on decoder | 1 or more |
| Output video | Depends on decoder | 1 or more |
| Output subtitle | Depends on decoder | 1 or more |
#### Settings
The `RTSPSourceBlock` is configured using `RTSPSourceSettings`. Key properties include:
- `Uri`: The RTSP URL of the stream.
- `Login`: Username for RTSP authentication, if required.
- `Password`: Password for RTSP authentication, if required.
- `AudioEnabled`: A boolean indicating whether to attempt to process the audio stream.
- `Latency`: Specifies the buffering duration for the incoming stream (default is 1000ms).
- `AllowedProtocols`: Defines the transport protocols to be used for receiving the stream. It's a flags enum `RTSPSourceProtocol` with values:
- `UDP`: Stream data over UDP.
- `UDP_Multicast`: Stream data over UDP multicast.
- `TCP` (Recommended): Stream data over TCP.
- `HTTP`: Stream data tunneled over HTTP.
- `EnableTLS`: Encrypt TCP and HTTP transports with TLS (use a secure URI scheme such as `rtsps://`).
- `DoRTCP`: Enables RTCP (RTP Control Protocol) for stream statistics and control (default is usually true).
- `RTPBlockSize`: Specifies the size of RTP blocks.
- `UDPBufferSize`: Buffer size for UDP transport.
- `CustomVideoDecoder`: Allows specifying a custom GStreamer video decoder element name if the default is not suitable.
- `UseGPUDecoder`: If set to `true`, the SDK will attempt to use a hardware-accelerated GPU decoder if available.
- `CompatibilityMode`: If `true`, the SDK will not try to read camera information before attempting to play, which can be useful for problematic streams.
- `EnableRAWVideoAudioEvents`: If `true`, enables events for raw (undecoded) video and audio sample data.
It's recommended to initialize `RTSPSourceSettings` using the static factory method `RTSPSourceSettings.CreateAsync(Uri uri, string login, string password, bool audioEnabled, bool readInfo = true)`. This method can also handle ONVIF discovery if the URI points to an ONVIF device service. Setting `readInfo` to `false` enables `CompatibilityMode`.
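For cameras that fail during the initial media-information probe, the same factory method can be used with `readInfo: false` to enable compatibility mode. A minimal sketch (the URL is a placeholder):

```csharp
// Compatibility mode: skip reading camera info before playback.
// readInfo: false enables CompatibilityMode, as described above.
var compatSettings = await RTSPSourceSettings.CreateAsync(
    new Uri("rtsp://192.168.1.64:554/stream"), // placeholder camera URL
    "login",
    "pwd",
    audioEnabled: true,
    readInfo: false);
var compatSource = new RTSPSourceBlock(compatSettings);
```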
#### The sample pipeline
```mermaid
graph LR;
RTSPSourceBlock-->VideoRendererBlock;
RTSPSourceBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// It's recommended to use CreateAsync to initialize settings
var rtspSettings = await RTSPSourceSettings.CreateAsync(
new Uri("rtsp://login:pwd@192.168.1.64:554/Streaming/Channels/101?transportmode=unicast&profile=Profile_1"),
"login",
"pwd",
audioEnabled: true);
// Optionally, configure more settings
// rtspSettings.Latency = TimeSpan.FromMilliseconds(500);
// rtspSettings.AllowedProtocols = RTSPSourceProtocol.TCP; // Prefer TCP
var rtspSource = new RTSPSourceBlock(rtspSettings);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(rtspSource.VideoOutput, videoRenderer.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(rtspSource.AudioOutput, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Sample applications
- [RTSP Preview Demo](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK/WPF/CSharp/RTSP%20Preview%20Demo)
- [RTSP MultiViewSync Demo](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK/WPF/CSharp/RTSP%20MultiViewSync%20Demo)
#### Platforms
Windows, macOS, Linux, iOS, Android.
### HTTP Source Block
The HTTP source block allows data to be retrieved using HTTP/HTTPS protocols.
It can be used to read data from MJPEG IP cameras, MP4 network files, or other sources.
#### Block info
Name: HTTPSourceBlock.
| Pin direction | Media type | Pins count |
|---------------|:------------:|:-----------:|
| Output | Data | 1 |
#### The sample pipeline
The sample pipeline reads data from an MJPEG camera and displays it using VideoView.
```mermaid
graph LR;
HTTPSourceBlock-->JPEGDecoderBlock;
JPEGDecoderBlock-->VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var settings = new HTTPSourceSettings(new Uri("http://mjpegcamera:8080"))
{
UserID = "username",
UserPassword = "password"
};
var source = new HTTPSourceBlock(settings);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
var jpegDecoder = new JPEGDecoderBlock();
pipeline.Connect(source.Output, jpegDecoder.Input);
pipeline.Connect(jpegDecoder.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux.
### HTTP MJPEG Source Block
The HTTP MJPEG Source Block is specifically designed to connect to and decode MJPEG (Motion JPEG) video streams over HTTP/HTTPS. This is common for many IP cameras.
#### Block info
Name: HTTPMJPEGSourceBlock.
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
HTTPMJPEGSourceBlock-->VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create settings for the HTTP MJPEG source
var mjpegSettings = await HTTPMJPEGSourceSettings.CreateAsync(
new Uri("http://your-mjpeg-camera-url/stream"), // Replace with your camera's MJPEG stream URL
"username", // Optional: username for camera authentication
"password" // Optional: password for camera authentication
);
if (mjpegSettings == null)
{
Console.WriteLine("Failed to initialize HTTP MJPEG settings.");
return;
}
mjpegSettings.CustomVideoFrameRate = new VideoFrameRate(25); // Optional: Set if camera doesn't report frame rate
mjpegSettings.Latency = TimeSpan.FromMilliseconds(200); // Optional: Adjust latency
var httpMjpegSource = new HTTPMJPEGSourceBlock(mjpegSettings);
// Create video renderer
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1 is your display control
// Connect blocks
pipeline.Connect(httpMjpegSource.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
#### Sample applications
- See the HTTP Source Block section above for a similar MJPEG preview scenario.
#### Platforms
Windows, macOS, Linux.
### NDI Source Block
The NDI source block supports connection to NDI software sources and devices supporting the NDI protocol.
#### Block info
Name: NDISourceBlock.
| Pin direction | Media type | Pins count |
|-----------------|:--------------------:|:-----------:|
| Output audio | Uncompressed | 1 |
| Output video | Uncompressed | 1 |
#### The sample pipeline
```mermaid
graph LR;
NDISourceBlock-->VideoRendererBlock;
NDISourceBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// get NDI source info by enumerating sources
var ndiSources = await DeviceEnumerator.Shared.NDISourcesAsync();
var ndiSourceInfo = ndiSources[0];
// create NDI source settings
var ndiSettings = await NDISourceSettings.CreateAsync(ndiSourceInfo);
var ndiSource = new NDISourceBlock(ndiSettings);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(ndiSource.VideoOutput, videoRenderer.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(ndiSource.AudioOutput, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Sample applications
- [NDI Source Demo](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK/WPF/CSharp/NDI%20Source%20Demo)
#### Platforms
Windows, macOS, Linux.
### GenICam Source Block
The GenICam source supports connecting to GigE and USB3 Vision cameras that implement the GenICam protocol.
#### Block info
Name: GenICamSourceBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Output video | various | one or more |
#### The sample pipeline
```mermaid
graph LR;
GenICamSourceBlock-->VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// cbCamera.Text holds the camera name selected in the UI
var sourceSettings = new GenICamSourceSettings(cbCamera.Text, new VisioForge.Core.Types.Rect(0, 0, 512, 512), 15, GenICamPixelFormat.Mono8);
var source = new GenICamSourceBlock(sourceSettings);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(source.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
#### Sample applications
- [GenICam Source Demo](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK/WPF/CSharp/GenICam%20Source%20Demo)
#### Prerequisites
##### macOS
Install the `Aravis` package using Homebrew:
```bash
brew install aravis
```
##### Linux
Install the `Aravis` package using the package manager:
```bash
sudo apt-get install libaravis-0.8-dev
```
##### Windows
Install the `VisioForge.CrossPlatform.GenICam.Windows.x64` package to your project using NuGet.
#### Platforms
Windows, macOS, Linux
### SRT Source Block (with decoding)
The `Secure Reliable Transport (SRT)` is an open-source video streaming protocol designed for secure and low-latency delivery over unpredictable networks, like the public internet. Developed by Haivision, SRT optimizes streaming performance by dynamically adapting to varying bandwidths and minimizing the effects of packet loss. It incorporates AES encryption for secure content transmission. Primarily used in broadcasting and online streaming, SRT is crucial for delivering high-quality video feeds in real-time applications, enhancing viewer experiences even in challenging network conditions. It supports point-to-point and multicast streaming, making it versatile for diverse setups.
The SRT source block provides decoded video and audio streams from an SRT source.
#### Block info
Name: SRTSourceBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Output video | Uncompressed | 0+ |
| Output audio | Uncompressed | 0+ |
#### Settings
The `SRTSourceBlock` is configured using `SRTSourceSettings`. This class provides comprehensive options for SRT connections:
- `Uri` (string): The SRT URI (e.g., "srt://127.0.0.1:8888" or "srt://example.com:9000?mode=listener"). Default is "srt://127.0.0.1:8888".
- `Mode` (`SRTConnectionMode` enum): Specifies the SRT connection mode. Default is `Caller`. See `SRTConnectionMode` enum details below.
- `Passphrase` (string): The password for encrypted transmission.
- `PbKeyLen` (`SRTKeyLength` enum): The crypto key length for AES encryption. Default is `NoKey`. See `SRTKeyLength` enum details below.
- `Latency` (`TimeSpan`): The maximum accepted transmission latency (receiver side for caller/listener, or for both in rendezvous). Default is 125 milliseconds.
- `StreamId` (string): The stream ID for SRT access control.
- `LocalAddress` (string): The local address to bind to when in `Listener` or `Rendezvous` mode. Default `null` (any).
- `LocalPort` (uint): The local port to bind to when in `Listener` or `Rendezvous` mode. Default 7001.
- `Authentication` (bool): Whether to authenticate the connection. Default `true`.
- `AutoReconnect` (bool): Whether the source should attempt to reconnect if the connection fails. Default `true`.
- `KeepListening` (bool): If `false` (default), the element will signal end-of-stream when the remote client disconnects (in listener mode). If `true`, it keeps waiting for reconnection.
- `PollTimeout` (`TimeSpan`): Polling timeout used when an SRT poll is started. Default 1000 milliseconds.
- `WaitForConnection` (bool): If `true` (default), blocks the stream until a client connects (in listener mode).
The `SRTSourceSettings` can be initialized using `await SRTSourceSettings.CreateAsync(string uri, bool ignoreMediaInfoReader = false)`. Setting `ignoreMediaInfoReader` to `true` can be useful if media info reading fails for a live stream.
##### `SRTConnectionMode` Enum
Defines the operational mode for an SRT connection:
- `None` (0): No connection mode specified (should not typically be used directly).
- `Caller` (1): The source initiates the connection to a listener.
- `Listener` (2): The source waits for an incoming connection from a caller.
- `Rendezvous` (3): Both ends initiate connection to each other simultaneously, useful for traversing firewalls.
##### `SRTKeyLength` Enum
Defines the key length for SRT's AES encryption:
- `NoKey` (0) / `Length0` (0): No encryption is used.
- `Length16` (16): 16-byte (128-bit) AES encryption key.
- `Length24` (24): 24-byte (192-bit) AES encryption key.
- `Length32` (32): 32-byte (256-bit) AES encryption key.
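Putting these settings together, here is a hedged sketch of a listener-mode source with 256-bit AES encryption; exact property combinations may vary by SDK version, and the passphrase and stream ID are placeholders:

```csharp
// Hedged sketch: wait for an incoming SRT caller on local port 7001,
// requiring AES-256 encryption (properties as documented above).
var srtListenerSettings = new SRTSourceSettings
{
    Mode = SRTConnectionMode.Listener,
    LocalAddress = null,                 // bind to any local address
    LocalPort = 7001,
    Passphrase = "your-secret-passphrase",
    PbKeyLen = SRTKeyLength.Length32,    // 256-bit AES key
    Latency = TimeSpan.FromMilliseconds(250),
    StreamId = "camera-01"
};
var srtListenerSource = new SRTSourceBlock(srtListenerSettings);
```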
#### The sample pipeline
```mermaid
graph LR;
SRTSourceBlock-->VideoRendererBlock;
SRTSourceBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// edURL.Text holds the SRT URL, e.g. "srt://127.0.0.1:8888"
var source = new SRTSourceBlock(new SRTSourceSettings() { Uri = edURL.Text });
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(source.VideoOutput, videoRenderer.Input);
pipeline.Connect(source.AudioOutput, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Sample applications
- [SRT Source Demo](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK/WPF/CSharp/SRT%20Source%20Demo)
#### Platforms
Windows, macOS, Linux, iOS, Android.
### SRT RAW Source Block
The `Secure Reliable Transport (SRT)` is a streaming protocol that optimizes video data delivery over unpredictable networks, like the Internet. It is open-source and designed to handle high-performance video and audio streaming. SRT provides security through end-to-end encryption, reliability by recovering lost packets, and low latency, which is suitable for live broadcasts. It adapts to varying network conditions by dynamically managing bandwidth, ensuring high-quality streams even under suboptimal conditions. Widely used in broadcasting and streaming applications, SRT supports interoperability and is ideal for remote production and content distribution.
The SRT source supports connection to SRT sources and provides a data stream. You can connect this block to `DecodeBinBlock` to decode the stream.
#### Block info
Name: SRTRAWSourceBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Output data | Any | 1 |
#### Settings
The `SRTRAWSourceBlock` is configured using `SRTSourceSettings`. Refer to the detailed description of `SRTSourceSettings` and its related enums (`SRTConnectionMode`, `SRTKeyLength`) under the `SRT Source Block (with decoding)` section for all available properties and their explanations.
#### The sample pipeline
```mermaid
graph LR;
SRTRAWSourceBlock-->DecodeBinBlock;
DecodeBinBlock-->VideoRendererBlock;
DecodeBinBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// edURL.Text holds the SRT URL, e.g. "srt://127.0.0.1:8888"
var source = new SRTRAWSourceBlock(new SRTSourceSettings() { Uri = edURL.Text });
var decodeBin = new DecodeBinBlock();
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(source.Output, decodeBin.Input);
pipeline.Connect(decodeBin.VideoOutput, videoRenderer.Input);
pipeline.Connect(decodeBin.AudioOutput, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
## Other Source Blocks
### Screen Source Block
The Screen Source Block records video from the screen. You can select the display (if more than one), the part of the screen to be recorded, and optionally capture the mouse cursor.
#### Settings
The `ScreenSourceBlock` uses platform-specific settings classes. The choice of settings class determines the underlying screen capture technology. The `ScreenCaptureSourceType` enum indicates the available technologies:
##### Windows
- `ScreenCaptureDX9SourceSettings` - Use `DirectX 9` for screen recording. (`ScreenCaptureSourceType.DX9`)
- `ScreenCaptureD3D11SourceSettings` - Use `Direct3D 11` Desktop Duplication for screen recording. Allows specific window capture. (`ScreenCaptureSourceType.D3D11DesktopDuplication`)
- `ScreenCaptureGDISourceSettings` - Use `GDI` for screen recording. (`ScreenCaptureSourceType.GDI`)
##### macOS
`ScreenCaptureMacOSSourceSettings` - Use `AVFoundation` for screen recording. (`ScreenCaptureSourceType.AVFoundation`)
##### Linux
`ScreenCaptureXDisplaySourceSettings` - Use `X11` (XDisplay) for screen recording. (`ScreenCaptureSourceType.XDisplay`)
##### iOS
`IOSScreenSourceSettings` - Use `AVFoundation` for current window/app recording. (`ScreenCaptureSourceType.IOSScreen`)
#### Block info
Name: ScreenSourceBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Output video | uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
ScreenSourceBlock-->H264EncoderBlock;
H264EncoderBlock-->MP4SinkBlock;
```
#### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// create source settings
var screenSourceSettings = new ScreenCaptureDX9SourceSettings() { FrameRate = new VideoFrameRate(15) };
// create source block
var screenSourceBlock = new ScreenSourceBlock(screenSourceSettings);
// create video encoder block and connect it to the source block
var h264EncoderBlock = new H264EncoderBlock(new OpenH264EncoderSettings());
pipeline.Connect(screenSourceBlock.Output, h264EncoderBlock.Input);
// create MP4 sink block and connect it to the encoder block
var mp4SinkBlock = new MP4SinkBlock(new MP4SinkSettings(@"output.mp4"));
pipeline.Connect(h264EncoderBlock.Output, mp4SinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
// run pipeline
await pipeline.StartAsync();
```
#### [Windows] Window capture
You can capture a specific window by using the `ScreenCaptureD3D11SourceSettings` class.
```csharp
// create Direct3D11 source
var source = new ScreenCaptureD3D11SourceSettings();
// set frame rate
source.FrameRate = new VideoFrameRate(30);
// get handle of the window
var wih = new System.Windows.Interop.WindowInteropHelper(this);
source.WindowHandle = wih.Handle;
// create source block using the configured Direct3D11 settings
var screenSourceBlock = new ScreenSourceBlock(source);
// other code is the same as above
```
#### Sample applications
- [Screen Capture Demo (WPF)](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK/WPF/CSharp/Screen%20Capture)
- [Screen Capture Demo (MAUI)](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK/MAUI/ScreenCaptureMB)
- [Screen Capture Demo (iOS)](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK/iOS/ScreenCapture)
#### Platforms
Windows, macOS, Linux, iOS.
### Virtual Video Source Block
VirtualVideoSourceBlock is used to produce test video data in a wide variety of video formats. The type of test data is controlled by the settings.
#### Settings
The `VirtualVideoSourceBlock` is configured using `VirtualVideoSourceSettings`. Key properties:
- `Pattern` (`VirtualVideoSourcePattern` enum): Specifies the type of test pattern to generate. See `VirtualVideoSourcePattern` enum below for available patterns. Default is `SMPTE`.
- `Width` (int): Width of the output video (default 1280).
- `Height` (int): Height of the output video (default 720).
- `FrameRate` (`VideoFrameRate`): Frame rate of the output video (default 30fps).
- `Format` (`VideoFormatX` enum): Pixel format of the video (default `RGB`).
- `ForegroundColor` (`SKColor`): For patterns that use a foreground color (e.g., `SolidColor`), this property defines it (default `SKColors.White`).
Constructors:
- `VirtualVideoSourceSettings()`: Default constructor.
- `VirtualVideoSourceSettings(int width, int height, VideoFrameRate frameRate)`: Initializes with specified dimensions and frame rate.
##### `VirtualVideoSourcePattern` Enum
Defines the test pattern generated by `VirtualVideoSourceBlock`:
- `SMPTE` (0): SMPTE 100% color bars.
- `Snow` (1): Random (television snow).
- `Black` (2): 100% Black.
- `White` (3): 100% White.
- `Red` (4), `Green` (5), `Blue` (6): Solid colors.
- `Checkers1` (7) to `Checkers8` (10): Checkerboard patterns with 1, 2, 4, or 8 pixel squares.
- `Circular` (11): Circular pattern.
- `Blink` (12): Blinking pattern.
- `SMPTE75` (13): SMPTE 75% color bars.
- `ZonePlate` (14): Zone plate.
- `Gamut` (15): Gamut checkers.
- `ChromaZonePlate` (16): Chroma zone plate.
- `SolidColor` (17): A solid color, defined by `ForegroundColor`.
- `Ball` (18): Moving ball.
- `SMPTE100` (19): Alias for SMPTE 100% color bars.
- `Bar` (20): Bar pattern.
- `Pinwheel` (21): Pinwheel pattern.
- `Spokes` (22): Spokes pattern.
- `Gradient` (23): Gradient pattern.
- `Colors` (24): Various colors pattern.
- `SMPTERP219` (25): SMPTE test pattern, RP 219 conformant.
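As an illustration of the pattern-related settings, here is a hedged sketch that configures a moving-ball test video and a solid-color variant (`SKColors` comes from SkiaSharp):

```csharp
// Hedged sketch: a 640x360, 30 fps moving-ball test pattern.
var ballSettings = new VirtualVideoSourceSettings(640, 360, new VideoFrameRate(30))
{
    Pattern = VirtualVideoSourcePattern.Ball
};
var ballSource = new VirtualVideoSourceBlock(ballSettings);

// A solid-color frame using ForegroundColor (SkiaSharp SKColor).
var solidSettings = new VirtualVideoSourceSettings
{
    Pattern = VirtualVideoSourcePattern.SolidColor,
    ForegroundColor = SKColors.OrangeRed
};
var solidSource = new VirtualVideoSourceBlock(solidSettings);
```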
#### Block info
Name: VirtualVideoSourceBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Output video | uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
VirtualVideoSourceBlock-->VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var audioSourceBlock = new VirtualAudioSourceBlock(new VirtualAudioSourceSettings());
var videoSourceBlock = new VirtualVideoSourceBlock(new VirtualVideoSourceSettings());
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(videoSourceBlock.Output, videoRenderer.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(audioSourceBlock.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Virtual Audio Source Block
VirtualAudioSourceBlock is used to produce test audio data in a wide variety of audio formats. The type of test data is controlled by the settings.
#### Settings
The `VirtualAudioSourceBlock` is configured using `VirtualAudioSourceSettings`. Key properties:
- `Wave` (`VirtualAudioSourceSettingsWave` enum): Specifies the type of audio waveform to generate. See `VirtualAudioSourceSettingsWave` enum below. Default `Sine`.
- `Format` (`AudioFormatX` enum): Audio sample format (default `S16LE`).
- `SampleRate` (int): Sample rate in Hz (default 48000).
- `Channels` (int): Number of audio channels (default 2).
- `Volume` (double): Volume of the test signal (0.0 to 1.0, default 0.8).
- `Frequency` (double): Frequency of the test signal in Hz (e.g., for Sine wave, default 440).
- `IsLive` (bool): Indicates if the source is live (default `true`).
- `ApplyTickRamp` (bool): Apply ramp to tick samples (default `false`).
- `CanActivatePull` (bool): Can activate in pull mode (default `false`).
- `CanActivatePush` (bool): Can activate in push mode (default `true`).
- `MarkerTickPeriod` (uint): Make every Nth tick a marker tick (for `Ticks` wave, 0 = no marker, default 0).
- `MarkerTickVolume` (double): Volume of marker ticks (default 1.0).
- `SamplesPerBuffer` (int): Number of samples in each outgoing buffer (default 1024).
- `SinePeriodsPerTick` (uint): Number of sine wave periods in one tick (for `Ticks` wave, default 10).
- `TickInterval` (`TimeSpan`): Distance between start of current and start of next tick (default 1 second).
- `TimestampOffset` (`TimeSpan`): An offset added to timestamps (default `TimeSpan.Zero`).
Constructor:
- `VirtualAudioSourceSettings(VirtualAudioSourceSettingsWave wave = VirtualAudioSourceSettingsWave.Ticks, int sampleRate = 48000, int channels = 2, AudioFormatX format = AudioFormatX.S16LE)`
##### `VirtualAudioSourceSettingsWave` Enum
Defines the waveform for `VirtualAudioSourceBlock`:
- `Sine` (0): Sine wave.
- `Square` (1): Square wave.
- `Saw` (2): Sawtooth wave.
- `Triangle` (3): Triangle wave.
- `Silence` (4): Silence.
- `WhiteNoise` (5): White uniform noise.
- `PinkNoise` (6): Pink noise.
- `SineTable` (7): Sine table.
- `Ticks` (8): Periodic Ticks.
- `GaussianNoise` (9): White Gaussian noise.
- `RedNoise` (10): Red (Brownian) noise.
- `BlueNoise` (11): Blue noise.
- `VioletNoise` (12): Violet noise.
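For example, a hedged sketch of a 1 kHz stereo test tone at half volume, using the constructor and properties listed above:

```csharp
// Hedged sketch: 1 kHz sine tone, 48 kHz stereo, S16LE, at 50% volume.
var toneSettings = new VirtualAudioSourceSettings(
    VirtualAudioSourceSettingsWave.Sine,
    sampleRate: 48000,
    channels: 2)
{
    Frequency = 1000.0,
    Volume = 0.5
};
var toneSource = new VirtualAudioSourceBlock(toneSettings);
```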
#### Block info
Name: VirtualAudioSourceBlock.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Output audio | uncompressed audio | 1 |
#### The sample pipeline
```mermaid
graph LR;
VirtualAudioSourceBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var audioSourceBlock = new VirtualAudioSourceBlock(new VirtualAudioSourceSettings());
var videoSourceBlock = new VirtualVideoSourceBlock(new VirtualVideoSourceSettings());
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(videoSourceBlock.Output, videoRenderer.Input);
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(audioSourceBlock.Output, audioRenderer.Input);
await pipeline.StartAsync();
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Demuxer Source Block
The Demuxer Source Block is used to demultiplex local media files into their constituent elementary streams (video, audio, subtitles). It allows for selective rendering of these streams.
#### Block info
Name: DemuxerSourceBlock.
| Pin direction | Media type | Pins count |
|-----------------|:--------------------:|:-----------:|
| Output video | Depends on file | 0 or 1 |
| Output audio | Depends on file | 0 or 1+ |
| Output subtitle | Depends on file | 0 or 1+ |
#### The sample pipeline
```mermaid
graph LR;
DemuxerSourceBlock -- Video Stream --> VideoRendererBlock;
DemuxerSourceBlock -- Audio Stream --> AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create settings, ensure to await CreateAsync
var demuxerSettings = await DemuxerSourceSettings.CreateAsync(
"path/to/your/video.mp4",
renderVideo: true,
renderAudio: true,
renderSubtitle: false);
if (demuxerSettings == null)
{
Console.WriteLine("Failed to initialize demuxer settings. Ensure the file exists and is readable.");
return;
}
var demuxerSource = new DemuxerSourceBlock(demuxerSettings);
// Setup video rendering if video is available and rendered
if (demuxerSettings.RenderVideo && demuxerSource.VideoOutput != null)
{
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1 is your display control
pipeline.Connect(demuxerSource.VideoOutput, videoRenderer.Input);
}
// Setup audio rendering if audio is available and rendered
if (demuxerSettings.RenderAudio && demuxerSource.AudioOutput != null)
{
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(demuxerSource.AudioOutput, audioRenderer.Input);
}
// Start pipeline
await pipeline.StartAsync();
```
#### Sample applications
- No dedicated sample application; the block can be used in player-like scenarios.
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Image Video Source Block
The Image Video Source Block generates a video stream from a static image file (e.g., JPG, PNG). It repeatedly outputs the image as video frames according to the specified frame rate.
#### Block info
Name: ImageVideoSourceBlock.
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
ImageVideoSourceBlock-->VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create image video source settings
var imageSourceSettings = new ImageVideoSourceSettings("path/to/your/image.jpg"); // Replace with your image path
imageSourceSettings.FrameRate = new VideoFrameRate(10); // Output 10 frames per second
imageSourceSettings.IsLive = true; // Treat as a live source (optional)
// imageSourceSettings.NumBuffers = 100; // Optional: output only 100 frames then stop
var imageSource = new ImageVideoSourceBlock(imageSourceSettings);
// Create video renderer
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1 is your display control
// Connect blocks
pipeline.Connect(imageSource.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
#### Remarks
This block uses SkiaSharp for image decoding, so ensure necessary dependencies are met if not using a standard VisioForge package that includes it.
#### Platforms
Windows, macOS, Linux, iOS, Android.
## Push Source Blocks
Push Source blocks allow you to feed media data (video, audio, JPEG images, or generic data) directly into the Media Blocks pipeline from your application code. This is useful when your media originates from a custom source, such as a proprietary capture device, a network stream not supported by built-in blocks, or procedurally generated content.
The behavior of push sources is generally controlled by common settings available through the `IPushSourceSettings` interface, implemented by specific push source settings classes:
- `IsLive` (bool): Indicates if the source is live. Defaults vary by type (e.g., `true` for audio/video).
- `DoTimestamp` (bool): If `true`, the block will attempt to generate timestamps for the pushed data.
- `StreamType` (`PushSourceStreamType` enum: `Stream` or `SeekableStream`): Defines the stream characteristics.
- `PushFormat` (`PushSourceFormat` enum: `Bytes`, `Time`, `Default`, `Automatic`): Controls how data is pushed (e.g., based on byte count or time).
- `BlockPushData` (bool): If `true`, the push operation will block until the data is consumed by the pipeline.
The specific type of push source is determined by the `PushSourceType` enum: `Video`, `Audio`, `Data`, `JPEG`.
### Push Video Source Block
Allows pushing raw video frames into the pipeline.
#### Block info
Name: `PushSourceBlock` (configured for video).
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Output video | Uncompressed video | 1 |
#### Settings
Configured using `PushVideoSourceSettings`:
- `Width` (int): Width of the video frames.
- `Height` (int): Height of the video frames.
- `FrameRate` (`VideoFrameRate`): Frame rate of the video.
- `Format` (`VideoFormatX` enum): Pixel format of the video frames (e.g., `RGB`, `NV12`, `I420`).
- Inherits common push settings like `IsLive` (defaults to `true`), `DoTimestamp`, `StreamType`, `PushFormat`, `BlockPushData`.
Constructor: `PushVideoSourceSettings(int width, int height, VideoFrameRate frameRate, VideoFormatX format = VideoFormatX.RGB)`
#### The sample pipeline
```mermaid
graph LR;
PushVideoSourceBlock-->VideoEncoderBlock-->MP4SinkBlock;
PushVideoSourceBlock-->VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Configure push video source
var videoPushSettings = new PushVideoSourceSettings(
width: 640,
height: 480,
frameRate: new VideoFrameRate(30),
format: VideoFormatX.RGB);
// videoPushSettings.IsLive = true; // Default
var videoPushSource = new PushSourceBlock(videoPushSettings);
// Example: Render the pushed video
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(videoPushSource.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
// In a separate thread or task, push video frames:
// byte[] frameData = ... ; // Your raw RGB frame data (640 * 480 * 3 bytes)
// videoPushSource.PushFrame(frameData);
// Call PushFrame repeatedly for each new video frame.
```
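To make the push step concrete, here is a minimal sketch of a frame-generation loop. It assumes `PushSourceBlock` exposes a `PushFrame(byte[])` method, as the comments above indicate; the frame content is a placeholder:

```csharp
// Hedged sketch: push solid mid-gray RGB frames for roughly 5 seconds.
var frame = new byte[640 * 480 * 3]; // one RGB frame (width * height * 3 bytes)
for (int i = 0; i < frame.Length; i++)
{
    frame[i] = 128; // mid-gray
}

for (int n = 0; n < 150; n++) // 150 frames at ~30 fps
{
    videoPushSource.PushFrame(frame);
    await Task.Delay(1000 / 30); // pace pushes to the configured frame rate
}
```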
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Push Audio Source Block
Allows pushing raw audio samples into the pipeline.
#### Block info
Name: `PushSourceBlock` (configured for audio).
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Output audio | Uncompressed audio | 1 |
#### Settings
Configured using `PushAudioSourceSettings`:
- `SampleRate` (int): Sample rate of the audio (e.g., 44100, 48000).
- `Channels` (int): Number of audio channels (e.g., 1 for mono, 2 for stereo).
- `Format` (`AudioFormatX` enum): Format of the audio samples (e.g., `S16LE` for 16-bit signed little-endian PCM).
- Inherits common push settings like `IsLive` (defaults to `true`), `DoTimestamp`, `StreamType`, `PushFormat`, `BlockPushData`.
Constructor: `PushAudioSourceSettings(bool isLive = true, int sampleRate = 48000, int channels = 2, AudioFormatX format = AudioFormatX.S16LE)`
#### The sample pipeline
```mermaid
graph LR;
PushAudioSourceBlock-->AudioEncoderBlock-->MP4SinkBlock;
PushAudioSourceBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Configure push audio source
var audioPushSettings = new PushAudioSourceSettings(
isLive: true,
sampleRate: 44100,
channels: 2,
format: AudioFormatX.S16LE);
var audioPushSource = new PushSourceBlock(audioPushSettings);
// Example: Render the pushed audio
var audioRenderer = new AudioRendererBlock();
pipeline.Connect(audioPushSource.Output, audioRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
// In a separate thread or task, push audio samples:
// byte[] audioData = ... ; // Your raw PCM S16LE audio data
// audioPushSource.PushFrame(audioData);
// Call PushFrame repeatedly for new audio data.
```
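The sketch below generates S16LE PCM sine-wave buffers and feeds them to the push source; as above, `PushFrame(byte[])` is assumed from the comments in the sample:

```csharp
// Hedged sketch: push ~5 seconds of a 440 Hz stereo sine tone (S16LE PCM)
// in 100 ms buffers.
const int sampleRate = 44100;
const int channels = 2;
const int samplesPerBuffer = sampleRate / 10;            // 100 ms of samples
var buffer = new byte[samplesPerBuffer * channels * 2];  // 2 bytes per S16LE sample
double phase = 0;

for (int n = 0; n < 50; n++)
{
    for (int i = 0; i < samplesPerBuffer; i++)
    {
        short sample = (short)(Math.Sin(phase) * short.MaxValue * 0.5);
        phase += 2 * Math.PI * 440.0 / sampleRate;
        for (int ch = 0; ch < channels; ch++)
        {
            int offset = (i * channels + ch) * 2;
            buffer[offset] = (byte)(sample & 0xFF);            // low byte (little-endian)
            buffer[offset + 1] = (byte)((sample >> 8) & 0xFF); // high byte
        }
    }

    audioPushSource.PushFrame(buffer);
    await Task.Delay(100); // pace pushes roughly in real time
}
```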
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Push Data Source Block
Allows pushing generic byte data into the pipeline. The interpretation of this data depends on the `Caps` (capabilities) specified.
#### Block info
Name: `PushSourceBlock` (configured for data).
| Pin direction | Media type | Pins count |
|---------------|:----------:|:----------:|
| Output data | Custom | 1 |
#### Settings
Configured using `PushDataSourceSettings`:
- `Caps` (`Gst.Caps`): GStreamer capabilities string describing the data format (e.g., "video/x-h264, stream-format=byte-stream"). This is crucial for downstream blocks to understand the data.
- `PadMediaType` (`MediaBlockPadMediaType` enum): Specifies the type of the output pad (e.g., `Video`, `Audio`, `Data`, `Auto`).
- Inherits common push settings like `IsLive`, `DoTimestamp`, `StreamType`, `PushFormat`, `BlockPushData`.
#### The sample pipeline
```mermaid
graph LR;
PushDataSourceBlock-->ParserOrDecoder-->Renderer;
```
Example: Pushing H.264 Annex B byte stream
```mermaid
graph LR;
PushDataSourceBlock-->H264ParserBlock-->H264DecoderBlock-->VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Configure push data source for H.264 byte stream
var dataPushSettings = new PushDataSourceSettings();
dataPushSettings.Caps = new Gst.Caps("video/x-h264, stream-format=(string)byte-stream");
dataPushSettings.PadMediaType = MediaBlockPadMediaType.Video;
// dataPushSettings.IsLive = true; // Set if live
var dataPushSource = new PushSourceBlock(dataPushSettings);
// Example: Decode and render H.264 stream
var h264Parser = new H264ParserBlock();
var h264Decoder = new H264DecoderBlock(); // Or OpenH264DecoderBlock, etc.
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(dataPushSource.Output, h264Parser.Input);
pipeline.Connect(h264Parser.Output, h264Decoder.Input);
pipeline.Connect(h264Decoder.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
// In a separate thread or task, push H.264 NALUs:
// byte[] naluData = ... ; // Your H.264 NALU data
// dataPushSource.PushFrame(naluData);
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
### Push JPEG Source Block
Allows pushing individual JPEG images, which are then output as a video stream.
#### Block info
Name: `PushSourceBlock` (configured for JPEG).
| Pin direction | Media type | Pins count |
|---------------|:--------------------:|:----------:|
| Output video | Uncompressed video | 1 |
#### Settings
Configured using `PushJPEGSourceSettings`:
- `Width` (int): Width of the decoded JPEG images.
- `Height` (int): Height of the decoded JPEG images.
- `FrameRate` (`VideoFrameRate`): The frame rate at which the JPEG images will be presented as a video stream.
- Inherits common push settings like `IsLive` (defaults to `true`), `DoTimestamp`, `StreamType`, `PushFormat`, `BlockPushData`.
Constructor: `PushJPEGSourceSettings(int width, int height, VideoFrameRate frameRate)`
#### The sample pipeline
```mermaid
graph LR;
PushJPEGSourceBlock-->VideoRendererBlock;
PushJPEGSourceBlock-->VideoEncoderBlock-->MP4SinkBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Configure push JPEG source
var jpegPushSettings = new PushJPEGSourceSettings(
width: 1280,
height: 720,
frameRate: new VideoFrameRate(10)); // Present JPEGs as a 10 FPS video
var jpegPushSource = new PushSourceBlock(jpegPushSettings);
// Example: Render the video stream from JPEGs
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(jpegPushSource.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
// In a separate thread or task, push JPEG image data:
// byte[] jpegImageData = File.ReadAllBytes("image.jpg");
// jpegPushSource.PushFrame(jpegImageData);
// Call PushFrame for each new JPEG image.
```
#### Platforms
Windows, macOS, Linux, iOS, Android.
## Apple Platform Source Blocks
### iOS Video Source Block
iOSVideoSourceBlock provides video capture from the device camera on iOS platforms. It is available only on iOS (not macOS Catalyst).
#### Block info
Name: IOSVideoSourceBlock.
| Pin direction | Media type | Pins count |
|---------------|:------------------:|:----------:|
| Output video | Uncompressed video | 1 |
#### Enumerate available devices
Use `DeviceEnumerator.Shared.VideoSourcesAsync()` to get a list of available video devices on iOS. Each device is represented by a `VideoCaptureDeviceInfo` object.
#### The sample pipeline
```mermaid
graph LR;
IOSVideoSourceBlock-->VideoRendererBlock;
```
#### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// select the first available video device
var device = (await DeviceEnumerator.Shared.VideoSourcesAsync())[0];
VideoCaptureDeviceSourceSettings videoSourceSettings = null;
if (device != null)
{
var formatItem = device.VideoFormats[0];
if (formatItem != null)
{
videoSourceSettings = new VideoCaptureDeviceSourceSettings(device)
{
Format = formatItem.ToFormat()
};
videoSourceSettings.Format.FrameRate = formatItem.FrameRateList[0];
}
}
// create iOS video source block
var videoSource = new IOSVideoSourceBlock(videoSourceSettings);
// create video renderer block
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
// connect blocks
pipeline.Connect(videoSource.Output, videoRenderer.Input);
// start pipeline
await pipeline.StartAsync();
```
#### Platforms
iOS (not available on macOS Catalyst)
---
### macOS Audio Source Block
OSXAudioSourceBlock provides audio capture from input devices on macOS platforms.
#### Block info
Name: OSXAudioSourceBlock.
| Pin direction | Media type | Pins count |
|---------------|:------------------:|:----------:|
| Output audio | Uncompressed audio | 1 |
#### Enumerate available devices
Use `DeviceEnumerator.Shared.AudioSourcesAsync()` to get a list of available audio devices on macOS. Each device is represented by an `AudioCaptureDeviceInfo` object.
#### The sample pipeline
```mermaid
graph LR;
OSXAudioSourceBlock-->AudioRendererBlock;
```
#### Sample code
```csharp
// create pipeline
var pipeline = new MediaBlocksPipeline();
// select the first available audio device
var devices = await DeviceEnumerator.Shared.AudioSourcesAsync();
var device = devices.Length > 0 ? devices[0] : null;
OSXAudioSourceSettings audioSourceSettings = null;
if (device != null)
{
var formatItem = device.Formats[0];
if (formatItem != null)
{
audioSourceSettings = new OSXAudioSourceSettings(device.DeviceID, formatItem);
}
}
// create macOS audio source block
var audioSource = new OSXAudioSourceBlock(audioSourceSettings);
// create audio renderer block
var audioRenderer = new AudioRendererBlock();
// connect blocks
pipeline.Connect(audioSource.Output, audioRenderer.Input);
// start pipeline
await pipeline.StartAsync();
```
#### Platforms
macOS (not available on iOS)
---END OF PAGE---
# Local File: .\dotnet\mediablocks\Special\index.md
---
title: Special .Net Media Blocks & Customization
description: Discover special media blocks like Null Renderer, Tee, and Super MediaBlock in the VisioForge Media Blocks SDK for .Net. Learn to customize media pipelines with advanced settings for encryption, custom GStreamer elements, and input source switching.
sidebar_label: Special Blocks
---
# Special blocks
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## Introduction
Special blocks are blocks that do not fit into any other category.
## Null Renderer
The null renderer block sends the data to null. This block may be required if your block has outputs you do not want to use.
### Block info
Name: NullRendererBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Any | 1
### The sample pipeline
The sample pipeline is shown below. It reads a file and sends the video data to the video samples grabber, where you can grab each video frame after decoding. The Null renderer block is used to end the pipeline.
```mermaid
graph LR;
UniversalSourceBlock-->VideoSampleGrabberBlock;
VideoSampleGrabberBlock-->NullRendererBlock;
```
### Sample code
```csharp
private async Task StartAsync()
{
// create the pipeline
var pipeline = new MediaBlocksPipeline();
// create universal source block
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
// create video sample grabber block and add the event handler
var sampleGrabber = new VideoSampleGrabberBlock();
sampleGrabber.OnVideoFrameBuffer += sampleGrabber_OnVideoFrameBuffer;
// create null renderer block
var nullRenderer = new NullRendererBlock();
// connect blocks
pipeline.Connect(fileSource.VideoOutput, sampleGrabber.Input);
pipeline.Connect(sampleGrabber.Output, nullRenderer.Input);
// start the pipeline
await pipeline.StartAsync();
}
private void sampleGrabber_OnVideoFrameBuffer(object sender, VideoFrameXBufferEventArgs e)
{
// received new video frame
}
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## Tee
The tee block splits the video or audio data stream into multiple streams that completely copy the original stream.
### Block info
Name: TeeBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Any | 1
Output | Same as input | 2 or more
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->TeeBlock;
TeeBlock-->VideoRendererBlock;
TeeBlock-->H264EncoderBlock;
H264EncoderBlock-->MP4SinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var videoTee = new TeeBlock(2);
var h264Encoder = new H264EncoderBlock(new OpenH264EncoderSettings());
var mp4Muxer = new MP4SinkBlock(new MP4SinkSettings(@"output.mp4"));
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(fileSource.VideoOutput, videoTee.Input);
pipeline.Connect(videoTee.Outputs[0], videoRenderer.Input);
pipeline.Connect(videoTee.Outputs[1], h264Encoder.Input);
pipeline.Connect(h264Encoder.Output, mp4Muxer.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
#### Sample applications
- [Simple Capture Demo](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK/WPF/CSharp/Simple%20Capture%20Demo)
### Platforms
Windows, macOS, Linux, iOS, Android.
## Super MediaBlock
The Super MediaBlock allows you to combine multiple blocks into a single block.
### Block info
Name: SuperMediaBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Any | 1
Output | Any | 1
### The sample pipeline
```mermaid
graph LR;
VirtualVideoSourceBlock-->SuperMediaBlock;
SuperMediaBlock-->NullRendererBlock;
```
Inside the SuperMediaBlock:
```mermaid
graph LR;
FishEyeBlock-->ColorEffectsBlock;
```
Final pipeline:
```mermaid
graph LR;
VirtualVideoSourceBlock-->FishEyeBlock;
subgraph SuperMediaBlock
FishEyeBlock-->ColorEffectsBlock;
end
ColorEffectsBlock-->NullRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var videoViewBlock = new VideoRendererBlock(pipeline, VideoView1);
var videoSource = new VirtualVideoSourceBlock(new VirtualVideoSourceSettings());
var colorEffectsBlock = new ColorEffectsBlock(VisioForge.Core.Types.X.VideoEffects.ColorEffectsPreset.Sepia);
var fishEyeBlock = new FishEyeBlock();
var superBlock = new SuperMediaBlock();
superBlock.Blocks.Add(fishEyeBlock);
superBlock.Blocks.Add(colorEffectsBlock);
superBlock.Configure(pipeline);
pipeline.Connect(videoSource.Output, superBlock.Input);
pipeline.Connect(superBlock.Output, videoViewBlock.Input);
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## AESCipher
The `AESCipher` enum defines the types of AES ciphers that can be used. (Source: `VisioForge.Core/Types/X/Special/AESCipher.cs`)
### Enum Values
- `AES_128`: AES 128-bit cipher key using CBC method.
- `AES_256`: AES 256-bit cipher key using CBC method.
### Platforms
Windows, macOS, Linux, iOS, Android.
## EncryptorDecryptorSettings
The `EncryptorDecryptorSettings` class holds settings for encryption and decryption operations. (Source: `VisioForge.Core/Types/X/Special/EncryptorDecryptorSettings.cs`)
### Properties
- `Cipher` (`AESCipher`): Gets or sets the AES cipher type. Defaults to `AES_128`.
- `Key` (`string`): Gets or sets the encryption key.
- `IV` (`string`): Gets or sets the initialization vector (16 bytes as hex).
- `SerializeIV` (`bool`): Gets or sets a value indicating whether to serialize the IV.
### Constructor
- `EncryptorDecryptorSettings(string key, string iv)`: Initializes a new instance with the given key and initialization vector.
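For example, a hedged instantiation sketch; the key and hex-encoded IV values are placeholders:

```csharp
// Hedged sketch: AES-256 settings with a placeholder key and 16-byte IV (hex).
var encSettings = new EncryptorDecryptorSettings(
    "your-encryption-key",
    "000102030405060708090A0B0C0D0E0F")
{
    Cipher = AESCipher.AES_256,
    SerializeIV = true
};
// Pass encSettings to a block that supports encryption or decryption.
```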
### Platforms
Windows, macOS, Linux, iOS, Android.
## CustomMediaBlockPad
The `CustomMediaBlockPad` class defines information for a pad within a `CustomMediaBlock`. (Source: `VisioForge.Core/Types/X/Special/CustomMediaBlockPad.cs`)
### Properties
- `Direction` (`MediaBlockPadDirection`): Gets or sets the pad direction (input/output).
- `MediaType` (`MediaBlockPadMediaType`): Gets or sets the media type of the pad (e.g., Audio, Video).
- `CustomCaps` (`Gst.Caps`): Gets or sets custom GStreamer capabilities for an output pad.
### Constructor
- `CustomMediaBlockPad(MediaBlockPadDirection direction, MediaBlockPadMediaType mediaType)`: Initializes a new instance with the specified direction and media type.
### Platforms
Windows, macOS, Linux, iOS, Android.
## CustomMediaBlockSettings
The `CustomMediaBlockSettings` class provides settings for configuring a custom media block, potentially wrapping GStreamer elements. (Source: `VisioForge.Core/Types/X/Special/CustomMediaBlockSettings.cs`)
### Properties
- `ElementName` (`string`): Gets the name of the GStreamer element or Media Blocks SDK element. To create a custom GStreamer Bin, include square brackets, e.g., `"[ videotestsrc ! videoconvert ]"`.
- `UsePadAddedEvent` (`bool`): Gets or sets a value indicating whether to use the `pad-added` event for dynamically created GStreamer pads.
- `ElementParams` (`Dictionary`): Gets the parameters for the element.
- `Pads` (`List<CustomMediaBlockPad>`): Gets the list of `CustomMediaBlockPad` definitions for the block.
- `ListProperties` (`bool`): Gets or sets a value indicating whether to list element properties to the Debug window after creation. Defaults to `false`.
### Constructor
- `CustomMediaBlockSettings(string elementName)`: Initializes a new instance with the specified element name.
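A hedged sketch that wraps a custom GStreamer bin (using the bracket syntax described above) and declares a video output pad; the final line assumes a `CustomMediaBlock` class consumes these settings:

```csharp
// Hedged sketch: wrap a GStreamer bin in a custom media block.
var customSettings = new CustomMediaBlockSettings("[ videotestsrc ! videoconvert ]");
customSettings.Pads.Add(new CustomMediaBlockPad(
    MediaBlockPadDirection.Output,
    MediaBlockPadMediaType.Video));
customSettings.ListProperties = true; // log element properties for debugging
// Assumption: a CustomMediaBlock accepts these settings.
// var customBlock = new CustomMediaBlock(customSettings);
```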
### Platforms
Windows, macOS, Linux, iOS, Android.
## InputSelectorSyncMode
The `InputSelectorSyncMode` enum defines how an input-selector (used by `SourceSwitchSettings`) synchronizes buffers when in `sync-streams` mode. (Source: `VisioForge.Core/Types/X/Special/SourceSwitchSettings.cs`)
### Enum Values
- `ActiveSegment` (0): Sync using the current active segment.
- `Clock` (1): Sync using the clock.
### Platforms
Windows, macOS, Linux, iOS, Android.
## SourceSwitchSettings
The `SourceSwitchSettings` class configures a block that can switch between multiple input sources. (Source: `VisioForge.Core/Types/X/Special/SourceSwitchSettings.cs`)
Note: the source-code comment summary ("Represents the currently active sink pad") is misleading for this class; its properties configure a source switcher rather than a single pad.
### Properties
- `PadsCount` (`int`): Gets or sets the number of input pads. Defaults to `2`.
- `DefaultActivePad` (`int`): Gets or sets the initially active sink pad.
- `CacheBuffers` (`bool`): Gets or sets whether the active pad caches buffers to avoid missing frames when reactivated. Defaults to `false`.
- `DropBackwards` (`bool`): Gets or sets whether to drop buffers that go backwards relative to the last output buffer pre-switch. Defaults to `false`.
- `SyncMode` (`InputSelectorSyncMode`): Gets or sets how the input-selector syncs buffers in `sync-streams` mode. Defaults to `InputSelectorSyncMode.ActiveSegment`.
- `SyncStreams` (`bool`): Gets or sets if all inactive streams will be synced to the running time of the active stream or to the current clock. Defaults to `true`.
- `CustomName` (`string`): Gets or sets a custom name for logging purposes. Defaults to `"SourceSwitch"`.
### Constructor
- `SourceSwitchSettings(int padsCount = 2)`: Initializes a new instance, optionally specifying the number of pads.
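For instance, a hedged configuration sketch for a three-input switcher:

```csharp
// Hedged sketch: three inputs, start on pad 0, sync inactive streams to the clock.
var switchSettings = new SourceSwitchSettings(padsCount: 3)
{
    DefaultActivePad = 0,
    CacheBuffers = true, // avoid missing frames when a pad is reactivated
    SyncStreams = true,
    SyncMode = InputSelectorSyncMode.Clock,
    CustomName = "CameraSwitch"
};
```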
### Platforms
Windows, macOS, Linux, iOS, Android.
---END OF PAGE---
# Local File: .\dotnet\mediablocks\VideoDecoders\index.md
---
title: .Net Media Video Decoder Blocks Guide
description: Explore a complete guide to .Net Media SDK video decoder blocks. Learn about various video decoders for your media processing pipelines.
sidebar_label: Video Decoders
---
# Video Decoder Blocks - VisioForge Media Blocks SDK .Net
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
Video Decoder blocks are essential components in a media pipeline, responsible for decompressing encoded video streams into raw video frames that can be further processed or rendered. VisioForge Media Blocks SDK .Net offers a variety of video decoder blocks supporting numerous codecs and hardware acceleration technologies.
## Available Video Decoder Blocks
### H264 Decoder Block
Decodes H.264 (AVC) video streams. This is one of the most widely used video compression standards. The block can utilize different underlying decoder implementations like FFMPEG, OpenH264, or hardware-accelerated decoders if available.
#### Block info
Name: `H264DecoderBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | H.264 encoded video | 1 |
| Output video | Uncompressed video | 1 |
#### Settings
The `H264DecoderBlock` is configured using settings that implement `IH264DecoderSettings`. Available settings classes include:
- `FFMPEGH264DecoderSettings`
- `OpenH264DecoderSettings`
- `NVH264DecoderSettings` (for NVIDIA GPU acceleration)
- `VAAPIH264DecoderSettings` (for VA-API acceleration on Linux)
A constructor without parameters will attempt to select an available decoder automatically.
#### The sample pipeline
```mermaid
graph LR;
BasicFileSourceBlock -- Raw Data --> UniversalDemuxBlock;
UniversalDemuxBlock -- H.264 Video Stream --> H264DecoderBlock;
H264DecoderBlock -- Decoded Video --> VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create H264 Decoder block
var h264Decoder = new H264DecoderBlock();
// Example: Create a basic file source, demuxer, and renderer
var basicFileSource = new BasicFileSourceBlock("test_h264.mp4");
// You'll need MediaFileInfo, typically obtained using MediaInfoReader
// Assuming MediaInfoReader.GetMediaInfoAsync is available:
var mediaInfo = await MediaInfoReader.GetMediaInfoAsync("test_h264.mp4");
if (mediaInfo == null)
{
Console.WriteLine("Failed to get media info.");
return;
}
var universalDemux = new UniversalDemuxBlock(mediaInfo, renderVideo: true, renderAudio: false);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1
// Connect blocks
pipeline.Connect(basicFileSource.Output, universalDemux.Input);
pipeline.Connect(universalDemux.GetVideoOutput(), h264Decoder.Input);
pipeline.Connect(h264Decoder.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
#### Availability
You can check for specific decoder implementations using `H264Decoder.IsAvailable(H264DecoderType decoderType)`.
`H264DecoderType` includes `FFMPEG`, `OpenH264`, `GPU_Nvidia_H264`, `VAAPI_H264`, etc.
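As a sketch, these checks can drive settings selection at runtime (assuming, as the settings list above suggests, that `H264DecoderBlock` also accepts an `IH264DecoderSettings` instance):
```csharp
// Prefer the NVIDIA hardware decoder when present; otherwise fall back to FFMPEG.
// Assumption: H264DecoderBlock has a constructor taking IH264DecoderSettings.
var decoderSettings = H264Decoder.IsAvailable(H264DecoderType.GPU_Nvidia_H264)
    ? (IH264DecoderSettings)new NVH264DecoderSettings()
    : new FFMPEGH264DecoderSettings();
var h264Decoder = new H264DecoderBlock(decoderSettings);
```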
#### Platforms
Windows, macOS, Linux. (Hardware-specific decoders like NVH264Decoder require specific hardware and drivers).
### JPEG Decoder Block
Decodes JPEG (Motion JPEG) video streams or individual JPEG images into raw video frames.
#### Block info
Name: `JPEGDecoderBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | JPEG encoded video/images | 1 |
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
HTTPSourceBlock -- MJPEG Stream --> JPEGDecoderBlock;
JPEGDecoderBlock -- Decoded Video --> VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create JPEG Decoder block
var jpegDecoder = new JPEGDecoderBlock();
// Example: Create an HTTP source for an MJPEG camera and a video renderer
var httpSettings = new HTTPSourceSettings(new Uri("http://your-mjpeg-camera/stream"));
var httpSource = new HTTPSourceBlock(httpSettings);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1); // Assuming VideoView1
// Connect blocks
pipeline.Connect(httpSource.Output, jpegDecoder.Input);
pipeline.Connect(jpegDecoder.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
#### Availability
The generic JPEG decoder is generally available on all supported platforms. If the underlying NVIDIA implementation is used, you can check for it with `NVJPEGDecoder.IsAvailable()`.
#### Platforms
Windows, macOS, Linux. (NVIDIA specific implementation requires NVIDIA hardware).
### NVIDIA H.264 Decoder Block (NVH264DecoderBlock)
Provides hardware-accelerated decoding of H.264 (AVC) video streams using NVIDIA's NVDEC technology. This offers high performance and efficiency on systems with compatible NVIDIA GPUs.
#### Block info
Name: `NVH264DecoderBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | H.264 encoded video | 1 |
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
BasicFileSourceBlock -- Raw Data --> UniversalDemuxBlock;
UniversalDemuxBlock -- H.264 Video Stream --> NVH264DecoderBlock;
NVH264DecoderBlock -- Decoded Video --> VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create NVIDIA H.264 Decoder block
var nvH264Decoder = new NVH264DecoderBlock();
// Example: Create a basic file source, demuxer, and renderer
var basicFileSource = new BasicFileSourceBlock("test_h264.mp4");
var mediaInfo = await MediaInfoReader.GetMediaInfoAsync("test_h264.mp4");
if (mediaInfo == null)
{
Console.WriteLine("Failed to get media info.");
return;
}
var universalDemux = new UniversalDemuxBlock(mediaInfo, renderVideo: true, renderAudio: false);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
// Connect blocks
pipeline.Connect(basicFileSource.Output, universalDemux.Input);
pipeline.Connect(universalDemux.GetVideoOutput(), nvH264Decoder.Input);
pipeline.Connect(nvH264Decoder.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
#### Availability
Check availability using `NVH264Decoder.IsAvailable()`. Requires an NVIDIA GPU that supports NVDEC and appropriate drivers.
#### Platforms
Windows, Linux (with NVIDIA drivers).
### NVIDIA H.265 Decoder Block (NVH265DecoderBlock)
Provides hardware-accelerated decoding of H.265 (HEVC) video streams using NVIDIA's NVDEC technology. H.265 offers better compression efficiency than H.264.
#### Block info
Name: `NVH265DecoderBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | H.265/HEVC encoded video | 1 |
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
BasicFileSourceBlock -- Raw Data --> UniversalDemuxBlock;
UniversalDemuxBlock -- H.265 Video Stream --> NVH265DecoderBlock;
NVH265DecoderBlock -- Decoded Video --> VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create NVIDIA H.265 Decoder block
var nvH265Decoder = new NVH265DecoderBlock();
// Example: Create a basic file source, demuxer, and renderer
var basicFileSource = new BasicFileSourceBlock("test_h265.mp4");
var mediaInfo = await MediaInfoReader.GetMediaInfoAsync("test_h265.mp4");
if (mediaInfo == null)
{
Console.WriteLine("Failed to get media info.");
return;
}
var universalDemux = new UniversalDemuxBlock(mediaInfo, renderVideo: true, renderAudio: false);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
// Connect blocks
pipeline.Connect(basicFileSource.Output, universalDemux.Input);
pipeline.Connect(universalDemux.GetVideoOutput(), nvH265Decoder.Input);
pipeline.Connect(nvH265Decoder.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
#### Availability
Check availability using `NVH265Decoder.IsAvailable()`. Requires an NVIDIA GPU that supports NVDEC for H.265 and appropriate drivers.
#### Platforms
Windows, Linux (with NVIDIA drivers).
### NVIDIA JPEG Decoder Block (NVJPEGDecoderBlock)
Provides hardware-accelerated decoding of JPEG images or Motion JPEG (MJPEG) streams using NVIDIA's NVJPEG library. This is particularly useful for high-resolution or high-framerate MJPEG streams.
(Note: MJPEG is more commonly consumed from a live HTTP source than from a file; the file-based sample below is adapted to match the other decoder examples on this page.)
#### Block info
Name: `NVJPEGDecoderBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | JPEG encoded video/images | 1 |
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
BasicFileSourceBlock -- Raw MJPEG Data --> UniversalDemuxBlock;
UniversalDemuxBlock -- JPEG Video Stream --> NVJPEGDecoderBlock;
NVJPEGDecoderBlock -- Decoded Video --> VideoRendererBlock;
```
For live MJPEG streams, `HTTPSourceBlock --> NVJPEGDecoderBlock` is more typical.
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create NVIDIA JPEG Decoder block
var nvJpegDecoder = new NVJPEGDecoderBlock();
// Example: Create a basic file source for an MJPEG file, demuxer, and renderer
// Ensure "test.mjpg" contains a Motion JPEG stream.
var basicFileSource = new BasicFileSourceBlock("test.mjpg");
var mediaInfo = await MediaInfoReader.GetMediaInfoAsync("test.mjpg");
if (mediaInfo == null || mediaInfo.VideoStreams.Count == 0 || !mediaInfo.VideoStreams[0].Codec.Contains("jpeg"))
{
Console.WriteLine("Failed to get MJPEG media info or not an MJPEG file.");
return;
}
var universalDemux = new UniversalDemuxBlock(mediaInfo, renderVideo: true, renderAudio: false);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
// Connect blocks
pipeline.Connect(basicFileSource.Output, universalDemux.Input);
pipeline.Connect(universalDemux.GetVideoOutput(), nvJpegDecoder.Input);
pipeline.Connect(nvJpegDecoder.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
#### Availability
Check availability using `NVJPEGDecoder.IsAvailable()`. Requires an NVIDIA GPU and appropriate drivers.
#### Platforms
Windows, Linux (with NVIDIA drivers).
### NVIDIA MPEG-1 Decoder Block (NVMPEG1DecoderBlock)
Provides hardware-accelerated decoding of MPEG-1 video streams using NVIDIA's NVDEC technology.
#### Block info
Name: `NVMPEG1DecoderBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | MPEG-1 encoded video | 1 |
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
BasicFileSourceBlock -- Raw Data --> UniversalDemuxBlock;
UniversalDemuxBlock -- MPEG-1 Video Stream --> NVMPEG1DecoderBlock;
NVMPEG1DecoderBlock -- Decoded Video --> VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create NVIDIA MPEG-1 Decoder block
var nvMpeg1Decoder = new NVMPEG1DecoderBlock();
// Example: Create a basic file source, demuxer, and renderer
var basicFileSource = new BasicFileSourceBlock("test_mpeg1.mpg");
var mediaInfo = await MediaInfoReader.GetMediaInfoAsync("test_mpeg1.mpg");
if (mediaInfo == null)
{
Console.WriteLine("Failed to get media info.");
return;
}
var universalDemux = new UniversalDemuxBlock(mediaInfo, renderVideo: true, renderAudio: false);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
// Connect blocks
pipeline.Connect(basicFileSource.Output, universalDemux.Input);
pipeline.Connect(universalDemux.GetVideoOutput(), nvMpeg1Decoder.Input);
pipeline.Connect(nvMpeg1Decoder.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
#### Availability
Check availability using `NVMPEG1Decoder.IsAvailable()`. Requires an NVIDIA GPU that supports NVDEC for MPEG-1 and appropriate drivers.
#### Platforms
Windows, Linux (with NVIDIA drivers).
### NVIDIA MPEG-2 Decoder Block (NVMPEG2DecoderBlock)
Provides hardware-accelerated decoding of MPEG-2 video streams using NVIDIA's NVDEC technology. Commonly used for DVD video and some digital television broadcasts.
#### Block info
Name: `NVMPEG2DecoderBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | MPEG-2 encoded video | 1 |
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
BasicFileSourceBlock -- Raw Data --> UniversalDemuxBlock;
UniversalDemuxBlock -- MPEG-2 Video Stream --> NVMPEG2DecoderBlock;
NVMPEG2DecoderBlock -- Decoded Video --> VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create NVIDIA MPEG-2 Decoder block
var nvMpeg2Decoder = new NVMPEG2DecoderBlock();
// Example: Create a basic file source, demuxer, and renderer
var basicFileSource = new BasicFileSourceBlock("test_mpeg2.mpg");
var mediaInfo = await MediaInfoReader.GetMediaInfoAsync("test_mpeg2.mpg");
if (mediaInfo == null)
{
Console.WriteLine("Failed to get media info.");
return;
}
var universalDemux = new UniversalDemuxBlock(mediaInfo, renderVideo: true, renderAudio: false);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
// Connect blocks
pipeline.Connect(basicFileSource.Output, universalDemux.Input);
pipeline.Connect(universalDemux.GetVideoOutput(), nvMpeg2Decoder.Input);
pipeline.Connect(nvMpeg2Decoder.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
#### Availability
Check availability using `NVMPEG2Decoder.IsAvailable()`. Requires an NVIDIA GPU that supports NVDEC for MPEG-2 and appropriate drivers.
#### Platforms
Windows, Linux (with NVIDIA drivers).
### NVIDIA MPEG-4 Decoder Block (NVMPEG4DecoderBlock)
Provides hardware-accelerated decoding of MPEG-4 Part 2 video streams (often found in AVI files, e.g., DivX/Xvid) using NVIDIA's NVDEC technology. Note that this is different from MPEG-4 Part 10 (H.264/AVC).
#### Block info
Name: `NVMPEG4DecoderBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | MPEG-4 Part 2 encoded video | 1 |
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
BasicFileSourceBlock -- Raw Data --> UniversalDemuxBlock;
UniversalDemuxBlock -- MPEG-4 Video Stream --> NVMPEG4DecoderBlock;
NVMPEG4DecoderBlock -- Decoded Video --> VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create NVIDIA MPEG-4 Decoder block
var nvMpeg4Decoder = new NVMPEG4DecoderBlock();
// Example: Create a basic file source, demuxer, and renderer
var basicFileSource = new BasicFileSourceBlock("test_mpeg4.avi");
var mediaInfo = await MediaInfoReader.GetMediaInfoAsync("test_mpeg4.avi");
if (mediaInfo == null)
{
Console.WriteLine("Failed to get media info.");
return;
}
var universalDemux = new UniversalDemuxBlock(mediaInfo, renderVideo: true, renderAudio: false);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
// Connect blocks
pipeline.Connect(basicFileSource.Output, universalDemux.Input);
pipeline.Connect(universalDemux.GetVideoOutput(), nvMpeg4Decoder.Input);
pipeline.Connect(nvMpeg4Decoder.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
#### Availability
Check availability using `NVMPEG4Decoder.IsAvailable()`. Requires an NVIDIA GPU that supports NVDEC for MPEG-4 Part 2 and appropriate drivers.
#### Platforms
Windows, Linux (with NVIDIA drivers).
### NVIDIA VP8 Decoder Block (NVVP8DecoderBlock)
Provides hardware-accelerated decoding of VP8 video streams using NVIDIA's NVDEC technology. VP8 is an open video format, often used with WebM.
#### Block info
Name: `NVVP8DecoderBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | VP8 encoded video | 1 |
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
BasicFileSourceBlock -- Raw Data --> UniversalDemuxBlock;
UniversalDemuxBlock -- VP8 Video Stream --> NVVP8DecoderBlock;
NVVP8DecoderBlock -- Decoded Video --> VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create NVIDIA VP8 Decoder block
var nvVp8Decoder = new NVVP8DecoderBlock();
// Example: Create a basic file source, demuxer, and renderer
var basicFileSource = new BasicFileSourceBlock("test_vp8.webm");
var mediaInfo = await MediaInfoReader.GetMediaInfoAsync("test_vp8.webm");
if (mediaInfo == null)
{
Console.WriteLine("Failed to get media info.");
return;
}
var universalDemux = new UniversalDemuxBlock(mediaInfo, renderVideo: true, renderAudio: false);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
// Connect blocks
pipeline.Connect(basicFileSource.Output, universalDemux.Input);
pipeline.Connect(universalDemux.GetVideoOutput(), nvVp8Decoder.Input);
pipeline.Connect(nvVp8Decoder.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
#### Availability
Check availability using `NVVP8Decoder.IsAvailable()`. Requires an NVIDIA GPU that supports NVDEC for VP8 and appropriate drivers.
#### Platforms
Windows, Linux (with NVIDIA drivers).
### NVIDIA VP9 Decoder Block (NVVP9DecoderBlock)
Provides hardware-accelerated decoding of VP9 video streams using NVIDIA's NVDEC technology. VP9 is an open and royalty-free video coding format developed by Google, often used for web streaming (e.g., YouTube).
#### Block info
Name: `NVVP9DecoderBlock`.
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input video | VP9 encoded video | 1 |
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
BasicFileSourceBlock -- Raw Data --> UniversalDemuxBlock;
UniversalDemuxBlock -- VP9 Video Stream --> NVVP9DecoderBlock;
NVVP9DecoderBlock -- Decoded Video --> VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create NVIDIA VP9 Decoder block
var nvVp9Decoder = new NVVP9DecoderBlock();
// Example: Create a basic file source, demuxer, and renderer
var basicFileSource = new BasicFileSourceBlock("test_vp9.webm");
var mediaInfo = await MediaInfoReader.GetMediaInfoAsync("test_vp9.webm");
if (mediaInfo == null)
{
Console.WriteLine("Failed to get media info.");
return;
}
var universalDemux = new UniversalDemuxBlock(mediaInfo, renderVideo: true, renderAudio: false);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
// Connect blocks
pipeline.Connect(basicFileSource.Output, universalDemux.Input);
pipeline.Connect(universalDemux.GetVideoOutput(), nvVp9Decoder.Input);
pipeline.Connect(nvVp9Decoder.Output, videoRenderer.Input);
// Start pipeline
await pipeline.StartAsync();
```
#### Availability
Check availability using `NVVP9Decoder.IsAvailable()`. Requires an NVIDIA GPU that supports NVDEC for VP9 and appropriate drivers.
#### Platforms
Windows, Linux (with NVIDIA drivers).
### VAAPI H.264 Decoder Block (VAAPIH264DecoderBlock)
Provides hardware-accelerated decoding of H.264 (AVC) video streams using VA-API (Video Acceleration API). Available on Linux systems with compatible hardware and drivers.
#### Block info
| Pin direction | Media type | Pins count |
|---------------|---------------------|------------|
| Input video | H.264 encoded video | 1 |
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
BasicFileSourceBlock -- Raw Data --> UniversalDemuxBlock;
UniversalDemuxBlock -- H.264 Video Stream --> VAAPIH264DecoderBlock;
VAAPIH264DecoderBlock -- Decoded Video --> VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var vaapiH264Decoder = new VAAPIH264DecoderBlock();
var basicFileSource = new BasicFileSourceBlock("test_h264.mp4");
var mediaInfo = await MediaInfoReader.GetMediaInfoAsync("test_h264.mp4");
if (mediaInfo == null)
{
Console.WriteLine("Failed to get media info.");
return;
}
var universalDemux = new UniversalDemuxBlock(mediaInfo, renderVideo: true, renderAudio: false);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(basicFileSource.Output, universalDemux.Input);
pipeline.Connect(universalDemux.GetVideoOutput(), vaapiH264Decoder.Input);
pipeline.Connect(vaapiH264Decoder.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
#### Availability
Check with `VAAPIH264DecoderBlock.IsAvailable()`. Requires VA-API support and correct SDK redist.
#### Platforms
Linux (with VA-API drivers).
---
### VAAPI HEVC Decoder Block (VAAPIHEVCDecoderBlock)
Provides hardware-accelerated decoding of H.265/HEVC video streams using VA-API. Available on Linux systems with compatible hardware and drivers.
#### Block info
| Pin direction | Media type | Pins count |
|---------------|----------------------|------------|
| Input video | H.265/HEVC encoded | 1 |
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
BasicFileSourceBlock -- Raw Data --> UniversalDemuxBlock;
UniversalDemuxBlock -- H.265 Video Stream --> VAAPIHEVCDecoderBlock;
VAAPIHEVCDecoderBlock -- Decoded Video --> VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var vaapiHevcDecoder = new VAAPIHEVCDecoderBlock();
var basicFileSource = new BasicFileSourceBlock("test_hevc.mp4");
var mediaInfo = await MediaInfoReader.GetMediaInfoAsync("test_hevc.mp4");
if (mediaInfo == null)
{
Console.WriteLine("Failed to get media info.");
return;
}
var universalDemux = new UniversalDemuxBlock(mediaInfo, renderVideo: true, renderAudio: false);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(basicFileSource.Output, universalDemux.Input);
pipeline.Connect(universalDemux.GetVideoOutput(), vaapiHevcDecoder.Input);
pipeline.Connect(vaapiHevcDecoder.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
#### Availability
Check with `VAAPIHEVCDecoderBlock.IsAvailable()`. Requires VA-API support and correct SDK redist.
#### Platforms
Linux (with VA-API drivers).
---
### VAAPI JPEG Decoder Block (VAAPIJPEGDecoderBlock)
Provides hardware-accelerated decoding of JPEG/MJPEG video streams using VA-API. Available on Linux systems with compatible hardware and drivers.
#### Block info
| Pin direction | Media type | Pins count |
|---------------|--------------------------|------------|
| Input video | JPEG encoded video/images | 1 |
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
HTTPSourceBlock -- MJPEG Stream --> VAAPIJPEGDecoderBlock;
VAAPIJPEGDecoderBlock -- Decoded Video --> VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var vaapiJpegDecoder = new VAAPIJPEGDecoderBlock();
var httpSettings = new HTTPSourceSettings(new Uri("http://your-mjpeg-camera/stream"));
var httpSource = new HTTPSourceBlock(httpSettings);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(httpSource.Output, vaapiJpegDecoder.Input);
pipeline.Connect(vaapiJpegDecoder.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
#### Availability
Check with `VAAPIJPEGDecoderBlock.IsAvailable()`. Requires VA-API support and correct SDK redist.
#### Platforms
Linux (with VA-API drivers).
---
### VAAPI VC1 Decoder Block (VAAPIVC1DecoderBlock)
Provides hardware-accelerated decoding of VC-1 video streams using VA-API. Available on Linux systems with compatible hardware and drivers.
#### Block info
| Pin direction | Media type | Pins count |
|---------------|---------------------|------------|
| Input video | VC-1 encoded video | 1 |
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
BasicFileSourceBlock -- Raw Data --> UniversalDemuxBlock;
UniversalDemuxBlock -- VC-1 Video Stream --> VAAPIVC1DecoderBlock;
VAAPIVC1DecoderBlock -- Decoded Video --> VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var vaapiVc1Decoder = new VAAPIVC1DecoderBlock();
var basicFileSource = new BasicFileSourceBlock("test_vc1.wmv");
var mediaInfo = await MediaInfoReader.GetMediaInfoAsync("test_vc1.wmv");
if (mediaInfo == null)
{
Console.WriteLine("Failed to get media info.");
return;
}
var universalDemux = new UniversalDemuxBlock(mediaInfo, renderVideo: true, renderAudio: false);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(basicFileSource.Output, universalDemux.Input);
pipeline.Connect(universalDemux.GetVideoOutput(), vaapiVc1Decoder.Input);
pipeline.Connect(vaapiVc1Decoder.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
#### Availability
Check with `VAAPIVC1DecoderBlock.IsAvailable()`. Requires VA-API support and correct SDK redist.
#### Platforms
Linux (with VA-API drivers).
---
## Direct3D 11/DXVA Video Decoder Blocks
Direct3D 11/DXVA (D3D11) decoder blocks provide hardware-accelerated video decoding on Windows systems with compatible GPUs and drivers. These blocks are useful for high-performance video playback and processing pipelines on Windows.
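As a sketch, you can gate the hardware path on the availability checks documented for each block below:
```csharp
// Use the D3D11 H.264 decoder when supported; otherwise fall back to the generic
// H264DecoderBlock, which selects an available implementation automatically.
if (D3D11H264DecoderBlock.IsAvailable())
{
    var decoder = new D3D11H264DecoderBlock();
    // ... connect the decoder exactly as in the samples below
}
else
{
    var decoder = new H264DecoderBlock();
    // ... the same pipeline wiring applies
}
```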
### D3D11 AV1 Decoder Block
Decodes AV1 video streams using Direct3D 11/DXVA hardware acceleration.
#### Block info
Name: `D3D11AV1DecoderBlock`.
| Pin direction | Media type | Pins count |
|---------------|---------------------|------------|
| Input video | AV1 encoded video | 1 |
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
BasicFileSourceBlock -- Raw Data --> UniversalDemuxBlock;
UniversalDemuxBlock -- AV1 Video Stream --> D3D11AV1DecoderBlock;
D3D11AV1DecoderBlock -- Decoded Video --> VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create D3D11 AV1 Decoder block
var d3d11Av1Decoder = new D3D11AV1DecoderBlock();
var basicFileSource = new BasicFileSourceBlock("test_av1.mkv");
var mediaInfo = await MediaInfoReader.GetMediaInfoAsync("test_av1.mkv");
if (mediaInfo == null)
{
Console.WriteLine("Failed to get media info.");
return;
}
var universalDemux = new UniversalDemuxBlock(mediaInfo, renderVideo: true, renderAudio: false);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(basicFileSource.Output, universalDemux.Input);
pipeline.Connect(universalDemux.GetVideoOutput(), d3d11Av1Decoder.Input);
pipeline.Connect(d3d11Av1Decoder.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
#### Availability
Check availability using `D3D11AV1DecoderBlock.IsAvailable()`. Requires Windows with D3D11/DXVA support and correct SDK redist.
#### Platforms
Windows (D3D11/DXVA required).
---
### D3D11 H.264 Decoder Block
Decodes H.264 (AVC) video streams using Direct3D 11/DXVA hardware acceleration.
#### Block info
Name: `D3D11H264DecoderBlock`.
| Pin direction | Media type | Pins count |
|---------------|---------------------|------------|
| Input video | H.264 encoded video | 1 |
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
BasicFileSourceBlock -- Raw Data --> UniversalDemuxBlock;
UniversalDemuxBlock -- H.264 Video Stream --> D3D11H264DecoderBlock;
D3D11H264DecoderBlock -- Decoded Video --> VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create D3D11 H.264 Decoder block
var d3d11H264Decoder = new D3D11H264DecoderBlock();
var basicFileSource = new BasicFileSourceBlock("test_h264.mp4");
var mediaInfo = await MediaInfoReader.GetMediaInfoAsync("test_h264.mp4");
if (mediaInfo == null)
{
Console.WriteLine("Failed to get media info.");
return;
}
var universalDemux = new UniversalDemuxBlock(mediaInfo, renderVideo: true, renderAudio: false);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(basicFileSource.Output, universalDemux.Input);
pipeline.Connect(universalDemux.GetVideoOutput(), d3d11H264Decoder.Input);
pipeline.Connect(d3d11H264Decoder.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
#### Availability
Check availability using `D3D11H264DecoderBlock.IsAvailable()`. Requires Windows with D3D11/DXVA support and correct SDK redist.
#### Platforms
Windows (D3D11/DXVA required).
---
### D3D11 H.265 Decoder Block
Decodes H.265 (HEVC) video streams using Direct3D 11/DXVA hardware acceleration.
#### Block info
Name: `D3D11H265DecoderBlock`.
| Pin direction | Media type | Pins count |
|---------------|----------------------|------------|
| Input video | H.265/HEVC encoded | 1 |
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
BasicFileSourceBlock -- Raw Data --> UniversalDemuxBlock;
UniversalDemuxBlock -- H.265 Video Stream --> D3D11H265DecoderBlock;
D3D11H265DecoderBlock -- Decoded Video --> VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create D3D11 H.265 Decoder block
var d3d11H265Decoder = new D3D11H265DecoderBlock();
var basicFileSource = new BasicFileSourceBlock("test_h265.mp4");
var mediaInfo = await MediaInfoReader.GetMediaInfoAsync("test_h265.mp4");
if (mediaInfo == null)
{
Console.WriteLine("Failed to get media info.");
return;
}
var universalDemux = new UniversalDemuxBlock(mediaInfo, renderVideo: true, renderAudio: false);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(basicFileSource.Output, universalDemux.Input);
pipeline.Connect(universalDemux.GetVideoOutput(), d3d11H265Decoder.Input);
pipeline.Connect(d3d11H265Decoder.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
#### Availability
Check availability using `D3D11H265DecoderBlock.IsAvailable()`. Requires Windows with D3D11/DXVA support and correct SDK redist.
#### Platforms
Windows (D3D11/DXVA required).
---
### D3D11 MPEG-2 Decoder Block
Decodes MPEG-2 video streams using Direct3D 11/DXVA hardware acceleration.
#### Block info
Name: `D3D11MPEG2DecoderBlock`.
| Pin direction | Media type | Pins count |
|---------------|---------------------|------------|
| Input video | MPEG-2 encoded video| 1 |
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
BasicFileSourceBlock -- Raw Data --> UniversalDemuxBlock;
UniversalDemuxBlock -- MPEG-2 Video Stream --> D3D11MPEG2DecoderBlock;
D3D11MPEG2DecoderBlock -- Decoded Video --> VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create D3D11 MPEG-2 Decoder block
var d3d11Mpeg2Decoder = new D3D11MPEG2DecoderBlock();
var basicFileSource = new BasicFileSourceBlock("test_mpeg2.mpg");
var mediaInfo = await MediaInfoReader.GetMediaInfoAsync("test_mpeg2.mpg");
if (mediaInfo == null)
{
Console.WriteLine("Failed to get media info.");
return;
}
var universalDemux = new UniversalDemuxBlock(mediaInfo, renderVideo: true, renderAudio: false);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(basicFileSource.Output, universalDemux.Input);
pipeline.Connect(universalDemux.GetVideoOutput(), d3d11Mpeg2Decoder.Input);
pipeline.Connect(d3d11Mpeg2Decoder.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
#### Availability
Check availability using `D3D11MPEG2DecoderBlock.IsAvailable()`. Requires Windows with D3D11/DXVA support and correct SDK redist.
#### Platforms
Windows (D3D11/DXVA required).
---
### D3D11 VP8 Decoder Block
Decodes VP8 video streams using Direct3D 11/DXVA hardware acceleration.
#### Block info
Name: `D3D11VP8DecoderBlock`.
| Pin direction | Media type | Pins count |
|---------------|---------------------|------------|
| Input video | VP8 encoded video | 1 |
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
BasicFileSourceBlock -- Raw Data --> UniversalDemuxBlock;
UniversalDemuxBlock -- VP8 Video Stream --> D3D11VP8DecoderBlock;
D3D11VP8DecoderBlock -- Decoded Video --> VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create D3D11 VP8 Decoder block
var d3d11Vp8Decoder = new D3D11VP8DecoderBlock();
var basicFileSource = new BasicFileSourceBlock("test_vp8.webm");
var mediaInfo = await MediaInfoReader.GetMediaInfoAsync("test_vp8.webm");
if (mediaInfo == null)
{
Console.WriteLine("Failed to get media info.");
return;
}
var universalDemux = new UniversalDemuxBlock(mediaInfo, renderVideo: true, renderAudio: false);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(basicFileSource.Output, universalDemux.Input);
pipeline.Connect(universalDemux.GetVideoOutput(), d3d11Vp8Decoder.Input);
pipeline.Connect(d3d11Vp8Decoder.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
#### Availability
Check availability using `D3D11VP8DecoderBlock.IsAvailable()`. Requires Windows with D3D11/DXVA support and correct SDK redist.
#### Platforms
Windows (D3D11/DXVA required).
---
### D3D11 VP9 Decoder Block
Decodes VP9 video streams using Direct3D 11/DXVA hardware acceleration.
#### Block info
Name: `D3D11VP9DecoderBlock`.
| Pin direction | Media type | Pins count |
|---------------|---------------------|------------|
| Input video | VP9 encoded video | 1 |
| Output video | Uncompressed video | 1 |
#### The sample pipeline
```mermaid
graph LR;
BasicFileSourceBlock -- Raw Data --> UniversalDemuxBlock;
UniversalDemuxBlock -- VP9 Video Stream --> D3D11VP9DecoderBlock;
D3D11VP9DecoderBlock -- Decoded Video --> VideoRendererBlock;
```
#### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Create D3D11 VP9 Decoder block
var d3d11Vp9Decoder = new D3D11VP9DecoderBlock();
var basicFileSource = new BasicFileSourceBlock("test_vp9.webm");
var mediaInfo = await MediaInfoReader.GetMediaInfoAsync("test_vp9.webm");
if (mediaInfo == null)
{
Console.WriteLine("Failed to get media info.");
return;
}
var universalDemux = new UniversalDemuxBlock(mediaInfo, renderVideo: true, renderAudio: false);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(basicFileSource.Output, universalDemux.Input);
pipeline.Connect(universalDemux.GetVideoOutput(), d3d11Vp9Decoder.Input);
pipeline.Connect(d3d11Vp9Decoder.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
#### Availability
Check availability using `D3D11VP9DecoderBlock.IsAvailable()`. Requires Windows with D3D11/DXVA support and correct SDK redist.
#### Platforms
Windows (D3D11/DXVA required).
---END OF PAGE---
# Local File: .\dotnet\mediablocks\VideoEncoders\index.md
---
title: Mastering Video Encoders in .NET SDK
description: Unlock high-performance video encoding in .NET projects. This guide covers various video encoders, codecs like AV1, H264, HEVC, and GPU acceleration techniques.
sidebar_label: Video Encoders
order: 18
---
# Video encoding
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
Video encoding is the process of converting raw video data into a compressed format. This process is essential for reducing the size of video files, making them easier to store and stream over the internet. VisioForge Media Blocks SDK provides a wide range of video encoders that support various formats and codecs.
For some video encoders, the SDK can use GPU acceleration to speed up the encoding process. This feature is especially useful when working with high-resolution video files or when encoding multiple videos simultaneously.
NVIDIA, Intel, and AMD GPUs are supported for hardware acceleration.
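As a sketch, the availability-check pattern used throughout this page can drive encoder selection at runtime; the `IsAvailable()` method on the H264 settings classes is assumed here (it is documented for other settings classes, such as `VP8EncoderSettings`, later on this page):
```csharp
// Assumption: NVENCH264EncoderSettings.IsAvailable() exists, mirroring the
// availability checks documented for other settings classes on this page.
var h264Settings = NVENCH264EncoderSettings.IsAvailable()
    ? (IH264EncoderSettings)new NVENCH264EncoderSettings() // NVIDIA GPU path
    : new OpenH264EncoderSettings();                       // CPU fallback
var h264Encoder = new H264EncoderBlock(h264Settings);
```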
## AV1 encoder
`AV1 (AOMedia Video 1)`: Developed by the Alliance for Open Media, AV1 is an open, royalty-free video coding format designed for video transmissions over the Internet. It is known for its high compression efficiency and better quality at lower bit rates compared to its predecessors, making it well-suited for high-resolution video streaming applications.
Use classes that implement the `IAV1EncoderSettings` interface to set the parameters.
#### CPU Encoders
##### AOMAV1EncoderSettings
AOM AV1 encoder settings. CPU encoder.
**Platforms:** Windows, Linux, macOS.
##### RAV1EEncoderSettings
RAV1E AV1 encoder settings. CPU encoder.
- **Key Properties**:
- `Bitrate` (integer): Target bitrate in kilobits per second.
- `LowLatency` (boolean): Enables or disables low latency mode. Default is `false`.
- `MaxKeyFrameInterval` (ulong): Maximum interval between keyframes. Default is `240`.
- `MinKeyFrameInterval` (ulong): Minimum interval between keyframes. Default is `12`.
- `MinQuantizer` (uint): Minimum quantizer value (range 0-255). Default is `0`.
- `Quantizer` (uint): Quantizer value (range 0-255). Default is `100`.
- `SpeedPreset` (int): Encoding speed preset (10 fastest, 0 slowest). Default is `6`.
- `Tune` (`RAV1EEncoderTune`): Tune setting for the encoder. Default is `RAV1EEncoderTune.Psychovisual`.
**Platforms:** Windows, Linux, macOS.
###### RAV1EEncoderTune Enum
Specifies the tuning option for the RAV1E encoder.
- `PSNR` (0): Tune for best PSNR (Peak Signal-to-Noise Ratio).
- `Psychovisual` (1): Tune for psychovisual quality.
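A configuration sketch using the properties listed above (the values are illustrative):
```csharp
// Illustrative values; all property names come from the RAV1EEncoderSettings list above.
var rav1e = new RAV1EEncoderSettings
{
    Bitrate = 3000,                       // target bitrate in kbps
    SpeedPreset = 8,                      // faster encode (10 fastest, 0 slowest)
    MaxKeyFrameInterval = 240,
    Tune = RAV1EEncoderTune.Psychovisual
};
var av1Encoder = new AV1EncoderBlock(rav1e);
```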
#### GPU Encoders
##### AMFAV1EncoderSettings
AMD GPU AV1 video encoder.
**Platforms:** Windows, Linux, macOS.
##### NVENCAV1EncoderSettings
Nvidia GPU AV1 video encoder.
**Platforms:** Windows, Linux, macOS.
##### QSVAV1EncoderSettings
Intel GPU AV1 video encoder.
**Platforms:** Windows, Linux, macOS.
*Note: Intel QSV encoders may also utilize common enumerations like `QSVCodingOption` (`On`, `Off`, `Unknown`) for configuring specific hardware features.*
### Block info
Name: AV1EncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | AV1 | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->AV1EncoderBlock;
AV1EncoderBlock-->MP4SinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var videoEncoderBlock = new AV1EncoderBlock(new QSVAV1EncoderSettings());
pipeline.Connect(fileSource.VideoOutput, videoEncoderBlock.Input);
var mp4SinkBlock = new MP4SinkBlock(new MP4SinkSettings(@"output.mp4"));
pipeline.Connect(videoEncoderBlock.Output, mp4SinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## DV encoder
`DV (Digital Video)`: A format for storing digital video introduced in the 1990s, primarily used in consumer digital camcorders. DV employs intra-frame compression to deliver high-quality video on digital tapes, making it suitable for home videos as well as semi-professional productions.
### Block info
Name: DVEncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | video/x-dv | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->DVEncoderBlock;
DVEncoderBlock-->AVISinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var videoEncoderBlock = new DVEncoderBlock(new DVVideoEncoderSettings());
pipeline.Connect(fileSource.VideoOutput, videoEncoderBlock.Input);
var sinkBlock = new AVISinkBlock(new AVISinkSettings(@"output.avi"));
pipeline.Connect(videoEncoderBlock.Output, sinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## H264 encoder
The H264 encoder block is used for encoding files in MP4, MKV, and some other formats, as well as for network streaming using RTSP and HLS.
Use classes that implement the IH264EncoderSettings interface to set the parameters.
### Settings
#### NVENCH264EncoderSettings
H264 video encoder for NVIDIA GPUs.
**Platforms:** Windows, Linux, macOS.
#### AMFH264EncoderSettings
H264 video encoder for AMD/ATI GPUs.
**Platforms:** Windows, Linux, macOS.
#### QSVH264EncoderSettings
Intel GPU H264 video encoder.
**Platforms:** Windows, Linux, macOS.
#### OpenH264EncoderSettings
Software CPU H264 encoder.
**Platforms:** Windows, macOS, Linux, iOS, Android.
#### CustomH264EncoderSettings
Allows using a custom GStreamer element for H264 encoding. You can specify the GStreamer element name and configure its properties.
**Platforms:** Windows, Linux, macOS.
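A sketch of wrapping a custom GStreamer element; the `ElementName` and `Properties` member names are assumed here, mirroring the `CustomVPXEncoderSettings` members documented later on this page:
```csharp
// Assumed member names (ElementName, Properties), mirroring CustomVPXEncoderSettings.
var customH264 = new CustomH264EncoderSettings
{
    ElementName = "x264enc",             // any GStreamer H264 encoder element
    Properties = new Dictionary<string, object>
    {
        ["bitrate"] = 2000,              // element-specific property (kbps for x264enc)
        ["speed-preset"] = "fast"
    }
};
var encoder = new H264EncoderBlock(customH264);
```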
### Block info
Name: H264EncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | H264 | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->H264EncoderBlock;
H264EncoderBlock-->MP4SinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var h264EncoderBlock = new H264EncoderBlock(new NVENCH264EncoderSettings());
pipeline.Connect(fileSource.VideoOutput, h264EncoderBlock.Input);
var mp4SinkBlock = new MP4SinkBlock(new MP4SinkSettings(@"output.mp4"));
pipeline.Connect(h264EncoderBlock.Output, mp4SinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
#### Sample applications
- [Simple Capture Demo](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK/WPF/CSharp/Simple%20Capture%20Demo)
- [Screen Capture Demo](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Blocks%20SDK/WPF/CSharp/Screen%20Capture)
### Platforms
Windows, macOS, Linux, iOS, Android.
## HEVC/H265 encoder
The HEVC encoder block is used for encoding files in MP4, MKV, and some other formats, as well as for network streaming using RTSP and HLS.
Use classes that implement the IHEVCEncoderSettings interface to set the parameters.
### Settings
#### MFHEVCEncoderSettings
Microsoft Media Foundation HEVC encoder. CPU encoder.
**Platforms:** Windows.
#### NVENCHEVCEncoderSettings
HEVC video encoder for NVIDIA GPUs.
**Platforms:** Windows, Linux, macOS.
#### AMFHEVCEncoderSettings
HEVC video encoder for AMD/ATI GPUs.
**Platforms:** Windows, Linux, macOS.
#### QSVHEVCEncoderSettings
Intel GPU HEVC video encoder.
**Platforms:** Windows, Linux, macOS.
#### CustomHEVCEncoderSettings
Allows using a custom GStreamer element for HEVC encoding. You can specify the GStreamer element name and configure its properties.
**Platforms:** Windows, Linux, macOS.
### Block info
Name: HEVCEncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | HEVC | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->HEVCEncoderBlock;
HEVCEncoderBlock-->MP4SinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var hevcEncoderBlock = new HEVCEncoderBlock(new NVENCHEVCEncoderSettings());
pipeline.Connect(fileSource.VideoOutput, hevcEncoderBlock.Input);
var mp4SinkBlock = new MP4SinkBlock(new MP4SinkSettings(@"output.mp4"));
pipeline.Connect(hevcEncoderBlock.Output, mp4SinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## MJPEG encoder
`MJPEG (Motion JPEG)`: A video compression format where each frame of video is separately compressed into a JPEG image. This technique is straightforward and results in no interframe compression, making it ideal for situations where frame-specific editing or access is required, such as in surveillance and medical imaging.
Use the MJPEG encoder settings classes described below to set the parameters.
### Settings
#### MJPEGEncoderSettings
Default MJPEG encoder. CPU encoder.
- **Key Properties**:
- `Quality` (int): JPEG quality level (10-100). Default is `85`.
- **Encoder Type**: `MJPEGEncoderType.CPU`.
**Platforms:** Windows, Linux, macOS, iOS, Android.
#### QSVMJPEGEncoderSettings
MJPEG encoder for Intel GPUs.
- **Key Properties**:
- `Quality` (uint): JPEG quality level (10-100). Default is `85`.
- **Encoder Type**: `MJPEGEncoderType.GPU_Intel_QSV_MJPEG`.
**Platforms:** Windows, Linux, macOS.
#### MJPEGEncoderType Enum
Specifies the type of MJPEG encoder.
- `CPU`: Default CPU-based encoder.
- `GPU_Intel_QSV_MJPEG`: Intel QuickSync GPU-based MJPEG encoder.
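A short configuration sketch (the `Quality` property is documented above):
```csharp
// Quality is the JPEG quality level (10-100); 95 favors quality over file size.
var mjpegSettings = new MJPEGEncoderSettings { Quality = 95 };
var mjpegEncoder = new MJPEGEncoderBlock(mjpegSettings);
```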
### Block info
Name: MJPEGEncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | MJPEG | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->MJPEGEncoderBlock;
MJPEGEncoderBlock-->AVISinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var videoEncoderBlock = new MJPEGEncoderBlock(new MJPEGEncoderSettings());
pipeline.Connect(fileSource.VideoOutput, videoEncoderBlock.Input);
var aviSinkBlock = new AVISinkBlock(new AVISinkSettings(@"output.avi"));
pipeline.Connect(videoEncoderBlock.Output, aviSinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## Theora encoder
The [Theora](https://www.theora.org/) encoder is used to encode video files in WebM format.
### Settings
#### TheoraEncoderSettings
Provides settings for the Theora encoder.
- **Key Properties**:
- `Bitrate` (kbps)
- `CapOverflow`, `CapUnderflow` (bit reservoir capping)
- `DropFrames` (allow/disallow frame dropping)
- `KeyFrameAuto` (automatic keyframe detection)
- `KeyFrameForce` (interval to force keyframe every N frames)
- `KeyFrameFrequency` (keyframe frequency)
- `MultipassCacheFile` (string path for multipass cache)
- `MultipassMode` (using `TheoraMultipassMode` enum: `SinglePass`, `FirstPass`, `SecondPass`)
- `Quality` (integer value, typically 0-63 for libtheora; the exact range can vary by implementation)
- `RateBuffer` (size of rate control buffer in units of frames, 0 = auto)
- `SpeedLevel` (amount of motion vector searching, 0-2 or higher depending on implementation)
- `VP3Compatible` (boolean to enable VP3 compatibility)
- **Availability**: Can be checked using `TheoraEncoderSettings.IsAvailable()`.
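A configuration sketch using the properties above (the values are illustrative, and boolean semantics are assumed for the flag properties):
```csharp
// Illustrative values; property names come from the list above.
var theoraSettings = new TheoraEncoderSettings
{
    Bitrate = 1500,        // kbps
    KeyFrameAuto = true,   // automatic keyframe detection
    Quality = 48           // typically 0-63 for libtheora
};
var theoraEncoder = new TheoraEncoderBlock(theoraSettings);
```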
### Block info
Name: TheoraEncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | video/x-theora | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->TheoraEncoderBlock;
TheoraEncoderBlock-->WebMSinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var theoraEncoderBlock = new TheoraEncoderBlock(new TheoraEncoderSettings());
pipeline.Connect(fileSource.VideoOutput, theoraEncoderBlock.Input);
var webmSinkBlock = new WebMSinkBlock(new WebMSinkSettings(@"output.webm"));
pipeline.Connect(theoraEncoderBlock.Output, webmSinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## VPX encoder
The VPX encoder block is used to encode video into WebM, MKV, or OGG files. VPX is a family of video codecs covering the VP8 and VP9 formats.
The VPX encoder block utilizes settings classes that implement the `IVPXEncoderSettings` interface. Key settings classes include:
### Settings
The common base class for VP8 and VP9 CPU encoder settings is `VPXEncoderSettings`. It provides a wide range of shared properties for tuning the encoding process, such as:
- `ARNRMaxFrames`, `ARNRStrength`, `ARNRType` (AltRef noise reduction)
- `BufferInitialSize`, `BufferOptimalSize`, `BufferSize` (client buffer settings)
- `CPUUsed`, `CQLevel` (constrained quality)
- `Deadline` (encoding deadline per frame)
- `DropframeThreshold`
- `RateControl` (using `VPXRateControl` enum)
- `ErrorResilient` (using `VPXErrorResilientFlags` enum)
- `HorizontalScalingMode`, `VerticalScalingMode` (using `VPXScalingMode` enum)
- `KeyFrameMaxDistance`, `KeyFrameMode` (using `VPXKeyFrameMode` enum)
- `MinQuantizer`, `MaxQuantizer`
- `MultipassCacheFile`, `MultipassMode` (using `VPXMultipassMode` enum)
- `NoiseSensitivity`
- `TargetBitrate` (in Kbits/s)
- `NumOfThreads`
- `TokenPartitions` (using `VPXTokenPartitions` enum)
- `Tuning` (using `VPXTuning` enum)
#### VP8EncoderSettings
CPU encoder for VP8. Inherits from `VPXEncoderSettings`.
- **Key Properties**: Leverages properties from `VPXEncoderSettings` tailored for VP8.
- **Encoder Type**: `VPXEncoderType.VP8`.
- **Availability**: Can be checked using `VP8EncoderSettings.IsAvailable()`.
#### VP9EncoderSettings
CPU encoder for VP9. Inherits from `VPXEncoderSettings`.
- **Key Properties**: In addition to `VPXEncoderSettings` properties, includes VP9-specific settings:
- `AQMode` (Adaptive Quantization mode, using `VPXAdaptiveQuantizationMode` enum)
- `FrameParallelDecoding` (allow parallel processing)
- `RowMultithread` (multi-threaded row encoding)
- `TileColumns`, `TileRows` (log2 values)
- **Encoder Type**: `VPXEncoderType.VP9`.
- **Availability**: Can be checked using `VP9EncoderSettings.IsAvailable()`.
#### QSVVP9EncoderSettings
Intel QSV (GPU accelerated) encoder for VP9.
- **Key Properties**:
- `LowLatency`
- `TargetUsage` (1: Best quality, 4: Balanced, 7: Best speed)
- `Bitrate` (Kbit/sec)
- `GOPSize`
- `ICQQuality` (Intelligent Constant Quality)
- `MaxBitrate` (Kbit/sec)
- `QPI`, `QPP` (constant quantizer for I and P frames)
- `Profile` (0-3)
- `RateControl` (using `QSVVP9EncRateControl` enum)
- `RefFrames`
- **Encoder Type**: `VPXEncoderType.QSV_VP9`.
- **Availability**: Can be checked using `QSVVP9EncoderSettings.IsAvailable()`.
#### CustomVPXEncoderSettings
Allows using a custom GStreamer element for VPX encoding.
- **Key Properties**:
- `ElementName` (string to specify the GStreamer element name)
- `Properties` (Dictionary to configure the element)
- `VideoFormat` (required video format like `VideoFormatX.NV12`)
- **Encoder Type**: `VPXEncoderType.CustomEncoder`.
### VPX Enumerations
Several enumerations are available to configure VPX encoders:
- `VPXAdaptiveQuantizationMode`: Defines adaptive quantization modes (e.g., `Off`, `Variance`, `Complexity`, `CyclicRefresh`, `Equator360`, `Perceptual`, `PSNR`, `Lookahead`).
- `VPXErrorResilientFlags`: Flags for error resilience features (e.g., `None`, `Default`, `Partitions`).
- `VPXKeyFrameMode`: Defines keyframe placement strategies (e.g., `Auto`, `Disabled`).
- `VPXMultipassMode`: Modes for multipass encoding (e.g., `OnePass`, `FirstPass`, `LastPass`).
- `VPXRateControl`: Rate control modes (e.g., `VBR`, `CBR`, `CQ`).
- `VPXScalingMode`: Scaling modes (e.g., `Normal`, `_4_5`, `_3_5`, `_1_2`).
- `VPXTokenPartitions`: Number of token partitions (e.g., `One`, `Two`, `Four`, `Eight`).
- `VPXTuning`: Tuning options for the encoder (e.g., `PSNR`, `SSIM`).
- `VPXEncoderType`: Specifies the VPX encoder variant (e.g., `VP8`, `VP9`, `QSV_VP9`, `CustomEncoder`, and platform-specific ones like `OMXExynosVP8Encoder`).
- `QSVVP9EncRateControl`: Rate control modes specific to `QSVVP9EncoderSettings` (e.g., `CBR`, `VBR`, `CQP`, `ICQ`).
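A VP9 configuration sketch combining the shared and VP9-specific properties listed above (the values are illustrative):
```csharp
// Illustrative values; property names come from the lists above.
var vp9Settings = new VP9EncoderSettings
{
    TargetBitrate = 2500,              // Kbits/s
    RateControl = VPXRateControl.VBR,
    KeyFrameMaxDistance = 120,
    NumOfThreads = 4,
    RowMultithread = true,             // VP9-specific: multi-threaded row encoding
    TileColumns = 2                    // VP9-specific: log2 value
};
var vpxEncoder = new VPXEncoderBlock(vp9Settings);
```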
### Block info
Name: VPXEncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | VP8/VP9 | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->VPXEncoderBlock;
VPXEncoderBlock-->WebMSinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var vp8EncoderBlock = new VPXEncoderBlock(new VP8EncoderSettings());
pipeline.Connect(fileSource.VideoOutput, vp8EncoderBlock.Input);
var webmSinkBlock = new WebMSinkBlock(new WebMSinkSettings(@"output.webm"));
pipeline.Connect(vp8EncoderBlock.Output, webmSinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## MPEG2 encoder
`MPEG-2`: A widely used standard for video and audio compression, commonly found in DVDs, digital television broadcasts (like DVB and ATSC), and SVCDs. It offers good quality at relatively low bitrates for standard definition content.
### Block info
Name: MPEG2EncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | video/mpeg | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->MPEG2EncoderBlock;
MPEG2EncoderBlock-->MPEGTSSinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var mpeg2EncoderBlock = new MPEG2EncoderBlock(new MPEG2VideoEncoderSettings());
pipeline.Connect(fileSource.VideoOutput, mpeg2EncoderBlock.Input);
// Example: Using an MPGSinkBlock for .mpg or .ts files
var mpgSinkBlock = new MPGSinkBlock(new MPGSinkSettings(@"output.mpg"));
pipeline.Connect(mpeg2EncoderBlock.Output, mpgSinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux.
## MPEG4 encoder
`MPEG-4 Part 2 Visual` (often referred to simply as MPEG-4 video) is a video compression standard that is part of the MPEG-4 suite. It is used in various applications, including streaming video, video conferencing, and codec implementations such as DivX and Xvid.
### Block info
Name: MPEG4EncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | video/mpeg, mpegversion=4 | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->MPEG4EncoderBlock;
MPEG4EncoderBlock-->MP4SinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4"; // Input file
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var mpeg4EncoderBlock = new MPEG4EncoderBlock(new MPEG4VideoEncoderSettings());
pipeline.Connect(fileSource.VideoOutput, mpeg4EncoderBlock.Input);
// Example: Using an MP4SinkBlock for .mp4 files
var mp4SinkBlock = new MP4SinkBlock(new MP4SinkSettings(@"output_mpeg4.mp4"));
pipeline.Connect(mpeg4EncoderBlock.Output, mp4SinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux.
## Apple ProRes encoder
`Apple ProRes`: A high-quality, lossy video compression format developed by Apple Inc., widely used in professional video production and post-production workflows for its excellent balance of image quality and performance.
### Block info
Name: AppleProResEncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | ProRes | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->AppleProResEncoderBlock;
AppleProResEncoderBlock-->MOVSinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var proResEncoderBlock = new AppleProResEncoderBlock(new AppleProResEncoderSettings());
pipeline.Connect(fileSource.VideoOutput, proResEncoderBlock.Input);
var movSinkBlock = new MOVSinkBlock(new MOVSinkSettings(@"output.mov"));
pipeline.Connect(proResEncoderBlock.Output, movSinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
### Platforms
macOS, iOS.
### Availability
You can check if the Apple ProRes encoder is available in your environment using:
```csharp
bool available = AppleProResEncoderBlock.IsAvailable();
```
## WMV encoder
### Overview
The WMV encoder block encodes video in WMV format.
### Block info
Name: WMVEncoderBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | video/x-wmv | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->WMVEncoderBlock;
WMVEncoderBlock-->ASFSinkBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var wmvEncoderBlock = new WMVEncoderBlock(new WMVEncoderSettings());
pipeline.Connect(fileSource.VideoOutput, wmvEncoderBlock.Input);
var asfSinkBlock = new ASFSinkBlock(new ASFSinkSettings(@"output.wmv"));
pipeline.Connect(wmvEncoderBlock.Output, asfSinkBlock.CreateNewInput(MediaBlockPadMediaType.Video));
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux.
## General Video Settings Considerations
While specific encoder settings classes provide detailed control, a few concepts recur across the encoders on this page: target bitrate versus constant-quality rate control, keyframe (GOP) interval, and speed/quality presets. Checking a settings class's availability at runtime before constructing its encoder block keeps pipelines portable across different hardware.
---END OF PAGE---
# Local File: .\dotnet\mediablocks\VideoProcessing\index.md
---
title: Video Processing & Effects Blocks for .Net
description: Discover a wide array of video processing and visual effects blocks available in the Media Blocks SDK for .Net. Learn how to implement color adjustments, deinterlacing, image/text overlays, geometric transformations, and many other real-time video enhancements in your .Net applications.
sidebar_label: Video Processing and Effects
---
# Video processing blocks
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## Table of Contents
- [Color effects](#color-effects)
- [Deinterlace](#deinterlace)
- [Fish eye](#fish-eye)
- [Flip/Rotate](#fliprotate)
- [Gamma](#gamma)
- [Gaussian blur](#gaussian-blur)
- [Image overlay](#image-overlay)
- [Mirror](#mirror)
- [Perspective](#perspective)
- [Pinch](#pinch)
- [Resize](#resize)
- [Rotate](#rotate)
- [Video sample grabber](#video-sample-grabber)
- [Sphere](#sphere)
- [Square](#square)
- [Stretch](#stretch)
- [Text overlay](#text-overlay)
- [Tunnel](#tunnel)
- [Twirl](#twirl)
- [Video balance](#video-balance)
- [Video mixer](#video-mixer)
- [Water ripple](#water-ripple)
- [D3D11 Video Converter](#d3d11-video-converter)
- [Video Effects (Windows)](#video-effects-windows)
- [D3D11 Video Compositor](#d3d11-video-compositor)
- [VR360 Processor](#vr360-processor)
## Color effects
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The block performs basic color processing of video frames. Available presets include fake heat-camera toning, sepia toning, invert with a slight blue shade, cross-processing toning, and a yellow-foreground/blue-background color filter.
### Block info
Name: ColorEffectsBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->ColorEffectsBlock;
ColorEffectsBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
// Sepia
var colorEffects = new ColorEffectsBlock(ColorEffectsPreset.Sepia);
pipeline.Connect(fileSource.VideoOutput, colorEffects.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(colorEffects.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## Deinterlace
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The block deinterlaces interlaced video frames into progressive video frames. Several methods of processing are available.
Use the DeinterlaceSettings class to configure the block.
### Block info
Name: DeinterlaceBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->DeinterlaceBlock;
DeinterlaceBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var deinterlace = new DeinterlaceBlock(new DeinterlaceSettings());
pipeline.Connect(fileSource.VideoOutput, deinterlace.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(deinterlace.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
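To select a specific processing method rather than the default, configure the settings object before creating the block; a hedged sketch (the `Method` property and `DeinterlaceMethod` enum names are assumptions; verify them in the `DeinterlaceSettings` API reference):
```csharp
// Hypothetical property and enum names; check the DeinterlaceSettings API reference
var settings = new DeinterlaceSettings
{
    Method = DeinterlaceMethod.YADIF
};
var deinterlace = new DeinterlaceBlock(settings);
```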
### Platforms
Windows, macOS, Linux, iOS, Android.
## Fish eye
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The fisheye block simulates a fisheye lens by zooming in on the center of the image and compressing the edges.
### Block info
Name: FishEyeBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->FishEyeBlock;
FishEyeBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var fishEye = new FishEyeBlock();
pipeline.Connect(fileSource.VideoOutput, fishEye.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(fishEye.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## Flip/Rotate
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The block flips and rotates the video stream.
Use the VideoFlipRotateMethod enumeration to configure.
### Block info
Name: FlipRotateBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->FlipRotateBlock;
FlipRotateBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
// 90 degree rotation
var flipRotate = new FlipRotateBlock(VideoFlipRotateMethod.Method90R);
pipeline.Connect(fileSource.VideoOutput, flipRotate.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(flipRotate.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## Gamma
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The block performs gamma correction on a video stream.
### Block info
Name: GammaBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->GammaBlock;
GammaBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
// Gamma correction value (1.0 leaves the image unchanged)
var gamma = new GammaBlock(2.0);
pipeline.Connect(fileSource.VideoOutput, gamma.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(gamma.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## Gaussian blur
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The block blurs the video stream using the Gaussian function.
### Block info
Name: GaussianBlurBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->GaussianBlurBlock;
GaussianBlurBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var gaussianBlur = new GaussianBlurBlock();
pipeline.Connect(fileSource.VideoOutput, gaussianBlur.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(gaussianBlur.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## Image overlay
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The block overlays an image loaded from a file onto a video stream.
You can set the image position and an optional alpha (transparency) value. 32-bit images with an alpha channel are supported.
### Block info
Name: ImageOverlayBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->ImageOverlayBlock;
ImageOverlayBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var imageOverlay = new ImageOverlayBlock(@"logo.png");
pipeline.Connect(fileSource.VideoOutput, imageOverlay.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(imageOverlay.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
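To position the overlay or make it semi-transparent, set the corresponding properties before connecting the block; a hedged sketch (the `X`, `Y`, and `Alpha` property names are assumptions; verify them in the `ImageOverlayBlock` API reference):
```csharp
// Hypothetical property names; check the ImageOverlayBlock API reference
var imageOverlay = new ImageOverlayBlock(@"logo.png")
{
    X = 10,     // left offset in pixels
    Y = 10,     // top offset in pixels
    Alpha = 0.8 // 1.0 = opaque, 0.0 = fully transparent
};
```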
### Platforms
Windows, macOS, Linux, iOS, Android.
## Mirror
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The mirror block splits the image into two halves and reflects one over the other.
### Block info
Name: MirrorBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->MirrorBlock;
MirrorBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var mirrorBlock = new MirrorBlock(MirrorMode.Top);
pipeline.Connect(fileSource.VideoOutput, mirrorBlock.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(mirrorBlock.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## Perspective
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The perspective block applies a 2D perspective transform.
### Block info
Name: PerspectiveBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->PerspectiveBlock;
PerspectiveBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
// 3x3 perspective transform matrix in row-major order (placeholder values shown)
var persBlock = new PerspectiveBlock(new int[] { 1, 2, 3, 4, 5, 6, 7, 8, 9 });
pipeline.Connect(fileSource.VideoOutput, persBlock.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(persBlock.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## Pinch
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The block performs the pinch geometric transform of the image.
### Block info
Name: PinchBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->PinchBlock;
PinchBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var pinchBlock = new PinchBlock();
pipeline.Connect(fileSource.VideoOutput, pinchBlock.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(pinchBlock.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## Rotate
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The block rotates the image by a specified angle.
### Block info
Name: RotateBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->RotateBlock;
RotateBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
// Rotation angle (0.7 here; assumed to be radians, verify the unit in the RotateBlock API reference)
var rotateBlock = new RotateBlock(0.7);
pipeline.Connect(fileSource.VideoOutput, rotateBlock.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(rotateBlock.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## Resize
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The block resizes the video stream. You can configure the resize method, the letterbox flag, and many other options.
Use the `ResizeVideoEffect` class to configure.
### Block info
Name: VideoResizeBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->VideoResizeBlock;
VideoResizeBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var videoResize = new VideoResizeBlock(new ResizeVideoEffect(1280, 720) { Letterbox = false });
pipeline.Connect(fileSource.VideoOutput, videoResize.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(videoResize.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## Video sample grabber
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The video sample grabber raises an event for each video frame. You can save or process each received frame.
### Block info
Name: VideoSampleGrabberBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->VideoSampleGrabberBlock;
VideoSampleGrabberBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var videoSG = new VideoSampleGrabberBlock();
videoSG.OnVideoFrameBuffer += VideoSG_OnVideoFrameBuffer;
pipeline.Connect(fileSource.VideoOutput, videoSG.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(videoSG.Output, videoRenderer.Input);
await pipeline.StartAsync();

// Event handler: declare it as a class member, not inside the method above
private void VideoSG_OnVideoFrameBuffer(object sender, VideoFrameBufferEventArgs e)
{
    // Save or process the received video frame here
}
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## Sphere
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The sphere block applies a sphere geometric transform to the video.
### Block info
Name: SphereBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->SphereBlock;
SphereBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var sphereBlock = new SphereBlock();
pipeline.Connect(fileSource.VideoOutput, sphereBlock.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(sphereBlock.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## Square
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The square block distorts the center part of the video into a square.
### Block info
Name: SquareBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->SquareBlock;
SquareBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var squareBlock = new SquareBlock(new SquareVideoEffect());
pipeline.Connect(fileSource.VideoOutput, squareBlock.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(squareBlock.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## Stretch
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The stretch block stretches the video in a circle around the center point.
### Block info
Name: StretchBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->StretchBlock;
StretchBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var stretchBlock = new StretchBlock();
pipeline.Connect(fileSource.VideoOutput, stretchBlock.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(stretchBlock.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## Text overlay
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The block adds a text overlay on top of the video stream.
### Block info
Name: TextOverlayBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->TextOverlayBlock;
TextOverlayBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var textOverlay = new TextOverlayBlock(new TextOverlaySettings("Hello world!"));
pipeline.Connect(fileSource.VideoOutput, textOverlay.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(textOverlay.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## Tunnel
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The block applies a light tunnel effect to a video stream.
### Block info
Name: TunnelBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->TunnelBlock;
TunnelBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var tunnelBlock = new TunnelBlock();
pipeline.Connect(fileSource.VideoOutput, tunnelBlock.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(tunnelBlock.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## Twirl
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The twirl block twists the video frame from the center out.
### Block info
Name: TwirlBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->TwirlBlock;
TwirlBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var twirlBlock = new TwirlBlock();
pipeline.Connect(fileSource.VideoOutput, twirlBlock.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(twirlBlock.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## Video balance
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The block processes the video stream and allows you to change brightness, contrast, hue, and saturation.
Use the VideoBalanceVideoEffect class to configure the block settings.
### Block info
Name: VideoBalanceBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->VideoBalanceBlock;
VideoBalanceBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var videoBalance = new VideoBalanceBlock(new VideoBalanceVideoEffect() { Brightness = 0.25 });
pipeline.Connect(fileSource.VideoOutput, videoBalance.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(videoBalance.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## Video mixer
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The video mixer block has several inputs and one output. The block draws the inputs in the selected order at the selected positions. You can also set the desired level of transparency for each stream.
### Block info
Name: VideoMixerBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1 or more
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock#1-->VideoMixerBlock;
UniversalSourceBlock#2-->VideoMixerBlock;
VideoMixerBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
// Define source files
var filename1 = "test.mp4"; // Replace with your first video file
var fileSource1 = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename1)));
var filename2 = "test2.mp4"; // Replace with your second video file
var fileSource2 = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename2)));
// Configure VideoMixerSettings with output resolution and frame rate
// For example, 1280x720 resolution at 30 frames per second
var outputWidth = 1280;
var outputHeight = 720;
var outputFrameRate = new VideoFrameRate(30);
var mixerSettings = new VideoMixerSettings(outputWidth, outputHeight, outputFrameRate);
// Add streams to the mixer
// Stream 1: Main video, occupies the full output frame, Z-order 0 (bottom layer)
mixerSettings.AddStream(new VideoMixerStream(new Rect(0, 0, outputWidth, outputHeight), 0));
// Stream 2: Overlay video, smaller rectangle, positioned at (50,50), Z-order 1 (on top)
// Rectangle: left=50, top=50, width=320, height=180
mixerSettings.AddStream(new VideoMixerStream(new Rect(50, 50, 320, 180), 1));
// Create the VideoMixerBlock
var videoMixer = new VideoMixerBlock(mixerSettings);
// Connect source outputs to VideoMixerBlock inputs
pipeline.Connect(fileSource1.VideoOutput, videoMixer.Inputs[0]);
pipeline.Connect(fileSource2.VideoOutput, videoMixer.Inputs[1]);
// Create a VideoRendererBlock to display the mixed video
// VideoView1 is a placeholder for your UI element (e.g., a WPF control)
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(videoMixer.Output, videoRenderer.Input);
// Start the pipeline
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
### Video Mixer Types and Configuration
The Media Blocks SDK offers several types of video mixers, allowing you to choose the best fit for your application's performance needs and target platform capabilities. These include CPU-based, Direct3D 11, and OpenGL mixers.
All mixer settings classes inherit from `VideoMixerBaseSettings`, which defines common properties like output resolution (`Width`, `Height`), `FrameRate`, and the list of `Streams` to be mixed.
#### 1. CPU-based Video Mixer (VideoMixerSettings)
This is the default video mixer and relies on CPU processing for mixing video streams. It is platform-agnostic and a good general-purpose option.
To use the CPU-based mixer, you instantiate `VideoMixerSettings`:
```csharp
// Output resolution 1920x1080 at 30 FPS
var outputWidth = 1920;
var outputHeight = 1080;
var outputFrameRate = new VideoFrameRate(30);
var mixerSettings = new VideoMixerSettings(outputWidth, outputHeight, outputFrameRate);
// Add streams (see example in the main Video Mixer section)
// mixerSettings.AddStream(new VideoMixerStream(new Rect(0, 0, outputWidth, outputHeight), 0));
// ...
var videoMixer = new VideoMixerBlock(mixerSettings);
```
#### 2. Direct3D 11 Video Compositor (D3D11VideoCompositorSettings)
For Windows applications, the `D3D11VideoCompositorSettings` provides hardware-accelerated video mixing using Direct3D 11. This can offer significant performance improvements, especially with high-resolution video or a large number of streams.
```csharp
// Output resolution 1920x1080 at 30 FPS
var outputWidth = 1920;
var outputHeight = 1080;
var outputFrameRate = new VideoFrameRate(30);
// Optionally, specify the graphics adapter index (-1 for default)
var adapterIndex = -1;
var d3dMixerSettings = new D3D11VideoCompositorSettings(outputWidth, outputHeight, outputFrameRate)
{
AdapterIndex = adapterIndex
};
// Streams are added similarly to VideoMixerSettings
// d3dMixerSettings.AddStream(new VideoMixerStream(new Rect(0, 0, outputWidth, outputHeight), 0));
// For more advanced control, you can use D3D11VideoCompositorStream to specify blend states
// d3dMixerSettings.AddStream(new D3D11VideoCompositorStream(new Rect(50, 50, 320, 180), 1)
// {
// BlendSourceRGB = D3D11CompositorBlend.SourceAlpha,
// BlendDestRGB = D3D11CompositorBlend.InverseSourceAlpha
// });
// ...
var videoMixer = new VideoMixerBlock(d3dMixerSettings);
```
The `D3D11VideoCompositorStream` class, which inherits from `VideoMixerStream`, allows for fine-grained control over D3D11 blend states if needed.
#### 3. OpenGL Video Mixer (GLVideoMixerSettings)
The `GLVideoMixerSettings` enables hardware-accelerated video mixing using OpenGL. This is a cross-platform solution for leveraging GPU capabilities on Windows, macOS, and Linux.
```csharp
// Output resolution 1920x1080 at 30 FPS
var outputWidth = 1920;
var outputHeight = 1080;
var outputFrameRate = new VideoFrameRate(30);
var glMixerSettings = new GLVideoMixerSettings(outputWidth, outputHeight, outputFrameRate);
// Streams are added similarly to VideoMixerSettings
// glMixerSettings.AddStream(new VideoMixerStream(new Rect(0, 0, outputWidth, outputHeight), 0));
// For more advanced control, you can use GLVideoMixerStream to specify blend functions and equations
// glMixerSettings.AddStream(new GLVideoMixerStream(new Rect(50, 50, 320, 180), 1)
// {
// BlendFunctionSourceRGB = GLVideoMixerBlendFunction.SourceAlpha,
// BlendFunctionDesctinationRGB = GLVideoMixerBlendFunction.OneMinusSourceAlpha,
// BlendEquationRGB = GLVideoMixerBlendEquation.Add
// });
// ...
var videoMixer = new VideoMixerBlock(glMixerSettings);
```
The `GLVideoMixerStream` class, inheriting from `VideoMixerStream`, provides properties to control OpenGL-specific blending parameters.
Choosing the appropriate mixer depends on your application's requirements. For simple mixing or maximum compatibility, the CPU-based mixer is suitable. For performance-critical applications on Windows, D3D11 is recommended. For cross-platform GPU acceleration, OpenGL is the preferred choice.
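Because all mixer settings classes share the `VideoMixerBaseSettings` base type, you can also select the implementation at runtime; a short sketch reusing the settings classes shown above (assumes .NET 5+ for `OperatingSystem.IsWindows()`):
```csharp
// Choose a mixer implementation based on the host platform at runtime
var outputWidth = 1920;
var outputHeight = 1080;
var outputFrameRate = new VideoFrameRate(30);
VideoMixerBaseSettings mixerSettings;
if (OperatingSystem.IsWindows())
{
    // Hardware-accelerated compositor on Windows
    mixerSettings = new D3D11VideoCompositorSettings(outputWidth, outputHeight, outputFrameRate);
}
else
{
    // Cross-platform GPU mixing elsewhere
    mixerSettings = new GLVideoMixerSettings(outputWidth, outputHeight, outputFrameRate);
}
// Add streams and create the block as in the examples above
// mixerSettings.AddStream(new VideoMixerStream(new Rect(0, 0, outputWidth, outputHeight), 0));
var videoMixer = new VideoMixerBlock(mixerSettings);
```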
## Water ripple
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The water ripple block creates a water ripple effect on the video stream.
Use the `WaterRippleVideoEffect` class to configure.
### Block info
Name: WaterRippleBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->WaterRippleBlock;
WaterRippleBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var wrBlock = new WaterRippleBlock(new WaterRippleVideoEffect());
pipeline.Connect(fileSource.VideoOutput, wrBlock.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(wrBlock.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows, macOS, Linux, iOS, Android.
## D3D11 Video Converter
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The D3D11 Video Converter block performs hardware-accelerated video format conversion using Direct3D 11. This is useful for efficient color space or format changes on Windows platforms.
### Block info
Name: D3D11VideoConverterBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->D3D11VideoConverterBlock;
D3D11VideoConverterBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var d3d11Converter = new D3D11VideoConverterBlock();
pipeline.Connect(fileSource.VideoOutput, d3d11Converter.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(d3d11Converter.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows (Direct3D 11 required).
## Video Effects (Windows)
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The Video Effects (Windows) block allows you to add, update, and manage multiple video effects in real time. This block is specific to Windows and leverages the Media Foundation pipeline for effects processing.
### Block info
Name: VideoEffectsWinBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->VideoEffectsWinBlock;
VideoEffectsWinBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var videoEffects = new VideoEffectsWinBlock();
// Example: add a brightness effect
videoEffects.Video_Effects_Add(new VideoEffectBrightness(true, 0.2));
pipeline.Connect(fileSource.VideoOutput, videoEffects.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(videoEffects.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows.
## D3D11 Video Compositor
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The D3D11 Video Compositor block provides hardware-accelerated video mixing and compositing using Direct3D 11. It is designed for high-performance multi-stream video composition on Windows.
### Block info
Name: D3D11VideoCompositorBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1 or more
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock#1-->D3D11VideoCompositorBlock;
UniversalSourceBlock#2-->D3D11VideoCompositorBlock;
D3D11VideoCompositorBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename1 = "test.mp4";
var fileSource1 = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename1)));
var filename2 = "test2.mp4";
var fileSource2 = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename2)));
var outputWidth = 1280;
var outputHeight = 720;
var outputFrameRate = new VideoFrameRate(30);
var settings = new D3D11VideoCompositorSettings(outputWidth, outputHeight, outputFrameRate);
settings.AddStream(new D3D11VideoCompositorStream(new Rect(0, 0, outputWidth, outputHeight), 0));
settings.AddStream(new D3D11VideoCompositorStream(new Rect(50, 50, 320, 180), 1));
var d3d11Compositor = new D3D11VideoCompositorBlock(settings);
pipeline.Connect(fileSource1.VideoOutput, d3d11Compositor.Inputs[0]);
pipeline.Connect(fileSource2.VideoOutput, d3d11Compositor.Inputs[1]);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(d3d11Compositor.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows (Direct3D 11 required).
## VR360 Processor
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
The VR360 Processor block applies 360-degree equirectangular video effects, suitable for VR content. It uses Direct3D 11 for GPU-accelerated processing and allows real-time adjustment of yaw, pitch, roll, and field of view.
### Block info
Name: VR360ProcessorBlock.
Pin direction | Media type | Pins count
--- | :---: | :---:
Input | Uncompressed video | 1
Output | Uncompressed video | 1
### The sample pipeline
```mermaid
graph LR;
UniversalSourceBlock-->VR360ProcessorBlock;
VR360ProcessorBlock-->VideoRendererBlock;
```
### Sample code
```csharp
var pipeline = new MediaBlocksPipeline();
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
var vr360Settings = new D3D11VR360RendererSettings
{
Yaw = 0,
Pitch = 0,
Roll = 0,
FOV = 90
};
var vr360Processor = new VR360ProcessorBlock(vr360Settings);
pipeline.Connect(fileSource.VideoOutput, vr360Processor.Input);
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
pipeline.Connect(vr360Processor.Output, videoRenderer.Input);
await pipeline.StartAsync();
```
### Platforms
Windows (Direct3D 11 required).
---END OF PAGE---
# Local File: .\dotnet\mediablocks\VideoRendering\index.md
---
title: Media Streaming Video Renderer Block SDK
description: Display video streams on multiple platforms (Windows, macOS, Linux, iOS, Android) with DirectX, OpenGL, and Metal support using our Video Renderer Block SDK.
sidebar_label: Video Renderer
---
# Video Renderer Block
[!badge size="xl" target="blank" variant="info" text="Media Blocks SDK .Net"](https://www.visioforge.com/media-blocks-sdk-net)
## Overview
The Video Renderer block is an essential component designed for developers who need to display video streams in their applications. This powerful tool enables you to render video content on specific areas of windows or screens across various platforms and UI frameworks.
The block utilizes a platform-specific visual control called `VideoView` which leverages DirectX technology on Windows systems and typically implements OpenGL rendering on other platforms. The SDK fully supports cross-platform development with compatibility for both Avalonia and MAUI UI frameworks.
One of the key advantages of this block is its flexibility - developers can implement multiple video views and renderers to display the same video stream in different locations simultaneously, whether in separate sections of a window or across multiple windows.
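For example, the same decoded stream can feed two renderers at once; a hedged sketch, assuming a tee-style splitter block (`TeeBlock` here; verify the block name and output-pad API against your SDK version), with `VideoView1` and `VideoView2` standing in for your UI controls:
```csharp
var pipeline = new MediaBlocksPipeline();
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri("test.mp4")));
// Split the decoded video into two branches (TeeBlock name is an assumption)
var tee = new TeeBlock(2);
pipeline.Connect(fileSource.VideoOutput, tee.Input);
// Render each branch in its own view
var renderer1 = new VideoRendererBlock(pipeline, VideoView1);
var renderer2 = new VideoRendererBlock(pipeline, VideoView2);
pipeline.Connect(tee.Outputs[0], renderer1.Input);
pipeline.Connect(tee.Outputs[1], renderer2.Input);
await pipeline.StartAsync();
```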
## Rendering Technologies
### DirectX Integration
On Windows platforms, the Video Renderer Block seamlessly integrates with DirectX for high-performance hardware-accelerated rendering. This integration provides several benefits:
- **Hardware acceleration**: Utilizes the GPU for efficient video processing and rendering
- **Low-latency playback**: Minimizes delay between frame processing and display
- **Direct3D surface sharing**: Enables efficient memory management and reduced copying of video data
- **Multiple display support**: Handles rendering across various display configurations
- **Support for High DPI**: Ensures crisp rendering on high-resolution displays
The renderer automatically selects the appropriate DirectX version based on your system capabilities, supporting DirectX 11 and DirectX 12 where available.
### OpenGL Implementation
For cross-platform compatibility, the Video Renderer uses OpenGL on Linux and older macOS systems:
- **Consistent rendering API**: Provides a unified approach across different operating systems
- **Shader-based processing**: Enables advanced video effects and color transformations
- **Texture mapping optimization**: Efficiently handles video frame presentation
- **Framebuffer objects support**: Allows for off-screen rendering and complex composition
- **Hardware-accelerated scaling**: Delivers high-quality resizing with minimal performance impact
OpenGL ES variants are utilized on mobile platforms to ensure optimal performance while maintaining compatibility with the core rendering pipeline.
### Metal Framework Support
On newer Apple platforms (macOS, iOS, iPadOS), the Video Renderer can leverage Metal - Apple's modern graphics and compute API:
- **Native Apple integration**: Optimized specifically for Apple hardware
- **Reduced CPU overhead**: Minimizes processing bottlenecks compared to OpenGL
- **Enhanced parallel execution**: Better utilizes multi-core processors
- **Improved memory bandwidth**: More efficient video frame handling
- **Integration with Apple's video toolchain**: Seamless interoperability with AV Foundation and Core Video
The renderer automatically selects Metal when available on Apple platforms, falling back to OpenGL when necessary on older versions.
## Technical Specifications
### Block Information
Name: VideoRendererBlock
| Pin direction | Media type | Pins count |
| --- | :---: | :---: |
| Input | Uncompressed video | 1 or more |
## Implementation Guide
### Setting Up Your Video View
The Video View component serves as the visual element where your video content will be displayed. It needs to be properly integrated into your application's UI layout.
### Creating a Basic Pipeline
Below is a visual representation of a simple pipeline implementation:
```mermaid
graph LR;
UniversalSourceBlock-->VideoRendererBlock;
```
This diagram illustrates how a source block connects directly to the video renderer to create a functional video playback system.
### Code Implementation Example
The following sample demonstrates how to implement a basic video rendering pipeline:
```csharp
// Create a pipeline
var pipeline = new MediaBlocksPipeline();
// create a source block
var filename = "test.mp4";
var fileSource = new UniversalSourceBlock(await UniversalSourceSettings.CreateAsync(new Uri(filename)));
// create a video renderer block
var videoRenderer = new VideoRendererBlock(pipeline, VideoView1);
// connect the blocks
pipeline.Connect(fileSource.VideoOutput, videoRenderer.Input);
// start the pipeline
await pipeline.StartAsync();
```
## Platform Compatibility
The Video Renderer block offers wide compatibility across multiple operating systems and devices:
- Windows
- macOS
- Linux
- iOS
- Android
This makes it an ideal solution for developers building cross-platform applications that require consistent video rendering capabilities.
---END OF PAGE---
# Local File: .\dotnet\mediaplayer\deployment.md
---
title: Media Player SDK .Net Deployment Guide
description: Step-by-step deployment instructions for Media Player SDK .Net applications. Learn how to deploy using NuGet packages, silent installers, and manual configuration. Includes runtime dependencies, DirectShow filters, and environment setup for Windows and cross-platform development.
sidebar_label: Deployment Guide
---
# Media Player SDK .Net Deployment Guide
[!badge size="xl" target="blank" variant="info" text="Media Player SDK .Net"](https://www.visioforge.com/media-player-sdk-net)
This comprehensive guide covers all deployment scenarios for the Media Player SDK .Net, ensuring your applications work correctly across different environments. Whether you're developing cross-platform applications or Windows-specific solutions, this guide provides the necessary steps for successful deployment.
## Engine Types Overview
The Media Player SDK .Net offers two primary engine types, each designed for specific deployment scenarios:
### MediaPlayerCoreX Engine (Cross-Platform)
MediaPlayerCoreX is our cross-platform solution that works across multiple operating systems. For detailed deployment instructions specific to this engine, refer to the main [Cross-Platform Deployment Guide](../deployment-x/index.md).
### MediaPlayerCore Engine (Windows-Only)
The MediaPlayerCore engine is optimized specifically for Windows environments. When deploying applications that use this engine on computers without the SDK pre-installed, you must include the necessary SDK components with your application.
> **Important**: For AnyCPU applications, you should deploy both x86 and x64 redistributables to ensure compatibility across different system architectures.
## Deployment Options
There are three primary methods for deploying the Media Player SDK .Net components:
1. Using NuGet packages (recommended for most scenarios)
2. Using automatic silent installers (requires administrative privileges)
3. Manual installation (for complete control over the deployment process)
## NuGet Package Deployment
NuGet packages provide the simplest deployment method, automatically handling the inclusion of necessary files in your application folder during the build process.
### Required NuGet Packages
#### Core Packages (Always Required)
* **SDK Base Package**:
* [x86 Version](http://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.Base.x86/)
* [x64 Version](http://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.Base.x64/)
* **Media Player SDK Package**:
* [x86 Version](http://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.MediaPlayer.x86/)
* [x64 Version](http://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.MediaPlayer.x64/)
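For example, an AnyCPU application that must run on both architectures can add the core packages from the command line (the `dotnet` CLI is shown; the equivalent `Install-Package` commands work in the Package Manager Console):
```
dotnet add package VisioForge.DotNet.Core.Redist.Base.x86
dotnet add package VisioForge.DotNet.Core.Redist.Base.x64
dotnet add package VisioForge.DotNet.Core.Redist.MediaPlayer.x86
dotnet add package VisioForge.DotNet.Core.Redist.MediaPlayer.x64
```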
#### Feature-Specific Packages (Add as Needed)
##### Media Format Support
* **FFMPEG Package** (for file playback using FFMPEG source mode):
* [x86 Version](http://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.FFMPEG.x86/)
* [x64 Version](http://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.FFMPEG.x64/)
* **MP4 Output Package**:
* [x86 Version](http://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.MP4.x86/)
* [x64 Version](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.MP4.x64/)
* **WebM Output Package**:
* [x86 Version](https://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.WebM.x86/)
##### Source Support
* **VLC Source Package** (for file/IP camera sources):
* [x86 Version](http://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.VLC.x86/)
* [x64 Version](http://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.VLC.x64/)
##### Audio Format Support
* **XIPH Formats Package** (Ogg, Vorbis, FLAC output/source):
* [x86 Version](http://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.XIPH.x86/)
* [x64 Version](http://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.XIPH.x64/)
##### Filter Support
* **LAV Filters Package**:
* [x86 Version](http://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.LAV.x86/)
* [x64 Version](http://www.nuget.org/packages/VisioForge.DotNet.Core.Redist.LAV.x64/)
## Automatic Silent Installers
For scenarios where you prefer installer-based deployment, the SDK offers automatic silent installers that require administrative privileges.
### Available Installers
#### Core Components
* **Base Package** (always required):
* [x86 Installer](http://files.visioforge.com/redists_net/redist_dotnet_base_x86.exe)
* [x64 Installer](http://files.visioforge.com/redists_net/redist_dotnet_base_x64.exe)
#### Media Format Support
* **FFMPEG Package** (for file/IP camera sources):
* [x86 Installer](http://files.visioforge.com/redists_net/redist_dotnet_ffmpeg_x86.exe)
* [x64 Installer](http://files.visioforge.com/redists_net/redist_dotnet_ffmpeg_x64.exe)
#### Source Support
* **VLC Source Package** (for file/IP camera sources):
* [x86 Installer](http://files.visioforge.com/redists_net/redist_dotnet_vlc_x86.exe)
* [x64 Installer](http://files.visioforge.com/redists_net/redist_dotnet_vlc_x64.exe)
#### Audio Format Support
* **XIPH Formats Package** (Ogg, Vorbis, FLAC output/source):
* [x86 Installer](http://files.visioforge.com/redists_net/redist_dotnet_xiph_x86.exe)
* [x64 Installer](http://files.visioforge.com/redists_net/redist_dotnet_xiph_x64.exe)
#### Filter Support
* **LAV Filters Package**:
* [x86 Installer](http://files.visioforge.com/redists_net/redist_dotnet_lav_x86.exe)
* [x64 Installer](http://files.visioforge.com/redists_net/redist_dotnet_lav_x64.exe)
> **Note**: To uninstall any installed package, run the executable with administrative privileges using the parameters: `/x //`
## Manual Installation
For advanced deployment scenarios requiring precise control over component installation, follow these steps:
### Step 1: Runtime Dependencies
* **With Administrative Privileges**: Install the VC++ 2022 (v143) runtime (x86/x64) and OpenMP runtime DLLs using redistributable executables or MSM modules.
* **Without Administrative Privileges**: Copy the VC++ 2022 (v143) runtime (x86/x64) and OpenMP runtime DLLs directly to your application folder.
### Step 2: Core Components
* Copy the VisioForge_MFP/VisioForge_MFPX (or x64 versions) DLLs from the Redist\Filters directory to your application folder.
### Step 3: .NET Assemblies
* Either copy the .NET assemblies to your application folder or install them to the Global Assembly Cache (GAC).
### Step 4: DirectShow Filters
* Copy and COM-register SDK DirectShow filters using [regsvr32.exe](https://support.microsoft.com/en-us/help/249873/how-to-use-the-regsvr32-tool-and-troubleshoot-regsvr32-error-messages) or another suitable method.
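For example, to register one of the filters listed below silently from an elevated command prompt (`/s` suppresses the confirmation dialog):
```
regsvr32 /s VisioForge_Video_Effects_Pro.ax
```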
### Step 5: Environment Configuration
* Add the folder containing the filters to the system PATH environment variable if your application executable is located in a different directory.
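If you cannot modify the system PATH, you can extend it for the current process at application startup instead; a minimal sketch (the filters path is a placeholder for your own deployment layout):
```csharp
// Prepend the folder containing the SDK filters to PATH for this process only,
// so dependent native DLLs can be resolved. Adjust the path to your layout.
var filtersPath = @"C:\MyApp\Filters"; // placeholder
var currentPath = Environment.GetEnvironmentVariable("PATH");
Environment.SetEnvironmentVariable("PATH", filtersPath + ";" + currentPath);
```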
## DirectShow Filter Configuration
The SDK uses various DirectShow filters for specific functionality. Below is a comprehensive list organized by feature category:
### Basic Feature Filters
* VisioForge_Video_Effects_Pro.ax
* VisioForge_MP3_Splitter.ax
* VisioForge_H264_Decoder.ax
* VisioForge_Audio_Mixer.ax
### Audio Effect Filters
* VisioForge_Audio_Effects_4.ax (legacy audio effects)
### Streaming Support Filters
#### RTSP Streaming
* VisioForge_RTSP_Sink.ax
* MP4 filters (legacy/modern, excluding muxer)
#### SSF Streaming
* VisioForge_SSF_Muxer.ax
* MP4 filters (legacy/modern, excluding muxer)
### Source Filters
#### VLC Source
* VisioForge_VLC_Source.ax
* Complete Redist\VLC folder with COM registration
* VLC_PLUGIN_PATH environment variable pointing to VLC\plugins folder
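The variable can be set system-wide or, as sketched below, for the current process before the VLC source is first used (the plugins path is a placeholder for your own deployment layout):
```csharp
// Point VLC_PLUGIN_PATH at the deployed VLC\plugins folder (placeholder path)
Environment.SetEnvironmentVariable("VLC_PLUGIN_PATH", @"C:\MyApp\VLC\plugins");
```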
#### FFMPEG Source
* VisioForge_FFMPEG_Source.ax
* Complete Redist\FFMPEG folder, added to the Windows PATH variable
#### Memory Source
* VisioForge_AsyncEx.ax
#### WebM Decoding
* VisioForge_WebM_Ogg_Source.ax
* VisioForge_WebM_Source.ax
* VisioForge_WebM_Split.ax
* VisioForge_WebM_Vorbis_Decoder.ax
* VisioForge_WebM_VP8_Decoder.ax
* VisioForge_WebM_VP9_Decoder.ax
#### Network Streaming Sources
* VisioForge_RTSP_Source.ax
* VisioForge_RTSP_Source_Live555.ax
* FFMPEG, VLC or LAV filters
#### Audio Format Sources
* VisioForge_Xiph_FLAC_Source.ax (FLAC source)
* VisioForge_Xiph_Ogg_Demux2.ax (Ogg Vorbis source)
* VisioForge_Xiph_Vorbis_Decoder.ax (Ogg Vorbis source)
### Special Feature Filters
#### Video Encryption
* VisioForge_Encryptor_v8.ax
* VisioForge_Encryptor_v9.ax
#### GPU Acceleration
* VisioForge_DXP.dll / VisioForge_DXP64.dll (DirectX 11 GPU video effects)
#### LAV Source
* Complete contents of redist\LAV\x86(x64), with all .ax files registered
### Filter Registration Tip
To simplify the COM registration process for all DirectShow filters in a directory, place the "reg_special.exe" file from the SDK redist into the filters folder and run it with administrative privileges.
---
For more code samples and examples, visit our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples).
---END OF PAGE---
# Local File: .\dotnet\mediaplayer\index.md
---
title: Media Player SDK .Net (MediaPlayerCore)
description: SDK usage tutorials for VisioForge Media Player SDK .Net
sidebar_label: Media Player SDK .Net
order: 13
---
# Media Player SDK .Net
[!badge size="xl" target="blank" variant="info" text="Media Player SDK .Net"](https://www.visioforge.com/media-player-sdk-net)
Media Player SDK .Net is a video player SDK with a wide range of features.
The SDK can use several decoding engines to play video and audio files, including FFMPEG, VLC, and DirectShow. Most video and audio formats are supported by the FFMPEG engine.
You can play files, network streams, 360-degree videos, and, optionally, DVD and Blu-Ray disks.
## Features
- Video and audio playback
- Video effects
- Audio effects
- Text overlays
- Image overlays
- SVG overlays
- Brightness, contrast, saturation, hue, and other video adjustments
- Sepia, pixelate, grayscale, and other video filters
You can check the full list of features on the [product page](https://www.visioforge.com/media-player-sdk-net).
## Sample applications
You can use WPF code in WinForms applications and vice versa. Most of the code is the same across UI frameworks, including Avalonia and MAUI; the main difference is the VideoView control available for each UI framework.
### MediaPlayerCoreX engine (cross-platform)
#### Avalonia
- [Simple Media Player](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK%20X/Avalonia/Simple%20Media%20Player) shows basic playback functionality in Avalonia
#### Android
- [Simple Media Player](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK%20X/Android/MediaPlayer) shows basic playback functionality in Android
### MediaPlayerCore engine (Windows only)
#### WPF
- [Simple Player](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/WPF/CSharp/Simple%20Player%20Demo) shows basic playback functionality
- [Main Demo](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/WPF/CSharp/Main%20Demo) shows all features of the SDK
- [Nvidia Maxine Player](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/WPF/CSharp/Nvidia%20Maxine%20Player) uses the Nvidia Maxine engine
- [Skinned Player](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/WPF/CSharp/Skinned%20Player) shows how to use custom skins
- [madVR Demo](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/WPF/CSharp/madVR%20Demo) uses the madVR video renderer
#### WinForms
- [Audio Player](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/WinForms/CSharp/Audio%20Player) shows how to play audio files
- [DVD Player](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/WinForms/CSharp/DVD%20Player) shows how to play DVDs
- [Encrypted Memory Playback Demo](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/WinForms/CSharp/Encrypted%20Memory%20Playback%20Demo) shows how to play an encrypted file from memory
- [Karaoke Demo](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/WinForms/CSharp/Karaoke%20Demo) shows how to play audio karaoke files
- [Main Demo](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/WinForms/CSharp/Main%20Demo) shows all features of the SDK
- [Memory Stream](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/WinForms/CSharp/Memory%20Stream) shows how to play files from memory
- [Multiple Video Streams](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/WinForms/CSharp/Multiple%20Video%20Streams) shows how to play files with multiple video streams
- [Seamless Playback](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/WinForms/CSharp/Seamless%20Playback) shows how to play files without delays
- [Simple Video Player](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/WinForms/CSharp/Simple%20Video%20Player) shows basic playback functionality
- [Two Windows](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/WinForms/CSharp/Two%20Windows) shows how to play files in two windows
- [VR 360 Demo](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/WinForms/CSharp/VR%20360%20Demo) shows how to play 360-degree videos
- [Video Mixing Demo](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/WinForms/CSharp/Video%20Mixing%20Demo) shows how to mix video files
- [YouTube Player](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/WinForms/CSharp/YouTube%20Player%20Demo) shows how to play YouTube videos (with an open license)
- [madVR Demo](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/WinForms/CSharp/madVR%20Demo) uses the madVR to render video
#### WinUI
- [Simple Media Player](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/WinUI/CSharp/Simple%20Media%20Player%20WinUI) shows basic playback functionality
#### Code snippets
- [Memory Playback](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/_CodeSnippets/memory-playback) shows how to play files from memory
- [Read File Info](https://github.com/visioforge/.Net-SDK-s-samples/tree/master/Media%20Player%20SDK/_CodeSnippets/read-file-info) shows how to read file information
## Documentation
- [Code samples](code-samples/index.md)
- [Deployment](deployment.md)
- [API](https://api.visioforge.com/dotnet/api/index.html)
## Links
- [Changelog](../changelog.md)
- [End User License Agreement](../../eula.md)
---END OF PAGE---
# Local File: .\dotnet\mediaplayer\code-samples\get-frame-from-video-file.md
---
title: Extracting Video Frames in .NET - Complete Guide
description: Learn how to extract and capture specific frames from video files using .NET libraries. This tutorial covers multiple approaches with code examples for both Windows-specific and cross-platform solutions for developers working with video processing.
sidebar_label: Extract Video Frames from Files
order: 1
---
# Extracting Video Frames from Video Files in .NET
[!badge size="xl" target="blank" variant="info" text="Media Player SDK .Net"](https://www.visioforge.com/media-player-sdk-net)
Video frame extraction is a common requirement in many multimedia applications. Whether you're building a video editing tool, creating thumbnails, or performing video analysis, extracting specific frames from video files is an essential capability. This guide explains different approaches to capturing frames from video files in .NET applications.
## Why Extract Video Frames?
There are numerous use cases for video frame extraction:
- Creating thumbnail images for video galleries
- Extracting key frames for video analysis
- Generating preview images at specific timestamps
- Building video editing tools with frame-by-frame precision
- Creating timelapse sequences from video footage
- Capturing still images from video recordings
## Understanding Video Frame Extraction
Video files contain sequences of frames displayed at specific intervals to create the illusion of motion. When extracting a frame, you're essentially capturing a single image at a specific timestamp within the video. This process involves:
1. Opening the video file
2. Seeking to the specific timestamp
3. Decoding the frame data
4. Converting it to an image format
## Frame Extraction Methods in .NET
There are several approaches to extract frames from video files in .NET, depending on your requirements and environment.
### Using Windows-Specific SDK Components
For Windows-only applications, the classic SDK components offer straightforward methods for frame extraction:
```csharp
using System;
using System.Drawing;

using VisioForge.Core.MediaPlayer;
using VisioForge.Core.VideoEdit;

// Using VideoEditCore for frame extraction
public void ExtractFrameWithVideoEditCore()
{
    var videoEdit = new VideoEditCore();
    var bitmap = videoEdit.Helpful_GetFrameFromFile("C:\\Videos\\sample.mp4", TimeSpan.FromSeconds(5));
    bitmap.Save("C:\\Output\\frame.png");
}

// Using MediaPlayerCore for frame extraction
public void ExtractFrameWithMediaPlayerCore()
{
    var mediaPlayer = new MediaPlayerCore();
    var bitmap = mediaPlayer.Helpful_GetFrameFromFile("C:\\Videos\\sample.mp4", TimeSpan.FromSeconds(10));
    bitmap.Save("C:\\Output\\frame.png");
}
```
The `Helpful_GetFrameFromFile` method simplifies the process by handling the file opening, seeking, and frame decoding operations in a single call.
### Cross-Platform Solutions with X-Engine
Modern .NET applications often need to run on multiple platforms. The X-engine provides cross-platform capabilities for video frame extraction:
#### Extracting Frames as System.Drawing.Bitmap
The most common approach is to extract frames as `System.Drawing.Bitmap` objects:
```csharp
using System;
using System.Drawing;

using VisioForge.Core.MediaInfo;

public void ExtractFrameAsBitmap()
{
    // Extract the frame at the beginning of the video (TimeSpan.Zero)
    var bitmap = MediaInfoReaderX.GetFileSnapshotBitmap("C:\\Videos\\sample.mp4", TimeSpan.Zero);

    // Extract a frame at 30 seconds into the video
    var frame30sec = MediaInfoReaderX.GetFileSnapshotBitmap("C:\\Videos\\sample.mp4", TimeSpan.FromSeconds(30));

    // Save the extracted frames
    bitmap.Save("C:\\Output\\first-frame.png");
    frame30sec.Save("C:\\Output\\frame-30sec.png");
}
```
#### Extracting Frames as SkiaSharp Bitmaps
For applications using SkiaSharp for graphics processing, you can extract frames directly as `SKBitmap` objects:
```csharp
using System;
using System.IO;

using SkiaSharp;
using VisioForge.Core.MediaInfo;

public void ExtractFrameAsSkiaBitmap()
{
    // Extract the frame at 15 seconds into the video
    var skBitmap = MediaInfoReaderX.GetFileSnapshotSKBitmap("C:\\Videos\\sample.mp4", TimeSpan.FromSeconds(15));

    // Encode the SKBitmap as PNG and save it to disk
    using (var image = SKImage.FromBitmap(skBitmap))
    using (var data = image.Encode(SKEncodedImageFormat.Png, 100))
    using (var stream = File.OpenWrite("C:\\Output\\frame-skia.png"))
    {
        data.SaveTo(stream);
    }
}
}
```
#### Working with Raw RGB Data
For more advanced scenarios or when you need direct pixel manipulation, you can extract frames as RGB byte arrays:
```csharp
using System;

using VisioForge.Core.MediaInfo;

public void ExtractFrameAsRGBArray()
{
    // Extract the frame at 20 seconds as an RGB byte array
    var rgbData = MediaInfoReaderX.GetFileSnapshotRGB("C:\\Videos\\sample.mp4", TimeSpan.FromSeconds(20));

    // Process the RGB data as needed.
    // The buffer contains packed R, G, B values for each pixel;
    // you also need the frame width and height to interpret it correctly.
}
```
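As a small illustration, here is a sketch of pixel indexing, assuming a packed 24-bit RGB layout with no row padding and that `width` and `height` were obtained separately (for example, from the file's metadata):

```csharp
// Read the R, G, B components of the pixel at (x, y) from a packed 24-bit buffer.
// Assumes 3 bytes per pixel and no row padding.
public static (byte R, byte G, byte B) GetPixel(byte[] rgbData, int width, int x, int y)
{
    int index = (y * width + x) * 3;
    return (rgbData[index], rgbData[index + 1], rgbData[index + 2]);
}
```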
## Best Practices for Video Frame Extraction
When implementing video frame extraction in your applications, consider these best practices:
### Performance Considerations
- Extracting frames can be CPU-intensive, especially for high-resolution videos
- Consider implementing caching mechanisms for frequently accessed frames (see the caching sketch after the parallel example below)
- For batch extraction, implement parallel processing where appropriate
```csharp
using System;
using System.Threading.Tasks;

using VisioForge.Core.MediaInfo;

// Example of parallel frame extraction
public void ExtractMultipleFramesInParallel(string videoPath, TimeSpan[] timestamps)
{
    Parallel.ForEach(timestamps, timestamp =>
    {
        var bitmap = MediaInfoReaderX.GetFileSnapshotBitmap(videoPath, timestamp);
        bitmap.Save($"C:\\Output\\frame-{timestamp.TotalSeconds}.png");
    });
}
```
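For the caching bullet above, a minimal sketch is a thread-safe dictionary keyed by file path and timestamp; cache sizing and eviction are left out for brevity:

```csharp
using System;
using System.Collections.Concurrent;
using System.Drawing;

using VisioForge.Core.MediaInfo;

public class FrameCache
{
    private readonly ConcurrentDictionary<(string Path, TimeSpan Position), Bitmap> _cache =
        new ConcurrentDictionary<(string, TimeSpan), Bitmap>();

    // Returns a cached frame if available; otherwise extracts it once and stores it
    public Bitmap GetFrame(string videoPath, TimeSpan position)
    {
        return _cache.GetOrAdd((videoPath, position),
            key => MediaInfoReaderX.GetFileSnapshotBitmap(key.Path, key.Position));
    }
}
```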
### Error Handling
Always implement proper error handling when working with video files:
```csharp
using System;
using System.Drawing;
using System.IO;

using VisioForge.Core.MediaInfo;

public Bitmap SafeExtractFrame(string videoPath, TimeSpan position)
{
    try
    {
        return MediaInfoReaderX.GetFileSnapshotBitmap(videoPath, position);
    }
    catch (FileNotFoundException)
    {
        Console.WriteLine("Video file not found");
    }
    catch (InvalidOperationException)
    {
        Console.WriteLine("Invalid position in video");
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Error extracting frame: {ex.Message}");
    }

    return null;
}
```
### Memory Management
Proper memory management is crucial, especially when working with large video files:
```csharp
using System;
using System.Drawing;

using VisioForge.Core.MediaInfo;

public void ExtractFrameWithProperDisposal()
{
    Bitmap bitmap = null;
    try
    {
        bitmap = MediaInfoReaderX.GetFileSnapshotBitmap("C:\\Videos\\sample.mp4", TimeSpan.FromSeconds(5));
        // Process the bitmap...
    }
    finally
    {
        bitmap?.Dispose();
    }
}
```
## Common Applications
Frame extraction is used in various multimedia applications:
- **Video Players**: Generating preview thumbnails
- **Media Libraries**: Creating video thumbnails for gallery views
- **Video Analysis**: Extracting frames for computer vision processing
- **Content Management**: Creating preview images for video assets
- **Video Editing**: Providing visual reference for timeline editing
## Conclusion
Extracting frames from video files is a powerful capability for .NET developers working with multimedia content. Whether you're building Windows-specific applications or cross-platform solutions, the methods described in this guide provide efficient ways to capture and work with video frames.
By understanding the different approaches and following best practices, you can implement robust frame extraction functionality in your .NET applications.
---
For more code samples and examples, visit our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples).
---END OF PAGE---
# Local File: .\dotnet\mediaplayer\code-samples\index.md
---
title: .NET Media Player SDK Code Examples & Tutorials
description: Explore our extensive library of .NET Media Player SDK code examples for C# and VB.NET developers. Learn to implement video playback, frame extraction, playlists, and more in WinForms, WPF, Console, and Service applications with detailed tutorials and practical implementations.
sidebar_label: Code Examples
---
# .NET Media Player SDK Implementation Examples
[!badge size="xl" target="blank" variant="info" text="Media Player SDK .Net"](https://www.visioforge.com/media-player-sdk-net)
## Getting Started with Code Examples
This resource contains a rich collection of implementation examples for the Media Player SDK .Net, demonstrating the powerful capabilities and diverse functionalities available to developers working with video and audio playback in .NET applications.
### Multi-Language Support
Our examples are meticulously developed in both C# and VB.Net programming languages, showcasing the flexibility and developer-friendly nature of the MediaPlayerCore engine. Each example has been thoughtfully crafted to illustrate real-world scenarios and implementation strategies, enabling developers to quickly master the core concepts needed for effective SDK integration.
### Cross-Platform Application Integration
The provided code examples cover an extensive range of application frameworks, including:
- **WinForms applications** for traditional desktop interfaces
- **WPF applications** for modern UI experiences
- **Console applications** for command-line utilities
- **Windows Service applications** for background processing
Whether you're building feature-rich desktop software, efficient command-line tools, or robust background services, these examples provide valuable guidance throughout your development journey. The examples serve as both learning resources and practical references for troubleshooting and performance optimization in your media applications.
## Featured Implementation Examples
### Video Processing Examples
- [How to get a specific frame from a video file?](get-frame-from-video-file.md)
- [How to play a fragment of the source file?](play-fragment-file.md)
- [How to show the first frame?](show-first-frame.md)
### Advanced Playback Examples
- [Memory playback implementation](memory-playback.md)
- [Playlist API integration](playlist-api.md)
- [Previous frame and reverse video playback](reverse-video-playback.md)
---
## Additional Resources
For a more extensive collection of code examples and implementation scenarios, visit our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples).
---END OF PAGE---
# Local File: .\dotnet\mediaplayer\code-samples\memory-playback.md
---
title: Memory Playback Implementation in .NET Media Player SDK
description: Learn how to implement memory-based media playback in C# applications using stream objects, byte arrays, and memory management techniques. This guide provides code examples and best practices for efficient memory handling during audio and video playback.
sidebar_label: Memory Playback
order: 2
---
# Memory-Based Media Playback in .NET Applications
[!badge size="xl" target="blank" variant="info" text="Media Player SDK .Net"](https://www.visioforge.com/media-player-sdk-net)
## Introduction to Memory-Based Media Playback
Memory-based playback offers a powerful alternative to traditional file-based media consumption in .NET applications. By loading and processing media directly from memory, developers can achieve more responsive playback, enhanced security through reduced file access, and greater flexibility in handling different data sources.
This guide explores the various approaches to implementing memory-based playback in your .NET applications, complete with code examples and best practices.
## Advantages of Memory-Based Media Playback
Before diving into implementation details, let's understand why memory-based playback is valuable:
- **Improved performance**: By eliminating file I/O operations during playback, your application can deliver smoother media experiences.
- **Enhanced security**: Media content doesn't need to be stored as accessible files on the filesystem.
- **Stream processing**: Work with data from various sources, including network streams, encrypted content, or dynamically generated media.
- **Virtual file systems**: Implement custom media access patterns without filesystem dependencies.
- **In-memory transformations**: Apply real-time modifications to media content before playback.
## Implementation Approaches
### Stream-Based Playback from Existing Files
The most straightforward approach to memory-based playback begins with existing media files that you load into memory streams. This technique is ideal when you want the performance benefits of memory playback while still maintaining your content in traditional file formats.
```cs
// Create a FileStream from an existing media file
var fileStream = new FileStream(mediaFilePath, FileMode.Open);
// Convert to a managed IStream for the media player
var managedStream = new ManagedIStream(fileStream);
// Configure stream settings for your content
bool videoPresent = true;
bool audioPresent = true;
// Set the memory stream as the media source
MediaPlayer1.Source_MemoryStream = new MemoryStreamSource(
managedStream,
videoPresent,
audioPresent,
fileStream.Length
);
// Set the player to memory playback mode
MediaPlayer1.Source_Mode = MediaPlayerSourceMode.Memory_DS;
// Start playback
await MediaPlayer1.PlayAsync();
```
When using this approach, remember to properly dispose of the FileStream when playback is complete to prevent resource leaks.
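One way to handle this, sketched here under the assumption that your player exposes an asynchronous stop call (as other examples in this guide do), is to keep the stream in a field and dispose of it once playback ends:

```cs
// Keep the stream in a field so it outlives the method that starts playback
private FileStream _sourceStream;

private async Task StopMemoryPlaybackAsync()
{
    await MediaPlayer1.StopAsync();

    // Release the underlying file only after the player has stopped reading from it
    _sourceStream?.Dispose();
    _sourceStream = null;
}
```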
### Byte Array-Based Playback
For scenarios where your media content already exists as a byte array in memory (perhaps downloaded from a network source or decrypted from protected storage), you can play directly from this data structure:
```cs
// Assume 'mediaBytes' is a byte array containing your media content
byte[] mediaBytes = GetMediaContent();
// Create a MemoryStream from the byte array
using (var memoryStream = new MemoryStream(mediaBytes))
{
// Convert to a managed IStream
var managedStream = new ManagedIStream(memoryStream);
// Configure stream settings based on your content
bool videoPresent = true; // Set to false for audio-only content
bool audioPresent = true; // Set to false for video-only content
// Create and assign the memory stream source
MediaPlayer1.Source_MemoryStream = new MemoryStreamSource(
managedStream,
videoPresent,
audioPresent,
memoryStream.Length
);
// Set memory playback mode
MediaPlayer1.Source_Mode = MediaPlayerSourceMode.Memory_DS;
// Begin playback
await MediaPlayer1.PlayAsync();
// Additional playback handling code...
}
```
This technique is particularly useful when dealing with content that should never be written to disk for security reasons.
### Advanced: Custom Stream Implementations
For more complex scenarios, you can implement custom stream handlers that provide media data from any source you can imagine:
```cs
// Example of a custom stream provider backed by an in-memory buffer
public class CustomMediaStreamProvider : Stream
{
    private readonly byte[] _buffer;
    private long _position;

    // Constructor might take a custom data source; IDataSource and its
    // ReadAllBytes method are placeholders for whatever supplies your media bytes
    public CustomMediaStreamProvider(IDataSource dataSource)
    {
        _buffer = dataSource.ReadAllBytes();
    }

    public override bool CanRead => true;
    public override bool CanSeek => true;
    public override bool CanWrite => false;
    public override long Length => _buffer.Length;
    public override long Position { get => _position; set => _position = value; }

    public override int Read(byte[] buffer, int offset, int count)
    {
        // Copy up to 'count' bytes from the internal buffer to the caller
        var available = (int)Math.Min(count, _buffer.Length - _position);
        if (available <= 0)
        {
            return 0;
        }

        Array.Copy(_buffer, _position, buffer, offset, available);
        _position += available;
        return available;
    }

    public override long Seek(long offset, SeekOrigin origin)
    {
        _position = origin switch
        {
            SeekOrigin.Begin => offset,
            SeekOrigin.Current => _position + offset,
            _ => _buffer.Length + offset
        };
        return _position;
    }

    public override void Flush() { }
    public override void SetLength(long value) => throw new NotSupportedException();
    public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException();
}

// Usage example:
var customStream = new CustomMediaStreamProvider(myDataSource);
var managedStream = new ManagedIStream(customStream);

MediaPlayer1.Source_MemoryStream = new MemoryStreamSource(
    managedStream,
    hasVideo,
    hasAudio,
    customStream.Length // stream length in bytes
);
```
## Performance Considerations
When implementing memory-based playback, keep these performance factors in mind:
1. **Memory allocation**: For large media files, ensure your application has sufficient memory available.
2. **Buffering strategy**: Consider implementing a sliding buffer for very large files rather than loading the entire content into memory.
3. **Garbage collection impact**: Large memory allocations can trigger garbage collection, potentially causing playback stuttering.
4. **Thread synchronization**: If providing stream data from another thread or async source, ensure proper synchronization to prevent playback issues.
## Error Handling Best Practices
Robust error handling is critical when implementing memory-based playback:
```cs
try
{
var fileStream = new FileStream(mediaFilePath, FileMode.Open);
var managedStream = new ManagedIStream(fileStream);
MediaPlayer1.Source_MemoryStream = new MemoryStreamSource(
managedStream,
true,
true,
fileStream.Length
);
MediaPlayer1.Source_Mode = MediaPlayerSourceMode.Memory_DS;
await MediaPlayer1.PlayAsync();
}
catch (FileNotFoundException ex)
{
LogError("Media file not found", ex);
DisplayUserFriendlyError("The requested media file could not be found.");
}
catch (UnauthorizedAccessException ex)
{
LogError("Access denied to media file", ex);
DisplayUserFriendlyError("You don't have permission to access this media file.");
}
catch (Exception ex)
{
LogError("Unexpected playback error", ex);
DisplayUserFriendlyError("An error occurred during media playback.");
}
finally
{
// Ensure resources are properly cleaned up
CleanupResources();
}
```
## Required Dependencies
To successfully implement memory-based playback using the Media Player SDK, ensure you have these dependencies:
- Base redistributable components
- SDK redistributable components
For more information on installing or deploying these dependencies to your users' systems, refer to our [deployment guide](../deployment.md).
## Advanced Scenarios
### Encrypted Media Playback
For applications dealing with protected content, you can integrate decryption into your memory-based playback pipeline:
```cs
// Read encrypted content
byte[] encryptedContent = File.ReadAllBytes(encryptedMediaPath);
// Decrypt the content
byte[] decryptedContent = DecryptMedia(encryptedContent, encryptionKey);
// Play from decrypted memory without writing to disk
using (var memoryStream = new MemoryStream(decryptedContent))
{
var managedStream = new ManagedIStream(memoryStream);
// Continue with standard memory playback setup...
}
```
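The `DecryptMedia` helper above is a placeholder; its real shape depends on how the content was protected. A minimal AES-CBC sketch (note the extra IV parameter, which the placeholder signature omits) might look like this:

```cs
using System.IO;
using System.Security.Cryptography;

// Hypothetical AES-CBC decryption helper; key and IV management is up to you
private static byte[] DecryptMedia(byte[] encryptedContent, byte[] key, byte[] iv)
{
    using (var aes = Aes.Create())
    {
        aes.Key = key;
        aes.IV = iv;

        using (var decryptor = aes.CreateDecryptor())
        using (var input = new MemoryStream(encryptedContent))
        using (var crypto = new CryptoStream(input, decryptor, CryptoStreamMode.Read))
        using (var output = new MemoryStream())
        {
            crypto.CopyTo(output);
            return output.ToArray();
        }
    }
}
```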
### Network Streaming to Memory
Pull content from network sources directly into memory for playback:
```cs
using (HttpClient client = new HttpClient())
{
// Download media content
byte[] mediaContent = await client.GetByteArrayAsync(mediaUrl);
// Play from memory
using (var memoryStream = new MemoryStream(mediaContent))
{
// Continue with standard memory playback setup...
}
}
```
## Conclusion
Memory-based media playback provides a flexible and powerful approach for .NET applications requiring enhanced performance, security, or custom media handling. By understanding the implementation options and following best practices for resource management, you can deliver smooth and responsive media experiences to your users.
For more sample code and advanced implementations, visit our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples).
---END OF PAGE---
# Local File: .\dotnet\mediaplayer\code-samples\play-fragment-file.md
---
title: Play Video & Audio File Segments in C# .NET Apps
description: Complete guide to implementing precise media fragment playback in your C# applications using .NET Media Player SDK. Learn how to create time-based segments in videos and audio files with step-by-step code examples for both Windows and cross-platform applications.
sidebar_label: Playing Media File Fragments
order: 3
---
# Playing Media File Fragments: Implementation Guide for .NET Developers
[!badge size="xl" target="blank" variant="info" text="Media Player SDK .Net"](https://www.visioforge.com/media-player-sdk-net)
When developing media applications, one frequently requested feature is the ability to play specific segments of a video or audio file. This functionality is crucial for creating video editors, highlight reels, educational platforms, or any application requiring precise media segment playback.
## Understanding Fragment Playback in .NET Applications
Fragment playback allows you to define specific time segments of a media file for playback, effectively creating clips without modifying the source file. This technique is particularly useful when you need to:
- Create preview segments from longer media files
- Focus on specific sections of instructional videos
- Create looping segments for demonstrations or presentations
- Build clip-based media players for sports highlights or video compilations
- Implement training applications that focus on specific video segments
The Media Player SDK .NET provides two primary engines for implementing fragment playback, each with its own approach and platform compatibility considerations.
## Windows-Only Implementation: MediaPlayerCore Engine
The MediaPlayerCore engine provides a straightforward implementation for Windows applications. This solution works across WPF, WinForms, and console applications but is limited to Windows operating systems.
### Setting Up Fragment Playback
To implement fragment playback with the MediaPlayerCore engine, you'll need to follow three key steps:
1. Activate the selection mode on your MediaPlayer instance
2. Define the starting position of your fragment (in milliseconds)
3. Define the ending position of your fragment (in milliseconds)
### Implementation Example
The following C# code demonstrates how to configure fragment playback to play only the segment between 2000ms and 5000ms of your source file:
```csharp
// Step 1: Enable fragment selection mode
MediaPlayer1.Selection_Active = true;
// Step 2: Set the starting position to 2000 milliseconds (2 seconds)
MediaPlayer1.Selection_Start = TimeSpan.FromMilliseconds(2000);
// Step 3: Set the ending position to 5000 milliseconds (5 seconds)
MediaPlayer1.Selection_Stop = TimeSpan.FromMilliseconds(5000);
// When you call Play() or PlayAsync(), only the specified fragment will play
```
When your application calls the Play or PlayAsync method after setting these properties, the player will automatically jump to the selection start position and stop playback when it reaches the selection end position.
### Required Redistributables for Windows Implementation
For the MediaPlayerCore engine implementation to function correctly, you must include:
- Base redistributable package
- SDK redistributable package
These packages contain the necessary components for the Windows-based playback functionality. For detailed information on deploying these redistributables to end-user machines, refer to the [deployment documentation](../deployment.md).
## Cross-Platform Implementation: MediaPlayerCoreX Engine
For developers requiring fragment playback functionality across multiple platforms, the MediaPlayerCoreX engine provides a more versatile solution. This implementation works across Windows, macOS, iOS, Android, and Linux environments.
### Setting Up Cross-Platform Fragment Playback
The cross-platform implementation follows a similar conceptual approach but uses different property names. The key steps include:
1. Creating a MediaPlayerCoreX instance
2. Loading your media source
3. Defining the segment start and stop positions
4. Initiating playback
### Cross-Platform Implementation Example
The following example demonstrates how to implement fragment playback in a cross-platform .NET application:
```csharp
// Step 1: Create a new instance of MediaPlayerCoreX with your video view
MediaPlayerCoreX MediaPlayer1 = new MediaPlayerCoreX(VideoView1);
// Step 2: Set the source media file
var fileSource = await UniversalSourceSettings.CreateAsync(new Uri("video.mkv"));
await MediaPlayer1.OpenAsync(fileSource);
// Step 3: Define the segment start time (2 seconds from beginning)
MediaPlayer1.Segment_Start = TimeSpan.FromMilliseconds(2000);
// Step 4: Define the segment end time (5 seconds from beginning)
MediaPlayer1.Segment_Stop = TimeSpan.FromMilliseconds(5000);
// Step 5: Start playback of the defined segment
await MediaPlayer1.PlayAsync();
```
This implementation uses the Segment_Start and Segment_Stop properties instead of the Selection properties used in the Windows-only implementation. Also note the asynchronous approach used in the cross-platform example, which improves UI responsiveness.
## Advanced Fragment Playback Techniques
### Dynamic Fragment Adjustment
In more complex applications, you might need to adjust fragment boundaries dynamically. Both engines support changing the segment boundaries during runtime:
```csharp
// For Windows-only implementation
private void UpdateFragmentBoundaries(int startMs, int endMs)
{
MediaPlayer1.Selection_Start = TimeSpan.FromMilliseconds(startMs);
MediaPlayer1.Selection_Stop = TimeSpan.FromMilliseconds(endMs);
// If playback is in progress, restart it to apply the new boundaries
if (MediaPlayer1.State == PlaybackState.Playing)
{
MediaPlayer1.Position_Set(MediaPlayer1.Selection_Start);
}
}
// For cross-platform implementation
private async Task UpdateFragmentBoundariesAsync(int startMs, int endMs)
{
MediaPlayer1.Segment_Start = TimeSpan.FromMilliseconds(startMs);
MediaPlayer1.Segment_Stop = TimeSpan.FromMilliseconds(endMs);
// If playback is in progress, restart from the new start position
if (await MediaPlayer1.StateAsync() == PlaybackState.Playing)
{
await MediaPlayer1.Position_SetAsync(MediaPlayer1.Segment_Start);
}
}
```
### Multiple Fragment Playback
For applications that need to play multiple fragments sequentially, you can implement a fragment queue:
```csharp
public class MediaFragment
{
public TimeSpan StartTime { get; set; }
public TimeSpan EndTime { get; set; }
}
private Queue<MediaFragment> fragmentQueue = new Queue<MediaFragment>();
private bool isProcessingQueue = false;
private bool stopHandlerAttached = false;
// Add fragments to the queue
public void EnqueueFragment(TimeSpan start, TimeSpan end)
{
fragmentQueue.Enqueue(new MediaFragment { StartTime = start, EndTime = end });
if (!isProcessingQueue && MediaPlayer1 != null)
{
PlayNextFragment();
}
}
// Process the fragment queue
private async void PlayNextFragment()
{
if (fragmentQueue.Count == 0)
{
isProcessingQueue = false;
return;
}
isProcessingQueue = true;
var fragment = fragmentQueue.Dequeue();
// Set the fragment boundaries
MediaPlayer1.Segment_Start = fragment.StartTime;
MediaPlayer1.Segment_Stop = fragment.EndTime;
// Subscribe to the completion event once; subscribing on every call
// would stack duplicate handlers and advance the queue too quickly
if (!stopHandlerAttached)
{
MediaPlayer1.OnStop += (s, e) => PlayNextFragment();
stopHandlerAttached = true;
}
// Start playback
await MediaPlayer1.PlayAsync();
}
```
### Performance Considerations
For optimal performance when using fragment playback, consider the following tips:
1. For frequent seeking between fragments, use formats with good keyframe density
2. MP4 and MOV files generally perform better for fragment-heavy applications
3. Setting fragments at keyframe boundaries improves seeking performance
4. Consider preloading files before setting fragment boundaries
5. On mobile platforms, keep fragments reasonably sized to avoid memory pressure
## Conclusion
Implementing fragment playback in your .NET media applications provides substantial flexibility and enhanced user experience. Whether you're developing for Windows only or targeting multiple platforms, the Media Player SDK .NET offers robust solutions for precise media segment playback.
By leveraging the techniques demonstrated in this guide, you can create sophisticated media experiences that allow users to focus on exactly the content they need, without the overhead of editing or splitting source files.
For more code samples and implementations, visit our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples) where you'll find comprehensive examples of media player implementations, including fragment playback and other advanced media features.
---END OF PAGE---
# Local File: .\dotnet\mediaplayer\code-samples\playlist-api.md
---
title: Media Player SDK .Net Playlist API Guide
description: Learn how to implement powerful playlist functionality in your .NET applications using our Media Player SDK. Complete code examples for WinForms, WPF, and console applications with step-by-step implementation guide.
sidebar_label: Playlist API
---
# Complete Guide to Playlist API Implementation in .NET
[!badge size="xl" target="blank" variant="info" text="Media Player SDK .Net"](https://www.visioforge.com/media-player-sdk-net) [!badge variant="dark" size="xl" text="MediaPlayerCore"]
## Introduction to Playlist Management
The Playlist API provides a powerful and flexible way to manage media content in your .NET applications. Whether you're developing a music player, video application, or any media-centric software, efficient playlist management is essential for delivering a seamless user experience.
This guide covers everything you need to know about implementing playlist functionality using the MediaPlayerCore component. You'll learn how to create playlists, navigate between tracks, handle playlist events, and optimize performance in various .NET environments.
## Key Features and Benefits
- **Simple Integration**: Easy-to-implement API that integrates seamlessly with existing .NET applications
- **Format Compatibility**: Support for a wide range of audio and video formats
- **Cross-Platform**: Works consistently across WinForms, WPF, and console applications
- **Performance Optimized**: Built for efficient memory usage and responsive playback
- **Event-Driven Architecture**: Rich event system for building reactive UI experiences
## Getting Started with Playlist API
Before diving into specific methods, ensure you have properly initialized the MediaPlayer component in your application. The following sections contain practical code examples that you can implement directly in your project.
### Creating Your First Playlist
Creating a playlist is the first step in managing multiple media files. The API provides straightforward methods to add files to your playlist collection:
```csharp
// Initialize the media player (assuming you've added the component to your form)
// this.mediaPlayer1 = new MediaPlayer();
// Add individual files to the playlist
this.mediaPlayer1.Playlist_Add(@"c:\media\intro.mp4");
this.mediaPlayer1.Playlist_Add(@"c:\media\main_content.mp4");
this.mediaPlayer1.Playlist_Add(@"c:\media\conclusion.mp4");
// Start playback from the first item
this.mediaPlayer1.Play();
```
This approach allows you to build playlists programmatically, which is ideal for applications where playlist content is determined at runtime.
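For instance, a minimal sketch that fills the playlist from a folder at runtime (the folder path and MP4 filter are illustrative):

```csharp
using System.IO;

// Add every MP4 file from a folder to the playlist, then start playback
private void LoadFolderIntoPlaylist(string folderPath)
{
    foreach (var file in Directory.GetFiles(folderPath, "*.mp4"))
    {
        this.mediaPlayer1.Playlist_Add(file);
    }

    this.mediaPlayer1.Play();
}
```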
## Core Playlist Operations
### Navigating Through Playlist Items
Once you've created a playlist, your users will need to navigate between items. The API provides intuitive methods for moving to the next or previous file:
```csharp
// Play the next file in the playlist
this.mediaPlayer1.Playlist_PlayNext();
// Play the previous file in the playlist
this.mediaPlayer1.Playlist_PlayPrevious();
```
These methods automatically handle the transition between media files, including stopping the current playback and starting the new item.
### Managing Playlist Content
During application runtime, you may need to modify the playlist by removing specific items or clearing it entirely:
```csharp
// Remove a specific file from the playlist
this.mediaPlayer1.Playlist_Remove(@"c:\media\intro.mp4");
// Clear all items from the playlist
this.mediaPlayer1.Playlist_Clear();
```
This dynamic content management allows your application to adapt to user preferences or changing requirements on the fly.
### Retrieving Playlist Information
Accessing information about the current state of the playlist is crucial for building an informative user interface:
```csharp
// Get the current file's index (0-based)
int currentIndex = this.mediaPlayer1.Playlist_GetPosition();
// Get the total number of files in the playlist
int totalFiles = this.mediaPlayer1.Playlist_GetCount();
// Get a specific filename by its index
string fileName = this.mediaPlayer1.Playlist_GetFilename(1); // Gets the second file
// Display current playback information
string statusMessage = $"Playing file {currentIndex + 1} of {totalFiles}: {fileName}";
```
These methods enable you to create dynamic interfaces that reflect the current state of media playback.
## Advanced Playlist Control
### Resetting and Repositioning
For more precise control over playlist navigation, you can reset the playlist or jump to a specific position:
```csharp
// Reset the playlist to start from the first file
this.mediaPlayer1.Playlist_Reset();
// Jump to a specific position in the playlist (0-based index)
this.mediaPlayer1.Playlist_SetPosition(2); // Jump to the third item
```
These methods are particularly useful for implementing features like "restart playlist" or allowing users to select specific items from a playlist view.
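For example, here is a sketch of a "play selected item" handler built on the methods above; the `playlistListBox` control is hypothetical:

```csharp
// Jump to and play the item the user double-clicked in a playlist ListBox
private void playlistListBox_DoubleClick(object sender, EventArgs e)
{
    int selectedIndex = this.playlistListBox.SelectedIndex;
    if (selectedIndex >= 0)
    {
        this.mediaPlayer1.Playlist_SetPosition(selectedIndex);
        this.mediaPlayer1.Play();
    }
}
```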
### Custom Event Handling for Playlist Navigation
To create a responsive application, you'll want to implement custom event handling for playlist navigation. Since the MediaPlayerCore doesn't have a dedicated playlist item changed event, you can create your own tracking mechanism using the existing events:
```csharp
private int _lastPlaylistIndex = -1;
// Track playlist position changes when playback starts
private void mediaPlayer1_OnStart(object sender, EventArgs e)
{
int currentIndex = this.mediaPlayer1.Playlist_GetPosition();
if (currentIndex != _lastPlaylistIndex)
{
_lastPlaylistIndex = currentIndex;
// Handle playlist item change
string currentFile = this.mediaPlayer1.Playlist_GetFilename(currentIndex);
UpdatePlaylistUI(currentIndex, currentFile);
}
}
// Also track when a new file playback starts
private void mediaPlayer1_OnNewFilePlaybackStarted(object sender, NewFilePlaybackEventArgs e)
{
int currentIndex = this.mediaPlayer1.Playlist_GetPosition();
_lastPlaylistIndex = currentIndex;
// Handle playlist item change
string currentFile = this.mediaPlayer1.Playlist_GetFilename(currentIndex);
UpdatePlaylistUI(currentIndex, currentFile);
}
// Handle playlist completion
private void mediaPlayer1_OnPlaylistFinished(object sender, EventArgs e)
{
// Handle playlist completion
this.lblPlaybackStatus.Text = "Playlist finished";
// Optionally reset or loop playlist
// this.mediaPlayer1.Playlist_Reset();
// this.mediaPlayer1.Play();
}
private void UpdatePlaylistUI(int index, string filename)
{
// Update UI elements with new information
this.lblCurrentTrack.Text = $"Now playing: {Path.GetFileName(filename)}";
this.lblTrackNumber.Text = $"Track {index + 1} of {this.mediaPlayer1.Playlist_GetCount()}";
// Update playlist selection in UI
// ...
}
```
This approach allows you to detect and respond to playlist navigation events in your application by subscribing to the actual events provided by MediaPlayerCore:
```csharp
// Subscribe to events
this.mediaPlayer1.OnStart += mediaPlayer1_OnStart;
this.mediaPlayer1.OnNewFilePlaybackStarted += mediaPlayer1_OnNewFilePlaybackStarted;
this.mediaPlayer1.OnPlaylistFinished += mediaPlayer1_OnPlaylistFinished;
```
### Async Playlist Operations
The MediaPlayerCore provides async versions of playlist navigation methods for improved responsiveness:
```csharp
// Play the next file asynchronously
await this.mediaPlayer1.Playlist_PlayNextAsync();
// Play the previous file asynchronously
await this.mediaPlayer1.Playlist_PlayPreviousAsync();
```
Using these async methods is recommended for UI applications to prevent blocking the main thread during playback transitions.
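A typical usage, sketched with a hypothetical "Next" button handler:

```csharp
// Advance to the next playlist item without blocking the UI thread
private async void btnNext_Click(object sender, EventArgs e)
{
    await this.mediaPlayer1.Playlist_PlayNextAsync();
}
```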
## Implementation Patterns and Best Practices
### Implementing Repeat and Shuffle Modes
Most media players include repeat and shuffle functionality. Here's how to implement these common features:
```csharp
private bool repeatEnabled = false;
private bool shuffleEnabled = false;
private Random random = new Random();
// Handle playlist navigation when media playback stops
private void MediaPlayer1_OnStop(object sender, StopEventArgs e)
{
// Check if this is the end of media (not a manual stop)
if (e.Reason == StopReason.EndOfMedia)
{
if (repeatEnabled)
{
// Just replay the current item
this.mediaPlayer1.Play();
}
else if (shuffleEnabled)
{
// Play a random item
int totalFiles = this.mediaPlayer1.Playlist_GetCount();
int randomIndex = random.Next(0, totalFiles);
this.mediaPlayer1.Playlist_SetPosition(randomIndex);
this.mediaPlayer1.Play();
}
else
{
// Standard behavior: play next if available
if (this.mediaPlayer1.Playlist_GetPosition() < this.mediaPlayer1.Playlist_GetCount() - 1)
{
this.mediaPlayer1.Playlist_PlayNext();
}
else
{
// We've reached the end of the playlist
// OnPlaylistFinished will be triggered
}
}
}
}
// Subscribe to the stop event (e.g., in your form's constructor or Load handler)
this.mediaPlayer1.OnStop += MediaPlayer1_OnStop;
```
### Memory Management for Large Playlists
When dealing with large playlists, consider implementing lazy loading techniques:
```csharp
// Store playlist information separately for large playlists
private List<string> masterPlaylist = new List<string>();
public void LoadLargePlaylist(string[] filePaths)
{
// Clear existing playlist
this.mediaPlayer1.Playlist_Clear();
masterPlaylist.Clear();
// Store all paths
masterPlaylist.AddRange(filePaths);
// Load only the first batch of items (e.g., 100)
int initialBatchSize = Math.Min(100, filePaths.Length);
for (int i = 0; i < initialBatchSize; i++)
{
this.mediaPlayer1.Playlist_Add(filePaths[i]);
}
// Start playback
this.mediaPlayer1.Play();
}
// Implement dynamic loading as user approaches the end of loaded items
private void CheckAndLoadMoreItems()
{
int currentPosition = this.mediaPlayer1.Playlist_GetPosition();
int loadedCount = this.mediaPlayer1.Playlist_GetCount();
// If we're near the end of loaded items but have more in master playlist
if (currentPosition > loadedCount - 10 && loadedCount < masterPlaylist.Count)
{
// Load next batch
int nextBatchSize = Math.Min(50, masterPlaylist.Count - loadedCount);
for (int i = 0; i < nextBatchSize; i++)
{
this.mediaPlayer1.Playlist_Add(masterPlaylist[loadedCount + i]);
}
}
}
```
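To keep the loaded window moving, `CheckAndLoadMoreItems` needs to run as playback advances; one option is to hook it to the `OnNewFilePlaybackStarted` event used earlier in this guide:

```csharp
// Top up the playlist each time playback moves to a new file
this.mediaPlayer1.OnNewFilePlaybackStarted += (s, e) => CheckAndLoadMoreItems();
```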
## Cross-Platform Considerations
The Playlist API functions consistently across different .NET environments, but there are some platform-specific considerations:
### WPF Implementation Notes
When implementing in WPF applications, you'll typically use data binding with your playlist:
```csharp
// Create an observable collection to bind to UI
private ObservableCollection<PlaylistItem> observablePlaylist = new ObservableCollection<PlaylistItem>();
// Sync the observable collection with the player's playlist
private void SyncObservablePlaylist()
{
observablePlaylist.Clear();
for (int i = 0; i < this.mediaPlayer1.Playlist_GetCount(); i++)
{
string filename = this.mediaPlayer1.Playlist_GetFilename(i);
observablePlaylist.Add(new PlaylistItem
{
Index = i,
FileName = System.IO.Path.GetFileName(filename),
FullPath = filename
});
}
}
```
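The `PlaylistItem` model referenced above is not part of the SDK; a minimal version might look like this:

```csharp
// Simple view model for one playlist entry (application-defined, not an SDK type)
public class PlaylistItem
{
    public int Index { get; set; }
    public string FileName { get; set; }
    public string FullPath { get; set; }
}
```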
## Conclusion
The Playlist API provides a robust foundation for building feature-rich media applications in .NET. By using the methods and patterns outlined in this guide, you can create intuitive playlist management systems that enhance the user experience of your application.
For more advanced scenarios, explore the additional capabilities of the MediaPlayerCore component, including custom event handling, media metadata extraction, and format-specific optimizations.
---END OF PAGE---
# Local File: .\dotnet\mediaplayer\code-samples\reverse-video-playback.md
---
title: Reverse Video Playback for .NET Applications
description: Master reverse video playback in .NET applications with detailed C# code examples, frame-by-frame navigation techniques, performance optimization tips, and platform-specific implementations for both cross-platform and Windows-specific scenarios.
sidebar_label: Reverse Video Playback
order: 4
---
# Implementing Reverse Video Playback in .NET Applications
Playing video in reverse is a powerful feature for media applications, allowing users to review content, create unique visual effects, or enhance the user experience with non-linear playback options. This guide provides complete implementations for reverse playback in .NET applications, focusing on both cross-platform and Windows-specific solutions.
## Understanding Reverse Playback Mechanisms
Reverse video playback can be achieved through several techniques, each with distinct advantages depending on your application's requirements:
1. **Rate-based reverse playback** - Setting a negative playback rate to reverse the video stream
2. **Frame-by-frame backward navigation** - Manually stepping backward through cached video frames
3. **Buffer-based approaches** - Creating a frame buffer to enable smooth reverse navigation
Let's explore how to implement each approach using the Media Player SDK for .NET.
## Cross-Platform Reverse Playback with MediaPlayerCoreX
The MediaPlayerCoreX engine provides cross-platform support for reverse video playback with a straightforward implementation. This approach works across Windows, macOS, and other supported platforms.
### Basic Implementation
The simplest method for reverse playback involves setting a negative rate value:
```cs
// Create new instance of MediaPlayerCoreX
MediaPlayerCoreX MediaPlayer1 = new MediaPlayerCoreX(VideoView1);
// Set the source file
var fileSource = await UniversalSourceSettings.CreateAsync(new Uri("video.mp4"));
await MediaPlayer1.OpenAsync(fileSource);
// Start normal playback first
await MediaPlayer1.PlayAsync();
// Change to reverse playback with normal speed
MediaPlayer1.Rate_Set(-1.0);
```
### Controlling Reverse Playback Speed
You can control the reverse playback speed by adjusting the negative rate value:
```cs
// Double-speed reverse playback
MediaPlayer1.Rate_Set(-2.0);
// Half-speed reverse playback (slow motion in reverse)
MediaPlayer1.Rate_Set(-0.5);
// Quarter-speed reverse playback (very slow motion in reverse)
MediaPlayer1.Rate_Set(-0.25);
```
### Event Handling During Reverse Playback
When implementing reverse playback, you may need to handle events differently:
```cs
// Subscribe to position change events
MediaPlayer1.PositionChanged += (sender, e) =>
{
// Update UI with current position
TimeSpan currentPosition = MediaPlayer1.Position_Get();
UpdatePositionUI(currentPosition);
};
// Handle reaching the beginning of the video
MediaPlayer1.ReachedStart += (sender, e) =>
{
// Stop playback or switch to forward playback
MediaPlayer1.Rate_Set(1.0);
// Alternatively: await MediaPlayer1.PauseAsync();
};
```
## Windows-Specific Frame-by-Frame Reverse Navigation
The MediaPlayerCore engine (Windows-only) provides enhanced frame-by-frame control with its frame caching system, allowing precise backward navigation even with codecs that don't natively support it.
### Setting Up Frame Caching
Before starting playback, configure the reverse playback cache:
```cs
// Configure reverse playback before starting
MediaPlayer1.ReversePlayback_CacheSize = 100; // Cache 100 frames
MediaPlayer1.ReversePlayback_Enabled = true; // Enable the feature
// Start playback
await MediaPlayer1.PlayAsync();
```
### Navigating Frame by Frame
With the cache configured, you can navigate to previous frames:
```cs
// Navigate to the previous frame
MediaPlayer1.ReversePlayback_PreviousFrame();
// Navigate backward multiple frames
for(int i = 0; i < 5; i++)
{
MediaPlayer1.ReversePlayback_PreviousFrame();
// Optional: add delay between frames for controlled playback
await Task.Delay(40); // ~25fps equivalent timing
}
```
### Advanced Frame Cache Configuration
For applications with specific memory or performance requirements, you can fine-tune the cache:
```cs
// For high-resolution videos, you might need fewer frames to manage memory
MediaPlayer1.ReversePlayback_CacheSize = 50; // Reduce cache size
// For applications that need extensive backward navigation
MediaPlayer1.ReversePlayback_CacheSize = 250; // Increase cache size
// Listen for cache-related events
MediaPlayer1.ReversePlayback_CacheFull += (sender, e) =>
{
Console.WriteLine("Reverse playback cache is full");
};
```
## Implementing UI Controls for Reverse Playback
A complete reverse playback implementation typically includes dedicated UI controls:
```cs
// Button click handler for reverse playback
private async void ReversePlaybackButton_Click(object sender, EventArgs e)
{
if(MediaPlayer1.State == MediaPlayerState.Playing)
{
// Toggle between forward and reverse
if(MediaPlayer1.Rate_Get() > 0)
{
MediaPlayer1.Rate_Set(-1.0);
UpdateUIForReverseMode(true);
}
else
{
MediaPlayer1.Rate_Set(1.0);
UpdateUIForReverseMode(false);
}
}
else
{
// Start playback in reverse
await MediaPlayer1.PlayAsync();
MediaPlayer1.Rate_Set(-1.0);
UpdateUIForReverseMode(true);
}
}
// Button click handler for frame-by-frame backward navigation
private async void PreviousFrameButton_Click(object sender, EventArgs e)
{
// Ensure we're paused first
if(MediaPlayer1.State == MediaPlayerState.Playing)
{
await MediaPlayer1.PauseAsync();
}
// Navigate to previous frame
MediaPlayer1.ReversePlayback_PreviousFrame();
UpdateFrameCountDisplay();
}
```
## Performance Considerations
Reverse playback can be resource-intensive, especially with high-resolution videos. Consider these optimization techniques:
1. **Limit cache size** for devices with memory constraints
2. **Use hardware acceleration** when available
3. **Monitor performance** during reverse playback with debugging tools
4. **Provide fallback options** for devices that struggle with full-speed reverse playback
```cs
// Example of performance monitoring during reverse playback
private void MonitorPerformance()
{
var performanceTimer = new System.Timers.Timer(1000); // fires once per second
performanceTimer.Elapsed += (s, e) =>
{
if(MediaPlayer1.Rate_Get() < 0)
{
// Log or display current memory usage, frame rate, etc.
LogPerformanceMetrics();
// Adjust settings if needed
if(IsMemoryUsageHigh())
{
MediaPlayer1.ReversePlayback_CacheSize =
Math.Max(10, MediaPlayer1.ReversePlayback_CacheSize / 2);
}
}
};
performanceTimer.Start();
}
```
## Required Dependencies
To ensure proper functionality of reverse playback features, include these dependencies:
- Base redistributable package
- SDK redistributable package
These packages contain the necessary codecs and media processing components to enable smooth reverse playback across different video formats.
## Additional Resources and Advanced Techniques
For complex media applications requiring advanced reverse playback features, consider exploring:
- Frame extraction and manual rendering for custom effects
- Keyframe analysis for optimized navigation
- Buffering strategies for smoother reverse playback
## Conclusion
Implementing reverse video playback adds significant value to media applications, providing users with enhanced control over content navigation. By following the implementation patterns in this guide, developers can create robust, performant reverse playback experiences in .NET applications.
---
Visit our [GitHub](https://github.com/visioforge/.Net-SDK-s-samples) page for more complete code samples and implementation examples.
---END OF PAGE---
# Local File: .\dotnet\mediaplayer\code-samples\show-first-frame.md
---
title: Display First Frame in Video Files with .NET
description: Learn how to display the first frame of a video file in your .NET applications using the Media Player SDK. Complete C# code examples for WinForms, WPF, and console applications with detailed implementation steps.
sidebar_label: How to Show the First Frame?
---
# Displaying the First Frame of Video Files in .NET Applications
[!badge size="xl" target="blank" variant="info" text="Media Player SDK .Net"](https://www.visioforge.com/media-player-sdk-net) [!badge variant="dark" size="xl" text="MediaPlayerCore"]
## Overview
When developing media applications, it's often necessary to preview video content without playing the entire file. This technique is particularly useful for creating thumbnail galleries, video selection screens, or providing users with a visual preview before committing to watching a video.
## Implementation Guide
The Media Player SDK .NET provides a simple yet powerful way to display the first frame of any video file. This is achieved through the `Play_PauseAtFirstFrame` property, which when set to `true`, instructs the player to pause immediately after loading the first frame.
### How It Works
When the `Play_PauseAtFirstFrame` property is enabled:
1. The player loads the video file
2. Renders the first frame to the video display surface
3. Automatically pauses playback
4. Maintains the first frame on screen until further user action
If this property is not enabled (set to `false`), the player will proceed with normal playback after loading.
## Code Implementation
### Basic Example
```cs
// create player and configure the file name
// ...
// set the property to true
MediaPlayer1.Play_PauseAtFirstFrame = true;
// play the file
await MediaPlayer1.PlayAsync();
```
Resume playback from the first frame:
```cs
// resume playback
await MediaPlayer1.ResumeAsync();
```
## Practical Applications
This feature is particularly useful for:
- Providing preview capabilities in video editing applications
- Generating video poster frames for streaming applications
- Implementing "hover to preview" functionality in media browsers
## Required Components
To implement this functionality in your application, you'll need:
- Base redist package
- SDK redist package
For more information on distributing these components with your application, see: [How can the required redists be installed or deployed to the user's PC?](../deployment.md)
## Additional Resources
Find more code samples and implementation examples in our [GitHub repository](https://github.com/visioforge/.Net-SDK-s-samples).
## Technical Considerations
When implementing this feature, keep in mind:
- First frame display is nearly instantaneous for most video formats
- Resource usage is minimal as the player doesn't buffer beyond the first frame
- Works with all supported video formats including MP4, MOV, AVI, and more
---END OF PAGE---
# Local File: .\dotnet\mediaplayer\guides\avalonia-player.md
# How to Create a Cross-Platform Media Player using Avalonia MVVM and VisioForge SDK
This guide will walk you through the process of building a cross-platform media player application using Avalonia UI with the Model-View-ViewModel (MVVM) pattern and the VisioForge Media Player SDK. The application will be capable of playing video files on Windows, macOS, Linux, Android, and iOS.
We will be referencing the `SimplePlayerMVVM` example project, which demonstrates the core concepts and implementation details.
`[SCREENSHOT: Final application running on multiple platforms]`
## 1. Prerequisites
Before you begin, ensure you have the following installed:
* .NET SDK (latest version, e.g., .NET 8 or newer)
* An IDE such as Visual Studio, JetBrains Rider, or VS Code with C# and Avalonia extensions.
* For Android development:
* Android SDK
* Java Development Kit (JDK)
* For iOS development (requires a macOS machine):
* Xcode
* Necessary provisioning profiles and certificates.
* VisioForge .NET SDK (MediaPlayer SDK X). You can obtain this from the VisioForge website. The necessary packages will be added via NuGet.
## 2. Project Setup
This section outlines how to set up the solution structure and include the necessary VisioForge SDK packages.
### 2.1. Solution Structure
The `SimplePlayerMVVM` solution consists of several projects:
* **SimplePlayerMVVM**: A shared multi-targeted .NET library containing the core application logic, including ViewModels, Views (AXAML), and shared interfaces. This is the main project where most of our application logic resides.
* **SimplePlayerMVVM.Android**: The Android-specific head project.
* **SimplePlayerMVVM.Desktop**: The desktop-specific head project (Windows, macOS, Linux).
* **SimplePlayerMVVM.iOS**: The iOS-specific head project.
`[SCREENSHOT: Solution structure in the IDE]`
### 2.2. Core Project (`SimplePlayerMVVM.csproj`)
The main project, `SimplePlayerMVVM.csproj`, targets multiple platforms. Key package references include:
* `Avalonia`: The core Avalonia UI framework.
* `Avalonia.Themes.Fluent`: Provides a Fluent Design theme.
* `Avalonia.ReactiveUI`: For MVVM support using ReactiveUI.
* `VisioForge.DotNet.MediaBlocks`: Core VisioForge media processing components.
* `VisioForge.DotNet.Core.UI.Avalonia`: VisioForge UI components for Avalonia, including the `VideoView`.
```xml
<!-- Key properties; tag names reconstructed from an abbreviated listing -->
<PropertyGroup>
  <Nullable>enable</Nullable>
  <LangVersion>latest</LangVersion>
  <TargetFrameworks Condition="$([MSBuild]::IsOSPlatform('windows'))">net8.0-android;net8.0-ios;net8.0-windows</TargetFrameworks>
  <TargetFrameworks Condition="$([MSBuild]::IsOSPlatform('osx'))">net8.0-android;net8.0-ios;net8.0-macos14.0</TargetFrameworks>
  <TargetFrameworks Condition="$([MSBuild]::IsOSPlatform('linux'))">net8.0-android;net8.0</TargetFrameworks>
</PropertyGroup>
```
This setup allows the core logic to be shared across all target platforms.
### 2.3. Platform-Specific Projects
Each platform head project (`SimplePlayerMVVM.Android.csproj`, `SimplePlayerMVVM.Desktop.csproj`, `SimplePlayerMVVM.iOS.csproj`) includes platform-specific dependencies and configurations.
**Desktop (`SimplePlayerMVVM.Desktop.csproj`):**
* References `Avalonia.Desktop`.
* Includes platform-specific VisioForge native libraries (e.g., `VisioForge.CrossPlatform.Core.Windows.x64`, `VisioForge.CrossPlatform.Core.macOS`).
```xml
<!-- Per-OS target framework and output type; tag names reconstructed -->
<PropertyGroup Condition="$([MSBuild]::IsOSPlatform('windows'))">
  <TargetFramework>net8.0-windows</TargetFramework>
  <OutputType>WinExe</OutputType>
</PropertyGroup>
<PropertyGroup Condition="$([MSBuild]::IsOSPlatform('osx'))">
  <TargetFramework>net8.0-macos14.0</TargetFramework>
  <OutputType>Exe</OutputType>
</PropertyGroup>
<PropertyGroup Condition="$([MSBuild]::IsOSPlatform('linux'))">
  <TargetFramework>net8.0</TargetFramework>
  <OutputType>Exe</OutputType>
</PropertyGroup>
```
**Android (`SimplePlayerMVVM.Android.csproj`):**
* References `Avalonia.Android`.
* Includes Android-specific VisioForge libraries and dependencies like `VisioForge.CrossPlatform.Core.Android`.
```xml
<!-- Android head project properties; tag names reconstructed -->
<PropertyGroup>
  <OutputType>Exe</OutputType>
  <TargetFramework>net8.0-android</TargetFramework>
  <SupportedOSPlatformVersion>21</SupportedOSPlatformVersion>
  <Nullable>enable</Nullable>
  <ApplicationId>com.CompanyName.Simple_Player_MVVM</ApplicationId>
  <ApplicationVersion>1</ApplicationVersion>
  <ApplicationDisplayVersion>1.0</ApplicationDisplayVersion>
  <AndroidPackageFormat>apk</AndroidPackageFormat>
</PropertyGroup>
```
**iOS (`SimplePlayerMVVM.iOS.csproj`):**
* References `Avalonia.iOS`.
* Includes iOS-specific VisioForge libraries like `VisioForge.CrossPlatform.Core.iOS`.
```xml
<!-- iOS head project properties; tag names reconstructed -->
<PropertyGroup>
  <OutputType>Exe</OutputType>
  <TargetFramework>net8.0-ios</TargetFramework>
  <SupportedOSPlatformVersion>13.0</SupportedOSPlatformVersion>
  <Nullable>enable</Nullable>
  <RootNamespace>Simple_Player_MVVM.iOS</RootNamespace>
  <ApplicationId>com.visioforge.avaloniaplayer</ApplicationId>
</PropertyGroup>
```
These project files are crucial for managing dependencies and build configurations for each platform.
## 3. Core MVVM Structure
The application follows the MVVM pattern, separating UI (Views) from logic (ViewModels) and data (Models). ReactiveUI is used to facilitate this pattern.
### 3.1. `ViewModelBase.cs`
This abstract class serves as the base for all ViewModels in the application. It inherits from `ReactiveObject`, which is part of ReactiveUI and provides the necessary infrastructure for property change notifications.
```csharp
using ReactiveUI;
namespace Simple_Player_MVVM.ViewModels
{
public abstract class ViewModelBase : ReactiveObject
{
}
}
```
Any ViewModel that needs to notify the UI of property changes should inherit from `ViewModelBase`.
`[SCREENSHOT: ViewModelBase.cs code]`
### 3.2. `ViewLocator.cs`
The `ViewLocator` class is responsible for locating and instantiating Views based on the type of their corresponding ViewModel. It implements Avalonia's `IDataTemplate` interface.
```csharp
using Avalonia.Controls;
using Avalonia.Controls.Templates;
using Simple_Player_MVVM.ViewModels;
using System;
namespace Simple_Player_MVVM
{
public class ViewLocator : IDataTemplate
{
public Control? Build(object? data)
{
if (data is null)
return null;
var name = data.GetType().FullName!.Replace("ViewModel", "View", StringComparison.Ordinal);
var type = Type.GetType(name);
if (type != null)
{
return (Control)Activator.CreateInstance(type)!;
}
return new TextBlock { Text = "Not Found: " + name };
}
public bool Match(object? data)
{
return data is ViewModelBase;
}
}
}
```
When Avalonia needs to display a ViewModel, the `ViewLocator`'s `Match` method checks if the data object is a `ViewModelBase`. If it is, the `Build` method attempts to find a corresponding View by replacing "ViewModel" with "View" in the ViewModel's class name and instantiates it.
This convention-based approach simplifies the association between Views and ViewModels.
`[SCREENSHOT: ViewLocator.cs code]`
### 3.3. Application Initialization (`App.axaml` and `App.axaml.cs`)
The `App.axaml` file defines the application-level resources, including the `ViewLocator` as a data template and the theme (FluentTheme).
**`App.axaml`**:
```xml
<!-- Reconstructed: registers the ViewLocator data template and the Fluent theme -->
<Application xmlns="https://github.com/avaloniaui"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:local="using:Simple_Player_MVVM"
             x:Class="Simple_Player_MVVM.App">
    <Application.DataTemplates>
        <local:ViewLocator />
    </Application.DataTemplates>
    <Application.Styles>
        <FluentTheme />
    </Application.Styles>
</Application>
```
**`App.axaml.cs`**:
The `App.axaml.cs` file handles the application's initialization and lifecycle.
Key responsibilities in `OnFrameworkInitializationCompleted`:
1. Creates an instance of `MainViewModel`.
2. Sets up the main window or view based on the application lifetime (`IClassicDesktopStyleApplicationLifetime` for desktop, `ISingleViewApplicationLifetime` for mobile/web-like views).
3. Assigns the `MainViewModel` instance as the `DataContext` for the main window/view.
4. Retrieves the `IVideoView` instance from the `MainView` (hosted within `MainWindow` or directly as `MainView`).
5. Passes the `IVideoView` and the `TopLevel` control (necessary for file dialogs and other top-level interactions) to the `MainViewModel`.
```csharp
using Avalonia;
using Avalonia.Controls;
using Avalonia.Controls.ApplicationLifetimes;
using Avalonia.Markup.Xaml;
using Simple_Player_MVVM.ViewModels;
using Simple_Player_MVVM.Views;
using VisioForge.Core.Types;
namespace Simple_Player_MVVM
{
public partial class App : Application
{
public override void Initialize()
{
AvaloniaXamlLoader.Load(this);
}
public override void OnFrameworkInitializationCompleted()
{
IVideoView videoView = null;
var model = new MainViewModel();
if (ApplicationLifetime is IClassicDesktopStyleApplicationLifetime desktop)
{
desktop.MainWindow = new MainWindow
{
DataContext = model
};
videoView = (desktop.MainWindow as MainWindow).GetVideoView();
model.VideoViewIntf = videoView;
model.TopLevel = desktop.MainWindow;
}
else if (ApplicationLifetime is ISingleViewApplicationLifetime singleViewPlatform)
{
singleViewPlatform.MainView = new MainView
{
DataContext = model
};
videoView = (singleViewPlatform.MainView as MainView).GetVideoView();
model.VideoViewIntf = videoView;
model.TopLevel = TopLevel.GetTopLevel(singleViewPlatform.MainView);
}
base.OnFrameworkInitializationCompleted();
}
}
}
```
This setup ensures that the `MainViewModel` has access to the necessary UI components for video playback and interaction, regardless of the platform.
`[SCREENSHOT: App.axaml.cs code focusing on OnFrameworkInitializationCompleted]`
## 4. MainViewModel Implementation (`MainViewModel.cs`)
The `MainViewModel` is central to the media player's functionality. It manages the player's state, handles user interactions, and communicates with the VisioForge `MediaPlayerCoreX` engine.
`[SCREENSHOT: MainViewModel.cs overall structure or class definition]`
Key components of `MainViewModel`:
### 4.1. Properties for UI Binding
The ViewModel exposes several properties that are bound to UI elements in `MainView.axaml`. These properties use `ReactiveUI`'s `RaiseAndSetIfChanged` to notify the UI of changes.
* **`VideoViewIntf` (IVideoView):** A reference to the `VideoView` control in the UI, passed from `App.axaml.cs`.
* **`TopLevel` (TopLevel):** A reference to the top-level control, used for displaying file dialogs.
* **`Position` (string?):** Current playback position (e.g., "00:01:23").
* **`Duration` (string?):** Total duration of the media file (e.g., "00:05:00").
* **`Filename` (string? or Foundation.NSUrl? for iOS):** The name or path of the currently loaded file.
* **`VolumeValue` (double?):** Current volume level (0-100).
* **`PlayPauseText` (string?):** Text for the Play/Pause button (e.g., "PLAY" or "PAUSE").
* **`SpeedText` (string?):** Text indicating the current playback speed (e.g., "SPEED: 1X").
* **`SeekingValue` (double?):** Current value of the seeking slider.
* **`SeekingMaximum` (double?):** Maximum value of the seeking slider (corresponds to media duration in milliseconds).
```csharp
// Example property
private string? _Position = "00:00:00";
public string? Position
{
get => _Position;
set => this.RaiseAndSetIfChanged(ref _Position, value);
}
// ... other properties ...
```
### 4.2. Commands for UI Interactions
ReactiveUI `ReactiveCommand` instances are used to handle actions triggered by UI elements (e.g., button clicks, slider value changes).
* **`OpenFileCommand`:** Opens a file dialog to select a media file.
* **`PlayPauseCommand`:** Plays or pauses the media.
* **`StopCommand`:** Stops playback.
* **`SpeedCommand`:** Cycles through playback speeds (1x, 2x, 0.5x).
* **`VolumeValueChangedCommand`:** Updates the player volume when the volume slider changes.
* **`SeekingValueChangedCommand`:** Seeks to a new position when the seeking slider changes.
* **`WindowClosingCommand`:** Handles cleanup when the application window is closing.
```csharp
// Constructor - Command initialization
public MainViewModel()
{
    OpenFileCommand = ReactiveCommand.CreateFromTask(OpenFileAsync);
    PlayPauseCommand = ReactiveCommand.CreateFromTask(PlayPauseAsync);
    StopCommand = ReactiveCommand.CreateFromTask(StopAsync);
    // ... other command initializations ...

    // Subscribe to property changes to trigger commands for sliders
    this.WhenAnyValue(x => x.VolumeValue).Subscribe(_ => VolumeValueChangedCommand.Execute().Subscribe());
    this.WhenAnyValue(x => x.SeekingValue).Subscribe(_ => SeekingValueChangedCommand.Execute().Subscribe());

    _tmPosition = new System.Timers.Timer(1000); // Timer for position updates
    _tmPosition.Elapsed += tmPosition_Elapsed;

    VisioForgeX.InitSDK(); // Initialize VisioForge SDK
}
```
Note: `VisioForgeX.InitSDK()` initializes the VisioForge SDK. This should be called once at application startup.
### 4.3. VisioForge `MediaPlayerCoreX` Integration
A private field `_player` of type `MediaPlayerCoreX` holds the instance of the VisioForge media player engine.
```csharp
private MediaPlayerCoreX _player;
```
### 4.4. Engine Creation (`CreateEngineAsync`)
This asynchronous method initializes or re-initializes the `MediaPlayerCoreX` instance.
```csharp
private async Task CreateEngineAsync()
{
    if (_player != null)
    {
        await _player.StopAsync();
        await _player.DisposeAsync();
    }

    _player = new MediaPlayerCoreX(VideoViewIntf); // Pass the Avalonia VideoView
    _player.OnError += _player_OnError;            // Subscribe to error events
    _player.Audio_Play = true;                     // Ensure audio is enabled

    // Create source settings from the filename
    var sourceSettings = await UniversalSourceSettings.CreateAsync(Filename);
    await _player.OpenAsync(sourceSettings);
}
```
Key steps:
1. Disposes of any existing player instance.
2. Creates a new `MediaPlayerCoreX`, passing the `IVideoView` from the UI.
3. Subscribes to the `OnError` event for error handling.
4. Sets `Audio_Play = true` to enable audio playback by default.
5. Uses `UniversalSourceSettings.CreateAsync(Filename)` to create source settings appropriate for the selected file.
6. Opens the media source using `_player.OpenAsync(sourceSettings)`.
### 4.5. File Opening (`OpenFileAsync`)
This method is responsible for allowing the user to select a media file.
```csharp
private async Task OpenFileAsync()
{
    await StopAllAsync(); // Stop any current playback
    PlayPauseText = "PLAY";

#if __IOS__ && !__MACCATALYST__
    // iOS specific: Use IDocumentPickerService
    var filePicker = Locator.Current.GetService<IDocumentPickerService>();
    var res = await filePicker.PickVideoAsync();
    if (res != null)
    {
        Filename = (Foundation.NSUrl)res;

        var access = IOSHelper.CheckFileAccess(Filename); // Helper to check file access
        if (!access)
        {
            IOSHelper.ShowToast("File access error");
            return;
        }
    }
#else
    // Other platforms: Use Avalonia's StorageProvider
    try
    {
        var files = await TopLevel.StorageProvider.OpenFilePickerAsync(new FilePickerOpenOptions
        {
            Title = "Open video file",
            AllowMultiple = false
        });

        if (files.Count >= 1)
        {
            var file = files[0];
            Filename = file.Path.AbsoluteUri;

#if __ANDROID__
            // Android specific: Convert content URI to file path if necessary
            if (!Filename.StartsWith('/'))
            {
                Filename = global::VisioForge.Core.UI.Android.FileDialogHelper.GetFilePathFromUri(AndroidHelper.GetContext(), file.Path);
            }
#endif
        }
    }
    catch (Exception ex)
    {
        // Handle cancellation or errors
        Debug.WriteLine($"File open error: {ex.Message}");
    }
#endif
}
```
Platform-specific considerations:
* **iOS:** Uses an `IDocumentPickerService` (resolved via `Locator.Current.GetService`) to present the native document picker. `IOSHelper.CheckFileAccess` is used to ensure the app has permission to access the selected file. The filename is stored as an `NSUrl`.
* **Android:** If the path obtained from the file picker is a content URI, `FileDialogHelper.GetFilePathFromUri` (from `VisioForge.Core.UI.Android`) is used to convert it to an actual file path. This requires an `IAndroidHelper` to get the Android context.
* **Desktop/Other:** Uses `TopLevel.StorageProvider.OpenFilePickerAsync` for the standard Avalonia file dialog.
### 4.6. Playback Controls
* **`PlayPauseAsync`:**
  * If the player is not initialized or stopped (`PlaybackState.Free`), it calls `CreateEngineAsync` and then `_player.PlayAsync()`.
  * If playing (`PlaybackState.Play`), it calls `_player.PauseAsync()`.
  * If paused (`PlaybackState.Pause`), it calls `_player.ResumeAsync()`.
  * Updates `PlayPauseText` accordingly and starts/stops the `_tmPosition` timer.
```csharp
private async Task PlayPauseAsync()
{
    // ... (null/empty filename check) ...

    if (_player == null || _player.State == PlaybackState.Free)
    {
        await CreateEngineAsync();
        await _player.PlayAsync();

        _tmPosition.Start();
        PlayPauseText = "PAUSE";
    }
    else if (_player.State == PlaybackState.Play)
    {
        await _player.PauseAsync();
        PlayPauseText = "PLAY";
    }
    else if (_player.State == PlaybackState.Pause)
    {
        await _player.ResumeAsync();
        PlayPauseText = "PAUSE";
    }
}
```
* **`StopAsync`:**
  * Calls `StopAllAsync` to stop the player and reset UI elements.
  * Resets `SpeedText` and `PlayPauseText`.
```csharp
private async Task StopAsync()
{
    await StopAllAsync();

    SpeedText = "SPEED: 1X";
    PlayPauseText = "PLAY";
}
```
* **`StopAllAsync` (Helper):**
  * Stops the `_tmPosition` timer.
  * Calls `_player.StopAsync()`.
  * Resets `SeekingMaximum` to `null` so that it is recalculated on the next play.
```csharp
private async Task StopAllAsync()
{
    if (_player == null) return;

    _tmPosition.Stop();
    await _player.StopAsync();
    await Task.Delay(300); // Small delay to ensure stop completes

    SeekingMaximum = null;
}
```
### 4.7. Playback Speed (`SpeedAsync`)
Cycles through playback rates: 1.0, 2.0, and 0.5.
```csharp
private async Task SpeedAsync()
{
    if (SpeedText == "SPEED: 1X")
    {
        SpeedText = "SPEED: 2X";
        await _player.Rate_SetAsync(2.0);
    }
    else if (SpeedText == "SPEED: 2X")
    {
        SpeedText = "SPEED: 0.5X";
        await _player.Rate_SetAsync(0.5);
    }
    else if (SpeedText == "SPEED: 0.5X") // Back to normal speed
    {
        SpeedText = "SPEED: 1X";
        await _player.Rate_SetAsync(1.0);
    }
}
```
Uses `_player.Rate_SetAsync(double rate)` to change the playback speed.
### 4.8. Position and Duration Updates (`tmPosition_Elapsed`)
This method is called by the `_tmPosition` timer (typically every second) to update the UI with the current playback position and duration.
```csharp
private async void tmPosition_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    if (_player == null) return;

    var pos = await _player.Position_GetAsync();
    var progress = (int)pos.TotalMilliseconds;

    try
    {
        await Dispatcher.UIThread.InvokeAsync(async () =>
        {
            if (_player == null) return;

            _isTimerUpdate = true; // Flag to prevent seeking loop

            if (SeekingMaximum == null)
            {
                SeekingMaximum = (int)(await _player.DurationAsync()).TotalMilliseconds;
            }

            SeekingValue = Math.Min(progress, (int)(SeekingMaximum ?? progress));

            Position = $"{pos.ToString(@"hh\:mm\:ss", CultureInfo.InvariantCulture)}";
            Duration = $"{(await _player.DurationAsync()).ToString(@"hh\:mm\:ss", CultureInfo.InvariantCulture)}";

            _isTimerUpdate = false;
        });
    }
    catch (Exception exception)
    {
        System.Diagnostics.Debug.WriteLine(exception);
    }
}
```
Key actions:
1. Retrieves current position (`_player.Position_GetAsync()`) and duration (`_player.DurationAsync()`).
2. Updates `SeekingMaximum` if it hasn't been set yet (usually after a file is opened).
3. Updates `SeekingValue` with the current progress.
4. Formats and updates `Position` and `Duration` strings.
5. Uses `Dispatcher.UIThread.InvokeAsync` to ensure UI updates happen on the UI thread.
6. Sets `_isTimerUpdate = true` before updating `SeekingValue` and `false` after, to prevent the `OnSeekingValueChanged` handler from re-seeking when the timer updates the slider position.
### 4.9. Seeking (`OnSeekingValueChanged`)
Called when the `SeekingValue` property changes (i.e., the user moves the seeking slider).
```csharp
private async Task OnSeekingValueChanged()
{
    if (!_isTimerUpdate && _player != null && SeekingValue.HasValue)
    {
        await _player.Position_SetAsync(TimeSpan.FromMilliseconds(SeekingValue.Value));
    }
}
```
If not currently being updated by the timer (`!_isTimerUpdate`), it calls `_player.Position_SetAsync()` to seek to the new position.
### 4.10. Volume Control (`OnVolumeValueChanged`)
Called when the `VolumeValue` property changes (i.e., the user moves the volume slider).
```csharp
private void OnVolumeValueChanged()
{
    if (_player != null && VolumeValue.HasValue)
    {
        // Volume for MediaPlayerCoreX is 0.0 to 1.0
        _player.Audio_OutputDevice_Volume = VolumeValue.Value / 100.0;
    }
}
```
Sets `_player.Audio_OutputDevice_Volume`. Note that the ViewModel uses a 0-100 scale for `VolumeValue`, while the player expects 0.0-1.0.
### 4.11. Error Handling (`_player_OnError`)
A simple error handler that logs errors to the debug console.
```csharp
private void _player_OnError(object sender, VisioForge.Core.Types.Events.ErrorsEventArgs e)
{
    Debug.WriteLine(e.Message);
}
```
More sophisticated error handling (e.g., showing a message to the user) could be implemented here.
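For instance, a hedged sketch of richer handling might marshal the error to the UI thread and expose it through a bound property; the `LastError` property below is an illustrative addition, not part of the original sample:
```csharp
// Illustrative only: LastError is a hypothetical ViewModel property
// that the view could bind to for displaying error messages.
private string? _LastError;
public string? LastError
{
    get => _LastError;
    set => this.RaiseAndSetIfChanged(ref _LastError, value);
}

private void _player_OnError(object sender, VisioForge.Core.Types.Events.ErrorsEventArgs e)
{
    Debug.WriteLine(e.Message);

    // OnError may fire on a non-UI thread, so marshal before updating bound state
    Dispatcher.UIThread.Post(() => LastError = e.Message);
}
```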
### 4.12. Resource Cleanup (`OnWindowClosing`)
This method is invoked when the main window is closing. It ensures that VisioForge SDK resources are properly released.
```csharp
private void OnWindowClosing()
{
    if (_player != null)
    {
        _player.OnError -= _player_OnError; // Unsubscribe from events
        _player.Stop();                     // Ensure player is stopped (sync version here for quick cleanup)
        _player.Dispose();
        _player = null;
    }

    VisioForgeX.DestroySDK(); // Destroy VisioForge SDK instance
}
```
It stops the player, disposes of it, and importantly, calls `VisioForgeX.DestroySDK()` to release all SDK resources. This is crucial to prevent memory leaks or issues when the application exits.
This ViewModel orchestrates all the core logic of the media player, from loading files to controlling playback and interacting with the VisioForge SDK.
## 5. User Interface (Views)
The user interface is defined using Avalonia XAML (`.axaml` files).
### 5.1. `MainView.axaml` - The Player Interface
This `UserControl` defines the layout and controls for the media player.
**Key UI Elements:**
* **`avalonia:VideoView`:** This is the VisioForge control responsible for rendering video. It's placed in the main area of the grid and set to stretch.
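The original markup is not reproduced in this page, so the following is a minimal sketch; the alignment attributes are assumptions, while `x:Name="videoView1"` matches the name referenced by the code-behind below.
```xml
<!-- Sketch: VideoView declaration; layout attributes are illustrative -->
<avalonia:VideoView x:Name="videoView1"
                    HorizontalAlignment="Stretch"
                    VerticalAlignment="Stretch" />
```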
* **Seeking Slider (`Slider Name="slSeeking"`):**
  * `Maximum="{Binding SeekingMaximum}"`: Binds to the `SeekingMaximum` property in `MainViewModel`.
  * `Value="{Binding SeekingValue}"`: Binds two-way to the `SeekingValue` property in `MainViewModel`. When the user moves the slider, `SeekingValue` changes and triggers `OnSeekingValueChanged`; when the ViewModel updates `SeekingValue` (e.g., from the position timer), the slider position follows.
* **Time Display (`TextBlock`s for Position and Duration):**
  * Bound to the `Position` and `Duration` properties in `MainViewModel`.
  * A `TextBlock` with `Text="{Binding Filename}"` displays the current file name.
* **Playback Control Buttons (`Button`s):**
  * **Open File:** `Command="{Binding OpenFileCommand}"`
  * **Play/Pause:** `Command="{Binding PlayPauseCommand}"`, `Content="{Binding PlayPauseText}"` (dynamically changes the button text).
  * **Stop:** `Command="{Binding StopCommand}"`
* **Volume and Speed Controls:**
  * **Volume Slider:** `Value="{Binding VolumeValue}"` binds to `VolumeValue` for volume control.
  * **Speed Button:** `Command="{Binding SpeedCommand}"`, `Content="{Binding SpeedText}"`.
**Layout:**
The view uses a `Grid` to arrange the `VideoView` and a `StackPanel` for the controls at the bottom. The controls themselves are organized using nested `StackPanel`s and `Grid`s for alignment.
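The full markup is omitted in this page; the condensed sketch below follows the structure and bindings described above. The `xmlns:avalonia` mapping, spacing, and styling values are assumptions, not the sample's exact markup.
```xml
<!-- Sketch: condensed MainView structure; namespace mappings and styling are assumptions -->
<UserControl xmlns="https://github.com/avaloniaui"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:vm="using:Simple_Player_MVVM.ViewModels"
             xmlns:avalonia="using:VisioForge.Core.UI.Avalonia"
             x:Class="Simple_Player_MVVM.Views.MainView"
             x:DataType="vm:MainViewModel">
  <Design.DataContext>
    <vm:MainViewModel />
  </Design.DataContext>
  <Grid RowDefinitions="*,Auto">
    <avalonia:VideoView x:Name="videoView1" Grid.Row="0" />
    <StackPanel Grid.Row="1">
      <Slider Name="slSeeking"
              Maximum="{Binding SeekingMaximum}"
              Value="{Binding SeekingValue}" />
      <StackPanel Orientation="Horizontal">
        <TextBlock Text="{Binding Position}" />
        <TextBlock Text=" / " />
        <TextBlock Text="{Binding Duration}" />
        <TextBlock Text="{Binding Filename}" />
      </StackPanel>
      <StackPanel Orientation="Horizontal">
        <Button Content="OPEN FILE" Command="{Binding OpenFileCommand}" />
        <Button Content="{Binding PlayPauseText}" Command="{Binding PlayPauseCommand}" />
        <Button Content="STOP" Command="{Binding StopCommand}" />
        <Button Content="{Binding SpeedText}" Command="{Binding SpeedCommand}" />
        <Slider Maximum="100" Value="{Binding VolumeValue}" Width="120" />
      </StackPanel>
    </StackPanel>
  </Grid>
</UserControl>
```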
The `x:DataType="vm:MainViewModel"` directive enables compiled bindings, providing better performance and compile-time checking of binding paths. The `Design.DataContext` is used to provide data for the XAML previewer in IDEs.
### 5.2. `MainView.axaml.cs` - Code-Behind
The code-behind for `MainView` is minimal. Its primary purpose is to provide a way for the application setup code (in `App.axaml.cs`) to access the `VideoView` control instance.
```csharp
using Avalonia.Controls;
using VisioForge.Core.Types;

namespace Simple_Player_MVVM.Views
{
    public partial class MainView : UserControl
    {
        // Provides access to the VideoView control instance
        public IVideoView GetVideoView()
        {
            return videoView1; // videoView1 is the x:Name of the VideoView in XAML
        }

        public MainView()
        {
            InitializeComponent(); // Standard Avalonia control initialization
        }
    }
}
```
This `GetVideoView()` method is called during application startup to pass the `VideoView` reference to the `MainViewModel`.
### 5.3. `MainWindow.axaml` - The Main Application Window (Desktop)
For desktop platforms, `MainWindow.axaml` serves as the top-level window that hosts the `MainView`.
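The markup is omitted here; since the window only hosts `MainView`, a minimal sketch (title and namespace aliases are illustrative) looks like this:
```xml
<!-- Sketch: MainWindow hosting MainView; attribute values are illustrative -->
<Window xmlns="https://github.com/avaloniaui"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:views="using:Simple_Player_MVVM.Views"
        x:Class="Simple_Player_MVVM.Views.MainWindow"
        Title="Simple Player MVVM">
  <views:MainView />
</Window>
```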
It simply embeds the `MainView` control. The `DataContext` (which will be an instance of `MainViewModel`) is typically set in `App.axaml.cs` when the `MainWindow` is created.
### 5.4. `MainWindow.axaml.cs` - Main Window Code-Behind
The code-behind for `MainWindow` primarily handles two things:
1. Provides a way to get the `VideoView` from the contained `MainView`.
2. Hooks into the `Closing` event of the window to trigger the `WindowClosingCommand` in the `MainViewModel` for resource cleanup.
```csharp
using Avalonia.Controls;
using Simple_Player_MVVM.ViewModels;
using System;
using VisioForge.Core.Types;

namespace Simple_Player_MVVM.Views
{
    public partial class MainWindow : Window
    {
        // Helper to get VideoView from the MainView content
        public IVideoView GetVideoView()
        {
            return (Content as MainView).GetVideoView();
        }

        public MainWindow()
        {
            InitializeComponent();

            // Handle the window closing event to trigger cleanup in ViewModel
            Closing += (sender, e) =>
            {
                if (DataContext is MainViewModel viewModel)
                {
                    // Execute the command and handle potential errors or completion
                    viewModel.WindowClosingCommand.Execute()
                        .Subscribe(_ => { /* Optional: action on completion */ },
                                   ex => Console.WriteLine($"Error during closing: {ex.Message}"));
                }
            };
        }
    }
}
```
When the window closes, it checks if the `DataContext` is a `MainViewModel` and then executes its `WindowClosingCommand`. This ensures that the `MainViewModel` can perform necessary cleanup, such as disposing of the `MediaPlayerCoreX` instance and calling `VisioForgeX.DestroySDK()`.
## 6. Platform-Specific Implementation Details
While Avalonia and .NET provide a high degree of cross-platform compatibility, certain aspects like file system access and permissions require platform-specific handling.
### 6.1. Interfaces for Platform Services
To abstract platform-specific functionality, interfaces are defined in the core `SimplePlayerMVVM` project:
* **`IAndroidHelper.cs`:**
```csharp
namespace SimplePlayerMVVM
{
    public interface IAndroidHelper
    {
#if __ANDROID__
        global::Android.Content.Context GetContext();
#endif
    }
}
```
This interface is used to get the Android `Context`, which is needed for operations like converting content URIs to file paths.
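A minimal implementation in the Android head project might look like the sketch below; returning the global `Application.Context` (rather than a specific `Activity`) is an assumption that suffices for resolving content URIs:
```csharp
// Sketch: a possible Android-side implementation; how it is registered
// with the ViewModel (e.g., via a service locator) is left to the host project.
public class AndroidHelper : IAndroidHelper
{
#if __ANDROID__
    public global::Android.Content.Context GetContext()
    {
        // The application context is an assumption; an Activity context works too.
        return global::Android.App.Application.Context;
    }
#endif
}
```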
* **`IDocumentPickerService.cs`:**
```csharp
using System.Threading.Tasks;

namespace SimplePlayerMVVM;

public interface IDocumentPickerService
{
    Task