Introduction

Last updated: 2021-06-11

Vectorly's client-side SDK makes it easy to integrate AI filters, such as Background Filters (virtual backgrounds, background blur) and AI Upscaling, into WebRTC streaming applications.

Installation

When you sign up, you'll get a token, which you will need in order to use the library. Next, you can install the ai-filters library via NPM or via our CDN.

NPM

npm install --save @vectorly-io/ai-filters

CDN

https://cdn.vectorly.io/ai-filters/v1/latest/vectorly.filters.js
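For example, the CDN build can be included with a standard script tag; the filter classes are then available on the vectorly global object, as shown in the loading sections below:


     <script src="https://cdn.vectorly.io/ai-filters/v1/latest/vectorly.filters.js"></script>
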

Available Filters

We've compiled a set of AI filters from research and academia, open-source projects, and our own custom AI filters, all of which can be accessed through the same Vectorly interface.

Background Filter

By using the Background Filter, you can implement features like Virtual Backgrounds or Background Blur, to give users additional privacy when calling from home. The AI model used for background segmentation is the Meet model used by Google Meet.

Background Filters example: [images: original video stream, background blur, virtual background]

See the Background Filters section for more details.

Upscaling Filter

Vectorly has built its own AI Upscaling filter based on a technique called Super Resolution, which uses AI to upscale and enhance images. Through Super Resolution, we can upscale and clean up low-resolution video, making it look close to HD quality.

Super Resolution example: [images: 240p, 240p upscaled to 720p, original 720p]

With AI Upscaling, you can improve the clarity and quality of video streams when the source resolution is low.

You can also stream SD content to users and upscale it to HD in real time as they're watching it, providing an HD viewing experience while only consuming the bandwidth for the low-resolution video (50 to 90% less data than for the HD video).

See the AI Upscaling section for more details.

Audio Denoise

In Development

Video Denoise

In Development

Lighting Correction

In Development

Background Filters

Our Background Filters, based on the Google Meet background segmentation model, make it easy to implement background blur and virtual background features.

You can find a live demo of our virtual background filter here

Loading

The basic API for loading the background filter via NPM is:


     import { BackgroundFilter } from '@vectorly-io/ai-filters';
					

For loading via CDN, you can access the background filter from the vectorly object:


     const BackgroundFilter = vectorly.BackgroundFilter;
					

You can find more detailed loading instructions on the API reference page.

Basic Usage

Vectorly's Background Filter takes any MediaStream or MediaStreamTrack as input, so for a WebRTC application, all you need to do is instantiate the filter object with the MediaStream or MediaStreamTrack you want to filter. The output is a filtered MediaStream (or MediaStreamTrack), which can be sent via WebRTC or loaded locally into a video element.

The basic API for using the background filter is:


   const stream = await navigator.mediaDevices.getUserMedia({video:true, audio:true});
   const filter = new BackgroundFilter(stream, {token: 'vectorly-token', background: 'blur'});
   const outputStream =  await filter.getOutput();
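To preview the filtered output locally, you can attach it to a video element using the standard srcObject property. This is a minimal sketch; the 'localVideo' element id is a placeholder:


   // Assumes a <video id="localVideo" autoplay playsinline></video> element on the page
   document.getElementById('localVideo').srcObject = outputStream;
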
					

The above example is for a Background Blur filter. For virtual backgrounds, where you replace the background with an image, the API is:


   const filter = new BackgroundFilter(stream, {token: 'vectorly-token', background: 'my-image-url.png'});
					

You can find a full set of methods and parameters on the API reference page.

Browser support

Our background filters are supported on all major browsers, except for Internet Explorer. See the table of browser support below.

                      Chrome          Safari   Firefox         Edge            Opera           IE
Supported             Yes             Yes      Yes             Yes             Yes             No
SIMD acceleration     Yes, since 91   No       Yes, since 89   Yes, since 84   Yes, since 77   No
Offscreen Support     Yes             No       No              Yes             Yes             No

Offscreen Support
Offscreen support uses OffscreenCanvas to run video-processing workloads in a worker. This has performance benefits, and it is also necessary to keep the AI filter running while the current tab is hidden or minimized.

Note

For browsers that do not have Offscreen Support, the filtered stream will pause while the user's tab is hidden or minimized, and will resume when the tab becomes active again.
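If you want to detect this case up front (for example, to warn users that filtering pauses in background tabs), a minimal sketch using the standard OffscreenCanvas global (a browser feature check, not a Vectorly API) might look like this:


   // Standard web feature check, not part of the Vectorly SDK
   const hasOffscreenSupport = typeof OffscreenCanvas !== 'undefined';
   if (!hasOffscreenSupport) {
     console.log('Background filtering will pause while this tab is hidden or minimized');
   }
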

SIMD acceleration

Recent versions of Chrome, Firefox, Edge and Opera (released since June 2021) support SIMD acceleration, which enables much higher framerates and lower CPU overhead. SIMD acceleration is enabled automatically when the browser supports it.

Integration

Integrating the filter with any specific video conferencing API or service just requires finding the MediaStream associated with the video stream you want to filter. The following sub-sections discuss how to integrate the filter with various conferencing services.

Vanilla WebRTC

As shown above, the API for basic/general WebRTC is:


   const stream = await navigator.mediaDevices.getUserMedia({video: true, audio: true});
   const videoTrack = stream.getVideoTracks()[0];
   const filter = new BackgroundFilter(videoTrack, {token: 'vectorly-token', background: 'image.jpg'});
   const virtualBackgroundTrack = await filter.getOutputTrack();

You can find a demo repository for vanilla WebRTC background filtering here

Jitsi

You can enable filters on any VideoTrack object by feeding it into the filter's Jitsi plugin (see reference).


   const room = connection.initJitsiConference('conference', confOptions);

   JitsiMeetJS.createLocalTracks({devices: ['video']}).then(tracks => {
      const filter = new BackgroundFilter(tracks[0], {token: 'insert-vectorly-token', background: 'blur'});

      filter.getOutput().then(function(filteredStream){
         room.addTrack(filteredStream);
      });
   });


Agora

For Web deployments using Agora (specifically the 4.x API), you can just feed the video track to the Background Filter, which will return a filtered video track that you can publish.


    const videoTrack = await AgoraRTC.createCameraVideoTrack();
    const audioTrack = await AgoraRTC.createMicrophoneAudioTrack();

    const filter = new BackgroundFilter(videoTrack._mediaStreamTrack, {token: 'insert-vectorly-token-here', background: 'blur'});
    filter.getOutputTrack().then(function(filteredTrack){

       const filteredAgoraTrack = AgoraRTC.createCustomVideoTrack({
         mediaStreamTrack: filteredTrack
       });
       client.publish([filteredAgoraTrack, audioTrack]);
    });

						

You can find a working demo repo here

Twilio

You can enable filters on any VideoTrack object by extracting the raw MediaStreamTrack, running the filter on that, and creating a new LocalVideoTrack.


   const Twilio = require('twilio-video');

   const localVideoTrack = await Twilio.createLocalVideoTrack();
   const filter = new BackgroundFilter(localVideoTrack.mediaStreamTrack, {token: 'insert-vectorly-token', background: 'blur'});
   const outputTrack = await filter.getOutputTrack();
   const filteredTrack = new Twilio.LocalVideoTrack(outputTrack);

   room.localParticipant.publishTrack(filteredTrack);

							

Daily.co

If you're building a custom UI with Daily.co, you can use the Daily.co call object's setInputDevicesAsync method to set the filtered video track as the upload stream.


   const sourceVideoTrack = callObject._participants.local.videoTrack;

   const filter = new vectorly.BackgroundFilter(sourceVideoTrack, {token: 'your-vectorly-token', background: 'https://demo.vectorly.io/virtual-backgrounds/1.jpg'});

   filter.getOutputTrack().then(function(filteredTrack ){

      callObject.setInputDevicesAsync({
         videoSource: filteredTrack
      });

   });
							

You can find a working example repo here

Vonage / OpenTok

When you create a Publisher object, just pass the filtered video track as the videoSource.


   const stream = await navigator.mediaDevices.getUserMedia({video:true, audio:true});
   const filter = new BackgroundFilter(stream, {token: 'vectorly-token', background: 'blur'});
   const outputTrack = await filter.getOutputTrack();

   var publisher = OT.initPublisher('publisher', {
      insertMode: 'append',
      width: '100%',
      height: '100%',
      videoSource: outputTrack
   }, handleError);
							

You can find a working demo repo here

Starter Backgrounds

For convenience, if you're interested in adding a virtual background feature and need some starter images, here are a few:

https://demo.vectorly.io/virtual-backgrounds/1.jpg
https://demo.vectorly.io/virtual-backgrounds/2.jpg
https://demo.vectorly.io/virtual-backgrounds/3.jpg
https://demo.vectorly.io/virtual-backgrounds/4.jpg
https://demo.vectorly.io/virtual-backgrounds/5.jpg
https://demo.vectorly.io/virtual-backgrounds/6.jpg
https://demo.vectorly.io/virtual-backgrounds/7.jpg
https://demo.vectorly.io/virtual-backgrounds/8.jpg
https://demo.vectorly.io/virtual-backgrounds/9.jpg

Events

Once you have instantiated the filter object, you can access basic filter events, like onload and error handling.

							 
   const filter = new BackgroundFilter(stream, config);

   filter.on('load', function () {
     console.log("filter initialized");
   });

   filter.on('start', function () {
      console.log("Starting filter");
   });

   filter.on('stop', function () {
      console.log("Stopping filter");
   });

   filter.on('error', function () {
     console.log("Filter failed to initialize");
   });
							 
						 

If the filter fails to load, it will pass through the original video stream.

Controls

You can enable and disable the filter programmatically.

							 
   const filter = new BackgroundFilter(video, config);

   filter.disable();

   filter.enable();
							 
						 

Note

Calling disable() will stop the filter and return the original MediaStream object by default, and you will need to re-publish the original input MediaStream to the WebRTC client.

If you want to avoid having to re-publish the original media stream, and instead have disable/enable toggle the virtual background on a single MediaStream, you can set `passthrough` in the config.

							 
   const filter = new BackgroundFilter(video, {token: '...', passthrough: true});

   filter.disable();   // Same media stream, but just doesn't run the background filter

   filter.enable();    // Re-applies the background filter to the media stream
							 
						 

You can also change the inputs to the background filter dynamically:

							 
    //Change the background image, or set to "blur" to set a background blur
   await filter.changeBackground("new-background-image.png");


   //Change the source media stream
   const devices = await navigator.mediaDevices.enumerateDevices();
   const alternateWebCam = devices[1]; //Just an example, don't literally copy/paste this
   const alternateWebCamStream = await navigator.mediaDevices.getUserMedia({video: {deviceId: alternateWebCam.deviceId}});
   await filter.changeInput(alternateWebCamStream);

							 
						 

You can also set the blur level (on a scale of 1 to 10) at initialization, or change it dynamically with the changeBlurRadius method.

							 

   const filter = new BackgroundFilter(stream, {token: 'vectorly-token', background: 'blur', blurRadius: 5});
   filter.changeBlurRadius(3);  // 3/10 is less blurry than 5/10

							 
						 

You can see a full set of available methods in the API documentation

WebGL Model

Note

This feature is experimental, and you may encounter bugs or issues when using it.

While our default Background Filter model is the Google Selfie model, we are currently training our own WebGL-native background segmentation model, which requires little to no CPU.

While it is much more CPU-efficient, it is currently highly experimental and our first alpha version was released on August 6th, 2021. Outstanding issues to be resolved before our WebGL model goes into Beta include:

  • Improving the quality, especially on edge cases
  • Fixing abnormally slow performance on some graphics cards

We are currently training our models more extensively and making improvements to the performance.

You can test our current WebGL model here.

Once our WebGL model is in Beta, you will be able to enable it with the following API:


   const stream = await navigator.mediaDevices.getUserMedia({video:true, audio:true});
   const filter = new BackgroundFilter(stream, {token: 'vectorly-token', background: 'blur', model: 'webgl'});
   const outputStream =  await filter.getOutput();
					

Low level controls

If you are using the AWS Chime SDK, or require lower-level controls such as running the background filter on individual frames/images, you can use the vectorly-core library.

With the Background Filter Core SDK, you have control over:

  • The Input source
  • The destination
  • When rendering happens

Loading

The API for loading the Core Background Filter is:

NPM


     import { BackgroundCoreFilter } from '@vectorly-io/ai-filters';
					

CDN

https://cdn.vectorly.io/ai-filters/v1/latest/vectorly.BackgroundFilterCore.js

Setting a destination

Each Background object is tied to an individual canvas element, and renders to that canvas element.

You specify the canvas element you want to render to via the load method:


   const filter = new BackgroundCoreFilter();
   await filter.load({
       canvas: document.getElementById('your-canvas-element'),
       model: 'selfie_v2', // or 'webgl_v2'
       token: "your-token"
   });
					

The load function returns a promise, which is fulfilled when the background filter loads, and is rejected when it fails to load.

Setting an input

At any time, you can set the input of the filter via the filter.setInput() method

 filter.setInput(source); // Sets input element

Accepted sources include

  • HTMLImageElement
  • HTMLCanvasElement
  • HTMLVideoElement
  • ImageData
  • ImageBitmap
  • Anything else that the WebGL texImage2D function accepts

Rendering

Finally, you can render using:

filter.render();

This runs the Background Filter on the current input and renders the result to the canvas.
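For continuous video, you can drive setInput and render from a requestAnimationFrame loop. The following is a minimal sketch, assuming the filter has finished loading; 'your-video-element' is a placeholder id:


   const videoElement = document.getElementById('your-video-element');

   function renderLoop() {
     filter.setInput(videoElement); // feed the current video frame
     filter.render();               // draw the filtered frame to the canvas
     requestAnimationFrame(renderLoop);
   }

   requestAnimationFrame(renderLoop);
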

AI Upscaling

Loading

The API for loading the Upscaler is

NPM


     import { UpscaleFilter } from '@vectorly-io/ai-filters';
					

CDN

https://cdn.vectorly.io/ai-filters/v1/latest/vectorly.UpscaleFilter.js

For loading via CDN, you can access the upscaling filter as the UpscaleFilter object, which will be available in the global scope.
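As a sketch, a CDN-based setup might look like the following; the video element id and token are placeholders:


   <script src="https://cdn.vectorly.io/ai-filters/v1/latest/vectorly.UpscaleFilter.js"></script>
   <script>
     // UpscaleFilter is available globally once the script above has loaded
     const upscaler = new UpscaleFilter(document.getElementById('video'), {token: 'your-token'});
   </script>
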

For web environments, we've packaged our upscaler as a standalone JavaScript library, as well as plugins for several popular HTML5 video players (see the full API for more detail).

Basic usage

For the UpscaleFilter, the basic API involves instantiating an UpscaleFilter object, and specifying a video element.

							 
   const video = document.getElementById("video");

   const config = {
	   token: '...'
   };

   const upscaler = new UpscaleFilter(video, config);
							 
						 

This automatically upscales the video by overlaying a canvas element, containing the upscaled video frames, on top of the video element. When the video plays, the upscaler will automatically upscale each frame and update the canvas element. See the styling section for more detail.

Browser support

Our upscaling filters are supported on all major browsers, except for Internet Explorer. See the table of browser support below.

            Chrome   Safari   Firefox   Edge   Opera   IE
Supported   Yes      Yes      Yes       Yes    Yes     No

Integration

General WebRTC

The UpscaleFilter works with any video tag, so for a WebRTC application, all you need to do is to instantiate the upscaler object with the video element you want to upscale.


   const upscaler  = new UpscaleFilter(document.getElementById("remoteVideo"), {token: 'insert-vectorly-token-here'});

We have an example repository, showing how Vectorly can be integrated with WebRTC, as well as a full working general WebRTC demo here.

Integrating the upscaler with any specific video conferencing API or service just requires finding the video element associated with the video stream you want to upscale.

Jitsi

You can enable upscaling on any VideoTrack object by intercepting the corresponding video element you attach it to (see reference).


   const room = connection.initJitsiConference('conference', confOptions);
   room.on(JitsiMeetJS.events.conference.TRACK_ADDED, function(track){

      const videoElement = document.createElement('video');
      document.body.appendChild(videoElement);
      track.attach(videoElement);

      const upscaler = new UpscaleFilter(videoElement, {token: 'insert-vectorly-token'});

   });

Agora

For Web deployments using Agora, you can find the video element of the stream you want to upscale by using the stream's ID.


    let stream = AgoraRTC.createStream({
        streamID: uid,
        audio: true,
        video: true,
        screen: false
    });

    stream.init(function() {

        stream.play('target-div');
        const video = document.getElementById("video" + stream.getId());
        const upscaler = new UpscaleFilter(video, {token: 'insert-vectorly-token-here'});

        client.publish(stream);

    });
						

Amazon Chime SDK

For the Amazon Chime SDK, you'll need to use the low level controls library, in conjunction with the Amazon Chime Video Processor API


    if(!this.upscaler){

        this.upscaler = new UpscaleCoreFilter();

        const config = {
          w: frameWidth,
          h: frameHeight,
          renderSize: {w: frameWidth*3, h: frameHeight*3},
          canvas: this.targetCanvas,
          networkParams: {name: 'residual_5k_3x', tag: 'general', version: '0'},
          token: ""
        }

       this.upscaler.load(config);

        this.upscaler.on('load', function(){
          this.upscalerReady = true;
        }.bind(this));

    }

    if(this.upscalerReady){
      this.upscaler.setInput(canvas) // Sets input element
      this.upscaler.render() // Renders to canvas
    }

						

The Amazon Chime Video Processor API provides a canvas as your input, and provides a destination canvas as your output. All you need to do is use the Vectorly core library, configure it to render to the output canvas, then feed it the input canvas and call render on each render cycle.

The above is a code snippet taken from our full working demo repository, which you can find here.

Twilio

You can enable upscaling on any VideoTrack object by intercepting the corresponding video element you attach it to (see reference).

If you use the track.attach() method to create a video element:


   const Video = require('twilio-video');

   Video.createLocalVideoTrack().then(function(videoTrack) {
     const videoElement = videoTrack.attach();
     document.body.appendChild(videoElement);
     const upscaler = new UpscaleFilter(videoElement, {token: 'insert-vectorly-token'});
   });
							

If you specify your own video element:


   const Video = require('twilio-video');

   const videoElement = document.createElement('video');
   document.body.appendChild(videoElement);

   Video.createLocalVideoTrack().then(function(videoTrack) {
     videoTrack.attach(videoElement);
     const upscaler = new UpscaleFilter(videoElement, {token: 'insert-vectorly-token'});
   });
							

OpenTok / Vonage

When a Subscriber or Publisher creates a video element, you can intercept it and feed that video to the Vectorly Upscaler.

You can use the subscriber.element property to intercept the video element


    session.on('streamCreated', function(event) {

        const subscriber = session.subscribe(event.stream, 'subscriber', {
            insertMode: 'append',
            width: '100%',
            height: '100%'
        }, handleError);

        subscriber.on('videoElementCreated', function (){
            const video = subscriber.element.querySelector('video');
            const upscaler  = new UpscaleFilter(video, {token: '..your-token....'});

        });

    });
							

This also works with a publisher object. Refer to the Vonage documentation for styling - Vectorly's upscaler will fit within the styling defined by OpenTok.

See our example repository for a working code example.

Daily.co

You can integrate Vectorly's AI upscaler with Daily.co if you're building a custom video chat interface. Using the default React code sample from Daily, we've built a full working demo reference.


   useEffect(() => {
     videoEl.current &&
       (videoEl.current.srcObject = new MediaStream([videoTrack]));
     if (videoEl.current && props.isLarge) {
       window.upscalers = window.upscalers || {};
       window.upscalers[videoTrack.id] = new UpscaleFilter(videoEl.current, {token: 'insert-vectorly-token'});
     }
   }, [videoTrack]);
							

You just need to make sure you intercept the video element associated with the video track you want to upscale.

Vectorly's AI upscaler is not compatible with the pre-built UI from Daily.co, as the pre-built UI is loaded via an iframe, which makes it impossible for a third-party application to access the video element.

Electron

If you're building an Electron app, the Vectorly library is fairly plug-and-play, and will work with either the CDN or NPM installation.

You can see a demo Electron app repository here.

Events

Once you have instantiated the upscaler object, you can access basic upscaler events, like onload and error handling.

							 
   const upscaler = new UpscaleFilter(video, config);

   upscaler.on('load', function () {
     console.log("Upscaler initialized");
   });

   upscaler.on('start', function () {
      console.log("Starting upscaling");
   });

   upscaler.on('stop', function () {
      console.log("Stopping upscaling");
   });

   upscaler.on('error', function () {
     console.log("Failed to initialize");
   });
							 
						 

Controls

You can also enable and disable the upscaler programmatically.

							 
   const upscaler = new UpscaleFilter(video, config);

   upscaler.disable();

   upscaler.enable();
							 
						 

Styling and Scaling

Let's say you have a video element, inside of a basic container div.


   <div id="container">
      <video src="video.mp4" ></video>
   </div>
				

When you feed that video element to the Upscaler instantiation function, it will create a canvas element as a sibling node, with the same parent node as the video element.


     <div id="container">
         <video src="video.mp4" style="visibility: hidden"></video>
         <canvas id="output"></canvas> <!-- Where the upscaled frames are drawn -->
      </div>
				
The upscaler library styles this canvas to occupy 100% of the width and height of the parent element, which in practice covers the video element in most HTML5 video player interfaces.

To have more control over the styling and position of the output, you can use the container option to specify a div element in which to place the destination canvas.



   const video = document.getElementById("video");
   const div = document.getElementById("my-div");

   const config = {
	   token: '...',
	   container: div // any div element
   };

   const upscaler = new UpscaleFilter(video, config);
				
The output canvas will occupy the exact dimensions of the container div, and will dynamically resize and re-position whenever the container div is moved, resized or changed. To dynamically style and position the output, therefore, you should style and position the container element.
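For example, if you want the upscaled output to sit directly on top of the original video, one option (a hypothetical sketch using standard DOM styling, not a Vectorly API) is to absolutely position the container over the video:


   // Overlay the container div on the video; the output canvas will track the div's size
   const container = document.getElementById("my-div");
   container.style.position = "absolute";
   container.style.top = "0";
   container.style.left = "0";
   container.style.width = video.offsetWidth + "px";
   container.style.height = video.offsetHeight + "px";
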

Models

There are multiple AI models you can choose from. The default is 'residual_3k_3x', but you can specify a model when instantiating the upscaler object:


   const upscaler = new UpscaleFilter(video, {token: '...', networkParams: { name: 'residual_3k_3x', tag: 'general', version: '2.1'}});
							 
We are constantly releasing new models. You can find a comprehensive list of models here

Low level controls

For use cases where lower-level control is needed, such as upscaling individual frames or images, using a custom decoder, or upscaling as part of a broader image-processing pipeline, you can use the vectorly-core library.

With the low-level upscaling API, you have control over:

  • The Input source
  • The destination
  • When rendering happens

Loading

The API for loading the Core Upscaler is

NPM


     import { UpscaleCoreFilter } from '@vectorly-io/ai-filters';
					

CDN

https://cdn.vectorly.io/ai-filters/v1/latest/vectorly.UpscaleCoreFilter.js

Setting a destination

Each Upscaler object is tied to an individual canvas element, and renders to that canvas element.

You specify the canvas element you want to render the upscaled output to via the upscaler's load method:


   const upscaler = new UpscaleCoreFilter();
   await upscaler.load({
       w: videoWidth,
       h: videoHeight,
       renderSize: {w: videoWidth*2, h: videoHeight*2},
       canvas: document.getElementById('your-canvas-element'),
       networkParams: {name: 'model-name', tag: 'model-tag', version: 'model-version'},
       token: "your-token"
   });
					

The load function returns a promise, which is fulfilled when the upscaler loads and rejected when it fails to load. This is in addition to the regular on('load') and on('error') behavior, so you can use either the promise or the load/error events for flow control and error handling.
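For example, if you prefer promise-based error handling over the events, you can wrap the load call in a try/catch. This is a minimal sketch; the fallback behavior is up to your application:


   try {
     await upscaler.load({ /* same options as above */ });
     // Safe to start calling setInput() / render()
   } catch (err) {
     // Fall back to showing the unprocessed video
     console.error('Upscaler failed to load', err);
   }
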

If you want to upscale multiple streams to different canvases, you will need to define a separate upscaler for each canvas element.

Setting an input

At any time, you can set the input of the upscaler via the upscaler.setInput() method

 upscaler.setInput(source); // Sets input element

Accepted sources include

  • HTMLImageElement
  • HTMLCanvasElement
  • HTMLVideoElement
  • ImageData
  • ImageBitmap
  • Anything else that the WebGL texImage2D function accepts

Note

The architecture of the neural network is such that it expects a fixed-size input (the one specified in the load function) during your render cycle. If you provide an input with different dimensions, it will be resized to the fixed input resolution.

Rendering

Finally, you can render using:

upscaler.render();

This runs the AI upscaling process on the current input and renders the result to the destination canvas.
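As an example of frame-level usage, you could upscale a single image once it has decoded. This is a sketch under the assumption that the upscaler was loaded with w/h matching the image's dimensions; the image URL is a placeholder:


   const image = new Image();
   image.onload = function () {
     upscaler.setInput(image); // feed the decoded image
     upscaler.render();        // draw the upscaled result to the destination canvas
   };
   image.src = 'low-res-frame.png'; // placeholder URL
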

Styling & Scaling

You need to set the width and height of your input image or video stream using the w and h properties passed to the load function.

Based on whether you are using a 2X network or a 3X network, it will set the canvas.width and canvas.height properties to 2x or 3x the specified w and h.

If you want your canvas to be displayed at anything other than 2*w by 2*h on the screen, you should use CSS styling:

canvas.style.width = desiredWidth + "px";
canvas.style.height = desiredHeight + "px";

The browser will still upscale the image from w x h to 2*w x 2*h, but will then use CSS scaling (bicubic) to scale the final output to the height/width you specify via CSS.

Performance

When running AI filters on client devices, the most practical challenge is client-side performance, as the filters require a large number of computations. This can especially become an issue on low-end devices (such as entry-level smartphones).

Accordingly, we have focused a great deal on making our AI models as efficient as possible, to enable good quality outputs while still maintaining good client-side rendering performance on low-end devices.

Background Segmentation

Our background filter uses the Google Selfie model by default. We also have an experimental WebGL-based background segmentation model, which uses significantly less CPU than any existing alternative (see here for more info).

You can verify performance for yourself with a profiling tool like Google Chrome's performance profiler

Performance Monitoring

To check performance on a given device, you can use the static checkPerformance method, without actually instantiating a filter on screen.

							 
     const performanceResults  = await BackgroundFilter.checkPerformance({token: '...'});
							 
						 

This cycles through a 720p input for 2 seconds and returns an object with the following fields:

							 
	{
	  "inputSize": "1280x720",
	  "frameCount": Number,
	  "time": Number,
	  "fps": Number
	}
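
One way to use these results (a hypothetical example; the 15 fps cutoff is an arbitrary threshold, not a library default) is to only enable the filter on devices that sustain a reasonable frame rate:


   // Sketch: `stream` is assumed to be a getUserMedia stream
   if (performanceResults.fps >= 15) {
     const filter = new BackgroundFilter(stream, {token: '...', background: 'blur'});
   } else {
     // Publish the unfiltered stream on devices that can't keep up
   }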
							 
						 

You can also measure the live fps on any given stream using the filter.processor.metrics.fps property, as shown below:

							 

   const filter = new BackgroundFilter(video, config);

   console.log(`The current fps is ${filter.processor.metrics.fps}`);
							 
						 

You can see a working code example in our Background Demo code, the same code used for our public demo

Performance Considerations

We recommend only running the Web Background Filters library for desktop users. Performance is considerably worse on mobile because the overhead of communicating between the browser and the GPU is much higher on mobile devices.
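If you want to enforce this in code, one possible heuristic (an assumption on our part, not an SDK feature) is a simple user-agent check before constructing the filter:


   // Sketch only: assumes this runs in an async function and `stream` is a getUserMedia stream.
   // Rough mobile detection; any device-detection approach will do.
   const isMobile = /Android|iPhone|iPad|Mobile/i.test(navigator.userAgent);

   const track = stream.getVideoTracks()[0];
   let publishTrack = track; // default: publish the unfiltered camera track

   if (!isMobile) {
     const filter = new BackgroundFilter(track, {token: 'vectorly-token', background: 'blur'});
     publishTrack = await filter.getOutputTrack();
   }
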

With native mobile SDKs, we expect performance to rival or exceed that of desktop clients, as is the case for our AI Upscaler Android SDK. We are planning to develop Background Filters Android and iOS SDKs in Q3 2021.

AI Upscaling

The primary "cost" to doing super-resolution is computational complexity. While we have put a lot of work into making super resolution feasible on client devices, it is still something which needs to be managed. Here, we provide some initial performance benchmarks for the same demos shown above, in the demos sections.

Performance Considerations

AI Upscaling does require some computational effort; however, it runs mostly on the graphics card, so its impact on the CPU is limited. The amount of computation (and therefore the framerate / performance) depends on the size of the input video you are upscaling.

The following table should give a rough idea of performance for different input video resolutions. These results are for Web environments only; our mobile SDKs will have access to more powerful native libraries, enabling significantly better performance.

                       240p -> 480p/720p   360p -> 720p/1080p   480p -> 960p/1440p
High-end smartphone    120 fps             40 fps               14 fps
Mid-range smartphone   80 fps              28 fps               9 fps
Low-end smartphone     20 fps              6 fps                3 fps
Mid-range laptop       100 fps             35 fps               8 fps
GPU desktop            200+ fps            200+ fps             80 fps

You can measure the fps at any time with the upscaler.metrics.fps property. The fps reported by upscaler.metrics.fps will not exceed the source video's frame rate, because we only render when a video frame changes.

We recommend sticking to 240p or 360p inputs, as mid-range devices tend to struggle with larger inputs. You can also disable upscaling if the fps gets too low.
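For example, you could periodically poll the live fps and switch the upscaler off when the device falls behind. This is a hypothetical guard; the 15 fps threshold and 5-second interval are arbitrary choices:


   setInterval(function () {
     if (upscaler.metrics && upscaler.metrics.fps < 15) {
       upscaler.disable(); // fall back to the plain video element
     }
   }, 5000);
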

Quality

The primary benefit of Super Resolution is increased video quality. Using the original high-resolution video as a reference, we can use traditional video quality metrics like VMAF to quantify the quality improvement of Super Resolution compared to normal bicubic upscaling of the downsampled / low-resolution video content.

Our general AI upscaler filter generally achieves a 10 to 15 point VMAF improvement compared to bicubic scaling. With content-specific AI models, or heavier models, we will likely be able to achieve further quality gains. We are currently working on releasing quality comparisons for content specific models.

Quality visualization

For reference, below are side-by-side comparisons of bicubic upscaling of the low-resolution original, Super Resolution of the low-resolution original, and the high-resolution original.

Jellyfish: [images: bicubic (240p), 240p upscaled to 720p, original 720p]

Ducks: [images: bicubic (240p), 240p upscaled to 720p, original 720p]

Tractor: [images: bicubic (240p), 240p upscaled to 720p, original 720p]