Introduction

Last updated: 2021-02-11

Vectorly's AI upscaler libraries convert low resolution video to HD in real-time on users' devices, enabling HD viewing experiences while using 50% to 90% less bandwidth ("AI Compression").

Super Resolution

Vectorly's AI upscaling technology is based on a concept called Super Resolution, which uses AI to upscale and enhance images. Through Super Resolution, we can upscale and clean-up low-resolution video, making it look close to HD quality.

Super Resolution example: 240p upscaled to 720p, compared with the original 720p.

While AI enhancement tasks normally require lots of computation, we've developed ultra-fast upscaling technology which can run in real time, even on low-end smartphones.

This lets you stream SD video to users and upscale it to HD in real time as they watch, providing an HD viewing experience while only consuming the bandwidth of the low-resolution video (50% to 90% less data than the HD video).
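As a quick back-of-the-envelope illustration (the bitrates below are generic assumptions for a typical encoding ladder, not Vectorly measurements), the savings from delivering a low-resolution rendition instead of the HD one can be computed directly:

```javascript
// Percent bandwidth saved by streaming a low-resolution rendition
// instead of the HD one. Bitrates are illustrative assumptions.
function bandwidthSavings(lowKbps, hdKbps) {
  return Math.round(100 * (1 - lowKbps / hdKbps));
}

console.log(bandwidthSavings(300, 2500)); // 240p @ 300 kbps vs 720p @ 2.5 Mbps -> 88 (% less data)
console.log(bandwidthSavings(800, 1600)); // a more conservative ladder -> 50 (% less data)
```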

Upscaling libraries

Vectorly's AI Upscaling libraries run entirely on the client side, within your website or app. They work as plugins to native or HTML5 video players, up-scaling and enhancing video content as your users watch it.

Basic Web Example:
   videojs.registerPlugin('vectorlyPlugin', vectorlyUpscaler.videoJSPlugin);
   const player = videojs('my-video');
   const vjsUpscaler = player.vectorlyPlugin({token: '...'});

You can think of AI upscaling as a final, optional layer at the end of the video streaming pipeline. Upscaling happens after the video is decoded and rendered by the browser, meaning it is compatible with any codec and any streaming architecture (HLS, DASH, etc.), and works equally well on live content, video-on-demand, and video conferencing.

We are currently working on Android and iOS mobile SDKs.

Getting Started

First, sign up in the dashboard to get a token, which you will need to use the library. Next, install the upscaler library via NPM or CDN.


npm install @vectorly-io/ai-upscaler

There are various versions available, including the vectorly-upscaler.js library, as well as plugins for different HTML5 video players. You can find version specific installation instructions here.

Hello World

The fastest way to get upscaling that "just works" is to use the vectorly-videojs.js Video.js plugin. If you load the following code, you should see the video playing immediately:

   <link href="" rel="stylesheet" />
   <script src=""></script>
   <script src=""></script>

   <video id="my-video" class="video-js" controls width="1280" height="720" data-setup="{}" crossorigin="anonymous">
       <source src="" type="video/mp4" />
   </video>

   <script>
       videojs.registerPlugin('vectorlyPlugin', vectorlyUpscaler.videoJSPlugin);

       const player = videojs('my-video');

       const upscaler = player.vectorlyPlugin({token: '...'});

       upscaler.addEventListener('load', function () {
           console.log("Upscaler initialized");
       });

       upscaler.addEventListener('error', function () {
           console.log("Failed to initialize");
       });
   </script>

Make sure you add your token, and verify that "Upscaler initialized" is printed in the console to confirm that it's working. You can see a working example on CodePen.

For other players, scroll down to the web section or see the full API here.


Web

For web environments, we've packaged our upscaler as a standalone JavaScript library, as well as plugins for several popular HTML5 video players (see the full API for more detail).

Basic usage

For the vectorly-upscaler.js library, the basic API involves instantiating a vectorlyUpscaler object and specifying a video element.

   const video = document.getElementById("video");

   const config = {
       token: '...'
   };

   const upscaler = new vectorlyUpscaler(video, config);

This automatically upscales the video by overlaying a canvas element, containing the upscaled video frames, on top of the video element. When the video plays, the upscaler automatically upscales each frame and updates the canvas element. See the styling section for more detail.


Besides the vectorly-upscaler.js library, we have plugins for several specific HTML5 players (see the full API for more detail).

Video.js

   videojs.registerPlugin('vectorlyPlugin', vectorlyUpscaler.videoJSPlugin);
   const player = videojs('my-video');
   const vjsUpscaler = player.vectorlyPlugin({token: '...'});

Shaka player

    async function init() {
       const video = document.getElementById('my-video');
       const ui = video['ui'];
       const controls = ui.getControls();
       const player = controls.getPlayer();

       try {
           await player.load(url);
           // This runs if the asynchronous load is successful.
           const upscaler = new vectorlyUpscaler.shakaPlugin(player, {
                token: '...'
           });
       } catch (error) {
           console.error('Error loading the video', error);
       }
    }

    document.addEventListener('shaka-ui-loaded', init);

Custom plugin

You can easily build a Vectorly plugin for any HTML5 video player. All you really need is the video element and the video container div, which contains the video UI elements and is used for styling and layout. Below is demo plugin code, on which our other HTML5 player plugins are based:
   import vectorlyUpscaler from '@vectorly-io/ai-upscaler';

   class myPlugin {

     constructor(videoElement, config) {
       const container = videoElement.parentNode; // Or whatever the video container div is
       const upscaler = new vectorlyUpscaler(videoElement, config);
       this.upscaler = upscaler;
     }

     on(event, callback) {
       this.upscaler.on(event, callback);
     }

     changeNetwork(networkParams) {
       // Switch AI models at runtime (see the full API)
     }
   }

   export default myPlugin;


Electron

If you're building an Electron app, the Vectorly library is fairly plug-and-play, and will work with either the CDN or NPM installation.

You can see a demo Electron app repository here

Performance Considerations

AI upscaling does require some computational effort; however, most of it runs on the graphics card, so the impact on the CPU is limited. The amount of computation (and therefore the framerate) depends on the resolution of the input video you are upscaling.

The following table should give a rough idea of performance for different input video resolutions. These results are for web environments only; our mobile SDKs will have access to more powerful native libraries, enabling significantly better performance.

                        240p -> 480p/720p    360p -> 720p/1080p    480p -> 960p/1440p
High-end smartphone     120 fps              40 fps                14 fps
Mid-range smartphone    80 fps               28 fps                9 fps
Low-end smartphone      20 fps               6 fps                 3 fps
Mid-range laptop        100 fps              35 fps                8 fps
GPU desktop             200+ fps             200+ fps              80 fps

You can measure the fps on any given device by setting the analyticsEnabled flag to true in the configuration parameters.

   const config = {
       token: '...',
       analyticsEnabled: true
   };

   const upscaler = new vectorlyUpscaler(video, config);

You can then read the fps at any time via the upscaler.metrics.fps property. The number it reports will not exceed the source video's frame rate, because we only render when a video frame changes.

It's recommended to stick to 240p or 360p inputs, as mid-range devices tend to struggle with larger inputs. You can also disable upscaling if the fps gets too low.
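That fallback can be sketched as follows, using the upscaler.metrics.fps property described above. Note that upscaler.disable() is a hypothetical method name used for illustration; check the full API for the actual enable/disable calls.

```javascript
// Decide whether to fall back based on the measured framerate.
// An fps of 0 usually means "not measured yet", so don't disable on it.
function shouldDisable(fps, minFps) {
  return fps > 0 && fps < minFps;
}

// Poll upscaler.metrics.fps every few seconds and fall back when it drops.
function monitorUpscaler(upscaler, minFps = 15, intervalMs = 5000) {
  const timer = setInterval(() => {
    if (shouldDisable(upscaler.metrics.fps, minFps)) {
      upscaler.disable(); // hypothetical method name -- see the full API
      clearInterval(timer);
    }
  }, intervalMs);
  return timer;
}
```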


Events

Once you have instantiated the upscaler object, you can listen for basic upscaler events, such as load and error events.

   const upscaler = new vectorlyUpscaler(video, config);

   upscaler.on('load', function () {
       console.log("Upscaler initialized");
   });

   upscaler.on('start', function () {
       console.log("Starting upscaling");
   });

   upscaler.on('stop', function () {
       console.log("Stopping upscaling");
   });

   upscaler.on('error', function () {
       console.log("Failed to initialize");
   });

You can also enable and disable the upscaler programmatically.

   const upscaler = new vectorlyUpscaler(video, config);



Styling and Scaling

Let's say you have a video element, inside of a basic container div.

   <div id="container">
      <video src="video.mp4"></video>
   </div>

When you feed that video element to the upscaler instantiation function, it will create a canvas element as a sibling node, with the same parent node as the video element.

     <div id="container">
         <video src="video.mp4" style="visibility: hidden"></video>
         <canvas id="output"></canvas> <!-- Where the upscaled frames are drawn -->
     </div>

The upscaler library styles this canvas to occupy 100% of the width and height of the parent element, which, in practice, covers the video element in most HTML5 video player interfaces.

To have more control over the styling and position of the output, you can use the container option to specify a div element in which to place the destination canvas.

   const video = document.getElementById("video");
   const div = document.getElementById("my-div");

   const config = {
       token: '...',
       container: div // Any div element
   };

   const upscaler = new vectorlyUpscaler(video, config);

The output canvas will occupy the exact dimensions of the container div, and will dynamically resize and re-position whenever the container div is moved, resized, or changed. To dynamically style and position the output, therefore, you should style and position the container element.


Models

There are multiple AI models you can choose from. The default is 'residual_3k_3x', but you can specify a model when instantiating the upscaler object.

   const upscaler = new vectorlyUpscaler(video, {
       token: '...',
       networkParams: { name: 'residual_3k_3x', tag: 'general', version: '2.1' }
   });

We are constantly releasing new models. You can find a comprehensive list of models here

Low level controls

For use cases where lower-level control is needed, such as upscaling individual frames or images, using a custom decoder, or upscaling as part of a broader image-processing pipeline, you can use the vectorly-core library.

With the low-level upscaling API, you have control over:

  • The Input source
  • The destination
  • When rendering happens

Setting a destination

Each Upscaler object is tied to an individual canvas element, and renders to that canvas element.

You specify the canvas element you want to render the upscaled frames to via the upscaler constructor:

const upscaler = new vectorlyUpscaler.core();
upscaler.load({
    w: videoWidth,
    h: videoHeight,
    renderSize: {w: videoWidth * 2, h: videoHeight * 2},
    canvas: document.getElementById('your-canvas-element'),
    networkParams: {name: 'model-name', tag: 'model-tag', version: 'model-version'},
    token: "your-token"
});

If you want to upscale multiple streams to different canvases, you will need to define a separate upscaler for each canvas element.
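For example, the load() options for each stream can be built with a small helper. This is a sketch assuming the constructor and load() signature shown above, with a 2x network; the element IDs and model names are illustrative placeholders.

```javascript
// Build the load() options for one core upscaler / canvas pair.
function buildCoreConfig(w, h, scale, canvas, token) {
  return {
    w: w,
    h: h,
    renderSize: { w: w * scale, h: h * scale },
    canvas: canvas,
    networkParams: { name: 'model-name', tag: 'model-tag', version: 'model-version' },
    token: token
  };
}

// One upscaler per output canvas (element IDs are illustrative):
// const upscalerA = new vectorlyUpscaler.core();
// upscalerA.load(buildCoreConfig(320, 240, 2, document.getElementById('canvas-a'), 'your-token'));
// const upscalerB = new vectorlyUpscaler.core();
// upscalerB.load(buildCoreConfig(320, 240, 2, document.getElementById('canvas-b'), 'your-token'));
```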

Setting an input

At any time, you can set the input of the upscaler via the upscaler.setInput() method:

 upscaler.setInput(source); // Sets input element

Accepted sources include

  • HTMLImageElement
  • HTMLCanvasElement
  • HTMLVideoElement
  • ImageData
  • ImageBitmap
  • Anything else that the texImage2D function accepts


Rendering

Finally, you can trigger a render, which will run the AI upscaling process and draw the result to the canvas (see the full API for the render call).

Styling & Scaling

You need to set the width and height of your input image or video stream using the w and h properties in the constructor.

Depending on whether you are using a 2x or a 3x network, the library will set the canvas.width and canvas.height properties to 2x or 3x the specified w and h.
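Concretely, the resulting canvas pixel dimensions are just the input dimensions multiplied by the network's scale factor:

```javascript
// Canvas pixel dimensions implied by the network's scale factor (2 or 3).
function outputSize(w, h, scale) {
  return { width: w * scale, height: h * scale };
}

console.log(outputSize(426, 240, 3)); // 240p input, 3x network -> { width: 1278, height: 720 }
console.log(outputSize(640, 360, 2)); // 360p input, 2x network -> { width: 1280, height: 720 }
```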

If you want your canvas to be displayed at anything other than 2*w by 2*h on the screen, you should use CSS styling: = desiredWidth + "px"; = desiredHeight + "px";

The library will still upscale the image from w x h to 2*w x 2*h, but the browser will then use CSS scaling (bicubic) to scale the final output to the width/height you specify via CSS.

If you encounter any issues with these libraries, you can send us a message.

Report Bugs



Android

Our Android SDK works as a plugin to ExoPlayer. You will therefore need to use ExoPlayer, or an ExoPlayer-derived player, in order to upscale video.

First, you'll need to include our SDK into your app's gradle file. You can import it from our Maven repository, as shown below.

   repositories {
     maven { url "" }
   }

   dependencies {
      implementation 'io.vectorly.glnnrender:glnnrender:0.1.1'
   }
You can then add the following imports to the activity which manages your ExoPlayer view:

   import io.vectorly.glnnrender.GlPlayerView;
   import io.vectorly.glnnrender.networks.NetworkTypes;

Once the ExoPlayer view is set up, you can set up the upscaler as in the following example. You'll need to supply your API key, which you can get from the Vectorly dashboard.

   private GlPlayerView ePlayerView;

   private void setupUpscalerView() {

       String api_key = "...";
       ePlayerView = new GlPlayerView(this, api_key);

       ePlayerView.setNetwork(NetworkTypes.DEFAULT, getApplicationContext());
       ePlayerView.setLayoutParams(new RelativeLayout.LayoutParams(ViewGroup.LayoutParams.MATCH_PARENT, ViewGroup.LayoutParams.MATCH_PARENT));
       ((MovieWrapperView) findViewById(;
   }


We've created a full working example using our library, which you can find below.

Example Repo


iOS

We plan to release an iOS SDK in Summer 2021.

Video Conferencing

You can also use Vectorly's AI Upscaler for upscaling video streams in any WebRTC video conferencing system.

General WebRTC

The vectorlyUpscaler object works with any video tag, so for a WebRTC application, all you need to do is to instantiate the upscaler object with the video element you want to upscale.

   const upscaler  = new vectorlyUpscaler(document.getElementById("remoteVideo"), {token: 'insert-vectorly-token-here'});

We have an example repository, showing how Vectorly can be integrated with WebRTC, as well as a full working general WebRTC demo here.

Integrating the upscaler with any specific video conferencing API or service just requires finding the video element associated with the video stream you want to upscale.
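When the player markup is not under your control, one generic way to find those elements is to watch the page for `video` tags as they are attached. This is a sketch using the standard MutationObserver DOM API; the vectorlyUpscaler constructor is the one documented above.

```javascript
// Wrap every <video> element added under `root` in an upscaler.
function upscaleNewVideos(root, token) {
  const seen = new WeakSet();
  const wrap = (video) => {
    if (seen.has(video)) return; // upscale each element at most once
    seen.add(video);
    new vectorlyUpscaler(video, { token: token });
  };

  // Handle videos that already exist, then watch for new ones.
  root.querySelectorAll('video').forEach(wrap);
  const observer = new MutationObserver((mutations) => {
    for (const m of mutations) {
      for (const node of m.addedNodes) {
        if (!(node instanceof Element)) continue;
        if (node.tagName === 'VIDEO') wrap(node);
        node.querySelectorAll('video').forEach(wrap);
      }
    }
  });
  observer.observe(root, { childList: true, subtree: true });
  return observer; // call observer.disconnect() to stop watching
}
```

This is convenient for conferencing UIs that create and destroy video elements as participants join and leave.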


Jitsi

You can enable upscaling on any VideoTrack object by intercepting the corresponding video element you attach it to (see reference).

   const room = connection.initJitsiConference('conference', confOptions);
   room.on(JitsiMeetJS.events.conference.TRACK_ADDED, function (track) {

      const videoElement = document.createElement('video');
      track.attach(videoElement);

      const upscaler = new vectorlyUpscaler(videoElement, {token: 'insert-vectorly-token'});
   });



Agora

For Web deployments using Agora, you can find the video element of the stream you want to upscale by using the stream's ID.

    let stream = AgoraRTC.createStream({
        streamID: uid,
        audio: true,
        video: true,
        screen: false
    });

    stream.init(function () {'target-div');
        const video = document.getElementById("video" + stream.getId());
        const upscaler = new vectorlyUpscaler(video, {token: 'insert-vectorly-token-here'});

        client.publish(stream, handleFail);
    });



Twilio

You can enable upscaling on any VideoTrack object by intercepting the corresponding video element you attach it to (see reference).

If you use the track.attach() method to create a video element:

   const Video = require('twilio-video');

   Video.createLocalVideoTrack().then(function (videoTrack) {
     const videoElement = videoTrack.attach();
     const upscaler = new vectorlyUpscaler(videoElement, {token: 'insert-vectorly-token'});
   });

If you specify your own video element:

   const Video = require('twilio-video');

   const videoElement = document.createElement('video');

   Video.createLocalVideoTrack().then(function (videoTrack) {
     videoTrack.attach(videoElement);
     const upscaler = new vectorlyUpscaler(videoElement, {token: 'insert-vectorly-token'});
   });


You can integrate Vectorly's AI upscaler with if you're building a custom video chat interface. Using the default React code sample from Daily, we've built a full working demo reference

   useEffect(() => {
       videoEl.current &&
           (videoEl.current.srcObject = new MediaStream([videoTrack]));
       if (videoEl.current && props.isLarge) {
           window.upscalers = window.upscalers || {};
           window.upscalers[] = new vectorlyUpscaler(videoEl.current, {token: 'insert-vectorly-token'});
       }
   }, [videoTrack]);

You just need to make sure you intercept the video element associated with the video track you want to upscale.

Vectorly's AI upscaler is not compatible with the pre-built UI from, as the pre-built UI is loaded via an iframe, making it impossible to access the video element from a third-party application.


For Super Resolution, the most practical challenge is client-side performance, as it requires a large number of computations. This can especially become an issue on low-end devices (such as entry-level smartphones).

Accordingly, we have focused a great deal on making our AI models as efficient as possible, to enable good quality outputs while still maintaining good client-side rendering performance on low-end devices.

Below are the quality and performance metrics for the demos outlined above. All of our performance results are for our generic upscaler model; we plan to release more models for different devices and quality levels.


The primary "cost" of doing Super Resolution is computational complexity. While we have put a lot of work into making Super Resolution feasible on client devices, it is still something that needs to be managed. Here, we provide some initial performance benchmarks for the same demos shown above, in the demos section.


Upscaling time varies from frame to frame, so we report average framerates. Framerates for the desktop were over 500 fps, but we capped the graph for clarity.

For reference, below are the specs of the devices we tested on.

                       GPU Desktop               Non-GPU Laptop                 High-end smartphone    Low-end smartphone
Device                 Alienware Aurora R11      Dell XPS 13                    Samsung Galaxy 8       Samsung A2
CPU                    Intel Core i5 x 6         Intel Core i7 - 1.8GHz x8      Exynos 1.9 GHz x8      Exynos 1.6 GHz x8
GPU                    NVIDIA GeForce GTX 1650   Mesa Intel UHD Graphics 620    Exynos Mali-G71 MP20   ARM Mali-T830 MP1
Retail Price ($ USD)   $1200                     $1200                          $600                   $90


The primary benefit of Super Resolution is increased video quality. Using the original high-resolution video as a reference, we can use traditional video quality metrics like VMAF to quantify the quality improvement of Super Resolution compared to normal bicubic upscaling of the downsampled, low-resolution video content.

Our general AI upscaler filter generally achieves a 10 to 15 point VMAF improvement compared to bicubic scaling. With content-specific AI models, or heavier models, we will likely be able to achieve further quality gains. We are currently working on releasing quality comparisons for content specific models.

Quality visualization

For reference, below are side-by-side comparisons of bicubic upscaling of the low-resolution video, Super Resolution of the low-resolution video, and the high-resolution original.


Each comparison shows: Bicubic (240p) / 240p upscaled to 720p / Original 720p.