Introduction

Last updated: 2021-02-11

Vectorly's AI upscaler libraries convert low resolution video to HD in real-time on users' devices, enabling HD viewing experiences while using 50% to 90% less bandwidth ("AI Compression").

Super Resolution

Vectorly's AI upscaling technology is based on a concept called Super Resolution, which uses AI to upscale and enhance images. Through Super Resolution, we can upscale and clean-up low-resolution video, making it look close to HD quality.

Super Resolution example: 240p upscaled to 720p, compared with the original 720p.

While AI enhancement tasks normally require lots of computation, we've developed ultra-fast upscaling technology which can run in real time, even on low-end smartphones.

This lets you stream SD video to users and upscale it to HD in real time as they watch, providing an HD viewing experience while only consuming the bandwidth of the low-resolution video (50% to 90% less data than the HD video).

Upscaling libraries

Vectorly's AI Upscaling libraries run entirely on the client side, within your website or app. They work as plugins to native or HTML5 video players, up-scaling and enhancing video content as your users watch it.

Basic Web Example:
This is a simplified example of our VideoJS plugin.

You can think of AI upscaling as a final, optional layer at the end of the video streaming pipeline. Upscaling happens after video is decoded and rendered by the browser, meaning that it is compatible with any codec, any streaming architecture (HLS/DASH etc…), and works equally well on live and video-on-demand content, as well as video conferencing.

We are currently working on Android and iOS mobile SDKs.

Getting Started


You can get our Beta HTML5 Upscaler libraries here, which provide instructions for loading the standalone upscaler, as well as the vectorly-videojs.js file used in the Hello World example.

Each upscaler relies on AI models and weights to do the upscaling. These come in the form of additional JavaScript files, which we've hosted on a CDN for convenience. The library will load the default model at runtime, but you can find more information on choosing a different model in the Models section.

Hello World

The easiest way to get started with upscaling is our VideoJS JavaScript plugin. Just include vectorly-videojs.js from the repository (along with the model files and weights) in your page, and you should instantly see the video playing.
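A rough sketch of what this setup could look like (the CDN URL, file paths, and plugin option names below are illustrative assumptions, not confirmed API; consult the library's own README for the authoritative example):

```
<!-- Illustrative sketch only: URLs, paths, and option names are assumptions. -->
<script src="https://cdn.example.com/video.js"></script>
<script src="https://cdn.example.com/vectorly-videojs.js"></script>
<script src="https://cdn.example.com/residual_3k.js"></script> <!-- model + weights -->

<video id="my-video" class="video-js" controls>
  <source src="my-240p-video.mp4" type="video/mp4">
</video>

<script>
  // Initialize VideoJS with the upscaler plugin enabled
  var player = videojs('my-video', {
    plugins: {
      upscaler: { width: 1280, height: 720 } // target output resolution
    }
  });
</script>
```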

For a more detailed API, and for other players, scroll down to the Web section.


In all the examples, we specify height and width, which is the target resolution we are upscaling to. Our libraries upscale by a factor of 3, so if you specify 720p output, the upscaler expects a 240p input; otherwise, it will first scale the input to 240p. Upscaling to higher resolutions improves video quality but lowers framerate.
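As a worked example of the fixed 3x factor, the expected input resolution for a given target can be computed like this (the helper name is ours, not part of the Vectorly API):

```javascript
// Hypothetical helper (not part of the Vectorly API): given a target output
// resolution, compute the input resolution the upscaler expects, using the
// fixed 3x upscaling factor described above.
function expectedInputResolution(targetWidth, targetHeight) {
  const FACTOR = 3; // Vectorly's libraries upscale by a factor of 3
  return {
    width: Math.round(targetWidth / FACTOR),
    height: Math.round(targetHeight / FACTOR)
  };
}
```

For instance, a 1080p (1920x1080) target implies a 360p (640x360) input; anything else will first be scaled down to that size.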


For web environments, we've packaged our libraries as plugins to popular HTML5 players. We also have a standalone plugin, which you can configure with any HTML5 video player.


Our upscalers are currently based on WebGL 2.0, and will throw an error on Safari and Internet Explorer. We are working on making our libraries backwards compatible with WebGL 1.0, which covers every major browser.
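Until then, you may want to feature-detect WebGL 2.0 before loading the upscaler and fall back to plain playback otherwise. A minimal check (our own helper, not part of Vectorly's API) could be:

```javascript
// Hypothetical helper: detect WebGL 2.0 support so you can fall back to
// plain playback on browsers (e.g. Safari, Internet Explorer) that lack it.
function supportsWebGL2() {
  // Outside a browser (no DOM available), report no support.
  if (typeof document === 'undefined') return false;
  const canvas = document.createElement('canvas');
  return canvas.getContext('webgl2') !== null;
}
```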


If you are just getting started with Vectorly, or if you are not already using an HTML5 player, we highly recommend using one of the following players and the corresponding Vectorly plugin.
  • VideoJS
  • Shaka Player
  • JWPlayer
If you're using a popular HTML5 player that we haven't created a plugin for, send us a message, and we can build a plugin for it.




Core Upscaler

The entry point is 'upscaler.js'. You'll need to include the model weights and files as well.


Once you have instantiated the upscaler object, you can access basic upscaler events, like onload and error handling.

Enable / Disable

You can also enable and disable the upscaler programmatically.
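Putting the pieces above together, a sketch of instantiation, event handling, and toggling might look like the following (the constructor and method names are our assumptions based on the prose, not confirmed API):

```
// Illustrative sketch: 'Upscaler', 'on', 'enable', and 'disable' are assumed names.
var upscaler = new Upscaler({
  video: document.getElementById('video'), // the <video> element to upscale
  width: 1280,                             // target output resolution
  height: 720
});

upscaler.on('load', function () {
  console.log('Upscaler ready');
});

upscaler.on('error', function (err) {
  console.error('Upscaler error:', err);
});

// Toggle upscaling at runtime, e.g. based on device capability or a user setting:
upscaler.disable();
upscaler.enable();
```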


There are multiple AI models you can choose from. The default is 'residual_3k', but you can specify a different model when instantiating the upscaler object.

The available models are:

Model Name  | Latest Version | Description
residual_3k | 2.1            | Basic lightweight model using residuals
residual_5k | 0              | Slightly heavier residual model
vdsr        | 0              | Old model, not as good as residual_3k
7K          | 0              | Old model, the worst performing; deprecated
cnn_demo    | 0              | Demo network; doesn't do anything, feel free to check out the source
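For instance, selecting the heavier residual model at instantiation might look like this (the 'model' option name is an assumption on our part):

```
// Illustrative sketch: the 'model' option name is an assumption.
var upscaler = new Upscaler({
  video: document.getElementById('video'),
  width: 1280,
  height: 720,
  model: 'residual_5k' // default is 'residual_3k'
});
```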

If you encounter any issues with these libraries, you can send us a message.

Report Bugs


We are currently working on Mobile SDKs for upscaling on Android, iOS and Flutter.


For Android platforms, we plan to release a plugin for ExoPlayer in February 2021.


We plan to release an iOS SDK in Summer 2021.


We have a Flutter SDK on our roadmap for 2021, but do not have a specific timeline yet.

Video Conferencing

You can also use Vectorly's AI Upscaler for upscaling within Video Conferencing architectures. Here, we outline 3 different ways AI upscaling can be used to improve video quality for end users within WebRTC Video Conferencing systems.

We've also put together an example repository, showing how Vectorly can be integrated with WebRTC.

You can see a full working WebRTC demo here.

Client Side Upscaling

You can add Vectorly's AI upscaler directly on the client side for one or more clients, which helps improve the video experience for each individual receiver (for example, if the receiver's network speed drops). This option primarily makes sense for simulcast SFU and MCU architectures, where receivers have the option of receiving multiple quality levels of video.

You'll need to feed the upscaler the video tag of the feed you want to upscale. For a real-world example, you can see the exact line where the upscaler is defined in the WebRTC demo repo.

When you feed the input video element to the upscaler, it will automatically upscale the corresponding tag.
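A sketch of this wiring in a WebRTC context (attaching a remote MediaStream to a video element is standard WebRTC; the Upscaler constructor and its options are assumed names, not confirmed API):

```
// Illustrative sketch. video.srcObject assignment is standard WebRTC;
// the Upscaler constructor and options are assumptions.
peerConnection.ontrack = function (event) {
  var video = document.getElementById('remote-video');
  video.srcObject = event.streams[0]; // standard WebRTC

  var upscaler = new Upscaler({
    video: video, // the upscaler reads frames from this element
    width: 1280,
    height: 720
  });
};
```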

Client Side Compression

In pure peer-to-peer architectures or regular SFU architectures, each broadcaster only broadcasts one video quality. In these scenarios, you would need to downscale the broadcast video quality for each broadcaster.

Doing this makes sense in scenarios where you would want to reduce bandwidth for all participants involved.

To implement this, you would need to set the output resolution for each broadcaster's video (see example), as well as adjust the output bitrate in WebRTC (example). You can see a full working version of bandwidth and bitrate selection in the WebRTC demo repository.
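Both steps can be done with standard WebRTC APIs. The browser-only sketch below captures at 360x640 and caps the encoder at 500 kbps, matching the recommended H.264 setting for that resolution; the function name and resolution choice are ours:

```
// Browser-only sketch using standard WebRTC APIs.
async function broadcastLowRes(peerConnection) {
  // 1. Capture at a reduced resolution (360x640).
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: { ideal: 640 }, height: { ideal: 360 } }
  });
  const [track] = stream.getVideoTracks();
  peerConnection.addTrack(track, stream);

  // 2. Cap the encoder bitrate on the video sender (500 kbps for H.264).
  const sender = peerConnection.getSenders().find(s => s.track === track);
  const params = sender.getParameters();
  if (!params.encodings || !params.encodings.length) params.encodings = [{}];
  params.encodings[0].maxBitrate = 500000; // bits per second
  await sender.setParameters(params);
}
```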

This lower-quality video is then broadcast to all users, who will then re-upscale it to normal resolution with the AI upscaling module.

For guidance, below are the recommended encoding settings for WebRTC-based video conferencing at different input resolutions of users' video cameras.

Input Resolution | H264    | VP9
240x320          | 300kbps | 200kbps
360x480          | 400kbps | 300kbps
360x640          | 500kbps | 400kbps
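The recommendations above can be encoded as a small lookup helper (hypothetical, our own code, not part of the Vectorly API):

```javascript
// Hypothetical helper encoding the recommended-bitrate table above.
// Keys follow the table's "height x width" input-resolution format; values in kbps.
const RECOMMENDED_BITRATE_KBPS = {
  '240x320': { h264: 300, vp9: 200 },
  '360x480': { h264: 400, vp9: 300 },
  '360x640': { h264: 500, vp9: 400 }
};

function recommendedBitrateKbps(width, height, codec) {
  const row = RECOMMENDED_BITRATE_KBPS[height + 'x' + width];
  return row ? row[codec.toLowerCase()] : undefined; // undefined: no recommendation
}
```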

Server Side Upscaling

One further use case is to upscale and enhance video just from users who are broadcasting video over poor connections. This has the advantage of not having any end-user requirements, but this is only possible in architectures where server-side transcoding is possible.

While we currently don't have a server-side upscaler, it would be straightforward to build, and the quality and performance would be much better than on client-side devices. If you'd like to implement server-side upscaling, we'd love to learn more about your requirements.


Below are a few demos of our upscaling technology.


Our upscalers are currently based on WebGL 2.0, and will throw an error on Safari and Internet Explorer. We are working on making our libraries backwards compatible with WebGL 1.0, which covers every major browser.


For Super Resolution, the most practical challenge is client-side performance, as it requires a large number of computations. This can especially become an issue on low-end devices (such as entry-level smartphones).

Accordingly, we have focused a great deal on making our AI models as efficient as possible, to enable good quality outputs while still maintaining good client-side rendering performance on low-end devices.

Below, you can see the quality and performance metrics for the demos outlined above. All of our performance results are for our generic upscaler model. We plan to make more models for different devices and different quality levels.


The primary "cost" of doing super-resolution is computational complexity. While we have put a lot of work into making super-resolution feasible on client devices, it is still something which needs to be managed. Here, we provide some initial performance benchmarks for the same demos shown above in the demos section.


Upscaling time varies from frame to frame, so we provide average framerates. Framerates for desktop were over 500fps, but we capped the graph for clarity purposes.

For reference, below are the specs for the devices we tested on.

                   | GPU Desktop             | Non-GPU Laptop                | High-end smartphone | Low-end smartphone
Device             | Alienware Aurora R11    | Dell XPS 13                   | Samsung Galaxy 8    | Samsung A2
CPU                | Intel Core i5 x 6       | Intel Core i7 - 1.8GHz x 8    | Exynos 1.9 GHz x 8  | Exynos 1.6 GHz x 8
GPU                | NVIDIA GeForce GTX 1650 | Intel UHD Graphics 620 (Mesa) | Mali-G71 MP20       | ARM Mali-T830 MP1
Retail Price (USD) | $1200                   | $1200                         | $600                | $90


The primary benefit of Super Resolution is increased video quality. Using the original high-resolution video as a reference, we can use traditional video quality metrics like VMAF to quantify the quality improvement of Super Resolution compared to normal bicubic upscaling of the downsampled, low-resolution video content.

Our general-purpose AI upscaling filter achieves a 10 to 15 point VMAF improvement over bicubic scaling. With content-specific or heavier models, we will likely be able to achieve further quality gains. We are currently working on releasing quality comparisons for content-specific models.

Quality visualization

For reference, below are side-by-side comparisons of bicubic upscaling of the low-resolution video, Super Resolution of the same low-resolution video, and the high-resolution original.


Example 1: Bicubic (240p) | 240p upscaled to 720p | Original 720p
Example 2: Bicubic (240p) | 240p upscaled to 720p | Original 720p
Example 3: Bicubic (240p) | 240p upscaled to 720p | Original 720p