Getting Started
Quick Start
// browser-only; see below for Node.js instructions
import Upscaler from 'upscaler';
const upscaler = new Upscaler();
upscaler.upscale('/image/path').then(upscaledSrc => {
// base64 representation of image src
console.log(upscaledSrc);
});
Browser Setup
In the browser, we can load UpscalerJS via a script tag, or install it from NPM and use a build tool like webpack, Parcel, or Rollup.
For runnable code examples, check out the guide on Script Tag Installation and the guide on installation via NPM.
Usage via Script Tag
First, ensure we've followed the instructions to install TensorFlow.js.
Then add the following tags to our HTML file:
<script src="https://cdn.jsdelivr.net/npm/@upscalerjs/default-model@latest/dist/umd/index.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/upscaler@latest/dist/browser/umd/upscaler.min.js"></script>
Upscaler will be available globally on our page. To use:
<script type="text/javascript">
const upscaler = new Upscaler({
model: DefaultUpscalerJSModel,
})
</script>
For a runnable code example, check out the guide on script tag usage.
Installation from NPM
We can install UpscalerJS from NPM. Ensure TensorFlow.js is installed alongside it.
npm install upscaler @tensorflow/tfjs
To use:
import Upscaler from 'upscaler'
const upscaler = new Upscaler()
We can install specific models with NPM as well:
npm install @upscalerjs/esrgan-thick
A full list of official models is available here. We can also use custom models others have trained.
For a runnable code example, check out the guide on NPM usage.
Node
Install UpscalerJS along with the TensorFlow.js package for our target platform. We can also install specific models.
For a runnable code example, check out the guide on Node.js usage.
tfjs-node
npm install upscaler @tensorflow/tfjs-node
To use:
const Upscaler = require('upscaler/node');
const upscaler = new Upscaler();
upscaler.upscale('/image/path').then(upscaledSrc => {
// base64 representation of image src
console.log(upscaledSrc);
});
tfjs-node-gpu
npm install upscaler @tensorflow/tfjs-node-gpu
To use:
const Upscaler = require('upscaler/node-gpu');
const upscaler = new Upscaler();
upscaler.upscale('/image/path').then(upscaledSrc => {
// base64 representation of image src
console.log(upscaledSrc);
});
Usage
Instantiation
By default, when UpscalerJS is instantiated, it uses the default model, @upscalerjs/default-model. We can use alternative models by installing them and providing them as an argument.
For a runnable code example, check out the guide on providing models.
For instance, to use @upscalerjs/esrgan-thick, we'd first install it:
npm install @upscalerjs/esrgan-thick
And then import and provide it:
import Upscaler from 'upscaler';
import x4 from '@upscalerjs/esrgan-thick/4x';
const upscaler = new Upscaler({
model: x4,
});
A full list of models can be found here.
Alternatively, we can provide a path to a pretrained model of our own:
const upscaler = new Upscaler({
model: {
path: '/path/to/model',
scale: 2,
},
});
See the API documentation for a model definition here.
Upscaling
We can upscale an image with the following code:
upscaler.upscale('/path/to/image').then(img => {
console.log(img);
});
In the browser, we can provide the image in any of the following formats:
- string - A URL to an image. Ensure the image can be loaded (for example, make sure the site's CORS policy allows for loading).
- tf.Tensor3D or tf.Tensor4D - A tensor representing an image.
- Any valid input to tf.browser.fromPixels
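To illustrate, here is a browser-only sketch of passing a tensor created from an image element. It assumes TensorFlow.js and UpscalerJS are available globally (as in the script tag setup above), and that an element with the hypothetical id my-image exists on the page:

```javascript
// Browser-only sketch: build a tf.Tensor3D from an <img> element and upscale it.
// Assumes TensorFlow.js (`tf`) and UpscalerJS (`Upscaler`) are loaded globally.
const image = document.getElementById('my-image'); // hypothetical element id
const tensor = tf.browser.fromPixels(image); // yields a tf.Tensor3D

const upscaler = new Upscaler();
upscaler.upscale(tensor).then(upscaledSrc => {
  console.log(upscaledSrc);
  tensor.dispose(); // release the input tensor's memory when done
});
```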
In Node, we can provide the image in any of the following formats:
- string - A path to a local image, or, if provided a string that begins with http, a URL to a remote image.
- tf.Tensor3D or tf.Tensor4D - A tensor representing an image.
- Uint8Array - a Uint8Array representing an image.
- Buffer - a Buffer representing an image.
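For example, here is a sketch of upscaling from a Buffer in Node. It assumes upscaler and @tensorflow/tfjs-node are installed, and the file path is illustrative:

```javascript
// Node-only sketch: read an image file into a Buffer and upscale it.
const fs = require('fs');
const Upscaler = require('upscaler/node');

const upscaler = new Upscaler();
const imageBuffer = fs.readFileSync('/path/to/image.png'); // illustrative path

upscaler.upscale(imageBuffer).then(upscaledSrc => {
  console.log(upscaledSrc);
});
```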
By default, a base64-encoded src attribute is returned. We can change the output type like so:
upscaler.upscale('/path/to/image', {
output: 'tensor',
}).then(img => {
console.log(img);
});
The available types for output are:
- src - A src URL of the upscaled image.
- tensor - The raw tensor.
Performance
For larger images, attempting to run inference can impact UI performance.
For runnable code examples, check out the guide on patch sizes.
To address this, we can provide a patchSize parameter to infer the image in "patches" and avoid blocking the UI. We will likely also want to provide a padding parameter:
({
patchSize: 64,
padding: 5,
})
Without padding, images will usually end up with unsightly artifacting at the seams between patches. We should use as small a padding value as we can get away with (usually anything above 3 will avoid artifacting).
Smaller patch sizes will block the UI less, but also increase overall inference time for a given image.
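Putting these together, a sketch of an upscale call with both parameters (the values here are illustrative starting points, not tuned recommendations):

```javascript
// Sketch: upscale in 64x64 patches with 5px of padding around each patch.
upscaler.upscale('/path/to/image', {
  patchSize: 64,
  padding: 5,
}).then(upscaledSrc => {
  console.log(upscaledSrc);
});
```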