I use HTML5 canvas elements to resize images in my browser. It turns out that the quality is very low. I found this: Disable Interpolation when Scaling a <canvas>, but it does not help to increase the quality.
Below is my CSS and JS code, as well as the image scaled with Photoshop and the image scaled with the canvas API.
What do I have to do to get optimal quality when scaling an image in the browser?
Note: I want to scale down a large image to a small one, modify color in a canvas and send the result from the canvas to the server.
CSS:
canvas, img {
    image-rendering: optimizeQuality;
    image-rendering: -moz-crisp-edges;
    image-rendering: -webkit-optimize-contrast;
    image-rendering: optimize-contrast;
    -ms-interpolation-mode: nearest-neighbor;
}
JS:
var $img = $('<img>');
var $originalCanvas = $('<canvas>');

$img.load(function() {
    var originalContext = $originalCanvas[0].getContext('2d');
    originalContext.imageSmoothingEnabled = false;
    originalContext.webkitImageSmoothingEnabled = false;
    originalContext.mozImageSmoothingEnabled = false;
    originalContext.drawImage(this, 0, 0, 379, 500);
});
The image resized with photoshop:
The image resized on canvas:
Edit:
I tried to make downscaling in more than one steps as proposed in:
Resizing an image in an HTML5 canvas and Html5 canvas drawImage: how to apply antialiasing
This is the function I have used:
function resizeCanvasImage(img, canvas, maxWidth, maxHeight) {
    var imgWidth = img.width,
        imgHeight = img.height;

    var ratio = 1, ratio1 = 1, ratio2 = 1;
    ratio1 = maxWidth / imgWidth;
    ratio2 = maxHeight / imgHeight;

    // Use the smallest ratio so that the image fits best into the maxWidth x maxHeight box.
    if (ratio1 < ratio2) {
        ratio = ratio1;
    }
    else {
        ratio = ratio2;
    }

    var canvasContext = canvas.getContext("2d");
    var canvasCopy = document.createElement("canvas");
    var copyContext = canvasCopy.getContext("2d");
    var canvasCopy2 = document.createElement("canvas");
    var copyContext2 = canvasCopy2.getContext("2d");
    canvasCopy.width = imgWidth;
    canvasCopy.height = imgHeight;
    copyContext.drawImage(img, 0, 0);

    // init
    canvasCopy2.width = imgWidth;
    canvasCopy2.height = imgHeight;
    copyContext2.drawImage(canvasCopy, 0, 0, canvasCopy.width, canvasCopy.height, 0, 0, canvasCopy2.width, canvasCopy2.height);

    var rounds = 2;
    var roundRatio = ratio * rounds;
    for (var i = 1; i <= rounds; i++) {
        console.log("Step: " + i);

        // tmp
        canvasCopy.width = imgWidth * roundRatio / i;
        canvasCopy.height = imgHeight * roundRatio / i;
        copyContext.drawImage(canvasCopy2, 0, 0, canvasCopy2.width, canvasCopy2.height, 0, 0, canvasCopy.width, canvasCopy.height);

        // copy back
        canvasCopy2.width = imgWidth * roundRatio / i;
        canvasCopy2.height = imgHeight * roundRatio / i;
        copyContext2.drawImage(canvasCopy, 0, 0, canvasCopy.width, canvasCopy.height, 0, 0, canvasCopy2.width, canvasCopy2.height);
    } // end for

    // copy back to canvas
    canvas.width = imgWidth * roundRatio / rounds;
    canvas.height = imgHeight * roundRatio / rounds;
    canvasContext.drawImage(canvasCopy2, 0, 0, canvasCopy2.width, canvasCopy2.height, 0, 0, canvas.width, canvas.height);
}
Here is the result if I use a 2-step downsizing:
Here is the result if I use a 3-step downsizing:
Here is the result if I use a 4-step downsizing:
Here is the result if I use a 20-step downsizing:
Note: It turns out that from 1 step to 2 steps there is a large improvement in image quality, but the more steps you add to the process, the fuzzier the image becomes.
Is there a way to solve the problem that the image gets more fuzzy the more steps you add?
Edit 2013-10-04: I tried the algorithm of GameAlchemist. Here is the result compared to Photoshop.
PhotoShop Image:
GameAlchemist's Algorithm:
This question is related to: javascript, css, html, canvas, html5-canvas
Instead of .85, if you pass 1.0 you will get the maximum quality:
data = canvas.toDataURL('image/jpeg', 1.0);
You can get a clear and bright image. Please check.
You can use step-down scaling as I describe in the links you refer to, but you appear to be using it in the wrong way.
Step-down scaling is not needed to scale images at ratios above 1:2 (typically, but not limited to). It is where you need to do a drastic down-scaling that you need to split it up into two (and rarely more) steps, depending on the content of the image (in particular where high frequencies such as thin lines occur).
Every time you down-sample an image you will lose details and information. You cannot expect the resulting image to be as clear as the original.
If you then scale down the image in many steps you will lose a lot of information in total, and the result will be poor, as you already noticed.
Try with just one extra step, or at most two.
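For reference, a minimal sketch of such a two-step down-scale (the element ids and the half-size intermediate step are only examples, not the exact code from the linked answers):
// Step 1: draw the image at half its original size on an off-screen canvas.
var img = document.getElementById('source');      // hypothetical id
var canvas = document.getElementById('target');   // final canvas, already sized to the target
var ctx = canvas.getContext('2d');

var oc = document.createElement('canvas');
oc.width = img.naturalWidth * 0.5;
oc.height = img.naturalHeight * 0.5;
oc.getContext('2d').drawImage(img, 0, 0, oc.width, oc.height);

// Step 2: draw the half-size copy onto the final canvas at the target size.
ctx.drawImage(oc, 0, 0, oc.width, oc.height, 0, 0, canvas.width, canvas.height);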
In the case of Photoshop, notice that it applies a convolution after the image has been re-sampled, such as sharpen. It's not just bi-cubic interpolation that takes place, so in order to fully emulate Photoshop we also need to add the steps Photoshop does (with the default setup).
For this example I will use my original answer that you refer to in your post, but I have added a sharpen convolution to it to improve quality as a post-process (see demo at bottom).
Here is the code for adding a sharpen filter (it's based on a generic convolution filter - I put the weight matrix for sharpen inside it, as well as a mix factor to adjust how pronounced the effect is):
Usage:
sharpen(context, width, height, mixFactor);
The mixFactor is a value between [0.0, 1.0] and allows you to tone down the sharpen effect - rule of thumb: the smaller the size, the less of the effect is needed.
Function (based on this snippet):
function sharpen(ctx, w, h, mix) {
    var x,
        weights = [0, -1, 0, -1, 5, -1, 0, -1, 0],
        katet = Math.round(Math.sqrt(weights.length)),
        half = (katet * 0.5) | 0,
        dstData = ctx.createImageData(w, h),
        dstBuff = dstData.data,
        srcBuff = ctx.getImageData(0, 0, w, h).data,
        y = h;

    while (y--) {
        x = w;
        while (x--) {
            var sy = y,
                sx = x,
                dstOff = (y * w + x) * 4,
                r = 0, g = 0, b = 0, a = 0;

            for (var cy = 0; cy < katet; cy++) {
                for (var cx = 0; cx < katet; cx++) {
                    var scy = sy + cy - half;
                    var scx = sx + cx - half;

                    if (scy >= 0 && scy < h && scx >= 0 && scx < w) {
                        var srcOff = (scy * w + scx) * 4;
                        var wt = weights[cy * katet + cx];

                        r += srcBuff[srcOff] * wt;
                        g += srcBuff[srcOff + 1] * wt;
                        b += srcBuff[srcOff + 2] * wt;
                        a += srcBuff[srcOff + 3] * wt;
                    }
                }
            }

            dstBuff[dstOff] = r * mix + srcBuff[dstOff] * (1 - mix);
            dstBuff[dstOff + 1] = g * mix + srcBuff[dstOff + 1] * (1 - mix);
            dstBuff[dstOff + 2] = b * mix + srcBuff[dstOff + 2] * (1 - mix);
            dstBuff[dstOff + 3] = srcBuff[dstOff + 3];
        }
    }

    ctx.putImageData(dstData, 0, 0);
}
The result of using this combination will be:
Depending on how much of the sharpening you want to add to the blend, you can get results ranging from the default "blurry" to very sharp:
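For instance, a rough sketch of varying the mix factor (the context name and the values are just examples):
// Illustrative only - pick one mix value; a higher value gives a sharper result.
sharpen(ctx, canvas.width, canvas.height, 0.2);    // subtle
// sharpen(ctx, canvas.width, canvas.height, 0.5); // moderate
// sharpen(ctx, canvas.width, canvas.height, 0.9); // very sharp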
If you want to get the best result quality-wise, you'll need to go low-level and consider implementing, for example, this brand new algorithm.
See Interpolation-Dependent Image Downsampling (2011) from IEEE.
Here is a link to the paper in full (PDF).
There are no implementations of this algorithm in JavaScript AFAIK at this time, so you're in for a handful if you want to throw yourself at this task.
The essence is (excerpts from the paper):
Abstract
An interpolation oriented adaptive down-sampling algorithm is proposed for low bit-rate image coding in this paper. Given an image, the proposed algorithm is able to obtain a low resolution image, from which a high quality image with the same resolution as the input image can be interpolated. Different from the traditional down-sampling algorithms, which are independent from the interpolation process, the proposed down-sampling algorithm hinges the down-sampling to the interpolation process. Consequently, the proposed down-sampling algorithm is able to maintain the original information of the input image to the largest extent. The down-sampled image is then fed into JPEG. A total variation (TV) based post processing is then applied to the decompressed low resolution image. Ultimately, the processed image is interpolated to maintain the original resolution of the input image. Experimental results verify that utilizing the downsampled image by the proposed algorithm, an interpolated image with much higher quality can be achieved. Besides, the proposed algorithm is able to achieve superior performance than JPEG for low bit rate image coding.
(see provided link for all details, formulas etc.)
If you wish to use canvas only, the best result will be with multiple down-steps. But that's not good enough yet. For better quality you need a pure JS implementation. We just released pica - a high-speed downscaler with variable quality/speed. In short, it resizes a 1280*1024px image in ~0.1s, and a 5000*3000px image in 1s, with the highest quality (Lanczos filter with 3 lobes). Pica has a demo, where you can play with your images, quality levels, and even try it on mobile devices.
Pica does not have unsharp mask yet, but that will be added very soon. That's much easier than implementing a high-speed convolution filter for resizing.
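For reference, a rough usage sketch based on pica's README (the element ids are hypothetical and the exact API may differ between versions):
const pica = require('pica')();

const from = document.getElementById('source-canvas'); // hypothetical ids
const to = document.getElementById('dest-canvas');

pica.resize(from, to)
    .then(result => pica.toBlob(result, 'image/jpeg', 0.90))
    .then(blob => {
        // e.g. upload the resized blob to the server
        console.log('resized blob size:', blob.size);
    });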
This is an improved Hermite resize filter that utilises one web worker so that the window doesn't freeze.
https://github.com/calvintwr/blitz-hermite-resize
const blitz = Blitz.create()

/* Promise */
blitz({
    source: DOM Image/DOM Canvas/jQuery/DataURL/File,
    width: 400,
    height: 600
}).then(output => {
    // handle output
}).catch(error => {
    // handle error
})

/* Await */
let resized = await blitz({...})

/* Old school callback */
const blitz = Blitz.create('callback')
blitz({...}, function(output) {
    // run your callback.
})
Fast canvas resample with good quality: http://jsfiddle.net/9g9Nv/442/
Update: version 2.0 (faster, web workers + transferable objects) - https://github.com/viliusle/Hermite-resize
/**
* Hermite resize - fast image resize/resample using Hermite filter. 1 cpu version!
*
* @param {HtmlElement} canvas
* @param {int} width
* @param {int} height
* @param {boolean} resize_canvas if true, canvas will be resized. Optional.
*/
function resample_single(canvas, width, height, resize_canvas) {
    var width_source = canvas.width;
    var height_source = canvas.height;
    width = Math.round(width);
    height = Math.round(height);

    var ratio_w = width_source / width;
    var ratio_h = height_source / height;
    var ratio_w_half = Math.ceil(ratio_w / 2);
    var ratio_h_half = Math.ceil(ratio_h / 2);

    var ctx = canvas.getContext("2d");
    var img = ctx.getImageData(0, 0, width_source, height_source);
    var img2 = ctx.createImageData(width, height);
    var data = img.data;
    var data2 = img2.data;

    for (var j = 0; j < height; j++) {
        for (var i = 0; i < width; i++) {
            var x2 = (i + j * width) * 4;
            var weight = 0;
            var weights = 0;
            var weights_alpha = 0;
            var gx_r = 0;
            var gx_g = 0;
            var gx_b = 0;
            var gx_a = 0;
            var center_y = (j + 0.5) * ratio_h;
            var yy_start = Math.floor(j * ratio_h);
            var yy_stop = Math.ceil((j + 1) * ratio_h);
            for (var yy = yy_start; yy < yy_stop; yy++) {
                var dy = Math.abs(center_y - (yy + 0.5)) / ratio_h_half;
                var center_x = (i + 0.5) * ratio_w;
                var w0 = dy * dy; //pre-calc part of w
                var xx_start = Math.floor(i * ratio_w);
                var xx_stop = Math.ceil((i + 1) * ratio_w);
                for (var xx = xx_start; xx < xx_stop; xx++) {
                    var dx = Math.abs(center_x - (xx + 0.5)) / ratio_w_half;
                    var w = Math.sqrt(w0 + dx * dx);
                    if (w >= 1) {
                        //pixel too far
                        continue;
                    }
                    //hermite filter
                    weight = 2 * w * w * w - 3 * w * w + 1;
                    var pos_x = 4 * (xx + yy * width_source);
                    //alpha
                    gx_a += weight * data[pos_x + 3];
                    weights_alpha += weight;
                    //colors
                    if (data[pos_x + 3] < 255)
                        weight = weight * data[pos_x + 3] / 250;
                    gx_r += weight * data[pos_x];
                    gx_g += weight * data[pos_x + 1];
                    gx_b += weight * data[pos_x + 2];
                    weights += weight;
                }
            }
            data2[x2] = gx_r / weights;
            data2[x2 + 1] = gx_g / weights;
            data2[x2 + 2] = gx_b / weights;
            data2[x2 + 3] = gx_a / weights_alpha;
        }
    }

    //clear and resize canvas
    if (resize_canvas === true) {
        canvas.width = width;
        canvas.height = height;
    } else {
        ctx.clearRect(0, 0, width_source, height_source);
    }

    //draw
    ctx.putImageData(img2, 0, 0);
}
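A minimal usage sketch (the element ids and the 400x300 target are only examples; the source image must first be drawn onto the canvas):
// Hypothetical usage: draw the image at its natural size, then resample the canvas in place.
var img = document.getElementById('source');
var canvas = document.getElementById('target');
canvas.width = img.naturalWidth;
canvas.height = img.naturalHeight;
canvas.getContext('2d').drawImage(img, 0, 0);

resample_single(canvas, 400, 300, true);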
I found a solution that doesn't need to access the pixel data directly and loop through it to perform the downsampling. Depending on the size of the image this can be very resource-intensive, and it is better to use the browser's internal algorithms.
The drawImage() function uses a linear-interpolation, nearest-neighbor resampling method. That works well when you are not resizing down to more than half the original size.
If you loop and only resize by at most one half at a time, the results are quite good, and it is much faster than accessing pixel data.
This function downsamples by half at a time until reaching the desired size:
function resize_image( src, dst, type, quality ) {
    var tmp = new Image(),
        canvas, context, cW, cH;

    type = type || 'image/jpeg';
    quality = quality || 0.92;

    cW = src.naturalWidth;
    cH = src.naturalHeight;

    tmp.src = src.src;
    tmp.onload = function() {
        canvas = document.createElement( 'canvas' );

        cW /= 2;
        cH /= 2;
        if ( cW < src.width ) cW = src.width;
        if ( cH < src.height ) cH = src.height;
        canvas.width = cW;
        canvas.height = cH;
        context = canvas.getContext( '2d' );
        context.drawImage( tmp, 0, 0, cW, cH );

        dst.src = canvas.toDataURL( type, quality );

        if ( cW <= src.width || cH <= src.height )
            return;

        tmp.src = dst.src;
    }
}

// The images sent as parameters can be in the DOM or be image objects
resize_image( $( '#original' )[0], $( '#smaller' )[0] );
Here is a reusable Angular service for high quality image / canvas resizing: https://gist.github.com/fisch0920/37bac5e741eaec60e983
The service supports lanczos convolution and step-wise downscaling. The convolution approach is higher quality at the cost of being slower, whereas the step-wise downscaling approach produces reasonably antialiased results and is significantly faster.
Example usage:
angular.module('demo').controller('ExampleCtrl', function (imageService) {
    // EXAMPLE USAGE
    // NOTE: it's bad practice to access the DOM inside a controller,
    // but this is just to show the example usage.

    // resize by lanczos-sinc filter
    imageService.resize($('#myimg')[0], 256, 256)
        .then(function (resizedImage) {
            // do something with resized image
        })

    // resize by stepping down image size in increments of 2x
    imageService.resizeStep($('#myimg')[0], 256, 256)
        .then(function (resizedImage) {
            // do something with resized image
        })
})
Maybe you can try this; it's what I always use in my projects. This way you get a high-quality result not only for the image, but for every other element drawn on your canvas as well.
/*
 * @param canvas => canvas object
 * @param rate   => the pixel quality
 */
function setCanvasSize(canvas, rate) {
    const scaleRate = rate;
    canvas.width = window.innerWidth * scaleRate;
    canvas.height = window.innerHeight * scaleRate;
    canvas.style.width = window.innerWidth + 'px';
    canvas.style.height = window.innerHeight + 'px';
    canvas.getContext('2d').scale(scaleRate, scaleRate);
}
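A minimal usage sketch (the element id is hypothetical); rendering at the device pixel ratio keeps the canvas crisp on high-DPI screens:
const canvas = document.getElementById('myCanvas');   // hypothetical id
setCanvasSize(canvas, window.devicePixelRatio || 1);
// Draw as usual afterwards - coordinates stay in CSS pixels
// because of the context.scale() call inside setCanvasSize().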
context.scale(xScale, yScale)
<canvas id="c"></canvas>
<hr/>
<img id="i" />
<script>
var i = document.getElementById('i');
i.onload = function(){
var width = this.naturalWidth,
height = this.naturalHeight,
canvas = document.getElementById('c'),
ctx = canvas.getContext('2d');
canvas.width = Math.floor(width / 2);
canvas.height = Math.floor(height / 2);
ctx.scale(0.5, 0.5);
ctx.drawImage(this, 0, 0);
ctx.rect(0,0,500,500);
ctx.stroke();
// restore original 1x1 scale
ctx.scale(2, 2);
ctx.rect(0,0,500,500);
ctx.stroke();
};
i.src = 'https://static.md/b70a511140758c63f07b618da5137b5d.png';
</script>
This is not the right answer for people who really need to resize the image itself, but just for shrinking the file size.
I had a problem with "directly from the camera" pictures that my customers often uploaded as "uncompressed" JPEGs.
Not so well known is that the canvas supports (in most browsers as of 2017) changing the quality of the JPEG:
data = canvas.toDataURL('image/jpeg', .85); // quality in [0..1], default is 0.92
With this trick I could reduce 4k x 3k pics of >10MB to 1 or 2MB; of course it depends on your needs.
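For illustration, a minimal sketch of the trick (the element id is hypothetical):
// Draw the uploaded photo onto a canvas and re-encode it as a smaller JPEG.
var img = document.getElementById('cameraPhoto');   // hypothetical id
var canvas = document.createElement('canvas');
canvas.width = img.naturalWidth;
canvas.height = img.naturalHeight;
canvas.getContext('2d').drawImage(img, 0, 0);

// 0.85 trades a little quality for a much smaller file; the default is 0.92.
var data = canvas.toDataURL('image/jpeg', 0.85);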
I really try to avoid running through image data, especially on larger images. Thus I came up with a rather simple way to decently reduce image size without any restrictions or limitations, using a few extra steps. This routine goes down to the lowest possible half-step above the desired target size. Then it scales up to twice the target size and then halves again. Sounds funny at first, but the results are astoundingly good and it gets there swiftly.
function resizeCanvas(canvas, newWidth, newHeight) {
    let ctx = canvas.getContext('2d');
    let buffer = document.createElement('canvas');
    buffer.width = ctx.canvas.width;
    buffer.height = ctx.canvas.height;
    let ctxBuf = buffer.getContext('2d');

    let scaleX = newWidth / ctx.canvas.width;
    let scaleY = newHeight / ctx.canvas.height;
    let scaler = Math.min(scaleX, scaleY);

    //see if target scale is less than half...
    if (scaler < 0.5) {
        //while loop in case target scale is less than quarter...
        while (scaler < 0.5) {
            ctxBuf.canvas.width = ctxBuf.canvas.width * 0.5;
            ctxBuf.canvas.height = ctxBuf.canvas.height * 0.5;
            ctxBuf.scale(0.5, 0.5);
            ctxBuf.drawImage(canvas, 0, 0);
            ctxBuf.setTransform(1, 0, 0, 1, 0, 0);

            ctx.canvas.width = ctxBuf.canvas.width;
            ctx.canvas.height = ctxBuf.canvas.height;
            ctx.drawImage(buffer, 0, 0);

            scaleX = newWidth / ctxBuf.canvas.width;
            scaleY = newHeight / ctxBuf.canvas.height;
            scaler = Math.min(scaleX, scaleY);
        }
        //only if the scaler is now larger than half, double target scale trick...
        if (scaler > 0.5) {
            scaleX *= 2.0;
            scaleY *= 2.0;
            ctxBuf.canvas.width = ctxBuf.canvas.width * scaleX;
            ctxBuf.canvas.height = ctxBuf.canvas.height * scaleY;
            ctxBuf.scale(scaleX, scaleY);
            ctxBuf.drawImage(canvas, 0, 0);
            ctxBuf.setTransform(1, 0, 0, 1, 0, 0);
            scaleX = 0.5;
            scaleY = 0.5;
        }
    } else
        ctxBuf.drawImage(canvas, 0, 0);

    //wrapping things up...
    ctx.canvas.width = newWidth;
    ctx.canvas.height = newHeight;
    ctx.scale(scaleX, scaleY);
    ctx.drawImage(buffer, 0, 0);
    ctx.setTransform(1, 0, 0, 1, 0, 0);
}
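A minimal usage sketch (the canvas id and the 400x300 target are made up for illustration):
// Assumes sourceCanvas already contains the full-size image.
let sourceCanvas = document.getElementById('sourceCanvas');   // hypothetical id
resizeCanvas(sourceCanvas, 400, 300);
// sourceCanvas is now 400x300; export it if needed:
let data = sourceCanvas.toDataURL('image/jpeg', 0.9);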
Why use the canvas to resize images? Modern browsers all use bicubic interpolation — the same process used by Photoshop (if you're doing it right) — and they do it faster than the canvas process. Just specify the image size you want (use only one dimension, height or width, to resize proportionally).
This is supported by most browsers, including later versions of IE. Earlier versions may require browser-specific CSS.
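A trivial sketch of letting the browser do the scaling (the element id is hypothetical):
// Set only one dimension; the browser keeps the aspect ratio and
// resamples the image itself (bicubic in modern browsers).
var img = document.getElementById('photo');   // hypothetical id
img.style.width = '400px';
img.style.height = 'auto';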
A simple function (using jQuery) to resize an image would be like this:
function resizeImage(img, percentage) {
    var coeff = percentage / 100,
        width = $(img).width(),
        height = $(img).height();

    return {"width": width * coeff, "height": height * coeff};
}
Then just use the returned value to resize the image in one or both dimensions.
Obviously there are different refinements you could make, but this gets the job done.
Paste the following code into the console of this page and watch what happens to the gravatars:
function resizeImage(img, percentage) {
    var coeff = percentage / 100,
        width = $(img).width(),
        height = $(img).height();

    return {"width": width * coeff, "height": height * coeff};
}

$('.user-gravatar32 img').each(function(){
    var newDimensions = resizeImage(this, 150);
    this.style.width = newDimensions.width + "px";
    this.style.height = newDimensions.height + "px";
});
DEMO: Resizing images with JS and the HTML canvas (demo fiddle).
You will find 3 different methods for doing this resize, which will help you understand how the code works and why.
https://jsfiddle.net/1b68eLdr/93089/
The full code of both the demo and the TypeScript method that you may want to use in your code can be found in the GitHub project.
https://github.com/eyalc4/ts-image-resizer
This is the final code:
export class ImageTools {
    base64ResizedImage: string = null;

    constructor() {
    }

    ResizeImage(base64image: string, width: number = 1080, height: number = 1080) {
        let img = new Image();
        img.src = base64image;

        img.onload = () => {
            // Check if the image requires resizing at all
            if (img.height <= height && img.width <= width) {
                this.base64ResizedImage = base64image;
                // TODO: Call method to do something with the resized image
            }
            else {
                // Make sure the width and height preserve the original aspect ratio and adjust if needed
                if (img.height > img.width) {
                    width = Math.floor(height * (img.width / img.height));
                }
                else {
                    height = Math.floor(width * (img.height / img.width));
                }

                let resizingCanvas: HTMLCanvasElement = document.createElement('canvas');
                let resizingCanvasContext = resizingCanvas.getContext("2d");

                // Start with the original image size
                resizingCanvas.width = img.width;
                resizingCanvas.height = img.height;

                // Draw the original image on the (temp) resizing canvas
                resizingCanvasContext.drawImage(img, 0, 0, resizingCanvas.width, resizingCanvas.height);

                let curImageDimensions = {
                    width: Math.floor(img.width),
                    height: Math.floor(img.height)
                };

                let halfImageDimensions = {
                    width: null,
                    height: null
                };

                // Quickly reduce the size by 50% each time in a few iterations until the size is less than
                // 2x the target size - the motivation is to reduce the aliasing that would have been
                // created by a direct reduction of a very big image to a small image
                while (curImageDimensions.width * 0.5 > width) {
                    // Reduce the resizing canvas by half and refresh the image
                    halfImageDimensions.width = Math.floor(curImageDimensions.width * 0.5);
                    halfImageDimensions.height = Math.floor(curImageDimensions.height * 0.5);

                    resizingCanvasContext.drawImage(resizingCanvas, 0, 0, curImageDimensions.width, curImageDimensions.height,
                        0, 0, halfImageDimensions.width, halfImageDimensions.height);

                    curImageDimensions.width = halfImageDimensions.width;
                    curImageDimensions.height = halfImageDimensions.height;
                }

                // Now do the final resize from the resizingCanvas to meet the dimension requirements
                // directly to the output canvas, which will output the final image
                let outputCanvas: HTMLCanvasElement = document.createElement('canvas');
                let outputCanvasContext = outputCanvas.getContext("2d");

                outputCanvas.width = width;
                outputCanvas.height = height;

                outputCanvasContext.drawImage(resizingCanvas, 0, 0, curImageDimensions.width, curImageDimensions.height,
                    0, 0, width, height);

                // Output the canvas pixels as an image. Params: format, quality
                this.base64ResizedImage = outputCanvas.toDataURL('image/jpeg', 0.85);

                // TODO: Call method to do something with the resized image
            }
        };
    }
}
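A minimal usage sketch (the base64 string would typically come from a FileReader; the names and sizes are illustrative):
// Hypothetical usage; base64String is assumed to be a data URL of the original image.
const tools = new ImageTools();
tools.ResizeImage(base64String, 800, 600);
// tools.base64ResizedImage is set asynchronously, once the image has loaded.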