[javascript] Remove duplicate values from JS array

I have a very simple JavaScript array that may or may not contain duplicates.

var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];

I need to remove the duplicates and put the unique values in a new array.

I could post all the code I've tried, but I don't think that's useful, because none of it works. I accept jQuery solutions too.

Tags: javascript, arrays, duplicates, unique

The answer is:


The easiest way to remove string duplicates is to use an object as an associative array, and then iterate over that object to build the list/array back.

Like below:

var toHash = {};
var toList = [];

// add each value from your data list to the hash; duplicate keys overwrite each other
$(data.pointsToList).each(function(index, Element) {
    toHash[Element.nameTo] = Element.nameTo;
});

// now convert the hash back to an array
// don't forget hasOwnProperty, or you may pick up inherited properties
for (var key in toHash) {
    if (toHash.hasOwnProperty(key)) {
        toList.push(toHash[key]);
    }
}

Voila, now duplicates are gone!
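Applied to the question's plain array of names, the same pattern looks like this (a minimal, self-contained sketch; data.pointsToList and Element.nameTo above are the answerer's own structures):

var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
var hash = {};
var uniqueNames = [];

// duplicate names overwrite the same key, so they collapse to one entry
names.forEach(function(name) {
    hash[name] = name;
});

for (var key in hash) {
    if (hash.hasOwnProperty(key)) {
        uniqueNames.push(hash[key]);
    }
}

console.log(uniqueNames); // ["Mike", "Matt", "Nancy", "Adam", "Jenny", "Carl"]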


Solution 1

Array.prototype.unique = function() {
    var a = [];
    for (var i = 0; i < this.length; i++) { // declare i with var so it doesn't leak into the global scope
        var current = this[i];
        if (a.indexOf(current) < 0) a.push(current);
    }
    return a;
}

Solution 2 (using Set)

Array.prototype.unique = function() {
    return Array.from(new Set(this));
}

Test

var x=[1,2,3,3,2,1];
x.unique() //[1,2,3]

Performance

When I tested both implementations (with and without Set) for performance in Chrome, I found that the one with Set is much, much faster!

Array.prototype.unique1 = function() {
    var a = [];
    for (var i = 0; i < this.length; i++) {
        var current = this[i];
        if (a.indexOf(current) < 0) a.push(current);
    }
    return a;
}

Array.prototype.unique2 = function() {
    return Array.from(new Set(this));
}

var x = [];
for (var i = 0; i < 10000; i++) {
    x.push("x" + i);
    x.push("x" + (i + 1));
}

console.time("unique1");
console.log(x.unique1());
console.timeEnd("unique1");

console.time("unique2");
console.log(x.unique2());
console.timeEnd("unique2");


Vanilla JS: Remove duplicates using an Object like a Set

You can always try putting it into an object, and then iterating through its keys:

function remove_duplicates(arr) {
    var obj = {};
    var ret_arr = [];
    for (var i = 0; i < arr.length; i++) {
        obj[arr[i]] = true;
    }
    for (var key in obj) {
        ret_arr.push(key);
    }
    return ret_arr;
}

Vanilla JS: Remove duplicates by tracking already seen values (order-safe)

Or, for an order-safe version, use an object to store all previously seen values, and check values against it before adding them to the array.

function remove_duplicates_safe(arr) {
    var seen = {};
    var ret_arr = [];
    for (var i = 0; i < arr.length; i++) {
        if (!(arr[i] in seen)) {
            ret_arr.push(arr[i]);
            seen[arr[i]] = true;
        }
    }
    return ret_arr;
}

ECMAScript 6: Use the new Set data structure (order-safe)

ECMAScript 6 adds the new Set data structure, which lets you store unique values of any type. Set.prototype.values() returns elements in insertion order.

function remove_duplicates_es6(arr) {
    let s = new Set(arr);
    let it = s.values();
    return Array.from(it);
}

Example usage:

a = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];

b = remove_duplicates(a);
// b:
// ["Adam", "Carl", "Jenny", "Matt", "Mike", "Nancy"]

c = remove_duplicates_safe(a);
// c:
// ["Mike", "Matt", "Nancy", "Adam", "Jenny", "Carl"]

d = remove_duplicates_es6(a);
// d:
// ["Mike", "Matt", "Nancy", "Adam", "Jenny", "Carl"]

Although the ES6 solution is the best, I'm baffled that nobody has shown the following solution:

function removeDuplicates(arr) {
    var o = {}; // declare with var so it doesn't leak into the global scope
    arr.forEach(function(e) {
        o[e] = true;
    });
    return Object.keys(o);
}

The thing to remember here is that objects MUST have unique keys. We are exploiting this to remove all the duplicates. I would have thought this would be the fastest solution (before ES6).

Bear in mind, though, that the keys come back as strings, and integer-like keys are returned in ascending numeric order, so the result may not preserve the original order or types.
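For example, with the removeDuplicates above (hypothetical inputs):

removeDuplicates([3, 1, 2, 1]);    // ["1", "2", "3"] - integer-like keys come back sorted, as strings
removeDuplicates(["b", "a", "b"]); // ["b", "a"]      - plain string keys keep insertion order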


Here is another approach using jQuery:

function uniqueArray(array) {
  if ($.isArray(array)) {
    var dupes = {}; var len, i;
    for (i = 0, len = array.length; i < len; i++) {
      var test = array[i].toString();
      if (dupes[test]) {
        array.splice(i, 1);
        len--;
        i--;
      } else {
        dupes[test] = true;
      }
    }
  }
  else {
    if (window.console) console.log('Not passing an array to uniqueArray, returning whatever you sent it - not filtered!');
    return (array);
  }
  return (array);
}

Author: William Skidmore


Here is a simple method, without any special libraries or special functions:

var name_list = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
var get_uniq = name_list.filter(function(val, ind) { return name_list.indexOf(val) == ind; });

console.log("Original name list:" + name_list.length, name_list);
console.log("\nUnique name list:" + get_uniq.length, get_uniq);



The simplest way to remove duplicates is to sort the array, loop over it, and push each element that differs from its successor into a new array:

 var array = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];

 var removeDublicate = function(arr){
 var result = []
 var sort_arr = arr.sort() //=> optional
 for (var i = 0; i < arr.length; i++) {
        if(arr[ i + 1] !== arr[i] ){
            result.push(arr[i])
        }
 };
  return result
}  
console.log(removeDublicate(array))
==>  ["Adam", "Carl", "Jenny", "Matt", "Mike", "Nancy"]

If you're creating the array yourself, you can save a loop and the extra unique filter by doing the check as you're inserting the data:

var values = [];
$.each(collection, function() {
    var x = $(this).value;
    if ($.inArray(x, values) === -1) { // $.inArray returns -1 when not found; a bare ! check breaks at index 0
        values.push(x);
    }
});

https://jsfiddle.net/2w0k5tz8/

function remove_duplicates(array_) {
    var ret_array = new Array();
    for (var a = array_.length - 1; a >= 0; a--) {
        for (var b = array_.length - 1; b >= 0; b--) {
            if (array_[a] == array_[b] && a != b) {
                delete array_[b];
            }
        }
        if (array_[a] != undefined)
            ret_array.push(array_[a]);
    }
    return ret_array;
}

console.log(remove_duplicates(Array(1,1,1,2,2,2,3,3,3)));

Loop through and remove duplicates, pushing the survivors into a clone array, because delete leaves holes instead of re-indexing the array.

Loop backward for better performance (your loop won't need to keep checking the length of your array).


The simplest one I've run into so far, in ES6:

 var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl", "Mike", "Nancy"]

 var noDupe = Array.from(new Set(names))

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set


A nested-loop method for removing duplicates from an array while preserving the original order of the elements.

var array = [1, 3, 2, 1, [5], 2, [4]]; // INPUT

var element = 0;
var decrement = array.length - 1;
while(element < array.length) {
  while(element < decrement) {
    if (array[element] === array[decrement]) {
      array.splice(decrement, 1);
      decrement--;
    } else {
      decrement--;
    }
  }
  decrement = array.length - 1;
  element++;
}

console.log(array);// [1, 3, 2, [5], [4]]

Explanation: the inner loop compares the current element of the array with all other elements, starting with the element at the highest index. Decrementing towards the current element, any duplicate is spliced from the array.

When the inner loop is finished, the outer loop increments to the next element for comparison and resets to the new length of the array.


var lines = ["Mike", "Matt", "Nancy", "Adam", "Jenny", "Nancy", "Carl"];
var uniqueNames = [];

for (var i = 0; i < lines.length; i++) {
    if (uniqueNames.indexOf(lines[i]) == -1)
        uniqueNames.push(lines[i]);
}
// (the original also popped the last entry here, which always discarded a unique name - that was a bug)
for (var i = 0; i < uniqueNames.length; i++) {
    document.write(uniqueNames[i]);
    document.write("<br/>");
}

A single line version using array filter and indexOf functions:

arr = arr.filter(function (value, index, array) {
    return array.indexOf(value) == index;
});

This is just another solution, but different from the rest.

// Note: this keeps only the values that appear exactly once in the
// combined array - any value with a duplicate is dropped entirely.
function diffArray(arr1, arr2) {
  var newArr = arr1.concat(arr2);
  newArr.sort();
  var finalArr = [];
  for (var i = 0; i < newArr.length; i++) {
    if (!(newArr[i] === newArr[i + 1] || newArr[i] === newArr[i - 1])) {
      finalArr.push(newArr[i]);
    }
  }
  return finalArr;
}

A simple but effective technique is to use the filter method in combination with the callback function(value, index){ return this.indexOf(value) == index }, passing the array itself as filter's thisArg.

Code example :

var data = [2, 3, 4, 5, 5, 4];
var filter = function(value, index) { return this.indexOf(value) == index; };
var filteredData = data.filter(filter, data);

document.body.innerHTML = '<pre>' + JSON.stringify(filteredData, null, '\t') + '</pre>';



ES2015 one-liner, which chains well with map; sort first, then keep each item that differs from its predecessor (the original compared each value against the filter's index argument, which only appeared to work; with the default lexicographic sort this is only reliable for small integers):

[1, 4, 1].sort().filter((item, pos, arr) => !pos || item !== arr[pos - 1])

[1, 4]


function removeDuplicates(inputArray) {
    var outputArray = new Array();

    if (inputArray.length > 0) {
        jQuery.each(inputArray, function(index, value) {
            if (jQuery.inArray(value, outputArray) == -1) {
                outputArray.push(value);
            }
        });
    }
    return outputArray;
}

The top answers have complexity of O(n²), but this can be done with just O(n) by using an object as a hash:

function getDistinctArray(arr) {
    var dups = {};
    return arr.filter(function(el) {
        var hash = el.valueOf();
        var isDup = dups[hash];
        dups[hash] = true;
        return !isDup;
    });
}

This will work for strings, numbers, and dates. If your array contains objects, the above solution won't work because when coerced to a string, they will all have a value of "[object Object]" (or something similar) and that isn't suitable as a lookup value. You can get an O(n) implementation for objects by setting a flag on the object itself:

function getDistinctObjArray(arr) {
    var distinctArr = arr.filter(function(el) {
        var isDup = el.inArray;
        el.inArray = true;
        return !isDup;
    });
    distinctArr.forEach(function(el) {
        delete el.inArray;
    });
    return distinctArr;
}

2019 edit: Modern versions of JavaScript make this a much easier problem to solve. Using Set will work, regardless of whether your array contains objects, strings, numbers, or any other type.

function getDistinctArray(arr) {
    return [...new Set(arr)];
}

The implementation is so simple, defining a function is no longer warranted.
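Usage is a one-liner:

[...new Set(["Mike", "Matt", "Nancy", "Nancy"])]; // ["Mike", "Matt", "Nancy"]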


This solution uses a new array and an object map inside the function. All it does is loop through the original array and add each integer to the object map. If, while looping through the original array, it comes across a repeat, the

`if (!unique[int])`

check catches it, because there is already a key property on the object with the same number. Thus it skips over that number and doesn't allow it to be pushed into the new array.

    function removeRepeats(ints) {
      var unique = {}
      var newInts = []

      for (var i = 0; i < ints.length; i++) {
        var int = ints[i]

        if (!unique[int]) {
          unique[int] = 1
          newInts.push(int)
        }
      }
      return newInts
    }

    var example = [100, 100, 100, 100, 500]
    console.log(removeRepeats(example)) // prints [100, 500]

const numbers = [1, 1, 2, 3, 4, 4];

function unique(array) {
  return array.reduce((a,b) => {
    let isIn = a.find(element => {
        return element === b;
    });
    if(!isIn){
      a.push(b);
    }
    return a;
  },[]);
}

let ret = unique(numbers); // [1, 2, 3, 4]

This is the approach using reduce and find.


Generic Functional Approach

Here is a generic and strictly functional approach with ES2015:

// small, reusable auxiliary functions

const apply = f => a => f(a);
const flip = f => b => a => f(a) (b);
const uncurry = f => (a, b) => f(a) (b);
const push = x => xs => (xs.push(x), xs);
const foldl = f => acc => xs => xs.reduce(uncurry(f), acc);
const some = f => xs => xs.some(apply(f));

// the actual de-duplicate function

const uniqueBy = f => foldl(
    acc => x => some(f(x)) (acc)
        ? acc
        : push(x) (acc)
) ([]);

// comparators

const eq = y => x => x === y;

// string equality, case insensitive :D
const seqCI = y => x => x.toLowerCase() === y.toLowerCase();

// mock data

const xs = [1, 2, 3, 1, 2, 3, 4];
const ys = ["a", "b", "c", "A", "B", "C", "D"];

console.log( uniqueBy(eq) (xs) );
console.log( uniqueBy(seqCI) (ys) );

We can easily derive unique from uniqueBy, or use the faster implementation utilizing Sets:

const unique = uniqueBy(eq);

// const unique = xs => Array.from(new Set(xs));

Benefits of this approach:

  • generic solution by using a separate comparator function
  • declarative and succinct implementation
  • reuse of other small, generic functions

Performance Considerations

uniqueBy isn't as fast as an imperative implementation with loops, but it is way more expressive due to its genericity.

If you identify uniqueBy as the cause of a concrete performance penalty in your app, replace it with optimized code. That is, write your code first in a functional, declarative way. Afterwards, provided that you encounter performance issues, try to optimize the code at the locations that cause the problem.

Memory Consumption and Garbage Collection

uniqueBy utilizes mutations (push(x) (acc)) hidden inside its body. It reuses the accumulator instead of throwing it away after each iteration. This reduces memory consumption and GC pressure. Since this side effect is wrapped inside the function, everything outside remains pure.


The following script returns a new array containing only unique values. It works on strings and numbers. No additional libraries are required, only vanilla JS.

Browser support:

Feature       | Chrome | Firefox (Gecko) | Internet Explorer | Opera | Safari
Basic support | (Yes)  | 1.5 (1.8)       | 9                 | (Yes) | (Yes)

https://jsfiddle.net/fzmcgcxv/3/

var duplicates = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl","Mike","Mike","Nancy","Carl"]; 
var unique = duplicates.filter(function(elem, pos) {
    return duplicates.indexOf(elem) == pos;
  }); 
alert(unique);

var uniqueCompanies = function(companyArray) {
    var arrayUniqueCompanies = [],
        found, x, y;

    for (x = 0; x < companyArray.length; x++) {
        found = undefined;
        for (y = 0; y < arrayUniqueCompanies.length; y++) {
            if (companyArray[x] === arrayUniqueCompanies[y]) {
                found = true;
                break;
            }
        }

        if (!found) {
            arrayUniqueCompanies.push(companyArray[x]);
        }
    }

    return arrayUniqueCompanies;
}

var arr = [
    "Adobe Systems Incorporated",
    "IBX",
    "IBX",
    "BlackRock, Inc.",
    "BlackRock, Inc.",
];

Go for this one:

var uniqueArray = duplicateArray.filter(function(elem, pos) {
    return duplicateArray.indexOf(elem) == pos;
}); 

Now uniqueArray contains no duplicates.


The most concise way to remove duplicates from an array using native javascript functions is to use a sequence like below:

vals.sort().reduce(function(a, b){ if (b != a[0]) a.unshift(b); return a }, [])

There's no need for slice or indexOf within the reduce function, as I've seen in other examples! It makes sense to use it along with a filter function though:

vals.filter(function(v, i, a){ return i == a.indexOf(v) })

Yet another ES6 (2015) way of doing this that already works in a few browsers is:

Array.from(new Set(vals))

or even using the spread operator:

[...new Set(vals)]

cheers!


// Note: this lists the duplicate values; it does not remove them.
var duplicates = function(arr) {
    var sorted = arr.sort();
    var dup = [];
    for (var i = 0; i < sorted.length; i++) {
        var rest = sorted.slice(i + 1); // slice the rest of the array
        if (rest.indexOf(sorted[i]) > -1) { // do indexOf
            if (dup.indexOf(sorted[i]) == -1)
                dup.push(sorted[i]); // store it in another array
        }
    }
    console.log(dup);
}

duplicates(["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"]);

A slight modification of thg435's excellent answer to use a custom comparator:

function contains(array, obj) {
    for (var i = 0; i < array.length; i++) {
        if (isEqual(array[i], obj)) return true;
    }
    return false;
}
//comparator
function isEqual(obj1, obj2) {
    if (obj1.name == obj2.name) return true;
    return false;
}
function removeDuplicates(ary) {
    var arr = [];
    return ary.filter(function(x) {
        return !contains(arr, x) && arr.push(x);
    });
}

You can simply do it in JavaScript, with the help of the second - index - parameter of the filter method:

var a = [2,3,4,5,5,4];
a.filter(function(value, index){ return a.indexOf(value) == index });

or in short hand

a.filter((v,i) => a.indexOf(v) == i)

Got tired of seeing all the bad examples with for-loops or jQuery. JavaScript has the perfect tools for this nowadays: sort, map and reduce.

Uniq reduce while keeping existing order

var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];

var uniq = names.reduce(function(a,b){
    if (a.indexOf(b) < 0 ) a.push(b);
    return a;
  },[]);

console.log(uniq, names) // [ 'Mike', 'Matt', 'Nancy', 'Adam', 'Jenny', 'Carl' ]

// one liner
return names.reduce(function(a,b){if(a.indexOf(b)<0)a.push(b);return a;},[]);

Faster uniq with sorting

There are probably faster ways but this one is pretty decent.

var uniq = names.slice() // slice makes a copy of the array before sorting it
  .sort(function(a, b) {
    return a > b ? 1 : a < b ? -1 : 0; // a comparator must return a number, not a boolean
  })
  .reduce(function(a, b) {
    if (a.slice(-1)[0] !== b) a.push(b); // slice(-1)[0] means last item in array without removing it (like .pop())
    return a;
  }, []); // this empty array becomes the starting value for a

// one liner
return names.slice().sort(function(a, b) { return a > b ? 1 : a < b ? -1 : 0; }).reduce(function(a, b) { if (a.slice(-1)[0] !== b) a.push(b); return a; }, []);

Update 2015: ES6 version:

In ES6 you have Sets and Spread which makes it very easy and performant to remove all duplicates:

var uniq = [ ...new Set(names) ]; // [ 'Mike', 'Matt', 'Nancy', 'Adam', 'Jenny', 'Carl' ]

Sort based on occurrence:

Someone asked about ordering the results based on how many times each name occurs:

var names = ['Mike', 'Matt', 'Nancy', 'Adam', 'Jenny', 'Nancy', 'Carl']

var uniq = names
  .map((name) => {
    return {count: 1, name: name}
  })
  .reduce((a, b) => {
    a[b.name] = (a[b.name] || 0) + b.count
    return a
  }, {})

var sorted = Object.keys(uniq).sort((a, b) => uniq[b] - uniq[a]) // comparator must return a number; sorts by descending count

console.log(sorted)

TL;DR

Using the Set constructor and the spread syntax:

uniq = [...new Set(array)];

"Smart" but naïve way

uniqueArray = a.filter(function(item, pos) {
    return a.indexOf(item) == pos;
})

Basically, we iterate over the array and, for each element, check if the first position of this element in the array is equal to the current position. Obviously, these two positions are different for duplicate elements.

Using the 3rd ("this array") parameter of the filter callback we can avoid a closure of the array variable:

uniqueArray = a.filter(function(item, pos, self) {
    return self.indexOf(item) == pos;
})

Although concise, this algorithm is not particularly efficient for large arrays (quadratic time).

Hashtables to the rescue

function uniq(a) {
    var seen = {};
    return a.filter(function(item) {
        return seen.hasOwnProperty(item) ? false : (seen[item] = true);
    });
}

This is how it's usually done. The idea is to place each element in a hashtable and then check for its presence instantly. This gives us linear time, but has at least two drawbacks:

  • since hash keys can only be strings or symbols in JavaScript, this code doesn't distinguish numbers and "numeric strings". That is, uniq([1,"1"]) will return just [1]
  • for the same reason, all objects will be considered equal: uniq([{foo:1},{foo:2}]) will return just [{foo:1}].

That said, if your arrays contain only primitives and you don't care about types (e.g. it's always numbers), this solution is optimal.
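Both drawbacks are easy to demonstrate with the uniq above:

uniq([1, "1", 2]);          // [1, 2] - the string "1" is silently dropped
uniq([{foo: 1}, {foo: 2}]); // [{foo: 1}] - both objects stringify to "[object Object]"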

The best from two worlds

A universal solution combines both approaches: it uses hash lookups for primitives and linear search for objects.

function uniq(a) {
    var prims = {"boolean":{}, "number":{}, "string":{}}, objs = [];

    return a.filter(function(item) {
        var type = typeof item;
        if(type in prims)
            return prims[type].hasOwnProperty(item) ? false : (prims[type][item] = true);
        else
            return objs.indexOf(item) >= 0 ? false : objs.push(item);
    });
}

sort | uniq

Another option is to sort the array first, and then remove each element equal to the preceding one:

function uniq(a) {
    return a.sort().filter(function(item, pos, ary) {
        return !pos || item != ary[pos - 1];
    });
}

Again, this doesn't work with objects (because all objects are equal for sort). Additionally, we silently change the original array as a side effect - not good! However, if your input is already sorted, this is the way to go (just remove sort from the above).

Unique by...

Sometimes it's desired to uniquify a list based on some criteria other than just equality, for example, to filter out objects that are different, but share some property. This can be done elegantly by passing a callback. This "key" callback is applied to each element, and elements with equal "keys" are removed. Since key is expected to return a primitive, hash table will work fine here:

function uniqBy(a, key) {
    var seen = {};
    return a.filter(function(item) {
        var k = key(item);
        return seen.hasOwnProperty(k) ? false : (seen[k] = true);
    })
}

A particularly useful key() is JSON.stringify which will remove objects that are physically different, but "look" the same:

a = [[1,2,3], [4,5,6], [1,2,3]]
b = uniqBy(a, JSON.stringify)
console.log(b) // [[1,2,3], [4,5,6]]

If the key is not primitive, you have to resort to the linear search:

function uniqBy(a, key) {
    var index = [];
    return a.filter(function (item) {
        var k = key(item);
        return index.indexOf(k) >= 0 ? false : index.push(k);
    });
}

In ES6 you can use a Set:

function uniqBy(a, key) {
    let seen = new Set();
    return a.filter(item => {
        let k = key(item);
        return seen.has(k) ? false : seen.add(k);
    });
}

or a Map:

function uniqBy(a, key) {
    return [
        ...new Map(
            a.map(x => [key(x), x])
        ).values()
    ]
}

which both also work with non-primitive keys.
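For instance, the key callback may return an object reference, which the hash-table version can't handle but Set and Map can (styleA and styleB here are hypothetical):

const styleA = { color: "red" }, styleB = { color: "blue" };
const shapes = [
    { id: 1, style: styleA },
    { id: 2, style: styleA },
    { id: 3, style: styleB }
];
console.log(uniqBy(shapes, s => s.style)); // the Set version keeps ids 1 and 3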

First or last?

When removing objects by a key, you might want to keep the first of "equal" objects or the last one.

Use the Set variant above to keep the first, and the Map to keep the last:

function uniqByKeepFirst(a, key) {
    let seen = new Set();
    return a.filter(item => {
        let k = key(item);
        return seen.has(k) ? false : seen.add(k);
    });
}

function uniqByKeepLast(a, key) {
    return [
        ...new Map(
            a.map(x => [key(x), x])
        ).values()
    ]
}

//

data = [
    {a:1, u:1},
    {a:2, u:2},
    {a:3, u:3},
    {a:4, u:1},
    {a:5, u:2},
    {a:6, u:3},
];

console.log(uniqByKeepFirst(data, it => it.u))
console.log(uniqByKeepLast(data, it => it.u))

Libraries

Both underscore and Lo-Dash provide uniq methods. Their algorithms are basically similar to the first snippet above and boil down to this:

var result = [];
a.forEach(function(item) {
     if(result.indexOf(item) < 0) {
         result.push(item);
     }
});

This is quadratic, but there are nice additional goodies, like wrapping native indexOf, ability to uniqify by a key (iteratee in their parlance), and optimizations for already sorted arrays.
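For example, uniquifying by a key with the iteratee argument (assuming a reasonably recent Underscore or Lo-Dash is loaded):

_.uniq([{id: 1}, {id: 2}, {id: 1}], function(item) { return item.id; });
// [{id: 1}, {id: 2}]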

If you're using jQuery and can't stand anything without a dollar before it, it goes like this:

  $.uniqArray = function(a) {
        return $.grep(a, function(item, pos) {
            return $.inArray(item, a) === pos;
        });
  }

which is, again, a variation of the first snippet.

Performance

Function calls are expensive in JavaScript, therefore the above solutions, as concise as they are, are not particularly efficient. For maximal performance, replace filter with a loop and get rid of other function calls:

function uniq_fast(a) {
    var seen = {};
    var out = [];
    var len = a.length;
    var j = 0;
    for(var i = 0; i < len; i++) {
         var item = a[i];
         if(seen[item] !== 1) {
               seen[item] = 1;
               out[j++] = item;
         }
    }
    return out;
}

This chunk of ugly code does the same as the snippet #3 above, but an order of magnitude faster (as of 2017 it's only twice as fast - JS core folks are doing a great job!)

function uniq(a) {
    var seen = {};
    return a.filter(function(item) {
        return seen.hasOwnProperty(item) ? false : (seen[item] = true);
    });
}

function uniq_fast(a) {
    var seen = {};
    var out = [];
    var len = a.length;
    var j = 0;
    for(var i = 0; i < len; i++) {
         var item = a[i];
         if(seen[item] !== 1) {
               seen[item] = 1;
               out[j++] = item;
         }
    }
    return out;
}

/////

var r = [0,1,2,3,4,5,6,7,8,9],
    a = [],
    LEN = 1000,
    LOOPS = 1000;

while(LEN--)
    a = a.concat(r);

var d = new Date();
for(var i = 0; i < LOOPS; i++)
    uniq(a);
document.write('<br>uniq, ms/loop: ' + (new Date() - d)/LOOPS)

var d = new Date();
for(var i = 0; i < LOOPS; i++)
    uniq_fast(a);
document.write('<br>uniq_fast, ms/loop: ' + (new Date() - d)/LOOPS)

ES6

ES6 provides the Set object, which makes things a whole lot easier:

function uniq(a) {
   return Array.from(new Set(a));
}

or

let uniq = a => [...new Set(a)];

Note that, unlike in Python, ES6 sets are iterated in insertion order, so this code preserves the order of the original array.

However, if you need an array with unique elements, why not use sets right from the beginning?
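For example, a minimal sketch of collecting values into a Set directly instead of de-duplicating an array afterwards:

const seen = new Set();
["Mike", "Nancy", "Mike"].forEach(name => seen.add(name)); // adding a duplicate is a no-op
console.log([...seen]); // ["Mike", "Nancy"]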

Generators

A "lazy", generator-based version of uniq can be built on the same basis:

  • take the next value from the argument
  • if it's been seen already, skip it
  • otherwise, yield it and add it to the set of already seen values

function* uniqIter(a) {
    let seen = new Set();

    for (let x of a) {
        if (!seen.has(x)) {
            seen.add(x);
            yield x;
        }
    }
}

// example:

function* randomsBelow(limit) {
    while (1)
        yield Math.floor(Math.random() * limit);
}

// note that randomsBelow is endless

count = 20;
limit = 30;

for (let r of uniqIter(randomsBelow(limit))) {
    console.log(r);
    if (--count === 0)
        break
}

// exercise for the reader: what happens if we set `limit` less than `count` and why


For anyone looking to flatten arrays with duplicate elements into one unique array:

function flattenUniq() {
  // gather every array passed in (the original declared an unused "arrays"
  // parameter; all arguments are collected here instead)
  var args = Array.prototype.slice.call(arguments);

  var array = [].concat.apply([], args)

  var result = array.reduce(function(prev, curr){
    if (prev.indexOf(curr) < 0) prev.push(curr);
    return prev;
  },[]);

  return result;
}
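Usage:

flattenUniq([1, 2, 2], [2, 3], [3, 4]); // [1, 2, 3, 4]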

for (i=0; i<originalArray.length; i++) {  
    if (!newArray.includes(originalArray[i])) {
        newArray.push(originalArray[i]); 
    }
}

Here is code that is very simple to understand and works anywhere (even in PhotoshopScript). Check it!

var peoplenames = new Array("Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl");

peoplenames = unique(peoplenames);
alert(peoplenames);

function unique(array){
    var len = array.length;
    for(var i = 0; i < len; i++) for(var j = i + 1; j < len; j++) 
        if(array[j] == array[i]){
            array.splice(j,1);
            j--;
            len--;
        }
    return array;
}

//*result* peoplenames == ["Mike","Matt","Nancy","Adam","Jenny","Carl"]

In ECMAScript 6 (aka ECMAScript 2015), Set can be used to filter out duplicates. Then it can be converted back to an array using the spread operator.

var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"],
    unique = [...new Set(names)];

$(document).ready(function() {

    var arr1 = ["dog","dog","fish","cat","cat","fish","apple","orange"];
    var arr2 = ["cat","fish","mango","apple"];

    var uniquevalue = [];
    var seconduniquevalue = [];
    var finalarray = [];

    // de-duplicate arr1
    $.each(arr1, function(key, value) {
        if ($.inArray(value, uniquevalue) === -1) {
            uniquevalue.push(value);
        }
    });

    // de-duplicate arr2
    $.each(arr2, function(key, value) {
        if ($.inArray(value, seconduniquevalue) === -1) {
            seconduniquevalue.push(value);
        }
    });

    // collect the values common to both de-duplicated arrays
    $.each(uniquevalue, function(ikey, ivalue) {
        $.each(seconduniquevalue, function(ukey, uvalue) {
            if (ivalue == uvalue) {
                finalarray.push(ivalue);
            }
        });
    });

    alert(finalarray);
});

Vanilla JS solutions with complexity of O(n) (fastest possible for this problem). Modify the hashFunction to distinguish the objects (e.g. 1 and "1") if needed. The first solution avoids hidden loops (common in functions provided by Array).

var dedupe = function(a) 
{
    var hash={},ret=[];
    var hashFunction = function(v) { return ""+v; };
    var collect = function(h)
    {
        if(hash.hasOwnProperty(hashFunction(h)) == false) // O(1)
        {
            hash[hashFunction(h)]=1;
            ret.push(h); // should be O(1) for Arrays
            return;
        }
    };

    for(var i=0; i<a.length; i++) // this is a loop: O(n)
        collect(a[i]);
    //OR: a.forEach(collect); // this is a loop: O(n)

    return ret;
}

var dedupe = function(a)
{
    var hash = {};
    var isUnique = function(h) // renamed from "isdupe": it returns true for values not seen before
    {
        if (hash.hasOwnProperty(h) == false) // O(1)
        {
            hash[h] = 1;
            return true;
        }

        return false;
    };

    return a.filter(isUnique); // this is a loop: O(n)
}
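For example, a type-aware hashFunction (my sketch, not part of the original answer) keeps 1 and "1" distinct when plugged into the first dedupe variant:

var hashFunction = function(v) { return typeof v + ":" + v; };
// hashFunction(1)   -> "number:1"
// hashFunction("1") -> "string:1"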

Another method of doing this without writing much code is using the ES5 Object.keys method:

var arrayWithDuplicates = ['a','b','c','d','a','c'],
    deduper = {};
arrayWithDuplicates.forEach(function (item) {
    deduper[item] = null;
});
var dedupedArray = Object.keys(deduper); // ["a", "b", "c", "d"]

Extracted into a function:

function removeDuplicates (arr) {
    var deduper = {}
    arr.forEach(function (item) {
        deduper[item] = null;
    });
    return Object.keys(deduper);
}
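Usage, with the caveat that Object.keys always returns strings:

removeDuplicates(['a', 'b', 'c', 'd', 'a', 'c']); // ["a", "b", "c", "d"]
removeDuplicates([1, 2, 1]);                      // ["1", "2"] - numbers come back as strings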

If by any chance you are using D3.js, you can do:

d3.set(["foo", "bar", "foo", "baz"]).values(); // ["foo", "bar", "baz"]

https://github.com/mbostock/d3/wiki/Arrays#set_values


function arrayDuplicateRemove(arr) {
    var c = 0;
    var tempArray = [];
    arr.sort();
    // walk the sorted array backwards, keeping each value that differs from the last one kept
    for (var i = arr.length - 1; i >= 0; i--) {
        if (arr[i] != tempArray[c - 1]) {
            tempArray.push(arr[i]);
            c++;
        }
    }
    tempArray.sort();
    return tempArray; // the original only logged intermediate states; returning the result is more useful
}

So the options are:

let a = [11,22,11,22];
let b = []


b = [ ...new Set(a) ];     
// b = [11, 22]

b = Array.from( new Set(a))   
// b = [11, 22]

b = a.filter((val,i)=>{
  return a.indexOf(val)==i
})                        
// b = [11, 22]

Here is a simple answer to the question.

var names = ["Alex","Tony","James","Suzane", "Marie", "Laurence", "Alex", "Suzane", "Marie", "Marie", "James", "Tony", "Alex"];
var uniqueNames = [];

    for(var i in names){
        if(uniqueNames.indexOf(names[i]) === -1){
            uniqueNames.push(names[i]);
        }
    }

aLinks is a simple JavaScript array. If an element already exists earlier in the array (i.e. indexOf points to a lower index), that element is a duplicate and gets deleted. I repeat until no duplicates remain; one pass over the array can cancel several records.

var srt_ = 0;
var pos_ = 0;
do {
    srt_ = 0;
    for (var i in aLinks) {
        pos_ = aLinks.indexOf(aLinks[i].valueOf(), 0);
        if (pos_ < i) {
            delete aLinks[i];
            srt_++;
        }
    }
} while (srt_ != 0);

If you don't want to include a whole library, you can use this one-off to add a method that any array can use:

Array.prototype.uniq = function uniq() {
  return this.reduce(function(accum, cur) { 
    if (accum.indexOf(cur) === -1) accum.push(cur); 
    return accum; 
  }, [] );
}

["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"].uniq()

One line:

let names = ['Mike','Matt','Nancy','Adam','Jenny','Nancy','Carl', 'Nancy'];
let dup = [...new Set(names)];
console.log(dup);

function removeDuplicates (array) {
  var sorted = array.slice().sort()
  var result = []

  sorted.forEach((item, index) => {
    if (sorted[index + 1] !== item) {
      result.push(item)
    }
  })
  return result
}

The following is more than 80% faster than the jQuery method listed (see tests below). It is an answer from a similar question a few years ago. If I come across the person who originally proposed it I will post credit. Pure JS.

function dedupe(array) { // wrapped in a function so the bare `return` below is valid
    var temp = {};
    for (var i = 0; i < array.length; i++)
        temp[array[i]] = true;
    var r = [];
    for (var k in temp)
        r.push(k);
    return r;
}

My test case comparison: http://jsperf.com/remove-duplicate-array-tests


Quick and easy using lodash:

var array = ["12346", "12347", "12348", "12349", "12349"];
console.log(_.uniqWith(array, _.isEqual));


Apart from being a simpler, more terse solution than the current answers (minus the future-looking ES6 ones), I perf tested this and it was much faster as well:

var uniqueArray = dupeArray.filter(function(item, i, self){
  return self.lastIndexOf(item) == i;
});

One caveat: Array.lastIndexOf() was added in IE9, so if you need to go lower than that, you'll need to look elsewhere.


Use Array.filter() like this:

var actualArr = ['Apple', 'Apple', 'Banana', 'Mango', 'Strawberry', 'Banana'];

console.log('Actual Array: ' + actualArr);

var filteredArr = actualArr.filter(function(item, index) {
  // return a boolean rather than the item itself, so falsy values (0, '') survive too
  return actualArr.indexOf(item) == index;
});

console.log('Filtered Array: ' + filteredArr);

this can be made shorter in ES6 to

actualArr.filter((item,index,self) => self.indexOf(item)==index);

Here is a nice explanation of Array.filter().


I did a detailed comparison of duplicate removal at some other question, but having noticed that this is the real place, I just wanted to share it here as well.

I believe this is the best way to do this

var myArray = [100, 200, 100, 200, 100, 100, 200, 200, 200, 200],
    reduced = Object.keys(myArray.reduce((p, c) => (p[c] = true, p), {}));
console.log(reduced);

OK, even though this one is O(n) and the others are O(n²), I was curious to see a benchmark comparison between this reduce / lookup table combo and the filter / indexOf combo (I chose Jeetendra's very nice implementation https://stackoverflow.com/a/37441144/4543207). I prepared a 100K-item array filled with random positive integers in the range 0-9999, and it removes the duplicates. I repeated the test 10 times, and the average of the results shows that they are no match in performance:

  • In firefox v47 reduce & lut : 14.85ms vs filter & indexOf : 2836ms
  • In chrome v51 reduce & lut : 23.90ms vs filter & indexOf : 1066ms

Well, OK, so far so good. But let's do it properly this time, in the ES6 style. It looks so cool! But as of now, how it will perform against the powerful lut solution is a mystery to me. Let's first see the code and then benchmark it.

var myArray = [100, 200, 100, 200, 100, 100, 200, 200, 200, 200],
    reduced = [...myArray.reduce((p, c) => p.set(c, true), new Map()).keys()];
console.log(reduced);

Wow, that was short! But how about the performance? It's beautiful... Since the heavy weight of filter / indexOf is lifted off our shoulders, I can now test an array of 1M random items of positive integers in the range 0..99999 to get an average from 10 consecutive tests. I can say this time it's a real match. See the result for yourself :)

var ranar = [],
    red1 = a => Object.keys(a.reduce((p, c) => (p[c] = true, p), {})),
    red2 = a => [...a.reduce((p, c) => p.set(c, true), new Map()).keys()], // stray global assignment removed
    avg1 = [],
    avg2 = [],
    ts = 0,
    te = 0,
    res1 = [],
    res2 = [],
    count = 10;
for (var i = 0; i < count; i++) {
    ranar = (new Array(1000000).fill(true)).map(e => Math.floor(Math.random() * 100000));
    ts = performance.now();
    res1 = red1(ranar);
    te = performance.now();
    avg1.push(te - ts);
    ts = performance.now();
    res2 = red2(ranar);
    te = performance.now();
    avg2.push(te - ts);
}

avg1 = avg1.reduce((p, c) => p + c) / count;
avg2 = avg2.reduce((p, c) => p + c) / count;

console.log("reduce & lut took: " + avg1 + "msec");
console.log("map & spread took: " + avg2 + "msec");

Which one would you use? Well, not so fast! Don't be deceived; Map is at a disadvantage here. Notice that in all of the above cases we fill an array of size n with numbers in a range smaller than n. I mean, we have an array of size 100 and we fill it with random numbers 0..9, so there are definite duplicates and "almost" definitely each number has a duplicate. How about if we fill an array of size 100 with random numbers 0..9999? Let's now see Map playing at home. This time an array of 100K items, but the random number range is 0..100M. We will do 100 consecutive tests to average the results. OK, let's see the bets! <- no typo

var ranar = [],
    red1 = a => Object.keys(a.reduce((p, c) => (p[c] = true, p), {})),
    red2 = a => [...a.reduce((p, c) => p.set(c, true), new Map()).keys()], // stray global assignment removed
    avg1 = [],
    avg2 = [],
    ts = 0,
    te = 0,
    res1 = [],
    res2 = [],
    count = 100;
for (var i = 0; i < count; i++) {
    ranar = (new Array(100000).fill(true)).map(e => Math.floor(Math.random() * 100000000));
    ts = performance.now();
    res1 = red1(ranar);
    te = performance.now();
    avg1.push(te - ts);
    ts = performance.now();
    res2 = red2(ranar);
    te = performance.now();
    avg2.push(te - ts);
}

avg1 = avg1.reduce((p, c) => p + c) / count;
avg2 = avg2.reduce((p, c) => p + c) / count;

console.log("reduce & lut took: " + avg1 + "msec");
console.log("map & spread took: " + avg2 + "msec");

Now this is the spectacular comeback of Map()! Maybe now you can make a better decision when you want to remove the dupes.

Well, OK, we are all happy now. But the lead role always comes last, with some applause. I am sure some of you wonder what the Set object would do. Now that we are open to ES6, and we know Map is the winner of the previous games, let us compare Map with Set in a final. A typical Real Madrid vs Barcelona game this time... or is it? Let's see who will win el clásico :)

var ranar = [],
    red1 = a => [...a.reduce((p, c) => p.set(c, true), new Map()).keys()], // stray global assignment removed
    red2 = a => Array.from(new Set(a)),
    avg1 = [],
    avg2 = [],
    ts = 0,
    te = 0,
    res1 = [],
    res2 = [],
    count = 100;
for (var i = 0; i < count; i++) {
    ranar = (new Array(100000).fill(true)).map(e => Math.floor(Math.random() * 10000000));
    ts = performance.now();
    res1 = red1(ranar);
    te = performance.now();
    avg1.push(te - ts);
    ts = performance.now();
    res2 = red2(ranar);
    te = performance.now();
    avg2.push(te - ts);
}

avg1 = avg1.reduce((p, c) => p + c) / count;
avg2 = avg2.reduce((p, c) => p + c) / count;

console.log("map & spread took: " + avg1 + "msec");
console.log("set & A.from took: " + avg2 + "msec");

Wow, man! Well, unexpectedly it didn't turn out to be an el clásico at all. More like Barcelona FC against CA Osasuna :))


I know I'm a little late, but here is another option, using jinqJs:


var result = jinqJs().from(["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"]).distinct().select();

Use Underscore.js

It's a library with a host of functions for manipulating arrays.

It's the tie to go along with jQuery's tux, and Backbone.js's suspenders.

_.uniq

_.uniq(array, [isSorted], [iterator]) Alias: unique
Produces a duplicate-free version of the array, using === to test object equality. If you know in advance that the array is sorted, passing true for isSorted will run a much faster algorithm. If you want to compute unique items based on a transformation, pass an iterator function.

Example

var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];

alert(_.uniq(names, false));

Note: Lo-Dash (an underscore competitor) also offers a comparable .uniq implementation.
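For example, with Lo-Dash loaded:

_.uniq([2, 1, 2]); // [2, 1]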


This is probably one of the fastest ways to permanently remove duplicates from an array: 10x faster than most functions here, and 78x faster in Safari (see the usage example after the links below).

function toUnique(a, b, c) {             // array, placeholder, placeholder
    b = a.length;
    while (c = --b)                      // walk b from the last index down
        while (c--)                      // compare a[b] against every earlier element a[c]
            a[b] !== a[c] || a.splice(c, 1); // equal? splice out the earlier copy
}
  1. Test: http://jsperf.com/wgu
  2. Demo: http://jsfiddle.net/46S7g/
  3. More: https://stackoverflow.com/a/25082874/2450730
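Usage - note that toUnique mutates the array in place (it returns nothing) and keeps the last occurrence of each duplicate:

var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
toUnique(names);
console.log(names); // ["Mike", "Matt", "Adam", "Jenny", "Nancy", "Carl"]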

If you can't read the code above, ask, read a JavaScript book, or see these explanations about shorter code: https://stackoverflow.com/a/21353032/2450730

