HashMap Operations Complexity O(N)

What is HashMap complexity [Big O] for operations like read or write?

2023-02-09 18:25:56 - Mohamad Abuzaid


To answer this we need to review how HashMap works.

HashMap uses the hash code of the key to compute the bucket index for that <key, value> pair. Ideally each bucket holds very few entries, but collisions can happen: different keys may land in the same bucket.

Things get worse if you don't manage your keys well. For example, if you use a custom object as your key without overriding its hashCode() and equals() methods, every instance gets Object's default identity-based hash code, so two instances built from exactly the same data count as two different keys.

class Key {
    private int id;
    private String name;
    
    Key(int id, String name) {
        this.id = id;
        this.name = name;
    }
}

public class HashMapExample {
    public static void main(String[] args) {
        HashMap<Key, Integer> map = new HashMap<>();

        Key key1 = new Key(1, "John");
        Key key2 = new Key(2, "Jane");
        Key key3 = new Key(3, "Jim");

        map.put(key1, 100);
        map.put(key2, 200);
        map.put(key3, 300);

        map.remove(key3);
        Integer value = map.get(key2); // get() returns the value, not the key
    }
}

In the code above, the Key class doesn't override its hashCode() or equals() methods at all. HashMap therefore falls back on Object's defaults: identity-based hash codes and reference-equality comparisons. Whenever keys collide in a bucket, HashMap does an extra step and calls equals() on each entry to differentiate between the Key instances, which adds more complexity to the operation.
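To see the consequence concretely, here is a minimal sketch (the class name MissingOverrideDemo and the stripped-down Key are illustrative) of a lookup that misses because the default identity-based hashCode() is in play:

```java
import java.util.HashMap;

// Without hashCode()/equals() overrides, Object's identity-based
// defaults apply: two instances with identical fields are different keys.
class Key {
    private final int id;
    private final String name;

    Key(int id, String name) {
        this.id = id;
        this.name = name;
    }
}

public class MissingOverrideDemo {
    static Integer lookupWithFreshKey() {
        HashMap<Key, Integer> map = new HashMap<>();
        map.put(new Key(2, "Jane"), 200);
        // A new instance with the same field values hashes differently
        // (and fails the reference-equality check), so the lookup misses.
        return map.get(new Key(2, "Jane"));
    }

    public static void main(String[] args) {
        System.out.println(lookupWithFreshKey()); // prints "null"
    }
}
```

The entry is still in the map; it is simply unreachable through any key other than the original key1/key2/key3 references.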

-----------------------

You can avoid this by managing your keys well: either use well-behaved key types such as String or the numeric wrapper classes, or override the object's hashCode() method (together with equals()) so that logically equal keys produce the same hash code while distinct keys spread across buckets, for example by including a unique id in the hash calculation. This saves the HashMap the extra cost of calling equals() against every colliding key in the map.

Check this updated version of the previous example.

class Key {
    private int id;
    private String name;

    Key(int id, String name) {
        this.id = id;
        this.name = name;
    }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + id;
        result = prime * result + ((name == null) ? 0 : name.hashCode());
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;
        if (obj == null)
            return false;
        if (getClass() != obj.getClass())
            return false;
        Key other = (Key) obj;
        if (id != other.id)
            return false;
        if (name == null) {
            if (other.name != null)
                return false;
        } else if (!name.equals(other.name))
            return false;
        return true;
    }
}

public class HashMapExample {
    public static void main(String[] args) {
        HashMap<Key, Integer> map = new HashMap<>();

        Key key1 = new Key(1, "John");
        Key key2 = new Key(2, "Jane");
        Key key3 = new Key(3, "Jim");


        map.put(key1, 100);
        map.put(key2, 200);
        map.put(key3, 300);
        
        map.remove(key3);
        Integer value = map.get(key2); // get() returns the value, not the key
    }
}

In this updated version, the Key class overrides hashCode(), mixing the unique id into the calculation so that distinct instances are unlikely to collide, and overrides equals() so that HashMap can confirm a match inside a bucket.
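As a quick check, here is a self-contained sketch (the class name OverrideDemo is illustrative, and Objects.hash/Objects.equals are used as a compact restatement of the manual prime-based calculation above) showing that a fresh, equal-valued key now finds the stored entry:

```java
import java.util.HashMap;
import java.util.Objects;

// Compact version of the updated Key: equal fields produce equal
// hash codes, and equals() confirms the match inside a bucket.
class Key {
    private final int id;
    private final String name;

    Key(int id, String name) {
        this.id = id;
        this.name = name;
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, name); // equal fields -> equal hash
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (!(obj instanceof Key)) return false;
        Key other = (Key) obj;
        return id == other.id && Objects.equals(name, other.name);
    }
}

public class OverrideDemo {
    static Integer lookupWithFreshKey() {
        HashMap<Key, Integer> map = new HashMap<>();
        map.put(new Key(2, "Jane"), 200);
        // Same field values -> same hash code and equals() returns true,
        // so the lookup now succeeds.
        return map.get(new Key(2, "Jane"));
    }

    public static void main(String[] args) {
        System.out.println(lookupWithFreshKey()); // prints "200"
    }
}
```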

------------------

So returning to our main question...

The complexity of a read/write operation in a HashMap will be:

O(1) on average, when hash codes distribute the keys evenly across the buckets.

O(N) in the worst case, when many keys collide into a single bucket and HashMap has to walk the chain calling equals() on each entry. (Since Java 8, a heavily collided bucket is converted into a balanced tree, which bounds this worst case at O(log N).)
Note: The same answer applies to HashSet, which is backed by a HashMap internally.
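To illustrate the worst case, here is a sketch (BadKey and WorstCaseDemo are hypothetical names) of a key whose constant hashCode() forces every entry into one bucket; lookups stay correct, but each one must compare against colliding entries instead of jumping straight to its target:

```java
import java.util.HashMap;

// Degenerate case: a constant hashCode() sends every entry to the same
// bucket, so a get() degrades from O(1) toward a scan of that bucket.
class BadKey {
    private final int id;

    BadKey(int id) {
        this.id = id;
    }

    @Override
    public int hashCode() {
        return 42; // every instance collides in the same bucket
    }

    @Override
    public boolean equals(Object obj) {
        return obj instanceof BadKey && ((BadKey) obj).id == id;
    }
}

public class WorstCaseDemo {
    static Integer lookup(int n, int target) {
        HashMap<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < n; i++) {
            map.put(new BadKey(i), i);
        }
        // Still correct, but every lookup searches one crowded bucket.
        return map.get(new BadKey(target));
    }

    public static void main(String[] args) {
        System.out.println(lookup(1000, 999)); // prints "999"
    }
}
```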
