Hashtables

Agenda

  • Discussion: pros/cons of array-backed and linked structures
  • Python's other built-in DS: the dict
  • A naive lookup DS
  • Direct lookups via Hashing
  • Hashtables
    • Collisions and the "Birthday problem"
  • Runtime analysis & Discussion

Discussion: pros/cons of array-backed and linked structures

Between the array-backed list and the linked list, we have:

  1. $O(1)$ indexing (array-backed)
  2. $O(1)$ appending (array-backed & linked)
  3. $O(1)$ insertion/deletion without indexing (linked)
  4. $O(\log N)$ binary search, when sorted (array-backed)

Python's other built-in DS: the dict

In [33]:
import timeit

def lin_search(lst, x):
    for i in range(len(lst)):
        if lst[i] == x:
            return i
    raise ValueError(x)
    
def bin_search(lst, x):
    # assume that lst is sorted!!!
    low = 0
    hi  = len(lst) - 1  # inclusive upper bound (avoids indexing past the end)
    mid = (low + hi) // 2
    while lst[mid] != x and low <= hi:
        if lst[mid] < x:
            low = mid + 1
        else:
            hi  = mid - 1
        mid = (low + hi) // 2
    if lst[mid] == x:
        return mid
    else:
        raise ValueError(x)

def time_lin_search(size):
    return timeit.timeit('lin_search(lst, random.randrange({}))'.format(size), # interpolate size into randrange
                         'import random ; from __main__ import lin_search ;'
                         'lst = [x for x in range({})]'.format(size), # interpolate size into list range
                         number=100)

def time_bin_search(size):
    return timeit.timeit('bin_search(lst, random.randrange({}))'.format(size), # interpolate size into randrange
                         'import random ; from __main__ import bin_search ;'
                         'lst = [x for x in range({})]'.format(size), # interpolate size into list range
                         number=100)

def time_dict(size):
    return timeit.timeit('dct[random.randrange({})]'.format(size), 
                         'import random ; '
                         'dct = {{x: x for x in range({})}}'.format(size),
                         number=100)

lin_search_timings = [time_lin_search(n)
                      for n in range(10, 10000, 100)]

bin_search_timings = [time_bin_search(n)
                      for n in range(10, 10000, 100)]

dict_timings = [time_dict(n)
                for n in range(10, 10000, 100)]
In [35]:
%matplotlib inline
import matplotlib.pyplot as plt
#plt.plot(lin_search_timings, 'ro')
plt.plot(bin_search_timings, 'gs')
plt.plot(dict_timings, 'b^')
plt.show()

A naive lookup DS

In [ ]:
class LookupDS:
    def __init__(self):
        self.data = []
    
    def __setitem__(self, key, value):
        pass
    
    def __getitem__(self, key):
        pass

    def __contains__(self, key):
        pass
In [36]:
class LookupDS:
    def __init__(self):
        self.data = []
    
    def __setitem__(self, key, value):
        for i in range(len(self.data)):
            if self.data[i][0] == key:
                self.data[i][1] = value
                return
        else:
            self.data.append([key, value])
    
    def __getitem__(self, key):
        for k, v in self.data:
            if k == key:
                return v
        else:
            raise KeyError

    def __contains__(self, key):
        try:
            _ = self[key] # calls __getitem__; if it succeeds, return True; if it raises KeyError, return False
            return True
        except:
            return False
In [37]:
d = LookupDS()
In [38]:
d['hello'] = 'hola'
d['goodbye'] = 'adios'
In [39]:
d['hello']
Out[39]:
'hola'
In [40]:
d['goodbye']
Out[40]:
'adios'
In [41]:
d['hello'] = 'bonjour'
In [42]:
d['hello']
Out[42]:
'bonjour'
In [43]:
d.data
Out[43]:
[['hello', 'bonjour'], ['goodbye', 'adios']]

Direct lookups via Hashing

Hashes (a.k.a. hash codes or hash values) are simply numerical values computed for objects.

In [45]:
hash('hello') #the value could be different on your machine
Out[45]:
-1298108468397806619
In [46]:
[hash(s) for s in ['different', 'objects', 'have', 'very', 'different', 'hashes']]
Out[46]:
[8264025059867943528,
 -909818077496650347,
 6562135653832458469,
 -587347941624417982,
 8264025059867943528,
 5601915235208154100]
In [47]:
hash('aa'), hash('ab')
Out[47]:
(-595044047162001848, -2323253008525125523)
In [48]:
5093 % 100
Out[48]:
93
In [49]:
5093 // 100
Out[49]:
50
In [50]:
for i in range(1,20):
    print(i, '% 6 => ', i%6)
1 % 6 =>  1
2 % 6 =>  2
3 % 6 =>  3
4 % 6 =>  4
5 % 6 =>  5
6 % 6 =>  0
7 % 6 =>  1
8 % 6 =>  2
9 % 6 =>  3
10 % 6 =>  4
11 % 6 =>  5
12 % 6 =>  0
13 % 6 =>  1
14 % 6 =>  2
15 % 6 =>  3
16 % 6 =>  4
17 % 6 =>  5
18 % 6 =>  0
19 % 6 =>  1
In [51]:
-4 % 3
Out[51]:
2
In [52]:
-4 // 3
Out[52]:
-2
In [53]:
hash('hello') % 85
Out[53]:
36

Hashtables

In [ ]:
class Hashtable:
    def __init__(self, n_buckets=1000):
        self.buckets = [None] * n_buckets
        
    def __setitem__(self, key, val):
        bidx = hash(key) % len(self.buckets)
        self.buckets[bidx] = val
    
    def __getitem__(self, key):
        pass
        
    def __contains__(self, key):
        try:
            _ = self[key]
            return True
        except:
            return False
In [54]:
class Hashtable:
    def __init__(self, n_buckets=1000):
        self.buckets = [None] * n_buckets
        
    def __setitem__(self, key, val):
        bidx = hash(key) % len(self.buckets)
        self.buckets[bidx] = val
    
    def __getitem__(self, key):
        bidx = hash(key) % len(self.buckets)
        if self.buckets[bidx]:
            return self.buckets[bidx]
        else:
            raise KeyError
        
    def __contains__(self, key):
        try:
            _ = self[key]
            return True
        except:
            return False
In [55]:
ht = Hashtable()
In [56]:
ht['hello'] = 'hola'
ht['goodbye'] = 'adios'
In [57]:
ht['hello']
Out[57]:
'hola'
In [58]:
ht['goodbye']
Out[58]:
'adios'
In [59]:
ht.buckets
Out[59]:
[None,
 None,
 None,
 ...
 'hola',
 ...
 'adios',
 ...
 None,
 None,
 None]
In [ ]:
#any problems?
In [60]:
ht = Hashtable(2)
In [61]:
ht['hello'] = 'hola'
ht['byebye'] = 'adios'
In [62]:
ht['hello']
Out[62]:
'hola'
In [63]:
ht['byebye']
Out[63]:
'adios'
In [64]:
ht['eat'] = 'comer'
In [65]:
ht['hello']
Out[65]:
'comer'
In [66]:
ht['eat']
Out[66]:
'comer'
In [67]:
ht['byebye']
Out[67]:
'adios'
In [68]:
class Hashtable:
    def __init__(self, n_buckets=1000):
        self.buckets = [None] * n_buckets
        
    def __setitem__(self, key, val):
        bidx = hash(key) % len(self.buckets)
        self.buckets[bidx] = [key, val]
    
    def __getitem__(self, key):
        bidx = hash(key) % len(self.buckets)
        if self.buckets[bidx] and self.buckets[bidx][0] == key:
            return self.buckets[bidx][1]
        else:
            raise KeyError
        
    def __contains__(self, key):
        try:
            _ = self[key]
            return True
        except:
            return False
In [69]:
ht = Hashtable(2)
In [72]:
ht['hello'] = 'hola'
ht['byebye'] = 'adios'
In [73]:
ht['hello']
Out[73]:
'hola'
In [74]:
ht['byebye']
Out[74]:
'adios'
In [75]:
ht['eat']
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-75-f7f3c4c7f167> in <module>()
----> 1 ht['eat']

<ipython-input-68-9b5e5e06061c> in __getitem__(self, key)
     12             return self.buckets[bidx][1]
     13         else:
---> 14             raise KeyError
     15 
     16     def __contains__(self, key):

KeyError: 

On Collisions

The "Birthday Problem"

Problem statement: Given $N$ people at a party, how likely is it that at least two people will have the same birthday?
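
One way to compute this: the probability that all $N$ birthdays are distinct is $\prod_{k=0}^{N-1} \frac{365-k}{365}$, so the probability that at least two coincide is $1$ minus that product, which is exactly what the function below computes.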

In [ ]:
def birthday_p(n_people):
    p_inv = 1
    for n in range(365, 365-n_people, -1):
        p_inv *= n / 365
    return 1 - p_inv
In [ ]:
birthday_p(2)
In [ ]:
birthday_p(10)
In [ ]:
birthday_p(50)
In [ ]:
%matplotlib inline
import matplotlib.pyplot as plt

n_people = range(1, 80)
plt.plot(n_people, [birthday_p(n) for n in n_people])
plt.show()

General collision statistics

Repeat the birthday problem, but with a given number of values and "buckets" that are allotted to hold them. How likely is it that two or more values will map to the same bucket?
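
By the same reasoning: with $n$ values and $b$ buckets, the probability that all values land in distinct buckets is $\prod_{k=0}^{n-1} \frac{b-k}{b}$, so the probability of at least one collision is $1$ minus that product.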

In [ ]:
def collision_p(n_values, n_buckets):
    p_inv = 1
    for n in range(n_buckets, n_buckets-n_values, -1):
        p_inv *= n / n_buckets
    return 1 - p_inv
In [ ]:
collision_p(23, 365) # same as birthday problem, for 23 people
In [ ]:
collision_p(10, 100)
In [ ]:
collision_p(100, 1000)
In [ ]:
# keeping number of values fixed at 100, but vary number of buckets: visualize probability of collision
%matplotlib inline
import matplotlib.pyplot as plt

n_buckets = range(100, 100001, 1000)
plt.plot(n_buckets, [collision_p(100, nb) for nb in n_buckets])
plt.show()
In [ ]:
def avg_num_collisions(n, b):
    """Returns the expected number of collisions for n values uniformly distributed
    over a hashtable of b buckets. Based on (fairly) elementary probability theory.
    (Pay attention in MATH 474!)"""
    return n - b + b * (1 - 1/b)**n
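
A quick sketch of where that formula comes from: each of the $b$ buckets is empty with probability $(1-1/b)^n$, so the expected number of occupied buckets is $b\left(1-(1-1/b)^n\right)$. Each occupied bucket holds exactly one value that did not collide, so the expected number of colliding values is $n - b\left(1-(1-1/b)^n\right) = n - b + b(1-1/b)^n$.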
In [ ]:
avg_num_collisions(28, 365)
In [ ]:
avg_num_collisions(1000, 1000)
In [ ]:
avg_num_collisions(1000, 10000)

Dealing with Collisions

To deal with collisions in a hashtable, we simply create a "chain" of key/value pairs for each bucket where collisions occur. The chain needs to be a data structure that supports quick insertion — natural choice: the linked list!

In [ ]:
class Hashtable:
    class Node:
        def __init__(self, key, val, next=None):
            self.key = key
            self.val = val
            self.next = next
            
    def __init__(self, n_buckets=1000):
        self.buckets = [None] * n_buckets
        
    def __setitem__(self, key, val):
        bucket_idx = hash(key) % len(self.buckets)
        # walk the chain for this bucket; if the key is already present, update its value
        n = self.buckets[bucket_idx]
        while n:
            if n.key == key:
                n.val = val
                return
            n = n.next
        # otherwise prepend a new node to the chain (O(1) insertion)
        self.buckets[bucket_idx] = Hashtable.Node(key, val, self.buckets[bucket_idx])

    def __getitem__(self, key):
        bucket_idx = hash(key) % len(self.buckets)
        # walk the chain for this bucket looking for the key
        n = self.buckets[bucket_idx]
        while n:
            if n.key == key:
                return n.val
            n = n.next
        raise KeyError(key)
        
    def __contains__(self, key):
        try:
            _ = self[key]
            return True
        except:
            return False
In [ ]:
def prep_ht(size):
    ht = Hashtable(size*10)
    for x in range(size):
        ht[x] = x
    return ht

def time_ht(size):
    return timeit.timeit('ht[random.randrange({})]'.format(size), 
                         'import random ; from __main__ import prep_ht ;'
                         'ht = prep_ht({})'.format(size),
                         number=100)

ht_timings = [time_ht(n)
                for n in range(10, 10000, 100)]
In [ ]:
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(ht_timings, 'gs')
plt.plot(dict_timings, 'b^')
plt.show()

Loose ends

Iteration

In [ ]:
class Hashtable(Hashtable):
    def __iter__(self):
        pass
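
One possible implementation, as a sketch (assuming the chained Hashtable above): walk every bucket, and within each bucket walk its chain, yielding the keys.

In [ ]:
class Hashtable(Hashtable):
    def __iter__(self):
        # visit each bucket; walk its chain and yield every key stored there
        for chain in self.buckets:
            n = chain
            while n:
                yield n.key
                n = n.next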

"Load factor" and Rehashing

It doesn't often make sense to start with a large number of buckets, unless we know in advance that the number of keys is going to be vast — also, the user of the hashtable would typically prefer to not be bothered with implementation details (i.e., bucket count) when using the data structure.

Instead: start with a relatively small number of buckets, and if the ratio of keys to the number of buckets (known as the load factor) is above some desired threshold — which we can determine using collision probabilities — we can dynamically increase the number of buckets. This requires, however, that we rehash all keys and potentially move them into new buckets (since the hash(key) % num_buckets mapping will likely be different with more buckets).
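
As an illustration, here is a minimal sketch of load-factor-based resizing layered on top of the chained Hashtable above. The names ResizingHashtable, max_load, and _rehash, the 0.7 threshold, and the doubling strategy are choices made for this sketch, not the only reasonable ones.

In [ ]:
class ResizingHashtable(Hashtable):
    def __init__(self, n_buckets=8, max_load=0.7):
        super().__init__(n_buckets)
        self.n_keys = 0
        self.max_load = max_load

    def __setitem__(self, key, val):
        if key not in self:        # only count genuinely new keys
            self.n_keys += 1
        super().__setitem__(key, val)
        # if the load factor exceeds the threshold, double the bucket count
        if self.n_keys / len(self.buckets) > self.max_load:
            self._rehash(2 * len(self.buckets))

    def _rehash(self, new_n_buckets):
        # every key must be re-inserted, since hash(key) % num_buckets
        # changes when the number of buckets changes
        old_buckets = self.buckets
        self.buckets = [None] * new_n_buckets
        for chain in old_buckets:
            n = chain
            while n:
                super().__setitem__(n.key, n.val)
                n = n.next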

Other APIs

  • a fixed __setitem__ (one that updates the value for an existing key rather than adding a duplicate entry)
  • __delitem__ (a possible sketch follows below)
  • keys & values (return iterators for keys and values)
  • setdefault
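
For instance, __delitem__ might look like the following sketch (again assuming the chained Hashtable above): find the node with the matching key and unlink it from its bucket's chain.

In [ ]:
class Hashtable(Hashtable):
    def __delitem__(self, key):
        bucket_idx = hash(key) % len(self.buckets)
        prev, n = None, self.buckets[bucket_idx]
        while n:
            if n.key == key:
                if prev:            # unlink a node from the middle/end of the chain
                    prev.next = n.next
                else:               # the node was the head of the chain
                    self.buckets[bucket_idx] = n.next
                return
            prev, n = n, n.next
        raise KeyError(key)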

Runtime analysis & Discussion

For a hashtable with $N$ key/value entries:

  • Insertion: $O(?)$
  • Lookup: $O(?)$
  • Deletion: $O(?)$

Vocabulary list

  • hashtable
  • hashing and hashes
  • collision
  • hash buckets & chains
  • birthday problem
  • load factor
  • rehashing

Addendum: On Hashability

Remember: a given object must always hash to the same value. This is required so that we can always map the object to the same hash bucket.

Hashcodes for collections of objects are usually computed from the hashcodes of its contents, e.g., the hash of a tuple is a function of the hashes of the objects in said tuple:

In [ ]:
hash(('two', 'strings'))

This is useful. It allows us to use a tuple, for instance, as a key for a hashtable.

However, if the collection of objects is mutable — i.e., we can alter its contents — this means that we can potentially change its hashcode.

If we were to use such a collection as a key in a hashtable, and alter the collection after it's been assigned to a particular bucket, this leads to a serious problem: the collection may now be in the wrong bucket (as it was assigned to a bucket based on its original hashcode)!

For this reason, only immutable types are, by default, hashable in Python. So while we can use integers, strings, and tuples as keys in dictionaries, lists (which are mutable) cannot be used. Indeed, Python marks built-in mutable types as "unhashable", e.g.,

In [ ]:
hash([1, 2, 3])

That said, Python does support hashing on instances of custom classes (which are mutable). This is because the default hash function implementation does not rely on the contents of instances of custom classes; it is based on the object's identity, which never changes. E.g.,

In [ ]:
class Student:
    def __init__(self, fname, lname):
        self.fname = fname
        self.lname = lname
In [ ]:
s = Student('John', 'Doe')
hash(s)
In [ ]:
s.fname = 'Jane'
hash(s) # same as before mutation

We can change the default behavior by providing our own hash function in __hash__, e.g.,

In [ ]:
class Student:
    def __init__(self, fname, lname):
        self.fname = fname
        self.lname = lname
        
    def __hash__(self):
        return hash(self.fname) + hash(self.lname)
In [ ]:
s = Student('John', 'Doe')
hash(s)
In [ ]:
s.fname = 'Jane'
hash(s)

But be careful: instances of this class are no longer suitable for use as keys in hashtables (or dictionaries), if you intend to mutate them after using them as keys!
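
A quick, hypothetical demonstration of the problem: once a key object has been mutated, its hash no longer matches the one used to place it, so a dictionary will typically fail to find the entry.

In [ ]:
s = Student('John', 'Doe')
d = {s: 'A+'}      # s is placed in a bucket chosen from its current hash
s.fname = 'Jane'   # mutating s changes hash(s) ...
s in d             # ... so this lookup will usually fail: the entry is effectively lost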