Hashed array tree

In computer science, a hashed array tree (HAT) is a dynamic array data structure published by Edward Sitarski in 1996,[1] which maintains an array of separate memory fragments (or "leaves") to store the data elements, unlike simple dynamic arrays, which keep their data in one contiguous memory area. Its primary objective is to reduce the amount of element copying due to automatic array resizing operations, and to improve memory usage patterns.

Whereas simple dynamic arrays based on geometric expansion waste linear (Ω(n)) space, where n is the number of elements in the array, hashed array trees waste only O(√n) storage space. An optimization of the algorithm makes it possible to eliminate data copying completely, at the cost of increasing the wasted space.

Hashed array trees can perform access in constant (O(1)) time, though slightly more slowly than simple dynamic arrays. Appending a series of objects to the end of a hashed array tree takes O(1) amortized time per element. Contrary to its name, the structure does not use hash functions.

[Figure: a full hashed array tree with 16 elements.]

Definitions

As defined by Sitarski, a hashed array tree has a top-level directory containing a power-of-two number of leaf arrays. All leaf arrays are the same size as the top-level directory. This structure superficially resembles a hash table with array-based collision chains, which is the basis for the name hashed array tree. A full hashed array tree can hold m² elements, where m is the size of the top-level directory.[1] The use of powers of two enables faster physical addressing through bit operations (shifts and masks) instead of the arithmetic operations of quotient and remainder,[1] and it ensures the O(1) amortized performance of the append operation despite the occasional global copy required while expanding.
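
For illustration, here is a minimal sketch of that addressing scheme in C++; the names (HatIndex, locate, power) are hypothetical, not Sitarski's. It assumes a directory of 1 << power leaves, each also holding 1 << power elements:

    #include <cstddef>

    // Locating element i in a HAT whose directory and leaves each hold
    // (1 << power) entries: the high bits of the index select the leaf,
    // the low bits select the slot inside it, so a shift and a mask
    // replace the quotient and remainder operations.
    struct HatIndex {
        std::size_t leaf;  // position in the top-level directory
        std::size_t slot;  // position inside that leaf
    };

    HatIndex locate(std::size_t i, unsigned power) {
        return { i >> power,                             // i / leaf size
                 i & ((std::size_t{1} << power) - 1) };  // i % leaf size
    }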

Expansions and size reductions

In the usual geometric expansion scheme for dynamic arrays, the array is reallocated as a single contiguous chunk of memory at double its current size, and all of the data is then moved to the new location. This ensures O(1) amortized appends at a cost of O(n) wasted space, since the enlarged array is filled to only half of its new capacity.
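
As a baseline for comparison, a minimal sketch of such a doubling append; the helper name and raw-pointer layout are illustrative only:

    #include <cstddef>
    #include <cstdlib>
    #include <cstring>

    // On overflow, allocate a buffer of twice the capacity and copy all
    // n elements across. The O(n) copy happens only about every n
    // appends, so appends are O(1) amortized, but right after a resize
    // the array is only half full.
    void append(int*& data, std::size_t& size, std::size_t& capacity, int value) {
        if (size == capacity) {
            capacity = capacity ? capacity * 2 : 1;
            int* bigger = static_cast<int*>(std::malloc(capacity * sizeof(int)));
            std::memcpy(bigger, data, size * sizeof(int));  // the full O(n) copy
            std::free(data);
            data = bigger;
        }
        data[size++] = value;
    }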

When a hashed array tree is full, its directory and leaves must be restructured to twice their prior size to accommodate additional append operations. The data held in the old structure is then moved to the new locations, and only one new leaf is allocated and added to the top array, which is thus filled to only a quarter of its new capacity. The remaining leaves are not allocated until they are needed, so only O(√n) storage is wasted.
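
A sketch of that restructuring step, reusing the power-of-two addressing above; the Hat layout and member names are assumptions for illustration, not the article's code:

    #include <cstddef>
    #include <vector>

    // A full HAT with 2^power leaves of 2^power elements each is rebuilt
    // with power+1: the old elements land in the first quarter of the new
    // directory, and the remaining leaves stay unallocated until appends
    // reach them.
    struct Hat {
        unsigned power = 1;                 // directory and leaves hold 1 << power entries
        std::size_t size = 0;               // number of elements stored
        std::vector<std::vector<int>> dir;  // empty inner vector = leaf not yet allocated

        Hat() : dir(std::size_t{1} << power) {}

        void grow() {
            unsigned p = power + 1;
            std::size_t leafSize = std::size_t{1} << p;
            std::vector<std::vector<int>> newDir(leafSize);  // directory size == leaf size
            for (std::size_t i = 0; i < size; ++i) {
                std::size_t leaf = i >> p, slot = i & (leafSize - 1);
                if (newDir[leaf].empty()) newDir[leaf].resize(leafSize);
                newDir[leaf][slot] =
                    dir[i >> power][i & ((std::size_t{1} << power) - 1)];
            }
            dir.swap(newDir);
            power = p;
        }
    };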

There are multiple alternatives for reducing size: when a hashed array tree is one-eighth full, it can be restructured into a smaller, half-full hashed array tree; alternatively, unused leaf arrays can simply be freed without resizing the leaves. Further optimizations include adding new leaves without resizing the existing ones and growing the directory array as needed, possibly through geometric expansion, with restructuring performed only when a set overhead threshold is reached. This eliminates the need for data copying completely, at the cost of making the wasted space O(n) with a small coefficient.[1]
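
A sketch of the first shrinking policy, built on the hypothetical Hat type from the previous sketch:

    #include <cstddef>
    #include <vector>

    // Once occupancy falls to an eighth of capacity, rebuild the HAT with
    // a half-sized directory and half-sized leaves; the new structure is
    // exactly half full afterwards.
    void maybeShrink(Hat& h) {
        std::size_t capacity = std::size_t{1} << (2 * h.power);  // (2^power)^2 slots
        if (h.power == 0 || h.size > capacity / 8) return;
        unsigned p = h.power - 1;
        std::size_t leafSize = std::size_t{1} << p;
        std::vector<std::vector<int>> newDir(leafSize);
        for (std::size_t i = 0; i < h.size; ++i) {
            std::size_t leaf = i >> p, slot = i & (leafSize - 1);
            if (newDir[leaf].empty()) newDir[leaf].resize(leafSize);
            newDir[leaf][slot] =
                h.dir[i >> h.power][i & ((std::size_t{1} << h.power) - 1)];
        }
        h.dir.swap(newDir);
        h.power = p;
    }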

Comparison of list data structures

Operation                     Linked list                  Array  Dynamic array   Balanced tree  Random access list  Hashed array tree
Indexing                      Θ(n)                         Θ(1)   Θ(1)            Θ(log n)       Θ(log n)[2]         Θ(1)
Insert/delete at beginning    Θ(1)                         N/A    Θ(n)            Θ(log n)       Θ(1)                Θ(n)
Insert/delete at end          Θ(1) or Θ(n)*                N/A    Θ(1) amortized  Θ(log n)       Θ(log n) updating   Θ(1) amortized
Insert/delete in middle       search time + Θ(1)[3][4][5]  N/A    Θ(n)            Θ(log n)       Θ(log n) updating   Θ(n)
Wasted space (average)        Θ(n)                         0      Θ(n)[6]         Θ(n)           Θ(n)                Θ(√n)

* Θ(1) when the last element is known; Θ(n) when it must first be found.

Brodnik et al.[7] presented a dynamic array algorithm with a space-wastage profile similar to that of hashed array trees. Brodnik's implementation retains previously allocated leaf arrays, at the cost of a more complicated address-calculation function than that of hashed array trees.

References

  1. Sitarski, Edward (September 1996), "HATs: Hashed array trees", Dr. Dobb's Journal, 21 (11).
  2. Chris Okasaki (1995). "Purely Functional Random-Access Lists". Proceedings of the Seventh International Conference on Functional Programming Languages and Computer Architecture: 86–95. doi:10.1145/224164.224187.
  3. Gerald Kruse. CS 240 Lecture Notes: Linked Lists Plus: Complexity Trade-offs. Juniata College. Spring 2008.
  4. Day 1 Keynote - Bjarne Stroustrup: C++11 Style at GoingNative 2012 on channel9.msdn.com from minute 45 or foil 44
  5. Number crunching: Why you should never, ever, EVER use linked-list in your code again at kjellkod.wordpress.com
  6. Brodnik, Andrej; Carlsson, Svante; Sedgewick, Robert; Munro, JI; Demaine, ED (1999), Resizable Arrays in Optimal Time and Space (Technical Report CS-99-09), Department of Computer Science, University of Waterloo.
  7. Brodnik, Andrej; Carlsson, Svante; Sedgewick, Robert; Munro, JI; Demaine, ED (1999), Resizable Arrays in Optimal Time and Space (Technical Report CS-99-09), Department of Computer Science, University of Waterloo.