HashMap, JDK 15.0.2 Source Code Notes
- Source code analysis
- Summary 1
  - 1. resize()
  - 2. HashMap's hash method
  - 3. Computing an element's index: i = (n - 1) & hash
  - 4. Ways to resolve hash collisions
  - 5. Is HashMap thread-safe?
  - 6. What does JDK 15.0.2 optimize compared to 1.7?
- How to implement a thread-safe Map?
- Summary 2
  - Comparing HashMap and ConcurrentHashMap
  - Comparing HashMap and Hashtable
- Summary 3
  - Comparing HashMap and HashSet
Source code analysis
HashMap is not thread-safe. A null key is allowed, but there can be at most one null key.
When a HashMap is created with the no-arg constructor, the underlying Node array reference `table` stays null; only when the first node is added does resize() allocate a Node array `newTab` with the default capacity of 16 and assign `table = newTab`.
If an initial capacity is passed to the constructor, HashMap rounds it up to a power of two with tableSizeFor() and stores the result in the `threshold` field. Again, only on the first insertion does resize() allocate a Node array of that capacity and assign it to `table`.
resize() initializes a Node array of capacity 16 when the underlying array reference is null (or of capacity `threshold` if threshold > 0, i.e. a parameterized constructor was used); when the array already exists, it doubles its size and redistributes the existing elements.
Underlying data structure: when a bucket's linked list grows past the threshold (8), the list is converted to a red-black tree to cut search time (provided the table capacity has reached MIN_TREEIFY_CAPACITY = 64; otherwise the table is resized instead).
```java
public class HashMap<K,V> extends AbstractMap<K,V>
    implements Map<K,V>, Cloneable, Serializable {

    @java.io.Serial
    private static final long serialVersionUID = 362498820763181265L;

    /** The default initial capacity - MUST be a power of two. */
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

    /**
     * The maximum capacity, used if a higher value is implicitly specified
     * by either of the constructors with arguments.
     * MUST be a power of two <= 1<<30.
     */
    static final int MAXIMUM_CAPACITY = 1 << 30;

    /** The load factor used when none specified in constructor. */
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    /**
     * The bin count threshold for using a tree rather than list for a
     * bin. Bins are converted to trees when adding an element to a
     * bin with at least this many nodes. The value must be greater
     * than 2 and should be at least 8 to mesh with assumptions in
     * tree removal about conversion back to plain bins upon shrinkage.
     */
    static final int TREEIFY_THRESHOLD = 8;

    /** The entries stored in a HashMap: each key-value pair lives in a Node. */
    static class Node<K,V> implements Map.Entry<K,V> {
        final int hash;
        final K key;
        V value;
        Node<K,V> next;

        Node(int hash, K key, V value, Node<K,V> next) {
            this.hash = hash;
            this.key = key;
            this.value = value;
            this.next = next;
        }

        public final K getKey()        { return key; }
        public final V getValue()      { return value; }
        public final String toString() { return key + "=" + value; }

        public final int hashCode() {
            return Objects.hashCode(key) ^ Objects.hashCode(value);
        }

        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }

        public final boolean equals(Object o) {
            if (o == this)
                return true;
            if (o instanceof Map.Entry) {
                Map.Entry<?,?> e = (Map.Entry<?,?>)o;
                if (Objects.equals(key, e.getKey()) &&
                    Objects.equals(value, e.getValue()))
                    return true;
            }
            return false;
        }
    }

    static final int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    /* ---------------- Fields -------------- */

    /**
     * The table, initialized on first use, and resized as
     * necessary. When allocated, length is always a power of two.
     */
    transient Node<K,V>[] table;

    /** Holds cached entrySet(). AbstractMap fields are used for keySet() and values(). */
    transient Set<Map.Entry<K,V>> entrySet;

    /** The number of key-value mappings contained in this map. */
    transient int size;

    /**
     * The number of times this HashMap has been structurally modified.
     * Structural modifications change the number of mappings or otherwise
     * modify the internal structure (e.g., rehash). This field makes
     * iterators on collection views fail-fast.
     * (See ConcurrentModificationException.)
     */
    transient int modCount;

    /**
     * The next size value at which to resize (capacity * load factor).
     * (If the table array has not been allocated, this field holds the
     * initial array capacity, or zero signifying DEFAULT_INITIAL_CAPACITY.)
     */
    int threshold;

    /** The load factor for the hash table. */
    final float loadFactor;

    public HashMap(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal initial capacity: " +
                                               initialCapacity);
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal load factor: " +
                                               loadFactor);
        this.loadFactor = loadFactor;
        this.threshold = tableSizeFor(initialCapacity);
    }

    /**
     * Constructs an empty HashMap with the specified initial capacity
     * and the default load factor (0.75).
     */
    public HashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

    /**
     * Constructs an empty HashMap with the default initial capacity (16)
     * and the default load factor (0.75).
     */
    public HashMap() {
        this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
    }

    /**
     * Constructs a new HashMap with the same mappings as the specified Map,
     * with the default load factor (0.75) and an initial capacity sufficient
     * to hold those mappings.
     */
    public HashMap(Map<? extends K, ? extends V> m) {
        this.loadFactor = DEFAULT_LOAD_FACTOR;
        putMapEntries(m, false);
    }

    /**
     * Returns the value to which the specified key is mapped, or null if
     * this map contains no mapping for the key. A null return does not
     * necessarily mean there is no mapping: the map may explicitly map the
     * key to null. Use containsKey to distinguish the two cases.
     */
    public V get(Object key) {
        Node<K,V> e;
        return (e = getNode(key)) == null ? null : e.value;
    }

    /** Implements Map.get and related methods. */
    final Node<K,V> getNode(Object key) {
        Node<K,V>[] tab; Node<K,V> first, e; int n, hash; K k;
        if ((tab = table) != null && (n = tab.length) > 0 &&
            (first = tab[(n - 1) & (hash = hash(key))]) != null) {
            if (first.hash == hash && // always check first node
                ((k = first.key) == key || (key != null && key.equals(k))))
                return first;
            if ((e = first.next) != null) {
                if (first instanceof TreeNode)
                    return ((TreeNode<K,V>)first).getTreeNode(hash, key);
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k))))
                        return e;
                } while ((e = e.next) != null);
            }
        }
        return null;
    }

    // put operation
    public V put(K key, V value) {
        return putVal(hash(key), key, value, false, true);
    }

    /** Implements Map.put and related methods. */
    final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
                   boolean evict) {
        Node<K,V>[] tab; Node<K,V> p; int n, i;
        if ((tab = table) == null || (n = tab.length) == 0) // 1. table is still null/empty:
            n = (tab = resize()).length;                    //    resize() allocates the initial array
        if ((p = tab[i = (n - 1) & hash]) == null)          // 2. compute index i from the key's hash;
            tab[i] = newNode(hash, key, value, null);       //    if the bin is empty, place the node there
        else {                                              // 3. the bin already holds a node
            Node<K,V> e; K k;
            if (p.hash == hash &&
                ((k = p.key) == key || (key != null && key.equals(k))))
                e = p;                                      // 3.1 same key as the first node: remember it
            else if (p instanceof TreeNode)                 // 3.2 the bin is a tree: insert into the tree
                e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
            else {
                for (int binCount = 0; ; ++binCount) {      // 3.3 hash collision with a different key
                    if ((e = p.next) == null) {
                        p.next = newNode(hash, key, value, null); // append at the tail of the list
                        if (binCount >= TREEIFY_THRESHOLD - 1)    // -1 for 1st; list longer than 8:
                            treeifyBin(tab, hash);                // convert it to a red-black tree
                        break;
                    }
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k))))
                        break;
                    p = e;
                }
            }
            if (e != null) { // existing mapping for key: replace the old value
                V oldValue = e.value;
                if (!onlyIfAbsent || oldValue == null)
                    e.value = value;
                afterNodeAccess(e);
                return oldValue;
            }
        }
        ++modCount;
        if (++size > threshold) // element count exceeds the threshold: grow the table
            resize();
        afterNodeInsertion(evict);
        return null;
    }

    final Node<K,V>[] resize() {
        Node<K,V>[] oldTab = table;
        int oldCap = (oldTab == null) ? 0 : oldTab.length;
        int oldThr = threshold;
        int newCap, newThr = 0;
        if (oldCap > 0) {
            if (oldCap >= MAXIMUM_CAPACITY) {
                threshold = Integer.MAX_VALUE;
                return oldTab;
            }
            else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                     oldCap >= DEFAULT_INITIAL_CAPACITY)
                newThr = oldThr << 1; // double threshold
        }
        else if (oldThr > 0) // initial capacity was placed in threshold
            newCap = oldThr;
        else {               // zero initial threshold signifies using defaults
            newCap = DEFAULT_INITIAL_CAPACITY;
            newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
        }
        if (newThr == 0) {
            float ft = (float)newCap * loadFactor;
            newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                      (int)ft : Integer.MAX_VALUE);
        }
        threshold = newThr;
        @SuppressWarnings({"rawtypes","unchecked"})
        Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
        table = newTab;
        if (oldTab != null) {
            for (int j = 0; j < oldCap; ++j) {
                Node<K,V> e;
                if ((e = oldTab[j]) != null) {
                    oldTab[j] = null;
                    if (e.next == null)
                        newTab[e.hash & (newCap - 1)] = e;
                    else if (e instanceof TreeNode)
                        ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                    else { // preserve order
                        Node<K,V> loHead = null, loTail = null;
                        Node<K,V> hiHead = null, hiTail = null;
                        Node<K,V> next;
                        do {
                            next = e.next;
                            if ((e.hash & oldCap) == 0) {
                                if (loTail == null)
                                    loHead = e;
                                else
                                    loTail.next = e;
                                loTail = e;
                            }
                            else {
                                if (hiTail == null)
                                    hiHead = e;
                                else
                                    hiTail.next = e;
                                hiTail = e;
                            }
                        } while ((e = next) != null);
                        if (loTail != null) {
                            loTail.next = null;
                            newTab[j] = loHead;
                        }
                        if (hiTail != null) {
                            hiTail.next = null;
                            newTab[j + oldCap] = hiHead;
                        }
                    }
                }
            }
        }
        return newTab;
    }
}
```
Summary 1
1. resize()
- resize() (re)allocates HashMap's backing array. 1) If `table` has not been allocated yet (table == null), it creates an array of the default capacity (16) or the specified capacity and assigns it to `table`. 2) If `table` is non-null, it creates a new array of twice the old capacity, assigns it to `table`, and recomputes every element's index.
When does resizing happen?
1) On the first insertion after constructing a HashMap (with either the no-arg or a parameterized constructor), because `table` is still null.
2) When the element count `size` exceeds the threshold (capacity * load factor), since a too-small array makes hash collisions too likely. Lowering the load factor (default 0.75) means the map holds fewer entries before resizing, collisions become rarer, lists stay shorter and trees flatter, so lookup and removal get faster: space is traded for time. Raising it has the opposite effect.
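The power-of-two rounding that tableSizeFor() performs (and that resize() relies on via the threshold field) can be sketched as follows. This is a standalone copy for illustration; the real method is a private static member of HashMap, and the class name here is my own:

```java
public class TableSizeForDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Returns the smallest power of two >= cap; mirrors HashMap.tableSizeFor
    // in recent JDKs, which uses numberOfLeadingZeros instead of shift chains.
    static int tableSizeFor(int cap) {
        int n = -1 >>> Integer.numberOfLeadingZeros(cap - 1);
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        // A requested capacity of 10 is rounded up to 16; 16 stays 16; 17 becomes 32.
        System.out.println(tableSizeFor(10)); // 16
        System.out.println(tableSizeFor(16)); // 16
        System.out.println(tableSizeFor(17)); // 32
    }
}
```

This is why `new HashMap<>(10)` ends up with a 16-slot table on first insertion.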
2. HashMap's hash method
```java
static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
```
Why doesn't HashMap use the object's raw hash code directly?
XOR-ing the hash code with its own high 16 bits folds the high bits into the low bits. Since the index computation `(n - 1) & hash` only looks at the low bits, this spreading step improves the distribution of bucket indices.
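A small demonstration of why the spreading matters (the class name is illustrative). The two hash codes below differ only in their high 16 bits, so without the XOR step both would land in bucket 0 of a 16-slot table:

```java
public class HashSpreadDemo {
    // Same mixing step as HashMap.hash(), copied here for illustration.
    static int spread(int h) {
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16;       // table length (a power of two)
        int h1 = 0x10000; // differs from h2 only in the high 16 bits
        int h2 = 0x20000;

        // Raw hash codes: the low bits are identical, so both map to bucket 0.
        System.out.println(((n - 1) & h1) + " " + ((n - 1) & h2)); // 0 0

        // After spreading, the high bits influence the index and the keys separate.
        System.out.println(((n - 1) & spread(h1)) + " " + ((n - 1) & spread(h2))); // 1 2
    }
}
```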
3. Computing an element's index: i = (n - 1) & hash
Why this formula rather than i = hash % n?
Because when n is a power of two, `(n - 1) & hash` equals `hash % n` (for non-negative hash), and the bitwise AND is cheaper than a mod instruction. This is also why HashMap's array length must always be a power of two.
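A quick check of that equivalence, plus the edge case the mask handles better than `%` (class name illustrative):

```java
public class IndexDemo {
    public static void main(String[] args) {
        int n = 16; // must be a power of two
        // For non-negative hashes the two formulas agree.
        for (int hash = 0; hash < 1000; hash++) {
            if (((n - 1) & hash) != (hash % n))
                throw new AssertionError("mismatch at " + hash);
        }
        // For a negative hash they differ: Java's % can return a negative
        // remainder, while & always yields a valid index in [0, n).
        int h = -7;
        System.out.println(h % n);       // -7 (not a usable index)
        System.out.println((n - 1) & h); // 9
    }
}
```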
4. Ways to resolve hash collisions
- Open addressing
- Separate chaining (the approach HashMap uses)
- Rehashing with additional hash functions
5. Is HashMap thread-safe?
HashMap is not thread-safe.
- Overwritten inserts:
The check `if ((p = tab[i = (n - 1) & hash]) == null)` in putVal decides whether the target bin is empty (i.e. whether a hash collision occurred). Suppose threads A and B both call put and their keys hash to the same index. A passes the check and is then preempted when its time slice runs out; B is scheduled, passes the same check, and inserts its node normally. When A resumes, it does not repeat the check; it inserts directly, overwriting the node B just inserted. B's write is lost, so the class is not thread-safe.
- Lost counter updates:
`if (++size > threshold)` updates the element count after an insertion. `++size` is not atomic, so two concurrent insertions may bump `size` only once, losing an update. The same applies to `++modCount`.
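These races are timing-dependent and hard to reproduce deterministically, but the fix is easy to check: ConcurrentHashMap keeps an exact count even when many threads insert at once. (A plain HashMap under the same workload may lose entries, or worse.) A minimal sketch, class name illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentPutDemo {
    // Runs `threads` writer threads, each inserting `perThread` distinct keys,
    // and returns the resulting map size.
    static int concurrentInsert(int threads, int perThread) {
        Map<Integer, Integer> map = new ConcurrentHashMap<>();
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final int base = t * perThread;
            workers[t] = new Thread(() -> {
                for (int k = 0; k < perThread; k++)
                    map.put(base + k, base + k); // disjoint key ranges
            });
            workers[t].start();
        }
        try {
            for (Thread w : workers) w.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return map.size();
    }

    public static void main(String[] args) {
        // With ConcurrentHashMap no insert is lost: 8 * 1000 = 8000.
        System.out.println(concurrentInsert(8, 1000)); // 8000
    }
}
```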
6. What does JDK 15.0.2 optimize compared to 1.7?
(These changes were in fact introduced in JDK 8 and carried forward.)
- The Entry array was replaced by a Node array; this is essentially a rename, since both node types implement the Map.Entry interface.
- Insertion switched from head insertion to tail insertion, which removes the circular-linked-list problem that concurrent resizing could create in 1.7.
- The data structure became array + linked list + red-black tree.
How to implement a thread-safe Map?
HashMap is not thread-safe, so how do we get a thread-safe map in a concurrent environment?
- Collections.synchronizedMap(Map) wraps a map in a thread-safe view:
```java
HashMap<Integer, Integer> map = new HashMap<>();
Map<Integer, Integer> mapSec = Collections.synchronizedMap(map);
```
Internally, SynchronizedMap holds the wrapped ordinary Map plus a mutex object.
It has two constructors; once the wrapper exists, every map operation locks the mutex before delegating:
```java
SynchronizedMap(Map<K,V> m) {
    this.m = Objects.requireNonNull(m);
    mutex = this; // the mutex is the wrapper object itself
}

SynchronizedMap(Map<K,V> m, Object mutex) {
    this.m = m;
    this.mutex = mutex;
}
```
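One caveat worth showing in a sketch (class name illustrative): the per-call locking does not cover iteration, so the Collections.synchronizedMap javadoc requires synchronizing on the wrapper manually while iterating over any of its views:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class SynchronizedMapDemo {
    static int sumValues() {
        Map<Integer, Integer> mapSec =
            Collections.synchronizedMap(new HashMap<>());

        mapSec.put(1, 10); // each call synchronizes on the wrapper's mutex
        mapSec.put(2, 20);

        // Iteration is NOT covered by the per-call locking: hold the
        // wrapper's lock manually for the whole traversal.
        int sum = 0;
        synchronized (mapSec) {
            for (int v : mapSec.values())
                sum += v;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumValues()); // 30
    }
}
```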
- Hashtable
Its main methods are all marked synchronized. It is thread-safe, but performs poorly under contention because, like option 1, it locks the entire Map object.
- ConcurrentHashMap
ConcurrentHashMap uses the same underlying structure as HashMap.
The differences: it relies on CAS + synchronized to guarantee concurrency safety,
the `val` and `next` fields of Node are declared volatile, which guarantees visibility,
and the object locked is a single bin's head Node. So two threads whose keys do not collide never contend with each other.
```java
public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
    implements ConcurrentMap<K,V>, Serializable {

    private static final long serialVersionUID = 7249069246763182397L;

    /**
     * The largest possible table capacity. This value must be exactly 1<<30
     * to stay within Java array allocation and indexing bounds for power of
     * two table sizes, and is further required because the top two bits of
     * 32bit hash fields are used for control purposes.
     */
    private static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * The default initial table capacity. Must be a power of 2
     * (i.e., at least 1) and at most MAXIMUM_CAPACITY.
     */
    private static final int DEFAULT_CAPACITY = 16;

    /**
     * The largest possible (non-power of two) array size.
     * Needed by toArray and related methods.
     */
    static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

    /**
     * The default concurrency level for this table. Unused but defined
     * for compatibility with previous versions of this class.
     */
    private static final int DEFAULT_CONCURRENCY_LEVEL = 16;

    /**
     * The load factor for this table. Overrides of this value in
     * constructors affect only the initial table capacity. The actual
     * floating point value isn't normally used -- it is simpler to use
     * expressions such as {@code n - (n >>> 2)} for the associated
     * resizing threshold.
     */
    private static final float LOAD_FACTOR = 0.75f;

    /**
     * The bin count threshold for using a tree rather than list for a
     * bin. Bins are converted to trees when adding an element to a bin
     * with at least this many nodes.
     */
    static final int TREEIFY_THRESHOLD = 8;

    /**
     * The bin count threshold for untreeifying a (split) bin during a
     * resize operation. Should be less than TREEIFY_THRESHOLD, and at
     * most 6 to mesh with shrinkage detection under removal.
     */
    static final int UNTREEIFY_THRESHOLD = 6;

    /**
     * The smallest table capacity for which bins may be treeified.
     * (Otherwise the table is resized if too many nodes in a bin.)
     * The value should be at least 4 * TREEIFY_THRESHOLD to avoid
     * conflicts between resizing and treeification thresholds.
     */
    static final int MIN_TREEIFY_CAPACITY = 64;

    /**
     * Minimum number of rebinnings per transfer step. Ranges are
     * subdivided to allow multiple resizer threads.
     */
    private static final int MIN_TRANSFER_STRIDE = 16;

    /** The number of bits used for generation stamp in sizeCtl. */
    private static final int RESIZE_STAMP_BITS = 16;

    /** The maximum number of threads that can help resize. */
    private static final int MAX_RESIZERS = (1 << (32 - RESIZE_STAMP_BITS)) - 1;

    /** The bit shift for recording size stamp in sizeCtl. */
    private static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;

    /* Encodings for Node hash fields. */
    static final int MOVED     = -1; // hash for forwarding nodes
    static final int TREEBIN   = -2; // hash for roots of trees
    static final int RESERVED  = -3; // hash for transient reservations
    static final int HASH_BITS = 0x7fffffff; // usable bits of normal node hash

    static class Node<K,V> implements Map.Entry<K,V> {
        final int hash;
        final K key;
        volatile V val;
        volatile Node<K,V> next;

        Node(int hash, K key, V val) {
            this.hash = hash;
            this.key = key;
            this.val = val;
        }

        Node(int hash, K key, V val, Node<K,V> next) {
            this(hash, key, val);
            this.next = next;
        }

        public final K getKey()     { return key; }
        public final V getValue()   { return val; }
        public final int hashCode() { return key.hashCode() ^ val.hashCode(); }
        public final String toString() {
            return Helpers.mapEntryToString(key, val);
        }
        public final V setValue(V value) {
            throw new UnsupportedOperationException();
        }

        public final boolean equals(Object o) {
            Object k, v, u; Map.Entry<?,?> e;
            return ((o instanceof Map.Entry) &&
                    (k = (e = (Map.Entry<?,?>)o).getKey()) != null &&
                    (v = e.getValue()) != null &&
                    (k == key || k.equals(key)) &&
                    (v == (u = val) || v.equals(u)));
        }

        /** Virtualized support for map.get(); overridden in subclasses. */
        Node<K,V> find(int h, Object k) {
            Node<K,V> e = this;
            if (k != null) {
                do {
                    K ek;
                    if (e.hash == h &&
                        ((ek = e.key) == k || (ek != null && k.equals(ek))))
                        return e;
                } while ((e = e.next) != null);
            }
            return null;
        }
    }

    static final int spread(int h) {
        return (h ^ (h >>> 16)) & HASH_BITS;
    }

    /**
     * The array of bins. Lazily initialized upon first insertion.
     * Size is always a power of two. Accessed directly by iterators.
     */
    transient volatile Node<K,V>[] table;

    /** The next table to use; non-null only while resizing. */
    private transient volatile Node<K,V>[] nextTable;

    /**
     * Base counter value, used mainly when there is no contention, but
     * also as a fallback during table initialization races. Updated via CAS.
     */
    private transient volatile long baseCount;

    /** Creates a new, empty map with the default initial table size (16). */
    public ConcurrentHashMap() {
    }

    /**
     * Creates a new, empty map with an initial table size accommodating
     * the specified number of elements without the need to dynamically resize.
     */
    public ConcurrentHashMap(int initialCapacity) {
        this(initialCapacity, LOAD_FACTOR, 1);
    }

    /** Creates a new map with the same mappings as the given map. */
    public ConcurrentHashMap(Map<? extends K, ? extends V> m) {
        this.sizeCtl = DEFAULT_CAPACITY;
        putAll(m);
    }

    public V get(Object key) {
        Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
        int h = spread(key.hashCode());
        if ((tab = table) != null && (n = tab.length) > 0 &&
            (e = tabAt(tab, (n - 1) & h)) != null) {
            if ((eh = e.hash) == h) {
                if ((ek = e.key) == key || (ek != null && key.equals(ek)))
                    return e.val;
            }
            else if (eh < 0)
                return (p = e.find(h, key)) != null ? p.val : null;
            while ((e = e.next) != null) {
                if (e.hash == h &&
                    ((ek = e.key) == key || (ek != null && key.equals(ek))))
                    return e.val;
            }
        }
        return null;
    }

    /**
     * Maps the specified key to the specified value in this table.
     * Neither the key nor the value can be null.
     */
    public V put(K key, V value) {
        return putVal(key, value, false);
    }

    /** Implementation for put and putIfAbsent */
    final V putVal(K key, V value, boolean onlyIfAbsent) {
        if (key == null || value == null) throw new NullPointerException();
        int hash = spread(key.hashCode());
        int binCount = 0;
        for (Node<K,V>[] tab = table;;) {
            Node<K,V> f; int n, i, fh; K fk; V fv;
            if (tab == null || (n = tab.length) == 0)
                tab = initTable();
            else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
                if (casTabAt(tab, i, null, new Node<K,V>(hash, key, value)))
                    break; // no lock when adding to empty bin
            }
            else if ((fh = f.hash) == MOVED)
                tab = helpTransfer(tab, f);
            else if (onlyIfAbsent // check first node without acquiring lock
                     && fh == hash
                     && ((fk = f.key) == key || (fk != null && key.equals(fk)))
                     && (fv = f.val) != null)
                return fv;
            else {
                V oldVal = null;
                synchronized (f) {
                    if (tabAt(tab, i) == f) {
                        if (fh >= 0) {
                            binCount = 1;
                            for (Node<K,V> e = f;; ++binCount) {
                                K ek;
                                if (e.hash == hash &&
                                    ((ek = e.key) == key ||
                                     (ek != null && key.equals(ek)))) {
                                    oldVal = e.val;
                                    if (!onlyIfAbsent)
                                        e.val = value;
                                    break;
                                }
                                Node<K,V> pred = e;
                                if ((e = e.next) == null) {
                                    pred.next = new Node<K,V>(hash, key, value);
                                    break;
                                }
                            }
                        }
                        else if (f instanceof TreeBin) {
                            Node<K,V> p;
                            binCount = 2;
                            if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
                                                                  value)) != null) {
                                oldVal = p.val;
                                if (!onlyIfAbsent)
                                    p.val = value;
                            }
                        }
                        else if (f instanceof ReservationNode)
                            throw new IllegalStateException("Recursive update");
                    }
                }
                if (binCount != 0) {
                    if (binCount >= TREEIFY_THRESHOLD)
                        treeifyBin(tab, i);
                    if (oldVal != null)
                        return oldVal;
                    break;
                }
            }
        }
        addCount(1L, binCount);
        return null;
    }
}
```
When ConcurrentHashMap's put finds the computed bin empty, it installs the new node with a CAS.
If the bin already has a head node f, it locks that node with synchronized (f) and then performs the insertion.
ConcurrentHashMap's get takes no lock at all: the `val` field of Node is volatile, which guarantees visibility, so each reading thread always sees the most recently written value.
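Note that individually thread-safe calls still do not make a get-then-put sequence atomic; for that, ConcurrentHashMap offers atomic compound operations such as putIfAbsent, compute, and merge. A small sketch (class name illustrative) of concurrency-safe counting:

```java
import java.util.concurrent.ConcurrentHashMap;

public class MergeCountDemo {
    // Counts word occurrences; merge() does the read-modify-write atomically,
    // unlike a separate get() followed by put(), which could race.
    static ConcurrentHashMap<String, Integer> countWords(String... words) {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        for (String w : words)
            counts.merge(w, 1, Integer::sum);
        return counts;
    }

    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> counts = countWords("a", "b", "a", "a");
        System.out.println(counts.get("a")); // 3
        System.out.println(counts.get("b")); // 1
    }
}
```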
Summary 2
Comparing HashMap and ConcurrentHashMap
- ConcurrentHashMap uses the same underlying structure as HashMap.
- The differences: it uses CAS + synchronized to guarantee concurrency safety, and the `val` and `next` fields of Node, as well as the Node array `table`, are declared volatile, guaranteeing visibility.
- The hash is computed differently:
```java
static final int spread(int h) {
    return (h ^ (h >>> 16)) & HASH_BITS;
}
```
Comparing HashMap and Hashtable
- Different initial capacities: HashMap starts at 16, Hashtable at 11; both default to a load factor of 0.75.
- Different growth rules: when the element count exceeds capacity * load factor, HashMap doubles its capacity, while Hashtable grows to twice the capacity plus 1.
- Different iterators: HashMap's Iterator is fail-fast, while Hashtable's Enumerator is fail-safe.
So when another thread structurally modifies a HashMap (adding or removing elements) during iteration, a ConcurrentModificationException is thrown; Hashtable's enumeration does not do this.
Optimistic locking: assume the best case. Readers take no lock, assuming nobody else will modify the data; only at update time do they check whether someone changed it in the meantime.
- Optimistic locks are usually implemented with version numbers or CAS.
- Java's Atomic* variable classes are implemented with CAS.
Pessimistic locking: assume the worst case. Every access takes a lock, assuming someone else will modify the data, so other threads that want the data block until the lock holder releases it.
- Java's synchronized and ReentrantLock mutexes are implementations of the pessimistic approach.
fail-fast: if the collection's structure is modified while an iterator is traversing it, the iterator throws a ConcurrentModificationException.
(If thread A is iterating a collection while thread B modifies it (add/remove), A throws ConcurrentModificationException.) Mechanism: the iterator reads the collection's contents directly and records `modCount` into `expectedModCount` when it is created; any structural change to the collection bumps `modCount`. Before each hasNext()/next() step the iterator checks that `modCount` still equals `expectedModCount`; if so it continues the traversal, otherwise it throws the exception and aborts:
```java
/* ------------------------------------------------------------ */
// iterators

abstract class HashIterator {
    Node<K,V> next;        // next entry to return
    Node<K,V> current;     // current entry
    int expectedModCount;  // for fast-fail
    int index;             // current slot

    HashIterator() {
        expectedModCount = modCount;
        Node<K,V>[] t = table;
        current = next = null;
        index = 0;
        if (t != null && size > 0) { // advance to first entry
            do {} while (index < t.length && (next = t[index++]) == null);
        }
    }

    public final boolean hasNext() {
        return next != null;
    }

    final Node<K,V> nextNode() {
        Node<K,V>[] t;
        Node<K,V> e = next;
        if (modCount != expectedModCount)
            throw new ConcurrentModificationException();
        if (e == null)
            throw new NoSuchElementException();
        if ((next = (current = e).next) == null && (t = table) != null) {
            do {} while (index < t.length && (next = t[index++]) == null);
        }
        return e;
    }

    public final void remove() {
        Node<K,V> p = current;
        if (p == null)
            throw new IllegalStateException();
        if (modCount != expectedModCount)
            throw new ConcurrentModificationException();
        current = null;
        removeNode(p.hash, p.key, null, false, false);
        expectedModCount = modCount;
    }
}
```
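The fail-fast behavior is easy to trigger even in a single thread: structurally modify the map between two next() calls and the modCount check fires. A minimal sketch (class name illustrative):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class FailFastDemo {
    // Returns true if modifying the map mid-iteration triggers the
    // fail-fast check in the iterator.
    static boolean triggersCme() {
        Map<Integer, Integer> map = new HashMap<>();
        map.put(1, 1);
        map.put(2, 2);
        try {
            Iterator<Integer> it = map.keySet().iterator();
            it.next();
            map.put(3, 3); // structural change: modCount != expectedModCount
            it.next();     // the next modCount check throws
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(triggersCme()); // true
    }
}
```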
fail-safe
Iteration runs over a copy of the original collection, so modifications made to the original during traversal are invisible to the iterator and no ConcurrentModificationException is triggered. The fail-safe mechanism has two problems:
(1) the collection must be copied, producing many short-lived objects and a large overhead;
(2) there is no guarantee that the data read reflects the collection's current contents.
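CopyOnWriteArrayList from java.util.concurrent is a standard example of the fail-safe (snapshot) style: its iterator walks the array that existed when the iterator was created, so concurrent additions neither throw nor appear. A small sketch (class name illustrative):

```java
import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArrayList;

public class FailSafeDemo {
    // Counts how many elements a snapshot iterator sees while the list
    // is being modified mid-iteration.
    static int snapshotCount() {
        CopyOnWriteArrayList<Integer> list = new CopyOnWriteArrayList<>();
        list.add(1); list.add(2); list.add(3);

        int seen = 0;
        Iterator<Integer> it = list.iterator(); // iterates over a snapshot
        while (it.hasNext()) {
            it.next();
            seen++;
            list.add(99); // no ConcurrentModificationException; invisible to it
        }
        return seen; // only the 3 original elements are seen
    }

    public static void main(String[] args) {
        System.out.println(snapshotCount()); // 3
    }
}
```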
Summary 3
Comparing HashMap and HashSet
HashSet's methods all delegate to the corresponding HashMap methods: each element of the set is stored as a key of the backing map, and a single shared default Object (PRESENT) serves as the value for every entry; the element and that dummy value are passed straight through as the HashMap call's arguments.
About the only other difference is the type hierarchy: HashSet extends AbstractSet and implements the Set interface,
while HashMap extends AbstractMap and implements the Map interface.
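The delegation is directly observable in add()'s return value: map.put returns null for a new key (so add returns true) and the old PRESENT value for a duplicate (so add returns false). A minimal sketch (class name illustrative):

```java
import java.util.HashSet;

public class HashSetDemo {
    public static void main(String[] args) {
        HashSet<String> set = new HashSet<>();

        // add(e) delegates to map.put(e, PRESENT):
        System.out.println(set.add("x")); // true: put returned null (new key)
        System.out.println(set.add("x")); // false: put returned the old value
        System.out.println(set.size());   // 1
    }
}
```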