Collection
How do you remove elements from a Collection while iterating over it?
cloud.tencent.com/developer/a…
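The linked article is truncated, but the standard answer is: removing through anything other than the iterator trips the fail-fast modCount check and throws ConcurrentModificationException. A minimal sketch (the list contents are illustrative):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class RemoveWhileIterating {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(List.of(1, 2, 3, 4, 5));

        // Wrong: calling list.remove() inside a for-each loop throws
        // ConcurrentModificationException (fail-fast via modCount).

        // Right: remove through the iterator itself.
        Iterator<Integer> it = list.iterator();
        while (it.hasNext()) {
            if (it.next() % 2 == 0) {
                it.remove(); // keeps modCount consistent
            }
        }
        System.out.println(list); // [1, 3, 5]

        // Since JDK 1.8, removeIf wraps the same iterator-based removal.
        list.removeIf(n -> n > 3);
        System.out.println(list); // [1, 3]
    }
}
```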
List
ArrayList vs. LinkedList
ArrayList covers most of what LinkedList offers; when in doubt, prefer ArrayList.
How does ArrayList grow its capacity?
private Object[] grow(int minCapacity) {
    int oldCapacity = elementData.length;
    if (oldCapacity > 0 || elementData != DEFAULTCAPACITY_EMPTY_ELEMENTDATA) {
        int newCapacity = ArraysSupport.newLength(oldCapacity,
                minCapacity - oldCapacity, /* minimum growth */
                oldCapacity >> 1           /* preferred growth */);
        return elementData = Arrays.copyOf(elementData, newCapacity);
    } else {
        return elementData = new Object[Math.max(DEFAULT_CAPACITY, minCapacity)];
    }
}
The newLength method in ArraysSupport:
public static int newLength(int oldLength, int minGrowth, int prefGrowth) {
    // assert oldLength >= 0
    // assert minGrowth > 0
    int newLength = Math.max(minGrowth, prefGrowth) + oldLength;
    if (newLength - MAX_ARRAY_LENGTH <= 0) {
        return newLength;
    }
    return hugeLength(oldLength, minGrowth);
}
The hugeLength method in ArraysSupport:
private static int hugeLength(int oldLength, int minGrowth) {
    int minLength = oldLength + minGrowth;
    if (minLength < 0) { // overflow
        throw new OutOfMemoryError("Required array length too large");
    }
    if (minLength <= MAX_ARRAY_LENGTH) {
        return MAX_ARRAY_LENGTH;
    }
    return Integer.MAX_VALUE;
}
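Putting the two paths together: the preferred growth is oldCapacity >> 1, so once past the default capacity an ArrayList grows by roughly 1.5x per resize. A small sketch (not JDK code) of the resulting capacity sequence:

```java
public class GrowthDemo {
    // Mirrors the preferred-growth path above: new capacity is
    // old + old/2 (about 1.5x), starting from the default capacity 10.
    static int nextCapacity(int oldCapacity) {
        return oldCapacity + (oldCapacity >> 1);
    }

    public static void main(String[] args) {
        int cap = 10; // DEFAULT_CAPACITY
        for (int i = 0; i < 4; i++) {
            System.out.print(cap + " -> ");
            cap = nextCapacity(cap);
        }
        System.out.println(cap); // 10 -> 15 -> 22 -> 33 -> 49
    }
}
```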
为什么 MAX_ARRAY_LENGTH = Integer.MAX_VALUE - 8 ? cloud.tencent.com/developer/a…
Why is Vector obsolete?
Vector dates back to JDK 1.0; it is not recommended because:
- Vector achieves thread safety by putting synchronized on every method that could race, which makes it slow.
- CopyOnWriteArrayList, introduced in JDK 1.5, is the better concurrent choice.
- When Vector fills up it doubles its capacity, whereas ArrayList grows by only half.
Set
HashSet vs. LinkedHashSet vs. TreeSet
Underlying implementations: HashSet is backed by a HashMap; LinkedHashSet is backed by a LinkedHashMap, whose nodes carry two extra fields, Entry<K,V> before, after, compared with HashMap's; TreeSet is backed by a TreeMap. The LinkedHashMap entry:
static class Entry<K,V> extends HashMap.Node<K,V> {
    Entry<K,V> before, after;
    Entry(int hash, K key, V value, Node<K,V> next) {
        super(hash, key, value, next);
    }
}
How does HashSet guarantee uniqueness?
cloud.tencent.com/developer/a…
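Since the linked article is truncated, a quick self-contained demonstration: HashSet.add delegates to the backing HashMap's put with a shared dummy value (PRESENT), and put returns the previous mapping, so add reports a duplicate by returning false:

```java
import java.util.HashSet;
import java.util.Set;

public class HashSetDedup {
    public static void main(String[] args) {
        Set<String> set = new HashSet<>();
        // add() calls map.put(e, PRESENT) internally; put returns the old
        // value, so add() returns true only when no mapping existed.
        System.out.println(set.add("a")); // true  (new element)
        System.out.println(set.add("a")); // false (duplicate, set unchanged)
        System.out.println(set.size());   // 1
    }
}
```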
Map
What are the differences between HashMap and Hashtable, and why is Hashtable obsolete?
- Public interface: Hashtable extends Dictionary while HashMap extends AbstractMap, yet Hashtable only adds elements() and contains(); the former comes from the abstract class Dictionary, and the latter is identical to containsValue(value):
public boolean containsValue(Object value) { return contains(value); }
- Null handling: Hashtable allows neither null keys nor null values; HashMap stores an entry with a null key in table[0]. Because HashMap also allows null values, a null from get() can mean either that the key is absent or that it maps to null, so test for presence with containsKey().
- Implementation: both share the same underlying structure, an array plus linked lists. The array holds the entry for each hash slot, and entries mapped to the same slot are chained as a list:
transient Node<K,V>[] table;

static class Node<K,V> implements Map.Entry<K,V> {
    final int hash;
    final K key;
    V value;
    Node<K,V> next;

    Node(int hash, K key, V value, Node<K,V> next) {
        this.hash = hash;
        this.key = key;
        this.value = value;
        this.next = next;
    }

    public final K getKey()        { return key; }
    public final V getValue()      { return value; }
    public final String toString() { return key + "=" + value; }

    public final int hashCode() {
        return Objects.hashCode(key) ^ Objects.hashCode(value);
    }

    public final V setValue(V newValue) {
        V oldValue = value;
        value = newValue;
        return oldValue;
    }

    public final boolean equals(Object o) {
        if (o == this)
            return true;
        if (o instanceof Map.Entry) {
            Map.Entry<?,?> e = (Map.Entry<?,?>)o;
            if (Objects.equals(key, e.getKey()) &&
                Objects.equals(value, e.getValue()))
                return true;
        }
        return false;
    }
}
- Resizing: Hashtable's default initial capacity is 11 and each resize grows it to 2n + 1; HashMap's default initial capacity is 16 and each resize doubles it. Given an explicit initial capacity, Hashtable uses it as-is, while HashMap rounds it up to the next power of two.
- Thread safety: Hashtable is thread-safe, HashMap is not.
- Hashtable is too slow and has a better replacement, ConcurrentHashMap; its parent class Dictionary has also been abandoned.
Differences between HashMap and TreeMap
TreeMap implements the NavigableMap interface, which extends SortedMap; by default it keeps entries sorted in ascending key order.
Why can resizing a HashMap from multiple threads cause an infinite loop in JDK 1.7 and earlier?
Transfer used head insertion, which reverses each bucket's chain; when two threads resize the same bucket concurrently, the next pointers can end up forming a cycle, and a later get() on that bucket loops forever. JDK 1.8 switched to order-preserving low/high splitting, which avoids the cycle, though HashMap is still not thread-safe.
HashMap source code
Fields
/**
 * The default initial capacity - MUST be a power of two.
 */
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
/**
* The maximum capacity, used if a higher value is implicitly specified
* by either of the constructors with arguments.
* MUST be a power of two <= 1<<30.
*/
static final int MAXIMUM_CAPACITY = 1 << 30;
/**
* The load factor used when none specified in constructor.
*/
static final float DEFAULT_LOAD_FACTOR = 0.75f;
/**
* The bin count threshold for using a tree rather than list for a
* bin. Bins are converted to trees when adding an element to a
* bin with at least this many nodes. The value must be greater
* than 2 and should be at least 8 to mesh with assumptions in
* tree removal about conversion back to plain bins upon
* shrinkage.
*/
static final int TREEIFY_THRESHOLD = 8;
/**
* The bin count threshold for untreeifying a (split) bin during a
* resize operation. Should be less than TREEIFY_THRESHOLD, and at
* most 6 to mesh with shrinkage detection under removal.
*/
static final int UNTREEIFY_THRESHOLD = 6;
/**
* The smallest table capacity for which bins may be treeified.
* (Otherwise the table is resized if too many nodes in a bin.)
* Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
* between resizing and treeification thresholds.
*/
static final int MIN_TREEIFY_CAPACITY = 64;
/* ---------------- Fields -------------- */
/**
* The table, initialized on first use, and resized as
* necessary. When allocated, length is always a power of two.
* (We also tolerate length zero in some operations to allow
* bootstrapping mechanics that are currently not needed.)
*/
transient Node<K,V>[] table;
/**
* Holds cached entrySet(). Note that AbstractMap fields are used
* for keySet() and values().
*/
transient Set<Map.Entry<K,V>> entrySet;
/**
* The number of key-value mappings contained in this map.
*/
transient int size;
/**
* The number of times this HashMap has been structurally modified
* Structural modifications are those that change the number of mappings in
* the HashMap or otherwise modify its internal structure (e.g.,
* rehash). This field is used to make iterators on Collection-views of
* the HashMap fail-fast. (See ConcurrentModificationException).
*/
transient int modCount;
/**
* The next size value at which to resize (capacity * load factor).
*
* @serial
*/
// (The javadoc description is true upon serialization.
// Additionally, if the table array has not been allocated, this
// field holds the initial array capacity, or zero signifying
// DEFAULT_INITIAL_CAPACITY.)
int threshold;
/**
* The load factor for the hash table.
*
* @serial
*/
final float loadFactor;
loadFactor (load factor)
Controls how densely the table is packed: at 1 the array must be completely full before resizing; the closer to 0, the sparser the data and the shorter the chains. Balancing array utilization against chain lookup cost is the question; the JDK's answer is 0.75.
threshold (resize trigger)
threshold = capacity * loadFactor; when size > threshold, the table is resized.
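With the defaults above, a plain new HashMap<>() resizes on the 13th insertion; a sketch of the arithmetic:

```java
public class ThresholdDemo {
    public static void main(String[] args) {
        int capacity = 16;        // DEFAULT_INITIAL_CAPACITY
        float loadFactor = 0.75f; // DEFAULT_LOAD_FACTOR
        int threshold = (int) (capacity * loadFactor);
        // size > threshold triggers resize(), so the 13th put grows the table.
        System.out.println(threshold); // 12
    }
}
```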
Resizing
resize() first decides whether this is initialization or growth: for initialization it picks an appropriate length and allocates the table; for growth it doubles the capacity and transfers all existing elements into the new table.
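One consequence of doubling, worth noting though not spelled out above: in JDK 1.8 a node's new index depends on a single extra hash bit, h & oldCap, so each bin splits into a "low" list that stays at its old index and a "high" list that moves to old index + oldCap. A small demonstration with hand-picked hashes:

```java
public class ResizeSplit {
    public static void main(String[] args) {
        int oldCap = 16, newCap = 32;
        int[] hashes = {5, 21, 37, 53}; // all map to bucket 5 when n = 16

        for (int h : hashes) {
            int oldIdx = h & (oldCap - 1);
            int newIdx = h & (newCap - 1);
            // The extra bit (h & oldCap) decides low list vs. high list.
            boolean high = (h & oldCap) != 0;
            System.out.println(h + ": " + oldIdx + " -> " + newIdx
                    + (high ? " (oldIdx + oldCap)" : " (unchanged)"));
        }
    }
}
```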
Aside: the JDK 1.8 hash algorithm
static final int hash(Object key) {
    int h;
    // key.hashCode(): the key's hash code
    // ^  : bitwise XOR
    // >>>: unsigned right shift, filling vacated bits with zeros
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
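The XOR folds the high 16 bits of the hash code into the low 16. Since the bucket index is (n - 1) & hash, a small table would otherwise ignore the high bits entirely; a sketch with two hand-picked hash codes:

```java
public class HashSpread {
    // Same spreading step as HashMap.hash(): XOR the high 16 bits
    // into the low 16 so small tables see the whole hash code.
    static int spread(int h) {
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16; // table length; index = (n - 1) & hash
        int h1 = 0x10000, h2 = 0x20000; // differ only in the high bits

        // Without spreading, both collide in bucket 0:
        System.out.println(((n - 1) & h1) + " " + ((n - 1) & h2)); // 0 0

        // With spreading, the high bits reach the index:
        System.out.println(((n - 1) & spread(h1)) + " "
                         + ((n - 1) & spread(h2))); // 1 2
    }
}
```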
The put method
/**
* Associates the specified value with the specified key in this map.
* If the map previously contained a mapping for the key, the old
* value is replaced.
*
* @param key key with which the specified value is to be associated
* @param value value to be associated with the specified key
* @return the previous value associated with {@code key}, or
* {@code null} if there was no mapping for {@code key}.
* (A {@code null} return can also indicate that the map
* previously associated {@code null} with {@code key}.)
*/
public V put(K key, V value) {
    return putVal(hash(key), key, value, false, true);
}
/**
* Implements Map.put and related methods.
*
* @param hash hash for key
* @param key the key
* @param value the value to put
* @param onlyIfAbsent if true, don't change existing value
* @param evict if false, the table is in creation mode.
* @return previous value, or null if none
*/
final V putVal(int hash, K key, V value, boolean onlyIfAbsent, boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    if ((tab = table) == null || (n = tab.length) == 0) // table not allocated yet: initialize it
        n = (tab = resize()).length;
    if ((p = tab[i = (n - 1) & hash]) == null) // bucket empty: place the new node directly
        tab[i] = newNode(hash, key, value, null);
    else {
        Node<K,V> e; K k; // e will point at the node for this key, if any
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            e = p;
        else if (p instanceof TreeNode)
            e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
        else {
            for (int binCount = 0; ; ++binCount) {
                if ((e = p.next) == null) { // key absent: append a new node at the tail
                    p.next = newNode(hash, key, value, null);
                    if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                        treeifyBin(tab, hash);
                    break;
                }
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    break; // key found: e points at its node
                p = e; // advance along the chain
            }
        }
        if (e != null) { // existing mapping for key
            V oldValue = e.value; // remember the old value
            if (!onlyIfAbsent || oldValue == null) // unless onlyIfAbsent (and the old value is non-null),
                e.value = value;                   // overwrite with the new value
            afterNodeAccess(e);
            return oldValue;
        }
    }
    ++modCount;
    if (++size > threshold) // over the threshold: grow the table
        resize();
    afterNodeInsertion(evict);
    return null;
}
After inserting into a chain, putVal checks whether the bin has reached TREEIFY_THRESHOLD; if so it calls treeifyBin, which converts the bin to a red-black tree, unless the table is still smaller than MIN_TREEIFY_CAPACITY, in which case it simply resizes:
final void treeifyBin(Node<K,V>[] tab, int hash) {
    int n, index; Node<K,V> e;
    if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
        resize();
    else if ((e = tab[index = (n - 1) & hash]) != null) {
        TreeNode<K,V> hd = null, tl = null;
        do {
            TreeNode<K,V> p = replacementTreeNode(e, null);
            if (tl == null)
                hd = p;
            else {
                p.prev = tl;
                tl.next = p;
            }
            tl = p;
        } while ((e = e.next) != null);
        if ((tab[index] = hd) != null)
            hd.treeify(tab);
    }
}
Why is HashMap's capacity kept at a power of two?
For a % b, when b is a power of two there is the substitution a % b = a & (b - 1), i.e. a % 2^n = a & (2^n - 1).
- The bitwise AND is cheaper than a division-based remainder, so computing the bucket index from the hash is faster.
- Hash codes span -2147483648 to 2147483647 while the array is finite, so the hash must first be reduced modulo the table length; the remainder is the slot index.
...
if ((p = tab[i = (n - 1) & hash]) == null)
    tab[i] = newNode(hash, key, value, null);
...
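A quick check of the substitution, and of why & is preferred over % (which can go negative in Java):

```java
public class PowerOfTwoMod {
    public static void main(String[] args) {
        int n = 16; // a power-of-two table length
        for (int hash : new int[]{0, 7, 16, 12345, Integer.MAX_VALUE}) {
            // For non-negative values, hash % n == hash & (n - 1).
            System.out.println(hash % n == (hash & (n - 1))); // true
        }
        // & also sidesteps the sign problem: % can yield a negative index.
        System.out.println(-5 % 16);       // -5
        System.out.println(-5 & (16 - 1)); // 11
    }
}
```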
How does ConcurrentHashMap achieve thread safety?
Before JDK 1.8, ConcurrentHashMap used a Segment array + HashEntry array + linked-list layout: each Segment owns one HashEntry array, so the map is effectively many small HashMaps with the Segment array as a directory.
Segment extends ReentrantLock, so each Segment is a reentrant lock guarding its slice of the table.
From JDK 1.8 on, the Segments are gone: the layout is a Node array + linked lists / red-black trees, and synchronized locks only the head node of the affected bin, a much finer granularity.
Why can't ConcurrentHashMap's keys and values be null, the way HashMap's can?
The main reason that nulls aren't allowed in ConcurrentMaps (ConcurrentHashMaps, ConcurrentSkipListMaps) is that ambiguities that may be just barely tolerable in non-concurrent maps can't be accommodated. The main one is that if
map.get(key) returns null, you can't detect whether the key explicitly maps to null vs the key isn't mapped. In a non-concurrent map, you can check this via map.contains(key), but in a concurrent one, the map might have changed between calls.
HashMap resolves this with containsKey(): for a null key it can check whether the node in table[0] actually exists. Under concurrency the same null is inherently ambiguous:
- another thread may have deleted the node, so the key is genuinely absent, or
- the key is present and really maps to null;
get() returns null in both cases, and no check can reliably tell them apart.
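A demonstration of the ambiguity (single-threaded here, so containsKey still rescues us; in a concurrent map it could not):

```java
import java.util.HashMap;
import java.util.Map;

public class NullAmbiguity {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("k", null);

        // get() alone cannot distinguish "mapped to null" from "absent":
        System.out.println(map.get("k"));       // null
        System.out.println(map.get("missing")); // null

        // In a single-threaded HashMap, containsKey() resolves it...
        System.out.println(map.containsKey("k"));       // true
        System.out.println(map.containsKey("missing")); // false
        // ...but in a ConcurrentHashMap another thread could remove the
        // key between containsKey() and get(), so the check is unreliable;
        // hence nulls are banned outright.
    }
}
```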
Does ConcurrentHashMap guarantee atomicity for compound operations?
No. A compound operation is a sequence of basic operations (put, get, remove, containsKey, ...); another thread can slip in between the steps, so the combined result may not be what you expect.
ConcurrentHashMap does provide atomic compound operations, such as putIfAbsent, compute, computeIfAbsent, computeIfPresent, and merge; prefer those.
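A sketch contrasting the two: the commented-out check-then-act sequence can lose updates under contention, while merge performs the read-modify-write atomically:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CompoundOps {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<>();

        // Not atomic: another thread can run between get() and put(),
        // losing updates even though each call is individually safe.
        //   Integer old = counts.get("k");
        //   counts.put("k", old == null ? 1 : old + 1);

        // Atomic alternative: merge() does the read-modify-write in one step.
        Runnable task = () -> {
            for (int i = 0; i < 1_000; i++) {
                counts.merge("k", 1, Integer::sum);
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counts.get("k")); // 2000, no lost updates
    }
}
```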