combine
The combine operator merges two flows. Both flows run in their own coroutines, and whenever either flow produces a new value, the transform is invoked with that value together with the latest value from the other flow.
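A minimal usage sketch (assuming kotlinx-coroutines-core is on the classpath; the flow contents, delays and the exact interleaving of the output are illustrative only):

import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val numbers = flowOf(1, 2, 3).onEach { delay(100) }   // hypothetical source flow
    val letters = flowOf("a", "b").onEach { delay(150) }  // hypothetical source flow
    // Whenever either flow emits, the transform receives that value
    // together with the latest value of the other flow.
    numbers.combine(letters) { n, l -> "$n$l" }
        .collect { println(it) } // e.g. 1a, 2a, 2b, 3b -- exact order depends on timing
}

Internally the work is done by combineInternal: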
internal suspend fun <R, T> FlowCollector<R>.combineInternal(
    flows: Array<out Flow<T>>,
    arrayFactory: () -> Array<T?>?, // Array factory is required to workaround array typing on JVM
    transform: suspend FlowCollector<R>.(Array<T>) -> Unit
): Unit = flowScope { // flow scope so any cancellation within the source flow will cancel the whole scope
    val size = flows.size
    if (size == 0) return@flowScope // bail-out for empty input
    val latestValues = arrayOfNulls<Any?>(size)
    latestValues.fill(UNINITIALIZED) // Smaller bytecode & faster than Array(size) { UNINITIALIZED }
    val resultChannel = Channel<Update>(size)
    val nonClosed = LocalAtomicInt(size)
    var remainingAbsentValues = size
    for (i in 0 until size) {
        // Coroutine per flow that keeps track of its value and sends result to downstream
        launch {
            try {
                flows[i].collect { value ->
                    resultChannel.send(Update(i, value))
                    yield() // Emulate fairness, giving each flow chance to emit
                }
            } finally {
                // Close the channel when there is no more flows
                if (nonClosed.decrementAndGet() == 0) {
                    resultChannel.close()
                }
            }
        }
    }
    /*
     * Batch-receive optimization: read updates in batches, but bail-out
     * as soon as we encountered two values from the same source
     */
    val lastReceivedEpoch = ByteArray(size)
    var currentEpoch: Byte = 0
    while (true) {
        ++currentEpoch
        // Start batch
        // The very first receive in epoch should be suspending
        var element = resultChannel.receiveCatching().getOrNull() ?: break // Channel is closed, nothing to do here
        while (true) {
            val index = element.index
            // Update values
            val previous = latestValues[index]
            latestValues[index] = element.value
            if (previous === UNINITIALIZED) --remainingAbsentValues
            // Check epoch
            // Received the second value from the same flow in the same epoch -- bail out
            if (lastReceivedEpoch[index] == currentEpoch) break
            lastReceivedEpoch[index] = currentEpoch
            element = resultChannel.tryReceive().getOrNull() ?: break
        }
        // Process batch result if there is enough data
        if (remainingAbsentValues == 0) {
            /*
             * If arrayFactory returns null, then we can avoid array copy because
             * it's our own safe transformer that immediately deconstructs the array
             */
            val results = arrayFactory()
            if (results == null) {
                transform(latestValues as Array<T>)
            } else {
                (latestValues as Array<T?>).copyInto(results)
                transform(results as Array<T>)
            }
        }
    }
}
combineInternal iterates over the flows being combined and launches a separate coroutine for each of them; every coroutine forwards its values through a Channel. Channels are not covered in detail here; it is enough to know that they provide the basic send and receive operations. What gets sent is an Update(i, value), which simply records which flow produced which value.
for (i in 0 until size) {
    // Coroutine per flow that keeps track of its value and sends result to downstream
    launch {
        try {
            flows[i].collect { value ->
                resultChannel.send(Update(i, value))
                yield() // Emulate fairness, giving each flow chance to emit
            }
        } finally {
            // Close the channel when there is no more flows
            if (nonClosed.decrementAndGet() == 0) {
                resultChannel.close()
            }
        }
    }
}
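Update itself is just a small holder that pairs the flow's index with the value it emitted; roughly (paraphrased sketch, the actual private class in Combine.kt may differ in detail):

// Sketch: "flow number index produced this value"
private class Update(@JvmField val index: Int, @JvmField val value: Any?)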
On the receiving side, the loop reads the data sent through the Channel: element.index identifies which flow in the combination produced the value, and element.value is the value itself. latestValues caches the most recent value received from each flow in the array.
var element = resultChannel.receiveCatching().getOrNull() ?: break // Channel is closed, nothing to do here
while (true) {
    val index = element.index
    // Update values
    val previous = latestValues[index]
    latestValues[index] = element.value
    if (previous === UNINITIALIZED) --remainingAbsentValues
    // Check epoch
    // Received the second value from the same flow in the same epoch -- bail out
    if (lastReceivedEpoch[index] == currentEpoch) break
    lastReceivedEpoch[index] = currentEpoch
    element = resultChannel.tryReceive().getOrNull() ?: break
}
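One consequence of remainingAbsentValues is that nothing reaches the transform until every source has emitted at least once. A small sketch (flow names and delays are illustrative):

import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val fast = flowOf(1, 2, 3).onEach { delay(10) }
    val slow = flow { delay(100); emit("x") }
    // No result is emitted until both sources have produced a value
    // (remainingAbsentValues must reach 0), so the first -- and here only --
    // result pairs the latest fast value with "x".
    fast.combine(slow) { a, b -> "$a$b" }
        .collect { println(it) } // 3x (timing dependent)
}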
flatMapLatest
This operator lives in Merge.kt. It processes each element of the flow and maps it to another flow, but only the flow produced by the most recent element is kept; all previously started, still-unfinished inner flows are cancelled.
internal class ChannelFlowTransformLatest<T, R>(
    private val transform: suspend FlowCollector<R>.(value: T) -> Unit,
    flow: Flow<T>,
    context: CoroutineContext = EmptyCoroutineContext,
    capacity: Int = Channel.BUFFERED,
    onBufferOverflow: BufferOverflow = BufferOverflow.SUSPEND
) : ChannelFlowOperator<T, R>(flow, context, capacity, onBufferOverflow) {

    override fun create(context: CoroutineContext, capacity: Int, onBufferOverflow: BufferOverflow): ChannelFlow<R> =
        ChannelFlowTransformLatest(transform, flow, context, capacity, onBufferOverflow)

    override suspend fun flowCollect(collector: FlowCollector<R>) {
        assert { collector is SendingCollector } // So cancellation behaviour is not leaking into the downstream
        coroutineScope {
            var previousFlow: Job? = null
            flow.collect { value ->
                previousFlow?.apply {
                    cancel(ChildCancelledException())
                    join()
                }
                // Do not pay for dispatch here, it's never necessary
                previousFlow = launch(start = CoroutineStart.UNDISPATCHED) {
                    collector.transform(value)
                }
            }
        }
    }
}
First, the previous coroutine is cancelled (and joined):
previousFlow?.apply {
    cancel(ChildCancelledException())
    join()
}
Then a new coroutine is launched for the latest value:
previousFlow = launch(start = CoroutineStart.UNDISPATCHED) {
    collector.transform(value)
}
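A usage sketch of the resulting behaviour (values and delays are illustrative; the exact output depends on timing):

import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

@OptIn(ExperimentalCoroutinesApi::class)
fun main() = runBlocking {
    flowOf("a", "b", "c")
        .onEach { delay(50) }
        .flatMapLatest { letter ->
            flow {
                emit("$letter-1")
                delay(100)        // the upstream emits again before this delay finishes,
                emit("$letter-2") // so the inner flows for "a" and "b" are cancelled here
            }
        }
        .collect { println(it) } // a-1, b-1, c-1, c-2
}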
map
The map operator applies a given transform function to every element of the flow and returns a new flow containing the transformed elements. It does not change the number of elements in the original flow, only the value of each element.
public inline fun <T, R> Flow<T>.map(crossinline transform: suspend (value: T) -> R): Flow<R> = transform { value ->
    return@transform emit(transform(value))
}
internal inline fun <T, R> Flow<T>.unsafeTransform(
    @BuilderInference crossinline transform: suspend FlowCollector<R>.(value: T) -> Unit
): Flow<R> = unsafeFlow { // Note: unsafe flow is used here, because unsafeTransform is only for internal use
    collect { value ->
        // kludge, without it Unit will be returned and TCE won't kick in, KT-28938
        return@collect transform(value)
    }
}
Before the value is sent downstream, the transform function is applied to it, and the resulting new value is what gets emitted.
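For example (a minimal sketch):

import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    flowOf(1, 2, 3)
        .map { it * it }         // each value is transformed before reaching the collector
        .collect { println(it) } // 1, 4, 9
}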
filter
filter also belongs to the transform family: before a value is sent downstream, the predicate function expression is evaluated, and the value is emitted only if it returns true.
public inline fun <T> Flow<T>.filter(crossinline predicate: suspend (T) -> Boolean): Flow<T> = transform { value ->
    if (predicate(value)) return@transform emit(value)
}
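For example (a minimal sketch):

import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    (1..6).asFlow()
        .filter { it % 2 == 0 }  // only values for which the predicate returns true are emitted
        .collect { println(it) } // 2, 4, 6
}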
distinctUntilChanged
This operator removes consecutive duplicate elements from the flow. It keeps track of the previously emitted element and emits the current element downstream only when it differs from the previous one.
private fun <T> Flow<T>.distinctUntilChangedBy(
    keySelector: (T) -> Any?,
    areEquivalent: (old: Any?, new: Any?) -> Boolean
): Flow<T> = when {
    this is DistinctFlowImpl<*> && this.keySelector === keySelector && this.areEquivalent === areEquivalent -> this // same
    else -> DistinctFlowImpl(this, keySelector, areEquivalent)
}

private class DistinctFlowImpl<T>(
    private val upstream: Flow<T>,
    @JvmField val keySelector: (T) -> Any?,
    @JvmField val areEquivalent: (old: Any?, new: Any?) -> Boolean
) : Flow<T> {
    override suspend fun collect(collector: FlowCollector<T>) {
        var previousKey: Any? = NULL
        upstream.collect { value ->
            val key = keySelector(value)
            @Suppress("UNCHECKED_CAST")
            if (previousKey === NULL || !areEquivalent(previousKey, key)) {
                previousKey = key
                collector.emit(value)
            }
        }
    }
}
areEquivalent is a function expression: a value is only emitted downstream when it is not equivalent to the previously emitted one. The key of the most recently emitted value is cached each time so it can be compared against the next candidate. So, since n != n+1, n+1 != n+2 and n+2 != n+3, no two consecutive emitted values are ever equal.
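A small sketch of that behaviour: only consecutive duplicates are dropped, and a value that reappears later is emitted again.

import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    flowOf(1, 1, 2, 2, 2, 3, 1)
        .distinctUntilChanged()
        .collect { println(it) } // 1, 2, 3, 1 -- the trailing 1 is kept because it is not adjacent to the first 1
}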
take
The take operator emits only the first count elements of the flow.
public fun <T> Flow<T>.take(count: Int): Flow<T> {
    require(count > 0) { "Requested element count $count should be positive" }
    return flow {
        val ownershipMarker = Any()
        var consumed = 0
        try {
            collect { value ->
                // Note: this for take is not written via collectWhile on purpose.
                // It checks condition first and then makes a tail-call to either emit or emitAbort.
                // This way normal execution does not require a state machine, only a termination (emitAbort).
                // See "TakeBenchmark" for comparision of different approaches.
                if (++consumed < count) {
                    return@collect emit(value)
                } else {
                    return@collect emitAbort(value, ownershipMarker)
                }
            }
        } catch (e: AbortFlowException) {
            e.checkOwnership(owner = ownershipMarker)
        }
    }
}
The consumed variable can be read as the number of elements consumed so far. As long as the consumed count stays below the requested count, values keep being emitted normally; once it reaches count, emitAbort emits that last value and then aborts the upstream with an AbortFlowException.
if (++consumed < count)
collect { value ->
    // Note: this for take is not written via collectWhile on purpose.
    // It checks condition first and then makes a tail-call to either emit or emitAbort.
    // This way normal execution does not require a state machine, only a termination (emitAbort).
    // See "TakeBenchmark" for comparision of different approaches.
    if (++consumed < count) {
        return@collect emit(value)
    } else {
        return@collect emitAbort(value, ownershipMarker)
    }
}
The collect {} block is what invokes the upstream: the upstream emits first, and only after a value has been received is the downstream condition checked and the value forwarded (or the flow aborted).
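A minimal sketch: take(3) lets three values through and then cancels the (potentially endless) upstream via AbortFlowException.

import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    generateSequence(1) { it + 1 }.asFlow() // an endless upstream
        .take(3)
        .collect { println(it) } // 1, 2, 3 -- the upstream is aborted after the third element
}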