A basic understanding of Flutter's ImageProvider loadImage/loadBuffer

loadImage and loadBuffer can be understood as essentially the same method. loadBuffer is marked as deprecated in newer Flutter versions, and loadImage is the recommended replacement; as the FileImage comparison further down shows, the two overrides differ only in the type of the decode callback.

In my understanding, loadBuffer mainly does the following things.

The code of FileImage is used as the illustration here:

1. Parse the file and obtain the image data.

FileImage.loadBuffer returns a MultiFrameImageStreamCompleter:

@override
ImageStreamCompleter loadBuffer(FileImage key, DecoderBufferCallback decode) {
  return MultiFrameImageStreamCompleter(
    codec: _loadAsync(key, decode: decode),
    scale: key.scale,
    debugLabel: key.file.path,
    informationCollector: () => <DiagnosticsNode>[
      ErrorDescription('Path: ${file.path}'),
    ],
  );
}
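
For comparison, FileImage's loadImage override in recent Flutter versions is structurally identical; only the decode callback type changes from DecoderBufferCallback to ImageDecoderCallback. A sketch based on the framework source (details may vary between Flutter versions):

@override
ImageStreamCompleter loadImage(FileImage key, ImageDecoderCallback decode) {
  // Same structure as loadBuffer: kick off the async decode and hand the
  // resulting Future<ui.Codec> to a MultiFrameImageStreamCompleter.
  return MultiFrameImageStreamCompleter(
    codec: _loadAsync(key, decode: decode),
    scale: key.scale,
    debugLabel: key.file.path,
    informationCollector: () => <DiagnosticsNode>[
      ErrorDescription('Path: ${file.path}'),
    ],
  );
}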

The codec argument passed to the MultiFrameImageStreamCompleter constructor is the Future returned by _loadAsync, and _loadAsync is the method that actually reads and decodes the file contents. Inside the constructor, a completion callback is attached to that future:

codec.then<void>(_handleCodecReady, onError: (Object error, StackTrace stack) {
  reportError(
    context: ErrorDescription('resolving an image codec'),
    exception: error,
    stack: stack,
    informationCollector: informationCollector,
    silent: true,
  );
});
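
_loadAsync itself is, roughly, "read the bytes, wrap them in an ImmutableBuffer, let the engine decode them into a ui.Codec". A simplified sketch of that flow (the real FileImage._loadAsync contains more error handling and version-specific branches):

Future<ui.Codec> _loadAsync(FileImage key, {required DecoderBufferCallback decode}) async {
  assert(key == this);
  // Read the raw file contents from disk.
  final Uint8List bytes = await key.file.readAsBytes();
  if (bytes.lengthInBytes == 0) {
    // An empty file cannot be decoded; evict it from the cache and fail.
    PaintingBinding.instance.imageCache.evict(key);
    throw StateError('${key.file} is empty and cannot be loaded as an image.');
  }
  // Hand the bytes to the engine-provided decode callback, which yields the codec.
  return decode(await ui.ImmutableBuffer.fromUint8List(bytes));
}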

2. Once the codec future completes, _handleCodecReady is called, which in turn calls _decodeNextFrameAndSchedule. That method mainly does the following:

1. Check whether the image has only a single frame.

2. Branch on the frame count. If there is only one frame, that frame is wrapped in an ImageInfo and passed through _emitFrame -> setImage; setImage is where the _listeners are notified (see the listener example after the snippet below).

if (_codec!.frameCount == 1) {
  // ImageStreamCompleter listeners removed while waiting for next frame to
  // be decoded.
  // There's no reason to emit the frame without active listeners.
  if (!hasListeners) {
    return;
  }
  // This is not an animated image, just return it and don't schedule more
  // frames.
  _emitFrame(ImageInfo(
    image: _nextFrame!.image.clone(),
    scale: _scale,
    debugLabel: debugLabel,
  ));
  _nextFrame!.image.dispose();
  _nextFrame = null;
  return;
}
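
The _listeners that setImage notifies are the ImageStreamListeners registered on the ImageStream returned by ImageProvider.resolve. A minimal, hypothetical example of wiring one up inside a running Flutter app (listenToFile and the debug output are purely illustrative):

import 'dart:io';

import 'package:flutter/foundation.dart';
import 'package:flutter/painting.dart';

// Resolve a FileImage and listen for the ImageInfo that setImage /
// _emitFrame eventually delivers.
void listenToFile(File file) {
  final ImageStream stream = FileImage(file).resolve(ImageConfiguration.empty);
  stream.addListener(ImageStreamListener(
    (ImageInfo info, bool synchronousCall) {
      // Called once per decoded frame (a single time for a static image).
      debugPrint('decoded ${info.image.width}x${info.image.height}');
    },
    onError: (Object error, StackTrace? stackTrace) {
      debugPrint('failed to decode image: $error');
    },
  ));
}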

3. If the image has multiple frames, _scheduleAppFrame -> _handleAppFrame is called, and each frame is emitted on a schedule derived from its metadata, such as the frame duration (a sketch of _scheduleAppFrame follows the snippet below).

void _handleAppFrame(Duration timestamp) {
  _frameCallbackScheduled = false;
  if (!hasListeners) {
    return;
  }
  assert(_nextFrame != null);
  if (_isFirstFrame() || _hasFrameDurationPassed(timestamp)) {
    _emitFrame(ImageInfo(
      image: _nextFrame!.image.clone(),
      scale: _scale,
      debugLabel: debugLabel,
    ));
    _shownTimestamp = timestamp;
    _frameDuration = _nextFrame!.duration;
    _nextFrame!.image.dispose();
    _nextFrame = null;
    final int completedCycles = _framesEmitted ~/ _codec!.frameCount;
    if (_codec!.repetitionCount == -1 || completedCycles <= _codec!.repetitionCount) {
      _decodeNextFrameAndSchedule();
    }
    return;
  }
  final Duration delay = _frameDuration! - (timestamp - _shownTimestamp);
  _timer = Timer(delay * timeDilation, () {
    _scheduleAppFrame();
  });
}
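
_scheduleAppFrame itself is small: it registers _handleAppFrame as a one-shot frame callback on the SchedulerBinding, while _decodeNextFrameAndSchedule is what pulls the next frame out of the codec beforehand. A rough sketch of both, simplified from the framework source (error handling and minor details differ between Flutter versions):

Future<void> _decodeNextFrameAndSchedule() async {
  // Dispose any frame we still own, then ask the codec for the next one
  // (a FrameInfo carrying the decoded image and its duration).
  _nextFrame?.image.dispose();
  _nextFrame = null;
  _nextFrame = await _codec!.getNextFrame();
  if (_codec!.frameCount == 1) {
    // ... single-frame path shown earlier ...
    return;
  }
  _scheduleAppFrame();
}

void _scheduleAppFrame() {
  if (_frameCallbackScheduled) {
    return;
  }
  _frameCallbackScheduled = true;
  // _handleAppFrame runs on the next engine frame.
  SchedulerBinding.instance.scheduleFrameCallback(_handleAppFrame);
}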

4. Finally, look at a custom ImageProvider from the open-source flutter_blurhash package:

class UiImage extends ImageProvider<UiImage> {
  final ui.Image image;
  final double scale;

  const UiImage(this.image, {this.scale = 1.0});

  @override
  Future<UiImage> obtainKey(ImageConfiguration configuration) =>
      SynchronousFuture<UiImage>(this);

  @override
  ImageStreamCompleter loadImage(UiImage key, ImageDecoderCallback decode) =>
      OneFrameImageStreamCompleter(_loadAsync(key));

  Future<ImageInfo> _loadAsync(UiImage key) async {
    assert(key == this);
    return ImageInfo(image: image, scale: key.scale);
  }

  @override
  bool operator ==(dynamic other) {
    if (other.runtimeType != runtimeType) return false;
    final UiImage typedOther = other;
    return image == typedOther.image && scale == typedOther.scale;
  }

  @override
  int get hashCode => Object.hash(image.hashCode, scale);

  @override
  String toString() =>
      '$runtimeType(${describeIdentity(image)}, scale: $scale)';
}

Here loadImage returns a OneFrameImageStreamCompleter. As the name suggests, it handles exactly one frame, so its constructor calls setImage directly once the ImageInfo future resolves. This is consistent with the single-frame path in FileImage shown earlier.

OneFrameImageStreamCompleter(Future<ImageInfo> image, { InformationCollector? informationCollector }) {
  image.then<void>(setImage, onError: (Object error, StackTrace stack) {
    reportError(
      context: ErrorDescription('resolving a single-frame image stream'),
      exception: error,
      stack: stack,
      informationCollector: informationCollector,
      silent: true,
    );
  });
}
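
For completeness, a hypothetical usage sketch: because UiImage wraps an already-decoded ui.Image, it can be passed straight to the regular Image widget, and the OneFrameImageStreamCompleter simply hands that image to the listeners without any decoding work (buildPreview is an illustrative name, not part of flutter_blurhash):

import 'dart:ui' as ui;

import 'package:flutter/widgets.dart';

// UiImage here is the provider defined above (it lives inside the
// flutter_blurhash source), wrapped around an image that is already decoded.
Widget buildPreview(ui.Image decoded) {
  return Image(image: UiImage(decoded));
}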