You can combine the HTML5 video and canvas elements to capture an image of the frame a video is showing at a given moment.
First, create a video element in a React component to load the video, then listen for the video's loadedmetadata event in componentDidMount to read the video's intrinsic width and height.
class VideoPlayer extends React.Component {
  constructor(props) {
    super(props);
    this.canvasRef = React.createRef();
    this.videoRef = React.createRef();
  }

  componentDidMount() {
    this.videoRef.current.addEventListener('loadedmetadata', this.handleLoadedMetadata);
  }

  handleLoadedMetadata = () => {
    const { videoWidth, videoHeight } = this.videoRef.current;
    this.canvasRef.current.width = videoWidth;
    this.canvasRef.current.height = videoHeight;
  }

  render() {
    return (
      <div>
        <video ref={this.videoRef} src={this.props.src} controls></video>
        <canvas ref={this.canvasRef}></canvas>
      </div>
    );
  }
}
Next, listen for the video's timeupdate event. In the handler, read the current playback time, work out the nominal frame index from an assumed frame rate, and draw the frame the video element is currently displaying onto the canvas with drawImage.
class VideoPlayer extends React.Component {
  ...
  handleTimeUpdate = () => {
    const { currentTime, videoWidth, videoHeight } = this.videoRef.current;
    const fps = 24; // assumed source frame rate
    const frameIndex = Math.floor(currentTime * fps); // nominal frame index at that rate
    const canvas = this.canvasRef.current;
    const ctx = canvas.getContext('2d');
    // Draw the video element itself; the browser paints whatever frame is currently shown.
    ctx.drawImage(this.videoRef.current, 0, 0, videoWidth, videoHeight);
  }

  componentDidMount() {
    this.videoRef.current.addEventListener('loadedmetadata', this.handleLoadedMetadata);
    this.videoRef.current.addEventListener('timeupdate', this.handleTimeUpdate);
  }

  render() {
    return (
      <div>
        <video ref={this.videoRef} src={this.props.src} controls></video>
        <canvas ref={this.canvasRef}></canvas>
      </div>
    );
  }
}
Finally, convert the canvas contents into a base64-encoded data URL with toDataURL and display it on the page.
class VideoPlayer extends React.Component {
  ...
  handleTimeUpdate = () => {
    const { currentTime, videoWidth, videoHeight } = this.videoRef.current;
    const fps = 24; // assumed source frame rate
    const frameIndex = Math.floor(currentTime * fps); // nominal frame index at that rate
    const canvas = this.canvasRef.current;
    const ctx = canvas.getContext('2d');
    ctx.drawImage(this.videoRef.current, 0, 0, videoWidth, videoHeight);
    const dataUrl = canvas.toDataURL('image/png');
    this.setState({ currentFrame: dataUrl });
  }

  render() {
    return (
      <div>
        <video ref={this.videoRef} src={this.props.src} controls></video>
        <canvas ref={this.canvasRef}></canvas>
        {this.state.currentFrame && <img src={this.state.currentFrame} alt="currentFrame" />}
      </div>
    );
  }
}
The complete code:
class VideoPlayer extends React.Component {
  constructor(props) {
    super(props);
    this.canvasRef = React.createRef();
    this.videoRef = React.createRef();
    this.state = {
      currentFrame: '',
    };
  }

  componentDidMount() {
    this.videoRef.current.addEventListener('loadedmetadata', this.handleLoadedMetadata);
    this.videoRef.current.addEventListener('timeupdate', this.handleTimeUpdate);
  }

  componentWillUnmount() {
    this.videoRef.current.removeEventListener('loadedmetadata', this.handleLoadedMetadata);
    this.videoRef.current.removeEventListener('timeupdate', this.handleTimeUpdate);
  }

  // Size the canvas to match the video's intrinsic dimensions.
  handleLoadedMetadata = () => {
    const { videoWidth, videoHeight } = this.videoRef.current;
    this.canvasRef.current.width = videoWidth;
    this.canvasRef.current.height = videoHeight;
  }

  // timeupdate fires a few times per second; each time, paint the currently
  // displayed frame onto the canvas and store it as a data URL.
  handleTimeUpdate = () => {
    const { currentTime, videoWidth, videoHeight } = this.videoRef.current;
    const fps = 24; // assumed source frame rate
    const frameIndex = Math.floor(currentTime * fps); // nominal frame index at that rate
    const canvas = this.canvasRef.current;
    const ctx = canvas.getContext('2d');
    ctx.drawImage(this.videoRef.current, 0, 0, videoWidth, videoHeight);
    const dataUrl = canvas.toDataURL('image/png');
    this.setState({ currentFrame: dataUrl });
  }

  render() {
    return (
      <div>
        <video ref={this.videoRef} src={this.props.src} controls></video>
        <canvas ref={this.canvasRef}></canvas>
        {this.state.currentFrame && <img src={this.state.currentFrame} alt="currentFrame" />}
      </div>
    );
  }
}

ReactDOM.render(<VideoPlayer src="https://media.w3.org/2010/05/sintel/trailer_hd.mp4" />, document.getElementById('root'));
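If your project is on React 18, ReactDOM.render has been superseded by the createRoot API; a minimal sketch of the equivalent mount, assuming the same root element, looks like this:

import { createRoot } from 'react-dom/client';

// React 18 mount: create a root once, then render into it.
const root = createRoot(document.getElementById('root'));
root.render(<VideoPlayer src="https://media.w3.org/2010/05/sintel/trailer_hd.mp4" />);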
In React 18 (or any version with hooks) you can write the same thing as a function component, using the useRef, useState, and useEffect hooks to reference the video element and capture frames from it. Here is a simple example:
import { useRef, useState, useEffect } from 'react';

function VideoFrame({ src, fps }) {
  const videoRef = useRef();
  const canvasRef = useRef();
  const [frame, setFrame] = useState(null);

  useEffect(() => {
    const video = videoRef.current;
    const canvas = canvasRef.current;
    const interval = 1000 / fps;

    const captureFrame = () => {
      // Skip until the video's metadata (and first frame) is available.
      if (!video.videoWidth) return;
      // Keep the canvas the same size as the video so the frame is not distorted.
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      const ctx = canvas.getContext('2d');
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
      setFrame(canvas.toDataURL());
    };

    const timerId = setInterval(captureFrame, interval);
    return () => clearInterval(timerId);
  }, [fps]);

  return (
    <>
      <video ref={videoRef} src={src} autoPlay muted loop style={{ display: 'none' }} />
      <canvas ref={canvasRef} style={{ display: 'none' }} />
      {frame && <img src={frame} alt="Video frame" />}
    </>
  );
}
In this component, useRef creates two refs, one for the video element and one for the canvas, and useState creates a state variable, frame, that holds the captured frame.
Inside useEffect we start a timer when the component mounts: setInterval runs captureFrame at the requested rate, and captureFrame draws the video element onto the canvas and converts the result into a base64-encoded image with toDataURL. The captured frame is stored in frame and rendered as an img on the page; the effect's cleanup function clears the timer on unmount.
Using the component is straightforward: just pass a video source and a capture rate, like this:
<VideoFrame src="path/to/video.mp4" fps={30} />
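If you only need one specific frame rather than a continuously updating preview, a lighter variation is to seek the video to the target time and draw once after the seeked event fires. The captureFrameAt helper below is a hypothetical sketch, not part of the component above:

// Hypothetical helper: resolves with a PNG data URL of the frame at `time` seconds.
function captureFrameAt(video, time) {
  return new Promise((resolve, reject) => {
    const onSeeked = () => {
      video.removeEventListener('seeked', onSeeked);
      const canvas = document.createElement('canvas');
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);
      try {
        resolve(canvas.toDataURL('image/png'));
      } catch (err) {
        reject(err); // SecurityError if the canvas was tainted by a cross-origin video
      }
    };
    video.addEventListener('seeked', onSeeked);
    video.currentTime = time; // seeking fires 'seeked' once the target frame is ready
  });
}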
Note that canvas.toDataURL only works when the video is same-origin, or when the server sends CORS headers and the video element opts in via crossOrigin="anonymous"; otherwise the canvas becomes tainted and toDataURL throws a SecurityError. Embedded players such as YouTube do not expose a direct media URL at all, so capturing their frames requires a different approach entirely.
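For example, when the video host does send Access-Control-Allow-Origin headers, the only change needed in the hook example above is on the video element; whether this actually works depends on the server's CORS configuration:

<video
  ref={videoRef}
  src={src}
  crossOrigin="anonymous" // request the file with CORS so the canvas is not tainted
  autoPlay
  muted
  loop
  style={{ display: 'none' }}
/>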