Analysis of a Simple Video Player Based on FFMPEG+SDL

2023-07-04 16:10:44    Source: 博客园
Preface

I recently read Lei Xiaohua (雷霄骅)'s blog post 《最简单的基于FFMPEG+SDL的视频播放器 ver2 (采用SDL2.0)》 ("The simplest FFMPEG+SDL based video player, ver2, using SDL2.0"). Following his code, I implemented a simple video player on Windows. Parts of the code were changed, but the overall approach and the resulting functionality are the same. Below I walk through the source code and record some of the details.



Source Code Analysis

Including the Header Files

Include the required header files (the two system headers here are presumably <windows.h> and <iostream>, which WinMain and std::cout need):

#include <windows.h>
#include <iostream>
extern "C"
{
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
#include "libavutil/imgutils.h"
#include "SDL2/SDL.h"
}
.......

Since both FFmpeg and SDL are written in C, extern "C" is used when including their headers. It declares the C functions so that the C++ code handles them with C naming (no name mangling) and C calling conventions.

Command-Line Argument Parsing

This part is not the focus of the article; feel free to skip ahead to the media file handling section.

Startup-argument parsing was added so that the video to be played can be specified. Since the player is implemented and compiled on Windows, int WINAPI WinMain is used as the program entry point.

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    ......
    return 0;
}

Because of this, getting the startup arguments requires some processing of the command-line data.

.......
    // Get the command-line string
    LPWSTR lpWideCmdLine = GetCommandLineW();
    int argc;
    char *filepath;
    char *buffer = NULL;
    // Split the command-line string into an array of strings, one element per argument
    LPWSTR *argv = CommandLineToArgvW(lpWideCmdLine, &argc);
    // Check whether a file to play was specified
    if (argc > 1)
    {
        // Query the required buffer size, then convert the argument from wide characters to multi-byte (UTF-8) characters
        int bufferSize = WideCharToMultiByte(CP_UTF8, 0, argv[1], -1, NULL, 0, NULL, NULL);
        buffer = new char[bufferSize];
        WideCharToMultiByte(CP_UTF8, 0, argv[1], -1, buffer, bufferSize, NULL, NULL);
        filepath = buffer;
        std::cout << "argv[1]: " << buffer << std::endl;
    }
    else
    {
        cout << "Please provide the path to the video file to play.\n" << endl;
        return -1;
    }
    ......
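For example, assuming the compiled executable is named simple_video_player.exe (the name and path below are just for illustration), launching it as simple_video_player.exe D:\videos\test.mp4 makes argv[1] the UTF-8 path that ends up in filepath.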
Media File Handling

Reading the Media File

Read the media file and obtain information about its streams. The code is as follows:

.......
    // av_register_all();
    // avformat_network_init();
    // Allocate and initialize an AVFormatContext structure
    pFormatCtx = avformat_alloc_context();
    // Open the media file and fill in the AVFormatContext
    if (avformat_open_input(&pFormatCtx, filepath, NULL, NULL) != 0)
    {
        cout << "Could not open input stream: " << filepath << "\n" << endl;
        return -1;
    }
    // Read the media file and retrieve information about its streams
    if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
    {
        cout << "Could not find stream information.\n" << endl;
        return -1;
    }
    .......

Older FFmpeg programs usually called av_register_all() at the very beginning. Since FFmpeg 4.x the function is deprecated and no longer needs to be called, which is why it is commented out above. For more details, see 《ffmpeg4.x为什么不再需要调用av_register_all呢》 ("Why FFmpeg 4.x no longer needs av_register_all").
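If the same source file has to build against both older and newer FFmpeg versions, one option is to guard the call with a version check. A minimal sketch, assuming the commonly cited deprecation point of libavformat 58.9.100 (FFmpeg 4.0); it would slot in where the commented-out call sits:

    ......
    // av_register_all() is only needed on libavformat older than 58.9.100;
    // on newer versions all muxers/demuxers are registered automatically.
#if LIBAVFORMAT_VERSION_INT < AV_VERSION_INT(58, 9, 100)
    av_register_all();
#endif
    ......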

Get the index of the video stream.

A complete media file usually contains both a video stream and an audio stream. The AVFormatContext structure stores all kinds of information about these streams. Since this example only processes video, the index of the video stream can be looked up in the AVFormatContext. The code is as follows:

......
    videoindex = -1;
    // nb_streams is an integer field holding the number of streams in the AVFormatContext
    for (i = 0; i < pFormatCtx->nb_streams; i++)
    {
        // Use codec_type to determine the type of each stream
        if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
        {
            // codec_type == AVMEDIA_TYPE_VIDEO means this is the video stream; record its index
            videoindex = i;
            break;
        }
    }
    // If videoindex is still -1, no video stream was found; check the media file
    if (videoindex == -1)
    {
        cout << "Did not find a video stream.\n" << endl;
        return -1;
    }
    ......
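As a side note, newer FFmpeg versions also provide av_find_best_stream(), which can replace the manual loop above. A minimal sketch of that alternative (not part of this player's code; in FFmpeg 5.x the decoder output parameter becomes const AVCodec **):

    // Let FFmpeg choose the "best" video stream; returns the stream index
    // or a negative AVERROR code if no video stream exists.
    videoindex = av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (videoindex < 0)
    {
        cout << "Did not find a video stream.\n" << endl;
        return -1;
    }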

codec_type indicates the type of codec for a stream. Common values are:

AVMEDIA_TYPE_VIDEO: video codec
AVMEDIA_TYPE_AUDIO: audio codec
AVMEDIA_TYPE_SUBTITLE: subtitle codec
AVMEDIA_TYPE_DATA: data codec
AVMEDIA_TYPE_ATTACHMENT: attachment codec
AVMEDIA_TYPE_UNKNOWN: unknown type

Video Decoder

Based on the video stream information, find and open the decoder. The code is as follows:

......
    // Access the codec context (encoding parameters) of the video stream
    pCodecCtx = pFormatCtx->streams[videoindex]->codec;
    // Find the decoder by its codec ID (codec_id)
    pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
    if (pCodec == NULL)
    {
        cout << "Codec not found.\n" << endl;
        return -1;
    }
    // Open the codec
    if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0)
    {
        cout << "Could not open codec.\n" << endl;
        return -1;
    }
    ......
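It is worth noting that AVStream::codec (used above) is deprecated in FFmpeg 4.x and removed in 5.x; the replacement is AVStream::codecpar together with avcodec_parameters_to_context. A rough sketch of how the decoder setup would look with the newer API (my adaptation, not the article's original code):

    // Build a fresh codec context from the stream's codec parameters
    // instead of reading the deprecated AVStream::codec field.
    AVCodecParameters *par = pFormatCtx->streams[videoindex]->codecpar;
    const AVCodec *pCodec = avcodec_find_decoder(par->codec_id);
    AVCodecContext *pCodecCtx = pCodec ? avcodec_alloc_context3(pCodec) : NULL;
    if (!pCodecCtx ||
        avcodec_parameters_to_context(pCodecCtx, par) < 0 ||
        avcodec_open2(pCodecCtx, pCodec, NULL) < 0)
    {
        cout << "Could not open codec.\n" << endl;
        return -1;
    }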
Image Format Conversion
......
    // Allocate the AVFrame structures
    pFrame = av_frame_alloc();
    pFrameYUV = av_frame_alloc();
    // Allocate the buffer that will hold the converted image data
    out_buffer = (unsigned char *)av_malloc(av_image_get_buffer_size(AV_PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height, 1));
    // Fill pFrameYUV->data so it can be used for the later image processing
    av_image_fill_arrays(pFrameYUV->data, pFrameYUV->linesize, out_buffer,
                         AV_PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height, 1);
    cout << "------------------File Information-----------------\n" << endl;
    // Dump the stream information
    av_dump_format(pFormatCtx, 0, filepath, 0);
    cout << "--------------------------------------------------\n" << endl;
    // Create the image conversion context that converts the source pixel format to YUV420P
    img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
                                     pCodecCtx->width, pCodecCtx->height, AV_PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);
    ......

The av_image_get_buffer_size function computes the buffer size needed for a given set of image parameters, based on the pixel format and the image width and height. The pixel format used here is YUV420P, one of the most commonly used pixel formats, especially in video encoding and decoding. av_image_fill_arrays then wires the buffer into pFrameYUV->data. No actual image data has been written at this point; the memory is only being prepared so it can later hold the format-converted frames.
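For YUV420P with an alignment of 1, the required size works out to width × height × 3/2: one full-resolution Y plane plus quarter-resolution U and V planes. A quick check (the 1920×1080 resolution is just an illustrative value):

    // Y plane: w*h bytes; U and V planes: (w/2)*(h/2) bytes each, so w*h*3/2 in total.
    int w = 1920, h = 1080;
    int size = av_image_get_buffer_size(AV_PIX_FMT_YUV420P, w, h, 1);
    // size == 1920 * 1080 * 3 / 2 == 3110400 bytes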

Playback Implementation

SDL Initialization and Configuration

Create and configure the SDL window, renderer, texture, and rect. The code is as follows:

......
    // Initialize the SDL library
    if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER))
    {
        cout << "Could not initialize SDL - " << SDL_GetError() << "\n" << endl;
        return -1;
    }
    // Make the window the same size as the video
    screen_w = pCodecCtx->width;
    screen_h = pCodecCtx->height;
    // Create the SDL window
    screen = SDL_CreateWindow("video Player", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, screen_w, screen_h, SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE);
    if (!screen)
    {
        cout << "SDL: could not create window - exiting:\n" << SDL_GetError() << "\n" << endl;
        return -1;
    }
    // Create the renderer
    sdlRenderer = SDL_CreateRenderer(screen, -1, 0);
    // Create the texture; SDL_PIXELFORMAT_IYUV has the same planar layout as FFmpeg's YUV420P
    sdlTexture = SDL_CreateTexture(sdlRenderer, SDL_PIXELFORMAT_IYUV, SDL_TEXTUREACCESS_STREAMING, pCodecCtx->width, pCodecCtx->height);
    // Initialize the rect
    sdlRect.x = 0;
    sdlRect.y = 0;
    sdlRect.w = screen_w;
    sdlRect.h = screen_h;
    // Allocate an AVPacket structure for packet
    packet = (AVPacket *)av_malloc(sizeof(AVPacket));
    ......

Create a new thread to monitor and drive the activity of the SDL window. The code is as follows:

int sfp_refresh_thread(void *opaque)
{
    // Initialize the thread state
    thread_exit = 0;
    thread_pause = 0;
    while (!thread_exit)
    {
        if (!thread_pause)
        {
            SDL_Event event;
            // Set the event type to SFM_REFRESH_EVENT
            event.type = SFM_REFRESH_EVENT;
            // Push the event onto the event queue
            SDL_PushEvent(&event);
        }
        SDL_Delay(40);
    }
    thread_exit = 0;
    thread_pause = 0;
    SDL_Event event;
    // Set the event type to SFM_BREAK_EVENT
    event.type = SFM_BREAK_EVENT;
    SDL_PushEvent(&event);
    return 0;
}

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    ......
    // Create the thread, running the custom sfp_refresh_thread function
    video_tid = SDL_CreateThread(sfp_refresh_thread, NULL, NULL);
    ......
    return 0;
}
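The SDL_Delay(40) means a refresh event is pushed roughly every 40 ms, i.e. about 25 frames per second, regardless of the actual frame rate of the video. If you want the refresh rate to follow the source material, one option is to derive the delay from the stream's average frame rate; a small sketch (my addition, not in the original code):

    // Derive the per-frame delay in milliseconds from the stream's average
    // frame rate, falling back to 40 ms (25 fps) when the rate is unknown.
    AVRational fr = pFormatCtx->streams[videoindex]->avg_frame_rate;
    int delay_ms = (fr.num > 0 && fr.den > 0) ? (1000 * fr.den) / fr.num : 40;
    SDL_Delay(delay_ms);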
Video Playback

Read the video stream data, decode and convert it, and fill the resulting image into the SDL window to play the video. The code is as follows:

......
    for (;;)
    {
        // Wait for a window event
        SDL_WaitEvent(&event);
        // Refresh event: decode and display the next frame
        if (event.type == SFM_REFRESH_EVENT)
        {
            while (1)
            {
                // If no packet can be read (end of file), set thread_exit to 1 to end playback
                if (av_read_frame(pFormatCtx, packet) < 0)
                    thread_exit = 1;
                // Only keep packets belonging to the video stream
                if (packet->stream_index == videoindex)
                    break;
            }
            // Decode the video frame
            ret = avcodec_decode_video2(pCodecCtx, pFrame, &got_picture, packet);
            if (ret < 0)
            {
                cout << "Decode Error.\n" << endl;
                return -1;
            }
            if (got_picture)
            {
                // Convert the raw image in pFrame through img_convert_ctx and store it in pFrameYUV for display in SDL
                sws_scale(img_convert_ctx, (const unsigned char *const *)pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameYUV->data, pFrameYUV->linesize);
                // Update the texture with pFrameYUV->data[0]
                SDL_UpdateTexture(sdlTexture, NULL, pFrameYUV->data[0], pFrameYUV->linesize[0]);
                // Clear the rendering target
                SDL_RenderClear(sdlRenderer);
                // Copy the texture into sdlRect
                SDL_RenderCopy(sdlRenderer, sdlTexture, NULL, &sdlRect);
                // Present the updated window
                SDL_RenderPresent(sdlRenderer);
            }
            // Release the packet's data
            av_packet_unref(packet);
        }
        // Pause
        else if (event.type == SDL_KEYDOWN)
        {
            // Toggle pause when the space key is pressed
            if (event.key.keysym.sym == SDLK_SPACE)
                thread_pause = !thread_pause;
        }
        // Window closed
        else if (event.type == SDL_QUIT)
        {
            thread_exit = 1;
        }
        // End of playback
        else if (event.type == SFM_BREAK_EVENT)
        {
            break;
        }
    }
    ......
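avcodec_decode_video2 is likewise deprecated in FFmpeg 4.x and removed in 5.x; the replacement is the avcodec_send_packet / avcodec_receive_frame pair. A rough sketch of just the decode-and-display step with that API (my adaptation, not the article's code):

            // Send the compressed packet to the decoder, then drain every
            // frame it produces (one packet may yield zero or more frames).
            if (avcodec_send_packet(pCodecCtx, packet) == 0)
            {
                while (avcodec_receive_frame(pCodecCtx, pFrame) == 0)
                {
                    sws_scale(img_convert_ctx, (const unsigned char *const *)pFrame->data, pFrame->linesize,
                              0, pCodecCtx->height, pFrameYUV->data, pFrameYUV->linesize);
                    SDL_UpdateTexture(sdlTexture, NULL, pFrameYUV->data[0], pFrameYUV->linesize[0]);
                    SDL_RenderClear(sdlRenderer);
                    SDL_RenderCopy(sdlRenderer, sdlTexture, NULL, &sdlRect);
                    SDL_RenderPresent(sdlRenderer);
                }
            }
            av_packet_unref(packet);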
Memory Cleanup

After all of the above is done, the last step is to release the allocated resources. The code is as follows:

{
    ......
    sws_freeContext(img_convert_ctx);
    SDL_Quit();
    av_frame_free(&pFrameYUV);
    av_frame_free(&pFrame);
    avcodec_close(pCodecCtx);
    avformat_close_input(&pFormatCtx);
    LocalFree(argv);
    delete[] buffer;
    return 0;
}
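Strictly speaking, a few allocations are never released here: out_buffer and packet both came from av_malloc, and the SDL texture, renderer, and window are never destroyed. For a short demo that exits immediately afterwards this hardly matters, but a more thorough cleanup could destroy the SDL objects before SDL_Quit and free the remaining buffers, for example:

    // Free the remaining FFmpeg allocations and tear down the SDL objects.
    av_free(out_buffer);
    av_free(packet);
    SDL_DestroyTexture(sdlTexture);
    SDL_DestroyRenderer(sdlRenderer);
    SDL_DestroyWindow(screen);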
Complete Code

simple_video_player.cpp

#include <windows.h>
#include <iostream>
extern "C"
{
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
#include "libavutil/imgutils.h"
#include "SDL2/SDL.h"
}

using namespace std;

#define SFM_REFRESH_EVENT (SDL_USEREVENT + 1)
#define SFM_BREAK_EVENT (SDL_USEREVENT + 2)

int thread_exit = 0;
int thread_pause = 0;

int sfp_refresh_thread(void *opaque)
{
    thread_exit = 0;
    thread_pause = 0;
    while (!thread_exit)
    {
        if (!thread_pause)
        {
            SDL_Event event;
            event.type = SFM_REFRESH_EVENT;
            SDL_PushEvent(&event);
        }
        SDL_Delay(40);
    }
    thread_exit = 0;
    thread_pause = 0;
    SDL_Event event;
    event.type = SFM_BREAK_EVENT;
    SDL_PushEvent(&event);
    return 0;
}

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    AVFormatContext *pFormatCtx;
    int i, videoindex;
    AVCodecContext *pCodecCtx;
    AVCodec *pCodec;
    AVFrame *pFrame, *pFrameYUV;
    unsigned char *out_buffer;
    AVPacket *packet;
    int ret, got_picture;
    int screen_w, screen_h;
    SDL_Window *screen;
    SDL_Renderer *sdlRenderer;
    SDL_Texture *sdlTexture;
    SDL_Rect sdlRect;
    SDL_Thread *video_tid;
    SDL_Event event;
    struct SwsContext *img_convert_ctx;

    LPWSTR lpWideCmdLine = GetCommandLineW();
    int argc;
    char *filepath;
    char *buffer = NULL;
    LPWSTR *argv = CommandLineToArgvW(lpWideCmdLine, &argc);
    if (argc > 1)
    {
        int bufferSize = WideCharToMultiByte(CP_UTF8, 0, argv[1], -1, NULL, 0, NULL, NULL);
        buffer = new char[bufferSize];
        WideCharToMultiByte(CP_UTF8, 0, argv[1], -1, buffer, bufferSize, NULL, NULL);
        filepath = buffer;
        std::cout << "argv[1]: " << buffer << std::endl;
    }
    else
    {
        cout << "Please provide the path to the video file to play.\n" << endl;
        return -1;
    }

    // av_register_all();
    // avformat_network_init();
    pFormatCtx = avformat_alloc_context();
    if (avformat_open_input(&pFormatCtx, filepath, NULL, NULL) != 0)
    {
        cout << "Could not open input stream: " << filepath << "\n" << endl;
        return -1;
    }
    if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
    {
        cout << "Could not find stream information.\n" << endl;
        return -1;
    }
    videoindex = -1;
    for (i = 0; i < pFormatCtx->nb_streams; i++)
    {
        if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
        {
            videoindex = i;
            break;
        }
    }
    if (videoindex == -1)
    {
        cout << "Did not find a video stream.\n" << endl;
        return -1;
    }
    pCodecCtx = pFormatCtx->streams[videoindex]->codec;
    pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
    if (pCodec == NULL)
    {
        cout << "Codec not found.\n" << endl;
        return -1;
    }
    if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0)
    {
        cout << "Could not open codec.\n" << endl;
        return -1;
    }
    pFrame = av_frame_alloc();
    pFrameYUV = av_frame_alloc();
    out_buffer = (unsigned char *)av_malloc(av_image_get_buffer_size(AV_PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height, 1));
    av_image_fill_arrays(pFrameYUV->data, pFrameYUV->linesize, out_buffer,
                         AV_PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height, 1);
    cout << "------------------File Information-----------------\n" << endl;
    av_dump_format(pFormatCtx, 0, filepath, 0);
    cout << "--------------------------------------------------\n" << endl;
    img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
                                     pCodecCtx->width, pCodecCtx->height, AV_PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);
    if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER))
    {
        cout << "Could not initialize SDL - " << SDL_GetError() << "\n" << endl;
        return -1;
    }
    screen_w = pCodecCtx->width;
    screen_h = pCodecCtx->height;
    screen = SDL_CreateWindow("video Player", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, screen_w, screen_h, SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE);
    if (!screen)
    {
        cout << "SDL: could not create window - exiting:\n" << SDL_GetError() << "\n" << endl;
        return -1;
    }
    sdlRenderer = SDL_CreateRenderer(screen, -1, 0);
    sdlTexture = SDL_CreateTexture(sdlRenderer, SDL_PIXELFORMAT_IYUV, SDL_TEXTUREACCESS_STREAMING, pCodecCtx->width, pCodecCtx->height);
    sdlRect.x = 0;
    sdlRect.y = 0;
    sdlRect.w = screen_w;
    sdlRect.h = screen_h;
    packet = (AVPacket *)av_malloc(sizeof(AVPacket));
    video_tid = SDL_CreateThread(sfp_refresh_thread, NULL, NULL);
    for (;;)
    {
        SDL_WaitEvent(&event);
        if (event.type == SFM_REFRESH_EVENT)
        {
            while (1)
            {
                if (av_read_frame(pFormatCtx, packet) < 0)
                    thread_exit = 1;
                if (packet->stream_index == videoindex)
                    break;
            }
            ret = avcodec_decode_video2(pCodecCtx, pFrame, &got_picture, packet);
            if (ret < 0)
            {
                cout << "Decode Error.\n" << endl;
                return -1;
            }
            if (got_picture)
            {
                sws_scale(img_convert_ctx, (const unsigned char *const *)pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameYUV->data, pFrameYUV->linesize);
                SDL_UpdateTexture(sdlTexture, NULL, pFrameYUV->data[0], pFrameYUV->linesize[0]);
                SDL_RenderClear(sdlRenderer);
                SDL_RenderCopy(sdlRenderer, sdlTexture, NULL, &sdlRect);
                SDL_RenderPresent(sdlRenderer);
            }
            av_packet_unref(packet);
        }
        else if (event.type == SDL_KEYDOWN)
        {
            if (event.key.keysym.sym == SDLK_SPACE)
                thread_pause = !thread_pause;
        }
        else if (event.type == SDL_QUIT)
        {
            thread_exit = 1;
        }
        else if (event.type == SFM_BREAK_EVENT)
        {
            break;
        }
    }
    sws_freeContext(img_convert_ctx);
    SDL_Quit();
    av_frame_free(&pFrameYUV);
    av_frame_free(&pFrame);
    avcodec_close(pCodecCtx);
    avformat_close_input(&pFormatCtx);
    LocalFree(argv);
    delete[] buffer;
    return 0;
}
