OpenCV: processing every frame

I want to write a cross-platform application that uses OpenCV for video capture. In all the examples I have found, frames from the camera are processed by calling a grab function and then waiting for a while. I want to process every frame in the sequence: I would like to define my own callback function that gets executed whenever a new frame is ready to be processed (similar to DirectShow on Windows, where you define your own filter and insert it into the graph for this purpose).

So the question is: how can I do this?

According to the code below, all callbacks must follow this definition:

IplImage* custom_callback(IplImage* frame); 

This signature means the callback is going to be executed on every frame retrieved by the system. In my example, make_it_gray() allocates a new image to hold the result of the grayscale conversion and returns it, which means you are responsible for releasing this frame later in your code. I've added comments about it in the code.

Note that if your callback involves a lot of processing, the system might skip a few frames from the camera. Consider the suggestions from Paul R and diverscuba23.

#include <stdio.h>

#include "cv.h"
#include "highgui.h"

typedef IplImage* (*callback_prototype)(IplImage*);

/*
 * make_it_gray: our custom callback to convert a colored frame to its grayscale version.
 * Remember that you must deallocate the returned IplImage* yourself after calling this function.
 */
IplImage* make_it_gray(IplImage* frame)
{
    // Allocate space for a new image
    IplImage* gray_frame = 0;
    gray_frame = cvCreateImage(cvSize(frame->width, frame->height), frame->depth, 1);
    if (!gray_frame)
    {
        fprintf(stderr, "!!! cvCreateImage failed!\n");
        return NULL;
    }

    cvCvtColor(frame, gray_frame, CV_RGB2GRAY);
    return gray_frame;
}

/*
 * process_video: retrieves frames from the camera and executes a callback to do individual frame processing.
 * Keep in mind that if your callback takes too much time to execute, you might lose a few frames from
 * the camera.
 */
void process_video(callback_prototype custom_cb)
{
    // Initialize camera
    CvCapture *capture = 0;
    capture = cvCaptureFromCAM(-1);
    if (!capture)
    {
        fprintf(stderr, "!!! Cannot initialize webcam!\n");
        return;
    }

    // Create a window for the video
    cvNamedWindow("result", CV_WINDOW_AUTOSIZE);

    IplImage* frame = 0;
    char key = 0;

    while (key != 27) // ESC
    {
        frame = cvQueryFrame(capture);
        if (!frame)
        {
            fprintf(stderr, "!!! cvQueryFrame failed!\n");
            break;
        }

        // Execute callback on each frame
        IplImage* processed_frame = (*custom_cb)(frame);

        // Display processed frame
        cvShowImage("result", processed_frame);

        // Release resources
        cvReleaseImage(&processed_frame);

        // Exit when user presses ESC
        key = cvWaitKey(10);
    }

    // Free memory
    cvDestroyWindow("result");
    cvReleaseCapture(&capture);
}

int main(int argc, char **argv)
{
    process_video(make_it_gray);
    return 0;
}
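To make the role of callback_prototype concrete: any function with that exact signature can be handed to process_video() without changing anything else. Below is a hypothetical second callback (not part of the original answer) that runs Canny edge detection instead of a grayscale conversion; the thresholds are arbitrary example values:

/*
 * detect_edges: hypothetical callback with the same prototype as make_it_gray().
 * Converts the BGR frame to grayscale and runs Canny edge detection on it.
 * As with make_it_gray(), the caller must release the returned IplImage*.
 */
IplImage* detect_edges(IplImage* frame)
{
    IplImage* gray  = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
    IplImage* edges = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
    if (!gray || !edges)
    {
        fprintf(stderr, "!!! cvCreateImage failed!\n");
        if (gray)  cvReleaseImage(&gray);
        if (edges) cvReleaseImage(&edges);
        return NULL;
    }

    cvCvtColor(frame, gray, CV_BGR2GRAY); // cvCanny needs a single-channel 8-bit input
    cvCanny(gray, edges, 50, 150, 3);     // example thresholds, tune for your camera

    cvReleaseImage(&gray);                // only the result goes back to the caller
    return edges;
}

/* Switching the processing is then a one-line change in main():
 *     process_video(detect_edges);
 */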

EDIT:

I changed the code above so that it prints the current framerate and performs the grayscale conversion manually. These are small tweaks I made to the code for educational purposes, so we know how to perform operations at the pixel level.

#include <stdio.h>
#include <math.h>
#include <time.h>

#include "cv.h"
#include "highgui.h"

typedef IplImage* (*callback_prototype)(IplImage*);

/*
 * make_it_gray: our custom callback to convert a colored frame to its grayscale version.
 * Remember that you must deallocate the returned IplImage* yourself after calling this function.
 */
IplImage* make_it_gray(IplImage* frame)
{
    // New IplImage* to store the processed image
    IplImage* gray_frame = 0;

    // Manual grayscale conversion: ugly, but shows how to access each channel of the pixels individually
    gray_frame = cvCreateImage(cvSize(frame->width, frame->height), frame->depth, frame->nChannels);
    if (!gray_frame)
    {
        fprintf(stderr, "!!! cvCreateImage failed!\n");
        return NULL;
    }

    // NOTE: this simple loop assumes there is no row padding (widthStep == width * nChannels)
    for (int i = 0; i < frame->width * frame->height * frame->nChannels; i += frame->nChannels)
    {
        // imageData is a (signed) char*, so cast to unsigned char before averaging
        int gray = ((unsigned char)frame->imageData[i] +
                    (unsigned char)frame->imageData[i+1] +
                    (unsigned char)frame->imageData[i+2]) / 3;

        gray_frame->imageData[i]   = (char)gray; // B
        gray_frame->imageData[i+1] = (char)gray; // G
        gray_frame->imageData[i+2] = (char)gray; // R
    }

    return gray_frame;
}

/*
 * process_video: retrieves frames from the camera and executes a callback to do individual frame processing.
 * Keep in mind that if your callback takes too much time to execute, you might lose a few frames from
 * the camera.
 */
void process_video(callback_prototype custom_cb)
{
    // Initialize camera
    CvCapture *capture = 0;
    capture = cvCaptureFromCAM(-1);
    if (!capture)
    {
        fprintf(stderr, "!!! Cannot initialize webcam!\n");
        return;
    }

    // Create a window for the video
    cvNamedWindow("result", CV_WINDOW_AUTOSIZE);

    double elapsed = 0;
    int last_time = 0;
    int num_frames = 0;

    IplImage* frame = 0;
    char key = 0;

    while (key != 27) // ESC
    {
        frame = cvQueryFrame(capture);
        if (!frame)
        {
            fprintf(stderr, "!!! cvQueryFrame failed!\n");
            break;
        }

        // Calculating framerate
        num_frames++;
        elapsed = clock() - last_time;
        int fps = 0;
        if (elapsed > 1)
        {
            fps = floor(num_frames / (float)(1 + (float)elapsed / (float)CLOCKS_PER_SEC));
            num_frames = 0;
            last_time = clock() + 1 * CLOCKS_PER_SEC;
            printf("FPS: %d\n", fps);
        }

        // Execute callback on each frame
        IplImage* processed_frame = (*custom_cb)(frame);

        // Display processed frame
        cvShowImage("result", processed_frame);

        // Release resources
        cvReleaseImage(&processed_frame);

        // Exit when user presses ESC
        key = cvWaitKey(10);
    }

    // Free memory
    cvDestroyWindow("result");
    cvReleaseCapture(&capture);
}

int main(int argc, char **argv)
{
    process_video(make_it_gray);
    return 0;
}
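For reference only (not part of the original answer): the same per-frame-callback idea can be sketched with the OpenCV C++ API, assuming OpenCV 2.4 or newer is available. With cv::VideoCapture and cv::Mat the memory management becomes automatic, so the callback can simply return the processed frame by value:

#include <opencv2/opencv.hpp>

// Callback prototype for the C++ sketch: takes a BGR frame, returns the processed frame.
typedef cv::Mat (*frame_callback)(const cv::Mat&);

cv::Mat make_it_gray_cpp(const cv::Mat& frame)
{
    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    return gray; // cv::Mat is reference-counted, no manual release needed
}

int main()
{
    cv::VideoCapture capture(0); // open the default camera
    if (!capture.isOpened())
        return 1;

    frame_callback custom_cb = make_it_gray_cpp;

    cv::Mat frame;
    while (capture.read(frame))        // grabs and decodes the next frame
    {
        cv::Mat processed = custom_cb(frame);
        cv::imshow("result", processed);

        if (cv::waitKey(10) == 27)     // ESC
            break;
    }
    return 0;
}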

A quick thought would be to have two threads: the first thread is responsible for grabbing the frames and notifying the second thread when they are available (placing them in a processing queue), while the second thread does all the processing in an event-loop fashion.

See boost::thread and boost::signals2, as those two together should provide most of the framework (except the queue) for what I described above.
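To make the two-thread idea concrete, here is a minimal sketch (not from the original answers) of the grab/process split. It uses std::thread, std::mutex and std::condition_variable from C++11 instead of boost::thread, and a plain std::queue with a condition variable in place of boost::signals2 for the notification; with Boost the overall shape would be the same. It reuses make_it_gray() from the listings above together with the legacy C capture API:

#include <queue>
#include <mutex>
#include <thread>
#include <atomic>
#include <chrono>
#include <condition_variable>

#include "cv.h"
#include "highgui.h"

IplImage* make_it_gray(IplImage* frame); // defined in the listing above

static std::queue<IplImage*>   frame_queue;   // frames waiting to be processed
static std::mutex              queue_mutex;
static std::condition_variable queue_cv;
static std::atomic<bool>       done(false);

// Grabbing thread: copies every frame coming from the camera into the queue.
void grab_loop(CvCapture* capture)
{
    while (!done)
    {
        IplImage* frame = cvQueryFrame(capture);
        if (!frame)
            break;

        // cvQueryFrame returns an internal buffer, so clone it before handing it over
        IplImage* copy = cvCloneImage(frame);
        {
            std::lock_guard<std::mutex> lock(queue_mutex);
            frame_queue.push(copy);
        }
        queue_cv.notify_one();
    }
    done = true;
    queue_cv.notify_one();
}

// Processing thread: waits for frames and runs the per-frame processing on each one.
void process_loop()
{
    while (true)
    {
        IplImage* frame = 0;
        {
            std::unique_lock<std::mutex> lock(queue_mutex);
            queue_cv.wait(lock, [] { return done || !frame_queue.empty(); });
            if (frame_queue.empty())
                break;                      // done and nothing left to process
            frame = frame_queue.front();
            frame_queue.pop();
        }

        IplImage* processed = make_it_gray(frame);  // any callback_prototype fits here
        // ... display or store 'processed' ...
        cvReleaseImage(&processed);
        cvReleaseImage(&frame);
    }
}

int main()
{
    CvCapture* capture = cvCaptureFromCAM(-1);
    if (!capture)
        return 1;

    std::thread grabber(grab_loop, capture);
    std::thread processor(process_loop);

    // For this sketch, simply run for a few seconds, then shut everything down
    std::this_thread::sleep_for(std::chrono::seconds(5));
    done = true;
    queue_cv.notify_all();

    grabber.join();
    processor.join();
    cvReleaseCapture(&capture);
    return 0;
}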