Steps to Install and Configure OpenNI and OpenCV on Ubuntu


It took me the better part of a day, but I finally got OpenNI and OpenCV configured on Linux. There is very little online about setting up OpenNI on Linux, and finding out how to actually develop with it is like looking for a needle in a haystack; even the manual barely mentions it. After digging through quite a few resources I finally got it working, so without further ado, here is a summary to share with everyone.

I. OpenNI

1. Software download:

(1) OpenNI: go to the OpenNI downloads page.

Choose "OpenNI Binaries" -> "Unstable" -> the "...for Ubuntu..." package and click "Download".

After the download finishes, extract it, cd into the extracted directory, and run the install script: $ ./install.sh (I don't remember whether it needs sudo; try it and see).

(2) SensorKinect:

Command: $ git clone the SensorKinect repository (it is hosted on GitHub as avin2/SensorKinect).

If git is not installed, sudo apt-get install it first.

The clone takes a while. When it finishes, a SensorKinect folder appears in the current directory. cd into SensorKinect/Platform/Linux/CreateRedist and run the build script there (RedistMaker); a Redist folder then appears in the parent directory Linux. Online guides say to go into that directory and run the install script, but in fact you have to go one level deeper before you find install.sh. It also seems to need root privileges, and I could not get it to work even after $ sudo su. In the end I found that inside the Redist folder there is a Final folder containing an archive, Sensor-Bin-Linux-x86-v5.0.5.1.tar.bz2. I simply copied it out, extracted it, went into the result (…/SensorKinect/Platform/Linux/CreateRedist/Sensor-Bin-Linux-x86-v5.0.5.1/), and ran the install script there, which worked. By the way, during these steps the script may fail with a "command not found"-style error; in that case right-click install.sh -> Properties -> Permissions and check "Allow executing file as program" (or run chmod +x install.sh), and it will work.
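For reference, the sequence that finally worked for me condenses roughly to the following (a sketch based on my version of the package; the archive name and paths will differ for other versions):

cd SensorKinect/Platform/Linux/CreateRedist
./RedistMaker                                    # a Redist folder appears one level up
cp ../Redist/Final/Sensor-Bin-Linux-x86-v5.0.5.1.tar.bz2 .
tar xjf Sensor-Bin-Linux-x86-v5.0.5.1.tar.bz2
cd Sensor-Bin-Linux-x86-v5.0.5.1
sudo ./install.sh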

At this point you can test things in the OpenNI-Bin-XXXX/Samples/Bin/x86-Release directory by running one of the sample programs (for example NiViewer). It may complain about missing libraries; try installing them: $ sudo apt-get install libusb-1.0-0-dev freeglut3-dev. After that it should run and you can finally see the long-awaited picture. If instead you get "Failed to set USB interface!" or "Open failed: The network connection has been closed!", run this in a terminal:

  $sudo rmmod gspca_kinect

This is because Ubuntu may ship its own Kinect driver, gspca_kinect, which conflicts with the SensorKinect driver. It seems you have to run this command again after every reboot.
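If you do not want to type that after every reboot, a common alternative (which I have not tested myself) is to blacklist the module so it never loads in the first place:

# keep Ubuntu's built-in Kinect driver from loading at boot; the file name is arbitrary
echo "blacklist gspca_kinect" | sudo tee /etc/modprobe.d/blacklist-kinect.conf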

This part was mainly based on a tutorial I found online; what it describes may not match exactly what you see, so experiment as you go.

(3) NITE:

Choose "OpenNI Compliant Middleware Binaries" -> "Unstable" -> the "...Ubuntu..." package and download it.

After the download finishes, extract it, cd into the directory, and run the install script ($ sudo ./install.sh); that's all.

2. Development environment setup

I chose eclipse-cdt for development; the little material available online uses Code::Blocks, but the two are essentially the same. Here I'll use Eclipse as the example.

Create an empty or hello-world project, e.g. kinectOpenNI. In the Project Explorer on the left, right-click kinectOpenNI, click Properties, and in the dialog choose C/C++ Build -> Settings -> GCC C++ Compiler (GCC C Compiler if you write in C) -> Directories. In Include paths (-I) on the right, click the green plus and add the two paths /usr/include/ni and /usr/include/nite. Then choose GCC C++ Linker -> Libraries and, under Libraries (-l), add OpenNI, glut and XnVNite. Note that XnVNite may carry a version number: look in your /usr/lib for a file named something like libXnVNite_XXXX.so; mine is libXnVNite_1_5_0.so, so I entered XnVNite_1_5_0. Adapt this to whatever you have; if the name is wrong the linker will complain that it cannot find the library. Since these libraries all live under the system /usr/lib/ directory, there is no need to add anything to Library search path (-L).
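For reference, the Eclipse settings above map onto an ordinary g++ command line roughly like this (a sketch; main.cpp and the XnVNite version suffix are placeholders you have to adapt):

g++ main.cpp -o kinectOpenNI -I/usr/include/ni -I/usr/include/nite -lOpenNI -lglut -lXnVNite_1_5_0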

Your project should now compile. Try the sample below (change the path of the xml file inside to match your setup; the sample is taken straight from the installation package; and of course remember to plug in your Kinect):


//---------------------------------------------------------------------------
// Includes
//---------------------------------------------------------------------------
#include <XnOS.h>
#if (XN_PLATFORM==XN_PLATFORM_MACOSX)
#include <GLUT/glut.h>
#else
#include <GL/glut.h>
#endif
#include <XnCppWrapper.h>
#include <stdio.h>  // for printf; any standard I/O header works here

using namespace xn;

//---------------------------------------------------------------------------
// Defines
//---------------------------------------------------------------------------
#define SAMPLE_XML_PATH "/home/iotuyrfviloh/Softwares/OpenNI-Bin-Dev-Linux-x86-v1.4.0.2/Samples/Config/SamplesConfig.xml"

#define GL_WIN_SIZE_X 1280
#define GL_WIN_SIZE_Y 1024

#define DISPLAY_MODE_OVERLAY 1
#define DISPLAY_MODE_DEPTH 2
#define DISPLAY_MODE_IMAGE 3
#define DEFAULT_DISPLAY_MODE DISPLAY_MODE_DEPTH

#define MAX_DEPTH 10000

//---------------------------------------------------------------------------
// Globals
//---------------------------------------------------------------------------
float g_pDepthHist[MAX_DEPTH];
XnRGB24Pixel* g_pTexMap=NULL;
unsigned int g_nTexMapX=0;
unsigned int g_nTexMapY=0;
unsigned int g_nViewState=DEFAULT_DISPLAY_MODE;

Context g_context;
ScriptNode g_scriptNode;
DepthGenerator g_depth;
ImageGenerator g_image;
DepthMetaData g_depthMD;
ImageMetaData g_imageMD;

//---------------------------------------------------------------------------
// Code
//---------------------------------------------------------------------------
void glutIdle (void)
{
    // Display the frame
    glutPostRedisplay();
}

void glutDisplay (void)
{
    XnStatus rc=XN_STATUS_OK;

    // Read a new frame
    rc=g_context.WaitAnyUpdateAll();
    if (rc !=XN_STATUS_OK)
    {
        printf("Read failed: %s\n", xnGetStatusString(rc));
        return;
    }

    g_depth.GetMetaData(g_depthMD);
    g_image.GetMetaData(g_imageMD);

    const XnDepthPixel* pDepth=g_depthMD.Data();
    const XnUInt8* pImage=g_imageMD.Data();

    unsigned int nImageScale=GL_WIN_SIZE_X / g_depthMD.FullXRes();

    // Copied from SimpleViewer
    // Clear the OpenGL buffers
    glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Setup the OpenGL viewpoint
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0, GL_WIN_SIZE_X, GL_WIN_SIZE_Y, 0, -1.0, 1.0);

    // Calculate the accumulative histogram (the yellow display...)
    xnOSMemSet(g_pDepthHist, 0, MAX_DEPTH*sizeof(float));

    unsigned int nNumberOfPoints=0;
    for (XnUInt y=0; y < g_depthMD.YRes(); ++y)
    {
        for (XnUInt x=0; x < g_depthMD.XRes(); ++x, ++pDepth)
        {
            if (*pDepth !=0)
            {
                g_pDepthHist[*pDepth]++;
                nNumberOfPoints++;
            }
        }
    }
    for (int nIndex=1; nIndex < MAX_DEPTH; nIndex++)
    {
        g_pDepthHist[nIndex] +=g_pDepthHist[nIndex-1];
    }
    if (nNumberOfPoints)
    {
        for (int nIndex=1; nIndex < MAX_DEPTH; nIndex++)
        {
            g_pDepthHist[nIndex]=(unsigned int)(256 * (1.0f - (g_pDepthHist[nIndex] / nNumberOfPoints)));
        }
    }

    xnOSMemSet(g_pTexMap, 0, g_nTexMapX*g_nTexMapY*sizeof(XnRGB24Pixel));

    // check if we need to draw image frame to texture
    if (g_nViewState==DISPLAY_MODE_OVERLAY ||
        g_nViewState==DISPLAY_MODE_IMAGE)
    {
        const XnRGB24Pixel* pImageRow=g_imageMD.RGB24Data();
        XnRGB24Pixel* pTexRow=g_pTexMap + g_imageMD.YOffset() * g_nTexMapX;

        for (XnUInt y=0; y < g_imageMD.YRes(); ++y)
        {
            const XnRGB24Pixel* pImage=pImageRow;
            XnRGB24Pixel* pTex=pTexRow + g_imageMD.XOffset();

            for (XnUInt x=0; x < g_imageMD.XRes(); ++x, ++pImage, ++pTex)
            {
                *pTex=*pImage;
            }

            pImageRow +=g_imageMD.XRes();
            pTexRow +=g_nTexMapX;
        }
    }

    // check if we need to draw depth frame to texture
    if (g_nViewState==DISPLAY_MODE_OVERLAY ||
        g_nViewState==DISPLAY_MODE_DEPTH)
    {
        const XnDepthPixel* pDepthRow=g_depthMD.Data();
        XnRGB24Pixel* pTexRow=g_pTexMap + g_depthMD.YOffset() * g_nTexMapX;

        for (XnUInt y=0; y < g_depthMD.YRes(); ++y)
        {
            const XnDepthPixel* pDepth=pDepthRow;
            XnRGB24Pixel* pTex=pTexRow + g_depthMD.XOffset();

            for (XnUInt x=0; x < g_depthMD.XRes(); ++x, ++pDepth, ++pTex)
            {
                if (*pDepth !=0)
                {
                    int nHistValue=g_pDepthHist[*pDepth];
                    pTex->nRed=nHistValue;
                    pTex->nGreen=nHistValue;
                    pTex->nBlue=0;
                }
            }

            pDepthRow +=g_depthMD.XRes();
            pTexRow +=g_nTexMapX;
        }
    }

    // Create the OpenGL texture map
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, g_nTexMapX, g_nTexMapY, 0, GL_RGB, GL_UNSIGNED_BYTE, g_pTexMap);

    // Display the OpenGL texture map
    glColor4f(1,1,1,1);

    glBegin(GL_QUADS);

    int nXRes=g_depthMD.FullXRes();
    int nYRes=g_depthMD.FullYRes();

    // upper left
    glTexCoord2f(0, 0);
    glVertex2f(0, 0);
    // upper right
    glTexCoord2f((float)nXRes/(float)g_nTexMapX, 0);
    glVertex2f(GL_WIN_SIZE_X, 0);
    // bottom right
    glTexCoord2f((float)nXRes/(float)g_nTexMapX, (float)nYRes/(float)g_nTexMapY);
    glVertex2f(GL_WIN_SIZE_X, GL_WIN_SIZE_Y);
    // bottom left
    glTexCoord2f(0, (float)nYRes/(float)g_nTexMapY);
    glVertex2f(0, GL_WIN_SIZE_Y);

    glEnd();

    // Swap the OpenGL display buffers
    glutSwapBuffers();
}

void glutKeyboard (unsigned char key, int x, int y)
{
    switch (key)
    {
    case 27:
        exit (1);
    case '1':
        g_nViewState=DISPLAY_MODE_OVERLAY;
        g_depth.GetAlternativeViewPointCap().SetViewPoint(g_image);
        break;
    case '2':
        g_nViewState=DISPLAY_MODE_DEPTH;
        g_depth.GetAlternativeViewPointCap().ResetViewPoint();
        break;
    case '3':
        g_nViewState=DISPLAY_MODE_IMAGE;
        g_depth.GetAlternativeViewPointCap().ResetViewPoint();
        break;
    case 'm':
        g_context.SetGlobalMirror(!g_context.GetGlobalMirror());
        break;
    }
}

int main(int argc, char* argv[])
{
    XnStatus rc;

    EnumerationErrors errors;
    rc=g_context.InitFromXmlFile(SAMPLE_XML_PATH, g_scriptNode, &errors);
    if (rc==XN_STATUS_NO_NODE_PRESENT)
    {
        XnChar strError[1024];
        errors.ToString(strError, 1024);
        printf("%s\n", strError);
        return (rc);
    }
    else if (rc !=XN_STATUS_OK)
    {
        printf("Open failed: %s\n", xnGetStatusString(rc));
        return (rc);
    }

    rc=g_context.FindExistingNode(XN_NODE_TYPE_DEPTH, g_depth);
    if (rc !=XN_STATUS_OK)
    {
        printf("No depth node exists! Check your XML.");
        return 1;
    }

    rc=g_context.FindExistingNode(XN_NODE_TYPE_IMAGE, g_image);
    if (rc !=XN_STATUS_OK)
    {
        printf("No image node exists! Check your XML.");
        return 1;
    }

    g_depth.GetMetaData(g_depthMD);
    g_image.GetMetaData(g_imageMD);

    // Hybrid mode isn't supported in this sample
    if (g_imageMD.FullXRes() !=g_depthMD.FullXRes() || g_imageMD.FullYRes() !=g_depthMD.FullYRes())
    {
        printf ("The device depth and image resolution must be equal!\n");
        return 1;
    }

    // RGB is the only image format supported.
    if (g_imageMD.PixelFormat() !=XN_PIXEL_FORMAT_RGB24)
    {
        printf("The device image format must be RGB24\n");
        return 1;
    }

    // Texture map init
    g_nTexMapX=(((unsigned short)(g_depthMD.FullXRes()-1) / 512) + 1) * 512;
    g_nTexMapY=(((unsigned short)(g_depthMD.FullYRes()-1) / 512) + 1) * 512;
    g_pTexMap=(XnRGB24Pixel*)malloc(g_nTexMapX * g_nTexMapY * sizeof(XnRGB24Pixel));

    // OpenGL init
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
    glutInitWindowSize(GL_WIN_SIZE_X, GL_WIN_SIZE_Y);
    glutCreateWindow ("OpenNI Simple Viewer");
    glutFullScreen();
    glutSetCursor(GLUT_CURSOR_NONE);

    glutKeyboardFunc(glutKeyboard);
    glutDisplayFunc(glutDisplay);
    glutIdleFunc(glutIdle);

    glDisable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);

    // Per frame code is in glutDisplay
    glutMainLoop();

    return 0;
}
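If you want to build this sample from a plain terminal instead of Eclipse, a command along these lines should do it (a sketch; SimpleViewer.cpp is just the name I assume you saved it under):

g++ SimpleViewer.cpp -o SimpleViewer -I/usr/include/ni -lOpenNI -lglut
./SimpleViewer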

II. OpenCV

Although we can already display things with OpenGL, OpenCV is still very handy, so let's configure it as well.

1. Software download

(1) First install these packages:

  apt-get install build-essential

  apt-get install cmake cmake-gui

  apt-get install pkg-config

  apt-get install libpng12-0 libpng12-dev libpng++-dev libpng3

  apt-get install libpnglite-dev libpngwriter0-dev libpngwriter0c2

  apt-get install zlib1g-dbg zlib1g zlib1g-dev

  apt-get install libjasper-dev libjasper-runtime libjasper1

  apt-get install pngtools libtiff4-dev libtiff4 libtiffxx0c2 libtiff-tools

  apt-get install libjpeg8 libjpeg8-dev libjpeg8-dbg libjpeg-prog

  apt-get install ffmpeg libavcodec-dev libavcodec52 libavformat52 libavformat-dev

  apt-get install libgstreamer0.10-0-dbg libgstreamer0.10-0 libgstreamer0.10-dev

  apt-get install libxine1-ffmpeg libxine-dev libxine1-bin

  apt-get install libunicap2 libunicap2-dev

  apt-get install libdc1394-22-dev libdc1394-22 libdc1394-utils

  apt-get install swig

  apt-get install libv4l-0 libv4l-dev

  apt-get install python-numpy

  apt-get install libpython2.6 python-dev python2.6-dev

Some of these you may not need, but build-essential, cmake and pkg-config do seem to be required; I'm not sure about the rest, so to be safe install them all.

(2) Download the latest version of OpenCV:

Extract it; this creates a directory such as OpenCV-2.3.1. Next to it, create another directory, e.g. OpenCV-2.3.1-build. Now open the cmake-gui graphical interface (you should find it under "Programming" in the main menu). Fill in the full paths of OpenCV-2.3.1 and OpenCV-2.3.1-build in the Source and Build fields. Click Configure; if entries in the window turn red, click it again until nothing is red, then click Generate. In a terminal, cd into OpenCV-2.3.1-build, $ make and then $ sudo make install; OpenCV should now be installed under /usr/local/.
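If you prefer the command line to cmake-gui, the same configure/build/install cycle looks roughly like this (a sketch with default options; adjust paths and the -j value to your machine):

cd OpenCV-2.3.1-build
cmake ../OpenCV-2.3.1        # configure; defaults are fine for a first try
make -j2                     # build
sudo make install            # installs under /usr/local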

2. Creating a project

Create a new project in Eclipse, e.g. testOpenCV. In the Project Explorer on the left, right-click testOpenCV, click Properties, and choose C/C++ Build -> Settings -> GCC C++ Compiler (GCC C Compiler if you write in C) -> Directories. In Include paths (-I), click the green plus and add /usr/local/include/opencv. Then choose GCC C++ Linker -> Libraries and add opencv_core and opencv_highgui under Libraries (-l), plus any other modules you need. The libraries live in /usr/local/lib, so this time you do need to add a Library search path (-L): /usr/local/lib.
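Outside Eclipse, the equivalent command line is handy for quick tests, either with explicit flags or via pkg-config once the environment described below is set up (a sketch; test.cpp is a placeholder name):

g++ test.cpp -o testOpenCV -I/usr/local/include/opencv -L/usr/local/lib -lopencv_core -lopencv_highgui
# or, after PKG_CONFIG_PATH is configured as described below:
g++ test.cpp -o testOpenCV $(pkg-config --cflags --libs opencv)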

You also need to set a couple of environment variables for the system. I tried several approaches, and honestly I'm not sure which one did the trick:

Type the following in a terminal, one item per line (this just creates a file containing the single line /usr/local/lib; you can do it with gedit instead, or with the one-line command shown after these keystrokes):

  sudo vi /etc/ld.so.conf.d/opencv.conf

  G

  o

  /usr/local/lib

  :wq!
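If you would rather not use vi, the same file can be created with one command (equivalent to the keystrokes above):

echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/opencv.conf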

Similarly, open the file /etc/bash.bashrc and add two lines at the end:

  PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig

  export PKG_CONFIG_PATH

Then run in a terminal:

  sudo ldconfig -v

  export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH

Reboot, and you should be able to compile and run. If the configuration is not right you may get errors about libopencv_core.so or similar files being missing. If it still does not work, have a look at other posts; everyone's setup differs a little, as I learned the hard way.
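Before or after the reboot, these two commands are a quick sanity check that the library path and pkg-config setup actually took effect (assuming OpenCV went into /usr/local as above):

ldconfig -p | grep libopencv_core     # should list /usr/local/lib/libopencv_core.so
pkg-config --modversion opencv        # should print the installed OpenCV version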

Alright, let's see whether it runs.

Test program:

#include <stdio.h>    // the original header name was lost; any standard header is fine here
#include "cv.h"
#include "highgui.h"  // needed for cvNamedWindow/cvShowImage/cvWaitKey
#include <iostream>
using namespace std;

int main() {
    IplImage *img=cvLoadImage("a.jpg");
    cvNamedWindow("Image:",1);
    cvShowImage("Image:",img);
    cvWaitKey();
    cvDestroyWindow("Image:");
    cvReleaseImage(&img);
    return 0;
}

The image is expected in the same place as this cpp file in the project; alternatively, just use an absolute path.

If all is well, congratulations!

With OpenNI and OpenCV in place, go and build your own interesting Kinect applications with peace of mind.

Author: 韶子


