[vtkusers] GPURenderDemo- Difference between linux OS and Windows OS

Emptystack wulihouxiaoshuai at 163.com
Thu Sep 25 22:22:39 EDT 2014


In vtkGPUVolumeRayCastMapper.cxx, the following lines are used to check
the GPU info:

  this->MaxMemoryInBytes=0;
  vtkGPUInfoList *l=vtkGPUInfoList::New();
  l->Probe();
  if(l->GetNumberOfGPUs()>0)
    {
    vtkGPUInfo *info=l->GetGPUInfo(0);
    this->MaxMemoryInBytes=info->GetDedicatedVideoMemory();
    if(this->MaxMemoryInBytes==0)
      {
      this->MaxMemoryInBytes=info->GetDedicatedSystemMemory();
      }
    // we ignore info->GetSharedSystemMemory(); as this is very slow.
    }
  l->Delete();

  if(this->MaxMemoryInBytes==0) // use some default value: 128MB.
    {
    this->MaxMemoryInBytes=128*1024*1024;
    }
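
To see what this logic yields on a given machine, the list can also be
probed directly. Here is a minimal standalone sketch (my own test code, not
part of the demo; it uses only the calls shown above plus GetClassName()):

#include "vtkGPUInfoList.h"
#include "vtkGPUInfo.h"
#include <iostream>

int main()
{
  // Probe the GPU list directly, mirroring the mapper's logic above,
  // and print which concrete vtkGPUInfoList subclass was instantiated.
  vtkGPUInfoList *l=vtkGPUInfoList::New();
  l->Probe();
  std::cout << "concrete class: " << l->GetClassName() << std::endl;
  std::cout << "number of GPUs: " << l->GetNumberOfGPUs() << std::endl;
  if(l->GetNumberOfGPUs()>0)
    {
    vtkGPUInfo *info=l->GetGPUInfo(0);
    std::cout << "dedicated video memory: "
              << info->GetDedicatedVideoMemory() << " bytes" << std::endl;
    }
  l->Delete();
  return 0;
}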

1. VTK_USE_NVCONTROL flag is OFF: MaxMemoryInBytes is 128MB. Because
vtkGPUInfoList cannot find my GPU, MaxMemoryInBytes falls back to the
default value of 128MB. After debugging I found that
vtkGPUInfoList::New() did not invoke the vtkXGPUInfoList constructor,
but the vtkDummyGPUInfoList constructor. (A possible workaround for this
case is sketched just below.)
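
When the probe falls back to the dummy list like this, the 128MB default
can be overridden explicitly. A minimal sketch (my own workaround code,
not from the demo; it relies on the mapper's SetMaxMemoryInBytes setter):

#include "vtkGPUVolumeRayCastMapper.h"

void ConfigureMapper(vtkGPUVolumeRayCastMapper *mapper)
{
  // Override the 128MB fallback by telling the mapper how much VRAM
  // is actually available (1024MB here, matching my card).
  mapper->SetMaxMemoryInBytes(static_cast<vtkIdType>(1024)*1024*1024);
}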

2. VTK_USE_NVCONTROL flag is ON: MaxMemoryInBytes is 1024MB, which is the
VRAM of my NVIDIA card. Here vtkGPUInfoList::New() did invoke the
vtkXGPUInfoList constructor, and l->Probe() invoked the Probe() function
of the vtkXGPUInfoList class:

void vtkXGPUInfoList::Probe()
{
  if(!this->Probed)
    {
    this->Probed=true;
    this->Array=new vtkGPUInfoListArray;
    bool found=false;
    
#ifdef VTK_USE_NVCONTROL
    // see sample code in nvidia-settings-1.0/samples/nv-control-info.c
    Display *dpy=XOpenDisplay(NULL); // we use the environment variable DISPLAY
    if(dpy!=NULL)
      {
      int eventBase;
      int errorBase;
      if(XNVCTRLQueryExtension(dpy,&eventBase,&errorBase)==True)
        {
        int screenCount=XScreenCount(dpy);
        int nvScreenCount=0;
        int i=0;
        while(i<screenCount)
          {
          if(XNVCTRLIsNvScreen(dpy,i))
            {
            ++nvScreenCount;
            }
          ++i;
          }
        found=nvScreenCount>0;
        if(found)
          {
          this->Array->v.resize(nvScreenCount);
          int j=0;
          i=0;
          while(i<screenCount)
            {
            if(XNVCTRLIsNvScreen(dpy,i))
              {
              int ramSize;
              Bool status=XNVCTRLQueryAttribute(dpy,i,0,
                                                NV_CTRL_VIDEO_RAM,&ramSize);
              if(!status)
                {
                ramSize=0;
                }
              vtkGPUInfo *info=vtkGPUInfo::New();
              info->SetDedicatedVideoMemory(ramSize*1024); // ramSize is in KB
              this->Array->v[j]=info;
              ++j;
              }
            ++i;
            }
          }
        }
      XCloseDisplay(dpy);
      }
#endif // #ifdef VTK_USE_NVCONTROL
    if(!found)
      {
      this->Array->v.resize(0); // no GPU.
      }
    }
  assert("post: probed" && this->IsProbed());
}
Because we have set the VTK_USE_NVCONTROL flag ON, the code between #ifdef
and #endif is compiled in, and as a result the information about my GPU is
detected. Everything works well except for the memory problem I mentioned
earlier.
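
For reference, since VTK_USE_NVCONTROL is a CMake option, enabling it at
configure time looks like this (a sketch; the source path is a placeholder):

  # Enable NV-CONTROL support when configuring the VTK build.
  cmake -DVTK_USE_NVCONTROL:BOOL=ON /path/to/VTK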




