[vtkusers] Is there any way to prepare datasets for vtk.js?

Alexey Pechnikov pechnikov at mobigroup.ru
Wed Jul 11 10:29:44 EDT 2018


Aron,

Thank you for the links! I just want to try it out. As a sample I have a fairly
large 3D grid (~100 MB) that I want to display with vtk-js. I don't know how to
work with large files in vtk-js... maybe I should use a small, reduced dataset
for the 3D preview and load full-resolution slices from the server?
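
Roughly, this is the kind of preprocessing I have in mind for the preview (an
untested VTK Python sketch; the file names and the 0.25 magnification factor
are only placeholders, not values from this thread):

import vtk

# Read the full-resolution grid (placeholder file name).
reader = vtk.vtkXMLImageDataReader()
reader.SetFileName("full_volume.vti")
reader.Update()

# Downsample each axis to 25% so the preview holds roughly 1/64 of the voxels.
resample = vtk.vtkImageResample()
resample.SetInputConnection(reader.GetOutputPort())
for axis in range(3):
    resample.SetAxisMagnificationFactor(axis, 0.25)
resample.Update()

# Write a small compressed .vti that the web viewer can fetch quickly.
writer = vtk.vtkXMLImageDataWriter()
writer.SetFileName("preview_volume.vti")
writer.SetDataModeToBinary()
writer.SetCompressorTypeToZLib()
writer.SetInputConnection(resample.GetOutputPort())
writer.Write()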

I found the aneurism.vti viewer at
https://kitware.github.io/paraview-glance/nightly. It looks very interesting,
but the data file used there is small.
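
For the "full-resolution slices from the server" part, maybe something like
vtkExtractVOI per requested slice would work (again only an untested sketch;
the slice index and file names are placeholders):

import vtk

# Read the full-resolution grid (placeholder file name).
reader = vtk.vtkXMLImageDataReader()
reader.SetFileName("full_volume.vti")
reader.Update()

extent = reader.GetOutput().GetExtent()
k = 100  # placeholder: slice index requested by the client

# Extract a single full-resolution XY slice from the volume.
voi = vtk.vtkExtractVOI()
voi.SetInputConnection(reader.GetOutputPort())
voi.SetVOI(extent[0], extent[1], extent[2], extent[3], k, k)
voi.Update()

writer = vtk.vtkXMLImageDataWriter()
writer.SetFileName("slice_%03d.vti" % k)
writer.SetInputConnection(voi.GetOutputPort())
writer.Write()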


Sorry, the script doesn't work as a ParaView macro:

ERROR: In
/Users/kitware/dashboards/buildbot-slave/8275bd07/build/superbuild/paraview/src/VTK/Rendering/OpenGL2/vtkShaderProgram.cxx,
line 445

vtkShaderProgram (0x60c00058d5b0): 1: #version 150
2: #ifdef GL_ES
3: #if __VERSION__ == 300
4: #define varying in
5: #ifdef GL_FRAGMENT_PRECISION_HIGH
6: precision highp float;
7: precision highp sampler2D;
8: precision highp sampler3D;
9: #else
10: precision mediump float;
11: precision mediump sampler2D;
12: precision mediump sampler3D;
13: #endif
14: #define texelFetchBuffer texelFetch
15: #define texture1D texture
16: #define texture2D texture
17: #define texture3D texture
18: #endif // 300
19: #else // GL_ES
20: #define highp
21: #define mediump
22: #define lowp
23: #if __VERSION__ == 150
24: #define varying in
25: #define texelFetchBuffer texelFetch
26: #define texture1D texture
27: #define texture2D texture
28: #define texture3D texture
29: #endif
30: #if __VERSION__ == 120
31: #extension GL_EXT_gpu_shader4 : require
32: #endif
33: #endif // GL_ES
34:
35:
36: /*=========================================================================
37:
38: Program: Visualization Toolkit
39: Module: raycasterfs.glsl
40:
41: Copyright (c) Ken Martin, Will Schroeder, Bill Lorensen
42: All rights reserved.
43: See Copyright.txt or http://www.kitware.com/Copyright.htm for details.
44:
45: This software is distributed WITHOUT ANY WARRANTY; without even
46: the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
47: PURPOSE. See the above copyright notice for more information.
48:
49: =========================================================================*/
50:
51: //////////////////////////////////////////////////////////////////////////////
52: ///
53: /// Inputs
54: ///
55: //////////////////////////////////////////////////////////////////////////////
56:
57: /// 3D texture coordinates form vertex shader
58: varying vec3 ip_textureCoords;
59: varying vec3 ip_vertexPos;
60:
61: //////////////////////////////////////////////////////////////////////////////
62: ///
63: /// Outputs
64: ///
65: //////////////////////////////////////////////////////////////////////////////
66:
67: vec4 g_fragColor = vec4(0.0);
68:
69: //////////////////////////////////////////////////////////////////////////////
70: ///
71: /// Uniforms, attributes, and globals
72: ///
73: //////////////////////////////////////////////////////////////////////////////
74: vec3 g_dataPos;
75: vec3 g_dirStep;
76: vec4 g_srcColor;
77: vec4 g_eyePosObj;
78: bool g_exit;
79: bool g_skip;
80: float g_currentT;
81: float g_terminatePointMax;
82:
83: out vec4 fragOutput0;
84:
85:
86: uniform sampler3D in_volume[1];
87: uniform vec4 in_volume_scale[1];
88: uniform vec4 in_volume_bias[1];
89: uniform int in_noOfComponents;
90: uniform int in_independentComponents;
91:
92: uniform sampler2D in_noiseSampler;
93: #ifndef GL_ES
94: uniform sampler2D in_depthSampler;
95: #endif
96:
97: // Camera position
98: uniform vec3 in_cameraPos;
99: uniform mat4 in_volumeMatrix[1];
100: uniform mat4 in_inverseVolumeMatrix[1];
101: uniform mat4 in_textureDatasetMatrix[1];
102: uniform mat4 in_inverseTextureDatasetMatrix[1];
103: uniform mat4 in_textureToEye[1];
104: uniform vec3 in_texMin[1];
105: uniform vec3 in_texMax[1];
106: uniform mat4 in_cellToPoint[1];
107: // view and model matrices
108: uniform mat4 in_projectionMatrix;
109: uniform mat4 in_inverseProjectionMatrix;
110: uniform mat4 in_modelViewMatrix;
111: uniform mat4 in_inverseModelViewMatrix;
112: varying mat4 ip_inverseTextureDataAdjusted;
113:
114: // Ray step size
115: uniform vec3 in_cellStep[1];
116: uniform vec2 in_scalarsRange[4];
117: uniform vec3 in_cellSpacing[1];
118:
119: // Sample distance
120: uniform float in_sampleDistance;
121:
122: // Scales
123: uniform vec2 in_windowLowerLeftCorner;
124: uniform vec2 in_inverseOriginalWindowSize;
125: uniform vec2 in_inverseWindowSize;
126: uniform vec3 in_textureExtentsMax;
127: uniform vec3 in_textureExtentsMin;
128:
129: // Material and lighting
130: uniform vec3 in_diffuse[4];
131: uniform vec3 in_ambient[4];
132: uniform vec3 in_specular[4];
133: uniform float in_shininess[4];
134:
135: // Others
136: uniform bool in_useJittering;
137: vec3 g_rayJitter = vec3(0.0);
138:
139: uniform vec2 in_averageIPRange;
140: uniform vec3 in_lightAmbientColor[1];
141: uniform vec3 in_lightDiffuseColor[1];
142: uniform vec3 in_lightSpecularColor[1];
143: vec4 g_lightPosObj;
144: vec3 g_ldir;
145: vec3 g_vdir;
146: vec3 g_h;
147:
148:
149:
150: const float g_opacityThreshold = 1.0 - 1.0 / 255.0;
151:
152:
153:
154:
155: int clippingPlanesSize;
156: vec3 objRayDir;
157: mat4 textureToObjMat;
158:
159:
160:
161:
162:
163:
164:
165: //VTK::GradientCache::Dec
166:
167: //VTK::Transfer2D::Dec
168:
169: uniform sampler2D in_opacityTransferFunc_0[1];
170:
171: float computeOpacity(vec4 scalar)
172: {
173: return texture2D(in_opacityTransferFunc_0[0], vec2(scalar.w, 0)).r;
174: }
175:
176: vec4 computeGradient(in vec3 texPos, in int c, in sampler3D volume, in int index)
177: {
178: return vec4(0.0);
179: }
180:
181:
182: uniform sampler2D [1];
183:
184:
185:
186: vec4 computeLighting(vec4 color, int component)
187: {
188: vec4 finalColor = vec4(0.0);
189: finalColor = vec4(color.rgb, 0.0);
190: finalColor.a = color.a;
191: return finalColor;
192: }
193:
194: uniform sampler2D in_colorTransferFunc_0[1];
195:
196: vec4 computeColor(vec4 scalar, float opacity)
197: {
198: return computeLighting(vec4(texture2D(in_colorTransferFunc_0[0],
199: vec2(scalar.w, 0.0)).xyz, opacity), 0);
200: }
201:
202:
203: vec3 computeRayDirection()
204: {
205: return normalize(ip_vertexPos.xyz - g_eyePosObj.xyz);
206: }
207:
208: //VTK::Picking::Dec
209:
210: //VTK::RenderToImage::Dec
211:
212: //VTK::DepthPeeling::Dec
213:
214: /// We support only 8 clipping planes for now
215: /// The first value is the size of the data array for clipping
216: /// planes (origin, normal)
217: uniform float in_clippingPlanes[49];
218: uniform float in_scale;
219: uniform float in_bias;
220:
221: //////////////////////////////////////////////////////////////////////////////
222: ///
223: /// Helper functions
224: ///
225: //////////////////////////////////////////////////////////////////////////////
226:
227: /**
228: * Transform window coordinate to NDC.
229: */
230: vec4 WindowToNDC(const float xCoord, const float yCoord, const float zCoord)
231: {
232: vec4 NDCCoord = vec4(0.0, 0.0, 0.0, 1.0);
233:
234: NDCCoord.x = (xCoord - in_windowLowerLeftCorner.x) * 2.0 *
235: in_inverseWindowSize.x - 1.0;
236: NDCCoord.y = (yCoord - in_windowLowerLeftCorner.y) * 2.0 *
237: in_inverseWindowSize.y - 1.0;
238: NDCCoord.z = (2.0 * zCoord - (gl_DepthRange.near + gl_DepthRange.far)) /
239: gl_DepthRange.diff;
240:
241: return NDCCoord;
242: }
243:
244: /**
245: * Transform NDC coordinate to window coordinates.
246: */
247: vec4 NDCToWindow(const float xNDC, const float yNDC, const float zNDC)
248: {
249: vec4 WinCoord = vec4(0.0, 0.0, 0.0, 1.0);
250:
251: WinCoord.x = (xNDC + 1.f) / (2.f * in_inverseWindowSize.x) +
252: in_windowLowerLeftCorner.x;
253: WinCoord.y = (yNDC + 1.f) / (2.f * in_inverseWindowSize.y) +
254: in_windowLowerLeftCorner.y;
255: WinCoord.z = (zNDC * gl_DepthRange.diff +
256: (gl_DepthRange.near + gl_DepthRange.far)) / 2.f;
257:
258: return WinCoord;
259: }
260:
261: //////////////////////////////////////////////////////////////////////////////
262: ///
263: /// Ray-casting
264: ///
265: //////////////////////////////////////////////////////////////////////////////
266:
267: /**
268: * Global initialization. This method should only be called once per shader
269: * invocation regardless of whether castRay() is called several times (e.g.
270: * vtkDualDepthPeelingPass). Any castRay() specific initialization should be
271: * placed within that function.
272: */
273: void initializeRayCast()
274: {
275: /// Initialize g_fragColor (output) to 0
276: g_fragColor = vec4(0.0);
277: g_dirStep = vec3(0.0);
278: g_srcColor = vec4(0.0);
279: g_exit = false;
280:
281:
282: // Get the 3D texture coordinates for lookup into the in_volume dataset
283: g_dataPos = ip_textureCoords.xyz;
284:
285: // Eye position in dataset space
286: g_eyePosObj = in_inverseVolumeMatrix[0] * vec4(in_cameraPos, 1.0);
287:
288: // Getting the ray marching direction (in dataset space);
289: vec3 rayDir = computeRayDirection();
290:
291: // Multiply the raymarching direction with the step size to get the
292: // sub-step size we need to take at each raymarching step
293: g_dirStep = (ip_inverseTextureDataAdjusted *
294: vec4(rayDir, 0.0)).xyz * in_sampleDistance;
295:
296: // 2D Texture fragment coordinates [0,1] from fragment coordinates.
297: // The frame buffer texture has the size of the plain buffer but
298: // we use a fraction of it. The texture coordinate is less than 1 if
299: // the reduction factor is less than 1.
300: // Device coordinates are between -1 and 1. We need texture
301: // coordinates between 0 and 1. The in_noiseSampler and in_depthSampler
302: // buffers have the original size buffer.
303: vec2 fragTexCoord = (gl_FragCoord.xy - in_windowLowerLeftCorner) *
304: in_inverseWindowSize;
305:
306: if (in_useJittering)
307: {
308: float jitterValue = texture2D(in_noiseSampler, fragTexCoord).x;
309: g_rayJitter = g_dirStep * jitterValue;
310: g_dataPos += g_rayJitter;
311: }
312: else
313: {
314: g_dataPos += g_dirStep;
315: }
316:
317: // Flag to deternmine if voxel should be considered for the rendering
318: g_skip = false;
319:
320:
321: // Flag to indicate if the raymarch loop should terminate
322: bool stop = false;
323:
324: g_terminatePointMax = 0.0;
325:
326: #ifdef GL_ES
327: vec4 l_depthValue = vec4(1.0,1.0,1.0,1.0);
328: #else
329: vec4 l_depthValue = texture2D(in_depthSampler, fragTexCoord);
330: #endif
331: // Depth test
332: if(gl_FragCoord.z >= l_depthValue.x)
333: {
334: discard;
335: }
336:
337: // color buffer or max scalar buffer have a reduced size.
338: fragTexCoord = (gl_FragCoord.xy - in_windowLowerLeftCorner) *
339: in_inverseOriginalWindowSize;
340:
341: // Compute max number of iterations it will take before we hit
342: // the termination point
343:
344: // Abscissa of the point on the depth buffer along the ray.
345: // point in texture coordinates
346: vec4 terminatePoint = WindowToNDC(gl_FragCoord.x, gl_FragCoord.y, l_depthValue.x);
347:
348: // From normalized device coordinates to eye coordinates.
349: // in_projectionMatrix is inversed because of way VT
350: // From eye coordinates to texture coordinates
351: terminatePoint = ip_inverseTextureDataAdjusted *
352: in_inverseVolumeMatrix[0] *
353: in_inverseModelViewMatrix *
354: in_inverseProjectionMatrix *
355: terminatePoint;
356: terminatePoint /= terminatePoint.w;
357:
358: g_terminatePointMax = length(terminatePoint.xyz - g_dataPos.xyz) /
359: length(g_dirStep);
360: g_currentT = 0.0;
361:
362:
363:
364:
365:
366: //VTK::RenderToImage::Init
367:
368: //VTK::DepthPass::Init
369: }
370:
371: /**
372: * March along the ray direction sampling the volume texture. This function
373: * takes a start and end point as arguments but it is up to the specific render
374: * pass implementation to use these values (e.g. vtkDualDepthPeelingPass). The
375: * mapper does not use these values by default, instead it uses the number of
376: * steps defined by g_terminatePointMax.
377: */
378: vec4 castRay(const float zStart, const float zEnd)
379: {
380: //VTK::DepthPeeling::Ray::Init
381:
382: //VTK::DepthPeeling::Ray::PathCheck
383:
384:
385:
386: /// For all samples along the ray
387: while (!g_exit)
388: {
389:
390: g_skip = false;
391:
392:
393:
394:
395:
396:
397:
398:
399:
400: //VTK::PreComputeGradients::Impl
401:
402:
403: if (!g_skip)
404: {
405: vec4 scalar = texture3D(in_volume[0], g_dataPos);
406: scalar.r = scalar.r * in_volume_scale[0].r + in_volume_bias[0].r;
407: scalar = vec4(scalar.r);
408: g_srcColor = vec4(0.0);
409: g_srcColor.a = computeOpacity(scalar);
410: if (g_srcColor.a > 0.0)
411: {
412: g_srcColor = computeColor(scalar, g_srcColor.a);
413: // Opacity calculation using compositing:
414: // Here we use front to back compositing scheme whereby
415: // the current sample value is multiplied to the
416: // currently accumulated alpha and then this product
417: // is subtracted from the sample value to get the
418: // alpha from the previous steps. Next, this alpha is
419: // multiplied with the current sample colour
420: // and accumulated to the composited colour. The alpha
421: // value from the previous steps is then accumulated
422: // to the composited colour alpha.
423: g_srcColor.rgb *= g_srcColor.a;
424: g_fragColor = (1.0f - g_fragColor.a) * g_srcColor + g_fragColor;
425: }
426: }
427:
428: //VTK::RenderToImage::Impl
429:
430: //VTK::DepthPass::Impl
431:
432: /// Advance ray
433: g_dataPos += g_dirStep;
434:
435:
436: if(any(greaterThan(g_dataPos, in_texMax[0])) ||
437: any(lessThan(g_dataPos, in_texMin[0])))
438: {
439: break;
440: }
441:
442: // Early ray termination
443: // if the currently composited colour alpha is already fully saturated
444: // we terminated the loop or if we have hit an obstacle in the
445: // direction of they ray (using depth buffer) we terminate as well.
446: if((g_fragColor.a > g_opacityThreshold) ||
447: g_currentT >= g_terminatePointMax)
448: {
449: break;
450: }
451: ++g_currentT;
452: }
453:
454:
455:
456: return g_fragColor;
457: }
458:
459: /**
460: * Finalize specific modes and set output data.
461: */
462: void finalizeRayCast()
463: {
464:
465:
466:
467:
468:
469:
470:
471:
472:
473: // Special coloring mode which renders the voxel index in fragments that
474: // have accumulated certain level of opacity. Used during the selection
475: // pass vtkHardwareSelection::ID_MID24.
476: if (g_fragColor.a > 3.0/ 255.0)
477: {
478: uvec3 volumeDim = uvec3(in_textureExtentsMax - in_textureExtentsMin);
479: uvec3 voxelCoords = uvec3(volumeDim * g_dataPos);
480: // vtkHardwareSelector assumes index 0 to be empty space, so add uint(1).
481: uint idx = volumeDim.x * volumeDim.y * voxelCoords.z +
482: volumeDim.x * voxelCoords.y + voxelCoords.x + uint(1);
483: idx = ((idx & 0xff000000) >> 24);
484: fragOutput0 = vec4(float(idx % uint(256)) / 255.0,
485: float((idx / uint(256)) % uint(256)) / 255.0,
486: float(idx / uint(65536)) / 255.0, 1.0);
487: }
488: else
489: {
490: fragOutput0 = vec4(0.0);
491: }
492: return;
493:
494: g_fragColor.r = g_fragColor.r * in_scale + in_bias * g_fragColor.a;
495: g_fragColor.g = g_fragColor.g * in_scale + in_bias * g_fragColor.a;
496: g_fragColor.b = g_fragColor.b * in_scale + in_bias * g_fragColor.a;
497: fragOutput0 = g_fragColor;
498:
499: //VTK::RenderToImage::Exit
500:
501: //VTK::DepthPass::Exit
502: }
503:
504: //////////////////////////////////////////////////////////////////////////////
505: ///
506: /// Main
507: ///
508: //////////////////////////////////////////////////////////////////////////////
509: void main()
510: {
511:
512: initializeRayCast();
513: castRay(-1.0, -1.0);
514: finalizeRayCast();
515: }



ERROR: In
/Users/kitware/dashboards/buildbot-slave/8275bd07/build/superbuild/paraview/src/VTK/Rendering/OpenGL2/vtkShaderProgram.cxx,
line 446

vtkShaderProgram (0x60c00058d5b0): ERROR: 0:483: '&' does not operate on
'unsigned int' and 'int'



ERROR: In
/Users/kitware/dashboards/buildbot-slave/8275bd07/build/superbuild/paraview/src/VTK/Rendering/VolumeOpenGL2/vtkOpenGLGPUVolumeRayCastMapper.cxx,
line 3169

vtkOpenGLGPUVolumeRayCastMapper (0x7fc085e21f60): Shader failed to compile


Traceback (most recent call last):

File "<string>", line 450, in <module>

AttributeError: 'vtkCommonDataModelPython.vtkImageData' object has no
attribute 'GetPoints'
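
My guess is that the macro's line 450 expects a dataset with GetPoints()
(vtkPolyData has it, vtkImageData does not), so perhaps I would have to convert
the volume to polydata before exporting, e.g. with something like this rough,
untested VTK Python sketch (the isovalue and file names are only placeholders):

import vtk

reader = vtk.vtkXMLImageDataReader()
reader.SetFileName("full_volume.vti")
reader.Update()

# Turn the image data into a surface so the exported scene contains
# polydata with explicit points; 500.0 is only a placeholder isovalue.
contour = vtk.vtkContourFilter()
contour.SetInputConnection(reader.GetOutputPort())
contour.SetValue(0, 500.0)
contour.Update()

writer = vtk.vtkXMLPolyDataWriter()
writer.SetFileName("full_volume_surface.vtp")
writer.SetInputConnection(contour.GetOutputPort())
writer.Write()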




2018-07-11 21:08 GMT+07:00 Aron Helser <aron.helser at kitware.com>:

> Adding back the mailing list....
>
> Alexey,
> Do you have a particular goal, or do you just want to try out vtk-js? If
> you just want to see what it can do, it is used for the rendering in ParaView
> Glance, which you can try out here:
> https://kitware.github.io/paraview-glance/nightly/
> It will load several different data formats, or just start with the
> example datasets. As you can see, vtk-js is used for real and useful
> projects.
>
> For just VTK-js, the example applications like the OBJ viewer:
> https://kitware.github.io/vtk-js/examples/OBJViewer.html
> are a good starting point - there are several that will load different
> formats of data.
>
> The script I pointed you at is a ParaView macro, which means that after
> running the ParaView application, you add it by going to the Macro menu and
> choosing 'Add new macro...'
>
> Regards,
> Aron
>
> On Wed, Jul 11, 2018 at 9:59 AM Alexey Pechnikov <pechnikov at mobigroup.ru>
> wrote:
>
>> Aron,
>>
>> That script doesn't work with python2/python3 (the paraview module is not
>> available there), and it doesn't work in the internal ParaView Python shell
>> either. Do you know a working way to run it? Or do you mean I need to write
>> my own script, using this one as a reference instead of proper documentation?
>> Is vtk-js actually used by anybody, or is it just a fun but not really useful
>> project? Maybe I don't need to try it at all.
>>
>> 2018-07-11 20:16 GMT+07:00 Aron Helser <aron.helser at kitware.com>:
>>
>>> Hi Alexey,
>>> This example: https://kitware.github.io/vtk-js/examples/StandaloneSceneLoader.html
>>> references this macro from vtk-js: Utilities/ParaView/export-scene-macro.py
>>> It can be used to export a scene from ParaView for consumption by
>>> vtk-js.
>>>
>>> I know there are other ways to do it as well, but that was the easiest.
>>> HTH,
>>> Aron
>>>
>>> On Wed, Jul 11, 2018 at 5:54 AM Alexey Pechnikov <pechnikov at mobigroup.ru>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I couldn't find any documentation or tool to do it. Google doesn't know
>>>> anything about a "vti-web" data format. VTK/VTI and other VTK formats are
>>>> not supported, as far as I can see. Is disassembling the example datasets
>>>> and building my own tools the only possible way?!
>>>>
>>>> --
>>>> Best regards, Alexey Pechnikov.
>>>>
>>>
>>
>>
>> --
>> Best regards, Alexey Pechnikov.
>>
>


-- 
Best regards, Alexey Pechnikov.

