<div dir="ltr">H Sean,<div><br></div><div>I moved this discussion to the VTK developers so that we may get feedback from other.</div><div><br></div><div style>First of all, I apologize. I should have sent an e-mail describing the intention of these changes. That would have clarified certain things better. My larger goal is to eventually introduce a framework for developing scalable multi-threaded algorithms in VTK. We have been working with EDF and Inria who has been developing the framework and my role is to introduce these changes to VTK. These changes are the first steps towards that goal. </div>
Let me try to answer your questions within this context:

> Also, all these different ways to increment an integer, wow! :) How about using C++11
> std::atomic and falling back to the platform-specific stuff as a last resort?

As I mentioned, my goal is to introduce a larger framework for multi-threaded computing in VTK: something similar to Intel TBB, but abstracted so that we can use other backends such as X-Kaapi, OpenMP, or even our own if we choose to. In preparation for that, I had to create the infrastructure for the various backends, switching between them, etc. While doing that, I thought I would also use it to support "native" atomic operations for each backend. Part of the reason is that each implementation seems to work only on a subset of our target architectures. We have native atomic operations for Mac, gcc, and Windows. These may or may not work on other compilers (Intel, XLC, and PGI, for example). Intel TBB supports another subset. OpenMP 3.1 supports atomics the way we need them, but it is only available on some platforms. Ditto for X-Kaapi. I don't even know how many of our compilers support std::atomic. Oh, by the way, the Boost atomics are even less portable than the others :-)
This will continue to be the case as we add more functionality such as a task-stealing parallel for, tasks, etc. Each library seems to support only a subset of platforms.
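To give a concrete idea of what I mean by backend switching, here is a minimal sketch. This is not the actual VTK code; the VTK_HAVE_* macro and the class name are just placeholders for whatever the build system and headers would really define. It shows one atomic-increment front end that picks its implementation at compile time:

#if defined(VTK_HAVE_STD_ATOMIC)
# include <atomic>
#elif defined(_WIN32)
# include <windows.h>
#elif defined(__APPLE__)
# include <libkern/OSAtomic.h>
#endif

class AtomicInt32
{
public:
  AtomicInt32(int v = 0) : Value(v) {}

  // Atomically increments and returns the new value.
  int Increment()
  {
#if defined(VTK_HAVE_STD_ATOMIC)
    return ++this->Value;                             // C++11 std::atomic<int>
#elif defined(_WIN32)
    return InterlockedIncrement(&this->Value);        // Win32 intrinsic
#elif defined(__APPLE__)
    return OSAtomicIncrement32Barrier(&this->Value);  // OS X libkern
#elif defined(__GNUC__)
    return __sync_add_and_fetch(&this->Value, 1);     // gcc/clang builtin
#else
# error "No atomic backend available for this compiler/platform."
#endif
  }

private:
#if defined(VTK_HAVE_STD_ATOMIC)
  std::atomic<int> Value;
#elif defined(_WIN32)
  volatile LONG Value;
#elif defined(__APPLE__)
  volatile int32_t Value;
#else
  volatile int Value;
#endif
};

The real framework also has to cover 64-bit integers, decrement, and the TBB/OpenMP/X-Kaapi backends, but the shape of the problem is the same: every branch of that #if ladder exists on only some of our platforms.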
Keep in mind that we want to support a wide range of platforms, from desktops to supercomputers to, potentially, embedded systems.

I am open to suggestions for making this more robust. As this morning's dashboards show, it is not easy to get this to work on all platforms, and I'd rather use one standard implementation if possible.
> Was the OSAtomicIncrement32/64Barrier() call deliberately weakened to
> OSAtomicIncrement32/64()? The Windows InterlockIncrement() is documented to use a
> memory barrier, and VTK on OS X did before, so...

It was a deliberate change, based on the documentation, which says:

    These functions are thread and multiprocessor safe. For each function, there is
    a version that does and another that does not incorporate a memory barrier.
    Barriers strictly order memory access on a weakly-ordered architecture such as
    PPC. All loads and stores executed in sequential program order before the
    barrier will complete before any load or store executed after the barrier. On a
    uniprocessor, the barrier operation is typically a nop. On a multiprocessor,
    the barrier can be quite expensive.

    Most code will want to use the barrier functions to ensure that memory shared
    between threads is properly synchronized. For example, if you want to
    initialize a shared data structure and then atomically increment a variable to
    indicate that the initialization is complete, then you must use
    OSAtomicIncrement32Barrier() to ensure that the stores to your data structure
    complete before the atomic add. Likewise, the consumer of that data structure
    must use OSAtomicDecrement32Barrier(), in order to ensure that their loads of
    the structure are not executed before the atomic decrement. On the other hand,
    if you are simply incrementing a global counter, then it is safe and
    potentially much faster to use OSAtomicIncrement32(). If you are unsure which
    version to use, prefer the barrier variants as they are safer.

What we are doing in vtkTimeStamp falls into the "simply incrementing a global counter" case, which in my opinion justifies using the much faster implementation. What do you think? We will probably have to revisit what AtomicInts do under the covers if people start using them for purposes other than counters.
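To put that distinction in C++11 terms (purely as an illustration, not code that is in the patch; the variable and function names here are made up), the man page is roughly describing the difference between a relaxed increment and a release/acquire pair:

#include <atomic>

std::atomic<unsigned long> ModifiedTime(0);

// vtkTimeStamp-style use: the counter itself is the only shared state, so a
// relaxed increment (the analogue of OSAtomicIncrement64()) is sufficient.
unsigned long NextTimeStamp()
{
  return ModifiedTime.fetch_add(1, std::memory_order_relaxed) + 1;
}

// Publishing a data structure: the counter announces that SharedData is
// ready, so the increment has to act as a barrier (the analogue of
// OSAtomicIncrement64Barrier()): release on the producer side...
int SharedData = 0;
std::atomic<int> Ready(0);

void Publish()
{
  SharedData = 42;                                // initialize the structure
  Ready.fetch_add(1, std::memory_order_release);  // ...then announce it
}

// ...and acquire on the consumer side, so that the load of SharedData cannot
// be reordered before the check.
int Consume()
{
  return Ready.load(std::memory_order_acquire) > 0 ? SharedData : -1;
}

Since vtkTimeStamp only ever compares the values it gets back, it is the first case, not the second.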
Best,
-berk

On Tue, Aug 20, 2013 at 11:11 AM, Sean McBride (Code Review) <review@kitware.com> wrote:
>
> Sean McBride has posted comments on this change.
>
> Change subject: Added support for atomic integers.
> ......................................................................
>
>
> Patch Set 6:
>
> Was the OSAtomicIncrement32/64Barrier() call deliberately weakened to OSAtomicIncrement32/64()? The Windows InterlockIncrement() is documented to use a memory barrier, and VTK on OS X did before, so...
>
> Also, all these different ways to increment an integer, wow! :) How about using C++11 std::atomic and falling back to the platform-specific stuff as a last resort?
>
> --
> To view, visit http://review.source.kitware.com/12253
>
> To unsubscribe, visit http://review.source.kitware.com/settings
>
> Gerrit-MessageType: comment
> Gerrit-Change-Id: Ie606ef9202f9588616a0b5412123512ffe536be6
> Gerrit-PatchSet: 6
> Gerrit-Project: VTK
> Gerrit-Branch: master
> Gerrit-Owner: Berk Geveci <berk.geveci@kitware.com>
> Gerrit-Reviewer: Berk Geveci <berk.geveci@kitware.com>
> Gerrit-Reviewer: Kitware Robot <kwrobot@kitware.com>
> Gerrit-Reviewer: Robert Maynard <robert.maynard@kitware.com>
> Gerrit-Reviewer: Utkarsh Ayachit <utkarsh.ayachit@kitware.com>