<div dir="ltr">Hi Matt,<div><br></div><div>As noted in my previous message, the magnitude of the divergence doesn't seem to be very large under typical usage patterns, so from a practical point of view it might be better simply to bound it, but I haven't done a very careful analysis yet.</div>
<div><br></div><div>The most straightforward way to avoid this sort of thing is to always perform the sum in the same order, regardless of the number of threads. The simplest approach is to accumulate all the constituents into a (stably) ordered data structure and perform the sum at the end. Of course, this incurs both memory and transaction costs, which may not be acceptable. On the other hand, with enough elements you'll want some partitioning anyway for accuracy, but that partitioning is best determined by the data range, not by the number of threads.</div>
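As a minimal sketch of that idea (the names are mine, not the ITK code): bin the contributions into fixed-size, contiguous chunks determined by data index alone, so that neither the per-bin sums nor the final reduction ever depend on the thread count.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical sketch: contributions land in bins by data index, not by
// which thread produced them.  A thread pool could process whole bins
// concurrently, but every per-bin sum and the final reduction always run
// in the same order, so the result is identical for any thread count.
double DeterministicSum(const std::vector<double>& contributions,
                        std::size_t binSize)
{
  const std::size_t n = contributions.size();
  const std::size_t numBins = (n + binSize - 1) / binSize;
  std::vector<double> bins(numBins, 0.0);

  // Each bin sums its own contiguous range in index order; this loop is
  // the part that could be farmed out to threads, whole bins at a time.
  for (std::size_t b = 0; b < numBins; ++b)
  {
    const std::size_t end = std::min(n, (b + 1) * binSize);
    for (std::size_t i = b * binSize; i < end; ++i)
    {
      bins[b] += contributions[i];
    }
  }

  // Fixed-order final reduction, independent of scheduling.
  double sum = 0.0;
  for (double partial : bins)
  {
    sum += partial;
  }
  return sum;
}
```

Here the bin size would be chosen from the data range (e.g. to keep partial sums of comparable magnitude), never from the number of worker threads.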
<div><br></div><div>Most of the other methods I've used to avoid or mitigate this in the past require a detailed analysis of the local computation, bounding the error at each step so that the final result can be safely truncated to known-good digits; regardless of summation order, the final estimate is then identical. You can sometimes avoid any speed or space cost this way, but it's fiddly work and has to be re-evaluated after any change.</div>
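A sketch of the truncation end of that approach, assuming a bound on the accumulated error has already been derived (the function name and interface are illustrative, not from ITK):

```cpp
#include <cmath>

// Round 'value' so that only digits guaranteed to agree across all
// summation orders survive; 'errorBound' is the precomputed worst-case
// divergence.  Results truncated this way compare bit-identically across
// thread counts, at the cost of genuine precision.
double TruncateToReliableDigits(double value, double errorBound)
{
  if (errorBound <= 0.0)
  {
    return value;
  }
  // Scale so one unit in the last kept decimal place exceeds the bound.
  const double scale = std::pow(10.0, -std::ceil(std::log10(errorBound)));
  return std::round(value * scale) / scale;
}
```

The usual caveat applies: two results that straddle a rounding boundary can still truncate differently, which is one reason this mitigation is unsatisfying on its own.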
<div><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Sun, Mar 30, 2014 at 10:07 AM, Matt McCormick <span dir="ltr"><<a href="mailto:matt.mccormick@kitware.com" target="_blank">matt.mccormick@kitware.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Simon,<br>
<br>
Thanks for taking a look.<br>
<br>
Yes, your assessment is correct. What is a strategy that would avoid<br>
this, though?<br>
<br>
More eyes on the optimizers are greatly welcome!<br>
<br>
Thanks,<br>
Matt<br>
<div class="HOEnZb"><div class="h5"><br>
On Fri, Mar 28, 2014 at 4:57 PM, Simon Alexander <<a href="mailto:skalexander@gmail.com">skalexander@gmail.com</a>> wrote:<br>
> There is a lot going on here, and I'm not certain that I've got all the<br>
> moving pieces straight in my mind yet, but I've had a quick look at the<br>
> implementation now. I believe the Mattes v4 implementation is similar to<br>
> other metrics in its approach.<br>
><br>
> As I suggested earlier in the thread: I believe accumulations like this:<br>
><br>
>> for( ThreadIdType threadID = 1; threadID < this->GetNumberOfThreadsUsed(); threadID++ )<br>
>>   {<br>
>>   this->m_ThreaderJointPDFSum[0] += this->m_ThreaderJointPDFSum[threadID];<br>
>>   }<br>
><br>
><br>
> will guarantee that we won't have absolutely consistent results between<br>
> different thread counts, due to lack of associativity.<br>
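For readers who want to see the effect in isolation, a tiny standalone illustration of the non-associativity being described here:

```cpp
// Difference between the two groupings of a + b + c; zero only when
// floating-point addition happens to associate for these operands.
double AssociativityGap(double a, double b, double c)
{
  return ((a + b) + c) - (a + (b + c));
}
```

For example, AssociativityGap(1e16, -1e16, 1.0) returns 1.0: the 1.0 survives in one grouping and is rounded away in the other, which is exactly the kind of discrepancy a thread-count-dependent accumulation order produces.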
><br>
> When I perform only transform initialization and a single evaluation of the<br>
> metric (i.e. outside of the registration routines), I get results consistent<br>
> with this. For example, a center-of-mass initialization between<br>
> two MR image volumes gives me (double precision):<br>
><br>
> 1 thread : -0.396771472451519<br>
> 2 threads: -0.396771472450998<br>
> 8 threads: -0.396771472451149<br>
><br>
> for the metric evaluation (i.e. via GetValue() of the metric)<br>
><br>
> AFAICS, this is a consistent magnitude of delta with the above. It means<br>
> no chance of binary equivalence between different thread counts/partitionings,<br>
> but you can do this accumulation quite a few times before the accumulated<br>
> divergence gets into digits worth worrying about. This sort of thing is<br>
> avoidable, but at some space/speed cost.<br>
><br>
> However, in the registration for this case it takes only about twenty steps<br>
> for the metric estimates to diverge in the third significant digit! (via<br>
> registration->GetOptimizer()->GetCurrentMetricValue() )<br>
><br>
> Clearly the optimizer is not following the same path, so I think something<br>
> else must be going on.<br>
><br>
> So at this point I don't think the data partitioning of the metric is the<br>
> root cause, but I will have a more careful look later.<br>
><br>
> Any holes in this analysis you can see so far?<br>
><br>
> When I have time to get back into this, I plan to have a look at the<br>
> optimizer next, unless you have better suggestions of where to look next.<br>
><br>
> cheers,<br>
> Simon<br>
><br>
><br>
><br>
> On Wed, Mar 19, 2014 at 12:56 PM, Simon Alexander <<a href="mailto:skalexander@gmail.com">skalexander@gmail.com</a>><br>
> wrote:<br>
>><br>
>> Brian, my apologies for the typo.<br>
>><br>
>> I assume you all are at least as busy as I am; just didn't want to leave<br>
>> the impression that I would definitely be able to pursue this, but I will<br>
>> try.<br>
>><br>
>><br>
>> On Wed, Mar 19, 2014 at 12:45 PM, brian avants <<a href="mailto:stnava@gmail.com">stnava@gmail.com</a>> wrote:<br>
>>><br>
>>> it's brian - and, yes, we all have "copious free time" of course.<br>
>>><br>
>>><br>
>>> brian<br>
>>><br>
>>><br>
>>><br>
>>><br>
>>> On Wed, Mar 19, 2014 at 12:43 PM, Simon Alexander <<a href="mailto:skalexander@gmail.com">skalexander@gmail.com</a>><br>
>>> wrote:<br>
>>>><br>
>>>> Thanks for the summary Brain.<br>
>>>><br>
>>>> A lot of partitioning issues fundamentally come down to the lack of<br>
>>>> associativity & distributivity of fp operations. Not sure I can do<br>
>>>> anything practical to improve it but I will have a look if I can find a bit<br>
>>>> of my "copious free time" .<br>
>>>><br>
>>>><br>
>>>> On Wed, Mar 19, 2014 at 12:29 PM, brian avants <<a href="mailto:stnava@gmail.com">stnava@gmail.com</a>> wrote:<br>
>>>>><br>
>>>>> yes - i understand.<br>
>>>>><br>
>>>>> * matt mccormick implemented compensated summation to address this - it<br>
>>>>> helps but is not a full fix<br>
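[Editor's note: the compensated summation mentioned above is typically Kahan's algorithm; a generic sketch, not the actual ITK implementation:]

```cpp
#include <vector>

// Kahan (compensated) summation: a running correction term recaptures
// most of the low-order bits lost by each addition.  It substantially
// shrinks order-dependent error but, as noted, is not a full fix.
double KahanSum(const std::vector<double>& values)
{
  double sum = 0.0;
  double compensation = 0.0;  // accumulated lost low-order bits
  for (double v : values)
  {
    const double corrected = v - compensation;
    const double next = sum + corrected;
    // (next - sum) is what was actually added; the remainder was lost.
    compensation = (next - sum) - corrected;
    sum = next;
  }
  return sum;
}
```

For example, summing {1e16, 1.0, 1.0} naively yields 1e16 (both 1.0s are rounded away), while KahanSum recovers 1e16 + 2.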
>>>>><br>
>>>>> * truncating floating point precision greatly reduces the effect you<br>
>>>>> are talking about but is unsatisfactory to most people ... not sure if the<br>
>>>>> functionality for that truncation was taken out of the v4 metrics but it was<br>
>>>>> in there at one point.<br>
>>>>><br>
>>>>> * there may be a small and undiscovered bug that contributes to this in<br>
>>>>> mattes specifically but i dont think that's the issue. we saw this effect<br>
>>>>> even in mean squares. if there is a bug it may be beyond just mattes. we<br>
>>>>> cannot disprove that there is a bug. if anyone knows of a way to do that, let<br>
>>>>> me know.<br>
>>>>><br>
>>>>> * any help is appreciated<br>
>>>>><br>
>>>>><br>
>>>>> brian<br>
>>>>><br>
>>>>><br>
>>>>><br>
>>>>><br>
>>>>> On Wed, Mar 19, 2014 at 12:24 PM, Simon Alexander<br>
>>>>> <<a href="mailto:skalexander@gmail.com">skalexander@gmail.com</a>> wrote:<br>
>>>>>><br>
>>>>>> Brain,<br>
>>>>>><br>
>>>>>> I could have sworn I had initially added a follow-up email clarifying<br>
>>>>>> this but since I can't find it in the current quoted exchange, let me<br>
>>>>>> reiterate:<br>
>>>>>><br>
>>>>>> This is not a case of different results on different systems.<br>
>>>>>> This is a case of different results on the same system if you use a<br>
>>>>>> different number of threads.<br>
>>>>>><br>
>>>>>> So while that could possibly be some odd intrinsics issue, for<br>
>>>>>> example, it is far more likely that data partitioning is not being<br>
>>>>>> handled in a way that ensures consistency.<br>
>>>>>><br>
>>>>>> Originally I was also seeing inter-system differences due to internal<br>
>>>>>> precision, but that was a separate issue and has been solved.<br>
>>>>>><br>
>>>>>> Hope that is more clear!<br>
>>>>>><br>
>>>>>><br>
>>>>>><br>
>>>>>> On Wed, Mar 19, 2014 at 12:13 PM, Simon Alexander<br>
>>>>>> <<a href="mailto:skalexander@gmail.com">skalexander@gmail.com</a>> wrote:<br>
>>>>>>><br>
>>>>>>> Brian,<br>
>>>>>>><br>
>>>>>>> Do you mean the generality of my AVX internal precision problem?<br>
>>>>>>><br>
>>>>>>> I agree that is a very common issue; the surprising thing there was<br>
>>>>>>> that we were already constraining the code generation in a way that worked<br>
>>>>>>> across the different processor generations and types we used, up until we hit<br>
>>>>>>> the first Haswell CPUs with AVX2 support (even though no AVX2 instructions<br>
>>>>>>> were generated). Perhaps it shouldn't have surprised me, but it took me a<br>
>>>>>>> few tests to work that out because the problem was confounded with the<br>
>>>>>>> problem I discuss in this thread (which is unrelated). Once I separated<br>
>>>>>>> them it was easy to spot.<br>
>>>>>>><br>
>>>>>>> So that is a solved issue for now, but I am still interested in the<br>
>>>>>>> partitioning issue in the image metric, as I only have a workaround for<br>
>>>>>>> now.<br>
>>>>>>><br>
>>>>>>><br>
>>>>>>><br>
>>>>>>> On Wed, Mar 19, 2014 at 11:24 AM, brian avants <<a href="mailto:stnava@gmail.com">stnava@gmail.com</a>><br>
>>>>>>> wrote:<br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> <a href="http://software.intel.com/en-us/articles/consistency-of-floating-point-results-using-the-intel-compiler" target="_blank">http://software.intel.com/en-us/articles/consistency-of-floating-point-results-using-the-intel-compiler</a><br>
>>>>>>>><br>
>>>>>>>> just as an example of the generality of this problem<br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> brian<br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> On Wed, Mar 19, 2014 at 11:22 AM, Simon Alexander<br>
>>>>>>>> <<a href="mailto:skalexander@gmail.com">skalexander@gmail.com</a>> wrote:<br>
>>>>>>>>><br>
>>>>>>>>> Brian, Luis,<br>
>>>>>>>>><br>
>>>>>>>>> Thanks. I have been using Mattes as you suspect.<br>
>>>>>>>>><br>
>>>>>>>>> I don't quite understand how precision is specifically the issue<br>
>>>>>>>>> with # of cores. There are all kinds of issues with precision and order of<br>
>>>>>>>>> operations in numerical analysis, but often data partitioning (i.e. for<br>
>>>>>>>>> concurrency) schemes can be set up so that the actual sums are done the same<br>
>>>>>>>>> way regardless of number of workers, which keeps your final results<br>
>>>>>>>>> identical. Is there some reason this can't be done for the Mattes metric?<br>
>>>>>>>>> I really should look at the implementation to answer that, of course.<br>
>>>>>>>>><br>
>>>>>>>>> Do you have a pointer to earlier discussions? If I can find the<br>
>>>>>>>>> time I'd like to dig into this a bit, but I'm not sure when I'll have the<br>
>>>>>>>>> bandwidth. I've "solved" this currently by constraining the core count.<br>
>>>>>>>>><br>
>>>>>>>>> Perhaps interestingly, my earlier experiments were confounded a bit<br>
>>>>>>>>> by a precision issue, but that had to do with intrinsics generation on my<br>
>>>>>>>>> compiler behaving differently on systems with AVX2 (even though only AVX<br>
>>>>>>>>> intrinsics were being generated). So that made things confusing at first<br>
>>>>>>>>> until I separated the issues.<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>> On Wed, Mar 19, 2014 at 9:49 AM, brian avants <<a href="mailto:stnava@gmail.com">stnava@gmail.com</a>><br>
>>>>>>>>> wrote:<br>
>>>>>>>>>><br>
>>>>>>>>>> yes - we had several discussions about this during v4 development.<br>
>>>>>>>>>><br>
>>>>>>>>>> experiments showed that differences are due to precision.<br>
>>>>>>>>>><br>
>>>>>>>>>> one solution was to truncate precision to the point that is<br>
>>>>>>>>>> reliable.<br>
>>>>>>>>>><br>
>>>>>>>>>> but there are problems with that too. last i checked, this was an<br>
>>>>>>>>>> open problem, in general, in computer science.<br>
>>>>>>>>>><br>
>>>>>>>>>><br>
>>>>>>>>>> brian<br>
>>>>>>>>>><br>
>>>>>>>>>><br>
>>>>>>>>>><br>
>>>>>>>>>><br>
>>>>>>>>>> On Wed, Mar 19, 2014 at 9:16 AM, Luis Ibanez<br>
>>>>>>>>>> <<a href="mailto:luis.ibanez@kitware.com">luis.ibanez@kitware.com</a>> wrote:<br>
>>>>>>>>>>><br>
>>>>>>>>>>> Hi Simon,<br>
>>>>>>>>>>><br>
>>>>>>>>>>> We are aware of some multi-threading related issues in<br>
>>>>>>>>>>> the registration process that result in metric values changing<br>
>>>>>>>>>>> depending on the number of cores used.<br>
>>>>>>>>>>><br>
>>>>>>>>>>> Are you using the MattesMutualInformationMetric ?<br>
>>>>>>>>>>><br>
>>>>>>>>>>> At some point it was suspected that the problem was the<br>
>>>>>>>>>>> result of cumulative rounding in the contributions that<br>
>>>>>>>>>>> each pixel makes to the metric value.... this may or may<br>
>>>>>>>>>>> not be related to what you are observing.<br>
>>>>>>>>>>><br>
>>>>>>>>>>><br>
>>>>>>>>>>> Thanks<br>
>>>>>>>>>>><br>
>>>>>>>>>>> Luis<br>
>>>>>>>>>>><br>
>>>>>>>>>>><br>
>>>>>>>>>>><br>
>>>>>>>>>>> On Thu, Feb 20, 2014 at 3:27 PM, Simon Alexander<br>
>>>>>>>>>>> <<a href="mailto:skalexander@gmail.com">skalexander@gmail.com</a>> wrote:<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> I've been finding some regressions in registration results when<br>
>>>>>>>>>>>> using systems with different numbers of cores (so the thread count is<br>
>>>>>>>>>>>> different). This is resolved by fixing the global maximum thread count.<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> It's difficult for me to run the identical code against<br>
>>>>>>>>>>>> 4.4.2, but similar experiments were run in that timeframe without these<br>
>>>>>>>>>>>> regressions.<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> I recall that there were changes affecting multithreading in the<br>
>>>>>>>>>>>> v4 registration in 4.5.0 release, so I thought this might be a side effect.<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> So a few questions:<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> Is this behaviour expected?<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> Am I correct that this was not the behaviour in 4.4.x ?<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> Does anyone who has a feel for the recent changes 4.4.2 -><br>
>>>>>>>>>>>> 4.5.[0,1] have a good idea where to start looking? I haven't yet dug into<br>
>>>>>>>>>>>> the multithreading architecture, but this "smells" like a data partitioning<br>
>>>>>>>>>>>> issue to me.<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> Any other thoughts?<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> cheers,<br>
>>>>>>>>>>>> Simon<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> _______________________________________________<br>
>>>>>>>>>>>> Powered by <a href="http://www.kitware.com" target="_blank">www.kitware.com</a><br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> Visit other Kitware open-source projects at<br>
>>>>>>>>>>>> <a href="http://www.kitware.com/opensource/opensource.html" target="_blank">http://www.kitware.com/opensource/opensource.html</a><br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> Kitware offers ITK Training Courses, for more information visit:<br>
>>>>>>>>>>>> <a href="http://kitware.com/products/protraining.php" target="_blank">http://kitware.com/products/protraining.php</a><br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> Please keep messages on-topic and check the ITK FAQ at:<br>
>>>>>>>>>>>> <a href="http://www.itk.org/Wiki/ITK_FAQ" target="_blank">http://www.itk.org/Wiki/ITK_FAQ</a><br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> Follow this link to subscribe/unsubscribe:<br>
>>>>>>>>>>>> <a href="http://www.itk.org/mailman/listinfo/insight-developers" target="_blank">http://www.itk.org/mailman/listinfo/insight-developers</a><br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> _______________________________________________<br>
>>>>>>>>>>>> Community mailing list<br>
>>>>>>>>>>>> <a href="mailto:Community@itk.org">Community@itk.org</a><br>
>>>>>>>>>>>> <a href="http://public.kitware.com/cgi-bin/mailman/listinfo/community" target="_blank">http://public.kitware.com/cgi-bin/mailman/listinfo/community</a><br>
>>>>>>>>>>>><br>
>>>>>>>>>>><br>
>>>>>>>>>>><br>
>>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>><br>
>>>>>>><br>
>>>>>><br>
>>>>><br>
>>>><br>
>>><br>
>><br>
><br>
><br>
><br>
</div></div></blockquote></div><br></div>