I have spent some time today trying to sort out a bottleneck in my code and have been using timef() from the portability library.
I have both Linux (Ubuntu 16) and Windows (7) versions of the compiler, both 2017 Update 5. There is only one version of the code, and the compiler switches for the two OSes are essentially identical.
It appears to me that the Linux version is rounding to an integer. Here is a snippet of the debug output for Linux:
kgsave= 2.000000 move to record= 0.000000 sizegrid= 1.000000 kgload= 1.000000
kgsave= 2.000000 move to record= 0.000000 sizegrid= 0.000000 kgload= 2.000000
kgsave= 1.000000 move to record= 0.000000 sizegrid= 1.000000 kgload= 1.000000
kgsave= 1.000000 move to record= 0.000000 sizegrid= 1.000000 kgload= 1.000000
and here is the same thing for Windows (a virtual machine on the Linux box):
kgsave= 0.8750000 move to record= 0.000000 sizegrid= 0.4687500E-01 kgload= 0.7343750
kgsave= 0.7656250 move to record= 0.000000 sizegrid= 0.4687500E-01 kgload= 0.4062500
kgsave= 0.7968750 move to record= 0.000000 sizegrid= 0.3125000E-01 kgload= 0.5312500
kgsave= 0.8437500 move to record= 0.000000 sizegrid= 0.6250000E-01 kgload= 0.7343750
There are some efficiencies in the Windows libraries calling the Win API directly, versus the Xorg/Motif calls on Linux, which explains the faster times but not the difference in precision.
The numbers were created simply with timer(1) = timef() before the call to the subroutine and timer(2) = timef() after it, then printing timer(2) - timer(1). timer is declared as an array of kind selected_real_kind(15).
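In case it helps, here is a minimal sketch of the pattern, assuming timef() comes from the ifport module (as with Intel Fortran); do_work() is just a hypothetical stand-in for the real subroutine being timed:

program time_test
  use ifport, only: timef        ! Intel portability library (assumed source of timef)
  implicit none
  integer, parameter :: dp = selected_real_kind(15)
  real(kind=dp) :: timer(2)

  timer(1) = timef()             ! elapsed time before the call
  call do_work()                 ! stand-in for the subroutine being timed
  timer(2) = timef()             ! elapsed time after the call
  print *, 'elapsed =', timer(2) - timer(1)

contains

  subroutine do_work()
    ! placeholder workload so the sketch is runnable
    integer :: i
    real(kind=dp) :: s
    s = 0.0_dp
    do i = 1, 10000000
      s = s + sqrt(real(i, dp))
    end do
    print *, 'checksum =', s     ! keeps the loop from being optimized away
  end subroutine do_work

end program time_test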
Am I doing something wrong or is this problem real?
Cheers
Kim