Hi, I have a problem with the following test code: when I use the registers to control the speed of execution, everything's OK, but not when I use the low byte of the jiffy clock. I tried to find something in the PRG to solve this, with no luck. Btw: is there any ML mentor out there I could talk to regularly? If so, drop me a PM!
Thanks for your help, from a crazy guy who spends his Saturday nights this way even though he knows what that looks like.
If you want to implement something like a clock, however, this approach always adds the processing time between the delays into the calculation, so the clock lags behind after a short while. In that case, the right way is to put the routine into the interrupt handler and decode the display directly from the jiffy clock, as is done here:
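As a rough illustration of "decoding the display directly from the jiffy clock" (sketched in Python rather than ML, and assuming the 60 Hz jiffy rate mentioned below), the conversion from a raw jiffy count to a displayable time boils down to repeated division by 60:

```python
def jiffies_to_hms(jiffies):
    """Convert a jiffy count (1/60-second ticks) to (hours, minutes, seconds, frames)."""
    seconds, frames = divmod(jiffies, 60)   # 60 jiffies per second
    minutes, seconds = divmod(seconds, 60)
    hours, minutes = divmod(minutes, 60)
    return hours, minutes, seconds, frames

print(jiffies_to_hms(3600))     # one minute worth of jiffies -> (0, 1, 0, 0)
print(jiffies_to_hms(216000))   # one hour worth of jiffies   -> (1, 0, 0, 0)
```

Because the display is derived from the running counter itself on each interrupt, no per-frame processing time accumulates into the shown time.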
@Mike: the first routine is perfect for my current needs, thanks!
Even with that working solution which uses SBC, I'd still like to understand why this new code snippet doesn't work either (the update pace is noticeably non-constant):
The low byte of the jiffy clock is incremented every 1/60 second and wraps around from 255 to 0.
That means that, on entry to the subroutine, the byte is either in the range 0 to 59, in which case the routine waits the remaining time until the byte reaches 60 (a delay that already isn't constant), or the byte is in the range 60 to 255, in which case the routine exits immediately.
For this reason, my routine stores the jiffy byte on entry and then subtracts that stored value from the running jiffy byte until the difference reaches 60. Thanks to the 8-bit subtraction, this also works correctly across a wrap-around.
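The wrap-safe subtraction described above can be modelled in Python: an 8-bit SEC/SBC on the 6502 effectively computes (now - start) mod 256, so comparing that difference against 60 works even when the jiffy byte wraps from 255 to 0 (the 60-tick delay and 1/60-second tick rate are taken from the posts above; the function name is just for illustration):

```python
def jiffy_delay_done(start, now):
    """True once at least 60 jiffies have elapsed since 'start'.

    ((now - start) & 0xFF) mimics the 8-bit result of
    SEC / LDA now / SBC start on the 6502, so the comparison
    stays correct across a 255 -> 0 wrap-around.
    """
    return ((now - start) & 0xFF) >= 60

# Wrap-around case: started at 230, clock has wrapped to 34,
# i.e. (34 - 230) mod 256 = 60 ticks have elapsed.
print(jiffy_delay_done(230, 34))   # True
print(jiffy_delay_done(230, 33))   # False (only 59 ticks elapsed)
```

A fixed comparison like `now >= 60` fails precisely because `start` can be anything from 0 to 255 on entry; only the modular difference measures elapsed ticks.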
I think that's because I can't use an absolute value for the comparison, since we don't know how much time has passed since the last update of the low byte of the jiffy clock. (EDIT: I meant I don't know the value in the byte when our code executes. Thanks again, Mike, for your much better explanation.)
Last edited by ldxstx on Sun Feb 24, 2013 7:45 am, edited 1 time in total.