millis_t is a long - divisions take forever.
Return a kind of millisecond instead of a microsecond,
dividing by 1024 instead of 1000 for speed (2.4% error).
That does not matter because block_buffer_runtime() is
already an underestimate.
Shrink the return type.
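
Roughly, the idea looks like this (a minimal sketch, not the actual diff; the name of the µs accumulator, `block_buffer_runtime_us`, is assumed here):

```cpp
#include <stdint.h>

static uint32_t block_buffer_runtime_us; // µs of moves left in the planner (assumed name)

// Return "kind of" milliseconds: dividing by 1024 is a cheap shift,
// dividing by 1000 is a slow long division. Result is ~2.4% too small,
// which is fine because the estimate errs short anyway.
static uint16_t block_buffer_runtime() {
  uint32_t bbru = block_buffer_runtime_us >> 10;
  // Shrunk return type: saturate instead of wrapping around.
  if (bbru > 0xFFFF) bbru = 0xFFFF;
  return (uint16_t)bbru;
}
```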
The target here is to update the screens of graphical and character-based
displays as fast as possible, without draining the planner buffer too much.
For that, measure the time it takes to draw and transfer one
(partial) screen to the display, and keep a maximum of these values.
Because there can be large differences, depending on how much the display
updates are interrupted, the maximum is decreased by one ms per second. This way
it can shrink again.
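
A sketch of that decaying maximum (the names are illustrative, not necessarily the ones used in the code):

```cpp
#include <stdint.h>

static uint16_t max_display_update_time; // worst draw time seen so far, in ms

// Called with the measured time of each (partial) screen draw:
void record_draw_time(uint16_t elapsed_ms) {
  if (elapsed_ms > max_display_update_time)
    max_display_update_time = elapsed_ms; // grow to the new maximum
}

// Called once per second: let the maximum shrink by 1 ms/s, so a
// single heavily-interrupted update doesn't pin the value forever.
void decay_max_update_time() {
  if (max_display_update_time) max_display_update_time--;
}
```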
On the other side, we keep track of how much time it takes to empty the
planner buffer.
Now we draw the next (partial) display update only when we do not
drain the planner buffer too much: we draw only when the time in the
buffer is at least twice as long as an update takes, or when the buffer is empty anyway.
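
Combining the two sketches above, that gate could look like this (illustrative only):

```cpp
// Draw only when there is enough slack in the planner buffer.
bool may_draw_now() {
  const uint16_t bbr = block_buffer_runtime();   // pseudo-ms left in the buffer
  return bbr == 0                                // buffer empty anyway, or
      || bbr > 2UL * max_display_update_time;    // at least 2x an update's time left
}
```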
When we have begun to draw a screen, we do not wait until the next 100 ms
time slot comes. We draw the next partial screen as fast as possible, but
give the system a chance to refill the buffers a bit.
When we see, while drawing a screen, that the screen content has changed,
we stop the current draw and begin to draw the new content from the top.
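
Putting the scheduling together, a sketch under assumptions: only the 100 ms slot comes from the description above; the 2 ms refill gap and all helper names are made up for illustration.

```cpp
#include <stdint.h>
typedef uint32_t millis_t;

millis_t millis();            // Arduino-style clock
bool screen_changed();        // content differs from what is being drawn (assumed helper)
void lcd_goto_first_page();   // restart drawing from the top (assumed helper)
bool draw_one_page();         // draw one partial screen; true while pages remain

static millis_t next_lcd_update_ms;
static bool drawing_screen;

void lcd_task() {
  const millis_t ms = millis();
  if (drawing_screen && screen_changed()) {
    drawing_screen = false;   // abandon the half-drawn screen...
    lcd_goto_first_page();    // ...and start the new content from the top
  }
  if (drawing_screen || (int32_t)(ms - next_lcd_update_ms) >= 0) {
    drawing_screen = draw_one_page();
    // Between partial screens, don't wait for the next 100 ms slot; just
    // leave a small gap so the system can refill the buffers a bit.
    next_lcd_update_ms = ms + (drawing_screen ? 2 : 100);
  }
}
```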