main.c is now out of the timer business (except for "ticking" cycles...
since there really is no other way to guarantee something runs exactly
once per scan cycle)
i was about to start handling the scheduling for cycle-counted events the
same way as for real-time scheduled events, but i think i'll switch over
to using `<util/atomic.h>`, if i can
keyboard scanning at almost exactly 200Hz (or maybe exactly... as close
as i can measure)
possible to schedule things to run in 'n' scan cycles! haven't tested
scheduling things to run in 'n' milliseconds, but the code is so similar
that it should work too. :D
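a minimal sketch of how such a countdown scheduler can work, in plain C.
the names, the fixed-size event table, and the details here are my own
assumptions, not the firmware's actual implementation; `tick()` would be
called once per scan cycle (or once per millisecond, for the real-time
variant), and counts must be at least 1:

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_EVENTS 8

typedef void (*event_fn_t)(void);

static struct {
    uint16_t   ticks;  // ticks remaining before the event fires
    event_fn_t fn;     // NULL = slot free
} events[MAX_EVENTS];

// schedule `fn` to run in `ticks` ticks (ticks must be >= 1);
// returns 0 on success, -1 if the table is full
static int8_t schedule(uint16_t ticks, event_fn_t fn) {
    for (uint8_t i = 0; i < MAX_EVENTS; i++)
        if (events[i].fn == NULL) {
            events[i].ticks = ticks;
            events[i].fn    = fn;
            return 0;
        }
    return -1;
}

// call once per scan cycle (or once per millisecond)
static void tick(void) {
    for (uint8_t i = 0; i < MAX_EVENTS; i++)
        if (events[i].fn != NULL && --events[i].ticks == 0) {
            event_fn_t fn = events[i].fn;
            events[i].fn = NULL;  // free the slot before calling,
            fn();                 // so `fn` may reschedule itself
        }
}
```

whether it counts scan cycles or milliseconds only depends on what calls
`tick()`, which is why the two kinds of scheduling end up so similar.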
what was happening was, the bottommost led would indicate keyboard
startup, but wouldn't turn on after that. turns out, i had been setting
timer/counter 0 to output on that pin, unnecessarily - because i didn't
understand how to set up timers o_o . complicated little things.
main() no longer waits an extra millisecond between scan cycles, just to
be safe. before, i was thinking that perhaps the
timer__get_milliseconds() call might catch the tail end of whatever
millisecond it was getting, and then the scan cycle would be anywhere
between 4ms and 5ms. but since we wait for our OPT__DEBOUNCE_TIME to be
up *directly before* getting the new time, we shouldn't be catching the
tail end of anything. we should be getting as close as practically
possible to *exactly* OPT__DEBOUNCE_TIME milliseconds.
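that reasoning also relies on computing elapsed time in a wrap-safe way:
with unsigned 16-bit millisecond counters, plain subtraction does the
right thing even across counter overflow. a small sketch (the function
name is mine, not the firmware's):

```c
#include <stdint.h>

// true iff at least `interval` ms have passed between `start` and `now`.
// unsigned subtraction wraps, so this stays correct across counter
// overflow, as long as the real interval is under 2^16 ms (~65 s).
static uint8_t elapsed_at_least(uint16_t start, uint16_t now,
                                uint16_t interval) {
    return (uint16_t)(now - start) >= interval;
}

// hypothetical busy-wait, taking the new time directly before checking:
//
//     while (!elapsed_at_least(last_scan, timer__get_milliseconds(),
//                              OPT__DEBOUNCE_TIME));
```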
the timer__ functions now use and return 16-bit values instead of 32-bit
ones. the rationale is that 16-bit counters for milliseconds are good up
to ~1 minute, which should be enough for most things. if a greater
length of time is needed, a function can reschedule itself, and only
execute its body code after being called a certain number of times.
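the reschedule-itself idea can be sketched like this (the names and the
10-call count are made up, and the reschedule call is left as a comment
since it depends on the scheduler API):

```c
#include <stdint.h>

static uint16_t calls_left = 10;  // e.g. 10 * 60000 ms = 10 minutes
static uint8_t  body_ran   = 0;

// a function scheduled to run every 60000 ms (near the 16-bit limit);
// it reschedules itself until it has been called enough times, and only
// then executes its body
static void long_wait(void) {
    if (--calls_left > 0) {
        // reschedule here, e.g.: schedule(60000, &long_wait);
        // (hypothetical scheduler API)
        return;
    }
    body_ran = 1;  // body code goes here, ~10 minutes after first schedule
}
```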
also, i'm considering making a main__ timer, that counts scan cycles
instead of milliseconds. that would have the advantage of having lower
resolution (so, less overhead) and being able to schedule functions to
run and such without executing anything in an interrupt vector.
in the while loop in main() that busywaits until we can scan again, the
compiler seems to have been optimizing out the function call when i
wrote `(uint8_t)timer__get_milliseconds();`. if i cast the whole
expression (not just the function) to `(volatile uint8_t)`, or if i
didn't cast anything at all, it worked. not sure why the compiler would
optimize the function call out like that though, even with the cast...
this happened when i put it in a for loop too. i need to research it
just a little more, and write a warning about it in the timer
documentation.
Also, I just tested the scan rate without debounce (lol, should have
thought of that before) and it's about 471Hz. This is much faster than
the 200Hz (every 5ms) that we need to limit ourselves to due to the
switches needing a 5ms debounce time. It should be trivial to tune the
scan rate to closer to 200Hz once I get lib/timer implemented.
one bug left to fix before it's actually doing what it's supposed to
scanning at about 140Hz :D , and only slightly bigger than the old
firmware (though, with many fewer layers compiled in...) (also, the
winavr makefile gets the hex to be smaller somehow; i should probably
look into that)
- did not move any of the layer-stack code into code for a 'flex-array'
type, or something similar. i'm thinking that most of the things the
layer-stack does aren't sufficiently generalizable. for instance, one
of the biggest general functions in the layer-stack implementation is
`_shift_elements()`; but shifting elements is only something you want
to do when you're messing with elements not on the top of the
stack (which breaks the definition of a general stack). so i think
i'll leave things as they are for now. the functionality can always
be split out later if it turns out to be needed elsewhere.
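for reference, the kind of element-shifting in question might look like
this: a sketch with byte-sized elements and names of my own choosing
(the real `_shift_elements()` may differ):

```c
#include <stdint.h>
#include <string.h>

#define STACK_SIZE 8

static uint8_t stack[STACK_SIZE];
static uint8_t top = 0;  // number of elements currently on the stack

// open a hole at `position` by shifting elements above it up one slot
// (caller fills the hole); requires top < STACK_SIZE
static void shift_elements_up(uint8_t position) {
    memmove(&stack[position + 1], &stack[position], top - position);
    top++;
}

// close the hole at `position` by shifting elements above it down one
// slot, effectively removing that element
static void shift_elements_down(uint8_t position) {
    memmove(&stack[position], &stack[position + 1], top - position - 1);
    top--;
}
```

operating on positions other than the top is exactly what a general
stack doesn't do, which is the argument for keeping this code with the
layer-stack rather than in a generic container type.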