Years ago I ran a service where moderators could perform certain actions with significant privacy implications, but only if the account or contribution in question was less than a certain age. I implemented this by comparing the stored timestamp against the current Unix epoch time, allowing a window of X hours/days. Normally this worked fine.
One day, the server it was hosted on was "taken offline" in the data center I rented it from, according to the hosting company. When it came back up, its clock had been reset to its default, many years in the past.
This meant all of my moderators could see the full history of every account and contribution on my service until I came back, noticed the wrong time (which I might never have done!), and resynchronized it. After that, I hard-coded a timestamp in the code that the current time had to exceed, otherwise the service would go into "offline mode", to prevent any potential disaster like this from happening again. I also set up some kind of automatic time-synchronization mechanism (this was on FreeBSD).
You would think that by now, not only would every computer be automatically synchronized by default, with tons of fallback mechanisms so that it could never, ever end up with a clock that isn't perfectly in sync with "real" time, at least to the second if not more precisely; it would also be impossible, or at least extremely difficult, to set the clock to anything other than the current time, even if you actively tried.
I don't remember my Windows computer's clock ever being off in the past several years. However, I do significant logging of system events on it. Should I just assume the operating system keeps the time correct at all times? Or should I run some sort of time-synchronization check myself? Like a free HTTPS API, where I do a lookup every minute and force the system clock to whatever it returns? Or should I just let it go and assume this is "taken care of"/solved?