While mapping some database columns to Java classes I stumbled onto this obscure SQL-92 Standard type (implemented by PostgreSQL, H2, and HyperSQL, as far as I know). I have never used it, but I wanted to understand how to map it cleanly to a Java type if I ever come across it.
Here are the variants I can see:
Case A: The TIME type, such as 15:20:01. It's a "local time". The time zone is evident to the application, so the database doesn't record it.

Case B: The TIME with offset, as in 15:20:01+04:00. It represents a "world time". This time can be converted trivially to UTC, or to any other world clock (see the java.time sketch after this list).

Case C: A TIME with a time zone, such as 15:20:01 EDT. Since the rules to interpret a time strongly depend on the specific date, I can't really make any sense of it without the date; but then, if I add the date, it becomes a TIMESTAMP, and that's something totally different.
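For context, here is a minimal sketch of how cases A and B map onto java.time: LocalTime for the plain TIME, OffsetTime for the offset variant (with a JDBC 4.2 driver you can usually read such columns via getObject(col, LocalTime.class) / getObject(col, OffsetTime.class)). The literal values are taken from the cases above.

```java
import java.time.LocalTime;
import java.time.OffsetTime;
import java.time.ZoneOffset;

public class TimeMapping {
    public static void main(String[] args) {
        // Case A: TIME -> java.time.LocalTime (no zone information stored)
        LocalTime localTime = LocalTime.parse("15:20:01");

        // Case B: TIME with a fixed offset -> java.time.OffsetTime
        OffsetTime offsetTime = OffsetTime.parse("15:20:01+04:00");

        // The case B conversion to UTC is trivial because the offset is fixed
        OffsetTime utc = offsetTime.withOffsetSameInstant(ZoneOffset.UTC);
        System.out.println(utc); // prints 11:20:01Z

        // Case C has no direct java.time counterpart: a named zone like EDT
        // can't be resolved to an offset without a date.
    }
}
```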
So, did the SQL Standard get it wrong? Or should "TIME with time zone" always be interpreted as "time with offset" (case B)?
For lots of reasons, which you described well, interpreting a point in time with a time of day and a variable time zone but without a date is effectively undefined. There are use cases, though, where you're establishing a policy within an international context, and there this would be a helpful data type. Every day at 15:20:01+04:00 the cats need to take a nap. The intention isn't to evaluate the value in isolation but within the context of adding it to a baseline date. Standards are all about supporting theoretical possibilities, even if they're not super common.
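Here's a minimal sketch of that "add it to a baseline date" step in java.time, reusing the nap-time value from above (the NapPolicy class name is just for illustration): the stored OffsetTime only becomes a point on the timeline once you combine it with a date via atDate.

```java
import java.time.LocalDate;
import java.time.OffsetDateTime;
import java.time.OffsetTime;
import java.time.ZoneOffset;

public class NapPolicy {
    // The recurring policy value, as it might come out of a
    // TIME WITH TIME ZONE column: a time of day plus a fixed offset
    static final OffsetTime NAP_TIME = OffsetTime.parse("15:20:01+04:00");

    public static void main(String[] args) {
        // Combine the policy with a baseline date (today in the +04:00
        // region) to get a concrete, comparable moment
        OffsetDateTime todaysNap = NAP_TIME.atDate(LocalDate.now(ZoneOffset.ofHours(4)));

        // Once anchored to a date, it converts to UTC like any timestamp
        System.out.println("Nap starts at " + todaysNap.withOffsetSameInstant(ZoneOffset.UTC));
    }
}
```

The design point is that the TIME WITH TIME ZONE value itself stays date-free; the application supplies the date at evaluation time.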