How does BigQuery calculate the processed bytes estimate for a query?
I have a table called Table 1, which has 15,339,707 rows and 27 columns (please see the attached diagram for the table schema and details). I am trying to understand how BigQuery arrives at the estimate "This query will process 19.16 GB when run.", so that I can predict the processed bytes myself.

The Google documentation says the processed bytes depend on the data type; for example, a DATETIME value uses 8 logical bytes of storage. In my schema, the column A26 is a DATETIME, so I expect 15,339,707 * 8 = 122,717,656 bytes, which is about 122.7 MB. But when I run SELECT A26 FROM Table 1, BigQuery shows 117.03 MB. The same mismatch appears for the rest of the columns. I have gone through a lot of documentation, and none of it shows the exact calculation. Can someone explain how BigQuery calculates the processed bytes? My data is in the NorthAmerica-NorthEast region.
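For reference, here is the arithmetic I am doing, written out as a short Python sketch. The 8 logical bytes per DATETIME value comes from the data-types page linked below; the binary-megabyte line is only my guess at what the console might be reporting, since dividing my byte count by 2^20 happens to land close to the 117.03 figure shown, but I could not find this confirmed anywhere in the documentation.

```python
# Sketch of my per-column size calculation for A26 (DATETIME).
# Assumes 8 logical bytes per DATETIME value, per the BigQuery
# data-types documentation. The MiB conversion is my own guess
# at how the console might be rounding, not something documented.

ROWS = 15_339_707
BYTES_PER_DATETIME = 8  # logical bytes per DATETIME value

total_bytes = ROWS * BYTES_PER_DATETIME
print(f"total bytes:     {total_bytes:,}")            # 122,717,656

# Decimal megabytes (1 MB = 10**6 bytes) -- what I expected:
print(f"decimal MB:      {total_bytes / 10**6:.2f}")  # 122.72

# Binary megabytes (1 MiB = 2**20 bytes) -- close to the console's 117.03:
print(f"binary MB (MiB): {total_bytes / 2**20:.2f}")  # 117.03
```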
I referred to https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types and https://cloud.google.com/bigquery/docs/best-practices-costs#on-demand-query-size-calculation to calculate the processed bytes. As described above, my calculation does not match the processed bytes that BigQuery reports.