For auditing and optimization purposes, BigQuery allows analyzing TotalBilledBytes per query. That is already useful for identifying the users with the highest consumption. Additionally, it would be helpful to identify the tables with the highest TotalBilledBytes.
Is there any possibility to see such information at a more granular level, such as TotalBilledBytes (or at least ReadBytes) per table accessed in a query? That level of detail would make it possible to see which tables cause the highest costs.
It would also be useful to know how many partitions of each accessed table were read, and which filter conditions were applied while reading the table. Is it possible to extract such information from the query step statistics?
I don't believe it's possible to get to that level of granularity.
When you run a query, it's logged as a 'job'. If a query references multiple tables, you can report on all the tables involved in that job by using INFORMATION_SCHEMA.JOBS (link).
Inside this view there is a repeated record field called 'referenced_tables', whose 'project_id', 'dataset_id', and 'table_id' sub-fields identify every table used in the job. The billed-bytes figure you're after ('total_bytes_billed') only exists at the job level, and isn't broken down per table involved in the query.
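For example, here is a minimal sketch that lists every table each recent query job touched, next to that job's total billed bytes. It assumes your jobs run in the US multi-region; swap `region-us` for the region your datasets actually live in:

```sql
-- One row per (job, referenced table) for query jobs from the last 7 days.
-- Note: total_bytes_billed is the whole job's figure, repeated on every row.
SELECT
  job_id,
  user_email,
  total_bytes_billed,
  ref.project_id,
  ref.dataset_id,
  ref.table_id
FROM `region-us`.INFORMATION_SCHEMA.JOBS,
  UNNEST(referenced_tables) AS ref
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  AND job_type = 'QUERY';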
You could still use this to identify which tables appear most frequently in high-cost queries; a sketch of that analysis is below.
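A rough approach is to attribute each job's full billed bytes to every table it referenced and then rank tables. Again assuming `region-us` (substitute your own region):

```sql
-- Rank tables by how much job-level spend they are associated with.
-- Caveat: a multi-table job's bytes are counted once per referenced table,
-- so this over-attributes cost; treat it as a ranking, not exact per-table cost.
SELECT
  CONCAT(ref.project_id, '.', ref.dataset_id, '.', ref.table_id) AS table_name,
  COUNT(DISTINCT job_id) AS job_count,
  SUM(total_bytes_billed) AS attributed_bytes_billed
FROM `region-us`.INFORMATION_SCHEMA.JOBS,
  UNNEST(referenced_tables) AS ref
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  AND job_type = 'QUERY'
GROUP BY table_name
ORDER BY attributed_bytes_billed DESC
LIMIT 20;
```

Because billing only exists per job, the SUM here is an upper bound for any table that shares jobs with other tables, but it is usually good enough to surface the tables driving the highest-cost queries.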