A Databricks job writes one delta-parquet file per minute into an Azure Storage Account (ADLS2), partitioned by event_date + event_hour.
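Simplified, the writing side looks roughly like this (a sketch only: `events_df`, the path, the checkpoint location and the trigger are illustrative, not the real job):

```python
from pyspark.sql import functions as F

# events_df is the incoming stream of (color, brand, type, timestamp) rows.
(events_df
    .withColumn("event_date", F.to_date("timestamp"))
    .withColumn("event_hour", F.hour("timestamp"))
    .writeStream
    .format("delta")
    .partitionBy("event_date", "event_hour")
    .option("checkpointLocation", "abfss://data@myaccount.dfs.core.windows.net/_chk/events")
    .trigger(processingTime="1 minute")
    .start("abfss://data@myaccount.dfs.core.windows.net/events")  # placeholder path
)
```

The resulting folder layout is the usual Hive-style `.../events/event_date=YYYY-MM-DD/event_hour=HH/part-*.parquet`.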

This 1-minute frequency is just an average: sometimes 10 files are written in a minute, and sometimes several minutes pass without any new delta-parquet file. In the future, the frequency may become much higher (even 1 file per second).
We receive about 10K rows per second with 4 columns: color, brand, type and timestamp. So, assuming one file per minute, each file contains about 600,000 rows.
What's the best architecture to build a Power BI real-time dashboard with a line-chart visual that shows, every 10 seconds (based on the timestamp field), the count of rows received in the previous 5 minutes, split by the three fields color/brand/type? The chart should cover only the last hour (again based on the timestamp field) - a classic hopping window.
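Expressed as PySpark over a DataFrame `events_last_hour` that already holds just the last hour of rows (illustrative names; how to obtain that DataFrame efficiently is exactly my question), the computation would be roughly:

```python
from pyspark.sql import functions as F

# Hopping window: 5-minute windows that advance every 10 seconds,
# counted per color/brand/type, based purely on the event timestamp.
hopping_counts = (
    events_last_hour
    .groupBy(
        F.window("timestamp", "5 minutes", "10 seconds").alias("win"),
        "color", "brand", "type",
    )
    .count()
    .select(F.col("win.end").alias("window_end"), "color", "brand", "type", "count")
    .orderBy("window_end")
)
```

Since the window is 5 minutes long and slides every 10 seconds, each timestamp contributes to 30 overlapping windows; the line chart would plot `count` by `window_end` per color/brand/type.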
I expect the candidate options are ASA (Azure Stream Analytics), Databricks, Power BI streaming dataflows, and even the Microsoft Fabric Eventstream feature. As far as I can see, a major challenge is streaming out of ADLS2 (which contains trillions of records, partitioned by date/hour) just the last hour, so a good approach might be to select only the delta-parquet files written in the previous 60 minutes. Another idea would be to modify the Databricks job that currently writes the delta-parquet files into ADLS2 so that it also sends the data straight to Power BI, but the dashboard won't be running 24x7, so I'm afraid a lot of data would be sent to Power BI for nothing, since only the last 60 minutes should ever be shown.
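To make the "select just the delta-parquet files written in the previous 60 minutes" idea concrete, this is the kind of partition-pruned read I have in mind (a PySpark sketch; the path is a placeholder and I'm assuming event_date/event_hour are stored in UTC):

```python
from datetime import datetime, timedelta, timezone
from pyspark.sql import functions as F

# Restrict the scan to the current and previous hour partitions, so only a
# couple of folders are touched instead of the whole multi-trillion-row table.
now = datetime.now(timezone.utc)
hours = [now - timedelta(hours=1), now]
partition_filter = " OR ".join(
    f"(event_date = '{h:%Y-%m-%d}' AND event_hour = {h.hour})" for h in hours
)

events_last_hour = (
    spark.read.format("delta")
    .load("abfss://data@myaccount.dfs.core.windows.net/events")  # placeholder path
    .where(partition_filter)                                      # partition pruning
    .where(F.col("timestamp") >= F.expr("current_timestamp() - INTERVAL 1 HOUR"))
)
```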
The first thing to try is querying the Delta table directly from ADLS using either Synapse Serverless or Fabric SQL (preview). Your data is partitioned by day and hour, so if you make sure your queries filter on those columns to read only the last 2 hours, the performance might be acceptable.
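As a rough illustration (the server name, auth mode and storage path below are placeholders, not tested against your workspace), the query shape against Synapse Serverless would look something like this, here run from Python via pyodbc; in practice you would wrap the SELECT in a view in the serverless database and point Power BI at that view with DirectQuery:

```python
import pyodbc

# Illustrative connection to a Synapse serverless SQL endpoint; replace the
# server, credentials and storage path with your own.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=myworkspace-ondemand.sql.azuresynapse.net;"
    "Database=master;"
    "Authentication=ActiveDirectoryInteractive;"
    "UID=me@contoso.com;"
)

# OPENROWSET ... FORMAT='DELTA' reads the Delta table in place; filtering on
# the partition column event_date lets serverless skip most day-folders
# before the row-level timestamp filter is applied.
query = """
SELECT color, brand, type, [timestamp]
FROM OPENROWSET(
        BULK 'https://myaccount.dfs.core.windows.net/data/events/',
        FORMAT = 'DELTA'
     ) AS e
WHERE e.event_date >= CAST(DATEADD(HOUR, -2, GETUTCDATE()) AS date)
  AND e.[timestamp] >= DATEADD(HOUR, -2, GETUTCDATE())
"""

rows = conn.execute(query).fetchall()
```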