I have data files landing in a single Azure Blob Storage container every other day or so. Each file has either 8, 16, 24, or 32 columns of data. Column names are unique within a file and consistent across files, i.e. the column names in the 8-column file are always the first 8 column names of the 16-, 24-, and 32-column files. I have the four corresponding tables set up in an Azure SQL database to receive the files. I need to create a pipeline (or pipelines) in Azure Data Factory that will:
- trigger upon the landing of a new file in the blob storage container
- check the # of columns in that file
- use that column count to copy the file from the blob container into the matching Azure SQL table, i.e. the 8-column file copies to the 8-column table, and so on
- delete the file from the container (a rough sketch of the whole flow follows this list)
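
Here is roughly the shape I think the pipeline needs, based on what I've pieced together so far; I haven't gotten this working. Every name in it (pipeline, datasets, SQL sink) is a placeholder I made up, and I've only filled in the 8-column case of the Switch, since the 16/24/32 cases would just repeat with their own sink dataset:

```json
{
    "name": "RouteCsvByColumnCount",
    "properties": {
        "parameters": {
            "fileName": { "type": "String" }
        },
        "activities": [
            {
                "name": "GetColumnCount",
                "description": "Read columnCount from the file that fired the trigger",
                "type": "GetMetadata",
                "typeProperties": {
                    "dataset": {
                        "referenceName": "LandingCsv",
                        "type": "DatasetReference",
                        "parameters": { "fileName": "@pipeline().parameters.fileName" }
                    },
                    "fieldList": [ "columnCount" ]
                }
            },
            {
                "name": "RouteByColumnCount",
                "description": "Branch on 8 / 16 / 24 / 32 columns",
                "type": "Switch",
                "dependsOn": [ { "activity": "GetColumnCount", "dependencyConditions": [ "Succeeded" ] } ],
                "typeProperties": {
                    "on": {
                        "value": "@string(activity('GetColumnCount').output.columnCount)",
                        "type": "Expression"
                    },
                    "cases": [
                        {
                            "value": "8",
                            "activities": [
                                {
                                    "name": "CopyTo8ColumnTable",
                                    "description": "Copy LandingCsv into the 8-column SQL table",
                                    "type": "Copy",
                                    "inputs": [
                                        {
                                            "referenceName": "LandingCsv",
                                            "type": "DatasetReference",
                                            "parameters": { "fileName": "@pipeline().parameters.fileName" }
                                        }
                                    ],
                                    "outputs": [ { "referenceName": "SqlTable8Col", "type": "DatasetReference" } ],
                                    "typeProperties": {
                                        "source": { "type": "DelimitedTextSource" },
                                        "sink": { "type": "AzureSqlSink" }
                                    }
                                }
                            ]
                        }
                    ]
                }
            },
            {
                "name": "DeleteSourceFile",
                "description": "Remove the file from the landing container after the copy succeeds",
                "type": "Delete",
                "dependsOn": [ { "activity": "RouteByColumnCount", "dependencyConditions": [ "Succeeded" ] } ],
                "typeProperties": {
                    "dataset": {
                        "referenceName": "LandingCsv",
                        "type": "DatasetReference",
                        "parameters": { "fileName": "@pipeline().parameters.fileName" }
                    },
                    "enableLogging": false
                }
            }
        ]
    }
}
```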
I've researched the various pieces needed to do this but can't seem to put them together. A schema-drift approach got me close, but parameterizing the file names is where I got lost. Multiple pipelines are fine, as long as everything keeps landing in the single storage container.
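
On the file-name parameterization specifically, my understanding (which may be wrong) is that the delimited-text source dataset needs its own fileName parameter used in its file path, roughly like the sketch below, and that the Storage event trigger then passes @triggerBody().fileName into the pipeline's fileName parameter when the trigger is attached to the pipeline. The container name and linked service name here are placeholders, and I've left the schema empty so one dataset can cover all four file widths:

```json
{
    "name": "LandingCsv",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": {
            "referenceName": "LandingBlobStorage",
            "type": "LinkedServiceReference"
        },
        "parameters": {
            "fileName": { "type": "String" }
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "landing",
                "fileName": {
                    "value": "@dataset().fileName",
                    "type": "Expression"
                }
            },
            "columnDelimiter": ",",
            "firstRowAsHeader": true
        },
        "schema": []
    }
}
```

Thanks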