I have a pipeline that runs MySql DB queries and uses the results to transfer data to a Storage Account. To query the DB and write the results to a .txt file I use this command:
sqlcmd -S $(Server_prod) -i "G:\DB_Automation\xxx\Night_Batch_Report_4.sql" -o "G:\DB_Automation\SQL_Queries\xxx\4th_init.txt"
Later I use this script to transfer everything that was inside the .txt file to .xlsx format:
- task: PowerShell@2
  displayName: Transfer 4th night job --> Excel
  inputs:
    targetType: 'inline'
    script: |
      $rawData = Get-Content -Path 'G:\DB_Automation\SQL_Queries\Results\Night_batch\1st_init.txt' | Where-Object {$_ -match '\S'}
      $delimiter = [cultureinfo]::CurrentCulture.TextInfo.ListSeparator
      ($rawData -replace '\s+' , $delimiter) | Set-Content -Path 'G:\DB_Automation\SQL_Queries\Results\Night_batch\theNewFile.csv'
      Import-Csv -Path 'G:\DB_Automation\SQL_Queries\Results\Night_batch\theNewFile.csv' | Export-Excel -Path 'G:\DB_Automation\SQL_Queries\Results\Night_batch\batch_audit_report_$(Build.BuildNumber)_prod_Night.xlsx' -Autosize -WorkSheetname '04-Check all task completed'
The issue with this approach is that I can see much more content in the .txt file than later lands in the .xlsx. Please refer to the screenshot:
Some columns are not transferred.
Can you please tell me if there is an issue with the PowerShell script / formatting?
Thanks
UPDATE:
I have now modified it to:
$text = Get-Content -Raw -Path 'G:\DB_Automation\xxx\Night_batch\8th_init.txt'
$rawData = ($text -split "\(\d* rows affected\)")[1].Trim() -split "\n"
$delimiter = [cultureinfo]::CurrentCulture.TextInfo.ListSeparator
$data = ($rawData -replace '\s+' , $delimiter) | ConvertFrom-Csv
$data | Export-Excel -Path 'G:\DB_Automation\xxx\Night_batch\batch_audit_report_$(Build.BuildNumber)_prod_Night.xlsx' -Autosize -WorkSheetname '08-t_team_role_conn'
But now....
For some queries it works fine (image below):
but for some I get an empty Excel sheet, despite the queries completing fine.

Can you shed some light on this issue?
Related Issue
Everything works fine, except this: if you take a look at the cell you will find the date separated from the time.
In that case the script does not pick up the whole cell, but divides the content into date and time and pushes them into new columns.
Is there a way to keep the whole content of a cell instead of dividing it?



Problem
The problem is that your attempt to extract a *.csv file from the raw data in your first screenshot isn't working properly.
If you start with this:
and your code:
you get:
and when you read that back in with
`Import-Csv` it takes the column headers from the first row - there are only 5 columns, so it discards the data in any additional columns (i.e. `task_id` onwards). Exporting this data to Excel (or even back to a CSV with `Export-Csv`) will only write the data from the first 5 columns, since that's all that is being imported.
Workaround
You need to extract just the tabular data rows from your source text file.
I don't know how fragile this approach would be, but you could split on the `(x rows affected)` text and take the second result from the text file - e.g.:

What this does is:

- `($text -split "\(\d* rows affected\)")` - split the entire text file into chunks based on separators in the format "(x rows affected)"
- `[1]` - take the second chunk - i.e. the one that contains the tabular data (the array is zero-indexed, so 1 is the second item in the array)
- `.Trim()` - remove leading and trailing whitespace, including line breaks
- `-split "\n"` - split the chunk into lines
- `foreach-object -begin { $i = 0 } -process { if( $i -ne 1 ) { $_ } $i++ }` - filter the lines of text in the selected chunk and remove the second one (the one that contains the `-------` column header underlines)
- `foreach-object { $_.Trim() }` - finally, trim the remaining lines of text

The result is equivalent to:
and now you can process it like you already were:
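For reference, here's the full flow assembled from the steps described above together with the conversion code from the question - a sketch only (paths and worksheet name are the asker's, and the split index assumes the tabular data sits in the second chunk of the file):

```powershell
# Read the whole sqlcmd output file as a single string
$text = Get-Content -Raw -Path 'G:\DB_Automation\xxx\Night_batch\8th_init.txt'

# Split on "(x rows affected)" separators, take the second chunk (the one
# containing the tabular data), drop the "-------" underline row (index 1),
# and trim each remaining line
$rawData = ($text -split "\(\d* rows affected\)")[1].Trim() -split "\n" |
    ForEach-Object -Begin { $i = 0 } -Process { if ($i -ne 1) { $_ }; $i++ } |
    ForEach-Object { $_.Trim() }

# Replace whitespace runs with the culture's list separator, parse as CSV
# objects, then export to Excel (requires the ImportExcel module)
$delimiter = [cultureinfo]::CurrentCulture.TextInfo.ListSeparator
$data = ($rawData -replace '\s+', $delimiter) | ConvertFrom-Csv
$data | Export-Excel -Path 'G:\DB_Automation\xxx\Night_batch\batch_audit_report_prod_Night.xlsx' -Autosize -WorkSheetname '08-t_team_role_conn'
```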
The result should look like this in Excel:
This might break if your source file format changes, but it'll hopefully point you in the right direction...
Notes
The filtering of the `------` header underlines in the output from `sqlcmd` - thanks @mklement0!
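On the "Related Issue" (date split from time): `-replace '\s+'` turns *every* whitespace run into a delimiter, including the single space inside a timestamp. Assuming `sqlcmd` pads its columns with at least two spaces (an assumption about your output format), replacing only runs of two or more whitespace characters would keep the timestamp in one cell - a sketch with a made-up sample line:

```powershell
# Hypothetical sample row: columns padded with 2+ spaces, date and time
# inside one column separated by a single space
$line = 'task_1   COMPLETED   2023-01-05 21:14:03'
$delimiter = [cultureinfo]::CurrentCulture.TextInfo.ListSeparator

# '\s{2,}' only matches runs of two or more whitespace characters,
# so the single space inside the timestamp is left alone
$line -replace '\s{2,}', $delimiter
# e.g. "task_1;COMPLETED;2023-01-05 21:14:03" when the list separator is ';'
```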