When I run a program against my local database to insert 100,000 records, the import takes around 2 seconds. After switching to Azure SQL, the time increases a lot, together with a high DTU percentage.
This is my code:
using (var connection = new SqlConnection(FiddleHelper.GetConnectionStringSqlServer()))
{
    Stopwatch stopWatch = new Stopwatch();
    stopWatch.Start();

    connection.Open();
    var transaction = connection.BeginTransaction();

    using (var sqlBulk = new SqlBulkCopy(connection, SqlBulkCopyOptions.TableLock, transaction))
    {
        // BatchSize is larger than the 100,000 rows, so everything is sent as a single batch.
        sqlBulk.BatchSize = 150000;
        sqlBulk.BulkCopyTimeout = 60;
        sqlBulk.DestinationTableName = "Master_Transaction";

        sqlBulk.WriteToServer(dataTable);
        transaction.Commit();
    }

    stopWatch.Stop();
    TimeSpan ts = stopWatch.Elapsed;
    Console.WriteLine($"database running time {ts}");
}
I tested with Azure SQL on the DTU purchasing model; the results are:
- 10 DTU: 37 seconds
- 20 DTU: 24 seconds
- 50 DTU: 18 seconds
- 100 DTU: 15 seconds
Even though the running time decreases, avg_log_write_percent is still high and DTU usage is also high, as shown in the pictures below:

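(For reference, these metrics can also be read programmatically instead of from the portal charts. A minimal sketch, assuming the same FiddleHelper connection string as above; sys.dm_db_resource_stats returns resource usage for the database in 15-second intervals:)

    // Rough sketch: read the most recent resource usage samples,
    // including avg_log_write_percent, from sys.dm_db_resource_stats.
    using (var connection = new SqlConnection(FiddleHelper.GetConnectionStringSqlServer()))
    {
        connection.Open();
        var query = @"SELECT TOP (10) end_time, avg_cpu_percent, avg_data_io_percent, avg_log_write_percent
                      FROM sys.dm_db_resource_stats
                      ORDER BY end_time DESC";
        using (var command = new SqlCommand(query, connection))
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                Console.WriteLine(
                    $"{reader.GetDateTime(0)}: cpu={reader.GetDecimal(1)}% io={reader.GetDecimal(2)}% log={reader.GetDecimal(3)}%");
            }
        }
    }
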
My questions are:
- How can I improve the performance of SqlBulkCopy with Azure SQL?
  - by fixing the code
  - by configuring the Azure resource
- Can I lower the peak DTU usage and reduce avg_log_write_percent?
Notes
- the table has a clustered index key
All the service tiers you have used in your tests (S0 to S3) are based on remote storage, which means you may see performance degradation on IO-intensive tasks. Premium tiers, on the other hand, are based on locally attached SSD storage, which provides higher IOPS and throughput than the Basic/Standard tiers.
In addition to the storage-related limitations of the S0 to S3 tiers, query plans on these tiers are serialized: all queries execute with a max degree of parallelism of 1. Don't expect good performance for IO- and CPU-intensive tasks on those service tiers.
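If you want to confirm from the application which service objective the database is actually running on before and after scaling, something like the sketch below should work (it assumes the same connection helper as in your question; DATABASEPROPERTYEX with 'ServiceObjective' returns the current performance level, e.g. "S2" or "P1"):

    // Sketch: check the current Azure SQL service objective from the client.
    using (var connection = new SqlConnection(FiddleHelper.GetConnectionStringSqlServer()))
    {
        connection.Open();
        using (var command = new SqlCommand(
            "SELECT CONVERT(nvarchar(64), DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective'))", connection))
        {
            Console.WriteLine($"Current service objective: {command.ExecuteScalar()}");
        }
    }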