First of all: we have an application that is built heavily around the legacy DataTable type. Because of this, we cannot switch to e.g. EF now; that will be a future project. In the meantime we need to build a new server-side, REST-based solution as a replacement for the legacy server logic.
Problem: SqlDataAdapter.Update(DataTable) does not update the data in the database:
- New records: get inserted successfully into the DB
- Modified records: Update() returns the correct count, but the change is not in the DB
- Deleted records: Update() returns a count of 0 and therefore throws a concurrency exception (which is by design of the data adapter, but not correct here)
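For reference, the exception on deletes is standard adapter behavior: whenever a generated UPDATE or DELETE statement affects zero rows, SqlDataAdapter throws a DBConcurrencyException. A minimal sketch of how that surfaces (adapter and table are placeholders for whatever the caller passes in):

using System;
using System.Data;
using System.Data.SqlClient;

static void SaveWithConflictLogging(SqlDataAdapter adapter, DataTable table)
{
    try
    {
        adapter.Update(table); // throws when an UPDATE/DELETE affects 0 rows
    }
    catch (DBConcurrencyException ex)
    {
        // ex.Row is the DataRow whose change could not be applied
        Console.WriteLine("Conflict on row with state: " + ex.Row?.RowState);
        throw;
    }
}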
Supposed Cause: The DataTable is fetched by the server application on request of a client, but is then transmitted to the client and back to the server before it gets written to the DB, so the SqlDataAdapter seems not to detect the changes properly:
- Client requests data
- Server fetches data from database
- Data is transmitted serialized via REST to the client
- Client works on data
- Changed data is transmitted serialized via REST to server
- Server instantiates a new SqlDataAdapter and calls SqlDataAdapter.Update() on the received data
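One detail worth checking in such a round trip is the serialization format: of the XML write modes, only a DiffGram carries the RowState and the original row versions across the wire. A minimal sketch, assuming the REST layer can exchange XML (the method names are illustrative, not part of any framework):

using System.Data;
using System.IO;

static string SerializeWithRowState(DataTable table)
{
    using (var writer = new StringWriter())
    {
        // DiffGram is the only XmlWriteMode that preserves RowState
        // and the original row versions
        table.WriteXml(writer, XmlWriteMode.DiffGram);
        return writer.ToString();
    }
}

static DataTable DeserializeWithRowState(string xml, DataTable schemaTemplate)
{
    var table = schemaTemplate.Clone(); // same columns and PK, no rows
    using (var reader = new StringReader(xml))
    {
        table.ReadXml(reader); // restores Added/Modified/Deleted row states
    }
    return table;
}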
Data integrity:
- the correct RowState of each record is present on the server side when it calls SqlDataAdapter.Update()
- the client transmits only changed records to the server, for efficiency reasons
- all of the tables have a PK
- none of the tables have FK relations (this is/was the legacy design rule)
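Given those invariants, a quick server-side sanity check before calling Update() can confirm that the received table really still carries them. A diagnostic sketch, not part of the production path:

using System;
using System.Data;

static void AssertUpdatable(DataTable table)
{
    // The command builder needs key information to generate UPDATE/DELETE
    if (table.PrimaryKey.Length == 0)
        throw new InvalidOperationException(
            $"Table '{table.TableName}' has no primary key.");

    foreach (DataRow row in table.Rows)
    {
        // Every transmitted row should be Added, Modified or Deleted;
        // Unchanged rows hint that the RowState was lost along the way
        if (row.RowState == DataRowState.Unchanged)
            Console.WriteLine($"Unchanged row received in '{table.TableName}'.");
    }
}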
Is it possible to somehow achieve a (server-side) SqlDataAdapter.Update() on "foreign" data, or is this method designed only for direct (client) updates of the originally fetched data back to the database?
Common Errors: of course I have already searched extensively for this issue and made sure the SQL command properties are populated correctly.
Server-side code part:
public override int[] SaveDataTable(IEnumerable<DataTable> dataTables)
{
    var counts = new Queue<int>();
    using (_connection = new SqlConnection(ConnectionString))
    {
        _connection.Open();
        var transaction = _connection.BeginTransaction();
        try
        {
            foreach (var table in dataTables)
            {
                // CreateCommand() already wires the command to _connection
                using (var command = _connection.CreateCommand())
                {
                    command.Transaction = transaction;
                    command.CommandText = Global.GetSelectStatement(table);

                    var dataAdapter = new SqlDataAdapter(command);

                    // Let the command builder derive the DML from the SELECT
                    var cmdBuilder = new SqlCommandBuilder(dataAdapter);
                    dataAdapter.UpdateCommand = cmdBuilder.GetUpdateCommand();
                    dataAdapter.InsertCommand = cmdBuilder.GetInsertCommand();
                    dataAdapter.DeleteCommand = cmdBuilder.GetDeleteCommand();

                    // Various Fill() attempts (into a new DataSet, the table
                    // itself, an empty DataTable, an AcceptChanges()'d clone)
                    // made no difference and are omitted here.

                    counts.Enqueue(dataAdapter.Update(table));
                }
            }
            transaction.Commit();
        }
        catch (Exception)
        {
            transaction.Rollback(); // this may throw as well
            throw;
        }
    }
    return counts.ToArray();
}
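When debugging this kind of setup, it can also help to dump the SQL that the SqlCommandBuilder actually generates, because the WHERE clauses of the UPDATE/DELETE commands are built from the original row versions of each row. A small diagnostic sketch, assuming the adapter's SelectCommand is set and its connection is open (the builder runs a schema query against it):

using System;
using System.Data.SqlClient;

static void DumpGeneratedCommands(SqlDataAdapter adapter)
{
    using (var builder = new SqlCommandBuilder(adapter))
    {
        // The WHERE clauses of the UPDATE/DELETE commands compare
        // against the DataRowVersion.Original values of each row
        Console.WriteLine(builder.GetInsertCommand().CommandText);
        Console.WriteLine(builder.GetUpdateCommand().CommandText);
        Console.WriteLine(builder.GetDeleteCommand().CommandText);
    }
}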

OK, so the quest is solved. There was nothing wrong with the implementation of the SqlDataAdapter (apart from the improvement advice from the comments, of course). The problem was in the client application code, which always called AcceptChanges() to reduce the amount of data. Prior to sending changed data to the data access layer, the RowState of each row was "restored" with DataRow.SetModified(), etc. This is what breaks SqlDataAdapter.Update(). Of course this is logical in hindsight, because the original DataRowVersion is lost at that point, but it wasn't easy to identify.
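For anyone hitting the same thing: the client-side fix is to let DataTable.GetChanges() do the size reduction instead of AcceptChanges() plus SetModified(), because GetChanges() keeps both the RowState and the original row versions. A minimal sketch (the method name is illustrative):

using System.Data;

// What caused the issue (do not do this):
//   table.AcceptChanges();  // discards the original row versions
//   row.SetModified();      // restores the RowState, but not the versions

static DataTable PrepareForTransmission(DataTable table)
{
    // GetChanges() copies only the Added/Modified/Deleted rows and keeps
    // DataRowVersion.Original intact, which SqlDataAdapter.Update() needs
    // for its WHERE clauses; it returns null when nothing has changed.
    return table.GetChanges();
}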