Performance

AdoNet.Async is designed for minimal overhead. This page presents benchmark results, explains what each benchmark measures, and discusses the design decisions that enable strong performance.

All benchmarks were measured on Intel Core i9-12900HK, .NET 10.0.4, Windows 11, BenchmarkDotNet v0.15.8 (ShortRun, Release mode).

Command Execution

Measures the overhead of wrapping DbCommand.ExecuteScalarAsync, ExecuteNonQueryAsync, and ExecuteReaderAsync through the adapter layer.

| Method | Mean | Ratio | Allocated | Alloc Ratio |
|---|---:|---:|---:|---:|
| Raw_ExecuteScalar | 6.581 us | 1.00 | 720 B | 1.00 |
| Async_ExecuteScalar | 12.069 us | 1.83 | 912 B | 1.27 |
| Raw_ExecuteNonQuery | 9.355 us | 1.42 | 480 B | 0.67 |
| Async_ExecuteNonQuery | 16.063 us | 2.44 | 528 B | 0.73 |
| Raw_ExecuteReader_Iterate | 10.087 us | 1.53 | 704 B | 0.98 |
| Async_ExecuteReader_Iterate | 16.023 us | 2.44 | 992 B | 1.38 |

What this tells us: The adapter wrapper adds roughly 5.5-7 microseconds of overhead per command execution, and 48-288 bytes of memory per operation, primarily from the adapter wrapper objects and async state machines. In real applications, where commands take milliseconds because of network I/O, this overhead is negligible.
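As a rough sketch of what the adapter path looks like in application code -- AdapterDbConnection is named later on this page, while the provider (Microsoft.Data.Sqlite), the connection string, and the shape of CreateCommand are assumptions:

```csharp
// Hedged sketch: the adapter is assumed to wrap any DbConnection and
// expose the usual command surface with *Async methods.
await using var conn = new AdapterDbConnection(
    new SqliteConnection("Data Source=:memory:"));
await conn.OpenAsync();

await using var cmd = conn.CreateCommand();
cmd.CommandText = "SELECT COUNT(*) FROM Customers";

// The ~6 us wrapper cost is invisible next to a real network round-trip.
object? count = await cmd.ExecuteScalarAsync();
```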

Connection Open/Close

Measures the cost of opening and closing a connection through the adapter.

| Method | Mean | Ratio | Allocated | Alloc Ratio |
|---|---:|---:|---:|---:|
| Raw_OpenClose | 15.54 us | 1.00 | 384 B | 1.00 |
| Async_OpenClose | 14.35 us | 0.92 | 408 B | 1.06 |

What this tells us: The async adapter is actually slightly faster for open/close operations. Memory overhead is minimal (24 bytes).

Data Reader Iteration (50 rows)

Measures the cost of iterating 50 rows through a DbDataReader vs. the async adapter, including await foreach.

| Method | Mean | Ratio | Allocated | Alloc Ratio |
|---|---:|---:|---:|---:|
| Raw_ReadAll_Fields | 20.99 us | 1.00 | 3.7 KB | 1.00 |
| Async_ReadAll_ManualLoop | 33.13 us | 1.58 | 3.98 KB | 1.08 |
| Async_ReadAll_AwaitForeach | 30.74 us | 1.46 | 4.09 KB | 1.11 |

What this tells us: The await foreach approach is slightly faster than a manual ReadAsync loop due to optimizations in the async enumerator. Memory overhead is about 8-11% over raw iteration -- roughly 300-400 bytes total regardless of row count.
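The two iteration styles measured above can be sketched as follows, assuming the adapter's reader is async-enumerable and exposes GetString-style accessors (the exact reader type is not shown on this page):

```csharp
// A reader can only be consumed once, so these are alternatives,
// not sequential steps on the same reader.

// Option A (measured slightly faster): the async enumerator path.
await using (var reader = await cmd.ExecuteReaderAsync())
{
    await foreach (var row in reader)
    {
        var name = row.GetString(0);
    }
}

// Option B: the manual ReadAsync loop.
await using (var reader = await cmd.ExecuteReaderAsync())
{
    while (await reader.ReadAsync())
    {
        var name = reader.GetString(0);
    }
}
```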

DataAdapter Fill

Measures FillAsync vs. raw SqlDataAdapter.Fill for 10 and 100 rows.

| Method | RowLimit | Mean | Ratio | Allocated | Alloc Ratio |
|---|---:|---:|---:|---:|---:|
| Raw_Fill | 10 | 595.5 us | 1.00 | 94.42 KB | 1.00 |
| Async_Fill | 10 | 872.3 us | 1.46 | 78.88 KB | 0.84 |
| Raw_Fill | 100 | 1,293.8 us | 1.00 | 160.21 KB | 1.00 |
| Async_Fill | 100 | 1,173.8 us | 0.92 | 117.40 KB | 0.73 |

What this tells us: At 100 rows, FillAsync is actually faster and allocates 27% less memory than SqlDataAdapter.Fill. The async adapter's FillAsync uses LoadAsync which avoids some of the internal overhead of SqlDataAdapter. At smaller row counts the async overhead is more visible, but at realistic data sizes the async path wins.
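A minimal FillAsync sketch -- the data adapter variable and its construction are placeholders, and the table constructor shape is an assumption:

```csharp
// Hedged sketch: FillAsync and AsyncDataTable are from this library;
// dataAdapter stands in for a configured async adapter instance.
var table = new AsyncDataTable("Customers");

// FillAsync streams rows via LoadAsync rather than SqlDataAdapter's
// internal machinery, which is where the 100-row savings come from.
await dataAdapter.FillAsync(table);
Console.WriteLine($"{table.Rows.Count} rows loaded");
```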

Serialization (AsyncDataTable)

Measures JSON serialization and deserialization of AsyncDataTable with both Newtonsoft.Json and System.Text.Json.

| Method | RowCount | Mean | Ratio | Allocated | Alloc Ratio |
|---|---:|---:|---:|---:|---:|
| Newtonsoft_Serialize | 10 | 12.30 us | 1.00 | 29.36 KB | 1.00 |
| STJ_Serialize | 10 | 10.05 us | 0.84 | 11.74 KB | 0.40 |
| Newtonsoft_Deserialize | 10 | 60.52 us | 5.04 | 34.38 KB | 1.17 |
| STJ_Deserialize | 10 | 54.70 us | 4.55 | 32.73 KB | 1.12 |
| Newtonsoft_Serialize | 100 | 70.08 us | 1.00 | 114.63 KB | 1.00 |
| STJ_Serialize | 100 | 39.28 us | 0.57 | 65.62 KB | 0.57 |
| Newtonsoft_Deserialize | 100 | 165.16 us | 2.41 | 101.45 KB | 0.88 |
| STJ_Deserialize | 100 | 143.26 us | 2.09 | 98.68 KB | 0.86 |

Ratio is relative to Newtonsoft_Serialize per row count.

What this tells us: System.Text.Json is consistently faster and allocates less memory. At 100 rows, STJ serialization is 44% faster and uses 43% less memory. Deserialization is more expensive than serialization in both libraries because it requires constructing DataTable, DataColumn, and DataRow objects, but STJ still edges ahead.
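Assuming AsyncDataTable ships System.Text.Json converter support (the page states both libraries produce the same wire format), a round-trip looks like standard STJ usage:

```csharp
using System.Text.Json;

// Cache options once; creating new options per call discards the type
// metadata STJ caches on the options instance.
static readonly JsonSerializerOptions s_options = new();

string json = JsonSerializer.Serialize(table, s_options);

// Deserialization is the slower direction in both libraries: it has to
// rebuild the DataTable/DataColumn/DataRow structures.
var restored = JsonSerializer.Deserialize<AsyncDataTable>(json, s_options);
```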

Transactions

Measures beginning and committing/rolling back a transaction.

| Method | Mean | Ratio | Allocated | Alloc Ratio |
|---|---:|---:|---:|---:|
| Raw_BeginCommit | 5.948 us | 1.00 | 1.64 KB | 1.00 |
| Async_BeginCommit | 240.697 us | 40.47 | 1.79 KB | 1.09 |
| Raw_BeginRollback | 6.040 us | 1.00 | 1.65 KB | 1.00 |
| Async_BeginRollback | 233.939 us | 39.33 | 1.73 KB | 1.06 |
Warning: The high transaction overhead is a microbenchmark artifact. SQLite (used in the benchmarks) serializes write transactions, so BeginTransactionAsync must wait for the previous transaction to complete. With network-bound databases (SQL Server, PostgreSQL, MySQL), the async overhead is negligible compared to network I/O latency. Note that the memory allocation overhead is minimal -- only ~150 bytes per operation.
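With a real network-bound provider, the standard async transaction pattern applies unchanged; the await overhead measured above disappears into I/O latency. This sketch uses only the base System.Data.Common surface (the SQL text is a placeholder):

```csharp
using System.Data.Common;

static async Task DebitAsync(DbConnection conn, CancellationToken ct)
{
    // BeginTransactionAsync/CommitAsync/RollbackAsync are standard
    // DbConnection/DbTransaction APIs since .NET Core 3.0.
    await using DbTransaction tx = await conn.BeginTransactionAsync(ct);
    try
    {
        await using var cmd = conn.CreateCommand();
        cmd.Transaction = tx;
        cmd.CommandText =
            "UPDATE Accounts SET Balance = Balance - 10 WHERE Id = 1";
        await cmd.ExecuteNonQueryAsync(ct);
        await tx.CommitAsync(ct);
    }
    catch
    {
        await tx.RollbackAsync(ct);
        throw;
    }
}
```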

Design Decisions

Why ValueTask

All async methods return ValueTask or ValueTask<T> rather than Task/Task<T>. This eliminates heap allocations on synchronous completion paths -- when the underlying provider completes synchronously (common for in-memory or cached operations), no Task object is allocated.

```csharp
// ValueTask -- zero allocation when the provider completes synchronously
public ValueTask<int> ExecuteNonQueryAsync(CancellationToken ct = default)
    => ExecuteNonQueryCoreAsync(ct);
```
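One standard caveat comes with this choice (general .NET guidance, not specific to this library): a ValueTask may be consumed only once.

```csharp
// A ValueTask must be awaited exactly once, and never concurrently.
ValueTask<int> pending = cmd.ExecuteNonQueryAsync(ct);
int affected = await pending;        // fine: single consumption

// If the result is needed in multiple places, lift it to a Task first:
Task<int> task = cmd.ExecuteNonQueryAsync(ct).AsTask();
```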

Why Zero-Alloc Events

Async events on AsyncDataTable use ZeroAlloc.AsyncEvents with InvokeMode.Sequential. This has two key properties:

  1. Zero allocations when no subscribers -- If no handler is registered, InvokeAsync completes synchronously with zero allocations. This means tables without event subscriptions pay no cost.
  2. Sequential dispatch -- Handlers execute one after another, which is the correct semantic for validation (Validator 1 must complete before Validator 2 runs).
```csharp
// Internal field -- zero-alloc when unused
internal AsyncEventHandler<DataRowChangeEventArgs> _rowChanging =
    new(InvokeMode.Sequential);
```

Adapter Sync Method Optimization

The adapter classes hide base class sync-over-async bridge methods with new declarations that call the inner object's native synchronous methods directly:

```csharp
// AdapterDbConnection -- calls inner.Open() directly, not sync-over-async
public new void Open() => _inner.Open();
```

This means synchronous callers get the same performance as calling the underlying provider directly.

Read-Only Indexers on AsyncDataRow

AsyncDataRow indexers are read-only. All writes go through SetValueAsync, which fires async events. This is a compile-time guarantee that async events are never bypassed, rather than an opt-in convention.
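In code, the contract looks like this -- the member names follow this page, and the row variable is a placeholder:

```csharp
// Reads go through the indexer as usual.
var current = (string?)row["Name"];

// row["Name"] = "Ada";                // CS0200: indexer has no setter

// Writes must use SetValueAsync, which runs the async change events.
await row.SetValueAsync("Name", "Ada");
```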

Typed vs Untyped Overhead

Typed DataSets (generated from .xsd) add a thin layer over AsyncDataTable<TRow>:

  • Row creation: Typed rows are created via WrapRow(DataRow) -- a single allocation per row, cached in a ConditionalWeakTable.
  • Property access: Typed property getters (e.g., customer.Name) call this["Name"] on the inner DataRow -- no additional allocation.
  • Typed add methods: AddCustomerRowAsync(...) creates the row and calls AddAsync -- one extra DataRow.NewRow() call compared to untyped.

The overhead of typed access is negligible in practice. The ConditionalWeakTable cache ensures that the same DataRow always maps to the same typed row instance without preventing garbage collection.
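The caching pattern described above can be sketched with the real ConditionalWeakTable API; the CustomerRow type and the exact WrapRow shape are assumptions based on this page's description:

```csharp
using System.Data;
using System.Runtime.CompilerServices;

public sealed partial class CustomerTable
{
    // Maps each inner DataRow to its typed wrapper without keeping the
    // row alive: entries are dropped when the key becomes unreachable.
    private readonly ConditionalWeakTable<DataRow, CustomerRow> _wrappers = new();

    // GetValue creates the wrapper at most once per DataRow, so repeated
    // lookups return the same typed instance.
    internal CustomerRow WrapRow(DataRow inner)
        => _wrappers.GetValue(inner, static r => new CustomerRow(r));
}
```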

Memory Allocation Patterns

| Operation | Extra allocations vs raw |
|---|---|
| Open/Close connection | ~24 bytes (adapter wrapper) |
| Execute command | 48-288 bytes (adapter + async state machine) |
| Read 50 rows | ~300-400 bytes total (adapter + enumerator) |
| Fill 100 rows | 27% less than raw SqlDataAdapter |
| Event with no subscribers | 0 bytes |
| Event with subscribers | Same as the subscribers' allocations |
| Serialization (STJ, 100 rows) | 43% less than Newtonsoft |

Tips for Optimal Performance

  1. Use await foreach for reader iteration -- It is marginally faster than a manual while (await reader.ReadAsync()) loop.

  2. Use System.Text.Json for serialization -- It is faster and allocates less memory than Newtonsoft.Json for the same wire format.

  3. Fill tables individually with typed DataSets -- Use FillAsync(ds.Customer) rather than FillAsync(ds) to avoid extra table creation.

  4. Reuse JsonSerializerOptions/JsonSerializerSettings -- Both libraries cache type metadata internally. Creating new options per request wastes memory.

  5. Pass CancellationToken everywhere -- All async methods accept cancellation tokens. This avoids wasted work when requests are cancelled.

  6. Use AcceptChangesDuringFill = true (default) -- This avoids tracking row states during bulk loads, which reduces memory usage.

  7. Avoid event handlers in hot loops -- Event handlers execute sequentially. If you have expensive async validation, consider batching or disabling events during bulk operations by using BeginLoadData()/EndLoadData().
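Tips 4 and 5 combined, in otherwise standard .NET code -- the ExportAsync name and AsyncDataTable's System.Text.Json support are assumptions:

```csharp
using System.Text;
using System.Text.Json;

public static class CustomerExport
{
    // Tip 4: one cached options instance; STJ stores type metadata here,
    // so a fresh instance per request throws that cache away.
    private static readonly JsonSerializerOptions s_json = new()
    {
        PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
    };

    // Tip 5: thread the caller's CancellationToken through every await.
    public static async Task<string> ExportAsync(
        AsyncDataTable table, CancellationToken ct)
    {
        using var buffer = new MemoryStream();
        await JsonSerializer.SerializeAsync(buffer, table, s_json, ct);
        return Encoding.UTF8.GetString(buffer.ToArray());
    }
}
```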