Dynoxide patch notes: 0.9.10 to 0.9.12
Dynoxide now lives entirely in its own public repo, out of the private workspace it shared with Nubo. One repo, one CI pipeline, no shuttling private commits to test against the public version. That's the work behind 0.9.10 that doesn't show up in the changelog.
Three patch releases since.
0.9.10: making error messages actually match
v0.9.10 closed 16 places where dynoxide's error strings drifted from real DynamoDB.
Dynoxide's whole pitch is "behaves like AWS DynamoDB". Easy to claim and hard to keep. AWS doesn't publish a spec for the error strings their SDK clients see; they just emit them, and downstream code parses them.
The dynamodb-conformance project I run alongside dynoxide has been growing a tier-3 suite that asserts on exact error strings. Three rungs of strictness:
- Rung 1: literal exact match
- Rung 2: interpolated exact match, with test fixture values substituted in
- Rung 3: anchored regex around the bits AWS controls (e.g. the Java toString dump of a request body)
601 tests now. Things you only notice when you assert:
- tableName length validation is per-operation: 1 char on read/write, 3 on CreateTable
- Select enum order matches AWS rather than alphabetical
- Query and Scan Limit=0 messages are deliberately different
- batch and transact empty/oversize requests use the standard 1 validation error detected: Value '...' at 'X' failed to satisfy constraint: ... envelope
- UpdateExpression syntax errors include the AWS near: "..." window
Plus one real bug. TransactGetItems with a missing key was returning HTTP 500 instead of TransactionCanceledException with a ValidationError cancellation reason. The dedup loop was calling the server-fault helper before key validation. That's the kind of bug you only catch when your test suite holds error messages to byte-strictness.
0.9.11: MCP server hanging on Ctrl+C
v0.9.11 fixed dynoxide hanging on Ctrl+C when an MCP client (Claude Code, Cursor) was connected.
In a side project, dynoxide runs alongside four other processes via concurrently: two seed scripts and two vite servers. Plus Claude Code with .mcp.json pointing at dynoxide's MCP endpoint.
Ctrl+C to stop the dev session. npm run dev again.
error: failed to bind 127.0.0.1:8000: Address already in use
A second Ctrl+C would unstick it. Sometimes a pkill -9 dynoxide. Annoying.
The MCP server's shutdown path was using axum's with_graceful_shutdown(...), which drains in-flight connections before returning. Fine for the short-lived AWS-SDK requests on the HTTP side. Not fine for Streamable-HTTP MCP sessions that Claude Code holds open indefinitely. The drain phase waited forever, so the dynoxide process hung, so the port stayed bound, so the next npm run dev failed.
The fix: race the cancellation token against the serve future and drop the future on cancel. The listener closes, the function returns, the surrounding tokio::join! proceeds, the process exits. No drain phase.
tokio::select! {
    res = serve_fut => res?,
    _ = ct.cancelled_owned() => {}
}
The HTTP server keeps with_graceful_shutdown because AWS-SDK requests drain in milliseconds.
0.9.12: TIME_WAIT and the case of the still-bound port
v0.9.12 fixed port 8000 staying bound for around 60 seconds after a clean shutdown, if anything had connected during the session.
Shipped 0.9.11. Tested in the same project. Got the same error.
error: failed to bind 127.0.0.1:8000: Address already in use
But this time the dynoxide process had exited cleanly. "Shutting down..." printed, exit code 0. So why was the port still held?
netstat -an | grep 8000 showed it. TIME_WAIT entries. Both directions. The seed scripts had opened TCP connections to dynoxide during the run, those went through TIME_WAIT on close, and the kernel sits on TIME_WAIT for around 60 seconds so stray packets can't land on a fresh listener.
dynoxide's listener was using socket2 with a comment that read:
// Deliberately do NOT set SO_REUSEADDR - this is the whole point.
The comment was wrong. SO_REUSEADDR doesn't allow two live listeners to share a port. That's SO_REUSEPORT. SO_REUSEADDR only bypasses TIME_WAIT entries. The port-conflict detection (a TCP connect probe before the bind) still works either way.
One-line fix:
#[cfg(unix)]
socket.set_reuse_address(true)?;
Gated to Unix because Windows is different. SO_REUSEADDR on Windows lets another process hijack an active bind, which is the opposite of what you want. The Windows-correct answer is SO_EXCLUSIVEADDRUSE, and that's a different change for a different release.
After 0.9.12: npm run dev, Ctrl+C, npm run dev. Instant.
Coming next: a Docker image
Someone's asked for an official Docker image. Hadn't crossed my mind. If you're already running a Docker-based dev or CI workflow around DynamoDB Local, a FROM scratch dynoxide image is an easy drop-in: ~5 MB against the ~600 MB Java image. The release pipeline already builds static linux-musl binaries for npm and Homebrew, so the Docker work is wrapping one of those in a FROM scratch layer. No shell. No OS. Just the binary.
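A sketch of what that image could look like. The binary filename is an assumption (whatever the release pipeline names its linux-musl artifact), and 8000 is just dynoxide's default port from earlier in this post; the real Dockerfile may differ.

```dockerfile
# Hypothetical FROM scratch image: nothing but the static musl binary.
FROM scratch
COPY dynoxide-x86_64-unknown-linux-musl /dynoxide
EXPOSE 8000
ENTRYPOINT ["/dynoxide"]
```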
What I take from this
If I hadn't pushed dynoxide to byte-exact error matching, I wouldn't trust it as a drop-in for DynamoDB Local. I wouldn't have used it for real. I wouldn't have hit either of these bugs. The strictness of the conformance work is what gave me the licence to use it in anger.
And the other thing: comments lie. "This is the whole point" had been sitting in the code since at least v0.9.7. It looked authoritative. It was wrong. The bug stayed latent until I started running dynoxide next to scripts that opened a few thousand TCP connections in succession.
Worth questioning the "deliberately" comments in your own code. Especially the ones written by past-you.