Filing my first security advisory
Yesterday I logged into GitHub and noticed a Dependabot alert sitting on dynoxide. DNS rebinding CVE in rmcp, a transitive dependency. The alert had been raised a few days earlier - I just hadn't seen it, because I'd never set up email notifications for Dependabot on that repo. First lesson of the day, before the actual lesson of the day.
I worked through the fix and published a GitHub Security Advisory once the patch release was out. The fix itself was the bit I knew how to do. The GHSA was new ground, and that's the part I want to write about, because if you maintain something with users and you've never filed one, the process is less daunting than I'd built it up to be.
The notification gap
Worth dealing with this one first.
Dependabot raises alerts on your repo's Security tab automatically. By default, GitHub doesn't email you about them - they appear on the dashboard and that's it. If you don't log into the affected repo regularly, you won't see them.
The fix is at the repo level. Top right of the repo page, hit Watch → Custom, then tick Security alerts. You'll start getting emails for security events on that repo.
I can see why this isn't the default. If you've forked a load of old projects you don't actually maintain, the last thing you want is an inbox full of alerts for repos that aren't yours. The cost of that default is that on the repos you do maintain, you have to opt in deliberately. I hadn't.
There's a second timing thing worth knowing about. Even with email notifications set up, there's a gap between the upstream CVE being published and your Dependabot alert firing. For mine, the upstream rmcp advisory was published on 29 April. My Dependabot alert came through about nine days later. GitHub's Advisory Database reviews advisories before they propagate, and Dependabot itself runs on a scan schedule. The delay is mostly outside your control. Knowing it exists at least means you don't panic when you discover the upstream advisory predates your own alert.
What a GHSA actually is
A GHSA is a structured record on your repo's Security tab. Affected package, affected version range, fixed version, severity, CVSS vector, weakness category, description. The structure matters because of what happens when you publish it: GitHub's advisory database picks it up, and every downstream project that depends on your package gets a Dependabot alert. Often an automated upgrade PR.
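For a sense of the shape, here's roughly what that record looks like once GitHub exports it to its advisory database in OSV format. Field names follow the OSV schema; the ID and summary here are placeholders, not the real advisory:

```json
{
  "id": "GHSA-xxxx-xxxx-xxxx",
  "aliases": ["CVE-2026-42559"],
  "summary": "DNS rebinding in the MCP HTTP transport",
  "affected": [
    {
      "package": { "ecosystem": "crates.io", "name": "dynoxide-rs" },
      "ranges": [
        {
          "type": "SEMVER",
          "events": [{ "introduced": "0.9.3" }, { "fixed": "0.9.13" }]
        }
      ]
    }
  ],
  "severity": [
    { "type": "CVSS_V3", "score": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H" }
  ]
}
```

The `affected` array is the part Dependabot actually consumes: package name plus version events, which is why getting those two fields exactly right matters more than the prose.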
In my case this was the whole point. The named CVE lives in rmcp. Anyone with a direct rmcp dependency already had an alert from the upstream advisory. But dynoxide users have rmcp transitively - it's in the lockfile, not the manifest - and the upstream alert doesn't fire for them. Without a dynoxide-specific GHSA naming the affected version range of dynoxide-rs and dynoxide, nothing reaches those users automatically.
The GHSA isn't ceremony. It's the actual distribution mechanism.
Order of operations
Release first, advisory second.
The advisory references the patched version. Dependabot's upgrade PRs for downstream projects will fail if that version doesn't exist on the registry yet. If you publish the GHSA before the release is live, you've broadcast "upgrade to a version that doesn't exist" to every project that depends on you.
So: cut the release, wait for CI to push to the registries, verify the patched version is actually available, then publish the advisory. The save-as-draft option on the GHSA form is there for this reason.
Last step is the social post, after opening the GHSA URL in a private window to check a logged-out viewer can actually see it. Until you hit publish, the advisory is only visible to repo admins and the URL returns nothing useful to anyone else.
Filing the form
The form lives at Security → Advisories → New draft security advisory. A few things that tripped me up.
Ecosystem. The dropdown doesn't list "cargo" - it lists "Rust". I scrolled past it twice. The mapping is to crates.io under the hood. npm, at least, is just called npm.
Package names. Must match the registry exactly or Dependabot doesn't fire. GitHub validates this and shows a small "Package name found on Rust" confirmation when it's right. For dynoxide that's dynoxide-rs (the Cargo crate, because the name dynoxide was already taken) and dynoxide (npm).
Version ranges. Cargo-style semver: >= 0.9.3, < 0.9.13. I started at 0.9.3 because that's when the MCP HTTP transport landed. Earlier versions don't expose the vulnerable surface, so flagging them would just generate noise alerts for users on older versions who aren't actually affected.
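Since the range syntax is cargo-flavoured semver, it's cheap to sanity-check which published versions fall inside it before submitting. A throwaway sketch using plain tuple comparison rather than the semver crate - fine for simple x.y.z versions, no pre-release handling:

```rust
// Parse "x.y.z" into a comparable tuple. Panics on anything fancier -
// good enough for a one-off sanity check, not for real version handling.
fn v(s: &str) -> (u64, u64, u64) {
    let mut it = s.split('.').map(|p| p.parse().unwrap());
    (it.next().unwrap(), it.next().unwrap(), it.next().unwrap())
}

// The advisory's affected range: >= 0.9.3, < 0.9.13
fn affected(version: &str) -> bool {
    let x = v(version);
    x >= v("0.9.3") && x < v("0.9.13")
}

fn main() {
    assert!(!affected("0.9.2"));  // pre-HTTP-transport, not affected
    assert!(affected("0.9.3"));   // first affected release
    assert!(affected("0.9.12"));  // tuple compare, so .12 > .3 works
    assert!(!affected("0.9.13")); // patched
    println!("range check ok");
}
```

The tuple comparison is the point: a naive string compare would put "0.9.12" before "0.9.3" and silently get the boundary wrong.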
CVE. The upstream CVE-2026-42559 already exists. There's a field for "I have an existing CVE ID" - use it. Don't request a new one for the same root cause.
CVSS. This was the field I felt least sure about. There's an in-form calculator and I had a go at deriving my own vector for dynoxide's exposure context. I marked Attack Complexity as High because the rebinding chain felt fiddly to me. That was wrong - upstream marked it Low, and Low is right, because public DNS rebinding tooling makes the attack cheap to execute. My score came out at 7.5 instead of upstream's 8.8.
The mistake was easy to make and easy to fix (GHSAs are editable after publishing), but it would have saved me both the agonising and the error to just match the upstream score in the first place. For a transitive-dep advisory where the exposure mechanism is essentially the same as upstream, divergent scores just confuse downstream readers. Match upstream unless you've got a strong reason not to.
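Both scores come from the same CVSS v3.1 formula, so the AC difference is easy to check numerically. A sketch of the base-score arithmetic for this vector, with coefficients from the CVSS v3.1 spec (scope unchanged, C/I/A all High, AV:N, PR:N, UI:R):

```rust
// CVSS v3.1 "round up to one decimal" function, per the spec's Appendix A.
// The integer dance avoids floating-point surprises near boundaries.
fn roundup(x: f64) -> f64 {
    let i = (x * 100_000.0).round() as i64;
    if i % 10_000 == 0 {
        i as f64 / 100_000.0
    } else {
        (i / 10_000 + 1) as f64 / 10.0
    }
}

// Base score for AV:N / PR:N / UI:R / S:U / C:H / I:H / A:H,
// parameterised on the Attack Complexity coefficient.
fn base_score(ac: f64) -> f64 {
    let iss = 1.0 - (1.0 - 0.56_f64).powi(3); // C:H, I:H, A:H each 0.56
    let impact = 6.42 * iss;                  // scope unchanged
    let exploitability = 8.22 * 0.85 * ac * 0.85 * 0.62; // AV:N, PR:N, UI:R
    roundup((impact + exploitability).min(10.0))
}

fn main() {
    println!("AC:L -> {}", base_score(0.77)); // upstream's 8.8
    println!("AC:H -> {}", base_score(0.44)); // my mistaken 7.5
}
```

One flipped coefficient (0.77 vs 0.44) is the entire 8.8-to-7.5 gap, which is part of why quietly diverging from upstream on a single metric is so easy to do.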
CWE. Weakness category. The upstream advisory used CWE-346 (Origin Validation Error) and CWE-350 (Reliance on Reverse DNS Resolution). Same advice as CVSS: match upstream rather than reasoning from first principles.
Description. Markdown. I used Summary, Impact, Patches, Workarounds, References. The Workarounds section is the one I'd most strongly suggest including - if a user can't upgrade immediately, knowing how to mitigate is more useful than knowing the CVSS score. For dynoxide that was "use the stdio transport, don't pass --http."
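For reference, the skeleton I ended up with. The section names are just the convention I chose - the form doesn't enforce any of them - and the wording here is illustrative rather than quoted from the real advisory:

```markdown
### Summary
DNS rebinding in the MCP HTTP transport, inherited from rmcp (CVE-2026-42559).

### Impact
A malicious web page can reach a locally running `dynoxide mcp --http`
instance and invoke any exposed tool, including writes.

### Patches
Fixed in 0.9.13. Upgrade.

### Workarounds
Use the stdio transport; don't pass `--http`.

### References
- Upstream rmcp advisory
- CVE-2026-42559
```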
The bug
The named CVE is DNS rebinding. You're running dynoxide mcp --http so your coding agent can talk to a local DynamoDB. You visit a malicious page in another tab. The page's domain initially resolves to the attacker's server; once the page has loaded, the attacker flips the DNS record to 127.0.0.1, so the page's JavaScript can make what the browser considers same-origin requests that actually land on 127.0.0.1:PORT/mcp - carrying the attacker's domain in the Host header, which the server never checks. The attacker can call any tool the running dynoxide instance exposes, including writes.
The rmcp 1.4+ fix is a Host-header allowlist: localhost, 127.0.0.1, ::1, anything else gets a 403. Upgrade rmcp, done.
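The shape of that check, sketched from memory rather than lifted from rmcp's source - the function name and port-stripping details are mine, and real code has more edge cases:

```rust
/// Illustrative Host-header allowlist check (not rmcp's actual code).
fn host_allowed(host_header: &str, allowlist: &[&str]) -> bool {
    // Strip the optional port. IPv6 hosts arrive bracketed, e.g. "[::1]:8080".
    let host = if let Some(rest) = host_header.strip_prefix('[') {
        rest.split(']').next().unwrap_or(rest)
    } else {
        host_header
            .rsplit_once(':')
            .map(|(h, _)| h)
            .unwrap_or(host_header)
    };
    allowlist.iter().any(|a| a.eq_ignore_ascii_case(host))
}

fn main() {
    let allow = ["localhost", "127.0.0.1", "::1"];
    assert!(host_allowed("127.0.0.1:6363", &allow));
    assert!(host_allowed("localhost", &allow));
    assert!(host_allowed("[::1]:8080", &allow));
    // A rebinding attack arrives with the attacker's domain in Host -> reject.
    assert!(!host_allowed("attacker.example:6363", &allow));
    println!("host check ok");
}
```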
While I was reading the patched rmcp source to make sure I understood what the upgrade actually changed, I clocked something. The Host check closes rebinding. It doesn't close cross-origin CSRF.
```js
fetch('http://127.0.0.1:PORT/mcp', {
  mode: 'no-cors',
  method: 'POST',
  body: JSON.stringify({ /* call put_item */ })
})
```
The browser sends Host: 127.0.0.1 - legitimately, because that's the URL it's connecting to - and Origin: https://evil.com. The Host check passes. There's no Origin check by default. The request goes through.
A pedant's note: a no-cors POST with a JSON body actually can't set Content-Type: application/json (browsers strip it down to text/plain), so rmcp's protocol-level Accept and Content-Type checks would bounce this exact request before tool execution. The point still stands. Content-type validation isn't a security boundary - alternative request shapes that sidestep no-cors's restrictions (form POSTs, future MCP transport variants) reach the handler with valid framing. Origin is the right layer to enforce same-origin, not content-type.
This isn't a flaw in the upstream fix. The CVE is narrowly scoped to DNS rebinding, and the fix closes it. Cross-origin CSRF is a different shape of attack on the same transport surface, and the upstream rmcp team have it tracked as a defence-in-depth follow-up. rmcp 1.6.0 already ships an allowed_origins field on the same config - it just defaults to empty, which means "skip validation". Same pattern Host had before 1.4.
So dynoxide 0.9.13 sets both lists explicitly:
```rust
c.allowed_hosts = vec!["localhost".into(), "127.0.0.1".into(), "::1".into()];
c.allowed_origins = vec!["http://localhost".into(), "http://127.0.0.1".into()];
```
Plus a regression test covering all three paths: loopback Host with no Origin (passes, because native MCP clients don't send Origin), foreign Host (403), foreign Origin (403). The test asserts on the rejection message, not just the status code, so a future rmcp change that returns 403 for some other reason can't keep the test green while reopening the vulnerability.
What I'd tell past-me
The advisory is a positive signal. I was anxious about publishing it. Felt like it made dynoxide look amateur, like I was admitting fault. The opposite is true. Projects with zero advisories either have no users or aren't handling issues responsibly. Filing one is what well-run projects do.
Match the upstream CVSS and CWEs. Already covered above. The recalculation is defensible in theory but the consistent practice is simpler, faster, and less error-prone.
Turn on the email notifications. Worth the small ongoing noise on the repos you maintain.
One thing the whole exercise sharpened for me: the HTTP transport doesn't authenticate callers at all. The Host and Origin allowlists defend against browsers, but anything on the same machine that can reach the loopback port can call any tool. Worth saying why.
Why no auth (yet)
The MCP HTTP transport doesn't authenticate callers. That was a deliberate choice, not an oversight. dynoxide's main job is to be a small fast binary running in a CI job or on your own machine - both isolated environments where the only thing that can reach the loopback port is the process that launched it. The threat model is browser-borne attacks against that port, which is exactly what the Host and Origin allowlists handle. Adding token-based auth on top buys you very little: you'd be issuing yourself credentials to talk to yourself, and self-issued credentials tend to end up in dotfiles or pasted somewhere they shouldn't be.
What's changed the calculation is the Docker image I've been working on. A container is a different deployment shape: it's something you can hand to another developer, drop into a shared environment, or expose on a network where the caller isn't the process that launched it. The trust boundary widens and the assumptions behind shipping without auth stop holding.
So auth is the next piece of work. There's more design space here than I expected - the MCP spec is OAuth 2.1 or nothing, rmcp doesn't ship an auth hook of its own, and the path that actually fits a single-user local tool (a pre-shared bearer token wired in as a tower middleware) isn't really what the spec contemplates. More on that when it lands.
dynoxide 0.9.13 is out. Upgrade if you're running dynoxide mcp --http or dynoxide serve --mcp. The advisory is at GHSA-fvh2-gm75-j4j7 if you want a look at the finished form. If you've never filed a GHSA and you've got the option to file a draft on a private repo just to see the form, that's the cheapest way to take the unknown out of it before you have to do it for real.