
governance/auth work #2

Merged
sidhujag merged 21 commits into main from sessions
Apr 20, 2026

Conversation

@sidhujag
Member

backend services for gov/auth session work

@sidhujag
Member Author

@codex review


@chatgpt-codex-connector (Bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 2886618e65

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread middleware/rateLimit.js Outdated
Comment on lines +18 to +19
const email = (req.body && req.body.email) || '';
return `${req.ip}|${email.toLowerCase()}`;

P1 Badge Normalize login limiter keys with canonical email form

The login rate-limit key uses req.body.email.toLowerCase() directly, but authentication normalizes emails with trim + NFKC in normalizeEmail. An attacker can submit many canonical-equivalent variants (for example trailing spaces or Unicode-normalized forms) to bypass the 5-attempt bucket while still targeting the same account in users.verifyAuth, which weakens brute-force protection. Reuse the same normalization logic in the limiter key so equivalent emails share one quota.

Useful? React with 👍 / 👎.
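A minimal sketch of the suggested fix. `normalizeEmail` here mirrors the normalization the review attributes to the auth layer (NFKC + trim + lowercase); it is an illustration, not the shipped helper:

```javascript
// Hedged sketch: normalizeEmail mirrors the auth layer's normalization as
// described in the review (NFKC + trim + lowercase). Illustrative only.
function normalizeEmail(raw) {
  return String(raw).normalize('NFKC').trim().toLowerCase();
}

// Limiter key reusing the same canonical form, so canonical-equivalent
// variants of one address share a single 5-attempt bucket.
function loginKey(req) {
  const email = (req.body && req.body.email) || '';
  return `login|${req.ip}|${normalizeEmail(email)}`;
}
```

With this, `"  User@Example.COM "` and `"user@example.com"` map to the same bucket key.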


Comment thread routes/auth.js Outdated

async function sendVerificationEmail(user) {
const token = verifications.issue(user.id);
const link = `${baseUrl.replace(/\/$/, '')}/auth/verify-email?token=${token}`;

P1 Badge Send verification links to a routable endpoint

Verification emails embed a GET URL (/auth/verify-email?token=...), but this router only exposes POST /auth/verify-email. With the current backend-oriented BASE_URL default, clicking the email link hits an unmapped GET endpoint, so users cannot complete verification from the email alone. The link should target a frontend route that performs the POST, or the backend should provide a GET bridge/redirect handler.

Useful? React with 👍 / 👎.
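One shape the fix could take, assuming a frontend-origin setting (called `frontendUrl` here) and an SPA route at `/verify-email` that reads the token and performs the POST — both are assumptions, not the project's confirmed names:

```javascript
// Hypothetical sketch: point the emailed link at the SPA origin so the
// mail client's GET lands on a route that exists; the SPA page then
// POSTs the token to the backend's /auth/verify-email.
function buildVerificationLink(frontendUrl, token) {
  const base = frontendUrl.replace(/\/$/, ''); // strip one trailing slash
  return `${base}/verify-email?token=${encodeURIComponent(token)}`;
}
```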


Comment thread lib/sessions.js
Comment on lines +73 to +74
const newExpiresAt = Math.min(t + slidingMs, row.createdAt + absoluteMs);
updateSliding.run(newExpiresAt, t, row.id);

P2 Badge Refresh sid cookie when sliding session expiry updates

Session verification extends expires_at server-side on each request, but the sid cookie expiry is only set during login/password change. This means active sessions still lose their browser cookie at the original expiry, effectively disabling the intended sliding behavior and causing unexpected logout for active users. Reissue the session cookie (or refresh Max-Age) when sliding expiry is advanced.

Useful? React with 👍 / 👎.
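A sketch of the cookie refresh, written as a plain `Set-Cookie` string builder so the idea is testable outside Express; the attribute choices (Path, SameSite, Secure) are assumptions, not the project's actual cookie config:

```javascript
// Sketch: after verify() slides the DB expiry, re-emit the sid cookie
// with an Expires matching the slide, so the browser copy does not die
// at the original login-time expiry. Attributes are illustrative.
function sidSetCookie(sid, newExpiresAtMs) {
  const expires = new Date(newExpiresAtMs).toUTCString();
  return `sid=${sid}; Path=/; HttpOnly; Secure; SameSite=Lax; Expires=${expires}`;
}
```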


jagdeep sidhu added 3 commits April 19, 2026 22:54
The login rate-limiter keyed on req.body.email.toLowerCase(), but
users.verifyAuth normalizes via normalizeEmail (NFKC + trim + lowercase).
An attacker could send canonical-equivalent variants (trailing
whitespace, Unicode ligatures, mixed case) to split the 5-attempt
bucket across distinct keys while still hitting the same account
downstream, weakening brute-force protection.

Both the login and resend limiters now run their email input through
the same normalizeEmail helper as the auth layer. Register is unchanged
(IP-only key).

Also exports loginKey / resendKey / registerKey for direct unit testing
without spinning up the full rate-limit middleware.

Tests: +5 (102 total). Verifies that mixed-case, whitespace-padded,
and NFKC-equivalent emails produce identical bucket keys.

Made-with: Cursor
The verification email embedded a link at `${BASE_URL}/auth/verify-email?token=...`,
but /auth/verify-email only accepts POST. Clicking the magic link
from a mail client issues a GET, so the user landed on an unmapped
route and verification was effectively broken from the email alone.

The mail now links to `${FRONTEND_URL}/verify-email?token=...`.
The SPA page at /verify-email reads the token from the query string
and POSTs it back to the same /auth/verify-email endpoint it was
always intended to hit. FRONTEND_URL is a new env var defaulting to
CORS_ORIGIN (the SPA origin), preserving sensible behavior without
any required config change for existing setups.

Threaded through appFactory.createApp, appFactory.mountAuthAndVault,
and server.js. .env.example updated.

Tests: +1 integration test asserts the mail body contains the
frontend-rooted link and does NOT contain the backend origin or the
/auth prefix. 103 total tests passing.

Made-with: Cursor
…dex P2)

sessions.verify() slides DB expires_at forward on every authenticated
request, but the sid cookie was only set at login time. The browser
dropped the cookie at the original login expiry regardless of
activity, defeating the intended 14-day sliding window and silently
logging users out mid-session.

sessionMw.parse now re-emits both cookies on every request whose
sid verified successfully:

- sid:   same token, new Expires attribute matching the DB slide.
- csrf:  same value (rotating would invalidate the SPA's in-memory
         mirror and break every subsequent PUT/POST), new Expires.

Guards:
- No refresh when verify() returned null (expired / unknown token),
  so unauthenticated 401 responses never leak a Set-Cookie.
- No csrf refresh when the incoming request had no csrf cookie
  (first-request-after-login is handled by the login handler's own
  Set-Cookie).

Tests:
- middleware/session.test.js: 4 direct unit tests over parse with
  an injected clock — verifies sid refresh, csrf value preservation,
  no-refresh when csrf absent, no-refresh on expired session.
- tests/auth.routes.test.js: 2 integration tests confirming that
  GET /auth/me after login emits both Set-Cookie headers with
  Expires, and that unauthenticated GET /auth/me does not.

109 tests total, 13 suites, full run ~5s.

Made-with: Cursor
@sidhujag
Member Author

@codex review


@chatgpt-codex-connector (Bot) left a comment


💡 Codex Review


Reviewed commit: e6a417d09e


Comment thread middleware/rateLimit.js Outdated

function loginKey(req) {
const raw = (req.body && req.body.email) || '';
return `login|${req.ip}|${normalizeEmail(raw)}`;

P1 Badge Use ipKeyGenerator when deriving rate-limit keys from req.ip

The login limiter key uses raw req.ip, which express-rate-limit documents as unsafe for custom key generators because IPv6 clients can rotate addresses to bypass throttling unless ipKeyGenerator subnet-masks them. On dual-stack/IPv6 deployments this weakens brute-force protection for /auth/login by letting a single attacker spread attempts across many effective keys.

Useful? React with 👍 / 👎.
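To make the masking idea concrete without depending on express-rate-limit, here is a simplified illustration of collapsing an IPv6 address to its /56 block. The real `ipKeyGenerator` also handles fully-compressed forms, zone IDs, and configurable prefix lengths, so treat this strictly as a sketch; it assumes the first four 16-bit groups are written out:

```javascript
// Simplified /56 bucketing: IPv4 passes through unchanged; for IPv6 the
// first 48 bits (three groups) plus the high byte of the fourth group
// identify the bucket, so one allocation cannot rotate low bits to dodge
// the limiter. Illustration only — not express-rate-limit's implementation.
function ipBucket(ip) {
  if (!ip.includes(':')) return ip; // IPv4: use as-is
  const groups = ip.split(':');
  const fourth = (groups[3] || '0').padStart(4, '0').slice(0, 2); // high byte
  return `${groups[0]}:${groups[1]}:${groups[2]}:${fourth}00::/56`;
}
```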

Comment thread lib/vaults.js Outdated
e.code = 'etag_required';
throw e;
}
if (ifMatch !== '*' && ifMatch !== existing.etag) {

P2 Badge Reject wildcard If-Match once a vault row already exists

Allowing If-Match: * on updates bypasses the optimistic-concurrency contract and lets stale or buggy clients overwrite newer vault data without detecting conflicts. Because the route advertises ETag-based conflict detection, accepting * after the first write can silently clobber ciphertext that was written by another active session/tab.

Useful? React with 👍 / 👎.
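The tightened precondition can be sketched as follows: wildcard is honored only when no row exists yet (nothing to race against); once a row exists the client must echo the exact ETag it last read. Error codes follow the ones quoted in the thread; the function shape is an assumption:

```javascript
// Sketch of the If-Match check after the fix: '*' is a valid precondition
// only for the first write; on an existing row it is treated as a mismatch
// so stale clients cannot force-overwrite newer ciphertext.
function checkIfMatch(existing, ifMatch) {
  if (!existing) return true; // first write: nothing to clobber yet
  if (!ifMatch) {
    const e = new Error('etag required');
    e.code = 'etag_required';
    throw e;
  }
  if (ifMatch === '*' || ifMatch !== existing.etag) {
    const e = new Error('etag mismatch');
    e.code = 'etag_mismatch'; // maps to HTTP 412 at the route layer
    throw e;
  }
  return true;
}
```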

Comment thread lib/mailer.js Outdated
async function deliver(msg) {
if (!msg.to) throw new Error('mailer: missing "to"');
const envelope = { ...msg, from };
outbox.push(envelope);

P2 Badge Avoid storing all outbound mail in memory for SMTP/log modes

deliver() always appends to outbox before checking transport, so production smtp/log sends accumulate forever in process memory. Under normal verification/resend/password-change traffic this creates unbounded heap growth and eventual memory pressure; retaining messages should be limited to memory test mode (or bounded).

Useful? React with 👍 / 👎.
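A sketch of the bounded-retention fix: only the in-memory test transport keeps sent mail; smtp/log deliveries are handed off and forgotten. Names (`createMailer`, `deliver`, `outbox`) follow the fragment above, but the body is illustrative:

```javascript
// Sketch: outbox is a test affordance, so it is only populated when the
// transport is 'memory'. Long-running smtp/log processes no longer grow
// an unbounded array of every verification/password-change email.
function createMailer({ transport }) {
  const outbox = [];
  async function deliver(msg) {
    if (!msg.to) throw new Error('mailer: missing "to"');
    if (transport === 'memory') outbox.push(msg); // retain for tests only
    // smtp/log transports would hand the message off here and drop it
  }
  return { deliver, outbox };
}
```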

…box leak)

Three findings surfaced on the round-1 fixes.

--- P1 middleware/rateLimit.js
Use ipKeyGenerator from express-rate-limit rather than raw req.ip. For
IPv4 it's a no-op; for IPv6 it masks to /56 so a single allocation
can't rotate addresses inside its block to bypass the 5-attempt login
bucket.

--- P2 lib/vaults.js
Reject If-Match: * once a vault row exists. Wildcard was previously
accepted as 'force overwrite' for recovery, but that silently defeats
the optimistic-concurrency contract and lets stale/buggy clients
clobber newer data without detecting the conflict. Clients must now
echo the exact etag they observed; true recovery happens via an
explicit 'reset vault' flow (out of scope here). The wildcard
allowance on FIRST write is preserved — that's the legitimate
'there's nothing to race against yet' case.

--- P2 lib/mailer.js
Only populate `outbox` when transport === 'memory'. The log and
smtp transports are long-running; appending every verification /
password-change / vote-reminder email into a process-lifetime array
was an unbounded memory leak. Outbox is a test affordance and now
behaves like one.

Tests (+5, 114 total):
- middleware/rateLimit.test.js: IPv6 /56 collapse proven for both
  login and register key generators.
- lib/vaults.test.js + tests/vault.routes.test.js: wildcard PUT
  against an existing vault yields etag_mismatch / 412 and the blob
  is verified untouched afterward.
- lib/mailer.test.js: 50 log-mode sends leave outbox empty; smtp
  mode's configuration-error path also leaves outbox empty.

Made-with: Cursor
@sidhujag
Member Author

@codex review


@chatgpt-codex-connector (Bot) left a comment


💡 Codex Review


Reviewed commit: 4bafe7d4ec


Comment thread lib/vaults.js Outdated
Comment on lines +90 to +91
const etag = etagFor(blob);
upsert.run(userId, existing.saltV, blob, etag, now());

P1 Badge Make vault write conditional on current ETag

put() checks ifMatch against a prior read, but the actual write is an unconditional upsert, so the ETag precondition is not enforced at write time. In a multi-process deployment (for example multiple Node workers sharing the same SQLite file), two concurrent writes with the same stale ETag can both succeed and the later one silently overwrites the earlier ciphertext, violating the optimistic-concurrency guarantee. The write should be conditional on the current ETag (UPDATE ... WHERE user_id=? AND etag=?) or performed inside a transaction that re-validates before commit.

Useful? React with 👍 / 👎.
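The compare-and-set shape the comment asks for can be sketched like this. The SQL strings show the conditional-write idea (the real code would use prepared better-sqlite3 statements); the in-memory `casWrite` models the same semantics for illustration:

```javascript
// Sketch: the ETag precondition lives in the UPDATE's WHERE clause, so a
// stale writer observes changes === 0 instead of silently clobbering.
// Statement shapes and column names are illustrative assumptions.
const INSERT_VAULT =
  'INSERT INTO vaults (user_id, salt_v, blob, etag, updated_at) VALUES (?, ?, ?, ?, ?)';
const UPDATE_VAULT =
  'UPDATE vaults SET blob = ?, etag = ?, updated_at = ? WHERE user_id = ? AND etag = ?';

// In-memory model of the same compare-and-set: returns the number of
// rows changed, mirroring better-sqlite3's `changes`.
function casWrite(store, userId, blob, newEtag, expectedEtag) {
  const row = store.get(userId);
  if (!row || row.etag !== expectedEtag) return 0; // raced: client must re-read
  store.set(userId, { etag: newEtag, blob });
  return 1;
}
```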

Comment thread middleware/rateLimit.js
Comment on lines +22 to +23
function ipBucket(req) {
return ipKeyGenerator(req.ip || '');

P2 Badge Derive client IP correctly before rate-limit bucketing

The limiter keys are derived from req.ip, but there is no trust proxy configuration in the app wiring. When the service is deployed behind a reverse proxy/load balancer, Express defaults to the proxy socket address, which collapses many users into one bucket and allows a few failed attempts to throttle unrelated users. Configure trusted proxy hops (or another trusted client-IP extraction path) before using these IP-based keys.

Useful? React with 👍 / 👎.
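One way to wire the trusted-proxy setting, sketched as a pure function that turns a `TRUST_PROXY` env string into the value Express's `app.set('trust proxy', ...)` accepts. The env var name matches the later commit in this PR; the parsing rules are an assumption:

```javascript
// Sketch: translate TRUST_PROXY into an Express 'trust proxy' value.
// Defaults to 'loopback' so local dev is safe without configuration.
function parseTrustProxy(raw) {
  if (raw === undefined || raw === '') return 'loopback'; // safe default
  if (/^\d+$/.test(raw)) return Number(raw); // hop count, e.g. '1'
  if (raw === 'true') return true;
  if (raw === 'false') return false;
  return raw; // named preset or CIDR list, passed through verbatim
}
```

Usage would be `app.set('trust proxy', parseTrustProxy(process.env.TRUST_PROXY))` before mounting the limiters.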

--- P1 lib/vaults.js
ETag precondition is now enforced in the WHERE clause of the UPDATE,
not only against a prior read. Previously the upsert was unconditional
and a pair of concurrent workers (e.g. multi-process Node or two
browser tabs racing) with the same stale etag could both pass the
pre-check and clobber each other. The repo now uses:

- INSERT for the first write. The user_id primary-key constraint
  guarantees at most one concurrent-create wins; the loser trips
  SQLITE_CONSTRAINT_PRIMARYKEY and we surface it as etag_mismatch
  so clients re-read rather than seeing a 500.
- UPDATE ... WHERE user_id = ? AND etag = ? for subsequent writes.
  changes === 0 means some other writer raced past us and the client
  must re-read to pick up the newer etag.

Preserves all existing behaviors:
- First write still accepts If-Match: *.
- Once a row exists, wildcard is rejected (round-2 fix).
- error codes and HTTP mappings unchanged.

--- P2 middleware/rateLimit.js + server.js
Without 'trust proxy' set, Express treats the reverse-proxy socket
address as req.ip, so every real client collapses into one rate-limit
bucket (a handful of failures throttle everybody). server.js now
calls app.set('trust proxy', ...) from a new TRUST_PROXY env var,
defaulting to 'loopback' for safe local dev. Ops docs in .env.example
cover the common values (single hop = 1, CIDR, etc.).

Tests (+7, 120 total):
- lib/vaults.test.js: direct SQL-statement test proves stale etag in
  WHERE clause produces changes=0; simulated-race test shows put()
  rejects stale ifMatch against a newer row; PK-collision path is
  exercised.
- tests/trustProxy.smoke.test.js: asserts default ignores
  X-Forwarded-For (safe), loopback setting honors it from loopback
  peer, and hop-count=2 correctly extracts the real client address
  from a chained XFF.

Made-with: Cursor
@sidhujag
Member Author

@codex review


@chatgpt-codex-connector (Bot) left a comment


💡 Codex Review


Reviewed commit: 54e8e9f548


Comment thread routes/auth.js Outdated
Comment on lines +77 to +80
const user = users.create({
email: parsed.data.email,
authHash: parsed.data.authHash,
});

P1 Badge Defer credential binding until email ownership is proven

POST /auth/register writes the submitted authHash immediately via users.create(...) before mailbox ownership is verified. Because duplicate registrations are then swallowed in the email_taken path, an attacker can pre-register another person's email with their own credential, wait for the real owner to verify the email link, and then authenticate as that now-verified account while the owner cannot log in. This is a pre-account-takeover flow; credential binding should happen only after verification (or be reset in a verification-gated step).

Useful? React with 👍 / 👎.

Comment thread routes/auth.js Outdated
Comment on lines +105 to +107
const user = users.findByEmail(parsed.data.email);
if (user && !user.emailVerified) {
await sendVerificationEmail(user);

P2 Badge Make resend timing independent of account existence

/auth/resend-verification only awaits sendVerificationEmail when a matching unverified user exists, but returns immediately for unknown emails. With SMTP transport this creates a measurable latency gap, so attackers can enumerate valid accounts even though the JSON/status are always 202. To preserve the non-enumeration guarantee, both branches should have equivalent observable timing (for example, enqueue async work or add a uniform delay).

Useful? React with 👍 / 👎.
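One shape the comment's suggestion could take (the PR later chose a different remedy, removing the endpoint entirely): push everything account-dependent into a background task so the 202 returns in constant time for both branches. `schedule` is an assumed injectable (`setImmediate` in production, inline in tests):

```javascript
// Sketch: the response path does the same cheap work regardless of
// whether the account exists; the lookup and the SMTP send happen off
// the request's critical path, so timing carries no existence signal.
function resendHandler(users, sendVerificationEmail, schedule) {
  return (req, res) => {
    const email = req.body.email;
    schedule(() => {
      const user = users.findByEmail(email);
      if (user && !user.emailVerified) {
        sendVerificationEmail(user).catch(() => {}); // best-effort
      }
    });
    res.status(202).json({ ok: true }); // identical for both branches
  };
}
```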

…ound 4)

Addresses the two findings from the round-4 review on PR #2:

P1 — Defer credential binding until email ownership is proven
  Previously, POST /auth/register wrote (email, HMAC(authHash)) into the
  users table with email_verified = 0. An attacker could pre-register a
  victim's email with the attacker's authHash and wait for the victim to
  click the verification link, ending up with an account bound to the
  attacker's credential.

  We now keep ALL pre-verification state in a new pending_registrations
  table and create the user row only when a specific token is redeemed
  on /verify-email (already-verified). purgeForEmail then wipes every
  other pending row for that email so stale attacker-spawned tokens die.

  - db/migrations/003_pending_registrations.sql creates the new table.
  - lib/pendingRegistrations.js adds the issue/redeem/purgeForEmail/
    cleanupExpired repo (stored_auth is still HMAC-peppered).
  - lib/users.js gains createVerifiedWithStoredAuth, the one-shot
    "create user already verified from a pre-hashed snapshot" path used
    from /verify-email. Duplicates are caught via the email UNIQUE index
    and surfaced as 409 already_verified.
  - routes/auth.js now issues pending rows on /register and creates
    users on /verify-email. It also validates email syntax synchronously
    (that check cannot leak account existence) so clients still get
    fast 400s on malformed input.

P2 — Make resend timing independent of account existence
  /auth/resend-verification branched on users.findByEmail and therefore
  leaked account existence via response timing. We removed the endpoint
  entirely. /register is now idempotent and always returns 202 in
  constant time: it schedules mail work in the background, and for
  already-verified emails it silently no-ops (no mail, same 202).

Other changes:
  - Drop verifyEmailResendLimiter + resendKey from middleware/rateLimit
    (+ corresponding unit tests), wire pendingRegistrations through
    appFactory.buildServices and mountAuthAndVault, thread an injectable
    `scheduler` so tests can run background work inline.
  - server.js expands the hourly housekeeping sweep to also drain
    pending_registrations (bounded by 30-minute TTL in normal operation
    but belt-and-braces against abuse).
  - tests/auth.routes.test.js updates the "unverified login" assertion
    (no user row exists pre-verification so login now returns 401
    invalid_credentials rather than 403 email_not_verified), removes
    /resend-verification tests, and adds coverage for the deferred-
    binding attack scenario + the "already verified, silent resend"
    branch.

All 129 tests pass.

Made-with: Cursor
@sidhujag
Member Author

@codex review


@chatgpt-codex-connector (Bot) left a comment


💡 Codex Review


Reviewed commit: 4a245bbc94


Comment thread routes/auth.js Outdated
Comment on lines +161 to +164
if (err.code === 'email_taken') {
// Lost the race to another concurrent verify for the same email.
pendingRegistrations.purgeForEmail(email);
return res.status(409).json({ error: 'already_verified' });

P1 Badge Handle verify for existing unverified accounts

In /auth/verify-email, an email_taken insert failure is always treated as already_verified and all pending tokens are purged. This breaks users who already have a users row with email_verified=0 (for example from pre-deferred registration data): redeeming a new pending token can never mark that row verified, login continues to return email_not_verified, and the user is locked out until manual DB intervention.

Useful? React with 👍 / 👎.

… round 5)

Codex round-5 P1:
  After moving to deferred credential binding (round 4), /verify-email
  always returned 409 already_verified whenever an existing users row was
  found, regardless of whether that row was email_verified. Any row left
  over from the pre-deferred /auth/register flow (email_verified = 0)
  became permanently stranded — verify-email would never promote it,
  login would keep returning email_not_verified, and the account could
  only be recovered by hand-editing the DB.

Two-layer fix:

1. Migration-time cleanup.
   db/migrations/003_pending_registrations.sql now ends with
     DELETE FROM users WHERE email_verified = 0;
   so the legacy rows are drained at deploy time. Those rows were
   never trustworthy (their stored_auth belonged to whoever last
   submitted /register for that email, not a verified owner) and they
   were never completable under the new flow — dropping them costs
   nothing but forces affected users to re-register.

2. Code-level defense.
   users.promoteUnverifiedWithStoredAuth rewrites stored_auth and
   flips email_verified on an existing row only when it is still
   unverified — guarded by "WHERE email_verified = 0" so it cannot
   overwrite credentials on a verified account.

   /verify-email now resolves three cases in order:
     (a) existing verified row  → 409 already_verified,
     (b) existing unverified row → promote in place,
     (c) no row                  → createVerifiedWithStoredAuth.

   The email_taken branch on (c) re-checks the row: if the concurrent
   insert produced a verified row, fold into (a); if not (effectively
   impossible but worth catching), 500 rather than silently claim
   success.

Tests:
  - lib/users.test.js covers promoteUnverifiedWithStoredAuth end to
    end, including the invariant that already-verified rows are never
    rewritten even if called with an attacker's stored_auth.
  - lib/db.test.js applies migrations 001+002, seeds mixed
    verified/unverified rows, then applies 003 and asserts only
    verified rows survive.
  - tests/auth.routes.test.js adds an integration test where a
    pre-existing unverified row is successfully rotated by a fresh
    /register → /verify-email round trip and the login binds to the
    new authHash (old one is rejected).

All 134 tests pass.

Made-with: Cursor
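The three-case resolution described in that commit can be sketched as a small dispatcher (the outcome labels mirror the commit text; the function itself is illustrative):

```javascript
// Sketch of /verify-email's three-way resolution after the round-5 fix:
//  (a) existing verified row   -> 409 already_verified
//  (b) existing unverified row -> promote in place (guarded update)
//  (c) no row                  -> createVerifiedWithStoredAuth
function resolveVerify(existing) {
  if (existing && existing.emailVerified) return 'already_verified';
  if (existing) return 'promote_unverified';
  return 'create_verified';
}
```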
@sidhujag
Member Author

@codex review


@chatgpt-codex-connector (Bot) left a comment


💡 Codex Review


Reviewed commit: 58a2e4e798


Comment thread routes/auth.js Outdated
Comment on lines +67 to +70
} catch (err) {
// eslint-disable-next-line no-console
console.error('[auth] issueAndMail failed', err && err.message);
}

P1 Badge Fail registration when token/email issuance fails

The issueAndMail helper swallows every exception and /auth/register still returns 202, so a production misconfiguration (for example SYSNODE_AUTH_PEPPER missing, which makes pendingRegistrations.issue throw) or SMTP/DB failure produces a false-success response while no verification link is deliverable. In that scenario users are told registration succeeded but can never complete signup, and operators only get a server log line.

Useful? React with 👍 / 👎.
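The two-phase split the subsequent fix describes can be sketched as follows: token issuance is synchronous and its failure surfaces as a 500, while only the mail send remains best-effort background work. Handler and store names are assumptions:

```javascript
// Sketch: issuance failures (e.g. missing pepper, DB write error) must
// not be swallowed — only the SMTP send is retryable best-effort work.
function registerHandler(pendingRegistrations, sendMail, schedule) {
  return (req, res) => {
    let token;
    try {
      token = pendingRegistrations.issue(req.body.email); // must succeed
    } catch (err) {
      return res.status(500).json({ error: 'server_error' });
    }
    schedule(() => {
      sendMail(req.body.email, token).catch(() => {}); // re-POST retries
    });
    return res.status(202).json({ ok: true });
  };
}
```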

Comment thread server.js Outdated
const dbPath = process.env.SYSNODE_DB_PATH || './data/sysnode.db';
const db = openDatabase(dbPath);
const mailer = createMailer({
transport: process.env.SMTP_HOST ? 'smtp' : 'log',

P1 Badge Enforce SMTP config in production startup

Server boot currently falls back to transport: 'log' whenever SMTP_HOST is unset, even in production. That means verification and password-change emails are only printed to stdout while auth endpoints continue returning success, effectively breaking account verification and security notifications without failing health checks. This should fail fast (or require explicit non-SMTP override) in production mode.

Useful? React with 👍 / 👎.
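A sketch of the transport selector the later commit describes (`lib/mailTransport.selectMailTransport`): an explicit `MAIL_TRANSPORT` wins except that `memory` is refused in production, otherwise `SMTP_HOST` decides, and production with neither fails fast. The rules follow the commit text; this is not the shipped implementation:

```javascript
// Sketch: fail at deploy time rather than silently logging mail to
// stdout in production while auth endpoints keep returning success.
function selectMailTransport({ mailTransport, smtpHost, nodeEnv }) {
  const prod = nodeEnv === 'production';
  if (mailTransport) {
    const t = mailTransport.toLowerCase();
    if (t === 'memory' && prod) throw new Error('memory transport is test-only');
    return t; // explicit operator choice is honored, including 'log'
  }
  if (smtpHost) return 'smtp';
  if (prod) throw new Error('production requires SMTP_HOST or an explicit MAIL_TRANSPORT');
  return 'log'; // dev convenience default
}
```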

Two P1 findings from round-6 review on PR #2.

P1 #1 — routes/auth.js: register must fail when token issuance fails
  Round-4 moved the entire register workflow into a background
  schedule() call so response timing carried no signal about account
  state. The unintended consequence: any failure inside that callback
  (e.g. pendingRegistrations.issue throws because SYSNODE_AUTH_PEPPER
  is missing, or an unexpected DB write error) was swallowed to stderr
  while the user got a cheerful 202. They'd "check their email"
  forever.

  Split the work into two phases:
    - Synchronous (must succeed): parse + email syntax check, look up
      any existing verified owner, and — if not already verified —
      issue the pending_registrations row. Any failure here returns
      500 so misconfigurations are caught at first registration.
    - Background (best-effort): mailer.sendVerification. Transient
      SMTP failure is logged and /register still returns 202; the
      user can simply re-POST to get a fresh token + retry.

  Timing cross-talk between "already verified" and "new" is now
  bounded to a SELECT plus a single INSERT (hundreds of microseconds),
  well below TLS/network jitter. The seconds-scale leak that motivated
  Round 4 — Argon2 and SMTP in the critical path — stays gone.

P1 #2 — server.js: don't silently ship production with stdout-only mail
  The startup code used `transport: SMTP_HOST ? 'smtp' : 'log'`, which
  meant missing SMTP configuration in production just redirected
  verification + password-change mails to stdout while /auth kept
  returning success. Health checks stayed green. Operators found
  out only when users complained.

  Extracted the selection logic into lib/mailTransport.selectMailTransport
  (directly unit-testable). Behaviour:
    - MAIL_TRANSPORT=smtp|log|memory is always honored (case-insensitive)
      — except 'memory' is refused in production (it is a test-only
      transport).
    - Otherwise: SMTP_HOST set → 'smtp'; unset → 'log' in dev / throw
      in production.
    - Operators can still opt into stdout delivery in production by
      setting MAIL_TRANSPORT=log explicitly (dry-run deploys, staging
      relays, etc.).

Tests:
  - lib/mailTransport.test.js: covers every branch of the selector
    including the "refuse production + no SMTP" guard and the 'memory'
    rejection.
  - tests/auth.routes.test.js: asserts /register returns 5xx when
    pendingRegistrations.issue throws, and still returns 202 (with
    the pending row persisted) when mailer.sendVerification throws.

.env.example documents the new MAIL_TRANSPORT knob and the production
guard expectations.

All 145 tests pass.

Made-with: Cursor
@sidhujag
Member Author

@codex review


@chatgpt-codex-connector (Bot) left a comment


💡 Codex Review


Reviewed commit: 626f98e98a


Comment thread lib/kdf.js
Comment on lines +91 to +94
try {
candidate = hashAuthHash(providedAuthHash);
} catch {
return false;

P1 Badge Surface pepper config failures instead of auth mismatch

verifyAuthHash swallows every exception from hashAuthHash and returns false, which makes production misconfiguration of SYSNODE_AUTH_PEPPER look like ordinary bad credentials. In that state, all logins fail with 401 rather than surfacing an internal error or failing fast, which can cause a full auth outage that is hard to diagnose. This catch should distinguish malformed user input from server configuration failures (or let config errors propagate).


…(Codex round 7)

P1 — kdf.verifyAuthHash swallowed every error and returned false, so a
production deploy with a missing or malformed SYSNODE_AUTH_PEPPER made
every login look like "wrong password" instead of "server broken".
Operators would see rising 401 rates with no alerts, and users would
be locked out en masse.

Fix splits the two error classes cleanly:
  - KdfConfigError (code: 'kdf_config') is thrown by loadPepper when the
    pepper env var is missing/short in production. verifyAuthHash now
    RETHROWS this — it is not a credential mismatch and must not be
    silenced.
  - Input-shape errors from hashAuthHash (non-string / empty / non-hex
    authHash) remain folded to `return false`. Zod schemas at the route
    level already reject these, so this is just belt-and-braces against
    a malformed DB row or an untrusted internal caller.

Routes catch KdfConfigError from users.verifyAuth and respond with
503 server_misconfigured (plus a console.error with the message) so
ops monitoring picks up a real signal distinct from auth failures.

Boot-time guard: server.js now calls assertPepperConfigured() before
binding anything, joining selectMailTransport() in the "fail at deploy
rather than at first real user" pattern.

Tests:
  - lib/kdf.test.js covers KdfConfigError identity, propagation through
    verifyAuthHash, the "input-shape still returns false" invariant,
    and assertPepperConfigured's happy/sad paths.
  - tests/auth.routes.test.js adds an end-to-end test that removes the
    pepper at runtime and asserts /auth/login returns 503 / not 401.

All 150 tests pass.

Made-with: Cursor
@sidhujag
Member Author

@codex review

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment

💡 Codex Review

Reviewed commit: ea99ec7726

Comment thread lib/mailer.js Outdated
}
if (transport === 'smtp') {
if (!smtpTransport) {
if (!smtp.host) throw new Error('mailer: SMTP_HOST is required');

P1 Badge Validate SMTP config during mailer initialization

createMailer defers the SMTP_HOST check until the first outbound email, so the process can start healthy with transport: 'smtp' and only fail when /auth/register or password-change mail is sent. In production this yields repeated 202/200 API responses while all email delivery fails at runtime, which is exactly the silent outage this commit’s startup-sanity logic is trying to prevent. Validate required SMTP fields when the mailer is constructed (or when selecting transport), not lazily on first send.


…d 8)

P1 — Before this change, createMailer deferred the SMTP_HOST check into
the first call of deliver(). A production deploy with transport='smtp'
and an unset/misspelled SMTP_HOST started healthy, passed health
probes, accepted /auth/register (now returning a sync 202 after the
round-6 split), and then silently dropped every verification email
when the first background send threw. The very outage round 6 tried to
prevent still slipped through on the construction path.

Validation is now eager and exhaustive at createMailer time:
  - unknown transport names → throw.
  - transport='smtp' requires SMTP_HOST (non-empty, non-whitespace).
  - SMTP_PORT must be a finite int in (0, 65535].
  - SMTP_USER / SMTP_PASS must both be set or both be unset (partial
    credentials are almost always a typo; nodemailer would otherwise
    attempt authless delivery, which most relays silently reject).
  - We also build the nodemailer transport up front so obviously
    malformed config surfaces at startup rather than at first send.
    (nodemailer.createTransport validates shape only; no network
    I/O happens until sendMail.)
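A condensed sketch of those eager guards as a pure validation step (field names are assumptions from this thread; the real createMailer also constructs the nodemailer transport up front):

```javascript
// Hypothetical validator run at createMailer time, before any send.
function validateSmtpConfig(smtp) {
  const host = (smtp.host || '').trim();
  if (!host) throw new Error('mailer: SMTP_HOST is required');

  const port = Number(smtp.port);
  if (!Number.isInteger(port) || port <= 0 || port > 65535) {
    throw new Error('mailer: SMTP_PORT must be an integer in 1..65535');
  }

  // Partial credentials are almost always a typo — reject at startup.
  const hasUser = Boolean(smtp.user);
  const hasPass = Boolean(smtp.pass);
  if (hasUser !== hasPass) {
    throw new Error('mailer: SMTP_USER and SMTP_PASS must be set together');
  }

  return { host, port, user: smtp.user || null, pass: smtp.pass || null };
}
```

Failing here means the process never binds a port with broken mail config — the "fail at deploy rather than at first real user" pattern.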

Tests:
  - lib/mailer.test.js adds a dedicated "eager validation" block
    covering every guard above, the legitimate happy path, and the
    unknown-transport rejection. The older "smtp with empty host
    throws at send time" test was replaced by its constructor-time
    equivalent so the contract stays pinned.

All 155 tests pass.

Made-with: Cursor
@sidhujag
Member Author

@codex review

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment

💡 Codex Review

Reviewed commit: 526e8e894c

Comment thread routes/auth.js
console.error('[auth/login] kdf config error', err.message);
return res.status(503).json({ error: 'server_misconfigured' });
}
throw err;

P1 Badge Replace thrown errors in async handlers with next(err)

This throw err sits inside an async Express route handler; with express@4.18.2, rejected handler promises are not automatically routed through Express error middleware. If users.verifyAuth throws any non-kdf_config exception (for example, a transient DB failure), this becomes an unhandled rejection and can terminate the Node process (or at minimum leave the request without a controlled 5xx response). Return an explicit 500 or call next(err) instead; the same pattern is repeated in /auth/change-password.


…or-middleware safety net (Codex round 9)

Express 4 does not route rejected async-handler promises through the
error pipeline — a bare `throw err` inside `router.post('/login', async ...)`
becomes an unhandled rejection and can crash the process.

Route-level fix (primary, narrow):
- `routes/auth.js`: `/login` and `/change-password` now catch any
  non-KdfConfigError thrown by `users.verifyAuth`, log it, and respond
  `500 internal` instead of re-throwing. `KdfConfigError` still maps to
  `503 server_misconfigured` (Codex round 7). The only observable
  behaviour change is that transient DB errors now produce a controlled
  5xx rather than an unhandled rejection.

Defense-in-depth:
- `lib/appFactory.js`: added a last-chance 4-arg error middleware scoped
  to `/auth` and `/vault`. It preserves well-known status codes attached
  by upstream middleware (e.g. body-parser's 413 PayloadTooLargeError
  stays a 413) and only falls back to 500 for uncategorized throws. This
  catches future handlers that forget to wrap a throw, and covers sync
  errors forwarded via `next(err)`.

Tests:
- `tests/auth.routes.test.js`: new test asserts `/auth/login` returns
  500 and does NOT emit an unhandledRejection event when
  `users.verifyAuth` throws a generic error. Full suite: 156/156 pass.

Made-with: Cursor
@sidhujag
Member Author

@codex review

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment

💡 Codex Review

Reviewed commit: 3a96a5d37c

Comment thread routes/auth.js Outdated
Comment on lines +169 to +173
router.post('/verify-email', async (req, res) => {
const parsed = VerifySchema.safeParse(req.body);
if (!parsed.success) return badRequest(res, 'invalid_token');

const redeemed = pendingRegistrations.redeem(parsed.data.token);

P1 Badge Catch all verify-email failures in async handler

Because /auth/verify-email is declared async on Express 4.18.2, any unexpected throw from the repo calls in this handler rejects the returned promise instead of hitting your JSON error path. In production, a transient SQLite failure here can leave the request hanging and surface as an unhandled rejection (process instability) rather than a controlled 5xx response, so the full handler body should be wrapped in a catch path (or made non-async with explicit error handling).


Comment thread routes/auth.js
Comment on lines +272 to +275
const { token, expiresAt } = sessions.issue(user.id, {
userAgent: req.get('user-agent') || null,
ip: req.ip,
});

P1 Badge Guard login session creation against uncaught DB errors

The /auth/login try/catch only protects users.verifyAuth; if sessions.issue throws (for example on a transient SQLite write error), this async handler rejects after authentication and bypasses Express 4 error routing. That converts a recoverable backend failure into an unhandled rejection/no structured response path, so session issuance should be inside the same guarded error handling.


Comment thread routes/auth.js Outdated
Comment on lines +345 to +348
users.updateAuthHash(req.user.id, parsed.data.newAuthHash);
sessions.revokeAllForUser(req.user.id);
// Re-issue a fresh session for the current request.
const { token, expiresAt } = sessions.issue(req.user.id, {

P1 Badge Wrap password-change writes in an exception handler

In /auth/change-password, the DB mutations after old-password verification (updateAuthHash, session revocation, and new session issuance) run outside try/catch. Since this callback is async under Express 4, any throw from those writes becomes an unhandled rejected promise instead of the intended API error response, which can crash the process depending on Node rejection settings.


…ute to 500 (Codex round 9 follow-up)

Round-9 follow-up review flagged three P1s — /verify-email, /login
post-verify, and /change-password post-verify — where async route
handlers performed DB writes outside try/catch. In Express 4, a
rejected promise returned by an async handler does NOT hit error
middleware: it becomes an unhandled rejection and can crash the
process, or worse, leave the client hanging on a socket until TCP
timeout.

Previous fix (round 9, commit 3a96a5d) only wrapped the `users.verifyAuth`
call site in login/change-password. Codex correctly pointed out
everything after verifyAuth was still unguarded.

This commit installs a proper `asyncHandler(fn)` wrapper in
routes/auth.js that converts any rejected promise into `next(err)`,
so the existing 4-arg error middleware in lib/appFactory.js handles
them. All four async handlers (/register, /verify-email, /login,
/change-password) now use it. Specific status-code mappings
(KdfConfigError → 503, email_taken race → 409, invalid_email →
400) stay as explicit try/catch — only the generic "something broke"
path bubbles.
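The wrapper itself is small; a sketch of the shape described here (hypothetical, not the repo's exact code):

```javascript
// Express 4 does not forward rejected async-handler promises to error
// middleware, so wrap every async route: any rejection becomes next(err)
// and the 4-arg error middleware turns it into a controlled 5xx.
function asyncHandler(fn) {
  return function wrapped(req, res, next) {
    Promise.resolve(fn(req, res, next)).catch(next);
  };
}
```

Usage: `router.post('/login', asyncHandler(async (req, res) => { ... }))` — specific mappings like KdfConfigError → 503 stay as explicit try/catch inside the handler body; only the generic failure path bubbles to the error middleware.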

Tests:
- Renamed the single round-9 test into a block of four, covering:
    • /login verifyAuth throws
    • /login sessions.issue throws AFTER successful auth
    • /verify-email pendingRegistrations.redeem throws
    • /change-password updateAuthHash throws AFTER successful auth
  Each asserts (a) HTTP 500 wire response, (b) no process-level
  unhandledRejection event fires. Shared `expectNoUnhandledRejection`
  helper keeps the assertions consistent.

Full suite: 159/159 green, no lint.

Made-with: Cursor
@sidhujag
Member Author

@codex review

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment

💡 Codex Review

Reviewed commit: 7d0a87e2ff

Comment thread routes/auth.js Outdated
const parsed = VerifySchema.safeParse(req.body);
if (!parsed.success) return badRequest(res, 'invalid_token');

const redeemed = pendingRegistrations.redeem(parsed.data.token);

P1 Badge Keep verification token valid until account write succeeds

/auth/verify-email destructively redeems the pending token before any user-row write happens, so a transient DB failure after this point returns 500 with the victim token already burned while other pending tokens for the same email can still exist. In that failure window, the legitimate user cannot retry with their original link, and a competing pending token can still be redeemed later to claim the account; make redemption + user creation/promotion atomic (or restore/purge on failure) to avoid this takeover path under write errors.


Comment thread routes/auth.js Outdated
Comment on lines +363 to +365
users.updateAuthHash(req.user.id, parsed.data.newAuthHash);
sessions.revokeAllForUser(req.user.id);
const { token, expiresAt } = sessions.issue(req.user.id, {

P2 Badge Wrap password-change state updates in one transaction

The password-change flow performs updateAuthHash, revokeAllForUser, and issue as separate writes; if a DB error occurs mid-sequence, the handler returns 500 with partially-applied security state (for example, auth hash rotated but old sessions not revoked, or all sessions revoked without issuing the replacement session). This should be executed atomically so the operation is all-or-nothing and does not leave inconsistent authentication/session state after transient failures.


…sactions (Codex round 10)

Two new findings from the round-10 review, both real issues not
addressed by the async-handler wrapper:

P1 — /verify-email burned tokens on downstream failure:
  Previously redeem() ran unconditionally up front, so any throw from
  users.* below it consumed the user's token forever (can't retry with
  the original link) while any sibling pending token for the same email
  stayed live as a potential takeover vector.

P2 — /change-password was not all-or-nothing:
  updateAuthHash + revokeAllForUser + sessions.issue ran as three
  separate writes. A failure in #2 after #1 rotates the hash but
  leaves old sessions alive (silent auth bypass window). A failure
  in #3 revokes every session including the current one and the
  user gets 500 while being signed out of everything.

Both are fixed by introducing a shared `runAtomic(fn)` helper wired
through services → auth router. Implementation uses better-sqlite3's
SAVEPOINT-based db.transaction(), which rolls back all writes if the
body throws. Response-header side effects (setSessionCookie,
csrf.issueCookie) stay outside the transaction and only run on commit.
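The rollback contract can be illustrated without a database — a stand-in for runAtomic over an in-memory store (the real helper wraps better-sqlite3's db.transaction(); this sketch only demonstrates the all-or-nothing semantics):

```javascript
// Hypothetical in-memory analogue of the SAVEPOINT-based runAtomic(fn):
// if fn throws, every write inside it is rolled back before rethrowing.
function makeAtomicRunner(store) {
  return function runAtomic(fn) {
    const snapshot = new Map(store); // cheap "savepoint"
    try {
      return fn(); // commit path: writes stay
    } catch (err) {
      store.clear();
      for (const [k, v] of snapshot) store.set(k, v); // rollback
      throw err;
    }
  };
}
```

This is the invariant the change-password test pins: a throw in sessions.issue mid-transaction must leave the auth hash and existing sessions exactly as they were.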

Tests:
- /verify-email atomicity: force users.createVerifiedWithStoredAuth
  to throw on first call → 500 returned → restore the repo and
  re-POST the SAME token → 200 verified. Pre-fix this second POST
  would have been 400 invalid_or_expired_token.
- /change-password atomicity: establish two sessions, force
  sessions.issue to throw mid-transaction → 500. Assert (a) old
  password still logs in / new password doesn't, (b) the second
  pre-existing session is still live (revokeAll rolled back).

159 → 161 tests, all green, no lint.

Made-with: Cursor
@sidhujag
Member Author

@codex review

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment

💡 Codex Review

Reviewed commit: 5f0137f838

Comment thread middleware/csrf.js Outdated
Comment on lines +31 to +33
if (typeof a !== 'string' || typeof b !== 'string') return false;
if (a.length !== b.length) return false;
return crypto.timingSafeEqual(Buffer.from(a), Buffer.from(b));

P1 Badge Handle multibyte CSRF headers without throwing

timingEqual only checks JavaScript string length before calling crypto.timingSafeEqual(Buffer.from(a), Buffer.from(b)); if a client sends a non-ASCII X-CSRF-Token with the same character count as the cookie token, the two buffers can have different byte lengths and timingSafeEqual throws, which turns a normal CSRF mismatch into a 500. This is reachable on any CSRF-protected route and should return a 403 instead of surfacing an internal error.


Comment thread routes/auth.js Outdated
Comment on lines +333 to +335
if (req.sessionToken) sessions.revoke(req.sessionToken);
sessionMw.clearSessionCookie(res);
csrfMw.clearCookie(res);

P2 Badge Clear cookies even if session revocation fails

The logout path revokes the server-side session before clearing sid/csrf cookies, and there is no try/finally; if sessions.revoke throws (for example due to a transient SQLite failure), the request exits via error middleware and the browser keeps both auth cookies. That leaves the user effectively logged in after pressing logout. Clearing cookies should happen regardless of DB revocation outcome.


…clear cookies on logout (Codex round 11)

Round-11 surfaced two new real issues alongside a pile of already-resolved
re-cites (sliding cookie refresh, trust proxy, pepper 503, login session
guard — all of which have landed + have tests).

P1 — middleware/csrf.js: timingEqual used String#length (UTF-16 code
units) to guard crypto.timingSafeEqual, but compared Buffer.from(x)
(default UTF-8). For a 64-char header containing any non-ASCII codepoint,
the resulting byte buffers had different lengths and timingSafeEqual
threw RangeError. Because this ran inside the required CSRF path, a
hostile/accidental multibyte X-CSRF-Token turned a normal 403
csrf_mismatch into a 500. Fix: encode both sides to utf8 bytes first,
compare byte lengths, then timingSafeEqual.

P2 — POST /auth/logout revoked the server-side session BEFORE clearing
cookies with no try/finally. A transient sessions.revoke throw fell
through asyncHandler to the 500 path and the browser kept sid+csrf,
leaving the user effectively logged in. Fix: try/catch the revoke
(log-and-continue), always clear cookies in finally, always respond 200.
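Sketched as a plain function (sessions.revoke and the cookie-helper names follow the commit message; this is not the repo's exact handler):

```javascript
// Hypothetical logout shape: the server-side revoke is best-effort,
// cookie clearing always happens, and the response is always 200.
function logout(req, res, { sessions, clearSessionCookie, clearCsrfCookie }) {
  try {
    if (req.sessionToken) sessions.revoke(req.sessionToken);
  } catch (err) {
    console.error('[auth/logout] revoke failed', err.message); // log and continue
  } finally {
    clearSessionCookie(res); // browser must drop sid + csrf regardless
    clearCsrfCookie(res);
  }
  res.status(200).json({ ok: true });
}
```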

Tests:
- new middleware/csrf.test.js: 6 cases including a dedicated
  "multibyte header doesn't throw" regression test that constructs a
  64-char / 68-byte header and asserts 403 instead of 500.
- new /auth/logout integration test: stub sessions.revoke to throw,
  assert 200 response + both sid and csrf Set-Cookie clear-cookie
  headers on the wire.

161 → 168 tests, 17 suites, all green, no lint.

Made-with: Cursor
@sidhujag
Member Author

@codex review

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment

💡 Codex Review

Reviewed commit: 7c7872effe

Comment thread routes/auth.js Outdated
schedule(async () => {
try {
await mailer.sendVerification({
to: email,

P2 Badge Send verification mail to normalized email

The register flow validates and persists the normalized email, but the actual send uses the raw email input. That means inputs like " user@example.com " pass validation (after normalizeEmail) and create a pending token, yet the SMTP recipient can be malformed; because this send is background best-effort, the API still returns 202 and the user never gets a link. The recipient should be the same normalized value used for validation/storage so delivery behavior matches account identity.


Round-12 surfaced one new P2 alongside the usual re-cites of already-
resolved findings (sliding cookie, trust proxy, pepper 503, login
session guard — all covered by existing tests).

POST /auth/register validated and persisted via normalizeEmail(email)
but handed the raw `req.body.email` to mailer.sendVerification. Inputs
like "  User@Example.COM  " therefore passed the syntax check (after
normalization), wrote a pending_registrations row under
"user@example.com", but hit RCPT TO with the raw whitespaced value.
Because sends are backgrounded / best-effort, the caller saw 202 and
never received a link.

Fix: compute normalizedEmail once at the top of /register and thread
that single value through the users lookup, pendingRegistrations.issue,
and mailer.sendVerification. The repos still normalize internally
(idempotent) so existing callers stay correct.
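The normalization assumed throughout this thread (trim + NFKC + lowercase, per the round-1 limiter comment) is roughly:

```javascript
// Hypothetical sketch of normalizeEmail; the repo's version may differ in detail.
function normalizeEmail(raw) {
  if (typeof raw !== 'string') return '';
  return raw.trim().normalize('NFKC').toLowerCase();
}
```

Computing this once per request and threading the single value through lookup, pending-row issue, and the mailer is what keeps the SMTP recipient identical to the stored account identity.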

Test: a new integration test posts "  User@Example.COM  " and asserts
(a) 202, (b) outbox[0].to === "user@example.com", (c) token redeems
the canonical row and logs in cleanly.

168 → 169 tests, all green, no lint.

Made-with: Cursor
@sidhujag
Member Author

@codex review

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. More of your lovely PRs please.


@sidhujag sidhujag merged commit 084d778 into main Apr 20, 2026
sidhujag pushed a commit that referenced this pull request Apr 21, 2026
…te-submit resolution)

Two new P1 findings from Codex round 2 on PR #8 — the first-round
fixes opened up secondary issues that only showed up under retry
scenarios. Both are real production hazards, not just reviewer
taste.

P1 #2 (routes/govProposals.js, lib/proposalSubmissions.js):
`proposalHash` bakes in `time` (seconds since epoch) — part of the
CGovernanceObject::GetHash() input tuple Core signs under. The
old idempotency check keyed /prepare on `proposalHash`, so two
retries of the same logical /prepare that straddled a one-second
boundary hashed differently and BOTH landed in the DB as
`prepared`, leaving the user with duplicate pending submissions
for one proposal. Fix: add findPreparedByDataHexForUser() on the
submissions repo (scoped to owner, status=prepared, ordered by
recency) and make /prepare look up by the time-free canonical
payload first. On hit we reuse the frozen {time, proposalHash}
from the existing row, skip the RPC pre-flight (Core already
accepted it and rate-limits repeats), and return 200 idempotent.
Regression test injects a monotonic now() and advances the
clock >1s between two identical POST /prepare calls, asserts
single row, same id, same hash, same time.
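The time-free lookup can be sketched against an in-memory row list (function and field names follow this message; the real repo method is a SQL query over better-sqlite3):

```javascript
// Hypothetical: most-recent prepared row for this owner + canonical payload.
function findPreparedByDataHexForUser(rows, userId, dataHex) {
  return rows
    .filter(r => r.userId === userId && r.status === 'prepared' && r.dataHex === dataHex)
    .sort((a, b) => b.createdAt - a.createdAt)[0] || null;
}

// Idempotent /prepare core: reuse the frozen { time, ... } from an existing
// row so retries straddling a one-second boundary cannot fork the hash.
function prepare(rows, userId, dataHex, now) {
  const existing = findPreparedByDataHexForUser(rows, userId, dataHex);
  if (existing) return existing; // 200 idempotent, skip the RPC pre-flight
  const row = { userId, dataHex, status: 'prepared', time: now, createdAt: now };
  rows.push(row);
  return row;
}
```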

P1 #3 (lib/proposalDispatcher.js): the round-1 fix that stopped
flipping "already exists" to `failed` left the row in
`awaiting_collateral` forever if Core kept returning the
duplicate error — no terminal transition, no completion signal
to the user. Core's govobj store is keyed on
CGovernanceObject::GetHash(parent, rev, time, vchData, outpoint,
sig), which computeProposalHash() reproduces exactly at prepare
time (outpoint/sig are empty for top-level user proposals), so
the on-chain hash MUST equal row.proposalHash. Resolve by
calling markSubmitted({ governanceHash: row.proposalHash })
inside the duplicate branch. If the repo rejects with
governance_hash_clash (rare race against a parallel tick or a
coincidentally-identical submission), we log and leave the
state machine to reconcile — still no false-failure email, but
now we have a real terminal path. Two new tests cover both
outcomes.
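A sketch of the duplicate-branch resolution (the markSubmitted signature and the governance_hash_clash code are assumptions taken from this message, not the repo's exact API):

```javascript
// Hypothetical terminal transition for Core's "already exists" error:
// the on-chain hash must equal row.proposalHash, so mark the row
// submitted under it; a rare clash is logged and left to reconcile.
function resolveDuplicate(row, repo, log) {
  try {
    repo.markSubmitted(row.id, { governanceHash: row.proposalHash });
    return 'submitted'; // real terminal path — no false-failure email
  } catch (err) {
    if (err.code === 'governance_hash_clash') {
      log(`clash on ${row.proposalHash}; leaving state machine to reconcile`);
      return 'awaiting_collateral';
    }
    throw err;
  }
}
```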

Full BE suite green (29 suites / 676 tests).

Made-with: Cursor