feat(gov-proposals): governance proposal creation (PR 8 backend)#8
… step 1/3)
First slice of the governance proposal creation feature. Pure backend,
no wiring yet — appFactory + mailer template land in the next commit so
CI can exercise these modules in isolation first.
Summary:
- `lib/proposalHash.js` — JS port of Syscoin Core's
`CGovernanceObject::GetHash()`. We need this because Core only
exposes the hash via `gobject_prepare`, which ALWAYS burns 150 SYS
from the node wallet. Computing it locally lets the dApp hand the
OP_RETURN bytes to the user's own wallet (Pali or Syscoin-Qt) and
keep the backend wallet-less. Verified against a frozen golden
vector plus sensitivity tests.
- `lib/proposalValidate.js` — canonical JSON serializer + structural
validator. Emits exactly the on-chain fields (type, name,
start_epoch, end_epoch, payment_address, payment_amount, url) with
a fixed key order and non-scientific decimal amounts so the hash
is byte-deterministic. Stricter than Core on URL scheme
(requires http(s)://) as a social-engineering guardrail.
- `db/migrations/001_init.sql` (in-place) — `proposal_drafts` and
`proposal_submissions` tables. Submissions snapshot the canonical
hashing fields (parent_hash, revision, time_unix, data_hex,
proposal_hash) as immutable; the collateral OP_RETURN commits to
these, so freezing them at create() time prevents drift.
- `lib/proposalDrafts.js` + tests — plaintext per-user drafts,
BigInt-safe amounts (.safeIntegers(true)).
- `lib/proposalSubmissions.js` + tests — state machine
(prepared → awaiting_collateral → submitted | failed) enforced
inside the repo with stable `.code` errors.
- `lib/proposalDispatcher.js` + tests — background advance loop:
polls `getRawTransaction`, bumps conf counts, fires
`gObject_submit` once >= GOVERNANCE_FEE_CONFIRMATIONS (6) confs.
Classifies Core errors as terminal (→ failed) vs transient (retry)
and times out rows whose collateral is missing > 7 days.
- `routes/govProposals.js` + tests — HTTP surface:
* drafts CRUD (50-per-user soft cap)
* POST /prepare — canonicalize, compute hash, optional
gObject_check pre-flight, persist `prepared` row (idempotent on
(userId, proposalHash)), consume source draft in an atomic block
* POST /submissions/:id/attach-collateral — hand-off to dispatcher
* GET/DELETE for status polling and cleanup
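The fixed-order serialization behind `proposalValidate.js` can be sketched as follows. This is a minimal illustration, not the real module: `CANONICAL_KEYS` and `canonicalJson` are assumed names, and only the two properties stated above (fixed key order, non-scientific decimal amounts) are demonstrated:

```javascript
// Illustrative sketch only — the real lib/proposalValidate.js is not shown
// here. The point demonstrated is a fixed key order plus plain-decimal
// amounts, which together make the serialized payload byte-stable so the
// hash computed from it is deterministic.
const CANONICAL_KEYS = [
  'type', 'name', 'start_epoch', 'end_epoch',
  'payment_address', 'payment_amount', 'url',
];

function canonicalJson(fields) {
  const parts = CANONICAL_KEYS.map((key) => {
    const value = fields[key];
    // BigInt amounts are emitted as plain decimal strings so a large
    // value can never collapse into scientific notation.
    const encoded =
      typeof value === 'bigint' ? value.toString(10) : JSON.stringify(value);
    return `${JSON.stringify(key)}:${encoded}`;
  });
  return `{${parts.join(',')}}`;
}
```

Because the serializer iterates `CANONICAL_KEYS` rather than the input object, two requests carrying the same fields in any property order produce identical bytes.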
Notes:
- `payment_count` is NOT a consensus field (Core has no bound on it;
number of payouts is implicit from (end_epoch - start_epoch) /
superblock_cycle). We keep it as pure display metadata with a soft
1..60 range check in the route layer — never in canonical JSON.
- We submit after 6 confs rather than relying on Core's
"postponed on immature collateral" path because it gives a cleaner
UX: one deterministic state transition instead of a silent
Core-internal retry schedule.
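The dispatcher's terminal-vs-transient split described above can be illustrated with a small classifier. The pattern list here is a placeholder, not the actual TERMINAL_CORE_ERRORS contents:

```javascript
// Hypothetical sketch of classifying gObject_submit failures: terminal
// errors flip the row to failed, anything else is retried on the next
// dispatcher tick. The patterns are illustrative only.
const TERMINAL_PATTERNS = [/invalid/i, /rate limit/i];

function classifySubmitError(err) {
  const msg = String((err && err.message) || err);
  return TERMINAL_PATTERNS.some((re) => re.test(msg))
    ? 'terminal'   // -> markFailed, user notified
    : 'transient'; // -> leave row as-is, retry next tick
}
```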
Tests: 658/658 passing, 5 consecutive clean full-suite runs.
Made-with: Cursor
Glues the governance-proposals subsystem into the running app and
closes the feedback loop with the user via email.
Mailer (lib/mailer.js):
- New footer kind 'proposal_notification' — transparency copy
specific to user-authored proposals (cannot disable, because
the user themselves initiated each one by paying collateral).
- New template `proposalSubmitted` — fired when a proposal reaches
6 confs and Core returns a governance hash. Surfaces both the
collateral txid and the governance hash so the user can verify
on any block explorer without trusting us.
- New template `proposalFailed` — fired on terminal failure
(collateral_not_found after 7 days OR submit_rejected from
Core). Body is explicit that the 150 SYS collateral is burned
by protocol and cannot be refunded.
- Both templates HTML-escape user-controlled fields (proposal
name, fail detail) — regression-tested against <script> and
onerror XSS payloads.
Dispatcher (lib/proposalDispatcher.js):
- New optional `onSubmitted` / `onFailed` hooks fired after
each real state transition, receiving the freshly re-read
submission row.
- Hook errors are caught + logged; they never propagate back
into the advance loop (a failing mailer must not stall the
state machine).
- Hooks are `await`'d so async mailers are safe.
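The hook contract above can be sketched in a few lines (`fireHook` is a hypothetical name, not necessarily the real internal helper): the hook is awaited so async mailers are safe, but any error it throws is swallowed and logged rather than propagated into the advance loop.

```javascript
// Sketch of the "hooks never stall the loop" behaviour: await the hook,
// catch and log anything it throws. A failing mailer must not stall the
// dispatcher's state machine.
async function fireHook(hook, row, log = console.error) {
  if (typeof hook !== 'function') return;
  try {
    await hook(row); // receives the freshly re-read submission row
  } catch (err) {
    log('proposal hook failed', { id: row && row.id, err: String(err) });
  }
}
```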
Wiring (lib/appFactory.js + server.js):
- buildServices now constructs proposalDrafts + proposalSubmissions
repos.
- mountAuthAndVault accepts `proposalRpc` and — when provided —
mounts /gov/proposals. Mount is unconditional on the rpc
(drafts work offline) but the prepare pre-flight only runs
when rpc.gObjectCheck is available.
- createApp and server.js both build the dispatcher, wire it
to the mailer via user-email lookup, and start a minute-level
tick (first run staggered 5min after boot to avoid trampling
the RPC node alongside the hourly reminder dispatcher).
- server.js defines the camelCase→snake_case RPC adapter so
only one module has to know about @syscoin/syscoin-js's
`.call()` stub convention. Pre-existing /gov and
reminderDispatcher wiring is untouched.
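One possible shape for that single adapter module is sketched below. The method names and call convention are assumptions for illustration, not the actual server.js code; the point is that only one module maps camelCase methods onto Core's snake_case RPC names:

```javascript
// Hypothetical adapter sketch: callers speak camelCase, the underlying
// client is invoked with Core's snake_case RPC names. Only this module
// knows about the client's call convention.
function makeProposalRpc(call) {
  return {
    gObjectCheck: (dataHex) => call('gobject_check', [dataHex]),
    gObjectSubmit: (parentHash, revision, time, dataHex) =>
      call('gobject_submit', [parentHash, revision, time, dataHex]),
    getRawTransaction: (txid) => call('getrawtransaction', [txid, true]),
  };
}
```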
Tests:
- Mailer: 9 new cases (new footer kind, new templates, HTML escape,
  detail-row conditional, burn disclosure on fail).
- Dispatcher: 8 new cases covering the hooks (fire-once on submit,
  fire on both failure paths, async await, error swallowing,
  factory validation).
- Full suite: 673 tests, stable across 3 consecutive runs.
Made-with: Cursor
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: b089dc3bd9
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```js
if (typeof rpc.gObjectCheck === 'function') {
  try {
    const resp = await rpc.gObjectCheck(canon.dataHex);
    const result =
```
Call gObjectCheck with canonical proposal arguments
In routes/govProposals.js this preflight calls rpc.gObjectCheck with only dataHex, but the production adapter in server.js is defined as gObjectCheck(parentHash, revision, time, dataHex). With the current call shape, dataHex is passed as parentHash and the real payload becomes undefined, which causes invalid-params RPC failures; those failures match the /invalid/ classifier and can incorrectly return 422 core_rejected for otherwise valid proposals whenever RPC preflight is enabled.
```js
const TERMINAL_CORE_ERRORS = [
  /rate limit/i,
  /invalid/i,
  /already exists/i,
```
Remove "already exists" from terminal submit failures
Treating /already exists/ as a terminal rejection can flip a genuinely published proposal to failed: if gObject_submit succeeded previously (e.g., process crash before markSubmitted, or overlapping dispatcher workers), a retry may return an "already exists" error even though the object is live on-chain. The current logic then calls markFailed(...), which can persist a false failure state and trigger incorrect failure notifications.
…bmit)

Two review findings from Codex on PR #8 (backend):

P1 (routes/govProposals.js): the /prepare pre-flight called
rpc.gObjectCheck with only dataHex, but server.js defines the adapter
with the full Core signature (parentHash, revision, time, dataHex).
That shifted args so dataHex became parentHash and the real payload was
undefined; subsequent invalid-params errors from Core matched the
/invalid/ classifier and masqueraded as 422 core_rejected for otherwise
valid proposals. Fix: always pass the full canonical tuple we just
hashed. Regression test asserts on the exact arg shape.

P2 (lib/proposalDispatcher.js): /already exists/ was in the
TERMINAL_CORE_ERRORS list, so a dispatcher retry after a prior
successful gObject_submit (crash before markSubmitted, overlapping
workers, operator running two dispatchers, etc.) would flip the row to
failed and fire a false failure email, even though the governance
object is live on-chain. Fix: classify duplicate-submit as a transient
error, log it loudly via submit_already_exists, and let the next tick
retry so the existing governance_hash_clash path in the status handler
can promote it to submitted naturally. Added a test that asserts the
row stays awaiting_collateral and no failure transition fires.

Full BE suite green (29 suites / 674 tests).
Made-with: Cursor
@codex review
💡 Codex Review
Reviewed commit: ded00443dc
```js
const existing = submissions.findByProposalHashForUser(
  userId,
  proposalHash
);
```
Fix /prepare idempotency key to avoid time-based duplicates
The idempotency check in POST /prepare keys only on proposalHash, but proposalHash is computed from timeUnix generated from now() for each request in the same handler. That means two retries with identical proposal fields that cross a one-second boundary produce different hashes and bypass this lookup, creating duplicate prepared rows for the same logical submission. This contradicts the documented idempotency contract and can leave users with multiple pending submissions for one proposal.
```js
} else if (isDuplicateCoreError(err)) {
  // "Governance object already exists" — the object is live on
  // chain, but we don't yet have its hash locally (likely a
  // crash between gObject_submit succeeding and markSubmitted
  // persisting, or a second dispatcher/operator submitted it).
```
Resolve duplicate-submit rows instead of leaving them pending
In the duplicate-submit branch, the dispatcher explicitly leaves the row in awaiting_collateral and only logs. If gObjectSubmit keeps returning "already exists" for that object, this row has no transition path to either submitted or failed, so it can remain indefinitely in a non-terminal state with no completion signal to the user. This should reconcile to a terminal status (or recover the governance hash) rather than relying on unbounded retries.
…te-submit resolution)

Two new P1 findings from Codex round 2 on PR #8 — the first-round
fixes opened up secondary issues that only showed up under retry
scenarios. Both are real production hazards, not just reviewer taste.

P1 #2 (routes/govProposals.js, lib/proposalSubmissions.js):
`proposalHash` bakes in `time` (seconds since epoch) — part of the
CGovernanceObject::GetHash() input tuple Core signs under. The old
idempotency check keyed /prepare on `proposalHash`, so two retries of
the same logical /prepare that straddled a one-second boundary hashed
differently and BOTH landed in the DB as `prepared`, leaving the user
with duplicate pending submissions for one proposal. Fix: add
findPreparedByDataHexForUser() on the submissions repo (scoped to
owner, status=prepared, ordered by recency) and make /prepare look up
by the time-free canonical payload first. On hit we reuse the frozen
{time, proposalHash} from the existing row, skip the RPC pre-flight
(Core already accepted it and rate-limits repeats), and return 200
idempotent. Regression test injects a monotonic now() and advances the
clock >1s between two identical POST /prepare calls, asserts single
row, same id, same hash, same time.

P1 #3 (lib/proposalDispatcher.js): the round-1 fix that stopped
flipping "already exists" to `failed` left the row in
`awaiting_collateral` forever if Core kept returning the duplicate
error — no terminal transition, no completion signal to the user.
Core's govobj store is keyed on CGovernanceObject::GetHash(parent,
rev, time, vchData, outpoint, sig), which computeProposalHash()
reproduces exactly at prepare time (outpoint/sig are empty for
top-level user proposals), so the on-chain hash MUST equal
row.proposalHash. Resolve by calling markSubmitted({ governanceHash:
row.proposalHash }) inside the duplicate branch. If the repo rejects
with governance_hash_clash (rare race against a parallel tick or a
coincidentally-identical submission), we log and leave the state
machine to reconcile — still no false-failure email, but now we have a
real terminal path. Two new tests cover both outcomes.

Full BE suite green (29 suites / 676 tests).
Made-with: Cursor
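The duplicate-submit resolution described above can be sketched as follows (helper and repo method names are assumptions about the real dispatcher branch):

```javascript
// Hypothetical sketch: because computeProposalHash() reproduces Core's
// GetHash() at prepare time, an "already exists" reply implies the
// on-chain hash equals row.proposalHash, so the row can be promoted to
// submitted directly. A governance_hash_clash is logged and left for the
// state machine to reconcile.
async function resolveDuplicateSubmit(row, submissions, log) {
  try {
    return await submissions.markSubmitted(row.id, {
      governanceHash: row.proposalHash,
    });
  } catch (err) {
    if (err && err.code === 'governance_hash_clash') {
      log('duplicate submit: governance_hash_clash', { id: row.id });
      return null; // no false-failure email; reconcile later
    }
    throw err;
  }
}
```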
@codex review
💡 Codex Review
Reviewed commit: 548d50959f
```sql
CREATE TABLE proposal_drafts (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  user_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
```
Add a new migration for proposal tables
These schema changes were appended to 001_init.sql, but this codebase applies migrations only once per filename (schema_migrations), so any existing deployment that already recorded 001_init.sql will never run these CREATE TABLE statements. After upgrading, /gov/proposals will hit runtime SQL errors (no such table: proposal_drafts / proposal_submissions) on production databases. Ship these DDL changes in a new migration file (e.g., 002_...sql) so upgrades actually apply them.
Acknowledged, but this is a false positive relative to this project's explicit pre-launch convention.
See the header of db/migrations/001_init.sql:
Until v1 ships, this file is the schema. Any rename, column addition, or constraint change that lands BEFORE launch belongs here, edited in place. Do NOT propose a forward ALTER TABLE migration as a "safer" alternative — there is no deployed database to migrate from, so a forward migration would only add file-count noise while describing a history that never existed. Once v1 is in production, this rule inverts and every schema change becomes a new 002+ file.
Nothing is deployed yet. The schema_migrations table is empty on every environment, so editing 001_init.sql in place does run the new CREATE TABLE / CREATE INDEX statements on first boot. The moment v1 ships and 001_init.sql is recorded in any production schema_migrations, this rule inverts and every subsequent schema change will land in a new 002+ file — that is exactly what the header calls out.
Keeping the review signal tight: marking this as "does not apply in this iteration". The new partial unique index (idx_proposal_submissions_user_payload_prepared added in 2e479f9) lives in 001_init.sql in keeping with that convention.
```js
const existingByPayload = submissions.findPreparedByDataHexForUser(
  userId,
  canon.dataHex
);
```
Enforce prepare idempotency atomically
The payload-idempotency check is performed before the transactional insert, and there is no unique DB constraint for (user_id, data_hex, status='prepared'). Two concurrent /prepare requests with the same payload can both miss existingByPayload and both insert, violating the route’s idempotency contract and creating duplicate prepared submissions for the same proposal text.
```js
db.prepare(
  `UPDATE proposal_submissions
      SET status = ?, collateral_txid = ?, updated_at = ?
    WHERE id = ? AND user_id = ?`
).run(STATUS.AWAITING_COLLATERAL, txidLower, now(), id, userId);
```
Translate collateral txid unique races into 409
attachCollateral does a read-before-write duplicate check, but the uniqueness is ultimately enforced by a partial unique index at update time. Under concurrent requests attaching the same txid, one update can throw a raw SQLite constraint error here; because the route only maps custom .code values, that path returns 500 instead of the intended 409 txid_already_used. Catch and normalize constraint errors in this write path.
Address the two P2 findings from Codex's latest pass on PR #8 by
turning the read-before-write idempotency/duplicate checks into
DB-enforced invariants and normalising the remaining SQLite constraint
errors into stable application-level codes.

1. /prepare — payload-keyed idempotency is now atomic (Codex P2)
   - New partial unique index on proposal_submissions:
     (user_id, data_hex) WHERE status = 'prepared'. Two concurrent
     /prepare calls with identical canonical payloads can no longer
     both insert; the second write is rejected at the SQLite layer.
   - Route catches SQLITE_CONSTRAINT_UNIQUE from the create() path and
     rehashes/returns the winner's row as a 200 idempotent response,
     matching the pre-read idempotent branch. Caller can't tell
     whether the pre-read or the constraint fallback served it, which
     is the whole point.
2. attachCollateral — write-level race normalised (Codex P2)
   - Repo wraps the status='awaiting_collateral' UPDATE in a
     try/catch; SQLite's partial unique index on collateral_txid
     translates to the same `txid_already_used` code that the pre-read
     clash throws, so the route layer keeps returning 409 regardless
     of which branch fires.
3. Pre-launch migration convention is preserved (Codex P1)
   - Codex flagged the new DDL as "needs a 002_ migration". The header
     in db/migrations/001_init.sql explicitly opts out of that rule
     pre-launch — 001 is the single source of truth until v1 ships.
     The new partial index lives in 001_init.sql in keeping with that
     convention. (Will flip to 002+ files for every schema change
     post-launch.)

Tests
- New: repo test that two prepared rows with same (user_id, data_hex)
  are rejected at the DB layer; different users may each hold a
  prepared row with the same dataHex; the partial predicate releases
  once the first row leaves 'prepared'.
- New: repo test that simulates the attach-collateral race by planting
  a competing row between the pre-read and the UPDATE; the raw SQLite
  constraint error is translated to txid_already_used.
- New: route test that forces the pre-read miss once and verifies the
  /prepare constraint fallback returns 200 idempotent with the
  winner's submission and the DB still holds exactly one row.
- Adjusted existing tests that happened to rely on being able to
  create two prepared rows with the same dataHex (listByStatus,
  proposalDispatcher multi-row test, db.test.js collateral_txid
  uniqueness fixture) to use distinct data_hex values — the intent of
  each test is unchanged.

All 682 backend tests green.
Made-with: Cursor
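The constraint-fallback idempotency in point 1 can be sketched like this (`createPreparedIdempotent` and the repo method shapes are assumptions for illustration):

```javascript
// Sketch: if the partial unique index rejects our insert, another
// request with the same payload won the race, so re-read and serve the
// winner's row exactly as the pre-read idempotent branch would have.
function createPreparedIdempotent(repo, userId, dataHex, fields) {
  try {
    return repo.create(userId, dataHex, fields);
  } catch (err) {
    if (err && err.code === 'SQLITE_CONSTRAINT_UNIQUE') {
      const winner = repo.findPreparedByDataHexForUser(userId, dataHex);
      if (winner) return winner; // 200 idempotent, caller can't tell
    }
    throw err; // anything else is a real failure
  }
}
```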
@codex review
💡 Codex Review
Reviewed commit: 2e479f9547
```js
`UPDATE proposal_submissions
    SET status = ?, collateral_txid = ?, updated_at = ?
  WHERE id = ? AND user_id = ?`
).run(STATUS.AWAITING_COLLATERAL, txidLower, now(), id, userId);
```
Include status guard in attach-collateral UPDATE
attachCollateral() checks that the row is prepared before writing, but the actual UPDATE only filters by id/user_id. In a multi-worker/process deployment, two racing attach requests can both pass the pre-read and the later write can overwrite collateral_txid on a row already moved to awaiting_collateral, violating the state machine and potentially binding the submission to the wrong collateral transaction. The write should enforce the expected old status in SQL (or equivalent compare-and-swap semantics).
Adds a minimal GitHub Actions workflow that installs with `npm ci` and
runs `npm test -- --runInBand --ci` on Node 20 and Node 22 (both in
LTS). --runInBand keeps the SQLite test suites deterministic on the
hosted runner and avoids the rare cross-test `:memory:` db leakage we
saw locally on multi-worker jobs.

Also adds a cheap schema-sanity job that fails if two migration
filenames share the same numeric prefix — the runner in `lib/db.js`
sorts lexicographically, so colliding prefixes make apply order
undefined. Exactly the kind of silent footgun CI should catch.

concurrency: ci-<workflow>-<ref> with cancel-in-progress so a
force-push supersedes the previous run rather than queueing.

Made-with: Cursor
…dex R4 P1)
Context: the prior attachCollateral() pre-read the row, verified
status='prepared', then issued an UPDATE that filtered only on
id/user_id. In a multi-worker deployment, two racing attach
requests could both pass the pre-read and both UPDATE — the second
would silently overwrite the first's collateral_txid, binding the
submission to the wrong collateral transaction. The partial unique
index on collateral_txid only catches the case where both racers
are attaching the SAME txid; divergent txids would pass.
Fix: fold the expected status into the UPDATE's WHERE clause
(`AND status = 'prepared'`) so the write itself implements a
compare-and-swap. On zero-changes, re-read the row and:
- return null if it disappeared (remove())
- throw `status_not_prepared` with the row's real current state
if a racer already advanced it
- throw `attach_cas_miss` (retryable) as a defensive fallback
Existing `txid_already_used` SQLITE_CONSTRAINT normalization is
preserved around the new CAS UPDATE.
Test: adds a regression that plants a concurrent status-flip
between the pre-read and UPDATE via a narrowly-scoped db.prepare
monkey-patch, then verifies:
1. the attempt throws `status_not_prepared`, and
2. the row's collateral_txid remains the winner's, never the
loser's.
All 683 backend tests pass on Node 20.19.4.
Made-with: Cursor
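The compare-and-swap above can be modelled in memory to show the zero-changes outcome. This is an illustration only: the real code issues a single SQLite UPDATE with `AND status = 'prepared'` in the WHERE clause, while this sketch simulates that guard on a plain object:

```javascript
// In-memory model of the CAS attach: the status guard rides in the write
// itself, so if a racer already advanced the row, the "UPDATE" matches
// zero rows and we refuse instead of overwriting the winner's txid.
function casAttachCollateral(row, userId, txid) {
  if (!row || row.userId !== userId || row.status !== 'prepared') {
    const err = new Error('row not in prepared state');
    err.code = 'status_not_prepared';
    throw err;
  }
  row.status = 'awaiting_collateral';
  row.collateralTxid = txid;
  return row;
}
```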
@codex review. Don't worry about migrations, not an issue.
💡 Codex Review
Reviewed commit: 40dcb4eb58
```js
// SYS. Transient errors (node unreachable) are soft-treated —
// we proceed and trust the dispatcher to surface any issue later
// when it calls gObject_submit.
if (typeof rpc.gObjectCheck === 'function') {
```
Guard optional RPC before dereferencing gObjectCheck
The route advertises rpc as optional, but this dereference assumes it is always an object. In the same commit, mountAuthAndVault passes proposalRpc through with a default of null, so a default createApp() deployment can hit /gov/proposals/prepare and throw TypeError: Cannot read properties of null (reading 'gObjectCheck') instead of gracefully skipping preflight. Because this happens inside an async handler without an outer catch, it can surface as an unhandled route failure rather than a structured API error.
```js
let timeUnix;
let proposalHash;
let opReturnHex;
if (existingByPayload) {
```
Run Core preflight before idempotent prepare short-circuit
This early-return branch bypasses the gObject_check preflight entirely for retries. If the first /prepare call created the row during a transient RPC outage (the code explicitly soft-allows network errors), then a later retry with the same payload will always return this cached prepared row and never revalidate against Core, even after RPC recovers. That can let Core-only-invalid proposals (for example, checksum-invalid payment addresses) proceed to collateral payment and fail only in the dispatcher after the 150 SYS fee is already burned.
…odex R5 P1)

Two related prepare-route fixes:

1. `rpc` is advertised as optional and `appFactory.js` explicitly
   passes `null` when no Core connection is wired. Destructured
   defaults only fire for `undefined`, so an explicit null flowed
   through and the bare `typeof rpc.gObjectCheck` dereference threw
   TypeError inside an async handler, surfacing as an unhandled
   rejection instead of a clean "skip preflight". Guard with `rpc &&`
   before dereferencing.
2. The idempotent-replay branch short-circuited before re-running the
   Core preflight. If the original /prepare landed during a transient
   RPC outage (the code soft-allows network errors), every retry would
   replay the cached row and never revalidate once Core recovered. A
   Core-invalid proposal could then proceed to burn 150 SYS before
   failing in the dispatcher. Moved preflight BEFORE the idempotency
   short-circuit so every /prepare response is backed by a fresh Core
   ack (or an explicit soft-fail we logged).

Added regression test.
Made-with: Cursor
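The null guard from fix (1) can be sketched as a small helper (`maybePreflight` is a hypothetical name): destructured defaults only apply to `undefined`, so an explicit `null` must be guarded before dereferencing.

```javascript
// Null-safe preflight sketch: skip cleanly when no Core connection (or
// no gObjectCheck method) is wired, instead of throwing a TypeError
// inside the async route handler.
async function maybePreflight(rpc, dataHex) {
  if (rpc && typeof rpc.gObjectCheck === 'function') {
    return rpc.gObjectCheck(dataHex);
  }
  return null; // preflight unavailable — caller skips it
}
```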
@codex review. Don't worry about migrations, not an issue.
💡 Codex Review
Reviewed commit: fcd965e995
```js
`UPDATE proposal_submissions
    SET status = ?, governance_hash = ?, updated_at = ?
  WHERE id = ?`
).run(STATUS.SUBMITTED, gh, now(), id);
```
Guard markSubmitted with state compare-and-swap
markSubmitted does a read-time status check, but the UPDATE only filters by id, so two dispatcher workers processing the same row can both read awaiting_collateral and both commit a "successful" transition. In that race, each worker can treat the transition as its own success and fire duplicate submitted-side effects (notably duplicate emails), despite the dispatcher expecting a raced second writer to get status_not_awaiting. Add AND status = 'awaiting_collateral' (CAS semantics) and treat zero changed rows as a raced no-op.
```js
if (f.paymentAmountSats !== undefined) {
  // Digit-string or number; proposalDrafts will bigint-coerce.
  patch.payment_amount_sats = f.paymentAmountSats;
} else if (f.paymentAmount !== undefined) {
```
Validate paymentAmountSats before writing draft rows
When paymentAmountSats is provided, this path forwards it unvalidated to the repo; malformed values (for example "12.5" or non-digit strings) then throw inside proposalDrafts.create/update, which the route catches as a generic error and returns 500 internal instead of a user-facing 400 validation_failed. Since this is request-shape validation, it should be rejected here consistently like the paymentAmount branch.
…untSats validation (Codex R5 P1/P2)
Backend round 5 follow-ups from the Codex review loop on PR8:
* markSubmitted now does a compare-and-swap on status
(`WHERE id = ? AND status = 'awaiting_collateral'`). Losing workers
get back `null` instead of throwing, so two dispatchers that race on
the same row no longer both emit `onSubmitted` side effects.
* proposalDispatcher handles both paths of the CAS:
- null return (CAS miss): log `markSubmitted_raced` with
`code: 'cas_miss'`
- exception (`status_not_awaiting` / `governance_hash_clash`): log
once and set a guard so the CAS-miss branch does not re-log.
* /gov/proposals/drafts POST + PATCH validate `paymentAmountSats`
before forwarding to the repo. Previously malformed values
(`"12.5"`, `"abc"`, `""`, `"-1"`, `"007"`, `"1e3"`, ...) bubbled out
of `toBigIntSats()` as a generic 500. Route layer now returns
the canonical `400 { error: 'validation_failed', issues: [...] }`
shape with `code: 'amount_sats_invalid'`.
Regression tests added:
* proposalSubmissions.test.js — markSubmitted CAS miss returns null
without throwing and preserves the winning row's state.
* govProposals.routes.test.js — 400 for malformed paymentAmountSats
on both POST and PATCH; happy-path digit string still accepted.
All 688 backend tests pass.
Made-with: Cursor
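The paymentAmountSats request-shape check can be sketched as follows (`validateAmountSats` is a hypothetical name): only canonical digit strings pass, so `"12.5"`, `"1e3"`, `"007"`, `""` and `"-1"` are rejected at the route layer instead of bubbling out of the repo's BigInt coercion as a 500.

```javascript
// Sketch of route-layer validation: accept only plain non-negative
// decimal digit strings (or safe-integer numbers), then coerce to BigInt
// once the shape is known-good.
function validateAmountSats(value) {
  if (typeof value === 'number' && Number.isSafeInteger(value) && value >= 0) {
    value = String(value);
  }
  // Canonical digit string: no sign, no dot, no exponent, no leading zeros.
  if (typeof value !== 'string' || !/^(0|[1-9][0-9]*)$/.test(value)) {
    return { ok: false, code: 'amount_sats_invalid' };
  }
  return { ok: true, sats: BigInt(value) };
}
```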
@codex review. Don't worry about migrations, not an issue.
💡 Codex Review
Reviewed commit: 960418c282
```js
return rpcServices(client.callRpc)
  .gObject_check(parentHash, String(revision), String(time), dataHex)
  .call();
```
Call gObject_check with its single hex_data argument
Syscoin Core's gobject_check RPC accepts only one parameter (hex_data), but this adapter sends four (parentHash, revision, time, dataHex). On a real node that returns RPC_INVALID_PARAMETER, and /gov/proposals/prepare classifies that as a terminal validation error (/invalid/), so otherwise-valid proposal prepares are rejected with 422 whenever proposalRpc is enabled.
```js
const okFlag = result && (result.Object || result.object);
// Core 4.x returns { "Object": "success" } on accept. Anything
// else is treated as a rejection; parse the message for codes.
if (
  okFlag &&
  String(okFlag).toLowerCase() === 'success'
) {
```
Treat Core's successful gobject_check shape as accepted
The preflight success check only accepts result.Object === 'success', but Syscoin Core returns success as { "Object status": "OK" } for gobject_check. That means successful Core validation is interpreted as rejection and /prepare returns 422 core_rejected even for valid payloads; this still breaks proposal creation even after fixing the RPC argument count.
…s": "OK" } (Codex R6 P1)
Two tightly-coupled Codex findings against the proposal preflight,
both surfaced as "otherwise-valid proposals get 422 core_rejected on
a real Core node":
1. Adapter signature: Syscoin Core's `gobject_check` RPC takes
exactly ONE positional arg, `hex_data` (see
syscoin/src/rpc/governance.cpp::gobject_check — it constructs
the govobj with uint256() / 1 / GetAdjustedTime() internally and
derives nothing from the caller). Our adapter was modelled on
`gobject_submit` and forwarded (parentHash, revision, time,
dataHex); Core rejects that with RPC_INVALID_PARAMS ("too many
positional arguments") and the route's terminal-error classifier
then flagged that as 422 core_rejected. Adapter + route now pass
just `canon.dataHex`.
2. Success response shape: Core returns
{ "Object status": "OK" }
(literal key with a space; governance.cpp line 111). Our check
looked for `result.Object === 'success'`, which never matches a
real Core response, so every successful preflight fell through
to the reject branch. Route now accepts the real shape, with a
case-insensitive compare on the OK string for forward-compat
but a strict key match.
Regression tests:
* happy-path preflight mock updated to produce the real shape and
asserts the adapter sees exactly one arg (the hex_data).
* new: "accepts Core's canonical success shape" — sends the exact
"Object status: OK" response and expects 201.
* new: "gObjectCheck is called with hex_data only" — pins the
adapter signature at the route boundary so a future regression
would fail fast.
* existing "core_rejected when gObjectCheck returns non-success"
updated to a rejection shape that no longer pretends to set
`Object: 'failure'` (Core never emits that key).
All 690 backend tests pass.
Made-with: Cursor
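The two fixes can be sketched as follows. The helper names (`gObjectCheck`, `isCheckAccepted`) and the bare `rpc` callback are illustrative, not the repo's actual adapter API; only the one-positional-arg shape and the `"Object status"` key come from the findings above.

```javascript
// Fix 1: gobject_check takes exactly one positional arg, the hex blob.
// (NOT the four-arg gobject_submit shape this adapter was modelled on.)
function gObjectCheck(rpc, dataHex) {
  return rpc('gobject_check', [dataHex]);
}

// Fix 2: Core replies { "Object status": "OK" } -- a literal key with a
// space. Strict key match, case-insensitive compare on the OK value.
function isCheckAccepted(result) {
  return Boolean(
    result &&
      typeof result['Object status'] === 'string' &&
      result['Object status'].toLowerCase() === 'ok'
  );
}
```

A `result.Object === 'success'` check never matches a real node's response, so every successful preflight previously fell through to the reject branch.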
@codex review. Don't worry about migrations, not an issue.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 481d7586c0
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
db.prepare(
  `UPDATE proposal_submissions
     SET status = ?, fail_reason = ?, fail_detail = ?, updated_at = ?
   WHERE id = ?`
Guard markFailed with status compare-and-swap
markFailed does a status pre-read but then updates by id only, so a concurrent worker can move the same row to submitted between those two operations and this write will still overwrite it back to failed. In a multi-worker dispatcher setup, that corrupts the terminal state and can trigger the wrong side effects (failure path after successful submission). This transition needs the same CAS-style status guard used in markSubmitted (or a checked changes===0 path) to avoid clobbering raced updates.
submissions.remove(id, userId);
return res.status(204).end();
Verify submission deletion actually removed a row
This handler always returns 204 after calling submissions.remove, but remove is conditional on status (prepared/failed) and can legitimately return 0 if the row changes state after the pre-read (for example, a concurrent attach-collateral request flips it to awaiting_collateral). That creates a false-success delete response while the submission still exists. The result of remove should be checked and mapped to a conflict/not-found response.
… a row (Codex R7 P1/P2)
Two race-safety follow-ups against the proposal submission lifecycle:
* `markFailed` had a read-time status check but the UPDATE only filtered by
id, so two dispatcher workers processing the same row could have one
succeed in `markSubmitted` while the other's "terminal reject" path
(e.g. gObjectSubmit throws a validation-ish error) stomps it back to
`failed` — corrupting the terminal state and firing onFailed after an
onSubmitted already went out. Now UPDATE gates on
`AND status NOT IN ('submitted', 'failed')` and returns null on a CAS
miss. Dispatcher's `if (failedRow)` guard already skips the hook on
null, so the winner keeps emit-once semantics.
* `DELETE /submissions/:id` pre-reads the row, confirms it is
`prepared` or `failed`, then calls `submissions.remove`. That remove
uses a partial DELETE guarded on the same status set, so a concurrent
transition (another tab attach-collateral, dispatcher pickup) between
pre-read and DELETE produces `changes = 0`. The handler used to
return 204 regardless — a false-success that tells the client the
submission is gone while it is in fact still live and may run to
completion on-chain. Route now checks the row count, re-reads on a
miss, and returns 404 (row deleted by another tab) or 409
status_not_deletable (raced to a non-deletable state).
Regression tests:
* proposalSubmissions.test.js — CAS-miss on markFailed returns null
without throwing; winning `submitted` row is preserved.
* govProposals.routes.test.js — DELETE on a row raced to
`awaiting_collateral` returns 409 and the row is still visible.
* govProposals.routes.test.js — DELETE that finds 0 changes because
the row was consumed elsewhere returns 404.
All 693 backend tests pass.
Made-with: Cursor
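The CAS-style transition described above can be sketched like this, assuming a better-sqlite3-style `db.prepare(...).run(...)` API. The table and status names follow the diff; the function shape is illustrative. The point is that the WHERE clause, not a pre-read, is what prevents clobbering a raced `submitted` row.

```javascript
// Terminal transition guarded by a status CAS in the UPDATE itself.
// Returns the id on success, null on a CAS miss (row already terminal).
function markFailed(db, id, reason, detail, now) {
  const info = db
    .prepare(
      `UPDATE proposal_submissions
         SET status = 'failed', fail_reason = ?, fail_detail = ?, updated_at = ?
       WHERE id = ? AND status NOT IN ('submitted', 'failed')`
    )
    .run(reason, detail, now, id);
  return info.changes === 1 ? id : null;
}
```

The caller (the dispatcher's `if (failedRow)` guard) then naturally skips the `onFailed` hook on a null return, which preserves emit-once semantics for the race winner.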
@codex review. Don't worry about migrations, not an issue.
💡 Codex Review
Reviewed commit: 6e5a0d9b66
const existingByPayload = submissions.findPreparedByDataHexForUser(
  userId,
  canon.dataHex
);
Catch repo errors in async prepare route
This submissions.findPreparedByDataHexForUser(...) call sits outside any top-level try/catch in an async Express 4 handler, so a synchronous DB exception (for example SQLITE_BUSY/I/O errors) becomes an unhandled promise rejection instead of a controlled JSON 500 response. In production that can surface as hung requests or process-level instability under transient database faults, so this lookup should be wrapped by the same error handling used for other persistence calls in this route.
dispatcherTimer = setTimeout(
  fireAndSchedule,
  proposalDispatcherIntervalMs
);
Stop proposal dispatcher from rearming after stop
stopProposalDispatcher() clears the current timeout, but fireAndSchedule() always creates a new timeout after await dispatcher.tick(). If stopProposalDispatcher() is called while a tick is in flight, the callback still executes this block and re-arms the loop, so the dispatcher keeps running after it was asked to stop (notably in teardown/shutdown paths).
Codex PR8 round 8 fixes.

P1 (routes/govProposals.js): the async /prepare handler had multiple
persistence calls (findPreparedByDataHexForUser, drafts.getByIdForUser)
outside any try/catch. A synchronous better-sqlite3 throw (SQLITE_BUSY,
I/O, corrupt-index) became an unhandled promise rejection rather than a
controlled JSON 500 — Express 4 does not catch async handler rejections.
Wrap both calls so transient DB faults surface as clean 500s the client
can retry. Regression test forces SQLITE_BUSY on the lookup and asserts
a 500 with no unhandled rejection leak.

P2 (lib/appFactory.js): stopProposalDispatcher() only called
clearTimeout on a pending timer. If a tick was already in flight (past
`await dispatcher.tick()`), the callback still re-armed the loop after
resolving, so stop was effectively ignored — leaking the dispatcher
into teardown and shutdown paths. Add a `dispatcherStopped` flag, check
it before AND after the in-flight tick, and set it in
stopProposalDispatcher. Regression test stubs the dispatcher with a
controllable deferred, stops while a tick is parked on it, then
releases it; asserts no further ticks fire.

Made-with: Cursor
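The stop-flag pattern from P2 can be sketched as below (all names hypothetical). The flag is checked both before and after the awaited tick, so a stop issued while a tick is in flight prevents the callback from re-arming the loop.

```javascript
// Self-rescheduling dispatcher loop with a stop flag guarding re-arm.
function createDispatcherLoop(tick, intervalMs) {
  let timer = null;
  let stopped = false;

  async function fireAndSchedule() {
    if (stopped) return;
    await tick();
    // stop() may have been called while the tick was in flight;
    // without this second check the loop would re-arm after stop.
    if (stopped) return;
    timer = setTimeout(fireAndSchedule, intervalMs);
  }

  return {
    start() { timer = setTimeout(fireAndSchedule, intervalMs); },
    stop() {
      stopped = true;
      if (timer) clearTimeout(timer); // covers the not-in-flight case
    },
  };
}
```

Clearing the timer alone only handles the idle case; the flag is what covers the mid-tick case.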
💡 Codex Review
Reviewed commit: 591956117d
log('warn', 'markSubmitted_after_duplicate_failed', {
  id,
  code: markErr && markErr.code,
  msg: String((markErr && markErr.message) || markErr),
});
Transition duplicate-hash clashes out of awaiting_collateral
When gObject_submit returns an "already exists" error, this path tries markSubmitted, but if that throws governance_hash_clash (because another row already claimed the governance hash) the handler only logs and exits. The submission remains in awaiting_collateral, so every dispatcher tick retries forever, repeatedly hitting the same duplicate condition and never producing a terminal user-visible outcome even though collateral is already paid.
// (Codex PR8 round 1 P2.)
const TERMINAL_CORE_ERRORS = [
  /rate limit/i,
  /invalid/i,
Tighten terminal error matching to avoid false failures
Including /invalid/i in terminal error patterns is too broad: transient RPC transport/parser failures frequently contain the word "invalid" (for example, invalid JSON-RPC response text), so recoverable outages can be misclassified as permanent submit rejections. In those cases rows are marked failed instead of retried, which is a user-impacting incorrect terminal transition.
…ns (Codex R10 P1/P2)
Two backend fixes from Codex PR8 round 10 review:
P1 (lib/proposalDispatcher.js) — duplicate governance-hash clashes
must transition out of awaiting_collateral. When Core returns
"already exists" and markSubmitted then throws
governance_hash_clash (another row already claimed this
governance_hash via the UNIQUE index), round 9 code only logged
and left the row in awaiting_collateral — so the dispatcher
retried that same row every tick forever, producing no terminal
user-visible outcome even though the on-chain object exists and
the collateral is already spent. Now flip the clashing row to
terminal `failed` with reason `duplicate_governance_hash` via the
same CAS path used elsewhere, and fire onFailed so the user gets
a clean notification. A follow-up tick confirms the terminal row
is no longer picked up.
P2 (lib/proposalDispatcher.js) — tighten TERMINAL_CORE_ERRORS.
The previous list had a blanket /invalid/i pattern that was far
too broad: transient RPC transport / JSON-parser errors ("invalid
JSON-RPC response", "invalid response from server", "invalid
utf-8 sequence", etc.) frequently contain the word "invalid",
and classifying those as permanent rejections flipped rows to
`failed` on temporary outages and fired user-visible failure
emails the user could never fix. Narrow to the exact phrases
Syscoin Core actually emits for governance-object validation
failures (`Governance object is not valid`, `Invalid parent
hash`, `Invalid (object )?signature`, `Invalid object type`,
`Invalid proposal`, `Invalid data hex`, `hash mismatch`) plus the
existing `Object submission rejected` and `rate limit`. Anything
else — bare "invalid" in a transport string — stays classified
as transient and is retried next tick.
Updated an existing test that asserted the hash-clash row stayed
in awaiting_collateral; added a new regression test asserting
the terminal + onFailed + no-retry behavior. Added a separate
regression test for the invalid-transport vs. Core-terminal
distinction. Full BE suite 697/697 passing.
Made-with: Cursor
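The narrowed classifier from P2 can be sketched as follows, using the phrases quoted in the commit message; the real list lives in lib/proposalDispatcher.js and may differ in detail.

```javascript
// Terminal = phrases Syscoin Core actually emits for governance-object
// validation failures. A bare "invalid" in a transport string no longer
// matches, so transient RPC/parse errors stay retryable.
const TERMINAL_CORE_ERRORS = [
  /Governance object is not valid/i,
  /Invalid parent hash/i,
  /Invalid (object )?signature/i,
  /Invalid object type/i,
  /Invalid proposal/i,
  /Invalid data hex/i,
  /hash mismatch/i,
  /Object submission rejected/i,
  /rate limit/i,
];

function isTerminalCoreError(msg) {
  return TERMINAL_CORE_ERRORS.some((re) => re.test(String(msg)));
}
```

Anything that misses every pattern is classified transient and retried on the next dispatcher tick.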
@codex review. Don't worry about migrations, not an issue.
💡 Codex Review
Reviewed commit: 601eeedd48
/validation|invalid|exceeds|rejected|collateral|size/i.test(msg)
) {
Narrow gObjectCheck terminal-error matcher
POST /gov/proposals/prepare is designed to soft-allow transient RPC outages, but this branch classifies any exception containing invalid as a permanent Core rejection and returns 422 core_rejected. JSON-RPC transport failures commonly include text like “invalid response”/“Invalid URL”, so temporary node or network issues can be misreported as proposal-validation failures and block users from preparing submissions until the outage clears. Restrict this matcher to actual governance-validation phrases (or explicitly exclude transport errors) before returning 422.
clashed = markErr && markErr.code === 'governance_hash_clash';
log('warn', 'markSubmitted_after_duplicate_failed', {
Handle unique-hash races in duplicate-submit recovery
In the duplicate-submit recovery path, only markSubmitted errors with code === 'governance_hash_clash' are treated as terminal duplicates. Under concurrent workers, the same scenario can surface as raw SQLITE_CONSTRAINT(_UNIQUE) from the unique governance_hash index (race between pre-check and UPDATE), which leaves clashed false and the row stuck in awaiting_collateral to be retried indefinitely. This branch should also treat unique-constraint errors as a clash (or ensure the repo normalizes them) so the row can transition to the terminal duplicate-failure state.
…ash races
P1 (routes/govProposals.js): narrow gObjectCheck terminal-error
matcher. Previous heuristic included a bare /invalid/ token, which
matched JSON-RPC transport/parser errors that routinely contain the
word "invalid" ("Invalid URL", "invalid response from server",
"invalid JSON-RPC response", "invalid utf-8 sequence in headers").
Those are transient outages, not Core validation rejects, but the
old matcher misreported them as permanent 422 core_rejected and
blocked legitimate /prepare calls until the node/network recovered.
Replaced with an explicit allowlist of phrases Syscoin Core actually
emits from CGovernanceObject::IsValidLocally() and gobject_check's
reject branches (name exceeds, data exceeds, url invalid,
payment_address invalid, start_epoch, type field not found, ...)
plus the gobject-submit wrapper rejects (Governance object is not
valid, Object submission rejected, Invalid parent hash, signature/
object-type/data-hex rejects, collateral rejected). Everything
else falls through to soft-allow. Deliberately does NOT use
parseCoreRejectMessage for classification because its catch-all
arm raises a generic core_rejected issue for ANY non-empty string,
which would false-positive every transport error. Added 11
regression tests: 5 transport-error strings that must NOT terminate
and 6 genuine Core reject phrases that still must 422.
P2 (lib/proposalSubmissions.js): normalize raw SQLITE_CONSTRAINT_
UNIQUE on governance_hash to governance_hash_clash. Under
concurrent dispatcher workers, the read-above (byGovHashStmt
pre-check) + write-here (UPDATE) pair is not atomic — two workers
can each see clash=null, then the second UPDATE hits the partial
UNIQUE index on `governance_hash` and rejects with SQLITE_CONSTRAINT_
UNIQUE. Pre-fix the raw error bubbled up unwrapped; the dispatcher's
duplicate-recovery branch only treats code === 'governance_hash_
clash' as a terminal clash, so the raw error left clashed=false,
the row stayed awaiting_collateral, and every subsequent tick
re-hit Core's "already exists" and looped forever with no
terminal user-visible outcome. Wrap the UPDATE in try/catch and
rethrow as governance_hash_clash (same normalization pattern as
attachCollateral's collateral-txid unique clash). Regression test
exercises the code path by patching db.prepare to force the
pre-check to miss so the UPDATE is the boundary that raises.
Made-with: Cursor
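The P2 normalization can be sketched as a wrapper around the UPDATE boundary; the wrapper name is illustrative, and the error-code check assumes better-sqlite3's `SQLITE_CONSTRAINT*` code strings.

```javascript
// Normalize a raw unique-index rejection on governance_hash into the
// stable governance_hash_clash code the dispatcher already handles.
function runMarkSubmittedUpdate(runUpdate) {
  try {
    return runUpdate();
  } catch (err) {
    if (
      err &&
      typeof err.code === 'string' &&
      err.code.startsWith('SQLITE_CONSTRAINT') &&
      /governance_hash/.test(String(err.message))
    ) {
      const clash = new Error('governance hash already claimed by another row');
      clash.code = 'governance_hash_clash';
      throw clash;
    }
    throw err; // anything else bubbles up unchanged
  }
}
```

With this in place, the pre-check and the UPDATE no longer need to be atomic: the loser of the race surfaces the same stable `.code` either way.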
@codex review. Don't worry about migrations, not an issue.
💡 Codex Review
Reviewed commit: 56c4491be5
// transport string — stays classified as transient and gets
// retried.
const TERMINAL_CORE_ERRORS = [
  /rate limit/i,
Narrow terminal rate-limit matching in submit errors
Treating any message that contains rate limit as terminal causes false permanent failures when the RPC path returns transient throttling errors (for example temporary 429s from a proxy/provider). In that case isTerminalCoreError returns true, the row is marked failed with submit_rejected, and retries stop even though the submission may succeed later. Match only the explicit Syscoin Core governance rejection phrase instead of a broad token.
const winner = submissions.findPreparedByDataHexForUser(
  userId,
  canon.dataHex
);
Guard winner re-read in prepare race fallback
In the unique-constraint recovery path, submissions.findPreparedByDataHexForUser(...) is called without a local try/catch inside an async Express 4 handler. If that DB read throws (e.g., transient SQLite error), the rejection escapes and can become an unhandled async route failure instead of returning the expected JSON 500. This reintroduces the async error-handling gap this route is otherwise trying to avoid.
…-read
P1 (lib/proposalDispatcher.js): replace bare /rate limit/i in
TERMINAL_CORE_ERRORS with the exact Syscoin Core phrase
"Object creation rate limit exceeded" (thrown from gobject(submit)
at syscoin/src/rpc/governance.cpp:204). The old pattern matched
transient proxy/provider 429 responses whose bodies routinely
contain "rate limit"/"rate-limited" — those should be retried,
not terminate the row. Core's phrase is the one permanent-reject
condition where the object hash is burned for this cycle.
Regression test: a proxy 429 "rate limit exceeded" body leaves
the row in awaiting_collateral, while Core's exact phrase flips
to failed/submit_rejected on the next tick.
P2 (routes/govProposals.js): wrap the winner re-read in the
unique-constraint recovery path (submissions.findPreparedByData
HexForUser) in try/catch. better-sqlite3 is synchronous and can
throw (SQLITE_BUSY, I/O, corrupt index) from that read; without
the guard the throw escaped into the async Express 4 handler as
an unhandled rejection, which in prod surfaces as a process-
level UnhandledPromiseRejection warning and a client-visible
500 HTML default page instead of the handler's structured
{ error: 'internal' } JSON. Returns 500 JSON on lookup failure,
matching the pattern used for the other DB reads in this route.
Regression test: prime a prepared row so the UNIQUE index will
reject a second insert; make the pre-read miss (drives the
recovery branch) AND throw SQLITE_BUSY on the re-read; assert
the second /prepare returns { error: 'internal' } JSON 500.
Made-with: Cursor
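The P2 guard pattern in minimal form; the handler shape and repo callback are illustrative. better-sqlite3 reads are synchronous, so a plain try/catch is enough to turn SQLITE_BUSY into the handler's structured JSON 500 instead of an unhandled rejection escaping the async Express 4 handler.

```javascript
// Async Express 4 handler: every synchronous DB read is wrapped so a
// throw becomes a structured 500, never an unhandled promise rejection.
async function prepareRecoveryReRead(req, res, lookupWinner) {
  let winner;
  try {
    winner = lookupWinner(req.userId, req.dataHex); // sync; can throw SQLITE_BUSY
  } catch (err) {
    return res.status(500).json({ error: 'internal' });
  }
  return res.status(200).json({ id: winner.id });
}
```

Express 5 forwards async rejections to the error middleware, but on Express 4 this explicit wrapping (or a wrapper utility) is the only safety net.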
@codex review. Don't worry about migrations, not an issue.
💡 Codex Review
Reviewed commit: 386108c5e2
// status_not_awaiting: someone else moved the row ahead of us.
// governance_hash_clash: we computed the same hash again for
// the same row (safe; the UNIQUE index prevents a second row).
log('warn', 'markSubmitted_raced', {
Mark clashed submissions failed after successful gObjectSubmit
In the success path, a markSubmitted throw with governance_hash_clash is currently treated as a harmless race and only logged, but that error means a different row already owns the same governance hash. When Core returns a hash for a duplicate submit (instead of an "already exists" error), this leaves the current row stuck in awaiting_collateral and retried forever with no terminal user-visible outcome. This branch should mirror the duplicate-error handling and transition the row to terminal failed (e.g., duplicate_governance_hash).
@codex review. Don't worry about migrations, not an issue.
💡 Codex Review
Reviewed commit: 386108c5e2
if (typeof input === 'number') {
  if (!Number.isFinite(input)) throw new Error('payment_amount must be finite');
  // Convert via string to avoid float drift: 42.5 -> "42.5" -> sats
  return parsePaymentAmountToSats(input.toString());
Parse numeric payment_amount without exponent rejection
parsePaymentAmountToSats documents support for numeric payment_amount, but this branch converts numbers with input.toString() and then validates with a decimal-only regex. For valid values like 0.00000001 (1 satoshi), JavaScript stringifies to 1e-8, which fails the regex and gets rejected as invalid. That makes /gov/proposals/prepare (and draft normalization paths that rely on this parser) return validation_failed for valid numeric inputs that common JSON clients can emit in scientific notation.
Backend:
- BE P1 (lib/proposalDispatcher.js): mark rows failed with
duplicate_governance_hash when gObjectSubmit succeeds but the
repo's UNIQUE index on governance_hash rejects the UPDATE.
Previously this path only logged and returned, leaving the row
stuck in awaiting_collateral and retried forever with no
terminal outcome. Mirrors the existing duplicate-error branch
and fires onFailed so the user gets a terminal notification.
- BE P2 (lib/proposalValidate.js): parsePaymentAmountToSats now
handles numeric inputs that JavaScript stringifies to
scientific notation (e.g. 0.00000001 -> "1e-8"). Route through
toFixed(8) so the decimal regex accepts them; previously valid
sats-scale numeric values surfaced as validation_failed.
Frontend:
- FE P2 (src/pages/NewProposal.js): always clear loadError when
the ?draft= query is removed from the URL. Prior code gated
the reset on draftId != null, so a failed load (which clears
draftId in catch) left a stale "Couldn't load draft" banner
visible on a fresh-new-proposal route.
- FE P2 (src/lib/proposalForm.js): draftBodyFromForm now accepts
{ forUpdate } and, when true, emits explicit empty strings for
cleared text fields (name/url/paymentAddress) and null for
cleared epochs. NewProposal passes forUpdate: true to
updateDraft so user clears actually PATCH through. Previously
dropping empty keys silently discarded a user's explicit
delete of e.g. url.
- FE P2 (src/pages/ProposalStatus.js): extend the [id] reset
effect to also clear deleteError, attachError, and txidInput.
These are per-submission UI state; leaving them set across
/governance/proposal/:id navigations showed stale error
banners on a different row and — worse — prefilled row B's
attach-collateral textbox with a txid the user pasted into
row A.
- FE P3 (src/pages/ProposalStatus.js): treat not_found as
non-retryable in the polling short-circuit. Previously the
guard excluded only invalid_id and forbidden; a stale/deleted
submission would otherwise keep re-firing getSubmission every
60s indefinitely for every mounted tab.
Full regression tests added for each fix; existing test suites
continue to pass (BE 714 / FE 652).
Made-with: Cursor
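BE P1's success-path clash handling can be sketched with hypothetical names: when the submit succeeded but the governance hash is already owned by another row, the row is flipped to terminal failed instead of being retried forever.

```javascript
// Dispatcher success-path: a governance_hash_clash from markSubmitted
// means another row owns this hash, so transition THIS row to terminal
// failed (duplicate_governance_hash) rather than leaving it retryable.
function handleSubmitSuccess(row, markSubmitted, markFailed, onFailed) {
  try {
    return markSubmitted(row.id);
  } catch (err) {
    if (err && err.code === 'governance_hash_clash') {
      const failedRow = markFailed(row.id, 'duplicate_governance_hash');
      if (failedRow) onFailed(failedRow); // CAS winner emits the hook once
      return null;
    }
    throw err; // unexpected errors still surface
  }
}
```

This mirrors the existing duplicate-error branch, so both "Core said already exists" and "Core returned a hash for a duplicate" converge on the same terminal user-visible outcome.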
@codex review. Don't worry about migrations, not an issue.
💡 Codex Review
Reviewed commit: d3050d2a15
// already far exceeds total supply). Route through a fixed
// decimal form so the regex/BigInt path below stays the sole
// format of record.
return parsePaymentAmountToSats(input.toFixed(8));
Reject over-precision numeric payment amounts
parsePaymentAmountToSats currently converts numeric inputs with toFixed(8), which rounds instead of validating precision. That means a request like paymentAmount: 0.000000009 is silently accepted as 1 sat (and 1.999999999 becomes 2 SYS) rather than being rejected for having more than 8 decimals, so the persisted/on-chain payment amount can differ from what the client sent. The string path correctly enforces <= 8 decimals, so the number path should preserve that behavior instead of mutating values.
Codex PR8 round 14 P1. `parsePaymentAmountToSats` previously routed
numeric inputs through `toFixed(8)` to sidestep scientific-notation
stringification. `toFixed` ROUNDS, so `0.000000009` silently became
1 sat and `1.999999999` became 2 SYS — the persisted/on-chain amount
diverged from what the client sent.

Replace the round-through-toFixed step with a non-mutating decimal
converter (`numberToDecimalString`) that preserves the number's exact
significant digits from its shortest round-trip form. The existing
string path's `<= 8 decimals` check then becomes the sole precision
gate: numeric `0.00000001` is still accepted as 1 sat, but numeric
`0.000000009`, `0.0000000099`, and `1.0000000001` are now rejected
with the same `payment_amount has more than 8 decimal places` error
that the string path emits.

Regression tests cover the rejection path and confirm that exact
8-decimal numeric values (1 sat, 1.00000001 SYS) still parse
losslessly.

Made-with: Cursor
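A plausible sketch of such a converter; the actual `numberToDecimalString` in the repo may differ. It expands the number's shortest round-trip form into plain decimal notation without rounding, so `1e-8` becomes `"0.00000001"` while `9e-9` stays `"0.000000009"` and can then be rejected by a downstream 8-decimal check.

```javascript
// Expand a finite JS number's String() form into plain decimal
// notation, preserving every significant digit (no rounding).
function numberToDecimalString(n) {
  const s = String(n);
  const m = s.match(/^(-?)(\d+)(?:\.(\d+))?e([+-]\d+)$/i);
  if (!m) return s; // already plain decimal, e.g. "42.5"
  const [, sign, intPart, fracPart = '', expStr] = m;
  const exp = Number(expStr);
  const digits = intPart + fracPart;
  const pointIndex = intPart.length + exp; // where the decimal point lands
  let out;
  if (pointIndex <= 0) {
    out = '0.' + '0'.repeat(-pointIndex) + digits;       // e.g. 1e-8
  } else if (pointIndex >= digits.length) {
    out = digits + '0'.repeat(pointIndex - digits.length); // e.g. 1.23e+21
  } else {
    out = digits.slice(0, pointIndex) + '.' + digits.slice(pointIndex);
  }
  return sign + out;
}
```

Because nothing is rounded, a value with nine significant decimals survives the conversion intact and fails the precision gate exactly as a string input would.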
@codex review. Don't worry about migrations, not an issue.
💡 Codex Review
Reviewed commit: 26be11d083
typeof payload.payment_amount_sats !== 'bigint' ||
payload.payment_amount_sats <= 0n
) {
Enforce int64 cap on proposal payment amounts
validateStructural only checks that payment_amount_sats is positive, so inputs above SQLite's signed 64-bit range (e.g. 9223372036854775808 sats) pass validation and reach persistence. In POST /gov/proposals/prepare, that overflows the payment_amount_sats INTEGER write in proposalSubmissions.create, which surfaces as a generic 500 instead of a deterministic 400 validation error; users can repeatedly hit an internal error with the same payload. Add an explicit upper-bound check before persistence (or in structural validation) to reject out-of-range amounts.
Codex PR8 round 15 P2. `validateStructural` only rejected
payment_amount_sats when it was non-positive, so values at or above
int64_max (e.g. 9_223_372_036_854_775_808 sats) slipped through and
hit the SQLite INTEGER write inside proposalSubmissions.create. The
column overflowed the signed 64-bit range and POST
/gov/proposals/prepare surfaced a generic 500 instead of a
deterministic 400 validation error — clients then had no structured
reason to correct the payload and would retry the same broken request.

Introduce MAX_PAYMENT_AMOUNT_SATS = 2n**63n - 1n as an explicit upper
bound in structural validation, emitting a new `amount_too_large` code
so the frontend can map it to distinct copy (separate from
`amount_not_positive`). Syscoin's total supply cap is ~9e15 sats,
comfortably below this ceiling, so legitimate proposals are unaffected.

Regression tests cover three cases:
- Exact int64_max (9.22e18 sats) accepted — no storage overflow.
- int64_max + 1 rejected with `amount_too_large`, not
  `amount_not_positive`.
- Wildly over-range amount (10^20 sats) still produces a clean
  validation failure rather than an engine-level overflow.

Made-with: Cursor
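The bound check itself is small; the constant name follows the commit message, while the issue-code return shape here is illustrative rather than the repo's exact validator API.

```javascript
// SQLite INTEGER is a signed 64-bit value, so this is the storage ceiling.
const MAX_PAYMENT_AMOUNT_SATS = 2n ** 63n - 1n; // 9223372036854775807n

// Returns an issue code, or null when the amount is structurally valid.
function checkPaymentAmountSats(sats) {
  if (typeof sats !== 'bigint' || sats <= 0n) return 'amount_not_positive';
  if (sats > MAX_PAYMENT_AMOUNT_SATS) return 'amount_too_large';
  return null;
}
```

Validating with BigInt keeps the comparison exact at the int64 boundary, where Number arithmetic would already have lost precision.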
@codex review. Don't worry about migrations, not an issue.
💡 Codex Review
Reviewed commit: 748bf67b8d
.gObject_submit(parentHash, String(revision), String(time), dataHex, feeTxid)
.call(true);
Send numeric revision/time to gObject_submit
gobject_submit expects revision and time as numeric RPC params, but this adapter coerces both to strings before calling syscoind. In production this causes Core to reject submission calls with invalid-parameter errors once collateral reaches 6 confs; the dispatcher classifies those as transient and leaves rows stuck in awaiting_collateral, so proposals never transition to submitted or failed.
} else if (typeof sats === 'string') {
  // Require digit-only with no leading/trailing whitespace and
  // no leading zeros longer than 1 char (so "0" is fine but
  // "007" is not — matches the canonical serialization we
  // would later emit). An empty string is rejected.
  isValid = /^(0|[1-9][0-9]*)$/.test(sats);
Enforce int64 ceiling for draft payment_amount_sats
Draft validation accepts arbitrarily large digit strings for payment_amount_sats but never bounds them to SQLite INTEGER range. A request with a very large value passes this check and then reaches proposal_drafts.payment_amount_sats insert/update, where the oversized BigInt can throw at the DB layer and is surfaced as 500 internal instead of a deterministic 400 validation_failed.
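One way to add the bound this review asks for, as a hedged sketch: the constant is signed int64 max (what a SQLite INTEGER column can hold), and the error code mirrors the `amount_too_large` code the PR introduces in a later commit. The function name and return shape are assumptions for illustration.

```javascript
// Illustrative gate: bound an already shape-validated sats value to
// the signed 64-bit range SQLite's INTEGER column can store.
const MAX_PAYMENT_AMOUNT_SATS = 9223372036854775807n; // 2^63 - 1

function checkInt64Ceiling(sats) {
  // sats is a digit-only string, a safe-integer number, or a BigInt
  // at this point; BigInt() parses all three losslessly.
  const v = BigInt(sats);
  if (v > MAX_PAYMENT_AMOUNT_SATS) {
    return { ok: false, code: 'amount_too_large' };
  }
  return { ok: true, value: v };
}
```

Normalizing to BigInt on the way out means downstream persistence never has to re-parse a string, which matches the design described in the round-16 commit below.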
…ap on draft amounts
Codex PR8 round 16.
P1 (lib/proposalRpc.js + server.js): the production RPC adapter was
wrapping `revision` and `time` in `String(...)` before invoking
@syscoin/syscoin-js. syscoin-js forwards JS values to the daemon
as-is, so both params arrived as JSON strings; Syscoin Core's
`gobject_submit` declares them as `RPCArg::Type::NUM` and
`RPCTypeCheck` rejects the call with an
"Expected type number, got string" RPC_TYPE_ERROR. The dispatcher's
terminal/transient classifier treated that as transient and rows
stayed pinned to `awaiting_collateral` forever in production — no
proposal ever transitioned to `submitted` or `failed`. The String()
wrappers were a leftover from mirroring the syscoin-CLI shape
(rpc/client.cpp has `{"gobject_submit", 1, "revision"}` conversion
entries, but those rules are applied by the CLI before forwarding
and do not apply to direct JSON-RPC callers).
Extracted the adapter into `lib/proposalRpc.js` (factory pattern)
so the exact argument shapes sent to syscoin-js are now
unit-testable. Regression test asserts revision/time are numeric,
not strings; also guards the earlier `gobject_check`
one-positional-arg fix and the `getRawTransaction` verbose 0/1
mapping.
P2 (routes/govProposals.js): draft validation accepted arbitrarily
large digit strings / BigInts / numbers for `payment_amount_sats`
but never bounded them to SQLite INTEGER (signed 64-bit) range.
Oversized values overflowed the column on insert/update and
surfaced as a generic 500 instead of a deterministic 400. Added
an int64 ceiling via the existing `MAX_PAYMENT_AMOUNT_SATS`
constant in `proposalValidate.js`, applied after the shape
validation so all three accepted wire forms (BigInt / Number /
digit-string) flow through one gate and emit a new
`amount_too_large` issue code. Values are normalized to BigInt on
the way out so the drafts repo never re-parses a string.
Regression tests cover:
- proposalRpc: numeric vs string arg shape, gobject_check single
arg, verbose mapping, upstream error passthrough.
- drafts create/patch: int64_max accepted, int64_max+1 rejected
with amount_too_large, paymentAmount (decimal SYS) path also
gated.
Made-with: Cursor
@codex review. Don't worry about migrations, not an issue.

💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 30b58664f9
```js
try {
  rawForCanon.payment_amount_sats = BigInt(f.paymentAmountSats);
} catch {
```
Reject unsafe numeric paymentAmountSats before hashing
POST /gov/proposals/prepare converts paymentAmountSats with BigInt(f.paymentAmountSats) when the client sends a JSON number, but JavaScript numbers above Number.MAX_SAFE_INTEGER are already rounded before this conversion. In that case the route silently commits a different amount than the caller sent, so the stored submission/canonical hash can encode the wrong payment amount without returning a validation error. Please reject non-safe numeric integers (or require string input for large values) before converting to BigInt.
…shing

Codex PR8 round 17 P2.

Both POST /gov/proposals/prepare and the drafts create/patch paths
were running `BigInt(f.paymentAmountSats)` (or accepting a
`Number.isInteger(sats)` number) without bounding numeric input to
the JS safe-integer range. JSON.parse silently rounds integers above
`Number.MAX_SAFE_INTEGER` (2^53 - 1) at parse time, so a caller that
sent `9007199254740993` would see the server canonicalize, hash, and
persist `9007199254740992` — a silent mismatch between the bytes the
user typed and what the submission row commits to.

Extract a shared `parsePaymentAmountSatsInput` helper in
routes/govProposals.js that accepts three wire shapes explicitly:
- BigInt: must be >= 0n, forwarded as-is.
- number: must be `Number.isSafeInteger` AND >= 0. Integers at or
  above 2^53 are rejected with a dedicated
  `amount_sats_unsafe_number` code so the frontend can direct the
  caller to resend as a digit string.
- string: must match `/^(0|[1-9][0-9]*)$/` — the canonical
  digit-only form. `BigInt` parses digit strings losslessly, so this
  is the recommended wire form for amounts above safe-integer.

The helper always returns a BigInt on success, which flows into the
existing int64 ceiling (MAX_PAYMENT_AMOUNT_SATS) gate without
further normalization. Both /prepare and /drafts now route through
the helper, replacing the old `BigInt(...)` try/catch and the ad-hoc
`Number.isInteger` check that leaked rounded doubles through to
persistence.

Regression tests cover:
- drafts create: unsafe JSON-number rejected with
  amount_sats_unsafe_number, Number.MAX_SAFE_INTEGER accepted, large
  digit-strings accepted.
- prepare: unsafe JSON-number rejected with the same code, large
  digit-strings pass end-to-end into a prepared submission.

Made-with: Cursor
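A condensed sketch of the three-shape helper the commit describes. The acceptance rules follow the commit text; the return/error object shapes are illustrative assumptions, not the PR's actual code.

```javascript
// Sketch of parsePaymentAmountSatsInput: accept BigInt, safe-integer
// number, or canonical digit string; always return a BigInt.
const DIGITS_RE = /^(0|[1-9][0-9]*)$/;

function parsePaymentAmountSatsInput(input) {
  if (typeof input === 'bigint') {
    // Forwarded as-is once non-negative.
    return input >= 0n ? { ok: true, sats: input } : { ok: false, code: 'validation_failed' };
  }
  if (typeof input === 'number') {
    // JSON.parse has already rounded anything at or above 2^53, so
    // reject it rather than silently hashing a different amount.
    if (!Number.isSafeInteger(input) || input < 0) {
      return { ok: false, code: 'amount_sats_unsafe_number' };
    }
    return { ok: true, sats: BigInt(input) };
  }
  if (typeof input === 'string' && DIGITS_RE.test(input)) {
    return { ok: true, sats: BigInt(input) }; // lossless at any size
  }
  return { ok: false, code: 'validation_failed' };
}
```

Note the subtlety the commit exploits: by the time a JSON number reaches this helper the damage is already done, so the only safe move for oversized numerics is rejection plus a code that steers the client toward the digit-string form.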
@codex review. Don't worry about migrations, not an issue.

Codex Review: Didn't find any major issues. More of your lovely PRs please.
Summary
Backend slice for governance-proposal creation. Users can draft a
proposal, commit to its canonical hash, pay the 150 SYS collateral
(burned, non-refundable — enforced at the Core protocol level), and
have the backend auto-submit the gobject once the collateral clears 6
confirmations. All state changes surface via existing email channels
with full compliance footers.
Frontend wiring (Pali dApp interaction, PSBT builder, wizard, status
page, drafts list) lands in a sibling PR against sysnode-info.
What's new
Canonical hashing (lib/proposalHash.js)
JS port of `CGovernanceObject::GetHash()` from Syscoin Core. Lets the
dApp hand the OP_RETURN bytes to the user's own wallet instead of
calling `gobject_prepare` server-side (which would unconditionally
burn 150 SYS from the backend wallet on every attempt — not an option
for a wallet-less backend). Verified against frozen golden vectors +
field-sensitivity tests.
Canonicalization (lib/proposalValidate.js)
- Emits exactly the on-chain fields (type, name, start_epoch, end_epoch,
  payment_address, payment_amount, url) with a fixed key order and
  non-scientific decimal amounts so the hash is byte-deterministic.
- Stricter than Core on URL scheme (requires http(s)://) as a
  social-engineering guardrail; Core's CheckURL is permissive.
- `payment_count` is deliberately NOT in canonical JSON — it is not a
  consensus field in Core; kept as display metadata only, with a soft
  1..60 bound in the route layer.
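The fixed-key-order serialization can be sketched as below. The key list follows the on-chain fields named in this section; `satsToDecimal` and the exact encoding are illustrative assumptions (the real serializer lives in lib/proposalValidate.js).

```javascript
// Illustrative byte-deterministic serializer: fixed key order, no
// scientific notation in amounts.
const KEY_ORDER = [
  'type', 'name', 'start_epoch', 'end_epoch',
  'payment_address', 'payment_amount', 'url',
];

// Format sats (BigInt) as a plain 8-decimal SYS string, so even huge
// amounts never round-trip through float/scientific notation.
function satsToDecimal(sats) {
  const s = sats.toString().padStart(9, '0');
  return `${s.slice(0, -8)}.${s.slice(-8)}`;
}

function canonicalJson(fields) {
  const parts = KEY_ORDER.map((k) => {
    const v = fields[k];
    const enc = typeof v === 'string' ? JSON.stringify(v) : String(v);
    return `"${k}":${enc}`;
  });
  return `{${parts.join(',')}}`;
}
```

Because key order and number formatting are pinned, hashing the UTF-8 bytes of this string on two machines (or in the dApp and the backend) yields the same digest, which is the whole point of canonicalization here.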
State machine (lib/proposalSubmissions.js + proposalDispatcher.js)
`prepared → awaiting_collateral → (submitted | failed)`. Repo enforces
the transition and raises stable `.code` errors. Dispatcher polls
`awaiting_collateral` rows once a minute, bumps confirmation counts
from `getRawTransaction`, and calls `gObject_submit` once
`>= GOVERNANCE_FEE_CONFIRMATIONS (6)`. Rejection vs transient error
classification is tested against Core's actual error strings.
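The transition enforcement can be sketched as a whitelist map with a stable `.code` on violation. The state names follow the summary; the map shape and error code are assumptions for illustration.

```javascript
// Illustrative transition guard for the submission state machine.
const TRANSITIONS = {
  prepared: ['awaiting_collateral'],
  awaiting_collateral: ['submitted', 'failed'],
  submitted: [], // terminal
  failed: [],    // terminal
};

function assertTransition(from, to) {
  if (!(TRANSITIONS[from] || []).includes(to)) {
    const err = new Error(`illegal transition ${from} -> ${to}`);
    err.code = 'illegal_transition'; // stable code for callers to branch on
    throw err;
  }
}
```

Putting the guard inside the repo (rather than the dispatcher) means every writer, including future ones, gets the same invariant for free, and callers can branch on `.code` without string-matching messages.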
HTTP surface (routes/govProposals.js → /gov/proposals)
- `gObject_check` pre-flight, persist `prepared` row (idempotent on
  `(userId, proposalHash)`), atomically consume the source draft, and
  hand the row to the dispatcher.
Mailer (lib/mailer.js)
- State-change emails reference both collateral txid and governance hash.
- The prepare email warns about the non-refundable 150 SYS burn.
- Compliance footer explains receipt (you're
  getting this because you authored the proposal; cannot disable).
Design notes worth calling out
- Missing collateral times out to `failed` after 7 days, even though
  Core would tolerate late collateral via its "postponed" path. The
  difference: one deterministic state transition observable by the
  user, instead of a silent Core-internal retry schedule we can't
  introspect.
- `payment_count` stays out of the hashed payload. Number of payouts
  is already implicit from `(end_epoch - start_epoch) /
  superblock_cycle`; adding it to the JSON would diverge from Core's
  canonical representation.
- Hooks are the only write-through the mailer has to state changes.
  Mailer failures are swallowed + logged so a broken SMTP relay never
  stalls the advance loop.
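The "swallow + log" hook pattern can be sketched as below; the `mailer`/`logger` interfaces are assumptions (a real send would be async), but the shape of the guarantee is the point: a mail failure is logged, never propagated.

```javascript
// Illustrative mail hook: failures are contained here so the
// dispatcher's state transitions never depend on SMTP health.
function notifySafely(mailer, logger, event) {
  try {
    mailer.send(event);
  } catch (err) {
    // Deliberately no rethrow: a broken relay must not stall the loop.
    logger.warn(`mailer failed for ${event.type}: ${err.message}`);
  }
}
```

This is the inverse of the immutability choice for submission rows: mail is best-effort and lossy by design, while the hashed snapshot fields are frozen at create() time precisely because they are not.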
Test plan
- Hashing (golden vector, field stretching), repos (state machine),
  dispatcher (conf tracking, submit path, failure paths, hook wiring)
- Draft limits, idempotent prepare, atomic draft consumption,
  duplicate-txid detection, soft handling of transient RPC
Follow-ups (explicitly out of scope for this PR)
🤖 Generated with Claude Code
Made with Cursor