
Validate tensor dimensions in xnnpack flatbuffer#18896

Open
lucylq wants to merge 1 commit into main from security39-42

Conversation

@lucylq
Contributor

@lucylq lucylq commented Apr 14, 2026

Validate that dims array is non-null and num_dims matches the actual array size in defineTensor to prevent heap buffer overflows.

Change flatbufferDimsToVector to return Result<> with null-check and per-dimension validation against a 16M limit to prevent unbounded memory allocation from malicious dimension values.

Authored-with: Claude
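The validation described above can be sketched outside of ExecuTorch as follows. This is a simplified stand-in, not the PR's actual code: `Result` is modeled here with `std::optional`, whereas the real change returns `executorch::runtime::Result` and reports `InvalidProgram` errors.

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

// Simplified stand-in for executorch::runtime::Result<T>: empty on error.
template <typename T>
using Result = std::optional<T>;

constexpr uint32_t kMaxDimensionValue = 1u << 24; // 16M per dimension

// Convert a (possibly null) serialized dims array to size_t, rejecting null
// input and any dimension above the cap -- the pattern the PR applies to
// flatbufferDimsToVector.
Result<std::vector<size_t>> dimsToVector(const std::vector<uint32_t>* fb_dims) {
  if (fb_dims == nullptr) {
    return std::nullopt; // InvalidProgram in the real code
  }
  std::vector<size_t> dims;
  dims.reserve(fb_dims->size());
  for (uint32_t d : *fb_dims) {
    if (d > kMaxDimensionValue) {
      return std::nullopt; // reject unbounded memory allocation
    }
    dims.push_back(static_cast<size_t>(d));
  }
  return dims;
}
```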

@pytorch-bot

pytorch-bot bot commented Apr 14, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18896

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEVs

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 1 New Failure, 2 Unrelated Failures

As of commit 459b8f5 with merge base 5e8a0df:

NEW FAILURE - The following job has failed:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Apr 14, 2026
@lucylq
Contributor Author

lucylq commented Apr 14, 2026

@claude review

@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@claude

claude bot commented Apr 14, 2026

Claude finished @lucylq's task in 1m 33s


Review of PR #18896

  • Read PR diff and changed files
  • Analyze changes for correctness, security, and style
  • Post review feedback

This PR adds input validation to prevent heap buffer overflows (TOB-EXECUTORCH-39, -42) from malicious flatbuffer payloads. The approach is sound — validating at deserialization time before passing untrusted dimension values to XNNPACK APIs. Here's a detailed review:


Looks Good

Null check in flatbufferDimsToVector (XNNCompiler.cpp:166-169): Converting the function to return Result<> and adding the null check is the right pattern. All 7 call sites have been properly updated to propagate errors.

num_dims vs dims()->size() consistency check (XNNCompiler.cpp:306-311): Validating that the declared num_dims matches the actual flatbuffer array size prevents a mismatch that could lead to out-of-bounds reads downstream.

Combined null check in defineTensor (XNNCompiler.cpp:301-304): Checking both tensor_value and tensor_value->dims() in one guard is cleaner and prevents null dereference on dims().

Error propagation at all call sites: All callers (defineStaticTransposeNode, defineStaticConstantPadNode, defineStaticReshapeNode, defineStaticSliceNode) are correctly updated with the Result unwrapping pattern using std::move.


Issues

1. kMaxDimensionValue validation applies to non-dimension values (minor concern)

flatbufferDimsToVector is used for permutation indices (perm()), padding values (pre_paddings(), post_paddings()), offsets, and sizes — not just tensor dimensions. The 16M limit is generous enough that it won't cause false positives for these use cases, but the function name and the error message both say "Dimension value" which could be confusing when diagnosing errors from, say, a bad perm() array. This is a minor documentation/naming nit, not a blocker.

2. 1 << 24 is a signed shift (XNNCompiler.cpp:161)

```cpp
constexpr uint32_t kMaxDimensionValue = 1 << 24; // 16M per dimension
```

The literal 1 is int, so 1 << 24 is a signed int shift. This works correctly here (the value 16,777,216 fits in a 32-bit signed int), but for consistency and to avoid any linter/compiler warnings, consider using 1u << 24 or uint32_t{1} << 24.
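A minimal illustration of the suggested spelling (a hypothetical standalone snippet, not the PR's file); the static_asserts confirm the unsigned and signed forms produce the same value:

```cpp
#include <cstdint>

// Unsigned literal avoids a signed int shift; the value is identical.
constexpr uint32_t kMaxDimensionValue = 1u << 24; // 16M per dimension

static_assert(kMaxDimensionValue == 16777216u, "16M per dimension");
static_assert((1u << 24) == uint32_t{1 << 24}, "same value either way");
```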

3. No total-elements overflow check (suggestion, not a blocker)

The per-dimension cap of 16M prevents any single dimension from being absurdly large, but a tensor with dimensions [16M, 16M, 16M] would still represent an astronomical number of elements. XNNPACK itself may reject such tensors downstream, but an explicit total-element-count check (e.g., ensuring the product of all dimensions doesn't exceed some limit) would be a stronger defense. This is a hardening suggestion for future work, not a requirement for this PR.
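One way such a total-element check could look, as a hedged sketch rather than a proposal from the PR: `kMaxTotalElements` is an illustrative limit, and `__builtin_mul_overflow` is a GCC/Clang builtin, so a portable implementation would need a different overflow check.

```cpp
#include <cstddef>
#include <vector>

// Illustrative cap on the total element count; not a value from the PR.
constexpr size_t kMaxTotalElements = size_t{1} << 32;

// Compute the product of all dimensions with overflow detection.
// __builtin_mul_overflow (GCC/Clang) returns true when the multiply wraps.
bool totalElementsOk(const std::vector<size_t>& dims) {
  size_t total = 1;
  for (size_t d : dims) {
    if (__builtin_mul_overflow(total, d, &total) ||
        total > kMaxTotalElements) {
      return false;
    }
  }
  return true;
}
```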


Summary

The PR is a clear security improvement. It correctly validates deserialized flatbuffer tensor data before it reaches XNNPACK APIs, addressing heap buffer overflow risks from crafted inputs. The Result<> return type change is clean and all call sites are updated. The signed shift literal is the only concrete fix I'd suggest before merging.

@lucylq lucylq marked this pull request as ready for review April 14, 2026 23:14
@lucylq lucylq requested a review from digantdesai as a code owner April 14, 2026 23:14
Copilot AI review requested due to automatic review settings April 14, 2026 23:14
@lucylq
Contributor Author

lucylq commented Apr 14, 2026

addressed claude's comments

Contributor

Copilot AI left a comment


Pull request overview

This PR hardens the XNNPACK runtime flatbuffer deserialization path by validating shape vectors before passing raw pointers into XNNPACK APIs, aiming to prevent heap buffer overflows and unbounded allocations from malformed or malicious programs.

Changes:

  • Change flatbufferDimsToVector to return Result<std::vector<T>>, add NULL-checking, and validate each dimension value against a max limit.
  • Add validation in defineTensor to ensure dims exist and num_dims matches the flatbuffer vector length, and propagate Result errors to callers.
  • Update several node-definition helpers (transpose/reshape/slice/pad) to handle Result from flatbufferDimsToVector.
Comments suppressed due to low confidence (3)

backends/xnnpack/runtime/XNNCompiler.cpp:1160

  • xnn_define_static_reshape is called with graph_node->num_dims() but new_shape()->size() is not validated against num_dims. A malformed flatbuffer could cause XNNPACK to read past dims_data. Please add new_shape != nullptr and new_shape->size() == num_dims validation (and a reasonable num_dims cap) before the call.

```cpp
  auto graph_node = node->xnode_union_as_XNNStaticReshape();

  // Get tensor dims, we need to convert the uint32_t* to size_t*
  auto dims_result = flatbufferDimsToVector(graph_node->new_shape());
  if (!dims_result.ok()) {
    return dims_result.error();
  }
  std::vector<size_t> dims_data = std::move(dims_result.get());

  xnn_status status = xnn_define_static_reshape(
      subgraph_ptr,
      graph_node->num_dims(),
      dims_data.data(),
      remapped_ids.at(graph_node->input_id()),
```

backends/xnnpack/runtime/XNNCompiler.cpp:1465

  • xnn_define_static_slice is called with graph_node->num_dims() but there is no validation that offsets()->size() and sizes()->size() both equal num_dims. If either vector is shorter, XNNPACK will read past offsets/sizes. Add checks for non-null vectors, matching sizes, and a reasonable num_dims cap before calling XNNPACK.

```cpp
  auto graph_node = node->xnode_union_as_XNNStaticSlice();

  auto offsets_result = flatbufferDimsToVector(graph_node->offsets());
  if (!offsets_result.ok()) {
    return offsets_result.error();
  }
  std::vector<size_t> offsets = std::move(offsets_result.get());
  auto sizes_result = flatbufferDimsToVector(graph_node->sizes());
  if (!sizes_result.ok()) {
    return sizes_result.error();
  }
  std::vector<size_t> sizes = std::move(sizes_result.get());

  xnn_status status = xnn_define_static_slice(
      subgraph_ptr,
      graph_node->num_dims(),
      offsets.data(),
      sizes.data(),
      remapped_ids.at(graph_node->input_id()),
```

backends/xnnpack/runtime/XNNCompiler.cpp:1004

  • xnn_define_static_transpose is called with graph_node->num_dims() but there is no validation that perm()->size() matches num_dims. If num_dims is larger than the perm vector length, XNNPACK will read past dims_data. Add a check that perm != nullptr and perm->size() == num_dims (and ideally num_dims <= XNN_MAX_TENSOR_DIMS) before calling into XNNPACK.

```cpp
  auto graph_node = node->xnode_union_as_XNNStaticTranspose();

  // Get tensor dims, we need to convert the uint32_t* to size_t*
  auto dims_result = flatbufferDimsToVector(graph_node->perm());
  if (!dims_result.ok()) {
    return dims_result.error();
  }
  std::vector<size_t> dims_data = std::move(dims_result.get());

  xnn_status status = xnn_define_static_transpose(
      subgraph_ptr,
      graph_node->num_dims(),
      dims_data.data(),
      remapped_ids.at(graph_node->input_id()),
```
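All three suggestions reduce to the same guard before each xnn_define_* call. A hedged sketch of that guard, where `checkNodeDims` is a hypothetical helper and `kMaxTensorDims` is an illustrative stand-in for XNN_MAX_TENSOR_DIMS (neither appears in the PR):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative stand-in for XNNPACK's XNN_MAX_TENSOR_DIMS rank limit.
constexpr uint32_t kMaxTensorDims = 6;

// The serialized vector must be non-null, its length must equal the
// node's declared num_dims, and num_dims must not exceed the rank limit.
template <typename Vec>
bool checkNodeDims(const Vec* vec, uint32_t num_dims) {
  return vec != nullptr &&
      vec->size() == static_cast<size_t>(num_dims) &&
      num_dims <= kMaxTensorDims;
}
```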


Comment on lines 301 to +311

```diff
 ET_CHECK_OR_RETURN_ERROR(
-    tensor_value != nullptr,
-    Internal,
-    "Deserialized Tensor is Null, this should never happen");
+    tensor_value != nullptr && tensor_value->dims() != nullptr,
+    InvalidProgram,
+    "Deserialized tensor is null, or tensor dims is null");
+
+ET_CHECK_OR_RETURN_ERROR(
+    tensor_value->num_dims() == tensor_value->dims()->size(),
+    InvalidProgram,
+    "Tensor num_dims %u does not match dims array size %u",
+    tensor_value->num_dims(),
+    tensor_value->dims()->size());
```
Comment on lines 154 to +159

```diff
 Converts dims from uint32 to size_t. Takes in a flatbuffer vector
 of uint32_t and returns a std::vector of size_t. XNNPACK takes in
 dims of size_t* but tensor shape is serialized in flatbuffer as
-int32_t. As a result, we need to static cast the shapes to size_t
+int32_t. As a result, we need to static cast the shapes to size_t.
+Individual dimension values are validated to prevent unbounded memory
+allocation from malicious inputs.
```
Comment on lines +166 to 172

```cpp
  ET_CHECK_OR_RETURN_ERROR(
      fb_dims != nullptr,
      InvalidProgram,
      "flatbufferDimsToVector: dims vector is null");
  std::vector<T> dims_data;
  dims_data.reserve(fb_dims->size());
  for (auto fb_dim : *fb_dims) {
```
Comment on lines 301 to +305

```diff
 ET_CHECK_OR_RETURN_ERROR(
-    tensor_value != nullptr,
-    Internal,
-    "Deserialized Tensor is Null, this should never happen");
+    tensor_value != nullptr && tensor_value->dims() != nullptr,
+    InvalidProgram,
+    "Deserialized tensor is null, or tensor dims is null");
```
Comment on lines 1061 to +1073

```diff
 const fb_xnnpack::XNNStaticConstantPad* graph_node =
     node->xnode_union_as_XNNStaticConstantPad();

-std::vector<size_t> pre_paddings_dims =
-    flatbufferDimsToVector(graph_node->pre_paddings());
-std::vector<size_t> post_paddings_dims =
-    flatbufferDimsToVector(graph_node->post_paddings());
+auto pre_result = flatbufferDimsToVector(graph_node->pre_paddings());
+if (!pre_result.ok()) {
+  return pre_result.error();
+}
+std::vector<size_t> pre_paddings_dims = std::move(pre_result.get());
+auto post_result = flatbufferDimsToVector(graph_node->post_paddings());
+if (!post_result.ok()) {
+  return post_result.error();
+}
+std::vector<size_t> post_paddings_dims = std::move(post_result.get());
```
```cpp
  Individual dimension values are validated to prevent unbounded memory
  allocation from malicious inputs.
*/
constexpr uint32_t kMaxDimensionValue = 1 << 24; // 16M per dimension
```
@GregoryComer
Member

Clamping max elements at 16 million makes me a little bit nervous. Flattening large tensors isn't uncommon, and if a graph break happened in between the flatten call and whatever uses it, it could break existing PTEs.

How critical is this check?

…piler

Validate that dims array is non-null and num_dims matches the actual array
size in defineTensor to prevent heap buffer overflows. Change
flatbufferDimsToVector to return Result<> with null-check and per-dimension
validation against a 16M limit to prevent unbounded memory allocation from
malicious dimension values.

Authored-with: Claude

Labels

CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed.


3 participants