Bria fibo #12545
Conversation
sayakpaul
left a comment
Thanks a lot for the PR. Excited for FIBO to make strides!
I have left a bunch of comments, most of which should be easily resolvable. If not, please let me know.
Additionally, I think:
- It'd be nice to include a code snippet for folks to test it out (@linoytsaban @asomoza).
- Remove the custom block implementations from the PR, host them on the Hub (just like this one), and guide users on how to use them alongside the pipeline.
output_height, output_width, _ = image.shape
assert (output_height, output_width) == (expected_height, expected_width)

@unittest.skipIf(torch_device not in ["cuda", "xpu"], reason="float16 requires CUDA or XPU")
We can remove this test I guess. If not, would you mind explaining why we had to override it here?
We used it to debug something; it's redundant and has been removed.
Seems like the test is still being kept here?
sayakpaul
left a comment
Added a few more comments. I think we should state clearly in the docs that users should absolutely use the structured prompt.
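For illustration only: the exact structured-prompt schema FIBO expects isn't shown in this thread, so the keys below are hypothetical placeholders. The general pattern of serializing a structured scene description to a JSON string before passing it as the pipeline's text prompt might look like:

```python
import json

# Hypothetical structured scene description; the real FIBO schema
# (field names, nesting) is NOT defined in this thread.
structured_prompt = {
    "subject": "a red vintage car",
    "setting": "rainy city street at night",
    "style": "cinematic photograph",
}

# Serialize to a JSON string, which would then be passed to the pipeline
# as its `prompt` argument.
prompt = json.dumps(structured_prompt)

# The string round-trips back to the original structure.
assert json.loads(prompt)["subject"] == "a red vintage car"
```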
output_height, output_width, _ = image.shape
assert (output_height, output_width) == (expected_height, expected_width)

@unittest.skipIf(torch_device not in ["cuda", "xpu"], reason="float16 requires CUDA or XPU")
Seems like the test is still being kept here?
- Updated BriaFiboAttnProcessor and BriaFiboAttention classes to reflect changes from the Flux equivalents.
- Modified the _unpack_latents method in BriaFiboPipeline to improve clarity.
- Increased the default max_sequence_length to 3000 and added a new optional parameter, do_patching.
- Cleaned up test_pipeline_bria_fibo.py by removing unused imports and skipping unsupported tests.
latents = latents.unsqueeze(dim=2)
latents = list(torch.unbind(latents, dim=0))
@kfirbria Hmm that's unusual. Is the input shape to the decoder in this format (batch_size, channels, 1, height, width)?
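For context, the shape effect of the two lines under discussion can be sketched with NumPy standing in for torch (`np.expand_dims` and indexing along axis 0 mirror `unsqueeze` and `unbind`; the dimension sizes here are illustrative, not taken from the PR):

```python
import numpy as np

# Illustrative sizes; the real latent dimensions come from the pipeline.
batch_size, channels, height, width = 2, 4, 32, 32
latents = np.zeros((batch_size, channels, height, width))

# torch: latents = latents.unsqueeze(dim=2)
# Inserts a singleton axis at position 2, giving the suspected
# (batch_size, channels, 1, height, width) decoder input layout.
latents = np.expand_dims(latents, axis=2)
assert latents.shape == (batch_size, channels, 1, height, width)

# torch: latents = list(torch.unbind(latents, dim=0))
# unbind removes dim 0 and yields one tensor per batch element.
latents = [latents[i] for i in range(latents.shape[0])]
assert len(latents) == batch_size
assert latents[0].shape == (channels, 1, height, width)
```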
- Updated class names from FIBO to BriaFibo for consistency across the module.
- Modified instances of FIBOEmbedND, FIBOTimesteps, TextProjection, and TimestepProjEmbeddings to reflect the new naming.
- Ensured all references in the BriaFiboTransformer2DModel are updated accordingly.
…oTransformer2DModel and BriaFiboPipeline classes to dummy objects for enhanced compatibility with torch and transformers.
… in pipeline module
- Added documentation comments indicating the source of copied code in the BriaFiboTransformerBlock and _pack_latents methods.
- Corrected the import statement for BriaFiboPipeline in the pipelines module.
…ration from existing implementations
- Updated comments in BriaFiboAttnProcessor, BriaFiboAttention, and BriaFiboPipeline to reflect that the code is inspired by other modules rather than copied.
- Enhanced clarity on the origins of the methods to maintain proper attribution.
…riaFibo classes
- Introduced a new documentation file for BriaFiboTransformer2DModel.
- Updated comments in BriaFiboAttnProcessor, BriaFiboAttention, and BriaFiboPipeline to clarify the origins of the code, indicating copied sources for better attribution.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

The PR has been merged, but the docs do not contain anything on how to use it?
What does this PR do?
Fixes # (issue)
Before submitting
Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.