Is your feature request related to a problem? Please describe.
Thanks for the interesting technical discussion with @ericspod @wyli @atbenmurray @rijobro. As we still have many unclear requirements and unknown use cases, we plan to develop the model package feature step by step, and may adjust the design based on feedback during development.
For the initial step, the core team aligned on developing a small but typical example for inference first. It will use JSON config files to define the environment, components, and workflow, and save the config and model into a TorchScript model, so that other projects can easily reconstruct the exact same Python program and parameters to reproduce the inference. When the small MVP is ready, we will share and discuss it within the team to plan the next steps.
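As a sketch of the packaging idea described above (the config keys, file name, and network here are illustrative placeholders, not the final design), a JSON config can be embedded alongside the model in a single TorchScript file via PyTorch's `_extra_files` mechanism, and recovered later to rebuild the same program:

```python
import io
import json

import torch

# Hypothetical inference config; keys and values are illustrative only.
config = {"network": "TinyNet", "preprocessing": ["LoadImage", "ScaleIntensity"]}

class TinyNet(torch.nn.Module):
    def forward(self, x):
        return x * 2.0

# Save the scripted model and the config together in one TorchScript archive.
buffer = io.BytesIO()
torch.jit.save(
    torch.jit.script(TinyNet()),
    buffer,
    _extra_files={"inference.json": json.dumps(config)},
)

# Another project can reload both the model and the config from the archive.
buffer.seek(0)
extra = {"inference.json": ""}
model = torch.jit.load(buffer, _extra_files=extra)
restored = json.loads(extra["inference.json"])
```

Here `restored` equals the original `config` dict, so the loading side can reconstruct the same components and parameters without any out-of-band files.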
I will try to implement the MVP referring to some existing solutions, such as NVIDIA Clara MMAR, the Ignite online package, etc. Basic task steps:
- Include metadata (env / sys info, changelog, version, input / output data format, etc.), configs, model weights, etc. in a model package example for review. PR: [WIP] 486 Add example model package tutorials #487
- Define config components by `name`/`path` & `args`. PR: 3482 Add `ConfigComponent` for config parsing #3720
- Parse config content and resolve references between config items, e.g. `{"dataset": {"<name>": "Dataset", "<args>": {"data": "$load_datalist()"}}, "dataloader": {"<name>": "DataLoader", "<args>": {"data": "@dataset"}}}`. PRs: 3482 Add `ReferenceResolver` to manage config items #3818, 3482 3829 Add `ConfigParser` to recursively parse config content #3822
- Add metadata support (metadata #3865)
- `run` API for common training, evaluation and inference #3832
- Hugging Face integration (Pretrained Models #3451)
- Support relative IDs in config references, e.g. `"test": "@###data#1"`, where `#` means the current level, `##` means the upper level, etc. PR: 3482 Add support for relative IDs in the config content #3974
- Support customized `ConfigItem` and `ReferenceResolver` in the `ConfigParser`. PR: 3482 Add support for customized ConfigItem and resolver #3980
- `_requires_` keyword for config component (adds a `_requires_` key for `ConfigComponent` #3942)
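To make the `@` reference and `$` expression syntax in the steps above concrete, here is a simplified sketch of how such a config could be resolved. This is an illustration only, not MONAI's actual implementation: the `REGISTRY` stand-in and all function names are hypothetical, and it assumes items appear in dependency order with no cycle handling.

```python
# Hypothetical registry standing in for real importable components.
REGISTRY = {"Dataset": lambda data: list(data)}

def resolve_value(value, resolved):
    # "@id" refers to an already-resolved config item.
    if isinstance(value, str) and value.startswith("@"):
        return resolved[value[1:]]
    # "$expr" is evaluated as a Python expression.
    if isinstance(value, str) and value.startswith("$"):
        return eval(value[1:])
    return value

def instantiate(spec, resolved):
    # A dict with a "<name>" key describes a component to build.
    if isinstance(spec, dict) and "<name>" in spec:
        args = {k: resolve_value(v, resolved)
                for k, v in spec.get("<args>", {}).items()}
        return REGISTRY[spec["<name>"]](**args)
    return resolve_value(spec, resolved)

def parse(config):
    # Resolve items in order; a real resolver would track dependencies.
    resolved = {}
    for item_id, spec in config.items():
        resolved[item_id] = instantiate(spec, resolved)
    return resolved

items = parse({
    "datalist": "$[1, 2, 3]",
    "dataset": {"<name>": "Dataset", "<args>": {"data": "@datalist"}},
})
```

After parsing, `items["dataset"]` is a component built from the resolved `@datalist` reference, which mirrors how the `dataloader` example above would receive the constructed `@dataset`.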