Empowering AI researchers to share fully reproducible and portable model implementations.
Crowdsourced through contributions by the scientific research community, MHub is a repository of self-contained deep learning models pretrained for a wide variety of applications. MHub highlights recent trends in deep learning applications, enables transfer learning approaches, and promotes reproducible science.
Dockerized models for security and instant setup.
MHub models are bundled inside a Docker container with all their system dependencies and model weights, reducing setup to a single docker run command. The only thing you need to install on your system is Docker, and to remove a model, all you have to do is delete its image.
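The full lifecycle of a model then reduces to three standard Docker commands. A minimal sketch, using the image tag shown on this page (the commands are printed rather than executed so the sketch runs anywhere; drop the echo to run them for real):

```shell
# Image name/tag taken from this page; lifecycle commands are plain Docker.
IMAGE=mhubai/totalsegmentator:cuda12.0

echo "docker pull $IMAGE"            # setup: pulls code, dependencies, and weights in one step
echo "docker run --gpus all $IMAGE"  # run the model (flags as shown below)
echo "docker rmi $IMAGE"             # removal: deleting the image uninstalls the model
```

Because everything lives in the image, `docker rmi` leaves no stray files, environments, or weights behind on your system.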
Standardized input and output data across all MHub models.
All MHub models share the same flexible input and output interface. Inside the Docker container, the original model pipeline runs without modification, while all conversion and restructuring of the input data into the specific format the model requires is handled automatically. You therefore only need to prepare your data once for MHub and can then use it with any model.
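In practice, this means the same prepared folder can be handed to different models without touching the data. A sketch, assuming MHub's standard container-side mount points (`/app/data/input_data` and `/app/data/output_data`); the host paths and the second image name are placeholders (commands are echoed rather than executed):

```shell
# Host paths are placeholders; mount points assumed from MHub's standard interface.
INPUT=/path/to/your/dicom
OUTPUT=/path/to/results

# The same input folder, unchanged, drives two different models.
# The second image name is illustrative, not a guaranteed tag.
for IMAGE in mhubai/totalsegmentator:cuda12.0 mhubai/platipy:latest; do
  echo "docker run --rm --gpus all" \
       "-v $INPUT:/app/data/input_data" \
       "-v $OUTPUT:/app/data/output_data" \
       "$IMAGE"
done
```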
Run AI models with a single line of code.
1 | Prepare your data
2 | Select your model
3 | Run your model

docker run --gpus all mhubai/totalsegmentator:cuda12.0
MHub runs all models in a DICOM-to-DICOM pipeline by default. Simply point it to the folder containing your DICOM data. Is your data in a different format? Check our documentation and discover the full flexibility of MHub.
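Pointing a model at your DICOM folder is done with ordinary Docker volume mounts. A sketch of a complete DICOM-to-DICOM invocation; the host paths are placeholders and the container-side mount points are assumed from MHub's standard interface (the command is echoed rather than executed):

```shell
# Placeholder host paths; container-side paths assumed from MHub's interface.
DICOM_IN=/path/to/dicom      # folder holding your DICOM data
DICOM_OUT=/path/to/output    # MHub writes DICOM results here

echo "docker run --rm --gpus all" \
     "-v $DICOM_IN:/app/data/input_data" \
     "-v $DICOM_OUT:/app/data/output_data" \
     "mhubai/totalsegmentator:cuda12.0"
```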