{"name":"manini","display_name":"Manini","visibility":"public","icon":"","categories":[],"schema_version":"0.2.0","on_activate":null,"on_deactivate":null,"contributions":{"commands":[{"id":"manini.manini_widget","title":"Manini Widget","python_name":"manini._widget:ManiniWidget","short_title":null,"category":null,"icon":null,"enablement":null}],"readers":null,"writers":null,"widgets":[{"command":"manini.manini_widget","display_name":"Manini","autogenerate":false}],"sample_data":null,"themes":null,"menus":{},"submenus":null,"keybindings":null,"configuration":[]},"package_metadata":{"metadata_version":"2.1","name":"manini","version":"0.0.3","dynamic":null,"platform":null,"supported_platform":null,"summary":"An user-friendly plugin that enables to annotate images from a pre-trained model (segmentation, classification, detection) given by an user.","description":"# manini\n\n[![License BSD-3](https://img.shields.io/pypi/l/manini.svg?color=green)](https://github.com/hereariim/manini/raw/main/LICENSE)\n[![PyPI](https://img.shields.io/pypi/v/manini.svg?color=green)](https://pypi.org/project/manini)\n[![Python Version](https://img.shields.io/pypi/pyversions/manini.svg?color=green)](https://python.org)\n[![tests](https://github.com/hereariim/manini/workflows/tests/badge.svg)](https://github.com/hereariim/manini/actions)\n[![codecov](https://codecov.io/gh/hereariim/manini/branch/main/graph/badge.svg)](https://codecov.io/gh/hereariim/manini)\n[![napari hub](https://img.shields.io/endpoint?url=https://api.napari-hub.org/shields/manini)](https://napari-hub.org/plugins/manini)\n\nAn user-friendly plugin that enables to annotate images from a pre-trained model (segmentation, classification, detection) given by an user.\n\nThe Manini plugin for napari a tool to perform image inference from a pre-trained model (tensorflow .h5) and then annotate the resulting images with the tools provided by napari. Its development is ongoing.\n\n![Screencast from 24-01-2023 14 00 51](https://user-images.githubusercontent.com/93375163/214298805-8405a923-5952-458c-8542-7c78887479ab.gif)\n\n----------------------------------\n\nThis [napari] plugin was generated with [Cookiecutter] using [@napari]'s [cookiecutter-napari-plugin] template.\n\n\n\n## Installation\n\nYou can install `manini` via [pip]:\n\n pip install manini\n\nTo install latest development version :\n\n pip install git+https://github.com/hereariim/manini.git\n\n\n## Description\n\nThis plugin is a tool to perform 2D image inference. The inference is open to the model for image segmentation (binary or multiclass), image classification and object detection.\nThis tool is compatible with tensorflow h5 models. In this format, the h5 file must contain all the elements of the model (architecture, weights, etc).\n\n### Image segmentation\n\nThis tool allows image inference from a segmentation model.\n\n#### Input\n\nThe user must deposit two items (+1 optional item).\n\n- A compressed file (.zip) containing the images in RGB\n\n```\n.\n└── input.zip\n ├── im_1.JPG\n ├── im_2.JPG \n ├── im_3.JPG\n ...\n └── im_n.JPG\n```\n\n- A tensorflow h5 file (.h5) which is the segmentation model\n- A text file (.txt) containing the names of the classes (optional)\n\nThe Ok button is used to validate the imported elements. The Run button is used to launch the segmentation.\n\n#### Processing\n\nOnce the image inference is complete, the plugin returns a drop-down menu showing a list of RGB images contained in the compressed file. 
#### Processing\n\nOnce the image inference is complete, the plugin returns a drop-down menu showing a list of the RGB images contained in the compressed file. When the user clicks on an image in this list, two items appear in the napari window:\n\n- A labels layer, which is the segmentation mask\n- An image layer, which is the RGB image\n\n![cpe](https://user-images.githubusercontent.com/93375163/214246685-e86a9f62-bb27-44b5-92eb-86ef5aa2c663.png)\n\nA widget also appears to the right of the window. It lists the classes of the model with their associated colours. In this tool, the number of classes is limited to 255.\n\nThe user can make annotations on the labels layer. For example, the user can correct mispredicted pixels with the brush or the eraser.\n\n#### Output\n\nThe Save button allows you to obtain a compressed file. This file contains folders with the RGB images and their greyscale masks.\n\n### Image classification\n\nThis tool performs image inference from an image classification model.\n\n#### Input\n\nThis tool requires three mandatory inputs:\n\n- A compressed file (.zip) containing the RGB images\n\n```\n.\n└── input.zip\n    ├── im_1.JPG\n    ├── im_2.JPG\n    ├── im_3.JPG\n    ...\n    └── im_n.JPG\n```\n\n- A TensorFlow h5 file (.h5) which is the image classification model\n- A text file (.txt) containing the class names\n\nThe Ok button is used to validate the imported elements. The Run button is used to launch the classification.\n\n#### Processing\n\nOnce the image inference is complete, the plugin returns two elements:\n\n- a drop-down menu showing a list of the RGB images contained in the compressed file\n- a table containing the predicted class for each image\n\n![cpe2](https://user-images.githubusercontent.com/93375163/214252875-c8e59773-4c3d-4582-b8db-67c59ab01975.png)\n\nThe user can change the predicted class of an image by selecting another class in the associated drop-down menu.\n\n#### Output\n\nThe Save button allows you to obtain a CSV file. This file is the table including the modifications made by the user.\n\n### Detection\n\nThis tool performs image inference from a YOLO object detection model. The inference is run with the [darknet] command-line tool.\n\n#### Input\n\nThis tool requires five mandatory inputs:\n\n- A folder which is the darknet repository\n- A file (.data) containing the paths (train, validation, test, class) and the number of classes\n- A file (.cfg) containing the model architecture\n- A file (.weight) containing the weights associated with the model (.cfg) cited just above\n- A file (.txt) that lists the paths of the images\n\nThe Ok button is used to validate the imported elements. The Run button is used to launch the command `./darknet detector test`.\n\n#### Processing\n\nWhen the prediction of bounding box coordinates is complete for each image, the plugin returns two elements:\n\n- A menu that presents a list of the RGB images given as input\n- A menu that presents a list of the classes given as input\n\n![Screenshot from 2023-01-24 10-33-07](https://user-images.githubusercontent.com/93375163/214257222-945ed096-49dd-4b91-aa2a-df4c43a30372.png)\n\nThe window displays the bounding boxes and the RGB image. The bounding box coordinates are taken from the JSON file produced by the `darknet detector test` command. The user can update these coordinates by deleting or adding one or more bounding boxes. From the list of classes, the user can quickly add a bounding box to the image.\n\n#### Output\n\nThe Save button allows you to obtain a JSON file. For each image, this file contains the bounding box coordinates and the class of each detected object.\n\n
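The exact layout of the exported JSON file is not documented here, so the following minimal sketch only loads it and prints its top-level structure; the file name is a placeholder and no particular schema is assumed.\n\n```python\n# Illustrative sketch (not part of manini): inspect the exported JSON file.\n# The file name is a placeholder and no specific schema is assumed, so the\n# structure is walked generically instead of relying on exact field names.\nimport json\n\nwith open(\"manini_annotations.json\") as f:  # placeholder file name\n    data = json.load(f)\n\nif isinstance(data, dict):\n    # e.g. one entry per image\n    for image_name, objects in data.items():\n        print(image_name, \"->\", objects)\nelif isinstance(data, list):\n    # e.g. one record per image or per detected object\n    for record in data[:5]:\n        print(record)\n```\n\n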
## Contributing\n\nContributions are very welcome. Tests can be run with [tox]; please ensure that the coverage at least stays the same before you submit a pull request.\n\n## License\n\nDistributed under the terms of the [BSD-3] license,\n\"manini\" is free and open source software.\n\n## Issues\n\nIf you encounter any problems, please [file an issue] along with a detailed description.\n\n[napari]: https://github.com/napari/napari\n[Cookiecutter]: https://github.com/audreyr/cookiecutter\n[@napari]: https://github.com/napari\n[MIT]: http://opensource.org/licenses/MIT\n[BSD-3]: http://opensource.org/licenses/BSD-3-Clause\n[GNU GPL v3.0]: http://www.gnu.org/licenses/gpl-3.0.txt\n[GNU LGPL v3.0]: http://www.gnu.org/licenses/lgpl-3.0.txt\n[Apache Software License 2.0]: http://www.apache.org/licenses/LICENSE-2.0\n[Mozilla Public License 2.0]: https://www.mozilla.org/media/MPL/2.0/index.txt\n[cookiecutter-napari-plugin]: https://github.com/napari/cookiecutter-napari-plugin\n\n[file an issue]: https://github.com/hereariim/manini/issues\n\n[darknet]: https://pjreddie.com/darknet/yolo/\n[tox]: https://tox.readthedocs.io/en/latest/\n[pip]: https://pypi.org/project/pip/\n[PyPI]: https://pypi.org/\n","description_content_type":"text/markdown","keywords":null,"home_page":"https://github.com/hereariim/manini","download_url":null,"author":"Herearii Metuarea","author_email":"herearii.metuarea@gmail.com","maintainer":null,"maintainer_email":null,"license":"BSD-3-Clause","classifier":["Development Status :: 2 - Pre-Alpha","Framework :: napari","Intended Audience :: Developers","License :: OSI Approved :: BSD License","Operating System :: OS Independent","Programming Language :: Python","Programming Language :: Python :: 3","Programming Language :: Python :: 3 :: Only","Programming Language :: Python :: 3.8","Programming Language :: Python :: 3.9","Programming Language :: Python :: 3.10","Topic :: Scientific/Engineering :: Image Processing"],"requires_dist":["numpy","magicgui","qtpy","napari","scikit-image","pandas","opencv-python-headless","tensorflow","PyQt5","tox ; extra == 'testing'","pytest ; extra == 'testing'","pytest-cov ; extra == 'testing'","pytest-qt ; extra == 'testing'","napari ; extra == 'testing'","pyqt5 ; extra == 'testing'","pytest-xvfb ; extra == 'testing'","numpy ; extra == 'testing'","magicgui ; extra == 'testing'","qtpy ; extra == 'testing'","scikit-image ; extra == 'testing'","pandas ; extra == 'testing'","opencv-python-headless ; extra == 'testing'","tensorflow ; extra == 'testing'"],"requires_python":">=3.8","requires_external":null,"project_url":["Bug Tracker, https://github.com/hereariim/manini/issues","Documentation, https://github.com/hereariim/manini#README.md","Source Code, https://github.com/hereariim/manini","User Support, https://github.com/hereariim/manini/issues"],"provides_extra":["testing"],"provides_dist":null,"obsoletes_dist":null},"npe1_shim":false}