{"name":"hesperos","display_name":"Hesperos application","visibility":"public","icon":"","categories":[],"schema_version":"0.2.0","on_activate":null,"on_deactivate":null,"contributions":{"commands":[{"id":"hesperos.make_manual_segmentation_widget","title":"Make Manual Segmentation Widget","python_name":"hesperos._manual_widget:ManualSegmentationWidget","short_title":null,"category":null,"icon":null,"enablement":null},{"id":"hesperos.make_oneshot_segmentation_widget","title":"Make OneShot Segmentation Widget","python_name":"hesperos._oneshot_widget:OneShotWidget","short_title":null,"category":null,"icon":null,"enablement":null}],"readers":null,"writers":null,"widgets":[{"command":"hesperos.make_manual_segmentation_widget","display_name":"Manual Segmentation or Correction","autogenerate":false},{"command":"hesperos.make_oneshot_segmentation_widget","display_name":"OneShot Segmentation","autogenerate":false}],"sample_data":null,"themes":null,"menus":{},"submenus":null,"keybindings":null,"configuration":[]},"package_metadata":{"metadata_version":"2.1","name":"hesperos","version":"0.2.1","dynamic":null,"platform":null,"supported_platform":null,"summary":"A plugin to manually or semi-automatically segment medical data and correct previous segmentation data.","description":"
\n \n# HESPEROS PLUGIN FOR NAPARI\n\n[![License](https://img.shields.io/pypi/l/hesperos.svg?color=green)](https://github.com/DBC/hesperos/raw/main/LICENSE)\n[![PyPI](https://img.shields.io/pypi/v/hesperos.svg?color=green)](https://pypi.org/project/hesperos)\n[![Python Version](https://img.shields.io/pypi/pyversions/hesperos.svg?color=green)](https://python.org)\n[![tests](https://github.com/DBC/hesperos/workflows/tests/badge.svg)](https://github.com/DBC/hesperos/actions)\n[![codecov](https://codecov.io/gh/DBC/hesperos/branch/main/graph/badge.svg)](https://codecov.io/gh/DBC/hesperos)\n[![napari hub](https://img.shields.io/endpoint?url=https://api.napari-hub.org/shields/hesperos)](https://napari-hub.org/plugins/hesperos)\n\nA Napari plugin for pre-defined manual segmentation or semi-automatic segmentation with a one-shot learning procedure. The objective is to keep the interface as simple as possible so that the user can concentrate on the annotation task, using a pen on a tablet or a mouse on a computer. \n \nThis [napari] plugin was generated with [Cookiecutter] using [@napari]'s [cookiecutter-napari-plugin] template.\n\n \n# Table of Contents\n- [Installation and Usage](#installation-and-usage)\n * [Automatic installation](#automatic-installation)\n * [Manual installation](#manual-installation)\n * [Upgrade Hesperos version](#upgrade-hesperos-version)\n- [Hesperos: *Manual Segmentation and Correction* mode](#hesperos-manual-segmentation-and-correction-mode)\n * [Import and adjust your image](#import-and-adjust-your-image-use-panel-1)\n * [Layer controls](#layer-controls)\n * [Annotate your image](#annotate-your-image-use-panel-2)\n * [Select slices of interest](#select-slices-of-interest-use-panel-3----only-displayed-for-the-shoulder-bones-category)\n * [Export annotations](#export-annotations-use-panel-3----or-4-if-the-shoulder-bones-category-is-selected)\n- [Hesperos: *OneShot Segmentation* mode](#hesperos-oneshot-segmentation-mode)\n * [Import and adjust your image](#import-and-adjust-your-image-use-panel-1)\n * [Annotate your image](#annotate-your-image-use-panel-2)\n * [Run automatic segmentation](#run-automatic-segmentation-use-panel-3)\n * [Export annotations](#export-annotations-use-panel-4)\n\n \n# Installation and Usage\nThe Hesperos plugin is designed to run on Windows (11 or earlier) and macOS with Python 3.8 / 3.9 / 3.10.\n \n \n## Automatic installation\n1. Install [Anaconda] and unselect *Add to PATH*. Keep in mind the path where you choose to install Anaconda.\n2. Download only the *script_files* folder for [Windows](/script_files/for_Windows/) or [macOS](/script_files/for_Macos/). \n3. Add your Anaconda path to these script files:\n 1. For Windows: \n Right-click on the .bat files (for [installation](/script_files/for_Windows/install_hesperos_env.bat) and [running](/script_files/for_Windows/run_hesperos.bat)) and select *Modify*. Replace *PATH_TO_ADD* with your Anaconda path. Then save the changes.\n > for example:\n ```\n anaconda_dir=C:\\Users\\chgodard\\anaconda3\n ```\n 2. For macOS:\n 1. Right-click on the .command files (for [installation](/script_files/for_Macos/install_hesperos_env.command) and [running](/script_files/for_Macos/run_hesperos.command)) and select *Open with TextEdit*. Replace *PATH_TO_ADD* with your Anaconda path. Then save the changes.\n > for example:\n ```\n source ~/opt/anaconda3/etc/profile.d/conda.sh\n ```\n 2. In your terminal, change the permissions to allow the following .command files to be run (replace *PATH* with the path of your .command files): \n ``` \n chmod u+x PATH/install_hesperos_env.command \n chmod u+x PATH/run_hesperos.command \n ```\n4. Double-click on the **install_hesperos_env** file to create a virtual environment in Anaconda with Python 3.9 and Napari 0.4.14. \n > /!\\ The Hesperos plugin is not yet compatible with Napari versions later than 0.4.14.\n5. Double-click on the **run_hesperos** file to run Napari from your virtual environment.\n6. In Napari: \n 1. Go to *Plugins/Install Plugins...*\n 2. Search for \"hesperos\" (it can take a while to load).\n 3. Install the **hesperos** plugin.\n 4. When the installation is done, close Napari. A restart of Napari is required to finish the plugin installation.\n7. Double-click on the **run_hesperos** file to run Napari.\n8. In Napari, use the Hesperos plugin with *Plugins/hesperos* (an optional check of the installation is sketched after this list).\n
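\nTo confirm that the installation went well, you can optionally run a short check from the Python of the environment created by the installation script. This is a minimal sketch, not part of the plugin itself:\n```\n# Optional sanity check, run inside the Hesperos environment.\nimport napari\nfrom importlib.metadata import version  # Python 3.8+\n\nprint(\"napari:\", napari.__version__)     # expected: 0.4.14\nprint(\"hesperos:\", version(\"hesperos\"))  # installed plugin version\n```\n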
\n \n## Manual installation\n1. Install [Anaconda] and unselect *Add to PATH*.\n2. Open the Anaconda Prompt.\n3. Create a virtual environment with Python 3.8 / 3.9 / 3.10:\n ```\n conda create -n hesperos_env python=3.9\n ```\n4. Install the required Python packages in your virtual environment:\n ```\n conda activate hesperos_env\n conda install -c conda-forge napari=0.4.14 \n conda install -c anaconda pyqt\n pip install hesperos\n ```\n > /!\\ The Hesperos plugin is not yet compatible with napari versions later than 0.4.14.\n5. Launch Napari:\n ```\n napari\n ```\n \n## Upgrade Hesperos version\n1. Double-click on the **run_hesperos** file to run Napari. \n2. In Napari: \n 1. Go to *Plugins/Install Plugins...*\n 2. Search for \"hesperos\" (it can take a while to load).\n 3. Click on *Update* if a new version of Hesperos has been found. You can check the latest version of Hesperos on the [Napari Hub](https://www.napari-hub.org/plugins/hesperos).\n 4. When the installation is done, close Napari. A restart of Napari is required to finish the plugin installation.\n \n \n# Hesperos: *Manual Segmentation and Correction* mode\n \n The ***Manual Segmentation and Correction*** mode of the Hesperos plugin is a simplified and optimized interface to do basic 2D manual segmentation of several structures in a 3D image using a mouse, or a stylus with a tablet.\n\n \n \n \n## Import and adjust your image *(use Panel 1)*\nThe Hesperos plugin can be used with Digital Imaging and Communications in Medicine (DICOM), Neuroimaging Informatics Technology Initiative (NIfTI) or Tagged Image File Format (TIFF) images. To improve performance, use images stored on your local disk.\n\n1. To import data:\n - use the button for *(.tiff, .tif, .nii or .nii.gz)* image files.\n - use the button for a DICOM series. /!\\ Folders containing multiple DICOM series are not supported. \n2. After the image has loaded, a slider appears that allows you to zoom in/out: . Zooming is also possible with the button in the layer controls panel. \n3. If your data is a DICOM series, you can directly change the contrast of the image (according to Hounsfield units; a console sketch is given after this list):\n - by choosing one of the two predefined contrasts: *CT bone* or *CT Soft* in .\n - by creating a custom default contrast with the button and selecting *Custom Contrast*. Settings can be exported as a .json file with the button.\n - by loading a saved default contrast with the button and selecting *Custom Contrast*.\n4. In the bottom left corner of the application you can also: \n - : change the order of the visible axes (for example, switch to the sagittal, axial or coronal plane).\n - : transpose the 3D image on the currently displayed axis.\n
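\nThe same contrast adjustment can also be made from napari's built-in console. Below is a minimal sketch: the layer name `image` is the one created by the plugin, while the Hounsfield window values are a typical bone window given only as an example, not necessarily the plugin's *CT bone* preset.\n```\nimport napari\n\n# In the napari console a `viewer` variable already exists; elsewhere, grab it explicitly.\nviewer = napari.current_viewer()\nimage_layer = viewer.layers[\"image\"]        # layer created when the data is loaded\nimage_layer.contrast_limits = (-450, 1050)  # ~ bone window: level 300 HU, width 1500 HU\n```\n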
\n\n## Layer controls\n\nWhen data is loaded, two layers are created: the *`image`* layer and the *`annotations`* layer. The order in the layer list corresponds to the overlay order. By clicking on these layers you will have access to different layer controls (at the top left corner of the application). All actions can be undone/redone with the Ctrl-Z/Shift-Ctrl-Z keyboard shortcuts. You can also hide a layer by clicking on its eye icon in the layer list.\n \n \nFor the *image* layer:\n- *`opacity`*: a slider to control the global opacity of the layer.\n- *`contrast limits`*: a double slider to manually control the contrast of the image (same as the option for DICOM data).\n \n\nFor the *annotations* layer:\n- : erase brush to erase all labels at once (if *`preserve labels`* is not selected) or only the selected label (if *`preserve labels`* is selected).\n- : paint brush with the same color as the *`label`* rectangle.\n- : fill bucket with the same color as the *`label`* rectangle.\n- : select to zoom in and out with the mouse wheel (same as the zoom slider at the top right corner in Panel 1).\n- *`label`*: a colored rectangle representing the selected label. \n- *`opacity`*: a slider to control the global opacity of the layer. \n- *`brush size limits`*: a slider to control the size of the paint/erase brush. \n- *`preserve labels`*: if selected, all actions are applied only to the selected label (see the *`label`* rectangle); if not selected, actions are applied to all labels.\n- *`show selected`*: if selected, only the selected label is displayed on the layer; if not selected, all labels are displayed.\n \n \n>*Remark*: a second filling option has been added:\n>1. Draw the edge of a closed shape with the paint brush mode. \n>2. Double click to activate the fill bucket. \n>3. Click inside the closed area to fill it. \n>4. Double click on the filled area to deactivate the fill bucket and reactivate the paint brush mode.\n \n\n## Annotate your image *(use Panel 2)*\n \nManual annotation and correction of the segmented file are done using the layer controls of the *`annotations`* layer. Click on the layer to display them. /!\\ You have to choose a structure to start annotating *(see 2.)*.\n1. To modify an existing segmentation, you can directly open the segmented file with the button. The file needs to have the same dimensions as the original image (a dimension check is sketched after this list). \n > /!\\ Only .tiff, .tif, .nii and .nii.gz files are supported as segmented files. \n \n2. Choose a structure to annotate in the drop-down menu:\n - *`Fetus`*: to annotate pregnancy images.\n - *`Shoulder`*: to annotate bones and muscles for shoulder surgery.\n - *`Shoulder Bones`*: to annotate only a few bones for shoulder surgery.\n - *`Feta Challenge`*: to annotate fetal brain MRI with the same labels as the FeTA Challenge (see ADD LIEN WEB).\n \n> When selecting a structure, a new panel appears with a list of elements to annotate. Each element has its own label and color. Select one element in the list to automatically activate the paint brush mode with the corresponding color (the color is updated in the *`label`* rectangle in the layer controls panel).\n \n3. All actions can be undone with the button or Ctrl-Z.\n \n4. If you need to work on a specific slice of your 3D image, but also have to explore the volume to understand some complex structures, you can use the locking option to facilitate the annotation task.\n - To activate the functionality: \n 1. Go to the slice of interest.\n 2. Click on the button => the button changes to and the slice index is saved.\n 3. Scroll along the z-axis to explore the data (with the mouse wheel or the slider under the image).\n 4. To go back to your slice of interest, click on the button.\n - To deactivate the functionality (or change the locked slice index): \n 1. Go to the locked slice.\n 2. Click on the button => the button changes to and the slice is \"unlocked\".\n
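\nBefore re-opening a previous segmentation, it can be useful to verify that its dimensions match the original image. Below is a minimal sketch using two of the plugin's dependencies (`tifffile` and `SimpleITK`); the file names are only examples:\n```\nimport tifffile\nimport SimpleITK as sitk\n\nimage = tifffile.imread(\"ct_volume.tif\")  # original 3D image\nseg = sitk.GetArrayFromImage(sitk.ReadImage(\"segmentation.nii.gz\"))  # previous segmentation\n\nprint(image.shape, seg.shape)  # both shapes must be identical, e.g. (Z, Y, X)\nassert image.shape == seg.shape, \"segmentation dimensions do not match the image\"\n```\n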
\n\n## Select slices of interest *(use Panel 3 -- only displayed for the Shoulder Bones category)*\n\nThis panel is only displayed if the *`Shoulder Bones`* category is selected. A maximum of 10 slices can be selected in a 3D image; the corresponding z-indexes will be included in the metadata when the segmentation file is exported.\n \n > /!\\ Metadata integration is available only for exported .tiff and .tif files and with the *`Unique`* save option. \n\n- : to add the currently displayed z-index to the drop-down menu.\n- : to remove the currently displayed z-index from the drop-down menu.\n- : to go to the z-index selected in the drop-down menu. The icon will be checked when the currently displayed z-index matches the selected z-index in the drop-down menu.\n- : a drop-down menu containing the list of selected z-indexes. Select a z-index from the list to work with it more easily.\n\n\n## Export annotations *(use Panel 3 -- or 4 if the Shoulder Bones category is selected)*\n \n1. Annotations can be exported as a .tif, .tiff, .nii or .nii.gz file with the button in one of the two following saving modes:\n - *`Unique`*: segmented data is exported as a unique 3D image with the corresponding label ids (1-2-3-...). This file can be re-opened in the application.\n - *`Several`*: segmented data is exported as several binary 3D images (0 or 255), one for each label id.\n2. : delete annotation data.\n3. *`Automatic segmentation backup`*: if selected, the segmentation data will be automatically exported as a unique 3D image whenever the displayed image slice is changed.\n > /!\\ This process can slow down the display if the image is large.\n\n# Hesperos: *OneShot Segmentation* mode\n \n The ***OneShot Segmentation*** mode of the Hesperos plugin is a 2D version of the VoxelLearning method implemented in DIVA (see [our GitHub](https://github.com/DecBayComp/VoxelLearning) and the latest article [Guérinot, C., Marcon, V., Godard, C., et al. (2022). New Approach to Accelerated Image Annotation by Leveraging Virtual Reality and Cloud Computing. _Frontiers in Bioinformatics_. doi:10.3389/fbinf.2021.777101](https://www.frontiersin.org/articles/10.3389/fbinf.2021.777101/full)).\n \n\nThe principle is to accelerate the segmentation without prior information. The procedure consists of:\n1. A **rapid tagging** of a few pixels in the image with two labels: one for the structure of interest (named positive tags) and one for the other structures (named negative tags).\n2. A **training** of a simple random forest classifier with these tagged pixels and their features (mean, Gaussian, ...); a small sketch of this idea is given after this list.\n3. An **inference** over all the pixels of the image to automatically segment the structure of interest. The output is a probability image (0-255) of belonging to a specific class.\n4. Iterative corrections if needed.\n
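\n \nThe idea can be pictured with a small scikit-learn sketch. This is only an illustration of the principle on one 2D slice with made-up data, not the plugin's actual implementation:\n```\nimport numpy as np\nfrom skimage.filters import gaussian\nfrom sklearn.ensemble import RandomForestClassifier\n\ndef pixel_features(img):\n    # Simple per-pixel features: raw intensity plus Gaussian-smoothed intensities.\n    return np.stack([img, gaussian(img, 1), gaussian(img, 3)], axis=-1)\n\nimg = np.random.rand(128, 128)        # stand-in for one image slice\ntags = np.zeros_like(img, dtype=int)  # 0 = untagged, 1 = structure of interest, 2 = other\ntags[60:68, 60:68] = 1                # a few positive tags\ntags[5:10, 5:40] = 2                  # a few negative tags\n\nfeats = pixel_features(img)\nX, y = feats[tags > 0], tags[tags > 0]  # only tagged pixels are used for training\nclf = RandomForestClassifier(n_estimators=50).fit(X, y)\n\n# Inference on every pixel: probability of belonging to the structure of interest (class 1).\nproba = clf.predict_proba(feats.reshape(-1, feats.shape[-1]))[:, 0].reshape(img.shape)\n```\n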
\n\n \n## Import and adjust your image *(use Panel 1)*\n \nSame panel as in the *Manual Segmentation and Correction* mode *(see [panel 1 description](#import-and-adjust-your-image-use-panel-1))*.\n \n \n## Annotate your image *(use Panel 2)*\n \nAnnotations and corrections of the segmented file are done using the layer controls of the *`annotations`* layer. Click on the layer to display them. Only two labels are available: *`Structure of interest`* and *`Other`*. \n\nThe rapid manual tagging step of the one-shot learning method provides the examples from which the classifier learns the features associated with each label.\n \nTo achieve this, the user has to:\n- with the label *`Structure of interest`*, tag a few pixels of the structure of interest.\n- with the label *`Other`*, tag the greatest diversity of uninteresting structures in the 3D image (avoid tagging too many pixels).\n\n> see the example image with the *`Structure of interest`* label in red and the *`Other`* label in cyan.\n \n1. To modify an existing segmentation, you can directly open the segmented file with the button. The file needs to have the same dimensions as the original image. \n > /!\\ Only .tiff, .tif, .nii and .nii.gz files are supported as segmented files. \n2. All actions can be undone with the button or Ctrl-Z.\n\n \n## Run automatic segmentation *(use Panel 3)*\n\nFrom the previously tagged pixels, features are extracted and used to train a basic classifier: a Random Forest Classifier (RFC). When the training of the pixel classifier is done, it is applied to every pixel of the complete volume and outputs, for each pixel, the probability of belonging to the structure of interest.\n\nTo run training and inference, click on the button:\n1. You will be asked to save a .pckl file, which corresponds to the trained model.\n2. A new status will appear under *Panel 4*: *`Computing...`*. You must wait for the message to change to *`Ready`* before doing anything else in the application (otherwise the application may freeze or crash).\n3. When the processing is done, two new layers will appear:\n - the *`probabilities`* layer, which corresponds to the direct probability (between 0 and 1) that a pixel belongs to the structure of interest. This layer is hidden by default; to show it, click on its eye icon in the layer list.\n - the *`segmented probabilities`* layer, which corresponds to a binary image obtained from the normalized probability image thresholded at the value set with the *`Probability threshold`* slider: .\n\n>Remark: If the output is not perfect, you have two ways to improve the result:\n>1. Add some tags with the paint brush to take uninteresting structures into consideration, or to add information in critical areas of your structure of interest (such as thin sections). Then run the training and inference process again. /!\\ This will overwrite all previous segmentation data.\n>2. Export your segmentation data and re-open it with the *Manual Segmentation and Correction* mode of Hesperos to manually erase or add annotations.\n \n \n## Export annotations *(use Panel 4)*\n \n1. Segmented probabilities can be exported as a .tif, .tiff, .nii or .nii.gz file with the button. The image is exported as a single binary 3D image (values 0 and 255). This file can be re-opened in the application for correction.\n2. Probabilities can be exported as a .tif, .tiff, .nii or .nii.gz file with the button, as a single 3D image. The probability image is normalized between 0 and 255 (see the sketch after this list).\n3. : delete annotation data.\n
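\nThe relation between the two exported files can be reproduced outside the plugin, for example to re-threshold an exported probability image. A minimal sketch (the file names and the 0.5 threshold are only examples):\n```\nimport numpy as np\nimport tifffile\n\nproba = tifffile.imread(\"probabilities.tif\").astype(float)  # exported probabilities, 0-255\nthreshold = 0.5                                             # value chosen with the slider\nbinary = np.where(proba / 255.0 >= threshold, 255, 0).astype(np.uint8)\ntifffile.imwrite(\"segmented_probabilities.tif\", binary)     # binary 0/255 volume\n```\n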
\n\n# License\n\nDistributed under the terms of the [BSD-3] license, **Hesperos** is free and open source software.\n\n \n[napari]: https://github.com/napari/napari\n[Cookiecutter]: https://github.com/audreyr/cookiecutter\n[@napari]: https://github.com/napari\n[BSD-3]: http://opensource.org/licenses/BSD-3-Clause\n[cookiecutter-napari-plugin]: https://github.com/napari/cookiecutter-napari-plugin\n\n[tox]: https://tox.readthedocs.io/en/latest/\n[pip]: https://pypi.org/project/pip/\n[PyPI]: https://pypi.org/\n[Anaconda]: https://www.anaconda.com/products/distribution#Downloads\n[VoxelLearning]: https://github.com/DecBayComp/VoxelLearning\n","description_content_type":"text/markdown","keywords":null,"home_page":"https://github.com/chgodard/hesperos","download_url":null,"author":"Charlotte Godard","author_email":"charlotte.godard@pasteur.fr","maintainer":null,"maintainer_email":null,"license":"BSD-3-Clause","classifier":["Development Status :: 2 - Pre-Alpha","Intended Audience :: Developers","Framework :: napari","Programming Language :: Python","Programming Language :: Python :: 3","Programming Language :: Python :: 3.8","Programming Language :: Python :: 3.9","Programming Language :: Python :: 3.10","Operating System :: MacOS :: MacOS X","Operating System :: Microsoft :: Windows","License :: OSI Approved :: BSD License"],"requires_dist":["numpy","qtpy","tifffile","scikit-image","scikit-learn","SimpleITK","pandas","napari (<0.4.15)","napari-plugin-engine","imageio-ffmpeg","tox ; extra == 'testing'","pytest ; extra == 'testing'","pytest-cov ; extra == 'testing'","pytest-qt ; extra == 'testing'","napari ; extra == 'testing'","pyqt5 ; extra == 'testing'"],"requires_python":">=3.8","requires_external":null,"project_url":["Documentation, https://github.com/chgodard/hesperos/blob/main/README.md","Source Code, https://github.com/chgodard/hesperos"],"provides_extra":["testing"],"provides_dist":null,"obsoletes_dist":null},"npe1_shim":false}