5. Analysis Pipeline
Registration aligns multiple images from different cycles in a multiplex imaging experiment so that they precisely overlap. This is often done using the nuclei (DAPI) channel in each cycle.
Stitching combines overlapping images into a single, comprehensive picture. This allows for the analysis of larger structures and patterns that would not be visible in the smaller, individual images.
https://labsyspharm.github.io/ashlar/
https://github.com/labsyspharm/ashlar
- Install ASHLAR by running the command:
pip install ashlar
- On the imaging computer, you can use the Mamba environment:
mamba activate ashlar-mamba
- Create a folder with your images, one for each cycle.
- Rename your images to have the pattern:
Cycle<CYCLE#>_<COLUMN#>_<ROW#>.tif
- Example:
Cycle2_001_002.tif
- Note: The column and row numbers must be zero-padded to 3 digits.
- Run the command:
ashlar \
'filepattern|<PATH_TO_FOLDER_CYCLE1>|pattern=Cycle1_{col:03}_{row:03}.tif|overlap=0.2' \
'filepattern|<PATH_TO_FOLDER_CYCLE2>|pattern=Cycle2_{col:03}_{row:03}.tif|overlap=0.2'
- Replace
<PATH_TO_FOLDER_CYCLE#>
with the path to each image folder.
- Additional cycles can be added by appending another filepattern line.
- Adjust the overlap value if your tiles do not overlap by 0.2 (20%).
- The registered and stitched image is created as an
.ome.tiff
file in the directory where the command is run.
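The 3-digit zero padding required by the naming scheme can be generated programmatically when renaming tiles. A minimal sketch using Python format strings (the `tile_name` helper and its example values are hypothetical, not part of ASHLAR):

```python
# Build tile names matching Cycle<CYCLE#>_<COLUMN#>_<ROW#>.tif,
# with column and row zero-padded to 3 digits as ASHLAR's
# filepattern (Cycle{N}_{col:03}_{row:03}.tif) expects.
def tile_name(cycle: int, col: int, row: int) -> str:
    return f"Cycle{cycle}_{col:03d}_{row:03d}.tif"

# The example name used earlier in this section:
print(tile_name(2, 1, 2))  # Cycle2_001_002.tif
```

The same `{col:03d}` padding appears in the `pattern=` argument of the ashlar command above, so names produced this way will match it.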
(under construction)
Single-cell segmentation generates outlines for individual cells within an image using a biomarker for nuclei or cell bodies. This allows for measurement of activity for other biomarkers within individual cells.
https://cellpose.readthedocs.io/en/latest/cli.html
- Install Cellpose by running the command:
pip install cellpose
- On the imaging computer, you can use the Mamba environment:
mamba activate cellpose-mamba
- Create a folder with the .tif images that you want to segment.
- Run the command:
python -m cellpose --dir <PATH_DIR> --chan <CHAN_SEGMENT> --diameter <OBJ_DIAMETER> --save_tif
- Replace
<PATH_DIR>
with the path to the folder with the images.
- Replace
<CHAN_SEGMENT>
with the index of the image channel to be segmented.
- Replace
<OBJ_DIAMETER>
with the approximate pixel diameter of a cell/nucleus.
- Add
--use_gpu
if a GPU is available.
- Add
--pretrained_model nuclei
if segmenting nuclei.
- A masks.tif file will be created for each image in the same folder.
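Each masks.tif is a label image: background pixels are 0 and every segmented cell is filled with its own positive integer label. A minimal sketch of counting cells from such a mask, using a small hand-made array in place of a loaded file (in practice you would read masks.tif with an image library such as tifffile; that choice is an assumption, not part of Cellpose):

```python
# Toy 4x5 label mask standing in for a loaded masks.tif array:
# 0 = background, each cell has its own integer label.
mask = [
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 2],
    [3, 0, 0, 2, 2],
    [3, 3, 0, 0, 2],
]

# The number of cells is the number of distinct nonzero labels.
labels = {px for row in mask for px in row if px != 0}
print(len(labels))  # 3
```

This is also why the masks feed directly into quantification tools like MCQuant below: each label selects the pixels belonging to one cell.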
https://cellpose.readthedocs.io/en/latest/
(under construction)
Feature extraction identifies and quantifies characteristics within an image of a biological sample (e.g. tissue, single cell). These features provide a method to compare and classify different conditions in an experiment (e.g. gene knockdown, drug perturbation).
(under construction)
https://cytomining.github.io/DeepProfiler-handbook/docs/00-welcome.html
https://github.com/cytomining/DeepProfiler
DeepProfiler has a Cell Painting CNN that can be used to generate features for cell painting images. Please refer to the handbook above for instructions on installing the package, creating the project directory, and generating the features.
Once the features are generated, refer to this Jupyter Notebook in the GitHub repo to load and analyze them: /imaging-resources/5_analysis/feature-extraction/deepprofiler.ipynb
https://github.com/labsyspharm/quantification
MCQuant uses a segmentation mask to create a cell-by-marker intensity matrix for a multi-channel image. Additional features (e.g. X,Y coordinates and area) are also calculated for each cell.
- Clone the repo with the command:
git clone https://github.com/labsyspharm/quantification.git
- Create an environment with the YML file in the repo above:
conda env create -f quantification.yml
- On the imaging computer, you can use the Mamba environment:
mamba activate imaging-mamba
- Create a CSV file with the biomarker channels in each cycle. Example:
channel_number,cycle_number,marker_name
1,1,DAPI_1
2,1,example_marker_a
3,1,example_marker_b
4,1,example_marker_c
5,2,DAPI_2
6,2,example_marker_d
7,2,example_marker_e
8,2,example_marker_f
- Run the quantification with the command:
python CommandSingleCellExtraction.py \
--image /<PATH_TO_IMAGE>/image.ome.tiff \
--masks /<PATH_TO_MASK>/segmentation_mask.ome.tiff \
--channel_names /<PATH_TO_CHANNELS_CSV>/channels_cycle.csv \
--output /<PATH_TO_OUTPUT_FOLDER>
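The output is a CSV with one row per cell and one intensity column per marker, plus spatial features. A minimal sketch of loading it and averaging one marker column, using a small inline table in place of the real output (the exact column names and values here are illustrative assumptions, not MCQuant's guaranteed schema):

```python
import csv
import io

# Stand-in for the MCQuant output CSV: one row per cell, one
# mean-intensity column per marker, plus spatial features
# (column names illustrative).
raw = """CellID,DAPI_1,example_marker_a,X_centroid,Y_centroid,Area
1,120.5,30.0,10.2,44.1,95
2,98.0,55.5,80.7,12.9,110
3,101.5,41.0,33.3,60.0,87
"""

rows = list(csv.DictReader(io.StringIO(raw)))
mean_a = sum(float(r["example_marker_a"]) for r in rows) / len(rows)
print(f"{mean_a:.2f}")  # 42.17
```

For a real run, replace the inline string with `open(<PATH_TO_OUTPUT_CSV>)`.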
Batch correction reduces the variation across multiple wells/plates/datasets due to technical effects, so that the resulting variation in the data reflects true biological differences.
A sphering transform can be used to correct for batch effects by reducing the variation associated with confounders and amplifying features related to phenotypic outcomes. Control samples are used to generate the transform and the transform is applied to the entire dataset.
Please refer to this Jupyter Notebook in the GitHub: /imaging-resources/5_analysis/batch-correction/sphering-transform.ipynb
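The sphering idea can be sketched in a few lines: estimate the covariance of the control samples, take its inverse square root, and apply that linear map to every sample so the controls end up with identity covariance. A minimal pure-Python sketch for two features (the data are made up; a real pipeline, like the notebook above, would use numpy on the full feature matrix):

```python
import math

# Control-sample data with two correlated features (made-up values).
controls = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1), (5.0, 9.8)]
n = len(controls)

# Center on the control means.
mx = sum(x for x, _ in controls) / n
my = sum(y for _, y in controls) / n

# 2x2 sample covariance [[a, b], [b, c]] of the centered controls.
a = sum((x - mx) ** 2 for x, _ in controls) / (n - 1)
c = sum((y - my) ** 2 for _, y in controls) / (n - 1)
b = sum((x - mx) * (y - my) for x, y in controls) / (n - 1)

# Eigendecomposition of the symmetric 2x2 covariance.
t, d = a + c, a * c - b * b
gap = math.sqrt(max(t * t / 4 - d, 0.0))
l1, l2 = t / 2 + gap, t / 2 - gap

def unit(vx, vy):
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm

# Unit eigenvectors (b is nonzero for these data).
v1 = unit(b, l1 - a)
v2 = unit(b, l2 - a)

# ZCA sphering matrix W = V diag(1/sqrt(lambda)) V^T, i.e. the
# inverse square root of the control covariance.
w = [[sum(ev[i] * ev[j] / math.sqrt(l) for ev, l in ((v1, l1), (v2, l2)))
      for j in range(2)] for i in range(2)]

def sphere(p):
    dx, dy = p[0] - mx, p[1] - my
    return (w[0][0] * dx + w[0][1] * dy, w[1][0] * dx + w[1][1] * dy)

# Apply the control-derived transform (here to the controls themselves;
# in a real pipeline it is applied to the entire dataset).
sphered = [sphere(p) for p in controls]
var_x = sum(px * px for px, _ in sphered) / (n - 1)
print(round(var_x, 6))  # 1.0 -- sphered features have unit variance
```

Because only the controls define W, residual variation among transformed controls shrinks toward noise, while treatment-induced differences in the rest of the dataset are preserved (and relatively amplified).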
https://portals.broadinstitute.org/harmony/index.html
(under construction)
https://epigenelabs.github.io/pyComBat/
https://github.com/CoAxLab/pycombat
(under construction)
https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegressionCV.html
(under construction)
https://pytorch.org/tutorials/beginner/basics/intro.html
(under construction)