
Project flow#

LaminDB allows tracking data lineage at the level of an entire project.

Here, we walk through example app uploads, pipelines & notebooks following Schmidt et al., 2022.

A CRISPR screen reading out a phenotypic endpoint on T cells is paired with scRNA-seq to generate insights into IFN-γ production.

These insights get linked back to the original data through the steps taken in the project to provide context for interpretation & future decision making.

More specifically: Why should I care about data flow?

Data flow tracks data sources & transformations to trace biological insights, verify experimental outcomes, meet regulatory standards, increase the robustness of research and optimize the feedback loop of team-wide learning iterations.

While tracking data flow is easier when it’s governed by deterministic pipelines, it becomes hard when it’s governed by interactive human-driven analyses.

LaminDB interfaces with workflow managers for the former and embraces the latter.

Setup#

Init a test instance:

!lamin init --storage ./mydata
✅ saved: User(uid='DzTjkKse', handle='testuser1', name='Test User1', updated_at=2024-01-18 22:14:41 UTC)
✅ saved: Storage(uid='TIz8T3Tz', root='/home/runner/work/lamin-usecases/lamin-usecases/docs/mydata', type='local', updated_at=2024-01-18 22:14:41 UTC, created_by_id=1)
💡 loaded instance: testuser1/mydata
💡 did not register local instance on hub

Import lamindb:

import lamindb as ln
from IPython.display import Image, display
💡 lamindb instance: testuser1/mydata

Steps#

In the following, we walk through example steps covering different types of transforms (Transform).

Note

The full notebooks are in this repository.

App upload of phenotypic data #

Register data through app upload from the wetlab by testuser1:

# This function mimics the upload of artifacts via the UI
# In reality, you simply drag and drop files into the UI
def run_upload_crispra_result_app():
    ln.setup.login("testuser1")
    transform = ln.Transform(name="Upload GWS CRISPRa result", type="app")
    ln.track(transform)
    output_path = ln.dev.datasets.schmidt22_crispra_gws_IFNG(ln.settings.storage)
    output_file = ln.Artifact(
        output_path, description="Raw data of schmidt22 crispra GWS"
    )
    output_file.save()


run_upload_crispra_result_app()
💡 saved: Transform(uid='LLBn1jrnHdMWOPLQ', name='Upload GWS CRISPRa result', type='app', updated_at=2024-01-18 22:14:43 UTC, created_by_id=1)
💡 saved: Run(uid='VBTOQa2FR5I43xEOQ3ZB', run_at=2024-01-18 22:14:43 UTC, transform_id=1, created_by_id=1)

Hit identification in notebook #

Access, transform & register data in the drylab by testuser2:

def run_hit_identification_notebook():
    # log in as another user
    ln.setup.login("testuser2")

    # create a new transform to mimic a new notebook (in reality you just run ln.track() in a notebook)
    transform = ln.Transform(name="GWS CRIPSRa analysis", type="notebook")
    ln.track(transform)

    # access the upload artifact
    input_file = ln.Artifact.filter(key="schmidt22-crispra-gws-IFNG.csv").one()

    # identify hits
    input_df = input_file.load().set_index("id")
    output_df = input_df[input_df["pos|fdr"] < 0.01].copy()

    # register hits in output artifact
    ln.Artifact(output_df, description="hits from schmidt22 crispra GWS").save()


run_hit_identification_notebook()
💡 saved: Transform(uid='TCjNjrfZkEIaaReb', name='GWS CRIPSRa analysis', type='notebook', updated_at=2024-01-18 22:14:45 UTC, created_by_id=1)
💡 saved: Run(uid='h1ZRE32b6hhVWbUEiW57', run_at=2024-01-18 22:14:45 UTC, transform_id=2, created_by_id=1)

Inspect data flow:

artifact = ln.Artifact.filter(description="hits from schmidt22 crispra GWS").one()
artifact.view_lineage()
(output: data lineage graph of the hits artifact)

Sequencer upload #

Upload files from sequencer:

def run_upload_from_sequencer_pipeline():
    ln.setup.login("testuser1")

    # create a pipeline transform
    ln.track(ln.Transform(name="Chromium 10x upload", type="pipeline"))
    # register output files of the sequencer
    upload_dir = ln.dev.datasets.dir_scrnaseq_cellranger(
        "perturbseq", basedir=ln.settings.storage, output_only=False
    )
    ln.Artifact(upload_dir.parent / "fastq/perturbseq_R1_001.fastq.gz").save()
    ln.Artifact(upload_dir.parent / "fastq/perturbseq_R2_001.fastq.gz").save()


run_upload_from_sequencer_pipeline()
💡 saved: Transform(uid='PmUmTqJHlmP9ivY3', name='Chromium 10x upload', type='pipeline', updated_at=2024-01-18 22:14:47 UTC, created_by_id=1)
💡 saved: Run(uid='IGvXjVd4W0pjA8enPMYN', run_at=2024-01-18 22:14:47 UTC, transform_id=3, created_by_id=1)

scRNA-seq bioinformatics pipeline #

Process the uploaded files using a script or workflow manager (see Pipelines) and obtain 3 output files in a directory filtered_feature_bc_matrix/:

def run_scrna_analysis_pipeline():
    ln.setup.login("testuser2")
    transform = ln.Transform(name="Cell Ranger", version="7.2.0", type="pipeline")
    ln.track(transform)
    # access uploaded files as inputs for the pipeline
    input_artifacts = ln.Artifact.filter(key__startswith="fastq/perturbseq").all()
    input_paths = [artifact.stage() for artifact in input_artifacts]
    # register output files
    output_artifacts = ln.Artifact.from_dir(
        "./mydata/perturbseq/filtered_feature_bc_matrix/"
    )
    ln.save(output_artifacts)

    # Post-process these 3 files
    transform = ln.Transform(
        name="Postprocess Cell Ranger", version="2.0", type="pipeline"
    )
    ln.track(transform)
    input_artifacts = [f.stage() for f in output_artifacts]
    output_path = ln.dev.datasets.schmidt22_perturbseq(basedir=ln.settings.storage)
    output_file = ln.Artifact(output_path, description="perturbseq counts")
    output_file.save()


run_scrna_analysis_pipeline()
💡 saved: Transform(uid='ubFzqqTVNV3p9HxB', name='Cell Ranger', version='7.2.0', type='pipeline', updated_at=2024-01-18 22:14:48 UTC, created_by_id=1)
💡 saved: Run(uid='UbHrk4ZZwYKE1C49wlK5', run_at=2024-01-18 22:14:48 UTC, transform_id=4, created_by_id=1)
❗ this creates one artifact per file in the directory - you might simply call ln.Artifact(dir) to get one artifact for the entire directory
💡 saved: Transform(uid='5HahDv2IyPwG3Wkx', name='Postprocess Cell Ranger', version='2.0', type='pipeline', updated_at=2024-01-18 22:14:49 UTC, created_by_id=1)
💡 saved: Run(uid='dxFlKzVG3L9kZytgaWLw', run_at=2024-01-18 22:14:49 UTC, transform_id=5, created_by_id=1)

Inspect data flow:

output_file = ln.Artifact.filter(description="perturbseq counts").one()
output_file.view_lineage()
(output: data lineage graph of the perturbseq counts artifact)

Integrate scRNA-seq & phenotypic data #

Integrate data in a notebook:

def run_integrated_analysis_notebook():
    import scanpy as sc

    # create a new transform to mimic a new notebook (in reality you just run ln.track() in a notebook)
    transform = ln.Transform(
        name="Perform single cell analysis, integrate with CRISPRa screen",
        type="notebook",
    )
    ln.track(transform)

    # access the output files of bfx pipeline and previous analysis
    file_ps = ln.Artifact.filter(description__icontains="perturbseq").one()
    adata = file_ps.load()
    file_hits = ln.Artifact.filter(description="hits from schmidt22 crispra GWS").one()
    screen_hits = file_hits.load()

    # perform analysis and register output plot files
    sc.tl.score_genes(adata, adata.var_names.intersection(screen_hits.index).tolist())
    filesuffix = "_fig1_score-wgs-hits.png"
    sc.pl.umap(adata, color="score", show=False, save=filesuffix)
    filepath = f"figures/umap{filesuffix}"
    artifact = ln.Artifact(filepath, key=filepath)
    artifact.save()
    filesuffix = "fig2_score-wgs-hits-per-cluster.png"
    sc.pl.matrixplot(
        adata, groupby="cluster_name", var_names=["score"], show=False, save=filesuffix
    )
    filepath = f"figures/matrixplot_{filesuffix}"
    artifact = ln.Artifact(filepath, key=filepath)
    artifact.save()


run_integrated_analysis_notebook()
💡 saved: Transform(uid='9Y25KqNyZgq0Tqg1', name='Perform single cell analysis, integrate with CRISPRa screen', type='notebook', updated_at=2024-01-18 22:14:51 UTC, created_by_id=1)
💡 saved: Run(uid='jQESbyD3x9WtCKpmJZmE', run_at=2024-01-18 22:14:51 UTC, transform_id=6, created_by_id=1)
WARNING: saving figure to file figures/umap_fig1_score-wgs-hits.png
WARNING: saving figure to file figures/matrixplot_fig2_score-wgs-hits-per-cluster.png

Review results#

Let’s load one of the plots:

# track the current notebook as transform
ln.track()
artifact = ln.Artifact.filter(key__contains="figures/matrixplot").one()
artifact.stage()
💡 notebook imports: ipython==8.20.0 lamindb==0.67.2 scanpy==1.9.6
💡 saved: Transform(uid='1LCd8kco9lZU6K79', name='Project flow', short_name='project-flow', version='0', type=notebook, updated_at=2024-01-18 22:14:53 UTC, created_by_id=1)
💡 saved: Run(uid='Cqym0Nqct1xEDQwlunOX', run_at=2024-01-18 22:14:53 UTC, transform_id=7, created_by_id=1)
PosixUPath('/home/runner/work/lamin-usecases/lamin-usecases/docs/mydata/.lamindb/Omt2cBB3ukcxZuWgOvSL.png')
display(Image(filename=artifact.path))
(output: matrix plot of screen-hit scores per cluster)

We see that the image artifact is tracked as an input of the current notebook. The input is highlighted; the notebook follows at the bottom:

artifact.view_lineage()
(output: data lineage graph with the input artifact highlighted and the current notebook at the bottom)

Alternatively, we can look at the sequence of transforms:

transform = ln.Transform.search("Bird's eye view", return_queryset=True).first()
transform.parents.df()
uid name short_name version type latest_report_id source_code_id reference reference_type created_at updated_at created_by_id
id
4 ubFzqqTVNV3p9HxB Cell Ranger None 7.2.0 pipeline None None None None 2024-01-18 22:14:48.918887+00:00 2024-01-18 22:14:48.918908+00:00 1
transform.view_parents()
(output: graph of parent transforms)

Understand runs#

We tracked pipeline and notebook runs through run_context, which stores a Transform and a Run record as a global context.

Artifact objects are the inputs and outputs of runs.
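As a toy illustration of this bookkeeping (plain Python dataclasses, not lamindb internals — names like `register_output` are made up for the sketch), a transform describes a piece of code, a run is one execution of it held as a global context, and artifacts attach to that run as outputs:

```python
from dataclasses import dataclass, field

@dataclass
class Transform:
    name: str
    type: str

@dataclass
class Run:
    transform: Transform
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

# a global "current run" context, analogous to what ln.track() establishes
current_run = Run(Transform(name="GWS CRISPRa analysis", type="notebook"))

def register_output(artifact):
    """Attach an artifact to the current run as an output (toy stand-in for Artifact.save())."""
    current_run.outputs.append(artifact)
    return artifact

register_output("hits.parquet")
assert current_run.outputs == ["hits.parquet"]
```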

What if I don’t want a global context?

Sometimes, we don’t want to create a global run context but instead manually pass a run when creating an artifact:

run = ln.Run(transform=transform)
ln.Artifact(filepath, run=run)
When does an artifact appear as a run input?

When accessing an artifact via stage(), load() or backed(), two things happen:

  1. The current run gets added to artifact.input_of

  2. The transform of that artifact gets added as a parent of the current transform
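These two bookkeeping steps can be sketched in plain Python (a conceptual model only, not lamindb's actual implementation; the class and function names below are illustrative):

```python
# Toy model of run-input tracking: loading an artifact inside a run
# (1) records the artifact as an input of the current run, and
# (2) links the artifact's producing transform as a parent of the current transform.

class Transform:
    def __init__(self, name):
        self.name = name
        self.parents = set()

class Artifact:
    def __init__(self, name, transform):
        self.name = name
        self.transform = transform  # the transform that produced this artifact
        self.input_of = []          # runs that consumed this artifact

class Run:
    def __init__(self, transform):
        self.transform = transform

def load(artifact, current_run, track_run_inputs=True):
    if track_run_inputs:
        artifact.input_of.append(current_run)                  # step 1
        current_run.transform.parents.add(artifact.transform)  # step 2
    return f"data of {artifact.name}"

upstream = Transform("Cell Ranger")
counts = Artifact("perturbseq counts", upstream)
run = Run(Transform("integrate analysis"))
load(counts, run)
assert run in counts.input_of
assert upstream in run.transform.parents
```

Setting `track_run_inputs=False` in this sketch mirrors the effect of disabling input tracking globally.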

You can switch off auto-tracking of run inputs by setting ln.settings.track_run_inputs = False (see: Can I disable tracking run inputs?).

You can also track run inputs on a case-by-case basis by passing is_run_input=True, e.g.:

artifact.load(is_run_input=True)

Query by provenance#

We can query or search for the notebook that created the artifact:

transform = ln.Transform.search("GWS CRIPSRa analysis", return_queryset=True).first()

And then find all the artifacts created by that notebook:

ln.Artifact.filter(transform=transform).df()
uid storage_id key suffix accessor description version size hash hash_type n_objects n_observations transform_id run_id visibility key_is_virtual created_at updated_at created_by_id
id
2 Xa9tlkhKUZZBCR3zr487 1 None .parquet DataFrame hits from schmidt22 crispra GWS None 18368 q54ULUuKxw3LglyzMVYZ8Q md5 None None 2 2 1 True 2024-01-18 22:14:46.140501+00:00 2024-01-18 22:14:46.140527+00:00 1

Which transform ingested a given artifact?

artifact = ln.Artifact.filter().first()
artifact.transform
Transform(uid='LLBn1jrnHdMWOPLQ', name='Upload GWS CRISPRa result', type='app', updated_at=2024-01-18 22:14:43 UTC, created_by_id=1)

And which user?

artifact.created_by
User(uid='DzTjkKse', handle='testuser1', name='Test User1', updated_at=2024-01-18 22:14:47 UTC)

Which transforms were created by a given user?

users = ln.User.lookup()
ln.Transform.filter(created_by=users.testuser2).df()
uid name short_name version type reference reference_type created_at updated_at latest_report_id source_code_id created_by_id
id

Which notebooks were created by a given user?

ln.Transform.filter(created_by=users.testuser2, type="notebook").df()
uid name short_name version type reference reference_type created_at updated_at latest_report_id source_code_id created_by_id
id

We can also view all recent additions to the entire database:

ln.view()
Artifact
uid storage_id key suffix accessor description version size hash hash_type n_objects n_observations transform_id run_id visibility key_is_virtual created_at updated_at created_by_id
id
10 Omt2cBB3ukcxZuWgOvSL 1 figures/matrixplot_fig2_score-wgs-hits-per-clu... .png None None None 28814 ijpft7zAYShlKDXYYAD4hw md5 None None 6 6 1 True 2024-01-18 22:14:52.765443+00:00 2024-01-18 22:14:52.765467+00:00 1
9 VD64ECsI0GTSL6FrTqLS 1 figures/umap_fig1_score-wgs-hits.png .png None None None 118999 74WuaFnZeoMTvSpY--lbrA md5 None None 6 6 1 True 2024-01-18 22:14:52.546567+00:00 2024-01-18 22:14:52.546591+00:00 1
8 Z2zjKwf9jSxo5EwYclAb 1 schmidt22_perturbseq.h5ad .h5ad AnnData perturbseq counts None 20659936 la7EvqEUMDlug9-rpw-udA md5 None None 5 5 1 False 2024-01-18 22:14:50.980125+00:00 2024-01-18 22:14:50.980155+00:00 1
7 sN3HR5QQwgkxb5OZGber 1 perturbseq/filtered_feature_bc_matrix/barcodes... .tsv.gz None None None 6 a6rIsXzn-cEdEbMSsSuMVQ md5 None None 4 4 1 False 2024-01-18 22:14:49.373965+00:00 2024-01-18 22:14:49.373983+00:00 1
6 G0VjVBj0sMyGVdAgYA2L 1 perturbseq/filtered_feature_bc_matrix/matrix.m... .mtx.gz None None None 6 rkZOqj5JLIdSFzkAe0WVYQ md5 None None 4 4 1 False 2024-01-18 22:14:49.373355+00:00 2024-01-18 22:14:49.373373+00:00 1
5 btU8X69P2QxcYe9hXaZU 1 perturbseq/filtered_feature_bc_matrix/features... .tsv.gz None None None 6 Z1q2Gl5aAWx1sbndjJEYUQ md5 None None 4 4 1 False 2024-01-18 22:14:49.372620+00:00 2024-01-18 22:14:49.372639+00:00 1
4 vaDiEduFrEDuLg1qQlJV 1 fastq/perturbseq_R2_001.fastq.gz .fastq.gz None None None 6 3ks-hN6e61PR2W4MftbxmA md5 None None 3 3 1 False 2024-01-18 22:14:47.715488+00:00 2024-01-18 22:14:47.715507+00:00 1
Run
uid transform_id run_at created_by_id report_id environment_id is_consecutive reference reference_type created_at
id
1 VBTOQa2FR5I43xEOQ3ZB 1 2024-01-18 22:14:43.415810+00:00 1 None None None None None 2024-01-18 22:14:43.415971+00:00
2 h1ZRE32b6hhVWbUEiW57 2 2024-01-18 22:14:45.680898+00:00 1 None None None None None 2024-01-18 22:14:45.681025+00:00
3 IGvXjVd4W0pjA8enPMYN 3 2024-01-18 22:14:47.292013+00:00 1 None None None None None 2024-01-18 22:14:47.292090+00:00
4 UbHrk4ZZwYKE1C49wlK5 4 2024-01-18 22:14:48.923352+00:00 1 None None None None None 2024-01-18 22:14:48.923427+00:00
5 dxFlKzVG3L9kZytgaWLw 5 2024-01-18 22:14:49.386976+00:00 1 None None None None None 2024-01-18 22:14:49.387048+00:00
6 jQESbyD3x9WtCKpmJZmE 6 2024-01-18 22:14:51.900079+00:00 1 None None None None None 2024-01-18 22:14:51.900168+00:00
7 Cqym0Nqct1xEDQwlunOX 7 2024-01-18 22:14:53.136122+00:00 1 None None None None None 2024-01-18 22:14:53.136201+00:00
Storage
uid root description type region created_at updated_at created_by_id
id
1 TIz8T3Tz /home/runner/work/lamin-usecases/lamin-usecase... None local None 2024-01-18 22:14:41.248337+00:00 2024-01-18 22:14:41.248365+00:00 1
Transform
uid name short_name version type latest_report_id source_code_id reference reference_type created_at updated_at created_by_id
id
7 1LCd8kco9lZU6K79 Project flow project-flow 0 notebook None None None None 2024-01-18 22:14:53.133125+00:00 2024-01-18 22:14:53.133152+00:00 1
6 9Y25KqNyZgq0Tqg1 Perform single cell analysis, integrate with C... None None notebook None None None None 2024-01-18 22:14:51.894670+00:00 2024-01-18 22:14:51.894699+00:00 1
5 5HahDv2IyPwG3Wkx Postprocess Cell Ranger None 2.0 pipeline None None None None 2024-01-18 22:14:49.383889+00:00 2024-01-18 22:14:49.383910+00:00 1
4 ubFzqqTVNV3p9HxB Cell Ranger None 7.2.0 pipeline None None None None 2024-01-18 22:14:48.918887+00:00 2024-01-18 22:14:48.918908+00:00 1
3 PmUmTqJHlmP9ivY3 Chromium 10x upload None None pipeline None None None None 2024-01-18 22:14:47.288940+00:00 2024-01-18 22:14:47.288961+00:00 1
2 TCjNjrfZkEIaaReb GWS CRIPSRa analysis None None notebook None None None None 2024-01-18 22:14:45.676185+00:00 2024-01-18 22:14:45.676204+00:00 1
1 LLBn1jrnHdMWOPLQ Upload GWS CRISPRa result None None app None None None None 2024-01-18 22:14:43.411677+00:00 2024-01-18 22:14:43.411695+00:00 1
User
uid handle name created_at updated_at
id
2 bKeW4T6E testuser2 Test User2 2024-01-18 22:14:45.668909+00:00 2024-01-18 22:14:48.909503+00:00
1 DzTjkKse testuser1 Test User1 2024-01-18 22:14:41.244649+00:00 2024-01-18 22:14:47.281171+00:00
!lamin login testuser1
!lamin delete --force mydata
!rm -r ./mydata
✅ logged in with email testuser1@lamin.ai (uid: DzTjkKse)
💡 deleting instance testuser1/mydata
✅     deleted instance settings file: /home/runner/.lamin/instance--testuser1--mydata.env
✅     instance cache deleted
✅     deleted '.lndb' sqlite file
❗     consider manually deleting your stored data: /home/runner/work/lamin-usecases/lamin-usecases/docs/mydata