Command line programs and libraries
Table of Contents
- Scripts
- ttlfmt
- spc
- slimgen
- scigraph-codegen
- scig
- registry-sync
- rdflib_profile
- qnamefix
- pushd
- parcellation
- overlaps
- ontutils
- ontree
- ontload
- ont-docs
- ont-catalog
- obo-io
- neurondm-build
- necromancy
- map-identifiers
- make_config
- interlex
- graphml-to-ttl
- googapis
- defs
- clifun-demo
- allen_transgenic_lines
- allen_cell_types
- Modules
Scripts
ttlfmt
Format ontology files using a uniform ttl serializer from rdflib.

Usage:
    ttlfmt [options]
    ttlfmt [options] <file>...

Options:
    -h --help       print this
    -v --verbose    do something fun!
    -a --vanilla    use the regular rdflib turtle serializer
    -y --subclass   use the subClassOf turtle serializer
    -c --compact    use the compact turtle serializer
    -u --uncompact  use the uncompact turtle serializer
    -r --racket     use the racket turtle serializer
    -j --jsonld     use the rdflib-jsonld serializer
    -f --format=FM  specify the input format (used for pipes)
    -t --outfmt=F   specify the output format [default: nifttl]
    -s --slow       do not use a process pool
    -n --nowrite    parse the file and reserialize it but do not write changes
    -o --output=FI  serialize all input files to output file
    -p --profile    enable profiling on parsing and serialization
    -d --debug      launch debugger after parsing and before serialization
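For example, to reserialize a single Turtle file in place using the default nifttl serializer (filename chosen for illustration):

    ttlfmt myontology.ttl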
spc
SPARC curation cli for fetching, validating datasets, and reporting.

Usage:
    spc configure
    spc clone [options] <project-id>
    spc pull [options] [<directory>...]
    spc refresh [options] [<path>...]
    spc fetch [options] [<path>...]
    spc find [options] --name=<PAT>...
    spc status [options]
    spc meta [options] [<path>...]
    spc rmeta [options]
    spc export [schemas protcur protocols] [options] [<path>...]
    spc report all [options]
    spc report size [options] [<path>...]
    spc report tofetch [options] [<directory>...]
    spc report terms [anatomy cells subcelluar] [options]
    spc report terms [hubmap hubmap-anatomy] [options]
    spc report overview [<path>...] [options]
    spc report anno-tags <tag>... [options]
    spc report [samples-values subjects-values] [options]
    spc report [access filetypes pathids errors] [options]
    spc report [completeness keywords subjects] [options]
    spc report [contributors samples milestones] [options]
    spc report [protocols changes test mbf mis] [options]
    spc shell [affil integration protocols exit] [options]
    spc shell [(dates [<path>...]) sheets] [options]
    spc server [options]
    spc apinat [options] <path-in> <path-out>
    spc tables [options] [<directory>...]
    spc annos [options] [fetch export shell]
    spc feedback <feedback-file> <feedback>...
    spc missing [options]
    spc xattrs [options]
    spc goto <remote-id>
    spc fix [options] [duplicates mismatch cache] [<path>...]
    spc fix [options] [bf-to-pn] [<path>...]
    spc stash [options --restore] <path>...
    spc make-url [options] [<id-or-path>...]
    spc show [schemas rmeta (export [json ttl])] [options] [<project-id>]
    spc show protcur [json ttl] [options]
    spc sheets [update cache] [options] <sheet-name>
    spc fab [meta] [options]

Commands:
    configure   run commands to check and get auth credentials
    clone       clone a remote project (creates a new folder in the current directory)
    pull        retrieve remote file structure
                options: --empty
                       : --sparse-limit
    refresh     retrieve remote file sizes and file ids (can also fetch using the new data)
                options: --fetch
                       : --level
                       : --only-no-file-id
    fetch       fetch remote data based on local metadata (NOTE does NOT refresh first)
                options: --level
                       : --mbf  fetch mbf xml metadata and only for specific datasets
    find        list unfetched files with option to fetch
                options: --name=<PAT>...  glob options should be quoted to avoid expansion
                       : --existing       include existing files in search
                       : --refresh        refresh matching files
                       : --fetch          fetch matching files
                       : --level
    status      list existing files where local meta does not match cached
    meta        display the metadata for the current folder or specified paths
                options: --diff     diff the local and cached metadata
                       : --uri      render the uri for the remote
                       : --browser  navigate to the human uri for this file
                       : --human
                       : --context  include context, e.g. dataset
    rmeta       retrieve metadata about files/folders from the remote
    export      export extracted data to json (and everything else)
        schemas     export schemas from python to json
                options: --latest   run derived pipelines from latest json
                       : --partial  run derived pipelines from the latest partial json export
                       : --open=P   open the output file with specified command
                       : --show     open the output file using xopen
                       : --mbf      extract and export mbf embedded metadata
    report      generate reports
        all              generate all reports (use with --to-sheets)
        size             dataset sizes and file counts
        completeness     submission and curation completeness
        filetypes        filetypes used across datasets
        pathids          mapping from local path to cached id
        keywords         keywords used per dataset
        terms            all ontology terms used in the export
            anatomy
            cells
            subcelluar
            hubmap
            hubmap-anatomy
        subjects         all headings from subjects files
        samples          all headings from samples files
        contributors     report on all contributors
        errors           list of all errors per dataset
        test             do as little as possible (use with --profile)
        mbf              mbf term report (can use with --unique)
        anno-tags        list anno exact for a curation tag
        protocols        general report on status of protocols
        changes          diff two curation exports
        mis              list summary predicates used per dataset
        access           report on dataset access master vs pipelines
        overview         general dataset information
        samples-values   report all cell values for samples sheets
        subjects-values  report all cell values for subjects sheets
                options: --raw  run reports on live data without export
                       : --tab-table
                       : --to-sheets
                       : --sort-count-desc
                       : --unique
                       : --uri
                       : --uri-api
                       : --uri-html
                       : --debug
                       : --export-file=PATH
                       : --protcur-file=PATH
                       : --ttl-file=PATHoURI
                       : --ttl-compare=PATHoURI
                       : --published
    show        show an export file
        schemas     show the latest schema export folder
        rmeta       show the rmeta cache folder
        export      show the latest project level export
            json
            ttl
        protcur     show the latest protcur export
            json
            ttl
                options: --open=P  open the output file with specified command
    shell       drop into an ipython shell
        integration     integration subshell with different defaults
        exit            (use with --profile)
    server      reporting server
                options: --raw  run server on live data without export
    apinat      convert ApiNATOMY json to rdf and serialize to ttl
    missing     find and fix missing metadata
    xattrs      populate metastore / backup xattrs
    goto        given an id cd to the containing directory
                invoke as `pushd $(spc goto <id>)`
    dedupe      find and resolve cases with multiple ids
    fix         broke something? put the code to fix it here
        mismatch
        duplicates
    stash       stash a copy of the specific files and their parents
    make-url    return urls for blackfynn dataset ids, or paths
    fab         fabricate something
        meta        make fake metadata for a locally updated file

Options:
    -f --fetch              fetch matching files
    -R --refresh            refresh matching files
    -r --rate=HZ            sometimes we can go too fast when fetching [default: 5]
    -l --limit=SIZE_MB      the maximum size to download in megabytes [default: 2]
                            use zero or negative numbers to indicate no limit
    -L --level=LEVEL        how deep to go in a refresh
                            used by any command that accepts <path>...
    -p --pretend            if the default is to act, don't; opposite of fetch
    -h --human              print human readable values
    -b --browser            open the uri in default browser
    -u --uri                print the human uri for the path in question
    -a --uri-api            print the api uri for the path in question
       --uri-html           print the html uri for the path in question
    -c --context            include context for a file e.g. dataset
    -n --name=<PAT>         filename pattern to match (like find -name)
    -e --empty              only pull empty directories
    -x --exists             when searching include files that have already been pulled
    -m --only-meta          only pull known dataset metadata files
    -z --only-no-file-id    only pull files missing file_id
    -o --overwrite          fetch even if the file exists
    --project-path=<PTH>    set the project path manually
    --sparse-limit=COUNT    package count that forces a sparse pull [default: sparcur.config.auth.get('sparse-limit')]
                            use zero or negative numbers to indicate no limit
    -F --export-file=PATH   run reports on a specific export file
    -t --tab-table          print simple table using tabs for copying
    -A --latest             run derived pipelines from latest json
    -P --partial            run derived pipelines from the latest partial json export
    -W --raw                run reporting on live data without export
       --published          run on the latest published export
       --to-sheets          push report to google sheets
    --protcur-file=PATH     location of protcur jsonld file
    --ttl-file=PATHoURI     location of ttl file (uses latest if not specified)
    --ttl-compare=PATHoURI  location of ttl file for comparison
    --preview               run export and reporting in preview mode
                            if not set auth.get('preview') takes priority
    -S --sort-size-desc     sort by file size, largest first
    -C --sort-count-desc    sort by count, largest first
    -O --open=PROGRAM       open the output file with program
    -w --show               open the output file
    -U --upload             update remote target (e.g. a google sheet) if one exists
    -N --no-google          hack for ipv6 issues
    -D --diff               diff local vs cache
    --force                 force the regeneration of a cached file
    --port=PORT             server port [default: 7250]
    -j --jobs=N             number of jobs to run [default: 12]
    -d --debug              drop into a shell after running a step
    -v --verbose            print extra information
    --profile               profile startup performance
    --local                 ignore network issues
    --no-network            do not make any network requests (incomplete impl)
    --mbf                   fetch/export mbf related metadata
    --unique                return a unique set of values without additional info
    --log-level=LEVEL       set python logging log level
    --log-path=PATH         folder where logs are saved [default: sparcur.config.auth.get_path('log-path')]
    --cache-path=PATH       folder where remote data is saved [default: sparcur.config.auth.get_path('cache-path')]
    --export-path=PATH      base folder for exports [default: sparcur.config.auth.get_path('export-path')]
    --project-id=PID        alternate way to pass project id [default: sparcur.config.auth.get('remote-organization')]
    --hypothesis-group-name=NAME  hypothesis group name for protcur [default: sparc-curation]
    --hypothesis-cache-file=PATH  path to hyputils json cache file
    --i-know-what-i-am-doing      don't use this unless you already know what it does
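A minimal example workflow, assuming configured credentials and network access (the project id shown is a placeholder):

    spc clone N:organization:xxxx    # creates a new folder in the current directory
    spc pull
    spc refresh --fetch --limit=2
    spc export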
slimgen
Generate slim ontology files.

Usage:
    slimgen [options] (chebi|gene|doid)...
    slimgen [options] all

Options:
    -h --help        show this
    -j --jobs=NJOBS  number of jobs [default: 1]
    -d --debug
scigraph-codegen
Client library generator for SciGraph REST api.

Usage:
    scigraph-codegen [options] [--dynamic=<PATH>...]

Options:
    -o --output-file=FILE      save client library here [default: import tempfile.tempdir/scigraph_client.py]
    -a --api=API               API endpoint to build from [default: pyontutils.config.auth.get('scigraph-api')]
    -v --scigraph-version=VER  API docs version [default: 2]
    -b --basepath=BASEPATH     alternate default basepath [default: https://scicrunch.org/api/1/sparc-scigraph]
    -d --dynamic=<PATH>        additional servers to search for dynamic endpoints
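A typical invocation, assuming the configured SciGraph endpoint is reachable (output path chosen for illustration):

    scigraph-codegen --output-file=./scigraph_client.py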
scig
Look up ontology terms on the command line.

Usage:
    scig v [--api=A --local --verbose --key=KEY] <id>...
    scig i [--api=A --local --verbose --key=KEY] <id>...
    scig t [--api=A --local --verbose --limit=LIMIT --key=KEY --prefix=P...] <term>...
    scig s [--api=A --local --verbose --limit=LIMIT --key=KEY --prefix=P...] <term>...
    scig g [--api=A --local --verbose --rt=RELTYPE --edges --key=KEY] <id>...
    scig e [--api=A --local --verbose --key=KEY] <p> <s> <o>
    scig c [--api=A --local --verbose --key=KEY]
    scig cy [--api=A --local --verbose --limit=LIMIT] <query>
    scig onts [--api=A --local --verbose --key=KEY]

Options:
    -a --api=A        Full url to SciGraph api endpoint
    -e --edges        print edges only
    -l --local        hit the local scigraph server
    -v --verbose      print the full uri
    -t --limit=LIMIT  limit number of results [default: 10]
    -k --key=KEY      api key
    -w --warn         warn on errors
    -p --prefix=P     filter by prefix
    -r --rt=RELTYPE   relationshipType
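For example, to search for ontology terms against the configured SciGraph endpoint (search term chosen for illustration):

    scig t --limit=5 brain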
registry-sync
Sync the scicrunch registry to a ttl file for loading into scigraph for autocomplete.

Usage:
    registry-sync [options]

Options:
    -u --user=USER         [default: nif_eelg_secure]
    -h --host=HOST         [default: nif-mysql.crbs.ucsd.edu]
    -p --port=PORT         [default: 3306]
    -d --database=DB       [default: nif_eelg]
    -g --git-remote=GBASE  remote git hosting [default: pyontutils.config.auth.get('git-remote-base')]
    -l --git-local=LBASE   local path to look for ontology <repo> [default: pyontutils.config.auth.get_path('git-local-base')]
    -o --org=ORG           user/org to clone/load ontology from [default: pyontutils.config.auth.get('ontology-org')]
    -r --repo=REPO         name of ontology repo [default: pyontutils.config.auth.get('ontology-repo')]
    --test
rdflib_profile
run rdflib performance tests

Usage:
    rdflib_profile [options]

Options:
    -s --setup   run setup only
    -p --pipenv  setup pipenv
    -l --local   run tests in the parent process rather than forking
qnamefix
Set qnames based on the curies defined for a given ontology.

Usage:
    qnamefix [options]
    qnamefix [options] (-x <prefix>)...
    qnamefix [options] <file>...
    qnamefix [options] (-x <prefix>)... <file>...

Options:
    -h --help       print this
    -k --keep       keep only existing used prefixes
    -x --exclude=X  do not include the prefix when rewriting, ALL will strip
    -v --verbose    do something fun!
    -s --slow       do not use a process pool
    -n --nowrite    parse the file and reserialize it but do not write changes
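For example, to rewrite the qnames of a single file, or to strip all existing prefixes while doing so (filename chosen for illustration):

    qnamefix myontology.ttl
    qnamefix -x ALL myontology.ttl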
pushd
Create a file containing only the published subset of curation-export.ttl.

Usage:
    pushd path/to/export/root; python -m sparcur.export.published; popd
parcellation
Generate NIF parcellation schemes from external resources.

Usage:
    parcellation [options]

Options:
    -f --fail        fail loudly on common validation checks
    -j --jobs=NJOBS  number of parallel jobs to run [default: 9]
    -l --local       only build files with local source copies
    -s --stats       generate report on current parcellations
overlaps
Report on overlapping triples between all pairs of ontology files.

Usage:
    overlaps [options] <file>...

Options:
    -h --help     print this
    -v --verbose  do something fun!
ontutils
Common commands for ontology processes.
Also old ontology refactors to run in the root ttl folder.

Usage:
    ontutils set ontology-local-repo <path>
    ontutils set scigraph-api-key <key>
    ontutils devconfig [--write] [<field> ...]
    ontutils parcellation
    ontutils catalog-extras [options]
    ontutils iri-commit [options] <repo>
    ontutils deadlinks [options] <file> ...
    ontutils scigraph-stress [options]
    ontutils spell [options] <file> ...
    ontutils version-iri [options] <file>...
    ontutils uri-switch [options] <file>...
    ontutils backend-refactor [options] <file>...
    ontutils todo [options] <repo>
    ontutils expand <curie>...

Options:
    -a --scigraph-api=API  SciGraph API endpoint [default: pyontutils.config.auth.get('scigraph-api')]
    -o --output-file=FILE  output file
    -l --git-local=LBASE   local git folder [default: pyontutils.config.auth.get_path('git-local-base')]
    -u --curies=CURIEFILE  curie definition file [default: pyontutils.config.auth.get_path('curies')]
    -e --epoch=EPOCH       specify the epoch to use for versionIRI
    -r --rate=Hz           rate in Hz for requests, zero is no limit [default: 20]
    -t --timeout=SECONDS   timeout in seconds for deadlinks requests [default: 5]
    -f --fetch             fetch catalog extras from their remote location
    -d --debug             drop into debugger when finished
    -v --verbose           verbose output
    -w --write             write devconfig file
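For example, to expand a curie to its full iri using the configured curie definitions (curie chosen for illustration):

    ontutils expand UBERON:0000955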
ontree
Render a tree from a predicate root pair. Normally run as a web service.

Usage:
    ontree server [options]
    ontree [options] <predicate-curie> <root-curie>
    ontree --test

Options:
    -a --api=API          Full url to SciGraph api endpoint
    --data-api=DAPI       Full url to SciGraph data api endpoint
    -k --key=APIKEY       apikey for SciGraph instance
    -p --port=PORT        port on which to run the server [default: 8000]
    -f --input-file=FILE  don't use SciGraph, load an individual file instead
    -o --outgoing         if not specified defaults to incoming
    -b --both             if specified goes in both directions
    -t --test             run tests
    -v --verbose          print extra information
ontload
Use SciGraph to load an ontology from a local git repository.
Remote imports are replaced with local imports.
NIF -> http://ontology.neuinfo.org/NIF

Usage:
    ontload graph [options] <repo> <remote_base>
    ontload config [options] <repo> <remote_base> <graph_path>
    ontload scigraph [options]
    ontload imports [options] <repo> <remote_base> <ontologies>...
    ontload chain [options] <repo> <remote_base> <ontologies>...
    ontload extra [options] <repo>
    ontload patch [options] <repo>
    ontload prov [blazegraph scigraph] [options] <path-out>
    ontload prov [blazegraph scigraph] [options] <path-in> <path-out>
    ontload [options]

Options:
    -g --git-remote=GBASE          remote git hosting [default: pyontutils.core.auth.get('git-remote-base')]
    -l --git-local=LBASE           local git folder [default: pyontutils.core.auth.get_path('git-local-base')]
    -z --zip-location=ZIPLOC       local path for build files [default: pyontutils.core.auth.get_path('zip-location')]
    -t --graphload-config=CFG      graphload.yaml location [default: pyontutils.core.auth.get_path('scigraph-graphload')]
                                   THIS IS THE LOCATION OF THE BASE TEMPLATE FILE
    -n --graphload-ontologies=YML  ontologies-*.yaml file
    -o --org=ORG                   user/org for ontology [default: pyontutils.core.auth.get('ontology-org')]
    -b --branch=BRANCH             ontology branch to load [default: master]
    -c --commit=COMMIT             ontology commit to load [default: HEAD]
    -s --scp-loc=SCP               scp zipped graph here [default: user@localhost:import tempfile.tempdir/graph/]
    -i --path-build-scigraph=PBS   build scigraph at path
    -O --scigraph-org=SORG         user/org for scigraph [default: SciGraph]
    -B --scigraph-branch=SBRANCH   scigraph branch to build [default: master]
    -C --scigraph-commit=SCOMMIT   scigraph commit to build [default: HEAD]
    -S --scigraph-scp-loc=SGSCP    scp zipped services here [default: user@localhost:import tempfile.tempdir/scigraph/]
    -Q --scigraph-quiet            silence mvn log output
    -P --patch-config=PATCHLOC     patchs.yaml location [default: pyontutils.core.auth.get_path('patch-config')]
    -u --curies=CURIEFILE          curie definition file [default: pyontutils.core.auth.get_path('curies')]
                                   if only the filename is given assumed to be in scigraph-config-folder
    -p --patch                     retrieve ontologies to patch and modify import chain accordingly
    -K --check-built               check whether a local copy is present but do not build if it is not
    -d --debug                     call breakpoint when done
    -L --logfile=LOG               log output here [default: ontload.log]
    -v --view-defaults             print out the currently configured default values
    -f --graph-config-out=GCO      output for graphload.yaml [default: pyontutils.core.auth.get_path('scigraph-graphload')]
                                   only useful for `ontload config` ignored otherwise
    -x --fix-imports-only          prepare graph build but only fix the import chain do not build
    --build-id=BUILD_ID
    --nowish=NOWISH
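A typical graph build, assuming the NIF-Ontology repository is present under the configured git-local-base (arguments follow the NIF mapping shown above; the zip location is illustrative):

    ontload graph NIF-Ontology NIF -b master -z /tmp/build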
ont-docs
Compile all documentation from git repos.

Usage:
    ont-docs [options] [--repo=<REPO>...]
    ont-docs render [options] <path>...

Options:
    -h --help             show this
    -c --config=<PATH>    path to doc-index.yaml [default: pyontutils.config.auth.get_path('resources') / 'doc-config.yaml']
    -o --out-path=<PATH>  path inside which docs are built [default: augpathlib.RepoPath(import tempfile.tempdir) / 'build-ont-docs' / 'docs']
    -b --html-root=<REL>  relative path to the html root [default: ..]
    -s --spell            run hunspell on all docs
    -d --docstring-only   build docstrings only
    -j --jobs=NJOBS       number of jobs [default: 9]
    -r --repo=<REPO>      additional repos to crawl for docs
    --theme=<THEMEPATH>   path to theme inside theme-repo [default: 'org/theme-readtheorg-local.setup']
    --theme-repo=<REPO>   path to theme [default: augpathlib.RepoPath(pyontutils.config.auth.get_path('git-local-base')) / 'org-html-themes']
    --debug               redirect stderr to debug pipeline errors
ont-catalog
Generate ttl/catalog-*.xml

Usage:
    ont-catalog [options]
    ont-catalog [options] <file> ...

Options:
    -b --big                      when creating catalog also import big files
                                  recommend running this option with pypy3
    -j --jobs=NJOBS               number of parallel jobs to run [default: 9]
    -d --debug                    break at the end
    -l --ontology-local-repo=OLR  path to ontology [default: pyontutils.config.auth.get_path('ontology-local-repo')]
obo-io
python .obo file parser and writer

Usage:
    obo-io [options] <obofile> [<output-name>]
    obo-io --help

Options:
    -h --help            show this
    -d --debug           break after parsing
    -t --out-format=FMT  output to this format
                         options are obo or ttl [default: obo]
    -o --overwrite       write the format, overwrite existing
    -w --write           write the output
    -s --strict          fail on missing definitions
    -r --robot           match the format produced by robot
    -n --no-stamp        do not add date, user, and program header stamp

based on the obo 1.2 / 1.4 (ish) spec defined at
https://owlcollab.github.io/oboformat/doc/GO.format.obo-1_2.html
https://owlcollab.github.io/oboformat/doc/GO.format.obo-1_4.html
lacks appropriate levels of testing for production use

acts as a command line script or as a python module
also converts to ttl format but the conversion conventions are ill defined

When writing a file, if the path for the obofile exists it will not overwrite
what you have but will instead append a number to the end.

ALWAYS MANUALLY CHECK YOUR OUTPUT
THIS SUCKER IS FLAKY
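For example, to parse an obo file and write it back out as ttl (filename chosen for illustration):

    obo-io --out-format=ttl --write myfile.obo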
neurondm-build
run neurondm related exports and conversions

Usage:
    neurondm-build release [options]
    neurondm-build all [options]
    neurondm-build [indicators phenotypes] [options]
    neurondm-build [models bridge old dep dev] [options]
    neurondm-build [sheets] [options]

Options:
    -h --help  Display this help message
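For example, to run every export and conversion in one pass:

    neurondm-build all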
necromancy
Find dead ids in an ontology and raise them to be owl:Classes again.
Also build a list of classes that may be banished to the shadow realm
of oboInOwl:hasAlternativeId in the near future.

Usage:
    necromancy [options] <file-or-url>...

Options:
    -h --help                     print this
    -v --verbose                  do something fun!
    -s --slow                     do not use a process pool
    -n --nowrite                  parse the file and reserialize it but do not write changes
    -m --mkdir                    make the output directory if it does not exist
    -l --ontology-local-repo=OLR
map-identifiers
map ids

Usage:
    map-identifiers [options] [methods npokb]

Options:
    -h --help  print this
make_config
Create nginx configs for resolver.

Usage:
    make_config [options]

Options:
    -l --git-local=LBASE  local git folder [default: pyontutils.config.auth.get_path('git-local-base')]
    -d --debug            call IPython embed when done
interlex
InterLex python implementation

Usage:
    interlex server [uri curies alt api] [options] [<database>]
    interlex shell [alt] [options] [<database>]
    interlex dbsetup [options] [<database>]
    interlex sync [options] [<database>]
    interlex get [options]
    interlex post ontology [options] <ontology-filename> ...
    interlex post triples [options] (<reference-name> <triples-filename>) ...
    interlex post curies [options] [<curies-filename>]
    interlex post curies [options] (<curie-prefix> <iri-prefix>) ...
    interlex post resource [options] <rdf-iri>
    interlex post class [options] <rdfs:subClassOf> <rdfs:label> [<definition:>] [<synonym:> ...]
    interlex post entity [options] <rdf:type> <rdfs:sub*Of> <rdfs:label> [<definition:>] [<synonym:> ...]
    interlex post triple [options] <subject> <predicate> <object>
    interlex id [options] <match-curie-or-iri> ...
    interlex label [options] <match-label> ...
    interlex term [options] <match-label-or-synonym> ...
    interlex search [options] <match-full-text> ...

Commands:
    server api     start a server running the api endpoint (WARNING: OLD)
    server uri     start a server for uri.interlex.org connected to <database>
    server curies  start a server for curies.interlex.org
    server alt     start a server for alternate interlex webservices
    dbsetup        step through creation of a user (currently tgbugs)
    shell          drop into a debug repl with a database connection
    sync           run sync with the old mysql database
    post ontology  post an ontology file by uploading directly to interlex
    post triples   post a file with triples, but no ontology header, to a specific reference name (want?)
    post curies    post curies for a given user
    post resource  post a link to an rdf 'file' for interlex to retrieve
    post class
    post entity
    post triple
    id             get the interlex record for a curie or iri
    label          get all interlex records where the rdfs:label matches a string
    term           get all interlex records where any label or synonym matches a string
    search         get all interlex records where the search index returns a match for a string

Examples:
    export INTERLEX_API_KEY=$(cat path/to/my/api/key)
    interlex post triple ILX:1234567 rdfs:label "not-a-term"
    interlex post triple ILX:1234567 definition: "A meaningless example term"
    interlex post entity -r ilxtr:myNewProperty owl:AnnotationProperty _ 'my annotation property' 'use for stuff'
    interlex post class -r ilxtr:myNewClass ilxtr:myExistingClass 'new class' 'some new thing'
    interlex id -u base -n tgbugs ilxtr:brain

Options:
    -t --test             run with config used for testing
    --production          run with config used for production
    -g --group=GROUP      the group whose data should be returned [default: api]
    -u --user=USER        alias for --group
    -n --names-group=NG   the group whose naming conventions should be used [default: api]
    -r --readable         user/uris/readable iri/curie
    -l --limit=LIMIT      limit the number of results [default: 10]
    -f --input-file=FILE  load an individual file
    -p --port=PORT        manually set the port to use in the context of the current command
    -o --local            run against local
    -c --gunicorn         run against local gunicorn
    -d --debug            enable debug mode
    --do-cdes             when running sync include the cdes
graphml-to-ttl
convert graphml files to ttl files

Usage:
    graphml-to-ttl [options] <file>
    graphml-to-ttl methods [options] <file>
    graphml-to-ttl workflow [options] <file>
    graphml-to-ttl paper [options] <file>

Options:
    -o --output-location=LOC  write converted files to [default: import tempfile.tempdir/]
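For example, to convert a workflow graphml file, writing the result under the default temporary output location (filename chosen for illustration):

    graphml-to-ttl workflow my-workflow.graphml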
googapis
api access for google sheets (and friends)

Usage:
    googapis auth (sheets|docs|drive)... [options] [--drive-scope=<SCOPE>...]

Examples:
    googapis auth sheets

Options:
    --store-file=<PATH>...    write to a specific store file
    -n --readonly             set the readonly scope
    --drive-scope=<SCOPE>...  add drive scopes (overrides readonly)
                              values: appdata
                                      file
                                      metadata
                                      metadata.readonly
                                      photos.readonly
                                      readonly
                                      scripts
    -d --debug
defs
update ontology definitions

Usage:
    defs [options]

Options:
    -u --update  push updates to google sheet
    -d --debug   enable various debug options
clifun-demo
Helper classes for organizing docopt programs

Usage:
    clifun-demo [options]
    clifun-demo sub-command-1 [options] <args>...
    clifun-demo sub-command-2 sub-command-1 [options] <args>...

Usage-Bad:
    Example:
        clifun-demo [options] <args>...
        clifun-demo cmd [options]
    Reason: <args>... masks cmd

Options:
    -o --optional  an optional argument
    -d --debug
allen_transgenic_lines
Converts owl or ttl or raw rdflib graph into a pandas DataFrame. Saved in .pickle format.

Usage:
    allen_transgenic_lines [options]

Options:
    -h --help           Display this help message
    -v --version        Current version of file
    -r --refresh        Update local copy
    -i --input=<path>   Local copy of Allen Brain Atlas meta data [default: /tmp/allen-cell-types.json]
    -o --output=<path>  Output path in ttl format [default: allen-cell-types]
allen_cell_types
Converts owl or ttl or raw rdflib graph into a pandas DataFrame. Saved in .pickle format.

Usage:
    allen_cell_types [options]

Options:
    -h --help           Display this help message
    -v --version        Current version of file
    -r --refresh        Update local copy
    -i --input=<path>   Local copy of Allen Brain Atlas meta data [default: /tmp/allen-cell-types.json]
    -o --output=<path>  Output path of pickled pandas DataFrame [default: allen-cell-types]
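For example, to refresh the local copy of the Allen Brain Atlas metadata and write the output (paths shown are the documented defaults):

    allen_cell_types --refresh --input=/tmp/allen-cell-types.json --output=allen-cell-types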
Modules
sparcur_internal.dandittl
convert dandi terms yaml to ttl
sparcur.sparcron.core
services
    /etc/init.d/rabbitmq start
    /etc/init.d/redis start

test with
    python -m sparcur.sparcron

# run with
PYTHONBREAKPOINT=0 celery --app sparcur.sparcron worker -n wcron -Q cron,default --concurrency=1 --detach --beat --schedule-filename ./sparcur-cron-schedule --loglevel=INFO
PYTHONBREAKPOINT=0 celery --app sparcur.sparcron worker -n wexport -Q export --loglevel=INFO

# to clean up
celery -A sparcur.sparcron purge
rabbitmqctl list_queues
rabbitmqctl purge_queue celery; rabbitmqctl delete_queue cron; rabbitmqctl delete_queue export; rabbitmqctl delete_queue default
sparcur.normalization
String normalizers: strings that change their content to match a standard.
sparcur.export.reprotcur
Split protcur.ttl into multiple files with one file per protocol.
pyontutils.utils_extra
Reused utilities that depend on packages outside the python standard library.
pyontutils.utils
A collection of reused functions and classes. Depends only on python standard library.
pyontutils.scigraph_client
WARNING: DO NOT MODIFY THIS FILE
IT IS AUTOMATICALLY GENERATED BY scigraph.py
AND WILL BE OVERWRITTEN
Swagger Version: 2.0, API Version: 1.0.1
generated for http://localhost:9000/scigraph/swagger.json by scigraph.py
ontquery.query
Implementation of the query interface that provides a layer of separation between identifiers and lookup services for finding and validating them.
nifstd.nifstd_tools.sheets_sparc
Sparc View Tree Workflow:
    1. Create schema class
    2. Create sheet importing schema class
    3. Update collect_sheets in GoogleSheets
    4. Any merging of graphs and hardcoding is done in the following functions
       in GoogleSheets (merge_graphs & hardcode_graph_paths)
    5. Rerun sheet.py to generate sparc_terms.txt/csv to use
    6. OPTIONAL: edit sparc_terms.txt manually for any last minute complicated tasks

Notes:
    - grid needs to be enabled to allow bold traversal
    - new sheets == need work flow
nifstd.nifstd_tools.ncbigene_slim
Build lightweight slims from curie lists. Used for sources that don't have an owl ontology floating around.
htmlfn.htmlfn.__init__
Lightweight functions for generating html and working with the rest of the unholy trinity.
augpathlib.meta
Classes for storing and converting metadata associated with path like objects.