SciCrunch Registry is a curated repository of scientific resources, with a focus on biomedical resources, including tools, databases, and core facilities - visit SciCrunch to register your resource.
http://www.ini.uzh.ch/~acardona/trakem2.html
An ImageJ plugin for morphological data mining, three-dimensional modeling, and image stitching, registration, editing, and annotation. Two independent modalities exist: XML-based projects, which work directly with the file system, or database-based projects, which work on top of a local or remote PostgreSQL database. What can you do with it?
* Semantic segmentation editor: order segmentations in tree hierarchies whose template is exportable for reuse in other, comparable projects.
* Model, visualize, and export 3D data.
* Work from your laptop on your huge, remote image storage.
* Work with an endless number of images, limited only by hard drive capacity. Dozens of formats are supported thanks to LOCI Bioformats and ImageJ.
* Import stacks and even entire grids (montages) of images, automatically stitch them together, and homogenize their histograms for the best montaging quality.
* Add layers conveniently. A layer represents, for example, one 50 nm section (for TEM) or a confocal section. Each layer has its own Z coordinate and thickness, and contains images, labels, areas, nodes of 3D skeletons, profiles, and more.
* Insert layer sets into layers, so your electron microscopy serial sections can live inside your optical microscopy sections.
* Run any ImageJ plugin on any image.
* Measure everything (areas, volumes, pixel intensities, etc.) using both built-in data structures and segmentation types and standard ImageJ ROIs. And with double dissectors!
* Visualize RGB color channels, changing the opacity of each on the fly, non-destructively.
* Annotate images non-destructively with floating text labels, which you can rotate/scale on the fly and display in any color.
* Montage/register/stitch/blend images manually with transparencies, semi-automatically, or fully automatically within and across sections, with translation, rigid, similarity, and affine models using automatically extracted SIFT features.
* Correct the lens distortion present in the images, such as that generated in transmission electron microscopy.
* Add alpha masks to images using ROIs, for example to split images into two or more parts, or to remove the borders of an image or collection of images.
* Model neuronal arbors with 3D skeletons (with areas or radii), and synapses with connectors.
* Undo all steps. And much more...
Proper citation: TrakEM2 (RRID:SCR_008954)
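Every entry in this registry pairs a resource name with a Research Resource Identifier in its proper citation, e.g. RRID:SCR_008954. As a minimal, illustrative sketch (the helper name and regex are my own, assuming the "RRID:PREFIX_digits" shape these citations follow), the identifier can be pulled out of a citation string:

```python
import re

# Assumed pattern: an uppercase authority prefix (SCR_, AB_, ...)
# followed by digits, as seen in the citation lines of this registry.
RRID_RE = re.compile(r"RRID:([A-Z]+_\d+)")

def extract_rrid(citation):
    """Return the bare identifier from a proper-citation string, or None."""
    m = RRID_RE.search(citation)
    return m.group(1) if m else None

extract_rrid("Proper citation: TrakEM2 (RRID:SCR_008954)")  # -> "SCR_008954"
```

This is only a sketch for machine-reading citation lines like the ones below; real RRID syntax is governed by the registry itself.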
http://www.theseed.org/wiki/Home_of_the_SEED
The SEED is a framework to support comparative analysis and annotation of genomes. The cooperative effort focuses on the development of the comparative genomics environment and, more importantly, on the development of curated genomic data. Curation of genomic data (annotation) is done via the curation of subsystems by an expert annotator across many genomes, not on a gene-by-gene basis. From the curated subsystems we extract a set of freely available protein families (FIGfams). These FIGfams form the core component of our RAST automated annotation technology. Answering numerous requests for automatic SEED-quality annotations for more or less complete bacterial and archaeal genomes, we have established the free RAST server (RAST = Rapid Annotation using Subsystem Technology). Using similar technology, we make the Metagenomics RAST server freely available. We also provide a SEED Viewer that allows read-only access to the latest curated data sets. We currently have 58 Archaea, 902 Bacteria, 562 Eukaryota, 1254 Plasmids and 1713 Viruses in our database. All tools and datasets that make up the SEED are in the public domain and can be downloaded at ftp://ftp.theseed.org
Proper citation: SEED (RRID:SCR_002129)
http://www.predictprotein.org/
Web application for sequence analysis and the prediction of protein structure and function. The user interface accepts protein sequences or alignments and returns multiple sequence alignments, motifs, and nuclear localization signals. THIS RESOURCE IS NO LONGER IN SERVICE. Documented on January 15, 2026.
Proper citation: Predictions for Entire Proteomes (RRID:SCR_002803)
Database on transcriptional regulation in Escherichia coli K-12 containing knowledge manually curated from original scientific publications, complemented with high-throughput datasets and comprehensive computational predictions. A graphic and text-integrated environment with friendly navigation keeps regulatory information always at hand. It provides integrated views for understanding regulation, as well as organized knowledge in computable form. Users may submit data to make it publicly available.
Proper citation: RegulonDB (RRID:SCR_003499)
THIS RESOURCE IS NO LONGER IN SERVICE. Documented on January 14, 2023. Infrastructure for sharing data, tools, and services, this virtual research environment (VRE) supports e-Neuroscience and is designed to provide services for data and for processing of that data. While the system is initially focused on electrophysiology data (neural activity recordings are the primary data types), it is equally applicable to many domains outside neuroscience. The portal provides:
* User login and customization.
* Data upload/download.
* Data handling, including custom permissions for public, shared, or private data.
* The ability to invoke custom public, shared, or private services that consume and produce data. For example, it would allow spike series to be run through a sorter, producing new data representing the sorted spikes.
* The ability to host services written in a number of languages including, but not limited to, Matlab, R, Python, Perl, and Java.
* A system to support metadata for data objects, which provides extensive support for entering metadata at the point of upload and allows the generation of metadata from services to provide provenance information.
* The ability to invoke additional visualization for the data, for example via the Signal Data Explorer.
A core part is the development of: (i) minimum reporting guidelines for annotation of data and other computational resources for the purpose of sharing, and (ii) intermediate formats and APIs for translation between proprietary and bespoke data types. These recommendations are being implemented, and the global community is encouraged both to engage in their specification and to make use of them for their own data-sharing systems.
* MINI: Minimum Information about a Neuroscience Investigation - This framework represents the formalized opinion of the CARMEN consortium and its associates, and identifies the minimum reporting information required to support the use of electrophysiology in a neuroscience study, for submission to the CARMEN system.
* NDTF: Neurophysiology Data Translation Format - This framework provides a vendor-independent mechanism for translating between raw and processed neurophysiology data in the form of time and image series. NDTF is being implemented in CARMEN but may also be useful for third-party applications.
Proper citation: Code Analysis Repository and Modelling for e-Neuroscience (RRID:SCR_002795)
The Hepatitis C Virus (HCV) Database Project strives to present HCV-associated genetic and immunologic data in a user-friendly way, by providing access to the central database via web-accessible search interfaces and supplying a number of analysis tools.
Proper citation: HCV Databases (RRID:SCR_002863)
A web-based hosting service for software development projects that use the Git revision control system, offering powerful collaboration, code review, and code management. It offers both paid plans for private repositories and free accounts for open-source projects. Large or small, every repository comes with the same powerful tools. These tools are open to the community for public projects and secure for private projects. Features include:
* Integrated issue tracking
* Collaborative code review
* Easily manage teams within organizations
* Text entry with understated power
* A growing list of programming languages and data formats
* On the desktop and in your pocket: Android app and mobile web views let you keep track of your projects on the go.
Proper citation: GitHub (RRID:SCR_002630)
http://burgundy.cmmt.ubc.ca/cgi-bin/RAVEN/a?rm=home
Tool to search for putative regulatory genetic variation in your favorite gene. Single nucleotide polymorphisms (SNPs) (from dbSNP and user defined) are analyzed for overlap with potential transcription factor binding sites (TFBS) and phylogenetic footprinting using UCSC phastCons scores from multiple alignments of 8 vertebrate genomes. THIS RESOURCE IS NO LONGER IN SERVICE. Documented on September 16, 2025.
Proper citation: RAVEN (RRID:SCR_001937)
Professionally curated repository for genetics, genomics and related data resources for soybean that contains the most current genetic, physical and genomic sequence maps integrated with qualitative and quantitative traits. SoyBase includes annotated Williams 82 genomic sequence and associated data mining tools. The genetic and sequence views of the soybean chromosomes and the extensive data on traits and phenotypes are extensively interlinked. This allows entry to the database using almost any kind of available information, such as genetic map symbols, soybean gene names or phenotypic traits. The repository maintains controlled vocabularies for soybean growth, development, and traits that are linked to more general plant ontologies. Contributions to SoyBase or the Breeder's Toolbox are welcome.
Proper citation: SoyBase (RRID:SCR_005096)
http://www.chem.qmul.ac.uk/iubmb/enzyme/
Recommendations of the Nomenclature Committee of the International Union of Biochemistry and Molecular Biology on the nomenclature and classification of enzymes by the reactions they catalyze. Also included are links to individual documents, and advice is provided on how to suggest new enzymes for listing or corrections to existing entries. The common names of all listed enzymes are given, along with their EC numbers. Where an enzyme has been deleted or transferred to another EC number, this is also indicated. Each list links either to a separate entry for each enzyme or to files containing up to 50 enzymes each. A start has been made in showing the pathways in which enzymes participate. For other enzymes a glossary entry has been added, which may be just a systematic name or a link to a graphic representation. The glossary from Enzyme Nomenclature (1992), updated with subsequent entries, may also be consulted. Each enzyme entry has links to other databases. Enzyme subclasses link to lists of sub-subclasses, which in turn list the enzymes, linked either to separate files for each enzyme or to a list within a file of up to 50 enzymes.
Proper citation: Enzyme Nomenclature (RRID:SCR_006583)
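EC numbers as used in this classification have four dot-separated fields: class, subclass, sub-subclass, and serial number (a trailing field may be a dash for partially classified entries). A small illustrative parser (the function name and dictionary layout are my own, not part of the nomenclature recommendations) might look like:

```python
def parse_ec(ec):
    """Split an EC number such as 'EC 1.1.1.1' into its four fields."""
    # Drop an optional "EC" prefix, then split on the dots.
    parts = ec.removeprefix("EC").strip().split(".")
    fields = ["class", "subclass", "sub-subclass", "serial"]
    return dict(zip(fields, parts))

parse_ec("EC 1.1.1.1")   # alcohol dehydrogenase
parse_ec("EC 3.4.21.-")  # serial number not yet assigned
```

The first field alone identifies the broad reaction type (e.g. class 1 for oxidoreductases), which is why partial numbers like "3.4.21.-" are still meaningful.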
A database for phenotyping human single nucleotide polymorphisms (SNPs) that primarily focuses on the molecular characterization and annotation of disease and polymorphism variants in the human proteome. It provides a detailed variant analysis using tools such as:
* TANGO, to predict aggregation-prone regions
* WALTZ, to predict amylogenic regions
* LIMBO, to predict Hsp70 chaperone binding sites
* FoldX, to analyse the effect on structure stability
Further, SNPeffect holds per-variant annotations on functional sites, structural features, and post-translational modifications. The meta-analysis tool enables scientists to carry out large-scale mining of SNPeffect data and visualize the results in a graph. It is also possible to submit custom single protein variants for a detailed phenotypic analysis. THIS RESOURCE IS NO LONGER IN SERVICE. Documented on September 16, 2025.
Proper citation: SNPeffect (RRID:SCR_005091)
http://www.patricbrc.org/portal/portal/patric/Home
A Bioinformatics Resource Center bacterial bioinformatics database and analysis resource that provides researchers with an online resource that stores and integrates a variety of data types (e.g. genomics, transcriptomics, protein-protein interactions (PPIs), three-dimensional protein structures and sequence typing data) and associated metadata. Data types are summarized for individual genomes and across taxonomic levels. All genomes, currently more than 10,000, are consistently annotated using RAST, the Rapid Annotation using Subsystem Technology. Summaries of different data types are also provided for individual genes, where comparisons of different annotations are available, and also include available transcriptomic data. PATRIC provides a variety of ways for researchers to find data of interest and a private workspace where they can store both genomic and gene associations, and their own private data. Both private and public data can be analyzed together using a suite of tools to perform comparative genomic or transcriptomic analysis. PATRIC also includes integrated information related to disease and PPIs. The PATRIC project includes three primary collaborators: the University of Chicago, the University of Manchester, and New City Media. The University of Chicago is providing genome annotations and a PATRIC end-user genome annotation service using their RAST system. The National Centre for Text Mining (NaCTeM) at the University of Manchester is providing literature-based text mining capability and service. New City Media is providing assistance in website interface development. An FTP server and download tool are available.
Proper citation: Pathosystems Resource Integration Center (RRID:SCR_004154)
http://smd.stanford.edu/cgi-bin/source/sourceSearch
SOURCE compiles information from several publicly accessible databases, including UniGene, dbEST, UniProt Knowledgebase, GeneMap99, RHdb, GeneCards and LocusLink. GO terms associated with LocusLink entries appear in SOURCE. The mission of SOURCE is to provide a unique scientific resource that pools publicly available data commonly sought after for any clone, GenBank accession number, or gene. SOURCE is specifically designed to facilitate the analysis of large sets of data that biologists can now produce using genome-scale experimental approaches. Platform: Online tool
Proper citation: SOURCE (RRID:SCR_005799)
http://cmr.jcvi.org/tigr-scripts/CMR/CmrHomePage.cgi
Database of all of the publicly available, complete prokaryotic genomes. In addition to having all of the organisms on a single website, common data types across all genomes in the CMR make searches more meaningful, and cross-genome analyses highlight differences and similarities between the genomes. CMR offers a wide variety of tools and resources, all of which are available from the menu bar at the top of each page:
* Genome Tools: Find organism lists as well as summary information and analyses for selected genomes.
* Searches: Search CMR for genes, genomes, sequence regions, and evidence.
* Comparative Tools: Compare multiple genomes based on a variety of criteria, including sequence homology and gene attributes. SNP data are also found under this menu.
* Lists: Select and download gene, evidence, and genomic element lists.
* Downloads: Download gene sequences or attributes for CMR organisms, or go to the FTP site.
* Carts: Select genome preferences from the Genome Cart or download your Gene Cart genes.
The Omniome is the relational database underlying the CMR; it holds all of the annotation for each of the CMR genomes, including DNA sequences, proteins, RNA genes, and many other types of features. Associated with each of these DNA features in the Omniome are the feature coordinates, nucleotide and protein sequences (where appropriate), and the DNA molecule and organism with which the feature is associated. Also available are evidence types associated with annotation, such as HMMs, BLAST, InterPro, COG, and Prosite, as well as individual gene attributes. In addition, the database stores identifiers from other centers such as GenBank and SwissProt, as well as manually curated information on each genome and each DNA molecule, including website links. Also stored in the Omniome are precomputed homology data, called All vs. All searches, used throughout the CMR for comparative analysis.
Proper citation: JCVI CMR (RRID:SCR_005398)
http://www.proteomexchange.org
A data repository for proteomic data sets. The ProteomeExchange consortium, as a whole, aims to provide a coordinated submission of MS proteomics data to the main existing proteomics repositories, as well as to encourage optimal data dissemination. ProteomeXchange provides access to a number of public databases, and users can access and submit data sets to the consortium's PRIDE database and PASSEL/PeptideAtlas.
Proper citation: ProteomeXchange (RRID:SCR_004055)
http://treebase.org/treebase-web/
Repository of phylogenetic information, specifically user-submitted phylogenetic trees and the data used to generate them. TreeBASE accepts all kinds of phylogenetic data (e.g., trees of species, trees of populations, trees of genes) representing all biotic taxa. Data in TreeBASE are exposed to the public if they are used in a publication that is in press or published in a peer-reviewed scientific journal, book, conference proceedings, or thesis. Data used in publications that are in preparation or in review can be submitted to TreeBASE but will not be available to the public until they have passed peer review.
Proper citation: TreeBASE (RRID:SCR_005688)
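Phylogenetic trees like those archived here are commonly serialized as Newick strings (often embedded in richer formats such as NEXUS; the helper below and its name are illustrative only, not TreeBASE's own tooling). A quick sketch for listing the leaf taxa of a simple Newick string:

```python
import re

def newick_leaves(newick):
    """Return the leaf (tip) labels of a simple Newick tree string."""
    # Remove branch lengths (":0.12") and the trailing semicolon.
    s = re.sub(r":[0-9.eE+-]+", "", newick.rstrip(";"))
    # Leaf labels directly follow '(' or ','; internal-node labels
    # follow ')' and are therefore not captured.
    return re.findall(r"[(,]\s*([^(),:;]+)", s)

newick_leaves("((Homo_sapiens,Pan_troglodytes),(Mus_musculus,Rattus_norvegicus));")
```

This handles only plain labels with optional branch lengths; quoted labels and comments, which full Newick allows, would need a real parser.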
The Global Biodiversity Information Facility (GBIF) was established by governments in 2001 to encourage free and open access to biodiversity data via the Internet. Through a global network of countries and organizations, GBIF promotes and facilitates the mobilization, access, discovery, and use of information about the occurrence of organisms over time and across the planet. GBIF provides three core services and products:
# An information infrastructure: an Internet-based index of a globally distributed network of interoperable databases that contain primary biodiversity data (information on museum specimens, field observations of plants and animals in nature, and results from experiments), so that data holders across the world can access and share them.
# Community-developed tools, standards, and protocols: the tools data providers need to format and share their data.
# Capacity-building: the training, access to international experts, and mentoring programs that national and regional institutions need to become part of a decentralized network of biodiversity information facilities.
GBIF and its many partners work to mobilize the data and to improve search mechanisms, data and metadata standards, web services, and the other components of an Internet-based information infrastructure for biodiversity. GBIF makes available data that are shared by hundreds of data publishers from around the world. These data are shared according to the GBIF Data Use Agreement, which includes the provision that users of any data accessed through or retrieved via the GBIF Portal will always give credit to the original data publishers.
* Explore Species: Find data for a species or other group of organisms. Information on species and other groups of plants, animals, fungi, and micro-organisms, including species occurrence records, as well as classifications and scientific and common names.
* Explore Countries: Find data on the species recorded in a particular country, territory, or island. Information on the species recorded in each country, including records shared by publishers from throughout the GBIF network.
* Explore Datasets: Find data from a data publisher, dataset, or data network. Information on the data publishers, datasets, and data networks that share data through GBIF, including summary information on 10,028 datasets from 419 data publishers.
Proper citation: GBIF - Global Biodiversity Information Facility (RRID:SCR_005904)
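Beyond the portal's Explore pages, GBIF occurrence data are also reachable through its public REST API at api.gbif.org (the endpoint and parameter names below follow GBIF's API documentation; the helper itself is only a sketch that builds a query URL without performing the request):

```python
from urllib.parse import urlencode

# GBIF API v1 occurrence search endpoint (documented by GBIF).
BASE = "https://api.gbif.org/v1/occurrence/search"

def occurrence_query(scientific_name, country=None, limit=20):
    """Build a GBIF occurrence-search URL; no network request is made."""
    params = {"scientificName": scientific_name, "limit": limit}
    if country:
        params["country"] = country  # ISO 3166-1 alpha-2 country code
    return BASE + "?" + urlencode(params)

url = occurrence_query("Puma concolor", country="US", limit=5)
```

Fetching the resulting URL returns JSON with a `results` list of occurrence records; pagination is driven by `limit` and an `offset` parameter.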
THIS RESOURCE IS NO LONGER IN SERVICE. Documented on April 15, 2025. Knowledge platform for human proteins that selects and filters high-throughput data pertinent to human proteins from UniProtKB, and extends UniProtKB/Swiss-Prot annotations to include several new data types.
Proper citation: neXtProt (RRID:SCR_008911)
http://harvester.fzk.de/harvester/
Harvester is a web-based tool that bulk-collects bioinformatic data on human proteins from various databases and prediction servers. It is a meta search engine for gene and protein information: it searches 16 major databases and prediction servers and combines the results on pregenerated HTML pages. In this way Harvester can provide comprehensive gene and protein information from different servers in a convenient and fast manner. As a full-text meta search engine, similar to Google, Harvester allows screening of the whole genome and proteome for current protein functions and predictions in a few seconds. With Harvester it is now possible to compare and check the quality of different database entries and prediction algorithms on a single page. Sponsors: This work has been supported by the BMBF with grants 01GR0101 and 01KW0013.
Proper citation: Bioinformatic Harvester IV (beta) at Karlsruhe Institute of Technology (RRID:SCR_008017)
Collection for the dissemination and exchange of recorded biomedical signals and of open-source software for analyzing them. Provides facilities for cooperative analysis of data and evaluation of proposed new algorithms, and free electronic access to PhysioBank data and PhysioToolkit software. Offers service and training via online tutorials to assist users at entry and more advanced levels. In cooperation with the annual Computing in Cardiology conference, PhysioNet hosts a series of challenges in which researchers and students address unsolved problems of clinical or basic scientific interest using data and software provided by PhysioNet. All data included in PhysioBank, and all software included in PhysioToolkit, are carefully reviewed. Researchers are invited to contribute data and software for review and possible inclusion in PhysioBank and PhysioToolkit. Please review the guidelines before submitting material.
Proper citation: PhysioNet (RRID:SCR_007345)