Searching the RRID Resource Information Network

SciCrunch Registry is a curated repository of scientific resources, with a focus on biomedical resources, including tools, databases, and core facilities - visit SciCrunch to register your resource.


On page 24, showing results 461-480 of 795.

http://centreforstrokerecovery.ca/our-research/research-structure/stroke-patient-recovery-research-database-spred

THIS RESOURCE IS NO LONGER IN SERVICE. Documented on January 28, 2025. The Stroke Patient Recovery Research Database (SPReD) initiative creates the infrastructure needed for the collection of a wide range of data related to stroke risk factors and to stroke recovery. It also promotes the analysis and management of large brain and vessel images. A major goal is to create a comprehensive electronic database, the Stroke Patient Recovery Research Database (SPReD), and populate it with patient data, including demographic, biomarker, genetic, proteomic, and imaging data. SPReD will enable us to combine descriptions of our stroke patients from multiple geographically distributed projects in a uniform fashion, in order to enhance our ability to document rates of recovery; to study the effects of vascular risk factors and inflammatory biomarkers; and to use these data to improve patients' physical and cognitive recovery through innovative intervention programs. This comprehensive database will provide an integrated repository of data with which our researchers will investigate and test original ideas, ultimately leading to knowledge that can be applied clinically to benefit stroke survivors.

Proper citation: Stroke Patient Recovery Research Database (SPReD) (RRID:SCR_005508)


http://www.epilepsygenes.org/page/show/homepage

The Epilepsy Genetic Association Database (epiGAD) is an online repository of data relating to genetic association studies in the field of epilepsy. It summarizes the results of both published and unpublished studies, and is intended as a tool for researchers in the field to keep abreast of recent studies, providing a bird's eye view of this research area. The goal of epiGAD is to collate all association studies in epilepsy in order to help researchers in this area identify all the available gene-disease associations. Finally, by including unpublished studies, it hopes to reduce the problem of publication bias and provide more accurate data for future meta-analyses. It is also hoped that epiGAD will foster collaboration between the different epilepsy genetics groups around the world, and facilitate formation of a network of investigators in epilepsy genetics. There are four databases within epiGAD: the susceptibility genes database, the epilepsy pharmacogenetics database, the meta-analysis database, and the genome-wide association studies (GWAS) database. The susceptibility genes database compiles all studies related to putative epilepsy susceptibility genes (e.g., interleukin-1-beta in TLE), while pharmacogenetics studies in epilepsy (e.g., ABCB1 studies) are stored in the pharmacogenetics database. The meta-analysis database compiles all existing published epilepsy genetic meta-analyses, whether for susceptibility genes or pharmacogenetics. The GWAS database is currently empty, but will be filled once GWAS are published. Sponsors: The epiGAD website is supported by the ILAE Genetics Commission.

Proper citation: Epilepsy Genetic Association Database (RRID:SCR_006840)


http://www.broadinstitute.org/annotation/tetraodon/

This database has been funded by the National Human Genome Research Institute (NHGRI) to produce shotgun sequence of the Tetraodon nigroviridis genome. The strategy involves Whole Genome Shotgun (WGS) sequencing, in which sequence from the entire genome is generated. Whole genome shotgun libraries were prepared from Tetraodon genomic DNA obtained from the laboratory of Jean Weissenbach at Genoscope. Additional sequence data of approximately 2.5X coverage of Tetraodon has also been generated by Genoscope in plasmid and BAC end reads. Broad and Genoscope intend to pool their data and generate whole genome assemblies. Tetraodon nigroviridis is a freshwater pufferfish of the order Tetraodontiformes and lives in the rivers and estuaries of Indonesia, Malaysia and India. This species is 20-30 million years distant from Fugu rubripes, a marine pufferfish from the same family. The gene repertoire of T. nigroviridis is very similar to that of other vertebrates. However, its relatively small genome of 385 Mb is eight times more compact than that of humans, mostly because intergenic and intronic sequences are reduced in size compared to other vertebrate genomes. These genome characteristics along with the large evolutionary distance between bony fish and mammals make Tetraodon a compact vertebrate reference genome - a powerful tool for comparative genetics and for quick and reliable identification of human genes.

Proper citation: Tetraodon nigroviridis Database (RRID:SCR_007123)


  • RRID:SCR_008148

    This resource has 10+ mentions.

https://wiki.cgb.indiana.edu/display/DGC/Home

The Daphnia Genomics Consortium (DGC) is an international network of investigators committed to establishing the freshwater crustacean Daphnia as a model system for ecology, evolution and the environmental sciences. Along with research activities, the DGC is: (1) coordinating efforts towards developing the Daphnia genomic toolbox, which will then be available for use by the general community; (2) facilitating collaborative cross-disciplinary investigations; (3) developing bioinformatic strategies for organizing the rapidly growing genome database; and (4) exploring emerging technologies to improve high throughput analyses of molecular and ecological samples. If we are to succeed in creating a new model system for modern life-sciences research, it will need to be a community-wide effort. Research activities of the DGC are primarily focused on creating genomic tools and information. When completed, the current projects will offer a first view of the Daphnia genome's topography, including regions of high and low recombination, the distribution of transposable, repetitive and regulatory elements, and the size and structure of genes and of their neighborhoods. This information is crucial in formulating testable hypotheses relating genetics and demographics to the evolutionary potential or constraints of natural populations. Projects aiming to compile identifiable genes with their function are also underway, together with robust methods to verify these findings. Finally, these tools are being tested by exploring their uses in key ecological and toxicological investigations. Each project benefits from the leadership and expertise of many individuals. For further details, begin by contacting the project directors. The DGC consists of biologists from a broad spectrum of subdisciplines, including limnology, ecotoxicology, quantitative and population genetics, systematics, molecular biology and evolution, developmental biology, genomics and bioinformatics. In many regards, the rapid early success of the consortium results from its grass-roots origin promoting an international composition, under a cooperative model, with significant scientific breadth. We hold to this approach in building this network and encourage more people to participate. All the while, the DGC is structured to effectively reach specific goals. The consortium includes an advisory board (composed of experts of the various subdisciplines), whose responsibility is to act as the research community's agent in guiding the development of Daphnia genomic resources. The advisors communicate directly to DGC members, who are either contributing genomic tools or actively seeking funds for this function. The consortium's main body (given the widespread interest in applying genomic tools in environmental studies) consists of the affiliates, who make use of these tools for their research and who are soliciting support.

Proper citation: Daphnia genomics consortium (RRID:SCR_008148)


  • RRID:SCR_007974

    This resource has 10+ mentions.

http://www.genepath.org/

GenePath is a web-enabled intelligent assistant for the analysis of genetic data and for discovery of genetic networks. GenePath uses abductive inference to elucidate network constraints and logic to derive consistent networks. Typically, it starts with a set of genetic experiments, uses a set of embedded rules (patterns) to infer relations between genes and outcome, and based on these relations constructs a genetic network.

Proper citation: GenePath (RRID:SCR_007974)
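
The entry above describes the inference procedure only in general terms, so the following Python fragment is a small, hypothetical sketch of how rule-based reasoning over genetic perturbation experiments can work: single-mutant phenotypes relate genes to the outcome, and a classic epistasis rule orders pairs of genes. The experiment encoding and the single rule shown here are illustrative assumptions and do not reproduce GenePath's actual rule set, data formats, or interface.

    # Hypothetical sketch of rule-based inference from genetic experiments,
    # in the spirit of the approach described above. The encoding of the
    # experiments and the single epistasis rule are illustrative assumptions,
    # not GenePath's actual rules.

    # Each experiment: which genes are knocked out, and the observed outcome
    # relative to wild type (-1 = decreased, 0 = unchanged, +1 = increased).
    experiments = [
        ({"A"}, +1),        # loss of A increases the outcome -> A inhibits it
        ({"B"}, -1),        # loss of B decreases the outcome -> B promotes it
        ({"A", "B"}, -1),   # the double mutant looks like the B single mutant
    ]

    def single_mutant_effects(experiments):
        """Relate each gene to the outcome from single-knockout experiments."""
        effects = {}
        for knocked_out, outcome in experiments:
            if len(knocked_out) == 1:
                (gene,) = knocked_out
                # If removing the gene lowers the outcome, the gene promotes it.
                effects[gene] = "promotes" if outcome < 0 else "inhibits" if outcome > 0 else "no effect"
        return effects

    def epistasis_order(experiments):
        """Classic epistasis rule: if the double mutant phenocopies one single
        mutant, the phenocopied gene acts downstream of the other."""
        singles = {next(iter(k)): o for k, o in experiments if len(k) == 1}
        relations = []
        for knocked_out, outcome in experiments:
            if len(knocked_out) == 2:
                a, b = sorted(knocked_out)
                if outcome == singles.get(b) != singles.get(a):
                    relations.append((a, "acts upstream of", b))
                elif outcome == singles.get(a) != singles.get(b):
                    relations.append((b, "acts upstream of", a))
        return relations

    print(single_mutant_effects(experiments))   # {'A': 'inhibits', 'B': 'promotes'}
    print(epistasis_order(experiments))         # [('A', 'acts upstream of', 'B')]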


http://www.type2diabetesgenetics.org/

Portal and database of DNA sequence, functional and epigenomic information, and clinical data from studies on type 2 diabetes, together with analytic tools to analyze these data. Provides data and tools to promote understanding and treatment of type 2 diabetes and its complications. Used for identifying genetic biomarkers correlated with type 2 diabetes and for development of novel drugs for this disease.

Proper citation: Accelerating Medicines Partnership Type 2 Diabetes Knowledge Portal (AMP-T2D) (RRID:SCR_003743)


http://coot.embl.de/g2d/

THIS RESOURCE IS NO LONGER IN SERVICE, documented August 22, 2016. A database of candidate genes for mapped inherited human diseases. Candidate priorities are automatically established by a data mining algorithm that extracts putative genes in the chromosomal region where the disease is mapped, and evaluates their possible relation to the disease based on the phenotype of the disorder. Data analysis uses a scoring system developed for the possible functional relations of human genes to genetically inherited diseases that have been mapped onto chromosomal regions without assignment of a particular gene. The methodology can be divided into two parts: the association of genes to phenotypic features, and the identification of candidate genes in a chromosomal region by homology. This is an analysis of relations between phenotypic features and chemical objects, and from chemical objects to protein function terms, based on the whole MEDLINE and RefSeq databases.

Proper citation: Candidate Genes to Inherited Diseases (RRID:SCR_008190)


http://www.anim.med.kyoto-u.ac.jp/nbr/default.aspx

NBRP-Rat was established to overcome limitations associated with properly utilizing existing rat resources. The collection of existing strains and genetic substrains, phenotypic and genotypic characterization, cryopreservation of embryos, distribution of the collected rat strains, and a publicly accessible database of all assembled data are the major goals of this project. Once achieved, this unique database including the unique rat strains will become a powerful tool for biomedical research. A catalog of comparable, standardized and well characterized rat strains will lead to new and more precise research topics, facilitate biomedical sciences, drug discovery and advanced chemical research, and contribute to life sciences worldwide. As mentioned before, the major goals of NBRP-Rat are the collection, preservation and supply of rat strains. The repository includes strains from Japan and abroad, spontaneous mutants, congenic and recombinant strains as well as transgenic and mutagenized rats. Deposited rat strains are not only conserved as cryopreserved embryos and sperm; many reference and frequently used rat strains are also maintained as living animals under SPF conditions. Furthermore, NBRP-Rat provides a unique database on various rat strain phenotypes accompanied by basic genetic information. This allows scientists to select standardized and research-specific strains. The animals themselves are provided free of charge to the research community (except for shipping costs). Sponsors: This project is one part of the National BioResource Projects (NBRP) in Japan for more than 20 species including animals, plants, microbes, tissues and DNAs. It is funded by the Japanese Ministry of Education, Culture, Sports, Science and Technology (Monkasho) and started in 2002.

Proper citation: National Bio Resource Project for the Rat. (RRID:SCR_012774)


http://ccb.loni.usc.edu/

THIS RESOURCE IS NO LONGER IN SERVICE. Documented on August 31, 2022. Center focused on the development of computational biological atlases of different populations, subjects, modalities, and spatio-temporal scales with 3 types of resources: (1) Stand-alone computational software tools (image and volume processing, analysis, visualization, graphical workflow environments). (2) Infrastructure Resources (Databases, computational Grid, services). (3) Web-services (web-accessible resources for processing, validation and exploration of multimodal/multichannel data including clinical data, imaging data, genetics data and phenotypic data). The CCB develops novel mathematical, computational, and engineering approaches to map biological form and function in health and disease. CCB computational tools integrate neuroimaging, genetic, clinical, and other relevant data to enable the detailed exploration of distinct spatial and temporal biological characteristics. Generalizable mathematical approaches are developed and deployed using Grid computing to create practical biological atlases that describe spatiotemporal change in biological systems. The efforts of CCB make possible discovery-oriented science and the accumulation of new biological knowledge. The Center has been divided into cores organized as follows: - Core 1 is focused on mathematical and computational research. Core 2 is involved in the development of tools to be used by Core 3. Core 3 is composed of the driving biological projects: Mapping Genomic Function, Mapping Biological Structure, and Mapping Brain Phenotype. - Cores 4 - 7 provide the infrastructure for joint structure within the Center as well as the development of new approaches and procedures to augment the research and development of Cores 1-3. These cores are: (4) Infrastructure and Resources, (5) Education and Training, (6) Dissemination, and (7) Administration and Management. The main focus of the CCB is on the brain, and specifically on neuroimaging. This area has a long tradition of sophisticated mathematical and computational techniques. Nevertheless, new developments in related areas of mathematics and computational science have emerged in recent years, some from related application areas such as Computer Graphics, Computer Vision, and Image Processing, as well as from Computational Mathematics and the Computational Sciences. We are confident that many of these ideas can be applied beneficially to neuroimaging.

Proper citation: Center for Computational Biology at UCLA (RRID:SCR_000334)


http://www.semel.ucla.edu/creativity/

The purpose of this center is to study the molecular, cellular, systems and cognitive mechanisms that result in cognitive enhancements and explain unusual levels of performance in gifted individuals, including extraordinary creativity. Additionally, by understanding the mechanisms responsible for enhancements in performance we may be better suited to intervene and reverse disease states that result in cognitive deficits. One of the key topics addressed by the Center is the biological basis of cognitive enhancements, a topic that can be studied in human subjects and animal models. In the past much of the focus in the brain sciences has been on the study of brain mechanisms that degrade cognitive performance (for example, on mutations or other lesions that cause cognitive deficits). The Tennenbaum Center for the Biology of Creativity at UCLA enables an interdisciplinary team of leading scientists to advance knowledge about the biological bases of creativity. Starting with a pilot project program, a series of investigations was launched, spanning disciplines from basic molecular biology to cognitive neuroscience. Because the concept of creativity is multifaceted, initial efforts targeted refinement of the component processes necessary to generate novel, useful cognitive products. The identified core cognitive processes are: (1) Novelty Generation, the ability to flexibly and adaptively generate products that are unique; (2) Working Memory and Declarative Memory, the ability to maintain, and then use, relevant information to guide goal-directed performance, along with the capacity to store and retrieve this information; and (3) Response Inhibition, the ability to suppress habitual plans and substitute alternate actions in line with changing problem-solving demands. To study the basic mechanisms underlying these complex brain functions we use translational strategies. Starting from foundational studies in basic neuroscience, we forged an interdisciplinary strategy that permits the most advanced techniques for genetic manipulation and basic neurobiological research to be applied in close collaboration with human studies that converge on the same core cognitive processes. Our integrated research program aims to reveal the genetic architecture and fundamental brain mechanisms underlying creative cognition. The work holds enormous promise for both enhancing healthy cognitive performance and designing new treatments for diverse cognitive disorders. Sponsors: The Tennenbaum Center for the Biology of Creativity was inspired by the vision and generosity of Michael Tennenbaum.

Proper citation: Tennenbaum Center for the Biology of Creativity (RRID:SCR_000668)


  • RRID:SCR_000689

    This resource has 100+ mentions.

http://soap.genomics.org.cn/

Software package that provides a full solution to next generation sequencing data analysis, consisting of an alignment tool (SOAPaligner/soap2), a re-sequencing consensus sequence builder (SOAPsnp), an indel finder (SOAPindel), a structural variation scanner (SOAPsv), a de novo short reads assembler (SOAPdenovo), and a GPU-accelerated alignment tool for aligning short reads with a reference sequence (SOAP3/GPU). THIS RESOURCE IS NO LONGER IN SERVICE. Documented on September 16, 2025.

Proper citation: SOAP (RRID:SCR_000689)


  • RRID:SCR_003924

    This resource has 10+ mentions.

http://www.tidebc.org/

A collaborative care and research initiative with a focus on prevention and treatment of intellectual disability (ID) that is due to inborn errors of metabolism (IEM), which can be treated with diet or drugs. Health care policy and institutional culture are still operating under the old premise that all ID is incurable and thus, many children born with treatable ID are at risk of not being treated. To acknowledge the multidisciplinary scope and the ways in which health care professionals and researchers will collaborate, the goals of the TIDE BC project are demonstrated within a framework of 7 Work Packages: * Implementation of a new Protocol for diagnostic evaluation of ID, focusing on treatable conditions; * Development of infrastructure to facilitate implementation, evaluation and sustainability of the Protocol; * Investments into next generation genomic technologies; * Improving evidence of and access to treatments; * Evaluation and health economy; * Knowledge dissemination; * Education and Mentoring. The objectives addressed in all Work Packages reflect a highly integrated cluster combining clinical care, research, evaluation, and knowledge dissemination.

Proper citation: TIDE BC (RRID:SCR_003924)


  • RRID:SCR_006722

    This resource has 1+ mentions.

http://www.zfatlas.psu.edu/

Atlas containing 2- and 3-dimensional, anatomical reference slides of the lifespan of the zebrafish to support research and education worldwide. Hematoxylin and eosin histological slides, at various points in the lifespan of the zebrafish, have been scanned at 40x resolution and are available through a virtual slide viewer. 3D models of the organs are reconstructed from plastic tissue sections of embryo and larvae. The size of the zebrafish, which allows sections to fall conveniently within the dimensions of the common 1 x 3 inch glass slide, makes it possible for this anatomical atlas to be as high resolution as that of any vertebrate. That resolution, together with the integration of histology and organ anatomy, will create unique opportunities for comparisons with both smaller and larger model systems that each have their own strengths in research and educational value. The atlas team is working to allow the site to function as a scaffold for collaborative research and educational activity across disciplines and model organisms. The Zebrafish Atlas was created to answer a community call for a comprehensive, web-based, anatomical and pathological atlas of the zebrafish, which has become one of the most widely used vertebrate animal models globally. The experimental strengths of zebrafish as a model system have made it useful for a wide range of investigations addressing the missions of the NIH and NSF. The Zebrafish Atlas provides reference slides for virtual microscopic viewing of the zebrafish using an Internet browser. Virtual slide technology allows the user to choose their own field of view and magnification, and to consult labeled histological sections of zebrafish. We are planning to include a complete set of embryos, larvae, juveniles, and adults from approximately 25 different ages. Future work will also include a variety of comparisons (e.g. normal vs. mutant, normal vs. diseased, multiple stages of development, zebrafish with other organisms, and different types of cancer). THIS RESOURCE IS NO LONGER IN SERVICE. Documented on September 16, 2025.

Proper citation: Zebrafish Atlas (RRID:SCR_006722)


  • RRID:SCR_006312

    This resource has 100+ mentions.

https://cran.r-project.org/web/packages/LDheatmap/index.html

Software application that plots measures of pairwise linkage disequilibrium between SNPs (entry from Genetic Analysis Software).

Proper citation: LDHEATMAP (RRID:SCR_006312)
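
LDheatmap itself is an R package, so its exact calls are not reproduced here; the Python sketch below only illustrates the quantity such a heatmap displays, pairwise r^2 between SNPs computed from phased haplotypes, using a made-up toy matrix and matplotlib for the plot.

    # Rough Python analogue of what an LD heatmap shows: pairwise r^2 between
    # SNPs computed from phased haplotypes (0/1 alleles), drawn as a matrix.
    # The toy haplotype matrix is made up for illustration; the LDheatmap
    # package itself is an R library with its own input formats.
    import numpy as np
    import matplotlib.pyplot as plt

    # rows = haplotypes, columns = SNPs (0 = reference allele, 1 = alternate)
    haplotypes = np.array([
        [0, 0, 1, 1],
        [0, 0, 1, 0],
        [1, 1, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 1, 1],
        [1, 1, 0, 0],
    ])

    def pairwise_r2(h):
        """r^2 = D^2 / (pA(1-pA) pB(1-pB)), with D = pAB - pA*pB."""
        n_snps = h.shape[1]
        r2 = np.ones((n_snps, n_snps))
        freqs = h.mean(axis=0)
        for i in range(n_snps):
            for j in range(i + 1, n_snps):
                p_ab = np.mean((h[:, i] == 1) & (h[:, j] == 1))
                d = p_ab - freqs[i] * freqs[j]
                denom = freqs[i] * (1 - freqs[i]) * freqs[j] * (1 - freqs[j])
                r2[i, j] = r2[j, i] = d**2 / denom if denom > 0 else 0.0
        return r2

    ld = pairwise_r2(haplotypes)
    plt.imshow(ld, vmin=0, vmax=1, cmap="Reds")
    plt.colorbar(label="r$^2$")
    plt.title("Pairwise LD (toy data)")
    plt.savefig("ld_heatmap.png")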


  • RRID:SCR_009154

    This resource has 1000+ mentions.

http://wpicr.wpic.pitt.edu/WPICCompGen/hclust/hclust.htm

Software application implementing a simple clustering method that can be used to rapidly identify a set of tag SNPs based upon genotype data (entry from Genetic Analysis Software). THIS RESOURCE IS NO LONGER IN SERVICE. Documented on September 16, 2025.

Proper citation: HCLUST (RRID:SCR_009154)
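
Since the HCLUST program is no longer available, the sketch below illustrates the general technique the entry describes, tag SNP selection by hierarchical clustering on pairwise LD, rather than the program itself; the 1 - r^2 distance, complete linkage, and the 0.2 cut-off are assumptions chosen for the example.

    # Hypothetical sketch of tag SNP selection by hierarchical clustering
    # (not the HCLUST program itself). SNPs are clustered on the distance
    # 1 - r^2, and one SNP per cluster is kept as a tag. The 0.2 threshold
    # (i.e. r^2 >= 0.8 within a cluster) is an assumption for illustration.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def pick_tag_snps(r2, threshold=0.2):
        """Cluster SNPs so that members of a cluster are in strong LD, then
        return one representative (tag) SNP index per cluster."""
        dist = 1.0 - r2
        np.fill_diagonal(dist, 0.0)
        condensed = squareform(dist, checks=False)    # condensed distance vector
        tree = linkage(condensed, method="complete")  # complete-linkage clustering
        labels = fcluster(tree, t=threshold, criterion="distance")
        tags = {}
        for snp_index, cluster_id in enumerate(labels):
            tags.setdefault(cluster_id, snp_index)    # keep first SNP in each cluster
        return sorted(tags.values())

    # Toy r^2 matrix: SNPs 0 and 1 are in near-perfect LD, SNP 2 is independent.
    r2 = np.array([
        [1.00, 0.95, 0.05],
        [0.95, 1.00, 0.10],
        [0.05, 0.10, 1.00],
    ])
    print(pick_tag_snps(r2))   # expected: one tag for {0, 1}, plus SNP 2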


http://www.cdc.gov/genomics/hugenet/default.htm

Human Genome Epidemiology Network, or HuGENet, is a global collaboration of individuals and organizations committed to the assessment of the impact of human genome variation on population health and how genetic information can be used to improve health and prevent disease. Its goals include: establishing an information exchange that promotes global collaboration in developing peer-reviewed information on the relationship between human genomic variation and health and on the quality of genetic tests for screening and prevention; providing training and technical assistance to researchers and practitioners interested in assessing the role of human genomic variation on population health and how such information can be used in practice; developing an updated and accessible knowledge base on the World Wide Web; and promoting the use of this knowledge base by health care providers, researchers, industry, government, and the public for making decisions involving the use of genetic information for disease prevention and health promotion. HuGENet collaborators come from multiple disciplines such as epidemiology, genetics, clinical medicine, policy, public health, education, and biomedical sciences. Currently, there are 4 HuGENet Coordinating Centers for the implementation of HuGENet activities: CDC's Office of Public Health Genomics, Atlanta, Georgia; HuGENet UK Coordinating Center, Cambridge, UK; University of Ioannina, Greece; University of Ottawa, Ottawa, Canada. HuGENet includes: HuGE e-Journal Club: The HuGE e-Journal Club is an electronic discussion forum where new human genome epidemiologic (HuGE) findings, published in the scientific literature in the CDC's Office of Public Health Genomics Weekly Update, will be abstracted, summarized, presented, and discussed via a newly created HuGENet listserv. HuGE Reviews: A HuGE Review identifies human genetic variations at one or more loci, and describes what is known about the frequency of these variants in different populations, identifies diseases that these variants are associated with and summarizes the magnitude of risks and associated risk factors, and evaluates associated genetic tests. Reviews point to gaps in existing epidemiologic and clinical knowledge, thus stimulating further research in these areas. HuGE Fact Sheets: HuGE Fact Sheets summarize information about a particular gene, its variants, and associated diseases. HuGE Case Studies: An on-line presentation designed to sharpen your epidemiological skills and enhance your knowledge on genomic variation and human diseases. Its purpose is to train health professionals in the practical application of human genome epidemiology (HuGE), which translates gene discoveries to disease prevention by integrating population-based data on gene-disease relationships and interventions. Students will acquire conceptual and practical tools for critically evaluating the growing scientific literature in specific disease areas. HUGENet Publications: Articles related to the HuGENet movement written by our HuGENet collaborators. HuGE Navigator: An integrated, searchable knowledge base of genetic associations and human genome epidemiology, including information on population prevalence of genetic variants, gene-disease associations, gene-gene and gene-environment interactions, and evaluation of genetic tests. HuGE Workshops: HuGENet has sponsored meetings and workshops with national and international partners since 2001. Available are detailed summaries, agendas or the ability to download speaker slides.
HuGE Book: Human Genome Epidemiology: A Scientific Foundation for Using Genetic Information to Improve Health and Prevent Disease. (The findings and conclusions in this book are those of the author(s) and do not necessarily represent the views of the funding agency.) HuGENet Collaborators: HuGENet is interested in establishing collaborations with individuals and organizations working on population based research involving genetic information. HuGE Funding: Funding opportunities for specific population-based genetic epidemiology research projects are available. Research initiatives whose aims include assessing the prevalence of human genetic variation, the association between genetic variants and human diseases, the measurement of gene-gene or gene-environment interaction, and the evaluation of genetic tests for screening and prevention are compiled to create a posted listing. Additional information and application details can be found by clicking on the respective links.

Proper citation: Human Genome Epidemiology Network (RRID:SCR_013117)


https://cmmt.ubc.ca/

Center is part of University of British Columbia Faculty of Medicine, located at British Columbia Children's Hospital Research Institute (BCCHR) in Vancouver, British Columbia, Canada. Research at CMMT is focused on discovering genetic susceptibility to illnesses such as Huntington Disease, Type 2 diabetes and bipolar disorder.

Proper citation: University of British Columbia Centre for Molecular Medicine and Therapeutics (RRID:SCR_017241)


  • RRID:SCR_017307

    This resource has 100+ mentions.

https://www.beast2.org/

Software package for advanced Bayesian evolutionary analysis by sampling trees. Used for phylogenetics, population genetics and phylodynamics. Program for Bayesian phylogenetic analysis of molecular sequences. Estimates rooted, time-measured phylogenies using strict or relaxed molecular clock models. Framework can be extended by third parties. Comprised of standalone programs including BEAUti, BEAST, MASTER, RBS, SNAPP, MultiTypeTree, BDSKY, LogAnalyser, LogCombiner, TreeAnnotator, DensiTree and a package manager.

Proper citation: BEAST2 (RRID:SCR_017307)


https://deepblue.mpi-inf.mpg.de/

Central data access hub for large collections of epigenomic data. It organizes data from different sources using controlled vocabularies and ontologies. Data Server for storing, organizing, searching, and retrieving genomic and epigenomic data, handling associated metadata, and performing different types of analysis.

Proper citation: Deep Blue Epigenomic Data Server (RRID:SCR_017490)
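
DeepBlue is accessed programmatically through an XML-RPC interface; the minimal Python sketch below assumes the endpoint URL, method names (echo, list_genomes), and anonymous user key as they appear in the service's published documentation, so treat those specifics as assumptions rather than a verified, current API (the server may also no longer respond).

    # Minimal sketch of programmatic access to the DeepBlue server via XML-RPC.
    # The endpoint URL, method names, and the anonymous user key follow the
    # service's published documentation but should be treated as assumptions
    # here; the server may be unavailable.
    import xmlrpc.client

    DEEPBLUE_URL = "http://deepblue.mpi-inf.mpg.de/xmlrpc"   # documented endpoint (assumed)
    USER_KEY = "anonymous_key"                               # public read-only key (assumed)

    server = xmlrpc.client.ServerProxy(DEEPBLUE_URL, allow_none=True)

    # Most DeepBlue commands return a (status, result) pair.
    status, message = server.echo(USER_KEY)        # sanity check: reports the server version
    print(status, message)

    status, genomes = server.list_genomes(USER_KEY)
    if status == "okay":
        # Each entry is typically an (id, name) pair describing an available genome assembly.
        for genome in genomes[:5]:
            print(genome)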


  • RRID:SCR_017636

    This resource has 100+ mentions.

http://taylor0.biology.ucla.edu/structureHarvester/

Web-based program for collating results generated by the program STRUCTURE. Provides a way to assess and visualize likelihood values across multiple values of K and hundreds of iterations, for easier detection of the number of genetic groups that best fit the data. Reformats data for use in downstream programs, such as CLUMPP. It complements the STRUCTURE software in population genetics. Website and program for visualizing STRUCTURE output and implementing the Evanno method. THIS RESOURCE IS NO LONGER IN SERVICE. Documented on September 16, 2025.

Proper citation: Structure Harvester (RRID:SCR_017636)
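
The Evanno method mentioned above selects the number of genetic groups K from the mean log-likelihoods of replicate STRUCTURE runs, as deltaK = mean(|L(K+1) - 2 L(K) + L(K-1)|) / sd(L(K)). The Python sketch below is a conceptual illustration of that statistic with made-up log-likelihood values; it is not Structure Harvester's own code.

    # Conceptual sketch of the Evanno delta-K statistic that Structure Harvester
    # reports: deltaK = mean(|L(K+1) - 2*L(K) + L(K-1)|) / sd(L(K)), computed
    # from replicate STRUCTURE runs for each K. The values below are made up.
    import statistics

    # ln P(D) from replicate runs, keyed by K (toy numbers).
    lnP = {
        1: [-5200.1, -5201.4, -5199.8],
        2: [-4310.5, -4312.0, -4309.9],
        3: [-4295.2, -4297.8, -4301.1],
        4: [-4293.9, -4299.5, -4305.2],
    }

    def evanno_delta_k(lnP):
        ks = sorted(lnP)
        means = {k: statistics.mean(v) for k, v in lnP.items()}
        sds = {k: statistics.stdev(v) for k, v in lnP.items()}
        delta = {}
        for k in ks[1:-1]:          # deltaK is undefined at the smallest and largest K
            second_diff = abs(means[k + 1] - 2 * means[k] + means[k - 1])
            delta[k] = second_diff / sds[k]
        return delta

    for k, dk in evanno_delta_k(lnP).items():
        print(f"K={k}  deltaK={dk:.1f}")
    # The K with the largest deltaK is taken as the best-supported number of groups.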



Can't find your Tool?

We recommend that you first click next to the search bar to check some helpful tips on searching, and refine your search. Alternatively, please register your tool with the SciCrunch Registry by adding a little information to a web form; logging in will let you create a provisional RRID, but it is not required to submit.

Can't find the RRID you're searching for?
  1. Neuroscience Information Framework Resources

    Welcome to the NIF Resources search. From here you can search through a compilation of resources used by NIF and see how data is organized within our community.

  2. Navigation

You are currently on the Community Resources tab, looking through categories and sources that NIF has compiled. You can navigate through those categories from here or switch to a different tab to run your search through it. Each tab gives a different perspective on the data.

  3. Logging in and Registering

If you have an account on NIF then you can log in from here to get additional features in NIF such as Collections, Saved Searches, and Resource management.

  4. Searching

Here is the search term that is being executed; you can type in anything you want to search for. Some tips to help with searching:

    1. Use quotes around phrases you want to match exactly
    2. You can manually AND and OR terms to change how we search between words
    3. You can add "-" to terms to make sure no results return with that term in them (ex. Cerebellum -CA1)
    4. You can add "+" to terms to require they be in the data
Using autocomplete specifies which branch of our semantics you wish to search and can help refine your search
  5. Save Your Search

You can save any searches you perform here for quick access later.

  6. Query Expansion

We recognized your search term and included synonyms and inferred terms alongside your term to help get the data you are looking for.

  7. Collections

    If you are logged into NIF you can add data records to your collections to create custom spreadsheets across multiple sources of data.

  8. Sources

Here are the sources that were queried in your search; you can investigate them further.

  9. Categories

Here are the categories present within NIF that you can filter your data on.

  10. Subcategories

Here are the subcategories present within this category that you can filter your data on.

  11. Further Questions

    If you have any further questions please check out our FAQs Page to ask questions and see our tutorials. Click this button to view this tutorial again.
