SciCrunch Registry is a curated repository of scientific resources, with a focus on biomedical resources, including tools, databases, and core facilities - visit SciCrunch to register your resource.
http://www.bios.unc.edu/research/genomic_software/Matrix_eQTL/
Software tool for ultra-fast eQTL analysis via large matrix operations.
Proper citation: MatrixEQTL (RRID:SCR_025513)
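The "large matrix operations" idea behind MatrixEQTL can be illustrated outside the package: once the genotype and expression matrices are row-standardized, every SNP-gene correlation (and from it a t-statistic) comes out of a single matrix product. The Python sketch below is a conceptual toy illustration, not the MatrixEQTL R interface; the dimensions and random data are assumptions.

    # Conceptual sketch: all SNP-gene association statistics from one matrix product.
    import numpy as np

    rng = np.random.default_rng(0)
    n_snps, n_genes, n_samples = 500, 200, 80                         # toy sizes (assumed)
    G = rng.integers(0, 3, size=(n_snps, n_samples)).astype(float)    # genotypes coded 0/1/2
    E = rng.normal(size=(n_genes, n_samples))                         # expression levels

    def standardize(M):
        # Center and scale each row to zero mean and unit variance.
        M = M - M.mean(axis=1, keepdims=True)
        return M / M.std(axis=1, keepdims=True)

    r = standardize(G) @ standardize(E).T / n_samples   # n_snps x n_genes correlations
    df = n_samples - 2
    t = r * np.sqrt(df / (1.0 - r**2))                  # t-statistic for every SNP-gene pair
    print(t.shape)                                      # (500, 200)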
https://brainlife.io/docs/using_ezBIDS/
Web-based tool that converts neuroimaging data and associated metadata to the BIDS standard. Provides guided standardization of neuroimaging data, interoperable with major data archives and platforms.
Proper citation: ezBIDS (RRID:SCR_025563)
https://bioconductor.org/packages/release/bioc/html/SomaticSignatures.html
Software R package for identifying mutational signatures of single nucleotide variants (SNVs) from high-throughput experiments.
Proper citation: SomaticSignatures (RRID:SCR_025620)
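Signature discovery of this kind is, at its core, a non-negative matrix factorization of a mutation-context count matrix (for SNVs, commonly 96 trinucleotide contexts by samples). The Python sketch below shows that decomposition with scikit-learn on toy counts; it is not the SomaticSignatures R API, and the matrix sizes and number of signatures are assumptions.

    # Conceptual sketch: factor a contexts-by-samples count matrix into
    # signatures (context profiles) and exposures (per-sample contributions).
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(1)
    counts = rng.poisson(5, size=(96, 30)).astype(float)   # toy 96 contexts x 30 samples

    model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
    signatures = model.fit_transform(counts)               # 96 x 4 context profiles
    exposures = model.components_                          # 4 x 30 per-sample contributions
    signatures = signatures / signatures.sum(axis=0)       # normalize each signature to sum to 1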
http://alchemy.sourceforge.net/
ALCHEMY is a genotype calling algorithm for Affymetrix and Illumina products that is not based on clustering methods. Features include explicit handling of reduced heterozygosity due to inbreeding and accurate results with small sample sizes. ALCHEMY is a method for automated calling of diploid genotypes from raw intensity data produced by various high-throughput multiplexed SNP genotyping methods. It has been developed for and tested on Affymetrix GeneChip Arrays, Illumina GoldenGate, and Illumina Infinium based assays. The primary motivations for ALCHEMY's development were the lack of available genotype calling methods that perform well in the absence of heterozygous samples (e.g., when panels of inbred lines are genotyped) or that provide accurate calls with small sample batches. ALCHEMY differs from other genotype calling methods in that genotype inference is based on a parametric Bayesian model of the raw intensity data rather than a generalized clustering approach, and the model incorporates population genetic principles such as Hardy-Weinberg equilibrium adjusted for inbreeding levels. ALCHEMY can simultaneously estimate individual sample inbreeding coefficients from the data and use them to improve statistical inference of diploid genotypes at individual SNPs. The main documentation for ALCHEMY is maintained on the SourceForge-hosted MediaWiki system. Features:
* Population genetic model based SNP genotype calling
* Simultaneous estimation of per-sample inbreeding coefficients, allele frequencies, and genotypes
* Bayesian model provides posterior probabilities of genotype correctness as quality measures
* Growing number of scripts and supporting programs for validation of genotypes against control data and output reformatting needs
* Multithreaded program for parallel execution on multi-CPU/core systems
* Non-clustering based methods can handle small sample sets for empirical optimization of sample preparation techniques and accurate calling of SNPs missing genotype classes
ALCHEMY is written in C and developed on the GNU/Linux platform. It should compile on any current GNU/Linux distribution with the development packages for the GNU Scientific Library (gsl) and other development packages for standard system libraries. It may also compile and run on Mac OS X if gsl is installed.
Proper citation: ALCHEMY (RRID:SCR_005761)
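The population-genetic core of the model described above fits in a few lines: a Hardy-Weinberg genotype prior adjusted by an inbreeding coefficient F, multiplied by per-genotype intensity likelihoods and normalized into posterior probabilities that double as quality measures. The Python sketch below is a minimal illustration of that idea, not ALCHEMY's C implementation; the allele frequency, F, and likelihood values are hypothetical.

    # Minimal sketch of an inbreeding-adjusted Bayesian genotype call.
    import numpy as np

    def genotype_prior(p, F):
        # P(AA), P(AB), P(BB) under Hardy-Weinberg equilibrium with inbreeding coefficient F.
        q = 1.0 - p
        return np.array([p*p + F*p*q, 2*p*q*(1.0 - F), q*q + F*p*q])

    def genotype_posterior(likelihoods, p, F):
        # likelihoods: P(raw intensity | AA), P(raw intensity | AB), P(raw intensity | BB)
        post = genotype_prior(p, F) * np.asarray(likelihoods, dtype=float)
        return post / post.sum()

    # Hypothetical numbers: allele frequency 0.7, heavily inbred sample (F = 0.9),
    # intensity likelihoods mildly favoring the heterozygote.
    print(genotype_posterior([0.2, 0.5, 0.3], p=0.7, F=0.9))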
http://www.nitrc.org/projects/efficient_pt
A Matlab implementation of efficient permutation testing using matrix completion.
Proper citation: Efficient Permutation Testing (RRID:SCR_014104)
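For orientation, the computation being accelerated is the ordinary max-statistic permutation test, which builds a permutations-by-voxels matrix of test statistics to obtain family-wise-error-corrected p-values; the matrix-completion trick recovers most of that matrix from a subsample of its entries instead of computing every one. The Python sketch below shows only the plain baseline test on toy data (all sizes assumed), not the Matlab tool or the completion step.

    # Baseline max-statistic permutation test for a two-group voxelwise comparison.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n_a, n_b, n_voxels, n_perm = 20, 20, 1000, 500
    data = rng.normal(size=(n_a + n_b, n_voxels))
    labels = np.array([0]*n_a + [1]*n_b)

    def tstats(y, lab):
        return stats.ttest_ind(y[lab == 0], y[lab == 1], axis=0).statistic

    observed = tstats(data, labels)
    null_max = np.empty(n_perm)
    for i in range(n_perm):
        null_max[i] = np.abs(tstats(data, rng.permutation(labels))).max()

    # FWE-corrected p-values against the null distribution of the maximum statistic.
    p_fwe = (null_max[None, :] >= np.abs(observed)[:, None]).mean(axis=1)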
https://github.com/xinhe-lab/GSFA
Software R package that performs sparse factor analysis and differential gene expression discovery simultaneously on single cell CRISPR screening data.
Proper citation: Guided Sparse Factor Analysis (RRID:SCR_025023)
Software tool for analysis of non-covalent interactions in molecular dynamics trajectories. Implemented in Python and universally applicable to any kind of MD trajectory supported by the MDAnalysis package.
Proper citation: PyContact (RRID:SCR_025066)
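Because the tool is built on MDAnalysis, the flavor of a trajectory-wide contact analysis can be sketched directly against that library: load a topology and trajectory, select two atom groups, and count atom pairs within a distance cutoff in every frame. This is a conceptual Python sketch, not PyContact's own API; the file names, selections, and 4 Å cutoff are assumptions.

    # Conceptual sketch: per-frame heavy-atom contacts between a ligand and a protein.
    import MDAnalysis as mda
    from MDAnalysis.analysis import distances

    u = mda.Universe("topology.psf", "trajectory.dcd")      # hypothetical input files
    lig = u.select_atoms("resname LIG")                     # hypothetical ligand selection
    prot = u.select_atoms("protein and not name H*")

    contacts_per_frame = []
    for ts in u.trajectory:
        d = distances.distance_array(lig.positions, prot.positions, box=u.dimensions)
        contacts_per_frame.append(int((d < 4.0).sum()))     # 4 Å cutoff (assumed)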
https://github.com/PhysiCell-Tools/PhysiCell-Studio
Graphical software tool for easily editing the (XML) model definition, creating initial cell positions, running simulations, and visualizing results. Used to create, execute, and visualize multicellular models built with PhysiCell. To contribute, fork the repository and make PRs against the development branch.
Proper citation: PhysiCell Studio (RRID:SCR_025311)
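Under the hood the Studio edits an XML model configuration; purely for illustration, the same kind of change can be made programmatically with the Python standard library. The file name and element path below are assumptions rather than a documented schema reference, and the Studio GUI is the intended way to make such edits.

    # Illustrative only: tweak one value in a PhysiCell-style XML configuration.
    import xml.etree.ElementTree as ET

    tree = ET.parse("PhysiCell_settings.xml")         # hypothetical config file
    node = tree.getroot().find("./overall/max_time")  # hypothetical element path
    if node is not None:
        node.text = "7200"                            # e.g. extend simulated time (minutes)
        tree.write("PhysiCell_settings_edited.xml")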
https://github.com/COMBINE-lab/maximum-likelihood-relatedness-estimation
C++ program for maximum likelihood estimation of biological relatedness from low-coverage second-generation sequencing data. It uses information from genotype likelihoods rather than observed genotypes in a maximum likelihood framework to estimate the overall coefficient of relatedness as well as individual kinship components between two samples.
Proper citation: lcMLkin (RRID:SCR_025418)
https://github.com/sokrypton/ColabFold
Software application that offers accelerated prediction of protein structures and complexes by combining the homology search of MMseqs2 with AlphaFold2 or RoseTTAFold. Used for protein folding.
Proper citation: ColabFold (RRID:SCR_025453)
http://www.farsight-toolkit.org/wiki/FARSIGHT_Toolkit
THIS RESOURCE IS NO LONGER IN SERVICE. Documented on September 23, 2022. A collection of software modules for image data handling, pre-processing, segmentation, inspection, editing, post-processing, and secondary analysis. These modules can be scripted to accomplish a variety of automated image analysis tasks. All of the modules are written in accordance with software practices of the Insight Toolkit Community. Importantly, all modules are accessible through the Python scripting language which allows users to create scripts to accomplish sophisticated associative image analysis tasks over multi-dimensional microscopy image data. This language works on most computing platforms, providing a high degree of platform independence. Another important design principle is the use of standardized XML file formats for data interchange between modules.
Proper citation: Farsight Toolkit (RRID:SCR_001728)
This is a database of 16S and 23S ribosomal RNA mutations reported in the literature, expanded to include mutations in ribosomal proteins and ribosomal factors. Access to the expanded versions of the 16S and 23S Ribosomal RNA Mutation Databases has been improved to permit searches of the lists of alterations for all the data from (1) one specific organism, (2) one specific nucleotide position, (3) one specific phenotype, or (4) a particular author. Please send bibliographic citations for published work to be included in The Ribosomal Mutation Database to the curator via email. The database currently consists of 1024 records, including 485 16S rRNA records from Escherichia coli, 37 16S-like rRNA records from other organisms, 421 23S rRNA records from E. coli, and 81 23S-like records from other organisms. The numbering of positions in all records corresponds to the numbering in E. coli. We welcome any suggested revisions to the database, as well as information about newly characterized 16S or 23S rRNA mutations. The expanded database will be renamed The Ribosomal Mutation Database and will include mutations in ribosomal proteins and ribosomal factors.
Proper citation: Ribosomal Mutation Database (RRID:SCR_001677)
https://ecl.earthchem.org/view.php?id=329
Database of hydrothermal spring geochemistry that hosts and serves the full range of compositional data acquired on seafloor hydrothermal vents from all tectonic settings. It can accommodate published historical data as well as legacy and new data that investigators contribute.
Proper citation: VentDB (RRID:SCR_001632)
Passive and active source waveform data, an event (earthquake) catalog, and channel response data are available. This comprehensive data store of raw geophysical time-series data is collected from a large variety of sensors, courtesy of a vast array of US and international scientific networks, including seismometers (permanent and temporary), tilt and strain meters, infrasound, temperature and atmospheric pressure sensors, and gravimeters, to support basic research aimed at imaging the Earth's interior. IRIS also provides data and software for educational purposes. This consortium of over 100 US universities is dedicated to the operation of science facilities for the acquisition, management, and distribution of seismological data. IRIS programs contribute to scholarly research, education, earthquake hazard mitigation, and verification of the Comprehensive Nuclear-Test-Ban Treaty. Data are stored at the IRIS Data Management Center in Seattle, Washington, which currently manages a large archive from tens of thousands of seismic stations and ships hundreds of terabytes of data yearly.
Proper citation: Incorporated Research Institutions for Seismology (RRID:SCR_002201)
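One common programmatic route to the waveform archive described above is an FDSN web-service client such as ObsPy's; the sketch below is a hedged illustration rather than an IRIS-endorsed recipe, and the network/station/channel codes and time window are simply examples.

    # Fetch one hour of broadband vertical-component data through the IRIS FDSN service.
    from obspy import UTCDateTime
    from obspy.clients.fdsn import Client

    client = Client("IRIS")
    t0 = UTCDateTime("2014-01-01T00:00:00")
    st = client.get_waveforms(network="IU", station="ANMO", location="00",
                              channel="BHZ", starttime=t0, endtime=t0 + 3600)
    print(st)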
Assists scientists in finding Antarctic scientific data of interest and submitting data for long-term preservation in accordance with their obligations under the National Science Foundation (NSF) Office of Polar Programs (OPP) Data Policy.
Proper citation: U.S. Antarctic Program Data Coordination Center (RRID:SCR_002221)
Project portal for publishing, citing, sharing, and discovering research data. Software, protocols, and community connections for creating research data repositories that automate professional archival practices, guarantee long-term preservation, and enable researchers to share, retain control of, and receive web visibility and formal academic citations for their data contributions. Researchers, data authors, publishers, data distributors, and affiliated institutions all receive appropriate credit. Hosts multiple dataverses. Each dataverse contains studies or collections of studies, and each study contains cataloging information that describes the data plus the actual data files and complementary files. Data related to social sciences, health, medicine, humanities, or other sciences with an emphasis on human behavior are uploaded to the IQSS Dataverse Network (Harvard). You can create your own dataverse for free and start adding studies for your data files and complementary material (documents, software, etc.). You may install your own Dataverse Network for your university or organization.
Proper citation: Dataverse Network Project (RRID:SCR_001997)
Accepts and makes available geochemical, geochronological, and petrological data (analytical and synthesis) through this community-driven effort to facilitate the preservation, discovery, access, and visualization of the data generated.
* PetDB holds geochemical data from sub-oceanic igneous and metamorphic rocks generated at mid-ocean ridges, including back-arc basins, young seamounts, and old oceanic crust. Data are compiled primarily from the published literature.
* SedDB integrates marine and terrestrial sediment geochemical data compiled primarily from the published literature.
* Deep Lithosphere Data Set contains geochemical and petrological data from lower crust and upper mantle xenoliths.
* VentDB contains hydrothermal spring geochemistry that hosts and serves the full range of compositional data acquired on seafloor hydrothermal vents from all tectonic settings.
* NAVDAT - The Western North American Volcanic and Intrusive Rock Database
* Geochron is an application that helps with the onerous task of data management for geochronological and thermochronological studies.
* EarthChem Portal is the one-stop shop for geochemical data that gives users the ability to search the federated databases PetDB, NAVDAT, and GEOROC simultaneously, integrated into a common output format.
* The EarthChem Library is a repository for geochemical datasets (analytical data, experimental data, synthesis databases) and other digital resources relevant to the field of geochemistry, contributed by the geochemistry community.
* SESAR - System for Earth SAmple Registration
Proper citation: EarthChem (RRID:SCR_002207)
http://metpetdb.rpi.edu/metpetweb/
Database / data repository for metamorphic petrology that is being designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at Rensselaer Polytechnic Institute as part of the National Cyberinfrastructure Initiative.
Proper citation: MetPetDB (RRID:SCR_002208)
http://csdms.colorado.edu/wiki/Main_Page
Model repository and data related to earth-surface dynamics modeling. The CSDMS Modeling Tool (CMT) allows you to run and couple CSDMS model components on the CSDMS supercomputer in a user-friendly software environment. Components in the CMT are based on models originally submitted to the CSDMS model repository and now adapted to communicate with other models. The CMT tool is the environment in which you can link these components together to run new simulations. The CMT software runs on your own computer, but it communicates with the CSDMS HPCC to perform the simulations; thus, the CMT also offers a relatively easy way of using the CSDMS supercomputer for model experiments. CSDMS deals with the Earth's surface - the ever-changing, dynamic interface between lithosphere, hydrosphere, cryosphere, and atmosphere. They are a diverse community of experts promoting the modeling of earth surface processes by developing, supporting, and disseminating integrated software modules that predict the movement of fluids, and the flux (production, erosion, transport, and deposition) of sediment and solutes in landscapes and their sedimentary basins. CSDMS:
* Produces protocols for community-generated, continuously evolving, open software
* Distributes software tools and models
* Provides cyber-infrastructure to promote the quantitative modeling of earth surface processes
* Addresses the challenging problems of surface-dynamic systems: self-organization, localization, thresholds, strong linkages, scale invariance, and interwoven biology & geochemistry
* Enables the rapid development and application of linked dynamic models tailored to specific landscape basin evolution (LBE) problems at specific temporal and spatial scales
* Partners with related computational and scientific programs to eliminate duplication of effort and to provide an intellectually stimulating environment
* Supports a strong linkage between what is predicted by CSDMS codes and what is observed, both in nature and in physical experiments
* Supports the imperatives in Earth Science research
Proper citation: Community Surface Dynamics Modeling System (RRID:SCR_002196)
Paleoecology database for Plio-Pleistocene to Holocene fossil data with a centralized structure for interdisciplinary, multiproxy analyses and common tool development; discipline-specific data can also be easily accessed. Data currently include North American pollen (NAPD) and fossil mammals (FAUNMAP). Other proxies (plant macrofossils, beetles, ostracodes, diatoms, etc.) and geographic areas (Europe, Latin America, etc.) will be added in the near future. Data are derived from sites from the last 5 million years.
Proper citation: Neotoma Paleoecology Database (RRID:SCR_002190)