WARFT

WARFT, or WAran Research FoundaTion, is a nonprofit organization promoting interdisciplinary research among undergraduate students in Chennai, India. Professor N. Venkateswaran founded the group in 2000 and, as of 2011, continues to manage it. The aim of WARFT is to understand and model the brain in order to enable drug discovery, so that children affected by spasticity can live normal lives.

Since its inception, WARFT has researched brain modeling, supercomputing, and associated areas. Its goal is to unravel the connectivity of human brain regions through the MMINi-DASS project. Because biologically accurate brain simulations require massive computational power, a second research initiative, the MIP project, is directed towards evolving a design methodology for a tera-operations supercomputing cluster.

Undergraduate research trainees at WARFT work in neuroscience, supercomputing architectures, deep sub-micrometre processor design, power-aware and low-power computing, mixed-signal design, fault tolerance and testing, and digital signal processing. WARFT also conducts Dhi Yantra, an annual workshop on brain modeling and supercomputing.

Aims

WARFT's mission is twofold: first, to promote innovation and research awareness among young undergraduate students, for which it conducts a two-year part-time Research Awareness Programme and Training (RAPT); and second, to help solve the mysteries of the brain and hasten the discovery of drugs that can cure brain diseases.

Undergraduate research initiatives

There are two main interdisciplinary research initiatives at WARFT:

The Multi-Million Neuron Interconnectivity - Dendrite, Axon, Soma and Synapse (MMINi-DASS)

The MMINi-DASS project is a large-scale brain simulation that aims to predict the interconnectivity of specific brain regions using the fMRI BOLD responses of those regions. The intent is to build an understanding of brain dynamics from the most fundamental level up to cognitive and behavioral aspects. Modeling individual brain entities is challenging, and predicting their interconnectivity through simulation requires enormous computing power; the project therefore banks on the exponentially increasing power and falling cost of computing hardware.
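
To illustrate the general idea of inferring connectivity from BOLD responses, the sketch below estimates a functional-connectivity graph from simulated time series using pairwise Pearson correlation. This is a minimal, hypothetical example of BOLD-based connectivity inference in general, not WARFT's actual MMINi-DASS method; the region count, simulated signals, and threshold are all assumptions.

```python
import numpy as np

# Hypothetical sketch: estimating a connectivity graph from BOLD time
# series via pairwise Pearson correlation. The simulated signals,
# region count, and threshold are illustrative assumptions.

rng = np.random.default_rng(0)
n_regions, n_timepoints = 8, 200

# Simulate BOLD responses: region 0 drives regions 1 and 2.
bold = rng.normal(size=(n_regions, n_timepoints))
bold[1] += 0.8 * bold[0]
bold[2] += 0.6 * bold[0]

# Pairwise correlation of region time series as a connectivity estimate.
connectivity = np.corrcoef(bold)

# Keep strong off-diagonal correlations as candidate connections.
mask = (np.abs(connectivity) > 0.5) & ~np.eye(n_regions, dtype=bool)
for i, j in np.argwhere(mask):
    if i < j:
        print(f"region {i} <-> region {j}: r = {connectivity[i, j]:.2f}")
```

Even this toy version hints at why the computational demand grows so steeply: the number of candidate connections grows quadratically with the number of regions modeled.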

The Memory In Processor SuperComputer On Chip (MIP SCOC) and the Silicon Operating System (SILICOS)

The immense computational demand imposed by the MMINi-DASS project has given rise to a novel supercomputer design known as the MIP SCOC. The MIP approach incorporates memory within the logic, reminiscent of the Berkeley IRAM project. In the MIP SCOC architecture, memory is physically and logically integrated with the functional units of the processor, and this bit-level integration of processing logic and memory substantially increases the functionality of a single MIP SCOC node.
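
The motivation for integrating memory with logic can be seen in a toy cost model: when operands no longer cross an off-chip bus, data movement stops dominating the cycle count. The sketch below is a hypothetical back-of-the-envelope comparison; the cycle costs are assumed round numbers, not figures from WARFT, MIP SCOC, or the IRAM project.

```python
# Hypothetical toy cost model for memory-in-processor designs. The cycle
# counts below are assumed round numbers, not measurements.

OFF_CHIP_ACCESS = 100  # assumed cycles to fetch an operand over a bus
ON_CHIP_ACCESS = 2     # assumed cycles to read memory co-located with logic
COMPUTE = 1            # assumed cycles per arithmetic operation

def total_cycles(n_operands: int, access_cost: int) -> int:
    # Fetch every operand once, then combine them pairwise (n - 1 ops).
    return n_operands * access_cost + (n_operands - 1) * COMPUTE

n = 1024
print("conventional node:  ", total_cycles(n, OFF_CHIP_ACCESS))
print("memory-in-processor:", total_cycles(n, ON_CHIP_ACCESS))
```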

The MIP SCOC architecture includes powerful algorithm-level functional units (ALFUs) such as chain matrix adders, multipliers, sorters, multiple-operand adders, and graph-theoretic units for depth-first and breadth-first search. These introduce a higher level of abstraction through algorithm-level instructions (ALISA): a single ALISA instruction is equivalent to multiple parallel VLIW instructions. The architecture also includes an on-chip compiler (Compiler-On-Silicon, COS) that generates the instructions feeding the ALFUs of a MIP node. The primary COS (PCOS) partitions the incoming problem according to the algorithms involved, and each secondary COS (SCOS) generates the instructions for its own column of ALFUs. A distributed control design, specific to each ALFU population type (forming different heterogeneous cores), enables a very large number of ALFUs to operate in parallel.
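
To make the ALISA abstraction concrete, the sketch below models algorithm-level instructions as single opcodes that each invoke an entire functional-unit routine (a sorter, a multiple-operand adder, a BFS unit). The opcode names and the Python dispatch table are purely illustrative assumptions, not WARFT's instruction set; the point is only that one instruction stands in for an entire algorithm that a conventional ISA would express as many instructions.

```python
from collections import deque

# Hypothetical sketch of algorithm-level instructions (ALISA-style):
# one opcode invokes an entire algorithm-level functional unit (ALFU).
# Opcode names and routines are illustrative, not WARFT's actual design.

def alfu_sort(operands):
    return sorted(operands)

def alfu_multi_operand_add(operands):
    return sum(operands)

def alfu_bfs(args):
    graph, start = args
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

ALFUS = {"SORT": alfu_sort, "MADD": alfu_multi_operand_add, "BFS": alfu_bfs}

def execute(program):
    # Each instruction is (opcode, operands); a conventional ISA would
    # need many instructions to express the same work.
    return [ALFUS[op](args) for op, args in program]

print(execute([
    ("SORT", [3, 1, 2]),
    ("MADD", [1, 2, 3, 4]),
    ("BFS", ({0: [1, 2], 1: [2]}, 0)),
]))
```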

Groups

WARFT is divided into seven research groups.

According to WARFT's website, it had published 50 research papers as of 2008.

Dhi Yantra

Dhi Yantra is a workshop on brain modeling and supercomputing organized by WARFT every year. Three editions of the workshop, featuring scientists and researchers from a range of fields and countries, had been held before the fourth, which took place in Chennai, India, on July 10–12, 2009.
