| Lesley Shannon | |
| --- | --- |
| Nationality | Canadian |
| Occupation | Professor |
| Academic background | |
| Alma mater | University of New Brunswick (BS), University of Toronto (MASc, PhD) |
Lesley Shannon is a Canadian professor who is the chair of the Computer Engineering Option in the School of Engineering Science at Simon Fraser University. [1] She is also the current NSERC Chair for Women in Science and Engineering for BC and Yukon. [2] Her chair operates the Westcoast Women in Engineering, Science and Technology (WWEST) program to promote equity, diversity and inclusion in STEM. [3] [4]
Shannon received her B.Sc. in Electrical Engineering with the Computer Option from the University of New Brunswick (Canada) in 1999. She then completed her Master of Applied Science and Ph.D. at the University of Toronto (Canada) in 2001 and 2006, respectively. [2]
Shannon's primary area of interest is Computing System Design, including architectures, design methodologies, and programming models. Her PhD research focused on developing tools, architectures and methodologies that help reduce the design time of embedded systems, particularly those implemented using FPGAs. [5]
Since her arrival at SFU, she has expanded her research to include computing architectures for silicon- and non-silicon-based technologies, including FPGAs, heterogeneous computing, Networks-on-Chip (NoCs), and Multi-Processor Systems-on-Chip (MPSoCs). [5]
Shannon was awarded the 2014 APEGBC Teaching Award of Excellence in recognition of her classroom and out-of-class mentoring activities and her contributions in leading a redesign of the school's undergraduate curriculum at SFU. [6] [7]
Her publications include "Odin II - An Open-Source Verilog HDL Synthesis Tool for CAD Research", [8] "FUSE: Front-End User Framework for O/S Abstraction of Hardware Accelerators", [9] and "Using reconfigurability to achieve real-time profiling for hardware/software codesign". [10] Additionally, she has published articles such as "TAIGA: A new RISC-V soft-processor framework enabling high performance CPU architectural features", [11] and "Performance and scalability of Fourier domain optical coherence tomography acceleration using graphics processing units". [12]
A field-programmable gate array (FPGA) is a type of configurable integrated circuit that can be repeatedly programmed after manufacturing. FPGAs are a subset of logic devices referred to as programmable logic devices (PLDs). They consist of an array of programmable logic blocks and a surrounding routing grid that can be configured "in the field" to interconnect the blocks and perform various digital functions. FPGAs are often used in limited (low) quantity production of custom-made products, and in research and development, where the higher cost of individual FPGAs is not as important and where creating and manufacturing a custom circuit would not be feasible. Other applications for FPGAs include the telecommunications, automotive, aerospace, and industrial sectors, which benefit from their flexibility, high signal processing speed, and parallel processing abilities.
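To make "programmed after manufacturing" concrete, here is a minimal C++ sketch, assuming a simplified 4-input lookup table (LUT) of the kind found in FPGA logic blocks; the `Lut4` type and its truth-table encoding are illustrative, not any vendor's actual configuration format.

```cpp
#include <bitset>
#include <iostream>

// Toy model of a 4-input FPGA lookup table (LUT): the 16 configuration
// bits are the truth table, and "programming" the device amounts to
// loading new configuration bits.
struct Lut4 {
    std::bitset<16> config;  // one output bit per input combination

    bool eval(bool a, bool b, bool c, bool d) const {
        unsigned index = (a << 3) | (b << 2) | (c << 1) | d;
        return config[index];
    }
};

int main() {
    // Configure the LUT as a 4-input AND gate: only index 0b1111 outputs 1.
    Lut4 lut;
    lut.config = std::bitset<16>(0x8000);
    std::cout << lut.eval(true, true, true, true) << '\n';   // 1
    std::cout << lut.eval(true, false, true, true) << '\n';  // 0

    // "Reprogramming" in the field: reload the truth table as a 4-input OR.
    lut.config = std::bitset<16>(0xFFFE);
    std::cout << lut.eval(false, false, false, false) << '\n';  // 0
    return 0;
}
```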
In computer engineering, a hardware description language (HDL) is a specialized computer language used to describe the structure and behavior of electronic circuits, usually to design application-specific integrated circuits (ASICs) and to program field-programmable gate arrays (FPGAs).
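No Verilog or VHDL source appears in this article, so as a rough stand-in the sketch below uses SystemC, a C++ class library for hardware modeling, to describe the structure (ports) and behavior (addition) of a small circuit, which is what an HDL expresses in dedicated syntax; the module and signal names are illustrative.

```cpp
#include <systemc.h>
#include <iostream>

// A tiny combinational adder described with SystemC; Verilog or VHDL would
// express the same structure and behavior in their own HDL syntax.
SC_MODULE(Adder) {
    sc_in<sc_uint<8>>  a, b;   // input ports (structure)
    sc_out<sc_uint<8>> sum;    // output port

    void compute() { sum.write(a.read() + b.read()); }

    SC_CTOR(Adder) {
        SC_METHOD(compute);    // behavior: re-evaluate whenever an input changes
        sensitive << a << b;
    }
};

int sc_main(int, char*[]) {
    sc_signal<sc_uint<8>> a, b, sum;
    Adder adder("adder");
    adder.a(a);
    adder.b(b);
    adder.sum(sum);

    a.write(3);
    b.write(4);
    sc_start(1, SC_NS);        // let the simulation kernel evaluate the module
    std::cout << "sum = " << sum.read() << std::endl;  // sum = 7
    return 0;
}
```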
Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.
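A minimal C++ sketch of dividing a large problem into smaller ones solved at the same time: the reduction below is split into two sub-sums that run concurrently via `std::async`; the data size and the two-way split are arbitrary choices for illustration.

```cpp
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

// Task-level parallelism: a large reduction is split into two smaller
// sub-sums that run at the same time on separate cores, then combined.
int main() {
    std::vector<long long> data(1'000'000);
    std::iota(data.begin(), data.end(), 1);  // 1, 2, ..., 1'000'000

    auto mid = data.begin() + data.size() / 2;
    auto lower = std::async(std::launch::async, [&] {
        return std::accumulate(data.begin(), mid, 0LL);  // first half, in parallel
    });
    long long upper = std::accumulate(mid, data.end(), 0LL);  // second half, here

    std::cout << "total = " << lower.get() + upper << '\n';
    return 0;
}
```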
Reconfigurable computing is a computer architecture combining some of the flexibility of software with the high performance of hardware by processing with flexible hardware platforms like field-programmable gate arrays (FPGAs). The principal difference when compared to using ordinary microprocessors is the ability to add custom computational blocks using FPGAs. On the other hand, the main difference from custom hardware, i.e., application-specific integrated circuits (ASICs), is the possibility of adapting the hardware during runtime by "loading" a new circuit on the reconfigurable fabric, thus providing new computational blocks without the need to manufacture and add new chips to the existing system.
Hardware acceleration is the use of computer hardware designed to perform specific functions more efficiently when compared to software running on a general-purpose central processing unit (CPU). Any transformation of data that can be calculated in software running on a generic CPU can also be calculated in custom-made hardware, or in some mix of both.
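A small-scale C++ illustration of that mix: the loop below computes a population count purely in software, while the GCC/Clang builtin delegates the same transformation to a dedicated CPU instruction (such as x86 POPCNT) where one exists; offloading to an FPGA or ASIC accelerator follows the same principle at larger scale. The builtin is compiler-specific.

```cpp
#include <cstdint>
#include <iostream>

// Software-only bit count: a generic CPU evaluates it one bit per loop step.
int popcount_software(std::uint64_t x) {
    int count = 0;
    while (x) {
        count += x & 1;
        x >>= 1;
    }
    return count;
}

int main() {
    std::uint64_t value = 0xF0F0F0F0F0F0F0F0ULL;

    // Same data transformation, delegated to dedicated hardware where it
    // exists: on GCC/Clang this builtin maps to the CPU's POPCNT instruction
    // when available.
    std::cout << popcount_software(value) << '\n';     // 32
    std::cout << __builtin_popcountll(value) << '\n';  // 32
    return 0;
}
```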
A soft microprocessor is a microprocessor core that can be wholly implemented using logic synthesis. It can be implemented via different semiconductor devices containing programmable logic, including both high-end and commodity variations.
Bluespec, Inc. is an American semiconductor device electronic design automation company based in Framingham, Massachusetts, co-founded in June 2003 by computer scientists Arvind Mithal, a professor at the Massachusetts Institute of Technology (MIT), and Joe Stoy of Oxford University. Arvind had previously founded Sandburst in 2000, a company specializing in producing chips for 10 Gigabit Ethernet (10GE) routers.
A data path is a collection of functional units, such as arithmetic logic units (ALUs) or multipliers, that perform data processing operations, together with registers and buses. Along with the control unit, it makes up the central processing unit (CPU). A larger data path can be made by joining more than one data path together using multiplexers.
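The following C++ sketch models a toy single-cycle data path, not any particular processor: a small register file, an ALU supporting two operations, and a multiplexer selecting the ALU's second operand, all steered by control signals of the kind a control unit would supply; the `Control` and `DataPath` types are illustrative.

```cpp
#include <array>
#include <cstdint>
#include <iostream>

// Control signals supplied each cycle (the control unit's role).
struct Control {
    unsigned rs1, rs2, rd;   // register selects
    bool use_immediate;      // multiplexer select
    std::uint32_t immediate;
    char alu_op;             // '+' or '&'
};

// Toy data path: register file, operand multiplexer, ALU, write-back.
struct DataPath {
    std::array<std::uint32_t, 8> regs{};  // register file

    void step(const Control& c) {
        std::uint32_t a = regs[c.rs1];
        // Multiplexer: choose between a register operand and an immediate.
        std::uint32_t b = c.use_immediate ? c.immediate : regs[c.rs2];
        // ALU: perform the selected data-processing operation.
        std::uint32_t result = (c.alu_op == '+') ? a + b : (a & b);
        regs[c.rd] = result;  // write back over the result bus
    }
};

int main() {
    DataPath dp;
    dp.step({0, 0, 1, true, 5, '+'});   // r1 = r0 + 5
    dp.step({1, 1, 2, false, 0, '+'});  // r2 = r1 + r1
    std::cout << dp.regs[2] << '\n';    // 10
    return 0;
}
```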
This is a glossary of terms used in the field of reconfigurable computing and reconfigurable computing systems, as opposed to the traditional von Neumann architecture.
Mitrionics was a Swedish company manufacturing soft-core reconfigurable processors. It was mentioned as one of EE Times' "60 Emerging Startups". The company was founded as Flow Computing in 2002 by Stefan Möhl, Pontus Borg, Andreas Rodman and Christian Merheim to commercialize a massively parallel reconfigurable processor implemented on FPGAs. Mitrion-C was then invented and developed by Stefan Möhl and Pontus Borg. The technology can be described as turning general-purpose chips into massively parallel processors that can be used for high-performance computing. Mitrionics' massively parallel processor is available on Cray, Nallatech, and Silicon Graphics systems.
Ambric, Inc. was a designer of computer processors that developed the Ambric architecture. Its Am2045 Massively Parallel Processor Array (MPPA) chips were primarily used in high-performance embedded systems such as medical imaging, video, and signal-processing.
High-level synthesis (HLS), sometimes referred to as C synthesis, electronic system-level (ESL) synthesis, algorithmic synthesis, or behavioral synthesis, is an automated design process that takes an abstract behavioral specification of a digital system and finds a register-transfer level structure that realizes the given behavior.
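As an example of the kind of input an HLS tool consumes, the C++ function below is a purely behavioral specification of a small dot product; the tool would choose how many multipliers and adders to instantiate and how to pipeline the loop to produce a register-transfer level structure realizing this behavior. Vendor-specific pragmas that guide that exploration are omitted, and the function itself is a hypothetical example.

```cpp
#include <cstddef>
#include <iostream>

// Behavioral specification: says *what* to compute, not which hardware
// resources to use or how to schedule them; those decisions are the HLS
// tool's job when it derives a register-transfer level design.
void dot_product(const int coeff[8], const int sample[8], int* result) {
    int acc = 0;
    for (std::size_t i = 0; i < 8; ++i) {
        acc += coeff[i] * sample[i];  // candidate for a pipelined multiply-accumulate
    }
    *result = acc;
}

int main() {
    const int coeff[8]  = {1, 2, 3, 4, 5, 6, 7, 8};
    const int sample[8] = {1, 1, 1, 1, 1, 1, 1, 1};
    int result = 0;
    dot_product(coeff, sample, &result);  // runs in software here; HLS targets hardware
    std::cout << result << '\n';          // 36
    return 0;
}
```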
Carl Ebeling is an American computer scientist and professor. His recent interests include coarse-grained reconfigurable architectures of integrated circuits.
Verilator is a free and open-source software tool which converts Verilog to a cycle-accurate behavioral model in C++ or SystemC. The generated models are cycle-accurate and 2-state; as a consequence, the models typically offer higher performance than the more widely used event-driven simulators, which can model behavior within the clock cycle. Verilator is now used within academic research, open source projects and for commercial semiconductor development. It is part of the growing body of free electronic design automation (EDA) software.
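A minimal sketch of driving a Verilator-generated model, assuming a Verilog module named `top` with ports `clk` and `count` has been compiled with `verilator --cc`; the module and port names are illustrative, and a real test bench would typically add reset handling and waveform tracing.

```cpp
#include <iostream>
#include "verilated.h"
#include "Vtop.h"   // C++ model generated by Verilator for module "top"

int main(int argc, char** argv) {
    Verilated::commandArgs(argc, argv);
    Vtop top;                      // instantiate the cycle-accurate C++ model

    for (int cycle = 0; cycle < 10; ++cycle) {
        top.clk = 0; top.eval();   // evaluate the falling edge
        top.clk = 1; top.eval();   // evaluate the rising edge
        std::cout << "cycle " << cycle
                  << " count=" << static_cast<int>(top.count) << '\n';
    }
    top.final();                   // run final blocks and clean up
    return 0;
}
```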
James Hoe is a Taiwanese-American professor of Electrical and Computer Engineering at Carnegie Mellon University (CMU). He is interested in many aspects of computer architecture and digital hardware design, including the specific areas of field-programmable gate array (FPGA) architecture for computing; digital signal processor (DSP) hardware; and high-level hardware design and synthesis. Professor Hoe’s current research focus is on devising a new FPGA architecture for power efficient, high-performance computing. His research group is working on developing an FPGA runtime environment that incorporates partial reconfiguration, virtualization, and protection features to manage an FPGA as a dynamically sharable multitasking compute resource.
Computing with Memory refers to computing platforms where function response is stored in memory array, either one or two-dimensional, in the form of lookup tables (LUTs) and functions are evaluated by retrieving the values from the LUTs. These computing platforms can follow either a purely spatial computing model, as in field-programmable gate array (FPGA), or a temporal computing model, where a function is evaluated across multiple clock cycles. The latter approach aims at reducing the overhead of programmable interconnect in FPGA by folding interconnect resources inside a computing element. It uses dense two-dimensional memory arrays to store large multiple-input multiple-output LUTs. Computing with Memory differs from Computing in Memory or processor-in-memory (PIM) concepts, widely investigated in the context of integrating a processor and memory on the same chip to reduce memory latency and increase bandwidth. These architectures seek to reduce the distance the data travels between the processor and the memory. The Berkeley IRAM project is one notable contribution in the area of PIM architectures.
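In miniature, the C++ sketch below precomputes the response of an arbitrary 8-bit function into a lookup table, after which each "evaluation" is a memory read rather than arithmetic, which is the essence of the LUT-based model described above; the particular function is a placeholder.

```cpp
#include <array>
#include <cstdint>
#include <iostream>

// Computing with memory in miniature: the function's response over all
// 8-bit inputs is stored in a lookup table, so later evaluations are
// memory reads instead of arithmetic.
int main() {
    std::array<std::uint8_t, 256> lut{};
    for (unsigned x = 0; x < 256; ++x) {
        lut[x] = static_cast<std::uint8_t>((x * x + 3 * x + 7) & 0xFF);  // any fixed function
    }

    std::uint8_t input = 42;
    std::cout << "f(42) = " << static_cast<int>(lut[input]) << '\n';  // table read, no multiply
    return 0;
}
```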
Xilinx ISE is a discontinued software tool from Xilinx for synthesis and analysis of HDL designs, which primarily targets development of embedded firmware for Xilinx FPGA and CPLD integrated circuit (IC) product families. It was succeeded by Xilinx Vivado. Use of the last released edition from October 2013 continues for in-system programming of legacy hardware designs containing older FPGAs and CPLDs otherwise orphaned by the replacement design tool, Vivado Design Suite.
SciEngines GmbH is a privately owned company founded in 2007 as a spin-off of the COPACOBANA project by the Universities of Bochum and Kiel, both in Germany. The project's goal was to create a platform for affordable custom hardware attacks. COPACOBANA is a massively parallel reconfigurable computer that can be used to perform a brute-force attack to recover DES-encrypted data. It consists of 120 commercially available, reconfigurable integrated circuits (FPGAs); these Xilinx Spartan3-1000 devices run in parallel to form a massively parallel system. Since 2007, SciEngines GmbH has enhanced and developed successors of COPACOBANA. Furthermore, COPACOBANA has become a well-known reference platform for cryptanalysis and custom hardware-based attacks on symmetric ciphers, asymmetric ciphers, and stream ciphers. In 2008, attacks against the A5/1 stream cipher, an encryption system used to encrypt voice streams in GSM, were published as the first known real-world attack utilizing off-the-shelf custom hardware.
Olaf O. Storaasli is a scientist and engineer who worked at NASA, Oak Ridge National Laboratory, Centrus Energy, and Synective Labs. At NASA, he led hardware, software, and applications teams that successfully developed one of NASA's first parallel computers, the Finite Element Machine, and he developed rapid matrix equation algorithms tailored for high-performance computers to harness FPGA and GPU accelerators for science and engineering applications. He was a graduate advisor and instructor at the University of Tennessee, George Washington University, and Christopher Newport University.
Verilog-to-Routing (VTR) is an open source CAD flow for FPGA devices. VTR's main purpose is to map a given circuit described in Verilog, a Hardware Description Language, on a given FPGA architecture for research and development purposes; the FPGA architecture targeted could be a novel architecture that a researcher wishes to explore, or it could be an existing commercial FPGA whose architecture has been captured in the VTR input format. The VTR project has many contributors, with lead collaborating universities being the University of Toronto, the University of New Brunswick, and the University of California, Berkeley. Additional contributors include Google, The University of Utah, Princeton University, Altera, Intel, Texas Instruments, and MIT Lincoln Lab.