Navlab

Navlab is a series of autonomous and semi-autonomous vehicles developed by teams from The Robotics Institute at the School of Computer Science, Carnegie Mellon University. Later models were produced under a new department created specifically for the research called "The Carnegie Mellon University Navigation Laboratory". [1] Navlab 5 notably steered itself almost all the way from Pittsburgh to San Diego.

History

Research on computer-controlled vehicles began at Carnegie Mellon in 1984 [1] as part of the DARPA Strategic Computing Initiative, [2] and production of the first vehicle, Navlab 1, began in 1986. [3] [4] Navlab 1 burned in 1989 when the air-conditioning system leaked liquid onto the computers. [5]

Applications

The vehicles in the Navlab series have been designed for varying purposes, "... off-road scouting; automated highways; run-off-road collision prevention; and driver assistance for maneuvering in crowded city environments. Our current work involves pedestrian detection, surround sensing, and short range sensing for vehicle control." [6]

Several types of vehicles have been developed, including "... robot cars, vans, SUVs, and buses." [1]

Vehicles

The institute has made vehicles with the designations Navlab 1 through 11. [6] The vehicles were mainly semi-autonomous, though some were fully autonomous and required no human input. [6]

Navlab 1 was built in 1986 using a Chevrolet panel van. [3] The van carried five racks of computer hardware, including three Sun workstations, video hardware, a GPS receiver, and a Warp supercomputer. [3] The Warp delivered about 100 MFLOPS, was the size of a refrigerator, and was powered by a portable 5 kW generator. [7] The vehicle suffered from software limitations and was not fully functional until the late 1980s, when it achieved its top speed of 20 mph (32 km/h). [3]

Navlab 2 was built in 1990 using a US Army HMMWV. [3] Computer power was uprated for this new vehicle, with three Sparc 10 computers "for high level data processing" and two 68000-based computers "used for low level control". [3] The Hummer was capable of driving both off-road and on-road. Over rough terrain its speed was limited to a top of 6 mph (9.7 km/h); on-road, Navlab 2 could reach as high as 70 mph (110 km/h). [3]

Navlab 1 and 2 were semi-autonomous and used "... steering wheel and drive shaft encoders and an expensive inertial navigation system for position estimation." [3]

Navlab 5 used a 1990 Pontiac Trans Sport minivan. In July 1995, the team took it from Pittsburgh to San Diego on a proof-of-concept trip, dubbed "No Hands Across America", with the system steering itself for all but 50 of the 2,850 miles and averaging over 60 mph. [8] [9] [10] In 2007, Navlab 5 was added to the Class of 2008 inductees of the Robot Hall of Fame. [11]

Navlabs 6 and 7 were both built with Pontiac Bonnevilles. Navlab 8 was built with an Oldsmobile Silhouette van. Navlabs 9 and 10 were both built out of Houston transit buses. [12]

ALVINN

ALVINN (An Autonomous Land Vehicle in a Neural Network) was developed in 1988. [13] [14] [15] Detailed information is found in Dean A. Pomerleau's PhD thesis (1992). [16] It was an early demonstration of representation learning, sensor fusion, and data augmentation.

Architecture

ALVINN was a 3-layer fully connected feedforward network trained by backpropagation, with 1217-29-46 neurons and thus 36,627 weights. It had 3 types of inputs: a 30×32-unit video retina (960 units), an 8×32-unit range-finder retina (256 units), and a single road-intensity feedback unit, for 1,217 input units in total.

The output layer consisted of 46 units: 45 units representing steering directions from sharp left to sharp right, plus one road-intensity feedback unit that was fed back to the input layer on the next time step.

By inspecting the network weights, Pomerleau noticed that the feedback unit learned to measure the relative lightness of the road areas vs the non-road areas.
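The layer sizes quoted above can be checked with a minimal forward-pass sketch. This is an illustration in NumPy, not Pomerleau's original code; the random weights and sigmoid activation are assumptions, but note that the weight count 1217×29 + 29×46 works out to exactly the 36,627 weights stated in the text.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Layer sizes from the text: 1217 inputs, 29 hidden units, 46 outputs.
N_IN, N_HID, N_OUT = 1217, 29, 46

rng = np.random.default_rng(0)
# Small random weights; 1217*29 + 29*46 = 36,627, matching the text.
W1 = rng.normal(0.0, 0.1, (N_HID, N_IN))
W2 = rng.normal(0.0, 0.1, (N_OUT, N_HID))

def forward(x):
    """One forward pass: input retinas -> hidden layer -> output units."""
    h = sigmoid(W1 @ x)
    return sigmoid(W2 @ h)

# Stand-in input: video retina + range-finder retina + feedback unit.
x = rng.random(N_IN)
y = forward(x)
print(y.shape)  # (46,)
```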

Training

ALVINN was trained by supervised learning on a dataset of 1200 simulated road images paired with corresponding range finder data. These images encompassed diverse road curvatures, retinal orientations, lighting conditions, and noise levels. Generating these images took 6 hours of Sun-4 CPU time.

The network was trained for 40 epochs using backpropagation on Warp (taking 45 minutes). For each training example, the steering output units were trained to produce a Gaussian distribution of activations, centered on the unit representing the correct steering angle.
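The Gaussian target scheme for the steering units can be sketched as follows. The number of steering units (45) follows from the output-layer description; the spread `sigma` is an illustrative assumption, not the exact "hill" shape used in the thesis.

```python
import numpy as np

N_STEER = 45   # steering output units, sharp left to sharp right
SIGMA = 2.0    # assumed spread of the target bump (illustrative)

def gaussian_target(correct_unit, n_units=N_STEER, sigma=SIGMA):
    """Target activations: a Gaussian bump centered on the unit that
    represents the correct steering angle, rather than a one-hot vector."""
    idx = np.arange(n_units)
    return np.exp(-((idx - correct_unit) ** 2) / (2.0 * sigma ** 2))

t = gaussian_target(22)        # roughly the "straight ahead" unit
print(int(np.argmax(t)))       # 22
```

Training toward a smooth bump instead of a one-hot target lets nearby steering units share credit, so the network's output can later be read off as the peak of a distribution rather than a single hard class.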

At the end of training, the network achieved 90% accuracy in predicting the correct steering angle within two units of the true value on unseen simulated road images.

In live experiments, it ran on Navlab 1 with a video camera and a laser rangefinder. It could drive the vehicle at 0.5 m/s along a 400-meter wooded path under a variety of weather conditions: snowy, rainy, sunny, and cloudy. This was competitive with the traditional computer-vision-based road-following algorithms of the time.

Later, the team applied online imitation learning with real data collected while a person drove Navlab 1. They noticed that, because a human driver never strays far from the path, the network would never be trained on what action to take if it ever found itself far off the path. To deal with this problem, they applied data augmentation: each real image was shifted to the left by 5 different amounts and to the right by 5 different amounts, and the recorded human steering angle was shifted accordingly. In this way, each example was augmented into 11 examples.
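The shift-based augmentation above can be sketched as below. The retina size, pixel shifts, and the degrees-per-pixel steering correction are illustrative assumptions; the real system computed the shifted camera view from the road geometry rather than translating pixels.

```python
import numpy as np

def augment(image, steering_angle,
            shifts=(-5, -4, -3, -2, -1, 1, 2, 3, 4, 5),
            degrees_per_pixel=0.5):
    """Turn one (image, steering) example into 11 by shifting the image
    left/right and adjusting the human steering angle to match.

    `shifts` (in pixels) and `degrees_per_pixel` are illustrative values,
    not the exact geometry used on Navlab 1.
    """
    examples = [(image, steering_angle)]       # the original example
    for s in shifts:
        # Crude horizontal shift (wraps at the border, unlike the real
        # perspective-correct reprojection).
        shifted = np.roll(image, s, axis=1)
        examples.append((shifted, steering_angle + s * degrees_per_pixel))
    return examples

img = np.zeros((30, 32))                       # assumed video-retina size
batch = augment(img, steering_angle=0.0)
print(len(batch))                              # 11
```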

It was found that with a short sequence of roughly 100 images, the network could be trained online to follow the road. This took just about 10 minutes of driving.

The first ALVINN was trained in February 1989, offline on purely simulated images of the road, in an eight-hour run on the Warp machine. After training, the network was run on a Sun-3 computer on the Navlab; the Warp machine was unnecessary, since neural networks are fast at inference time. Processing one image took 0.75 seconds. On March 16, 1989, a new Navlab speed record of 1.3 m/s was set. The team discovered in June 1989 that online training works. [17]


References

  1. "Navlab: The Carnegie Mellon University Navigation Laboratory". The Robotics Institute. Retrieved 14 July 2011.
  2. "Robotics History: Narratives and Networks Oral Histories: Chuck Thorpe". IEEE.tv. 17 April 2015. Retrieved 2018-06-07.
  3. Todd Jochem; Dean Pomerleau; Bala Kumar & Jeremy Armstrong (1995). "PANS: A Portable Navigation Platform". The Robotics Institute. Retrieved 14 July 2011.
  4. Thorpe, C.; Hebert, M.H.; Kanade, T.; Shafer, S.A. (May 1988). "Vision and navigation for the Carnegie-Mellon Navlab". IEEE Transactions on Pattern Analysis and Machine Intelligence. 10 (3): 362–373. doi:10.1109/34.3900.
  5. Gross, Thomas; Lam, Monica (August 1998). "Retrospective: a retrospective on the Warp machines". ACM: 45–47. doi:10.1145/285930.285950. ISBN 978-1-58113-058-4.
  6. "Overview". NavLab. The Robotics Institute. Archived from the original on 8 August 2011. Retrieved 14 July 2011.
  7. Hawkins, Andrew J. (2016-11-27). "Meet ALVINN, the self-driving car from 1989". The Verge. Retrieved 2024-08-07.
  8. "Look, Ma, No Hands". Carnegie Mellon University. 31 December 2017. Retrieved 31 December 2017.
  9. Freeman, Mike (3 April 2017). "Connected Cars: The long road to autonomous vehicles". Center for Wireless Communications. Archived from the original on 1 January 2018. Retrieved 31 December 2017.
  10. Jochem, Todd (3 April 2015). "Back to the Future: Autonomous Driving in 1995 - Robotics Trends". www.roboticstrends.com. Archived from the original on 29 December 2017. Retrieved 31 December 2017.
  11. "THE 2008 INDUCTEES". The Robot Institute. Archived from the original on 26 September 2011. Retrieved 14 July 2011.
  12. Shirai, Yoshiaki; Hirose, Shigeo (2012). Attention and Custom for Safe Behavior. Springer Science & Business Media. p. 249. ISBN 978-1447115809.
  13. Pomerleau, Dean A. (1988). "ALVINN: An Autonomous Land Vehicle in a Neural Network". Advances in Neural Information Processing Systems. 1. Morgan-Kaufmann.
  14. Pomerleau, Dean (1990). "Rapidly Adapting Artificial Neural Networks for Autonomous Navigation". Advances in Neural Information Processing Systems. 3. Morgan-Kaufmann.
  15. Pomerleau, Dean A. (1990), "Neural Network Based Autonomous Navigation", Vision and Navigation, The Kluwer International Series in Engineering and Computer Science, vol. 93, Boston, MA: Springer US, pp. 83–93, doi:10.1007/978-1-4613-1533-9_5, ISBN 978-1-4612-8822-0, retrieved 2024-08-07.
  16. Pomerleau, Dean A. (1993). Neural Network Perception for Mobile Robot Guidance. Boston, MA: Springer US. doi:10.1007/978-1-4615-3192-0. ISBN   978-1-4613-6400-9.
  17. Crisman, Jill D.; Webb, Jon A. (1990), Thorpe, Charles E. (ed.), "The Warp Machine on Navlab", Vision and Navigation, vol. 93, Boston, MA: Springer US, pp. 309–347, doi:10.1007/978-1-4613-1533-9_14, ISBN 978-1-4612-8822-0, retrieved 2024-12-10.