Bin picking

Bin picking (also referred to as random bin picking) is a core problem in computer vision and robotics. The goal is for a robot equipped with sensors and cameras to pick up known objects with random poses out of a bin using a suction gripper, parallel gripper, or other kind of robot end effector. Early work on bin picking used photometric stereo [1] to recover the shapes of objects and determine their orientation in space.
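One step of a bin-picking pipeline can be sketched in a few lines: choosing a suction-grasp candidate from a depth image. This is a minimal illustration only, assuming an overhead depth camera where smaller values mean closer to the camera; the function name and synthetic data are hypothetical, not from any particular system.

```python
import numpy as np

def suction_grasp_candidate(depth_map):
    """Return (row, col) of the topmost surface point in the bin.

    Assumes an overhead depth camera in which smaller depth values
    are closer to the camera, so the minimum is the highest object.
    """
    r, c = np.unravel_index(np.argmin(depth_map), depth_map.shape)
    return int(r), int(c)

# Synthetic 4x4 depth map in metres; the object at (1, 2) sticks up
# above the rest of the bin contents.
depth = np.full((4, 4), 0.60)
depth[1, 2] = 0.45
print(suction_grasp_candidate(depth))  # (1, 2)
```

A real system would first segment individual objects and check reachability and collision constraints before committing to a grasp point.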

Amazon previously held a competition focused on bin picking, known as the "Amazon Picking Challenge" and, in its final year, the "Amazon Robotics Challenge", which ran from 2015 to 2017. [2] The challenge tasked entrants with building their own robot hardware and software to attempt simplified versions of the general task of picking and stowing items on shelves. The robots were scored by how many items they picked and stowed in a fixed amount of time. [3] The 2015 challenge was won by a team from TU Berlin, [4] followed in 2016 by a team from TU Delft and the Dutch company "Fizyr". [5] The final challenge, in 2017, was won by the Australian Centre for Robotic Vision at Queensland University of Technology with their robot named Cartman. [6] The competition was discontinued after the 2017 edition.

Although there can be some overlap, bin picking is distinct from "each picking" [7] [8] and the bin packing problem.

Related Research Articles

Computer vision tasks include methods for acquiring, processing, analyzing, and understanding digital images, and for extracting high-dimensional data from the real world in order to produce numerical or symbolic information, e.g. in the form of decisions. "Understanding" in this context signifies the transformation of visual images into descriptions of the world that make sense to thought processes and can elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.

An autonomous robot is a robot that acts without recourse to human control. Historic examples include space probes. Modern examples include self-driving vacuums and cars.

Industrial robot

An industrial robot is a robot system used for manufacturing. Industrial robots are automated, programmable and capable of movement on three or more axes.

FIRST Lego League Challenge

The FIRST LEGO League Challenge is an international competition organized by FIRST for elementary and middle school students.

The scale-invariant feature transform (SIFT) is a computer vision algorithm to detect, describe, and match local features in images, invented by David Lowe in 1999. Applications include object recognition, robotic mapping and navigation, image stitching, 3D modeling, gesture recognition, video tracking, individual identification of wildlife and match moving.
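The matching stage mentioned above is commonly done with Lowe's ratio test: a nearest-neighbour match is kept only if it is clearly closer than the second-nearest. The sketch below uses toy 4-D vectors as stand-ins for real 128-D SIFT descriptors; the function name and data are illustrative assumptions.

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.75):
    """Match descriptors with Lowe's ratio test.

    For each descriptor in desc_a, keep its nearest neighbour in
    desc_b only if it is clearly closer than the second nearest.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches

# Toy descriptors: a[0] has one clearly closest match in b,
# while a[1] is ambiguous (two near-equal neighbours) and is rejected.
a = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0]])
b = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.1, 0.0, 0.0],
              [0.5, 0.6, 0.0, 0.0],
              [0.6, 0.5, 0.0, 0.0]])
print(ratio_test_match(a, b))  # [(0, 1)]
```

Discarding ambiguous matches this way removes most false correspondences at the cost of a few true ones, which is usually a good trade for tasks like image stitching and object recognition.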

FANUC

FANUC is a Japanese group of companies that provide automation products and services such as robotics and computer numerical control (CNC) systems. These companies are principally FANUC Corporation of Japan, Fanuc America Corporation of Rochester Hills, Michigan, USA, and FANUC Europe Corporation S.A. of Luxembourg.

In the fields of computing and computer vision, pose represents the position and orientation of an object, usually in three dimensions. Poses are often stored internally as transformation matrices. The term "pose" is largely synonymous with the term "transform", but a transform may often include scale, whereas pose does not.
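Stored as a 4x4 homogeneous transformation matrix, a pose combines a rotation block and a translation column. The sketch below uses a yaw-only rotation for simplicity; the function name and frames are illustrative assumptions.

```python
import numpy as np

def pose_matrix(yaw_rad, t):
    """Build a 4x4 homogeneous pose from a yaw rotation and translation.

    A pose is rotation plus translation only; unlike a general
    transform it carries no scale, so the 3x3 block stays orthonormal.
    """
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = t
    return T

# Map a point from the object frame into the camera frame.
T_cam_obj = pose_matrix(np.pi / 2, [1.0, 2.0, 0.0])
p_obj = np.array([1.0, 0.0, 0.0, 1.0])   # homogeneous point
p_cam = T_cam_obj @ p_obj
print(np.round(p_cam[:3], 6))  # [1. 3. 0.]
```

Because the rotation block is orthonormal, the inverse pose is cheap to compute: transpose the rotation and negate the rotated translation.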

Robot software is the set of coded commands or instructions that tell a mechanical device and electronic system, known together as a robot, what tasks to perform. Robot software is used to perform autonomous tasks. Many software systems and frameworks have been proposed to make programming robots easier.

Robotic arm

A robotic arm is a type of mechanical arm, usually programmable, with similar functions to a human arm; the arm may be the sum total of the mechanism or may be part of a more complex robot. The links of such a manipulator are connected by joints allowing either rotational motion or translational (linear) displacement. The links of the manipulator can be considered to form a kinematic chain. The terminus of the kinematic chain of the manipulator is called the end effector and it is analogous to the human hand. However, the term "robotic hand" as a synonym of the robotic arm is often proscribed.
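The kinematic chain described above can be made concrete with forward kinematics for a planar two-link arm: each revolute joint adds its angle along the chain, and the end-effector position follows from the link lengths. This is a textbook sketch, not any specific robot's model.

```python
import math

def forward_kinematics(l1, l2, theta1, theta2):
    """End-effector (x, y) of a planar 2-link arm.

    theta1 is measured from the base x-axis and theta2 from link 1,
    so angles accumulate along the kinematic chain.
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Fully extended along x: both joint angles zero.
print(forward_kinematics(1.0, 0.5, 0.0, 0.0))  # (1.5, 0.0)

# Elbow bent 90 degrees: link 2 points straight up.
x, y = forward_kinematics(1.0, 0.5, 0.0, math.pi / 2)
```

Inverse kinematics, recovering joint angles from a desired end-effector pose, is the harder direction and generally has multiple (or no) solutions.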

An agricultural robot is a robot deployed for agricultural purposes. The main area of application of robots in agriculture today is at the harvesting stage. Emerging applications of robots or drones in agriculture include weed control, cloud seeding, planting seeds, harvesting, environmental monitoring and soil analysis. According to Verified Market Research, the agricultural robots market is expected to reach $11.58 billion by 2025.

The histogram of oriented gradients (HOG) is a feature descriptor used in computer vision and image processing for the purpose of object detection. The technique counts occurrences of gradient orientation in localized portions of an image. This method is similar to that of edge orientation histograms, scale-invariant feature transform descriptors, and shape contexts, but differs in that it is computed on a dense grid of uniformly spaced cells and uses overlapping local contrast normalization for improved accuracy.
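The counting step can be sketched for a single cell. This simplified version, assuming unsigned orientations and hard binning, omits the bilinear vote interpolation and overlapping block normalization that full HOG adds; the function name is illustrative.

```python
import numpy as np

def cell_histogram(patch, n_bins=9):
    """Unsigned gradient-orientation histogram for one HOG cell.

    Computes finite-difference gradients, then lets each pixel vote
    into an orientation bin (0-180 degrees) weighted by magnitude.
    """
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, 180.0),
                           weights=mag)
    return hist

# A vertical step edge: gradients point horizontally, so all the
# votes land in the bin around 0 degrees.
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0
h = cell_histogram(patch)
print(int(np.argmax(h)))  # 0
```

Concatenating such histograms over a dense grid of cells, with per-block contrast normalization, yields the final HOG descriptor used for detection.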

Object recognition is technology in the field of computer vision for finding and identifying objects in an image or video sequence. Humans recognize a multitude of objects in images with little effort, even though the image of an object may vary with viewpoint, size, and scale, or when the object is translated or rotated. Objects can even be recognized when they are partially obstructed from view. This task remains a challenge for computer vision systems; many approaches have been implemented over multiple decades.

There are a number of competitions and prizes to promote research in artificial intelligence.

Photometric stereo

Photometric stereo is a technique in computer vision for estimating the surface normals of objects by observing them under different lighting conditions (photometry). It is based on the fact that the amount of light reflected by a surface depends on the orientation of the surface relative to the light source and the observer. By measuring the amount of light reflected into a camera, the space of possible surface orientations is limited. Given enough light sources from different angles, the surface orientation may be constrained to a single orientation or even overconstrained.
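For a Lambertian surface the recovery reduces to linear least squares: intensity is proportional to the dot product of the lighting direction and the normal, so stacking observations gives a solvable system. The sketch below uses three synthetic axis-aligned lights and a made-up normal purely for illustration.

```python
import numpy as np

def surface_normal(L, I):
    """Recover one surface normal via Lambertian photometric stereo.

    L: (k, 3) unit lighting directions; I: (k,) observed intensities.
    The model I = rho * (L @ n) is solved by least squares for
    g = rho * n; normalising g separates albedo from orientation.
    """
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    rho = np.linalg.norm(g)
    return g / rho, rho

# Three lights along the coordinate axes observe a tilted surface
# with a known normal and albedo 0.5 (synthetic ground truth).
n_true = np.array([0.0, 0.6, 0.8])
L = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
I = 0.5 * (L @ n_true)
n_est, rho_est = surface_normal(L, I)
print(np.round(n_est, 3), round(float(rho_est), 3))
# recovers the unit normal [0, 0.6, 0.8] and albedo 0.5
```

With more than three lights the system is overdetermined, and least squares averages out sensor noise; real implementations must also handle shadows and specular highlights that violate the Lambertian assumption.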

A vision-guided robot (VGR) system is a robot fitted with one or more cameras that provide a secondary feedback signal to the robot controller, allowing it to move more accurately to a variable target position. VGR is transforming production processes by making robots highly adaptable and easier to deploy, while dramatically reducing the cost and complexity of the fixed tooling previously associated with the design and set-up of robotic cells, whether for material handling, automated assembly, agricultural applications, or life sciences.

Student Robotics

Student Robotics is a registered charity that runs an annual robotics competition for teams of 16-to-19-year-olds. The charity aims to foster a world where engineering and artificial intelligence are accessible to young people, with a stated mission "to bring the excitement of engineering and the challenge of coding to young people through robotics". The competition is free to enter, and teams are provided with all of the core electronics they need to build a robot. To encourage creative and ingenious solutions, constraints on design are kept to a minimum, and students can build their robots from any materials they choose; this results in a wide range of quirky, original robots. The robots must operate autonomously: once they are switched on to compete, no interference from the team is allowed.

Robotic sensors

Robotic sensors are used to estimate a robot's condition and environment. These signals are passed to a controller to enable appropriate behavior.

NimbRo

NimbRo is the robot competition team of the Autonomous Intelligent Systems group of University of Bonn, Germany. It was founded in 2004 at the University of Freiburg, Germany.

Zivid is a Norwegian machine vision technology company headquartered in Oslo, Norway. It designs and sells 3D color cameras with vision software that are used in autonomous industrial robot cells, collaborative robot (cobot) cells and other industrial automation systems.

Andy Zeng is an American computer scientist and AI engineer at Google DeepMind. He is best known for his research in robotics and machine learning, including robot learning algorithms that enable machines to intelligently interact with the physical world and improve themselves over time. Zeng was a recipient of the Gordon Y.S. Wu Fellowship in Engineering and Wu Prize in 2016, and the Princeton SEAS Award for Excellence in 2018.

References

  1. Robert J. Woodham (1980). "Photometric Method for Determining Surface Orientation from Multiple Images". Optical Engineering. 19 (1): 139. Bibcode:1980OptEn..19..139W. doi:10.1117/12.7972479.
  2. "Amazon Picking Challenge - RoboCup -". Robocup2016.org. Archived from the original on 2018-08-14. Retrieved 2018-08-16.
  3. "Challenge Rules" (PDF).
  4. "2015 Results" (PDF).
  5. "2016 Winner". 5 July 2016.
  6. "2017 Results".
  7. "Fully Automated Random Each Picking…..no really" (PDF). Mhlc.com. Archived from the original (PDF) on 17 August 2018. Retrieved 17 August 2018.
  8. "NSF Award Search: Award#1632460 - SBIR Phase II: Versatile Robot Hands for Warehouse Automation". Nsf.gov.