Bin picking

Bin picking (also referred to as random bin picking) is a core problem in computer vision and robotics. The goal is for a robot, guided by attached sensors and cameras, to pick up known objects with random poses out of a bin using a suction gripper, parallel gripper, or another kind of robot end effector. Early work on bin picking used photometric stereo [1] to recover the shapes of objects and to determine their orientation in space.
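The photometric-stereo idea behind that early work can be sketched in a few lines: under a Lambertian reflectance model, the brightness of a surface patch seen under several known light directions determines its surface normal by least squares. This is a minimal illustrative sketch, not the method of the cited paper; the function name and example data are invented for the example.

```python
import numpy as np

def surface_normal(intensities, light_dirs):
    """Recover albedo and unit surface normal at one pixel.

    intensities: (k,) observed brightness under k known lights
    light_dirs:  (k, 3) unit vectors toward each distant light source
    Assumes a Lambertian surface: I_i = albedo * (L_i . n).
    """
    L = np.asarray(light_dirs, dtype=float)
    I = np.asarray(intensities, dtype=float)
    # Solve L @ g = I for g = albedo * n in the least-squares sense.
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    albedo = float(np.linalg.norm(g))
    return albedo, g / albedo

# Illustrative data: three lights, and the brightness a patch with
# albedo 0.5 and normal (0, 0, 1) would show under each of them.
s = 1 / np.sqrt(2)
lights = np.array([[0.0, 0.0, 1.0],
                   [s,   0.0, s],
                   [0.0, s,   s]])
obs = 0.5 * lights @ np.array([0.0, 0.0, 1.0])
albedo, n = surface_normal(obs, lights)
# albedo ~ 0.5, n ~ [0, 0, 1]
```

With at least three non-coplanar light directions the linear system is fully determined; additional lights overconstrain it, which is why the least-squares solver is used.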

Amazon previously held a competition focused on bin picking, the "Amazon Picking Challenge", from 2015 to 2017. [2] The challenge tasked entrants with building their own robot hardware and software to attempt simplified versions of the general task of picking and stowing items on shelves. Robots were scored by how many items they picked and stowed in a fixed amount of time. [3] The first challenge was won by a team from TU Berlin in 2015, [4] followed by a team from TU Delft and the Dutch company Fizyr in 2016. [5] The final competition, by then renamed the Amazon Robotics Challenge, was won in 2017 by the Australian Centre for Robotic Vision at Queensland University of Technology with their robot Cartman. [6] The challenge was discontinued after the 2017 competition.

Although there can be some overlap, bin picking is distinct from "each picking" [7] [8] and the bin packing problem.

See also

  - Computer vision
  - Autonomous robot
  - Industrial robot
  - Image analysis
  - Scale-invariant feature transform
  - Pose (computer vision)
  - Robot software
  - Robotic arm
  - Agricultural robot
  - Histogram of oriented gradients
  - Object recognition
  - Photometric stereo
  - Vision-guided robot systems
  - Robotics
  - Robotic sensors

References

  1. Robert J. Woodham (1980). "Photometric Method for Determining Surface Orientation from Multiple Images". Optical Engineering. 19 (1).
  2. "Amazon Picking Challenge - RoboCup -". Robocup2016.org. Archived from the original on 2018-08-14. Retrieved 2018-08-16.
  3. "Challenge Rules" (PDF).
  4. "2015 Results" (PDF).
  5. "2016 Winner".
  6. "2017 Results".
  7. "Fully Automated Random Each Picking…..no really" (PDF). Mhlc.com. Archived from the original (PDF) on 17 August 2018. Retrieved 17 August 2018.
  8. "NSF Award Search: Award#1632460 - SBIR Phase II: Versatile Robot Hands for Warehouse Automation". Nsf.gov.