Company type | Private |
---|---|
Industry | Industrial automation (hardware and software) |
Founded | 2015 |
Founders | Henrik Schumann-Olsen, Øystein Skotheim |
Headquarters | Oslo, Norway |
Key people | Thomas Embla Bonnerud (CEO) |
Products | 3D Vision Systems, Vision Sensors, Vision Software |
Number of employees | 80 (2023) |
Website | www.zivid.com |
Zivid is a Norwegian machine vision technology company headquartered in Oslo, Norway. It designs and sells 3D color cameras with vision software that are used in autonomous industrial robot cells, collaborative robot (cobot) cells and other industrial automation systems. [1] [2] [3] [4] [5]
The company's primary hardware products are the industrial Zivid 2+, Zivid 2, and Zivid One+ 3D color cameras. These are supported by companion software products: the Zivid Software Development Kit (SDK) and Zivid Studio, a graphical user interface (GUI).
Zivid 3D cameras are in use across a broad range of applications in different industries. These applications include bin-picking, assembly, and machine tending in automation and production. They are also being used in high-speed piece picking and parcel sorting in e-commerce and logistics.
The Zivid company (originally named Zivid Labs) was founded in 2015 by Henrik Schumann-Olsen and Øystein Skotheim, who were colleagues at SINTEF, Norway's largest independent research organization.
Henrik Schumann-Olsen and Øystein Skotheim worked together at SINTEF, conducting research into machine and robot vision solutions covering a range of different 3D imaging techniques. In 2010 Microsoft launched the Kinect motion sensor add-on for Xbox, integrating a new form of 3D depth camera.
Microsoft's Kinect enabled researchers and tech enthusiasts to modify an off-the-shelf 3D camera, and at SINTEF the vision team's concept of a 'Kinect for Industry' was born. By the end of 2014, a prototype product, named ShapeCrafter 3D, was introduced, showcasing 3D vision capabilities and color point clouds. ShapeCrafter was demonstrated for the first time at VISION 2014 in Stuttgart, Germany.
The Research Council of Norway provided NOK 6 million for further research into 3D industrial machine vision cameras, and Henrik Schumann-Olsen and Øystein Skotheim founded Zivid Labs as a spin-out from SINTEF.
In March 2017, Zivid Labs introduced its first mass-produced product, the Zivid One 3D color camera. The camera was rated IP65 for industrial use.
An upgraded version of the Zivid One, the Zivid One+, was launched in November 2018 at VISION 2018 in Stuttgart, Germany. The Zivid One+ product portfolio included three 3D color cameras spanning working distances from 30 centimetres (12 in) to 3 metres (9.8 ft). [6] In September 2018, logistics company DHL installed its first fully automated e-fulfilment robot in its warehouse in Beringe, Netherlands. The robotic system integrated the Zivid One 3D color camera and was used for de-palletizing, picking, and order-fulfilment operations. [7] [8] The Zivid One 3D camera received Red Dot's "Product Design" award, Vision Systems Design's "Gold Innovators Award" and inVISION Magazine's "Top Innovation Award". [9] [10] [11] Zivid appointed Thomas Embla Bonnerud as CEO. [12] [13] The company changed its name from Zivid Labs to Zivid.
Zivid introduced a new software development kit and graphical user interface in March 2019. The SDK provided Windows and Linux support, and included a re-engineered API and a second-generation vision engine. [14] The Zivid Studio GUI provided developers with a ready-to-use application for 3D point cloud capture, visualization and exploration. Zivid opened sales offices in China, South Korea, and North America, and appointed its first distributors in Canada, China, Japan, and the USA. [15]
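As a rough illustration of the developer workflow this SDK enables, the sketch below shows what a single point cloud capture might look like from Python. It assumes the open-source zivid-python wrapper and API names seen in published samples (zivid.Application, connect_camera, Settings, capture, copy_data); these are assumptions and may differ between SDK versions.

```python
# Minimal capture sketch using the zivid-python wrapper.
# API names (Application, connect_camera, Settings, capture, copy_data) are
# assumptions based on published samples and may differ between SDK versions.
import zivid

app = zivid.Application()          # initialize the SDK
camera = app.connect_camera()      # connect to the first available camera

# Single-acquisition settings; default parameters, not tuned values.
settings = zivid.Settings(acquisitions=[zivid.Settings.Acquisition()])

frame = camera.capture(settings)   # project patterns and assemble the frame
frame.save("capture.zdf")          # save in the native Zivid frame format

# Copy the organized point cloud out as NumPy arrays:
# one XYZ coordinate (in mm) and one RGBA color value per sensor pixel.
xyz = frame.point_cloud().copy_data("xyz")
rgba = frame.point_cloud().copy_data("rgba")
print(xyz.shape, rgba.shape)
```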
In November 2020, [16] Zivid announced Zivid 2, a faster, high-precision 3D color camera. More compact and lighter in weight than previous products, it was purpose-designed to suit both on-arm and stationary mounted applications.
Major updates to the Zivid software development kit were also announced in June and December 2020. The SDK 2.0 provided: stripe patterns to suppress interreflections, filtering to correct contrast distortion artifacts, enhanced HDR image capture sequencing, and multi-camera calibration.
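To give a sense of the HDR capture sequencing mentioned above, a single capture can be built from several acquisitions with different exposure parameters, which the SDK merges into one frame. The sketch below again assumes the zivid-python API; the aperture and exposure values are placeholders rather than recommended settings.

```python
# HDR capture sketch: several acquisitions merged into one frame.
# Assumed zivid-python API; aperture/exposure values are placeholders.
import datetime
import zivid

app = zivid.Application()
camera = app.connect_camera()

# Each acquisition uses a different aperture or exposure time so that both the
# dark and the bright parts of the scene are well exposed somewhere in the sequence.
acquisitions = [
    zivid.Settings.Acquisition(aperture=8.0,
                               exposure_time=datetime.timedelta(microseconds=10000)),
    zivid.Settings.Acquisition(aperture=4.0,
                               exposure_time=datetime.timedelta(microseconds=10000)),
    zivid.Settings.Acquisition(aperture=4.0,
                               exposure_time=datetime.timedelta(microseconds=40000)),
]
settings = zivid.Settings(acquisitions=acquisitions)

hdr_frame = camera.capture(settings)   # the SDK merges the acquisitions into one point cloud
hdr_frame.save("hdr_capture.zdf")
```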
To simplify camera mounting, Zivid announced a range of accessories in July 2020. For robot arm mounting, a camera mount, bracket, and extender to the ISO 9409-1-50-4-M6 coupling plate standard [17] [18] were introduced along with cable guide, power and data cables. For stationary applications, a reconfigurable pan and tilt camera mount was provided.
An all-in-one 3D camera developer kit bundle was introduced in November 2020, comprising: Zivid One+ or Zivid 2 camera, accessory set, in-field calibration board, tripod adapter and 2-year warranty.
In October 2022, [19] Zivid announced the Zivid 2 L100, designed to enable robotic picking in deeper, larger bins than are typical of the manufacturing industry. The L100 is built on the established Zivid 2 platform, and the original Zivid 2 was renamed the Zivid 2 M70.
In June 2023, [20] Zivid unveiled the Zivid 2+ family, comprising the M60, M130, and L110 3D camera models. This product line unified 5-megapixel 3D and 2D data, resulting in improved point cloud resolution and the ability to image transparent objects. The lineup is designed to cover a broad range of use cases, with working distances and measurement volumes suited to different applications.
"Transparency has long been considered an impossible feat in 3D machine vision, and as a leading innovator in 3D vision, Zivid strives to make the impossible possible. Years of R&D efforts are now manifesting themselves in a product that enables consistent and reliable captures from all but the clearest glass." [21]
To obtain a machine-readable three-dimensional image of a target object, the Zivid camera technology uses a technique known as structured light, or fringe projection, to arrive at a high-definition point cloud, a highly accurate set of data points in space. A defined grid pattern is projected onto an object in white LED light, and a 2D color image sensor captures any distortion of the pattern as it strikes the surface. [22] By merging multiple images, complete object depth and surface data are acquired and used to create a full-color 3D point cloud.

The Zivid 3D color camera integrates a 1920 x 1200 pixel image sensor to produce a high-quality 2.3 Mpixel point cloud, with XYZ coordinates, native RGB and contrast data for each individual pixel in the point cloud. A good point cloud is characterized by a high density of points and no missing data, yielding a lifelike 3D model of the captured scene.
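A simplified view of how fringe projection yields per-pixel depth: in a standard four-step phase-shifting scheme, four images of the scene under sinusoidal patterns shifted by 90 degrees each give a wrapped phase at every pixel, which, after phase unwrapping and triangulation against the projector (both omitted here), maps to a depth value. The NumPy sketch below illustrates only the wrapped-phase computation and the per-pixel XYZ-plus-RGB packing; it is a generic textbook illustration, not Zivid's proprietary vision engine.

```python
# Generic four-step phase-shifting sketch (illustrative; not Zivid's actual pipeline).
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Per-pixel wrapped phase from four images of sinusoidal fringe patterns
    shifted by 0, 90, 180 and 270 degrees."""
    return np.arctan2(i4.astype(float) - i2, i1.astype(float) - i3)

# Four captured pattern images (H x W); random placeholders stand in for real captures.
h, w = 1200, 1920
i1, i2, i3, i4 = (np.random.rand(h, w) for _ in range(4))
phase = wrapped_phase(i1, i2, i3, i4)          # values in [-pi, pi]

# After phase unwrapping and triangulation against the projector (omitted),
# each pixel gets a depth Z and, via the calibrated optics, X and Y coordinates.
# A colored point cloud then stores XYZ plus RGB for every pixel:
xyz = np.zeros((h, w, 3), dtype=np.float32)    # placeholder coordinates in mm
rgb = np.zeros((h, w, 3), dtype=np.uint8)      # color from the 2D image sensor
point_cloud = np.concatenate([xyz, rgb.astype(np.float32)], axis=-1)
print(point_cloud.shape)                       # (1200, 1920, 6): XYZRGB per pixel
```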
Model | Applications | Optimal range | Maximum range | Field of view | Spatial resolution | Point precision |
---|---|---|---|---|---|---|
Zivid 2+ M60 | Assembly, robot guiding, inspection | 350 mm - 900 mm | 1100 mm | 570 mm x 460 mm @ 600 mm | 0.24 mm @ 600 mm | 80 μm @ 600 mm |
Zivid 2+ M130 | Piece picking, logistics, bin picking | 1000 mm - 1600 mm | 2000 mm | 790 mm x 650 mm @ 1300 mm | 0.32 mm @ 1300 mm | 210 μm @ 1300 mm |
Zivid 2+ L110 | Depalletizing, big bin-picking | 800 mm - 1400 mm | 1700 mm | 1090 mm x 850 mm @ 1100 mm | 0.44 mm @ 1100 mm | 240 μm @ 1100 mm |
Model | Applications | Optimal range | Maximum range | Field of view | Spatial resolution | Point precision |
---|---|---|---|---|---|---|
Zivid 2 M70 | Tiny to large objects, stationary and on-arm robot mounted | 300 mm - 1200 mm | 1500 mm | 754 mm x 449 mm @ 700 mm | 0.39 mm @ 700 mm | 55 μm @ 700 mm |
Zivid 2 L100 | Bin-picking, deeper bins and long grippers, item picking | 800 mm - 1400 mm | 1600 mm | 1147 mm x 680 mm @ 1000 mm | 0.56 mm @ 1000 mm | 130 μm @ 1000 mm |
Model | Applications | Optimal range | Maximum range | Field of view | Spatial resolution | Point precision |
---|---|---|---|---|---|---|
Zivid One+ Small | Tiny and small objects, trays and boxes | 300 mm - 800 mm | 1000 mm | 164 mm x 132 mm @ 300 mm | 0.12 mm @ 300 mm | 30 μm @ 300 mm |
Zivid One+ Medium | Small to medium-sized objects, totes and bins | 600 mm - 1600 mm | 2000 mm | 433 mm x 271 mm @ 600 mm | 0.23 mm @ 600 mm | 60 μm @ 600 mm |
Zivid One+ Large | Medium to large-sized objects, standard EU/USA pallets | 1200 mm - 2600 mm | 3000 mm | 843 mm x 530 mm @ 1200 mm | 0.45 mm @ 1200 mm | 300 μm @ 1200 mm |
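As a small worked example of reading these tables, the optimal-range column can be used to shortlist models for a required working distance. The helper below hard-codes a few of the ranges listed above; it is a hypothetical illustration, not a Zivid tool.

```python
# Hypothetical helper: shortlist camera models whose optimal range (taken from
# the tables above, in millimetres) covers a required working distance.
OPTIMAL_RANGE_MM = {
    "Zivid 2+ M60": (350, 900),
    "Zivid 2+ M130": (1000, 1600),
    "Zivid 2+ L110": (800, 1400),
    "Zivid 2 M70": (300, 1200),
    "Zivid 2 L100": (800, 1400),
}

def models_for_distance(distance_mm):
    """Return the models whose optimal working range covers distance_mm."""
    return [model for model, (near, far) in OPTIMAL_RANGE_MM.items()
            if near <= distance_mm <= far]

print(models_for_distance(1100))
# ['Zivid 2+ M130', 'Zivid 2+ L110', 'Zivid 2 M70', 'Zivid 2 L100']
```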
The Zivid 3D color cameras and software are being used as the machine vision sub-system for a variety of autonomous industrial robot cells, collaborative robot cells and other industrial automation systems.
The cameras are applied to tasks including random bin picking, pick-and-place, de-palletizing, assembly, packaging and quality inspection in a range of different manufacturing and logistics sectors. [23]
The company name Zivid was derived by combining the English word ‘Vivid’, meaning very bright, clear and detailed, with the letter ‘Z’, the depth parameter in a 3D image.
Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g. in the forms of decisions. Understanding in this context means the transformation of visual images into descriptions of the world that make sense to thought processes and can elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.
AIBO is a series of robotic dogs designed and manufactured by Sony. Sony announced a prototype Aibo in mid-1998, and the first consumer model was introduced on 11 May 1999. New models were released every year until 2006. Although most models were dogs, other inspirations included lion cubs and space explorers. Only the ERS-7, ERS-110/111 and ERS-1000 versions were explicitly "robotic dogs", but the ERS-210 can also be considered a dog due to its Jack Russell Terrier appearance and face. In 2006, AIBO was added into the Carnegie Mellon University Robot Hall of Fame.
Machine vision is the technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection, process control, and robot guidance, usually in industry. Machine vision refers to many technologies, software and hardware products, integrated systems, actions, methods and expertise. Machine vision as a systems engineering discipline can be considered distinct from computer vision, a form of computer science. It attempts to integrate existing technologies in new ways and apply them to solve real world problems. The term is the prevalent one for these functions in industrial automation environments but is also used for these functions in other environments, such as vehicle guidance.
The Norwegian Institute of Technology was a science institute in Trondheim, Norway. It was established in 1910, and existed as an independent technical university for 58 years, after which it was merged into the University of Trondheim as an independent college.
FANUC is a Japanese group of companies that provide automation products and services such as robotics and computer numerical control (CNC) systems. These companies are principally FANUC Corporation of Japan, Fanuc America Corporation of Rochester Hills, Michigan, USA, and FANUC Europe Corporation S.A. of Luxembourg.
3D scanning is the process of analyzing a real-world object or environment to collect three dimensional data of its shape and possibly its appearance. The collected data can then be used to construct digital 3D models.
The Industrial Technology Research Institute is a technology research and development institution in Taiwan. It was founded in 1973 and is headquartered in Hsinchu City, Taiwan, with branch offices in the U.S., Europe, and Japan.
Industrial paint robots have been used for decades in automotive paint applications.
In computer vision and computer graphics, 3D reconstruction is the process of capturing the shape and appearance of real objects. This process can be accomplished either by active or passive methods. If the model is allowed to change its shape in time, this is referred to as non-rigid or spatio-temporal reconstruction.
A vision-guided robot (VGR) system is basically a robot fitted with one or more cameras used as sensors to provide a secondary feedback signal to the robot controller to more accurately move to a variable target position. VGR is rapidly transforming production processes by enabling robots to be highly adaptable and more easily implemented, while dramatically reducing the cost and complexity of fixed tooling previously associated with the design and set up of robotic cells, whether for material handling, automated assembly, agricultural applications, life sciences, and more.
A time-of-flight camera, also known as time-of-flight sensor, is a range imaging camera system for measuring distances between the camera and the subject for each point of the image based on time-of-flight, the round trip time of an artificial light signal, as provided by a laser or an LED. Laser-based time-of-flight cameras are part of a broader class of scannerless LIDAR, in which the entire scene is captured with each laser pulse, as opposed to point-by-point with a laser beam such as in scanning LIDAR systems. Time-of-flight camera products for civil applications began to emerge around 2000, as the semiconductor processes allowed the production of components fast enough for such devices. The systems cover ranges of a few centimeters up to several kilometers.
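As a worked example of the time-of-flight principle described above: the distance to a point is half the round-trip time of the light pulse multiplied by the speed of light.

```python
# Time-of-flight distance: d = c * t_round_trip / 2
SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_distance_m(round_trip_time_s):
    """Distance in metres for a given round-trip time of the light pulse."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

# A round trip of about 6.67 nanoseconds corresponds to roughly 1 metre.
print(tof_distance_m(6.67e-9))   # ~1.0
```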
PrimeSense was an Israeli 3D sensing company based in Tel Aviv. PrimeSense had offices in Israel, North America, Japan, Singapore, Korea, China and Taiwan. PrimeSense was bought by Apple Inc. for $360 million on November 24, 2013.
Brainlab is a privately held German medical technology company headquartered in Munich, Bavaria. Brainlab develops software and hardware for radiotherapy and radiosurgery, and the surgical fields of neurosurgery, ENT and craniomaxillofacial, spine surgery, and traumatic interventions. Their products focus on image-guided surgery and radiosurgery, digital operating room integration technologies, and cloud-based data sharing.
Intel RealSense Technology, formerly known as Intel Perceptual Computing, is a product range of depth and tracking technologies designed to give machines and devices depth perception capabilities. The technologies, owned by Intel, are used in autonomous drones, robots, AR/VR, smart home devices, and many other broad-market products.
PMD Technologies is a developer of CMOS semiconductor 3D time-of-flight (ToF) components and a provider of engineering support in the field of digital 3D imaging. The company is named after the Photonic Mixer Device (PMD) technology used in its products to detect 3D data in real time. The corporate headquarters of the company is located in Siegen, Germany.
Visage SDK is a multi-platform software development kit (SDK) created by Visage Technologies AB. Visage SDK allows software programmers to build facial motion capture and eye tracking applications.
Smart manufacturing is a broad category of manufacturing that employs computer-integrated manufacturing, high levels of adaptability and rapid design changes, digital information technology, and more flexible technical workforce training. Other goals sometimes include fast changes in production levels based on demand, optimization of the supply chain, efficient production and recyclability. In this concept, a smart factory has interoperable systems, multi-scale dynamic modelling and simulation, intelligent automation, strong cyber security, and networked sensors.
The Aphelion Imaging Software Suite is a software suite that includes three base products - Aphelion Lab, Aphelion Dev, and Aphelion SDK for addressing image processing and image analysis applications. The suite also includes a set of extension programs to implement specific vertical applications that benefit from imaging techniques.
Air-Cobot (Aircraft Inspection enhanced by smaRt & Collaborative rOBOT) is a French research and development project of a wheeled collaborative mobile robot able to inspect aircraft during maintenance operations. This multi-partner project involves research laboratories and industry. Research around this prototype was developed in three domains: autonomous navigation, human-robot collaboration and nondestructive testing.
Scandit AG, commonly referred to as Scandit, is a Swiss technology company that provides smart data capture software. Their technology allows any smart device equipped with a camera to scan barcodes, IDs and text and to perform additional functions using augmented reality and advanced analytics.