Company type | Subsidiary |
---|---|
Industry | Semiconductors |
Founded | 1995 |
Founder | Aucera Technology (Taiwan) |
Headquarters | Santa Clara, California, U.S. |
Area served | Worldwide |
Key people | Renrong Yu, Shaw Hong |
Products | Image sensor technologies |
Revenue | $1.379B |
Owner | Will Semiconductor |
Number of employees | 2,200 (2015) [1] |
Website | ovt.com |
OmniVision Technologies Inc. is an American subsidiary of Chinese semiconductor device and mixed-signal integrated circuit design house Will Semiconductor. [2] [3] The company designs and develops digital imaging products for use in mobile phones, laptops, netbooks and webcams, security and surveillance cameras, entertainment, automotive and medical imaging systems. Headquartered in Santa Clara, California, OmniVision Technologies has offices in the US, Western Europe and Asia. [4]
In 2016, OmniVision was acquired by a consortium of Chinese investors consisting of Hua Capital Management Co., Ltd., CITIC Capital and Goldstone Investment Co., Ltd. [5]
OmniVision was founded in 1995 by Aucera Technology (Taiwan: 奧斯來科技).
Some company milestones:
OmniVision's front-side illumination (FSI) technology is used to manufacture compact cameras in mobile handsets, notebook computers and other applications that require low-light performance without the need for flash.
OmniPixel3-GS expands on its predecessor and is used for eye tracking in facial authentication [14] and other computer vision applications.
Backside illumination (BSI) technology differs from FSI architectures in how light is delivered to the photosensitive area of the sensor. In FSI architectures, light must first pass through transistors, dielectric layers, and metal circuitry. In contrast, OmniBSI technology turns the image sensor upside down and applies the color filters and microlenses to the back side of the pixels, so that light is collected through the back of the sensor.
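For intuition only, the architectural difference can be reduced to a toy light-budget calculation. The per-layer transmission values in the sketch below are invented for illustration and do not describe any particular OmniVision sensor; only the ordering of the layers follows the description above.

```python
# Toy light-budget comparison of FSI vs BSI pixels. The per-layer transmission
# values are illustrative assumptions only, chosen to show why removing the
# metal/dielectric stack from the light path improves sensitivity.
def photons_collected(incident, layer_transmissions):
    """Multiply an incident photon count by the transmission of each layer crossed."""
    for t in layer_transmissions:
        incident *= t
    return incident

incident_photons = 10_000
fsi = photons_collected(incident_photons, [0.95, 0.80, 0.85])  # microlens, metal routing, dielectrics (assumed)
bsi = photons_collected(incident_photons, [0.95])              # light enters through the thinned backside

print(f"FSI collects ~{fsi:.0f} photons, BSI collects ~{bsi:.0f}")
```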
The second-generation BSI technology, developed in cooperation with Taiwan Semiconductor Manufacturing Company Limited (TSMC), is built using custom 65 nm design rules and a 300 mm copper process. These changes were made to improve low-light sensitivity, dark current, and full-well capacity, and to provide a sharper image.
In this camera module, sensor and lens manufacturing processes are combined using a semiconductor stacking methodology. Wafer-level optical elements are fabricated in a single step by combining CMOS image sensors, chip-scale packaging (CSP) processes, and wafer-level optics (WLO). These fully integrated chip products have camera functionality and are intended to produce thin, compact devices.
RGB-iR technology uses a color filter process to improve color fidelity. By committing 25% of its pixel array pattern to infrared (IR) and 75% to RGB, it can simultaneously capture both RGB and IR images. This makes it possible to capture both day and night images with the same sensor. It is used in battery-powered home security cameras as well as for biometric authentication, such as gesture and facial recognition. [15]
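As an illustration of the 25%/75% split described above, the sketch below builds a hypothetical RGB-IR mosaic from a 2 × 2 unit cell containing one red, one green, one blue, and one IR pixel, and separates a raw capture into visible and IR planes. The unit cell and all values are assumptions for demonstration, not OmniVision's actual filter layout.

```python
# Illustrative sketch only: a toy RGB-IR color filter array (CFA) in which one
# pixel in every 2x2 cell is dedicated to infrared (25% IR, 75% RGB), as the
# paragraph above describes. OmniVision's actual filter layout may differ.
import numpy as np

H, W = 4, 8                                   # a tiny sensor for demonstration
pattern = np.array([["R", "G"],
                    ["B", "IR"]])             # hypothetical 2x2 unit cell
cfa = np.tile(pattern, (H // 2, W // 2))      # repeat the unit cell over the array

raw = np.random.randint(0, 1024, size=(H, W))  # pretend 10-bit raw capture

# Split the single raw frame into a sparse visible-light image and an IR image.
ir_plane  = np.where(cfa == "IR", raw, 0)      # 25% of samples
rgb_plane = np.where(cfa != "IR", raw, 0)      # 75% of samples

print("IR pixel fraction:", np.mean(cfa == "IR"))   # -> 0.25
```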
OmniVision developed its PureCel and PureCel Plus image sensor technology to provide added camera functionality to smartphones and action cameras. The technical goal was to provide smaller camera modules that enable larger optical formats and offer improved image quality, especially in low-light conditions. [16]
Both of these technologies are offered in a stacked-die format (PureCel-S and PureCelPlus-S). This methodology separates the imaging array and the image-signal-processing pipeline onto separate stacked dies, allowing additional functionality to be implemented on the sensor while yielding much smaller die sizes than non-stacked sensors. PureCelPlus-S uses partial backside deep trench isolation (B-DTI) structures comprising an interfacial oxide, first-deposited HfO, TaO, an oxide, a Ti-based liner, and a tungsten core. This is OmniVision's first DTI structure, and the first metal-filled B-DTI trench since 2013. [17]
PureCel Plus uses a buried color filter array (BCFA) to improve tolerance to light arriving at varying incident angles. Deep trench isolation reduces crosstalk by creating isolation walls between pixels within the silicon. In PureCel Plus Gen 2, OmniVision set out to improve deep trench isolation for better pixel isolation and low-light performance. Its target application is smartphone video cameras. [18]
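A minimal sketch of why isolation between pixels matters: the model below treats crosstalk as a fixed fraction of each pixel's charge leaking into its four neighbours. The leakage fractions are arbitrary assumed values, not measured figures for any PureCel Plus sensor.

```python
# Toy model of pixel-to-pixel crosstalk: a fixed fraction of each pixel's charge
# leaks equally into its four neighbours. The leakage fractions are illustrative
# assumptions, used only to show how stronger isolation (e.g. deep trench
# isolation) keeps more of the signal in the pixel that generated it.
import numpy as np

def apply_crosstalk(image, leak):
    out = image * (1.0 - leak)
    share = image * (leak / 4.0)
    # shift the leaked charge into the four nearest neighbours
    out[1:, :]  += share[:-1, :]   # down
    out[:-1, :] += share[1:, :]    # up
    out[:, 1:]  += share[:, :-1]   # right
    out[:, :-1] += share[:, 1:]    # left
    return out

scene = np.zeros((5, 5)); scene[2, 2] = 1000.0        # one bright pixel
print(apply_crosstalk(scene, leak=0.20)[2, 2])        # weak isolation (assumed 20% leak)
print(apply_crosstalk(scene, leak=0.05)[2, 2])        # with DTI (assumed 5% leak)
```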
Developed to address the low-light and night-vision performance requirements of advanced machine vision, surveillance, and automotive camera applications, OmniVision's Nyxel NIR imaging technology combines thick-silicon pixel architectures and careful management of the wafer surface texture to improve quantum efficiency (QE). In addition, extended deep trench isolation helps retain modulation transfer function without affecting the sensor's dark current, further improving night vision capabilities. [19] Performance improvements include image quality, extended image-detection range and a reduced light-source requirement, leading to overall lower system power consumption. [20]
This second-generation near-infrared technology improves upon the first generation by increasing the silicon thickness to improve imaging sensitivity. Deep trench isolation was extended to address crosstalk without impacting the modulation transfer function. The wafer surface has been refined to extend the photon path and increase photon-to-electron conversion. The sensor achieves a 25% improvement in sensitivity at the invisible 940 nm NIR wavelength and a 17% increase at the barely visible 850 nm NIR wavelength over the first-generation technology. [21]
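As a quick worked example of what those relative gains mean, the snippet below applies the quoted 25% and 17% improvement factors to assumed first-generation sensitivity values; the baseline numbers are placeholders, not published data.

```python
# Worked example of the quoted relative gains. The first-generation sensitivity
# values are assumed placeholders; only the 25% (940 nm) and 17% (850 nm)
# improvement factors come from the text above.
gen1_sensitivity = {"850 nm": 0.40, "940 nm": 0.30}   # assumed baselines
improvement      = {"850 nm": 1.17, "940 nm": 1.25}   # from the stated 17% and 25% gains

for wavelength, value in gen1_sensitivity.items():
    print(f"{wavelength}: {value:.2f} -> {value * improvement[wavelength]:.2f}")
```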
High-dynamic-range (HDR) imaging relies on algorithms to combine several image captures into one, creating a higher-quality image than a single native capture. LED lighting can create a flicker effect with HDR. This is a problem for machine vision systems, such as those used in autonomous vehicles, because LEDs are ubiquitous in automotive environments, from headlights to traffic lights, road signs and beyond. While the human eye can adapt to LED flickering, machine vision cannot. To mitigate this effect, OmniVision uses split-pixel technology: one large photodiode captures the scene using a short exposure time, while a small photodiode using a long exposure simultaneously captures the LED signal. The two images are then joined into a final, flicker-free picture. [22]
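A minimal sketch of the split-pixel fusion idea, under simplified assumptions: each pixel location yields a short-exposure sample (large photodiode) and a long-exposure sample (small photodiode), the two are normalised to a common scale, and the long-exposure sample is kept wherever it has not saturated. The exposure times and saturation threshold are invented for illustration; this is not OmniVision's actual pipeline.

```python
# Toy split-pixel exposure fusion. Exposure times, saturation threshold, and the
# random "captures" are all assumed values; only the idea that the short exposure
# protects highlights while the long exposure reliably contains pulsed LED light
# comes from the description above.
import numpy as np

t_short, t_long = 0.1e-3, 11.0e-3        # assumed exposure times (seconds)
saturation = 4000                        # assumed saturation level

short_exp = np.random.randint(0, 4096, size=(4, 4)).astype(float)  # large photodiode
long_exp  = np.random.randint(0, 4096, size=(4, 4)).astype(float)  # small photodiode

# Normalise both captures to a common radiometric scale (counts per second).
short_rate = short_exp / t_short
long_rate  = long_exp / t_long

# Keep the long-exposure sample (which spans full LED pulses) unless it clipped,
# in which case fall back to the short-exposure sample.
fused = np.where(long_exp >= saturation, short_rate, long_rate)
print(fused)
```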
OmniVision CMOS image sensors range in resolution from 64 megapixels to below one megapixel. [23] In 2009, it received orders from Apple for both 3.2 megapixel and 5 megapixel CIS. [24]
OmniVision also manufactures application-specific integrated circuits (ASICs) as companion products for its image sensors used in automotive, medical, augmented reality and virtual reality (AR/VR), and IoT applications. [25]
OmniVision's CameraCubeChip is a fully packaged, wafer-level camera module measuring 0.65 mm × 0.65 mm. It is being integrated into disposable endoscopes and catheters with diameters as small as 1.0 mm. These medical devices are used for a range of medical procedures, from diagnostics to minimally invasive surgery. The OV6948 sensor used in the module measures 0.575 mm × 0.575 mm and has a resolution of 200 × 200 pixels. [26]
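For scale, a back-of-the-envelope calculation from the quoted dimensions is shown below. The pixel-pitch figure is only an upper bound, because it assumes the 200 × 200 array spans the entire die; in practice part of the die is occupied by readout circuitry.

```python
# Back-of-the-envelope numbers for the OV6948 from the dimensions quoted above.
# The pitch estimate assumes the 200 x 200 array spans the full die width, so it
# is an upper bound on the real pixel pitch.
die_side_mm = 0.575
pixels_per_side = 200

total_pixels = pixels_per_side ** 2                   # 40,000 pixels
max_pitch_um = die_side_mm * 1000 / pixels_per_side   # <= 2.875 micrometres

print(total_pixels, round(max_pitch_um, 3))
```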
OmniVision manufactures liquid crystal on silicon (LCOS) projection technology for display applications. [27]
In 2018, Magic Leap used OmniVision's LCOS technology and their sensor bridge ASIC for the Magic Leap One augmented reality headset. [28]
The digital imaging market has converged into two paths: digital photography and machine vision. While smartphone cameras drove the market for some time, since 2017, machine vision applications have driven new developments. Autonomous vehicles, medical devices, miniaturized security cameras, and internet of things (IoT) devices all rely on advanced imaging technologies. [29] OmniVision's image sensors are designed for all imaging market segments including:
The following are examples of OmniVision products that have been adopted by end-users.
A charge-coupled device (CCD) is an integrated circuit containing an array of linked, or coupled, capacitors. Under the control of an external circuit, each capacitor can transfer its electric charge to a neighboring capacitor. CCD sensors are a major technology used in digital imaging.
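For illustration, the "coupled capacitor" idea can be sketched as a bucket-brigade shift of charge packets toward a single output node, where each transfer moves only a fraction of the packet along (the charge-transfer efficiency). The efficiency value below is an arbitrary example.

```python
# Sketch of CCD-style readout: charge packets are shifted one stage toward the
# output on every clock cycle, and each transfer carries only a fraction (the
# charge-transfer efficiency, here an arbitrary 0.999) of the packet along.
def read_out_ccd(row, cte=0.999):
    row = list(row)
    output = []
    for _ in range(len(row)):
        output.append(row.pop(0))              # packet at the output node is read
        row = [q * cte for q in row] + [0.0]   # remaining packets shift one stage
    return output

print(read_out_ccd([100.0, 200.0, 300.0, 400.0]))
```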
In digital imaging, a pixel, pel, or picture element is the smallest addressable element in a raster image, or the smallest addressable element in a dot matrix display device. In most digital display devices, pixels are the smallest element that can be manipulated through software.
A digital camera, also called a digicam, is a camera that captures photographs in digital memory. Most cameras produced today are digital, largely replacing those that capture images on photographic film. Digital cameras are now widely incorporated into mobile devices like smartphones with the same or more capabilities and features of dedicated cameras. High-end, high-definition dedicated cameras are still commonly used by professionals and those who desire to take higher-quality photographs.
Digital image processing is the use of a digital computer to process digital images through an algorithm. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and distortion during processing. Since images are defined over two dimensions, digital image processing may be modeled in the form of multidimensional systems. The generation and development of digital image processing are mainly affected by three factors: first, the development of computers; second, the development of mathematics; third, the increased demand for a wide range of applications in environment, agriculture, military, industry and medical science.
A Bayer filter mosaic is a color filter array (CFA) for arranging RGB color filters on a square grid of photosensors. Its particular arrangement of color filters is used in most single-chip digital image sensors found in digital cameras and camcorders to create a color image. The filter pattern is half green, one quarter red and one quarter blue, hence it is also called BGGR, RGBG, GRBG, or RGGB.
High-speed photography is the science of taking pictures of very fast phenomena. In 1948, the Society of Motion Picture and Television Engineers (SMPTE) defined high-speed photography as any set of photographs captured by a camera capable of 69 frames per second or greater, and of at least three consecutive frames. High-speed photography can be considered to be the opposite of time-lapse photography.
Digital photography uses cameras containing arrays of electronic photodetectors interfaced to an analog-to-digital converter (ADC) to produce images focused by a lens, as opposed to an exposure on photographic film. The digitized image is stored as a computer file ready for further digital processing, viewing, electronic publishing, or digital printing. It is a form of digital imaging based on gathering visible light.
An image sensor or imager is a sensor that detects and conveys information used to form an image. It does so by converting the variable attenuation of light waves into signals, small bursts of current that convey the information. The waves can be light or other electromagnetic radiation. Image sensors are used in electronic imaging devices of both analog and digital types, which include digital cameras, camera modules, camera phones, optical mouse devices, medical imaging equipment, night vision equipment such as thermal imaging devices, radar, sonar, and others. As technology changes, electronic and digital imaging tends to replace chemical and analog imaging.
An active-pixel sensor (APS) is an image sensor, which was invented by Peter J.W. Noble in 1968, where each pixel sensor unit cell has a photodetector and one or more active transistors. In a metal–oxide–semiconductor (MOS) active-pixel sensor, MOS field-effect transistors (MOSFETs) are used as amplifiers. There are different types of APS, including the early NMOS APS and the now much more common complementary MOS (CMOS) APS, also known as the CMOS sensor. CMOS sensors are used in digital camera technologies such as cell phone cameras, web cameras, most modern digital pocket cameras, most digital single-lens reflex cameras (DSLRs), mirrorless interchangeable-lens cameras (MILCs), and lensless imaging for cells.
In digital imaging, a color filter array (CFA), or color filter mosaic (CFM), is a mosaic of tiny color filters placed over the pixel sensors of an image sensor to capture color information.
Exmor is a technology Sony implemented on some of its CMOS image sensors. It performs on-chip analog-to-digital signal conversion and two-step noise reduction in parallel on each column of the CMOS sensor.
Canesta was a fabless semiconductor company founded in April 1999 by Cyrus Bamji, Abbas Rafii, and Nazim Kareemi.
A time-of-flight camera, also known as time-of-flight sensor, is a range imaging camera system for measuring distances between the camera and the subject for each point of the image based on time-of-flight, the round trip time of an artificial light signal, as provided by a laser or an LED. Laser-based time-of-flight cameras are part of a broader class of scannerless LIDAR, in which the entire scene is captured with each laser pulse, as opposed to point-by-point with a laser beam such as in scanning LIDAR systems. Time-of-flight camera products for civil applications began to emerge around 2000, as the semiconductor processes allowed the production of components fast enough for such devices. The systems cover ranges of a few centimeters up to several kilometers.
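The underlying relationship is simply distance = (speed of light × round-trip time) / 2; a minimal sketch with example delay values:

```python
# Minimal time-of-flight range calculation: distance = speed of light x round-trip
# time / 2. The delay values here are arbitrary examples.
C = 299_792_458.0   # speed of light in m/s

def tof_distance_m(round_trip_seconds):
    return C * round_trip_seconds / 2.0

for delay_ns in (6.7, 66.7, 667.0):                 # example round-trip delays
    print(f"{delay_ns} ns -> {tof_distance_m(delay_ns * 1e-9):.2f} m")
```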
Himax Technologies, Inc. is a fabless semiconductor manufacturer headquartered in Tainan City, Taiwan, founded on 12 June 2001. The company is publicly traded on the Nasdaq Stock Market under the symbol HIMX. Himax Technologies Limited functions as a holding company under the Cayman Islands Companies Law.
A back-illuminated sensor, also known as backside illumination (BI) sensor, is a type of digital image sensor that uses a novel arrangement of the imaging elements to increase the amount of light captured and thereby improve low-light performance.
InVisage Technologies is a fabless semiconductor company known for producing a technology called QuantumFilm, an image sensor technology that improves the quality of digital photographs taken with a cell phone camera. The company is based in Menlo Park, CA.
The HTC One M9+ is an Android smartphone manufactured and marketed by HTC which was announced on April 8, 2015. Initially on launch, the device was only sold in China. In July 2015, the device was released in Europe excluding the United Kingdom.
The ISOCELL CMOS camera sensors are a family of sensors produced by Samsung and available for purchase by other companies. They are used in a wide variety of products including mobile phones, computers and digital cameras.
sCMOS (scientific CMOS) sensors are a type of CMOS image sensor (CIS). These sensors are commonly used as components in specific observational scientific instruments, such as microscopes and telescopes. sCMOS image sensors offer extremely low noise, rapid frame rates, wide dynamic range, high quantum efficiency, high resolution, and a large field of view simultaneously in one image.