Shot transition detection

Shot transition detection (or simply shot detection), also called cut detection, is a field of research in video processing. Its subject is the automated detection of transitions between shots in digital video, with the purpose of temporal segmentation of videos. [1]

Use

Shot transition detection is used to split up a film into basic temporal units called shots; a shot is a series of interrelated consecutive pictures taken contiguously by a single camera and representing a continuous action in time and space. [2]

This operation is of great use in software for the post-production of videos. It is also a fundamental step of automated indexing and of content-based video retrieval or summarization applications, which provide efficient access to huge video archives. For example, an application may choose a representative picture from each scene to create a visual overview of the whole film, and, by processing such indexes, a search engine can answer queries like "show me all films where there's a scene with a lion in it."

Cut detection can do nothing that a human editor could not do manually; its advantage is that it saves time. Furthermore, due to the increase in the use of digital video and, consequently, in the importance of the aforementioned indexing applications, automatic cut detection has become increasingly important.

Basic technical terms

An abrupt transition (hard cut).
The dissolve blends one shot gradually into another with a transparency effect.

In simple terms, cut detection is about finding the positions in a video at which one scene is replaced by another one with different visual content. Technically speaking, the following terms are used:

A digital video consists of frames that are presented to the viewer's eye in rapid succession to create the impression of movement. "Digital" in this context means both that a single frame consists of pixels and that the data is stored as binary data, so that it can be processed with a computer. Each frame within a digital video can be uniquely identified by its frame index, a serial number.

A shot is a sequence of frames shot uninterruptedly by one camera. There are several film transitions usually used in film editing to juxtapose adjacent shots; in the context of shot transition detection they are usually grouped into two types: [3]

  1. Abrupt transitions (hard cuts): a sudden change from one shot to the next, i.e. one frame belongs to the first shot and the following frame to the second.
  2. Gradual transitions (soft cuts): two shots are combined over a span of several frames using effects such as dissolves, fades or wipes, so that one shot gradually replaces the other.

"Detecting a cut" means that the position of a cut is gained; more precisely a hard cut is gained as "hard cut between frame i and frame i+1", a soft cut as "soft cut from frame i to frame j".

A transition that is detected correctly is called a hit; a cut that is present but was not detected is called a missed hit; and a position at which the software assumes a cut although no cut is actually present is called a false hit.

An introduction to film editing and an exhaustive list of shot transition techniques can be found at film editing.

Vastness of the problem

Although cut detection appears to be a simple task for a human being, it is a non-trivial task for computers. It would be a trivial problem if each frame of a video were enriched with additional information about when and by which camera it was taken. Possibly no algorithm for cut detection will ever be able to detect all cuts with certainty unless it is provided with powerful artificial intelligence.

While most algorithms achieve good results with hard cuts, many fail at recognizing soft cuts. Hard cuts usually go together with sudden and extensive changes in the visual content, while soft cuts feature slow and gradual changes. A human being can compensate for this lack of visual diversity by understanding the meaning of a scene. While a computer takes a black line wiping a shot away to be "just another regular object moving slowly through the ongoing scene", a person understands that the scene ends and is replaced by a black screen.

Methods

Each method for cut detection works on a two-phase principle:

  1. Scoring – Each pair of consecutive frames of a digital video is given a certain score that represents the similarity/dissimilarity between them.
  2. Decision – All scores calculated previously are evaluated, and a cut is detected if the score exceeds the chosen threshold.

This principle is error prone. First, because even a minor exceeding of the threshold value produces a hit, phase one must scatter the values widely to maximize the difference between the scores for "cut" and "no cut". Second, the threshold must be chosen with care; useful values can usually be obtained with statistical methods.
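
As a minimal sketch of this two-phase principle (assuming OpenCV is available as cv2, and with an arbitrary placeholder threshold), consecutive frames can be scored by their mean absolute pixel difference and a hard cut reported wherever the score exceeds the threshold:

    import cv2
    import numpy as np

    def detect_hard_cuts(path, threshold=30.0):
        """Score consecutive frame pairs; report indices where the score is high."""
        cap = cv2.VideoCapture(path)
        cuts, prev, index = [], None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev is not None:
                # Phase 1 (scoring): mean absolute pixel difference between frames
                score = float(np.mean(cv2.absdiff(gray, prev)))
                # Phase 2 (decision): fixed threshold
                if score > threshold:
                    cuts.append(index)  # hard cut between frame index-1 and frame index
            prev = gray
            index += 1
        cap.release()
        return cuts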

Cut detection. (1) Hit: a detected hard cut. (2) Missed hit: a soft cut (dissolve) that was not detected. (3) False hit: one single soft cut that is falsely interpreted as two different hard cuts.

Scoring

There are many possible scores used to assess the differences in the visual content; some of the most common are:

  1. Sum of absolute differences (SAD): the two consecutive frames are compared pixel by pixel, summing up the absolute values of the differences of each two corresponding pixels. This score is simple to compute but very sensitive to camera and object motion.
  2. Histogram differences (HD): the difference between the color histograms of the two frames is computed. Histograms are less sensitive to minor motion than pixel-wise comparison.
  3. Edge change ratio (ECR): the edges of the two frames are compared, measuring how many edge pixels appear or disappear from one frame to the next.

Finally, a combination of two or more of these scores can improve the performance.
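
To illustrate one of these scores, a histogram difference between two frames might be computed as follows (a sketch assuming OpenCV as cv2; the bin counts and the Bhattacharyya distance are arbitrary choices, not a prescribed setup):

    import cv2

    def histogram_difference(frame_a, frame_b):
        """Score two frames by the distance between their color histograms."""
        def hist(frame):
            # 8x8x8-bin BGR color histogram
            h = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                             [0, 256, 0, 256, 0, 256])
            # Normalize so the score is independent of frame size
            return cv2.normalize(h, h).flatten()
        # Bhattacharyya distance: 0 for identical histograms, near 1 for disjoint ones
        return cv2.compareHist(hist(frame_a), hist(frame_b),
                               cv2.HISTCMP_BHATTACHARYYA)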

Decision

In the decision phase the following approaches are usually used:

  1. Fixed threshold: the score is compared to a global threshold, and a cut is declared whenever the threshold is exceeded.
  2. Adaptive threshold: the score is compared to a threshold derived from the scores in a sliding window around the current position, making the decision robust against footage that is globally fast or slow.
  3. Machine learning: a classifier (e.g. an SVM or a neural network) is trained to decide, from the scores and possibly further features, whether a transition is present.
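
A minimal sketch of an adaptive threshold, assuming a list of per-frame-pair scores from the scoring phase (the window size and the factor k are arbitrary placeholders):

    import numpy as np

    def adaptive_threshold_cuts(scores, window=15, k=3.0):
        """Flag positions whose score stands out from its local neighborhood."""
        scores = np.asarray(scores, dtype=float)
        cuts = []
        for i in range(len(scores)):
            lo, hi = max(0, i - window), min(len(scores), i + window + 1)
            neighborhood = np.delete(scores[lo:hi], i - lo)  # exclude the score itself
            if neighborhood.size == 0:
                continue
            if scores[i] > neighborhood.mean() + k * neighborhood.std():
                cuts.append(i)  # cut between frame i and frame i+1
        return cuts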

Cost

All of the above algorithms complete in O(n), that is, they run in linear time, where n is the number of frames in the input video. The algorithms differ in a constant factor that is determined mostly by the image resolution of the video.

Measures for quality

Usually the following three measures are used to measure the quality of a cut detection algorithm, where C is the number of correctly detected cuts ("correct hits"), M is the number of cuts that were not detected ("missed hits") and F is the number of falsely detected cuts ("false hits"):

  Recall:    V = C / (C + M)
  Precision: P = C / (C + F)
  F1 score:  F1 = 2 * P * V / (P + V)

All of these measures deliver values between 0 and 1. The basic rule is: the higher the value, the better the algorithm performs.
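
For instance, the measures can be computed directly from the raw hit counts (a small illustrative helper):

    def quality(correct_hits, missed_hits, false_hits):
        """Recall, precision and F1 from the hit counts C, M and F."""
        recall = correct_hits / (correct_hits + missed_hits)
        precision = correct_hits / (correct_hits + false_hits)
        f1 = 2 * precision * recall / (precision + recall)
        return recall, precision, f1

    # Example: 90 correct hits, 10 missed hits, 5 false hits
    print(quality(90, 10, 5))  # recall 0.9, precision ~0.947, F1 ~0.923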

Benchmarks

Comparison of benchmarks

  Benchmark | Videos  | Hours      | Frames           | Shot transitions | Participants | Years
  TRECVid   | 12 - 42 | 4.8 - 7.55 | 45,068 - 744,604 | 2090 - 4806      | 57           | 2001 - 2007
  MSU SBD   | 31      | 21.45      | 1,900,000+       | 1088             | 37           | 2020 - 2021

TRECVid SBD Benchmark 2001-2007 [4]

Automatic shot transition detection was one of the tracks of activity within the annual TRECVid benchmarking exercise from 2001 to 2007. In total, 57 algorithms from different research groups were evaluated; the F score was calculated for each algorithm on a dataset that was extended annually.

Top research groups

  Group            | F score | Speed (vs. real time) | Open source | Metrics and technologies
  Tsinghua U. [5]  | 0.897   | ×0.23                 | No          | mean and standard deviation of pixel intensities; color histogram; pixel-wise difference; motion vectors
  NICTA [6]        | 0.892   | ×2.30                 | No          | machine learning
  IBM Research [7] | 0.876   | ×0.30                 | No          | color histogram; localized edge direction histogram; gray-level thumbnail comparison; frame luminance

MSU SBD Benchmark 2020-2021 [8]

The benchmark compared 6 methods on more than 120 videos from the RAI and MSU CC datasets with different types of scene changes, some of which were added manually. [9] The authors state that the main feature of this benchmark is the complexity of the shot transitions in the dataset. To support this claim, they calculate the SI/TI metrics of the shots and compare them with those of other publicly available datasets.

Top algorithms

  Algorithm          | F score | Speed (FPS) | Open source | Metrics and technologies
  Saeid Dadkhah [10] | 0.797   | 86          | Yes         | color histogram; adaptive threshold
  Max Reimann [11]   | 0.787   | 76          | Yes         | SVM for cuts; neural networks for gradual transitions; color histogram
  VQMT [12]          | 0.777   | 308         | No          | edge histograms; motion compensation; color histograms
  PySceneDetect [13] | 0.776   | 321         | Yes         | frame intensity
  FFmpeg [14]        | 0.772   | 165         | Yes         | color histogram

References

  1. P. Balasubramaniam; R. Uthayakumar (2 March 2012). Mathematical Modelling and Scientific Computation: International Conference, ICMMSC 2012, Gandhigram, Tamil Nadu, India, March 16-18, 2012. Springer. pp. 421–. ISBN 978-3-642-28926-2.
  2. Weiming Shen; Jianming Yong; Yun Yang (18 December 2008). Computer Supported Cooperative Work in Design IV: 11th International Conference, CSCWD 2007, Melbourne, Australia, April 26-28, 2007. Revised Selected Papers. Springer Science & Business Media. pp. 100–. ISBN 978-3-540-92718-1.
  3. Joan Cabestany; Ignacio Rojas; Gonzalo Joya (30 May 2011). Advances in Computational Intelligence: 11th International Work-Conference on Artificial Neural Networks, IWANN 2011, Torremolinos-Málaga, Spain, June 8-10, 2011, Proceedings. Springer Science & Business Media. pp. 521–. ISBN 978-3-642-21500-1. "Shot detection is performed by means of shot transition detection algorithms. Two different types of transitions are used to split a video into shots: – Abrupt transitions, also referred as cuts or straight cuts, occur when a sudden change from one ..."
  4. Smeaton, A. F.; Over, P.; Doherty, A. R. (2010). Video shot boundary detection: Seven years of TRECVid activity. Computer Vision and Image Understanding, 114(4), 411–418. doi:10.1016/j.cviu.2009.03.011
  5. Yuan, J.; Zheng, W.; Chen, L.; Ding, D.; Wang, D.; Tong, Z.; Wang, H.; Wu, J.; Li, J.; Lin, F.; Zhang, B. (2004). Tsinghua University at TRECVID 2004: Shot Boundary Detection and High-Level Feature Extraction. TRECVID.
  6. Yu, Zhenghua; Vishwanathan, S.; Smola, Alex (2005). NICTA at TRECVID 2005 Shot Boundary Detection Task. TRECVID.
  7. Amir, A. (2003). The IBM Shot Boundary Detection System at TRECVID 2003. In: TRECVID 2005 Workshop Notebook Papers, National Institute of Standards and Technology, MD, USA.
  8. "MSU SBD Benchmark 2020". Archived from the original on 2021-02-13. Retrieved 2021-02-19.
  9. "MSU SBD Benchmark 2020". Archived from the original on 2021-02-13. Retrieved 2021-02-19.
  10. "SaeidDadkhah/Shot-Boundary-Detection". GitHub. 19 September 2021.
  11. "Shot-Boundary-Detection". GitHub. 11 September 2021.
  12. "MSU Scene Change Detector (SCD)".
  13. "Home - PySceneDetect".
  14. "Ffprobe Documentation".