Hany Farid | |
---|---|
Born | February 10, 1966, Mülheim, Germany |
Alma mater | University of Rochester, SUNY Albany, University of Pennsylvania, MIT |
Awards | Alfred P. Sloan Fellowship, Guggenheim Fellowship |
Scientific career | |
Fields | Computer vision, Digital forensics |
Institutions | Dartmouth College, UC Berkeley |
Thesis | Range Estimation by Optical Differentiation (1997) |
Doctoral advisor | Eero Simoncelli |
Hany Farid (born February 10, 1966) [1] is an American university professor who specializes in the analysis of digital images and the detection of digitally manipulated images such as deepfakes. [2] Farid served as Dean and Head of School for the UC Berkeley School of Information. [3] In addition to teaching, writing, and conducting research, Farid acts as a consultant for non-profits, government agencies, and news organizations. He is the author of the book Photo Forensics (2016). [4]
Farid received his undergraduate degree in computer science and applied mathematics from the University of Rochester in 1989. He earned an M.S. in computer science from SUNY Albany in 1992 and a Ph.D. in computer science from the University of Pennsylvania in 1997. In 1999, Farid completed a two-year postdoctoral fellowship in brain and cognitive sciences at the Massachusetts Institute of Technology. [5]
External videos | |
---|---|
"The information apocalypse", Hany Farid, Knowable Magazine, March 14, 2020 | |
"Creating, Weaponizing, and Detecting Deep Fakes", Hany Farid, Keynote, Spark + AI Summit, June 25, 2020 | |
Farid specializes in image analysis and human perception. He has been called the "father" of digital image forensics by NOVA scienceNOW. [6] [7] He is the recipient of a 2006 Guggenheim Fellowship and a 2002 Sloan Fellowship for his work in the field. [5] Farid was named a lifetime fellow of the National Academy of Inventors in 2016. [8] [9]
In January 2021, Farid was appointed Associate Dean and Head of School for the School of Information. [3] He remains a professor at the University of California, Berkeley, with a joint appointment in the Department of Electrical Engineering & Computer Science and the School of Information. He is also a member of the Berkeley Artificial Intelligence Lab, the Center for Innovation in Vision and Optics, and the Vision Science program. [10]
Prior to joining Berkeley, Farid was the Albert Bradley 1915 Third Century Professor of Computer Science at Dartmouth College [11] and former chair of Dartmouth's Neukom Institute for Computational Science. Farid was well-known at Dartmouth for teaching the college's introductory course on programming and computer science. Joseph Helble, dean of the Thayer School of Engineering at Dartmouth, described Farid as a pioneer in the field of digital forensics. Farid joined Dartmouth's faculty in 1999. He remained at Dartmouth until 2019. [12]
Farid has consulted for intelligence agencies, news organizations, courts, and scientific journals seeking to authenticate the validity of images. [13] [14] [15] Research shows that humans are not very good at distinguishing fake photographs from real ones. [16] [17] [18] Faked images may be produced for a variety of purposes: deepfakes are often used to fake the identity of a person in pornographic materials. [19] Politically motivated faked images may be used to present disinformation and hate speech, and to undermine the credibility of media, government, and elections. [20] Authenticating figures in scientific publications is critically important because graphics programs, such as Photoshop, are frequently used to crop and label figures, and such manipulations can be used to alter, disguise, and falsify the data. [21]
In a series of papers in 2009, 2010 and 2015, after digitally analyzing a photograph of Lee Harvey Oswald holding a rifle and a newspaper, Farid concluded [22] [23] that "the photo almost certainly was not altered". [24] When the 2013 World Press Photo of the Year was alleged to be a "fake" composite work, Farid spoke out against the allegation and criticized the underlying method of error level analysis. [25] In 2020, Farid and Matyáš Boháček trained a computer model to detect fake videos of Ukraine's president Volodymyr Zelenskyy. [2] [26] [27]
As of 2018, Farid was a consultant for the Associated Press, Reuters, The New York Times, and the Defense Advanced Research Projects Agency.
PhotoDNA is a robust-hashing system that Farid developed with Microsoft and that is "now widely used by Internet companies to stop the spread of content showing sexual exploitation or pornography involving children." In late 2015, Farid completed improvements to PhotoDNA that made it capable of analyzing video and audio files in addition to still images. In 2016, Farid proposed that the technology could be used to stem the spread of terror-related imagery, but social media companies initially showed little interest. [28] In December 2016, Facebook, Twitter, Google and Microsoft announced plans to use PhotoDNA to tackle extremist content such as terrorist recruitment videos and violent terrorist imagery. [29]
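PhotoDNA's actual hashing algorithm is proprietary, but the matching pattern it relies on can be sketched with a minimal, stdlib-only Python example: a query image's fingerprint is flagged when it lies within a small Hamming distance of any hash in a database of known content, so re-encoded or lightly edited copies still match. All hash values, function names, and the threshold below are illustrative assumptions, not PhotoDNA's real parameters.

```python
# Illustrative sketch only: PhotoDNA's real hash and thresholds are proprietary.
# Pattern shown: flag a query fingerprint that is close (in Hamming distance)
# to any fingerprint in a database of previously identified content.

def hamming(a, b):
    """Number of differing bits between two integer fingerprints."""
    return bin(a ^ b).count("1")

def is_known(query, database, threshold=8):
    """Robust match: small edits flip only a few bits, so near hashes count."""
    return any(hamming(query, known) <= threshold for known in database)

# Hypothetical 64-bit fingerprints of previously identified content.
known_hashes = {0xF0F0F0F0F0F0F0F0, 0x123456789ABCDEF0}

slightly_edited = 0xF0F0F0F0F0F0F0F1  # 1 bit away from a known hash
unrelated       = 0x0F0F0F0F0F0F0F0F  # far from every known hash

print(is_known(slightly_edited, known_hashes))  # True
print(is_known(unrelated, known_hashes))        # False
```

The tolerance threshold is the design lever: a cryptographic hash would require exact equality and miss every re-saved copy, whereas a robust hash trades a small false-positive risk for resilience to routine edits.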
In June 2016, Farid, as a senior advisor to the Counter Extremism Project (CEP), unveiled a software tool for use by Internet and social media companies to "quickly find and eliminate extremist content used to spread and incite violence and attacks." It functions similarly to PhotoDNA. [30] [31] [32]
To operationalize this new technology to combat extremism, Farid and CEP proposed the creation of a National Office for Reporting Extremism (NORex), which would house a comprehensive database of extremist content and function similarly to the National Center for Missing & Exploited Children. [33] [34]
In the fall of 2018, Truepic acquired Farid's start-up, Fourandsix Technologies, which he had founded with Kevin Connor, a former vice president at Adobe Systems. Fourandsix's first product, Fourmatch, was designed to detect alterations in digital images, primarily to check the authenticity of images introduced as evidence in court. [35]
As of February 2019, Farid was an advisor to Truepic. [36] The underlying idea behind the Truepic approach is to automatically verify a photo when it is taken, with camera-based apps that assess the image using proprietary algorithms. Later versions of the image can be compared against the original to detect alteration. If this type of verification technology becomes an industry standard, it could help news and social media websites, insurers and others to automatically screen images they receive. [37]
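Truepic's algorithms are proprietary, but the capture-time verification pattern described above can be sketched in a few lines of stdlib-only Python: the camera app computes an authentication tag over the image bytes the moment the photo is taken, and any later copy verifies only if the bytes are unchanged. The device key, function names, and sample bytes are hypothetical; real systems would use device-bound keys or digital signatures rather than a shared secret.

```python
import hashlib
import hmac

# Illustrative sketch only: Truepic's actual pipeline is proprietary.
# Pattern shown: tag the image bytes at capture, verify later copies.

DEVICE_KEY = b"per-device-secret"  # hypothetical key held by the camera app

def sign_at_capture(image_bytes):
    """Produce an authentication tag the moment the photo is taken."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_later(image_bytes, tag):
    """Any alteration of the bytes invalidates the tag."""
    return hmac.compare_digest(sign_at_capture(image_bytes), tag)

original = b"raw image bytes from the sensor"
tag = sign_at_capture(original)

print(verify_later(original, tag))            # True: untouched copy
print(verify_later(original + b"\x00", tag))  # False: altered copy
```

Note the contrast with the robust hashing used for content matching: here even a one-byte change must fail verification, so a cryptographic construction is the right tool.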
Farid was born to Egyptian parents in Germany. [38] He grew up in Rochester, New York. He is married to the neuroscientist Emily Cooper. Cooper, also a professor at the University of California, Berkeley, studies human vision and virtual reality. [39] Cooper met Farid when he spent a sabbatical from Dartmouth at Berkeley. [40]
Photograph manipulation involves the transformation or alteration of a photograph. Some photograph manipulations are considered to be skillful artwork, while others are considered to be unethical practices, especially when used to deceive. Motives for manipulating photographs include political propaganda, altering the appearance of a subject, entertainment and humor.
Digital forensics is a branch of forensic science encompassing the recovery, investigation, examination, and analysis of material found in digital devices, often in relation to mobile devices and computer crime. The term "digital forensics" was originally used as a synonym for computer forensics but has expanded to cover investigation of all devices capable of storing digital data. With roots in the personal computing revolution of the late 1970s and early 1980s, the discipline evolved in a haphazard manner during the 1990s, and it was not until the early 21st century that national policies emerged.
Human image synthesis is technology that can be applied to make believable and even photorealistic renditions of human likenesses, moving or still. It has effectively existed since the early 2000s. Many films using computer-generated imagery have featured synthetic images of human-like characters digitally composited onto real or other simulated film material. Towards the end of the 2010s, deep learning was applied to synthesize images and video that look like humans without the need for human assistance once the training phase is complete, whereas the older 7D route required massive amounts of human work.
James F. O'Brien is a computer graphics researcher and professor of computer science and electrical engineering at the University of California, Berkeley. He is also co-founder and chief science officer at Avametric, a company developing software for virtual clothing try on. In 2015, he received an award for Scientific and Technical Achievement from the Academy of Motion Pictures Arts and Sciences.
Computational criminology is an interdisciplinary field which uses computing science methods to formally define criminology concepts, improve our understanding of complex phenomena, and generate solutions for related problems.
TinEye is a reverse image search engine developed and offered by Idée, Inc., a company based in Toronto, Ontario, Canada. It is the first image search engine on the web to use image identification technology rather than keywords, metadata or watermarks, allowing users to search with images instead of text. Upon submission of an image, TinEye creates a "unique and compact digital signature or fingerprint" of the image and matches it against other indexed images. This procedure can match even heavily edited versions of the submitted image, but will not usually return merely similar images in the results.
PhotoDNA is a proprietary image-identification and content filtering technology widely used by online service providers.
Perceptual hashing is the use of a fingerprinting algorithm that produces a snippet, hash, or fingerprint of various forms of multimedia. A perceptual hash is a type of locality-sensitive hash: when the features of two pieces of media are similar, their hashes are similar as well. This is in contrast to cryptographic hashing, which relies on the avalanche effect, whereby a small change in the input produces a drastic change in the output. Perceptual hash functions are widely used in finding cases of online copyright infringement, as well as in digital forensics, because the correlation between hashes allows similar data to be found.
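The contrast drawn above can be illustrated with a minimal "difference hash" (dHash), one common perceptual-hashing scheme: each bit records whether a pixel is brighter than its right-hand neighbour, so a uniform brightness change leaves the hash untouched, while a cryptographic hash of the same file would change completely. This is a toy sketch operating on a plain grid of grayscale values; real implementations first resize the image down to the grid.

```python
def dhash(gray, rows=8, cols=8):
    """Difference hash: bit is 1 when a pixel is brighter than its right
    neighbour. `gray` is a rows x (cols + 1) grid of grayscale values."""
    bits = 0
    for r in range(rows):
        for c in range(cols):
            bits = (bits << 1) | (1 if gray[r][c] > gray[r][c + 1] else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A tiny 2 x 3 grid, and the same grid uniformly brightened by 10.
img      = [[10, 50, 30], [60, 20, 80]]
brighter = [[v + 10 for v in row] for row in img]

h1 = dhash(img, rows=2, cols=2)
h2 = dhash(brighter, rows=2, cols=2)
print(hamming(h1, h2))  # 0 -- the perceptual hash is unchanged
```

Because only the relative ordering of neighbouring pixels matters, the hash is stable under brightness and contrast shifts, exactly the locality-sensitive behaviour that cryptographic hashing deliberately avoids.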
The Counter Extremism Project (CEP) is a non-profit non-governmental organization that combats extremist groups "by pressuring financial support networks, countering the narrative of extremists and their online recruitment, and advocating for strong laws, policies and regulations".
Hao Li is a computer scientist, innovator, and entrepreneur from Germany, working in the fields of computer graphics and computer vision. He is co-founder and CEO of Pinscreen, Inc, as well as associate professor of computer vision at the Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI). He was previously a Distinguished Fellow at the University of California, Berkeley, an associate professor of computer science at the University of Southern California, and former director of the Vision and Graphics Lab at the USC Institute for Creative Technologies. He was also a visiting professor at Weta Digital and a research lead at Industrial Light & Magic / Lucasfilm.
The Cyber Civil Rights Initiative (CCRI) is a non-profit organization. Founded in 2012 by Holly Jacobs, the organization offers services to victims of cybercrimes, most of which are provided through its crisis helpline.
Ian J. Goodfellow is an American computer scientist, engineer, and executive, most noted for his work on artificial neural networks and deep learning. He is a research scientist at Google DeepMind, was previously employed as a research scientist at Google Brain and director of machine learning at Apple, and has made several important contributions to the field of deep learning, including the invention of the generative adversarial network (GAN). Goodfellow co-wrote, as the first author, the textbook Deep Learning (2016) and wrote the chapter on deep learning in the authoritative textbook of the field of artificial intelligence, Artificial Intelligence: A Modern Approach.
Deepfakes are images, videos, or audio which are edited or generated using artificial intelligence tools, and which may depict real or non-existent people. They are a type of synthetic media.
Video manipulation is a type of media manipulation that targets digital video using video processing and video editing techniques. The applications of these methods range from educational videos to videos aimed at (mass) manipulation and propaganda, a straightforward extension of the long-standing possibilities of photo manipulation. This form of computer-generated misinformation has contributed to fake news, and there have been instances when this technology was used during political campaigns. Other uses are less sinister; entertainment purposes and harmless pranks provide users with movie-quality artistic possibilities.
Digital cloning is an emerging technology, built on deep-learning algorithms, that allows one to manipulate existing audio, photos, and videos into hyper-realistic results. One impact of such technology is that hyper-realistic videos and photos make it difficult for the human eye to distinguish what is real from what is fake. Furthermore, as various companies make such technologies available to the public, they can bring various benefits as well as potential legal and ethical concerns.
Artificial intelligence art is visual artwork created through the use of an artificial intelligence (AI) program.
Synthetic media is a catch-all term for the artificial production, manipulation, and modification of data and media by automated means, especially through the use of artificial intelligence algorithms, such as for the purpose of misleading people or changing an original meaning. Synthetic media as a field has grown rapidly since the creation of generative adversarial networks, primarily through the rise of deepfakes as well as music synthesis, text generation, human image synthesis, speech synthesis, and more. Though experts use the term "synthetic media," individual methods such as deepfakes and text synthesis are sometimes not referred to as such by the media but instead by their respective terminology. Significant attention arose towards the field of synthetic media starting in 2017, when Motherboard reported on the emergence of AI-altered pornographic videos that insert the faces of famous actresses. Potential hazards of synthetic media include the spread of misinformation, further loss of trust in institutions such as media and government, the mass automation of creative and journalistic jobs, and a retreat into AI-generated fantasy worlds. Synthetic media is an applied form of artificial imagination.
Deepfake pornography, or simply fake pornography, is a type of synthetic pornography that is created via altering already-existing photographs or video by applying deepfake technology to the images of the participants. The use of deepfake porn has sparked controversy because it involves the making and sharing of realistic videos featuring non-consenting individuals, typically female celebrities, and is sometimes used for revenge porn. Efforts are being made to combat these ethical concerns through legislation and technology-based solutions.
An audio deepfake is a product of artificial intelligence used to create convincing speech that sounds like specific people saying things they did not say. The technology was initially developed for applications intended to improve human life: for example, producing audiobooks and helping people who have lost their voices regain them. Commercially, it has opened the door to several opportunities, including more personalized digital assistants, natural-sounding text-to-speech, and speech translation services.
Identity replacement technology is any technology used to cover up all or part of a person's identity, either in real life or virtually. It can include face masks, face authentication technology, and deepfakes that spread faked videos and images on the Internet. Face replacement and identity masking are used by criminals and law-abiding citizens alike: in criminal hands, identity replacement technology can enable heists and robberies, while law-abiding citizens use it to prevent governments or other entities from tracking private information such as locations, social connections, and daily behaviors.