Hany Farid

Born February 10, 1966
Mülheim, Germany
Alma mater University of Rochester
SUNY Albany
University of Pennsylvania
MIT
Awards Alfred P. Sloan Fellowship
Guggenheim Fellowship
Scientific career
Fields Computer vision
Digital forensics
Institutions Dartmouth College
UC Berkeley
Thesis Range Estimation by Optical Differentiation (1997)
Doctoral advisor Eero Simoncelli

Hany Farid (born February 10, 1966) [1] is an American university professor who specializes in the analysis of digital images and the detection of digitally manipulated images such as deepfakes. [2] Farid served as Associate Dean and Head of School for the UC Berkeley School of Information. [3] In addition to teaching, writing, and conducting research, Farid acts as a consultant for non-profits, government agencies, and news organizations. He is the author of the book Photo Forensics (2016). [4]

Education

Farid received his undergraduate degree in computer science and applied mathematics from the University of Rochester in 1989. He earned an M.S. in computer science from SUNY Albany in 1992 and a Ph.D. in computer science from the University of Pennsylvania in 1997. In 1999, Farid completed a two-year postdoctoral fellowship in Brain and Cognitive Sciences at the Massachusetts Institute of Technology. [5]

Career

External videos
"The information apocalypse", Hany Farid, Knowable Magazine, March 14, 2020
"Creating, Weaponizing, and Detecting Deep Fakes", Hany Farid, keynote, Spark + AI Summit, June 25, 2020

Farid specializes in image analysis and human perception. He has been called the "father" of digital image forensics by NOVA scienceNOW. [6] [7] He is the recipient of a 2006 Guggenheim Fellowship and a 2002 Sloan Fellowship for his work in the field. [5] Farid was named a lifetime fellow of the National Academy of Inventors in 2016. [8] [9]

University positions

In January 2021, Farid was appointed Associate Dean and Head of School for the School of Information. [3] He remains a professor at the University of California, Berkeley, with a joint appointment in the Department of Electrical Engineering and Computer Sciences and the School of Information. He is also a member of the Berkeley Artificial Intelligence Lab, the Center for Innovation in Vision and Optics, and the Vision Science program. [10]

Prior to joining Berkeley, Farid was the Albert Bradley 1915 Third Century Professor of Computer Science at Dartmouth College [11] and former chair of Dartmouth's Neukom Institute for Computational Science. He was well known at Dartmouth for teaching the college's introductory course on programming and computer science. Joseph Helble, dean of the Thayer School of Engineering at Dartmouth, described Farid as a pioneer in the field of digital forensics. Farid joined Dartmouth's faculty in 1999 and remained there until 2019. [12]

Consulting and media appearances

Farid has consulted for intelligence agencies, news organizations, courts, and scientific journals seeking to authenticate the validity of images. [13] [14] [15] Research shows that people are poor at distinguishing fake photographs from real ones. [16] [17] [18] Faked images may be produced for a variety of purposes: deepfakes are often used to fake the identity of a person in pornographic materials. [19] Politically motivated faked images may be used to present disinformation and hate speech, and to undermine the credibility of media, government, and elections. [20] Authenticating figures in scientific publications is critically important because graphics programs such as Photoshop are frequently used to crop and label figures; such manipulations can also be used to alter, disguise, and falsify the underlying data. [21]

In a series of papers in 2009, 2010, and 2015, after digitally analyzing a photograph of Lee Harvey Oswald holding a rifle and a newspaper, Farid concluded [22] [23] that "the photo almost certainly was not altered". [24] When the 2013 World Press Photo of the Year was alleged to be a "fake" composite work, Farid spoke out against the allegation and criticized the underlying method of error level analysis. [25] In 2022, Farid and Matyáš Boháček trained a computer model to detect fake videos of Ukraine's president Volodymyr Zelenskyy. [2] [26] [27]

As of 2018, Farid was a consultant for the Associated Press, Reuters, The New York Times, and the Defense Advanced Research Projects Agency.

PhotoDNA

PhotoDNA is a robust image-hashing system that Farid worked on with Microsoft; it is "now widely used by Internet companies to stop the spread of content showing sexual exploitation or pornography involving children." In late 2015, Farid completed improvements to PhotoDNA that made it capable of analyzing video and audio files in addition to still images. In 2016, Farid proposed that the technology could be used to stem the spread of terror-related imagery, but social media companies initially showed little interest. [28] In December 2016, Facebook, Twitter, Google and Microsoft announced plans to use PhotoDNA to tackle extremist content such as terrorist recruitment videos or violent terrorist imagery. [29]
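PhotoDNA's exact algorithm is proprietary and not public; the toy "average hash" below only illustrates the general idea of robust (perceptual) hashing that such systems rely on: unlike a cryptographic hash, visually similar images yield hashes that differ in only a few bits, so near-duplicates of known content can be matched even after light edits. All function names and the 8×8 grid size are illustrative assumptions, not PhotoDNA's design.

```python
# Toy perceptual ("average") hash -- an illustration of robust hashing,
# NOT the proprietary PhotoDNA algorithm.

def average_hash(pixels, size=8):
    """Compute a 64-bit perceptual hash of a square grayscale image,
    given as a list of rows of 0-255 intensity values."""
    block = len(pixels) // size
    # Downsample: average each block of the size x size grid.
    cells = []
    for by in range(size):
        for bx in range(size):
            total = sum(
                pixels[by * block + y][bx * block + x]
                for y in range(block) for x in range(block)
            )
            cells.append(total / (block * block))
    mean = sum(cells) / len(cells)
    # Each bit records whether a cell is brighter than the overall mean.
    return sum(1 << i for i, c in enumerate(cells) if c > mean)

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests the same image."""
    return bin(h1 ^ h2).count("1")
```

In a deployment of this kind of scheme, an uploaded image's hash would be compared against a database of hashes of known illegal or extremist material, with matches flagged when the Hamming distance falls below a threshold.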

Counter Extremism Project

In June 2016, Farid, as a senior advisor to the Counter Extremism Project (CEP), unveiled a software tool for use by Internet and social media companies to "quickly find and eliminate extremist content used to spread and incite violence and attacks." It functions similarly to PhotoDNA. [30] [31] [32]

To operationalize this new technology to combat extremism, Farid and CEP proposed the creation of a National Office for Reporting Extremism (NORex), which would house a comprehensive database of extremist content and function similarly to the National Center for Missing & Exploited Children. [33] [34]

Truepic

In the fall of 2018, Truepic acquired Farid's start-up, Fourandsix Technologies, which he had founded with Kevin Connor, a former vice president at Adobe Systems. Fourandsix's first product, Fourmatch, was designed to detect alterations of digital images, primarily so that images introduced as evidence in court could be checked for authenticity. [35]

As of February 2019, Farid was an advisor to Truepic. [36] The underlying idea behind the Truepic approach is to automatically verify a photo when it is taken, with camera-based apps that assess the image using proprietary algorithms. Later versions of the image can be compared against the original to detect alteration. If this type of verification technology becomes an industry standard, it could help news and social media websites, insurers and others to automatically screen images they receive. [37]
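Truepic's actual algorithms are proprietary; a minimal sketch of the capture-time idea, assuming the camera app simply records a cryptographic digest of the raw image bytes at the moment of capture, might look like this (the function names are illustrative):

```python
# Hedged sketch of capture-time photo verification: record a
# cryptographic fingerprint when the photo is taken, then check any
# later copy against it. This is an assumption-based illustration,
# not Truepic's implementation.
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """SHA-256 digest recorded (e.g., server-side) at capture time."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_unaltered(image_bytes: bytes, recorded: str) -> bool:
    """A later copy matches only if not a single byte has changed."""
    return fingerprint(image_bytes) == recorded
```

Because a cryptographic digest changes completely if even one byte differs, any edit to the file after capture would fail verification; screening pipelines at news sites or insurers could run such a check automatically on received images.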

Personal life

Farid was born to Egyptian parents in Germany. [38] He grew up in Rochester, New York. He is married to the neuroscientist Emily Cooper. Cooper, also a professor at the University of California, Berkeley, studies human vision and virtual reality. [39] Cooper met Farid when he spent a sabbatical from Dartmouth at Berkeley. [40]

Publications

Books

Selected technical papers

Selected opinion pieces

References

  1. https://www.europarl.europa.eu/cmsdata/141881/Preventing%20and%20Countering%20Radicalisation_overview%20biography.pdf
  2. Hsu, Jeremy (7 December 2022). "Deepfake detector spots fake videos of Ukraine's president Zelenskyy". New Scientist. Retrieved 20 December 2022.
  3. 1 2 "Hany Farid Appointed Associate Dean and Head of School". UC Berkeley School of Information. UC Berkeley. Jan 15, 2021. Retrieved 3 April 2021.
  4. Staff (4 November 2016). Photo Forensics. The MIT Press. ISBN 9780262035347. Retrieved 23 August 2019.
  5. 1 2 "Hany Farid - John Simon Guggenheim Memorial Foundation". 2006. Archived from the original on 4 June 2011. Retrieved 21 January 2010.
  6. "Profile: Hany Farid at NOVA scienceNOW". PBS. June 2008. Retrieved 21 January 2010.
  7. Morris, Errol (August 11, 2008). "Photography as a Weapon". The New York Times. Retrieved 21 January 2010.
  8. Blumberg, Joseph. "Hany Farid Honored by the National Academy of Inventors". Dartmouth News. Office of Communications, Dartmouth College. Retrieved 23 August 2019.
  9. "Fellows List". National Academy of Inventors. Retrieved 23 August 2019.
  10. Craypo, Eric. "Professor Hany Farid Joins Vision Science Faculty". Berkeley Vision Science. UC Berkeley. Retrieved 23 August 2019.
  11. "Featured Faculty Member: Hany Farid". UC Berkeley School of Information. Retrieved 2019-08-06.
  12. Mihaly, Abigail (23 April 2018). "Computer science professor Hany Farid to leave College for Berkeley". The Dartmouth. Retrieved 12 August 2019.
  13. Dreifus, Claudia (October 2, 2007). "Proving That Seeing Shouldn't Always Be Believing". The New York Times. Retrieved 21 January 2010.
  14. "Anwar, Eskay Seen At Apartment Lobby". Archived from the original on 2011-06-30. Retrieved 2011-06-25.
  15. US Experts Confirmed Anwar as Man in Video, Court Told.
  16. Farid, Hany (15 September 2019). "Image Forensics". Annual Review of Vision Science. 5 (1): 549–573. doi:10.1146/annurev-vision-091718-014827. ISSN   2374-4642. PMID   31525144. S2CID   202642073 . Retrieved 21 December 2022.
  17. Pierre-Louis, Kendra (19 July 2017). "You're probably terrible at spotting faked photos". Popular Science. Retrieved 20 December 2022.
  18. Nightingale, Sophie J.; Wade, Kimberley A.; Watson, Derrick G. (18 July 2017). "Can people identify original and manipulated photos of real-world scenes?". Cognitive Research: Principles and Implications. 2 (1): 30. doi: 10.1186/s41235-017-0067-2 . ISSN   2365-7464. PMC   5514174 . PMID   28776002.
  19. Westerlund, Mika (2019). "The Emergence of Deepfake Technology: A Review". Technology Innovation Management Review. 9 (11): 39–52. doi: 10.22215/timreview/1282 . ISSN   1927-0321. S2CID   214014129.
  20. Pawelec, Maria (September 2022). "Deepfakes and Democracy (Theory): How Synthetic Audio-Visual Media for Disinformation and Hate Speech Threaten Core Democratic Functions". Digital Society. 1 (2): 19. doi: 10.1007/s44206-022-00010-6 . PMC   9453721 . PMID   36097613.
  21. Dreifus, Claudia (3 October 2007). "Digital forensics: Proving that seeing shouldn't always be believing". The New York Times. Retrieved 20 December 2022.
  22. Wen, Tiffanie (9 June 2020). "The hidden signs that can reveal a fake photo". BBC Future. Retrieved 20 December 2022.
  23. Farid, H (2009). "The Lee Harvey Oswald backyard photos: real or fake?" (PDF). Perception. 38 (11): 1731–1734. doi:10.1068/p6580. PMID   20120271. S2CID   12062689 . Retrieved 20 December 2022.
  24. "Professor finds that iconic Oswald photo was not faked (w/ Video)". phys.org. November 5, 2009. Retrieved 20 December 2022.
  25. Steadman, Ian (2013-05-16). "'Fake' World Press Photo isn't fake, is lesson in need for forensic restraint". Wired UK . Retrieved 2015-09-11.
  26. Allyn, Bobby (16 March 2022). "Deepfake video of Zelenskyy could be 'tip of the iceberg' in info war, experts warn". NPR. Retrieved 20 December 2022.
  27. Boháček, Matyáš; Farid, Hany (29 November 2022). "Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms". Proceedings of the National Academy of Sciences. 119 (48): e2216035119. Bibcode:2022PNAS..11916035B. doi:10.1073/pnas.2216035119. ISSN   0027-8424. PMC   9860138 . PMID   36417442.
  28. Waddell, Kaveh (June 22, 2016). "A Tool to Delete Beheading Videos Before They Even Appear Online". The Atlantic. Retrieved 10 September 2016.
  29. "Partnering to Help Curb Spread of Online Terrorist Content | Facebook Newsroom" . Retrieved 2016-12-06.
  30. "Software unveiled to tackle online extremism, violence". AFP. June 17, 2016.
  31. "A Tool to Delete Beheading Videos Before They Even Appear Online". The Atlantic. June 22, 2016.
  32. "Suppressing Extremist Speech: There's an Algorithm for That!". Foreign Policy. June 17, 2016.
  33. Nakashima, Ellen (June 21, 2016). "There's a new tool to take down terrorism images online. But social-media companies are wary of it". The Washington Post.
  34. "How to Stop the Next Viral Jihadi Video". Defense One. June 17, 2016.
  35. Shankland, Stephen (18 September 2012). "Fourandsix releases image-authenticator software". CNET. Retrieved 13 August 2019.
  36. McCorvey, J.J. (19 February 2019). "This image-authentication startup is combating faux social media accounts, doctored photos, deep fakes, and more". Fast Company. Retrieved 13 August 2019.
  37. Hao, Karen (November 1, 2018). "Deepfake-busting apps can spot even a single pixel out of place". MIT Technology Review. Retrieved 21 December 2022.
  38. https://www.albany.edu/ualbanymagazine/fall2017_farid-father-of-digital-forensics.shtml#:~:text=Farid%20was%20born%20to%20Egyptian,%2C%20admittedly%2C%20a%20mediocre%20student.
  39. Rothman, Joshua (5 November 2018). "In the Age of A.I., Is Seeing Still Believing?". The New Yorker. Retrieved 13 August 2019.
  40. Henderson, Rachel (28 November 2018). "PhD program alum Emily Cooper returns to Berkeley as faculty, studying vision in the real world and applying it to virtual worlds". Berkeley Neuroscience. Helen Wills Neuroscience Institute. Retrieved 23 August 2019.
  41. "Hany Farid - Dartmouth Faculty Directory". Dartmouth College. Archived from the original on 27 June 2010. Retrieved 21 January 2010.