Company type | Private
---|---
Industry | Artificial intelligence
Founders | Kevin Guo, Dmitriy Karpman
Headquarters |
Website | thehive.ai
Hive is an American artificial intelligence company that offers machine learning models to enterprise customers via APIs. [1] Hive employs around 700,000 gig workers, who label training data for its models through its Hive Work app. [2] One of Hive's major offerings is automated content moderation. [3]
Hive is reported to have provided content moderation services to the social news aggregator Reddit, [4] Giphy, [4] BeReal, [5] the Donald Trump-affiliated social network Truth Social, [6] and the online chat website Chatroulette. [7] After Parler was shut down by its content service providers in early 2021 over its lack of content moderation, it integrated Hive's moderation and was allowed back into the App Store. [8] Hive's content moderation models have been adopted widely in the livestreaming industry, where the cost of human moderation is high. [9]
Hive's models have also been used at events such as the Super Bowl [10] [11] and March Madness, [12] and its contextual advertising models are used by NBCUniversal [13] and Vevo. [14]
Hive provides APIs to detect deepfakes [15] and AI-generated artwork. [16]
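Detection APIs of this kind typically return per-class confidence scores for an uploaded image. The sketch below parses a response payload of that general shape; the field names, class labels, and scores here are illustrative assumptions, not Hive's actual API schema:

```python
import json

# Hypothetical response payload in the shape many classification APIs
# return: a list of candidate classes, each with a confidence score.
# These labels and values are made up for illustration.
SAMPLE_RESPONSE = json.dumps({
    "status": "ok",
    "classes": [
        {"class": "ai_generated", "score": 0.92},
        {"class": "not_ai_generated", "score": 0.08},
    ],
})

def top_class(response_text: str) -> tuple[str, float]:
    """Return the (label, score) pair with the highest confidence."""
    payload = json.loads(response_text)
    best = max(payload["classes"], key=lambda c: c["score"])
    return best["class"], best["score"]

label, score = top_class(SAMPLE_RESPONSE)
print(label, score)  # ai_generated 0.92
```

A caller would then compare the top score against a threshold chosen for their tolerance of false positives before flagging content.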
In early 2023, Hive released a free demo text classifier intended to detect AI-generated text. [17] Mark Hachman of PCWorld rated Hive's classifier favorably, finding it more reliable than OpenAI's AI text classifier. [18]
Hive was founded by Kevin Guo and Dmitriy Karpman. In April 2021, the company announced $85 million in new funding at a valuation of $2 billion. [1]