WikiArt

Home page of WikiArt

WikiArt (formerly known as WikiPaintings) is a visual art wiki, active since 2010.

The developers are based in Ukraine. [1] Since 2010, the editor-in-chief of WikiArt has been the Ukrainian art critic Kseniia Bilash. [2]

In April 2022, access to WikiArt was restricted in Russia at the request of the Prosecutor General's office, according to Roskomsvoboda. [3]

AI research

WikiArt is often used by scientists who study AI. They train models on WikiArt data to test their ability to recognize, classify, and generate art.
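The recognize/classify setting can be illustrated with a deliberately tiny sketch (not any particular paper's actual method): artworks reduced to made-up feature vectors, labeled by style, and classified by nearest class centroid.

```python
# Toy illustration of style classification (hypothetical features, not a
# real pipeline): assign an artwork to the style whose centroid is nearest.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, centroids):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Made-up two-dimensional features for two styles.
train = {
    "impressionism": [[0.9, 0.1], [0.8, 0.2], [1.0, 0.0]],
    "cubism":        [[0.1, 0.9], [0.2, 0.8], [0.0, 1.0]],
}
centroids = {style: centroid(vs) for style, vs in train.items()}

print(classify([0.85, 0.15], centroids))  # impressionism
```

Real systems replace the hand-made features with learned ones (e.g. CNN activations), but the train-then-classify structure is the same.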

In 2015, computer scientists Babak Saleh and Ahmed Elgammal of Rutgers University used images from WikiArt to train an algorithm that detects a painting's genre, style, and artist. [4] Later, researchers from Rutgers University, the College of Charleston, and Facebook's AI Lab collaborated on a generative adversarial network (GAN), training it on WikiArt data to distinguish works of art from photographs or diagrams, and to identify different styles of art. [5] They then designed a creative adversarial network (CAN), also trained on the WikiArt dataset, to generate new works that do not fit known artistic styles. [6]

In 2016, Chee Seng Chan (associate professor at the University of Malaya) and his co-researchers trained a convolutional neural network (CNN) on the WikiArt dataset and presented their paper "Ceci n'est pas une pipe: A Deep Convolutional Network for Fine-art Paintings Classification". [7] They also released ArtGAN to explore the possibilities of AI in relation to art. In 2017, they published a follow-up study with an improved model, "Improved ArtGAN for Conditional Synthesis of Natural Image and Artwork". [8]

In 2018, <i>Edmond de Belamy</i>, a portrait produced by a GAN, sold for $432,500 at a Christie's auction. The algorithm was trained on a set of 15,000 portraits from WikiArt spanning the 14th to the 19th century. [9]

In 2019, Eva Cetinic, a researcher at the Rudjer Boskovic Institute in Croatia, and her colleagues used images from WikiArt to train machine-learning algorithms that explore the relationship between the aesthetics, sentiment, and memorability of fine art. [10]

In 2020, Panos Achlioptas, a researcher at Stanford University, and his co-researchers collected 439,121 affective annotations (emotional reactions and written explanations of them) for 81,000 WikiArt artworks. Their study involved 6,377 human annotators and resulted in the first neural-based speaker model to show non-trivial Turing test performance on emotion-explanation tasks. [11]

Related Research Articles

<span class="mw-page-title-main">Digital art</span> Collective term for art that is generated digitally with a computer

Digital art refers to any artistic work or practice that uses digital technology as part of the creative or presentation process. It can also refer to computational art that uses and engages with digital media.

Computer art is any art in which computers play a role in the production or display of the artwork. Such art can be an image, sound, animation, video, CD-ROM, DVD-ROM, video game, website, algorithm, performance or gallery installation. Many traditional disciplines are now integrating digital technologies and, as a result, the lines between traditional works of art and new media works created using computers have been blurred. For instance, an artist may combine traditional painting with algorithm art and other digital techniques. Defining computer art by its end product can thus be difficult. Computer art is bound to change over time, since changes in technology and software directly affect what is possible.

<span class="mw-page-title-main">Casey Reas</span>

Casey Edwin Barker Reas, also known as C. E. B. Reas or Casey Reas, is an American artist whose conceptual, procedural and minimal artworks explore ideas through the contemporary lens of software. Reas is perhaps best known for having created, with Ben Fry, the Processing programming language.

<span class="mw-page-title-main">Deep learning</span> Branch of machine learning

Deep learning is a subset of machine learning methods based on artificial neural networks with representation learning. The adjective "deep" refers to the use of multiple layers in the network. Learning can be supervised, semi-supervised or unsupervised.
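The role of depth can be seen in a minimal sketch: with hand-chosen weights, a two-layer network of threshold units computes XOR, a function no single linear layer can represent.

```python
# Two stacked layers of threshold units computing XOR.
# Weights are hand-set for illustration, not learned.

def step(x):
    return 1 if x > 0 else 0

def layer(weights, biases, inputs):
    return [step(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def xor_net(x):
    hidden = layer([[1, 1], [-1, -1]], [-0.5, 1.5], x)  # OR unit and NAND unit
    return layer([[1, 1]], [-1.5], hidden)[0]           # AND of the two

print([xor_net(x) for x in ([0, 0], [0, 1], [1, 0], [1, 1])])  # [0, 1, 1, 0]
```

A single threshold layer can only draw one linear boundary, so it cannot separate XOR's classes; composing two layers can.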

A convolutional neural network (CNN) is a regularized type of feed-forward neural network that learns features on its own through filter (kernel) optimization. Vanishing and exploding gradients, seen during backpropagation in earlier neural networks, are prevented by using regularized weights over fewer connections. For example, each neuron in a fully connected layer would require 10,000 weights to process an image sized 100 × 100 pixels, whereas a cascaded 5 × 5 convolution kernel needs only 25 learnable weights, shared across every tile of the image. Higher-layer features are extracted from wider context windows, compared to lower-layer features.
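The arithmetic behind those figures can be checked directly; this sketch assumes a 100 × 100 input, a 5 × 5 kernel, stride 1 and no padding.

```python
# Parameter counts: a fully connected neuron attached to a 100 x 100 image
# needs one weight per pixel, while a convolutional layer shares a single
# 5 x 5 kernel across every position it visits.

height, width = 100, 100
kernel = 5

dense_weights_per_neuron = height * width   # one weight per pixel
conv_weights_per_kernel = kernel * kernel   # shared at every position

# Positions a 5x5 kernel visits with stride 1 and no padding:
positions = (height - kernel + 1) * (width - kernel + 1)

print(dense_weights_per_neuron)  # 10000
print(conv_weights_per_kernel)   # 25
print(positions)                 # 9216
```

So the convolution touches more locations than the dense neuron while learning 400× fewer weights, which is the regularization the paragraph describes.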

<span class="mw-page-title-main">Generative adversarial network</span> Deep learning method

A generative adversarial network (GAN) is a class of machine learning frameworks and a prominent framework for approaching generative AI. The concept was initially developed by Ian Goodfellow and his colleagues in June 2014. In a GAN, two neural networks contest with each other in the form of a zero-sum game, where one agent's gain is another agent's loss.
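The zero-sum game can be made concrete on a discrete toy distribution: for a fixed generator, the optimal discriminator is D*(x) = p_real(x) / (p_real(x) + p_gen(x)), and when the generator matches the data exactly, D* is 1/2 everywhere and the game value settles at −log 4 (a standard result from the original GAN paper; the distributions below are made up for illustration).

```python
import math

# Discrete illustration of the GAN objective
# V(D, G) = E_{x~p_real}[log D(x)] + E_{x~p_gen}[log(1 - D(x))].

def value(p_real, p_gen, D):
    return sum(p_real[x] * math.log(D[x]) for x in p_real) + \
           sum(p_gen[x] * math.log(1 - D[x]) for x in p_gen)

def optimal_D(p_real, p_gen):
    # Best response of the discriminator for a fixed generator.
    return {x: p_real[x] / (p_real[x] + p_gen[x]) for x in p_real}

p_real = {"a": 0.5, "b": 0.3, "c": 0.2}
p_gen = dict(p_real)  # generator has matched the data distribution

D = optimal_D(p_real, p_gen)
print(D["a"])                             # 0.5
print(round(value(p_real, p_gen, D), 4))  # -1.3863, i.e. -log 4
```

At that point neither player can improve its payoff by changing strategy alone, which is the equilibrium of the zero-sum game.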

The ImageNet project is a large visual database designed for use in visual object recognition software research. More than 14 million images have been hand-annotated by the project to indicate what objects are pictured, and in at least one million of the images, bounding boxes are also provided. ImageNet contains more than 20,000 categories, with a typical category, such as "balloon" or "strawberry", consisting of several hundred images. The database of annotations of third-party image URLs is freely available directly from ImageNet, though the actual images are not owned by ImageNet. Since 2010, the ImageNet project has run an annual software contest, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), in which software programs compete to correctly classify and detect objects and scenes. The challenge uses a "trimmed" list of one thousand non-overlapping classes.

Data augmentation is a technique in machine learning used to reduce overfitting when training a model, achieved by training on several slightly modified copies of the existing data.
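A minimal sketch of the idea, using a tiny hand-written 3 × 3 "image" and simple flips as the modifications:

```python
# Data augmentation in miniature: each transform yields a slightly
# modified copy of the image, multiplying the effective training set.

def hflip(img):
    # Mirror each row left-to-right.
    return [list(reversed(row)) for row in img]

def vflip(img):
    # Reverse the order of the rows.
    return list(reversed([list(row) for row in img]))

def augment(img):
    return [img, hflip(img), vflip(img), hflip(vflip(img))]

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]

copies = augment(image)
print(len(copies))   # 4
print(copies[1][0])  # [3, 2, 1]
```

Real pipelines use the same pattern with richer transforms (random crops, rotations, color jitter) applied on the fly during training.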

<i>Edmond de Belamy</i> Painting created by artificial intelligence

Edmond de Belamy is a generative adversarial network (GAN) portrait painting constructed in 2018 by the Paris-based arts collective Obvious. Printed on canvas, the work belongs to a series of generative images called La Famille de Belamy. The name Belamy is a tribute to Ian Goodfellow, inventor of GANs: in French, "bel ami" means "good friend", making the name a translated pun on Goodfellow. It achieved widespread notoriety after Christie's announced its intention to auction the piece as the first artwork created using artificial intelligence to be featured in a Christie's auction. It surpassed pre-auction estimates of $7,000 to $10,000, selling for $432,500.

<span class="mw-page-title-main">Neural style transfer</span> Type of software algorithm for image manipulation

Neural style transfer (NST) refers to a class of software algorithms that manipulate digital images, or videos, in order to adopt the appearance or visual style of another image. NST algorithms are characterized by their use of deep neural networks for the sake of image transformation. Common uses for NST are the creation of artificial artwork from photographs, for example by transferring the appearance of famous paintings to user-supplied photographs. Several notable mobile apps use NST techniques for this purpose, including DeepArt and Prisma. This method has been used by artists and designers around the globe to develop new artwork based on existing styles.
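One common ingredient of NST (following Gatys et al.'s original formulation) is summarizing "style" as the Gram matrix of a layer's feature-map channels, which captures correlations between channels while discarding spatial layout. A toy sketch with two hypothetical channels:

```python
# Gram matrix of feature maps: entry (i, j) is the dot product of
# channel i and channel j, flattened over spatial positions.

def gram(features):
    return [[sum(a * b for a, b in zip(f, g)) for g in features]
            for f in features]

channels = [
    [1.0, 0.0, 1.0, 0.0],  # hypothetical channel 1 activations, flattened
    [0.0, 1.0, 0.0, 1.0],  # hypothetical channel 2 activations, flattened
]
G = gram(channels)
print(G)  # [[2.0, 0.0], [0.0, 2.0]]
```

In full NST, the style loss compares the Gram matrices of the generated image and the style image across several network layers, while a separate content loss keeps the generated image close to the photograph.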

<span class="mw-page-title-main">StyleGAN</span> Novel generative adversarial network

StyleGAN is a generative adversarial network (GAN) introduced by Nvidia researchers in December 2018; its source code was made available in February 2019.

<span class="mw-page-title-main">Artificial intelligence art</span> Machine application of knowledge of human aesthetic expressions

Artificial intelligence art is any visual artwork created through the use of artificial intelligence (AI) programs.

Artificial neural networks (ANNs) are models created using machine learning to perform a number of tasks. Their creation was inspired by biological neural circuitry. While some of the computational implementations of ANNs relate to earlier discoveries in mathematics, the first implementation of ANNs was by psychologist Frank Rosenblatt, who developed the perceptron. Little research was conducted on ANNs in the 1970s and 1980s, a period the AAAI has called an "AI winter".
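Rosenblatt's perceptron fits in a few lines; this sketch learns the logical AND function from its truth table using the classic error-driven update rule.

```python
# Rosenblatt's perceptron: a threshold unit whose weights are nudged
# toward the target whenever it misclassifies an example.

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND truth table
w, b = [0.0, 0.0], 0.0

for _ in range(10):  # a few passes over the truth table suffice
    for x, target in data:
        error = target - predict(w, b, x)
        w = [wi + 0.5 * error * xi for wi, xi in zip(w, x)]
        b += 0.5 * error

print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

The perceptron convergence theorem guarantees this loop terminates with a correct separator whenever the classes are linearly separable, as AND's are.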

Synthetic media is a catch-all term for the artificial production, manipulation, and modification of data and media by automated means, especially through the use of artificial intelligence algorithms, such as for the purpose of misleading people or changing an original meaning. Synthetic media as a field has grown rapidly since the creation of generative adversarial networks, primarily through the rise of deepfakes as well as music synthesis, text generation, human image synthesis, speech synthesis, and more. Though experts use the term "synthetic media", individual methods such as deepfakes and text synthesis are sometimes not referred to as such by the media but instead by their respective terminology. Significant attention arose towards the field of synthetic media starting in 2017, when Motherboard reported on the emergence of AI-altered pornographic videos inserting the faces of famous actresses. Potential hazards of synthetic media include the spread of misinformation, further loss of trust in institutions such as media and government, the mass automation of creative and journalistic jobs and a retreat into AI-generated fantasy worlds. Synthetic media is an applied form of artificial imagination.

Deepfake pornography, or simply fake pornography, is a type of synthetic porn that is created via altering already-existing pornographic material by applying deepfake technology to the faces of the actors. The use of deepfake porn has sparked controversy because it involves the making and sharing of realistic videos featuring non-consenting individuals, typically female celebrities, and is sometimes used for revenge porn. Efforts are being made to combat these ethical concerns through legislation and technology-based solutions.

fast.ai is a non-profit research group focused on deep learning and artificial intelligence. It was founded in 2016 by Jeremy Howard and Rachel Thomas with the goal of democratizing deep learning, which they pursue through a massive open online course (MOOC), "Practical Deep Learning for Coders", whose only prerequisite is knowledge of the Python programming language.

<span class="mw-page-title-main">Video super-resolution</span> Generating high-resolution video frames from given low-resolution ones

Video super-resolution (VSR) is the process of generating high-resolution video frames from the given low-resolution video frames. Unlike single-image super-resolution (SISR), the main goal is not only to restore more fine details while saving coarse ones, but also to preserve motion consistency.

<span class="mw-page-title-main">Stable Diffusion</span> Image-generating machine learning model

Stable Diffusion is a deep learning, text-to-image model released in 2022 based on diffusion techniques. It is considered to be a part of the ongoing AI spring.

<span class="mw-page-title-main">Text-to-image model</span> Machine learning model

A text-to-image model is a machine learning model which takes an input natural language description and produces an image matching that description.

References

  1. "Financial Report 2012 Q1". WikiPaintings blog. 6 April 2012. Archived from the original on 14 June 2012. Retrieved 18 August 2023. Unfortunately, our country (Ukraine) [...]
  2. "Forbes: Украинцы создали первую сетевую энциклопедию визуального искусства" [Forbes: Ukrainians have created the first online encyclopaedia of visual art]. Korrespondent.net (in Russian). 6 July 2012. Retrieved 18 August 2023.
  3. "WikiArt visual encyclopaedia blocked in Russia". Novaya Gazeta Europe. 16 August 2008. Retrieved 18 August 2023.
  4. Fessenden, Marissa (13 May 2015). "Computers Are Learning About Art Faster than Art Historians". Smithsonian Magazine. Retrieved 18 August 2023.
  5. Daley, Jason (3 July 2017). "AI Project Produces New Styles of Art". Smithsonian Magazine. Retrieved 18 August 2023.
  6. Cascone, Sarah (11 July 2017). "AI-Generated Art Now Looks More Human Than Work at Art Basel, Study Says". Artnet News. Retrieved 18 August 2023.
  7. Tan, Wei Ren; Chan, Chee Seng; Aguirre, Hernan E.; Tanaka, Kiyoshi (September 2016). "Ceci n'est pas une pipe: A deep convolutional network for fine-art paintings classification" (PDF). 2016 IEEE International Conference on Image Processing (ICIP). pp. 3703–3707. doi:10.1109/ICIP.2016.7533051. ISBN 978-1-4673-9961-6. S2CID 18920693.
  8. Tan, Wei Ren; Chan, Chee Seng; Aguirre, Hernan; Tanaka, Kiyoshi (2017). "Improved ArtGAN for Conditional Synthesis of Natural Image and Artwork". arXiv:1708.09533 [cs.CV].
  9. Nugent, Ciara (20 August 2018). "The Painter Behind These Artworks Is an AI Program. Do They Still Count as Art?". Time. Retrieved 18 August 2023.
  10. Hampson, Michelle (14 June 2019). "What Can AI Tell Us About Fine Art?". IEEE Spectrum. Retrieved 18 August 2023.
  11. Achlioptas, Panos; Ovsjanikov, Maks; Haydarov, Kilichbek; Elhoseiny, Mohamed; Guibas, Leonidas (2021). "ArtEmis: Affective Language for Visual Art". arXiv:2101.07396 [cs.CV].