DARPA Network Challenge

The 2009 DARPA Network Challenge was a prize competition for exploring the roles the Internet and social networking play in the real-time communications, wide-area collaborations, and practical actions required to solve broad-scope, time-critical problems. The competition was sponsored by the Defense Advanced Research Projects Agency (DARPA), a research organization of the United States Department of Defense. The challenge was designed to help the military generate ideas for operating under a range of circumstances, such as natural disasters. [1] Congress authorized DARPA to award cash prizes to further DARPA's mission to sponsor revolutionary, high-payoff research that bridges the gap between fundamental discoveries and their use for national security.

In the competition, teams had to locate ten red balloons placed around the United States and report their findings to DARPA. Because of the distributed nature of the contest, many teams used online resources, such as social media sites, to gather information or to recruit people to look for balloons. Teams often had to deal with false submissions, so they needed ways to validate and confirm reported sightings. The contest was concluded in under nine hours, much sooner than DARPA had expected, and demonstrated the power of online social networking and crowdsourcing in general. [2]

In addition to the Network Challenge, DARPA has also conducted prize competitions in other areas of technology.

Specifics of the competition

Under the rules of the competition, the $40,000 challenge award would be granted to the first team to submit the locations of 10 moored, 8-foot, red weather balloons placed at 10 previously undisclosed fixed locations in the continental United States. The balloons were to be placed in readily accessible locations visible from nearby roads, each staffed by a DARPA agent who would issue a certificate validating the balloon's location. [3] The balloons were deployed at 10:00 AM Eastern Time on December 5, 2009, and scheduled to be taken down at 5:00 PM. DARPA was prepared to deploy them for a second day and to wait up to a week for a team to find all of the balloons.

Part of the purpose of the challenge was to force participants to discern actual pertinent information from potential noise. Many teams came across false reports of sightings, both accidental and purposeful. One valid strategy was spamming social networks with false reports to throw competitors off the trail of real sightings. The verification of balloon sightings was paramount to success.

The contest was announced only about a month before the start date, which limited the amount of time teams had to prepare. That many were able to do so anyway showed the effectiveness of mass and social media in distributing information and organizing people quickly. [3] In practice, information about the challenge spread in an even shorter window than that month: in the week preceding launch day, traffic to the official competition site grew from an average of 1,000 hits per day to 20,000 hits per day, and the efforts of many competing teams went viral only in the last few days before the start date. [4]

DARPA selected the date of the competition to commemorate the 40th anniversary of the Internet.

Results

Even though DARPA was prepared to deploy the balloons for a second day and accept submissions for up to a week until a team found all 10 balloons, the MIT Red Balloon Challenge Team won the competition in under 9 hours. [2] A team from the Georgia Tech Research Institute (GTRI), which located nine balloons, won second place. Two other teams found eight balloons, five found seven, and the iSchools team (which represented Pennsylvania State University, University of Illinois at Urbana–Champaign, University of Pittsburgh, Syracuse University, and University of North Carolina at Chapel Hill), whose strategy is described below, finished tenth with six balloons. [3] In table form, the top ten teams were: [5]

Place | Name                                                  | Hometown          | # Balloons | Date/time
1     | MIT Red Balloon Challenge Team                        | Cambridge, MA     | 10         | 6:52:41 PM
2     | GTRI "I Spy a Red Balloon" Team                       | Atlanta, GA       | 9          | 6:59:11 PM
3     | Christian Rodriguez and Tara Chang (Red Balloon Race) | Cambridge, MA     | 8          | 6:52:54 PM
4     | Dude It's a Balloon                                   | Glen Rock, NJ     | 8          | 7:42:41 PM
5     | Groundspeak Geocachers                                | Seattle, WA       | 7          | 4:02:23 PM
6     | Army of Eyes Mutual Mobile                            | Austin, TX        | 7          | 4:33:20 PM
7     | Team Decinena                                         | Evergreen, CO     | 7          | 6:46:37 PM
8     | Anonymous                                             | Anonymous         | 7          | 7:16:51 PM
9     | Nerdfighters                                          | Missoula, MT      | 7          | 8:19:24 PM
10    | iSchools DARPA Challenge Team                         | State College, PA | 6          | 6:13:08 PM

Winning strategy

The winning MIT team used a technique similar to multi-level marketing to recruit participants, with the prize money to be distributed up the chain of participants leading to each successful balloon sighting, and any prize money remaining after distribution to participants to be given to charity. [6] The team's strategy for public collaboration in finding the balloons was explained on their website:

We're giving $2000 per balloon to the first person to send us the correct coordinates, but that's not all -- we're also giving $1000 to the person who invited them. Then we're giving $500 to whoever invited the inviter, and $250 to whoever invited them, and so on ... (see how it works). It might play out like this. Alice joins the team, and we give her an invite link like http://balloon.media.mit.edu/alice. Alice then e-mails her link to Bob, who uses it to join the team as well. We make a http://balloon.media.mit.edu/bob link for Bob, who posts it to Facebook. His friend Carol sees it, signs up, then twitters about http://balloon.media.mit.edu/carol. Dave uses Carol's link to join ... then spots one of the DARPA balloons! Dave is the first person to report the balloon's location to us, and the MIT Red Balloon Challenge Team is the first to find all 10. Once that happens, we send Dave $2000 for finding the balloon. Carol gets $1000 for inviting Dave, Bob gets $500 for inviting Carol, and Alice gets $250 for inviting Bob. The remaining $250 is donated to charity.

The strategy was a variant of the Query Incentive Network model of Kleinberg and Raghavan, [7] with the main difference being that the incentive rewards in the team's technique scale down for later participants. [8] The recursive nature of the reward had two beneficial effects. First, participants had an incentive to involve others, since these new people would become cooperating partners rather than competitors for the reward. Second, people not located in the United States were motivated to participate by passing along information, even though they had no way of locating a balloon in person. This helped the team garner over 5,000 participants, [3] despite beginning with only four initial participants. [4]
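The payout arithmetic implied by the quoted example can be illustrated with a minimal sketch, assuming a $4,000 per-balloon pot (one tenth of the $40,000 prize); the function and participant names below are illustrative only, not the team's actual code:

```python
def balloon_payouts(referral_chain, finder_reward=2000.0, pot=4000.0):
    """Split one balloon's share of the prize along a referral chain.

    referral_chain lists participants from the balloon finder back to the
    earliest recruiter, e.g. ["Dave", "Carol", "Bob", "Alice"].  The finder
    gets finder_reward, each inviter up the chain gets half of the previous
    payout, and whatever is left of the balloon's pot goes to charity.
    """
    payouts = {}
    reward = finder_reward
    remaining = pot
    for person in referral_chain:
        payouts[person] = reward
        remaining -= reward
        reward /= 2
    payouts["charity"] = remaining
    return payouts


# The example from the team's website: Dave finds the balloon, invited by
# Carol, who was invited by Bob, who was invited by Alice.
print(balloon_payouts(["Dave", "Carol", "Bob", "Alice"]))
# {'Dave': 2000.0, 'Carol': 1000.0, 'Bob': 500.0, 'Alice': 250.0, 'charity': 250.0}
```

Because the per-person rewards halve at each step, the total paid out for one balloon can never exceed the $4,000 pot, no matter how long the referral chain grows.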

To determine whether submissions were legitimate or fake, the team employed at least three strategies. The first strategy was examining whether there were multiple submissions for a location. If this was the case, then the likelihood of a balloon actually being there was thought to be higher. A second strategy was to check whether the IP address of the submitter matched the supposed location of the balloon. A third strategy was to examine photos accompanying the submission. Real photos included a DARPA employee and a DARPA banner, details which were not announced, while faked ones did not. [3]
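As a rough illustration of how such checks might be combined, the sketch below scores a candidate location by corroboration, IP-to-location distance, and photo evidence. The function, weights, and distance threshold are assumptions for illustration and do not reproduce the team's actual implementation:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two latitude/longitude points."""
    radius = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius * math.asin(math.sqrt(a))

def sighting_confidence(reports, submitter_ip_locations, photo_has_banner, max_km=50.0):
    """Toy confidence score for one candidate balloon location.

    reports                 -- list of (lat, lon) submissions clustered to this location
    submitter_ip_locations  -- geolocated (lat, lon) of each submitter's IP, same order
    photo_has_banner        -- list of booleans: photo shows the DARPA banner/agent
    """
    score = float(min(len(reports), 5))          # corroboration from independent reports
    for (lat, lon), (ip_lat, ip_lon) in zip(reports, submitter_ip_locations):
        if haversine_km(lat, lon, ip_lat, ip_lon) <= max_km:
            score += 1.0                         # submitter's IP is near the claimed spot
    score += 3.0 * sum(photo_has_banner)         # banner/agent photos are strong evidence
    return score
```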

A detailed analysis of the winning strategy highlighted the important role that social media played. Analysis of Twitter data showed that while some teams relied on large initial bursts of activity over Twitter, mentions of those teams quickly faded. It was argued that due to the recursive incentive structure, the MIT team was able to create a more sustained social media impact than most teams. [8]

Second-place strategy

The second-place GTRI team used a strategy that relied heavily on Internet publicity and social media. They created a Web site three weeks before the launch day and used a variety of media-related efforts, including a Facebook group, in order to increase the visibility of the team and increase the chance that people who spotted the balloons would report the sightings to them.

The team promised to donate all winnings to charity in order to appeal to the altruism of participants. However, lacking an incentive structure as compelling as the winning MIT team's scheme, their network of participants grew to only about 1,400 people.

With regard to validating submissions, the team assumed that, because of the charitable nature of their effort, the number of false submissions would be low. They relied primarily on personal validation, holding phone conversations with submitters. [3]

Tenth-place strategy

The tenth-place iSchools team, which represented five universities, tried two distinct approaches. The first was directly recruiting team members to look for the balloons on launch day. These members included students, faculty, and alumni on official mailing lists and social media website groups for organizations on the team (e.g., Pennsylvania State University). Only a few of these observers actually participated, however, and only one balloon was found using this strategy.

The second strategy was using open-source intelligence methods to search cyberspace for results related to the challenge. This was the main source of their success in locating balloons, and it consisted of two distinct sub-strategies. The first was to use a group of human analysts who manually searched a variety of online information sources, including Twitter and the websites of competing teams, compiled reported sightings, and then evaluated the validity of sightings based on the reputation of the sources.

The second sub-strategy was an automated Web crawler that captured and analyzed data from Twitter and opposing teams' websites. This technology worked slowly and would have benefited from a longer contest duration, but the Twitter crawler proved especially useful because tweets sometimes contained geographic information.
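A crawler of that kind might flag candidate sightings roughly as sketched below. The keywords, regular expressions, and function names are illustrative assumptions, since the team's actual crawler is not described at this level of detail:

```python
import re

CHALLENGE_WORDS = re.compile(r"red balloon|darpa", re.IGNORECASE)
# Matches decimal coordinate pairs such as "32.7707, -117.2536"
COORD_PATTERN = re.compile(r"(-?\d{1,2}\.\d+)\s*,\s*(-?\d{1,3}\.\d+)")

def extract_candidate_sightings(tweets):
    """Pull possible balloon sightings out of a list of tweet texts.

    Keeps tweets that mention the challenge and contain an explicit
    coordinate pair; in practice geographic cues were often place names,
    which would require a gazetteer lookup in a fuller implementation.
    """
    candidates = []
    for text in tweets:
        if not CHALLENGE_WORDS.search(text):
            continue
        match = COORD_PATTERN.search(text)
        if match:
            lat, lon = float(match.group(1)), float(match.group(2))
            candidates.append({"text": text, "lat": lat, "lon": lon})
    return candidates


print(extract_candidate_sightings(
    ["Spotted a DARPA red balloon at 32.7707, -117.2536 near the marina!"]
))
```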

To confirm the validity of possible sightings, recruited team members were used when possible. If none were available, new observers were recruited from organizations located near the sighting. The distributed location of the different organizations in the team allowed this to be a feasible strategy. Photographic analysis was used to confirm or dispute the validity of claims.

The team also encountered a case in which another team accidentally leaked information about a sighting and then tried to cover it up. The iSchools team used a variety of information sources, including social networks, to determine the real location. This demonstrated the possibility of using information from a wide variety of public websites to assess the validity of a claim. [3]

Other strategies

Prior to the competition, numerous people had discussed possible strategies, [9] including satellite photography, aerial photography, and crowdsourcing to detect balloons, as well as the possibility of misinformation campaigns to stop other teams from winning. In the actual competition, teams employed a variety of strategies.

One team leader, Jason Brindel of San Rafael, California, organized a team of around 140 people. [10] His plan was to create a web site and Twitter account dedicated to the challenge through which his team members could communicate their findings. Anyone participating in the challenge would be allowed to submit information, provided that they included details confirming their submission. Brindel planned to have the team scour the Internet for mentions of balloons across news sites, blogs, and social media sites.

George Hotz, a Twitter celebrity now famous for hacking the PlayStation 3 and settling a lawsuit brought by Sony, prepared for the competition for only an hour, posting a tweet an hour before the start. He successfully located eight balloons: four were found within his Twitter network of almost 50,000 followers, and four were acquired through trades of information with other teams. [4]

The fifth-place finisher, the Groundspeak Geocachers, deployed active geocachers and Groundspeak employees to search for balloons. They were successful in finding eight balloons, but due to a data entry error, were only credited with seven. [11]

A team calling itself the Nerdfighters utilized its existing network of followers from the Brotherhood 2.0 vlog to launch a viral video before the competition, attracting 2,000 active balloon seekers. Another 3,000 Nerdfighters scanned Internet traffic related to the competition and specialized in a misinformation campaign intended to confuse or misdirect other teams. The team also created a network of cell phone users to provide direct text-message verification of findings. [4]

A team of iPhone application developers based in Austin, TX formed Army of Eyes. Their application was developed soon after the original challenge announcement so that it would be available by launch day. [4]

The iNeighbors team, made up of members of an existing social media site for neighborhood watch communities, made no recruitment or information-trading efforts. Their goal was to evaluate the ability of their network to report effectively on abnormal activity within neighborhoods. They successfully located five of the ten balloons. [4]

Reflections

The challenge generated a number of insights.

First, it showed how mass media and social media can complement each other. While mass media were useful primarily for spreading general information about the challenge, social media were effective for virally disseminating information about the challenge to potential team recruits.

Second, it showed how social media can be useful as a data mining source. For example, the iSchools team did better than many other teams by simply monitoring public websites.

Third, the challenge showed the variety of ways in which social networking can be utilized. The MIT and GTRI teams used social networks primarily to facilitate fast communication among participants, while the iSchools team used them as a source of information.

Fourth, the challenge showed the general effectiveness of using crowdsourcing techniques to solve geographically distributed, time-sensitive problems. The DARPA program managers were surprised by how quickly the challenge was completed. However, it can be difficult to filter useful data from public sites, and the independent verification of publicly listed information remains a challenge in both efficiency and accuracy. [3]

DARPA noted that though social networks can be a powerful source of intelligence, using them may be politically sensitive due to the privacy concerns involved with data mining user content. Similarly, the winning MIT team surmised that their recursive approach would only be effective if the effort's goal was seen to be moral and good by its participants. [4]

Verified balloon locations

[Map: balloon locations]

The officially verified coordinates of the balloons, listed by their tag numbers, were published by DARPA. [12]

Subsequent challenges

Inspired by the success of the DARPA Network Challenge, DARPA launched the Shredder Challenge in 2011. This competition aimed to explore methods to reconstruct documents shredded by a variety of paper shredding techniques. As with the DARPA Network Challenge, some teams used crowdsourcing to solicit human help in reconstructing the documents. [14] The winning team used a computer-vision algorithm to suggest fragment pairings to human assemblers for verification. [15]

On July 2, 2011, also inspired by the DARPA Network Challenge, the Langley Knights Challenge was launched. It differed in that the objects to find were knights placed at various locations in England, and these had also been placed on Google Maps so that people outside the UK could participate. [16]

In January 2012, the University of Pennsylvania School of Medicine launched the MyHeartMap Challenge to map automated external defibrillators (AEDs) in the city of Philadelphia. [17] According to the organizer, Dr. Raina Merchant, "DARPA succeeded with locating red balloons. AEDs are a natural extension of a brilliant idea." [18]

Also inspired by the DARPA Network Challenge, a contest called the Tag Challenge was sponsored by the United States Department of State and the Institute of International Education. [19] The Tag Challenge asked teams to locate and photograph five individuals in five different cities across North America and Europe within twelve hours on March 31, 2012. Although the potential winnings were considerably lower than for the DARPA Network Challenge, organizers sought to test the ability of the methods discovered in that challenge to "find a person of interest" rather than a statically located object. [20]

References

  1. "MIT wins $40,000 prize in nationwide balloon-hunt contest". CNN. 2009. Archived from the original on 2012-01-20. Retrieved 2012-02-21.
  2. "MIT Red Balloon Team Wins DARPA Network Challenge" (PDF). DARPA. Archived from the original (PDF) on November 11, 2010. Retrieved 2009-12-06.
  3. John C. Tang; Manuel Cebrian; Nicklaus A. Giacobe; Hyun-Woo Kim; Taemie Kim; Douglas "Beaker" Wickert (2011). "Reflecting on the DARPA Red Balloon Challenge". Communications of the ACM. 54 (4): 78–85. doi:10.1145/1924421.1924441.
  4. Defense Advanced Research Projects Agency. "DARPA Network Challenge Project Report". Retrieved 2012-03-03.
  5. "DARPA Network Challenge Final Standings" (PDF). DARPA. Archived from the original (PDF) on November 11, 2010. Retrieved 2010-10-07.
  6. "How It Works". MIT Red Balloon Challenge Team. Archived from the original on 2010-01-11.
  7. J. Kleinberg; P. Raghavan (2005). "Query Incentive Networks". Proceedings of 46th Annual IEEE Symposium on FOCS: 132–141.
  8. Galen Pickard; Wei Pan; Iyad Rahwan; Manuel Cebrian; Riley Crane; Anmol Madan; Alex Pentland (2011). "Time-Critical Social Mobilization". Science. 334 (6055): 509–512. arXiv:1008.3172. Bibcode:2011Sci...334..509P. doi:10.1126/science.1205869. PMID 22034432. S2CID 2950817.
  9. Adrian Hon (October 31, 2009). "How to Win the DARPA Network Challenge". Mssv.
  10. Gross, Doug. "Nationwide balloon-hunt contest tests online networking". CNN. Archived from the original on 1 March 2012. Retrieved 3 March 2012.
  11. "10 Balloonies - Groundspeak's DARPA War Room". Groundspeak. December 9, 2009.
  12. "DARPA Network Challenge Balloon Coordinates" (PDF). DARPA. Archived from the original (PDF) on August 19, 2010. Retrieved 2009-12-13.
  13. "Ten red balloons– and one's in Charlottesville!". The Hook. December 5, 2009.
  14. "Crowdsourcing the 'most challenging puzzle ever". CNET. November 17, 2011. Retrieved 2011-12-01.
  15. Drummond, Katie (December 2, 2011). "Programmers Shred Pentagon's Paper Puzzle Challenge". Wired . Retrieved December 5, 2011.
  16. "Find the Knights This Weekend: Social Mobilization Experiment". July 2011.
  17. McCullough, Marie (January 31, 2012). "Global contest will lead to help during heart attacks". The Philadelphia Inquirer. Archived from the original on June 9, 2013. Retrieved 2012-02-02.
  18. "MyHeartMap Challenge Media Page". University of Pennsylvania. Retrieved 2012-02-03.
  19. "Tag Challenge". Archived from the original on 14 July 2013. Retrieved 22 March 2012.
  20. Shachtman, Noah (March 1, 2012). "U.S. Wants You to Hunt Fugitives With Twitter". Wired. Retrieved 22 March 2012.