| Original author(s) | Joshua Browder |
| --- | --- |
| Initial release | 2015 |
| Operating system | iOS, Android |
| Available in | English |
| Type | Legal technology, chatbot |
| Website | donotpay.com |
DoNotPay is an American company specializing in online legal services and chatbots. Its product is marketed as a "robot lawyer" that the company claims uses artificial intelligence to contest parking tickets and provide various other legal services; a subscription costs $36 for three months. [1]
DoNotPay's effectiveness and marketing have drawn both praise and criticism. [2] [3] [4] In September 2024, the company received a $193,000 fine from the Federal Trade Commission (FTC) for falsely advertising the capabilities of its artificial intelligence (AI) services. [5] [6] The FTC also stated that the company had never tested the legal accuracy of the chatbot's answers. [7]
DoNotPay started as an app for contesting parking tickets. It now sells services that generate documents on legal issues ranging from consumer protection to immigration rights, which it states are produced via automation and AI. [8] The company claims its application is supported by IBM's Watson AI. [9] It is available in the United Kingdom and the United States (in all 50 states). [10]
DoNotPay states that its services help customers seek refunds on flight tickets and hotel bookings, [11] cancel free trials, [12] sue people, [13] [14] apply for asylum or homeless housing, [10] seek claims from Equifax in the aftermath of its security breach, [15] and obtain U.S. visas and green cards. [16] DoNotPay offers a Free Trial Card feature that gives users a virtual credit card number for signing up for free online trials (such as Netflix and Spotify). [11] As soon as the free trial period ends, the card automatically declines any charges. [17] [18] DoNotPay also claims that its services let users automatically apply for refunds, cancel subscriptions, fight spam in their inboxes, combat volatile airline prices, and file damage claims with city offices. [19] [20]
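DoNotPay has not published how the Free Trial Card works internally, and the sources above give no implementation details. The following is only a minimal illustrative sketch of the behaviour described (a virtual card that approves charges during the trial window and declines everything afterwards); the `VirtualCard` class and its `authorize` method are invented for this example and do not represent DoNotPay's actual system.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional


class VirtualCard:
    """Hypothetical virtual card that declines all charges once a trial window closes."""

    def __init__(self, trial_days: int):
        # The card is "issued" now and stops approving charges when the trial ends.
        self.trial_ends = datetime.now(timezone.utc) + timedelta(days=trial_days)

    def authorize(self, amount_cents: int, charged_at: Optional[datetime] = None) -> bool:
        """Approve a charge only while the free-trial window is still open.

        The amount is ignored by this simple time-based rule; after the trial
        ends, every authorization is refused, so the free trial never converts
        into a paid charge.
        """
        charged_at = charged_at or datetime.now(timezone.utc)
        return charged_at <= self.trial_ends


# A 30-day card approves the sign-up authorization but declines the renewal.
card = VirtualCard(trial_days=30)
print(card.authorize(0))  # True: $0 sign-up check during the trial passes
late_charge = datetime.now(timezone.utc) + timedelta(days=31)
print(card.authorize(999, charged_at=late_charge))  # False: renewal charge declined
```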
In 2021, DoNotPay raised $10 million from investors including Andreessen Horowitz, Lux Capital, and Tribe Capital, reaching a valuation of $210 million. [21]
In 2016, Joshua Browder, the company's founder, told The Guardian that the chatbot had contested more than 250,000 parking tickets in London and New York and won 160,000 of them, although the newspaper did not appear to verify the claim. [22]
Browder's technology has received mixed reviews. For example, a blog post from The Guardian noted that it "just drafted an impressive notice under the Data Protection Act 1998 not to use my personal information for direct marketing." [23] Similarly, a writer with The American Lawyer noted that "one of DoNotPay's chatbots helped me draft a strong, well-cited and appropriately toned letter requesting extended maternity leave." [24]
However, Legal Cheek tested the service in 2016 with "fairly basic legal questions" and found that it failed to answer most of them. [25] Above the Law suggested the service may "be too good to be true" due to errors in the legal advice provided, warning that when dealing with "things as important as securing immigration status, which is one of the services DoNotPay promotes, mistakes can ruin lives." It ultimately recommended the service for "clear-cut issues like parking tickets or non-critical matters," while cautioning against its use for legal issues with higher stakes. [2]
In January 2023, Browder claimed that the organization would attempt to use DoNotPay live in court, but the plan was halted after he was warned that it could constitute the unlicensed practice of law. NPR wrote that "some observers" have had "mixed to shoddy results attempting to use its basic features", and noted that Browder is known for attention-seeking stunts. [3]
In March 2023, the company faced two class-action lawsuits: one alleging that it "misled customers and misrepresented its product" [26] and another alleging that the company was practicing law without a license. [27] The parties in the first lawsuit "reached a settlement in principle" without disclosing its terms. [28]
The second lawsuit, over the unlicensed practice of law, was ultimately dismissed; Chief District Judge Nancy Rosenstengel agreed with DoNotPay, holding that the plaintiff law firm, MillerKing, had failed to establish standing because it had not alleged any concrete injury. [29]
In September 2024, the United States Federal Trade Commission (FTC) announced an enforcement action against DoNotPay, alleging that the company "relied on artificial intelligence as a way to supercharge deceptive or unfair conduct that harms consumers[...]." [30] [31] For example, the company's advertising featured a quote praising its services that was attributed to the Los Angeles Times but actually came from a high school student's op-ed on the newspaper's "High School Insider" platform. [7] The FTC also stated that the company never tested the quality of its legal services or hired attorneys to assess the accuracy of the chatbot's answers. [7]
In the proposed settlement, DoNotPay did not admit liability, but did agree to several penalties, including a fine of $193,000 and limitations on its future marketing claims. [32]