Rumman Chowdhury

Rumman Chowdhury at the AI for Good Global Summit 2018

Born: 1980 (age 43–44)
Alma mater: Massachusetts Institute of Technology; Columbia University; University of California, San Diego (PhD)
Awards: BBC 100 Women (2017); TIME 100/AI (2023); One of Five Who Are Shaping AI, Forbes (2018); Bay Area's Top 40 Under 40, San Francisco Business Times (2018); inducted into the Royal Society of Arts (RSA)
Fields: Ethical artificial intelligence
Institutions: Accenture
Thesis: Beating Plowshares into Swords: The Impact of the Metropolitan-Military Complex (2017)
Doctoral advisors: Thaddeus Kousser, Steven Erie [1]
Website: rummanchowdhury.com

Rumman Chowdhury (born 1980) is a Bangladeshi American data scientist, entrepreneur, and former responsible artificial intelligence lead at Accenture. She was born in Rockland County, New York. [2] She is recognized for her contributions to the fields of data science and responsible AI.

Chowdhury's path into science was inspired by her love of science fiction, a spark she has attributed to the "Dana Scully Effect." This fascination with the intersection of science and fiction laid the foundation for her later work. [3]

Education

Chowdhury completed her undergraduate studies in management science and political science at the Massachusetts Institute of Technology. [2] She received a Master of Science in statistics and quantitative methods from Columbia University [3] and holds a doctorate in political science from the University of California, San Diego, [2] [1] which she completed while working in Silicon Valley. [4] Her central interest, in both her career and her graduate studies, has been how data can be used to understand human bias and how to evaluate the impact of technology on humanity. [2]

Career

Early

Chowdhury taught data science at the boot camp Metis and worked at Quotient before joining Accenture in 2017, [2] where she led the firm's work on responsible artificial intelligence. [2] Her concerns included algorithmic bias and the AI workforce, particularly the retention of researchers. [2] She has spoken openly about the need to define what ethical AI actually means [5] and coined the term "moral outsourcing". She works with companies to develop ethical governance and algorithms that explain their decisions transparently. [6] She has also sought to use AI to improve diversity in recruitment. [7]

Chowdhury, alongside a team of early career researchers at the Alan Turing Institute, developed a Fairness Tool which scrutinises the data that is input to an algorithm and identifies whether certain categories (such as race or gender) may influence the outcome. [8] The tool both identifies and tries to fix bias, enabling organisations to make fairer decisions. [9]

All.ai, Parity and X Institute

Chowdhury designed All.ai, a language analysis tool that can monitor and improve the gender balance of speakers in meetings. [10]

In 2020, she founded Parity to bridge the translation gap between risk, legal, and data teams. [11]

She launched the X Institute, a program which teaches refugees about data science and marketing. [2]

She has given a keynote speech at Slush, talking about augmenting human capabilities. [12] She delivered a TED talk about humanity in the age of artificial intelligence. [12]

Twitter

From February 2021 to November 2022, Chowdhury was a director of the Machine Learning Ethics, Transparency and Accountability (META) team at Twitter. [13] [14] META's goal was to study and improve the machine learning systems used within Twitter, including biased algorithms that may harm users. [15] Algorithmic bias has long been an issue in the tech industry: systems can make unfair decisions based on traits such as gender, sex, race, or social class. META sought to address this by making Twitter fairer, more accountable, and more transparent for its users. [15] Most of META's projects involved research and data analysis, and the team was largely made up of professors, researchers, and engineers. [16] In 2021, Chowdhury and the META team published an analysis titled "Examining algorithmic amplification of political content on Twitter". [17] In November 2022, Chowdhury was one of many Twitter employees laid off at short notice after Elon Musk's takeover of the company. [13]

Awards

In 2017, she was included in the BBC's 100 Women list in the "Glass Ceiling Team" category. [18] In 2018, Forbes named her one of five people shaping AI, [19] and The Business Journals recognized her as one of the Bay Area's top 40 Under 40. [20] She has also been inducted into the Royal Society of Arts (RSA), which celebrates people who have made progress on social challenges. In 2023, she was named to TIME's list of the 100 most influential people in AI. [21]

References

  1. Chowdhury, Rumman (2017). Beating Plowshares into Swords: The Impact of the Metropolitan-Military Complex. escholarship.org (PhD thesis). University of California, San Diego. OCLC 992172239.
  2. Apte, Poornima. "The Data Scientist Putting Ethics Into AI". OZY. Retrieved 20 November 2018.
  3. "Rumman Chowdhury is California's Coolest Data Scientist". MM.LaFleur. 13 January 2017. Retrieved 20 November 2018.
  4. "Meet the San Francisco Business Times' 40 under 40 Class of 2018 - Rumman Chowdhury". San Francisco Business Times. 2018. Retrieved 23 November 2018.
  5. Yao, Mariya (12 June 2018). "Building Ethical & Responsible AI Technologies (Interview with Rumman Chowdhury of Accenture)". TOPBOTS. Retrieved 21 November 2018.
  6. Building Ethical & Responsible AI Technologies (AI For Growth, Rumman Chowdhury, Accenture). TOPBOTS: Applied AI For Business. 12 June 2018. Retrieved 20 November 2018 via YouTube.
  7. Welsh, John. "9 Developments In AI That You Really Need to Know". Forbes. Retrieved 21 November 2018.
  8. "CogX - Tackling The Challenge Of Ethics In AI". Accenture. Retrieved 20 November 2018.
  9. "5 Q's for Rumman Chowdhury, Global Lead for Responsible AI at Accenture". Center for Data Innovation. 17 August 2018. Retrieved 21 November 2018.
  10. Hinchliffe, Emma. "This app will help you speak up - or shut up - during meetings". Mashable. Retrieved 21 November 2018.
  11. "About Us". Parity. Archived from the original on 20 January 2021.
  12. Slush (7 December 2017). Rumman Chowdhury: Augmenting Human Capabilities to New Dimensions. Retrieved 21 November 2018.
  13. Carbonaro, Giulia (4 November 2022). "Twitter worker who pointed out right-wing bias on platform fired by Musk". Newsweek.
  14. "Big news! - I'm thrilled to be joining the @TwitterEng team today as Director of ML Ethics, Transparency & Accountability. With the META team, we'll work to improve ML transparency, inclusivity and accountability. 1/". Twitter. Retrieved 17 November 2021.
  15. "Introducing our Responsible Machine Learning Initiative". blog.twitter.com. Retrieved 17 November 2021.
  16. ""A 'building the plane as you fly it' moment": Q&A with Twitter's ethical AI lead Rumman Chowdhury". Morning Brew. Retrieved 17 November 2021.
  17. Belli, Luca (21 October 2021). "Examining algorithmic amplification of political content on Twitter". blog.twitter.com. Retrieved 17 November 2021.
  18. "BBC 100 Women 2017: Who is on the list?". 27 September 2017. Retrieved 10 January 2024.
  19. Insights Team. "Forbes Insights: 5 People Building Our AI Future". Forbes. Retrieved 17 November 2021.
  20. "40 Under 40 2018: Rumman Chowdhury, Accenture (Video)". The Business Journals. Retrieved 17 November 2021.
  21. Tech & Startup Desk (8 September 2023). "Bangladeshi-origin Rumman Chowdhury in TIME's Top 100 in AI". The Daily Star. Retrieved 9 September 2023.