Women in Data Science Initiative

Founded in 2015 at Stanford University in California by Dr. Margot Gerritsen and Karen Matthys, the Women in Data Science (WiDS) Initiative encourages women from around the world to connect with one another, to form local and regional networks, and to promote an inclusive and diverse community within the rapidly expanding field of data science. [1]

From problems with facial recognition technologies that do not recognize darker-skinned faces to predictive policing algorithms that misidentify threats and intensify surveillance in communities of color, the effects of the lack of diversity in data science are numerous. Biased data sets may result in additional forms of discrimination. When an early attempt to design a computer program to help with hiring decisions relied mainly on resumes from men, the program "taught itself that male candidates were preferable to women." [2] While Amazon immediately recognized this tendency and never used the program to evaluate job candidates, this example shows that relying on biased data may perpetuate inequalities. According to Gerritsen, “We cannot let these algorithms and these approaches of data-driven decision making really play the significant role that they are starting to play in our society at large if we do not really understand the ethics.” [3]

WiDS holds an annual Women in Data Science Worldwide conference, held as a 24-hour virtual event in 2021, intended to inspire, educate, and sustain data scientists worldwide. In 2020, over 30,000 people from 50 different countries participated. [1] WiDS has reached over 100,000 women around the world. [2] The Pune, India chapter of WiDS, for example, has over 5,000 members. Sucheta Dhere, ambassador of the WiDS Pune chapter, noted that computer vision, natural language processing, and machine learning "have a huge hiring potential in India," particularly for women. [4] In 2019, more than 250 women convened in Madrid for the WiDS conference, which brought together women working on artificial intelligence and robotics. [5] The Cambridge WiDS event was held at the Massachusetts Institute of Technology in 2020; its signature event was a panel discussion on data science and fake news called "Data weaponized, data scrutinized: a war on information." [6]

The Women in Data Science initiative also runs workshops on topics ranging from actionable ethics and automating machine learning to data analysis for health and exploring artificial intelligence. [7]

Related Research Articles

Machine learning: Study of algorithms that improve automatically through experience

Machine learning (ML) is the study of computer algorithms that can improve automatically through experience and by the use of data. It is seen as a part of artificial intelligence. Machine learning algorithms build a model based on sample data, known as training data, in order to make predictions or decisions without being explicitly programmed to do so. Machine learning algorithms are used in a wide variety of applications, such as in medicine, email filtering, speech recognition, and computer vision, where it is difficult or unfeasible to develop conventional algorithms to perform the needed tasks.
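The idea of building a model from sample data rather than hand-written rules can be sketched with a toy example. This is a minimal one-nearest-neighbour classifier; the training data and labels are hypothetical:

```python
# A minimal illustration of "learning from data": a one-nearest-neighbour
# classifier predicts from labelled training examples without any
# hand-written decision rules. (Hypothetical toy data.)

def nearest_neighbour(train, query):
    """Return the label of the training point closest to `query`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    point, label = min(train, key=lambda pl: dist(pl[0], query))
    return label

# Training data: (features, label) pairs, e.g. (height_cm, weight_kg).
train = [((150, 50), "A"), ((180, 90), "B"),
         ((160, 55), "A"), ((175, 85), "B")]

print(nearest_neighbour(train, (155, 52)))  # → "A"
print(nearest_neighbour(train, (178, 88)))  # → "B"
```

The program was never told what distinguishes class "A" from class "B"; its behaviour comes entirely from the training data, which is exactly why biased data leads to biased predictions.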

Robot ethics, sometimes known as "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic, and how robots should be designed such that they act 'ethically'. Alternatively, roboethics refers specifically to the ethics of human behavior towards robots, as robots become increasingly advanced. Robot ethics is a sub-field of ethics of technology, specifically information technology, and it has close links to legal as well as socio-economic concerns. Researchers from diverse areas are beginning to tackle ethical questions about creating robotic technology and implementing it in societies, in a way that will still ensure the safety of the human race.

The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics. It also includes the issue of a possible singularity due to superintelligent AI.

Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificial intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. Machine ethics should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with the grander social effects of technology.

Explainable AI (XAI) is artificial intelligence (AI) in which the results of the solution can be understood by humans. It contrasts with the concept of the "black box" in machine learning, where even a system's designers cannot explain why an AI arrived at a specific decision. XAI may be an implementation of the social right to explanation. XAI is relevant even where there is no legal right or regulatory requirement; for example, it can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. In this way, the aim of XAI is to explain what has been done, what is being done now, and what will be done next, and to unveil the information these actions are based on. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new assumptions.
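One simple form of explainability, in contrast to a black box, is a linear scoring model whose per-feature contributions to a decision can be read off directly. The weights and applicant features below are hypothetical:

```python
# A sketch of interpretability: in a linear scoring model, each feature's
# contribution to the decision is directly visible, one basic form of
# explainable AI. (Hypothetical weights and features.)

weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def score_with_explanation(applicant):
    """Return the total score and each feature's contribution to it."""
    contributions = {f: weights[f] * applicant[f] for f in weights}
    return sum(contributions.values()), contributions

total, parts = score_with_explanation(
    {"income": 4.0, "debt": 2.0, "years_employed": 5.0})
print(round(total, 2))  # → 1.9
print(parts)  # each feature's share of the decision is visible
```

A deep neural network making the same decision would offer no such breakdown, which is the "black box" problem XAI addresses.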

Algorithmic bias: Technological phenomenon with social implications

Algorithmic bias describes systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others. Bias can emerge from many factors, including but not limited to the design of the algorithm or the unintended or unanticipated use or decisions relating to the way data is coded, collected, selected or used to train the algorithm. For example, algorithmic bias has been observed in search engine results and social media platforms. This bias can have impacts ranging from inadvertent privacy violations to reinforcing social biases of race, gender, sexuality, and ethnicity. The study of algorithmic bias is most concerned with algorithms that reflect "systematic and unfair" discrimination. This bias has only recently been addressed in legal frameworks, such as the European Union's General Data Protection Regulation (2018) and the proposed Artificial Intelligence Act (2021).
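The way bias flows from training data into predictions can be sketched with a deliberately crude model trained on hypothetical, skewed hiring records:

```python
# A toy illustration of algorithmic bias: a "majority outcome per group"
# model trained on skewed historical hiring records simply reproduces
# the historical disparity. (Hypothetical data for illustration only.)

from collections import Counter

# Historical records as (group, hired) pairs; group "X" was rarely hired.
history = ([("X", 0)] * 9 + [("X", 1)] * 1 +
           [("Y", 0)] * 4 + [("Y", 1)] * 6)

def train_majority(records):
    """Predict, for each group, whichever outcome was most common."""
    by_group = {}
    for group, hired in records:
        by_group.setdefault(group, Counter())[hired] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

model = train_majority(history)
print(model)  # {'X': 0, 'Y': 1} — the model encodes the past disparity
```

No real system is this crude, but the mechanism is the same: a model fit to discriminatory historical data treats the discrimination as signal.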

The AI Now Institute at NYU is an American research institute studying the social implications of artificial intelligence. AI Now was founded by Kate Crawford and Meredith Whittaker in 2017 after a symposium hosted by the White House under Barack Obama. It is located at New York University. AI Now is partnered with organizations such as the New York University Tandon School of Engineering, New York University Center for Data Science, Partnership on AI, and the ACLU. AI Now has produced annual reports that examine the social implications of artificial intelligence.

Argument technology is a sub-field of artificial intelligence that focuses on applying computational techniques to the creation, identification, analysis, navigation, evaluation and visualisation of arguments and debates. In the 1980s and 1990s, philosophical theories of arguments in general, and argumentation theory in particular, were leveraged to handle key computational challenges, such as modeling non-monotonic and defeasible reasoning and designing robust coordination protocols for multi-agent systems. At the same time, mechanisms for computing semantics of Argumentation frameworks were introduced as a way of providing a calculus of opposition for computing what it is reasonable to believe in the context of conflicting arguments.

Rumman Chowdhury: Data scientist, AI specialist

Rumman Chowdhury was born in 1980 in Rockland County, New York. She is a Bengali American data scientist, a business founder, and former Responsible Artificial Intelligence Lead at Accenture. She enjoyed watching science fiction and attributes her curiosity about science to the Dana Scully effect. She completed her undergraduate study in management science and political science at the Massachusetts Institute of Technology, received a Master of Science in statistics and quantitative methods from Columbia University, and holds a doctorate in political science from the University of California, San Diego, which she finished while working in Silicon Valley. The main focus of her career and her graduate studies has been how data can be used to understand people's biases and how to evaluate the impact of technology on humanity. In February 2021 she joined the META team at Twitter, where she works with colleagues to bring Twitter's AI algorithms in line with ethical guidelines; she described the role as an opportunity to have a direct positive impact at a global scale. Her most recent work for Twitter, an analysis titled "Examining algorithmic amplification of political content on Twitter," was presented in October 2021.

Timnit Gebru: Computer scientist

Timnit Gebru is an American computer scientist who works on algorithmic bias and data mining. She is an advocate for diversity in technology and co-founder of Black in AI, a community of black researchers working in artificial intelligence.

Rachel Thomas (academic): American computer scientist

Rachel Thomas is an American computer scientist and founding Director of the Center for Applied Data Ethics at the University of San Francisco. Together with Jeremy Howard, she is co-founder of fast.ai. Thomas was selected by Forbes magazine as one of the 20 most incredible women in artificial intelligence.

Coded Bias: 2020 American documentary film

Coded Bias is an American documentary film directed by Shalini Kantayya that premiered at the 2020 Sundance Film Festival. The film includes contributions from researchers Joy Buolamwini, Deborah Raji, Meredith Broussard, Cathy O’Neil, Zeynep Tufekci, Safiya Noble, Timnit Gebru, Virginia Eubanks, and Silkie Carlo, among others.

Algorithmic Justice League

The Algorithmic Justice League (AJL) is a digital advocacy organization based in Cambridge, Massachusetts. Founded by computer scientist Joy Buolamwini in 2016, AJL aims to raise awareness of the social implications of artificial intelligence through a combination of art and research, and was featured in the 2020 documentary Coded Bias. The organization describes itself as a group of activists pushing for more ethical uses of artificial intelligence, holding open online seminars and meetings intended to educate people about facial recognition technology.

Hanna Wallach: Computational social scientist

Hanna Wallach is a computational social scientist and partner research manager at Microsoft Research. Her work makes use of machine learning models to study the dynamics of social processes. Her current research focuses on issues of fairness, accountability, transparency, and ethics as they relate to AI and machine learning.

Rashida Richardson: American attorney and scholar

Rashida Richardson is a visiting scholar at Rutgers Law School and the Rutgers Institute for Information Policy and the Law, and a senior fellow in the Digital Innovation and Democracy Initiative at the German Marshall Fund. In July 2021 she is scheduled to join Northeastern University as an Assistant Professor of Law and Political Science, jointly appointed to the School of Law and the Department of Political Science in the College of Social Sciences and Humanities. Richardson previously was the director of policy research at the AI Now Institute, where she designed, implemented, and coordinated research strategies and initiatives on law, policy, and civil rights. Throughout her career as an attorney, researcher, and scholar, Richardson has engaged in science communication and public advocacy.

Black in AI

Black in AI, formally called the Black in AI Workshop, is a technology research organization and affinity group, founded by computer scientists Timnit Gebru and Rediet Abebe in 2017. It started as a conference workshop, later pivoting into an organization. Black in AI increases the presence and inclusion of Black people in the field of artificial intelligence (AI) by creating space for sharing ideas, fostering collaborations, and providing mentorship and advocacy.

Margaret Mitchell is a computer scientist who works on algorithmic bias and fairness in machine learning. She is most well known for her work on automatically removing undesired biases concerning demographic groups from machine learning models, as well as more transparent reporting of their intended use.

Abeba Birhane is an Ethiopian-born cognitive scientist who works at the intersection of complex adaptive systems, machine learning, algorithmic bias, and critical race studies. Birhane's groundbreaking work with Vinay Prabhu uncovered that large-scale image datasets commonly used to develop AI systems, including ImageNet and 80 Million Tiny Images, carried racist and misogynistic labels and offensive images. She has been recognized by VentureBeat as a top innovator in computer vision.

Himabindu "Hima" Lakkaraju is an Indian-American computer scientist who works on machine learning, artificial intelligence, algorithmic bias, and AI accountability. She is currently an Assistant Professor at the Harvard Business School and is also affiliated with the Department of Computer Science at Harvard University. Lakkaraju is known for her work on explainable machine learning. More broadly, her research focuses on developing machine learning models and algorithms that are interpretable, transparent, fair, and reliable. She also investigates the practical and ethical implications of deploying machine learning models in domains involving high-stakes decisions such as healthcare, criminal justice, business, and education. Lakkaraju was named as one of the world's top Innovators Under 35 by both Vanity Fair and the MIT Technology Review.

References

  1. "Women in Data Science Initiative Holds Global Conference to Celebrate International Women's Day - Ms. Magazine". msmagazine.com. Retrieved 2021-03-13.
  2. "Why the World Needs More Women Data Scientists". Center For Global Development. Retrieved 2021-03-31.
  3. "The human dimension of data is essential for impartial AI". SiliconANGLE. 2021-03-01. Retrieved 2021-03-13.
  4. "Pune: Conference on data science to be held on March 8". The Indian Express. 2021-02-26. Retrieved 2021-03-18.
  5. "Women in Data Science ha celebrado en Madrid su tercera edición". Directivos y Empresas (in Spanish). 2019-03-07. Retrieved 2021-03-31.
  6. "Annual Women in Data Science conference discusses fake news". MIT News | Massachusetts Institute of Technology. Retrieved 2021-03-31.
  7. "Women in Data Science (WiDS) Announces WiDS Workshops Initiative". finance.yahoo.com. Retrieved 2021-03-13.