| Elham Tabassi | |
| --- | --- |
| Education | Sharif University of Technology, Santa Clara University |
| Occupation(s) | Engineer, U.S. government leader |
| Organization(s) | National Institute of Standards and Technology (NIST), National Artificial Intelligence (AI) Research Resource Task Force |
| Honours | TIME100 Most Influential People in AI |
| Website | https://www.nist.gov/people/elham-tabassi |
Elham Tabassi is an engineer and U.S. government leader. [1] She was listed on the inaugural TIME100 Most Influential People in AI in 2023. Tabassi led the creation of the United States Artificial Intelligence Risk Management Framework, [2] which has been adopted by both industry and government. [3] She was also selected to serve on the National Artificial Intelligence (AI) Research Resource Task Force. [4] Tabassi began her government career at the National Institute of Standards and Technology, where she pioneered machine learning and computer vision projects with applications in biometrics evaluation and standards, work reflected in more than twenty-five publications. [5] Her research has been deployed by the FBI and the Department of Homeland Security. [6]
Inspired early in life by an aunt who studied at Sharif University of Technology, Tabassi attended the same university and earned a degree in electrical engineering. She later earned a master's degree in electrical and electronics engineering at Santa Clara University. [7]
Tabassi joined the National Institute of Standards and Technology in 1999, where she has worked on machine learning and computer vision research projects with applications in biometrics evaluation and standards. [8] She has held roles as Electronics Engineer, Senior Research Scientist, Chief of Staff, and Associate Director for Emerging Technologies in the Information Technology Laboratory.
She has been a member of the National AI Research Resource Task Force, Vice-Chair of the OECD Working Party on AI Governance, an Associate Editor of IEEE Transactions on Information Forensics and Security, and a fellow of the Washington Academy of Sciences. [9]
Following the establishment of the United States AI Safety Institute, Tabassi was appointed its Chief Technology Officer in 2024, responsible for leading the institute's key technical programs, which focus on supporting the development and deployment of AI that is safe, secure, and trustworthy. [10]