Artificial Intelligence

Overview of NIST Initiatives on Artificial Intelligence Standards, Principles, and Critical AI Issues

In this webinar, experts from the National Institute of Standards and Technology provide an overview of their key artificial intelligence initiatives, including responsible, trustworthy, and explainable AI.

Wednesday, November 04, 2020 12:00 PM – 1:00 PM ET

Hosted by Digital.gov and the AI Community of Practice

Artificial intelligence (AI) has the potential to impact nearly all aspects of our society, including our economy, but the development and use of the new technologies it brings are not without technical challenges and risks. AI must be developed in a trustworthy manner to ensure reliability, safety, and accuracy.

Elham Tabassi and Mark Przybocki will provide an overview of ongoing National Institute of Standards and Technology (NIST) efforts supporting fundamental and applied research and standards for AI technologies.

Speakers:


Elham Tabassi is the chief of staff in the Information Technology Laboratory (ITL) at NIST. ITL, one of six research laboratories within NIST, supports NIST’s mission to promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life. ITL conducts fundamental and applied research in computer science and engineering, mathematics, and statistics that cultivates trust in information technology and metrology. It does so by developing and disseminating standards, measurements, and tests for the interoperability, security, usability, and reliability of information systems.

Mark Przybocki is the acting chief of the Information Access Division (IAD), one of seven technical divisions in ITL. In this capacity, he leads NIST collaborations with industry, academia, and other government agencies to foster trust in emerging technologies that make sense of complex (human) information by improving measurement science, managing technical evaluations, and contributing to standards. The IAD is home to the high-profile Text Retrieval Conference (TREC), several biometric benchmarking programs, and a growing number of technical evaluations for emerging human language, natural language processing, speech, image, and video analytics technologies. Mr. Przybocki’s current interests are in AI benchmarking, explainable AI, and bias across the AI development lifecycle.


This talk is hosted by the AI Community of Practice (CoP). This community aims to unite federal employees who are active in, or interested in, AI policy, technology, standards, and programs, in order to accelerate the thoughtful adoption of AI across the federal government.

In this talk

Originally posted by Elham Tabassi (NIST) on Nov 4, 2020
Originally posted by Krista Kinnard (GSA) on Nov 4, 2020
Originally posted by Mark Przybocki (NIST) on Nov 4, 2020
Originally posted by Steven Babitch on Nov 4, 2020