Dictionary · National Security Commission on Artificial Intelligence: The Final Report
- Test and Evaluation, Verification and Validation (TEVV)
A framework and process incorporating methods and metrics to determine that a technology or system satisfactorily meets its design specifications and requirements, and that it is sufficient for its intended use.
- Traceability
A characteristic of an AI system enabling a person to understand the technology, development processes, and operational capabilities (e.g., with transparent and auditable methodologies along with documented data sources and design procedures).
- Agile
A philosophy and methodology used to describe the continuous, iterative process to develop and deliver software and other digital technologies. User requirements and feedback inform incremental development and delivery by developers.
- Computer Vision
The computational process of perceiving and performing visual tasks in order to interpret and understand the world through cameras and sensors.
- False Positive
An example in which the model mistakenly classifies an item as in the positive class.
- Explainability
A characteristic of an AI system in which there is provision of accompanying evidence or reasons for system output in a manner that is meaningful or understandable to individual users (as well as to developers and auditors) and reflects the system’s process for generating the output (e.g., what alternatives were considered, but not proposed, and why not).
- Graphics Processing Unit (GPU)
A specialized chip capable of highly parallel processing. GPUs are well-suited for running machine learning and deep learning algorithms. GPUs were first developed for efficient parallel processing of arrays of values used in computer graphics. Modern-day GPUs are designed to be optimized for machine learning.
- Expert System
A computer system emulating the decision-making ability of a human expert through the use of reasoning, leveraging an encoding of domain-specific knowledge most commonly represented by sets of if-then rules rather than procedural code. The term “expert system” was used largely during the 1970s and ’80s amidst great enthusiasm about the power and promise of rule-based systems that relied on a “knowledge base” of domain-specific rules and rule-chaining procedures that map observations to conclusions or recommendations.
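The rule-chaining described above can be sketched in a few lines. This is a minimal illustration, not an implementation from the report: the rules, facts, and the simple forward-chaining loop are all hypothetical examples of how a knowledge base of if-then rules maps observations to conclusions.

```python
# Minimal expert-system sketch: domain knowledge is encoded as if-then
# rules (data, not procedural code), and forward chaining repeatedly
# applies rules until no new facts can be derived. Rule contents are
# illustrative only.

RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "shortness_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # A rule fires when all of its conditions are known facts.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# forward_chain({"fever", "cough", "shortness_of_breath"}, RULES)
# derives "flu_suspected" and then, by chaining, "refer_to_doctor".
```

Note that the second rule can only fire after the first one has added "flu_suspected" to the fact base, which is the rule-chaining behavior the definition describes.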
- False Negative
An example in which the predictive model mistakenly classifies an item as in the negative class.
- Generative Adversarial Network (GAN)
An approach to training AI models useful for applications like data synthesis, augmentation, and compression where two neural networks are trained in tandem: one is designed to be a generative network (the forger) and the other a discriminative network (the forgery detector). The objective is for each network to train and better itself off the other, reducing the need for big labeled training data.
- Governance
The actions taken to ensure that stakeholder needs, conditions, and options are evaluated to determine balanced, agreed-upon enterprise objectives; that direction is set through prioritization and decision-making; and that performance and compliance are monitored against agreed-upon directions and objectives. AI governance may include policies on the nature of AI applications developed and deployed versus those limited or withheld.
- Human-Machine Teaming (HMT)
The ability of humans and AI systems to work together to undertake complex, evolving tasks in a variety of environments with seamless handoff both ways between human and AI team members. Areas of effort include developing effective policies for controlling human and machine initiatives, computing methods that ideally complement people, methods that optimize goals of teamwork, and designs that enhance human-AI interaction.
- Interpretability
The ability to understand the value and accuracy of system output. Interpretability refers to the extent to which cause and effect can be observed within a system, or to which the system's behavior can be predicted given a change in input or algorithmic parameters.
- Opacity
The nature of some AI techniques whereby the inferential operations are complex, hidden, or otherwise opaque to their developers and end users in terms of providing an understanding of how classifications, recommendations, or actions are generated and what overall performance will be.
- Precision
A metric for classification models. Precision identifies the frequency with which a model was correct when classifying the positive class.
- Prediction
Forecasting quantitative or qualitative outputs through function approximation, applied on input data or measurements.
- Recall
A metric for classification models; identifies the frequency with which a model correctly classifies the true positive items.
- Pseudo-Anonymization (pseudonymization)
A data management technique to strip identifiers linking data to an individual.
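One common way to strip direct identifiers, sketched below, is to replace them with a keyed hash so that records remain linkable to each other but not to the individual without the secret key. The field names and key-handling are illustrative assumptions, not a prescribed technique from the report.

```python
# Minimal pseudonymization sketch: the "name" identifier is replaced by
# an HMAC-SHA256 token. The same name always maps to the same token
# (records stay linkable), but re-identification requires the key,
# which is assumed to be stored separately from the data.
import hashlib
import hmac

SECRET_KEY = b"keep-this-separate-from-the-data"  # held by the data custodian

def pseudonymize(record):
    out = dict(record)
    token = hmac.new(SECRET_KEY, record["name"].encode(), hashlib.sha256)
    out["name"] = token.hexdigest()[:16]  # stable 16-char pseudonym
    return out

record = {"name": "Ada Lovelace", "diagnosis": "flu"}
# pseudonymize(record)["name"] is a hex token, not the real name,
# while the rest of the record is unchanged.
```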
- Robust AI
An AI system that is resilient in real-world settings, such as an object-recognition application that is robust to significant changes in lighting. The term also refers to resilience against adversarial attacks on AI components.
- Reinforcement Learning
A method of training algorithms to take suitable actions by maximizing reward earned over the course of their actions. This type of learning can take place in simulated environments, such as games, which reduces the need for real-world data.
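The reward-maximization idea can be sketched with tabular Q-learning in a tiny simulated environment, consistent with the definition's point that learning can happen in simulation. The corridor environment and all hyperparameters below are illustrative assumptions.

```python
# Reinforcement-learning sketch: tabular Q-learning on a 1-D corridor.
# The agent starts at state 0 and earns reward only when it reaches the
# rightmost state, so maximizing reward teaches it to move right.
import random

N_STATES = 5           # states 0..4; state 4 ends the episode
ACTIONS = [-1, +1]     # move left or move right
alpha, gamma, eps = 0.5, 0.9, 0.3   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):                 # episodes of simulated experience
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the current value estimates,
        # occasionally explore a random action.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0   # reward only at the goal
        # Q-learning update toward reward plus discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
```

After training, the greedy action in every non-terminal state is "move right", i.e. the behavior that maximizes accumulated reward, and no real-world data was needed.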
- Robotic Process Automation (RPA)
Software to help in the automation of tasks, especially those that are tedious and repetitive.
- Responsible AI
An AI system that aligns development and behavior to goals and values. This includes developing and fielding AI technology in a manner that is consistent with democratic values.