Tag: AI governance
-
Artificial Superintelligence (ASI)
Artificial Superintelligence (ASI) refers to a hypothetical level of artificial intelligence that vastly surpasses human intelligence across virtually all domains of interest. An ASI would outperform the best human minds in every field – from scientific discovery and creative innovation to social skills and general problem-solving. This concept represents the upper extreme of AI development,…
-
Large Language Model (LLM)
Large Language Models (LLMs) are advanced artificial intelligence systems designed to understand and generate human-like language. They belong to a class of foundation models – AI models trained on immense amounts of text data that give them broad capabilities across many tasks. Instead of being narrowly programmed for one purpose, an LLM learns from billions…
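At a vastly simplified level, the "learn next-word statistics from text" idea can be illustrated with a toy bigram model. This is a hypothetical sketch for intuition only — real LLMs are neural networks trained on billions of examples, not frequency tables:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count how often each word follows each other word in the text."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation seen in training, if any."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Toy "training corpus" (hypothetical).
corpus = "the model reads text and the model predicts the next word"
model = train_bigram(corpus)
```

Here `predict_next(model, "the")` returns "model", because that is the word most often observed after "the" — the same statistical instinct, scaled up enormously, underlies an LLM's next-token prediction.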
-
Human-in-the-Loop (HITL)
Human-in-the-Loop (HITL) refers to any system or process that integrates active human participation into an otherwise automated workflow or control loop. In an HITL model, a human operator is not just a passive observer but is involved in the operation, supervision, and decision-making of a computerized or autonomous system. The concept applies across multiple domains…
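As an illustrative sketch (all names and thresholds hypothetical), an HITL control point can be modelled as an automated pipeline that executes low-risk actions on its own but routes consequential ones to a human operator for explicit approval:

```python
def hitl_execute(action, risk, approve, threshold=0.5):
    """Run low-risk actions automatically; route risky ones to a human.

    `approve` is a callable standing in for the human operator:
    it receives the proposed action and returns True or False.
    """
    if risk < threshold:
        return f"auto-executed: {action}"
    if approve(action):
        return f"human-approved: {action}"
    return f"rejected by human: {action}"

# A stand-in "operator" that only approves read-only actions.
operator = lambda action: action.startswith("read")

print(hitl_execute("read logs", risk=0.9, approve=operator))
print(hitl_execute("delete records", risk=0.9, approve=operator))
print(hitl_execute("rotate cache", risk=0.1, approve=operator))
```

The key design point is that the human is in the decision path, not merely notified after the fact: the risky action simply does not run until the operator returns an answer.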
-
AI and Robotics Cooperatives: Empowering Shared Ownership in Tech
Artificial intelligence (AI) and robotics are transforming industries, but their rapid advancement has raised concerns about centralized control and unequal benefits. In response, a growing movement of AI and robotics cooperatives is emerging to democratize technology development. These cooperatives are organizations owned and governed by their members – whether workers, users, or communities – and…
-
Explainability (in AI)
Explainability in artificial intelligence (AI) refers to the ability of an AI system or model to make its functioning and decision-making processes understandable to humans. In essence, an explainable AI system can provide clear reasons or justifications for its outputs, allowing people to comprehend how and why a particular decision or prediction was made.…
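For a simple linear model, one basic form of explanation is attributing the output to each input feature as a weight-times-value contribution. This is a minimal sketch with made-up weights and features; real explainability methods (e.g. SHAP, LIME) handle far more complex models:

```python
def explain_linear(weights, features, bias=0.0):
    """Attribute a linear model's score to each feature.

    Returns (score, contributions), where each contribution is
    weight * value, so the contributions sum to score - bias.
    """
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical loan-scoring model.
weights = {"income": 0.6, "debt": -0.8, "years_employed": 0.2}
features = {"income": 5.0, "debt": 2.0, "years_employed": 3.0}
score, why = explain_linear(weights, features, bias=1.0)
# The signed contributions show which features pushed the score up or down.
```

The resulting breakdown (income +3.0, debt −1.6, years employed +0.6) is exactly the kind of "clear reason for an output" that explainability asks for, stated in terms a person can check.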
-
AI Ethics
AI Ethics refers to the field of study and set of practices concerned with the moral principles and societal implications governing the development and use of artificial intelligence (AI) technologies. In essence, AI ethics seeks to ensure that AI systems are designed and deployed in ways that are beneficial, fair, and accountable, while minimizing harm…
-
AI Alignment
AI Alignment refers to the process of ensuring that artificial intelligence (AI) systems act in accordance with human values, goals, and ethical principles. In essence, an aligned AI is one that reliably does what we intend it to do and behaves in ways that are beneficial (or at least acceptable) to humans, rather than pursuing…
-
AI Bias
AI Bias, also known as algorithmic bias or machine learning bias, refers to the systematic and unfair prejudices or distortions in the outputs of artificial intelligence systems. In essence, it means an AI system is producing results that are skewed or discriminatory against certain individuals or groups. These biased…
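One common way to quantify such skew is to compare outcome rates across groups. The sketch below computes a demographic-parity gap (the spread in positive-prediction rates) on toy data; the group names and outputs are hypothetical, and a real fairness audit would use many more metrics:

```python
def selection_rate(predictions):
    """Fraction of positive (1) outcomes in a list of 0/1 predictions."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_by_group):
    """Max difference in positive-outcome rate between any two groups.

    preds_by_group maps group name -> list of 0/1 model outputs.
    A gap near 0 means the model selects all groups at similar rates.
    """
    rates = [selection_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Toy predictions for two hypothetical demographic groups.
outputs = {
    "group_a": [1, 1, 1, 0],  # 75% approved
    "group_b": [1, 0, 0, 0],  # 25% approved
}
gap = demographic_parity_gap(outputs)
```

A gap of 0.5 here flags that the two groups are being approved at very different rates — a signal of possible bias worth investigating, though on its own it does not establish the cause.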