Multilateral Organizations
A practical guide for organizations in the ASEAN region that wish to design, develop, and deploy traditional AI technologies in commercial and non-military or dual-use applications. The guide focuses on encouraging alignment within ASEAN and fostering the interoperability of AI frameworks across jurisdictions. It also includes recommendations on national-level and regional-level initiatives that governments in the region can consider implementing to promote the responsible design, development, and deployment of AI systems.
A legally binding treaty that sets out a comprehensive legal framework to ensure that AI systems respect human rights, democracy, and the rule of law. The convention establishes transparency and oversight requirements tailored to specific contexts and risks, including the identification of content generated by AI systems. The convention was opened for signature by Council of Europe (CoE) Member States on September 5, 2024.
The Guidelines outline the responsibilities of AI technology providers in the news media sector.
Sets forth 11 guiding principles for organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems. The Code of Conduct, directed at academia, civil society, and the public and private sectors, is intended to promote safe, secure, and trustworthy AI worldwide by providing voluntary guidance for actions by such organizations. Organizations are encouraged to apply these actions to all stages of the lifecycle, covering, when and as applicable, the design, development, deployment, and use of advanced AI systems.
This recommended practice specifies governance criteria, such as safety, transparency, accountability, responsibility, and the minimization of bias, as well as process steps for effective implementation, performance auditing, training, and compliance in the development or use of artificial intelligence within organizations.
An international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. It is designed for entities providing or utilizing AI-based products or services and aims to ensure the responsible development and use of AI systems.
Guidance on how organizations that develop, produce, deploy, or use products, systems, and services that utilize AI can manage risks specifically related to AI. The guidance also aims to assist organizations in integrating risk management into their AI-related activities and functions, and it describes processes for the effective implementation and integration of AI risk management.
The Recommendation contains five high-level values-based principles and five recommendations for national policies and international co-operation. It also proposes a common understanding of key terms, such as “AI system” and “AI actors”.
This OECD paper offers an overview of the AI language model and NLP landscape with current and emerging policy responses from around the world. It explores the basic building blocks of language models from a technical perspective using the OECD Framework for the Classification of AI Systems. The paper also presents policy considerations through the lens of the OECD AI Principles.
This OECD tool is designed to help policy makers, regulators, legislators, and others characterize AI systems deployed in specific contexts. It can be applied to the widest range of AI systems across the following dimensions: People & Planet; Economic Context; Data & Input; AI Model; and Task & Output.
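As a rough illustration (not part of the OECD tool itself), a characterization along the five dimensions can be modeled as a simple record, as in the Python sketch below; all field names and example values are assumptions chosen for readability, not an official encoding of the framework.

    from dataclasses import dataclass

    # Illustrative sketch only: field names and example values are assumptions,
    # not an official encoding of the OECD Framework for the Classification of
    # AI Systems.
    @dataclass
    class AISystemProfile:
        """Characterization of an AI system along the five OECD dimensions."""
        people_and_planet: list[str]   # e.g., affected users, rights and safety impact
        economic_context: list[str]    # e.g., sector, business function, criticality
        data_and_input: list[str]      # e.g., data provenance, collection method, scale
        ai_model: list[str]            # e.g., model type, how it was built and trained
        task_and_output: list[str]     # e.g., task performed, degree of autonomy

    # Hypothetical example: a medical triage chatbot characterized for policy review.
    triage_bot = AISystemProfile(
        people_and_planet=["patients", "clinicians", "potential safety impact"],
        economic_context=["healthcare sector", "high criticality"],
        data_and_input=["patient-reported symptoms", "clinical guidelines corpus"],
        ai_model=["large language model", "fine-tuned on domain data"],
        task_and_output=["triage recommendation", "human-in-the-loop review"],
    )

    for dimension, values in vars(triage_bot).items():
        print(f"{dimension}: {', '.join(values)}")

Recording each dimension as a list of short descriptors mirrors how the tool is typically used: the same system can be profiled differently depending on the specific context in which it is deployed.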
The objectives of the Framework are to:
Ten guiding principles jointly issued by the U.S. Food and Drug Administration (FDA), Health Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA) to help promote safe, effective, and high-quality medical devices that use artificial intelligence and machine learning (AI/ML).
These Guidelines, published by the UK National Cyber Security Centre (NCSC), the US Cybersecurity and Infrastructure Security Agency (CISA), and other international government agencies, are aimed primarily at providers of AI systems, whether they use models hosted by their own organization or external application programming interfaces (APIs). The Guidelines suggest considerations and mitigations to help reduce the overall risk to an organizational AI system development process in four key areas: secure design, secure development, secure deployment, and secure operation and maintenance. They follow a 'secure by default' approach and align closely with practices defined in the UK NCSC's Secure development and deployment guidance, NIST's Secure Software Development Framework, and the 'secure by design' principles published by CISA, the NCSC, and international cyber agencies.
This report expands upon the 'secure deployment' and 'secure operation and maintenance' sections of the Guidelines for secure AI system development and incorporates mitigation considerations from Engaging with Artificial Intelligence (AI). It is intended for organizations that deploy and operate AI systems designed and developed by another entity.