Developing artificial intelligence (AI) that is safe, secure, and trustworthy is a top priority for researchers and developers. This requires a multidisciplinary approach that incorporates various fields of expertise.
The development of AI systems that can learn from data and improve over time raises concerns about accountability and transparency. AI systems can make decisions based on complex algorithms and vast amounts of data, making it challenging to understand why a particular decision was made.
To address these concerns, researchers are exploring techniques such as explainable AI, which provides insights into how AI systems arrive at their decisions. This can help build trust in AI systems and ensure that they are used in ways that align with human values.
By prioritizing safety, security, and trustworthiness in AI development, we can unlock the full potential of AI while minimizing its risks.
Executive Order on Artificial Intelligence
The Biden Administration's Executive Order 14110 on artificial intelligence, issued in October 2023, is a significant step towards ensuring the safe, secure, and trustworthy development and use of AI. The order has major implications for biotechnology, as it treats biorisk management as a critical aspect of AI development.
The order includes provisions for screening DNA synthesis, which is a key area of concern when it comes to biorisk management. The White House is taking a proactive approach to addressing these risks, recognizing the potential consequences of uncontrolled AI development.
One of the key sections of the order is Section 4, which focuses on ensuring safe and secure AI. This section is crucial in understanding the government's approach to AI development and use.
CSET researchers are closely monitoring the implementation of the order, providing regular updates and analysis. Their 90-day review of the order's implementation is a valuable resource for those interested in tracking the government's progress.
Here are some key provisions of the order, broken down by section:
- Section 4: Ensuring Safe and Secure AI
- Section 4.4: Screening DNA Synthesis and Biorisk
- Section 5.1: Attracting AI Talent to the United States
- Section 10: Implementation via Draft OMB Memo
- Section 11: Implementation via NIST RFI Related to the Executive Order Concerning Artificial Intelligence
These provisions demonstrate the White House's commitment to ensuring the safe, secure, and trustworthy development and use of AI. By monitoring the government's progress and implementation of these provisions, we can better understand the impact of the order on the AI industry.
AI and National Security
The Department of State is actively working with allies and partners to promote international cooperation on AI safety science, as seen in the Seoul Statement of Intent toward International Cooperation on AI Safety Science, an annex to the Seoul Declaration. This effort aims to further scientific and technological capabilities while protecting national and economic security.
The White House has also taken steps to address AI risks to national security with new regulations. In February 2024, the Center for Security and Emerging Technology (CSET) submitted recommendations to the National Institute of Standards and Technology (NIST) related to the Executive Order Concerning Artificial Intelligence.
The Executive Order aims to ensure safe and secure AI development and use. CSET's expert analysis highlights key provisions and milestones, such as the request for information related to the Executive Order.
Biden EO: DNA Synthesis and Biorisk
The Biden Executive Order on AI has significant implications for biotechnology, particularly around biorisk, which the order treats as a major concern requiring proactive mitigation.
The EO specifically addresses DNA synthesis, the process of artificially creating or modifying genetic material, and calls for screening measures to ensure that synthesized genetic sequences cannot easily be misused.
By regulating DNA synthesis alongside other biotechnological risks, the White House demonstrates a commitment to national security and the responsible development of AI.
White House Aims to Cut National Security Risks
The White House has taken steps to address national security risks associated with artificial intelligence (AI). The Executive Order Concerning Artificial Intelligence, issued in October 2023, aims to ensure safe and secure AI development and use.
The Department of State is also focused on AI, recognizing its potential to both benefit and harm national security. They're working with allies and partners to promote responsible AI development and deployment.
CSET's Assessment and CyberAI teams submitted a response to NIST's Request for Information related to the Executive Order on February 2, 2024. Their submission included recommendations for mitigating national security risks.
The Executive Order has major implications for biotechnology, including screening DNA synthesis and biorisk management. This demonstrates the White House's consideration of the potential risks and benefits of AI in various fields.
The Department of State is working to build partnerships that further US capabilities in AI technologies, protect national and economic security, and promote democratic values. They're engaging in bilateral and multilateral discussions to support responsible AI development and governance.
Here are some key aspects of the Executive Order and AI-related initiatives:
- Seoul Statement of Intent toward International Cooperation on AI Safety Science (annex to the Seoul Declaration)
- Request for Information related to the Executive Order Concerning Artificial Intelligence
- Submission of recommendations by CSET's Assessment and CyberAI teams
Government Policy and Guidance
The U.S. government has been actively shaping the future of artificial intelligence through various policy initiatives. The Biden administration's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence was released in October 2023, marking a significant step in AI policy.
This executive order has led to the creation of an EO Provision and Timeline tracker, which lists the specific government deliverables and deadlines for actioning the order's provisions. The tracker is maintained by CSET researchers and is updated periodically.
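A tracker like the one described above is, at its core, a list of provisions with responsible agencies and deadlines. As an illustration only (this is not CSET's actual tool, and the entries below are rough examples, not the official deliverable list), such a structure might be sketched as:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Provision:
    """One EO deliverable: section, summary, responsible agency, deadline.

    All field values used below are illustrative assumptions, not the
    authoritative text of EO 14110.
    """
    section: str
    summary: str
    agency: str
    deadline: date

    def days_remaining(self, today: date) -> int:
        # Negative values mean the deadline has already passed.
        return (self.deadline - today).days

# Hypothetical tracker entries for demonstration purposes.
tracker = [
    Provision("4.1", "AI safety and security guidance", "NIST", date(2024, 7, 26)),
    Provision("4.4", "Nucleic acid synthesis screening framework", "OSTP", date(2024, 4, 27)),
]

def open_provisions(entries: list[Provision], today: date) -> list[Provision]:
    """Return provisions whose deadlines have not yet passed."""
    return [p for p in entries if p.deadline >= today]
```

Keeping the deadlines as `date` objects (rather than strings) makes it trivial to sort, filter, and compute time remaining as implementation milestones approach.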
The Department of State has also developed a Compliance Plan for OMB Memorandum M-24-10, outlining strategies and frameworks for complying with federal requirements for AI governance and responsible AI innovation.
U.S. Government
The U.S. government has been actively involved in shaping policy and guidance for artificial intelligence (AI). The AI.gov website is a resource for information on AI policy, including the U.S. Voluntary Technology Commitments (2023).
Executive Order 14110 on AI (2023) was also issued by the U.S. government, outlining key provisions and deadlines for agencies to follow. CSET researchers have analyzed the EO and created a tracker of key provisions with deadlines.
The Department of State has a Compliance Plan for OMB Memorandum M-24-10, which outlines strategies and frameworks for complying with federal requirements under the memorandum. This plan focuses on strengthening AI governance, advancing responsible AI innovation, and managing AI-related risks.
The Department of State also prioritizes AI in its foreign policy, recognizing the technology's potential to promote democracy and human rights. The department engages in bilateral and multilateral discussions to support responsible AI development, deployment, and governance.
Here are some key U.S. government initiatives related to AI policy:
- AI.gov
- U.S. Voluntary Technology Commitments (2023)
- Executive Order 14110 on AI (2023)
- OMB Memorandum M-24-10
- Department of State Compliance Plan for OMB Memorandum M-24-10
Map: Inform Your Data Management Policy
To inform your data management policy, start from the AI context and the recognized risks of each component of the system. Account for the data used to build your models, the data those models generate, and the risk tolerance, trustworthiness requirements, and loss-prevention needs associated with each.
Data management and governance policies should weigh these factors differently for each AI use case, adjusting safety and security controls to match its particular characteristics.
Different AI use cases may warrant different safety and security controls. For example, some may require more stringent controls to prevent data breaches, while others may need more flexible controls to accommodate changing data needs.
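One lightweight way to operationalize this is a mapping from use-case risk tiers to concrete control settings. The use-case names and control values below are purely illustrative assumptions, not a recommended baseline:

```python
# Hypothetical control profiles keyed by AI use case; every name and
# setting here is an illustrative assumption for the sketch.
RISK_CONTROLS: dict[str, dict] = {
    "public_chatbot":     {"risk_tier": "high",   "encrypt_at_rest": True,  "retention_days": 30},
    "internal_analytics": {"risk_tier": "medium", "encrypt_at_rest": True,  "retention_days": 180},
    "research_sandbox":   {"risk_tier": "low",    "encrypt_at_rest": False, "retention_days": 365},
}

def controls_for(use_case: str) -> dict:
    """Return the control profile for a use case.

    Unknown use cases fall back to the strictest profile, so a newly
    added system is over-protected rather than under-protected by default.
    """
    return RISK_CONTROLS.get(use_case, RISK_CONTROLS["public_chatbot"])
```

Defaulting unknown use cases to the strictest tier reflects the principle above: when you have not yet assessed a use case's risk tolerance, treat it as high-risk until you have.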
Sources
- https://www.ropesgray.com/en/insights/alerts/2023/11/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence
- https://www.state.gov/artificial-intelligence/
- https://cset.georgetown.edu/article/eo-14410-on-safe-secure-and-trustworthy-ai-trackers/
- https://psnet.ahrq.gov/issue/executive-order-safe-secure-and-trustworthy-development-and-use-artificial-intelligence
- https://www.gdit.com/perspectives/latest/safe-and-secure-ai-4-cyber-best-practices/