Mar 14 -- The United States Agency for International Development and the U.S. Department of State, in collaboration with the Department of Energy and the National Science Foundation, seek information to assist in carrying out responsibilities under Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence issued on October 30, 2023. Specifically, the E.O. directs USAID and the State Department to publish a Global AI Research Agenda to guide the objectives and implementation of AI-related research in contexts beyond United States borders. Comments containing information in response to this notice must be received on or before April 10, 2024.

To promote safe, responsible, and rights-affirming development and deployment of AI abroad, the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence directs:

“The Secretary of State and the Administrator of the United States Agency for International Development, in collaboration with the Secretary of Energy and the Director of NSF, shall develop a Global AI Research Agenda to guide the objectives and implementation of AI-related research in contexts beyond United States borders. The Agenda shall: (A) include principles, guidelines, priorities, and best practices aimed at ensuring the safe, responsible, beneficial, and sustainable global development and adoption of AI; and (B) address AI's labor-market implications across international contexts, including by recommending risk mitigations.”

USAID and the State Department are seeking information to assist in carrying out this action.

The rapid development of AI technologies is taking place in a highly connected global context, in which funding, data, talent, and computing resources flow across borders to create globally sourced products with global audiences. Building a safe, secure, and trustworthy global AI ecosystem will require robust international collaboration and a thorough understanding of the global impacts of AI technologies. As a result, the Global AI Research Agenda has three interrelated goals:

-- First, to leverage robust research collaborations to promote the safe, responsible, beneficial, and sustainable development of AI technologies around the world. This will require an understanding of best practices for building international partnerships, and of how to use these partnerships to promote responsible research practices.
-- Second, to outline important areas of inquiry for the study of AI's human impacts in a global context. Given the rapid development of AI technology, we are still at an early stage of understanding how it may reshape our economies, societies, and selves. Because AI's reach is inherently global, this inquiry needs to take a global perspective, understanding how the human impacts of AI are modulated by language, culture, geography, and socioeconomic development.
-- Finally, to address the global labor market implications of AI. While many leading AI companies are based in the United States and other wealthy countries of the Global North, necessary inputs such as data labeling and human-feedback training involve workers in much more diverse settings. Similarly, the availability of commercial APIs and open-source models makes the outputs of AI accessible around the world, potentially leading to unpredictable changes in the quantity, profitability, and nature of work.

The Global AI Research Agenda drafting committee is currently working with the following high-level structure for the Agenda. We welcome public input on this high-level structure, in particular whether other topics need to be emphasized in order to address the three goals above.

-- International Research Principles
-- AI Research Best Practices
-- AI Research Priorities
○ Sociotechnical perspectives on human-AI interactions (i.e., research approaches situating technological systems in their social, cultural, and economic contexts)
○ Advancing fundamental AI through international collaborations and research infrastructure
○ Applications of AI to address global challenges: climate, food security, health, etc.
○ Global perspectives on AI misuse: surveillance, information integrity, gender-based violence
○ Advancing safe, secure, inclusive, and trustworthy AI
-- Labor Market Implications and Risk Mitigation

USAID and the State Department are interested in receiving information pertinent to any or all of the topics described below. . . .

• Research best practices: What sorts of guidelines, practices, or institutional arrangements can help various research stakeholders (universities, corporate R&D centers, conferences, journals, etc.) ensure that AI research is safe, ethical, and sensitive to global contexts? In particular, what criteria and frameworks are currently being used by AI conferences, publications, and funders?

• International engagement: What types of international research partnerships have been most effective in ensuring alignment on safe, secure, and trustworthy AI? What types have been challenging?

• Foundation models: How might research and engagement best practices differ between the developers of foundation models and “downstream” users of these models? What do users want and need from foundation model developers?

• Human impacts: What considerations are most important for safe and ethical research into the human impacts of AI systems (e.g., mental health, labor displacement, bias and discrimination)? How do these considerations vary in different global contexts?

• Enabling infrastructure: What are the best strategies to ensure access to computing resources, data, and other prerequisites for AI research?

• Global equity considerations: How might these best practices or strategies look different for partnerships in developed economies and those involving emerging economies? How might best practices differ for different types of partnerships (academic, private sector, government, public-private, etc.)?

FRN: https://www.federalregister.gov/d/2024-05357
