
The US government is seeking advice from leading artificial intelligence companies on how to use AI to protect critical infrastructure, such as airlines and utilities, from AI-powered attacks. The Department of Homeland Security is creating a panel that will include CEOs from major companies across industries, including Google, Microsoft, OpenAI, defense contractors, and airlines. The collaboration between government and the private sector is meant to address the risks and benefits of AI in the absence of a specific national AI law.

The expert panel will provide recommendations to sectors such as telecommunications, pipeline operators, and electric utilities on the responsible use of AI and on preparing for AI-related disruptions. DHS Secretary Alejandro Mayorkas emphasized the technology's transformative potential while acknowledging its risks and the need for best practices and concrete actions to mitigate them. The board's members range from technology providers, chipmakers, and AI model developers to civil rights groups, government officials, and academic experts in AI.

The AI Safety and Security Board was established as a result of a 2023 executive order signed by President Joe Biden, calling for improvements in security, resilience, and incident response related to AI usage in critical infrastructure. This executive order also led to regulations governing how federal agencies can purchase and use AI in their systems. The US government already utilizes AI for various purposes, such as monitoring natural disasters and wildlife. However, the rise of deepfake technology, particularly in the form of fake audio and video content, poses a significant challenge for officials aiming to safeguard elections from misinformation and foreign interference.

Deepfake audio and video, generated with AI, have become a major concern for US officials seeking to protect the integrity of elections from disinformation campaigns, and such content poses unique challenges to identify and combat. Officials are particularly focused on preventing foreign adversaries from exploiting AI to influence elections, and the advisory board aims to address these risks as part of its guidance on the responsible and safe use of AI in critical infrastructure.

The concern over deepfakes extends beyond elections to other sectors where fake audio and video can have significant consequences, and the board recognizes the real threat posed by adversarial nations seeking to manipulate elections through AI. Collaboration between the government and industry leaders is central to meeting these challenges and developing strategies to counter potential attacks on critical infrastructure; by drawing on the expertise of AI practitioners and stakeholders, the US government aims to strengthen security measures and resilience against AI-related threats.
