Back in May, the Biden Administration issued a number of statements concerning the use of AI. Following the meteoric rise of this new technology, AI tools have been used to spread misinformation, leak private data, and amplify biases. Attackers have also adopted the technology, using deepfakes to deceive victims in phishing attempts.
Malware campaigns have started using generative AI as part of their strategy. A report from Meta claims that more than ten malware families have posed as ChatGPT or similar tools to compromise accounts, luring victims into installing browser extensions that promised working ChatGPT features but bundled malware.
On July 21, the White House released a fact sheet listing eight voluntary commitments made by the largest AI companies that will shape the immediate future of AI development. These commitments rest on three fundamental principles:
- Security: Companies must build AI tools with security in mind, safeguarding their models against cyber and insider threats and sharing best practices and standards to reduce misuse and improve national and societal security.
- Safety: Companies have a duty to ensure their products are safe before release. This involves running extensive internal and external tests that push their models' capabilities and safety to the limit, and making the results of these assessments public.
- Trust: Companies must ensure that their technology does not promote bias or racial discrimination, that it is safe to use, and that it protects user privacy and data integrity. Furthermore, users must be able to tell whether online content is AI-generated, through watermarks or other similar systems.
The eight AI safety commitments include:
- Extensive internal or external red-team testing of AI systems and models
- Cooperation between companies and with the government to share standards and best practices, as well as security risks and attempts to circumvent established safeguards
- Investment into security against cyber and insider threats for proprietary or unreleased AI models and tools
- A commitment to discovering and reporting vulnerabilities and issues within AI models
- Development of watermarks or similar systems that enable users to tell whether audio and visual content is AI-generated
- Public reporting of the capabilities and limitations of AI models, including domains of appropriate and inappropriate use
- Prioritized research on limiting harmful AI bias and discrimination, and on protecting user privacy
- Development of frontier AI models that help tackle humanity's greatest crises (climate change, cancer prevention and treatment, cybersecurity)
In addition, companies must support initiatives to educate and train students and workers on the benefits of AI, and help citizens understand the nature, potential, and limitations of AI technology.