
Back in May, the Biden Administration issued a number of statements concerning the use of AI. As a consequence of the meteoric rise of this new technology, AI tools have been used to spread misinformation, leak private data, and amplify biases. Attackers have quickly adopted the technology as well, using deepfakes to deceive victims in phishing attempts.
Malware campaigns have also started using generative AI as part of their strategy. A report from Meta claims that over 10 malware families have posed as ChatGPT or similar tools in order to compromise accounts, as part of elaborate phishing scams involving browser extensions that claimed to provide working ChatGPT features while delivering malware.
On July 21st, the White House released a fact sheet containing a list of eight voluntary commitments made by the biggest AI companies, which will shape the immediate future of AI development. These voluntary commitments follow three fundamental principles of future AI development:
- Security: Companies must build AI tools with security in mind. This means safeguarding their models against cyber and insider threats, and sharing best practices and standards to reduce misuse and improve national and societal security.
- Safety: Companies have a duty to make sure their products are safe before release. This involves running extensive internal and external tests that push their capabilities and safety to the limit, and making the results of these assessments public.
- Trust: Companies must ensure that their technology does not promote bias or racial discrimination; their products must also be safe to use and must protect user privacy and data integrity. Furthermore, users must be informed whether online content is AI-generated, through the implementation of watermarks or other similar systems.
The eight AI safety commitments include:
- Extensive internal and external red-team testing of AI systems and models
- Cooperation with other companies and with the government, including sharing standards, best practices, security risks, and breaches of established safeguards
- Investment in security against cyber and insider threats for proprietary or unreleased AI models and tools
- Support for third-party discovery and reporting of weaknesses and issues within AI models
- Development of watermarks or similar systems that let users recognize AI-generated content, for both audio and visual material
- Public reporting of AI models' capabilities and limitations, including domains of appropriate and inappropriate use
- Prioritization of research on limiting harmful AI bias and discrimination, and on protecting user privacy
- Development of frontier AI models that help tackle humanity's greatest crises (climate change, cancer prevention and treatment, cybersecurity).
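The companies' actual watermarking schemes are not publicly specified, and production systems embed signals in the media itself rather than attaching metadata. Purely as an illustration of the underlying idea of verifiable provenance, here is a minimal Python sketch (all names and the HMAC-tag approach are hypothetical, not any vendor's method) in which a provider attaches a keyed tag declaring content AI-generated, and anyone holding the verification key can detect tampering:

```python
import hashlib
import hmac

# Hypothetical example: the signing key would be held by the AI provider.
SECRET_KEY = b"provider-held-signing-key"

def tag_content(content: bytes, model: str) -> dict:
    """Attach a provenance tag declaring the content AI-generated."""
    mac = hmac.new(SECRET_KEY, content + model.encode(), hashlib.sha256).hexdigest()
    return {"model": model, "ai_generated": True, "mac": mac}

def verify_tag(content: bytes, tag: dict) -> bool:
    """Check that the tag matches the content, i.e. neither was forged or altered."""
    expected = hmac.new(SECRET_KEY, content + tag["model"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag["mac"])

audio = b"...synthesized audio bytes..."
tag = tag_content(audio, "example-model-v1")
print(verify_tag(audio, tag))      # True: content matches its tag
print(verify_tag(b"edited", tag))  # False: tampering is detected
```

A metadata tag like this can simply be stripped from a file, which is why real schemes favor watermarks woven into the content itself; the sketch only shows the verification principle the commitment relies on.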
In addition, companies will support initiatives to educate and train students and workers on the benefits of AI, and to help citizens understand the nature, potential, and limitations of AI technology.