AI-Generated Content Will Be Watermarked as Part of Eight Voluntary Commitments around the Use and Oversight of AI Tools


Back in May, the Biden Administration issued a number of statements concerning the use of AI. In the wake of the meteoric rise of this new technology, AI tools have been used to spread misinformation, leak private data, and amplify biases. Attackers have also embraced the technology, using deepfakes to deceive victims in phishing attempts.

Malware campaigns have started using generative AI as part of their strategy. A report from Meta claims that over 10 malware families have posed as ChatGPT or similar tools to compromise accounts through elaborate phishing scams, which involved installing browser extensions that claimed to provide working ChatGPT features but bundled malware alongside them.

On July 21st, the White House released a fact sheet containing a list of eight voluntary commitments made by the biggest AI companies that will shape the immediate future of AI development. These voluntary commitments follow three fundamental principles of future AI development:

  • Security: Companies have to build AI tools with security in mind. This means they have to safeguard their models against cyber and insider threats and share best practices and standards to reduce misuse and improve national and societal security.
  • Safety: Companies have a duty to make sure their products are safe before release. This involves running extensive internal and external tests that push their capabilities and safety to the limit, and making the results of these assessments public.
  • Trust: Companies must ensure that their technology does not promote bias or racial discrimination; it must also be safe to use, protect the privacy of users, and preserve data integrity. Furthermore, users must be informed whether online content is AI-generated, through the implementation of watermarks or other similar systems.

The eight AI safety commitments include:

  • Extensive internal or external red-team testing of AI systems and models
  • Companies have to cooperate with each other and with the government, and are required to share standards and best practices, as well as security risks and attempts to circumvent established safeguards
  • Investment into security against cyber and insider threats for proprietary or unreleased AI models and tools
  • A commitment towards discovering and reporting weaknesses and issues within AI Models
  • Development of watermarks or other similar systems that enable users to understand whether or not content is AI-generated, for both audio and visual content
  • Publicly report the limitations and capabilities of AI models, including domains of appropriate and inappropriate use
  • Prioritize research on limiting harmful AI bias and discrimination, and protect the privacy of the user
  • Develop frontier AI models that help tackle humanity's greatest crises (climate change, cancer prevention and curing, cybersecurity).
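To give a sense of how watermark detection can work in practice, the sketch below illustrates one published research approach for text: a "green-list" statistical watermark, where each token's predecessor seeds a hash that splits the vocabulary in half, and a watermarked generator strongly prefers "green" tokens. A detector then simply measures the fraction of green tokens. This is a simplified, hypothetical illustration of the general technique, not any company's actual watermarking system; the hashing rule and 50/50 split are assumptions for the sketch.

```python
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    """Hypothetical rule: hash the (predecessor, token) pair and call
    the token 'green' if the hash lands in the lower half of the range.
    By chance, roughly 50% of tokens are green."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 128

def green_fraction(tokens: list[str]) -> float:
    """Fraction of tokens that are 'green' given their predecessor.
    Ordinary human text should score near 0.5; a generator that was
    biased toward green tokens would score much higher, which is the
    statistical signal a detector looks for."""
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

In a real deployment the green/red partition is derived from a secret key, and detection uses a proper statistical test (e.g. a z-score over the green count) rather than a raw fraction, so that short texts and chance fluctuations do not trigger false positives.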

In addition, companies must support initiatives for educating and training students and workers on the benefits of AI, and help citizens understand the nature, potential, and limitations of AI technology.
