In a recent incident, Microsoft's AI research division exposed 38TB of private data through a security misconfiguration discovered by cloud security firm Wiz. Wiz researchers identified a shareable link built on an Azure Shared Access Signature (SAS) token on June 22, 2023. The misconfiguration was promptly reported to the Microsoft Security Response Center, which invalidated the SAS token by June 24 and replaced it on the GitHub page by July 7.
The vulnerability originated when an employee unintentionally shared a URL containing a SAS token for an internal storage account in a public GitHub repository. The token allowed Wiz's researchers to access the entire storage account, revealing 38TB of private data, including disk backups of two former employees' workstation profiles, internal Microsoft Teams messages, secrets, private keys, passwords, and open-source AI training data. Notably, SAS tokens, designed for Azure file sharing, can be configured with effectively unlimited lifetimes and are difficult to track or revoke, making them poorly suited to sharing critical data externally, as Microsoft noted in a security blog post on September 7. According to Microsoft, no customer data was compromised, and no other Microsoft services were at risk, given the nature of the exposed AI dataset.
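The risk described above stems from how a SAS URL encodes its own access policy in query parameters, notably `se` (signed expiry) and `sp` (signed permissions). The sketch below, which is illustrative and not part of the incident, shows how one might audit a SAS URL for a far-future expiry or write-level permissions; the URL itself is hypothetical and modeled loosely on the reported misconfiguration.

```python
from datetime import datetime, timezone
from urllib.parse import urlparse, parse_qs

def audit_sas_url(url: str) -> dict:
    """Inspect the SAS query parameters of an Azure Storage URL.

    Summarizes the token's expiry ('se') and permissions ('sp')
    fields, flagging long-lived or overly permissive tokens.
    """
    params = parse_qs(urlparse(url).query)
    expiry_raw = params.get("se", [None])[0]   # signed expiry (ISO 8601)
    permissions = params.get("sp", [""])[0]    # signed permissions, e.g. "r" or "racwdl"
    result = {"expiry": expiry_raw, "permissions": permissions, "warnings": []}
    if expiry_raw is None:
        result["warnings"].append("no expiry field found")
    else:
        expiry = datetime.fromisoformat(expiry_raw.replace("Z", "+00:00"))
        # Flag tokens that remain valid for more than 30 days from now.
        if (expiry - datetime.now(timezone.utc)).days > 30:
            result["warnings"].append("token valid for more than 30 days")
    if any(p in permissions for p in "wdc"):
        result["warnings"].append("token grants write/delete/create access")
    return result

# Hypothetical SAS URL with a far-future expiry and broad permissions.
url = ("https://example.blob.core.windows.net/container/models"
       "?sv=2021-08-06&se=2051-10-06T00:00:00Z&sp=racwdl&sig=REDACTED")
print(audit_sas_url(url)["warnings"])
```

A check like this only reads the token's self-declared policy; it cannot tell whether the underlying account scope is broader than intended, which was the core problem in this incident.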
While this incident is not unique to Microsoft, it underscores the broader challenge of securing very large open-source datasets. In its blog post, Wiz emphasized the inherent security risks of high-scale data sharing in AI research and offered guidance for organizations seeking to avoid similar incidents.
Wiz recommends warning employees against oversharing data and suggests that organizations move public AI datasets to dedicated storage accounts. The incident also highlights the need for vigilance against supply chain attacks, in which attackers inject malicious code into publicly accessible files left open by improper permissions.
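One practical form that vigilance can take is scanning repositories and documentation for storage URLs that embed a SAS signature before they are published. The sketch below is a minimal, assumption-laden example: it looks for Azure Blob Storage URLs carrying a `sig=` parameter in a hypothetical README snippet, and is not a substitute for a dedicated secret scanner.

```python
import re

# Matches Azure Blob Storage URLs that carry a SAS signature ('sig=' parameter).
SAS_URL_PATTERN = re.compile(
    r"https://[a-z0-9]+\.blob\.core\.windows\.net/\S*[?&]\S*sig=\S+",
    re.IGNORECASE,
)

def find_sas_urls(text: str) -> list:
    """Return any SAS-bearing storage URLs found in the given text."""
    return SAS_URL_PATTERN.findall(text)

# Hypothetical README snippet with an embedded SAS URL.
readme = """
Download the training data here:
https://example.blob.core.windows.net/data/set.zip?sv=2021-08-06&se=2051-01-01T00:00:00Z&sp=racwdl&sig=abc123
"""
print(find_sas_urls(readme))
```

Running such a check in a pre-commit hook or CI pipeline gives a cheap early warning before a token ever reaches a public repository.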
As AI adoption accelerates, Wiz stresses the need for heightened awareness of security risks throughout the AI development process, and for close collaboration between security, data science, and research teams to build robust defenses against evolving threats.