Australian Public Data Collected by Meta Since 2007 for AI Model Training
Meta stated that it does not utilize public photos posted by individuals under the age of 18, but it did not directly address whether photos of children shared through adult accounts were extracted.
An Australian Senate committee has questioned tech behemoth Meta over the ethical implications of using Australian users’ photos to train artificial intelligence (AI) models without their explicit consent.
Labor Senator Tony Sheldon, chairman of the Select Committee on AI Adoption, questioned Meta’s representatives about the company’s use of Australian Instagram and Facebook posts dating back to 2007 to train its AI models, a practice first reported by media outlets in June 2024.
Meta’s Global Privacy Policy Director, Melinda Claybaugh, replied by stating that the company leverages “public data” from its services and products.
She elaborated, “This means that when you make a post on Facebook or Instagram, you decide on the audience for that post, whether it’s text or an image. If you opt to make that post public, you are essentially sharing that information publicly.”
When pressed by Greens Senator David Shoebridge for further clarification, Claybaugh admitted, “Meta has made the decision to scrape all public photos and text from every Australian Instagram or Facebook post since 2007, unless specifically set to private.”
She asserted, “We are using public photos shared by individuals over the age of 18.”
However, Claybaugh evaded providing a direct response to whether Meta had utilized photos of children shared by adults on their accounts.
Meta’s Ethical AI Training Probe
Shoebridge raised concerns about the ethical implications of Meta’s AI training, highlighting that some users did not authorize the use of their images by the company.
“Don’t you see the ethical dilemma here with Meta’s actions?” he questioned.
In her reply, Claybaugh said that Meta has implemented safeguards around the use of users’ personal data.
She explained, “We take various steps during data collection, model training, and apply filters to ensure that personal data cannot be exposed through the generative AI product or associated with any specific individual.”
“Our AI development process involves comprehensive accountability measures, mitigations at every stage of development, and rigorous testing for privacy safety concerns,” she added. “We also regularly engage with external experts and incorporate their concerns into our processes.”
Absence of Opt-Out for Australian Users
Sheldon pointed out the absence of an opt-out option for Australian users within Meta’s platform, contrasting it with the provision for EU users to opt out of AI training data usage. He inquired about the reasons behind this discrepancy.
Responding to this, Claybaugh linked the disparity to privacy laws in the EU.
She stated, “In Europe, there is an ongoing debate related to the interpretation of existing privacy laws concerning AI training. Consequently, we have refrained from launching our AI products there due to the ambiguous legal environment.”
Claybaugh confirmed the opt-out availability for EU users but did not commit to providing the same option for Australian users when pressed by Sheldon.
“Is this currently in effect?” Sheldon queried.
“No, it is not,” Claybaugh replied.
Misuse of Australian Children’s Personal Data for AI Training
Claybaugh’s testimony raises concerns about the growing misuse of children’s personal photos for AI training purposes.
In July 2024, the non-governmental organization Human Rights Watch (HRW) discovered numerous images of Australian children as young as three years old in a data set called LAION-5B, which is used for training popular AI applications.
HRW raised privacy concerns, warning that malicious actors could exploit these images to create inappropriate content involving children.
Hye Jung Han, an advocate and researcher at HRW, called on the Australian government to implement laws safeguarding children’s data against misuse in AI applications.
“Children should not have to live in fear of their images being misused against them,” she emphasized.