AI Uncovers Sensitive Data Through Ads
A groundbreaking study by the ARC Centre of Excellence for Automated Decision-Making and Society has highlighted significant online privacy risks. Researchers from UNSW Sydney and QUT discovered that online ads can reveal sensitive personal information, such as political preferences and employment status, through AI analysis.
The research involved examining over 435,000 Facebook ads viewed by 891 Australians. This data was collected through the Australian Ad Observatory project, a major initiative of the ARC ADM+S. The study demonstrated that large language models (LLMs) could infer personal traits without needing access to browsing history or personal data.
Lead author Baiyu Chen from UNSW said, “The key point is that the ads a person sees are not random. Advertising systems optimise delivery based on inferred profiles and behaviours, so the pattern of ads shown can indicate traits such as gender, age, and political preferences.”
Implications for Privacy
The study revealed that profiles could be constructed quickly and efficiently from short browsing sessions. AI systems were able to match and sometimes exceed human ability in deducing personal characteristics, achieving this at a fraction of the cost and time required for human analysis.
According to Professor Flora Salim and Dr Benjamin Tag, streams of ads act like digital fingerprints, allowing private attributes to be reconstructed with high accuracy. Even when predictions are not exact, they can offer significant insights into an individual’s life stage or socioeconomic status.
The findings, presented at the ACM Web Conference 2026, emphasise that “actionable profiling is feasible even with brief observation periods.” This points to a systemic vulnerability in the ad ecosystem: prolonged tracking is not necessary for effective profiling.
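As an illustration of the kind of inference the researchers describe, a short browsing session's ad stream could be formatted into a single prompt for a language model. The sketch below is hypothetical: the ad texts, trait list, and helper function are invented for illustration and are not the study's actual pipeline or data.

```python
# Hypothetical sketch: turning an observed ad stream into a prompt that asks
# a language model to infer personal traits. The ads and traits here are
# invented examples, not data from the Australian Ad Observatory study.

def build_inference_prompt(ads, traits):
    """Format a list of observed ad texts into one prompt asking a model
    to estimate the listed personal attributes of the viewer."""
    ad_lines = "\n".join(f"{i + 1}. {ad}" for i, ad in enumerate(ads))
    trait_line = ", ".join(traits)
    return (
        "The following ads were shown to one user during a browsing session:\n"
        f"{ad_lines}\n"
        f"Based only on these ads, estimate the user's {trait_line}."
    )

# Example ad stream from a single short session (invented).
ads = [
    "Retirement savings plans for the over-50s",
    "Luxury river cruises through Europe",
    "Hearing aid free trial",
]
prompt = build_inference_prompt(ads, ["age range", "life stage"])
print(prompt)
```

Even this toy example shows why brief observation can suffice: a handful of targeted ads already carries a strong signal about life stage, which is the pattern the researchers found LLMs exploit at scale.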
Researchers warned of potential exploitation by browser extensions that collect ads to build user profiles, bypassing existing platform safeguards. They called for urgent regulatory measures to address these inherent privacy risks in the rapidly evolving ad ecosystem.
Professor Daniel Angus emphasised the need for responsible web AI governance, particularly in the generative AI era. This research underscores a critical blind spot in web privacy: the unintentional leakage of user attributes through algorithmic advertising exposure.
The study showed that AI systems could reconstruct user profiles at over 200 times lower cost and 50 times greater speed than traditional human analysis, underscoring how efficiently AI handles large datasets. That efficiency raises concerns about widespread misuse if it is not properly regulated.

