Facebook confirms ‘massive ranking failure’
Less than 24 hours after we learned about Meta’s smear campaign against rival social network TikTok, it has been confirmed that Facebook was showing harmful content to users over a period of six months.
The failure was detailed in an internal document obtained by The Verge, which described a “massive ranking failure” in which Facebook’s systems failed to suppress posts containing nudity, violence, and propaganda from Russian state media.
Facebook wants to create a brand-safe environment, but it’s failing. When ads appear alongside the types of content Facebook failed to downrank here, that’s deeply troubling for brands and publishers. Facebook has a history of self-inflicted wounds, scandals, and a lack of accountability when issues like these have been exposed and made headlines. To date, none of it has irreparably hurt the company. The big question is how long brands will keep investing money in a platform that has shown great interest in taking that money but little interest in protecting them from being associated with such harmful content.
Source: Facebook confirms ‘massive ranking failure’
How paid search marketers can address brand measurement challenges and grow in their careers
Climbing the corporate ladder isn’t always the most straightforward process, especially for paid search marketers. Professionals seeking to grow their careers need to map out a path, accounting for their unique roles within their organizations.
“The most important thing is to look at your current position,” said Pascal Skropke, CMO of Design-Bestseller, at SMX Next. “Not everybody has the opportunity to work at a fast-growing e-commerce company or startup.”
He added, “Look at where you are and find out if it’s possible to take steps within your company — and understand what your company needs to succeed.”
Skropke says marketers seeking to climb this ladder should establish an “anchor point” within their company — the intersection of the company’s needs and their own career goals. One such point is campaign advertising measurement: many brands in the digital retail space lack direction, accurate data models, and resources in this area.
Source: How paid search marketers can address brand measurement challenges and grow in their careers
Google’s Privacy Sandbox ad technology testing begins
Google has announced that the first round of testing has begun for its crucial Privacy Sandbox initiative. In this round, developers will gain access to Privacy Sandbox’s newest ad targeting and measurement proposals: Topics, FLEDGE, and Attribution Reporting. These ad technologies succeed the beleaguered FLoC initiative, which was killed off back in January.
What are these technologies again?
- FLEDGE, or First Locally Executed Decision over Groups Experiment, runs ad auction logic in the browser itself rather than on a server, increasing privacy by limiting the flow of user data.
- Topics is a technology that helps identify interests for advertising while retaining greater user privacy.
- Attribution Reporting measures conversions that follow ad clicks or views.
Together, these technologies aim to limit the use of personal data while preserving accuracy in reporting.
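To make the browser-side idea concrete, the TypeScript sketch below mocks a Topics-style call: the browser (not a server) supplies a few coarse interest categories, and the caller uses those instead of a cross-site identifier. The method name, return shape, and field names here are assumptions for illustration based on the public proposal, not a confirmed API, and the browser call is mocked so the sketch is self-contained.

```typescript
// Sketch of a Topics-style interface. All names and shapes here are
// assumptions for illustration only, not Chrome's actual API surface.
interface BrowsingTopic {
  topic: number;           // ID into a public taxonomy of interest categories
  taxonomyVersion: string; // which version of the taxonomy the ID refers to
}

// Mock of the browser-side call: in the proposal, the browser picks a few
// coarse topics locally from recent browsing history. We return fixed samples.
async function browsingTopicsMock(): Promise<BrowsingTopic[]> {
  return [
    { topic: 186, taxonomyVersion: "v1" },
    { topic: 265, taxonomyVersion: "v1" },
  ];
}

// An ad-tech caller would select ads from the coarse topic alone,
// never seeing an individual cross-site profile.
async function pickAdCategory(): Promise<number> {
  const topics = await browsingTopicsMock();
  // Fall back to a generic category (0 here) when no topics are available.
  return topics.length > 0 ? topics[0].topic : 0;
}

pickAdCategory().then((category) => console.log(category)); // prints 186
```

The point of the design is visible even in the mock: the caller only ever receives a small number of coarse category IDs, so the granular browsing history that produced them never leaves the browser.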
Origin trials. Announced back in 2020, origin trials let developers test experimental features for a limited time before they are released to the public. These trials generally run on a first-party basis only, on a single “origin.” As of today, developers can see and test the code for Topics, FLEDGE, and Attribution Reporting in the Canary version of Chrome.
After this iteration is rolled out, testing will expand to a limited Chrome beta and then to the stable version of Chrome. The origin trials for the above-mentioned Privacy Sandbox technologies are worldwide.
If you are interested in API access, Google has published a developer guidance page to help.
Source: Google’s Privacy Sandbox ad technology testing begins
Facebook ads ‘Interest Categories’ may be up to 33% inaccurate
Interest Category targeting on Facebook may be up to 33% inaccurate, according to a new study from North Carolina State University. The NC State researchers conducted two separate experiments: the first to determine which activities led Facebook to assign an “interest,” and the second to analyze the accuracy of inferred interests for participants around the world. The results were far from comforting.
Experiment 1. The first experiment involved setting up 14 new accounts — a small sample size — that merely viewed or scrolled through a page, to see whether topics from the content consumed would be pulled into the accounts’ “Interest Categories.” The goal was to see which interests would be associated with each newly formed account and to qualitatively assess how accurate the newly assigned interests were.
The findings show that 33.22% of the inferred interests were either inaccurate or irrelevant. “The key finding here is that Facebook takes an aggressive approach to interest inference,” Aafaq Sabir, lead author of a paper on the work and a Ph.D. student at NC State, said.
Experiment 2. To go deeper, the NC State team wanted to see if the findings would hold for a more diverse group of users. The 146 study participants were recruited through Amazon Mechanical Turk from “different parts of the world”. A browser extension then extracted the interest data from the participants’ Facebook accounts and asked each participant about its validity.
This study found that 29.3% of the interests that Facebook had listed for participants were not actually of interest. “That’s comparable to what we saw in our controlled experiments,” said Anupam Das, co-author of the paper and an assistant professor of computer science at NC State.
What’s unclear. With sample sizes this small, we should take this study with a grain of salt. While the data is unflattering to Facebook’s targeting, the study is unclear about the parameters used to determine what is and is not “of interest.” Additionally, the second experiment relies on user feedback that does not appear to have had quantitative reporting parameters in place, which may muddy the results.
Source: Facebook ads ‘Interest Categories’ may be up to 33% inaccurate