Author Topic: Social media platforms and search engines still littered with scam ads  (Read 33 times)

Offline javajolt

Social media platforms and search engines are still littered with scam ads, showing that the tech giants have not got their houses in order and are failing to protect their users from scams ahead of the Online Safety Act coming into force, Which? has found.

The consumer champion looked at scams appearing on online platforms and found blatant fraudulent advertising, from copycats of major retail brands, to investment scams and ‘recovery’ scams, which target previous victims of scams. Scam adverts using the identities of celebrities such as Richard Branson, despite them having nothing to do with the ads, also continue to target consumers.

In November and December 2023, the consumer champion combed the biggest social media sites: Facebook, Instagram, TikTok, X (formerly Twitter) and YouTube. Researchers also looked at the two biggest search engines, Google and Bing.

Which? researchers were able to find a range of obvious scam adverts easily, even though the landmark Online Safety Act had received Royal Assent weeks earlier. The Act will not officially come into force on scam adverts until after Ofcom finalises the codes of practice, which the regulator will use to set the standard platforms must meet.

Which? is concerned the findings suggest online platforms may not be taking scam adverts seriously enough and will continue to inadvertently profit from the misery inflicted by fraudsters until the threat of multi-million pound fines becomes a reality. This is why Ofcom must make sure that its online safety codes of practice prioritise fraud prevention and takedown.

While it is positive the government has passed key legislation such as the Online Safety Act, it is now time to appoint a dedicated fraud minister to make fighting fraud a national priority.

Which? used a variety of methods including setting up fresh social media accounts for the purposes of the investigation. Researchers tailored these accounts to interests frequently targeted by scam advertisers, such as shopping with big-name retailers, competitions and money-saving deals, investments, weight-loss gummies and getting help to recover money after a scam.

Researchers also scoured ad libraries – the searchable databases of adverts that are available for Facebook, Instagram and TikTok – and investigated scams reported by some of the 26,000 members of its Which? Scam Action and Alerts community on Facebook.

Researchers also captured scams they came across during everyday personal browsing and scrolling. In total, they collected more than 90 examples of potentially fraudulent adverts. Whenever they were confident something was a scam and in-site scam reporting tools were available, they reported the adverts. Most platforms did not provide updates on the outcome of these reports.

The exception was Microsoft, the parent company of Bing, which confirmed an advert had violated its standards and said it would act, but did not specify how. Which? found what it considered to be clear examples of scam adverts on Bing, Facebook, Google, Instagram and X.

On Meta’s ad library, Which? found Facebook and Instagram hosting multiple copycat adverts impersonating major retailers around the time of the Black Friday sales, including electricals giant Currys plus clothing brands River Island and Marks & Spencer. Each advert attempted to lure victims to bogus sites in a bid to extract their payment details.

On YouTube and TikTok, Which? found sponsored videos in which individuals without Financial Conduct Authority authorisation gave often highly inappropriate investment advice. While these are not necessarily scam videos and would not come under the remit of the new laws, they are nonetheless extremely concerning and Which? has shared these examples with the platforms.

An advert impersonating Currys, appearing on both Facebook and Instagram, attempted to lure in victims by claiming to offer ‘90% off on a wide range of products’. However, it linked through to a completely unrelated URL and was evidently a scam targeting shoppers.

On X, a dodgy advert led to a fake BBC website featuring an article that falsely used Martin Lewis to endorse a company called Quantum AI, which promotes itself as a crypto get-rich-quick platform. Beneath the advert, the platform displayed context added by other site users, known as Community Notes, warning that: ‘This is yet another crypto scam using celebrities’. Despite the warning, the advert remained live.

Which? found that suspected scam adverts were particularly quick and easy to come by on search engines. For example, when researchers posed as drivers searching on Google for the ‘paybyphone app’ to pay for parking, they were confronted with two adverts for impostor websites – onlytelephone.com and homeautomationinnovators.com – appearing at the top of search results and using PayByPhone’s logo without permission.

Both websites claimed to offer a ‘free download’, but included identical small print at the bottom of their websites revealing a monthly charge of £24.99. Which? reported both adverts and PayByPhone confirmed that the advertisers had nothing to do with the genuine parking app.

Microsoft, owner of Bing, and TikTok were the only platforms to tell Which? they had removed the scam or harmful content reported to them by the consumer champion. Facebook, Google, Instagram and X did not say whether the adverts reported to them had been blocked or removed.

Which? believes a robust set of duties and enforcement of the Online Safety Act cannot come soon enough. While the onus should not fall on consumers to protect themselves, they can get advice on avoiding the latest scams by signing up for Which?’s scam alerts service.

Rocio Concha, Which? Director of Policy and Advocacy said:

Quote
“Most of the major social media platforms and search engines are still failing to protect their users from scam ads, despite forthcoming laws that will force them to tackle the problem.

“Ofcom must put a code of conduct in place that puts robust duties on platforms to detect and take down scams using the Online Safety Act. The government needs to make tackling fraud a national priority and appoint a fraud minister who can ensure there is a coordinated pushback against the epidemic of fraud gripping the UK.

“Although the onus should not fall on consumers, there are steps they can take to make becoming a fraud victim less likely. To hear about the latest scams they can sign up to the Which? scam alert service and people can get advice about how to protect themselves by visiting www.gov.uk/stopthinkfraud.”

NOTE:

Which? offers a free weekly scam alert newsletter which consumers can sign up for at which.co.uk/scamalert.

Rights of reply

Google (also the parent company of YouTube) said:

‘Protecting users is our top priority and we have strict ads policies that govern the types of ads and advertisers we allow on our platforms. We enforce our policies vigorously, and if we find ads that are in violation we remove them. We continue to invest significant resources to stop bad actors and we are constantly evaluating and updating our policies and improving our technology to keep our users safe.’

Google explained that it invests significant resources to stop bad actors. In 2022 it removed more than 5.2 billion ads, restricted more than 4.3 billion ads, and suspended 6.7 million advertiser accounts.

Meta, which owns both Facebook and Instagram, told Which? that scams are an industry-wide issue and increasingly sophisticated. It explained it has systems in place to block scams, and that financial services advertisers are now required to be authorised by the Financial Conduct Authority to target UK users. Fake or fraudulent accounts and content can be reported in a few clicks, with reports reviewed by a trained team with the power to remove the offending content. Meta also said it will work with the police and support their investigations.

TikTok explained that its Community Guidelines prohibit fraud and scams, as well as content that coordinates, facilitates, or shares instructions on how to carry out scams. It added that it had removed all the videos Which? shared with it for violating its Community Guidelines, plus the related accounts.

It told Which? that it employs 40,000 safety professionals dedicated to keeping TikTok safe, using technologies and moderation teams to identify, review and remove content or accounts that violate its Community Guidelines.

According to TikTok, it removed 88.6% of videos that violated its fraud and scams policy before the content was reported, with 83.1% removed within 24 hours. Users who encounter suspicious content are encouraged to report it under ‘Frauds and Scams’.

Microsoft, Bing’s owner, told Which? that its policies prohibit advertising content that is deceptive, fraudulent or can be harmful to users.

Microsoft confirmed that the content Which? reported had been removed and that multiple advertisers were blocked from its networks. It added that it will continue to monitor its ad network for similar accounts and will take action to protect its customers.

X was also contacted for comment.

About Which?

Which? is the UK’s consumer champion, here to make life simpler, fairer, and safer for everyone. Our research gets to the heart of consumer issues, our advice is impartial, and our rigorous product tests lead to expert recommendations. We’re the independent consumer voice that influences politicians and lawmakers, investigates, holds businesses to account, and makes change happen. As an organisation, we’re not for profit and all for making consumers more powerful.

The information in this press release is for editorial use by journalists and media outlets only. Any business seeking to reproduce the information in this release should contact the Which? Endorsement Scheme team at endorsementscheme@which.co.uk.

source