
On the internet, people need to worry about more than opening suspicious email attachments or entering sensitive information into harmful websites: they also need to worry about their Google searches. That's because last year, as revealed in our 2024 ThreatDown State of Malware report, cybercriminals flocked to a malware delivery method that doesn't require knowing a victim's email address, login credentials, or personal information; in fact, it requires knowing nothing about the victim at all. All cybercriminals need to do is fool someone into clicking on a search result that looks remarkably legitimate.

This is the work of "malicious advertising," or "malvertising" for short. Malvertising is not malware itself. Instead, it's a sneaky process for placing malware, viruses, or other cyber infections on a person's computer, tablet, or smartphone. The malware that eventually slips onto a person's device comes in many varieties, but cybercriminals tend to favor malware that can steal login credentials and personal information. With that stolen information, cybercriminals can then pry into sensitive online accounts belonging to the victim.

But before any of that digital theft can occur, cybercriminals must first ensnare a victim, and they do this by abusing the digital ad infrastructure underpinning Google search results. Think about searching Google for "running shoes": you'll likely see ads for Nike and Adidas. A Google search for "best carry-on luggage" will invariably produce ads for the consumer brands Monos and Away. And a Google search for a brand like Amazon will show, as expected, ads for Amazon. Cybercriminals know this, and in response they've created ads that look legitimate but instead direct victims to malicious websites that carry malware. The websites themselves bear a striking resemblance to whatever product or brand they're imitating, to maintain the charade of legitimacy. From these websites, users download what they think is a valid piece of software, instead receiving malware that leaves them open to further attacks. Malvertising is often understood as a risk to businesses, but the copycat websites created by cybercriminals can and often do impersonate popular brands for everyday users, too.

If Google ads have been around for over a decade, why are cybercriminals only abusing them now? The truth is that malvertising has been around for years, but a particular resurgence was recorded more recently. In 2022, cybercriminals lost access to one of their favorite methods of delivering malware. That summer, Microsoft announced that it would finally block "macros" embedded in files downloaded from the internet. Macros are essentially instructions that users can program so that multiple tasks can be bundled together. The danger, though, was that cybercriminals would pre-program malicious macros within Microsoft Word, Excel, or PowerPoint files and then send those files as email attachments. Once a user downloaded and opened such an attachment, the embedded macros would trigger a set of instructions directing the computer to install malware from a dangerous website. Macros were a scourge for cybersecurity for years, as they were effective and easy to deliver. But when Microsoft restricted macro capabilities in 2022, cybercriminals needed another malware delivery channel. They focused on malvertising.
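Microsoft's 2022 change hinges on how Windows knows a file came from the internet: downloads are flagged with the "Mark of the Web," a small NTFS alternate data stream named Zone.Identifier, and Office refuses to run macros in files that carry it. As a minimal sketch of the idea (not Microsoft's actual implementation; the file path is a placeholder, and this only works on Windows/NTFS), here's how you could read that marker from Python:

```python
# Sketch: check whether a downloaded file carries the "Mark of the Web"
# (Windows/NTFS only). Downloads get an alternate data stream named
# Zone.Identifier; Office uses this marker to decide whether to block
# any embedded macros.

def has_mark_of_the_web(path: str) -> bool:
    try:
        # Alternate data streams are addressed as "<file>:<stream name>"
        with open(path + ":Zone.Identifier", encoding="utf-8") as stream:
            contents = stream.read()
    except OSError:
        return False  # no stream: the file wasn't marked as downloaded
    # ZoneId=3 is the "Internet" zone; ZoneId=4 is "Restricted sites"
    return "ZoneId=3" in contents or "ZoneId=4" in contents

if __name__ == "__main__":
    # Placeholder path for illustration only
    print(has_mark_of_the_web(r"C:\Users\me\Downloads\invoice.docm"))
```

Notably, it is the presence of this stream, not the file's true origin, that Office checks: removing the stream (for example, by clicking "Unblock" in the file's properties) lifts the restriction.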
Today's malvertising is increasingly sophisticated: cybercriminals can create and purchase online ads that target specific types of users by location and demographics. More concerning still, modern malvertising can evade basic fraud detection, because cybercriminals build websites that first determine whether a visitor is a real person or merely a bot trawling the web to find and flag malicious activity, a technique known as cloaking (a rough sketch of how a defender might check for it follows below). Learn more by visiting OUR FORUM.
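Because simple cloaking often keys on signals like the browser's User-Agent string, one crude way to probe a suspicious URL is to fetch it twice, once posing as a normal browser and once as a known crawler, and compare the responses. The sketch below does that with Python's requests library; the URL and the size threshold are placeholders, and real cloaking checks far more signals (IP reputation, JavaScript fingerprints), so treat this only as an illustration of the concept:

```python
# Sketch: spot naive User-Agent-based cloaking by fetching a URL with
# two different identities and comparing what comes back. Illustrative
# only; real malvertising sites check many more signals.
import requests

BROWSER_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36")
CRAWLER_UA = "Googlebot/2.1 (+http://www.google.com/bot.html)"

def looks_cloaked(url: str) -> bool:
    as_browser = requests.get(url, headers={"User-Agent": BROWSER_UA}, timeout=10)
    as_crawler = requests.get(url, headers={"User-Agent": CRAWLER_UA}, timeout=10)
    # A different final URL after redirects, or a large gap in response
    # size, suggests visitors and scanners are being shown different pages.
    size_gap = abs(len(as_browser.content) - len(as_crawler.content))
    return as_browser.url != as_crawler.url or size_gap > 5000  # threshold is arbitrary

if __name__ == "__main__":
    print(looks_cloaked("https://example.com/suspicious-landing-page"))  # placeholder URL
```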

The consumer champion looked at scams appearing on online platforms and found blatantly fraudulent advertising, from copycats of major retail brands to investment scams and 'recovery' scams, which target previous scam victims. Scam adverts using the identities of celebrities such as Richard Branson, despite them having nothing to do with the ads, also continue to target consumers.

In November and December 2023, the consumer champion combed the biggest social media sites: Facebook, Instagram, TikTok, X (formerly Twitter), and YouTube. Researchers also looked at the two biggest search engines, Google and Bing. Which? researchers could easily find a range of obvious scam adverts, even though the landmark Online Safety Act had received Royal Assent weeks earlier. The Act will not officially come into force on scam adverts until after Ofcom finalizes the codes of practice that the regulator will use to set the standard platforms must meet. Which? is concerned the findings suggest online platforms may not be taking scam adverts seriously enough, and will continue to inadvertently profit from the misery inflicted by fraudsters until the threat of multi-million-pound fines becomes a reality. This is why Ofcom must make sure that its online safety codes of practice prioritize fraud prevention and takedown. While it is positive that the government has passed key legislation such as the Online Safety Act, it is now time to appoint a dedicated fraud minister to make fighting fraud a national priority.

Which? used a variety of methods, including setting up fresh social media accounts for the investigation. Researchers tailored these accounts to interests frequently targeted by scam advertisers, such as shopping with big-name retailers, competitions and money-saving deals, investments, weight-loss gummies, and getting help to recover money after a scam. Researchers also scoured ad libraries (the searchable databases of adverts available for Facebook, Instagram, and TikTok), investigated scams reported by some of the 26,000 members of the Which? Scam Action and Alerts community on Facebook, and captured scams they came across in the course of everyday browsing and scrolling for personal use.

Researchers collected more than 90 examples of potentially fraudulent adverts. Whenever they were confident something was a scam and in-site scam reporting tools were available, they reported the adverts. Most platforms did not follow up on the outcome of these reports. The exception was Microsoft, the parent company of Bing, which confirmed an advert had violated its standards and said it would act, but did not specify how.

Which? found what it considered to be clear examples of scam adverts on Bing, Facebook, Google, Instagram, and X. On Meta's ad library, Which? found Facebook and Instagram hosting multiple copycat adverts impersonating major retailers around the time of the Black Friday sales, including electricals giant Currys plus clothing brands River Island and Marks & Spencer. Each advert attempted to lure victims to bogus sites in a bid to extract their payment details. On YouTube and TikTok, Which? found sponsored videos in which individuals without Financial Conduct Authority authorization gave often highly inappropriate investment advice. While these are not necessarily scam videos and would not come under the remit of the new laws, they are nonetheless extremely concerning. Which? has shared these examples with the platforms.
An advert impersonating Currys, appearing on both Facebook and Instagram, attempted to lure in victims by claiming to offer '90% off on a wide range of products'. However, it led to a completely different URL and was a scam designed to snare shoppers. On X, a dodgy advert led to a fake BBC website featuring an article that falsely used Martin Lewis to endorse a dubious company called Quantum AI, which promotes itself as a crypto get-rich-quick platform. Beneath the advert, the platform had appended a note with context added by other site users (known as readers' notes), warning: 'This is yet another crypto scam using celebrities'. Despite the warning, the advert remained live. For more visit OUR FORUM.

Whether you believe AI will be the salvation of humankind or the death of it, whether you think it's little more than a plaything to while away your time or the surest way to get onto the fast track at work, you're going to use it someday. Maybe today. Maybe tomorrow. Maybe next week or next month. But one day, you'll turn to it. And you'll most likely be surprised at how helpful it can be, even in its earliest days.

For many business users, that means using Copilot, Microsoft's umbrella name for a variety of AI products. There are already highly targeted Copilots for various Microsoft products, notably Copilot for Microsoft 365, which integrates with Microsoft Office apps like Word, Outlook, and OneNote. That Copilot is only available to business customers willing to pay a hefty $30 per user per month, essentially doubling the price of the Microsoft 365 E3 plan, for instance. There's also a $20-per-month Copilot Pro subscription for individuals that offers integration with Office apps and priority access during peak times. In this article, though, we're going to give you tips on how to get the most out of the everyday, free version of Copilot, available directly inside Windows, inside the Edge browser, inside Microsoft's Bing search engine, and on the web for anyone using Windows or macOS, as long as they have a Microsoft account.

Before you start using Copilot, you need to understand exactly what it is, and what it isn't. It's what's called generative AI, or genAI for short, so named because it can create, or generate, different kinds of content: notably text, images, and videos. In this article, we'll primarily cover text-based content. For text generation, Copilot uses a large language model (LLM) to do its work. It's based on the GPT-4 model, developed by OpenAI, a company in which Microsoft is the largest investor. It's trained on massive amounts of articles, books, web pages, and other publicly available text. Based on that training, it can respond to questions, summarize articles and documents, write documents from scratch, and much more.

Like its more famous cousin, OpenAI's ChatGPT, Copilot works as a chatbot. You ask it a question or feed it a prompt, and it generates a response. You can ask a series of follow-up queries in an ongoing conversation, or start over with a new query. Using Copilot can initially be somewhat eerie because its responses are often human-like. But don't be fooled: it has no human intelligence. So when asking it for information, give it precise, detailed instructions about what you want, and be as concise as possible. Microsoft also recommends that you "avoid using relative terms, like yesterday or tomorrow, and pronouns, like it and they. Instead, use specifics, such as an exact date or a person's name." (A short sketch at the end of this article illustrates the difference that kind of specificity makes.)

If you use Windows 11, Copilot for Windows (still in preview) is always just a click away: its icon sits just to the right of the search box. (If you don't see the icon, try updating to the latest version of Windows 11. If you're using Windows in a business or educational setting, your organization may not yet have enabled Copilot.) As I write this, the Copilot preview in Windows 10 is available only to Windows Insiders who have opted in to get the newest preview updates. Eventually, though, it will make its way to all Windows 10 users. Click the Copilot icon and Copilot appears in a right-hand pane.
The pane stays open no matter what you're doing in Windows: running an app, switching between windows, or just looking at the desktop. It always stays the same size and takes up the entire right side of your screen, so keeping it open this way is your best bet if you use Copilot regularly throughout the day. Learn more by visiting OUR FORUM.
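As promised above, here is a short sketch of why Microsoft's "use specifics" advice matters. The free Copilot is a chat UI rather than an API, so this illustration instead calls GPT-4 (the model the article says Copilot is built on) through OpenAI's Python client; the model name, prompts, and API-key setup are assumptions for the example, not part of Copilot itself:

```python
# Sketch: vague vs. specific prompts sent to GPT-4 via OpenAI's Python
# client (the free Copilot has no comparable API; this just shows the
# same underlying model). Assumes `pip install openai` and an
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Vague: relative terms ("tomorrow") and pronouns ("it") force the
# model to guess what you mean.
vague = "Summarize it and remind me what's happening tomorrow."

# Specific: exact names and dates, per Microsoft's own guidance.
specific = ("Summarize the main arguments for and against remote work "
            "in three bullet points, then draft a one-line reminder for "
            "the budget-review meeting on March 15, 2024.")

for prompt in (vague, specific):
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name for illustration
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```

Run both prompts and the difference is stark: the vague prompt typically produces a request for clarification or a guess, while the specific one gets a usable answer on the first try. The same habit carries over directly to the Copilot chat box.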