Google revealed yesterday that two zero-day vulnerabilities — one in Chrome and one in Windows — let hackers send malicious code to users. The vulnerabilities were discovered on February 27th, and Google has since patched Chrome, but Windows is still vulnerable. The unpatched Windows vulnerability lets hackers escalate local privileges to execute malicious code. Google wrote in its blog post that it has only seen this vulnerability on 32-bit Windows 7 systems, though versions older than Windows 7 may also be at risk. Microsoft has told Google it is aware of the issue and is working on a fix, but it is already 10 days late in addressing the problem.
The end of 2018 saw a rash of cyberattacks mounted by way of networked printers. In one notable incident, as reported by Kaspersky, a hacker targeted 50,000 printers and caused them to print a message supporting a YouTuber named PewDiePie. According to Kaspersky, the hacker had used Shodan, a search engine for internet-connected devices, and found no fewer than 800,000 vulnerable printers from which to choose. Printers are easy to treat as an afterthought, and even the IT department may fall into this complacency, focusing instead on more obvious security work like patching employee PCs or monitoring BYOD smartphones. Most office printers are networked: you send documents to print over Ethernet or Wi-Fi. This arrangement is common because people may want to look up and print web pages directly from the printer, scan and email documents from the printer, or use the device to send or receive faxes.
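As a rough illustration of the kind of device search described above, the sketch below uses the public shodan Python library to count internet-exposed devices listening on TCP port 9100, the common raw-printing port. The API key placeholder and the query are assumptions for illustration; they are not the attacker's actual search.

```python
# Hypothetical illustration: counting internet-exposed printers with Shodan.
# The API key and the "port:9100" query are assumptions for this sketch.
import shodan

SHODAN_API_KEY = "YOUR_API_KEY"  # placeholder, not a real key

api = shodan.Shodan(SHODAN_API_KEY)

try:
    # Port 9100 is the usual raw/JetDirect printing port.
    result = api.count("port:9100")
    print(f"Internet-exposed devices listening on port 9100: {result['total']}")
except shodan.APIError as err:
    print(f"Shodan query failed: {err}")
```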
Nearly five years ago, Google formed its Project Zero research group to reduce the impact of zero-day attacks on users, and since then it has reported numerous bugs to companies such as Apple — notably chastising its rival last October for taking too long to fix bugs, and sneaking details of fixes into already published security advisories. Today, the Project Zero team revealed (via NeoWin) another "high severity" macOS kernel bug that can allow an attacker to take control of a Mac, which it says Apple has left unfixed for 90 days. If this sounds similar to the last Google-Apple bug situation, it is and it isn't: once again, the latest bug could impact millions of Mac users, but this isn't a case of complete neglect. This bug enables an attacker to quietly modify a mounted disk image, then get the Mac to run the modified code by exploiting macOS's memory management system. The reason it's so severe is that users mount disk images all the time, yet macOS doesn't re-check the images when it automatically purges and reloads content in the course of managing its limited memory. Because of that, the Mac will have no idea that it's copying modified and potentially malicious code to be executed.
Google today offered an update on its Application Security Improvement Program. First launched five years ago, the program has now helped more than 300,000 developers fix more than 1 million apps on Google Play. In 2018 alone, it resulted in over 30,000 developers fixing over 75,000 apps. Google originally created the Application Security Improvement Program to harden Android apps. The goal was simple: help Android developers build apps without known vulnerabilities, thus improving the overall ecosystem. When a submitted app contains a known vulnerability, Google lets the developer know and helps them fix it.
It may sound scary, but while you're making yourself a cup of coffee, a hacker just may be brewing up an attack. According to security firm McAfee, an internet-connected coffee maker produced by Mr. Coffee and Wemo suffers from a security vulnerability that could let a malicious actor intercept traffic from the device and even schedule the machine to make coffee without the owner's permission. The affected device is the Mr. Coffee Coffee Maker with Wemo, first introduced back in 2014. According to McAfee, Wemo devices communicate with a connected Wemo smartphone app and can transfer data in two ways: remotely via the internet, or locally, by sending the information directly to the Wemo application. McAfee researchers discovered it is possible to intercept transmissions made between the Mr. Coffee Coffee Maker with Wemo and the connected Wemo app. This can occur because the data is transferred in plaintext, with no additional encryption or protection to prevent the information from being viewed by a malicious third party.
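To see why unencrypted local traffic is a problem, here is a minimal sniffing sketch using the scapy library. It simply prints any readable TCP payloads it captures; the port range is an assumption (Wemo devices commonly expose HTTP endpoints on high local ports), and this is not McAfee's actual interception setup.

```python
# Minimal sketch: observing unencrypted local traffic with scapy.
# The port range below is an assumption for illustration; it is not
# necessarily where the Wemo app and the coffee maker actually talk.
from scapy.all import sniff, Raw

def show_plaintext(pkt):
    # Any human-readable payload demonstrates the lack of transport encryption.
    if pkt.haslayer(Raw):
        text = pkt[Raw].load.decode("utf-8", errors="ignore").strip()
        if text:
            print(text[:200])

# Requires root privileges and only sees traffic visible to this host.
sniff(filter="tcp portrange 49152-49155", prn=show_plaintext, store=False)
```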
Many modern laptops and an increasing number of desktop computers are much more vulnerable to hacking through common plug-in devices than previously thought, according to new research. The research, to be presented today (26 February) at the Network and Distributed Systems Security Symposium in San Diego, shows that attackers can compromise an unattended machine in a matter of seconds through devices such as chargers and docking stations. The researchers, from the University of Cambridge and Rice University, exposed the vulnerabilities through Thunderclap, an open-source platform they have created to study the security of computer peripherals and their interactions with operating systems. It can be plugged into computers using a USB-C port that supports the Thunderbolt interface and allows the researchers to investigate techniques available to attackers. The researchers, led by Dr Theodore Markettos from Cambridge's Department of Computer Science and Technology, say that in addition to plug-in devices like network and graphics cards, attacks can also be carried out by seemingly innocuous peripherals like chargers and projectors that correctly charge or project video but simultaneously compromise the host machine.
A bunch of apps from some major players — including Expedia, Hollister, Air Canada, Abercrombie & Fitch, Hotels.com and Singapore Airlines — recently came to grief because of a security/privacy hole in a third-party analytics app they all used, according to a report from TechCrunch. That sort of thing shouldn't be happening — and yet everyone seems focused on the wrong lesson. The analytics app, called Glassbox, captures all information from a user's interaction with the app, including keystrokes entered and spots on the touchscreen the user touched or clicked. It may also include some screen captures. Apple, in an email to one of the affected developers, said: "We have notified the developers that are in violation of these strict privacy terms and guidelines, and will take immediate action if necessary." Apple gave the developer less than a day to remove the code and resubmit the app, and if it didn't meet that deadline, the app would be removed from the App Store, the email said, according to the TechCrunch story.
Casinos, the FBI, security researchers… and how not to handle vuln disclosure
Like many white hat hackers, Dylan Wheeler admits that as a teenager he got his hands a little dirty and his hat a little black – in his case eventually fleeing Australia from local authorities and the FBI after being accused of stealing more than $100 million worth of intellectual property, including specifications for the Xbox One games console and software used to train US soldiers to fly Apache helicopters. Slipping out of the country via the Czech Republic and now based in the UK, he turned to responsible vulnerability disclosure (helping identify security issues through "ethical hacking" and, if asked, helping to fix them) and does contractual security auditing work via his company "Day After Exploit Ltd". The teenage shenanigans are behind him, he told Computer Business Review. The security researcher claims that he was assaulted on Tuesday by Jessie Gill, an executive from Atrient, a vendor which makes digital loyalty reward kiosks for casinos, after trying to make a vulnerability disclosure. The researchers had a bot trawling a search engine for internet-connected devices, looking for an identifier for Jenkins servers, and found Atrient kiosks – connected to internal casino networks – communicating "home" via unencrypted plain text, with a connected API server also extremely vulnerable to injection of malicious code.
Prof asks: What good comes from letting everyone know a vulnerability exists?
Professor Gus Uht, engineering professor-in-residence at the University of Rhode Island, USA, argues that everyone would be safer if those who discover serious vulnerabilities refrained from revealing the details to the public, allowing the flaws to be secretly fixed by vendors and developers, and updates pushed out before anyone crafts suitable exploits to hack victims. The discovered security blunders would thus be privately reported, and kept under wraps until someone actually exploits them in the wild, at which point people can be alerted to make sure they've installed the necessary and available patches. In effect, Prof Uht fears that disclosing details of weaknesses within software and hardware too soon gives crooks a chance to build exploit code and go on the offensive. "It is our view that this is not the best thing to do since it effectively broadcasts weaknesses, and thus aids and abets black hat hackers as to the best ways to compromise systems. [Such exploits] are effectively being invented and empower black hats to wreak havoc without making systems safer."
Capsule8 demos takeover technique to help sysadmins check for vulnerabilities
Those who haven't already patched a trio of recent vulnerabilities in the Linux world's systemd have an added incentive to do so: security biz Capsule8 has published exploit code for the holes. In mid-January, Qualys, another security firm, released details about three flaws affecting systemd-journald, a systemd component that handles the collection and storage of log data. Patches for the vulnerabilities – CVE-2018-16864, CVE-2018-16865, and CVE-2018-16866 – have been issued by various Linux distributions. However, Capsule8 or others may reveal ways to bypass existing protections, so consider this a heads-up, or an insight into exploit development. Google Project Zero routinely reveals the inner magic of its security exploits, if you're into that.
Part of the problem is the rough history of the Group FaceTime feature itself. (We've reached out to Apple for comment about when and how it first heard about the bug, or if the team independently discovered it.) Jake Williams, founder of Rendition Infosec, says the most basic form of bug testing is an automated process called "fuzzing," which he says involves sending improperly formatted inputs to see if the system breaks. But the FaceTime bug dealt with a chain of unusual UI maneuvers rather than a particular input, so it would have passed through a fuzzing test unnoticed. The bug would have been more likely to turn up in quality assurance testing, which involves real-world use examples with real users. But calling your own phone number after starting a call with someone else is relatively rare, so it could have easily slipped through the cracks.
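As a concrete, purely illustrative sketch of the fuzzing idea Williams describes, the loop below corrupts a valid input at random and watches for anything other than a clean rejection. The JSON parser target is an arbitrary stand-in for this example, not anything Apple actually tests FaceTime with.

```python
# Illustrative fuzzing sketch: mutate a valid input and look for unexpected
# failures. The JSON parser is an arbitrary stand-in target for this example.
import json
import random

VALID_INPUT = b'{"caller": "alice", "callee": "bob", "video": true}'

def mutate(data: bytes) -> bytes:
    # Flip a few random bytes to produce a malformed variant of the input.
    buf = bytearray(data)
    for _ in range(random.randint(1, 4)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

for i in range(10_000):
    sample = mutate(VALID_INPUT)
    try:
        json.loads(sample)
    except ValueError:
        pass  # A clean rejection of malformed input is the expected outcome.
    except Exception as crash:
        # Anything else escaping the parser is the kind of break that
        # fuzzing is designed to surface.
        print(f"Unexpected failure on iteration {i}: {crash!r}")
        break
```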
Pantsdown vulnerability affects various BMC stacks, as well as OpenBMC on systems using two particular Aspeed chips
An oversight in the firmware for various baseboard management controllers (BMCs) can be exploited by miscreants to bury spyware deep inside a server, potentially poisoning it for the next owner. Malware successfully abusing this security blunder can remain invisible to the hypervisor, operating system, and antivirus software; can survive reboots and disk wipes by hiding in the BMC flash memory; can potentially infect the OS; and can get up to all sorts of other mischief. It requires root-level access to exploit this particular flaw. One attack scenario would involve an admin reprogramming the BMC chipset so that the next owner of the machine is secretly snooped on by spyware. Whoever was using the machine next would likely be none the wiser that bootkit-level malware was lurking in the endpoint's motherboard firmware.
Security researchers from Check Point found vulnerabilities in Epic Games' website that allowed potential hackers to log into people's Fortnite accounts without needing a password. Once an attacker had access to a compromised account, the researchers found, they could listen in on friends' conversations and use the victim's credit card information to purchase in-game items. The researchers discovered the vulnerabilities in November, and they were fixed by January. "We were made aware of the vulnerabilities and they were soon addressed. We thank Check Point for bringing this to our attention," Epic Games said. In August, Epic Games fixed a security flaw with its installer for Android devices, after researchers from Google disclosed a vulnerability that could have tricked victims into installing a fake version of the game.
Bug bounties are a way for companies to check the security of their software by offering cash to freelancers who hunt for security exploits and then report them so that they can be fixed. The idea is that everyone benefits from this process: the company gets its software checked by a larger variety of people than it could employ by itself, the bug hunters are offered legitimate cash for finding a security flaw instead of selling that information on the black market, and the public gets software that has been more thoroughly checked for security issues. Big tech companies like Google and Intel have been running bug bounty programs for years. Now the European Union is getting in on the action too. From January 2019, the EU will be launching a bug bounty program as part of its Free and Open Source Software Audit project (FOSSA), focused on security issues in open source software. The FOSSA project was started back in 2014, when security vulnerabilities were found in the OpenSSL open source encryption library, which is used to encrypt internet traffic.
Plus: State-backed hacks now need permission from a judge
On the same day that certain types of British state-backed hacking began to require a judge-issued warrant, GCHQ has lifted the veil and given the infosec world a glimpse inside its vuln-hoarding policies. The spying agency's internal Equities Process is the way by which it decides whether or not to tell tech vendors that its snoopers have discovered a hardware or software vulnerability. If it keeps discovered vulns to itself, it can exploit them for its own ends, for which the public reason given is disrupting "the activities of those who seek to do the UK harm" – including Belgian phone operators. If GCHQ discloses vulns it has found to the affected vendor, that can "benefit global users of the technology", in the agency's words, as well as tending to build trust – something the Peeping Tom agency is dead keen on following the international damage done to its reputation after the Snowden disclosures. "Where the software in question is no longer supported by the vendor," it said, "were a vulnerability to be discovered in such software, there would be no route by which it could be patched."
"But sometimes, after weighing up the implications, we decide to keep the fact of the vulnerability secret and develop intelligence capabilities with it." So say GCHQ and the NCSC, which today for the first time published the decision-making process they use to decide whether to retain a technology vulnerability for intelligence purposes, or disclose it to a vendor to be patched. Many vulnerabilities the agencies refer back to vendors for "repair"; indeed, the NCSC was named one of the top five bounty hunters under Microsoft's "bug bounty" programme this year. Such nation-state retention of so-called 0days, or previously unknown vulnerabilities, has become increasingly controversial, however, after 0days stockpiled by governments leaked into the wild and were weaponised by "bad actors". "[They] provide yet another example of why the stockpiling of vulnerabilities by governments is such a problem. This is an emerging pattern…"
It took about six months for popular consumer drone maker DJI to fix a security vulnerability across its website and apps which, if exploited, could have given an attacker unfettered access to a drone owner's account. The vulnerability, revealed Thursday by researchers at security firm Check Point, would have given an attacker complete access to a DJI user's cloud-stored data, including drone logs, maps, any still or video footage — and live feed footage through FlightHub, the company's fleet management system — without the user's knowledge. In theory, taking advantage of the flaw was simple, requiring only that a victim click on a specially crafted link. But in practice, Check Point spent considerable time figuring out precise ways to launch a potential attack, and none of them were particularly easy. For that reason, DJI called the vulnerability "high risk" but "low probability," given the numerous hoops an attacker would have to jump through to exploit the flaw. A victim would have had to click on a malicious link on the DJI Forum, where customers and hobbyists talk about their drones and activities.
Popular Wi-Fi access points used by businesses are open to two critical security flaws, researchers said Thursday. Researchers at Armis Labs, a security company with a focus on internet-of-things devices, found in tests that a hacker could completely take over network access points by exploiting vulnerabilities in their Bluetooth Low Energy (BLE) chips. Because BLE chips are designed to run for long periods on very little power, they are more likely to be used in IoT devices and medical devices. The BLE chip vulnerabilities -- researchers are labeling the pair of flaws "Bleeding Bit" -- would let attackers hijack vulnerable networks and spread malware to any devices connected to those networks, Armis Labs said. Network equipment is a prized target for attackers: that's why the US and UK governments warned in April that Russian hackers were targeting millions of routers around the world. Texas Instruments, which makes the affected BLE chips, has already issued a patch.
For the second time in roughly a year, D-Link has failed to act on warnings from security researchers involving the company's routers. The latest incident arose after Silesian University of Technology researcher Błazej Adamczyk contacted D-Link last May about three vulnerabilities affecting eight router models. Following the warning, D-Link patched two of the affected routers but did not initially reveal how it would proceed for the remaining six models. After further prompting from Adamczyk, D-Link revealed that the remaining six routers would not get a security patch because they were considered end-of-life models, leaving affected owners out in the cold. Though these are not current models in D-Link's portfolio, many of them are still likely to be in use. As a result of this impasse, Adamczyk released details about the security flaws, in keeping with responsible disclosure practice, after giving D-Link notice and the opportunity to address the issues.
Cellular network security is already important for phone calls and personal data, but in the 5G era it will become life-and-death critical, as cars, hospitals, factories, and entire cities come to depend on 5G networks for commands. Even though the 5G standard was built with improved security as a fundamental pillar, Swiss researchers have discovered holes that they're working to fix before most networks launch. The 5G standard includes the Authentication and Key Agreement (AKA) — an authentication, confidentiality, and privacy assurance system that lets devices and networks know they can trust each other. According to the Swiss Federal Institute of Technology, known domestically as ETH, the 5G AKA has indeed been improved over the version used in 3G and 4G networks and, among other things, blocks a current technique that can impermissibly track users and device locations. Unfortunately, however, the 5G AKA has at least two major disclosed security holes. Using a cryptographic protocol analysis tool, ETH researchers found that the 5G standard's minimum security assumptions fall short of the AKA's critical security aims — in other words, a "poor implementation of the current standard" could enable a rogue user to offload his usage charges onto other users.