Posts Tagged ‘Google’
Tuesday, May 21, 2013 @ 05:05 PM gHale
Google released Chrome 27, which includes a long list of security fixes for the browser, many of which are for high-risk vulnerabilities.
As a result of all the vulnerabilities, the company handed out more than $14,000 in rewards to researchers who reported bugs fixed in the latest iteration of Chrome.
The company's bug bounty program is designed to give security researchers an incentive to report vulnerabilities in Chrome and Chrome OS to the company privately rather than disclose them publicly. Rewards range from a few hundred dollars for minor flaws up to tens of thousands of dollars for especially severe issues.
None of the vulnerabilities addressed in Chrome 27 fit the latter description, with the highest payment being $3133.70 to Atte Kettunen for some memory safety issues. Chrome users should update their browsers as soon as possible to protect themselves against exploits.
Here are the bugs fixed in Chrome 27:
• High CVE-2013-2837: Use-after-free in SVG. Credit to Sławomir Błażek.
• Medium CVE-2013-2838: Out-of-bounds read in v8. Credit to Christian Holler.
• High CVE-2013-2839: Bad cast in clipboard handling. Credit to Jon of MWR InfoSecurity.
• High CVE-2013-2840: Use-after-free in media loader. Credit to Nils of MWR InfoSecurity.
• High CVE-2013-2841: Use-after-free in Pepper resource handling. Credit to Chamal de Silva.
• High CVE-2013-2842: Use-after-free in widget handling. Credit to Cyril Cattiaux.
• High CVE-2013-2843: Use-after-free in speech handling. Credit to Khalil Zhani.
• High CVE-2013-2844: Use-after-free in style resolution. Credit to Sachin Shinde (@cons0ul).
• High CVE-2013-2845: Memory safety issues in Web Audio. Credit to Atte Kettunen of OUSPG.
• High CVE-2013-2846: Use-after-free in media loader. Credit to Chamal de Silva.
• High CVE-2013-2847: Use-after-free race condition with workers. Credit to Collin Payne.
• Medium CVE-2013-2848: Possible data extraction with XSS Auditor. Credit to Egor Homakov.
• Low CVE-2013-2849: Possible XSS with drag+drop or copy+paste. Credit to Mario Heiderich.
Tuesday, May 14, 2013 @ 03:05 PM gHale
Internet Explorer 10 continues to outperform the other browsers with a block rate against malware of 99.96 percent, new research shows.
Web browsers remain a front line in the fight against malware, and NSS Labs evaluated the protection against malware downloads offered by the five main browsers: Safari 5, Chrome 25/26, Internet Explorer 10, Firefox 19 and Opera 12.
While Chrome’s malware download protection improved significantly — rising to more than 83 percent from 70 percent in NSS’ October 2012 comparative test — Internet Explorer 10 was tops at the 99.96 percent rate.
Safari, Firefox and Opera continue to lag far behind Chrome and Internet Explorer with overall block rates of 10.16 percent, 9.92 percent and 1.87 percent respectively.
Google and Microsoft utilize application reputation services to enhance their general URL blocking capabilities.
While Chrome saw a larger jump in its overall block rate, up 13 percentage points from the last test period, that leap only brought Chrome up to the level of protection Internet Explorer achieves without the added application reputation. IE's block rate jumped 16.79 percentage points with the addition of its Application Reputation service, taking it to the 99.96 percent mark.
Google’s Safe Browsing API v2 includes additional application reputation-based download protection that integrates into Chrome, but not into Firefox or Safari and the results speak for themselves.
The latest API’s additional functionality is seven times more effective than the Safe Browsing API alone and accounts for 73.16 percent of Chrome’s overall block rate. Without the application reputation service, Chrome, Firefox and Safari all have block rates around 10 percent.
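Those figures are self-consistent: assuming Chrome's overall rate was 83.16 percent (consistent with the "more than 83 percent" reported above), subtracting the 73.16 points attributed to application reputation leaves a base Safe Browsing rate of about 10 percent, roughly one seventh of the reputation layer's contribution:

```python
# Sanity check on the NSS Labs figures quoted above. 83.16 is an
# assumed overall rate, consistent with "more than 83 percent".
chrome_overall = 83.16   # Chrome's overall block rate, percent
app_rep_points = 73.16   # points attributed to application reputation

base_safe_browsing = chrome_overall - app_rep_points
print(round(base_safe_browsing, 2))                   # ~10: the "around 10 percent" base
print(round(app_rep_points / base_safe_browsing, 1))  # ~7x: "seven times more effective"
```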
While Application Reputation itself can be a highly effective technology, it is also prone to false positives and user error. Perfectly good software, but virtually unknown, may end up blocked and highly malicious software engineered to have excellent reputational aspects may evade protection.
It’s important to note that Chrome relies upon its application reputation protection almost four times as often as Internet Explorer just to achieve the same protection rates as Internet Explorer achieves without application reputation.
Because unique malware attacks through infected web pages are often live for only short periods of time, the faster a web browser can detect and block a malware attack, the better.
“Web browsers remain the primary infection vector for most consumers and enterprises,” said Randy Abrams, research director at NSS Labs. “Improving the browser’s malware block rate substantially impacts one’s security profile. Both Google’s Download Protection and Microsoft’s App Rep allow users to override browser protection, however, Google relies on this less reliable protection mechanism nearly four times as often as does Microsoft. The net result is that IE 10 users are offered superior protection over Chrome users with one quarter the risk of making a bad download decision. Firefox, Safari, and Opera users are afforded little protection at all by their browsers.”
Friday, April 19, 2013 @ 03:04 PM gHale
Microsoft plans to introduce an extension that will make the login process for Microsoft accounts more secure.
The method, called “two-step verification,” will require not only a conventional password, but also a code the user gets by text message, email, or from a special app, according to a Microsoft blog post.
The Microsoft Authenticator app uses a standard protocol to create one-time codes that can also work for services like Google and Dropbox. Microsoft does not specify in its announcement what that standard is.
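The announcement doesn't name the standard, but one-time codes that interoperate across Google, Dropbox and Microsoft accounts are typically generated with TOTP (RFC 6238), an HMAC-based one-time-password scheme. A minimal sketch of how such a code is computed (an illustration, not Microsoft's actual implementation):

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32, unix_time, digits=6, step=30):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(unix_time) // step              # 30-second time window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time 59.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", 59, digits=8))  # 94287082
```

Because the server and the app both derive the code from a shared secret and the current time, the phone needs no network connection, which fits the "special app" option described above.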
Google introduced two-factor authentication more than two years ago, including the option to go through the two-step process only once on frequently used devices. And just like Google's method, Microsoft's will offer application-specific passwords for programs and services that don't support two-factor authentication.
Things could get difficult if users find themselves unable to receive or generate the codes needed for the second step, for example if their smartphone is stolen. Without the second code, users will have to wait 30 days before regaining access to their account, Microsoft said. In Google's system, users can generate special backup codes in advance and keep them safe for such cases, a recommended step; Microsoft will most likely add a similar backup mechanism at some point. For now, according to the blog post, a user who cannot produce either of the two codes will simply lose access to the account.
According to the announcement, the new option will turn up in the next few days under “Security info” in the settings for each Microsoft account.
Tuesday, April 16, 2013 @ 03:04 PM gHale
Google fixed a series of serious vulnerabilities in its Chrome OS, including three high-risk bugs that could lead to code execution on vulnerable machines.
All of the vulnerabilities that Google fixed in Chrome OS are in the O3D plugin, an API that enables developers to create 3D applications for the Web. Three of the vulnerabilities are high-risk and the other flaw is a medium severity bug.
The following are the vulnerabilities that Google fixed in Chrome OS 26:
• Medium CVE-2013-2832: Uninitialized memory left in buffer in O3D plug-in. Credit to Ralf-Philipp Weinmann.
• High CVE-2013-2833: Use-after-free in O3D plug-in. Credit to Ralf-Philipp Weinmann.
• High CVE-2013-2834: Origin lock bypass of O3D and Google Talk plug-ins. Credit to Ralf-Philipp Weinmann.
• High CVE-2013-2835: Origin lock bypass of O3D and Google Talk plug-ins. Credit to Google Chrome Security Team (Chris Evans).
Ralf-Philipp Weinmann, the researcher who discovered three of the flaws, received $31,336 in bug bounties from Google for his work. That’s at the highest end of the rewards that Google pays out in its Chromium reward program. Most of the rewards are in the $1,000-$3,000 range, with some going above that, depending upon the severity of the vulnerability and difficulty of exploitation.
“We’re pleased to reward Ralf-Philipp Weinmann $31,336 under the Chromium Vulnerability Rewards Program for a chain of three bugs, including demo exploit code and very detailed write-up. We are grateful to Ralf for his work to help keep our users safe,” said Ben Henry of the Chrome team.
Thursday, April 11, 2013 @ 04:04 PM gHale
There is now a combined client- and server-side system that uses blacklisting, whitelisting and the characteristics of an executable file to catch nearly 99 percent of all malicious downloads, said Google researchers.
The content-agnostic malware protection system (CAMP) was part of a research paper presented in February at the Network and Distributed System Security Symposium. The system for the Chrome browser addresses the inherent weaknesses of using whitelisting and blacklisting as a defense against malicious binaries.
“In practice, these approaches continue to provide value for popular binaries at either extreme of maliciousness — the current large outbreak of malware, the benign binaries shipped with an OS — but bridging the gap between whitelist and blacklist detection for Web malware remains a significant challenge,” according to the research paper from Moheeb Abu Rajab, Lucas Ballard, Noe Lutz, Panayiotis Mavrommatis and Niels Provos.
The researchers said 70 percent of the time CAMP can catch malicious downloads on the computer, with the remainder requiring deeper analysis on a Google server. Keeping the analysis as much as possible on the client is important in protecting user privacy.
When cloud-based antivirus systems are in play, binaries typically upload to the cloud for examination, resulting in a much greater loss of privacy, Google said.
“While CAMP also moves detection of malware into the cloud, it reduces the privacy impact by employing whitelists so that most download URLs stay within the browser and do not need to be sent to a third party,” the paper said. “Binary payloads never leave the browser.”
The use of the browser instead of a remote server for some tasks is a key difference between CAMP and Microsoft’s SmartScreen technology. The latter is in Internet Explorer to protect against malicious downloads and links.
In terms of detection rates, major antivirus engines detect between 35 percent and 70 percent of malware binaries, while CAMP's success rate is 98.6 percent, the paper said. During a six-month evaluation period, Google tested CAMP on the Windows computers of 200 million users and identified about 5 million malicious downloads each month.
The system first compares downloads against a whitelist of known benign executables and a blacklist of known malware. The latter also involves communicating with Google’s server-based Safe Browsing service.
If a clear determination cannot occur using the lists, then CAMP begins the analysis, which starts with the browser gathering characteristics of the binary. They would include the final download URL and the IP address of the server hosting the download, as well as the size of the binary, its content hashes and certificates attached to it.
The browser also logs the URL that referred the computer user to the download. This is important, because the URL can undergo examination to determine whether it is part of a chain of URL redirects set up to hide the original. Multiple referrals are a good indicator of malware.
Once all the information comes together, it goes out to Google's servers, which analyze it and decide whether the binary is benign, malicious or unknown. The verdict returns to the browser, which notifies the user.
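The flow described above, local lists first and the server only for the undecided remainder, can be sketched as follows. All names, fields and list contents here are hypothetical illustrations, not Chrome's actual code:

```python
from dataclasses import dataclass

@dataclass
class DownloadInfo:
    url: str
    ip: str
    referrer_chain: list   # URLs that redirected to the download
    size: int
    content_hash: str
    signer: str            # code-signing certificate subject, "" if unsigned

# Hypothetical local lists; the real system syncs them via Safe Browsing.
WHITELIST_SIGNERS = {"Example Trusted Publisher"}
BLACKLIST_HASHES = {"deadbeef"}

def classify_locally(d):
    """Return 'benign', 'malicious', or None when the lists don't decide."""
    if d.signer in WHITELIST_SIGNERS:
        return "benign"
    if d.content_hash in BLACKLIST_HASHES:
        return "malicious"
    return None            # undecided: roughly 30% of cases go to the server

def classify(d, ask_server):
    verdict = classify_locally(d)
    if verdict is None:
        # Only metadata leaves the browser, never the binary payload.
        verdict = ask_server({
            "url": d.url, "ip": d.ip, "size": d.size,
            "hash": d.content_hash, "referrers": d.referrer_chain,
        })
    return verdict
```

The split mirrors the privacy argument in the paper: the common cases resolve on the client, and the server sees only features of the remaining downloads.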
However, Lance James, chief scientist at application security vendor Vigilant, said that as an overall security system, CAMP falls short because it does not catch malware that exploits vulnerabilities within the browser.
Such malware often gets into a computer by email recipients tricked into clicking on a malware-carrying attachment.
“[CAMP] may be able to see 99 percent of malware downloaded through the browser, but they won’t see 99 percent of malware that is never seen by the browser,” James said. “There’s a big blind spot and that’s a problem.”
Google acknowledges that browser-exploiting malware is not the focus of the system. “CAMP is specifically designed to protect from user-initiated malware downloads, e.g. distributed by means of social engineering, that do not involve browser exploitation,” researcher Moheeb Abu Rajab said.
Tuesday, April 2, 2013 @ 09:04 AM gHale
Google Chrome 26, the latest version of the company’s browser, is out and it contains a number of security patches, with the biggest fix being for a high-priority use-after-free vulnerability in the Web Audio component of the browser.
That vulnerability is the only one in Chrome 26 for which Google paid a bug bounty as part of its reward program. The other vulnerabilities were either discovered by members of the company's own security team or simply didn't qualify for a reward.
This continues a trend: the number of vulnerabilities qualifying for rewards from Google has been declining as it becomes more and more difficult to find serious bugs in the browser.
Google has raised the amount of money paid for serious vulnerabilities in order to attract more submissions from security researchers, but the improved defenses in Chrome have made life more difficult for would-be submitters.
Here is a list of vulnerabilities patched by Google in Chrome 26:
• High CVE-2013-0916: Use-after-free in Web Audio.
• Low CVE-2013-0917: Out-of-bounds read in URL loader.
• Low CVE-2013-0918: Do not navigate dev tools upon drag and drop.
• [Linux only] Medium CVE-2013-0919: Use-after-free with pop-up windows in extensions.
• Medium CVE-2013-0920: Use-after-free in extension bookmarks API.
• High CVE-2013-0921: Ensure isolated web sites run in their own processes.
• Low CVE-2013-0922: Avoid HTTP basic auth brute force attempts.
• Medium CVE-2013-0923: Memory safety issues in the USB Apps API.
• Low CVE-2013-0924: Check an extension’s permissions API usage against file permissions.
• Low CVE-2013-0925: Avoid leaking URLs to extensions without the tabs permissions.
• Medium CVE-2013-0926: Avoid pasting active tags in certain situations.
Thursday, March 14, 2013 @ 05:03 PM gHale
Google posted a set of informational videos and articles to help website owners recover their sites after they have suffered a hack attack.
“Help for Hacked Sites” relies on knowledge from Google’s webmaster support members and software engineers to help end users determine whether their sites have been compromised and if so, how to fix them.
A post on Google’s Online Security Blog by Developer Programs Tech Lead Maile Ohye explained the series.
The post also references a collection of statistics taken from antivirus awareness group StopBadware and a study the firm did last month, “Compromised Websites: An Owner’s Perspective.”
The study found 26 percent of webmasters who have had their sites hacked report their sites remain compromised. This new series aims to decrease those figures.
When it comes to mitigating a hacked site, the videos offer tips on building a support team, quarantining the infected site, assessing the damage and identifying the vulnerability before starting to clean up and maintain the compromised site.
In a video, Ohye describes how hackers can exploit vulnerable plug-ins on some sites to spread spam links and malicious software and how some site owners can remain oblivious until they start to see warnings from Google and Chrome.
Monday, March 11, 2013 @ 05:03 PM gHale
A novel model can allow the massive infrastructure powering cloud computing to run as much as 15 to 20 percent more efficiently, and Google has already applied it, said computer scientists at the University of California, San Diego, and Google.
Computer scientists looked at a range of Google web services, including Gmail and search. They used a unique approach to develop their model. Their first step was to gather live data from Google’s warehouse-scale computers as they were running in real time. Their second step was to conduct experiments with data in a controlled environment on an isolated server. The two-step approach was key, said Lingjia Tang and Jason Mars, faculty members in the Department of Computer Science and Engineering at the Jacobs School of Engineering at UC San Diego.
“These problems can seem easy to solve when looking at just one server,” said Mars. “But solutions do not scale up when you’re looking at hundreds of thousands of servers.”
The work is one example of the research Mars and Tang are pursuing at the Clarity Lab at the Jacobs School, their newly formed research group. Clarity is an acronym for Cross-Layer Architecture and Runtimes.
“If we can bridge the current gap between hardware designs and the software stack and access this huge potential, it could improve the efficiency of web service companies and significantly reduce the energy footprint of these massive-scale data centers,” Tang said.
Researchers sampled 65K of data every day over a three-month span on one of Google's server clusters, which was running Gmail. When they analyzed that data, they found the application ran significantly better when it accessed data located nearby on the server rather than in remote locations. But they also knew the data they gathered was noisy because of other processes and applications running on the servers at the same time, so they used statistical tools to cut through the noise. Even so, they had to do more experiments.
Next, computer scientists went on to test their findings on one isolated server, where they could control the conditions in which the applications were running. During those experiments, they found data location was important, but competition for shared resources within a server, especially caches, also played a role.
“Where your data is versus where your apps are matters a lot,” Mars said. “But it’s not the only factor.” Servers come equipped with multiple processors, which in turn can have multiple cores. Random-access memory is attached to each processor, giving its cores quick access to local data. But if an application running on one core tries to access data attached to a different processor, it is going to run more slowly. And this is where the researchers’ model comes in.
“It’s an issue of distance between execution and data,” Tang said. Based on these results, computer scientists developed a novel metric, called the NUMA score, which can determine how well random-access memory ends up allocated in warehouse-scale computers. Optimizing the NUMA score can lead to 15 to 20 percent improvements in efficiency. Improvements in the use of shared resources could yield even bigger gains—a line of research Mars and Tang are pursuing in other work.
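The paper defines the NUMA score precisely; as a toy illustration of the underlying idea, one could score a workload by the fraction of its memory accesses served from the local NUMA node. The names and scoring below are illustrative assumptions, not the paper's actual definition:

```python
def numa_score(accesses):
    """Toy locality score: fraction of accesses whose data lives on the
    same NUMA node as the core issuing them. 1.0 means fully local.
    `accesses` is a list of (cpu_node, memory_node) pairs.
    Illustration only, not the metric defined in the paper."""
    if not accesses:
        return 1.0
    local = sum(1 for cpu, mem in accesses if cpu == mem)
    return local / len(accesses)

# A workload whose data is half remote scores 0.5, suggesting the
# allocator or scheduler should migrate pages or threads closer.
sample = [(0, 0), (0, 1), (1, 1), (1, 0)]
print(numa_score(sample))   # 0.5
```

Raising such a score, by allocating memory on the node where the application runs, is the kind of optimization the researchers credit with the 15 to 20 percent efficiency gains.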