Posts Tagged ‘tool’
Wednesday, October 23, 2013 @ 03:10 PM gHale
Google unveiled tools designed to protect sites from distributed denial of service (DDoS) attacks and to help bypass censorship.
The DDoS protection tool is Project Shield and is currently by invitation only. The tool relies on the company’s existing PageSpeed service, which distributes resources throughout the Google infrastructure and among users of the service to improve website performance.
Now the same concept is being applied to defend sites against DDoS attacks. In essence, all users pool resources together, so an attack against any one of the sites faces the entire network and not just one server.
“Project Shield is an initiative to use Google’s infrastructure to protect free expression online. The service currently combines Google’s DDoS mitigation technologies and Page Speed Service (PSS), which allow websites to serve their content through Google to be better protected from DDoS attacks,” Google said.
Google’s service is free but it is only available by invitation. If users want to try it out, they can fill out the online form.
In time, Project Shield might evolve into a standard tool available for regular sites for free or for a price for larger organizations.
Wednesday, July 31, 2013 @ 04:07 PM gHale
By Gregory Hale
Know all the facts before rushing to a decision or judgment, said General Keith Alexander.
That is the essential idea behind PRISM, the National Security Agency’s controversial intelligence-gathering program. The tool played a vital part in thwarting 54 terrorist attacks worldwide, Alexander said during his keynote address at the Black Hat security conference in Las Vegas Wednesday. Of those 54 potential attacks, 13 were in the U.S., 25 in Europe, 11 in Asia and five in Africa.
The program and the NSA came to light after NSA contractor Edward Snowden leaked documents showing the extent of mass data collection was far greater than the public knew, including what he characterized as dangerous and criminal activities.
“I believe what has happened; the damage to our country is significant and irreversible,” Alexander said.
Alexander spent much of the talk defending the NSA’s mission and its role in protecting the country. He said U.S. companies are not providing far-reaching access to customer data, and only 35 NSA analysts have authorization to search phone metadata and emails. He also described the intense oversight from all three branches of government designed to protect civil liberties.
Alexander talked about two programs. The first, Section 215 Authority, is designed to identify the communications of persons suspected of association with terrorist organizations who are communicating with individuals inside the U.S.
The second, Section 702 Authority, serves foreign intelligence purposes. It applies only to communications of foreign persons located abroad and requires valid documentation of a foreign intelligence purpose such as counterterrorism.
“Under 702, the U.S. does not unilaterally obtain information from the servers of U.S. companies,” Alexander said. “Industry is compelled to comply with this program.”
The two programs grew out of terrorist incidents from the 1993 World Trade Center attack to the 9/11 attacks to the Boston Marathon bombing this past spring.
“The intelligence community, according to the 9/11 Commission, failed to connect the dots. We didn’t know because we didn’t have the tools and capabilities that showed (the attackers) were actually in California,” Alexander said.
“Virtually all democracies have lawful intercept programs,” he said. The goal of the programs is to collect information, but not a huge depth of information, Alexander said. Under Section 215, the NSA collects the date and time of a call, the calling number, the called number, the duration of the call, and the origin of the metadata. It does not collect the content of calls: no voice, no SMS, no names, no addresses, and no credit card numbers.
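The distinction Alexander draws is between a call’s metadata and its content. A minimal sketch of what a metadata-only record might look like follows; the field names here are illustrative, not an actual NSA schema.

```python
from dataclasses import dataclass

# Illustrative only: field names are hypothetical, not an actual NSA schema.
# The point is what a metadata-only record contains -- and what it omits
# (no audio, no SMS content, no names, no addresses, no payment data).
@dataclass(frozen=True)
class CallMetadataRecord:
    timestamp: str         # date and time of call
    calling_number: str    # originating number
    called_number: str     # destination number
    duration_seconds: int  # length of call
    origin: str            # where the metadata record originated

record = CallMetadataRecord(
    timestamp="2013-07-31T16:07:00Z",
    calling_number="+15551230001",
    called_number="+15551230002",
    duration_seconds=183,
    origin="carrier-A",
)
```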
In one case these programs helped disrupt a terrorist plot to bomb the New York City subway system, Alexander said.
Time was of the essence in this case. The attacker was in California and started driving across the country. “We intercepted this in early September 6 or 7 and the targeted attack date was by the 14th of September. The FBI had to put the pieces together quickly.”
The NSA gave the email address to the FBI, which used it to determine a phone number connected to New York City; that number also connected to other terrorist groups.
“This would have been the biggest terrorist attack since 9/11 on U.S. soil,” he said. “The initial tip came from the PRISM 702 data. We were able to stop the attack,” Alexander said.
As part of the foreign intelligence program, the NSA intercepted an email from a terrorist in Pakistan. “By using 702 (the foreign intelligence program), we intercepted some communications and were able to get a phone number that was a potential terrorist.”
Is what the NSA is doing perfect? No, but Alexander said he wants to reach out and see how to improve intelligence gathering.
“Put the facts on the table. The nation needs to know we are going to do the right thing. If we make a mistake we will hold ourselves accountable.”
Thursday, February 28, 2013 @ 04:02 PM gHale
A specially crafted RTF document was leveraging a vulnerability in Word to execute a tool from NVIDIA’s graphics card drivers on victims’ computers.
The executable file, called nv.exe, has a digital signature, and is actually the original file with no changes, said researchers at Sophos.
The reason for this method became clear after researchers analyzed the NvSmartMax.dll library, which was copied onto computers along with the Word document and the .exe file: The library contained the actual malicious code, which set up a permanent backdoor, the researchers said. The malicious functions in the library were executed by the NVIDIA-signed nv.exe file.
The attackers took advantage of the fact that executable files look for libraries in their own folder first. In this case, nv.exe tries to load functions from its DLL but instead finds and uses the evil twin. The attackers likely used the signed binary as a detour to help their malicious code slip past any installed anti-virus software.
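The search-order behavior described above can be checked for with a simple triage script. This is a hypothetical sketch, not Sophos’s tooling: it flags DLLs sitting in the same folder as an executable, since under the default Windows search order those are loaded ahead of any system copy.

```python
import os
import tempfile

def find_colocated_dlls(directory):
    """Return (exe, dll) pairs found in the same folder.

    Under the default Windows DLL search order, a DLL in the
    application's own directory is loaded before the system copy,
    so a planted "evil twin" (like the fake NvSmartMax.dll) wins.
    """
    names = os.listdir(directory)
    exes = [n for n in names if n.lower().endswith(".exe")]
    dlls = [n for n in names if n.lower().endswith(".dll")]
    return [(exe, dll) for exe in exes for dll in dlls]

# Demo: recreate the layout described above -- nv.exe with a planted DLL.
demo_dir = tempfile.mkdtemp()
for name in ("nv.exe", "NvSmartMax.dll"):
    open(os.path.join(demo_dir, name), "w").close()
pairs = find_colocated_dlls(demo_dir)
```

A real triage tool would go further (checking signatures, comparing against known-good system DLLs), but the pairing above is the core of the search-order problem.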
The prepared Word document consists of a statement from the Tibetan Youth Congress, a non-governmental organization that works for Tibetan independence, which suggests this cyber attack was once again targeting pro-Tibet groups.
Wednesday, May 2, 2012 @ 12:05 PM gHale
Skype is investigating a new tool that collects a person’s last known IP address, a potential privacy-compromising issue.
Instructions posted on Pastebin showed how it is possible to show a person’s IP address without adding the targeted user as a contact by looking at the person’s general information and log files.
“This is an ongoing, industry-wide issue faced by all peer-to-peer software companies,” said officials at Skype, owned by Microsoft. “We are committed to the safety and security of our customers and we are taking measures to help protect them.”
In October, Skype acknowledged a research paper showing how it is possible to determine a Skype user’s IP address without that user knowing. The paper also demonstrated that, more than half the time, the IP address could be accurately linked to content shared over the BitTorrent file-sharing protocol.
An IP address is an important piece of information that can reveal the approximate location of a user and their service provider. But the information is not necessarily accurate, as a person could be using a VPN, whose data center may be in a different country than the actual user.
Another way to broadcast inaccurate IP addresses is browsing the Internet using The Onion Router (TOR), an anonymizing service that routes a person’s Internet traffic through a network of worldwide servers in a fashion that is difficult to trace. An IP address also just identifies a computer and not the person sitting behind a keyboard.
Skype uses a peer-to-peer system to route its data traffic, which it also encrypts. But its encryption system is proprietary and not open for scrutiny, which has prompted caution from security experts.
Monday, April 16, 2012 @ 01:04 PM gHale
Apple delivered to the core as it made good on a promise to decontaminate Macs infested with the Flashback malware.
The newest Mac OS X Java update includes a tool that will “remove the most common variants of the Flashback malware,” Apple’s advisory read.
On Tuesday, Apple acknowledged the Flashback malware campaign that exploited a Java vulnerability that left hundreds of thousands of Macs infected. At the same time, Apple pledged to create a detect-and-delete tool that would scrub compromised machines of the attack code. By Thursday, the promise came true.
This was not a new problem for Apple as it had to come up with a similar tool last year, one designed to eliminate MacDefender fake security software. In like speedy fashion, Apple released the anti-MacDefender tool a week after it unveiled those plans.
Thursday’s update also disables automatic execution of Java applets in the Java browser plug-in; the exploit used by Flashback to infect Macs hid inside a malicious Java applet hosted on compromised websites.
One of the reasons Flashback was able to infect so many Macs was because the Java plug-in automatically ran the offered applet. Apple’s move is a step toward disabling Java, the advice most security experts have suggested to users.
Users can circumvent Java’s new off-by-default setting by configuring Java’s preferences. But even then, Apple will intercede.
“As a security hardening measure, the Java browser plug-in and Java Web Start are deactivated if they are unused for 35 days,” Apple said.
Java Web Start is an Oracle technology that lets users single-click launch a Java app from within a browser without first downloading it to the machine.
Tuesday, February 14, 2012 @ 12:02 PM gHale
Apple has done a great job at encrypting passwords in iWork documents, but one company is now able to apply a distributed attack approach to recover lost passwords.
This makes Distributed Password Recovery the first tool to recover passwords for Numbers, Pages and Keynote apps, said ElcomSoft officials.
“The recovery process is painfully slow,” said Andy Malyshev, ElcomSoft chief technology officer. “Apple used strong AES encryption with 128-bit keys, which makes password attack the only feasible solution. We’re currently able to try several hundred password combinations per second on an average CPU. This is slow, and thus only distributed attacks can be used to achieve a reasonable recovery time. However, the human factor and our product’s advanced dictionary attacks help recover a significant share of these passwords in a reasonable timeframe.”
With strong encryption and long keys, an attack on the encryption keys themselves is not feasible as long as the encryption is properly implemented. Elcomsoft Distributed Password Recovery therefore attacks the user-selected password instead, attempting to recover the original plain-text password.
Considering the very nature of iWork as an inexpensive, simple-to-use, consumer-oriented product, chances of hitting the right password by executing a distributed dictionary attack are good.
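The general shape of such an attack can be sketched in a few lines. This is an illustration only, not ElcomSoft’s implementation or the actual iWork format: it assumes the document key is derived from the password with PBKDF2 and that a correct guess can be recognized by comparing derived keys.

```python
import hashlib

def dictionary_attack(candidates, salt, target_key, iterations=10_000):
    """Try each candidate password until the derived key matches.

    This mirrors the approach described above: with AES-128 and a
    strong key-derivation function, attacking the key directly is
    infeasible, so the only practical route is guessing the
    (human-chosen) password. PBKDF2 here is an assumption for
    illustration, not the actual iWork or ElcomSoft scheme.
    """
    for password in candidates:
        key = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), salt, iterations, dklen=16
        )
        if key == target_key:
            return password
    return None

# Demo with a made-up salt and a weak, dictionary-word password.
salt = b"iwork-demo-salt"
target = hashlib.pbkdf2_hmac("sha256", b"letmein", salt, 10_000, dklen=16)
found = dictionary_attack(["password", "123456", "letmein"], salt, target)
```

The distributed part of the product amounts to splitting the candidate list across many machines; each worker runs the same loop over its own slice.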
Here are some features of the program:
• Hardware acceleration (patent pending) reduces password recovery time by a factor of 50
• Support for NVIDIA CUDA cards, ATI Radeon and Tableau TACC1441 hardware accelerators
• Linear scalability with no overhead allows using up to 10,000 workstations without performance drop-off
• Allows up to 64 CPUs or CPU cores and up to 32 GPUs per processing node
• Broad compatibility recovers document and system passwords to various file formats
• Brute-force and dictionary attacks
• Distributed password recovery over LAN, Internet or both
• Console management for flexible control from any networked PC
• Plug-in architecture allows for additional file formats
• Schedule support for flexible load balancing
• Minimum bandwidth utilization saves network resources and ensures zero scalability overhead
• Storing all discovered passwords, forming a separate/internal dictionary (password cache)
Thursday, January 12, 2012 @ 06:01 PM gHale
By Nicholas Sheble
For the first time, comprehensive greenhouse gas (GHG) data reported directly from large facilities and suppliers across the U.S. are easily accessible to the public via the Environmental Protection Agency’s (EPA) GHG Reporting Program.
The 2010 GHG data released Wednesday includes public information from facilities in nine industry groups that directly emit large quantities of GHGs.
A greenhouse gas is a gas in the atmosphere that absorbs and emits radiation in the thermal infrared range. This process is the fundamental cause of the greenhouse effect. GHGs include water vapor, carbon dioxide, methane, nitrous oxide, and ozone.
Three coal-fired power plants owned by Southern Company led the emitter hit parade and each released more than 20 million metric tons of carbon dioxide equivalent in 2010.
Two of the plants, Scherer and Bowen, are in Georgia. The third, James H. Miller Jr., is in Alabama.
The fourth-largest emitter was the Martin Lake, TX power plant of Energy Future Holdings Corporation subsidiary, Luminant.
Duke Energy Corp.’s largest plant, the Gibson plant in Indiana, came in fifth.
“Thanks to strong collaboration and feedback from industry, states and other organizations, today we have a transparent, powerful data resource available to the public,” said Gina McCarthy, assistant administrator for EPA’s Office of Air and Radiation.
“The GHG Reporting Program data provides a critical tool for businesses and other innovators to find cost- and fuel-saving efficiencies that reduce greenhouse gas emissions, and foster technologies to protect public health and the environment,” McCarthy said.
One can view and sort GHG data for calendar year 2010 from over 6,700 facilities by facility, location, industrial sector, and the type of GHG emitted. This information helps communities identify nearby sources of GHGs, helps businesses compare and track emissions, and provides information to state and local governments.
GHG data for direct emitters show that in 2010:
• Power plants were the largest stationary sources of direct emissions with 2,324 million metric tons of carbon dioxide equivalent (mmtCO2e), followed by petroleum refineries with emissions of 183 mmtCO2e.
• CO2 accounted for the largest share of direct GHG emissions with 95 percent, followed by methane with 4 percent, and nitrous oxide and fluorinated gases accounting for the remaining 1 percent.
• One hundred facilities each reported emissions over seven mmtCO2e, including 96 power plants, two iron and steel mills, and two refineries.
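The kind of sorting the publication tool offers is straightforward to sketch. The facility names below come from the article, but the emission figures are placeholders for illustration, not EPA-reported values.

```python
from operator import itemgetter

# Illustrative rows only: facility names come from the article, but the
# emission figures (mmtCO2e) here are placeholders, not EPA data.
facilities = [
    ("Scherer", "GA", "Power Plants", 22.0),
    ("Bowen", "GA", "Power Plants", 21.0),
    ("James H. Miller Jr.", "AL", "Power Plants", 20.5),
    ("Martin Lake", "TX", "Power Plants", 18.0),
    ("Gibson", "IN", "Power Plants", 17.0),
]

def top_emitters(rows, n=3):
    """Rank facilities by reported emissions, largest first."""
    return sorted(rows, key=itemgetter(3), reverse=True)[:n]
```

Sorting on a different column index (state, sector) gives the other views the EPA tool exposes.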
EPA’s GHG Reporting Program Data and Data Publication Tool is available on the EPA website. On the right side of the page, click “View GHG data,” then use the interactive sorting functions to find facilities of interest.
Nicholas Sheble (firstname.lastname@example.org) is an engineering writer and technical editor in Raleigh, NC.
Wednesday, October 26, 2011 @ 09:10 PM gHale
By Gregory Hale
If you talk to people focused on security, you would think whitelisting is the be-all, end-all solution that will keep a system safe.
The reality is, yes, whitelisting is a quality security solution. But is it the answer to every security issue that could affect a system? If you listen to Nate Bowman, a cyber security researcher with the Department of Homeland Security’s Industrial Control Systems Cyber Emergency Response Team, the answer is a solid maybe.
“It is nice to have whitelisting as a tool in the tool box, but it is not a cure all,” said Bowman during the ISCJWG meeting in Long Beach, CA, Wednesday.
Whitelisting is all about creating a list of allowable applications, trying to stop an undesirable action before it happens: everything not on the list is denied. That compares to blacklisting, an allow-all strategy that tries to fight off malware once it gets into the system. Whitelisting only lets in what the user wants in.
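The deny-all model can be reduced to a few lines: execution is permitted only if the binary’s hash is on the approved list, and everything else is rejected by default. A minimal sketch, not any vendor’s product:

```python
import hashlib
import tempfile

def file_hash(path):
    """SHA-256 of a file's contents, used as its identity."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

class Whitelist:
    """Deny-all by default: only explicitly approved hashes may run."""

    def __init__(self):
        self.approved = set()

    def approve(self, path):
        self.approved.add(file_hash(path))

    def may_execute(self, path):
        return file_hash(path) in self.approved

# Demo: approve one binary; anything else is denied by default.
approved_file = tempfile.NamedTemporaryFile(delete=False)
approved_file.write(b"trusted app")
approved_file.close()

unknown_file = tempfile.NamedTemporaryFile(delete=False)
unknown_file.write(b"dropped malware")
unknown_file.close()

wl = Whitelist()
wl.approve(approved_file.name)
```

Hashing by content also shows why Bowman calls management a nightmare in changing environments: every legitimate patch changes the hash and forces a re-approval.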
Bowman gave a small case history of a company suffering an advanced persistent threat (APT) attack. The victim found the threat and saw a pwdump-style tool, used to harvest user names and passwords. The company was using whitelisting, which caught the harvesting attempt and held off that attack. But, Bowman said, the APT was, in fact, persistent: It then turned to PsExec, a remote-execution utility, to keep moving through the network and gain more names and passwords. Again, the whitelisting stopped the attack. The APT then tried to work around the whitelisting, and after repeated attempts the company was able to thwart the attack.
Whitelisting does have benefits: It reduces risk, increases visibility and helps with compliance. It does, however, have limitations. It is not effective against memory corruption attacks, and the higher up the execution stack you go, the more trouble whitelisting has, Bowman said. SQL injection and cross-site scripting attacks are not as well protected against.
Challenges for whitelisting include management. “It is a nightmare to manage whitelisting technology,” Bowman said. “It is easier to include whitelisting in a static environment than it is in a changing environment.”
There is also the idea of a cultural change, he said. “Users will complain about the freedom they will have to give up.”
“Whitelisting works for Industrial Control Systems,” Bowman said. “It is a nice marriage between the two. It works best with static systems and deterministic systems.”