Archive for ‘Misc’

05/12/2016

Palo Alto Networks integrates cloud security for AWS


Palo Alto Networks has announced the integration of its VM-Series virtualised next-generation firewalls with Amazon Web Services (AWS) Auto Scaling and Elastic Load Balancing.

Using a combination of AWS Lambda, CloudFormation templates, and Amazon CloudWatch services, along with bootstrapping and XML API automation features supported by the VM-Series, joint customers can now automatically scale the cyber breach prevention capabilities of the Palo Alto Networks next-generation security platform as their AWS workload demands fluctuate.
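The AWS building blocks the integration relies on, CloudWatch alarms driving Auto Scaling policies, can be sketched with boto3. This is only an illustrative outline, not Palo Alto Networks' published templates; the group name, namespace, metric and threshold below are assumptions:

import boto3

# Illustrative sketch: add a VM-Series instance to a (hypothetical) Auto Scaling
# group when a custom CloudWatch metric stays above a threshold.
autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="vmseries-asg",          # assumed group name
    PolicyName="scale-out-on-high-sessions",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)

cloudwatch.put_metric_alarm(
    AlarmName="vmseries-high-sessions",
    Namespace="Custom/VMSeries",                  # assumed custom namespace
    MetricName="ActiveSessions",                  # assumed metric published by the firewalls
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "vmseries-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=50000,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)

In the actual integration, Lambda functions and the VM-Series bootstrapping and XML API features handle licensing and configuring the newly launched firewalls; the sketch above covers only the scaling trigger.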

Tim Jefferson, global ecosystem lead, Security, Amazon Web Services, said: “Palo Alto Networks integration with AWS Auto Scaling and ELB, combined with the new validated security competency, enables our joint customers to confidently and dynamically deploy the Palo Alto Networks VM-Series virtualised next-generation firewall delivering security that keeps pace with our customers’ business.”

Lee Klarich, executive vice president, Product Management, Palo Alto Networks, added: “Palo Alto Networks Next-Generation Security Platform provides customers with superior cyber breach prevention capabilities, including security for cloud-based applications. Through our close integration with AWS, customers can now grow and scale their cloud environments with even greater ease and automation while enhancing and maintaining their security posture across public cloud and hybrid environments.”

Palo Alto Networks also joins the AWS Competency Program for Security, which highlights partners who have demonstrated success in building products and solutions on AWS to support customers in multiple areas, including infrastructure security, policy management, identity management, security monitoring, vulnerability management, and data protection.

05/12/2016

5 steps to securely moving apps to the cloud

02/12/2016

Multiplication with crossing lines – China (multiplying numbers the Chinese way: 12 × 32)

27/11/2016

Clever attack uses the sound of a computer’s fan to steal data

In the past two years a group of researchers in Israel has become highly adept at stealing data from air-gapped computers—those machines prized by hackers that, for security reasons, are never connected to the internet or connected to other machines that are connected to the internet, making it difficult to extract data from them.
Mordechai Guri, manager of research and development at the Cyber Security Research Center at Ben-Gurion University, and colleagues at the lab, have previously designed three attacks that use various methods for extracting data from air-gapped machines—methods involving radio waves, electromagnetic waves and the GSM network, and even the heat emitted by computers.
Now the lab’s team has found yet another way to undermine air-gapped systems using little more than the sound emitted by the cooling fans inside computers. Although the technique can only be used to steal a limited amount of data, it’s sufficient to siphon encryption keys and lists of usernames and passwords, as well as small amounts of keylogging histories and documents, from more than two dozen feet away. The researchers, who have described the technical details of the attack in a paper (.pdf), have so far been able to siphon encryption keys and passwords at a rate of 15 to 20 bits per minute, or up to about 1,200 bits per hour, but are working on methods to accelerate the data extraction.
“We found that if we use two fans concurrently [in the same machine], the CPU and chassis fans, we can double the transmission rates,” says Guri, who conducted the research with colleagues Yosef Solewicz, Andrey Daidakulov, and Yuval Elovici, director of the Telekom Innovation Laboratories at Ben-Gurion University. “And we are working on more techniques to accelerate it and make it much faster.”

The air-gap myth

Air-gapped systems are used in classified military networks, financial institutions and industrial control system environments such as factories and critical infrastructure to protect sensitive data and networks. But such machines aren’t impenetrable. To steal data from them an attacker generally needs physical access to the system—using either removable media like a USB flash drive or a FireWire cable connecting the air-gapped system to another computer. But attackers can also use near-physical access using one of the covert methods the Ben-Gurion researchers and others have devised in the past.

One of these methods involves using sound waves to steal data. For this reason, many high-security environments not only require sensitive systems be air-gapped, they also require that external and internal speakers on the systems be removed or disabled to create an “audio gap”. But by using a computer’s cooling fans, which also produce sound, the researchers found they were able to bypass even this protection to steal data.

Most computers contain two or more fans—including a CPU fan, a chassis fan, a power supply fan, and a graphics card fan. While operating, the fans generate an acoustic tone known as blade pass frequency that gets louder with speed. The attack involves increasing the speed or frequency of one or more of these fans to transmit the digits of an encryption key or password to a nearby smartphone or computer, with different speeds representing the binary ones and zeroes of the data the attackers want to extract—for their test, the researchers used 1,000 RPM to represent 1, and 1,600 RPM to represent 0.
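As a rough illustration of that encoding (not the researchers' Fansmitter code, which they have not released), the transmit side could look like the following Python sketch; set_fan_rpm is a stand-in for whatever privileged, platform-specific fan control the malware would actually use:

import time

ONE_RPM, ZERO_RPM = 1000, 1600   # speeds the researchers used for 1 and 0
BIT_SECONDS = 3                  # assumed dwell time per bit (~20 bits per minute)

def set_fan_rpm(rpm):
    # Placeholder: real code would drive the embedded controller that
    # manages the fan. Here we only log the requested speed.
    print(f"fan -> {rpm} RPM")

def transmit(payload: bytes):
    # Send each byte most-significant bit first as alternating fan speeds.
    for byte in payload:
        for i in range(7, -1, -1):
            bit = (byte >> i) & 1
            set_fan_rpm(ONE_RPM if bit else ZERO_RPM)
            time.sleep(BIT_SECONDS)

transmit(b"K")   # 8 bits, roughly half a minute at the assumed rate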

The attack, like all previous ones the researchers have devised for air-gapped machines, requires the targeted machine first be infected with malware—in this case, the researchers used proof-of-concept malware they created called Fansmitter, which manipulates the speed of a computer’s fans. Getting such malware onto air-gapped machines isn’t an insurmountable problem; real-world attacks like Stuxnet and Agent.btz have shown how sensitive air-gapped machines can be infected via USB drives.

To receive the sound signals emitted from the target machine, an attacker would also need to infect the smartphone of someone working near the machine using malware designed to detect and decode the sound signals as they’re transmitted and then send them to the attacker via SMS, Wi-Fi, or mobile data transfers. The receiver needs to be within eight meters or 26 feet of the targeted machine, so in secure environments where workers aren’t allowed to bring their smartphones, an attacker could instead infect an internet-connected machine that sits in the vicinity of the targeted machine.
Normally, fans operate at anywhere from a few hundred to a few thousand RPM. To prevent workers in a room from noticing fluctuations in the fan noise, an attacker could use lower frequencies to transmit the data, or use what’s known as close frequencies: frequencies that differ by only about 100 Hz to signify binary 1s and 0s. In both cases, the fluctuating speed would simply blend in with the natural background noise of a room.

“The human ear can barely notice [this],” Guri says.

The receiver, however, is much more sensitive and can even pick up the fan signals in a room filled with other noise, like voices and music.
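Picking the fan’s blade-pass tone out of room noise is, at bottom, narrowband energy detection. A minimal NumPy sketch of the receive side, assuming the two candidate tone frequencies are known in advance (the 166 Hz and 266 Hz values here are illustrative, not taken from the paper):

import numpy as np

SAMPLE_RATE = 44100                 # typical smartphone microphone rate
FREQ_ONE, FREQ_ZERO = 166, 266      # assumed blade-pass tones for 1 and 0, in Hz

def band_energy(window, freq, width=5):
    # Energy in a narrow band around freq, taken from the window's spectrum.
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / SAMPLE_RATE)
    mask = (freqs > freq - width) & (freqs < freq + width)
    return spectrum[mask].sum()

def decode_bit(window):
    # Classify one per-bit audio window by comparing energy near the two tones.
    return 1 if band_energy(window, FREQ_ONE) > band_energy(window, FREQ_ZERO) else 0

# Self-test with a synthetic "1": a pure tone at FREQ_ONE.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
print(decode_bit(np.sin(2 * np.pi * FREQ_ONE * t)))   # prints 1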

The beauty of the attack is that it will also work with systems that have no acoustic hardware or speakers by design, such as servers, printers, internet of things devices, and industrial control systems.

The attack will even work on multiple infected machines transmitting at once. Guri says the receiver would be able to distinguish signals coming from fans in multiple infected computers simultaneously because the malware on those machines would transmit the signals on different frequencies.

There are methods to mitigate fan attacks—for example, by using software to detect changes in fan speed or hardware devices that monitor sound waves—but the researchers say they can produce false alerts and have other drawbacks.

Guri says they are “trying to challenge this assumption that air-gapped systems are secure,” and are working on still more ways to attack air-gapped machines. They expect to have more research done by the end of the year.

14/11/2016

Facebook hits 20Gbps for data transmission (internet drone)

A test by Facebook of a point-to-point radio link for its Aquila Internet drone system. The transmitter used a cascade of custom-built millimeter wave components that were attached to a commercial 60-centimeter diameter parabolic antenna. The transmitter and the receiver were separated by a distance of 13.2 km in Southern California.

Facebook has succeeded in transmitting data at almost 20Gbps between two towers in Southern California in tests of a technology key to its plans to deliver internet service to rural areas using drones.
The tests were conducted earlier this year and made use of frequencies in the so-called E-band, a group of millimeter wave frequencies between 60 and 90GHz.

Such signals are capable of high-bandwidth data transmission but are susceptible to attenuation from distance, weather, and obstacles, so they are typically used for short-range, point-to-point transmissions.
Facebook used a 60-centimeter dish to send data over a 13-kilometer link between Malibu and Woodland Hills.
That test initially shot data at between 100Mbps and 3Gbps and allowed engineers to collect transmission data on clear days and in cloud, fog, high winds, and rain, Facebook said in a Thursday blog post.

For the tests, Facebook ended up building many of its own microwave transmission components into the system, which used a 1.2-meter dish on the receiving end.
The signal received by the dish would drop by half if it was mispointed by just 0.2 degrees, so accuracy was key.
To get the data rate up to 20Gbps, the dish needed to be almost spot on: an accuracy of better than 0.07 degrees. To put that in perspective, that’s the equivalent of a baseball pitcher hitting a quarter, Facebook said.
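That requirement follows from basic antenna geometry: the half-power beamwidth of a parabolic dish is roughly 70 degrees times the wavelength divided by the dish diameter. A quick check for the dish sizes Facebook describes, using a round illustrative E-band frequency:

# Rough half-power (3 dB) beamwidth estimate: theta ~ 70 * wavelength / diameter.
SPEED_OF_LIGHT = 3.0e8                 # m/s
freq_hz = 80e9                         # illustrative E-band frequency
wavelength = SPEED_OF_LIGHT / freq_hz  # about 3.75 mm

for diameter_m in (0.6, 1.2):          # transmit and receive dishes in the test
    beamwidth_deg = 70 * wavelength / diameter_m
    print(f"{diameter_m} m dish: ~{beamwidth_deg:.2f} degrees half-power beamwidth")

The result, roughly 0.4 degrees for the 0.6-meter dish and 0.2 degrees for the 1.2-meter dish, is consistent with the half-power drop Facebook saw at about 0.2 degrees of mispointing.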

The distance achieved is significant and already useful for point-to-point links that could transmit high-bandwidth internet signals terrestrially.
But it isn’t quite far enough to be used as the backhaul link for the company’s envisaged drone internet service.
Facebook’s Aquila drones will deliver internet access to remote areas from altitudes of between 60,000 and 90,000 feet, roughly 18 to 27 kilometers. Because the drones won’t always be directly above the ground station, the link distance could be longer.

For the drone service, Facebook will need to increase the range to between 30 and 50 kilometers and increase the bandwidth to 30Gbps, the company said.
The next step for the California tests is a ground-to-air system that will transmit data to a Cessna aircraft with a data transceiver on board.

Testing of a system that can shoot a 20Gbps data link to the aircraft at an altitude of up to 20,000 feet (six kilometers) has already begun. In 2017, Facebook plans to up the speed to 40Gbps both to and from the aircraft.
“We still have several connectivity and technical challenges to resolve before the technology is fully ready for deployment,” Facebook said.

08/11/2016

LEGO Mindstorms EV3 Demo

07/11/2016

BMC Digital IT Exchange 2016 – Armani Hotel, Burj Khalifa, Dubai


BMC Digital Enterprise Management is an integrated set of IT solutions designed to fast-track digital business for the ultimate competitive advantage, from mainframe to mobile to cloud and beyond.

BMC Remedy
BMC Atrium CMDB
BMC BladeLogic Network Automation
BMC TrueSight Capacity Optimization
BMC TrueSight Operations Management
Entuity
Vyomlabs Robotic Process Automation
Quintica

27/10/2016

The VMware-AWS hybrid cloud partnership puts pressure on OpenStack adoption


The VMware-AWS hybrid cloud partnership pressures OpenStack supporters to simplify deployment of the open source cloud platform in the enterprise.

VMware’s plan to release a hybrid cloud technology for Amazon Web Services (AWS) next year moves the virtualization vendor a step ahead of OpenStack supporters still trying to reduce the complexity of the open source cloud platform.

VMware is preparing to provide customers a single set of tools for migrating workloads between their private cloud and the AWS public cloud. Essentially, the VMware-AWS hybrid cloud provides unified networking, storage, CPU and security services customers can deploy across their data center and Amazon’s.

The single platform — the result of a partnership announced last week — competes directly with OpenStack as a hybrid cloud platform. “OpenStack is the guy in the middle of all of this, and [it] is the guy that probably has the most to lose by the two of them teaming up,” said John Fruehe, an analyst at Moor Insights & Strategy, based in Austin, Texas.

The race to provide the best hybrid cloud technology is driven by projected market demand. A survey of more than 6,100 enterprises in 31 countries found 44% prepared to increase cloud spending over the next two years, according to IDC. More than 70% of heavy cloud users were pursuing a hybrid cloud strategy. In 2015, overall spending on infrastructure products for cloud IT reached $29 billion, an increase of nearly 22% from the previous year.

27/10/2016

Facebook’s 100-gigabit switch design is out in the open (manufactured by Accton Technology, Taiwan)

Facebook’s bid to open up networking is moving up into nosebleed territory for data centers. The company’s 100-gigabit switch design has been accepted by the Open Compute Project, a step that should help to foster an open ecosystem of hardware and software on high-speed networking gear.

The 32-port Wedge 100 is the follow-on to Facebook’s Wedge 40, introduced about two years ago and now in use in practically all of the company’s data centers, said Omar Baldonado, director of software engineering on Facebook’s networking team. Mostly, it’s a faster version of that switch, upping the port speed to 100Gbps (bits per second) from 40Gbps. But Facebook also added some features to make service easier, like a cover that can be removed without tools and LED status lights to check the condition of the cooling fans from a distance.

The social networking giant needs 100Gbps now at the top of many of its server racks. The company has deployed hundreds of the Wedge 100 switches and thousands of the Wedge 40. Next up is 400 gigabits, Baldonado said.

But Facebook is also trying to make networking more like computing, by promoting standard hardware that’s not tied to built-in software. Its vehicle for this is the networking project of OCP, which Facebook spearheaded several years ago.

Facebook designed the Wedge 100 for its own needs and then submitted it to OCP for review. Based on feedback, it made some changes to serve a broader user base before the design was accepted by OCP. Now the Wedge 100 is officially part of the OCP ecosystem.

Facebook both designs its own hardware and writes its own software, which most companies can’t do. But OCP’s certification lets enterprises choose network hardware and software separately, or at least buy switches with a choice of software. The idea is to make switches like servers, giving users more flexibility. Baldonado admits open networking isn’t that mature yet but says it’s making progress.

Edgecore Networks is now selling a switch based on the Wedge 100 design. (Its parent company, Taiwan’s Accton Technology, manufactures the units that Facebook uses.) Several optical transceiver vendors also are interested in making parts for the switch, Facebook says.

27/10/2016

Internet version 2

I believe the world would be willing to pay for a new internet, one in which the minimum identity verification is two-factor or biometric. I also think that, in exchange for much greater security, people would be willing to accept a slightly higher price for connected devices — all of which would have embedded crypto chips to assure that a device or person’s digital certificate hadn’t been stolen or compromised.
This professional-grade internet would have several centralized services, much like DNS today, that would be dedicated to detecting and communicating about badness to all participants. If someone’s computer or account was taken over by hackers or malware, that event could quickly be communicated to everyone who uses the same connection. Moreover, when that person’s computer was cleaned up, centralized services would communicate that status to others. Each network connection would be measured for trustworthiness, and each partner would decide how to treat each incoming connection based on the connection’s rating.
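One way to picture the trust-rating idea is a lookup against a central reputation service before a connection is accepted. Everything in the sketch below (the service URL, the score scale, the threshold) is hypothetical:

import requests

REPUTATION_SERVICE = "https://reputation.example.net/score"   # hypothetical service
MIN_ACCEPTABLE_SCORE = 0.7                                     # hypothetical local policy

def connection_allowed(peer_cert_fingerprint: str) -> bool:
    # Ask the (hypothetical) central service how trustworthy the peer's
    # device certificate currently is, then apply a local policy threshold.
    resp = requests.get(REPUTATION_SERVICE,
                        params={"fingerprint": peer_cert_fingerprint},
                        timeout=5)
    resp.raise_for_status()
    score = resp.json().get("score", 0.0)   # e.g. 0.0 (compromised) to 1.0 (clean)
    return score >= MIN_ACCEPTABLE_SCORE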
This would effectively mean the end of anonymity on the internet. For those who prefer today’s (relative) anonymity, the current internet would be maintained.
But people like me and the companies I’ve worked for that want more safety would be able to get it. After all, many services already offer safe and less safe versions of their products. For example, I’ve been using Internet Relay Chat (IRC) for decades. Most IRC channels are unauthenticated and subject to frequent hacker attacks, but you can opt for a more reliable and secure IRC. I want the same for every protocol and service on the internet.
I’ve been writing about the need for a more trustworthy internet for a decade-plus. The only detail that has changed is that the internet has become increasingly mission-critical — and the hacks have grown much worse. At some point, we won’t be able to tolerate teenagers taking us offline whenever they like.

23/10/2016

An IoT botnet is partly behind Friday’s massive DDoS attack

Malware that can build botnets out of IoT devices is at least partly responsible for a massive distributed denial-of-service attack that disrupted U.S. internet traffic on Friday, according to network security companies.
Since Friday morning, the assault has been disrupting access to popular websites by flooding a DNS service provider called Dyn with an overwhelming amount of internet traffic.
Some of that traffic has been observed coming from botnets created with the Mirai malware that is estimated to have infected over 500,000 devices, according to Level 3 Communications, a provider of internet backbone services.
About 10 percent of those Mirai-infected devices are participating in Friday’s DDoS attack, said Dale Drew, the company’s chief security officer, in a Periscope livestream. However, other botnets are also taking part in the attack, he added.
DDoS attacks and botnets are nothing new. However, the Mirai malware appears especially worrisome because of its sheer power. An attack last month on the website of cybersecurity journalist Brian Krebs managed to deliver 665Gbps of traffic to Krebs’ site, making it one of the largest DDoS attacks ever recorded.
Unlike other botnets that rely on PCs, the Mirai malware targets internet-connected devices such as cameras and DVRs that have weak default passwords, making them easy to infect. Adding to the worry is that the developer behind Mirai has released the malware’s source code to the hacker community.
Security firm Flashpoint said it has been able to confirm that some of the Mirai-infected machines involved in Friday’s attack are DVRs.
The botnets participating in Friday’s assault, however, are separate and distinct from those used to take down Krebs’ website back in September, the security firm said.
Both Level 3 and Flashpoint have said copycat hackers have been trying to exploit the Mirai code since it was publicly released.
Friday’s attack is still ongoing, according to Dyn. Its engineers are trying to mitigate “several attacks” aimed at its infrastructure. The company has also reportedly said that the DDOS attacks are coming from “tens of millions of IP addresses at the same time.”

13/10/2016

ITSM Insight: Key facts driving market change

11/10/2016

Blockchain / Bitcoin vs traditional trading

What is blockchain? It’s the technology that runs Bitcoin. In the finance industry it could save $1.7 trillion in yearly transaction fees, and it could disrupt other industries in similar ways. This “financial explainer video” explains how it works, why it’s so safe and all the other cool stuff it might be used for.


Blockchain is a peer-to-peer software technology that protects the integrity of a digital piece of information. It was invented to create the alternative currency Bitcoin but may be used for other cryptocurrencies, online signature services, voting systems and many other applications. In this video we explain how it works and what makes it special.

Everyone uses paper money. When you get a 10 dollar bill you trust that it’s not fake. If instead someone sent you an email saying “here’s 10 dollars” you probably wouldn’t trust it. But when we “transfer money”, use an ATM or pay with a debit card, that’s pretty much exactly what we do. We’re sending money in a digital message.


To make sure no one’s cheating or sending money they don’t have, these “messages” go through a few trusted banks that keep a record of everything. They know how much money everyone has and deduct it properly for every transaction.

But this becomes expensive when there are a million transactions around the world every minute. The Economist estimates that banks charged us more than 1.7 trillion dollars to process these payments in 2014. That’s about 2% of the entire world economy! With blockchain we can save a lot of this cost because it lets us send money just like sending an email.


Instead of sending a lot of payment information through a few servers, blockchain uses thousands of personal computers on the internet. All transactions are copied and cross-checked between every computer in a system-wide accounting book called the ledger, which becomes very safe at scale.
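The integrity comes from chaining: each block records a hash of the previous block, so altering any historical entry changes every later hash and is immediately obvious to the other computers holding copies of the ledger. A minimal single-machine sketch of that chaining (ignoring networking, consensus and proof of work):

import hashlib, json, time

def make_block(transactions, prev_hash):
    # Bundle transactions with the previous block's hash and fingerprint the result.
    block = {"timestamp": time.time(), "transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def chain_is_valid(chain):
    # Recompute every block's hash and check each block points at its predecessor.
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block([{"from": "alice", "to": "bob", "amount": 10}], prev_hash="0" * 64)
second = make_block([{"from": "bob", "to": "carol", "amount": 4}], prev_hash=genesis["hash"])
chain = [genesis, second]

print(chain_is_valid(chain))                    # True
genesis["transactions"][0]["amount"] = 1000     # tamper with history...
print(chain_is_valid(chain))                    # ...and the chain no longer validates: False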

Blockchain doesn’t just allow us to create safe money online, it lets us protect any piece of digital information. This could be online Identity cards, voter ballots, contracts and many other “legal instruments”, bringing bureaucracy into the 21st century.

01/10/2016

White-box servers ease private cloud migration

For a while, it seemed like all of IT was destined to move to public cloud, changing the face of technology and eliminating many data centers. The driving force for this was, and still is, cost. Amazon Web Services is fairly inexpensive, and all its major competitors have had to match its pricing. But new ways in which the major cloud providers reduce their operational costs may suggest private cloud — not public cloud — is the more cost-effective option in the long run.
Emerging vendors called original design manufacturers (ODMs) design servers to order — called white box servers — for major cloud service providers (CSPs), such as Google. Other large companies, including major banks and scientific labs such as CERN, are examples of a widening customer base for these white box systems.
ODMs have a low-margin, high-volume sales model, more like a grocery store than a traditional high-tech vendor. The result is that CSPs pay as little as 30% of traditional server prices for the white box alternatives.
Technology has also converged on what we call commercial off-the-shelf (COTS) systems. These are very standardized designs; Intel and AMD, for example, provide motherboard reference designs to manufacturers, and all the interfaces are standardized. This minimizes the interoperability risks and maintenance requirements that we saw with proprietary computers.
All of this changes the buying profile for servers and storage across the IT industry and, most noticeably, in the cloud. Anyone buying gear for private cloud has the option to purchase low-cost, white box models from ODMs.
In addition to being low-cost, a white box server could also reduce workloads for private cloud administrators, especially when managing the configurations and software images installed upon them. This is because white boxes adhere strictly to open COTS standards. The COTS model used in white boxes enables automated orchestration, converting manually driven scripts into automated policy deployments. Without this standardization, it’s more difficult for organizations to build a hybrid cloud.
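Because the nodes are interchangeable COTS boxes, the desired state can be expressed once as a policy and applied uniformly, rather than scripted machine by machine. A toy illustration of that shift; the policy fields and apply_to_node are stand-ins, not any particular orchestration tool's API:

# Toy illustration of policy-driven deployment over identical COTS nodes.
POLICY = {
    "os_image": "ubuntu-16.04-cloud",
    "packages": ["openstack-nova-compute", "ceph-common"],
    "ntp_server": "10.0.0.1",
}

INVENTORY = ["node01", "node02", "node03"]   # every node is the same COTS design

def apply_to_node(node: str, policy: dict) -> None:
    # Placeholder: push the desired state to one node (in practice via SSH
    # or an orchestration agent); printing stands in for the real work.
    print(f"{node}: image={policy['os_image']}, packages={policy['packages']}, ntp={policy['ntp_server']}")

for node in INVENTORY:
    apply_to_node(node, POLICY)   # one policy, applied identically everywhere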
Despite their benefits, white box servers can pose a few challenges. For instance, finding vendor support for a white box server can be more difficult. However, compared with a decade or so ago, the risks are smaller and typically within the scope of most admin teams. In addition, white box servers could require organizations to work with multiple vendors — such as different providers for drives and adapter cards — but COTS standards help with this.
Open source software that runs in clouds is flexible when it comes to hardware. It’s unusual for a COTS configuration to not run OpenStack, for example, or Ceph. The open source software vendors or communities provide guidance on minimum hardware configurations to help reduce complexity.
Many organizations wonder whether they should use designs approved by the Open Compute Project (OCP). While the OCP is a good way to see what Facebook and others do, OCP designs may not be affordable for many enterprises.
Some of the old-guard server vendors have co-opted OCP, offering “OCP-compliant” products at prices similar to those for traditional systems. However, many white box products do as good a job or better, and may still be cheaper than OCP units. In addition, users can run any x64-compatible code on a white box server, so Linux and Windows are good to go, as are all hypervisors.
Low-cost white box servers are becoming mainstream and rapidly increasing market share. This is the future of computing, unless a radical technology shift moves things back to favor traditional vendors. It also opens up a cheaper alternative that allows private or hybrid clouds to become more cost-competitive with the public clouds.