Hazim Anzawi – حازم عنزاوي | Tech news
Introducing the 1st Public Cloud in the Middle East
Looking for scalable computing and on-demand IT infrastructure while minimizing capital expenses? Public cloud is the right solution for you! Provision your public cloud servers in minutes with eHDF’s Public Cloud hosting services.
Starts from $31 per month*
Apple is making a push for the corporate IT market through a partnership with IBM, which will develop iOS apps for its Big Data and analytics services and promote iPhones and iPads to clients.
The deal will see IBM, one of the industry’s largest providers of corporate IT, develop more than 100 apps and services exclusively for Apple’s iOS, the companies said in a statement Tuesday.
It will also tailor its cloud services for iOS and provide iOS device activation, supply and management to its customers, including the ability to lease rather than buy Apple hardware.
Apple, meanwhile, will offer new AppleCare warranty services for enterprise customers that include 24-hour assistance and on-site support.
Apple CEO Tim Cook said in a statement that the partnership opens “a large market opportunity for Apple.”
“This is a radical step for enterprise and something that only Apple and IBM can deliver,” he said.
The first IBM apps to stem from the collaboration could be out by autumn, the companies said – which is when Apple’s iOS 8 platform is expected to launch.
Initial apps will be targeted at the retail, health care, banking, travel, transportation, telecommunications and insurance industries.
The deal with IBM could provide a significant boost for Apple in the corporate sector, potentially leading to increased wide-scale deployment of its products among workers.
IBM claims three-quarters of Fortune 100 companies use its enterprise software, and it says it has enjoyed top market share for five straight years in IDC’s ranking of the world’s biggest enterprise social software makers.
Big Switch Networks this week is unveiling an SDN controller designed to bring Google-like hyperscale networking to enterprises.
The software, called Cloud Fabric, runs on bare metal switches and allows users to build cloud “pods” where racks of servers, switches and applications can be grouped logically to serve a specific function, such as an OpenStack private cloud (http://www.networkworld.com/article/2172063/cloud-computing/gartner-analyst-slams-openstack–again.html), Hadoop big data cluster, or a virtual desktop infrastructure.
This is in contrast to operating, provisioning and monitoring each individual business application, such as SAP or Oracle or Microsoft Office, as its own silo.
Cloud Fabric works with Big Switch’s SwitchLight operating system on both spine and leaf switches. Optional components are an OpenStack plugin and SwitchLight virtual switches on leaf nodes.
SwitchLight speaks OpenFlow 1.3 to the Cloud Fabric controller. The controller also supports high availability and zero-touch provisioning features, Big Switch says. It supports Microsoft Hyper-V, KVM, VMware and Open vSwitch-enabled hypervisors.
Big Switch believes next-generation data center switching – white box hardware and software, fabrics, virtual switching and overlays – will gradually supplant traditional top-of-rack and aggregation switching in the data center. Though it represented less than $1 billion of the total $8 billion-plus data center switching market in 2013, next-generation data center switching will be about a $6 billion piece of the overall $13 billion market by 2019, the company predicts.
“There’s a big shift going on in the market,” says Big Switch CEO Doug Murray. “We’re at an inflection point where the market is going to shift over the next three years.”
Cloud Fabric, which is in 10 beta sites, will ship this quarter.
Big Switch hopes Cloud Fabric enjoys the same momentum as its Big Tap monitoring application. Bookings for that product doubled in the second quarter compared to the first, with wins in financial, federal government, public sector service provider and high tech verticals in three continents, says Murray.
Big Tap 4.0, the company’s newest release, has added support for Accton and Dell hardware, and now allows users to share tools between data centers, Murray says.
Big Switch also landed its first million dollar customer in the first quarter, Murray says, with a deployment across 16 data centers. And all this progress before its resale partnership with Dell commences in the second half of the year.
And the company has expanded and added new partners, some of them replacing ones that have departed, Murray says. Among the new partners are HortonWorks for enterprise-grade Hadoop deployments; FireEye for VM-based security; Accton; Telchemy for video analytics; and iwNetworks for bare metal Ethernet switches.
SOAP (Simple Object Access Protocol) is an XML-based protocol that allows a program running on one operating system to communicate with a program running on another operating system over the Internet.
A group of vendors including Microsoft, IBM, Lotus and others created this XML-based protocol, which lets you activate applications, or objects within an application, across the Internet. SOAP codifies the practice of using XML and HTTP to invoke methods across networks and computer platforms.
With distributed computing and web applications, a request for an application comes from one computer (the “client”) and is transmitted over the Internet to another computer (the “server”). There are many ways of doing this, but SOAP makes it easy by using XML and HTTP – which are already standard web formats.
Web applications are where SOAP really comes into its own. When you view a web page, you are using a web browser to query a web server and view a web page. With SOAP, you would use your client application to query a server and run a program. You can’t do that with standard web pages or HTML.
Right now, you might use online banking to access your bank accounts. My bank has the following options:
One of the reasons that these three functions are separated is because they reside on different machines. That is, the program that runs online bill paying is on one computer server, while the credit card and other account applications are on other servers. With SOAP, this doesn’t matter.
You might have a Java method that gets an account balance called
With standard web based applications, that method is only available to the programs that call it and are on the same server. Using SOAP, you can access that method across the Internet via HTTP and XML.
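To make the idea concrete, here is a minimal sketch of what a SOAP call looks like on the wire. The method name `getAccountBalance`, the namespace and the endpoint are all hypothetical, chosen to match the banking example above; the point is simply that the request is ordinary XML sent over an ordinary HTTP POST.

```python
import http.client

def build_envelope(account_id):
    """Build a SOAP 1.1 envelope invoking a hypothetical getAccountBalance method."""
    return (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        '<soap:Body>'
        '<getAccountBalance xmlns="http://example.com/bank">'
        f'<accountId>{account_id}</accountId>'
        '</getAccountBalance>'
        '</soap:Body>'
        '</soap:Envelope>'
    )

def call_soap(host, path, envelope):
    """Send the envelope as a plain HTTP POST with a SOAPAction header."""
    conn = http.client.HTTPConnection(host)
    conn.request("POST", path, body=envelope, headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "http://example.com/bank/getAccountBalance",
    })
    return conn.getresponse().read()

print(build_envelope("12345"))
```

Because the transport is just HTTP and the payload is just XML, any client that can build this envelope can invoke the remote method, regardless of the language or platform the server side is written in.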
There are many possible applications for SOAP; here are just a couple:
One thing to consider when looking into implementing SOAP on your business server is that there are many other ways to do the same thing that SOAP does. But the number one benefit you’ll gain from using SOAP is its simplicity. SOAP is just XML and HTTP combined to send and receive messages over the Internet. It is not constrained by the application language (Java, C#, Perl) or the platform (Windows, UNIX, Mac), and this makes it much more versatile than other solutions.
Security information and event management (SIEM) is an approach to security management that seeks to provide a holistic view of an organization’s information technology (IT) security. SIEM combines SIM (security information management) and SEM (security event management) functions into one security management system. The acronym is pronounced “sim” with a silent e.
The underlying principle of a SIEM system is that relevant data about an enterprise’s security is produced in multiple locations, and that being able to look at all of that data from a single point of view makes it easier to spot trends and see patterns that are out of the ordinary.
SIEM systems collect logs and other security-related documentation for analysis. Most SIEM systems work by deploying multiple collection agents in a hierarchical manner to gather security-related events from end-user devices, servers, network equipment — and even specialized security equipment like firewalls, antivirus or intrusion prevention systems. The collectors forward events to a centralized management console, which performs inspections and flags anomalies. To allow the system to identify anomalous events, it’s important that the SIEM administrator first creates a profile of the system under normal event conditions.
At the most basic level, a SIEM system can be rules-based or employ a statistical correlation engine to establish relationships between event log entries. In some systems, pre-processing may happen at edge collectors, with only certain events being passed through to a centralized management node. In this way, the volume of information being communicated and stored can be reduced. The danger of this approach, however, is that relevant events may be filtered out too soon.
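A rules-based correlation engine of the kind described above can be sketched in a few lines. This is an illustrative toy, not any vendor's implementation: the event format and the failed-login threshold are assumptions made up for the example.

```python
from collections import defaultdict

# Hypothetical rule: flag any source IP that produces 5 or more
# authentication failures in the batch of collected events.
FAILED_LOGIN_THRESHOLD = 5

def correlate(events):
    """Group auth-failure events by source IP and return flagged sources."""
    failures = defaultdict(int)
    for event in events:
        if event["type"] == "auth_failure":
            failures[event["source_ip"]] += 1
    return [ip for ip, count in failures.items()
            if count >= FAILED_LOGIN_THRESHOLD]

events = ([{"type": "auth_failure", "source_ip": "10.0.0.7"}] * 6
          + [{"type": "auth_success", "source_ip": "10.0.0.8"}])
print(correlate(events))  # ['10.0.0.7']
```

A real SIEM would apply many such rules (or a statistical baseline) against events streamed from collectors, but the shape is the same: aggregate, compare against a profile of normal behavior, and flag the outliers.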
SIEM systems are typically expensive to deploy and complex to operate and manage. While Payment Card Industry Data Security Standard (PCI DSS) compliance has traditionally driven SIEM adoption in large enterprises, concerns over advanced persistent threats (APTs) have led smaller organizations to look at the benefits a SIEM managed security service provider (MSSP) can offer.
New firewall appliance combines Palo Alto’s Panorama central management platform with ESXi VMs by plugging into the NSX virtual network controller.
When VMware launched NSX earlier this year, it promised a network controller — extensible using published APIs — that allows higher level network services such as firewalls, load balancers and application accelerators to plug in at any point in a virtual network. VMware touted more than 20 partners working on NSX integration. The vision sounded great, but given VMware’s inclusion of several network services in NSX including firewall, load balancing and VPN termination, it was easy to assume that the promised virtual ecosystem was DOA.
Palo Alto Networks has countered that assumption with its firewall appliance for NSX. Selling optional services against a built-in feature is never an easy task, but Palo Alto has taken up the challenge.
The new appliance marries Palo Alto’s next-generation firewall, in virtual appliance form, and Panorama central management platform with ESXi VMs by plugging into the NSX virtual network controller. It’s a logical extension of Palo Alto’s existing VM-Series, moving the virtual firewall from the VM to the hypervisor and plugging directly into the NSX vSwitch to access all hosts on a given system.
According to Danelle Au, solutions marketing director at Palo Alto Networks, tighter integration into the network control plane via NSX allows better tracking of VM movement, more granular control and easier service insertion to existing VMware infrastructure.
The NSX-Panorama integration means that Palo Alto virtual firewalls now take orders from two masters: the NSX controller for deployment and network insertion (addresses, VLANs, etc.) and the Panorama console for security policy. By using the same security management system as Palo Alto’s hardware appliances, while also being integrated with the NSX network controller, the design facilitates keeping security and server management duties separated, according to Au. Security admins continue to set policy, define rules and monitor events from their existing management console while server admins can deploy and move VMs without worrying about changing network or security configurations.
Key to the automation and dynamic updating of security policy configuration is the use of what Palo Alto calls containers that set security policy based on application, user groups or content. As Au wrote in this blog post, the firewall’s “dynamic address groups feature can now populate application container context directly from VMware so security policies will incorporate the latest attributes of virtual machines.”
Au added that communication between the NSX controller and Panorama security management system results in a fully automated deployment. Once a VM admin provisions applications in the appropriate container, the respective network and security controllers take care of the configuration details. The system also means applications can migrate to different hosts without the server admin having to worry about security implications.
FireEye, ranked the fastest growing communications/networking company in North America on Deloitte’s 2013 Technology Fast 500™, is transforming the IT security landscape to combat today’s advanced cyber attacks and we want you to be part of our team.
FireEye’s disruption in the IT security industry has been all over media outlets such as in BusinessWeek, Bloomberg TV, The Wall Street Journal, Fox News, and several others. A leader in advanced technology, FireEye has received the Wall Street Journal Technology Innovation Award as well as the JPMorgan Chase Hall of Innovation Award. FireEye has also been recognized as one of the top 5 IPOs of 2013 by Wall Street Journal.
Following the acquisition of Mandiant, FireEye is now the ONLY company that can deliver a comprehensive platform to detect, resolve, and prevent advanced attacks on a global basis. FireEye is now the go-to company for some of the largest enterprises and government agencies across the globe.
FireEye has invented a purpose-built, virtual machine-based security platform that provides real-time threat protection to enterprises and governments worldwide against the next generation of cyber attacks. These highly sophisticated cyber attacks easily circumvent traditional signature-based defenses, such as next-generation firewalls, IPS, anti-virus, and gateways. The FireEye Threat Prevention Platform provides real-time, dynamic threat protection without the use of signatures to protect an organization across the primary threat vectors and across the different stages of an attack life cycle. The core of the FireEye platform is a virtual execution engine, complemented by dynamic threat intelligence, to identify and block cyber attacks in real time.
The Mandiant Platform connects the dots between our customer’s network security solutions and their endpoints. It equips customers to hunt for adversaries by identifying evidence of compromise and forensic artifacts on their endpoints left behind by attacker activity. Customers apply Mandiant’s intelligence directly and also integrate to SIEM, log management and “next generation” network security solutions to automatically identify threats that are present in their network so they can stop advanced attacks when they are just beginning to unfold.
FireEye has over 1,500 customers across more than 40 countries, including over 100 of the Fortune 500.
The consultant’s primary role is to assist your organization with certain areas of your inclusiveness work. While the consultant may act as an educator, a catalyst for deeper change, a resource, or a facilitator, the leadership of the process remains within your organization. The Inclusiveness Committee, staff, board members, and executive director have the power, and the greater responsibility, to lead the process of becoming more inclusive.
There are generally four categories of work for which you may want to hire the services of a consultant or a consulting team:
1. Overall Guidance: The consultant works with the Inclusiveness Committee throughout the inclusiveness initiative to plan and execute the initiative and acts as a meeting or process facilitator.
2. Information Gathering: The consultant designs and gathers data during the information-gathering phase. Consultants can be particularly useful in collecting qualitative data through interviews and focus groups, since their neutral position with the organization can lead to more honest responses from internal and external stakeholders.
3. Cultural Competency/Diversity Training: The consultant conducts diversity/inclusiveness trainings to create a more inclusive culture and help stakeholders become more aware of how the organization may be creating an unwelcome atmosphere for diverse communities. In this instance, you may want to use one consultant or a consulting team for all of the trainings or you may wish to bring in content specialists for different trainings and use an "integrating facilitator." An integrating facilitator works with you throughout your process, helping to provide continuity between trainings.
4. Evaluation: The consultant creates an evaluation plan to measure the efficacy of trainings and progress of your inclusiveness initiative.
Upon reviewing proposals made in response to your Request for Proposals (RFP), and negotiating with the consultant you select, you may need to adjust the role you have defined for your consultant.
The role that your consultant plays can be a combination of the above, or just one – it depends on your organization’s needs and the consultant that you select. Consultants may be brought in for day-long sessions, for multiple trainings, or to assist you with particular topics. The time you spend with your consultant – if you hire one – and the work the consultant does, depend upon your organization’s specific needs and budget.
eHosting DataFort (eHDF), the region’s leading managed hosting and cloud infrastructure services provider and member of TECOM Investments, has been chosen by Dubai Aluminium PJSC (DUBAL), which operates one of the largest single-site aluminium smelters in the world, to host its offsite Disaster Recovery (DR) and Business Continuity Management (BCM) service as a fall back option in the event of downtime or disruption.
DUBAL (an operating subsidiary of Emirates Global Aluminium or EGA) has a production capacity of more than one million tonnes per annum and serves over 325 customers in at least 60 countries worldwide. The organisation has huge ICT requirements, which are met by its in-house IT experts who manage a large IT infrastructure including an in-house data centre.
Given its massive operational requirements, maintaining business functionality during any potential crisis scenario is critical for DUBAL. The implications that any disruption could have on its long-term business growth prompted DUBAL to consider an off-site BCM and disaster recovery service. The company identified Enterprise Resource Planning (ERP) and email as the most critical applications for which a disaster recovery site at a third party data centre was imperative.
eHDF is providing DUBAL a disaster recovery site at its data centre along with managed services such as managed security services, network infrastructure and management services, monitoring services and systems management services.
DUBAL’s ISO20000 compliant data centres required the security and services of a certified managed services company. eHDF adheres to international standards and best practices, and has been awarded several ISO certifications.
Disaster recovery and BCM services are high on every Chief Information Officer’s IT planning list. According to a recent BCM survey conducted by eHDF in collaboration with Continuity and Resilience, the Business Continuity Institute (BCI) and DNV GL Business Assurance, 22% of the respondents invest between US$250,000 to US$1 million every year to implement and sustain their BCM program.
Ahmad M Almulla, Senior Vice President, Information Technology, said: “We are pleased to work with eHDF, who implemented a comprehensive disaster recovery and BCM solution as per our specifications, thus demonstrating a strong understanding of DUBAL’s requirements. Both the DUBAL and eHDF teams worked together seamlessly to achieve the defined objectives and even conducted live tests of the system to ensure everything is in place. eHDF’s technical competency has ensured smooth delivery of the solution.”
Yasser Zeineldin, CEO of eHosting DataFort, said: “We appreciate the trust that DUBAL has placed in us. As an experienced service provider for BCM and disaster recovery services, we have seen a significant demand for disaster recovery services over the last few years. It is imperative for organisations to have a backup plan in place to cope with any incidents that can impact their bottom line.”
With world-class data centres, resilient and scalable infrastructure and round-the-clock managed operations, eHosting DataFort has established itself as a market leader in the field of managed hosting, Disaster Recovery and cloud infrastructure services.
This photo made available by G Data Software, dated June 16, 2014, shows G Data Software spokesman Thorsten Urbanski holding a Chinese-made Star N9500 smartphone. G Data says it found malicious code hidden deep in the proprietary software of the Star N9500 when it ordered the handset from a website late last month. The find is the latest in a series of incidents in which smartphones have arrived on customers’ doorsteps preloaded with malicious software. (AP Photo/G Data Software, Frank Born)
Just as computing power and primary storage are becoming virtual shared resources, backup capacity is also starting to be pooled, with promises of easier management.
The latest vendor that wants to turn many backup components into one is Hewlett-Packard, which introduced HP StoreOnce Federated Catalyst at its Discover conference in Las Vegas on Monday. The new software lets enterprises take several HP StoreOnce 6500 backup appliances, each of which may have several backup stores within it, and manage them all as one virtual store of data. Later, HP will introduce it for the older StoreOnce 6200 line and may extend the technology to its software-based StoreOnce Virtual Storage Appliance and let users extend the backup pool to cloud resources.
That might make life easier for the IT department at BlueShore Financial, a financial institution in British Columbia, Canada, that uses the StoreOnce 6200.
“Right now it’s a lot of manual work to manage all these individual Catalyst stores,” said Ryan Burgess, manager of technical infrastructure at BlueShore. The company runs a lean IT shop with just 12 employees, none a storage specialist. Burgess is investigating Federated Catalyst as a potential time-saver and a way to spread data from various applications across all the nodes in a 6200. Right now, data streams from VMware and Microsoft Exchange and SQL Server all have to go into their own data stores.
“From what I understand and what I expect, it will be giving us that easier management, but also better utilization of the assets we have,” Burgess said. That could help prevent one node from getting overloaded and forcing BlueShore to buy more hardware, he said.
StoreOnce Federated Catalyst is designed to do that work itself. The software lets IT managers pool the capacity of multiple storage nodes — initially, four StoreOnce 6500 nodes — to form what appears as one large data store. Within that pool, an enterprise can add new nodes and add capacity to existing nodes. The software also extends data deduplication across the whole store, so with 860TB of usable capacity and a 20-to-1 deduplication ratio, Federated Catalyst can encompass a total logical backup capacity of 17PB.
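The capacity figure above follows directly from the stated deduplication ratio; a quick check of the arithmetic:

```python
# 860 TB of usable capacity at a 20-to-1 deduplication ratio
# yields roughly 17 PB of logical backup capacity (decimal units).
usable_tb = 860
dedupe_ratio = 20
logical_pb = usable_tb * dedupe_ratio / 1000  # TB -> PB
print(logical_pb)  # 17.2
```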
Typical backup systems on the market now can only perform deduplication across one node, according to HP, while the StoreOnce 6500 has been able to do so on two controllers. Though Federated Catalyst allows for deduplication across four 6500 controllers to start with, HP could expand that to eight, the maximum number of nodes in a cluster, said Craig Nunes, vice president of marketing for HP Storage.
All told, Federated Catalyst can cut management overhead by 75 percent, according to HP.
List prices for the StoreOnce 6500 start at $375,000. Federated Catalyst will be available starting in July, with prices starting at $37,500 per couplet, or pair of nodes configured for failover and autonomic restart.