Archive for ‘Misc’


Passport Won’t Stamp BlackBerry For Gains

Struggling smartphone company BlackBerry has released its response to the “phablet” craze: the Passport. Designed like the boxy object for which it is named — “the universal symbol of mobility,” BlackBerry said — the Passport is a 4.5-inch square that BlackBerry touts as having better resolution than Apple’s iPhone 6 Plus or Samsung’s Galaxy S5. Yet for all the superlatives included in the Passport announcement, early Wall Street reviews have been quick to point out that the Passport might have trouble taking off.

Based on the details about the new phone that BlackBerry released Wednesday afternoon, the Passport is aiming to meet (and surpass) its competitors feature by feature while also amending the company’s previous missteps: Siri, meet BlackBerry Assistant. Struggling with that iOS 8 battery life? BlackBerry says its new phone has “the largest [battery] among the top selling smartphones and phablets and, when tested against a very active user, provides up to 30 hours of mixed use.” And for BlackBerry devotees who were shocked by the Z10’s missing keyboard, the Passport has brought the keyboard back.

“The BlackBerry Passport was created to drive productivity and to break through the sea of rectangular-screen, all-touch devices,” BlackBerry chairman and CEO John Chen said in a statement.

“We believe the square shape of the screen could make the vast store of Android apps now available via the Amazon app store a disappointing experience as most were designed for a longer screen implying they would need to be redesigned,” wrote Citi analyst Ehud Gelblum in a note on Wednesday.

Gelblum also criticized BlackBerry’s pricing plan, which calls for the Passport to be available in the US (via Amazon) and in Canada (via Telus) for a “6-day exclusive for $200” with a two-year contract, but that price will rise to $250 on October 1. Without a contract, the Passport will cost $599, but Gelblum said that BlackBerry has plans to raise that price, a move he called “an odd and not very customer-friendly decision.” As it is, the Passport is only $50 cheaper than the iPhone 6; it’s more competitive against the iPhone 6 Plus, which costs $150 more than the Passport.

The Passport announcement was not enough to move Citi’s price target or rating for BlackBerry; the investment firm reiterated its “sell” call and $8 price target.

Unfortunately for Chen, early reviews indicate that the Passport might be too square to be hip.


RISO HC ComColor Printer Introduction


Gartner’s Magic Quadrant for Solid-State Arrays 2014

Solid-state arrays provide performance levels an order of magnitude faster than disk-based storage arrays at competitive prices per GB, enabled by in-line data reduction and lower-cost NAND SSD. This Magic Quadrant will help IT leaders better understand SSA vendors’ positioning.

This Magic Quadrant covers SSA vendors that offer dedicated SSA product lines positioned and marketed with specific model numbers, which cannot be used as, upgraded to or converted into general-purpose or hybrid storage arrays. SSA is a new subcategory of the broader external controller-based (ECB) storage market.

Considering the potential disruptive nature of SSAs on the general-purpose ECB disk storage market, Gartner has elected to report only on vendors that qualify as an SSA. We do not consider solid-state drive (SSD)-only general-purpose disk array configurations in this research. To meet these inclusion criteria, SSA vendors must have a dedicated model and name, and the product cannot be configured with hard-disk drives (HDDs) at any time. These systems typically (but not always) include an OS and data management software optimized for solid-state technology.

Vendor Evaluation Criteria

Ability to Execute

Product or Service evaluates the capabilities of the products or solutions offered to the market.
Overall Viability includes an assessment of the organization’s financial health.
Sales Execution/Pricing looks at the vendor’s capabilities in all presales activities and the structure that supports them.
Market Responsiveness/Record focuses on the vendor’s capability to respond, change direction, be flexible and achieve competitive success.
Marketing Execution directly leads to unaided awareness and a vendor’s ability to be considered by the marketplace.
Customer Experience looks at a vendor’s capability to deal with postsales issues.
Operations considers the ability of the organization to meet its goals and commitments, including its skills and experience.

Inclusion and Exclusion Criteria

To be included in the Magic Quadrant for SSA, a vendor must:

Offer a self-contained, SSD-only system that has a dedicated model name and model number (see Note 1).
Have an SSD-only system. It must be initially sold with 100% SSD and cannot be reconfigured, expanded or upgraded at any point with any form of HDDs within expansion trays via any vendor’s special upgrade, specific customer customization or vendor product exclusion process into a hybrid or general-purpose SSD and HDD storage array.
Sell its product as a stand-alone product, without the requirement to bundle it with other vendors’ storage products in order to be implemented in production.
Provide at least five references that Gartner can interview. There must be at least one client reference from Asia/Pacific, EMEA and North America, or the two geographies within which the vendor has a presence.
Provide an enterprise-class support and maintenance service, offering 24/7 customer support (including phone support). This can be provided via other service organizations or channel partners.
Have established a notable market presence, as demonstrated by the number of petabytes sold, the number of clients or significant revenue.

Completeness of Vision

Market Understanding looks at the technology provider’s capability to understand buyers’ needs and respond appropriately.
Marketing Strategy considers how clearly and consistently the vendor’s solution message is communicated.
Sales Strategy considers the strategy for selling products that uses the appropriate network of direct and indirect sales.
Offering (Product) Strategy looks at a vendor’s product road map and architecture.
Business Model assesses a vendor’s approach to the market.
Vertical/Industry Strategy measures the vendor’s strategy to direct resources and skills toward the needs of individual vertical markets.
Innovation measures a vendor’s ability to move the market into new solution areas.
Geographic Strategy measures the vendor’s ability to direct resources, skills and offerings to meet the needs of geographies outside its home market.


EMC has two SSD-based products in the SSA market: (1) the XtremIO scale-out technology, which EMC acquired in May 2012; and (2) the VNX-F array, which is based on the traditional general-purpose VNX unified storage array and exploits the proven VNX HDD-based hardware controllers and software. Both offerings are positioned and sold as dedicated SSAs. EMC has a large and relatively loyal installed base for the XtremIO products. EMC has a significant and broad, but overlapping, SSD product portfolio. The portfolio will be enhanced by EMC’s acquisition of DSSD and its technology, which will initially be positioned as an extreme performance networked appliance. EMC has been a vocal visionary concerning SSD for more than a decade, but its market-leading messaging has outpaced some of its product introductions. Compared with competitor SSAs, the XtremIO product was late to market and became generally available only in November 2013. With a concrete offering, XtremIO, together with VNX-F, has enabled EMC to grab the No. 4 market share position in the SSA segment for 2013. EMC has gained traction for the XtremIO product, and has continued its momentum through 1H14 via concerted sales efforts and competitive pricing.

  • EMC has a highly successful global sales force, exceptional marketing, and highly rated support and maintenance capability.
  • Large and loyal EMC customers have been provided with early products and attractive competitive introductory pricing. These customers can expect beneficial purchase terms.
  • XtremIO offers inclusive software pricing, and customers do not have to budget, track or purchase extra licenses when capacity is upgraded.
  • EMC is offering XtremIO at competitive prices to its installed base, but information such as list prices, discount levels and independent performance benchmarks is not publicly available. To avoid hidden future costs, customers should lock in all XtremIO purchases and upgrades at these competitive introductory prices.
  • VNX-F includes data reduction in the base system price. Unlike XtremIO and most competitors’ offerings, VNX-F still uses a traditional licensing structure, which requires customers to pay additional support and license charges for other upgrades and extra features (such as data protection suite).
  • While XtremIO’s product integration with ViPR has been announced, it is not currently available. Given the product overlap between the XtremIO and VNX-F products, operational and administration complexity is an issue.


HP is one of the late entrants into the SSA market, with availability of its HP 3PAR StoreServ 7450 model in June 2013. While HP is relatively new to the SSA market with its own product, it had an OEM partnership with Violin Memory, which ended in late 2011 in favor of HP’s organic approach. The 3PAR storage architecture is sufficiently flexible to exploit SSD media, complete with purpose-built SSA features. Compared with EMC and IBM, HP has been less aggressive in marketing and selling to, and generally mining, its installed base. HP has almost entirely leveraged its 3PAR hardware architecture and management platform, but has made some important enhancements centered on efficiently maximizing the resident SSD technology. This affords HP a cost-effective approach, as well as robust reliability that can be supported with solid warranty terms, including a five-year SSD warranty and six 9s (99.9999%) of availability guarantees for four-node deployments.

  • HP has leveraged its hardware and storage software design, which are sufficiently modern and flexible to accommodate the nuances of solid-state technology and to implement new data reduction services.
  • HP 3PAR StoreServ 7450 offers a proven compatibility matrix for a broad variety of application workloads, cost-effective thin provisioning, and a familiar interface for customers, as well as a scale-out architecture.
  • HP has an extensive channel presence, global sales ability and a substantial customer base that is complemented with worldwide support and service capabilities.
  • Customers need to request more evidence of ROI to distinguish HP’s product functionality and capability from those of other SSAs and general-purpose arrays.
  • Despite the familiarity gained by HP’s leveraging its storage architecture, its media reporting abilities need further refinement.
  • Some client references have had limited visibility into HP’s SSA product strategy, and HP and its partners have limited mind share in the market.


IBM acquired Texas Memory Systems (TMS) in September 2012, and subsequently announced in April 2013 that it would invest $1 billion into all aspects of flash (SSD) storage technology. IBM has leveraged its storage technology, specifically Storwize compression software and the IBM SAN Volume Controller (SVC) layer, which has been placed on top of the FlashSystem array to provide high-level data services. TMS had a successful track record of producing low-latency storage using DRAM for over 30 years, and using flash-based storage for nearly 10 years. The IBM-engineered FlashSystem products are available as a stand-alone storage enclosure — the FlashSystem 840 — which has limited software features. In March 2014, IBM made available the FlashSystem V840, which is the storage enclosure combined with the FlashSystem control enclosure, to provide data services such as compression, mirroring, thin provisioning and replication. This usage of the SVC for the FlashSystem control enclosure follows a pattern within IBM’s storage division, where the SVC is placed on top of many IBM products (such as the DS8000, Storwize V7000 and XIV storage arrays) to provide a common and interoperable platform abstracting the diverse products beneath it, an approach that has internal cost and reuse advantages. However, with such a diverse number of devices, the complexity of managing compatibility, fixes, and software and hardware regression testing between an exponentially increasing number of software and hardware platforms increases dependencies among product lines. Basic storage controller features — such as redundant array of independent disks (RAID), hot code load, controller failover, port failover, caching and administration software — are duplicated in the storage enclosure (FlashSystem 840) and the control enclosure (SVC). Compared with competitors, IBM charges separately for higher-level features such as compression.

  • Within the SSA market, the TMS platform has one of the longest proven track records with respect to array performance.
  • There is a quick and short learning curve for IBM Storwize V7000 and SVC customers, because the same SVC-based management interface is used on many other key IBM storage product lines.
  • IBM has successfully exploited its system company advantage and has cross-sold the FlashSystem into its customer base through direct and indirect channel incentives and bundling discounts with SVC.
  • Compared with the FlashSystem V840, the FlashSystem 840 has limited data services and will require IBM or non-IBM virtualization products for data services.
  • The FlashSystem 840 is dependent on the SVC product line to provide data services, such as compression, thin provisioning, snapshots and mirroring, among other features, for additional costs.
  • Clients starting with the FlashSystem 840 that later decide they require extra storage features will need to purchase extra SVC-based hardware. This increases the operating expenditure (opex) considerations (such as wiring, power, cooling and physical rack space requirements) compared with the FlashSystem 840 by itself.


NetApp announced the first EF array model in February 2013, and updated it with the EF550 in November 2013, helping continue its product momentum. Compared with smaller SSA startups, NetApp was a late entrant to the SSA market. However, NetApp was able to reuse existing products and technology, as the EF Series is based on the mature E-Series hardware and the SANtricity platform acquired with LSI’s Engenio business. This has led to an intricately managed positioning and sales challenge between the EF and FAS products. The EF Series is targeted at workloads that need high performance. Unlike the FAS Series, the EF Series is primarily sold through a direct sales force. NetApp’s customers and prospects can elect to deploy the EF Series, choose the recently productized All-Flash FAS offerings, or wait for the launch of FlashRay in late 2014. Although FlashRay has been delayed thus far, NetApp claims it will be a dedicated SSA product built from the ground up and optimized for SSD technology.

  • NetApp has a deep understanding of SSDs. Its diverse portfolio of SSD offerings features good workload analysis tools that can profile applications and match them to the right products, helping customers rightsize their environments from several perspectives: reliability, availability, serviceability, manageability and performance.
  • With the EF Series, NetApp has changed its pricing structure to an all-inclusive one, which simplifies license management during upgrades and long-term budgeting.
  • The EF Series provides support for a wide variety of high-speed interconnect protocols, including FC, Internet SCSI (iSCSI), SAS and InfiniBand.
  • With the scheduled launch of FlashRay, which has been in development for more than two years, the EF Series needs to compete for product development, marketing and sales dollars within NetApp, which raises questions about the long-term viability of the EF Series product line.
  • The EF Series uses more reliable, but more expensive, enterprise-grade SSD (single-level cell [SLC] and enterprise multilevel cell [eMLC]) and, given the lack of any data reduction capabilities, it may not be cost-competitive for diverse workloads.
  • The EF Series has a complex graphical user interface (GUI), compared with newer designs from competitors, and ONTAP/FAS customers will require new skills to operate and administer the EF Series.

Market Overview

There has been a growing demand for SSAs to meet the low-latency performance requirements of enterprise- and Web-scale applications. Over the last decade, CPU performance has improved by an order of magnitude, while the performance of HDDs within general-purpose storage arrays has stagnated, a divergence that has grown increasingly pronounced. SSAs have corrected this imbalance by temporarily satiating the demand for storage performance. This has led to the quick and successful adoption of SSAs, evidenced by the fact that total SSA revenue in 2013 was $667 million, with huge year-over-year growth of 182%.

The SSA market witnessed a considerable uptake in adoption in 2013, fueled by significant and continued investments in startups and by established vendors opting to acquire emerging vendors, although some are still pursuing an organic approach to growth. Large incumbent system vendors, such as EMC, HP, IBM and NetApp, have been focused on cross-selling their new SSA products to their established customers, thereby quickly obtaining large market shares. However, once this captive segment has been mined, a vendor’s ability to grow market share in the long term will be predicated on overall product ability, sales bandwidth and execution as it competes outside its installed base. Nearly half of the vendors in this Magic Quadrant have pursued a vertically integrated approach based upon direct procurement of SSD memory, with the remaining vendors choosing to outsource the SSD hardware to external suppliers in order to focus on an SSD-optimized data management software strategy.

Between 2010 and 2012, most customers were interested primarily in high-performance and low-latency SSAs. Given the lack of available data management features, customers tolerated the feature shortcomings in favor of raw performance. As initial storage performance issues were capably addressed, customers wanted to address multiple application workloads that required a rich data management software portfolio consisting not only of storage efficiency and resiliency technologies purpose-built for SSAs, but also the underlying SSD memory technology. During 2013, we witnessed the advent of comprehensive data management software features, such as deduplication, compression, thin provisioning, snapshots and replication technologies that, when specifically tailored to SSD, can provide compelling benefits, particularly in application workloads that see favorable data reduction ratios. This trend of innovative and comprehensive data management software on the more mature SSA platforms has continued into 2014, and has started to permeate at the application level, which will drive the industry in 2015 and beyond. It is through the synergy of cost-effective hardware and purpose-built software that the industry will see further consolidation in order to reach maturation.
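The data reduction ratios mentioned above come from two mechanisms working together: deduplication skips blocks that have been seen before, and compression shrinks what remains. As a rough, purely illustrative sketch (not any vendor's implementation), a hypothetical in-line pipeline over fixed-size blocks might look like this:

```python
import hashlib
import zlib

def reduction_ratio(blocks):
    """Estimate the combined deduplication + compression savings over a
    stream of fixed-size blocks. Illustrative only; real SSAs do this in
    firmware with far more sophisticated pipelines."""
    seen = set()
    raw = stored = 0
    for block in blocks:
        raw += len(block)
        fingerprint = hashlib.sha256(block).digest()
        if fingerprint in seen:
            continue  # duplicate block: store only a reference
        seen.add(fingerprint)
        stored += len(zlib.compress(block))
    return raw / stored if stored else 0.0

# Redundant data (VDI images, databases) reduces dramatically;
# already-compressed or encrypted data barely reduces at all.
vdi_like = [b"A" * 4096] * 8 + [bytes([i]) * 4096 for i in range(4)]
print(f"{reduction_ratio(vdi_like):.0f}:1")
```

Workloads with many repeated or highly compressible blocks are exactly those the text describes as seeing "favorable data reduction ratios."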

As this market matures and SSAs gain feature equivalency with general-purpose arrays, we expect decreasing differentiation between the two. Vendors of general-purpose array product lines and server SSD cards have created specific array models filled with SSD media. These models are tactical implementations that enable the vendors to market directly into the SSA segment while they create longer-term strategies or build purpose-built SSAs. If these general-purpose array SSD variations prove not to be a viable tactical stopgap over a longer period, these vendors may need to create dedicated SSAs. The SSA market is nonetheless distinct. It has matured from the early solid-state appliance offerings: the data services provided are equivalent to, and in certain cases (such as data reduction and administration) richer than, those of general-purpose storage arrays. SSAs have matured to levels competitive with general-purpose storage arrays in all but scale.

The average usable capacity of an SSA purchased is approximately 38TB. The preferred connection protocol is Fibre Channel: 63% of all SSAs attached to servers use Fibre Channel, and 33% use the iSCSI protocol; NFS and Common Internet File System (CIFS) attach are therefore rarely used. Online transaction processing (OLTP), analytics and server virtualization are the top three workloads that customers consider for SSAs, with virtual desktop infrastructure (VDI) the fourth most popular. While the majority of SSA deployments are for a single workload, Gartner is seeing interest in converging multiple workloads on the same product, in many cases enabled by features such as QoS.

Vendor Product Portfolios



EMC XtremIO

The XtremIO product was designed from inception to efficiently use external SSDs and currently uses robust, but more costly, enterprise SAS eMLC SSDs to deliver sustained and consistent performance. It has a purpose-built, performance-optimized scale-out architecture that leverages content-based addressing to achieve inherent balance, always-on and in-line data reduction, optimal resource utilization in its storage layout, a flash-optimized data protection scheme called XDP, and a very modern, simple-to-use graphical user interface (GUI). XtremIO arrays presently scale out to six X-Bricks, with each X-Brick having dual controllers, providing a total of 120TB of physical flash, measured before the space-saving benefits of thin provisioning, data reduction and space-efficient writable snapshots. The addition of nodes currently requires a system outage, and upgrades to some version 3 features, such as compression, will also require a disruptive upgrade, which EMC will mitigate with professional services to avoid interruptions to hosts and applications. Unlike similar EMC scale-out architectures, such as the Isilon array, which stores data across nodes and can therefore sustain a node outage, an XtremIO cluster stores each block only once on a single X-Brick; blocks of data become inaccessible if that X-Brick suffers a complete outage, such as a simultaneous loss of both controllers.
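Content-based addressing is the key idea here: each block's fingerprint determines both where the block lives and whether it is a duplicate. The toy sketch below illustrates the principle only; the class and its structure are hypothetical, not EMC's implementation.

```python
import hashlib

class ContentAddressedStore:
    """Toy content-addressed block store: the fingerprint of a block's
    content picks its location, spreading blocks evenly without a
    central allocator and deduplicating identical writes for free.
    Illustrative names; not any vendor's actual design."""

    def __init__(self, n_bricks=2):
        self.bricks = [dict() for _ in range(n_bricks)]  # one dict per "brick"

    def write(self, block):
        fp = hashlib.sha256(block).hexdigest()
        brick = self.bricks[int(fp, 16) % len(self.bricks)]
        brick.setdefault(fp, block)   # a duplicate write stores nothing new
        return fp                     # the fingerprint is the block's address

    def read(self, fp):
        return self.bricks[int(fp, 16) % len(self.bricks)][fp]

store = ContentAddressedStore()
a1 = store.write(b"same block")
a2 = store.write(b"same block")
assert a1 == a2  # identical content: one address, one physical copy
```

The sketch also makes the availability caution concrete: because each block lives in exactly one "brick," losing that brick makes its blocks unreachable.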


EMC VNX-F

The lower-capacity 46TB VNX-F is based on the existing VNX unified general-purpose disk array. It has postprocess deduplication and a relatively more complex management interface due to the requirement to support the inherited VNX architecture. We do not expect the VNX-F SSA and general-purpose VNX software and hardware architectures to diverge. However, because the same software stack must support two different models/forks with separate storage formats, fixes and firmware upgrades, new software features may take longer to become available.

HP 3PAR StoreServ 7450

The 7450 is based on the HP StoreServ general-purpose array architecture, which leverages HP’s proprietary application-specific integrated circuit (ASIC) and additional DRAM capacity. The design uses a memory-mapping look-up implementation similar to an operating system’s virtual-to-physical RAM translation, which is media-independent and lends itself well to virtual memory-mapping media such as SSDs. This attribute is particularly compelling: it uses the external SSDs efficiently by reducing the amount of overprovisioning required, and it enables a lean cost structure by leveraging consumer-grade MLC SSDs. Another benefit is maximized SSD endurance, as the granular, systemwide wear leveling extends the durability of the less reliable consumer MLC (cMLC) SSD media. Due to the media-independent memory-mapping 3PAR storage software architecture, which is implemented on both SSD and general-purpose array models, we do not expect a software bifurcation. However, with more model variations, there will be longer testing and qualification periods.
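The virtual-memory analogy can be made concrete: logical addresses resolve through a mapping table to physical pages, so every overwrite can be steered to the least-worn free page. The sketch below is a minimal illustration of that indirection and of coarse wear leveling; the names and mechanics are assumptions for exposition, not HP's implementation.

```python
class MappedFlashLayer:
    """Toy virtual-to-physical mapping layer: logical addresses resolve
    through a table to physical SSD pages, and each overwrite is steered
    to the least-worn free page (coarse wear leveling). Illustrative only."""

    def __init__(self, n_pages):
        self.table = {}                # logical address -> physical page
        self.wear = [0] * n_pages      # write count per physical page
        self.free = set(range(n_pages))
        self.pages = [b""] * n_pages

    def write(self, logical, data):
        old = self.table.get(logical)
        if old is not None:
            self.free.add(old)         # superseded page rejoins the free pool
        # steer the write to the least-worn free page (deterministic tie-break)
        page = min(self.free, key=lambda p: (self.wear[p], p))
        self.free.remove(page)
        self.wear[page] += 1
        self.pages[page] = data
        self.table[logical] = page

    def read(self, logical):
        return self.pages[self.table[logical]]
```

Repeatedly overwriting a single logical address cycles the data across all physical pages, spreading wear evenly, which is the property that lets less durable cMLC media survive longer.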

The system scales to larger capacities than most competitors, with a maximum raw capacity of 460TB when configured with 1.9TB SSDs. The array does not currently have full in-line deduplication and compression, but does exploit existing 3PAR zero block bit pattern matching and thin provisioning to improve storage efficiency. The array performs well in shared environments due to its mature multitenancy and quality of service (QoS) features. However, no file protocols are supported. Pricing of all data services is tied to the general-purpose array 3PAR model and is based on host and capacity, making it complex compared to new entrants. The 3PAR 7450 platform has an extensive and proven compatibility matrix and reliability track record that is supported with a six 9s (99.9999%) high-availability guarantee during the first 12 months.

IBM FlashSystem V840

The FlashSystem family consists of the older 700 series and the newer 800 series SSA, and all models only support block protocols. The FlashSystem 840 has more connection options with QDR InfiniBand in addition to FC, FCoE and iSCSI connections. Alternatively, the FlashSystem V840, which adds in IBM’s SVC, can scale to 320TB due to the internal FlashSystem Control Enclosure, and it inherits the broad SVC compatibility matrix but only supports FC protocols. Similarly, the V840 has richer data services in terms of QoS, compression, thin provisioning, snapshots and replication features, whereas the 840 lacks these. The 840 is designed to be a simple performance-oriented point product, whereas the V840 is for more general-purpose deployments. Both, however, lack deduplication. Additional features are provided by IBM’s SVC product, FlashSystem Control Enclosure, which has a simple-to-learn-and-operate administrative GUI.

The addition of control enclosures with the V840 increases the number of separate products and components that come with the SVC layer, which reduces performance when using real-time compression compared to the 840. The SVC control enclosure layer increases product complexity, as it has separate software levels that need to be maintained and tested across IBM storage product families. In V840-based configurations, customers need to administer and operate two devices: (1) the control enclosure; and (2) the storage enclosure, which also increases system complexity, product upgrades and problem determination.

NetApp EF Series

NetApp’s EF Series is an all-SSD version of the E-Series, a product line that NetApp inherited as part of the Engenio acquisition. There are two models in the EF product line — the EF540, which was launched in early 2013, and the EF550, which was launched in late 2013 with an SSD hardware refresh. The EF Series runs the SANtricity operating system and has its own management GUI. The product supports FC, iSCSI and InfiniBand. NetApp has made changes to the software to monitor SSD wear life and recently expanded the scalable raw capacity to 192TB. The EF Series product doesn’t support any data reduction features. Existing NetApp OnCommand suite customers cite the need for improvement in the SANtricity management console. Given the focus of the EF Series on high-bandwidth workloads, InfiniBand has been a prominent interface, but now FC implementations have become the predominant interface as end-user acceptance of the product broadened. The long-term viability of the EF Series as a product line will remain in question, with NetApp’s all-new FlashRay set for launch toward the end of the year with potentially better data services and manageability.


The Dangers of Dark Data and How to Minimize Your Exposure


As “dark fiber” is to the telecommunications industry, so “dark data” is to many businesses and organizations. These vast pools of untapped, largely unprotected data simply sit there, doing not much of anything for the bottom line.

Isaac Sacolick’s Dark Data: A Business Definition describes it as “data that is kept ‘just in case’ but hasn’t (so far) found a proper usage.”


A dark fibre or unlit fibre is an unused optical fibre, available for use in fibre-optic communication.

The term dark fibre was originally used when referring to the potential network capacity of telecommunication infrastructure, but now also refers to the increasingly common practice of leasing fibre optic cables from a network service provider, or, generally, to the fibre installations not owned or controlled by traditional carriers.

Dark Data Holds Unfulfilled Promises But Also Poses Dark Threats

Most discussions of dark data tend to focus on its potential value and utility to an organization. Indeed, for those outfits willing to expend resources (money, tools and time) to develop and exploit the information and value locked up inside dark data, such potential is undoubtedly attractive. This also explains why many organizations are reluctant to part with dark data, even if they have no plans to put it to work on their behalf, either in the near term or further down the planning horizon.

As with many potentially rewarding and intriguing information assets, organizations must also be aware that the dark data they possess – or perhaps more chillingly, the dark data about them, their customers and their operations that’s stored in the cloud, outside their immediate control and management – can pose risks to their continued business health and well-being.

Such risks depend on the kinds and quality of data that a determined investigator might be able to glean from a collection of dark data made available to them. Given the kinds of data that most organizations collect, those risks might include some or all of the following:

  • Legal and regulatory risk. If data covered by mandate or regulation – such as confidential, financial information (credit card or other account data) or patient records – appears anywhere in dark data collections, its exposure could involve legal and financial liability.
  • Intelligence risk. If dark data encompasses proprietary or sensitive information reflective of business operations, practices, competitive advantages, important partnerships and joint ventures, and so forth, inadvertent disclosure could adversely affect the bottom line or compromise important business activities and relationships.
  • Reputation risk. Any kind of data breach reflects badly on the organizations affected thereby. This applies as much to dark data (especially in light of other risks) as to other kinds of breaches.
  • Opportunity costs. Given that, by definition, the organization has decided not to invest in analyzing and mining its dark data, concerted efforts by third parties to exploit its value represent potential losses of intelligence and value based upon its contents.
  • Open-ended exposure. By definition, dark data contains information that’s either too difficult or costly to extract to be mined, or that contains unknown (and therefore unevaluated) sources of intelligence and exposure to loss or harm. Dark data’s secrets may be very dark and damaging indeed, but one has no way of knowing for sure. That uncertainty should leave no room for complacency or indifference in anyone who contemplates these risks seriously.

Mitigating Risks Posed by Dark Data

Given that dark data poses risks that are possibly both considerable and consequential, what can organizations do to manage those risks? As it turns out, there are numerous useful strategies and technologies that can provide some degree of protection against such risks, both known and unknown.

Ongoing inventory and assessment. Dark data holdings should be recognized and subject to periodic reconnaissance. They should also drive ongoing research into new tools and technologies that can help extract value from such data. Yesterday’s dark data may become a shining source of insight, thanks to new tools or analytic techniques. Somebody needs to keep an eye on such things and be ready to put them to work when the benefits of their use outweigh their costs. In addition, performing a regular inventory requires understanding where dark data resides, how it’s stored, how it’s protected and what kinds of access controls help maintain its security.

Ubiquitous encryption. Any digital asset with potential value and possible risk must be stored in encrypted form, whether on the organization’s premises and equipment or elsewhere in the cloud. No dark data should be readily accessible to casual inspection, under any circumstances. Strong encryption should make it extremely difficult for those who do manage to obtain dark data to unlock its contents, and equally strong access controls and monitoring should make it obvious who can access such information, and who has accessed it, for any purpose whatsoever.
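To make the encrypt-at-rest idea concrete, here is a minimal Python sketch. It is an illustration only, not production cryptography (a vetted library or platform facility should do this job in practice); the passphrase, the HMAC-counter keystream and the sample record are all invented for the example:

```python
import hashlib
import hmac
import os

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # A deliberately slow KDF makes brute-forcing the passphrase costly.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 200_000)

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # HMAC-based counter-mode keystream (for illustration only).
    out = b""
    counter = 0
    while len(out) < length:
        block = nonce + counter.to_bytes(8, "big")
        out += hmac.new(key, block, hashlib.sha256).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, stream: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, stream))

salt, nonce = os.urandom(16), os.urandom(16)
key = derive_key(b"correct horse battery staple", salt)
record = b"dormant customer export, 2009"

ciphertext = xor(record, keystream(key, nonce, len(record)))
assert ciphertext != record  # stored bytes are opaque without the key
assert xor(ciphertext, keystream(key, nonce, len(ciphertext))) == record
```

The point the sketch makes is the one above: whoever obtains the stored bytes without the key sees only noise, so access to the key becomes the thing to control, monitor and log.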

Retention policies and safe disposal. It’s always worth considering if and how dark data should be retained or properly disposed of, using Department of Defense-approved methods of erasure or destruction, depending on whether only the contents or both contents and media must be done away with. IT and executive management should work with organizational units or divisions to decide if dark data should be retained and, if so, how best to maintain security and manage risk. Carefully considered data retention policies can help guide and drive such decisions and should be formulated, promulgated and maintained.

Auditing dark data for security purposes. Most organizations of any size conduct periodic security audits, evaluating risks, exposures, incident response and policy. Dark data needs to be folded into this process and visited sufficiently often to manage risks of exposure as well as potential loss or harm.

Perhaps It’s Not So Dark After All?

Given the right appreciation for both potential value and possible risk, organizations can deal with dark data to balance one against the other. Only by taking stock of what’s out there, though, and only by employing a dispassionate and thorough approach to risk and exposure management, can an organization get a rope around its dark data holdings.

By digging into its dark data collections, keeping those holdings whose potential value outweighs their risks and deleting those whose risks outmatch their potential returns, an organization can be sure it’s proactively keeping what may prove worthwhile in the future while jettisoning what may poison future productivity or profitability.


The limits of Apple’s iOS 8 privacy features


The privacy improvements in the latest version of Apple’s mobile operating system provide necessary, but limited, protection to customers, experts say.

With the release of iOS 8 this week, iPhones and iPads configured with a passcode encrypt most personal data, making it indecipherable without the four-digit passcode.

By tying the encryption key to the passcode and making sure the key never leaves the device, Apple placed the burden on law enforcement to obtain a search warrant and go directly to the customer to get data from their device during an investigation.

“Unlike our competitors, Apple cannot bypass your passcode and therefore cannot access this data,” Chief Executive Tim Cook said on the company’s new privacy site. “So it’s not technically feasible for us to respond to government warrants for the extraction of this data from devices in their possession running iOS 8.”

Rival Google reacted quickly to Cook’s comments, and announced that it would turn on data encryption by default in the next version of Android. The OS has had encryption as an option for more than three years, with the keys stored on the smartphone or tablet.

On Friday, privacy experts said they supported Apple’s latest move, which they viewed as putting more control over personal data in the hands of customers.

“The fact that they (law enforcement) now have to go directly to you, and can’t do it without your knowledge, is a huge win for Apple’s customers in terms of their privacy and security,” Jeremy Gillula, staff technologist at the Electronic Frontier Foundation, said.

However, experts also said the protection had its limits, since customers often store on iCloud a lot of the data encrypted on the device, such as photos, messages, email, contacts and iTunes content.

In addition, information related to voice communications, such as call logs, is stored with the wireless carrier, as well as on the smartphone.

Once in iCloud, law enforcement or government officials investigating national security cases could legally force Apple to hand over the data.

Apple’s new privacy mechanism also has a weakness. Plugging the iPhone or iPad into a Mac or Windows PC that has been paired with the device circumvents the passcode-based encryption.

Unless the device has been turned off, the passcode is not needed to access its data from those computers.

“This means that if you’re arrested, the police will seize both your iPhone and all desktop/laptop machines you own, and use files on the desktop to dump and access all of the above data on your iPhone,” Jonathan Zdziarski, an iOS forensics expert, said in his blog. “This can also be done at an airport, if you are detained.”

Without naming Google, Cook made a point to emphasize that Apple’s profits depended on selling hardware, not collecting customers’ personal information and then selling it to advertisers.

“A few years ago, users of Internet services began to realize that when an online service is free, you’re not the customer. You’re the product,” Cook said.

The privacy changes came after Apple suffered a black eye this month when cyber-thieves accessed celebrities’ iCloud accounts and, in some cases, posted naked photos online. Apple found that the attackers did not compromise iCloud security, but obtained the credentials to the accounts some other way.

Apple beefed up iCloud security recently by introducing two-factor authentication, which was already available to people with an Apple account tied to iTunes and other services.

“Two-step verification is good, and long overdue,” Rebecca Herold, a privacy adviser to law firms and businesses, said.


How To & Training: vSphere 5.5


It’s time you protect your mission-critical data – eHDF


Impact of Data Breach on Business
It’s time you protect your mission-critical data!

Leverage eHDF’s Managed Security Services and benefit from:

  • Compliance & governance
  • Customized solutions with guaranteed SLAs
  • Round-the-clock surveillance by trained and dedicated professionals
  • Reduced total cost of ownership
  • Protection against unauthorized access
  • Comprehensive management view
Contact us today to learn more! | 043913040

5 hidden iOS 8 features that all Apple users should know

Learn about the hidden features in iOS 8 that you’ll have a hard time living without… once you find out about them.

Hidden features in iOS 8

With each iteration, Apple adds new features to iOS to make it even better. With iOS 8, Apple started with the great foundation of iOS 7 and added quite a few new features that integrate more deeply with the Mac. You’ve probably already heard about a lot of the big features introduced in iOS 8, but there are plenty of other unannounced features to uncover. Once you learn about these features, you’ll wonder how you ever lived without them.

1. Find My iPhone: Send last location before battery dies

Find My iPhone is a great feature, but if your battery dies, you’ll be unable to track your device, and your chances of seeing it again diminish significantly. Fortunately, a new feature in iOS 8 can help save your day (and perhaps your device).

With iOS 8, Find my iPhone/iPad/iPod Touch can now send the last known device location to iCloud before your battery dies. This can really be helpful to track down your device.

To enable this feature, perform these steps:

  1. Open Settings
  2. Tap iCloud | Find my [device type]
  3. Turn on the option for Send Last Location

Once you’ve enabled this feature, whenever the device battery becomes critically low, iOS will make it a priority to send the device’s current location to iCloud. This works best when your device has a cellular connection; a Wi-Fi-only device that is away from a known access point may not be able to report a useful location.

2. Load a desktop webpage instead of mobile in Safari

Sometimes, websites load in a stripped-down mode known as the “mobile site” or “mobile version.” Most of these mobile sites don’t offer the same features as their desktop counterparts, leaving users unable to perform some tasks. Fortunately, iOS 8 makes loading the desktop site a bit easier.

Simply follow these steps to load a desktop page (Figure B) on your iPhone, iPad, or iPod touch:

  1. Open Safari
  2. Navigate to a website that loads a mobile site instead of a desktop version
  3. Tap in the address field, then swipe down on the screen (as if refreshing the screen)
  4. Tap the Request Desktop Site button that appears

Allowing users to request a desktop site in mobile Safari has long been a requested feature.

When you tap this button, Safari will refresh the webpage and will ask the site to use the desktop assets instead of the mobile version, giving you full access to the site and all of its resources.

3. Scan your credit cards in Safari when paying for purchases

Online purchases can be a pain. You have to pull out your credit card and manually enter all of the required information. Mobile purchases are on the rise, however, and the odds of mistyping a digit on such a small keyboard are high.

Apple Pay will most likely change this experience, but until then, you can use Safari’s credit card reading ability today in iOS 8.

To scan your credit card information and place it into the web page automatically using this Safari feature, do the following:

  1. Open a website and perform the checkout process
  2. When you get to the payment method screen of the checkout process, place the cursor inside of the card number field
  3. Tap the Scan Credit Card button that appears in the toolbar above the keyboard
  4. Position your card in the frame

Once you do this, your credit card information will be scanned, processed and entered into the appropriate fields in Safari. No need to hunt and peck for keys during the payment process.

4. Find battery-hogging apps

iOS battery life is continually getting better, and you can get all-day battery life from most modern iOS devices. With iOS 8, Apple now lets you see how much battery life each of your iOS apps is using.

To find those battery-draining apps that reside on your device, follow these steps:

  1. Open Settings
  2. Navigate to General | Usage | Battery Usage
  3. After a few minutes, the Battery Usage section will be generated, displaying the apps that are using the most energy

The statistics provided here are each app’s proportion of the battery used while the device is not charging.

When viewing battery usage, you’ll now get a listing of the apps that are responsible for the energy loss since the last charge.

5. Make your photos invisible instead of deleting them

Sometimes, you may not want to display all of your photos in the Photos app (perhaps you’ve got private photos that you don’t wish to share when giving a presentation, for instance). For those photos, iOS 8 can now hide them from view, allowing you to still keep them around.

To hide one of your photos, navigate to the photo that you’d like to hide, then tap and hold on it until a menu appears. Tap the Hide button in the menu that appears above the photo.

Hiding photos can help when giving a presentation or when sharing a photo album with your friends via AirPlay.

When hiding a photo from your library, the photo will be hidden from the Moments, Collections, and Years sections, but it will still be visible in the Albums tab. You can unhide it by navigating to one of the Albums containing it, then tapping and holding on the photo, and selecting Unhide.


Green storage is dead. Long live green storage

Green storage may no longer be centre stage, as it was at the height of its hype a few years ago, but the need to reduce datacentre energy usage ensures that green storage is still a pressing concern. That is because storage is gobbling up an ever-larger portion of datacentre energy.

Global datacentre energy use is growing at about 7% a year, with storage consuming between 10% and 40% of all energy in any particular datacentre, according to the Storage Networking Industry Association (SNIA).


The SNIA estimates that servers consume 60% of a datacentre’s energy, leaving networking and storage with 20% each. However, the servers’ share is falling because virtualisation reduces energy consumption, leaving storage with a larger proportion of the whole.

This clearly affects enterprises’ bottom lines directly, but there are other impacts, too. Analyst Clive Longbottom, service director at Quocirca, says: “Those in the high-energy usage markets will be caught up in the Carbon Reduction Commitment (CRC) tax system, so anything they can do to lower energy usage will be useful.

“Those who aren’t caught in the CRC web can still do with saving energy. It costs money, after all, and with the lack of efficiency there is in IT equipment, every extra watt of energy going in will create a significant percentage of heat that needs to be cooled. So, each watt needed for storage could end up as 2.5W or more of actual energy consumed, assuming a power usage effectiveness [PUE] ratio of 2.5.”
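Longbottom’s multiplier is straightforward to reproduce. A minimal Python sketch, with the wattage and tariff figures chosen purely for illustration:

```python
# Total facility power drawn per watt of IT load, given a PUE ratio.
def facility_watts(it_watts: float, pue: float) -> float:
    return it_watts * pue

# Per Longbottom: at a PUE of 2.5, each watt of storage demand consumes
# 2.5 W of real energy once cooling and distribution losses are counted.
assert facility_watts(1, 2.5) == 2.5

# Hypothetical array drawing 500 W, at an assumed tariff of $0.12/kWh.
total_w = facility_watts(500, 2.5)        # 1,250 W at the meter
annual_kwh = total_w * 24 * 365 / 1000    # 10,950 kWh per year
annual_cost = annual_kwh * 0.12           # about $1,314 per year
```

At a PUE of 1.5 the same array would cost roughly 40% less at the meter, which is why PUE sits alongside raw storage draw in any energy budget.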

Cutting the storage energy bill

Longbottom’s advice for businesses that want to reduce their storage energy bill is as follows: “First, de-clutter by getting rid of unwanted and/or unneeded data. Then work on cleansing data – make sure what you have is what you should have. Apply data deduplication to reduce the amount of data under storage by as much as 80%. Virtualise, so you can pool storage and make more efficient use of it. Minimise the number of arrays being used. Turn off, put in off-site storage or sell redundant arrays.”

Longbottom adds: “Where possible, move to flash storage. It is a much lower energy user and gives out a lot less heat. If not moving to flash, make sure you have variable-speed disks that can go into a low switch-down mode when not being used, to minimise energy usage. Use off-site storage systems where they make sense. Energy costs are then included in the data subscription price and can be guaranteed across a longer period of time.

“Also, look at how server-side storage can be used. The age of the SAN is coming to an end; every converged computing system has its own direct attached storage; every blade, every tower, every pizza-box server has a hard disk drive in there.”
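The deduplication step Longbottom recommends works by keeping one copy of each unique block of data and replacing repeats with references to it. A minimal content-addressed sketch in Python (the block contents and the backup stream are invented for illustration):

```python
import hashlib

def dedupe(blocks: list[bytes]) -> dict[str, bytes]:
    """Content-addressed store: identical blocks are kept only once."""
    store: dict[str, bytes] = {}
    for block in blocks:
        # Blocks with identical contents hash to the same key,
        # so duplicates overwrite rather than accumulate.
        store[hashlib.sha256(block).hexdigest()] = block
    return store

# A hypothetical backup stream in which most blocks repeat.
blocks = [b"os image"] * 6 + [b"app binaries"] * 3 + [b"user database"]
store = dedupe(blocks)
saved = 1 - len(store) / len(blocks)   # fraction of blocks eliminated
```

Here 10 incoming blocks shrink to 3 stored ones, a 70% reduction; real backup data with heavy repetition is how dedupe reaches figures like the 80% Longbottom cites.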

SNIA board member SW Worth offers advice as part of the organisation’s green storage initiative. He says data reduction techniques are key to the type of de-cluttering suggested by Longbottom, and the most effective combination uses RAID 6 with snapshots, data deduplication and thin provisioning.

But not all energy-saving technologies have succeeded. One example is MAID (Massive Array of Idle Disks), for which Nexsan is now the standard-bearer. As Storage Switzerland analyst Eric Slack points out, the technology has not proved very attractive because the number of use cases is limited.

He says: “Too many applications couldn’t tolerate the wait of up to a couple of minutes to access data as drives spun up and IT wasn’t willing to take the chance that this data would be recalled.”

No single solution can reduce storage power consumption, says Steve Watt, CIO at the University of St Andrews. “Any future reductions will be achieved through a combination of efforts and by dealing with the issue from all angles, such as looking at the whole datacentre ecosystem,” he says.

How to reduce storage power usage

The biggest savings can be achieved by reducing the number of spinning disks in the datacentre. However, in a highly efficient storage system, most disks will be needed most of the time, says Longbottom, which reduces potential savings here.

Flash storage costs more, but can help resolve this dilemma. An Intel-sponsored report from J Gold Associates calculates that payback can be achieved within days, given the reduction in energy use, maintenance and hardware failures possible with flash storage.
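The payback claim reduces to simple arithmetic: divide the flash price premium by the daily savings it unlocks. The figures below are hypothetical and not taken from the J Gold report:

```python
def payback_days(extra_capex: float, daily_saving: float) -> float:
    """Days until the flash price premium is recovered from savings."""
    return extra_capex / daily_saving

# Assume a flash tier costs $2,000 more than the disk it replaces but
# saves $120/day in energy, maintenance and avoided hardware failures.
days = payback_days(2000, 120)   # roughly 17 days
```

The interesting variable is the daily saving: when reduced failures are counted alongside energy, the denominator grows and payback periods of days rather than years become plausible.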

As a consequence, Longbottom predicts much greater usage of flash at reasonable prices as EMC productises more of its XtremIO and DSSD IP, and IBM does likewise with its TMS acquisition, while the pure plays – Pure Storage, Violin, Nimble, and so on – also make progress.

But purchasing patterns and deployment details are not the only means of reducing energy consumption.

Storage suppliers can help by cutting their products’ energy use, and SNIA has developed a set of criteria that help suppliers test and label the energy usage of their products. Its Emerald Program aims to provide a uniform format to report storage system power requirements, power usage and efficiency under various workloads.

A user’s green storage perspective

Customers have also responded to the need to reduce energy.

A survey of European storage professionals by SearchStorage in 2010 found that green storage considerations are important when buying storage hardware and that many users would pay extra for more energy-efficient hardware. Testimony to the importance of “green friendliness” is that 66% of those questioned would pay more for a green product.

Watt has taken steps to reduce energy use in line with his university’s aim of becoming carbon neutral by 2016, such as consolidating storage from 400 servers spread across 50 sites into a Dell Compellent SAN.

He says: “This has resulted in a significant reduction in power usage, especially as it was combined with an aggressive virtualisation of application servers and the locating of these in a highly efficient campus datacentre.

“Reducing energy use entails utilising the inbuilt features and functionality of the Dell Compellent SAN. This has allowed us to use thin provisioning of storage, which provides efficiency gains in utilisation, requiring fewer disks, while hierarchical storage management has also allowed far more effective resource utilisation. This SAN also delivers file deduplication, which further optimises our use of storage.”


Apple Isn’t Interested in Big Data


Apple CEO Tim Cook says consumers aren’t its product

Big Data is big business nowadays and any company with any sense is making the most of user information that they hold. So, when Apple CEO Tim Cook said that the company – which has well over 800 million users – isn’t interested in the Big Data opportunities that its customers provide, it came as quite the shock.

While Google and Amazon offer cheaper hardware in exchange for being able to target adverts and products at you from your usage data, Apple charges a premium and then does little to nothing with the information you send their way.

“We’re not reading your email,” said Tim Cook during an interview with Charlie Rose, “we’re not reading your iMessage. If the government laid a subpoena on us to get your iMessages, we can’t provide it. It’s encrypted and we don’t have the key.”

“Our business is not based on having information about you”, continued Cook. “You’re not our product. Our product are these [iPhones] and this watch [Apple Watch], and Macs, and so forth. And so we run a very different company.

“I think everyone has to ask, how do companies make their money? Follow the money. And if they’re making money mainly by collecting gobs of personal data, I think you have a right to be worried. And you should really understand what’s happening to that data, and the companies – I think – should be very transparent.”

Obviously, that’s not the case for every company out there collecting customer data. In many cases customers don’t really know where their data is going or what it is used for. Customers will only give data away if they think the value exchange is worthwhile, but how many companies are actually transparent about what they use customer data for?

It’s interesting that Apple feels this way, as it shows that, for a business with an incredibly strong set of products and delivery platforms, there’s no need to use and sell customer data or target third-party apps at customers. Customers may pay more for the privilege of not being hassled in such a manner, but it seems to be working. And statements like Cook’s will only strengthen brand loyalty, all the more during this turbulent time around data collection in the wake of the GCHQ and NSA scandals of the last year.

“We have hundreds of millions of customers,” said Cook in reference to NSA requests for data. “So it’s a very rare instance that there’s been any data asked. And one of the reasons is, we don’t keep a lot. We’re not the treasure trove of places to come to.”


Etisalat upgrades broadband speeds for free

Salvador Anglada, chief business officer at Etisalat.

Etisalat has doubled the broadband speeds for its business customers in the UAE at no extra cost.

In practice the upgrade boosts broadband speeds by up to 2.5 times: a business customer on a 4Mbps plan will be automatically upgraded, free of charge, to a 10Mbps connection.

New business customers will also enjoy the benefits of higher speeds at lower rates.

Salvador Anglada, chief business officer at Etisalat, said the free upgrade is aimed at helping businesses grow.

“The double speed upgrade is our promise and commitment to make our technology work for you and grow your business, allow you to do more and stand out from your competition,” said Anglada.

“Among other benefits, Etisalat’s speed upgrades on its fixed-line network will provide faster and extremely reliable fixed-line Internet connection, ensuring increased efficiencies, better global connectivity, reduced costs and improved customer service, resulting in increased profitability,” he added.


Windows 9 video leaks: Virtual desktops and Notification Center in action


10 key technology items for your 2015 budget

It’s that time of year again — time for IT to lay out plans for next year and prepare for budget discussions. Here are 10 items that are likely to top enterprise IT shopping lists for 2015.

1: WAN optimization

With the rush of new web-enabled applications for customers and employees doing business in the field, corporate IT must concern itself with internal network health — as well as the quality of the end-user experience on the internet itself. New IT investments will be made in network end devices that measure “outside traffic” and in cloud-based solutions that can patrol internet traffic throughout the world, rerouting traffic when necessary.

2: Big data

Investment will continue in big data solutions in the data center and in the hiring of business analysts and data analysts with big data skills. This investment will be characterized by a greater move toward real-time big data streams and toward automation that can take advantage of the Internet of Things.

3: Cloud

Enterprises will continue migrating to hybrid cloud environments that mix on-premises, private cloud and public cloud solutions. IT will use the cloud, especially to address external business processes and other gritty issues that have eluded internal enterprise systems for years, to speed solutions to market for the competitive benefit of the business.

4: DevOps

As more IT shops work to increase collaboration and co-development between application developers and system software specialists, vendors will answer with toolsets that can automatically set up and deploy underlying systems infrastructure. This will allow developers to focus more on top-level business coding. Investments will be made because of the potential to speed applications to market and to hedge against shortfalls in deeper level systems skills that many IT staffs are starting to experience.

5: Mobility

Enterprises will continue to invest in mobile devices and in the development of mobile business applications for internal employee use and for external use by customers. They are also likely to look for new hires with mobile application development skills.

6: Virtualization

Virtualization of hardware has been a stalwart in data centers for more than a decade. In 2015, there will be an equivalent effort in virtualizing infrastructure software as well. The goal in virtualizing software is to further ratchet down licensing costs. Any software virtualization tool that can facilitate this is likely to find budget dollars.

7: Digital assets

Enterprises will be looking to value and to monetize their digital assets, whether they come in the form of a website, a social media presence, content development and management, or new applications that perform in the digital universe. Many companies will formalize this process by hiring digital managers who can focus on a digital asset approach that combines IT and technology investment with marketing and sales channels, revenue, and brand development.

8: Data center facilities

A major contributor to corporate sustainability initiatives is the data center. Cooling and heating systems and facility construction improvements will be funded to facilitate joint facilities and IT projects that are focused on reducing data center energy consumption.

9: Audits

Rapid deployment of web-facing and mobile applications and the establishment of BYOD (bring your own device) policies will prompt many businesses to set aside extra budget dollars for security and IT policy and governance audits.

10: Data center automation

IT will continue to fund and implement automation in the data center that controls energy consumption and enables “lights out” operation of nightly batch runs, data archiving and storage, backups and system synchronization for business continuity and rapid failover.


Citrix Cloud Platform offers the most functionality for the price – Info-Tech Value Award

Info-Tech evaluated nine competitors in the Cloud Management market, including the following notable performers:


• Citrix CloudPlatform offers flexible hypervisor and hardware deployment options with the support of a mature vendor and a strong open source community.

• Abiquo’s ability to integrate with existing systems, combined with its standout self-service functionality, makes it a great solution for enterprises needing a private or hybrid cloud.

• VMware. A virtualization force with a solid foundation for private cloud deployments, offering standout performance monitoring and drill down capabilities.

Value Award:

• Citrix. A hardened open source solution with a simple price-per-socket annual subscription model, CloudPlatform offers the most functionality for the price.

Trend Setter Award:

• Flexiant’s unique customization and white-labeling, integrated billing, and ability to support multiple hypervisors make it a standout solution that’s ahead of the curve in advanced features.

Info-Tech Insight

1. Match features to use case. Solutions targeting service providers often have broader hypervisor support and chargeback functionality, but tend not to integrate with VMware vSphere.

2. Evaluate hypervisor support. Most solutions support both VMware ESX and KVM, but few support Microsoft Hyper-V or Citrix XenServer. As hybrid cloud scenarios become more commonplace, it will be important for service providers to select a solution that supports the install base of its clients. Enterprises should look to solutions that can be layered on existing hardware, whether commodity servers, blade systems or virtualized infrastructures with storage.

3. Consider Amazon hook-ins for bursting scenarios. If there is a strong use case for public cloud integration to absorb seasonal or project-based variation in resource requirements, the capability to integrate with Amazon using its APIs is a plus.

Citrix’s offering includes the hypervisor layer with XenServer, the cloud orchestration layer with CloudPlatform and the user portal with CloudPortal Business Manager.

• Citrix CloudPlatform is a commercially available and fully supported version of the open source Apache CloudStack project. Citrix acquired CloudStack in 2011 and contributed it to the Apache Software Foundation in April 2012.


• CloudPlatform has proven to be a mature solution, with over 150 paying customers deployed in large production environments, including service providers (NTT, GoDaddy), enterprises (Nokia, Bechtel) and Web 2.0 companies (Zynga, Netflix, Samsung).

• Citrix has a strong cloud portfolio spanning servers, networking, and desktops, but also offers flexibility by supporting multiple hypervisors (XenServer, ESX, OVS, and KVM).

• A single management console is accessible via the command line, a web GUI or API orchestration, and includes a guided configuration as a simple launch option for new users.

• Supports Amazon APIs for hybrid cloud deployments.


• CloudPlatform requires the purchase of CloudPortal Business Manager to give end users visibility into costs.

• There is a perception that OpenStack is currently getting more developer support. That said, over 100 companies have contributed CloudPlatform code, and many are active in both communities, including Cisco, IBM, Red Hat and SUSE.
