Archive for ‘Misc’

15/01/2017

Technology has become unruly and out of control. Where is the world heading, and what do the coming days hold for us?

  

In 1988, Kodak employed 170,000 people and sold 85% of the world's photographic paper. Despite that, only a few years ago the company went bankrupt and lost most of its competitive strength. What happened to Kodak will happen to many industries over the next ten years, and most people have not yet grasped how fast these changes are coming.

1- Software will disrupt many traditional industries within five to ten years. Uber, the largest taxi company in the world, does not own a single car, and Airbnb, the largest hotel company in the world, does not own a single property.

2- Artificial intelligence: computers understand more and more of the world around them. In the United States, young lawyers can barely find work, especially since IBM built Watson, a computer that can give legal advice within seconds at 90% accuracy, while a professional human's accuracy does not exceed 70%. So if you are studying law, stop now: in the future the number of lawyers will fall by 90%, and only the specialists will remain. Watson is already helping to diagnose cancer four times more accurately than a human doctor. Facebook has face-recognition software that is also more accurate than a human, and machine intelligence is expected to surpass human intelligence by 2030.

3- Autonomous cars: in 2018 the first driverless car will go into production, and by 2020 the technology will flood the market and disrupt the traditional car industry. At that point you will no longer need to own a car: you will summon one with your phone, it will take you where you want to go, and you will not even have to park it. Our children will not own cars and will not need driving licences, because the number of cars will fall by 90-95%. With cars like these, accidents will drop to one per 10 million kilometres, saving many more lives. Most traditional car makers will go bankrupt; they are now trying to adopt radically new manufacturing methods, but they will not beat companies like Google, Tesla, and Apple, which are building a computer on wheels. Car insurers will face serious, real problems and will also disappear for lack of demand, with insurance becoming a hundred times cheaper than it is today. Real estate will change as well, because you will prefer to live far from the city as long as working from home remains an option.

4- Electric cars will become a mainstream product in 2020; pollution will fall and cities will become more pleasant. You can already see how conventional fuel consumption has declined over just the past 30 years, and this year in particular, solar power plants have outnumbered their conventional counterparts across the world.

5- Health: software will measure your vital signs through your phone, and in the future everyone will have access to high-quality healthcare, free of charge, all through their phone.

6- 3D printing: the price of a single printer has dropped from $18,000 to $400 in just ten years, while printing speed has multiplied. Shoe companies have started to exploit the technology, as have manufacturers of aircraft and space stations. By the end of this year, smartphones with 3D scanning will be on the market; soon you will be able to scan your foot with your phone and print your shoes at home. In China, a complete six-storey building was recently built with 3D printing. By 2027, 10% of everything the world manufactures will involve this technology.

7- Jobs: over the next twenty years, 70-80% of today's jobs will disappear. The world will change enormously, and it is not clear whether we are ready for it.

8- Agriculture: a $100 farming robot will be produced, and farmers in the developing world will become managers of their fields rather than just labourers in them.

9- Human mood: there is already an app that reads your current mood from your facial features, and in the future a smartphone app will detect lying from facial expressions. Imagine the effect on political debates when you can tell who is telling the truth and who is lying.

10- Life expectancy: estimated human life expectancy currently increases by about three months every year. By 2036 that increase will reach a full year per year, meaning most people are eventually expected to live well beyond 100.

11- Education: by 2020, 70% of people will own a smartphone, which means everyone will have access to the same high-quality education through nothing more than their phone. Any future investment idea that ignores the smartphone world is bound to fail. Change is coming; the question is whether we are ready for it.

04/01/2017

Fast-growing IT engineering skills 2017: the most popular and in-demand engineering skills in the IT sector



Exploring more ..

Spark:

Data Engineer

Spark Architect

Azure:

UI Developer

Senior Engineer, Cloud Infrastructure

Salesforce:

Salesforce Administrator, Databases and Initiatives

Senior Salesforce.com Developer

BigData:

Big Data Developer

Big Data Architect

JIRA Project Management:

Portal Support Engineer

Developer, Software

Apache Hive:

Senior software engineer – Big Data

Senior software engineer

Apache Cassandra:

Software Engineer Cloud

Sr. Administrator, Database Cassandra

Juniper:

Senior Manager, Information Security Ops

Network Deployment Planning Engineer

Others:

ALCS Senior Electrical Engineer

Electrical Engineer

Cloud Support Engineer

Cloud Specialist


31/12/2016

Happy New Year 2017: may it be a happy year, free of tragedies, in which wishes come true

Free from acts of war

Free from child abuse

Free from crises

17/12/2016

Cloud Computing IaaS Provider Comparison 

Let’s evaluate the compute offerings of AWS, Azure, Google, and IBM SoftLayer. For a high-level view of the differences (in compute, network, storage, database, analytics, and other services) among these cloud providers, see the RightScale comparison tool.

1- Amazon Web Services

AWS was first to market with a cloud compute offering, and it gained a sizable head start. Today AWS Elastic Compute Cloud (EC2) has approximately 40 different instance sizes in its current generation (“instance” is the term AWS and Google use for what others call a “virtual machine,” “VM,” “virtual server,” or “VS”). The previous generations of instance types (including the original M1) are still available, although they are not “above the fold” on any AWS price sheets or product description literature. There are about 15 instance sizes in the previous generations. While they are currently fully supported, it would not be surprising if AWS looks to sunset these instance types at some point in the future.
Focusing on the current generation, some of these instance types come with attached “ephemeral” storage (storage that is deprovisioned when the instance is terminated), while many others come with no attached volumes and instead specify “EBS only” with regard to storage. This means you must separately provision, attach, and pay for the storage. (EBS is AWS’s Elastic Block Storage offering, which will be discussed in a future article in this series.)
The current generation of instances is organized into instance families that are optimized for certain use cases. Some of the current instances address general-purpose workloads, while others are tailored for computationally intensive applications. Still others are optimized for workloads with high memory requirements or for applications that require high amounts of storage (up to 48TB). Some instances provide GPUs that can be used for video rendering, graphics computations, and streaming.
Additionally, some instance families support “burstable” performance. These provide a baseline CPU performance, but can burst to higher CPU rates for finite periods of time provided by “CPU credits” that the instance accumulates during periods of low CPU utilization. Evaluate your use case and workload carefully before deciding upon burstable instance types.
It is important to benchmark your application to ensure that on average it stays at or below the baseline. Not only that, you want to ensure that the CPU bursts are not so long they exhaust your credits, and the CPU valleys are sufficiently long to allow for credit replenishment. If you exhaust your CPU credits, your application may run in a “CPU starved” state that will obviously hinder performance. Burstable instances are a great tool for the right application, but they can prove very problematic when used incorrectly.
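To see why benchmarking matters, here is a minimal Python sketch of how a credit balance behaves under the burstable model described above. The earn rate, baseline, and starting balance are illustrative placeholders, not official AWS figures; check the documentation of the specific instance family for the real numbers.

```python
# Rough simulation of a burstable instance's CPU-credit balance.
# earn_rate, baseline, and the starting balance are illustrative only.
def simulate_credits(demand_per_minute, earn_rate=0.10, baseline=0.10, credits=30.0):
    """demand_per_minute: desired vCPU utilization (0.0-1.0) for each minute.
    Returns (remaining credits, number of minutes spent throttled to baseline)."""
    throttled = 0
    for demand in demand_per_minute:
        credits += earn_rate                    # credits accrue at a fixed rate
        if credits >= demand:                   # enough credits to burst
            credits -= demand                   # 1 credit ~= 1 vCPU-minute at 100%
        else:                                   # balance empty: pin to baseline
            credits = max(credits - baseline, 0.0)
            throttled += 1
    return credits, throttled

# Two hours pinned at 100% drains the small starting balance after roughly half
# an hour; the remaining minutes run throttled at the baseline rate.
remaining, throttled = simulate_credits([1.0] * 120)
print(f"credits left: {remaining:.1f}, minutes throttled: {throttled}")
```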

AWS instance types can be optionally configured to meet specific use cases, performance targets, or compliance regulations. For example, certain instance types can be configured in an enhanced networking model that allows for increased packet rates, lowers latencies between instances, and decreases network jitter. Additionally, instances can be launched into high-performance computing (HPC) clusters or deployed on dedicated hardware that allows for single-tenant configurations, which may be required for certain security or compliance regulations.
There are also different pricing structures and deployment models that can be used within AWS EC2. The standard deployment model is “on-demand,” which means, as the name implies, you launch instances when you need them. On-demand instances run for a fixed hourly cost (fractional hours are rounded up to the next hour) until you explicitly terminate them. There are also “spot instances,” which allow you to bid for any excess compute capacity AWS may have at any given time. Spot instances can often be obtained for a fraction of the on-demand cost (savings in excess of 80 percent are not uncommon).
However, they come with the caveat that they may be terminated at any time if the current spot price exceeds your bid price. It is a real-time marketplace in which the highest bid (the price you are willing to pay per hour for the instance) “wins.” You can achieve tremendous cost savings with spot instances, but they are only suited for workloads that can be interrupted (processing items from an external queue, for example). 
AWS offers “spot blocks,” which are similar to spot instances in that you specify the price you are willing to pay, but you also specify the number of instances you want at that price, and a duration in hours up to a maximum of six. If your bid is accepted, your desired number of instances will run for the time specified without interruption, but they will be terminated when the time period expires. This deployment model is useful for predictable, finite workloads such as batch processing tasks.
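As a concrete illustration, here is a hedged boto3 sketch of placing a spot request, with an optional block duration for the spot-block case. The AMI ID, instance type, and bid price are placeholders, and the exact parameters are worth confirming against the current EC2 API documentation.

```python
# Minimal sketch of a spot request with boto3 (assumes credentials and region
# are already configured). All concrete values below are placeholders.
import boto3

ec2 = boto3.client("ec2")

response = ec2.request_spot_instances(
    SpotPrice="0.05",              # the most you are willing to pay per instance-hour
    InstanceCount=2,
    Type="one-time",               # do not reopen the request after interruption
    BlockDurationMinutes=120,      # optional: a "spot block" of up to six hours
    LaunchSpecification={
        "ImageId": "ami-12345678",     # placeholder AMI
        "InstanceType": "m4.large",    # placeholder instance type
    },
)

for request in response["SpotInstanceRequests"]:
    print(request["SpotInstanceRequestId"], request["State"])
```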
AWS offers discounts through reserved instances (RIs), which require you to commit to a specific instance type running a specific operating system in a specific availability zone (AZ) of your desired AWS region. You must commit to a one- or three-year term, and in return your hourly cost for the instance will be greatly reduced (up to 75 percent for a three-year commitment).
However, you are generally constrained to the instance type, operating system, and AZ that you selected for the duration of the contract, so careful planning is essential. You can request modifications within certain limitations, but those requests are subject to approval by AWS based on available capacity. Clearly, committing to one or three years of reserved instances isn’t for everyone. Other providers have similar discounting policies that are far simpler to implement and don’t require having years of visibility into your workload (Google’s Sustained Use Discounts, for example, which will be described shortly).
AWS has the most complete set of offerings in the compute arena, but it doesn’t have a lock on unique and interesting features. Other vendors are continually adding new compute options that make them attractive alternatives for many use cases.
2- Microsoft Azure

Microsoft takes a similar approach to compute instance types with Azure, but uses slightly different nomenclature. Instances are called virtual machines (VMs), although you will see the word “instance” sprinkled throughout the online documentation. VMs are grouped into seven different series with between five and a dozen different sizes in each group. Each series is optimized for a particular type of workload, including general-purpose use cases, computationally intensive applications, and workloads with high memory requirements. An eighth group (the “N” series), which is composed of GPU-enabled instances, was recently released for general availability this month.
All told, Azure has approximately 70 different VM sizes, covering a wide array of use cases and workload requirements. All VM types in Azure come with attached ephemeral storage, varying from about 7GB to about 7TB. (Azure measures attached storage in gibibytes, not gigabytes, so the numbers don’t come out as neat and clean as we are typically used to.)
As the maximum capacity of attached storage for an Azure VM is considerably less than for an AWS EC2 instance (the aforementioned 48TB, for example), you may want to provision additional storage. This can be allocated from an Azure Storage account associated with your Azure subscription (“subscription” is the Azure term for what is generally known as an “account”). Azure provides both a standard storage option (HDD) and a “premium” storage option (SSD). I’ll discuss these in more detail in a later post in this series.
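A quick conversion shows why those attached-storage numbers look untidy: a binary gibibyte is roughly 7.4 percent larger than a decimal gigabyte.

```python
# GiB (2^30 bytes) to GB (10^9 bytes) conversion.
def gib_to_gb(gib: float) -> float:
    return gib * 2**30 / 10**9

print(f"{gib_to_gb(7):.2f} GB")      # 7 GiB is about 7.52 GB
print(f"{gib_to_gb(7168):.0f} GB")   # ~7 TiB (7168 GiB) is about 7697 GB
```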
Azure also provides a VM size (the A0) that is intentionally oversubscribed with regard to the underlying physical hardware. This means the CPU performance on this VM type can be affected by “noisy neighbors” running VMs on the same physical node. Azure specifies an expected baseline performance, but acknowledges that performance may vary as much as 15 percent from that baseline. The A0 is a very inexpensive VM, and if a particular workload can tolerate the variability, it may be an attractive option. 
Azure charges for VMs on a per-minute basis instead of on an hourly rate like AWS. Thus, a VM that runs for 61 minutes on Azure is charged for 61 minutes, whereas AWS would charge you for a full two hours. Azure has an offering similar to AWS’s reserved instances. Called the “Azure Compute prepurchase plan,” this allows you to reap significant discounts (as much as 63 percent) by making an upfront prepurchase, with a one-year commitment, on a particular VM family, size, region, and operating system.
However, the prepurchase plan is available only to customers holding an active Enterprise Agreement (EA) with Microsoft. Because an EA can greatly influence your pricing model, VM pricing on Azure is kind of like the pricing of airline seats on any particular flight: No two people pay the same price, though they are all sitting in the same type of seat and going to the same place. If you have an EA with Microsoft, be sure to speak to your sales representative about your Azure usage.
Microsoft has made great strides in IaaS over the last few years. Azure has started to close the overall gap with AWS, particularly the gaps in its compute offering. As many enterprises are already engaged with Microsoft on some level (or multiple levels), it would not be surprising to see this trend continue.
3- Google Cloud Platform

Google Compute Engine (GCE), the service within Google Cloud Platform that manages IaaS compute resources, also provides numerous options for launching virtual machines. Like AWS, GCE calls the VMs “instances” and the different options “machine types.” These are grouped into several categories (standard, high CPU, and high memory), with multiple sizes within each category.
Currently you’ll find approximately 20 different predefined machine types in GCE, with available memory ranging from 600MB to 208GB. None of these predefined machine types provides ephemeral storage, which is a change from the early days of GCE when ephemeral storage was an option. Ephemeral storage was a casualty of GCE’s live migration (or “transparent maintenance”) service, which enables a VM to be migrated from one physical node to another without any interaction (or even knowledge of the process) by the customer. This feature is unique to GCE and a powerful differentiator from AWS.
Another unique feature of GCE is the ability to create custom machine types. That is, you can specify the configuration of virtual CPUs and available memory if none of the predefined machine types fits your needs. There are limitations to what can be configured, and prices for custom machine types are higher than for predefined instances, but for certain use cases and workloads, custom machines may be an attractive option. 
GCE also provides a few “shared core” machine types, which are similar in concept to the oversubscribed VM sizes in Azure. These machine types provide “opportunistic” bursting, which allows the instance to consume additional CPU cycles when they are not being consumed by other workloads on the same physical CPU. GCE does not use a “CPU credit” system such as AWS uses to balance peaks and valleys of utilization. Instead the bursts occur whenever the stars of application need and CPU cycle availability align.

Similar to Azure (but unlike AWS), GCE employs a per-minute pricing model, rounded up to the next minute, with a floor of 10 minutes. Thus, an instance that runs for three minutes is charged for 10 minutes of execution, while an instance that was operational for 12.5 minutes would be charged for 13.
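The rounding rules described above are easy to compare side by side; this small sketch encodes them directly (prices are omitted, only the billed time differs):

```python
# Billed time for the same runtime under the three models described above:
# AWS hourly rounding, Azure per-minute, GCE per-minute with a 10-minute floor.
import math

def billed_minutes(runtime_minutes: float) -> dict:
    return {
        "aws_hourly": math.ceil(runtime_minutes / 60) * 60,      # round up to the hour
        "azure_per_minute": math.ceil(runtime_minutes),          # per-minute billing
        "gce_per_minute": max(10, math.ceil(runtime_minutes)),   # 10-minute minimum
    }

print(billed_minutes(61))    # {'aws_hourly': 120, 'azure_per_minute': 61, 'gce_per_minute': 61}
print(billed_minutes(3))     # {'aws_hourly': 60,  'azure_per_minute': 3,  'gce_per_minute': 10}
print(billed_minutes(12.5))  # {'aws_hourly': 60,  'azure_per_minute': 13, 'gce_per_minute': 13}
```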
GCE also provides a mechanism to access unused capacity at a reduced rate, somewhat similar to AWS spot instances. Google calls these preemptible VM instances and provides them at an 80 percent discount as compared to the fixed, on-demand hourly rate. Like AWS spot instances, a GCE preemptible instance may be terminated (“preempted”) at any time. However, whereas spot instances could (in theory) operate indefinitely, preemptible instances will always be terminated after 24 hours. Preemptible instances are not covered under GCE’s SLA, and they cannot be live migrated, so the use cases may be limited. But if you have an appropriate use for them, they can deliver great cost savings. 
A compelling feature of GCE’s pricing model is the aforementioned Sustained Use Discount (SUD). In the SUD model, any instance in use for more than 25 percent of the monthly billing cycle is automatically discounted for every minute beyond that initial 25 percent, with no interaction required by the customer. The discount is 20 percent for usage between 25 and 50 percent of the full month, 40 percent for usage between 50 and 75 percent of the month, and 60 percent for usage above 75 percent of the month. In addition, this discount is not limited to a specific instance but can apply to multiple instances of the same machine type.
In other words, if you have two instances that run for a quarter of the month, you get a discount of 40 percent, not 20 percent. Further, it’s a simple and elegant pricing model that doesn’t require the user to make any upfront commitments or predictions about future utilization. You simply launch your instances, and if they are operational for more than 25 percent of the billing cycle, a discount is automatically applied.
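Here is a small sketch that applies the tier percentages quoted above incrementally to each slice of the month; treat it as an illustration of the article's description rather than Google's exact billing formula.

```python
# Sustained Use Discount tiers as described above: each successive quarter of
# the month is billed at a lower multiple of the on-demand rate.
def sustained_use_cost(fraction_of_month: float, on_demand_month_price: float) -> float:
    """fraction_of_month: how much of the billing cycle the machine type ran (0.0-1.0)."""
    tiers = [            # (upper bound of the tier, price multiplier inside it)
        (0.25, 1.0),     # first 25% of the month: full price
        (0.50, 0.8),     # 25-50%: 20% discount
        (0.75, 0.6),     # 50-75%: 40% discount
        (1.00, 0.4),     # 75-100%: 60% discount
    ]
    cost, lower = 0.0, 0.0
    for upper, multiplier in tiers:
        used = max(0.0, min(fraction_of_month, upper) - lower)
        cost += used * on_demand_month_price * multiplier
        lower = upper
    return cost

# An instance (or several of the same machine type) in use for the whole month
# ends up paying 70% of the on-demand price; the discount accrues automatically.
print(sustained_use_cost(1.0, 100.0))   # 70.0
```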
Although Google was the second major player to enter the IaaS game behind AWS, it has seen slower adoption among enterprise users than Azure, most likely due to Microsoft’s long-established stronghold on enterprises. However, as Google continues to expand and innovate the GCE product suite and introduce unique offerings, its footprint in these organizations continues to expand.
4- IBM SoftLayer
SoftLayer takes a slightly different approach to compute instances (“virtual servers” or “VSes” in SoftLayer parlance) in that there are no predefined sizes. Similar to GCE’s custom machine types, you can configure a virtual server to your own specs, drawing on core/RAM configurations from one core with 1GB of RAM to 56 cores with 242GB. Not every combination is available (you can’t select one core with 242GB of RAM, for example), but there is a vast array of options. As such, SoftLayer does not have “families” or “series” of VSes, but with the ability to customize the CPU count and RAM capacity, you can effectively build your own high-CPU or high-memory virtual servers.
SoftLayer does not offer burstable virtual servers or a spot market for unused capacity, but it does have something akin to AWS’s reserved instances. Make a monthly commitment to a specific VS, and you get a lower effective monthly rate. In this model you are paying for an entire month of VS usage, so it only makes sense if you have an application that will require an always-on configuration for the month. The savings you get in return are typically in the eight to 10 percent range; unless you are sure of your usage needs, the incentive to forgo the flexibility that hourly billing affords is not substantial.
A unique service that SoftLayer provides is the ability to provision bare-metal servers. This is done via the SoftLayer portal (or the SoftLayer API), so the ordering experience is the same as for VSes. The difference is that bare-metal servers can take up to four hours to be provisioned, whereas VSes typically boot within 10 minutes. Nevertheless, considering the additional complexity involved behind the scenes, this seems a reasonable turnaround time.
As with VSes, bare-metal servers are available in a variety of configurations, and you can choose between an hourly usage model or a monthly commitment. Monthly commitments open the door to a far wider array of server configurations than is available under the hourly usage model. Bare-metal servers offer the advantage of single-tenant configurations (similar to AWS’s dedicated instances and hosts), which may be required for security or compliance reasons, or to accommodate more restrictive software licensing (IP-locked, MAC-locked, and so on).
SoftLayer does not have as strong a foothold in the IaaS market as the other IaaS vendors discussed here, but it has some differentiating offerings (many in the network and appliance arenas, which are outside the scope of this article) that make it attractive for specific use cases and workloads. Because SoftLayer is an IBM company, it enjoys established relationships with many large enterprise customers.

17/12/2016

Technology trends in 2017: A bluffer’s guide


If bluffers have become used to calling on SoCloMo to remind themselves that social, cloud and mobile have been the uber trends of the last few years, then it might be time for a new coinage. RoboMoboFlextastic or similar, perhaps, to highlight the increasing influence of robotics, software bots, ultra-mobile devices and adaptive platforms that enable IT departments to deal with the vagaries of frightening geopolitics and economic uncertainty. Or maybe, like the art movement that followed the avant garde 100 years ago, 2017 will see a Return to Order where the flashy, up-and-coming trends slow down.

So if it’s not too downbeat, let’s look at some trends that might stall this year.

1- Driverless cars. Everybody’s obsessing over them but it’s likely (to me at least) that this is a market that will go niche and B2B before it goes on the road for most of us. Convoys on private routes such as military roads would make sense as might roads with very predictable traffic loads in certain countries with modern infrastructure – Singapore could well lead. But for most developed cities with their legacy issues it will take a long time before risks are ironed out and regulations are reset.

2- Drones. This is another overheated segment where hype has sprinted past reality thanks to over-zealous marketing by the likes of Amazon and Facebook. Drones have a clear appeal for B2C hobbyists and in some niches such as logistics but as B2B delivery infrastructure? No way: too dangerous, too unreliable, too little regulation.

3- Mobile devices. Harbingers of doom refer to dramatic slowdowns in smartphones and tablets but really that’s thanks to the fact that makers have done such a fine job in building desirable, affordable products that they have saturated the market. Laptops offer tremendous value and are more reliable than ever while the old cycle of ‘forced march’ upgrades, thanks to new versions of Windows or Intel’s latest chips, is broken. New(ish) wearable categories such as smart watches do not have sufficient mainstream appeal, at least for now.

OK, enough of the negativity. 

What can we expect to see more of in 2017?

4- Lots of M&A. Large growth markets are already seeing winners and losers being clearly demarcated. In cloud platforms for example, AWS, Microsoft, Google and IBM already have a scale advantage over others (although Alibaba and Tencent remain worth watching for a non-western approach). In an environment where buy-side organisations might well elect to double-down on large suppliers there should be plenty of scope for giants to pick off rivals for their customer bases, geographic presences, technology IP, or people. The biggest tech companies have lots of cash and will want to place large bets this year; many of these deals will be transnational with, for example, more Chinese takeovers of US and European tech companies – unless blockades kick in at the behest of incoming political leaders…

5- Personal assistants. The popularity of Amazon’s Alexa points to demand for voice-controlled help. If part-baked products like the Echo can do so well, imagine what will happen when one of the ecosystem leaders (probably from Amazon, Apple, Microsoft and Google) cracks the code on a reliable, holistic service that knows enough about us to anticipate our needs.

6- Messaging services. Messaging is increasingly the gift that keeps on giving to internet giants and Snapchat’s pending IPO might give the market fresh impetus, driving more freebie features and services from WhatsApp, WeChat and the rest in a battle for market share.

7- Online to offline. Just as 20 years ago bricks-and-mortar companies hurried to reinvent themselves online, there’s a strong chance we’ll see a significant trend whereby web-native brands amplify awareness and supplement logistics by opening stores. Expect more brands to join Amazon, Warby Parker, Zalando and Drop Dead in having their own retail outlets, pop-ups or franchised stands and kiosks in department stores.

8- Virtual Reality and Augmented Reality. With hardware availability, a burgeoning ISV community and the success of Pokémon Go, the dawn chorus for VR and AR will soon deafen. A strong pent-up demand will surely see vast B2C uptake and create a market in its wake for B2B applications such as real estate home walkthroughs.

9- Flying cars. It’s possible that all of the excitement over driverless cars will be overtaken by flying cars. Silicon Valley VCs and other powerbrokers already report a flurry of proposals and it’s quite possible that the regulatory and infrastructure hassles that await driverless will be more easily evaded in flight, especially in areas where traffic is light.

10- Telepresence. High-end videoconferencing has spent far too long in the queue to be mainstream, but everything from the cost of real estate to our traffic-choked roads points to it finally being more widely adopted. Shame on the industry for not already having hit the price points and created the packages that would make this a no-brainer.

11/12/2016

Software-Defined WAN (SD-WAN) and Cloud integration in a few words

07/12/2016

Juniper Networks: “Proving the Business Value of Network Transformation” says IDC


Juniper Networks’ technology development is focused on responding to the challenges of contemporary networking deployments by implementing three main technology approaches:

» Introduction of flattened network architectures, optimized for improved performance in north-south and east-west directions

» Robust and versatile implementation of SDN via its Contrail SDN controller, or support for overlay technologies such as VMware NSX and OpenStack’s Neutron

» Strong support for open networking and network appliance disaggregation, giving customers the option of combining Juniper hardware with other network operating systems

In addition, Juniper has maintained a strong pace of introducing virtualization into its portfolio, primarily by virtualizing router and security functions, and supporting NFV for its service provider customers.

In management and operations, Juniper promotes efficient management interfaces and a unified network operating system. To quantify the business value of Juniper Networks’ switching, routing, and security solutions, IDC interviewed nine customers in Western Europe.

The average return on investment (ROI) for Juniper’s solutions for the customers that were interviewed was 349% over five years, with an average payback period of 8.6 months from deployment.

Business benefits were realized in four main areas: infrastructure, network administration and management, IT user productivity, and overall business productivity.

To put this into context, IT hardware investments commonly provide payback in 9 to 12 months. Payback in 6 to 9 months is normally considered to be very rapid, so the 8.6-month payback for Juniper Networks in this study should be considered an exceptional performance.

Network managers who were interviewed, both in service provider and enterprise clients, were able to provision faster, resulting in better response times to customer and business requests, and spent less time keeping the lights on, allowing more time for proactive initiatives that take the company’s business forward.

IDC also found that networking performance and reliability was considerably improved, and network management was significantly optimized for those interviewed. This shows, in IDC’s view, that high-performance, virtualized, and open networking and security solutions, such as those provided by Juniper Networks, are a financially prudent investment for companies looking to transform their network and security infrastructure to better support their business requirements.

Interview participants also generally found that Juniper’s solutions allow the networking staff to change the focus of their work from maintaining day-to-day operations to spending more time on supporting clients, users, or business-critical applications.

Download the full report …

07/12/2016

 Juniper Networks Summit “The Open Disruptive Decade” – Dubai



To build more than a network: EX, QFX, SRX series

Sky ATP (Juniper's cloud-based Advanced Threat Prevention)

Deception Analysis 

IBM QRadar SIEM

Threat Intelligence 

SDN with OpenFlow h/w wiring devices

SDSN 

Junos OS

SD-WAN ISP

Application-based traffic steering, centrally orchestrated with analytical feedback

Cloud-enabled branches

IPVPN MPLS is declining

IoT WAN connectivity 


05/12/2016

Palo Alto Networks integrates cloud security for AWS


Palo Alto Networks has announced the integration of its VM-Series virtualised next-generation firewalls with Amazon Web Services (AWS) Auto Scaling and Elastic Load Balancing.

Using a combination of AWS Lambda, CloudFormation templates, and Amazon CloudWatch services, along with bootstrapping and XML API automation features supported by the VM-Series, joint customers can now automatically scale the cyber breach prevention capabilities of the Palo Alto Networks next-generation security platform as their AWS workload demands fluctuate.

Tim Jefferson, global ecosystem lead, Security, Amazon Web Services, said: “Palo Alto Networks integration with AWS Auto Scaling and ELB, combined with the new validated security competency, enables our joint customers to confidently and dynamically deploy the Palo Alto Networks VM-Series virtualised next-generation firewall delivering security that keeps pace with our customers’ business.”

Lee Klarich, executive vice president, Product Management, Palo Alto Networks, added: “Palo Alto Networks Next-Generation Security Platform provides customers with superior cyber breach prevention capabilities, including security for cloud-based applications. Through our close integration with AWS, customers can now grow and scale their cloud environments with even greater ease and automation while enhancing and maintaining their security posture across public cloud and hybrid environments.”

Palo Alto Networks also joins the AWS Competency Program for Security, which highlights partners who have demonstrated success in building products and solutions on AWS to support customers in multiple areas, including infrastructure security, policy management, identity management, security monitoring, vulnerability management, and data protection.

05/12/2016

5 steps to securely moving apps to the cloud

02/12/2016

Multiplication with crossing lines – the Chinese method: 12 × 32
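The post's title refers to the crossing-lines (lattice) method; as a rough companion, here is a short Python sketch of the same idea, counting intersections along each diagonal and then resolving carries, using the 12 × 32 example from the title.

```python
# A sketch of the crossing-lines idea in code: each digit of a factor becomes a
# bundle of parallel lines, intersections are counted along the diagonals (a
# digit-by-digit convolution), and carries are resolved at the end.
def crossing_lines_multiply(a: int, b: int) -> int:
    da = [int(d) for d in str(a)]          # e.g. 12 -> [1, 2]
    db = [int(d) for d in str(b)]          # e.g. 32 -> [3, 2]
    # Count intersections for each diagonal (index = combined place value).
    diagonals = [0] * (len(da) + len(db) - 1)
    for i, x in enumerate(da):
        for j, y in enumerate(db):
            diagonals[i + j] += x * y
    # Resolve carries from the least significant diagonal upwards.
    result, carry = 0, 0
    for place, count in enumerate(reversed(diagonals)):
        total = count + carry
        result += (total % 10) * 10 ** place
        carry = total // 10
    return result + carry * 10 ** len(diagonals)

print(crossing_lines_multiply(12, 32))     # 384, as in the example
```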

27/11/2016

Clever attack uses the sound of a computer’s fan to steal data

In the past two years, a group of researchers in Israel has become highly adept at stealing data from air-gapped computers—those machines prized by hackers that, for security reasons, are never connected to the internet or connected to other machines that are connected to the internet, making it difficult to extract data from them.
Mordechai Guri, manager of research and development at the Cyber Security Research Center at Ben-Gurion University, and colleagues at the lab, have previously designed three attacks that use various methods for extracting data from air-gapped machines—methods involving radio waves, electromagnetic waves and the GSM network, and even the heat emitted by computers.
Now the lab’s team has found yet another way to undermine air-gapped systems using little more than the sound emitted by the cooling fans inside computers. Although the technique can only be used to steal a limited amount of data, it’s sufficient to siphon encryption keys and lists of usernames and passwords, as well as small amounts of keylogging histories and documents, from more than two dozen feet away. The researchers, who have described the technical details of the attack in a paper (.pdf), have so far been able to siphon encryption keys and passwords at a rate of 15 to 20 bits per minute—more than 1,200 bits per hour—but are working on methods to accelerate the data extraction.
“We found that if we use two fans concurrently [in the same machine], the CPU and chassis fans, we can double the transmission rates,” says Guri, who conducted the research with colleagues Yosef Solewicz, Andrey Daidakulov, and Yuval Elovici, director of the Telekom Innovation Laboratories at Ben-Gurion University. “And we are working on more techniques to accelerate it and make it much faster.”

The air-gap myth

Air-gapped systems are used in classified military networks, financial institutions and industrial control system environments such as factories and critical infrastructure to protect sensitive data and networks. But such machines aren’t impenetrable. To steal data from them an attacker generally needs physical access to the system—using either removable media like a USB flash drive or a firewire cable connecting the air-gapped system to another computer. But attackers can also use near-physical access using one of the covert methods the Ben-Gurion researchers and others have devised in the past.

One of these methods involves using sound waves to steal data. For this reason, many high-security environments not only require sensitive systems be air-gapped, they also require that external and internal speakers on the systems be removed or disabled to create an “audio gap”. But by using a computer’s cooling fans, which also produce sound, the researchers found they were able to bypass even this protection to steal data.

Most computers contain two or more fans—including a CPU fan, a chassis fan, a power supply fan, and a graphics card fan. While operating, the fans generate an acoustic tone known as blade pass frequency that gets louder with speed. The attack involves increasing the speed or frequency of one or more of these fans to transmit the digits of an encryption key or password to a nearby smartphone or computer, with different speeds representing the binary ones and zeroes of the data the attackers want to extract—for their test, the researchers used 1,000 RPM to represent 1, and 1,600 RPM to represent 0.
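For intuition only, here is a short Python sketch of the encoding side of such a channel, using the two RPM values quoted above. It is not the researchers' Fansmitter code and does not actually drive a fan; it just turns a payload into a schedule of fan speeds.

```python
# Illustration of the encoding idea only: map each bit of a secret to a target
# fan speed (the RPM values quoted above) and hold each speed for a fixed dwell
# time, so a nearby microphone can distinguish the two acoustic tones.
RPM_FOR_ONE = 1000
RPM_FOR_ZERO = 1600
DWELL_SECONDS = 3          # arbitrary choice; longer dwell = slower but more robust

def to_rpm_schedule(data: bytes):
    """Yield (rpm, seconds) pairs for every bit of the payload, MSB first."""
    for byte in data:
        for bit_index in range(7, -1, -1):
            bit = (byte >> bit_index) & 1
            yield (RPM_FOR_ONE if bit else RPM_FOR_ZERO, DWELL_SECONDS)

# At one bit every few seconds the channel is slow -- on the order of the
# 15 to 20 bits per minute reported in the paper -- but enough for a short key.
for rpm, seconds in to_rpm_schedule(b"k3y"):
    print(f"hold {rpm} RPM for {seconds}s")
```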

The attack, like all previous ones the researchers have devised for air-gapped machines, requires the targeted machine first be infected with malware—in this case, the researchers used proof-of-concept malware they created called Fansmitter, which manipulates the speed of a computer’s fans. Getting such malware onto air-gapped machines isn’t an insurmountable problem; real-world attacks like Stuxnet and Agent.btz have shown how sensitive air-gapped machines can be infected via USB drives.

To receive the sound signals emitted from the target machine, an attacker would also need to infect the smartphone of someone working near the machine using malware designed to detect and decode the sound signals as they’re transmitted and then send them to the attacker via SMS, Wi-Fi, or mobile data transfers. The receiver needs to be within eight meters or 26 feet of the targeted machine, so in secure environments where workers aren’t allowed to bring their smartphones, an attacker could instead infect an internet-connected machine that sits in the vicinity of the targeted machine.
Normally, fans operate at between a few hundred RPMs and a few thousand RPMs. To prevent workers in a room from noticing fluctuations in the fan noise, an attacker could use lower frequencies to transmit the data or use what’s known as close frequencies, frequencies that differ only by 100 Hz or so to signify binary 1’s and 0’s. In both cases, the fluctuating speed would simply blend in with the natural background noise of a room.

“The human ear can barely notice [this],” Guri says.

The receiver, however, is much more sensitive and can even pick up the fan signals in a room filled with other noise, like voices and music.

The beauty of the attack is that it will also work with systems that have no acoustic hardware or speakers by design, such as servers, printers, internet of things devices, and industrial control systems.

The attack will even work on multiple infected machines transmitting at once. Guri says the receiver would be able to distinguish signals coming from fans in multiple infected computers simultaneously because the malware on those machines would transmit the signals on different frequencies.

There are methods to mitigate fan attacks—for example, by using software to detect changes in fan speed or hardware devices that monitor sound waves—but the researchers say they can produce false alerts and have other drawbacks.

Guri says they are “trying to challenge this assumption that air-gapped systems are secure,” and are working on still more ways to attack air-gapped machines. They expect to have more research done by the end of the year.

14/11/2016

Facebook hits 20Gbps for data transmission ( internet drone)

A test by Facebook of a point-to-point radio link for its Aquila Internet drone system. The transmitter used a cascade of custom-built millimeter wave components that were attached to a commercial 60-centimeter diameter parabolic antenna. The transmitter and the receiver were separated by a distance of 13.2 km in Southern California.

Facebook has succeeded in transmitting data at almost 20Gbps between two towers in Southern California in tests of a technology key to its plans to deliver internet service to rural areas using drones.
The tests were conducted earlier this year and made use of frequencies in the so-called E-band, a group of millimeter wave frequencies between 60 and 90GHz.

Such signals are capable of high-bandwidth data transmission but are susceptible to attenuation from distance, weather, and obstacles, so they are typically used for short-range, point-to-point transmissions.
Facebook used a 60-centimeter dish to send data over a 13-kilometer link between Malibu and Woodland Hills.
That test initially shot data at between 100Mbps and 3Gbps and allowed engineers to collect transmission data on clear days and during clouds, fog, high winds, and rain, Facebook said in a Thursday blog post.

For the tests, Facebook ended up building many of its own microwave transmission components into the system, which used a 1.2-meter dish on the receiving end.
The signal received by the dish would drop by half if it was mispointed by just 0.2 degrees, so accuracy was key.
To get the data rate up to 20Gbps, the dish needed to be almost spot on: an accuracy of better than 0.07 degrees. To put that in perspective, it's the equivalent of a baseball pitcher hitting a quarter, Facebook said.
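A quick sanity check of that analogy, using everyday approximations (a roughly 24 mm coin at roughly the 18.4 m pitching distance; neither figure comes from the article):

```python
# The angle subtended by a US quarter at the pitching distance comes out close
# to the 0.07-degree pointing tolerance quoted above. The coin and mound
# figures are everyday approximations, not from the article.
import math

QUARTER_DIAMETER_M = 0.024
PITCHING_DISTANCE_M = 18.4
LINK_DISTANCE_M = 13_200

angle_deg = math.degrees(QUARTER_DIAMETER_M / PITCHING_DISTANCE_M)  # small-angle approx.
print(f"angle subtended by the quarter: {angle_deg:.3f} degrees")   # ~0.075

# The same 0.07-degree error over the 13.2 km test link moves the beam centre
# by roughly this much at the receiving end:
offset_m = LINK_DISTANCE_M * math.tan(math.radians(0.07))
print(f"beam offset at 13.2 km: {offset_m:.1f} m")                  # ~16 m
```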

The distance achieved is significant and already useful for point-to-point links that could transmit high-bandwidth internet signals terrestrially.
But it isn’t quite far enough to be used as the backhaul link for the company’s envisaged drone internet service.
Facebook’s Aquila drones will deliver internet access to remote areas from altitudes of between 60,000 and 90,000 feet, roughly 18 to 27 kilometers. Because the drones won’t always be directly above the ground station, the link distance could be longer.

For the drone service, Facebook will need to increase the range to between 30 and 50 kilometers and increase the bandwidth to 30Gbps, the company said.
The next step for the California tests is a ground-to-air system that will transmit data to a Cessna aircraft with a data transceiver on board.

Testing of a system that can shoot a 20Gbps data link to the aircraft at an altitude of up to 20,000 feet (six kilometers) has already begun. In 2017, Facebook plans to up the speed to 40Gbps both to and from the aircraft.
“We still have several connectivity and technical challenges to resolve before the technology is fully ready for deployment,” Facebook said.

08/11/2016

LEGO Mindstorms EV3 Demo