Archive for ‘Misc’


NFV vs SDN Network Function Virtualization

The goal of NFV is to decouple network functions from dedicated hardware and allow network services that are now carried out by routers, firewalls, load balancers and other dedicated hardware devices to be hosted on virtual machines (VMs). Once the network functions are under the control of a hypervisor, services that once required dedicated hardware can be performed on standard x86 servers.

This capability is important because it means that network administrators will no longer need to purchase dedicated hardware devices in order to build a service chain. Because server capacity can be added through software, network administrators will not need to overprovision their data centers, which reduces both capital expenses (capex) and operating expenses (opex). If an application running on a VM needs more bandwidth, for example, the administrator can move the VM to another physical server or provision another virtual machine on the original server to take part of the load. This flexibility allows an IT department to respond more agilely to changing business goals and network service demands.
NFV is different from software-defined networking (SDN) but is complementary to it: when SDN runs on the NFV infrastructure, SDN forwards the data packets from one network device to another while the network routing (control) functions run on a virtual machine in, for example, a rack-mount server. The NFV concept, which was presented by a group of network service providers at the Software Defined Network and OpenFlow World Congress in October 2012, is being developed by the ETSI Industry Specification Group (ISG) for Network Functions Virtualization.
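The core idea of a service chain built from software functions rather than appliances can be sketched in a few lines. This is an illustrative model only, not a real VNF framework; the packet fields, ports, and addresses are invented for the example:

```python
# Illustrative sketch: a service chain of virtualized network functions.
# Each "appliance" is just a function, so scaling out means provisioning
# another copy in software instead of buying new hardware.

def firewall(packet):
    # Drop traffic to a blocked port (telnet here); pass everything else.
    return None if packet["dst_port"] in {23} else packet

def load_balancer(packet, backends=("10.0.0.2", "10.0.0.3")):
    # Pick a backend by hashing the flow source; rewrite the destination.
    packet["dst_ip"] = backends[hash(packet["src_ip"]) % len(backends)]
    return packet

def run_chain(packet, chain):
    """Push a packet through each virtualized function in order."""
    for vnf in chain:
        packet = vnf(packet)
        if packet is None:  # a function in the chain dropped the packet
            return None
    return packet

result = run_chain({"src_ip": "192.0.2.1", "dst_port": 443},
                   [firewall, load_balancer])
```

Because each function is only software here, adding capacity is a provisioning decision rather than a purchasing one, which is the flexibility the paragraph above describes.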


Public sector needs CIOs to lead digital transformation

Public sector CIOs can lead a digital revolution in their organisations, creating new citizen services, according to Gartner, but only if they can overcome the risk-averse culture typically found in government.

The analyst company says that CIOs in government organisations are well positioned to drive digitisation efforts and to implement better ways of working, but that change requires vision and changes to culture and leadership practices.
Public sector organisations are often constrained by top-down hierarchies, cultural legacies and the lack of a compelling vision, Gartner said, but there are successful examples of organisations that have changed focus for the better.
Elise Olding, research vice president at Gartner, said that public sector leaders — including CIOs — must create a culture that is less averse to change, unified in vision and direction, and that can manage change more effectively over longer time frames.
“Public sector organisations often have cultural and organisational mechanisms to buffer them from rapid swings in the political or economic landscape. While this provides stability, it also makes large-scale organisational change a difficult prospect,” Olding said.
“None of these challenges are insurmountable. Based on our conversations with public sector CIOs who have seen success in their digital transformation, Gartner has identified three key recommendations,” she added.
Gartner’s key recommendations for successful digital leadership are:
1- Promote a Compelling Vision
In an ideal scenario, a CIO will receive clear direction on the strategic intent of the organisation and the role IT will play in that. Too often, however, public sector organisations lack a clear ‘business’ strategy to which the CIO can align IT investments. Yet, in either case, it’s vital the CIO formulates a vision of how technology investments will achieve a desired future state for the organization.
“The best kind of vision should fit on a postcard,” said Olding. “It expresses in clear, non-technical terms on one page what is wrong with the status quo, and outlines a set of activities and investments that will improve things.”
A vision like this allows for engagement with executive leaders, so they can affirm, revise or reject and replace the strategic direction the CIO has outlined for the IT organisation. If clear executive direction was lacking from the outset, this engagement may serve as a catalyst to improve the strategy outside the IT organisation. If a clear direction was in place, the vision will still affirm and provide a template for IT’s role in bringing it to reality.
2- Make Change Inclusive
“Getting executive buy-in is just the first step; the vision is the cornerstone for action,” said Olding. “It’s critical to communicate the vision to midlevel management and frontline workers in a way that demonstrates how their role fits into the vision, and how the completed vision will improve their role. A credible answer to the question, ‘What’s in it for me?’ builds caring and belief.”
It’s also important that the vision shows how it builds on the good work of earlier efforts. This will not be the first vision seen by most employees. Many of them will have invested in one or more previous visions, only to see them swept away or discredited by a new round of leaders. They may be justifiably sceptical of a new picture. To win their support CIOs must avoid hyping their vision as a panacea, but rather present it as an iteration and expansion of previous achievements.
In addition to honouring the culture and legacy of an organisation and how it contributes to the future vision, CIOs must cultivate ‘change agents’. These are employees who clearly understand the vision and its benefits, and champion it among their peers. CIOs can better harness the creativity and insights of the entire organisation when they constantly invite, encourage and support employees at all levels who show desire to make the vision a reality.
3- Alter Leadership Practices
Embracing change will require changes for everyone, and that starts with leadership. Organisational cultures can foster myths that are comfortable yet counterproductive. Such myths are rooted in the language of “that’s how we’ve always done things,” which reinforces a victim mentality and smothers innovation.
“The CIOs who succeed in transforming the business actively confront ingrained behaviours, traditions and legacy processes,” said Olding. “They challenge leadership and are successful in instilling a clearly defined sense of urgency around their vision that gains the trust and support of the entire organisation, from leadership to frontline workers.”


AR/VR Augmented vs Virtual Reality 

One of the biggest confusions in the world of augmented reality is the difference between augmented reality and virtual reality. Both are earning a lot of media attention and are promising tremendous growth. So what is the difference between virtual reality vs. augmented reality?

What is Virtual Reality?

Virtual reality (VR) is an artificial, computer-generated simulation or recreation of a real life environment or situation. It immerses the user by making them feel like they are experiencing the simulated reality firsthand, primarily by stimulating their vision and hearing.

VR is typically achieved by wearing a headset like Facebook’s Oculus equipped with the technology, and is used prominently in two different ways:

To create and enhance an imaginary reality for gaming, entertainment, and play (Such as video and computer games, or 3D movies, head mounted display).
To enhance training for real life environments by creating a simulation of reality where people can practice beforehand (Such as flight simulators for pilots).

Virtual worlds can be described with a language known as VRML (Virtual Reality Modeling Language), a scene-description format that can define a series of 3D images and specify what types of interactions are possible with them.

What is Augmented Reality?
Augmented reality (AR) is a technology that layers computer-generated enhancements atop an existing reality in order to make it more meaningful through the ability to interact with it. AR is built into apps and used on mobile devices to blend digital components into the real world in such a way that they enhance one another, but can also be told apart easily.


AR technology is quickly coming into the mainstream. It is used to display score overlays on telecasted sports games and pop out 3D emails, photos or text messages on mobile devices. Leaders of the tech industry are also using AR to do amazing and revolutionary things with holograms and motion activated commands.

Augmented Reality vs. Virtual Reality
Augmented reality and virtual reality are inverse reflections of one another in what each technology seeks to accomplish and deliver for the user. Virtual reality offers a digital recreation of a real life setting, while augmented reality delivers virtual elements as an overlay to the real world.

How are Virtual Reality and Augmented Reality Similar?

Augmented and virtual realities both leverage some of the same types of technology, and they each exist to serve the user with an enhanced or enriched experience.

Both technologies enable experiences that are becoming more commonly expected and sought after for entertainment purposes. While in the past they seemed merely a figment of a science fiction imagination, new artificial worlds come to life under the user’s control, and deeper layers of interaction with the real world are also achievable. Leading tech moguls are investing and developing new adaptations, improvements, and releasing more and more products and apps that support these technologies for the increasingly savvy users.

Science and Medicine
Additionally, both virtual and augmented realities have great potential to change the landscape of the medical field by making things such as remote surgeries a real possibility. These technologies have already been used to treat and heal psychological conditions such as Post Traumatic Stress Disorder (PTSD).
How do Augmented and Virtual Realities Differ?

Augmented reality enhances experiences by adding virtual components such as digital images, graphics, or sensations as a new layer of interaction with the real world. Contrastingly, virtual reality creates its own reality that is completely computer generated and driven.

Delivery Method
Virtual reality is usually delivered to the user through a head-mounted display or hand-held controller. This equipment connects people to the virtual reality, and allows them to control and navigate their actions in an environment meant to simulate the real world.
Augmented reality is being used more and more in mobile devices such as laptops, smartphones, and tablets to change how the real world and digital images and graphics intersect and interact.

How do they work together?

Virtual reality and augmented reality do not always operate independently of one another; in fact, they are often blended together to generate an even more immersive experience. For example, haptic feedback (the vibration and sensation added to interaction with graphics) is considered an augmentation. However, it is commonly used within a virtual reality setting in order to make the experience more lifelike through touch.

Virtual reality and augmented reality are great examples of experiences and interactions fueled by the desire to become immersed in a simulated land for entertainment and play, or to add a new dimension of interaction between digital devices and the real world. Alone or blended together, they are undoubtedly opening up worlds, both real and virtual alike.


Brain-Based CPU Chips: Processors That Mimic Brain Cells


Gartner’s predictions in 2018

20% of business content will be authored by machines.
Six billion connected things will be requesting support.
45% of the fastest-growing companies will have fewer employees than instances of smart machines.
Three million workers globally will be supervised by a “robo-boss.”
Customer digital assistants (CDAs) will recognize individuals by face and voice across channels and partners.
Two million employees will be required to wear health and fitness tracking devices as a condition of employment.
By 2020, autonomous software agents outside of human control will participate in 5% of all economic transactions.
By 2020, smart agents will facilitate 40% of mobile interactions.


10 hardware breakthroughs that could revolutionize IT strategy

In the next five years, innovations in storage, devices, chips, and other hardware will revolutionize IT. Here are 10 emerging hardware technologies CIOs should begin to consider in their strategic roadmaps.

1: Mobile devices with hardened security

Security continues to be a major challenge with mobile devices. One option is Intel’s Software Guard Extension (SGX) technology, which will support the use of secure encrypted memory spaces on commodity CPUs. The goal is to provide applications with an area of secure and protected memory and execution. This could be a boon for mobile devices, a leading source of security breaches that corporate IT must contend with. “We will see the start of a new generation of systems solutions that guarantee security even if the operating system or other infrastructure gets compromised by hackers,” said Sriram Rajamani, Microsoft Research India’s assistant managing director, in an eWeek piece on tech predictions.

2: New chip architectures that improve machine learning performance

As more IoT and machine-learning applications enter the IT mainstream, new chip architectures will improve performance beyond what is presently available with graphics processing units (GPUs). These improvements will dramatically speed data transfers and the execution of machine learning and analytics workloads.

3: Drones

Unmanned aerial vehicles (UAVs) will continue to push into commercial applications, whether delivering packages, taking photographic images, or surveying physical terrain that is difficult to access. They will collect IoT data through sensors and channel it into central communications systems.

4: Unmanned robots

Robots can carry out simple medical procedures, clean facilities, and pick and pack items in warehouses. The intelligence in these automated machines will be further increased as new technologies come onboard to collect everything that has been learned by all machines into a central data bank that any machine can access.

5: More user-friendly virtual reality gear

Bulky headsets have encumbered VR users and made them dizzy or seasick, prompting companies to avoid VR applications. That’s about to change. Google Cardboard, for example, provides a small holder for your smartphone and delivers a full-bodied video experience that rivals those produced through older headgear. More comfortable VR headgear will pave the way for greater corporate adoption of VR.

6: New storage technology for greener power grids

Data centers will continue to go green as power companies find better ways to blend diverse energy sources, such as solar, wind, and traditional fossil fuels, into a seamless and uninterrupted supply of energy. Today, the use of hybrid energy is difficult because sources like wind and solar are variable. Better storage can solve this and usher in a new green era that could save data centers and other energy users an estimated $3.4 billion per year.

7: More on-the-job wearables

Juniper Research predicts that smart glasses, smart watches, and a range of motion-sensing devices could improve productivity by 30%. Gartner predicts that by 2018, two million employees, such as law enforcement officers and paramedics, will be required to wear health and fitness tracking devices.

8: Local energy harvesting for Internet access

With automation and a plethora of IoT devices adding to the internet’s workload, new technology is needed to expand bandwidth and access. University of Washington researchers have developed technology that enables internet-connected temperature and motion sensors, cameras, and similar devices to communicate using only energy harvested from nearby TV, radio, cellphone, and Wi-Fi signals. A principle known as backscattering lets IoT devices communicate by reflecting signals emitted by other electronics rather than generating their own. This localized Wi-Fi consumes just 1/10,000th as much power as existing Wi-Fi chipsets.

9: More compact flash memory

3D NAND technology continues to move forward, delivering smaller and more lightweight laptops, tablets, and other devices. Much of this progress is due to the ability of companies like Intel and Micron to stack flash memory cells vertically, which conserves space and enables devices to be smaller and thinner.

10: Nonvolatile memory

With nonvolatile memory, computers can retrieve information even after being turned off and back on. Going forward, we will see new forms of nonvolatile memory that will enable data to be stored at less cost and less power. This will enable smaller devices to store even more data.


Starting your first day as a CXO

Welcome on board

Make a brief introduction

Get the full picture

Build business know-how

Understand the workspace (people, equipment, software)

Explore the production workflow processes and the business model

Familiarize yourself with organizational policies and procedures

Determine performance influencers and key indicators

Socialize with board members

Find underestimated, unhappy potential mentors

Avoid undercover insiders and gossip tellers

Identify the communication deficits and blockers

Watch out for bullies, drudge workers and progress suppressors

Define a plan for where to start and whom to start with

Be effective and efficient

Headhunt your team

Show self-esteem

Find shortcuts and spot the zigzaggers

Do capacity sizing and planning

Define communication channels

Discover past failure stories

Keep listening and learning

Evaluate underpinning contracts

Establish direct vendor relationships and SLAs


How IoT will change the job market

The Internet of Things (IoT) is poised to bring millions of devices online, and as many as a quarter million unique IoT applications will be developed by the year 2020. That means opportunities for skilled developers and technologists will abound. However, there are other, subtler ways the IoT will affect the job market.

“We’re seeing tech companies around the globe getting organized and creating IoT strategies, but where they’re struggling is they don’t have the processes and talent in-house to make these things happen,” says Ryan Johnson, categories director for global freelance marketplace Upwork. By tracking data from Upwork’s database, Johnson and his team have identified major technology skills companies need to drive a successful IoT strategy.

Hard skills

Skills like circuit design, AutoCAD and microcontroller programming will address businesses’ need to adapt circuit design to new form factors and system requirements; design new hardware and add programming and data memory onto microcontrollers, Johnson says.

There’s also great demand for talent skilled in machine learning, algorithm development and data analytics so that companies can develop new ways to gather, analyze and take action based on the data points connected devices create, says Scott Noteboom, CEO of machine learning company LitBit.

There will be a major focus on how interconnected and Internet-enabled devices can communicate with the people who design, develop and build them, Noteboom says, but there will also be an emphasis on how these devices can communicate with each other.

Machine-to-machine communication
“One of the things we’re looking at is the idea of crowdsourcing, but in this case the ‘crowd’ is made up of all these connected machines,” Noteboom says. How can machines share insights on failure data? Imagine a sensor in a household air conditioning unit that measures refrigerant level, energy usage, heat and other necessary metrics. Not only can it learn that when the refrigerant level gets too low, the unit vibrates differently and produces a higher level of heat; now it can go on the Internet and share that data with all the other relevant AC units: “This is what it looks, feels and sounds like when you’re about to fail!”

“Then they all set up new alerts so their owners know when to have the refrigerant levels checked. That’s the ‘perfect storm’ warning before the compressor burns out and the unit fails. That’s pretty awesome,” Noteboom says.
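The crowdsourced-failure idea can be sketched simply. This is an illustrative model only; the metric names, values, and tolerances are invented for the example, not taken from any real AC product:

```python
# Illustrative sketch: machines diagnosing themselves against failure
# signatures published by other machines that later failed.

FAILURE_SIGNATURES = [
    # Shared by units that failed: low refrigerant showed up as
    # unusual vibration plus elevated heat before the compressor burned out.
    {"vibration_hz": 62.0, "temp_c": 55.0, "cause": "low refrigerant"},
]

def close(a, b, tol):
    """True if two readings agree within a tolerance."""
    return abs(a - b) <= tol

def diagnose(reading, signatures=FAILURE_SIGNATURES):
    """Return the likely cause if current readings match a shared signature."""
    for sig in signatures:
        if close(reading["vibration_hz"], sig["vibration_hz"], 2.0) and \
           close(reading["temp_c"], sig["temp_c"], 3.0):
            return sig["cause"]
    return None

print(diagnose({"vibration_hz": 61.2, "temp_c": 56.5}))  # matches the signature
print(diagnose({"vibration_hz": 50.0, "temp_c": 40.0}))  # healthy unit
```

A real deployment would publish and fetch signatures over the network; the point of the sketch is that a unit can raise an alert before its own failure by matching patterns other machines have already experienced.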

The IoT has already demonstrated an increased need for security, both because of the potential for increased data exposure and because of concerns about the device-level and physical security of connected “things,” says Johnson.

“The added scale and complexity of IoT connectivity, communications and the endpoints themselves complicates things. Within security infrastructure, we’re seeing strong demand on our platform for network security developers and programmers, and people with vulnerability analysis experience to conduct in-depth assessments to identify threats to embedded systems such as local controllers/gateways and determine the risk at the device level,” Johnson says.

The human factor
There’s another, more human way the IoT will affect the job market that’s not often addressed, says Noteboom. With so many connected, communicative devices that could potentially have the ability to take on arduous, repetitive tasks and “drudge work,” the IoT could open up new avenues of creativity and collaboration, he says.

“The IoT has the potential to change the human experience the same way the assembly line and the Industrial Revolution did. It changes the human-machine relationship in similar ways; machines will soon be able to do repetitive tasks driven by their past experiences,” he says. That means more time and energy for solving problems: creating technology that addresses pollution, saves energy, uses biotechnology to grow crops in new ways, or generates electrical power, he says.

“If you can use IoT in a data center, for instance, to figure out optimal cooling levels and regulate power consumption, you can help companies save energy without having as many personnel involved. IoT can help reduce the amount of repetitive work, and that will free up people to do more learning, exploring and creating new ideas, new knowledge. Instead of focusing on the accumulation of learning things, we can focus on creating new things that will help our fellow humans,” Noteboom says.


What It Means To Be a HIPAA-Compliant Call Center

Anyone in charge of the operations of a medical practice understands that there are many tasks that need to be completed on a daily basis. With a healthcare practice, customer interaction is vital. Your patients want to speak with a representative even outside of normal office hours. Because of this, many offices consider utilizing an answering service to provide patients the level of personal contact they require. However, before you trust your important patient information to a medical call center, you want to make certain that the information will be protected. Not only do you want to do this as a service to your patients, but you are responsible for making certain that HIPAA privacy standards are upheld by the call center from which you contract services. Learn more about the different ways a call center can follow these HIPAA guidelines so that you know what to expect when you contract with one of these organizations.
What are the Basic HIPAA Requirements?

The entire set of HIPAA privacy regulations is quite complex and as such outside the scope of a short blog. However, the basic layout of the requirements is as follows:
The goal of HIPAA is to protect the health information of patients. Protected health information relates directly to an individual’s past, current, or future medical care. This can include health care billing and payment information, as well as demographic information.
Patients have the right to decide how their health care information is used. Therefore a patient must sign a release of information before it can be shared outside of the doctor-patient setting. The reasoning behind this is that it will better control how patient medical records are managed.
Once the health care professional has information from a patient, they are required to follow certain guidelines to protect it. Wrongful disclosure or misuse of medical information is prohibited and could subject a medical professional to fines and/or imprisonment.
Because of the stringency of these guidelines, it is vital that the doctor’s office only works with a call center that will keep the information just as secure as the practice itself.
Assessing a Potential Call Center for HIPAA-Compliance

There are a number of ways that the medical call center can keep this patient information confidential. However, look for a call center that – at the very least – follows these simple guidelines:
Hires only screened professionals to work with sensitive data. This can prevent an inside information leak. Ask any prospective answering service provider what sort of screening process they have in place for agents who answer calls on behalf of healthcare providers.

Follows a privacy policy. This can mean not accepting sensitive information over unsecured email or data connections. Changing the way data is shared can make a big difference in its overall security.

Has the flexibility to work with the medical practice to develop customized procedures and policies that ensure your specific needs are met.

Utilizes encryption on computers, smartphones and any other devices that house patient information. This can prevent information from leaking to a hacker or in an accidental breach of the computer’s basic security system.

Regularly conducts security assessments. This ensures that the facility does not have any gaps in privacy services.

Has a disaster recovery plan. Should a catastrophic event befall the call center, a properly conceived disaster recovery (DR) plan will ensure that all data pertaining to your business and your patients remains secure and can be restored and retrieved.

Provides ongoing HIPAA training for call center management and staff. Staying up to date on current practices and regulations requires an ongoing dedication to training.

Your medical call center provider is an important business partner that provides a critical service to your business: it handles patient communications so that your staff can focus on patient treatment. Since so much patient information flows through the call center or answering service, it is every bit as important for the call center to be HIPAA-compliant as it is for your internal team to be compliant. You should undertake a thorough review of a potential call center’s practices in order to be confident that your patients’ information will be properly handled.


117 Million LinkedIn Passwords Up for Sale

The hacker who stole millions of email addresses and passwords belonging to LinkedIn users in 2012 now intends to sell them, a haul amounting to 117 million email addresses and passwords.

Users of the site well remember the 2012 hack, in which as many as 6.5 million passwords were leaked onto the Internet, along with the theft of data belonging to millions of the site’s customers.

Four years after the incident, a hacker going by the name Peace is offering for sale a database covering no fewer than 167 million LinkedIn accounts, of which he has managed to crack 117 million passwords.


The only remedy: if you have not changed your password since 2012, change it now, quickly, and do not choose anything easy. A varied mix of words that mean something to you, or even the lyrics of a favorite song, combined with some symbols, is a better choice.


4 big plans to fix the Internet

Here are several promising security proposals that could make a difference in Internet security. None are holistic solutions, but each could make the Internet a safer place, if they could garner enough support.
1. Get real about traffic routing

The Internet Society, an international nonprofit organization focusing on Internet standards, education, and policy, launched an initiative called MANRS, or Mutually Agreed Norms for Routing Security.

Under MANRS, member network operators — primarily Internet service providers — commit to implementing security controls to ensure incorrect router information doesn’t propagate through their networks. The recommendations, based on existing industry best practices, include defining a clear routing policy, enabling source address validation, and deploying antispoofing filters. A “Best Current Operational Practices” document is in the works.
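Source address validation, one of the MANRS recommendations above, can be illustrated with a short sketch. This is a simplified model of a strict uRPF-style check, not router code; the prefixes are documentation ranges, not real customer routes:

```python
import ipaddress

# Simplified sketch of source-address validation at a provider edge:
# accept a packet from a customer port only if its source address
# falls inside a prefix actually assigned to that customer.

CUSTOMER_PREFIXES = ["203.0.113.0/24", "198.51.100.0/24"]  # example ranges

def permits_source(src_ip: str, prefixes=CUSTOMER_PREFIXES) -> bool:
    addr = ipaddress.ip_address(src_ip)
    return any(addr in ipaddress.ip_network(p) for p in prefixes)

print(permits_source("203.0.113.7"))   # legitimate customer source
print(permits_source("192.0.2.99"))    # forged source: the filter drops it
```

A provider that applies this check at its edge keeps its customers from emitting packets with forged source addresses, which is exactly what the spoofing-based attacks described below rely on.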

It’s Networking 101: The data packets have to reach their intended destination, but it also matters what path the packets take. If someone in Canada is trying to access Facebook, his or her traffic shouldn’t have to pass through China before reaching Facebook’s servers. Recently, traffic to IP addresses belonging to the U.S. Marine Corps was temporarily diverted through an ISP in Venezuela. If website traffic isn’t secured with HTTPS, these detours wind up exposing details of user activity to anyone along the unexpected path.

Attackers also hide their originating IP addresses with simple routing tricks. The widely implemented User Datagram Protocol (UDP) is particularly vulnerable to source address spoofing, letting attackers send data packets that appear to originate from another IP address. Distributed denial-of-service attacks and other malicious attacks are hard to trace because attackers send requests with spoofed addresses, and the responses go to the spoofed address, not the actual originating address.

When the attacks are against UDP-based servers such as DNS, multicast DNS, the Network Time Protocol, the Simple Server Discovery Protocol, or the Simple Network Management Protocol, the effects are amplified.

Many ISPs are not aware of different attacks that take advantage of common routing problems. While some routing issues can be chalked up to human error, others are direct attacks, and ISPs need to learn how to recognize potential issues and take steps to fix them. “ISPs have to be more responsible about how they are routing traffic,” Webb says. “A lot of them are susceptible to attack.”

ISOC had nine network operators participating in the voluntary program when it launched in 2014; now there are more than 40. For MANRS to make a difference, it needs to expand so that it can influence the market. ISPs that decide not to bother with the security recommendations may find they lose deals because customers will sign with MANRS-compliant providers. Or smaller ISPs may face pressure from larger upstream providers who refuse to carry their traffic unless they can show they’ve implemented appropriate security measures.

It would be great if MANRS became a de facto standard for all ISPs and network providers, but scattered safe neighborhoods are still good enough. “If you require everyone to do it, it is never going to happen,” Webb says.
2. Strengthen digital certificate auditing and monitoring

There have been many attempts to address the issues with SSL, which protects the majority of online communications. SSL helps identify if a website is the site it claims to be, but if someone tricks a certificate authority (CA) into fraudulently issuing digital certificates for a site, then the trust system breaks down.

Back in 2011, an Iranian attacker breached Dutch CA DigiNotar and issued certificates, including ones for Google, Microsoft, and Facebook. The attacker was able to set up man-in-the-middle attacks with those certificates and intercept traffic for the sites. This attack succeeded because the browsers treated the certificate from DigiNotar as valid despite the fact that the sites had certificates signed by a different CA.

Google’s Certificate Transparency project, an open and public framework for monitoring and auditing SSL certificates, is the latest attempt to solve the man-in-the-middle problem.

When a CA issues a certificate, it’s recorded on the public certificate log, and anyone can query for cryptographic proof to verify a particular certificate. Monitors on servers periodically examine the logs for suspicious certificates, including illegitimate certificates issued incorrectly for a domain and those with unusual certificate extensions.

Monitors are similar to credit reporting services, in that they send alerts regarding malicious certificate usage. Auditors make sure the logs are working correctly and verify a particular certificate appears in the log. A certificate not found in the log is a clear signal to browsers that the site is problematic.
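What makes the log trustworthy is its structure: Certificate Transparency logs (RFC 6962) are append-only Merkle hash trees, so the published tree head commits to every certificate ever logged, and auditors can demand cryptographic proof that an entry is included. A minimal sketch of the RFC 6962 hashing scheme, with toy byte strings standing in for real certificates:

```python
# Minimal sketch of the RFC 6962 Merkle tree hashing used by Certificate
# Transparency logs. Leaves and interior nodes are domain-separated with a
# 0x00 / 0x01 prefix so a leaf can never be confused with an interior node.
import hashlib

def leaf_hash(entry: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + entry).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def tree_head(entries):
    """Merkle tree head over the log entries (RFC 6962 split rule)."""
    if len(entries) == 1:
        return leaf_hash(entries[0])
    # Split at the largest power of two strictly less than len(entries).
    k = 1
    while k * 2 < len(entries):
        k *= 2
    return node_hash(tree_head(entries[:k]), tree_head(entries[k:]))

certs = [b"cert-for-example.com", b"cert-for-example.org"]
head = tree_head(certs)
print(head.hex())
```

Because any change to a logged entry changes the tree head, a log operator cannot quietly remove or alter a certificate after the fact, which is exactly the property monitors and auditors rely on.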

With Certificate Transparency, Google hopes to tackle wrongly issued certificates, maliciously acquired certificates, rogue CAs, and other threats. Google certainly has technology on its side, but it has to convince users that this is the right approach.

DNS-based Authentication of Named Entities (DANE) is another attempt to solve the man-in-the-middle problem with SSL. The DANE protocol reinforces the point that a sound technology solution doesn’t automatically win users. DANE pins SSL sessions to DNSSEC, the domain name system’s security layer.

While DANE successfully blocks man-in-the-middle attacks against SSL and other protocols, it is haunted by the specter of state surveillance. DANE relies on DNSSEC, and since governments typically own the DNS for top-level domains, there is concern about trusting federal authorities to run the security layer. Adopting DANE means governments would have the kind of access certificate authorities currently wield, and that makes users understandably uneasy.
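Mechanically, DANE works by publishing a TLSA record in DNSSEC-signed DNS that pins, for example, the SHA-256 digest of the server's certificate; a connecting client compares the certificate presented in the TLS handshake against the published record. A minimal sketch of that comparison, with dummy bytes standing in for a real DER-encoded certificate:

```python
# Minimal sketch of DANE-style certificate pinning: a TLSA record with
# matching type 1 stores the SHA-256 digest of the server certificate, and
# the client checks the certificate presented during the TLS handshake
# against it. The "certificate" bytes here are dummy stand-ins.
import hashlib

def tlsa_association_data(cert_der: bytes) -> str:
    """TLSA matching type 1: SHA-256 over the full certificate."""
    return hashlib.sha256(cert_der).hexdigest()

# The domain owner publishes this digest in a DNSSEC-signed TLSA record.
pinned_cert = b"dummy DER-encoded certificate"
tlsa_record = tlsa_association_data(pinned_cert)

def dane_check(presented_cert: bytes, record: str) -> bool:
    return tlsa_association_data(presented_cert) == record

print(dane_check(pinned_cert, tlsa_record))                      # True
print(dane_check(b"certificate from an impostor", tlsa_record))  # False
```

Because the record lives in the domain owner's own signed DNS zone, no certificate authority is consulted at all, which is both DANE's appeal and, given who controls DNS, its liability.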

Despite any misgivings users may have about trusting Google, the company has moved forward with Certificate Transparency. It even recently launched a parallel service, Google Submariner, which lists certificate authorities that are no longer trusted.
3. Tackle the malware problem once and for all

Almost a decade ago, Harvard University’s Berkman Center for Internet & Society launched StopBadware, a joint effort with tech companies such as Google, Mozilla, and PayPal to experiment with strategies for combating malicious software.

In 2010, Harvard spun off the project as a stand-alone nonprofit. StopBadware analyzes badware (malware and spyware alike) to provide removal information and to educate users on how to prevent recurring infections. Users and webmasters can look up URLs, IPs, and ASNs, as well as report malicious URLs. Technology companies, independent security researchers, and academic researchers collaborate with StopBadware to share data about different threats.
4. Reinvent the Internet

Then there’s the idea that the Internet should be replaced with a better, more secure alternative.

Doug Crockford, currently a senior JavaScript architect at PayPal and one of the driving forces behind JSON, has proposed Seif: an open source project that reinvents all aspects of the Internet. He wants to redo transport protocols, redesign the user interface, and throw away passwords. In short, Crockford wants to create a security-focused application platform to transform the Internet.

Seif proposes replacing DNS addressing with a cryptographic key and IP address, HTTP with secure JSON over TCP, and HTML with a JavaScript-based application delivery system built on Node.js and Qt. CSS and the DOM would also go away under Seif. JavaScript, for its part, would remain the key cog in building simpler, more secure Web applications.

Crockford also has an answer for SSL’s reliance on certificate authorities: a mutual authentication scheme based on a public key cryptographic scheme. Details are scarce, but the idea depends on searching for and trusting the organization’s public key instead of trusting a specific CA to issue the certificates correctly.


Enterprise Security 360 Oberoi-Dubai


D- Link Easy Voiz PBX DVX-3000

HPE Hyper Converged 250 for Microsoft CPS standard

Comguard Centrify


Huawei Safe City Summit



Gitex 2017 16-20 OCT 2016


Gulf Air creates private cloud to support open-source big data engine

Airline is using a private cloud and open-source software to enable it to analyse social media and understand what consumers think about it
Gulf Air has created a private cloud to support a big data engine that will enable it to monitor consumer sentiment about the airline on social media.

Bahrain’s national carrier is using Red Hat Enterprise Linux, Red Hat JBoss Enterprise Application Platform, and Red Hat Storage as a platform for its Arabic Sentiment Analysis system, which monitors people’s comments through their social media posts.

It processes the posts and provides reports on what customers are saying about Gulf Air.

Because the software is open source, there were no licence fees, and the airline was able to run it on its existing infrastructure.

Gulf Air, which has 28 aircraft serving 39 cities in 22 countries, has developed a sentiment analysis engine using big data technologies that can address social media posts in both Arabic and English. It is based on an open-source Hadoop big data framework running across servers in Gulf Air’s private cloud environment.
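The article does not describe how Gulf Air's engine scores posts, but the core idea of lexicon-based sentiment analysis (the kind of per-post scoring a Hadoop map task could run in parallel across a cluster) can be sketched simply. The tiny lexicon and sample posts below are invented for illustration:

```python
# Toy lexicon-based sentiment scoring of the kind a Hadoop map task might
# apply independently to each social media post. The tiny lexicon and the
# sample posts are invented examples, not Gulf Air's actual system.
LEXICON = {"great": 1, "love": 1, "delayed": -1, "lost": -1, "rude": -1}

def score(post: str) -> int:
    """Sum of lexicon weights for words in the post (0 = neutral)."""
    return sum(LEXICON.get(word, 0) for word in post.lower().split())

posts = [
    "Great crew, love this airline",
    "Flight delayed and they lost my bag",
]
for p in posts:
    label = "positive" if score(p) > 0 else "negative" if score(p) < 0 else "neutral"
    print(f"{label}: {p}")
```

A production system, particularly one handling Arabic as well as English, would need proper tokenisation, stemming, and a far larger lexicon or a trained model, but the map-style structure (score each post independently, then aggregate) is what makes the workload a natural fit for Hadoop.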

The private cloud encompasses 200 servers running more than 100 core applications and holds more than 50 terabytes of data.

Gulf Air’s IT team also uses this as the basis for a wider analysis of the state of the market and actions taken by the carrier’s competitors.


Transforming IT from Communism to Capitalism with Software Defined

As many of you likely already know, Interop opened this week in Las Vegas. What you might not know is that it hosted a one-day Software-Defined Architecture Summit that focused on everything from what is software-defined technology to how to manage the migration from hardware- to software-based technology, and everything in between. If you missed the event, don’t worry. You can still get access to the Summit presentations here.

I spoke to one of the Summit’s presenters, Jack Poller, a lab analyst with the Enterprise Strategy Group, and he had an interesting take on how software-defined everything (software-defined networking, storage, and data center) is revolutionizing the IT industry, or, more aptly, transforming IT from Communism to Capitalism.

The way Poller explains it, computers have evolved over time from systems defined by chips—hardware computing—to something now defined by software; in other words, software computing, which is really just virtualization. He further adds that, “This transformation from hardware to software has played out over and over again; first with computer systems, then with storage and networking. Each time, we’ve evolved from viewing the world as based on our hardware architecture and using that to solve a specific problem to a more general view of using a software architecture to solve our problems.”

As that technology evolution occurs, it flips power and control from one group to another. Poller points out that this flip is something that’s occurred throughout history, a recent example of which can be seen in the music industry.

Years ago, musicians made their money from albums and performed concerts as a way to generate interest in buying those albums. Today, however, with the modern Internet and free music downloads, recorded music is no longer scarce. Instead, what has become scarce are live musical performances. So now musicians use their recorded music as a form of promotion to drive consumers to their concerts, and the concerts are where they make money.

As Poller explains, the technology of the Internet inverted the power structure in the music industry. He sees the same thing happening to the IT industry because of software-defined technology. “Computers used to be these big, very complex, very expensive machines, and because they were such capital- and resource-intensive things you ended up with a centralized resource and centralized control structure, which is essentially what communism is. Now, with software-defined technology, there is still a centralized block of resources, but those resources can be divvied out as necessary.”

In other words, as the evolution to software-defined technology takes hold, IT will shift from being the group that controls the resource and defines what it is to being the group that pushes authority, responsibility, control, and decision-making down to the people actually using the resource: those closest to the problem they are trying to solve.

Poller suggests thinking of it this way: “Traditionally, IT says the company needs a storage system, we’re going to invest X amount of dollars in the system and it has to meet the needs of most of the company—even though it doesn’t necessarily meet the particular needs of any one group in the company. With software-defined technology, IT can now get a software-defined storage system and build it exactly to the needs of any group in the company.”

This is a critical point: in the past, the only way to tailor a storage solution to the needs of a particular group was to buy specific storage units meeting those requirements. With software-defined technology, a group can say, “I need a chunk of storage with this response time or this capacity,” and IT can provision it through software. Furthermore, IT can delegate administrative control to the group that needs that property most, so they can use it however they want.
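The provisioning model Poller describes can be sketched as a request for attributes (capacity, latency) rather than for a specific hardware unit, with a controller choosing a matching pool from the shared resource block. All class, pool, and tier names below are invented for illustration:

```python
# Sketch of attribute-driven storage provisioning in the software-defined
# model described above: a group requests capacity and a service level, and
# the controller picks a matching pool. Pool and tier names are invented.
from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    free_gb: int
    max_latency_ms: float

POOLS = [Pool("nvme-tier", 500, 1.0), Pool("archive-tier", 10_000, 20.0)]

def provision(capacity_gb: int, max_latency_ms: float) -> str:
    """Return the first pool meeting the requested capacity and latency."""
    for pool in POOLS:
        if pool.free_gb >= capacity_gb and pool.max_latency_ms <= max_latency_ms:
            pool.free_gb -= capacity_gb
            return pool.name
    raise RuntimeError("no pool satisfies the request")

# A latency-sensitive group and an archival group get different tiers
# carved from the same shared resource block.
print(provision(100, 2.0))    # nvme-tier
print(provision(2000, 50.0))  # archive-tier
```

The point of the sketch is the inversion Poller describes: the consuming group states its requirements, and software, not a purchasing decision, maps them onto the shared pool.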

By inverting the power and control, software-defined technology is making IT less like Communism and more like Capitalism. And according to Poller, this is a transformation that’s already begun to take place, although as he points out, “It’s just the beginning of the revolution.”

