Tag Archives: compliance

Cloud Computing Data Security

Meeting the requirements for cloud data security entails applying existing security techniques and following sound security practices. To be effective, cloud data security depends on more than simply applying appropriate countermeasures. Taken collectively, countermeasures must comprise a resilient mosaic that protects data at rest as well as data in motion.

While the use of encryption is a key component for cloud security, even the most robust encryption is pointless if the keys are exposed or if encryption endpoints are insecure. Customer or tenant control over these endpoints will vary depending on the service model and the deployment model.

It is understandable that prospective cloud adopters would have security concerns around storing and processing sensitive data in a public, hybrid, or even a community cloud. Compared to a private data center, these concerns usually center on two areas:

  • Decreased control by the owning organization when data is no longer managed within an organization’s premises
  • Concern by the owning organization that multitenancy clouds inherently pose risks to sensitive data

In both cases, the potential risk of data exposure is real but not fundamentally new. This is not to say that cloud computing does not bring unique challenges to data security.

Control over Data and Public Cloud Economics

In contrast to using a public cloud, maintaining physical control over stored data, or over data as it traverses internal networks and is processed by on-premises computers, does offer potential security advantages. But the fact is that while many organizations may enforce strict on-premises-only data policies, few actually follow through and implement the broad controls and the disciplined practices necessary to achieve full and effective control.

So while additional risks may be present when data doesn't physically exist within the confines of an organization's controlled facility, this is not necessarily the security issue it may appear to be. To begin with, achieving the potential advantages of on-premises data requires that your security strategy and implementation deliver on the promise of better security.

The basic problem is that most organizations are not in the information security business, nor are they qualified to be; they are simply using computers and networks to get their work done. Although secure computing is a desirable quality, information security expertise is not a core competency for most computer users, nor is it common in most organizations. Returning to the point:

  • Moving data off premises does not necessarily pose new risks, and it may in fact improve your security.
  • Entrusting your data to an external custodian may result in better security and may well be more cost effective.

Two examples that underscore this are the commercial service offerings to either store highly sensitive data for disaster recovery or assure the destruction of magnetic media. In both cases, many highly paranoid organizations tightly control how they use these services—but the point is that they use external services, and when they do so, they entrust their data to external custodians.

It is important to state that some kinds of data are simply too sensitive, and the consequences of exposure too great, for some customers to seriously consider using a public cloud for processing. This applies to any category that entails national security information or information subject to regulatory controls that cannot yet be met by public cloud offerings. Likewise, it is unlikely that a well-governed organization would release highly sensitive future product plans into any environment where it is uncertain whether the information custodian (the CSP) would enforce the owning organization's interests as diligently as the organization itself would.

In these examples, it is not that the security needs for these categories cannot be met in a public cloud; rather, the cost of providing such security assurance is incompatible with the cost model of a public cloud. Meeting these needs would demand additional controls, procedures, and practices that would make the cloud offering noncompetitive for most users. Consequently, where such data security needs prevail, other delivery models (community or private cloud) may be more appropriate. This is depicted in Figure 1. Note that this situation is a function of generally available and anticipated offerings in the public cloud space; quite likely, it will change as security becomes more of a competitive discriminator in cloud computing.

FIGURE 1 Meeting security needs: public, community, and private clouds.

One can easily imagine future high-assurance public clouds that charge more for their service than lower-assurance public clouds do today. We might also expect that some higher-assurance clouds would limit access by selectively screening customers against entry requirements or regulation. Limiting access to such a cloud would reduce risk, though not eliminate it, provided the screening is effective.

Organizational Responsibility: Ownership and Custodianship
While an organization is responsible for ensuring that its data is properly protected, as discussed above, it is often the case that when data resides on premises, appropriate data assurance is not practiced or even understood as a set of actionable requirements. When data is stored with a CSP, the CSP assumes at least partial responsibility (PaaS), if not full responsibility (SaaS), in the role of data custodian. But even with divided responsibilities for data ownership and data custodianship, the data owner does not give up the need for diligence in ensuring that data is properly protected by the custodian.

By the nature of the service offerings, and as depicted in Figure 2, a data-owning organization can benefit from its CSP having control of and responsibility for customer data in the SaaS model. The data-owning organization takes on progressively more responsibility beginning with PaaS and expanding with IaaS. But appropriate data assurance can demand significant security competence of the owning organization.

FIGURE 2 Owning organization has increasing control and responsibility over data.

Ultimately, risks to data security in clouds are presented to two states of data: data that is at rest (or stored in the cloud) and data that is in motion (or moving into or out of the cloud). Once again, the security triad (confidentiality, integrity, and availability) along with risk tolerance drives the nature of data protection mechanisms, procedures, and processes. The key issue is the exposure that data is subject to in these states.

Data at Rest and in Motion

Data at rest refers to any data in computer storage, including files on an employee’s computer, corporate files on a server, or copies of these files on off-site tape backup. Protecting data at rest in a cloud is not radically different than protecting it outside a cloud. Generally speaking, the same principles apply. As discussed in the previous section, there is the potential for added risk as the data owning enterprise does not physically control the data. But as also noted in that discussion, the trick to achieving actual security advantage with on-premises data is following through with effective security.
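Since the earlier discussion noted that encryption is pointless if the keys are exposed, the key-control principle can be sketched in a few lines. This is only an illustration, not a complete encryption scheme: it shows the data owner deriving an encryption key from a passphrase with the standard PBKDF2 construction, so that only ciphertext ever needs to be handed to a CSP. The passphrase and parameters here are hypothetical examples.

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Derive a 256-bit encryption key from a passphrase with PBKDF2.

    The key is reproducible by the data owner and never needs to leave
    the owner's control; only ciphertext would be stored with the CSP.
    """
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

salt = os.urandom(16)  # stored alongside the ciphertext; the salt is not secret
key = derive_key("correct horse battery staple", salt)

# The same passphrase and salt always yield the same key
assert key == derive_key("correct horse battery staple", salt)
assert len(key) == 32
```

The design point is that key derivation and encryption happen on the owner's side of the trust boundary; what the custodian stores is then opaque regardless of its own practices.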

Referring back to Figure 1, the less control the data-owning organization has (decreasing from private cloud to public cloud), the greater the concern and the greater the need for assurance that the CSP's security mechanisms and practices are effective for the level of data sensitivity and data value. (But as Figure 2 showed, the owning organization's responsibility for security runs deeper into the stack as it moves from SaaS to PaaS and again to IaaS.)

If you are going to use an external cloud provider to store data, a prime requirement is that the risk exposure is acceptable. Risk exposure varies as a function of the service delivery model as well as the deployment model.

A secondary requirement is to verify that the provider will act as a true custodian of your data. A data-owning organization has several opportunities to proactively ensure data assurance by a CSP. To begin with, CSP selection should be based on verifiable attestation that the CSP follows industry best practices and implements security appropriate to the kinds of data it is entrusted with. Such certifications will vary according to the nature of the information and whether regulatory compliance is necessary. Understandably, one should expect to pay more for services that involve such certifications. One likely trend here is that higher-assurance cloud services may come with indemnification as a means of insurance or monetary backing for a declared level of security. Whatever the future may hold, we can expect that practices in this space will evolve.

Data in Motion
Data in motion refers to data as it is moved from a stored state as a file or database entry to another form, in the same or in a different location. Any time you upload data to be stored in the cloud, that data is considered to be in transit while the upload takes place. Data in motion can also apply to data that is in transition and not necessarily permanently stored: your username and password for accessing a Web site or authenticating yourself to the cloud are sensitive pieces of data in motion that are never stored in unencrypted form.

Because data in motion exists only as it transits between points, whether in memory (RAM) or on the wire between endpoints, securing this data focuses on preventing tampering as well as keeping it confidential. One risk is a third party observing the data while it is in motion. But unexpected things can happen when data is transmitted between distant endpoints: packets may be cached on intermediate systems, or temporary files may be created at either endpoint. There is no better protection strategy for data in motion than encryption.
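Encryption (in practice, TLS) covers the confidentiality half of the problem. The tamper-detection half mentioned above can be illustrated with a message authentication code: a sketch, assuming a shared secret agreed out of band, using only the Python standard library. The key and payload names here are hypothetical.

```python
import hashlib
import hmac

SHARED_KEY = b"example-shared-secret"  # hypothetical key agreed out of band

def sign(payload: bytes) -> bytes:
    """Compute a MAC so the receiver can detect in-transit tampering."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify(payload: bytes, tag: bytes) -> bool:
    # constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(sign(payload), tag)

msg = b"PUT /bucket/report.csv"
tag = sign(msg)
assert verify(msg, tag)                                    # intact payload accepted
assert not verify(b"PUT /bucket/evil.csv", tag)            # altered payload rejected
```

In real deployments this is what TLS record protection provides automatically; the sketch only makes the integrity property concrete.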

Common Risks with Cloud Data Security

Several risks to cloud computing data security are discussed in this section. None of these are unique to the cloud model, but they do pose risk and must be considered when addressing data security. They include phishing, CSP privileged access, and the source or origin of data itself.

One indirect risk to data in motion in a cloud is phishing. Although it is generally considered infeasible to break public key infrastructure (PKI) today (and therefore to break the authentication and encryption it provides), it is possible to trick end users into providing their credentials for access to clouds. Although phishing is not new to the security world, it represents an additional threat to cloud security. Listed below are some protection measures that some cloud providers have implemented to help address cloud-targeted phishing attacks:

  • Salesforce.com Login Filtering: Salesforce has a feature to restrict access to a particular instance of its customer relationship management application. For example, a subscriber can tell Salesforce not to accept logins, even if valid credentials are provided, unless the login comes from a whitelisted IP address range. This can be very effective against phishing attacks, since it prevents an attacker from logging in unless the attempt comes from a known IP address range.
  • Google Apps/Docs/Services Logged-In Sessions and Password Rechecking: Many Google services randomly prompt users for their passwords, especially in response to a suspicious event. Furthermore, many Google services display the IP address from the previous login session, along with automatic notification of suspicious events, such as a login from China shortly after a login from a United States IP address on the same account.
  • Amazon Web Services Authentication: Amazon takes authentication to cloud resources seriously. When a subscriber uses EC2 to provision a new cloud-hosted virtual server, by default Amazon creates cryptographically strong PKI keys and requires those keys to be used for authentication to that resource. If you provision a new Linux VM and want to SSH to it, you have to use SSH with key-based authentication rather than a static password.
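The login-filtering measure in the first bullet comes down to a simple membership test. Below is a minimal sketch of the idea using Python's standard `ipaddress` module; the allowlist ranges are hypothetical documentation addresses, not anything Salesforce actually uses.

```python
from ipaddress import ip_address, ip_network

# Hypothetical subscriber-configured allowlist of trusted source ranges
ALLOWED_RANGES = [
    ip_network("203.0.113.0/24"),
    ip_network("198.51.100.0/24"),
]

def login_permitted(source_ip: str) -> bool:
    """Reject logins, even with valid credentials, from unknown ranges."""
    addr = ip_address(source_ip)
    return any(addr in net for net in ALLOWED_RANGES)

assert login_permitted("203.0.113.42")       # from a whitelisted office range
assert not login_permitted("192.0.2.10")     # phisher outside the allowlist
```

The value of this control is that stolen credentials alone are not enough; the attacker must also originate from a trusted network, which most phishing operations cannot do.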

But these methods are not always foolproof; with phishing, the best protection is employee/subscriber training and awareness to recognize fraudulent login-capture attempts. Some questions that you might ask your CSP related to protection from phishing-related attacks are:

  • Referring URL Monitoring: Does the CSP actively monitor the referring URLs for authenticated sessions? A widespread phishing attack targeting multiple customers can come from a bogus or fraudulent URL.
  • Behavioral Policies: Does the CSP employ policies and procedures that mandate a consistent brand (phishing attacks often take advantage of branding weaknesses to deceive users)? Does its security policy prohibit weak security practices that could be exploited? One example would be prohibiting e-mails containing links that users can click on to automatically interact with their data. Another would be whether password resets can occur without actively proving user identity via a previously confirmed factor of authentication (for example, a password reset requested on the Web is confirmed via an out-of-band SMS message to the user's cell phone).

Phishing is a threat largely because most cloud services currently rely on simple username and password authentication. If an attacker succeeds in obtaining credentials, there is not much preventing them from gaining access.

Provider Personnel with Privileged Access
Another risk to cloud data security has to do with a number of potential vectors for inappropriate access to sensitive customer data by cloud personnel. Plainly stated, outsourced services, cloud-based or not, can bypass the physical and logical controls that IT organizations typically enforce.

This risk is a function of two primary factors: first, the potential for exposure of unencrypted data, and second, privileged cloud provider personnel's access to that data. Evaluating this risk largely comes down to CSP practices and assurances that personnel with privileged access will not access customer data.

Data Origin and Lineage
The origin, integrity, lineage, and provenance of data can be a primary concern in cloud computing. Proving the origin of information or data has importance in many areas, including patents or proving ownership of valuable data sets that are based on independent analysis of commonly available information sources.

For compliance purposes, it may be necessary to have exact records as to what data was placed in a public cloud, when it occurred, what VMs and storage it resided on, and where it was processed. In fact, it may be equally important to be able to prove that certain datasets were not transferred to a cloud, for instance, when there are sensitivity or EU-privacy concerns about what national borders such data may have crossed.
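The kind of record-keeping described above can be made concrete with a small audit-log sketch. The schema below is purely illustrative (the field names, VM and volume identifiers are hypothetical), but it captures the elements the text calls for: what data was placed, when, on what resources, and in which jurisdiction.

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class PlacementRecord:
    """One audit entry: what data went where, and when (illustrative schema)."""
    dataset: str
    action: str          # e.g. "uploaded", "processed", "deleted"
    vm_id: str
    storage_volume: str
    region: str          # jurisdiction the data resided in
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

log: list[PlacementRecord] = []
log.append(PlacementRecord("sales-q3.csv", "uploaded",
                           "vm-0a1b", "vol-7f3e", "eu-west-1"))

# Records can later be exported for auditors as JSON
exported = json.dumps([asdict(r) for r in log], indent=2)
assert "eu-west-1" in exported
```

Note that such a log only has evidentiary value if it is maintained outside the cloud whose behavior it attests to, and if the provider exposes enough visibility to populate the resource fields honestly, which, as the next paragraph observes, is often the hard part.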

While reporting on data lineage and provenance may be very important for regulatory purposes, it may be very difficult to do so with a public cloud. This is largely due to the degree of abstraction that exists between actual physical resources—such as disk drives and servers—and the virtualized resources that a public cloud user has access to. Visibility into a provider’s operations in terms of technical mechanisms can be impossible to obtain, for understandable reasons.

Where such requirements exist that the origin and custody of data or information must be maintained in order to prevent tampering, to preclude exposure outside a jurisdictional realm, or to assure continuing integrity of data, it may be completely inappropriate to use a public cloud or even a low-assurance private cloud. One can imagine that if such requirements become increasingly common, cloud-based services will arise to profit from the opportunity. In the absence of a public service and where a private cloud is cost prohibitive, alternative approaches should be considered— easiest among them the use of a hybrid or community cloud.



How to Enter the Cloud

Cloud computing is a reality, and it's a force that I believe IT professionals need to come to terms with quickly. The economic motivation for cloud is high; the business need for speed and agility is greater than ever, and the technology has reached a level that makes prudent investment in cloud services not only possible but fast and easy.

The cloud is here and it won’t go away, but what is it really, why should organisations use it and what are the risks? If you live in a corporate IT organisation, responsible for IT infrastructure, what factors do you need to consider?

What we really mean when we talk about cloud
“Cloud” has become a catch-all term for utility or on-demand compute, but there are a lot of things that cloud isn’t. Let’s start by establishing some common terminology:

  • Cloud: generally IT as a Service (ITaaS)
  • Cloud computing: a business model for delivering IT as a service
  • Cloud services: the deliverable, or what you actually get. This encompasses the following areas of ITaaS:
      • Infrastructure as a Service (servers, network, storage, management, reporting)
      • Platform as a Service (application building blocks and standards)
      • Software as a Service (applications)
      • Storage as a Service (primary, back-up, archive, DR)

In my experience the best way to define cloud is actually to look at the problem it is trying to solve. For instance, when customers ask me about cloud, most of the time what they are thinking about falls into three main areas:

  • Decreased storage costs: achieved via storage efficiency
  • Data centre efficiency: achieved via virtualisation and internal or private clouds
  • Conversion of capital expenditure into operational expenditure: achieved via external or public clouds

Whether to create your own cloud, or use a third party
The big question behind cloud computing is whether a company should build or expand its own data centre (a private cloud), or whether it should outsource and access computing resources remotely over the Internet (a public cloud).

The solution is individual to every organisation; there is no single blueprint to apply and IT strategists and architects have to do their own homework. Organisational factors such as the need to balance opex with capex, attitude to risk, security, criticality of applications and the need for redundancy are unique to every organisation and demand a unique cloud analysis and definition.

How to define a cloud infrastructure and “cloud-safe” data management policy
There are two fundamentals to developing a robust cloud-based IT infrastructure:

  1. Governance and compliance for outsourced public cloud applications
  2. The creation of internal cloud services to drive down costs and time to market for in house applications

If your organisation is just beginning to explore the cloud, you need to identify which services can reside in the cloud and which should be internal. Determine what systems and services are core to your business or store your crucial intellectual property. These should be categorised as high risk and not considered cloud opportunities in the near term.

You also need to develop a sourcing strategy to achieve the low cost, scalability and flexibility your business is seeking. This should include all the necessary protections such as data ownership and mobility, compliance and other elements familiar from more traditional IT contracts.

Implementing an external / public cloud infrastructure
Since there are applications (CRM, ERP, messaging and collaboration) that are common to every company, outsourcing to an external cloud provider that can do a better job managing the application at a lower cost structure makes sense. Governance plays a central role in deciding which applications can be safely outsourced, and how to manage the processes. You will need to assess the applications and build policies based upon the type of data. Factors to consider include: how it is accessed and by whom, security and compliance aspects, and the strategic importance or competitive advantage the application or data offers.

Second, you need to assess the cloud service provider’s service offerings. Look at their capabilities, security, SLAs on availability and performance to see if they meet the levels required by the applications before agreeing to cloud-outsource the application.

What are the risks of using an external cloud?
You should pay careful attention to:

  • Service Levels. Understand the service levels you can expect for transaction response times, data protection, and speed of data recovery.
  • Privacy. If someone else hosts and serves your data, they could be approached by the U.S. government to access and search that data without your knowledge or approval. Current indications are that they would be obligated to comply.
  • Compliance. You are probably already aware of the regulations that apply to your business. In theory, providers of cloud services can provide the same level of compliance for data stored in the cloud, but, since most of these services are young, you'll need to take extra care.
  • Data Ownership. Do you still own your data once it goes into the cloud? You may think the answer to this question is obvious, but the recent flap over Facebook's attempt to change its terms of use suggests that the question is worth a second look.
  • Data Mobility. Can you share data between cloud services? If you terminate a cloud relationship, can you get your data back? What format will it be in? How can you be sure all other copies are destroyed?

As with any service that’s going to be critical to your company, the best advice is to ask a lot of questions and get all commitments in writing.

Implementing internal / private cloud infrastructure
Internal clouds will help the business launch applications faster and at much lower cost. This is about building ITaaS capabilities in house, or building shared infrastructure that is offered as a service to the business. You'll need pooled infrastructure, policy-based automation to simplify provisioning, metrics and chargebacks, service assurance and conformance to SLAs, as well as forward-looking capacity planning. Add a self-service portal to your internal cloud, and the application teams will be happy that they can deploy faster and at lower cost, and the corporate IT governance team will be happy too.

This space is evolving fast, so start with the basics: pool the infrastructure and use a vendor that offers dynamic virtualised infrastructure to quickly activate applications or repurpose capacity and performance as application loads ramp up or down. For this you need unified storage, network, and servers that can cater for a wide range of application requirements, and you should choose highly efficient infrastructure.

Internal cloud services drive down costs and time to market for in-house applications when built on a pooled, dynamic infrastructure with utilisation levels in excess of 75%. This is achieved through thin provisioning, deduplication, and cloning technologies (which can raise effective utilisation well in excess of 100%). The bottom line is that this approach yields big cost savings.
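The "in excess of 100%" claim is just arithmetic: with thin provisioning and deduplication, the logical data served can exceed the physical capacity behind it. A minimal sketch, with hypothetical capacity figures:

```python
def effective_utilisation(logical_data_tb: float, physical_capacity_tb: float) -> float:
    """Utilisation as logical data served per unit of physical capacity, in percent."""
    return 100.0 * logical_data_tb / physical_capacity_tb

# Hypothetical figures: deduplication and cloning let 12 TB of logical
# data live on 8 TB of physical disk.
u = effective_utilisation(12.0, 8.0)
assert u == 150.0   # "well in excess of 100%"
```

The trade-off, not stated in the marketing figure, is that thin-provisioned pools must be monitored: over-committed logical capacity can exhaust physical capacity without warning if growth is unplanned.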

Cloud computing isn't going away. It's an IT concept we must all sign up to.

  • Provisioning an effective cloud infrastructure is individual to every business.
  • In evaluating public versus private clouds, be aware of what you're getting into and how to get out of it.
  • For an external cloud, if there's too much risk, don't do it. Be selective about what you choose to put in an external cloud. No amount of IT cost-saving can justify breaking a business.
  • For internal clouds, make sure you understand what your data centre is capable of and consider vendors that can offer the greatest flexibility and real unified computing.




Companies Ill-Prepared To Achieve Cloud Goals

The majority of large organisations are migrating internal virtual infrastructure to the cloud because they believe it will reduce costs, according to a recent survey. The survey finds that only 17 per cent of organisations achieved their utilisation and ROI goals with virtualisation and yet, they intend to use similar planning and management approaches for their move to the cloud.

The survey interviewed 94 executives responsible for virtual and cloud infrastructure decisions at organisations with more than 25,000 employees. It revealed that many organisations are ill-prepared to make the move: 77 per cent of respondents plan to use cloud-vendor supplied tools or spreadsheets to plan the migration of workloads to the cloud and only 48 per cent plan to implement new solutions to manage cloud infrastructure.

While cloud operating models have the potential to reduce spend, it is more likely that infrastructure costs will increase if these initiatives are poorly planned and managed. Virtualisation provided many organisations with some quick hits in terms of cost savings on hardware, but the reality is that few have fully met their objectives for utilisation and ROI. Despite this, the majority of organisations are betting on the cloud without dramatically changing the approach to planning these environments.

By their very design, cloud operating models can increase inefficient use of capacity and the amount of "excess" capacity an organisation keeps on hand in internal clouds:

  • Providing users with self-serve access to capacity can result in buffet-style over-indulgence as application owners request more capacity than they actually need to safe-guard against risk.
  • Pre-defined instance configurations and sized “buckets” of capacity may enable easier management, but they can also result in built-in excess capacity in allocations vs. customising allocations for each workload’s true requirement.
  • Increased responsiveness requires a supply of excess capacity to be held as a demand buffer for new workloads. Sizing this capacity requirement, however, is tricky and teams could end up with unnecessary idle capacity taking up room on the data centre floor.
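The second bullet, built-in excess from pre-defined instance "buckets", is easy to quantify. A minimal sketch with hypothetical bucket sizes and workload requirements:

```python
# Hypothetical pre-defined instance "buckets", in GB of RAM
BUCKETS = [4, 8, 16, 32, 64]

def allocated_for(demand_gb: float) -> int:
    """Round a workload's true requirement up to the next available bucket."""
    return next(b for b in BUCKETS if b >= demand_gb)

workloads = [3, 5, 9, 17, 33]                # true requirements (GB)
allocated = [allocated_for(w) for w in workloads]
excess = sum(allocated) - sum(workloads)

assert allocated == [4, 8, 16, 32, 64]
assert excess == 57   # built-in excess vs. rightsizing each workload exactly
```

Here five workloads needing 67 GB in total are allocated 124 GB, nearly double, which is exactly the kind of structural over-allocation the survey's respondents risk carrying into their cloud environments.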

Key findings from the survey reveal that organisations will face a direct conflict between high hopes for cost reduction and poor planning and management methods:

  • 39 per cent of respondents felt that virtualisation costs were higher than expected or delivered an uncertain ROI.
  • 70 per cent of respondents felt that moving to cloud infrastructure would decrease costs and 42 per cent cited cost reduction as the primary reason they would move systems off of internal virtualised infrastructure to the cloud.
  • Despite the hopes for cost reduction, a total of 77 per cent planned to take a very basic and biased approach to migration planning, using a cloud vendor-provided tool or spreadsheets to plan the migration of their workloads to the cloud.
  • 75 per cent planned workload movements in current virtual environments using spreadsheets, which not only slows response times, but also takes a very simplistic approach to sizing and placement in internal cloud environments.

According to Gartner analyst Alessandro Perilli, in the June 9, 2011 research paper “The Big Mind Shift: Capacity Management for Virtual and Cloud Infrastructures”:

“Gartner defines “optimised” as a virtual infrastructure where the workload placement satisfies all of an organisation’s technical, business, and compliance constraints and the capacity is allocated to avoid resource wasting (i.e., rightsized),”

Perilli also recommends:

“The capacity management tool should allow for the definition of complex, multi-dimensional placement rules according to the technical, business, and compliance constraints inherent to each service that the infrastructure is hosting.”

Strategic workload placement is critical to achieving savings, particularly in internal clouds. Taking a manual approach to planning cloud migration, as many organisations have done with virtualisation, is a recipe for inefficiency and reduced return on investment. There are simply too many factors to consider in placement and capacity-sizing decisions to do so efficiently and accurately using home-grown tools.



Putting A Padlock On The Cloud

Ask anyone in IT what the biggest barrier to adopting cloud computing services is and the most likely answer is security.

As Shirief Nosseir, EMEA security product marketing director at CA Technologies, explains, securing the cloud isn't rocket science; the cloud is just another environment, and security should be seen as an enabler rather than a barrier.

Many organisations perceive that adopting any form of cloud computing changes a company's risk profile, but unless a company is willing to increase its appetite for risk, its profile should not change, regardless of whether it adopts a public or private cloud.

However, at this stage of market maturity, where cloud sourcing decisions are decentralised and proper policies and procedures are not adequately enforced yet, it is currently quite common for lines of business to bypass the IT organisation altogether and go out and acquire cloud services (particularly Software-as-a-Service), without thoroughly vetting them for security risks.

Before moving any part of the business to the cloud, organisations need to consider the different cloud deployment options available, including service models (Software-as-a-Service, Platform-as-a-Service and Infrastructure-as-a-Service), internal versus external hosting and public versus private deployments. They also need to understand the on-premise services they already have and identify the candidates suitable for moving to the cloud.

Also, if considering external cloud services, evaluate the different providers and service level guarantees that they offer (similar to traditional outsourcing). Then as in any security area, by taking a risk-based approach that is contrasted with costs and business value, organisations can leverage a framework to help them make better informed decisions and keep control of their risk profile to stay in line with their risk appetite.

Today, there is no doubt that the cloud is here to stay. It is already widely adopted by many organisations and adoption will continue to rapidly grow. Given the reduced cost, increased flexibility and opportunities it brings, cloud computing is compelling for many organisations.

It is important for IT departments to be proactive and be quick to embrace the cloud model, as this is an opportunity for security to be seen as an enabler rather than a brake on the system.

The cloud offers an irresistible business case, and executives will often not stop themselves from consuming cloud services they need just because those services were not sufficiently vetted for security – a statement that should raise a few eyebrows among risk, security and compliance professionals.

However, this is a trend that many of us already see happening in organisations. For instance, in a CA Technologies sponsored cloud security survey some of the key findings show that 49 per cent of respondents said their organisation uses cloud computing applications without thoroughly vetting them for security risks, while 68 per cent of respondents said that their security leaders are not the most responsible for securing the cloud computing resources in their organisations.

It is also worth mentioning that business supporters of cloud computing often highlight the business’s ability to buy IT services directly, bypassing the IT organisation altogether. IT organisations that resist the move to the cloud risk ultimately being made irrelevant.

As the cloud is just another computing model that needs to co-exist with traditional platforms, organisations should not create new, separate policies to secure it. They need to look at their entire environment, including the cloud, and develop a coherent set of policies that cuts across their entire infrastructure. They should start with the policies they already have and adjust them to accommodate the cloud model.

At the same time, it is clear that traditional security models are now going through an evolution in an attempt to keep up with the new order of things. Take the data sprawl issue as an example: one of the common cloud security challenges that organisations face is identifying what data is appropriate to process and move into the cloud.

Nowadays, with data reduced to bits and bytes, copying sensitive information or sending it across the globe is just a mouse click away. As we all know, this has brought new levels of efficiency and fuelled the democratisation of information.

On the flip side, we ended up with data sprawl. In most cases now, we have little control over how information is being used and shared and by whom it is being consumed. With the enormous amounts of information we process and share on a daily basis, we are not able to keep track of where all copies of our sensitive information are located. Needless to say, data sprawl has introduced all sorts of security problems, since we simply cannot secure what we cannot locate and control.

With cloud computing, data sprawl becomes even more of an issue. By nature, a cloud is highly dynamic, often extends beyond the typical boundaries of our organisation and typically is shared with other tenants. Clearly, traditional perimeter security cannot offer enough control over data and its movement to and in the cloud.

Although typical data loss prevention (DLP) technologies do a good job of locating, classifying and controlling information, they are not enough on their own. An identity-centric approach to information protection and control becomes paramount in cloud environments.

Content awareness (provided by DLP solutions) allows us to understand what information is held in our files and documents, whereas an identity-centric approach adds more intelligence to data sprawl and brings in the context of who is trying to use the data and how they should be allowed to use it (e.g. email, copy, print, etc).

Consequently, DLP technologies need to become more identity centric and integrated with identity and access management (IAM) technologies. Conversely, IAM needs to become more content aware to provide the right level of control that fosters information sharing, while mitigating unnecessary risks.

In turn, a content-aware identity and access management approach is paramount to be able to effectively ensure that only appropriate data is moved into the cloud.
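As a rough illustration of how content awareness and identity context combine, the sketch below checks whether a given user may perform a given action on a given document. The classification labels, roles and actions are made-up assumptions for illustration, not any specific DLP or IAM product’s API:

```python
# Hypothetical sketch of a content-aware, identity-centric policy check.
from dataclasses import dataclass

@dataclass
class Document:
    name: str
    classification: str  # content awareness: e.g. "public", "internal", "confidential"

@dataclass
class User:
    name: str
    role: str  # identity context: e.g. "analyst", "contractor"

# Policy: which roles may perform which actions on which classifications.
POLICY = {
    ("analyst", "confidential"): {"view", "email"},
    ("analyst", "internal"): {"view", "email", "copy", "print"},
    ("contractor", "internal"): {"view"},
}

def is_allowed(user: User, doc: Document, action: str) -> bool:
    """Combine what the data is (classification) with who is asking
    and what they want to do with it (identity + action)."""
    if doc.classification == "public":
        return True  # public content is unrestricted in this sketch
    return action in POLICY.get((user.role, doc.classification), set())

report = Document("q3-results.xlsx", "confidential")
alice = User("alice", "analyst")
bob = User("bob", "contractor")

print(is_allowed(alice, report, "view"))   # True: analysts may view confidential data
print(is_allowed(alice, report, "print"))  # False: but not print it
print(is_allowed(bob, report, "view"))     # False: contractors get no access at all
```

The point of the sketch is that neither half suffices alone: the classification without the identity context would block or allow everyone equally, and the identity without the classification could not distinguish a press release from quarterly results.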



Risk Management in Cloud Computing

In a troubled economy, cloud computing seems like a great cost saving alternative and it is. Whether in good times or bad, any pragmatic cost saving measure is a ‘good’ measure.

Google, Microsoft, IBM and all other known and unknown cloud providers offer today’s CIO an array of major cost-saving alternatives to the traditional data center and IT department. The temptation to put things in the cloud and sit back can be extremely compelling. But like everything that appears too good to be true, cloud computing comes with a set of risks that CIOs and CTOs would do well to recognize before taking the plunge.

Before we get into the specifics of how best to manage risk when planning to move assets to the cloud, let’s look at a few numbers to help us understand what the Joneses are doing. Is cloud computing already mainstream?

ISACA’s 2010 survey on cloud computing adoption presents some interesting findings. Forty-five percent of IT professionals think the risks far outweigh the benefits, and only 10 percent of those surveyed said they would consider moving mission-critical applications to the cloud. In a nutshell, ISACA’s statistics and other industry figures on cloud adoption indicate that cloud computing is a mainstream choice, but definitely not the primary one.

While some organizations have successfully moved part or all of their information assets into some form of cloud computing infrastructure, the large majority still haven’t done much with this choice. So we ask, is it premature for organizations to have a cloud computing strategy? Au contraire! The CIO who has not yet begun to think about a cloud strategy may soon be left behind. In most organizations, there are definitely some areas that could be safely and profitably moved to the cloud. The extent to which an organization should move its information assets to the cloud, and take advantage of the tremendous benefits of doing so, is best determined by applying a risk assessment framework to all candidate information assets. For this, it’s essential to understand the risks and then have a mitigation strategy for each.

Who accesses your sensitive data:
The physical, logical and personnel controls that were in place when the data lived in your own data center no longer apply once you move your organization’s information to the cloud. The cloud provider maintains its own hiring practices, rotation of individuals and access control procedures. It’s important to ask about and understand the data management and hiring practices of the cloud provider you choose. Large providers like IBM will walk their clients through the process: how sensitive data moves around the cloud and who gets to see what.

Regulatory compliance: Just because your data now resides in a provider’s cloud does not mean you are off the hook; you are still accountable to your customers for any security and integrity issues that affect your data. Cloud providers typically mitigate this risk through regular external audits, penetration tests, compliance with PCI standards and SAS 70 Type II attestations, to name a few. You are responsible for weighing the risks to your organization’s information and ensuring that the cloud provider has standards and procedures in place to mitigate them.

Geographical spread of your data:
You may be surprised to learn that your data may not reside in the same city, state or, for that matter, country as your organization. While the provider may be contractually obliged to ensure the privacy of your data, it may be even more strongly obliged to abide by the laws of the state and/or country in which your data resides, so your organization’s rights may be marginalized. Ask the question and weigh the risk.

Data loss and recovery: Data in the cloud is almost always encrypted to ensure its security. However, this comes at a price: corrupted encrypted data is much harder to recover than unencrypted data. It’s important to know how your provider plans to recover your data in a disaster scenario and, more importantly, how long it will take. The provider must be able to demonstrate benchmarked data recovery scenarios.

What happens when your provider gets acquired: A seamless merger/acquisition on the part of your cloud provider is not always business as usual for you, the client. The provider should have clearly acknowledged and addressed this as one of the possible scenarios in their contract with you. Is there an exit strategy for you as the client – and what are the technical issues you could face to get your data moved someplace else? In short, what is your exit strategy?

Availability of data: The cloud provider relies on a combination of network, equipment, application and storage components to provide the cloud service. If one of these components goes down, you won’t be able to access your information. Therefore, it is important to understand how long you can do without a certain kind of information before you decide to put it in the cloud. If you are an online retailer and your customer order entry system cannot be accessed because the cloud hosting your application just went down, that would definitely be unacceptable. It’s important to weigh your tolerance for unavailability of your information against the vendor’s guaranteed uptime.
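The uptime-tolerance comparison can be made concrete by converting an SLA percentage into permitted downtime. A small worked example:

```python
# Translate a provider's guaranteed uptime percentage into the downtime
# it permits per year, so it can be weighed against your own tolerance
# for the information being unavailable.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 (ignoring leap years)

def downtime_minutes_per_year(uptime_percent: float) -> float:
    """Minutes of outage per year allowed by an uptime SLA."""
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime allows about "
          f"{downtime_minutes_per_year(sla):.0f} minutes of downtime per year")
```

A "three nines" (99.9%) guarantee still permits roughly eight and three-quarter hours of outage a year; whether that is acceptable depends entirely on what the data is used for.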

Given that cloud computing is relatively new in its current form, it is best applied to specific low- to medium-risk business areas. Don’t hesitate to ask questions and, if necessary, engage an independent consulting company to guide you through the process. Picking a cloud provider requires far more due diligence than routine IT procurement; at this stage there is no clear-cut template for success. The rewards can be tremendous if the risks are well managed.




Security in The Cloud: Top Issues in Building Users’ Trust

IT decision makers from a range of public and private sector organisations ranked loss of control of data and where data is held as the top security concern.

Despite economic pressure for business to cut costs and fervent assurances from cloud computing technology suppliers, security remains a top barrier to cloud adoption, research by the UK’s National Computing Centre (NCC) has revealed.

Interest in cloud computing is high and many organisations say they are planning to move in that direction. But the reality is that only 20% of UK organisations are using infrastructure-as-a-service and only 36% are using software-as-a-service, according to the NCC research.

Building user trust in cloud computing

The advantages of the cloud computing model of a reduced cost of ownership, no capital investment, scalability, self-service, location independence and rapid deployment are widely extolled, so what will it take to get businesses to adopt cloud computing en masse?

The short answer is that it all boils down to trust.

Trust is not easily defined, but most people agree that when it comes to cloud computing, transparency is essential to creating trust.

Businesses must be able to see that cloud service providers are complying with agreed data security standards and practices.

These must include controls over who has access to data, staff security vetting practices, and the technologies and processes used to segregate, back up and delete data.

Suppliers of cloud technologies and services are quick to claim that cloud computing is well equipped to provide the necessary controls. Virtualisation, they argue, underlies cloud computing, and therein lies the potential to achieve hitherto impossible levels of security.

While virtualisation is viewed with suspicion and fear by many IT directors, suppliers such as RSA and IBM say the technology enables organisations to build security into the infrastructure and automate security processes, surpassing traditional data protection levels.

Cloud computing cost savings obscure security issues

Aside from all the positive spin around cloud computing technologies, a trusted, standard model of cloud computing that will enable faster rates and higher levels of adoption is still a long way off, with relatively little progress being made in that regard in the past year, says William Beer, director of OneSecurity at PricewaterhouseCoopers (PwC).

Despite some isolated progress on the technology front, many organisations already using cloud-based services are motivated mainly by the cost savings they can achieve, and consequently pay little, if any, attention to security, says Beer.

“We are still being surprised by the weaknesses and lack of maturity in security models used by many of the cloud-based services on offer,” he says.

It will take a significant data breach by a cloud services provider, he believes, before consumers of cloud services will realise the inadequacy of current models and demand better safeguards around their corporate data.

2010 was a year of experimentation for cloud computing

During 2010, the cloud unexpectedly went from a place for development and quality assurance to a place for real production applications and data to live, says Gary Palgon, vice-president of product management at security consultancy firm nuBridges.

“This was primarily due to the cost savings to businesses and the tougher economy forced them there. The result is that the timeframe for acceptance of production applications in the cloud has accelerated,” he says.

However, Palgon recognises there is still some way to go before CISOs will readily accept putting sensitive company data in the cloud.

2010 was a year of experimentation and piloting for cloud computing, rather than one of full-scale implementations in the mid-market, says Bob Walder, research director at Gartner.

But, he says, IT providers that dismiss cloud computing in 2011 because there is no high market penetration today will lose a bigger opportunity two years from now.

In the short term, he says, IT providers should create cloud solutions that are viewed as extensions of existing IT environments.

On the other side of the equation, says Beer, all organisations should be looking at the benefits cloud computing can bring to their business.

“They should be looking at cloud, they should be looking at it today, but they should be looking at it cautiously,” he says.

Cloud computing must specialise by sector-specific security requirements

While the initial positive uptake, which varies from sector according to risk appetite, is mainly driven by cost, PwC believes that to move things on, cloud computing service providers will have to begin adapting to the specific security requirements of highly regulated sectors, such as financial services.

Service providers will also have to recognise that all UK organisations are obliged to comply with data protection legislation, policed by the Information Commissioner’s Office, which has steadily increasing powers of enforcement.

Initiatives will have to come from the service providers themselves, because progress on standards that depend on industry consensus is traditionally slow, says Beer.

RSA, the security division of EMC, has a vested interest in fostering cloud computing and, to this end, plans to take a leadership position by introducing a set of cloud-based services to be known collectively as RSA’s Cloud Trust Authority.

Lack of trust in cloud computing is slowing broader adoption of cloud services, RSA executive chairman Art Coviello told attendees of RSA Conference 2011 in San Francisco.

The aim of RSA’s initiative is to provide the tools organisations need for oversight of operations at cloud service providers, to assure customers that security service-level agreements are being met, and to build the trust necessary for organisations to adopt cloud computing for mission-critical applications and storage.

Best practices derived from initiatives such as these, says Beer, may give rise to cloud-specific standards, but again he points out that reaching agreements on standards, interoperability and third party certification programmes always takes time.

In search of cloud computing security standards

In the real world, some cloud computing service providers are turning to existing security standards such as ISO 27001, even though there is still much debate about their suitability to the cloud environment.

This approach is typically at the insistence of customers, says Beer, but is having the positive effect of making service providers see the commercial benefit of security standards, which may help build momentum in the industry.

A lack of common security standards, and concerns about the ease of retrieving data if a change of supplier is required, were among the top security concerns of IT decision makers in the UK, research by the NCC revealed.

Using existing standards is a start, but Beer believes that ultimately the cloud computing industry will have to establish its own standards, because the business model is fundamentally different, as is the way users will engage with services.

But some progress is being made in this direction, says Gerry O’Neill, vice-president of the Cloud Security Alliance (CSA), UK & Ireland Chapter.

A great deal of effort has been dedicated over the past year to bring greater clarity and definition to questions of security and assurance in cloud services, he says.

In the public sector there are several examples of guidance and processes being developed for secure and appropriate use of cloud services, says O’Neill. These include the UK Government G-Cloud project, the ENISA Cloud Security Report, and the US Government’s FedRAMP guidelines.

There have also been a number of industry-wide initiatives aimed at giving CISOs, CIOs and business managers the assurance they need to use cloud services with a degree of confidence that matches their organisation’s appetite for risk and compliance.

These initiatives include the Cloud Security Alliance, A6 (known as Cloud Audit), and the Common Assurance Maturity Model (CAMM).

For its part, says O’Neill, the CSA – formed 18 months ago to promote the use of best practices for providing security assurance within cloud computing – has been bringing together stakeholders around the world with the aim of progressing the definition of cloud security frameworks and guidance. The CSA has also developed the first recognised personal certification in the cloud security space, namely the Certificate in Cloud Security Knowledge (CCSK).

“By the end of 2011, PwC would like to see more consensus around standards, as well as an escalation of the security considerations of cloud implementations so they are considered as important by organisations as scalability, cost and technology,” says Beer.

Until organisations consider how security is built into the cloud computing models they are considering, they will always face significant data protection challenges, he says.

Organisations should expect service providers to be able to answer basic questions around their security model and provide indicators of what they are doing to keep information safe in the same way they can answer questions about technology, scalability and cost.

The role of consumers in determining the future of cloud computing

Consumers of cloud services also have a role to play in improving security in the cloud by applying all they have learned from outsourcing models and mistakes of the past and ensuring security requirements are built into contracts in the form of service level agreements.

Also, as with traditional outsourcing, organisations moving to the cloud should never lose sight of the fact they remain responsible for their data and cannot shift blame to their cloud service provider if things go wrong.

The NCC research found almost a quarter of organisations polled had experienced security incidents involving the service provider’s staff. Corrupt data affected 20% of respondents, 17% suffered data loss and 7% had data stolen.

Steve Fox, managing director for the NCC, says that as it takes time to modernise legislation and standards are voluntary, if cloud suppliers are to tap the latent demand for cloud computing services, they must not only address security concerns, but they must also improve existing service levels.

The way forward for cloud computing

The natural progression, says Palgon, is from keeping applications and data on-premise; to running applications in the cloud while still keeping the sensitive data locally; and finally to running applications and storing sensitive data in the cloud.

Some organisations are currently in the second phase, with some security suppliers enabling this hybrid approach by putting tokens in the cloud so the data vault can still be on-premise.
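The hybrid tokenization pattern just described can be sketched as follows. The vault class and token format below are illustrative assumptions; real products add encryption, access control and durable storage around the same data flow:

```python
# Minimal sketch of tokenization with an on-premises vault: the cloud
# application stores only opaque tokens, while the mapping from tokens
# back to real values never leaves the organisation's premises.

import secrets

class OnPremVault:
    def __init__(self):
        self._store = {}  # token -> sensitive value; stays on-premises

    def tokenize(self, value: str) -> str:
        """Issue an opaque token that stands in for a sensitive value."""
        token = "tok_" + secrets.token_hex(8)
        self._store[token] = value
        return token

    def detokenize(self, token: str) -> str:
        """Recover the original value; only the vault can do this."""
        return self._store[token]

vault = OnPremVault()

# What actually gets sent to the cloud application:
cloud_record = {"customer": "ACME", "card": vault.tokenize("4111-1111-1111-1111")}
print(cloud_record["card"].startswith("tok_"))  # True: no sensitive data in the cloud

# Only the on-premises vault can map the token back to the card number:
print(vault.detokenize(cloud_record["card"]))
```

The design choice is that a breach of the cloud record yields only tokens, which are useless without the vault; the sensitive values remain under the controls of the organisation's own data centre.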

Commentators generally agree that most organisations will eventually arrive at the third stage, where all applications and data are in the cloud. But Palgon says this will happen only when third-party data security companies have the credibility to store the data safely.

“We will first have to arrive at the situation that we can store data in the cloud with the same confidence that we store money and other valuables in banks today,” he says.

Reaching this third phase will mean the full vision for cloud has been achieved: cloud service providers will be able to store information more securely than individual organisations can themselves, and at a reasonable cost, just as banks can store money and other valuables more securely than their customers can.

In other words, CISOs will accept putting sensitive data in the cloud only when service providers can guarantee better security than their own organisation can, or the same level of security at a lower cost.

The beauty of cloud computing, says Beer, is that service providers will be able to attract, retain and train to the right level much larger security teams than most business organisations could maintain internally.

In the absence of fully trusted cloud-based service providers that enable complete visibility of operations in compliance with established standards, the status quo of hybrid operations that pull together on-premise, private and public cloud systems is likely to continue.

Businesses will continue to use cloud computing services according to a risk-based model, putting as much as they can into the cloud to cut costs, but keeping high-risk data on premises to maintain the highest level of control and visibility over this data.

In the year ahead, the CSA’s Gerry O’Neill predicts a marked and steady increase in the uptake of assured cloud services as stakeholders engage to hammer out certifications.

A high degree of co-operation and partnering will help prevent the proliferation of myriad unrelated initiatives and compliance frameworks, which has to be good news for the over-audited and compliance-weary CIO, he says.



Unlocking the Promise of Cloud Computing

Cloud computing can help automotive manufacturers gain a competitive edge in every aspect of their business—from product design and manufacturing to global expansion. But, success will largely depend on their ability to fully leverage its capability in response to several emerging trends and challenges.

When assessing what cloud can do for their businesses, automotive leaders need to take into account the distinct and rapidly evolving challenges that their industry faces today. These include the fundamental and ongoing changes in the way that automotive companies communicate and transact with their customers; the need to capture, manage, protect and analyze their ever-expanding collection of customer data; the requirement to decrease their information technology (IT) operating costs while upgrading their capabilities; and the need to expand into new and emerging markets at low cost.

Moreover, companies are facing an expanding multitude of regulations around issues including environmental protection and in-car safety, while rising commodity and raw material prices are also impacting profitability.

Many companies are realizing that cloud computing can represent the next progression from traditional enterprise resource planning (ERP)-style systems. These ERP systems provide most of the management processes and applications used by original equipment manufacturers (OEMs), but they may not be as flexible as the networked, Web-based platforms increasingly used in other industries for activities such as supply chain management and collaboration. As a result, automotive companies are seeking low-cost opportunities to upgrade their IT capabilities while reducing the operating costs of the systems that remain, and cloud is increasingly being considered for this step.

Cloud feasibility

The cost savings and operational flexibility that cloud computing offers can help automakers respond to these and other industry challenges. However, it is important that companies do not take the potential benefits of clouds at face value, but rather perform a thorough assessment of how cloud computing can best aid them.

First, companies should determine how they will integrate cloud capabilities with their existing legacy systems to produce seamless operations. Many are considering this strategy to reduce IT infrastructure costs and increase responsiveness in the marketplace. As customers engage automakers and their collaborative partners through multiple channels, such as In-Vehicle Infotainment (IVI) services, mobile devices and social networking sites, the ability to be more responsive will become critically important.

This will extend to satisfying growing customer expectations for better and differentiated services based on data provided to automakers who are aiming to improve the customer experience. Real-time analytics that can provide predictive analysis for those services will require a great deal of data and computing power that may be well served by cloud computing.

Security and data privacy are concerns that can and must be addressed to ensure a smooth transition to the cloud. Companies using cloud computing must work with the provider to ensure that it can achieve parity with, or exceed, the levels of security, privacy and legal compliance the company currently maintains. The provider should also be required to supply a risk assessment and describe how it intends to mitigate any issues found.

Finally, companies will need to look closely into the costs of cloud computing. This should include reviewing rigorous return-on-investment case studies based on actual usage. Savings estimates are not enough. Potential purchasers must evaluate different kinds of cloud services pricing models and develop an effective approach for measuring the costs and return from clouds.

While it is important to take precautions, it is also important to understand that the relatively low capital investment, quick deployment and fast return on cloud services make their widespread industry adoption more a case of when, not if. To avoid missing a distinct competitive advantage, automakers should seriously evaluate cloud computing.



Virtualization 2.0: A Foundation for Successful Cloud Computing

Virtualization – not quite the nirvana it was promised to be. We expected exponentially better efficiency, higher availability and huge savings for IT budgets. However, now that the honeymoon is over, most organizations feel slighted. Not only have the promised benefits never been realized, but IT organizations also have been saddled with ever-increasing user demand and out-of-control costs – not to mention virtual sprawl, vendor lock-in and high provisioning effort.

With all of these issues, enterprises are looking to solve the problems of “Virtualization 1.0.” Last year, a Gartner study showed that CIOs were looking to cloud computing in more strategic ways, in the hope that the cloud will improve IT operations.

So cloud computing will fix all this, won’t it?

Actually, cloud computing will just compound the problems of virtualization unless we adopt a new management model because the problems of Virtualization 1.0 largely stem from a single undeniable fact: The average human brain cannot keep up with the complexity of a virtualized environment.

In every virtualized IT organization, there is a smart guy, or group of guys, spending a significant portion of their time provisioning virtual machines (VMs). While provisioning a VM is conceptually simple, there is a vital decision to be made: on which physical machine should the VM run? The importance of this simple question cannot be overestimated, and determining the right answer can be a genuinely complex task.

Let’s start with the easy stuff: Which physical machines have the capacity to run the workload? Which are running the right hypervisor? Now, here are the harder questions: On which physical machine would the workload most efficiently fit (perhaps you have 1,000 of them)? Which machines have been reserved for a particular task (perhaps because of their high cost or particular configuration)? Are there any special security or governance requirements that limit where this VM can be geographically placed? And now for the killer: Is there anything already running on the physical machine that would cause a compliance issue if we place the new VM there?

You are getting the idea, but we are not done yet. Remember, this needs to be done every time you provision a new VM. But since VMs come and go, you really need to do it every time you restart a VM.

Even if your guys are all Einsteins, this is going to be practically impossible. And even if you could get it exactly right, every time, there’s another problem: high-end Virtualization 1.0 solutions include features like high availability and resource scheduling that move VMs automatically – and break everything you just worked out.

Far from fixing it, cloud computing just makes this problem exponentially worse. More machines, more locations and more people provisioning machines equals more complexity. Far from being the enabler of the cloud, virtualization becomes the inhibitor.

How do we solve the problems of Virtualization 1.0?

Virtualization strategy needs to evolve past relying on humans to make each deployment and management decision ad hoc. Enterprises need automated, business-policy-driven provisioning and management. Virtualization 2.0 is that evolution and is built upon three key foundations: separation, delegation and allocation.

Separate the physical from the virtual, and separate the application team from the IT infrastructure organization. IT contributes compute, network and storage resources to a resource cloud, and virtual enterprises (logical units of users) consume resources from it. Virtual enterprises never access the physical layer and neither know nor care where their resources come from. IT maintains control of the physical infrastructure and can delegate multi-tenant control over various aspects of the virtual infrastructure to authorized users.

Delegate self-service provisioning to virtual enterprises in complete safety, thanks to that separation. Virtual enterprise users access image libraries to spin up pre-configured corporate images that maintain company standards, so IT no longer needs to spend days or weeks provisioning to meet user demand.

Allocate resources to self-service virtual enterprises according to business policies. When a new VM is created (or restarted), the policies determine how that VM is deployed. For example, the CIO sets a policy for the compliance rules the enterprise must follow, and that enterprise’s VMs are automatically deployed based on that policy. Or suppose the CIO wants only the most expensive hardware used for certain applications: IT sets a policy to ensure the VMs are automatically deployed accordingly. The same could apply to a green policy or even performance. Policies ensure that VMs are deployed automatically according to security, compliance, efficiency, cost and performance rules.
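The allocation step can be pictured as a table of named business policies that is evaluated automatically every time a VM is created or restarted, so no human has to re-derive the rules. The policy names and host attributes below are hypothetical:

```python
# Sketch of business-policy-driven deployment: each named policy is a
# predicate over host attributes, and a VM may only be deployed on hosts
# satisfying every policy attached to it.

POLICIES = {
    "pci-compliance":   lambda host: host["certified_pci"],        # compliance rule
    "premium-hardware": lambda host: host["tier"] == "gold",       # cost/quality rule
    "green":            lambda host: host["power_efficiency"] >= 0.8,  # green rule
}

def hosts_satisfying(policy_names: list[str], hosts: list[dict]) -> list[dict]:
    """Hosts on which a VM governed by these policies may be deployed."""
    return [h for h in hosts if all(POLICIES[p](h) for p in policy_names)]

hosts = [
    {"name": "a", "certified_pci": True,  "tier": "gold",   "power_efficiency": 0.9},
    {"name": "b", "certified_pci": False, "tier": "gold",   "power_efficiency": 0.6},
    {"name": "c", "certified_pci": True,  "tier": "silver", "power_efficiency": 0.85},
]

# A payment application governed by the CIO's compliance and green policies:
ok = hosts_satisfying(["pci-compliance", "green"], hosts)
print([h["name"] for h in ok])  # ['a', 'c']
```

Because the policies are data rather than tribal knowledge, re-evaluating them on every restart is free, and changing a rule (say, tightening the green threshold) takes effect on the next deployment without anyone re-planning placements by hand.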

How do we really benefit?

IT responsiveness skyrockets because the time that was previously devoted to provisioning can be used elsewhere. Value-add activities like capacity planning are now possible. Increased agility due to on-demand deployment enables development teams to test what-if scenarios. Utilization greatly improves and server efficiency can be optimized. Security and compliance concerns are mitigated because the system cannot deploy anything unless it adheres to policy. Virtual sprawl is minimized because virtual enterprises manage their VMs under resource limits – encouraging them to take down defunct machines to free up unused resources when they approach their limits. Users are empowered to control their own VMs, IT has better control over resources, and the CIO can control costs and budgets with business policies.

How do we actually implement this?

This kind of business-policy-driven automation is possible only with the right management tool: one that integrates with your existing management tools and is fully customizable to your needs. It should also let you avoid vendor lock-in, which keeps your business from being as competitive as possible. Gartner Group reports that by 2012, 49 percent of enterprises expect to have a heterogeneous virtual environment. Enterprises will want to use free hypervisors for non-critical applications while still being able to use the expensive hypervisors when necessary.

To fully realize the benefits of cloud computing, IT departments must be empowered with enterprise-class cloud management software that is built on open standards and embodies these three fundamentals, so they can manage their entire, globally deployed infrastructure. Without the Virtualization 2.0 trifecta of separation, delegation and allocation, any cloud solution will suffer from the same problems as Virtualization 1.0. With the new model, however, the load on IT staff is reduced and savings are realized through policy-based dynamic provisioning and minimal management effort.

Without the capabilities and policies of Virtualization 2.0 in place, CIOs may find their heads stuck where their data is not – in the clouds.



In the Cloud, Governance Trumps Ownership

In more than a decade of talking about cloud computing, I have found the principle of ownership has been a recurring theme. People feel comfortable owning their computing. They know where they stand. Since cloud computing means giving up ownership, it makes people uncomfortable, uncertain of their ground.

But while there’s comfort in ownership, it’s not of itself a guarantee of security or certainty. People often talk of the risks of trusting computing that lies “outside the firewall,” as if cloud computing providers don’t use firewalls. Of course they do, and in many cases, their firewalls are more robust and better policed than the average enterprise firewall. What the phrase really means is, “outside my firewall.” There’s an implicit assumption that it must be better, simply because it’s mine.

Even if I concede that it might not be the most secure device imaginable, at least I know I can trust it. It’s sitting on my own premises, configured and managed by my own staff, and up to date with my organization’s current security and access policies.

Or is it?

We use the term ‘on-premise’ to describe computing that’s within the domain of an organization. But it doesn’t always mean what it appears to mean. Many acres of so-called on-premise computing assets are actually deployed elsewhere, at co-location centers and facilities management sites. The organization trusts the operators of those third-party premises to control access and security.

In larger organizations, it’s not even safe to assume that staff working on your own site are direct employees. With many IT consultants and other administration staff either outsourced or brought in as contractors, the assumption that on-premise assets are configured and maintained by the organization’s own direct employees ignores the facts on the ground.

At least the organization still sets its own processes and policies. With proper procedures in place for ensuring everyone knows the rules and puts them into practice, you can be confident that the IT infrastructure is operating as it should and that any risks and threats are correctly managed.

And how do you do that?

The real reason we like ownership is that, whenever we need to, we know we can just walk in and make a hands-on assessment of the situation on the ground. If we’re honest with ourselves, that sense of direct, actionable accountability is probably covering a multitude of sins. We know there are times when our own people or our contractors, whether through lack of training, process flaws or sheer carelessness, get things wrong. We probably tolerate errors within our own organization that we would never accept from a third-party provider because we know we have the power to put things right to our own satisfaction if we ever need to.

Yet in a modern IT infrastructure, there are other ways of controlling proper policy and process. The technology allows us to instrument, verify and audit whether procedures are being followed correctly. Accountability, governance, compliance and problem resolution are no longer dependent on physical access. It can all be done electronically in real-time.
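As a toy illustration of verifying governance electronically, consider replaying an audit log and flagging administrative actions that lack an approved change ticket. The log format, field names and ticket IDs are all hypothetical:

```python
# Hypothetical audit-log replay: every administrative action must
# reference an approved change ticket, or it is flagged as a violation.

audit_log = [
    {"user": "admin1", "action": "patch-host", "ticket": "CHG-1041"},
    {"user": "admin2", "action": "copy-vm", "ticket": None},
]

approved_tickets = {"CHG-1041"}

violations = [
    entry for entry in audit_log
    if entry["ticket"] not in approved_tickets
]

for v in violations:
    print(f"policy violation: {v['user']} ran {v['action']} without approval")
```

The same check works whether the log comes from your own data center or from a provider's API, which is exactly the point: accountability follows the instrumentation, not the building.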

Using a third-party cloud computing provider can therefore be just as trustworthy and certain as relying on in-house resources, provided the instrumentation and governance of policy and process are as good. In practice, this is one area where public cloud providers did not begin well. Some providers espoused an arrogant mirror image of the “it’s my firewall” mindset: “We don’t publish an SLA; you can trust us because we’re a big, friendly online brand.”

Fortunately, those attitudes are now being challenged. For customers willing to pay the extra cost, the current generation of cloud providers offers better transparency into processes, a more granular choice of policy settings and enterprise-grade instrumentation and reporting. Because investments are pooled across the entire customer base, a cloud provider can operate the technology at a larger scale and sophistication than most of their customers would wish or need to do individually.

There’s still some work to do to establish the process and policy stipulations it’s reasonable to demand from third-party providers. Enterprises must focus on specifying the results they want, rather than attempting to constrain the provider’s underlying technology and operational choices in unnecessary detail. But in principle, a proper governance infrastructure is capable of delivering more control from a third-party provider than most enterprises realistically have over what happens today within their own on-premise IT.

Ownership is not the critical factor here. What matters is having the right mechanism in place for proper accountability and governance.



5 Overlooked Threats to Cloud Computing

A lack of understanding about security risks is one of the key factors holding back cloud computing.

Report after report after report harps on security as the main speed bump slowing the pace of cloud adoption. But what tends to be overlooked, even by cloud advocates, is that overall security threats are changing as organizations move from physical environments to virtual ones and on to cloud-based ones.

Viruses, malware and phishing are still concerns, but issues like virtual-machine-launched attacks, multi-tenancy risks and hypervisor vulnerabilities will challenge even the most up-to-date security administrator. Here are 5 overlooked threats that could put your cloud computing efforts at risk.

1. DIY Security.
The days of security through obscurity are over. In the past, if you were an anonymous SMB, the threats you worried about were the typical consumer ones: viruses, phishing and, say, Nigerian 419 scams. Hackers didn’t have enough to gain to focus their energy on penetrating your network, and you didn’t have to worry about things like DDoS attacks – those were a service provider problem.

Remember the old New Yorker cartoon: “on the Internet no one knows you’re a dog”? Well, in the cloud, no one knows you’re an SMB.

“Being a small site no longer protects you,” said Marisa S. Viveros, VP of IBM Security Services. “Threats come from everywhere. Being in the U.S. doesn’t mean you’ll only be exposed to U.S.-based attacks. You – and everyone – are threatened from attackers from everywhere, China, Russia, Somalia.”

To a degree, that’s been the case for a while, but even targeted attacks are global now, and if you share an infrastructure with a higher-profile organization, you may also be seen as the beachhead that attackers can use to go after your bigger neighbors.

In other words, the next time China or Russia hacks a major cloud provider, you may end up as collateral damage. What this all adds up to is that in the cloud, DIY security no longer cuts it, and neither does having an overworked IT generalist coordinate your security efforts.

As more and more companies move to cloud-based infrastructure, only the biggest companies with the deepest pockets will be able to handle security on their own. Everyone else will need to start thinking of security as a service, and, perhaps, eventually even a utility.

2. Private clouds that aren’t.

One way that security-wary companies get their feet wet in the cloud is by adopting private clouds. It’s not uncommon for enterprises to deploy private clouds to try to have it both ways. They get the cost and efficiency benefits of the cloud but avoid the perceived security risks of public cloud projects.

Plenty of private clouds, though, aren’t all that private. “Many ‘private’ cloud infrastructures are actually hosted by third parties, which still leaves them open to concerns of privileged insider access from the provider and a lack of transparency to security practices and risks,” said Geoff Webb, Director of Product Marketing for CREDANT Technologies, a data protection vendor.

Much of what you read about cloud security still treats it in outdated ways. At the recent RSA conference, I can’t tell you how many times people told me that the key to cloud security was to nail down solid SLAs that cover security in detail. If you delineate responsibilities and hold service providers accountable, you’re good to go.

There is some truth to that, but simply trusting a vendor to live up to SLAs is a sucker’s game. You – not the service provider – will be the one who gets blamed by your board or your customers when sensitive IP is stolen or customer records are exposed.

A service provider touting its security standards may not have paid very close attention to security. This is high-tech, after all, where security is almost always an afterthought.

3. Multi-tenancy risks in private and hybrid clouds.
Many companies building out their private or hybrid clouds are hitting walls. The easy workloads, such as test and development environments and file and print services, have already been virtualized.

“A lot of companies have about 30 percent of their infrastructure virtualized. They’d like to get to 60-70 percent, but the low-hanging fruit has all been picked. They’re trying to hit mission-critical and compliance workloads, but that’s where security becomes a serious roadblock,” said Eric Chiu, President of virtualization and cloud security company HyTrust.

Multi-tenancy isn’t strictly a public cloud issue. Different business units – often with different security practices – may occupy the same infrastructure in private and hybrid clouds.

“The risk to systems owned by one business unit with good security practices may be undermined by the poor security practices of a sister business unit. Such things are extremely difficult to measure and account for, especially in large, multinational organizations,” Webb said.

Another issue is application tiers. In poorly designed private clouds, non-mission critical-apps often share the same resources as mission-critical ones. “How do most companies separate those?” asked Chiu.

“They air-gap it, so the biggest threat for most virtualization and private cloud environments is misconfiguration,” he said. “Eighty percent of downtime is caused by inappropriate administrative changes.”

4. Poorly secured hypervisors and overstressed IPS.
Every new technology brings with it new vulnerabilities, and a gaping cloud/virtualization vulnerability is the hypervisor.

“Many people are doing nothing at all to secure virtualized infrastructures. The hypervisor is essentially a network. You have a whole network running inside these machines, yet most people have no idea what sort of traffic is in there,” Anthony said.

Buffer overflow attacks have been successful against hypervisors, and hypervisors are popping up in all sorts of devices that people wouldn’t think of as having them, including Xbox 360s.

Even when organizations believe that they have a handle on the traffic within their cloud environments, they may be fooling themselves, especially if they are relying on legacy security tools. Everyone knows that they need an IPS solution to protect their cloud deployments, but they have no idea what the actual scale of the problem is.

Moreover, many of these appliances have packet inspection settings that fail open by default. In other words, if the device is overwhelmed with, say, video traffic, the majority of traffic passes through as safe and only small samples are inspected for threats.
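The fail-open arithmetic is worth seeing in numbers. This toy calculation assumes the appliance simply inspects up to its capacity and forwards the rest; the throughput figures are illustrative, not taken from any specific product:

```python
# Toy model of a fail-open IPS: traffic beyond the appliance's
# inspection capacity is forwarded uninspected.

def inspected_fraction(traffic_gbps, capacity_gbps):
    """Fraction of traffic the IPS can actually inspect."""
    return min(1.0, capacity_gbps / traffic_gbps)

# A 10 Gbps appliance hit by a 40 Gbps video burst inspects only a
# quarter of the packets; the other three quarters pass as if safe.
print(inspected_fraction(40, 10))  # 0.25
```

An attacker who can generate the benign flood gets to choose which three quarters of the traffic go uninspected, which is why a logged "spike" deserves more attention than a low-level alarm suggests.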

The IPS will typically trigger a low-level alarm or record the spike in a log, but how many IT teams have time to review logs unless they already know they have a problem? Organizations are also slow to realize that virtualized cloud environments need a different array of protections than traditional on-premise settings did. Or they do realize it and choose to ignore it due to budget and time constraints.

The IBM security executives I talked to at RSA ticked off a number of security solutions they would recommend for better protecting cloud environments, including IPS solutions with 20 Gbps capabilities, DLP and application security. Much of their advice boiled down to this (see item #1 again): security is becoming too big a problem for most organizations to tackle on their own.

5. Insider threats.

Are insider threats keeping you up at night now? Unfortunately, virtualization and the cloud ramp up the risk of insider threats – at least for the time being.

“A smaller number of administrators are now likely to have access to a greater amount of hosted data and systems than ever before, as the cloud systems are managed by a cloud infrastructure management team. This can leave sensitive data open to access by individuals who previously did not have access to it, eroding separation of duties and practices and raising the risk of insider attacks,” Webb said. The ability to walk off with key assets is also simply much easier to do, rights or not, in a virtualized environment than a physical one.

“When the banking restrictions came out, people were worried about someone walking into the physical data center and grabbing a rack of tapes and walking off with it,” Chiu said. Those fears spurred much more frequent encryption of data at rest.

How do you steal those same assets in a virtual environment, where data encryption is often still an oversight?

“If you have administrative credentials, you pick the virtual machine you want, right click and copy it,” Chiu said. It’s not that hard to spot someone walking out of the building with a box of tapes. A virtual machine on a USB drive isn’t going to raise a single eyebrow.

