
Notes From the Cloud Academy

RAIC - Redundant Arrays of Inexpensive Cloud services

We have been running the Cloud Academy roundtables in several European countries. I’d like to share some of the more interesting questions, debates and insights around a number of topics, starting today with RAIC—Redundant Arrays of Inexpensive Cloud Services. Other topics will include:

  • A TV industry analogy: Competition for the IT department
  • Cloud Shortcuts: Can the Cloud make (internal) IT more agile?
  • Service Level Management and the Cloud
  • Cloud R&R - Retained responsibilities for IT
  • Elastic Services: Everybody wants to be a manager

Redundant Arrays of Inexpensive Cloud services
Today’s post discusses whether we can ensure the performance and availability of public cloud services. I’m not sure we can. Public cloud services are a bit like the weather: we are lucky if we can predict what they are going to be like, but we cannot manage or change them, as we don’t control the underlying elements. The same holds true when we try to “manage” public cloud services.

So what do we do? Give up on public cloud services altogether? No, that would be throwing out the baby with the bathwater. Instead, we can follow a method we have been using in IT for a long time: if we cannot count on a certain item to be always available, we make sure we have a failover option.
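The failover idea is simple enough to sketch in a few lines of code. Below is a minimal illustration in Python (the service names and send functions are hypothetical stand-ins, not a real API): try the preferred service and, if it fails, quietly move on to the next one in line.

    def send_message(message, services):
        """Try each (name, send) pair in order; return the name that worked."""
        for name, send in services:
            try:
                send(message)
                return name
            except Exception:
                continue  # this service is down; fall through to the next
        raise RuntimeError("all services failed")

    # Hypothetical stand-ins for real service clients
    def primary(msg):
        raise ConnectionError("primary service is down")

    def secondary(msg):
        print(f"sent via secondary: {msg}")

    used = send_message("hello", [("primary", primary), ("secondary", secondary)])
    print(f"delivered via {used}")  # prints: delivered via secondary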

The best example comes from storage. At a certain moment, people realized that even the most expensive disks encountered failures now and then. So they developed a strategy in which the failure of an individual disk no longer mattered much. The result was RAID, a redundant array of inexpensive disks that, transparently to the user, served the requested data from the other disks in the array when one of the disks failed. In typical IT fashion, we used the name RAID 0 for a striped configuration with no redundancy at all, RAID 1 for a mirrored pair of disks, and so on. The benefit of the redundant RAID levels is that the predicted availability increases significantly while adding only marginally more redundant capacity.
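To make the availability argument concrete, here is a back-of-the-envelope calculation in Python. It assumes fully mirrored copies and independent failures, which real arrays only approximate; the 99% figure is illustrative.

    # A mirrored array is down only when every copy is down at once.
    def array_availability(component_availability, copies):
        return 1 - (1 - component_availability) ** copies

    for copies in (1, 2, 3):
        print(f"{copies} x 99% -> {array_availability(0.99, copies):.6f}")
    # 1 x 99% -> 0.990000
    # 2 x 99% -> 0.999900
    # 3 x 99% -> 0.999999

Each extra copy buys roughly two more nines of predicted availability for only one more unit of capacity.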

How do we apply a similar “redundant array” approach to cloud services? The idea of contracting for two email services or two CRM systems is counter-intuitive for most IT folks, since for years we have strived to standardize on one of each. And the reality is that if half the company uses one email system and the other half another, 50% of the people are still down if one fails. So instead of looking at email in isolation, we should look at all the employee communication options. These may include email, instant messaging, VoIP, even social media functions similar to Facebook or Twitter. If these are based on different technologies and sourced from different vendors, it is extremely unlikely that they will all be down at the same time.
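A rough calculation shows why. When outages are independent - a reasonable assumption if the channels use different technologies from different vendors - the probability that everything is down at once is the product of the individual downtimes. The availability figures below are purely illustrative:

    # Chance that every communication channel is down at the same moment,
    # assuming independent outages. Availability figures are made up.
    channels = {
        "email": 0.995,
        "instant messaging": 0.99,
        "VoIP": 0.99,
        "social media": 0.98,
    }

    p_all_down = 1.0
    for availability in channels.values():
        p_all_down *= 1 - availability

    print(f"P(all channels down at once) = {p_all_down:.1e}")  # 1.0e-08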

Using chat or instant messaging as a backup for email is not how we traditionally think in IT - and challenging such traditional thinking is exactly the idea of the Cloud Academy - but it aligns with the next generation of IT users. An example: teenagers (like the two living in my home) instantly switch from MSN to Google chat, to Hyves or Facebook, or even to Hotmail or text messaging if the service they are using is behaving strangely. They are not particularly interested in whether a particular service is down; their only interest is whether they can continue to communicate with their friends.

Of course, since today’s IT departments proactively monitor the infrastructure and know the status of their systems, they rarely get a call saying “all systems are down”. But that’s not true with external cloud services. We need to find an alternative early-warning system, something like a weather report on the status of the external cloud services our users depend upon. An interesting site in this context is http://www.unifiedmonitoring.com/.
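As a thought experiment, such a weather report could start as something as simple as the sketch below: periodically probe a health endpoint for each external service and report up or down. The URLs are hypothetical placeholders; a real early-warning system would also track latency, outage history and alerting.

    # A tiny "weather report" for external cloud services.
    import urllib.request

    SERVICES = {
        "email": "https://mail.example.com/health",  # hypothetical endpoints
        "crm": "https://crm.example.com/health",
    }

    def is_up(url, timeout=5.0):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 300
        except Exception:
            return False

    for name, url in SERVICES.items():
        print(f"{name}: {'up' if is_up(url) else 'DOWN'}")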

So what conclusions did we reach in our (sometimes heated) Cloud Academy debate?

Using public cloud services is another step in giving up control of the underlying components. Years ago, when companies bought their first computers, they were expected to program them themselves in assembler. Later, they bought higher-level language compilers, followed by complete off-the-shelf software packages, followed now by infrastructure and software as a service. With each step, IT has lost some control, but in exchange we are no longer required to do all the work.

We do, however, have to make conscious decisions about when to cede control. This differs by industry, type of application and possible risk. Using public cloud services already makes sense today in many cases. But when using them, we need some way to monitor availability and outcomes, so that we can make smart, pragmatic tradeoffs and take precautions when the services are not available.

More Stories By Gregor Petri

Gregor Petri is a regular expert and keynote speaker at industry events throughout Europe and wrote the cloud primer “Shedding Light on Cloud Computing”. He was also a columnist at ITSM Portal, a contributing author to the Dutch book “Over Cloud Computing” and a member of the Computable expert panel, and his LeanITmanager blog is syndicated across many sites worldwide. Gregor was named by Cloud Computing Journal as one of The Top 100 Bloggers on Cloud Computing.

Follow him on Twitter @GregorPetri or read his blog at blog.gregorpetri.com
