Sunday, October 5, 2014

#IBMEnterprise @IBMEnterprise Cloud Analytics and Engagement: my highlights...

For all of you heading out to Las Vegas for Enterprise2014 to hear IBM's viewpoint on how to deploy infrastructure for Cloud Analytics and Engagement, you will certainly have an exciting conference.  Some of my personal highlights will be:

Steve Groom - CEO from Vissensa
Presenting as part of the Executive Summit on Monday, Steve will spend 15 minutes addressing the main tent on how he leverages IBM technology within his MSP business to deliver innovative services.  A highlight of Steve's presentation will be when he focuses on the innovative Print-aaS application they are launching, which runs on their newly acquired Enterprise Cloud System.

Lubo Cheytanov - CEO of L3C
Interviewed by IBM's Alex Gogh as part of the ISV/MSP Mashup on Tuesday afternoon about how his business, a London-based MSP, drives top-line growth and client acquisition on mainframe technology, Lubo will be well worth watching.

Klaus Kristiansen - CEO of Ubiquitech
Klaus will be interviewed by IBM's Judy Smolski as part of the ISV/MSP Mashup on Tuesday afternoon about how Ubiquitech offers industry-leading secure pull-printing solutions as-a-Service in the cloud whilst leveraging mainframe security technologies.

I will be very socially active this week, so please engage with me via @StevenDickens3 on Twitter. If you want to meet up, even better: tweet me to arrange a time and location, or join me at my own sessions at Enterprise:


The Economics of Cloud on the Mainframe
Monday 4:15-5:30pm
Veronese 2402

The Economics of Cloud on the Mainframe
Tuesday 2:30-3:45pm
Marco Polo 703

Mission Critical Workloads-as-a-Service
Wednesday 10:30-11:45am
Veronese 2402

Softlayer and the Mainframe
Wednesday 2:30-3:45pm
Veronese 2402

Softlayer and the Mainframe
Thursday 9:00-10:15am
Veronese 2402

How MSPs are leveraging the mainframe for public cloud
Thursday 1:00-2:15pm
Veronese 2402




Tuesday, September 23, 2014

Why Storm Clouds are gathering over the MSP/CSP Landscape

The exit of RackSpace from the IaaS market is an omen of foreboding. The clouds are coming, and with them the biggest storm that IT has ever witnessed. Tier 2 and 3 players will be washed away from local markets and specific industry verticals if they do not prepare. This event is much bigger than the Y2K scare that had us all sweating on that New Year's Eve.

5 Must Do Transformations for MSP/CSP Survival.

  • Rethink the offer.
  • Move to a specific software solution.
  • Discover a unique slant on how to offer quality of service, availability, and security.
  • Be innovative.
  • Be daring.

As cloud-based solutions take over the industry it will be survival of the fittest – and the smartest and the most innovative. Be sensitive to your clients’ needs; do not just heap tin on them and call it a solution, or you will be sorry – very sorry.


So, the race is on. Will the market discover the holes in the MSP/CSP offerings and close them out to face the storm? We will find out soon enough.

Friday, August 15, 2014

The Post-RackSpace IaaS market dynamic and what this means for the Mainframe

So the world of cloud computing continues to evolve on an almost daily basis, but what specific event has forced pen to paper… or virtual pen to paper at least?  Well, the exit of RackSpace from the IaaS market was, to me, a major event in the evolution of the cloud and the commoditization of what I call the first wave of cloud, namely Simple IT-aaS.

If one of the hyperscale darlings of the IaaS space can no longer make sufficient margin from this sector, then what hope have tier 2 and 3 players got in local markets or specific industry verticals?  In this scribe's humble opinion, MSP/CSPs looking to survive, and yes I do mean survive, beyond the next 3 years need to find a niche and develop their tailored offering quickly.

No longer can an MSP throw some x86-based VMware guests up on the web and hope to build a sustainable business; the MSP/CSP needs to be offering a specific software solution to a vertical, or a unique slant on how to offer quality of service, availability or security.  As this evolution to what I call the 2nd wave of cloud workloads, or Mission Critical IT-aaS, becomes the new battleground for MSPs, the focus will be on innovation and daring to be different.  Following the crowd and hoping to be lucky will lead to some of the 35,000 MSPs currently operating globally going to the wall.  This is to be expected: as any marketplace matures, new entrants get consolidated out and the successful survive; call it Darwinian survival of the fittest.

So to all those MSPs out there not wanting to become the next dodo, I suggest you take a long hard look at the building blocks of your business, namely your infrastructure, and ask yourself one hard question: have we dared to be different, or have we followed accepted practice and built a derivative of what everyone else in the industry is offering?


You may not like the answer, but better to know now rather than let the market find you out…

Friday, May 30, 2014

The 2nd wave of cloud computing workloads is the next battleground for IaaS and PaaS MSPs

It's been a while...

Sorry for my neglect over the last few months; it has been a busy time as I transition to a new role in a new country, so hopefully you are glad to have me back...

As the cloud market continues to mature and the large players continue to define the marketplace, a new dynamic is beginning to emerge.  While what I call the "Simple IT-aaS" market continues its race to the bottom price-wise, the larger players are starting to act like they believe this model is not sustainable in the medium term. Let me expand on this premise...

If you look at x86 virtualisation technology as a precursor to cloud, then the adoption went like this: first off you virtualised your test and dev environment so you could trial the benefits, then as your processes matured you rolled VMware across your production x86 estate.  Whilst you were achieving all you had hoped for with this new technology, you didn't touch your UNIX and mainframe backend systems, so you took 50% of the cost out of 30% of your IT budget.  A noble pursuit, but did you really 'move the needle'?

Cloud is following the same adoption model: whilst moving your 'Simple IT' to an AWS (or Softlayer) instance is a noble pursuit, are you again going to move the needle when you take a macro perspective on your IT budget?  If you manage to take your x86 on-premise estate and move it off premise, then for sure you will achieve savings and clear the decks of a number of issues and support nightmares in the process, but will you really affect your bottom-line IT cost base?

The next battleground is how you take the systems that have been architected to deliver 99.999% availability 24x7x365 and that run your business-critical applications, and look to deliver them as-a-Service, either on premise or ultimately off premise in the public cloud.  My belief is that only when you can deliver on this project will you truly be able to declare victory.

So how do you achieve 99.999% availability in the cloud when the best of the best (i.e. AWS, according to Gartner's IaaS Magic Quadrant) delivers at best 99.9%?
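
To put those availability numbers into perspective, here is a quick back-of-the-envelope conversion into downtime per year (a rough sketch in Python; the percentages are simple arithmetic, not anyone's published SLA):

    # Convert an availability percentage into expected downtime per year
    def downtime_minutes_per_year(availability_pct):
        minutes_per_year = 365 * 24 * 60  # 525,600 minutes
        return minutes_per_year * (1 - availability_pct / 100)

    for pct in (99.9, 99.99, 99.999):
        print(f"{pct}% available -> {downtime_minutes_per_year(pct):,.1f} minutes of downtime a year")

    # 99.9%   -> ~525.6 minutes (almost 9 hours)
    # 99.99%  -> ~52.6 minutes
    # 99.999% -> ~5.3 minutes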

One approach I would advocate is to map out your mission-critical workloads, look for common components of the applications, such as database and middleware, and look to decompose your applications into horizontal 'utilities' that can be delivered as-a-Service.  Take database, for instance: I am sure in any large corporate there are hundreds of servers running databases such as Oracle and DB2 on UNIX servers that also run the application the database supports.  Why not consolidate these databases into a horizontal utility that can be provided to any application that requires a database?  This way you can centralise and offer DB-aaS to your applications.  If you take this approach you can then, based on latency and other factors, decide whether this utility sits on premise or in the public cloud.
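
As a purely illustrative sketch of that mapping exercise, the snippet below groups a made-up application inventory by common component and flags where each resulting 'utility' might sit; every name and rule here is invented for the example, not taken from any real estate:

    from collections import defaultdict

    # Hypothetical inventory: (application, common component, latency-sensitive?)
    inventory = [
        ("payments",  "DB2",       True),
        ("billing",   "Oracle",    True),
        ("reporting", "DB2",       False),
        ("web-shop",  "WebSphere", False),
    ]

    # Group applications by the component they share, e.g. all DB2 consumers
    utilities = defaultdict(list)
    for app, component, latency_sensitive in inventory:
        utilities[component].append((app, latency_sensitive))

    # Simple placement rule: if any consumer is latency-sensitive, keep the utility on premise
    for component, consumers in utilities.items():
        placement = "on premise" if any(s for _, s in consumers) else "public cloud"
        apps = [a for a, _ in consumers]
        print(f"{component}-aaS serving {apps} -> {placement}")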

However, a word of caution: if you look to aggregate multiple databases onto one utility service, then that service had better scale, be available and be secure...

I hear dissent... No cloud platform can scale, be available and offer enterprise-grade security at the levels you need to offer high-I/O workloads such as database as a horizontal utility...

Can you see where this is going yet...

Well there is one cloud platform that can handle Enterprise (class) Cloud (workloads in one) System... IBM's new Enterprise Cloud System.

Check it out at http://www-03.ibm.com/systems/z/solutions/cloud/system.html and get in touch if you want to know more via @StevenDickens3 on Twitter...

Wednesday, March 12, 2014

Cloud - the 2nd wave of workloads

As we all saw with the adoption of virtualisation technologies in the 2005-2010 time frame, there is a gradual move toward adoption of a new paradigm in computing.  As the now de facto standard VMware evolved, clients put test and dev workloads into virtualised environments first.  This was rapidly followed by small workloads, and now any self-respecting x86 application is virtualised.

This same evolution is happening in the cloud, albeit at an accelerated pace: we are past the 1st wave of clients moving the first 'test' workloads into the cloud, and we are now seeing more and more applications follow.  However, as this 2nd wave of workloads starts to migrate into the cloud, clients are increasingly struggling to find the QoS and SLAs they have grown accustomed to with on-premise infrastructures that have been designed from the ground up to be available, secure and fault tolerant.

Every day we hear of yet another cloud outage or security breach that casts doubt over the suitability of these mission-critical workloads being hosted off premise in a cloud environment, be it public or private.  Just check out http://cloutage.org/incidents and the tale of woe would, and should, scare any CIO.

However, all is not doom and gloom. On my travels of late I have had cause to discuss QoS in the cloud (I recently chaired a panel discussion on this very topic at Cloud Expo Europe) and have some suggestions for the wary CIO.  Two approaches come to mind that both have merit if your requirements for cloud computing are more mission critical than the norm.  The first is Softlayer's 'Bare Metal as a Service' approach: this offering enables the client to gain access to an off-premise cloud environment that is dedicated, i.e. not shared or virtualised.  For more details check out the website:

https://www.softlayer.com/dedicated-servers/

Why is this a good idea?  Well, for a start, none of us likes a noisy neighbour, especially one who can consume our resources and impact our service; with a dedicated server in the cloud we are free to operate without this concern.  Another benefit is that with a dedicated server you are able to do with it what you please, when you want, without having to worry.  Finally, by going off premise you can ensure your precious server sits in a modern DC with the best TLC and connectivity.

The 2nd approach is either to engage a Cloud Service Provider (CSP) who is running a System z based cloud environment, such as the soon-to-launch L3C in the UK or any one of a group of global CSPs who are starting to offer such services, or to build your own on-premise Linux cloud based on System z.  With a compelling TCO against the likes of Amazon Web Services and competing x86 architectures, and all the QoS benefits inherited from using the same hardware as the mainframe, the on-premise cloud based on System z is a perfect platform for the 2nd wave of enterprise mission-critical workloads that need to be re-hosted in the cloud.


Monday, January 20, 2014

"Zed's Dead Baby"- The relevance of the mainframe to modern computing and IT trends.



“Zed’s dead, baby.” These may be the immortal words of a motorbike-wielding Bruce Willis in Pulp Fiction, but they were also some of the first words uttered to me when I made the decision to move into the System z software team. The perception of the mainframe is simple: it’s a dinosaur. In a world where companies like Apple render their previous generations of hardware ‘vintage’ or ‘obsolete’ after 5 years, and release software upgrades annually and free of charge, how can a product that is celebrating its 50th birthday this April still be relevant? Equally, as the current IT landscape evolves towards mega trends such as Cloud, Mobile and Big Data, how can this ‘vintage’ platform keep up?

In 1969, IBM and the mainframe helped NASA put the first men on the moon, and now, 45 years later, the focus is still skywards for putting work in the Cloud (tenuous link). Cloud has always been part of the mainframe: since its inception, VMs have been a basic component of the mainframe hardware. Despite this, for many x86 seems the natural choice for Cloud workloads: cheap and simple. However, while x86 serves the commodity workloads, System z is undeniably the most suitable choice for high-complexity and high-criticality Cloud workloads. The mainframe is famed for unmatched reliability and is the ultimate security choice, something other platforms simply cannot contend with. What’s more, in the market where IBM System z Cloud mainly operates (private, in-house), Cloud computing costs less per virtual machine on the IBM mainframe vs x86*, and the power usage per VM is significantly lower on z, in turn improving OpEx costs.
Cloud is a huge part of the System z strategy, with a focus on orchestration choices for our customers in order to automate deployment and lifecycle management, which results in a reduced time to market and improved productivity.

With the average System z box standing at 2 metres tall, how can it possibly be described as mobile? With many believing the mainframe can only be a solution for banking giants and top-secret agencies, System z is often pushed aside in customer mobile strategy. But mobile is no simple task. With over 200 million employees bringing their own devices, checking their mobiles on average 30 times per hour and never more than an arm’s reach away from their beloved smartphone, demand for a powerful, secure platform is high.  Enterprise mobile strategy plays into this ideally: from seamless building and development of applications with Rational; to world-class security and management with Tivoli, preventing costly breaches; to extending capability and user flexibility. While mobile can generate billions of transactions, System z can handle more than 30 billion transactions a day. While the billions of mobile users globally expect real-time data, System z on average has less than 6 minutes of downtime a year. So despite not being pocket-sized, System z is the clear choice for business transformation into the mobile world.

We now generate more data in two days than we did in total up until 2003. This data can either be wasted, sitting on tapes and disks and hard drives until the day when the auditors finally tell us we can destroy it, or it could become an incredibly powerful business tool. Businesses who implement analytics tools outperform their competitors by over two times. Which CIO wouldn’t want that? But there are hurdles. Many companies grow through acquisition, and even if they don’t, they can develop a large server sprawl, which in turn creates siloed workloads and no single repository of data. System z can provide that single repository. By making sure your data is available, consolidated, and secure, System z can form the basis of your analytics and data management strategy. Additionally, creating a new analytics environment can take up to 6 months at a cost of approximately $250,000. However, with the simplicity of the System z architecture, companies can have a deployed analytics environment in 2 days at $25,000. Not bad for an out-of-date, dinosaur platform.

So is z still relevant? Well, if you want an incredibly powerful infrastructure, which boasts maximum security and unmatched reliability: yes. If you want to be able to keep ahead of the latest IT trends, with value and strategy plans spanning mobile, cloud and data analytics: you bet it is.
Zed’s back, baby.

*based on 275 VMs

Friday, January 17, 2014

“Without IBM and the systems that they provided, we would not have landed on the Moon.”

‘This is one small step for man, one giant leap for mankind’. Of course these weren’t the words muttered by a techie in an IBM lab when years of work had finally completed the mainframe. We all know where these words were really heard across the universe in 1969, when the world first saw two men land on the moon. However, this famous quote is still quite relevant to the IBM mainframe, but in a different sense. Few people know that the IBM® System/360 mainframe helped NASA to launch the Apollo missions, making history and changing the way we see the world today.

Mainframe computing has done just that! It's developed the world we live in, so these achievements should be remembered when celebrating the Big Iron’s Big Five-O! Considered one of the most successful computers in history, the mainframe led the way for innovative computing, changing in more ways than just its name! Our ever-evolving IT industry has tested and pushed the super computer, and whilst it's taken a fair battering along the way, it's safe to say that it epitomises Darwin's theory of ‘survival of the fittest’. Fit is a good way to describe it: it's strong, doesn't lack energy and it definitely looks the part!

Arguably its biggest contribution is pioneering real-time transaction processing, which led the way to credit card authorizations; one of many things we take for granted today. Online transactions are so simple that you’ll find many a student waking up hung-over with no recollection of the night before, but an interesting eBay parcel on its way... When sequential input/output processing was the norm, the ground-breaking ability to take this online helped businesses become more responsive. Personally I can’t imagine computing life before it; we were a patient nation! Now every transaction is fulfilled with ease; we have faith in computers to give us the correct information and pay people successfully because they rarely fail us.  Today I have clicked and shopped, saving me time, money and effort. Had I chosen to actually leave my house I probably wouldn’t have written this entry. Revolutionary!

Aside from how efficient our lives are becoming because of ground-breaking technologies, we also see productivity gains for the biggest firms in all industry sectors. The extreme scalability, high data-handling capability and vast security measures enable firms to rely heavily on this machine that trudges away in the background. I’m sure you’ll know it’s not the cheapest, but I’ve yet to find an inexpensive high-value asset, and when I do I probably won’t share it!  Ironically, our mainframe grows in capability inversely to its size. Its resilience is second to none, allowing mission-critical data to be handled without the fear of disruption. We are not aware of the impact that the mainframe has on our lives, for we don’t see or hear of it; but housework is never noticed until it’s not done.

Friday, January 10, 2014

‘Someone's hacked into the mainframe!’ A view of the mainframe from the 1990s.




‘The Mainframe’ to those born in the 90s probably refers to a scene in a Hollywood action film where the villain has ‘hacked into the mainframe’, causing all sorts of problems for James Bond and John McClane. In fact, only last week on Celebrity Big Brother were the words ‘hack into the mainframe’ uttered, referring to a plastic screen on an extremely unrealistic alien spaceship, so you can excuse the confusion that people have when referring to the platform. Although those within the industry understand how detrimental a malicious act like this would be for any business, I’m sure most are also aware that of all the platforms running, the mainframe would more than likely be the last to suffer a security breach. This year in April we celebrate the 50th birthday of the mainframe, now often referred to as the System z platform, so I feel it’s a relevant time to address some of the myths and speculation surrounding the platform.

Like most people my age I had little experience of the mainframe until I started working for IBM, where I have developed an understanding of this super computer, and a month ago I saw my first z196 in the flesh. The common misconception amongst 20-something-year-olds is that the mainframe is a ‘dead’ or ‘dying’ piece of hardware, and this is a myth that I would like to dispel. A computer that used to be larger than my living room is now 201.3cm tall, 156.5cm wide and 186.7cm deep, making it a much more practical solution for customers. Although some clients previously attempted to move away from the mainframe, we are beginning to witness a trend of customers proactively moving workloads back onto the platform to benefit from the scalability and energy cost savings. These are typical mainframe customers, such as large banks and large retailers, who have to process extremely vast amounts of data daily.

A development that actually happened during the 90s has not been given enough attention, in my opinion, and that is the integration of Linux onto the System z platform. This is a development that allows the mainframe to act in very similar ways to other x86-based distributed platforms, bridging the gap between the two. I think it is something that should be highlighted, especially within our software division, as software built with this in mind will also run on Linux on distributed platforms, giving the customer more choice as to where they put specific workloads.

So when asking what the mainframe means to those from the 90s, the answer will massively depend on whom you ask. Those from a non-technical background might say, “a company’s main computer”, whereas some within the industry could say, “the platform that IBM used to sell software on.” For myself, I see it as something that over the next decade will become ever more prominent within the technology industry. The reasoning behind changing the name to System z was to refer to the almost ‘zero’ downtime that the hardware is capable of, and to give an ‘old’ dog a new name. But why change the name of something that gains publicity within popular culture? If I told my friends I worked in mainframe software, they would think that I had somehow gained some fantastic technical expertise, but when I say System z, they look at me blankly.

With the half-century anniversary of the mainframe this year, I am sure that there will be a lot of developments, innovation and press surrounding the platform and the vision for it in the future.
The most I can hope for from this blog is to engage with those, like myself, born in the 1990s, at a time when IBM had failed to invest in its core hardware, and to challenge their perception of the platform; and on a wider scale, anyone who has their doubts about System z as a platform. I agree that in some cases it is not the right way to go; however, not considering utilizing a mainframe that already exists within an organization is a cardinal sin.


Thursday, January 2, 2014

Mainframe Platform Economics

Happy 2014 to all my readers! I hope you had a merry holiday with your families and friends, and that you have a happy and prosperous 2014.

So with that out of the way, I would like to cover the thorny subject of Mainframe Platform Economics.  Despite your obvious audible groan as you read that last sentence, I hope to make it as light-hearted as possible and poke some fun along the way...

So, the mainframe is obviously expensive and everybody knows there are better platforms to run applications on in 2014; heck, the mainframe is 50 years old in 2014, there is no way it can compete with modern technology from a TCO perspective...

No, I have not been abducted by aliens; I just thought I would open with the 'accepted' wisdom that we mainframers hear pretty much every day from our IT brethren who look after distributed platforms. We have all heard these words, or something similar, when the subject of platform selection for any new application is being discussed.

Let me put in context some of the things such simplistic statements overlook or, more sinisterly, ignore on purpose:

Total Cost of Acquisition Vs Total Cost of Ownership
Since the financial meltdown of 2008, the cost of running businesses has come more strongly into focus, and the role of the accountant, and ultimately the CFO, has gained a level of involvement in investment decisions previously unseen.  Given this rise of cost to the forefront of IT decision making, it is not surprising that investment decisions need to be made soundly and with foresight as to how they affect the cost base of the business over the medium to long term.  I think most readers would subscribe to the view that financial prudence is good practice, unless of course you are Larry Ellison (bugger, that New Year's resolution didn't last long).  So why, given this background, do I see an increasing trend to focus on upfront costs, or Total Cost of Acquisition?  This is short-sightedness or delusion, depending on how much credit you want to give the person proposing this approach.  Looking at how much a server costs is obviously important and should not be ignored, but neither should the fact that the purchase price has little bearing on the running costs over 5 years.  Indulge me if you please: would you buy a 15-year-old V12 Jaguar XJS for £3,000 or a £4,000 Ford Fiesta as your 18-year-old's first car after they pass their driving test?  Now, if you are part of the Jaguar Owners Club and have landed here by the vagaries of Google, I apologise profusely, but if you are not then you will get the point.  We all know the Jag would be thirsty, expensive to insure and cost a lot more to service...  You get my point...
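
To make the same point with numbers rather than cars, here is a deliberately simplified 5-year comparison; every figure is invented for illustration only:

    # Illustrative only: purchase price (TCA) versus 5-year total cost of ownership (TCO)
    def five_year_tco(acquisition, annual_running):
        return acquisition + 5 * annual_running

    cheap_to_buy   = five_year_tco(acquisition=3_000, annual_running=4_500)  # power, cooling, admin, licences...
    pricier_to_buy = five_year_tco(acquisition=8_000, annual_running=1_500)

    print(f"Cheaper to acquire: {cheap_to_buy:,} over 5 years")    # 25,500
    print(f"Pricier to acquire: {pricier_to_buy:,} over 5 years")  # 15,500

The box that looked cheaper on the purchase order ends up costing considerably more once the running costs are counted, which is precisely the TCA versus TCO trap.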

Platform Selection - Security, Availability and Scalability
Again, we in IT all know that a platform that is insecure, always down and has limited scalability should never be purchased... So why, in so many business cases this author has seen over the last 19 years, are these critical factors at worst not covered or at best glossed over?  When a business case looks solely at the year-one costs, be wary.  We all know that applications grow, and increasingly issues such as security need to be taken into account up front.  Why then are these two areas not given the focus they so richly deserve?  Could it be that if we include security and scalability provision the TCA would suffer and the CFO would never sign off our business case, and they don't know about IT so what the heck...  Availability is another ball game altogether; take a look at this very interesting Wikipedia link:

http://en.wikipedia.org/wiki/High_availability

Once you have read this, is Amazon's new cloud service that offers 99.5% availability still such a good decision?  What are you going to do, for instance, for the 50.4 minutes every week when your mission-critical platform is down: put the kettle on? play Sudoku? apply online for a new job?  It really is not that difficult to put a cost on availability. One simple way would be to note the revenue/profit generated by an application, work out how many hours there are in a year (365*24 = 8,760), and then calculate what an hour of downtime costs.  I know, this is bewitching maths, well beyond most 5-year-olds, but seriously people, how can this be ignored in a platform decision?
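
For the avoidance of any doubt, here is that maths as a rough sketch; the revenue figure is made up purely for illustration:

    # Rough cost-of-downtime calculation for a revenue-generating application
    annual_revenue   = 10_000_000              # illustrative figure only
    hours_per_year   = 365 * 24                # 8,760
    revenue_per_hour = annual_revenue / hours_per_year

    availability     = 0.995                   # the 99.5% service from the example above
    downtime_hours   = hours_per_year * (1 - availability)   # ~43.8 hours a year

    print(f"Revenue per hour:       {revenue_per_hour:,.0f}")   # ~1,142
    print(f"Downtime hours a year:  {downtime_hours:,.1f}")
    print(f"Revenue at risk a year: {revenue_per_hour * downtime_hours:,.0f}")  # ~50,000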

How to choose the Platform for your new application
IBM has an approach called Fit for Purpose that looks at ALL of the factors above, plus a number of others, and proposes a decision tree for platform selection.  However, let me propose an alternative method.  You made a decision about how you got to work today: classic Jaguar (hope that makes up for my earlier error), helicopter (Larry Ellison again), hovercraft, train, car, push bike or some combination.  Your decision probably included variables such as length of journey, time available, weather, cost, etc.  Apply this same principle when you next choose an underlying platform for your application, and remember that other platforms apart from x86 are available...
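
If you want to make the commuting analogy concrete, a toy weighted-scoring exercise like the one below does the job; the factors, weights and scores are entirely made up, and this is not IBM's Fit for Purpose methodology, just the same idea in miniature:

    # Toy weighted scoring: rank platforms against what the workload actually needs
    weights = {"security": 0.3, "availability": 0.3, "scalability": 0.2, "acquisition_cost": 0.2}

    # Scores out of 10, invented purely for illustration
    platforms = {
        "x86 scale-out": {"security": 6, "availability": 6, "scalability": 7, "acquisition_cost": 9},
        "System z":      {"security": 9, "availability": 9, "scalability": 8, "acquisition_cost": 5},
    }

    for name, scores in platforms.items():
        total = sum(weights[factor] * scores[factor] for factor in weights)
        print(f"{name}: weighted score {total:.1f}")

Change the weights to match the workload; a latency-tolerant dev/test environment will rank the options very differently from a payments engine, and the 'right' platform changes with them.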