CloudEnterprise.info

Posts Tagged ‘Amazon’

Don’s recent attempt to look at the financials of 10 publicly traded “cloud” companies inspired me to expand his research into a bigger picture.

After all, limiting the scope to 100% cloud companies really skews the charts to “Salesforce.com and everyone else” leaving such cloud juggernauts as Amazon and Google out of the picture.

As Don notes, Salesforce.com is doing extremely well: in Q1 2011 the company demonstrated a 34% year-over-year growth rate and made $504 million in revenue. Their 2010 revenue was about $1.66 billion.

Companies like Google and Amazon are indeed much harder to analyze. Neither discloses cloud-related revenue, which gets lost in the grand scheme of their core businesses: online advertising for Google and retail for Amazon.

In this blog post I decided to have a look at where these two cloud businesses stand.

Amazon

In August 2010, UBS Investment Research estimated that Amazon Web Services was on track to make $500 million in 2010 (up from $275 mln in 2009), and $750 mln in 2011 (out of Amazon’s total revenue of about $44 bln). By 2014, AWS is expected to reach $2.5 billion.
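As a quick sanity check on those estimates, here is the implied growth math in Python (a sketch; the compound growth rate is my own derivation from the UBS figures, not a number from the report):

```python
# UBS revenue estimates for AWS, in $ millions (from the figures above)
revenue = {2009: 275, 2010: 500, 2011: 750, 2014: 2500}

# Year-over-year growth implied by consecutive estimates
yoy_2010 = revenue[2010] / revenue[2009] - 1   # ~82%
yoy_2011 = revenue[2011] / revenue[2010] - 1   # 50%

# Compound annual growth rate needed to go from $500M in 2010
# to $2.5B in 2014, i.e. over four years
cagr = (revenue[2014] / revenue[2010]) ** (1 / 4) - 1   # ~50% per year

print(f"2010 YoY: {yoy_2010:.0%}, 2011 YoY: {yoy_2011:.0%}, "
      f"implied CAGR to 2014: {cagr:.0%}")
```

In other words, the UBS projection amounts to assuming AWS keeps compounding at roughly 50% a year through 2014.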

Profits are estimated to be around $58.2 million in 2010, $100.7 million in 2011.

As a side note on the Infrastructure as a Service space: Rackspace is considered the number-two cloud provider, and they are way behind Amazon, with their target cloud-services revenue for 2011 set at $100 mln.

Google

Google Apps is Google’s core subscription cloud service, and again a small fraction of the total company’s revenue (and with Android’s success no longer the most cherished ‘secondary business’ either).

The latest interview with David Girouard, the boss of Google Enterprise (which includes Google Apps), does not say much:

3,000 businesses are moving to the suite each day, and over three million have moved since its debut in 2007. But it’s unclear how much revenue Google is generating from subscriptions. All we know is that it’s under $1bn a year, less than four per cent of the company’s overall revenue. The aim, however, is to create a multi-billion-dollar business – in the near term. “Not a decade from now,” Girouard said, “but within a few years.”

Obviously ‘under $1 billion’ is a huge range.

A year ago, in May 2010, Nikesh Arora, president of Google’s Global Sales Operations and Business Development provided more detailed information:

First of all, back then the number of customers was one-third lower: “There are 2 million small businesses that have signed up”.

And secondly he provided a date estimate for reaching the $1 billion mark: “In perhaps three- or four years, I hope it will be more than a billion dollar revenue stream.”

With that kind of growth, to get to a billion dollars in 3 years, Google Apps needs to be making $300 million in revenue a year at the moment. On the other hand, when Google Apps was claiming 1 million users in early 2009, its revenue target for the year was $40 million. So with 3 times more users today, they might very well be at 3 times the revenue: $120 million a year. My guess is that the broad range ($120-$300 mln) might come down to whether advertising revenue from free Google Apps accounts is included.
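The back-of-the-envelope arithmetic behind that range can be sketched in Python (the user counts and the $40M and $1B figures come from the quotes above; the ~50% annual growth assumption is my own extrapolation):

```python
# Lower bound: scale the 2009 revenue target with the user count
users_2009, target_2009 = 1_000_000, 40          # target in $ millions
users_2011 = 3_000_000
low_estimate = target_2009 * users_2011 / users_2009   # $120M

# Upper bound: what current revenue is consistent with reaching
# $1B in 3 years at an assumed ~50% annual growth rate?
goal, years, growth = 1000, 3, 0.5               # goal in $ millions
high_estimate = goal / (1 + growth) ** years     # ~$296M

print(f"Google Apps revenue estimate: "
      f"${low_estimate:.0f}M - ${high_estimate:.0f}M a year")
```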

Anyone else?

I am actually quite impressed with how the revenue of Salesforce.com compares to the cloud businesses of Amazon and Google.

For now I would probably just limit the analysis to these two vendors. Microsoft is trying hard to get into this business with their Office 365 and Windows Azure launches. However, to be fair to the company, I would probably wait another year before discussing their financial performance.

And that’s just the software vendors. IBM‘s CFO Mark Loughridge claims that cloud services will generate $7 billion in revenue for his company by 2015, and I am pretty sure that hardware vendors are not losing money on shipping servers to all the new cloud datacenters either.

Have I missed any of the big players you would have expected to see in this analysis? Let me know.

Is there hard ROI in using cloud IaaS instead of a server in your garage/basement/on-premise datacenter? I think there increasingly is, and justifying self-hosting is getting tough.

I would actually go as far as to posit that you can now get a server in a public datacenter at a price comparable to your electricity bill alone!

If you don’t believe me, let’s do some quick math.

Mark Kolich noticed in his blog that the server he had running at home was consuming 220 W, which at a consumer electricity cost of 12 cents per kWh means:

0.220 kW × 12 cents/kWh = 2.64 cents per hour

Almost 3 cents/hour for electricity alone, not taking into account: labor, server hardware amortization, data-storage costs (replacing a failed disk), cooling costs, ISP costs, security costs (routers, firewalls, etc.), power backup costs (a UPS) and so on. Mark notes that he could probably have bought a newer, more energy-efficient server, but the savings would not have justified the required investment.

The shocking part is that the recent price competition among cloud infrastructure (IaaS) and platform (PaaS) vendors has brought cloud server costs down to roughly the same order of magnitude. Here’s a quick survey of a few major cloud players:

  • Microsoft is rolling out their 5 cents/hour option (with further discounts if you pre-pay for reserved use – say, you have a bunch of instances running all the time and are willing to pre-pay for the next few months).
  • Same with Amazon: the minimal price (albeit for a slightly more limited instance) is already in the 2 cents for Linux / 3 cents for Windows area, with reserved/pre-paid options getting as low as 0.7 cents for Linux and 1.3 cents for Windows.
  • Rackspace pricing starts at 1.5 cents/hour for Linux and 8 cents/hour for Windows.
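Putting the numbers above side by side (a Python sketch; the prices are the per-hour figures quoted in this post as of mid-2011 and will certainly drift):

```python
HOURS_PER_YEAR = 24 * 365  # 8760

# Hourly costs in dollars, from the figures in this post
options = {
    "home server (electricity only)": 0.220 * 0.12,  # 220 W at 12 cents/kWh
    "Amazon Linux (on-demand)":       0.02,
    "Amazon Linux (reserved)":        0.007,
    "Rackspace Linux":                0.015,
    "Microsoft (5-cent option)":      0.05,
}

for name, per_hour in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name:32s} {per_hour * 100:5.2f} cents/hour  "
          f"~${per_hour * HOURS_PER_YEAR:6.0f}/year")
```

The home-server line is electricity alone; add hardware, bandwidth and labor, and the reserved cloud instances come out well ahead.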

My take on these numbers is that you need to have a really good reason to go into hosting when there is so much price competition in that space and the margins are going down so fast.

The only good reason I can think of is hosting being your competitive advantage in some way. For example, being a local hosting company in a country whose legislation makes it hard to use foreign datacenters. Or offering a level of compliance which public hosters cannot provide. And as a matter of fact, both of these differentiators are gradually going away, with the vendors quickly getting all the certifications and compliance stamps you can think of, as well as opening datacenters around the globe.

Cloud is cheaper than your own hosting regardless of how you calculate the costs. Get used to it.

Dmitry

Here’s my attempt to put together a list of things I expect to happen to Cloud Computing in 2009 – kind of a natural thing to do on the first day of the year, right?

Overall, this is going to be a year when cloud computing will start rapidly maturing with competition heating up on the infrastructure/platform level, real private cloud solutions hitting the market, traditional applications increasingly moving to SaaS or hybrid model, and browser offline becoming a reality.

Let’s go through these one by one – and go through the IaaS and PaaS markets first.

Platform and Infrastructure as a Service (PaaS and IaaS) markets maturing and blurring.

IaaS is basically the Amazon EC2 approach, with hosters giving customers the ability to instantiate and control virtual machines running in the datacenter. This is a natural progression from the traditional server hosting model. However, this raw-VM model does not provide much opportunity to differentiate, which in turn leads to higher competition and lower profit margins. We will see more and more platform functionality being added to infrastructure offerings, and these two layers merging.

Amazon is clearly adding more and more services besides EC2, and partners such as RightScale are adding automated scaling features normally associated with PaaS.

Even newcomers are now often shooting for something in between right from the get-go. Can you tell where Windows Azure sits? It is already kind of both infrastructure and platform.

Speaking of Windows Azure, this is likely going to be the year when it hits the market. Folks at Microsoft are doing their best to make it easy for the existing software ecosystem to get in with effectively the same or very similar tools they use today. The sheer size of that ecosystem and this evolutionary approach are likely to immediately make Microsoft a serious player in the space.

VMware can definitely get into the top 3 as well if they execute well with their vCloud initiative. They would need to make sure that:

  • Their hosting partners can compete effectively against Amazon, Microsoft, Google, and others.
  • This “pick your partner” approach does not confuse the market, and
  • They don’t end up being behind competition by limiting themselves to basic infrastructure only.

The interesting aspect of that is that VMware really has the potential to force Microsoft to let partners run Azure. Today this is not the case, and the only place Azure exists is Microsoft’s datacenter.

It remains to be seen whether pure Platform as a Service players such as Salesforce.com (with its Force.com) and Google App Engine will be in the leaders group. They will likely start feeling pressure from the infrastructure level, as I mentioned already, but it might be challenging for them to match the ease of migration and the flexibility that IaaS solutions have.

Also, Google seems to be making surprisingly little progress lately. They have posted some information on the upcoming System Status site and the billing/quota dashboard – which means that the beta status is likely to go away soon. However, their development story (Python as the only programming language and a quite limited development environment) and the economy forcing them to concentrate on their core search and ad business are limiting their ability to compete.

Thoughts, comments on any of these?

I will continue with other trends next week.


Microsoft's Gen4 Datacenter

Seems that Microsoft is suddenly surprisingly open about how they design and run their datacenters. Not only do we have a write-up by Michael Manos on their Generation 4 datacenter architecture (and a great concept video), but even more surprisingly, James Hamilton is giving us a spreadsheet of their datacenter expense structure!

Can you find information like that for Google or Amazon? The answer is no, because the way they run their datacenters is part of their competitive edge – especially for Amazon, who compete at the infrastructure level, where low pricing is important and efficiency is paramount.

Why would Microsoft do that? Apparently these blog posts are not just something individuals put out, but a concerted move by the company. My guess is that there’s not much they lose now by giving away this information: after all, they are not really a player in that space today (Live is way behind Google and Windows Azure is at a very early stage), and even if a smaller vendor going after Microsoft were to mimic their approach, achieving the same economy of scale and competing against Microsoft would not be easy.

However, the posts are intended to help establish Microsoft’s credibility in the space (despite Hotmail’s success and a lot of other online efforts, the software giant is not really viewed as a web 2.0+ company). The message is: we are very serious about this market and this transition, and we are innovating and leading the industry to some kind of next-generation approach, leaving others behind.

We’ll see how this all plays out. Meanwhile, do check out the links if you have not done so yet.


Is someone going to step in and provide commercial support for the Eucalyptus open-source project basically following the Red Hat model which made Linux commercially successful and generated good revenue for Red Hat?

Here’s what I am thinking based on what I saw and heard at the Cloud Computing Expo (see my notes here):

  1. We know that Amazon’s Web Services are rapidly becoming the de-facto industry standard for cloud infrastructure.
  2. The biggest complaints people have against them are that they are provided only by Amazon – thus locking you into one vendor – and that they do not support “private” clouds (on-premise deployment).
  3. Both of these could potentially be addressed by the Eucalyptus project which implements EC2 and (in the next release) S3 APIs and can be used on hardware of your choice.
  4. The Eucalyptus team is not planning to use their project commercially, and Rich Wolski is skeptical about competing against Amazon.
  5. The project is open source.

In my mind this means that there is significant unsatisfied demand and a technology which someone could use to satisfy it. So my expectation is that we will see someone do exactly that – if not now, then next year, when Eucalyptus is more feature-complete and stable (which is expected in the spring).

Heck, this could be a very interesting play for Citrix, considering that both Amazon and Citrix are using Xen, and Citrix is now trying hard to move beyond terminal access into the virtualization and cloud space…


I have posted my notes from all the sessions I attended at the last week’s Cloud Computing Expo 2008 on:

(By the way, I have just updated all three posts and added links to the session slides!)

Now it is time to share a few general comments on the conference.

First and foremost, cloud computing is happening. There was a lot of excitement and optimism throughout the event. And frankly, this was quite a contrast to the SOA events, which keep debating whether SOA is getting anywhere, how to justify SOA projects, whether it is a journey or a destination, and so on.

This was a vendor event. I met very few actual IT guys who came to the conference to learn more about their options. The vast majority of attendees were system integrators, plus some hosters and venture capitalists trying to figure out how to make money on the trend.

The whole space is very young. As someone put it: cloud computing is about 700 days old. That means that there are a lot of arguments about definitions, and where things are going, and so on. And that also gives a lot of vibe and a lot of fresh community spirit.

A lot of vendors are trying to redefine what they are doing as cloud computing, or to find a cloud computing angle within their technology. Obviously all hosting vendors are now cloud vendors, VMware is a cloud company, rPath is providing cloud virtual appliances, IBM is setting up clouds for customers, Cisco is providing everyone with the networks they need, and so on. It takes time and effort to figure out what is real and what is hype. Next year the hype will probably keep growing, making this task even harder.

We are mostly at the infrastructure level on the way to platform and management. If you think about what kind of cloud services can be there, the lowest level is infrastructure: you get the ability to run your virtual machines in someone’s datacenter (think Amazon EC2). Then, moving up the stack we have Platform-as-a-Service where instead of direct access to VMs you get the ability to submit your application code and let the platform do the rest (think Google App Engine). And finally, we have Software-as-a-Service – precanned applications which you just use and maybe somewhat customize for yourself (think Salesforce.com).

By far most of the sessions I attended were at the infrastructure level. At most you would hear a pitch about managing that infrastructure more efficiently, or about templates or pre-built solutions you could use.

I expect things to start changing as all these companies start trying to move up the value chain and provide more platform/services to differentiate from competition. In a sense you already see that with Microsoft’s Windows Azure which is somewhere between infrastructure and platform.

Amazon is by far the current leader. There’s no one even close. Everyone integrates with Amazon. All value-add services are provided for Amazon first and then maybe for others. Someone said that Amazon’s Web Services APIs might simply become the new x86 instruction set of cloud computing.

Everyone is talking about not getting locked in. And everyone is pitching that only if you use their APIs or their machine/file formats will you become independent of the hoster or someone else. Basically, avoiding one dependency by accepting another.

Overall, very exciting times, and a great event put together by the folks at Sys-Con!

For details on what was covered at the event and links to the presentations see my previous notes.


Here are my notes from the third day of SYS-CON’s Cloud Computing Expo (see also my notes from day 1 and day 2):

Peter Nickolov – President & CTO of 3tera – gave a pitch on how their technology (AppLogic) lets customers use cloud computing for high-availability solutions.

In a nutshell, Peter had an instance of SugarCRM which, in his demo, could fail over from one datacenter to another. No changes to the code were required; everything was set in the configuration of the application and AppLogic: he copied the application (front-end machines, back-end machines, load-balancers, etc.) to another cloud and set up MySQL replication between them. Then, when one application goes down, the load-balancers detect the unavailability of the primary site and hand the IP address over to the secondary one.

Peter said it took a couple of days to set up the demo. Obviously, SugarCRM was a relatively easy target because all the state information is in a single database, so MySQL replication was sufficient to have the application ready for the hot switch. Nevertheless, this was a pretty impressive demo of how AppLogic’s building blocks can provide the additional layer of management and datacenter independence you might want to have with your hosters.
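The MySQL side of such a setup is standard master-replica replication; here is a minimal sketch of what it typically looks like (the hostnames, credentials, and binlog coordinates are placeholders – the demo’s actual configuration was not shown):

```sql
-- On the primary (my.cnf must contain: server-id=1, log-bin=mysql-bin)
CREATE USER 'repl'@'%' IDENTIFIED BY 'secret';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
SHOW MASTER STATUS;  -- note the binlog file name and position

-- On the secondary (my.cnf: server-id=2), point it at the primary
CHANGE MASTER TO
  MASTER_HOST='primary.datacenter-a.example',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=107;
START SLAVE;
```

With the replica in the second datacenter kept in sync this way, the load-balancer only has to redirect traffic for the fail-over to work.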

Andrew Comas from CORDYS gave a fairly boring general session on their Process Factory product – basically a kind of mash-up editor for corporate use.

VMware had two sessions that day – by Dan Chu (Vice President of Emerging Products and Markets, which at VMware includes everything from overseeing the SMB space to virtual appliances to cloud computing) and by Preeti Somal (Vice President of R&D, Cloud Computing).

Basically, these were a pitch for the upcoming vCloud solution. The basic idea is actually very close to what we get from 3tera and rPath: the cloud is just a set of virtual machines; let’s make them standardized across datacenters and give administrators the ability to manage them as a system – and we get a great, flexible solution without hosting-vendor lock-in.

vCloud is definitely more of a roadmap rather than a solution you can try:

VMware vCloud Roadmap

Today, they have their existing on-premise Virtual Infrastructure, which a lot of us are using in our companies. In addition, they have over 100 hosting partners committed to providing this infrastructure in their datacenters – thus providing the flexibility to choose the hosting vendor.

Next year we will start getting into the second phase – so-called “vCloud Services” – which basically means we will get an OVF-based way of grouping virtual machines into systems, together with associated policies. And we might get a few sample solutions like the “flex capacity” scenario which was demoed during the VMworld keynote in September.

Finally, they will provide full Virtual Center integration so you can manage your VMs in one console regardless of whether they are deployed in your network or by a hoster (they call that Federation), plus more advanced architecture capabilities.

It is yet unknown how much vCloud will move beyond just VMs into additional services such as message queuing for VM interaction, storage, and so on. They are saying that some of the infrastructure will be provided (for example, load balancing) but not everything, because they want to stick to creating the common platform which partners will use for the actual solutions.

My bet is that if they want to compete effectively against Microsoft’s Windows Azure and Amazon’s ever increasing set of Web Services they will have to move up the stack and provide more than the basic VM infrastructure. The question is how fast they can move into these new areas and how much the task of keeping all the datacenter partners happy will slow them down.

Their main bets are on application compatibility – just re-use any VMs you have today – and a broad range of hosting partners. They are also hoping that their vCloud APIs (RESTful web services) will enable a broad ISV ecosystem.

[Download VMware slides]

Erik Carlin from Mosso – Rackspace‘s cloud computing division – talked about cloud standardization. This included:

  1. Common taxonomy: Software-as-a-Service (e.g. Salesforce), Platform-as-a-Service (e.g. Google App Engine) and Infrastructure-as-a-Service (e.g. Amazon EC2).
  2. APIs for storage, compute, network, and data. Right now, even when APIs are common across datacenters (e.g. with vCloud and 3tera), you still get locked into the application vendor providing them. Something like Red Hat’s libvirt abstract hypervisor API could help. Ruben from Enomaly is pushing that through the Cloud Interoperability Forum.
    Other issues include identity (Erik thinks OpenID has the biggest potential here, while WS-* will probably be used by Microsoft only) and dependencies on particular cloud services.
  3. Pricing complexity. How do you actually calculate the compute power provided, and what is the standard processing-calculation unit? Work on virtualization benchmarks by VMware and Intel can help.
  4. Compliance issues: depending on the industry and application, you might have to adhere to HIPAA, SAS 70, PCI, or Safe Harbor (Rackspace is certified for the latter, the European data-storage framework).

Overall, common standards should provide for interoperability, lock-in avoidance, fail-over scenarios, better tools for all, cloud bursting and multi-cloud applications – which will enable positive network effects and increase the overall market for everyone.

[Download Erik’s slides]

Next we had Rich Wolski presenting his Eucalyptus project – an open-source clone of Amazon’s EC2 and S3.

Rich is absolutely amazing and his sessions are definitely a must-attend. He talked a lot about the architecture of their solution and how people are using it to try/test their EC2 solutions before deploying them with Amazon:

  • They currently have about 80 downloads a day. Downloading requires no registration, so they do not know exactly how the software is being used.
  • The biggest installation Rich knows includes 260 nodes.
  • He does not believe that Eucalyptus can be used to compete against Amazon – you still need people, datacenter infrastructure, the know-how to do machine rollover, and so on.
  • They currently have 5 engineers on the project and ship monthly releases. At the moment they do not accept external contributions but might start doing so in the spring, once they stabilize.

[Download Rich’s slides]

Finally, we had Gerrit Huizenga – Solutions Architect at IBM and part of their cloud taskforce – share his views on cloud computing. I was surprised that he downplayed the role and novelty of cloud computing as much as he could, but I guess that is part of being from an established corporation with a huge software and consulting business.

[Download Gerrit’s slides]

That is it for my day-to-day notes. I will also publish my summary notes once I recover from all the recent traveling and catch up on my email.




Legal

The posts on this blog are provided “as is” with no warranties and confer no rights. The opinions expressed on this site are mine and mine alone, and do not necessarily represent those of my employer Jelastic or anyone else for that matter. All trademarks acknowledged.

© 2008-2012 Dmitry Sotnikov
