Qualcomm is the walking definition of a high-tech company. It makes the brains and guts of cell phones and embedded computers: everything from the humble telephone modem to virtual and "augmented" reality systems. It's also a large enterprise, a global concern with more than 16,000 employees and $10 billion in revenue.
Like most enterprises now, Qualcomm faces the perennial challenge of keeping its IT systems up to snuff in a world where the flow of information and competitive pressure will only increase. It's turning to cloud computing to meet that challenge.
"I looked at cloud two to three years ago from a huge, global perspective and said, 'We need to get this going, I want a worldwide cloud,'" said Matt Clark, senior director of IT for Qualcomm, "then I'm like, 'Hold on, I'm trying to boil the ocean here.'"
Clark said that a couple of factors drove the need for cloud. One was a need to move beyond virtualization, which he said the company had milked for all it was worth; the other was the fact that different business units have rapidly diverging IT needs.
Clark said that on the hardware side, Qualcomm's roots as a tech company meant that many engineers were actively involved in IT; after all, that's what the company helps to build. The sales and services wings, however, were moving in leaps and bounds away from the nitty-gritty of network design and chip design.
"Other business units have emerging technologies for social media/communications-type products, and they need to scale up and down very quickly," he said.
Too much virtualization?
Clark said that Qualcomm had stayed on top of computing technology to aid its chip business. It's run a massive grid computing operation for 12 years to process complex simulation and design tasks, and it's been involved in virtualization since 2002. However, Clark considers the firm at a competition-deadening saturation point on virtualization.
"We were getting killed," he said. "We're 90% virtualized on the Windows front, 60% on Linux, and from a Solaris standpoint we don't really track virtualization, since everything's in containers anyway."
The problem was a bit of a paradox, Clark said. Virtualization became too easy once it was widespread: a project team could have a server in 15 minutes instead of weeks, and teams were overjoyed. The trouble was that they weren't giving the servers back. IT provisioning ran amok, and even though virtualization used infrastructure efficiently, that didn't mean resources were being used well.
"That was our huge challenge in virtualization," he said. "We had some bumps in the beginning, once people said, 'This is free, and I can keep it? Well, give me seven of them, give me 10.'"
Clark said that virtualization meant they could scale up very well, but scaling down was practically impossible because the company didn't have a chargeback model that would create an incentive by drawing money out of a business unit's budget for using resources. "When something's quote-unquote free because it's part of the labor-based allocation, they don't want to get rid of it," he said.
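Clark doesn't describe a specific billing system, but the incentive gap he identifies is easy to see in a back-of-the-envelope sketch. The following is a minimal, hypothetical chargeback calculation; the rate-card figures, field names, and usage numbers are illustrative, not Qualcomm's:

```python
from dataclasses import dataclass

# Hypothetical rate card: what metered usage costs a business unit
# under a chargeback model (illustrative numbers only).
RATES = {"vcpu_hours": 0.05, "ram_gb_hours": 0.01, "storage_gb_months": 0.10}

@dataclass
class Usage:
    vcpu_hours: float
    ram_gb_hours: float
    storage_gb_months: float

def chargeback(usage: Usage) -> float:
    """Convert a business unit's metered usage into a monthly charge."""
    return round(
        usage.vcpu_hours * RATES["vcpu_hours"]
        + usage.ram_gb_hours * RATES["ram_gb_hours"]
        + usage.storage_gb_months * RATES["storage_gb_months"],
        2,
    )

# A unit hoarding ten idle VMs now sees a bill instead of "free" capacity.
idle_unit = Usage(vcpu_hours=7200, ram_gb_hours=28800, storage_gb_months=500)
print(chargeback(idle_unit))  # prints 698.0
```

Under a labor-based allocation, that charge is zero no matter how many servers a unit holds, which is exactly why "quote-unquote free" capacity never gets returned.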
One of the ways Clark started with cloud addressed this issue by giving out time-limited sandboxes for engineers and developers. A pool of virtual resources and a stack of base images are available for anyone to fire up and use for 90 days. If they want to keep on using those servers after that, they have to bring in IT and make the business case. Otherwise, the environment goes away.
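The sandbox policy amounts to a simple lease with a hard expiry. A sketch of that reclamation rule, under the assumption that an IT-approved business case is recorded as an extension flag (the function and field names here are hypothetical):

```python
from datetime import date, timedelta

# Matches the 90-day window described above.
SANDBOX_TTL = timedelta(days=90)

def sandbox_expired(created: date, today: date, extended: bool = False) -> bool:
    """A sandbox is reclaimed after 90 days unless IT has approved
    a business case to keep it (the `extended` flag)."""
    if extended:
        return False
    return today - created >= SANDBOX_TTL
```

A nightly sweep over the pool would tear down any environment for which `sandbox_expired` returns `True`, which is what makes the "otherwise, the environment goes away" part automatic rather than a matter of chasing people down.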
"My true definition of cloud is efficient use of resources across all platforms and being able to scale up and down as quickly as possible," Clark said. "It's very, very challenging with the tech that's out there today to really say that 'cloud is here,' because it really isn't."
Scratching the cloud itch
Clark said the itch to manage IT efficiently has been there for a long time. A few years ago, Qualcomm had something it called the QGrid, a pool of infrastructure dedicated to business applications that Clark said was designed to free the bean counters from the button pushers. "That was not successful, because we were so highly virtualized already that there was no money-saving component," he said.
Services like Amazon Web Services (AWS) have provided the model, but no firm holding large amounts of intellectual property was going to risk public cloud services for critical infrastructure for quite some time, never mind the severe practical limitations on getting data in and out of the cloud.
Instead, Clark wants a "federated cloud" of IT resources divvied up and shared in a fully governed, fully automated way across each business unit. He said as long as IT can stay in the catbird seat, it's all good.
"I'm really looking to build a federated cloud," he said, "where some business units have their own cloud, some of them may be able to scale out to Amazon or AT&T and some are just internal."
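Clark's federated model boils down to a per-business-unit placement policy: some units stay strictly internal, others may burst to an approved external provider. A minimal sketch of that routing decision, where the unit names, policy table, and preference for internal capacity are all illustrative assumptions:

```python
# Hypothetical policy table for a "federated cloud": each business unit
# is either kept internal or allowed to burst to one approved provider.
POLICIES = {
    "engineering": ["internal"],          # IP-sensitive work never leaves
    "sales": ["internal", "aws"],         # may scale out to Amazon
    "services": ["internal", "att"],      # may scale out to AT&T
}

def place(unit: str, internal_has_capacity: bool) -> str:
    """Pick where a unit's workload runs, preferring internal capacity
    and falling back to the unit's approved external provider, if any."""
    targets = POLICIES[unit]
    if internal_has_capacity or len(targets) == 1:
        return "internal"
    return targets[1]

print(place("sales", internal_has_capacity=False))  # prints aws
```

Keeping the policy table in IT's hands is what lets IT "stay in the catbird seat": business units get elasticity, but governance of where workloads land stays centralized.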
Vanessa Alvarez, infrastructure analyst for Forrester Research, said Qualcomm's cloud story is a natural outgrowth of a maturing IT marketplace. IT departments were beginning to understand that if they couldn't adapt to the service delivery model, they would find themselves at a competitive disadvantage.
"Just being able to understand capacity from a business perspective -- how much you have and how much you're using -- is a huge next step," she said, and one that virtualization didn't address. Cloud computing, or at least the promise of cloud, looks like the next big step because it does address this. At its heart, cloud is a change in how we consume our IT, not what we're consuming.
Carl Brooks is the Technology Writer for SearchCloudComputing.com. Contact him at firstname.lastname@example.org.