Several updates and policy changes to Microsoft's Azure database lineup aim to entice more customers to migrate workloads to the cloud and keep pace with rivals, such as AWS.
Among the significant database updates from Microsoft's Connect conference last week is a price cut for Cosmos DB, Microsoft's globally distributed database that competes with Google Cloud Spanner and Amazon's DynamoDB.
Microsoft bills Cosmos DB customers by the amount of storage they use and through a measure called request units per second (RU/s). Initially, Microsoft set the minimum for Cosmos DB instances at 10,000 RU/s, with scale-up increments of 1,000 RU/s. Now, throughput can begin at just 400 RU/s and scale in 100 RU/s increments.
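The effect of the new floor is easiest to see as arithmetic: a requested throughput is rounded up to the nearest billable increment, subject to the tier's minimum. The helper below is an illustration of the pricing figures in this article, not an Azure API; the function name and rounding behavior are assumptions for the sketch.

```python
import math

def provisioned_rus(requested: int, minimum: int, increment: int) -> int:
    """Round a requested throughput up to the nearest billable increment,
    subject to the tier's minimum. Illustrative helper, not an Azure API."""
    return max(minimum, math.ceil(requested / increment) * increment)

# Under the old floor (10,000 RU/s minimum, 1,000 RU/s steps), a small
# workload needing 600 RU/s still paid for 10,000 RU/s.
old = provisioned_rus(600, minimum=10_000, increment=1_000)

# Under the new floor (400 RU/s minimum, 100 RU/s steps), the same
# workload provisions exactly 600 RU/s.
new = provisioned_rus(600, minimum=400, increment=100)

print(old, new)  # 10000 600
```

For a small application, that is the difference between paying for roughly 17 times the throughput it needs and paying for exactly what it requests.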
Microsoft also unveiled version 3.0 of Cosmos DB's software development kit, with what the vendor calls a more intuitive object model and support for streams, as well as a Cosmos DB feature for Cross-Origin Resource Sharing. With this feature, web applications can communicate directly with Cosmos DB instances from the browser, which provides a snappier end-user experience than routing requests through a middle tier that brokers messages.
Microsoft has put a lot of chips down on Cosmos DB to make it more attractive and broadly applicable, said Doug Henschen, vice president and principal analyst at Constellation Research in Cupertino, Calif. Cosmos DB's roots are in DocumentDB, a NoSQL data store, but Microsoft has added APIs for SQL, the Gremlin graph database, MongoDB and Cassandra.
This price cut also addresses a competitive weakness for Microsoft.
"The idea of multi-region is associated with big companies and deployments," Henschen said.
It's likely that Microsoft set the entry-level pricing too high, thinking that was Cosmos DB's sweet spot, but then realized that many other companies and even startups build applications that could grow to that scale, Henschen said.
Azure warehouse options grow, but work remains
Meanwhile, Microsoft lowered the bar for Azure SQL Data Warehouse Gen2 customers, who now can begin with 100 compute data warehouse units (cDWU) that bundle CPU, memory and I/O versus the previous 500 cDWU minimum. The new tier will be available this month in 15 Azure regions, with the rest added next year, Microsoft said.
The smaller-footprint data warehouse option is welcome for customers who want to experiment before a larger-scale move to Azure, but Microsoft lags behind some peers in a key area, Henschen said. The likes of Snowflake and Teradata let customers scale CPU, memory and I/O separately rather than in lockstep, which allows more sophisticated performance tuning.
Microsoft has also pushed MariaDB into general availability on Azure. Originally conceived as an alternative to mainline MySQL after Oracle's 2009 purchase of Sun Microsystems, MariaDB has evolved beyond a primary focus on MySQL workload compatibility to stand on its own right, Henschen said.
Microsoft now promises 99.99% availability for MariaDB on Azure, has begun public preview for virtual network support for MariaDB and added "data-in" replication to push data from an on-premises MariaDB instance to one on Azure.
The general availability of MariaDB strengthens Microsoft's open source credibility, and the addition of support for MySQL and Postgres on Azure achieves parity with most of its main competitors. Oracle doesn't offer Postgres on its own cloud, because it competes with Oracle's flagship database, Henschen said.
Also generally available is the Business Critical service tier in Azure SQL Database Managed Instance, aimed at customers who want to migrate on-premises SQL Server workloads to Azure. The tier boosts performance and availability with always-on database replicas and flash storage.
"They're removing more barriers to SQL Server customers that have performance concerns over going to the cloud," Henschen said.
Microsoft's database strategy: Beyond bells and whistles
Microsoft's Azure database updates follow a spate of database-related news from the recent AWS re:Invent conference, including options for time series and blockchain applications. While Microsoft Connect didn't feature anything as dramatic, the vendor has a deliberate strategy for databases on Azure, said Curt Monash, principal analyst at Monash Research in Acton, Mass.
Microsoft introduced its database management system in the 1990s to compete against enterprise incumbents on price and ease of administration, drawing from its strengths as a vendor of end-user productivity software, Monash said. That appears to be a main focus for Azure's cloud databases, as well, given the company's signals to on-premises database customers that the road to Azure is well-paved and that the final destination will look familiar to them.
"Advantages in price or ease of administration can be more important than those in programmability," Monash said. "If you're looking for good portability later, then you'll try not to use any of the leading-edge features anyway and instead just use the ones found in many products. This is especially true in the case of languages and APIs; using differentiated features in those causes lock-in."