
Building Azure SQL: the biggest cloud database!

Microsoft introduced SQL Azure (initially called SQL Data Services) in 2008, when database size was limited to either 1GB or 5GB, depending on your service tier. That was an extremely low limit, even for the average data volumes of the time. As the service evolved into Azure SQL Database, support grew to 4TB in a single database.



For most people, 4TB is a lot of data, but many workloads require much more than that. To meet the needs of those larger data sets, Microsoft introduced Azure SQL Database Hyperscale, previously code-named "Socrates," which became generally available in May 2019 at the Microsoft Build conference. The current Azure SQL Database limit of 4TB is based on the maximum size of the local solid-state drive (SSD) in a given physical server. Azure SQL Database Hyperscale is designed to support databases of 100TB and beyond.
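To give a concrete sense of how the tier is selected, a Hyperscale database is created from the logical master database with the same CREATE DATABASE syntax as the other Azure SQL Database tiers. The sketch below is illustrative only; the database name SalesDb and the HS_Gen5_2 service objective are placeholder choices.

-- Run against the logical master database of an Azure SQL Database server.
-- 'SalesDb' and 'HS_Gen5_2' are placeholder names for this sketch.
CREATE DATABASE SalesDb
    ( EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_2' );

-- From inside the new database, confirm the tier.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Edition') AS Edition;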


Working with relational databases is not easy, and designing a scale-out relational database solution is an even more interesting proposition. One challenge Microsoft faced in building this model is the lack of specialized high-performance storage in its cloud platform. Azure Storage is all network-attached and uses an object-based design, which means it has slightly higher latencies than specialized storage area networks (SANs). That, combined with the limits on the overall size and hardware available in cloud-based virtual machines, meant a novel approach had to be taken.


In the existing Azure SQL Database architecture, Microsoft has leveraged technology similar to Always On availability groups to provide both high availability and durability. This required four copies of the data, kept in sync by shipping log records to all replicas. That approach has a couple of key limitations: the size of a database cannot grow beyond the storage capacity of a single machine, and the performance costs of large storage operations -- such as backup and restore or seeding a new availability replica -- grow linearly with the size of the database. These limitations are why the current 4TB cap exists. To overcome these obstacles, Microsoft made a few architectural decisions and feature enhancements. The first is snapshot-based backups: SQL Server 2012 introduced the ability to store data files in Azure Storage as HTTP endpoints, and later advances in Azure Storage, combined with code enhancements, allow for continuous backup with very fast restore times, even at hundreds of terabytes.
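As a rough on-premises analogue of that approach, SQL Server can take file-snapshot backups when a database's files already live in Azure Blob Storage. The sketch below is illustrative only, with a placeholder storage account and container; in Hyperscale itself the service takes snapshot backups automatically, so the DBA issues no BACKUP command at all.

-- Illustrative file-snapshot backup for a SQL Server database whose files
-- are stored in Azure Blob Storage (the storage account, container and
-- database name are placeholders; a matching credential must already exist).
BACKUP DATABASE SalesDb
TO URL = 'https://mystorageacct.blob.core.windows.net/backups/SalesDb.bak'
WITH FILE_SNAPSHOT;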


Buffer pool extension (BPE) was another feature used here; it debuted to minimal fanfare on-premises in SQL Server 2014. In Azure SQL Database Hyperscale, Microsoft further improved this feature with the use of Hekaton, its in-memory, latch-free database structure, to make the data written there recoverable in the event of failure. BPE, now called RBPEX for "resilient buffer pool extension," ultimately allows data to be scaled out via page servers, which are a key element of this scale-out architecture. Each page server stores a subset of the data in your database, somewhere between 128GB and 1TB. Page servers are kept up to date by transaction log records being played back to them.
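For context, the on-premises version of this feature is enabled with a single server-level statement, along the lines of the sketch below; the file path and size are placeholders, and in Hyperscale the RBPEX caches are configured entirely by the service.

-- On-premises buffer pool extension (SQL Server 2014 and later).
-- The file path and size are placeholders for this sketch; Hyperscale
-- manages its RBPEX caches automatically, so nothing equivalent is needed there.
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    ( FILENAME = 'S:\SSDCACHE\BufferPoolExtension.BPE', SIZE = 64 GB );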


Hyperscale architecture

Another enhancement here is the Accelerated Database Recovery feature in SQL Server 2019, which makes the undo phase of recovery much faster. Each page server uses an RBPEX cache on local SSD, and durability of the data pages is provided by the underlying Azure Storage infrastructure. The database engine lives on what are called compute nodes, which have SSD-based caches designed to reduce the number of trips to the page servers to retrieve data. Just as in an on-premises SQL Server implementation, only one compute node can process reads and writes; however, there are secondary nodes that provide failover support and scale-out of read workloads.
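In Hyperscale, Accelerated Database Recovery is enabled by the service itself; on a SQL Server 2019 instance it is turned on per database, as in the sketch below (the database name is a placeholder).

-- Enabling Accelerated Database Recovery on SQL Server 2019.
-- 'SalesDb' is a placeholder; in Azure SQL Database Hyperscale the feature
-- is enabled by the service and requires no configuration.
ALTER DATABASE SalesDb
SET ACCELERATED_DATABASE_RECOVERY = ON;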


The transaction log is also broken out into its own service. As writes come into the primary compute node, the log service persists them to a cache on Azure Premium Storage SSDs, and that cache in turn forwards the relevant log records to both the compute nodes and the page servers to keep them up to date. Finally, the log records are stored in Azure Storage for long-term retention, eliminating the need for the DBA to truncate the log.
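The log service itself is not something the DBA manages directly, but log write pressure can still be observed from inside the database. A minimal sketch using the sys.dm_db_resource_stats view follows; it simply reports recent log write throughput as a percentage of the tier's limit.

-- Recent resource usage for the current Azure SQL database, including
-- log writes as a percentage of the service tier's limit.
SELECT TOP (10)
       end_time,
       avg_cpu_percent,
       avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;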


Azure SQL Database Hyperscale offers an interesting approach to a scale-out data service. You will note that there is no on-premises offering of this feature; the public cloud offers the nearly infinite scale and consistent deployment platform required for the service to work as expected.

