The main data on Facebook is stored in a relational database.

To store the social network (more than 1B users) and Facebook Messenger data, Facebook uses a fork of MySQL 5.6. The social graph is stored in InnoDB (a B+ tree index: fast reads, slower writes) and the Messenger data in RocksDB (an LSM tree index: fast writes, slower reads). MySQL runs entirely on flash. Initially, all Facebook Messenger data was stored in HBase, but it was moved to MySQL to save storage space, reduce read latency, and improve supportability.
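To make the read/write trade-off concrete, here is a minimal toy sketch (not Facebook's code) of the LSM idea: writes only touch an in-memory buffer that is periodically flushed as sorted, immutable runs, so writes are cheap but a read may have to consult several runs. A B+ tree pays that reorganization cost at write time instead, which is why InnoDB favors reads. The class and its parameters are hypothetical.

```python
import bisect

class TinyLSM:
    """Toy LSM-style store: cheap writes, reads may scan several sorted runs."""

    def __init__(self, memtable_limit=4):
        self.memtable = {}          # in-memory buffer that absorbs all writes
        self.sorted_runs = []       # immutable flushed runs, newest first
        self.memtable_limit = memtable_limit

    def put(self, key, value):
        # Writes only touch the memtable -- no in-place page updates.
        self.memtable[key] = value
        if len(self.memtable) >= self.memtable_limit:
            self._flush()

    def _flush(self):
        # Flush the memtable as one sorted, immutable run (an "SSTable").
        run = sorted(self.memtable.items())
        self.sorted_runs.insert(0, run)
        self.memtable = {}

    def get(self, key):
        # Reads check the memtable first, then every run from newest to oldest.
        if key in self.memtable:
            return self.memtable[key]
        for run in self.sorted_runs:
            keys = [k for k, _ in run]
            i = bisect.bisect_left(keys, key)
            if i < len(keys) and keys[i] == key:
                return run[i][1]
        return None

store = TinyLSM()
for i in range(10):
    store.put(f"msg:{i}", f"hello {i}")   # fast appends
print(store.get("msg:3"))                 # may search multiple runs
```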

This is an example of the items that Facebook stores in MySQL (possibly outdated):

Facebook objects that are stored in MySQL

More information is available here. The following performance figures (as of 2014) are impressive:

Reads take 4 milliseconds, writes take 5 milliseconds.

Peak rows read per second: 450 million

Peak network bytes per second: 38 GB

Peak queries per second: 13 million

Peak rows updated per second: 3.5 million

Peak InnoDB disk operations per second: 5.2 million

Facebook’s data is stored on over 1,000 MySQL servers in a so-called universal (multi-tenant) database. The database is extensively sharded and replicated. Related data is stored together on a shard to minimize the number of shards that have to be visited to locate it.
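A minimal sketch of the colocation idea: if shard selection is keyed by the owning user rather than by each object's own id, all of a user's related rows land on the same shard. The shard count and key scheme below are hypothetical, purely for illustration.

```python
import hashlib

NUM_SHARDS = 16  # hypothetical; a real deployment has far more shards

def shard_for(user_id: int) -> int:
    """Pick a shard from the owning user's id, not from the object's own id."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# Because the profile, posts, and friend list of user 42 all hash on the same
# user id, a profile-page query can be answered by visiting a single shard.
objects = [("profile", 42), ("post", 42), ("friend_list", 42)]
print({kind: shard_for(owner) for kind, owner in objects})
```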

They also have a large number of binlog consumers. The binlog is MySQL's replication mechanism and is agnostic to the storage engine.

The MySQL database is fronted by an extensive cache, which serves over 99 percent of queries.

Backend of Facebook

HPHP serves the frontend application, and the cluster and region caches are Memcached instances that prevent race conditions by using leases rather than locks.
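A simplified, single-process sketch of the lease idea (not Facebook's implementation): on a miss the cache hands out a lease token, and only the holder of that token may fill the value, which prevents stale writes and keeps concurrent readers from stampeding the database.

```python
import itertools

class LeasingCache:
    """Toy model of lease-based cache fill (single process, no real memcached)."""

    def __init__(self):
        self.data = {}
        self.leases = {}                       # key -> outstanding lease token
        self._tokens = itertools.count(1)

    def get(self, key):
        if key in self.data:
            return self.data[key], None        # hit: value, no lease needed
        # Miss: hand out a lease only if nobody else already holds one.
        if key not in self.leases:
            self.leases[key] = next(self._tokens)
            return None, self.leases[key]
        return None, None                      # someone else is filling; retry later

    def set_with_lease(self, key, value, token):
        # Only the current lease holder may fill the key; a stale token is dropped.
        if self.leases.get(key) == token:
            self.data[key] = value
            del self.leases[key]
            return True
        return False

cache = LeasingCache()
value, lease = cache.get("user:1")             # miss -> caller receives a lease
if value is None and lease is not None:
    value = {"name": "Alice"}                  # pretend this came from MySQL
    cache.set_with_lease("user:1", value, lease)
print(cache.get("user:1"))
```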

The cluster cache stores extremely hot data (10 to 100 million requests per second) and is cached in each cluster. The region cache holds less frequently accessed data, and only a single copy of it is kept.

TAO is a graph service that runs on top of Memcached and is aware of the Facebook graph, which lets it improve the cache's performance. It also provides read-after-write consistency: whenever you change something, the effect of that change is always visible to you afterwards (although other users may not see the most recent update immediately).
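A toy illustration of read-after-write consistency (not TAO's design): the writer updates the authoritative store and its own cache entry in the same step, so its subsequent reads are guaranteed to see the change, while a cache in another region may stay stale until it is refreshed.

```python
class WriteThroughCache:
    """Toy write-through cache used to illustrate read-after-write consistency."""

    def __init__(self, database):
        self.database = database
        self.cache = {}

    def write(self, key, value):
        # Update the database and the local cache together, so the writer's
        # next read is guaranteed to observe this write.
        self.database[key] = value
        self.cache[key] = value

    def read(self, key):
        if key in self.cache:
            return self.cache[key]
        value = self.database.get(key)
        self.cache[key] = value
        return value

db = {}
writer_view = WriteThroughCache(db)
other_view = WriteThroughCache(db)      # e.g. a cache in another region

other_view.read("bio:alice")            # other region caches the old value (None)
writer_view.write("bio:alice", "Hi!")   # writer updates db and its own cache
print(writer_view.read("bio:alice"))    # "Hi!"  -- read-after-write holds
print(other_view.read("bio:alice"))     # None   -- stale until refreshed
```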

Wormhole is a publish/subscribe system. MySQL publishes binlog updates to it so that other applications can consume the data.
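A minimal in-process sketch of the publish/subscribe pattern, with a stand-in publisher playing the role of the MySQL binlog; this is not Wormhole's API, just the general shape of the idea.

```python
from collections import defaultdict

class PubSub:
    """Minimal in-process publish/subscribe bus (a stand-in, not Wormhole)."""

    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        for callback in self.subscribers[topic]:
            callback(event)

bus = PubSub()
# Downstream systems register interest in binlog events.
bus.subscribe("binlog", lambda e: print("cache invalidation:", e))
bus.subscribe("binlog", lambda e: print("search indexer:", e))

# The database (publisher) emits each committed change once; every
# subscriber receives its own copy and processes it independently.
bus.publish("binlog", {"table": "users", "op": "UPDATE", "id": 42})
```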

Facebook also uses a number of other databases.

Hadoop and Hive are used for aggregation and reporting. They store less important data such as user clicks.

LogDevice is a RocksDB-based distributed storage system for logs.

Facebook used to store messages in HydraBase, a database built on top of HBase (now deprecated).

Beringei is an in-memory time series storage engine. It can store up to 10 billion distinct time series and serve up to 18 million queries per minute. It is used to track and store system measurements such as product statistics (e.g., how many messages are sent per minute), service statistics (e.g., the rate of queries hitting the cache tier versus the MySQL tier), and system statistics (e.g., CPU, memory, and network usage), so that engineers can see the real-time load on the infrastructure and make resource-allocation decisions.
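A toy sketch of what an in-memory time series store looks like conceptually: one append-only series per metric and range queries over timestamps. Beringei's real format adds heavy compression and sharding; none of that is modeled here, and the metric names are made up.

```python
import bisect
import time
from collections import defaultdict

class InMemoryTSDB:
    """Toy in-memory time series store: one append-only series per metric."""

    def __init__(self):
        self.series = defaultdict(list)        # metric -> [(timestamp, value), ...]

    def write(self, metric, value, ts=None):
        # Points arrive roughly in time order, so appending keeps each series sorted.
        self.series[metric].append((ts if ts is not None else time.time(), value))

    def query(self, metric, start, end):
        points = self.series[metric]
        lo = bisect.bisect_left(points, (start,))
        hi = bisect.bisect_right(points, (end, float("inf")))
        return points[lo:hi]

tsdb = InMemoryTSDB()
for i in range(60):
    tsdb.write("messages_sent_per_sec", 1000 + i, ts=i)
# "What was the send rate between t=10 and t=20?"
print(tsdb.query("messages_sent_per_sec", 10, 20))
```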

Presto is an open source, high-performance distributed query engine that Facebook uses to run SQL analytics over big data. Facebook uses Presto to speed up a large batch pipeline workload in its Hive warehouse, and Presto also supports custom analytics workloads.
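As an illustration only, here is the kind of aggregation query one might push to Presto from Python. The connection details, the user_clicks table, and the use of the presto-python-client package (prestodb) are all assumptions made for this example, not details from the source.

```python
import prestodb  # pip install presto-python-client (assumed client; see note above)

# Hypothetical coordinator, credentials, catalog, and table.
conn = prestodb.dbapi.connect(
    host="presto.example.com",
    port=8080,
    user="analyst",
    catalog="hive",
    schema="default",
)
cur = conn.cursor()
# A typical aggregation over a large Hive table, executed by Presto workers.
cur.execute("""
    SELECT ds, count(*) AS clicks
    FROM user_clicks
    WHERE ds >= '2024-01-01'
    GROUP BY ds
    ORDER BY ds
""")
for day, clicks in cur.fetchall():
    print(day, clicks)
```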

Twitter

Twitter uses its own fork of MySQL 5.6 to store primary data such as interest graphs, timelines, user data, and tweets. Twitter keeps data in hundreds of schemas, and in its biggest cluster thousands of servers process millions of requests every second. FlockDB is the name of the graph service.

MySQL processes 50 million queries per second and reads and writes petabytes of data.

FlockDB, the graph database service, stores the social graph (who follows whom).

The database cluster is heavily sharded using Twitter’s own sharding technology. They also use standard MySQL replication.

The majority of Twitter’s data is stored in Manhattan and Hadoop.

Manhattan is a multi-tenant, low-latency key-value storage system (it replaced Cassandra, which was previously used for the same purpose). It is used to store all tweets, messages, and Twitter accounts. It has three storage engines:

SeaDB, a read-only file format for serving data that was batch-processed in Hadoop.

SSTable, a log-structured merge-tree-based format for heavy-write workloads.

BTree, a B-tree-based format for heavy-read, light-write workloads.

Manhattan is replicated, much like Cassandra, and provides eventual consistency as the default mode of operation. A per-shard distributed log (built on top of Apache BookKeeper) and a consensus mechanism are used to map keys to shards, and each shard is replicated. Like Cassandra, Manhattan uses two-level keys: the partitioning key identifies the shard, and the local key identifies values on that shard.
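A toy sketch of the two-level key idea (not Manhattan's API): the partitioning key only routes to a shard, while the local key identifies a value within that shard, so all values sharing a partitioning key stay together. The shard count and key names are hypothetical.

```python
import hashlib

NUM_SHARDS = 8  # hypothetical

class TwoLevelKV:
    """Toy two-level key-value store: (partition key, local key) -> value."""

    def __init__(self, num_shards=NUM_SHARDS):
        self.shards = [dict() for _ in range(num_shards)]

    def _shard(self, partition_key):
        digest = hashlib.md5(partition_key.encode()).hexdigest()
        return int(digest, 16) % len(self.shards)

    def put(self, partition_key, local_key, value):
        # The partitioning key only picks the shard; the local key
        # identifies the value within that shard.
        self.shards[self._shard(partition_key)][(partition_key, local_key)] = value

    def get(self, partition_key, local_key):
        return self.shards[self._shard(partition_key)].get((partition_key, local_key))

kv = TwoLevelKV()
# All tweets for user 42 land on one shard; each tweet id is a local key.
kv.put("user:42", "tweet:1001", "first tweet")
kv.put("user:42", "tweet:1002", "second tweet")
print(kv.get("user:42", "tweet:1002"))
```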

Redis is used to store timelines in memory, while Twemcache (a Twitter fork of Memcached) is used to cache data such as users, tweets, and so on.
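A minimal sketch of keeping a timeline in Redis as a capped list using the redis-py client; the key format, list length, and a Redis server on localhost are assumptions, and this is not Twitter's actual schema.

```python
import redis  # pip install redis; assumes a Redis server on localhost

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
TIMELINE_LEN = 800  # hypothetical cap on cached timeline entries

def push_to_timeline(user_id: int, tweet_id: int) -> None:
    key = f"timeline:{user_id}"
    # Newest tweet ids go to the head of the list; trimming bounds memory use.
    r.lpush(key, tweet_id)
    r.ltrim(key, 0, TIMELINE_LEN - 1)

def read_timeline(user_id: int, count: int = 20):
    return r.lrange(f"timeline:{user_id}", 0, count - 1)

push_to_timeline(1, 9001)
push_to_timeline(1, 9002)
print(read_timeline(1))   # ['9002', '9001'] -- newest first
```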

Blobstore is Twitter’s photo and video storage service.

Vertica is a column store that is used as a backend for Tableau.

Where Twitter requires strong consistency, such as for handling ad campaigns, ad exchanges, and internal tools, relational databases such as MySQL and PostgreSQL are used.

Backend of Twitter

Data is distributed among these data stores as follows:

Click on this link for additional information.

Google

Google uses different databases for different products.

Bigtable is used by Google Search for web crawling and for storing unstructured data such as the web logs behind Google Analytics. Google also used Megastore and MySQL for a number of services, but they were deprecated in favor of Spanner.

Spanner is a NewSQL database (which means it is almost relational). It was originally created to store data for AdWords. It became extremely popular because it supports ACID transactions and is very simple to build web applications on top of. Spanner is now available as part of the Google Cloud Platform. Spanner has evolved from a versioned key-value store akin to Bigtable into a temporal multi-version database. Data is stored in schematized semi-relational tables; data is versioned, and each version is timestamped with its commit time; old versions of data are subject to configurable garbage-collection policies; and applications can read data at old timestamps. Spanner offers a SQL-based query language and supports general-purpose transactions.
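A toy model of the "temporal multi-version" idea described above: every write keeps a version stamped with its commit time, and a read at an old timestamp sees the newest version committed at or before that time. This is only an illustration of versioned reads; it models none of Spanner's TrueTime clocks, distributed transactions, or SQL layer, and the key names are made up.

```python
import bisect
from collections import defaultdict

class MultiVersionStore:
    """Toy multi-version store: each write keeps a (commit_ts, value) version."""

    def __init__(self):
        self.versions = defaultdict(list)   # key -> [(commit_ts, value), ...] in ts order
        self.clock = 0                      # stand-in for a commit-timestamp source

    def write(self, key, value):
        self.clock += 1
        self.versions[key].append((self.clock, value))
        return self.clock

    def read(self, key, at_ts=None):
        # A read at an old timestamp sees the newest version committed at or
        # before that timestamp -- a consistent snapshot of the past.
        history = self.versions[key]
        if not history:
            return None
        if at_ts is None:
            return history[-1][1]
        timestamps = [ts for ts, _ in history]
        i = bisect.bisect_right(timestamps, at_ts)
        return history[i - 1][1] if i > 0 else None

store = MultiVersionStore()
t1 = store.write("ad_budget:campaign7", 100)
store.write("ad_budget:campaign7", 250)
print(store.read("ad_budget:campaign7"))            # 250 (latest version)
print(store.read("ad_budget:campaign7", at_ts=t1))  # 100 (as of timestamp t1)
```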

Finally

To store various types of data, several Google apps utilize both Spanner and Bigtable.