<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:georss="http://www.georss.org/georss" xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#" > <channel> <title>Engineering at Meta</title> <atom:link href="https://engineering.fb.com/feed/" rel="self" type="application/rss+xml" /> <link>https://engineering.fb.com/</link> <description>Engineering at Meta Blog</description> <lastBuildDate>Tue, 19 Nov 2024 17:12:16 +0000</lastBuildDate> <language>en-US</language> <sy:updatePeriod> hourly </sy:updatePeriod> <sy:updateFrequency> 1 </sy:updateFrequency> <generator>https://wordpress.org/?v=6.7.1</generator> <site xmlns="com-wordpress:feed-additions:1">147945108</site> <item> <title>Sequence learning: A paradigm shift for personalized ads recommendations</title> <link>https://engineering.fb.com/2024/11/19/data-infrastructure/sequence-learning-personalized-ads-recommendations/</link> <dc:creator><![CDATA[]]></dc:creator> <pubDate>Tue, 19 Nov 2024 17:00:43 +0000</pubDate> <category><![CDATA[Data Infrastructure]]></category> <category><![CDATA[ML Applications]]></category> <category><![CDATA[Production Engineering]]></category> <guid isPermaLink="false">https://engineering.fb.com/?p=21954</guid> <description><![CDATA[<p>AI plays a fundamental role in creating valuable connections between people and advertisers within Meta’s family of apps. Meta’s ad recommendation engine, powered by deep learning recommendation models (DLRMs), has been instrumental in delivering personalized ads to people. Key to this success was incorporating thousands of human-engineered signals or features in the DLRM-based recommendation system. 
[...]</p> <p><a class="btn btn-secondary understrap-read-more-link" href="https://engineering.fb.com/2024/11/19/data-infrastructure/sequence-learning-personalized-ads-recommendations/">Read More...</a></p> <p>The post <a rel="nofollow" href="https://engineering.fb.com/2024/11/19/data-infrastructure/sequence-learning-personalized-ads-recommendations/">Sequence learning: A paradigm shift for personalized ads recommendations</a> appeared first on <a rel="nofollow" href="https://engineering.fb.com">Engineering at Meta</a>.</p> ]]></description> <content:encoded><![CDATA[<p><span style="font-weight: 400;">AI plays a fundamental role in creating valuable connections between people and advertisers within Meta’s family of apps. Meta’s ad recommendation engine, powered by</span> <a href="https://ai.meta.com/blog/dlrm-an-advanced-open-source-deep-learning-recommendation-model/" target="_blank" rel="noopener"><span style="font-weight: 400;">deep learning recommendation models (DLRMs)</span></a><span style="font-weight: 400;">, has been instrumental in delivering personalized ads to people. Key to this success was incorporating thousands of human-engineered signals or features in the DLRM-based recommendation system.</span></p> <p><span style="font-weight: 400;">Despite training on vast amounts of data, there are limitations to current DLRM-based ads recommendations with manual feature engineering due to the inability of DLRMs to leverage sequential information from people’s experience data. 
To better capture the experiential behavior, the ads recommendation models have undergone foundational transformations along two dimensions:</span><span style="font-weight: 400;"><br /> </span></p> <ol> <li><span style="font-weight: 400;">Event-based learning: learning representations directly from a person’s engagement and conversion events rather than traditional human-engineered features.</span></li> <li><span style="font-weight: 400;">Learning from sequences: developing new sequence learning architectures to replace traditional DLRM neural network architectures.</span></li> </ol> <p><span style="font-weight: 400;">By incorporating these advancements from the fields of natural language understanding and computer vision, Meta’s next-generation ads recommendation engine addresses the limitations of traditional DLRMs, resulting in more relevant ads for people, higher value for advertisers, and better infrastructure efficiency.</span></p> <p><span style="font-weight: 400;">These innovations have enabled our ads system to develop a deeper understanding of people’s behavior before and after converting on an ad, enabling us to infer the next set of relevant ads. Since launch, the new ads recommendation system has improved ads prediction accuracy – leading to higher value for advertisers and 2-4% more conversions on select segments.</span></p> <h2>The limits of DLRMs for ads recommendations</h2> <p><span style="font-weight: 400;">Meta’s DLRMs for personalized ads rely on a wide array of signals to understand people’s purchase intent and preferences. DLRMs have revolutionized learning from </span><a href="https://ai.meta.com/blog/dlrm-an-advanced-open-source-deep-learning-recommendation-model/" target="_blank" rel="noopener"><span style="font-weight: 400;">sparse features</span></a><span style="font-weight: 400;">, which capture a person’s interactions on entities like Facebook pages, which have massive cardinalities often in the billions. 
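</span></p> <p><span style="font-weight: 400;">As a toy illustration of such features (all table sizes, IDs, and dimensions below are invented and far smaller than production scale), a sparse feature is a variable-length list of entity IDs that the model maps through a learned embedding table and pools into one fixed-size dense vector:</span></p>

```python
# Toy sketch of a DLRM-style sparse-feature lookup: IDs are mapped through
# an embedding table and sum-pooled into a fixed-size dense vector.
# Table size, IDs, and dimensions are invented for illustration.
import random

DIM = 4  # embedding dimension; production models use far larger ones

def make_table(num_rows: int, dim: int = DIM, seed: int = 0):
    """Randomly initialized embedding table: row id -> dense vector."""
    rng = random.Random(seed)
    return [[rng.uniform(-0.1, 0.1) for _ in range(dim)] for _ in range(num_rows)]

def sum_pool(table, ids):
    """Pool a variable-length list of sparse IDs into one fixed-size vector."""
    pooled = [0.0] * DIM
    for i in ids:
        for d in range(DIM):
            pooled[d] += table[i][d]
    return pooled

page_table = make_table(num_rows=1000)       # stands in for billions of pages
clicked_pages = [42, 7, 42, 901]             # a "pages visited" sparse feature
dense = sum_pool(page_table, clicked_pages)  # fixed-size input to the DLRM
```

<p><span style="font-weight: 400;">Note how the pooling step discards the order of the IDs, which is exactly the loss of sequential information discussed below.</span></p> <p><span style="font-weight: 400;">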
The success of DLRMs is founded on their ability to learn generalizable, high dimensional representations, i.e., embeddings from sparse features. </span></p> <p><span style="font-weight: 400;">To leverage tens of thousands of such features, various strategies are employed to combine features, transform intermediate representations, and compose the final outputs. Further, s</span><span style="font-weight: 400;">parse features </span><span style="font-weight: 400;">are built by aggregating attributes across a person’s actions over various time windows with different data sources and aggregation schemes. </span></p> <p><span style="font-weight: 400;">Some examples of legacy sparse features thus engineered would be:</span><span style="font-weight: 400;"><br /> </span></p> <ul> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Ads that a person clicked in the last N days → [Ad-id1, Ad-id2, Ad-id3, …, Ad-idN]</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Facebook pages a person visited in the past M days with a score of how many visits on each page  → [(Page-id1, 45), (Page-id2, 30), (Page-id3, 8), &#8230;]</span><span style="font-weight: 400;"><br /> </span></li> </ul> <p><span style="font-weight: 400;">Human-engineered sparse features, as described above, have been a cornerstone for personalized recommendations with DLRMs for several years. </span><span style="font-weight: 400;">But this approach has limitations:</span><span style="font-weight: 400;"><br /> </span></p> <ul> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Loss of sequential information: Sequence information, i.e., the order of a person’s events, can provide valuable insights for better ads recommendations relevant to a person&#8217;s behavior. 
Sparse feature aggregations lose the sequential information in a person&#8217;s journeys.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Loss of granular information: Fine-grained information like collocation of attributes in the same event is lost as features are aggregated across events.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Reliance on human intuition: Human intuition is unlikely to recognize non-intuitive, complex interactions and patterns from vast quantities of data.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Redundant feature space: Multiple variants of features get created with different aggregation schemes. Though providing incremental value, overlapping aggregations increase compute and storage costs and make feature management cumbersome.</span></li> </ul> <p><span style="font-weight: 400;">People’s interests evolve over time with continuously evolving and dynamic intents. Such complexities are hard to model with handcrafted features. Modeling these inter-dynamics helps achieve a deeper understanding of a person’s behavior over time for better ad recommendations. </span></p> <h2>A paradigm shift with learning from sequences for recommendation systems</h2> <p><span style="font-weight: 400;">Meta’s new system for ads recommendations uses sequence learning at its core. This necessitated a complete redesign of the ads recommendations system across data storage, feature input formats, and model architecture. The redesign required building a new people-centric infrastructure, training and serving optimization for state-of-the-art sequence learning architectures, and model/system codesign for efficient scaling.</span></p> <h3>Event-based features</h3> <p><span style="font-weight: 400;">Event-based features (EBFs) are the building blocks for the new sequence learning models. 
EBFs – an upgrade to traditional features – standardize heterogeneous inputs to sequence learning models along three dimensions:</span></p> <ol> <li><span style="font-weight: 400;">Event streams: the data stream for an EBF, e.g. the sequence of recent ads people engaged with or the sequence of pages people liked.</span></li> <li><span style="font-weight: 400;">Sequence length: defines how many recent events are incorporated from each stream and is determined by the importance of each stream.</span></li> <li><span style="font-weight: 400;">Event information: captures semantic and contextual information about each event in the stream, such as the ad category a person engaged with and the timestamp of the event.</span></li> </ol> <p><span style="font-weight: 400;">Each EBF is a single coherent object that captures all key information about an event. EBFs allow us to incorporate rich information and scale inputs systematically. EBF sequences replace legacy sparse features as the main inputs to the recommendation models. When combined with the event models described below, EBFs have ushered in a departure from human-engineered feature aggregations.</span></p> <h3>Sequence modeling with EBFs</h3> <p><span style="font-weight: 400;">An event model synthesizes event embeddings from event attributes. It learns embeddings for each attribute and uses linear compression to summarize them into a single event attribute-based embedding. Events are timestamp encoded to capture their recency and temporal order. 
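</span></p> <p><span style="font-weight: 400;">As a rough sketch of the event model just described (every name, dimension, and encoding below is invented for illustration), per-attribute embeddings are compressed into one vector and the event&#8217;s timestamp is encoded for recency:</span></p>

```python
# Toy sketch of an event model: per-attribute embeddings are compressed into
# one event embedding, and a timestamp encoding captures recency.
# All names, dimensions, and encodings here are stand-ins, not Meta's.
import math
import random
import zlib

DIM = 8  # event embedding dimension (illustrative)

def embed(value: str):
    """Hash-seeded stand-in for a learned per-attribute embedding."""
    r = random.Random(zlib.crc32(value.encode()))
    return [r.uniform(-1.0, 1.0) for _ in range(DIM)]

def compress(vectors):
    """'Linear compression' stand-in: average the attribute embeddings."""
    return [sum(v[d] for v in vectors) / len(vectors) for d in range(DIM)]

def timestamp_encoding(ts: float):
    """Sinusoidal encoding of event time, as in transformer position encodings."""
    return [math.sin(ts / 10_000 ** (2 * i / DIM)) for i in range(DIM)]

event = {"ad_category": "travel", "surface": "feed", "ts": 1_700_000_000.0}
attr_emb = compress([embed(event["ad_category"]), embed(event["surface"])])
ts_emb = timestamp_encoding(event["ts"])
event_embedding = [a + t for a, t in zip(attr_emb, ts_emb)]  # event-level vector
```

<p><span style="font-weight: 400;">In a real event model these components are learned jointly; the hashing, averaging, and sinusoidal encoding above only show the shape of the data flow.</span></p> <p><span style="font-weight: 400;">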
The event model combines timestamp encoding with the synthesized event attribute-based embedding to produce the final event-level representation – thus translating an EBF sequence into an event embedding sequence.</span></p> <p><span style="font-weight: 400;">This is akin to how language models use embeddings to represent words. The difference is that EBFs have a vocabulary that is many orders of magnitude larger than a natural language because they come from heterogeneous event streams and encompass millions of entities.</span></p> <p><span style="font-weight: 400;">The event embeddings from the event model are then fed into the sequence model at the center of the next-generation ads recommendation system. The event sequence model is a person level event summarization model that consumes sequential event embeddings. It utilizes state-of-the-art attention mechanisms </span><span style="font-weight: 400;">to</span><span style="font-weight: 400;"> synthesize the event embeddings to a predefined number of  embeddings that are keyed by the ad to be ranked</span><span style="font-weight: 400;">. With techniques like multi-headed attention pooling, the complexity of the self-attention module is reduced from </span><i><span style="font-weight: 400;">O</span></i><span style="font-weight: 400;">(N*N) to </span><i><span style="font-weight: 400;">O</span></i><span style="font-weight: 400;">(M*N) . 
M is a tunable parameter and N is the maximum event sequence length.</span></p> <p><span style="font-weight: 400;">The following figure illustrates the differences between DLRMs with a human-engineered features paradigm (left) and the sequence modeling paradigm with EBFs (right) from a person’s event flow perspective.</span></p> <p><img fetchpriority="high" decoding="async" class="aligncenter size-large wp-image-21985" src="https://engineering.fb.com/wp-content/uploads/2024/11/Event-Sequence-Learning-Meta.png?w=1024" alt="" width="1024" height="899" srcset="https://engineering.fb.com/wp-content/uploads/2024/11/Event-Sequence-Learning-Meta.png 1999w, https://engineering.fb.com/wp-content/uploads/2024/11/Event-Sequence-Learning-Meta.png?resize=916,804 916w, https://engineering.fb.com/wp-content/uploads/2024/11/Event-Sequence-Learning-Meta.png?resize=768,674 768w, https://engineering.fb.com/wp-content/uploads/2024/11/Event-Sequence-Learning-Meta.png?resize=1024,899 1024w, https://engineering.fb.com/wp-content/uploads/2024/11/Event-Sequence-Learning-Meta.png?resize=1536,1349 1536w, https://engineering.fb.com/wp-content/uploads/2024/11/Event-Sequence-Learning-Meta.png?resize=96,84 96w, https://engineering.fb.com/wp-content/uploads/2024/11/Event-Sequence-Learning-Meta.png?resize=192,169 192w" sizes="(max-width: 992px) 100vw, 62vw" /></p> <h2>Scaling the new sequence learning paradigm</h2> <p><span style="font-weight: 400;">Following the redesign to shift from sparse feature learning to event-based sequence learning, the next focus was scaling across two domains — scaling the sequence learning architecture and scaling event sequences to be longer and richer.</span></p> <h3>Scaling sequence learning architectures</h3> <p><span style="font-weight: 400;">A custom transformer architecture that incorporates complex feature encoding schemes to fully model sequential information was developed to enable faster exploration and adoption of state-of-the-art techniques for 
recommendation systems. The main challenge with this architectural approach is achieving the performance and efficiency requirements for production. A request to Meta’s ads recommendation system has to rank thousands of ads in a few hundred milliseconds.</span></p> <p><span style="font-weight: 400;">To scale representation learning for higher fidelity, the existing sum pooling approach</span> <span style="font-weight: 400;">was replaced</span> <span style="font-weight: 400;">with a new architecture that learned feature interactions from unpooled embeddings.</span> <span style="font-weight: 400;">Whereas the prior system based on aggregated features was highly optimized for fixed length embeddings that are pooled by simple methods like averaging, sequence learning introduces new challenges because different people have different event lengths. Longer variable length event sequences, represented by jagged embedding tensors and unpooled embeddings, result in larger compute and communication costs with higher variance.</span><span style="font-weight: 400;"><br /> </span><span style="font-weight: 400;"><br /> </span><span style="font-weight: 400;">This challenge of growing costs is addressed by adopting hardware codesign innovations for supporting jagged tensors, namely:</span><span style="font-weight: 400;"><br /> </span></p> <ul> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Native PyTorch capabilities to support Jagged tensors.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Kernel-level optimization for processing Jagged tensors on GPUs.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">A </span><a href="https://dl.acm.org/doi/10.1145/3640457.3688040" target="_blank" rel="noopener"><span style="font-weight: 400;">Jagged Flash Attention </span></a><span style="font-weight: 400;">module to support Flash Attention on Jagged tensors.</span><span 
style="font-weight: 400;"><br /> </span></li> </ul> <h3>Scaling with longer, richer sequences</h3> <p><span style="font-weight: 400;">Meta’s next-generation recommendation system’s ability to learn directly from event sequences to better understand people’s preferences is further enhanced with longer sequences and richer event attributes.</span></p> <p><span style="font-weight: 400;">Sequence scaling entailed:</span><span style="font-weight: 400;"><br /> </span></p> <ul> <li style="font-weight: 400;" aria-level="1"><b>Scaling with longer sequences: </b><span style="font-weight: 400;">Increasing sequence lengths gives deeper insights and context about a person’s interests. Techniques like multi-precision quantization and value-based sampling techniques are used to efficiently scale sequence length.</span></li> <li style="font-weight: 400;" aria-level="1"><b>Scaling with richer semantics</b><span style="font-weight: 400;">: EBFs enable us to capture richer semantic signals about each event e.g. through multimodal content embeddings. Customized vector quantization techniques are used to efficiently encode the embedding attributes of each event. This yields a more informative representation of the final event embedding.</span></li> </ul> <h2>The impact and future of sequence learning</h2> <p><span style="font-weight: 400;">The event sequence learning paradigm has been widely adopted across Meta’s ads systems, resulting in gains in ad relevance and performance, more efficient infrastructure, and accelerated research velocity. Coupled with our focus on advanced </span><a href="https://arxiv.org/pdf/2406.05898" target="_blank" rel="noopener"><span style="font-weight: 400;">transformer architectures</span></a><span style="font-weight: 400;">, event sequence learning has reshaped Meta’s approach to ads recommendation systems. 
</span></p> <p><span style="font-weight: 400;">Going forward, the focus will be on further scaling event sequences by 100X, developing more efficient sequence modeling architectures like linear attention and state space models, key-value (KV) cache optimization, and multimodal enrichment of event sequences.</span></p> <h2><span style="font-weight: 400;">Acknowledgements</span></h2> <p><i><span style="font-weight: 400;">We would like to thank </span></i><i><span style="font-weight: 400;" data-rich-links="{&quot;per_n&quot;:&quot;Neeraj Bhatia&quot;,&quot;per_e&quot;:&quot;neerajb@meta.com&quot;,&quot;type&quot;:&quot;person&quot;}">Neeraj Bhatia</span></i><i><span style="font-weight: 400;">, Zhirong Chen, Parshva Doshi, </span></i><i><span style="font-weight: 400;" data-rich-links="{&quot;per_n&quot;:&quot;Jonathan Herbach&quot;,&quot;per_e&quot;:&quot;jherbach@meta.com&quot;,&quot;type&quot;:&quot;person&quot;}">Jonathan Herbach</span></i><i><span style="font-weight: 400;">, </span></i><i><span style="font-weight: 400;" data-rich-links="{&quot;per_n&quot;:&quot;Yuxi Hu&quot;,&quot;per_e&quot;:&quot;yuxihu@meta.com&quot;,&quot;type&quot;:&quot;person&quot;}">Yuxi Hu</span></i><i><span style="font-weight: 400;">, Abha Jain, Kun Jiang, </span></i><i><span style="font-weight: 400;" data-rich-links="{&quot;per_n&quot;:&quot;Santanu Kolay&quot;,&quot;per_e&quot;:&quot;skolay@meta.com&quot;,&quot;type&quot;:&quot;person&quot;}">Santanu Kolay</span></i><i><span style="font-weight: 400;">, Boyang Li,  Hong Li</span></i><i><span style="font-weight: 400;" data-rich-links="{&quot;per_n&quot;:&quot;Junjie Yang&quot;,&quot;per_e&quot;:&quot;junjieyang@meta.com&quot;,&quot;type&quot;:&quot;person&quot;}">,</span></i> <i><span style="font-weight: 400;">Paolo Massimi, </span></i><i><span style="font-weight: 400;" data-rich-links="{&quot;per_n&quot;:&quot;Sandeep Pandey&quot;,&quot;per_e&quot;:&quot;sppandey@meta.com&quot;,&quot;type&quot;:&quot;person&quot;}">Sandeep 
Pandey</span></i><i><span style="font-weight: 400;">, </span></i><i><span style="font-weight: 400;" data-rich-links="{&quot;per_n&quot;:&quot;Dinesh Ramasamy&quot;,&quot;per_e&quot;:&quot;dineshr@meta.com&quot;,&quot;type&quot;:&quot;person&quot;}">Dinesh Ramasamy</span></i><i><span style="font-weight: 400;">, </span></i><i><span style="font-weight: 400;" data-rich-links="{&quot;per_n&quot;:&quot;Ketan Singh&quot;,&quot;per_e&quot;:&quot;ktns@meta.com&quot;,&quot;type&quot;:&quot;person&quot;}">Ketan Singh</span></i><i><span style="font-weight: 400;">, Doris Wang, Rengan Xu, Junjie Yang, and the entire event sequence learning team involved in the development and productionization of the next-generation sequencing learning-based ads recommendation system.</span></i></p> <p>The post <a rel="nofollow" href="https://engineering.fb.com/2024/11/19/data-infrastructure/sequence-learning-personalized-ads-recommendations/">Sequence learning: A paradigm shift for personalized ads recommendations</a> appeared first on <a rel="nofollow" href="https://engineering.fb.com">Engineering at Meta</a>.</p> ]]></content:encoded> <post-id xmlns="com-wordpress:feed-additions:1">21954</post-id> </item> <item> <title>How Meta built large-scale cryptographic monitoring</title> <link>https://engineering.fb.com/2024/11/12/security/how-meta-built-large-scale-cryptographic-monitoring/</link> <dc:creator><![CDATA[]]></dc:creator> <pubDate>Tue, 12 Nov 2024 17:00:10 +0000</pubDate> <category><![CDATA[Security]]></category> <guid isPermaLink="false">https://engineering.fb.com/?p=21935</guid> <description><![CDATA[<p>Cryptographic monitoring at scale has been instrumental in helping our engineers understand how cryptography is used at Meta. Monitoring has given us a distinct advantage in our efforts to proactively detect and remove weak cryptographic algorithms and has assisted with our general change safety and reliability efforts. 
We’re sharing insights into our own cryptographic monitoring [...]</p> <p><a class="btn btn-secondary understrap-read-more-link" href="https://engineering.fb.com/2024/11/12/security/how-meta-built-large-scale-cryptographic-monitoring/">Read More...</a></p> <p>The post <a rel="nofollow" href="https://engineering.fb.com/2024/11/12/security/how-meta-built-large-scale-cryptographic-monitoring/">How Meta built large-scale cryptographic monitoring</a> appeared first on <a rel="nofollow" href="https://engineering.fb.com">Engineering at Meta</a>.</p> ]]></description> <content:encoded><![CDATA[<ul> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Cryptographic monitoring at scale has been instrumental in helping our engineers understand how cryptography is used at Meta.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Monitoring has given us a distinct advantage in our efforts to proactively detect and remove weak cryptographic algorithms and has assisted with our general change safety and reliability efforts.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">We’re sharing insights into our own cryptographic monitoring system, including challenges faced in its implementation, with the hope of assisting others in the industry aiming to deploy cryptographic monitoring at a similar scale.</span></li> </ul> <p><span style="font-weight: 400;">Meta’s managed cryptographic library, FBCrypto, plays an important role within Meta’s infrastructure and is used by the majority of our core infrastructure services. 
Given this, having a robust monitoring system in place for FBCrypto has been instrumental in ensuring its reliability as well as in helping our engineers understand how cryptography is used at Meta so they can make informed development decisions.</span></p> <p><span style="font-weight: 400;">Monitoring the health of our library allows us to detect and revert bugs before they reach production services. The data from our monitoring service provides insight into the usage of FBCrypto, allowing us to make data-driven decisions when deciding what improvements to make to the library. For example, it helps us identify components that need more attention either because they are on a hot path or are less stable.</span></p> <p><span style="font-weight: 400;">Understanding exactly how clients are using said library is a common pain point in managing any widely distributed library. But the improved understanding of FBCrypto provided by our monitoring helps us maintain a high bar for security posture. Since there is a limit to how much data a symmetric cryptographic key can protect, logging allows us to detect key overuse and rotate keys proactively. 
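</span></p> <p><span style="font-weight: 400;">As a toy illustration of overuse detection on such aggregated logs (the field names and operation budget below are hypothetical; real budgets depend on the algorithm and key type), keys whose total counts exceed a budget can be flagged for rotation:</span></p>

```python
# Toy illustration: flag keys for rotation when their aggregated operation
# counts exceed a per-key budget. Field names and the budget are invented.
AEAD_OP_BUDGET = 2**32  # hypothetical safe operation budget per key

aggregated_logs = [
    {"key": "myKeyName", "algorithm": "AES-GCM-SIV", "count": 5_000},
    {"key": "sessionKey", "algorithm": "AES-GCM", "count": 5_000_000_000},
]

def keys_to_rotate(logs, budget=AEAD_OP_BUDGET):
    """Sum counts per key across log rows and return keys over budget."""
    totals = {}
    for row in logs:
        totals[row["key"]] = totals.get(row["key"], 0) + row["count"]
    return sorted(k for k, n in totals.items() if n > budget)

print(keys_to_rotate(aggregated_logs))  # ['sessionKey']
```

<p><span style="font-weight: 400;">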
It also helps us build an inventory of cryptography usage, making it easy to identify the callsites of weakened algorithms that need to be migrated – a very important task because we need to proactively switch from weakened algorithms to newer, more robust ones as cryptography strength decays over time.</span></p> <p><span style="font-weight: 400;">More generally, improved understanding helps us to make emergency algorithm migrations when a vulnerability of a primitive is discovered.</span></p> <p><span style="font-weight: 400;">More recently, this is aiding our efforts to ensure</span> <a href="https://engineering.fb.com/2024/05/22/security/post-quantum-readiness-tls-pqr-meta/" target="_blank" rel="noopener"><span style="font-weight: 400;">post-quantum readiness</span></a><span style="font-weight: 400;"> in our asymmetric use cases. The available data improves our decision-making process while prioritizing quantum-vulnerable use cases</span></p> <h2><span style="font-weight: 400;">How cryptographic monitoring works at Meta</span></h2> <p><span style="font-weight: 400;">Effective cryptographic monitoring requires storing persisted logs of cryptographic events, upon which diagnostic and analytic tools can be used to gather further insights. Supporting logging at the scale of FBCrypto requires an implementation with unique performance considerations in mind. Given that FBCrypto is used along many high-volume and critical code paths, a naive logging implementation could easily overwhelm a standard logging infrastructure or cause significant performance regressions. This is true for most widely distributed libraries and is especially true in the field of cryptography, where the sheer volume of usage can come as a complete surprise to those unfamiliar with the space. For example, we recently disclosed that roughly 0.05% of CPU cycles at Meta are spent on X25519 key exchange. 
</span></p> <p><span style="font-weight: 400;">Most of Meta’s logs are constructed and written via</span> <a href="https://engineering.fb.com/2019/10/07/core-infra/scribe/" target="_blank" rel="noopener"><span style="font-weight: 400;">Scribe</span></a><span style="font-weight: 400;">, Meta’s standard logging framework. From there, data persists in</span> <a href="https://research.facebook.com/publications/scuba-diving-into-data-at-facebook/" target="_blank" rel="noopener"><span style="font-weight: 400;">Scuba</span></a><span style="font-weight: 400;"> and</span> <a href="https://research.facebook.com/publications/hive-a-warehousing-solution-over-a-map-reduce-framework/" target="_blank" rel="noopener"><span style="font-weight: 400;">Hive</span></a><span style="font-weight: 400;">, Meta’s short-term and long term data stores, respectively.</span></p> <p><span style="font-weight: 400;">Typically, the Scribe API is called directly to construct a log for every “event” that needs to be logged. For FBCrypto, this would mean constructing a log for nearly every cryptographic operation that our library is used for. Unfortunately, given the sheer frequency of such operations, a solution like this would consume an unreasonable amount of write throughput and storage capacity. A common solution to this problem would be to introduce sampling (i.e., only log 1/X cryptographic operations, and increase X until we no longer have capacity concerns). However, we felt strongly about not introducing any sampling since doing so would result in most logs being omitted, giving us a less clear picture of the library’s usage.</span></p> <p><span style="font-weight: 400;">Instead, the logging uses a “buffering and flushing” strategy, in which cryptographic events are aggregated across time and flushed to a data store at a preconfigured interval.</span></p> <p><span style="font-weight: 400;">During the aggregation, a “count” is maintained for every unique event. 
When it comes time to flush, this count is exported along with the log to convey how often that particular event took place. </span></p> <p><span style="font-weight: 400;">Below is a rough illustration of what this looks like:</span></p> <p><img decoding="async" class="aligncenter wp-image-21936" src="https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-1-e1731001505528.png?w=859" alt="" width="600" height="450" /></p> <p><span style="font-weight: 400;">In the above example, the key named “myKeyName” is used to perform encryption using the AES-GCM-SIV encryption algorithm (in practice we log more fields than just key name, method, and algorithm). The operation happens five times and is assigned on a count of five. Since machines often compute millions of cryptographic operations per day, this strategy can lead to significant compute savings in production. </span></p> <h3><span style="font-weight: 400;">A client-side view</span></h3> <p><span style="font-weight: 400;">The aggregation and flushing is implemented within FBCrypto, so the logging and flushing code sits on the client hosts. When clients call a given cryptographic operation (e.g., “encrypt()”), the operation is performed and the log is added to our aggregated buffer. We refer to the object that holds the buffer as the “buffered logger.”</span></p> <p><span style="font-weight: 400;">Note that the logging does not change the interface of FBCrypto, so all of this is transparent to the clients of the library. 
</span></p> <p><img decoding="async" class="aligncenter wp-image-21937" src="https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-2-e1731001623468.png?w=939" alt="" width="600" height="338" srcset="https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-2-e1731001623468.png 939w, https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-2-e1731001623468.png?resize=580,326 580w, https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-2-e1731001623468.png?resize=916,516 916w, https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-2-e1731001623468.png?resize=768,433 768w, https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-2-e1731001623468.png?resize=96,54 96w, https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-2-e1731001623468.png?resize=192,108 192w" sizes="(max-width: 992px) 100vw, 62vw" /></p> <p><span style="font-weight: 400;">In multithreaded environments all threads will log to the same buffer. For this to be performant, we need to choose the right underlying data structure (see the section below on </span><i><span style="font-weight: 400;">“Additional optimizations”</span></i><span style="font-weight: 400;"> for more details).</span></p> <p><span style="font-weight: 400;">While the aggregation works to reduce space and time overhead, the logs need to eventually be written to storage for further use. To do this, a background thread runs on the client host to periodically call the Scribe API to export the logs and flush the map’s contents. 
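</span></p> <p><span style="font-weight: 400;">A minimal Python sketch of this buffering-and-flushing strategy (class and field names are invented; the production implementation sits inside FBCrypto and writes to Scribe rather than returning rows):</span></p>

```python
# Minimal sketch of "buffering and flushing": cryptographic events are
# counted in an in-memory map and periodically exported in bulk.
# Names are invented; the production version writes to Scribe.
import threading
from collections import Counter

class BufferedLogger:
    def __init__(self, flush_interval_s: float = 60.0):
        self._counts = Counter()       # (key, method, algorithm) -> count
        self._lock = threading.Lock()  # stand-in for a concurrent hash map
        self._interval = flush_interval_s

    def log(self, key_name: str, method: str, algorithm: str) -> None:
        """Called on every cryptographic operation; O(1), no I/O."""
        with self._lock:
            self._counts[(key_name, method, algorithm)] += 1

    def flush(self):
        """Export aggregated rows and reset the buffer."""
        with self._lock:
            rows = [{"key": k, "method": m, "algorithm": a, "count": n}
                    for (k, m, a), n in self._counts.items()]
            self._counts.clear()
        return rows  # in production: write each row to the logging pipeline

    def start(self):
        """Schedule periodic background flushing at a fixed interval."""
        timer = threading.Timer(self._interval, self._tick)
        timer.daemon = True
        timer.start()

    def _tick(self):
        self.flush()
        self.start()

logger = BufferedLogger()
for _ in range(5):
    logger.log("myKeyName", "encrypt", "AES-GCM-SIV")
rows = logger.flush()  # one row with count=5 instead of five separate logs
```

<p><span style="font-weight: 400;">Because only unique (key, method, algorithm) combinations are stored between flushes, write volume scales with the cardinality of distinct operations rather than with the raw operation count.</span></p> <p><span style="font-weight: 400;">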
</span></p> <p><span style="font-weight: 400;">Below is an overview of the overall flow: </span></p> <p><img loading="lazy" decoding="async" class="aligncenter wp-image-21941" src="https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-3-cropped.png?w=1024" alt="" width="600" height="520" srcset="https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-3-cropped.png 1478w, https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-3-cropped.png?resize=916,793 916w, https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-3-cropped.png?resize=768,665 768w, https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-3-cropped.png?resize=1024,887 1024w, https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-3-cropped.png?resize=96,83 96w, https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-3-cropped.png?resize=192,166 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /></p> <h3><span style="font-weight: 400;">Additional optimizations</span></h3> <p><span style="font-weight: 400;">We had to make some additional optimizations to support cryptographic monitoring on Meta’s major products (Facebook, Whatsapp, Instagram, etc.).</span></p> <p><span style="font-weight: 400;">With careful design choices around the logging logic and data structures used, our cryptographic logging operates with </span><b>no sampling </b><span style="font-weight: 400;">and has had a negligible impact on compute performance across Meta’s fleet.</span></p> <h4><span style="font-weight: 400;">Partially randomized flushing</span></h4> <p><span style="font-weight: 400;">Due to the nature of our buffering and flushing strategy, certain clients who were running jobs that restarted large sets of machines at around the same time would have those machines’ logs get flushed at about the same time. 
This would result in “spiky” writes to the logging platform, followed by longer periods of underutilization between flushes. To normalize our write throughput, we distribute these spikes across time by applying a randomized delay on a per-host basis before logs are flushed for the first time. This leads to a more uniform flushing cadence, allowing for a more consistent load on Scribe. </span></p> <p><span style="font-weight: 400;">The figure below demonstrates how this works:</span></p> <p><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-21939" src="https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-4.png?w=1024" alt="" width="1024" height="388" srcset="https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-4.png 1999w, https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-4.png?resize=916,347 916w, https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-4.png?resize=768,291 768w, https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-4.png?resize=1024,388 1024w, https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-4.png?resize=1536,582 1536w, https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-4.png?resize=96,36 96w, https://engineering.fb.com/wp-content/uploads/2024/11/Cryptographic-monitoring_Meta-4.png?resize=192,73 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /></p> <h4><span style="font-weight: 400;">Derived crypto</span></h4> <p><span style="font-weight: 400;">FBCrypto supports a feature called derived crypto, which allows “child” keysets to be derived from “parent” keysets by applying a key derivation function (KDF) to all the keys in the keyset with some salt. 
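</span></p> <p><span style="font-weight: 400;">As a rough illustration, deriving a child key from a parent key and a salt can be sketched with an HKDF-style extract-then-expand construction built from HMAC-SHA256. This is a generic sketch, not FBCrypto’s actual KDF, and the label argument is a hypothetical way of distinguishing sibling child keys.</span></p>

```python
import hashlib
import hmac

def derive_child_key(parent_key: bytes, salt: bytes, label: bytes) -> bytes:
    """Derive a deterministic 32-byte child key from a parent key.

    HKDF-style sketch (RFC 5869 shape): extract a pseudorandom key from
    the parent using the salt, then expand it under a per-child label.
    Illustrative only; not FBCrypto's actual construction.
    """
    prk = hmac.new(salt, parent_key, hashlib.sha256).digest()       # extract
    return hmac.new(prk, label + b"\x01", hashlib.sha256).digest()  # expand, 1 block
```

<p><span style="font-weight: 400;">Because derivation is deterministic, the same parent, salt, and label always yield the same child key, which is what lets millions of child keys exist without each being individually stored.</span></p> <p><span style="font-weight: 400;">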
This feature is used by a few large-scale use cases that need to generate millions of keys.</span></p> <p><span style="font-weight: 400;">Our logging initially created a unique row in the buffered logger for every derived keyset, which used a lot of space and put increased load on backend data stores. To address this, we now aggregate the cryptographic operations of derived keys under the name of the parent key. This reduces our overall capacity needs without harming our ability to detect key overuse since, in the worst case, the aggregations would be a pessimistic counter for any given child key. </span></p> <p><span style="font-weight: 400;">Thanks to this aggregation, we were able to cut down on the vast majority of our logging volume, compared to the space that would have been used with no aggregation. </span></p> <h4><span style="font-weight: 400;">The Folly library </span></h4> <p><span style="font-weight: 400;">Internally, our buffering makes use of the</span> <a href="https://github.com/facebook/folly/blob/main/folly/concurrency/ConcurrentHashMap.h" target="_blank" rel="noopener"><span style="font-weight: 400;">folly::ConcurrentHashMap</span></a><span style="font-weight: 400;">, which is built to be performant under heavy writes in multithreaded environments, while still guaranteeing atomic accesses.  </span></p> <h3><span style="font-weight: 400;">Unified offerings</span></h3> <p><span style="font-weight: 400;">Meta’s existing infrastructure and its emphasis on unified offerings are key to supporting this at scale (see the</span><a href="https://engineering.fb.com/2019/10/07/core-infra/scribe/"> <span style="font-weight: 400;">Scribe</span></a><span style="font-weight: 400;"> logging framework and the FBCrypto library). These properties often mean that solutions only have to be implemented once in order for the entire company to benefit.</span></p> <p><span style="font-weight: 400;">This is especially true here. 
Most machines in Meta’s fleet can log to Scribe, giving us easy log ingestion support. Furthermore, the wide adoption of FBCrypto gives us insights into cryptographic operations without needing clients to migrate to a new library/API. </span></p> <p><span style="font-weight: 400;">From an engineering perspective, this helps us overcome many hurdles that others in the industry might face. For example, it helps us avoid fragmentation that might require multiple custom solutions to be implemented, which would increase our engineering workload.</span></p> <h2><span style="font-weight: 400;">The impact of cryptographic monitoring</span></h2> <p><span style="font-weight: 400;">The insights from our cryptographic monitoring efforts have served multiple use cases across our security and infrastructure reliability efforts.</span></p> <h3><span style="font-weight: 400;">Preemptively mitigating security vulnerabilities</span></h3> <p><span style="font-weight: 400;">Thanks to our long retention window, we can monitor trends over time and use them for more predictive modeling and analysis. We can present our findings to cryptography experts, who can do further analysis and predict whether vulnerabilities may emerge. This allows us to preemptively identify clients using cryptography in risky ways and work with them to mitigate these issues before they become real security vulnerabilities. </span></p> <p><span style="font-weight: 400;">This is particularly beneficial in preparation for the world of</span><a href="https://en.wikipedia.org/wiki/Post-quantum_cryptography"> <span style="font-weight: 400;">post-quantum cryptography</span></a><span style="font-weight: 400;"> (PQC), which requires us to find clients using vulnerable algorithms and ensure they are migrated off in a timely fashion. 
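</span></p> <p><span style="font-weight: 400;">Conceptually, building that inventory is a query over the aggregated logs. The sketch below is a simplified illustration: the row shape and the flagged algorithm names are hypothetical examples, not the actual schema or flag list.</span></p>

```python
# Hypothetical flag list: classical public-key algorithms that a
# sufficiently large quantum computer could break.
PQC_VULNERABLE = {"RSA-2048", "ECDSA-P256", "ECDH-P256"}

def clients_to_migrate(rows):
    """Given aggregated (client, algorithm, call_count) rows, total each
    client's usage of quantum-vulnerable algorithms, highest volume first,
    so that migration outreach can be prioritized."""
    totals = {}
    for client, algorithm, count in rows:
        if algorithm in PQC_VULNERABLE:
            totals[client] = totals.get(client, 0) + count
    return sorted(totals.items(), key=lambda item: -item[1])
```

<p><span style="font-weight: 400;">Because the logs are unsampled, totals like these are exact counts rather than estimates, which matters when judging whether a client is safe to migrate.</span></p> <p><span style="font-weight: 400;">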
</span></p> <p><span style="font-weight: 400;">We have also found that detecting these vulnerabilities well in advance has led to stronger support during cross-team collaborations. Thanks to the ample notice, teams can seamlessly integrate any necessary migration efforts into their roadmap with minimal interruption to their ongoing projects.</span></p> <h3><span style="font-weight: 400;">Promoting infrastructure reliability</span></h3> <p><span style="font-weight: 400;">Our root dataset has also served as a useful proxy for client health. This is partially thanks to the lack of sampling, as we can see the exact number of calls taking place, along with their respective success rates. This has been particularly important during large-scale migrations, where anomalous drops in success rate, call volume, etc., may indicate a bug in a new code path. Indeed, numerous detectors and alarms have been built on our dataset to help us perform big migrations safely.</span></p> <p><span style="font-weight: 400;">The dataset also contains library versioning information, so we can monitor what versions of our library are running across the fleet in real time. This has been especially useful for rolling out new features, as we can see exactly which clients have picked up the latest changes. This allows us to move faster and more confidently, even when running large-scale migrations across the fleet. 
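</span></p> <p><span style="font-weight: 400;">The success-rate detectors mentioned above can be reduced to a comparison against a historical baseline. The sketch below is a simplified illustration; the tolerance is an arbitrary example value, not a production threshold.</span></p>

```python
def success_rate_alarm(successes: int, failures: int,
                       baseline_rate: float, tolerance: float = 0.01) -> bool:
    """Fire when the observed success rate drops more than `tolerance`
    below the baseline, which may indicate a bug in a newly migrated
    code path. Unsampled counts make the observed rate exact."""
    total = successes + failures
    if total == 0:
        return False  # no traffic observed; nothing to alarm on
    return (successes / total) < (baseline_rate - tolerance)
```

<p><span style="font-weight: 400;">A production detector would also account for traffic volume and noise, but the core comparison is this simple.</span></p> <p><span style="font-weight: 400;">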
</span></p> <h2><span style="font-weight: 400;">Challenges to cryptographic monitoring</span></h2> <p><span style="font-weight: 400;">Supporting cryptographic logging at Meta’s scale has had its own unique set of challenges.</span></p> <h3><span style="font-weight: 400;">Capacity constraints</span></h3> <p><span style="font-weight: 400;">Despite our optimizations, we have occasionally found ourselves putting increased load on Scribe (see point above about underestimating cryptographic usage) and have worked with the Scribe team to manage the unexpected increase in write throughput. Doing so has been relatively easy for the company, considering the design optimizations mentioned above.</span></p> <p><span style="font-weight: 400;">We also occasionally put an increased load on</span> <a href="https://research.facebook.com/publications/scuba-diving-into-data-at-facebook/" target="_blank" rel="noopener"><span style="font-weight: 400;">Scuba</span></a><span style="font-weight: 400;">, which is optimized to be performant for real-time data (i.e., warm storage) and can be inefficient if used for larger datasets. To minimize compute costs, we also rely on</span><a href="https://research.facebook.com/publications/hive-a-warehousing-solution-over-a-map-reduce-framework/"> <span style="font-weight: 400;">Hive</span></a><span style="font-weight: 400;"> tables for longer-term storage (i.e., cold storage). </span></p> <h3><span style="font-weight: 400;">Flushing on shutdown</span></h3> <p><span style="font-weight: 400;">Besides flushing the logs in the shared singleton map at a preconfigured time interval, client machines will also do one final flush to log all remaining contents of their log buffer to Scribe when a job is being shut down. We have found that operating in a “shutdown environment” can lead to a number of interesting scenarios, particularly when attempting to access Scribe and its dependencies. 
Many of these scenarios boil down to the nuances of</span><a href="https://github.com/facebook/folly/blob/main/folly/Singleton.h"> <span style="font-weight: 400;">folly::Singleton</span></a><span style="font-weight: 400;">, which is Meta’s go-to library for managing singletons. Likewise, running something “on shutdown” in Java requires using only synchronous I/O code and operating quickly.</span></p> <h2><span style="font-weight: 400;">Our next initiatives for cryptographic monitoring</span></h2> <p><span style="font-weight: 400;">While our work thus far has been largely a success, there are many exciting avenues for improvement. For example, we can further optimize Scribe throughput and Scuba storage utilization to make more efficient use of Meta’s infrastructure.</span></p> <p><span style="font-weight: 400;">We will also continue to leverage the logging data to further develop monitoring and data analytics to promote security and reliability. On the security side, this means continuing to take an inventory of use cases that would be vulnerable in a PQC world and migrating them to more resilient algorithms/configurations. In terms of reliability, it means gaining a better understanding of the end-to-end latency for cryptography use cases.</span></p> <p><span style="font-weight: 400;">Within all of this, it’s also important that we continue driving the unification of cryptographic offerings and monitoring tooling. While FBCrypto provides a unified set of offerings, there are other cryptographic use cases across Meta that use a different set of tools for telemetry and data collection. 
Additional, non-trivial work is needed to achieve full unification across all use cases.</span></p> <h2><span style="font-weight: 400;">Acknowledgments</span></h2> <p><i><span style="font-weight: 400;">This work could not have been accomplished without the critical efforts of numerous folks, particularly Grace Wu, Ilya Maykov, Isaac Elbaz, and the rest of the CryptoEng team at Meta.</span></i></p> <p>The post <a rel="nofollow" href="https://engineering.fb.com/2024/11/12/security/how-meta-built-large-scale-cryptographic-monitoring/">How Meta built large-scale cryptographic monitoring</a> appeared first on <a rel="nofollow" href="https://engineering.fb.com">Engineering at Meta</a>.</p> ]]></content:encoded> <post-id xmlns="com-wordpress:feed-additions:1">21935</post-id> </item> <item> <title>Diff Authoring Time: Measuring developer productivity at Meta</title> <link>https://engineering.fb.com/2024/10/25/developer-tools/diff-authoring-time-dat-measuring-developer-productivity-meta/</link> <dc:creator><![CDATA[]]></dc:creator> <pubDate>Fri, 25 Oct 2024 16:32:59 +0000</pubDate> <category><![CDATA[Culture]]></category> <category><![CDATA[DevInfra]]></category> <category><![CDATA[Meta Tech Podcast]]></category> <guid isPermaLink="false">https://engineering.fb.com/?p=21928</guid> <description><![CDATA[<p>At Meta, we’re always looking for ways to enhance the productivity of our engineers and developers. But how exactly do you measure developer productivity? 
On this episode of the Meta Tech Podcast, Pascal Hartig (@passy) sits down with Sarita and Moritz, two engineers at Meta who have been working on Diff Authoring Time (DAT) – a [...]</p> <p><a class="btn btn-secondary understrap-read-more-link" href="https://engineering.fb.com/2024/10/25/developer-tools/diff-authoring-time-dat-measuring-developer-productivity-meta/">Read More...</a></p> <p>The post <a rel="nofollow" href="https://engineering.fb.com/2024/10/25/developer-tools/diff-authoring-time-dat-measuring-developer-productivity-meta/">Diff Authoring Time: Measuring developer productivity at Meta</a> appeared first on <a rel="nofollow" href="https://engineering.fb.com">Engineering at Meta</a>.</p> ]]></description> <content:encoded><![CDATA[<p>At Meta, we’re always looking for ways to enhance the productivity of our engineers and developers. But how exactly do you measure developer productivity?</p> <p>On this episode of the Meta Tech Podcast, Pascal Hartig (<a href="https://www.threads.net/@passy_" target="_blank" rel="noopener">@passy</a>) sits down with Sarita and <a href="https://x.com/Inventitech" target="_blank" rel="noopener">Moritz</a>, two engineers at Meta who have been working on Diff Authoring Time (DAT) – a method for measuring how long it takes to submit changes to a codebase.</p> <p>They talk about the challenges of measuring productivity, how DAT is implemented, and the new abilities it unlocks for developers.</p> <p>Download or listen to the podcast episode below:</p> <p><iframe loading="lazy" style="border: none;" title="Libsyn Player" src="//html5-player.libsyn.com/embed/episode/id/33265257/height/90/theme/custom/thumbnail/yes/direction/forward/render-playlist/no/custom-color/000000/" width="100%" height="90" scrolling="no" allowfullscreen="allowfullscreen"></iframe><br /> You can also find the episode wherever you get your podcasts, including:</p> <ul> <li><a href="https://open.spotify.com/episode/4D7HJeNs40U2C6uMQoPcMc?si=tfM2ZSC7REGIAGq693fSaA" 
target="_blank" rel="noopener">Spotify</a></li> <li><a href="https://podcasts.apple.com/gb/podcast/measuring-developer-productivity-with-diff-authoring/id1370910331?i=1000671324538" target="_blank" rel="noopener">Apple Podcasts</a></li> <li><a href="https://pca.st/7vbp2djc" target="_blank" rel="noopener">Pocket Casts</a></li> <li><a href="https://overcast.fm/itunes1370910331" target="_blank" rel="noopener">Overcast</a></li> </ul> <p>The <a href="https://insidefacebookmobile.libsyn.com/" target="_blank" rel="noopener">Meta Tech Podcast</a>, brought to you by Meta, highlights the work Meta’s engineers are doing at every level – from low-level frameworks to end-user features.</p> <p>Send us feedback on <a href="https://instagram.com/metatechpod" target="_blank" rel="noopener">Instagram</a>, <a href="https://threads.net/@metatechpod" target="_blank" rel="noopener">Threads</a>, or <a href="https://twitter.com/metatechpod" target="_blank" rel="noopener">X</a>.</p> <p>And if you’re interested in learning more about career opportunities at Meta, visit the <a href="https://www.metacareers.com/?ref=engineering.fb.com" target="_blank" rel="noopener">Meta Careers</a> page.</p> <p>The post <a rel="nofollow" href="https://engineering.fb.com/2024/10/25/developer-tools/diff-authoring-time-dat-measuring-developer-productivity-meta/">Diff Authoring Time: Measuring developer productivity at Meta</a> appeared first on <a rel="nofollow" href="https://engineering.fb.com">Engineering at Meta</a>.</p> ]]></content:encoded> <post-id xmlns="com-wordpress:feed-additions:1">21928</post-id> </item> <item> <title>IPLS: Privacy-preserving storage for your WhatsApp contacts</title> <link>https://engineering.fb.com/2024/10/22/security/ipls-privacy-preserving-storage-for-your-whatsapp-contacts/</link> <dc:creator><![CDATA[]]></dc:creator> <pubDate>Tue, 22 Oct 2024 12:59:46 +0000</pubDate> <category><![CDATA[Security]]></category> <category><![CDATA[WhatsApp]]></category> <guid 
isPermaLink="false">https://engineering.fb.com/?p=21812</guid> <description><![CDATA[<p>Your contact list is fundamental to the experiences you love and enjoy on WhatsApp. With contacts, you know which of your friends and family are on WhatsApp, you can easily message or call them, and it helps give you context on who is in your groups. But losing your phone could mean losing your contact [...]</p> <p><a class="btn btn-secondary understrap-read-more-link" href="https://engineering.fb.com/2024/10/22/security/ipls-privacy-preserving-storage-for-your-whatsapp-contacts/">Read More...</a></p> <p>The post <a rel="nofollow" href="https://engineering.fb.com/2024/10/22/security/ipls-privacy-preserving-storage-for-your-whatsapp-contacts/">IPLS: Privacy-preserving storage for your WhatsApp contacts</a> appeared first on <a rel="nofollow" href="https://engineering.fb.com">Engineering at Meta</a>.</p> ]]></description> <content:encoded><![CDATA[<p><span style="font-weight: 400;">Your contact list is fundamental to the experiences you love and enjoy on WhatsApp. With contacts, you know which of your friends and family are on WhatsApp, you can easily message or call them, and it helps give you context on who is in your groups. But losing your phone could mean losing your contact list as well. Traditionally, WhatsApp has lacked the ability to store your contact list in a way that can be easily and automatically restored in the event you lose it. What’s more, the only place you were able to add contacts was from your mobile device, by either typing in a phone number or scanning a QR code.</span></p> <p><span style="font-weight: 400;">As part of WhatsApp&#8217;s new feature to privately add and manage your contacts on WhatsApp across linked devices, we&#8217;re announcing a novel encrypted storage system we’ve designed called Identity Proof Linked Storage (IPLS). IPLS allows you to save your contacts and automatically restore them directly through WhatsApp. 
With IPLS in place, you can now create contacts directly within WhatsApp and choose to sync them to your phone or securely save them only to WhatsApp – giving you the ability to create contacts that are specific to your account. If you use linked devices, this also allows you to add and manage contacts seamlessly regardless of which device you’re on.</span></p> <p><span style="font-weight: 400;">Additionally, if you have multiple accounts on the same phone, such as a work and personal account, you can now customize your contact list for each account. If you lose your phone, your contact list can be restored on a newly registered device. </span></p> <p><span style="font-weight: 400;">Contact names are stored encrypted within WhatsApp, and we’ve built this with additional, robust protections by using IPLS to deter access to contacts by anyone except the user.</span></p> <p><span style="font-weight: 400;">IPLS incorporates new privacy technology that protects your contact lists. To further ensure the safety and security of this system, we’ve </span><a href="https://www.cloudflare.com/press-releases/2024/cloudflare-helps-secure-the-worlds-most-popular-messaging-applications/"><span style="font-weight: 400;">partnered with Cloudflare</span></a><span style="font-weight: 400;"> to provide</span><a href="https://blog.cloudflare.com/key-transparency/"><span style="font-weight: 400;"> independent third-party auditing</span></a><span style="font-weight: 400;"> of its cryptographic properties. The new technology stack was reviewed by external researchers and NCC Group Cryptography Services, an independent cybersecurity consultancy. </span></p> <h2>What is Identity Proof Linked Storage?</h2> <p><span style="font-weight: 400;">IPLS is a novel system at WhatsApp that allows users to store their contact names in an encrypted way. 
IPLS allows the client device to save the contact information using a strong encryption key generated on the client device. Its retrieval is based on the client authenticating its primary device identity.</span></p> <p><span style="font-weight: 400;">IPLS is based on two existing pieces of technology that are already used at scale by WhatsApp: </span><a href="https://engineering.fb.com/2023/04/13/security/whatsapp-key-transparency/" target="_blank" rel="noopener"><span style="font-weight: 400;">key transparency</span></a><span style="font-weight: 400;"> and our <a href="https://engineering.fb.com/2021/09/10/security/whatsapp-e2ee-backups/" target="_blank" rel="noopener">hardware security module (HSM)</a>. </span></p> <p><span style="font-weight: 400;">Certain events associated with your phone’s WhatsApp application (such as installing or reinstalling) trigger the creation of a new cryptographic keypair that is associated with your phone number. WhatsApp’s key transparency system publishes records of these primary device identity key changes to an append-only, cryptographic </span><a href="https://github.com/facebook/akd/" target="_blank" rel="noopener"><span style="font-weight: 400;">Auditable Key Directory (AKD)</span></a><span style="font-weight: 400;"> that allows WhatsApp clients to automatically verify a user’s encryption key. 
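</span></p> <p><span style="font-weight: 400;">The append-only property that makes this auditable can be illustrated with a toy hash chain: each epoch’s root commits to the previous root plus the new record, so rewriting history invalidates every later root. The real AKD linked above is a far richer authenticated data structure; this Python sketch, with placeholder records, shows only the chaining idea.</span></p>

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class ToyAppendOnlyDirectory:
    """Toy append-only directory of (phone number -> identity key) records.
    Each published root commits to the entire prior history."""

    def __init__(self):
        self.roots = [_h(b"genesis")]
        self.entries = []

    def publish(self, phone_number: str, identity_key: bytes) -> bytes:
        record = _h(phone_number.encode() + identity_key)
        new_root = _h(self.roots[-1] + record)
        self.entries.append((phone_number, identity_key))
        self.roots.append(new_root)
        return new_root

    def audit(self) -> bool:
        # An auditor replays the entries and checks every published root;
        # any retroactive edit breaks the chain from that point onward.
        root = _h(b"genesis")
        for (phone, key), expected in zip(self.entries, self.roots[1:]):
            root = _h(root + _h(phone.encode() + key))
            if root != expected:
                return False
        return True
```

<p><span style="font-weight: 400;">Because every root depends on all earlier ones, an auditor who replays the published records can detect any attempt to quietly rewrite a past identity-key binding.</span></p> <p><span style="font-weight: 400;">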
</span></p> <p><span style="font-weight: 400;">Key transparency allows WhatsApp, and the public at large, to cryptographically verify whether a given phone number used for a WhatsApp account is tied to a given identity key.</span></p> <p><span style="font-weight: 400;">The HSMs are employed by </span><a href="https://www.whatsapp.com/security/WhatsApp_Security_Encrypted_Backups_Whitepaper.pdf" target="_blank" rel="noopener"><span style="font-weight: 400;">WhatsApp end-to-end encrypted backups</span></a><span style="font-weight: 400;"> and allow for private, tamper-resistant execution of application logic within WhatsApp data centers in a privacy-preserving way. Data processing within the HSM’s security boundary remains opaque even to WhatsApp insiders with the highest privilege and physical access to the hardware. </span></p> <h2><span style="font-weight: 400;">The components of IPLS</span></h2> <h3><span style="font-weight: 400;">The AKD and Cloudflare integration</span></h3> <p><span style="font-weight: 400;">As mentioned, the first building block of IPLS is WhatsApp’s AKD, which maps a client phone number to a client identity key. Primary device identity is used to authenticate the client to ensure that only the owner of the contact encryption key is allowed to restore the contacts.</span></p> <p><span style="font-weight: 400;">To strengthen the single-instance nature of AKD, </span><a href="https://blog.cloudflare.com/key-transparency/" target="_blank" rel="noopener"><span style="font-weight: 400;">WhatsApp has engaged Cloudflare</span></a><span style="font-weight: 400;"> to act as an additional witness of the additions to AKD. Cloudflare digitally signs each epoch and its associated root hash, and returns a digital signature validation confirming that the directory was not tampered with. 
The HSM-based Key Vault validates the Cloudflare signature using Cloudflare’s public key.</span></p> <p><span style="font-weight: 400;">WhatsApp relies on the availability of the Cloudflare signing service and cannot proceed with the updates to AKD in the absence of the digital signature of each update.</span></p> <p><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-21822" src="https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-1_crop-Copy.png?w=1024" alt="" width="1024" height="320" srcset="https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-1_crop-Copy.png 1920w, https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-1_crop-Copy.png?resize=916,286 916w, https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-1_crop-Copy.png?resize=768,240 768w, https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-1_crop-Copy.png?resize=1024,320 1024w, https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-1_crop-Copy.png?resize=1536,480 1536w, https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-1_crop-Copy.png?resize=96,30 96w, https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-1_crop-Copy.png?resize=192,60 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /></p> <p><span style="font-weight: 400;">In addition, WhatsApp provides auditable proofs of consistency for the transitions between epochs. 
The auditable proofs are published to a write-once, read-many enabled Amazon S3 instance, which has a public interface for any entity to retrieve the proofs.</span></p> <p><span style="font-weight: 400;">Using AKD and partnering with Cloudflare ensures that there is only a single instance of the directory that is validated by a 3rd party.</span></p> <h3><span style="font-weight: 400;">HSM-based key storage</span></h3> <p><span style="font-weight: 400;">To ensure privacy for user contacts registered on WhatsApp, contact names are first encrypted using a symmetric encryption key generated by the user’s device, and then stored in the HSM-based Key Vault. Storage and retrieval of the contact encryption key occurs via an end-to-end encrypted channel between the client and the HSM-based Key Vault, ensuring that the data in transit remains opaque to WhatsApp.  </span></p> <p><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-21823" src="https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-2_crop.png?w=1024" alt="" width="1024" height="320" srcset="https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-2_crop.png 1920w, https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-2_crop.png?resize=916,286 916w, https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-2_crop.png?resize=768,240 768w, https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-2_crop.png?resize=1024,320 1024w, https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-2_crop.png?resize=1536,480 1536w, https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-2_crop.png?resize=96,30 96w, https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-2_crop.png?resize=192,60 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /></p> <p><span style="font-weight: 400;">Storing the contact key in the HSM-based Key Vault ensures its availability even 
when the user loses their phone. If a user loses their client device and wants to restore their contacts, the new client device can retrieve the contact key by establishing a secure session with the HSM-based Key Vault. The Key Vault verifies the client identity key by accessing AKD via a secure cryptographic protocol and verifying that the client has the corresponding private key.</span></p> <p><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-21824" src="https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-3_crop.png?w=1024" alt="" width="1024" height="320" srcset="https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-3_crop.png 1920w, https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-3_crop.png?resize=916,286 916w, https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-3_crop.png?resize=768,240 768w, https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-3_crop.png?resize=1024,320 1024w, https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-3_crop.png?resize=1536,480 1536w, https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-3_crop.png?resize=96,30 96w, https://engineering.fb.com/wp-content/uploads/2024/10/WhatsApp-IPLS-image-3_crop.png?resize=192,60 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /></p> <p><span style="font-weight: 400;">Once the client is verified, the new client is allowed to access the contact key in the HSM-based Key Vault using the secure channel established with the client identity key and the HSM key.</span></p> <h2>Privacy-preserving contacts storage at WhatsApp scale</h2> <p><span style="font-weight: 400;">IPLS is a new system that deters unauthorized access to sensitive data by effectively coupling any data access to publicly auditable identity key changes published to WhatsApp’s key transparency infrastructure. 
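</span></p> <p><span style="font-weight: 400;">The gating logic of that verification step can be modeled in miniature: the vault releases a stored contact key only when the requester’s identity key matches the directory’s current record for that phone number. The Python sketch below is a toy model; it omits the secure channel, the HSM boundary, and the cryptographic proof of private-key possession, and all identifiers are placeholders.</span></p>

```python
import secrets

class ToyKeyVault:
    """Toy model of the HSM-based Key Vault's gating logic: a contact
    encryption key is released only to a client whose identity key
    matches the directory's current record for that phone number."""

    def __init__(self, directory):
        self._directory = directory  # phone number -> published identity key
        self._stored = {}            # phone number -> contact encryption key

    def store(self, phone_number, identity_key, contact_key):
        if self._directory.get(phone_number) != identity_key:
            raise PermissionError("identity key does not match directory")
        self._stored[phone_number] = contact_key

    def retrieve(self, phone_number, identity_key):
        if self._directory.get(phone_number) != identity_key:
            raise PermissionError("identity key does not match directory")
        return self._stored[phone_number]

# Example flow: a client generates a fresh contact key, deposits it,
# and a new device holding the same published identity restores it.
directory = {"+15551230001": b"identity-key-a"}
vault = ToyKeyVault(directory)
contact_key = secrets.token_bytes(32)
vault.store("+15551230001", b"identity-key-a", contact_key)
```

<p><span style="font-weight: 400;">In this toy model, a requester presenting a stale or stolen identity key that no longer matches the directory is simply refused, which mirrors the role the AKD check plays in the real restore path.</span></p> <p><span style="font-weight: 400;">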
This approach is similar to how a QR code scanning technology can be used to detect a public key compromise in an </span><a href="https://faq.whatsapp.com/820124435853543" target="_blank" rel="noopener"><span style="font-weight: 400;">end-to-end encrypted messaging</span></a><span style="font-weight: 400;"> system.</span></p> <p><span style="font-weight: 400;">WhatsApp’s new approach on contacts will give users more ways to easily manage contacts across devices and accounts and store them securely without losing them if they change phones or reinstall WhatsApp. We’re excited about how IPLS has helped enable this new feature and will help ensure WhatsApp contacts are encrypted and can easily move with users when they get a new phone. </span></p> <p>The post <a rel="nofollow" href="https://engineering.fb.com/2024/10/22/security/ipls-privacy-preserving-storage-for-your-whatsapp-contacts/">IPLS: Privacy-preserving storage for your WhatsApp contacts</a> appeared first on <a rel="nofollow" href="https://engineering.fb.com">Engineering at Meta</a>.</p> ]]></content:encoded> <post-id xmlns="com-wordpress:feed-additions:1">21812</post-id> </item> <item> <title>OCP Summit 2024: The open future of networking hardware for AI</title> <link>https://engineering.fb.com/2024/10/15/data-infrastructure/open-future-networking-hardware-ai-ocp-2024-meta/</link> <dc:creator><![CDATA[]]></dc:creator> <pubDate>Tue, 15 Oct 2024 17:06:46 +0000</pubDate> <category><![CDATA[Data Center Engineering]]></category> <category><![CDATA[Data Infrastructure]]></category> <category><![CDATA[DevInfra]]></category> <category><![CDATA[ML Applications]]></category> <category><![CDATA[Networking & Traffic]]></category> <category><![CDATA[Open Source]]></category> <category><![CDATA[Production Engineering]]></category> <guid isPermaLink="false">https://engineering.fb.com/?p=21834</guid> <description><![CDATA[<p>At Open Compute Project Summit (OCP) 2024, we’re sharing details about our next-generation network 
fabric for our AI training clusters. We’ve expanded our network hardware portfolio and are contributing two new disaggregated network fabrics and a new NIC to OCP. We look forward to continued collaboration with OCP to open designs for racks, servers, storage [...]</p> <p><a class="btn btn-secondary understrap-read-more-link" href="https://engineering.fb.com/2024/10/15/data-infrastructure/open-future-networking-hardware-ai-ocp-2024-meta/">Read More...</a></p> <p>The post <a rel="nofollow" href="https://engineering.fb.com/2024/10/15/data-infrastructure/open-future-networking-hardware-ai-ocp-2024-meta/">OCP Summit 2024: The open future of networking hardware for AI</a> appeared first on <a rel="nofollow" href="https://engineering.fb.com">Engineering at Meta</a>.</p> ]]></description> <content:encoded><![CDATA[<ul> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">At Open Compute Project Summit (OCP) 2024, we’re sharing details about our next-generation network fabric for our AI training clusters.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">We’ve expanded our network hardware portfolio and are contributing two new disaggregated network fabrics and a new NIC to OCP.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">We look forward to continued collaboration with OCP to open designs for racks, servers, storage boxes, and motherboards to benefit companies of all sizes across the industry.</span></li> </ul> <p><span style="font-weight: 400;">At Meta, we believe that open hardware drives innovation. In today’s world, where more and more data center infrastructure is being devoted to supporting new and emerging AI technologies, open hardware takes on an important role in assisting with disaggregation. By breaking down traditional data center technologies into their core components we can build new systems that are more flexible, scalable, and efficient. 
</span></p> <p><span style="font-weight: 400;">Since helping found OCP in 2011, we’ve shared our data center and component designs, and open-sourced our network orchestration software to spark new ideas both in our own data centers and across the industry. Those ideas have made Meta’s data centers</span> <a href="https://sustainability.atmeta.com/2024-sustainability-report/" target="_blank" rel="noopener"><span style="font-weight: 400;">among the most sustainable and efficient in the world</span></a><span style="font-weight: 400;">. Now, through OCP, we’re bringing new open advanced network technologies to our data centers, and the wider industry, for advanced AI applications.</span></p> <p><span style="font-weight: 400;">We’re announcing two new milestones for our data centers: Our next-generation network fabric for AI, and a new portfolio of network hardware that we’ve developed in close partnership with multiple vendors.</span></p> <figure id="attachment_21877" aria-describedby="caption-attachment-21877" style="width: 960px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-21877" src="https://engineering.fb.com/wp-content/uploads/2024/10/OCP-2024-DSF-Meta-1.png?w=960" alt="" width="960" height="540" srcset="https://engineering.fb.com/wp-content/uploads/2024/10/OCP-2024-DSF-Meta-1.png 960w, https://engineering.fb.com/wp-content/uploads/2024/10/OCP-2024-DSF-Meta-1.png?resize=580,326 580w, https://engineering.fb.com/wp-content/uploads/2024/10/OCP-2024-DSF-Meta-1.png?resize=916,515 916w, https://engineering.fb.com/wp-content/uploads/2024/10/OCP-2024-DSF-Meta-1.png?resize=768,432 768w, https://engineering.fb.com/wp-content/uploads/2024/10/OCP-2024-DSF-Meta-1.png?resize=96,54 96w, https://engineering.fb.com/wp-content/uploads/2024/10/OCP-2024-DSF-Meta-1.png?resize=192,108 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /><figcaption id="caption-attachment-21877" class="wp-caption-text">Disaggregated network fabrics 
offer significant advantages in scalability over modular-chassis fabric switches.</figcaption></figure> <h2><span style="font-weight: 400;">DSF: Scheduled fabric that is disaggregated and open </span></h2> <p><span style="font-weight: 400;">Network performance and availability play an important role in extracting the best performance from our</span> <a href="https://engineering.fb.com/2024/03/12/data-center-engineering/building-metas-genai-infrastructure/" target="_blank" rel="noopener"><span style="font-weight: 400;">AI training clusters</span></a><span style="font-weight: 400;">. That’s why we’ve continued to push for disaggregation in the backend network fabrics for our AI clusters. Over the past year, we have developed a Disaggregated Scheduled Fabric (DSF) for our next-generation AI clusters, so we can build open, vendor-agnostic systems with interchangeable building blocks from vendors across the industry. DSF-based fabrics allow us to build large, non-blocking fabrics to support high-bandwidth AI clusters.</span></p> <p><span style="font-weight: 400;">DSF extends our network disaggregation to VoQ-based switch systems that are powered by the open</span> <a href="https://github.com/opencomputeproject/SAI" target="_blank" rel="noopener"><span style="font-weight: 400;">OCP-SAI</span></a><span style="font-weight: 400;"> standard and</span> <a href="https://engineering.fb.com/2018/09/04/data-infrastructure/research-in-brief-building-switch-software-at-scale-and-in-the-open/" target="_blank" rel="noopener"><span style="font-weight: 400;">FBOSS</span></a><span style="font-weight: 400;">, Meta’s own network operating system for controlling network switches.
VoQ-based traffic scheduling ensures proactive congestion avoidance in the fabric rather than reactive congestion signaling and reaction.</span></p> <p><span style="font-weight: 400;">The DSF fabric supports an open and standard Ethernet-based RoCE interface to endpoints and accelerators across several xPUs and NICs, including Meta’s </span><a href="https://ai.meta.com/blog/next-generation-meta-training-inference-accelerator-AI-MTIA/"><span style="font-weight: 400;">MTIA</span></a><span style="font-weight: 400;"> as well as from several vendors. </span></p> <h2><span style="font-weight: 400;">DSF platforms for next-generation AI fabrics </span></h2> <h3><span style="font-weight: 400;">Arista 7700R4 series</span></h3> <p><span style="font-weight: 400;">The DSF platforms, Arista 7700R4 series,  consist of dedicated leaf and spine systems that are combined to create a large, distributed switch. As a distributed system, DSF is designed to support high scale AI clusters.</span></p> <p><img loading="lazy" decoding="async" class="size-large wp-image-21878 aligncenter" src="https://engineering.fb.com/wp-content/uploads/2024/10/7700R4C-38PE-e1729011213805.png?w=476" alt="" width="476" height="267" srcset="https://engineering.fb.com/wp-content/uploads/2024/10/7700R4C-38PE-e1729011213805.png 476w, https://engineering.fb.com/wp-content/uploads/2024/10/7700R4C-38PE-e1729011213805.png?resize=96,54 96w, https://engineering.fb.com/wp-content/uploads/2024/10/7700R4C-38PE-e1729011213805.png?resize=192,108 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /></p> <p><span style="font-weight: 400;">7700R4C-38PE: DSF Leaf Switch</span></p> <ul> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">DSF Distributed Leaf Switch (Broadcom Jericho3-AI based)</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">18 x 800GE (36 x 400GE) OSFP800 host ports</span></li> <li style="font-weight: 400;" aria-level="1"><span 
style="font-weight: 400;">20 x 800Gbps (40 x 400Gbps) fabric ports</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">14.4Tbps of wirespeed performance with 16GB of buffers</span></li> </ul> <p><img loading="lazy" decoding="async" class="size-large wp-image-21879 aligncenter" src="https://engineering.fb.com/wp-content/uploads/2024/10/7720R4-128PE-e1729011256820.png?w=597" alt="" width="597" height="335" srcset="https://engineering.fb.com/wp-content/uploads/2024/10/7720R4-128PE-e1729011256820.png 597w, https://engineering.fb.com/wp-content/uploads/2024/10/7720R4-128PE-e1729011256820.png?resize=580,326 580w, https://engineering.fb.com/wp-content/uploads/2024/10/7720R4-128PE-e1729011256820.png?resize=96,54 96w, https://engineering.fb.com/wp-content/uploads/2024/10/7720R4-128PE-e1729011256820.png?resize=192,108 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /></p> <p><span style="font-weight: 400;">7720R4-128PE: DSF Spine Switch</span></p> <ul> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">DSF Distributed Spine Switch (Broadcom Ramon3 based)</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Accelerated compute optimized pipeline</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">128 x 800Gbps (256 x 400Gbps) fabric ports</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">102.4Tbps of wirespeed performance</span></li> </ul> <h2><span style="font-weight: 400;">51T switches for  next-generation 400G/800G fabrics</span></h2> <figure id="attachment_21880" aria-describedby="caption-attachment-21880" style="width: 600px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-21880" src="https://engineering.fb.com/wp-content/uploads/2024/10/Minipack3-e1729010564784.png?w=600" alt="" width="600" height="401" 
srcset="https://engineering.fb.com/wp-content/uploads/2024/10/Minipack3-e1729010564784.png 600w, https://engineering.fb.com/wp-content/uploads/2024/10/Minipack3-e1729010564784.png?resize=96,64 96w, https://engineering.fb.com/wp-content/uploads/2024/10/Minipack3-e1729010564784.png?resize=192,128 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /><figcaption id="caption-attachment-21880" class="wp-caption-text">Minipack3 (Broadcom Tomahawk5 based, designed by Meta and manufactured by Celestica) 51.2T switch.</figcaption></figure> <p><span style="font-weight: 400;">Meta will deploy two next-generation 400G fabric switches, the Minipack3 (the latest version of </span><a href="https://engineering.fb.com/2019/03/14/data-center-engineering/f16-minipack/" target="_blank" rel="noopener"><span style="font-weight: 400;">Minipack</span></a><span style="font-weight: 400;">, Meta’s own fabric network switch) and the Cisco 8501, both of which are also backward compatible with previous 200G and 400G switches and will support upgrades to 400G and 800G.</span></p> <p><span style="font-weight: 400;">The Minipack3 utilizes Broadcom’s latest Tomahawk5 ASIC while the Cisco 8501 is based on Cisco’s Silicon One G200 ASIC. These high-performance switches transmit up to 51.2 Tbps with 64x OSFP ports, and the design is optimized without the need of retimers to achieve maximum power efficiency. 
They also have significantly reduced power per bit compared with predecessor models.</span></p> <p><span style="font-weight: 400;">Meta will run both the Minipack3 and Cisco 8501 on FBOSS.</span></p> <figure id="attachment_21881" aria-describedby="caption-attachment-21881" style="width: 600px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-21881 size-large" src="https://engineering.fb.com/wp-content/uploads/2024/10/Cisco-8501-e1729010680692.png?w=600" alt="" width="600" height="230" srcset="https://engineering.fb.com/wp-content/uploads/2024/10/Cisco-8501-e1729010680692.png 600w, https://engineering.fb.com/wp-content/uploads/2024/10/Cisco-8501-e1729010680692.png?resize=96,37 96w, https://engineering.fb.com/wp-content/uploads/2024/10/Cisco-8501-e1729010680692.png?resize=192,74 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /><figcaption id="caption-attachment-21881" class="wp-caption-text">Cisco 8501 (Cisco Silicon One G200 based, designed and manufactured by Cisco) 51.2T switch.</figcaption></figure> <h2><span style="font-weight: 400;">Optics: 2x400G FR4 optics for 400G/800G optical interconnection </span></h2> <p>&nbsp;</p> <p><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-21882" src="https://engineering.fb.com/wp-content/uploads/2024/10/400G-FR4--e1729010852824.png?w=372" alt="" width="372" height="209" srcset="https://engineering.fb.com/wp-content/uploads/2024/10/400G-FR4--e1729010852824.png 372w, https://engineering.fb.com/wp-content/uploads/2024/10/400G-FR4--e1729010852824.png?resize=96,54 96w, https://engineering.fb.com/wp-content/uploads/2024/10/400G-FR4--e1729010852824.png?resize=192,108 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /></p> <p><span style="font-weight: 400;">Meta&#8217;s data center fabrics have evolved from 200 Gbps/400 Gbps to 400 Gbps/800 Gbps and </span><span style="font-weight: 400;">we’ve already deployed 2x400G optics in our data centers</span><span 
style="font-weight: 400;">.</span></p> <h2><span style="font-weight: 400;">Evolving FBOSS and SAI for DSF</span></h2> <p><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-21883" src="https://engineering.fb.com/wp-content/uploads/2024/10/SAI-FBOSS-logo.png?w=456" alt="" width="456" height="168" srcset="https://engineering.fb.com/wp-content/uploads/2024/10/SAI-FBOSS-logo.png 456w, https://engineering.fb.com/wp-content/uploads/2024/10/SAI-FBOSS-logo.png?resize=96,35 96w, https://engineering.fb.com/wp-content/uploads/2024/10/SAI-FBOSS-logo.png?resize=192,71 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /></p> <p><span style="font-weight: 400;">We continue to embrace OCP-SAI to onboard the new network fabrics, switch hardware platforms, and optical transceivers to FBOSS. We have collaborated with vendors, and the OCP community, to evolve SAI. It now supports new features and concepts like DSF and other enhanced routing schemes.</span></p> <p><span style="font-weight: 400;">Developers and engineers from all over the world can work with this open hardware and contribute their own software that they, in turn, can use themselves and share with the wider industry.</span></p> <h2><span style="font-weight: 400;">FBNIC: A multi-host foundational NIC designed by Meta</span></h2> <p><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-21884" src="https://engineering.fb.com/wp-content/uploads/2024/10/FBNIC-e1729010986979.png?w=600" alt="" width="600" height="280" srcset="https://engineering.fb.com/wp-content/uploads/2024/10/FBNIC-e1729010986979.png 600w, https://engineering.fb.com/wp-content/uploads/2024/10/FBNIC-e1729010986979.png?resize=96,45 96w, https://engineering.fb.com/wp-content/uploads/2024/10/FBNIC-e1729010986979.png?resize=192,90 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /></p> <p><span style="font-weight: 400;">We are continuing to design more ASICs, including the ASIC for FBNIC. 
FBNIC is a true multi-host foundational NIC and contains the first of our Meta-designed network ASICs for our server fleet and </span><a href="https://ai.meta.com/blog/next-generation-meta-training-inference-accelerator-AI-MTIA/"><span style="font-weight: 400;">MTIA</span></a><span style="font-weight: 400;"> solutions. It can support up to four hosts with complete datapath isolation for each host. The FBNIC driver has been upstreamed (available from the v6.11 kernel). The NIC module was designed by Marvell and has been contributed to OCP.</span></p> <p><span style="font-weight: 400;">FBNIC’s key features include:</span></p> <ul> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Network interfaces for up to 4&#215;100/4&#215;50/4&#215;25 GE with SerDes support for up to 56G PAM4 per lane.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Up to four independent PCIe Gen5 slices.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Hardware offloads, including LSO and checksum.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Line-rate timestamping (for each host, all the way from the PHY) for PTP.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Header-data split to assist zero-copy.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Compliance with the OCP NIC 3.0, version 1.2.0, design specification.</span></li> </ul> <h2><span style="font-weight: 400;">The future is open</span></h2> <p><span style="font-weight: 400;">Advancing AI means building data center infrastructure that goes beyond scale. It also has to allow for flexibility and perform efficiently and sustainably.
At Meta, we envision a future of AI hardware systems that are not only scalable, but also open and collaborative.</span></p> <p><span style="font-weight: 400;">We encourage anyone who wants to help advance the future of networking hardware for AI to engage with OCP and Meta to help shape the future of AI infrastructure. </span></p> <p>The post <a rel="nofollow" href="https://engineering.fb.com/2024/10/15/data-infrastructure/open-future-networking-hardware-ai-ocp-2024-meta/">OCP Summit 2024: The open future of networking hardware for AI</a> appeared first on <a rel="nofollow" href="https://engineering.fb.com">Engineering at Meta</a>.</p> ]]></content:encoded> <post-id xmlns="com-wordpress:feed-additions:1">21834</post-id> </item> <item> <title>Meta’s open AI hardware vision</title> <link>https://engineering.fb.com/2024/10/15/data-infrastructure/metas-open-ai-hardware-vision/</link> <dc:creator><![CDATA[]]></dc:creator> <pubDate>Tue, 15 Oct 2024 17:00:13 +0000</pubDate> <category><![CDATA[Data Center Engineering]]></category> <category><![CDATA[Data Infrastructure]]></category> <category><![CDATA[DevInfra]]></category> <category><![CDATA[ML Applications]]></category> <category><![CDATA[Networking & Traffic]]></category> <category><![CDATA[Open Source]]></category> <guid isPermaLink="false">https://engineering.fb.com/?p=21832</guid> <description><![CDATA[<p>At the Open Compute Project (OCP) Global Summit 2024, we’re showcasing our latest open AI hardware designs with the OCP community. These innovations include a new AI platform, cutting-edge open rack designs, and advanced network fabrics and components. By sharing our designs, we hope to inspire collaboration and foster innovation.
If you&#8217;re passionate about building [...]</p> <p><a class="btn btn-secondary understrap-read-more-link" href="https://engineering.fb.com/2024/10/15/data-infrastructure/metas-open-ai-hardware-vision/">Read More...</a></p> <p>The post <a rel="nofollow" href="https://engineering.fb.com/2024/10/15/data-infrastructure/metas-open-ai-hardware-vision/">Meta’s open AI hardware vision</a> appeared first on <a rel="nofollow" href="https://engineering.fb.com">Engineering at Meta</a>.</p> ]]></description> <content:encoded><![CDATA[<ul> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">At the Open Compute Project (OCP) Global Summit 2024, we’re showcasing our latest open AI hardware designs with the OCP community.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">These innovations include a new AI platform, cutting-edge open rack designs, and advanced network fabrics and components. </span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">By sharing our designs, we hope to inspire collaboration and foster innovation. If you&#8217;re passionate about building the future of AI, we invite you to engage with us and OCP to help shape the next generation of open hardware for AI.</span></li> </ul> <p><span style="font-weight: 400;">AI has been at the core of the experiences Meta has been delivering to people and businesses for years, including AI modeling innovations to optimize and improve on features like </span><a href="https://ai.meta.com/blog/facebook-feed-improvements-ai-show-more-less/" target="_blank" rel="noopener"><span style="font-weight: 400;">Feed</span></a><span style="font-weight: 400;"> and our </span><a href="https://engineering.fb.com/2024/07/10/data-infrastructure/machine-learning-ml-prediction-robustness-meta/" target="_blank" rel="noopener"><span style="font-weight: 400;">ads system</span></a><span style="font-weight: 400;">. 
As we develop and release new, advanced AI models, we are also driven to advance our infrastructure to support our new and emerging AI workloads.</span></p> <p><span style="font-weight: 400;">For example, </span><a href="https://ai.meta.com/blog/meta-llama-3-1/" target="_blank" rel="noopener"><span style="font-weight: 400;">Llama 3.1 405B</span></a><span style="font-weight: 400;">, Meta’s largest model, is a dense transformer with 405B parameters and a context window of up to 128k tokens. To train a large language model (LLM) of this magnitude, with over 15 trillion tokens, we had to make substantial optimizations to our entire training stack. This effort pushed our infrastructure to operate across more than 16,000 NVIDIA H100 GPUs, making Llama 3.1 405B the first model in the Llama series to be trained at such a massive scale. </span></p> <p><span style="font-weight: 400;">But things have accelerated. We’ve rapidly scaled up our training clusters to support our AI workloads. Today, we’re training our models on two</span> <a href="https://engineering.fb.com/2024/03/12/data-center-engineering/building-metas-genai-infrastructure/" target="_blank" rel="noopener"><span style="font-weight: 400;">24K-GPU clusters</span></a><span style="font-weight: 400;">.</span></p> <p><span style="font-weight: 400;">We don’t expect this upward trajectory for AI clusters to slow down any time soon. In fact, we expect the amount of compute needed for AI training will grow significantly from where we are today.</span></p> <p><span style="font-weight: 400;">Building AI clusters requires more than just GPUs. Networking and bandwidth play an important role in ensuring the clusters&#8217; performance. Our systems consist of a tightly integrated HPC compute system and an isolated high-bandwidth compute network that connects all our GPUs and domain-specific accelerators. 
This design is necessary to meet our injection-bandwidth needs and to provide the bisection bandwidth our AI workloads require.</span></p> <p><span style="font-weight: 400;">In the next few years, we anticipate greater injection bandwidth on the order of a terabyte per second, per accelerator, with equal normalized bisection bandwidth. This represents growth of more than an order of magnitude compared to today&#8217;s networks!</span></p> <p><span style="font-weight: 400;">To support this growth, we need a high-performance, multi-tier, non-blocking network fabric that can utilize modern congestion control to behave predictably under heavy load. This will enable us to fully leverage the power of our AI clusters and ensure they continue to perform optimally as we push the boundaries of what is possible with AI.</span></p> <p><span style="font-weight: 400;">Scaling AI at this speed requires open hardware solutions. Developing new architectures, network fabrics, and system designs is most efficient and impactful when we build them on principles of openness.
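</span></p> <p><span style="font-weight: 400;">As a rough illustration of the injection and bisection bandwidth figures above, the following Python sketch estimates aggregate fabric bandwidth for a large training cluster. The cluster size and per-accelerator rate are illustrative assumptions, not actual deployment numbers.</span></p>

```python
# Back-of-the-envelope estimate of aggregate AI-cluster fabric bandwidth.
# All constants here are illustrative assumptions, not real deployment figures.

INJECTION_TB_PER_S = 1.0   # anticipated per-accelerator injection bandwidth (TB/s)
NUM_ACCELERATORS = 24_000  # e.g., a hypothetical 24K-accelerator training cluster

# Aggregate injection bandwidth: every accelerator can push traffic into the
# fabric at full rate simultaneously.
aggregate_tb_per_s = INJECTION_TB_PER_S * NUM_ACCELERATORS

# "Equal normalized bisection bandwidth" means the fabric is non-blocking for
# any half/half split of the cluster: one half can send at full injection rate
# to the other half, so the cut must carry half the aggregate injection rate.
bisection_tb_per_s = aggregate_tb_per_s / 2

print(f"Aggregate injection bandwidth: {aggregate_tb_per_s:,.0f} TB/s")
print(f"Bisection bandwidth across any half/half cut: {bisection_tb_per_s:,.0f} TB/s")
```

<p><span style="font-weight: 400;">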
By investing in open hardware, we unlock AI’s full potential and propel ongoing innovation in the field.</span></p> <h2><span style="font-weight: 400;">Introducing Catalina: Open Architecture for AI Infra</span></h2> <figure id="attachment_21841" aria-describedby="caption-attachment-21841" style="width: 456px" class="wp-caption alignleft"><img loading="lazy" decoding="async" class=" wp-image-21841" src="https://engineering.fb.com/wp-content/uploads/2050/05/Catalina-Front-Back-2.png?w=683" alt="" width="456" height="683" srcset="https://engineering.fb.com/wp-content/uploads/2050/05/Catalina-Front-Back-2.png 720w, https://engineering.fb.com/wp-content/uploads/2050/05/Catalina-Front-Back-2.png?resize=611,916 611w, https://engineering.fb.com/wp-content/uploads/2050/05/Catalina-Front-Back-2.png?resize=683,1024 683w, https://engineering.fb.com/wp-content/uploads/2050/05/Catalina-Front-Back-2.png?resize=96,144 96w, https://engineering.fb.com/wp-content/uploads/2050/05/Catalina-Front-Back-2.png?resize=192,288 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /><figcaption id="caption-attachment-21841" class="wp-caption-text">Catalina front view (left) and rear view (right).</figcaption></figure> <p><span style="font-weight: 400;">Today, we announced the upcoming release of Catalina, our new high-powered rack designed for AI workloads, to the OCP community. Catalina is based on the <a href="https://nvidianews.nvidia.com/news/nvidia-contributes-blackwell-platform-design-to-open-hardware-ecosystem-accelerating-ai-infrastructure-innovation" target="_blank" rel="noopener">NVIDIA Blackwell platform full rack-scale solution</a>, with a focus on modularity and flexibility. It is built to support the latest NVIDIA GB200 Grace Blackwell Superchip, ensuring it meets the growing demands of modern AI infrastructure. </span></p> <p><span style="font-weight: 400;">The growing power demands of GPUs mean open rack solutions need to support higher power capability. 
With Catalina we’re introducing the Orv3, a high-power rack (HPR) capable of supporting up to 140kW.</span></p> <p><span style="font-weight: 400;">The full solution is liquid cooled and consists of a power shelf that supports a compute tray, switch tray, the Orv3 HPR, the</span> <a href="https://engineering.fb.com/2021/11/09/data-center-engineering/ocp-summit-2021/" target="_blank" rel="noopener"><span style="font-weight: 400;">Wedge 400</span></a><span style="font-weight: 400;"> fabric switch, a management switch, battery backup unit, and a rack management controller.</span></p> <p><span style="font-weight: 400;">We aim for Catalina’s modular design to empower others to customize the rack to meet their specific AI workloads while leveraging both existing and emerging industry standards.</span></p> <h2><span style="font-weight: 400;">The Grand Teton Platform now supports AMD accelerators</span></h2> <p><img loading="lazy" decoding="async" class="aligncenter wp-image-21858" src="https://engineering.fb.com/wp-content/uploads/2024/10/Grand-Teton-AMD-MI300X-Open-small.png?w=916" alt="" width="600" height="436" srcset="https://engineering.fb.com/wp-content/uploads/2024/10/Grand-Teton-AMD-MI300X-Open-small.png 1109w, https://engineering.fb.com/wp-content/uploads/2024/10/Grand-Teton-AMD-MI300X-Open-small.png?resize=916,665 916w, https://engineering.fb.com/wp-content/uploads/2024/10/Grand-Teton-AMD-MI300X-Open-small.png?resize=768,557 768w, https://engineering.fb.com/wp-content/uploads/2024/10/Grand-Teton-AMD-MI300X-Open-small.png?resize=1024,743 1024w, https://engineering.fb.com/wp-content/uploads/2024/10/Grand-Teton-AMD-MI300X-Open-small.png?resize=96,70 96w, https://engineering.fb.com/wp-content/uploads/2024/10/Grand-Teton-AMD-MI300X-Open-small.png?resize=192,139 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /></p> <p><span style="font-weight: 400;">In 2022, we announced </span><a href="https://engineering.fb.com/2022/10/18/open-source/ocp-summit-2022-grand-teton/" 
target="_blank" rel="noopener"><span style="font-weight: 400;">Grand Teton</span></a><span style="font-weight: 400;">, our next-generation AI platform (the follow-up to our Zion-EX platform). Grand Teton is designed with compute capacity to support the demands of memory-bandwidth-bound workloads, such as Meta’s <a href="https://ai.facebook.com/blog/dlrm-an-advanced-open-source-deep-learning-recommendation-model/" target="_blank" rel="noopener">deep learning recommendation models (</a></span><span style="font-weight: 400;">DLRMs</span><span style="font-weight: 400;">), as well as compute-bound workloads like content understanding.</span></p> <p><span style="font-weight: 400;">Now, we have expanded the Grand Teton platform to support the AMD Instinct MI300X and will be contributing this new version to OCP. Like its predecessors, this new version of Grand Teton</span><span style="font-weight: 400;"> features a single monolithic system design with fully integrated power, control, compute, and fabric interfaces. This high level of integration simplifies system deployment, enabling rapid scaling with increased reliability for large-scale AI inference workloads.</span></p> <p><span style="font-weight: 400;">In addition to supporting a range of accelerator designs, now including the AMD Instinct MI300x, Grand Teton offers significantly greater compute capacity, allowing faster convergence on a larger set of weights. 
This is complemented by expanded memory to store and run larger models locally, along with increased network bandwidth to scale up training cluster sizes efficiently.</span></p> <h2><span style="font-weight: 400;">Open Disaggregated Scheduled Fabric </span></h2> <p><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-21860" src="https://engineering.fb.com/wp-content/uploads/2024/10/OCP-2024-DSF-Meta.png?w=1024" alt="" width="1024" height="508" srcset="https://engineering.fb.com/wp-content/uploads/2024/10/OCP-2024-DSF-Meta.png 1871w, https://engineering.fb.com/wp-content/uploads/2024/10/OCP-2024-DSF-Meta.png?resize=916,454 916w, https://engineering.fb.com/wp-content/uploads/2024/10/OCP-2024-DSF-Meta.png?resize=768,381 768w, https://engineering.fb.com/wp-content/uploads/2024/10/OCP-2024-DSF-Meta.png?resize=1024,508 1024w, https://engineering.fb.com/wp-content/uploads/2024/10/OCP-2024-DSF-Meta.png?resize=1536,762 1536w, https://engineering.fb.com/wp-content/uploads/2024/10/OCP-2024-DSF-Meta.png?resize=96,48 96w, https://engineering.fb.com/wp-content/uploads/2024/10/OCP-2024-DSF-Meta.png?resize=192,95 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /></p> <p><span style="font-weight: 400;">Developing an open, vendor-agnostic networking backend will play an important role as we continue to push the performance of our AI training clusters. Disaggregating our network allows us to work with vendors from across the industry to design systems that are innovative as well as scalable, flexible, and efficient.</span></p> <p><span style="font-weight: 400;">Our new <a href="https://engineering.fb.com/2024/10/15/data-infrastructure/open-future-networking-hardware-ai-ocp-2024-meta/" target="_blank" rel="noopener">Disaggregated Scheduled Fabric (DSF)</a> for our next-generation AI clusters offers several advantages over our existing switches.
By opening up our network fabric, we can overcome limitations in scale, component supply options, and power density. DSF is powered by the open</span><a href="https://github.com/opencomputeproject/SAI" target="_blank" rel="noopener"><span style="font-weight: 400;"> OCP-SAI</span></a><span style="font-weight: 400;"> standard and</span><a href="https://engineering.fb.com/2018/09/04/data-infrastructure/research-in-brief-building-switch-software-at-scale-and-in-the-open/" target="_blank" rel="noopener"><span style="font-weight: 400;"> FBOSS</span></a><span style="font-weight: 400;">, Meta’s own network operating system for controlling network switches. It also supports an open and standard Ethernet-based RoCE interface to endpoints and accelerators across several GPUs and NICs from several different vendors, including our partners at NVIDIA, Broadcom, and AMD.</span></p> <p><span style="font-weight: 400;">In addition to DSF, we have also developed and built new 51T fabric switches based on Broadcom and Cisco ASICs. Finally, we are sharing our new FBNIC, a NIC module that contains our first Meta-designed network ASIC, built to meet the growing needs of our AI infrastructure.</span></p> <h2><span style="font-weight: 400;">Meta and Microsoft: Driving Open Innovation Together</span></h2> <p><span style="font-weight: 400;">Meta and Microsoft have a long-standing partnership within OCP, beginning with the development of the </span><a href="https://www.opencompute.org/documents/switch-abstraction-interface-ocp-specification-v0-2-pdf" target="_blank" rel="noopener"><span style="font-weight: 400;">Switch Abstraction Interface (SAI)</span></a><span style="font-weight: 400;"> for data centers in 2018.
Over the years, we’ve contributed together to key initiatives such as the </span><a href="https://www.opencompute.org/blog/new-open-accelerator-infrastructure-oai-sub-project-to-launch-within-the-ocp-server-project" target="_blank" rel="noopener"><span style="font-weight: 400;">Open Accelerator Module (OAM)</span></a><span style="font-weight: 400;"> standard and SSD standardization, showcasing our shared commitment to advancing open innovation.</span></p> <p><span style="font-weight: 400;">Our current </span><a href="https://azure.microsoft.com/en-us/blog/accelerating-industry-wide-innovations-in-datacenter-infrastructure-and-security/" target="_blank" rel="noopener"><span style="font-weight: 400;">collaboration focuses on Mount Diablo</span></a><span style="font-weight: 400;">, a new disaggregated power rack. It’s a cutting-edge solution featuring a scalable 400 VDC unit that improves efficiency. This innovative design allows more AI accelerators per IT rack, significantly advancing AI infrastructure. We’re excited to continue our collaboration through this contribution.</span></p> <h2><span style="font-weight: 400;">The open future of AI infra</span></h2> <p><a href="https://about.fb.com/news/2024/07/open-source-ai-is-the-path-forward/" target="_blank" rel="noopener"><span style="font-weight: 400;">Meta is committed to open source AI</span></a><span style="font-weight: 400;">. We believe that open source will put the benefits and opportunities of AI into the hands of people all over the world. </span></p> <p><span style="font-weight: 400;">AI won’t realize its full potential without collaboration. We need open software frameworks to drive model innovation, ensure portability, and promote transparency in AI development.
We must also prioritize open and standardized models so we can leverage collective expertise, make AI more accessible, and work towards minimizing biases in our systems.</span></p> <p><span style="font-weight: 400;">Just as important, we also need open AI hardware systems. These systems are necessary for delivering the kind of high-performance, cost-effective, and adaptable infrastructure that AI advancement requires.</span></p> <p><span style="font-weight: 400;">We encourage anyone who wants to help advance the future of AI hardware systems to engage with the OCP community. By addressing AI’s infrastructure needs together, we can unlock the true promise of open AI for everyone.</span></p> <p>The post <a rel="nofollow" href="https://engineering.fb.com/2024/10/15/data-infrastructure/metas-open-ai-hardware-vision/">Meta’s open AI hardware vision</a> appeared first on <a rel="nofollow" href="https://engineering.fb.com">Engineering at Meta</a>.</p> ]]></content:encoded> <post-id xmlns="com-wordpress:feed-additions:1">21832</post-id> </item> <item> <title>How open source AI can improve population estimates, sustainable energy, and the delivery of climate change interventions</title> <link>https://engineering.fb.com/2024/10/03/ml-applications/open-source-ai-population-maps-meta/</link> <dc:creator><![CDATA[]]></dc:creator> <pubDate>Thu, 03 Oct 2024 16:00:14 +0000</pubDate> <category><![CDATA[AI Research]]></category> <category><![CDATA[ML Applications]]></category> <category><![CDATA[Open Source]]></category> <guid isPermaLink="false">https://engineering.fb.com/?p=21793</guid> <description><![CDATA[<p>Data for Good at Meta is open-sourcing the data used to train our AI-powered population maps. We’re hoping that researchers and other organizations around the world will be able to leverage these tools to assist with a wide range of projects including those on climate adaptation, public health and disaster response. 
The dataset and code [...]</p> <p><a class="btn btn-secondary understrap-read-more-link" href="https://engineering.fb.com/2024/10/03/ml-applications/open-source-ai-population-maps-meta/">Read More...</a></p> <p>The post <a rel="nofollow" href="https://engineering.fb.com/2024/10/03/ml-applications/open-source-ai-population-maps-meta/">How open source AI can improve population estimates, sustainable energy, and the delivery of climate change interventions</a> appeared first on <a rel="nofollow" href="https://engineering.fb.com">Engineering at Meta</a>.</p> ]]></description> <content:encoded><![CDATA[<ul> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Data for Good at Meta is open-sourcing the data used to train our AI-powered population maps.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">We’re hoping that researchers and other organizations around the world will be able to leverage these tools to assist with a wide range of projects including those on climate adaptation, public health and disaster response.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">The dataset and code are available now on </span><a href="https://github.com/facebookresearch/HighResolutionSettlementLayer" target="_blank" rel="noopener"><span style="font-weight: 400;">GitHub</span></a><span style="font-weight: 400;">.</span></li> </ul> <p><span style="font-weight: 400;">To support the ongoing work of researchers, governments, nonprofits, and humanitarians around the world, the Data for Good at Meta program is open-sourcing the first set of training data and sample code used to construct </span><a href="https://dataforgood.facebook.com/dfg/tools/high-resolution-population-density-maps" target="_blank" rel="noopener"><span style="font-weight: 400;">Meta’s AI-powered population maps.</span></a></p> <p><span style="font-weight: 400;">As the world looks towards the increasing threat of 
climate change, Meta’s AI-powered population maps, and the data behind them, offer significant opportunities to direct investments in disaster preparedness through improved estimation of</span> <a href="https://www.nature.com/articles/s41467-019-09282-y" target="_blank" rel="noopener"><span style="font-weight: 400;">global flood exposure</span></a><span style="font-weight: 400;"> and in</span> <a href="https://www.cambridge.org/core/journals/global-sustainability/article/upscaling-urban-data-science-for-global-climate-solutions/D2D622B43CD50A9B2FD5DF855BCC0F18?fbclid=IwY2xjawEnQjVleHRuA2FlbQIxMAABHbTiWUPUhcbX0JBxfPLVwtg9fd6wyYO98jy1N0MatP_Fse1Sv7078P2pYg_aem_Y5QcbSZqolPCKpdKynnlfQ" target="_blank" rel="noopener"><span style="font-weight: 400;">climate adaptation planning</span></a><span style="font-weight: 400;">.</span></p> <p><span style="font-weight: 400;">By open sourcing these tools, we hope that other researchers can generate new insights for speeding the delivery of sustainable energy and climate resilient infrastructure around the world.</span></p> <h2>Why we need better population maps</h2> <p><span style="font-weight: 400;">Accurate estimates of population are taken for granted in many countries. Governments in advanced economies can rely on a variety of sources including tax records or census datasets to better estimate their population and make informed decisions on the delivery of services. However, in other parts of the world, accurate population data is hard to come by. In certain low- and middle-income countries, the most recent census may have been conducted decades ago or lack accurate representation of vulnerable populations. Furthermore, estimates between censuses are often fraught with inaccuracies and remote populations may be entirely missing from official sources. As a result, uncounted communities may live outside the reach of critical programs. 
</span></p> <p><span style="font-weight: 400;">To address this challenge, Meta began </span><a href="https://ai.meta.com/research/publications/mapping-the-world-population-one-building-at-a-time/" target="_blank" rel="noopener"><span style="font-weight: 400;">the process of mapping the world’s population using artificial intelligence and satellite imagery</span></a><span style="font-weight: 400;"> in 2017. Alongside other leading population mapping institutions like </span><a href="https://people.climate.columbia.edu/units/view/5" target="_blank" rel="noopener"><span style="font-weight: 400;">Columbia University’s Center for International Earth Science Information Network</span></a><span style="font-weight: 400;"> (CIESIN) and </span><a href="https://www.worldpop.org/" target="_blank" rel="noopener"><span style="font-weight: 400;">WorldPop at the University of Southampton</span></a><span style="font-weight: 400;">, we have </span><a href="https://data.humdata.org/organization/meta" target="_blank" rel="noopener"><span style="font-weight: 400;">openly published hundreds of high resolution population maps and datasets</span></a><span style="font-weight: 400;">. These have been used around the world by governments and nonprofits for social programs ranging from the </span><a href="https://openknowledge.worldbank.org/server/api/core/bitstreams/a155c5ae-cd99-5635-a9de-4b86905f402f/content" target="_blank" rel="noopener"><span style="font-weight: 400;">targeting of COVID-19 interventions</span></a><span style="font-weight: 400;"> to the delivery of clean water. 
As the world’s natural resource and energy demands scale, accurate population estimates also offer significant opportunities to improve sustainability efforts.</span></p> <figure id="attachment_21795" aria-describedby="caption-attachment-21795" style="width: 1024px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-21795" src="https://engineering.fb.com/wp-content/uploads/2024/10/Meta-open-source-population-maps-1.png?w=1024" alt="" width="1024" height="791" srcset="https://engineering.fb.com/wp-content/uploads/2024/10/Meta-open-source-population-maps-1.png 1118w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-open-source-population-maps-1.png?resize=916,708 916w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-open-source-population-maps-1.png?resize=768,594 768w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-open-source-population-maps-1.png?resize=1024,791 1024w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-open-source-population-maps-1.png?resize=96,74 96w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-open-source-population-maps-1.png?resize=192,148 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /><figcaption id="caption-attachment-21795" class="wp-caption-text">The World Bank leveraged Meta’s AI-powered population maps to identify potential COVID-19 hotspots in Kinshasa, DRC.</figcaption></figure> <h2>Background on Meta’s AI-powered population maps</h2> <p><span style="font-weight: 400;">Data for Good’s AI-powered population maps estimate the number of people living within 30-meter grid tiles in nearly every country around the world. 
These maps leverage computer vision techniques &#8211; similar to those used to </span><a href="https://about.fb.com/news/2021/01/using-ai-to-improve-photo-descriptions-for-blind-and-visually-impaired-people/" target="_blank" rel="noopener"><span style="font-weight: 400;">identify objects in photos for the visually impaired</span></a><span style="font-weight: 400;"> &#8211; to identify human-made structures in satellite imagery. The outputs of Meta’s AI model are then combined with population stock estimates from </span><span style="font-weight: 400;">CIESIN</span><span style="font-weight: 400;"> to approximate the number of people living in each tile. </span></p> <p><span style="font-weight: 400;">In addition to total population counts, Meta’s population maps also include demographic breakdowns for groups such as the number of children under five, women of reproductive age, youth, and the elderly. </span></p> <p><span style="font-weight: 400;">AI-powered population estimates </span><span style="font-weight: 400;">have been scientifically evaluated to be among the most accurate in the world for mapping population distribution across a variety of geographies and use cases. For example, </span><a href="https://www.nature.com/articles/s41598-022-07720-4" target="_blank" rel="noopener"><span style="font-weight: 400;">this 2022 paper by researchers at the University of Southampton and University of Ghana in </span><i><span style="font-weight: 400;">Nature &#8211; Scientific Reports</span></i> </a><span style="font-weight: 400;">compares various population density estimates for use in mapping flooding risk in West Africa. 
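</span></p> <p><span style="font-weight: 400;">The tile-level estimation described above can be sketched in a few lines: a region’s census population total is distributed across grid tiles in proportion to the structures the vision model detects in each tile. The sketch below is a simplified illustration with hypothetical field names, not Meta’s actual pipeline.</span></p>

```typescript
// Simplified sketch: disaggregate a region's census population across
// grid tiles in proportion to detected building counts. Field names and
// the detection step are hypothetical simplifications.
interface Tile {
  id: string;
  buildings: number; // building count detected by the vision model
}

function disaggregate(regionPopulation: number, tiles: Tile[]): Map<string, number> {
  const totalBuildings = tiles.reduce((sum, t) => sum + t.buildings, 0);
  const estimates = new Map<string, number>();
  for (const t of tiles) {
    // A tile with no detected structures receives zero population.
    const share = totalBuildings === 0 ? 0 : t.buildings / totalBuildings;
    estimates.set(t.id, regionPopulation * share);
  }
  return estimates;
}
```

<p><span style="font-weight: 400;">Production pipelines refine this idea with demographic breakdowns and validation against ground truth, but proportional allocation is the core mechanism.</span></p> <p><span style="font-weight: 400;">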
Other studies have investigated a variety of use cases, such as mapping </span><a href="https://link.springer.com/article/10.1007/s11069-023-06283-5" target="_blank" rel="noopener"><span style="font-weight: 400;">landslide risk</span></a><span style="font-weight: 400;"> and supporting </span><a href="https://www.biorxiv.org/content/10.1101/2020.06.18.160101v1.full" target="_blank" rel="noopener"><span style="font-weight: 400;">malaria eradication</span></a><span style="font-weight: 400;">, across a range of countries including </span><a href="https://www.mdpi.com/2306-5729/3/3/33" target="_blank" rel="noopener"><span style="font-weight: 400;">Haiti, Malawi, Madagascar, Nepal, Rwanda, and Thailand</span></a><span style="font-weight: 400;">. </span></p> <h2>Open-sourcing training data for our AI population maps</h2> <p><span style="font-weight: 400;">This initial set of training data consists of almost 10 million human-generated labels, covering over 126 gigabytes of satellite imagery patches, each label indicating whether a building is present. </span><a href="https://resources.maxar.com/data-sheets/imagery-basemaps-data-sheet" target="_blank" rel="noopener"><span style="font-weight: 400;">These labels were created on satellite imagery dating from 2011 &#8211; 2020</span></a><span style="font-weight: 400;">;</span><span style="font-weight: 400;"> however, even labels made on older imagery are useful to train the next generation of machine vision models (like </span><a href="https://ai.meta.com/sam2/" target="_blank" rel="noopener"><span style="font-weight: 400;">Meta’s Segment Anything</span></a><span style="font-weight: 400;">) to more accurately identify buildings in a range of land-cover environments. 
In addition to this first batch, we plan to release additional data and code for computer vision training in the future.</span></p> <p><span style="font-weight: 400;">Open sourcing Meta’s training data and code allows population mapping partners like CIESIN and WorldPop to continue the progress made in the last decade. These tools reduce development costs for research units generating even more accurate population estimates and also allow researchers working on building detection to improve their methods, especially when combined with more recent satellite imagery. Future data released from CIESIN and data collaborations like GRID3 will continue to push the boundaries of spatial resolution and accuracy, building on their work with many African countries to generate, validate, and use core spatial datasets in support of sustainable development. </span></p> <blockquote class="blockquote"><p><i><span style="font-weight: 400;">To better visualize village settlement locations and calculate service coverage, World Vision turned to an innovative dataset developed by Meta&#8217;s Data for Good (D4G) and Columbia University&#8217;s Center for International Earth Science Information Network (CIESIN). 
The resulting High Resolution Settlement Layer (HRSL) has been a game-changer for visualizing the geography of clean water.<br /> </span></i><span style="color: #636c72; font-size: 12.8px;"><i><span style="font-weight: 400;">–A</span></i>llen Hollenbach, Technical Director for World Vision Water and Sanitation</span></p></blockquote> <h2>Applications in sustainable electrification, clean water, and climate change adaptation</h2> <p><span style="font-weight: 400;">Nonprofit organizations and governments around the world have already leveraged Meta’s AI-powered population maps for a range of social impact programs, including </span><a href="https://dataforgood.facebook.com/dfg/resources/world-bank-global-electrification-platform-case-study" target="_blank" rel="noopener"><span style="font-weight: 400;">the World Bank’s</span></a><span style="font-weight: 400;"> rural electrification efforts in Somalia and Benin and similar efforts in Uganda by the </span><a href="https://www.wri.org/update/using-metas-relative-wealth-index-and-high-resolution-population-density-data-help-expand" target="_blank" rel="noopener"><span style="font-weight: 400;">World Resources Institute</span></a><span style="font-weight: 400;">.  
</span></p> <p><a href="https://storymaps.arcgis.com/stories/a73563c0d11b433fa35e0bd10a546087" target="_blank" rel="noopener"><span style="font-weight: 400;">World Vision</span></a><span style="font-weight: 400;"> has also used these datasets to accelerate progress on five-year plans for water and sanitation in places like Rwanda and Zambia, and recently announced </span><a href="https://storymaps.arcgis.com/stories/50e5063b79374c3d924d662ba6f2e863" target="_blank" rel="noopener"><span style="font-weight: 400;">having reached one million additional Rwandans with clean water</span></a><span style="font-weight: 400;"> using insights from these maps to track progress towards universal water coverage.</span></p> <figure id="attachment_21796" aria-describedby="caption-attachment-21796" style="width: 1024px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-21796" src="https://engineering.fb.com/wp-content/uploads/2024/10/Meta-open-source-population-maps-2.png?w=1024" alt="" width="1024" height="683" srcset="https://engineering.fb.com/wp-content/uploads/2024/10/Meta-open-source-population-maps-2.png 1999w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-open-source-population-maps-2.png?resize=916,611 916w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-open-source-population-maps-2.png?resize=768,512 768w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-open-source-population-maps-2.png?resize=1024,683 1024w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-open-source-population-maps-2.png?resize=1536,1024 1536w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-open-source-population-maps-2.png?resize=96,64 96w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-open-source-population-maps-2.png?resize=192,128 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /><figcaption id="caption-attachment-21796" class="wp-caption-text">World Vision used Meta’s 
high resolution population maps to identify the population and associated settlements closest to existing water points and target areas where new water points were needed.</figcaption></figure> <p><span style="font-weight: 400;">Innovation in global population mapping is only possible through the type of collaboration Meta continues to have with Columbia University and WorldPop and a shared commitment to open source enables researchers and governments around the world to participate in this process.</span></p> <p><span style="font-weight: 400;">Please visit the <a href="https://dataforgood.facebook.com/" target="_blank" rel="noopener">Data for Good</a> website for more information about Meta&#8217;s Data for Good program. A</span><span style="font-weight: 400;">nd please visit this blog for more <a href="https://about.fb.com/news/2020/06/privacy-matters-data-for-good/" target="_blank" rel="noopener">information about how we protect user privacy in our tools.</a></span></p> <h2>Acknowledgements</h2> <p><em>We&#8217;d like to thank our external collaborators: Professor Andy Tatem, Director of WorldPop at University of Southampton, UK; and Greg Yetman, Associate Director for Geospatial Applications at CIESIN, Columbia University, and for their partnership and support on this work.</em></p> <p>The post <a rel="nofollow" href="https://engineering.fb.com/2024/10/03/ml-applications/open-source-ai-population-maps-meta/">How open source AI can improve population estimates, sustainable energy, and the delivery of climate change interventions</a> appeared first on <a rel="nofollow" href="https://engineering.fb.com">Engineering at Meta</a>.</p> ]]></content:encoded> <post-id xmlns="com-wordpress:feed-additions:1">21793</post-id> </item> <item> <title>React at Meta Connect 2024</title> <link>https://engineering.fb.com/2024/10/02/android/react-at-meta-connect-2024/</link> <dc:creator><![CDATA[]]></dc:creator> <pubDate>Wed, 02 Oct 2024 16:00:47 +0000</pubDate> 
<category><![CDATA[Android]]></category> <category><![CDATA[iOS]]></category> <category><![CDATA[Open Source]]></category> <category><![CDATA[Virtual Reality]]></category> <category><![CDATA[Web]]></category> <guid isPermaLink="false">https://engineering.fb.com/?p=21703</guid> <description><![CDATA[<p>At Meta, React and React Native are more than just tools; they are integral to our product development and innovation. With over five thousand people at Meta building products and experiences with React every month, these technologies are fundamental to our engineering culture and our ability to quickly build and ship high quality products. In [...]</p> <p><a class="btn btn-secondary understrap-read-more-link" href="https://engineering.fb.com/2024/10/02/android/react-at-meta-connect-2024/">Read More...</a></p> <p>The post <a rel="nofollow" href="https://engineering.fb.com/2024/10/02/android/react-at-meta-connect-2024/">React at Meta Connect 2024</a> appeared first on <a rel="nofollow" href="https://engineering.fb.com">Engineering at Meta</a>.</p> ]]></description> <content:encoded><![CDATA[<p><span style="font-weight: 400;">At Meta, </span><a href="https://react.dev/"><span style="font-weight: 400;">React</span></a><span style="font-weight: 400;"> and </span><a href="https://reactnative.dev/"><span style="font-weight: 400;">React Native</span></a><span style="font-weight: 400;"> are more than just tools; they are integral to our product development and innovation. With over five thousand people at Meta building products and experiences with React every month, these technologies are fundamental to our engineering culture and our ability to quickly build and ship high quality products. 
In this post, we will dive into the development experiences of some of the product teams who leveraged React and React Native to deliver exciting projects showcased at Meta Connect 2024.</span></p> <h2>Instagram and Facebook For Meta Quest</h2> <div style="width: 750px;" class="wp-video"><!--[if lt IE 9]><script>document.createElement('video');</script><![endif]--> <video class="wp-video-shortcode" id="video-21703-1" width="750" height="360" loop="1" autoplay="1" muted="1" playsinline="1" preload="metadata" controls="controls"><source type="video/mp4" src="https://engineering.fb.com/wp-content/uploads/2024/10/RNBlogDemo-compressed.mp4?_=1" /><a href="https://engineering.fb.com/wp-content/uploads/2024/10/RNBlogDemo-compressed.mp4">https://engineering.fb.com/wp-content/uploads/2024/10/RNBlogDemo-compressed.mp4</a></video></div> <p><span style="font-weight: 400;"><br /> At Connect, Mark Zuckerberg shared that we have re-built Instagram and Facebook for mixed reality (MR) on Meta Quest. Our goal was to bring our flagship social experiences to the Meta Quest headset, letting people catch up with their friends and watch Stories and Reels, all while showcasing new possibilities enabled only through MR. </span></p> <p><span style="font-weight: 400;">Building Meta’s social apps from scratch in MR required our teams to thoughtfully leverage the platform capabilities offered by Meta Quest while keeping a tremendously high bar for quality. The teams </span><span style="font-weight: 400;">first had to decide how to build them: reusing the existing Android apps, writing a new native Android app, or using React Native to build from scratch. We wanted to offer a hero experience that looked and felt at home on Meta Quest, taking advantage of the additional input types, gestures, and larger visual surface area. 
Instead of simply porting our mobile social apps, we chose React Native as it enabled our teams to iterate and build quickly with robust animation capabilities, great performance, and a shared platform that powers most of the 2D Meta Quest system apps.</span></p> <p><span style="font-weight: 400;">On Instagram, React Native enabled our teams to build rich animations and novel interactions that embody the brand’s deep focus on quality and delight. For this new app, we introduced seamless transitions of video posts from feed into a full screen view side by side with comments, without dropping a single frame. We enabled the ability to swipe through stacks of photos with the controller joystick or pinching your hands. We also introduced a unique hover animation over interactive elements that smoothly follows your controller movements.</span></p> <p><span style="font-weight: 400;">When building Facebook for Meta Quest, our teams took advantage of the mature code and infrastructure that supports our </span><a href="http://facebook.com"><span style="font-weight: 400;">Facebook.com desktop experience</span></a><span style="font-weight: 400;">. We leveraged code sharing technologies to reuse </span><span style="font-weight: 400;">some of the most complex and robust features from Facebook.com like Newsfeed and commenting. Some of these code sharing technologies include our Meta open source projects like </span><a href="https://stylexjs.com/"><span style="font-weight: 400;">StyleX</span></a><span style="font-weight: 400;"> and </span><a href="https://github.com/facebook/react-strict-dom"><span style="font-weight: 400;">React Strict DOM</span></a><span style="font-weight: 400;">. 
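</span></p> <p><span style="font-weight: 400;">The code sharing idea is simple at its core: platform-independent logic is written once, and each platform contributes only a thin rendering layer. The sketch below is a deliberately simplified, hypothetical illustration of that structure, not the actual StyleX or React Strict DOM APIs.</span></p>

```typescript
// Hypothetical sketch of cross-platform code sharing (not the actual
// StyleX / React Strict DOM APIs): shared business logic lives in one
// module, and each platform plugs in its own thin renderer.

// Shared, platform-independent logic.
function formatLikeCount(likes: number): string {
  return likes >= 1000 ? `${(likes / 1000).toFixed(1)}K likes` : `${likes} likes`;
}

// Thin platform-specific rendering layers reuse the shared logic.
const renderers = {
  web: (likes: number) => `<span>${formatLikeCount(likes)}</span>`,
  quest: (likes: number) => `[panel] ${formatLikeCount(likes)}`,
};

function render(platform: keyof typeof renderers, likes: number): string {
  return renderers[platform](likes);
}
```

<p><span style="font-weight: 400;">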
By sharing code, our teams could spend less time on repetitive business logic and focus more on adding Meta Quest specific interactions and experiences.</span></p> <h2>Meta Horizon mobile app</h2> <p><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-21782" src="https://engineering.fb.com/wp-content/uploads/2024/10/Meta-Horizon-React-Connect-2024-compressed.jpg?w=1024" alt="" width="1024" height="578" srcset="https://engineering.fb.com/wp-content/uploads/2024/10/Meta-Horizon-React-Connect-2024-compressed.jpg 1999w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-Horizon-React-Connect-2024-compressed.jpg?resize=580,326 580w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-Horizon-React-Connect-2024-compressed.jpg?resize=916,517 916w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-Horizon-React-Connect-2024-compressed.jpg?resize=768,434 768w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-Horizon-React-Connect-2024-compressed.jpg?resize=1024,578 1024w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-Horizon-React-Connect-2024-compressed.jpg?resize=1536,868 1536w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-Horizon-React-Connect-2024-compressed.jpg?resize=96,54 96w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-Horizon-React-Connect-2024-compressed.jpg?resize=192,108 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /></p> <p><span style="font-weight: 400;">This year, we also </span><a href="https://www.meta.com/blog/quest/horizon-mobile-app/"><span style="font-weight: 400;">rolled out the new Meta Horizon mobile app</span></a><span style="font-weight: 400;"> – a new look and a new name. We expanded the app to make it easier to socialize and express yourself both in and out of the headset. We added a dedicated tab to easily customize your avatar and express your mood, right from your phone. 
People can also visit Horizon Worlds and complete quests from the app to unlock exclusive avatar styles, items, and emotes.</span></p> <p><span style="font-weight: 400;">We’ve also continued to improve app performance. At Meta, our teams typically look to Facebook Marketplace as a React Native performance benchmark. However, the Meta Horizon app is a standalone app with React Native in the initialization path of the app’s cold start, compared to the Facebook app which initializes React Native when you visit your first React Native surface and not on app start. The performance results our teams delivered with React Native exceeded our original expectations and are on par with Meta’s mobile social apps.</span></p> <p><span style="font-weight: 400;">Our Meta Horizon team worked closely with our React team to profile our application and find opportunities for improvement using Android Systrace, React DevTools, and the new </span><a href="https://www.youtube.com/live/b48Lax2-jOQ?si=OgqKzyw-AAnIUefZ&amp;t=4290"><span style="font-weight: 400;">React Native DevTools</span></a><span style="font-weight: 400;">. </span><span style="font-weight: 400;">The most impactful improvement that our teams made was initiating network queries earlier. 
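</span></p> <p><span style="font-weight: 400;">Starting queries earlier generally means kicking off the network fetch at navigation time rather than at render time, and letting the destination surface consume the already in-flight request. A minimal, hypothetical sketch of that pattern (not Meta’s actual Relay code):</span></p>

```typescript
// Hypothetical sketch of prefetch-on-navigation: the press handler starts
// the fetch, and the destination screen reuses the in-flight promise
// instead of issuing a fresh request when it renders.
type Fetcher<T> = () => Promise<T>;

class PrefetchCache {
  private inFlight = new Map<string, Promise<unknown>>();

  // Called from the navigation button's press handler.
  prefetch<T>(key: string, fetcher: Fetcher<T>): void {
    if (!this.inFlight.has(key)) {
      this.inFlight.set(key, fetcher());
    }
  }

  // Called when the destination surface renders; reuses the request
  // started by prefetch, or falls back to fetching on demand.
  read<T>(key: string, fetcher: Fetcher<T>): Promise<T> {
    const existing = this.inFlight.get(key);
    if (existing) {
      return existing as Promise<T>;
    }
    const fresh = fetcher();
    this.inFlight.set(key, fresh);
    return fresh;
  }
}
```

<p><span style="font-weight: 400;">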
Instead of initiating network requests when a component of the product surface was rendered, </span><a href="https://relay.dev/" target="_blank" rel="noopener"><span style="font-weight: 400;">Relay</span></a><span style="font-weight: 400;">, our GraphQL library, made it easy for our teams to move that network fetch to start when the navigation button from the previous surface was clicked.</span></p> <h2>Meta Horizon Store</h2> <p><img loading="lazy" decoding="async" class="aligncenter size-large wp-image-21783" src="https://engineering.fb.com/wp-content/uploads/2024/10/Meta-Horizon-Store-Quest-React-Connect-2024-crop-compressed.jpg?w=1024" alt="" width="1024" height="688" srcset="https://engineering.fb.com/wp-content/uploads/2024/10/Meta-Horizon-Store-Quest-React-Connect-2024-crop-compressed.jpg 1536w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-Horizon-Store-Quest-React-Connect-2024-crop-compressed.jpg?resize=916,615 916w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-Horizon-Store-Quest-React-Connect-2024-crop-compressed.jpg?resize=768,516 768w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-Horizon-Store-Quest-React-Connect-2024-crop-compressed.jpg?resize=1024,688 1024w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-Horizon-Store-Quest-React-Connect-2024-crop-compressed.jpg?resize=96,65 96w, https://engineering.fb.com/wp-content/uploads/2024/10/Meta-Horizon-Store-Quest-React-Connect-2024-crop-compressed.jpg?resize=192,129 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /></p> <p><span style="font-weight: 400;">We also announced that the Meta Horizon Store is now open for all developers to publish apps, </span><a href="https://developers.meta.com/horizon/blog/building-2d-apps-on-the-meta-horizon-store" target="_blank" rel="noopener"><span style="font-weight: 400;">including 2D apps</span></a><span style="font-weight: 400;">. 
To support this change, we made major changes to the Horizon Store: updates to our navigation to support significantly more categories, better ranking and categorization of apps, and a new “Early Access” section.</span></p> <p><span style="font-weight: 400;">The Meta Horizon Store includes the surfaces that let you discover and acquire applications and games for Meta Quest, as well as explore Worlds you can travel to in Horizon. Since we have a centralized team that maintains the Store across four platforms (Android, iOS, Horizon OS, Web) and we need feature parity across these interfaces, the team has benefited tremendously from being able to use React and React Native even though these are primarily separate implementations today. These technologies have enabled the team to roll out new features and experiments much faster with a smaller team.</span></p> <p><span style="font-weight: 400;">Just like the new Instagram and Facebook apps, and everything else using React at Meta, our teams use the bleeding edge of React infra like the React Compiler and the New React Native Architecture. The React team partnered with multiple teams over the last few years to build out infrastructure and capabilities to enable cross-platform code sharing, which the Meta Horizon Store team has started to take advantage of. For example, the Meta Horizon Store&#8217;s navigation and routing infrastructure was originally quite different between platforms. The team is now reusing Meta’s internal router for React apps that was </span><a href="https://www.youtube.com/watch?v=KT3XKDBZW7M" target="_blank" rel="noopener"><span style="font-weight: 400;">originally built for Facebook.com</span></a><span style="font-weight: 400;">, which now also works with React Native. 
We also converted the Meta Horizon Store on the web from using pure CSS to using </span><a href="https://stylexjs.com/" target="_blank" rel="noopener"><span style="font-weight: 400;">StyleX</span></a><span style="font-weight: 400;">, which in combination with </span><a href="https://github.com/facebook/react-strict-dom" target="_blank" rel="noopener"><span style="font-weight: 400;">React Strict DOM</span></a><span style="font-weight: 400;">, has enabled them to reuse the Spotlight section of the Meta Horizon Store across web and mixed reality. This enabled us to more quickly support internationalized text rendering and light/dark mode for banners, and accelerated future enhancements for our merchandising team.</span></p> <h2>Meta Spatial Editor</h2> <div style="width: 750px;" class="wp-video"><video class="wp-video-shortcode" id="video-21703-2" width="750" height="360" poster="https://engineering.fb.com/wp-content/uploads/2024/10/Meta-Spatial-Editor1-compressed.jpg" preload="metadata" controls="controls"><source type="video/mp4" src="https://engineering.fb.com/wp-content/uploads/2024/10/spatial-compressed.mp4?_=2" /><a href="https://engineering.fb.com/wp-content/uploads/2024/10/spatial-compressed.mp4">https://engineering.fb.com/wp-content/uploads/2024/10/spatial-compressed.mp4</a></video></div> <p><span style="font-weight: 400;"><br /> We announced the </span><a href="https://developers.meta.com/horizon/develop/spatial-sdk"><span style="font-weight: 400;">Meta Spatial SDK</span></a><span style="font-weight: 400;"> and Meta Spatial Editor to enable mobile developers to create immersive experiences for Meta Horizon OS using familiar Android languages, libraries, and tools, along with unique Meta Quest capabilities, such as physics, MR, and 3D. Creating great 3D experiences always requires being able to visualize and edit your scenes directly. 
The Meta Spatial Editor is a new desktop app that lets you import, organize, and transform your assets into visual compositions and export them, using the glTF standard, into Meta Spatial SDK.</span></p> <p><span style="font-weight: 400;">Our teams built the app with </span><a href="https://microsoft.github.io/react-native-windows/"><span style="font-weight: 400;">React Native for Desktop</span></a><span style="font-weight: 400;">, providing users with native Windows and macOS apps and providing our teams with the incredible developer experience of React. One of the key factors in the teams&#8217; decision to use React Native for Desktop instead of other web-based desktop solutions is that React Native enables the team to utilize native integrations when needed. The main 3D scene in the app is powered by a custom 3D rendering engine, requiring a custom React Native Native Component integration. The React Native panels on the scene let users modify all sorts of properties, which are then communicated to the 3D renderer via C++, enabling us to update the UI at 60fps.</span></p> <p><span style="font-weight: 400;">The Meta Spatial Editor team had many engineers who came primarily from a C++ background and were used to building with Qt. These team members were initially skeptical of JavaScript but ended up loving the developer experience provided by React Native, such as Fast Refresh. Web developers take for granted that code changes can be seen on file-save, but this is still extremely uncommon for native engineers. This developer experience enabled our teams to build much more quickly with React Native.</span></p> <h2>This is how Meta builds React</h2> <p><span style="font-weight: 400;">Over a decade ago, Meta introduced React to the industry through open source. Our React team at Meta is so proud of these experiences that were announced at Meta Connect 2024.
These products showcase the power, expressivity, and flexibility of what’s possible with React: delightful interactions, deeply complex integrations, and </span><span style="font-weight: 400;">incredibly responsive interfaces. </span><span style="font-weight: 400;">And of course, they all render natively on their respective platforms to match user expectations.</span></p> <p><span style="font-weight: 400;">Over the past decade, the React team has partnered deeply with both teams at Meta and members of the open source community to enable these types of product and developer experiences. Engineers at Meta use React on every platform where we ship user interfaces: web, mobile, desktop, and new platforms such as MR. Each time the React team has added support for a new platform, the team has invested in deeply understanding the idioms and expectations for user experiences on that platform, then adapting and optimizing React accordingly. We’ve consistently found that improving React for one platform benefits others as well — an approach the React team described in their </span><a href="https://reactnative.dev/blog/2021/08/26/many-platform-vision"><span style="font-weight: 400;">Many Platform Vision</span></a><span style="font-weight: 400;">. </span></p> <p><span style="font-weight: 400;">This pattern has continued as the teams expanded support to meet the constraints and opportunities of mixed reality devices. Our teams have improved startup and application responsiveness, improved efficiency to reduce battery drain, and taken major steps to enable code sharing across web and native platforms — with platform-specific customizations. These wins have consistently benefited our apps on other platforms, with user experience improvements in products such as Facebook.com and Facebook Marketplace.
</span></p> <p><span style="font-weight: 400;">Our engineers invest in these improvements knowing that they will benefit not only products created by Meta, but all React products in the world. Meta continues to share these improvements with the open source community once we are confident that they are stable enough for broader adoption. We’ve previously shared some of these technologies with the open source community, including </span><a href="https://youtu.be/lyEKhv8-3n0?si=sg-gbtEMtUCxFqOs&amp;t=2269"><span style="font-weight: 400;">React Compiler</span></a><span style="font-weight: 400;">, </span><a href="https://react.dev/blog/2024/04/25/react-19"><span style="font-weight: 400;">React 19</span></a><span style="font-weight: 400;">, React Native’s </span><a href="https://youtu.be/Q5SMmKb7qVI?si=i5K0pUmYCYOeBbKu&amp;t=766"><span style="font-weight: 400;">New Architecture</span></a><span style="font-weight: 400;">, </span><a href="https://stylexjs.com/"><span style="font-weight: 400;">StyleX</span></a><span style="font-weight: 400;">, </span><a href="https://github.com/facebook/react-strict-dom"><span style="font-weight: 400;">React Strict DOM</span></a><span style="font-weight: 400;">, and performance improvements </span><a href="https://www.youtube.com/watch?v=rElD4RaR3gk" target="_blank" rel="noopener"><span style="font-weight: 400;">to Hermes</span></a><span style="font-weight: 400;">. These innovations and more are currently under development, and our teams look forward to sharing them with the open source community in the future!</span></p> <p><small><em>Stranger Things <img src="https://s.w.org/images/core/emoji/15.0.3/72x72/2122.png" alt="™" class="wp-smiley" style="height: 1em; max-height: 1em;" />/© Netflix.
Used with permission.</em></small></p> <p>The post <a rel="nofollow" href="https://engineering.fb.com/2024/10/02/android/react-at-meta-connect-2024/">React at Meta Connect 2024</a> appeared first on <a rel="nofollow" href="https://engineering.fb.com">Engineering at Meta</a>.</p> ]]></content:encoded> <post-id xmlns="com-wordpress:feed-additions:1">21703</post-id> </item> <item> <title>Inside Bento: Jupyter Notebooks at Meta</title> <link>https://engineering.fb.com/2024/09/17/data-infrastructure/inside-bento-jupyter-notebooks-at-meta/</link> <dc:creator><![CDATA[]]></dc:creator> <pubDate>Tue, 17 Sep 2024 17:53:56 +0000</pubDate> <category><![CDATA[Culture]]></category> <category><![CDATA[Data Infrastructure]]></category> <category><![CDATA[DevInfra]]></category> <category><![CDATA[Open Source]]></category> <category><![CDATA[Meta Tech Podcast]]></category> <guid isPermaLink="false">https://engineering.fb.com/?p=21690</guid> <description><![CDATA[<p>This episode of the Meta Tech Podcast is all about Bento, Meta’s internal distribution of Jupyter Notebooks, an open-source web-based computing platform. Bento allows our engineers to mix code, text, and multimedia in a single document and serves a wide range of use cases at Meta from prototyping to complex machine learning workflows. 
Pascal Hartig [...]</p> <p><a class="btn btn-secondary understrap-read-more-link" href="https://engineering.fb.com/2024/09/17/data-infrastructure/inside-bento-jupyter-notebooks-at-meta/">Read More...</a></p> <p>The post <a rel="nofollow" href="https://engineering.fb.com/2024/09/17/data-infrastructure/inside-bento-jupyter-notebooks-at-meta/">Inside Bento: Jupyter Notebooks at Meta</a> appeared first on <a rel="nofollow" href="https://engineering.fb.com">Engineering at Meta</a>.</p> ]]></description> <content:encoded><![CDATA[<p>This episode of the Meta Tech Podcast is all about <a href="https://developers.facebook.com/blog/post/2021/09/20/eli5-bento-interactive-notebook-empowers-development-collaboration-best-practices/" target="_blank" rel="noopener">Bento</a>, Meta’s internal distribution of Jupyter Notebooks, an open-source web-based computing platform. Bento allows our engineers to mix code, text, and multimedia in a single document and serves a wide range of use cases at Meta from prototyping to complex machine learning workflows.</p> <p>Pascal Hartig (<a href="https://www.threads.net/@passy_" target="_blank" rel="noopener">@passy</a>) is joined by Steve, whose team has built several features on top of Jupyter, including <a href="https://engineering.fb.com/2023/08/29/security/scheduling-jupyter-notebooks-meta/" target="_blank" rel="noopener">scheduled notebooks</a>, sharing with colleagues, and <a href="https://engineering.fb.com/2024/06/10/data-infrastructure/serverless-jupyter-notebooks-bento-meta/" target="_blank" rel="noopener">running notebooks without a remote server component</a> by leveraging WebAssembly in the browser.</p> <p>Download or listen to the podcast episode below:</p> <p><iframe loading="lazy" style="border: none;" title="Libsyn Player" src="//html5-player.libsyn.com/embed/episode/id/32811392/height/90/theme/custom/thumbnail/yes/direction/forward/render-playlist/no/custom-color/000000/" width="100%" height="90" scrolling="no" 
allowfullscreen="allowfullscreen"></iframe></p> <p>You can also find the episode wherever you get your podcasts, including:</p> <ul> <li><a href="https://open.spotify.com/episode/0RvTSFzjAlqJzW9tuJwokl" target="_blank" rel="noopener">Spotify</a></li> <li><a href="https://podcasts.apple.com/us/podcast/inside-bento-serverless-jupyter-notebooks-at-meta/id1370910331?i=1000667487405" target="_blank" rel="noopener">Apple Podcasts</a></li> <li><a href="https://pca.st/7vbp2djc" target="_blank" rel="noopener">PocketCasts</a></li> <li><a href="https://overcast.fm/itunes1370910331" target="_blank" rel="noopener">Overcast</a></li> </ul> <p>The <a href="https://insidefacebookmobile.libsyn.com/">Meta Tech Podcast</a> is a podcast, brought to you by Meta, where we highlight the work Meta’s engineers are doing at every level – from low-level frameworks to end-user features.</p> <p>Send us feedback on <a href="https://instagram.com/metatechpod" target="_blank" rel="noopener">Instagram</a>, <a href="https://threads.net/@metatechpod" target="_blank" rel="noopener">Threads</a>, or <a href="https://twitter.com/metatechpod" target="_blank" rel="noopener">X</a>.</p> <p>And if you’re interested in learning more about career opportunities at Meta visit the <a href="https://www.metacareers.com/?ref=engineering.fb.com" target="_blank" rel="noopener">Meta Careers</a> page.</p> <p>The post <a rel="nofollow" href="https://engineering.fb.com/2024/09/17/data-infrastructure/inside-bento-jupyter-notebooks-at-meta/">Inside Bento: Jupyter Notebooks at Meta</a> appeared first on <a rel="nofollow" href="https://engineering.fb.com">Engineering at Meta</a>.</p> ]]></content:encoded> <post-id xmlns="com-wordpress:feed-additions:1">21690</post-id> </item> </channel> </rss>
