Applied Machine Learning and AI for Engineers: From Fundamentals to Advanced Techniques

By Carrie Chambers

Posted Oct 27, 2024


Credit: pexels.com. An artist's illustration of artificial intelligence (AI), created by Novoto Studio, representing how machine learning is inspired by neuroscience and the human brain.

Applied machine learning and AI are revolutionizing engineering, enabling intelligent systems that learn from data and make decisions on their own. This is made possible by algorithms and statistical models that analyze and interpret complex data.

Machine learning algorithms are trained on large datasets, allowing them to learn patterns and relationships that would be difficult or impossible for humans to identify by hand. For many classification tasks, a dataset on the order of thousands to tens of thousands of labeled samples is enough to train a model to high accuracy.

As engineers, we need to understand the fundamentals of machine learning and AI, including supervised and unsupervised learning, regression, classification, and clustering. Grasping these concepts lets engineers design and build intelligent systems that improve efficiency, accuracy, and decision-making across many industries.

Machine Learning Fundamentals

Most machine learning models fall into one of two broad categories: supervised learning models and unsupervised learning models. Supervised learning models make predictions, and they're trained with labeled data.


A great example of a supervised learning model is the one the US Postal Service uses to turn handwritten ZIP codes into digits a computer can recognize. Another example is the model your credit card company uses to authorize purchases.

Unsupervised learning models, on the other hand, don't require labeled data. They're used to provide insights into existing data, or to group data into categories and categorize future inputs accordingly.
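As a concrete illustration, here is a minimal sketch in Python using scikit-learn: a supervised classifier trained on labeled points alongside an unsupervised clustering model that groups the same points without labels. The synthetic dataset and model choices are illustrative assumptions, not taken from the examples above.

```python
# A minimal sketch of supervised vs. unsupervised learning with scikit-learn.
# The synthetic dataset and model choices are illustrative assumptions.
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

# Generate 2-D points that fall into three groups.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised learning: train on labeled data, then predict labels for new data.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: no labels, just group similar points into clusters.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster assignments for first five points:", clusters[:5])
```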

Classification

Classification is a fundamental concept in machine learning that involves categorizing data into predefined groups. It's a crucial task in many industries, including e-commerce and advertising.

Google's research paper on predicting advertiser churn for Google AdWords in 2010 is a great example of classification in action. The paper explores how to use machine learning to predict which advertisers are likely to leave Google AdWords.

Classification can be used to categorize documents, items, or even products. For instance, NAVER's 2016 paper on large-scale item categorization in e-commerce using multiple recurrent neural networks shows how machine learning can categorize products quickly and at scale.
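To make the idea concrete, here is a minimal product-categorization sketch using a TF-IDF bag-of-words representation and a linear classifier. The tiny labeled dataset and the scikit-learn pipeline are illustrative assumptions, not the approach taken in any of the papers listed below.

```python
# Toy product categorization: map a product title to a category.
# The labeled examples are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

titles = [
    "stainless steel chef knife 8 inch",
    "non-stick frying pan with lid",
    "mens running shoes size 10",
    "womens waterproof hiking boots",
    "usb-c fast charging cable 2m",
    "wireless bluetooth earbuds with mic",
]
categories = ["kitchen", "kitchen", "footwear", "footwear", "electronics", "electronics"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(titles, categories)

print(model.predict(["cast iron skillet 12 inch"]))    # expected: kitchen
print(model.predict(["noise cancelling headphones"]))  # expected: electronics
```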


Here are some notable examples of classification in machine learning:

  1. Prediction of Advertiser Churn for Google AdWords (Paper), Google, 2010
  2. High-Precision Phrase-Based Document Classification on a Modern Scale (Paper), LinkedIn, 2011
  3. Chimera: Large-scale Classification using Machine Learning, Rules, and Crowdsourcing (Paper), Walmart, 2014
  4. Large-scale Item Categorization in e-Commerce Using Multiple Recurrent Neural Networks (Paper), NAVER, 2016
  5. Discovering and Classifying In-app Message Intent at Airbnb, Airbnb, 2019
  6. Teaching Machines to Triage Firefox Bugs, Mozilla, 2019
  7. Categorizing Products at Scale, Shopify, 2020
  8. How We Built the Good First Issues Feature, GitHub, 2020
  9. Testing Firefox More Efficiently with Machine Learning, Mozilla, 2020
  10. Using ML to Subtype Patients Receiving Digital Mental Health Interventions (Paper), Microsoft, 2020
  11. Scalable Data Classification for Security and Privacy (Paper), Facebook, 2020
  12. Uncovering Online Delivery Menu Best Practices with Machine Learning, DoorDash, 2020
  13. Using a Human-in-the-Loop to Overcome the Cold Start Problem in Menu Item Tagging, DoorDash, 2020
  14. Deep Learning: Product Categorization and Shelving, Walmart, 2021
  15. Large-scale Item Categorization for e-Commerce (Paper), DianPing and eBay, 2012
  16. Semantic Label Representation with an Application on Multimodal Product Categorization, Walmart, 2022
  17. Building Airbnb Categories with ML and Human-in-the-Loop, Airbnb, 2022

Forecasting

Forecasting is one of the most important applications of machine learning. Companies like Uber and Gojek have built automated forecasting tools to predict demand and supply, and DoorDash and Grubhub have developed similar tools to predict order volume.

Forecasting isn't just about predicting numbers; it's about understanding the underlying patterns and trends in the data. These tools apply machine learning algorithms to historical data to make predictions about future demand, as sketched below.
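Here is a minimal sketch of that idea: a synthetic daily order series is turned into lag features, and a gradient-boosted regressor predicts the next day's demand. The data and feature choices are illustrative assumptions, not any company's production pipeline.

```python
# Toy demand forecast: predict tomorrow's orders from the last 7 days.
# The synthetic series and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
days = np.arange(365)
# Synthetic daily order counts with a weekly pattern plus noise.
orders = 200 + 50 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 10, size=days.size)

# Build lag features: the previous 7 days predict the next day.
lags = 7
X = np.array([orders[i:i + lags] for i in range(len(orders) - lags)])
y = orders[lags:]

# Train on roughly the first 11 months, forecast the rest.
split = 330
model = GradientBoostingRegressor().fit(X[:split], y[:split])
preds = model.predict(X[split:])
mae = np.mean(np.abs(preds - y[split:]))
print(f"mean absolute error on held-out days: {mae:.1f} orders")
```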

One of the key challenges in forecasting is retraining machine learning models in the wake of unexpected events, such as the COVID-19 pandemic. Companies like DoorDash have developed strategies to retrain their models and adapt to changing circumstances.


These are just a few examples of companies that have built forecasting tools with machine learning; the sources at the end of this article list many more. The possibilities are endless, and the applications are vast.

Search Ranking

Machine learning has revolutionized the way we search for information online. One of the key applications of machine learning is in search ranking, the process of determining the order in which search results are displayed to users.

Amazon, for instance, uses a complex ranking system to order search results, weighing factors such as relevance, popularity, and user behavior to decide which products to display first.

The goal of search ranking is to provide users with the most relevant and useful search results, which can be achieved through the use of machine learning algorithms. These algorithms can analyze large amounts of data, identify patterns, and make predictions about the relevance of search results.
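As a minimal sketch of the idea, suppose each query-document pair is described by a few hand-crafted features, such as a text-match score, item popularity, and freshness, with a graded relevance label; a regression model can then score candidates and sort them. The features, labels, and model below are illustrative assumptions.

```python
# Toy pointwise learning-to-rank: score candidates, then sort by score.
# Features and relevance labels are made up for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row: [text_match_score, item_popularity, freshness_days]
X_train = np.array([
    [0.9, 0.8, 1], [0.7, 0.2, 30], [0.4, 0.9, 5],
    [0.2, 0.1, 90], [0.8, 0.5, 10], [0.1, 0.7, 60],
])
# Graded relevance labels (0-3), e.g. from clicks or human judgments.
y_train = np.array([3, 2, 2, 0, 3, 1])

ranker = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Candidates retrieved for a new query, in arbitrary order.
candidates = np.array([[0.3, 0.9, 2], [0.85, 0.4, 7], [0.5, 0.5, 45]])
scores = ranker.predict(candidates)
ranking = np.argsort(-scores)  # highest predicted relevance first
print("display order (candidate indices):", ranking, "scores:", scores.round(2))
```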


In 2016, Yahoo described its search ranking approach in the paper "Ranking Relevance in Yahoo Search." The system combines machine-learned models with signals such as keyword matching, document similarity, and user behavior to rank search results.

Here are some other notable examples of machine learning in search ranking:

  • In 2017, Twitter used deep learning to improve the ranking of search results on its platform.
  • Alibaba's e-commerce search engine uses a ranking system that takes into account factors such as user behavior, product attributes, and merchant information.
  • In 2019, Airbnb developed a search ranking system that uses machine learning to personalize search results based on user behavior and preferences.

By using machine learning to improve search ranking, companies can provide users with more relevant and useful search results, which can lead to increased engagement, conversion rates, and customer satisfaction.

Sequence Modelling

Sequence modelling is a key area of machine learning where algorithms are trained to recognize patterns in sequential data. This can be particularly useful in applications like predicting clinical events or understanding consumer histories.

Recurrent neural networks (RNNs) are a type of neural network architecture well-suited for sequence modelling tasks. They can learn to recognize patterns in sequential data, such as time series data or text sequences.
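Here is a minimal sketch of a sequence model in PyTorch: a small RNN trained to predict the next value of a sine wave from the previous ten values. The synthetic data and tiny network are illustrative assumptions, not taken from the studies mentioned below.

```python
# Minimal sequence model: an RNN that predicts the next value in a series.
# The synthetic sine-wave data and tiny network are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Build (input sequence, next value) pairs from a sine wave.
series = torch.sin(torch.linspace(0, 20, 200))
window = 10
X = torch.stack([series[i:i + window] for i in range(len(series) - window)]).unsqueeze(-1)
y = series[window:].unsqueeze(-1)

class NextValueRNN(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.rnn(x)          # out: (batch, seq_len, hidden)
        return self.head(out[:, -1])  # predict from the last time step

model = NextValueRNN()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print(f"final training MSE: {loss.item():.4f}")
```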


For example, a study by Sutter Health in 2015 used RNNs to predict clinical events, while another study by Zalando in 2016 used deep learning to understand consumer histories.

The applications of sequence modelling are diverse, and include early detection of heart failure onset, notification attendance prediction, and click-through rate prediction.

Notable examples of sequence modelling in practice, including the systems mentioned above, are listed in the sources at the end of this article. Together they demonstrate the potential of sequence modelling across a variety of domains and highlight the importance of this area of machine learning.

Weak Supervision

Weak supervision is a technique used in machine learning to train models with limited or noisy labeled data. This approach is often used when it's not feasible to collect large amounts of high-quality labeled data.

One notable example of weak supervision is Snorkel DryBell, a case study by Google in 2019 that deployed weak supervision at an industrial scale. This project demonstrated the effectiveness of weak supervision in real-world applications.


Weak supervision can be achieved through various methods, including label synthesis, weak labeling, and active learning. These methods can be used individually or in combination to train models with limited labeled data.
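Here is a minimal sketch of the labeling-function idea popularized by Snorkel, applied to a toy spam-detection task: several noisy heuristics vote on each example, and the majority label becomes a weak training label. The heuristics and messages are illustrative assumptions, not the actual Snorkel DryBell implementation.

```python
# Weak supervision sketch: noisy heuristic labeling functions, combined by vote.
# The heuristics and messages are made up for illustration.
SPAM, HAM, ABSTAIN = 1, 0, -1

def lf_contains_free(text):
    return SPAM if "free" in text.lower() else ABSTAIN

def lf_has_link(text):
    return SPAM if "http" in text.lower() else ABSTAIN

def lf_short_greeting(text):
    return HAM if len(text.split()) < 6 and "hi" in text.lower() else ABSTAIN

labeling_functions = [lf_contains_free, lf_has_link, lf_short_greeting]

def weak_label(text):
    """Majority vote over labeling functions that did not abstain."""
    votes = [lf(text) for lf in labeling_functions if lf(text) != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)

messages = [
    "Claim your FREE prize at http://example.com",
    "Hi, lunch at noon?",
    "Quarterly report attached for review",
]
for m in messages:
    print(weak_label(m), m)
# The resulting weak labels can then be used to train a standard classifier.
```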

The Osprey system, developed by Intel in 2019, is another example of weak supervision in action. Osprey uses weak supervision to address imbalanced extraction problems without requiring code modifications.

In some cases, weak supervision can be used to improve the performance of machine-learned products. The Overton system, developed by Apple in 2019, is designed to monitor and improve machine-learned products by providing feedback to developers.

Snorkel DryBell, Osprey, and Overton are key examples of weak supervision in action. By leveraging weak supervision, developers can build more robust and accurate machine learning models, even with limited labeled data.

Generation

Machine learning is a field that's rapidly advancing, with new breakthroughs and innovations emerging every year. One area where we've seen significant progress is in the generation of new content, such as text, images, and even entire videos.


Better language models have been developed, enabling machines to generate text that is often difficult to distinguish from human writing. This has significant implications for applications like chatbots, virtual assistants, and machine translation.

The GPT-3 model, for example, has been shown to be a few-shot learner, able to learn new tasks with minimal training data. This means that machines can learn to perform complex tasks with just a few examples, rather than requiring large amounts of training data.
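At the core of text generation is a simple loop: the model assigns probabilities to possible next tokens, and the generator samples one, often with a temperature parameter that sharpens or flattens the distribution. Here is a toy sketch of that loop, with a hard-coded bigram table standing in for a real language model; the vocabulary and probabilities are illustrative assumptions.

```python
# Toy text generation: sample the next word from a model's probability table.
# The bigram probabilities below stand in for a real language model's output.
import numpy as np

rng = np.random.default_rng(0)
next_word_probs = {
    "the":      {"cat": 0.5, "dog": 0.3, "engineer": 0.2},
    "cat":      {"sat": 0.6, "ran": 0.4},
    "dog":      {"sat": 0.3, "ran": 0.7},
    "engineer": {"sat": 0.2, "ran": 0.8},
    "sat":      {"quietly": 1.0},
    "ran":      {"quickly": 1.0},
}

def sample_next(word, temperature=1.0):
    """Sample the next word, sharpening or flattening the distribution."""
    words = list(next_word_probs[word])
    probs = np.array([next_word_probs[word][w] for w in words])
    logits = np.log(probs) / temperature
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax with temperature
    return rng.choice(words, p=probs)

text = ["the"]
while text[-1] in next_word_probs:
    text.append(sample_next(text[-1], temperature=0.8))
print(" ".join(text))
```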

In addition to text generation, researchers have also made significant strides in image generation and super resolution. Image GPT, a model developed by OpenAI, generates coherent images by predicting pixels one at a time, while deep learned super resolution techniques have been used at Pixar to enhance the quality of feature films.

Here are some key papers and projects that have contributed to these advancements:

  1. Better Language Models and Their Implications (Paper)
  2. Image GPT (Paper, Code)
  3. Language Models are Few-Shot Learners (Paper)
  4. Deep Learned Super Resolution for Feature Film Production (Paper)
  5. Unit Test Case Generation with Transformers

Machine Learning vs AI

Machine learning and artificial intelligence (AI) are often used interchangeably, but technically speaking, machine learning is a subset of AI.


AI encompasses not only machine learning models but also other types of models, such as expert systems and reinforcement learning systems.

An example of a reinforcement learning system is AlphaGo, which was the first computer program to beat a professional human Go player.

It trains on games that have already been played and learns strategies for winning on its own.

Deep learning is a subset of machine learning and what most people refer to as AI today.

Deep learning is machine learning performed with neural networks.

There are forms of machine learning that don't involve neural networks at all, but the vast majority of high-profile AI work today is built on them.

Machine learning models can be divided into conventional models and deep-learning models.

Conventional models use learning algorithms to model patterns in data, while deep-learning models use neural networks to do the same.
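To see the difference in practice, here is a minimal comparison of a conventional model and a small neural network on the same classification task, using scikit-learn. The dataset and hyperparameters are illustrative assumptions.

```python
# Conventional model vs. a small neural network on the same classification task.
# Dataset and hyperparameters are illustrative assumptions.
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=500, noise=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Conventional model: a linear decision boundary.
linear = LogisticRegression().fit(X_train, y_train)

# Deep-learning-style model: a small multi-layer neural network.
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

print("logistic regression accuracy:", round(linear.score(X_test, y_test), 3))
print("neural network accuracy:     ", round(net.score(X_test, y_test), 3))
# The curved class boundary in this data favors the neural network.
```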

Neural networks have been developed to excel at certain tasks, including computer vision and tasks involving human languages.

We'll take a closer look at neural networks in Chapter 8.


Sources

  1. Big Data Analytics and Applied Machine Learning with Python ... (emory.edu)
  2. applyingML (applyingml.com)
  3. Improving Accuracy By Certainty Estimation of Human Decisions, Labels, and Raters (fb.com)
  4. Paper (thodrek.github.io)
  5. Data Management Challenges in Production Machine Learning (research.google)
  6. Monitoring Data Quality at Scale with Statistical Modeling (uber.com)
  7. Introducing Fabricator: A Declarative Feature Engineering Framework (doordash.engineering)
  8. Developing scalable feature engineering DAGs (outerbounds.com)
  9. Open sourcing Feathr – LinkedIn’s feature store for productive machine learning (linkedin.com)
  10. Near real-time features for near real-time personalization (linkedin.com)
  11. ML Feature Serving Infrastructure at Lyft (lyft.com)
  12. Optimal Feature Discovery: Better, Leaner Machine Learning Models Through Information Theory (uber.com)
  13. Building Riviera: A Declarative Real-Time Feature Engineering Framework (doordash.engineering)
  14. Feast: Bridging ML Models and Data (gojek.io)
  15. Accelerating Machine Learning with the Feature Store Service (condenast.com)
  16. Introducing Feast: An Open Source Feature Store for Machine Learning (google.com)
  17. Building the Activity Graph, Part 2 (Feature Storage Section) (linkedin.com)
  18. Distributed Time Travel for Feature Generation (netflixtechblog.com)
  19. Building Airbnb Categories with ML and Human-in-the-Loop (medium.com)
  20. Using a Human-in-the-Loop to Overcome the Cold Start Problem in Menu Item Tagging (doordash.engineering)
  21. Uncovering Online Delivery Menu Best Practices with Machine Learning (doordash.engineering)
  22. Testing Firefox More Efficiently with Machine Learning (mozilla.org)
  23. Teaching Machines to Triage Firefox Bugs (mozilla.org)
  24. Paper (kdd.org)
  25. Large-scale Item Categorization in e-Commerce Using Multiple Recurrent Neural Networks (kdd.org)
  26. Chimera: Large-scale Classification using Machine Learning, Rules, and Crowdsourcing (acm.org)
  27. High-Precision Phrase-Based Document Classification on a Modern Scale (linkedin.com)
  28. Prediction of Advertiser Churn for Google AdWords (research.google)
  29. Using Machine Learning to Predict the Value of Ad Requests (twitter.com)
  30. Using Machine Learning to Predict Value of Homes On Airbnb (medium.com)
  31. Causal Forecasting at Lyft (Part 1) (lyft.com)
  32. DeepETA: How Uber Predicts Arrival Times Using Deep Learning (uber.com)
  33. The history of Amazon’s forecasting algorithm (amazon.science)
  34. Greykite: A flexible, intuitive, and fast forecasting library (linkedin.com)
  35. Managing Supply and Demand Balance Through Machine Learning (doordash.engineering)
  36. Introducing Orbit, An Open Source Package for Time Series Inference and Forecasting (uber.com)
  37. Retraining Machine Learning Models in the Wake of COVID-19 (doordash.engineering)
  38. Under the Hood of Gojek’s Automated Forecasting Tool (gojek.io)
  39. Engineering Extreme Event Forecasting at Uber with RNN (uber.com)
  40. Paper (arxiv.org)
  41. Homepage Recommendation with Exploitation and Exploration (doordash.engineering)
  42. Evolving DoorDash’s Substitution Recommendations Algorithm (doordash.engineering)
  43. Recommend API: Unified end-to-end machine learning infrastructure to generate recommendations (slack.engineering)
  44. RecSysOps: Best Practices for Operating a Large-Scale Recommender System (medium.com)
  45. Blueprints for recommender system architectures: 10th anniversary edition (amatriain.net)
  46. Improving job matching with machine-learned activity features (linkedin.com)
  47. Beyond Matrix Factorization: Using hybrid features for user-business recommendations (yelp.com)
  48. Lessons Learned from Building out Context-Aware Recommender Systems (onepeloton.com)
  49. How We Built: An Early-Stage Machine Learning Model for Recommendations (onepeloton.com)
  50. Building a Deep Learning Based Retrieval System for Personalized Recommendations (ebayinc.com)
  51. The Amazon Music conversational recommender is hitting the right notes (amazon.science)
  52. Understanding Data Storage and Ingestion for Large-Scale Deep Recommendation Model Training (arxiv.org)
  53. "Are you sure?": Preliminary Insights from Scaling Product Comparisons to Multiple Shops (arxiv.org)
  54. On YouTube's Recommendation System (blog.youtube)
  55. Deep Retrieval: End-to-End Learnable Structure Model for Large-Scale Recommendations (arxiv.org)
  56. Self-supervised Learning for Large-scale Item Recommendations (arxiv.org)
  57. Lessons Learned Addressing Dataset Bias in Model-Based Candidate Generation (arxiv.org)
  58. Multi-task Learning and Calibration for Utility-based Home Feed Ranking (medium.com)
  59. Improving the Quality of Recommended Pins with Lightweight Ranking (medium.com)
  60. Multi-task Learning for Related Products Recommendations at Pinterest (medium.com)
  61. A Case Study of Session-based Recommendations in the Home-improvement Domain (acm.org)
  62. Improved Deep & Cross Network for Feature Cross Learning in Web-scale LTR Systems (arxiv.org)
  63. Zero-Shot Heterogeneous Transfer Learning from RecSys to Cold-Start Search Retrieval (arxiv.org)
  64. Building a Heterogeneous Social Network Recommendation System (linkedin.com)
  65. A Closer Look at the AI Behind Course Recommendations on LinkedIn Learning (Part 2) (linkedin.com)
  66. The Evolution of Kit: Automating Marketing Using Machine Learning (shopify.com)
  67. Contextual and Sequential User Embeddings for Large-Scale Music Recommendation (acm.org)
  68. For Your Ears Only: Personalizing Spotify Home with Machine Learning (atspotify.com)
  69. ATBRG: Adaptive Target-Behavior Relational Graph Network for Effective Recommendation (arxiv.org)
  70. MiNet: Mixed Interest Network for Cross-Domain Click-Through Rate Prediction (arxiv.org)
  71. Controllable Multi-Interest Framework for Recommendation (arxiv.org)
  72. TPG-DNN: A Method for User Intent Prediction with Multi-task Learning (arxiv.org)
  73. Paper (arxiv.org)
  74. Deep Interest with Hierarchical Attention Network for Click-Through Rate Prediction (arxiv.org)
  75. Temporal-Contextual Recommendation in Real-Time (amazon.science)
  76. Learning to be Relevant: Evolution of a Course Recommendation System (acm.org)
  77. Using Machine Learning to Predict what File you Need Next (Part 2) (dropbox.tech)
  78. Food Discovery with Uber Eats: Using Graph Learning to Power Recommendations (uber.com)
  79. Powered by AI: Instagram’s Explore recommender system (facebook.com)
  80. Personalized Recommendations for Experiences Using Deep Learning (tripadvisor.com)
  81. Multi-Interest Network with Dynamic Routing for Recommendation at Tmall (arxiv.org)
  82. Paper (arxiv.org)
  83. SDM: Sequential Deep Matching Model for Online Large-scale Recommender System (arxiv.org)
  84. Behavior Sequence Transformer for E-commerce Recommendation in Alibaba (arxiv.org)
  85. Explore, Exploit, and Explain: Personalizing Explainable Recommendations with Bandits (acm.org)
  86. Pixie: A System for Recommending 3+ Billion Items to 200+ Million Users in Real-Time (arxiv.org)
  87. Paper (nips.cc)
  88. A Meta-Learning Perspective on Cold-Start Recommendations for Items (nips.cc)
  89. Personalized Recommendations in LinkedIn Learning (linkedin.com)
  90. Session-based Recommendations with Recurrent Neural Networks (arxiv.org)
  91. Learning a Personalized Homepage (netflixtechblog.com)
  92. Recommending Music on Spotify with Deep Learning (benanne.github.io)
  93. Learning to Rank Recommendations with the k -Order Statistic Loss (acm.org)
  94. Deep Learning for Search Ranking at Etsy (etsy.com)
  95. How to Optimise Rankings with Cascade Bandits (medium.com)
  96. Learning To Rank Diversely (medium.com)
  97. SearchSage: Learning Search Query Representations at Pinterest (medium.com)
  98. Siamese BERT-based Model for Web Search Relevance Ranking (arxiv.org)
  99. Paper (arxiv.org)
  100. Graph Intention Network for Click-through Rate Prediction in Sponsored Search (arxiv.org)
  101. Using Learning-to-rank to Precisely Locate Where to Deliver Packages (amazon.science)
  102. Towards Personalized and Semantic Retrieval for E-commerce Search via Embedding Learning (arxiv.org)
  103. Embedding-based Retrieval in Facebook Search (arxiv.org)
  104. Things Not Strings: Understanding Search Intent with Better Recall (doordash.engineering)
  105. GDMix: A Deep Ranking Personalization Framework (linkedin.com)
  106. COLD: Towards the Next Generation of Pre-Ranking System (arxiv.org)
  107. AI at Scale in Bing (bing.com)
  108. Video (crossminds.ai)
  109. Ads Allocation in Feed via Constrained Optimization (acm.org)
  110. Quality Matches Via Personalized AI for Hirer and Seeker Preferences (linkedin.com)
  111. Improving Deep Learning for Airbnb Search (arxiv.org)
  112. Query2vec: Search query expansion with query embeddings (grubhub.com)
  113. How We Used Semantic Search to Make Our Search 10x Smarter (medium.com)
  114. Aggregating Search Results from Heterogeneous Sources via Reinforcement Learning (arxiv.org)
  115. Neural Code Search: ML-based Code Search Using Natural Language Queries (facebook.com)
  116. Paper (arxiv.org)
  117. Entity Personalized Talent Search Models with Tree Interaction Features (arxiv.org)
  118. Machine Learning-Powered Search Ranking of Airbnb Experiences (medium.com)
  119. Reinforcement Learning to Rank in E-Commerce Search Engine (arxiv.org)
  120. Paper (arxiv.org)
  121. Globally Optimized Mutual Influence Aware Ranking in E-Commerce Search (arxiv.org)
  122. Powering Search & Recommendations at DoorDash (doordash.engineering)
  123. An Ensemble-based Approach to Click-Through Rate Prediction for Promoted Listings at Etsy (arxiv.org)
  124. Using Deep Learning at Scale in Twitter’s Timelines (twitter.com)
  125. Learning to Rank Personalized Search Results in Professional Networks (arxiv.org)
  126. Paper (kdd.org)
  127. Ranking Relevance in Yahoo Search (kdd.org)
  128. Embeddings at Spotify's Scale - How Hard Could It Be? (arize.com)
  129. Multi-objective Hyper-parameter Optimization of Behavioral Song Embeddings (arxiv.org)
  130. The Embeddings That Came in From the Cold: Improving Vectors for New and Rare Products with Content-Based Inference (acm.org)
  131. BERT Goes Shopping: Comparing Distributional Models for Product Representations (aclanthology.org)
  132. Machine Learning for a Better Developer Experience (netflixtechblog.com)
  133. Should we Embed? A Study on Performance of Embeddings for Real-Time Recommendations (arxiv.org)
  134. Personalized Store Feed with Vector Embeddings (doordash.engineering)
  135. Towards Deep and Representation Learning for Talent Search at LinkedIn (arxiv.org)
  136. Understanding Latent Style (stitchfix.com)
  137. Paper (kdd.org)
  138. Embeddings@Twitter (twitter.com)
  139. ML-Enhanced Code Completion Improves Developer Productivity (googleblog.com)
  140. (Part 2) (arxiv.org)
  141. How we reduced our text similarity runtime by 99.96% (medium.com)
  142. WIDeText: A Multimodal Deep Learning Framework (medium.com)
  143. GeDi: A Powerful New Method for Controlling Language Models (einstein.ai)
  144. Photon: A Robust Cross-Domain Text-to-SQL System (aclweb.org)
  145. Deploying Lifelong Open-Domain Dialogue Learning (arxiv.org)
  146. A Highly Efficient, Real-Time Text-to-Speech System Deployed on CPUs (facebook.com)
  147. Paper (arxiv.org)
  148. Using Neural Networks to Find Answers in Tables (googleblog.com)
  149. Goal-Oriented End-to-End Conversational Models with Profile Features in a Real-World Setting (amazon.science)
  150. Building Smart Replies for Member Messages (linkedin.com)
  151. Search-based User Interest Modeling with Sequential Behavior Data for CTR Prediction (arxiv.org)
  152. Practice on Long Sequential User Behavior Modeling for Click-Through Rate Prediction (arxiv.org)
  153. Deep Learning for Electronic Health Records (googleblog.com)
  154. Continual Prediction of Notification Attendance with Classical and Deep Networks (arxiv.org)
  155. Paper (doogkong.github.io)
  156. Deep Learning for Understanding Consumer Histories (zalando.com)
  157. Doctor AI: Predicting Clinical Events via Recurrent Neural Networks (arxiv.org)
  158. An Efficient Training Approach for Very Large Scale Face Recognition (arxiv.org)
  159. Using Machine Learning to Detect Deficient Coverage in Colonoscopy Screenings (googleblog.com)
  160. On-device Supermarket Product Recognition (googleblog.com)
  161. RepNet: Counting Repetitions in Videos (googleblog.com)
  162. Machine Learning-based Damage Assessment for Disaster Relief (googleblog.com)
  163. Making machines recognize and transcribe conversations in meetings using audio and video (microsoft.com)
  164. How we Improved Computer Vision Metrics by More Than 5% Only by Cleaning Labelling Errors (deepomatic.com)
  165. Selecting the Best Image for Each Merchant Using Exploration and Machine Learning (doordash.engineering)
  166. Bandits for Online Calibration: An Application to Content Moderation on Social Media Platforms (arxiv.org)
  167. Shifting Consumption towards Diverse content via Reinforcement Learning (atspotify.com)
  168. Part 2 (towardsdatascience.com)
  169. Deep Reinforcement Learning in Production Part1 (towardsdatascience.com)
  170. Paper (arxiv.org)
  171. Dynamic Pricing on E-commerce Platform with Deep Reinforcement Learning (arxiv.org)
  172. Reinforcement Learning for On-Demand Logistics (doordash.engineering)
  173. Budget Constrained Bidding by Model-free Reinforcement Learning in Display Advertising (arxiv.org)
  174. Deep Reinforcement Learning for Sponsored Search Real-time Bidding (arxiv.org)
  175. Improving the accuracy of our machine learning WAF using data augmentation and sampling (cloudflare.com)
  176. Evolving our machine learning to stop mobile bots (cloudflare.com)
  177. Fighting fraud with Triplet Loss (olx.com)
  178. Cloudflare Bot Management: Machine Learning and More (cloudflare.com)
  179. Blocking Slack Invite Spam With Machine Learning (slack.engineering)
  180. Detecting and Preventing Abuse on LinkedIn using Isolation Forests (linkedin.com)
  181. Paper (aaai.org)
  182. Metapaths guided Neighbors aggregated Network for Heterogeneous Graph Reasoning (arxiv.org)
  183. Video (crossminds.ai)
  184. Traffic Prediction with Advanced Graph Neural Networks (deepmind.com)
  185. AliGraph: A Comprehensive Graph Neural Network Platform (arxiv.org)
  186. Graph Convolutional Neural Networks for Web-Scale Recommender Systems (arxiv.org)
  187. Building The LinkedIn Knowledge Graph (linkedin.com)
  188. Optimizing DoorDash’s Marketing Spend with Machine Learning (doordash.engineering)
  189. Next-Generation Optimization for Dasher Dispatch at DoorDash (doordash.engineering)
  190. How Trip Inferences and Machine Learning Optimize Delivery Times on Uber Eats (uber.com)
  191. (Part 1) (grab.com)
  192. One-shot Text Labeling using Attention and Belief Propagation for Information Extraction (arxiv.org)
  193. Paper (arxiv.org)
  194. AutoKnow: self-driving knowledge collection for products of thousands of types (amazon.science)
  195. Using Machine Learning to Index Text from Billions of Images (dropbox.tech)
  196. Bootstrapping Conversational Agents with Weak Supervision (aaai.org)
  197. Paper (ajratner.github.io)
  198. Snorkel DryBell: A Case Study in Deploying Weak Supervision at Industrial Scale (acm.org)
  199. Unit Test Case Generation with Transformers (arxiv.org)
  200. Paper (pixar.com)
  201. Deep Learned Super Resolution for Feature Film Production (pixar.com)
  202. Language Models are Few-Shot Learners (arxiv.org)
  203. Paper (openai.com)
  204. Better Language Models and Their Implications (openai.com)
  205. The Machine Learning Behind Hum to Search (googleblog.com)
  206. Improving On-Device Speech Recognition with VoiceFilter-Lite (googleblog.com)
  207. MPC-based machine learning: Achieving end-to-end privacy-preserving machine learning (facebook.com)
  208. Federated Learning with Formal Differential Privacy Guarantees (googleblog.com)
  209. Federated Learning: Collaborative Machine Learning without Centralized Training Data (googleblog.com)
  210. Accelerating our A/B experiments with machine learning (dropbox.tech)
  211. Meet Dash-AB — The Statistics Engine of Experimentation at DoorDash (doordash.engineering)
  212. Overtracking and Trigger Analysis: Reducing sample sizes while INCREASING sensitivity (booking.ai)
  213. Challenges in Experimentation (lyft.com)
  214. Interpreting A/B Test Results: False Negatives and Power (netflixtechblog.com)
  215. Iterating Real-time Assignment Algorithms Through Experimentation (doordash.engineering)
  216. Leveraging Causal Modeling to Get More Value from Flat Experiment Results (doordash.engineering)
  217. Improving Online Experiment Capacity by 4X with Parallelization and Increased Sensitivity (doordash.engineering)
  218. Improving Experimental Power through Control Using Predictions as Covariate (doordash.engineering)
  219. Our Evolution Towards T-REX: The Prehistory of Experimentation Infrastructure at LinkedIn (linkedin.com)
  220. Paper (mlr.press)
  221. Paper (nips.cc)
  222. Announcing a New Framework for Designing Optimal Experiments with Pyro (uber.com)
  223. Paper (arxiv.org)
  224. Constrained Bayesian Optimization with Noisy Experiments (fb.com)
  225. Under the Hood of Uber’s Experimentation Platform (uber.com)
  226. Analyzing Experiment Outcomes: Beyond Average Treatment Effects (uber.com)
  227. Building an Intelligent Experimentation Platform with Uber Engineering (uber.com)
  228. The Reusable Holdout: Preserving Validity in Adaptive Data Analysis (googleblog.com)
  229. Overlapping Experiment Infrastructure: More, Better, Faster Experimentation (research.google)
  230. Dealing with Train-serve Skew in Real-time ML Models: A Short Guide (nubank.com.br)
  231. Permute, Quantize, and Fine-tune: Efficient Compression of Neural Networks (arxiv.org)
  232. How We Scaled Bert To Serve 1+ Billion Daily Requests on CPUs (roblox.com)
  233. LiFT: A Scalable Framework for Measuring Fairness in ML Applications (linkedin.com)
  234. Elastic Distributed Training with XGBoost on Ray (uber.com)
  235. Didact AI: The anatomy of an ML-powered stock picking engine (principiamundi.com)
  236. Monzo’s machine learning stack (monzo.com)
  237. Zalando's Machine Learning Platform (zalando.com)
  238. The Magic of Merlin: Shopify's New Machine Learning Platform (shopify.engineering)
  239. DARWIN: Data Science and Artificial Intelligence Workbench at LinkedIn (linkedin.com)
  240. Redesigning Etsy’s Machine Learning Platform (etsy.com)
  241. Evolving Reddit’s ML Model Deployment and Serving Architecture (reddit.com)
  242. LyftLearn: ML Model Training Infrastructure built on Kubernetes (lyft.com)
  243. Introducing Flyte: Cloud Native Machine Learning and Data Processing Platform (lyft.com)
  244. Meet Michelangelo: Uber’s Machine Learning Platform (uber.com)
  245. ML Education at Uber: Frameworks Inspired by Engineering Principles (uber.com)
  246. Automatic Retraining for Machine Learning Models: Tips and Lessons Learned (nubank.com.br)
  247. Best Practices for Real-time Machine Learning: Alerting (nubank.com.br)
  248. Maintaining Machine Learning Model Accuracy Through Monitoring (doordash.engineering)
  249. Tuning Model Performance (uber.com)
  250. Continuous Integration and Deployment for Machine Learning Online Serving and Models (uber.com)
  251. Challenges in Deploying Machine Learning: a Survey of Case Studies (arxiv.org)
  252. 150 Successful Machine Learning Models: 6 Lessons Learned at Booking.com (booking.ai)
  253. On Challenges in Machine Learning Model Management (computer.org)
  254. Rules of Machine Learning: Best Practices for ML Engineering (google.com)
  255. Paper (nips.cc)
  256. Paper (arxiv.org)
  257. Practical Recommendations for Gradient-Based Training of Deep Architectures (arxiv.org)
  258. AlphaGo (oreil.ly)
  259. Scikit-Learn (oreil.ly)
  260. k-nearest neighbors (oreil.ly)
  261. KNeighborsRegressor (oreil.ly)
  262. Segment Anything Model (SAM) (segment-anything.com)
  263. YOKOT.AI (yokot.ai)
  264. Doctor of Engineering in A.I. & Machine Learning (gwu.edu)

Carrie Chambers

Senior Writer

Carrie Chambers is a seasoned blogger with years of experience in writing about a variety of topics. She is passionate about sharing her knowledge and insights with others, and her writing style is engaging, informative and thought-provoking. Carrie's blog covers a wide range of subjects, from travel and lifestyle to health and wellness.
