Bespoke Models on a Grand Scale

How Freshworks uses AI to build products for sales teams.


When every email, text, or call a company receives could mean a sale, reps need to figure out who to reply to first. Machine learning can help, but using it at scale requires a highly automated operation.

What’s new: Freshworks, which provides web-based software for managing customer relationships, produces models that prioritize sales leads, suggest the best action to move toward a sale, and perform related tasks. The decade-old company rolls them out and keeps them updated with help from Amazon’s SageMaker platform.

Problem: To serve 150,000 sales teams that might be in any type of business and located anywhere in the world, Freshworks builds, deploys, and maintains tens of thousands of customized models. That takes lots of processing power, so the company needs to do it efficiently.

Solution: Instead of training each model sequentially, Freshworks saves time by training them in parallel, as shown in the diagram above. Rather than retraining all models on fresh data weekly — as the company did previously — it evaluates performance continually and automatically retrains those that fall short. When a model isn’t needed, the server it runs on moves on to other jobs, saving costs.
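The parallel rollout can be sketched with Python’s standard concurrency tools. This is a minimal illustration, not Freshworks’ actual pipeline; the training function and client names are hypothetical stand-ins for real training jobs:

```python
from concurrent.futures import ThreadPoolExecutor

def train_model(client_id: str) -> str:
    """Stand-in for a full training job; returns a model identifier."""
    return f"model-{client_id}"

clients = ["acme", "globex", "initech"]

# Launch one training job per client concurrently instead of one after
# another, bounded by the number of available workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    models = list(pool.map(train_model, clients))

print(models)  # ['model-acme', 'model-globex', 'model-initech']
```

With thousands of independent per-client jobs, wall-clock time shrinks roughly in proportion to the number of workers, which is what turns a 48-hour sequential run into about an hour.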
How it works: Freshworks’ system trains and fine-tunes models to order for each client. It uses the client’s own data if possible. Otherwise, it falls back to a model trained for the client’s industry, for the industry plus region, or for the industry plus language. The company’s user interface queries models through an API.
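The fallback logic amounts to a lookup from most specific to most general. A minimal sketch, with a hypothetical registry; the article lists the fallbacks (industry, industry plus region, industry plus language) without stating their exact precedence, so the order below is an assumption:

```python
def pick_base_model(client: dict, registry: dict):
    """Return the most specific base model available for a client.

    Lookup order (assumed): client's own model, then industry + region,
    then industry + language, then industry alone, then a global default.
    """
    candidates = [
        ("client", client["id"]),
        ("industry+region", (client["industry"], client["region"])),
        ("industry+language", (client["industry"], client["language"])),
        ("industry", client["industry"]),
    ]
    for key in candidates:
        if key in registry:
            return registry[key]
    return registry["global"]

# Hypothetical registry: only an industry-level model exists for this client.
registry = {
    ("industry", "saas"): "saas-base-model",
    "global": "generic-model",
}
client = {"id": "c42", "industry": "saas", "region": "emea", "language": "en"}
print(pick_base_model(client, registry))  # saas-base-model
```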

  • To produce a model, Freshworks automatically builds and evaluates a number of different architectures, including neural networks, linear regression, random forests, and XGBoost. It deploys the best one.
  • As the models run and take in new customer data, the system automatically scales servers up or down based on the number of incoming API calls.
  • Freshworks is testing a feature that continually monitors model performance along with statistics of the incoming data. It flags models whose performance has degraded and retrains only those.
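The automated architecture search in the first bullet boils down to fitting each candidate and keeping the one with the best held-out score. A minimal sketch, using toy stand-ins (a constant baseline versus ordinary least squares) in place of the real neural-network, random-forest, and XGBoost candidates:

```python
def fit_mean(xs, ys):
    """Baseline: always predict the training mean."""
    mean = sum(ys) / len(ys)
    return lambda x: mean

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    b = my - a * mx
    return lambda x: a * x + b

def mse(model, xs, ys):
    """Mean squared error on a held-out set."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

train_x, train_y = [1, 2, 3, 4], [2.1, 3.9, 6.2, 8.0]
val_x, val_y = [5, 6], [10.1, 11.9]

# Fit every candidate, score each on held-out data, deploy the best.
candidates = {"mean": fit_mean, "linear": fit_linear}
fitted = {name: fit(train_x, train_y) for name, fit in candidates.items()}
best = min(fitted, key=lambda name: mse(fitted[name], val_x, val_y))
print(best)  # linear
```

The same score computed on fresh production data doubles as the monitoring signal from the last bullet: when a deployed model’s held-out metric drifts past a threshold, only that model is queued for retraining.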

Results: The automated system reduced training time from about 48 hours to about one hour. It boosted accuracy by 10 to 15 percent while cutting server costs by about 66 percent.

Why it matters: Show of hands: Who wants to build, deploy, and maintain thousands of models by hand? Automatically choosing architectures, training them, turning servers on and off, monitoring performance and data, and retraining when needed makes highly customized, highly scalable machine learning more practical and affordable.

We’re thinking: Accurate predictions of who might buy a product or subscribe ought to cut down on unwanted sales calls to the rest of us!

