
LG open-sources Auptimizer, a tool for optimizing AI models

Despite the proliferation of open source tools like Databricks' AutoML Toolkit, Salesforce's TransmogrifAI, and IBM's Watson Studio AutoAI, tuning machine learning algorithms at scale remains a challenge. Finding the right hyperparameters (the variables in an algorithm that control the overall model's performance) often involves time-consuming ancillary tasks like job scheduling and tracking parameters and their effects. That's why scientists at LG's Advanced AI division developed Auptimizer, an open source hyperparameter optimization framework intended to help with AI model tweaking and bookkeeping. It's available on GitHub.

As the team explains in a paper describing the work, Auptimizer simplifies the process of training large numbers of models under a variety of hyperparameter configurations, reproducibly. Like all hyperparameter optimization algorithms, it initializes a search space and configuration, proposes values for the hyperparameters, trains the target model, and updates the results. It then repeats the proposal, training, and updating stages until it identifies the optimal values.
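That propose-train-update loop can be sketched in a few lines of Python. The sketch below uses a generic random-search proposer for illustration; the function names and search space are ours, not Auptimizer's actual code:

```python
import random

# Illustrative propose-train-update loop; Auptimizer's real implementation differs.
search_space = {"lr": (1e-4, 1e-1), "batch_size": [16, 32, 64, 128]}

def propose(space):
    """Propose one hyperparameter assignment (random-search proposer)."""
    return {
        "lr": random.uniform(*space["lr"]),
        "batch_size": random.choice(space["batch_size"]),
    }

def train(hps):
    """Stand-in for the user's training script; returns a validation score."""
    return random.random()

best_score, best_hps = float("-inf"), None
for _ in range(20):                      # budget of 20 trials
    hps = propose(search_space)          # 1. propose values
    score = train(hps)                   # 2. train the target model
    if score > best_score:               # 3. update the results
        best_score, best_hps = score, hps

print(best_hps, best_score)
```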

For architecture search, Auptimizer adopts a gradient-based approach in which a model-based controller generates "child models" whose architectures are specified by variable-length strings. The controller uses the accuracy of trained child models as a reward signal, progressively assigning higher probabilities to architectures that achieve higher accuracy and thereby improving its search.
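The reward-driven update can be illustrated with a toy REINFORCE-style sketch. The architecture choices, fake accuracies, and update rule below are stand-ins written for illustration, not Auptimizer's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete choices the controller picks among, e.g. layer types.
ARCH_CHOICES = ["conv3x3", "conv5x5", "maxpool", "identity"]
ACCURACY = {"conv3x3": 0.9, "conv5x5": 0.8, "maxpool": 0.6, "identity": 0.5}

logits = np.zeros(len(ARCH_CHOICES))     # controller's unnormalized preferences
lr = 0.5

for _ in range(200):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                         # softmax over choices
    a = rng.choice(len(ARCH_CHOICES), p=probs)   # sample a child architecture
    reward = ACCURACY[ARCH_CHOICES[a]]           # child accuracy as reward
    onehot = np.eye(len(ARCH_CHOICES))[a]
    logits += lr * reward * (onehot - probs)     # REINFORCE gradient step

print(ARCH_CHOICES[int(np.argmax(logits))])      # highest-probability choice
```

Because higher-accuracy children yield larger rewards, the controller's probability mass drifts toward better architectures over successive samples.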

Above: Algorithms and infrastructure supported by LG's Auptimizer tool.

Auptimizer requires only a few lines of code to integrate, and it helpfully guides users step by step through experiment configuration. It supports switching among different hyperparameter algorithms and computing resources without requiring users to rewrite their training script, and it's designed to extend to new algorithms and resources without extensive modification.
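That switching is possible because the contract between the framework and a training script can be kept very thin. As a hypothetical sketch of such a contract (the real interface lives in Auptimizer's repository on GitHub; the details below, such as passing hyperparameters as a JSON file path and reporting the score on stdout, are our assumptions):

```python
import json
import sys

# Hypothetical contract: hyperparameters arrive in a JSON file passed as the
# first argument, and the script reports a single score. Auptimizer's actual
# interface may differ; consult the GitHub repository.

def main():
    with open(sys.argv[1]) as f:
        hps = json.load(f)               # e.g. {"lr": 0.01, "batch_size": 32}

    # ... the user's existing training code, unchanged except for where
    # the hyperparameter values come from ...
    score = 0.9                          # placeholder for validation accuracy

    print(score)                         # the framework parses this result

if __name__ == "__main__":
    main()
```

Because the script only reads values and reports a score, the proposer and the compute resource behind it can change without the script noticing.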

Once an experiment is defined and initialized, Auptimizer continuously checks for available resources and hyperparameter proposals and runs jobs to identify the best model. When a job finishes, it triggers a function that records and saves the results asynchronously. For advanced algorithms whose resulting scores must be matched with specific input hyperparameters, Auptimizer performs the mapping automatically and saves the hyperparameter values to a file so they can be restored for a particular job. It also tracks auxiliary values so they can be reused for other purposes, such as further model fine-tuning.
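A rough sketch of that dispatch-and-record loop, with illustrative names rather than Auptimizer's API, might look like this:

```python
import queue
import random
from concurrent.futures import ThreadPoolExecutor

# Minimal sketch of the check-resources / propose / run / record-async loop
# described above. All names here are illustrative, not Auptimizer's API.
resources = queue.Queue()
for r in ("gpu0", "gpu1"):
    resources.put(r)

results = {}                                   # trial id -> (hps, score)

def propose(i):
    return {"lr": random.uniform(1e-4, 1e-1), "trial": i}

def run_job(hps):
    return random.random()                     # stand-in for a training run

def make_callback(i, hps, resource):
    def record(future):
        # Asynchronous bookkeeping: match the score with its input
        # hyperparameters, then release the resource for the next job.
        results[i] = (hps, future.result())
        resources.put(resource)
    return record

with ThreadPoolExecutor(max_workers=2) as pool:
    for i in range(6):                         # budget of six trials
        resource = resources.get()             # blocks until one is free
        hps = propose(i)
        pool.submit(run_job, hps).add_done_callback(make_callback(i, hps, resource))

print(max(results.values(), key=lambda t: t[1]))   # best (hps, score) pair
```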

Users can specify the resources to be used in an experiment's configuration if they prefer, including processors, graphics chips, nodes, and public cloud instances like Amazon Web Services EC2. Auptimizer is compatible with existing resource management tools like Boto3, and it keeps track of available resources and jobs in a database. Plus, it provides a basic tool for visualizing results from the experiment history.
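To make the Boto3 compatibility concrete, a resource manager might check and start an EC2 instance along these lines. The instance ID is hypothetical, and this is not necessarily how Auptimizer uses Boto3 internally:

```python
import boto3  # AWS SDK for Python; requires configured credentials

INSTANCE_ID = "i-0123456789abcdef0"      # hypothetical EC2 instance

ec2 = boto3.client("ec2")
status = ec2.describe_instance_status(InstanceIds=[INSTANCE_ID])

# describe_instance_status returns entries only for running instances.
states = [s["InstanceState"]["Name"] for s in status["InstanceStatuses"]]
if states == ["running"]:
    print(f"{INSTANCE_ID} is free to accept a job")
else:
    ec2.start_instances(InstanceIds=[INSTANCE_ID])   # spin it up first
```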

The paper’s coauthors say that in the future Auptimizer will support end-to-end model building for edge devices, including model compression and neural architecture search.

“Auptimizer addresses a critical missing piece in the application aspect of … [the] research. It provides a universal platform to develop new algorithms efficiently. More importantly, Auptimizer lowers the barriers for data scientists in adopting [hyperparameter optimization] into their practice,” wrote the team. “Its scalability helps users to train their models efficiently with all computing resources available … This allows practitioners to quickly explore their ideas with advanced algorithms less laboriously.”
