Block Toy Manufacturing Process Optimization AI Contest


1. Objective

An AI contest hosted by Dacon and LG.

  • Background information: With the increase in demand for block toys around Children's Day, we aim to improve manufacturing process efficiency. Using AI-based approaches, we seek to develop process design algorithms that produce the right quantities at the right time.
  • Objective: Optimization of block toy manufacturing processes using artificial intelligence (AI)
  • Competition Description: Design optimal block toy production processes to meet specified demand. Develop process plans using AI-based algorithms and submit them in a CSV file.
2. Data Preprocessing

We focused on three aspects of the given data files: orders, max count, and stock, and preprocessed them with Python pandas. We initially standardized the data, but the resulting performance was unsatisfactory, so we switched to normalization. Because the data is a time series, we also added a variable that reflects this by incorporating a 30-day window. Finally, we used Samplesubmission.csv to insert the resulting values in the required submission format.
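
Below is a minimal pandas sketch of these preprocessing steps. The exact file names, the "time" column, and the rolling-mean feature name are assumptions for illustration; they mirror the three inputs mentioned above plus the sample submission file.

```python
import pandas as pd

# File and column names are assumptions; they mirror the three inputs
# mentioned above (orders, max count, stock) plus the sample submission.
order = pd.read_csv("order.csv")
max_count = pd.read_csv("max_count.csv")
stock = pd.read_csv("stock.csv")


def min_max_normalize(df, cols):
    """Min-max normalize the selected columns to [0, 1]
    (standardization was tried first but performed worse)."""
    out = df.copy()
    for col in cols:
        lo, hi = out[col].min(), out[col].max()
        out[col] = 0.0 if hi == lo else (out[col] - lo) / (hi - lo)
    return out


block_cols = [c for c in order.columns if c != "time"]  # assumed "time" column
order_norm = min_max_normalize(order, block_cols)

# Add a rolling 30-day window feature to reflect the time-series nature of demand.
for col in block_cols:
    order_norm[f"{col}_30d_mean"] = order_norm[col].rolling(30, min_periods=1).mean()

# Insert the generated schedule into the sample submission so the output
# matches the required format, then save it.
submission = pd.read_csv("Samplesubmission.csv")
# ... fill submission columns with the scheduled values ...
submission.to_csv("submission.csv", index=False)
```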

3. Hierarchical RL Multi-Model

Broadly, we used two main types of models, five models in total.

The first is the check model. Its state consists of inventory, inQueue, and orders, where inQueue refers to the blocks waiting to be produced. The actions are discrete: across the two lines there are 16 possible combinations (four check options per line), such as (check 1, check 1), (check 1, check 2), and so on. The reward is based on the scoring function defined in the competition rules. Through various experiments we found that Block 2 had a significant impact on the score, so we excluded Block 2 from the schedule and included only Blocks 1, 3, and 4 in the reward function. When building the check model we aimed for diverse actions: we set checks to a minimum of 28 hours and processes to a minimum of 98 hours. Once this first model is trained, the schedules for checks and processes are generated.

The second is the process model. We use one process model per block, four models in total. Their states include each block's inventory, inQueue, and orders after scheduling by the check model, as well as whether the processes run simultaneously or separately on the two lines. The actions are discrete, either 0 or the maximum value taken from maxcount.csv, which represents the number of parts produced for each process pair; we restricted the actions to these two values to keep the action space small. As with the check model, the reward function follows the scoring function defined in the competition rules.
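
To make the check model's state and action spaces concrete, here is a hypothetical Gym-style environment sketch. The class name CheckEnv, the four check labels, the planning horizon, and the placeholder reward and transition logic are assumptions for illustration, not the competition's actual simulator; the real reward follows the competition scoring function restricted to Blocks 1, 3, and 4.

```python
import itertools

import numpy as np
import gym
from gym import spaces


class CheckEnv(gym.Env):
    """Hypothetical sketch of the check model's environment.

    State: inventory, inQueue (blocks waiting to be produced), and the current
    orders. Action: one of the 16 (line 1 check, line 2 check) combinations.
    """

    CHECKS = ["check1", "check2", "check3", "check4"]   # assumed four check types
    ACTIONS = list(itertools.product(CHECKS, CHECKS))   # 4 x 4 = 16 combinations

    def __init__(self, orders, stock, horizon_hours=91 * 24):  # horizon assumed
        super().__init__()
        self.orders = orders                 # one demand row per day, one column per block
        self.init_stock = np.asarray(stock, dtype=np.float32)
        self.horizon_hours = horizon_hours
        self.action_space = spaces.Discrete(len(self.ACTIONS))
        obs_dim = 3 * len(self.init_stock)   # inventory + inQueue + current orders
        self.observation_space = spaces.Box(-np.inf, np.inf, (obs_dim,), np.float32)

    def reset(self):
        self.t = 0
        self.inventory = self.init_stock.copy()
        self.in_queue = np.zeros_like(self.inventory)
        return self._obs()

    def step(self, action):
        line1_check, line2_check = self.ACTIONS[action]
        # A real implementation would schedule the chosen checks here
        # (minimum 28 h per check, 98 h per process) and update inventory/in_queue.
        self.t += 28
        reward = self._reward()              # competition scoring on Blocks 1, 3, 4 only
        done = self.t >= self.horizon_hours
        return self._obs(), reward, done, {}

    def _obs(self):
        day = min(self.t // 24, len(self.orders) - 1)
        demand = self.orders.iloc[day].to_numpy(dtype=np.float32)
        return np.concatenate([self.inventory, self.in_queue, demand])

    def _reward(self):
        return 0.0                           # placeholder for the competition scoring function
```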

4. Hyperparameter Tuning

After trying various algorithms, we used the SMAC algorithm for hyperparameter tuning and PPO (Proximal Policy Optimization) for reinforcement learning. In the check model, the input data is fed to the agent, which takes actions accordingly. Each action leads to a new state in which order, stock, and inQueueMol are updated; the agent observes this new state and keeps taking actions until the episode is complete. Once training finishes, schedules are generated, and these are used to train a process model for each block. Initially a single process model was used, but we later split it into four block-specific models to improve performance. For example, when training the process model for Block 1, only schedules where Check 1 is active are considered; likewise, the process model for Block 2 is run on schedules where Check 2 is active. Just like the check model, the agent takes actions based on the incoming data, receives the next state (new order, stock, and inQueue), and repeats this process.
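
As an illustration of how SMAC-based tuning could wrap the PPO training described above, here is a minimal sketch using stable-baselines3 and the SMAC3 2.x facade API (exact imports differ across SMAC versions, and newer stable-baselines3 releases expect a gymnasium-style environment). CheckEnv refers to the sketch in section 3, orders and stock stand for the preprocessed data from section 2, and the hyperparameter ranges, trial budget, and evaluate helper are assumptions.

```python
from ConfigSpace import ConfigurationSpace, Float, Integer
from smac import HyperparameterOptimizationFacade, Scenario
from stable_baselines3 import PPO


def evaluate(model, env, episodes=3):
    """Average episode return; stands in for the competition scoring function."""
    total = 0.0
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            action, _ = model.predict(obs, deterministic=True)
            obs, reward, done, _ = env.step(action)
            total += reward
    return total / episodes


def train_ppo(config, seed: int = 0) -> float:
    env = CheckEnv(orders, stock)                      # env sketch from section 3
    model = PPO(
        "MlpPolicy", env,
        learning_rate=config["learning_rate"],
        gamma=config["gamma"],
        n_steps=int(config["n_steps"]),
        seed=seed, verbose=0,
    )
    model.learn(total_timesteps=100_000)
    return -evaluate(model, env)                       # SMAC minimizes, so negate the score


cs = ConfigurationSpace()
cs.add_hyperparameters([
    Float("learning_rate", (1e-5, 1e-3), log=True),    # assumed search ranges
    Float("gamma", (0.90, 0.9999)),
    Integer("n_steps", (64, 2048), log=True),
])

scenario = Scenario(cs, n_trials=50, deterministic=True)
smac = HyperparameterOptimizationFacade(scenario, train_ppo)
best_config = smac.optimize()
```

The same wrapper can be reused for each of the four block-specific process models by swapping in the corresponding environment and schedules.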

5. Result

Placed 3rd out of approximately 100 teams.

6. Supplementary Materials

Final Report

The final report can be found above.