FC1: Sales with Amazon Forecast

Sales forecasting in SAP with Amazon Forecast - Architecture

Let’s understand the flow first.


Complete the following prerequisites before you move on to Step 1.

  1. Create an Amazon S3 bucket. For this lab, create the S3 bucket in us-east-1.
  2. Connect your Amazon EC2 instance to Amazon S3. See this documentation to create a role for your EC2 instance with access to the S3 bucket, or deploy the AWS CLI on your local SAP machine (primary application server) with access to the S3 bucket.
  3. Send data to an S3 bucket in the Region where you want to consume Amazon Forecast. There are a couple of options to automate this process: for example, call the Amazon S3 REST API, or call an external OS command (aws s3 cp) configured in SM49 via the SAP function module SXPG_COMMAND_EXECUTE from the data extraction ABAP report. (A minimal upload sketch follows this list.)
  4. Configure Amazon Forecast: Once the data is in the S3 bucket, it takes only a few clicks to generate a forecast in the Amazon Forecast console. You do not need to be a data scientist or machine learning expert; simply follow the detailed configuration steps below.
  5. Consume predicted data in SAP S/4HANA or SAP Fiori for reporting: Once the forecast is ready for consumption, we must make it accessible from the SAP system. We use additional AWS services for this, which appear in the later steps of this lab: Amazon API Gateway, AWS Lambda, and AWS Secrets Manager. We recommend that you review the AWS documentation for these services.
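
If you prefer the S3 API route over the OS command (option in item 3 above), a minimal Python (boto3) sketch of the upload could look like the following; the bucket name, object key, and local file path are placeholders for this lab:

import boto3

# Upload an extracted CSV to S3. Bucket, key, and file path are placeholders.
s3 = boto3.client('s3', region_name='us-east-1')
s3.upload_file(
    Filename='/tmp/sap_sales_data.csv',      # local extract file
    Bucket='my-sap-forecast-bucket',         # your lab bucket
    Key='sapsalesdata/sap_sales_data.csv'    # target object key
)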

Step 1: Setting up SXPG_COMMAND_EXECUTE for the AWS S3 command

➡️ In this step, we create a command in SAP that calls the AWS CLI (using the EC2 instance role) to land the SAP data files in S3.

  1. Go to SAP transaction SM49 and click the Create option.


  2. Enter the command name ZAWSS3 (or a name of your choice) and set Operating System to Linux. In the Definition section, enter aws as the operating system command and s3 as the parameters for the operating system command. Click Save.


Step 2: Writing a custom ABAP program for data extraction

  1. In transaction SE38, create a Z program with a name of your choice (for example, Z_SAP_SALES_DATA_EXTRACTOR) for the ABAP report. We will export sales order line item data (VBAP) and sales order header data (VBAK) for forecasting. Click Create.


  2. We will use the following code snippet to extract the data. Review the notes below, then copy and paste the code snippet.

➡️ Note: this code is for demonstration purposes only.

  • In this code we run a simple query on sales line item data, which resides in table VBAP, joined with header information in table VBAK.
  • The features (SAP table fields) in this query are:
  • Material number (MATNR) and the sum of order quantity (KWMENG) per material from table VBAP. The output fields are extracted as item_id (MATNR) and demand (KWMENG).
  • The date on which the record was created (ERDAT). The output field is extracted as timestamp (ERDAT).
  • VBAP and the header table VBAK are joined on the sales document number (VBELN).
  • The WHERE clause filters on the SAP material numbers 'MZ-FG-C950' and 'MZ-FG-C900' (bike models). These are hard-coded for this lab, but you can filter materials dynamically, for example with selection parameters.
  • The result is grouped by P~MATNR P~ERDAT.
  • Once extracted, the data is saved as a CSV file on the SAP primary application server, e.g. /usr/sap/S4E/D00/work/<filename>.csv, as in the code below.
  • The function module SXPG_COMMAND_EXECUTE is called to run the operating system command and place the file in the S3 path, using the AWS IAM role assigned to the server (or AWS access keys configured for the AWS CLI).
*&---------------------------------------------------------------------*
*& Report Z_SAP_SALES_DATA_EXTRACTOR
*&---------------------------------------------------------------------*
*&
*&---------------------------------------------------------------------*
REPORT z_sap_sales_data_extractor.

DATA : BEGIN OF it_sales_data OCCURS 0,
         matnr  LIKE vbap-matnr,
         erdat  LIKE vbap-erdat,
         kwmeng LIKE vbap-kwmeng,
       END OF it_sales_data.

TABLES vbap.
TABLES vbak.

* In this code we run a simple query on sales line item data in table VBAP
* joined with header information in table VBAK.
* The features (SAP table fields) in this query are:
* - Material number (MATNR) and the sum of order quantity (KWMENG) per material
*   from table VBAP, extracted as item_id (MATNR) and demand (KWMENG).
* - The date on which the record was created (ERDAT), extracted as timestamp (ERDAT).
* - VBAP is joined to VBAK on the sales document number (VBELN).
* - The WHERE clause filters on materials 'MZ-FG-C950' and 'MZ-FG-C900' (bike models),
*   hard-coded for this lab; you could use selection parameters instead.
* - Finally the result is grouped by P~MATNR P~ERDAT.

SELECT p~matnr
       p~erdat
       SUM( p~kwmeng )
  FROM vbak AS k INNER JOIN vbap AS p ON k~vbeln = p~vbeln
  INTO TABLE it_sales_data
  WHERE p~matnr IN ('MZ-FG-C950','MZ-FG-C900')
  GROUP BY p~matnr p~erdat.

*LOOP AT IT_SALES_DATA.
*  WRITE : IT_SALES_DATA-MATNR,
*          IT_SALES_DATA-ERDAT,
*          IT_SALES_DATA-KWMENG.
*  NEW-LINE.
*ENDLOOP.

*call function 'GUI_DOWNLOAD'
*     exporting
*          filename = 'C:/Users/Administrator/Desktop/reportout.txt'
*     tables
*          data_tab = IT_SALES_DATA.

*item_id,demand,timestamp

DATA lt_file_table TYPE rsanm_file_table.
DATA ls_file_table TYPE rsanm_file_line.

DATA: lv_lines_written TYPE i.

APPEND 'item_id,demand,timestamp' TO lt_file_table.

LOOP AT it_sales_data.
  DATA s_demand TYPE string.
  s_demand = it_sales_data-kwmeng.
  DATA s_date TYPE string.
  CONCATENATE it_sales_data-erdat+0(4) it_sales_data-erdat+4(2) it_sales_data-erdat+6(2) INTO s_date SEPARATED BY '-'.
  CONDENSE s_demand.
  CONDENSE s_date.
  CONCATENATE it_sales_data-matnr ',' s_demand ',' s_date  INTO ls_file_table.
  APPEND ls_file_table TO lt_file_table.
ENDLOOP.

* Use FM GENERAL_GET_RANDOM_STRING: pass the desired length and the FM
* generates a random alphabetical string, used here as the file name.

DATA : filename TYPE string.
DATA : filepath TYPE string.
CALL FUNCTION 'GENERAL_GET_RANDOM_STRING'
  EXPORTING
    number_chars  = 32
  IMPORTING
    random_string = filename.

* Write the extracted data to the application server file path as CSV.

CONCATENATE '/usr/sap/S4E/D00/work/' filename '.csv' INTO filepath.

CALL METHOD cl_rsan_ut_appserv_file_writer=>appserver_file_write
  EXPORTING
    i_filename      = filepath
    i_overwrite     = abap_true
    i_data_tab      = lt_file_table
  IMPORTING
    e_lines_written = lv_lines_written
  EXCEPTIONS
    open_failed     = 1
    write_failed    = 2
    close_failed    = 3
    OTHERS          = 4.
IF sy-subrc <> 0.
  MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
             WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ELSE.
  WRITE :/ filepath, ' created'.
ENDIF.

DATA: lv_status  TYPE extcmdexex-status,
      lv_retcode TYPE extcmdexex-exitcode,
      lt_output  TYPE STANDARD TABLE OF btcxpm.
FIELD-SYMBOLS:
<lfs_output> LIKE LINE OF lt_output.

* Send the file to the S3 bucket via the external OS command.
DATA addcommand TYPE btcxpgpar.
DATA s3path TYPE string.
CONCATENATE 's3://ft/sapsalesdata/' filename '.csv' INTO s3path.
CONDENSE s3path.

CONCATENATE 'cp' filepath  s3path INTO addcommand SEPARATED BY ' '.

CALL FUNCTION 'SXPG_COMMAND_EXECUTE'
  EXPORTING
    commandname                   = 'ZAWSS3'
    additional_parameters         = addcommand
    operatingsystem               = 'Linux'
  IMPORTING
    status                        = lv_status
    exitcode                      = lv_retcode
  TABLES
    exec_protocol                 = lt_output
  EXCEPTIONS
    no_permission                 = 1
    command_not_found             = 2
    parameters_too_long           = 3
    security_risk                 = 4
    wrong_check_call_interface    = 5
    program_start_error           = 6
    program_termination_error     = 7
    x_error                       = 8
    parameter_expected            = 9
    too_many_parameters           = 10
    illegal_command               = 11
    wrong_asynchronous_parameters = 12
    cant_enq_tbtco_entry          = 13
    jobcount_generation_error     = 14
    OTHERS                        = 15.

IF lv_retcode = 0.
  WRITE : / 'File ', filepath , 'copied to ', s3path.
ENDIF.

*"Display the results of the command:
*LOOP AT LT_OUTPUT ASSIGNING <LFS_OUTPUT>.
*  WRITE: / <LFS_OUTPUT>-MESSAGE.
*ENDLOOP.

  3. Update the S3 file path in the code to point to your bucket.


  4. Format the code by clicking Pretty Printer, and click the Check option to verify that there are no errors.


  5. Click the Activate icon, click the green check mark, and save.


  6. Execute the program.


  7. Check the output message to verify that the data was successfully extracted to the Amazon S3 bucket. In the next steps, we will use this extracted data for forecasting.


Step 3: Getting Started with Amazon Forecast (Console)

➡️ In this exercise, you use the Amazon Forecast console to import the extracted sales data, with the date of sale as the only feature. In practice, you should consider all features relevant to your use case, such as product attributes (color, shape, size), time of year, and seasonality, for reliable forecasting.

  1. Sign in to the AWS Management Console and open the Amazon S3 bucket that you used in the previous step for the extracted files.


  2. Download the file to validate the data.


item_id      demand  timestamp
MZ-FG-C900   124     10/18/17
MZ-FG-C900   124     10/18/17
MZ-FG-C950   102     10/18/17
  3. Sign in to the AWS Management Console and open the Amazon Forecast console at https://console.aws.amazon.com/forecast/

  4. On the Amazon Forecast home page, choose Create dataset group.

  5. On the Create dataset group page, for Dataset group details, provide the following information:

  • Dataset group name – Enter a name for your dataset group.

  • Forecasting domain – From the drop-down menu, choose Custom. For more information about how to choose a forecasting domain, see How Amazon Forecast Works and dataset domains and types.


  6. On the Create target time series dataset page, for Dataset details, provide the following information:
  • Dataset name – Enter a name for your dataset.

  • Frequency of your data – Keep the default value of 1, and choose day from the drop-down menu. This setting must be consistent with the input time series data.

  • Data schema – Update the schema to match the columns of the time series data in both data type and order. For this lab's input data, the columns correspond to: the ID of the material to forecast (item_id), the demand on the given day (the target value), and a timestamp.

  • The timestamp format is yyyy-MM-dd.
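
If you want to script the dataset setup instead of clicking through the console, a minimal boto3 sketch might look like the following; the names are placeholders, and the schema matches the lab's CSV column order (item_id, demand, timestamp):

import boto3

forecast = boto3.client('forecast', region_name='us-east-1')

# Placeholder names for this lab.
dsg = forecast.create_dataset_group(
    DatasetGroupName='my_erp_sales_data',
    Domain='CUSTOM'
)

ds = forecast.create_dataset(
    DatasetName='sap_sales',
    Domain='CUSTOM',
    DatasetType='TARGET_TIME_SERIES',
    DataFrequency='D',  # daily, matching the extract
    Schema={'Attributes': [
        {'AttributeName': 'item_id',   'AttributeType': 'string'},
        {'AttributeName': 'demand',    'AttributeType': 'float'},
        {'AttributeName': 'timestamp', 'AttributeType': 'timestamp'}
    ]}
)

forecast.update_dataset_group(
    DatasetGroupArn=dsg['DatasetGroupArn'],
    DatasetArns=[ds['DatasetArn']]
)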


  7. Choose Next. On the Import target time series data page, for Dataset import job details, provide the following information:

  • Dataset import job name – Enter a name for your dataset import job.

  • Timestamp format – Select yyyy-MM-dd. The format must be consistent with the input time series data.

  • IAM role – Keep the default Enter a custom IAM role ARN.

  • Alternatively, you can have Amazon Forecast create the required IAM role for you by choosing Create a new role from the drop-down menu and following the on-screen instructions.

  • Custom IAM role ARN – Enter the Amazon Resource Name (ARN) of the IAM role that you created in Create an IAM Role for Amazon Forecast (IAM Console).

  • Data location – Use the following format to enter the location of your .csv file on Amazon S3:

s3://<bucket name>/<folder path>/<filename.csv>
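
The import step itself can likewise be scripted; a minimal boto3 sketch (the dataset ARN, S3 path, and IAM role ARN are placeholders) might look like:

import boto3

forecast = boto3.client('forecast', region_name='us-east-1')

# Placeholder ARNs and S3 path for this lab.
forecast.create_dataset_import_job(
    DatasetImportJobName='sap_sales_import',
    DatasetArn='arn:aws:forecast:us-east-1:<account>:dataset/sap_sales',
    DataSource={'S3Config': {
        'Path': 's3://<bucket name>/<folder path>/<filename.csv>',
        'RoleArn': 'arn:aws:iam::<account>:role/<forecast-role>'
    }},
    TimestampFormat='yyyy-MM-dd'
)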


  8. Choose Start import.


  9. The dataset group's Dashboard page is displayed. Wait for the dataset to be imported.
  • Copy the Dataset ARN to your clipboard for the next lab. You will update the Dataset ARN in the Lambda function.


Step 4: Train a predictor

To create a predictor, which is a trained model, choose an algorithm and the number (length times frequency) of predictions to make. You can choose a particular algorithm, or you can choose AutoML to have Amazon Forecast process your data and choose an algorithm to best suit your dataset group.

Amazon Forecast provides six built-in algorithms for you to choose from. These range from commonly used statistical algorithms like Autoregressive Integrated Moving Average (ARIMA), to complex neural network algorithms like CNN-QR and DeepAR+.

For information about algorithms, see Choosing an Amazon Forecast Algorithm.

  1. After your target time series dataset has finished importing, go to your dataset group's Dashboard. Under Train a predictor, choose Start. The Train predictor page is displayed.

  2. On the Train predictor page, for Predictor details, provide the following information:
  • Predictor name – Enter a name for your predictor.


  • Forecast horizon – Choose how far into the future to make predictions. This number, multiplied by the data entry frequency (daily) that you specified when you imported the training data, determines how far into the future predictions are made. For this exercise, set the number to 365 to provide predictions for one year.

  • Forecast frequency – Keep the default value of 1. From the drop-down menu, choose day. This setting must be consistent with the input time series data; the time interval in the sales data is one day.

  • Algorithm selection – Keep the default value Manual. From the drop-down menu, choose the ETS algorithm. For more information about the available algorithms, see Choosing an Amazon Forecast Algorithm.

  • The remaining settings are optional, so leave the default values. Choose Train predictor.
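
If you script this step instead, a minimal boto3 sketch (the predictor name and dataset group ARN are placeholders) might look like:

import boto3

forecast = boto3.client('forecast', region_name='us-east-1')

# Placeholder name and ARN for this lab.
response = forecast.create_predictor(
    PredictorName='my_sap_sales_predictor',
    AlgorithmArn='arn:aws:forecast:::algorithm/ETS',  # manual algorithm selection
    ForecastHorizon=365,                              # 365 x daily = one year
    PerformAutoML=False,
    InputDataConfig={'DatasetGroupArn': 'arn:aws:forecast:us-east-1:<account>:dataset-group/my_erp_sales_data'},
    FeaturizationConfig={'ForecastFrequency': 'D'}
)
print(response['PredictorArn'])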


  3. Your dataset group's Dashboard page is displayed.


  4. Under Predictor training, you can see the training status. Wait for Amazon Forecast to finish training the predictor; this can take several minutes or longer. When the predictor has been trained, the status changes to Active.


Step 5: Create a Forecast

➡️ To make predictions (inferences), you use a predictor to create a forecast. A forecast is a group of predictions, one for every item in the target dataset. To retrieve the prediction for a single item, you query the forecast. To retrieve the complete forecast, you create an export job.

  1. Wait until your predictor has finished training, then return to your dataset group's Dashboard.

  2. Under Forecast generation, choose Start. The Create a forecast page is displayed.

  3. On the Create a forecast page, for Forecast details, provide the following information:

Forecast name – Enter a name for your forecast.

Predictor – From the drop-down menu, choose the predictor that you created in Step 4: Train a predictor.

The remaining setting is optional, so leave the default value.


  4. Choose Create a forecast. The dataset group's Dashboard page is displayed.


  • Copy the Forecast ARN to your clipboard for the next lab. You will update the Forecast ARN in the Lambda function.
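
The same step via the API, as a minimal boto3 sketch (the predictor ARN is a placeholder):

import boto3

forecast = boto3.client('forecast', region_name='us-east-1')

# Placeholder predictor ARN from the previous step.
response = forecast.create_forecast(
    ForecastName='my_erp_forecast',
    PredictorArn='arn:aws:forecast:us-east-1:<account>:predictor/my_sap_sales_predictor'
)
print(response['ForecastArn'])  # save this for the Lambda function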


Step 6: Retrieve a Forecast

➡️ After the forecast has been created, you can query for a single item or export the complete forecast.

To query for a single item
  1. If the dashboard is not displayed, in the navigation pane, under your dataset group, choose Dashboard.

  2. In the Dashboard, under Generate forecasts, choose Lookup forecast. The Forecast lookup page is displayed.

  3. On the Forecast lookup page, for Forecast details, provide the following information.

  • Forecast – From the drop-down menu, choose the forecast that you created in Step 5: Create a Forecast.

  • Start date – Enter the start date for the period you want to view. Keep the default time of 00:00:00.

  • End date – Enter the end date for the forecast that you want to view. Keep the default time of 00:00:00.

  • The date range must fall within the forecast horizon that you specified in Step 4: Train a predictor.

  • Choose which keys/filters – Choose Add forecast key.

  • Forecast key – From the drop-down menu, choose item_id.

  • Value – Enter a value from the item_id column of the input time series of the sales data. An item_id (for example, MZ-FG-C900) identifies a particular item included in the dataset.


  4. Choose Get Forecast. When the forecast is displayed, review the predictions for the material you queried (for example, MZ-FG-C950).


Accuracy metrics:
  • Amazon Forecast produces accuracy metrics to evaluate predictors and help you choose which to use to generate forecasts. Forecast evaluates predictors using Root Mean Square Error (RMSE), Weighted Quantile Loss (wQL), and Weighted Absolute Percentage Error (WAPE) metrics.

Forecast enables you to evaluate predictors using different forecast types, which can be a set of quantile forecasts and the mean forecast. The mean forecast provides a point estimate, whereas quantile forecasts typically provide a range of possible outcomes.

By default, Forecast computes wQL at 0.1 (P10), 0.5 (P50), and 0.9 (P90).
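
For reference, the Amazon Forecast documentation defines the weighted quantile loss at quantile $\tau$ as

$$\mathrm{wQL}[\tau] = 2\,\frac{\sum_{i,t}\left[\tau\,\max\left(y_{i,t}-q_{i,t}^{(\tau)},\,0\right)+(1-\tau)\,\max\left(q_{i,t}^{(\tau)}-y_{i,t},\,0\right)\right]}{\sum_{i,t}\left|y_{i,t}\right|}$$

where $y_{i,t}$ is the observed demand for item $i$ at time $t$ and $q_{i,t}^{(\tau)}$ is the predicted $\tau$-quantile.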

  • P10 (0.1) - For the p10 forecast, the true value is expected to be lower than the predicted value 10% of the time, and the wQuantileLoss[0.1] can be used to assess its accuracy. For a use case when there is not a lot of storage space and the cost of invested capital is high, or the price of being overstocked on an item is of concern, the p10 quantile forecast is useful to order a relatively low number of stock. The p10 forecast over-estimates the demand for an item only 10% of the time, meaning that approximately 90% of the time, that item will be sold out.

  • P50 (0.5) - For the p50 forecast (often also called the median) the true value is expected to be lower than the predicted value 50% of the time, and the wQuantileLoss[0.5] can be used to assess its accuracy. When being overstocked is not too concerning, and there is a moderate amount of demand for a given item, the p50 quantile forecast can be useful.

  • P90 (0.9) - For the p90 forecast, the true value is expected to be lower than the predicted value 90% of the time, and the wQuantileLoss[0.9] can be used to assess its accuracy. When being understocked on an item will result in a large amount of lost revenue, that is, the cost of not selling the item is extremely high or the cost of invested capital is low, the p90 forecast can be useful to order a relatively high number of stock.

➡️ See the Amazon Forecast whitepaper for deeper insights.

Step 7: Export the complete forecast

To export the complete forecast:

  1. In the navigation pane, under your dataset group, choose Forecasts.

  2. Choose the radio button next to the forecast that you created in Step 5: Create a Forecast.

  3. Choose Create forecast export. The Create forecast export page is displayed.

  4. On the Create forecast export page, for Export details, provide the following information.

  • Export name – Enter a name for your forecast export job.

  • Generated forecast – From the drop-down menu, choose the forecast that you created in Step 5: Create a Forecast.

  • IAM role – Keep the default Enter a custom IAM role ARN.

  • Alternatively, you can have Amazon Forecast create the required IAM role for you by choosing Create a new role from the drop-down menu and following the on-screen instructions.

  • Custom IAM role ARN – Enter the Amazon Resource Name (ARN) of the IAM role that you created in Create an IAM Role for Amazon Forecast (IAM Console).

  • S3 forecast export location – Use the following format to enter the location of your Amazon Simple Storage Service (Amazon S3) bucket or folder in the bucket:

    s3://<name of your S3 bucket>/<folder path>/


  5. Choose Create forecast export. The my_forecast page is displayed.


  • You should see the status progress. Wait for Amazon Forecast to finish exporting the forecast. The process can take several minutes or longer. When your forecast has been exported, the status transitions to Active and you can find the forecast files in your S3 bucket.
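
As with the previous steps, the export can also be started from code; a minimal boto3 sketch (the forecast ARN, bucket, and role ARN are placeholders) might look like:

import boto3

forecast = boto3.client('forecast', region_name='us-east-1')

# Placeholder ARNs and S3 destination for this lab.
forecast.create_forecast_export_job(
    ForecastExportJobName='my_forecast_export',
    ForecastArn='arn:aws:forecast:us-east-1:<account>:forecast/my_erp_forecast',
    Destination={'S3Config': {
        'Path': 's3://<name of your S3 bucket>/<folder path>/',
        'RoleArn': 'arn:aws:iam::<account>:role/<forecast-role>'
    }}
)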
Step 8: Configure Lambda function

➡️ We use a Lambda function with API Gateway proxy integration, so the request parameters are passed as-is to the Lambda function.

  1. To create a Lambda function with the console, open the Functions page in the Lambda console.

  2. Choose Create function.

  3. Under Basic information, do the following:

    • For Function name, enter my-function.

    • For Runtime, confirm that Python 3.7 is selected.

    • Choose Create function.

    • Lambda creates a Python 3.7 function and an execution role that grants the function permission. The Lambda function assumes the execution role when you invoke your function, and uses the execution role to create credentials for the AWS SDK and to read data from event sources.

  4. The Python code snippet below shows how the request and the response are processed in the Lambda function attached to API Gateway. First, we validate the credentials passed by the SAP system against the credentials stored in AWS Secrets Manager.

    • After successful credential validation, we extract the parameters passed by the SAP system, such as the item ID, from date, and to date. Then we invoke the QueryForecast API with these parameters.
    • On successful execution, Amazon Forecast returns data in a JSON format.
    • You can format the result in Lambda before returning it to SAP. For example, the snippet below returns a structure that maps directly to an ABAP internal table in the SAP system. You can customize it to your needs or process it in the SAP system if you prefer.
    [
      {
        "I": "string",
        "D": "string",
        "ME": "string",
        "P10": "string",
        "P50": "string",
        "P90": "string"
      }
    ]
  5. Here is the complete Lambda code snippet for the lab. Copy and paste the code into the function code editor.

import json
import boto3
import base64
from botocore.exceptions import ClientError
 
forecastclient = boto3.client('forecastquery')
forecastARNclient = boto3.client('forecast')
 
 
def lambda_handler(event, context):
   
    if ('Authorization' not in event['headers']):
        # User ID Password not provided
        return { "statusCode": 401,"isBase64Encoded" : False }
    else:
        secret_name = "SAP_Sales_Forecast_UserPass"
        region_name = "us-east-1"
        # Create a Secrets Manager client
        session = boto3.session.Session()
        client=session.client(service_name='secretsmanager',region_name=region_name)
   
        # Get secret from secrets manager
        get_secret_value_response = client.get_secret_value(SecretId=secret_name) 
        secret = json.loads(get_secret_value_response['SecretString'])
   
        # Get credentials from authorization header and decode to string
        userdet= event['headers']['Authorization'].split()[1]
        userdet = base64.b64decode(userdet).decode("utf-8")
       
        if secret["SAPSALES"] != userdet.split(":")[1]:
            # Password did not match
            return { "statusCode": 401, "isBase64Encoded" : False }
        else:
            # Password matched, authorization successful
            itemid = event['queryStringParameters']['item_id']
            fromdate = event['queryStringParameters']['from_date']
            todate = event['queryStringParameters']['to_date']
           
            response = get_forecast(itemid,fromdate,todate)
            result = []

            predictions = response['Forecast']['Predictions']
            for num, res in enumerate(predictions['p90']):
                data = {}
                data['I'] = itemid
                data['D'] = res['Timestamp'][0:10]
                if 'mean' in predictions:
                    data['ME'] = str(round(predictions['mean'][num]['Value'], 2))
                if 'p10' in predictions:
                    data['P10'] = str(round(predictions['p10'][num]['Value'], 2))
                if 'p50' in predictions:
                    data['P50'] = str(round(predictions['p50'][num]['Value'], 2))
                if 'p90' in predictions:
                    data['P90'] = str(round(predictions['p90'][num]['Value'], 2))
                if 'p99' in predictions:
                    data['P99'] = str(round(predictions['p99'][num]['Value'], 2))
                result.append(data)
       
            return {
                "statusCode": 200,
                "isBase64Encoded" : False,
                "body": json.dumps(result),
                "headers" : { 'Content-Type' : 'application/json' },
                "multiValueHeaders" : {}
            }
 
 
def get_forecast(item_id,from_date,to_date):
   
    # Optional: list the forecasts in the dataset group (the ARN below is a
    # masked placeholder; replace it with your dataset group ARN).
    response = forecastARNclient.list_forecasts(
        MaxResults=50,
        Filters=[
            {
                'Key': 'DatasetGroupArn',
                'Value': 'arn:aws:forecast:us-east-1:********:dataset-group/my_erp_sales_data',
                'Condition': 'IS'
            },
        ]
    )

    for item in response['Forecasts']:
        print(item['ForecastArn'])
        
    response = forecastclient.query_forecast(
            ForecastArn='arn:aws:forecast:us-east-1:******:forecast/my_erp_forecast',
            StartDate = from_date+'T00:00:00',
            EndDate = to_date+'T00:00:00',
            Filters = {
               'item_id' : item_id
            }
    )
    return response
  6. Replace the following values in the code with your own dataset group and forecast ARNs that you copied in the previous steps, then click Deploy.
  • Region – the code was written for us-east-1. If you created your forecast in a different Region, update the region_name variable in the Lambda code accordingly.
  • 'Value' in list_forecasts – replace 'arn:aws:forecast:us-east-1:********:dataset-group/my_erp_sales_data' with your dataset group ARN.
  • ForecastArn in query_forecast – replace 'arn:aws:forecast:us-east-1:******:forecast/my_erp_forecast' with your forecast ARN.
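
To try the function in the Lambda console before wiring up API Gateway, you can build a test event shaped like the API Gateway proxy payload. Here is a minimal sketch that prints such an event; the user name, password, item, and dates are placeholders, and the part after the colon must match the value stored under the SAPSALES key in Secrets Manager:

import base64
import json

# Hypothetical credentials; the part after ':' must equal the secret value.
auth = base64.b64encode(b'sapuser:my-secret-password').decode('utf-8')

event = {
    'headers': {'Authorization': 'Basic ' + auth},
    'queryStringParameters': {
        'item_id': 'MZ-FG-C900',
        'from_date': '2018-01-01',
        'to_date': '2018-01-10'
    }
}
print(json.dumps(event, indent=2))  # paste the output as a Lambda test event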


Congratulations, you have completed the initial steps: extracting the SAP sales data, creating a forecast with Amazon Forecast, and building a Lambda function to query the results for SAP. In the next lab, we will configure the integration between SAP and AWS to consume the forecast data in SAP.