BW1: SAP BW to S3 using Open Hub

The following architecture diagram shows the workflow for pulling SAP BW data using an Open Hub file destination to land the data in S3.

The Open Hub destination allows you to distribute data from a BW system to non-SAP data marts, analytical applications, and other applications, and it ensures controlled distribution across several systems.

Open Hub supports full and delta extraction modes, making it convenient to extract from InfoProviders such as InfoCubes, DSOs, ADSOs, or PSAs. Open Hub destinations include files, database tables, and third-party destinations. See the SAP Open Hub documentation for detailed information.

Hybrid cloud storage service with AWS

-> AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage.

-> Customers use Storage Gateway to simplify storage management and reduce costs for key hybrid cloud storage use cases. These include moving tape backups to the cloud, reducing on-premises storage with cloud-backed file shares, providing low latency access to data in AWS for on-premises applications, as well as various migration, archiving, processing, and disaster recovery use cases.

-> For this lab, we will use the file gateway component of AWS Storage Gateway. The file gateway acts as a file share that is mapped to an S3 bucket.

-> The file share is NFS-mounted as a directory on the BW application server. The detailed steps to NFS-mount an S3 bucket onto the SAP BW application server and transfer a cube to the S3 bucket are described below.

Step-by-step guide to creating a data flow between BW and S3 using an SAP Open Hub connection

Please note that you can use similar flows to replicate from any InfoProvider, such as ADSOs.

AWS account prerequisites
  1. Set up your AWS account.
  2. Create an S3 bucket and a folder within the bucket, for example "Product".
  3. Create an IAM policy that grants full access to this bucket.
  4. Create a user with programmatic access and attach the policy to the user.
  5. Download the access key and secret key for the user; these credentials are applied later in Data Services. (A CLI sketch of these steps follows this list.)
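If you prefer to script these prerequisites, the following is a minimal AWS CLI sketch. The bucket name sapbwdemo matches the one used later in this lab; the user name bw-openhub and the policy file policy.json are illustrative placeholders, not names from the original lab.

# Create the bucket and a "Product" folder (a zero-byte prefix object).
aws s3 mb s3://sapbwdemo
aws s3api put-object --bucket sapbwdemo --key Product/

# Create a programmatic user and attach an inline policy scoped to the
# bucket (policy.json should grant s3:* on the bucket and its objects).
aws iam create-user --user-name bw-openhub
aws iam put-user-policy --user-name bw-openhub \
    --policy-name sapbwdemo-full-access \
    --policy-document file://policy.json

# Generate the access key and secret key to download and keep safe.
aws iam create-access-key --user-name bw-openhub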

AWS Storage Gateway setup:

  1. To create a file gateway, open the Storage Gateway console, click Get started, and choose File gateway. See the AWS Storage Gateway documentation for more setup instructions.

  2. Then choose Amazon EC2 as your host platform.

  3. You will be guided to choose an AMI; please choose at least m4.xlarge. Under Storage, add a storage volume by clicking Add New Volume as shown below.

➡️ For inbound security rules, please ensure that ports 443, 80, and 2049 are accessible from your desktop and from the SAP BW instance (a CLI sketch follows).
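If you prefer to open these ports from the command line, here is a sketch; the security group ID and the CIDR ranges for your desktop and the BW host's VPC are placeholders you must substitute.

# HTTPS and HTTP are used to activate and manage the gateway.
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 --cidr 203.0.113.0/24
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 --cidr 203.0.113.0/24

# NFS is mounted from the SAP BW host.
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 2049 --cidr 10.0.0.0/16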

  4. Once the instance is launched, copy the public IP address from the EC2 console and paste it in the next step.

  5. The last step is to set the time zone, give your gateway a name, for example BWgateway, and click Activate Gateway.

  6. Once the gateway is activated, it configures a local disk on the volume added in the instance configuration. You can assign this disk to the cache in the next step (see the CLI sketch below).
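The console handles activation and cache assignment for you; if you ever need to script it, the sketch below shows the equivalent AWS CLI calls. The region, time zone, and ARN values are placeholders, and 18.xxx.xxx.xxx stands for the gateway's public IP from the EC2 console.

# The running gateway instance answers on port 80 and redirects to a URL
# whose Location header contains activationKey=...
curl -s -D - "http://18.xxx.xxx.xxx/?activationRegion=us-east-1" | grep -i location

# Activate the appliance as a file gateway.
aws storagegateway activate-gateway \
    --activation-key <key-from-redirect-url> \
    --gateway-name BWgateway \
    --gateway-timezone GMT-5:00 \
    --gateway-region us-east-1 \
    --gateway-type FILE_S3

# Find the local disk added at launch and assign it as cache.
aws storagegateway list-local-disks --gateway-arn <gateway-arn>
aws storagegateway add-cache --gateway-arn <gateway-arn> \
    --disk-ids <disk-id-from-list-local-disks>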

➡️ Your gateway is now up and running and you can see it in the console.

  7. The next step is to create a file share and map it to an existing S3 bucket.

Click on Create file share

➡️ Be sure to enter the bucket name that you created earlier. You can also configure allowed clients; for the sake of establishing the concept, we will allow all connections.

➡️ In your AWS environment, please ensure that you only allow connections from the BW host (a CLI sketch of the file share creation follows).
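For reference, here is a minimal CLI sketch of creating the same NFS file share. The gateway ARN, role ARN, and client CIDR are placeholders; the role is one the gateway assumes to read and write the bucket.

aws storagegateway create-nfs-file-share \
    --client-token bw-fileshare-token-001 \
    --gateway-arn <gateway-arn> \
    --location-arn arn:aws:s3:::sapbwdemo \
    --role arn:aws:iam::111122223333:role/StorageGatewayBucketAccess \
    --client-list 10.0.0.0/16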

  8. The file share details appear as shown below.

➡️ Note the Linux NFS mount configuration shown in the file share details. We will use it on our BW host.

SAP BW – Linux steps

  1. Log in to the Linux host of your SAP sandbox.
  2. Create a directory: mkdir /gateway
  3. Assign permissions: chmod 744 /gateway
  4. Change the owner to <sid>adm:sapsys:

sudo chown -R <sid>adm:sapsys /gateway

  5. Copy the NFS mount command for Linux from the file share details, appending the directory name /gateway:

sudo mount -t nfs -o nolock,hard 18.xxx.xxx.xxx:/sapbwdemo /gateway

➡️ In this case, 18.xxx.xxx.xxx is the public IP of my storage gateway, sapbwdemo is my S3 bucket, and /gateway is the Linux directory that I map to the bucket.
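To confirm the mount took effect, standard Linux checks are enough; nothing here is specific to this lab.

# Show the mounted NFS share, its size, and usage.
df -h /gateway
mount | grep /gateway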

  6. Create a file or subdirectory and check that it appears in your S3 bucket (a CLI check follows the commands below).

cd /gateway

<sid>adm:/gateway # mkdir sap
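To confirm the write-through to S3, list the bucket from any machine with the AWS CLI configured; this assumes the sapbwdemo bucket used in this lab.

# The sap/ prefix created above should appear within a few seconds.
aws s3 ls s3://sapbwdemo/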

SAP BW Application steps

  1. Log in to the SAP BW system through SAP GUI.
  2. Enter transaction AL11.
  3. Click on the Configure User Directories icon.

Give the directory name as DIR_AWS, the path as /gateway, and Valid for Server Name as All, then hit Save.

Double-click the directory in AL11. It should be in sync with what you see in your S3 bucket. The next step is to define a logical file path and logical file name.

  4. Open a new transaction called FILE.
  5. Select Logical File Path Definition and choose New Entries.
  6. Enter the logical file path ZBWAWSFILE with the name BW AWS, as below.

  7. Select the entry and click Assignment of Physical Paths. We map the gateway directory and form a dynamic file name, as below.

  8. The physical path is /gateway/_daily.csv_.

➡️ Select Logical File Name Definition, Cross-Client and create a new entry as below. Note that the logical file path we created earlier is assigned here.

➡️ A logical file ZBWAWSOUT is created and assigned to the NFS directory shared with AWS Storage Gateway.

Setting up the Open Hub destination

  1. Go to transaction RSA1.
  2. Under Modelling, select Open Hub Destination.
  3. Under the InfoArea NetWeaver Demo, right-click and select Create Open Hub Destination.
  4. Provide a name, a description, and your source object (a cube) as indicated below. (This is just an example; you can choose any source.)

  5. Choose File as the destination type, select Application Server, and set Type of File Name to Logical file name, as indicated below.

  6. Go to the Field Definition tab to verify the fields.
  7. Check and activate the destination.
  8. The destination is created.
  9. Right-click the destination and select Create Data Transfer Process.
  10. A pop-up as indicated below appears; please confirm your selections.

  11. Accept the default settings and choose Activate (feel free to change them if needed).
  12. Click Activate Data Transfer Process.
  13. Go to the Execute tab and select Execute; a log should appear as below.

  14. Verify the data in your S3 bucket. You should see two files, one with the header and one with the data (a CLI check follows). These extracts can be scheduled as a job as necessary.
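As a quick command-line check, list the bucket again; this assumes the sapbwdemo bucket from earlier in this lab, and the exact file names depend on your logical file name definition.

# Expect one header/structure file and one data file per extraction run.
aws s3 ls s3://sapbwdemo/ --recursive

# Optionally stream the first lines of a file to verify its contents.
aws s3 cp s3://sapbwdemo/<your-data-file>.csv - | head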
