The following architecture diagram shows the workflow for pulling SAP BW data using an Open Hub file destination to land the data in Amazon S3.
The Open Hub Destination allows you to distribute data from a BW system to non-SAP data marts, analytical applications, and other applications, and ensures controlled distribution across several systems.
Open Hub supports both full and delta extraction modes, making it convenient to extract from InfoProviders such as InfoCubes, DSOs, ADSOs, or PSA tables. Open Hub destination types include flat files, database tables, and third-party destinations. See the SAP Open Hub documentation for detailed information.
-> AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage.
-> Customers use Storage Gateway to simplify storage management and reduce costs for key hybrid cloud storage use cases. These include moving tape backups to the cloud, reducing on-premises storage with cloud-backed file shares, providing low latency access to data in AWS for on-premises applications, as well as various migration, archiving, processing, and disaster recovery use cases.
-> For this lab, we will use the file gateway type of AWS Storage Gateway. The file gateway presents a file share that is mapped to an S3 bucket.
-> The file share is NFS-mounted as a directory on the BW application server. The detailed steps to NFS-mount an S3 bucket on the SAP BW application server and transfer a cube to the S3 bucket are described below.
Please note that you can use a similar flow to replicate from any InfoProvider, such as an ADSO.
To create a file gateway, open the Storage Gateway console, click Get started, and choose File gateway. See the AWS Storage Gateway documentation for more setup instructions.
Then choose your host platform: Amazon EC2:
➡️ For inbound security rules, ensure that ports 443, 80, and 2049 are accessible from your desktop and from the SAP BW instance.
Once the instance is launched, copy the public IP address from the EC2 console and paste it in the next step.
The last step is to set the time zone and give your gateway a name, for example BWgateway, then click Activate Gateway.
Once the gateway is activated, it detects the local disk on the volume added in the instance configuration. You can assign this disk to the cache in the next step.
➡️ Your gateway is now up and running and you can see it in the console.
Click on Create file share
➡️ Be sure to enter the name of the bucket you created earlier. You can also configure allowed clients; for the sake of establishing the concept in this lab, we will allow all connections.
➡️ In a real environment, ensure that you only allow connections from the BW host in your AWS environment.
➡️ Note the Linux configuration for the NFS mount shown in the console. We will use this on the BW host.
sudo chown -R <sid>adm:sapsys /gateway
sudo mount -t nfs -o nolock,hard 18.xxx.xxx.xxx:/sapbwdemo /gateway
➡️ In this case 18.xxx.xxx.xxx is the public IP of my storage gateway, sapbwdemo is my S3 bucket, and /gateway is the Linux directory that I map to the bucket.
sid-adm:/gateway # mkdir sap
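The mount steps above can be consolidated into a small shell sketch. The gateway IP, bucket name, and mount point below are placeholders for the values from your own environment, and `build_mount_cmd` is a hypothetical helper used only to keep the command readable; it is not part of Storage Gateway.

```shell
#!/bin/sh
# Sketch of the NFS mount flow on the BW host (placeholder values).
# build_mount_cmd composes the mount command from the gateway IP,
# the file share name (the S3 bucket), and the local mount point.
build_mount_cmd() {
  gateway_ip="$1"; share="$2"; mount_point="$3"
  printf 'mount -t nfs -o nolock,hard %s:/%s %s' \
    "$gateway_ip" "$share" "$mount_point"
}

# Same shape as the commands shown above:
build_mount_cmd 18.0.0.1 sapbwdemo /gateway

# The full sequence on the BW host would then be roughly:
#   sudo mkdir -p /gateway
#   sudo mount -t nfs -o nolock,hard <gateway-ip>:/<bucket> /gateway
#   sudo chown -R <sid>adm:sapsys /gateway
```

The `nolock,hard` options match the mount settings Storage Gateway suggests for NFS file shares; `hard` makes the client retry indefinitely if the gateway is temporarily unreachable, which is the safer choice for an extraction target.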
Give the directory name as DIR_AWS, the path as /gateway, and "valid for server name" as All, then hit Save.
Double-click on the directory in AL11. It should be in sync with what you see in your S3 bucket. The next step is to create a logical file definition.
➡️ Select "Logical File Name Definition, Cross-Client" and create a new entry as below. Please note that the logical file path we created earlier is assigned here.
➡️ A logical file ZBWAWSOUT is created and assigned to the NFS directory shared with AWS Storage Gateway.