
SCN Vanity App




This one goes out to my friend and colleague Aaron Williams, who gave me the idea for it.


For this one, we're going to use Python and some nice libraries...


 

Libraries

pip install pandas #For Data Analysis and Data Frames

pip install mechanize #Headless Web Browser

pip install beautifulsoup4 #For Web Scraping


Of course...we're going to use some Regular Expressions as well...but that's already included in Python
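To give a feel for the kind of regex work we'll do later, here is the title-extraction pattern in isolation (the markup is a made-up stand-in for one row of the SCN content table):

```python
import re

# toy markup standing in for one <td class="j-td-title"> cell scraped from SCN
cell = '<td class="j-td-title"><a href="#">My First Blog</a></td>'

# grab the first run of text that sits directly before a tag
title = re.search(r"[^<>]+(?=<)", cell).group(0)
print(title)  # My First Blog
```

The lookahead `(?=<)` keeps the match anchored to the text node without consuming the closing tag.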


So, the basic idea is that we need to log into SCN using our Username and Password and then read the first page of our "content" folder only for blog posts...then we can continue reading the following pages by using a parameter that will load the next 20 blogs...


Now...and before you say anything: this works (at least for me) only for the first 10 pages, because after that the HTML seems to be automatically generated...so there's nothing more to scrape. Or maybe it's just that my blogs go back a long way...my first blog ever on SCN was written on February 17, 2006: Tasting the mix of PHP and SAP...
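The paging scheme described above can be sketched on its own like this (the author name is a made-up placeholder; the real script parses it from the login response):

```python
# hypothetical SCN username; the full script extracts the real one after login
author = "some.author"

base = ("http://scn.sap.com/people/%s/content?filterID="
        "contentstatus[published]~objecttype~objecttype[blogpost]") % author

# the first page has no start parameter; each following page skips 20 more blogs
links = [base if start == 0 else base + "&start=" + str(start)
         for start in range(0, 60, 20)]
for link in links:
    print(link)
```

Each iteration just appends `&start=20`, `&start=40`, and so on, until a page comes back with no title cells.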


Anyway...let's see the source code


 

SCN_Vanity_App.py

# coding: utf-8

USR = 'YourUserName'
PWD = 'YourPassword'

import sys
import re
import mechanize
from bs4 import BeautifulSoup  # package installed above as beautifulsoup4
import pandas as pd

reload(sys)
sys.setdefaultencoding("iso-8859-1")

cookies = mechanize.CookieJar()
br = mechanize.Browser()
br.set_cookiejar(cookies)
br.set_handle_robots(False)

res = br.open("http://scn.sap.com/login.jspa")

# Fill and submit the login form...
br.select_form(nr=0)
br["j_username"] = USR
br["j_password"] = PWD
br.submit()

# ...then submit the follow-up form the login flow returns
br.select_form(nr=0)
res = br.submit()

result = res.read()

# Pull the username and display name out of the page's JavaScript
author = re.search("username: \'.*", result)
author = re.sub('username: \'|\'|\,', '', author.group(0))
displayname = re.search("displayName: \'.*", result)
displayname = re.sub('displayName: \'|\'|\,', '', displayname.group(0))

j = 0
df = pd.DataFrame()

while True:
    try:
        link = "http://scn.sap.com/people/%s/content?filterID="\
               "contentstatus[published]~objecttype~objecttype[blogpost]" % (author)

        if j > 0:  # pages after the first take a start offset in steps of 20
            link = "http://scn.sap.com/people/%s/content?filterID="\
                   "contentstatus[published]~objecttype~objecttype[blogpost]"\
                   "&start=%s" % (author, str(j))

        j += 20

        res = br.open(link)

        Titles = []
        Likes = []
        Bookmarks = []
        Comments = []
        Views = []

        soup = BeautifulSoup(res.read())

        # Blog titles live in <td class="j-td-title"> cells; stop on an empty page
        list_items = soup.findAll('td', {'class': 'j-td-title'})
        if len(list_items) == 0:
            break
        for i in range(0, len(list_items)):
            title = re.search('[^<>]+(?=<)', str(list_items[i]))
            Titles.append(title.group(0))

        # Likes and bookmarks alternate in the j-meta-number anchors
        list_items = soup.findAll('a', {'class': 'j-meta-number'})
        for i in range(0, len(list_items), 2):
            like = re.sub('<[^>]+>|in.*', '', str(list_items[i]))
            bookmark = re.sub('<[^>]+>|in.*', '', str(list_items[i + 1]))
            Likes.append(int(like))
            Bookmarks.append(int(bookmark))

        list_items = soup.findAll('span', {'title': 'Replies'})
        for i in range(0, len(list_items)):
            comment = re.sub('<[^>]+>|in.*', '', str(list_items[i]))
            Comments.append(int(comment))

        list_items = soup.findAll('span', {'title': 'Views'})
        for i in range(0, len(list_items)):
            views = re.sub('<[^>]+>|in.*', '', str(list_items[i]))
            Views.append(int(views))

        for i in range(0, len(Titles)):
            df = df.append({'Title': Titles[i], 'Likes': Likes[i],
                            'Bookmarks': Bookmarks[i], 'Comments': Comments[i],
                            'Views': Views[i]}, ignore_index=True)

    except:
        break

print("Welcome " + displayname + "\n")
sum_row = df[["Views"]].sum()
print("Total number of Views" + " ==> " + str(sum_row.values[0]))
sum_row = df[["Comments"]].sum()
print("Total number of Comments" + " ==> " + str(sum_row.values[0]))
sum_row = df[["Bookmarks"]].sum()
print("Total number of Bookmarks" + " ==> " + str(sum_row.values[0]))
sum_row = df[["Likes"]].sum()
print("Total number of Likes" + " ==> " + str(sum_row.values[0]))

print("\nTop 3 Blogs with most Views")
print("---------------------------")
df = df.sort_values(by=['Views'], ascending=[False])
for i in range(0, 3):
    print(df.iloc[[i]]['Title'].values[0] + " ==> " + str(df.iloc[[i]]['Views'].values[0]))

print("\nTop 3 Blogs with most Comments")
print("------------------------------")
df = df.sort_values(by=['Comments'], ascending=[False])
for i in range(0, 3):
    print(df.iloc[[i]]['Title'].values[0] + " ==> " + str(df.iloc[[i]]['Comments'].values[0]))

print("\nTop 3 Blogs with most Bookmarks")
print("-------------------------------")
df = df.sort_values(by=['Bookmarks'], ascending=[False])
for i in range(0, 3):
    print(df.iloc[[i]]['Title'].values[0] + " ==> " + str(df.iloc[[i]]['Bookmarks'].values[0]))

print("\nTop 3 Blogs with most Likes")
print("---------------------------")
df = df.sort_values(by=['Likes'], ascending=[False])
for i in range(0, 3):
    print(df.iloc[[i]]['Title'].values[0] + " ==> " + str(df.iloc[[i]]['Likes'].values[0]))

 

If we run this code, then we're going to have a nice report like this one

 

SCN_Vanity_App.jpg

Of course...it would look better with a nicer UI...but that's not my forte. So...if anyone wants to pick up the project and improve it...I would really appreciate it.
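The summing and top-3 part of the report can be tried in isolation with a few toy rows (the titles and numbers here are made up):

```python
import pandas as pd

# toy stand-ins for the scraped blog statistics
df = pd.DataFrame([
    {"Title": "Blog A", "Views": 100, "Likes": 5},
    {"Title": "Blog B", "Views": 300, "Likes": 2},
    {"Title": "Blog C", "Views": 200, "Likes": 9},
])

# column-wise total, exactly as in the report above
print("Total number of Views ==> " + str(df[["Views"]].sum().values[0]))

# sort descending and print the top entries
print("\nTop 3 Blogs with most Views")
print("---------------------------")
df = df.sort_values(by=["Views"], ascending=[False])
for i in range(0, 3):
    print(df.iloc[[i]]["Title"].values[0] + " ==> " + str(df.iloc[[i]]["Views"].values[0]))
```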


Painting murals to brighten hospitals


The last time I remember painting murals was back in high school. My Month of Service project to paint murals for hospitals definitely brought back wonderful memories.


My work team volunteered together on this project to paint murals that'll brighten up hospital walls. The activity of painting a mural really brought out a sense of relaxation and camaraderie. With the soothing sound of music playing in the background, we tuned out our daily clutter and resigned from the constant churn of work to enjoy a fun, peaceful afternoon. There's nothing better!


Our mural was themed on marine life, specifically bright-colored fish and plants. We worked on six large pieces that combined to form a colorful, engaging masterpiece. It was so fascinating to see the whitespace turn into lively images of marine life.


This was a great source of rest and relaxation for me, especially while collaborating with my peers. We had a great time - you could tell by the paint smears on everyone's clothing by the end of the day!

SAPUI5 Application CREATE, CHECK-OUT, CHECK-IN, DEPLOY, VERSION mgt etc...


The following points will be covered in this post:

 

  1. Creating new SAPUI5 Project
  2. Deploy Application on SAP HANA Cloud Platform
  3. SAP HANA Cloud Platform Overview
  4. SAP HANA Cloud Platform Version
  5. SAP HANA Cloud Platform Roles
  6. Committing SAPUI5 Application changes to Cloud
  7. Version management of Application in SAP HANA Cloud Platform

 

 

Creating new SAPUI5 Project

 

1.png

 

Select type of Application you want to create.

 

2.png

 

Follow the screen and Application will be created as shown below.

 

3.png

New Application XMLUI5 is created.

Now we will add this SAPUI5 Project to Git. To do this we need to deploy our application to SAP HANA Cloud Platform.

 

Let's check out the Git settings first.

 

 

4.png

 

5.png

 

Deploy SAPUI5 Application on HANA Cloud Platform.

 

6.png

 

Here you need to provide your SAPID or PID for Authentication and click on Login button.

 

7.png

 

SAP HANA Cloud Platform automatically maintains Versions: each time you deploy an Application, it creates a new Version number to identify the latest changes. It also provides the ability to switch to a specific version of the deployed Application whenever required and to Activate that specific Version.

 

In the below screenshot we are deploying the very first version, 1.0.0, of the SAPUI5 Application we are creating.

 

Click on Deploy button.

 

8.png

 

If your Application is deployed successfully you will see the below popup and green dots in your Application.

 

9.png

 

If you want to see how your Application works on HANA Cloud, you can click on the first link available in the popup.

Once you click on first link it looks like below.

 

12.png

 

Three more files (highlighted below) will be added once all things are correctly configured in HANA Cloud Platform

 

11.png

Once your Application (XMLUI5) is configured in SAP HANA Cloud you can see it in Cloud Platform as below.

The SAP HANA Cockpit can be accessed using the below URL:

 

https://account.hanatrial.ondemand.com/cockpit

 

13.png

 

Once you click on your deployed Application you will see the below screen, which contains these details:

 

  1. Application Details
  2. Current Status of your application
  3. Current Active Version URL (you can access your application using this URL)

 

 

Overview:

 

14.png

 

Version:


When you click on the Version link on the left side, the below screen will be opened.

 

The Versioning section has two main parts:

 

Commits:

 

  1. This section contains all the commits made to the Application, along with comments and Versions; you are allowed to activate any of the available versions here.
  2. Each time you perform a Commit in your SAPUI5 project from the Web IDE Git pane, the same will be updated here. Currently we have only one Commit, with active version 1.0.0.
  3. The Git Repository URL can be used to retrieve the repository's Application contents or push changes using a Git utility.

 

15.png

 

Versions:

 

The Versions tab will show you the current Active as well as all other Inactive Versions available in HANA Cloud. You can change the Active version at any time using the Activate button in the Actions column.

 

16.png

Roles:


The Roles tab is used when there are multiple developers working on the same application and they need to be authenticated: we create Roles and then assign Developers to those Roles. In our example we have created one Role called Developer and added three developers to it.

 

These should be the PIDs or SAPIDs of the Developers who will work on the application. They are mainly checked when you try to Deploy or Commit any changes made to the Application to SAP HANA Cloud.

 

17.png

Once we are done with creating Roles we are ready to Assign Developers to that Role. PID or SAPID is required to add Developers.

 

18.png

 

You can Unassign/Assign any User from/to the list at any time.

 

19.png

 

Changing files and updating (Committing changes) on SAP HANA Cloud Platform


Let us see how the Application looks; go as shown below.

 

21.png

 

22.png

 

Now let us make some changes to Title of our Application and make a Commit to SAP HANA Cloud.

 

Once you make changes to a file, the green dot changes to *. This means you have checked out these files (the files are changed) and they are no longer in sync with the SAP HANA Cloud repository.

 

23.png

 

Once you are done with all your changes to a file, you can check the changes back in to SAP HANA Cloud.

Just click the Git Pane on the right-hand side. This provides options to check in code as shown below.

 

24.png

 

Choose the files you want to check in by clicking on the Stage checkbox individually.

It is suggested to click the Pull button before committing any changes.

 

Once you click on the Pull button you will be asked for your SAP ID/PID for Authentication as below; provide it and click on the OK button.

 

25.png

 

Now it is time to Commit the changes: select the Stage checkbox, enter a Commit Description comment, and click on the Commit and Push button.

 

26.png

 

 

 

28.png

Once the Commit is successful you will see a message in the top right corner.

 

30.png

 

This means your changes are committed in SAP HANA Cloud. The * will turn back into a green dot in the Application hierarchy.

You can see your Commits in the History in SAP HANA Cloud as below.

 

31.png

 

After this Commit, if you look in the Versioning tab of SAP HANA Cloud Platform, you will still find the old version active.

 

The new Version can be Activated as shown below.

 

37.png

 

Click on Yes button to Activate.

 

Run the Application again and the changes will be reflected as below (Title updated).

 

38.png

 

 

Thank you for reading this post to the end.

I have tried to provide details on each step you need to perform when working on the SAP HANA Cloud Platform; please suggest or comment if I missed any.


Please drop a comment or suggestion so I can update the document accordingly.

 

 

Thanks-

Abhishek

S/4HANA at TechEd 2015 Barcelona


Are you on your way to TechEd Barcelona? One of the topics I hope you have on your list is S/4HANA. There will be plenty of S/4HANA content, so much it might be difficult to know where to begin. Here are my recommendations.

 

Sessions

 

  • TEC 103: SAP S/4HANA overview, strategy, and roadmap
  • TEC 114: SAP S/4HANA migration and upgrade paths
  • TEC 800: SAP S/4HANA roadmap and Q&A
  • TEC 116: Key SAP S/4HANA use cases and how to enable them
  • TEC 206: Architecture and components of SAP S/4HANA

 

The sessions listed above are offered twice--the link is for the first instance.

 

For a complete list of all the S/4HANA sessions, check the session catalog.

 

Showcase

 

If you would like to talk directly with us about S/4HANA, be sure to stop by the Showcase in Hall 7. There you can ask the questions you want to ask and get a level of personal attention that is not possible in the lecture sessions. We have 4 "pods" in the showcase:

 

  1. Explore: general questions about S/4HANA and business use cases
  2. Deploy: information on the journey to S/4HANA, including the latest User Assistance technologies
  3. Extend: how to do in-application extensibility, including custom user fields and code
  4. Analyze: the embedded analytic capability within S/4HANA itself

 

Stop by the showcase to ask questions, see a quick demo, or just hear more about S/4HANA.

 

I hope you have a safe and productive trip to Barcelona--¡vámonos!

Visual BI Extensions - Listbox with Alerting Functionality


Right now it feels already a little bit like "waiting for Christmas" for me as our next major release of our Extensions for SAP BusinessObjects Design Studio is very close (and our SAP Lumira Extensions as well).

 

As part of the upcoming release we did add several components as part of our "Selectors and Filters" area. One of these components is a Listbox. I am sure there will be several of you that now say .. "... well SAP has a listbox as well".... so let me show why our Listbox is different and what additional functionality it provides.


So let's start with the configuration:


The Listbox is designed so that it doesn't need any scripting functionality for most of the scenarios. In the Additional Properties of the listbox you would first of all select the dimension that you would like to use for the listbox itself - as shown below.


LISTBOX_007.jpg


If you would like, you can limit the amount of entries as well and as you can see in the screen shot above, you can also list the Target Data Sources (1 or many) in the configuration.


LISTBOX_006.jpg


In addition you would then configure if you would like to show the key or text or both as the "Display" option and which of the values you would like to send to the Target Data source. In addition the Listbox provides features such as sorting or searching out of the box.


The option "Runtime Display" is giving the end user the option to switch between the different display options (Key, Key & Text, Text, ...) at runtime.


Those are all the basics for the listbox. So now let's get to the interesting parts.


LISTBOX_003.jpg


The Listbox also provides Alerting capabilities where you can create a conditional highlighting based on a simple measure comparison, or a measure calculation, or based on a Target Value as shown above.


You then have the option to select a color and a symbol, which then will be shown as part of the listbox itself - as shown below.


LISTBOX_008.jpg


As shown above, the alerts are shown in front of the product members; our listbox also shows the Search dialog, offers the option to switch between the different display options, and provides an "All Member" entry out of the box.


In addition the listbox provides all the usual scripting and the option to select single values or multiple values.


So instead of just providing your users with a simple list of members, you can quickly provide your users with a searchable list and you can add additional help by using conditional formatting as part of the listbox already.






BPC NW Script Logic Troubleshooting Survival Guide


You have basic to intermediate knowledge of BPC script logic syntax and you have to troubleshoot a BPC logic issue.  By following the steps in this survival guide, you can identify the root cause for most logic issues and resolve them if they are due to known bugs.

 

Pre-requisites:

 

  1. Basic knowledge of BPC logic syntax.  The following are the online help for Logic Keyword References and syntax

 

BPC 10.1 classic

BPC 10

BPC 7.5

 

  2. Knowledge of UJKT. UJKT is a BW transaction code that invokes an ABAP program through SAPGUI for testing BPC logic scripts. UJKT will show you more detailed diagnostics, and you can verify whether your script is working correctly.
  3. Access to an exact copy of the BPC environment/model where the logic is not working as expected. This is required to run tests and to simplify the problem logic script. If you do not have an exact copy, create one using the BPC Administration functions.
  4. The following steps should be followed in the prescribed order.

 

Identify the problem with the following questions, which can be roughly divided into What, When, and How.

 

WHAT

 

  1. What version of BPC and NW are in use?
  2. What type of logic is executing? (Script logic or business rule)
  3. Is the logic producing a specific error? If yes,
    1. Review and analyze the error in the logs or system dump
    2. Look for any existing notes and KBA for the error
  4. Is the logic completing successfully but producing incorrect results?
  5. Is the problem data dependent, e.g. happens only to a specific date or account?  If yes,
  6. Is there anything special about the data region that is not producing correct results? For example, is it the start of a calendar year, or a specific base node that was recently added or moved (master data modification)?
  7. Does the problem happen in all systems (DEV, QA, and production)? If the problem does not happen in all systems, then what are the differences between the systems? Be specific about:
    1. Notes applied, SP version difference for BPC and BW,
    2. Transaction and master data differences, especially if the problem is data dependent

 

WHEN

 

  1. When did the problem start?
    1. Has the logic ever completed successfully for the problem data region?
    2. When was the last time the logic completed correctly?
    3. What has changed since it last worked as expected? If you don't know the exact answers, then establish a probable scenario with the following questions.
  2. How often and when do you execute the logic?
  3. When was the last time the logic validated correctly, and was your system on the same version?
  4. Who executes the logic?
  5. What is the usual process or data region for executing the logic?
  6. Does the problem happen every time the logic is executed?
  7. If random, when does it seem to happen most often? Does it seem to happen after a recent master or transaction data modification?
  8. How often does master data change, and what changed in the last modification?

 

HOW

 

  1. Steps to reproduce:
    1. How is the problem logic invoked: through a DM package, or as default logic when sending data?
    2. If the problem logic is the default logic, which can be executed when sending data or through a DM package, are the results the same both ways?
  2. Does the logic produce the same behavior when executed through UJKT?

 

 

Troubleshooting Steps

 

 

  1. Use a simplified version of the logic, it needs to be a single block of statements with one simplified *REC statement.  There are two ways that a logic script can have many blocks.
    1. The logic has multiple *WHEN/ENDWHEN, *RUNALLOCATION or *RUN_PROGRAM blocks repeated several times in one file.  Remove all but one block for testing.
    2. Logic file has several *INCLUDE statements.  Every *INCLUDE statement references a new logic file.  When you execute a logic with several included files, you are executing all the logic contained in all the included files. Find the logic within the included file that is causing the problem and test only that logic.  Simplify the logic as described in step 1.i before testing it.
  2. Replace as many parameterized values (anything between %) or variables (anything between $) with hard coded values. Often the problem is due to parsing of a parameterized value.  Removing the parameters and then adding them back one by one will show which one is causing the problem
  3. Use simplified test data region, either the problem data region or any data region with test data available.
    1. Reduce the data region that is tested by replacing parameterized scope statements in *XDIM_MEMBERSET with individual parent or child nodes.
    2. If the problem is occurring for a particular data region, then replace the parameters in *XDIM_MEMBERSET with only the members that are to be tested.
    3. If the problem is occurring for all data region, then use a random test date region.  Replace the parameters in *XDIM_MEMBERSET with the specific data region selected for testing.
    4. Reduce the scope by removing keywords, unless one of them is the root cause of the error.
  4. Reduce the complexity of *REC statements. Remove all the nested WHEN/ENDWHEN and REC statements. Refer to Example 1 in attached file for more clarification.
  5. Test the logic using UJKT, first by running in simulated mode
  6. Review the error in UJKT. If you do not find any error when executing the simplified version of the logic through UJKT, add back the complexity of the original script in stages and test.
  7. Review the diagnostics generated in UJKT.  Refer to example 2 in the attached document for more clarification
  8. If the logic is for executing business rules, validate that the scope and context are passed correctly and then follow the guide for troubleshooting business rules, and master data configuration
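As an illustration of the single-block, single-*REC shape recommended in step 1, a simplified script might look like this (the member names are invented for the example):

```
*XDIM_MEMBERSET TIME = 2015.JAN
*XDIM_MEMBERSET CATEGORY = ACTUAL

*WHEN ACCOUNT
*IS REVENUE
    *REC(FACTOR = 1.1, ACCOUNT = PLAN_REVENUE)
*ENDWHEN
```

Running exactly this through UJKT in simulated mode shows whether the scoping and the single *REC behave as expected before the full script is reintroduced.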

 

Analysis of the information:

 

 

  1. In the simplest cases, you experience a problem after one or more of the following situations has occurred. In such cases, you need to reproduce and verify the issue using a simplified version of the logic executed through UJKT. You can report the problem to SAP if it has not already been reported. The cases that qualify for this are:
    1. After a system upgrade or migration from a different version of BPC,
    2. After applying some notes,
    3. Changes made to master data
    4. Logic being executed for a new data region for the first time
  2. If the problem is not due to any of the above conditions then you need to analyze the information further. Typically, you have development, QA and production systems.  You need to focus on two different lines of analysis when the problem happens in all systems and when it does not.
    1. The problem happens in all systems, but the same logic was working for the past few years and you don't know exactly when the problem started.
      1. Focus on the last time the logic was validated and tested successfully in each system, and confirm whether you were on the same version of BPC and BW. Sometimes code bugs can remain hidden until you validate the logic. Refer to Example 3 in the attached document for further clarification.
    2. The problem happens only in one system.  Analyze differences in master data and transaction data as well as system versions and configuration.
      1. Focus on differences between the problem system and working system.  Make sure to consider, notes, system upgrades, master data, transaction data
      2. If you confirm all versions and notes applied are the same across the system, then focus on master and transaction data differences.  By narrowing the scope to just one data region, as described in trouble shooting steps – items 2 and 3,  you can find and test the problem data region.
  3. What if the error happens only when the logic is invoked a certain way? This may well be due to differences in the scope of the logic. When the logic is invoked through data manager (DM), its scope is the "external scope" set by the values selected through the DM interface. When the logic is invoked through other methods, it is subject to the "internal scope" set by *XDIM_MEMBERSET. For example, default logic can be invoked by executing the DM package DEFAULT_FORMULAS or when sending data through input forms. The same default logic could produce different results due to differences in the scope. To learn more about the scope of default logic, please review the SCN document.

When logic is executed through UJKT its scope can be controlled.  If the problem does not happen when executed with a simplified scope then the root cause is in the scope or the parsing of parameterized values for scope.

The Future of Business Applications – SAP HANA , HANA Migration events with Partners

[HANA Vora] The Simple Word Count Example


For an overview of what SAP HANA Vora is, please check out:

SAP HANA Vora: An Overview

[SAP HANA Academy] Learn How to Install SAP HANA Vora on a Single Node

SAP HANA Vora - YouTube

SAP HANA Vora 1.0 – SAP Help Portal Page

 

With the introductions aside, Spark is fast becoming the de-facto data processing engine for Hadoop; it's fast, flexible and operates "In-Memory" (when the dataset can fit). I like to think of HANA Vora as an add-on to Spark. It provides added business features as well as "best in class" integration with HANA Databases.

 

Let's now dive right into the typical "hello world" style example for Hadoop: the Simple Word Count.

 

How often is "Watson" referred to directly in the "The Adventures Of Sherlock Holmes"?

Is the answer:

A) 42

B) 81

C) 136

D) The Sum of the Above

 

Note: The answer is at the bottom.

 

Spark and Vora support several languages, such as Scala, Python and Java. Since Scala is still slightly ahead in terms of popularity with Spark, I'll use that. For utilising Vora you can use the Spark shell or a notebook application such as Zeppelin. In this example I use Zeppelin, which is also covered in the installation steps of Vora, as well as in the HANA Academy videos.
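Before the Spark version, the core of a word count (normalise, split, tally) can be shown in a few lines of plain Python; it is the same idea as the flatMap/map/reduceByKey pipeline, minus the distribution. The sample lines are made up:

```python
import re
from collections import Counter

# a toy stand-in for lines of the book
lines = ["Watson! Come at once.", "Watson, the game is afoot."]

# strip punctuation, lowercase, split into words, and tally
words = []
for line in lines:
    words.extend(re.sub(r"[^\w\s]", "", line).lower().split())
counts = Counter(words)

print(counts["watson"])  # 2
```

Spark's reduceByKey performs exactly this tally, but merges partial counts across the partitions of the cluster.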

 

Firstly, let's download a free copy of the book, strip out all special characters, collect the words, aggregate the results, and finally store them as an "in-memory" resilient distributed dataset (RDD):

 

Scala: Process The File

import java.net.URL

import java.io.File

import org.apache.commons.io.FileUtils

 

//Load External File to HDFS

val HDFS_NAMENODE = "107.20.0.138:8020"

val HDFS_DIR      = "/user/vora"

val tmpFile = new File(s"""hdfs://${HDFS_NAMENODE}${HDFS_DIR}/SherlockHolmes.txt""")

FileUtils.copyURLToFile(new URL("https://ia600300.us.archive.org/10/items/TheAdventuresOfHolmesSherlock/DoyleArthurConan-AdventuresOfSherlockHolmesThe.txt"), tmpFile)

 

//Read Files line as Array[String] into Spark RDD

val textFile = sc.parallelize(FileUtils.readLines(tmpFile).toArray.map(x => x.toString))

 

println("----------------------------------------")

 

//Print first 2 Lines of File

textFile.take(2).foreach(println)

 

//Rows

println("Rows in File: " + textFile.count() )

 

//Perform the full word count, stripping out special characters

val word_counts = textFile.flatMap(line => line.replaceAll("[^\\p{L}\\p{Nd}\\s]+", "").toLowerCase.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)

 

//Put results into a resilient distributed dataset (RDD)

case class WordCount(word: String, wordcount: Long)

var wcRDD = word_counts.map(t => WordCount(t._1, t._2))

 

//First 10 Rows of Word Count

println("----------------------------------------")

println("First 10 Rows of Word Count:")

wcRDD.take(10).foreach(println)

 

In Zeppelin it appears as follows:

bl1.PNG

 

The Results of Executing are:

bl2.PNG

 

Next lets use Vora to register the RDD as a temporary "In Memory" table and then perform a SQL Query to find how many times "Watson" appears in the book:

 

Scala: Use Vora to Register the RDD as a Temporary Table then Query Results

import org.apache.spark.sql._

val sapSqlContext = new SapSQLContext (sc)

 

 

val wordCountDataFrame = sapSqlContext.createDataFrame(wcRDD)

 

 

wordCountDataFrame.registerTempTable("wc")

 

 

val results = sapSqlContext.sql("SELECT word, wordcount FROM wc where word = 'watson' ").map{

case Row(word: String, wordcount: Long) => {

      word + "\t" + wordcount

}}.collect()

 

Execute in Zeppelin:

bl3.PNG

 

Finally, let's use Zeppelin to visualise the results:

Visualise with Zeppelin
println("%table Word\tCount\n" + results.mkString("\n"))

 

Execute in Zeppelin:

bl4.PNG

 

Note: Zeppelin's visualisation capabilities are better demonstrated with %vora SQL statements when the results have been stored to HDFS.

 

 

So the Answer is  B) 81

 

Did you guess right?

 

At TechEd 2015, SAP HANA Vora was also demonstrated processing a petabyte of data for Intel, so hopefully some more challenging Vora examples will follow on SCN from Mr Appleby and co soon.

 

But in the meantime, it's still always fun to use a sledgehammer to crack a nut. I hope you enjoyed it.


"Unexpected State of Changelist" issue in PI resolved


Due to an emergency, a configuration object in the production PI Integration Directory (an ICO) was edited directly. While activating the change list there was some interruption, because of which the change list got into an unexpected state. I was not able to either reject or activate the change list.

 

The ID object was still hanging in my change list, and I was not able to activate it.

 

ChangeList.JPG

Error message displayed while rejecting or activating the change list:

ChangeList_Error.JPG

 

Root Cause:

The error "Unexpected state of Changelist (ID=<#>). Expected open, but given state is releasedLocally" is often caused by a hanging thread which wasn't able to complete the changelist activation.

 

Solution:

Restarting the system has resolved the issue for us.

 

If the issue is still not resolved after the restart, then run the SQL queries as per note #1399960 "Database Inconsistency for changelists in PI" and attach the result to the SAP incident.

Presentation


Hello!

 

We've made available a presentation that will be demonstrated in the Localization Summit. We'll talk about some NF-e myths such as "upgrades do not bring business benefits", "It's difficult to find the necessary notes", "the technical issues are complex and there is no material to explain them" and "Incidents in PI are difficult to analyze".

 

You can download the presentation from the following link: Presentation about NF-e Myths

 

Regards,

Renan Correa

SAP: Simplifying business and creating value with Technology and Innovation


On August 28, we at SAP University Alliances, along with the HR team, were invited to attend an event with students at the FEI campus in São Bernardo do Campo.

IMG_20150828_151609.jpg

 

Approximately 50 students attended our event, and we approached a very interesting topic in technology today: the Internet of Things. Daniel Bio, Senior Principal IVE, talked a little about the trends companies are preparing to face in the near future.

 

Daniel commented brilliantly on the new forms of commerce we have and how we all must adapt to this way of consumption; the vast majority of students were curious to know how soon this would happen and in what way they would be impacted by this technological advancement.

IMG_20150828_151711.jpg

 


The event also featured two other SAP colleagues: Mariana Martins, regional treasurer, and Diego Anson, senior finance specialist. They showed that we at SAP are a team and that all employees contribute to the growth of our company.

 

 

Diego Anson talked about a solution that is extremely valuable nowadays, Simple Finance. This SAP solution aims to bring agility to operations that used to take days. He explained in detail how companies can analyze market data and quickly create actions to attract new customers or otherwise add value to the company.

IMG_20150828_151025.jpg

 

Without a doubt, the students had the opportunity to learn about new technological trends and to see that we at SAP are always involved with the most innovative developments in the market.

 

Many thanks to all who were able to participate, especially Amanda Mello, Daniel Bio, Diego, and Mariana; I am sure we made a very positive impact at FEI.

SAP University Alliances at FEI: Design Thinking Session


 


IMG_20150827_201659.jpg

On August 27, we at SAP University Alliances, together with the SAP Pre-Sales team, held an incredible event for students at FEI.

 

 

 

We were able to lead a Design Thinking session for about 35 seventh-semester students at FEI.

 

The event began at 19:45. With the help of Lilian Sanada, Customer Solutions Manager, we developed a session on trends and conclusions about new learning methods that go beyond the traditional classroom. From then on, the session proved highly entertaining and productive for everyone who attended.

 

 

IMG_20150827_195808.jpg

 

 

Students got involved with what the session proposed and were able to develop their personas in the best possible way, according to the characteristics assigned to them.

 

Students were divided into categories of thought: one table had to think in an academic way and create academic solutions; another table had to think like an entrepreneur, proposing entrepreneurial solutions that could help students and the university; and lastly there was the workers' table, responsible for portraying the situations an ordinary working undergraduate student faces.

 

IMG_20150827_210432.jpg

 

 

 

 

 

With these three paradigms, students were challenged to find solutions to the difficulties faced by students, teachers, and the university in reconciling academic, professional, and personal life, with each table thinking according to the role it had been given.

 

 

 

 

 

 

IMG_20150827_214325.jpg

 

 

 

The session unfolded in a very interactive and fun way. Students participated in every step, new ideas were created, and teachers got to see how students think, so that they can, in turn, promote changes that benefit everyone.

 

 

At the end of the session, each group had the opportunity to present its persona: its characteristics, challenges, solutions, and the feasibility of putting those solutions into practice.

 

 

 

 

 

IMG_20150827_221032.jpg

IMG_20150827_220758.jpg

IMG_20150827_220900.jpg

 

 

 

 

        

 

Our agenda for this event was:

 

 

Time

Design Thinking at FEI

 

19:45

Introduction to Design Thinking

Goal: Explain to the participants our understanding of this approach and how it has been used to solve problems and discover innovations around the world.

 

20:05

Persona

Goal: Map the characteristics and profile of each type of student (executives, entrepreneurs, academics, students, consultants), with one persona per group.

 

20:20

Challenge Brainstorming

Goal: Discuss the challenges these students face in dealing with their studies, professional life, and personal life.

 

20:40

Trends and conclusions about new learning methods that go beyond the traditional method (classroom)

Goal: Share with the students the new approaches emerging out there and some conclusions and/or numbers that can help them evaluate the efficiency of these new methods.

 

21:00

Idea Brainstorming

Goal: Think of new learning methods that can replace or complement the classroom (video lectures, e-learning, discussion forums, portals, private lessons), discuss the effectiveness of each one, and create new methods that better fit the profile of each type of student.

 

21:30

Solution

Goal: Create a flexible suggestion of curriculum + method for the student profile and discuss how this information should be elaborated and validated with the student.

 

21:45

Solution Presentation

Goal: Each group will have 2 minutes to present the solution created for its student profile.

 

22:00

Closing

Goal: Thank the students, teachers, and facilitators for their participation.

Every event is an indescribable learning experience. I am very happy to have colleagues at SAP who can bring these experiences to students participating in the SAP University Alliances program. I greatly appreciate the presence of the whole Pre-Sales team (Luke Ferreira, Lucas Beppu, Flavia Lauletta, Claudia DeBenedictis, and Sergio Kanashiro), especially Lilian Sanada and Amanda Mello. Thank you!

 

 

IMG_20150827_221641.jpg

 


 

Continuous delivery for SAP BusinessObjects - continued (pun intended)


Being able to also transport BusinessObjects content from one system to another, as outlined in my previous post, was a big step forward to achieving complete continuous delivery. But a few things still need some polishing up.

 

For instance, when using this method to transport connection objects, you end up with connections pointing to the development backend - assuming you have separate backends for the various environments (i.e. DEV/QA/STAG/TEST/PROD). But we all have that, right?

 

To fix this, we call yet another little tool in our delivery pipeline to fix the connection properties in each environment after deployment. And again, we use the Java API for SAP BusinessObjects BI.


As usual, a session and handle to the InfoStore will be needed:


IEnterpriseSession enSession =

  CrystalEnterprise.getSessionMgr().logon("cmsuser", "cmspasswd", "cmshost:cmsport", "secEnterprise");

IInfoStore infoStore = (IInfoStore) enSession.getService("InfoStore");


Then we retrieve the connection object we wish to modify:


String query =

  "SELECT * FROM CI_APPOBJECTS WHERE SI_KIND = 'CCIS.DataConnection' AND SI_NAME = 'myConnection'";

IInfoObjects connObjects = infoStore.query(query);


IDataConnection dataConnection =

  (IDataConnection) connObjects.get(0);  // maybe some error handling?

Connection conn = dataConnection.getConnection();


To modify the connection, it first needs to be cast to MutableConnection. Only then can we change the properties and save the object:

 

MutableConnection mutConn = (MutableConnection) conn;

mutConn.putProperty(

  PropertySet.DATASOURCE,

  PropertySet.Entry.CATEGORY_CREDENTIALS,

  PropertySet.Entry.TYPE_STRING, "qaEnvHost:qaEnvPort");

mutConn.putProperty(

  "URL",

  PropertySet.Entry.CATEGORY_CREDENTIALS,

  PropertySet.Entry.TYPE_STRING, "jdbc:sap://qaEnvHost:qaEnvPort");

mutConn.putProperty(

  PropertySet.USER,

  PropertySet.Entry.CATEGORY_CREDENTIALS,

  PropertySet.Entry.TYPE_STRING, "DBUSER");

mutConn.putProperty(

  PropertySet.PASSWORD,

  PropertySet.Entry.CATEGORY_CREDENTIALS,

  PropertySet.Entry.TYPE_STRING, "DBPASSWD");

 

dataConnection.setConnection(mutConn);

dataConnection.save();

 

Using this after deployment in each environment, we ensure that the connection object will point to the correct backend.
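As a side note, the environment-specific targets themselves are good candidates for externalized configuration, so the same fix-up tool can run unchanged in every stage of the pipeline. Here is a minimal, hypothetical sketch (the environment names and host values are made up, not taken from our landscape; in practice they would come from a properties file or pipeline variables):

```java
import java.util.Map;

// Hypothetical lookup of per-environment backend targets. In a real
// pipeline these values would be read from configuration rather than
// being hard-coded.
class EnvTargets {
    private static final Map<String, String> HOSTS = Map.of(
            "DEV",  "devEnvHost:30015",
            "QA",   "qaEnvHost:30015",
            "PROD", "prodEnvHost:30015");

    // Builds the JDBC URL for the given environment name.
    static String jdbcUrl(String env) {
        String host = HOSTS.get(env);
        if (host == null) {
            throw new IllegalArgumentException("Unknown environment: " + env);
        }
        return "jdbc:sap://" + host;
    }
}
```

The value returned here would then feed the connection properties instead of a hard-coded host string.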

 

There are also other things which need to be changed after deployment (e.g. permissions on objects) - I'll try to cover that in a separate post...sometime...


New SAP WPB success story available – Adarsh Credit


Hi everyone,

 

I’m happy to announce another SAP Workforce Performance Builder success story.

 

This time a customer out of the banking industry gives some insights on how SAP WPB is providing incremental value to their business and training roll-outs.

 

"We looked into a lot of leading applications for training purposes; but when we compared their features with SAP Workforce Performance Builder, we found that the SAP solution is the best on the market. We are now rolling training out 80% faster."

Himanshu Shah, Chief Technology Officer, Adarsh Credit Co-Operative Society Ltd.

 

Read the full story here:

http://www.sap.com/bin/sapcom/en_us/downloadasset.2015-11-nov-04-08.adarsh-credit-accelerating-training-rollout-by-80-with-sap-workforce-performance-builder-pdf.html

 

In case you have any questions, please contact me directly.


Best,

Maik

Production query


Hello, good day,

 

I'm writing to ask whether there is a way to see the operations a batch number has gone through with respect to production orders; that is, something that gives me the production order number, the batch quantity consumed as raw material (the Issue for Production number), and the quantity of finished product delivered with that batch (the Receipt from Production number).

 

Thank you in advance for your attention; I look forward to your comments.

 

Regards.


RPFIGLMX_EACCOUNTING - Company Codes sharing Tax ID (RFC)


Good day,

 

The standard program RPFIGLMX_EACCOUNTING, in the accounting documents (pólizas) option, only allows you to specify a single company code.

 

If more than one company shares the same tax ID (RFC), is it advisable to make a copy of the standard program and modify it so it can generate the accounting-document XML for several companies? Or is it recommended to use a software tool that merges the individually generated XMLs, so that a single file is uploaded to the SAT?

 

Thank you in advance for your suggestions and recommendations.

Is it possible to enter data in custom fields while creating a transaction using function modules?


Sometimes we need to create financial transactions automatically using instruments such as business objects (tr. SWO1), function modules (tr. SE37), etc.

 

For example, business object BUS5550 "Interest Rate Instrument" has a method to create a transaction.

01.jpg

 

But we also need to use other business objects to create other parameters in our transaction.

  • BUS5102 "Conditions"
  • BUS5103 "Payment details"
  • BUS5101 "Additional flows"
  • etc.

 

So we have:

  1. Many business objects with function modules are needed in order to create an interest rate instrument transaction.
  2. There are no business objects to fill the fields on our user-exit tab.
  3. If we use all the mentioned business objects in our program, we cannot use the "TESTRUN" flag: it works for BUS5550 "Interest Rate Instrument", but the other business objects require an already created transaction, while we need to test our function modules and data structures before the transaction is created.
  4. It's not possible to start mirror transaction creation.

 

What to do?

The answer is to use standalone function modules like BAPI_FTR_*_DEALCREATE

 

Function module                   Description
BAPI_FTR_IRATE_DEALCREATE         Completely Create an Interest Rate Instrument
BAPI_FTR_FTD_DEALCREATE           Create a Fixed Term Deposit Completely
BAPI_FTR_SECURITY_DEALCREATE      Completely Create a Security Transaction
BAPI_FTR_FXT_DEALCREATE           Completely Create a Forex Transaction
BAPI_FTR_FST_DEALCREATE           Completely Create a Forward Security
BAPI_FTR_COMS_COND_DEALCREATE     Completely Create a Commodity Swap
BAPI_FTR_COMS_DEALCREATE          Completely Create a Commodity Swap
BAPI_FTR_CTYFWD_DEALCREATE        Completely Create a Forex Transaction
BAPI_FTR_FLP_DEALCREATE           Completely Create a Forward Loan
BAPI_FTR_REPO_DEALCREATE          Completely Create a Repo
BAPI_FTR_TRES_DEALCREATE          Completely Create a Total Return Swap

 

It's a pity we don't have any function modules for cash flow transactions, facilities, commercial paper, and some other product types.

 

The structure of the function module is:

 

FUNCTION BAPI_FTR_IRATE_DEALCREATE.
  IMPORTING
*   "Structure tab parameters"
    VALUE(INTERESTRATEINSTRUMENT) TYPE BAPI_FTR_CREATE_IRATE
*   "Which parameters we are entering on the Structure tab"
    VALUE(INTERESTRATEINSTRUMENTX) TYPE BAPI_FTR_CREATE_IRATEX
*   "Administration tab parameters"
    VALUE(GENERALCONTRACTDATA) TYPE BAPI_FTR_CREATE
*   "Indicators for which parameters we are entering on the Administration tab"
    VALUE(GENERALCONTRACTDATAX) TYPE BAPI_FTR_CREATEX
*   "Indicator whether we are entering Conditions"
    VALUE(CONDITION_COMPLETE_INDICATOR) TYPE BAPI2042-COMPLETE_INDICATOR DEFAULT SPACE
*   "Indicator whether we are entering Payment details"
    VALUE(PAYDET_COMPLETE_INDICATOR) TYPE BAPI2042-COMPLETE_INDICATOR DEFAULT SPACE
*   "Indicator whether we are entering Additional flows"
    VALUE(ADDFLOW_COMPLETE_INDICATOR) TYPE BAPI2042-COMPLETE_INDICATOR DEFAULT SPACE
*   "Indicator whether we are entering Main flows (Other changes in capital structure button)"
    VALUE(MAINFLOW_COMPLETE_INDICATOR) TYPE BAPI2042-COMPLETE_INDICATOR DEFAULT SPACE
*   "Indicator for test run"
    VALUE(TESTRUN) TYPE BAPI2042-TESTRUN DEFAULT SPACE
*   "Indicator to start mirror transaction creation"
    VALUE(MIRRORING) TYPE BOOLEAN_FLG DEFAULT SPACE
  EXPORTING
*   "After transaction creation we have the company code"
    VALUE(COMPANYCODE) TYPE BAPI2042-COMPANY_CODE
*   "and the transaction number"
    VALUE(FINANCIALTRANSACTION) TYPE BAPI2042-TRANSACTION
  TABLES
*   "Condition parameters"
    CONDITION STRUCTURE BAPI_FTR_CONDITION
*   "Which parameters we use in the condition structure"
    CONDITIONX STRUCTURE BAPI_FTR_CONDITIONX
*   "Condition formula parameters"
    FORMULAVARIABLE STRUCTURE BAPI_FTR_CONDITION_FORMULA
*   "Condition single-date parameters"
    SINGLEDATE STRUCTURE BAPI_FTR_CONDITION_SINGLEDAT
*   "Payment details parameters"
    PAYMENTDETAIL STRUCTURE BAPI_FTR_PAYDET
*   "Which parameters we use in the payment details structure"
    PAYMENTDETAILX STRUCTURE BAPI_FTR_PAYDETX
*   "Additional flows parameters"
    ADDFLOW STRUCTURE BAPI_FTR_FLOW
*   "Which parameters we use in the additional flows structure"
    ADDFLOWX STRUCTURE BAPI_FTR_FLOWX
*   "Main flows parameters"
    MAINFLOW STRUCTURE BAPI_FTR_MAINFLOW
*   "Which parameters we use in the main flows structure"
    MAINFLOWX STRUCTURE BAPI_FTR_MAINFLOWX
*   "User-exit tab field parameters"
    EXTENSIONIN STRUCTURE BAPIPAREX OPTIONAL
*   "List of errors if we have them"
    RETURN STRUCTURE BAPIRET2 OPTIONAL

How to fill all the structures of the function module is material for other articles; right now let's understand how to set/read the user-exit tab's fields.

 

What we need is:

  1. BAdI FTR_CUSTOMER_EXTENT in order to create user-exit tabs and fields. Article
  2. function module ZSET_CUSTOM1 created in the ZTRM_CUSTOM_TABS function group - a copy of the FTR_CUSTOM_BADI_SAMPLE function group (tr. SE38 -> SAPLFTR_CUSTOM_BADI_SAMPLE)
  3. BAdI FTR_CUSTOMER_EXTENT, methods:
    1. EVT_BAPI_SET_CUSTOM1 - set new parameters on user-exit tab 1
    2. EVT_BAPI_SET_CUSTOM2 - set new parameters on user-exit tab 2
    3. EVT_BAPI_GET_CUSTOM1 - read parameters from user-exit tab 1 in the BAPI_FTR_*_DEALGET function module
    4. EVT_BAPI_GET_CUSTOM2 - read parameters from user-exit tab 2 in the BAPI_FTR_*_DEALGET function module
  4. Fill the EXTENSIONIN structure for the BAPI_FTR_IRATE_DEALCREATE function module in our program.

 

 

1. Refer to this article to create the user-exit tab and fields in your transaction. The basic step is to copy the function group FTR_CUSTOM_BADI_SAMPLE (SAPLFTR_CUSTOM_BADI_SAMPLE) into your own function group, ZTRM_CUSTOM_TABS (SAPLZTRM_CUSTOM_TABS), for example.

 

Here we'll have the structure G_TAB_FHA_APPENDS, which will hold our append data.

 

The append (append ZTRM_CUSTOM_FIELDS in my example) is usually made to the VTBFHA table.

02.jpg

 

2. Let's create the function module ZSET_CUSTOM1 in our ZTRM_CUSTOM_TABS function group.

 

FUNCTION ZSET_CUSTOM1.
*"----------------------------------------------------------------------
*"*"Local interface:
*"  EXPORTING
*"     REFERENCE(PI_RETURN) TYPE  BAPIRET2_TAB
*"  TABLES
*"      PT_EXTENSIONIN TYPE  BAPIPAREXTAB
*"----------------------------------------------------------------------
  DATA: LEN  TYPE C LENGTH 3,
        ZTAB LIKE LINE OF G_TAB_FHA_APPENDS.

* Get data from the append structure (user-exit tab 1)
  CALL METHOD G_PROXY_CUST_DATA->GET_CUST_DATA
    IMPORTING
      PE_TAB_FHA_APPENDS = G_TAB_FHA_APPENDS
    EXCEPTIONS
      OTHERS             = 4.

* Read data from the EXTENSIONIN structure passed by BAPI_FTR_IRATE_DEALCREATE
* into the append structure G_TAB_FHA_APPENDS
  IF SY-SUBRC = 0.
    READ TABLE PT_EXTENSIONIN INDEX 1.
    LEN = STRLEN( PT_EXTENSIONIN-VALUEPART1 ).
    READ TABLE G_TAB_FHA_APPENDS INDEX 1 INTO ZTAB.
    ZTAB-CONTENT(LEN) = PT_EXTENSIONIN-VALUEPART1(LEN).
    MODIFY G_TAB_FHA_APPENDS FROM ZTAB INDEX 1.
  ENDIF.

* Save the data into the transaction
  CALL METHOD G_PROXY_CUST_DATA->SET_CUST_DATA
    EXPORTING
      PI_TAB_FHA_APPENDS = G_TAB_FHA_APPENDS
    EXCEPTIONS
      INVALID_DATA       = 1
      INVALID_CALL       = 2
      OTHERS             = 3.
  IF SY-SUBRC <> 0.
  ENDIF.
ENDFUNCTION.

 

3. In your implementation of FTR_CUSTOMER_EXTENT, call the ZSET_CUSTOM1 function module from method EVT_BAPI_SET_CUSTOM1.

 

METHOD IF_EX_FTR_CUSTOMER_EXTENT~EVT_BAPI_SET_CUSTOM1.
  CALL FUNCTION 'ZSET_CUSTOM1'
    IMPORTING
      PI_RETURN      = PE_RETURN
    TABLES
      PT_EXTENSIONIN = PI_EXTENSIONIN.
ENDMETHOD.

 

4. Fill EXTENSIONIN structure in function module BAPI_FTR_IRATE_DEALCREATE.

 

* Define the EXTENSIONIN structure, which has type BAPIPAREX
DATA: ZEXT TYPE STANDARD TABLE OF BAPIPAREX WITH HEADER LINE.
...
ZEXT-STRUCTURE = 'ZTRM_CUSTOM_FIELDS'.
* Flat structure without any delimiters; field order exactly as in the append structure
CONCATENATE LT_FILE-ZZAUFNR LT_FILE-ZZGSBER LT_FILE-ZZBUKRS LT_FILE-ZZRFHA
  INTO ZEXT-VALUEPART1 RESPECTING BLANKS.
ZEXT-VALUEPART2 = ''.
ZEXT-VALUEPART3 = ''.
ZEXT-VALUEPART4 = ''.
APPEND ZEXT.
...
CALL FUNCTION 'BAPI_FTR_IRATE_DEALCREATE'
  EXPORTING
    INTERESTRATEINSTRUMENT       = ZINTERESTRATE
    INTERESTRATEINSTRUMENTX      = ZINTERESTRATEX
    GENERALCONTRACTDATA          = ZGENERAL
    GENERALCONTRACTDATAX         = ZGENERALX
    CONDITION_COMPLETE_INDICATOR = P_COND
    PAYDET_COMPLETE_INDICATOR    = P_PAY
    ADDFLOW_COMPLETE_INDICATOR   = P_ADD
    MAINFLOW_COMPLETE_INDICATOR  = P_MAIN
    TESTRUN                      = P_TEST
    MIRRORING                    = ' '
  IMPORTING
    COMPANYCODE                  = BUKR
    FINANCIALTRANSACTION         = TRAN
  TABLES
    CONDITION                    = ZCOND
    CONDITIONX                   = ZCONDX
    FORMULAVARIABLE              = ZFORMULA
    SINGLEDATE                   = ZSINGLE
    PAYMENTDETAIL                = ZPAYDET
    PAYMENTDETAILX               = ZPAYDETX
    ADDFLOW                      = ZADD
    ADDFLOWX                     = ZADDX
    MAINFLOW                     = ZMAIN
    MAINFLOWX                    = ZMAINX
    EXTENSIONIN                  = ZEXT  "EXTENSIONIN structure
    RETURN                       = ZERR.

So, using function modules BAPI_FTR_*_DEALCREATE we:

  1. Use only a single function module in order to create the transaction.
  2. Can use the TESTRUN flag in order to test all structures and their data.
  3. Can work with the user-exit tab's data.

 

P.S. In the attachment you can find the file EXAMPLE.txt with an example program that uploads data from files into SAP and creates an interest rate instrument. This is a demo and it has some bugs (read the Known bugs section), but nevertheless it works quite well.

Read the Prerequisites part in the file.

What makes us that one step closer to being a User Centric IT Organization?



You may have heard the term "User Centric IT Organization". I am pleased to tell you how we at the Business Innovation and Information Technology (BI&IT) Enterprise Mobility team, headed by Martin Lang, not only embrace this but also bring it into effect in our efforts to make the work lives of over 75,000 colleagues at SAP simpler.

 

1. Feedback Shake


User feedback (good and bad) is of paramount importance if an organization wants to be really user centric, or even come close. What if you could instantly hear what users have to say when you are thousands of miles away from them? What if you could receive all the information your team needs to analyze the feedback, including screenshots and logs? Sounds like a bit of a challenge, doesn't it?


At BI&IT Enterprise Mobility, we have made this possible by implementing what we call the feedback shake. It works as follows:


Consider a scenario where you are using a mobile app and quickly have an idea for a new feature that might help you use the app in a better way. Typically, you would have to take a screenshot, open your email, and search for the email address or DL that the feedback should go to (in some cases this info is not readily available). Ultimately, in most cases, you do not send feedback because it is just so much effort. Here the feedback shake comes to your rescue. With the feedback shake, you can quickly shake your phone and tap a button to send feedback. The email client opens on your phone with a pre-attached screenshot (the screen you were on when you shook your phone), logs, and some metadata (app version, OS version). All you have to do is type your feedback and tap send.
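To make the flow concrete, here is a small, purely illustrative sketch of how the pre-filled email body could be assembled from the collected metadata (the class and field names are hypothetical, not the actual framework API):

```java
// Purely illustrative: building the pre-filled feedback email body from
// the metadata the shake gesture collects. Screenshots and log files
// would be attached separately by the framework.
class FeedbackReport {
    private final String appVersion;
    private final String osVersion;
    private final String userText;

    FeedbackReport(String appVersion, String osVersion, String userText) {
        this.appVersion = appVersion;
        this.osVersion = osVersion;
        this.userText = userText;
    }

    // Returns the text placed into the email body before the user hits send.
    String toEmailBody() {
        return "App version: " + appVersion + "\n"
             + "OS version: " + osVersion + "\n\n"
             + userText;
    }
}
```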


This is a much easier and quicker approach: the sender can provide feedback instantly with fewer steps, and the receiver gets feedback with all the information needed to analyze it. The cherry on the cake is that the framework is easy to integrate with any app, making developers' lives a lot easier so they can focus more on functionality. Here are some screenshots that show how the framework functions.

 

           

IMG_2121.PNG                  

IMG_2122.PNG

 




2. Mobile Usage Framework


Every organization that wishes to be user centric needs to know how its users are interacting with its software.

 

Especially for us at Enterprise Mobility, we wanted to know the following:


  • Which apps are our users using most regularly?
  • Which apps don't users use so much?
  • How are users interacting with the apps?
  • Which features are being used and which ones aren't?
  • How does usage vary by region, given that each region is very diverse in itself?
  • Are users coming back to the apps after using them for the first time?

 

To answer all these questions plus many more, the Mobile Usage Framework was developed completely in-house. The framework shows a real-time world map of internal apps being used by SAP colleagues worldwide. The top half indicates the country where the app was used, and the bottom half shows the name of the app. There is also color coding to distinguish between the platforms the apps were used on. All in real time!
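The aggregation behind such a dashboard can be imagined as a simple counter keyed by country and app. The sketch below is a hypothetical illustration, not the framework's actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical aggregation step behind a usage dashboard: count hits
// per (country, app) pair from a stream of incoming usage events.
class UsageCounter {
    private final Map<String, Integer> hits = new HashMap<>();

    // Records one usage event for the given country and app.
    void record(String country, String app) {
        hits.merge(country + "/" + app, 1, Integer::sum);
    }

    // Returns how many hits have been recorded for this country/app pair.
    int hitsFor(String country, String app) {
        return hits.getOrDefault(country + "/" + app, 0);
    }
}
```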

 

Tracking_Framework_Snapshot.png

 

Not only this, the framework provides reports such as unique devices, total hits, usage by app version, and regional usage, plus many more metrics. My favorite metric is actually user events, which tells me which features users use the most (or the least). This gives me insight into usage patterns, what users like most, and which areas we need to focus on to improve the user experience.


3. App Crash Log Framework


The worst thing that can happen to an app is that it crashes while the user is interacting with it. Of course, we try to minimize crashes as much as possible by following coding best practices, but sometimes it just happens. So how do we make sure we know that a crash occurred, when it occurred, and during which interaction? All these questions led us to create yet another useful framework, the App Crash Log Framework, built on the SAP HANA Cloud Platform.

 

The App Crash Log Framework is easily integrated with any app, so it is super easy for developers to incorporate. Once an app crashes, the next time the user launches the same app a popup appears asking the user to send a crash report. Once the report is sent from the app, it gets stored on the Crash Log server. The information the server receives from the client (app) also includes log files that developers can symbolicate to detect the memory leaks that caused the crash.
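The core of any such report is a serialized stack trace. In plain Java, capturing one as text can be done as below (a generic sketch; the real framework's report format is not shown here):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Generic sketch: turning a caught Throwable into the text a crash
// report could carry to the server for later analysis.
class CrashReportSketch {
    // Renders the full stack trace of t as a plain string.
    static String stackTraceAsText(Throwable t) {
        StringWriter sw = new StringWriter();
        t.printStackTrace(new PrintWriter(sw, true));
        return sw.toString();
    }
}
```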


 

These frameworks enable us to stay close to our users and make the right choices when it comes to providing a great user experience. I was happy (and fortunate) to showcase these frameworks at SAP TechEd Las Vegas, SAP's premier technology education event in North America. It was very exciting and satisfying to talk to our customers and partners about some really cool and innovative stuff that our team members at BI&IT Enterprise Mobility have worked so hard to bring to reality.


If you will be attending SAP TechEd Barcelona, please join Sanit’s colleague Simon Hofmann on Wednesday, November 11th for Networking Session EXP27453 Develop SAP Fiori Apps and Back-End Mobile Applications with SAP HANA Cloud Platform, from 11:30 to 12:00 in Lounge 2, Showfloor. There you can get more insight into what we think about User Centricity, what it took to develop frameworks like the above and the latest and greatest in SAPUI5 and SAP HANA Cloud Platform.

SAP OData SDK Android for absolute beginners – Part 1


In this tutorial, my intention is to introduce the SAP Mobile Platform SDK for Android, Harmonized OData API online store concept to you with a code snippet altogether 31 lines long. You can call it the „Hello World” for OData on Android. Around the code, I try to provide as much explanation as possible, so that anyone new to the topic can understand. The only prerequisites are basic knowledge of Java programming, the Android SDK, RESTful web services and the OData protocol, as well as Android Studio and SAP Mobile SDK being installed on your developer PC, the latter being downloadable from the location specified in this article at the time of writing this post: http://scn.sap.com/community/developer-center/mobility-platform/blog/2014/05/22/download-and-install-mobile-sdk-30 .

 

There is already plenty of other material available for learning the OData SDK for Android. It is all great; for example, Claudia Pacheco’s „How To...Consume OData Services in Online Mode (Android)” tutorial (available here: http://scn.sap.com/docs/DOC-60634) is really excellent. It contains nicely designed and comprehensive code, from which you can learn a lot. In comparison, this post contains a code snippet that just wants to be short and simple, so please don’t criticize the shortcuts; they are intentional. For example, all our code is synchronous and runs within the UI thread, which is something you don’t want to do in a „serious” app, as the network can be really slow and the app should not appear frozen. However, adding multithreading and asynchrony to the app would bring in more complexity, so I left them out for now.
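For readers curious what moving the call off the UI thread could look like, the sketch below uses a single-threaded ExecutorService as a stand-in for the real network call. This is a generic Java illustration, not the SDK's own asynchronous API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Generic sketch of running a slow task off the calling thread.
class AsyncSketch {
    static String fetchInBackground() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            // In a real app this task would perform the OData request;
            // here it just returns a placeholder payload.
            Future<String> result = pool.submit(() -> "payload");
            // A UI app would use a callback instead of blocking on get().
            return result.get();
        } finally {
            pool.shutdown();
        }
    }
}
```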

 

So let’s start the adventure! I’ve broken down what we need to do into 7 simple steps (of which the 5th one, „Write new code”, is much longer than the others, as it contains the key part of this tutorial). TL;DR: If you are only interested in the code snippet as a whole, you can find it at the end.

 

Step 1: Create a new Android Studio project from Blank Activity template

 

Nothing to explain here, give your project a catchy name, Next, Next, Finish.

 

Step 2: Set up library dependencies

 

This is simple but not completely trivial. You can find a great description of what you should do in the „Android Applications” Developer Guide of SMP Native SDK, in the „Setting Up Android Studio for Native SDK Development” section: http://help.sap.com/saphelp_smp3010sdk/helpdata/en/f9/1c1c77615f441db0a99ea8c8f5b0b0/content.htm .

 

Step 3: Configure manifest file

 

In order to declare that your app needs to be able to access the network, add these 2 lines (if they are not yet there) just before the closing </manifest> tag of the manifests/AndroidManifest.xml file within your project:

 

<uses-permission android:name="android.permission.INTERNET"></uses-permission>

<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" ></uses-permission>

 

Step 4: Delete unnecessary generated code

 

We are going to write all our code within the onCreate method of the main Activity. It is enough to keep super.onCreate(savedInstanceState); as the first line of the method from all that has been generated by the New Project wizard, the rest can be deleted. All our new code created here in this tutorial comes after this one remaining line within the onCreate method.

 

Step 5: Write new code

 

Now we are getting serious: write our brand new code!

 

I’ll start with a step-by-step explanation and then list the complete code snippet to make sure you enter it into Android Studio in the right order. When unresolved libraries are highlighted with red, add imports via Alt+Enter when saving the file. In case multiple alternatives are offered for some classes or interfaces having names starting with OData, choose the ones in the package com.sap.smp.client.odata.

 

Our output (data retrieved from the backend service) is going to be shown to the user via a TextView, so let’s initialize that, as well as the String object that will accumulate the text containing the output:

 

TextView textView = new TextView(this);
textView.setTextSize(20);
String text = "";

Then we set a default output message, to make sure that even if something goes wrong (like network connection issue, odata.org service being down etc.), we show something meaningful to the user:

 

textView.setText("There was an error getting data from backend service.");

 

After making sure we’ll have something going out, let’s start taking care of getting something in: data from an OData service via the network. In order to manage data traffic via the internet, we are going to use our SDK’s HttpConversationManager class. As its name suggests, it manages the conversation between the client and the server via the HTTP (or HTTPS) protocol. As you may already know, OData transfers data via internet or intranet networks via HTTP as network protocol, the data being represented as XML or JSON structured text. However, HttpConversationManager is such a great thing that for now, you absolutely don’t have to worry about details of HTTP. We don’t even have to worry about authentication as the service being used is a public one without any logon credentials being requested. In subsequent parts of this series I’m planning to introduce how to handle authentication via using HttpConversationManager in combination with SAP Mobile App Framework (MAF) Logon Manager, but not yet now, for the sake of simplicity of first time introduction.

 

Let’s construct our HttpConversationManager object, providing this as Android context, as the only constructor parameter, then start a try block, as the things that follow are able to throw various kinds of exceptions (for example network access issues):

 

HttpConversationManager manager = new HttpConversationManager(this);
try {

Now we’ll specify where our backend service lives, as an URL object. The backend service we are going to use is the public test / demo service on odata.org website. The newest version of the OData protocol specification is V4, but as SAP OData SDK currently supports an extended version of OData V2, we should also use this V2 version of the protocol from the service as well, that’s why „/V2” is part of the URL. Then we open our OnlineODataStore, for which the first parameter is Android context, the second one is the service URL, the third one is the HttpConversationManager used for handling network connectivity, and the last one is an online store options object that is optional and not needed now, so we provide null here.

 

Opening the store doesn’t retrieve any business data yet, just connects to the OData service document and initializes the online store.

 

URL url = new URL("http://services.odata.org/V2/OData/OData.svc");
OnlineODataStore store = OnlineODataStore.open(this, url, manager, null);
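Under the hood, reading from the store boils down to plain HTTP GET requests against URLs composed from the service root and a collection name. The following is a minimal sketch (plain Java, no SDK classes involved) of how such a request URL is composed; the helper name `entitySetUrl` is hypothetical and for illustration only:

```java
public class ODataUrlSketch {
    // Hypothetical helper: builds the URL that an entity-set read
    // requests, optionally forcing JSON output via the $format option.
    static String entitySetUrl(String serviceRoot, String entitySet, boolean json) {
        String url = serviceRoot + "/" + entitySet;
        if (json) {
            url += "?$format=json";
        }
        return url;
    }

    public static void main(String[] args) {
        String root = "http://services.odata.org/V2/OData/OData.svc";
        // Prints the URL the Suppliers read below will effectively target.
        System.out.println(entitySetUrl(root, "Suppliers", false));
    }
}
```

You can paste either URL into a browser to inspect the raw feed the SDK will be parsing for us.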

Once the store is opened successfully (which we verify with a null check), the OData feed of a specific collection can be read over the network, by default in XML format (using JSON instead of XML can be specified via configuration within the online store options object). In this case we are retrieving the "Suppliers" collection, and as the second parameter we provide null as the options object, since we need no extra configuration for now.

 

if (store != null) {
    ODataResponseSingle resp = store.executeReadEntitySet("Suppliers", null);

Once we have the XML content in memory (in the form of the ODataResponseSingle resp object, which internally stores the data as one big string), let’s parse it so that we can work with the content directly instead of dealing with the internal structure of the XML. The getPayload() method of the response object takes care of the XML parsing and provides the data as a structured, in-memory object representation of the OData feed. Out of this feed, we can retrieve just the business data records as a List of ODataEntity objects via the getEntities() method call.

 

ODataEntitySet feed = (ODataEntitySet) resp.getPayload();
List<ODataEntity> entities = feed.getEntities();
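To make concrete what getPayload() spares us from, here is a hedged sketch that parses a heavily simplified, hypothetical stand-in for one entry’s property block using only the JDK’s DOM parser. This is not the SDK’s actual implementation, just an illustration of the kind of XML-to-value extraction it performs (real OData V2 Atom entries carry namespaces and much more structure):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;

public class AtomEntrySketch {
    // Simplified, hypothetical stand-in for the properties block of one
    // OData V2 entry (namespaces and surrounding Atom markup omitted).
    static final String XML =
            "<properties><ID>0</ID><Name>Exotic Liquids</Name></properties>";

    // Extracts one property value from the XML text, roughly what
    // getPayload() does for every property of every entry in the feed.
    static String property(String xml, String name) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        return doc.getElementsByTagName(name).item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Name: " + property(XML, "Name"));
    }
}
```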

Now that we have the list of business object entities at hand, let’s prepare to process them one by one by declaring the variables that will temporarily hold the data: the ODataPropMap object will store all the properties of the current entity in the list, and the ODataProperty object will hold one specific property value of a given entity.

 

ODataPropMap properties;
ODataProperty property;

 

While iterating through the list of entities, the ODataEntity object holds the current entity. The first thing we do with this entity is get the map of its properties.

 

for (ODataEntity entity : entities) {
properties = entity.getProperties();

Then we go property by property, retrieving each property object by passing its name as a parameter and then reading its value. Each value is immediately concatenated to the string that will be shown to the end user as output. Of course, in a real application (instead of this overly simplified tutorial code) you could, for example, construct a statically typed business object from these property names and values; here a class named Supplier would be a likely candidate.

 

property = properties.get("ID");
text += "ID: " + property.getValue();
property = properties.get("Name");
text += ", Name: " + property.getValue();
property = properties.get("Address");
text += ", Address: " + property.getValue();
property = properties.get("Concurrency");
text += ", Concurrency: " + property.getValue() + "\n\n";
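A side note on style: `+=` concatenation inside a loop copies the growing string on every append, so for anything beyond tutorial-sized data a StringBuilder is the more idiomatic choice. A minimal sketch of the same formatting done that way (the helper name `supplierLine` is hypothetical, not part of the SDK):

```java
public class SupplierLineSketch {
    // Hypothetical helper: formats one output line per supplier using a
    // StringBuilder, avoiding the repeated String copies that += causes.
    static String supplierLine(Object id, Object name, Object address, Object concurrency) {
        return new StringBuilder()
                .append("ID: ").append(id)
                .append(", Name: ").append(name)
                .append(", Address: ").append(address)
                .append(", Concurrency: ").append(concurrency)
                .append("\n\n")
                .toString();
    }

    public static void main(String[] args) {
        System.out.print(supplierLine(0, "Exotic Liquids", "Sample Street 1", 0));
    }
}
```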

After iterating through the entities, we are done with data processing. The output text is ready, so we just set it as the content of our textView object (right before the end of the if block that ensured we process data only if connecting to the backend OData store succeeded).

 

}
textView.setText(text);
}

Again, in a real application you would have proper exception handling, with error messages shown to the end user, logging, and so on; here we just print the stack trace to make it easier to troubleshoot issues during the learning process.

 

} catch (Exception e) {
e.printStackTrace();
}

Last but not least, we set the textView object as the only UI element on the screen.

 

setContentView(textView);

 

This is it! As promised, here is the complete code snippet:

 

TextView textView = new TextView(this);
textView.setTextSize(20);
String text = "";
textView.setText("There was an error getting data from backend service.");
HttpConversationManager manager = new HttpConversationManager(this);
try {
    URL url = new URL("http://services.odata.org/V2/OData/OData.svc");
    OnlineODataStore store = OnlineODataStore.open(this, url, manager, null);
    if (store != null) {
        ODataResponseSingle resp = store.executeReadEntitySet("Suppliers", null);
        ODataEntitySet feed = (ODataEntitySet) resp.getPayload();
        List<ODataEntity> entities = feed.getEntities();
        ODataPropMap properties;
        ODataProperty property;
        for (ODataEntity entity : entities) {
            properties = entity.getProperties();
            property = properties.get("ID");
            text += "ID: " + property.getValue();
            property = properties.get("Name");
            text += ", Name: " + property.getValue();
            property = properties.get("Address");
            text += ", Address: " + property.getValue();
            property = properties.get("Concurrency");
            text += ", Concurrency: " + property.getValue() + "\n\n";
        }
        textView.setText(text);
    }
} catch (Exception e) {
    e.printStackTrace();
}
setContentView(textView);

 

Step 6: Check network connectivity

 

Before you run the app, please make sure that your test device or emulator can connect to the internet. To avoid proxy issues, the safest option is usually a real device with Wi-Fi turned off, going over 3G.
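Besides connectivity on the device itself, remember that the app must declare the internet permission in its manifest, otherwise every network request will fail regardless of connectivity. A minimal snippet:

```xml
<!-- In AndroidManifest.xml, as a direct child of the <manifest> element -->
<uses-permission android:name="android.permission.INTERNET" />
```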

 

Step 7: Run

 

Now run the app; if everything went well, you should see something like this:

Screenshot_2015-11-11-10-56-14_small.jpg

My next blog in this to-be-series is planned to cover how to handle authentication when connecting to an OData service. Stay tuned!

A sporting showcase for technology


In the run-up to the Super Bowl's 50th anniversary, Bill McDermott reveals how technology innovation will be the winning factor in the business of sports.

 

Innovation has always been a game-changer in sports. From instant replay to in-helmet headsets that allow quarterbacks to communicate, advances in technology not only impact the way people play, but whether they win or lose.


In this highly competitive world, SAP is a triple threat with solutions that help sports organizations simplify operations, improve player performance and increase fan engagement before, during and after the game.


Last week, SAP CEO Bill McDermott had a chance to showcase our expertise at the Super Bowl 50 Sports Innovation Summit.


In February 2016, the National Football League (NFL) will celebrate the 50th anniversary of the Super Bowl. As part of the festivities leading up to the event, the Super Bowl 50 Host Committee invited NFL executives, Silicon Valley business leaders and other guests to Levi's Stadium in Santa Clara, CA to talk about technology's role in sports. Levi's Stadium, where Super Bowl 50 will be played, is one of the world's most connected sports venues, and therefore, the perfect setting to discuss how technology is becoming more intertwined with the business of sports.

 

The event was intended to set the tone for Super Bowl 50, the most technologically advanced Super Bowl to date, and to set the stage for future NFL Super Bowl tech summits.

Participating in the inaugural summit was a great opportunity for SAP: we are in the unique position of being the only company that is a partner of the Super Bowl 50 Host Committee, Levi's Stadium, and the NFL.

 

Bill opened the day with a fireside keynote with Lynn Swann, former professional American football player, Hall of Famer, sportscaster, and politician.

In their conversation, Bill emphasized that every sports franchise is now a technology company. Because sports is big business - the industry generates as much as $700 billion yearly, or 1 percent of global GDP - technology's benefits extend far beyond the field. The boardroom conversations happening in companies around the world are focused on digitizing work, customer experiences, supply chains and assets. SAP's solutions can help enable franchises to Run Simple - on the field and off.


The discussion explored how technology solutions are enhancing the fan experience in the stadium and at home, and how big data and analytics can help improve player performance. The event continued after the keynote with three panels. SAP was well represented by Quentin Clark, Chief Business Officer, who participated in the Player Tracking and the Connected Athlete session. Further panels featured NFL executives and players speaking on the future of sports and media.

 

The future of sports

When it comes to the future, the possibilities are endless. Wearables, as an example, will transform the amount of data we can access and analyze. Fans may even someday be able to experience a game live from the player's point of view. "Imagine not just watching a game but experiencing it in virtual reality live," said Bill in his closing comments.

 

@magyarj
