
Information Steward Installation tip


I recently ran into an issue with an Information Steward installation (IS 4.2 SP5) that took much longer to complete than expected, even though all the prerequisites were in place. After some basic research, I found that the culprit was the mapped network drives on the system. The workaround for this issue is covered in the IS release notes.

 

Here are the details:

Issue ID:

ADAPT01713377

 

Issue Description:

When you select Disk Cost in the Select features step of the installation, the installer calculates cost using all drives, including mapped drives. This may result in additional installation time if, for example, the network is slow.

 

Workaround:

Unmap the network drives before installing the software.


SAP RTOM Key Contact Persons


As many companies and SAP customers are interested in more information about the product, we would like to list the key contacts on the SAP team who can be reached with any questions or for assistance:


 

  • Alon Barnes, SAP RTOM Solution Architect, alon.barnes@sap.com
  • Gad Ravid, SAP RTOM Solution Owner, gad.ravid@sap.com
  • Oren Haze, SAP RTOM Director for Product and Development, oren.haze@sap.com

Create Travel Request – Extending Fiori application to get default currency for the user


Consider the case where you open the Create Travel Request application and click the "+" symbol to create a new request. The create form contains a currency field, which appears after the input field for Estimated Cost.

[Screenshot: Create Travel Request form with the Currency field]

 

The requirement was to pre-fill the default currency applicable to the user, as shown in the image below.

 

[Screenshot: Currency field pre-filled with the user's default currency]

 

The next step was to look into the application's code. The relevant view is the DetailForm view, and the code for the currency field was:

 

              <Input
                  id="Trc_SC_Currency"
                  value="{EstimatedCost/Currency}"
                  showValueHelp="true"
                  editable="{ parts: [{path: 'UserInfo>/GlobalSettings/FixedCurrency'}], formatter: '.checkGlobalSettingsFixedCurrency' }"
                  showSuggestion="true"
                  suggestionItems="{Currencies>/result}"
                  valueHelpRequest="onCurrencyValueHelpRequest">
                  <suggestionItems>
                      <core:Item text="{Currencies>Id}"></core:Item>
                  </suggestionItems>
              </Input>

 

As we can see, the editable property already has a binding. We re-map it to a field that we will fill in the backend with our own logic for the user, and we also need to change the logic of the formatter function (checkGlobalSettingsFixedCurrency).

The extended view will therefore have the currency field as below:

              <Input
                  id="Trc_SC_Currency"
                  value="{EstimatedCost/Currency}"
                  showValueHelp="true"
                  editable="{ parts: [{path: 'UserInfo>/Currency'}], formatter: '.checkGlobalSettingsFixedCurrency' }"
                  showSuggestion="true"
                  suggestionItems="{Currencies>/result}"
                  valueHelpRequest="onCurrencyValueHelpRequest">
                  <suggestionItems>
                      <core:Item text="{Currencies>Id}"></core:Item>
                  </suggestionItems>
              </Input>

 

 

The formatter function is changed as below:

       checkGlobalSettingsFixedCurrency: function () {
           var UserInfo = this.getUserInfo();
           // If a default currency was provided for the user,
           // make the field read-only; otherwise keep it editable
           if (UserInfo.Currency) {
               return false;
           } else {
               return true;
           }
       },

 

 

 

That should be enough on the UI side. Now we need to fill the default currency into the user info placeholder in the backend.

 

To fetch the default currency for the employee via OData, we used the enhancement spot SRA004_MY_TRAVEL_REQUEST. Implementing it provides the method IF_SRA004_BADI_MY_TRAVEL_REQ~CHANGE_EMPLOYEE for changing the user profile. We read the user's organizational assignment, derive the company code, and send its currency in the Employee's currency field. The following is the code we used:

 

METHOD if_sra004_badi_my_travel_req~change_employee.

  DATA: lt_return            TYPE bapireturn,
        lt_return2           TYPE bapiret1,
        ls_return            TYPE bapiret2,
        ls_org_assignment    TYPE bapip0001b,
        lt_org_assignment    LIKE TABLE OF ls_org_assignment,
        ls_personal_data     TYPE bapip0002b,
        lt_personal_data     LIKE TABLE OF ls_personal_data,
        ls_internal_control  TYPE bapip0032b,
        lt_internal_control  LIKE TABLE OF ls_internal_control,
        lt_return3           TYPE STANDARD TABLE OF bapiret2,
        ls_costcenterdetails TYPE bapi0012_ccoutputlist.

* Read the employee's master data, including the organizational assignment
  CALL FUNCTION 'BAPI_EMPLOYEE_GETDATA'
    EXPORTING
      employee_id      = is_employee-id
    IMPORTING
      return           = lt_return
    TABLES
      org_assignment   = lt_org_assignment
      personal_data    = lt_personal_data
      internal_control = lt_internal_control.

  READ TABLE lt_org_assignment INDEX 1 INTO ls_org_assignment.

  IF sy-subrc EQ 0.
    is_employee-costcenter  = ls_org_assignment-costcenter.
    is_employee-companycode = ls_org_assignment-comp_code.

* Read the cost center details to get the cost center name
    CALL FUNCTION 'BAPI_COSTCENTER_GETDETAIL1'
      EXPORTING
        controllingarea  = ls_org_assignment-co_area
        costcenter       = is_employee-costcenter
      IMPORTING
        costcenterdetail = ls_costcenterdetails
      TABLES
        return           = lt_return3.

    is_employee-costcentername = ls_costcenterdetails-name.

* Derive the default currency from the employee's company code
    CALL FUNCTION 'HRCA_COMPANYCODE_GETDETAIL'
      EXPORTING
        companycode = is_employee-companycode
      IMPORTING
        currency    = is_employee-currency.
    IF sy-subrc <> 0.
    ENDIF.
  ENDIF.

ENDMETHOD.

 

 

This provides the required default currency, which is then passed through the OData service to the UI.

 

I hope this is useful for someone.

 

Regards,

My Takeaways from HTML5DevConf2015


In late October of 2015, I attended the HTML5 Developer Conference in San Francisco, one of the largest gatherings of developers working in web tech, featuring many renowned speakers. After presenting my key takeaways internally to colleagues and externally at SAP Inside Track Walldorf, I realized that I should share my learnings with a bigger audience and decided to summarize my impressions and a selection of talks as an SCN blog post.

 

Overall it was an exciting experience for me, as I got the chance to meet tech stars and peers from all over the world and gained insights into the latest trends relating to web performance, protocols (HTTP/2, WebSockets), IoT/WoT, UX, and some other hot topics. Apart from conference talks, it also included training courses aimed at getting hands-on experience through live coding sessions. Talks were mostly held at the Yerba Buena Center For The Arts and the Metreon Center, which had a great view of San Francisco that was well captured in this photo by Robert Dawson:

https://pbs.twimg.com/media/CRswqTCUAAAuD3E.jpg:large

 

Design + Performance (Slides)

Steve Souders, SpeedCurve

 

This was an interesting talk by Steve Souders, a well-known expert on web performance topics and former Chief Performance Engineer at Google/Yahoo. Currently he works at SpeedCurve, which makes a front-end performance monitoring tool.

 

Steve started his talk by explaining the importance of having interdisciplinary teams. He suggests bringing designers and developers together and increasing collaboration between them from the initial stage of the project (rather than starting with designers producing concepts in isolation and developers taking over later). This helps to produce "non-reckless" designs that consider the trade-offs between performance and design.

 

The following are some of the main guiding principles that Steve presented for measuring/improving site performance.

 

1. Define performance budgets to track progress and get alerts whenever limits are exceeded. Here's a good blog about this concept.

[Image: in-page performance reminders]

 

2. Introduce in-page metrics (visible only internally) as constant reminders to the team of how performance is doing, and to alert in case there are any "performance budget" violations or regressions. Using Etsy as an example, Steve explained how small changes like this can help to establish a "culture of performance".

 

3. Do not use window.onload as a performance metric, as it is not suitable for dynamic behavior, preloading, or lazy loading. Look instead at metrics that better capture the rendering experience, like Speed Index, which is the time at which the 50th-percentile pixel got painted.

 

4. Most importantly, define your own custom metrics, as there is no all-purpose metric. By custom metrics he means identifying the design elements that matter most to the user experience (e.g. Twitter's time to first tweet). These can be measured through the User Timing API, which is part of the W3C Web Timing API family and helps to identify the hot spots in the code. It allows you to register time measurements at different places in JavaScript, which are then stored by the browser. The two main concepts are Mark (a timestamp) and Measure (the time elapsed between two Marks); see the sketch after this list. The Web Timing API family also includes the Resource Timing and Navigation Timing APIs.

For tracking custom metrics there are two types of website monitoring solutions: Synthetic and Real-User Monitoring (RUM).

  • RUM tools gather performance metrics directly from end-user browsers through embedded JS beacons and collect insights on how people use the site (environments, browsing paths, etc.).
  • Synthetic tools simulate actions that users make and measure metrics like response time, load time from different locations (e.g. WebPageTest, SpeedCurve, DynaTrace).
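
As a minimal sketch of the User Timing API (the metric and mark names here are invented for illustration):

    // Mark interesting points in the page lifecycle
    performance.mark('tweets-start');

    // ... application code renders its critical content here ...

    performance.mark('tweets-end');

    // A measure is the elapsed time between two marks,
    // e.g. a "time to first tweet" style custom metric
    performance.measure('time-to-first-tweet', 'tweets-start', 'tweets-end');

    // Read back the stored entries, e.g. to send them to a RUM beacon
    performance.getEntriesByName('time-to-first-tweet').forEach(function (entry) {
        console.log(entry.name + ': ' + entry.duration.toFixed(1) + ' ms');
    });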

 

You might want to check out Steve's famous 14 Rules for Faster-Loading Web Sites if you haven't heard of them yet, but beware that in the context of HTTP/2 a few of them are now viewed as anti-patterns.

 

Measuring Web Perf? Let’s write an app for that!!! (Slides)

Parashuram Narasimhan, Microsoft

 

This was another performance-related session, which revolved around the idea that performance needs to be treated as a feature: like any other feature, it must have automated tests running against every build and must be monitored for regressions. Making automated web performance measurement part of continuous integration allows teams to collect metrics, show trends of how the application behaves across multiple commits, and understand which exact commits introduced performance regressions.


[Slide: performance is a feature]

Parashuram showed how this can be done using browser-perf, his open-source web performance metrics tool. The tool collects tracing information (e.g. frame rates, layouts, paints, load time, taken for example from the Chrome DevTools Timeline panel) obtained by mimicking real user actions (e.g. through Selenium), and monitors site performance for every commit as part of the CI process. An alternative to browser-perf is Phantomas, which is built on top of PhantomJS.
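
As a rough sketch of how this looks in practice, based on browser-perf's documented Node.js usage at the time (the site URL and Selenium endpoint are placeholders):

    var browserPerf = require('browser-perf');

    // Collect rendering metrics (frame rates, paints, layouts, ...) for a page
    browserPerf('http://example.com/', function (err, res) {
        if (err) {
            console.error('Measurement failed:', err);
        } else {
            // res is an array of metric objects, one per browser;
            // persist these per commit to plot trends in CI
            console.log(res);
        }
    }, {
        selenium: 'http://localhost:4444/wd/hub', // a running Selenium server
        browsers: ['chrome']
    });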

 

In the final part of his session, Parashuram talked about monitoring performance trends of web frameworks. Inspired by Topcoat's example of integrating performance tests into the daily commit process, he has started to experiment with analyzing performance trends over major releases of different JavaScript frameworks by plotting metrics like frame rates, style calculation times, layouts, etc. Of course, a natural question for me was whether he had done such an analysis for OpenUI5 as well and found any interesting trends. When I approached him after his session he told me that he had already done that, but unfortunately had no time to give any details as he had to leave early. So I hope to find some time soon and try it out myself.

 

The "entertainment factor" of this talk was also high, as Parashuram decorated his slides with cute and funny Stormtroopers.

 

NextGen Web Protocols: What’s New? (Slides)

Daniel Austin, GRIN Technologies

 

The topic of HTTP/2 is becoming more and more popular these days, as it can bring up to 60% performance gains by addressing shortcomings of HTTP/1.1 (which had not been updated since 1999). So I decided to take the chance to get an overview of it and attended this session.

 

Here are some quick facts about HTTP/2 that I learned from Daniel's talk:

- The main goal of HTTP/2 is to reduce HTTP response times. It improves bandwidth efficiency, not latency!

- HTTP's semantics remain unchanged to avoid compatibility issues.

- It is based on SPDY, which was proposed by Google as a wire format extension to HTTP in 2011.

- Was standardized by IETF on May 14, 2015 as RFC 7540

- Though the standard does not require TLS, browsers support HTTP/2 only if TLS is in use. So all HTTP/2 enabled sites will be using HTTPS.

- Implementations:

Servers: Akamai Edge servers, F5 BigIP, Apache (mod_h2), Nginx, MS IIS (Windows 10)

Clients: Chrome, Firefox, Safari 9 (+ apps!), CURL, MS Edge (Windows 10)

 

At a high level HTTP/2 introduces the following changes:

  • Binary instead of textual format, which means that it needs no text parsing and is more compact. But it also means that debugging is trickier and one will need tools like Wireshark more often.
  • The number of physical HTTP connections is reduced to just one, and instead of multiple connections we have streams that are divided into control and data frames (multiplexing). As these frames do not need to arrive sequentially, this solves the issue of head-of-line blocking.
  • The number of bytes and (logical) messages sent is considerably reduced through mechanisms like header compression and server push. Headers are compressed using the HPACK specification, which uses two main methods: 1. differential encoding (in the first request the header's full information is sent, but subsequent requests send only the difference from the first one), and 2. Huffman coding to further compress the binary data. The server push approach enables the server to "push" multiple responses to the client's first request, suggesting what other resources it might need. This helps to avoid unnecessary round trips and the server waiting for the client to parse the response and discover further dependencies.
  • Besides this, HTTP/2 also prioritizes both messages and packets for queuing efficiency and improves caching.
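
As a small practical aside, you can check from JavaScript which protocol was actually negotiated; newer browsers expose this through the Resource Timing API (treat this as an illustrative sketch):

    // Log the negotiated protocol ("h2" for HTTP/2, "http/1.1" otherwise)
    // for every resource the page has loaded so far
    performance.getEntriesByType('resource').forEach(function (entry) {
        console.log(entry.name + ' -> ' + entry.nextHopProtocol);
    });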

 

Daniel also talked briefly about some other recently developed protocols like QUIC, which uses UDP instead of TCP/IP, and Scratch, which was actually proposed by Daniel himself. These protocols are still in the experimentation phase.

 

WebSocket Perspectives 2015 (Slides)

Frank Greco and Peter Moskovits, Kaazing Corporation

 

This talk provided some interesting insights about WebSockets in the context of IoT/WoT, cloud connectivity, and microservice transports.

 

[Slide: human web vs. IoT data flow]

The terms "Web of Things" and "Internet of Things" are sometimes used interchangeably, but making a distinction is actually important. Frank defines IoT as "embedded computing endowed with Internet connectivity" and WoT as an "application and services layer over IoT", similar to the Internet (network layer) vs. the Web (application layer). IoT relates more to the connectivity aspects, which are not sufficient without formal APIs, protocol standards, and common frameworks. An interesting observation presented by Frank was that the data flow model for the human web and the WoT is quite different, and hence we need to rethink which protocols and architectures we use for the new model.

 

As another context, they mentioned microservices. In scenarios with hundreds of microservices making REST-based calls, a lot of latency accumulates in the overall architecture, as we have to wait for replies. Switching to an asynchronous approach can be better in this sense, as it also increases scalability.

Furthermore, using WebSockets can also be advantageous in the context of hybrid cloud connectivity, where cloud services require frequent, on-demand, real-time access to on-premise systems.

 

In such an event-driven world, the question arises whether HTTP is the right choice as a web communication protocol, because it has many disadvantages in the above-mentioned scenarios: inefficient consumption of resources and bandwidth, and real-time behavior simulated through workarounds like resource-intensive polling and AJAX/Comet.

The WebSocket protocol addresses many of these limitations by providing a full-duplex, persistent connection. However, it is important to understand that WebSocket is a peer protocol to HTTP, and the two can also be used in combination to take advantage of caching mechanisms, CDNs, and other benefits of HTTP.

The protocol has been standardized by the IETF as RFC 6455, and its JavaScript API is currently being standardized by the W3C. All modern browsers already support WebSockets, and there are many server-side implementations, both commercial and open source.
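
For reference, the browser-side API is tiny; a minimal sketch (the endpoint URL and message format are placeholders):

    // Open a full-duplex, persistent connection
    var socket = new WebSocket('wss://example.com/updates');

    socket.onopen = function () {
        // The client can send at any time...
        socket.send(JSON.stringify({ subscribe: 'sensor-42' }));
    };

    // ...and the server can push at any time, no polling required
    socket.onmessage = function (event) {
        console.log('Update received:', event.data);
    };

    socket.onclose = function () {
        console.log('Connection closed');
    };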

 

Falcor: One Model Everywhere (Slides)

Jafar Husain, Netflix and TC39 (JavaScript Standards Committee)

 

Falcor is a new JavaScript library, open sourced by Netflix, that provides a data access mechanism with the following benefits:

  • optimized way of requesting as much or as little data as we want in a single request
  • asynchronous mechanism of fetching the data for populating the UI as soon as it’s there
  • flexibility to treat the data as a single unified JSON even though its segments are retrieved from multiple data sources.

 

Like the domain model of most web applications, Netflix's domain model is a graph, and it is not possible to represent a graph as a JSON object (which is a tree) without duplicates. To avoid this problem, Falcor introduces the JSON Graph convention, which basically does the following: "Instead of inserting an entity into the same message multiple times, each entity with a unique identifier is inserted into a single, globally unique location in the JSON Graph object." Another utility used by Falcor is the server-side Router. When a portion of the JSON model is requested, it is matched against a certain route, and routes are defined not through URLs but through paths in the JSON document. This creates the illusion of a single model served from multiple resources, since for each route the data requests can be delegated to different data sources.

Falcor does not have a powerful dynamic query mechanism, and compared to JSONPath it is rather limited, but it enables optimizing the queries that are expected and happen most often.
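
A minimal sketch of the client side (the path and field names are invented; it assumes the falcor and falcor-http-datasource scripts are loaded and a Falcor Router is serving /model.json):

    // One virtual JSON model for the whole UI, backed by the server-side Router
    var model = new falcor.Model({
        source: new falcor.HttpDataSource('/model.json')
    });

    // Ask for exactly the fields the UI needs; Falcor batches the paths
    // into a single optimized request against the JSON Graph
    model.get('titles[0..2].name', 'titles[0..2].rating')
        .then(function (response) {
            console.log(response.json.titles);
        });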

 

As part of HTML5DevConf, Jafar also offered a training called "Async Programming in JS". In this talk he summarizes the main points of that topic, which is actually quite interesting, and I would recommend checking it out. He also offers the same course on FrontendMasters and egghead.io, and has put the exercises he used during the training online.


Drunk User Testing (Slides)

Austin Knight, HubSpot


[Slide: The User is Drunk]

This was a fun talk about an unconventional user testing strategy based on the "The User is Drunk" paradigm. The underlying principle is: "Your site should be so simple and well-designed that a drunk person could use it." Some people take this concept so seriously that they even make money conducting such tests (UX expert Richard Littauer runs http://theuserisdrunk.com/ and has also set up http://theuserismymom.com/). You can find the fundamental concepts of this methodology in Austin's blog.

 

He also suggests giving high importance to creating a UX culture within a company. According to him, this can be achieved by following these principles:

  • Everyone is a UX Designer.
  • Involve your Designers and Developers
  • Fall in love with problems, not solutions
  • Listen to sales and support calls
  • Get your hands dirty

 

 

 

UX Super Powers with #ProjectComet (slides)

Demian Borba, Adobe

[Slide: Design Thinking process]


Unfortunately I missed this session because I was attending a parallel one, but I found the topic interesting and learned more about it through the slides that were posted online.

 

Demian Borba is a Product Manager at Adobe working on Project Comet, a UX design and prototyping tool that will arrive this year. Judging from the slides, he did not talk merely about the tool, but also about the underlying UX concepts and the Design Thinking methodology developed at the Hasso Plattner Institute of Design at Stanford (a.k.a. the "d.school").

The iterative process of this methodology is well summarized in this image, which I found here. Although in my current role I don't work directly with the UI and don't make any design decisions, I believe embracing this mindset is crucial for me and for anyone who works in development, as we all eventually have an indirect impact on the end user's experience.

 

In his presentation Demian also gave some book recommendations that look very promising and that I hope to read soon:

"Creative Confidence" by IDEO founder and d.school creator David Kelley and his brother Tom Kelley

"Mindset: The New Psychology of Success" by Carol Dweck (fixed vs. growth mindset => praising abilities vs. effort)

"The Ten Faces of Innovation" by Tom Kelley

 

Building Web Sites that Work Everywhere (slides)

Doris Chen, Microsoft


Doris talked about the fundamentals of cross-browser website development and presented testing tools that check whether a site displays successfully across different browsers, devices, and resolutions. The list included:

  • Site Scan - Reports back on common coding problems
  • Browser screenshots - Take screenshots of your site in a selection of common browsers and devices
  • Windows virtual machines - free downloads of Windows virtual machines used to test IE6 - IE11
  • BrowserStack - A paid online service that gives you access to hundreds of virtual machines

One of the main messages of her presentation was that feature detection should always be preferred over browser detection (navigator.userAgent sniffing), as it is more reliable. Microsoft generally recommends using Modernizr for this task, as it detects all major HTML5 and CSS features. She also talked about polyfills as a means of providing standard APIs in older browsers so that code does not have to be rewritten.
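
To illustrate the difference, here is a small sketch of feature detection versus user-agent sniffing:

    // Fragile: user-agent strings are easily spoofed and change over time
    var looksLikeIE = navigator.userAgent.indexOf('Trident') !== -1; // avoid this

    // Robust: test for the capability itself before using it
    if ('geolocation' in navigator) {
        navigator.geolocation.getCurrentPosition(function (pos) {
            console.log(pos.coords.latitude, pos.coords.longitude);
        });
    } else {
        // Fall back, or load a polyfill that provides the missing API
        console.log('Geolocation is not supported');
    }

    // Modernizr packages the same idea as precomputed flags, e.g.:
    // if (Modernizr.flexbox) { ... }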

 

 

Prototyping the Internet of Things with Firebase (Slides)

Jennifer Tong, Google

 

Jenny did a great job showing how easy it can be to build a simple IoT project using JavaScript. In her demo she used node.js, Firebase, the Johnny-Five library, and boards like the Raspberry Pi.

Firebase is a Google-acquired company providing the following cloud services: a realtime database, hosting, and authentication. The realtime database service allows application data to be synchronized across clients and stored in Firebase's cloud. Its REST API uses the Server-Sent Events protocol, an API for creating HTTP connections to receive push notifications from a server. In contrast to the WebSocket protocol, with SSE the client cannot push messages back, but its advantage is that it uses plain HTTP connections and no additional setup is needed. See Firebase in action in a real-time map of San Francisco bus locations.
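
A minimal sketch of that synchronization model, using the Firebase JavaScript API of the time (the database URL and data shape are placeholders):

    // Reference a location in the realtime database
    var sensorRef = new Firebase('https://my-iot-demo.firebaseio.com/sensors/42');

    // A board (e.g. node.js + Johnny-Five on a Raspberry Pi) pushes readings...
    sensorRef.push({ temperature: 21.5, ts: Date.now() });

    // ...and every connected client is notified in real time
    sensorRef.on('child_added', function (snapshot) {
        console.log('New reading:', snapshot.val());
    });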

VIM Invoice Approval Fiori App


Open Text has released the Invoice Approval Fiori app, based on 7.0 SP6 and 7.5 SP2.

The following functionality is covered by the app:

- Non PO Invoice Approvals


While Open Text has made a step in the right direction, there are many functionalities which are not yet covered by the Open Text Fiori app:

- PO Invoice Approvals

- Blocked Invoices / Exception Handling

- Invoice Coding and Verification


Sample Screen from OpenText Fiori App for Non PO Invoice Approvals



Issues faced in implementing / using the Open Text Fiori App:
- Not supported on Fiori Client App

- Indefinite wait indicator on screen

- Incorrect componentisation of code

- No extension points in UI

- Rendering issues

- Attachment Rendering issues

- App crashes

The above issues make this Fiori app not very user friendly, and it does not work in the mobile Fiori Client (Open Text confirmed that it works only in the mobile browser: a Fiori app that doesn't work in the Fiori Client!). Although Open Text mentions that the app can be extended by configuration in the backend, that very much limits the ability to freely customize the UI code.

 

To overcome these issues and limitations we had to do the following:

- Fix and modify the Open Text Fiori app for non-PO invoice approvals

- Redesign the code to run in the Fiori Client app

- Develop a completely new app for PO invoice approvals

- Invoice coding and verification (still in progress)

 

Here are the screenshots

[Screenshot: Non-PO Invoice Approval app]

Screenshot for PO Invoice Approval:

[Screenshot: PO Invoice Approval app]

 

The roadmap given by Open Text shows that blocked invoices will be supported by April 2016, along with 7.5 SP4.

In the meantime, while Open Text fixes their app, let me know if you need any help deploying the Invoice Approval Fiori app in your organization.

Future of Web Development, when to use Fiori and what does Agility mean?


Recently I had a discussion with a colleague about developing web applications. We ended up with two main questions.

  1. Which tools / frameworks shall I use for which applications?
  2. Which project methodology should I use?

 

Tools / Frameworks

 

Application Types

He:

You have to strictly differentiate between intranet/extranet applications on the one hand and internet applications on the other hand.

I:

You have to differentiate between Business or Business-like applications on the one hand and Fun applications on the other hand.

 

I think I don't have to describe the term business application; have a look at the Fiori Apps Library for that. Business-like applications are applications that a user uses to organize his life, be it managing personal insurance contracts, selling personal gadgets on an auction platform, maintaining a personal to-do list, or managing a shopping list.

 

A fun application is an application that users use just for fun: games of any kind, sharing photos, communicating with friends about the next party, and so on.

 

Frameworks for different Application Types

He:

In his opinion, internet applications (applications made for a private and/or (semi-)business user group and accessible via the internet) must not be developed with the new SAP UI technologies (Cloud, Fiori), because the results are not fancy and sexy.

He wants to develop internet applications with Twitter Bootstrap and design the UX (i.e. the UI layout, design, and behaviour) for every application individually.

Extranet/intranet applications can, by his argument, be developed with SAPUI5/OpenUI5, especially if you are living in an SAP business environment.

 

I:

I think that SAPUI5/OpenUI5 is suitable for all business-like applications, but of course Twitter Bootstrap with AngularJS is equally suitable. I agree and have to admit that UI5 with its current theming is not really sexy, but when I look at the Fiori 2.0 previews I'm convinced that SAP will close the small remaining gap to the internet.

 

Design Guidelines

For me, more important than the question of which technical framework to use is this question: do I use an existing design guideline, or do I reinvent one for each application?

In my opinion, we cannot afford to create a new design guideline for each application in today's fast-changing world. Because we have to be able to develop and adjust applications in a very short time, we should reuse the work others have already done and shared with us on the internet. In this respect I know two guidelines: Material Design by Google and the Fiori Design Guidelines by SAP. If you look at both, and at the applications written using them, you will see that they are not that far from each other.
Examples for Material Design are:

  • Google Inbox
  • G+
  • Google Account Management

Examples for Fiori Design Guideline are

As already mentioned, the two guidelines are not that far from each other, and I believe that they will converge even more in the future, because the principles and objectives are the same: develop web applications that are easy to use for everyone, that are reduced to the max, and that have a high recognition value.

The days in which internet applications had to be spectacular (packing a lot of information onto one page, showing complex graphics, or carrying a lot of advertising) are gone. Users want to complete their tasks quickly so they have time for other things in their non-virtual world. The internet has changed from a playground into a place where we work, collaborate, and communicate.

Our web applications have to meet these requirements.

 

Material Design vs. Fiori Design Guideline

If you agree with me that a guideline is more important than the tools and frameworks, we have to ask: which guideline is the best?

I think there is no general answer to this question. If you choose Material Design you are closer to Google, Twitter Bootstrap, and AngularJS as technical implementation frameworks. If you choose the Fiori Design Guidelines you are tied to SAPUI5/OpenUI5 and Fiori.

Whatever guideline you decide to use, be aware that this is an important decision for the future of your company or business. Changing from one guideline to another is not done in days or weeks. Of course you can write an application with one or the other toolset within that time, even without prior experience, but to get the most out of them and produce really good applications (with a sustainable architecture, maintainable, easy to expand, and future-ready) you have to dive deep into the ecosystem around the tools and frameworks. This is not achieved in days or weeks; it takes much more time.

 

After a 4-6 week look at AngularJS 1.0 (Material Design wasn't born at that time), I decided to move to SAPUI5 with Fiori and the Fiori Design Guidelines for my personal business-like applications, because I also use these frameworks in my customer projects. That decision was more than two years ago, and I feel totally fine and comfortable with the tools and the ecosystem around them. On the other side, I'm still learning new things and features every day.

Of course, your decision may lead you the Google way or another way that I didn't mention here, but don't try to sit on more than one chair at a time unless you have a big company and can build several separate departments.

 

Project Procedure Model (Methodology)

The next question in our discussion was: How do we develop a new application?

 

He:

We have to write a business blueprint first, in parallel check the demand for the application, create a basic prototype and after that we have to convince the investors to give us money for it. In other words we have to create a business plan supported by a prototype to get money for our project. Depending on the project we need several weeks or months with a few FTEs for this.

After we got the money for the project we start with an agile development process.

 

I:

We have to be convinced of our idea, write down a very basic 1-2 page fact sheet with the main features of the planned product, transfer the features into a basic roadmap with several releases, create a very basic prototype with a prototyping tool that gives just an impression of what we want to realize and then go to the investors and try to convince them to give us money.

After we got the money to start our work we should setup an appropriate project environment, dive into the details of the first milestone, create a more expressive prototype, write more details into our backlog, create sprints and come back with this to the investors to get more money for the next steps.

Once we have also cleared that hurdle, we start developing the first sprint and in parallel begin to plan the next one. But we do it step by step, and it should always be possible to change the direction of the project one way or the other. Of course, we have to discuss each change of direction with the investors.

 

The definition of Agile Development

The question that arises from the above statements is: What is agile development and when does it start?

 

I think that my colleague is not thinking about agile holistically. He wants to develop the application with an agile approach, but he does not want to change project management to agility. I admit that the latter is much more difficult than the former. People on a development team are used to fast technology changes and are therefore able and willing to adopt new methodologies. Investors are often old school: they have learned to minimize the risk of an investment before investing anything, and therefore want to see a complete business plan with which they can estimate the risks. If they don't work this way, they may lose their jobs, or at least their yearly bonus.

 

I think that if a company or project writes the word AGILE on its papers, it has to be agile in every respect, not only inside the development process and team. If projects are run in that way, all involved parties benefit.

  • The investors don't take too high a risk, because they invest only a little money at the beginning.
  • The investors can stop their investment at any time if they think the performance of the project is not satisfactory, or if in the meantime there is no more need for the product.
  • The developers can be creative instead of spending time on creating business plans.
  • Investors and developers get a result much earlier.
  • Investors and developers can easily steer the project in a new, more effective, and more appropriate direction.
  • Because there is a result much earlier, customers can influence further features with their feedback.
  • Customers develop a strong relationship with the product because of their engagement in the planning process.

 

At this point I would like to mention that SAP's new products are developed in this agile way, and when we look, for example, at the speed with which new features are introduced in HCP and Fiori, we have to admit that it really works.

SAP develops its products by leveraging the design thinking process with great prototyping tools, involves its customers by running CEI projects, and develops and evolves its products in small chunks. This helps them adjust the roadmap and features of their products in an agile way.

 

Conclusion

Whichever development approach you choose for your application, decide very carefully beforehand, and once you have decided on an approach, keep using it for more than just one project. Only if you really use a methodology, product, framework, or tool in depth and over a longer time do you get the best out of it. Don't switch from one approach to another with each new project, and don't reinvent the wheel each time. Build on the results that big companies like Google, SAP, and others have worked out in their R&D departments.

If you use an agile methodology, use it across the complete project and in all involved departments of your company.

 

Finally, I would kindly ask you to share your view on this discussion in the comments. I'm looking forward to a great exchange of ideas.

Time to Influence SAP - The Future of Your Software - ASUG Influence Council Launch


The usual legal disclaimer applies - things in the future are subject to change.

[Slide: legal disclaimer]

 

Source: SAP

 

 

SAP's Alexander Peter kicked off the ASUG Influence Council re-launch this month. He said SAP collects feedback from ASUG members. Joyce Butler is the ASUG customer point of contact.


Figure 1

 

SAP collects feedback to learn the current issues you face with the product. They are seeking feedback for the next release.

 

It is a forum to collect and prioritize feedback from several companies.

 

The council tries to have calls every 2 months to discuss topics and roadmaps and to prioritize features.

 

The council meets regularly at ASUG Annual Conference (SAPPHIRE) and SAP TechEd


Figure 2

 

Joyce said "we love your feedback" and encouraged members to network amongst themselves.


Figure 3

 

This round we want a combined council to include both Analysis Office and EPM


Figure 4: Source: SAP

 

Figure 4 shows the plans for Q2 – planned for end of May

 

The plan is to continue convergence with Live Office and support for extensions such as Sales & Operations Planning and IBP (budgeting for the public sector).

 

They are also developing the business process flow integration.

 

Comments stored in BW are planned; today you can create comments in Analysis, but they are local.

 

A cancel query feature is planned.

 

DPI support is important for Windows 10

 

Future direction – this is where SAP wants ASUG Influence Council feedback, including scheduling enhancements


Figure 5

 

Figure 5 shows the ASUG Influence Council charter: influence the next release.


Figure 6

 

You need to be an ASUG member to participate

 

A non-disclosure agreement is required

 

Next Steps

 

Take the survey: https://www.surveymonkey.com/r/analysis_IC

 

 

Joyce said she has been on ASUG influence councils for 8 years and that they are the best part of ASUG. They are a great way to get your voice heard and are great for networking.


Source: ASUG

 

ASUG has over 25 influence councils with over 1K members participating

 


Source: ASUG

 

Above is an ASUG slide about why you want to participate in an ASUG influence council - to help shape the future of your SAP software investment.

Integrating Fiori into our SAP Landscape


Lately, it has been gratifying to see how the Spanish-speaking SAP Community keeps being enriched with new contributions about SAP Fiori.

Last week we received a contribution from Joaquin Fornas in the blog SAP Fiori en Cinco Minutos (I): Visión General de Fiori, which joins the contribution of the already mentioned Juan Carlos Orta in SapUI5-Fiori, presente y futuro.

(Once again, I encourage you to vote for both articles; we should recognize the authors' effort.)

 

Today I would like to be the one contributing a little about SAP Fiori; I hope to live up to the task and not repeat topics that have already been covered.

If you agree, let's get started.

What is SAP Fiori?

SAP Fiori aims to be the new User eXperience (UX) for SAP software, applying modern design principles to make the user experience as pleasant as possible. SAP Fiori UX is SAP's new interface for all users, regardless of platform (desktop, mobile, etc.) or the medium chosen to consume it (browser, SAP NWBC, or SAP BC).



What is the SAP Fiori Launchpad?

The SAP Fiori Launchpad is the entry point to SAP Fiori. Its design is role-based, highly personalizable, and offers real-time data. It also offers a simple, intuitive look and is multi-platform and multi-device, in accordance with the SAP Fiori UX principles.


 

What types of applications does SAP Fiori offer?

SAP Fiori offers three types of applications:

  • Transactional. SAP Fiori UX transactional applications are designed for employees, managers, and so on. Each of these transactional applications requires the installation of a specific add-on in the system.

  • Analytical. Also called Smart Business Applications, these aim to analyze and evaluate strategic or operational KPIs in real time and trigger the right decisions.

  • Fact Sheet / Object Page. This type of SAP Fiori UX application lets the user navigate through the information in different layers. It provides access to global information at a contextual level, lets the user drill into the details and, if necessary, navigate to a lower level of data and access all the information related to it.

 

 

As shown in the image, transactional applications can run on any database supported by SAP, while analytical and fact sheet applications are supported only on SAP HANA.

 

How can we integrate Fiori into our SAP Landscape?

The roadmap for implementing Fiori in our SAP Landscape is always the same; what varies is the architecture supporting the implementation, according to the applications chosen.

As mentioned above, only transactional applications are supported outside SAP HANA.

 


PO Approval by BlackBerry - Using Extended Notification

Creating attachments to work items or to user decision in workflow - OO ABAP way


Hi Guys,

     I was inspired by my friends (Himanshu and Biswajit, though not from the SAP world) to write a blog, and this is my first one, so please bear with me and help me improve with your valuable suggestions and feedback.

Reason for development – Before approving a trip, the manager views the trip summary, which opens as HTML GUI from the UWL and is very slow; it takes a long time just to scroll the page. It is therefore a better approach to create an HTML page for the trip summary and attach that page to the workflow. When the manager reviews the attachment, it is a plain HTML page that loads quickly and does not interact with SAP at all, whereas the HTML GUI version interacted with SAP on every user action and was therefore very slow. (Depending on the bandwidth, it was fast if you were directly connected to the network, but if you connected remotely via Citrix the problem crept up.)

Design/Implementation aspect

The function module BAPI_TRIP_GET_FORM_HTML takes the employee number and trip number and returns the trip summary as an HTML page. A new office document is then created from this page and passed to the workflow as an attachment.

I achieved this by writing a class in a programming exit. I later found an SDN blog that covers attachments, but it requires an extra background step (please find the link); it is simple, easy to implement, and uses the classic business object method, but you have to modify the workflow template with an extra step, attaching your document to the workflow task and then importing the attached document into that particular step. I therefore thought that creating the attachment using OO ABAP would be easier, less cumbersome, and faster/more efficient. I also came across a blog about the programming exit that I believe is worth reading (link). My blog also answers a question many people have asked: how to create a SOFM object in a class.

Exercise in action

Step 1: Go to SE24 and create a class as per your company naming standard (I created ZCL_SWF_IFS_WORKITEM_EXIT). You have to implement the interface IF_SWF_IFS_WORKITEM_EXIT and its method EVENT_RAISED.

Class Creation

 

Please find the code implemented for the enhancement.

* Data declarations for the task elements
  DATA: lv_pernr          TYPE p_pernr,                                 " Personnel number
        lv_tripno         TYPE reinr,                                   " Trip number
        lv_task_id        TYPE sww_wiid,                                " Work item ID
        lv_task_container TYPE REF TO if_swf_ifs_parameter_container.   " Container

* Data declarations for the document attachment
  DATA: lv_html          TYPE STANDARD TABLE OF bapihtml,
        lv_html1         TYPE STANDARD TABLE OF solisti1,
        lv_html_xstring  TYPE xstring,
        lv_html_string   TYPE string,
        lv_user_data     TYPE soudatai1,
        lv_doc_info      TYPE sofolenti1,
        lv_object_header TYPE STANDARD TABLE OF solisti1,
        lv_soxobjcont    TYPE soxobj,
        lv_user          TYPE soudnamei1,
        lv_docid         TYPE obj_record,
        lv_exception     TYPE REF TO cx_swf_cnt_container,
        lv_document_data TYPE sodocchgi1,
        lv_folder_id_1   TYPE soodk,
        lv_folder_id_2   TYPE soobjinfi1-object_id,
        lv_objtype       TYPE swo_objtyp,
        lv_objkey        TYPE swo_typeid,
        lv_sofm          TYPE swo_objhnd,
        lv_sofm_read     TYPE REF TO sofm,
        lv_obj_record    TYPE obj_record,
        lv_objects       TYPE sibflporbt,
        lv_swotreturn    TYPE swotreturn.

* Read the work item ID
  CALL METHOD im_workitem_context->get_workitem_id
    RECEIVING
      re_workitem = lv_task_id.

* Read the work item container
  CALL METHOD im_workitem_context->get_wi_container
    RECEIVING
      re_container = lv_task_container.

* Read the container variable - employee number
  CALL METHOD lv_task_container->get
    EXPORTING
      name  = 'Empno'
    IMPORTING
      value = lv_pernr.

* Read the container variable - trip number
  CALL METHOD lv_task_container->get
    EXPORTING
      name  = 'Tripno'
    IMPORTING
      value = lv_tripno.

* Read the existing attachments to confirm that there is no duplication
  CLEAR lv_obj_record.
  CALL METHOD lv_task_container->get
    EXPORTING
      name  = '_ATTACH_OBJECTS'
    IMPORTING
      value = lv_objects.

  IF lv_objects IS INITIAL.

* Retrieve the trip info in HTML format for the employee
    CALL FUNCTION 'BAPI_TRIP_GET_FORM_HTML'
      EXPORTING
        employeenumber = lv_pernr
        tripnumber     = lv_tripno
        display_form   = 'X'
        einkopf        = 'X'
      TABLES
        tripform_html  = lv_html.

* Convert the table to the format accepted by function module SO_DOCUMENT_INSERT_API1
    lv_html1[] = lv_html[].

* Identify the folder ID
    CALL FUNCTION 'SO_FOLDER_ROOT_ID_GET'
      EXPORTING
        owner     = sy-uname
        region    = 'B'
      IMPORTING
        folder_id = lv_folder_id_1.

* Convert the field to the format accepted by function module SO_DOCUMENT_INSERT_API1
    lv_folder_id_2 = lv_folder_id_1.

* Prepare the object header
    lv_soxobjcont-objtype = 'ZBUS2089'.
    CONCATENATE lv_pernr lv_tripno INTO lv_soxobjcont-objkey.
    APPEND lv_soxobjcont TO lv_object_header.

* Prepare the document data - contains description and sensitivity
    lv_document_data-obj_name   = 'INITIAL'.
    lv_document_data-sensitivty = 'O'.
    CONCATENATE 'Display Trip Result:' lv_tripno INTO lv_document_data-obj_descr SEPARATED BY space.

* Create the HTML document in SAPoffice
    CALL FUNCTION 'SO_DOCUMENT_INSERT_API1'
      EXPORTING
        folder_id                  = lv_folder_id_2
        document_data              = lv_document_data
        document_type              = 'HTM'
      IMPORTING
        document_info              = lv_doc_info
      TABLES
        object_header              = lv_object_header
        object_content             = lv_html1
      EXCEPTIONS
        folder_not_exist           = 1
        document_type_not_exist    = 2
        operation_no_authorization = 3
        parameter_error            = 4
        x_error                    = 5
        enqueue_error              = 6
        OTHERS                     = 7.
    IF sy-subrc <> 0.
      MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
              WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF.

* Populate the object type and key for creating an instance
    lv_objtype = 'SOFM'.
    lv_objkey  = lv_doc_info-doc_id.

* Create an instance of the SOFM business object
    CALL FUNCTION 'SWO_CREATE'
      EXPORTING
        objtype           = lv_objtype
        objkey            = lv_objkey
      IMPORTING
        object            = lv_sofm
        return            = lv_swotreturn
      EXCEPTIONS
        no_remote_objects = 1
        OTHERS            = 2.
    IF sy-subrc <> 0.
      MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
              WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF.

* Prepare for attaching the object to the container
    lv_obj_record-header = 'OBJH'.
    lv_obj_record-type   = 'SWO '.
    lv_obj_record-handle = lv_sofm.

    CALL METHOD lv_task_container->set
      EXPORTING
        name  = '_ATTACH_OBJECTS'
        value = lv_obj_record.

* Commit the changes
    CALL METHOD im_workitem_context->do_commit_work.

  ENDIF.

Step 2: Attach this class to the workflow task where you want to create the attachment.

Assign class to task

Step 3: Do not forget to remove the _ADHOC_OBJECTS binding to the task; otherwise two attachments will be displayed, one for the HTML GUI and another created by your class.

Task Binding

Step 4: Execute the scenario and see the result. :-)

BI4.2 Webi Getting GeoMaps and Custom Elements for Google Maps Data Viz


Maps are a hot-button topic when it comes to Web Intelligence, and have been one of the most requested features for as long as I can remember. BI 4.2 delivers a double whammy for BI administrators looking to deliver amazing geographic visualization and analysis features.


GeoMaps


New out-of-the-box geomaps are long overdue, but they are finally here for basic map requirements. GeoMaps transform report tables into interactive maps using common geographies. The components are the same as those found in Explorer and Lumira. Here is a fantastic article on SCN that explains how to use the new 4.2 GeoMaps.


 

Custom Elements

 

New to BI 4.2 are custom elements, which allow BI administrators to enable a new wave of visualization options for Web Intelligence. Naturally, we immediately plugged in our CMaps Analytics JS API (which inherits Google Maps for Work), and within a few hours of experimenting we had a new custom element working.

 

[Screenshot: CMaps Analytics custom element in Web Intelligence 4.2]

 

It is an exciting time as SAP modernizes its best-of-breed reporting solution to take advantage of innovative "visualization as a service" offerings for BI 4.2. If you want to see the above examples in action, or how we built our custom element, feel free to send me a message here or on social media and I am happy to share online.

 

We have just scratched the surface of what's possible, allowing customers to display multiple layers of information: custom regions, drive-distance and radius bands, ESRI ArcGIS, and others to be officially announced shortly by CMaps Analytics.

Three Steps and Done


Last time I outlined our simple three-step methodology: Understand, Analyze, and Prioritize. Sounds too simple? It may or may not be, depending on your viewpoint.

 

Let’s dive a bit deeper on the topics.

 

We aimed for a methodology that has no preconditions. The more preconditions required, the higher the entry hurdle for the development teams and you may end up in workshops where the preconditions are not met.

Having made that statement, there are still some implicit preconditions.

 

The first implicit precondition is doing workshops with experienced architects and developers of the product in scope. A big portion of the success of our approach, and of threat modeling overall, is the fact that it can be a high-quality white-box approach, examining the internal structures of an application rather than just its functionality. If you have the right people in the workshops, you get great insights into the architecture, design, and coding at hand; from my viewpoint, this is the main reason for its high efficiency and effectiveness.

The second implicit precondition is having an architecture and a clear understanding of the use case at hand. The right people will have the know-how for sure, but only if you are not too early in development and trying to hit a moving target, which is at the very least time consuming. Having an architecture diagram at hand is not expected; it is a great activity to draw the diagram during the workshop. (We will follow up on the timing of a workshop in a separate blog as well.)

 

Last but not least, the third implicit precondition is some security know-how in the workshop. We have noted that when the workshop moderator has security know-how, the workshop experience is very good for all attendees. Be careful in the choice of moderator, because not every security expert is a good moderator, and vice versa!

 

We let the members of the development team explain their use case and its business background. Typically, a standard presentation used in development for stakeholder management is sufficient as a high-level starting point. Rest assured that we dive much deeper, but one step after the other.

 

If there are multiple use cases, we start with one that is security critical.

 

Phase 1, the understanding phase, is about learning what valuable assets you are protecting and where these assets are located in the architecture. This analysis consists of a mix of drawing the architecture diagram, noting down the data flow on this architecture including the individual steps, and pointing out data in motion and data at rest.

 

Interestingly, you do not need to go down to the lowest level of design in the understanding phase. It is sufficient if the external party or moderator understands the basics. Again, we dive deeper with each successive step.

 

Assuming we now have the architecture and a description of the use case, we start phase 2, analyzing the architecture element by element.

 

Per element we have a set of threats assigned generically. Keep in mind that the threats are our security requirements from a threat perspective.

A first check is if the threat is applicable or not. Though it is a yes / no decision it is not that simple, but I will discuss this in a later blog.

 

If the threat is applicable, we try to understand whether there is an unmitigated risk. As an example, think about a potential SQL injection. If the infrastructure is not taking care of that (and many do not), you have an unmitigated risk. This is nothing that can be solved easily during design, so it is clearly a task for the development phase. We evaluate the risk and note it down for further processing.

 

In this way we go through all elements and all assigned threats, and once we have completed the exercise, which typically takes a few hours, we are done with phase 2.

 

This approach sounds mechanical and checklist-based. You might even think that this is where you make or break threat modeling. If you do threat modeling in a pure checklist style, you will be doomed: the participants will disengage from the workshop and turn to more interesting things like their smartphones or email. On the other hand, the checklist is essential, as it gives you a baseline and insurance that you cover the known items in your threat knowledge base. But do not stop there if there is no obvious threat: twist and turn it to check whether a closely related attack is possible on the element at hand, and invite the team to brainstorm.

 

For brevity, I would like to refer again to a later blog where we discuss this in more depth. You need to know that the participants' attention, collaboration, and thinking are really needed in this phase.

 

Once you are done with this phase, you can conclude the workshop. One thing to be aware of is that you might spend too much time in a given day on any phase. As the discussions on security in design are so intense, everyone's brain will reach a point where it is hard to focus on the discussion (for me, that typically happens after around 3 hours of discussion). My brain starts prickling, and one can easily observe that point in time: it is the moment I, as a moderator, start rushing through the checklist. Stop there. Postpone to another time slot…

 

Depending on the tooling you use, you might need to press a single button to get a full Threat Modeling report, or possibly you must write a report manually (that is the fun part of Threat Modeling).

 

Ideally, as a rule, the moderator writes the report soon after the workshop has ended. Once the report is available, it is handed over to the development team for further processing. Ensure that the project team feels ownership of the report and is encouraged to follow up on the actions required by the threats found in it.

 

Now comes the final phase, where the team discusses the findings again. This time they decide how to handle each risk, with the options of accepting, mitigating, or delegating it. Each risk should be placed on the backlog and prioritized against functional requirements. If it is not on the backlog, a product owner might assume that there is no effort involved, and that the developers can work on it in their spare time…

 

So our methodology is simple, as it is really based on three phases or steps. At the same time, it is a complex choreography of architecture, design, and technology know-how, combined with security expertise. It takes these two for the threat modeling tango.

 

As I did last time, I would like to conclude with a question. If you are doing threat modeling what tools do you rely on?

 

In our next blog I would like to introduce our threat knowledge base / the SAP product standard security.

 

 

Author: Oliver Kling

Living Proof Boosts Revenue 300% with SAP Business ByDesign


Living Proof is serious about the science of hair care. Its innovative solutions have revolutionized the prestige hair-care market and resulted in rapid company growth. It’s also serious about running its business with leading-edge technology. That’s why it switched from spreadsheets to SAP Business ByDesign as business bloomed.

 

Founded in 2004 by world-class biotech scientists and beauty experts, Living Proof has invented and patented many molecules never before used in beauty. These breakthrough products are sold through multiple channels. High-end retailers such as Sephora, Ulta Beauty, and Nordstrom carry its products, as do prestige salons across the country. In addition, it has a loyal following on livingproof.com, where it sells direct to consumers. The company is also expanding distribution internationally into salons and retail spaces.

 

Sustaining growth

In its early days Living Proof was running the business like most small companies on the rise. Lots of orders were coming in and the business was being managed with spreadsheets. There wasn’t much time to build the sophisticated business applications it would need to keep things running smoothly in the future.

 

“We were facing a number of challenges because we were essentially a startup. We really didn’t have a lot of infrastructure. We were doing all of our work in spreadsheets, and that was okay because we only had one or two products and only one or two channels that we were selling into. But we realized very quickly that was not sustainable,” said Terry Rice, director of finance at Living Proof.

 

Living Proof needed a software solution that was strong in supply chain management and manufacturing that would allow it to scale business quickly. It also wanted a cloud-based solution because it didn’t want to have to create an IT department.

 

A cloud solution

The company looked at a number of solutions and selected the SAP Business ByDesign solution, a cloud-based business management suite. Designed for upper medium-sized businesses, SAP Business ByDesign actually had more functionality than Living Proof needed at the time, but the company knew it would grow into the solution.

 

“When we found SAP Business ByDesign we knew we had the right answer,” said Rice. “As we expanded our revenue, distribution channels, and manufacturing capacities we really didn’t have to increase headcount because we had the structure, the framework, in place.”

 

Living Proof can run its entire company on SAP Business ByDesign. Everything, including finance, human resources, supply chain, and production, is integrated. Real-time analytics and reporting are powered by the SAP HANA platform and are available on almost any mobile device. “One of the great things about SAP Business ByDesign is that the information that is available is all standardized, and you know where to go quickly to get what you need,” said Rice.

 

For example, if someone in sales wants to know what sales program spending has been in the last month or year, the information can be provided in a few clicks. If the R&D team asks how development is going on a new product line, Rice can quickly bring up the data in project accounting and show exactly where the spend is happening.

 

Revenue jumps 300%

As the business has grown, SAP Business ByDesign has helped to simplify the workload. “It’s much easier to run the business. We don’t wonder what is going to happen when we book a transaction,” said Rice. Production planning is much simpler than when it was done on spreadsheets. When forecasts are entered into the system, it figures out when a product needs to be produced and which manufacturing sites and warehouses to use.

 

And if Living Proof wants to add a new distribution channel or product line to the system, there is a standardized and consistent set of steps to go through. The structure and reliability of the system give Rice and his team more time to focus on future investments and profitability. They can analyze existing business channels more closely and be more discriminating about the new ones they may enter, to ensure maximum profit in the future.

 

Business ByDesign has been a great investment for Living Proof. Since the implementation, revenue has grown 300%, the number of different items produced has increased by over 200%, and the volume of items produced and distributed is up over 400%. And the company only had to increase its headcount by 30 people to support all that growth. “I really don’t think that we could have achieved that level of success without a system like SAP Business ByDesign,” said Rice.

 

Watch this interview with Rice to learn more:

 

 

Related content:

ULTA Beauty Gives Beauty Product Shopping a Makeover

Brooks Brothers Closes in on Omnichannel Retail

 

 

Connect with me on Twitter and LinkedIn

Social Selling: A Hit in Manila


Thriller in Manila – social selling is a knockout in the Philippines, boosting the sales pipeline sevenfold. Now SAP plans to roll it out worldwide.


Tom Becher is an account rep who works in SAP’s telephone sales organization. As part of a social selling program at the SAP call center in Manila, Philippines, Tom recently closed a big SuccessFactors deal in Indonesia.

 

Tom says using the social selling approach gave him a huge advantage compared with cold calling his prospect. So what is social selling? Put simply, social selling is when sales people use social media to interact with potential buyers.

 

“Before I engaged the general manager for human resources I had already seen his profile. So it gave me information very early and an idea of how to drive my conversation with him.”

 

The Philippines is often referred to as the call center capital of the world, due to the high concentration of companies operating call centers there. Cheap labor costs are the primary reason that many western companies have set up shop there.

 

But SAP is investing heavily in its Manila call center, turning it into a hub for social selling and sales innovation across the Asia Pacific region and the company worldwide.


The initial results have been overwhelmingly positive. Using social selling tools and a new enablement program, the Manila team generated seven times more pipeline sales than comparable teams.

 

Malin Lidén is a vice president of marketing at SAP. She heads up innovation and community programs such as social selling from Walldorf. “The Asia Pacific market is very young, very social media savvy,” she says. “There is a big affinity to sharing things online and via mobile. So for many of those employees, these kinds of tools and connections are second nature. This population is very open to new ways of selling.”

 

Tom agrees. “Everybody’s social now. That’s where everyone does business.”

 

Malin says the heart of social selling is technology. SAP subscribes to LinkedIn’s Sales Navigator tool, which allows sales professionals to build lists of potential sales connections and sign up for alerts that provide updates on those individuals. The result? A sales person can listen to leads and research them, pursuing ‘warm’ rather than ‘cold’ leads.

 

As Tom explains, “This is a tool where you can gather all your prospects into one view. You can get an update every day on what they do with status updates and posts. So it gives you an idea where they are right now.”

 

But as Malin emphasizes, social selling goes beyond tech tools. It’s a whole new way of engaging customers that “pulls” them in versus “pushing” SAP out. For example, a sales person can share a blog or article that is meaningful to potential customers. This helps establish a conversation and relationship before any sales discussion. For the customer, it’s the difference between being ‘sold to’ and getting guidance in the buying process from someone they know and trust.

 

So far, SAP’s social selling pilot programs have yielded approximately 24 million euros in pipeline worldwide. SAP is rolling out social selling tools and methodology to its entire global sales team, starting at the first sales kick-off meeting, FKOM APJ, held in Singapore, January 9 – 12.

SAP Workflow Improvements (courtesy of SAP Customer Connection) - Part 3


Continuing on about the SAP Customer Connection for Workflow... In case you are just tuning in, I am running through the list of SAP Workflow improvements that have been delivered as a result of this Customer Connection.


Okay, so I really didn't intend for this to become a multi-blog series, so I hope you don't mind that I broke this up into manageable chunks... Honestly, there have been so many improvements delivered, and I wanted to be able to do each justice (without losing the blogs to the fickle gods of bloggers)...

Part 1 of the series is here, part 2 is here


ALL IMAGES ARE COURTESY OF SAP and property of SAP unless otherwise noted.


Note 2191614 - The specification of the Workflow Administrator allows you to specify a User, Role, Org Unit, Center, Position or Work Center.  In some cases, this is not really sufficient for the variety of customers out there (OK, I am one of them).  Now, if you implement this note, you will be allowed to specify an Agent Rule.  Sadly, no binding to the rule can be defined, *but* the workflow container is passed at run time via the table parameter AC_CONTAINER.  So you now have the flexibility to determine programmatically who the appropriate workflow administrator is (see the sketch below).

For large SAP installations, I can see a huge benefit to implementing this.
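
To illustrate, here is a hedged sketch of such an agent rule. The function module interface (AC_CONTAINER, ACTOR_TAB, exception NOBODY_FOUND) follows the standard pattern for workflow rule function modules, but the container element 'Plant' and the lookup table ZWF_ADMIN_MAP are invented for this example:

FUNCTION z_wf_admin_by_rule.
*"  TABLES     ac_container STRUCTURE swcont
*"             actor_tab    STRUCTURE swhactor
*"  EXCEPTIONS nobody_found
  INCLUDE <cntn01>.                 " workflow container access macros

  DATA lv_plant TYPE werks_d.
  DATA lv_admin TYPE xubname.

  " 'Plant' is a hypothetical workflow container element.
  swc_get_element ac_container 'Plant' lv_plant.

  " ZWF_ADMIN_MAP is an invented mapping of plants to administrators.
  SELECT SINGLE admin_user FROM zwf_admin_map
    INTO lv_admin
    WHERE plant = lv_plant.

  IF sy-subrc <> 0 OR lv_admin IS INITIAL.
    RAISE nobody_found.
  ENDIF.

  actor_tab-otype = 'US'.           " agent type: user
  actor_tab-objid = lv_admin.
  APPEND actor_tab.
ENDFUNCTION.

Since AC_CONTAINER carries the full workflow container at run time, any element your workflow holds can drive the determination - exactly the flexibility the note promises.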


Note 2188631 This is one I really wanted... it doesn't have such an impact on workflows running in core ECC, but I do a significant portion of my development in SRM, and it always frustrated me when I could not forward an item to myself (in order to execute through the UWL, for example).  That restriction is now addressed.  

 

 

Note 2202595 - This note will allow you to use a new option when sending workflow notifications through SWNADMIN or SWNCONFIG (Extended Notifications).  Previously, you were restricted to Internet Mail, SMS or Pager (I bet Pager gets a lot of use these days ) or an internal SAP user - which is fine, as long as you only want to send to a single user.  With the implementation of this note, your notifications can go out to all SAP users identified by a specific 'User Group' (as seen on the SU01 record, not ASUG or VNSG or DSAG).

 

Note 2201971 - I know for a fact that doing a where-used list for workflow development objects can be frustrating.  Say you are looking for the workflows that use a particular task... well, you get ALL the workflows - every single version of every workflow that calls that task.  SAP really listened when they took on this Customer Connection request - this is an item that causes the back-office workflow developer some pain - but of course we hide it very well.  Anyway, with this note implemented, you'll now be able to see the version number and whether an ACTIVE version of a workflow uses that task.

 

 

 

Note 2254783 - While we are on the subject of versions... Workflow developers could always see the versions of BOR objects - but only if they used SE38, and even then it would only show source code changes.  When you implement this note, you'll be able to see differences in things like attributes!  W00t!

 

I think I have covered all the requests that have already been delivered by this Customer Connection activity.  But I urge you to visit the Influence site to see what other influence projects are out there, as well as to keep tabs on this activity.  Hopefully you will also check out some of these notes, get your friendly Basis team to implement them - and then provide feedback!


Meanwhile, thanks to Ronen Weisz  who worked hard to get this influence activity going, to Daniel-Alexander Heller who led the activity, and of course, the people at SAP who tirelessly support the developers and administrators of SAP Workflow by making our day-to-day lives a little easier.  (This means you Alan Rickayzen and you Ralf Goetzinger)


Why Context Is Worth 80 IQ Points


“Context is worth 80 IQ points.” (Not my words – they belong to the renowned computer scientist and serial inventor Alan Kay.) His famous quote is particularly relevant for marketers today. As digital channels have evolved, they’ve opened up new possibilities for reaching customers and prospects.

As marketers, we already know who our customers and prospects are, where they live, and what they like, and now we can garner more transient contextual information.


Contextual marketing is the next step for marketers as we move from mass marketing to segmentation, to personalisation, and now to contextualisation. We’ve all been on the receiving end of it. Many mainstream news websites run contextual advertising to match ads to the articles being viewed. Social media websites and blogs use keywords in members’ posts and comments to trigger contextual ads. Last summer, an iced tea brand used Facebook to advertise its drinks in areas of the UK experiencing particularly warm weather.


It seems our awareness of contextual marketing isn’t an issue. A recent survey by The Economist Intelligence Unit, sponsored by SAP, found that seventy-three per cent of west European marketers say they routinely collect information about customer behaviour. So if we’ve been on the receiving end of contextual marketing and most of us are routinely collecting it, why aren’t we seeing more of it?

It turns out most companies aren’t doing anything meaningful with it. The same survey found that only thirty-seven per cent use that information for marketing purposes. So while most marketers collect contextual information, much of the data is just sitting there. Why?


One of the main reasons is that inside many organisations customer data is scattered across multiple disparate systems. This makes it hard to find, and almost impossible to pull together in a timely manner or to respond to in real time with relevant marketing messages.

The company knows the data has been collected; it just can’t aggregate it or easily turn it into one-to-one personalised and contextualised messages back to the customer.


Yet with every search, browse or email opening, your customers are telling you exactly what they want. They’re making your job incredibly easy by giving you signals of their intentions and interests so you can connect with them directly at the right time, through their nominated channel of choice and with the right set of messages. Why would you want to miss such a wide open window of opportunity?


One of the other reasons marketers aren’t fully leveraging contextual marketing is because their channels are too narrow. Most organisations are still relying on first generation digital channels to collect contextual information (think company websites and email).

Newer channels, such as social media and mobile apps are often overlooked or used less frequently by marketers for contextual gathering. But these newer channels have the potential to offer more fine grained contextual insights than conventional channels. In other words, mobile apps can reveal precise locations or current activities of prospects, while social media channels give unique insights into an individual’s mood or wider social network.


If you’re not able to easily gather, harness and monetise the contextual data that your prospects and customers are offering, you have a very big hole in your marketing strategy. I’d strongly urge you to take a fresh look - not just at what type of data your organisation is collecting and from which channels - but at how it’s being collected and centralised, and whether or not you’re actually able to act on it.


Start now by reading The Economist Intelligence Unit summary report, Beyond personalisation: a European perspective on contextual marketing.

[Webinar] Disruptions in Supply Chain – Are you Ready for 2016?


In the traditional supply chain landscape, new technologies and solutions are disrupting the customer’s buying journey. Customers are becoming more vocal and agile in their purchasing decisions, and COOs need to understand how these disruptions are affecting their supply chain processes. Executives need a more holistic overview of an extended supply chain in which they can be more agile, be data driven, and satisfy customers in the Digital Economy.

 

Learn:

  • Why the current supply chain is poorly equipped to manage high levels of customization
  • Why IDC believes “the future of the supply chain is one of an outwardly networked and collaborative organization that will sit at the center of three lobes – a demand network (‘demand aware’), a supply network (‘supply visible’), and a product network (‘innovation connected’)”
  • How cloud-based business-to-business platforms, and other technologies, are key enablers to reimagining operations and the supply chain

 

Join us on February 18, 2016, from 11:00 AM to 12:00 PM EST to learn more!

 

Date: Thursday, February 18, 2016

Time: 11:00 AM EST

 

Featured Speakers:

Hans Thalbauer
Senior Vice President,
Line-of-Business Solutions for Extended Supply Chain
SAP

Simon Ellis
Program Vice President
IDC

Register here.

Productivity Power Play video series – Introduction to Scripting

Smart Finance Through Advanced Analytics


"Business startups have been declining steadily in the U.S over the past 30 years. But the startup rate crossed a critical threshold in 2008, when the birth rate of new businesses dropped below the death rate for the first time since the metrics were first recorded." - Gallop based on US Census Bureau, Dynamic Statistics. There are several reasons attributed to this. I recently gave webinar on "Smart Finance Through Advanced Analytics" along with my colleague Dr Ying Wu, where we have discussed various reasons why business have failed, and how Advanced Analytics can help  handle Company Finances Smartly. There are several aspect of how cash, flows in and out of the organization, and we have picked up one problem of Time to Invoice payments as our usecase. In this webinar we also briefly discuss concepts of Ensemble Modelling and Hierarchical (Segmented) Modelling. The webinar also discusses one of the potential ways by which one can predict the time that the customer will take to pay the outstanding invoices . The recording of the webinar can be found at SAP Predictive Analytics: Enabling Smart Finance Through Advanced Analytics - YouTube

Unit test automation: A step towards Agile delivery and DevOps in SAP


With the constant demand to develop and change SAP applications faster, the need for continuous delivery has never been greater.


Agile, continuous delivery and DevOps have been commonplace in many IT organizations for some time now and the need to apply these concepts to SAP is starting to catch on.


But what effect will that have on the way development is done in SAP?


Continuous Integration


One of the core concepts in DevOps is that of Continuous Integration (CI).  When code is changed, unit tests are automatically executed to verify its quality.  The goal is to identify problems and bugs early in the development lifecycle so they don’t cause issues later in QA or worse, in production.


A good definition of CI is here.


Planning and executing unit tests is time-consuming and tedious, so a solution to automate it would seem like a no-brainer.  So why don’t we see this being done very often in SAP?


SAP development is different, but unit testing is still relevant


Outside of SAP development, CI ensures that changes are built and verified via automated unit tests when a developer checks their local code into the central repository.  If something has broken the build, developers are informed quickly so they can implement the necessary fixes.


SAP, and particularly ABAP, is different: as we know, all development is carried out in a shared system. Any changes are then immediately visible to, and affect, everyone.


Unit testing capability exists as standard within SAP, so how can we start using it effectively to implement CI processes?


Unit testing and test-driven development (TDD)


Firstly let’s understand what unit testing is about.  Essentially we’re aiming to verify that when code is created or changed it behaves as intended.  We’re also trying to ensure that anything using that code will work properly as long as the unit test is passed.


When writing unit tests there are some key principles that need to be considered:

  • Independence - to make sure that one failing unit test doesn’t affect others, you must make sure that each can be run independently
  • Consistency - a test should deliver the same results when it is re-run (all other things being equal)
  • A single unit of work - it’s important to test small parts (units) of code that have distinct functions
  • Good code coverage - the test needs to ensure that all the code is executed, including exceptions
  • Run fast - if the test fails the developer needs to know quickly so that it can be addressed


TDD goes hand-in-hand with unit testing.  The concept is that the automated unit test is written before any code - initially the test will fail as the code is not written yet.  Then the minimum amount of code is written in order to pass the test, which can then be refactored as required.


A change in approach to SAP development


In order for unit testing to be effective we need to ensure that the code “units” are sized correctly so they can be effectively tested.  It therefore follows that how code is designed and written needs to change so that applications are built based on these units.


Traditional programming, where hundreds or thousands of lines of code are written to perform an application function, is very hard to test because the code does many things.


This is where object-oriented programming comes in to break down applications into smaller reusable objects and classes.


It helps to follow design principles like SOLID and in particular the concepts of single responsibility and dependency inversion.

  • Having single responsibility in your code (i.e. it only does one thing) will improve your ability to inject mock data into the precise and isolated locations where it is needed, and thus enhance the granularity of the objects to be unit tested.
  • Dependency-invertible code will allow you to easily replace the processing of real data by injecting a mock test-data generator object, so that you get predictable test results (see the sketch below).


A definition of SOLID can be found here.
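
To make these two principles concrete in ABAP terms, here is a minimal sketch (all names are invented for illustration): the calculator receives its rate provider through the constructor, so production code can inject the real provider while a unit test injects a mock.

INTERFACE lif_rate_provider.
  TYPES ty_rate TYPE p LENGTH 9 DECIMALS 5.
  METHODS get_rate
    IMPORTING iv_currency    TYPE waers
    RETURNING VALUE(rv_rate) TYPE ty_rate.
ENDINTERFACE.

CLASS lcl_price_calculator DEFINITION.
  PUBLIC SECTION.
    " The dependency is injected, not created inside the class.
    METHODS constructor
      IMPORTING io_rates TYPE REF TO lif_rate_provider.
    METHODS convert
      IMPORTING iv_amount        TYPE p LENGTH 15 DECIMALS 2
                iv_currency      TYPE waers
      RETURNING VALUE(rv_amount) TYPE p LENGTH 15 DECIMALS 2.
  PRIVATE SECTION.
    DATA mo_rates TYPE REF TO lif_rate_provider.
ENDCLASS.

CLASS lcl_price_calculator IMPLEMENTATION.
  METHOD constructor.
    mo_rates = io_rates.
  ENDMETHOD.

  METHOD convert.
    " Single responsibility: this class only converts amounts;
    " where the rate comes from is the provider's concern.
    rv_amount = iv_amount * mo_rates->get_rate( iv_currency ).
  ENDMETHOD.
ENDCLASS.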


It can be a challenging shift in mindset to adopt this technique, but as long as classes are designed and built in the correct way, the process should be reasonably straightforward.


Tools like ABAP Unit can then be used to develop and build unit tests.  Ideally, these tests should be executed automatically when code is changed so they can all be verified before being moved anywhere.  Some useful examples on the usage of ABAP Unit can be found here or in SCN here.
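
Building on the hypothetical classes sketched above, a minimal ABAP Unit test could look like the following. The test double implements the same interface, so the test never touches real rate data and always delivers the same result:

" Test double: a rate provider with deterministic behaviour.
CLASS ltd_fixed_rates DEFINITION FOR TESTING.
  PUBLIC SECTION.
    INTERFACES lif_rate_provider.
ENDCLASS.

CLASS ltd_fixed_rates IMPLEMENTATION.
  METHOD lif_rate_provider~get_rate.
    rv_rate = '2.00000'.
  ENDMETHOD.
ENDCLASS.

CLASS ltc_price_calculator DEFINITION FOR TESTING
  RISK LEVEL HARMLESS DURATION SHORT.
  PRIVATE SECTION.
    METHODS convert_applies_rate FOR TESTING.
ENDCLASS.

CLASS ltc_price_calculator IMPLEMENTATION.
  METHOD convert_applies_rate.
    DATA lv_expected TYPE p LENGTH 15 DECIMALS 2 VALUE '200.00'.

    " Inject the mock instead of the real rate provider.
    DATA(lo_cut) = NEW lcl_price_calculator(
                     io_rates = NEW ltd_fixed_rates( ) ).

    cl_abap_unit_assert=>assert_equals(
      act = lo_cut->convert( iv_amount   = CONV #( '100.00' )
                             iv_currency = 'EUR' )
      exp = lv_expected
      msg = 'Amount must be converted with the injected rate' ).
  ENDMETHOD.
ENDCLASS.

Because the test is independent, consistent and fast, it can run automatically on every change - exactly what a continuous integration setup needs.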


Conclusions


With the pressure to deliver software both more quickly and safely, the adoption of unit test automation and continuous integration is becoming more relevant.


The end result is improved software quality along with substantial cost savings.  Typically, IT will spend 70-80% of their budget on the development and maintenance of existing systems and on keeping the lights on.  This dwarfs the initial development costs so the extra effort up-front to implement automated unit testing will still deliver significant benefits.


And it means that scarce IT resources can be allocated to work that can deliver more value.


For more information on this subject please download this e-Book here.

