Experiences With the Use of Extended or Dynamic Safety Stock

What This Article Covers

  • What is dynamic safety stock? 
  • How often is it used in SAP accounts? 
  • What are some of the surprising results from the use of SAP’s dynamic safety stock? 


Extended safety stock is SAP's name for dynamic safety stock. This functionality allows the safety stock to vary depending upon supply and demand variability. These values are entered into the Lot Size tab of the Product Location Master, as can be seen in the screenshot below.

I have often wondered why no client that I have worked with has ever configured this functionality. I had often attributed it to the difficulty of maintaining this master data. It should be understood that this is the absolutely standard dynamic safety stock method that is taught in textbooks; it is in no way SAP intellectual property. However, while interviewing for a contract position, I discussed this functionality with someone who had tested it. Interestingly, she stated that the safety stock it came up with was high (this is of course relative, as it actually calculates the correct safety stock). Another comment was that it was not very adjustable, and that adjustability was a requirement for them. However, I question whether these are the real reasons the functionality is not used. Again, the safety stock value calculated by the dynamic method is correct (as long as it is in fact calculating correctly, which she said it did), and I do not agree that safety stock should be changed frequently. In fact, planners fall into the habit of manually adjusting the safety stock when it should be adjusted automatically. Therefore I understand that the dynamic safety stock did not meet this client's business requirement, but I suppose I question the validity of the business requirement. More specifically, I question whether the requirement will lead to good planning outcomes.
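For reference, the standard textbook method this functionality is based on combines demand variability and lead time variability. Below is a minimal sketch of that generic formula (the textbook calculation, not SAP's internal implementation; the service-level factor and numbers are illustrative):

```python
import math

# A minimal sketch of the standard textbook dynamic safety stock formula
# (the generic method the text refers to, not SAP's internal code):
#   SS = z * sqrt(LT * sigma_d^2 + d_bar^2 * sigma_LT^2)
def dynamic_safety_stock(z, avg_demand, demand_std, avg_lead_time, lead_time_std):
    """z is the service-level factor (e.g., ~1.65 for a 95% cycle service
    level); demand is per period; lead time is in the same periods."""
    return z * math.sqrt(avg_lead_time * demand_std ** 2
                         + avg_demand ** 2 * lead_time_std ** 2)

# Illustrative numbers: demand 100/week (std 30), lead time 2 weeks (std 0.5).
print(round(dynamic_safety_stock(1.65, 100, 30, 2, 0.5), 1))  # 108.2
```

As demand or lead time variability changes, the recommended safety stock moves with it, which is exactly why the value should be recalculated by the system rather than adjusted by hand.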

Some Mystery Author Inside of Accenture Writes a Great Technical Paper on DP and SNP Setup


I have poked holes in Accenture white papers in the past that were very generic and sort of glossy sales brochures. This is a problem because there is a lot of this type of commercial material on the web that does little more than serve as a calling card for the company. Such papers leave out a lot of the complexities of the topic and simply propose that you contact the firm. I wrote on one example of this previously.


However, this paper on the complex steps of how to set up DP and SNP should really be lauded.

This is how technical documentation should be written: clear, step-by-step, jargon-free directions on how to do things. This documentation fairly well dominates the SAP Help documentation on these topics, which is close to indecipherable. However, the document does not list who wrote it, which I always think is kind of wrong. The company's name can be on the work, but the document was written by a person, not a computer, and that person deserves attribution. This is an example of how institutions minimize the individual.


So if you are interested in basic setup of the DP and SNP environment, you will want to read this paper.

Days Supply Macro Walk Through


Macros are stored development objects that populate key figures in the planning book. They are very important on DP and SNP projects. When standard macros are used, the Macro Builder serves simply as a development repository. When a pre-existing macro is adjusted (it is often easier to adjust a pre-existing macro than to start from scratch), or when a new macro is created from scratch, this is a custom development activity. Because a macro is not conventional code, some people who work in APO may not consider it classical development; however, it is. Macros can be designed to do a lot of things, but when making new macros, they need to be approached within the development context, as they have maintenance implications similar to other forms of development.

The Macro Walk Through

The first step is to go to the Macro Builder, and then move the macro from the Macro Depot to the Macro Work area.

I have chosen this macro because it is simple and commonly used. Many areas of functionality are actually just macros. The macro calculates a value, which is then stored in a key figure in the planning book. Because this is seamless, it appears to users that the key figure is providing a value that is part of some deeper APO functionality, when in fact it is just a macro. This is both an advantage and a disadvantage, as the only real way to know for sure what is controlling a key figure in the planning book is to check the Macro Builder.

The macro shows as a series of steps. Everything that is created in a macro could be replicated in Excel. In fact, Excel is much more powerful than the SAP APO Macro Builder. While Excel is easy to use, the Macro Builder takes a while to get used to, and its development productivity is low. That means it takes a lot of experience to get good at it, and even experienced macro builders produce less output than they would in comparable environments given the same time. Also, SAP macros do not have absolute reference capability. This means that when a macro is moved up or down in the key figure sequence, the key figure that it points to changes, and all macro-calculated key figures below the adjustment must have their macros changed. This is very different from Excel, where formulas can be moved in an unlimited fashion without causing the references to break.
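To make the reference problem concrete, here is a toy sketch (plain Python, not macro code) of the difference between position-based references, which break when rows shift, and name-based references, which survive:

```python
# A toy illustration (plain Python, not macro code) of why position-based
# references break when rows move, while name-based references survive.
rows = ["Total Demand", "Stock on Hand", "Days' Supply"]
values = {"Total Demand": 20.0, "Stock on Hand": 68.7}

# Relative/positional reference: "the row two above Days' Supply".
print(rows[rows.index("Days' Supply") - 2])   # Total Demand -- correct

# A planner inserts a new key figure above Days' Supply...
rows.insert(1, "Safety Stock")

# ...and the positional reference now silently points at the wrong row.
print(rows[rows.index("Days' Supply") - 2])   # Safety Stock -- broken!

# A name-based ("absolute") reference is unaffected by the move.
print(values["Total Demand"])                 # 20.0 -- still correct
```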

Also, it should be recognized that the term “macro” is used differently here than in Excel. In Excel a macro refers to a recording that allows Excel to replicate a series of steps. The Macro Builder should probably have been named differently, especially considering that Microsoft had already essentially claimed the term. Microsoft's use of the term is not correct either, as its official computer science definition is “Computing: a single instruction that expands automatically into a set of instructions to perform a particular task.” – Apple Dictionary. If SAP and Microsoft had used existing terminology correctly, a macro in Excel would have been called an “instruction recording,” and SAP would have called its Macro Builder something like a “key figure custom calculator.” Terminology that is accurate in this way has the benefit of not having to be internally translated. However, when you are a big and influential software company you don't have to follow the preexisting vocabulary of the English language; you can create your own make-believe set of terms, and while confusing to other people, they will eventually come into use. Making up one's own terminology from scratch saves large software companies countless hours of having to use a dictionary or thesaurus.

Macro Header

The macro header screen is shown below:

There are several important types of macros that can be created and used:

  • Default macro: Carried out whenever the forecasting screen is regenerated, such as when the planning book is opened or closed, or when the planner chooses ENTER.
  • Start macro: Carried out whenever the planning book is opened.
  • Level change macro: Carried out whenever the planner drills up or down in interactive forecasting.
  • Exit macro: Carried out whenever the planning book is saved and closed. – SAP Help

The most frequently used type is the Default macro. The macro type is set by entering a value into one of the four boxes in the macro header screenshot above.
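The four types amount to an event model: the planning book fires an event, and the macro registered for that event type runs. A toy sketch of that dispatch (illustrative only; the event names and actions here are hypothetical, not SAP's):

```python
# A toy sketch of the event model behind the four macro types listed
# above: the planning book fires an event, and the macro registered for
# that event type runs. Event names and actions are illustrative.
macro_registry = {
    "open_book":    lambda: print("start macro: initialize the view"),
    "refresh":      lambda: print("default macro: recalculate key figures"),
    "level_change": lambda: print("level change macro: re-aggregate"),
    "save_close":   lambda: print("exit macro: run final checks"),
}

# A planner session: open the book, hit ENTER twice, drill down, save.
for event in ("open_book", "refresh", "refresh", "level_change", "save_close"):
    macro_registry[event]()
```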

Step 1

Step 1 contains a very basic set of options. It provides the name and the timeline over which the macro should be calculated. It also defines what to do when the value cannot be calculated (either replace it with 0 or provide no value). This particular macro has only one step, but more complex macros can have many steps, in which case this screen and object serve more as a step header.


Every macro applies to a row. This screen declares the row's name or key figure as it will appear in the planning book (the planning book is shown above the macro). The column indicator controls which “columns,” or time periods, the macro applies to. The other settings on this screen relate to processing administration.

Cover Calculation and Open Parenthesis

This invokes the “cover calculation” and opens the parenthesis in which the calculated values will reside.

The Macro Builder offers a wide variety of mathematical functions. These can be seen by opening the drop-down on this box.

Some are self-explanatory, but many are not. Descriptions of each function can be found online, which helps in choosing a function that meets your needs.

They are categorized in the following way:

  • Mathematical Operators and Functions
  • Statistical Functions
  • Logical (Boolean) Functions
  • Functions for InfoObjects and Planning Books
  • Date Functions
  • General and Planning Table Functions
  • SNP Functions

As you can see, SNP has its own category.

The COVER_CALC function is described below:

COVER_CALC(row containing the stock on hand; area containing the demand from the next period to the end of the time horizon (I call this area 1 in this article); area containing the number of workdays (I call this area 2 in this article)) returns the days' supply of a product by considering the current stock on hand, the total demand of subsequent time periods, and the number of workdays in each time period.

For example:

Time span      w11    w12    w13    m04    m05
Total demand   0      20.0   0      25.0   27.5
Stock level    68.7   48.7   48.7   23.7   0
Days' supply   70     63     57     26     0
Workdays       7      7      7      30     31

– SAP Help
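Translating the example above into a short sketch helps show what the function is doing. This is an interpretation of the documented behavior, not SAP's actual code: stock covers each subsequent period's demand in turn, and a period it can only partly cover contributes a pro-rata share of its workdays. As the output shows, it matches the SAP Help table except for one value, where SAP evidently rounds slightly differently:

```python
import math

# An interpretation of the days'-supply ("cover") calculation that
# COVER_CALC performs, based on the SAP Help example above -- not SAP's
# actual code.
def days_supply(stock, future_demand, future_workdays):
    days = 0.0
    for demand, workdays in zip(future_demand, future_workdays):
        if stock >= demand:
            stock -= demand                    # period fully covered
            days += workdays
        else:
            days += workdays * stock / demand  # partial (pro-rata) coverage
            break
    return math.floor(days)

demand   = [0, 20.0, 0, 25.0, 27.5]
workdays = [7, 7, 7, 30, 31]
stock    = [68.7, 48.7, 48.7, 23.7, 0]
for i in range(len(stock)):
    # Each period looks only at the demand and workdays of later periods.
    print(days_supply(stock[i], demand[i + 1:], workdays[i + 1:]))
# Prints 70, 63, 56, 26, 0 -- matching the SAP Help table above except
# for w13 (57), where SAP evidently rounds slightly differently.
```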

Row Stock on Hand

This declares the row to be calculated by the macro.


This macro simply compares the demand in the subsequent periods against the stock on hand to arrive at the days' supply.

Area 1

This declares the “from row” and the “to row,” which are both the Total Demand key figure (it is actually helpful to simply consider the term row synonymous with the term key figure).


Area 2

This sets the second area as workdays.


Close Parenthesis

The final part of the macro closes the parentheses, ending this step of the macro.



The Macro Builder is essentially a clumsy Excel. Like Excel, it has many functions that can be employed. The Days Supply macro is one of the most commonly employed on SNP projects. It allows planners to understand whether the stocking level is within appropriate boundaries, and it is a good way to diagnose the system output. The key figure it populates is entirely the result of the macro described in this post.

In future articles I will walk through more complex macros that have things like if-then statements.




Running the Optimizer for a Single Location Versus the Sub-Problem


One very interesting question is whether the SNP optimizer should be run interactively. Companies that migrate to cost optimization most frequently do so from MRP or from supply planning heuristics. Both of these methods can be run interactively without negatively affecting the rest of the product locations. However, cost optimization works differently. The CPLEX optimizer within SNP divides the overall supply network problem into a series of sub-problems, in a process called decomposition. Decomposition is explained at the link below:


SNP allows the interactive running of the optimizer from within the product location combination. This can be activated by selecting the Optimizer button within the planning book as can be seen below.

SAP documentation is lacking in this area; however, there seems to be a flaw in the interactive design. An optimizer should never be run for a single product location combination but should follow the exact decomposition that is set up in the SNP Optimization Profile. However, the message log below shows that only one location is brought into the optimizer's memory for processing. This means that the sub-problem for this product is not being respected.

This is not a production facility, so there are no production costs. In fact, the costs are quite limited, with the vast majority being storage costs; the interactive run is simply a recalculation of the storage costs that were calculated during the last optimization planning run.

The optimizer cannot in effect do anything, because no transportation lanes are included in the optimization run, which means the location is not interacting with the other locations. However, for a supply plan to make any sense, a location must interact with other locations. Therefore, running the optimizer in this way is illogical, and it is difficult to see why it is an option. What SNP should do is perform the optimization for the location sub-network that is part of the decomposition defined in the SNP Optimization Profile. That profile is assigned to the interactive optimization run above, so why does SNP not go off what the Optimizer Profile is telling it? Instead, it forces the planner to add all of the locations that are part of the sub-network in order to perform the optimization correctly. But what if they miss one? The optimizer could produce a poor result because of just one missing location. This is not a user-friendly form of interactive optimization. Simple mistakes like this, which would have been caught by a person experienced in how interactive modeling is performed in real life, were entirely missed by SAP development and have never been corrected.
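The sub-network in question is simply the set of locations connected to the product location through transportation lanes. A minimal sketch of deriving it (the lane data and location names are hypothetical; this illustrates the concept, not SAP's internal decomposition logic):

```python
from collections import defaultdict, deque

# Hypothetical transportation lanes: (source location, destination location).
lanes = [("PLANT_1", "DC_EAST"), ("DC_EAST", "DC_NORTH"),
         ("PLANT_2", "DC_WEST")]

graph = defaultdict(set)
for src, dst in lanes:            # treat lanes as undirected connections
    graph[src].add(dst)
    graph[dst].add(src)

def sub_network(location):
    """All locations reachable from `location` via lanes -- the set the
    optimizer should actually be run against."""
    seen, queue = {location}, deque([location])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node] - seen:
            seen.add(neighbor)
            queue.append(neighbor)
    return seen

# Running interactively for DC_NORTH alone would ignore two of the three
# locations it trades with:
print(sorted(sub_network("DC_NORTH")))  # ['DC_EAST', 'DC_NORTH', 'PLANT_1']
```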


Best-of-breed supply planning applications are simply far ahead of SAP SNP, which is why I question the sanity of going forward with an exclusive SNP solution for simulation, and it's not only in supply planning. PP/DS is also completely uncompetitive when it comes to simulation, as this post describes.


I can't wait for my next conversation with an SAP consultant who tells me that the problems with SAP are mostly due to ineffective user training. I know that somewhere this week some senior manager or partner at a large consulting company like IBM or Accenture will tell some prospect how great SAP is at simulation. The partner couldn't care less and will say anything for cash. This is one reason why SAP never has to improve its simulation capability, and can just continue to develop new products without fixing the older ones: they know that no matter how bad their functionality, the big consulting firms will always have their back.


Running the optimizer interactively for a single product location is a completely illogical way of performing optimization, and it is only necessary because, when performing interactive optimization, SNP strangely does not respect the sub-problems as set up in the SNP Optimization Profile. SAP could have made it easy to perform optimization interactively, but didn't. This is a serious problem because typically there is only enough time to run the network optimizer once per week, and an interactive optimizer would be a great addition. In fact, there are companies that have developed entire workarounds because SNP cannot perform this simple function.

If the planner desires to perform optimization on the fly, it can be done, but the planner must first ensure that all the locations that are part of the product's sub-network are included in the planning book, and only then should the optimizer be run. It is unfortunate that SAP does not automatically perform the optimization for the entire sub-network for any product that is part of it, rather than breaking the sub-network relationships and simply processing one location.

Diagnosing the Reasons for Over Ordering with the SNP Cost Optimizer


One generally thinks of supply planning inaccuracy as equally distributed between over- and under-ordering. However, on several occasions I have witnessed SNP both under- and over-order. This is most common when either CTM or the cost optimizer is used. It's rare for much in the way of comprehensive diagnostics to be run on SNP, and the way companies generally find out about the problem is when planners complain about the results. I have analyzed the overall system results and found over-ordering as high as 50% with the cost optimizer. Actually, considering how most companies set their costs of unfulfilled demand so high in comparison to other costs, I am surprised it is not higher. (See the problems that companies have in setting costs appropriately in cost optimizers in the post below):


Disproportionate cost setting can be one reason for consistent over- or under-ordering. However, there are other reasons, and they can be learned from analyzing the supply planning output. For instance, the graph below demonstrates overages that are not really driven by excessive unfulfilled demand penalty costs. The reason is that the overages are not high across the board, but only in a minority of cases.
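This kind of aggregate analysis is straightforward to sketch. Below is a minimal example, assuming a hypothetical extract of planned receipts versus total demand per product (the field names and numbers are illustrative):

```python
# A minimal sketch of an aggregate over-ordering diagnostic, assuming a
# hypothetical extract of planned receipts versus total demand per product.
plan = [
    {"product": "A", "planned_receipts": 150.0, "total_demand": 100.0},
    {"product": "B", "planned_receipts": 102.0, "total_demand": 100.0},
    {"product": "C", "planned_receipts": 60.0,  "total_demand": 100.0},
]

for row in plan:
    # Positive = plan exceeds demand (over-ordering); negative = under.
    ratio = (row["planned_receipts"] - row["total_demand"]) / row["total_demand"]
    print(f"{row['product']}: {ratio:+.0%}")  # A: +50%, B: +2%, C: -40%

# The distribution matters more than the average: overages across the
# board point to cost settings; overages concentrated in a minority of
# products point to another cause, such as decomposition effects.
```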

Over Ordering in R/3 or SNP?

When facing this issue, some people may think about the interaction with R/3 and question whether R/3 is partially responsible for the over-ordering. Of course, when two systems can create orders, the diagnosis of which system is the guilty one becomes more complex. However, SNP does not convert and then rename R/3-generated purchase requisitions as SNP purchase requisitions. The same is true of SNP planned (production) orders. The reason SNP is set up this way is so that there is traceability as to which system generated which recommendation. Pre-existing R/3 purchase requisitions do affect SNP, in that they reduce the size of, or in some cases (when operating correctly) eliminate, SNP-generated purchase requisitions. That is the sum of the relationship between SNP-generated purchase requisitions and R/3 (MRP)-generated purchase requisitions. Therefore, when an SNP purchase requisition is viewed in the planning book, the viewer can be certain that it was generated by SNP.

Therefore, the issue with high SNP ordering is not that purchase requisitions were created in R/3 and then adjusted in SNP. Purchase requisitions created in R/3 have a specific order category, and any previous R/3- or SNP-generated purchase requisitions are taken into account when SNP makes future order recommendations. R/3 cannot force SNP to make bad decisions unless it sends inaccurate data, such as the stock on hand.
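The netting relationship described above can be sketched as follows. This is a simplified illustration of the described behavior, with hypothetical quantities, not the actual SNP netting logic:

```python
# A simplified illustration (hypothetical quantities, not actual SNP
# logic) of the netting described above: existing purchase requisitions,
# whether R/3- or SNP-generated, reduce the new SNP proposal; they are
# never converted into SNP requisitions.
def new_snp_requisition_qty(requirement, existing_requisitions):
    covered = sum(req["qty"] for req in existing_requisitions)
    return max(requirement - covered, 0.0)

existing = [
    {"source": "R/3 (MRP)", "qty": 40.0},  # keeps its R/3 order category
    {"source": "SNP",       "qty": 25.0},
]
print(new_snp_requisition_qty(100.0, existing))  # 35.0 -- new SNP proposal
print(new_snp_requisition_qty(50.0, existing))   # 0.0 -- fully covered
```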

When in the Time Horizon does Over Ordering Occur?

Another important clue to what is causing over-ordering is when in the planning horizon the over-ordering occurs. This is particularly important for cost optimization, because cost optimization is often run with a series of different decompositions, or divisions of the overall supply network problem. (To read more about decomposition, see the post below.)


Decomposition with respect to time means that the CPLEX optimizer does not process the problem in one pass over the whole horizon. (It is important to refer to CPLEX because CPLEX makes the optimizer SAP uses; SAP makes no optimizer of its own. CPLEX's documentation is also better than SAP's, and it is useful to know CPLEX because it is used as the optimizer by many supply chain applications.) With time decomposition enabled, the optimizer processes earlier segments of the time horizon first. Sub-problems are given a maximum runtime (a sub-problem normally being a single product-location sub-network; see the link above for more details). Once a sub-problem reaches the maximum runtime, it is no longer processed, and the end state is whatever the optimizer was able to get through. In some circumstances the sub-problem solves optimally, but sometimes it does not. For those that did not solve optimally, the later periods in the time horizon will have a lower quality solution than the earlier periods, because the later periods were processed less, or not at all. This can be demonstrated by taking a sample of the over-ordered products and checking whether a majority of them did not solve optimally. If a high number of the product sub-problems did not solve optimally, there is a good chance that this is the problem, and discussions can then commence on making adjustments, such as enlarging the processing time window by adjusting the operational workflow, or the easiest and often least expensive alternative, adding more hardware.
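The runtime-budget behavior described above can be sketched as follows. The sub-problem name, bucket list, and solve step are hypothetical stand-ins for the real optimizer internals:

```python
import time

# A sketch of the runtime-budget behavior described above: each
# sub-problem gets a maximum runtime, earlier time buckets are processed
# first, and whatever state exists at the cutoff is kept.
def toy_solve_step(bucket, previous_plan):
    time.sleep(0.01)                     # pretend each bucket takes work
    return f"plan through {bucket}"

def solve_with_budget(sub_problems, budget_seconds):
    results = {}
    for name, buckets in sub_problems.items():
        deadline = time.monotonic() + budget_seconds
        plan, fully_solved = None, True
        for bucket in buckets:           # earliest buckets first
            if time.monotonic() > deadline:
                fully_solved = False     # budget exhausted mid-horizon
                break
            plan = toy_solve_step(bucket, plan)
        results[name] = (plan, fully_solved)
    return results

horizon = ["w11", "w12", "w13", "m04", "m05"]
print(solve_with_budget({"PROD_A@DC_EAST": horizon}, budget_seconds=0.025))
# If most over-ordered products show fully_solved == False, the runtime
# budget is a likely root cause.
```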


There are a number of reasons for over-ordering in any optimizer, including SNP. This article has described several things to look for which can lead one to the root cause. Once the root cause is determined, the solutions become obvious. While the current practice seems to be to let the planners find the problems with the solution, this is not a very good practice. Diagnostics of the type described in this article should be run right off the bat, when the optimizer is first being tested. Planner input is valuable, but the overall system must be diagnosed in aggregate to determine whether the solution is a good one.

Using Heuristic First Solution


The SNP optimizer has a selection called the Heuristic First Solution.

How it Works

Optimization is usually presented as mathematically pure. The truth, however, is that optimization often requires a lot of help in order to solve problems. One form of help is decomposition, the dimension along which the problem is segmented into smaller problems to reduce the solution space. This setting is a second way of helping the optimizer, and it is in fact very similar to simulated annealing. Wikipedia's definition of simulated annealing is listed below:

Simulated annealing (SA) is a generic probabilistic metaheuristic for the global optimization problem of locating a good approximation to the global optimum of a given function in a large search space. It is often used when the search space is discrete (e.g., all tours that visit a given set of cities). For certain problems, simulated annealing may be more efficient than exhaustive enumeration — provided that the goal is merely to find an acceptably good solution in a fixed amount of time, rather than the best possible solution. -Wikipedia

By the way, for those who might be wondering, “exhaustive enumeration” means allowing the optimizer to run and “enumerate” all of the different options before selecting one. However, while the Heuristic First Solution is similar to simulated annealing, there is an important difference. Simulated annealing is itself a non-optimization (heuristic) method for solving the problem. The Heuristic First Solution starts by applying a heuristic to find the right “neighborhood,” and then uses the optimizer to search within that neighborhood.

Therefore, in some cases the best first move an optimizer can make is to get assistance in performing the search. This would be a little like a GPS system asking for directions to a city before performing the calculation to get to a particular address. GPS systems of course don't have to do this, because the problem they are solving is computationally much simpler. So simple, in fact, that it can most often be solved in less than a minute (unless really long distances must be calculated) by the low-powered processor that resides within most GPS units.
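A toy sketch of the idea: a cheap heuristic finds the right neighborhood, and an exact search then refines within it. Minimizing a simple cost curve stands in for the real supply planning problem; none of this is SAP's actual implementation:

```python
# A toy sketch of the "heuristic first solution" idea: a cheap heuristic
# finds the right "neighborhood," then an exact local search refines
# within it. A simple cost curve stands in for the real problem.
def cost(x):
    return (x - 37) ** 2 + 5          # minimum (unknown to the solver) at 37

def heuristic_start(candidates):
    """Cheap pass: coarsely sample the space and keep the best point."""
    return min(candidates, key=cost)

def refine(x):
    """Exact pass: search the immediate neighborhood until no improvement."""
    while True:
        best = min((x - 1, x, x + 1), key=cost)
        if best == x:
            return x
        x = best

start = heuristic_start(range(0, 101, 10))   # coarse grid: 0, 10, ..., 100
print(start, refine(start))                  # 40 37
```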

Other Applications

Other linear programming based optimizers have this same type of functionality. Typically, running an LP from a completely empty solution space is quite time consuming. Two approaches, for instance, which are available in i2's/JDA's SCP, are the following:

  1. Store the result in binary mode (a memory map of sorts), and then use it as the initial point for the next run (see the sketch after this list).
  2. Do some kind of heuristic approximation to get an initial point. This varies largely by vendor; even within i2, there were various ways of doing this across product lines/solutions.
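The first approach, warm-starting from the prior run's stored result, can be sketched as follows. The file name and solution shape are hypothetical; this shows the pattern, not any vendor's actual mechanism:

```python
import json
import os

# A sketch of approach 1 above: persist the prior run's solution and use
# it as the initial point for the next run. File name and solution shape
# are hypothetical.
STATE_FILE = "last_solution.json"

def load_initial_point(cold_start):
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)       # warm start from the previous run
    return cold_start                 # first run: no stored solution

def save_solution(solution):
    with open(STATE_FILE, "w") as f:
        json.dump(solution, f)

x = load_initial_point(cold_start=0)
# ... run the solver from x (e.g., the refine() sketch above) ...
save_solution(x)
```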

So SAP has it (although it is very lightly documented), and i2's SCP optimizer has the setting as well. This should not be surprising, as both i2's and SAP's solver is CPLEX, so the applications simply expose within the application a (much more limited) reflection of the switches that are available in the CPLEX solver.

However, the question remains as to how to use the setting. Typically, instructions are required in order to “form” the heuristic prior to its use. However, I have not found instructions on how to use this setting in any online documentation. Of course, the setting could simply be tested by turning it on prior to an optimization run, but this would need to be performed in a simulation environment.


