This is post #6 from a 12-part blog series that SAP commissioned from finance expert Steve Player, who runs his own practice, The Player Group, and also heads up the Beyond Budgeting Roundtable in North America. This series explores how new technologies such as cloud computing, mobility, and in-memory processing of Big Data are transforming planning, budgeting, and forecasting best practices.
I recently took the Maryland Area Regional Commuter train from Washington, D.C.’s Union Station out to Baltimore airport. It is a great way to avoid the D.C. traffic. I boarded just before it departed. As a result, the only seats available faced backward. I spent the entire ride seeing where we had just been. Unfortunately, most finance organizations spend the majority of their time looking at the same view. The problem is that you cannot add much value if you are always looking back.
Financial planning and analysis (FP&A) teams address this issue by developing predictive logic diagrams that become the key to driver-based rolling forecasts. These are powerful tools that help finance predict possible future outcomes. When organizations begin to use rolling forecasts, they typically start in a rudimentary fashion: they take historical numbers and apply simple mathematical trend lines, such as assuming an expense or revenue category grows 3% per period (with revenue, one hopes, growing faster than expenses). Simple scenario plans then increase or decrease the slope of those trend lines.
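That rudimentary starting point can be sketched in a few lines of Python. This is a minimal illustration only; the base figure, the 3% growth rate, and the scenario adjustments are assumed numbers, not taken from any real model:

```python
# Minimal sketch of a trend-line rolling forecast with scenario adjustments.
# The starting value, growth rates, and scenario names are illustrative assumptions.

def rolling_forecast(last_actual, growth_rate, periods):
    """Project a value forward by compounding a constant growth rate."""
    forecast = []
    value = last_actual
    for _ in range(periods):
        value *= (1 + growth_rate)
        forecast.append(round(value, 2))
    return forecast

last_quarter_revenue = 1_000_000  # most recent actual

# Scenario planning here just steepens or flattens the slope of the trend line.
scenarios = {
    "base": 0.03,         # category grows 3% per period
    "optimistic": 0.05,   # steeper slope
    "pessimistic": 0.01,  # flatter slope
}

for name, growth in scenarios.items():
    print(name, rolling_forecast(last_quarter_revenue, growth, 4))
```

The limitation the post goes on to describe is visible in the code itself: the only inputs are last period's number and a slope, so the forecast can never explain why results change.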
As organizations become more sophisticated, they realize that financial results come from a series of underlying transactions. The activities that produce those transactions can vary in both frequency and cost per activity. This leads to the development of more accurate models that base predictions on both the number of activities and the rates. It also pushes the predictive logic diagram further upstream, providing greater visibility into what causes the results to occur.
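A driver-based model of this kind separates each line item into a volume (how many activities) and a rate (cost per activity). The sketch below is an assumption-laden illustration; the driver names, volumes, and rates are invented for the example:

```python
# Sketch of a driver-based forecast: cost = activity volume x rate per activity.
# Driver names, volumes, and rates below are illustrative assumptions.

def driver_based_cost(drivers):
    """Sum (volume x rate) across all activity drivers."""
    return sum(d["volume"] * d["rate"] for d in drivers.values())

next_quarter = {
    "customer_orders": {"volume": 12_000, "rate": 4.50},  # cost per order processed
    "support_calls":   {"volume": 3_000, "rate": 11.00},  # cost per call handled
    "shipments":       {"volume": 9_500, "rate": 6.25},   # cost per shipment
}

print(driver_based_cost(next_quarter))  # total forecast cost for the quarter
```

Because volume and rate are modeled separately, finance can see which lever is moving the result: a cost increase driven by more orders calls for a very different response than one driven by a rising rate per order.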
My first introduction to this concept was an essay by Arthur E. Andersen titled “Behind the Figures.” He argued that the way you understand any business is not by understanding the figures on a ledger but instead by understanding what was behind the figures — the transactions and activities that gave rise to the numbers. In essence, predicting those will predict the ultimate results.
Computing power then was a pencil and a slide rule. Today, in-memory processing greatly expands both what we can see and how we can understand it. It allows planning departments to push predictive logic diagrams all the way upstream, to the point of seeing how current and potential customers are responding to marketing and advertising activity.
An excellent example of this is sentiment analysis, which Wikipedia defines as the application of natural language processing, computational linguistics, and text analytics to identify and extract information from source materials. The bottom line is that organizations are analyzing how their products and services are being discussed in order to gauge how those offerings can be expected to perform in the marketplace. And the further upstream you can move in the logic diagram, the more lead time you have to shape results and optimize your outcomes. This is another way that in-memory processing is harvesting Big Data to provide earlier planning insight.
What is really distinctive about moving all the way upstream to sentiment analysis is that in-memory processing lets organizations harvest unstructured data, gathering information that never flows through the ledgers. In many cases this information comes from social channels outside the systems your organization strictly owns. It can now feed product planning, customer pricing analysis, market evaluations, and all manner of strategic analysis.
Let me know how far upstream your organization can see.
Next post we will continue to examine the expansion of information now available to improve planning. The question we’ll explore is: Are you using the right data when planning and forecasting?