Data Science Strategies for Effective Process System Maintenance

Data Science applications are gaining significant traction in the preventive and predictive maintenance of process systems across industries. A clear mindset shift has made it possible to steer maintenance from a 'reactive', run-to-failure approach to one that is proactive and preventive in nature.

Planned or scheduled maintenance uses data and experiential knowledge to determine the periodicity of servicing required to maintain the plant components' good health. These are typically driven by plant maintenance teams or OEMs through maintenance rosters and AMCs. Unplanned maintenance, on the other hand, occurs at random and impacts downtime/production, safety, inventory, and customer sentiment, besides adding to the cost of maintenance (including labor and material).

Interestingly, statistics reveal that almost 50% of scheduled maintenance projects are unnecessary and almost a third are improperly carried out. Poor maintenance strategies are known to cost organizations as much as 20% of their production capacity – shaving off the benefits that a move from a reactive to a preventive maintenance approach would provide. Despite years of expertise in managing maintenance activities, unplanned downtime impacts almost 82% of businesses at least once every three years. Given the significant impact on production capacity, aggregated annual downtime costs for the manufacturing sector are upwards of $50 billion (WSJ), with the average hourly cost of unplanned downtime in the range of $250K.

It is against this backdrop that data-driven solutions need to be developed and deployed. Can Data Science solutions bring about significant improvements in the maintenance domain and prevent any or all of the above costs? Are the solutions scalable? Do they provide an understanding of what went wrong? Can they provide insights into alternative and improved ways to manage planned maintenance activities? Does Data Science help reduce all types of unplanned events or just a select few? These are questions that manufacturers need answered, and it is for experts from both the maintenance and data science domains to address them.

Industry understanding of managing planned maintenance is fairly mature. This article therefore focuses on unplanned maintenance, which demands a differentiated approach to build insight and understanding around the process and its subsystems.

Data Science solutions are accelerating the industry's move towards 'on-demand' maintenance, wherein interventions are made only if and when required. Rather than following a fixed maintenance schedule, data science tools can now help plants increase run lengths between maintenance cycles in addition to improving plant safety and reliability. Besides the direct benefits of reduced unplanned downtime and maintenance costs, operating equipment at higher levels of efficiency improves the overall economics of operation.

The success of this approach was demonstrated in refinery CDU preheat trains that use soft sensing triggers to decide when to process ‘clean crude’ (to mitigate the fouling impact) or schedule maintenance of fouled exchangers. Other successes were in the deployment of plant-wide maintenance of control valves, multiple-effect evaporators in plugging service, compressors in petrochemical service, and a geo-wide network of HVAC systems.

Instead of using a fixed roster for maintenance of PID control valves, plants can now detect and diagnose control valves that are malfunctioning. Additionally, in combination with domain and operations information, this detection can drive prescriptive actions such as auto-tuning of the valves, which improve maintenance and operations metrics.
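
As an illustration of what such detection can look like in practice, here is a minimal sketch (not a production diagnosis system) that flags a valve whose control error oscillates persistently – a common signature of valve stiction. The sampling rate, threshold, and synthetic signal are illustrative assumptions.

```python
# A minimal sketch: flag a control valve whose control error shows a
# persistent oscillation (a common signature of valve stiction).
import numpy as np
from scipy.signal import periodogram

def oscillation_score(error, fs=1.0):
    """Fraction of signal power concentrated at the dominant frequency."""
    freqs, power = periodogram(error - np.mean(error), fs=fs)
    power = power[1:]                       # drop the DC bin
    return power.max() / power.sum()

# Hypothetical example: a valve oscillating at ~0.05 Hz plus sensor noise
t = np.arange(0, 3600, 1.0)                 # one hour at 1 Hz
error = 0.8 * np.sin(2 * np.pi * 0.05 * t) + 0.2 * np.random.randn(t.size)
if oscillation_score(error) > 0.3:          # threshold tuned per plant
    print("Valve flagged for diagnosis: sustained oscillation detected")
```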

Reducing unplanned, unavoidable events

It is important to bear in mind that not all unplanned events are avoidable. The inability to avoid events could be either because they are not detectable enough or because they are not actionable. The latter could occur either because the available response time is too short or because the knowledge to revert a system to its normal state does not exist. A large number of unplanned events, however, are avoidable, and the use of data science tools improves their detection and prevention with greater accuracy.

The focus of the experts working in this domain is to reduce unplanned events and transition events from unavoidable to avoidable. Using advanced tools for detection and diagnosis, and enabling timely action, companies have managed to reduce their downtime costs significantly. The diversity of solutions available in the maintenance area covers both plant and process subsystems.

Some of the data science techniques deployed in the maintenance domain are briefly described below:

Condition Monitoring
This has been used to monitor and analyze process systems over time, and to predict the occurrence of an anomaly. These events or anomalies could have short or long propagation times, such as fouling in exchangers or cavitation in pumps. The spectrum of solutions in this area includes real-time/offline modes of analysis, edge/IoT devices, open/closed-loop prescriptions, and more. In some cases, monitoring also involves the use of soft sensors to detect fouling, surface roughness, or hardness – these parameters cannot be measured directly using a sensor and therefore need surrogate measuring techniques.
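
To make the soft-sensor idea concrete, the sketch below infers a heat exchanger's overall heat-transfer coefficient U from routine temperature and flow measurements and tracks its decline as a fouling indicator. The counter-current configuration and all numbers are illustrative assumptions, not data from an actual deployment.

```python
# A minimal soft-sensor sketch: infer an exchanger's overall heat-transfer
# coefficient U from measured temperatures/flows and watch it degrade.
import numpy as np

def lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Log-mean temperature difference for a counter-current exchanger."""
    dt1 = t_hot_in - t_cold_out
    dt2 = t_hot_out - t_cold_in
    return (dt1 - dt2) / np.log(dt1 / dt2)

def u_estimate(m_hot, cp_hot, t_hot_in, t_hot_out, t_cold_in, t_cold_out, area):
    """U [kW/m2.K] inferred from a hot-side energy balance."""
    duty = m_hot * cp_hot * (t_hot_in - t_hot_out)   # kW
    return duty / (area * lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out))

# Clean vs. fouled snapshots (hypothetical numbers)
u_clean = u_estimate(50, 2.3, 180, 120, 40, 90, area=250)
u_now = u_estimate(50, 2.3, 180, 135, 40, 78, area=250)
print(f"Fouling indicator: U dropped {100 * (1 - u_now / u_clean):.0f}% vs. clean")
```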

Perhaps the most distinctive challenge of working in the manufacturing domain is the need for data reconciliation. Sensor data tend to be spurious and prone to operational fluctuations, drift, biases, and other errors. Using raw sensor information is unlikely to satisfy the material and energy balances for process units. Data reconciliation uses a first-principles understanding of the process systems and assigns a 'true value' to each sensor. These revised sensor values allow a more rigorous approach to condition monitoring, which would otherwise expose process systems to greater risk when using raw sensor information. Sensor validation, a technique to analyze individual sensors in tandem with data reconciliation, is critical to setting a strong foundation for any analytics models to be deployed. These elaborate areas of work ensure a greater degree of success when deploying any solution that involves the use of sensor data.
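
A minimal sketch of the linear case, assuming a single stream split (F1 = F2 + F3) and independent sensor errors: adjust the measured flows by the smallest variance-weighted amount that satisfies the mass balance. The flows and sensor variances are illustrative.

```python
# A minimal sketch of linear data reconciliation: adjust measured flows by
# the smallest variance-weighted amount so they satisfy the mass balance
# A @ x = 0. Closed-form solution for linear constraints.
import numpy as np

A = np.array([[1.0, -1.0, -1.0]])      # balance: F1 - F2 - F3 = 0
y = np.array([100.0, 61.0, 35.0])      # raw sensor readings (unbalanced)
sigma = np.array([2.0, 1.0, 1.0])      # sensor standard deviations
cov = np.diag(sigma**2)

# x_hat = y - Cov A' (A Cov A')^-1 (A y)
correction = cov @ A.T @ np.linalg.solve(A @ cov @ A.T, A @ y)
x_hat = y - correction
print("Reconciled flows:", x_hat, "residual:", A @ x_hat)
```

Note that the least reliable sensor (here F1, with the largest variance) absorbs most of the adjustment, which is exactly the behavior one wants before any downstream monitoring model consumes the data.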

Fault Detection
This is a mature area of work and uses solutions ranging from those driven entirely by domain knowledge, such as pump curves and the detection of anomalies thereof, to those that rely only on historical sensor/maintenance/operations data for analysis. An anomaly or fault is defined as a deviation from 'acceptable' operation, but the context and definitions need to be clearly understood when working with different clients. Faults may be related to equipment, quality, plant systems, or operability. A good business context and understanding of client requirements are necessary for the design and deployment of the right techniques. From basic tools such as sensor thresholds and run charts to more advanced techniques such as classification, pattern analysis, and regression, a wide range of solutions can be successfully deployed.
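
For the classification end of that range, here is a minimal sketch assuming historical sensor windows labelled with fault codes from maintenance logs. The feature set and synthetic data are illustrative assumptions.

```python
# A minimal sketch: a classifier mapping simple per-window sensor features
# to fault classes, trained on (synthetic) labelled history.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Features per window: [mean vibration, vibration std, bearing temp, motor load]
X_normal = rng.normal([2.0, 0.3, 60, 0.7], [0.2, 0.05, 3, 0.05], size=(500, 4))
X_fault = rng.normal([3.5, 0.9, 75, 0.7], [0.4, 0.15, 5, 0.05], size=(120, 4))
X = np.vstack([X_normal, X_fault])
y = np.array([0] * 500 + [1] * 120)     # 0 = normal, 1 = bearing fault

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"Hold-out accuracy: {clf.score(X_te, y_te):.2f}")
```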

Early Warning Systems
The detection of process anomalies in advance helps in the proactive management of abnormal events. Improving actionability or response time allows faults to be addressed before setpoints/interlocks are triggered. The methodology varies across projects and there is no ‘one-size-fits-all’ approach. Problem complexity could range from using single sensor information as lead indicators (such as using sustained pressure loss in a vessel to identify a faulty gasket that might rupture) to far more complex methods of analysis.

A typical challenge in developing early warning systems is achieving 100% detectability of anomalies; an even larger challenge is filtering out false indications. Complete detection and robust false-alarm filtering are both critical factors for successful deployment.
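
Taking the sustained-pressure-loss example above, here is a minimal sketch of one common filtering tactic: alert only when the breach persists for k consecutive samples, so isolated noise spikes are suppressed. The limits and the synthetic signal are illustrative assumptions.

```python
# A minimal sketch of a lead-indicator rule with false-alarm filtering:
# alert on a pressure loss only when it persists for k consecutive samples.
import numpy as np

def sustained_breach(pressure, low_limit, k):
    """Indices at which pressure has stayed below low_limit for k samples."""
    below = pressure < low_limit
    run = 0
    alerts = []
    for i, b in enumerate(below):
        run = run + 1 if b else 0
        if run == k:
            alerts.append(i)
    return alerts

rng = np.random.default_rng(1)
p = 5.0 + 0.05 * rng.standard_normal(600)   # healthy vessel pressure
p[300] = 4.6                                # single spike: should not alert
p[450:] -= np.linspace(0, 0.5, 150)         # slow leak: should alert
print("Alerts at samples:", sustained_breach(p, low_limit=4.8, k=10))
```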

Enhanced Insights for Fault Identification
The importance of detection and response time in the prevention of an event cannot be overstated. But what if an incident is not easy to detect, or the propagation of the fault is too rapid to allow any time for action? The first level involves using machine-driven solutions for detection, such as computer vision models, which are rapidly changing the landscape. Using these models, it is now possible to improve prediction accuracies for processes that were either not monitored or monitored manually. The second is to integrate the combined expertise of personnel from various job functions such as technologists, operators, maintenance engineers, and supervisors. At this level of maturity, the solution is able to baseline against the best that current operations aim to achieve. The third, and by far the most complex, is to move more faults into the 'detectable' and actionable realm. One such case was witnessed in a complex process in the metal smelting industry. Advanced data science techniques using a digital twin amplified signal responses and analyzed multiple process parameters to predict the occurrence of an incident ahead of time. By gaining an order-of-magnitude improvement in response time, it was possible to move the process fault from an unavoidable to an avoidable and actionable category.

With the context provided above, it is possible to choose a modeling approach and customize the solutions to suit the problem landscape:

[Image: data analytics in process system maintenance]

Different approaches to Data Analytics

Domain-driven solution
First-principles and rule-based approaches are examples of domain-driven solutions. Traditional ways of delivering solutions for manufacturing often involve computationally intensive methods (such as process simulation, modeling, and optimization). In one difficult-to-model plant, deployment was done using rule engines that allow domain knowledge and experience to determine patterns and cause-effect relationships. Alarms were triggered and advisories/recommendations were sent to the concerned stakeholders regarding what specific actions to undertake each time the model identified an impending event.

Domain-driven approaches also come in handy in the case of ‘cold start’ where solutions need to be deployed with little or no data availability. In some deployments in the mechanical domain, the first-principles approach helped identify >85% of the process faults even at the start of operations.

Pure data-driven solutions
A recent trend in the process industry is the move away from domain-driven solutions due to challenges in finding the right skills to deploy them, computational infrastructure requirements, the need for customized maintenance solutions, and the requirement to provide real-time recommendations. Complex systems such as naphtha cracking and alumina smelting, which are hard to model, have harnessed the power of data science not just to diagnose process faults but also to enhance response time and bring more finesse to the solutions.

In some cases, data-driven tools have provided high levels of accuracy in analyzing faults. One such case related to compressor faults, where domain data was used to classify them as arising from a loose bearing, a defective blade, or polymer deposits in the turbine subsystems. Each of these faults was identified using the sensor signatures and patterns associated with it. Besides getting to the root cause, this also helped prescribe actions to move the compressor system away from anomalous operation.

These solutions need to ensure that the operating envelope and the available data cover all possible scenarios. Poor success of deployments using this approach is largely due to data that insufficiently covers plant operations and maintenance. Nevertheless, the number of players offering purely data-driven solutions is large, and these are fast replacing what was traditionally part of a domain engineer's playbook.

Blended solutions
Blended solutions for the maintenance of process systems combine the understanding of both data science and domain. One such project was in the real-time monitoring and preventive maintenance of >1200 HVAC units across a large geographic area. The domain rules were used to detect and diagnose faults and also identify operating scenarios to improve the reliability of the solutions. A good understanding of the domain helps in isolating multiple anomalies, reducing false positives, suggesting the right prescriptions, and more importantly, in the interpretability of the data-driven solutions.

The differentiation comes from combining the intelligence of AI/ML models with domain knowledge and deployment experience, all integrated into the model framework.

Customizing the toolkit and determining the appropriate modeling approach are critical to delivery. The uniqueness of each plant and problem, and the consequent need for a high degree of customization, make the deployment of solutions in a manufacturing environment fairly challenging. This fact is validated by the limited number of solution providers serving this space. However, the complexity and nature of the landscape need to be well understood by both the client and the service provider. It is important to note that not all problems in the maintenance space are 'big data' problems requiring analysis in real time using high-frequency data. Some faults with long propagation times can use values averaged over a period of time, while other systems with short response-time requirements may need real-time data. Where maintenance logs and annotations related to each event (and corrective action) are recorded, one could go with a supervised learning approach, but this is not always possible. In cases where data on faults and anomalies is not available, a one-class approach to classify the operation into normal/abnormal modes has also been used. Solution maturity improves with more data and failure modes identified over time.
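
A minimal sketch of that one-class approach: train only on sensor data from known-normal operation, then score new windows as normal or anomalous. The choice of an isolation forest and the synthetic data are illustrative assumptions.

```python
# A minimal sketch of one-class classification for normal/abnormal modes,
# trained without any labelled fault data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
X_normal = rng.normal([50.0, 1.2], [1.0, 0.1], size=(1000, 2))  # temp, flow
model = IsolationForest(contamination=0.01, random_state=0).fit(X_normal)

X_new = np.array([[50.5, 1.18],     # typical operation
                  [57.0, 0.60]])    # off-envelope operation
print(model.predict(X_new))         # expected: [ 1 -1] (1 = normal)
```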

A staged solution approach helps in bringing in the right level of complexity to deliver solutions that evolve over time. Needless to say, it takes a lot of experience and prowess to marry the generalized understanding with the customization that each solution demands.

Edge/IoT

A fair amount of investment needs to be made at the beginning of the project to understand the hardware and solution architecture required for successful deployment. While the security of data is a primary consideration, other factors such as computational power, cost, time, response time, open/closed-loop architecture are added considerations in determining the solution framework. Experience and knowledge help understand additional sensing requirements and sensor placement, performance enhancement through edge/cloud-based solutions, data privacy, synchronicity with other process systems, and much more.

By far the largest challenge is witnessed on the data front (sparse, scattered, unclean, disorganized, unstructured, not digitized, and so on), which prevents businesses from seeing quick success. Digitization and creating data repositories, which set the foundation for model development, take a lot of time.

There is also a multitude of control systems, specialized infrastructure, and legacy systems within the same manufacturing complex that one may need to work through. End-to-end delivery with the front-end complexity in data management creates a significant entry barrier for service providers in the maintenance space.

Maintenance cuts across multiple layers of a process system. The maintenance solutions vary as one moves from a sensor to a control loop, to equipment with multiple control valves, all the way to the flowsheet/enterprise layer. Maintenance across these layers requires a deep understanding of both the hardware and the process aspects, a combination that is often hard to put together. Sensors and control valves are typically maintained by those with an instrumentation background, while equipment maintenance could fall in a mechanical or chemical engineer's domain. On the other hand, process anomalies that could have a plant-level impact are often in the domain of operations/technology experts or process engineers.

Data Science facilitates the development of the insights and generalizations required to build understanding around a complex topic like maintenance. It helps generalize and translate learnings across layers within process systems, from sensors all the way to the enterprise, and across industry domains as well. It is a matter of time before analytics-driven solutions that help maintain safe and reliable operations become an integral part of plant operations and maintenance systems. We should aim for the successes witnessed in the medical diagnostics domain, where intelligent machines are capable of detecting and diagnosing anomalies. We hope that similar analytics solutions will go a long way in keeping plants safe, reducing downtime, and providing the operational efficiencies that a sustainable world demands.

Today, the barriers to success lie in the ability to develop a clear understanding of the problem landscape, plan end-to-end, and deliver customized solutions that take into account business priorities and ROI. Achieving success at a large scale will demand reducing the level of customization required in each deployment – a constraint that few subject matter experts in the area can overcome today.

Waste No More: Making a Difference with Tiger Analytics' Data-Driven Solution for a Greener Tomorrow

Improper commercial waste management has a devastating impact on the environment. The realization may not be sudden, but it is certainly gathering momentum, considering that more companies are now looking to minimize their environmental footprint. Of course, it's easier said than done. Since the dawn of the 21st century, the sheer volume and pace of commercial growth have been unprecedented. But the fact remains that smart waste management is both a business and a social responsibility.

Importance of commercial waste management

The commercial waste management lifecycle comprises collection, transportation, and disposal. Ensuring that all the waste materials are properly handled throughout the process is a matter of compliance. After all, multiple environmental regulations dictate how waste management protocols should be implemented and monitored. Instituting the right waste management guidelines also helps companies fulfill their ethical and legal responsibility of maintaining proper health and safety standards at the workplace.

For instance, all the waste materials generated in commercial buildings are stored in bins placed at strategic locations. If companies do not utilize them effectively, bin overflows follow, causing severe financial, reputational, and legal repercussions.

Impact of data analytics on commercial waste management

Data analytics eliminates compliance issues that stem from overflowing bins by bridging operational gaps. In addition, it provides the precise know-how for creating intelligent waste management workflows. With high-quality video cameras integrated into the chassis of waste collection trucks, footage can be captured and shared through a cloud-hosted platform for real-time visual detection. From this, data insights can be extracted to evaluate when the trash bins are getting filled and to schedule collection so as to minimize transportation, fuel, and labor expenses. These insights can also determine the right collection frequency, streamline collection routes, and optimize vehicle loads.
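
A minimal sketch of the scheduling idea, assuming per-bin fill-level estimates (for example, derived from the camera feed): fit a linear fill rate and extrapolate when the bin will cross its service threshold. The threshold and readings are illustrative assumptions.

```python
# A minimal sketch of collection scheduling from fill-level estimates:
# fit a linear fill rate per bin and extrapolate time-to-threshold.
import numpy as np

def days_until_full(days, fill_pct, threshold=85.0):
    """Extrapolate a linear fill trend to the service threshold."""
    rate, intercept = np.polyfit(days, fill_pct, 1)   # % per day
    if rate <= 0:
        return np.inf                                 # not filling up
    return max((threshold - intercept) / rate - days[-1], 0.0)

days = np.array([0, 1, 2, 3, 4])
fill = np.array([10, 22, 31, 44, 53])                 # daily fill estimates (%)
print(f"Schedule pickup in ~{days_until_full(days, fill):.1f} days")
```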

By monitoring real-time data from the video cameras, the flow of waste in each bin can be managed promptly to avoid compliance-related repercussions. The trucks also receive real-time data on the location of empty bins, which helps them chart optimal routes and be more fuel-conscious.

Ultimately, leveraging sophisticated data analytics helps build a leaner and greener waste management system. In addition, it can improve operational efficiency while taking an uncompromising stance on environmental and health sustainability.

Tiger Analytics’ waste management modeling use case for a large manufacturer

Overflowing bins are a severe impediment in the waste management process as they increase the time required to process the waste. Waste collection trucks will have to spend more time than budgeted for ensuring that they handle overflowing bins effectively – without any spillage in and around the premises. It is also difficult for them to complete their trips on time. When dealing with commercial bins, the situation is even more complicated. The size and contents of the commercial bins vary based on the unique waste disposal requirements of businesses.

Recently, Tiger Analytics worked with a leading waste management company to harness advanced data analytics to improve compliance concerning commercial waste management.

Previously, the client had to record videos of the waste pick-up process and send them for manual review. The videos were used to identify the commercial establishments that did not follow the prescribed norms on how much waste could be stored in a bin. However, their video review process was inefficient and tedious.

For each pick-up, a manual reviewer was expected to watch hours of video clips and images captured by each truck to determine violators. There was also an uncompromising need for accuracy, since overflowing bins led to compliance violations and potential penalties.

Tiger Analytics developed a solution that leveraged video analytics to determine whether a particular bin in an image was overflowing. Using cutting-edge deep learning algorithms, the solution delivered a high level of accuracy and eliminated the manual video review and its associated costs.
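
As a hedged illustration of the kind of model such a solution can be built on (not the client's exact system), the sketch below fine-tunes a pretrained CNN as a binary "overflowing / not overflowing" classifier. PyTorch/torchvision, the backbone choice, and the dummy batch are all assumptions for illustration.

```python
# A minimal transfer-learning sketch: repurpose an ImageNet backbone as a
# two-class bin-overflow classifier and run one training step.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT)   # pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)        # 2 classes: ok / overflow

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 frames
images = torch.randn(8, 3, 224, 224)                 # stand-in for truck footage
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"Step loss: {loss.item():.3f}")
```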

Tiger Analytics' solution was also based on a new data classification algorithm that increased the efficiency of the waste collection trucks. Using sensor data collected from the chassis, we enabled the client to predict the collection time when the truck was five seconds away from the vicinity of a bin. Furthermore, with advanced monitoring analytics, we reduced the duration of the review process from 10 hours to 1.5 hours, which boosted workforce efficiency too.

As a result, the client could effortlessly de-risk their waste management approach and prevent overflow in commercial bins. Some of the business benefits of our solution were:

  • Greater operational efficiency by streamlining how pickups are scheduled
  • Smarter asset management through increased fuel efficiency and reduced vehicle running costs
  • Improved workforce productivity, with accelerated critical processes like reviewing videos to confirm pickups
  • Quick mitigation of overflow negligence that could lead to compliance violations

Conclusion

New avenues of leveraging advanced analytics continue to pave the way for eco-conscious and sustainable business practices. Especially in a highly regulated sector like commercial waste management, it provides the much-needed accuracy, convenience, and speed to strengthen day-to-day operations and prevent compliance issues.

Day by day, commercial waste management is growing into a more significant catalyst for societal progress. As mentioned earlier, more companies are becoming mindful of their impact on the environment. In addition, the extent of infrastructure development has taken its toll, exponentially increasing the need to optimize waste disposal and collection methods. Either way, data provides a valuable understanding of how it should be done.

This article was first published in Analytics India Magazine.

2024 CPG E-Commerce Trends: AI and Personalization Take Center Stage

Why eCommerce for CPG?

The clear, permanent shift towards digitally driven consumption is turning up the heat on retail, but for CPG manufacturers it represents an opportunity – at least till now, should we say? Ecommerce platforms are in fact fast becoming the main growth area for CPG companies, across geographies. (For some interesting statistics, we recommend reading this insightful article from Nielsen).

To win, CPG manufacturers adopt one of the following eCommerce models:

1. Partnerships with pure-play online retailers, of which Amazon has been a prime example (pun!?). While Amazon is predominantly online, it is no longer pure-play online only.

2. Online channels of traditional brick-and-mortar players (the walmart.coms of the world)

3. Direct-to-Consumer platforms with a content + commerce angle. Till now, this model has been used more for content sharing with consumers than for commerce – less than 1% of sales across most categories, but this could change as CPG supply chains adapt.

Across all three of the above, especially on the commerce part, levers to win (as articulated in the pictorial below) are not very different from what worked in the pure offline world. Given our experience is mostly around delivering insights through data & analytics, the rest of this article is focused on that aspect.

Importance of eCommerce Data for CPG Manufacturers

In the offline world, CPG manufacturers get a broader market picture (volume/value sales, brand shares, etc.) by relying on data from syndicated providers (largely Nielsen & IRI) rather than having to pipe in PoS data* from every major retailer. We often hear in discussions that, for the ecommerce side of the market, syndicated data sources are yet to fully mature to provide a reliable market view covering all relevant ecommerce players across the range of key categories. Given this context, the absence of a proactive data initiative will make it difficult for a CPG manufacturer to understand its brand/category growth drivers at the right level of granularity and use that understanding to improve share. Striking the right data-sharing partnerships with relevant ecommerce partners to get sales data in place, and subsequently leveraging that for a range of insights-driven in-channel/cross-channel decisions, is a high-priority area for many top CPG companies.

[* Of course, CPGs leverage PoS as well, for account specific deep-dive insights and service level improvements, another trend that’s accelerated from around 2010 with the advent of affordable solutions to store and analyze data]

In an ideal state, marketing data – across traditional as well as digital marketing channels – is brought in to comprehensively cover the 'stimuli' side, while sales data – shipments as well as data on consumption through ecommerce, PoS, and syndicated sources – is brought in for a holistic picture of the 'response'. Then there are others such as panel data, shopper cards, etc. However, given that such a large scope could take many years to materialize, CPGs often divide and conquer. When it comes to bringing large quantities of data in-house, or onto a platform managed by a partner on behalf of the CPG manufacturer, the scope of data that is brought in, as well as the underlying technology infrastructure, scales asymmetrically across functional areas, depending on:

1. Cost and ease of availability of data,

2. Ease of bringing the data on to internal platforms,

3. Utility of such data in relevant areas of decision making, and

4. Business value potential

These together drive the allocation of funds from the functions. In such a model, we have often seen acquiring data from ecommerce partners get higher priority than bringing in highly granular data on digital consumer journeys, which can be understood to a reasonable extent via agency-provided solutions rather than bringing all marketing data in on priority.

[Please note this is more of a sequencing decision than a permanent substitute. Some of the respected global CPGs we work with already have a good hold on the marketing side of the data and are now moving to conquer the eCommerce side; but if you are just starting on the data journey, the sequencing would be different, in our point of view. If not immediately, then very soon, bringing in comprehensive digital consumer journey data, rather than relying only on external inputs, will become highly relevant – if a CPG manufacturer's aspiration is to be a category leader driven by differentiated insights. That's another topic for a different day: back to ecommerce now]

Data Management

When it comes to building a solid data foundation for ecommerce, it is essential for CPGs to have the following covered:

Scope of data: Building out the overall strategy for data management requires scoping the data needed, aligned with business objectives and availability. Data acquired from partner retailers includes ecommerce point-of-sale data, content and online behavior, and search terms. Internal firm data could be integrated with aggregated sales to provide a holistic picture. It could include the promotion calendar, shipments specific to ecommerce, priority search terms, pricing strategy (ceiling/floor), etc.

Building a robust data pipeline & quality control: It is crucial to build a scalable data pipeline. This includes collecting data from different sources and storing it – including in-memory processing and storage – in an optimal way so that data mining is effective and less time-consuming.

Data cataloging: Creating standardized metadata and the ability to look up against master data is another critical element, ensuring roll-up of individual components of data from various sources/geographies.

Governance: Establishing processes for data visibility (to users), data lifecycle management, aligned (rather than arbitrary) data aggregation rules, and data quality monitoring to ensure effective data management.

Covering these will ensure trusted & timely delivery of data to users. In summary, it is critical for CPG manufacturers to build expertise in eCommerce data management to ensure adequate data assets are available for the downstream analytics – whether such analyses are heavyweight predictive analytics by data scientists or lightweight DIY analyses by business users.

eCommerce Analytics

After* successfully building the data foundation, the next step is to arm the business with precise insights that drive differentiated execution. Yes, but in which areas? As outlined above, the other three critical pieces to win in eCommerce are Content & Communications, (R)etailer Partnerships, and Pricing & Promotions.

[*While the mention of analytics ‘after’ data may indicate the progress from data to analytics is very linear, most analytic themes are often tested for value using snippets of data even as the data foundation is taking shape and are quickly scaled across customer teams when a critical mass of data is in place.]

Analytics Around Content

Analytics is useful to optimize content and syndicate it across e-retailers. Causation analysis of product sales against product information is critical not only to optimize the content but also to adapt to ever-changing needs for digital engagement. With deeper data from partners, measuring the impact of product imagery and messaging on ecommerce sales is doable too, but not yet that prevalent.

Partner-specific Analytics

A long tail of products is commonly observed in ecommerce, where shelf space is unlimited. One of the struggles for CPG manufacturers is to understand their most profitable products and optimize their assortment for adequate coverage of consumer segments and price points for each retail partner.

If some form of consumer profile data (even aggregated) is available from the partner, that’s of great help. Even if not, classic price-ladders and blended data analyses with online purchase behavior from consumer panel providing a view of the preferences help make a start.

Another big area here is around service levels on shipments against ecommerce orders – which is as critical for ecommerce as in the off-line world. A quote seen in a recent article from BCG best summarizes this – “Stockouts, bad enough in traditional retail, can be deadly to an online seller”. It couldn’t have been said better. We recommend reading it for a detailed treatment of how CPG supply chains are adapting in the context of the three different ecommerce models.

Price & Promotions

Effective promotions – the online equivalent of traditional trade promotions, however they are named and accounted for – are highly critical in ecommerce for multiple reasons. They are often a top spend item for CPG brands. In addition, pricing and promotions are highly visible to shoppers and comparison engines alike. Hence, it pays (quite literally!) to understand ROI through detailed impact attribution of promotional activity – what worked, where, when, and what the true incrementality is versus share shift. With good-quality ecommerce partner data, this is an area that can be addressed with sufficient confidence. Even if only marketing spends are available and not detailed consumer journeys, it is still possible to get a good picture of promotional impact.
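
A minimal sketch of the attribution idea, assuming weekly sales with price and a promo flag: a log-log regression whose promo coefficient approximates incremental lift. The synthetic data (and the true lift baked into it) are assumptions for illustration, not results from an engagement.

```python
# A minimal sketch of promotion-impact attribution via log-log regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 104                                  # two years of weeks
price = rng.uniform(4.0, 6.0, n)
promo = rng.binomial(1, 0.25, n)
# Assumed data-generating process: elasticity -1.5, promo lift ~20%
sales = np.exp(8 - 1.5 * np.log(price) + 0.18 * promo
               + 0.05 * rng.standard_normal(n))

X = sm.add_constant(np.column_stack([np.log(price), promo]))
fit = sm.OLS(np.log(sales), X).fit()
lift = np.exp(fit.params[2]) - 1         # promo coefficient -> % lift
print(f"Estimated promo lift: {lift:.1%} (true ~20%)")
```

A real engagement would add seasonality, distribution, and competitor controls, but the coefficient-as-incrementality logic stays the same.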

Consumer Digital Journey

This, of course, is not about ecommerce alone, but an integrated view of data from both ecommerce and marketing. Depending on the depth of the data, it is quite possible to build the equivalent of a Market/Media Mix Analysis specifically focused on consumption through the ecommerce channel; or get a complete view of the digital consumer journey replete with attribution analysis.

Of course, for such deeper analysis, data on offline + online media consumption as well as category consumption is required, which comes from specialized players at additional cost. This approach helps get a comprehensive omnichannel consumption view of the consumer rather than having to draw artificial offline/online boundaries. Given that, such intensive data acquisition decisions are usually not taken by ecommerce teams alone, but with a total consumption perspective in conjunction with brand marketing and consumer market insights (CMI) teams.
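
One building block of such a Market/Media Mix Analysis is the adstock transform, which carries a share of each period's media pressure into subsequent periods before the series enters a sales regression. A minimal sketch with an assumed geometric decay:

```python
# A minimal sketch of a geometric adstock transform for media-mix modeling:
# a_t = spend_t + decay * a_{t-1}.
import numpy as np

def adstock(spend, decay=0.5):
    """Carry over a fraction of past media pressure into each period."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        out[t] = carry
    return out

spend = np.array([0, 100, 0, 0, 50, 0, 0], dtype=float)
print(adstock(spend))   # [0, 100, 50, 25, 62.5, 31.25, 15.625]
```

The decay parameter is typically estimated (or grid-searched) per channel rather than fixed, as assumed here.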

Conclusion

For a USD 10 billion CPG business growing at 1.5% annually, if most of that growth over the next few years is expected to come from ecommerce, that represents a roughly USD 500 million business opportunity over the course of 3-4 years. No wonder, then, that this space attracts so much leadership attention. As opposed to the general approach of evaluating ROI on individual initiatives, when it comes to eCommerce, looking at this bigger picture has helped many in the industry quickly move from planning to execution mode.
