Predictive Analytics Archives - Tiger Analytics

Implementing Context Graphs: A 5-Point Framework for Transformative Business Insights
https://www.tigeranalytics.com/perspectives/blog/implementing-context-graphs-a-5-point-framework-for-transformative-business-insights/

This comprehensive guide outlines three phases: establishing a Knowledge Graph, developing a Connected Context Graph, and integrating AI for auto-answers. Learn how this framework enables businesses to connect data points, discover patterns, and optimize processes. The article also presents a detailed roadmap for graph implementation and discusses the integration of Large Language Models with Knowledge Graphs.

Remember E, the product manager who used Context Graphs to unravel a complex web of customer complaints? Her success story inspired a company-wide shift in data-driven decision-making.

“This approach could change everything,” her CEO remarked during her presentation. “How do we implement it across our entire operation?”

E’s answer lay in a comprehensive framework designed to unlock the full potential of their data. In this article, we’ll explore Tiger Analytics’ innovative 5-point Graph Value framework – a roadmap that guides businesses from establishing a foundational Knowledge Graph to leveraging advanced AI capabilities for deeper insights.

The 5-Point Graph Value

At Tiger Analytics, we have identified a connected 5-point Graph Value framework that enables businesses to unlock the true potential of their data through a phased approach, leading to transformative insights and decision-making. The framework consists of three distinct phases, each building on the previous one to create a comprehensive and powerful solution for data-driven insights.

[Figure: Five-Point Graph Values]

Phase 1: Knowledge Graph (Base)

The first phase focuses on establishing a solid foundation with the Knowledge Graph. This graph serves as the base, connecting all the relevant data points and creating a unified view of the business ecosystem. By integrating data from various sources and establishing relationships between entities, the Knowledge Graph enables businesses to gain a holistic understanding of their operations.

In this phase, two key scenarios demonstrate the power of the Knowledge Graph:

1. Connect All Dots
Role-based Universal View: Gaining a Holistic Understanding of the Business
A business user needs to see a connected view of Product, Plant, Material, Quantity, Inspection, Results, Vendor, PO, and Customer complaints. With a Knowledge Graph, this becomes a reality. By integrating data from various sources and establishing relationships between entities, the Knowledge Graph provides a comprehensive, unified view of the product ecosystem. This enables business users to gain a holistic understanding of the factors influencing product performance and customer satisfaction, leading to context-based insights for unbiased actions.
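To make this concrete, here is a minimal sketch of how such a connected view could be queried with the Neo4j Python driver. The node labels, relationship types, and connection details are illustrative assumptions, not a prescribed schema:

```python
from neo4j import GraphDatabase

# Hypothetical connection details -- adjust to your own deployment.
URI = "neo4j://localhost:7687"
AUTH = ("neo4j", "password")

# One Cypher pattern stitches Product, Plant, Material, Vendor, PO, and
# Complaint into a single role-based view (labels/relationships are assumed).
CONNECTED_VIEW = """
MATCH (p:Product)<-[:PRODUCES]-(plant:Plant),
      (p)-[:USES]->(m:Material)<-[:SUPPLIES]-(v:Vendor),
      (v)<-[:ISSUED_TO]-(po:PurchaseOrder),
      (p)<-[:ABOUT]-(c:Complaint)
RETURN p.name AS product, plant.name AS plant, m.name AS material,
       v.name AS vendor, po.id AS purchase_order, c.summary AS complaint
LIMIT 25
"""

with GraphDatabase.driver(URI, auth=AUTH) as driver:
    records, _, _ = driver.execute_query(CONNECTED_VIEW, database_="neo4j")
    for r in records:
        print(r["product"], "|", r["vendor"], "|", r["complaint"])
```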

2. Trace & Traverse
Trace ‘Where Things’: Context-based Insights for R&D Lead
An R&D Lead wants to check Package material types and their headspace combination patterns with dry chicken batches processed in the last 3 months. With a Knowledge Graph, this information can be easily traced and traversed. The graph structure allows for efficient navigation and exploration of the interconnected data, enabling the R&D Lead to identify patterns and insights that would otherwise be hidden in traditional data silos. This trace and traverse capability empowers the R&D Lead to make informed decisions based on a comprehensive understanding of the data landscape.
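As a sketch, the trace itself can be a single Cypher pattern with a time filter. The schema below (PackageMaterial, Batch, a processed_on date) is assumed for illustration; the query would run through the same driver as in the earlier sketch:

```python
# Assumed schema: (:PackageMaterial)-[:PACKS]->(:Batch) with a processed_on date.
TRACE_QUERY = """
MATCH (pm:PackageMaterial)-[:PACKS]->(b:Batch {product_type: 'dry chicken'})
WHERE b.processed_on >= date() - duration({months: 3})
RETURN pm.material_type AS package_material,
       pm.headspace     AS headspace,
       count(b)         AS batches
ORDER BY batches DESC
"""
# Run with driver.execute_query(TRACE_QUERY) as in the previous sketch.
```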

Phase 2: Connected Context Graph

Building upon the Knowledge Graph, the second phase introduces the Connected Context Graph. This graph incorporates the temporal aspect of data, allowing businesses to discover patterns, track changes over time, and identify influential entities within their network.

Two scenarios showcase the value of the Connected Context Graph:

3. Discover more Paths & Patterns
Uncover Patterns: Change History and its weighted impacts for an Audit
An auditor wants to see all the changes that happened for a given product between 2021 and 2023. With a Connected Context Graph, this becomes possible. The graph captures the temporal aspect of data, allowing for the discovery of patterns and changes over time. This enables the auditor to identify significant modifications, track the evolution of the product, and uncover potential areas of concern. The Connected Context Graph provides valuable insights into the change history and its weighted impacts, empowering the auditor to make informed decisions and take necessary actions.
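A hedged sketch of what such a temporal audit query could look like, assuming change events are modelled as Change nodes carrying a date and a weighted impact score:

```python
# Assumed schema: (:Product)<-[:CHANGED]-(:Change) with date and weight properties.
AUDIT_QUERY = """
MATCH (p:Product {id: $product_id})<-[:CHANGED]-(ch:Change)
WHERE date('2021-01-01') <= ch.changed_on <= date('2023-12-31')
RETURN ch.changed_on AS changed_on, ch.field AS field,
       ch.old_value  AS old_value, ch.new_value AS new_value,
       ch.impact_weight AS impact_weight
ORDER BY ch.changed_on
"""
# Pass {"product_id": ...} as query parameters when executing.
```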

4. Community Network
Network Community: Identifying Influencers and Optimizing Processes
A business user wants to perform self-discovery on the Manufacturer and Vendor network for a specific Plant, Products, and Material categories within a specific time window. The Connected Context Graph enables the identification of community networks, revealing the relationships and interdependencies between various entities. This allows the business user to identify key influencers, critical suppliers, and potential risk factors within the network. By understanding the influential entities and their impact on the supply chain, businesses can optimize their processes and make strategic decisions to mitigate risks and improve overall performance.
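Community detection itself is standard graph algorithmics. A minimal, self-contained sketch using networkx, with an invented toy network; a real analysis would export the manufacturer-vendor subgraph for the chosen plant, products, and time window:

```python
import networkx as nx

# Toy manufacturer-vendor network; edge weights stand in for order volumes.
G = nx.Graph()
G.add_weighted_edges_from([
    ("Plant-A", "Vendor-1", 120), ("Plant-A", "Vendor-2", 30),
    ("Vendor-1", "Mfr-X", 80),    ("Vendor-2", "Mfr-Y", 45),
    ("Plant-B", "Vendor-2", 60),  ("Mfr-X", "Vendor-3", 15),
])

# Louvain community detection (networkx >= 3.0) groups densely connected
# entities; degree centrality flags the most influential nodes.
communities = nx.community.louvain_communities(G, weight="weight", seed=42)
influence = nx.degree_centrality(G)

for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members)}")
print("top influencer:", max(influence, key=influence.get))
```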

Phase 3: Auto-Answers with AI

The final phase of the 5-point Graph Value framework takes the insights derived from the Knowledge Graph and Connected Context Graph to the next level by augmenting them with AI capabilities. This phase focuses on leveraging AI algorithms to identify critical paths, optimize supply chain efficiency, and provide automated answers to complex business questions.

The scenario in this phase illustrates the power of AI integration:

5. Augment with AI
Optimizing Supply Chain Critical Paths and Efficiency
A Transformation Lead wants to identify all the critical paths across the supply chain to improve green scores and avoid unplanned plant shutdowns. By augmenting the Knowledge Graph with AI capabilities, this becomes achievable. AI algorithms can analyze the graph structure, identify critical paths, and provide recommendations for optimization. This enables the Transformation Lead to make data-driven decisions, minimize risks, and improve overall operational efficiency. The integration of AI with the Knowledge Graph opens up new possibilities for business process optimization, workflow streamlining, and value creation, empowering organizations to stay ahead in today’s competitive landscape.
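One simple reading of a "critical path" is the longest weighted path through the supply network. A toy sketch with networkx, using invented lead times, shows the idea; production systems would layer richer AI models on top:

```python
import networkx as nx

# Sketch: the supply chain as a DAG from raw material to customer, with edge
# weights as lead times in days (values are illustrative).
sc = nx.DiGraph()
sc.add_weighted_edges_from([
    ("RawMaterial", "Supplier", 4), ("Supplier", "Plant-A", 7),
    ("Supplier", "Plant-B", 3),     ("Plant-A", "DC", 2),
    ("Plant-B", "DC", 5),           ("DC", "Customer", 1),
])

# The longest weighted path is the critical path: any delay on it delays the
# whole chain, so it is the first place to add buffers or backup suppliers.
critical_path = nx.dag_longest_path(sc, weight="weight")
length = nx.dag_longest_path_length(sc, weight="weight")
print(" -> ".join(critical_path), f"({length} days)")
```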

A 360-Degree View of Your Product with Context Graphs

By leveraging Knowledge Graphs, businesses can unlock a complete 360-degree view of their products, encompassing every aspect from raw materials to customer feedback. Graph capabilities enable organizations to explore the intricate relationships between entities, uncover hidden patterns, and gain a deeper understanding of the factors influencing product performance. From context-based search using natural language to visual outlier detection and link prediction, graph capabilities empower businesses to ask complex questions, simulate scenarios, and make data-driven decisions with confidence. In the table below, we will delve into the various graph capabilities that can enhance the way you manage and optimize your products.

[Table: Five Steps for Graph Values — graph capabilities across the framework]

Use Cases of Context Graphs Across Your Product

[Figure: Use Cases of Context Graphs]

Graph Implementation Roadmap

The adoption of Context Graphs follows a structured roadmap, encompassing various levels of data integration and analysis:

  • Connected View (Level 1): The foundational step involves creating a Knowledge Graph (KG) that links disparate enterprise data sources, enabling traceability from customer complaints to specific deviations in materials or processes.
  • Deep View (Level 2): This level delves deeper into the data, uncovering hidden insights and implicit relationships through pattern matching and sequence analysis.
  • Global View (Level 3): The focus expands to a global perspective, identifying overarching patterns and predictive insights across the entire network structure.
  • ML View (Level 4): Leveraging machine learning, this level enhances predictive capabilities by identifying key features and relationships that may not be immediately apparent (a link-prediction sketch follows this list).
  • AI View (Level 5): The pinnacle of the roadmap integrates AI for unbiased, explainable insights, using natural language processing to facilitate self-discovery and proactive decision-making.
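To illustrate the ML View (Level 4) referenced above, here is a minimal link-prediction sketch using the Jaccard coefficient in networkx; the plant-vendor edges are invented:

```python
import networkx as nx

# Illustrative Level-4 sketch: Jaccard similarity of neighborhoods as a simple
# topological feature for link prediction.
G = nx.Graph([
    ("Plant-A", "Vendor-1"), ("Plant-A", "Vendor-2"),
    ("Plant-B", "Vendor-1"), ("Plant-B", "Vendor-3"),
])

# Vendors sharing many plants score high; such pairs are candidates for
# predicting a not-yet-recorded relationship.
pairs = [("Vendor-1", "Vendor-2"), ("Vendor-2", "Vendor-3")]
for u, v, score in nx.jaccard_coefficient(G, pairs):
    print(f"{u} -- {v}: {score:.2f}")  # 0.50 and 0.00 respectively
```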

[Figure: Graph Implementation Roadmap]

Leveraging LLMs and KGs

A significant advancement in Context Graphs is the integration of Large Language Models (LLMs) with Knowledge Graphs (KGs), addressing challenges such as knowledge cutoffs, data privacy, and the need for domain-specific insights. This synergy enhances the accuracy of insights generated, enabling more intelligent search capabilities, self-service analytics, and the construction of KGs from unstructured data.
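The integration pattern can be sketched in a few lines: the LLM translates a natural-language question into Cypher, and the Knowledge Graph supplies the current, private, domain-specific facts. In this sketch, call_llm is a hypothetical placeholder for whichever LLM client you use, and the schema in the prompt is illustrative:

```python
from neo4j import GraphDatabase

# The prompt constrains the LLM to a known (here, assumed) graph schema.
PROMPT = """You translate questions into Cypher for this schema:
(:Product)-[:USES]->(:Material)<-[:SUPPLIES]-(:Vendor)
Return only the Cypher statement.
Question: {question}"""

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder -- wire up your LLM provider of choice here.
    raise NotImplementedError

def answer(question: str, driver) -> list:
    cypher = call_llm(PROMPT.format(question=question))
    records, _, _ = driver.execute_query(cypher, database_="neo4j")
    return [r.data() for r in records]

# Usage (sketch):
# answer("Which vendors supply materials used in Product X?",
#        GraphDatabase.driver("neo4j://localhost:7687"))
```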

Context Graph queries are revolutionizing our machine learning and AI systems, enabling them to make informed and nuanced decisions swiftly. With these tools, we can preemptively identify and analyze similar patterns or paths in raw material lots even before they enter the manufacturing process.

This need to understand the connections between disparate data points is reshaping how we store, connect, and interpret data, equipping us with the context needed for more proactive and real-time decision-making. The evolution in how we handle data is paving the way for a future where immediate, context-aware decision-making becomes a practical reality.

Connected Context: Introducing Product Knowledge Graphs for Smarter Business Decisions
https://www.tigeranalytics.com/perspectives/blog/connected-context-introducing-product-knowledge-graphs-for-smarter-business-decisions/

Explore how Product Knowledge Graphs, powered by Neo4j, are reshaping data analytics and decision-making in complex business environments. This article introduces the concept of Connected Context and illustrates how businesses can harness graph technology to gain deeper insights, improve predictive analytics, and drive smarter strategies across various functions.
E, a seasoned product manager at a thriving consumer goods company, was suddenly in the throes of a crisis. The year 2022 began with an alarming spike in customer complaints, a stark contrast to the relatively calm waters of 2021. The complaints were not limited to one product or region; they were widespread, painting a complex picture that E knew she had to decode.

The company’s traditional methods of analysis, rooted in linear data-crunching, were proving to be insufficient. They pointed to various potential causes: a shipment of substandard raw materials, a series of human errors, unexpected deviations in manufacturing processes, mismatches in component ratios, and even inconsistent additives in packaging materials. The list was exhaustive, but the connections were elusive.

The issue was complex: no single factor was the culprit. E needed to trace and compare the key influencers and their patterns, not just within a single time frame but across the tumultuous period between 2021 and 2022. The domino effect of one small issue escalating into a full-blown crisis was becoming a daunting reality.

To trace the key influencers and their patterns across the tumultuous period between 2021 and 2022, E needed a tool that could capture and analyze the intricate relationships within the data. At Tiger Analytics, we recognized the limitations of conventional approaches and introduced the Product Knowledge Graph, powered by Neo4j, along with the Context Graph, a term we coined to describe a specialized graph-based data structure. This specialized sub-graph of the Master Graph emphasized the contextual information and intricate connections specific to the issue at hand, providing a visual and analytical representation that weighted different factors and their interrelations.

[Figure: Why Graph]

The Context Graph illuminated the crucial 20% of factors that were contributing to 80% of the problems—the Pareto Principle in action. By mapping out the entire journey from raw material to customer feedback, the Context Graph enabled E to pinpoint the specific combinations of factors that were causing the majority of the complaints. With this clarity, E implemented targeted solutions to the most impactful issues.
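The Pareto computation behind that insight is simple once the graph has attributed complaints to factor combinations. A toy sketch in pandas, with invented counts; in practice the factor paths would come out of the Context Graph, not a hand-typed table:

```python
import pandas as pd

# Invented complaint counts per contributing factor combination.
counts = pd.Series(
    {"supplier-lot-17 + line-3": 410, "humid-storage + film-B": 290,
     "other": 145, "operator-shift-2": 95, "additive-batch-9": 60},
    name="complaints",
).sort_values(ascending=False)

# Cumulative share of complaints identifies the "vital few" factors.
cum_share = counts.cumsum() / counts.sum()
vital_few = cum_share[cum_share <= 0.8].index
print("factors driving most complaints:", list(vital_few))
```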

What is a Context Graph, and why do we need it?

In today’s complex business landscape, traditional databases often fall short in revealing crucial relationships within data. Context Graphs address this limitation by connecting diverse data points, offering a comprehensive view of your business ecosystem.

“The term Context Graph refers to a graph-based data structure (a sub-graph of the Master Graph) used to represent the contextual information, relationships, or connections between data entities, events, and processes at specific points in time. It might be used in various applications, such as enhancing natural language understanding, recommendation systems, or improving the contextual awareness of artificial intelligence.”

At Tiger Analytics, we combine graph technology with Large Language Models to build Product Knowledge Graphs, unifying various data silos like Customer, Batch, Material, and more. The power of Context Graphs lies in their ability to facilitate efficient search and analysis from any starting point. Users can easily query the graph to uncover hidden insights, enhance predictive analytics, and improve decision-making across various business functions.

By embracing Context Graphs, businesses gain a deeper understanding of their operations and customer interactions, paving the way for more informed strategies and improved outcomes.

[Figure: Connected Context Graph]

This comprehensive approach is set to redefine the landscape of data-driven decision-making, paving the way for enhanced predictive analytics, risk management, and customer experience.

6 Ways Graphs Enhance Data Analytics

[Figure: Why Graph DB]

1. Making Connections Clear: Think of data as a bunch of dots; by itself, each dot doesn’t tell you much. A Context Graph connects these dots to show how they’re related, like drawing lines between the dots to make a clear picture.

2. Understanding the Big Picture: In complex situations, just knowing the facts (like numbers and dates) isn’t enough. You need to understand how these facts affect each other. Context Graphs show these relationships, helping you see the whole story.

3. Finding Hidden Patterns: Sometimes, important insights are hidden in the way different pieces of data are connected. Context Graphs can reveal these patterns. For example, in a business, you might discover that when more people visit your website (one piece of data), sales in a certain region go up (another piece of data). Without seeing the connection, you might miss this insight.

4. Quick Problem-Solving: When something goes wrong, like a drop in product quality, a Context Graph can quickly show where the problem might be coming from. It connects data from different parts of the process (like raw material quality, production dates, and supplier information) to help find the source of the issue (a minimal sketch follows this list).

5. Better Predictions and Decisions: By understanding how different pieces of data are connected, businesses can make smarter predictions and decisions. For example, they can forecast which product combo will be popular in the future or decide where to invest their resources for the best results.

6. Enhancing Artificial Intelligence and Machine Learning: Context Graphs feed AI and machine learning systems with rich, connected data. This helps these systems make more accurate and context-aware decisions, like identifying fraud in financial transactions or personalizing recommendations for customers.
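As promised in point 4, here is a minimal sketch of graph-based problem tracing using networkx; the process edges and node names are invented:

```python
import networkx as nx

# Toy process lineage: supplier -> raw lot -> batch -> failed quality check.
trace = nx.DiGraph([
    ("RawLot-88", "Batch-301"), ("Batch-301", "QC-Fail-301"),
    ("Supplier-Z", "RawLot-88"), ("Line-3", "Batch-301"),
])

# Ancestors of the failure node are every upstream entity that could have
# contributed to it -- the candidate set for root-cause analysis.
print(sorted(nx.ancestors(trace, "QC-Fail-301")))
```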

The power of Context Graphs in solving complex business problems is clear. By illuminating hidden connections and patterns in data, these graph-based structures offer a new approach to decision-making and problem-solving. From E’s product quality crisis to broader applications in predictive analytics and AI, Context Graphs are changing how businesses understand and utilize their data.

In Part 2 of this series, we’ll delve deeper into the practical aspects, exploring a framework approach to implementing these powerful graph structures in your organization.

When Opportunity Calls: Unlocking the Power of Analytics in the BPO Industry
https://www.tigeranalytics.com/perspectives/blog/role-data-analytics-bpo-industry/

The BPO industry has embraced analytics to optimize profitability, efficiency, and customer satisfaction. This blog delves into the specifics of data utilization, unique challenges, and key business areas where analytics can make a difference.
Around 1981, the term outsourcing entered our lexicons. Two decades later, we had the BPO boom in India, China, and the Philippines with every street corner magically sprouting call centers. Now, in 2022, the industry is transitioning into an era of analytics, aiming to harness its sea of data for profitability, efficiency, and improved customer experience.

In this blog, we delve into details of what this data is, the unique challenges it poses, and the key business areas that can benefit from the use of analytics. We also share our experiences in developing these tools and how they have helped our clients in the BPO industry.

The Information Ocean

The interaction between BPO agents and customers generates huge volumes of both structured and unstructured (text, audio) data. On the one hand, you have the call data that measures metrics such as the number of incoming calls, time taken to address issues, service levels, and the ratio of handled vs abandoned calls. On the other hand, you have customer data measuring satisfaction levels and sentiment.

Insights from this data can help deliver significant value for your business whether it’s around more call resolution, reduced call time & volume, agent & customer satisfaction, operational cost reduction, growth opportunities through cross-selling & upselling, or increased customer delight.

The trick is to find the balance between demand (customer calls) and supply (agents). An imbalance can often lead to revenue losses and inefficient costs, and this dynamic needs to be managed through processes and technology.

Challenges of Handling Data

When handling such sheer volumes of data, the challenges can be myriad. Our clients wage a daily battle with managing these vast volumes, harmonizing internal and external data, and driving value from them. For those that have already embarked on their analytics journey, the primary goals are ensuring the relevance of what they have built, driving scalability, and leveraging new-age predictive tools to drive ROI.

Delivering Business Value

Based on our experience, the business value delivered by advanced analytics in the BPO industry is unquestionable. It primarily influences these key areas:

1) Call Management

Planning agent resources based on demand (peak and off-peak) and skill sets, while accounting for how long agents take to resolve issues, can significantly impact business costs. AI can help automate this process to optimize costs. We have built an automated, real-time scheduling and resource-optimization tool that helped one of our BPO clients reduce costs by 15%.
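While our tool is considerably richer, the core of demand-based staffing can be sketched with the classic Erlang C queueing formula; all numbers below are illustrative:

```python
from math import exp, factorial

def erlang_c(agents: int, arrival_rate: float, aht: float) -> float:
    """Probability a call waits (Erlang C). arrival_rate in calls/sec,
    aht = average handle time in seconds."""
    a = arrival_rate * aht  # offered load in erlangs
    if agents <= a:
        return 1.0  # unstable queue: every call waits
    top = (a ** agents / factorial(agents)) * (agents / (agents - a))
    bottom = sum(a ** k / factorial(k) for k in range(agents)) + top
    return top / bottom

def agents_needed(arrival_rate, aht, wait_target, sl_target):
    """Smallest headcount meeting e.g. '80% answered within 20 seconds'."""
    n = 1
    while True:
        pw = erlang_c(n, arrival_rate, aht)
        sl = 1 - pw * exp(-(n - arrival_rate * aht) * wait_target / aht)
        if n > arrival_rate * aht and sl >= sl_target:
            return n
        n += 1

# e.g. 100 calls per 15 minutes, 4-minute handle time, 80/20 service level.
print(agents_needed(100 / 900, 240, 20, 0.80))
```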

2) Customer Experience

Call center analytics give agents access to critical data and insights to work faster and smarter, improve customer relationships and drive growth. Analytics can help understand the past behavior of a customer/similar customers and recommend products or services that will be most relevant, instead of generic offers. It can also predict which customers are likely to need proactive management. Our real-time cross-selling analytics has led to a 20% increase in revenue.

3) Issue Resolution

First-call resolution refers to the percentage of cases that are resolved during the first call between the customer and the call center. Analytics can help automate the categorization of contact center data by building a predictive model. This enables a better customer-servicing model by appropriately capturing the nuances of customer chats with contact centers. The metric is extremely important because it helps reduce the customer churn rate.
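A bare-bones sketch of such a categorization model with scikit-learn; the chat snippets and labels are invented, and a real model would be trained on large volumes of labelled transcripts:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: chat snippets and their issue categories.
chats = ["my bill is wrong this month", "cannot log into my account",
         "internet keeps dropping", "charged twice for one order",
         "reset my password please", "router light is blinking red"]
labels = ["billing", "access", "network", "billing", "access", "network"]

# TF-IDF features feeding a simple linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(chats, labels)
print(clf.predict(["I was double charged on my card"]))  # expected: ['billing']
```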

4) Agent Performance

Analytics on call-center agents can help segment those with low resolution rates, or those spending too much time on minor issues, compared with top-performing agents. This helps the call center resolve gaps or systemic issues, identify agents with leadership potential, and create a development plan to reduce attrition and increase productivity.

5) Call Routing

Analytics-based call routing is based on the premise that records of a customer’s call history or demographic profile can provide insight into which call center agent(s) has the right personality, conversational style, or combination of other soft skills to best meet their needs.

6) Speech Analytics

Detecting trends in customer interactions and analyzing audio patterns to read emotions and stress in a speaker’s voice can help reduce customer churn, boost contact center productivity, improve agent performance and reduce costs by 25%. Our tools have helped clients predict member dissatisfaction, achieving a 10% reduction in first complaints and a 20% reduction in repeat complaints.

7) Chatbots and Automation

Thanks to the wonders of automation, we can now enhance the user experience by providing customers with personalized attention 24/7/365. Reduced average call duration and wage costs improve profitability. Self-service channels such as the help center, FAQ page, and customer portals empower customers to resolve simple issues on their own while deflecting cases away from agents. Our AI-enabled chatbots have strengthened engagement and enabled quicker resolution of 80% of user queries.

Lessons from The Philippines

Recently, in collaboration with Microsoft, we conducted a six-week Data & Analytics Assessment for a technology-enabled outsourcing firm in the Philippines. The client was encumbered by complex ETL processes, resource bottlenecks on legacy servers, and a lack of UI for troubleshooting, leading to delays in resolution and latency issues. They engaged Tiger Analytics to assess their data landscape.

We recommended an Enterprise Data Warehouse modernization approach to deliver improved scalability & elasticity, strengthened data governance & security, and improved operational efficiency.

We did an in-depth assessment to understand the client’s ecosystem, key challenges faced, data sources, and their current state architecture. Through interactions with IT and business stakeholders, we built a roadmap for a future state data infrastructure that would enable efficiency, scalability, and modernization. We also built a strategic roadmap of 20+ analytics use cases with potential ROI across HR and contact center functions.

The New Era

Today, the Philippines is recognized as the BPO capital of the world. Competition will toughen, both from new players and existing ones. A digital transformation is underway in the BPO industry. Success in this competitive space lies with companies that can turn the huge volumes of data they hold into meaningful and actionable change.

Suez Canal Crisis & Building Resilient Supply Chains
https://www.tigeranalytics.com/perspectives/blog/suez-canal-crisis-building-resilient-supply-chains/

The Suez Canal crisis was a catalyst for change in supply chain management. In this piece, we explore how leading companies are using AI, analytics, and digital twins to build more resilient, agile supply chains. Discover how proactive planning and smart technology can turn disruption into a competitive advantage.
The Suez Canal crisis has brought the discourse on supply chain resilience back into focus. The incident comes at a time when global supply chains are inching back to normalcy in the hope that Covid-19 vaccinations will help the economy bounce back. Considering that the canal carries about 10% to 12% of global trade, logistics will take time to recover even though the crisis is now resolved.

The Cascading Impact

Although the Suez Canal blockage may not be as significant as the Covid-19 disruptions, it will take months to remove the pressure points in the global supply chain. In a world of interconnected global supply chains, the choking of a significant artery such as the Suez Canal has a cascading effect: delayed deliveries to consumers, rising prices due to shortages, lost efficiency at factories due to short supply, and increased pressure on intermodal and road transportation when traffic ramps up.

In the US market, the east coast ports will bear the brunt of the fallout. Data shows that nearly a third of imports into the east coast are routed via the Suez Canal. In the near term, there will be a lull period followed by an inbound rush when the backlog of delayed shipments arrives, stressing the logistics network.


This is not the first accident of its kind; it’s likely not the last either. Given this reality, companies would do well to build resilience in the supply chains proactively.

Strategies for Supply Chain Resilience

Companies have used several different strategies to mitigate the risk to supply chains.

– Multi-Geo Manufacturing – Developments such as the straining of the US/China relationship and the disruption caused by Covid-19 have led to many firms looking at alternate manufacturing locations outside China, such as India.

– Multi-Sourcing – Dual or more diversified supplier bases for critical raw materials or components.

– In-Sourcing / Near Shoring – Companies have started to build regional sourcing within the Americas or even in-house to mitigate the risk. One of our clients is exploring this option for critical products with much closer/tighter integration across the value chain.

– Inventory and Capacity Buffers – Moving away from the lean supply chain’s traditional mindset, customers are increasing the inventory and capacity buffers. One of our manufacturing clients had doubled down on stocks early last year to mitigate any supply risk due to Covid-19.

– Flexible Distribution – Companies are adopting multiple modes of transportation such as air and rail so that they have a backup in case of disruption of one of the modes of transportation. They are also moving warehouses closer to the customer.

How Analytics Can Enable the Resilience Journey

The strategies elaborated in the previous section imply that there will be an additional cost of building the necessary redundancy rather than going with a lean principles approach. Most companies have accepted this additional expenditure since the risk of not doing it far outweighs the cost of redundancy. When supply chains become complex with multiple paths for product flow, analytics can help keep the operations nimble and make the right decision to balance cost and service levels. Analytics can enhance two types of capabilities:

– Operational Capabilities are primarily focused on risk containment. When the risk event is expected to occur or has occurred, machine learning models can generate real-time alerts and insights for the supply chain operations teams to take the next best actions. For example:

–  Freight Pricing Impact: One of our logistics clients uses a pricing model for truckload equipment. We designed this model to look at the demand/supply imbalance at the origin and destination and predict prices accordingly. US East Coast ports are expected to see a surge in inbound containers once the Suez Canal blockage eases, and transportation prices will rise when demand outstrips supply. Visibility into pricing helps our client secure capacity upfront at non-peak rates and ensure timely delivery to its customers.

–  On-Time in Full (OTIF) Risk Prediction: One of our manufacturing clients uses an ML tool that predicts ‘OTIF miss risk’ at each order level. We have built in recommendations on which levers can be used to meet the SLA or reduce the penalty, e.g., pick/pack/load priority in the warehouse or air freight.

–  Risk Event Prediction: Risk data related to natural disasters, political strikes, labor disputes, financial events, environmental events, etc., can be tied to the enterprise supply chain. One of our clients uses risk models to simulate the impact of various risks on their supply chains and to better plan responses.

–  Strategic Capabilities are focused on avoiding risk impact and enabling faster recovery. A component of this capability is a Digital Twin Supply Chain, which mirrors the physical network. Some of our clients use digital twins to do both mid-term and long-term risk planning involving some of the below activities:

–  Assessing current network and identifying potential risk areas.

–  Scenario planning and risk & cost analysis to provide inputs into Sales & Operation Planning.

–  Planning and building long-term approaches such as multi-sourcing or multi-geo manufacturing.

–  Revamping the supplier and distribution networks – Integrating supplier/carrier scorecard, cost, etc., into the network data to visualize multiple options’ tradeoffs.

–  Pressure testing design choices at various levels, e.g., the impact on missed orders, delays, and inventory levels if a particular site went down, or how long it would take to initiate the contingency plan and the interim impact (a minimal sketch follows this list).
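Here is a minimal sketch of the pressure testing mentioned in the last point, using networkx. The network and the single-failure scenario are invented, and a real digital twin would add capacities, costs, lead times, and inventory policies on top of this skeleton:

```python
import networkx as nx

# Toy digital twin: directed edges from suppliers through plants to customers.
twin = nx.DiGraph([
    ("Supplier-1", "Plant-A"), ("Supplier-2", "Plant-A"),
    ("Supplier-2", "Plant-B"), ("Plant-A", "DC-East"),
    ("Plant-B", "DC-East"),    ("DC-East", "Customer-1"),
])

def stress_test(g, failed_site, sources, sinks):
    """Return the sinks that lose all feasible supply paths if a site fails."""
    damaged = g.copy()
    damaged.remove_node(failed_site)
    return [s for s in sinks
            if not any(nx.has_path(damaged, src, s) for src in sources)]

print(stress_test(twin, "Plant-A", ["Supplier-1", "Supplier-2"], ["Customer-1"]))
# -> []: Plant-B still covers Customer-1; losing DC-East instead would not.
```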

 Conclusion

Recent developments have just acted as catalysts for an already growing affinity for AI and analytics. Gartner states that by 2023 at least 50% of large global companies will be using AI, advanced analytics, and IoT in supply chain operations gearing towards a digital twin model.

The companies which are agile and can respond to rapidly changing conditions are the ones that will survive increasingly frequent disruptions and add real value to customers and communities. AI & Analytics will be key enablers in building resilient supply chains that are proactive, agile, and maintain a balance between various tradeoffs.

A Brief Guide: Managing Strategic Risks with Data Science
https://www.tigeranalytics.com/perspectives/blog/a-brief-guide-managing-strategic-risks-with-data-science/

Utilize data science to manage strategic risks in business operations through predictive analytics, risk modeling, and data-driven insights. Find out how you can support informed decision-making and enhance risk management practices with cutting-edge data science tools.
“Risk is the price you pay for opportunity,” goes a famous saying. Businesses are built to create value from opportunities, and it’s nearly impossible to run a business without risks. Organizations today face a variety of risks, and these risks come in different shapes and sizes. Broadly, these can be categorized into operational risk and strategic risk, based on how they emanate and the impact they can have. For organizations to continue creating value without being derailed along the way, they need to manage these risks effectively.

Operational Risk

It relates to the disruption of day-to-day business processes – situations pertaining to resources, systems, employees, and compliance. Examples include equipment breakdown, supply chain disruption, fraud, employee attrition, data loss, non-compliance, cyber threats and IT infra/network issues. Progressive organizations are already using data science heavily to mitigate operational risk. For example, today machine learning is helping build early warning systems that identify issues ahead of time and take corrective actions.

Strategic Risk

It arises when an organization is not able to react to market conditions and needs in time. These include changes in customer preferences, regulations, technological advances, competition, market shifts, etc. Such risks usually have a deeper impact and can affect an organization significantly. Traditionally, executives in board rooms proposed risk mitigation strategies based on their experience and gut. Today, however, data science is becoming a valuable tool to manage certain types of strategic risks.

Here I share examples of three of our clients, all leaders in their industries, who used data science to manage certain types of strategic risk:

A Global Technology Enterprise

They wanted to be prepared for the future in the face of rapid technological advances. In today’s world, several thousands of software technologies are being experimented with – most fade away, but some go on to become transformative technologies. Investing in the right technologies at the right time would mean the difference between being a market leader vs. a laggard. Rather than relying on business analysts, the company now uses an intelligent monitoring system that continuously scans through the new technology landscape, predicting which technologies stand the best chance of becoming game-changers two-three years later. This system is powered by machine learning which crunches through a dizzying variety of technology-related data, from open-source code to tech commentary, extracts relevant signals, and discovers patterns of how successful vs. not so successful technologies evolve. Today, this solution helps guide the company’s investments into nascent technologies.

A Global F&B Giant

The client wanted to ensure their market share was not disrupted by upstarts who were coming up with innovative products with unique ingredients, positioning, and targeting. Moreover, their competitors had also been launching and acquiring various products with varying degrees of success. While they too had launched new products to not be left behind, they wanted to manage this exercise systematically. Analyzing a whole host of trends and network effects in the marketplace helped them quantify evolving customer preferences and shifting markets. It led to data-driven recommendations on the types of products to launch. In one initiative, the organization was able to not only address strategic risks but also uncover their next billion-dollar opportunity.

A Transportation Company

They wanted to understand the risks that economic fluctuations pose to their business. Networks of econometric models revealed how the economies of different countries affected consumer demand, which in turn affected imports and exports, which subsequently had a bearing on demand for various transportation services in different regions. Simulating various global economic scenarios, from positive growth to a depression, helped identify potential spikes and troughs in demand across the client’s services, the stress on their supply chain, and the financial implications. It helped them not only plan for the right redundancies across their supply chain but also plan for financial contingencies.

As you can see in the above examples, strategic risks are also strategic opportunities. They are not structured problems, and there is a lot of uncertainty and ambiguity around them. To take advantage of these opportunities and minimize the downside, companies need a systematic process to identify and track these risks. Whatever the process, data science can be a powerful means of quantifying and managing strategic risks and opportunities.

First published on https://www.forbes.com/sites/pradeepgulipalli/2019/09/17/managing-strategic-risk-using-data-science/

Demystifying Election Predictions: The Art and Science of Forecasting
https://www.tigeranalytics.com/perspectives/blog/demystifying-election-predictions-the-art-and-science-of-forecasting/

Examine how data science techniques can predict election outcomes and explore the basics of Psephology, the science of election forecasting. Find out how the Tiger Analytics team has adapted and enhanced these techniques for the Indian context, including the innovative use of a ‘Poll of Polls’ approach.
As data scientists, we’ve all done a variety of forecasting – demand, sales, revenue, customer service calls, stock prices, bank deposits, traffic, etc. Along the same lines, can we predict the behavior of voters and, consequently, the outcome of an election? The answer is ‘Yes’ – political scientists have been doing this for a long time with varying degrees of success. In fact, this statistical analysis of elections and polls has its own niche – Psephology.

So, how do you go about predicting the outcome of an election? This article will walk you through the basic methodology, how it has evolved into today’s state-of-the-art approach, and how the Tiger Analytics team adapted and enhanced it for the Indian context.

Opinion Polls

Opinion Polls are the most widely used way of predicting elections. They are based on the fact that the preferences of a population can be estimated by studying a group of individuals. The process of identifying this group of individuals is called sampling. Statistically speaking, there are different types of sampling techniques – random, stratified, voluntary, panel, quota, cluster, etc. Different techniques get used in different contexts under various constraints.

Pollsters (those conducting the opinion poll) identify a set of respondents to be representative of the population and then interview them to understand their voting choices. In theory, if one identifies a statistically representative sample, and understands their preference, then one should be able to predict the population preference confidently – which should be the result of the election.

However, life is not that simple. A pollster’s design of their sampling methodology could result in very different samples. Similarly, the size of the sample determines the confidence in the population estimates (remember the relationship between the sample mean and the population mean?). In addition, how one asks the individual for their preference also has a bearing on whether or not their stated preference is the same as their actual preference. To complicate things further, some pollsters could have political inclinations, biasing their sample/results in favour of a particular party — attempting to subconsciously influence some voters (people like being part of the winning team).

So, while opinion polls today hold signals to what the election outcome is, they cannot be taken at face value.

Poll of Polls

A simple approach to get a less biased estimate of the population preference is a mean of the sample means, or in Psephology terminology, a ‘Poll of Polls’. In principle, bringing together different sample estimates helps remove biases and reduces the error margins. However, there are smarter ways to aggregate than a simple mean.

Some polls are better than others in forecasting the outcomes. We can assign a weight to each opinion poll (or pollster) depending on some key factors:

  • Past Success: Past error rates of pollsters are an important indicator of their credibility in predicting the current election. Nuances such as success in different scenarios (e.g., national vs. state, large states vs. small states) can be baked in.
  • Relative Performance: How a pollster performed compared to the field is also an indication of performance. This aspect is important in cases where all pollsters get the predictions wrong.
  • Recency: All else being equal, the closer a poll is to the election date, the better it captures the mood of the electorate.
  • Sampling methodology: Does the sampling technique used yield a representative sample? E.g. a random sample in a shopping mall would be a biased sample as opposed to dialing random phone numbers.
  • Sample size: The larger the sample size of the opinion polls, the more confidence we have in the result.

The above features (and more nuanced effects) are used to arrive at weights for each pollster. A weighted average of the seats projected by the pollsters gives the final ‘Poll of Polls’ projection, corrected for observed biases.
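A toy sketch of such a weighting scheme, combining sample size, recency, and past error into a single weight per poll; the weight function and all numbers are invented for illustration and are much simpler than a production scheme:

```python
import numpy as np

# (projected seats, sample size, days before election, pollster's past MAE)
polls = [
    (295, 40000, 5, 12.0),
    (270, 15000, 20, 8.0),
    (310, 60000, 2, 20.0),
]

def weight(sample, days_out, past_mae):
    # Bigger samples and fresher polls weigh more; larger past errors weigh less.
    return np.sqrt(sample) * np.exp(-days_out / 30) / (1 + past_mae)

w = np.array([weight(s, d, e) for _, s, d, e in polls])
seats = np.array([p for p, *_ in polls])
print(round(float(np.average(seats, weights=w))))  # weighted Poll of Polls
```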

If you’re interested in what the next level of detail could look like, we’d encourage you to look at Nate Silver’s methodology.

Economic & Political Drivers

The above ‘Poll of Polls’ model can forecast the election results to a certain degree, but there is more that can be understood by analysing the underlying economic and political factors. You can think of a model which uses various signals to predict the results.

There are thousands of factors which could impact a voter’s behaviour, and just identifying them and finding the relevant data in itself can be an exhausting exercise. It will help if we group the factors into various categories and analyze the impact of each of these factors on outcomes.

Hypotheses need to be developed through secondary research and a bit of intuition while analyzing the impact of each of these factors. For example, while GDP is a global standard for measuring the growth of a country, its impact is not felt directly by the common man. On the other hand, even a single percentage point increase in inflation is experienced acutely by the common man.

Challenges in the Indian Context

As you’d expect, a big challenge in the Indian context (compared to the developed world) is availability of data. Data is strewn across various websites in CSV files, PDFs, plain text, etc., with lots of gaps, and collating all the data is a significant exercise in itself. Availability of data at the same level of aggregation, computed consistently, is another challenge. Missing data poses a dilemma – “go with a smaller sample or go with fewer features”, either of which would affect model performance. Tree-based ensemble models, which can handle a certain degree of missing data, can help.
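For instance, scikit-learn’s histogram-based gradient boosting accepts missing values natively, so sparse features need not force the smaller-sample-or-fewer-features choice. A minimal sketch with invented data:

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

# Toy feature matrix with gaps; NaNs are routed down trees natively,
# without dropping rows or imputing.
X = np.array([[2.1, np.nan, 0.3],
              [1.4, 5.0,    np.nan],
              [np.nan, 4.2, 0.8],
              [3.3, 6.1,    0.1]])
y = np.array([0, 1, 1, 0])

model = HistGradientBoostingClassifier(max_iter=50, min_samples_leaf=1).fit(X, y)
print(model.predict([[2.0, np.nan, 0.5]]))
```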

Similarly, details around what and how of different opinion polls are not consistently available. Metadata such as sampling methodology, sample sizes, channels, questioning methodology, etc., are missing in several instances. It becomes a balancing act of excluding some of these from the weighting scheme vs. making intelligent assumptions where possible.

Beyond data, the multi-party system in India (as opposed to the two-party system in the US) poses significant modeling complexity. In the 2019 general elections, the Election Commission reported more than 2,300 political parties (of which about 150 were significant). We need a multinomial classification model, which has significantly higher data requirements; and data, as we noted earlier, is the primary challenge. Rather than building a model for all classes, a smarter way is to identify the right number of classes so as to strike the right balance between model accuracy and efficiency. Of course, this will change by constituency.

In India’s multi-party, first-past-the-post electoral system, the winner needs to secure the highest vote share, not a majority of the votes. As such, the vote share needed to win an election depends on how many candidates are in the fray. For example, if there are only two candidates, the winner must garner more than 50% of the votes, but the required share drops as the number of candidates increases. This is an important factor that needs to be accounted for, and it can be effectively captured by a metric called the Index of Opposition Unity (IOU), coined for the Indian context by Prannoy Roy and Ashok Lahiri.
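The IOU itself is a one-line computation: the largest opposition party’s vote share as a percentage of the combined opposition vote share. A small sketch with invented shares:

```python
def index_of_opposition_unity(opposition_vote_shares):
    """IOU = largest opposition party's vote share as a percentage of the
    combined opposition vote share (Roy & Lahiri). 100 = fully united."""
    return 100 * max(opposition_vote_shares) / sum(opposition_vote_shares)

# A fragmented opposition (shares in %) scores low, easing the leader's path.
print(round(index_of_opposition_unity([18, 12, 9]), 1))  # -> 46.2
```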

Conclusion

A lesson we learnt during the process was that forecasting elections requires significant domain knowledge, just like any valuable business problem. Data is a challenge, but smart heuristics and assumptions can help significantly. And you could be surprised by the relative impact of economic factors vis-à-vis populist actions.

Hopefully, some of this information has given you new ideas on how to analytically think about forecasting an election. Do put on this lens in your discussions with friends and colleagues as you analyze opinion polls and make your predictions for this election.
