Why India-Targeted AI Matters: Exploring Opportunities and Challenges

To understand the likely impact of India-centric AI, one needs to appreciate the country’s linguistic, cultural, and political diversity. Historically, India’s DNA has been so heterogeneous that extracting clear perspectives and actionable insights – to address past issues, tackle current challenges, and move toward our vision as a country – would be impossible without harnessing the power of AI.

The scope for AI-focused innovation is tremendous, given India’s status as one of the fastest-growing economies with the second-largest population globally. India’s digitization journey and the introduction of the Aadhaar system in 2010 – the largest biometric identity project in the world – have opened up new avenues for AI and data analytics. The interlinking of Aadhaar with banking systems, the Public Distribution System (PDS), and several other transaction systems allows greater visibility, insights, and metrics that can be used to bring about improvements. Besides using these to raise the quality of life of citizens while alleviating disparities, AI can support more proactive planning and formulation of policies and roadmaps. Industry experts anticipate a trigger for an economic growth spurt, opining that “AI can help create almost 20 million jobs in India by 2025 and add up to $957 billion to the Indian economy by 2035.”

The current state of AI in India

The Indian government, having recently announced the “AI for All” strategy, is more driven than ever to nurture core AI skills to future-proof the workforce. This self-learning program looks to raise awareness about AI for every Indian citizen, be it a school student or a senior citizen. It aims to meet the demands of a rapidly emerging job market and present opportunities to reimagine how industries like farming, healthcare, banking, and education can use technology. A few years prior, in 2018, the government had also increased its funding for research, training, and skilling in emerging technologies by 100% compared to 2017.

The booming interest is reflected in the mushrooming of boutique start-ups across the country as well. With a combined value of $555 million, the start-up ecosystem is worth more than double the previous year’s figure of $215 million. Interestingly, analytics-driven products and services contribute a little over 64% of this market – clocking over $355 million. In parallel, larger enterprises are taking quantum leaps to deliver AI solutions too. Understandably, a large number of them use AI solutions to improve efficiency, scalability, and security across their existing products and services.

Current challenges of making India-centric AI

There is no doubt that AI is a catalyst for societal progress through digital inclusion. And in a country as diverse as India, this can set the country on an accelerated journey toward socio-economic progress. However, the social, linguistic, and political diversity that is India also means that more complex data models are needed before they can be gainfully deployed within this landscape. For example, NLP models would have to adapt to text/language changes within a span of just a few miles! And this is just the tip of the iceberg as far as the challenges are concerned.

Let’s look at a few of them:

  • The deployment and usage of AI have been (and continue to be) severely fragmented, without a transparent roadmap or clear KPIs to measure success. One reason is the lack of a governing body or a panel of experts to regulate, oversee, and track the implementation of socio-economic AI projects at a national level. But there’s no avoiding this challenge, considering that the implications of AI policy-making on Indian societies may be irreversible.
  • The demand-supply divide in India for AI skills is huge. Government initiatives such as Startup India, as well as the boom in AI-focused startups, have only widened this divide. The pace of training a workforce to cater to the needs of the industry is accelerating but unable to keep up with the growth trajectory the industry finds itself in. Large, traditionally run institutions are also embracing AI-driven practices, having witnessed the competitive advantage these bring to their businesses. This has added to the scarcity of good-quality talent to serve today’s demand.
  • The lack of data maturity is a serious roadblock on the path to establishing India-centric AI initiatives – especially with quite a few region-focused datasets currently unavailable. There is also a parity issue, with quite a few industry giants having access to large amounts of data compared to the government, let alone start-ups. And there is the added challenge of data quality and of establishing a single source of truth for AI model development.
  • Even the fiercest AI advocates would admit that its security challenges are nowhere close to being resolved. Security and compliance governance protocols need to be region-specific so that unique requirements are met, yet generalizable enough to be rationalized at the national level.
  • There is also a lot of ongoing debate at a global level on defining the boundaries that ethical AI practices will need to lean on. Given India’s diversity, this challenge is magnified many times over.

Niche areas where AI is making an impact

Farming

The role of AI in modern agricultural practices has been transformational – significant, given that more than half the population of India depends on farming to earn a living. In 2019-2020 alone, over $1 billion was raised to fuel agriculture-food tech start-ups in India. AI has helped farmers generate steadier incomes by managing healthier crops, reducing the damage caused by pests, tracking soil and crop conditions, improving the supply chain, eliminating unsafe or repetitive manual labor, and more.

Healthcare

Indian healthcare systems come with their own set of challenges – from accessibility and availability to quality and poor awareness levels. But each one represents a window of opportunity for AI to be a harbinger of change. For instance, AI-enabled platforms can extend healthcare services to low-income or rural areas, train doctors and nurses, and address communication gaps between patients and clinicians. Government bodies and initiatives like NITI Aayog and the National Digital Health Blueprint have also highlighted the need for digital transformation in the healthcare system.

BFSI

The pandemic has accelerated the impact of AI on the BFSI industry in India, with several key processes undergoing digital transformation. The mandatory push for contactless, remote banking experiences has infused a new culture of innovation into mission-critical back-end and front-end operations. A recent PwC-FICCI survey showed that the banking industry has the country’s highest AI maturity index – leading to the deployment of the top AI use cases. The survey also predicted that Indian banks would see “potential cost savings up to $447 billion by 2023.”

E-commerce

The Indian e-commerce industry has already witnessed big numbers thanks to AI-based strategies, particularly in marketing. For retail brands, the Indian market is among the toughest worldwide in which to capture share – with customer behavior driven by a diverse set of values and expectations. By using AI and ML technologies – backed by data science – brands can tap into multiple demographics without losing the context of their messaging.

Manufacturing

Traditionally, the manufacturing industry has run on expensive, time-consuming, manually driven processes. Slowly, more companies are realizing the impact of AI-powered automation on manufacturing use cases like assembly-line production, inventory management, testing and quality assurance, etc. While still at a nascent stage, AR and VR technologies are also seeing adoption in this sector in use cases like prototyping and troubleshooting.

3 crucial data milestones to achieve in India’s AI journey

1) Unbiased data distribution

Forming India-centric datasets starts with a unified framework across the country so that no region is left uncovered. This framework needs to integrate with other systems and data repositories in a secure and seamless manner. Private companies can also share relevant datasets with government institutions to facilitate strategy and policy-making.

2) Localized data ownership

In today’s high-risk data landscape, transferring ownership of India-centric information to companies in other countries can lead to compliance and regulatory problems. Especially when dealing with industries like healthcare or public administration, it is highly advisable to maintain data control within the country’s borders.

3) Data ethics and privacy

Data-centric solutions that work towards improving human lives require a thorough understanding of personal and non-personal data, matters of privacy, and infringement, among others. Managing this information responsibly takes the challenge beyond the realm of deploying a mathematical solution. Building an AI mindset that raises difficult questions about ethics, policy, and law, and ensures sustainable solutions with minimized risks and negative impact, is key. Plus, data privacy should continue to be a hot-button topic, with an uncompromising stance on safeguarding the personal information of Indian citizens.

Final thoughts

India presents a paradox: one side of the country still holds on to its age-old traditions and practices, while the other embraces technological change, be it UPI transfers, QR codes, or even the Aarogya Setu app. But the sheer size and diversity of languages, cultures, and politics dictate that AI will find no shortage of areas where it can make a profound impact – nor any shortage of challenges along the way.

As mentioned earlier, thriving startup growth adds a lot of fuel to AI’s momentum. From just 10 unicorns in India in 2018, we have grown to 38, and this number is expected to increase to 62 by 2025. In 2020, AI-based Indian startups received over $835 million in funding and are propelling growth few countries can compete with. All of this is ringing in the dawn of a new era for India-centric AI – an India which, despite its diversity and complex landscape, leads the way in the effective adoption of AI.

This article was first published in Analytics India Magazine.

Data Science Strategies for Effective Process System Maintenance

Data Science applications are gaining significant traction in the preventive and predictive maintenance of process systems across industries. A clear mindset shift has made it possible to steer maintenance from a ‘reactive’, run-to-failure approach to one that is proactive and preventive in nature.

Planned or scheduled maintenance uses data and experiential knowledge to determine the periodicity of servicing required to maintain the plant components’ good health. These are typically driven by plant maintenance teams or OEMs through maintenance rosters and AMCs. Unplanned maintenance, on the other hand, occurs at random and impacts downtime/production, safety, inventory, and customer sentiment, besides adding to the cost of maintenance (including labor and material).

Interestingly, statistics reveal that almost 50% of scheduled maintenance projects are unnecessary and almost a third of them are improperly carried out. Poor maintenance strategies are known to cost organizations as much as 20% of their production capacity – shaving off the benefits that a move from a reactive to a preventive maintenance approach would provide. Despite years of expertise in managing maintenance activities, unplanned downtime impacts almost 82% of businesses at least once every three years. Given the significant impact on production capacity, aggregated annual downtime costs for the manufacturing sector are upwards of $50 billion (WSJ), with average hourly costs of unplanned maintenance in the range of $250K.

It is against this backdrop that data-driven solutions need to be developed and deployed. Can Data Science solutions bring about significant improvements in the maintenance domain and prevent any or all of the above costs? Are the solutions scalable? Do they provide an understanding of what went wrong? Can they provide insights into alternative and improved ways to manage planned maintenance activities? Does Data Science help reduce all types of unplanned events or just a select few? These are questions that manufacturers need answered, and it is for experts from both the maintenance and data science domains to address them.

Industry understanding of managing planned maintenance is fairly mature. This article therefore focuses on unplanned maintenance, which demands a differentiated approach to build insight and understanding around the process and its subsystems.

Data Science solutions are accelerating the industry’s move towards ‘on-demand’ maintenance, wherein interventions are made only if and when required. Rather than follow a fixed maintenance schedule, data science tools can now help plants increase run lengths between maintenance cycles in addition to improving plant safety and reliability. Besides the direct benefits of reduced unplanned downtime and lower maintenance costs, operating equipment at higher levels of efficiency improves the overall economics of operation.

The success of this approach was demonstrated in refinery crude distillation unit (CDU) preheat trains that use soft-sensing triggers to decide when to process ‘clean crude’ (to mitigate the fouling impact) or schedule maintenance of fouled exchangers. Other successes were in the deployment of plant-wide maintenance of control valves, multiple-effect evaporators in plugging service, compressors in petrochemical service, and a geo-wide network of HVAC systems.

Instead of using a fixed roster for the maintenance of PID control valves, plants can now detect and diagnose control valves that are malfunctioning. Additionally, in combination with domain and operations information, this can be used to suggest prescriptive actions – such as auto-tuning of the valves – that improve maintenance and operations metrics.

Reducing unplanned, unavoidable events

It is important to bear in mind that not all unplanned events are avoidable. Events may be unavoidable either because they are not detectable enough or because they are not actionable. The latter occurs either because the available response time is too short or because the knowledge to revert a system to its normal state does not exist. A large number of unplanned events, however, are avoidable, and the use of data science tools improves their detection and prevention with greater accuracy.

The focus of the experts working in this domain is to reduce unplanned events and to transition events from unavoidable to avoidable. Using advanced tools for detection and diagnosis, and enabling timely action, companies have managed to reduce their downtime costs significantly. The diversity of solutions available in the maintenance area covers both plant and process subsystems.

Some of the data science techniques deployed in the maintenance domain are briefly described below:

Condition Monitoring
This has been used to monitor and analyze process systems over time and predict the occurrence of an anomaly. These events or anomalies could have short or long propagation times, such as those seen in the fouling of exchangers or cavitation in pumps. The spectrum of solutions in this area includes real-time/offline modes of analysis, edge/IoT devices, open/closed-loop prescriptions, and more. In some cases, monitoring also involves the use of soft sensors to detect fouling, surface roughness, or hardness – parameters that cannot be measured directly using a sensor and therefore need surrogate measuring techniques.

Perhaps one of the most distinctive challenges of working in the manufacturing domain lies in data reconciliation. Sensor data tend to be spurious and prone to operational fluctuations, drift, biases, and other errors. Using raw sensor information is unlikely to satisfy the material and energy balances for process units. Data reconciliation uses a first-principles understanding of the process systems and assigns a ‘true value’ to each sensor. These revised sensor values allow a more rigorous approach to condition monitoring, which would otherwise expose process systems to greater risk from raw sensor information. Sensor validation, a technique to analyze individual sensors in tandem with data reconciliation, is critical to setting a strong foundation for any analytics models to be deployed. These elaborate areas of work ensure a greater degree of success when deploying any solution that involves the use of sensor data.
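
As an illustration of the weighted-least-squares formulation commonly used for reconciliation, the sketch below forces raw flow readings around a simple splitter to satisfy a linear mass balance. The balance matrix, readings, and sensor variances are illustrative assumptions, not values from any deployment:

```python
import numpy as np

def reconcile(x_meas, sigma, A):
    """Weighted least-squares data reconciliation.

    Returns the values closest to the raw measurements (weighted by
    sensor variance) that exactly satisfy the linear balance A @ x = 0.
    """
    W = np.diag(sigma ** 2)                          # measurement covariance
    # Closed-form solution of the equality-constrained least-squares problem
    correction = W @ A.T @ np.linalg.solve(A @ W @ A.T, A @ x_meas)
    return x_meas - correction

# Illustrative splitter: feed F1 divides into F2 and F3, so F1 - F2 - F3 = 0
A = np.array([[1.0, -1.0, -1.0]])
x_meas = np.array([100.4, 64.5, 36.9])               # raw flow readings, t/h
sigma = np.array([1.0, 0.8, 0.6])                    # sensor std deviations

x_hat = reconcile(x_meas, sigma, A)
print(x_hat, "imbalance:", A @ x_hat)                # reconciled flows balance
```

Note that the correction is distributed in proportion to each sensor’s variance: the least trusted sensor absorbs most of the imbalance, which is exactly why sensor validation should run before reconciliation.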

Fault Detection
This is a mature area of work and uses solutions ranging from those driven entirely by domain knowledge, such as pump curves and the detection of anomalies thereof, to those that rely only on historical sensor/maintenance/operations data for analysis. An anomaly or fault is defined as a deviation from ‘acceptable’ operation, but the context and definitions need to be clearly understood when working with different clients. Faults may be related to equipment, quality, plant systems, or operability. A good business context and understanding of client requirements are necessary for the design and deployment of the right techniques. From basic tools such as sensor thresholds and run charts to more advanced techniques such as classification, pattern analysis, and regression, a wide range of solutions can be successfully deployed.
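
At the basic end of that spectrum, a fixed threshold check can be paired with a simple run-chart rule that flags sustained one-sided drift even when no limit is breached. A minimal sketch, with illustrative limits and run length:

```python
import numpy as np

def detect_faults(readings, lo, hi, run_length=7):
    """Flag hard threshold breaches plus sustained one-sided runs.

    The run-chart rule: `run_length` consecutive points on the same
    side of the series mean suggests drift even within the limits.
    """
    x = np.asarray(readings, dtype=float)
    breaches = (x < lo) | (x > hi)

    side = np.sign(x - x.mean())                 # +1 above mean, -1 below
    runs = np.zeros(len(x), dtype=bool)
    count = 1
    for i in range(1, len(x)):
        count = count + 1 if side[i] == side[i - 1] and side[i] != 0 else 1
        if count >= run_length:
            runs[i] = True
    return breaches, runs
```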

Early Warning Systems
The detection of process anomalies in advance helps in the proactive management of abnormal events. Improving actionability or response time allows faults to be addressed before setpoints/interlocks are triggered. The methodology varies across projects and there is no ‘one-size-fits-all’ approach. Problem complexity could range from using single sensor information as lead indicators (such as using sustained pressure loss in a vessel to identify a faulty gasket that might rupture) to far more complex methods of analysis.

A typical challenge in developing early warning systems is achieving 100% detectability of anomalies; an even larger challenge is filtering out false indications of anomalies. Complete detection and robust filtering are both critical factors for successful deployment.
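
One common way to balance those two goals is to smooth the raw anomaly score and demand persistence before alarming, so that one-off spikes are ignored while sustained drift still surfaces early. A minimal sketch, with illustrative smoothing factor and thresholds:

```python
def early_warning(scores, alpha=0.2, threshold=3.0, persistence=5):
    """Smooth a raw anomaly score and alert only on sustained exceedance.

    The EWMA damps one-off spikes; the persistence rule requires the
    smoothed score to stay above threshold for several consecutive
    samples, filtering out false indications of anomalies.
    """
    ewma, hits, alarms = 0.0, 0, []
    for t, s in enumerate(scores):
        ewma = alpha * s + (1 - alpha) * ewma
        hits = hits + 1 if ewma > threshold else 0
        if hits == persistence:
            alarms.append(t)                 # first actionable warning time
    return alarms
```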

Enhanced Insights for Fault Identification
The importance of detection and response time in the prevention of an event cannot be overstated. But what if an incident is not easy to detect, or the propagation of the fault is too rapid to allow any time for action? The first level involves using machine-driven solutions for detection, such as computer vision models, which are rapidly changing the landscape. Using these models, it is now possible to improve prediction accuracies for processes that were either not monitored or monitored manually. The second is to integrate the combined expertise of personnel from various job functions such as technologists, operators, maintenance engineers, and supervisors. At this level of maturity, the solution is able to baseline against the best that current operations aim to achieve. The third, and by far the most complex, is to move more faults into the detectable and actionable realm. One such case was witnessed in a complex process in the metal smelting industry. Advanced data science techniques using a digital twin amplified signal responses and analyzed multiple process parameters to predict the occurrence of an incident ahead of time. By gaining an order-of-magnitude improvement in response time, it was possible to move the process fault from the unavoidable to the avoidable and actionable category.

With the context provided above, it is possible to choose a modeling approach and customize the solutions to suit the problem landscape:

[Figure: Data analytics in process system maintenance]

Different approaches to Data Analytics

Domain-driven solution
First-principles and rule-based approaches are examples of domain-driven solutions. Traditional ways of delivering solutions for manufacturing often involve computationally intensive methods (such as process simulation, modeling, and optimization). In one difficult-to-model plant, deployment was done using rule engines that allow domain knowledge and experience to determine patterns and cause-effect relationships. Alarms were triggered and advisories/recommendations were sent to the concerned stakeholders on what specific actions to undertake each time the model identified an impending event.

Domain-driven approaches also come in handy in the case of a ‘cold start’, where solutions need to be deployed with little or no data available. In some deployments in the mechanical domain, the first-principles approach helped identify >85% of process faults even at the start of operations.
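
A minimal sketch of how such a rule engine might be expressed, with each rule pairing a condition on current readings with a prescriptive advisory; the two rules shown (pump-curve deviation and sustained vessel pressure loss) and all numbers are illustrative:

```python
def expected_head(flow):
    # Illustrative quadratic pump curve, e.g. fitted to OEM datasheet points
    return 80.0 - 0.002 * flow ** 2

RULES = [
    # (name, condition on a dict of current readings, advisory to send)
    ("pump_off_curve",
     lambda r: r["flow"] > 0 and r["head"] < 0.85 * expected_head(r["flow"]),
     "Pump below curve: check for impeller wear or recirculation."),
    ("gasket_leak_risk",
     lambda r: r["vessel_dp_trend"] < -0.5,
     "Sustained pressure loss: inspect gasket before rupture."),
]

def evaluate(readings):
    """Run every rule against the latest readings; return advisories."""
    return [(name, advice) for name, cond, advice in RULES if cond(readings)]

print(evaluate({"flow": 120.0, "head": 40.0, "vessel_dp_trend": -0.7}))
```

Because the rules encode engineering knowledge rather than learned parameters, an engine like this can run from day one and be tightened as operating data accumulates.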

Pure data-driven solutions
A recent trend in the process industry is the move away from domain-driven solutions due to challenges in finding the right skills to deploy them, computational infrastructure requirements, the need for customized maintenance solutions, and the requirement to provide real-time recommendations. Complex systems such as naphtha cracking and alumina smelting, which are hard to model, have harnessed the power of data science not just to diagnose process faults but also to enhance response time and bring more finesse to the solutions.

In some cases, data-driven tools have provided high levels of accuracy in analyzing faults. One such case related to compressor faults, where fault data labeled with domain knowledge was used to classify them as a loose bearing, a defective blade, or a polymer deposit in the turbine subsystems. Each of these faults was identified using the sensor signatures and patterns associated with it. Besides getting to the root cause, this also helped prescribe actions to move the compressor system away from anomalous operation.
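
A sketch of how such labeled fault signatures might feed a multi-class classifier, assuming a table of windowed sensor features tagged with the fault recorded in the maintenance log; the file, feature names, and labels below are placeholders:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Placeholder frame: windowed features per event, labeled from maintenance logs
df = pd.read_csv("compressor_events.csv")        # hypothetical file
features = ["vibration_rms", "bearing_temp", "discharge_pressure", "axial_shift"]
X, y = df[features], df["fault_type"]            # loose_bearing / defective_blade / polymer_deposit

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced")
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```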

These solutions need to ensure that the operating envelope and data availability cover all possible scenarios. Poor success in deployments using this approach is largely due to insufficient data covering plant operations and maintenance. However, the number of players offering purely data-driven solutions is large and growing, and is soon replacing what was traditionally part of a domain engineer’s playbook.

Blended solutions
Blended solutions for the maintenance of process systems combine the understanding of both data science and the domain. One such project involved the real-time monitoring and preventive maintenance of >1200 HVAC units across a large geographic area. Domain rules were used to detect and diagnose faults and also to identify operating scenarios to improve the reliability of the solutions. A good understanding of the domain helps in isolating multiple anomalies, reducing false positives, suggesting the right prescriptions, and, more importantly, in the interpretability of the data-driven solutions.

The differentiation comes from combined intelligence: AI/ML models, domain knowledge, and knowledge of what makes deployments succeed are all integrated into the model framework.

Customizing the toolkit and determining the appropriate modeling approach are critical to delivery. The uniqueness of each plant and problem, and the requirement for a high degree of customization, make the deployment of solutions in a manufacturing environment fairly challenging. This fact is validated by the limited number of solution providers serving this space. However, the complexity and nature of the landscape need to be well understood by both the client and the service provider. It is important to note that not all problems in the maintenance space are ‘big data’ problems requiring analysis in real-time using high-frequency data. Some faults with long propagation times can use values averaged over a period of time, while other systems with short response-time requirements may need real-time data. Where maintenance logs and annotations related to each event (and corrective action) are recorded, one could go with a supervised learning approach, but this is not always possible. In cases where data on faults and anomalies is not available, a one-class approach to classify the operation into normal/abnormal modes has also been used, as sketched below. Solution maturity improves as more data and failure modes are identified over time.
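
The one-class idea can be sketched with scikit-learn’s IsolationForest (OneClassSVM is a common alternative), trained only on windows believed to represent normal operation. The file names and parameters are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Train only on sensor windows known (or assumed) to be normal operation
X_normal = np.load("normal_windows.npy")         # hypothetical (n_windows, n_features)
model = IsolationForest(n_estimators=200, contamination="auto", random_state=0)
model.fit(X_normal)

# Score fresh windows: -1 flags abnormal modes, +1 normal
X_live = np.load("live_windows.npy")             # hypothetical
labels = model.predict(X_live)
print("abnormal windows:", np.where(labels == -1)[0])
```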

A staged solution approach helps in bringing in the right level of complexity to deliver solutions that evolve over time. Needless to say, it takes a lot of experience and prowess to marry the generalized understanding with the customization that each solution demands.

Edge/IoT

A fair amount of investment needs to be made at the beginning of the project to understand the hardware and solution architecture required for successful deployment. While the security of data is a primary consideration, other factors such as computational power, cost, time, response time, open/closed-loop architecture are added considerations in determining the solution framework. Experience and knowledge help understand additional sensing requirements and sensor placement, performance enhancement through edge/cloud-based solutions, data privacy, synchronicity with other process systems, and much more.

By far, the largest challenge is on the data front (sparse, scattered, unclean, disorganized, unstructured, not digitized, and so on), which prevents businesses from seeing quick success. Digitization and the creation of data repositories, which set the foundation for model development, take a lot of time.

There is also a multitude of control systems, specialized infrastructure, and legacy systems within the same manufacturing complex that one may need to work through. End-to-end delivery, with this front-end complexity in data management, creates a significant entry barrier for service providers in the maintenance space.

Maintenance cuts across multiple layers of a process system, and the solutions vary as one moves from a sensor to a control loop, to equipment with multiple control valves, all the way to the flowsheet/enterprise layer. Maintenance across these layers requires a deep understanding of both the hardware and the process aspects – a combination that is often hard to put together. Sensors and control valves are typically maintained by those with an instrumentation background, while equipment maintenance could fall in a mechanical or chemical engineer’s domain. On the other hand, process anomalies that could have a plant-level impact are often in the domain of operations/technology experts or process engineers.

Data Science facilitates the development of insights and generalizations required to build understanding around a complex topic like maintenance. It helps translate learnings across layers within process systems – from sensors all the way to the enterprise – and across industry domains as well. It is a matter of time before analytics-driven solutions that help maintain safe and reliable operations become an integral part of plant operations and maintenance systems. We should aim for the successes witnessed in the medical diagnostics domain, where intelligent machines are capable of detecting and diagnosing anomalies. We hope that similar analytics solutions will go a long way toward keeping plants safe, reducing downtime, and providing the operational efficiencies that a sustainable world demands.

Today, the barriers to success lie in the ability to develop a clear understanding of the problem landscape, plan end-to-end, and deliver customized solutions that take into account business priorities and ROI. Achieving success at a large scale will demand reducing the level of customization required in each deployment – a constraint that only a few subject-matter experts in the area overcome today.

From Awareness to Action: Private Equity’s Quest for Data-Driven Growth

Data has become the lifeblood of many industries as they unlock its immense potential to make smarter decisions. From retail and insurance to manufacturing and healthcare, companies are leveraging the power of big data and analytics to personalize and scale their products and services while unearthing new market opportunities. However, when the volume of data is high and the touchpoints are unsynchronized, it becomes difficult to transform raw information into insightful business intelligence. Through this blog series, we will take an in-depth look at why data analytics continues to be an elusive growth strategy for Private Equity firms and how this can be changed.

State of the Private Equity (PE) industry

For starters, Private Equity (PE) firms have to work twice as hard to make sense of their data before turning it into actionable insights. This is because their client portfolios are often diverse, as is the data – spread across different industries and geographies – which limits the reusability of frameworks and processes. Furthermore, each client may have its own unique reporting format, which leads to information overflow.

Other data analytics-related challenges that PE firms have to overcome include:

  • No reliable sources and poor understanding of non-traditional data
  • Archaic and ineffective data management strategy
  • Inability to make optimal use of various data assets
  • Absence of analytics-focused functions, resources, and tools

These challenges offer a clear indication of why the adoption of data analytics in the PE industry has been low compared to others. According to a recent study conducted by KPMG, only a few PE firms are currently exploring big data and analytics as a viable strategy, with “70% of surveyed firms still in the awareness-raising stage.”

Why PE firms need to incubate a data-first mindset

So, considering these herculean challenges, why is a data analytics strategy the need of the hour for Private Equity firms? After all, according to Gartner, “By 2025, more than 75% of venture capital (VC) and early-stage investor executive reviews will be informed using artificial intelligence (AI) and data analytics.”

First, it’s important to understand that as technology adoption continues to skyrocket, a tremendous amount of relevant data is generated and gathered around the clock. Without leveraging this data to unearth correlations and trends, firms can only rely on half-truths and gut instinct. For instance, such outdated strategies can mislead firms regarding where their portfolio companies can reduce operating costs. Hence, without a data analytics strategy, they can no longer remain competitive in today’s dynamic investment world.

Plus, stakeholders expect more transparency and visibility into valuation processes. So, Private Equity firms are already under pressure to break down innovation barriers and enable seamless access to and utilization of their data assets to build a decision-making culture based on actionable insights. They can also proactively identify good investment opportunities, which can significantly grow revenue while optimizing the bandwidth of their teams by focusing them on the right opportunities.

Some of the other benefits for PE firms are:

  • Enriched company valuation models
  • Enhanced portfolio monitoring
  • Reduced dependency on financial data
  • Pipeline monitoring and timely access for key event triggers
  • Stronger due diligence processes

Final thoughts

The emergence of data analytics as a game-changer for Private Equity firms has caused some to adopt piecemeal solutions – hoping to reap the low-hanging fruit. However, this could prove hugely ineffective because it would further decentralize the availability of data, which has been this industry’s biggest problem in the first place.

In reality, the key is for Private Equity firms to rethink how they collect data and what they can do with it – from the ground up. There’s no doubt that only by building a data-led master strategy can they make a difference in how they make key investment decisions and successfully navigate a hyper-competitive landscape.

We hope this has helped you understand the data challenges Private Equity firms currently face while adopting a data analytics strategy, and why such a strategy is still a competitive differentiator. Stay tuned for the next blog in the series, in which we will shed light on how Private Equity firms can overcome these challenges.

Waste No More: Making a Difference with Tiger Analytics’ Data-Driven Solution for a Greener Tomorrow

Improper commercial waste management has a devastating impact on the environment. The realization may not be sudden, but it is certainly gathering momentum – considering that more companies are now looking to minimize their impact on the environment. Of course, it’s easier said than done. Since the dawn of the 21st century, the sheer volume and pace of commercial growth have been unprecedented. But the fact remains that smart waste management is both a business and a social responsibility.

Importance of commercial waste management

The commercial waste management lifecycle comprises collection, transportation, and disposal. Ensuring that all the waste materials are properly handled throughout the process is a matter of compliance. After all, multiple environmental regulations dictate how waste management protocols should be implemented and monitored. Instituting the right waste management guidelines also helps companies fulfill their ethical and legal responsibility of maintaining proper health and safety standards at the workplace.

For instance, all the waste materials generated in commercial buildings are stored in bins placed at strategic locations. If companies do not utilize them effectively, it will lead to bin overflows causing severe financial, reputational, and legal repercussions.

Impact of data analytics on commercial waste management

Data analytics eliminates compliance issues that stem from overflowing bins by bridging operational gaps. In addition, it provides the precise know-how for creating intelligent waste management workflows. With high-quality video cameras integrated into the chassis of waste collection trucks, image-based analytics can be captured and shared through a cloud-hosted platform for real-time visual detection. From these, data insights can be extracted to evaluate when trash bins are getting full and to schedule collection so as to minimize transportation, fuel, and labor expenses. They can also determine the right collection frequency, streamline collection routes, and optimize vehicle loads.
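
As a toy illustration of the scheduling piece, one might fit a per-bin fill rate from the detected fill levels and extrapolate to a service threshold; the observations and threshold below are invented for the sketch:

```python
import numpy as np

def next_pickup_day(days, fill_levels, threshold=0.8):
    """Fit a linear fill rate and extrapolate to the service threshold.

    `days` are observation times, `fill_levels` the detected fill
    fraction (0-1) from the truck-camera analytics.
    """
    rate, intercept = np.polyfit(days, fill_levels, 1)
    if rate <= 0:
        return None                              # bin not filling; skip
    return (threshold - intercept) / rate        # day the bin hits threshold

# Illustrative observations for one bin over two weeks
print(next_pickup_day([0, 3, 7, 10], [0.10, 0.25, 0.45, 0.60]))   # ~day 14
```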

By monitoring real-time data from the video cameras, the flow of waste in each bin can be managed promptly to avoid compliance-related repercussions. The trucks also receive real-time data on the location of empty bins, which helps them chart optimal routes and be more fuel-conscious.

Ultimately, leveraging sophisticated data analytics helps build a leaner and greener waste management system. In addition, it can improve operational efficiency while taking an uncompromising stance on environmental and health sustainability.

Tiger Analytics’ waste management modeling use case for a large manufacturer

Overflowing bins are a severe impediment in the waste management process as they increase the time required to process the waste. Waste collection trucks have to spend more time than budgeted to ensure that they handle overflowing bins effectively – without any spillage in and around the premises. It is also difficult for them to complete their trips on time. When dealing with commercial bins, the situation is even more complicated: the size and contents of commercial bins vary based on the unique waste disposal requirements of businesses.

Recently, Tiger Analytics worked with a leading waste management company to harness advanced data analytics to improve compliance concerning commercial waste management.

Previously, the client had to record videos of the waste pickup process and send them for manual review. The videos were used to identify commercial establishments that did not follow the prescribed norms on how much waste could be stored in a bin. However, this video review process was inefficient and tedious.

For every pickup, a manual reviewer was expected to watch hours of video clips and images captured by each truck to determine violators. There was thus an uncompromising need for accuracy, since overflowing bins led to compliance violations and potential penalties.

Tiger Analytics developed a solution that leveraged video analytics to determine whether a particular bin in an image was overflowing. Using cutting-edge deep learning algorithms, the solution achieved a high level of accuracy and eliminated all activities related to manual video review, along with the associated costs.
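
While the deployed system itself is proprietary, the deep learning core of such a solution can be sketched as transfer learning in PyTorch – a pretrained backbone fine-tuned as a binary overflowing/not-overflowing image classifier. The folder layout and training settings here are placeholders:

```python
import torch, torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical folder layout: bins/{overflowing,ok}/*.jpg
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
data = datasets.ImageFolder("bins", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)    # overflowing vs. ok

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)   # train the new head only
loss_fn = nn.CrossEntropyLoss()
model.train()
for epoch in range(3):                           # short illustrative run
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```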

Tiger Analytics’ solution was based on a new data classification algorithm that increased the efficiency of the waste collection trucks. Based on the sensor data collected from the chassis, we enabled the client to predict the collection time when the truck was five seconds away from the vicinity of a bin. Furthermore, with advanced monitoring analytics, we reduced the duration of the review process from 10 hours to 1.5 hours, which boosted workforce efficiency too.

As a result, the client could effortlessly de-risk their waste management approach and prevent overflow in commercial bins. Some of the business benefits of our solution were:

  • More operational efficiency by streamlining how pickups are scheduled
  • Smarter asset management through increased fuel efficiency and reduced vehicle running costs
  • Improved workforce productivity – with accelerated critical processes like reviewing videos to confirm the pickup
  • Quick risk mitigation of any overflow negligence that leads to compliance violations

Conclusion

New avenues of leveraging advanced analytics continue to pave the way for eco-conscious and sustainable business practices. Especially in a highly regulated sector like commercial waste management, it provides the much-needed accuracy, convenience, and speed to strengthen day-to-day operations and prevent compliance issues.

Day by day, commercial waste management is growing into a more significant catalyst for societal progress. As mentioned earlier, more companies are becoming mindful of their impact on the environment. In addition, the extent of infrastructure development has taken its toll – thereby exponentially increasing the need to optimize waste disposal and collection methods. Either way, data provides a valuable understanding of how it should be done. 

This article was first published in Analytics India Magazine.

Maximizing Efficiency: Redefining Predictive Maintenance in Manufacturing with Digital Twins

Historically, manufacturing equipment maintenance has been done during scheduled service downtime. This involves periodically stopping production to carry out routine inspections, maintenance, and repairs. Unexpected equipment breakdowns disrupt the production schedule, require expensive part replacements, and delay the resumption of operations due to long procurement lead times.

Sensors that measure and record operational parameters (temperature, pressure, vibration, RPM, etc.) have been affixed on machinery at manufacturing plants for several years. Traditionally, the data generated by these sensors was compiled, cleaned, and analyzed manually to determine failure rates and create maintenance schedules. But every equipment downtime for maintenance, whether planned or unplanned, is a source of lost revenue and increased cost. The manual process was time-consuming, tedious, and hard to handle as the volume of data rose.

The ability to predict the likelihood of a breakdown can help manufacturers take pre-emptive action to minimize downtime, keep production on track, and control maintenance spending. Recognizing this, companies are increasingly building both reactive and predictive computer-based models from sensor data. The challenge these models face is the lack of a standard framework for creating and selecting the right one. Model effectiveness largely depends on the skill of the data scientist; each model must be built separately; model selection is constrained by time and resources; and models must be updated regularly with fresh data to sustain their predictive value.

As more equipment types come under the analytical ambit, this approach becomes prohibitively expensive. Further, the sensor data is not always leveraged to its full potential to detect anomalies or provide early warnings about impending breakdowns.

In the last decade, the Industrial Internet of Things (IIoT) has revolutionized predictive maintenance. Sensors record operational data in real-time and transmit it to a cloud database. This dataset feeds a digital twin, a computer-generated model that mirrors the physical operation of each machine. The concept of the digital twin has enabled manufacturing companies not only to plan maintenance but to get early warnings of the likelihood of a breakdown, pinpoint the cause, and run scenario analyses in which operational parameters can be varied at will to understand their impact on equipment performance.

Several eminent ‘brand-name’ products exist to create these digital twins, but the software is often challenging to customize, cannot always accommodate the specific needs of every manufacturing environment, and significantly increases the total cost of ownership.

ML-powered digital twins can address these issues when they are purpose-built to suit each company’s specific situation. They are affordable, scalable, self-sustaining, and, with the right user interface, are extremely useful in telling machine operators the exact condition of the equipment under their care. Before embarking on the journey of leveraging ML-powered digital twins, certain critical steps must be taken:

1. Creation of an inventory of the available equipment, associated sensors and data.

2. Analysis of the inventory in consultation with plant operations teams to identify the gaps. Typical issues may include missing or insufficient data from the sensors; machinery that lacks sensors; and sensors that do not correctly or regularly send data to the database.

3. Coordination between the manufacturing operations and analytics/technology teams to address some gaps: installing sensors if lacking (‘sensorization’); ensuring that sensor readings can be and are being sent to the cloud database; and developing contingency approaches for situations in which no data is generated (e.g., equipment idle time).

4. A second readiness assessment, followed by a data quality assessment, must be performed to ensure that a strong foundation of data exists for solution development.

This creates the basis for a cloud-based, ML-powered digital twin solution for predictive maintenance. To deliver the most value, such a solution should:

  • Use sensor data in combination with other data as necessary
  • Perform root cause analyses of past breakdowns to inform predictions and risk assessments
  • Alert operators of operational anomalies
  • Provide early warnings of impending failures
  • Generate forecasts of the likely operational situation
  • Be demonstrably effective to encourage its adoption and extensive utilization
  • Be simple for operators to use, navigate and understand
  • Be flexible to fit the specific needs of the machines being managed

[Figure: The predictive maintenance cycle]

When model-building begins, the first step is to account for the input data frequency. As sensors take readings at short intervals, timestamps must be regularized and readings resampled for all connected parameters where required. At this stage, data with very low variance or too few observations may be excised. Model datasets containing sensor readings (the predictors) and event data such as failures and stoppages (the outcomes) are then created for each machine, using both dependent and independent variable formats.
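
A pandas sketch of that regularization step, assuming raw readings arrive with jittered timestamps in a hypothetical CSV; the one-minute grid, gap cap, and variance cut-off are illustrative:

```python
import pandas as pd

raw = pd.read_csv("sensor_readings.csv", parse_dates=["ts"])   # hypothetical file
raw = raw.set_index("ts").sort_index()

# Snap irregular timestamps onto a one-minute grid (averaging duplicates),
# then interpolate short gaps so all connected parameters share one index
regular = (raw.resample("1min").mean()
              .interpolate(limit=5))            # cap gap-filling at 5 minutes

# Excise near-constant channels that carry no modeling signal
regular = regular.loc[:, regular.std() > 1e-6]
```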

To select the right model for anomaly detection, multiple models are tested and scored on the full data set and validated against history. To generate a short-term forecast, gaps related to machine testing or idle time must be accounted for, and a range of models evaluated to determine which one performs best.

Tiger Analytics used a similar approach when building these predictive maintenance systems for an Indian multinational steel manufacturer. Here, we found that regression was the best approach to flag anomalies. For forecasting, Random Forest models were more accurate than ARIMA, ARIMAX, and exponential smoothing.
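
An illustrative skeleton of that kind of bake-off, scoring a lag-feature Random Forest against ARIMA and exponential smoothing on a held-out tail; the series, model orders, and lag depth are placeholders rather than the engagement’s actual configuration:

```python
import numpy as np, pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

y = pd.read_csv("bearing_temp.csv", parse_dates=["ts"], index_col="ts")["value"]  # hypothetical
train, test = y[:-48], y[-48:]                   # hold out the last 48 steps

# Classical baselines
arima_fc = ARIMA(train, order=(2, 1, 2)).fit().forecast(len(test))
es_fc = ExponentialSmoothing(train, trend="add").fit().forecast(len(test))

# Random Forest on lag features, rolled forward one step at a time
lags = 24
X = np.column_stack([y.shift(i) for i in range(1, lags + 1)])[lags:-48]
rf = RandomForestRegressor(n_estimators=300).fit(X, train[lags:])
history = list(train[-lags:])                    # chronological buffer
rf_fc = []
for _ in range(len(test)):
    pred = rf.predict([history[::-1]])[0]        # most recent lag first
    rf_fc.append(pred)
    history = history[1:] + [pred]

for name, fc in [("ARIMA", arima_fc), ("ExpSmooth", es_fc), ("RF", rf_fc)]:
    print(name, mean_absolute_error(test, fc))
```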

[Figure: The predictive maintenance analysis flow]

Using a modular paradigm to build an ML-powered digital twin makes it straightforward to implement and deploy. It does not require frequent manual recalibration to be self-sustaining, and it is scalable, so it can be implemented across a wide range of equipment with minimal additional effort and time.

Careful execution of the preparatory actions is as important to the success of this approach, and to its long-term viability, as strong model-building. To address the challenge of low-cost, high-efficiency predictive maintenance in the manufacturing sector, employ a sustainable solution: a combination of technology, business intelligence, data science, user-centric design, and the operational expertise of manufacturing employees.

This article was first published in Analytics India Magazine.
