The Data Leader’s Guide to Responsible AI: Why Strong Data Governance Is Key to Mitigating AI Risks
https://www.tigeranalytics.com/perspectives/blog/the-data-leaders-guide-to-responsible-ai-why-strong-data-governance-is-key-to-mitigating-ai-risks/ | Tue, 29 Apr 2025
AI has moved from science fiction to everyday reality, but its success hinges on strong data governance. In this blog, we explore why effective governance is crucial for AI, how data leaders can build effective data governance for AI, and practical steps for aligning data governance with AI initiatives, ensuring transparency, mitigating risks, and driving better outcomes.

The post The Data Leader’s Guide to Responsible AI: Why Strong Data Governance Is Key to Mitigating AI Risks appeared first on Tiger Analytics.

In 1968, HAL 9000’s “I’m sorry, Dave. I’m afraid I can’t do that” marked the beginning of a new era in entertainment. As the years passed, films like 2004’s I, Robot and 2015’s Chappie continued to explore AI’s potential, from “One day they’ll have secrets… one day they’ll have dreams” to “I am consciousness. I am alive. I am Chappie.” While these fictional portrayals pushed the boundaries of our imagination, they also foreshadowed the AI technologies shaping the world today, such as self-driving cars, consumer personalization, and Generative AI.

Today, the rise of GenAI and copilots from various tool vendors and organizations has generated significant interest, driven by advancements in NLP, ML, computer vision, and other deep learning models. For CIOs, CDOs, and data leaders, this shift underscores a critical point: to truly add business value, AI-powered technologies must be responsible, transparent, privacy-preserving, and free of bias.

Since both AI and GenAI run on data, the right data must be available with the right quality, trust, and compliance. Without strong data governance, organizations risk AI models that reinforce bias, misinterpret data, or fail to meet regulatory requirements. This underscores the importance of Data Governance as a critical discipline that serves as a guiding light.

Hence, ‘The lighthouse remains a beacon amidst shifting tides’. In today’s context, this metaphor reflects the challenges faced by both data-driven and AI-driven enterprises. The landscape of data generation, usage, and transformation is constantly evolving, presenting new complexities for organizations to navigate. Data governance is not new, but with many changes in weather (data) patterns and the infusion of AI across industries, it has grown increasingly relevant, acting as the foundation on which AI can be governed and enabled.


At Tiger Analytics, we are constantly exploring new opportunities to optimize the way we work. In enterprises where time-to-market is critical, for example, product vendors have developed copilots using GenAI. We have also observed many initiatives among our Fortune 100 clients that leverage models and various AI elements to achieve faster time-to-market or develop new offerings. Many of these projects are successful, scalable, and continue to drive efficiency. The inevitable question, however, arises: How do we govern AI?

What are the biggest challenges in Data Governance? Answering key questions

Data governance is not just about compliance: it is essential to enhance data quality and trustworthiness, efficiency, and scalability, and to produce better AI outcomes. Strong governance practices (processes, operating model, roles and responsibilities) empower enterprises to unlock the full potential of their data assets.

Below are a few important questions that stakeholders across the enterprise, including CxOs, business leaders, Line of Business (LoB) owners, and data owners, are seeking to answer today. As organizations strive towards data literacy and ethical AI practices, these questions highlight the importance of implementing governance strategies that can support both traditional data management and emerging AI risks.

  • Who is in charge of the model or the data product that uses my model?
  • Who can control (modify/delete/archive) the dataset?
  • Who will decide how to control the data and make key decisions?
  • Who will decide what is to be controlled in the workflow or data product or model that my data is part of?
  • What are the risks to the end outcome if intelligence is augmented without audits or controls, or quality assurance?
  • Are controls for AI different from current ones or can existing ones be repurposed?
  • Which framework will guide me?
  • Is the enterprise data governance initiative flexible to accommodate my AI risks and related work?
  • With my organization in the process of becoming data literate and ensuring data ethics, how can AI initiatives take advantage of the same?
  • Is user consent still valid in the new AI model, and how is it protected?
  • What are the privacy issues to be addressed?

Let’s consider an example. A forecasting model is designed to predict seasonal sales for the launch of a new apparel range targeted at a specific customer segment within an existing market. Now, assume the data is to be sourced from your marketplace and there are ready-made data products that can be used. How do you check the health of the data before you run a simulation? What if you face challenges such as ownership disputes, metadata inconsistencies, or data quality issues? Is there a risk of a privacy breach if, for example, someone forgets to remove sample data from the dataset?
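One lightweight way to operationalize such pre-simulation checks is a simple data health report. The sketch below is illustrative only; the field names, thresholds, and the "sample record" convention are hypothetical assumptions, not a prescribed implementation:

```python
# Illustrative pre-simulation data health check. The field names, thresholds,
# and the "sample record" convention below are hypothetical assumptions.
def data_health_report(rows, required_fields, owner=None, max_null_rate=0.05):
    """Flag basic governance issues before a dataset feeds a model run."""
    issues = []
    if not owner:
        issues.append("ownership: no registered data owner")
    for field in required_fields:
        missing = sum(1 for r in rows if r.get(field) in (None, ""))
        rate = missing / len(rows) if rows else 1.0
        if rate > max_null_rate:
            issues.append(f"quality: '{field}' null rate {rate:.0%} exceeds {max_null_rate:.0%}")
    if any(str(r.get("record_type", "")).lower() == "sample" for r in rows):
        issues.append("privacy: sample/test records present in the dataset")
    return {"healthy": not issues, "issues": issues}

rows = [
    {"customer_id": "C1", "spend": 120.0, "record_type": "prod"},
    {"customer_id": "C2", "spend": None, "record_type": "sample"},
]
report = data_health_report(rows, ["customer_id", "spend"], owner=None)
```

A report like this surfaces the ownership, quality, and privacy questions above before a simulation runs, rather than after.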

This is why Data Governance (including data management) and AI must work in tandem, even more so when we consider the risk of non-compliance, for which the impact is far greater. Any governance approach must be closely aligned with data governance practices and effectively integrated into daily operations. There are various ways in which the larger industry and we at Tiger Analytics are addressing this. In the next section, we take a look at the key factors that can serve as the foundation for AI governance within an enterprise.

Untangling the AI knot: How to create a data governance framework for AI

At Tiger Analytics, we’ve identified seven elements that are crucial in establishing a framework for Foundational Governance for AI – we call it HEal & INtERAcT. We believe a human-centric and transparent approach is essential in governing AI assets. As AI continues to evolve and integrate into various processes within an organization, governance must remain simple.

Rather than introducing entirely new frameworks, our approach focuses on accessible AI governance in which existing data governance operations are expanded to include new dimensions, roles, processes, and standards. This creates a seamless extension rather than a separate entity, thereby eliminating the complexities of managing AI risks in silos and untangling the “AI knot” through smooth integration.


The seven elements ensure AI governance remains transparent and aligned with the larger enterprise data governance strategy, influencing processes, policies, standards, and change management. For instance, Integrity and Trustworthiness reinforce reliability and privacy in model outputs, while Accountability and Responsibility establish clear ownership of AI-driven decisions, ensuring compliance and ethical oversight. As AI introduces new roles and responsibilities, governance frameworks are revised to cover emerging risks and complexities such as cross-border data, global teams, mergers, and varying regulations.

In addition, the data lifecycle in any organization depends on data governance. AI cannot exist without enterprise data; synthetic data can only mimic actual data and its issues. High-quality, fit-for-purpose data is therefore essential to train AI models and GenAI for more accurate predictions and better content generation.

Getting started with AI governance

Here is how an enterprise can begin its AI governance journey:

  • Identify all the AI elements and list every app and area that uses them
  • Assess what your AIOps looks like and how it is being governed
  • Identify key risks with input from stakeholders
  • Map the risks back to the governance principles
  • Define controls for the risks identified
  • Align the framework with your larger data governance strategy
    • Enable AI-specific processes
    • Set data standards for AI
    • Tweak data policies for AI
    • Include an AI glossary for cataloging and lineage, providing better context
    • Set up data observability for AI to enable proactive detection and better model output and performance
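The steps above can be sketched as a minimal risk register that maps each identified risk back to a governance principle and a control. The risks, principles, and controls below are hypothetical examples meant to show the shape of such a mapping, not a recommended catalog:

```python
# Minimal AI risk register: each identified risk maps back to a governance
# principle and a control. Entries are hypothetical examples, not a catalog.
AI_RISK_REGISTER = [
    {"risk": "biased training data", "principle": "Integrity & Trustworthiness",
     "control": "bias audit before model promotion"},
    {"risk": "unclear model ownership", "principle": "Accountability & Responsibility",
     "control": "named owner recorded in the AI glossary/catalog"},
    {"risk": "silent model drift", "principle": "Integrity & Trustworthiness",
     "control": "data observability alerts on input/output distributions"},
]

def controls_for(principle):
    """Return the controls aligned to a given governance principle."""
    return [r["control"] for r in AI_RISK_REGISTER if r["principle"] == principle]

accountability_controls = controls_for("Accountability & Responsibility")
```

Keeping the register alongside the enterprise data governance catalog makes the risk-to-principle-to-control traceability auditable as AI initiatives grow.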

Essentially, enterprise DG+AI principles (the framework), along with identification and mitigation strategies and risk controls, pave the way for efficient AI governance. Given the evolving nature of this space, there is no one-size-fits-all solution. Numerous principles exist, but expert guidance and consulting are essential to navigate this complexity and implement the right approach.

The Road Ahead

AI has moved from science fiction to everyday reality, shaping decisions, operations, and personalized customer experiences. The focus now is on ensuring it is transparent, ethical, and well-governed. For this, AI and data governance must work in tandem. From customer churn analysis to loss prevention, from identifying the right business and technical metrics to managing consent and privacy in the new era of AI regulations, AI can drive business value, but only when built on a foundation of strong data governance. A well-structured governance program ensures AI adoption is responsible and scalable, minimizing risks while maximizing impact. By applying the principles and addressing the key questions above, you can ensure a successful implementation, enabling your business to leverage AI for meaningful outcomes.

So while you ponder these insights, ’til next time: just as the T-800 said, “I’ll be back!”


Consulting with Integrity: ‘Responsible AI’ Principles for Consultants
https://www.tigeranalytics.com/perspectives/blog/consulting-with-integrity-responsible-ai-principles-for-consultants/ | Wed, 05 Jan 2022
Third-party AI consulting firms engaged in multiple stages of AI development must point out any ethical red flags to their clients at the right time. This article delves into the importance of a structured ethical AI development process.

The post Consulting with Integrity: ‘Responsible AI’ Principles for Consultants appeared first on Tiger Analytics.

AI goes rogue and decimates or enslaves humanity: the internet is full of such horrendous fictional movies. That fictional AI risk may be far-fetched, but the current state of Narrow AI could soon have a profound impact on humanity. AI developers and leaders around the world have an ethical obligation toward society: a responsibility to create systems suited to the benefit of society and the environment surrounding them.

AI could go wrong in many ways and have unintended consequences in the shorter or longer term. In one case, an AI algorithm was found to unintentionally reinforce racial bias when it predicted lower health risk scores for people of color. It turned out that the algorithm was using patients’ historical healthcare spending to model future health risks. As this bias perpetuates through the algorithm in operation, it becomes a disastrous self-fulfilling prophecy that deepens healthcare disparity.

In another incident, Microsoft had to bear the brunt when Tay — its millennial chatbot — engaged in trash talk on social media and had to be taken offline within 16 hours of going live.

Only the juiciest stories make it to the front page of the news, but the ethical conundrum runs deep for any organization building AI-driven applications. Leading organizations have converged on the core principles for the ethical development of AI: Fairness, Safety, Privacy, Security, Interpretability, and Inclusiveness. Numerous product-led companies champion the need for responsible AI with a human-centric approach. But these products are not built entirely by a single team. Often, multiple pre-packaged software components bring the AI use case to fruition. In other cases, specialized AI consulting companies bring in bespoke solutions, capabilities, datasets, or skill sets to complement the speed and scale of AI development.

As third-party AI consulting firms are involved in the various phases of AI development (data gathering and wrangling, model training and building, and finally model deployment and adoption), it is crucial for them to understand the reputational implications for their clients of even a mildly rogue AI. Without certain systems in place, AI development teams scramble to solve issues as they come, brewing a regulatory and humanitarian storm. In such a situation, it is imperative for these consulting or vendor organizations to follow a defined process for ethical AI development. The salient points of such a process should be:

1. Recognize and flag an AI ethical issue early.

We can solve ethical dilemmas only if we have the mechanisms to recognize them. A key step at the beginning of any AI ethical quandary is locating and isolating the ethical aspects of the issue. This involves educating employees and consultants alike in AI ethics sensitivity. Experienced data modelers on the team should have the eye to identify any violations of the core ethical principles in their custom-made solutions.

2. Documentation helps you trace unethical behavior.

Documenting how key AI services operate and are trained, along with their performance metrics, fairness, robustness, and systemic biases, goes a long way in avoiding ethical digression. The devil is in the details, and the details are captured best by documentation.
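One lightweight form this documentation can take is a model card kept alongside the model artifact. The field names and values below are a hypothetical minimum for illustration, not a standard schema:

```python
import json

# A minimal model-card record kept alongside the model artifact.
# Field names and values are an illustrative minimum, not a standard schema.
model_card = {
    "model": "churn-classifier-v2",
    "training_data": "2023 customer transactions (anonymized)",
    "performance": {"auc": 0.84, "f1": 0.71},
    "fairness_checks": ["disparate impact by age band", "equal opportunity by region"],
    "known_limitations": ["underrepresents customers with under 6 months of tenure"],
    "owner": "analytics-governance-team",
}

def is_documented(card, required=("model", "training_data", "performance", "owner")):
    """Tracing unethical behavior starts with knowing these fields were recorded."""
    return all(card.get(field) for field in required)

card_json = json.dumps(model_card, indent=2)  # persist next to the model artifact
documented = is_documented(model_card)
```

Even a record this small gives auditors and client teams a starting point for tracing how a model behaved and why.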

3. Work in tandem with the client’s team to understand business-specific ethical risks within AI.

Similar industries share common themes across their AI risks. A healthcare or banking company must build extra guardrails around probable violations of privacy and security. E-commerce companies, pioneers in creating state-of-the-art recommendation engines, must keep their ears and eyes open to mitigate any associative bias that leads to stereotypical associations with certain populations. Identifying such risks narrows the search for probable violations.

4. Use an ethical framework like the Consequentialist Framework for an objective assessment of ethical decision-making.

A consequentialist framework evaluates an AI project by looking at its outcomes. Such frameworks help teams deliberate over probable ethical implications. For example, a self-driving AI that has even a remote possibility of failing to recognize pedestrians wearing face masks could be fatal and should never make it to market.

5. Understand the trade-off between accuracy, privacy, and bias at different stages of model evaluation.

Data scientists must be cognizant that their ML models should be optimized not only for best performance and high accuracy but also for lower (unwanted) bias. As with any other non-binary decision, leaders should be aware of this trade-off too. Fairness metrics and bias-mitigation toolkits like IBM’s AI Fairness 360 can be used to mitigate unwanted bias in datasets and models.
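As a concrete illustration of one such fairness metric, the sketch below computes disparate impact (the ratio of favorable-outcome rates between an unprivileged and a privileged group) on hypothetical data. Toolkits like AI Fairness 360 provide this and many other metrics out of the box; this standalone version only shows the idea:

```python
# Disparate impact: ratio of favorable-outcome rates, unprivileged / privileged.
# A common rule of thumb flags values below 0.8. All data here is hypothetical.
def disparate_impact(outcomes, groups, unprivileged, privileged):
    def favorable_rate(group):
        members = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(members) / len(members) if members else 0.0
    priv_rate = favorable_rate(privileged)
    return favorable_rate(unprivileged) / priv_rate if priv_rate else float("inf")

outcomes = [1, 1, 0, 1, 0, 0, 1, 0]            # 1 = favorable model decision
groups   = ["A", "A", "A", "B", "B", "B", "B", "B"]
di = disparate_impact(outcomes, groups, unprivileged="B", privileged="A")
flagged = di < 0.8  # potential adverse impact against group B
```

Here group A receives favorable outcomes at a rate of 2/3 and group B at 2/5, giving a ratio of 0.6, below the 0.8 rule of thumb, which is exactly the kind of signal the accuracy-versus-bias trade-off discussion should surface.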

6. Incentivize open-source and white-box approaches.

An open-source and explainable AI approach is crucial in establishing trust between vendors and clients. It ensures that the system is working as expected and that any anomaly can be traced back to the precise code or data item that originated it. The ease of regulatory compliance with open-source approaches makes them a favorite in the financial services and healthcare sectors.

7. Run organizational awareness initiatives.

An excellent data scientist may not be aware of the ethical implications of autonomous systems. Organizational awareness, adequate training, and a robust mechanism to bring forth AI risks should be inculcated into culture and values. Employees should be incentivized to escalate even the smallest of such situations, and an AI ethics committee should be formed to provide broader guidance to on-the-ground teams on grey areas.

Final Thoughts

The foundation for each of these steps is smooth coordination between vendor and client teams around a shared, responsible vision. Vendors should not hesitate to bring forth any AI ethical risks they might be running for their clients. Clients, meanwhile, should involve their strategic vendors in such discussions and training. Analysts and data scientists may be the whistleblowers for AI ethical risks, yet they cannot flag those issues unless a top-down culture encourages them to look for them.


Defining Financial Ethics: Transparency and Fairness in Financial Institutions’ use of AI and ML
https://www.tigeranalytics.com/perspectives/blog/transparency-financial-institutions-use-artificial-intelligence-machine-learning/ | Fri, 10 Dec 2021
While time, cost, and efficiency have seen drastic improvement thanks to AI/ML, concerns over transparency, accountability, and inclusivity prevail. This article provides important insight into how financial institutions can maintain a sense of clarity and inclusiveness.

The post Defining Financial Ethics: Transparency and Fairness in Financial Institutions’ use of AI and ML appeared first on Tiger Analytics.

The last few years have seen a rapid acceleration in the use of disruptive technologies such as Machine Learning and Artificial Intelligence in financial institutions (FIs). Improved software and hardware, coupled with a digital-first outlook, have led to a steep rise in the use of such applications to advance outcomes for consumers and businesses alike.

By embracing AI/ML, the early adopters in the industry have been able to streamline decision processes involving large amounts of data, avoid bias, and reduce the chances of error and fraud. Even the more traditional banks are investing in AI systems that use state-of-the-art ML and deep learning algorithms, paving the way for quicker and better reactions to changing consumer needs and market dynamics.

The Covid-19 pandemic has only made the use of AI/ML-based tools more widespread and easily scalable across sectors. At Tiger Analytics, we have been at the heart of the action and have assisted several clients in reaping the benefits of AI/ML across the value chain.
Pilot use cases where FIs have seen success with AI/ML-based solutions:

  • Smarter risk management
  • Real-time investment advice
  • Enhanced access to credit
  • Automated underwriting
  • Intelligent customer service and chatbots

The challenges

While time, cost, and efficiency have seen drastic improvements thanks to AI/ML, concerns over transparency, accountability, and inclusivity remain. Given how highly regulated and impactful the industry is, it becomes pertinent to maintain a sense of clarity and inclusiveness.
Problems in the governance of AI/ML:

  • Transparency
  • Fairness
  • Bias
  • Reliability/soundness
  • Accountability

How can we achieve this? First and foremost, by finding and evaluating safe and responsible ways to integrate AI/ML into everyday processes to better suit the needs of clients and customers.

By making certain guidelines uniform and standardized, we can set the tone for successful AI/ML implementation. This involves robust internal governance processes and frameworks, as well as timely interventions and checks, as outlined in Tiger’s response document and comments to the regulatory agencies in the US.

These checks become even more relevant where regulatory standards or guidance on the use of AI in FIs are inadequate. However, efforts are being made to hold FIs to some kind of standard.

The table below illustrates the issuance of AI guidelines across different countries:

[Table: Issuance of AI guidelines across different countries]

Source: FSI Insights on Policy Implementation No. 35, By Jeremy Prenio & Jeffrey Yong, August 2021

Supervisory guidelines and regulations must be understood and customized to suit the needs of the various sectors.

To overcome these challenges, this step of creating uniform guidance by the regulatory agencies is essential: it opens up a dialogue on the usage of AI/ML-based solutions and brings in different and diverse voices from the industry to share their triumphs and concerns.

Putting it out there

As a global analytics firm that specializes in creating bespoke AI- and ML-based solutions for a host of clients, we at Tiger recognize the relevance of a framework of guidelines that fosters trust and responsibility.

It was this intention of bringing in more transparency that led us to put forward our response to the Request for Information and Comment on Financial Institutions’ Use of Artificial Intelligence, including Machine Learning (RFI) by the following agencies:

  • Board of Governors of the Federal Reserve System (FRB)
  • Bureau of Consumer Financial Protection (CFPB)
  • Federal Deposit Insurance Corporation (FDIC)
  • National Credit Union Administration (NCUA), and
  • Office of the Comptroller of the Currency (OCC)

Our response to the RFI is structured so that it is easily accessible even to those without academic or technical knowledge of AI/ML. We have kept the conversation generic, steering away from deep technical jargon in our views.

Ultimately, we recognize that the role of regulations around models involving AI and ML is to create fairness and transparency for everyone involved.

Transparency and accountability are foundation stones at Tiger too, which we apply and deploy while developing powerful AI- and ML-based solutions for our clients, be they large or community banks, credit unions, fintechs, or other financial services firms.

We are eager to see the outcome of this exercise and hope that it will result in consensus and uniformity of definitions, help distinguish fact from myth, and allow for a gradation of actual and perceived risks arising from the use of AI and ML models.

We hope that our response not only highlights our commitment to creating global standards in AI/ML regulation, but also echoes Tiger’s own work culture and belief system of fairness, inclusivity, and equality.

Want to learn more about our response? Refer to our recent interagency submission.

You can download Tiger’s full response here.

