
Innovation vs. Ethical Implementation: Where Does AI Stand Today?

Enterprises exploring AI implementation, which describes most enterprises as of 2024, are currently assessing how to do so safely and sustainably. AI ethics can be an essential part of that conversation. Questions of particular interest include:

  • How diverse or representative are the training data of your AI engines? How can a lack of representation impact AI’s outputs?
  • When should a sensitive task be entrusted to AI rather than to a human? What level of oversight should organizations maintain over AI?
  • When—and how—should organizations inform stakeholders that AI has been used to complete a certain task?

Organizations, especially those leveraging proprietary AI engines, must answer these questions thoroughly and transparently to satisfy all stakeholder concerns. To ease this process, let’s review a few pressing developments in AI ethics over the past six months.

The rise of agentic AI

We are quietly entering a new era in AI. “Agentic AI,” as it’s known, acts as an “agent”: it analyzes situations, engages other technologies for decision-making, and ultimately carries out complex, multi-step tasks without constant human oversight. This level of sophistication sets agentic AI apart from the first generative AI models to reach the market, which couldn’t tell users the time or add simple numbers.

Agentic AI systems can process and “reason” through a complex dilemma with multiple criteria. Consider planning a trip to Mumbai: you’d like the trip to align with your mother’s birthday, you’d like to book a flight that cashes in your reward miles, you’d like a hotel close to your mother’s house, and you want dinner reservations for your trip’s first and final nights. An agentic AI system can ingest these disparate needs, propose a workable itinerary, and then book your stay and travel, interfacing with multiple online platforms to do so.
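To make that pattern concrete, here is a minimal sketch of an agentic loop in Python: a goal is decomposed into steps, and each step is dispatched to a tool. The tool names (`search_flights`, `book_hotel`, `reserve_dinner`) and the fixed plan are hypothetical stand-ins for illustration, not any vendor’s actual API; a production agent would also generate and revise the plan itself, typically with an LLM.

```python
# Minimal sketch of an agentic "decompose and dispatch" loop.
# All tool names and the plan itself are hypothetical illustrations.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    tool: str       # which capability the agent invokes
    request: dict   # parameters for that capability

# Hypothetical "tools"; a real agent would wrap flight, hotel,
# and restaurant booking APIs here.
TOOLS: dict[str, Callable[[dict], str]] = {
    "search_flights": lambda r: f"flight to {r['city']} paid with {r['pay_with']}",
    "book_hotel":     lambda r: f"hotel near {r['near']}",
    "reserve_dinner": lambda r: f"dinner reservation, {r['night']}",
}

def run_agent(plan: list[Step]) -> list[str]:
    """Execute a multi-step plan, one tool call at a time."""
    itinerary = []
    for step in plan:
        result = TOOLS[step.tool](step.request)
        itinerary.append(result)  # each step's output can inform the next
    return itinerary

# The Mumbai trip from the example, expressed as a plan:
plan = [
    Step("search_flights", {"city": "Mumbai", "pay_with": "reward miles"}),
    Step("book_hotel", {"near": "mother's house"}),
    Step("reserve_dinner", {"night": "first night"}),
    Step("reserve_dinner", {"night": "final night"}),
]

for item in run_agent(plan):
    print("Booked:", item)
```

A real system would add error handling and re-planning when a booking fails; the point here is only the multi-step, tool-calling structure that distinguishes agents from single-shot generative models.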

These capabilities will likely have enormous implications for many businesses, particularly for data-intensive industries like financial services. Imagine being able to synthesize, analyze, and query your AI systems about diverse customer activities and profiles in minutes. The possibilities are exciting.

However, agentic AI also raises a critical question about oversight. Booking travel might be harmless, but in compliance-focused industries, organizations may need to set parameters around how and when AI can make executive decisions.

Emerging compliance frameworks

Financial institutions (FIs) have an opportunity to codify expectations around AI right now, with the goal of improving client relations and proactively prioritizing their customers’ well-being. Areas of interest include:

  • Safety and security
  • Responsible development
  • Bias and unlawful discrimination
  • Privacy

Although we cannot predict the timeline or likelihood of regulation, organizations can conduct due diligence now to mitigate risk and underscore their commitment to client outcomes. Important considerations include AI transparency and consumer data privacy.

Risk-based approaches to AI governance

Most AI experts agree that a one-size-fits-all approach to governance is insufficient. After all, the ramifications of unethical AI differ significantly by application. For this reason, risk-based approaches, such as the framework established by the EU’s AI Act, are gaining traction.

In a risk-based compliance system, the stringency of obligations and penalties scales with an AI system’s potential impact on human rights, safety, and societal well-being. For example, high-risk industries like healthcare and financial services may face closer scrutiny of their AI use because unethical practices in these industries can significantly harm a consumer’s well-being.

Organizations in high-risk industries must remain especially vigilant about ethical AI deployment. The most effective way to do this is to prioritize human-in-the-loop decision-making. In other words, humans should retain the final say when validating outputs, checking for bias, and enforcing ethical standards.
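As an illustration of what that can look like in code, the sketch below routes AI-proposed actions through a risk-tiered gate: low-risk actions execute automatically, while high-risk ones wait for human sign-off. The tiers loosely echo the EU AI Act’s risk categories, but the tier assignments, example actions, and `human_review` stub are assumptions made for this sketch, not regulatory requirements.

```python
# Illustrative human-in-the-loop gate: high-risk AI decisions are held
# for a human reviewer, who retains the final say. Tiers and examples
# are assumptions for this sketch, not rules from any regulation.

from enum import Enum

class Risk(Enum):
    MINIMAL = 1   # e.g., answering a product FAQ
    LIMITED = 2   # e.g., drafting marketing copy
    HIGH = 3      # e.g., credit or claims decisions

def human_review(decision: str) -> bool:
    """Stand-in for a real review queue where a person inspects the decision."""
    print(f"[review queue] awaiting human sign-off on: {decision}")
    return True  # a real reviewer could approve or reject here

def execute(decision: str, risk: Risk) -> str:
    # Humans retain the final say on high-impact outputs.
    if risk is Risk.HIGH and not human_review(decision):
        return f"REJECTED by reviewer: {decision}"
    return f"EXECUTED: {decision}"

print(execute("answer a product FAQ", Risk.MINIMAL))
print(execute("flag a loan application for denial", Risk.HIGH))
```

The design choice to encode risk tiers explicitly, rather than reviewing everything or nothing, mirrors the risk-based governance approach described above: oversight effort concentrates where potential harm is greatest.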

How to balance innovation and ethics

Conversations about AI ethics usually invoke the necessity of innovation, and the two (innovation and ethics) are often depicted as opposing forces. I believe the opposite: lasting innovation requires a dedication to ethical decision-making. When we build upon ethical systems, we create more viable, inclusive, and longer-lived technologies.

Arguably, the most critical consideration in this realm is explainable AI, or systems with decision-making processes that humans can understand, audit, and explain.

Many AI systems currently operate as “black boxes”: we cannot see the logic informing their outputs. Non-explainable AI becomes problematic when it prevents humans from verifying, intellectually and ethically, the accuracy of a system’s rationale. In these instances, humans cannot validate the reasoning behind an AI’s response or action. Perhaps even more troubling, non-explainable AI is harder to iterate upon. Leaders should prioritize deploying AI that humans can regularly test, vet, and understand.
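One lightweight way to act on this is to require that every automated decision carry a human-readable rationale that can be audited later. The toy “glass box” scorer below illustrates the pattern; the feature names, weights, and threshold are invented for illustration and do not represent a real underwriting model.

```python
# Toy "glass box" model: every decision records the factors behind it,
# so a reviewer can audit the rationale. Features, weights, and the
# threshold are invented for illustration only.

WEIGHTS = {"income_ratio": 0.6, "payment_history": 0.4}
THRESHOLD = 0.5

def decide(applicant: dict) -> dict:
    # The score is a simple weighted sum, so each factor's contribution
    # can be reported alongside the outcome.
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 3),
        "rationale": contributions,  # the audit trail a black box lacks
    }

# Approved, with a per-factor rationale a reviewer can inspect:
print(decide({"income_ratio": 0.9, "payment_history": 0.4}))
```

Real deployments pursue the same goal for complex models with post-hoc techniques such as feature-importance or SHAP-style analyses, but the requirement is identical: a reviewer must be able to trace an output back to the inputs that produced it.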

The balance between ethical and innovative AI may seem delicate, but striking it is critical. Leaders who interrogate the ethics of their AI providers and systems can improve those systems’ longevity and performance.

About the Author

Vall Herard is the CEO of Saifr.ai, a Fidelity Labs company. Throughout his career, he has watched the use of AI evolve within the financial services industry, and he brings extensive subject-matter expertise to where the industry is headed and what participants should anticipate for the future of AI. Vall has previously worked at top banks such as BNY Mellon, BNP Paribas, and UBS Investment Bank. He holds an MS in Quantitative Finance from New York University (NYU), a certificate in data & AI from the Massachusetts Institute of Technology (MIT), and a BS in Mathematical Economics from Syracuse and Pace Universities.



