Generative AI: Benefits, risks and a framework for responsible innovation

By Kristi Boyd, Trustworthy AI Specialist at SAS

Generative AI (GAI) is a category of AI that can create new content, including video, audio, images and text. GAI has the potential to change the way we approach content creation.

GAI has gotten a lot of attention lately. Take ChatGPT, for example. The AI chatbot has captivated the public’s imagination with clever answers, creative writing and helpful problem-solving – all driven by GAI technologies. In recent months, multiple GAI tools have taken social media by storm, and the global market for GAI is expected to reach US$110.8 billion by 2030.

So how does GAI work and what should be considered before putting it to work?

An introduction to GAI

GAI uses various technological approaches, such as deep learning, reinforcement learning and transformers. While the specifics vary, the models work similarly at a conceptual level.
The most common approach is to use deep neural networks, which consist of multiple layers of interconnected nodes that accept large quantities of data as inputs and identify patterns or structures within the data set. Drawing on millions of data points, the model then generates new data that is similar in structure and content to its training data. These data sets are incredibly cumbersome to accumulate and require significant oversight to maintain data quality.
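
To make that learn-then-sample loop concrete, here is a toy character-level bigram model in Python. It is not a deep neural network – the data and the model are deliberately tiny – but it illustrates the same idea: count the patterns in training text, then sample new text that is statistically similar to it.

```python
import random

# Toy training corpus; real generative models learn from terabytes of text.
training_text = "the cat sat on the mat. the dog sat on the rug."

# "Training": record which characters follow each character in the data.
transitions = {}
for current, nxt in zip(training_text, training_text[1:]):
    transitions.setdefault(current, []).append(nxt)

def generate(seed, length=40):
    """Sample new text one character at a time from the learned frequencies."""
    out = [seed]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:  # no observed continuation for this character
            break
        out.append(random.choice(followers))
    return "".join(out)

print(generate("t"))  # e.g. "the mat. the cat sat on the rug. the do"
```

A deep generative model does essentially the same thing at vastly greater scale, learning far richer patterns than adjacent-character frequencies.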

The quantity of data necessary is mind-boggling. McKinsey reports that OpenAI used approximately 45 terabytes of text data to train GPT-3. That much data is expensive to maintain; by some estimates, storage alone can cost more than US$250,000 annually, not including processing costs.

The market's rapid growth has inspired a barrage of utopian and dystopian responses about the possibilities and risks of GAI technologies. Dozens of authors have written about the incredible potential of GAI to answer tax questions, provide translation services, diagnose Alzheimer’s, reduce health care spending and serve as a 24/7 communication tool.
At the same time, research has shown that users overly trust automated programs. This automation bias amplifies the risks of GAI tools, as individuals may inadvertently make decisions based on misinformation, fake content, or dubious facts promoted by the algorithms. Many authors have explored how GAI can perpetuate misinformation, empower malicious actors, breach privacy, infringe on intellectual property and threaten jobs.
While many articles have raised valid concerns, others have highlighted the genuine benefits of the technology. Together, they have underscored the need for more clarity about GAI and whether organizations should embrace this technological advancement. We recently published a blog about the core values that guide development at SAS: human-centricity, inclusivity, accountability, transparency, robustness, and privacy and security. In it, readers learn how these values are reflected in our people, processes and products.
This blog lays out a practical framework, built on those values, for organizations adopting GAI tools.

So you want to use GAI?

The novelty of GAI offers a first-mover advantage to the most agile businesses. However, most innovation introduces uncertainty and risk to an organization. Before investing capital in GAI business improvements and divulging proprietary information to a new tool, organizations should weigh the potential risks and rewards of their use cases. The best way for a business to establish an effective strategy is to start with organizational values. At SAS, we established our principles to help us answer the questions of “can we” and “should we” adopt a new technology. We propose this model and the questions below as a starting point for the responsible use of GAI. Some of the questions may not be relevant for every GAI use case; in those situations, organizations should decide whether to adopt alternative principles.

Human-centricity: Putting people at the forefront

AI tools should never harm; they should promote human well-being, agency and equity. While most developers do not create GAI with the intent to generate hateful content or enable harassment, we should always be intentional about prioritizing human well-being. Consider:
  • Do the relevant employees understand how this GAI tool can assist the organization?
  • Does the project align with the organization’s ethical principles?
  • Does this use case have positive intent for society?
  • Who may be harmed by the use of GAI?
  • What is the impact on individuals and society over time?

Transparency: Understanding the reasons behind development

GAI tools will undoubtedly change the global business landscape. Transparency ensures that we understand how and why those changes happen. When using any GAI tool, it is important to openly communicate the intended use, the potential risks and the decisions made along the way. Consider:
  • Can the responses of the GAI tool be interpreted and explained by human experts in the organization?
  • What are the legal, financial and reputational risks of GAI-generated content?
  • Should the organization indicate when content was created by GAI? (A labeling sketch follows this list.)
  • Would it be clear to people that they were interacting with a GAI system?
  • What testing satisfies expectations for audit standards (FAT-AI, FEAT, ISO, etc.)?
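
Here is the labeling sketch mentioned above: a minimal, hypothetical way to indicate that GAI created content by bundling generated text with provenance metadata and a plain-language disclosure. The field names and model identifier are illustrative assumptions, not any specific product’s API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GeneratedContent:
    """Generated text bundled with the provenance details readers should see."""
    text: str
    model: str  # hypothetical model identifier
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    disclosure: str = "This content was generated by an AI system."

note = GeneratedContent(text="Draft quarterly summary ...", model="example-gai-model")
print(f"{note.text}\n---\n{note.disclosure} (model: {note.model}, generated: {note.generated_at})")
```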

Robustness: Awareness of limitations and risks

Most tools, analog or digital, come with a warning to use them only as the designers intended so that they operate reliably and safely. Systems used beyond their intended purpose may cause unforeseen real-life harm. Many GAI systems still have significant limitations that affect their accuracy, and since there is considerable room for improvement, all users should be cautious when using GAI tools. Consider:
  • Was the GAI sufficiently trained on data for the organization’s specific use case?
  • Has the creator of the GAI system documented any limitations, and is the use case within those limitations?
  • Can solution results be reliably reproduced?
  • What guardrails are required to ensure safe operation? (A minimal sketch follows this list.)
  • How might the solution go awry, and what should be the response?
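
Here is the guardrail sketch mentioned above: wrap the model call, validate the output against basic checks and route anything that fails to human review. The call_model stub and the blocked patterns are placeholder assumptions, not any vendor’s API.

```python
import re

MAX_LENGTH = 2000  # cap runaway outputs
BLOCKED_PATTERNS = [r"(?i)guaranteed cure", r"\b\d{3}-\d{2}-\d{4}\b"]  # illustrative

def call_model(prompt):
    # Placeholder for a real GAI call; returns canned text for this sketch.
    return "Here is a draft response to: " + prompt

def passes_guardrails(text):
    """Basic output checks: a length limit and a blocklist of unsafe patterns."""
    if len(text) > MAX_LENGTH:
        return False
    return not any(re.search(p, text) for p in BLOCKED_PATTERNS)

def generate_safely(prompt):
    text = call_model(prompt)
    if passes_guardrails(text):
        return text
    return "[Output withheld: failed automated checks; escalated to human review.]"

print(generate_safely("Summarize our product warranty in plain language."))
```

Real guardrails are use-case-specific (fact-checking, toxicity scoring, schema validation), but the wrap-validate-escalate pattern stays the same.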

Privacy and security: Keeping everything safe

Users and businesses may engage with GAIs under a false assumption of confidentiality and accidentally disclose inappropriate information about themselves or others. For example, doctors using ChatGPT to write clinical notes may inadvertently violate HIPAA, and developers using GitHub Copilot may accidentally contribute proprietary data to the model. Remember that any input into a GAI may be permanent and is unlikely to remain private; one practical mitigation, sketched after the questions below, is to redact sensitive fields before a prompt ever leaves the organization. Consider:
  • Is there a risk of sharing any private or sensitive information?
  • Is there a risk of sharing intellectual property or other information that should not be disclosed to the algorithm?
  • What legal and regulatory compliance measures apply?
  • How might cyber or adversarial attacks exploit this solution?
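
Here is the redaction sketch mentioned above: a minimal Python example that masks a few common sensitive patterns before a prompt leaves the organization. The patterns are illustrative assumptions; real deployments need far more thorough detection, often with dedicated PII-scanning tools.

```python
import re

# Illustrative patterns only; production systems need broader coverage.
REDACTIONS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "SSN":   r"\b\d{3}-\d{2}-\d{4}\b",
    "PHONE": r"\b\d{3}[-.]\d{3}[-.]\d{4}\b",
}

def redact(text):
    """Replace sensitive fields with labeled placeholders before any external call."""
    for label, pattern in REDACTIONS.items():
        text = re.sub(pattern, f"[{label}]", text)
    return text

prompt = "Summarize: patient John, SSN 123-45-6789, email j@x.com, phone 555-123-4567."
print(redact(prompt))
# Summarize: patient John, SSN [SSN], email [EMAIL], phone [PHONE].
```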

Inclusivity: Recognizing diverse needs and perspectives

GAI – just like humans – carries bias. If this bias is not identified and mitigated, we could exacerbate power imbalances and further marginalize vulnerable communities. Responsible innovation requires awareness of potential bias and deliberate consideration of diverse needs, perspectives and experiences; a simple per-group check, sketched after the questions below, is often a good first test. Consider:
  • Does the solution perform differently for different groups? Why?
  • Do people with similar characteristics experience similar outcomes? If not, why?
  • Are all the people the solution is intended for prioritized equally?
  • How might the training data impact the inclusivity of the GAI response?
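
Here is the per-group check mentioned above: a minimal sketch that compares outcome rates across groups. The records and fields are hypothetical; in practice, this comparison runs over real evaluation data and many more dimensions.

```python
# Hypothetical evaluation records: the group each case belongs to and
# whether the GAI output was judged correct for that case.
records = [
    {"group": "A", "correct": True},  {"group": "A", "correct": True},
    {"group": "A", "correct": False}, {"group": "B", "correct": True},
    {"group": "B", "correct": False}, {"group": "B", "correct": False},
]

# Aggregate outcomes per group.
by_group = {}
for r in records:
    by_group.setdefault(r["group"], []).append(r["correct"])

# A large gap between groups is a signal to investigate data and design.
for group, outcomes in sorted(by_group.items()):
    rate = sum(outcomes) / len(outcomes)
    print(f"group {group}: {rate:.0%} judged correct over {len(outcomes)} cases")
```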

Accountability: Prioritizing feedback

Organizations using and developing GAI systems are responsible for identifying and mitigating the adverse impacts of decisions based on AI recommendations; a minimal feedback-logging sketch follows the questions below. Consider:
  • How are outcomes monitored, and can they be overridden and corrected?
  • Can the end users and those impacted raise their concerns for remediation?
  • How can the organization ensure the tool does not promote misinformation or perpetuate negative topics?
  • Is the organization familiar enough with the use case to spot errors in the GAI?
  • When are humans involved in the decision-making process? Should that change?
  • What mechanisms exist to provide feedback on the results to improve the technology?
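
Here is the feedback-logging sketch mentioned above: record every GAI output so end users can flag concerns and humans can review, override and correct them. All names here are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class OutputRecord:
    prompt: str
    response: str
    flags: list = field(default_factory=list)  # user-reported concerns

audit_log = []  # every generated output, kept for monitoring and review

def record_output(prompt, response):
    rec = OutputRecord(prompt, response)
    audit_log.append(rec)
    return rec

def flag_output(rec, concern):
    """End users and those impacted can raise concerns for remediation."""
    rec.flags.append(concern)

rec = record_output("Summarize policy X.", "Policy X says ...")
flag_output(rec, "Cites a policy section that does not exist.")
print([r.flags for r in audit_log if r.flags])  # the human review queue
```
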
While generative AI can provide a competitive edge, organizations should weigh the potential risks and rewards before investing capital. That consideration should start with organizational values, which anchor an effective strategy. We propose a model based on the principles of human-centricity, transparency, robustness, privacy and security, inclusivity and accountability as a starting point for the responsible use of generative AI. By implementing generative AI intentionally, organizations can mitigate adverse impacts and promote positive outcomes for individuals and society.