ChatGPT and the Ethical and Legal Implications of Data and Technology


ChatGPT has taken the world by storm, but have we considered the ethical and legal implications of its use? OpenAI, the creators of ChatGPT, did not anticipate its viral uptake when they released it as a free research preview in November 2022. Companies have become enamored with it and are advocating for its use in core operations and new business development.

However, in a short period of time there have been unintended consequences, including the potential leak of corporate secrets and outputs containing non-factual statements. Before adopting technologies such as ChatGPT, companies must understand the risks and take a structured approach grounded in ethical principles and the responsible use of data and technology.

To prepare for the ethical and legal implications of your organization’s use of data and technology, ask the following four questions, inspired by the Belmont ethical principles:

  1. Do you understand why you are using AI? When adopting any new technology, it is important to understand its purpose and its holistic benefits for the organization. For example, some firms have opted to replace customer-facing staff with ChatGPT. While this may reduce costs and improve efficiency, the question is whether it truly aligns with the organization's values and whether it improves or worsens customer satisfaction. When customer information is involved, privacy measures must comply with global privacy laws and emerging AI legislation.
  2. Do you have safety measures in place if AI “goes rogue”? When using the technology, seek informed consent from employees and customers on how their information is handled and the benefit it provides. Safety measures should also be in place to mitigate scenarios where inaccurate or non-factual statements are made or algorithmic bias occurs (a minimal engineering sketch of such safeguards follows this list). Plan to minimize harm to customers and employees before any situation arises that produces wrong or inaccurate output.
  3. Who is responsible for end-to-end AI usage in your organization? When introducing a tool like ChatGPT, assign responsibility for understanding its purpose, the vision, the stakeholder ecosystem and the future roadmap. Assigning accountability solely to the technology, or to technology teams that may not fully understand the business requirements or ethical implications, can lead to harmful or negative consequences.
  4. How transparent is your organization’s AI usage? Transparency and concepts such as explainability are essential when using AI. Organizations must be transparent about how they are using AI and how it affects employees, customers and society. For example, under the hood ChatGPT uses a generative pre-trained transformer (GPT). Between the first GPT release in 2018 and GPT-3 (the model behind ChatGPT), the number of parameters grew from 117 million to 175 billion. The model not only needs large amounts of data but also relies on human interaction for ‘feedback.’ This complexity is no excuse to treat the system as a black box: organizations should understand the purpose of the technology, the data used to train the AI, who owns the IP, the system's limitations and how outputs are generated (see the sketch after this list).
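
The safety and transparency practices in questions 2 and 4 can be made concrete in engineering terms. Below is a minimal sketch, in Python, of an audit-logged, guard-railed model call. The `call_model` function, the `BLOCKED_TERMS` policy list and the record fields are hypothetical placeholders, not any vendor's actual API; a real deployment would use the provider's client library and purpose-built moderation tooling.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def call_model(prompt: str) -> str:
    """Hypothetical placeholder for whichever LLM API the organization uses."""
    return f"Model response to: {prompt}"

# Illustrative policy terms only; a real guardrail would be far more robust.
BLOCKED_TERMS = {"ssn", "credit card"}

def needs_human_review(output: str) -> bool:
    """Naive guardrail: flag outputs that touch sensitive topics."""
    return any(term in output.lower() for term in BLOCKED_TERMS)

def audited_completion(prompt: str, user_id: str,
                       model_version: str = "example-model-v1") -> dict:
    """Call the model and record an auditable trail of the interaction."""
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "model_version": model_version,
        "prompt": prompt,
    }
    output = call_model(prompt)
    record["output"] = output
    record["flagged_for_review"] = needs_human_review(output)
    # In practice, persist to a tamper-evident audit store, not just a log.
    audit_log.info(json.dumps(record))
    return record

if __name__ == "__main__":
    result = audited_completion("Summarize our returns policy.", user_id="agent-42")
    print(result["output"], "| review needed:", result["flagged_for_review"])
```

Recording the prompt, output, model version and review flag for every interaction gives an organization an evidence trail for answering "how are outputs generated?" when regulators, customers or employees ask.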

Preparing for Potential Impacts of AI

In addition to these questions, companies should also consider the following when adopting AI technology:

  • Bias and inclusivity: AI has the potential to exacerbate existing biases and inequalities that can exclude or misrepresent individuals. Companies should ensure their AI systems are designed to be inclusive and unbiased. This includes understanding how the data used to train AI may reflect existing prejudices.
  • Data privacy and security: AI systems require large amounts of data to function, so companies must ensure they are using data ethically and legally. This includes understanding global laws, such as GDPR, PDPA, CCPA and FADP, and having measures in place to secure data.
  • The impact on society and the environment: Companies should consider AI’s potential effects on job displacement and on the consumption and disposal of natural resources. This includes the impact not only on today's world but on future generations.

Companies must approach the use of AI ethically and take a structured approach to understanding the risks and unintended consequences. Regulation is still catching up: the Council of the European Union adopted its common position on the EU Artificial Intelligence Act in December 2022, yet by early 2023 observers had already identified limitations, particularly regarding technologies like ChatGPT. This will likely remain the case as AI continues to evolve.

By asking the four questions inspired by the Belmont ethical principles and by weighing bias and inclusivity, data privacy and security, and the impact on society and the environment, companies will be better positioned to use AI in a responsible and ethical manner.

ISG helps enterprises navigate the rapidly changing AI market and responsibly implement emerging technologies that support their business goals. Contact us to find out how we can help you.


About the author

Dorotea Baljević

Dorotea Baljević is a Principal Consultant in ISG’s Cognitive and Analytics solution area, enabling clients in their data transformations while delivering value across the entire ecosystem.

Dorotea provides support and counsel to customers on their current and future digital journeys, focusing on improving and enhancing the data and decision-making ecosystem to ensure healthy organisational longevity and relevance. Her experience spans innovation, green-field environments, existing transformations (including building high-performing teams) and decommissioning.