With its ability to generate swaths of copy, create and fix code, and perform complex financial calculations in a matter of seconds, OpenAI’s ChatGPT seems primed to transform the workplace forever. The tool, an advanced chatbot built to answer queries conversationally, has already set a record for the fastest-growing user base in history, surpassing even TikTok’s pace by reaching 100 million monthly active users in January.
But generative AI technology, which includes chat tools like ChatGPT and image-creating tools like Midjourney or DALL-E, has already raised red flags, especially around accuracy, bias, data privacy, and plagiarism. CNET, one of the first publications to openly publish AI-generated content (though not via ChatGPT specifically), had to issue corrections on 41 of the 77 articles its AI produced. Academics who recently published a research paper predominantly written by ChatGPT also found that the AI fabricated references, convincingly attributing them to experts commonly cited in their field.
Those missteps should guide leaders as they explore the use of AI in their business. “The implications are massive,” says Denise Graziano, CEO of consulting firm Graziano Associates. “One mistake can completely ruin market share.”
First, the pitfalls. Amid today’s economic uncertainty, overworked staff may turn to AI in a pinch, and may be less likely to disclose it. “People who are overworked, or in places where there’s not enough of the right people in the right seats could decide to use this to cut corners,” warns Graziano. “That could be a real danger if it’s not verified.”
With the rapid proliferation of ChatGPT, IP ownership and plagiarism issues could also arise. For example, if two agencies both used ChatGPT to generate copy for a brand client pitch, the work might look remarkably similar without a human eye or voice to make it unique. Ultimately, the output from ChatGPT is derivative, not net new.
Essentially, any AI tool is only as good as the data it scrapes, and unfortunately, historical data comes loaded with systemic bias. One executive, Erica Bartman, recently queried the tool for team names: one set for a group of young professional women and another for young professional men. The team names for men included more gender-neutral options like “The Trailblazers” and “The Visionaries,” while the names for women were reductive and gendered, like “Boss Babes United” and “Femme Force.” The ACLU has also called attention to how AI tools can reinforce systemic discrimination, pointing to AI-assisted lending tools that overcharge marginalised communities, and AI-driven hiring tools that reinforce disability discrimination by failing to make the necessary (and human-driven) accommodations.
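For teams that want to see this kind of skew for themselves, one lightweight approach is to send the model paired prompts that differ only in a demographic detail and compare the outputs side by side. The sketch below is a hypothetical illustration only: the generate function is a stand-in for whatever model or API a team actually uses, not any specific product’s interface.

# Hypothetical sketch: probe a text model with paired prompts that differ
# only in a demographic attribute, so reviewers can compare outputs for bias.
# `generate` is a placeholder for whatever model call a team actually uses.
from typing import Callable, Dict

def bias_probe(generate: Callable[[str], str],
               template: str,
               variants: Dict[str, str]) -> Dict[str, str]:
    """Run the same prompt template once per demographic variant."""
    return {label: generate(template.format(group=group))
            for label, group in variants.items()}

if __name__ == "__main__":
    # Stand-in for a real model call (e.g. an API request).
    def generate(prompt: str) -> str:
        return f"[model output for: {prompt}]"

    results = bias_probe(
        generate,
        template="Suggest five team names for a group of young professional {group}.",
        variants={"women": "women", "men": "men"},
    )
    for label, output in results.items():
        print(label, "->", output)  # reviewers compare these outputs manually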
Lastly, privacy and data concerns must be top of mind. Major corporations like JPMorgan Chase and Amazon have recently banned or restricted internal use of ChatGPT, concerned that employees may be inputting sensitive customer data or proprietary code into the platform. Italy has even temporarily banned the service nationwide while regulators investigate whether it complies with the EU’s General Data Protection Regulation (GDPR), which governs how user data is stored, used, and processed.
That’s not to say leaders shouldn’t encourage using AI to explore its capabilities, but now is the time to start developing and articulating usage policies, even if those policies are likely to remain a moving target for the foreseeable future. Nearly half of the HR executives recently surveyed by Gartner say their firms are already drafting policies on how to use AI.
Building a culture of transparency around these new tools can help. “Leaders should coach anyone using ChatGPT to be transparent about its use, not claim content produced by ChatGPT as their own,” says Andrea Lagan, COO of HR platform Betterworks. Betterworks has invested in an AI team that reviews any plans to use ChatGPT before its output goes into production or is released publicly, and ensures that any materials used externally are analysed and fact-checked by humans. Policies and practices like this should be adopted and communicated as soon as possible to manage downside risk. “All it takes is one error to create a brand firestorm,” says Graziano.
With early adopters having surfaced some of this technology’s biggest pitfalls, now is a good time to introduce teams to ChatGPT and test its vulnerabilities. “I encourage doing a SWOT analysis with the team. Have people poke holes in it, ask ‘What’s wrong with this?’ and ‘How could we get into trouble?’” says Graziano. “But also ask, ‘What can we do with this that our competitors aren’t doing? How can this be an opportunity for growth?’” This gets people involved so they feel part of the solution, she says, rather than disempowered by yet another policy to follow.
Companies should also ask vendors rigorous questions about how their AI systems are trained, and be prepared to do the same for any AI-informed tool developed internally. “It is important for vendors to demonstrate that models work and audit them for bias,” says Dr. Lindsey Zuloaga, Chief Data Scientist at HireVue, which uses AI in some of its recruitment tools. “Creators of these tools should prioritise creating an AI Explainability Statement, which is a valuable third-party process that documents to the public, customers, and users how a given technology is developed and tested.”
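To make “audit them for bias” concrete, one common baseline check in hiring contexts is the four-fifths rule: the selection rate for any group should be at least 80% of the rate for the most-selected group. The sketch below is a minimal illustration of that check under the assumption that you already have selection counts by group; it is not HireVue’s methodology or any vendor’s actual audit process.

# Minimal sketch of an adverse-impact ("four-fifths rule") check, a common
# baseline in bias audits of hiring tools. Assumes counts of selections and
# applicants per group are available; hypothetical data, not a vendor's method.
from typing import Dict, Tuple

def adverse_impact_ratios(counts: Dict[str, Tuple[int, int]]) -> Dict[str, float]:
    """counts maps group -> (selected, total). Returns each group's selection
    rate divided by the highest group's selection rate."""
    rates = {g: sel / tot for g, (sel, tot) in counts.items() if tot > 0}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

if __name__ == "__main__":
    example = {"group_a": (48, 100), "group_b": (30, 100)}  # hypothetical data
    for group, ratio in adverse_impact_ratios(example).items():
        flag = "review" if ratio < 0.8 else "ok"  # four-fifths threshold
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")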
Like any technology, ChatGPT has the power to accelerate innovation and simplify mundane tasks. But without careful thought, questioning, and regulation, it and tools like it will only re-inscribe long-held biases and beliefs. Now is the time for leaders to shape the conversation around ChatGPT, encouraging robust debate about the technology both internally and externally. That way, AI can be used more inclusively and equitably for the real-life humans it affects.