4 Tips for Checking the Accuracy of AI-Generated Output

Is AI output trustworthy?

The answer is, of course, “It depends.” The reliability of AI-generated output depends on the quality of the model and the accuracy of the data used to train it. Proprietary AI models that have been carefully designed for specific uses and trained using highly accurate data can generally be trusted (although human oversight is still essential).

However, popular tools such as ChatGPT are trained on data gathered from the Internet, which is often inaccurate, incomplete, outdated or even satirical. AI has no way of knowing whether information is correct — it simply generates responses based on probabilities.

Training data may also contain biases, and the AI model may perpetuate those biases in its outputs. Amazon had to scrap an AI recruiting tool found to be biased against women, and several companies have been sued for biased facial recognition.

Worst of all, AI can “hallucinate” and generate output that is completely fabricated. Several attorneys have been rebuked for filing AI-generated legal briefs that included fabricated legal citations. The City of Detroit and a police officer were sued after a woman was wrongfully arrested based on AI-generated evidence.

These AI blunders illustrate the legal, ethical and reputational risks associated with AI output. Much of the responsibility lies with organizations to develop, select and implement trustworthy AI technologies and ensure that they are used responsibly. However, users should also be skeptical of AI output and take simple steps to verify its accuracy.

  • If possible, ask the AI model to include sources when generating output. Be aware that AI may hallucinate sources or provide broken links. Verify that the sources exist and support the output.

  • Do not rely solely on the AI tool’s sources. Check facts such as names, dates, ages, company names and statistics against multiple, authoritative sources. Compare the information generated by AI with reliable, human-created content to identify inaccuracies, gaps or alternative perspectives.

  • Consider the AI tool itself. What is its purpose? Who funded it? How recent and comprehensive is its training data? Transparency is an important indicator of an AI model’s reliability.

  • Use critical thinking to evaluate the output. Look for logical inconsistencies, contradictions, and emotional or manipulative language. Check whether the information presents multiple perspectives and whether any bias may be influencing the output.
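The first tip above — asking for sources and confirming they exist — can be partly automated. The sketch below (function names are illustrative, not from any particular tool) pulls URLs out of AI-generated text and checks whether each one actually responds, since hallucinated citations often point to pages that do not exist:

```python
import re
import urllib.request

def extract_urls(text):
    """Pull http(s) URLs out of AI-generated text."""
    return re.findall(r"https?://[^\s)\"'>]+", text)

def url_resolves(url, timeout=5):
    """Return True if the URL responds with a non-error status.

    Hallucinated sources frequently return 404 or fail to connect.
    """
    try:
        req = urllib.request.Request(
            url, method="HEAD",
            headers={"User-Agent": "source-checker"},
        )
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False

# Example: scan an AI answer for cited links, then test each one.
answer = "Sources: https://example.com/study and (http://archive.org/item)"
for url in extract_urls(answer):
    print(url, "->", "reachable" if url_resolves(url) else "check manually")
```

A link that resolves is only the first hurdle: AI can also cite real pages that do not support the claim, so you still need to open each source and confirm it says what the output asserts.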