AI accountability is a critical aspect of responsible AI development and deployment. Privacy professionals play a key role in ensuring AI governance aligns with data protection and privacy practices, addressing concerns about bias, transparency, and ethical responsibility. By collaborating with legal and other stakeholders, privacy teams can help organizations navigate the complexities of AI regulation and mitigate risks associated with AI systems.
AI Risk Management
An Associated Press investigation revealed that OpenAI’s Whisper transcription tool fabricates text in medical and business settings despite warnings against such use. These fabrications, often called “confabulations” or hallucinations, can lead to serious consequences in medical settings, such as incorrect diagnoses and treatment plans. Despite OpenAI’s advice against using Whisper in high-risk domains, over 30,000 medical professionals reportedly use it for transcriptions. The tool’s inaccuracies stem from its reliance on predicting the most likely text rather than verifying accuracy, so it often fills gaps in the audio with incorrect or biased content.
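One way risk teams can operationalize this warning is to treat machine transcripts as unverified output and flag low-confidence spans for human review rather than accepting the raw text. The sketch below is a minimal illustration, assuming the open-source openai-whisper Python package and a hypothetical audio file name; it reads the per-segment avg_logprob and compression_ratio values the library returns (the same signals the library uses internally to detect decoding failures) to mark segments that warrant checking. It is not a description of how the clinics in the AP report deploy the tool.

```python
# Minimal sketch: surface per-segment confidence signals from Whisper output
# so low-confidence spans can be routed to human review.
# Assumes the open-source `openai-whisper` package; "visit_recording.wav" is
# an illustrative file name, not from the AP report.

import whisper

model = whisper.load_model("base")                 # small checkpoint for a quick sketch
result = model.transcribe("visit_recording.wav")

for segment in result["segments"]:
    text = segment["text"].strip()
    # Heuristic flags: a low average log-probability or a high compression
    # ratio often accompanies hallucinated/confabulated segments. The
    # thresholds below mirror the library's own defaults.
    suspicious = segment["avg_logprob"] < -1.0 or segment["compression_ratio"] > 2.4
    marker = "REVIEW" if suspicious else "ok"
    print(f"[{marker}] {segment['start']:7.2f}s to {segment['end']:7.2f}s  {text}")
```

In practice, a governance workflow would route the flagged segments back to the clinician or a reviewer before the transcript enters the record; the point is that the model's own confidence signals should gate downstream use, not replace human verification.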
The Office of Management and Budget (OMB) has issued guidance to agencies on responsible artificial intelligence acquisitions. The guidance emphasizes managing AI risks, promoting competition, and improving information sharing. Agencies are advised to identify AI early in the acquisition process, engage stakeholders with relevant equities, and include contractual requirements to manage AI risks effectively.