Generative AI could raise questions for federal records laws
A clause in a DHS agreement with OpenAI opens the door to some debate on transparency issues.
By Rebecca Heilweil, FEDSCOOP, Apr. 22, 2024
The Department of Homeland Security has been eager to experiment with generative artificial intelligence, raising questions about what aspects of interactions with those tools might be subject to public records laws.
In March, the agency announced several initiatives that aim to use the technology, including a pilot project that the Federal Emergency Management Agency will deploy to address hazard mitigation planning, and a training project involving U.S. Citizenship and Immigration Services staff. Last November, the department released a memo meant to guide its use of the technology. A month later, Eric Hysen, the department’s chief information officer and chief AI officer, told FedScoop that there’s been “good interest” in using generative AI within the agency.
But the agency’s provisional approval of a few generative AI products — which include ChatGPT, Bing Chat, Claude 2, DALL-E 2, and Grammarly, per a privacy impact assessment — calls for closer examination with regard to federal transparency. Specifically, an amendment to OpenAI’s terms of service uploaded to the DHS website establishes that outputs from the model are considered federal records and references freedom of information laws.