Large language models and generative artificial intelligence agents like ChatGPT have captured the public’s attention, but the Defense Department’s chief digital and AI officer said he worries about the profound havoc such tools could wreak across society. Craig Martell, who leads the Pentagon’s Chief Digital and Artificial Intelligence Office, said Wednesday that he was “scared to death” by how consumers might use ChatGPT and other consumer-facing AI agents.
Such tools, which can respond to simple prompts with long text answers, have raised concerns about the end of the academic essay and have even been floated as a better way to answer patients’ medical questions. But because they generate plausible-sounding text by pattern-matching over human-created training material, they do not reliably produce accurate content. Asked about the impact of large language models such as ChatGPT on society and national security, Martell, who managed machine learning at Lyft and spent years in academia, did not give a tame answer.
“My concern is that the service providers do not provide the necessary safeguards or the capability for us to verify” the information, he said. That could lead people to trust the answers such tools provide even when they are inaccurate. Adversaries seeking to use disinformation to influence Americans could also make great use of such tools, he said, because the text they produce is written so fluently that it is perfect for the purpose. “This information triggers our own psychology to think ‘of course this thing is authoritative.’”
While using such tools can feel like an exchange with a human being, Martell warned they lack a human understanding of context, which is how technologist Aza Raskin was able to pose as a 13-year-old and get an LLM to give him advice on how to seduce a 45-year-old man.
The Chief Digital and Artificial Intelligence Office, which Martell heads, is primarily responsible for the Defense Department’s AI efforts and all the computer infrastructure and data organization that goes into those efforts. Martell made his comments during AFCEA’s TechNetCyber event in Baltimore to a room full of software vendors, many of whom were selling AI platforms, tools, and solutions.
“I want to tell the industry: Don’t only sell us generation. Work on detection,” so that users and consumers of content can more easily differentiate AI-generated content from human-generated content, Martell said.
In terms of his own priorities for the Defense Department, Martell said the first is putting in place data sharing infrastructure and policies to allow the military to realize its aspirations for Joint All Domain Command and Control, or JADC2.
“It needs the appropriate infrastructure to allow data to flow to the right places,” he said. “If I could set up the construction of this infrastructure so that data can flow correctly, back and forth, and across different levels of classification, then that would be an important first step to realizing that vision.” Part of that is helping combatant commands get a much better understanding of the data they have, the data they need, and the data they need to share.
Not everyone in the Defense Department shares Martell’s concerns about AI and large language models. ChatGPT had written a portion of a speech given a few days earlier by Lt. Gen. Robert Skinner, who heads the Defense Information Systems Agency. Speaking to reporters during a roundtable discussion on Wednesday, Skinner said, “I’m not scared generally about it… I think it’s gonna be a challenge” for the Defense Department to use AI correctly, but one the department can rise to. “What I’m cautious of is: this has to be a national-level issue.”
Steve Wallace, DISA’s chief technology officer, said “There’s a number of places…that we’re looking to possibly take advantage of [next-generation AI], from back office capabilities and contract generation, data labeling, right?”
But even here, Martell cautioned against being too enthusiastic about the promise of AI, particularly AI tools for labeling data. “They just don’t work…What works is human beings who are experts in their field telling the machine this is A; this is B; this is A; this is B; and this is B; and then that’s what gets fed into the algorithm generator…to generate a model for you.”
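The workflow Martell describes, domain experts telling the machine “this is A; this is B” and an algorithm then turning those labeled examples into a model, is ordinary supervised learning. The sketch below illustrates it with a tiny nearest-centroid learner; the feature values, labels, and the learner itself are illustrative assumptions, not anything from the article.

```python
# Sketch of human-in-the-loop labeling: experts supply (features, label)
# pairs, and only then does an algorithm fit a model to them.

def train(labeled_examples):
    """Build a nearest-centroid model from expert-labeled (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in labeled_examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    # Each class is summarized by the mean of its expert-labeled examples.
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(model, features):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

# Experts tell the machine "this is A; this is B" (values below are invented):
expert_labels = [
    ([1.0, 0.2], "A"),
    ([0.9, 0.1], "A"),
    ([0.1, 1.0], "B"),
    ([0.2, 0.8], "B"),
    ([0.0, 0.9], "B"),
]
model = train(expert_labels)
print(predict(model, [0.95, 0.15]))  # prints "A"
```

The point of the sketch is where the labels come from: every “A” and “B” above was asserted by a human before any algorithm ran, which is Martell’s argument against tools that promise to label data automatically.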
Martell isn’t necessarily opposed to deploying AI even in very high-stakes instances. His main concern is the perception that such tools are so easy to use that users feel they don’t have to put in the effort of monitoring and training them. AI, in Martell’s view, is a highly human-driven asset.
“No model survives first contact. Every model ever built is already stale by the time you get it. The model was built on historical data, because that’s all there was to build it with. … We need to build tools that allow these systems to be monitored to make sure they’re continuing to bring the value that they were paid for in the first place.”
The post The Pentagon’s AI Chief Is ‘Scared to Death’ of ChatGPT appeared first on Defense One.