Hackers could exploit ChatGPT for cyber attacks on NHS and MoD, warns Oxford professor

ChatGPT could be exploited by hackers to launch cyber attacks on the NHS and the Ministry of Defence (MoD), an Oxford professor has warned.

The artificial intelligence tool is able to quickly produce computer programs that criminals could use to overwhelm and cripple government websites, said Michael Wooldridge, a professor of computer science at the university.

The programs can launch a denial of service (DoS) attack, in which a website is flooded with a huge number of requests and online traffic, triggering a crash.

ChatGPT and other large language models give access to an “extremely capable computer programmer” that has no “ethical principles whatsoever”, Prof Wooldridge told an audience at the Cheltenham Literature Festival on Wednesday evening.

He said: “At the moment, [if] you want to do that, launch a DoS attack on the NHS or MoD or whatever it is, or the air traffic control systems, you would have to write that code yourself or try and pay someone to do it.

“And, by and large, most programmers would not be inclined to accept that gig. They have some ethical principles, but now we have a large language model where everybody has that.”

‘No guardrails to stop capability’

He explained that ChatGPT can quickly write 20 to 30 lines of computer code that would conventionally take a day to produce.

OpenAI, the developer of ChatGPT, would “probably” try to stop users from using it for cyber attacks, Prof Wooldridge added, but he warned of the looming dangers from other AI tools.

“Within a few years, what happens if we have all got a large language model downloaded from the web on our laptop or smartphone and there are just no barriers, there are no guardrails to stop that capability to do harms [sic] in the hands of a lot more people.

“Cyber attacks are something that is very high profile and it seems clear to me that our Government is very worried about that as one of our key harms.”

His comments come as Rishi Sunak, the Prime Minister, prepares to host a global summit on AI safety next month, which will focus on the potential risks associated with the technology and how to control them.

The conference will take place at Bletchley Park, the home of British code-breaking operations during the Second World War, and will examine, among other things, how AI could be weaponised by bad actors.

The summit on Nov 2 will bring together governments, including representatives from China, as well as tech companies and academics from across the globe.
