OpenAI’s GPT-4 Shows Limited Success in ID’ing Smart Contract Vulnerabilities

The weaknesses of large language models like ChatGPT are “too great to use reliably for security,” OpenZeppelin’s machine learning lead says


As artificial intelligence gains traction, executives at blockchain security firm OpenZeppelin said a recent company experiment proves the continued need for a human auditor.   

An OpenZeppelin study tested whether GPT-4 — OpenAI’s latest multimodal model designed to generate text and have human-like conversations — could identify various smart contract vulnerabilities within 28 Ethernaut challenges. 

GPT-4 has already been able to solve coding challenges on LeetCode, a platform for software engineers preparing for coding interviews, according to Mariko Wakabayashi, machine learning lead at OpenZeppelin. 

“We wanted to assess whether GPT-4’s strong results in traditional code and academic exams map equally to smart contract code, and if yes, if it can be used to detect and propose fixes for vulnerabilities,” Wakabayashi told Blockworks. 

GPT-4 was able to solve 19 of the 23 Ethernaut challenges introduced before its training data cutoff date of September 2021. It then failed four of the final five tasks.

The AI tool “generally lacks knowledge” of events that happened after September 2021, and “does not learn from its experience,” OpenAI states on its website.

An OpenAI spokesperson did not immediately return a request for comment. 

Though the security researcher running the experiment was initially surprised by how many challenges GPT-4 seemed to solve, Wakabayashi noted, it soon became clear there wasn’t “reliable reasoning” behind the model’s outputs.

“In some cases, the model was able to identify a vulnerability correctly but failed to explain the correct attack vector or propose a solution,” the executive added. “It also leaned on false information in its explanation and even made up vulnerabilities that don’t exist.”

For the problems that the AI tool did solve, a security expert had to offer additional prompts to guide it to correct solutions.

Extensive security knowledge is necessary to assess whether the answer provided by AI is “accurate or nonsensical,” Wakabayashi and Security Services Manager Felix Wegener added in written findings.

On level 24 of the Ethernaut challenges, for example, GPT-4 falsely claimed it was not possible for an attacker to become the owner of the wallet.

“While advancements in AI may cause shifts in developer jobs and inspire the rapid innovation of useful tooling to improve efficiency, it is unlikely to replace human auditors in the near future,” Wakabayashi and Wegener wrote.

OpenZeppelin’s test comes after crypto derivatives platform Bitget decided earlier this month to limit the company’s use of AI tools, such as ChatGPT.

The company told Blockworks that an internal survey found that in 80% of cases, crypto traders had a negative experience using the AI chatbot, citing false investment advice and other misinformation. 

Other crypto companies are more bullish on the technology, including Crypto.com, which launched an AI companion tool called Amy.

Abhi Bisarya, Crypto.com’s global head of product, told Blockworks in an interview that AI initiatives will be “game-changing” for the industry. 


Though large language models (LLMs) like ChatGPT have strengths, Wakabayashi told Blockworks, their weaknesses are too great to use reliably for security.

“However, it can be a great tool for creative and more open-ended tasks, so we’re encouraging everyone at OpenZeppelin to experiment and find new use cases,” Wakabayashi said.

