Large Language Models for Code Analysis: Do LLMs Really Do Their Job?

Authors: 

Chongzhou Fang, Ning Miao, and Shaurya Srivastav, University of California, Davis; Jialin Liu, Temple University; Ruoyu Zhang, Ruijie Fang, and Asmita, University of California, Davis; Ryan Tsang, University of California, Davis; Najmeh Nazari, University of California, Davis; Han Wang, Temple University; Houman Homayoun, University of California, Davis

Abstract: 

Large language models (LLMs) have demonstrated significant potential in natural language understanding and programming code processing tasks. Their capacity to comprehend and generate human-like code has spurred research into harnessing LLMs for code analysis purposes. However, the existing body of literature falls short of delivering a systematic evaluation and assessment of LLMs' effectiveness in code analysis, particularly in the context of obfuscated code.

This paper seeks to bridge this gap by offering a comprehensive evaluation of LLMs' capabilities in performing code analysis tasks. Additionally, it presents real-world case studies that employ LLMs for code analysis. Our findings indicate that LLMs can indeed serve as valuable tools for automating code analysis, albeit with certain limitations. Through meticulous exploration, this research contributes to a deeper understanding of the potential and constraints associated with utilizing LLMs in code analysis, paving the way for enhanced applications in this critical domain.
