CAT-LM: Training Language Models on Aligned Code And Tests

Nikitha Rao, Kush Jain, Uri Alon, Claire Le Goues, Vincent J. Hellendoorn

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Testing is an integral but often neglected part of the software development process. Classical test generation tools such as EvoSuite generate behavioral test suites by optimizing for coverage, but tend to produce tests that are hard to understand. Language models trained on code can generate code that is highly similar to that written by humans, but current models are trained to generate each file separately, as is standard practice in natural language processing, and thus fail to consider the code-under-test context when producing a test file. In this work, we propose the Aligned Code And Tests Language Model (CAT-LM), a GPT-style language model with 2.7 billion parameters, trained on a corpus of Python and Java projects. We utilize a novel pretraining signal that explicitly considers the mapping between code and test files when available. We also drastically increase the maximum sequence length of inputs to 8,192 tokens, 4x more than typical code generation models, to ensure that the code context is available to the model when generating test code. We analyze its usefulness for realistic applications, showing that sampling with filtering (e.g., by compilability, coverage) allows it to efficiently produce tests that achieve coverage similar to that of developer-written tests while resembling their writing style. By utilizing the code context, CAT-LM generates more valid tests than even much larger language models trained with more data (CodeGen 16B and StarCoder) and substantially outperforms a recent test-specific model (TeCo) at test completion. Overall, our work highlights the importance of incorporating software-specific insights when training language models for code and paves the way to more powerful automated test generation.
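The sampling-with-filtering strategy described in the abstract could look roughly like the sketch below. This is a minimal illustration, not the authors' released code: the callables for model sampling, the validity check, and coverage measurement are hypothetical placeholders for whatever model interface and test tooling is actually used.

# Minimal sketch of a sample-then-filter loop: draw several candidate test
# files conditioned on the code-under-test, keep only the valid ones
# (e.g. those that compile), and return the highest-coverage survivor.
from typing import Callable, List, Optional


def sample_and_filter(
    code_file: str,
    sample_test: Callable[[str], str],         # one model sample: code context -> generated test file (placeholder)
    is_valid: Callable[[str], bool],           # validity filter, e.g. does the generated test compile? (placeholder)
    coverage_of: Callable[[str, str], float],  # coverage achieved by a test on the code-under-test (placeholder)
    num_samples: int = 10,
) -> Optional[str]:
    """Return the best generated test file for `code_file`, or None if no
    sample passes the validity filter."""
    # The code-under-test is passed as context so the model can condition on it;
    # a long context window (8,192 tokens in CAT-LM) makes whole-file context feasible.
    candidates: List[str] = [sample_test(code_file) for _ in range(num_samples)]
    valid = [t for t in candidates if is_valid(t)]
    if not valid:
        return None
    return max(valid, key=lambda t: coverage_of(code_file, t))

In practice the validity and coverage checks would be backed by the language's own toolchain (e.g. a compiler or syntax check plus a coverage runner), but those details are outside what the abstract specifies.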

Original language: English
Title of host publication: Proceedings - 2023 38th IEEE/ACM International Conference on Automated Software Engineering, ASE 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 409-420
Number of pages: 12
ISBN (Electronic): 9798350329964
DOIs
State: Published - 2023
Externally published: Yes
Event: 38th IEEE/ACM International Conference on Automated Software Engineering, ASE 2023 - Echternach, Luxembourg
Duration: 11 Sep 2023 - 15 Sep 2023

Publication series

Name: Proceedings - 2023 38th IEEE/ACM International Conference on Automated Software Engineering, ASE 2023

Conference

Conference: 38th IEEE/ACM International Conference on Automated Software Engineering, ASE 2023
Country/Territory: Luxembourg
City: Echternach
Period: 11/09/23 - 15/09/23

Bibliographical note

Publisher Copyright:
© 2023 IEEE.

Keywords

  • code-test alignment
  • large language models
  • test completion
  • test generation

ASJC Scopus subject areas

  • Software
  • Safety, Risk, Reliability and Quality
  • Control and Optimization
