From 6b4502b5d23200be2d5466e86898b36d4e92a545 Mon Sep 17 00:00:00 2001
From: Miltos Allamanis
Date: Tue, 14 Nov 2023 10:57:39 +0000
Subject: [PATCH] Add Liu et al.

---
 _publications/liu2023code.markdown | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_publications/liu2023code.markdown b/_publications/liu2023code.markdown
index 1600487d..15cf547a 100644
--- a/_publications/liu2023code.markdown
+++ b/_publications/liu2023code.markdown
@@ -7,6 +7,6 @@ conference:
 year: 2023
 additional_links:
    - {name: "ArXiV", url: "https://arxiv.org/abs/2305.05383"}
-tags: ["execution", "dynamic"]
+tags: ["Transformer", "execution"]
 ---
 Code execution is a fundamental aspect of programming language semantics that reflects the exact behavior of the code. However, most pre-trained models for code intelligence ignore the execution trace and only rely on source code and syntactic structures. In this paper, we investigate how well pre-trained models can understand and perform code execution. We develop a mutation-based data augmentation technique to create a large-scale and realistic Python dataset and task for code execution, which challenges existing models such as Codex. We then present CodeExecutor, a Transformer model that leverages code execution pre-training and curriculum learning to enhance its semantic comprehension. We evaluate CodeExecutor on code execution and show its promising performance and limitations. We also demonstrate its potential benefits for code intelligence tasks such as zero-shot code-to-code search and text-to-code generation. Our analysis provides insights into the learning and generalization abilities of pre-trained models for code execution.