Meta does coding assistance
Your next coding assistant.
After GitHub’s Copilot and Amazon’s CodeWhisperer comes Meta’s Code Llama. This large language model (LLM) generates and analyzes code from text prompts. Like other assistants, it aims to make developers’ workflows faster and more efficient, and to lower the entry barrier for people learning to code. The model is distributed free of charge under an in-house license. It is based on the Llama 2 language model and, thanks to its specialized training, can generate both code and natural language about code, taking its cue from code or natural-language prompts (for example, “Write me a function that produces the Fibonacci sequence”). It can also complete and debug code. It supports many of today’s most popular programming languages, including Python, C++, Java, PHP, TypeScript (JavaScript), C#, and Bash.
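For a sense of what such a prompt asks for, here is a plain Python version of the Fibonacci function, written by hand for illustration (not actual model output):

```python
def fibonacci(n):
    """Return the first n numbers of the Fibonacci sequence."""
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

print(fibonacci(8))  # → [0, 1, 1, 2, 3, 5, 8, 13]
```

A coding assistant’s value lies in producing exactly this kind of boilerplate from a one-line natural-language request.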
The model comes in three sizes, with 7, 13, and 34 billion parameters respectively. The 7B and 13B models were trained with FIM (fill-in-the-middle) capability, which allows them to insert code into existing code. Your choice of model will depend on your available computing power and desired latency; Meta notes that the 7B model can be served on a single GPU. There are also two specialized variants: Code Llama - Python and Code Llama - Instruct. The former, as the name suggests, is optimized for programming in Python, while the latter is tuned to follow natural-language instructions. As usual, Meta remains tight-lipped about where its training data comes from.
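FIM works by showing the model the code before and after a gap and asking it to generate the middle. A minimal sketch of how such an infilling prompt might be assembled, assuming the prefix/suffix/middle special-token scheme described in Meta's materials (the exact token spelling is an assumption; in practice the model's tokenizer inserts these tokens for you):

```python
def build_infill_prompt(prefix, suffix):
    """Assemble a fill-in-the-middle prompt: the model sees the code
    surrounding the gap and generates what goes in between.
    Token names here are illustrative, not guaranteed to match the
    tokenizer's exact vocabulary."""
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

prompt = build_infill_prompt(
    "def circle_area(r):\n    return ",
    "  # uses math.pi\n",
)
print(prompt)
```

The model's completion would then be spliced back between the prefix and suffix, which is how editor integrations implement "insert code here" suggestions.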
⇨ Meta Newsroom, “Introducing Code Llama, an AI tool for coding.”
2023-08-24