Abstract
Large language models (LLMs) based on the Transformer architecture have demonstrated state-of-the-art performance on code generation benchmarks such as MBPP and HumanEval. In this talk, we will show how we used open-source LLMs to build a code generation workflow that can be trained internally on on-premises infrastructure and used to improve developer productivity by aiding in tasks such as unit-test generation, code documentation, code refactoring, code translation, code search, and code alignment.