
Serverless computing has become a popular cloud paradigm for designing, testing, and deploying applications as serverless functions. It has attracted wide attention in both industry and academia, with many large IT companies rolling out their own serverless platforms. Its major features are independent design, deployment, and automatic scalability, with Function-as-a-Service (FaaS) as its most popular implementation. Meanwhile, microservices have emerged as a dominant style for designing large enterprise applications; companies such as Amazon, Uber, and Spotify have already migrated their existing applications to the microservices style. With the advent of serverless platforms, migrating microservices to serverless has become an attractive way to build efficient applications with simpler infrastructure management, reduced operational overhead, and lower cost. However, existing studies report that serverless platforms suffer from cold-start latency. Therefore, this paper proposes a framework with two phases: (i) an empirical investigation of the impact of the programming language on the cold start of serverless functions, and (ii) an approach for migrating microservices to serverless platforms. The evaluation results identify the platform with the lowest cold-start latency and recommend the programming language with the lowest latency. After migrating the containerized microservices to serverless, a performance comparison is conducted: the applications designed as serverless functions exhibit better response time and throughput than the containerized microservices.
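The cold-start effect the abstract refers to can be illustrated with a minimal simulation. This is only a sketch: the `invoke` helper, the `state` dictionary, and the sleep durations are hypothetical stand-ins for a platform's runtime initialization and handler work, not the paper's actual benchmark.

```python
import time

def invoke(state):
    """Simulated serverless invocation: the first call pays a cold-start
    penalty (runtime/container initialization); subsequent calls are warm.
    Returns the observed end-to-end latency in milliseconds."""
    start = time.perf_counter()
    if not state.get("warm"):
        time.sleep(0.05)   # stand-in for cold-start initialization cost
        state["warm"] = True
    time.sleep(0.005)      # stand-in for the handler's own work
    return (time.perf_counter() - start) * 1000.0

state = {}
cold = invoke(state)                           # first call: cold start
warm = min(invoke(state) for _ in range(5))    # later calls: warm
print(f"cold: {cold:.1f} ms, warm: {warm:.1f} ms")
```

A real measurement harness would time HTTPS invocations against deployed functions after forced idle periods; the relative gap between the first (cold) and subsequent (warm) calls is what the paper's empirical phase investigates across languages and platforms.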