
A Successful Journey: Migrating from On-Premise Kubernetes to AWS Serverless


The transition from on-premise Kubernetes to AWS Serverless presents organisations with an opportunity to enhance scalability, streamline maintenance, and foster innovation. Operating on-premise Kubernetes infrastructure comes with challenges, particularly in scaling and maintaining individual components. If the system is tightly coupled, it can hinder independent scaling, often leading to performance bottlenecks and sluggish response times. In some cases, an application hosted on on-premise Kubernetes leaves infrastructure sitting idle, adding unnecessary cost. With the emergence of AWS Serverless and its modern technologies, however, businesses are breaking free from these legacy constraints and embracing a more agile and cost-effective approach.

On-premise Kubernetes to AWS Serverless migration was the subject of a recent podcast hosted by Devoteam. Prabhat Handoo, the host, spoke with Robert Lotter, a lead DevOps and AWS consultant at Devoteam, about helping a large organisation in Germany transition from on-premise Kubernetes to serverless on AWS. During the episode, Robert offers advice on overcoming migration obstacles and discusses the best practices that contributed to this success.

Key Components of the Service and its Significance

  • The core service, the heart of the system, is an HTTP service with very complex business logic. It needs to run 24×7 and cannot tolerate downtime or failures of essential services.
  • The sync service synchronises data from an external data source to a local database. It is also critical, because that database is the foundation for the complete business logic (a minimal sketch of such a sync job follows this list).
  • Simulation services, used for testing and for simulating campaigns.
  • A front-end application that the customer uses directly for configuration, log visualisation, testing, and so on.
  • Lastly, not an application component as such, but the database itself, in our case PostgreSQL. It is the most critical part, because the system no longer works without a database. The customer was running a PostgreSQL server on-premise for this setup.
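To make the sync service more concrete, here is a minimal, hypothetical sketch of such a job in Python: it pulls records from an assumed external HTTP API and upserts them into the local PostgreSQL database. The endpoint URL, the campaigns table, and its columns are illustrative assumptions, not the customer's actual schema.

```python
# Hypothetical sync job: pull records from an external API and upsert them
# into the local PostgreSQL database. Endpoint, table, and columns are
# illustrative placeholders only.
import os

import psycopg2
import requests

EXTERNAL_API = "https://example.com/api/campaigns"  # assumed external data source
DSN = os.environ["DATABASE_URL"]                    # e.g. postgres://user:pass@host/db


def sync_once() -> int:
    """Fetch all records once, upsert them, and return the number of rows written."""
    records = requests.get(EXTERNAL_API, timeout=30).json()

    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        for rec in records:
            cur.execute(
                """
                INSERT INTO campaigns (id, name)
                VALUES (%s, %s)
                ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name
                """,
                (rec["id"], rec["name"]),
            )
    return len(records)


if __name__ == "__main__":
    print(f"synced {sync_once()} records")
```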

Discovering and Assessing the Customer’s Infrastructure

The infrastructure already looked quite good; it had been running for a few years with this design, and all the applications were hosted as Docker containers on a Kubernetes cluster. There was also a great deal of redundancy: for the integration environment and the production environment, for example, we ran two identical complete clusters purely for high availability. As you can imagine, this is a costly undertaking. For the CI/CD part, we shared a Jenkins server with other teams and other projects, and there was a Harbor artefact registry for storing Docker images, provided by the customer and also shared with other teams. Finally, as already mentioned, the database cluster ran on dedicated VMs and was set up and operated by us, which was one of the pain points.

Ten years ago this would have been a good setup, but with the advent of the cloud, people are moving towards cloud-native services, microservices, and containers; on AWS, for example, there are three container services: EKS, ECS, and Fargate. Regarding how to get there, the research we did for this project showed that there were scaling issues, that the customer’s shared resources were being contended for, and that the cost of running all of this on-premise was relatively high. The AWS setup came to roughly one-fifth of what the customer was spending on-premise: around 10,000 dollars per month on-premise versus approximately 2,000 dollars per month on AWS. That is a very good cost saving for the customer.

The work we have been doing for customers, modernisation through migration, is an excellent approach for customers to consider, and this is exactly the kind of project we want to take on.

Resolving Initial Challenges: Customer Issues and Solutions

We had a lot of redundancy and high cost with the clusters, and many of the applications were running on idle nodes because they were rarely used; the simulation services, for example, resulted in high costs and unused resources. Lambda solved all of these pain points. Lambda is a serverless function service, essentially function-as-a-service in the cloud: it runs entirely without dedicated servers. On the CI/CD side, we sometimes had long or interrupted builds because of other teams; this was eliminated by AWS CodePipeline, a pipeline dedicated to us that also runs serverless. With the Docker registry we had many connection problems; with Lambda we no longer need a Docker registry at all. The most significant pain point was the database: its storage did not scale automatically, and we really ran into problems at times. Patching the database servers and the database engine was a real pain point that sometimes caused downtime, and the setup and maintenance of the database itself were not pleasant either. All of these points were resolved by using Amazon RDS.
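To illustrate how a rarely used service stops costing money when idle, here is a minimal sketch of a simulation task packaged as a Lambda handler behind an API Gateway proxy integration. The event shape and the run_simulation helper are hypothetical stand-ins for the existing container code; the point is simply that nothing runs, and nothing is billed, until the function is invoked.

```python
import json


def run_simulation(campaign_id: str, iterations: int) -> dict:
    """Hypothetical stand-in for the existing simulation logic lifted out of the container."""
    return {"campaign_id": campaign_id, "iterations": iterations, "status": "ok"}


def handler(event, context):
    """Lambda entry point: parse the (assumed) request payload and run one simulation."""
    body = json.loads(event.get("body") or "{}")
    result = run_simulation(
        campaign_id=body.get("campaign_id", "demo"),
        iterations=int(body.get("iterations", 1)),
    )
    # API Gateway proxy integrations expect a statusCode/body response.
    return {"statusCode": 200, "body": json.dumps(result)}
```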

RDS has gained a lot of popularity over the years. Back in the day, when customers had to patch their database servers themselves, it was a nightmare for many database administrators. The usual advice was “do not touch a running system”, but you have to touch it, because it needs to be patched. The right way to handle this is to use managed services such as Amazon RDS, or at least to do it in a repeatable way.
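The two pain points named above, storage that does not grow on its own and patching that causes downtime, map directly onto RDS settings. The sketch below, using boto3, is an assumption about how such an instance could be provisioned; the identifiers, sizes, and engine version are placeholders, not the customer’s actual configuration, and a real deployment would normally live in infrastructure-as-code.

```python
import boto3

rds = boto3.client("rds", region_name="eu-central-1")  # assumed region

# Placeholder values throughout; shown only to tie RDS features to the pain points above.
rds.create_db_instance(
    DBInstanceIdentifier="core-service-db",
    Engine="postgres",
    EngineVersion="15.4",                 # assumed engine version
    DBInstanceClass="db.t3.medium",       # assumed instance size
    MasterUsername="app",
    ManageMasterUserPassword=True,        # let RDS manage the credential in Secrets Manager
    AllocatedStorage=100,                 # GiB to start with
    MaxAllocatedStorage=500,              # storage autoscaling ceiling (GiB)
    AutoMinorVersionUpgrade=True,         # RDS applies minor engine patches for you
    MultiAZ=True,                         # standby replica for high availability
    BackupRetentionPeriod=7,              # automated daily backups kept for 7 days
)
```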

AWS Setup: Unveiling the Range of Services and Architecture

Here is a brief, high-level overview of the setup. We used many more AWS services, such as VPCs, security groups, and IAM roles, but this is the high-level architecture. In the top left corner is the entry point for clients, secured first by a web application firewall. The light blue path is the front-end path, consisting of a CloudFront distribution, an ALB, and S3. The other path goes through an API Gateway and calls the core service already mentioned. At the bottom you can see that we also have some event-driven architecture, where Lambda functions are triggered by EventBridge events, read messages from a message queue, and write back to the queue. As you can see, the whole back-end code is deployed as Lambda functions, so we do not have a single server running on our side.
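For the event-driven part at the bottom of the diagram, the following sketch assumes a scheduled EventBridge rule invokes a Lambda function that drains a batch of messages from one SQS queue and writes results back to another. The queue URLs and the process step are assumptions for illustration, not the customer’s actual code.

```python
import json
import os

import boto3

sqs = boto3.client("sqs")

# Assumed queue URLs, injected via environment variables in the Lambda configuration.
INPUT_QUEUE_URL = os.environ["INPUT_QUEUE_URL"]
OUTPUT_QUEUE_URL = os.environ["OUTPUT_QUEUE_URL"]


def process(payload: dict) -> dict:
    """Hypothetical business step applied to each message."""
    return {"processed": True, **payload}


def handler(event, context):
    """Invoked by an EventBridge rule; reads a batch from the input queue and writes results back."""
    response = sqs.receive_message(
        QueueUrl=INPUT_QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=2
    )
    messages = response.get("Messages", [])
    for message in messages:
        result = process(json.loads(message["Body"]))
        sqs.send_message(QueueUrl=OUTPUT_QUEUE_URL, MessageBody=json.dumps(result))
        # Delete only after the result has been written, so failed messages are retried.
        sqs.delete_message(
            QueueUrl=INPUT_QUEUE_URL, ReceiptHandle=message["ReceiptHandle"]
        )
    return {"handled": len(messages)}
```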

Serverless Technologies vs Microservices and Containers for Migration

It matters why you are doing this: the intent is a highly available and cost-effective solution, and nothing beats that. The customer had already seen some of the benefits of running their infrastructure this way, and the AWS services mentioned earlier were used for this customer. Notably, all of these services are AWS-native; nothing is third-party. You could have used GitLab or GitHub for CI/CD, but continuing with CodePipeline keeps everything within AWS, which is quite elegant.

Setting up Monitoring and Alerting for the Project

We also set up alarms with CloudWatch and SNS to send email notifications in case of errors or unexpected behaviour, and this is also done serverlessly. You do not need to provision servers or listeners; you configure your alarms and you are ready to go, which is impressive. The team did the entire infrastructure build as well as the back-end development of the application.
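As a hedged illustration of such an alarm, the snippet below creates an SNS topic with an email subscription and a CloudWatch alarm on the standard AWS/Lambda Errors metric for one function. The topic name, email address, and function name are placeholders, and the real setup would normally be defined in infrastructure-as-code rather than an ad-hoc script.

```python
import boto3

sns = boto3.client("sns")
cloudwatch = boto3.client("cloudwatch")

# Placeholder names; in practice these would come from IaC templates.
topic_arn = sns.create_topic(Name="core-service-alarms")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")

cloudwatch.put_metric_alarm(
    AlarmName="core-service-lambda-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "core-service"}],
    Statistic="Sum",
    Period=300,                       # evaluate in 5-minute windows
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",  # no invocations should not raise the alarm
    AlarmActions=[topic_arn],         # notify the SNS topic (and hence email) on ALARM
)
```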

Application Code Adaptations: Enabling Seamless Integration with Lambda

Our responsibility was to change the application code so that it would work with Lambda. Fortunately there was not a lot to do, but some code adaptations were necessary. Working with Lambda required slight modifications, but it was a straightforward process overall.
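One typical adaptation of this kind, shown below as an assumed sketch rather than the customer’s actual change: expensive initialisation such as opening the database connection moves out of the per-request code path into module scope, so Lambda can reuse it across invocations of the same execution environment, while a thin handler translates the API Gateway event into a call to the existing logic. The environment variable, table, and query are hypothetical.

```python
import json
import os

import psycopg2

# Created once per Lambda execution environment and reused across invocations,
# instead of reconnecting on every request as a long-running server might.
_connection = psycopg2.connect(os.environ["DATABASE_URL"])


def handler(event, context):
    """Thin adapter: translate the API Gateway event, call the existing logic, return a response."""
    body = json.loads(event.get("body") or "{}")
    with _connection.cursor() as cur:
        cur.execute("SELECT count(*) FROM campaigns WHERE name = %s", (body.get("name"),))
        (count,) = cur.fetchone()
    _connection.commit()
    return {"statusCode": 200, "body": json.dumps({"matches": count})}
```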

One of the things customers raise when you talk to them about re-platforming or re-architecting is: “we don’t have access to the developers who wrote the code, we don’t know exactly what exists, so why disturb something that is running?” But when you are talking about innovation and modernisation of your infrastructure, you sometimes do need to touch something that is running, because it is not fit for the future or no longer fits a business driver.

Customer involvement or team effort: Who drove the majority of the work?

The customer, of course, needed to support us in understanding the business logic itself, but regarding the infrastructure, we drove the architectural approach.

What did we achieve?

Working with this large enterprise from Germany, whose end users make it a kind of B2C model, some of the achievements were:

  • Availability: the customer now has a more highly available infrastructure.
  • Cost: the reduction was approximately 80%, and the cost for Lambda itself was around 20 dollars per month.
  • Scalability: Lambda provides a thousand concurrent executions.
  • Database: storage now scales, and database performance with RDS increased as well.
  • Operations: because we were using managed services, there was nothing left for us to manage or patch; AWS does it on its own.
  • Sustainability: a core theme of everything we do is sustainable IT, and compared with on-premise, this is a far more sustainable solution; we saw 95% powered by renewable energy in 2021.

How can customers get started?

To get started, Devoteam has a migrate to modernise offering. We were AWS Migration Partner of the Year, and much of that recognition comes from the kind of work Robert and the rest of the team in France, Germany, Belgium, Italy, and the UK are doing. With the migrate to modernise offering, we run a workshop with your teams and give you a report explaining the right path for you to move to AWS. The migration plan and the cloud strategy are aligned and taken into account when we write this report, so it is not a report about what we think is right to do; it aligns with your business drivers and goals.