As a tech nerd, few things pique my interest like the cutting edge. As the architect behind the IDaaS cloud infrastructure at BIO-key, I’ve had the pleasure and rare opportunity to build a system from the ground up that utilizes the latest design philosophies and toolsets available to the industry. What are these tools, and what makes them different? Cloud Computing as a paradigm traces its origins back to the early 2000s, which in turn emerged from the primordial soup of the “datacenter” computing model used at IBM and DEC in the 1960s. Over time, Cloud Computing has coalesced into our contemporary understanding of the idea, the epitome of which is the whimsically named “Kubernetes,” an open source system originally created by Google and now maintained by the Cloud Native Computing Foundation. Here, I’d like to talk a bit about Kubernetes, how it came to be, and why we use it, especially for our IDaaS Cloud Infrastructure.
A Brief History
As it turns out, Google had been experimenting with Cloud-style tools long before the idea caught widespread interest. Kubernetes is the latest in a series of systems conceived by Google after nearly two decades of continuous iteration on its own internal cluster management systems. The first of these systems, known as “Borg,” was created in 2003 and introduced the idea of clustering machines to improve compute resource utilization. Borg also added isolation between latency-sensitive, user-facing services and CPU-heavy batch workloads. A follow-up system known as “Omega” further improved on Borg’s design by breaking its functionality into separate components, rather than handling everything from within one large centralized application. This decoupling alleviated constraints and opened the system up to rapid development. In 2014, Kubernetes became the third take on this system and the first to be open source. Kubernetes was developed with a strong focus on ease of use from the perspective of application developers. But what does it do?
High Availability
One of the primary challenges when hosting applications in the Cloud is making sure they’re always available. This is known as High Availability (HA): a system is expected to operate continuously without failing. In general, this is accomplished by running multiple instances of the same application in parallel, which introduces redundancy into the system. This way, if any instance of the application fails for any reason, there’s always another ready to take its place. With Kubernetes, this kind of fallback maneuver is built in and fully automated. Redundancy also helps achieve HA through traffic distribution: by spreading incoming traffic across a number of replicas, we spread out the work each replica has to do, reducing bottlenecks and lightening the load any one replica takes on.
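To make this concrete, here is a minimal sketch of how redundancy and traffic distribution are expressed in Kubernetes: a Deployment that keeps three identical replicas running, and a Service that load-balances across them. The application name, container image, and port here are hypothetical placeholders, not a real workload.

```yaml
# Hypothetical Deployment: Kubernetes keeps three identical replicas alive,
# automatically replacing any that fail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                        # hypothetical name
spec:
  replicas: 3                          # redundancy: three parallel instances
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: example.com/web-app:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
# Service: spreads incoming traffic across all healthy replicas.
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web-app                       # matches the Deployment's pods
  ports:
    - port: 80
      targetPort: 8080
```

If one replica crashes, the Deployment controller starts a replacement while the Service quietly routes traffic to the two that remain, so users never notice.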
Scalability
Once an application is Highly Available to its users, the next consideration may be: how many users can this application handle? 10,000? 100,000? Kubernetes helps to eliminate this concern as well. As mentioned, an application achieves HA by running a number of replicas at once, and this same mechanism is at the core of how applications scale to meet demand. By actively monitoring the strain placed on the system, Kubernetes can increase the number of application replicas when the strain becomes too great. Conversely, it may detect a surplus of compute resources and reduce the number of replicas to optimize resource use. This process is known as horizontal scaling; “horizontal” in this context refers to the adding and removing of application replicas, in essence spreading the application out sideways. Though less common, vertical scaling also exists: by moving the application onto increasingly powerful machines, vertical scaling provides it with more and more resources to meet demand.
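In Kubernetes, this strain-monitoring loop is typically handled by a HorizontalPodAutoscaler. The sketch below, assuming a hypothetical Deployment named "web-app", tells Kubernetes to keep average CPU utilization near a target by growing or shrinking the replica count within bounds we choose.

```yaml
# Hypothetical autoscaler: watches CPU strain on the "web-app" Deployment
# and adjusts its replica count between 3 and 10.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app                  # hypothetical Deployment to scale
  minReplicas: 3                   # never drop below HA's redundancy floor
  maxReplicas: 10                  # cap spend during demand spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

Note how the floor and ceiling encode a business decision: the minimum preserves redundancy, while the maximum bounds cost.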
Self Healing
Yet another building block for reliable distributed systems is Kubernetes’ self-healing functionality. In addition to monitoring the strain on an application, Kubernetes also monitors its health. In the event that a replica is found to be unhealthy (which could indicate anything from an application crash to a machine dropping off the network), that replica is immediately restarted or replaced with a healthy one. By constantly watching for and remedying parts of the system that have entered a failed state, Kubernetes eliminates concerns over inevitable hardware and application malfunctions.
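Health monitoring is configured through probes on each container. The fragment below is an illustrative sketch (the image and endpoint paths are hypothetical): a liveness probe that triggers a restart when the application stops responding, and a readiness probe that pulls a struggling replica out of the traffic rotation until it recovers.

```yaml
# Hypothetical container spec fragment showing self-healing probes.
containers:
  - name: web-app
    image: example.com/web-app:1.0   # hypothetical image
    livenessProbe:                   # repeated failures restart the container
      httpGet:
        path: /healthz               # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 10        # give the app time to start up
      periodSeconds: 15
    readinessProbe:                  # failures remove the pod from Service traffic
      httpGet:
        path: /ready                 # hypothetical readiness endpoint
        port: 8080
      periodSeconds: 5
```

The distinction matters: liveness failures mean “this replica is broken, replace it,” while readiness failures mean “this replica is busy or warming up, stop sending it traffic for now.”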
Seeing IDaaS in Action
Any one of these features is powerful on its own, but when they come together and cooperate to keep your application alive, the system becomes remarkably resilient. What’s more, these are just a subset of the features Kubernetes provides. With this, it’s easy to see the decades of lessons that have gone into hardening the hosting of Cloud applications. All of this makes Kubernetes an undoubtedly impressive piece of software, and a key part of what has enabled us to create IDaaS.
At BIO-key, I can tell you from personal experience developing PortalGuard IDaaS that the ideal IDaaS solution aims to improve the customer experience, giving users the options and flexibility they need to remain secure. In hybrid environments, adding an IDaaS solution secures all access: every user is protected no matter their location, and you choose the unique requirements for each individual and account. This means third-party vendors and remote workers can be held to more rigorous security measures.
An IDaaS solution must adapt to your IAM strategy, remaining flexible while offering many authentication options and possibilities. As a solution that started on-premises over 10 years ago and has since migrated to the cloud, PortalGuard IDaaS offers full support for all environments. With support for multiple single sign-on (SSO) protocols, PortalGuard provides a seamless experience for customers and employees to access ALL their applications.
See PortalGuard IDaaS in action to learn how we put you back in control of your IAM strategy. You can also reach out to me if you have any questions.