As new technologies such as OpenStack and cloud computing environments have matured, they have made daily work in businesses, enterprises, and data centers easier and faster than ever before. Those of us who work in IT often take these tools for granted without knowing where they came from. To understand the many concepts and resources we have today, we need to know the history and see how far we have advanced on the contributions of earlier engineers; reflecting on what came before us can change how we think about delivering these services.
In the 1950s, large-scale mainframes were made available to corporations, government agencies, and schools, where users reached them only through a basic interactive interface called a “dumb terminal.” According to IBM, because a mainframe was so expensive per user in the early days, an organization would allow multiple users to share the CPU and the same data storage layer from any station, getting the most out of its investment in the advanced technology of the time (we still use a similar concept today). The mainframe’s primary job was to handle a high volume of input and output (I/O), organizing and storing large amounts of data that would otherwise be tedious and time-consuming to process by hand. It was, and still is, common for mainframes to work with massive databases and files; from then on, mainframe designs included peripheral processors to manage the I/O devices.
Moving forward to the 1970s, virtualization changed everything. The IBM System/370 mainframes, announced on June 30, 1970, allowed multiple virtual machines to run on a single physical node. The System/370 Models 155 and 165 gave users faster performance and higher storage capacity for data processing; they could use nearly all existing peripheral devices and support up to 800 million characters of disk storage. According to IBM, this design made I/O performance up to five times faster. Schools, governments, and corporations were looking for structured computers, larger storage, and greater speed and effectiveness, and the System/370 met those requirements. It could also perform logical and arithmetic operations in nanoseconds using monolithic integrated circuits, run 15 independent tasks at the same time, and move data faster between memory and the other system units attached through peripheral devices, widening the pipes and increasing their flow. Crucially, the virtual machines shared the underlying physical resources such as CPU, storage, and network. All of this functionality could be virtualized into a shared hosting environment for both public and private servers, a model still essential to today’s businesses and corporations.
In today’s world, delivering a high-quality application environment means giving users an instance, or virtual machine, whose resources nearly match production expectations: computation, networking, backup, storage and retrieval, security, and more. Many large hosting companies are moving toward OpenStack, often called “the future of cloud computing”; companies including Red Hat, HP, IBM, and Cisco, along with thousands of individuals, are members of the OpenStack community. OpenStack is an open-source IaaS (Infrastructure as a Service) cloud operating system project founded jointly by Rackspace and NASA on July 21, 2010. Its main purpose is to develop a massively deployable and scalable cloud operating system platform that lets users build and manage cloud computing platforms of any size, for both public and private clouds. According to the OpenStack website, “[it] is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed and provisioned through APIs with common authentication mechanisms.” OpenStack is composed of multiple components that address different user needs, including:
● Keystone provides API client authentication, plus token, catalog, policy, and assignment services once the client is authenticated.
● Nova is the main compute engine behind OpenStack, provisioning and managing virtual machines.
● Glance lets users upload and discover a repository of images used as templates when launching instances.
● Neutron provides networking as a service, handling communication between components quickly and efficiently.
● Heat is the orchestration component of OpenStack, launching template-based cloud applications.
● Cinder provides the block storage service.
● Horizon is the dashboard, the web frontend where users view and manage their instances.
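To make Keystone’s role concrete, a client typically obtains a token by POSTing a password-authentication request to the Identity v3 API. The sketch below builds that request body in Python; the username, password, and project name are placeholders, and sending the request to a real endpoint is left out.

```python
# Build the JSON body for a Keystone Identity v3 token request
# (POST /v3/auth/tokens). All credentials here are placeholders.
import json

def keystone_auth_body(username, password, project, domain="Default"):
    """Return the Identity v3 password-auth request body as a dict."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            },
            # Scoping the token to a project makes the returned service
            # catalog include that project's endpoints.
            "scope": {
                "project": {"name": project, "domain": {"name": domain}}
            },
        }
    }

body = keystone_auth_body("demo", "s3cret", "demo-project")
print(json.dumps(body, indent=2))
```

Keystone answers such a request with a token (in the `X-Subject-Token` header) plus a catalog of service endpoints, which is what lets the other components authorize subsequent calls.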
The OpenStack components work together as follows. Horizon first collects the user’s credentials and sends them via a REST API call to Keystone, which authenticates the user and returns a token that the components use to communicate with one another. To create and launch an instance, the user makes a request to Nova (surfaced through the Horizon dashboard); the request is authenticated again by Keystone to confirm the user’s permissions, and the database is checked for errors. If everything is in order, a new instance entry is created. Nova then calls Glance to retrieve the image and load it from image storage, calls Neutron to obtain an IP address for the instance and to allocate and configure the network, and invokes Cinder for block storage services. With everything in hand, Nova creates the instance on the hypervisor; once it boots successfully, the instance appears on the Horizon dashboard.
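The call sequence above can be sketched as a toy simulation. Everything in this snippet, the service classes and their methods, is a hypothetical stand-in written to show the order of operations, not the real OpenStack APIs.

```python
# Toy simulation of the instance-boot workflow described above.
# All classes and methods are illustrative stand-ins, not real APIs.

class Keystone:
    def authenticate(self, user):
        return f"token-for-{user}"          # issue a token to the user

    def validate(self, token):
        return token.startswith("token-")   # components re-check the token

class Glance:
    def get_image(self, name):
        return {"image": name}              # fetch the boot image

class Neutron:
    def allocate_ip(self):
        return "10.0.0.5"                   # assign a network address

class Cinder:
    def create_volume(self, gb):
        return {"size_gb": gb}              # provide block storage

class Nova:
    """Coordinates the other services to boot an instance."""
    def __init__(self, keystone, glance, neutron, cinder):
        self.keystone, self.glance = keystone, glance
        self.neutron, self.cinder = neutron, cinder

    def boot(self, token, image_name, volume_gb):
        if not self.keystone.validate(token):   # permission check
            raise PermissionError("invalid token")
        return {
            "image": self.glance.get_image(image_name),
            "ip": self.neutron.allocate_ip(),
            "volume": self.cinder.create_volume(volume_gb),
            "status": "ACTIVE",
        }

keystone = Keystone()
nova = Nova(keystone, Glance(), Neutron(), Cinder())
token = keystone.authenticate("demo")
instance = nova.boot(token, "ubuntu-image", volume_gb=20)
print(instance["status"])  # ACTIVE once every service call succeeds
```

The point of the sketch is the dependency order: authentication first, then image, network, and storage, and only then does Nova have everything it needs to place the instance on a hypervisor.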
To run OpenStack fully, the minimum hardware requirements are a 4-core CPU at 2.4 GHz, 8 GB of RAM, 2 x 500 GB hard drives, and RAID (which can nearly double read performance). On Ubuntu, the required system packages include gcc, python-pip, python-dev, libxml2-dev, libxslt-dev, libffi-dev, libpq-dev, python-openssl, and mysql-client. OpenStack itself is typically deployed on a Linux distribution such as Ubuntu.
To achieve 100% uptime for OpenStack, the network deployments must keep working without disruption while any component is running. Operators therefore have to predict traffic flows and scale the network reliably, keeping latency low and bandwidth high. Without the right network capacity, failures must be expected whenever traffic overwhelms the available ports and network links. One way to address this is through policy requirements that guarantee network uptime, along with API availability guarantees so the services keep working.
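A simple capacity sanity check is to compare the aggregate bandwidth the downstream ports can generate against the uplink that carries it. The oversubscription ratio below is a generic back-of-the-envelope calculation for any switched network, not an OpenStack-specific tool, and the port counts are example numbers.

```python
# Back-of-the-envelope network capacity check: if the ports feeding a
# switch can generate more traffic than its uplinks carry, the uplink
# is oversubscribed and will drop traffic under peak load.

def oversubscription_ratio(port_count, port_gbps, uplink_gbps):
    """Aggregate downstream bandwidth divided by uplink bandwidth."""
    return (port_count * port_gbps) / uplink_gbps

# Example: 48 x 10 GbE server ports sharing 4 x 40 GbE uplinks.
ratio = oversubscription_ratio(48, 10, 4 * 40)
print(f"oversubscription: {ratio}:1")  # 3.0:1
```

A ratio of 1:1 means the fabric is non-blocking; anything higher is a bet that not all ports will burst at once, which is exactly the traffic prediction the paragraph above calls for.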
To understand OpenStack more fully, it helps to look at its relationship with OpenFlow. OpenFlow is a communications protocol that tells network switches or routers where to send packets over the network, whether virtual or physical, software- or hardware-based. By connecting multiple switches together and creating flows, an administrator can control the entire infrastructure and the type of traffic it carries. Both OpenFlow and OpenStack fit into software-defined networking (SDN), which manages and configures the network flexibly, efficiently, and reliably for the cloud operating system. According to Quora, OpenStack uses OpenFlow “when the OpenStack provisions a virtual machine in a hypervisor, [where] it needs some networking configuration to be done on the Open vSwitch.” Because OpenStack’s networking is built on plug-in implementations, OpenFlow offers the mechanism that turns OpenStack’s network abstractions into reality; the two communicate through the Open vSwitch API. A related product built on OpenFlow is OpenDaylight, which provides an open-source SDN platform where anyone can develop and test applications, running OpenFlow on OpenDaylight topologies for real traffic testing.
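To make the match-and-forward idea concrete, here is a toy flow table in Python: each flow pairs a match on packet fields with an output action, which is roughly what an OpenFlow controller installs on a switch. The field names, addresses, and port numbers are illustrative, not the real protocol encoding.

```python
# Toy model of an OpenFlow-style flow table: a controller installs
# (match, action) rules; the switch forwards matching packets and
# punts everything else back to the controller. Illustrative only.

flow_table = [
    # Rules are checked in priority order, highest first.
    {"match": {"dst_ip": "10.0.0.5"}, "action": ("output", 2)},
    {"match": {"dst_ip": "10.0.0.6"}, "action": ("output", 3)},
]

def handle_packet(packet):
    """Return the action of the first matching flow, else punt."""
    for flow in flow_table:
        if all(packet.get(k) == v for k, v in flow["match"].items()):
            return flow["action"]
    return ("controller", None)   # table miss: ask the controller

print(handle_packet({"dst_ip": "10.0.0.5"}))   # ('output', 2)
print(handle_packet({"dst_ip": "10.0.0.9"}))   # ('controller', None)
```

The table-miss case is what gives SDN its flexibility: unknown traffic goes to the controller, which can then install a new flow so the switch handles future packets on its own.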
Throughout the history of application and environment computing, the recurring trends I have noticed are sharing resources, raising performance and efficiency, and lowering costs as much as possible. OpenStack has grown into one of the largest open-source projects and has been an eye-opener for companies small and large, prompting them to move their technology to OpenStack because it is free and its many components fit a wide range of application needs. Given its relevance in the market over the coming 3-5 years, OpenStack should continue to progress, driven by a large pool of companies and individuals around the world pursuing private clouds and datacenter automation. Large organizations that rely heavily on cloud computing need to run their own applications, and OpenStack is carefully designed with simple APIs and management systems for compute and storage resources that meet most of those demands. OpenStack can also satisfy users outside of the United States who want the privacy of not being monitored by another country, as well as faster computation when their data centers are closer to them. Furthermore, OpenStack releases are notably stable because contributed code must be carefully reviewed and tested to make sure every component works exactly as intended alongside the components it connects to.
Overall, OpenStack is a great open-source cloud computing platform for any person, business, or corporation, for both public and private clouds. Its many interconnected components serve users’ application needs on the fly and give them complete control of the data center’s resources. Deployed mostly as IaaS, it provides computation, networking, backup, storage and retrieval, security, and more; users can speed up their workloads by running many instances concurrently, the same sharing of abstract resources we have seen throughout the history of computing. With OpenStack, scaling is easier and the infrastructure more flexible. Today and in the future, many organizations will adopt OpenStack for its massive scale and its contribution to the computing world, especially those concerned with privacy and with affordable solutions in the long run. With major tech companies such as Cloudscaling, Nebula, VMware, IBM, HP, Dell, and Red Hat already using OpenStack, I am excited to see what lies ahead for it in the near future.