Thursday, July 20, 2017

Asymmetrical Resource Procurement and Allocation for Virtual Applications and Virtual Desktops


There is a lot of aging hardware and infrastructure out in the wild, and as daily life becomes more digitized, we've come to accept things that were once completely foreign to us.

Virtual machines, computers that run entirely in software on the resources of another machine, are standard business practice.

Aging servers can work together to pool resources and provide a desktop experience for a single user or many users. These can run web applications, other servers, or desktop versions of Windows, Linux, or other operating systems.

These resources can be managed dynamically, allowing them to be assigned and expanded as needs and use cases grow. Meanwhile, in our pockets we carry small computers that may be aging out of their intended purpose but can still donate a sizable chunk of their resources to create or support a desktop environment.

Limitations of these devices come down to their power consumption, heat output, form factor, and architecture, with architecture being the most difficult to adapt into modern infrastructure. Still, these machines have specifications and hardware that rival some desktop computers, and using them as resource pools to run dedicated applications is a viable answer to the hardware limitation problem: we simply distribute our computing across devices by running individual applications in their own environments.

The common user already does this by running Facebook or Twitter from a phone in the desk drawer, or by using these devices to play music or order lunch. We don't think of it that way, but that's what the user is doing.

In testing, we've been able to successfully run a Linux chroot environment on top of Android, SSH into the console, and run a full graphical interface for either a single application or a generalized desktop experience via VNC. This runs as a seamless environment and is capable of quite a lot (before crashing due to optimization issues).
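A rough sketch of that bring-up, scripted in Python from the Android side, is below. The chroot path, the user name, and tightvncserver are stand-ins for whatever a particular device and rootfs actually need; treat it as an outline, not a tested recipe.

```python
#!/usr/bin/env python3
"""Sketch: bring up a chroot'd Linux environment on a rooted Android
device, then expose it over SSH and VNC. Paths and package choices
are assumptions."""
import subprocess

CHROOT = "/data/local/linux"  # assumed location of an unpacked rootfs

def sh(cmd):
    """Run a shell command on the device and fail loudly."""
    subprocess.run(cmd, shell=True, check=True)

# Bind the system pseudo-filesystems into the chroot so the guest
# behaves like a normal Linux install.
for fs in ("proc", "sys", "dev"):
    sh(f"mount -o bind /{fs} {CHROOT}/{fs}")

# Start sshd inside the chroot so we can manage it remotely...
sh(f"chroot {CHROOT} /usr/sbin/sshd")

# ...and a VNC server for either a single app or a full desktop session.
sh(f"chroot {CHROOT} su - user -c 'tightvncserver :1 -geometry 1280x720'")
```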

I think that we can use this as a distributed computing platform, allowing old, out-of-use devices to perform useful tasks past their primary function. A pool of devices would be flashed with custom operating systems that put them in a client mode, making their resources available to a main server that would leverage them to create traditional experiences.
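As a sketch of what "client mode" might mean in practice, each flashed device could run a small agent that announces itself to the main server and periodically reports its free resources. The server address, port, and message shape below are all hypothetical:

```python
#!/usr/bin/env python3
"""Sketch of a client-mode agent: announce this device to a main
server and heartbeat its free resources. Address and format invented."""
import json, os, socket, time

SERVER = ("192.168.1.10", 9000)   # assumed address of the main server
DEVICE_ID = socket.gethostname()

def free_mem_kb():
    """Read available memory from /proc/meminfo (Linux/Android)."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1])
    return 0

while True:
    msg = {"id": DEVICE_ID, "cpus": os.cpu_count(), "mem_kb": free_mem_kb()}
    try:
        with socket.create_connection(SERVER, timeout=5) as s:
            s.sendall(json.dumps(msg).encode() + b"\n")
    except OSError:
        pass  # server unreachable; keep trying, Wi-Fi here is flaky
    time.sleep(10)  # heartbeat interval; silent devices get presumed dead
```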

The main challenge is incorporating the cross-architecture environment; a secondary challenge is building the structure to distribute these resources, with enough redundancy and monitoring built in to make this a viable solution. Most of these devices communicate over wireless standards, and they're built to handle only a few tasks at a time, for short stretches.
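On the server side, the redundancy part of that reduces to tracking heartbeats and requeueing work from devices that go silent. A minimal sketch, with invented data structures rather than a finished scheduler:

```python
#!/usr/bin/env python3
"""Sketch: mark devices dead after a heartbeat timeout and requeue
their outstanding work units."""
import time

HEARTBEAT_TIMEOUT = 30  # seconds of silence before a device is presumed dead

last_seen = {}     # device id -> timestamp of last heartbeat
assignments = {}   # device id -> work units currently on that device
work_queue = []    # work units waiting for a healthy device

def heartbeat(device_id):
    last_seen[device_id] = time.time()

def reap_dead_devices():
    """Requeue work from any device that has gone quiet."""
    now = time.time()
    for dev, seen in list(last_seen.items()):
        if now - seen > HEARTBEAT_TIMEOUT:
            work_queue.extend(assignments.pop(dev, []))
            del last_seen[dev]
```

Requeueing on timeout means a slow device and a dead device look the same to the server, which is probably the right call when everything is talking over flaky Wi-Fi.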

Once the distribution client application is built, it would have to be generic enough, while allowing enough customization, to take advantage of the diversified hardware of the past 10+ years. Fortunately, the work involved in getting an image onto a device has already been done by various modification communities. Unfortunately, there is no single path or single image that works for all devices, as there might be for a standard installation of Windows or Linux.
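One way to keep the client generic while still honoring hardware differences is to ship per-device profiles and fall back to a conservative default for unknown models. The profiles below are invented for illustration; real entries would come from the modding communities' documentation for each device family.

```python
"""Sketch: per-device profiles for the generic distribution client.
All field names and values are illustrative assumptions."""
DEVICE_PROFILES = {
    "galaxy-s3": {"arch": "armv7", "ram_mb": 1024, "max_load": 0.6},
    "nexus-5":   {"arch": "armv7", "ram_mb": 2048, "max_load": 0.8},
}

def profile_for(model):
    """Unknown hardware gets a conservative generic profile."""
    return DEVICE_PROFILES.get(
        model, {"arch": "armv7", "ram_mb": 512, "max_load": 0.5})
```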

A typical hypervisor like ESXi should be able to manage the resources made available, again, as long as the differences between the x64 architecture of the server and its applications and the ARM architecture of the client devices don't get in the way.

We could get around this by running the entire system on the ARM architecture, but we're still left with the next problem: allowing those resources to be distributed. Modern ESXi clients allow for distributed computing, as do Proxmox and other hypervisors.

What's available here is something unique: an untapped resource of old phones that can be combined into an otherwise powerful machine, or series of machines, to run modern applications. These devices are cheap, disposable, and abundant. Even so, the challenges of this project, and its limited viability outside a small lab environment, may not justify the time, energy, and resources involved in making it work. With communication taking place over Wi-Fi, the networking and distribution structure creates a bottleneck. Finally, this is impractical for casual home users, as it requires constant maintenance, and it doesn't scale to a size useful for enterprise technology. The spectrum of people this could potentially reach is limited to a select group of enthusiasts.


Issues:
Crossing architectures is a challenge to incorporating devices into a traditional schema

No resource distribution platform available for these devices

Asymmetrical processing

Process redundancy management
-- What happens when one device goes down?
-- How is redundant work handled?

Cluster networking resources over Wi-Fi

Monitoring and stability of devices to avoid overheating (sketched below)
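Since overheating is the failure mode that actually kills hardware, the client would need a watchdog of its own. A minimal polling sketch, assuming the standard Linux sysfs thermal interface and an arbitrary 60 C ceiling:

```python
#!/usr/bin/env python3
"""Sketch: poll the kernel's thermal zones and back off when a
device runs hot. The 60 C cutoff is an assumption; devices vary."""
import glob, time

MAX_MILLIDEGREES = 60000  # 60 C, an assumed safe ceiling

def hottest_zone():
    """Return the highest temperature reported under /sys/class/thermal."""
    temps = []
    for path in glob.glob("/sys/class/thermal/thermal_zone*/temp"):
        with open(path) as f:
            temps.append(int(f.read().strip()))
    return max(temps, default=0)

while True:
    if hottest_zone() > MAX_MILLIDEGREES:
        print("too hot: pausing work units")  # a real client would back off
    time.sleep(5)
```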