Edge-enabled Distributed Network Measurement

Abstract: In the area of network monitoring and measurement, a number of good tools are already available. However, most mature tools do not account for changes in network management brought about by Software Defined Networking (SDN). New tools developed to address the SDN paradigm often lack both the observation scope and the performance scale to support distributed management of accelerated measurement devices, high-throughput network processing, and distributed network function monitoring. In this paper we present an approach to distributed network monitoring and management using an agent-based edge computing framework. In addition, we provide a number of real-world examples where this system has been put into practice.

Published in: 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
Date of Conference: 19-23 March 2018
Date Added to IEEE Xplore: 08…
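The abstract describes agents deployed at the network edge that report measurements back to a management layer. As a rough, hypothetical sketch only (the paper's framework and its APIs are not reproduced here), the snippet below shows an edge agent reading Linux interface byte counters and posting them to an assumed controller endpoint; the CONTROLLER_URL, function names, and 15-second reporting interval are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of an edge measurement agent reporting to a controller.
# Names and the endpoint URL are illustrative, not taken from the paper.
import json
import time
import urllib.request

CONTROLLER_URL = "http://controller.example.org/metrics"  # assumed endpoint

def read_interface_counters(iface="eth0"):
    """Read rx/tx byte counters for one interface from sysfs (Linux)."""
    counters = {}
    for direction in ("rx", "tx"):
        path = f"/sys/class/net/{iface}/statistics/{direction}_bytes"
        with open(path) as f:
            counters[direction + "_bytes"] = int(f.read().strip())
    return counters

def report(iface="eth0"):
    """Post one timestamped sample to the controller as JSON."""
    sample = {"iface": iface, "ts": time.time(), **read_interface_counters(iface)}
    req = urllib.request.Request(
        CONTROLLER_URL,
        data=json.dumps(sample).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

if __name__ == "__main__":
    while True:          # poll and report on a fixed interval
        report()
        time.sleep(15)
```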

An Edge-Focused Model for Distributed Streaming Data Applications

Abstract: This paper presents techniques for the description and management of distributed streaming data applications. The proposed solution provides an abstract description language and an operational graph model for supporting collections of autonomous components functioning as distributed systems. Using our approach, applications supporting millions of devices can be easily modeled using abstract configurations. Through our models, applications can be implemented and managed through so-called serverless application orchestration.

Published in: 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
Date of Conference: 19-23 March 2018
Date Added to IEEE Xplore: 08 October 2018
ISBN Information:
Electronic ISBN: 978-1-5386-3227-7
USB ISBN: 978-1-5386-3226-0
Print on Demand (PoD) ISBN: 978-1-5386-3228-4
INSPEC Accession Number: 18133966
DOI: 10.1109/PERCOMW.2018.8480196
Publisher: IEEE
Conference Location: Athens, Greece
Citation: Bumgardner, VK Cody, Caylin Hickey, and Victor W. Marek. "An Edge-Focused…
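The paper's description language and operational graph model are only summarized in the abstract. As an illustration of the general idea rather than the actual language, the sketch below encodes a small streaming pipeline as a graph of typed components with placement hints, validates it, and enumerates its edges; all node names, fields, and the dictionary format are hypothetical.

```python
# Illustrative only: a toy operational-graph description for a streaming
# pipeline, not the paper's actual description language.
pipeline = {
    "nodes": {
        "sensors":   {"type": "source",    "location": "edge"},
        "filter":    {"type": "processor", "location": "edge"},
        "aggregate": {"type": "processor", "location": "region"},
        "store":     {"type": "sink",      "location": "global"},
    },
    "edges": [
        ("sensors", "filter"),
        ("filter", "aggregate"),
        ("aggregate", "store"),
    ],
}

def validate(graph):
    """Check that every edge references a declared node."""
    nodes = graph["nodes"]
    for src, dst in graph["edges"]:
        if src not in nodes or dst not in nodes:
            raise ValueError(f"undeclared node in edge ({src}, {dst})")
    return True

if __name__ == "__main__":
    validate(pipeline)
    for src, dst in pipeline["edges"]:
        print(f"{src} ({pipeline['nodes'][src]['location']}) -> "
              f"{dst} ({pipeline['nodes'][dst]['location']})")
```

An orchestrator could walk such a graph and place each component according to its location hint; here the sketch only prints the resulting edges.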

Constellation: A secure self-optimizing framework for genomic processing

Abstract: The Constellation framework is designed for process automation, secure workload distribution, and performance optimization for genomic processing. The goal of Constellation is to provide a flexible platform for the processing of custom "write once run anywhere" genomic pipelines, across a range of computational resources and environments, through the agent-based management of genomic processing containers. An implementation of the Constellation framework is currently in use at the University of Kentucky Medical Center for clinical diagnostic and research genomic processing.

Published in: 2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom)
Date of Conference: 14-16 Sept. 2016
Date Added to IEEE Xplore: 21 November 2016
ISBN Information:
Electronic ISBN: 978-1-5090-3370-6
Print on Demand (PoD) ISBN: 978-1-5090-3371-3
DOI: 10.1109/HealthCom.2016.7749534
Publisher: IEEE
Citation: Bumgardner, VK Cody, et al. "Constellation: A secure self-optimizing framework…
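The abstract mentions agent-based management of genomic processing containers. Purely as a hypothetical illustration of running one containerized pipeline stage, and not Constellation's interface, the sketch below shells out to the Docker CLI with a mounted data directory; the image name, command, and paths are placeholders.

```python
# Hypothetical sketch of running one pipeline stage in a container, in the
# spirit of "write once run anywhere" genomic steps; not Constellation's API.
import subprocess

def run_stage(image, command, data_dir):
    """Run one containerized pipeline stage with the sample data mounted in."""
    result = subprocess.run(
        [
            "docker", "run", "--rm",
            "-v", f"{data_dir}:/data",   # mount the input/output directory
            image,
        ] + command,
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    # Image name, command, and paths are placeholders for illustration.
    out = run_stage("example/aligner:latest",
                    ["align", "/data/sample.fastq"],
                    "/tmp/run01")
    print(out)
```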

Encyclopedia of Cloud Computing: Educational Applications of the Cloud

Encyclopedia of Cloud Computing
Editor(s): San Murugesan, Irena Bojanova
First published: 13 May 2016
Print ISBN: 9781118821978 | Online ISBN: 9781118821930 | DOI: 10.1002/9781118821930
Copyright © 2016 John Wiley & Sons, Ltd.

Chapter 41: Educational Applications of the Cloud
This chapter covers a broad range of cloud computing concepts, technologies, and challenges related to education. We define educational applications as resources related to learning, encompassing stages of development from preschool through higher and continuing education. In the second section we describe the ways cloud technology is being adopted in education and cover the benefits of, and barriers to, cloud adoption based on the technology found in education. In the third section we describe cloud resources used in the direct instruction of students, front-office applications (user facing), and back-office applications (administrative). The final section describes cloud…

Collating time-series resource data for system-wide job profiling

Abstract: Through the collection and association of discrete time-series resource metrics and workloads, we can provide both benchmark and intra-job resource collations, along with system-wide job profiling. Traditional RDBMSes are not designed to store and process long-term discrete time-series metrics, and the commonly used resolution-reducing round-robin databases (RRDBs) make poor long-term sources of data for workload analytics. We implemented a system that employs "Big Data" (Hadoop/HBase) and other analytics (R) techniques and tools to store, process, and characterize HPC workloads. Using this system we have collected and processed over 30 billion time-series metrics from existing short-term high-resolution (15-sec RRDB) sources, profiling over 200 thousand jobs across a wide spectrum of workloads. The system is currently in use at the University of Kentucky for better understanding of individual jobs and…
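The abstract describes associating discrete time-series metrics with jobs for system-wide profiling. The sketch below is a minimal, hypothetical illustration of that collation step in plain Python: a sample is attached to a job when the host matches and the timestamp falls within the job's accounting window. The tuple layouts and field order are assumptions, not the paper's schema or its HBase implementation.

```python
# Minimal sketch (not the paper's implementation) of collating time-series
# samples with job records by host and time window.
from collections import defaultdict

# (host, unix_ts, metric, value) samples, e.g. exported from an RRD source
samples = [
    ("node001", 100, "cpu_user", 0.91),
    ("node001", 115, "cpu_user", 0.88),
    ("node002", 100, "cpu_user", 0.12),
]

# (job_id, host, start_ts, end_ts) records from the scheduler accounting log
jobs = [("job42", "node001", 90, 120)]

def collate(samples, jobs):
    """Attach each sample to every job running on that host at that time."""
    per_job = defaultdict(list)
    for job_id, host, start, end in jobs:
        for s_host, ts, metric, value in samples:
            if s_host == host and start <= ts <= end:
                per_job[job_id].append((ts, metric, value))
    return per_job

if __name__ == "__main__":
    for job_id, series in collate(samples, jobs).items():
        mean = sum(v for _, _, v in series) / len(series)
        print(job_id, f"mean cpu_user={mean:.2f}", f"samples={len(series)}")
```

At the scale described in the abstract, such a join would of course run over the stored metrics rather than in-memory lists; the sketch only shows the collation logic.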

OpenStack in Action

Citation: Bumgardner, VK Cody. OpenStack in Action. Manning Publications Company, 2016.

Summary
OpenStack in Action offers real-world use cases and step-by-step instructions you can follow to develop your own cloud platform from inception to deployment. This book guides you through the design of both the physical hardware cluster and the infrastructure services you'll need to create a custom cloud platform. Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.

About the Technology
OpenStack is an open source framework that lets you create a private or public cloud platform on your own physical servers. You build custom infrastructure, platform, and software services without the expense and vendor lock-in associated with proprietary cloud platforms…

Scalable hybrid stream and Hadoop network analysis system

Collections of network traces have long been used in network traffic analysis. Flow analysis can be used in network anomaly discovery, intrusion detection, and, more generally, the discovery of actionable events on the network. The data collected during processing may also be used for prediction and avoidance of traffic congestion, network capacity planning, and the development of software-defined networking rules. As network flow rates increase and new network technologies are introduced on existing hardware platforms, many organizations find themselves either technically or financially unable to generate, collect, and/or analyze network flow data. The continued rapid growth of network trace data requires new methods of scalable data collection and analysis. We report on our deployment of a system designed and implemented at the University of Kentucky that supports analysis of network traffic…
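As a hypothetical illustration of the kind of stream-side computation such a hybrid stream/Hadoop system might perform online (the deployed system itself is not shown here), the sketch below sums bytes per source address across flow records and flags sources above a threshold; the record fields and the threshold value are assumptions.

```python
# Illustrative stream-side sketch: per-source byte totals from flow records,
# used to flag "top talkers". Field names and threshold are assumptions.
from collections import Counter

def top_talkers(flow_records, threshold_bytes=10_000_000):
    """Sum bytes per source address and return sources above the threshold."""
    per_src = Counter()
    for rec in flow_records:
        per_src[rec["src"]] += rec["bytes"]
    return [(src, b) for src, b in per_src.most_common() if b >= threshold_bytes]

if __name__ == "__main__":
    flows = [
        {"src": "10.0.0.5", "dst": "10.0.1.9", "bytes": 8_000_000},
        {"src": "10.0.0.5", "dst": "10.0.2.3", "bytes": 6_000_000},
        {"src": "10.0.0.7", "dst": "10.0.1.9", "bytes": 1_200_000},
    ]
    for src, total in top_talkers(flows):
        print(src, total)
```

Longer-horizon questions, such as capacity planning over months of traces, are better answered by the batch (Hadoop) side, while counters like this one can run continuously on the stream.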