Edge-enabled Distributed Network Measurement

Publications

Abstract: In the area of network monitoring and measurement, a number of good tools are already available. However, most mature tools do not account for changes in network management brought about by Software Defined Networking (SDN). New tools developed to address the SDN paradigm often lack both the observation scope and the performance scale to support distributed management of accelerated measurement devices, high-throughput network processing, and distributed network function monitoring. In this paper we present an approach to distributed network monitoring and management using an agent-based edge computing framework. In addition, we provide a number of real-world examples where this system has been put into practice.

Published in: 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)

Date of Conference: 19-23 March 2018
Date Added to IEEE Xplore: 08 October 2018
ISBN Information:
Electronic ISBN: 978-1-5386-3227-7
USB ISBN: 978-1-5386-3226-0
Print on Demand(PoD) ISBN: 978-1-5386-3228-4
INSPEC Accession Number: 18133902
DOI: 10.1109/PERCOMW.2018.8480233

Publisher: IEEE
Conference Location: Athens, Greece

Citation: Bumgardner, VK Cody, Caylin Hickey, and Victor W. Marek. “Edge-enabled Distributed Network Measurement.” 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). IEEE, 2018.

An Edge-Focused Model for Distributed Streaming Data Applications

Abstract:
This paper presents techniques for the description and management of distributed streaming data applications. The proposed solution provides an abstract description language and operational graph model for supporting collections of autonomous components functioning as distributed systems. Using our approach, applications supporting millions of devices can be easily modeled using abstract configurations. Through our models, applications can be implemented and managed through so-called server-less application orchestration.
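As a hypothetical illustration of the kind of abstract configuration the abstract describes, a streaming pipeline can be declared as an operational graph and resolved into a deployment order. The `app` layout and `deploy_order` helper below are our own sketch, not the paper's actual description language:

```python
from collections import deque

# Hypothetical abstract configuration: components as nodes, data flows as
# edges. An illustrative stand-in for the paper's description language.
app = {
    "nodes": {
        "sensor": {"type": "edge-agent"},
        "aggregator": {"type": "stream-processor"},
        "sink": {"type": "database-writer"},
    },
    "edges": [("sensor", "aggregator"), ("aggregator", "sink")],
}

def deploy_order(graph):
    """Resolve a component start-up order from the operational graph
    (Kahn's topological sort): upstream producers start first."""
    indegree = {name: 0 for name in graph["nodes"]}
    for _, dst in graph["edges"]:
        indegree[dst] += 1
    ready = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for src, dst in graph["edges"]:
            if src == node:
                indegree[dst] -= 1
                if indegree[dst] == 0:
                    ready.append(dst)
    return order

print(deploy_order(app))  # ['sensor', 'aggregator', 'sink']
```

An orchestrator working from such a declaration can place and start components without the operator ever addressing individual servers, which is the sense in which the management is "server-less."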
Published in: 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)

Date of Conference: 19-23 March 2018
Date Added to IEEE Xplore: 08 October 2018
ISBN Information:
Electronic ISBN: 978-1-5386-3227-7
USB ISBN: 978-1-5386-3226-0
Print on Demand(PoD) ISBN: 978-1-5386-3228-4
INSPEC Accession Number: 18133966
DOI: 10.1109/PERCOMW.2018.8480196

Publisher: IEEE
Conference Location: Athens, Greece

Citation: Bumgardner, VK Cody, Caylin Hickey, and Victor W. Marek. “An Edge-Focused Model for Distributed Streaming Data Applications.” 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). IEEE, 2018.

Constellation: A secure self-optimizing framework for genomic processing

Abstract: The Constellation framework is designed for process automation, secure workload distribution, and performance optimization for genomic processing. The goal of Constellation is to provide a flexible platform for the processing of custom “write once, run anywhere” genomic pipelines, across a range of computational resources and environments, through the agent-based management of genomic processing containers. An implementation of the Constellation framework is currently in use at the University of Kentucky Medical Center for clinical diagnostic and research genomic processing.

Published in: 2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom)

Date of Conference: 14-16 Sept. 2016
Date Added to IEEE Xplore: 21 November 2016
ISBN Information:
Electronic ISBN: 978-1-5090-3370-6
Print on Demand(PoD) ISBN: 978-1-5090-3371-3

DOI: 10.1109/HealthCom.2016.7749534
Publisher: IEEE

Citation: Bumgardner, VK Cody, et al. “Constellation: A secure self-optimizing framework for genomic processing.” e-Health Networking, Applications and Services (Healthcom), 2016 IEEE 18th International Conference on. IEEE, 2016.

Encyclopedia of Cloud Computing : Educational Applications of the Cloud

Encyclopedia of Cloud Computing
Editor(s): San Murugesan, Irena Bojanova
First published: 13 May 2016
Print ISBN: 9781118821978
Online ISBN: 9781118821930
DOI: 10.1002/9781118821930
Copyright © 2016 John Wiley & Sons, Ltd.

Chapter 41: Educational Applications of the Cloud

This chapter covers a broad range of cloud computing concepts, technologies, and challenges related to education. We define educational applications as resources related to learning, encompassing stages of development from preschool through higher and continuing education. In the second section we describe the ways cloud technology is being adopted in education, covering the benefits of and barriers to cloud adoption based on technologies found in education. In the third section we describe cloud resources used in the direct instruction of students, front-office applications (user facing), and back-office applications (administrative). The final section describes cloud computing in research.

Citation: Bumgardner, V. K., Victor Marek, and Doyle Friskney. “Educational Applications of the Cloud.” Encyclopedia of Cloud Computing (2016): 505-516.

Collating time-series resource data for system-wide job profiling

Abstract:
Through the collection and association of discrete time-series resource metrics and workloads, we can provide both benchmark and intra-job resource collations, along with system-wide job profiling. Traditional RDBMSes are not designed to store and process long-term discrete time-series metrics, and the commonly used resolution-reducing round-robin databases (RRDBs) make poor long-term sources of data for workload analytics. We implemented a system that employs “Big Data” (Hadoop/HBase) and other analytics (R) techniques and tools to store, process, and characterize HPC workloads. Using this system we have collected and processed over 30 billion time-series metrics from existing short-term high-resolution (15-sec RRDB) sources, profiling over 200 thousand jobs across a wide spectrum of workloads. The system is currently in use at the University of Kentucky for better understanding of individual jobs, for system-wide profiling, and as a strategic source of data for resource allocation and future acquisitions.
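The core collation idea, joining discrete metric samples to the jobs that were running when and where they were recorded, can be sketched in a few lines. The data shapes and the `collate` helper are illustrative assumptions on our part; the paper's system performs this join at scale over HBase with MapReduce:

```python
from collections import defaultdict

def collate(samples, jobs):
    """Associate each (host, timestamp, value) metric sample with the job
    occupying that host at that time. Illustrative sketch only, not the
    paper's HBase/MapReduce implementation."""
    profiles = defaultdict(list)
    for host, ts, value in samples:
        for job_id, job_host, start, end in jobs:
            if host == job_host and start <= ts <= end:
                profiles[job_id].append((ts, value))
    return dict(profiles)

jobs = [("job-42", "node01", 100, 200)]            # (id, host, start, end)
samples = [("node01", 150, 0.93), ("node02", 150, 0.10)]
print(collate(samples, jobs))  # {'job-42': [(150, 0.93)]}
```

The resulting per-job time series is what makes both intra-job profiling and system-wide aggregation possible from the same collected metrics.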

Published in: Network Operations and Management Symposium (NOMS), 2016 IEEE/IFIP
Date of Conference: 25-29 April 2016
Date Added to IEEE Xplore: 04 July 2016
ISSN Information:
Electronic ISSN: 2374-9709

INSPEC Accession Number: 16124063
DOI: 10.1109/NOMS.2016.7502958
Publisher: IEEE

Citation:
Bumgardner, VK Cody, Victor W. Marek, and Ray L. Hyatt. “Collating time-series resource data for system-wide job profiling.” Network Operations and Management Symposium (NOMS), 2016 IEEE/IFIP. IEEE, 2016.

OpenStack in Action


Citation: Bumgardner, VK Cody. OpenStack in Action. Manning Publications Company, 2016.

Summary

OpenStack in Action offers real-world use cases and step-by-step instructions you can use to develop your own cloud platform, from inception to deployment. This book guides you through the design of both the physical hardware cluster and the infrastructure services you’ll need to create a custom cloud platform.

Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.

About the Technology

OpenStack is an open source framework that lets you create a private or public cloud platform on your own physical servers. You build custom infrastructure, platform, and software services without the expense and vendor lock-in associated with proprietary cloud platforms like Amazon Web Services and Microsoft Azure. With an OpenStack private cloud, you can get increased security, more control, improved reliability, and lower costs.

About the Book

OpenStack in Action offers real-world use cases and step-by-step instructions on how to develop your own cloud platform. This book guides you through the design of both the physical hardware cluster and the infrastructure services you’ll need. You’ll learn how to select and set up virtual and physical servers, how to implement software-defined networking, and technical details of designing, deploying, and operating an OpenStack cloud in your enterprise. You’ll also discover how to best tailor your OpenStack deployment for your environment. Finally, you’ll learn how your cloud can offer user-facing software and infrastructure services.

What’s Inside

Develop and deploy an enterprise private cloud
Private cloud technologies from an IT perspective
Organizational impact of self-service cloud computing

About the Reader

No prior knowledge of OpenStack or cloud development is assumed.

Table of Contents

PART 1 GETTING STARTED
Introducing OpenStack
Taking an OpenStack test-drive
Learning basic OpenStack operations
Understanding private cloud building blocks

PART 2 WALKING THROUGH A MANUAL DEPLOYMENT
Walking through a Controller deployment
Walking through a Networking deployment
Walking through a Block Storage deployment
Walking through a Compute deployment

PART 3 BUILDING A PRODUCTION ENVIRONMENT
Architecting your OpenStack
Deploying Ceph
Automated HA OpenStack deployment with Fuel
Cloud orchestration using OpenStack

Scalable hybrid stream and hadoop network analysis system

Collections of network traces have long been used in network traffic analysis. Flow analysis can be used in network anomaly discovery, intrusion detection, and, more generally, discovery of actionable events on the network. The data collected during processing may also be used for prediction and avoidance of traffic congestion, network capacity planning, and the development of software-defined networking rules. As network flow rates increase and new network technologies are introduced on existing hardware platforms, many organizations find themselves either technically or financially unable to generate, collect, and/or analyze network flow data. The continued rapid growth of network trace data requires new methods of scalable data collection and analysis. We report on our deployment of a system designed and implemented at the University of Kentucky that supports analysis of network traffic across the enterprise. Our system addresses problems of scale in existing systems by using distributed computing methodologies and is based on a combination of stream and batch processing techniques. In addition to collection, stream processing using Storm is utilized to enrich the data stream with ephemeral environment data. Enriched stream data is then used for event detection and near real-time flow analysis by an in-line complex event processor. Batch processing is performed by the Hadoop MapReduce framework, from data stored in HBase BigTable storage.

In benchmarks on our 10-node cluster, using actual network data, we were able to stream-process over 315k flows/sec. In batch analysis we were able to process over 2.6M flows/sec, with a storage compression ratio of 6.7:1.
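The enrichment stage, attaching ephemeral environment data to each flow as it streams past, can be sketched as follows. The `lease_table` mapping and the record field names are hypothetical; in the deployed system this logic runs inside Storm, not as a standalone function:

```python
def enrich(flow, lease_table):
    """Attach ephemeral environment data (here, a hypothetical DHCP-style
    IP-to-user lease table) to a raw flow record before event detection.
    A sketch of the Storm enrichment idea, not the production topology."""
    enriched = dict(flow)
    enriched["src_user"] = lease_table.get(flow["src_ip"], "unknown")
    return enriched

lease_table = {"10.0.0.5": "alice"}
flow = {"src_ip": "10.0.0.5", "dst_ip": "8.8.8.8", "bytes": 1200}
print(enrich(flow, lease_table)["src_user"])  # alice
```

Enriching at stream time matters because the environment data is ephemeral: by the time batch analysis runs, the IP-to-user association may have changed, so it must be captured as the flow passes.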

Citation:
Bumgardner, Vernon KC, and Victor W. Marek. “Scalable hybrid stream and Hadoop network analysis system.” Proceedings of the 5th ACM/SPEC international conference on Performance engineering. ACM, 2014.