Recent Computing RET Research and Modules

2016

Project 01:  Deep learning for visual object recognition

  • Teachers:  Allen Westendorp (Clay HS)

  • Summary:

    • Mr. Westendorp’s project incorporated elements of computer vision, machine learning, and neuroscience.  Emerging models like convolutional neural networks, from the research area of deep learning, mimic aspects of human visual processing to achieve excellent performance on real-world computer vision tasks – e.g., identifying people in social media photos.  By learning different network parameters, for a host of different network architectures, from large collections of labeled images from the Internet, it is possible to teach a computer to understand images much as humans do.  Module content based on this work was mapped to high school math courses, as neural networks represent a straightforward vehicle for introducing the concepts of statistical parameter estimation, experimental error rates, and learning from data.
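
      The parameter-estimation framing above can be sketched in a few lines of Python.  This is a toy single-neuron classifier invented purely for illustration (it is not module code): two parameters are estimated from labeled data, and an empirical error rate is then measured.

```python
import math

# Toy single-neuron classifier (invented for illustration; not module code):
# estimate two parameters (w, b) from labeled data, then measure error rate.
data = [(x / 10.0, 1 if x >= 5 else 0) for x in range(10)]  # (feature, label)

w, b = 0.0, 0.0      # the parameters to be "learned"
lr = 0.5             # learning-rate / step size

def predict(x):
    return 1 / (1 + math.exp(-(w * x + b)))   # logistic activation

for _ in range(2000):            # stochastic gradient descent on log-loss
    for x, y in data:
        p = predict(x)
        w -= lr * (p - y) * x    # dLoss/dw for one example
        b -= lr * (p - y)        # dLoss/db for one example

errors = sum(1 for x, y in data if (predict(x) >= 0.5) != (y == 1))
error_rate = errors / len(data)  # empirical (training) error rate
```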

  • Module Materials:

    • Westendorp -- Activities suitable for Geometry courses (part 01, part 02)

Project 02:  Data Centers and CO2 Emissions

  • Teachers:  Vincent Ferro (Mishawaka HS), Brenda Mueller (Elkhart HS)

  • Summary:

    • While processing speed remains an important benchmark, the amount of energy required to perform a computation is an equally important design driver.  For example, the power budget for a typical data center now exceeds tens of megawatts.  Collectively, data centers now account for roughly 2% of all greenhouse gas emissions – about the same as all global air traffic.  Mr. Ferro and Ms. Mueller considered how new information processing devices/models could impact energy efficiency at larger scales.  They worked to frame potential energy savings in the context of greenhouse gas emissions to (a) demonstrate scale and (b) illustrate the impact that those interested in computing may ultimately have on environmental issues.  They were also able to port work that considered historical device scaling trends into their early algebra courses.
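
      The greenhouse-gas framing lends itself to a quick back-of-envelope calculation.  The figures below (a 20 MW facility, roughly 0.4 metric tons of CO2 per MWh of grid electricity) are illustrative assumptions, not measured values:

```python
# Back-of-envelope sketch (illustrative numbers, not measured data):
# annual CO2 attributable to a single large data center.
power_mw = 20                    # assumed facility power budget, megawatts
hours_per_year = 24 * 365
energy_mwh = power_mw * hours_per_year

# Assumed grid carbon intensity: ~0.4 metric tons CO2 per MWh
tons_co2 = energy_mwh * 0.4
print(f"{energy_mwh:,} MWh/yr  ->  {tons_co2:,.0f} t CO2/yr")
```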

  • Module Materials:

Project 03:  Mixed/machine learning for facial recognition

  • Teachers:  Seth Ponder (Riley HS)

  • Summary:

    • The image understanding community has begun to adopt automatic techniques for feature learning from training imagery to improve the performance of object classifiers. A key consideration for machine-learned features is stability.  Namely, are the features learned by deep architectures for different sets of training data similar? If not, why not?  To address this question, Mr. Ponder worked with large face image datasets collected by ND’s computer vision research group and access to GPU-based deep learning computers and software.

  • Module Materials:

Project 04:  Searching & retrieving data

  • Teachers:  Thomas Falcone (La Lumiere School), George Logsdon (Riley HS)

  • Summary:

    • New information is being produced and curated at an increasing rate, and as a result, economies and industries will depend heavily on the effective storage and retrieval of that information.  Teachers worked with faculty and graduate student mentors to learn how search engines like Google, Bing, and Yahoo perform keyword searches.  They learned how terms and documents are represented and compared digitally, how massive storage systems quickly retrieve results, and how everyday users naturally parse and refine complex questions into a handful of key words using models created by recurrent and convolutional neural networks.
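
      The idea of representing and comparing terms and documents digitally can be sketched with a toy bag-of-words model scored by cosine similarity.  The three-document corpus below is invented purely for illustration:

```python
import math

# Minimal sketch of digital term/document comparison: bag-of-words vectors
# ranked by cosine similarity against a query (toy corpus, invented here).
docs = {
    "d1": "storm water runoff in south bend",
    "d2": "neural networks for image search",
    "d3": "keyword search engines rank documents",
}

def vectorize(text):
    vec = {}
    for term in text.split():
        vec[term] = vec.get(term, 0) + 1   # raw term frequency
    return vec

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

query = vectorize("keyword search")
ranked = sorted(docs, key=lambda d: cosine(query, vectorize(docs[d])),
                reverse=True)   # best-matching document first
```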

  • Module Materials:

Project 05:  Technology scaling and efficiency of information processing hardware

  • Teachers:  Clinton Jepkema (Niles HS) and Timothy Knoester (Niles HS)

  • Summary:

    • The focus of these modules is WHY new, physics-inspired computing models are needed.  Moreover, PI Niemier is actively involved with benchmarking work related to device scaling and its associated challenges – which serves as the motivation for physics-inspired computing primitives and maps well to other math/physics course content (e.g., current-voltage-resistance relationships).
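
      The current-voltage-resistance connection to device scaling can be illustrated with the textbook switching-energy relation E = C·V².  The capacitance and supply voltages below are assumed, round numbers:

```python
# Sketch of the switching-energy arithmetic behind device-scaling arguments
# (textbook relation E = C * V^2 per switching event; numbers are assumed).
def switching_energy(c_farads, v_volts):
    return c_farads * v_volts ** 2   # joules per charge/discharge cycle

e_old = switching_energy(1e-15, 1.0)   # a 1 fF node at a 1.0 V supply
e_new = switching_energy(1e-15, 0.5)   # the same node at a scaled 0.5 V supply

# Quadratic dependence on voltage: halving V cuts switching energy 4x
ratio = e_old / e_new
```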

  • Module Materials:

Project 06:  Computing and Music

  • Teachers:  Matt Modlin (Riley HS)

  • Summary:

    • Mr. Modlin learned how to use digital signal processing to generate and manipulate sound waves to produce music.  He used the Mozzi sound synthesis library for Arduino, and then looked at how similar operations may be performed more efficiently by a custom processor using a field-programmable gate array (FPGA).  He also learned how custom musical instrument enclosures can be fabricated using either 3D printing or laser cutting.  Applications of this research were used in the production of the musical work "Wild Sound," composed by Glenn Kotche of the band Wilco for Notre Dame artists-in-residence Third Coast Percussion, which has been performed at venues in the U.S. and Europe, including the NY Metropolitan Museum of Art, the St. Paul Chamber Orchestra, and the SF Jazz Center.  (Note that many of the information processing primitives required for audio processing can be mapped to hardware that can evolve from physics-inspired computing models – see Thread 3 in the goals section.  As such, this work will form a nice foundation for application-level case studies/future projects in this thread.)
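
      The signal-processing core of this work can be sketched outside the Arduino: a sampled sine wave is the basic building block that a synthesis library like Mozzi manipulates.  The sample rate and pitch below are illustrative choices:

```python
import math

# Toy sketch of the DSP idea behind sound synthesis: a sampled sine wave at
# a given pitch (sample rate and frequency here are illustrative).
SAMPLE_RATE = 8000     # samples per second
FREQ = 440.0           # concert A, in Hz

def sine_samples(n):
    """First n samples of a 440 Hz tone, amplitude in [-1, 1]."""
    return [math.sin(2 * math.pi * FREQ * i / SAMPLE_RATE) for i in range(n)]

wave = sine_samples(SAMPLE_RATE)   # one second of audio samples
```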

  • Module Materials:

Project 07:  Cellular Neural Networks

  • Teachers:  Angela Kramer (Marian HS)

  • Summary:

    • Notre Dame faculty are targeting hardware realizations of non-von Neumann CeNNs based on emerging technologies that are evolving from various sponsored research programs.  A CeNN is a spatially parallel computing paradigm consisting of identical processing elements (which are often analog in nature, and are connected to their nearest neighbors) that can significantly improve both the power and performance of various computation-intensive information processing applications – e.g., image processing, pattern recognition, etc.  Ms. Kramer’s work involved learning CeNN fundamentals; she also considered how new devices could lead to improved CeNNs and how CeNNs can be employed to solve complex problems (e.g., target tracking), and worked to benchmark CeNN-based computing against von Neumann algorithms for common problems.  Ms. Kramer began to teach Project Lead The Way curriculum at her school this year, and used her experience to create supplements to the existing materials associated with neural networks.
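
      A discrete, heavily simplified sketch of the CeNN idea: each cell combines its own state with its nearest neighbors through a fixed template.  A Laplacian-style edge-detecting template is used here for illustration; real CeNNs use continuous analog dynamics and richer templates.

```python
# Minimal discrete sketch of a CeNN-style computation: every cell updates
# from its nearest neighbors via a fixed template (a simple edge-detecting
# Laplacian here; actual CeNN dynamics are continuous and analog).
image = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]

def cenn_step(grid):
    n = len(grid)
    out = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            center = 4 * grid[i][j]
            nbrs = sum(grid[i + di][j + dj]
                       for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                       if 0 <= i + di < n and 0 <= j + dj < n)
            out[i][j] = center - nbrs   # nonzero only where intensity changes
    return out

edges = cenn_step(image)   # highlights the boundary of the bright square
```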

  • Module Materials:

Project 08:  Natural Language Generation for Software Documentation and Accessibility Technologies

  • Teachers:  Michael Domino (Concord HS)

  • Summary:

    • Mr. Domino worked to automatically generate English descriptions of software source code – e.g., to create comments for source code that does not already have them.  The descriptions have two uses: one, to include with software to help programmers understand code, and two, as tools to help visually impaired programmers and students navigate code more easily.  The strategy that his project/team followed was to: 1) perform an empirical study to determine the process that human programmers use to write descriptions, 2) design algorithms that mimic a portion of that process, and 3) evaluate the effectiveness of those algorithms.

  • Module Materials:

2017

Project 09:  Searching and Retrieving Data (Machine Learning Focus)

  • Teachers:  Thomas Falcone (La Lumiere School), George Logsdon (Riley HS)

  • Summary:

    • New information is being produced and curated at an increasing rate, and as a result, economies and industries will depend heavily on the effective storage and retrieval of that information.  Teachers worked closely with faculty and graduate students to learn how search engines like Google, Bing, and Yahoo perform keyword searches.  To that end, they improved their understanding of how terms and documents are represented and compared digitally, how massive storage systems quickly retrieve results, and how everyday users naturally parse and refine complex questions into a handful of key words using models created by recurrent and convolutional neural networks.  In addition, teachers have also begun working to apply these techniques to create innovative science fair projects that encourage students to independently investigate specific tasks in computing.

       

      Representative content includes introducing students to topics such as:  (i) AI and machine learning (+ what their current state is/uses are); (ii) the basics of Python programming and TensorFlow via guided lessons as well as tutorials by Hvass Labs on YouTube; (iii) neural networks – and how to train and classify images with the Inception model; (iv) Deep Dream and face recognition software.

  • Module Materials:

Project 10:  IoT Technology for the Bowman Creek Educational Ecosystem

  • Teachers:  Matt Modlin (Riley HS) and Seth Ponder (Riley HS)

  • Summary:

    • Teachers conducted research with the Bowman Creek Educational Ecosystem (BCe2) project, in the area of remote sensing and Internet of Things (IoT) technologies.  BCe2 is a collaboration between the University of Notre Dame, Indiana University South Bend, Ivy Tech Community College, and area high schools that provides summer internships focused on issues affecting the quality of life in the Southeast Neighborhood of South Bend.  Riley High School, where Modlin and Ponder are both teachers in the Engineering Magnet Program, is located in this neighborhood.  One of the challenges in the neighborhood is flooding and storm water runoff around Bowman Creek, sections of which were channelized into underground pipes that literally run under the Riley HS football field.  Teachers worked with a team of interns to develop Arduino-based systems for collecting data from the creek and wirelessly uploading it for analysis.  In particular, they worked on approaches for measuring the effectiveness of "rain gardens" using local plants with deep root systems as a means for collecting storm water runoff.  Technologies included the use of soil moisture sensing and the development of novel approaches for measuring water inflow and outflow from the gardens that was not absorbed.
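
      The kind of analysis the rain-garden sensors enable can be sketched simply.  The inflow/outflow readings below are invented for illustration, with units assumed to be liters per measurement interval:

```python
# Illustrative analysis of the kind of data rain-garden sensors produce
# (readings are made up; units assumed to be liters per interval).
inflow  = [12.0, 30.5, 18.2, 4.1]   # storm water entering the garden
outflow = [ 2.0,  8.5,  5.2, 1.1]   # water leaving un-absorbed

absorbed = sum(inflow) - sum(outflow)            # water retained by the garden
retention_pct = 100.0 * absorbed / sum(inflow)   # effectiveness measure
```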

  • Module Materials:

    • Updated link forthcoming.

Project 11:  Machine Learning and Robotics

  • Teachers:  David Lawrence (Culver Academies)

  • Summary:

    • There is growing interest in applying deep learning techniques to robotics applications.  This project considers learning visual-motor-tactile sensory robotic processes (visual servoing) using deep neural networks.  Lawrence began to investigate how one might train a robotic system to grasp different kinds of objects.  Lawrence teaches two engineering design courses at Culver Academies – Engineering 1, which focuses on mechanics and robotics, and Engineering 2, which focuses on programming and autonomous control.  Lawrence uses the above context for new modules on Arduino sensing and Matlab interfaces.

  • Module Materials:

Project 12:  Deep Neural Networks

  • Teachers:  Thomas Finke (Trinity School at Greenlawn)

  • Summary:

    • Teachers explored the training process of deep neural networks and various network topologies, and considered the relative impact of performance metrics such as energy, delay, etc. for the inference and training phases.  Note that Finke developed modules for his calculus courses, where students considered and implemented gradient descent in Matlab – as gradient descent represents a core computation in neural network training.
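
      The gradient-descent computation at the heart of these calculus modules can be sketched in a few lines (shown in Python here, rather than the Matlab used in class), minimizing the simple function f(x) = (x - 3)²:

```python
# Gradient descent in miniature: walk downhill on f(x) = (x - 3)^2,
# whose minimizer is x = 3 (step size and iteration count are illustrative).
def f_prime(x):
    return 2 * (x - 3)     # derivative of (x - 3)^2

x = 0.0                    # starting guess
lr = 0.1                   # step size (learning rate)
for _ in range(100):
    x -= lr * f_prime(x)   # step in the direction of steepest descent

# x converges toward the minimizer at 3
```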

  • Module Materials:

    • Updated link forthcoming.

Project 13:  Using CeNNs for CoNNs

  • Teachers:  Clinton Jepkema (Niles HS) and Timothy Knoester (Niles HS)

  • Summary:

    • Teachers considered the design and evaluation of mixed-signal systems based on cellular neural networks (CeNNs) for deep learning algorithms (i.e., convolutional neural networks, or CoNNs).  Our preliminary projections suggest that a CeNN approach based on just conventional CMOS can offer two to three orders of magnitude improvement in terms of energy-delay product (EDP) for a given dataset at iso-classification accuracy (i.e., compared to other state-of-the-art algorithms/architectures).  Additional improvements are possible if CeNNs based on emerging technologies are considered.  As such, the potential exists for an extra two orders of magnitude of improvement by compounding the benefits of algorithms, architecture, and technology.  To truly gauge the impact of the CeNN-based approach, we have constructed a power measurement system where the real-time power of CoNN inference is measured on a CPU-GPU system.  Teachers worked to validate initial results using a new measurement setup, and also studied the impact of different data sets on inference power.

  • Module Materials:

Project 14:  Let Physics do the Computing: Computation with Coupled Oscillators

  • Teachers:  Lauren Coil (Culver Academies) and Jonathan Lockwood (Penn HS)

  • Summary:

    • The goal of this project was to study the collective dynamics of physical devices, and to investigate non-Boolean information processing architectures that fundamentally embrace the “let the physics do the computing” paradigm.  Participants explored the fundamental limits of computational complexity that are associated with such devices, and considered how practical issues such as oscillation frequency, synchronization latency, inherent device-to-device variability, etc. can impact an application-level task.  Note that such systems might ultimately be capable of providing solutions to problems that are deemed to be “computationally hard” – i.e., they cannot be solved in polynomial time (where time is a polynomial function of input size), and typically require exponential time.  This makes addressing large input sizes – which are typically representative of the problems of greatest interest – quite challenging.  Examples of such problems include assigning colors to nodes of a graph such that two connected nodes do not have the same color.  (This problem has direct applicability to various resource allocation problems – e.g., airline scheduling.)  Participants also investigated existing solutions to the aforementioned application spaces/problems and began to benchmark them against oscillator-based solutions.  (This is actually of great interest to our research sponsors, and seeded a new graduate student project.)  Note that Lockwood has planned one large module around applications of coupled oscillations.  First, students will design and implement a coupled LRC circuit to measure power dissipation.  They will then apply some of the concepts used with the inductively coupled LRC circuits to the function of the pickups of a guitar.  Finally, they will design, build, and test a fully functioning electric guitar.
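
      The synchronization behavior at the heart of oscillator-based computing can be sketched with two coupled Kuramoto-style oscillators: starting from different phases, the coupling pulls them into lockstep.  All parameters below are illustrative:

```python
import math

# Toy sketch of "let the physics do the computing": two identical Kuramoto
# oscillators pulled into synchrony by their coupling (parameters assumed).
theta = [0.0, 2.0]      # initial phases, radians
omega = 1.0             # shared natural frequency
k = 1.0                 # coupling strength
dt = 0.01               # integration time step

for _ in range(2000):   # simple forward-Euler integration
    d0 = omega + k * math.sin(theta[1] - theta[0])
    d1 = omega + k * math.sin(theta[0] - theta[1])
    theta = [theta[0] + dt * d0, theta[1] + dt * d1]

phase_gap = abs(theta[1] - theta[0])   # shrinks toward 0 as they synchronize
```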

  • Module Materials:

Project 15:  Brain-inspired spiking neural networks

  • Teachers:  John Gensic (Penn HS), Sally Troxel (Adams HS)

  • Summary:

    • At present, researchers at Notre Dame are working to solve various medical imaging problems (i.e., identifying and classifying disease/cancer cells for diagnosis).  Algorithmic solutions have been based on various types of neural networks – e.g., fully convolutional networks, recurrent neural networks, etc.  There is also interest in mapping said problems to brain-inspired spiking neural networks – i.e., where information is encoded in spatio-temporal streams of spikes.  These “spike trains” are then used to process information, perform image recognition tasks, etc.  Moreover, IBM recently introduced a new chip (TrueNorth) that realizes a digital spiking neural network (see http://www.research.ibm.com/articles/brain-chip.shtml for more details).  Notre Dame researchers are currently working with IBM researchers to consider said mappings.  Teachers explored how different types of neural networks fared on three important design metrics for problems of interest – energy, delay, and accuracy (i.e., teachers worked to compare spiking network platforms to ASIC solutions, GPU solutions, etc.).  Based on this work, teachers developed the following activities for their classes, where students compare the efficiency/utility of different digital assistants, consider AI's impact on jobs (this will tie into natural selection units), and compare digital learning mechanisms with neurons.
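
      The spike-train encoding can be illustrated with a leaky integrate-and-fire neuron, the simplest spiking model: the membrane potential integrates its input, leaks over time, and emits a spike whenever it crosses a threshold.  The constants below are illustrative, not taken from TrueNorth or the ND work:

```python
# Sketch of the spiking idea: a leaky integrate-and-fire neuron encoding its
# input strength in a train of spikes (all constants are illustrative).
def lif_spikes(input_current, steps, threshold=1.0, leak=0.9):
    v, spikes = 0.0, []
    for t in range(steps):
        v = leak * v + input_current   # integrate the input, with leak
        if v >= threshold:             # fire a spike and reset
            spikes.append(t)
            v = 0.0
    return spikes

weak   = lif_spikes(0.05, 100)   # small input -> no spiking (leak dominates)
strong = lif_spikes(0.50, 100)   # large input -> dense, regular spike train
```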

  • Module Materials:

Project 16:  Drones!

  • Teachers:  David Chase Pinion (Schmucker MS)

  • Summary:

    • At present, there is great interest in using artificial intelligence for autonomous vehicles.  Pinion worked with simple, airborne drones.  Course module content has been mapped to 7th and 8th grade math courses – i.e., where students learn how to plot a course for the drone to fly, arrange for the drone to stop, hover, and grasp an object, etc.  (This represents a natural extension to existing Indiana math standards.)

  • Module Materials:

    • Updated link forthcoming.

2018

(The 2018 program concluded in August and module content is undergoing final refinements. Thus content will be periodically updated in September and October as school years commence.)

Project 17:  Machine Learning for Drug Detection

  • Teachers:  Jonathan Lockwood (Penn HS)

  • Summary:

    • A machine learning model was developed to characterize test results from cards being developed by ND chemistry professor Marya Liberman’s research group that can detect the presence or absence of various drugs – e.g., by a law enforcement officer in the field.  A card (referred to as an idPAD) is a chemical color test that responds to different functional groups of illicit drugs (e.g., cocaine, heroin, methamphetamine, and crack cocaine).  Based on the colors produced in various lanes (a color barcode), a specific drug or combination of drugs can be identified.  At present, image recognition requires officer training (e.g., test results that suggest the presence of crack cocaine are nearly identical to those for cocaine).  A project goal is to have a mobile device that can be used to detect the presence (or absence) of a specific drug based on the signature of the idPAD.  This problem is amenable to machine learning algorithms, as a program can learn features associated with a drug’s signature to make subsequent predictions about the drug associated with a new test case.  Participants used knowledge/skills that were developed during initial RET site activities (e.g., Keras tutorials) to develop a CNN model to address this classification problem.
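
      A heavily simplified stand-in for the classification task (the project itself used a CNN built with Keras): nearest-centroid matching of an observed lane color to known signatures.  The drug names come from the summary above, but the RGB "signature" values are invented purely to illustrate learning a color-to-class mapping:

```python
# Simplified stand-in for the idPAD classifier (the actual project used a
# CNN; these RGB "lane color" signatures are invented for illustration).
signatures = {
    "cocaine":         (200, 40, 60),
    "heroin":          (60, 180, 90),
    "methamphetamine": (50, 70, 200),
}

def classify(rgb):
    """Nearest-centroid match of an observed lane color to a known signature."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(signatures, key=lambda drug: dist2(rgb, signatures[drug]))

label = classify((195, 45, 65))   # a reading close to the first signature
```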

  • Module Materials:

Project 18:  Crossbar Architectures and Convolutional Neural Networks

  • Teachers:  Clinton Jepkema (Niles HS)

  • Summary:

    • Convolutional neural networks (CNNs) have achieved great successes in numerous applications, such as image classification, object detection, natural language processing, etc.  Still, it is extremely challenging to run deep CNNs on resource-limited embedded platforms, such as smart phones.  To tackle this challenge, binary neural networks (NNs) have recently been proposed.  In binary NNs, the weights and/or activations are binarized to ±1.  Such an approximation significantly reduces the energy, memory usage, and execution time, with acceptable accuracy.  To further reduce energy and computation time, researchers are studying hardware NN accelerators, especially those based on emerging devices.  As nonvolatile devices, resistive random-access memories (RRAMs) can realize analog multiplications by programming the devices to multi-level states, and are promising candidates for area-efficient NN accelerators.  Crossbar array architectures are of great interest for this space.  When computations for a neural network are performed digitally, the energy required to fetch values for learned filters can overwhelm that of the computation itself.  An analog crossbar array could perform the computations necessary for machine learning-based inference much more efficiently:  (i) inputs to a crossbar array can be represented as voltages on horizontal rows, (ii) new technologies can serve as tunable resistors with multiple analog states, and (iii) the results of multiply and accumulate operations for CNNs are captured by the summation of currents associated with the crosspoint connections in a given column.  That said, analog hardware also suffers from lower precision when compared to digital equivalents, and the impact on application-level accuracy, among other metrics, must be studied.
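
      The crossbar arithmetic described above can be sketched directly: Ohm's law (I = G·V) gives the per-device current, and Kirchhoff's current law sums the currents down each column, which together realize a matrix-vector multiply.  All voltages and conductances below are illustrative values:

```python
# Sketch of the analog crossbar idea: Ohm's law at each crosspoint plus
# Kirchhoff's current law per column realize a matrix-vector multiply.
voltages = [0.2, 0.5, 0.1]            # inputs on the horizontal rows (volts)

# Conductances (siemens) programmed into the crosspoint devices; each column
# stores one learned weight vector (values are illustrative).
conductance = [
    [1.0, 0.5],
    [0.0, 2.0],
    [3.0, 1.0],
]

def column_currents(v, g):
    cols = len(g[0])
    return [sum(v[row] * g[row][col] for row in range(len(v)))
            for col in range(cols)]   # KCL: currents sum on each column wire

currents = column_currents(voltages, conductance)  # = V^T * G, "for free"
```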

  • Module Materials:

Project 19:  The Bowman Creek Educational Ecosystem

  • Teachers:  Holly Austin (Penn HS), John Raul-Buison (Riley HS), Seth Ponder (Riley HS)

  • Summary:

    • Participants conducted research with the Bowman Creek Educational Ecosystem (BCe2) project, in the area of remote sensing and Internet of Things (IoT) technologies.  BCe2 is a collaboration between the University of Notre Dame, Indiana University South Bend, Ivy Tech Community College, and area high schools that provides summer internships focused on issues affecting the quality of life in the Southeast Neighborhood of South Bend.  One of the challenges in the neighborhood is flooding and storm water runoff around Bowman Creek, sections of which were channelized into underground pipes that literally run under the Riley HS football field.  Representative work includes:  (1) developing models for neighboring school districts and applying techniques developed for storm water runoff in South Bend to their surrounding ecosystems, such as runoff from a parking lot at Penn HS to the St. Joseph River; Arduino-based systems were developed to collect data from the creek for subsequent analysis (this work is well suited for environmental science courses); (2) developing a variety of Arduino-based sensor modules for measuring weather conditions, including rainfall, as well as for monitoring environmental conditions in the creek – activities can be introduced into earth science classes, and measurements can be correlated with readings from a weather station located at the school; and (3) developing approaches for measuring the effectiveness of "rain gardens" using local plants with deep root systems as a means for collecting storm water runoff.  Technologies included the use of soil moisture sensing and the development of novel approaches for measuring water inflow and outflow from the gardens that was not absorbed.

  • Module Materials:

Project 20:  Object Recognition for Autonomous Robots

  • Teachers:  Jim Langfeldt (Penn HS), Kyle Marsh (Mishawaka HS)

  • Summary:

    • At present, there is great interest in developing new machine learning models that can support tasks such as one-shot or few-shot learning – i.e., where a neural network can be trained with just a few representative examples of a given class.  These models have clear utility in robotics.  This is in contrast to many ML approaches, such as traditional reinforcement learning, that typically require a large number of training attempts.  Given the frequent constraints of this application space (e.g., battery life), there is great interest in developing one-shot/few-shot learning models that are applicable to robotics problems and that can be realized via the fast, energy-efficient hardware being developed by ND researchers to support such models.  To begin to explore this new application space, participants developed initial machine learning models and datasets for tasks that may need to be performed by an autonomous robot, established baseline model accuracies, etc.  (This project would be amenable to competitions/classes associated with FIRST robotics, etc.)
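
      The one-shot idea can be sketched with a nearest-prototype classifier: one stored example per class, with new inputs labeled by distance to those prototypes.  The feature vectors below are invented stand-ins for learned embeddings:

```python
import math

# Toy sketch of one-shot classification: one stored "prototype" per class,
# instead of many training samples (feature vectors invented for illustration).
prototypes = {
    "cup":  [0.9, 0.1, 0.2],
    "ball": [0.1, 0.8, 0.3],
}

def classify(features):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(prototypes, key=lambda label: dist(features, prototypes[label]))

guess = classify([0.85, 0.15, 0.25])   # an input near the "cup" prototype
```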

  • Module Materials:

Project 21:  Machine Learning Models with Mixed Precision

Project 22:  Graph Coloring with Coupled Oscillators

  • Teachers:  Rebecca Humbarger (Riley HS), Erica Price (Trinity School at Greenlawn)

  • Summary:

    • The application of the von Neumann computing model to problems such as associative computing and optimization is challenging.  Consider the graph coloring optimization problem, with a goal of finding a minimum set of colors such that any nodes in a graph that are connected by an edge do not share the same color.  Vertex coloring has applicability to scheduling, resource allocation, etc.  For example, in a social network, individuals are represented by nodes and are connected by edges if they are related.  By solving the graph coloring problem, we can identify groups of mutually unrelated individuals.  It is important to evaluate the efficacy of physics-based solutions with respect to figures of merit (FOM) such as (i) solution quality (the minimum number of colors needed so no two adjacent nodes have the same color – i.e., the chromatic number), (ii) the time required for a physical system to generate a solution, (iii) the energy required for said solution, etc.  Potential solutions based on a new model/technology should also be compared to heuristic solutions that run on a CPU or GPU.  Preliminary results from the NSF-SRC project have considered simulation studies of physical systems for which chromatic numbers were calculated.  An ideal baseline should include the time/energy required to run highly optimized heuristics.  The objective of this project was geared toward this end.  Participants implemented and debugged a Python implementation of the Brelaz heuristic for graph coloring and applied it to graphs studied in prior work.  They measured program run times for use in benchmarking efforts that PI Niemier is responsible for via SRC-NSF funding.
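
      The Brelaz (DSATUR) heuristic the participants implemented can be sketched as follows: repeatedly color the vertex whose neighbors already use the most distinct colors, breaking ties by degree, and give it the smallest feasible color.  This is a from-scratch illustration, not the participants' code:

```python
# Sketch of the Brelaz (DSATUR) heuristic: color the most "saturated"
# uncolored vertex first, assigning it the smallest color its neighbors
# do not already use.
def dsatur(adj):
    """adj: {node: set(neighbors)}. Returns {node: color_index}."""
    colors = {}
    while len(colors) < len(adj):
        def saturation(v):   # number of distinct colors among v's neighbors
            return len({colors[u] for u in adj[v] if u in colors})
        v = max((v for v in adj if v not in colors),
                key=lambda v: (saturation(v), len(adj[v])))  # ties: by degree
        used = {colors[u] for u in adj[v] if u in colors}
        colors[v] = next(c for c in range(len(adj)) if c not in used)
    return colors

# A 4-cycle is 2-colorable; DSATUR finds a valid 2-coloring.
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
coloring = dsatur(square)
num_colors = len(set(coloring.values()))
```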

  • Module Materials:

Project 23:  Benchmarking New Hardware Kernels for Reinforcement Learning

  • Teachers:  John Gensic (Penn HS), Kathryn Meier (Riley HS)

  • Summary:

    • As part of an NSF/SRC project, Niemier and collaborators at other universities are looking at physics-inspired computing systems / coupled dynamical systems to solve computationally hard problems and to realize neuromorphic computing primitives.  Niemier’s collaborators have developed an energy efficient, stochastic spiking neuron that has utility in training/inference.  Colleagues at Georgia Tech have realized a chip-level prototype and used it in the context of a (simple) reinforcement learning (RL) problem for micro-robots.  While GT colleagues/authors have made an effort to compare their approach to the existing state of the art (i.e., neuromorphic ASICs), comparisons are coarse grained and effectively only consider chip-level metrics like performance/Watt.  Research sponsors are interested in more fine-grained, application-centric, apples-to-apples comparisons.  To establish a baseline, participants worked to realize an RL model with similar functionality using toolsets such as Keras.  This code can then be analyzed, timed, etc. to establish a baseline for a more traditional CPU/GPU realization, and can serve as a more “fine grained” comparison point for hardware implementations based on new technologies and models.
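
      A much-simplified stand-in for the RL baseline (the participants used Keras; this is plain tabular Q-learning on an invented five-state corridor, written only to illustrate the reinforcement-learning loop being benchmarked):

```python
import random

# Simplified stand-in for the RL baseline: tabular Q-learning on a 5-state
# corridor (the agent must move right to the goal). All constants assumed.
random.seed(1)
N_STATES, GOAL = 5, 4
q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]; 0=left, 1=right

for _ in range(500):                         # training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection (10% exploration)
        a = random.randrange(2) if random.random() < 0.1 else \
            (0 if q[s][0] > q[s][1] else 1)
        s2 = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s2 == GOAL else 0.0       # reward only at the goal
        # Q-learning update: alpha=0.5, discount gamma=0.9
        q[s][a] += 0.5 * (r + 0.9 * max(q[s2]) - q[s][a])
        s = s2

# Learned greedy policy: move right everywhere on the path to the goal
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(N_STATES)]
```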
  • Module Materials: