Research

Dr. Kwon’s research has been supported by funding agencies including the National Science Foundation, the GM Foundation, the Donald Lee Smith Fund through the Community Foundation of Greater Flint, and the Ministry of Knowledge Economy, Korea.

Research Interests

  • Artificial Intelligence and Machine Learning
  • Autonomous Vehicles, Self-Driving Cars
  • Computational Neuroscience and Neuroanatomy

Labs

Research Statement:

Bio-Inspired Machine Intelligence for Autonomous Systems

Executive Summary

My research interests are in brain-inspired machine intelligence and its applications, such as mobile robots and autonomous vehicles. To achieve true machine intelligence, I have taken two complementary approaches: a bottom-up, data-driven approach and a top-down, theory-driven approach. In the bottom-up, data-driven approach, I investigate the neuronal structure of the brain to understand its function. The development of a high-throughput, high-resolution 3D tissue scanner was the keystone of this approach; its 3D virtual microscope allows us to examine the neuronal structure of a whole mammalian brain at high resolution. The top-down, theory-driven approach asks what true machine intelligence is and how it can be implemented. True intelligence cannot be investigated without embracing theory-driven topics such as self-awareness, embodiment, consciousness, and computational modeling. I have studied the internal dynamics of neural systems to investigate machine self-awareness and to model compensation for neural signal delays. The two approaches meet in the middle, where machine intelligence is implemented in mechanical systems such as mobile robots and autonomous vehicles. My strong desire to bridge the bottom-up and top-down approaches has led me to focus my research on mobile robotics and autonomous vehicles, where the data-driven and theory-driven approaches combine. To promote this research agenda, I established two research laboratories at Kettering University: the Brain-Inspired Intelligent Systems (BI2S) Laboratory and the Mobile Intelligent Robotics (MIR) Laboratory.

Research Activities

The previously funded projects below exemplify my interdisciplinary research bridging the bottom-up, data-driven and top-down, theory-driven approaches.

Development of High-Throughput and High-Resolution Three-Dimensional Tissue Scanner with Internet-Connected 3D Virtual Microscope for Large-Scale Automated Histology

This project was to develop a high-throughput, high-resolution 3D tissue scanner with an Internet-connected 3D virtual microscope for large-scale automated histology. The instrument digitizes a wide variety of organs and generates 3D data sets, and a data processing pipeline automatically converts the raw data sets into aligned volumetric data sets without human intervention. The National Science Foundation (NSF) funded this project with an award of $341,563.00 over four years, from 2013 to 2017.
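
To make the pipeline idea concrete, here is a minimal sketch of slice-to-slice alignment and volume assembly. It is my own illustration rather than the project's actual pipeline; the translation-only registration by phase correlation and the synthetic input data are assumptions made for brevity.

    # Minimal sketch: align a stack of 2D tissue sections by translation-only
    # phase correlation and assemble them into a 3D volume. Illustrative only;
    # a real pipeline handles far larger data and more general deformations.
    import numpy as np

    def estimate_shift(ref, img):
        """Return the (row, col) roll that best aligns img onto ref (phase correlation)."""
        spectrum = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
        corr = np.fft.ifft2(spectrum / (np.abs(spectrum) + 1e-9)).real
        peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
        dims = np.array(corr.shape)
        peak[peak > dims // 2] -= dims[peak > dims // 2]  # wrap to signed shifts
        return peak

    def align_stack(slices):
        """Align each slice to the previous aligned slice and stack into a volume."""
        aligned = [slices[0]]
        for img in slices[1:]:
            dy, dx = estimate_shift(aligned[-1], img)
            aligned.append(np.roll(img, (int(dy), int(dx)), axis=(0, 1)))
        return np.stack(aligned)

    # Synthetic sections stand in for scanned tissue images.
    volume = align_stack([np.random.rand(256, 256) for _ in range(10)])
    print(volume.shape)  # (10, 256, 256)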

Development of Low-Cost Autonomous Vehicle Experimental Platform

The scope of this work was as follows: explore current technology in the field of autonomous vehicles and identify its challenges, propose and test ideas for possible solutions, and develop experimental vehicle platforms. This project was funded by the Provost’s Research Matching Fund in 2017.

Development of Software Tools for Driver Assistance Systems

My lab developed various tools for the driver assistance systems of Magna Electronics, which is operated by Magna International, the largest automobile parts manufacturer in North America. The project gave students a good opportunity to understand how real-world systems use these technologies. It was funded by Magna Electronics with an award of $115,570.00 in 2015.

Touch-Enabled Virtual Room for Scientific Data Exploration

I proposed to develop a virtual room where a user can touch objects to explore scientific data. The virtual room system will consist of an Oculus Rift (a virtual reality headset), a Leap Motion 3D Jam (a hand-tracking system), a GloveOne (a haptic glove), and a host computer where my custom software system and the 3D scientific data reside. Users receive haptic feedback from touching objects while they explore scientific data. This project was funded by a Research Fellowship in 2015.

Development of LED System Lighting Module

My lab, with a collaborator at Wonkwang University in Korea, developed an LED system lighting module with compact data communication modules and driver IC/processor control parts based on multiple sensors. This project was supported by the Ministry of Knowledge Economy (MKE), Korea, and supervised by the Korea Evaluation Institute of Industrial Technology (KEIT) from 2012 to 2015.

Research Agenda – Short Term

My short-term research goal is to extend my current research and explore new opportunities. Here are some potential research ideas toward this goal.

Intelligent Vehicle Testbed

I have been building an autonomous vehicle testbed, and I would like to extend it with higher-precision sensors and sensor fusion. The testbed could be built through the NSF Major Research Instrumentation (MRI) program in the Division of Computer and Network Systems or through the Smart and Autonomous Systems (S&AS) program in the Division of Information & Intelligent Systems. In addition, I would like to propose an NSF Research Experiences for Undergraduates (REU) site using this testbed. Through the REU, undergraduate students will design algorithms and apply techniques to control an autonomous vehicle, giving them quality research experiences in autonomous driving.

Small-Scale Autonomous Car

I would like to build a small-scale autonomous car and propose a competition of small-scale self-driving cars. Owning a real autonomous vehicle is expensive, and maintaining it and its sensors costs even more. I will therefore propose to NSF’s Smart and Autonomous Systems (S&AS) program in the Division of Information & Intelligent Systems a design for a small-scale autonomous car built by converting a kids’ ride-on remote control car and fitting it with inexpensive sensors. This project could establish a solid platform on which researchers and students can test algorithms without using a real autonomous car. With this design, I would also like to propose an NSF REU that gives undergraduate students quality research experiences in autonomous vehicle study.

Intelligent Autonomous Vehicle Simulator

I would like to build an autonomous vehicle simulator as a test platform for evaluating autonomous vehicle technologies, with support for hardware-in-the-loop and human-in-the-loop testing. The proposed system could be built through NSF’s MRI Development program, the Smart and Autonomous Systems (S&AS) program in the Division of Information & Intelligent Systems, or the Advanced Technological Education (ATE) and Improving Undergraduate STEM Education: Education and Human Resources (IUSE: EHR) programs under the Division of Undergraduate Education (DUE). With the simulator, students will gain design experience in perception, planning, and control for autonomous driving.
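
As an illustration of the kind of design exercise students would do with such a simulator, here is a minimal sketch of a sense-plan-act loop over a simple simulated vehicle. The kinematic bicycle model, the waypoint-based perception stand-in, and the proportional steering rule are my own illustrative assumptions, not the proposed system's design.

    # Minimal sketch of a sense-plan-act loop over a simulated vehicle.
    # All models, gains, and waypoints below are illustrative assumptions.
    import math

    DT = 0.05          # simulation step [s]
    WHEELBASE = 2.7    # assumed wheelbase of the simulated vehicle [m]

    def perceive(state, waypoints):
        """Perception stand-in: return the first waypoint that has not been reached."""
        x, y = state[0], state[1]
        for wx, wy in waypoints:
            if math.hypot(wx - x, wy - y) > 2.0:   # 2 m "reached" threshold (assumed)
                return (wx, wy)
        return waypoints[-1]

    def plan(state, target):
        """Planning stand-in: heading error toward the target, wrapped to [-pi, pi]."""
        x, y, yaw = state
        err = math.atan2(target[1] - y, target[0] - x) - yaw
        return (err + math.pi) % (2 * math.pi) - math.pi

    def control(heading_error):
        """Control stand-in: proportional steering with a constant speed command."""
        steer = max(-0.5, min(0.5, 1.5 * heading_error))
        return steer, 5.0   # (steering angle [rad], speed [m/s])

    def step(state, steer, speed):
        """Simulated plant: kinematic bicycle model."""
        x, y, yaw = state
        x += speed * math.cos(yaw) * DT
        y += speed * math.sin(yaw) * DT
        yaw += speed / WHEELBASE * math.tan(steer) * DT
        return (x, y, yaw)

    state = (0.0, 0.0, 0.0)
    waypoints = [(10.0, 2.0), (20.0, 5.0), (30.0, 5.0)]
    for _ in range(400):
        target = perceive(state, waypoints)
        steer, speed = control(plan(state, target))
        state = step(state, steer, speed)
    print("final pose (x, y, yaw):", state)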

Affordable Humanoid Robots

Humanoid robots are a great research platform on which we can study a wide variety of research topics, including cognitive development, human behavior, developmental robotics, intelligent systems, and multi-agent collaboration. Some affordable humanoid robots are available, but they can hardly be used as research platforms because they often lack sensory feedback and sufficient processing power. I would therefore like to develop an affordable humanoid robot powered by cloud computing to overcome its limited onboard computing power. An out-of-the-box robot will be extended by adding sensors, including a camera for vision and a gyroscope for detecting falls. Its computational power will be extended through a smartphone that connects to desktop computers to offload heavy computations. This idea could be pursued through NSF’s National Robotics Initiative 2.0: Ubiquitous Collaborative Robots in the Division of Information & Intelligent Systems or a similar program.
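
A minimal sketch of the offloading idea follows. The loopback address, message format, and the stand-in for a heavy model are assumptions made for illustration; in the envisioned robot the client side would run on the smartphone and the server side on a desktop.

    # Minimal sketch: the robot/smartphone side ships a camera frame to a desktop
    # over TCP and receives the inference result, so heavy computation never runs
    # on the robot's limited onboard processor. Address and protocol are assumed.
    import pickle
    import socket
    import struct
    import threading
    import time

    import numpy as np

    HOST, PORT = "127.0.0.1", 9999   # placeholder for the desktop "cloud" node

    def send_msg(sock, obj):
        payload = pickle.dumps(obj)
        sock.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_msg(sock):
        size = struct.unpack("!I", sock.recv(4))[0]
        buf = b""
        while len(buf) < size:
            buf += sock.recv(size - len(buf))
        return pickle.loads(buf)

    def desktop_server():
        """Desktop side: receive one frame, run a stand-in for a heavy model, reply."""
        with socket.socket() as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                frame = recv_msg(conn)
                send_msg(conn, {"mean_brightness": float(frame.mean())})  # model placeholder

    def robot_client():
        """Robot/smartphone side: capture a frame (synthetic here) and offload it."""
        with socket.socket() as cli:
            cli.connect((HOST, PORT))
            send_msg(cli, np.random.randint(0, 255, (120, 160), dtype=np.uint8))
            print("result from desktop:", recv_msg(cli))

    threading.Thread(target=desktop_server, daemon=True).start()
    time.sleep(0.5)   # give the server a moment to start listening
    robot_client()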

Immersive Data Exploration using Virtual/Augmented Reality

More than half of the brain’s cells are dedicated to processing sensory experience, and much of that experience is touch. What we touch shapes what we feel, and touch can also shape what we learn: there is mounting evidence that touch strengthens our sense of ownership and enhances memory of what we have learned. I propose to develop an immersive data exploration system in which a user can touch objects and receive haptic feedback while exploring scientific data. I will seek funding through NSF’s Cyber-Physical Systems (CPS) program and the National Institutes of Health (NIH) to build this system.

Research Agenda – Long Term

I would like to explore a broad range of topics related to machine intelligence, as well as the multidisciplinary areas of computational neuroscience and its applied research. My work has spanned a diverse set of topics; they may seem somewhat unrelated, but they all connect in the investigation of true intelligence, which will be the sturdy foundation of intelligent systems. My long-term research goal is to establish a truly intelligent system inspired by the brain. In the future, I plan to continue my efforts in several directions:

  • Situation awareness, a key to safe autonomous vehicles; the main research question is how autonomous vehicles can read their full environment as humans do.
  • A neurorobotic testbed for studying brain functions.
  • Self-awareness and prediction.
  • Emotional artificial intelligence in the loop for an intelligent system.
  • Brain information processing based on brain connectivity networks.
  • Computer vision algorithms that allow a vehicle to drive safely by recognizing pedestrians, bicyclists, cars, buses, trucks, and emergency vehicles.
  • High-throughput instrumentation and analysis for the brain.
  • Large-scale reconstruction of microvascular networks.
  • Efficient algorithms and tools for analyzing and visualizing biological data sets.
  • Multiscale imaging, analysis, and integration of brain networks.
  • Building a whole mouse brain 3D atlas based on the data sets from my tools and new algorithms.

These tasks require multidisciplinary knowledge: robotics, deep learning, artificial intelligence, neuroscience, cognitive science, parallel programming, data visualization, image processing, and big-data analytics. Tremendous progress has been made in these areas in the last few years, enabled by the availability of artificial intelligence tools, robot operating systems, better-performing sensors, sophisticated algorithms, faster computing with GPUs, and more data. But autonomous systems often require extreme system performance, with demands on quality and reliability that increase significantly with the rising degree of automation. Such requirements give rise to new problems and open new questions. My work has crossed many research areas, and I have built expert skills and knowledge in many of them through my research and intensive industrial experience. I will continue my efforts in these areas to build true machine intelligence inspired by the brain.
