
Artificial Intelligence in Robotics

ABOUT THE RESEARCH GROUP

Photo of the research group. Photo: Lars Kruse/AU Foto

A digital revolution is happening

Given the rapid pace of developments in related areas such as artificial intelligence, machine learning, computer vision, cloud systems and cyber-physical systems, as well as the availability of low-cost processors and sensors, robotics has been, and will continue to be, a game changer in academia and industry. Companies invest billions of dollars in robotics, and authorities in many countries have already started discussing how to adapt to the revolutionary research activities in robotics and artificial intelligence.

Our ultimate goal: Smarter robots

Motivated by the factors above, our primary research interests and contributions lie within the areas of control systems, computational intelligence and robotic vision with applications in guidance, control and automation of unmanned ground and aerial vehicles.

In today's world, it is not sufficient to design autonomous systems that merely repeat a given task over and over. We must push the boundaries of the current state of the art in autonomy towards smarter systems that learn from and interact with their environment, collaborate with people and other systems, plan their future actions, and execute the given task accurately.

Dream for the future: Replacement or Enhancement?

Dreaming of "slave"-type robots was very common in the past. In scientific terminology, we call this dream "replacement", because the goal was always to replace a human worker with a robot. Self-driving cars, customer-service robots and many other efforts work towards the same end goal: removing humans from processes. In our opinion, instead of "replacement" we can also think of "enhancement", meaning that robots enhance human capabilities. This brings us to the idea of "collaborative robots".

Student project ideas

MAPPING CROP IMAGES FROM A WINDY FIELD ENVIRONMENT TO A STATIONARY (WIND-FREE) FIELD ENVIRONMENT

Detection algorithms can misclassify objects when their shapes are distorted. In the crop/weed detection problem, plant types may be misclassified and their locations misestimated because wind (natural, or caused by the propellers) can distort plant shapes. To overcome this problem, Generative Adversarial Networks (GANs) can be trained to produce stationary images corresponding to windy images. In this project, a dataset containing paired stationary and windy images should be collected, and GANs should be trained on this dataset.
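As a rough illustration, a minimal paired image-to-image GAN in PyTorch (in the spirit of pix2pix) could be set up as below. The network architectures, image size and loss weights are placeholder assumptions, and random tensors stand in for the collected windy/stationary pairs.

    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        # Maps a "windy" crop image to a candidate "stationary" image.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
            )
        def forward(self, x):
            return self.net(x)

    class Discriminator(nn.Module):
        # PatchGAN-style: judges (windy, candidate-stationary) pairs.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(128, 1, 4, stride=1, padding=1),
            )
        def forward(self, windy, candidate):
            return self.net(torch.cat([windy, candidate], dim=1))

    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    # Random tensors stand in for a real paired (windy, stationary) dataset.
    windy = torch.randn(4, 3, 64, 64)
    stationary = torch.randn(4, 3, 64, 64)

    for step in range(2):  # illustrative; real training runs many epochs
        # Discriminator: real pairs vs. generated pairs.
        fake = G(windy).detach()
        d_real, d_fake = D(windy, stationary), D(windy, fake)
        loss_d = (bce(d_real, torch.ones_like(d_real))
                  + bce(d_fake, torch.zeros_like(d_fake)))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator: fool D while staying close to the ground truth (L1 term).
        fake = G(windy)
        d_out = D(windy, fake)
        loss_g = bce(d_out, torch.ones_like(d_out)) + 100.0 * l1(fake, stationary)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()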


CROP ROW DETECTION USING DRONES

Drones are applied in agriculture to increase efficiency and improve crop monitoring. Crop row detection is a crucial step for both robot navigation in the field and crop detection. This project consists of two parts, each worked on by an individual student:

a. Drone navigation & RTK GPS sensor fusion & obstacle avoidance

The aim of this part is to place GPS sensors on the drone and make it follow a given trajectory. GPS positions should be saved in particular areas of the field (e.g. whenever a request comes from the weed detector). Sonar sensors should be mounted on the drone for obstacle avoidance. At the end of this part, the drone should navigate autonomously in the field environment.
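A minimal, hardware-free sketch of the logging logic in this part: the callbacks get_fix, goto and weed_request are hypothetical stand-ins for the autopilot and weed-detector interfaces, and the waypoint tolerance is an assumption (RTK GPS is typically centimetre-level). Sonar-based obstacle avoidance would additionally gate the motion commands.

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in metres between two GPS fixes.
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    WAYPOINT_REACHED_M = 1.0  # assumed tolerance

    def follow_trajectory(waypoints, get_fix, goto, weed_request):
        """Fly through (lat, lon) waypoints; save the current fix whenever
        the weed detector requests it. All three callbacks are hypothetical."""
        logged = []
        for wp in waypoints:
            goto(wp)  # command the autopilot towards the next waypoint
            while True:
                lat, lon = get_fix()
                if weed_request():          # weed detector asked for a position
                    logged.append((lat, lon))
                if haversine_m(lat, lon, wp[0], wp[1]) < WAYPOINT_REACHED_M:
                    break                   # waypoint reached, move on
        return logged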

b. Developing detection algorithms

The aim of this part is to detect plant rows using RGB cameras. Aerial images of plant rows will be captured by a camera mounted on the drone, and rows will be detected in the images in real time using on-board computers. To this end, image processing techniques can be used to preprocess the images, and artificial neural networks or deep neural networks can be used as detection algorithms.
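As a classical baseline for this part (which a learned detector could replace or refine), the sketch below combines an excess-green vegetation index with a probabilistic Hough transform in OpenCV. The thresholds and the image path are illustrative assumptions.

    import cv2
    import numpy as np

    def detect_rows(bgr):
        # Excess-green index (2G - R - B) highlights vegetation against soil.
        b, g, r = cv2.split(bgr.astype(np.float32))
        exg = 2 * g - r - b
        exg = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        _, mask = cv2.threshold(exg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # Fit straight lines to the vegetation mask; crop rows appear as
        # roughly parallel lines in a nadir aerial image.
        lines = cv2.HoughLinesP(mask, 1, np.pi / 180, threshold=100,
                                minLineLength=80, maxLineGap=20)
        return lines if lines is not None else []

    img = cv2.imread("field.jpg")  # hypothetical aerial image path
    if img is not None:
        for x1, y1, x2, y2 in (l[0] for l in detect_rows(img)):
            cv2.line(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 2)
        cv2.imwrite("field_rows.jpg", img)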

DESIGNING FIELD ENVIRONMENT SIMULATOR FOR DRONES USING UNREAL ENGINE

It might be dangerous to conduct experiments in a real field environment in the early stages of a project. It is therefore better to use simulators to evaluate algorithms first. The aim of this project is to design a basic field environment with adjustable settings (e.g. crop size, crop type, lighting, etc.) using Unreal Engine on Linux. Such a simulator can also be used to generate artificial data for basic agricultural problems.
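One common route to an Unreal-based drone simulator is Microsoft's AirSim plugin; the project is not tied to it, but assuming an AirSim-enabled Unreal environment is running, a minimal Python client session could look like the following (camera name and file path are assumptions).

    import airsim

    # Assumes an Unreal Engine environment with the AirSim plugin is running
    # and reachable on the default local port.
    client = airsim.MultirotorClient()
    client.confirmConnection()
    client.enableApiControl(True)
    client.armDisarm(True)

    client.takeoffAsync().join()
    client.moveToPositionAsync(0, 10, -5, 2).join()  # x, y, z in NED metres, 2 m/s

    # Grab a downward-facing camera frame, e.g. to build an artificial dataset.
    png = client.simGetImage("bottom_center", airsim.ImageType.Scene)
    with open("sim_crop_image.png", "wb") as f:  # hypothetical output path
        f.write(png)

    client.armDisarm(False)
    client.enableApiControl(False)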

TRANSFER LEARNING USING DEEP NEURAL NETWORKS FOR DRONE CONTROL


Due to the cost associated with data collection and training, approaches such as transfer learning can be used to transfer knowledge between drones and thereby increase the efficiency of their control. In this project, the goal is to use the knowledge from a source drone on a target drone (e.g. another drone with different mass or aerodynamic properties) to achieve high-accuracy control with minimal data collection and training. Given the ability of deep learning to generalize knowledge from training samples, it can be used to learn a controller for the source drone and adapt it to the target drone. The transferred knowledge is expected to speed up the training of the target drone.
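A minimal sketch of this idea, assuming a supervised setting in which flight logs provide state-action pairs: copy the source drone's policy network, freeze the shared layers, and fine-tune only the last layer on a small target-drone dataset. Network sizes and the data below are placeholder assumptions.

    import torch
    import torch.nn as nn

    # Control network: maps a state vector (e.g. attitude and velocities)
    # to actuator commands. Dimensions are illustrative assumptions.
    def make_policy(state_dim=12, action_dim=4):
        return nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    source_policy = make_policy()
    # ... assume source_policy was already trained on the source drone ...

    # Transfer: copy the weights, freeze the shared feature layers, and
    # fine-tune only the output layer on the small target-drone dataset.
    target_policy = make_policy()
    target_policy.load_state_dict(source_policy.state_dict())
    for layer in list(target_policy)[:-1]:
        for p in layer.parameters():
            p.requires_grad = False

    opt = torch.optim.Adam(target_policy[-1].parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    states = torch.randn(64, 12)   # stand-in for target-drone flight logs
    actions = torch.randn(64, 4)
    for _ in range(10):
        opt.zero_grad()
        loss = loss_fn(target_policy(states), actions)
        loss.backward()
        opt.step()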


PERSPECTIVE TRANSFORMATION OF CROP IMAGES CAPTURED FROM A TILTED CAMERA


Cameras rigidly mounted on UGVs or UAVs are easily tilted if no gimbal system is used, which introduces a perspective effect in the image. Object classification algorithms may be affected by this perspective effect, causing plants to be misclassified. A perspective transformation can be used to eliminate this drawback. In traditional methods, the four points needed for the transformation are selected manually, which requires human supervision. The aim of this project is to specify the four points autonomously, without human supervision. To this end, machine-learning-based methods can be developed that understand the content of the image and determine the best four points for the perspective transformation.
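For reference, the manual four-point baseline in OpenCV looks as follows; the learning-based part of the project would replace the hand-picked source points with automatically predicted ones. The point coordinates and image paths are illustrative assumptions.

    import cv2
    import numpy as np

    img = cv2.imread("tilted_crop.jpg")  # hypothetical input image

    # Four corners of a ground region in the tilted image (chosen by hand
    # here; this project would predict them automatically), and where those
    # corners should map in the rectified top-down view.
    src_pts = np.float32([[120, 300], [520, 310], [600, 470], [60, 460]])
    dst_pts = np.float32([[0, 0], [500, 0], [500, 400], [0, 400]])

    if img is not None:
        M = cv2.getPerspectiveTransform(src_pts, dst_pts)
        top_down = cv2.warpPerspective(img, M, (500, 400))
        cv2.imwrite("rectified_crop.jpg", top_down)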


VISION BASED CROP ROW RECOGNITION ALGORITHMS


Drones should be aware of their location relative to the plant rows and should know which parts of the field they have passed before. With GPS sensors it is easy to determine the location and store a location history; without GPS support, however, the drone must recognize rows it has passed before. Rows should therefore be classified according to features such as vegetation layout, size, etc. The aim of this project is to model each row using predefined features and to implement a memory of row models for storing the row history. Images captured by the drone's camera are the inputs to the system, so image processing skills and basic machine learning knowledge are required.
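A minimal sketch of such a row memory, under the assumption that each row is summarized by a small hand-crafted feature vector computed from a binary vegetation mask. The features and matching threshold below are placeholder choices, not a validated design.

    import numpy as np

    def row_features(mask):
        # mask: binary vegetation mask of one row image (e.g. from an
        # excess-green threshold). Features are illustrative assumptions:
        # overall coverage, variability of the cross-row vegetation profile,
        # and the relative position of its peak.
        cols = mask.mean(axis=0)
        return np.array([mask.mean(), cols.std(), cols.argmax() / len(cols)])

    class RowMemory:
        def __init__(self, match_threshold=0.1):
            self.rows = []                    # stored feature vectors
            self.threshold = match_threshold  # assumed matching tolerance

        def observe(self, mask):
            f = row_features(mask)
            if self.rows:
                dists = [np.linalg.norm(f - r) for r in self.rows]
                i = int(np.argmin(dists))
                if dists[i] < self.threshold:
                    return i                  # recognized a previously seen row
            self.rows.append(f)
            return len(self.rows) - 1         # new row id

    memory = RowMemory()
    mask = (np.random.rand(100, 400) > 0.7).astype(np.float32)  # stand-in mask
    row_id = memory.observe(mask)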

For more detailed info: Assoc. Prof. Erdal Kayacan (erdal@eng.au.dk)

Contact

Erdal Kayacan

Associate professor
Building 5341
Phone: +45 9352 1062