Yash Karan Sodhi

POWER WHEELCHAIR

AI-ENHANCED PRECISION AND STABILITY

Disability is a pressing public health issue in India, and wheelchairs are the most commonly used assistive device. Smart wheelchairs that combine robotics and AI are now emerging as a potential solution to the mobility challenges faced by disabled individuals.

AIROLA

The Future of Semi-Autonomous Wheelchairs: Enabling Mobility and Independence

In recent years, advances in robotics technology have led to the development of semi-autonomous wheelchairs that can assist users in navigating their environment while still allowing them to maintain control. These innovative devices are designed to help users with limited mobility to move around more easily and independently, and can be a game-changer for people with disabilities.

Semi-autonomous wheelchairs combine reactive navigation systems with user input to create a seamless and intuitive experience. With the ability to detect obstacles and adjust the wheelchair’s path accordingly, these devices allow users to navigate even in unfamiliar environments with ease.

 

The benefits of semi-autonomous wheelchairs go beyond just mobility. For many people with disabilities, increased independence can lead to better mental health and a greater sense of purpose. By providing a sense of control and autonomy, these devices can improve overall quality of life and provide greater opportunities for socialization and community engagement.

As technology continues to advance, we can expect to see more innovations in the field of semi-autonomous wheelchairs. From improved obstacle detection and avoidance to more sophisticated user interfaces, these devices have the potential to transform the lives of millions of people with disabilities.

 


Revolutionizing Mobility: The Cutting-Edge Architecture of AI Wheelchairs

The system is an autonomous wheelchair that follows the subsumption architecture, a layered model in which each layer generates a specific behavior. The wheelchair is designed as a human-machine system in which the user shares control with the machine. The first phase of the project develops a semi-autonomous wheelchair that requires neither a representational world model nor global trajectory planning; instead, reactive methods directly couple real-time sensory information to motor actions. The architecture includes safe-travel, collision-avoidance, and obstacle-avoidance behaviors, implemented with a potential field approach. The second phase aims to develop a self-cognitive module that gives the system the perception and cognitive capabilities needed for more advanced autonomous navigation.
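
As a rough illustration of the reactive, potential-field style of control described above, the sketch below blends the user's commanded direction with repulsion from nearby obstacles. The gains, the influence radius, and the sensing format are assumptions for illustration, not details taken from the project.

```python
import numpy as np

# Minimal sketch of a potential-field reactive controller, assuming the
# wheelchair's position, the user's commanded direction, and a list of
# obstacle positions (e.g. from range sensors) are available each control
# cycle. All constants below are illustrative.

ATTRACT_GAIN = 1.0      # pull toward the user's commanded direction
REPULSE_GAIN = 0.5      # push away from nearby obstacles
INFLUENCE_RADIUS = 1.5  # metres; obstacles farther than this are ignored

def potential_field_step(position, user_direction, obstacles):
    """Blend the user's intent with obstacle repulsion into one motion direction."""
    # Attractive component: follow the direction the user is steering toward.
    force = ATTRACT_GAIN * np.asarray(user_direction, dtype=float)

    # Repulsive component: each obstacle inside the influence radius pushes
    # back, with a magnitude that grows as the wheelchair gets closer.
    for obs in obstacles:
        offset = np.asarray(position, dtype=float) - np.asarray(obs, dtype=float)
        dist = np.linalg.norm(offset)
        if 1e-6 < dist < INFLUENCE_RADIUS:
            force += REPULSE_GAIN * (1.0 / dist - 1.0 / INFLUENCE_RADIUS) * offset / dist**2

    # Normalise so the result is a steering direction, not a speed command.
    norm = np.linalg.norm(force)
    return force / norm if norm > 1e-6 else np.zeros_like(force)

# Example: the user pushes the joystick straight ahead while an obstacle sits
# slightly off to one side; the resulting direction veers gently away from it.
print(potential_field_step([0.0, 0.0], [1.0, 0.0], [[1.0, -0.3]]))
```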

At a glance

  • The IoU (intersection over union) metric is used to label detections of large objects, with detections below a 0.50 IoU threshold considered incorrect; a minimal IoU computation is sketched after this list.

  • The IoU becomes less relevant for small objects, as even a low IoU value can correspond to a correct detection when the object is small and far away.

  • The distribution of IoU values for detected doors shows that 89% of them are above the 0.5 threshold, indicating a high number of correct detections.

  • The low recall rate for handles is likely due to the lack of diversity in the dataset, which will be addressed in future work by developing a data collection system and using data augmentation.

  • Data augmentation applies transformations to the base training data to expose the model to a wider range of semantic variation; an illustrative augmentation sketch follows this list.

  • YOLOv5 provides advanced tools for automatic data augmentation, which the authors plan to use to increase the diversity of their dataset.

  • When tested on a desktop computer with an Nvidia GTX 980 GPU, the camera’s image stream can be processed in real time at 30 FPS.

  • When tested on a Jetson TX2 GPU, the framerate drops to 5.8 FPS, but this is still suitable for the use case in which the wheelchair moves at most 1 m per second.

  • Resizing the images on the Jetson TX2 speeds up inference but degrades detection performance, especially for far and small objects, because downsampling removes fine detail.
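
For reference, the first bullets rely on the standard IoU computation. The minimal sketch below assumes axis-aligned boxes in (x1, y1, x2, y2) pixel coordinates, a format chosen purely for illustration.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    # Union = sum of the two areas minus the intersection counted twice.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection of a large object counts as correct when IoU >= 0.50.
print(iou((10, 10, 110, 210), (20, 15, 120, 215)) >= 0.50)  # True for this overlap
```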
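
The augmentation bullet can be pictured with a generic transform pipeline like the one below. This is not YOLOv5’s built-in augmentation but a torchvision-based sketch with illustrative parameter values.

```python
from torchvision import transforms

# Generic illustration of data augmentation: each epoch, the base images are
# perturbed so the detector sees more photometric and geometric variation
# than the raw dataset contains. Parameter values are illustrative only.
augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.05),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=5),
    transforms.RandomResizedCrop(size=640, scale=(0.8, 1.0)),
    transforms.ToTensor(),
])

# Usage on a PIL image from the training set:
#   augmented = augment(pil_image)
# For object detection, geometric transforms must also be applied to the
# bounding boxes; YOLOv5's own augmentation tools handle this internally.
```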

Looking for a powered wheelchair controller that’s both accurate and safe? Look no further! This innovative device uses deep learning neural networks (DLNNs) to process sEMG signals of hand gestures and produce precise movements. And the results speak for themselves: the Bayesian Regularization DLNN is the clear winner, with an average accuracy of 98.4% across all subjects and hidden-layer counts. DLNNs with 12, 13, and 14 hidden layers were especially accurate, and the 14-hidden-layer Bayesian Regularization DLNN achieved an impressive 98.6% average training accuracy. Say goodbye to imprecise movements and hello to smooth, accurate control with this groundbreaking wheelchair controller.
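
To make the idea concrete, the sketch below shows one way a gesture classifier over sEMG features could be structured. The feature count, gesture set, layer sizes, and the use of simple L2 weight decay in place of Bayesian Regularization training are all assumptions for illustration, not the authors’ implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch of a gesture classifier over sEMG feature vectors, assuming
# per-channel features (e.g. mean absolute value or RMS) are extracted before
# classification. Everything below is illustrative.

N_FEATURES = 8   # e.g. one time-domain feature per sEMG channel (assumed)
N_GESTURES = 5   # e.g. forward, backward, left, right, stop (assumed)

class GestureMLP(nn.Module):
    def __init__(self, hidden_layers=14, hidden_size=20):
        super().__init__()
        layers, width = [], N_FEATURES
        for _ in range(hidden_layers):
            layers += [nn.Linear(width, hidden_size), nn.Tanh()]
            width = hidden_size
        layers.append(nn.Linear(width, N_GESTURES))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, features, labels):
    """One supervised update on a batch of labelled sEMG feature vectors."""
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(features), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

model = GestureMLP()
# weight_decay adds an L2 penalty, a crude stand-in for Bayesian Regularization.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

# Example with random tensors in place of real recordings.
features = torch.randn(32, N_FEATURES)
labels = torch.randint(0, N_GESTURES, (32,))
print(train_step(model, optimizer, features, labels))
```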


Revolutionizing Mobility: Conclusion

The development of a powered wheelchair controller based on human hand gestures recognized by a deep learning neural network has shown immense promise. The Bayesian Regularization architecture was found to be the most accurate, with an average accuracy of 98.4% across all subjects and hidden layers. This breakthrough technology has the potential to significantly improve the lives of individuals with disabilities who require powered wheelchair control, and its applications can be extended to other brain-machine devices such as prosthetic hands. Further research is required to test the effectiveness of the wheelchair controller and explore the use of different user inputs, as well as the addition of other sensors to aid in data collection and analysis. Nonetheless, this research highlights the potential of combining machine learning techniques with biomedical signals to create effective assistive devices for individuals with disabilities.