Abstract
Currently, an increasing number of patients are treated at home, mainly in countries and regions such as Japan, the USA and Europe. In addition, the number of elderly people has increased significantly in the last 15 years; these people are often treated at home and at times enter a critical situation in which they require help (e.g. after an accident, or when becoming depressed). Advances in ubiquitous computing and the Internet of Things (IoT) have provided efficient and cheap equipment that includes wireless communication and cameras, such as smartphones or embedded devices like the Raspberry Pi. Embedded computing enables the deployment of Health Smart Homes (HSH) that can enhance in-home medical treatment. The use of cameras and image processing on IoT devices has not yet been fully explored in the literature, especially in the context of HSH. Although images have been widely exploited to address issues such as safety and surveillance in the house, they have rarely been employed to assist patients and/or elderly people as part of home-care systems. In our view, these images can help nurses or caregivers to assist patients in need of timely help, and this application can be implemented easily and cheaply with the aid of IoT technologies. This article discusses the use of patient images and emotion detection to assist patients and elderly people within an in-home healthcare context. We also review the existing literature and show that most studies in this area do not use images for the purpose of monitoring patients. In addition, few studies take into account the patient's emotional state, which is crucial for recovery from a disease. Finally, we outline our prototype, which runs on multiple computing platforms, and present results that demonstrate the feasibility of our approach.
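To make the camera-based HSH idea concrete, the following is a minimal sketch (not the authors' implementation) of the kind of lightweight step a Raspberry Pi-class camera node could run before any identification or emotion analysis: grabbing a frame and detecting faces with OpenCV's stock Haar cascade. All names and paths here are illustrative assumptions.

```python
# Hypothetical sketch: single-frame face detection on an embedded camera node.
import cv2

# Load the frontal-face Haar cascade shipped with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

# Grab one frame from the first attached camera (e.g. a Pi camera exposed via V4L2).
camera = cv2.VideoCapture(0)
ok, frame = camera.read()
camera.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Each detected face region could then be cropped and passed to the
    # identification and emotion models; here we only report the count.
    print(f"detected {len(faces)} face(s)")
```

A cheap detector like this keeps the per-frame cost low on resource-constrained devices, leaving the heavier identification and emotion models to run only on the cropped face regions.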
6. Final remarks
This paper discussed the use of images and emotions to support healthcare in smart home environments in an automatic way through an IoT infrastructure (i.e. without the need for human intervention to detect new features). We used images to identify each person, ensuring that the right person is tracked in the house and receives the appropriate kind of treatment. Experiments on this front achieved an accuracy of 99.75% when identifying each person. Account was taken of the fact that people in the house may have their own individual healthcare scheme or undergo specialist treatment. We also drew on the images to monitor each person's emotional state, relying on the same IoT technology. The accuracy in this case was around 80%, and good convergence was obtained through our proposed Ensemble model. According to psychologists, emotions play a crucial role while a patient is trying to recover from a wide range of diseases; hence, we believe this is an important feature to measure while monitoring patients. All these IoT developments and experiments were conducted to show the suitability of our approach, which was demonstrated by deploying a prototype of our system on resource-constrained devices. As a result, we adopted an approach that took all these features into consideration and conducted experiments to validate the whole idea in terms of performance, accuracy and statistical analysis.

As future research in this area, we plan to exploit evolutionary mechanisms to ensure that the facial-expression classification algorithms can improve while taking into account advances in the treatment of certain ailments such as Parkinson's disease. Parkinson's patients do not have full control of their facial expressions, particularly as the disease progresses. This will again be a good challenge for our model, as it will require processing cycles and communication together with evolutionary approaches in a heterogeneous environment. Furthermore, it is worth mentioning that we plan to improve our emotion recognition system by taking into account different facial angles, gestures and postures.
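As a purely illustrative sketch of the ensemble idea mentioned above (the paper's actual models, features and data are not reproduced here), a hard-voting combination of a few lightweight classifiers over facial-expression feature vectors could look as follows; the feature vectors and labels below are random placeholders.

```python
# Hypothetical sketch: majority-voting ensemble for emotion classification.
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))      # placeholder facial-expression feature vectors
y = rng.integers(0, 6, size=300)    # placeholder labels for six emotion classes

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("svm", SVC()),
        ("rf", RandomForestClassifier(n_estimators=100)),
    ],
    voting="hard",                  # majority vote across the three base models
)

# Cross-validated accuracy gives a rough indication of how the ensemble behaves.
scores = cross_val_score(ensemble, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")
```

Combining several weak, cheap classifiers by voting is one common way to gain robustness on constrained devices without training a single large model; the specific base learners chosen here are assumptions for illustration only.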