Falls are a serious public health issue for the elderly population because they cause many injuries and can even lead to death. To prevent the tragedies that arise from falls, quick responses are essential when falls happen. It has been shown that the sooner a fall is reported, the higher the probability that the patient will recover from the accident. At the same time, the fear of falling prevents elderly people and patients from living alone at home, which increases labour costs in terms of the presence of nurses or support staff. Based on these facts, demand for intelligent monitoring systems that can detect falls automatically has boomed in the healthcare industry. An intelligent surveillance system capable of detecting falls accurately can not only improve the quality of life for the elderly, but also save on manual labour.
Fall Detection Strategies
There are many ways to detect falls. From a broad perspective, fall detection systems can be classified into environmentally smart systems and wearable devices. Environmental systems use externally deployed sensors such as cameras, floor sensors, infrared sensors, microphones, and/or pressure sensors. Wearable devices use sensors placed on the subject's body, such as accelerometers and gyroscopes; these devices range from fitness wearables to mobile phones.
VISION-BASED FALL DETECTORS
Cameras are increasingly included in home assistive and care systems, as they possess many advantages over other sensor-based systems. Cameras can be used to detect multiple actions simultaneously with less intrusion. Vision-based methods can be divided into three categories:
- fall detection using a single RGB camera
- 3D-based methods using multiple cameras
- 3D-based methods using depth cameras
Fall Detection Using a Single RGB Camera
Fall detection using a single RGB camera has been widely studied because such systems are easy to set up and inexpensive. Shape-related features, inactivity detection, and human motion analysis are the most commonly used cues for detecting falls.
Shape-related features are widely used for fall detection; many approaches rely on the width-to-height ratio of the person.
Mirmahboub et al. use a straightforward background-separation method to obtain the silhouette of the person, and several features are then extracted from the silhouette area. Finally, an SVM classifier performs the classification based on these silhouette-related features. Rougier et al. use a shape-matching technique to track the silhouette of the person in the target video clip. The shape deformation is then quantified from these silhouettes, and classification is based on the shape deformation using a Gaussian mixture model. In another approach, an adaptive background Gaussian mixture model is used to extract the moving object, and an ellipse is fitted to the moving object for body modeling. Several features are then extracted from the ellipse model, and two Hidden Markov Models are used to classify falls and normal activities.
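The silhouette features above can be illustrated with a minimal sketch. The aspect-ratio threshold, the toy masks, and the moment-based ellipse orientation below are illustrative assumptions, not the exact features of the cited papers:

```python
import numpy as np

def silhouette_features(mask: np.ndarray) -> dict:
    """Simple shape features from a binary silhouette mask (sketch only)."""
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    # Central second-order moments give the orientation of a fitted ellipse.
    cy, cx = ys.mean(), xs.mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # ellipse angle (radians)
    return {"aspect_ratio": w / h, "orientation": theta}

def looks_like_fall(mask: np.ndarray, ratio_threshold: float = 1.2) -> bool:
    """A lying person tends to be wider than tall (ratio above threshold)."""
    return bool(silhouette_features(mask)["aspect_ratio"] > ratio_threshold)

# Toy masks: an upright "person" (tall blob) and a fallen one (wide blob).
upright = np.zeros((40, 40), dtype=np.uint8); upright[5:35, 18:23] = 1
fallen  = np.zeros((40, 40), dtype=np.uint8); fallen[18:23, 5:35] = 1
print(looks_like_fall(upright), looks_like_fall(fallen))  # prints: False True
```

In practice the silhouette would come from background subtraction on real video, and the threshold would be tuned on labeled data rather than fixed by hand.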
Arie et al. present a novel method to distinguish different postures, including standing, sitting, bending/squatting, lying on the side, and lying toward the camera. The proposed method extracts the projection histograms of the segmented human body silhouette as the main feature vector. Posture classification is performed by a k-Nearest Neighbor algorithm and an evidence accumulation technique. The motion-pattern differences between falls and other daily activities, such as walking, sitting down, and drinking, are significant, and much of the research is based on motion analysis.
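A minimal sketch of the projection-histogram-plus-k-NN idea follows. The feature construction, toy training blobs, and posture labels are simplified assumptions, not the descriptor of the cited work:

```python
import numpy as np

def projection_histograms(mask):
    """Concatenated, normalized row and column projections of a binary
    silhouette (a simplified stand-in for the cited feature vector)."""
    rows = mask.sum(axis=1).astype(float)
    cols = mask.sum(axis=0).astype(float)
    v = np.concatenate([rows, cols])
    return v / (v.sum() + 1e-9)

def knn_classify(query, feats, labels, k=3):
    """Majority vote among the k nearest training vectors (Euclidean)."""
    order = np.argsort(np.linalg.norm(feats - query, axis=1))[:k]
    votes = [labels[i] for i in order]
    return max(set(votes), key=votes.count)

def blob(y0, y1, x0, x1, size=32):
    m = np.zeros((size, size)); m[y0:y1, x0:x1] = 1
    return m

# Toy training set: tall blobs = "standing", wide blobs = "lying".
train = [(blob(4, 28, 13 + s, 18 + s), "standing") for s in (-2, 0, 2)] \
      + [(blob(13 + s, 18 + s, 4, 28), "lying") for s in (-2, 0, 2)]
feats = np.stack([projection_histograms(m) for m, _ in train])
labels = [l for _, l in train]

result = knn_classify(projection_histograms(blob(14, 19, 6, 26)), feats, labels)
print(result)  # prints: lying
```

The evidence accumulation step of the original method (aggregating per-frame votes over time) is omitted here for brevity.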
Liao et al. use human motion analysis and human silhouette shape variations to detect both slip-only and fall events. The motion measure is obtained by analyzing the energy of the active motion area in an integrated spatiotemporal energy map.
Homa et al. discuss applying the Integrated Time Motion Image to fall detection. The Integrated Time Motion Image is a form of spatiotemporal database that records motion and the time of its occurrence. Given a video clip, the integrated time motion images are calculated to represent the motion pattern in the video, and PCA is then employed for feature reduction. Finally, a pre-trained MLP neural network is adopted to precisely distinguish motions and determine whether a fall event occurred. Zhang et al. describe experiments with three computer vision methods for fall detection in a simulated home environment. The first method makes a decision based on a single frame, simply using the vertical position of the image centroid of the person. The second method makes a threshold-based decision based on the last few frames, considering the number of frames during which the person has been falling, the magnitude of the fall, and the maximum velocity of the fall. The third method is a statistical procedure that makes a decision based on the same features as the second method, but uses probabilistic models rather than thresholds. Caroline et al. extract the 3D head trajectory using a single calibrated camera; with the help of this trajectory, velocity characteristics are calculated for fall detection.
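The second of Zhang et al.'s methods can be sketched as a threshold rule over a short window of centroid heights. The specific threshold values and the toy height traces below are illustrative assumptions:

```python
def fall_by_centroid(heights, fps=30, min_drop=0.5, min_speed=1.0, min_frames=5):
    """Threshold-based check over a window of vertical centroid positions
    (metres above floor): flag a fall when the centroid drops far enough,
    fast enough, over enough consecutive frames."""
    drops = [heights[i] - heights[i + 1] for i in range(len(heights) - 1)]
    falling_frames = sum(d > 0 for d in drops)       # frames spent descending
    peak_speed = max(drops) * fps                    # peak downward speed (m/s)
    magnitude = heights[0] - min(heights)            # total drop (m)
    return (falling_frames >= min_frames
            and magnitude >= min_drop
            and peak_speed >= min_speed)

walk = [1.0, 0.99, 1.0, 0.98, 1.0, 0.99, 1.0]          # small jitter
fall = [1.0, 0.9, 0.75, 0.55, 0.4, 0.3, 0.25, 0.25]    # rapid drop to floor
print(fall_by_centroid(walk), fall_by_centroid(fall))  # prints: False True
```

The third method in the paper replaces these hard thresholds with probabilistic models over the same features, which is exactly the threshold-versus-data-driven trade-off discussed later in this article.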
Feng et al. propose a novel vision-based fall detection method for monitoring elderly people in a home-care environment. The foreground human silhouette is extracted via background modeling and tracked throughout the video sequence. The human body is represented with ellipse fitting, and the silhouette motion is modeled by an integrated normalized motion energy image computed over a short-term video sequence. The shape deformation quantified from the fitted silhouettes is then used as the feature to classify different postures. Inactivity detection has also been adopted to detect falls. Ceiling-mounted, wide-angle cameras with vertically oriented optical axes have been used to reduce the influence of occlusion.
Nait-Charif et al. use learned models of spatial context, in conjunction with a tracker, to achieve these goals. Nater et al. present an approach for unusual-event detection based on a tree of trackers, each specialized for a particular type of activity. Falls are detected when none of the specialized trackers for normal activities can explain the observation.
Charfi et al. introduce a spatiotemporal human fall descriptor, named STHF, that uses several combinations of transformations of geometrical features; the well-known SVM classifier is applied to the STHF descriptor to classify falls and normal activities.
3D-based Methods Using Multiple RGB Cameras
Another category of vision-based fall detection methods comprises 3D-based methods using multiple RGB cameras. Calibrated multi-camera systems allow 3D reconstruction of the object but require a careful and time-consuming calibration process. Auvinet et al. use a network of calibrated cameras to reconstruct the 3D shape of the person. Fall events are detected by analyzing the volume distribution along the vertical axis, and an alarm is triggered when the major part of this distribution is abnormally near the ground. In a later work, the fall alarm is triggered when the major part of this distribution remains abnormally near the ground for a predefined period of time. Anderson et al. employ multiple cameras and a hierarchy of fuzzy logic to detect falls. Their method introduces the voxel person, a linguistic summarization of temporal fuzzy inference curves, to represent the states of a three-dimensional object. Overall, using multiple cameras offers the advantage of allowing 3D reconstruction and extraction of 3D features for fall detection.
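The vertical volume-distribution rule can be sketched on a voxel occupancy grid. The grid layout, voxel size, cutoff height, and alarm thresholds below are illustrative assumptions, not the parameters of the cited system:

```python
import numpy as np

def near_ground_fraction(voxels, voxel_height=0.05, cutoff=0.4):
    """voxels: 3-D boolean occupancy grid indexed (z, y, x), z = 0 at the
    floor. Returns the fraction of occupied voxels below `cutoff` metres."""
    z_idx = np.nonzero(voxels)[0]
    heights = z_idx * voxel_height
    return float((heights < cutoff).mean())

def volume_alarm(history, frac_threshold=0.9, min_frames=15):
    """Trigger when the major part of the volume stays near the floor for a
    predefined number of consecutive frames."""
    recent = history[-min_frames:]
    return len(recent) == min_frames and all(f > frac_threshold for f in recent)

# Toy scene: a body lying on the floor occupies only the lowest 0.3 m.
grid = np.zeros((40, 20, 20), dtype=bool)
grid[0:6, 5:15, 5:15] = True
frac = near_ground_fraction(grid)
print(frac, volume_alarm([frac] * 15))  # prints: 1.0 True
```

A real system reconstructs the occupancy grid each frame from the calibrated camera network; here the grid is fabricated directly to keep the sketch self-contained.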
3D-based Methods Using Depth Cameras
The earliest depth camera used for fall detection was the Time-Of-Flight 3D camera. Since Time-Of-Flight 3D cameras are costly, only a few researchers adopted them for fall detection. This situation has changed since the advent of affordable depth-sensing technology such as Microsoft Kinect. With depth cameras, calculating the distance from the top of the person to the floor is straightforward, and this distance can then be used as a feature to detect falls. Diraco et al. use a wall-mounted Time-Of-Flight 3D camera to observe the scene. The system identifies a fall event when the human centroid gets closer than a specified threshold to the floor and the person does not move for a specified number of seconds while near the floor. In a related approach, Leone et al. employ a 3D range camera. A fall event is detected based on two rules:
(1) the distance of the person’s centre of mass from the floor plane decreases below a threshold within a time window of about 900 ms;
(2) the person’s motion remains negligible within a time window of about 4 s.
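These two rules can be sketched directly on a stream of (time, centre-of-mass height) samples. The time windows follow the text; the height threshold, motion tolerance, and toy traces are illustrative assumptions:

```python
def depth_fall_rules(samples, dist_threshold=0.4, drop_window=0.9,
                     still_window=4.0, motion_eps=0.05):
    """samples: list of (time_s, com_height_m) pairs, floor at height 0.
    Rule 1: the centre of mass drops below dist_threshold within
            drop_window seconds (~900 ms) of the window start.
    Rule 2: the height then varies by less than motion_eps for
            still_window seconds (~4 s)."""
    below = [(t, h) for t, h in samples if h < dist_threshold]
    if not below:
        return False
    t_hit = below[0][0]
    if t_hit - samples[0][0] > drop_window:      # Rule 1 violated: too slow
        return False
    after = [h for t, h in samples if t_hit <= t <= t_hit + still_window]
    return bool(after) and max(after) - min(after) < motion_eps

fall = [(0.0, 1.0), (0.3, 0.6), (0.6, 0.3)] + \
       [(0.6 + 0.5 * i, 0.3) for i in range(1, 9)]   # still until t = 4.6 s
sit  = [(0.0, 1.0), (0.3, 0.7), (0.6, 0.6), (1.0, 0.6), (4.0, 0.6)]
print(depth_fall_rules(fall), depth_fall_rules(sit))  # prints: True False
```

Note how the second rule suppresses false alarms from sitting or crouching: the centre of mass never reaches floor level, so the first rule already rejects the `sit` trace.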
The following sections present the basic structure of a fall detection (FD) system and the different types of sensors (external or wearable) that are used. The figure shows a flow chart of the sensors that will be discussed here.
The following figure shows falling factors:
Fall Detection Using Threshold-Based and Data-Driven Algorithms
Threshold-based and data-driven algorithms are the two main methods that have been used for fall detection. Threshold-based approaches are usually applied to data coming from individual sensors, such as accelerometers, gyroscopes, and electromyography. Their decisions are made by comparing measured values from the sensors of interest to empirically established threshold values. Data-driven approaches are more suitable for sensor fusion, as they can learn non-trivial, non-linear relationships from the data of all involved sensors. In terms of the algorithms used to analyze data collected by wearable devices, the literature demonstrates a significant shift toward machine-learning-based approaches in comparison with the work conducted between 1998 and 2012. Among papers published between 1998 and 2012, threshold-based methods account for 71%, while only 4% applied machine-learning-based methods. We believe this shift happened for two reasons. First, the rapid development of affordable sensors and the rise of the Internet of Things made it possible to deploy multiple sensors in different applications more easily; as mentioned above, the non-linear fusion of multiple sensors can be modeled very well by machine-learning methods. Second, with the breakthrough of deep learning, threshold-based methods have become even less preferable. Moreover, different types of machine-learning approaches have been explored, namely Bayesian networks, rule-based systems, nearest-neighbor techniques, and neural networks. These data-driven methods show better accuracy and are more robust than threshold-based methods. Notably, data-driven methods are more resource-hungry than threshold-based methods. With the continued advancement of technology, however, this is not a serious concern, and we foresee that more effort will be invested in this direction.
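A classic threshold-based rule for a wearable accelerometer looks for a free-fall dip in acceleration magnitude followed by an impact spike. The sketch below illustrates the idea; the specific thresholds (0.4 g and 2.5 g) and toy samples are illustrative assumptions that would normally be tuned empirically per device and wearing position:

```python
import math

def accel_magnitude(sample):
    """Euclidean norm of a 3-axis accelerometer sample, in g."""
    return math.sqrt(sum(a * a for a in sample))

def threshold_fall(window, lower=0.4, upper=2.5):
    """Two-threshold rule: a free-fall dip below `lower` g followed,
    later in the window, by an impact spike above `upper` g."""
    mags = [accel_magnitude(s) for s in window]
    try:
        dip = next(i for i, m in enumerate(mags) if m < lower)
    except StopIteration:
        return False                      # no free-fall phase observed
    return any(m > upper for m in mags[dip:])

walking = [(0.1, 0.2, 1.0), (0.0, 0.3, 1.1), (0.2, 0.1, 0.9)]
falling = [(0.0, 0.1, 1.0), (0.1, 0.0, 0.2), (1.5, 1.2, 2.4)]
print(threshold_fall(walking), threshold_fall(falling))  # prints: False True
```

A data-driven alternative would feed the same windows (or features derived from them, possibly fused with gyroscope data) to a trained classifier instead of hand-set thresholds, which is exactly the shift described above.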

Fall Detection Using Deep Learning
Traditional machine-learning methods determine mapping functions between handcrafted features extracted from raw training data and the respective output labels. The extraction of handcrafted features requires domain expertise and is, therefore, limited to the knowledge of the domain experts. Despite this limitation, the literature shows that traditional machine learning, based on support vector machines, hidden Markov models, and decision trees, is still very active in fall detection using individual wearable non-visual sensors (e.g., accelerometers). For visual sensors, the trend has been moving toward deep learning with convolutional neural networks. Deep learning is a sophisticated learning framework that, besides the mapping function (as mentioned above and used in traditional machine learning), also learns, in a hierarchical fashion, the features that characterize the classes of interest (e.g., fall and no fall). This approach was inspired by the visual system of the mammalian brain. In computer vision applications, which take images or videos as input, deep learning has established itself as the state of the art. Accordingly, similar to other computer vision applications, fall detection approaches that rely on vision data have been shifting from traditional machine learning to deep learning in recent years.
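The "handcrafted features" that traditional pipelines feed to an SVM or decision tree can be made concrete with a small sketch. The feature set and the toy window below are illustrative assumptions (deep learning would instead learn such features directly from the raw signal):

```python
import statistics

def handcrafted_features(window):
    """Hand-designed statistics over a window of accelerometer
    magnitudes (g), of the kind a domain expert might choose."""
    return {
        "mean": statistics.fmean(window),
        "std": statistics.pstdev(window),
        "min": min(window),                       # captures the free-fall dip
        "max": max(window),                       # captures the impact spike
        "range": max(window) - min(window),
    }

fall_window = [1.0, 0.8, 0.3, 0.1, 2.8, 1.4, 1.0]
f = handcrafted_features(fall_window)
print(round(f["range"], 2), round(f["min"], 2))  # prints: 2.7 0.1
```

Each such feature vector, paired with its fall/no-fall label, would then be passed to the classical learner; the expert's choice of statistics is precisely the knowledge bottleneck the paragraph above describes.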

Open Challenges
The rarity of data on real falls: there is no convincing public data set that could provide a gold standard. Many data sets of falls simulated with individual sensors are available, but it is debatable whether models trained on data collected from young and healthy subjects can be applied to elderly people in real-life scenarios. To the best of our knowledge, only Liu et al. (2014) used a data set with nine real falls together with 445 simulated ones. Data sets with multiple sensors are even scarcer. There is, therefore, an urgent need to create a benchmark data set with data coming from multiple sensors.
1. Detection in real time: The attempts we have seen all rely on offline methods to detect falls. While this is an important step, it is time for research to focus more on real-time systems that can be applied in the real world.
2. Security and privacy: We have seen little attention to the security and privacy concerns associated with fall detection approaches. Security and privacy are therefore another topic that, in our opinion, must be addressed in cohesion with fall detection methods.
3. Platform for sensor fusion: This is still a nascent topic with lots of potential. Studies to date have treated this subject only minimally, as they mostly focused on the analytics aspect of the problem. In order to bring solutions closer to the market, more holistic studies are needed to develop full information systems that can handle the management and transmission of data in an efficient, effective, and secure way.
4. Limitation of location: Some sensors, such as visual ones, have limited capability because they are fixed and static. It is necessary to develop fall detection systems that can be applied to both controlled (indoor) and uncontrolled (outdoor) environments.
5. Scalability and flexibility: With the increasing number of affordable sensors, there is a pressing need to study the scalability of fall detection systems, especially when inhomogeneous sensors are considered, and there is a growing requirement for scalable fall detection approaches that do not lose robustness or security. When considering cloud-based trends, fall detection modules, such as data transmission, processing, applications, and services, should be configurable and scalable in order to accommodate the growth of commercial demands. Cloud-based systems enable greater scalability of health monitoring systems at different levels, because the need for hardware and software resources changes over time. In cloud-based systems, sensors and services can be added or removed with little effort on the architecture.