Proceedings of the 1st International Conference on Machine Learning and Data Engineering, Sydney, Australia (2017)

ISBN: 978-0-6480147-3-7

Abstract: The geometry of a given space characterizes the proximity between data and plays a key role in machine learning. The traditional approach of simply and naively treating data spaces as "flat" Euclidean spaces may not deliver the desired results in a variety of learning tasks. In this talk, I would like to report some of my recent research on clustering tasks over manifolds and to give an introduction to state-of-the-art learning on manifolds. The focus will be on low-rank representation (LRR) models on the Grassmann manifold and the manifold of curves, used in learning tasks from computer vision and time series.

Abstract: This paper presents a novel feature vector generation technique for malware data which retains high classification accuracy over an extended time period. The proposed approach is to combine features and accumulate them over time intervals. Experimental results show that the proposed method maintains near-constant classification accuracy, with a standard deviation of 0.92, over the extended time period. These results strongly support the hypothesis that it is possible to develop a classification strategy that will work well into the future.

Abstract: Constrained optimization benchmarks can successfully formulate various practical engineering designs. Accordingly, over the last decades various algorithms have been applied to optimize engineering designs formulated as constrained optimization benchmarks. The value of the objective-function solution obtained by any algorithm is meaningful only if the constraint solutions fall within the feasible region. It is therefore very important to make sure the solutions are feasible, that is, the constraints are satisfied, before declaring the objective solution obtained by any algorithm. In this work, we use a population-based swarm intelligence algorithm, called simplified swarm optimization (SSO), to optimize constrained engineering design benchmarks based on feasible solutions. To evaluate optimization performance, SSO is run on three well-known constrained engineering design benchmarks, including two different types of minimization constrained benchmarks and one engineering constrained and mechanical design benchmark. The computational results compare favourably with those obtained using existing algorithms in the literature. The comparison demonstrates that the proposed SSO optimizes the benchmarks well with feasible solutions compared to the other considered algorithms.

Abstract: In this paper, we describe an automated real-time system that estimates age and gender by utilizing a set of facial image sequences from a video camera. The age and gender estimation system consists of four steps: i) detection and extraction of the facial region from input video; ii) selection of the frontal face images from the extracted facial regions using head pose estimation; iii) duplicate face detection and removal by tracking the faces; and iv) age and gender estimation using statistical facial features. Here, LBP features with AdaBoost classifiers are used to detect the face region in a video frame, and the frontal face images are selected using a 3D pose estimation method. In addition, a particle filter-based tracking framework is employed to remove duplicated faces and to improve the accuracy of people counting, and Gabor-LBP features are used to estimate age and gender with linear SVM and AdaBoost classifiers. In experiments, a large number of face datasets are used to train and evaluate the proposed method, and high performance is achieved in terms of age and gender estimation: 72.53% for age and 98.90% for gender.
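
As a rough illustration of this four-stage pipeline, the sketch below strings the stages together with commonly available building blocks; it is an assumption-laden stand-in, not the authors' implementation: OpenCV's bundled Haar frontal-face cascade substitutes for their LBP+AdaBoost detector, plain LBP histograms substitute for Gabor-LBP features, and the pose-estimation and particle-filter stages are only stubbed out.

```python
# Hypothetical sketch of the four-stage age/gender pipeline (not the authors' code).
# Assumes OpenCV for detection, scikit-image for LBP features, scikit-learn for SVM.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import LinearSVC

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")  # paper uses LBP+AdaBoost

def detect_faces(frame):
    """Stage i: detect and extract facial regions from a video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return [gray[y:y + h, x:x + w] for (x, y, w, h) in
            face_cascade.detectMultiScale(gray, 1.1, 5, minSize=(60, 60))]

def lbp_features(face, size=(64, 64)):
    """Stage iv (simplified): LBP histogram features; the paper uses Gabor-LBP."""
    face = cv2.resize(face, size)
    lbp = local_binary_pattern(face, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return hist

def train(face_images, gender_labels):
    """Train a linear SVM for gender; an analogous model would handle age."""
    X = np.array([lbp_features(f) for f in face_images])
    return LinearSVC().fit(X, gender_labels)

def run(video_path, gender_clf):
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for face in detect_faces(frame):          # stage i
            # Stages ii-iii (pose-based frontal selection, particle-filter
            # de-duplication) are omitted in this sketch.
            print("gender:", gender_clf.predict([lbp_features(face)])[0])
    cap.release()
```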

Abstract: In the field of data mining, the social network is one of the complex systems that pose significant challenges, and time series anomaly detection is one of its critical applications. Recent developments in the quantitative analysis of social networks, based largely on graph theory, have been successfully applied to various types of time series data. In this survey paper, we review studies that use graph theory to investigate and analyze time series social network data, including different efficient and scalable experimental modalities. We describe applications, challenging issues and existing methods for time series anomaly detection.

Abstract: Emoticons are used in textual communication such as web mail and internet forums. Many existing studies dealing with the classification or extraction of emoticons regard an emoticon as a kind of character string and focus on which characters constitute the emoticon or how they are arranged. However, emoticons are used to express human facial expressions, and the characters constituting them represent facial parts such as eyes, nose and mouth. Such characters can be identified as different facial parts depending on their positions, and facial expressions are thought to be represented by combinations of their shape features. In this study, we classified the facial expressions of emoticons by focusing on those shape features. To handle the shape features, we converted the emoticons, which are text data, into image data. Emoticons are mainly formed by the line segments of characters and use only black and white, so colours and shades were not considered as features for classifying facial expressions; in the experiments, we therefore used image features that do not require colour information. In a comparative experiment against the 1-nearest neighbor method using character features, the facial expression recognition rate was 52% when the Histogram of Oriented Gradients (HOG) was used as the image feature, an improvement of 2% over the baseline.
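
A minimal sketch of the text-to-image HOG pipeline described above, assuming Pillow for rendering, scikit-image for HOG and scikit-learn for the 1-nearest-neighbour classifier; the emoticon strings and labels are toy examples, not the paper's dataset.

```python
# Minimal sketch of the emoticon-to-image HOG + 1-NN pipeline (assumptions:
# Pillow for rendering, scikit-image HOG, scikit-learn 1-nearest-neighbour).
import numpy as np
from PIL import Image, ImageDraw
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier

def emoticon_to_image(text, size=(96, 32)):
    """Render a text emoticon as a black-on-white image (no colour or shading)."""
    img = Image.new("L", size, color=255)
    ImageDraw.Draw(img).text((2, 8), text, fill=0)
    return np.asarray(img)

def hog_feature(text):
    return hog(emoticon_to_image(text), orientations=9,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Toy training set: emoticons labelled with a facial expression class.
train = [("(^_^)", "happy"), ("(T_T)", "sad"), ("(>_<)", "angry")]
X = np.array([hog_feature(t) for t, _ in train])
y = [label for _, label in train]

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict([hog_feature("(;_;)")]))    # expected: closest to "sad"
```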

Abstract: In this paper, an efficient real-time iris center detection and tracking system is proposed based on the vector field of image gradients (VFOID). The system uses video frames from cost-effective low-resolution USB cameras under natural lighting conditions instead of high-resolution sensors and infrared illumination. The proposed algorithm takes a multi-stage approach. First, AdaBoost-based cascaded classifiers are used to detect and track the face region and its location in the image. Then, facial feature points in the face region are extracted to locate the rough area of the eyes. Finally, the VFOID is used to accurately detect and track the iris center within the estimated eye areas. The experimental results show high accuracy both on an existing database and in a real-world environment.
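
The gradient-based centre localisation can be illustrated with a brute-force objective of the same flavour: the candidate centre whose displacement vectors best align with the image gradients wins. This is a generic sketch of the idea, not the paper's VFOID algorithm or its optimisations.

```python
# Hypothetical sketch of locating an iris centre from the vector field of
# image gradients: the centre c maximises the mean squared dot product
# between normalised displacement vectors d_i = (x_i - c)/|x_i - c| and
# normalised gradients g_i.  Brute-force version for a small eye patch.
import numpy as np

def iris_center(eye_gray):
    gy, gx = np.gradient(eye_gray.astype(float))
    mag = np.hypot(gx, gy)
    mask = mag > np.percentile(mag, 90)           # keep strong gradients only
    ys, xs = np.nonzero(mask)
    gxn, gyn = gx[mask] / mag[mask], gy[mask] / mag[mask]

    h, w = eye_gray.shape
    best, best_score = (0, 0), -1.0
    for cy in range(h):
        for cx in range(w):
            dx, dy = xs - cx, ys - cy
            norm = np.hypot(dx, dy) + 1e-9
            dots = (dx / norm) * gxn + (dy / norm) * gyn
            score = np.mean(np.maximum(dots, 0.0) ** 2)
            if score > best_score:
                best_score, best = score, (cx, cy)
    return best                                   # (x, y) of estimated centre
```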

Abstract: Accident likelihood is growing because gas and electricity installations are co-located in areas of dense energy consumption, such as traditional markets and underground shopping centers. In order to prevent accident risks related to gas and electricity in these areas, the factors leading to gas leaks or electrical faults should be monitored and predicted. Accident risks related to gas and electricity in areas of dense energy consumption fall into two cases. In the first case, a gas leak occurs due to one of several factors, and fire and explosion then follow from an ignition source. In the other case, a fire starts due to one of several factors, and a gas leak caused by damage to gas facilities then leads to fire spread and explosion. In this study, CFD (Computational Fluid Dynamics) simulations of gas leaks were conducted to analyze the associated accident risks. Because no measured data on gas density were available, gas leak simulations were carried out to analyze the gas density variation characteristics at the gas detector positions. A gas accident prediction technique was then developed, based on the variation characteristics of gas leaks, using regression and statistical methods.

Abstract: In this paper, we describe a spoken term detection (STD) method for spoken document information retrieval. In general, an STD method detects a query term in spoken documents that have been translated from acoustic signal data to text data by an automatic speech recognition system. Because automatic speech recognition systems can output several types of recognition results, various types of transcribed text data are available for STD. In this paper, we focus on syllable-based transcriptions and word-based transcriptions. Because the query term must be detected in transcriptions of a large size, a rapid STD method is required. We have therefore proposed a rapid STD method using a bit-sequence representation and a suffix array. Our method first extracts sub-sequences from the syllable-based transcriptions and then converts them into bit sequences using a hash function. STD candidates are retrieved using these bit sequences. Finally, the distance between the query term and these candidates, both represented as bit sequences, is calculated using Dynamic Programming (DP) matching. At the same time, our method searches for the query term in the word-based transcription using a suffix array. The method then detects the query term by combining these results. In the NTCIR10 workshop, our method achieved the best performance in the STD task; however, we submitted results only under a limited set of conditions. Hence, in this paper, we conduct STD experiments on the NTCIR10 SpokenDoc2 task under the other conditions and evaluate our method. In these experiments, we investigate STD performance as a function of the number and type of speech recognition candidates. Experimental results show that our method significantly improves on STD using either transcription alone. We therefore conclude that our method is useful for STD.
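
The bit-sequence filtering idea can be sketched as follows. This is a deliberately simplified re-creation: syllable n-grams are hashed into fixed-width signatures, candidates are pre-filtered by Hamming distance, and survivors are re-scored with a plain DP edit distance; the suffix-array word search and the NTCIR evaluation machinery are omitted.

```python
# Simplified re-creation (not the authors' system) of the bit-sequence idea:
# syllable sub-sequences are hashed into fixed-width bit vectors, candidates
# are pre-filtered by Hamming distance, and survivors are re-scored with a
# DP edit distance against the query.
import hashlib

def bits(syllables, width=64):
    """Hash a syllable sequence into a `width`-bit signature."""
    v = 0
    for s in syllables:
        h = int(hashlib.md5(s.encode()).hexdigest(), 16)
        v |= 1 << (h % width)                     # set one bit per syllable
    return v

def edit_distance(a, b):
    """Plain DP edit distance between two syllable sequences."""
    dp = list(range(len(b) + 1))
    for i, sa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, sb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (sa != sb))
    return dp[-1]

def search(query, transcription, max_hamming=8):
    n = len(query)
    qb = bits(query)
    hits = []
    for i in range(len(transcription) - n + 1):
        window = transcription[i:i + n]
        if bin(qb ^ bits(window)).count("1") <= max_hamming:   # cheap filter
            hits.append((i, edit_distance(query, window)))     # exact re-score
    return sorted(hits, key=lambda t: t[1])

# Example with space-separated "syllables":
doc = "ko n ni chi wa se ka i".split()
print(search("se ka i".split(), doc))
```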

Abstract: This paper proposes a user-friendly panorama image stitching system with a real-time preview. In previous approaches, the user has no choice but to select the source images and predict the stitching results through trial and error. Our contribution is a real-time preview of the stitching results based on a multi-threaded tracking and blending system. It helps the user easily generate a desired panorama image, such as a wide-angle view of a scene or a building. In our system, the object image designated as the initial frame is tracked to generate the real-time preview. We evaluated the accuracy of the proposed tracking method against feature-based methods such as the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), and oriented FAST and rotated BRIEF (ORB). Our experimental results show that our approach can robustly track the object image and provide quality real-time preview images.
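
A rough, single-threaded stand-in for the preview idea is sketched below with OpenCV: ORB features match each incoming frame against the initial frame, a homography is estimated with RANSAC, and the frame is warped onto a preview canvas. The multi-threaded tracker and the blending system of the paper are not reproduced.

```python
# Rough single-threaded sketch (not the authors' multi-threaded system) of a
# live stitching preview: ORB features match each new frame against the
# initial frame, a homography is estimated, and the frame is warped onto a
# preview canvas with a naive overwrite "blend".
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def preview(video_path, canvas_size=(2000, 1200)):
    cap = cv2.VideoCapture(video_path)
    ok, ref = cap.read()
    if not ok:
        raise IOError("cannot read video")
    kp_ref, des_ref = orb.detectAndCompute(ref, None)
    canvas = np.zeros((canvas_size[1], canvas_size[0], 3), np.uint8)
    offset = np.array([[1, 0, 500], [0, 1, 400], [0, 0, 1]], np.float64)
    cv2.warpPerspective(ref, offset, canvas_size, dst=canvas,
                        borderMode=cv2.BORDER_TRANSPARENT)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        kp, des = orb.detectAndCompute(frame, None)
        if des is None:
            continue
        matches = bf.match(des, des_ref)
        if len(matches) < 10:
            continue
        src = np.float32([kp[m.queryIdx].pt for m in matches])
        dst = np.float32([kp_ref[m.trainIdx].pt for m in matches])
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None:
            continue
        cv2.warpPerspective(frame, offset @ H, canvas_size, dst=canvas,
                            borderMode=cv2.BORDER_TRANSPARENT)  # naive blend
        cv2.imshow("preview", canvas)
        if cv2.waitKey(1) == 27:                  # Esc to quit
            break
    cap.release()
```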

Abstract: The Internet of Things (IoT) is an emerging technology that is spreading very fast in today's world. We can see the use of IoT in every sphere of modern life, and it is said that the next era will be the era of IoT. Robotics is also a very popular research area. Robots are like electronic humans, with a processor in place of a mind and memory in place of a brain. Because IoT is an Internet-based internetwork between machines, it can be used to control a robot. A user-friendly interface can be introduced to build interaction between human and robot, through which a robot can easily be controlled from anywhere over an HTTP connection. The idea is to build a teleoperated robot control interface using the Bash language, which has low memory consumption and high processing speed. This paper presents and discusses a prototype that controls a robot using a very cheap IoT core, the Raspberry Pi, with a clock speed of 1.2 GHz, using only HTML, shell scripts, CGI programs and a very lightweight HTTP server.

Abstract: In spite of the ability of Artificial Neural Networks (ANNs) to handle nonlinear relationships in data, ANNs fail to predict with high accuracy in the presence of non-stationarity. Hydrological processes in nature exhibit non-stationarity due to many interrelated physical and other factors, such as chaotic weather conditions. This paper presents the modelling of one such hydrological process: the inflow to the topmost reservoir in the major cascaded reservoir system in Sri Lanka. This daily inflow series has been shown to be nonlinear and non-stationary. The difficulties encountered in modelling the inflow series are therefore addressed through a pre-processing strategy based on the wavelet transform. Among the methods available for dealing with non-stationarity, the wavelet transform was chosen for its ability to determine the frequency content of a signal and to assess the temporal variation of that frequency content. The inflow series is decomposed into several sub-series using the discrete wavelet transform (DWT). The appropriate sub-series resulting from the wavelet transform are then used to model the original inflow using a Nonlinear Autoregressive Artificial Neural Network with Exogenous Inputs (NAR-ANN). The results of the NAR-ANN with the modified inputs are compared with the results of the base model, i.e. the NAR-ANN with raw inputs, as well as a previously fitted cluster-based modular NAR-ANN. The results confirm the superiority of the wavelet-based approach over the other approaches, as it is able to capture useful information at various resolution levels.
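
The pre-processing strategy can be illustrated with PyWavelets and scikit-learn: the series is decomposed with a DWT, each coefficient level is reconstructed as a sub-series, and lagged sub-series values feed a small neural network. The wavelet, level, lag and network choices below are placeholders, and a synthetic series stands in for the Sri Lankan inflow data.

```python
# Illustrative sketch (assumed libraries: PyWavelets, scikit-learn) of the
# wavelet pre-processing idea: decompose the inflow series with a DWT, then
# feed lagged values of the reconstructed sub-series to a neural network.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def wavelet_subseries(series, wavelet="db4", level=3):
    """Return one reconstructed sub-series per DWT coefficient level."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    subs = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        subs.append(pywt.waverec(keep, wavelet)[:len(series)])
    return np.column_stack(subs)                  # shape: (n, level + 1)

def make_supervised(subs, target, lags=3):
    """Use sub-series values at t-1 .. t-lags to predict the target at t."""
    X = np.hstack([subs[lags - k - 1:len(target) - k - 1] for k in range(lags)])
    y = target[lags:]
    return X, y

# Synthetic stand-in for the daily inflow series:
inflow = np.sin(np.linspace(0, 40, 2000)) + 0.3 * np.random.randn(2000)
subs = wavelet_subseries(inflow)
X, y = make_supervised(subs, inflow)
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)
print("in-sample R^2:", round(model.score(X, y), 3))
```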

Abstract: Short term load forecasting plays an important role in the energy industry, as accurate predictions are vital for utilizing available resources to optimize electricity production. The literature reveals that commonly used approaches for such predictions include multiple regression, stochastic time series, exponential smoothing, state-space models, neural network models and fuzzy logic, to name a few. In the recent past, however, functional data analysis has been an emerging trend for analysing time-related data. The electricity demand for a particular day varies with time, and as such the daily load curve can be modelled using functional data analysis. Instead of using the hourly electricity demand of each day, the dimensionality of the dataset is reduced using functional principal component analysis, and the principal component scores of the selected components are used for prediction. Seasonality and the speciality of the day were incorporated using dummy variables, and ARIMA models with regressors were used to predict the principal component scores. Significant variables for the model were identified prior to ARIMA modelling, and residual analysis was performed to validate the fitted models. From the predicted principal component scores, the next day's load curve is evaluated. A moving window has been used to make the predictions in real time and to make the prediction process more efficient. When compared with a commonly used error back-propagated neural network approach, the proposed functional-analysis-based methodology has shown improved predictions of the forecasted electricity demand.
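
The core of this approach can be sketched with a discretised stand-in for functional PCA: each day's 24-hour profile is one observation, PCA reduces it to a few scores, one ARIMA per score series forecasts the next day, and the curve is rebuilt by inverting the PCA. The dummy-variable regressors and moving window of the paper are omitted, and the data below are synthetic.

```python
# Hedged sketch of the load-forecasting idea: treat each day's 24-hour load
# profile as one observation, reduce it with PCA (a discretised stand-in for
# functional PCA), forecast the component scores with ARIMA, and rebuild the
# next day's curve.  statsmodels and scikit-learn are assumed.
import numpy as np
from sklearn.decomposition import PCA
from statsmodels.tsa.arima.model import ARIMA

def forecast_next_day(daily_curves, n_components=3, order=(1, 0, 1)):
    """daily_curves: array of shape (n_days, 24) of hourly demand."""
    pca = PCA(n_components=n_components).fit(daily_curves)
    scores = pca.transform(daily_curves)          # (n_days, n_components)

    next_scores = []
    for k in range(n_components):                 # one ARIMA per score series
        fit = ARIMA(scores[:, k], order=order).fit()
        next_scores.append(fit.forecast(steps=1)[0])

    # Rebuild tomorrow's 24-hour curve from the predicted scores.
    return pca.inverse_transform(np.array(next_scores).reshape(1, -1))[0]

# Synthetic example: 200 days of a noisy daily load shape.
hours = np.arange(24)
days = np.array([100 + 30 * np.sin((hours - 6) / 24 * 2 * np.pi)
                 + 5 * np.random.randn(24) for _ in range(200)])
print(forecast_next_day(days).round(1))
```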

Abstract: This paper presents the methodologies and systematic approaches in a software design and development process that incorporates semantic technology and intelligent knowledge (also known as "elements" in this paper) identification in Data Analytics. A Domain Knowledge (Ontology) approach was applied in which users' requirements and knowledge were identified at the early design stage. Requirement analysis was also conducted at an early stage to understand the data models and knowledge management of existing Data Analytics products in the market, while at the same time finding gaps for continuous improvement in enhancing the quality of the Domain Knowledge. The development and discovery of the Domain Knowledge, as well as the process of extracting useful knowledge for Data Analytics creation, are also discussed in the paper. Unlike a conventional process checklist, the Relevancy factor(s) is taken into consideration during the identification of the closest matching knowledge, which is significant in both the Design and Development stages. The Relevancy factor(s) is the key mechanism for eliminating irrelevant data while retaining important information for Data Analytics in the later stages. Furthermore, the case studies captured in this paper demonstrate the effectiveness of both the research and the development of Domain Knowledge in addressing challenging Data Analytics problems.

Abstract: This paper examines the way a search engine works when a user query is posted. The query processing part involves optimization of the user query so that unnecessary and redundant parts of the query may be removed. Only the keywords are forwarded to the next stage of searching, which decreases the length of the posted query as well as the load on the later stages, improving both throughput and response time. A number of linguistic aspects of optimization (namely stop words and name words that can be deleted without any change to the meaning of the original query) are discussed in this paper. Spell-checking of the user query is also addressed so that it conforms to an English dictionary.
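
A toy version of this pre-processing step is shown below: stop words are dropped and the remaining keywords are spell-corrected against a small dictionary, with difflib standing in for a full spell checker. The stop-word and dictionary lists are illustrative only.

```python
# Toy illustration of the query pre-processing described above: drop stop
# words, then spell-correct the remaining keywords against a small dictionary
# (difflib is used here as a simple stand-in for a full spell checker).
import difflib

STOP_WORDS = {"a", "an", "the", "is", "are", "of", "for", "in", "on",
              "what", "who", "where", "how", "to", "please"}
DICTIONARY = {"weather", "forecast", "sydney", "machine", "learning",
              "conference", "tomorrow"}

def optimise_query(query):
    keywords = []
    for word in query.lower().split():
        if word in STOP_WORDS:
            continue                              # redundant for retrieval
        if word not in DICTIONARY:                # simple spell check
            match = difflib.get_close_matches(word, DICTIONARY, n=1, cutoff=0.75)
            word = match[0] if match else word
        keywords.append(word)
    return keywords

print(optimise_query("what is the wether forcast for sydny tomorrow"))
# -> ['weather', 'forecast', 'sydney', 'tomorrow']
```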

Abstract: Automatic, accurate detection of 3D buildings from Google Street View imagery of a city is challenging due to the presence of various horizontal and vertical occlusions such as trees and cars. Using alternative techniques in such cases, e.g. Airborne Laser Scanning (ALS), Unmanned Aerial Vehicle with depth camera (UAV RGB-D) map reconstruction, or Light Detection And Ranging (LiDAR), is very costly. In order to obtain correct coordinate values, developers also need to reconcile inconsistent coordinate information from sources such as Point Cloud Data (PCL), Digital Surface Model (DSM) and Digital Terrain Model (DTM) data. In this paper, we propose a 3D building reconstruction algorithm that uses 2D images. In our experiments, we use the Hough line detection algorithm of the OpenCV computer vision library to find outline information in 2D image coordinates, and we transform each 2D position into a 3D space coordinate. A single 2D image of a building provides only a frontal view, which is not enough to recover the coordinates needed to model a 3D building; to solve this problem, information from multiple camera views is required, and Street View can supply additional 2D views when noise or occluded objects are detected in a scene. First, we locate the noise and partition the noisy region to find the building edges for reconstruction. Second, we detect the lines of the building and construct it in the 3D world. Our proposed algorithm is efficient at detecting lines and reconstructing 3D objects from street view imagery. Moreover, it can use an internet map API to simulate a smart city.
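
The line-detection step can be sketched with OpenCV's probabilistic Hough transform as below; the input file name is hypothetical, and lifting the detected 2D segments into 3D coordinates, which is the substance of the proposed algorithm, is not attempted here.

```python
# Minimal OpenCV sketch of the line-detection step named above: edges from a
# street-view photograph are found with Canny, and straight building outlines
# are extracted with the probabilistic Hough transform.
import cv2
import numpy as np

def building_lines(image_path, min_len=80):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150, apertureSize=3)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=min_len, maxLineGap=10)
    if lines is None:
        return img, []
    segments = [tuple(l[0]) for l in lines]       # (x1, y1, x2, y2)
    for x1, y1, x2, y2 in segments:
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
    return img, segments

annotated, segs = building_lines("street_view.jpg")   # hypothetical input file
print(f"{len(segs)} line segments found")
cv2.imwrite("street_view_lines.jpg", annotated)
```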

Abstract: This is the era of competitive technology, of cloud computing and the smart world. The Internet of Things (IoT) provides the opportunity to connect billions of different types of everyday objects (things) with one another as well as with the Internet, directly or through a Local Area Network (LAN), paving a smooth and easy way to interact and share data and information. The primary concept of IoT is the pervasive presence of various types of wireless objects and embedded devices around us. However, the ultimate success of this IoT smart sensing environment depends heavily on a generally accepted, well-defined standard architecture that can provide a dynamic, scalable and secure framework for the deployment of IoT. Full deployment of IoT and its applications in real-life environments is not an easy task due to its limitations, challenges and security issues. IoT in collaboration with the Fog computing and Cloud computing paradigms revolutionizes the technology into a completely new dimension. In this paper, we discuss employing and deploying intelligent smart IoT devices in combination with the Cloud and Fog computing paradigms. We also present an improved and latency-minimized version of the Cloud- and Fog-centric IoT architecture, and then brief on security and privacy issues along with challenges and future opportunities.

Abstract: Ear-based biometric identification can be a solution in instances, such as surveillance, where other biometric traits are simply very hard to access. Although many semi-automatic approaches have been proposed to detect the ear and use it for human recognition, most of them are based on feature extraction or shallow machine learning approaches. The few approaches that use deep neural network architectures either have few hidden layers, combine a deep neural network with feature-extraction classifiers, or rely on an already trained complex deep convolutional network. In this research, a deep but simple convolutional neural network trained on raw images has been used to distinguish ears from non-ear backgrounds, which is the initial part of an ear-based biometric system. Data augmentation has also been used, and a comparative analysis has been carried out between the original dataset and the augmented dataset. Using this deep from-scratch architecture trained on 792 original images, we achieved promising results, which suggest that a larger dataset could achieve higher accuracy.

Abstract: In this paper, we present a new method for analysing damage to a logistics container by automatic analysis of its integrity, based on part-zone segmentation and hierarchical image analysis. To achieve this, we use a smart phone or tablet to acquire an image of the rear side of the container. The automatic damage detection system consists of three stages: i) dividing the image into small areas serving as minimum analysis units; ii) calculating a score for each individual unit; and iii) summing the unit scores to estimate the degree of integrity. In experiments, we successfully evaluated the performance of the proposed method with data from a smart phone camera. The proposed method thus provides automated damage detection, which has the advantage of offering a relatively inexpensive and expeditious estimate of container damage.
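
A hedged sketch of the three-stage scoring scheme is given below; the per-unit scoring function (edge-density deviation from the median) is an invented placeholder, not the authors' measure, and the photo file name is hypothetical.

```python
# Hedged sketch of the three-stage scoring idea (not the authors' scoring
# function): the container image is split into a grid of small units, each
# unit gets a damage score (here, simple edge-density deviation from the
# median), and the scores are summed into an overall integrity estimate.
import cv2
import numpy as np

def integrity_score(image_path, grid=(8, 8)):
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)
    h, w = edges.shape
    gh, gw = h // grid[0], w // grid[1]

    unit_scores = np.zeros(grid)
    for r in range(grid[0]):                      # stage i: minimum analysis units
        for c in range(grid[1]):
            cell = edges[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            unit_scores[r, c] = cell.mean() / 255.0   # stage ii: per-unit score

    deviation = np.abs(unit_scores - np.median(unit_scores))
    return deviation.sum(), unit_scores           # stage iii: summed estimate

total, per_unit = integrity_score("container_rear.jpg")   # hypothetical photo
print("overall damage estimate:", round(total, 3))
```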

Abstract: Facial expression recognition is an important area of computer vision research that has been used extensively for HCI (human-computer interaction), identity confirmation, emotion recognition and various access control applications. It is difficult to recognize human facial expressions in the real world due to various circumstances, such as differences in pose, age, lighting conditions, occlusion and face motion. A large amount of labeled facial expression training data can be one key to addressing this. If the training and test datasets are collected from the same person and environment, higher accuracy is possible because of the similar variation in facial expression. In this paper, we propose a method for gathering a large amount of facial expression data within an indoor environment and training on it through an ASSL (Active Semi-Supervised Learning) framework to obtain high accuracy. We gather a large number of unlabeled facial expression images from the Intelligent Technology Lab members of Inha University and from the BU-3DFE (Binghamton University 3D Facial Expression) benchmark dataset. We train our initial model within the ASSL framework using the deep learning VGG (Visual Geometry Group, University of Oxford) network. Our framework adopts the MTCNN (Multi-task Cascaded Convolutional Networks) detector for face detection, and we also modify the last two layers of the VGG network for better performance. Repeating this entire process yields further performance improvement. Using the ASSL method, we therefore obtain better performance and higher accuracy with less labeling effort. Our experimental results show high efficiency with various training data.

Abstract: The Support Vector Machine (SVM) is one of the most popular classification algorithms based on statistical learning. SVM has many advantages, such as its resistance to over-fitting and to the curse of dimensionality. One of its challenges, however, is that it does not scale to large samples: when we train a model with a single, isolated SVM on a large dataset, the training time is extremely high and often unacceptable.
To solve this issue we propose a Cascade SVM algorithm, a parallel algorithm that can sharply reduce training time. We split the large-scale dataset into several small-scale datasets and train on those subsets concurrently in order to increase training speed, so the cost is further reduced. However, this produces lower accuracy than an isolated SVM. To recover accuracy, we use the k-nearest neighbour (KNN) algorithm: when the SVM is predicting the test data, if the distance between a data point and the hyperplane is less than the threshold value, we accept the SVM prediction result; otherwise we combine it with the KNN result. In this way we train our model efficiently, and the proposed model increases accuracy and noticeably reduces prediction time.
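
The decision rule can be sketched as below. Note two assumptions: the sub-SVM results are merged by simply averaging decision values rather than by the cascade's support-vector feedback, and the sketch keeps the SVM label when a point is far from the hyperplane and falls back on KNN otherwise, which is one common reading of the threshold rule.

```python
# Sketch of the hybrid decision rule (an interpretation, not the authors'
# code): split the data, train one SVM per partition in parallel processes,
# then at prediction time keep the SVM label when the point is far from the
# hyperplane and fall back on KNN otherwise.
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier

def _fit_svm(args):
    X, y = args
    return LinearSVC().fit(X, y)

def train_cascade(X, y, n_parts=4):
    parts = np.array_split(np.arange(len(X)), n_parts)
    with ProcessPoolExecutor() as pool:
        svms = list(pool.map(_fit_svm, [(X[p], y[p]) for p in parts]))
    knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
    return svms, knn

def predict(svms, knn, X, margin=0.5):
    # Average the sub-SVM decision values as a stand-in for the cascade merge.
    scores = np.mean([s.decision_function(X) for s in svms], axis=0)
    svm_labels = (scores > 0).astype(int)
    knn_labels = knn.predict(X)
    return np.where(np.abs(scores) >= margin, svm_labels, knn_labels)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(4000, 10))
    y = (X[:, 0] + 0.3 * rng.normal(size=4000) > 0).astype(int)
    svms, knn = train_cascade(X[:3000], y[:3000])
    pred = predict(svms, knn, X[3000:])
    print("accuracy:", (pred == y[3000:]).mean())
```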

Abstract: Action recognition is gaining importance due to its numerous uses in security assistance, video analysis, and surveillance applications. Action recognition from images and online video is challenging due to complex backgrounds, object scale, variation in pose and misinterpretation of actions. In this paper, we propose a deep-learning-based method for human action recognition using a Convolutional Neural Network (CNN). The CNN jointly learns a feature transformation that classifies the objects in an image optimally and provides better performance, whereas traditional manually designed features do not perform as well. The deep-learning-based approach exploits GPU computational resources and produces competent performance on real-time videos and on the Intelligent Technology Lab dataset of Inha University. In our experiments, we use different image sets for training, covering simple as well as noisy and cluttered environments. We trained on the Intelligent Technology Lab dataset starting from a pre-trained network (the VGG16 network by the University of Oxford). We found that cropped ROIs with more training iterations yield better action recognition than full images at test time, owing to the less noisy background. A total of five activities were considered due to the limited number of labeled training examples. We also integrated an active semi-supervised learning (ASSL) method, which helps to improve overall human action recognition, and achieved an average accuracy of 96%. For testing, we recorded videos containing phoning, taking a photo, using a computer, jumping and reading activities. Our approach achieved significantly higher mAP on the Intelligent Technology Lab dataset.
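
A hedged sketch of the transfer-learning setup is shown below with Keras: the pre-trained VGG16 convolutional base is frozen and the top layers are replaced with a small classifier for the five activities. The layer sizes and the dataset-loading comments are assumptions, not the authors' training script.

```python
# Hedged sketch (Keras assumed; not the authors' training script) of
# fine-tuning a pre-trained VGG16 for five action classes by replacing the
# top layers, as described above.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_ACTIONS = 5   # phoning, taking photo, using computer, jumping, reading

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                            # freeze convolutional features

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_ACTIONS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# `train_ds` / `val_ds` would be datasets of cropped-ROI action images, e.g.
# built with tf.keras.utils.image_dataset_from_directory("actions/").
# model.fit(train_ds, validation_data=val_ds, epochs=10)
model.summary()
```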

Abstract: The vehicle tracking system is an Android-based application using the GPS sensor that tracks and shows nearby vehicles ordered by their location. It is sometimes quite difficult for people to find a vehicle in an emergency; in this regard, a vehicle tracking system reduces people's difficulty and saves time and energy by creating a contact between the driver and the passenger. The mobile trend and the development of 3G networks have changed our lifestyle and centred it on the smartphone. In such a scenario, mobile application development is one of the most beneficial platforms, and Android is one of the largest platforms running on most smartphones. Vehicle tracking is an innovative Android-phone-based vehicle reservation application which aims to fulfil users' demands and ease their journeys. This paper outlines a vehicle tracking application which helps individuals hire a vehicle with a phone by sending the driver's details to the passenger. The application helps the user select a suitable cab and make a phone call to its driver to be picked up. The proposed system uses the Java programming language on the Android operating system for the mobile client, PHP for the web server, MySQL as the repository, and GPS (Global Positioning System) as the location provider.

Abstract: Much research on biometric technology and information systems focuses on privacy and security, such as access control and information security. Few studies examine the issues and challenges in biometrics and how they relate to privacy and security within biometric technology. This research therefore investigates how issues and challenges in biometrics are correlated with the privacy and security of data and information in biometric technology and organizational information system networks. The outcomes would assist organizational management in determining whether implementing biometric technology invades privacy. A survey conducted on fingerprint and facial recognition (respondents aged between 20 and 50) found that there is a substantial relationship between biometric user authentication systems and the privacy and security of user data and information. Additionally, the implementation of biometric technology for user authentication and access control drew attention and curiosity, and the technology was not perceived as invasive or abusive. Biometric technology that contains user information is efficient for identification and verification purposes and does not need to be remembered like a password, but once stolen it cannot be replaced. The outcomes further point out that biometric data and information are irreplaceable: once compromised, they cannot be replaced or changed like passwords. Biometric technology is apposite and user friendly, even though the uniqueness of the data and the cost of its theft remain contentious. Organizational management should therefore carefully understand the needs of the organization and consider the features and risk factors of biometric technology while implementing it.

Abstract: An information security training and awareness program is fundamental to preventing information security incidents in an organization. The success of this program remains unknown unless its efficiency is measured. Prior to implementing an information security training and awareness program, organizational management should identify the organizational needs, the metrics to measure the efficiency of the program, and how to update the activities in the program. An information security training and awareness program teaches employees how their knowledge, attitude and behaviour affect the organization's overall performance. This paper does not propose any specific information security products, but gives practical suggestions to organizational management and users regarding information security. It also argues that securing the organization's information system is everyone's responsibility. The paper measures the success factors of the information security training and awareness program and assures organizational management that investment in such a program is valuable in the long term. It discusses the research methodology, outlines the experiment and statistical analysis, and measures the effectiveness of the information security training and awareness program. It further analyses other factors that contribute to making a training and awareness program effective and provides suggestions for future research.

Abstract: Agriculture is the main food production system of a country. Over the decades, much research has been conducted to improve agricultural cultivation using new technology. In this paper, an agricultural environment monitoring system is described using the Internet of Things, the open-source Raspberry Pi platform and a Wireless Sensor Network. In this system, the wireless sensors collect data on various environmental parameters such as temperature, humidity and soil moisture and send the collected data to the Raspberry Pi, which stores the data in a database on a web server. The data can be monitored from anywhere in the world, and a client-side website was developed to display it. If the collected environmental data reaches a threshold condition, a Gmail notification is sent to the user as a warning. The overall system architecture and the hardware and software components used are described in detail.
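
The threshold-and-notify logic can be sketched as follows. The sensor-reading function, threshold values and e-mail addresses are placeholders; on the real system the readings would come from the wireless sensor nodes and the credentials would be a real Gmail account with an app password.

```python
# Illustrative sketch only: the sensor-reading function, threshold values and
# e-mail addresses below are hypothetical placeholders, not the paper's setup.
import smtplib
from email.message import EmailMessage

THRESHOLDS = {"temperature": 35.0, "humidity": 30.0, "soil_moisture": 20.0}

def read_sensors():
    """Placeholder: the real system would receive these from the sensor nodes."""
    return {"temperature": 33.0, "humidity": 55.0, "soil_moisture": 41.0}

def notify(parameter, value):
    """Send a Gmail warning that `parameter` crossed its threshold."""
    msg = EmailMessage()
    msg["Subject"] = f"Farm alert: {parameter} = {value}"
    msg["From"] = "farm.monitor@example.com"      # hypothetical addresses
    msg["To"] = "owner@example.com"
    msg.set_content(f"{parameter} crossed its threshold "
                    f"({THRESHOLDS[parameter]}); current value: {value}")
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as smtp:
        smtp.login("farm.monitor@example.com", "app-password-here")
        smtp.send_message(msg)

# Temperature alerts when it rises above its threshold; humidity and soil
# moisture alert when they fall below theirs.  With the placeholder readings
# above, no alert is sent.
for name, value in read_sensors().items():
    too_high = name == "temperature" and value > THRESHOLDS[name]
    too_low = name != "temperature" and value < THRESHOLDS[name]
    if too_high or too_low:
        notify(name, value)
```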

Abstract: Real-time traffic information plays an important role in our daily life. Traffic information helps us understand the traffic congestion situation in different parts of a city for various purposes, such as the traffic flow of a particular area, the exact travel time to a destination, road conditions and accident information. These days, many vehicles carry an electronic device called a black box to assist the personnel who operate the vehicle, specifically to find the reason behind an incident for insurance claims and correct decision making. In this paper, we propose a deep-learning-based smart traffic updating system which exploits black box video data. We collect the black box video data, process it and train a deep learning model on it for accurate vehicle detection. After detecting and counting the vehicles, we send the traffic information to a server for better decision making, such as avoiding gridlock situations or incidents. Experimental results show that our proposed method is a very cost-effective way to manage and update GIS information with cutting-edge technology. A deep-learning-based GIS database will ease the need for traffic information related to perception and decision-making in transportation, and the DSTUS system will assist in predicting the travel time of each road section in a specific city.

Abstract: The TSP is a popular and demanding NP-hard problem which has attracted the interest of many researchers. There are numerous crossover methods, such as SCX, ERX and GNX, for solving the TSP with a GA. In this paper we propose a new solution to the Traveling Salesman Problem (TSP) using a genetic algorithm. A triple crossover technique is applied to find the best, optimal solution to this problem. The Triple Crossover Operator (TCO) separates the parent strings into three substrings by comparing cost and derives a new partial offspring that may contain duplicate nodes; replacing those duplicate nodes with the missing nodes then allows the system to generate high-performance chromosomes. This solution is compared with other well-performing crossover techniques, and our experimental results show improved performance due to the proposed crossover technique. Moreover, the added complexity of the algorithm is negligible.
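
One possible reading of the TCO repair step is sketched below: a child tour is assembled from three substrings of the parents, and any duplicated cities are replaced by the cities that went missing so the tour remains a valid permutation. The cut-point selection and cost comparison of the actual operator are not specified in the abstract, so this is a guess at the mechanics rather than the authors' operator.

```python
# One possible reading of the triple crossover repair (hedged: the exact TCO
# rules are not reproduced here): cut the parents into three substrings,
# reassemble a partial offspring, then replace any duplicated cities with
# the cities that went missing, keeping the tour a valid permutation.
import random

def triple_crossover(parent_a, parent_b):
    n = len(parent_a)
    c1, c2 = sorted(random.sample(range(1, n), 2))
    # Build a partial offspring from three substrings of the two parents;
    # this mixture may contain duplicates and omissions.
    child = parent_a[:c1] + parent_b[c1:c2] + parent_a[c2:]

    missing = [c for c in parent_a if c not in child]
    seen, repaired = set(), []
    for city in child:
        if city in seen:                          # duplicate -> use a missing city
            repaired.append(missing.pop())
        else:
            repaired.append(city)
            seen.add(city)
    return repaired

def tour_cost(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

random.seed(1)
cities = list(range(8))
dist = [[abs(i - j) + 1 for j in cities] for i in cities]   # toy distance matrix
a, b = random.sample(cities, 8), random.sample(cities, 8)
child = triple_crossover(a, b)
print(child, tour_cost(child, dist))
```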

Abstract: In recent years, domestic and foreign researchers have applied robot arms in traditional manufacturing, medical treatment, space exploration, education and other fields, and have obtained some successful results. The accuracy and stability of the manipulator has long been an important topic for many scholars, especially in specific areas (e.g. industrial assembly, security, explosion prevention and medical applications) with special requirements (e.g. higher precision and movements that must produce only very small deviations), which place demands on the mechanical stability of the arm. This paper introduces the MPU6050 attitude sensor combined with the 7BOT multi-DOF robot arm manipulator to improve stability in highly precise tasks. The MPU6050 is fixed on the 7BOT mechanical arm, and the arm's posture data are transmitted to a server through an ESP8266 wireless module. The server processes the data, generates waveforms of 3D position, velocity and angular velocity, and produces a 3D motion model of the mechanical arm, while the 7BOT attitude data sent to the server are automatically compared with the expected data and waveforms. If large deviations are found, they are corrected in a timely manner: the revised data are sent back to the 7BOT arm, and the arm posture is promptly corrected. The experiments prove that this can greatly improve the accuracy with which the mechanical arm completes a given task from its initial position.

Abstract: This paper presents a framework methodology for identifying whether a real website is vulnerable to code injection attacks. Our proposed methodology gives thousands of websites a way to identify their vulnerability to code injection attacks, so it can help a website administrator check his own website and know whether it is vulnerable. There is not much research on this subject, because most cyber security research is about detecting and protecting against malware; the value of this work is that it can provide a self-checking protection for every website, so the administrator can know whether his website needs more protection against malware or whether the anti-malware he is using to protect against code injection attacks is enough.
The framework presented in this paper can itself perform code injection attacks, such as SQL injection and XSS attacks, to check whether the website is vulnerable to them. The checking methodology covers the ways an attacker can succeed with a code injection attack, so it gives a precise check of whether the website is vulnerable to code injection or not. In this way, many websites can be protected against hackers who use code injection attacks to cause damage such as stealing, deleting or altering important data in website systems.

Abstract: In this paper, we describe an approach to detect the visual focus of attention (VFOA) of a person from his/her gaze direction under varying lighting conditions and varying distances from the camera, and we evaluate its performance. Continuous video of the target person is captured and fed into an expert system for further processing. From frame-by-frame analysis, the head and eyeballs of the target person are detected using the vector field of image gradients. If the person changes his/her gaze, the corresponding coordinate of the pupil also changes. The frontal view of the person is divided into three regions corresponding to three target objects, and the gaze direction is determined based on which region the pupil coordinate falls into within the eye area. Our technique also combines both Rapid Eye Movement (REM) and head rotation detection, which provides an efficient tool for VFOA tracking. With a low-cost camera, it is very hard to work at night and on cloudy days because of the lack of proper lighting. Considering these limitations, we have developed a system that can successfully track the visual focus of attention of the target person in terms of sustained and transient attention or distraction, and finally control the attention by deploying an external signalling or alarm system.
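
The region-assignment step can be illustrated with a toy mapping from the detected pupil coordinate to one of three target regions; the coordinates and region boundaries below are invented for illustration, not taken from the paper.

```python
# Toy sketch of the region-assignment step described above: the frontal view
# is split into three horizontal regions, and the detected pupil x-coordinate
# (relative to the eye box) selects the attended target object.
def gaze_region(pupil_x, eye_left, eye_right, targets=("left object",
                                                       "centre object",
                                                       "right object")):
    """Map a pupil x-coordinate inside the eye box [eye_left, eye_right]
    to one of three target regions."""
    width = eye_right - eye_left
    rel = (pupil_x - eye_left) / float(width)     # 0.0 .. 1.0 across the eye
    if rel < 1.0 / 3.0:
        return targets[0]
    if rel < 2.0 / 3.0:
        return targets[1]
    return targets[2]

print(gaze_region(pupil_x=58, eye_left=40, eye_right=80))   # -> "centre object"
```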