
Publications


Featured research published by Shufan Yang.


Neural Computing and Applications | 2017

A neuro-inspired visual tracking method based on programmable system-on-chip platform

Shufan Yang; KongFatt Wong-Lin; James Andrew; Terrence Mak; T. Martin McGinnity

Using programmable system-on-chip platforms to implement computer vision functions poses many challenges due to highly constrained resources in cost, size and power consumption. In this work, we propose a new neuro-inspired image processing model and implement it on a Xilinx ZC702 system-on-chip board. By using an attractor neural network model to store the object's contour information, we eliminate the computationally expensive re-initialisation step of the curve evolution at every new iteration or frame. Our experimental results demonstrate that this integrated approach achieves accurate and robust object tracking even when objects are partially or completely occluded in the scene. Importantly, the system processes 640×480 video streams in real time at 30 frames per second using a single low-power Xilinx Zynq-7000 system-on-chip platform. This proof-of-concept work demonstrates the advantage of incorporating neuro-inspired features when solving image processing problems under occlusion.
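The attractor-network idea in this abstract, storing a pattern so that a corrupted (occluded) observation is pulled back to the stored memory, can be illustrated with a minimal Hopfield-style sketch. All sizes, names and the occlusion model here are illustrative, not taken from the paper:

```python
import numpy as np

def train_hopfield(patterns):
    # Hebbian learning: store bipolar (+1/-1) patterns in a weight matrix.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W / patterns.shape[0]

def recall(W, probe, steps=10):
    # Synchronous updates drive the state toward the stored attractor.
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# Store a toy 8x8 "contour" pattern and recover it from an occluded probe.
rng = np.random.default_rng(0)
contour = np.where(rng.random(64) > 0.5, 1, -1)
W = train_hopfield(contour[None, :])

probe = contour.copy()
probe[:20] = 1  # simulate occlusion of part of the contour
restored = recall(W, probe)
print(np.mean(restored == contour))  # fraction of bits recovered → 1.0
```

Because a corrupted frame is restored from memory rather than recomputed, the expensive per-frame re-initialisation the abstract mentions can be avoided.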


uk workshop on computational intelligence | 2018

A Low Computational Approach for Assistive Esophageal Adenocarcinoma and Colorectal Cancer Detection

Zheqi Yu; Shufan Yang; Keliang Zhou; Amar Aggoun

In this paper, we aim to develop a low-computational system for real-time image processing and analysis of endoscopy images for the early detection of human esophageal adenocarcinoma and colorectal cancer. Rich statistical features are used to train an improved machine-learning algorithm. Implemented on the real-time embedded NVIDIA TX2 platform, our algorithm achieves real-time classification of malignant and benign tumours with significantly improved detection precision compared with the classical HOG method used as a reference. Our approach can help to avoid unnecessary biopsies for patients and reduce the over-diagnosis of clinically insignificant cancers in the future.
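The HOG baseline referenced in this abstract computes per-cell histograms of gradient orientations. A simplified sketch of that feature extraction (not the authors' improved algorithm; cell size, bin count and the toy patches are illustrative) is:

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    # Simplified HOG: per-cell histograms of gradient orientations,
    # weighted by gradient magnitude (block normalisation omitted).
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    h, w = img.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            a = ang[y:y + cell, x:x + cell].ravel()
            m = mag[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-9)

# Two toy 16x16 patches: a flat region and one with a strong vertical edge.
smooth = np.ones((16, 16))
edged = np.ones((16, 16)); edged[:, 8:] = 10.0
f1, f2 = hog_features(smooth), hog_features(edged)
```

In a full pipeline, descriptors like `f2` would be fed to a classifier to separate malignant from benign regions; the flat patch yields an all-zero descriptor, while the edged patch produces a non-trivial one.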


international conference on human-computer interaction | 2017

Interactive reading using low cost brain computer interfaces

Fernando Loizides; Liam Naughton; Paul Wilson; Michael Loizou; Shufan Yang; Thomas P. Hartley; Adam Worrallo; Panayiotis Zaphiris

This work shows the feasibility of document-reader applications using a consumer-grade non-invasive BCI headset. Although Brain Computer Interface (BCI) devices are beginning to target the consumer level, the level at which they can actually detect brain activity is limited. There has, however, been progress in enabling interaction between a human and a computer when that interaction is limited to around two actions. We employed the Emotiv Epoc, a low-priced BCI headset, to design and build a proof-of-concept document reader system that allows users to navigate a document using this low-cost BCI device. Our prototype has been implemented and evaluated with 12 participants who were trained to navigate documents using signals acquired by the Emotiv Epoc.
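The two-action constraint described here maps naturally onto a tiny navigation state machine. This is an illustrative sketch only: the `DocumentReader` class and the "push"/"pull" action labels are invented, and the real prototype reads signals from the Emotiv Epoc SDK rather than a hard-coded list:

```python
# Minimal sketch: a two-action BCI mapped to document navigation.
class DocumentReader:
    def __init__(self, num_pages):
        self.num_pages = num_pages
        self.page = 0

    def handle_action(self, action):
        # With only ~2 reliably detectable mental actions, each maps
        # to one navigation command: page forward or page backward.
        if action == "push":
            self.page = min(self.page + 1, self.num_pages - 1)
        elif action == "pull":
            self.page = max(self.page - 1, 0)
        return self.page

reader = DocumentReader(num_pages=5)
for detected in ["push", "push", "pull", "push"]:
    reader.handle_action(detected)
print(reader.page)  # → 2
```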


Proceedings of the 16th World Conference on Mobile and Contextual Learning | 2017

Presenting and Investigating the Efficacy of an Educational Interactive Mobile Application for British Sign Language Using Hand Gesture Detection Techniques

Cristian A. Restituyo; Fernando Loizides; Shufan Yang; Kurtis Weir; Adam Worrallo; Thomas P. Hartley; Nicos Souleles; Michael Loizou

In this paper we present the design and development of a mobile application that assists learners of British Sign Language. The application uses interaction, namely image recognition of a participant's gestures, to give feedback on the correctness of those gestures. Through design iterations with different algorithms and a user test, the efficacy of such an application is presented, along with the limitations of current technology.
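The feedback step described in this abstract amounts to comparing a recognised hand pose against the template for the sign being practised. A hedged sketch of that comparison follows; the templates, feature vectors and similarity threshold are all invented for illustration and do not come from the paper:

```python
import numpy as np

# Toy per-sign templates: each sign is a small hand-pose feature vector.
TEMPLATES = {
    "A": np.array([1.0, 0.0, 0.0, 0.0]),
    "B": np.array([0.0, 1.0, 1.0, 1.0]),
}

def feedback(observed, target, threshold=0.9):
    # Cosine similarity between the observed pose features and the
    # template for the sign the learner is practising.
    t = TEMPLATES[target]
    sim = observed @ t / (np.linalg.norm(observed) * np.linalg.norm(t) + 1e-9)
    return "correct" if sim >= threshold else "try again"

print(feedback(np.array([0.1, 1.0, 0.9, 1.0]), "B"))  # → correct
print(feedback(np.array([1.0, 0.0, 0.0, 0.0]), "B"))  # → try again
```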


IEEE Sensors Journal | 2018

Human activity classification with radar: optimization and noise robustness with iterative convolutional neural networks followed with random forests

Yier Lin; Julien Le Kernec; Shufan Yang; Francesco Fioranelli; Olivier Romain; Zhiqin Zhao


IEEE Design & Test of Computers | 2018

A highly integrated hardware-software co-design and co-verification platform

Shufan Yang; Zheqi Yu


EasyChair Preprints | 2018

Radar for assisted living in the context of Internet of Things for Health and beyond

Julien Le Kernec; Francesco Fioranelli; Shufan Yang; Jordane Lorandel; Olivier Romain


international conference on consumer electronics | 2017

Towards a scalable hardware/software co-design platform for real-time pedestrian tracking based on a ZYNQ-7000 device

Zheqi Yu; Shufan Yang; Ian P. Sillitoe; Kevan Buckley


international joint conference on biometrics | 2017

Unconstrained Face Detection and Open-Set Face Recognition Challenge

Manuel Günther; Peiyun Hu; C. Herrmann; Chi-Ho Chan; M. Jiang; Shufan Yang; A. R. Dhamija; Deva Ramanan; J. Beyerer; Josef Kittler; M. Al Jazaery; M. I. Nouyed; Guodong Guo; C. Stankiewicz; Terrance E. Boult


EURASIP Journal on Embedded Systems | 2017

An intelligible implementation of FastSLAM2.0 on a low-power embedded architecture

Albert A. Jiménez Serrata; Shufan Yang; Renfa Li

Collaboration


Dive into Shufan Yang's collaboration.

Top Co-Authors

Zheqi Yu
University of Wolverhampton

Adam Worrallo
University of Wolverhampton

Fernando Loizides
University of Wolverhampton

Thomas P. Hartley
University of Wolverhampton

Amar Aggoun
University of Wolverhampton

C. Stankiewicz
University of Wolverhampton