I'm a researcher in high-performance computing and algorithms. Thanks for visiting my website!
I'm a postdoctoral researcher at Sorbonne University. Before that, I worked at Charles University (Univerzita Karlova) with Erin Carson. I obtained my PhD in Applied Mathematics (Numerical Analysis) from the Department of Mathematics at The University of Manchester, where I was a member of the Numerical Linear Algebra group, supervised by Prof. Stefan Güttel and co-supervised by Prof. Nick Higham. My research interests include scientific computing, algorithm design and analysis, high-performance computing, and machine learning. I have served as a reviewer for Springer Statistics and Computing, PeerJ Computer Science, and the International Journal of Forecasting.
My CV
I'm currently researching machine learning applications in scientific computing, with a particular interest in matrix computations. My research projects cover graph neural networks, deep learning, algorithm design and analysis, mixed precision computing, and parallel computing.
CLASSIX is a fast and explainable clustering algorithm based on sorting. CLASSIX is a contrived acronym of CLustering by Aggregation with Sorting-based Indexing and the letter X for explainability. CLASSIX clustering consists of two phases, namely a greedy aggregation phase of the sorted data into groups of nearby data points, followed by a merging phase of groups into clusters. The algorithm is controlled by two parameters, namely the distance parameter radius for the group aggregation and a minPts parameter controlling the minimal cluster size.
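A minimal usage sketch, assuming the classix Python package exposes a CLASSIX estimator with radius and minPts parameters (the dataset and parameter values below are illustrative):

```python
import numpy as np
from sklearn.datasets import make_blobs
from classix import CLASSIX  # assumed import path of the classix package

# Synthetic data with three well-separated blobs.
X, _ = make_blobs(n_samples=1000, centers=3, random_state=0)

# radius controls the greedy aggregation of sorted data into groups;
# minPts sets the minimal cluster size in the merging phase.
clx = CLASSIX(radius=0.3, minPts=10)
clx.fit(X)

print(clx.labels_[:10])   # cluster label of each point
clx.explain()             # explanation of how the clusters were formed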
fABBA is a fast and accurate symbolic representation method for temporal data. It is based on a polygonal chain approximation of the time series, followed by an aggregation of the polygonal pieces into groups. The aggregation process is sped up by sorting the polygonal pieces and exploiting early termination conditions. In contrast to the ABBA method [S. Elsworth and S. Güttel, Data Mining and Knowledge Discovery, 34:1175-1200, 2020], fABBA avoids repeated within-cluster-sum-of-squares computations, which reduces its computational complexity significantly. Furthermore, fABBA is fully tolerance-driven and does not require the number of time series symbols to be specified by the user.
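A minimal usage sketch, assuming the fABBA Python package provides an fABBA transformer driven by tol and alpha tolerances (the series and parameter values are illustrative):

```python
import numpy as np
from fABBA import fABBA  # assumed import path of the fABBA package

# A simple sine wave as an example time series.
ts = np.sin(0.05 * np.arange(1000))

# tol controls the accuracy of the polygonal chain approximation;
# alpha controls the tolerance used when aggregating pieces into symbols.
fabba = fABBA(tol=0.1, alpha=0.1)

symbols = fabba.fit_transform(ts)   # symbolic (string) representation
print(symbols)

# Reconstruct an approximate time series from the symbols,
# starting from the initial value of the original series.
reconstruction = fabba.inverse_transform(symbols, ts[0])
```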
SNN is a fast, exact fixed-radius nearest neighbor search algorithm. It uses the first principal component of the data to prune the search space and speeds up Euclidean distance computations using high-level BLAS routines. SNN is implemented in native Python; on many problems it is faster than the KDTree and BallTree implementations in scikit-learn. There is also a C++ implementation of SNN.
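The pruning idea can be sketched in a few lines of NumPy. This is an illustrative simplification, not the SNN package itself: project the data onto its first principal component, sort by projection, and use the bound |v·(x − q)| ≤ ‖x − q‖ to restrict the exact distance checks to a contiguous slice of candidates.

```python
import numpy as np

def build_index(X):
    """Sort the data by its projection onto the first principal component."""
    mean = X.mean(axis=0)
    # First right singular vector of the centered data = first principal direction.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    v = Vt[0]
    proj = (X - mean) @ v
    order = np.argsort(proj)
    return X[order], proj[order], mean, v

def query_radius(index, q, r):
    """Return all points within Euclidean distance r of the query q."""
    Xs, proj, mean, v = index
    pq = (q - mean) @ v
    # Since |v.(x - q)| <= ||x - q||, every true neighbor has its projection
    # within r of the query's projection; binary search gives the candidates.
    lo = np.searchsorted(proj, pq - r, side="left")
    hi = np.searchsorted(proj, pq + r, side="right")
    cand = Xs[lo:hi]
    dist = np.linalg.norm(cand - q, axis=1)   # exact distances (BLAS-backed)
    return cand[dist <= r]

rng = np.random.default_rng(0)
X = rng.standard_normal((10000, 5))
index = build_index(X)
neighbors = query_radius(index, X[0], r=1.0)
print(len(neighbors))
```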
Email:
xinyechenai@gmail.com
Languages:
Mandarin (普通话, native), Hakka (客家话, native), Cantonese (粤语), and English (英语)
ORCID:
https://orcid.org/0000-0003-1778-393X