K-NN is a technique in which objects are classified based on the nearest training examples present in the feature space of the query. K-NN is the simplest classification method in data mining, and it classifies objects even when no information about the distribution of the data objects is known. The performance of K-NN classification depends on the choice of K, which determines the neighborhood size, as well as on the distance metric used for the query; selecting a suitable K is therefore a key issue for classification. This paper proposes a data structure for K-NN search, called the Rank Cover Tree (RCT), to reduce the computational cost of K-NN search. In the RCT, the pruning test involves comparison of objects' rank values relative to the query rather than their distance values. Each object is assigned a specific rank order, and according to that order the objects relevant to the query are selected, which controls the overall query execution cost. This rank-based (non-metric) pruning remains effective for similarity search even when high-dimensional data is processed, and the RCT returns correct query results in an execution time that depends on the intrinsic dimensionality of the objects in the data set. The RCT can exceed the performance of methods that rely on metric pruning or on selection tests involving numerical constraints on distance values.
Keywords: K-Nearest neighbor search, intrinsic dimensionality, rank-based search, RCT.
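
To make the rank-based selection principle concrete, the following Python sketch builds a simplified two-level index in which candidates are kept or discarded purely according to their rank with respect to the query, with no numerical thresholds on distance values. This is a hypothetical, approximate analogue written only for exposition; the class name TwoLevelRankIndex, its parameters (sample_rate, coverage), and its structure are assumptions and do not reproduce the Rank Cover Tree construction itself.

```python
# A minimal, illustrative sketch of rank-based (ordinal) k-NN selection.
# NOT the Rank Cover Tree: a simplified two-level analogue showing that
# candidates are pruned by their *rank* with respect to the query,
# never by comparing distance values against numerical thresholds.

import random
from collections import defaultdict


def euclidean(a, b):
    # Any dissimilarity measure works; only comparisons of its values are used.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


class TwoLevelRankIndex:
    def __init__(self, points, sample_rate=0.1, dist=euclidean, seed=0):
        rng = random.Random(seed)
        self.dist = dist
        self.points = points
        # Level 1: a random sample of the data acts as "cover" points.
        n_covers = max(1, int(len(points) * sample_rate))
        self.covers = rng.sample(range(len(points)), n_covers)
        # Each point is assigned to its best-ranked (nearest) cover point.
        self.buckets = defaultdict(list)
        for i, p in enumerate(points):
            best = min(self.covers, key=lambda c: dist(p, points[c]))
            self.buckets[best].append(i)

    def query(self, q, k=1, coverage=3):
        # Rank the cover points by dissimilarity to q and keep only the
        # 'coverage' best-ranked ones (ordinal pruning: no thresholds).
        ranked = sorted(self.covers, key=lambda c: self.dist(q, self.points[c]))
        candidates = []
        for c in ranked[:coverage]:
            candidates.extend(self.buckets[c])
        # Final selection is again purely by rank among surviving candidates.
        candidates.sort(key=lambda i: self.dist(q, self.points[i]))
        return candidates[:k]


if __name__ == "__main__":
    rng = random.Random(1)
    data = [(rng.random(), rng.random()) for _ in range(1000)]
    index = TwoLevelRankIndex(data, sample_rate=0.05)
    print(index.query((0.5, 0.5), k=3))  # indices of 3 best-ranked candidates
```

Because every decision in the sketch is made by sorting or taking a minimum over distance values, only their ordering matters; the absolute magnitudes are never compared against a bound, which is the contrast with metric-pruning methods drawn in the abstract.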