Let's imagine we generated a 200-dimensional word vector for the word 'hello' using some pre-trained model, as shown in the image below.
Is there any way to tell which linguistic feature is represented by each dimension d_i of this vector?
For example, d_1 might indicate whether the word is a noun; d_2 might indicate whether the word is a named entity, and so on.
Because these word vectors are dense distributional representations, it is often difficult or impossible to interpret individual neurons, and such models often do not localize interpretable features to a single neuron (though this is an active area of research). For example, see Analyzing Individual Neurons in Pre-trained Language Models for a discussion of this with respect to pre-trained language models.
A common method for studying how individual dimensions contribute to a particular phenomenon or task of interest is to train a linear model (e.g., logistic regression if the task is classification) to perform the task from the fixed vectors, and then analyze the weights of the trained linear model.
For example, if you're interested in part of speech, you can train a linear model to map from the word vector to the POS tag [1]. The weights of the trained model then describe a linear combination of the dimensions that is predictive of the feature: if the weight on the 5th neuron has large magnitude (very positive or very negative), you might expect that neuron to be somewhat correlated with the phenomenon of interest.
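Here's a minimal sketch of what such a probe might look like with scikit-learn, assuming you have a way to look up fixed word vectors and gold POS tags; `get_word_vector`, the word list, and the tags below are just placeholders standing in for your pre-trained model and an annotated corpus.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: a handful of words with coarse POS labels.
words = ["hello", "play", "quickly", "dog"]
pos_tags = ["INTJ", "VERB", "ADV", "NOUN"]

def get_word_vector(word: str) -> np.ndarray:
    """Placeholder: look up the 200-dim vector from your pre-trained model."""
    rng = np.random.default_rng(sum(map(ord, word)))  # fake but deterministic
    return rng.standard_normal(200)

X = np.stack([get_word_vector(w) for w in words])  # shape: (n_words, 200)
y = np.array(pos_tags)

# Train the linear probe: logistic regression from the fixed vectors to POS.
probe = LogisticRegression(max_iter=1000).fit(X, y)

# probe.coef_ has shape (n_classes, 200): one weight per class per dimension.
# Dimensions whose weights have large magnitude are the ones the probe relies
# on most, i.e. the ones most linearly predictive of that POS label.
for cls, weights in zip(probe.classes_, probe.coef_):
    top_dims = np.argsort(-np.abs(weights))[:5]
    print(f"{cls}: most predictive dimensions {top_dims.tolist()}")
```

Keep in mind that a large weight only tells you the probe found that dimension useful; it does not prove the dimension encodes the feature on its own.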
[1]: Note that defining a POS for a particular word is nontrivial, since the POS often depends on context. For example, "play" can be a noun ("he saw a play") or a verb ("I will play in the grass").