I've been working with facial embeddings but I think Word2Vec is a more common example.
Each entry in that matrix is a number produced by some prediction program/algorithm, but what do those numbers represent? Are they learned features?
Each word gets a learned vector, and each of those numbers is the word's coordinate along one dimension. The dimensions are learned so as to best separate the words from each other, given some limited number of dimensions (normally ~200). So if a group of words tends to appear in the same contexts, they'd likely share a similar score on one or more dimensions.
For example, words like North, South, East and West are likely to end up very close together in the embedding space, since they are interchangeable in many contexts.
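To make that concrete, here is a minimal sketch using gensim with pretrained GloVe vectors as a stand-in (GloVe is trained differently from Word2Vec but yields the same kind of embedding; the model name and the roughly 130 MB download are assumptions of this example):

```python
# Minimal sketch: words used in similar contexts get similar vectors.
# Assumes gensim is installed; api.load() downloads pretrained
# 100-dimensional GloVe vectors (~130 MB) on first use.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # KeyedVectors, lowercase vocab

# Near-interchangeable words score high on cosine similarity...
print(vectors.similarity("north", "south"))
# ...while unrelated words score much lower.
print(vectors.similarity("north", "banana"))
```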
The dimensions are chosen by the algorithm to maximize the variance they encode, and what each one means is not necessarily something we can put into words. But imagine a bag of fridge magnets, each representing a letter of the alphabet - if you shine a light on them so as to cast a shadow, some orientations of the letters will yield more discriminatory information in the shadows than others.
The dimensions in a word embedding represent the best "orientations", the ones whose light casts the most discriminatory "shadows". Sometimes these dimensions might approximate things we recognise as having direct meaning, but very often they won't.
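You can play with that shadow analogy directly: projecting the vectors down to two dimensions is literally casting a shadow, and PCA picks the orientation of the "light" that preserves the most variance. A rough sketch (PCA and the pretrained GloVe vectors are my choices for illustration, not part of Word2Vec itself):

```python
# Project 100-d word vectors onto the 2-d plane preserving the most
# variance, i.e. the most informative "orientation of the light".
# Assumes gensim and scikit-learn are installed.
import gensim.downloader as api
import numpy as np
from sklearn.decomposition import PCA

vectors = api.load("glove-wiki-gigaword-100")
words = ["north", "south", "east", "west", "banana", "apple", "cherry"]

shadow = PCA(n_components=2).fit_transform(np.stack([vectors[w] for w in words]))

for word, (x, y) in zip(words, shadow):
    print(f"{word:>8}: ({x:+.2f}, {y:+.2f})")
# The compass words and the fruit words typically fall into two
# well-separated clusters in this 2-d "shadow".
```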
That being said, if you collect words that do have similar functions, and find the vectors from those words to other words that sit at the endpoint of some fixed relationship, something useful emerges. Take England, France and Germany as one set of words (countries), and London, Paris and Berlin as another (their respective capital cities): the relative vectors between each country and its capital are often very, very similar in both direction and magnitude.
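This is easy to verify with a few lines of code; the cosine similarity between the country-to-capital offsets is typically far higher than you would see between unrelated word pairs (same assumed GloVe download as above):

```python
# Compare the country -> capital offset vectors for several pairs.
import gensim.downloader as api
import numpy as np

vectors = api.load("glove-wiki-gigaword-100")

def offset(a, b):
    """Vector pointing from the word a to the word b."""
    return vectors[b] - vectors[a]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

fr = offset("france", "paris")
de = offset("germany", "berlin")
en = offset("england", "london")

# All three offsets point in much the same direction.
print(cosine(fr, de), cosine(fr, en), cosine(de, en))
```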
This has an application for search, because you can start from a new word's location, say "Argentina", and by looking at the location you arrive at after applying the relative "has_capital_city" vector, you should find the word "Buenos Aires".
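In code, that lookup is just vector addition followed by a nearest-neighbour search. One caveat: the GloVe vocabulary used below splits "buenos aires" into two tokens, so this sketch applies the offset to Spain instead; reproducing the Argentina example as written would need a phrase-aware vocabulary (the Google News Word2Vec model, for instance, has single tokens for many multi-word names):

```python
# Apply a "has_capital_city" offset taken from one pair to a new country.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

has_capital_city = vectors["paris"] - vectors["france"]

# Land at vec(spain) + offset, then look up the nearest words.
guess = vectors["spain"] + has_capital_city
print(vectors.similar_by_vector(guess, topn=5))
# "madrid" typically appears at or near the top (alongside "spain" itself).
```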
So the raw dimensions probably have little meaning of their own, but by performing these "A is to B as X is to Y" comparisons, it is possible to derive relative vectors that do have a meaning of sorts.
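gensim exposes the A-is-to-B-as-X-is-to-Y comparison directly as most_similar, and averaging the offset over several known pairs gives a steadier relation vector than any single pair. A sketch of both (the word pairs are my own picks, and the top hits are typical rather than guaranteed):

```python
# Two ways to exploit the "A is to B as X is to ?" structure.
import gensim.downloader as api
import numpy as np

vectors = api.load("glove-wiki-gigaword-100")

# 1) Built-in analogy query: germany is to berlin as france is to ?
print(vectors.most_similar(positive=["berlin", "france"],
                           negative=["germany"], topn=3))

# 2) Derive a reusable "has_capital_city" vector by averaging offsets
#    over several known pairs, then apply it to a held-out country.
pairs = [("france", "paris"), ("germany", "berlin"), ("italy", "rome")]
relation = np.mean([vectors[cap] - vectors[country]
                    for country, cap in pairs], axis=0)
print(vectors.similar_by_vector(vectors["japan"] + relation, topn=3))
# "tokyo" typically ranks at or near the top.
```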