Consider any situation in which you ultimately use a gradient descent optimization method. Assume that you have successfully created a hypothesis that fits your training set and works fine. After some time, your algorithm receives more and more new data, which it has to learn from.
Questions: 1) Can this algorithm still be considered supervised?
2) If so, is there a way to learn from the new data without iterating again through all (new + old) data?
There is no generic answer to your question, as this is a very broad problem/issue in machine learning. You should do research on two topics:
- incremental (online) learning, which is about updating an existing model with new data without retraining from scratch on all of the old data;
- concept drift, which is about what to do when the distribution that generates the data changes over time.
There are dozens of approaches to both problems (and it does not really matter that you use gradient descent; what matters more is what exact model you are fitting), and everything depends on the particular dataset and application.
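To make question 2 concrete, here is a minimal sketch (my own illustration, not part of the original answer) of the incremental-update idea for a linear regression hypothesis trained with stochastic gradient descent: the parameters fit on the old data are refined using only the newly arrived batch. The feature dimension, learning rate, and data are arbitrary illustration values.

```python
import numpy as np

def sgd_update(w, b, X_new, y_new, lr=0.01):
    """One SGD pass over only the newly arrived examples.

    w, b   -- current linear-model parameters (already fit on the old data)
    X_new  -- array of shape (n_new, n_features) with the new inputs
    y_new  -- array of shape (n_new,) with the new labels
    lr     -- learning rate (illustrative value)
    """
    for x_i, y_i in zip(X_new, y_new):
        y_hat = w @ x_i + b      # prediction with the current hypothesis
        error = y_hat - y_i      # squared-loss gradient factor
        w -= lr * error * x_i    # gradient step on the weights
        b -= lr * error          # gradient step on the bias
    return w, b

# Hypothetical usage: parameters that were fit earlier on the old data...
rng = np.random.default_rng(0)
w = rng.normal(size=3)
b = 0.0
# ...are updated on a new batch only, without touching the old training set.
X_new = rng.normal(size=(10, 3))
y_new = X_new @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=10)
w, b = sgd_update(w, b, X_new, y_new)
```

Whether such a plain incremental update is good enough depends on the model and on how strongly the data distribution drifts.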
So, in general: yes, it is still supervised learning, although there are also semi-supervised and unsupervised algorithms used for dealing with concept drift.
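As an aside (again my own illustration, not part of the original answer): some libraries expose this kind of incremental supervised fitting directly. For example, scikit-learn's `SGDClassifier.partial_fit` updates an already-trained linear classifier on new labelled batches only; the data and hyperparameters below are placeholders.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Initial supervised fit on the "old" labelled data.
X_old = rng.normal(size=(200, 5))
y_old = (X_old[:, 0] + X_old[:, 1] > 0).astype(int)
clf = SGDClassifier(learning_rate="constant", eta0=0.05)
clf.partial_fit(X_old, y_old, classes=np.array([0, 1]))

# Later, a new labelled batch arrives; update without revisiting X_old.
# A constant learning rate keeps the model responsive to new batches,
# which can help under mild concept drift (a design choice, not a guarantee).
X_new = rng.normal(size=(50, 5))
y_new = (X_new[:, 0] + X_new[:, 1] > 0).astype(int)
clf.partial_fit(X_new, y_new)
```

Note that the setting stays supervised as long as the new data arrives with labels; concept-drift handling is a separate concern layered on top of this.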