I have a servlet Filter implementation that looks up a client's user ID in a database table (based on the IP address) and attaches that data to an HttpSession attribute. The filter does this whenever it receives a request from a client without an existing HttpSession.
In other words, if there is no session attached to a request, the filter will:
- create a new session
- query the database for the client's user data, keyed on the remote IP address
- attach that user data to the session as attributes
This all works fine if there is some time in between requests from a "session-less" client.
But if a "session-less" client sends 10 requests within milliseconds of each other I end up with 10 sessions and 10 database queries. It still "works" but I don't like all of these sessions and queries for resource reasons.
I think this is because the requests are so close together. When a "session-less" client sends a request and gets a response before another request is sent I don't have this problem.
The relevant parts of my filter are:
// some other imports
import org.apache.commons.dbutils.QueryRunner;
import org.apache.commons.dbutils.handlers.MapHandler;
public class QueryFilter implements Filter {

    private QueryRunner myQueryRunner;
    private String myStoredProcedure;
    private String myUserQuery;
    private MapHandler myMapHandler;

    @Override
    public void init(final FilterConfig filterConfig) throws ServletException {
        Config config = Config.getInstance(filterConfig.getServletContext());
        myQueryRunner = config.getQueryRunner();
        myStoredProcedure = config.getStoredProcedure();
        myUserQuery = filterConfig.getInitParameter("user.query");
        myMapHandler = new MapHandler();
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest myHttpRequest = (HttpServletRequest) request;
        HttpServletResponse myHttpResponse = (HttpServletResponse) response;
        HttpSession myHttpSession = myHttpRequest.getSession(false);
        String remoteAddress = request.getRemoteAddr();

        // if there is not already a session
        if (null == myHttpSession) {
            // create a session
            myHttpSession = myHttpRequest.getSession();
            // build the query parameters to request the user data
            Object[] queryParams = new Object[] {
                myUserQuery,
                remoteAddress
            };
            // query the database for user data
            try {
                Map<String, Object> userData = myQueryRunner.query(myStoredProcedure, myMapHandler, queryParams);
                // attach the user data to session attributes
                for (Entry<String, Object> userDatum : userData.entrySet()) {
                    myHttpSession.setAttribute(userDatum.getKey(), userDatum.getValue());
                }
            } catch (SQLException e) {
                throw new ServletException(e);
            }
            // see below for the results of this logging
            System.out.println(myHttpSession.getCreationTime());
        }
        // ... some other filtering actions based on session
    }
}
Here are the results of logging myHttpSession.getCreationTime() (timestamps) from ONE client:
1343944955586
1343944955602
1343944955617
1343944955633
1343944955664
1343944955680
1343944955804
1343944955836
1343944955867
1343944955898
1343944955945
1343944955945
1343944956007
1343944956054
As you can see, almost all the sessions are different. These timestamps also give a good idea of how closely the requests are spaced (20 ms to 50 ms apart).
I can't redesign every client-side application to ensure it receives at least one response before sending another request initially, so I want to handle this in my filter.
Also, I don't want to simply make the subsequent requests fail; I would like to find a way to handle them.
Question
Is there a way to put subsequent requests from the same client (IP address) into "limbo" until the first request has established a session?
And if I manage that, how can I get the correct HttpSession (the one I attached the user data to) when I call aSubsequentRequest.getSession() afterwards? I don't think I can assign a session to a request, but I could be wrong.
Maybe there is some better way to go about this entirely. I basically would just like to stop this filter from running the lookup query 10 - 20 times unnecessarily within a 2 second time period.
I think what you need to do is require that your clients authenticate (successfully) first and only then make additional requests. Otherwise, they run the risk of generating multiple sessions (and having to maintain them separately). That's really not so bad a requirement, IMO.
If you are able to rely on NTLM credentials, then you could perhaps set up a map of user -> token, where you place a token into the map upon first connect; all subsequent requests then block (or fail) until one of them successfully completes the authentication step, at which point the token is removed (or updated so you can hand out the preferred session ID).
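The blocking part of that map idea can be sketched with a ConcurrentHashMap keyed on the client's IP address: computeIfAbsent is atomic per key, so if ten requests arrive at once, only one thread runs the expensive lookup and the others block until it finishes, then all share the same result. This is a minimal, self-contained sketch, not your filter: PerIpLookupCache, the simulated 50 ms lookup, and the "user-for-" result are my own illustrative names, and in the real filter the lookup function would wrap the myQueryRunner.query(...) call instead.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

public class PerIpLookupCache {

    private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> lookup;

    PerIpLookupCache(Function<String, String> lookup) {
        this.lookup = lookup;
    }

    // computeIfAbsent guarantees the lookup function is applied at most
    // once per key; concurrent callers for the same key block until the
    // first caller's lookup completes, then all receive the same value.
    String get(String ip) {
        return cache.computeIfAbsent(ip, lookup);
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger queries = new AtomicInteger();

        // stand-in for the database query; sleeps to mimic query latency
        PerIpLookupCache cache = new PerIpLookupCache(ip -> {
            queries.incrementAndGet();
            try {
                Thread.sleep(50);
            } catch (InterruptedException ignored) {
            }
            return "user-for-" + ip;
        });

        // simulate 10 near-simultaneous requests from the same client IP
        int requests = 10;
        CountDownLatch done = new CountDownLatch(requests);
        for (int i = 0; i < requests; i++) {
            new Thread(() -> {
                cache.get("10.0.0.1");
                done.countDown();
            }).start();
        }
        done.await();

        // despite 10 concurrent requests, the lookup ran only once
        System.out.println("queries=" + queries.get());
    }
}
```

Entries would need to be evicted when the corresponding session expires (for example from an HttpSessionListener), otherwise a client whose database record changes would keep getting stale data; note also that this still leaves each request with its own HttpSession unless the cached value carries the first session's data for the others to copy.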