I have a download operation whose code looks like this:
while (true) {
    if (target.flagStop) {
        break;
    } else {
        x = target.check();
    }
    len = in.read(buff, 0, min(BUFFER_SIZE, x));
    out.write(buff, 0, len);
    target.position += len;
}
where flagStop is a volatile boolean, position is a non-volatile long, and the check() method contains a synchronized block:
long check() {
    // some code here
    synchronized (aLock) {
        // some code here
        return something;
    }
}
I update (write to) position only in this thread, and only this thread needs to see the exact latest value. Other threads do read it, but in my case those reads are purely for monitoring, so a value a few bytes lower than expected doesn't matter. Declaring the value volatile, on the other hand, would cost performance in my main code path.
My understanding is that for a CPU instruction to complete, data is loaded into a CPU register, and after the computation the result eventually goes back to memory, where:
- if the variable is declared volatile, the result is immediately written to main memory (no longer cached);
- otherwise, the result is stored in the thread's cache and written back to main memory at some later, indeterminate time (it could be immediate or delayed; no one knows). My question is about this situation: the value is not volatile and is written by only one thread.
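To make the setup concrete, here is a minimal, hypothetical sketch of the fields being discussed (the class name, the `remaining` field, and the placeholder body of `check()` are assumptions, not my real code):

```java
public class DownloadTarget {
    // Written by a controller thread, read by the download loop:
    // volatile guarantees the loop sees the stop request promptly.
    volatile boolean flagStop;

    // Written only by the download thread; read by monitoring threads.
    // Deliberately NOT volatile: monitors may see a slightly stale value.
    long position;

    private final Object aLock = new Object();
    private long remaining = 4096; // hypothetical placeholder state

    long check() {
        synchronized (aLock) {
            // real code would compute how many bytes may be read next
            return remaining;
        }
    }

    public static void main(String[] args) {
        DownloadTarget t = new DownloadTarget();
        System.out.println(t.check());
    }
}
```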
According to an answer from a user here on Stack Overflow, when we enter a synchronized block, first of all (case 1) there is a read operation from main memory (described as a read barrier), and at the end of the synchronized block (case 2) there is a write operation to main memory (described as a write barrier).
I understand case 2: all modified variables in the thread's cache are written back to main memory. What I may be getting wrong is case 1: a read operation from main memory that overwrites the thread's cache with the version stored in main memory (main -> cache).
As I mentioned earlier, my position value is not volatile (so it does not read from or write to main memory directly, and the cached value is used instead). If I enter a synchronized block, case 1 occurs. Since the newer position value in the thread's cache may not yet have had a chance to be written to main memory, the (possibly older) main-memory version of position could overwrite the thread's cache, i.e. destroy the newer value by replacing it with the older one retrieved at monitor enter.
Is what I'm thinking true?
Must I declare position as volatile or not?
And please tell me if I'm wrong about what happens in the thread's cache at monitor enter (case 1 above).
Thanks in advance for your guidance.
Part of what you are looking for:
https://docs.oracle.com/javase/specs/jls/se14/html/jls-17.html#jls-17.4.1
Memory that can be shared between threads is called shared memory or heap memory.
All instance fields, static fields, and array elements are stored in heap memory. In this chapter, we use the term variable to refer to both fields and array elements.
Local variables (§14.4), formal method parameters (§8.4.1), and exception handler parameters (§14.20) are never shared between threads and are unaffected by the memory model.
"Unaffected by" here means that they don't need to be synchronized. As long as only one thread sees a variable, it's always fine.
This also helps:
https://docs.oracle.com/javase/specs/jls/se14/html/jls-17.html#jls-17.4.7
The execution obeys intra-thread consistency.
For each thread t, the actions performed by t in A [actions by that thread] are the same as would be generated by that thread in program-order in isolation, with each write w writing the value V(w), given that each read r sees the value V(W(r)). Values seen by each read are determined by the memory model. The program order given must reflect the program order in which the actions would be performed according to the intra-thread semantics of P.
"Actions" here means both reads and writes. So your variable position cannot be updated with strange values because of synchronization: the reads and writes within a single thread of execution happen in the same order the program statements specify, as observed by that thread. The system will not pull strange reads or writes out of cache or main memory out of order.
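That intra-thread guarantee can be seen in a minimal sketch (class name hypothetical): a non-volatile field written by one thread is always read back up to date by that same thread, whatever caching happens underneath:

```java
public class IntraThreadDemo {
    static long position; // deliberately NOT volatile, as in the question

    public static void main(String[] args) {
        // Intra-thread semantics: within this one thread, every read of
        // 'position' sees the latest write this same thread made to it.
        for (int i = 1; i <= 100; i++) {
            position += 1;
            if (position != i) {
                throw new AssertionError("writer thread saw a stale value");
            }
        }
        System.out.println(position);
    }
}
```

What the model does not promise is that a separate monitoring thread reading position will always see the latest value; it may lag behind, which is exactly the trade-off your question says you accept.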