I am trying to log data from multiple vehicles into the same CSV file in VEINS, but logging stops after the second vehicle spawns, i.e., it only works in a single-vehicle scenario.
void MyVeinsApp::handlePositionUpdate(cObject* obj) {
    DemoBaseApplLayer::handlePositionUpdate(obj);
    simtime_t currentTime = simTime();
    double offsetX = 0.0, offsetY = 0.0, offsetZ = 0.0;
    int isOffsetApplied = 0, isAttacker = attackerVehicles.count(vehicleId) ? 1 : 0;
    double currentSpeed = mobility->getSpeed();
    if (isAttacker && offsetDistribution(generator)) {
        offsetX = offsetValue(generator);
        offsetY = offsetValue(generator);
        offsetZ = offsetValue(generator);
        isOffsetApplied = 1;
        currentSpeed /= 2.0;
    }
    vehicleData[vehicleId] = std::make_tuple(curPosition.x + offsetX, curPosition.y + offsetY,
                                             curPosition.z + offsetZ, isOffsetApplied,
                                             isAttacker, currentSpeed);
    logFile << std::fixed << std::setprecision(2) << currentTime.dbl();
    for (int i = 0; i < 50; ++i) {
        if (vehicleData.count(i)) {
            // Vehicle data exists; log it
            auto tupleData = vehicleData[i];
            double x = std::get<0>(tupleData);
            double y = std::get<1>(tupleData);
            double z = std::get<2>(tupleData);
            int offset = std::get<3>(tupleData);
            int attacker = std::get<4>(tupleData);
            double speed = std::get<5>(tupleData);
            logFile << "," << x << "," << y << "," << z << "," << offset << "," << attacker << "," << speed;
        } else {
            // Vehicle data is missing; log blanks
            logFile << ",,,," << "0,0,";
        }
    }
    logFile << std::endl;
}
Could this be due to a race condition, where the application-layer instances of the different vehicles all try to write to the CSV file simultaneously? If so, how can it be fixed?
Yes, this looks like a race condition: multiple vehicles writing to the same file at the same time can lead to corrupted or incomplete output. Below is a thread-safe solution that uses a std::mutex to synchronise file access:
#include <mutex>

class MyVeinsApp : public DemoBaseApplLayer {
private:
    // Add mutex as class member
    static std::mutex logMutex;
    // Other existing class members...

public:
    void handlePositionUpdate(cObject* obj) override {
        DemoBaseApplLayer::handlePositionUpdate(obj);
        simtime_t currentTime = simTime();
        double offsetX = 0.0, offsetY = 0.0, offsetZ = 0.0;
        int isOffsetApplied = 0, isAttacker = attackerVehicles.count(vehicleId) ? 1 : 0;
        double currentSpeed = mobility->getSpeed();
        if (isAttacker && offsetDistribution(generator)) {
            offsetX = offsetValue(generator);
            offsetY = offsetValue(generator);
            offsetZ = offsetValue(generator);
            isOffsetApplied = 1;
            currentSpeed /= 2.0;
        }
        // Update vehicle data
        {
            std::lock_guard<std::mutex> lock(logMutex); // Scope-based lock for vehicle data
            vehicleData[vehicleId] = std::make_tuple(curPosition.x + offsetX, curPosition.y + offsetY,
                                                     curPosition.z + offsetZ, isOffsetApplied,
                                                     isAttacker, currentSpeed);
        }
        // Write to file with mutex protection
        {
            std::lock_guard<std::mutex> lock(logMutex);
            logFile << std::fixed << std::setprecision(2) << currentTime.dbl();
            for (int i = 0; i < 50; ++i) {
                if (vehicleData.count(i)) {
                    // Vehicle data exists; log it
                    auto tupleData = vehicleData[i];
                    double x = std::get<0>(tupleData);
                    double y = std::get<1>(tupleData);
                    double z = std::get<2>(tupleData);
                    int offset = std::get<3>(tupleData);
                    int attacker = std::get<4>(tupleData);
                    double speed = std::get<5>(tupleData);
                    logFile << "," << x << "," << y << "," << z << ","
                            << offset << "," << attacker << "," << speed;
                } else {
                    // Vehicle data is missing; log blanks
                    logFile << ",,,," << "0,0,";
                }
            }
            logFile << std::endl;
            logFile.flush(); // Ensure data is written to disk
        }
    }
};

// Define the static mutex
std::mutex MyVeinsApp::logMutex;
The changes made above are:
- Added a static mutex (logMutex) as a class member to synchronise access across all instances
- Used std::lock_guard for RAII-style locking, ensuring the mutex is always released
- Protected both the vehicleData update and the file-writing operations
- Used separate scoping blocks for the locks to keep each critical section as small as possible
- Added logFile.flush() to ensure data is written to disk immediately
The following is also recommended:
- Consider a buffered approach where data is collected in memory and written out periodically, to reduce I/O overhead
- Add error handling for file operations
- Consider a more efficient data structure than probing 50 sequential IDs
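The first two recommendations can be sketched together. This is a minimal, standalone illustration, not part of the Veins API: the class name BufferedCsvLogger and the maxBufferedRows parameter are made up for the example. Rows accumulate in memory and are written in batches, and flush() reports failure instead of silently dropping data:

```cpp
#include <cassert>
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

// Hypothetical buffered CSV logger: rows are collected in memory and
// flushed to disk in batches, reducing per-update I/O.
class BufferedCsvLogger {
public:
    explicit BufferedCsvLogger(const std::string& path, std::size_t maxBufferedRows = 100)
        : path_(path), maxRows_(maxBufferedRows) {}

    ~BufferedCsvLogger() { flush(); }  // don't lose the last partial batch

    void appendRow(const std::string& row) {
        buffer_.push_back(row);
        if (buffer_.size() >= maxRows_)
            flush();
    }

    // Returns false if the file could not be opened or written.
    bool flush() {
        if (buffer_.empty())
            return true;
        std::ofstream out(path_, std::ios::app);  // append, never truncate
        if (!out)
            return false;
        for (const auto& row : buffer_)
            out << row << '\n';
        buffer_.clear();
        return static_cast<bool>(out);
    }

    std::size_t pendingRows() const { return buffer_.size(); }

private:
    std::string path_;
    std::size_t maxRows_;
    std::vector<std::string> buffer_;
};
```

Calling flush() from the module's finish() method (in addition to the destructor) would make sure the final batch reaches disk when the simulation ends.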
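For the last point: since vehicleData is already a map, iterating it directly avoids probing 50 fixed IDs. The sketch below uses a hypothetical helper buildCsvRow and also writes each vehicle's id, so rows stay self-describing; note this changes the CSV layout from fixed columns to a variable-length row:

```cpp
#include <map>
#include <sstream>
#include <string>
#include <tuple>

// Same tuple layout as the question's vehicleData map:
// (x, y, z, isOffsetApplied, isAttacker, speed)
using VehicleRecord = std::tuple<double, double, double, int, int, double>;

std::string buildCsvRow(double time, const std::map<int, VehicleRecord>& vehicleData) {
    std::ostringstream row;
    row << time;
    // std::map iterates in key order, so output is sorted by vehicle id
    for (const auto& [id, rec] : vehicleData) {
        row << ',' << id
            << ',' << std::get<0>(rec) << ',' << std::get<1>(rec)
            << ',' << std::get<2>(rec) << ',' << std::get<3>(rec)
            << ',' << std::get<4>(rec) << ',' << std::get<5>(rec);
    }
    return row.str();
}
```

Only vehicles that have actually reported data produce output, so the row length grows with the number of active vehicles instead of being padded to 50.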